How an image gets displayed on screen
Displaying an image is a joint effort between the CPU and the GPU:
CPU: decodes the image (video decoding is handled by the GPU)
GPU: texture blending, vertex computation, per-pixel shading and fill, and rendering into the frame buffer
There are two display scenarios:
Normal (on-screen) rendering
The normal pipeline:
- The CPU computes and decodes
- The GPU renders
- The result is written into the frame buffer
- The video controller reads the frame buffer's data, performs digital-to-analog conversion, and displays it on screen line by line
As shown in the figure below:
During rendering, the GPU follows the painter's algorithm:
layers are rendered from back to front, each layer's result is written into the frame buffer, and once the frame buffer's contents have been displayed on screen, that data is discarded. As shown in the figure:
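The back-to-front compositing described above can be sketched conceptually. This is illustrative pseudocode, not UIKit API: `Layer`, `layersSortedBackToFront`, and `frameBuffer` are hypothetical names.

```objc
// Painter's algorithm: composite layers from farthest to nearest.
// A nearer layer is drawn later, so it overwrites whatever farther
// layers already put at the same pixels.
for (Layer *layer in layersSortedBackToFront) {
    [frameBuffer compositeLayer:layer];
}
// Once the frame buffer is scanned out to the display, its contents
// are discarded -- the individual layers can no longer be revisited.
```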
Offscreen rendering
Can the pipeline above satisfy every need?
Here is the problem:
if we want to clip a rendered image, the frame buffer's data has already been discarded, so the individual layers can no longer be operated on. We therefore need to allocate extra memory to hold these intermediate layer results; this newly allocated memory is called the offscreen buffer.
Now each layer's result is no longer written directly into the frame buffer. Instead it is stored in the offscreen buffer; once the clipping (or other) operation is finished, the data is moved from the offscreen buffer into the frame buffer, and the video controller reads it out for display. The pipeline then looks like this:
Constraints of offscreen rendering:
- Space: an offscreen buffer larger than 2.5x the screen's pixel count cannot be used
- Time: if the data is not used within 100 ms, it is destroyed
- Anything that cannot be rendered in a single pass triggers offscreen rendering
Exploring offscreen rendering
How do we check whether our iOS project triggers offscreen rendering? The Simulator has a built-in detector (Color Off-screen Rendered). With it enabled, any region rendered offscreen is highlighted in yellow.
// 1. Button with a background image
UIButton *btn1 = [UIButton buttonWithType:UIButtonTypeCustom];
btn1.frame = CGRectMake(100, 30, 100, 100);
btn1.layer.cornerRadius = 50;
[self.view addSubview:btn1];
[btn1 setImage:[UIImage imageNamed:@"btn.png"] forState:UIControlStateNormal];
btn1.clipsToBounds = YES;

// 2. Button without a background image
UIButton *btn2 = [UIButton buttonWithType:UIButtonTypeCustom];
btn2.frame = CGRectMake(100, 180, 100, 100);
btn2.layer.cornerRadius = 50;
btn2.backgroundColor = [UIColor blueColor];
[self.view addSubview:btn2];
btn2.clipsToBounds = YES;

// 3. UIImageView with both an image and a background color
UIImageView *img1 = [[UIImageView alloc] init];
img1.frame = CGRectMake(100, 320, 100, 100);
img1.backgroundColor = [UIColor blueColor];
[self.view addSubview:img1];
img1.layer.cornerRadius = 50;
img1.layer.masksToBounds = YES;
img1.image = [UIImage imageNamed:@"btn.png"];

// 4. UIImageView with an image only, no background color
UIImageView *img2 = [[UIImageView alloc] init];
img2.frame = CGRectMake(100, 480, 100, 100);
[self.view addSubview:img2];
img2.layer.cornerRadius = 50;
img2.layer.masksToBounds = YES;
img2.image = [UIImage imageNamed:@"btn.png"];
The first button sets an image and is clipped, while the second sets no image. Offscreen rendering is triggered automatically when multiple distinct layers have to be rendered and then operated on together (for example, clipped). Similarly, the first UIImageView sets both an image and a background color, while the second sets no background color, so the first triggers offscreen rendering and the second does not.
Why, then, does the first button trigger offscreen rendering even though it has no background color? Look at UIButton's properties:
@property(nullable, nonatomic,readonly,strong) UILabel *titleLabel API_AVAILABLE(ios(3.0));
@property(nullable, nonatomic,readonly,strong) UIImageView *imageView API_AVAILABLE(ios(3.0));
A button carries an imageView layer of its own, so setting a corner radius together with layer.masksToBounds = YES triggers offscreen rendering.
One thing to note here: cornerRadius only rounds the backgroundColor and the border; it does not round the layer's contents. Only when layer.masksToBounds = YES is set do the contents get rounded as well.
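A minimal sketch of this distinction (the image name btn.png and the frame are placeholders from the earlier example):

```objc
UIImageView *rounded = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
rounded.image = [UIImage imageNamed:@"btn.png"];
// cornerRadius alone rounds only the background color and the border;
// the image contents are still drawn square.
rounded.layer.cornerRadius = 50;
// Only clipping rounds the contents too -- and, combined with a
// background color or multiple sublayers, this is what can trigger
// offscreen rendering.
rounded.layer.masksToBounds = YES;
```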
Can the offscreen buffer grow without bound? When is its data released?
The offscreen buffer has a size limit: per Apple's documentation, it is capped at 2.5x the screen's pixel count.
If the offscreen buffer's data is not used within 100 ms, it is discarded.
Situations that trigger offscreen rendering
- Using a mask (layer.mask)
- Clipping: layer.masksToBounds / view.clipsToBounds
- Group opacity: layer.allowsGroupOpacity set to YES with layer.opacity less than 1
- Shadows (layer.shadow*)
- Rasterization: layer.shouldRasterize
- Layers that draw text (UILabel, CATextLayer, Core Text, etc.)
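As a hedged example for the shadow case: providing an explicit shadowPath lets Core Animation draw the shadow without first compositing the layer offscreen to work out its shape (the frame and colors here are illustrative):

```objc
UIView *card = [[UIView alloc] initWithFrame:CGRectMake(20, 20, 200, 120)];
card.layer.shadowColor = [UIColor blackColor].CGColor;
card.layer.shadowOpacity = 0.3;
card.layer.shadowOffset = CGSizeMake(0, 2);
// Without shadowPath, the renderer must rasterize the layer offscreen
// first to derive the shadow's outline. Supplying the path up front
// avoids that extra pass.
card.layer.shadowPath = [UIBezierPath bezierPathWithRect:card.bounds].CGPath;
```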
Rasterization (shouldRasterize)
/* When true, the layer is rendered as a bitmap in its local coordinate
* space ("rasterized"), then the bitmap is composited into the
* destination (with the minificationFilter and magnificationFilter
* properties of the layer applied if the bitmap needs scaling).
* Rasterization occurs after the layer's filters and shadow effects
* are applied, but before the opacity modulation. As an implementation
* detail the rendering engine may attempt to cache and reuse the
* bitmap from one frame to the next. (Whether it does or not will have
* no affect on the rendered output.)
*
* When false the layer is composited directly into the destination
* whenever possible (however, certain features of the compositing
* model may force rasterization, e.g. adding filters).
*
* Defaults to NO. Animatable. */
@property BOOL shouldRasterize;
Roughly: rasterization happens after the layer's filters and shadow effects are applied; the layer is rendered into a bitmap and stored in the offscreen buffer, and the rendering engine may cache and reuse that bitmap from one frame to the next.
So if a layer is reused and its content does not change frequently (for example, a reused cell whose content stays the same across frames), enabling rasterization can actually improve performance; if the content changes every frame, the cached bitmap is invalidated each time and rasterization only adds cost.
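A minimal sketch of the cell case described above, assuming the cell's content is static between frames:

```objc
// In e.g. -tableView:cellForRowAtIndexPath:, for a cell whose content
// does not change from frame to frame:
cell.layer.shouldRasterize = YES;
// Match the screen scale, or the cached bitmap will look blurry on
// Retina displays.
cell.layer.rasterizationScale = [UIScreen mainScreen].scale;
```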
How YYKit handles rounded corners
- (UIImage *)imageByRoundCornerRadius:(CGFloat)radius
                              corners:(UIRectCorner)corners
                          borderWidth:(CGFloat)borderWidth
                          borderColor:(UIColor *)borderColor
                       borderLineJoin:(CGLineJoin)borderLineJoin {
    // The CGContext below is flipped vertically, so top/bottom corners
    // are mirrored here to compensate.
    if (corners != UIRectCornerAllCorners) {
        UIRectCorner tmp = 0;
        if (corners & UIRectCornerTopLeft) tmp |= UIRectCornerBottomLeft;
        if (corners & UIRectCornerTopRight) tmp |= UIRectCornerBottomRight;
        if (corners & UIRectCornerBottomLeft) tmp |= UIRectCornerTopLeft;
        if (corners & UIRectCornerBottomRight) tmp |= UIRectCornerTopRight;
        corners = tmp;
    }
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect rect = CGRectMake(0, 0, self.size.width, self.size.height);
    // Flip the coordinate system so CGContextDrawImage draws upright.
    CGContextScaleCTM(context, 1, -1);
    CGContextTranslateCTM(context, 0, -rect.size.height);
    CGFloat minSize = MIN(self.size.width, self.size.height);
    if (borderWidth < minSize / 2) {
        // Clip to the rounded-rect path, then draw the image through it.
        UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:CGRectInset(rect, borderWidth, borderWidth) byRoundingCorners:corners cornerRadii:CGSizeMake(radius, borderWidth)];
        [path closePath];
        CGContextSaveGState(context);
        [path addClip];
        CGContextDrawImage(context, rect, self.CGImage);
        CGContextRestoreGState(context);
    }
    if (borderColor && borderWidth < minSize / 2 && borderWidth > 0) {
        // Align the stroke to pixel boundaries so the border stays sharp.
        CGFloat strokeInset = (floor(borderWidth * self.scale) + 0.5) / self.scale;
        CGRect strokeRect = CGRectInset(rect, strokeInset, strokeInset);
        CGFloat strokeRadius = radius > self.scale / 2 ? radius - self.scale / 2 : 0;
        UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:strokeRect byRoundingCorners:corners cornerRadii:CGSizeMake(strokeRadius, borderWidth)];
        [path closePath];
        path.lineWidth = borderWidth;
        path.lineJoinStyle = borderLineJoin;
        [borderColor setStroke];
        [path stroke];
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
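Rounding the bitmap itself this way keeps the work in Core Graphics and avoids clipping the layer, so no offscreen pass is triggered at display time. A hedged usage sketch, assuming this category method is available, avatar.png exists in the bundle, and imageView is an existing UIImageView:

```objc
UIImage *avatar = [UIImage imageNamed:@"avatar.png"];
UIImage *rounded = [avatar imageByRoundCornerRadius:avatar.size.width / 2
                                            corners:UIRectCornerAllCorners
                                        borderWidth:0
                                        borderColor:nil
                                     borderLineJoin:kCGLineJoinMiter];
// Displaying the pre-rounded bitmap needs no masksToBounds, so no
// offscreen rendering is triggered.
imageView.image = rounded;
```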