Overview
GPUImage is a well-known open-source image-processing library that lets you apply GPU-accelerated filters and other effects to images, video, and the camera. Compared with the Core Image framework, GPUImage exposes interfaces that let you plug in your own custom filters. Project page: https://github.com/BradLarson/GPUImage
This article reads through the source of two important classes in the GPUImage framework: GPUImageFramebuffer and GPUImageFramebufferCache. They are the classes that handle framebuffers in GPUImage and form the foundation of every filter. The source files are:
GPUImageFramebuffer
GPUImageFramebufferCache
Preliminaries
- Bitmap creation. When GPUImage turns a framebuffer into an image, it uses the CGImage APIs to build the bitmap object. The relevant API is described below (a short usage sketch follows the declaration):
// A CGImageRef represents a pixel bitmap; an image can be edited by manipulating the stored pixel data
typedef struct CGImage *CGImageRef;
/**
CGImageCreate builds a CGImageRef object.
@param width image width in pixels
@param height image height in pixels
@param bitsPerComponent bits per color component, e.g. 8 for RGBA8888
@param bitsPerPixel total bits per pixel
@param bytesPerRow bytes (note: bytes, not pixels) per row
@param space color space
@param bitmapInfo bitmap pixel-layout flags
@param provider data provider supplying the pixel bytes
@param decode decode array used to remap component values
@param shouldInterpolate whether to interpolate when the image is drawn at a different size
@param intent rendering intent: how colors outside the destination color space are handled
@return the bitmap
*/
CGImageRef CGImageCreate(size_t width,
size_t height,
size_t bitsPerComponent,
size_t bitsPerPixel,
size_t bytesPerRow,
CGColorSpaceRef space,
CGBitmapInfo bitmapInfo,
CGDataProviderRef provider,
const CGFloat *decode,
bool shouldInterpolate,
CGColorRenderingIntent intent)
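As a quick orientation, here is a minimal usage sketch of CGImageCreate that builds a tiny RGBA bitmap by hand; the 4x4 size, the solid-red fill, and the variable names are made up for illustration and are not taken from GPUImage.
// Build a 4x4 opaque red RGBA8888 bitmap and wrap it in a CGImage (illustrative values only)
size_t width = 4, height = 4;
size_t bytesPerPixel = 4;                      // 8 bits per component, RGBA
size_t bytesPerRow = width * bytesPerPixel;
size_t totalBytes = bytesPerRow * height;
UInt8 *pixels = malloc(totalBytes);
for (size_t i = 0; i < totalBytes; i += 4)
{
    pixels[i] = 255; pixels[i + 1] = 0; pixels[i + 2] = 0; pixels[i + 3] = 255; // red, fully opaque
}
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, totalBytes, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef image = CGImageCreate(width, height,
                                 8,            // bitsPerComponent
                                 32,           // bitsPerPixel
                                 bytesPerRow,
                                 colorSpace,
                                 kCGBitmapByteOrderDefault | kCGImageAlphaLast,
                                 provider,
                                 NULL,         // decode
                                 NO,           // shouldInterpolate
                                 kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
// ... draw or export the image, then CGImageRelease(image) and free(pixels) when finished ...
Because no release callback was passed to the data provider here, the caller stays responsible for freeing the pixel buffer after the image is no longer in use; GPUImage instead passes callbacks, as shown later in newCGImageFromFramebufferContents.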
GPUImageFramebuffer
This class is not very complicated; it mainly involves OpenGL ES framebuffers and framebuffer attachments (see OpenGL ES入门12-帧缓存 for background). GPUImageFramebuffer manages a framebuffer and its texture attachment, and the texture attachment in turn carries a set of texture options. Accordingly, the properties it exposes all relate to the framebuffer, the texture attachment, and the texture options.
- Properties
// Framebuffer size
@property(readonly) CGSize size;
// Texture options
@property(readonly) GPUTextureOptions textureOptions;
// Texture attachment (the GL texture name)
@property(readonly) GLuint texture;
// Whether only a texture exists, without a framebuffer
@property(readonly) BOOL missingFramebuffer;
- Initializers
- (id)initWithSize:(CGSize)framebufferSize;
- (id)initWithSize:(CGSize)framebufferSize textureOptions:(GPUTextureOptions)fboTextureOptions onlyTexture:(BOOL)onlyGenerateTexture;
- (id)initWithSize:(CGSize)framebufferSize overriddenTexture:(GLuint)inputTexture;
- (id)initWithSize:(CGSize)framebufferSize;
{
// Default texture options
GPUTextureOptions defaultTextureOptions;
defaultTextureOptions.minFilter = GL_LINEAR;
defaultTextureOptions.magFilter = GL_LINEAR;
defaultTextureOptions.wrapS = GL_CLAMP_TO_EDGE;
defaultTextureOptions.wrapT = GL_CLAMP_TO_EDGE;
defaultTextureOptions.internalFormat = GL_RGBA;
defaultTextureOptions.format = GL_BGRA;
defaultTextureOptions.type = GL_UNSIGNED_BYTE;
// Initialize with the default texture options, forcing both a framebuffer and a texture attachment to be generated
if (!(self = [self initWithSize:framebufferSize textureOptions:defaultTextureOptions onlyTexture:NO]))
{
return nil;
}
return self;
}
- (id)initWithSize:(CGSize)framebufferSize textureOptions:(GPUTextureOptions)fboTextureOptions onlyTexture:(BOOL)onlyGenerateTexture;
{
if (!(self = [super init]))
{
return nil;
}
// Texture options
_textureOptions = fboTextureOptions;
_size = framebufferSize;
framebufferReferenceCount = 0;
referenceCountingDisabled = NO;
// Whether to generate only a texture
_missingFramebuffer = onlyGenerateTexture;
// If only a texture is needed, no framebuffer is generated
if (_missingFramebuffer)
{
runSynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext useImageProcessingContext];
[self generateTexture];
framebuffer = 0;
});
}
// Generate both the texture and the framebuffer
else
{
[self generateFramebuffer];
}
return self;
}
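generateTexture and generateFramebuffer, called above, are private methods that do not appear in the header. As a rough sketch, assuming standard OpenGL ES texture/FBO setup (the shipped implementation additionally has a CoreVideo texture-cache path behind supportsFastTextureUpload, omitted here), they amount to:
- (void)generateTexture;
{
    glActiveTexture(GL_TEXTURE1);
    glGenTextures(1, &_texture);
    glBindTexture(GL_TEXTURE_2D, _texture);
    // Apply the stored texture options; CLAMP_TO_EDGE is required for non-power-of-two textures
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, _textureOptions.minFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, _textureOptions.magFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, _textureOptions.wrapS);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, _textureOptions.wrapT);
}
- (void)generateFramebuffer;
{
    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];
        // Create the framebuffer object and bind it
        glGenFramebuffers(1, &framebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
        // Create the texture, allocate storage for it, and attach it as the color attachment
        [self generateTexture];
        glBindTexture(GL_TEXTURE_2D, _texture);
        glTexImage2D(GL_TEXTURE_2D, 0, _textureOptions.internalFormat, (int)_size.width, (int)_size.height, 0, _textureOptions.format, _textureOptions.type, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _texture, 0);
        NSAssert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE, @"Incomplete framebuffer");
        glBindTexture(GL_TEXTURE_2D, 0);
    });
}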
- Method list. The methods fall into four groups: first, using the current framebuffer; second, GPUImageFramebuffer reference counting; third, producing a bitmap from the framebuffer contents; fourth, reading the framebuffer's raw data.
// Usage
- (void)activateFramebuffer;
// Reference counting
- (void)lock;
- (void)unlock;
- (void)clearAllLocks;
- (void)disableReferenceCounting;
- (void)enableReferenceCounting;
// Image capture
- (CGImageRef)newCGImageFromFramebufferContents;
- (void)restoreRenderTarget;
// Raw data bytes
- (void)lockForReading;
- (void)unlockAfterReading;
- (NSUInteger)bytesPerRow;
- (GLubyte *)byteBuffer;
- (CVPixelBufferRef)pixelBuffer;
- Activation. A framebuffer must first be activated (bound as the current framebuffer) before any operations can target it.
- (void)activateFramebuffer;
{
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
// Set the viewport to the framebuffer size on activation
glViewport(0, 0, (int)_size.width, (int)_size.height);
}
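For context, a filter's render pass typically uses this method in a fetch-activate-draw-unlock sequence. The snippet below is an illustrative flow, not a verbatim excerpt from GPUImageFilter; outputFramebuffer, filterFrameSize, and outputTextureOptions stand in for whatever state the calling filter keeps.
// Fetch (or create) an output framebuffer of the right size from the shared cache
outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:filterFrameSize textureOptions:self.outputTextureOptions onlyTexture:NO];
// Bind it and set the viewport so the following draw calls render into its texture attachment
[outputFramebuffer activateFramebuffer];
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT);
// ... issue the filter's draw calls here ...
// Once no downstream target needs the result, unlock it so it can go back to the cache
[outputFramebuffer unlock];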
- Reference counting. With reference counting enabled, each call to lock increments the reference count by one.
- (void)lock;
{
if (referenceCountingDisabled)
{
return;
}
framebufferReferenceCount++;
}
With reference counting enabled, once the reference count drops below 1, the framebuffer calls returnFramebufferToCache to put itself back into the GPUImageFramebufferCache for later reuse; it is not destroyed. Destruction happens in destroyFramebuffer, a private method that is not declared in the header and is invoked from dealloc (a sketch of this is given after the unlock implementation below).
- (void)unlock;
{
if (referenceCountingDisabled)
{
return;
}
NSAssert(framebufferReferenceCount > 0, @"Tried to overrelease a framebuffer, did you forget to call -useNextFrameForImageCapture before using -imageFromCurrentFramebuffer?");
framebufferReferenceCount--;
if (framebufferReferenceCount < 1)
{
[[GPUImageContext sharedFramebufferCache] returnFramebufferToCache:self];
}
}
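destroyFramebuffer and dealloc are likewise not shown in the header. Based on the description above, the teardown boils down to deleting the GL framebuffer object on the video-processing queue; a rough sketch (details such as releasing the texture or the CoreVideo render target are omitted):
- (void)dealloc
{
    // The only place the underlying GL objects are actually destroyed
    [self destroyFramebuffer];
}
- (void)destroyFramebuffer;
{
    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];
        if (framebuffer)
        {
            glDeleteFramebuffers(1, &framebuffer);
            framebuffer = 0;
        }
        // The shipped implementation also releases the texture or the CoreVideo render target here
    });
}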
Clearing all locks and enabling or disabling reference counting look like this:
- (void)clearAllLocks;
{
framebufferReferenceCount = 0;
}
- (void)disableReferenceCounting;
{
referenceCountingDisabled = YES;
}
- (void)enableReferenceCounting;
{
referenceCountingDisabled = NO;
}
- Producing an image from the framebuffer. When reading the image data back, GPUImage chooses between CVPixelBufferGetBaseAddress and glReadPixels, depending on whether the device supports CoreVideo's fast texture upload. It then calls CGImageCreate to build and return a CGImage object. The release callbacks handed to the data provider are sketched after the listing.
- (CGImageRef)newCGImageFromFramebufferContents;
{
// a CGImage can only be created from a 'normal' color texture
NSAssert(self.textureOptions.internalFormat == GL_RGBA, @"For conversion to a CGImage the output texture format for this filter must be GL_RGBA.");
NSAssert(self.textureOptions.type == GL_UNSIGNED_BYTE, @"For conversion to a CGImage the type of the output texture of this filter must be GL_UNSIGNED_BYTE.");
__block CGImageRef cgImageFromBytes;
// Run synchronously on the video processing queue
runSynchronouslyOnVideoProcessingQueue(^{
// Set up the OpenGL ES context
[GPUImageContext useImageProcessingContext];
// Total image size in bytes = framebuffer width * height * bytes per pixel
NSUInteger totalBytesForImage = (int)_size.width * (int)_size.height * 4;
// It appears that the width of a texture must be padded out to be a multiple of 8 (32 bytes) if reading from it using a texture cache
GLubyte *rawImagePixels;
CGDataProviderRef dataProvider = NULL;
// Check whether CoreVideo's fast texture upload is supported
if ([GPUImageContext supportsFastTextureUpload])
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
// Padded image width = bytes per row / bytes per pixel
NSUInteger paddedWidthOfImage = CVPixelBufferGetBytesPerRow(renderTarget) / 4.0;
// Padded image size = padded width * height * bytes per pixel
NSUInteger paddedBytesForImage = paddedWidthOfImage * (int)_size.height * 4;
// Wait until all OpenGL commands have finished executing (unlike glFlush, glFinish blocks until completion)
glFinish();
CFRetain(renderTarget); // I need to retain the pixel buffer here and release in the data source callback to prevent its bytes from being prematurely deallocated during a photo write operation
[self lockForReading];
rawImagePixels = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
// Create the CGDataProviderRef
dataProvider = CGDataProviderCreateWithData((__bridge_retained void*)self, rawImagePixels, paddedBytesForImage, dataProviderUnlockCallback);
[[GPUImageContext sharedFramebufferCache] addFramebufferToActiveImageCaptureList:self]; // In case the framebuffer is swapped out on the filter, need to have a strong reference to it somewhere for it to hang on while the image is in existence
#else
#endif
}
else
{
// Activate the framebuffer
[self activateFramebuffer];
rawImagePixels = (GLubyte *)malloc(totalBytesForImage);
// Read the pixel data out of the current framebuffer
glReadPixels(0, 0, (int)_size.width, (int)_size.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels);
// Create the CGDataProvider
dataProvider = CGDataProviderCreateWithData(NULL, rawImagePixels, totalBytesForImage, dataProviderReleaseCallback);
// Once the data has been read, the framebuffer no longer needs to be held
[self unlock]; // Don't need to keep this around anymore
}
CGColorSpaceRef defaultRGBColorSpace = CGColorSpaceCreateDeviceRGB();
if ([GPUImageContext supportsFastTextureUpload])
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
// Create the CGImage
cgImageFromBytes = CGImageCreate((int)_size.width, (int)_size.height, 8, 32, CVPixelBufferGetBytesPerRow(renderTarget), defaultRGBColorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst, dataProvider, NULL, NO, kCGRenderingIntentDefault);
#else
#endif
}
else
{
// Create the CGImage
cgImageFromBytes = CGImageCreate((int)_size.width, (int)_size.height, 8, 32, 4 * (int)_size.width, defaultRGBColorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast, dataProvider, NULL, NO, kCGRenderingIntentDefault);
}
// Capture image with current device orientation
// Release the data provider and color space
CGDataProviderRelease(dataProvider);
CGColorSpaceRelease(defaultRGBColorSpace);
});
return cgImageFromBytes;
}
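The two release callbacks handed to CGDataProviderCreateWithData above are where each path cleans up after itself. Reconstructed from the calls in the listing (the exact bodies may differ slightly from the shipped source): the glReadPixels path simply frees the malloc'ed buffer, while the CoreVideo path balances the earlier lockForReading/CFRetain and takes the framebuffer off the active image-capture list.
void dataProviderReleaseCallback(void *info, const void *data, size_t size)
{
    // Non-CoreVideo path: the bytes were malloc'ed, so free them once the CGImage is done with them
    free((void *)data);
}
void dataProviderUnlockCallback(void *info, const void *data, size_t size)
{
    // CoreVideo path: take back ownership of the framebuffer that was bridge-retained into the provider
    GPUImageFramebuffer *framebuffer = (__bridge_transfer GPUImageFramebuffer *)info;
    // Unlock the pixel buffer and release the retained render target, then drop the lock taken for capture
    [framebuffer restoreRenderTarget];
    [framebuffer unlock];
    [[GPUImageContext sharedFramebufferCache] removeFramebufferFromActiveImageCaptureList:framebuffer];
}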
- Accessing the framebuffer's raw data. If the device supports CoreVideo's fast texture path, reading the texture data goes through the methods below; see newCGImageFromFramebufferContents for how they are used.
- (void)lockForReading
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
if ([GPUImageContext supportsFastTextureUpload])
{
if (readLockCount == 0)
{
// CVPixelBufferLockBaseAddress must be called before the pixel data is accessed from the CPU
CVPixelBufferLockBaseAddress(renderTarget, 0);
}
readLockCount++;
}
#endif
}
- (void)unlockAfterReading
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
if ([GPUImageContext supportsFastTextureUpload])
{
NSAssert(readLockCount > 0, @"Unbalanced call to -[GPUImageFramebuffer unlockAfterReading]");
readLockCount--;
if (readLockCount == 0)
{
// CVPixelBufferUnlockBaseAddress must be called once the access is finished
CVPixelBufferUnlockBaseAddress(renderTarget, 0);
}
}
#endif
}
- (NSUInteger)bytesPerRow;
{
if ([GPUImageContext supportsFastTextureUpload])
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
// Bytes per row of the pixel buffer
return CVPixelBufferGetBytesPerRow(renderTarget);
#else
return _size.width * 4; // TODO: do more with this on the non-texture-cache side
#endif
}
else
{
return _size.width * 4;
}
}
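The remaining raw-data accessors, byteBuffer and pixelBuffer, are thin wrappers over the CoreVideo render target. They are not listed in this article; a sketch of what they plausibly look like, following the lock/unlock pattern just shown (an assumption, not a verbatim excerpt):
- (GLubyte *)byteBuffer;
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
    // Lock, grab the base address of the pixel buffer, then unlock again
    [self lockForReading];
    GLubyte *bufferBytes = CVPixelBufferGetBaseAddress(renderTarget);
    [self unlockAfterReading];
    return bufferBytes;
#else
    return NULL; // non-texture-cache side not covered here
#endif
}
- (CVPixelBufferRef)pixelBuffer;
{
    // Expose the underlying CVPixelBuffer directly
    return renderTarget;
}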
GPUImageFramebufferCache
The core responsibility of GPUImageFramebufferCache is managing GPUImageFramebuffer objects.
- Instance variables
@interface GPUImageFramebufferCache()
{
// NSCache *framebufferCache;
// Framebuffer cache dictionary
NSMutableDictionary *framebufferCache;
// Dictionary of per-type framebuffer counts
NSMutableDictionary *framebufferTypeCounts;
// Framebuffers currently held for image capture
NSMutableArray *activeImageCaptureList; // Where framebuffers that may be lost by a filter, but which are still needed for a UIImage, etc., are stored
id memoryWarningObserver;
// Cache queue
dispatch_queue_t framebufferCacheQueue;
}
- Initializer. The initializer sets up the cache containers, creates the cache queue, and subscribes to system memory-warning notifications.
- (id)init;
{
if (!(self = [super init]))
{
return nil;
}
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
__unsafe_unretained __typeof__ (self) weakSelf = self;
// Listen for system memory warnings and purge the cached framebuffers when one arrives.
memoryWarningObserver = [[NSNotificationCenter defaultCenter] addObserverForName:UIApplicationDidReceiveMemoryWarningNotification object:nil queue:nil usingBlock:^(NSNotification *note) {
__typeof__ (self) strongSelf = weakSelf;
if (strongSelf) {
[strongSelf purgeAllUnassignedFramebuffers];
}
}];
#else
#endif
// Initialize the cache containers
// framebufferCache = [[NSCache alloc] init];
framebufferCache = [[NSMutableDictionary alloc] init];
framebufferTypeCounts = [[NSMutableDictionary alloc] init];
activeImageCaptureList = [[NSMutableArray alloc] init];
framebufferCacheQueue = dispatch_queue_create("com.sunsetlakesoftware.GPUImage.framebufferCacheQueue", GPUImageDefaultQueueAttribute());
return self;
}
- Method list. The methods cover looking up a GPUImageFramebuffer in the cache, returning a GPUImageFramebuffer to the cache, purging the cache, and related bookkeeping.
- (GPUImageFramebuffer *)fetchFramebufferForSize:(CGSize)framebufferSize textureOptions:(GPUTextureOptions)textureOptions onlyTexture:(BOOL)onlyTexture;
- (GPUImageFramebuffer *)fetchFramebufferForSize:(CGSize)framebufferSize onlyTexture:(BOOL)onlyTexture;
- (void)returnFramebufferToCache:(GPUImageFramebuffer *)framebuffer;
- (void)purgeAllUnassignedFramebuffers;
- (void)addFramebufferToActiveImageCaptureList:(GPUImageFramebuffer *)framebuffer;
- (void)removeFramebufferFromActiveImageCaptureList:(GPUImageFramebuffer *)framebuffer;
- Looking up a GPUImageFramebuffer by framebufferSize, textureOptions, and onlyTexture. If no matching framebuffer is found in the cache, a new one is created. Framebuffers count as the same type when - (NSString *)hashForSize:(CGSize)size textureOptions:(GPUTextureOptions)textureOptions onlyTexture:(BOOL)onlyTexture produces the same lookupHash for them (a sketch of this hash function appears after the listing below).
- (GPUImageFramebuffer *)fetchFramebufferForSize:(CGSize)framebufferSize textureOptions:(GPUTextureOptions)textureOptions onlyTexture:(BOOL)onlyTexture;
{
__block GPUImageFramebuffer *framebufferFromCache = nil;
// dispatch_sync(framebufferCacheQueue, ^{
runSynchronouslyOnVideoProcessingQueue(^{
// Build the lookup hash string
NSString *lookupHash = [self hashForSize:framebufferSize textureOptions:textureOptions onlyTexture:onlyTexture];
// Number of GPUImageFramebuffers of this type currently in the cache
NSNumber *numberOfMatchingTexturesInCache = [framebufferTypeCounts objectForKey:lookupHash];
NSInteger numberOfMatchingTextures = [numberOfMatchingTexturesInCache integerValue];
// If the cache holds none of this type, create one
if ([numberOfMatchingTexturesInCache integerValue] < 1)
{
// Nothing in the cache, create a new framebuffer to use
framebufferFromCache = [[GPUImageFramebuffer alloc] initWithSize:framebufferSize textureOptions:textureOptions onlyTexture:onlyTexture];
}
else
{
// Something found, pull the old framebuffer and decrement the count
// The cache has some: take the last one; if it turns out to be nil, fall back to the one before it, and so on.
NSInteger currentTextureID = (numberOfMatchingTextures - 1);
while ((framebufferFromCache == nil) && (currentTextureID >= 0))
{
// Build the indexed textureHash key from the count
NSString *textureHash = [NSString stringWithFormat:@"%@-%ld", lookupHash, (long)currentTextureID];
// Look for a GPUImageFramebuffer stored under textureHash
framebufferFromCache = [framebufferCache objectForKey:textureHash];
// Test the values in the cache first, to see if they got invalidated behind our back
if (framebufferFromCache != nil)
{
// Found one: remove it from the cache
// Withdraw this from the cache while it's in use
[framebufferCache removeObjectForKey:textureHash];
}
currentTextureID--;
}
currentTextureID++;
// Update the count of this framebuffer type in framebufferTypeCounts
[framebufferTypeCounts setObject:[NSNumber numberWithInteger:currentTextureID] forKey:lookupHash];
// If still nothing was found, create a new one
if (framebufferFromCache == nil)
{
framebufferFromCache = [[GPUImageFramebuffer alloc] initWithSize:framebufferSize textureOptions:textureOptions onlyTexture:onlyTexture];
}
}
});
// Increment the reference count and return
[framebufferFromCache lock];
return framebufferFromCache;
}
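hashForSize:textureOptions:onlyTexture: itself is not reproduced in this article. Conceptually it folds the size, every texture option, and the onlyTexture flag into one string key; a sketch along those lines follows (illustrative; the exact format string in GPUImage may differ):
- (NSString *)hashForSize:(CGSize)size textureOptions:(GPUTextureOptions)textureOptions onlyTexture:(BOOL)onlyTexture;
{
    // Texture-only framebuffers get a distinct suffix so they never match full framebuffers with the same size and options
    if (onlyTexture)
    {
        return [NSString stringWithFormat:@"%.1fx%.1f-%d:%d:%d:%d:%d:%d:%d-NOFB", size.width, size.height, textureOptions.minFilter, textureOptions.magFilter, textureOptions.wrapS, textureOptions.wrapT, textureOptions.internalFormat, textureOptions.format, textureOptions.type];
    }
    return [NSString stringWithFormat:@"%.1fx%.1f-%d:%d:%d:%d:%d:%d:%d", size.width, size.height, textureOptions.minFilter, textureOptions.magFilter, textureOptions.wrapS, textureOptions.wrapT, textureOptions.internalFormat, textureOptions.format, textureOptions.type];
}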
- Looking up a GPUImageFramebuffer by framebufferSize and onlyTexture with the default GPUTextureOptions. If no match is found, a new one is created.
- (GPUImageFramebuffer *)fetchFramebufferForSize:(CGSize)framebufferSize onlyTexture:(BOOL)onlyTexture;
{
GPUTextureOptions defaultTextureOptions;
defaultTextureOptions.minFilter = GL_LINEAR;
defaultTextureOptions.magFilter = GL_LINEAR;
defaultTextureOptions.wrapS = GL_CLAMP_TO_EDGE;
defaultTextureOptions.wrapT = GL_CLAMP_TO_EDGE;
defaultTextureOptions.internalFormat = GL_RGBA;
defaultTextureOptions.format = GL_BGRA;
defaultTextureOptions.type = GL_UNSIGNED_BYTE;
return [self fetchFramebufferForSize:framebufferSize textureOptions:defaultTextureOptions onlyTexture:onlyTexture];
}
- Returning a framebuffer to the cache. The cache key is built from size, textureOptions, and onlyTexture. In framebufferTypeCounts the key is the bare lookupHash, without a count; in framebufferCache the key is the lookupHash plus a count, so that framebuffers of the same type do not overwrite one another.
- (void)returnFramebufferToCache:(GPUImageFramebuffer *)framebuffer;
{
// Clear all locks
[framebuffer clearAllLocks];
// dispatch_async(framebufferCacheQueue, ^{
runAsynchronouslyOnVideoProcessingQueue(^{
CGSize framebufferSize = framebuffer.size;
GPUTextureOptions framebufferTextureOptions = framebuffer.textureOptions;
// Build the lookup hash string
NSString *lookupHash = [self hashForSize:framebufferSize textureOptions:framebufferTextureOptions onlyTexture:framebuffer.missingFramebuffer];
// Number of cached framebuffers of this type
NSNumber *numberOfMatchingTexturesInCache = [framebufferTypeCounts objectForKey:lookupHash];
NSInteger numberOfMatchingTextures = [numberOfMatchingTexturesInCache integerValue];
// When storing framebuffers of the same type in framebufferCache, the key is the lookupHash plus the current count, so framebuffers of the same type do not overwrite each other.
NSString *textureHash = [NSString stringWithFormat:@"%@-%ld", lookupHash, (long)numberOfMatchingTextures];
// [framebufferCache setObject:framebuffer forKey:textureHash cost:round(framebufferSize.width * framebufferSize.height * 4.0)];
[framebufferCache setObject:framebuffer forKey:textureHash];
// Keys in framebufferTypeCounts do not include the count
[framebufferTypeCounts setObject:[NSNumber numberWithInteger:(numberOfMatchingTextures + 1)] forKey:lookupHash];
});
}
- Purging the cache when a memory warning arrives.
- (void)purgeAllUnassignedFramebuffers;
{
runAsynchronouslyOnVideoProcessingQueue(^{
// dispatch_async(framebufferCacheQueue, ^{
[framebufferCache removeAllObjects];
[framebufferTypeCounts removeAllObjects];
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
CVOpenGLESTextureCacheFlush([[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache], 0);
#else
#endif
});
}
- Holding and releasing framebuffers during image capture. While image data is being read out of a framebuffer, a strong reference to the GPUImageFramebuffer must be kept, and it must be released once the read is finished. See newCGImageFromFramebufferContents in GPUImageFramebuffer for details.
- (void)addFramebufferToActiveImageCaptureList:(GPUImageFramebuffer *)framebuffer;
{
runAsynchronouslyOnVideoProcessingQueue(^{
// dispatch_async(framebufferCacheQueue, ^{
[activeImageCaptureList addObject:framebuffer];
});
}
- (void)removeFramebufferFromActiveImageCaptureList:(GPUImageFramebuffer *)framebuffer;
{
runAsynchronouslyOnVideoProcessingQueue(^{
// dispatch_async(framebufferCacheQueue, ^{
[activeImageCaptureList removeObject:framebuffer];
});
}
Summary
GPUImageFramebuffer wraps the OpenGL ES framebuffer, its texture attachment, and related machinery.
GPUImageFramebufferCache manages GPUImageFramebuffer objects so that they can be reused.
Annotated source: GPUImage源码阅读系列, https://github.com/QinminiOS/GPUImage
Series index: GPUImage源码阅读, http://www.jianshu.com/nb/11749791