Overview
GPUImage is a well-known open-source image processing library that lets you apply GPU-accelerated filters and other effects to images, video, and the live camera feed. Compared with the CoreImage framework, GPUImage's API also makes it straightforward to plug in your own custom filters. Project page: https://github.com/BradLarson/GPUImage
This article walks through the source code of the GPUImageVideoCamera, GPUImageStillCamera, GPUImageMovieWriter, and GPUImageMovie classes in the GPUImage framework. In GPUImage源码阅读(四) (part four of this series) the data sources were images and rendered UI; this article covers data coming from the camera and from audio/video files. As before, GPUImageView is used to display the result, and GPUImageMovieWriter is used to save the recorded audio and video to a file. The classes covered are:
- GPUImageVideoCamera
- GPUImageStillCamera
- GPUImageMovieWriter
- GPUImageMovie
Demo results
- Recording video
- Taking photos
- Video transcoding with filters
GPUImageVideoCamera
GPUImageVideoCamera inherits from GPUImageOutput and implements the AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate protocols. It drives the camera to capture video; each captured frame becomes a framebuffer object that we can display with GPUImageView or save to a movie file with GPUImageMovieWriter. It also exposes a GPUImageVideoCameraDelegate so that we can work with the raw CMSampleBuffer ourselves (a short sketch of this follows the pixel-format note below). When processing video, the following pixel formats come up:
kCVPixelFormatType_32BGRA
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
These pixel formats were covered in detail in the earlier article OpenGL ES入门11-相机视频渲染, so they are not repeated here; refer to that article if needed.
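As a side note on the GPUImageVideoCameraDelegate mentioned above: the protocol hands you each raw CMSampleBuffer through a single optional callback. Below is a minimal sketch of a conforming handler (assuming the callback is named willOutputSampleBuffer: and that GPUImage.h is imported; the logging is purely illustrative):
@interface MyCameraHandler : NSObject <GPUImageVideoCameraDelegate>
@end
@implementation MyCameraHandler
// Called for every captured video frame
- (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    NSLog(@"Captured frame: %zu x %zu", CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer));
}
@end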
- Property list. Most of the properties are camera-related parameters; a short usage sketch follows the list.
// Whether the AVCaptureSession is running
@property(readonly, nonatomic) BOOL isRunning;
// The AVCaptureSession object
@property(readonly, retain, nonatomic) AVCaptureSession *captureSession;
// Controls the output quality/size, e.g. AVCaptureSessionPreset640x480
@property (readwrite, nonatomic, copy) NSString *captureSessionPreset;
// Video frame rate
@property (readwrite) int32_t frameRate;
// Which camera is in use
@property (readonly, getter = isFrontFacingCameraPresent) BOOL frontFacingCameraPresent;
@property (readonly, getter = isBackFacingCameraPresent) BOOL backFacingCameraPresent;
// Real-time benchmark logging
@property(readwrite, nonatomic) BOOL runBenchmark;
// The capture device in use, handy for configuring camera parameters
@property(readonly) AVCaptureDevice *inputCamera;
// Orientation of the output image
@property(readwrite, nonatomic) UIInterfaceOrientation outputImageOrientation;
// Horizontal mirroring for the front/rear camera
@property(readwrite, nonatomic) BOOL horizontallyMirrorFrontFacingCamera, horizontallyMirrorRearFacingCamera;
// The GPUImageVideoCameraDelegate
@property(nonatomic, assign) id<GPUImageVideoCameraDelegate> delegate;
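As a quick illustration of these properties, a typical configuration might look like the following (the values are arbitrary; the camera object comes from the initializer described next):
GPUImageVideoCamera *camera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
camera.outputImageOrientation = UIInterfaceOrientationPortrait;  // rotate output to match the UI orientation
camera.horizontallyMirrorFrontFacingCamera = YES;                // mirror the front camera like the system camera app
camera.frameRate = 30;                                           // cap capture at 30 fps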
- Initializer.
- (id)initWithSessionPreset:(NSString *)sessionPreset cameraPosition:(AVCaptureDevicePosition)cameraPosition;
GPUImageVideoCamera has few initializers; you pass in the session preset (video quality) and which camera to use. Calling - (instancetype)init directly initializes with AVCaptureSessionPreset640x480 and AVCaptureDevicePositionBack, as sketched below.
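In other words, the convenience -init presumably just forwards those defaults to the designated initializer, roughly:
- (id)init
{
    // Sketch: default to 640x480 and the back-facing camera
    if (!(self = [self initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack]))
    {
        return nil;
    }
    return self;
}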
- (id)initWithSessionPreset:(NSString *)sessionPreset cameraPosition:(AVCaptureDevicePosition)cameraPosition;
{
if (!(self = [super init]))
{
return nil;
}
// Create the video and audio processing queues
cameraProcessingQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH,0);
audioProcessingQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW,0);
// Create the frame-rendering semaphore
frameRenderingSemaphore = dispatch_semaphore_create(1);
// Initialize instance variables
_frameRate = 0; // This will not set frame rate unless this value gets set to 1 or above
_runBenchmark = NO;
capturePaused = NO;
outputRotation = kGPUImageNoRotation;
internalRotation = kGPUImageNoRotation;
captureAsYUV = YES;
_preferredConversion = kColorConversion709;
// Find the camera matching the requested position
_inputCamera = nil;
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices)
{
if ([device position] == cameraPosition)
{
_inputCamera = device;
}
}
// Bail out immediately if no camera was found
if (!_inputCamera) {
return nil;
}
// Create the capture session
_captureSession = [[AVCaptureSession alloc] init];
// Begin configuration
[_captureSession beginConfiguration];
// Create the video input
NSError *error = nil;
videoInput = [[AVCaptureDeviceInput alloc] initWithDevice:_inputCamera error:&error];
if ([_captureSession canAddInput:videoInput])
{
[_captureSession addInput:videoInput];
}
// Create the video output
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:NO];
// if (captureAsYUV && [GPUImageContext deviceSupportsRedTextures])
// Configure how YUV is handled
if (captureAsYUV && [GPUImageContext supportsFastTextureUpload])
{
BOOL supportsFullYUVRange = NO;
NSArray *supportedPixelFormats = videoOutput.availableVideoCVPixelFormatTypes;
for (NSNumber *currentPixelFormat in supportedPixelFormats)
{
if ([currentPixelFormat intValue] == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
{
supportsFullYUVRange = YES;
}
}
if (supportsFullYUVRange)
{
// Use the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
isFullYUVRange = YES;
}
else
{
// Use the kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange format
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
isFullYUVRange = NO;
}
}
else
{
// Use the kCVPixelFormatType_32BGRA format
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
}
// Create the GL program and look up attribute/uniform locations
runSynchronouslyOnVideoProcessingQueue(^{
if (captureAsYUV)
{
[GPUImageContext useImageProcessingContext];
// if ([GPUImageContext deviceSupportsRedTextures])
// {
// yuvConversionProgram = [[GPUImageContext sharedImageProcessingContext] programForVertexShaderString:kGPUImageVertexShaderString fragmentShaderString:kGPUImageYUVVideoRangeConversionForRGFragmentShaderString];
// }
// else
// {
if (isFullYUVRange)
{
yuvConversionProgram = [[GPUImageContext sharedImageProcessingContext] programForVertexShaderString:kGPUImageVertexShaderString fragmentShaderString:kGPUImageYUVFullRangeConversionForLAFragmentShaderString];
}
else
{
yuvConversionProgram = [[GPUImageContext sharedImageProcessingContext] programForVertexShaderString:kGPUImageVertexShaderString fragmentShaderString:kGPUImageYUVVideoRangeConversionForLAFragmentShaderString];
}
// }
if (!yuvConversionProgram.initialized)
{
[yuvConversionProgram addAttribute:@"position"];
[yuvConversionProgram addAttribute:@"inputTextureCoordinate"];
if (![yuvConversionProgram link])
{
NSString *progLog = [yuvConversionProgram programLog];
NSLog(@"Program link log: %@", progLog);
NSString *fragLog = [yuvConversionProgram fragmentShaderLog];
NSLog(@"Fragment shader compile log: %@", fragLog);
NSString *vertLog = [yuvConversionProgram vertexShaderLog];
NSLog(@"Vertex shader compile log: %@", vertLog);
yuvConversionProgram = nil;
NSAssert(NO, @"Filter shader link failed");
}
}
yuvConversionPositionAttribute = [yuvConversionProgram attributeIndex:@"position"];
yuvConversionTextureCoordinateAttribute = [yuvConversionProgram attributeIndex:@"inputTextureCoordinate"];
yuvConversionLuminanceTextureUniform = [yuvConversionProgram uniformIndex:@"luminanceTexture"];
yuvConversionChrominanceTextureUniform = [yuvConversionProgram uniformIndex:@"chrominanceTexture"];
yuvConversionMatrixUniform = [yuvConversionProgram uniformIndex:@"colorConversionMatrix"];
[GPUImageContext setActiveShaderProgram:yuvConversionProgram];
glEnableVertexAttribArray(yuvConversionPositionAttribute);
glEnableVertexAttribArray(yuvConversionTextureCoordinateAttribute);
}
});
// Set the AVCaptureVideoDataOutputSampleBufferDelegate
[videoOutput setSampleBufferDelegate:self queue:cameraProcessingQueue];
// Add the output
if ([_captureSession canAddOutput:videoOutput])
{
[_captureSession addOutput:videoOutput];
}
else
{
NSLog(@"Couldn't add video output");
return nil;
}
// Set the session preset (video quality)
_captureSessionPreset = sessionPreset;
[_captureSession setSessionPreset:_captureSessionPreset];
// This will let you get 60 FPS video from the 720p preset on an iPhone 4S, but only that device and that preset
// AVCaptureConnection *conn = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
//
// if (conn.supportsVideoMinFrameDuration)
// conn.videoMinFrameDuration = CMTimeMake(1,60);
// if (conn.supportsVideoMaxFrameDuration)
// conn.videoMaxFrameDuration = CMTimeMake(1,60);
// Commit the configuration
[_captureSession commitConfiguration];
return self;
}
- Other methods. GPUImageVideoCamera's methods fall roughly into these groups: 1. adding and removing inputs/outputs; 2. capture control; 3. audio/video processing; 4. camera parameter queries.
// Add / remove the audio input and output
- (BOOL)addAudioInputsAndOutputs;
- (BOOL)removeAudioInputsAndOutputs;
// Remove all inputs and outputs
- (void)removeInputsAndOutputs;
// Start, stop, pause, and resume camera capture
- (void)startCameraCapture;
- (void)stopCameraCapture;
- (void)pauseCameraCapture;
- (void)resumeCameraCapture;
// Process video and audio sample buffers
- (void)processVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer;
- (void)processAudioSampleBuffer:(CMSampleBufferRef)sampleBuffer;
// Query camera-related parameters
- (AVCaptureDevicePosition)cameraPosition;
- (AVCaptureConnection *)videoCaptureConnection;
+ (BOOL)isBackFacingCameraPresent;
+ (BOOL)isFrontFacingCameraPresent;
// Switch between the front and back cameras
- (void)rotateCamera;
// Average frame processing time during capture
- (CGFloat)averageFrameDurationDuringCapture;
// Reset the benchmark counters
- (void)resetBenchmarkAverage;
Although GPUImageVideoCamera has quite a few methods, their internal logic is not complicated; a brief usage sketch follows.
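Here is how the capture-control API above is typically driven (illustrative only):
[camera startCameraCapture];   // starts the underlying AVCaptureSession
[camera pauseCameraCapture];   // the session keeps running, but incoming frames are ignored
[camera resumeCameraCapture];
[camera rotateCamera];         // switch between the front and back cameras
[camera stopCameraCapture];    // stops the AVCaptureSession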
// Add the audio input and output
- (BOOL)addAudioInputsAndOutputs
{
if (audioOutput)
return NO;
[_captureSession beginConfiguration];
_microphone = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
audioInput = [AVCaptureDeviceInput deviceInputWithDevice:_microphone error:nil];
if ([_captureSession canAddInput:audioInput])
{
[_captureSession addInput:audioInput];
}
audioOutput = [[AVCaptureAudioDataOutput alloc] init];
if ([_captureSession canAddOutput:audioOutput])
{
[_captureSession addOutput:audioOutput];
}
else
{
NSLog(@"Couldn't add audio output");
}
[audioOutput setSampleBufferDelegate:self queue:audioProcessingQueue];
[_captureSession commitConfiguration];
return YES;
}
// Remove the audio input and output
- (BOOL)removeAudioInputsAndOutputs
{
if (!audioOutput)
return NO;
[_captureSession beginConfiguration];
[_captureSession removeInput:audioInput];
[_captureSession removeOutput:audioOutput];
audioInput = nil;
audioOutput = nil;
_microphone = nil;
[_captureSession commitConfiguration];
return YES;
}
// Remove all inputs and outputs
- (void)removeInputsAndOutputs;
{
[_captureSession beginConfiguration];
if (videoInput) {
[_captureSession removeInput:videoInput];
[_captureSession removeOutput:videoOutput];
videoInput = nil;
videoOutput = nil;
}
if (_microphone != nil)
{
[_captureSession removeInput:audioInput];
[_captureSession removeOutput:audioOutput];
audioInput = nil;
audioOutput = nil;
_microphone = nil;
}
[_captureSession commitConfiguration];
}
// Start capturing
- (void)startCameraCapture;
{
if (![_captureSession isRunning])
{
startingCaptureTime = [NSDate date];
[_captureSession startRunning];
};
}
// Stop capturing
- (void)stopCameraCapture;
{
if ([_captureSession isRunning])
{
[_captureSession stopRunning];
}
}
// Pause capturing
- (void)pauseCameraCapture;
{
capturePaused = YES;
}
// Resume capturing
- (void)resumeCameraCapture;
{
capturePaused = NO;
}
// Process a video sample buffer
- (void)processVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer;
{
if (capturePaused)
{
return;
}
CFAbsoluteTime startTime = CFAbsoluteTimeGetCurrent();
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
// Get the frame width and height
int bufferWidth = (int) CVPixelBufferGetWidth(cameraFrame);
int bufferHeight = (int) CVPixelBufferGetHeight(cameraFrame);
CFTypeRef colorAttachments = CVBufferGetAttachment(cameraFrame, kCVImageBufferYCbCrMatrixKey, NULL);
if (colorAttachments != NULL)
{
if(CFStringCompare(colorAttachments, kCVImageBufferYCbCrMatrix_ITU_R_601_4, 0) == kCFCompareEqualTo)
{
if (isFullYUVRange)
{
_preferredConversion = kColorConversion601FullRange;
}
else
{
_preferredConversion = kColorConversion601;
}
}
else
{
_preferredConversion = kColorConversion709;
}
}
else
{
if (isFullYUVRange)
{
_preferredConversion = kColorConversion601FullRange;
}
else
{
_preferredConversion = kColorConversion601;
}
}
CMTime currentTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
[GPUImageContext useImageProcessingContext];
// Fast YUV texture upload path
if ([GPUImageContext supportsFastTextureUpload] && captureAsYUV)
{
CVOpenGLESTextureRef luminanceTextureRef = NULL;
CVOpenGLESTextureRef chrominanceTextureRef = NULL;
// if (captureAsYUV && [GPUImageContext deviceSupportsRedTextures])
if (CVPixelBufferGetPlaneCount(cameraFrame) > 0) // Check for YUV planar inputs to do RGB conversion
{
CVPixelBufferLockBaseAddress(cameraFrame, 0);
if ( (imageBufferWidth != bufferWidth) && (imageBufferHeight != bufferHeight) )
{
imageBufferWidth = bufferWidth;
imageBufferHeight = bufferHeight;
}
CVReturn err;
// Y (luminance) plane
glActiveTexture(GL_TEXTURE4);
if ([GPUImageContext deviceSupportsRedTextures])
{
// err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, cameraFrame, NULL, GL_TEXTURE_2D, GL_RED_EXT, bufferWidth, bufferHeight, GL_RED_EXT, GL_UNSIGNED_BYTE, 0, &luminanceTextureRef);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, [[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache], cameraFrame, NULL, GL_TEXTURE_2D, GL_LUMINANCE, bufferWidth, bufferHeight, GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &luminanceTextureRef);
}
else
{
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, [[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache], cameraFrame, NULL, GL_TEXTURE_2D, GL_LUMINANCE, bufferWidth, bufferHeight, GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &luminanceTextureRef);
}
if (err)
{
NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
}
luminanceTexture = CVOpenGLESTextureGetName(luminanceTextureRef);
glBindTexture(GL_TEXTURE_2D, luminanceTexture);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// UV (chrominance) plane, at half the width and height
glActiveTexture(GL_TEXTURE5);
if ([GPUImageContext deviceSupportsRedTextures])
{
// err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, cameraFrame, NULL, GL_TEXTURE_2D, GL_RG_EXT, bufferWidth/2, bufferHeight/2, GL_RG_EXT, GL_UNSIGNED_BYTE, 1, &chrominanceTextureRef);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, [[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache], cameraFrame, NULL, GL_TEXTURE_2D, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 1, &chrominanceTextureRef);
}
else
{
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, [[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache], cameraFrame, NULL, GL_TEXTURE_2D, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 1, &chrominanceTextureRef);
}
if (err)
{
NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
}
chrominanceTexture = CVOpenGLESTextureGetName(chrominanceTextureRef);
glBindTexture(GL_TEXTURE_2D, chrominanceTexture);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// if (!allTargetsWantMonochromeData)
// {
[self convertYUVToRGBOutput];
// }
int rotatedImageBufferWidth = bufferWidth, rotatedImageBufferHeight = bufferHeight;
if (GPUImageRotationSwapsWidthAndHeight(internalRotation))
{
rotatedImageBufferWidth = bufferHeight;
rotatedImageBufferHeight = bufferWidth;
}
[self updateTargetsForVideoCameraUsingCacheTextureAtWidth:rotatedImageBufferWidth height:rotatedImageBufferHeight time:currentTime];
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
CFRelease(luminanceTextureRef);
CFRelease(chrominanceTextureRef);
}
else
{
// TODO: Mesh this with the output framebuffer structure
// CVPixelBufferLockBaseAddress(cameraFrame, 0);
//
// CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, [[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache], cameraFrame, NULL, GL_TEXTURE_2D, GL_RGBA, bufferWidth, bufferHeight, GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);
//
// if (!texture || err) {
// NSLog(@"Camera CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
// NSAssert(NO, @"Camera failure");
// return;
// }
//
// outputTexture = CVOpenGLESTextureGetName(texture);
// // glBindTexture(CVOpenGLESTextureGetTarget(texture), outputTexture);
// glBindTexture(GL_TEXTURE_2D, outputTexture);
// glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
// glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
//
// [self updateTargetsForVideoCameraUsingCacheTextureAtWidth:bufferWidth height:bufferHeight time:currentTime];
//
// CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
// CFRelease(texture);
//
// outputTexture = 0;
}
// Benchmark frame timing
if (_runBenchmark)
{
numberOfFramesCaptured++;
if (numberOfFramesCaptured > INITIALFRAMESTOIGNOREFORBENCHMARK)
{
CFAbsoluteTime currentFrameTime = (CFAbsoluteTimeGetCurrent() - startTime);
totalFrameTimeDuringCapture += currentFrameTime;
NSLog(@"Average frame time : %f ms", [self averageFrameDurationDuringCapture]);
NSLog(@"Current frame time : %f ms", 1000.0 * currentFrameTime);
}
}
}
else
{
// Lock the base address
CVPixelBufferLockBaseAddress(cameraFrame, 0);
// Bytes per row (width * 4)
int bytesPerRow = (int) CVPixelBufferGetBytesPerRow(cameraFrame);
// Fetch a framebuffer from the cache
outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:CGSizeMake(bytesPerRow / 4, bufferHeight) onlyTexture:YES];
[outputFramebuffer activateFramebuffer];
// Bind the texture
glBindTexture(GL_TEXTURE_2D, [outputFramebuffer texture]);
// glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));
// Using BGRA extension to pull in video frame data directly
// The use of bytesPerRow / 4 accounts for a display glitch present in preview video frames when using the photo preset on the camera
// Convert BGRA to RGBA
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bytesPerRow / 4, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));
[self updateTargetsForVideoCameraUsingCacheTextureAtWidth:bytesPerRow / 4 height:bufferHeight time:currentTime];
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
// Update the benchmark frame timing
if (_runBenchmark)
{
numberOfFramesCaptured++;
if (numberOfFramesCaptured > INITIALFRAMESTOIGNOREFORBENCHMARK)
{
CFAbsoluteTime currentFrameTime = (CFAbsoluteTimeGetCurrent() - startTime);
totalFrameTimeDuringCapture += currentFrameTime;
}
}
}
}
// Process audio
- (void)processAudioSampleBuffer:(CMSampleBufferRef)sampleBuffer;
{
[self.audioEncodingTarget processAudioBuffer:sampleBuffer];
}
- Note: processAudioSampleBuffer simply hands the audio buffer to the audioEncodingTarget. So if you want sound in the recorded video, you must set audioEncodingTarget as sketched below; otherwise the recorded movie will have no audio.
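A minimal sketch of the wiring (camera and writer setup omitted; see the full recording example later in this article):
// Without this line the recorded movie will contain no audio track
videoCamera.audioEncodingTarget = movieWriter;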
- Texture formats and color mapping
Base format | Texel data
:----:|:---:
GL_RED | (R, 0.0, 0.0, 1.0)
GL_RG | (R, G, 0.0, 1.0)
GL_RGB | (R, G, B, 1.0)
GL_RGBA | (R, G, B, A)
GL_LUMINANCE | (L, L, L, 1.0)
GL_LUMINANCE_ALPHA | (L, L, L, A)
GL_ALPHA | (0.0, 0.0, 0.0, A)
The table above shows why GPUImage uses the GL_LUMINANCE internal format for the Y plane and GL_LUMINANCE_ALPHA for the interleaved UV plane.
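For reference, this is roughly what the full-range LA-based YUV conversion shader looks like (a reconstructed sketch using GPUImage's SHADER_STRING macro, not copied verbatim from the framework): the Y texture is sampled through the r channel, the interleaved CbCr texture through the r and a channels, and the result is multiplied by colorConversionMatrix.
NSString *const kYUVFullRangeConversionFragmentShaderSketch = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 uniform sampler2D luminanceTexture;
 uniform sampler2D chrominanceTexture;
 uniform mediump mat3 colorConversionMatrix;
 void main()
 {
     mediump vec3 yuv;
     yuv.x = texture2D(luminanceTexture, textureCoordinate).r;
     yuv.yz = texture2D(chrominanceTexture, textureCoordinate).ra - vec2(0.5, 0.5);
     gl_FragColor = vec4(colorConversionMatrix * yuv, 1.0);
 }
);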
GPUImageStillCamera
GPUImageStillCamera is used for taking photos. It inherits from GPUImageVideoCamera, so in addition to everything GPUImageVideoCamera can do, it provides a rich set of photo-capture APIs that make still-photo operations convenient.
- Property list. GPUImageStillCamera has few properties, and they are all image-related.
// JPEG compression quality, 0.8 by default
@property CGFloat jpegCompressionQuality;
// Metadata of the captured image
@property (readonly) NSDictionary *currentCaptureMetadata;
- Method list. The methods are all about capturing photos, and the output types are varied: CMSampleBuffer, UIImage, NSData, and so on. If you want a filtered result, pass in the final filter in the chain (finalFilterInChain).
- (void)capturePhotoAsSampleBufferWithCompletionHandler:(void (^)(CMSampleBufferRef imageSampleBuffer, NSError *error))block;
- (void)capturePhotoAsImageProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withCompletionHandler:(void (^)(UIImage *processedImage, NSError *error))block;
- (void)capturePhotoAsImageProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withOrientation:(UIImageOrientation)orientation withCompletionHandler:(void (^)(UIImage *processedImage, NSError *error))block;
- (void)capturePhotoAsJPEGProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withCompletionHandler:(void (^)(NSData *processedJPEG, NSError *error))block;
- (void)capturePhotoAsJPEGProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withOrientation:(UIImageOrientation)orientation withCompletionHandler:(void (^)(NSData *processedJPEG, NSError *error))block;
- (void)capturePhotoAsPNGProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withCompletionHandler:(void (^)(NSData *processedPNG, NSError *error))block;
- (void)capturePhotoAsPNGProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withOrientation:(UIImageOrientation)orientation withCompletionHandler:(void (^)(NSData *processedPNG, NSError *error))block;
Although the API surface is rich, every variant ultimately calls the private method - (void)capturePhotoProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withImageOnGPUHandler:(void (^)(NSError *error))block, so that is the one to focus on; a short usage sketch follows its listing.
- (void)capturePhotoAsJPEGProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withOrientation:(UIImageOrientation)orientation withCompletionHandler:(void (^)(NSData *processedImage, NSError *error))block {
// Call the private method to render the photo into a framebuffer
[self capturePhotoProcessedUpToFilter:finalFilterInChain withImageOnGPUHandler:^(NSError *error) {
NSData *dataForJPEGFile = nil;
if(!error) {
@autoreleasepool {
// Read the framebuffer back into a UIImage
UIImage *filteredPhoto = [finalFilterInChain imageFromCurrentFramebufferWithOrientation:orientation];
dispatch_semaphore_signal(frameRenderingSemaphore);
// Turn the UIImage into JPEG NSData
dataForJPEGFile = UIImageJPEGRepresentation(filteredPhoto, self.jpegCompressionQuality);
}
} else {
dispatch_semaphore_signal(frameRenderingSemaphore);
}
block(dataForJPEGFile, error);
}];
}
- (void)capturePhotoProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withImageOnGPUHandler:(void (^)(NSError *error))block
{
// Wait on the frame-rendering semaphore
dispatch_semaphore_wait(frameRenderingSemaphore, DISPATCH_TIME_FOREVER);
// Bail out if a still-image capture is already in progress
if(photoOutput.isCapturingStillImage){
block([NSError errorWithDomain:AVFoundationErrorDomain code:AVErrorMaximumStillImageCaptureRequestsExceeded userInfo:nil]);
return;
}
// Capture a still image asynchronously
[photoOutput captureStillImageAsynchronouslyFromConnection:[[photoOutput connections] objectAtIndex:0] completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
if(imageSampleBuffer == NULL){
block(error);
return;
}
// For now, resize photos to fit within the max texture size of the GPU
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);
// Get the image size
CGSize sizeOfPhoto = CGSizeMake(CVPixelBufferGetWidth(cameraFrame), CVPixelBufferGetHeight(cameraFrame));
CGSize scaledImageSizeToFitOnGPU = [GPUImageContext sizeThatFitsWithinATextureForSize:sizeOfPhoto];
// Check whether the image needs to be resized
if (!CGSizeEqualToSize(sizeOfPhoto, scaledImageSizeToFitOnGPU))
{
CMSampleBufferRef sampleBuffer = NULL;
if (CVPixelBufferGetPlaneCount(cameraFrame) > 0)
{
NSAssert(NO, @"Error: no downsampling for YUV input in the framework yet");
}
else
{
// Resize the image
GPUImageCreateResizedSampleBuffer(cameraFrame, scaledImageSizeToFitOnGPU, &sampleBuffer);
}
dispatch_semaphore_signal(frameRenderingSemaphore);
[finalFilterInChain useNextFrameForImageCapture];
// Let the superclass process the buffer and generate a framebuffer
[self captureOutput:photoOutput didOutputSampleBuffer:sampleBuffer fromConnection:[[photoOutput connections] objectAtIndex:0]];
dispatch_semaphore_wait(frameRenderingSemaphore, DISPATCH_TIME_FOREVER);
if (sampleBuffer != NULL)
CFRelease(sampleBuffer);
}
else
{
// This is a workaround for the corrupt images that are sometimes returned when taking a photo with the front camera and using the iOS 5.0 texture caches
AVCaptureDevicePosition currentCameraPosition = [[videoInput device] position];
if ( (currentCameraPosition != AVCaptureDevicePositionFront) || (![GPUImageContext supportsFastTextureUpload]) || !requiresFrontCameraTextureCacheCorruptionWorkaround)
{
dispatch_semaphore_signal(frameRenderingSemaphore);
[finalFilterInChain useNextFrameForImageCapture];
// Let the superclass process the buffer and generate a framebuffer
[self captureOutput:photoOutput didOutputSampleBuffer:imageSampleBuffer fromConnection:[[photoOutput connections] objectAtIndex:0]];
dispatch_semaphore_wait(frameRenderingSemaphore, DISPATCH_TIME_FOREVER);
}
}
// Copy the image metadata
CFDictionaryRef metadata = CMCopyDictionaryOfAttachments(NULL, imageSampleBuffer, kCMAttachmentMode_ShouldPropagate);
_currentCaptureMetadata = (__bridge_transfer NSDictionary *)metadata;
block(nil);
_currentCaptureMetadata = nil;
}];
}
GPUImageMovieWriter
GPUImageMovieWriter encodes audio and video and saves them to a movie file. It implements the GPUImageInput protocol, so it can accept framebuffer input. For recording it relies mainly on AVAssetWriter, AVAssetWriterInput, and AVAssetWriterInputPixelBufferAdaptor. AVAssetWriter supports quite a few container formats; see the table below (and the short sketch after it):
File type | Extensions |
---|---|
AVFileTypeQuickTimeMovie | .mov or .qt |
AVFileTypeMPEG4 | .mp4 |
AVFileTypeAppleM4V | .m4v |
AVFileTypeAppleM4A | .m4a |
AVFileType3GPP | .3gp, .3gpp, or .sdv |
AVFileType3GPP2 | .3g2 or .3gp2 |
AVFileTypeCoreAudioFormat | .caf |
AVFileTypeWAVE | .wav, .wave, or .bwf |
AVFileTypeAIFF | .aif or .aiff |
AVFileTypeAIFC | .aifc or .cdda |
AVFileTypeAMR | .amr |
AVFileTypeMPEGLayer3 | .mp3 |
AVFileTypeSunAU | .au or .snd |
AVFileTypeAC3 | .ac3 |
AVFileTypeEnhancedAC3 | .eac3 |
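For example, targeting an .mp4 container instead of the default .mov should just be a matter of passing the corresponding file type to the initializer shown below (a hedged sketch; outputURL is illustrative):
// Sketch: write into an .mp4 container instead of the default QuickTime .mov
GPUImageMovieWriter *mp4Writer = [[GPUImageMovieWriter alloc] initWithMovieURL:outputURL size:CGSizeMake(480, 640) fileType:AVFileTypeMPEG4 outputSettings:nil];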
- Properties. GPUImageMovieWriter has many properties, most of them quite practical; a lot of them concern the state of audio/video processing, such as the completion and failure callbacks. Some of the more important ones are listed below, followed by a short usage sketch.
// Whether there is an audio track
@property(readwrite, nonatomic) BOOL hasAudioTrack;
// Whether to pass audio through without re-encoding
@property(readwrite, nonatomic) BOOL shouldPassthroughAudio;
// Invalidate audio sample buffers once they have been written
@property(readwrite, nonatomic) BOOL shouldInvalidateAudioSampleWhenDone;
// Completion and failure callbacks
@property(nonatomic, copy) void(^completionBlock)(void);
@property(nonatomic, copy) void(^failureBlock)(NSError*);
// Whether live video is being encoded in real time
@property(readwrite, nonatomic) BOOL encodingLiveVideo;
// Video/audio input ready callbacks
@property(nonatomic, copy) BOOL(^videoInputReadyCallback)(void);
@property(nonatomic, copy) BOOL(^audioInputReadyCallback)(void);
// Audio processing callback
@property(nonatomic, copy) void(^audioProcessingCallback)(SInt16 **samplesRef, CMItemCount numSamplesInBuffer);
// The underlying AVAssetWriter
@property(nonatomic, readonly) AVAssetWriter *assetWriter;
// Duration from the start of recording to the previous frame
@property(nonatomic, readonly) CMTime duration;
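A short sketch of how the callback properties are typically hooked up (writer creation omitted):
// Illustrative: register completion and failure callbacks before recording starts
movieWriter.completionBlock = ^{
    NSLog(@"Movie writing finished");
};
movieWriter.failureBlock = ^(NSError *error) {
    NSLog(@"Movie writing failed: %@", error);
};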
- Initializers
- (id)initWithMovieURL:(NSURL *)newMovieURL size:(CGSize)newSize;
- (id)initWithMovieURL:(NSURL *)newMovieURL size:(CGSize)newSize fileType:(NSString *)newFileType outputSettings:(NSDictionary *)outputSettings;
Initialization mainly involves: 1. setting up instance variables; 2. creating the OpenGL program; 3. configuring the AVAssetWriter parameters, such as the video codec and video size. Note that audio is not set up during initialization; if you need audio, configure it afterwards with - (void)setHasAudioTrack:(BOOL)newValue.
- (id)initWithMovieURL:(NSURL *)newMovieURL size:(CGSize)newSize;
{
// Delegate to the designated initializer
return [self initWithMovieURL:newMovieURL size:newSize fileType:AVFileTypeQuickTimeMovie outputSettings:nil];
}
- (id)initWithMovieURL:(NSURL *)newMovieURL size:(CGSize)newSize fileType:(NSString *)newFileType outputSettings:(NSMutableDictionary *)outputSettings;
{
if (!(self = [super init]))
{
return nil;
}
// Initialize instance variables
_shouldInvalidateAudioSampleWhenDone = NO;
self.enabled = YES;
alreadyFinishedRecording = NO;
videoEncodingIsFinished = NO;
audioEncodingIsFinished = NO;
discont = NO;
videoSize = newSize;
movieURL = newMovieURL;
fileType = newFileType;
startTime = kCMTimeInvalid;
_encodingLiveVideo = [[outputSettings objectForKey:@"EncodingLiveVideo"] isKindOfClass:[NSNumber class]] ? [[outputSettings objectForKey:@"EncodingLiveVideo"] boolValue] : YES;
previousFrameTime = kCMTimeNegativeInfinity;
previousAudioTime = kCMTimeNegativeInfinity;
inputRotation = kGPUImageNoRotation;
// Set up the OpenGL context
_movieWriterContext = [[GPUImageContext alloc] init];
[_movieWriterContext useSharegroup:[[[GPUImageContext sharedImageProcessingContext] context] sharegroup]];
runSynchronouslyOnContextQueue(_movieWriterContext, ^{
[_movieWriterContext useAsCurrentContext];
// Create the OpenGL program
if ([GPUImageContext supportsFastTextureUpload])
{
colorSwizzlingProgram = [_movieWriterContext programForVertexShaderString:kGPUImageVertexShaderString fragmentShaderString:kGPUImagePassthroughFragmentShaderString];
}
else
{
colorSwizzlingProgram = [_movieWriterContext programForVertexShaderString:kGPUImageVertexShaderString fragmentShaderString:kGPUImageColorSwizzlingFragmentShaderString];
}
// Look up the attributes and uniforms in the GLSL program
if (!colorSwizzlingProgram.initialized)
{
[colorSwizzlingProgram addAttribute:@"position"];
[colorSwizzlingProgram addAttribute:@"inputTextureCoordinate"];
if (![colorSwizzlingProgram link])
{
NSString *progLog = [colorSwizzlingProgram programLog];
NSLog(@"Program link log: %@", progLog);
NSString *fragLog = [colorSwizzlingProgram fragmentShaderLog];
NSLog(@"Fragment shader compile log: %@", fragLog);
NSString *vertLog = [colorSwizzlingProgram vertexShaderLog];
NSLog(@"Vertex shader compile log: %@", vertLog);
colorSwizzlingProgram = nil;
NSAssert(NO, @"Filter shader link failed");
}
}
colorSwizzlingPositionAttribute = [colorSwizzlingProgram attributeIndex:@"position"];
colorSwizzlingTextureCoordinateAttribute = [colorSwizzlingProgram attributeIndex:@"inputTextureCoordinate"];
colorSwizzlingInputTextureUniform = [colorSwizzlingProgram uniformIndex:@"inputImageTexture"];
[_movieWriterContext setContextShaderProgram:colorSwizzlingProgram];
glEnableVertexAttribArray(colorSwizzlingPositionAttribute);
glEnableVertexAttribArray(colorSwizzlingTextureCoordinateAttribute);
});
[self initializeMovieWithOutputSettings:outputSettings];
return self;
}
- (void)initializeMovieWithOutputSettings:(NSDictionary *)outputSettings;
{
isRecording = NO;
self.enabled = YES;
NSError *error = nil;
// Create the AVAssetWriter with the output URL and file type
assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:fileType error:&error];
// Handle an initialization failure
if (error != nil)
{
NSLog(@"Error: %@", error);
if (failureBlock)
{
failureBlock(error);
}
else
{
if(self.delegate && [self.delegate respondsToSelector:@selector(movieRecordingFailedWithError:)])
{
[self.delegate movieRecordingFailedWithError:error];
}
}
}
// Set this to make sure that a functional movie is produced, even if the recording is cut off mid-stream. Only the last second should be lost in that case.
assetWriter.movieFragmentInterval = CMTimeMakeWithSeconds(1.0, 1000);
// Configure the video width, height, and codec
if (outputSettings == nil)
{
NSMutableDictionary *settings = [[NSMutableDictionary alloc] init];
[settings setObject:AVVideoCodecH264 forKey:AVVideoCodecKey];
[settings setObject:[NSNumber numberWithInt:videoSize.width] forKey:AVVideoWidthKey];
[settings setObject:[NSNumber numberWithInt:videoSize.height] forKey:AVVideoHeightKey];
outputSettings = settings;
}
// If the caller supplied settings, make sure the required keys are present
else
{
__unused NSString *videoCodec = [outputSettings objectForKey:AVVideoCodecKey];
__unused NSNumber *width = [outputSettings objectForKey:AVVideoWidthKey];
__unused NSNumber *height = [outputSettings objectForKey:AVVideoHeightKey];
NSAssert(videoCodec && width && height, @"OutputSettings is missing required parameters.");
if( [outputSettings objectForKey:@"EncodingLiveVideo"] ) {
NSMutableDictionary *tmp = [outputSettings mutableCopy];
[tmp removeObjectForKey:@"EncodingLiveVideo"];
outputSettings = tmp;
}
}
/*
NSDictionary *videoCleanApertureSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:videoSize.width], AVVideoCleanApertureWidthKey,
[NSNumber numberWithInt:videoSize.height], AVVideoCleanApertureHeightKey,
[NSNumber numberWithInt:0], AVVideoCleanApertureHorizontalOffsetKey,
[NSNumber numberWithInt:0], AVVideoCleanApertureVerticalOffsetKey,
nil];
NSDictionary *videoAspectRatioSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:3], AVVideoPixelAspectRatioHorizontalSpacingKey,
[NSNumber numberWithInt:3], AVVideoPixelAspectRatioVerticalSpacingKey,
nil];
NSMutableDictionary * compressionProperties = [[NSMutableDictionary alloc] init];
[compressionProperties setObject:videoCleanApertureSettings forKey:AVVideoCleanApertureKey];
[compressionProperties setObject:videoAspectRatioSettings forKey:AVVideoPixelAspectRatioKey];
[compressionProperties setObject:[NSNumber numberWithInt: 2000000] forKey:AVVideoAverageBitRateKey];
[compressionProperties setObject:[NSNumber numberWithInt: 16] forKey:AVVideoMaxKeyFrameIntervalKey];
[compressionProperties setObject:AVVideoProfileLevelH264Main31 forKey:AVVideoProfileLevelKey];
[outputSettings setObject:compressionProperties forKey:AVVideoCompressionPropertiesKey];
*/
// Create the video AVAssetWriterInput
assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
// Expect real-time data; without this, frames may be dropped
assetWriterVideoInput.expectsMediaDataInRealTime = _encodingLiveVideo;
// You need to use BGRA for the video in order to get realtime encoding. I use a color-swizzling shader to line up glReadPixels' normal RGBA output with the movie input's BGRA.
// Pixel format fed into the encoder
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
[NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
[NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
nil];
// NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey,
// nil];
// Create the AVAssetWriterInputPixelBufferAdaptor
assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
[assetWriter addInput:assetWriterVideoInput];
}
- Other methods.
// Enable writing of audio data
- (void)setHasAudioTrack:(BOOL)hasAudioTrack audioSettings:(NSDictionary *)audioOutputSettings;
// Start, finish, and cancel recording
- (void)startRecording;
- (void)startRecordingInOrientation:(CGAffineTransform)orientationTransform;
- (void)finishRecording;
- (void)finishRecordingWithCompletionHandler:(void (^)(void))handler;
- (void)cancelRecording;
// Process audio
- (void)processAudioBuffer:(CMSampleBufferRef)audioBuffer;
// Enable the synchronized videoInputReadyCallback / audioInputReadyCallback callbacks
- (void)enableSynchronizationCallbacks;
GPUImageMovieWriter does not have many methods, but they are long and the internal handling is relatively complex. Only the common ones are shown here; if you need to record video, it is worth reading the GPUImageMovieWriter source carefully.
// Set up audio parameters such as codec, channel count, sample rate, and bit rate
- (void)setHasAudioTrack:(BOOL)newValue audioSettings:(NSDictionary *)audioOutputSettings;
{
_hasAudioTrack = newValue;
if (_hasAudioTrack)
{
if (_shouldPassthroughAudio)
{
// Do not set any settings so audio will be the same as passthrough
audioOutputSettings = nil;
}
else if (audioOutputSettings == nil)
{
AVAudioSession *sharedAudioSession = [AVAudioSession sharedInstance];
double preferredHardwareSampleRate;
if ([sharedAudioSession respondsToSelector:@selector(sampleRate)])
{
preferredHardwareSampleRate = [sharedAudioSession sampleRate];
}
else
{
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdeprecated-declarations"
preferredHardwareSampleRate = [[AVAudioSession sharedInstance] currentHardwareSampleRate];
#pragma clang diagnostic pop
}
AudioChannelLayout acl;
bzero( &acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;
audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[ NSNumber numberWithInt: kAudioFormatMPEG4AAC], AVFormatIDKey,
[ NSNumber numberWithInt: 1 ], AVNumberOfChannelsKey,
[ NSNumber numberWithFloat: preferredHardwareSampleRate ], AVSampleRateKey,
[ NSData dataWithBytes: &acl length: sizeof( acl ) ], AVChannelLayoutKey,
//[ NSNumber numberWithInt:AVAudioQualityLow], AVEncoderAudioQualityKey,
[ NSNumber numberWithInt: 64000 ], AVEncoderBitRateKey,
nil];
/*
AudioChannelLayout acl;
bzero( &acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;
audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[ NSNumber numberWithInt: kAudioFormatMPEG4AAC ], AVFormatIDKey,
[ NSNumber numberWithInt: 1 ], AVNumberOfChannelsKey,
[ NSNumber numberWithFloat: 44100.0 ], AVSampleRateKey,
[ NSNumber numberWithInt: 64000 ], AVEncoderBitRateKey,
[ NSData dataWithBytes: &acl length: sizeof( acl ) ], AVChannelLayoutKey,
nil];*/
}
assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioOutputSettings];
[assetWriter addInput:assetWriterAudioInput];
assetWriterAudioInput.expectsMediaDataInRealTime = _encodingLiveVideo;
}
else
{
// Remove audio track if it exists
}
}
- (void)finishRecordingWithCompletionHandler:(void (^)(void))handler;
{
runSynchronouslyOnContextQueue(_movieWriterContext, ^{
isRecording = NO;
if (assetWriter.status == AVAssetWriterStatusCompleted || assetWriter.status == AVAssetWriterStatusCancelled || assetWriter.status == AVAssetWriterStatusUnknown)
{
if (handler)
runAsynchronouslyOnContextQueue(_movieWriterContext, handler);
return;
}
if( assetWriter.status == AVAssetWriterStatusWriting && ! videoEncodingIsFinished )
{
videoEncodingIsFinished = YES;
[assetWriterVideoInput markAsFinished];
}
if( assetWriter.status == AVAssetWriterStatusWriting && ! audioEncodingIsFinished )
{
audioEncodingIsFinished = YES;
[assetWriterAudioInput markAsFinished];
}
#if (!defined(__IPHONE_6_0) || (__IPHONE_OS_VERSION_MAX_ALLOWED < __IPHONE_6_0))
// Not iOS 6 SDK
[assetWriter finishWriting];
if (handler)
runAsynchronouslyOnContextQueue(_movieWriterContext,handler);
#else
// iOS 6 SDK
if ([assetWriter respondsToSelector:@selector(finishWritingWithCompletionHandler:)]) {
// Running iOS 6
[assetWriter finishWritingWithCompletionHandler:(handler ?: ^{ })];
}
else {
// Not running iOS 6
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdeprecated-declarations"
[assetWriter finishWriting];
#pragma clang diagnostic pop
if (handler)
runAsynchronouslyOnContextQueue(_movieWriterContext, handler);
}
#endif
});
}
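A hedged usage sketch of the method above:
// Illustrative: stop recording and react once the writer has flushed everything to disk
[movieWriter finishRecordingWithCompletionHandler:^{
    NSLog(@"Finished writing the movie file");
}];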
// Process audio data
- (void)processAudioBuffer:(CMSampleBufferRef)audioBuffer;
{
if (!isRecording || _paused)
{
return;
}
// if (_hasAudioTrack && CMTIME_IS_VALID(startTime))
// Only process if there is an audio track
if (_hasAudioTrack)
{
CFRetain(audioBuffer);
// Get the audio presentation timestamp
CMTime currentSampleTime = CMSampleBufferGetOutputPresentationTimeStamp(audioBuffer);
if (CMTIME_IS_INVALID(startTime))
{
runSynchronouslyOnContextQueue(_movieWriterContext, ^{
// If the assetWriter is not writing yet, start it
if ((audioInputReadyCallback == NULL) && (assetWriter.status != AVAssetWriterStatusWriting))
{
[assetWriter startWriting];
}
// Start the session at this timestamp
[assetWriter startSessionAtSourceTime:currentSampleTime];
startTime = currentSampleTime;
});
}
// Drop the audio frame if the input is not ready while encoding live video; invalidate it if requested
if (!assetWriterAudioInput.readyForMoreMediaData && _encodingLiveVideo)
{
NSLog(@"1: Had to drop an audio frame: %@", CFBridgingRelease(CMTimeCopyDescription(kCFAllocatorDefault, currentSampleTime)));
if (_shouldInvalidateAudioSampleWhenDone)
{
CMSampleBufferInvalidate(audioBuffer);
}
CFRelease(audioBuffer);
return;
}
if (discont) {
discont = NO;
CMTime current;
if (offsetTime.value > 0) {
current = CMTimeSubtract(currentSampleTime, offsetTime);
} else {
current = currentSampleTime;
}
CMTime offset = CMTimeSubtract(current, previousAudioTime);
if (offsetTime.value == 0) {
offsetTime = offset;
} else {
offsetTime = CMTimeAdd(offsetTime, offset);
}
}
if (offsetTime.value > 0) {
CFRelease(audioBuffer);
audioBuffer = [self adjustTime:audioBuffer by:offsetTime];
CFRetain(audioBuffer);
}
// record most recent time so we know the length of the pause
currentSampleTime = CMSampleBufferGetPresentationTimeStamp(audioBuffer);
previousAudioTime = currentSampleTime;
//if the consumer wants to do something with the audio samples before writing, let him.
// Invoke the audio-processing callback if one is set
if (self.audioProcessingCallback) {
//need to introspect into the opaque CMBlockBuffer structure to find its raw sample buffers.
CMBlockBufferRef buffer = CMSampleBufferGetDataBuffer(audioBuffer);
CMItemCount numSamplesInBuffer = CMSampleBufferGetNumSamples(audioBuffer);
AudioBufferList audioBufferList;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(audioBuffer,
NULL,
&audioBufferList,
sizeof(audioBufferList),
NULL,
NULL,
kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
&buffer
);
//passing a live pointer to the audio buffers, try to process them in-place or we might have syncing issues.
for (int bufferCount=0; bufferCount < audioBufferList.mNumberBuffers; bufferCount++) {
SInt16 *samples = (SInt16 *)audioBufferList.mBuffers[bufferCount].mData;
self.audioProcessingCallback(&samples, numSamplesInBuffer);
}
}
// NSLog(@"Recorded audio sample time: %lld, %d, %lld", currentSampleTime.value, currentSampleTime.timescale, currentSampleTime.epoch);
// Block that writes the audio buffer
void(^write)() = ^() {
// If we cannot append yet, wait
while( ! assetWriterAudioInput.readyForMoreMediaData && ! _encodingLiveVideo && ! audioEncodingIsFinished ) {
NSDate *maxDate = [NSDate dateWithTimeIntervalSinceNow:0.5];
//NSLog(@"audio waiting...");
[[NSRunLoop currentRunLoop] runUntilDate:maxDate];
}
if (!assetWriterAudioInput.readyForMoreMediaData)
{
NSLog(@"2: Had to drop an audio frame %@", CFBridgingRelease(CMTimeCopyDescription(kCFAllocatorDefault, currentSampleTime)));
}
// Data can only be appended while readyForMoreMediaData is YES
else if(assetWriter.status == AVAssetWriterStatusWriting)
{
if (![assetWriterAudioInput appendSampleBuffer:audioBuffer])
NSLog(@"Problem appending audio buffer at time: %@", CFBridgingRelease(CMTimeCopyDescription(kCFAllocatorDefault, currentSampleTime)));
}
else
{
//NSLog(@"Wrote an audio frame %@", CFBridgingRelease(CMTimeCopyDescription(kCFAllocatorDefault, currentSampleTime)));
}
// Mark the buffer as no longer needed
if (_shouldInvalidateAudioSampleWhenDone)
{
CMSampleBufferInvalidate(audioBuffer);
}
CFRelease(audioBuffer);
};
// runAsynchronouslyOnContextQueue(_movieWriterContext, write);
// When encoding live video, dispatch the write onto the context queue
if( _encodingLiveVideo )
{
runAsynchronouslyOnContextQueue(_movieWriterContext, write);
}
else
{
// Otherwise, write synchronously
write();
}
}
}
GPUImageMovie
GPUImageMovie reads and decodes audio/video files. It inherits from GPUImageOutput, so it can output framebuffer objects; since it does not implement the GPUImageInput protocol, it can only act as a source in a filter chain.
- Initialization. It can be initialized from an NSURL, an AVPlayerItem, or an AVAsset.
- (id)initWithAsset:(AVAsset *)asset;
- (id)initWithPlayerItem:(AVPlayerItem *)playerItem;
- (id)initWithURL:(NSURL *)url;
Initialization is relatively simple: the passed-in object is just stored.
- (id)initWithURL:(NSURL *)url;
{
if (!(self = [super init]))
{
return nil;
}
[self yuvConversionSetup];
self.url = url;
self.asset = nil;
return self;
}
- (id)initWithAsset:(AVAsset *)asset;
{
if (!(self = [super init]))
{
return nil;
}
[self yuvConversionSetup];
self.url = nil;
self.asset = asset;
return self;
}
- (id)initWithPlayerItem:(AVPlayerItem *)playerItem;
{
if (!(self = [super init]))
{
return nil;
}
[self yuvConversionSetup];
self.url = nil;
self.asset = nil;
self.playerItem = playerItem;
return self;
}
- Other methods. GPUImageMovie has few methods, but they contain a lot of code and are fairly complex; for reasons of space they are not examined in detail here. They fall into three groups: 1. reading audio/video data; 2. controlling reading (start, end, cancel); 3. processing audio/video frames.
// Allow synchronized audio/video encoding with a GPUImageMovieWriter
- (void)enableSynchronizedEncodingUsingMovieWriter:(GPUImageMovieWriter *)movieWriter;
// Read video and audio
- (BOOL)readNextVideoFrameFromOutput:(AVAssetReaderOutput *)readerVideoTrackOutput;
- (BOOL)readNextAudioSampleFromOutput:(AVAssetReaderOutput *)readerAudioTrackOutput;
// Start, end, and cancel reading
- (void)startProcessing;
- (void)endProcessing;
- (void)cancelProcessing;
// Process a video frame
- (void)processMovieFrame:(CMSampleBufferRef)movieSampleBuffer;
Implementation
- Recording video
#import "ViewController.h"
#import <GPUImage.h>
#define DOCUMENT(path) [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject] stringByAppendingPathComponent:path]
@interface ViewController ()
@property (weak, nonatomic) IBOutlet GPUImageView *imageView;
@property (strong, nonatomic) GPUImageVideoCamera *video;
@property (strong, nonatomic) GPUImageMovieWriter *writer;
@property (nonatomic, strong) NSURL *videoFile;
@property (nonatomic, readonly, getter=isRecording) BOOL recording;
@end
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
_recording = NO;
// Set the background color
[_imageView setBackgroundColorRed:1.0 green:1.0 blue:1.0 alpha:1.0];
// Output file path
_videoFile = [NSURL fileURLWithPath:DOCUMENT(@"/1.mov")];
// Remove any existing file
[[NSFileManager defaultManager] removeItemAtURL:_videoFile error:nil];
// Set up the GPUImageMovieWriter
_writer = [[GPUImageMovieWriter alloc] initWithMovieURL:_videoFile size:CGSizeMake(480, 640)];
[_writer setHasAudioTrack:YES audioSettings:nil];
// Set up the GPUImageVideoCamera
_video = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
_video.outputImageOrientation = UIInterfaceOrientationPortrait;
[_video addAudioInputsAndOutputs];
// Set the audio encoding target
_video.audioEncodingTarget = _writer;
// Add targets
[_video addTarget:_imageView];
[_video addTarget:_writer];
// Start capturing
[_video startCameraCapture];
}
- (IBAction)startButtonTapped:(UIButton *)sender
{
if (!_recording) {
// Start recording
[_writer startRecording];
_recording = YES;
}
}
- (IBAction)finishButtonTapped:(UIButton *)sender
{
// Finish recording
[_writer finishRecording];
}
@end
- Taking photos
#import "SecondViewController.h"
#import "ImageShowViewController.h"
#import <GPUImage.h>
@interface SecondViewController ()
@property (weak, nonatomic) IBOutlet GPUImageView *imageView;
@property (nonatomic, strong) GPUImageStillCamera *camera;
@property (nonatomic, strong) GPUImageFilter *filter;
@end
@implementation SecondViewController
- (void)viewDidLoad {
[super viewDidLoad];
// Set the background color
[_imageView setBackgroundColorRed:1.0 green:1.0 blue:1.0 alpha:1.0];
// Filter
_filter = [[GPUImageGrayscaleFilter alloc] init];
// Initialize the still camera
_camera = [[GPUImageStillCamera alloc] initWithSessionPreset:AVCaptureSessionPresetPhoto cameraPosition:AVCaptureDevicePositionBack];
_camera.outputImageOrientation = UIInterfaceOrientationPortrait;
[_camera addTarget:_filter];
[_filter addTarget:_imageView];
// Start capturing
[_camera startCameraCapture];
}
- (IBAction)pictureButtonTapped:(UIButton *)sender
{
if ([_camera isRunning]) {
[_camera capturePhotoAsImageProcessedUpToFilter:_filter withCompletionHandler:^(UIImage *processedImage, NSError *error) {
[_camera stopCameraCapture];
ImageShowViewController *imageShowVC = [[UIStoryboard storyboardWithName:@"Main" bundle:nil] instantiateViewControllerWithIdentifier:@"ImageShowViewController"];
imageShowVC.image = processedImage;
[self presentViewController:imageShowVC animated:YES completion:NULL];
}];
}else {
[_camera startCameraCapture];
}
}
- Video transcoding with a filter. Because the AAC audio has to be decoded to PCM, I modified line 230 of GPUImageMovie.m (to fix the error "[AVAssetWriterInput appendSampleBuffer:] Cannot append sample buffer: Input buffer must be in an uncompressed format when outputSettings is not nil"), as shown below:
NSDictionary *audioOutputSetting = @{
AVFormatIDKey : @(kAudioFormatLinearPCM)
};
// This might need to be extended to handle movies with more than one audio track
AVAssetTrack* audioTrack = [audioTracks objectAtIndex:0];
readerAudioTrackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:audioOutputSetting];
#import "ThirdViewController.h"
#import <GPUImage.h>
#define DOCUMENT(path) [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject] stringByAppendingPathComponent:path]
@interface ThirdViewController ()
@property (weak, nonatomic) IBOutlet GPUImageView *imageView;
@property (nonatomic, strong) GPUImageMovie *movie;
@property (nonatomic, strong) GPUImageMovieWriter *movieWriter;
@property (nonatomic, strong) GPUImageFilter *filter;
@property (nonatomic, assign) CGSize size;
@end
@implementation ThirdViewController
- (void)viewDidLoad
{
[super viewDidLoad];
// Path to the bundled video file
NSURL *fileURL = [[NSBundle mainBundle] URLForResource:@"1.mp4" withExtension:nil];
AVAsset *asset = [AVAsset assetWithURL:fileURL];
// Get the video dimensions
NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack *videoTrack = [tracks firstObject];
_size = videoTrack.naturalSize;
// Initialize GPUImageMovie
_movie = [[GPUImageMovie alloc] initWithAsset:asset];
// Filter
_filter = [[GPUImageGrayscaleFilter alloc] init];
[_movie addTarget:_filter];
[_filter addTarget:_imageView];
}
- (IBAction)playButtonTapped:(UIButton *)sender
{
[_movie startProcessing];
}
- (IBAction)transcodeButtonTapped:(id)sender
{
// Output file path
NSURL *videoFile = [NSURL fileURLWithPath:DOCUMENT(@"/2.mov")];
[[NSFileManager defaultManager] removeItemAtURL:videoFile error:nil];
// GPUImageMovieWriter
_movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:videoFile size:_size];
[_movieWriter setHasAudioTrack:YES audioSettings:nil];
// GPUImageMovie setup
_movie.audioEncodingTarget = _movieWriter;
[_filter addTarget:_movieWriter];
[_movie enableSynchronizedEncodingUsingMovieWriter:_movieWriter];
// Start transcoding
[_movieWriter startRecording];
[_movie startProcessing];
// Finish up
__weak typeof(_movieWriter) wMovieWriter = _movieWriter;
__weak typeof(self) wSelf = self;
[_movieWriter setCompletionBlock:^{
[wMovieWriter finishRecording];
[wSelf.movie removeTarget:wMovieWriter];
wSelf.movie.audioEncodingTarget = nil;
}];
}
Summary
GPUImageVideoCamera, GPUImageStillCamera, GPUImageMovieWriter, and GPUImageMovie are all very useful when working with the camera and with audio/video. Space does not permit walking through all of their source code here; read it on your own if you need more detail.
Sample code: GPUImage源码阅读系列 https://github.com/QinminiOS/GPUImage
Series index: GPUImage源码阅读 http://www.jianshu.com/nb/11749791