AVFoundation: Reading and Writing Media

1 Classes for Reading and Writing Media

In AVFoundation, the low-level work of reading and writing media is handled by AVAssetReader and AVAssetWriter.


1.1 AVAssetReader and Related Classes

AVAssetReader reads CMSampleBuffer-based media data from an AVAsset object. It is normally configured with one or more AVAssetReaderOutput instances. AVAssetReaderOutput is an abstract class; the concrete subclasses used in practice are AVAssetReaderTrackOutput for a single media track, AVAssetReaderAudioMixOutput for mixed audio tracks, and AVAssetReaderVideoCompositionOutput for composited video tracks. An AVAssetReader can only read from a single asset; to read from several, combine the AVAsset objects into an AVComposition, which is covered in the editing article of this series.

Internally the reader pulls the next media sample on a background thread, but this approach is not meant for real-time work; for real-time scenarios, use AVPlayerItemVideoOutput instead.
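For the real-time path, a minimal sketch using AVPlayerItemVideoOutput might look like the following; the method names are illustrative, and the player, player item, and the CADisplayLink that drives the second method are assumed to be set up elsewhere.

- (AVPlayerItemVideoOutput *)attachVideoOutputToItem:(AVPlayerItem *)playerItem {
    // Request BGRA pixel buffers so they can be handed straight to Core Image or OpenGL
    NSDictionary *attributes = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
    AVPlayerItemVideoOutput *videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:attributes];
    [playerItem addOutput:videoOutput];
    return videoOutput;
}

// Invoked from a CADisplayLink callback (assumed to exist elsewhere)
- (void)renderFrameFromVideoOutput:(AVPlayerItemVideoOutput *)videoOutput {
    CMTime itemTime = [videoOutput itemTimeForHostTime:CACurrentMediaTime()];
    if ([videoOutput hasNewPixelBufferForItemTime:itemTime]) {
        CVPixelBufferRef pixelBuffer = [videoOutput copyPixelBufferForItemTime:itemTime itemTimeForDisplay:NULL];
        if (pixelBuffer) {
            // Hand the pixel buffer to the rendering pipeline here
            CVPixelBufferRelease(pixelBuffer);
        }
    }
}

Returning to offline reading, the method below configures a reader that decompresses a video track to BGRA frames.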

- (AVAssetReader *)initialAssetReader {
    AVAsset *asset = nil; // placeholder: in practice, load a real asset, e.g. [AVAsset assetWithURL:...]
    AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
    AVAssetReader *assetReader = [[AVAssetReader alloc] initWithAsset:asset error:nil];
    // Decompress the video frames to BGRA; 420YpCbCr8 is also an option when [GPUImageContext supportsFastTextureUpload] is available
    NSDictionary *readerOutputSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
    AVAssetReaderTrackOutput *trackOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:track outputSettings:readerOutputSettings];
    [assetReader addOutput:trackOutput];
    [assetReader startReading];
    return assetReader;
}

1.2 AVAssetWriter and Related Classes

AVAssetWriter encodes media and writes it into a container file such as MP4 or QuickTime (.mov). You configure the writer with one or more AVAssetWriterInput objects; each input handles one media type (audio or video) and ultimately produces a separate AVAssetTrack. For video, it is common to wrap the input in an AVAssetWriterInputPixelBufferAdaptor, which appends CVPixelBuffer samples and offers better performance. Inputs can also be grouped into an AVAssetWriterInputGroup and added to the writer, the usual way to carry mutually exclusive tracks such as optional subtitles or alternate-language audio; a player can then pick the appropriate track through AVMediaSelectionGroup and AVMediaSelectionOption, as explained in the playback article of this series.
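The input-group mechanism mentioned above is not used by the samples in this article; the sketch below shows roughly how two alternate-language audio inputs could be grouped (the method name, language codes, and settings are illustrative assumptions).

- (void)addAlternateAudioInputsToWriter:(AVAssetWriter *)writer
                          audioSettings:(NSDictionary *)audioSettings {
    AVAssetWriterInput *englishInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio outputSettings:audioSettings];
    englishInput.languageCode = @"eng"; // ISO 639-2 language code for the resulting track
    AVAssetWriterInput *spanishInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio outputSettings:audioSettings];
    spanishInput.languageCode = @"spa";
    
    if ([writer canAddInput:englishInput]) [writer addInput:englishInput];
    if ([writer canAddInput:spanishInput]) [writer addInput:spanishInput];
    
    // Grouping the inputs marks the resulting tracks as mutually exclusive; players expose them
    // through AVMediaSelectionGroup/AVMediaSelectionOption, with the default input's track enabled.
    AVAssetWriterInputGroup *group =
        [AVAssetWriterInputGroup assetWriterInputGroupWithInputs:@[englishInput, spanishInput]
                                                    defaultInput:englishInput];
    if ([writer canAddInputGroup:group]) {
        [writer addInputGroup:group];
    }
}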

AVAssetWriter writes data to the file in an interleaved pattern: a chunk of video is followed by a chunk of audio, then more video, and so on. To maintain a proper interleave, AVAssetWriterInput exposes a readyForMoreMediaData property, and samples should only be appended while it is YES.

AVAssetWriter is used in two distinct scenarios. Real-time writing: for example, writing frames captured from the camera to a file. Set the input's expectsMediaDataInRealTime property to YES so that readyForMoreMediaData is computed correctly. This setting gives appending samples quickly priority over maintaining a strict audio/video interleave, so the result is not an exact 1:1 alternation of video and audio frames, but the writing logic is optimized and the data still ends up naturally interleaved.

Offline writing: for example, reading buffers from an existing file and writing them to a new one. Here you still have to observe readyForMoreMediaData, or you can call requestMediaDataWhenReadyOnQueue:usingBlock:, whose block is invoked automatically whenever the input is ready for more data.

- (AVAssetWriter *)initialAssetWriter {
    NSURL *outputURL = nil; // placeholder: in practice, a file URL for the output movie
    AVAssetWriter *assetWriter = [[AVAssetWriter alloc] initWithURL:outputURL fileType:AVFileTypeQuickTimeMovie error:nil];
    
    NSDictionary *writerOutputSettings =
    @{AVVideoCodecKey: AVVideoCodecH264,
      AVVideoWidthKey: @1280,
      AVVideoHeightKey: @720,
      AVVideoCompressionPropertiesKey: @{
              // Key-frame interval: 1 means every frame is a key frame; higher values give better compression
              AVVideoMaxKeyFrameIntervalKey: @1,
              // Average bit rate, which largely determines the file size; track.estimatedDataRate is another reasonable source
              AVVideoAverageBitRateKey: @10500000,
              AVVideoProfileLevelKey: AVVideoProfileLevelH264Main31,
              }
      };
    
    AVAssetWriterInput *writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:writerOutputSettings];
    [assetWriter addInput:writerInput];
    [assetWriter startWriting];
    return assetWriter;
}

- (void)rewriteMovieFile {
    dispatch_queue_t dispatchQueue = dispatch_queue_create("com.tapharmonic.WriterQueue", DISPATCH_QUEUE_SERIAL);
    AVAssetReader *reader = [self initialAssetReader];
    AVAssetReaderTrackOutput *trackOutput = (AVAssetReaderTrackOutput *)reader.outputs.firstObject;
    AVAssetWriter *writer = [self initialAssetWriter];
    AVAssetWriterInput *writerInput = writer.inputs.firstObject;
    // Start a new writing session; must be called after startWriting
    [writer startSessionAtSourceTime:kCMTimeZero];
    
    [writerInput requestMediaDataWhenReadyOnQueue:dispatchQueue usingBlock:^{
        BOOL complete = NO;
        while ([writerInput isReadyForMoreMediaData] && !complete) {
            CMSampleBufferRef sampleBuffer = [trackOutput copyNextSampleBuffer];
            if (sampleBuffer) {
                BOOL result = [writerInput appendSampleBuffer:sampleBuffer];
                CFRelease(sampleBuffer);
                complete = !result;
                // Handle the append failure here
            } else {
                // Mark the writer input as finished; must be called before the session is finished
                [writerInput markAsFinished];
                complete = YES;
            }
        }
        
        if (complete) {
            // Finish the writing session
            // Capturing writer in this handler does not create a lasting retain cycle
            [writer finishWritingWithCompletionHandler:^{
                AVAssetWriterStatus status = writer.status;
                if (status == AVAssetWriterStatusCompleted) {
                    // Handle success
                } else {
                    // Handle failure
                }
            }];
        }
    }];
}

2 Creating an Audio Waveform

This breaks down into three steps. 1) Read the data: read decompressed audio samples from the AVAsset. 2) Filter the data: a mono audio file is typically sampled at 44.1 kHz, so even a very small file can contain several hundred thousand samples, which must be reduced. The usual approach is to split the samples into small bins and then, for each bin, keep either the maximum value, the average value, or the min/max pair (the code below uses the per-bin maximum; a sketch of the averaging variant appears after the filter class in section 2.2). 3) Render the data: draw the filtered samples to the screen with Quartz (Core Graphics).

2.1 Reading Audio Samples

The audio data lives in a CMBlockBuffer, which can be accessed in several ways; the simplest is CMSampleBufferGetDataBuffer. Note that the CMBlockBuffer obtained this way is an unretained reference; if you need to hand the data to Core Audio, use CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer instead.
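A sketch of that Core Audio path, assuming interleaved (single-buffer) audio; the function name is illustrative and error handling is kept minimal.

#import <CoreMedia/CoreMedia.h>

static void THProcessAudioSampleBuffer(CMSampleBufferRef sampleBuffer) {
    AudioBufferList audioBufferList;
    CMBlockBufferRef blockBuffer = NULL;
    // The returned block buffer is retained, so the AudioBufferList stays valid until it is released.
    // For non-interleaved audio, query the required AudioBufferList size first instead of using a single-buffer list.
    OSStatus status = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        sampleBuffer,
        NULL,                   // bufferListSizeNeededOut, not needed here
        &audioBufferList,
        sizeof(audioBufferList),
        NULL,                   // block buffer structure allocator
        NULL,                   // block buffer memory allocator
        kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
        &blockBuffer);
    if (status != noErr) {
        return;
    }
    for (UInt32 i = 0; i < audioBufferList.mNumberBuffers; i++) {
        AudioBuffer buffer = audioBufferList.mBuffers[i];
        // buffer.mData / buffer.mDataByteSize can now be handed to Core Audio APIs
        (void)buffer;
    }
    CFRelease(blockBuffer);
}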

THSampleDataProvider supplies the sample data

@implementation THSampleDataProvider
+ (void)loadAudioSamplesFromAsset:(AVAsset *)asset
                  completionBlock:(THSampleDataCompletionBlock)completionBlock {
    
    NSString *tracks = @"tracks";
    [asset loadValuesAsynchronouslyForKeys:@[tracks] completionHandler:^{
        AVKeyValueStatus status = [asset statusOfValueForKey:tracks error:nil];
        NSData *sampleData = nil;
        if (status == AVKeyValueStatusLoaded) {
            sampleData = [self readAudioSamplesFromAsset:asset];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            completionBlock(sampleData);
        });
    }];
}

+ (NSData *)readAudioSamplesFromAsset:(AVAsset *)asset {
    NSError *error = nil;
    AVAssetReader *assetReader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
    if (!assetReader) {
        NSLog(@"Error creating asset reader: %@",[error localizedDescription]);
        return nil;
    }
    
    AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
    NSDictionary *outputSettings =
    @{AVFormatIDKey             :@(kAudioFormatLinearPCM),
      AVLinearPCMIsBigEndianKey :@NO,
      AVLinearPCMIsFloatKey     :@NO,
      AVLinearPCMBitDepthKey    :@(16)
      };
    
    AVAssetReaderTrackOutput *trackOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:track outputSettings:outputSettings];
    [assetReader addOutput:trackOutput];
    [assetReader startReading];
    
    NSMutableData *sampleData = [NSMutableData data];
    while (assetReader.status == AVAssetReaderStatusReading) {
        CMSampleBufferRef sampleBuffer = [trackOutput copyNextSampleBuffer];
        if (sampleBuffer) {
            CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(sampleBuffer);
            size_t length = CMBlockBufferGetDataLength(blockBufferRef);
            SInt16 sampleBytes[length];
            
            CMBlockBufferCopyDataBytes(blockBufferRef, 0, length, sampleBytes);
            [sampleData appendBytes:sampleBytes length:length];
            
            CMSampleBufferInvalidate(sampleBuffer);
            CFRelease(sampleBuffer);
        }
    }
    
    if (assetReader.status == AVAssetReaderStatusCompleted) {
        return sampleData.copy;
    } else {
        NSLog(@"Failed to read audio sample from asset...");
        return nil;
    }
}
@end

2.2 Filtering Audio Samples

THSampleDataFilter filters the audio data

@implementation THSampleDataFilter
- (id)initWithData:(NSData *)sampleData {
    if (self = [super init]) {
        _sampleData = sampleData;
    }
    return self;
}

// The view size passed in from outside drives the filtering: in this example each horizontal point gets one bin of samples, and the largest sample in the bin determines the height in the vertical direction
- (NSArray *)filteredSamplesForSize:(CGSize)size {
    NSMutableArray *filteredSamples = [[NSMutableArray alloc] initWithCapacity:10];
    NSUInteger sampleCount = self.sampleData.length / sizeof(SInt16);
    NSUInteger binSize = sampleCount / (NSUInteger)size.width;
    if (binSize == 0) {
        binSize = 1;
    }
    
    SInt16 *bytes = (SInt16 *)self.sampleData.bytes;
    SInt16 maxSample = 0;
    for (NSUInteger i = 0; i < sampleCount; i += binSize) {
        // Clamp the last bin so it never reads past the end of the sample data
        NSUInteger currentBinSize = MIN(binSize, sampleCount - i);
        SInt16 sampleBin[currentBinSize];
        for (NSUInteger j = 0; j < currentBinSize; j++) {
            sampleBin[j] = CFSwapInt16LittleToHost(bytes[i + j]);
        }
        
        SInt16 value = [self maxValueInArray:sampleBin ofSize:currentBinSize];
        [filteredSamples addObject:@(value)];
        if (value > maxSample) {
            maxSample = value;
        }
    }
    
    CGFloat scaleFactor = (maxSample > 0) ? ((size.height / 2) / maxSample) : 0.0;
    for (NSUInteger i = 0; i < filteredSamples.count; i++) {
        filteredSamples[i] = @([filteredSamples[i] integerValue] * scaleFactor);
    }
    return filteredSamples.copy;
}

- (SInt16)maxValueInArray:(SInt16 *)values ofSize:(NSUInteger)size {
    SInt16 maxValue = 0;
    for (int i = 0; i < size; i++) {
        if (abs(values[i]) > maxValue) {
            maxValue = abs(values[i]);
        }
    }
    return maxValue;
}
@end
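The filter above keeps the per-bin maximum. The per-bin averaging approach mentioned in the overview is a small variation; the sketch below (a hypothetical addition to THSampleDataFilter, assuming the same 16-bit little-endian sample data) could replace maxValueInArray:ofSize: to produce a smoother, lower-amplitude waveform.

// Mean of the absolute sample values in a bin, as an alternative to maxValueInArray:ofSize:
- (SInt16)averageValueInArray:(SInt16 *)values ofSize:(NSUInteger)size {
    if (size == 0) {
        return 0;
    }
    NSInteger sum = 0;
    for (NSUInteger i = 0; i < size; i++) {
        sum += abs(values[i]);
    }
    return (SInt16)(sum / (NSInteger)size);
}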

2.3 Rendering the Filtered Audio Samples

THWaveformView renders the audio samples

@implementation THWaveformView
- (instancetype)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        [self setupView];
    }
    return self;
}

- (void)setupView {
    self.backgroundColor = [UIColor clearColor];
    self.waveColor = [UIColor whiteColor];
    self.layer.cornerRadius = 2.0f;
    self.layer.masksToBounds = YES;
    
    UIActivityIndicatorViewStyle style = UIActivityIndicatorViewStyleWhiteLarge;
    
    _loadingView =
        [[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle:style];
    
    CGSize size = _loadingView.frame.size;
    CGFloat x = (self.bounds.size.width - size.width) / 2;
    CGFloat y = (self.bounds.size.height - size.height) / 2;
    _loadingView.frame = CGRectMake(x, y, size.width, size.height);
    [self addSubview:_loadingView];
    
    [_loadingView startAnimating];
}

- (void)setWaveColor:(UIColor *)waveColor {
    _waveColor = waveColor;
    self.layer.borderWidth = 2.0f;
    self.layer.borderColor = waveColor.CGColor;
    [self setNeedsDisplay];
}

- (void)setAsset:(AVAsset *)asset {
    _asset = asset;
    [THSampleDataProvider loadAudioSamplesFromAsset:asset completionBlock:^(NSData *sampleData) {
        self.filter = [[THSampleDataFilter alloc] initWithData:sampleData];
        [self.loadingView stopAnimating];
        [self setNeedsDisplay];
    }];
}

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    // THWidthScaling / THHeightScaling are scaling constants (< 1.0) that inset the waveform from the view edges
    CGContextScaleCTM(context, THWidthScaling, THHeightScaling);
    CGFloat xOffset = self.bounds.size.width - (self.bounds.size.width * THWidthScaling);
    CGFloat yOffset = self.bounds.size.height - (self.bounds.size.height * THHeightScaling);
    CGContextTranslateCTM(context, xOffset/2, yOffset/2);
    
    NSArray *filteredSamples = [self.filter filteredSamplesForSize:self.bounds.size];
    CGFloat midY = CGRectGetMidY(rect);
    CGMutablePathRef halfPath = CGPathCreateMutable();
    CGPathMoveToPoint(halfPath, NULL, 0.0f, midY);
    for (NSUInteger i = 0; i < filteredSamples.count; i++) {
        float sample = [filteredSamples[i] floatValue];
        CGPathAddLineToPoint(halfPath, NULL, i, midY-sample);
    }
    CGPathAddLineToPoint(halfPath, NULL, filteredSamples.count, midY);
    
    CGMutablePathRef fullPath = CGPathCreateMutable();
    CGPathAddPath(fullPath, NULL, halfPath);
    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformTranslate(transform, 0, CGRectGetHeight(rect));
    transform = CGAffineTransformScale(transform, 1.0, -1.0);
    CGPathAddPath(fullPath, &transform, halfPath);
    
    CGContextAddPath(context, fullPath);
    CGContextSetFillColorWithColor(context, self.waveColor.CGColor);
    CGContextDrawPath(context, kCGPathFill);
    
    CGPathRelease(halfPath);
    CGPathRelease(fullPath);
}
@end

3 Advanced Video Capture with Data Outputs

When the capture session uses a data output, as discussed earlier, what you receive are CMSampleBuffer objects. Displaying them on screen requires Core Animation, and because the captured frames usually also need filter processing, some knowledge of OpenGL ES is required as well.

3.1 A Brief Look at Core Animation and OpenGL ES

On iOS, Core Animation ultimately renders layers on the GPU; a separate article will cover this. OpenGL ES is a cross-platform graphics API; it too will be covered elsewhere, and only the parts needed for this example are introduced here. GLKit provides two important UI classes, GLKViewController and GLKView. When the captured data needs complex 3D rendering, you can use GLKViewController directly as the root view controller of the page, give it a camera-management property, and convert the captured sample buffers into GL textures. When, as in this example, the frames only need light processing and display, you can use a GLKView as the recording preview view and render each sample buffer as a CIImage.

3.1.1 ContextManager

A GLKView must be initialized with an EAGLContext that performs its drawing, but to render CIImage objects into the GLKView the EAGLContext also has to be wrapped in a CIContext. A ContextManager singleton manages both contexts.

@interface THContextManager : NSObject
+ (instancetype)sharedInstance;

@property (strong, nonatomic, readonly) EAGLContext *eaglContext;
@property (strong, nonatomic, readonly) CIContext *ciContext;
@end

@implementation THContextManager
+ (instancetype)sharedInstance {
    static dispatch_once_t predicate;
    static THContextManager *instance = nil;
    dispatch_once(&predicate, ^{instance = [[self alloc] init];});
    return instance;
}

- (instancetype)init {
    if (self = [super init]) {
        _eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        // Passing NSNull disables color management, which would otherwise reduce performance. Color management only matters when color fidelity is required, and real-time apps usually do not care about it.
        NSDictionary *options = @{kCIContextWorkingColorSpace : [NSNull null]};
        _ciContext = [CIContext contextWithEAGLContext:_eaglContext options:options];
    }
    return self;
}
@end

3.1.2 Creating the CIFilter
@implementation THPhotoFilters
+ (NSArray *)filterNames {
    return @[@"CIPhotoEffectChrome"];
}

+ (NSArray *)filterDisplayNames {
    NSMutableArray *displayNames = [NSMutableArray array];
    for (NSString *filterName in [self filterNames]) {
        [displayNames addObject:[filterName stringByMatchingRegex:@"CIPhotoEffect(.*)" capture:1]];
    }
    return displayNames;
}

+ (CIFilter *)defaultFilter {
    return [CIFilter filterWithName:[[self filterNames] firstObject]];
}

+ (CIFilter *)filterForDisplayName:(NSString *)displayName {
    for (NSString *name in [self filterNames]) {
        if ([name containsString:displayName]) {
            return [CIFilter filterWithName:name];
        }
    }
    return nil;
}
@end
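stringByMatchingRegex:capture: is an NSString category helper that is not listed in this article; a minimal sketch of what such a category might look like, built on NSRegularExpression (an assumed implementation, not the original):

@interface NSString (THRegexAdditions)
- (NSString *)stringByMatchingRegex:(NSString *)pattern capture:(NSUInteger)capture;
@end

@implementation NSString (THRegexAdditions)
- (NSString *)stringByMatchingRegex:(NSString *)pattern capture:(NSUInteger)capture {
    NSRegularExpression *regex =
        [NSRegularExpression regularExpressionWithPattern:pattern options:0 error:nil];
    NSTextCheckingResult *result =
        [regex firstMatchInString:self options:0 range:NSMakeRange(0, self.length)];
    if (!result || capture >= result.numberOfRanges) {
        return nil;
    }
    // With the pattern @"CIPhotoEffect(.*)" and capture 1, @"CIPhotoEffectChrome" yields @"Chrome"
    NSRange captureRange = [result rangeAtIndex:capture];
    return captureRange.location == NSNotFound ? nil : [self substringWithRange:captureRange];
}
@end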

3.1.3 Rendering Images with the Filter and GLKView

CIFilter provides Core Image's built-in filters. The GLKView renders the CIImage produced by the CIFilter.

@interface THPreviewView : GLKView <THImageTarget>
@property (strong, nonatomic) CIFilter *filter;
@property (strong, nonatomic) CIContext *coreImageContext;
@end

@interface THPreviewView ()
@property (nonatomic) CGRect drawableBounds;
@end

@implementation THPreviewView
- (instancetype)initWithFrame:(CGRect)frame context:(EAGLContext *)context {
    if (self = [super initWithFrame:frame context:context]) {
        self.enableSetNeedsDisplay = NO;
        self.backgroundColor = [UIColor blackColor];
        self.opaque = YES;
        // because the native video image from the back camera is in UIDeviceOrientationLandscapeLeft (i.e. the home button is on the right),
        // we need to apply a clockwise 90 degree transform so that we can draw the video preview as if we were in a landscape-oriented view;
        // if you're using the front camera and you want to have a mirrored preview (so that the user is seeing themselves in the mirror), you
        // need to apply an additional horizontal flip (by concatenating CGAffineTransformMakeScale(-1.0, 1.0) to the rotation transform)
        self.transform = CGAffineTransformMakeRotation(M_PI_2);
        self.frame = frame;

        [self bindDrawable];
        _drawableBounds = self.bounds;
        _drawableBounds.size.width = self.drawableWidth;
        _drawableBounds.size.height = self.drawableHeight;
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(filterChanged:) name:THFilterSelectionChangedNotification
                                                   object:nil];
    }
    return self;
}

- (void)filterChanged:(NSNotification *)notification {
    self.filter = notification.object;
}

- (void)setImage:(CIImage *)sourceImage {
    [self bindDrawable];
    [self.filter setValue:sourceImage forKey:kCIInputImageKey];
    CIImage *filteredImage = self.filter.outputImage;
    if (filteredImage) {
        // Compute a centered crop rect that scales the image to the drawable bounds while preserving its aspect ratio
        CGRect cropRect =
            THCenterCropImageRect(sourceImage.extent, self.drawableBounds);
        [self.coreImageContext drawImage:filteredImage inRect:self.drawableBounds fromRect:cropRect];
    }
    [self display];
    [self.filter setValue:nil forKey:kCIInputImageKey];
}
@end
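THCenterCropImageRect is used in setImage: above but not listed; a sketch of an aspect-fill, center-crop implementation that matches the comment there (an assumed implementation):

// Returns the sub-rectangle of sourceRect that has the same aspect ratio as previewRect,
// centered inside sourceRect, so drawing it into previewRect fills the preview without distortion.
static CGRect THCenterCropImageRect(CGRect sourceRect, CGRect previewRect) {
    CGFloat sourceAspect = sourceRect.size.width / sourceRect.size.height;
    CGFloat previewAspect = previewRect.size.width / previewRect.size.height;
    CGRect cropRect = sourceRect;
    if (sourceAspect > previewAspect) {
        // Source is wider than the preview: keep the full height and crop the width
        cropRect.size.width = sourceRect.size.height * previewAspect;
        cropRect.origin.x += (sourceRect.size.width - cropRect.size.width) / 2.0;
    } else {
        // Source is taller than the preview: keep the full width and crop the height
        cropRect.size.height = sourceRect.size.width / previewAspect;
        cropRect.origin.y += (sourceRect.size.height - cropRect.size.height) / 2.0;
    }
    return cropRect;
}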

3.2 Capturing Video Processed with CIFilter

3.2.1 Configuring the Camera Controller
@interface THCameraController : THBaseCameraController
- (void)startRecording;
- (void)stopRecording;
@property (nonatomic, getter = isRecording) BOOL recording;
@property (weak, nonatomic) id <THImageTarget> imageTarget;
@end

@interface THCameraController () <AVCaptureVideoDataOutputSampleBufferDelegate,
                                  AVCaptureAudioDataOutputSampleBufferDelegate, THMovieWriterDelegate>
@property (strong, nonatomic) AVCaptureVideoDataOutput *videoDataOutput;
@property (strong, nonatomic) AVCaptureAudioDataOutput *audioDataOutput;
@property (strong, nonatomic) THMovieWriter *movieWriter;
@end

@implementation THCameraController
- (BOOL)setupSessionOutputs:(NSError **)error {
    self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    // Use 32BGRA when combining Core Image with OpenGL
    NSDictionary *outputSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
    self.videoDataOutput.videoSettings = outputSettings;
    self.videoDataOutput.alwaysDiscardsLateVideoFrames = NO;
    [self.videoDataOutput setSampleBufferDelegate:self queue:self.dispatchQueue];
    if ([self.captureSession canAddOutput:self.videoDataOutput]) {
        [self.captureSession addOutput:self.videoDataOutput];
    } else {
        return NO;
    }
    
    self.audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
    [self.audioDataOutput setSampleBufferDelegate:self queue:self.dispatchQueue];
    if ([self.captureSession canAddOutput:self.audioDataOutput]) {
        [self.captureSession addOutput:self.audioDataOutput];
    } else {
        return NO;
    }
    
    NSString *fileType = AVFileTypeQuickTimeMovie;
    NSDictionary *videoSettings = [self.videoDataOutput recommendedVideoSettingsForAssetWriterWithOutputFileType:fileType];
    NSDictionary *audioSettings = [self.audioDataOutput recommendedAudioSettingsForAssetWriterWithOutputFileType:fileType];
    self.movieWriter = [[THMovieWriter alloc] initWithVideoSettings:videoSettings audioSettings:audioSettings dispatchQueue:self.dispatchQueue];
    self.movieWriter.delegate = self;
    return YES;
}

- (NSString *)sessionPreset {
    return AVCaptureSessionPresetMedium;
}

- (void)startRecording {
    [self.movieWriter startWriting];
    self.recording = YES;
}

- (void)stopRecording {
    [self.movieWriter stopWriting];
    self.recording = NO;
}

#pragma mark - Delegate methods
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    [self.movieWriter processSampleBuffer:sampleBuffer];
    
    if (captureOutput == self.videoDataOutput) {
        CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CIImage *sourceImage = [CIImage imageWithCVImageBuffer:imageBuffer options:nil];
        [self.imageTarget setImage:sourceImage];
    }
}

- (void)didWriteMovieAtURL:(NSURL *)outputURL {
    NSError *error = nil;
    __block PHObjectPlaceholder *createdAsset = nil;
    [[PHPhotoLibrary sharedPhotoLibrary] performChangesAndWait:^{
        createdAsset = [PHAssetCreationRequest creationRequestForAssetFromVideoAtFileURL:outputURL].placeholderForCreatedAsset;
    } error:&error];
    if (error || !createdAsset) {
        [self.delegate assetLibraryWriteFailedWithError:error];
    }
}
@end

3.2.2 Writing the Processed Video File

Create a class that wraps AVAssetWriter

@protocol THMovieWriterDelegate <NSObject>
- (void)didWriteMovieAtURL:(NSURL *)outputURL;
@end

@interface THMovieWriter : NSObject
@property (nonatomic) BOOL isWriting;
@property (weak, nonatomic) id<THMovieWriterDelegate> delegate;

- (id)initWithVideoSettings:(NSDictionary *)videoSettings
              audioSettings:(NSDictionary *)audioSettings
              dispatchQueue:(dispatch_queue_t)dispatchQueue;
- (void)startWriting;
- (void)stopWriting;
- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer;
@end


// .m
static NSString *const THVideoFilename = @"movie.mov";
@interface THMovieWriter ()
@property (strong, nonatomic) AVAssetWriter *assetWriter;
@property (strong, nonatomic) AVAssetWriterInput *assetWriterVideoInput;
@property (strong, nonatomic) AVAssetWriterInput *assetWriterAudioInput;
@property (strong, nonatomic)
    AVAssetWriterInputPixelBufferAdaptor *assetWriterInputPixelBufferAdaptor;

@property (strong, nonatomic) dispatch_queue_t dispatchQueue;

@property (weak, nonatomic) CIContext *ciContext;
@property (nonatomic) CGColorSpaceRef colorSpace;
@property (strong, nonatomic) CIFilter *activeFilter;

@property (strong, nonatomic) NSDictionary *videoSettings;
@property (strong, nonatomic) NSDictionary *audioSettings;

@property (nonatomic) BOOL firstSample;
@end

Initializing THMovieWriter

- (id)initWithVideoSettings:(NSDictionary *)videoSettings
              audioSettings:(NSDictionary *)audioSettings
              dispatchQueue:(dispatch_queue_t)dispatchQueue {
    if (self = [super init]) {
        _videoSettings = videoSettings;
        _audioSettings = audioSettings;
        _dispatchQueue = dispatchQueue;
        
        _ciContext = [THContextManager sharedInstance].ciContext;
        _colorSpace = CGColorSpaceCreateDeviceRGB();
        _activeFilter = [THPhotoFilters defaultFilter];
        _firstSample = YES;
        
        NSNotificationCenter *nc = [NSNotificationCenter defaultCenter];
        [nc addObserver:self selector:@selector(filterChanged:) name:THFilterSelectionChangedNotification object:nil];
    }
    return self;
}

- (void)dealloc {
    CGColorSpaceRelease(_colorSpace);
    [[NSNotificationCenter defaultCenter] removeObserver:self];
}

- (void)filterChanged:(NSNotification *)notification {
    self.activeFilter = [notification.object copy];
}

Configuring the asset writer and its inputs

- (void)startWriting {
    dispatch_async(self.dispatchQueue, ^{
        NSError *error = nil;
        NSString *fileType = AVFileTypeQuickTimeMovie;
        self.assetWriter = [AVAssetWriter assetWriterWithURL:[self outputURL] fileType:fileType error:&error];
        if (!self.assetWriter || error) {
            NSString *formatString = @"Could not create AVAssetWriter: %@";
            NSLog(@"%@",[NSString stringWithFormat:formatString, error]);
            return;
        }
        
        self.assetWriterVideoInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:self.videoSettings];
        self.assetWriterVideoInput.expectsMediaDataInRealTime = YES;
        UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
        self.assetWriterVideoInput.transform = THTransformForDeviceOrientation(orientation);
        
        NSDictionary *attributes =
        @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
          (id)kCVPixelBufferWidthKey : self.videoSettings[AVVideoWidthKey],
          (id)kCVPixelBufferHeightKey : self.videoSettings[AVVideoHeightKey],
          (id)kCVPixelBufferOpenGLESCompatibilityKey : (id)kCFBooleanTrue
          };
        // These attributes must match the format produced by the AVCaptureVideoDataOutput. Because the frames are handed to OpenGL,
        // OpenGLESCompatibility is set to YES. Width and height may be omitted, in which case the input's dimensions are used. The pixel
        // format is 32BGRA because that is what the video data output is configured to emit in this example; the camera's default output
        // format (and the adaptor's default) is a biplanar 4:2:0 YpCbCr format.
        self.assetWriterInputPixelBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc] initWithAssetWriterInput:self.assetWriterVideoInput sourcePixelBufferAttributes:attributes];
        
        if ([self.assetWriter canAddInput:self.assetWriterVideoInput]) {
            [self.assetWriter addInput:self.assetWriterVideoInput];
        } else {
            NSLog(@"Unable to add video input.");
            return;
        }
        
        
        self.assetWriterAudioInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio outputSettings:self.audioSettings];
        self.assetWriterAudioInput.expectsMediaDataInRealTime = YES;
        if ([self.assetWriter canAddInput:self.assetWriterAudioInput]) {
            [self.assetWriter addInput:self.assetWriterAudioInput];
        } else {
            NSLog(@"Unable to add audio input.");
            return;
        }
        
        self.isWriting = YES;
        self.firstSample = YES;
    });
}
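THTransformForDeviceOrientation, used in startWriting above, is not listed either; a sketch that maps the device orientation at recording time to a rotation for the written video track (an assumed implementation that treats the back camera's native landscape orientation as the identity):

static CGAffineTransform THTransformForDeviceOrientation(UIDeviceOrientation orientation) {
    CGAffineTransform result;
    switch (orientation) {
        case UIDeviceOrientationLandscapeRight:
            // Rotated 180 degrees from the sensor's native orientation
            result = CGAffineTransformMakeRotation(M_PI);
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            result = CGAffineTransformMakeRotation(3 * M_PI_2);
            break;
        case UIDeviceOrientationPortrait:
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
            result = CGAffineTransformMakeRotation(M_PI_2);
            break;
        default:
            // Landscape with the home button on the right matches the sensor's native orientation
            result = CGAffineTransformIdentity;
            break;
    }
    return result;
}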

Processing sample buffers

- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    if (!self.isWriting) {
        return;
    }
    CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);
    CMMediaType mediaType = CMFormatDescriptionGetMediaType(formatDesc);
    if (mediaType == kCMMediaType_Video) {
        CMTime timeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        // Because the capture session adds the video data output first, the first sample to arrive is a video sample, so the writer session is started here
        if (self.firstSample) {
            if ([self.assetWriter startWriting]) {
                [self.assetWriter startSessionAtSourceTime:timeStamp];
            } else {
                NSLog(@"Falied to start writing");
            }
            self.firstSample = NO;
        }
        
        // Create an empty pixel buffer from the adaptor's pool to receive the filtered image
        CVPixelBufferRef outputReaderBuffer = NULL;
        CVPixelBufferPoolRef pixelBufferPool = self.assetWriterInputPixelBufferAdaptor.pixelBufferPool;
        CVReturn err = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &outputReaderBuffer);
        if (err) {
            NSLog(@"Unable to obtain a pixel buffer from the pool.");
            return;
        }
        
        CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:imageBuffer options:nil];
        [self.activeFilter setValue:sourceImage forKey:kCIInputImageKey];
        CIImage *filteredImage = self.activeFilter.outputImage;
        
        if (!filteredImage) {
            filteredImage = sourceImage;
        }
        [self.ciContext render:filteredImage toCVPixelBuffer:outputReaderBuffer bounds:filteredImage.extent colorSpace:self.colorSpace];
        if (self.assetWriterVideoInput.isReadyForMoreMediaData) {
            if (![self.assetWriterInputPixelBufferAdaptor appendPixelBuffer:outputReaderBuffer withPresentationTime:timeStamp]) {
                NSLog(@"Error appending pixel buffer.");
            }
        }
        CVPixelBufferRelease(outputReaderBuffer);
    } else if (!self.firstSample && mediaType == kCMMediaType_Audio) {
        if (self.assetWriterAudioInput.isReadyForMoreMediaData) {
            if (![self.assetWriterAudioInput appendSampleBuffer:sampleBuffer]) {
                NSLog(@"Error appending audio sample buffer");
            }
        }
    }
}

Finishing the write

- (void)stopWriting {
    self.isWriting = NO;
    dispatch_async(self.dispatchQueue, ^{
        [self.assetWriter finishWritingWithCompletionHandler:^{
            if (self.assetWriter.status == AVAssetWriterStatusCompleted) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    NSURL *fileURL = [self.assetWriter outputURL];
                    [self.delegate didWriteMovieAtURL:fileURL];
                });
            } else {
                NSLog(@"Failed to write movie: %@", self.assetWriter.error);
            }
        }];
    });
}

- (NSURL *)outputURL {
    NSString *filePath =
        [NSTemporaryDirectory() stringByAppendingPathComponent:THVideoFilename];
    NSURL *url = [NSURL fileURLWithPath:filePath];
    if ([[NSFileManager defaultManager] fileExistsAtPath:url.path]) {
        [[NSFileManager defaultManager] removeItemAtURL:url error:nil];
    }
    return url;
}

This section only scratches the surface of outputting video with AVCaptureVideoDataOutput and AVCaptureAudioDataOutput. Apple's Developer Center has more resources: StopNGo for iOS demonstrates a stop-motion camera; the Sample Photo Editing Extension shows how to build an iOS 8 photo editing extension; Technical Note TN2310 covers working with timecode using AVAssetWriter and AVAssetReader; and Writing Subtitles to a Movie from the Command Line for OS X is another AVFoundation example.
