The AVFoundation framework provides a feature-rich set of classes to simplify the editing of audiovisual assets. At the heart of AVFoundation's editing API are compositions. A composition is simply a collection of tracks from one or more different media assets. The AVMutableComposition class provides an interface for inserting and removing tracks, as well as managing their temporal ordering. Figure 3-1 shows how a new composition is pieced together from existing assets to form a new asset. If all you want to do is merge multiple assets together sequentially into a single file, that is as much detail as you need. If you want to perform any custom audio or video processing on the tracks in your composition, you need to incorporate an audio mix or a video composition, respectively.
Figure 3-1 AVMutableComposition assembles assets together
Using the AVMutableAudioMix class, you can perform custom audio processing on the audio tracks of your composition, as shown in Figure 3-2. Currently, you can specify a maximum volume or set a volume ramp for an audio track.
Figure 3-2 AVMutableAudioMix performs audio mixing
You can use the AVMutableVideoComposition class to work directly with the video tracks in your composition for editing purposes, as shown in Figure 3-3. With a single video composition, you can specify the desired render size and scale, as well as the frame duration, for the output video. Through a video composition's instructions (represented by the AVMutableVideoCompositionInstruction class), you can modify the background color of your video and apply layer instructions. These layer instructions (represented by the AVMutableVideoCompositionLayerInstruction class) can be used to apply transforms, transform ramps, opacity, and opacity ramps to the video tracks within your composition. The video composition class also gives you the ability to introduce effects from the Core Animation framework into your video using the animationTool property.
Figure 3-3 AVMutableVideoComposition
To combine your composition with an audio mix and a video composition, you use an AVAssetExportSession object, as shown in Figure 3-4. You initialize the export session with your composition and then simply assign the audio mix and the video composition to its audioMix and videoComposition properties, respectively.
Figure 3-4 Use AVAssetExportSession to combine media elements into an output file
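For orientation, here is a minimal sketch of that wiring. It assumes a composition, audio mix, and video composition built as shown in the rest of this chapter, plus a hypothetical outputURL; it is not a complete export workflow (see the final section of this chapter for one):
// Sketch only: mutableComposition, mutableAudioMix, mutableVideoComposition, and outputURL are assumed to exist.
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
exportSession.audioMix = mutableAudioMix;
exportSession.videoComposition = mutableVideoComposition;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
exportSession.outputURL = outputURL;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    // Inspect exportSession.status here.
}];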
Creating a Composition
To create your own composition, you use the AVMutableComposition class. To add media data to your composition, you must add one or more composition tracks, represented by the AVMutableCompositionTrack class. The simplest case is creating a mutable composition with one video track and one audio track:
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
// Create the video composition track.
AVMutableCompositionTrack *mutableCompositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
// Create the audio composition track.
AVMutableCompositionTrack *mutableCompositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
Options for Initializing a Composition Track
When adding new tracks to a composition, you must provide both a media type and a track ID. Although audio and video are the most commonly used media types, you can specify other media types as well, such as AVMediaTypeSubtitle or AVMediaTypeText.
Every track associated with some audiovisual data has a unique identifier referred to as a track ID. If you specify kCMPersistentTrackID_Invalid as the preferred track ID, a unique identifier is automatically generated for you and associated with the track.
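As an illustration, the following sketch adds a subtitle track with an automatically generated ID and a second video track with an explicit preferred track ID (the value 100 is arbitrary, chosen for this example only):
// Add a subtitle track, letting AVFoundation choose the track ID.
AVMutableCompositionTrack *subtitleTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeSubtitle preferredTrackID:kCMPersistentTrackID_Invalid];
// Add another video track with an explicit preferred track ID (arbitrary example value).
AVMutableCompositionTrack *secondVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:100];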
Adding Audiovisual Data to a Composition
Once you have a composition with one or more tracks, you can begin adding your media data to the appropriate tracks. To add media data to a composition track, you need access to the AVAsset objects where the media data is located. You can use the mutable composition track interface to place multiple tracks with the same underlying media type together on the same composition track. The following example illustrates how to add two different video asset tracks in sequence to the same composition track:
// You can retrieve AVAssets from a number of places, like the camera roll for example.
AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAsset *anotherVideoAsset = <#another AVAsset with at least one video track#>;
// Get the first video track from each asset.
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *anotherVideoAssetTrack = [[anotherVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
// Add them both to the composition.
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAssetTrack.timeRange.duration) ofTrack:videoAssetTrack atTime:kCMTimeZero error:nil];
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, anotherVideoAssetTrack.timeRange.duration) ofTrack:anotherVideoAssetTrack atTime:videoAssetTrack.timeRange.duration error:nil];
Retrieving Compatible Composition Tracks
Where possible, you should have only one composition track for each media type. This unification of compatible asset tracks leads to minimal resource usage. When presenting media data serially, you should place any media data of the same type on the same composition track. You can query a mutable composition to find out whether there is a composition track compatible with your desired asset track:
AVMutableCompositionTrack *compatibleCompositionTrack = [mutableComposition mutableTrackCompatibleWithTrack:<#the AVAssetTrack you want to insert#>];
if (compatibleCompositionTrack) {
// Implementation continues.
}
Note: Placing multiple video segments on the same composition track can potentially lead to dropped frames at the transitions between video segments, especially on embedded devices. Choosing the number of composition tracks for your video segments depends entirely on the design of your app and its intended platform.
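A typical usage pattern, sketched here under the assumption that assetTrack is the AVAssetTrack you want to insert, falls back to creating a new composition track when no compatible one exists:
AVAssetTrack *assetTrack = <#the AVAssetTrack you want to insert#>;
AVMutableCompositionTrack *targetTrack = [mutableComposition mutableTrackCompatibleWithTrack:assetTrack];
if (!targetTrack) {
    // No compatible track exists yet, so create one for this media type.
    targetTrack = [mutableComposition addMutableTrackWithMediaType:assetTrack.mediaType preferredTrackID:kCMPersistentTrackID_Invalid];
}
// Append the asset track at the current end of the composition.
[targetTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetTrack.timeRange.duration) ofTrack:assetTrack atTime:mutableComposition.duration error:nil];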
Generating a Volume Ramp
A single AVMutableAudioMix object can perform custom audio processing on all of the audio tracks in your composition individually. You create an audio mix using the audioMix class method, and you use instances of the AVMutableAudioMixInputParameters class to associate the audio mix with specific tracks within your composition. An audio mix can be used to vary the volume of an audio track. The following example shows how to set a volume ramp on a specific audio track to slowly fade the audio out over the duration of the composition:
AVMutableAudioMix *mutableAudioMix = [AVMutableAudioMix audioMix];
// Create the audio mix input parameters object.
AVMutableAudioMixInputParameters *mixParameters = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:mutableCompositionAudioTrack];
// Set the volume ramp to slowly fade the audio out over the duration of the composition.
[mixParameters setVolumeRampFromStartVolume:1.f toEndVolume:0.f timeRange:CMTimeRangeMake(kCMTimeZero, mutableComposition.duration)];
// Attach the input parameters to the audio mix.
mutableAudioMix.inputParameters = @[mixParameters];
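Besides being handed to an export session, an audio mix can also be applied during playback. The following is only a sketch of that option, assuming the composition and audio mix built above:
// Preview the volume ramp during playback by assigning the audio mix to an AVPlayerItem.
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:mutableComposition];
playerItem.audioMix = mutableAudioMix;
AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
[player play];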
Performing Custom Video Processing
As with an audio mix, you only need one AVMutableVideoComposition object to perform all of your custom video processing on your composition's video tracks. Using a video composition, you can directly set the appropriate render size, scale, and frame rate for your composition's video tracks. For a detailed example of setting appropriate values for these properties, see Setting the Render Size and Frame Duration.
Changing the Composition's Background Color
All video compositions must also have an array of AVVideoCompositionInstruction objects containing at least one video composition instruction. You use the AVMutableVideoCompositionInstruction class to create your own video composition instructions. Using video composition instructions, you can modify the composition's background color, specify whether post processing is needed, or apply layer instructions.
The following example illustrates how to create a video composition instruction that changes the background color to red for the entire composition.
AVMutableVideoCompositionInstruction *mutableVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mutableVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration);
mutableVideoCompositionInstruction.backgroundColor = [[UIColor redColor] CGColor];
Applying Opacity Ramps
Video composition instructions can also be used to apply composition layer instructions. An AVMutableVideoCompositionLayerInstruction object can apply transforms, transform ramps, opacity, and opacity ramps to a given video track within the composition. The order of the layer instructions in a video composition instruction's layerInstructions array determines how video frames from the source tracks are layered and composed for the duration of that composition instruction. The following code fragment shows how to set an opacity ramp to slowly fade out the first video in a composition before transitioning to the second video:
AVAssetTrack *firstVideoAssetTrack = <#AVAssetTrack representing the first video segment played in the composition#>;
AVAssetTrack *secondVideoAssetTrack = <#AVAssetTrack representing the second video segment played in the composition#>;
// Create the first video composition instruction.
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
// Create the layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Create the opacity ramp to fade out the first video track over its entire duration.
[firstVideoLayerInstruction setOpacityRampFromStartOpacity:1.f toEndOpacity:0.f timeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration)];
// Create the second video composition instruction so that the second video track isn't transparent.
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the second video track.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
// Create the second layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Attach the first layer instruction to the first video composition instruction.
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
// Attach the second layer instruction to the second video composition instruction.
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
// Attach both of the video composition instructions to the video composition.
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
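Note that a video composition created this way still needs a render size and a frame duration before it can be used; the values below are placeholders, and Setting the Render Size and Frame Duration later in this chapter shows how to derive appropriate ones:
// Placeholder values only; compute real values from your source tracks.
mutableVideoComposition.renderSize = CGSizeMake(1280, 720);
mutableVideoComposition.frameDuration = CMTimeMake(1, 30); // 30 frames per second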
Incorporating Core Animation Effects
A video composition can also add the effects of Core Animation to your composition through the animationTool property. Through this animation tool, you can accomplish tasks such as watermarking video, adding titles, or animating overlays. Core Animation can be used in two different ways with video compositions: you can add a Core Animation layer as its own individual composition track, or you can render Core Animation effects (using a Core Animation layer) directly into the video frames in your composition. The following code shows the latter option by adding a watermark to the center of the video:
CALayer *watermarkLayer = <#CALayer representing your desired watermark image#>;
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
videoLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
[parentLayer addSublayer:videoLayer];
watermarkLayer.position = CGPointMake(mutableVideoComposition.renderSize.width/2, mutableVideoComposition.renderSize.height/4);
[parentLayer addSublayer:watermarkLayer];
mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
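Keep in mind that an animation tool set up this way is intended for offline rendering such as export; for real-time playback, synchronized Core Animation content is typically hosted in an AVSynchronizedLayer instead. A rough sketch of that alternative, assuming an existing playerItem and a hypothetical someView hosting the player:
// Playback alternative (sketch only): synchronize a layer tree with a player item.
AVSynchronizedLayer *syncLayer = [AVSynchronizedLayer synchronizedLayerWithPlayerItem:playerItem];
[syncLayer addSublayer:watermarkLayer];
[someView.layer addSublayer:syncLayer]; // someView is a hypothetical UIView hosting the player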
Putting It All Together: Combining Multiple Assets and Saving the Result to the Camera Roll
This brief code example illustrates how you can combine two video asset tracks and an audio asset track to create a single video file. It shows how to:
Create an AVMutableComposition object and add multiple AVMutableCompositionTrack objects
Add time ranges of AVAssetTrack objects to compatible composition tracks
Check the preferredTransform property of a video asset track to determine the video's orientation
Use AVMutableVideoCompositionLayerInstruction objects to apply transforms to the video tracks within a composition
Set appropriate values for the renderSize and frameDuration properties of a video composition
Use a composition in conjunction with a video composition when exporting to a video file
Save a video file to the Camera Roll
Note: To focus on the most relevant code, this example omits several aspects of a complete app, such as memory management and error handling. To use AVFoundation, you are expected to have enough experience with Cocoa to infer the missing pieces.
Creating the Composition
To piece together tracks from separate assets, you use an AVMutableComposition object. Create the composition and add one video track and one audio track.
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
Adding the Assets
An empty composition does you no good. Add the two video asset tracks and the audio asset track to the composition.
AVAssetTrack *firstVideoAssetTrack = [[firstVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *secondVideoAssetTrack = [[secondVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration) ofTrack:firstVideoAssetTrack atTime:kCMTimeZero error:nil];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondVideoAssetTrack.timeRange.duration) ofTrack:secondVideoAssetTrack atTime:firstVideoAssetTrack.timeRange.duration error:nil];
[audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeAdd(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration)) ofTrack:[[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0] atTime:kCMTimeZero error:nil];
Note: This assumes that you have two assets that each contain at least one video track and a third asset that contains at least one audio track. The videos can be retrieved from the Camera Roll, and the audio track can be retrieved from the Music library or from the videos themselves.
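One way to obtain such assets, assumed here purely for illustration, is to create them from file URLs:
// Hypothetical URLs for the source media; how you obtain them depends on your app.
NSURL *firstVideoURL = <#NSURL for the first video file#>;
NSURL *secondVideoURL = <#NSURL for the second video file#>;
NSURL *audioURL = <#NSURL for the audio file#>;
AVAsset *firstVideoAsset = [AVURLAsset URLAssetWithURL:firstVideoURL options:nil];
AVAsset *secondVideoAsset = [AVURLAsset URLAssetWithURL:secondVideoURL options:nil];
AVAsset *audioAsset = [AVURLAsset URLAssetWithURL:audioURL options:nil];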
Checking the Video Orientations
Once you have added your audio and video tracks to the composition, you need to ensure that the orientations of both video tracks are correct. By default, all video tracks are assumed to be in landscape mode. If a video track was shot in portrait mode, the video will not be oriented properly when it is exported. Likewise, if you try to combine a video shot in portrait mode with a video shot in landscape mode, the export session will fail to complete.
BOOL isFirstVideoAssetPortrait = NO;
CGAffineTransform firstTransform = firstVideoAssetTrack.preferredTransform;
// Check the first video track's preferred transform to determine if it was recorded in portrait mode.
if (firstTransform.a == 0 && firstTransform.d == 0 && (firstTransform.b == 1.0 || firstTransform.b == -1.0) && (firstTransform.c == 1.0 || firstTransform.c == -1.0)) {
isFirstVideoAssetPortrait = YES;
}
BOOL isSecondVideoAssetPortrait = NO;
CGAffineTransform secondTransform = secondVideoAssetTrack.preferredTransform;
// Check the second video track's preferred transform to determine if it was recorded in portrait mode.
if (secondTransform.a == 0 && secondTransform.d == 0 && (secondTransform.b == 1.0 || secondTransform.b == -1.0) && (secondTransform.c == 1.0 || secondTransform.c == -1.0)) {
isSecondVideoAssetPortrait = YES;
}
if ((isFirstVideoAssetPortrait && !isSecondVideoAssetPortrait) || (!isFirstVideoAssetPortrait && isSecondVideoAssetPortrait)) {
UIAlertView *incompatibleVideoOrientationAlert = [[UIAlertView alloc] initWithTitle:@"Error!" message:@"Cannot combine a video shot in portrait mode with a video shot in landscape mode." delegate:self cancelButtonTitle:@"Dismiss" otherButtonTitles:nil];
[incompatibleVideoOrientationAlert show];
return;
}
Applying the Video Composition Layer Instructions
Once you know the video segments have compatible orientations, you can apply the necessary layer instructions to each one and add these layer instructions to the video composition.
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the first instruction to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the second instruction to span the duration of the second video track.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the first layer instruction to the preferred transform of the first video track.
[firstVideoLayerInstruction setTransform:firstTransform atTime:kCMTimeZero];
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the second layer instruction to the preferred transform of the second video track.
[secondVideoLayerInstruction setTransform:secondTransform atTime:firstVideoAssetTrack.timeRange.duration];
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
All AVAssetTrack objects have a preferredTransform property that contains the orientation information for that asset track. This transform is applied whenever the asset track is displayed onscreen. In the previous code, the layer instruction's transform is set to the asset track's transform so that the video in the new composition displays properly once you adjust its render size.
Setting the Render Size and Frame Duration
To complete the video orientation fix, you must adjust the renderSize property accordingly. You should also pick a suitable value for the frameDuration property, such as 1/30th of a second (or 30 frames per second). By default, the renderScale property is set to 1.0, which is appropriate for this composition.
CGSize naturalSizeFirst, naturalSizeSecond;
// If the first video asset was shot in portrait mode, then so was the second one if we made it here.
if (isFirstVideoAssetPortrait) {
// Invert the width and height for the video tracks to ensure that they display properly.
naturalSizeFirst = CGSizeMake(firstVideoAssetTrack.naturalSize.height, firstVideoAssetTrack.naturalSize.width);
naturalSizeSecond = CGSizeMake(secondVideoAssetTrack.naturalSize.height, secondVideoAssetTrack.naturalSize.width);
}
else {
// If the videos weren't shot in portrait mode, we can just use their natural sizes.
naturalSizeFirst = firstVideoAssetTrack.naturalSize;
naturalSizeSecond = secondVideoAssetTrack.naturalSize;
}
float renderWidth, renderHeight;
// Set the renderWidth and renderHeight to the max of the two videos widths and heights.
if (naturalSizeFirst.width > naturalSizeSecond.width) {
renderWidth = naturalSizeFirst.width;
}
else {
renderWidth = naturalSizeSecond.width;
}
if (naturalSizeFirst.height > naturalSizeSecond.height) {
renderHeight = naturalSizeFirst.height;
}
else {
renderHeight = naturalSizeSecond.height;
}
mutableVideoComposition.renderSize = CGSizeMake(renderWidth, renderHeight);
// Set the frame duration to an appropriate value (i.e. 30 frames per second for video).
mutableVideoComposition.frameDuration = CMTimeMake(1,30);
Exporting the Composition and Saving It to the Camera Roll
The final step in this process is to export the entire composition into a single video file and save that video to the Camera Roll. You use an AVAssetExportSession object to create the new video file, and you pass it the desired URL for the output file. You can then use the ALAssetsLibrary class to save the resulting video file to the Camera Roll.
// Create a static date formatter so we only have to initialize it once.
static NSDateFormatter *kDateFormatter;
if (!kDateFormatter) {
kDateFormatter = [[NSDateFormatter alloc] init];
kDateFormatter.dateStyle = NSDateFormatterMediumStyle;
kDateFormatter.timeStyle = NSDateFormatterShortStyle;
}
// Create the export session with the composition and set the preset to the highest quality.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
// Set the desired output URL for the file created by the export process.
exporter.outputURL = [[[[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil] URLByAppendingPathComponent:[kDateFormatter stringFromDate:[NSDate date]]] URLByAppendingPathExtension:CFBridgingRelease(UTTypeCopyPreferredTagWithClass((CFStringRef)AVFileTypeQuickTimeMovie, kUTTagClassFilenameExtension))];
// Set the output file type to be a QuickTime movie.
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
exporter.videoComposition = mutableVideoComposition;
// Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
[exporter exportAsynchronouslyWithCompletionHandler:^{
dispatch_async(dispatch_get_main_queue(), ^{
if (exporter.status == AVAssetExportSessionStatusCompleted) {
ALAssetsLibrary *assetsLibrary = [[ALAssetsLibrary alloc] init];
if ([assetsLibrary videoAtPathIsCompatibleWithSavedPhotosAlbum:exporter.outputURL]) {
[assetsLibrary writeVideoAtPathToSavedPhotosAlbum:exporter.outputURL completionBlock:NULL];
}
}
});
}];
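The completion handler above only handles the success case; a minimal sketch of also surfacing failures (still omitting full error handling, as noted earlier) could be added inside the same handler:
// Inside the completion handler, alongside the success case (sketch only):
if (exporter.status == AVAssetExportSessionStatusFailed) {
    // The export failed; exporter.error describes what went wrong.
    NSLog(@"Export failed: %@", exporter.error);
}
else if (exporter.status == AVAssetExportSessionStatusCancelled) {
    NSLog(@"Export cancelled");
}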