超有梗 AVFoundation Summary

A good article to learn from:

AVFoundation Tutorial: Adding Overlays and Animations to Videos

Some applications of AVFoundation

Audio and Video Composition

The editing page was rebuilt for 超有梗 1.0, and I took the chance to clean up the old code. Recordings, music, and sound effects are now all treated the same way, as an "audio brick". You can download the app from the App Store to try the features yourself 😂

First, let's look at the model.

Detailed comments have been added throughout the code.

class MediaBrick: NSObject {
    
    // MARK: - There are 4 kinds of bricks: original video, recording, music, sound effect
    enum MediaType {
        case video
        case record
        case music
        case soundEffect
    }
    
    var type: MediaType!
    /// Earliest start time. Because a brick's range can be dragged, this is really the lower bound on its start
    var startTime: TimeInterval = 0
    // (1)
    /// Latest end time. Likewise, this is really the upper bound on its end
    var endTime: TimeInterval = 0
    /// Edited start time
    let modifiedStartTimeVarible = Variable<TimeInterval>(0)
    /// Edited end time
    let modifiedEndTimeVarible = Variable<TimeInterval>(0)
    
    /// Used for calculations; all of the times above are expressed in the video's timeline
    var videoDuration: TimeInterval = 0
    /// This is view state. Strictly it doesn't belong in the model, but keeping it here makes it easy to read and pass around
    var collectionViewContentWidth: CGFloat = 0
    
    /// Sandbox URL of the media file
    var fileUrl: URL?
    /// Volume of this piece of media
    var preferredVolume: Float = 1
    
    /// Used when type == .record
    var pitchType: PitchType = .original
    
    /// Used when type == .music; already trimmed
    var musicAsset: AVAsset?
    
    /// Used when type == .soundEffect
    let soundEffectIconUrlVariable = Variable<URL?>(nil)
    
    // MARK: - UI logic
    let isFoldVariable = Variable<Bool>(false)
    let isSelectedVariable = Variable<Bool>(false)
    let deleteSubject = PublishSubject<Void>()
    let beganModifyTimeSubject = PublishSubject<Void>()
    let endModifyTimeSubject = PublishSubject<Void>()
    
    /// Whether this brick needs to be composed
    var isNeedCompose: Bool = true
    
    deinit {
        print("\(description) deinit")
    }
    
    /// Returns a new object that copies only the 4 time values; used solely for calculations and UI handling
    func copy() -> MediaBrick {
        let mediaBrick = MediaBrick()
        mediaBrick.startTime = startTime
        mediaBrick.endTime = endTime
        mediaBrick.modifiedStartTimeVarible.value = modifiedStartTimeVarible.value
        mediaBrick.modifiedEndTimeVarible.value = modifiedEndTimeVarible.value
        return mediaBrick
    }
}

(1) Variable is an RxSwift type that also stores its current value, for example:

let modifiedStartTimeVarible = Variable<TimeInterval>(0)
modifiedStartTimeVarible.value = 1
print(modifiedStartTimeVarible.value)

The value can be read and written through modifiedStartTimeVarible.value.
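
Note: in newer RxSwift versions Variable has been deprecated in favor of BehaviorRelay. A minimal equivalent, assuming RxSwift 5 with RxRelay (not code from the original project):

import RxRelay

// BehaviorRelay also stores its current value; write with accept(_:), read with .value
let modifiedStartTimeRelay = BehaviorRelay<TimeInterval>(value: 0)
modifiedStartTimeRelay.accept(1)
print(modifiedStartTimeRelay.value)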

Composition

    // The main task is mixing audio onto the video, so the video is handled a little differently; the video model and the other audio models are passed in as separate parameters
    static func compose(videoBrick: MediaBrick, audioBricks: [MediaBrick]) -> (AVMutableComposition, AVMutableAudioMix)? {

        // This is the final composition object; a freshly created one is a blank canvas we are about to draw onto
        let composition = AVMutableComposition()
        // This controls the volume of the final composition. You might expect it to be a property of the composition, but iOS models it as a second, separate object
        let audioMix = AVMutableAudioMix()
        // Initialize the property to an empty array so parameters can be appended to it later
        audioMix.inputParameters = []
        
        // If there is no video file, log the failure and return nil
        guard let fileUrl = videoBrick.fileUrl else {
            logFail(mediaBrick: videoBrick)
            return nil
        }
        let videoAsset = AVAsset(url: fileUrl)
        
        // Time range covering the full length of the video
        let range = CMTimeRange(start: kCMTimeZero, duration: videoAsset.duration)
        
        // The newly created composition is empty, so start by adding the original video's video track
        // Get the asset's video track originVideoAssetTrack, then create a new video track originVideoCompotionTrack on the composition
        guard let originVideoAssetTrack = videoAsset.tracks(withMediaType: .video).first,
            let originVideoCompotionTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid) else {
            logFail(mediaBrick: videoBrick)
            return nil
        }
        do {
            // Fill originVideoCompotionTrack with the contents of originVideoAssetTrack
            try originVideoCompotionTrack.insertTimeRange(range, of: originVideoAssetTrack, at: kCMTimeZero)
        } catch {
            logFail(mediaBrick: videoBrick, error: error)
            return nil
        }
        // Video track added
        
        // Add the original video's audio tracks; there may be several. If there are none, log the failure and return nil
        let audioTracks = videoAsset.tracks(withMediaType: .audio)
        guard audioTracks.count != 0 else {
            logFail(mediaBrick: videoBrick)
            return nil
        }
        // Hold on to every originAudioCompositionTrack that gets created; later, the original audio has to be removed from these tracks wherever another audio track overlaps them
        var originAudioCompositionTracks: [AVMutableCompositionTrack] = []
        for originAudioAssetTrack in audioTracks {
            // Same logic inside the loop as above
            guard let originAudioCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: kCMPersistentTrackID_Invalid) else {
                logFail(mediaBrick: videoBrick)
                continue
            }
            do {
                try originAudioCompositionTrack.insertTimeRange(range, of: originAudioAssetTrack, at: kCMTimeZero)
                originAudioCompositionTracks.append(originAudioCompositionTrack)
            } catch {
                logFail(mediaBrick: videoBrick, error: error)
                continue
            }
        }
        
        // Preparation done: the composition now has the same video and audio tracks as the original video file
        
        // Now compose the recordings, music, and sound effects
        for audioBrick in audioBricks {
            
            var mediaAsset: AVAsset!
            switch audioBrick.type! {
            case .record:
                
                // Get the local recording file, convert it from PCM to AAC, and apply the pitch-shift effect
                guard let fileUrl = getAACFileUrl(recordBrick: audioBrick) else { continue }
                mediaAsset = AVAsset(url: fileUrl)
                
            case .music:

                // The music may already have been edited, so prefer the edited asset and fall back to the original music file
                if let asset = audioBrick.musicAsset {
                    mediaAsset = asset
                } else if let fileUrl = audioBrick.fileUrl {
                    mediaAsset = AVAsset(url: fileUrl)
                } else {
                    continue
                }
                
            case .soundEffect:
                
                // Get the local sound effect file
                guard let fileUrl = audioBrick.fileUrl else { continue }
                mediaAsset = AVAsset(url: fileUrl)
                
            default:
                continue
            }
            
            // Same overall logic as above: get the asset's audio tracks and add corresponding tracks to the composition
            for audioAssetTrack in mediaAsset.tracks(withMediaType: .audio) {
                guard let audioCompositionTrack = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid) else {
                    logFail(mediaBrick: audioBrick)
                    continue
                }
                
                // Then insert the asset's audio track into the composition track
                // The main difference for these audio files is the insertion time: the original tracks simply used the full range, whereas more range handling is needed here
                // Some range checks
                let modifiedStartTime = max(audioBrick.modifiedStartTimeVarible.value, 0)
                let modifiedEndTime = min(audioBrick.modifiedEndTimeVarible.value, videoAsset.duration.seconds)
                guard modifiedStartTime < modifiedEndTime else { continue }
                
                // Times relative to the audio file, i.e. times inside this audio asset
                // Edited time minus the earliest time gives the in-asset time; CMTime is used from here on
                let startTimeByAudio = CMTime(seconds: modifiedStartTime - audioBrick.startTime, preferredTimescale: audioAssetTrack.naturalTimeScale)
                // Total duration of this piece of audio
                let audioDuration = CMTime(seconds: modifiedEndTime - modifiedStartTime, preferredTimescale: audioAssetTrack.naturalTimeScale)
                // Build a CMTimeRange from the two values above
                let rangeByAudio = CMTimeRangeMake(startTimeByAudio, audioDuration)
                
                // Time relative to the video's timeline
                let startTimeByVideo = CMTime(seconds: modifiedStartTime, preferredTimescale: audioAssetTrack.naturalTimeScale)
                
                do {
                    // Fill audioCompositionTrack using the parameters prepared above
                    try audioCompositionTrack.insertTimeRange(rangeByAudio, of: audioAssetTrack, at: startTimeByVideo)
                } catch {
                    logFail(mediaBrick: audioBrick, error: error)
                    continue
                }
                
                // This sets the volume for this piece of audio
                let inputParameter = AVMutableAudioMixInputParameters(track: audioCompositionTrack)
                inputParameter.setVolume(audioBrick.preferredVolume, at: kCMTimeZero)
                audioMix.inputParameters.append(inputParameter)
                
                // For recordings and music, the original audio over the same range must be silenced, so remove that range from the original tracks
                if audioBrick.type! != .soundEffect {
                    // replace origin audio to empty
                    let removeRange = CMTimeRangeMake(startTimeByVideo, audioDuration)
                    originAudioCompositionTracks.forEach {
                        $0.removeTimeRange(removeRange)
                        $0.insertEmptyTimeRange(removeRange)
                    }
                }
            }
        }
        // The returned composition and audioMix are used with AVPlayer for playback
        return (composition, audioMix)
    }
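
For playback, here is a minimal usage sketch of how the returned pair would typically be wired into AVPlayer (my addition, not code from the post; the audioMix is applied on the AVPlayerItem):

// Assumes videoBrick and audioBricks already exist
if let (composition, audioMix) = compose(videoBrick: videoBrick, audioBricks: audioBricks) {
    let playerItem = AVPlayerItem(asset: composition)
    playerItem.audioMix = audioMix      // applies the per-track volumes set above
    let player = AVPlayer(playerItem: playerItem)
    player.play()
}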

Trimming

// Video trimming is supported. The first parameter is actually the composition produced by the compose method above, and the video model is needed to get the trim times
    static func crop(asset: AVMutableComposition, videoBrick: MediaBrick) -> (AVMutableComposition, AVMutableVideoComposition?)? {
        
        // Again, start from a new, empty composition
        let composition = AVMutableComposition()
        
        // Range check
        let startTime = videoBrick.modifiedStartTimeVarible.value
        let endTime = videoBrick.modifiedEndTimeVarible.value
        guard startTime < endTime else { return nil }
        
        // Similar to before: insert the asset's video track into a newly added video track on the composition
        guard let videoAssetTrack = asset.tracks(withMediaType: .video).first,
            let videoCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid) else {
            logFail(mediaBrick: videoBrick)
            return nil
        }
        // The difference is the range: use the trimmed range and the trimming is done
        let startCMTime = CMTime(seconds: startTime, preferredTimescale: videoAssetTrack.naturalTimeScale)
        let endCMTime = CMTime(seconds: endTime, preferredTimescale: videoAssetTrack.naturalTimeScale)
        let range = CMTimeRange(start: startCMTime, end: endCMTime)
        do {
            try videoCompositionTrack.insertTimeRange(range, of: videoAssetTrack, at: kCMTimeZero)
        } catch {
            logFail(mediaBrick: videoBrick, error: error)
            return nil
        }
        
        // This handles portrait video: if the orientation is wrong it needs to be corrected (video shot holding the phone upright comes out with the wrong orientation)
        // Treat the code below as boilerplate
        // (Strictly, every video-track insertion needs this code, but the videos used for composing here already have the correct orientation, and user-uploaded videos are cropped and corrected before they reach the editing page)
        var videoComposition: AVMutableVideoComposition?
        if videoAssetTrack.preferredTransform != .identity {
            
            let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoCompositionTrack)
            // (1)
            let transform = videoAssetTrack.ks.transform
            layerInstruction.setTransform(transform, at: startCMTime)
            
            let instruction = AVMutableVideoCompositionInstruction()
            instruction.timeRange = range
            instruction.layerInstructions = [layerInstruction]
            
            videoComposition = AVMutableVideoComposition()
            // (2)
            videoComposition!.renderSize = videoAssetTrack.ks.renderSize
            videoComposition!.frameDuration = CMTime(value: 1, timescale: 30)
            videoComposition!.instructions = [instruction]
        }
        
        // The logic below is similar to before: trim the audio tracks using the same range
        for audioAssetTrack in asset.tracks(withMediaType: .audio) {
            guard let audioCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: kCMPersistentTrackID_Invalid) else {
                logFail(mediaBrick: videoBrick)
                continue
            }
            let startCMTime = CMTime(seconds: startTime, preferredTimescale: audioAssetTrack.naturalTimeScale)
            let endCMTime = CMTime(seconds: endTime, preferredTimescale: audioAssetTrack.naturalTimeScale)
            let range = CMTimeRange(start: startCMTime, end: endCMTime)
            do {
                try audioCompositionTrack.insertTimeRange(range, of: audioAssetTrack, at: kCMTimeZero)
            } catch {
                logFail(mediaBrick: videoBrick, error: error)
                continue
            }
        }
        // The returned composition and videoComposition are used when exporting
        return (composition, videoComposition)
    }

(1)(2) Anything accessed through .ks. comes from my own extensions, shown below; they mainly adjust width and height according to the video's orientation.

extension Kuso where T: AVAssetTrack {
    
    var renderSize: CGSize {
        let preferredTransform = base.preferredTransform
        let width = floor(base.naturalSize.width)
        let height = floor(base.naturalSize.height)
        
        if preferredTransform.b != 0 {
            return CGSize(width: height, height: width)
        } else {
            return CGSize(width: width, height: height)
        }
    }
    
    var transform: CGAffineTransform {
        let preferredTransform = base.preferredTransform
        let width = floor(base.naturalSize.width)
        let height = floor(base.naturalSize.height)
        
        if preferredTransform.b == 1 { // home button on the left
            return CGAffineTransform(translationX: height, y: 0).rotated(by: CGFloat.pi/2)
        } else if preferredTransform.b == -1 { // home button on the right
            return CGAffineTransform(translationX: 0, y: width).rotated(by: CGFloat.pi/2 * 3)
        } else { // home button on top
            return CGAffineTransform(translationX: width, y: height).rotated(by: CGFloat.pi)
        }
    }
    
    var appropriateExportPreset: String {
        
        if renderSize.width <= 640 {
            return AVAssetExportPreset640x480
        } else if renderSize.width <= 960 {
            return AVAssetExportPreset960x540
        } else if renderSize.width <= 1280 {
            return AVAssetExportPreset1280x720
        } else {
            return AVAssetExportPreset1920x1080
        }
    }
}
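
The post never shows how the Kuso wrapper and the .ks accessor themselves are defined. They follow the same namespace pattern as RxSwift's .rx; a rough sketch of what such a wrapper usually looks like (an assumption, not the author's actual code):

// Hypothetical definition of the .ks namespace wrapper
struct Kuso<T> {
    let base: T
    init(_ base: T) { self.base = base }
}

protocol KusoCompatible {}
extension KusoCompatible {
    /// Instance accessor, e.g. someTrack.ks.renderSize
    var ks: Kuso<Self> { return Kuso(self) }
    /// Type accessor, e.g. AVAssetExportSession.ks.compatibleSession(...)
    static var ks: Kuso<Self>.Type { return Kuso<Self>.self }
}

extension AVAssetTrack: KusoCompatible {}
extension AVAssetExportSession: KusoCompatible {}
extension FileManager: KusoCompatible {}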

Export

    // After editing is finished, the in-memory composition, audioMix, and videoComposition need to be exported to the sandbox and stored for upload
    static func exportComposedVideo(composition: AVComposition, audioMix: AVAudioMix? = nil, videoComposition: AVVideoComposition? = nil) -> Observable<URL> {
        return Observable<URL>.create({ (observer) -> Disposable in
            
            // Pick an appropriate export resolution based on the video track's resolution
            let exportPreset = composition.ks.appropriateExportPreset
            
            // Get a compatible exportSession
            // (1)
            guard let exportSession = AVAssetExportSession.ks.compatibleSession(asset: composition, priorPresetName: exportPreset) else {
                return Disposables.create()
            }
            // Create a new video file path based on a timestamp
            let outputUrl = FileManager.ks.newEditVideoUrl
            
            // Configure the exportSession
            exportSession.audioMix = audioMix
            exportSession.videoComposition = videoComposition
            exportSession.outputFileType = .mp4
            exportSession.outputURL = outputUrl
            exportSession.shouldOptimizeForNetworkUse = true
            exportSession.exportAsynchronously { [weak exportSession] in
                guard let es = exportSession else {
                    return
                }
                switch es.status {
                case .completed:
                    // On success, emit the final url
                    observer.onNext(outputUrl)
                case .failed:
                    // On failure, emit the error
                    if let error = es.error {
                        logFail(error: error)
                        observer.onError(error)
                    }
                default:
                    break
                }
            }
            return Disposables.create {
                // If this observer is disposed, cancel the in-flight export as well
                exportSession.cancelExport()
            }
        })
            // A fairly arbitrary hop to an async scheduler; it doesn't matter much
            .observeOn(MainScheduler.asyncInstance)
    }

(1) This simply tries presets in order of resolution until a compatible AVAssetExportSession can be created

let defaultPresets = [AVAssetExportPreset1280x720, AVAssetExportPreset960x540, AVAssetExportPreset640x480, AVAssetExportPresetMediumQuality, AVAssetExportPresetLowQuality]

extension Kuso where T == AVAssetExportSession {
    
    static func compatibleSession(asset: AVAsset, priorPresetName: String) -> AVAssetExportSession? {
        
        if let es = T(asset: asset, presetName: priorPresetName) {
            return es
        } else {
            
            let compatiblePresets = T.exportPresets(compatibleWith: asset)
            for defaultPreset in defaultPresets {
                guard compatiblePresets.contains(defaultPreset) else {
                    continue
                }
                return T(asset: asset, presetName: defaultPreset)
            }
            return nil
        }
    }
}
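
Putting the pieces together, a minimal usage sketch of the export step (my assumption about the call site: composition, audioMix, and videoComposition come from the compose/crop steps above, and a DisposeBag is available):

exportComposedVideo(composition: composition,
                    audioMix: audioMix,
                    videoComposition: videoComposition)
    .subscribe(onNext: { url in
        // Sandbox url of the exported mp4, ready to be uploaded
        print("exported to \(url)")
    }, onError: { error in
        print("export failed: \(error)")
    })
    .disposed(by: disposeBag)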

Adding Watermarks, Text, and More


    static func addWatermark(fileUrl: URL) -> Observable<URL> {
        
        return Observable<URL>.create { (observer) -> Disposable in
            
            // As always, start from an empty composition
            let composition = AVMutableComposition()

            // The asset and the full video range
            let videoAsset = AVAsset(url: fileUrl)
            let range = CMTimeRange(start: kCMTimeZero, end: videoAsset.duration)
            
            // Get the asset's video track and create a new video track to add to the composition
            guard let videoAssetTrack = videoAsset.tracks(withMediaType: .video).first,
                let videoCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid) else {
                    return Disposables.create()
            }
            do {
                // Fill videoCompositionTrack with the contents of videoAssetTrack
                try videoCompositionTrack.insertTimeRange(range, of: videoAssetTrack, at: kCMTimeZero)
            } catch {
                observer.onError(error)
                return Disposables.create()
            }
            
            // Adding a watermark requires an AVMutableVideoCompositionLayerInstruction
            let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoCompositionTrack)
            // Correct the orientation if it is wrong
            if videoAssetTrack.preferredTransform != .identity {
                let transform = videoAssetTrack.ks.transform
                layerInstruction.setTransform(transform, at: kCMTimeZero)
            }
            // Boilerplate
            let instruction = AVMutableVideoCompositionInstruction()
            instruction.timeRange = range
            instruction.layerInstructions = [layerInstruction]
            
            let videoComposition = AVMutableVideoComposition()
            videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
            let renderSize = videoAssetTrack.ks.renderSize
            videoComposition.renderSize = renderSize
            videoComposition.instructions = [instruction]
            
            // The watermark hierarchy uses 3 layers: parentLayer at the bottom, videoLayer carrying the video, and a watermarkLayer carrying the watermark or other custom content such as text
            let parentLayer = CALayer()
            let videoLayer = CALayer()
            parentLayer.addSublayer(videoLayer)
            [parentLayer, videoLayer].forEach{
                $0.frame = CGRect(origin: .zero, size: renderSize)
            }
            // Boilerplate
            videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)
            // From bottom to top the layers are parentLayer, videoLayer, watermarkLayer; the order of the last two can be swapped, their sizes changed, and so on, as needed
            // The watermarkLayer created here already has some Core Animation attached, so the rendered watermark can move
            let watermarkLayer = self.createWatermarkLayer(parentSize: renderSize)
            parentLayer.addSublayer(watermarkLayer)
            
            // To cope with some very low-quality videos, lower the export preset; AVAssetExportPresetMediumQuality is highly compatible but produces a blurry video
            var exportPreset: String!
            let minFrameDuration = videoAssetTrack.minFrameDuration
            if minFrameDuration.seconds < 0.001 {
                exportPreset = AVAssetExportPresetMediumQuality
            } else {
                exportPreset = videoAssetTrack.ks.appropriateExportPreset
            }
            
            // Do the same as above for the audio tracks; they just need to be added
            for originAudioAssetTrack in videoAsset.tracks(withMediaType: .audio) {
                guard let audioCompositionTrack = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid) else {
                    continue
                }
                do {
                    try audioCompositionTrack.insertTimeRange(range, of: originAudioAssetTrack, at: kCMTimeZero)
                } catch {
                    observer.onError(error)
                    return Disposables.create()
                }
            }
            
            // Export to the sandbox
            guard let exportSession = AVAssetExportSession.ks.compatibleSession(asset: composition, priorPresetName: exportPreset) else {
                return Disposables.create()
            }
            // Create a new file in the watermark directory based on a timestamp
            let outputUrl = FileManager.ks.newWatermarkVideoUrl
            
            exportSession.videoComposition = videoComposition
            exportSession.outputFileType = .mp4
            exportSession.outputURL = outputUrl
            exportSession.shouldOptimizeForNetworkUse = true
            
            // exportSession exposes progress, but it can't be observed via KVO and there is no callback, so a timer is used to poll the progress
            let timer = Timer(timeInterval: 0.05, repeats: true, block: { [weak exportSession] (timer) in
                guard let es = exportSession else {
                    return
                }
                let progress = Double(es.progress) * 0.49 + 0.5
                self.progressHandler?(progress)
                if es.progress == 1 {
                    timer.invalidate()
                }
            })
            RunLoop.current.add(timer, forMode: RunLoopMode.commonModes)
            timer.fire()
            
            exportSession.exportAsynchronously { [weak exportSession] in
                guard let es = exportSession else {
                    return
                }
                switch es.status {
                case .completed:
                    // On success, emit the final url
                    observer.onNext(outputUrl)
                case .failed:
                    // On failure, emit the error
                    if let error = es.error {
                        observer.onError(error)
                    }
                default:
                    break
                }
            }
            return Disposables.create {
                // If this operation is disposed, stop the timer and cancel the export
                timer.invalidate()
                exportSession.cancelExport()
            }
        }
    }
    
    /* Below: computing the layers' positions and sizes, and adding the animations */
    static func createWatermarkLayer(parentSize: CGSize) -> CALayer {
        // The coordinate origin is (0,0), with (+,+) toward the top-right corner
        let multiper = max(parentSize.width, parentSize.height)/1080 * 2.3
        
        let layerSize = CGSize(width: multiper * 95, height: multiper * 61)
        let layerStartPosition = CGPoint(x: layerSize.width/2, y: parentSize.height - layerSize.height/2)
        let layerEndPosition = CGPoint(x: parentSize.width - layerSize.width/2, y: layerSize.height/2)
        let layer = CALayer()
        layer.frame = CGRect(origin: .zero, size: layerSize)
        layer.position = layerStartPosition
        addPositionAnimation(layer: layer, startPosition: layerStartPosition, endPosition: layerEndPosition)
        
        let logoSize = CGSize(width: multiper * 90, height: multiper * 50)
        let logoPosition = CGPoint(x: logoSize.width/2, y: 11 * multiper + logoSize.height/2)
        let logoLayer = CALayer()
        logoLayer.frame = CGRect(origin: .zero, size: logoSize)
        logoLayer.position = logoPosition
        addContentsAnimation(layer: logoLayer)
        layer.addSublayer(logoLayer)
        
        let idSize = CGSize(width: layerSize.width, height: multiper * 16.5)
        let idPosition = CGPoint(x: idSize.width/2 - 11.5 * multiper, y: 5 * multiper + idSize.height/2)
        let idLayer = CATextLayer()
        idLayer.frame = CGRect(origin: .zero, size: idSize)
        idLayer.position = idPosition
        idLayer.string = "ID: \(userId.description)"
        idLayer.foregroundColor = UIColor.white.cgColor
        idLayer.fontSize = 12 * multiper
        idLayer.font = CGFont.init(UIFont.boldSystemFont(ofSize: idLayer.fontSize).fontName as CFString)
        idLayer.alignmentMode = kCAAlignmentRight
        layer.addSublayer(idLayer)
        return layer
    }
    
    static func addPositionAnimation(layer: CALayer, startPosition: CGPoint, endPosition: CGPoint) {
        let keyframe = CAKeyframeAnimation(keyPath: "position")
        keyframe.values = [startPosition, endPosition]
        keyframe.duration = 10
        keyframe.isRemovedOnCompletion = false
        keyframe.fillMode = kCAFillModeForwards
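        // beginTime must be AVCoreAnimationBeginTimeAtZero rather than 0:
        // in Core Animation a beginTime of 0 means "begin now", which would break timing inside a video composition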
        keyframe.beginTime = AVCoreAnimationBeginTimeAtZero
        keyframe.calculationMode = kCAAnimationDiscrete
        layer.add(keyframe, forKey: "position")
    }
    
    static func addContentsAnimation(layer: CALayer) {
        
        let imgs = (0...21).map { idx -> CGImage in
            let name = "wm_\(idx)"
            return UIImage(named: name)!.cgImage!
        }
        layer.contents = imgs.first
        
        let keyframe = CAKeyframeAnimation(keyPath: "contents")
        keyframe.duration = 1
        keyframe.values = imgs
        keyframe.repeatCount = .greatestFiniteMagnitude
        keyframe.isRemovedOnCompletion = false
        keyframe.beginTime = AVCoreAnimationBeginTimeAtZero
        layer.add(keyframe, forKey: "contents")
    }
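
Like the export above, addWatermark returns an Observable<URL>; a minimal usage sketch (again my assumption; fileUrl would be the url produced by the previous export, and a DisposeBag is available):

addWatermark(fileUrl: fileUrl)
    .subscribe(onNext: { url in
        // Final watermarked video in the sandbox
        print("watermarked video at \(url)")
    }, onError: { error in
        print("watermarking failed: \(error)")
    })
    .disposed(by: disposeBag)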