Overview
- Audio/video capture consists of two parts:
    - Video capture
    - Audio capture
- In iOS development, video & audio can be captured at the same time, and the API is straightforward to use
- The relevant capture APIs are all encapsulated in the AVFoundation framework; import the framework and implement the functionality
Capture Steps
- PS: If you have done QR code scanning before, these steps will feel very familiar (they are very similar)
- Import the framework
    - The relevant APIs live mainly in the AVFoundation framework, so import it first
- Create a capture session (AVCaptureSession)
    - The session connects the input sources & output sources set up afterwards
    - Input sources: camera & microphone
    - Output sources: the outlets through which the audio & video data is obtained
    - Session: connects the input sources & output sources together
- Set up the video input & output
    - Input (AVCaptureDeviceInput): input from the camera
    - Output (AVCaptureVideoDataOutput): a delegate can be set, and the data arrives in the delegate method
    - Add the input & output to the session
- Set up the audio input & output
    - Input (AVCaptureDeviceInput): input from the microphone
    - Output (AVCaptureAudioDataOutput): a delegate can be set, and the data arrives in the delegate method
    - Add the input & output to the session
- Add a preview layer (optional)
    - If the user should see the captured picture, add a preview layer
    - The preview layer is not required; even without it, data can still be captured normally
- Start capturing
    - Call the session's (AVCaptureSession) startRunning method to start capturing
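- One practical note not covered in the list above: since iOS 10, camera & microphone access also requires NSCameraUsageDescription and NSMicrophoneUsageDescription entries in Info.plist, and authorization can be requested explicitly before starting the session. A minimal sketch against the same Swift 3 era API used in this article (the checkAuthorization helper name is only for illustration):
fileprivate func checkAuthorization() {
    // Ask for camera access
    AVCaptureDevice.requestAccess(forMediaType: AVMediaTypeVideo) { granted in
        print(granted ? "Camera access granted" : "Camera access denied")
    }
    // Ask for microphone access
    AVCaptureDevice.requestAccess(forMediaType: AVMediaTypeAudio) { granted in
        print(granted ? "Microphone access granted" : "Microphone access denied")
    }
}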
Implementation Code
- Overall step code
// 1. Create the capture session
let session = AVCaptureSession()
// 2. Set up the video input & output
setupVideoSource(session: session)
// 3. Set up the audio input & output
setupAudioSource(session: session)
// 4. Add the preview layer
setupPreviewLayer(session: session)
// 5. Start capturing
session.startRunning()
- Function one (set up the video input & output)
// Configure the video source (input & output) on the session
fileprivate func setupVideoSource(session : AVCaptureSession) {
    // 1. Create the input
    // 1.1. Get all video devices (front & back cameras)
    guard let devices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) as? [AVCaptureDevice] else { return }
    // 1.2. Pick out the front camera
    guard let device = devices.filter({ $0.position == .front }).first else { return }
    // 1.3. Create the input from the front camera
    guard let videoInput = try? AVCaptureDeviceInput(device: device) else { return }
    // 2. Create the output
    // 2.1. Create the video data output
    let videoOutput = AVCaptureVideoDataOutput()
    // 2.2. Set the delegate and the queue its methods run on (the captured data arrives in the delegate method)
    let queue = DispatchQueue.global()
    videoOutput.setSampleBufferDelegate(self, queue: queue)
    // 3. Add the input & output to the session
    // 3.1. Add the input
    if session.canAddInput(videoInput) {
        session.addInput(videoInput)
    }
    // 3.2. Add the output
    if session.canAddOutput(videoOutput) {
        session.addOutput(videoOutput)
    }
    // 4. Keep a reference to the video connection (videoConnect is a stored AVCaptureConnection? property on the controller)
    videoConnect = videoOutput.connection(withMediaType: AVMediaTypeVideo)
}
- Function two (set up the audio input & output)
// Configure the audio source (input & output) on the session
fileprivate func setupAudioSource(session : AVCaptureSession) {
    // 1. Create the input (default microphone)
    guard let device = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio) else { return }
    guard let audioInput = try? AVCaptureDeviceInput(device: device) else { return }
    // 2. Create the output
    let audioOutput = AVCaptureAudioDataOutput()
    let queue = DispatchQueue.global()
    audioOutput.setSampleBufferDelegate(self, queue: queue)
    // 3. Add the input & output to the session
    if session.canAddInput(audioInput) {
        session.addInput(audioInput)
    }
    if session.canAddOutput(audioOutput) {
        session.addOutput(audioOutput)
    }
}
- Add the preview layer
// Add the preview layer
fileprivate func setupPreviewLayer(session : AVCaptureSession) {
    // 1. Create the preview layer
    guard let previewLayer = AVCaptureVideoPreviewLayer(session: session) else { return }
    // 2. Configure the layer
    previewLayer.frame = view.bounds
    // 3. Add the layer to the view
    view.layer.insertSublayer(previewLayer, at: 0)
}
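- By default the preview keeps the video's aspect ratio, which can leave bars around the picture. If the preview should fill its bounds instead, the layer's videoGravity can be set; a small optional addition using the Swift 3 constant (not part of the original code):
// Optional: make the preview fill its bounds, cropping when aspect ratios differ
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill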
- Conform to the protocols and implement the delegate method
extension ViewController : AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate {
    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        if connection == videoConnect {
            print("Video data")
        } else {
            print("Audio data")
        }
    }
}
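- The delegate method above only distinguishes video from audio; to actually work with a frame, the pixel buffer has to be pulled out of the CMSampleBuffer. A minimal sketch of how that might look (the handleVideoFrame helper is an illustration, not part of the original code); it could be called from the video branch of the delegate method:
fileprivate func handleVideoFrame(_ sampleBuffer: CMSampleBuffer) {
    // Extract the pixel buffer that holds the raw frame
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    print("Got a video frame: \(width) x \(height)")
}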
Stopping Capture
- For example, when the user stops the live stream, we need to stop capturing (a sketch follows this list)
    - Remove the preview layer (once the stream is over, the preview layer is no longer needed)
    - Stop capturing (call the session's stopRunning method)
    - Set the session to nil (the object is no longer used, so clear the reference)
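- A minimal sketch of such a stop method, assuming the preview layer and the session are kept in optional previewLayer / session properties (the complete code at the end of this article stores them slightly differently):
fileprivate func stopCapturing() {
    // 1. Remove the preview layer (no preview is needed once the stream ends)
    previewLayer?.removeFromSuperlayer()
    previewLayer = nil
    // 2. Stop the session
    session?.stopRunning()
    // 3. Release the session
    session = nil
}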
Switching Cameras & Focus & Writing to a File
Switching Cameras (Front & Back)
- Switching steps
    - Add an animation to the switch
    - Determine whether the current camera is the front or the back one
    - Pick the opposite camera (if the front was in use, take the back one this time)
    - Get the device (AVCaptureDevice) for the new camera
    - Create a new input (AVCaptureDeviceInput) from that device
    - Remove the old input & add the new input
        - Note: call beginConfiguration before changing the session configuration, and commitConfiguration once the changes are done
        - session?.beginConfiguration()
        - session?.commitConfiguration()
    - Save the new input (so it can be removed on the next switch)
- The code is as follows
@IBAction func switchScene() {
    // 0. Run a flip animation on the view
    let rotationAnim = CATransition()
    rotationAnim.type = "oglFlip"
    rotationAnim.subtype = "fromLeft"
    rotationAnim.duration = 0.5
    view.layer.add(rotationAnim, forKey: nil)
    // 1. Make sure videoInput has a value
    guard let videoInput = videoInput else { return }
    // 2. Work out the opposite camera position
    let position : AVCaptureDevicePosition = videoInput.device.position == .front ? .back : .front
    // 3. Create the new input
    guard let devices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) as? [AVCaptureDevice] else { return }
    guard let newDevice = devices.filter({ $0.position == position }).first else { return }
    guard let newVideoInput = try? AVCaptureDeviceInput(device: newDevice) else { return }
    // 4. Remove the old input and add the new one inside a configuration block
    session?.beginConfiguration()
    session?.removeInput(videoInput)
    session?.addInput(newVideoInput)
    session?.commitConfiguration()
    // 5. Save the new input
    self.videoInput = newVideoInput
}
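- One possible refinement, not in the original code: addInput can fail if the session cannot accept the new device, so checking canAddInput inside the configuration block is slightly safer; a sketch:
session?.beginConfiguration()
session?.removeInput(videoInput)
if session?.canAddInput(newVideoInput) == true {
    session?.addInput(newVideoInput)
    self.videoInput = newVideoInput
} else {
    // Fall back to the old input if the new one cannot be added
    session?.addInput(videoInput)
}
session?.commitConfiguration()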
Writing to a File
- Steps for writing to a file
    - Create an AVCaptureMovieFileOutput object
        - It is used to write the audio & video to a file
    - Add the movieFileOutput object to the session's outputs
        - Writing to a file is just another kind of output
    - Set the video stabilization mode
        - Without it, the recorded video may skip frames
        - Setting it to automatic is usually enough
    - Start writing
    - When recording is done, stop writing
Code Walkthrough
- The code is as follows:
- Create, add, and configure the output:
// Add the file output
let movieFileoutput = AVCaptureMovieFileOutput()
self.movieFileOutput = movieFileoutput
session.addOutput(movieFileoutput)
// Get the video connection
let connection = movieFileoutput.connection(withMediaType: AVMediaTypeVideo)
// Set the video stabilization mode
connection?.preferredVideoStabilizationMode = .auto
// Start writing (outputFileURL is the destination file URL, e.g. a path in the Documents directory)
movieFileoutput.startRecording(toOutputFileURL: outputFileURL, recordingDelegate: self)
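- Not every device/format supports stabilization, so it may be worth guarding the assignment above; a hedged variant (isVideoStabilizationSupported is assumed to be the Swift 3 spelling of the AVCaptureConnection property):
// Only set the preferred mode when the connection actually supports stabilization
if connection?.isVideoStabilizationSupported == true {
    connection?.preferredVideoStabilizationMode = .auto
}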
- Code to stop writing
// Stop writing
self.movieFileOutput?.stopRecording()
- Listen for the start and finish events in the delegate methods
extension ViewController : AVCaptureFileOutputRecordingDelegate {
    func capture(_ captureOutput: AVCaptureFileOutput!, didStartRecordingToOutputFileAt fileURL: URL!, fromConnections connections: [Any]!) {
        print("Recording started")
    }
    func capture(_ captureOutput: AVCaptureFileOutput!, didFinishRecordingToOutputFileAt outputFileURL: URL!, fromConnections connections: [Any]!, error: Error!) {
        print("Recording stopped")
    }
}
Complete Code
import UIKit
import AVFoundation

class ViewController: UIViewController {
    fileprivate lazy var session : AVCaptureSession = AVCaptureSession()
    fileprivate var videoOutput : AVCaptureVideoDataOutput?
    fileprivate var videoInput : AVCaptureDeviceInput?
    fileprivate var movieOutput : AVCaptureMovieFileOutput?

    @IBAction func rotateCamera() {
        guard let videoInput = videoInput else { return }
        // 1. Work out the opposite camera position and create the new input
        let position : AVCaptureDevicePosition = videoInput.device.position == .front ? .back : .front
        guard let devices = (AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) as? [AVCaptureDevice]) else { return }
        guard let device = devices.filter({ $0.position == position }).first else { return }
        guard let newInput = try? AVCaptureDeviceInput(device: device) else { return }
        // 2. Swap the inputs inside a configuration block
        session.beginConfiguration()
        session.removeInput(videoInput)
        session.addInput(newInput)
        session.commitConfiguration()
        // 3. Remember the new input
        self.videoInput = newInput
    }
}
extension ViewController {
    @IBAction func startCapturing() {
        // 1. Set up the video input & output
        setupVideoInputOutput()
        // 2. Set up the audio input & output
        setupAudioInputOutput()
        // 3. Set up the preview layer
        setupPreviewLayer()
        // 4. Start capturing
        session.startRunning()
        // 5. Set up writing to a file
        setupMovieOutput()
    }

    @IBAction func stopCapturing() {
        movieOutput?.stopRecording()
        session.stopRunning()
    }

    fileprivate func setupVideoInputOutput() {
        // 1. Get the front camera device
        guard let devices = (AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) as? [AVCaptureDevice]) else { return }
        guard let device = devices.filter({ $0.position == .front }).first else { return }
        // 2. Create the input
        guard let input = try? AVCaptureDeviceInput(device: device) else { return }
        self.videoInput = input
        // 3. Create the output
        let output = AVCaptureVideoDataOutput()
        let queue = DispatchQueue.global()
        output.setSampleBufferDelegate(self, queue: queue)
        self.videoOutput = output
        // 4. Add the input & output to the session
        if session.canAddInput(input) {
            session.addInput(input)
        }
        if session.canAddOutput(output) {
            session.addOutput(output)
        }
    }

    fileprivate func setupAudioInputOutput() {
        // 1. Create the audio input (default microphone)
        guard let device = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio) else { return }
        guard let input = try? AVCaptureDeviceInput(device: device) else { return }
        // 2. Create the output
        let output = AVCaptureAudioDataOutput()
        let queue = DispatchQueue.global()
        output.setSampleBufferDelegate(self, queue: queue)
        // 3. Add the input & output to the session
        if session.canAddInput(input) {
            session.addInput(input)
        }
        if session.canAddOutput(output) {
            session.addOutput(output)
        }
    }

    fileprivate func setupPreviewLayer() {
        guard let previewLayer = AVCaptureVideoPreviewLayer(session: session) else { return }
        previewLayer.frame = view.bounds
        view.layer.insertSublayer(previewLayer, at: 0)
    }

    fileprivate func setupMovieOutput() {
        // 1. Create the movie file output
        let output = AVCaptureMovieFileOutput()
        self.movieOutput = output
        if session.canAddOutput(output) {
            session.addOutput(output)
        }
        // 2. Start recording to Documents/abc.mp4
        guard let path = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first else { return }
        let filePath = path + "/abc.mp4"
        let url = URL(fileURLWithPath: filePath)
        output.startRecording(toOutputFileURL: url, recordingDelegate: self)
    }
}
extension ViewController : AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate {
    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        if self.videoOutput?.connection(withMediaType: AVMediaTypeVideo) == connection {
            print("Captured video data")
        } else {
            print("Captured audio data")
        }
    }
}

extension ViewController : AVCaptureFileOutputRecordingDelegate {
    func capture(_ captureOutput: AVCaptureFileOutput!, didStartRecordingToOutputFileAt fileURL: URL!, fromConnections connections: [Any]!) {
        print("Recording started")
    }
    func capture(_ captureOutput: AVCaptureFileOutput!, didFinishRecordingToOutputFileAt outputFileURL: URL!, fromConnections connections: [Any]!, error: Error!) {
        print("Recording finished")
    }
}