This document uses a Metal compute shader to apply simple gamma correction to the frame currently captured by the camera, renders the result to the screen (an MTKView), and saves the rendered result as a UIImage. The last section briefly discusses how to configure dispatchThreadgroups for a Metal compute shader.
Document structure:
- Configure an AVCaptureSession to capture the current camera frame
- Initialize the compute shader environment
- Write the gamma correction shader
- Render the compute-shader-processed texture to the screen
- Read back the Metal rendering result and create a UIImage
- Discussion: reasonable dispatchThreadgroups settings for Metal compute shaders
1. Configure an AVCaptureSession to capture the current camera frame
For camera setup, refer to my earlier document iOS VideoToolbox硬编H.265(HEVC)H.264(AVC):1 概述. For simplicity, the camera is configured to output portrait-oriented BGRA frames (kCVPixelFormatType_32BGRA in the code below); a later document will implement YUV-to-RGB conversion in a Metal shader and then layer various filters on top. Reference code follows.
let device = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
let input = try? AVCaptureDeviceInput(device: device)
if session.canAddInput(input) {
session.addInput(input)
}
let output = AVCaptureVideoDataOutput()
output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable : kCVPixelFormatType_32BGRA]
output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "CamOutputQueue"))
if session.canAddOutput(output) {
session.addOutput(output)
}
if session.canSetSessionPreset(AVCaptureSessionPreset1920x1080) {
    session.sessionPreset = AVCaptureSessionPreset1920x1080
}
session.beginConfiguration()
for connection in output.connections as! [AVCaptureConnection] {
    for port in connection.inputPorts as! [AVCaptureInputPort] {
        if port.mediaType == AVMediaTypeVideo {
            videoConnection = connection
            break
        }
    }
    if videoConnection != nil {
        break
    }
}
if videoConnection?.isVideoOrientationSupported == true {
videoConnection?.videoOrientation = .portrait
}
session.commitConfiguration()
session.startRunning()
2. Initialize the compute shader environment
Core Video provides CVMetalTextureCache, an interface for creating Metal textures from pixel buffers, analogous to the texture-creation interface used with OpenGL ES. Beyond that, the usual Metal preparation (MTLLibrary, pipeline state, command queue) is required. Reference code follows.
var textureCache : CVMetalTextureCache?
var imageTexture: MTLTexture?
var commandQueue: MTLCommandQueue?
var library: MTLLibrary?
var pipeline: MTLComputePipelineState?
//------------
device = MTLCreateSystemDefaultDevice()
mtlView.device = device
mtlView.framebufferOnly = false
mtlView.clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 1)
library = device?.newDefaultLibrary()
guard let function = library?.makeFunction(name: "gamma_filter") else {
fatalError()
}
pipeline = try! device?.makeComputePipelineState(function: function)
commandQueue = device?.makeCommandQueue()
CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device!, nil, &textureCache)
Because the on-screen frame will be read back later, MTKView.framebufferOnly must be set to false; otherwise the drawable's texture can neither be written by a compute shader nor read back on the CPU.
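The document does not spell out how imageTexture is produced from each camera frame. Below is a minimal sketch, not part of the original code, assuming the textureCache and imageTexture properties above and the Swift 3 style AVCaptureVideoDataOutputSampleBufferDelegate signature of this document's era.
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    var cvTexture: CVMetalTexture?
    // Wrap the BGRA pixel buffer as a Metal texture; no pixel data is copied.
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                              textureCache!,
                                              pixelBuffer,
                                              nil,
                                              .bgra8Unorm,
                                              CVPixelBufferGetWidth(pixelBuffer),
                                              CVPixelBufferGetHeight(pixelBuffer),
                                              0,
                                              &cvTexture)
    if let cvTexture = cvTexture {
        imageTexture = CVMetalTextureGetTexture(cvTexture)
    }
}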
3. Write the gamma correction shader
inTexture is the frame currently captured by the camera; outTexture receives the processed result, which will be rendered to the screen.
#include <metal_stdlib>
using namespace metal;
kernel void gamma_filter(
texture2d<float, access::read> inTexture [[texture(0)]],
texture2d<float, access::write> outTexture [[texture(1)]],
uint2 gid [[thread_position_in_grid]])
{
float4 inColor = inTexture.read(gid);
    const float4 outColor = float4(pow(inColor.rgb, float3(0.4 /* gamma exponent; 0.4 ≈ 1/2.5, so this brightens the image */)), inColor.a);
outTexture.write(outColor, gid);
}
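Hard-coding the exponent means editing the shader to tune it. As a hypothetical variant (not part of the original code), the exponent could be passed in from the host: the kernel would take an extra parameter such as constant float &gammaExp [[buffer(0)]], and the encoder in section 4 would supply the value like this.
var gammaExp: Float = 0.4
// Small constant data can be passed inline without creating an MTLBuffer.
encoder.setBytes(&gammaExp, length: MemoryLayout<Float>.size, at: 0)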
4. Render the compute-shader-processed texture to the screen
Draw the texture processed by the compute shader to the screen in MTKViewDelegate's draw(in view: MTKView) method. Reference code follows.
guard let texture = imageTexture else {
return
}
guard let drawable = view.currentDrawable else {
return
}
guard let commandBuffer = commandQueue?.makeCommandBuffer() else {
return
}
let encoder = commandBuffer.makeComputeCommandEncoder()
encoder.setComputePipelineState(pipeline!)
encoder.setTexture(texture, at: 0)
encoder.setTexture(drawable.texture, at: 1)
let threads = MTLSize(width: 16, height: 16, depth: 1)
// Note: integer division truncates. For a 1920x1080 texture and 16x16 groups,
// 1080 / 16 = 67 threadgroups cover only 1072 rows, leaving the bottom 8 rows
// unprocessed; section 6 shows a better setup.
let threadgroups = MTLSize(width: texture.width / threads.width,
                           height: texture.height / threads.height,
                           depth: 1)
encoder.dispatchThreadgroups(threadgroups, threadsPerThreadgroup: threads)
encoder.endEncoding()
commandBuffer.present(drawable)
commandBuffer.commit()
The key line encoder.setTexture(drawable.texture, at: 1) directs the compute shader to write the gamma-corrected result into MTKView.currentDrawable.texture.
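For completeness, here is a sketch of the delegate conformance this section assumes; the ViewController name is hypothetical, and the view must have its delegate set (mtlView.delegate = self) during setup.
extension ViewController: MTKViewDelegate {
    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
        // Nothing to adjust for this demo.
    }

    func draw(in view: MTKView) {
        // Body as shown above.
    }
}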
5. Read back the Metal rendering result and create a UIImage
This is the counterpart of glReadPixels in OpenGL ES; byte order and the coordinate-system difference between UIKit and Metal textures both need attention. As shown in section 4, MTKView.currentDrawable.texture holds the current rendering result, so reading back the Metal result reduces to converting an MTLTexture into a UIImage, which can be done with Core Graphics. Reference code follows.
let image = currentDrawable?.texture.toUIImage()
To make this convenient for later development, add the conversion to MTLTexture as an extension.
public extension MTLTexture {
    func toUIImage() -> UIImage {
        let bytesPerPixel: Int = 4
        let imageByteCount = self.width * self.height * bytesPerPixel
        let bytesPerRow = self.width * bytesPerPixel
        var src = [UInt8](repeating: 0, count: imageByteCount)
        let region = MTLRegionMake2D(0, 0, self.width, self.height)
        self.getBytes(&src, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0)
        // byteOrder32Little + noneSkipFirst reads each 32-bit pixel as B, G, R, X in
        // memory, matching the drawable's bgra8Unorm layout (alpha is ignored).
        let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.noneSkipFirst.rawValue)
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitsPerComponent = 8
        let context = CGContext(data: &src, width: self.width, height: self.height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
        let dstImageFilter = context?.makeImage()
        // For this document, downMirrored is not required, since section 1 forces the
        // camera to output portrait-oriented frames.
        return UIImage(cgImage: dstImageFilter!, scale: 0.0, orientation: UIImageOrientation.downMirrored)
    }
}
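One caveat the code above does not show (my addition): getBytes returns whatever the texture contains at call time, so the command buffer that rendered the frame must have completed on the GPU before reading back. A minimal sketch, reusing commandBuffer and drawable from section 4; the handler must be installed before commit().
commandBuffer.addCompletedHandler { _ in
    // GPU work is done; the texture contents are now stable.
    let image = drawable.texture.toUIImage()
    DispatchQueue.main.async {
        // Display or save `image` here.
    }
}
commandBuffer.commit()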
6. Discussion: reasonable dispatchThreadgroups settings for compute shaders
Section 4 configured dispatchThreadgroups naively; what would a reasonable configuration be? Apple's documentation, Working with Threads and Threadgroups, suggests deriving the threadgroup size from the pipeline state, as in the code below. Note that because the threadgroup count is rounded up, the grid can extend past the texture edges, so the kernel should return early when gid lies outside the output texture.
// threadExecutionWidth: the number of threads the GPU executes in lockstep.
let w = pipeline!.threadExecutionWidth
// Fill each threadgroup up to the pipeline's maximum capacity.
let h = pipeline!.maxTotalThreadsPerThreadgroup / w
let threadsPerThreadgroup = MTLSizeMake(w, h, 1)
// Round up so the grid covers the whole texture, including partial edge groups.
let threadgroupsPerGrid = MTLSize(width: (texture.width + w - 1) / w,
                                  height: (texture.height + h - 1) / h,
                                  depth: 1)
With this configuration, processing a 1080p frame on an iPhone 7 Plus shows a slight drop in GPU time.
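As an aside beyond the original text: on GPUs that support non-uniform threadgroup sizes (Metal 2, A11 and later), dispatchThreads lets Metal trim the partial threadgroups at the edges automatically, removing both the round-up arithmetic and the in-kernel bounds check.
// Dispatch exactly one thread per pixel; Metal clips partial edge groups.
let threadsPerGrid = MTLSize(width: texture.width, height: texture.height, depth: 1)
encoder.dispatchThreads(threadsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup)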