Original article:
https://developer.apple.com/documentation/realitykit/creating_3d_objects_from_photographs
Build virtual objects for use in your AR experiences.
Overview
To create a three-dimensional object from a series of photographs, submit the images to RealityKit with a PhotogrammetrySession, register to receive status updates, and then start the session. When the process completes, it produces a three-dimensional representation of the photographed object that you can use in your app or export to other software, such as Reality Composer.
For more information about capturing high-quality images for photogrammetry, see Capturing Photographs for RealityKit Object Capture.
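At a glance, the whole flow fits in the short sketch below; the folder path and output file name are placeholders, error handling is omitted, and the sections that follow walk through each step in detail.

import Foundation
import RealityKit

// Create a session over a folder of captured photographs.
let session = try PhotogrammetrySession(
    input: URL(fileURLWithPath: "/tmp/MyInputImages/"))

// Watch the asynchronous output stream for completion.
Task {
    for try await output in session.outputs {
        if case .processingComplete = output {
            print("Object creation finished.")
        }
    }
}

// Ask RealityKit to write a full-detail USDZ model.
try session.process(requests: [
    .modelFile(url: URL(fileURLWithPath: "MyObject.usdz"), detail: .full)
])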
Check for Device Support
RealityKit object capture is available only on Mac computers that meet the minimum requirements for performing object reconstruction, including at least 4 GB of memory, and that have a GPU with ray-tracing support. Before using any Object Capture APIs, check that the computer your code is running on meets these requirements, and proceed only if it does.
import Metal
// Checks to make sure at least one GPU meets the minimum requirements
// for object reconstruction. At least one GPU must be a "high power"
// device, which means it has at least 4 GB of RAM, provides
// barycentric coordinates to the fragment shader, and is running on an
// Apple silicon Mac or an Intel Mac with a discrete GPU.
private func supportsObjectReconstruction() -> Bool {
    for device in MTLCopyAllDevices() where
        !device.isLowPower &&
        device.areBarycentricCoordsSupported &&
        device.recommendedMaxWorkingSetSize >= UInt64(4e9) {
        return true
    }
    return false
}
// Returns `true` if at least one GPU has hardware support for ray tracing.
// The GPU that supports ray tracing need not be the same GPU that supports
// object reconstruction.
private func supportsRayTracing() -> Bool {
    for device in MTLCopyAllDevices() where device.supportsRaytracing {
        return true
    }
    return false
}
// Returns `true` if the current hardware supports Object Capture.
func supportsObjectCapture() -> Bool {
    return supportsObjectReconstruction() && supportsRayTracing()
}
func doObjectCapture() {
    guard supportsObjectCapture() else {
        print("Object capture not available")
        return
    }
    // ...
}
Create a Photogrammetry Session
Start by creating a PhotogrammetrySession.Request that specifies the output location for the generated USDZ file and the level of detail you want for the model. Next, create the PhotogrammetrySession object with a URL that points to the directory containing your images; you pass the request to the session later, when you start processing.
import Foundation
import RealityKit

let inputFolderUrl = URL(fileURLWithPath: "/tmp/MyInputImages/")
let url = URL(fileURLWithPath: "MyObject.usdz")
let request = PhotogrammetrySession.Request.modelFile(url: url,
                                                      detail: .full)
guard let session = try? PhotogrammetrySession(input: inputFolderUrl) else {
    return
}
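The .full detail level used above is one of several cases of PhotogrammetrySession.Request.Detail; .preview, .reduced, .medium, and .raw are also available. As a small sketch (the preview file name is a placeholder, not from the original sample), you can create an additional low-detail request and later submit it to the same session alongside the full-detail one:

// Hypothetical extra request: a fast, low-detail model for a quick check.
// process(requests:) accepts an array, so both requests can be submitted together.
let previewUrl = URL(fileURLWithPath: "MyObjectPreview.usdz")
let previewRequest = PhotogrammetrySession.Request.modelFile(url: previewUrl,
                                                             detail: .preview)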
Monitor for Updates and Start Creating
RealityKit uses an AsyncSequence of PhotogrammetrySession.Output objects to deliver status updates about the object-creation process in the background. To update your app's user interface, or to take other action in response to these status updates, create an asynchronous task and iterate over the outputs with a for-try-await loop.
let waiter = Task {
    do {
        for try await output in session.outputs {
            switch output {
            case .processingComplete:
                // RealityKit has processed all requests.
                break
            case .requestError(let request, let error):
                // Request encountered an error.
                break
            case .requestComplete(let request, let result):
                // RealityKit has finished processing a request.
                break
            case .requestProgress(let request, let fractionComplete):
                // Periodic progress update. Update UI here.
                break
            case .inputComplete:
                // Ingestion of images is complete and processing begins.
                break
            case .invalidSample(let id, let reason):
                // RealityKit deemed a sample invalid and didn't use it.
                break
            case .skippedSample(let id):
                // RealityKit was unable to use a provided sample.
                break
            case .automaticDownsampling:
                // RealityKit downsampled the input images because of
                // resource constraints.
                break
            case .processingCancelled:
                // Processing was canceled.
                break
            @unknown default:
                // Unrecognized output.
                break
            }
        }
    } catch {
        print("Output: ERROR = \(String(describing: error))")
        // Handle error.
    }
}
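When a .requestComplete output arrives for a .modelFile request, the associated PhotogrammetrySession.Result carries the location of the finished file. The helper below is a sketch of how you might unpack it; the function name is illustrative, not part of the API.

import RealityKit

// Hypothetical helper: call it from the .requestComplete case above.
func handleRequestComplete(request: PhotogrammetrySession.Request,
                           result: PhotogrammetrySession.Result) {
    switch result {
    case .modelFile(let url):
        // A .modelFile request completes with the URL of the written USDZ file.
        print("Model for \(request) written to \(url)")
    default:
        // Other request types carry other payloads.
        break
    }
}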
Once you've created a session and registered to receive status updates, start the object-creation process by calling process(requests:). RealityKit processes the photos in the background and notifies your app when the process completes or fails.
try session.process(requests: [request])
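If you drive the session from a command-line tool rather than a long-running app, the process can exit before reconstruction finishes. The sketch below, which assumes the session and the waiter task from the earlier listings are in scope, keeps the tool alive until the output stream has been fully consumed.

import Foundation

// Sketch: keep a command-line tool alive while RealityKit works in the background.
// `waiter` is the Task created above that iterates session.outputs.
Task {
    _ = await waiter.result  // Wait until the output stream ends or fails.
    exit(0)                  // Then terminate the tool.
}
RunLoop.main.run()           // Block the main thread in the meantime.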
Compensate for Challenging Images
RealityKit's default photogrammetry settings work well for the vast majority of input images. If your image set has low contrast or lacks many identifiable landmarks, however, you can compensate by overriding the defaults: create a PhotogrammetrySession.Configuration object and pass it to the initializer when you create the PhotogrammetrySession.
You can also use a custom configuration to simplify the object-creation process, either by telling the PhotogrammetrySession that you're providing images sequentially, with adjacent images next to each other, or by controlling support for object masking, which masks out the portion of each image surrounding the object.
var config = PhotogrammetrySession.Configuration()
// Use slower, more sensitive landmark detection.
config.featureSensitivity = .high
// Adjacent images are next to each other.
config.sampleOrdering = .sequential
// Object masking is enabled.
config.isObjectMaskingEnabled = true
let session = try PhotogrammetrySession(input: inputFolderUrl,
                                        configuration: config)