Swift: Get the TrueDepth camera parameters for face tracking in ARKit


My goal:

I am trying to get the TrueDepth camera parameters (such as the intrinsics, extrinsics, lens distortion, etc.) while I am doing face tracking. I have read that there are examples of doing this with OpenCV. I am just wondering how one should achieve similar goals in Swift.

What I have read and tried:

I read the Apple documentation about ARCamera: intrinsics and AVCameraCalibrationData: extrinsicMatrix and intrinsicMatrix.

However, all I found were just the declarations for both AVCameraCalibrationData and ARCamera:


For AVCameraCalibrationData


For intrinsicMatrix

var intrinsicMatrix: matrix_float3x3 { get }

For extrinsicMatrix

var extrinsicMatrix: matrix_float4x3 { get }

I also read this post: get Camera Calibration Data on iOS and tried Bourne's suggestion:

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        let ex = photo.depthData?.cameraCalibrationData?.extrinsicMatrix
        //let ex = photo.cameraCalibrationData?.extrinsicMatrix
        let int = photo.cameraCalibrationData?.intrinsicMatrix
        photo.depthData?.cameraCalibrationData?.lensDistortionCenter
        print ("ExtrinsicM: \(String(describing: ex))")
        print("isCameraCalibrationDataDeliverySupported: \(output.isCameraCalibrationDataDeliverySupported)")
    }

But it does not print the matrix at all.


For ARCamera, I have read Andy Fedoroff's Focal Length of the camera used in RealityKit:

var intrinsics: simd_float3x3 { get }
func inst (){
    sceneView.pointOfView?.camera?.focalLength
    DispatchQueue.main.asyncAfter(deadline: .now() + 2.0) {
        print(" Focal Length: \(String(describing: self.sceneView.pointOfView?.camera?.focalLength))")
        print("Sensor Height: \(String(describing: self.sceneView.pointOfView?.camera?.sensorHeight))")
        // SENSOR HEIGHT IN mm
        let frame = self.sceneView.session.currentFrame
        // INTRINSICS MATRIX
        print("Intrinsics fx: \(String(describing: frame?.camera.intrinsics.columns.0.x))")
        print("Intrinsics fy: \(String(describing: frame?.camera.intrinsics.columns.1.y))")
        print("Intrinsics ox: \(String(describing: frame?.camera.intrinsics.columns.2.x))")
        print("Intrinsics oy: \(String(describing: frame?.camera.intrinsics.columns.2.y))")
    }
}

It shows the render camera parameters:

Focal Length: Optional(20.784610748291016)
Sensor Height: Optional(24.0)
Intrinsics fx: Optional(1277.3052)
Intrinsics fy: Optional(1277.3052)
Intrinsics ox: Optional(720.29443)
Intrinsics oy: Optional(539.8974)

However, this only shows the render camera's parameters, not those of the TrueDepth camera that I am using for face tracking.


So can anyone help me get started with getting the TrueDepth camera parameters, since the documentation does not really show any examples beyond the declarations?

Thank you so much!

Solution

The reason you cannot print the intrinsics is probably that you got nil somewhere in the optional chain. Have a look at Apple's remarks here and here.
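As a quick way to see which link of that chain is nil, you can check each step explicitly. This is only a small diagnostic sketch; the helper name debugCalibrationAvailability is made up for illustration and is meant to be called from your photoOutput(_:didFinishProcessingPhoto:error:) delegate method:

func debugCalibrationAvailability(of photo: AVCapturePhoto) {
    guard let depthData = photo.depthData else {
        // Depth data is only attached when depth delivery was requested and supported.
        print("photo.depthData is nil: depth delivery was not requested or not supported")
        return
    }
    guard let calibration = depthData.cameraCalibrationData else {
        print("depthData.cameraCalibrationData is nil")
        return
    }
    print("Calibration data available, intrinsics:\n\(calibration.intrinsicMatrix)")
}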

Camera calibration data is present only if you specified the isCameraCalibrationDataDeliveryEnabled and isDualCameraDualPhotoDeliveryEnabled settings when requesting capture. For camera calibration data in a capture that includes depth data, see the AVDepthData cameraCalibrationData property.

To request capture of depth data alongside a photo (on supported devices), set the isDepthDataDeliveryEnabled property of your photo settings object to true when requesting photo capture. If you did not request depth data delivery, this property's value is nil.

So if you want to get the intrinsicMatrix and extrinsicMatrix of the TrueDepth camera, you should use builtInTrueDepthCamera as the input device, set isDepthDataDeliveryEnabled to true on the pipeline's photo output, and set isDepthDataDeliveryEnabled to true on the photo settings when you capture the photo. You can then access the calibration data in the photoOutput(_:didFinishProcessingPhoto:error:) callback through the depthData.cameraCalibrationData property of the photo argument.

Here's a code sample for setting up such a pipeline.
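In case that sample is not accessible, here is a minimal sketch of such a pipeline based on the steps above. It assumes camera permission has already been granted; the class name TrueDepthCaptureController and the methods configure() and capture() are illustrative, not from the linked sample:

import AVFoundation

final class TrueDepthCaptureController: NSObject, AVCapturePhotoCaptureDelegate {
    private let session = AVCaptureSession()
    private let photoOutput = AVCapturePhotoOutput()

    // Configure a capture session that uses the front TrueDepth camera
    // and enables depth data delivery on the photo output.
    func configure() {
        session.beginConfiguration()
        session.sessionPreset = .photo

        guard let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                   for: .video,
                                                   position: .front),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input),
              session.canAddOutput(photoOutput) else {
            session.commitConfiguration()
            return
        }
        session.addInput(input)
        session.addOutput(photoOutput)

        // Depth delivery must be enabled on the output before requesting it per capture.
        photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported

        session.commitConfiguration()
        session.startRunning()
    }

    // Request a photo together with depth data so the calibration data is attached.
    func capture() {
        let settings = AVCapturePhotoSettings()
        settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
        photoOutput.capturePhoto(with: settings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // The TrueDepth calibration data rides along with the depth data.
        guard let calibration = photo.depthData?.cameraCalibrationData else {
            print("No calibration data; check that depth delivery was enabled")
            return
        }
        print("Intrinsic matrix: \(calibration.intrinsicMatrix)")
        print("Extrinsic matrix: \(calibration.extrinsicMatrix)")
        print("Lens distortion center: \(calibration.lensDistortionCenter)")
    }
}

The key point is that isDepthDataDeliveryEnabled has to be set in two places: once on the photo output while configuring the session, and again on the AVCapturePhotoSettings for each individual capture.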
