Swift: Get the TrueDepth camera parameters for face tracking in ARKit


Problem description

My goal:

I am trying to get the TrueDepth camera parameters (such as the intrinsics, extrinsics, lens distortion, etc.) while I am doing face tracking. I read that there are examples of doing this with OpenCV and that it is possible. I am just wondering how one should achieve similar goals in Swift.

What I have read and tried:

I read Apple's documentation about ARCamera: intrinsics, and about AVCameraCalibrationData: extrinsicMatrix and intrinsicMatrix.

However, all I found were the declarations for both AVCameraCalibrationData and ARCamera:

For AVCameraCalibrationData's intrinsicMatrix:

var intrinsicMatrix: matrix_float3x3 { get }

For extrinsicMatrix:

var extrinsicMatrix: matrix_float4x3 { get }

I also read this post, get Camera Calibration Data on iOS, and tried Bourne's suggestion:

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        let ex = photo.depthData?.cameraCalibrationData?.extrinsicMatrix
        //let ex = photo.cameraCalibrationData?.extrinsicMatrix
        let int = photo.cameraCalibrationData?.intrinsicMatrix
        photo.depthData?.cameraCalibrationData?.lensDistortionCenter
        print ("ExtrinsicM: \(String(describing: ex))")
        print("isCameraCalibrationDataDeliverySupported: \(output.isCameraCalibrationDataDeliverySupported)")
    }

But it does not print the matrices at all.

For ARCamera, I read from Andy Fedoroff's Focal Length of the camera used in RealityKit:

var intrinsics: simd_float3x3 { get }

func inst() {
    sceneView.pointOfView?.camera?.focalLength
    DispatchQueue.main.asyncAfter(deadline: .now() + 2.0) {
        print(" Focal Length: \(String(describing: self.sceneView.pointOfView?.camera?.focalLength))")
        print("Sensor Height: \(String(describing: self.sceneView.pointOfView?.camera?.sensorHeight))")
        // SENSOR HEIGHT IN mm
        let frame = self.sceneView.session.currentFrame
        // INTRINSICS MATRIX
        print("Intrinsics fx: \(String(describing: frame?.camera.intrinsics.columns.0.x))")
        print("Intrinsics fy: \(String(describing: frame?.camera.intrinsics.columns.1.y))")
        print("Intrinsics ox: \(String(describing: frame?.camera.intrinsics.columns.2.x))")
        print("Intrinsics oy: \(String(describing: frame?.camera.intrinsics.columns.2.y))")
    }
}

It shows the render camera's parameters:

Focal Length: Optional(20.784610748291016)
Sensor Height: Optional(24.0)
Intrinsics fx: Optional(1277.3052)
Intrinsics fy: Optional(1277.3052)
Intrinsics ox: Optional(720.29443)
Intrinsics oy: Optional(539.8974)

However, this only shows the render camera, not the TrueDepth camera that I am using for face tracking.

So can anyone help me get started with getting the TrueDepth camera parameters? The documentation does not really show any example beyond the declarations.

Thank you so much in advance!

Answer

The reason why you cannot print the intrinsics is probably that you got nil somewhere in the optional chaining. A quick check like the sketch below can tell you which link is nil, and Apple's documentation remarks, quoted after it, explain why that happens.
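This diagnostic is hypothetical (not part of the original post) and is meant to go inside the same photoOutput(_:didFinishProcessingPhoto:error:) callback shown above:

// Hypothetical diagnostic: find which link of the optional chain is nil.
if photo.depthData == nil {
    print("depthData is nil: depth delivery was not requested for this capture")
} else if photo.depthData?.cameraCalibrationData == nil {
    print("cameraCalibrationData is nil: calibration data was not delivered")
} else {
    print("Calibration data is available")
}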

Camera calibration data is present only if you specified the isCameraCalibrationDataDeliveryEnabled and isDualCameraDualPhotoDeliveryEnabled settings when requesting capture. For camera calibration data in a capture that includes depth data, see the AVDepthData cameraCalibrationData property.

To request capture of depth data alongside a photo (on supported devices), set the isDepthDataDeliveryEnabled property of your photo settings object to true when requesting photo capture. If you did not request depth data delivery, this property's value is nil.

So if you want to get the intrinsicMatrix and extrinsicMatrix of the TrueDepth camera, you should use builtInTrueDepthCamera as the input device, set isDepthDataDeliveryEnabled to true on the pipeline's photo output, and set isDepthDataDeliveryEnabled to true in the photo settings when you capture the photo. Then you can access the calibration matrices in the photoOutput(_:didFinishProcessingPhoto:error:) callback through the depthData.cameraCalibrationData property of the photo argument.

Here is a minimal sketch of how such a pipeline could be set up. It assumes the app already has camera permission (NSCameraUsageDescription), and the class and method names are illustrative rather than taken from any sample:
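import AVFoundation

// A minimal sketch, assuming camera permission is already granted.
final class TrueDepthCaptureController: NSObject, AVCapturePhotoCaptureDelegate {

    private let session = AVCaptureSession()
    private let photoOutput = AVCapturePhotoOutput()

    func configure() {
        session.beginConfiguration()
        session.sessionPreset = .photo

        // 1. Use the front TrueDepth camera as the input device.
        guard let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                   for: .video,
                                                   position: .front),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else {
            print("TrueDepth camera is not available on this device")
            session.commitConfiguration()
            return
        }
        session.addInput(input)

        // 2. Add the photo output and enable depth delivery on it.
        guard session.canAddOutput(photoOutput) else {
            session.commitConfiguration()
            return
        }
        session.addOutput(photoOutput)
        photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported

        session.commitConfiguration()
        session.startRunning()
    }

    func capture() {
        // 3. Request depth data delivery for this particular capture.
        let settings = AVCapturePhotoSettings()
        settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
        photoOutput.capturePhoto(with: settings, delegate: self)
    }

    // 4. The calibration data arrives with the depth data in the callback.
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard let calibration = photo.depthData?.cameraCalibrationData else {
            print("No calibration data; was depth delivery enabled end to end?")
            return
        }
        print("Intrinsic matrix: \(calibration.intrinsicMatrix)")
        print("Extrinsic matrix: \(calibration.extrinsicMatrix)")
        print("Lens distortion center: \(calibration.lensDistortionCenter)")
    }
}

Note that an AVCaptureSession takes exclusive ownership of the TrueDepth camera, so this pipeline cannot run at the same time as an ARKit face-tracking session. If you only need per-frame intrinsics while tracking, ARFrame.camera.intrinsics is the value ARKit itself exposes.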

