Difficulty getting depth of face landmark points from 2D regions on iPhone X (SceneKit/ARKit app)

Problem description

I'm running face landmark detection using the front-facing camera on iPhone X, and am trying very hard to get 3D points of face landmarks (VNFaceLandmarkRegion2D gives image coordinates X, Y only).

I've been trying to use either the ARSCNView.hitTest or ARFrame.hitTest, but am so far unsuccessful. I think my error may be in converting the initial landmark points to the correct coordinate system. I've tried quite a few permutations, but currently based on my research this is what I've come up with:

// Convert from face-bounds-relative landmark coordinates to normalized image coordinates
let point = CGPoint(x: landmarkPt.x * faceBounds.width + faceBounds.origin.x,
                    y: (1.0 - landmarkPt.y) * faceBounds.height + faceBounds.origin.y)
// Scale up to view (screen) coordinates
let screenPoint = CGPoint(x: point.x * view.bounds.width, y: point.y * view.bounds.height)
let results = frame.hitTest(screenPoint, types: ARHitTestResult.ResultType.featurePoint)

I also tried

let newPoint = CGPoint(x: point.x, y: 1.0 - point.y) 

after the conversion, but nothing seems to work. My frame.hitTest result is always empty. Am I missing anything in the conversion?

Does the front-facing camera add another level to this? (I also tried inverting the initial X value at one point, in case the coordinate system was being mirrored). It also seems to me that the initial landmark normalizedPoints are sometimes negative and also sometimes greater than 1.0, which doesn't make any sense to me. I'm using ARSession.currentFrame?.capturedImage to capture the frame of the front-facing camera, if that's important.
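
For reference, here is roughly where those landmarks come from. A minimal sketch, assuming a VNDetectFaceLandmarksRequest run against the captured pixel buffer; the function name and the .leftMirrored orientation (for a portrait front-facing capture) are assumptions, not code from the question:

import ARKit
import Vision

func detectLandmarks(in frame: ARFrame) {
    let request = VNDetectFaceLandmarksRequest { request, _ in
        guard let observations = request.results as? [VNFaceObservation] else { return }
        for observation in observations {
            // normalizedPoints are relative to the face bounding box, so values
            // slightly below 0.0 or above 1.0 can occur for landmarks near its edge
            _ = observation.landmarks?.allPoints?.normalizedPoints
        }
    }
    // Orientation is an assumption for a portrait, front-facing capture
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .leftMirrored)
    try? handler.perform([request])
}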

Any help would be very, very appreciated, thanks so much!

-- SOLVED --

For anyone with similar issues: I am finally getting hit test results!

// Hit-test each landmark point (in view coordinates) against the face node's geometry
if let points = observation.landmarks?.allPoints?.pointsInImage(imageSize: sceneView.bounds.size) {
    for point in points {
        let results = sceneView.hitTest(point, options: [SCNHitTestOption.rootNode: faceNode])
    }
}

I use the face geometry as an occlusion node attached to the face node.
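
A minimal sketch of that occlusion setup, assuming the standard ARSCNViewDelegate callbacks (the depth-only material and rendering order follow Apple's usual occlusion pattern; the exact values are assumptions, not the poster's code):

import ARKit
import SceneKit

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard anchor is ARFaceAnchor,
          let device = renderer.device,
          let faceGeometry = ARSCNFaceGeometry(device: device) else { return nil }
    // Write to the depth buffer only: the mesh occludes virtual content
    // while the camera image still shows through where the face is
    faceGeometry.firstMaterial?.colorBufferWriteMask = []
    let faceNode = SCNNode(geometry: faceGeometry)
    faceNode.renderingOrder = -1  // draw before other content
    return faceNode
}

// Keep the mesh in sync with the tracked face so hit tests stay accurate
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor,
          let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
    faceGeometry.update(from: faceAnchor.geometry)
}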

Thanks Rickster!

Recommended answer

You're using ARFaceTrackingConfiguration, correct? In that case, the featurePoint hit test type won't help you, because feature points are part of world tracking, not face tracking... in fact, just about all the ARKit hit testing machinery is specific to world tracking, and not useful to face tracking.

What you can do instead is make use of the face mesh (ARFaceGeometry) and face pose tracking (ARFaceAnchor) to work your way from a 2D image point to a 3D world-space (or camera-space) point. There are at least a couple of paths you could go down for that:

  1. If you're already using SceneKit, you can use SceneKit's hit testing instead of ARKit's. (That is, you're hit testing against "virtual" geometry modeled in SceneKit, not against a sparse estimate of the real-world environment modeled by ARKit. In this case, the "virtual" geometry of the face mesh comes into SceneKit via ARKit.) That is, you want ARSCNView.hitTest(_:options:) (inherited from SCNSceneRenderer), not hitTest(_:types:). Of course, this means you'll need to be using ARSCNFaceGeometry to visualize the face mesh in your scene, and ARSCNView's node/anchor mapping to make it track the face pose (though if you want the video image to show through, you can make the mesh transparent) — otherwise the SceneKit hit test won't have any SceneKit geometry to find.

  2. If you're not using SceneKit, or for some reason can't put the face mesh into your scene, you have all the information you need to reconstruct a hit test against the face mesh. ARCamera has view and projection matrices that tell you the relationship of your 2D screen projection to 3D world space, ARFaceAnchor tells you where the face is in world space, and ARFaceGeometry tells you where each point is on the face — so it's just a bunch of math to get from a screen point to a face-mesh point and vice versa.
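
A rough sketch of that second path, assuming portrait orientation and a brute-force nearest-vertex search (the function name and parameters are illustrative, not from the answer):

import ARKit
import simd

func nearestFaceVertex(to screenPoint: CGPoint,
                       faceAnchor: ARFaceAnchor,
                       camera: ARCamera,
                       viewportSize: CGSize) -> simd_float3? {
    var best: (vertex: simd_float3, distance: CGFloat)?
    for vertex in faceAnchor.geometry.vertices {
        // Face-local vertex -> world space via the anchor's transform
        let world4 = faceAnchor.transform * simd_float4(vertex, 1)
        let world = simd_float3(world4.x, world4.y, world4.z)
        // World space -> 2D view point via the camera projection
        let projected = camera.projectPoint(world,
                                            orientation: .portrait,
                                            viewportSize: viewportSize)
        let dx = projected.x - screenPoint.x
        let dy = projected.y - screenPoint.y
        let distance = dx * dx + dy * dy  // squared distance is enough to compare
        if best == nil || distance < best!.distance {
            best = (world, distance)
        }
    }
    return best?.vertex
}

The face mesh has on the order of a thousand vertices, so a brute-force search per landmark point is cheap; for an exact intersection you would instead cast a ray through the screen point using the camera's inverse view and projection matrices and intersect it with the mesh triangles.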
