How to measure device distance from face with help of ARKit in iOS?

Problem description

I need to create an application in which we want to measure the distance between the device and the user's face. I think it's possible with ARKit, but I don't know how to do it. Is there any kind of sample or example?

Solution

If you are running an ARFaceTrackingConfiguration (only available on devices with a front-facing TrueDepth camera), there are at least two ways to achieve this (I think the second one is better).
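
For reference, a minimal sketch of starting face tracking could look like the following; the view controller and the sceneView outlet name are assumptions for illustration, not part of the original answer.

import ARKit
import UIKit

class FaceDistanceViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!   // assumed outlet name

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Face tracking is only supported on devices with a TrueDepth front camera.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        let configuration = ARFaceTrackingConfiguration()
        sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}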

First method

You can use the depthData of the IR camera:

yourARSceneView.session.currentFrame?.capturedDepthData?.depthDataMap

This will return a CVPixelBuffer of size 640x360 containing depth data for each pixel (basically the distance between the IR camera and the real objects in the world). You can access CVPixelBuffer data through available extensions like this one. The depth data are expressed in meters. Once you have the depth data, you will have to choose or detect which pixels are part of the user's face. Also be careful: "the depth-sensing camera provides data at a different frame rate than the color camera, so this property's value can also be nil if no depth data was captured at the same time as the current color image". For more information, see AVDepthData.
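
As a rough illustration of reading those values, here is a sketch that samples the depth at the centre of the buffer. The function and parameter names are placeholders, and the conversion to kCVPixelFormatType_DepthFloat32 is a precaution in case the native buffer is delivered as disparity.

import ARKit
import AVFoundation

// Returns the depth, in meters, at the centre pixel of the TrueDepth depth map,
// or nil if no depth data accompanies the current frame.
func centerDepthInMeters(in sceneView: ARSCNView) -> Float32? {
    guard let depthData = sceneView.session.currentFrame?.capturedDepthData else { return nil }
    // Normalise to 32-bit depth in case the buffer arrives as disparity.
    let depthMap = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32).depthDataMap

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }
    let width = CVPixelBufferGetWidth(depthMap)       // 640
    let height = CVPixelBufferGetHeight(depthMap)     // 360
    let bytesPerRow = CVPixelBufferGetBytesPerRow(depthMap)

    // Move to the middle row, then read the middle Float32 pixel of that row.
    let row = base.advanced(by: (height / 2) * bytesPerRow).assumingMemoryBound(to: Float32.self)
    return row[width / 2]
}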

Second method (recommended)

Another way to get the distance between the device and the user's face is to convert the position of the detected face into the camera's coordinate system. To do this, use the convertPosition method from SceneKit to switch coordinate spaces, from the face coordinate space to the camera coordinate space.

let positionInCameraSpace = theFaceNode.convertPosition(pointInFaceCoordinateSpace, to: yourARSceneView.pointOfView)

theFaceNode is the SCNNode created by ARKit that represents the user's face. The pointOfView property of your ARSCNView returns the node from which the scene is viewed, basically the camera. pointInFaceCoordinateSpace can be any vertex of the face mesh or simply the position of theFaceNode (which is the origin of the face coordinate system). Here, positionInCameraSpace is an SCNVector3 representing the position of the point you gave, in camera coordinate space. You can then get the distance between that point and the camera from the x, y, and z values of this SCNVector3 (expressed in meters).
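
Putting that together, a small sketch of the distance computation could look like this; faceNode would be the node ARKit hands you in the ARSCNViewDelegate callbacks, and the names used here are placeholders.

import ARKit

// Distance, in meters, between the camera and the origin of the face node.
func distanceFromCamera(to faceNode: SCNNode, in sceneView: ARSCNView) -> Float? {
    guard let cameraNode = sceneView.pointOfView else { return nil }
    // SCNVector3Zero is the origin of the face coordinate space, i.e. the face node itself.
    let p = faceNode.convertPosition(SCNVector3Zero, to: cameraNode)
    // Euclidean length of the vector from the camera to the face origin.
    return (p.x * p.x + p.y * p.y + p.z * p.z).squareRoot()
}

You could call something like this from the renderer(_:didUpdate:for:) delegate method each time the face anchor updates.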

I guess the second method is better as it looks more precise, and you can choose precisely which point of the face you want to measure. You can also use transforms as Rom4in said (I guess the convertPosition method uses transforms). Hope it helps, and I'm also curious to know if there are easier ways to achieve this.
