iOS11 ARKit: Can ARKit also capture the Texture of the user's face?


Problem Description



I read through the entire documentation for all the ARKit classes, and I don't see anything that describes the ability to actually get the texture of the user's face.

ARFaceAnchor contains the ARFaceGeometry (a topology and geometry made up of vertices) and the BlendShapeLocation array (coefficients that allow manipulating individual facial traits by adjusting the geometry of the face's vertices).

But where can I get the actual texture of the user's face? For example: the actual skin tone / color / texture, facial hair, and other unique traits such as scars or birthmarks. Or is this not possible at all?

Solution

You want a texture-map-style image for the face? There’s no API that gets you exactly that, but all the information you need is there:

  • ARFrame.capturedImage gets you the camera image.
  • ARFaceGeometry gets you a 3D mesh of the face.
  • ARAnchor and ARCamera together tell you where the face is in relation to the camera, and how the camera relates to the image pixels. (A short sketch after this list shows where each of these comes from during a session.)
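As a quick illustration, here is a minimal sketch of where all three of these pieces show up together during a face-tracking session. The class name FaceTextureGrabber and the session-delegate setup are assumptions made for the example, not anything from the answer itself.

import ARKit

// Sketch: grab the camera image, the face mesh, and the camera for the
// current frame from an ARSessionDelegate callback.
class FaceTextureGrabber: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // ARFrame.capturedImage: the camera image for this frame.
        let cameraImage: CVPixelBuffer = frame.capturedImage

        // ARFaceGeometry: the 3D face mesh, if a face is currently tracked.
        guard let faceAnchor = frame.anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }
        let faceMesh: ARFaceGeometry = faceAnchor.geometry

        // The anchor's transform places the face in the tracked world;
        // ARCamera relates world points to image pixels.
        let faceTransform = faceAnchor.transform
        let camera: ARCamera = frame.camera

        _ = (cameraImage, faceMesh, faceTransform, camera) // feed these into the texturing math below
    }
}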

So it’s entirely possible to texture the face model using the current video frame image. For each vertex in the mesh...

  1. Convert the vertex position from model space to world space (use the anchor's transform)
  2. Project that point through the camera (its view and projection transforms) to get pixel coordinates in the image
  3. Divide by the image width/height to get normalized texture coordinates

This gets you texture coordinates for each vertex, which you can then use to texture the mesh using the camera image. You could do this math either all at once to replace the texture coordinate buffer ARFaceGeometry provides, or do it in shader code on the GPU during rendering. (If you’re rendering using SceneKit / ARSCNView you can probably do this in a shader modifier for the geometry entry point.)
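For illustration, here is a minimal CPU-side sketch of the "all at once" variant. The helper name is made up for this example, it leans on ARCamera's projectPoint(_:orientation:viewportSize:) to do the view/projection math in one call, and the landscape-right orientation is an assumption about the captured image's native orientation, so adjust it for your setup.

import ARKit
import UIKit

// Sketch (assumed helper, not an ARKit API): project each face-mesh vertex
// into the captured camera image and return normalized texture coordinates.
func faceTextureCoordinates(for faceAnchor: ARFaceAnchor, in frame: ARFrame) -> [SIMD2<Float>] {
    let camera = frame.camera

    // Size of capturedImage in pixels.
    let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                           height: CVPixelBufferGetHeight(frame.capturedImage))

    return faceAnchor.geometry.vertices.map { vertex in
        // Step 1: model (face) space -> world space via the anchor's transform.
        let world = faceAnchor.transform * SIMD4<Float>(vertex.x, vertex.y, vertex.z, 1)

        // Step 2: project the world point into image pixel coordinates.
        // (.landscapeRight is an assumption about the captured image's orientation.)
        let pixel = camera.projectPoint(SIMD3<Float>(world.x, world.y, world.z),
                                        orientation: .landscapeRight,
                                        viewportSize: imageSize)

        // Step 3: divide by the image size to get 0...1 texture coordinates.
        return SIMD2<Float>(Float(pixel.x / imageSize.width),
                            Float(pixel.y / imageSize.height))
    }
}

One way to use the result is to build a custom SCNGeometry from the face geometry's vertices and triangle indices together with an SCNGeometrySource(textureCoordinates:) made from these values.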

If instead you want to know for each pixel in the camera image what part of the face geometry it corresponds to, it’s a bit harder. You can’t just reverse the above math because you’re missing a depth value for each pixel... but if you don’t need to map every pixel, SceneKit hit testing is an easy way to get geometry for individual pixels.
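A minimal sketch of that hit-testing route, assuming sceneView is the ARSCNView rendering the face and point is a screen-space location (say, from a tap gesture):

import ARKit
import SceneKit

// Sketch: ask SceneKit which part of the rendered face mesh lies under a screen point.
func faceGeometryHit(at point: CGPoint, in sceneView: ARSCNView) -> SCNHitTestResult? {
    // SceneKit intersects a ray through `point` with the rendered geometry.
    let results = sceneView.hitTest(point, options: nil)
    return results.first
}

The returned SCNHitTestResult carries the node, faceIndex, local and world coordinates, and (via textureCoordinates(withMappingChannel:)) the mesh's texture coordinates at that point.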


If what you’re actually asking for is landmark recognition — e.g. where in the camera image are the eyes, nose, beard, etc — there’s no API in ARKit for that. The Vision framework might help.
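As a pointer in that direction, here is a rough sketch of Vision's face-landmarks request; the helper name is made up, and the .right orientation is an assumption about how a portrait ARKit frame maps into Vision's coordinate handling.

import Vision
import CoreVideo

// Sketch: detect facial landmarks (eyes, nose, lips, ...) in a camera image.
// `pixelBuffer` would typically be ARFrame.capturedImage.
func detectFaceLandmarks(in pixelBuffer: CVPixelBuffer) {
    let request = VNDetectFaceLandmarksRequest { request, _ in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // Landmark points are normalized to the face's bounding box.
            if let leftEye = face.landmarks?.leftEye {
                print("left eye points:", leftEye.normalizedPoints)
            }
        }
    }
    // .right is an assumed orientation for a portrait session.
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right, options: [:])
    try? handler.perform([request])
}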
