iOS11 ARKit: Can ARKit also capture the Texture of the user's face?


Question

I read the whole documentation on all the ARKit classes, up and down. I don't see any place that describes the ability to actually get the texture of the user's face.

ARFaceAnchor contains the ARFaceGeometry (topology and geometry comprised of vertices) and the BlendShapeLocation coefficients (which allow manipulating individual facial traits by doing geometric math on the user face's vertices).

But where can I get the actual texture of the user's face? For example: the actual skin tone / color / texture, facial hair, or other unique traits such as scars or birthmarks? Or is this not possible at all?

Answer

You want a texture-map-style image for the face? There’s no API that gets you exactly that, but all the information you need is there (the sketch after the list below pulls the pieces together):

  • ARFrame.capturedImage gets you the camera image.
  • ARFaceGeometry gets you a 3D mesh of the face.
  • ARAnchor and ARCamera together tell you where the face is in relation to the camera, and how the camera relates to the image pixels.
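
Gathering those three pieces from a running session might look like the following minimal sketch; the function name and the `ARSCNView` setup are assumptions for illustration, not ARKit API:

```swift
import ARKit

// A sketch: gathers the camera image, face mesh, and camera from the
// current frame. Assumes `sceneView` is an ARSCNView whose session runs
// an ARFaceTrackingConfiguration.
func currentFaceData(from sceneView: ARSCNView) -> (CVPixelBuffer, ARFaceGeometry, ARCamera)? {
    guard let frame = sceneView.session.currentFrame,
          let faceAnchor = frame.anchors.compactMap({ $0 as? ARFaceAnchor }).first
    else { return nil }
    return (frame.capturedImage,  // the camera image
            faceAnchor.geometry,  // the 3D face mesh
            frame.camera)         // camera pose, intrinsics, projection
}
```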

So it’s entirely possible to texture the face model using the current video frame image. For each vertex in the mesh...

  1. Transform the vertex position from model space to camera space (using the anchor's transform)
  2. Multiply that vector by the camera projection to get normalized image coordinates
  3. Remap from normalized coordinates to pixel coordinates using the image width/height

This gets you texture coordinates for each vertex, which you can then use to texture the mesh using the camera image. You could do this math either all at once to replace the texture coordinate buffer ARFaceGeometry provides, or do it in shader code on the GPU during rendering. (If you’re rendering using SceneKit / ARSCNView you can probably do this in a shader modifier for the geometry entry point.)
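
A sketch of the buffer-replacement route, leaning on ARCamera's projectPoint(_:orientation:viewportSize:) to cover steps 2 and 3. The function name is hypothetical, and the orientation value is an assumption; the captured image's native orientation varies with how the device is held:

```swift
import ARKit
import simd

// A sketch: computes one texture coordinate per face-mesh vertex by
// projecting the vertex into the captured camera image.
func makeTextureCoordinates(for faceAnchor: ARFaceAnchor,
                            in frame: ARFrame) -> [simd_float2] {
    let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                           height: CVPixelBufferGetHeight(frame.capturedImage))
    let modelToWorld = faceAnchor.transform

    return faceAnchor.geometry.vertices.map { vertex in
        // Step 1: model space -> world space via the anchor's transform.
        let world = modelToWorld * simd_float4(vertex, 1)
        // Steps 2-3: ARCamera applies the projection and returns pixel
        // coordinates. .landscapeRight is an assumption about the captured
        // image's native orientation.
        let pixel = frame.camera.projectPoint(simd_float3(world.x, world.y, world.z),
                                              orientation: .landscapeRight,
                                              viewportSize: imageSize)
        // Normalize to [0, 1] so the values can replace the mesh's
        // texture coordinate buffer.
        return simd_float2(Float(pixel.x / imageSize.width),
                           Float(pixel.y / imageSize.height))
    }
}
```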

If instead you want to know for each pixel in the camera image what part of the face geometry it corresponds to, it’s a bit harder. You can’t just reverse the above math because you’re missing a depth value for each pixel... but if you don’t need to map every pixel, SceneKit hit testing is an easy way to get geometry for individual pixels.
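
For example, a hit-test sketch, assuming an ARSCNView named `sceneView` is rendering the face with ARSCNFaceGeometry:

```swift
import ARKit

// A sketch: asks SceneKit which part of the face mesh lies under a given
// screen point. Assumes the face is rendered via ARSCNFaceGeometry.
func meshLocation(under point: CGPoint, in sceneView: ARSCNView) {
    guard let hit = sceneView.hitTest(point, options: nil).first else { return }
    print("triangle index:", hit.faceIndex)          // which triangle was hit
    print("local position:", hit.localCoordinates)   // where on the mesh
    print("uv:", hit.textureCoordinates(withMappingChannel: 0))
}
```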

If what you’re actually asking for is landmark recognition — e.g. where in the camera image are the eyes, nose, beard, etc — there’s no API in ARKit for that. The Vision framework might help.
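
For what it's worth, a sketch of the Vision route run against the frame's camera image; the `.right` orientation is an assumption that depends on how the device is held:

```swift
import ARKit
import Vision

// A sketch: detects 2D face landmarks (eyes, nose, ...) in the captured image.
func detectLandmarks(in frame: ARFrame) {
    let request = VNDetectFaceLandmarksRequest { request, _ in
        guard let face = (request.results as? [VNFaceObservation])?.first,
              let landmarks = face.landmarks else { return }
        // Points are normalized relative to the face's bounding box:
        print("left eye:", landmarks.leftEye?.normalizedPoints ?? [])
        print("nose:", landmarks.nose?.normalizedPoints ?? [])
    }
    // .right is an assumption; pick the orientation that matches the frame.
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .right)
    try? handler.perform([request])
}
```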
