How to make a 3D model from AVDepthData?


Problem description

I'm interested in processing data from the TrueDepth Camera. I need to obtain a person's face data, build a 3D model of the face, and save that model to an .obj file.

Since the 3D model needs to include the person's eyes and teeth, ARKit / SceneKit is not suitable, because ARKit / SceneKit does not fill those areas with data.

However, with the help of the SceneKit.ModelIO library, I managed to export ARSCNView.scene (of type SCNScene) in .obj format. I tried to use this project as a basis: https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/streaming_depth_data_from_the_truedepth_camera

In that project, the TrueDepth Camera data is handled with Metal, but if I'm not mistaken, an MTKView rendered with Metal is not a 3D model and cannot be exported as .obj.

Please tell me: is there a way to export an MTKView to an SCNScene, or directly to .obj? If there is no such method, how can I make a 3D model from AVDepthData?

Thank you.

Recommended answer

It's possible to make a 3D model from AVDepthData, but that probably isn't what you want. One depth buffer is just that: a 2D array of pixel distance-from-camera values. So the only "model" you're getting from that isn't very 3D; it's just a height map. That means you can't look at it from the side and see contours that you couldn't have seen from the front. (The "Using Depth Data" sample code attached to the WWDC 2017 talk on depth photography shows an example of this.)
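For illustration only, a minimal sketch of reading that 2D array out of an AVDepthData might look like this (the depthValues helper name and the choice to convert everything to 32-bit float depth are assumptions, not part of Apple's sample):

```swift
import AVFoundation
import CoreVideo

/// Illustrative only: pulls the raw per-pixel values (roughly, distance from
/// the camera in meters) out of an AVDepthData. It shows that the data is a
/// 2D height map, not a 3D mesh.
func depthValues(from depthData: AVDepthData) -> [[Float32]] {
    // Convert to 32-bit float depth so the pixel layout is known.
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let buffer = converted.depthDataMap

    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(buffer) else { return [] }
    let width = CVPixelBufferGetWidth(buffer)
    let height = CVPixelBufferGetHeight(buffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)

    // One row of Float32 values per scan line: a plain 2D array of distances.
    return (0..<height).map { y in
        let row = base.advanced(by: y * bytesPerRow)
            .assumingMemoryBound(to: Float32.self)
        return Array(UnsafeBufferPointer(start: row, count: width))
    }
}
```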

If you want more of a truly 3D "model", akin to what ARKit offers, you need to be doing the work that ARKit does: using multiple color and depth frames over time, along with a machine learning system trained to understand human faces (and hardware optimized for running that system quickly). You might not find doing that yourself to be a viable option...

It is possible to get an exportable model out of ARKit using Model I/O. The outline of the code you'd need goes something like this (a rough sketch in code follows the list):


1. Get an ARFaceGeometry from a face tracking session.

2. Create MDLMeshBuffers from the face geometry's vertices, textureCoordinates, and triangleIndices arrays. (Apple notes that the texture coordinate and triangle index arrays never change, so you only need to create those once; the vertices you have to update every time you get a new frame.)

3. Create an MDLSubmesh from the index buffer, and an MDLMesh from the submesh plus the vertex and texture coordinate buffers. (Optionally, use MDLMesh functions to generate a vertex normals buffer after creating the mesh.)

4. Create an empty MDLAsset and add the mesh to it.

5. Export the MDLAsset to a URL (providing a URL with the .obj file extension so that it infers the format you want to export).
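Putting those five steps together, a rough sketch in Swift might look like the following. This is only an illustration, not code from the linked sample: the exportFaceModel helper name, the data allocator choice, and the vertex-descriptor layout are assumptions.

```swift
import ARKit
import ModelIO

// Hypothetical helper: builds an exportable mesh from one ARFaceGeometry
// (e.g. faceAnchor.geometry from a face tracking session) and writes it to
// the given URL, which should end in .obj.
func exportFaceModel(_ geometry: ARFaceGeometry, to url: URL) throws {
    let allocator = MDLMeshBufferDataAllocator()

    // Step 2: wrap the vertex, texture-coordinate, and index arrays in MDLMeshBuffers.
    let vertexData = Data(bytes: geometry.vertices,
                          count: geometry.vertices.count * MemoryLayout<SIMD3<Float>>.stride)
    let texCoordData = Data(bytes: geometry.textureCoordinates,
                            count: geometry.textureCoordinates.count * MemoryLayout<SIMD2<Float>>.stride)
    let indexData = Data(bytes: geometry.triangleIndices,
                         count: geometry.triangleIndices.count * MemoryLayout<Int16>.stride)
    let vertexBuffer = allocator.newBuffer(with: vertexData, type: .vertex)
    let texCoordBuffer = allocator.newBuffer(with: texCoordData, type: .vertex)
    let indexBuffer = allocator.newBuffer(with: indexData, type: .index)

    // Step 3: a submesh from the index buffer, then a mesh from the submesh
    // plus the vertex and texture-coordinate buffers.
    let submesh = MDLSubmesh(indexBuffer: indexBuffer,
                             indexCount: geometry.triangleIndices.count,
                             indexType: .uInt16,
                             geometryType: .triangles,
                             material: nil)

    let descriptor = MDLVertexDescriptor()
    descriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                                  format: .float3, offset: 0, bufferIndex: 0)
    descriptor.attributes[1] = MDLVertexAttribute(name: MDLVertexAttributeTextureCoordinate,
                                                  format: .float2, offset: 0, bufferIndex: 1)
    descriptor.layouts[0] = MDLVertexBufferLayout(stride: MemoryLayout<SIMD3<Float>>.stride)
    descriptor.layouts[1] = MDLVertexBufferLayout(stride: MemoryLayout<SIMD2<Float>>.stride)

    let mesh = MDLMesh(vertexBuffers: [vertexBuffer, texCoordBuffer],
                       vertexCount: geometry.vertices.count,
                       descriptor: descriptor,
                       submeshes: [submesh])
    // Optional: generate vertex normals after the mesh exists.
    mesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0)

    // Steps 4-5: put the mesh in an asset and export it; the .obj file
    // extension on the URL tells Model I/O which format to write.
    let asset = MDLAsset()
    asset.add(mesh)
    try asset.export(to: url)
}
```

You'd then call it with the geometry from the current ARFaceAnchor and a writable file URL ending in .obj, for example from your session delegate once tracking has settled.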

That sequence doesn't require SceneKit (or Metal, or any ability to display the mesh) at all, which might prove useful depending on your needs. If you do want to involve SceneKit and Metal, you can probably skip a few steps (again, sketched below):


1. Create an ARSCNFaceGeometry on your Metal device and pass it an ARFaceGeometry from a face tracking session.

2. Use MDLMesh(scnGeometry:) to get a Model I/O representation of that geometry, then follow steps 4-5 above to export it to an .obj file.
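A hedged sketch of that shorter route, assuming you already have an ARFaceAnchor from the session and an MTLDevice (the function name and the error handling here are placeholders):

```swift
import ARKit
import Metal
import ModelIO
import SceneKit.ModelIO

// Hypothetical sketch of the SceneKit route: wrap the ARKit face data in an
// ARSCNFaceGeometry, bridge it to Model I/O, and export it as .obj.
func exportFaceModelViaSceneKit(_ anchor: ARFaceAnchor,
                                device: MTLDevice,
                                to url: URL) throws {
    // Step 1: ARSCNFaceGeometry on the Metal device, fed with the face geometry.
    guard let scnGeometry = ARSCNFaceGeometry(device: device) else {
        throw NSError(domain: "FaceExport", code: -1, userInfo: nil) // placeholder error
    }
    scnGeometry.update(from: anchor.geometry)

    // Step 2: Model I/O representation, then the same export as steps 4-5 above.
    let mesh = MDLMesh(scnGeometry: scnGeometry)
    let asset = MDLAsset()
    asset.add(mesh)
    try asset.export(to: url)
}
```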






Any way you slice it, though... if it's a strong requirement to model eyes and teeth, none of the Apple-provided options will help you, because none of them do that. So, some food for thought:


  • Consider whether that's really a strong requirement?
  • Replicate all of Apple's work to do your own face-model inference from color + depth image sequences?
  • Cheat on eye modeling using spheres centered according to the leftEyeTransform/rightEyeTransform reported by ARKit? (A small sketch of this follows the list.)
  • Cheat on teeth modeling using a pre-made model of teeth, composed with the ARKit-provided face geometry for display? (Articulate your inner-jaw model with a single open-shut joint and use ARKit's blendShapes[.jawOpen] to animate it alongside the face.)
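As an illustration of the last two suggestions (not an Apple recipe), an ARSCNViewDelegate could pin plain sphere nodes to the eye transforms ARKit reports, and read blendShapes[.jawOpen] to drive a jaw joint. The class name, sphere radius, and the commented-out jaw handling are made-up placeholders:

```swift
import ARKit
import SceneKit

// Illustrative only: attach two sphere nodes to the face node and keep them
// aligned with ARKit's reported eye transforms on every update.
final class FaceDelegate: NSObject, ARSCNViewDelegate {
    private let leftEye = SCNNode(geometry: SCNSphere(radius: 0.012))   // radius is a guess
    private let rightEye = SCNNode(geometry: SCNSphere(radius: 0.012))

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARFaceAnchor else { return }
        node.addChildNode(leftEye)
        node.addChildNode(rightEye)
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }
        // Eye transforms are relative to the face anchor, so they can drive
        // child nodes of the face node directly.
        leftEye.simdTransform = faceAnchor.leftEyeTransform
        rightEye.simdTransform = faceAnchor.rightEyeTransform

        // For a pre-made jaw model, blendShapes[.jawOpen] (0...1) could drive
        // a single open-shut joint on a hypothetical jawNode, e.g.:
        // let openness = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
    }
}
```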

