How to make a 3D model from AVDepthData?

Problem Description

I'm interested in processing data from the TrueDepth camera. I need to obtain the data of a person's face, build a 3D model of the face, and save this model in an .obj file.

Since the 3D model needs to include the person's eyes and teeth, ARKit/SceneKit is not suitable, because ARKit/SceneKit does not fill these areas with data.

But with the help of the SceneKit.ModelIO library, I managed to export ARSCNView.scene (of type SCNScene) in the .obj format. I tried to take this project as a basis: https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/streaming_depth_data_from_the_truedepth_camera

In that project, the work with the TrueDepth camera is done using Metal, but if I'm not mistaken, an MTKView rendered using Metal is not a 3D model and cannot be exported as .obj.

Please tell me: is there a way to export an MTKView to an SCNScene, or directly to .obj? If there is no such method, how can I make a 3D model from AVDepthData?

Thanks.

Answer

It's possible to make a 3D model from AVDepthData, but it probably isn't what you want. One depth buffer is just that: a 2D array of pixel distance-from-camera values. So the only "model" you're getting from it isn't very 3D; it's just a height map. That means you can't look at it from the side and see contours you couldn't have seen from the front. (The "Using Depth Data" sample code attached to the WWDC 2017 talk on depth photography shows an example of this.)
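
To make the "2D array of distances" point concrete, here is a minimal sketch of reading per-pixel depth values out of an AVDepthData buffer. It assumes the data is first converted to 32-bit float format so the byte layout is predictable:

```swift
import AVFoundation

// Read every pixel's distance-from-camera value out of an AVDepthData
// buffer. The result is exactly the "height map" described above.
func depthValues(from depthData: AVDepthData) -> [[Float]] {
    // Convert to a known pixel format before touching raw memory.
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let map = converted.depthDataMap

    CVPixelBufferLockBaseAddress(map, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(map, .readOnly) }

    let width = CVPixelBufferGetWidth(map)
    let height = CVPixelBufferGetHeight(map)
    let rowBytes = CVPixelBufferGetBytesPerRow(map)
    guard let base = CVPixelBufferGetBaseAddress(map) else { return [] }

    // Copy each row of Float32 distances into a Swift array.
    var rows: [[Float]] = []
    for y in 0..<height {
        let row = (base + y * rowBytes).assumingMemoryBound(to: Float32.self)
        rows.append(Array(UnsafeBufferPointer(start: row, count: width)))
    }
    return rows
}
```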

If you want more of a truly 3D "model", akin to what ARKit offers, you need to do the work that ARKit does: combining multiple color and depth frames over time with a machine learning system trained to understand human faces (and hardware optimized for running that system quickly). You might not find doing that yourself to be a viable option...

It is possible to get an exportable model out of ARKit using Model I/O. The outline of the code you'd need goes something like this (a full sketch follows the list):

  1. Get an ARFaceGeometry from a face tracking session.

  2. Create MDLMeshBuffers from the face geometry's vertices, textureCoordinates, and triangleIndices arrays. (Apple notes that the texture coordinate and triangle index arrays never change, so you only need to create those once; the vertices you have to update every time you get a new frame.)

  3. Create an MDLSubmesh from the index buffer, and an MDLMesh from the submesh plus the vertex and texture coordinate buffers. (Optionally, use MDLMesh functions to generate a vertex normals buffer after creating the mesh.)

  4. Create an empty MDLAsset and add the mesh to it.

  5. Export the MDLAsset to a URL (provide a URL with the .obj file extension so that Model I/O infers the format you want to export).
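
Here is what those five steps might look like in Swift. This is a sketch, not production code: it assumes you already have an ARFaceGeometry (for example, from an ARFaceAnchor delivered by a face tracking session), and the vertex descriptor layout is the part most worth double-checking against your own data:

```swift
import ARKit
import ModelIO

// Steps 2–5: turn an ARFaceGeometry into an .obj file via Model I/O.
func exportOBJ(from faceGeometry: ARFaceGeometry, to url: URL) throws {
    let allocator = MDLMeshBufferDataAllocator()

    // Step 2: wrap the geometry's arrays in MDLMeshBuffers.
    let vertexData = Data(bytes: faceGeometry.vertices,
                          count: faceGeometry.vertices.count * MemoryLayout<SIMD3<Float>>.stride)
    let vertexBuffer = allocator.newBuffer(with: vertexData, type: .vertex)

    let texCoordData = Data(bytes: faceGeometry.textureCoordinates,
                            count: faceGeometry.textureCoordinates.count * MemoryLayout<SIMD2<Float>>.stride)
    let texCoordBuffer = allocator.newBuffer(with: texCoordData, type: .vertex)

    let indexData = Data(bytes: faceGeometry.triangleIndices,
                         count: faceGeometry.triangleIndices.count * MemoryLayout<Int16>.stride)
    let indexBuffer = allocator.newBuffer(with: indexData, type: .index)

    // Step 3: a submesh from the index buffer, then a mesh from the
    // submesh plus the vertex and texture coordinate buffers.
    let submesh = MDLSubmesh(indexBuffer: indexBuffer,
                             indexCount: faceGeometry.triangleCount * 3,
                             indexType: .uInt16,
                             geometryType: .triangles,
                             material: nil)

    // Describe the two vertex buffers so Model I/O knows their layout.
    let descriptor = MDLVertexDescriptor()
    descriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                                  format: .float3, offset: 0, bufferIndex: 0)
    descriptor.attributes[1] = MDLVertexAttribute(name: MDLVertexAttributeTextureCoordinate,
                                                  format: .float2, offset: 0, bufferIndex: 1)
    descriptor.layouts[0] = MDLVertexBufferLayout(stride: MemoryLayout<SIMD3<Float>>.stride)
    descriptor.layouts[1] = MDLVertexBufferLayout(stride: MemoryLayout<SIMD2<Float>>.stride)

    let mesh = MDLMesh(vertexBuffers: [vertexBuffer, texCoordBuffer],
                       vertexCount: faceGeometry.vertices.count,
                       descriptor: descriptor,
                       submeshes: [submesh])

    // Optional: generate a vertex normals buffer after creating the mesh.
    mesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0.5)

    // Steps 4–5: wrap the mesh in an asset and export; the .obj file
    // extension tells Model I/O which format to write.
    let asset = MDLAsset()
    asset.add(mesh)
    try asset.export(to: url)
}
```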

That sequence doesn't require SceneKit (or Metal, or any ability to display the mesh) at all, which might prove useful depending on your needs. If you do want to involve SceneKit and Metal, you can probably skip a few steps (a sketch follows this list):

  1. Create ARSCNFaceGeometry on your Metal device and pass it an ARFaceGeometry from a face tracking session.

  2. Use MDLMesh(scnGeometry:) to get a Model I/O representation of that geometry, then follow steps 4-5 above to export it to an .obj file.
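
A sketch of this shorter path, under the same assumption that a face tracking session is already delivering ARFaceAnchor updates:

```swift
import ARKit
import ModelIO
import SceneKit.ModelIO // bridges SceneKit geometry to Model I/O

func exportOBJ(from faceAnchor: ARFaceAnchor, device: MTLDevice, to url: URL) throws {
    // Step 1: build SceneKit face geometry on the Metal device and
    // feed it the current ARFaceGeometry.
    guard let scnFaceGeometry = ARSCNFaceGeometry(device: device) else { return }
    scnFaceGeometry.update(from: faceAnchor.geometry)

    // Step 2: bridge to Model I/O, then wrap and export as before.
    let mesh = MDLMesh(scnGeometry: scnFaceGeometry)
    let asset = MDLAsset()
    asset.add(mesh)
    try asset.export(to: url)
}
```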

---

Any way you slice it, though... if modeling eyes and teeth is a hard requirement, none of the Apple-provided options will help you, because none of them does that. So, some food for thought:

  • Consider whether that's really a strong requirement?
  • Replicate all of Apple's work to do your own face-model inference from color + depth image sequences?
  • Cheat on eye modeling using spheres centered according to the leftEyeTransform/rightEyeTransform reported by ARKit?
  • Cheat on teeth modeling using a pre-made model of teeth, composed with the ARKit-provided face geometry for display? (Articulate your inner-jaw model with a single open-shut joint and use ARKit's blendShapes[.jawOpen] to animate it alongside the face. A rough sketch of these last two cheats follows this list.)
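
For the last two points, here is a rough sketch of what the cheats could look like. The sphere radius and the jaw-angle mapping are arbitrary placeholders for illustration, not values from any Apple API:

```swift
import ARKit
import SceneKit

// Eye cheat: spheres positioned by the eye transforms ARKit reports.
func addEyeSpheres(to faceNode: SCNNode, for faceAnchor: ARFaceAnchor) {
    for transform in [faceAnchor.leftEyeTransform, faceAnchor.rightEyeTransform] {
        let eye = SCNNode(geometry: SCNSphere(radius: 0.0125)) // placeholder radius
        eye.simdTransform = transform // relative to the face anchor's node
        faceNode.addChildNode(eye)
    }
}

// Teeth cheat: drive a single open-shut jaw joint from the
// jawOpen blend shape coefficient (0...1).
func updateJaw(_ jawNode: SCNNode, from faceAnchor: ARFaceAnchor) {
    let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
    jawNode.eulerAngles.x = -jawOpen * Float.pi / 8 // hypothetical angle mapping
}
```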
