Mapping image onto 3D face mesh


Problem description

I am using the iPhone X and ARKit's face tracking to capture the user's face. The goal is to texture the face mesh with the user's image.

I'm only looking at a single frame (an ARFrame) from the AR session. From ARFaceGeometry, I have a set of vertices that describe the face. I make a JPEG representation of the current frame's capturedImage.
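
For reference, one way to produce that JPEG from capturedImage (a CVPixelBuffer) is via Core Image. The following is only a sketch of that step, not the question's actual code, and the helper name is made up:

import ARKit
import CoreImage

// Hypothetical helper: produce JPEG data from an ARFrame's captured camera image.
func jpegData(from frame: ARFrame) -> Data? {
    // capturedImage is a CVPixelBuffer in the camera's native format.
    let image = CIImage(cvPixelBuffer: frame.capturedImage)
    let context = CIContext()
    guard let colorSpace = CGColorSpace(name: CGColorSpace.sRGB) else { return nil }
    return context.jpegRepresentation(of: image, colorSpace: colorSpace)
}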

I then want to find the texture coordinates that map the created JPEG onto the mesh vertices. I want to:

  1. map the vertices from model space to world space;
  2. map the vertices from world space to camera space;
  3. divide by image dimensions to get pixel coordinates for the texture.

let geometry: ARFaceGeometry = contentUpdater.faceGeometry!
let theCamera = session.currentFrame?.camera

let theFaceAnchor: SCNNode = contentUpdater.faceNode
let anchorTransform = float4x4(theFaceAnchor.transform)

for index in 0..<totalVertices {
    let vertex = geometry.vertices[index]

    // Step 1: Model space to world space, using the anchor's transform
    let vertex4 = float4(vertex.x, vertex.y, vertex.z, 1.0)
    let worldSpace = anchorTransform * vertex4

    // Step 2: World space to camera space
    let world3 = float3(worldSpace.x, worldSpace.y, worldSpace.z)
    let projectedPt = theCamera?.projectPoint(world3, orientation: .landscapeRight, viewportSize: (theCamera?.imageResolution)!)

    // Step 3: Divide by image width/height to get pixel coordinates
    if let projectedPt = projectedPt {
        let vtx = projectedPt.x / (theCamera?.imageResolution.width)!
        let vty = projectedPt.y / (theCamera?.imageResolution.height)!
        textureVs += "vt \(vtx) \(vty)\n"
    }
}

This is not working, but instead gets me a very funky looking face! Where am I going wrong?

Answer

Texturing the face mesh with the user's image is now available in the Face-Based sample code published by Apple (section Map Camera Video onto 3D Face Geometry).

You can map the camera video onto the 3D face geometry using the following shader modifier.

// Transform the vertex to the camera coordinate system.
float4 vertexCamera = scn_node.modelViewTransform * _geometry.position;

// Camera projection and perspective divide to get normalized viewport coordinates (clip space).
float4 vertexClipSpace = scn_frame.projectionTransform * vertexCamera;
vertexClipSpace /= vertexClipSpace.w;

// XY in clip space is [-1,1]x[-1,1], so adjust to UV texture coordinates: [0,1]x[0,1].
// Image coordinates are Y-flipped (upper-left origin).
float4 vertexImageSpace = float4(vertexClipSpace.xy * 0.5 + 0.5, 0.0, 1.0);
vertexImageSpace.y = 1.0 - vertexImageSpace.y;

// Apply ARKit's display transform (device orientation * front-facing camera flip).
float4 transformedVertex = displayTransform * vertexImageSpace;

// Output as texture coordinates for use in later rendering stages.
_geometry.texcoords[0] = transformedVertex.xy;
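
For this snippet to work, the displayTransform uniform has to be declared in the shader modifier (a float4x4 before #pragma body) and supplied from the app. The Swift sketch below is not part of Apple's published snippet; it shows one way to attach the modifier to an ARSCNFaceGeometry and feed it the display transform. Names like setupVideoTexture and shaderSource are illustrative, and depending on how the camera texture is supplied, the inverse of the transform may be what's needed.

import ARKit
import SceneKit

// Illustrative sketch: wire the geometry shader modifier into the face mesh.
// Assumes `shaderSource` contains the Metal snippet above, preceded by a
// `float4x4 displayTransform;` declaration and `#pragma body`.
func setupVideoTexture(faceGeometry: ARSCNFaceGeometry,
                       sceneView: ARSCNView,
                       frame: ARFrame,
                       shaderSource: String) {
    // Reuse the live camera feed (the scene background) as the diffuse texture.
    faceGeometry.firstMaterial?.diffuse.contents = sceneView.scene.background.contents

    // Install the geometry-stage modifier that rewrites texcoords[0].
    faceGeometry.shaderModifiers = [.geometry: shaderSource]

    // Build ARKit's display transform for the current viewport and pass it to
    // the shader via key-value coding on the geometry.
    let affine = frame.displayTransform(for: .portrait, viewportSize: sceneView.bounds.size)
    var m = SCNMatrix4Identity
    m.m11 = Float(affine.a);  m.m12 = Float(affine.b)
    m.m21 = Float(affine.c);  m.m22 = Float(affine.d)
    m.m41 = Float(affine.tx); m.m42 = Float(affine.ty)
    // Depending on the texture's orientation, SCNMatrix4Invert(m) may be
    // what the shader actually needs here.
    faceGeometry.setValue(NSValue(scnMatrix4: m), forKey: "displayTransform")
}

In a typical setup this kind of wiring would run when the face anchor updates, for example from the ARSCNViewDelegate's renderer(_:didUpdate:for:) callback.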
