Mapping an image onto a 3D face mesh
Question
I am using the iPhone X and ARFaceKit to capture the user's face. The goal is to texture the face mesh with the user's image.
I'm only looking at a single frame (an ARFrame) from the AR session. From ARFaceGeometry, I have a set of vertices that describe the face. I make a JPEG representation of the current frame's capturedImage.
I then want to find the texture coordinates that map the created JPEG onto the mesh vertices. I want to:

1. map the vertices from model space to world space;
2. map the vertices from world space to camera space;
3. divide by image dimensions to get pixel coordinates for the texture.
let geometry: ARFaceGeometry = contentUpdater.faceGeometry!
let theCamera = session.currentFrame!.camera
let theFaceAnchor: SCNNode = contentUpdater.faceNode
let anchorTransform = theFaceAnchor.simdTransform

for index in 0..<geometry.vertices.count {
    let vertex = geometry.vertices[index]

    // Step 1: model space to world space, using the anchor's transform.
    let vertex4 = float4(vertex.x, vertex.y, vertex.z, 1.0)
    let worldSpace = anchorTransform * vertex4

    // Step 2: world space to image space via the camera projection.
    let world3 = float3(worldSpace.x, worldSpace.y, worldSpace.z)
    let projectedPt = theCamera.projectPoint(world3,
                                             orientation: .landscapeRight,
                                             viewportSize: theCamera.imageResolution)

    // Step 3: divide by image width/height to get normalized texture coordinates.
    let vtx = projectedPt.x / theCamera.imageResolution.width
    let vty = projectedPt.y / theCamera.imageResolution.height
    textureVs += "vt \(vtx) \(vty)\n"
}
This is not working, but instead gets me a very funky looking face! Where am I going wrong?
Answer
Texturing the face mesh with the user's image is now available in the face-based sample code published by Apple (see the section Map Camera Video onto 3D Face Geometry).
One can map the camera video onto the 3D face geometry using the following shader modifier:
// Transform the vertex to the camera coordinate system.
float4 vertexCamera = scn_node.modelViewTransform * _geometry.position;
// Camera projection and perspective divide to get normalized viewport coordinates (clip space).
float4 vertexClipSpace = scn_frame.projectionTransform * vertexCamera;
vertexClipSpace /= vertexClipSpace.w;
// XY in clip space is [-1,1]x[-1,1], so adjust to UV texture coordinates: [0,1]x[0,1].
// Image coordinates are Y-flipped (upper-left origin).
float4 vertexImageSpace = float4(vertexClipSpace.xy * 0.5 + 0.5, 0.0, 1.0);
vertexImageSpace.y = 1.0 - vertexImageSpace.y;
// Apply ARKit's display transform (device orientation * front-facing camera flip).
float4 transformedVertex = displayTransform * vertexImageSpace;
// Output as texture coordinates for use in later rendering stages.
_geometry.texcoords[0] = transformedVertex.xy;
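In the spirit of Apple's sample, the modifier above still needs to be attached to the face material and fed a displayTransform uniform each frame. The sketch below shows one way to wire that up; the names setUpFaceTexture, updateDisplayTransform, and shaderModifierSource (a String holding the shader code above) are assumptions, not part of Apple's API, and the .portrait orientation is an assumed fixed orientation.

```swift
import ARKit
import SceneKit

// Assumption: displayTransform(for:viewportSize:) returns a CGAffineTransform,
// but the shader expects a 4x4 matrix, so convert it by hand.
func matrix(from affine: CGAffineTransform) -> SCNMatrix4 {
    var m = SCNMatrix4Identity
    m.m11 = Float(affine.a);  m.m12 = Float(affine.b)
    m.m21 = Float(affine.c);  m.m22 = Float(affine.d)
    m.m41 = Float(affine.tx); m.m42 = Float(affine.ty)
    return m
}

// One-time setup: use the camera feed as the diffuse texture and attach
// the geometry-entry-point shader modifier quoted above.
func setUpFaceTexture(sceneView: ARSCNView, faceGeometry: ARSCNFaceGeometry) {
    let material = faceGeometry.firstMaterial!
    // The scene background already shows the camera video; reuse it.
    material.diffuse.contents = sceneView.scene.background.contents
    material.shaderModifiers = [.geometry: shaderModifierSource]
}

// Per-frame update (e.g. from renderer(_:didUpdate:for:)) so the shader's
// `displayTransform` uniform tracks the current device orientation.
func updateDisplayTransform(sceneView: ARSCNView, faceGeometry: ARSCNFaceGeometry) {
    guard let frame = sceneView.session.currentFrame else { return }
    let affine = frame.displayTransform(for: .portrait,
                                        viewportSize: sceneView.bounds.size)
    // The inverse maps from view-normalized coordinates back into the
    // captured image, which is the last step the shader applies.
    faceGeometry.setValue(SCNMatrix4Invert(matrix(from: affine)),
                          forKey: "displayTransform")
}
```

Setting the matrix via setValue(_:forKey:) relies on SceneKit's key-value binding of custom shader-modifier uniforms to the geometry.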