Tracking face mesh vertices of Augmented Faces (ARCore) regardless of rotation


Question

I'm trying to track facial expressions such as eyebrow raises, smiles, winks, etc. In ARKit I could use blendShapes (https://developer.apple.com/documentation/arkit/arfaceanchor/2928251-blendshapes) to detect the movement of the different parts of the face, but this doesn't exist in ARCore yet.

I've tried accessing the mesh vertices, which are relative to the center transform of the face, but these change significantly as the face rotates.

Is there a way to normalize the face landmarks/vertices from 0 to 1, where 0 is neutral and 1 is the maximum facial expression? It doesn't need to be as accurate as ARKit's blendShapes.

Answer

Your question involves two separate problems:

  1. Getting blendshapes out of ARCore the way ARKit provides them.
  2. Head rotation, which makes point-by-point comparison of vertices difficult.

I do not have a solution for problem 1. However, for problem 2 you can compute a rotation matrix from the landmark points. I have a method ready that does this for the MediaKit face mesh. Hope this works for you:

import numpy as np

def calc_rotation_matrix(self):
    # Landmark getters must be implemented for your own mesh/use case.
    left_corner_face = self.get_left_corner_face()
    right_corner_face = self.get_right_corner_face()
    upper_nose = self.get_upper_pt_nose()
    chin = self.get_chin()

    rotation_matrix = np.zeros((3, 3))
    # Row 0: unit vector across the face (left corner -> right corner).
    rotation_matrix[0, :] = (right_corner_face - left_corner_face) / np.linalg.norm(right_corner_face - left_corner_face)
    # Row 1: unit vector down the face (upper nose -> chin).
    rotation_matrix[1, :] = (chin - upper_nose) / np.linalg.norm(chin - upper_nose)
    # Row 2: perpendicular to the first two axes.
    rotation_matrix[2, :] = np.cross(rotation_matrix[0, :], rotation_matrix[1, :])

    return rotation_matrix

You will obviously have to write the methods that fetch the respective points for your own use case. Once you have this rotation matrix, you can always recover the face with (pitch, yaw, roll) = (0, 0, 0) by multiplying the landmarks by np.linalg.inv(rotation_matrix).
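As a minimal sketch of that de-rotation step, assuming the landmarks are an (N, 3) NumPy array of row vectors. The rotation matrix here is a made-up 30-degree yaw stand-in, since the real one would come from the landmark getters above:

```python
import numpy as np

# Stand-in rotation matrix (30-degree yaw); in practice it would come
# from calc_rotation_matrix() above.
yaw = np.radians(30.0)
rotation_matrix = np.array([
    [np.cos(yaw), 0.0, np.sin(yaw)],
    [0.0, 1.0, 0.0],
    [-np.sin(yaw), 0.0, np.cos(yaw)],
])

# Hypothetical neutral-pose landmarks (N, 3), then the same landmarks
# after the head rotates.
neutral = np.array([[0.0, 0.1, 0.0], [0.03, -0.05, 0.02]])
rotated = neutral @ rotation_matrix.T

# Multiplying by the inverse rotation brings the landmarks back to the
# (pitch, yaw, roll) = (0, 0, 0) pose.
derotated = rotated @ np.linalg.inv(rotation_matrix).T
```

Because the matrix rows are (near-)orthonormal axes, the inverse undoes the head pose, so de-rotated landmarks from different frames become directly comparable.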

AFAIK MediaKit (or ARCore) does not have blendshapes built in. @Hardik mentions in a comment above that OpenCV and Dlib can help with this, but I am not so sure. In fact, I am searching for something similar.
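Without built-in blendshapes, one way to approximate the 0-to-1 normalization the question asks for is to min-max normalize a distance between two de-rotated landmarks against calibrated neutral and maximum values. This is only a sketch; the function name, landmark coordinates, and calibration numbers below are hypothetical placeholders, not ARCore API:

```python
import numpy as np

def mouth_open_coefficient(upper_lip, lower_lip, neutral_dist, max_dist):
    """Approximate a blendshape-like value in [0, 1] from two landmarks.

    neutral_dist and max_dist must be calibrated per user/session
    (e.g. measured once with a neutral face and a fully open mouth).
    """
    dist = np.linalg.norm(np.asarray(lower_lip) - np.asarray(upper_lip))
    # Min-max normalize and clamp to [0, 1].
    return float(np.clip((dist - neutral_dist) / (max_dist - neutral_dist), 0.0, 1.0))

# Example with made-up values: lips 2.5 units apart, calibrated
# neutral distance 1.0 and maximum distance 4.0.
value = mouth_open_coefficient([0.0, 1.0, 0.0], [0.0, -1.5, 0.0], 1.0, 4.0)
```

The same pattern could be repeated per expression (eyebrow raise, wink) with different landmark pairs, as long as the landmarks are de-rotated first so the distances do not drift with head pose.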
