Tracking face mesh vertices of Augmented Faces (ARCore) regardless of rotation

Problem description

I'm trying to track facial expressions such as eyebrow raise, smile, wink, etc. In ARKit I could use blendShapes (https://developer.apple.com/documentation/arkit/arfaceanchor/2928251-blendshapes) to detect the movement of the different parts of the face but in ARCore it doesn't exist yet.

I've tried to access the mesh vertices which are relative to the center transform of the face but these change significantly with the rotation of the face.

Is there a way to normalize the face landmark/vertex from 0 to 1 where 0 is neutral and 1 is the maximum facial expression? It doesn't need to be as accurate as ARKit blendShapes.
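
For illustration only (this is not from the original post): one possible way to get such a 0-to-1 score is to calibrate a neutral frame and a maximum-expression frame for a landmark and take a clipped ratio of displacements. The function and argument names below are hypothetical placeholders.

import numpy as np

def expression_score(current, neutral, maximum, eps=1e-8):
    # current, neutral, maximum: (3,) arrays holding the same landmark's
    # position in the live frame, a calibrated neutral face, and a
    # calibrated maximum-expression face (all hypothetical inputs).
    moved = np.linalg.norm(current - neutral)        # displacement from neutral
    full_range = np.linalg.norm(maximum - neutral)   # neutral-to-maximum range
    return float(np.clip(moved / (full_range + eps), 0.0, 1.0))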

Recommended answer

Your question involves two separate problems:

  1. Getting blendshapes from ARCore the way ARKit provides them.
  2. Head rotation, which makes point-by-point comparison difficult.

I do not have a solution for problem 1. However, for problem 2, you can compute a rotation matrix from the landmark points. I have a method ready that does this for the MediaKit face mesh. Hope this works for you:

import numpy as np

def calc_rotation_matrix(self):
    # Landmark getters: implement these for your own landmark indices
    # (see the note below the code).
    left_corner_right_eye = self.get_left_corner_right_eye()  # not used in the matrix below
    right_corner_left_eye = self.get_right_corner_left_eye()  # not used in the matrix below
    left_corner_face = self.get_left_corner_face()
    right_corner_face = self.get_right_corner_face()
    upper_nose = self.get_upper_pt_nose()
    chin = self.get_chin()

    rotation_matrix = np.zeros((3, 3))
    # Row 0: x-axis across the face, from its left corner to its right corner.
    rotation_matrix[0, :] = (right_corner_face - left_corner_face) / np.linalg.norm(right_corner_face - left_corner_face)
    # Row 1: y-axis down the face, from the top of the nose to the chin.
    rotation_matrix[1, :] = (chin - upper_nose) / np.linalg.norm(chin - upper_nose)
    # Row 2: z-axis perpendicular to the other two.
    rotation_matrix[2, :] = np.cross(rotation_matrix[0, :], rotation_matrix[1, :])

    return rotation_matrix

You will obviously have to write the methods for getting the respective points for your own use case. Once you have this rotation matrix, you can always get the face with (pitch, yaw, roll) = (0, 0, 0) by multiplying the landmarks by np.linalg.inv(rotation_matrix).
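
As a minimal usage sketch of that last step (the landmarks array and the wrapper function below are assumptions for illustration, not part of the original answer):

import numpy as np

def derotate_landmarks(landmarks, rotation_matrix):
    # landmarks: (N, 3) array of face mesh points, one row per landmark
    # (hypothetical input); rotation_matrix: the 3x3 matrix returned by
    # calc_rotation_matrix() above.
    # Multiplying by the inverse removes the head's pitch/yaw/roll, so the
    # same expression produces roughly the same coordinates at any head pose.
    return landmarks @ np.linalg.inv(rotation_matrix)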

AFAIK MediaKit (or ARCore) does not have blendshapes built in. @Hardik mentions in the comment above that OpenCV and Dlib can help with this, but I am not so sure. In fact, I am searching for something similar myself.
