Confused about face tracking, points and animations


Problem Description


I'm a little bit confused about face tracking, so please correct me if I'm wrong.
What I think right now:
- ShapeUnits (FaceShapeDeformation) express how much the neutral mesh/average human face differs from the real user's tracked face (in 70 different ways).
- AnimationUnits (FaceShapeAnimation) recognize 17 basic movements on the neutral face.
- HighDetailFacePoints are just 36 vertices of the 3D neutral mesh face?
So is it correct to say that:
1) there are other "HighDetailFacePoints" that developers don't need to use/know (just because there are more than 1000 HD points/vertices, correct)?
2) There are only 5 2D points (FacePointType), on which FaceProperty (which are discrete states) and FaceShapeAnimations are built?
3.a) AnimationUnits are built on HighDetailFacePoints?
3.b) So if I want to code new custom animations, should I use HighDetailFacePoints and not ShapeUnits?
4) Is there a FaceTrackingBasics sample/API that renders these vertices or the mesh, like for Kinect 1 (here:
http://blogs.msdn.com/b/kinectforwindows/archive/2014/01/31/clearing-the-confusion-around-kinect-for-windows-face-tracking-output.aspx)?
Because I need to visualize these different points or vertices to understand how to code custom AnimationUnits (just for debugging), e.g. to distinguish a yawn from a surprised open mouth or a screaming open mouth.
Sorry for so many questions, but every time I try to code some sample I get stuck :(
Thanks

Answer


There are exactly 94 SUs (defined in the FaceShapeDeformations enum), where each deforms the mesh differently. These are non-normalized units and can sometimes be seen as high as +/- 10. SUs specify the captured face deformations from the model builder and are constant while tracking (at least until another build).
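To make the SU part concrete, here is a minimal sketch of reading those values, assuming the standard managed Microsoft.Kinect.Face API (in particular that FaceModel exposes a FaceShapeDeformations dictionary keyed by the FaceShapeDeformations enum); any name not in the answer above is an assumption, not something the answer confirms.

```csharp
using System;
using Microsoft.Kinect.Face;

static class ShapeUnitDump
{
    // Prints every Shape Unit (SU) stored in a built FaceModel.
    // Assumption: FaceModel.FaceShapeDeformations is a read-only dictionary
    // mapping each FaceShapeDeformations member to its (non-normalized) weight.
    public static void PrintShapeUnits(FaceModel model)
    {
        foreach (var su in model.FaceShapeDeformations)
        {
            // Values stay constant while tracking, until the model is rebuilt.
            Console.WriteLine("{0} = {1:F3}", su.Key, su.Value);
        }
    }
}
```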


There are exactly 17 AUs (defined in the FaceShapeAnimations enum), where each deforms the mesh differently (and independently of the SUs). These are non-normalized units, but unlike SUs, some are signed while others are unsigned (e.g. JawOpen is mostly positive, but nothing says it cannot go negative). AUs change every frame while tracking. The quality of the AUs is much higher for a built model, but they can still differ between successive builds (i.e. a constant AU is not guaranteed to produce the same expression when a different person's shape (SUs) is applied).
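For comparison, here is a hedged sketch of reading the 17 AUs each frame. It assumes the usual managed HDFace types and members (HighDefinitionFaceFrameSource/Reader, GetAndRefreshFaceAlignmentResult, FaceAlignment.AnimationUnits), which the answer itself does not spell out.

```csharp
using System;
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

class AnimationUnitLogger
{
    private readonly FaceAlignment alignment = new FaceAlignment();
    private HighDefinitionFaceFrameReader reader;

    // bodyTrackingId is the tracking id of the body whose face should be followed.
    public void Start(KinectSensor sensor, ulong bodyTrackingId)
    {
        var source = new HighDefinitionFaceFrameSource(sensor) { TrackingId = bodyTrackingId };
        this.reader = source.OpenReader();
        this.reader.FrameArrived += OnFrameArrived;
    }

    private void OnFrameArrived(object sender, HighDefinitionFaceFrameArrivedEventArgs e)
    {
        using (var frame = e.FrameReference.AcquireFrame())
        {
            if (frame == null || !frame.IsFaceTracked) return;

            // Updates 'alignment' in place; AnimationUnits then holds this frame's 17 AU values.
            frame.GetAndRefreshFaceAlignmentResult(this.alignment);
            float jawOpen = this.alignment.AnimationUnits[FaceShapeAnimations.JawOpen];
            Console.WriteLine("JawOpen = {0:F2}", jawOpen);
        }
    }
}
```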


HighDetailFacePoints are specific vertex indices that have some meaning; they aren't directly used in any API call. Once you get the vertices from CalculateVerticesForAlignment, you should use "LefteyeInnercorner" (for example) as the index to get that vertex value.


There are currently 1347 vertices in the model (mesh); the others don't have an "assigned" meaning, but they can be accessed and used the same way as the HighDetailFacePoints.
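As an illustration of the indexing described in the last two paragraphs, here is a small sketch. It assumes CalculateVerticesForAlignment returns the full vertex list for the current alignment and that a HighDetailFacePoints member can be cast to a plain integer index; both are assumptions about the managed API, not statements from the answer.

```csharp
using System.Collections.Generic;
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

static class VertexLookup
{
    // Returns the camera-space position of one named HD face point.
    public static CameraSpacePoint GetPoint(FaceModel model, FaceAlignment alignment,
                                            HighDetailFacePoints point)
    {
        // All 1347 vertices of the deformed mesh for the current alignment.
        IReadOnlyList<CameraSpacePoint> vertices = model.CalculateVerticesForAlignment(alignment);

        // HighDetailFacePoints members are just well-known indices into that list;
        // any other index in the same range can be used the same way.
        return vertices[(int)point];
    }
}

// Hypothetical usage:
// var corner = VertexLookup.GetPoint(model, alignment, HighDetailFacePoints.LefteyeInnercorner);
```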


The FacePointType is a property of 2D face tracking and not HDFace. It's used in calls like GetFacePointsInColorSpace.
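GetFacePointsInColorSpace is the native call; under the assumption that the usual managed Microsoft.Kinect.Face names apply, the same data is exposed as the FacePointsInColorSpace dictionary on a FaceFrameResult (obtained from a FaceFrameReader whose source was opened with the PointsInColorSpace feature). A minimal sketch:

```csharp
using System;
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

static class FacePoints2D
{
    // Prints the 5 FacePointType landmarks (eyes, nose, mouth corners)
    // in color-image coordinates for one 2D face tracking result.
    public static void Print(FaceFrameResult result)
    {
        foreach (var kv in result.FacePointsInColorSpace)
        {
            // kv.Key is a FacePointType, kv.Value a PointF in color-space pixels.
            Console.WriteLine("{0}: ({1:F1}, {2:F1})", kv.Key, kv.Value.X, kv.Value.Y);
        }
    }
}
```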


AUs are analogous to SUs in that both deform the mesh. They are only related to HighDetailFacePoints through the mesh. AUs are the primary mechanism for applying animations. Using these values for custom animations is beyond what the API supports (i.e. there is no way to alter the array and pass it back to the API). The values in the AUs need to be applied to a deformed mesh or converted into a different animation representation (i.e. bone transformations). In these instances neither HighDetailFacePoints nor ShapeUnits apply.
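Even though custom AUs are outside what the API supports, the yawn-versus-surprise case from the question can be approximated by combining the existing AUs in your own code. The sketch below is only an illustration: JawOpen, LefteyeClosed and RighteyeClosed are real FaceShapeAnimations members, but the thresholds are invented for the example and would need tuning per user.

```csharp
using System.Collections.Generic;
using Microsoft.Kinect.Face;

static class MouthHeuristics
{
    // Purely illustrative heuristic: a yawn tends to pair a wide-open jaw with
    // (partly) closed eyes, while a surprised open mouth keeps the eyes open.
    // The 0.6f / 0.4f cut-offs are made up for this example.
    public static bool LooksLikeYawn(IReadOnlyDictionary<FaceShapeAnimations, float> au)
    {
        float jawOpen = au[FaceShapeAnimations.JawOpen];
        float eyesClosed = (au[FaceShapeAnimations.LefteyeClosed] +
                            au[FaceShapeAnimations.RighteyeClosed]) / 2f;

        return jawOpen > 0.6f && eyesClosed > 0.4f;
    }
}
```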


For an example, review the HDFaceBasics sample provided in the SDK Browser. If you want a wireframe visual, you will have to change the draw calls.

