Facial Recognition with Kinect


Problem Description


Lately I have been working on facial recognition with the Kinect, using the new Developer Toolkit (v1.5.1). The API for the FaceTracking tools can be found here: http://msdn.microsoft.com/en-us/library/jj130970.aspx. Basically, what I have tried to do so far is attain a "facial signature" unique to each person. To do this, I referenced the facial points the Kinect tracks.


Then I tracked my face (plus a couple of friends) and calculated the distance between points 39 and 8 using basic algebra. I also recorded the current depth of the head. Here's a sample of the data I obtained:

DISTANCE FROM RIGHT SIDE OF NOSE TO LEFT EYE: 10.1919198899636
CURRENT DEPTH OF HEAD: 1.65177881717682
DISTANCE FROM RIGHT SIDE OF NOSE TO LEFT EYE: 11.0429381713623
CURRENT DEPTH OF HEAD: 1.65189981460571
DISTANCE FROM RIGHT SIDE OF NOSE TO LEFT EYE: 11.0023324541865
CURRENT DEPTH OF HEAD: 1.65261101722717
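
Roughly, the two logged values can be computed like this (a minimal sketch rather than the exact code used; it assumes a FaceTrackFrame named frame and a tracked Skeleton named skeletonOfInterest, as in the answer below):

      // Sketch: compute the two values logged above from one face-tracking frame.
      EnumIndexableCollection<FeaturePoint, PointF> facePoints = frame.GetProjected3DShape();

      // On-screen distance between points 39 and 8.
      double dx = facePoints[(FeaturePoint)39].X - facePoints[(FeaturePoint)8].X;
      double dy = facePoints[(FeaturePoint)39].Y - facePoints[(FeaturePoint)8].Y;
      double noseToEyeDistance = Math.Sqrt(dx * dx + dy * dy);

      // Current depth of the head, in meters, from the tracked skeleton.
      float headDepth = skeletonOfInterest.Joints[JointType.Head].Position.Z;

      Console.WriteLine("DISTANCE FROM RIGHT SIDE OF NOSE TO LEFT EYE: " + noseToEyeDistance);
      Console.WriteLine("CURRENT DEPTH OF HEAD: " + headDepth);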


These are just a few of the values I obtained. My next step was plotting them in Excel. My expected result was a fairly linear trend between depth and distance, because as depth increases the distance should get smaller, and vice versa. For person X's data the trend was fairly linear, but for my friend (person Y) the plot was all over the place. So I concluded that I can't use this method for facial recognition; I cannot get the precision I need to track such a small distance.


My goal is to be able to identify people as they enter a room, save their "profile", and then remove it once they exit. Sorry if this was a bit much, but I'm just trying to explain the progress I have made thus far. So, what do you guys think about how I can implement facial recognition? Any ideas/help will be greatly appreciated.

Recommended Answer


If you use an EnumIndexableCollection<FeaturePoint, PointF>, you can use a FaceTrackFrame's GetProjected3DShape() method. You can use it like this:

  private byte[] colorImage;
  private ColorImageFormat colorImageFormat = ColorImageFormat.Undefined;
  private short[] depthImage;
  private DepthImageFormat depthImageFormat = DepthImageFormat.Undefined;
  private Skeleton[] skeletonData;
  private EnumIndexableCollection<FeaturePoint, PointF> facePoints;
  private FaceTracker faceTracker; // see the sketch below for how this can be created
  KinectSensor Kinect = KinectSensor.KinectSensors[0];

  // Inside the sensor's AllFramesReady event handler:
  using (ColorImageFrame colorImageFrame = allFramesReadyEventArgs.OpenColorImageFrame())
  using (DepthImageFrame depthImageFrame = allFramesReadyEventArgs.OpenDepthImageFrame())
  using (SkeletonFrame skeletonFrame = allFramesReadyEventArgs.OpenSkeletonFrame())
  {
      // Frames can be null if they arrive late or the streams are not enabled.
      if (colorImageFrame == null || depthImageFrame == null || skeletonFrame == null)
      {
          return;
      }

      // Remember the frame formats and allocate the buffers before copying into them.
      this.colorImageFormat = colorImageFrame.Format;
      this.depthImageFormat = depthImageFrame.Format;
      if (this.colorImage == null) this.colorImage = new byte[colorImageFrame.PixelDataLength];
      if (this.depthImage == null) this.depthImage = new short[depthImageFrame.PixelDataLength];
      if (this.skeletonData == null) this.skeletonData = new Skeleton[skeletonFrame.SkeletonArrayLength];

      colorImageFrame.CopyPixelDataTo(this.colorImage);
      depthImageFrame.CopyPixelDataTo(this.depthImage);
      skeletonFrame.CopySkeletonDataTo(this.skeletonData);

      foreach (Skeleton skeletonOfInterest in this.skeletonData)
      {
          if (skeletonOfInterest.TrackingState != SkeletonTrackingState.Tracked)
          {
              continue;
          }

          FaceTrackFrame frame = faceTracker.Track(
              colorImageFormat, colorImage, depthImageFormat, depthImage, skeletonOfInterest);

          this.facePoints = frame.GetProjected3DShape();
      }
  }
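
The faceTracker field used above is assumed to be created elsewhere; here is a minimal sketch of one way to create and dispose it, using the FaceTracker type from Microsoft.Kinect.Toolkit.FaceTracking (the method names are just placeholders):

  private void StartFaceTracking()
  {
      if (this.faceTracker == null)
      {
          // FaceTracker needs a started KinectSensor with the color, depth and skeleton streams enabled.
          this.faceTracker = new FaceTracker(this.Kinect);
      }
  }

  private void StopFaceTracking()
  {
      if (this.faceTracker != null)
      {
          this.faceTracker.Dispose();
          this.faceTracker = null;
      }
  }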


Then you can use each of the points in your image. I would have a const double preferredDistance that you multiply the current depth and the x and y of the different points by, to find the preferred version of the x's, y's, and depth, using the formula


preferredDistance / currentDistance

Example:

        const double preferredDistance = 500.0; // this can be any number you want.

        // However you are calculating the distance -- for example (an assumption
        // here), the current depth of the head from the tracked skeleton:
        double currentDistance = skeletonOfInterest.Joints[JointType.Head].Position.Z;

        double whatToMultiply = preferredDistance / currentDistance;

        double x1 = this.facePoints[(FeaturePoint)39].X;
        double y1 = this.facePoints[(FeaturePoint)39].Y;
        double x2 = this.facePoints[(FeaturePoint)8].X;
        double y2 = this.facePoints[(FeaturePoint)8].Y;

        // However you are calculating distance -- here, the on-screen distance
        // between points 39 and 8, scaled by the factor above:
        double result = whatToMultiply *
            Math.Sqrt((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2));
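
For example, with preferredDistance = 500.0, the first sample from the question (distance ≈ 10.19, head depth ≈ 1.652, treating the head depth as currentDistance, which is an assumption about the intent here) gives whatToMultiply = 500.0 / 1.652 ≈ 302.7 and result ≈ 302.7 × 10.19 ≈ 3085.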


Then you can have a List<> of the distances to search. I would also suggest that you have a List<> of bool corresponding to those distances, set to true if the result matches, so you can keep track of which bool is true/false.
Example:

        List<double> DistanceFromEyeToNose = new List<double>
        {
            1,
            2,
            3 //etc
        };


        List<bool> IsMatch = new List<bool>
        {
            false,
            false,
            false //etc
        };

Then search with a loop:

        for (int i = 0; i < DistanceFromEyeToNose.Count; i++)
        {
            if (result == DistanceFromEyeToNose[i]) IsMatch[i] = true;
        } 
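
Since result and the stored distances are noisy floating-point measurements, an exact == comparison will rarely succeed in practice. Here is a sketch of the same search using a tolerance, which also stores an unmatched measurement as a new "profile" in the spirit of the question's goal (the tolerance value and the new-profile handling are assumptions, not part of the code above):

        const double tolerance = 0.5; // assumed value; tune it to the noise you observe

        bool matched = false;
        for (int i = 0; i < DistanceFromEyeToNose.Count; i++)
        {
            if (Math.Abs(result - DistanceFromEyeToNose[i]) < tolerance)
            {
                IsMatch[i] = true;
                matched = true;
                break;
            }
        }

        if (!matched)
        {
            // No stored profile matched: remember this measurement as a new person.
            DistanceFromEyeToNose.Add(result);
            IsMatch.Add(false);
        }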

Hope this helps!
