Operate application via gesture using Kinect in C#


Question


Hello!

I am developing an application for the Kinect. I want to operate the application via hand gestures, but my code is not working. I only enable and start the Kinect sensor in the XAML code-behind (WPF) and keep a separate class for gesture recognition, where I wrote the code for SetPosition, skeleton tracking, and scaling. In the current scenario, however, control never reaches my gesture-recognition class; it only enables the sensor and then stops.

GestureRecognition.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Kinect;

namespace BAL
{
    public class GestureRecognition
    {
        KinectSensor sensor = KinectSensor.KinectSensors[0];

        Skeleton[] skeletons;
        int[] leftXY = new int[2];
        int[] rightxy = new int[2];
        public GestureRecognition()
        {
            var parameters = new TransformSmoothParameters
            {
                Smoothing = 0.3f,
                Correction = 0.0f,
                Prediction = 0.0f,
                JitterRadius = 1.0f,
                MaxDeviationRadius = 0.04f
            };
            // Pass the smoothing parameters to Enable(); otherwise they are created but never used.
            sensor.SkeletonStream.Enable(parameters);
            sensor.Start();
            // Subscribe to the SkeletonFrameReady event; it fires whenever the Kinect sensor produces a skeleton frame.
            sensor.SkeletonFrameReady += new EventHandler<SkeletonFrameReadyEventArgs>(TrackSkeleton);
        }

        private Joint SetPosition(Joint joint)
        {
            Microsoft.Kinect.SkeletonPoint vector = new Microsoft.Kinect.SkeletonPoint();
            vector.X = ScaleVector(640, joint.Position.X);
            vector.Y = ScaleVector(480, -joint.Position.Y);

            Joint updatedJoint = joint;
            updatedJoint.TrackingState = JointTrackingState.Tracked;
            updatedJoint.Position = vector;
            return updatedJoint;

            //Canvas.SetLeft(ellipse, updatedJoint.Position.X);
            //Canvas.SetTop(ellipse, updatedJoint.Position.Y);
        }
        // Match the Kinect coordinate range to the screen resolution
        private float ScaleVector(int length, float position)
        {
            float value = ((length / 2f) * position) + (length / 2f); // Map a skeleton coordinate (roughly -1..1) to 0..length pixels
            if (value > length)
            {
                return (float)length;
            }
            if (value < 0f)
            {
                return 0f;
            }
            return value;
        }
        public void TrackSkeleton(object sender, SkeletonFrameReadyEventArgs e)
        {
            bool receivedData = false;

            using (SkeletonFrame SFrame = e.OpenSkeletonFrame())
            {
                if (SFrame != null)
                {
                    skeletons = new Skeleton[SFrame.SkeletonArrayLength];
                    SFrame.CopySkeletonDataTo(skeletons);
                    receivedData = true;
                }
            }

            if (receivedData)
            {
                // Query expression: pick the first tracked skeleton.
                Skeleton currentSkeleton = (from skl in skeletons
                                            where skl.TrackingState == SkeletonTrackingState.Tracked
                                            select skl).FirstOrDefault();

                if (currentSkeleton != null)
                {
                    Joint leftjoint = SetPosition(currentSkeleton.Joints[JointType.HandLeft]);
                    Joint rightjoint = SetPosition(currentSkeleton.Joints[JointType.HandRight]);
                    // The arrays have length 2, so valid indices are 0 and 1
                    // (the original indices 1 and 2 threw IndexOutOfRangeException).
                    leftXY[0] = (int)leftjoint.Position.X;
                    leftXY[1] = (int)leftjoint.Position.Y;

                    rightxy[0] = (int)rightjoint.Position.X;
                    rightxy[1] = (int)rightjoint.Position.Y;

                    //return rightxy;
                }
            }
        }
    }
}
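As a side note on the scaling step: ScaleVector maps a Kinect skeleton coordinate (roughly -1.0 to 1.0) onto a pixel axis and clamps the result. A minimal standalone sketch of the same formula (a hypothetical console program, independent of the Kinect SDK) illustrates the mapping and the clamping:

```csharp
using System;

class ScaleVectorDemo
{
    // Same formula as ScaleVector above: map position in [-1, 1] to [0, length] pixels, clamped.
    static float ScaleVector(int length, float position)
    {
        float value = ((length / 2f) * position) + (length / 2f);
        if (value > length) return length;
        if (value < 0f) return 0f;
        return value;
    }

    static void Main()
    {
        Console.WriteLine(ScaleVector(640, 0f));    // centre of a 640-px axis: 320
        Console.WriteLine(ScaleVector(640, 1f));    // right edge: 640
        Console.WriteLine(ScaleVector(640, -1.5f)); // out of range, clamped to 0
    }
}
```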

MainWindow.xaml.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Shapes;
using Microsoft.Kinect;

using BAL;
namespace Stroke_Recovery
{
    public partial class MainWindow : Window
    {
        KinectSensor sensor;
        GestureRecognition gr;
        int[] co;
        public MainWindow()
        {
            InitializeComponent();
            //After Initialization subscribe to the loaded event of the form 
            Loaded += MainWindow_Loaded;

            //After Initialization subscribe to the unloaded event of the form
            //We use this event to stop the sensor when the application is being closed.
           Unloaded += MainWindow_Unloaded;
        }
        void MainWindow_Unloaded(object sender, RoutedEventArgs e)
        {
            //sensor.Stop();
        }
        void MainWindow_Loaded(object sender, RoutedEventArgs e)
        {
            var parameters = new TransformSmoothParameters
            {
                Smoothing = 0.3f,
                Correction = 0.0f,
                Prediction = 0.0f,
                JitterRadius = 1.0f,
                MaxDeviationRadius = 0.04f
            };

            sensor.SkeletonStream.Enable(parameters);
            // Subscribe to the SkeletonFrameReady event; it fires whenever the Kinect sensor produces a skeleton frame.
            sensor.SkeletonFrameReady += new EventHandler<SkeletonFrameReadyEventArgs>(gr.TrackSkeleton);
            sensor.Start();
        }
    }
}


Solution

Where are you creating an instance of your GestureRecognition class? The application starts in the MainWindow constructor, and you cannot use sensor.SkeletonStream there because you have never created an instance of the sensor or of that class. Have a look at the SkeletonBasics sample for code showing how to initialize the sensor.

You may just want to get rid of that class and combine the code.
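Following that suggestion, a minimal sketch of the combined approach might look like the code below: the sensor is initialized and the skeleton handler wired up directly in the window's Loaded handler, assuming Kinect for Windows SDK v1.x and that TrackSkeleton has been moved into MainWindow. This is an illustration of the answer's advice, not the poster's actual code.

```csharp
// Sketch only: combines sensor initialization and the gesture handler in MainWindow,
// as the answer suggests. Assumes Kinect for Windows SDK v1.x and `using System.Linq;`.
void MainWindow_Loaded(object sender, RoutedEventArgs e)
{
    // Pick the first connected sensor instead of assuming KinectSensors[0] is valid.
    sensor = KinectSensor.KinectSensors.FirstOrDefault(s => s.Status == KinectStatus.Connected);
    if (sensor == null)
    {
        MessageBox.Show("No Kinect sensor connected.");
        return;
    }

    var parameters = new TransformSmoothParameters
    {
        Smoothing = 0.3f,
        Correction = 0.0f,
        Prediction = 0.0f,
        JitterRadius = 1.0f,
        MaxDeviationRadius = 0.04f
    };

    sensor.SkeletonStream.Enable(parameters);
    sensor.SkeletonFrameReady += TrackSkeleton; // TrackSkeleton moved into MainWindow
    sensor.Start();
}

void MainWindow_Unloaded(object sender, RoutedEventArgs e)
{
    if (sensor != null)
    {
        sensor.Stop();
    }
}
```

With everything in one class there is no GestureRecognition instance to forget to create, which is exactly the failure the question describes.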

