Kinect SDK 1.7: Mapping Joint/Cursor Coordinates to Screen Resolution

Problem Description

Unfortunately I'm still struggling a little with the new Kinect SDK 1.7. This problem is actually in the same context as "finding events via reflection c#" and the Click event discussed there (however, that question is not necessary for understanding this one).

My problem is simple: if I have my right hand controlling the cursor (the "new" Kinect HandPointer) and it's in the upper left corner of the screen, I want it to return the coordinates (0,0). If the cursor is in the lower right corner, the coordinates should be (1920,1080), i.e. the current screen resolution.

The new SDK has so-called PhysicalInteractionZones (PIZ) for each HandPointer (up to 4); they move with the HandPointers and have values from 0.0 (upper left) to 1.0 (lower right). That basically means I can't use them for mapping to the screen, since they change dynamically with the user's movement in front of the Kinect. At least, I was unable to find a way to make that work.

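To make the target mapping concrete: what I want is simply a linear scale of a normalized (0.0 to 1.0) position to the current screen resolution, roughly like this sketch (the variable names are only illustrative):

    double normalizedX = 0.75, normalizedY = 0.5;  // a hand position in the 0.0 to 1.0 range
    double screenX = normalizedX * SystemParameters.PrimaryScreenWidth;   // e.g. 0.75 * 1920 = 1440
    double screenY = normalizedY * SystemParameters.PrimaryScreenHeight;  // e.g. 0.50 * 1080 = 540
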
I then tried it via SkeletonStream: the coordinates for the right hand are tracked and as soon as a click gesture is registered, the Click-Event triggers at this specific point. I tried it with the following code:

private void ksensor_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    using (SkeletonFrame frame = e.OpenSkeletonFrame())
    {
        if (frame != null)
        {
            frame.CopySkeletonDataTo(this._FrameSkeletons);
            var accelerometerReading =
                Settings.Instance.ksensor.AccelerometerGetCurrentReading();
            ProcessFrame(frame);
            _InteractionStream.ProcessSkeleton(_FrameSkeletons,
                accelerometerReading, frame.Timestamp);
        }
    }
}

private void ProcessFrame(ReplaySkeletonFrame frame)
{
    foreach (var skeleton in frame.Skeletons)
    {
        if (skeleton.TrackingState != SkeletonTrackingState.Tracked)
            continue;
        foreach (Joint joint in skeleton.Joints)
        {
            if (joint.TrackingState != JointTrackingState.Tracked)
                continue;
            if (joint.JointType == JointType.HandRight)
            {
                _SwipeGestureDetectorRight.Add(joint.Position,
                    Settings.Instance.ksensor);
                _RightHand = GetPosition(joint);
                myTextBox.Text = _RightHand.ToString();
            }
            if (joint.JointType == JointType.HandLeft)
            {
                _SwipeGestureDetectorLeft.Add(joint.Position,
                    Settings.Instance.ksensor);
                _LeftHand = GetPosition(joint);
            }
        }
    }
}

The helper method GetPosition is defined as follows:

private Point GetPosition(Joint joint)
{
    DepthImagePoint point = Settings.Instance.ksensor.CoordinateMapper.MapSkeletonPointToDepthPoint(
        joint.Position, Settings.Instance.ksensor.DepthStream.Format);
    point.X *= (int)Settings.Instance.mainWindow.ActualWidth
        / Settings.Instance.ksensor.DepthStream.FrameWidth;
    point.Y *= (int)Settings.Instance.mainWindow.ActualHeight
        / Settings.Instance.ksensor.DepthStream.FrameHeight;

    return new Point(point.X, point.Y);
}

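(A side note on GetPosition: the scale factors above are computed with integer division, so they are truncated to whole numbers, e.g. 1366 / 640 becomes 2. A floating-point variant of the same idea would look roughly like the sketch below; the method name is just for illustration and this is untested:)

private Point GetPositionScaled(Joint joint)
{
    // Same mapping as above, but keeping the scale factors as doubles.
    DepthImagePoint point = Settings.Instance.ksensor.CoordinateMapper.MapSkeletonPointToDepthPoint(
        joint.Position, Settings.Instance.ksensor.DepthStream.Format);

    double scaleX = Settings.Instance.mainWindow.ActualWidth
        / Settings.Instance.ksensor.DepthStream.FrameWidth;
    double scaleY = Settings.Instance.mainWindow.ActualHeight
        / Settings.Instance.ksensor.DepthStream.FrameHeight;

    return new Point(point.X * scaleX, point.Y * scaleY);
}
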
As soon as the click gesture is detected, a simple invokeClick(_RightHand) is called and performs the click. The click itself is working perfectly fine (thanks again to the people who answered on that issue). What is not working so far is the mapping of the coordinates, since I only get coordinates in the following ranges:

x-axis: 900 - 1500 (from left to right)
y-axis: 300 - 740 (from top to bottom)

And these coordinates even vary by 100 or 200 pixels each time I try to reach one specific point on the screen. For example, the left-hand side of the screen is 900 at first, but when I move my hand out of the range of the Kinect (behind my back or under the table) and repeat the movement towards the left-hand side, I suddenly get coordinates of 700 or something around that. I even tried the ScaleTo methods from Coding4Fun.Kinect.Wpf (ScaleTo(1920, 1080) and ScaleTo(SystemParameters.PrimaryScreenWidth, SystemParameters.PrimaryScreenHeight) respectively), but that just gave me crazy coordinates like x: 300000, y: -100000 or 240000. I'm running out of ideas, so I hope someone out there has one for me or even a solution for this.

Sorry for the long text but I've tried to be as specific as I could be. Thanks in advance for any help!

Answer

The InteractionHandPointer class contains the screen coordinates for the hand. I adapted this code from the SDK demos; there are many hoops to jump through:

// Wire up the depth stream and the interaction stream.
this.sensor.DepthFrameReady += this.Sensor_DepthFrameReady;
this.interaction = new InteractionStream(sensor, new InteractionClient());
this.interaction.InteractionFrameReady += interaction_InteractionFrameReady;

...

private void Sensor_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    using (var frame = e.OpenDepthImageFrame())
    {
        if (frame != null)
        {
            try
            {
                // Feed the raw depth pixels to the interaction stream.
                interaction.ProcessDepth(frame.GetRawPixelData(), frame.Timestamp);
            }
            catch (InvalidOperationException) { }
        }
    }
}

private void interaction_InteractionFrameReady(object sender, InteractionFrameReadyEventArgs e)
{
    using (var frame = e.OpenInteractionFrame())
    {
        if (frame != null)
        {
            if ((interactionData == null) ||
                (interactionData.Length != InteractionStream.FrameUserInfoArrayLength))
            {
                interactionData = new UserInfo[InteractionStream.FrameUserInfoArrayLength];
            }
            frame.CopyInteractionDataTo(interactionData);

            foreach (var ui in interactionData)
            {
                foreach (var hp in ui.HandPointers)
                {
                    // hp.X and hp.Y are normalized (0.0 to 1.0) within the user's
                    // interaction zone; scale by the target size (DisplayWidth /
                    // DisplayHeight, e.g. your window or screen size) to get pixels.
                    var screenX = hp.X * DisplayWidth;
                    var screenY = hp.Y * DisplayHeight;

                    // You can also access IsGripped, IsPressed etc.
                }
            }
        }
    }
}

// A minimal IInteractionClient is required by the InteractionStream constructor;
// returning a default InteractionInfo is enough if you don't need per-control
// hit testing.
public class InteractionClient : IInteractionClient
{
    public InteractionInfo GetInteractionInfoAtLocation(
        int skeletonTrackingId,
        InteractionHandType handType,
        double x, double y)
    {
        return new InteractionInfo();
    }
}

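One more hoop that's easy to miss: the interaction stream only produces hand pointer data if it is also fed skeleton frames, which the question's SkeletonFrameReady handler already does via ProcessSkeleton. A minimal sketch of that side (the field and handler names are just examples) looks like this:

this.sensor.SkeletonFrameReady += this.Sensor_SkeletonFrameReady;

...

private Skeleton[] skeletons;

private void Sensor_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    using (var frame = e.OpenSkeletonFrame())
    {
        if (frame != null)
        {
            if ((skeletons == null) || (skeletons.Length != frame.SkeletonArrayLength))
            {
                skeletons = new Skeleton[frame.SkeletonArrayLength];
            }
            frame.CopySkeletonDataTo(skeletons);

            try
            {
                // The accelerometer reading lets the stream account for sensor tilt.
                interaction.ProcessSkeleton(skeletons,
                    sensor.AccelerometerGetCurrentReading(),
                    frame.Timestamp);
            }
            catch (InvalidOperationException) { }
        }
    }
}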