Map RGB output of another camera with the skeletal tracking functionality of the Kinect sensor


Problem description



Hi,

I was wondering whether it is possible to take RGB data or a video stream from another video camera while simultaneously getting skeletal tracking data from the Kinect sensor, and then map the sensor's data onto the real-time video captured from the other camera.

Is there any method by which I can implement this?

Solution

If I were going to tackle this problem, I would assume that I could map the skeleton onto the Kinect color stream frame using CoordinateMapper.MapSkeletonPointToColorPoint(), and then, based on the dimensions of that color stream image (e.g. 640x480), translate those points to the dimensions of the other camera's video stream...
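A minimal sketch of that idea, assuming the Kinect for Windows SDK v1.x (Microsoft.Kinect) and an external camera of known resolution; the helper name MapJointToExternalFrame, the extWidth/extHeight parameters, and the 640x480 color format are my own choices for illustration, not part of the original answer:

```csharp
using System.Drawing;
using Microsoft.Kinect;

public static class SkeletonOverlay
{
    // Map a single skeleton joint into the pixel space of an external camera.
    // The Kinect color frame (640x480 here) is only used as an intermediate
    // coordinate system; extWidth/extHeight describe the other camera's frame
    // (assumed values - adjust to whatever your camera actually delivers).
    public static PointF MapJointToExternalFrame(
        KinectSensor sensor, Joint joint, int extWidth, int extHeight)
    {
        // Step 1: skeleton space (meters) -> Kinect color pixel coordinates.
        ColorImagePoint colorPoint = sensor.CoordinateMapper.MapSkeletonPointToColorPoint(
            joint.Position, ColorImageFormat.RgbResolution640x480Fps30);

        // Step 2: rescale from the 640x480 Kinect color frame to the external
        // camera's frame. This only lines up if both cameras are mounted
        // together and see roughly the same field of view (an assumption).
        float x = colorPoint.X * (extWidth / 640f);
        float y = colorPoint.Y * (extHeight / 480f);
        return new PointF(x, y);
    }
}
```

With a tracked Skeleton from a SkeletonFrameReady handler, a call such as MapJointToExternalFrame(sensor, skeleton.Joints[JointType.Head], 1280, 720) would give the pixel at which to draw the head on the other camera's 1280x720 frame.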


I would think you would have to take into account whether the DPIs of the images are different, and this assumes that the other video source sits basically on top of/below the Kinect sensor (and that the zoom of the other image is the same as the Kinect image). Otherwise the data will be off and you would have to translate/transform the values to match any differences (i.e. if the video camera is zoomed in, you would have to scale the points so that things line up; if the camera is off to the side, shift them; etc.).
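A rough sketch of such a correction; this is not something the SDK provides, and the scale and offset values are hypothetical calibration numbers you would have to measure yourself, e.g. by lining up a known object in both images:

```csharp
using System.Drawing;

public static class OverlayCorrection
{
    // Apply a rough zoom/offset correction to a point already expressed in the
    // external camera's pixel space. scaleX/scaleY model a zoom difference and
    // offsetX/offsetY a sideways/vertical displacement between the two cameras;
    // all four values come from manual calibration (assumed), not from the SDK.
    public static PointF Adjust(
        PointF p, int frameWidth, int frameHeight,
        float scaleX, float scaleY, float offsetX, float offsetY)
    {
        // Scale about the center of the frame, then translate.
        float cx = frameWidth / 2f;
        float cy = frameHeight / 2f;
        return new PointF(
            (p.X - cx) * scaleX + cx + offsetX,
            (p.Y - cy) * scaleY + cy + offsetY);
    }
}
```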




