Performance dependent on depth of subject


Problem description




Hi,

I noticed that the performance (in terms of how long the Kinect SDK takes to process a frame to extract the skeleton) seems to depend on how far the person is from the camera. For example, if I am standing close, the algorithm is much slower than if I am further away. I presume this is because the classification algorithm has to process more pixels, and the algorithm used to cluster the pixels also takes longer, since there are more pixels to consider. Has anyone else noticed this behaviour? Is there any way around it? I find that when someone is standing close to the camera, frames of the depth image become more likely to get dropped.

Thanks

Ben
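Before looking for a workaround, it helps to quantify the effect Ben describes: time each frame's skeleton-extraction step and flag frames that exceed the per-frame budget (at 30 fps, roughly 33 ms), since those are the ones most likely to be dropped. This is a minimal, generic sketch; `process_frame` is a hypothetical stand-in for the actual SDK call, not part of the Kinect API.

```python
import time

def process_frame(frame):
    # Hypothetical placeholder for the per-frame skeleton-extraction step;
    # substitute the real Kinect SDK processing call here.
    return sum(frame) / len(frame)

def measure_latency(frames, budget_s=1 / 30):
    """Time each frame and count those exceeding the per-frame budget
    (30 fps -> ~33 ms), a rough proxy for frames likely to be dropped."""
    latencies = []
    over_budget = 0
    for frame in frames:
        start = time.perf_counter()
        process_frame(frame)
        elapsed = time.perf_counter() - start
        latencies.append(elapsed)
        if elapsed > budget_s:
            over_budget += 1
    return latencies, over_budget

frames = [[i, i + 1, i + 2] for i in range(100)]
latencies, dropped = measure_latency(frames)
print(f"mean latency: {sum(latencies) / len(latencies) * 1000:.3f} ms, "
      f"over budget: {dropped}")
```

Comparing the mean latency (and over-budget count) for near versus far recordings would confirm whether processing time, rather than USB bandwidth, is what drives the dropped depth frames.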

Solution

How close are you standing to the camera? The algorithm is designed for full-body tracking, so if your legs, head or arms are not fully visible in every frame, the algorithm has to do more work to recover: it tries to find where the feet, ankles and knees are for every skeleton, and if it doesn't find a good match in the frame, it has to infer a good position for those joints.

Eddy
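Eddy's point can be sketched as follows: joints the tracker cannot observe directly are marked as inferred, and the more inferred joints a skeleton has, the more recovery work each frame costs. The state names below mirror the Kinect SDK's `JointTrackingState` values, but the dictionaries are hypothetical stand-ins, not the SDK's actual skeleton data structures.

```python
# Joint tracking states, mirroring the Kinect SDK's JointTrackingState enum.
TRACKED, INFERRED, NOT_TRACKED = "Tracked", "Inferred", "NotTracked"

def inferred_ratio(skeleton):
    """Fraction of joints the tracker had to infer rather than observe."""
    states = list(skeleton.values())
    return states.count(INFERRED) / len(states)

# Someone standing close to the camera: feet, ankles and knees are cut off
# at the frame edge, so the tracker must infer them every frame.
close_up = {
    "head": TRACKED, "shoulder_c": TRACKED, "spine": TRACKED,
    "hip_c": TRACKED, "knee_l": INFERRED, "knee_r": INFERRED,
    "ankle_l": INFERRED, "ankle_r": INFERRED,
    "foot_l": INFERRED, "foot_r": INFERRED,
}
# Full body visible at a moderate distance: everything is observed directly.
full_body = {name: TRACKED for name in close_up}

print(inferred_ratio(close_up))   # 0.6
print(inferred_ratio(full_body))  # 0.0
```

Logging a ratio like this per frame in a real application would show whether slow frames correlate with partially visible bodies, which is the behaviour Eddy describes.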

