Finding point cloud from Depth frame


Problem Description

Hi all,

I'm trying to extract a point cloud from the Kinect v2 sensor.

As I understand it- I should use the "CoordinateMapper.MapDepthFrameToCameraSpaceUsingIntPtr" method to convert the underlying depth buffer into a CameraSpacePoint array. 

Is this correct? And if so: I'm unsure of how to size the CameraSpacePoint array as I keep getting errors telling me I'm outside the expected range.

Edit: So having set the size to depthFrame.FrameDescription.LengthInPixels, I've gotten rid of the original errors. However, when trying to print the CameraSpacePoint X, Y, Z values, they're all set to +/- infinity.
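For reference, the Kinect v2 depth frame is 512×424 pixels, so the CameraSpacePoint array needs one entry per depth pixel. A quick sanity check of the sizes (512×424 is the documented Kinect v2 depth resolution; the arithmetic below is purely illustrative):

```python
# Kinect v2 depth frame dimensions (documented SDK values)
DEPTH_WIDTH = 512
DEPTH_HEIGHT = 424

# One CameraSpacePoint is needed per depth pixel.
length_in_pixels = DEPTH_WIDTH * DEPTH_HEIGHT
print(length_in_pixels)  # 217088 -- matches depthFrame.FrameDescription.LengthInPixels

# The raw depth buffer holds one ushort (2 bytes) per pixel; each
# CameraSpacePoint holds 3 floats (12 bytes), so the output buffer
# is six times the size of the input buffer.
depth_buffer_bytes = length_in_pixels * 2
camera_space_bytes = length_in_pixels * 12
print(depth_buffer_bytes, camera_space_bytes)  # 434176 2605056
```

If the byte sizes passed to the IntPtr overload don't line up with these ratios, the "outside the expected range" errors reappear.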

The depth frame visualisation is rendering fine, so it's not like the depth frame is broken...
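The mapping the SDK performs is essentially a pinhole back-projection, and pixels with no valid depth reading (raw value 0) come back as ±infinity. A rough sketch of the geometry; the intrinsics fx, fy, cx, cy here are made-up placeholders, not the real Kinect calibration, and the axis convention is illustrative only:

```python
import math

def depth_to_camera_space(u, v, depth_mm, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    """Back-project depth pixel (u, v) to camera-space metres.
    Invalid depth (0) yields infinite coordinates, mirroring how the
    SDK flags unmappable pixels."""
    if depth_mm == 0:
        return (-math.inf, -math.inf, -math.inf)
    z = depth_mm / 1000.0           # millimetres -> metres
    x = (u - cx) * z / fx
    y = (cy - v) * z / fy           # flip v so +Y points up
    return (x, y, z)

print(depth_to_camera_space(256, 212, 1500))  # (0.0, 0.0, 1.5)
print(depth_to_camera_space(100, 100, 0))     # (-inf, -inf, -inf)
```

A few infinite points at the frame edges are normal, but if every point is infinite the mapper was most likely handed a stale or zeroed depth buffer, so it is worth checking what the IntPtr actually points at when the call is made.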

Recommended Answer


You should be mapping color to depth for a 1 to 1 mapping. Mapping depth to color will have a lot more "unknown" regions given the size of the color frame and how the mapping to color would be done. If you are not getting data, check your parameters. Can you provide a snippet of the code to see what you are passing in for values?
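Whichever mapping direction is used, it helps to drop the unmappable points before building the cloud. A minimal stdlib-only sketch, assuming the mapped points have been read back as (X, Y, Z) tuples:

```python
import math

def valid_points(camera_space_points):
    """Keep only points where all three coordinates are finite;
    the SDK marks unmappable pixels with +/- infinity."""
    return [p for p in camera_space_points
            if all(math.isfinite(c) for c in p)]

mapped = [(0.1, 0.2, 1.5),
          (-math.inf, -math.inf, -math.inf),   # unmappable pixel
          (0.0, 0.0, 2.0)]
print(valid_points(mapped))  # [(0.1, 0.2, 1.5), (0.0, 0.0, 2.0)]
```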

