Extracting 3D coordinates given 2D image points, depth map and camera calibration matrices
Question
I have a set of 2D image keypoints that are output from the OpenCV FAST corner detection function. Using an Asus Xtion I also have a time-synchronised depth map with all camera calibration parameters known. Using this information I would like to extract a set of 3D coordinates (point cloud) in OpenCV.
Can anyone give me any pointers regarding how to do so? Thanks in advance!
Solution
Nicolas Burrus has created a great tutorial for depth sensors like the Kinect.
http://nicolas.burrus.name/index.php/Research/KinectCalibration
I'll copy & paste the most important parts:
Mapping depth pixels with color pixels
The first step is to undistort rgb and depth images using the estimated distortion coefficients. Then, using the depth camera intrinsics, each pixel (x_d,y_d) of the depth camera can be projected to metric 3D space using the following formula:
P3D.x = (x_d - cx_d) * depth(x_d,y_d) / fx_d
P3D.y = (y_d - cy_d) * depth(x_d,y_d) / fy_d
P3D.z = depth(x_d,y_d)
with fx_d, fy_d, cx_d and cy_d the intrinsics of the depth camera.
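The back-projection above can be sketched in Python with NumPy. This is a minimal sketch under two assumptions: the depth image is already undistorted, and its values are already metric (Xtion/Kinect drivers often deliver millimetres, which would need a scale factor first).

```python
import numpy as np

def depth_to_3d(depth, fx_d, fy_d, cx_d, cy_d):
    """Back-project every depth pixel (x_d, y_d) into metric 3D space.

    depth: HxW array of metric depth values (assumed undistorted).
    Returns an HxWx3 array of (X, Y, Z) points in the depth camera frame.
    """
    h, w = depth.shape
    # Pixel coordinate grids: x_d runs along columns, y_d along rows.
    x_d, y_d = np.meshgrid(np.arange(w), np.arange(h))
    X = (x_d - cx_d) * depth / fx_d
    Y = (y_d - cy_d) * depth / fy_d
    return np.dstack((X, Y, depth))
```

To keep only the FAST keypoints rather than the full cloud, index the returned array with the keypoint coordinates, e.g. `points[kp_y, kp_x]`.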
If you are further interested in stereo mapping (values for the Kinect):
We can then reproject each 3D point on the color image and get its color:
P3D' = R.P3D + T
P2D_rgb.x = (P3D'.x * fx_rgb / P3D'.z) + cx_rgb
P2D_rgb.y = (P3D'.y * fy_rgb / P3D'.z) + cy_rgb
with R and T the rotation and translation parameters estimated during the stereo calibration.
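The reprojection step can be sketched the same way. The function name is my own; `R` is assumed to be a 3x3 NumPy array and `T` a 3-vector, in the layout quoted from the calibration output below.

```python
import numpy as np

def project_to_rgb(p3d, R, T, fx_rgb, fy_rgb, cx_rgb, cy_rgb):
    """Transform a 3D point from the depth camera frame into the RGB
    camera frame, then project it onto the color image plane."""
    p = R @ np.asarray(p3d) + np.asarray(T)   # P3D' = R.P3D + T
    u = p[0] * fx_rgb / p[2] + cx_rgb         # P2D_rgb.x
    v = p[1] * fy_rgb / p[2] + cy_rgb         # P2D_rgb.y
    return u, v
```

Note that (u, v) are continuous coordinates; to sample a color you would round (or bilinearly interpolate) and bounds-check against the RGB image size.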
The parameters I could estimate for my Kinect are:
Color
fx_rgb 5.2921508098293293e+02
fy_rgb 5.2556393630057437e+02
cx_rgb 3.2894272028759258e+02
cy_rgb 2.6748068171871557e+02
k1_rgb 2.6451622333009589e-01
k2_rgb -8.3990749424620825e-01
p1_rgb -1.9922302173693159e-03
p2_rgb 1.4371995932897616e-03
k3_rgb 9.1192465078713847e-01
Depth
fx_d 5.9421434211923247e+02
fy_d 5.9104053696870778e+02
cx_d 3.3930780975300314e+02
cy_d 2.4273913761751615e+02
k1_d -2.6386489753128833e-01
k2_d 9.9966832163729757e-01
p1_d -7.6275862143610667e-04
p2_d 5.0350940090814270e-03
k3_d -1.3053628089976321e+00
Relative transform between the sensors (in meters)
R [ 9.9984628826577793e-01, 1.2635359098409581e-03, -1.7487233004436643e-02,
    -1.4779096108364480e-03, 9.9992385683542895e-01, -1.2251380107679535e-02,
    1.7470421412464927e-02, 1.2275341476520762e-02, 9.9977202419716948e-01 ]
T [ 1.9985242312092553e-02, -7.4423738761617583e-04, -1.0916736334336222e-02 ]
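Putting the two steps together with the quoted values, a single depth pixel can be mapped to its color pixel. This is an end-to-end sketch, not Burrus's own code: the function name is mine, the constants are copied verbatim from his calibration above, and the depth value is assumed to be metric.

```python
import numpy as np

# Calibration values quoted from Nicolas Burrus's Kinect, above.
FX_D, FY_D = 5.9421434211923247e+02, 5.9104053696870778e+02
CX_D, CY_D = 3.3930780975300314e+02, 2.4273913761751615e+02
FX_RGB, FY_RGB = 5.2921508098293293e+02, 5.2556393630057437e+02
CX_RGB, CY_RGB = 3.2894272028759258e+02, 2.6748068171871557e+02
R = np.array([[ 9.9984628826577793e-01,  1.2635359098409581e-03, -1.7487233004436643e-02],
              [-1.4779096108364480e-03,  9.9992385683542895e-01, -1.2251380107679535e-02],
              [ 1.7470421412464927e-02,  1.2275341476520762e-02,  9.9977202419716948e-01]])
T = np.array([1.9985242312092553e-02, -7.4423738761617583e-04, -1.0916736334336222e-02])

def depth_pixel_to_rgb(x_d, y_d, z):
    """Map one depth pixel (x_d, y_d) with metric depth z to RGB pixel coords."""
    # Back-project into the depth camera frame.
    p3d = np.array([(x_d - CX_D) * z / FX_D,
                    (y_d - CY_D) * z / FY_D,
                    z])
    # Transform into the RGB camera frame and project.
    p = R @ p3d + T
    return p[0] * FX_RGB / p[2] + CX_RGB, p[1] * FY_RGB / p[2] + CY_RGB
```

With a different sensor you would substitute your own intrinsics and the R/T from your stereo calibration; the structure of the computation is unchanged.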