OpenCv depth estimation from Disparity map


Question

I'm trying to estimate depth from a stereo image pair with OpenCV. I have a disparity map, and the depth estimate can be obtained as:

             (Baseline*focal)
depth  =     ------------------
           (disparity*SensorSize)
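As a quick sanity check, the formula can be evaluated with hypothetical numbers (a 12 cm baseline, 6 mm focal length, 6 µm pixel pitch and a 40-pixel disparity — all invented for illustration, not taken from the question):

```python
# Hypothetical camera parameters, metric units throughout.
baseline_m = 0.12        # baseline B (hypothetical)
focal_m = 0.006          # focal length f (hypothetical)
sensor_elem_m = 6e-6     # sensor element / pixel pitch (hypothetical)
disparity_px = 40.0      # disparity in pixels (hypothetical)

# depth = (B * f) / (disparity * pixel_pitch)
depth_m = (baseline_m * focal_m) / (disparity_px * sensor_elem_m)
print(depth_m)  # ≈ 3.0 m
```

Note that the formula expects the true pixel disparity; any rescaled or normalized disparity value will shift the result.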

I have used the Block Matching technique to find corresponding points in the two rectified images. OpenCV lets you set several block-matching parameters, for example BMState->numberOfDisparities.

After block matching process:

cvFindStereoCorrespondenceBM( frame1r, frame2r, disp, BMState);
cvConvertScale( disp, disp, 16, 0 );
cvNormalize( disp, vdisp, 0, 255, CV_MINMAX );
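One likely source of the mismatch (this is my reading of the C API's documented behaviour, worth verifying for your OpenCV version): the block matcher stores CV_16S disparities as fixed-point values scaled by 16, so the true pixel disparity is the raw value divided by 16 — and the normalized vdisp (stretched to 0..255) no longer carries any metric disparity at all. A sketch of the conversion:

```python
# Assumed behaviour: the BM matcher stores disparity as 16-bit
# fixed point with 4 fractional bits (stored value = true disparity * 16).
raw_value = 648               # hypothetical CV_16S sample from the disparity map
true_disparity_px = raw_value / 16.0
print(true_disparity_px)      # 40.5 pixels
```

So the depth formula should be fed the raw disparity divided by 16, not a value read from the normalized display image.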

I then compute the depth value as:

if (cvGet2D(vdisp, y, x).val[0] > 0)
{
    depth = (baseline * focal) / (cvGet2D(vdisp, y, x).val[0] * SENSOR_ELEMENT_SIZE);
}

But the depth value obtained this way differs from the value given by the formula above, and it changes with BMState->numberOfDisparities.

How should I set this parameter? What does changing it affect?

Thanks

Solution

The simple formula is valid if and only if the motion from left camera to right one is a pure translation (in particular, parallel to the horizontal image axis).

In practice this is hardly ever the case. It is common, for example, to perform the matching after rectifying the images, i.e. after warping them with transforms derived from a known fundamental matrix, so that corresponding pixels are constrained to lie on the same row. Once you have matches on the rectified images, you can remap them onto the original images using the inverse of the rectifying warp, and then triangulate into 3D space to reconstruct the scene. OpenCV has a routine for this: reprojectImageTo3D.
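For intuition, the core of that reprojection can be sketched for a single pixel in the simplified case where both rectified cameras share the same principal point. The Q matrix below is the simplified form for that case (the general one comes out of cv::stereoRectify); all the numbers are hypothetical:

```python
# Hypothetical rectified-camera parameters.
f = 1000.0              # focal length in pixels (hypothetical)
cx, cy = 320.0, 240.0   # shared principal point (hypothetical)
B = 0.12                # baseline in metres (hypothetical)

def reproject(x, y, d):
    """Apply the simplified Q matrix to pixel (x, y) with disparity d:
    [X Y Z W]^T = Q [x y d 1]^T, then divide by W."""
    X, Y, Z, W = x - cx, y - cy, f, d / B
    return X / W, Y / W, Z / W   # 3D point in metres

x3, y3, z3 = reproject(400.0, 300.0, 40.0)
print(z3)   # depth Z = f*B/d ≈ 3.0 m
```

The recovered depth Z equals f*B/d, i.e. the same baseline-times-focal-over-disparity relation as the simple formula, but expressed consistently in pixel units via the rectified calibration rather than an ad-hoc sensor-size constant.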

