Depth reconstruction from disparity map using stereo camera


Problem Description

I'm working on depth reconstruction from a disparity map. I use OpenCV to calibrate my stereo camera, then undistort and rectify the images. I use LibELAS to compute the disparity map.

My question is: according to the OpenCV documentation (https://docs.opencv.org/3.1.0/dd/d53/tutorial_py_depthmap.html), depth is computed as depth = baseline * focal_length / disparity. But according to the Middlebury dataset (http://vision.middlebury.edu/stereo/data/scenes2014/), depth is computed as depth = baseline * focal_length / (disparity + doffs), where "doffs" is the "x-difference of principal points, doffs = cx1 - cx0".
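For illustration, here is a minimal NumPy sketch of how the Middlebury formula could be applied to a disparity map; the baseline, focal length, doffs and disparity values below are placeholders, not real calibration results:

    import numpy as np

    # Placeholder values -- substitute your own calibration results.
    baseline = 0.12        # baseline in metres (distance between the camera centres)
    focal_length = 700.0   # focal length in pixels (fx of the rectified cameras)
    doffs = 2.5            # x-difference of principal points, in pixels

    # Placeholder disparity map; in practice this comes from LibELAS.
    disparity = np.full((480, 640), 32.0, dtype=np.float32)

    # depth = baseline * focal_length / (disparity + doffs), with a guard
    # against zero or negative denominators (invalid / unmatched pixels).
    denom = disparity + doffs
    depth = np.where(denom > 0, baseline * focal_length / np.maximum(denom, 1e-6), 0.0)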

doffs"是什么意思?如何从 OpenCV 校准中获得doffs"?

What does the "doffs" mean ? How can I get the "doffs" from OpenCV calibration ?

Answer

The OpenCV calibration gives you the intrinsic matrix for each of your two cameras. These are 3x3 matrices with the following layout (from the documentation):

  fx   0  cx
   0  fy  cy
   0   0   1

cx and cy are the coordinates of the principal point. From there you can calculate doffs exactly as you stated. For ideal cameras these parameters are the center of the image, but in real cameras they differ by a few pixels.
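As a small sketch of how doffs can be read off from the calibration output (K1 and K2 stand in for the 3x3 intrinsic matrices returned by a cv2.stereoCalibrate run; the numbers are placeholders):

    import numpy as np

    # Placeholder intrinsic matrices; use the ones from your own calibration.
    K1 = np.array([[700.0,   0.0, 320.5],
                   [  0.0, 700.0, 240.3],
                   [  0.0,   0.0,   1.0]])
    K2 = np.array([[700.0,   0.0, 323.1],
                   [  0.0, 700.0, 239.8],
                   [  0.0,   0.0,   1.0]])

    cx0 = K1[0, 2]      # principal point x of the left camera
    cx1 = K2[0, 2]      # principal point x of the right camera
    doffs = cx1 - cx0   # "x-difference of principal points" in the Middlebury sense
    print(doffs)        # approx. 2.6 for these placeholder values

Note that if the disparity map is computed on images rectified with cv2.stereoRectify, the principal points of the rectified cameras are found in the returned projection matrices P1 and P2 (entries P1[0, 2] and P2[0, 2]) rather than in the original intrinsic matrices.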

