How does field of view change depth estimation in stereo vision?


Question

I'm trying to estimate depth from a stereo system with two cameras. The simple equation that I use is:

           Baseline*Focal
Depth = ----------------------
             Disparity

Doesn't the field of view of the two cameras change the maximum measurable depth? Does it only change the minimum measurable depth?

Answer

At the top end, the measurable depth is limited by the resolution of the cameras you use, which is reflected in the disparity. As depth becomes greater, the disparity tends to zero. With a greater field of view it will effectively reach zero at a lower depth. Thus a greater field of view lowers the maximum measurable depth, but you can compensate somewhat by using higher-resolution cameras.

To clarify: you should note that (if you do things correctly) you measure disparity in pixels, but then convert it to meters (or millimeters, as I do below). The full formula is then:

          Baseline * Focal length
Depth = ----------------------------
        Pixel disparity * Pixel size
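
As a sketch of that formula in code (the function name and units are my own, not from the answer):

    def depth_mm(baseline_mm, focal_mm, disparity_px, pixel_size_mm):
        # Stereo depth: disparity is measured in pixels and converted to mm via the pixel size.
        if disparity_px <= 0:
            raise ValueError("disparity must be positive; zero disparity means infinite depth")
        return (baseline_mm * focal_mm) / (disparity_px * pixel_size_mm)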

Suppose you have the following setup:

Baseline (b) = 8 cm (80 mm)
Focal length (f) = 6.3 mm
Pixel size (p) = 14 um (0.014 mm)

The smallest disparity you can measure is 1 pixel. With the known numbers this translates to:

Depth = (80*6.3)/(1*0.014) = 36,000 mm = 36 m

So in these circumstances this would be your cap. Note that your measurement is wildly inaccurate at this range. The next possible disparity (2 pixels) occurs at a depth of 18m, the next after that (3 pixels) at 12m, etc. Doubling your baseline would double the range to 72m. Doubling your focal length would also double your range, but note that both would negatively affect you at the short end. You could also increase your maximum depth by decreasing the pixel size.
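
A short script (same hypothetical numbers as above) reproduces those depth levels and the effect of doubling the baseline:

    baseline_mm, focal_mm, pixel_mm = 80.0, 6.3, 0.014

    # Near the far end only whole-pixel disparities are observable, so depth is quantised.
    for d_px in (1, 2, 3):
        depth_m = baseline_mm * focal_mm / (d_px * pixel_mm) / 1000
        print(f"{d_px} px -> {depth_m:.0f} m")   # 36 m, 18 m, 12 m

    # Doubling the baseline (or the focal length) doubles the 1-pixel depth ceiling.
    print(2 * baseline_mm * focal_mm / (1 * pixel_mm) / 1000)   # 72.0 m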

At a pixel size of 0.014 mm, you are probably talking about a CCD with a horizontal resolution of something like 1024 pixels, i.e. a CCD about 14.3 mm wide. If you double the number of pixels in the same area, you double your maximum range without losing anything at the near end (because the limitations there are determined by the baseline and focal length, which stay the same).
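
The sensor arithmetic can be checked the same way (again just a sketch with the assumed numbers):

    width_mm = 1024 * 0.014                   # ~14.3 mm wide CCD at 1024 pixels
    halved_pixel_mm = width_mm / (2 * 1024)   # 0.007 mm: twice the pixels over the same width
    print(80 * 6.3 / (1 * halved_pixel_mm) / 1000)   # 72.0 m, double the original 36 m ceiling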

This is a very good overview of the tradeoffs in depth measurement in stereo vision, and this article on Wikipedia has some good info on the relationship between pixel size, CCD size, focal length and field of view.
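
Finally, to connect this back to the original question (a sketch, not from the answer: the FOV-to-focal-length relation below assumes a pinhole camera and a fixed sensor width): widening the field of view shortens the focal length, which lowers the one-pixel depth ceiling, while higher resolution (smaller pixels on the same sensor) raises it again.

    import math

    def focal_from_hfov_mm(sensor_width_mm, hfov_deg):
        # Pinhole model: f = (w / 2) / tan(HFOV / 2)
        return (sensor_width_mm / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)

    # Hypothetical setup: 14.3 mm wide sensor, 0.014 mm pixels, 80 mm baseline.
    for hfov_deg in (50, 70, 90):
        f_mm = focal_from_hfov_mm(14.3, hfov_deg)
        max_depth_m = 80 * f_mm / (1 * 0.014) / 1000   # depth at a 1-pixel disparity
        print(f"HFOV {hfov_deg} deg -> f = {f_mm:.1f} mm, max depth ~ {max_depth_m:.0f} m")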
