OpenCV stereo vision 3D coordinates to 2D camera-plane projection different than triangulating 2D points to 3D


Problem description

I get an image point in the left camera (pointL) and the corresponding image point in the right camera (pointR) of my stereo camera using feature matching. The two cameras are parallel and at the same height; there is only an x-translation between them.

I also know the projection matrices for each camera (projL, projR), which I got during calibration using initUndistortRectifyMap.
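For context, a minimal sketch of where such rectified projection matrices typically come from: cv::stereoRectify produces the rectification rotations R1/R2 together with the projection matrices P1/P2 (taken here as projL/projR), and initUndistortRectifyMap only consumes them. Everything passed in below (cameraMatrixL/R, distCoeffsL/R, R, T, imageSize) is assumed to come from a prior cv::stereoCalibrate and is not given in the question.

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>

// Sketch: compute rectification rotations and rectified projection matrices.
// All inputs are assumed to come from cv::stereoCalibrate (not shown).
void computeRectifiedProjections(const cv::Mat& cameraMatrixL, const cv::Mat& distCoeffsL,
                                 const cv::Mat& cameraMatrixR, const cv::Mat& distCoeffsR,
                                 const cv::Size& imageSize, const cv::Mat& R, const cv::Mat& T,
                                 cv::Mat& R1, cv::Mat& R2, cv::Mat& projL, cv::Mat& projR)
{
    cv::Mat Q;  // 4x4 disparity-to-depth matrix, not needed here
    cv::stereoRectify(cameraMatrixL, distCoeffsL, cameraMatrixR, distCoeffsR,
                      imageSize, R, T, R1, R2, projL, projR, Q);
    // initUndistortRectifyMap(cameraMatrixL, distCoeffsL, R1, projL, imageSize, ...)
    // would then build the remap tables for the rectified left image.
}

The rectification rotation R1 is also what the answer below needs in order to map points back from the rectified frame to the original left image.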

For triangulating the point, I call triangulatePoints(projL, projR, pointL, pointR, pos3D) (see the documentation), where pos3D is the output 3D position of the object.
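A minimal sketch of that call, assuming projL/projR are 3x4 CV_64F matrices and pointL/pointR are matched pixel coordinates in the rectified images; the wrapper function and the final de-homogenization are my additions, not part of the question:

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>

// Triangulate a single matched point pair into a 3D point.
cv::Mat triangulateOne(const cv::Mat& projL, const cv::Mat& projR,
                       const cv::Point2d& pointL, const cv::Point2d& pointR)
{
    // triangulatePoints expects 2xN arrays of image points.
    cv::Mat ptsL = (cv::Mat_<double>(2, 1) << pointL.x, pointL.y);
    cv::Mat ptsR = (cv::Mat_<double>(2, 1) << pointR.x, pointR.y);

    cv::Mat pos4D;  // 4x1 homogeneous result (same depth as the input points)
    cv::triangulatePoints(projL, projR, ptsL, ptsR, pos4D);

    // Divide by the fourth (homogeneous) coordinate to obtain x, y, z.
    return pos4D.rowRange(0, 3) / pos4D.at<double>(3);
}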

Now, I want to project the 3D coordinates back into the 2D image of the left camera:

2Dpos = projL * 3dPos
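For reference, that multiplication as a small hedged sketch (my own helper; it assumes pos3D is the 4x1 homogeneous CV_64F point from triangulatePoints and projL the 3x4 projection matrix, and it divides by the third component of the result to get pixel coordinates):

#include <opencv2/core.hpp>

// Project a homogeneous 3D point with a 3x4 projection matrix.
cv::Point2d projectWithProjL(const cv::Mat& projL, const cv::Mat& pos3D)
{
    cv::Mat p = projL * pos3D;               // 3x1 homogeneous image point
    double w = p.at<double>(2);
    return cv::Point2d(p.at<double>(0) / w,  // pixel x
                       p.at<double>(1) / w); // pixel y
}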

The resulting x-coordinate is correct, but the y-coordinate is off by about 20 pixels.

How can I solve this?

EDIT
Of course, I need to use homogeneous coordinates in order to multiply it with the projection matrix (3x4). For that reason, I set:

3dPos[0] = x;
3dPos[1] = y;
3dPos[2] = z;
3dPos[3] = 1;

Is it wrong to set 3dPos[3] to 1?

Note:

1. Of course, I always use homogeneous coordinates


Answer

You are likely projecting into the rectified camera. You need to apply the inverse of the rectification warp to obtain the point in the original (undistorted) linear camera coordinates, and then apply the distortion to get into the original image.
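A hedged sketch of that correction, assuming the 3D point comes from triangulatePoints with the rectified projection matrices (so it is expressed in the rectified left-camera frame), and assuming R1 (the left rectification rotation from cv::stereoRectify), cameraMatrixL and distCoeffsL are available from calibration; none of these names appear in the question:

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Map a 3D point expressed in the rectified left-camera frame back into the
// original (distorted) left image.
cv::Point2d projectToOriginalLeftImage(const cv::Point3d& pos3D,
                                       const cv::Mat& R1,            // rectification rotation (left)
                                       const cv::Mat& cameraMatrixL, // original intrinsics (left)
                                       const cv::Mat& distCoeffsL)   // original distortion (left)
{
    // stereoRectify defines X_rect = R1 * X_cam, so rotating by R1^T (passed
    // to projectPoints as rvec) undoes the rectification before projecting.
    cv::Mat rvec;
    cv::Rodrigues(R1.t(), rvec);
    cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);

    std::vector<cv::Point3d> objectPoints{pos3D};
    std::vector<cv::Point2d> imagePoints;

    // projectPoints applies the rotation, the intrinsics and the distortion
    // model, giving pixel coordinates in the original left image.
    cv::projectPoints(objectPoints, rvec, tvec, cameraMatrixL, distCoeffsL, imagePoints);
    return imagePoints[0];
}

Multiplying by projL alone, as in the question, only yields coordinates in the rectified left image, which is consistent with the constant y-offset observed when comparing against the original image.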
