Rotation and Translation from Essential Matrix incorrect


Question

I currently have a stereo camera setup. I have calibrated both cameras and have the intrinsic matrix for both cameras K1 and K2.

K1 = [2297.311,      0,       319.498;
      0,       2297.313,      239.499;
      0,             0,       1];

K2 = [2297.304,      0,       319.508;
      0,       2297.301,      239.514;
      0,             0,       1];

I have also determined the Fundamental matrix F between the two cameras using findFundamentalMat() from OpenCV. I have tested the Epipolar constraint using a pair of corresponding points x1 and x2 (in pixel coordinates) and it is very close to 0.

F = [5.672563368940768e-10, 6.265600996978877e-06, -0.00150188302445251;
     6.766518121363063e-06, 4.758206104804563e-08,  0.05516598334827842;
     -0.001627120880791009, -0.05934224611334332,   1];

x1 = [133; 75; 1];           % homogeneous pixel coordinates
x2 = [124.661; 67.6607; 1];

transpose(x2)*F*x1           % = -0.0020, very close to 0
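The same residual check can be sketched in NumPy (a port of the MATLAB one-liner for reference, not part of the original pipeline; the points are promoted to homogeneous coordinates by appending a 1):

```python
import numpy as np

# Fundamental matrix and pixel correspondences from the question
F = np.array([[5.672563368940768e-10, 6.265600996978877e-06, -0.00150188302445251],
              [6.766518121363063e-06, 4.758206104804563e-08,  0.05516598334827842],
              [-0.001627120880791009, -0.05934224611334332,   1.0]])

x1 = np.array([133.0, 75.0, 1.0])        # homogeneous pixel coordinates, camera 1
x2 = np.array([124.661, 67.6607, 1.0])   # homogeneous pixel coordinates, camera 2

residual = x2 @ F @ x1                   # epipolar constraint x2' * F * x1
print(residual)                          # small value close to 0
```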

From F I am able to obtain the Essential Matrix E as E = K2'*F*K1. I decompose E using the MATLAB SVD function to get the 4 possibilites of rotation and translation of K2 with respect to K1.

E = transpose(K2)*F*K1;
[U,S,V] = svd(E);

% project E onto the essential-matrix manifold by forcing the
% singular values to (1,1,0), then decompose again
newE = U*diag([1 1 0])*transpose(V);
[U,S,V] = svd(newE);  % second decomposition gives S = diag(1,1,0)

W = [0 -1 0; 1 0 0; 0 0 1];

R1 = U*W*transpose(V);
R2 = U*transpose(W)*transpose(V);
t1 = U(:,3);   % norm = 1
t2 = -U(:,3);  % norm = 1
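The four-candidate decomposition above can be sanity-checked in NumPy against a synthetic pose. This is a sketch under the usual convention E = [t]_x * R; `skew` and `decompose_essential` are helper names introduced here, not OpenCV or toolbox functions:

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def decompose_essential(E):
    """Return the four (R, t) candidates from an essential matrix."""
    U, S, Vt = np.linalg.svd(E)
    # enforce proper rotations: U and V must have determinant +1
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]                  # unit-norm translation direction
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

# sanity check against a known pose: E = [t]_x * R
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, 0.0, 0.0])
E = skew(t_true) @ R_true
candidates = decompose_essential(E)
```

One of the four candidates reproduces E (up to the unrecoverable overall sign and scale), which is what makes the later in-front-of-camera test necessary.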

Let's say that K1 is used as the coordinate frame in which we make all measurements. Therefore, the center of K1 is at C1 = (0,0,0). With this, it should be possible to apply the correct rotation R and translation t such that C2 = R*(0,0,0) + t (i.e. the center of K2 is measured with respect to the center of K1).

Now consider my corresponding pair x1 and x2. Since I know the physical size of each pixel in both cameras, and the focal length from the intrinsic matrix, I should be able to determine two vectors v1 and v2, one per camera, that intersect at the same point, as seen below.

pixel_length = 7.4e-6;  % in meters
focal_length = 17e-3;   % in meters

dx1 = (133-319.5)*pixel_length;   % x-distance from principal point of the 640x480 image
dy1 = (75-239.5) *pixel_length;   % y-distance from principal point of the 640x480 image
v1  = [dx1; dy1; focal_length];   % ray from the camera center through the image point

dx2 = (124.661-319.5)*pixel_length;  % same idea
dy2 = (67.6607-239.5)*pixel_length;  % same idea
v2  = R * [dx2; dy2; focal_length] + t;  % apply R and t to express v2 in the K1 frame

With these vectors and the line equation in parametric form, we can equate the two lines and triangulate, solving for the two scalars s and t with MATLAB's left-division operator (\).

C1 + s*v1 = C2 + t*v2
C2 - C1 = [v1, -v2]*[s; t]   % solve the Ax = b system, e.g. [s; t] = [v1, -v2] \ (C2 - C1)
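A minimal NumPy sketch of this two-ray triangulation (`triangulate_rays` is a name introduced here for illustration; `np.linalg.lstsq` plays the role of MATLAB's left division, and taking the midpoint of the closest points handles rays that do not exactly intersect):

```python
import numpy as np

def triangulate_rays(C1, v1, C2, v2):
    """Solve C1 + s*v1 = C2 + t*v2 in the least-squares sense and
    return the midpoint of the closest points on the two rays."""
    A = np.column_stack([v1, -v2])        # 3x2 system: [v1, -v2] @ [s; t] = C2 - C1
    b = C2 - C1
    (s, t), *_ = np.linalg.lstsq(A, b, rcond=None)
    p1 = C1 + s * v1                      # closest point on ray 1
    p2 = C2 + t * v2                      # closest point on ray 2
    return 0.5 * (p1 + p2)

# synthetic check: rays from two camera centers through a known point
P = np.array([0.2, -0.1, 3.0])
C1 = np.zeros(3)
C2 = np.array([0.5, 0.0, 0.0])
X = triangulate_rays(C1, P - C1, C2, P - C2)
print(X)   # recovers P
```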

With s and t determined, we can find the triangulated point by plugging back into the line equation. However, my process has not been successful: I cannot find a single R and t solution for which the triangulated point is in front of both cameras while both cameras point forwards.

Is there something wrong with my pipeline or thought process? Is it at all possible to obtain each individual pixel ray?

Answer

When you decompose the essential matrix into R and t you get 4 different solutions. Three of them project the points behind one or both cameras, and one of them is correct. You have to test which one is correct by triangulating some sample points.
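That cheirality test can be sketched as follows, assuming the candidate pose maps camera-1 coordinates into camera-2 coordinates as X2 = R @ X + t (`positive_depth` is a hypothetical helper, not a toolbox function, and the sign convention must match however the pose was extracted):

```python
import numpy as np

def positive_depth(R, t, X):
    """Return True if the triangulated point X (in camera-1 coordinates)
    lies in front of both cameras, assuming X2 = R @ X + t maps points
    into the camera-2 frame."""
    z1 = X[2]              # depth in camera 1
    z2 = (R @ X + t)[2]    # depth in camera 2
    return bool(z1 > 0 and z2 > 0)

# keep only the candidate (R, t) for which a triangulated sample point
# passes the check, e.g.:
# good = [(R, t) for R, t in candidates if positive_depth(R, t, X_sample)]
```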

There is a function in the Computer Vision System Toolbox in MATLAB called cameraPose (renamed relativeCameraPose in later releases), which will do that for you.
