Error in Fundamental Matrix?


Problem Description

I am trying to estimate the pose of a camera from two images taken with it, by detecting features in the images, matching them, creating the fundamental matrix, using the camera intrinsics to compute the essential matrix, and then decomposing it to find the rotation and translation.



Here is the MATLAB code:

I1 = rgb2gray(imread('1.png'));
I2 = rgb2gray(imread('2.png'));

% Detect SURF features in both images
points1 = detectSURFFeatures(I1);
points2 = detectSURFFeatures(I2);

% Keep only the 40 strongest features in each image
points1 = points1.selectStrongest(40);
points2 = points2.selectStrongest(40);

% Extract descriptors and match them
[features1, valid_points1] = extractFeatures(I1, points1);
[features2, valid_points2] = extractFeatures(I2, points2);

indexPairs = matchFeatures(features1, features2);

matchedPoints1 = valid_points1(indexPairs(:, 1), :);
matchedPoints2 = valid_points2(indexPairs(:, 2), :);

% Fundamental matrix from the matched points
F = estimateFundamentalMatrix(matchedPoints1, matchedPoints2);

% Camera intrinsics from estimateCameraParameters
K = [2755.30930612600, 0, 0; 0, 2757.82356074384, 0; 1652.43432833339, 1234.09417974414, 1];

%figure; showMatchedFeatures(I1, I2, matchedPoints1, matchedPoints2);

% Essential matrix and its SVD
E = transpose(K) * F * K;
W = [0, -1, 0; 1, 0, 0; 0, 0, 1];
Z = [0, 1, 0; -1, 0, 0; 0, 0, 0];
[U, S, V] = svd(E);

R = U * inv(W) * transpose(V);

T = U(:, 3);

% Euler angles (in degrees) from the rotation matrix
thetaX = radtodeg(atan2(R(3,2), R(3,3)));
thetaY = radtodeg(atan2(-R(3,1), sqrt(R(3,2)^2 + R(3,3)^2)));
thetaZ = radtodeg(atan2(R(2,1), R(1,1)));

The problem I am facing is that R and T are always incorrect. ThetaZ is most of the time equal to ~90; if I repeat the calculation many times I sometimes get the expected angles (only in some cases, though).



I don't seem to understand why. It might be because the fundamental matrix I calculated is wrong, or is there a different spot where I am going wrong?



Also, what scale/units is T (the translation vector) in? Or is it inferred differently?



P.S. New to computer vision...

Solution

Try transposing K. The K that you get from estimateCameraParameters assumes row vectors post-multiplied by a matrix, while the K in most textbooks assumes column vectors pre-multiplied by a matrix.
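As a minimal sketch of that change, using the same variable names as in the question and changing nothing else: transpose K once so that it is in the textbook column-vector form, then build the essential matrix as before.

K = transpose(K);           % textbook form: [fx 0 cx; 0 fy cy; 0 0 1]
E = transpose(K) * F * K;   % essential matrix with the column-vector-convention K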



Edit: In the R2015b release of the Computer Vision System Toolbox there is a cameraPose function, which computes the relative orientation and location from the fundamental matrix.
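For reference, a rough sketch of how cameraPose could be used with the variables from the question; the cameraParams object is assumed to come from estimateCameraParameters, and the exact signature may vary between releases:

[F, inliers] = estimateFundamentalMatrix(matchedPoints1, matchedPoints2);
[relativeOrientation, relativeLocation] = cameraPose(F, cameraParams, ...
    matchedPoints1(inliers, :), matchedPoints2(inliers, :));

Note that the camera location recovered from two views is only defined up to scale, which is also why the T in the original code has no absolute units.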

