FindFundamentalMatrix doesn't find fundamental matrix


Problem Description


I am trying to recover the movement of a camera by using the fundamental matrix, and the algorithm as given on Wikipedia [1]. For this I need to find the fundamental matrix. I am using OpenCV::findFundamentalMat for this.

Two unexpected behaviours: 1) Using different fitting algorithms produces different results; in particular, FM_8POINT differs from the others. 2) Given a set of point pairs (y, x), yFx = 0 is not fulfilled; the value is always larger than 0.

Have I misunderstood something here? Is my example flawed, or what is going on? Can anyone suggest a better test example?

Below is a minimal example. It creates 12 artificial points, shifts each of those points 10 pixels to the right, finds the fundamental matrix from these two sets of points, and prints yFx for each pair.

Example:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main(int argc, const char* argv[])
{
    // Create two sets of points. Points in pts2 are moved 10 pixels to the right of the points in pts1.
    std::vector<cv::Point2f> pts1, pts2;
    for(double y = 0; y < 460; y += 150)
    {
        for(double x = 0; x < 320; x += 150)
        {
            pts1.push_back(cv::Point2f(x, y));
            pts2.push_back(cv::Point2f(x + 10.0, y));
        }
    }

    cv::Mat F = cv::findFundamentalMat(pts1, pts2);

    for(std::size_t i = 0; i < pts1.size(); i++)
    {
        // Create p1, p2, the two points in homogeneous coordinates.
        // Please let me know if this can be done in fewer lines.
        cv::Mat p1(3, 1, CV_64FC1), p2(3, 1, CV_64FC1);

        p1.at<double>(0) = pts1.at(i).x;
        p1.at<double>(1) = pts1.at(i).y;
        p1.at<double>(2) = 1.0;

        p2.at<double>(0) = pts2.at(i).x;
        p2.at<double>(1) = pts2.at(i).y;
        p2.at<double>(2) = 1.0;

        // Print yFx for each pair of points. This should be 0 for all.
        std::cout << p1.t() * F * p2 << std::endl;
    }
}

For FM_RANSAC I get [1.999], [2], [2], [1.599], [1.599], [1.599], [1.198], [1.198], [1.198], [0.798], [0.798], [0.798]

For FM_8POINT the fundamental matrix is zeros(3,3) and thus yFx is 0 for all y,x.

I only found T and R estimation from essential matrix, but that didn't help much.

EDIT: yFx is evaluated the wrong way round (p1/p2 are switched in the cout line). This example also does not work because all the points lie on a plane.

Solution

I believe that the fundamental matrix solves the equation p2.t() * F * p1 = 0, i.e. you have p1 and p2 reversed in your code. As to why the 8-point algorithm is returning the zero matrix, I have no idea, sorry.

Edit: Okay, I believe I recall why the 8-point algorithm is producing a bad result here. Your motion between the two sets of points is a pure translation without rotation, i.e. it has only three degrees of freedom. The fundamental matrix has 7 degrees of freedom, so it is impossible to estimate uniquely; this is called a degenerate case. See this paper for a further description of degenerate cases in fundamental/essential matrix estimation.

It might also be the case that there is no rigid transformation between the two viewpoints you get by artificially moving pixel coordinates, thus there is no fundamental matrix satisfying the requirements. A better test case might be to use a function such as cv::warpPerspective with a known warp matrix.
