Using estimateRigidTransform instead of findHomography


Problem Description


The example in the link below is using findHomography to get the transformation between two sets of points. I want to limit the degrees of freedom used in the transformation so want to replace findHomography with estimateRigidTransform.

http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html#feature-homography

Below I use estimateRigidTransform to get the transformation between the object and scene points. objPoints and scePoints are represented by vector <Point2f>.

Mat H = estimateRigidTransform(objPoints, scePoints, false);

Following the method used in the tutorial above, I want to transform the corner values using the transformation H. The tutorial uses perspectiveTransform with the 3x3 matrix returned by findHomography. With the rigid transform it only returns a 2x3 Matrix so this method cannot be used.

How would I transform the values of the corners, represented as vector <Point2f>, with this 2x3 matrix? I am just looking to perform the same functions as the tutorial, but with fewer degrees of freedom for the transformation. I have looked at other methods such as warpAffine and getPerspectiveTransform as well, but so far have not found a solution.
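One option worth noting (an addition here, not something mentioned in the question or the tutorial) is cv::transform, which applies a 2x2 or 2x3 matrix to every element of a point array, so the 2x3 result of estimateRigidTransform can be applied to the corners directly. A minimal sketch, reusing the variable names from above:

cv::Mat H = cv::estimateRigidTransform(objPoints, scePoints, false); // 2x3, CV_64F

std::vector<cv::Point2f> sceCorners;
if (!H.empty())                                // estimateRigidTransform returns an empty Mat on failure
    cv::transform(objCorners, sceCorners, H);  // dst(i) = H * [x, y, 1]^T for each corner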

EDIT:

I have tried the suggestion from David Nilosek. Below I am adding the extra row to the matrix.

Mat row = (Mat_<double>(1,3) << 0, 0, 1);
H.push_back(row);

However this gives this error when using perspectiveTransform.

OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in create, file /Users/cgray/Downloads/opencv-2.4.6/modules/core/src/matrix.cpp, line 1486
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Users/cgray/Downloads/opencv-2.4.6/modules/core/src/matrix.cpp:1486: error: (-215) mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0) in function create

ChronoTrigger suggested using warpAffine. I am calling the warpAffine method below; the size of 1 x 5 is the size of objCorners and sceCorners.

warpAffine(objCorners, sceCorners, H, Size(1,4));

This gives the error below, which suggests the wrong type. objCorners and sceCorners are vector <Point2f> representing the 4 corners. I thought warpAffine would accept Mat images, which may explain the error.

OpenCV Error: Assertion failed ((M0.type() == CV_32F || M0.type() == CV_64F) && M0.rows == 2 && M0.cols == 3) in warpAffine, file /Users/cgray/Downloads/opencv-2.4.6/modules/imgproc/src/imgwarp.cpp, line 3280

Solution

I've done it this way in the past:

cv::Mat R = cv::estimateRigidTransform(p1,p2,false);

    if(R.cols == 0)
    {
        continue;
    }

    cv::Mat H = cv::Mat(3,3,R.type());
    H.at<double>(0,0) = R.at<double>(0,0);
    H.at<double>(0,1) = R.at<double>(0,1);
    H.at<double>(0,2) = R.at<double>(0,2);

    H.at<double>(1,0) = R.at<double>(1,0);
    H.at<double>(1,1) = R.at<double>(1,1);
    H.at<double>(1,2) = R.at<double>(1,2);

    H.at<double>(2,0) = 0.0;
    H.at<double>(2,1) = 0.0;
    H.at<double>(2,2) = 1.0;


    cv::Mat warped;
    cv::warpPerspective(img1,warped,H,img1.size());

which is the same as David Nilosek suggested: add a 0 0 1 row at the end of the matrix
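The same 3x3 matrix can also be assembled by appending that extra row with cv::vconcat instead of copying the elements one by one; a sketch, assuming R is the 2x3 CV_64F matrix returned by estimateRigidTransform:

// Append [0 0 1] below the 2x3 rigid transform to obtain a 3x3 matrix
// usable with perspectiveTransform / warpPerspective.
cv::Mat bottomRow = (cv::Mat_<double>(1,3) << 0.0, 0.0, 1.0);
cv::Mat H3x3;
cv::vconcat(R, bottomRow, H3x3);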

This code warps the IMAGES with a rigid transformation.
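For the image case, the 2x3 matrix from estimateRigidTransform can also be passed straight to warpAffine; warpAffine asserts a 2x3 CV_32F/CV_64F matrix and operates on images (not point vectors), which matches the error quoted in the question, where H had most likely already been extended to 3x3. A minimal sketch, reusing img1 and R from the code above:

// Equivalent image warp with the original 2x3 rigid transform (no 3x3 extension needed).
cv::Mat warpedAffine;
cv::warpAffine(img1, warpedAffine, R, img1.size());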

If you want to warp/transform the points, you must use the perspectiveTransform function with a 3x3 matrix (http://docs.opencv.org/modules/core/doc/operations_on_arrays.html?highlight=perspectivetransform#perspectivetransform)

tutorial here:

http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html

or you can do it manually by looping over your vector and computing, for each point:

cv::Point2f result;
result.x = point.x * R.at<double>(0,0) + point.y * R.at<double>(0,1) + R.at<double>(0,2);
result.y = point.x * R.at<double>(1,0) + point.y * R.at<double>(1,1) + R.at<double>(1,2);
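Wrapped in a loop, the manual version might look like the following self-contained sketch (the helper name transformPoints is not from the answer; R is assumed to be the 2x3 CV_64F matrix):

#include <opencv2/core/core.hpp>
#include <vector>

// Apply a 2x3 rigid/affine matrix R (CV_64F) to every point in src.
std::vector<cv::Point2f> transformPoints(const std::vector<cv::Point2f>& src, const cv::Mat& R)
{
    std::vector<cv::Point2f> dst;
    dst.reserve(src.size());
    for (size_t i = 0; i < src.size(); ++i)
    {
        const cv::Point2f& point = src[i];
        cv::Point2f result;
        result.x = (float)(point.x * R.at<double>(0,0) + point.y * R.at<double>(0,1) + R.at<double>(0,2));
        result.y = (float)(point.x * R.at<double>(1,0) + point.y * R.at<double>(1,1) + R.at<double>(1,2));
        dst.push_back(result);
    }
    return dst;
}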

hope that helps.

Remark: I didn't test the manual code, but it should work. No perspectiveTransform conversion is needed there!

edit: this is the full (tested) code:

// points
std::vector<cv::Point2f> p1;
p1.push_back(cv::Point2f(0,0));
p1.push_back(cv::Point2f(1,0));
p1.push_back(cv::Point2f(0,1));

// simple translation from p1 for testing:
std::vector<cv::Point2f> p2;
p2.push_back(cv::Point2f(1,1));
p2.push_back(cv::Point2f(2,1));
p2.push_back(cv::Point2f(1,2));

cv::Mat R = cv::estimateRigidTransform(p1,p2,false);

// extend rigid transformation to use perspectiveTransform:
cv::Mat H = cv::Mat(3,3,R.type());
H.at<double>(0,0) = R.at<double>(0,0);
H.at<double>(0,1) = R.at<double>(0,1);
H.at<double>(0,2) = R.at<double>(0,2);

H.at<double>(1,0) = R.at<double>(1,0);
H.at<double>(1,1) = R.at<double>(1,1);
H.at<double>(1,2) = R.at<double>(1,2);

H.at<double>(2,0) = 0.0;
H.at<double>(2,1) = 0.0;
H.at<double>(2,2) = 1.0;

// compute perspectiveTransform on p1
std::vector<cv::Point2f> result;
cv::perspectiveTransform(p1,result,H);

for(unsigned int i=0; i<result.size(); ++i)
    std::cout << result[i] << std::endl;

which gives output as expected:

[1, 1]
[2, 1]
[1, 2]
