How to get the scale factor of getPerspectiveTransform in opencv?


Question

I have image A and I want to get a bird's-eye view of image A, so I used the getPerspectiveTransform method to get the transform matrix. The output is a 3x3 matrix, see my code below. In my case I want to know the scale factor of that 3x3 matrix. I have looked at the OpenCV documentation, but I cannot find details of the transform matrix and I don't know how to get the scale. I have also read a paper which says we can get scaling, shearing and rotation from a11, a12, a21, a22. So how can I get the scale factor? Can you give me some advice, and can you explain the getPerspectiveTransform output matrix? Thank you!

// Four corresponding point pairs: gpsPoints in the source image A,
// dst in the desired bird's-eye view.
Point2f gpsPoints[4], dst[4];

gpsPoints[0] = Point2f(..., ...);
gpsPoints[1] = Point2f(..., ...);
gpsPoints[2] = Point2f(..., ...);
gpsPoints[3] = Point2f(..., ...);

dst[0] = Point2f(..., ...);
dst[1] = Point2f(..., ...);
dst[2] = Point2f(..., ...);
dst[3] = Point2f(..., ...);

Mat trans = getPerspectiveTransform(gpsPoints, dst); // I want to know the scale of trans
warpPerspective(A, B, trans, img.size());            // warp image A into the bird's-eye view B

When I change the camera position, the trapezium's size and position change. Right now we map it onto a rectangle whose width and height are known. But I think that for cameras at different heights the rectangle size should change too, because if we always map onto a rectangle of the same size, the rectangle may contain a different level of detail. That's why I want to know the scale from the 3x3 transform matrix. For example, if trapezium1 and trapezium2 have transform scales s1 and s2, then we can set rectangle1(width, height) = s2/s1 * rectangle2(width, height), as sketched below.
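
Just to make the idea concrete, a minimal hypothetical sketch; s1 and s2 are assumed to have already been recovered from the two transform matrices (for example with the decomposition described in the answer below), and rect2 is the known rectangle size:

// Hypothetical values: s1, s2 = scale factors recovered from the two homographies,
// rect2 = the known rectangle size used for trapezium2.
double ratio = s2 / s1;
cv::Size rect1(cvRound(rect2.width * ratio), cvRound(rect2.height * ratio));
// rect1 is then used as the warp target size for trapezium1.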

Answer

Ok, here you go:

H is the homography
H = T*R*S*L with
T = [1,0,tx; 0,1,ty; 0,0,1]
R = [cos(a),sin(a),0; -sin(a),cos(a),0; 0,0,1]
S = [sx,shear,0; 0,sy,0; 0,0,1]
L = [1,0,0; 0,1,0; lx,ly,1]

where tx/ty is the translation, a is the rotation angle, sx/sy are the scales, shear is the shearing factor, and lx/ly are the perspective foreshortening parameters.
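
For reference, multiplying out the factors defined above, the upper-left 2x2 block of R*S is

 sx*cos(a)    shear*cos(a) + sy*sin(a)
-sx*sin(a)   -shear*sin(a) + sy*cos(a)

i.e. an orthogonal (rotation) factor times an upper triangular factor.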

If I understood you right, you want to compute sx and sy. Now if lx and ly were both 0 it would be easy: decompose the upper-left 2x2 part of H with a QR decomposition into Q*R, where Q is an orthogonal matrix (= rotation matrix) and R is an upper triangular matrix ([sx, shear; 0, sy]).

h1 h2 h3
h4 h5 h6
0  0  1

=> Q*R = [h1,h2; h4,h5]

But lx and ly destroy this easy way, so you have to find out what the upper-left part of the matrix would look like without the influence of lx and ly.

If your whole homography is:

h1 h2 h3
h4 h5 h6
h7 h8 1

then you will have:

Q*R = 
h1-(h7*h3)   h2-(h8*h3)
h4-(h7*h6)   h5-(h8*h6)
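
(This follows from the factorization above: with A = T*R*S and L as defined, H = A*L, so the bottom row of H is [lx, ly, 1], i.e. h7 = lx and h8 = ly. Multiplying from the right by L^-1 = [1,0,0; 0,1,0; -lx,-ly,1] removes the perspective part again, and the upper-left 2x2 block of H*L^-1 is exactly the matrix shown.)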

So if you compute Q and R from this matrix, you can compute rotation, scale and shear easily.

I've tested this with a small C++ program:

#include <opencv2/opencv.hpp>
#include <cstdlib>
#include <cmath>
#include <iostream>

int main()
{
    // random ground-truth parameters
    double scaleX = (rand()%200) / 100.0;
    double scaleY = (rand()%200) / 100.0;
    double shear = (rand()%100) / 100.0;
    double rotation = CV_PI*(rand()%360)/180.0;
    double transX = rand()%100 - 50;
    double transY = rand()%100 - 50;

    double perspectiveX = (rand()%100) / 1000.0;
    double perspectiveY = (rand()%100) / 1000.0;

    std::cout << "scale: " << "(" << scaleX << "," << scaleY << ")" << "\n";
    std::cout << "shear: " << shear << "\n";
    std::cout << "rotation: " << rotation*180/CV_PI << " degrees" << "\n";
    std::cout << "translation: " << "(" << transX << "," << transY << ")" << std::endl;

    // build the factors S, R, T and the perspective matrix L as defined above
    cv::Mat ScaleShearMat = (cv::Mat_<double>(3,3) << scaleX, shear, 0, 0, scaleY, 0, 0, 0, 1);
    cv::Mat RotationMat = (cv::Mat_<double>(3,3) << cos(rotation), sin(rotation), 0, -sin(rotation), cos(rotation), 0, 0, 0, 1);
    cv::Mat TranslationMat = (cv::Mat_<double>(3,3) << 1, 0, transX, 0, 1, transY, 0, 0, 1);

    cv::Mat PerspectiveMat = (cv::Mat_<double>(3,3) << 1, 0, 0, 0, 1, 0, perspectiveX, perspectiveY, 1);

    cv::Mat HomographyMatWithoutPerspective = TranslationMat * RotationMat * ScaleShearMat;

    // full homography H = T*R*S*L
    cv::Mat HomographyMat = HomographyMatWithoutPerspective * PerspectiveMat;

    std::cout << "Homography:\n" << HomographyMat << std::endl;

    // remove the influence of lx/ly (= h7/h8) from the upper-left 2x2 block
    cv::Mat DecomposedRotaScaleShear(2,2,CV_64FC1);
    DecomposedRotaScaleShear.at<double>(0,0) = HomographyMat.at<double>(0,0) - (HomographyMat.at<double>(2,0)*HomographyMat.at<double>(0,2));
    DecomposedRotaScaleShear.at<double>(0,1) = HomographyMat.at<double>(0,1) - (HomographyMat.at<double>(2,1)*HomographyMat.at<double>(0,2));
    DecomposedRotaScaleShear.at<double>(1,0) = HomographyMat.at<double>(1,0) - (HomographyMat.at<double>(2,0)*HomographyMat.at<double>(1,2));
    DecomposedRotaScaleShear.at<double>(1,1) = HomographyMat.at<double>(1,1) - (HomographyMat.at<double>(2,1)*HomographyMat.at<double>(1,2));

    std::cout << "Decomposed submat: \n" << DecomposedRotaScaleShear << std::endl;

    return 0;
}

Now you can test the result with a QR matrix decomposition, e.g. the online matrix calculator at http://www.bluebit.gr/matrix-calculator/

First you can try setting perspectiveX and perspectiveY to zero. You'll see that you can then decompose the upper-left part of the matrix into the input values of rotation angle, shear and scale. But if you don't set perspectiveX and perspectiveY to zero, you can take "DecomposedRotaScaleShear" and decompose it with QR.

You will get a result page with

Q:

 a a
-a a

here you can compute acos(a) to get the angle

R:

sx shear
0  sy

here you can read sx and sy directly.
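
The same QR step can also be done directly in code instead of using the online calculator. The following is only a minimal sketch added on top of the test program above; it reuses DecomposedRotaScaleShear and assumes sx > 0, sy > 0 and the rotation convention R = [cos(a), sin(a); -sin(a), cos(a)] used above:

// Minimal 2x2 QR sketch (continuation of the test program above).
double m00 = DecomposedRotaScaleShear.at<double>(0,0);
double m01 = DecomposedRotaScaleShear.at<double>(0,1);
double m10 = DecomposedRotaScaleShear.at<double>(1,0);
double m11 = DecomposedRotaScaleShear.at<double>(1,1);

// The first column is (cos(a)*sx, -sin(a)*sx), so its length is sx.
double sx = std::sqrt(m00*m00 + m10*m10);
double c  = m00 / sx;              // cos(a)
double s  = -m10 / sx;             // sin(a)
double angle = std::atan2(s, c);   // rotation angle a, in (-pi, pi], so it may
                                   // differ from the generated input angle by 360 degrees

// Project the second column onto the rotation axes to get shear and sy.
double shearOut = c*m01 - s*m11;   // q1 . m2 with q1 = (cos(a), -sin(a))
double syOut    = s*m01 + c*m11;   // q2 . m2 with q2 = (sin(a),  cos(a))

std::cout << "recovered scale: (" << sx << "," << syOut << ")\n";
std::cout << "recovered shear: " << shearOut << "\n";
std::cout << "recovered rotation: " << angle*180/CV_PI << " degrees" << std::endl;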

Hope this helps and I hope there is no error ;)
