OpenCV: warpPerspective on the whole image


Problem description

I'm detecting markers on images captured by my iPad. Since I want to calculate the translations and rotations between them, I want to change the perspective of these images so that they look as if I had captured them from directly above the markers.

Right now I'm using:

// corners of a 50x50 square that the marker should map to
points2D.push_back(cv::Point2f(0, 0));
points2D.push_back(cv::Point2f(50, 0));
points2D.push_back(cv::Point2f(50, 50));
points2D.push_back(cv::Point2f(0, 50));

// imagePoints contains the detected marker corners in the photo
cv::Mat M = cv::getPerspectiveTransform(points2D, imagePoints);
cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(_image->cols, _image->rows));

This gives me these results (look at the bottom-right corner for the result of warpPerspective):

As you can probably see, the result image contains the recognized marker in its top-left corner. My problem is that I want to capture the whole image (without cropping), so that I can detect other markers on that image later.

How can I do that? Maybe I should use the rotation/translation vectors from the solvePnP function?
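(For context, here is a minimal sketch of what that solvePnP route might look like; objectPoints, cameraMatrix and distCoeffs below are placeholders for the marker's 3D corners and a prior camera calibration, and are not part of the original code:)

// needs <opencv2/calib3d.hpp>
// Hypothetical sketch: recover the pose of the marker relative to the camera.
// objectPoints: marker corners in marker coordinates (a 50x50 square in the Z=0 plane)
// imagePoints:  the detected corners in the photo
// cameraMatrix, distCoeffs: intrinsics from a prior calibration (assumed to exist)
std::vector<cv::Point3f> objectPoints = {
    cv::Point3f(0, 0, 0), cv::Point3f(50, 0, 0),
    cv::Point3f(50, 50, 0), cv::Point3f(0, 50, 0)
};
cv::Mat rvec, tvec; // rotation (Rodrigues vector) and translation of the marker in camera coordinates
cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);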

Unfortunately, changing the size of the warped image doesn't help much, because the image is still translated so that the top-left corner of the marker ends up in the top-left corner of the image.

For example, when I doubled the size using:

cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(2*_image->cols, 2*_image->rows));

I got these images:

Answer

Your code doesn't seem to be complete, so it is difficult to say what the problem is.

In any case, the warped image might have completely different dimensions than the input image, so you will have to adjust the size parameter you are using for warpPerspective.

For example, try doubling the size:

cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(2*_image->cols, 2*_image->rows));

To make sure the whole image ends up inside the result, all corners of your original image must be warped to lie inside the resulting image. So simply calculate the warped destination for each of the corner points and adjust the destination points accordingly.

To make it clearer, here is some sample code:

// calculate transformation
cv::Matx33f M = cv::getPerspectiveTransform(points2D, imagePoints);

// calculate warped position of all corners

cv::Point3f a = M.inv() * cv::Point3f(0, 0, 1);
a = a * (1.0/a.z);

cv::Point3f b = M.inv() * cv::Point3f(0, _image->rows, 1);
b = b * (1.0/b.z);

cv::Point3f c = M.inv() * cv::Point3f(_image->cols, _image->rows, 1);
c = c * (1.0/c.z);

cv::Point3f d = M.inv() * cv::Point3f(_image->cols, 0, 1);
d = d * (1.0/d.z);

// to make sure all corners are in the image, every position must be > (0, 0)
float x = std::ceil(std::abs(std::min(std::min(a.x, b.x), std::min(c.x, d.x))));
float y = std::ceil(std::abs(std::min(std::min(a.y, b.y), std::min(c.y, d.y))));

// and also < (width, height)
float width = std::ceil(std::abs(std::max(std::max(a.x, b.x), std::max(c.x, d.x)))) + x;
float height = std::ceil(std::abs(std::max(std::max(a.y, b.y), std::max(c.y, d.y)))) + y;

// adjust target points accordingly
for (int i=0; i<4; i++) {
    points2D[i] += cv::Point2f(x,y);
}

// recalculate transformation
M = cv::getPerspectiveTransform(points2D, imagePoints);

// get result
cv::Mat result;
cv::warpPerspective(*_image, result, M, cv::Size(width, height), cv::WARP_INVERSE_MAP);
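For reference, the corner bookkeeping can also be done with cv::perspectiveTransform and cv::boundingRect instead of adjusting the destination points by hand. The sketch below assumes the original (unadjusted) points2D and imagePoints from the question and the same _image; it is an alternative sketch under those assumptions, not a drop-in replacement for the code above:

// needs <opencv2/imgproc.hpp>
// Warp the source image corners with the forward homography (photo -> marker plane),
// take their bounding box, and prepend a translation so everything lands at
// positive coordinates before warping once.
std::vector<cv::Point2f> srcCorners = {
    cv::Point2f(0, 0),
    cv::Point2f(_image->cols, 0),
    cv::Point2f(_image->cols, _image->rows),
    cv::Point2f(0, _image->rows)
};

cv::Mat H = cv::getPerspectiveTransform(imagePoints, points2D); // photo -> marker plane

std::vector<cv::Point2f> dstCorners;
cv::perspectiveTransform(srcCorners, dstCorners, H);

cv::Rect box = cv::boundingRect(dstCorners); // extent of the fully warped image

// translation that shifts the warped content into the visible canvas
cv::Mat T = (cv::Mat_<double>(3, 3) << 1, 0, -box.x,
                                       0, 1, -box.y,
                                       0, 0, 1);

cv::Mat result;
cv::warpPerspective(*_image, result, T * H, box.size());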
