Executing cv::warpPerspective for a fake deskewing on a set of cv::Point


Problem description



    I'm trying to do a perspective transformation of a set of points in order to achieve a deskewing effect:

    I'm using the image below for tests, and the green rectangle displays the area of interest.

    I was wondering if it's possible to achieve the effect I'm hoping for using a simple combination of cv::getPerspectiveTransform and cv::warpPerspective. I'm sharing the source code I've written so far, but it doesn't work. This is the resulting image:

    So there is a vector<cv::Point> that defines the region of interest, but the points are not stored in any particular order inside the vector, and that's something I can't change in the detection procedure. Anyway, later, the points in the vector are used to define a RotatedRect, which in turn is used to assemble cv::Point2f src_vertices[4];, one of the variables required by cv::getPerspectiveTransform().

    My understanding about vertices and how they are organized might be one of the issues. I also think that using a RotatedRect is not the best idea to store the original points of the ROI, since the coordinates will change a little bit to fit into the rotated rectangle, and that's not very cool.

    #include <cv.h>
    #include <highgui.h>
    #include <iostream>
    
    using namespace std;
    using namespace cv;
    
    int main(int argc, char* argv[])
    {
        cv::Mat src = cv::imread(argv[1], 1);
    
        // After some magical procedure, these are the detected points that represent
        // the corners of the paper in the picture: 
        // [408, 69] [72, 2186] [1584, 2426] [1912, 291]
        vector<Point> not_a_rect_shape;
        not_a_rect_shape.push_back(Point(408, 69));
        not_a_rect_shape.push_back(Point(72, 2186));
        not_a_rect_shape.push_back(Point(1584, 2426));
        not_a_rect_shape.push_back(Point(1912, 291));
    
        // For debugging purposes, draw green lines connecting those points 
        // and save it on disk
        const Point* point = &not_a_rect_shape[0];
        int n = (int)not_a_rect_shape.size();
        Mat draw = src.clone();
        polylines(draw, &point, &n, 1, true, Scalar(0, 255, 0), 3, CV_AA);
        imwrite("draw.jpg", draw);
    
        // Assemble a rotated rectangle out of that info
        RotatedRect box = minAreaRect(cv::Mat(not_a_rect_shape));
        std::cout << "Rotated box set to (" << box.boundingRect().x << "," << box.boundingRect().y << ") " << box.size.width << "x" << box.size.height << std::endl;
    
        // Does the order of the points matter? I assume they do NOT.
        // But if it does, is there an easy way to identify and order 
        // them as topLeft, topRight, bottomRight, bottomLeft?
        cv::Point2f src_vertices[4];
        src_vertices[0] = not_a_rect_shape[0];
        src_vertices[1] = not_a_rect_shape[1];
        src_vertices[2] = not_a_rect_shape[2];
        src_vertices[3] = not_a_rect_shape[3];
    
        Point2f dst_vertices[4];
        dst_vertices[0] = Point(0, 0);
        dst_vertices[1] = Point(0, box.boundingRect().width-1);
        dst_vertices[2] = Point(0, box.boundingRect().height-1);
        dst_vertices[3] = Point(box.boundingRect().width-1, box.boundingRect().height-1);
    
        Mat warpMatrix = getPerspectiveTransform(src_vertices, dst_vertices);
    
        cv::Mat rotated;
        warpPerspective(src, rotated, warpMatrix, rotated.size(), INTER_LINEAR, BORDER_CONSTANT);
    
        imwrite("rotated.jpg", rotated);
    
        return 0;
    }
    

    Can someone help me fix this problem?

    Solution

    So, the first problem is corner order. The points must be in the same order in both vectors: if in the first vector your order is (top-left, bottom-left, bottom-right, top-right), they MUST be in the same order in the other vector.

    Second, to have the resulting image contain only the object of interest, you must set its width and height to match the resulting rectangle's width and height. Don't worry: the src and dst images passed to warpPerspective can have different sizes.

    Third, a performance concern. While your method is absolutely accurate, because you are only doing affine transforms (rotate, resize, deskew), mathematically you can use the affine counterparts of these functions. They are much faster:

    • getAffineTransform()

    • warpAffine()

    Important note: getAffineTransform needs and expects ONLY 3 points, and the resulting matrix is 2-by-3 instead of 3-by-3.

    To make the result image have a different size than the input, instead of

    cv::warpPerspective(src, dst, dst.size(), ... );
    

    use

    cv::Mat rotated;
    cv::Size size(box.boundingRect().width, box.boundingRect().height);
    cv::warpPerspective(src, dst, size, ... );
    

    So here you are, and your programming assignment is over.

    int main()
    {
        cv::Mat src = cv::imread("r8fmh.jpg", 1);
    
    
        // After some magical procedure, these are the detected points that represent
        // the corners of the paper in the picture: 
        // [408, 69] [72, 2186] [1584, 2426] [1912, 291]
    
        vector<Point> not_a_rect_shape;
        not_a_rect_shape.push_back(Point(408, 69));
        not_a_rect_shape.push_back(Point(72, 2186));
        not_a_rect_shape.push_back(Point(1584, 2426));
        not_a_rect_shape.push_back(Point(1912, 291));
    
        // For debugging purposes, draw green lines connecting those points 
        // and save it on disk
        const Point* point = &not_a_rect_shape[0];
        int n = (int)not_a_rect_shape.size();
        Mat draw = src.clone();
        polylines(draw, &point, &n, 1, true, Scalar(0, 255, 0), 3, CV_AA);
        imwrite("draw.jpg", draw);
    
        // Assemble a rotated rectangle out of that info
        RotatedRect box = minAreaRect(cv::Mat(not_a_rect_shape));
        std::cout << "Rotated box set to (" << box.boundingRect().x << "," << box.boundingRect().y << ") " << box.size.width << "x" << box.size.height << std::endl;
    
        Point2f pts[4];
    
        box.points(pts);
    
        // Does the order of the points matter? I assume they do NOT.
        // But if it does, is there an easy way to identify and order 
        // them as topLeft, topRight, bottomRight, bottomLeft?
    
        cv::Point2f src_vertices[3];
        src_vertices[0] = pts[0];
        src_vertices[1] = pts[1];
        src_vertices[2] = pts[3];
        //src_vertices[3] = not_a_rect_shape[3];
    
        Point2f dst_vertices[3];
        dst_vertices[0] = Point(0, 0);
        dst_vertices[1] = Point(box.boundingRect().width-1, 0); 
        dst_vertices[2] = Point(0, box.boundingRect().height-1);
    
       /* Mat warpMatrix = getPerspectiveTransform(src_vertices, dst_vertices);
    
        cv::Mat rotated;
        cv::Size size(box.boundingRect().width, box.boundingRect().height);
        warpPerspective(src, rotated, warpMatrix, size, INTER_LINEAR, BORDER_CONSTANT);*/
        Mat warpAffineMatrix = getAffineTransform(src_vertices, dst_vertices);
    
        cv::Mat rotated;
        cv::Size size(box.boundingRect().width, box.boundingRect().height);
        warpAffine(src, rotated, warpAffineMatrix, size, INTER_LINEAR, BORDER_CONSTANT);
    
        imwrite("rotated.jpg", rotated);
    }
    
