Automatic perspective correction OpenCV

Problem Description

I am trying to implement automatic perspective correction in my iOS program, and when I use the test image I found in the tutorial everything works as expected. But when I take a picture I get back a weird result.

I am using code found in this tutorial

When I give it an image that looks like this:

I get this as the result:

Here is what dst gives me that might help.

I am using this to call the method which contains the code.

quadSegmentation(Img, bw, dst, quad);

Can anyone tell me why I am getting so many green lines compared to the tutorial, and how I might be able to fix this and properly crop the image to contain only the card?

Solution

For a perspective transform you need:

Source points -> coordinates of the quadrangle vertices in the source image.

Destination points -> coordinates of the corresponding quadrangle vertices in the destination image.

Here we will calculate these points through contour processing.

Calculate the coordinates of the quadrangle vertices in the source image

  • You will get your card as a contour simply by blurring, thresholding, finding the contours, and picking the largest one.
  • After finding the largest contour, approximate it with a polygonal curve; here you should get 4 points that represent the corners of your card. You can adjust the epsilon parameter until you get exactly 4 coordinates (see the sketch after this list).
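
Below is a minimal, self-contained sketch of this source-point step, written against the OpenCV 3/4 C++ API rather than the older CV_* constants used in the answer's code further down. The function name findCardCorners is just for illustration, and the fixed threshold of 70, the 5x5 blur kernel, and the choice of epsilon as 2% of the contour perimeter (via arcLength) are assumptions to tune for your own photos; tying epsilon to the perimeter is simply a common way to keep approxPolyDP scale-independent.

 #include <opencv2/opencv.hpp>
 #include <vector>

 // Find the card outline: blur, threshold, keep the largest external contour,
 // then approximate it with a polygon. Returns the approximated corner points.
 std::vector<cv::Point> findCardCorners(const cv::Mat& src)
 {
     cv::Mat gray, bin;
     cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);
     cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);       // suppress noise before thresholding
     cv::threshold(gray, bin, 70, 255, cv::THRESH_BINARY);  // assumed global threshold

     std::vector<std::vector<cv::Point>> contours;
     cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
     if (contours.empty())
         return {};

     // Keep the contour with the largest area -- assumed to be the card.
     size_t largest = 0;
     double largestArea = 0.0;
     for (size_t i = 0; i < contours.size(); ++i) {
         double a = cv::contourArea(contours[i]);
         if (a > largestArea) { largestArea = a; largest = i; }
     }

     // Epsilon as a fraction of the perimeter; raise or lower it until the
     // approximation returns exactly 4 points.
     std::vector<cv::Point> corners;
     double epsilon = 0.02 * cv::arcLength(contours[largest], true);
     cv::approxPolyDP(contours[largest], corners, epsilon, true);
     return corners;
 }

If this returns more or fewer than 4 points, revisit the threshold and epsilon before moving on to the next step.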

Calculate the coordinates of the corresponding quadrangle vertices in the destination image

  • This can easily be found by calculating the bounding rectangle of the largest contour, as in the sketch below.
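
A small sketch of this destination-point step, assuming largestContour is the (unapproximated) largest contour found above; the function name destinationPoints is hypothetical. The four corners of the bounding rectangle are listed in the same top-left, bottom-left, top-right, bottom-right order the answer's code uses for squre_pts.

 #include <opencv2/opencv.hpp>
 #include <vector>

 // The axis-aligned bounding rectangle of the largest contour gives the
 // target quadrangle the card should be mapped onto.
 std::vector<cv::Point2f> destinationPoints(const std::vector<cv::Point>& largestContour)
 {
     cv::Rect box = cv::boundingRect(largestContour);
     return {
         cv::Point2f((float)box.x,               (float)box.y),                 // top-left
         cv::Point2f((float)box.x,               (float)(box.y + box.height)),  // bottom-left
         cv::Point2f((float)(box.x + box.width), (float)box.y),                 // top-right
         cv::Point2f((float)(box.x + box.width), (float)(box.y + box.height))   // bottom-right
     };
 }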

In the image below, the red rectangle represents the source points and the green rectangle the destination points.

Adjust the coordinate order and apply the perspective transform
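
A sketch of the warp itself (warpCard is a hypothetical helper name), assuming quadCorners holds the four approxPolyDP points reordered so they correspond pairwise to rectCorners (top-left, bottom-left, top-right, bottom-right, as above). If the two orders do not match point-for-point, the warped image comes out mirrored or twisted, which is why the answer's code reorders quad_pts before calling getPerspectiveTransform.

 #include <opencv2/opencv.hpp>
 #include <vector>

 // Maps the detected quadrangle onto the target rectangle.
 // getPerspectiveTransform needs exactly 4 points in each vector.
 cv::Mat warpCard(const cv::Mat& src,
                  const std::vector<cv::Point>& quadCorners,
                  const std::vector<cv::Point2f>& rectCorners)
 {
     std::vector<cv::Point2f> srcPts;
     for (const cv::Point& p : quadCorners)
         srcPts.push_back(cv::Point2f((float)p.x, (float)p.y));  // Point -> Point2f

     cv::Mat transform = cv::getPerspectiveTransform(srcPts, rectCorners);
     cv::Mat warped;
     cv::warpPerspective(src, warped, transform, src.size());
     return warped;
 }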

See the final result

Code

 // Original answer's code, written against the OpenCV 2.x C++ API (CV_* constants),
 // with the includes and main() it needs to compile as-is.
 #include <opencv2/opencv.hpp>
 #include <iostream>

 using namespace cv;
 using namespace std;

 int main()
 {
    Mat src=imread("card.jpg");
    Mat thr;
    cvtColor(src,thr,CV_BGR2GRAY);
    threshold( thr, thr, 70, 255,CV_THRESH_BINARY );

    vector< vector <Point> > contours; // Vector for storing contours
    vector< Vec4i > hierarchy;
    int largest_contour_index=0;
    double largest_area=0;             // contourArea returns a double

    Mat dst(src.rows,src.cols,CV_8UC1,Scalar::all(0)); // create destination image
    findContours( thr.clone(), contours, hierarchy,CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE ); // find the contours in the image
    for( int i = 0; i< contours.size(); i++ ){
       double a=contourArea( contours[i],false);  // find the area of the contour
       if(a>largest_area){
          largest_area=a;
          largest_contour_index=i;               // store the index of the largest contour
       }
    }

    drawContours( dst,contours, largest_contour_index, Scalar(255,255,255),CV_FILLED, 8, hierarchy );
    vector<vector<Point> > contours_poly(1);
    approxPolyDP( Mat(contours[largest_contour_index]), contours_poly[0],5, true );
    Rect boundRect=boundingRect(contours[largest_contour_index]);
    if(contours_poly[0].size()==4){
       std::vector<Point2f> quad_pts;
       std::vector<Point2f> squre_pts;
       // Source quadrangle: note the 0,1,3,2 order so the corners pair up
       // with the destination rectangle below.
       quad_pts.push_back(Point2f(contours_poly[0][0].x,contours_poly[0][0].y));
       quad_pts.push_back(Point2f(contours_poly[0][1].x,contours_poly[0][1].y));
       quad_pts.push_back(Point2f(contours_poly[0][3].x,contours_poly[0][3].y));
       quad_pts.push_back(Point2f(contours_poly[0][2].x,contours_poly[0][2].y));
       // Destination rectangle: corners of the bounding rect (TL, BL, TR, BR).
       squre_pts.push_back(Point2f(boundRect.x,boundRect.y));
       squre_pts.push_back(Point2f(boundRect.x,boundRect.y+boundRect.height));
       squre_pts.push_back(Point2f(boundRect.x+boundRect.width,boundRect.y));
       squre_pts.push_back(Point2f(boundRect.x+boundRect.width,boundRect.y+boundRect.height));

       Mat transmtx = getPerspectiveTransform(quad_pts,squre_pts);
       Mat transformed = Mat::zeros(src.rows, src.cols, CV_8UC3);
       warpPerspective(src, transformed, transmtx, src.size());

       // Draw the detected quadrangle (red) and the bounding rect (green) for inspection.
       Point P1=contours_poly[0][0];
       Point P2=contours_poly[0][1];
       Point P3=contours_poly[0][2];
       Point P4=contours_poly[0][3];
       line(src,P1,P2, Scalar(0,0,255),1,CV_AA,0);
       line(src,P2,P3, Scalar(0,0,255),1,CV_AA,0);
       line(src,P3,P4, Scalar(0,0,255),1,CV_AA,0);
       line(src,P4,P1, Scalar(0,0,255),1,CV_AA,0);
       rectangle(src,boundRect,Scalar(0,255,0),1,8,0);
       rectangle(transformed,boundRect,Scalar(0,255,0),1,8,0);

       imshow("quadrilateral", transformed);
       imshow("thr",thr);
       imshow("dst",dst);
       imshow("src",src);
       imwrite("result1.jpg",dst);
       imwrite("result2.jpg",src);
       imwrite("result3.jpg",transformed);
       waitKey();
    }
    else
       cout<<"Make sure that you are getting 4 corners using approxPolyDP..."<<endl;

    return 0;
 }
