Find 4 specific corner pixels and use them with warp perspective


Problem Description



I'm playing around with OpenCV and I want to know how you would build a simple version of a perspective transform program. I have an image of a parallelogram, and each of its corners consists of a pixel with a specific color that appears nowhere else in the image. I want to iterate through all pixels and find these 4 pixels. Then I want to use them as corner points in a new image in order to warp the perspective of the original image. In the end I should have a zoomed-in square.

    Point2f src[4]; // Is this the right datatype to use here?
    int lineNumber = 0;
    // Iterate over the pixels
    for (int y = 0; y < image.rows; y++)
    {
        for (int x = 0; x < image.cols; x++)
        {
            Vec3b color = image.at<Vec3b>(Point(x, y));
            if (color.val[1] == 245 && color.val[2] == 111 && color.val[0] == 10)
            {
                src[lineNumber] = Point2f(x, y); // store this pixel's coordinates
                lineNumber++;
            }
        }
    }
    /* I also need to get the dst points for getPerspectiveTransform
    and afterwards warpPerspective, how do I get those? Take the other
    points, check the biggest distance somehow and use it as the max length
    to calculate the rest? */

How should you use OpenCV to solve this problem? (I'm just guessing that I'm not doing it the "normal and clever" way.) Also, how do I do the next step, which would be using more than one pixel as a "marker" and calculating the average point in the middle of the multiple points? Is there something more efficient than running through each pixel?

Something like this basically:

Solution

Starting from an image with colored circles as markers, like:

Note that this is a png image, i.e. one with lossless compression that preserves the actual colors. If you use a lossy compression like jpeg, the colors will change slightly, and you cannot segment them with an exact match, as done here.

You need to find the center of each marker.

1. Segment the (known) color, using inRange.
2. Find all connected components with the given color, with findContours.
3. Find the largest blob, here done with max_element with a lambda function, and distance. You can use a for loop for this instead.
4. Find the center of mass of the largest blob, here done with moments. You could also use a loop here, if you prefer.
5. Add the center to your source vertices.

Your destination vertices are just the four corners of the destination image.

You can then use getPerspectiveTransform and warpPerspective to find and apply the warping.

The resulting image is:

Code:

    #include <opencv2/opencv.hpp>
    #include <vector>
    #include <algorithm>
    using namespace std;
    using namespace cv;
    
    int main()
    {
        // Load image
        Mat3b img = imread("path_to_image");
    
        // Create a black output image
        Mat3b out(300,300,Vec3b(0,0,0));
    
        // The color of your markers, in order
        vector<Scalar> colors{ Scalar(0, 0, 255), Scalar(0, 255, 0), Scalar(255, 0, 0), Scalar(0, 255, 255) }; // red, green, blue, yellow
    
        vector<Point2f> src_vertices(colors.size());
        vector<Point2f> dst_vertices = { Point2f(0, 0), Point2f(0, out.rows - 1), Point2f(out.cols - 1, out.rows - 1), Point2f(out.cols - 1, 0) };
    
        for (int idx_color = 0; idx_color < colors.size(); ++idx_color)
        {
            // Detect color
            Mat1b mask;
            inRange(img, colors[idx_color], colors[idx_color], mask);
    
            // Find connected components
            vector<vector<Point>> contours;
            findContours(mask, contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);
    
            // Find largest
            int idx_largest = distance(contours.begin(), max_element(contours.begin(), contours.end(), [](const vector<Point>& lhs, const vector<Point>& rhs) {
                return lhs.size() < rhs.size();
            }));
    
            // Find centroid of largest component
            Moments m = moments(contours[idx_largest]);
            Point2f center(m.m10 / m.m00, m.m01 / m.m00);
    
            // Found marker center, add to source vertices
            src_vertices[idx_color] = center;
        }
    
        // Find transformation
        Mat M = getPerspectiveTransform(src_vertices, dst_vertices);
    
        // Apply transformation
        warpPerspective(img, out, M, out.size());
    
        imshow("Image", img);
        imshow("Warped", out);
        waitKey();
    
        return 0;
    }
    

