Fine Tuning Hough Line function parameters OpenCV

Problem Description

I've been trying to get 4 lines around the square so that I can obtain the vertices of the square. I'm going with this approach rather than finding corners directly using the Harris or contours methods, for accuracy reasons. Using the built-in HoughLines function in OpenCV, I'm unable to get full-length lines to get intersection points, and I'm also getting too many irrelevant lines. I'd like to know if the parameters can be fine-tuned to meet my requirements? If yes, how do I go about it? My question is exactly the same as this one here. However, I'm not getting those lines even after changing those parameters. I've attached the original image along with the code and output:

Original image:

Code:

#include <Windows.h>
#include "opencv2\highgui.hpp"
#include "opencv2\imgproc.hpp"
#include "opencv2/imgcodecs/imgcodecs.hpp"
#include "opencv2/videoio/videoio.hpp"

using namespace cv;
using namespace std;

int main(int argc, const char** argv)
{

    Mat image,src;
    image = imread("c:/pics/output2_1.bmp");
    src = image.clone();
    cvtColor(image, image, CV_BGR2GRAY);
    threshold(image, image, 0, 255, CV_THRESH_OTSU + CV_THRESH_BINARY_INV);

    namedWindow("thresh", WINDOW_NORMAL);
    resizeWindow("thresh", 600, 400);

    imshow("thresh", image);

    cv::Mat edges;

    cv::Canny(image, edges, 0, 255);

    vector<Vec2f> lines;
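    // rho = 1 px, theta = 1 degree, accumulator threshold = 100 votes; these are the main knobs to tune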
    HoughLines(edges, lines, 1, CV_PI / 180, 100, 0, 0);
    for (size_t i = 0; i < lines.size(); i++)
    {
        float rho = lines[i][0], theta = lines[i][1];
        Point pt1, pt2;
        double a = cos(theta), b = sin(theta);
        double x0 = a*rho, y0 = b*rho;
        pt1.x = cvRound(x0 + 1000 * (-b));
        pt1.y = cvRound(y0 + 1000 * (a));
        pt2.x = cvRound(x0 - 1000 * (-b));
        pt2.y = cvRound(y0 - 1000 * (a));
        line(src, pt1, pt2, Scalar(0, 0, 255), 3, CV_AA);
    }

namedWindow("Edges Structure", WINDOW_NORMAL);
resizeWindow("Edges Structure", 600, 400);

imshow("Edges Structure", src);
waitKey(0);

return(0);
}

Output image:

Update: There is a frame on this image, so I was able to reduce the irrelevant lines in the border of the image by removing that frame, however I'm still not getting complete lines covering the square.

Recommended Answer

There are many ways to do this; I will give an example of just one. However, I'm quickest in Python, so my code example will be in that language. It shouldn't be hard to translate, though (please feel free to edit your post with your C++ solution after you've finished it, for others).

For preprocessing, I highly suggest dilate()ing your edge image. This will make the lines thicker which will help fit the Hough lines better. What the Hough lines function does in the abstract is basically make a grid of lines passing through a ton of angles and distances, and if the lines go over any white pixels from Canny, then it gives that line a score for each point it goes through. However, the lines from Canny won't be perfectly straight, so you'll get a few different lines scoring. Making those Canny lines thicker will mean each line that is really close to fitting well will have better chances of scoring higher.
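
As a minimal sketch of the preprocessing described above (the file name and Canny thresholds here are placeholders; the 3x3 kernel matches the one used in the full script further down):

import cv2
import numpy as np

gray = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)   # placeholder file name
edges = cv2.Canny(gray, 50, 150)
# thicken the Canny edges so near-miss Hough lines still collect votes
dilated = cv2.dilate(edges, np.ones((3, 3), dtype=np.uint8))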

If you're going to use HoughLinesP, then your output will be line segments, where all you have is two points on the line.
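
Continuing the sketch above, this is roughly what that output looks like (the parameter values mirror the full script further down):

lines = cv2.HoughLinesP(dilated, rho=1, theta=np.pi/180, threshold=100,
                        maxLineGap=20, minLineLength=50)
# each entry is one segment stored as [[x1, y1, x2, y2]]
print(lines.shape)   # (N, 1, 4) when at least one segment is found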

Since the lines are mostly vertical and horizontal, you can easily split the lines based on their position. If the two y-coordinates of one line are near each other, then the line is mostly horizontal. If the two x-coordinates are near each other, then the line is mostly vertical. So you can segment your lines into vertical lines and horizontal lines that way.

def segment_lines(lines, delta):
    h_lines = []
    v_lines = []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if abs(x2-x1) < delta: # x-values are near; line is vertical
                v_lines.append(line)
            elif abs(y2-y1) < delta: # y-values are near; line is horizontal
                h_lines.append(line)
    return h_lines, v_lines
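
A quick, hypothetical usage sketch (these segments are made up, not taken from the image in the post):

fake_lines = [[[10, 12, 200, 14]],   # nearly horizontal segment
              [[50, 5, 53, 180]]]    # nearly vertical segment
h, v = segment_lines(fake_lines, delta=10)
print(len(h), len(v))   # 1 1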

Then, you can obtain intersection points of two line segments from their endpoints using determinants.

def find_intersection(line1, line2):
    # extract points
    x1, y1, x2, y2 = line1[0]
    x3, y3, x4, y4 = line2[0]
    # compute determinant
    Px = ((x1*y2 - y1*x2)*(x3-x4) - (x1-x2)*(x3*y4 - y3*x4))/  \
        ((x1-x2)*(y3-y4) - (y1-y2)*(x3-x4))
    Py = ((x1*y2 - y1*x2)*(y3-y4) - (y1-y2)*(x3*y4 - y3*x4))/  \
        ((x1-x2)*(y3-y4) - (y1-y2)*(x3-x4))
    return Px, Py
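
As a sanity check with made-up segments, a horizontal line at y = 10 and a vertical line at x = 50 should meet at (50, 10):

h_line = [[0, 10, 100, 10]]    # horizontal segment at y = 10
v_line = [[50, 0, 50, 100]]    # vertical segment at x = 50
print(find_intersection(h_line, v_line))   # (50.0, 10.0)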

So now if you loop through all of your lines, you'll have intersection points from every horizontal/vertical pair; but since there are many lines, you'll also have many intersection points for the same corner of the box.

However, these are all in one vector, so not only do you need to average the points in each corner, you need to actually group them together, too. You can achieve this with k-means clustering, which is implemented in OpenCV as kmeans().

def cluster_points(points, nclusters):
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, _, centers = cv2.kmeans(points, nclusters, None, criteria, 10, cv2.KMEANS_PP_CENTERS)
    return centers
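
A small usage sketch with made-up points, assuming the imports and the cluster_points() definition above (kmeans() expects a float32 N x 2 array, which is why the full script below builds one with np.float32 and np.column_stack):

pts = np.float32([[10, 10], [11, 9], [98, 101], [102, 99]])
print(cluster_points(pts, nclusters=2))   # one row per cluster center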

Finally, we can simply plot those centers (making sure we round first---since so far everything is a float) onto the image with circle() to make sure we've done it right.

And we have it; four points, at the corners of the box.

Here's my full code in python, including the code to generate the figures above:

import cv2
import numpy as np 

def find_intersection(line1, line2):
    # extract points
    x1, y1, x2, y2 = line1[0]
    x3, y3, x4, y4 = line2[0]
    # compute determinant
    Px = ((x1*y2 - y1*x2)*(x3-x4) - (x1-x2)*(x3*y4 - y3*x4))/  \
        ((x1-x2)*(y3-y4) - (y1-y2)*(x3-x4))
    Py = ((x1*y2 - y1*x2)*(y3-y4) - (y1-y2)*(x3*y4 - y3*x4))/  \
        ((x1-x2)*(y3-y4) - (y1-y2)*(x3-x4))
    return Px, Py

def segment_lines(lines, delta):
    h_lines = []
    v_lines = []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if abs(x2-x1) < delta: # x-values are near; line is vertical
                v_lines.append(line)
            elif abs(y2-y1) < delta: # y-values are near; line is horizontal
                h_lines.append(line)
    return h_lines, v_lines

def cluster_points(points, nclusters):
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, _, centers = cv2.kmeans(points, nclusters, None, criteria, 10, cv2.KMEANS_PP_CENTERS)
    return centers

img = cv2.imread('image.png')

# preprocessing
img = cv2.resize(img, None, fx=.5, fy=.5)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
dilated = cv2.dilate(edges, np.ones((3,3), dtype=np.uint8))

cv2.imshow("Dilated", dilated)
cv2.waitKey(0)
cv2.imwrite('dilated.png', dilated)

# run the Hough transform
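# rho/theta set the accumulator resolution, threshold is the minimum vote count,
# and minLineLength/maxLineGap decide which segments are reported; these are the
# main parameters to adjust if too many or too few lines come back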
lines = cv2.HoughLinesP(dilated, rho=1, theta=np.pi/180, threshold=100, maxLineGap=20, minLineLength=50)

# segment the lines
delta = 10
h_lines, v_lines = segment_lines(lines, delta)

# draw the segmented lines
houghimg = img.copy()
for line in h_lines:
    for x1, y1, x2, y2 in line:
        color = [0,0,255] # color hoz lines red
        cv2.line(houghimg, (x1, y1), (x2, y2), color=color, thickness=1)
for line in v_lines:
    for x1, y1, x2, y2 in line:
        color = [255,0,0] # color vert lines blue
        cv2.line(houghimg, (x1, y1), (x2, y2), color=color, thickness=1)

cv2.imshow("Segmented Hough Lines", houghimg)
cv2.waitKey(0)
cv2.imwrite('hough.png', houghimg)

# find the line intersection points
Px = []
Py = []
for h_line in h_lines:
    for v_line in v_lines:
        px, py = find_intersection(h_line, v_line)
        Px.append(px)
        Py.append(py)

# draw the intersection points
intersectsimg = img.copy()
for cx, cy in zip(Px, Py):
    cx = np.round(cx).astype(int)
    cy = np.round(cy).astype(int)
    color = np.random.randint(0,255,3).tolist() # random colors
    cv2.circle(intersectsimg, (cx, cy), radius=2, color=color, thickness=-1) # -1: filled circle

cv2.imshow("Intersections", intersectsimg)
cv2.waitKey(0)
cv2.imwrite('intersections.png', intersectsimg)

# use clustering to find the centers of the data clusters
P = np.float32(np.column_stack((Px, Py)))
nclusters = 4
centers = cluster_points(P, nclusters)
print(centers)

# draw the center of the clusters
for cx, cy in centers:
    cx = np.round(cx).astype(int)
    cy = np.round(cy).astype(int)
    cv2.circle(img, (cx, cy), radius=4, color=[0,0,255], thickness=-1) # -1: filled circle

cv2.imshow("Center of intersection clusters", img)
cv2.waitKey(0)
cv2.imwrite('corners.png', img)


Finally, just one question...why not use the Harris corner detector implemented in OpenCV as cornerHarris()? Because it works really well with very minimal code. I thresholded the grayscale image, and then gave a little blur to remove spurious corners, and, well...

This was produced with the following code:

import cv2
import numpy as np

img = cv2.imread('image.png')

# preprocessing
img = cv2.resize(img, None, fx=.5, fy=.5)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
r, gray = cv2.threshold(gray, 120, 255, type=cv2.THRESH_BINARY)
gray = cv2.GaussianBlur(gray, (3,3), 3)

# run harris
gray = np.float32(gray)
dst = cv2.cornerHarris(gray,2,3,0.04)

# dilate the corner points for marking
dst = cv2.dilate(dst,None)
dst = cv2.dilate(dst,None)

# threshold
img[dst>0.01*dst.max()]=[0,0,255]

cv2.imshow('dst',img)
cv2.waitKey(0)
cv2.imwrite('harris.png', img)

I think with some minor adjustments the Harris corner detector can probably be much more accurate than extrapolating Hough line intersections.
