Camera calibration, reverse projection of pixel to direction

Problem description

I am using OpenCV to estimate a webcam's intrinsic matrix from a series of chessboard images, as detailed in this tutorial, and to reverse project a pixel to a direction (in terms of azimuth/elevation angles).

The end goal is to let the user select a point on the image, estimate the direction of that point relative to the camera center, and use it as the DOA for a beamforming algorithm.

So once I have estimated the intrinsic matrix, I reverse project the user-selected pixel (see code below) and display it as azimuth/elevation angles.

# mtx comes from the calibration below; cap, flag, mouse_x and mouse_y come
# from the capture / mouse-callback setup (not shown)
result = [0, 0, 0]  # reverse-projected point, in homogeneous coordinates
while 1:
    _, img = cap.read()
    if flag:  # the user has clicked somewhere
        # back-project the clicked pixel through the inverse intrinsic matrix
        result = np.dot(np.linalg.inv(mtx), [mouse_x, mouse_y, 1])
        result = np.arctan(result)  # convert normalized coordinates to angles (rad)
        flag = False

    # overlay the clicked pixel coordinates and the resulting angles (in degrees)
    cv2.putText(img, '({},{})'.format(mouse_x, mouse_y), (20, 440), cv2.FONT_HERSHEY_SIMPLEX,
                0.5, (0, 255, 0), 2, cv2.LINE_AA)
    cv2.putText(img, '({:.2f},{:.2f})'.format(180/np.pi*result[0], 180/np.pi*result[1]), (20, 460),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2, cv2.LINE_AA)

    cv2.imshow('image', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

My problem is that I'm not sure whether my results are coherent. The major incoherence is that the point of the image corresponding to the {0,0} angle is noticeably off the image center, as seen below (the camera image has been replaced by a black background for privacy reasons):

I don't really see a simple yet efficient way of measuring the accuracy (the only method I could think of was to use a servo motor with a laser on it, placed just under the camera, and point it in the computed direction).

Here is the intrinsic matrix after calibration with 15 images:

I get an error of around 0.44 RMS, which seems satisfactory.

My calibration code:

import time

import cv2
import numpy as np

nCalFrames = 12  # number of frames for calibration
nFrames = 0
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)  # termination criteria

objp = np.zeros((9*7, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:7].T.reshape(-1, 2)
objpoints = []  # 3d point in real world space
imgpoints = []  # 2d points in image plane.

cap = cv2.VideoCapture(0)
previousTime = 0
gray = 0

while 1:
    # Capture frame-by-frame
    _, img = cap.read()

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (9, 7), None)

    # If found, add object points, image points (after refining them)
    if ret:

        corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)

        if time.time() - previousTime > 2:
            previousTime = time.time()
            imgpoints.append(corners2)
            objpoints.append(objp)
            img = cv2.bitwise_not(img)
            nFrames = nFrames + 1

        # Draw and display the corners
        img = cv2.drawChessboardCorners(img, (9, 7), corners, ret)

    cv2.putText(img, '{}/{}'.format(nFrames, nCalFrames), (20, 460), cv2.FONT_HERSHEY_SIMPLEX,
                2, (0, 255, 0), 2, cv2.LINE_AA)
    cv2.putText(img, 'press \'q\' to exit...', (255, 15), cv2.FONT_HERSHEY_SIMPLEX,
                0.5, (0, 0, 255), 1, cv2.LINE_AA)
    # Display the resulting frame
    cv2.imshow('Webcam Calibration', img)
    if nFrames == nCalFrames:
        break

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

RMS_error, mtx, disto_coef, _, _ = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
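
As a sanity check on that 0.44 RMS figure, the per-view reprojection error can be recomputed from the calibration outputs, as in the OpenCV calibration tutorial. A minimal sketch, assuming the calibrateCamera call above is changed to keep the rotation and translation vectors instead of discarding them:

RMS_error, mtx, disto_coef, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)

total_error = 0
for i in range(len(objpoints)):
    # re-project the board points with the estimated pose and intrinsics
    proj, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, disto_coef)
    # mean distance between detected and re-projected corners for this view
    total_error += cv2.norm(imgpoints[i], proj, cv2.NORM_L2) / len(proj)
print('mean reprojection error: {:.3f} px'.format(total_error / len(objpoints)))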

EDIT: another test method would be to use a whiteboard with points at known angles and estimate the error by comparing it with the experimental results, but I don't know how to set up such a system.

Answer

Regarding your first concern, it is normal for the principal point to be off the image center. The estimated point, i.e., the point of zero elevation and azimuth, is the one that minimizes the radial distortion coefficients, and for a low-cost wide-angle lens (e.g., that of a typical webcam) it can easily be off by a noticeable amount.

Your calibration should be OK up to the call to calibrateCamera. However, in your code snippet it seems you're ignoring the distortion coefficients. What is missing is initUndistortRectifyMap, which also lets you re-center the principal point if that matters.

h, w = img.shape[:2]
# compute a new camera matrix with a centered principal point
new_mtx, roi = cv2.getOptimalNewCameraMatrix(mtx, disto_coef, (w, h), 1, (w, h))
print(new_mtx)
# compute the undistortion maps (the 5 selects the CV_32FC1 map type)
mapx, mapy = cv2.initUndistortRectifyMap(mtx, disto_coef, None, new_mtx, (w, h), 5)

It essentially makes the focal length equal in both dimensions and centers the principal point (see the OpenCV Python documentation for the parameters).

Then, after every

_, img = cap.read()

you must undistort the image before rendering:

# apply the remap
img = cv2.remap(img, mapx, mapy, cv2.INTER_LINEAR)
# crop the image
x, y, w, h = roi
img = img[y:y+h, x:x+w]
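
To tie this back to the original pixel-to-angle snippet, the clicked pixel should then be back-projected through new_mtx rather than the original mtx. A minimal sketch, assuming mouse_x and mouse_y are measured on the cropped, undistorted frame and roi is the crop rectangle returned by getOptimalNewCameraMatrix:

x0, y0, _, _ = roi  # crop offset applied above
pixel = np.array([mouse_x + x0, mouse_y + y0, 1.0])  # click in full undistorted-image coordinates
ray = np.linalg.inv(new_mtx).dot(pixel)  # normalized camera coordinates, z = 1
angles = np.arctan(ray)  # same angle convention as the original snippet
print('azimuth/elevation: {:.2f}, {:.2f} deg'.format(180/np.pi*angles[0], 180/np.pi*angles[1]))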

Here, I set the background to green to emphasize the barrel distortion. The output could look something like this (the camera image has been replaced by a checkerboard for privacy reasons):

If you do all of this, your calibration target is accurate, and your calibration samples fill the entire image area, you should be quite confident in the computation. However, to validate the measured azimuth and elevation against the undistorted image's pixel readings, I would suggest a tape measure from the lens's first principal point and a calibration plate placed at a normal angle right in front of the camera. There you can compute the expected angles and compare.
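
For instance, if a mark on such a plate sits at a lateral offset dx from the optical axis at a distance Z along it (both tape-measured from the first principal point), the expected azimuth is simply arctan(dx / Z). A tiny illustration with made-up measurements:

import numpy as np

dx = 0.30  # lateral offset of the mark, metres (example value)
Z = 1.50   # distance along the optical axis, metres (example value)
expected_azimuth = 180 / np.pi * np.arctan2(dx, Z)
print('expected azimuth: {:.2f} deg'.format(expected_azimuth))  # about 11.31 deg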

Hope this helps.
