What is the correct way to undistort points captured using fisheye camera in OpenCV in Python?


Problem description

Information:

I've calibrated my camera and have found the camera's intrinsic matrix (K) and its distortion coefficients (d) to be the following:

import cv2
import numpy as np

K = np.asarray([[556.3834638575809,0,955.3259939726225],[0,556.2366649196925,547.3011305411478],[0,0,1]])
# Fisheye model distortion coefficients (k1, k2, k3, k4)
d = np.asarray([[-0.05165940570900624],[0.0031093602070252167],[-0.0034036648250202746],[0.0003390345044343793]])

From here, I can undistort my image using the following three lines:

final_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(K, d, (1920, 1080), np.eye(3), balance=1.0)
map_1, map_2 = cv2.fisheye.initUndistortRectifyMap(K, d, np.eye(3), final_K, (1920, 1080), cv2.CV_32FC1)
undistorted_image = cv2.remap(image, map_1, map_2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)

The resulting undistorted image appears to be correct (left image is distorted, right is undistorted), but when I try to undistort image points using cv2.remap(), the points aren't mapped to the same location as their corresponding pixel in the image. I detected the calibration board points in the left image using

ret, corners = cv2.findChessboardCorners(gray, (6,8), None, cv2.CALIB_CB_ADAPTIVE_THRESH+cv2.CALIB_CB_FAST_CHECK+cv2.CALIB_CB_NORMALIZE_IMAGE)
corners2 = cv2.cornerSubPix(gray, corners, (3,3), (-1,-1), (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1))

then remapped those points in the following way:

remapped_points = []
for corner in corners2:
    x, y = int(corner[0][0]), int(corner[0][1])
    remapped_points.append((map_1[y, x], map_2[y, x]))

In these horizontally concatenated images, the left image shows the points detected in the distorted image, while the right image shows the remapped locations of those points.

Also, I haven't been able to get correct results using cv2.fisheye.undistortPoints(). I have the following function to undistort points:

def undistort_list_of_points(point_list, in_K, in_d):
    K = np.asarray(in_K)
    d = np.asarray(in_d)
    # Input can be list of bbox coords, poly coords, etc.
    # TODO -- Check if point behind camera?
    points_2d = np.asarray(point_list)

    points_2d = points_2d[:, 0:2].astype('float32')
    points2d_undist = np.empty_like(points_2d)
    points_2d = np.expand_dims(points_2d, axis=1)

    result = np.squeeze(cv2.fisheye.undistortPoints(points_2d, K, d))

    fx = K[0, 0]
    fy = K[1, 1]
    cx = K[0, 2]
    cy = K[1, 2]

    for i, (px, py) in enumerate(result):
        points2d_undist[i, 0] = px * fx + cx
        points2d_undist[i, 1] = py * fy + cy

    return points2d_undist

This image shows the results when undistorting using the above function.

(This is all running in OpenCV 4.2.0 on Ubuntu 18.04 in Python 3.6.8.)

Questions

Why isn't this remapping of image coordinates working properly? Am I using map_1 and map_2 incorrectly?

Why are the results from using cv2.fisheye.undistortPoints() different from using map_1 and map_2?

Answer

Answer to Q1:

You are not using map_1 and map_2 correctly.

The maps generated by the cv2.fisheye.initUndistortRectifyMap function map pixel locations in the destination image to pixel locations in the source image, i.e. dst(x,y) = src(map_x(x,y), map_y(x,y)). See remap in OpenCV.

In the code, map_1 is the x-direction pixel map and map_2 is the y-direction pixel map. For example, if (X_undistorted, Y_undistorted) is a pixel location in the undistorted image, then map_1[Y_undistorted, X_undistorted] gives you the x coordinate in the distorted image that this pixel should be sampled from, and map_2 gives you the corresponding y coordinate.
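
To make the direction of the maps concrete, here is a minimal sketch (assuming the image, map_1, and map_2 variables from the question's code) that checks the relation dst(x,y) = src(map_1[y,x], map_2[y,x]) for one in-bounds destination pixel:

import cv2

# Pick an arbitrary pixel (x, y) in the *destination* (undistorted) image.
x, y = 400, 300

# The maps give the source (distorted) location this pixel is sampled from.
src_x = map_1[y, x]
src_y = map_2[y, x]

# With nearest-neighbour interpolation, the remapped value at (y, x) should
# equal the source pixel at the rounded map coordinates.
undistorted_nn = cv2.remap(image, map_1, map_2, interpolation=cv2.INTER_NEAREST)
assert (undistorted_nn[y, x] == image[int(round(src_y)), int(round(src_x))]).all()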

So, map_1 and map_2 are useful for constructing an undistorted image from a distorted one, but they are not really suitable for the reverse process.
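
(If you do need the reverse direction, from a pixel in the undistorted image back to the corresponding pixel in the distorted image, something like the sketch below should work. It assumes the K, d, and final_K values from the question; note that cv2.fisheye.distortPoints expects normalized coordinates, so the pixel coordinates are first normalized with the new camera matrix.)

import cv2
import numpy as np

def distort_pixels(points_px, K_new, K, d):
    # points_px: (N, 2) pixel coordinates in the undistorted image.
    pts = np.asarray(points_px, dtype=np.float32)
    # Undistorted pixels -> normalized coordinates via the inverse of K_new.
    fx, fy = K_new[0, 0], K_new[1, 1]
    cx, cy = K_new[0, 2], K_new[1, 2]
    normalized = np.stack([(pts[:, 0] - cx) / fx, (pts[:, 1] - cy) / fy], axis=-1)
    normalized = normalized.reshape(-1, 1, 2)
    # Apply the fisheye distortion model and reproject with the original K.
    return cv2.fisheye.distortPoints(normalized, K, d).reshape(-1, 2)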

remapped_points = []
for corner in corners2:
    x, y = int(corner[0][0]), int(corner[0][1])
    remapped_points.append((map_1[y, x], map_2[y, x]))

This code for finding the undistorted pixel locations of the corners is not correct. You will need to use the undistortPoints function.

Mapping and undistortion are different.

You can think of mapping as constructing the undistorted image from the pixel locations in the undistorted image using the pixel maps, while undistortion finds the undistorted pixel location from the original pixel location using the lens distortion model.

In order to find the correct pixel locations of the corners in the undistorted image, you need to convert the normalized coordinates of the undistorted points back to pixel coordinates using the newly estimated K; in your case, that is final_K, because the undistorted image can be seen as taken by a camera with final_K and no distortion (there is a small zooming effect).

Here is the modified undistort function:

def undistort_list_of_points(point_list, in_K, in_d, in_K_new):
    K = np.asarray(in_K)
    d = np.asarray(in_d)
    # Input can be list of bbox coords, poly coords, etc.
    # TODO -- Check if point behind camera?
    points_2d = np.asarray(point_list)

    points_2d = points_2d[:, 0:2].astype('float32')
    points2d_undist = np.empty_like(points_2d)
    points_2d = np.expand_dims(points_2d, axis=1)

    result = np.squeeze(cv2.fisheye.undistortPoints(points_2d, K, d))

    K_new = np.asarray(in_K_new)
    fx = K_new[0, 0]
    fy = K_new[1, 1]
    cx = K_new[0, 2]
    cy = K_new[1, 2]

    for i, (px, py) in enumerate(result):
        points2d_undist[i, 0] = px * fx + cx
        points2d_undist[i, 1] = py * fy + cy

    return points2d_undist
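
For example, with the K, d, and final_K arrays from the question, the detected corners (the (N, 1, 2) output of cv2.cornerSubPix shown earlier) could be passed in like this, as a usage sketch:

# corners2 has shape (N, 1, 2); the function expects an (N, 2) array.
undistorted_corner_px = undistort_list_of_points(corners2.reshape(-1, 2), K, d, final_K)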


Here is my code doing the same thing:

import cv2
import numpy as np
import matplotlib.pyplot as plt

K = np.asarray([[556.3834638575809,0,955.3259939726225],[0,556.2366649196925,547.3011305411478],[0,0,1]])
D = np.asarray([[-0.05165940570900624],[0.0031093602070252167],[-0.0034036648250202746],[0.0003390345044343793]])
print("K:\n", K)
print("D:\n", D.ravel())

# read image and get the original image on the left
image_path = "sample.jpg"
image = cv2.imread(image_path)
image = image[:, :image.shape[1]//2, :]
image_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

fig = plt.figure()
plt.imshow(image_gray, "gray")

H_in, W_in = image_gray.shape
print("Grayscale Image Dimension:\n", (W_in, H_in))

scale_factor = 1.0 
balance = 1.0

img_dim_out = (int(W_in*scale_factor), int(H_in*scale_factor))
# Scale the intrinsic matrix with the output size (K_out equals K when scale_factor is 1.0).
K_out = K*scale_factor
K_out[2,2] = 1.0

K_new = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(K_out, D, img_dim_out, np.eye(3), balance=balance)
print("Newly estimated K:\n", K_new)

map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), K_new, img_dim_out, cv2.CV_32FC1)
print("Rectify Map1 Dimension:\n", map1.shape)
print("Rectify Map2 Dimension:\n", map2.shape)

undistorted_image_gray = cv2.remap(image_gray, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
fig = plt.figure()
plt.imshow(undistorted_image_gray, "gray")
  
ret, corners = cv2.findChessboardCorners(image_gray, (6,8), None, cv2.CALIB_CB_ADAPTIVE_THRESH+cv2.CALIB_CB_FAST_CHECK+cv2.CALIB_CB_NORMALIZE_IMAGE)
corners_subpix = cv2.cornerSubPix(image_gray, corners, (3,3), (-1,-1), (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1))

undistorted_corners = cv2.fisheye.undistortPoints(corners_subpix, K, D)
undistorted_corners = undistorted_corners.reshape(-1,2)


fx = K_new[0,0]
fy = K_new[1,1]
cx = K_new[0,2]
cy = K_new[1,2]
undistorted_corners_pixel = np.zeros_like(undistorted_corners)

for i, (x, y) in enumerate(undistorted_corners):
    px = x*fx + cx
    py = y*fy + cy
    undistorted_corners_pixel[i,0] = px
    undistorted_corners_pixel[i,1] = py
    
undistorted_image_show = cv2.cvtColor(undistorted_image_gray, cv2.COLOR_GRAY2BGR)
for corner in undistorted_corners_pixel:
    image_corners = cv2.circle(np.zeros_like(undistorted_image_show), (int(corner[0]),int(corner[1])), 15, [0, 255, 0], -1)
    undistorted_image_show = cv2.add(undistorted_image_show, image_corners)

fig = plt.figure()
plt.imshow(undistorted_image_show, "gray")
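
As a side note, cv2.fisheye.undistortPoints also accepts optional R and P arguments; passing the new camera matrix as P makes OpenCV do the reprojection to pixel coordinates itself, which would replace the manual fx/fy/cx/cy loop above. A sketch under that assumption, using the same corners_subpix, K, D, and K_new as in the code:

# Undistort and reproject with P=K_new in a single call.
undistorted_corners_pixel_alt = cv2.fisheye.undistortPoints(corners_subpix, K, D, R=np.eye(3), P=K_new)
undistorted_corners_pixel_alt = undistorted_corners_pixel_alt.reshape(-1, 2)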
