Understanding of openCV undistortion

Question

I'm receiving depth images from a ToF camera via MATLAB. The drivers delivered with the ToF camera, which compute x,y,z coordinates out of the depth image, use openCV functions that are integrated into MATLAB via mex files.

But later on I can't use those drivers anymore, nor the openCV functions, therefore I need to implement the 2D-to-3D mapping on my own, including the compensation of the radial distortion. I have already got hold of the camera parameters, and the computation of the x,y,z coordinates of each pixel of the depth image is working. Until now I have been solving the implicit equations of the undistortion via Newton's method (which isn't really fast...). But I want to implement the undistortion of the openCV function.

... and there is my problem: I don't really understand it, and I hope you can help me out there. How does it actually work? I tried to search through the forum, but haven't found any useful threads concerning this case.

Greetings!

Solution

The equations of the projection of a 3D point [X; Y; Z] to a 2D image point [u; v] are provided on the documentation page related to camera calibration (docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html):


    [x; y; z] = R * [X; Y; Z] + t        (world frame -> camera frame)
    x' = x / z
    y' = y / z
    r^2 = x'^2 + y'^2
    x'' = x' * (1 + k1*r^2 + k2*r^4 + k3*r^6) / (1 + k4*r^2 + k5*r^4 + k6*r^6) + 2*p1*x'*y' + p2*(r^2 + 2*x'^2)
    y'' = y' * (1 + k1*r^2 + k2*r^4 + k3*r^6) / (1 + k4*r^2 + k5*r^4 + k6*r^6) + p1*(r^2 + 2*y'^2) + 2*p2*x'*y'
    u = fx * x'' + cx
    v = fy * y'' + cy

(source: opencv.org)

In the case of lens distortion, the equations are non-linear and depend on 3 to 8 parameters (k1 to k6, p1 and p2). Hence, it would normally require a non-linear solving algorithm (e.g. Newton's method, the Levenberg-Marquardt algorithm, etc.) to invert such a model and estimate the undistorted coordinates from the distorted ones. This is what is used behind the function undistortPoints, with tuned parameters that make the optimization fast but a little inaccurate.
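
For intuition, here is a minimal sketch in Python of one such iterative inversion: a simple fixed-point iteration over the five-parameter (k1, k2, p1, p2, k3) model. The function name, signature, and fixed iteration count are illustrative assumptions, not OpenCV's exact implementation.

    def undistort_point(xd, yd, dist, num_iters=10):
        """Estimate undistorted normalized coords from distorted ones (xd, yd).

        Illustrative fixed-point iteration, not OpenCV's exact code.
        """
        k1, k2, p1, p2, k3 = dist
        x, y = xd, yd  # initial guess: start from the distorted coordinates
        for _ in range(num_iters):
            r2 = x * x + y * y
            radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
            dx = 2 * p1 * x * y + p2 * (r2 + 2 * x * x)  # tangential terms
            dy = p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
            # Re-substitute into xd = x * radial + dx, solved for x (same for y).
            x = (xd - dx) / radial
            y = (yd - dy) / radial
        return x, y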

However, in the particular case of image lens correction (as opposed to point correction), there is a much more efficient approach based on a well-known image re-sampling trick. The trick is that, in order to obtain a valid intensity for each pixel of your destination image, you have to transform coordinates in the destination image into coordinates in the source image, and not the opposite as one would intuitively expect. In the case of lens distortion correction, this means that you actually do not have to invert the non-linear model, but just apply it.

Basically, the algorithm behind the function undistort is the following (a code sketch follows the list). For each pixel of the destination lens-corrected image, do:

  • Convert the pixel coordinates (u_dst, v_dst) to normalized coordinates (x', y') using the inverse of the calibration matrix K,
  • Apply the lens-distortion model, as displayed above, to obtain the distorted normalized coordinates (x'', y''),
  • Convert (x'', y'') to distorted pixel coordinates (u_src, v_src) using the calibration matrix K,
  • Use the interpolation method of your choice to find the intensity/depth associated with the pixel coordinates (u_src, v_src) in the source image, and assign this intensity/depth to the current destination pixel.
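
Putting those four steps together, here is a minimal NumPy sketch of that per-pixel loop, vectorized over the whole image. It assumes the five-parameter (k1, k2, p1, p2, k3) distortion model and implements only nearest-neighbor sampling; the function undistort_image and its signature are illustrative, not OpenCV's API.

    import numpy as np

    def undistort_image(src, K, dist):
        """Lens-correct `src` by mapping each destination pixel back into the source."""
        h, w = src.shape[:2]
        fx, fy = K[0, 0], K[1, 1]
        cx, cy = K[0, 2], K[1, 2]
        k1, k2, p1, p2, k3 = dist

        # Step 1: destination pixel grid -> normalized coordinates (inverse of K).
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        xp = (u - cx) / fx
        yp = (v - cy) / fy

        # Step 2: apply the forward distortion model to get (x'', y'').
        r2 = xp**2 + yp**2
        radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
        xpp = xp * radial + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp**2)
        ypp = yp * radial + p1 * (r2 + 2 * yp**2) + 2 * p2 * xp * yp

        # Step 3: back to pixel coordinates in the distorted source image (apply K).
        u_src = fx * xpp + cx
        v_src = fy * ypp + cy

        # Step 4: nearest-neighbor sampling (appropriate for depth maps, see below).
        iu = np.clip(np.rint(u_src).astype(int), 0, w - 1)
        iv = np.clip(np.rint(v_src).astype(int), 0, h - 1)
        dst = src[iv, iu]

        # Zero out pixels whose source location fell outside the image.
        outside = (u_src < 0) | (u_src > w - 1) | (v_src < 0) | (v_src > h - 1)
        dst[outside] = 0
        return dst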

Note that if you are interested in undistorting the depthmap image, you should use a nearest-neighbor interpolation, otherwise you will almost certainly interpolate depth values at object boundaries, resulting in unwanted artifacts.
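
For instance, applying the sketch above to a depth map (the calibration matrix and distortion coefficients below are made-up placeholder values, not from a real calibration):

    K = np.array([[525.0,   0.0, 319.5],
                  [  0.0, 525.0, 239.5],
                  [  0.0,   0.0,   1.0]])
    dist = (0.10, -0.20, 0.001, 0.002, 0.0)              # k1, k2, p1, p2, k3
    depth = np.random.rand(480, 640).astype(np.float32)  # stand-in depth image
    depth_undistorted = undistort_image(depth, K, dist)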
