Difference between undistortPoints() and projectPoints() in OpenCV
Question
From my understanding, undistortPoints takes a set of points on a distorted image and calculates where their coordinates would lie on an undistorted version of the same image. Likewise, projectPoints maps a set of object coordinates to their corresponding image coordinates.
However, I am unsure whether projectPoints maps the object coordinates to image points on the distorted image (i.e. the original image) or on one that has been undistorted (straight lines)?
Furthermore, the OpenCV documentation for undistortPoints states that 'the function performs a reverse transformation to projectPoints()'. Could you please explain how this is so?
Answer
Quote from the OpenCV 3.2 documentation for projectPoints():
Projects 3D points to an image plane.
The function computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters.
You have the parameter distCoeffs:
If the vector is empty, zero distortion coefficients are assumed.
With no distortion, the projection equation is:

s [u, v, 1]^T = K [R | t] [X, Y, Z, 1]^T
with K the intrinsic matrix and [R | t] the extrinsic matrix, i.e. the transformation that transforms a point in the object or world frame into the camera frame.
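The projection above can be sketched in plain NumPy (not using OpenCV itself; the intrinsic values fx, fy, cx, cy, the rotation, and the translation below are made-up example numbers):

```python
import numpy as np

# Illustrative intrinsics (made-up values, not from any real calibration)
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Extrinsics [R | t]: identity rotation and a small translation (assumed)
R = np.eye(3)
t = np.array([[0.1], [0.0], [2.0]])
Rt = np.hstack([R, t])  # 3x4 extrinsic matrix

# A 3D point in the world/object frame, in homogeneous coordinates
X_world = np.array([[0.5], [0.25], [1.0], [1.0]])

# s [u, v, 1]^T = K [R | t] [X, Y, Z, 1]^T
p = K @ Rt @ X_world
u, v = (p[:2] / p[2]).ravel()  # divide by s (the depth) to get pixels
print(u, v)
```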
For undistortPoints(), you have the parameter R:
Rectification transformation in the object space (3x3 matrix). R1 or R2 computed by cv::stereoRectify can be passed here. If the matrix is empty, the identity transformation is used.
The reverse transformation is the operation where you compute, for a 2D image point ([u, v]), the corresponding 3D point in the normalized camera frame ([x, y, z=1]) using the intrinsic parameters.
With the extrinsic matrix, you can get the point in the camera frame:

[Xc, Yc, Zc]^T = R [Xo, Yo, Zo]^T + t
The normalized camera frame is obtained by dividing by the depth:

x = Xc / Zc, y = Yc / Zc, z = 1
Assuming no distortion, the image point is:

u = fx * x + cx
v = fy * y + cy
And the "reverse transformation", assuming no distortion:

x = (u - cx) / fx
y = (v - cy) / fy
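The round trip between the normalized camera frame and pixel coordinates can be checked with a few lines of Python (no OpenCV needed; the focal lengths and principal point are the same made-up example values as above):

```python
# Illustrative intrinsics (made-up example values)
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0

# Forward: a normalized camera point [x, y, z=1] -> pixel (no distortion)
x, y = 0.2, -0.1
u = fx * x + cx
v = fy * y + cy

# Reverse transformation: pixel -> normalized camera frame [x, y, z=1]
x_back = (u - cx) / fx
y_back = (v - cy) / fy

# The reverse transformation recovers the original normalized point
print([x_back, y_back, 1.0])
```

This is exactly the sense in which undistortPoints (with no distortion coefficients and no rectification) is the reverse of the intrinsic part of projectPoints: it maps pixels back to normalized image coordinates.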