Computing x,y coordinate (3D) from image point

Problem Description

I have a task to locate an object in a 3D coordinate system. Since I need almost exact X and Y coordinates, I decided to track a color marker with a known Z coordinate, placed on top of the moving object, like the orange ball in this picture:

First, I performed camera calibration to obtain the intrinsic parameters, and then used cv::solvePnP to get the rotation and translation vectors, as in the following code:

std::vector<cv::Point2f> imagePoints;
std::vector<cv::Point3f> objectPoints;
//img points are green dots in the picture
imagePoints.push_back(cv::Point2f(271.,109.));
imagePoints.push_back(cv::Point2f(65.,208.));
imagePoints.push_back(cv::Point2f(334.,459.));
imagePoints.push_back(cv::Point2f(600.,225.));

//object points are measured in millimeters because calibration is done in mm also
objectPoints.push_back(cv::Point3f(0., 0., 0.));
objectPoints.push_back(cv::Point3f(-511.,2181.,0.));
objectPoints.push_back(cv::Point3f(-3574.,2354.,0.));
objectPoints.push_back(cv::Point3f(-3400.,0.,0.));

cv::Mat rvec(1,3,cv::DataType<double>::type);
cv::Mat tvec(1,3,cv::DataType<double>::type);
cv::Mat rotationMatrix(3,3,cv::DataType<double>::type);

// cameraMatrix and distCoeffs come from the calibration step described above
cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
cv::Rodrigues(rvec,rotationMatrix);

After having all of the matrices, this equation can help me transform an image point into world coordinates:

s * [u, v, 1]^T = M * (R * [X, Y, Z]^T + t)

where M is cameraMatrix, R is rotationMatrix, t is tvec, and s is an unknown scale factor. Zconst represents the height at which the orange ball sits, 285 mm in this example. So first I need to solve the previous equation for "s", and then I can find the X and Y coordinates of a selected image point:

[X, Y, Zconst]^T = R^(-1) * (s * M^(-1) * [u, v, 1]^T - t)

I can solve for the variable "s" using the last row of the matrices, because Zconst is known. Here is the code for that:

cv::Mat uvPoint = (cv::Mat_<double>(3,1) << 363, 222, 1); // u = 363, v = 222, got this point using mouse callback

cv::Mat leftSideMat  = rotationMatrix.inv() * cameraMatrix.inv() * uvPoint;
cv::Mat rightSideMat = rotationMatrix.inv() * tvec;

double s = (285 + rightSideMat.at<double>(2,0)) / leftSideMat.at<double>(2,0);
//285 represents the height Zconst

std::cout << "P = " << rotationMatrix.inv() * (s * cameraMatrix.inv() * uvPoint - tvec) << std::endl;
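
For reuse, the same computation can be wrapped into a small helper. This is a minimal sketch; the function name backProjectToPlane and its signature are my own, not from the original post, and it assumes cameraMatrix, rotationMatrix, and tvec were computed as shown above:

// Hypothetical helper (not from the original post): back-projects an image
// point onto the world plane Z = zConst, using the matrices computed above.
cv::Point3d backProjectToPlane(const cv::Point2d& uv, double zConst,
                               const cv::Mat& cameraMatrix,
                               const cv::Mat& rotationMatrix,
                               const cv::Mat& tvec)
{
    cv::Mat uvPoint = (cv::Mat_<double>(3,1) << uv.x, uv.y, 1.0);
    cv::Mat leftSideMat  = rotationMatrix.inv() * cameraMatrix.inv() * uvPoint;
    cv::Mat rightSideMat = rotationMatrix.inv() * tvec;
    // solve the last row for s so that the world Z coordinate equals zConst
    double s = (zConst + rightSideMat.at<double>(2,0)) / leftSideMat.at<double>(2,0);
    cv::Mat P = rotationMatrix.inv() * (s * cameraMatrix.inv() * uvPoint - tvec);
    return cv::Point3d(P.at<double>(0,0), P.at<double>(1,0), P.at<double>(2,0));
}
// e.g. cv::Point3d P = backProjectToPlane({363, 222}, 285.0, cameraMatrix, rotationMatrix, tvec);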

After this, I got the result: P = [-2629.5, 1272.6, 285.]

When I compare it to the measured value, Preal = [-2629.6, 1269.5, 285.], the error is very small, which is very good.

However, when I move the box to the edges of the room, the error grows to maybe 20-40 mm, and I would like to improve that. Can anyone help me with this? Do you have any suggestions?

Recommended Answer

Given your configuration, errors of 20-40 mm at the edges are about average. It looks like you have done everything well.

Without modifying the camera/system configuration, it will be hard to do better. You can try redoing the camera calibration in the hope of better results, but this will not improve them by much (and you may end up with worse results, so don't erase your current intrinsic parameters).
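
For reference, a standard chessboard calibration in OpenCV looks roughly like this. This is a sketch under assumed inputs; the board size, image size, and corner-collection step are placeholders, not from the answer:

// Sketch of a standard chessboard calibration (assumed setup):
std::vector<std::vector<cv::Point3f>> objPoints; // board corners in board coords, one set per view
std::vector<std::vector<cv::Point2f>> imgPoints; // detected corners, one set per view
// ... fill objPoints/imgPoints, e.g. with cv::findChessboardCorners over many views ...
cv::Size imageSize(640, 480); // replace with your actual image size
cv::Mat cameraMatrix, distCoeffs;
std::vector<cv::Mat> rvecs, tvecs;
double rms = cv::calibrateCamera(objPoints, imgPoints, imageSize,
                                 cameraMatrix, distCoeffs, rvecs, tvecs);
std::cout << "RMS reprojection error: " << rms << std::endl;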

As count0 said, if you need more precision you should take multiple measurements.
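
One simple way to do that is to average several back-projected estimates of the same marker position, for example over consecutive frames. This is a sketch of that idea, reusing the hypothetical backProjectToPlane helper from above:

// Sketch: average N detections of the marker to reduce per-frame noise.
// 'detections' would be filled by the marker tracker over consecutive frames.
std::vector<cv::Point2d> detections;
// ... push_back the marker's image position for each frame ...
cv::Point3d sum(0, 0, 0);
for (const cv::Point2d& uv : detections)
    sum += backProjectToPlane(uv, 285.0, cameraMatrix, rotationMatrix, tvec);
if (!detections.empty())
{
    cv::Point3d avg = sum * (1.0 / detections.size());
    std::cout << "averaged P = " << avg << std::endl;
}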
