Computing x,y coordinate (3D) from image point


Problem description

I have a task to locate an object in a 3D coordinate system. Since I have to get almost exact X and Y coordinates, I decided to track a color marker with a known Z coordinate that will be placed on top of the moving object, like the orange ball in this picture:

First, I did the camera calibration to get the intrinsic parameters, and after that I used cv::solvePnP to get the rotation and translation vectors, as in the following code:

std::vector<cv::Point2f> imagePoints;
std::vector<cv::Point3f> objectPoints;
//img points are green dots in the picture
imagePoints.push_back(cv::Point2f(271.,109.));
imagePoints.push_back(cv::Point2f(65.,208.));
imagePoints.push_back(cv::Point2f(334.,459.));
imagePoints.push_back(cv::Point2f(600.,225.));

//object points are measured in millimeters because calibration is done in mm also
objectPoints.push_back(cv::Point3f(0., 0., 0.));
objectPoints.push_back(cv::Point3f(-511.,2181.,0.));
objectPoints.push_back(cv::Point3f(-3574.,2354.,0.));
objectPoints.push_back(cv::Point3f(-3400.,0.,0.));

cv::Mat rvec(1,3,cv::DataType<double>::type);           // rotation vector
cv::Mat tvec(1,3,cv::DataType<double>::type);           // translation vector
cv::Mat rotationMatrix(3,3,cv::DataType<double>::type); // rotation matrix

// cameraMatrix and distCoeffs come from the camera calibration step
cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
cv::Rodrigues(rvec,rotationMatrix);
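
The code above assumes that cameraMatrix and distCoeffs already exist from the calibration step. Here is a minimal sketch of how they might be produced with cv::calibrateCamera, assuming matched checkerboard corner lists have already been collected (the function and variable names here are illustrative, not from the original post):

// Sketch: producing cameraMatrix and distCoeffs with cv::calibrateCamera.
// objectCorners/imageCorners are assumed to hold matched checkerboard
// corners from several views; these names do not appear in the original post.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

void calibrateFromCorners(const std::vector<std::vector<cv::Point3f>>& objectCorners,
                          const std::vector<std::vector<cv::Point2f>>& imageCorners,
                          cv::Size imageSize,
                          cv::Mat& cameraMatrix, cv::Mat& distCoeffs)
{
    std::vector<cv::Mat> rvecs, tvecs; // per-view extrinsics, not needed here
    double rms = cv::calibrateCamera(objectCorners, imageCorners, imageSize,
                                     cameraMatrix, distCoeffs, rvecs, tvecs);
    std::cout << "calibration RMS reprojection error: " << rms << std::endl;
}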

After having all the matrices, I have this equation that can help me transform an image point to world coordinates:

s * [u, v, 1]^T = M * (R * [X, Y, Z]^T + t)

where M is cameraMatrix, R is rotationMatrix, t is tvec, and s is unknown. Zconst represents the height where the orange ball is; in this example it is 285 mm. So, first I need to solve the previous equation to get "s", and after that I can find out the X and Y coordinates of a selected image point:

[X, Y, Zconst]^T = R^-1 * (s * M^-1 * [u, v, 1]^T - t)

Solving this for the variable "s" is possible using the last row of the matrices, because Zconst is known:

s = (Zconst + (R^-1 * t)_3) / (R^-1 * M^-1 * [u, v, 1]^T)_3

Here is the code for that:

cv::Mat uvPoint = (cv::Mat_<double>(3,1) << 363, 222, 1); // u = 363, v = 222, got this point using mouse callback

cv::Mat leftSideMat  = rotationMatrix.inv() * cameraMatrix.inv() * uvPoint;
cv::Mat rightSideMat = rotationMatrix.inv() * tvec;

double s = (285 + rightSideMat.at<double>(2,0)) / leftSideMat.at<double>(2,0);
//285 represents the height Zconst

std::cout << "P = " << rotationMatrix.inv() * (s * cameraMatrix.inv() * uvPoint - tvec) << std::endl;

After this, I got the result: P = [-2629.5, 1272.6, 285.]

When I compare it to the measured value, which is Preal = [-2629.6, 1269.5, 285.], the error is very small, which is very good. But when I move the box to the edges of the room, the errors are maybe 20-40 mm, and I would like to improve that. Can anyone help me with that? Do you have any suggestions?

Recommended answer

Given your configuration, errors of 20-40 mm at the edges are average. It looks like you've done everything well.

Without modifying the camera/system configuration, it will be hard to do better. You can try to redo the camera calibration and hope for better results, but this will not improve them a lot (and you may eventually get worse results, so don't erase your current intrinsic parameters).
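
One way to keep the current parameters safe before re-calibrating is to write them to disk first, for example with cv::FileStorage (a minimal sketch; the file name is an arbitrary choice):

// Sketch: back up the current intrinsics before re-running calibration.
// "intrinsics_backup.yml" is an illustrative file name.
cv::FileStorage fs("intrinsics_backup.yml", cv::FileStorage::WRITE);
fs << "cameraMatrix" << cameraMatrix;
fs << "distCoeffs" << distCoeffs;
fs.release();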

As count0 said, if you need more precision you should go for multiple measurements.
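
A minimal sketch of what averaging multiple measurements could look like, reusing the imagePointToWorld helper sketched earlier (the detection list and function name are assumptions, not from the answer):

// Sketch: average several detections of the same marker to reduce noise.
// `detections` would come from the color-marker tracker and is assumed
// non-empty; the names here are illustrative.
cv::Point3d averageWorldPoint(const std::vector<cv::Point2d>& detections,
                              double zConst,
                              const cv::Mat& cameraMatrix,
                              const cv::Mat& rotationMatrix,
                              const cv::Mat& tvec)
{
    cv::Point3d sum(0., 0., 0.);
    for (const cv::Point2d& uv : detections)
        sum += imagePointToWorld(uv, zConst, cameraMatrix, rotationMatrix, tvec);
    return cv::Point3d(sum.x / detections.size(),
                       sum.y / detections.size(),
                       sum.z / detections.size());
}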
