Camera pose estimation (OpenCV PnP)


Problem description

I am trying to get a global pose estimate from an image of four fiducials with known global positions using my webcam.

I have checked many stackexchange questions and a few papers and I cannot seem to get a correct solution. The position numbers I do get out are repeatable, but in no way linearly proportional to camera movement. FYI I am using C++ OpenCV 2.1.

At this link is pictured my coordinate systems and the test data used below.

% Input to solvePnP():
imagePoints =     [ 481, 831; % [x, y] format
                    520, 504;
                   1114, 828;
                   1106, 507]
objectPoints = [0.11, 1.15, 0; % [x, y, z] format
                0.11, 1.37, 0; 
                0.40, 1.15, 0;
                0.40, 1.37, 0]

% camera intrinsics for Logitech C910
cameraMat = [1913.71011, 0.00000,    1311.03556;
             0.00000,    1909.60756, 953.81594;
             0.00000,    0.00000,    1.00000]
distCoeffs = [0, 0, 0, 0, 0]

% output of solvePnP():
tVec = [-0.3515;
         0.8928; 
         0.1997]

rVec = [2.5279;
       -0.09793;
        0.2050]
% using Rodrigues to convert back to rotation matrix:

rMat = [0.9853, -0.1159,  0.1248;
       -0.0242, -0.8206, -0.5708;
        0.1686,  0.5594, -0.8114]

So far, can anyone see anything wrong with these numbers? I would appreciate it if someone would check them in, for example, MATLAB (the code above is m-file friendly).
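A rough Python equivalent of the call that produced these numbers (a minimal sketch with cv2 and numpy, feeding in the same image points, object points and intrinsics listed above) would be:

import cv2
import numpy as np

imagePoints = np.array([[481, 831],    # [x, y] in pixels
                        [520, 504],
                        [1114, 828],
                        [1106, 507]], dtype=np.float64)
objectPoints = np.array([[0.11, 1.15, 0],    # [x, y, z] in the global frame
                         [0.11, 1.37, 0],
                         [0.40, 1.15, 0],
                         [0.40, 1.37, 0]], dtype=np.float64)
cameraMat = np.array([[1913.71011, 0.0,        1311.03556],
                      [0.0,        1909.60756, 953.81594],
                      [0.0,        0.0,        1.0]])
distCoeffs = np.zeros(5)

# solvePnP returns the rotation vector and translation vector of the
# object frame as seen from the camera frame
ok, rVec, tVec = cv2.solvePnP(objectPoints, imagePoints, cameraMat, distCoeffs)
print(ok, rVec.ravel(), tVec.ravel())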

From this point, I am unsure of how to get the global pose from rMat and tVec. From what I have read in this question, getting the pose from rMat and tVec is simply:

position = transpose(rMat) * tVec   % matrix multiplication

However, I suspect from other sources I have read that it is not that simple.

To get the position of the camera in real-world coordinates, what do I need to do? As I am unsure whether this is an implementation problem (though most likely a theory problem), I would like someone who has used the solvePnP function successfully in OpenCV to answer this question, although any ideas are welcome too!

Thank you very much for your time.

Answer

I solved this a while ago; apologies for the year delay.

In the Python OpenCV 2.1 I was using, and the newer version 3.0.0-dev, I have verified that to get the pose of the camera in the global frame you must:

import cv2
import numpy as np

_, rVec, tVec = cv2.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs)
R, _ = cv2.Rodrigues(rVec)   # rotation vector -> 3x3 rotation matrix
R = R.transpose()            # now maps camera-frame vectors into the global frame
pos = -R @ tVec              # camera position in the global (object) frame
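The transpose and the minus sign come from inverting the transformation that solvePnP returns: rVec/tVec map points from the global (object) frame into the camera frame, X_cam = R * X_global + tVec, so the camera centre (X_cam = 0) sits at X_global = -transpose(R) * tVec in the global frame.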

Now pos is the position of the camera expressed in the global frame (the same frame the objectPoints are expressed in). R is an attitude matrix (a DCM), which is a good form in which to store the attitude. If you require Euler angles, you can convert the DCM to Euler angles for an XYZ rotation sequence using:

from math import atan2, asin

# Euler angles (radians) for an XYZ rotation sequence
roll = atan2(-R[2][1], R[2][2])
pitch = asin(R[2][0])
yaw = atan2(-R[1][0], R[0][0])
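As a quick sanity check, the same recipe can be applied directly to the rMat and tVec posted in the question (a minimal numpy sketch, using only the numbers given above):

import numpy as np

rMat = np.array([[ 0.9853, -0.1159,  0.1248],
                 [-0.0242, -0.8206, -0.5708],
                 [ 0.1686,  0.5594, -0.8114]])
tVec = np.array([[-0.3515], [0.8928], [0.1997]])

pos = -rMat.T @ tVec   # camera position in the global frame, same units as objectPoints
print(pos.ravel())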

