How to determine world coordinates of a camera?


Problem description


I have a rectangular target of known dimensions and location on a wall, and a mobile camera on a robot. As the robot is driving around the room, I need to locate the target and compute the location of the camera and its pose. As a further twist, the camera's elevation and azimuth can be changed using servos. I am able to locate the target using OpenCV, but I am still fuzzy on calculating the camera's position (actually, I've gotten a flat spot on my forehead from banging my head against a wall for the last week). Here is what I am doing:

  1. Read the previously computed camera intrinsics file
  2. Get the pixel coordinates of the target rectangle's 4 corner points from its contour
  3. Call solvePnP with the rectangle's world coordinates, the pixel coordinates, the camera matrix, and the distortion matrix
  4. Call projectPoints with the resulting rotation and translation vectors
  5. ???


I have read the OpenCV book, but I guess I'm just missing something on how to use the projected points, rotation and translation vectors to compute the world coordinates of the camera and its pose (I'm not a math wiz) :-(


2013-04-02 Following the advice from "morynicz", I have written this simple standalone program.

#include <cstdio>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

// Print a labeled column vector
static void dump_matrix (const Mat &m, const String &label)
{
    printf ("%s\n", label.c_str ());
    for (int i = 0; i < m.rows; i++)
        printf ("%.4f\n", m.at<double> (i));
}

int main (int argc, char** argv)
{
    const char          *calibration_filename = argc >= 2 ? argv [1] : "M1011_camera.xml";
    FileStorage         camera_data (calibration_filename, FileStorage::READ);
    Mat                 camera_intrinsics, distortion;
    vector<Point3d>     world_coords;
    vector<Point2d>     pixel_coords;
    Mat                 rotation_vector, translation_vector, rotation_matrix;
    Mat                 camera_rotation_vector, camera_translation_vector;

    // Read previously calibrated camera data
    camera_data ["camera_matrix"] >> camera_intrinsics;
    camera_data ["distortion_coefficients"] >> distortion;
    camera_data.release ();

    // Target rectangle corner coordinates in the world frame, in feet
    world_coords.push_back (Point3d (10.91666666666667, 10.01041666666667, 0));
    world_coords.push_back (Point3d (10.91666666666667, 8.34375, 0));
    world_coords.push_back (Point3d (16.08333333333334, 8.34375, 0));
    world_coords.push_back (Point3d (16.08333333333334, 10.01041666666667, 0));

    // Corresponding corner coordinates in the image, in pixels
    pixel_coords.push_back (Point2d (284, 204));
    pixel_coords.push_back (Point2d (286, 249));
    pixel_coords.push_back (Point2d (421, 259));
    pixel_coords.push_back (Point2d (416, 216));

    // Get vectors for the world->camera transform
    solvePnP (world_coords, pixel_coords, camera_intrinsics, distortion, rotation_vector, translation_vector, false, 0);
    dump_matrix (rotation_vector, String ("Rotation vector"));
    dump_matrix (translation_vector, String ("Translation vector"));

    // We need the inverse of the world->camera transform (camera->world)
    // to calculate the camera's location
    Rodrigues (rotation_vector, rotation_matrix);
    Rodrigues (rotation_matrix.t (), camera_rotation_vector);
    Mat t = translation_vector.t ();
    camera_translation_vector = -camera_rotation_vector * t;

    printf ("Camera position %f, %f, %f\n", camera_translation_vector.at<double> (0), camera_translation_vector.at<double> (1), camera_translation_vector.at<double> (2));
    printf ("Camera pose %f, %f, %f\n", camera_rotation_vector.at<double> (0), camera_rotation_vector.at<double> (1), camera_rotation_vector.at<double> (2));
}
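Step 4 of the list above mentions projectPoints, which this standalone program never calls. As a sanity check, a short sketch like the following (reusing the program's variables, placed just before the final printf calls) reprojects the world points with the recovered pose; if solvePnP converged, the results should land near pixel_coords:

// Sanity check: reproject the world points using the recovered pose.
// These should be close to the measured pixel_coords.
vector<Point2d> reprojected;
projectPoints (world_coords, rotation_vector, translation_vector, camera_intrinsics, distortion, reprojected);
for (size_t i = 0; i < reprojected.size (); i++)
    printf ("Point %d: reprojected (%.1f, %.1f), measured (%.1f, %.1f)\n", (int) i, reprojected [i].x, reprojected [i].y, pixel_coords [i].x, pixel_coords [i].y);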


The pixel coordinates I used in my test are from a real image that was taken about 27 feet to the left of the target rectangle (which is 62 inches wide and 20 inches high), at about a 45-degree angle. The output is not what I'm expecting. What am I doing wrong?

Rotation vector
2.7005
0.0328
0.4590

Translation vector
-10.4774
8.1194
13.9423

Camera position -28.293855, 21.926176, 37.650714
Camera pose -2.700470, -0.032770, -0.459009


Will it be a problem if my world coordinates have the Y axis inverted from that of OpenCV's screen Y axis? (The origin of my coordinate system is on the floor to the left of the target, while OpenCV's origin is at the top left of the screen.)

What units is the pose in?

Recommended answer


You get the translation and rotation vectors from solvePnP; they tell you where the object is in the camera's coordinates. You need to compute the inverse transform.


In homogeneous coordinates, the camera -> object transform can be written as the matrix [R T; 0 1]. Using its special properties, the inverse of this matrix is [R^t -R^t*T; 0 1], where R^t is R transposed. You can get the R matrix from the Rodrigues transform. This gives you the translation vector and rotation matrix for the object -> camera transform.
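To see why that inverse is correct (a quick check, writing the rotation as R and the translation as T):

$$
\begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} R^{\top} & -R^{\top}T \\ 0 & 1 \end{bmatrix}
=
\begin{bmatrix} RR^{\top} & -RR^{\top}T + T \\ 0 & 1 \end{bmatrix}
=
\begin{bmatrix} I & 0 \\ 0 & 1 \end{bmatrix}
$$

since $RR^{\top} = I$ for a rotation matrix (it is orthonormal), the transpose doubles as the inverse and no general matrix inversion is needed.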


If you know where the object lies in world coordinates, you can multiply the world -> object transform by the object -> camera transform to extract the camera's translation and pose.
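A minimal sketch of that composition using OpenCV's Mat (an illustration, not part of the original answer; it assumes the rotation_vector and translation_vector produced by solvePnP and a known 4x4 world -> object matrix — in the question's setup the target corners are given directly in world coordinates, so that matrix is the identity):

    // Build a 4x4 homogeneous transform [R T; 0 1] from a Rodrigues
    // rotation vector and a 3x1 translation vector.
    cv::Mat homogeneous (const cv::Mat &rvec, const cv::Mat &tvec)
    {
        cv::Mat R, M = cv::Mat::eye (4, 4, CV_64FC1);
        cv::Rodrigues (rvec, R);
        R.copyTo (M (cv::Rect (0, 0, 3, 3)));
        tvec.copyTo (M (cv::Rect (3, 0, 1, 3)));
        return M;
    }

    // world -> camera is the object -> camera transform (from solvePnP)
    // composed with the world -> object transform.
    cv::Mat world_to_object = cv::Mat::eye (4, 4, CV_64FC1); // identity here
    cv::Mat world_to_camera = homogeneous (rotation_vector, translation_vector) * world_to_object;

    // Invert to get camera -> world; the last column holds the camera's
    // position in world coordinates.
    cv::Mat camera_to_world = world_to_camera.inv ();
    printf ("Camera at (%f, %f, %f)\n",
            camera_to_world.at<double> (0, 3),
            camera_to_world.at<double> (1, 3),
            camera_to_world.at<double> (2, 3));

For a rigid transform, world_to_camera.inv() and the [R^t -R^t*T; 0 1] formula above give the same matrix; inv() is simply less to write.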


The pose is described either by a single vector or by the R matrix; you will surely find it in your book. If it's "Learning OpenCV", you will find it on pages 401-402 :)


Looking at your code, you need to do something like this:

    // Convert the Rodrigues rotation vector from solvePnP to a 3x3 matrix
    cv::Mat R;
    cv::Rodrigues (rotation_vector, R);

    // The transposed (inverse) rotation, converted back to a rotation vector
    cv::Mat cameraRotationVector;
    cv::Rodrigues (R.t (), cameraRotationVector);

    // Camera position in world coordinates: -R^t * T
    cv::Mat cameraTranslationVector = -R.t () * translation_vector;


cameraTranslationVector contains the camera's coordinates, and cameraRotationVector contains the camera's pose. Note that the standalone program in the question instead computes -camera_rotation_vector * t, multiplying by the Rodrigues rotation vector rather than the transposed rotation matrix R.t(); that mismatch alone would explain the unexpected output shown above.

