Convert device pose to camera pose


Problem description

I'm using the camera intrinsics (fx, fy, cx, cy, width, height) to store a depth image of the TangoXyzIjData.xyz buffer. For each point of xyz I compute the corresponding image point and store its z value:

x' = (fx * x) / z + cx
y' = (fy * y) / z + cy
depthImage[x'][y'] = z
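The projection above can be sketched as follows. This is a minimal illustration with made-up intrinsic values; in a real app the values come from the camera intrinsics, and the point cloud is assumed to be a numpy array of shape (N, 3):

```python
import numpy as np

def point_cloud_to_depth_image(xyz, fx, fy, cx, cy, width, height):
    """Project camera-frame points (N, 3) into a width x height depth image."""
    depth = np.zeros((height, width), dtype=np.float32)
    for x, y, z in xyz:
        if z <= 0:
            continue  # points at or behind the camera plane cannot be projected
        u = int(round(fx * x / z + cx))  # x' in the formulas above
        v = int(round(fy * y / z + cy))  # y' in the formulas above
        if 0 <= u < width and 0 <= v < height:
            depth[v, u] = z  # store the z value at the image point (row = y', col = x')
    return depth

# Hypothetical intrinsics and points, for illustration only
img = point_cloud_to_depth_image(
    np.array([[0.0, 0.0, 2.0], [0.5, -0.25, 1.0]]),
    fx=100.0, fy=100.0, cx=160.0, cy=120.0, width=320, height=240)
```

Points that project outside the image bounds are simply dropped; collisions (two points landing on the same pixel) keep the last value written, so a real implementation might keep the nearest z instead.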

Now I would like to store the corresponding pose data as well. I'm using the timestamp of TangoXyzIjData.timestamp and the following function

getPoseAtTime(double timestamp, TangoCoordinateFramePair framePair)

with the frame pair

new TangoCoordinateFramePair(TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE, TangoPoseData.COORDINATE_FRAME_DEVICE)

The problem is that the pose is the device frame with respect to the start-of-service frame, while the depth image gets its points from the depth camera frame. How can I match them?

There is a way to convert the depth camera points to the device frame by:

  1. depth2IMU = depth camera frame with respect to the IMU frame
  2. device2IMU = device frame with respect to the IMU frame
  3. device2IMU ^ -1 = inverse of the device frame with respect to the IMU frame
  4. camera2Device = device2IMU ^ -1 * depth2IMU

Now I could multiply each point of the point cloud by camera2Device. But that's only the transformation to the device frame.
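The four steps above can be sketched in numpy. The two extrinsic matrices here are hypothetical stand-in values (identity rotation plus a small translation); the real ones come from the device/IMU and depth-camera/IMU frame pairs:

```python
import numpy as np

# Hypothetical extrinsics for illustration only; in the app these come
# from the IMU-based frame pairs.
depth2IMU = np.array([          # depth camera frame wrt IMU frame
    [1.0, 0.0, 0.0, 0.00],
    [0.0, 1.0, 0.0, 0.05],
    [0.0, 0.0, 1.0, 0.01],
    [0.0, 0.0, 0.0, 1.00],
])
device2IMU = np.array([         # device frame wrt IMU frame
    [1.0, 0.0, 0.0, 0.00],
    [0.0, 1.0, 0.0, 0.02],
    [0.0, 0.0, 1.0, 0.00],
    [0.0, 0.0, 0.0, 1.00],
])

# Steps 3 and 4 from the list above:
camera2Device = np.linalg.inv(device2IMU) @ depth2IMU

# A depth-frame point in homogeneous coordinates, moved to the device frame:
p_depth = np.array([0.0, 0.0, 1.0, 1.0])
p_device = camera2Device @ p_depth
```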

Is there any way to convert the device pose to a camera pose?

Answer

The equation you put together is correct! But it's not finished.

To formalize the terminology, let's use a_T_b as a transformation matrix, where a represents the base frame and b represents the target frame. That is, a_T_b is the b frame with respect to the a frame.

Based on your question, the matrices we know are:

start_service_T_device
imu_T_device
imu_T_depth

The matrix we want to get is:

start_service_T_depth

We can just use a "matrix chain" to get the result:

start_service_T_depth = start_service_T_device * 
                        inverse(imu_T_device) * 
                        imu_T_depth;

Now, let's say we have a point P_depth in the depth frame. To apply the pose to this point and convert it to the start_service frame, we could use:

P_ss = start_service_T_depth * P_depth;
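The chain and the point transform can be sketched together in numpy. The three input matrices are rotation-free stand-in values (identity rotation plus a translation) for illustration; the real ones are the pose at the point cloud's timestamp and the two IMU extrinsics:

```python
import numpy as np

def translation(tx, ty, tz):
    """Helper: a 4x4 homogeneous transform with identity rotation."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

# Stand-in values for illustration only
start_service_T_device = translation(1.0, 2.0, 0.5)  # pose at the timestamp
imu_T_device = translation(0.0, 0.02, 0.0)           # device extrinsic
imu_T_depth = translation(0.0, 0.05, 0.01)           # depth camera extrinsic

# The matrix chain from the answer:
start_service_T_depth = (start_service_T_device @
                         np.linalg.inv(imu_T_device) @
                         imu_T_depth)

# Transform a depth-frame point into the start-of-service frame:
P_depth = np.array([0.0, 0.0, 1.0, 1.0])  # homogeneous coordinates
P_ss = start_service_T_depth @ P_depth
```

Note that the inner frames cancel pairwise (device against device, imu against imu), which is a quick sanity check that a chain of a_T_b matrices is composed in the right order.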



Put it in the OpenGL frame

In most cases, you might want to convert it to a coordinate frame that is easy for a graphics library to render. Let's take OpenGL for example; we can transform this point to the OpenGL world coordinate frame as follows:

Note that opengl_world_T_start_service is a constant matrix that you could compute by hand. Here is a link to the matrix, quoted from the Project Tango C++ example.

P_gl = opengl_world_T_start_service * P_ss;

We can expand everything we just wrote and put it in a single equation:

P_gl = opengl_world_T_start_service * 
       start_service_T_device * 
       inverse(imu_T_device) * 
       imu_T_depth * 
       P_depth;
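The constant conversion can be sketched as an axis swap. The matrix below assumes the commonly used conventions (Tango start-of-service: X right, Y forward, Z up; OpenGL world: X right, Y up, Z toward the viewer); verify it against the matrix linked above before relying on it:

```python
import numpy as np

# Assumed axis swap between the two right-handed frames:
# Tango Z (up) becomes OpenGL Y, Tango Y (forward) becomes OpenGL -Z.
opengl_world_T_start_service = np.array([
    [1.0,  0.0, 0.0, 0.0],
    [0.0,  0.0, 1.0, 0.0],   # OpenGL y takes Tango z
    [0.0, -1.0, 0.0, 0.0],   # OpenGL z takes negated Tango y
    [0.0,  0.0, 0.0, 1.0],
])

# A point one meter forward and half a meter up in the start-of-service frame:
P_ss = np.array([0.0, 1.0, 0.5, 1.0])
P_gl = opengl_world_T_start_service @ P_ss
```

In the full pipeline this matrix is simply left-multiplied onto start_service_T_depth, exactly as in the single expanded equation above.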




Sample code from Project Tango

Also, in the Project Tango examples, the point cloud example has a pretty good explanation of these conversions; here are the links (c++, java, unity).
