Back projecting 3D world point to new view image plane


Question

Edit:


What I have: camera intrinsics, extrinsics from calibration, a 2D image, and a depth map


What I need: a 2D virtual-view image


I am trying to generate a novel view (the right view) for Depth Image Based Rendering. The reason is that only the left image and the depth map are available at the receiver, which has to reconstruct the right view.

Setup for rendering


I want to know whether these steps will give me the desired result, or what I should be doing instead.


First, by using the Camera Calibration Toolbox for MATLAB from Caltech, the intrinsic and extrinsic matrices can be obtained.


Then, the image points can be mapped to 3D world points using the calibration parameters, by the method described at "http://nicolas.burrus.name/index.php/Research/KinectCalibration#tocLink2"
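The back-projection step from that page can be sketched as follows. The intrinsic values (fx, fy, cx, cy) used here are placeholder assumptions for illustration, not the actual toolbox output:

```python
import numpy as np

# Placeholder intrinsics -- substitute the values produced by the
# calibration toolbox; these numbers are assumptions for illustration.
fx, fy = 525.0, 525.0   # focal lengths in pixels
cx, cy = 319.5, 239.5   # principal point

def backproject(u, v, depth):
    """Map pixel (u, v) with metric depth to a 3D point in the
    left-camera frame, as in the linked Kinect calibration notes."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.array([X, Y, depth])
```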


Now, I want to back project this to a new image plane (the right view). The right view is simply a translation of the left one, with no rotation, because of the setup. How do I do this reconstruction?

Also, can I estimate R and T from the MATLAB stereo calibration tool and transform every point in the original left view to the right view using P2 = R*P1 + T, where P1 and P2 are the image points of the 3D world point P in the respective planes?

Any ideas and help are highly appreciated.

Answer

(Theoretical answer*)


You have to define what R and T mean. If I understand correctly, they are the roto-translation of your (main) left camera. If you can map a point P (like your P1 or P2) in 3D space, the correspondence with a point m (I do not call it p, to avoid confusion) in your left camera is, unless you use a different convention (pseudocode):

m = K[R|t]*P 

where

P1 = (X,Y,Z,1)
m  = (u',v',w)

so the pixel coordinates in your left camera are:

u = u'/w 
v = v'/w
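This projection can be sketched in numpy under the same convention; the K, R, t values below are illustrative assumptions, not calibrated values:

```python
import numpy as np

# Illustrative K, R, t -- assumptions, not calibrated values.
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)        # left camera taken as the world reference
t = np.zeros(3)

def project(P, K, R, t):
    """m = K[R|t]*P, then dehomogenize: u = u'/w, v = v'/w."""
    m = K @ (R @ P + t)            # m = (u', v', w)
    return m[0] / m[2], m[1] / m[2]
```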


If you have already roto-translated P1 into P2 (not very useful), this is equal to (pseudocode):

                  1 0 0 0
m = K[I|0]*P = K*[0 1 0 0] * P2
                  0 0 1 0


Assume this is the theoretical relationship between a 3D point P and its 2D point m in an image; you can then think of your right camera as being in a different position. If there is only translation with respect to the left camera, the right camera is translated by T2 with respect to the left camera, and roto-translated by R/T+T2 with respect to the centre of the world. So the projected point m' in your right camera should be (assuming the cameras are equal, i.e. they have the same intrinsics K):


m' = K[R|T+T2]*P = K[I|T2]*P2

I is the identity matrix.
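Putting this together for the rendering problem, one left-view pixel can be warped into the right view by back-projecting with its depth, translating, and re-projecting, following the m' = K[I|T2]*P2 relation. K and the baseline T2 below are assumed placeholder values:

```python
import numpy as np

# Assumed placeholder intrinsics and baseline -- replace with calibrated values.
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])
T2 = np.array([-0.06, 0.0, 0.0])   # e.g. a 6 cm horizontal baseline (assumption)

def warp_to_right(u, v, depth):
    """Warp a left-image pixel into the right view: back-project
    with depth, translate by T2, re-project with K."""
    P2 = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))  # 3D point, left frame
    m = K @ (P2 + T2)                                        # homogeneous right-view point
    return m[0] / m[2], m[1] / m[2]
```

Note that doing this per pixel leaves holes where the right view sees surfaces the left view did not; DIBR pipelines typically fill these by inpainting.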


If you want to transform m directly into m' without using 3D points, you have to implement epipolar geometry.


  • This equation may not work if the cameras are different, with different intrinsics K, or if the calibration of R and T was not done against the same K. If the calibration is not accurate, it can still work, but with errors.
