Converting a 2D image point to a 3D world point


Problem description

I know that in the general case, making this conversion is impossible since depth information is lost going from 3d to 2d.

However, I have a fixed camera and I know its camera matrix. I also have a planar calibration pattern of known dimensions - let's say that in world coordinates it has corners (0,0,0) (2,0,0) (2,1,0) (0,1,0). Using opencv I can estimate the pattern's pose, giving the translation and rotation matrices needed to project a point on the object to a pixel in the image.
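
For reference, this forward step could be sketched with OpenCV's Python bindings roughly as follows; the intrinsic matrix, the zero distortion coefficients and the detected corner pixels are placeholder values, not numbers from the question:

import numpy as np
import cv2

# Camera intrinsics (placeholder values) and zero lens distortion assumed.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)

# World coordinates of the pattern corners, all on the z = 0 plane.
object_points = np.array([[0.0, 0.0, 0.0],
                          [2.0, 0.0, 0.0],
                          [2.0, 1.0, 0.0],
                          [0.0, 1.0, 0.0]])

# Pixel locations where those corners were detected (placeholder values).
image_points = np.array([[100.0, 400.0],
                         [500.0, 410.0],
                         [510.0, 200.0],
                         [110.0, 190.0]])

# Estimate the pattern's pose: rotation (Rodrigues vector) and translation.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)

# Forward projection: a world point on the pattern -> pixel in the image.
pixel, _ = cv2.projectPoints(np.array([[1.0, 0.5, 0.0]]), rvec, tvec, K, dist)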

Now: this 3d to image projection is easy, but how about the other way? If I pick a pixel in the image that I know is part of the calibration pattern, how can I get the corresponding 3d point?

I could iteratively choose some random 3d point on the calibration pattern, project to 2d, and refine the 3d point based on the error. But this seems pretty horrible.

Given that this unknown point has world coordinates something like (x,y,0) -- since it must lie on the z=0 plane -- it seems like there should be some transformation that I can apply, instead of doing the iterative nonsense. My maths isn't very good though - can someone work out this transformation and explain how you derive it?

Recommended answer

Yes, you can. If you have a transformation matrix that maps a point in the 3d world to the image plane, you can just use the inverse of this transformation matrix to map an image-plane point back to a 3d world point. If you already know that z = 0 for the 3d world point, this will result in exactly one solution for the point. There is no need to iteratively choose some random 3d point. I had a similar problem where I had a camera mounted on a vehicle with a known position and camera calibration matrix, and I needed to know the real-world location of a lane marking captured on the image plane of the camera.
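
To spell out the z = 0 case: writing the projection as s*(u, v, 1)^T = K [R | t] (x, y, 0, 1)^T, the zero third coordinate drops the third column of R, leaving s*(u, v, 1)^T = K [r1 r2 t] (x, y, 1)^T, where r1 and r2 are the first two columns of R. The 3x3 matrix H = K [r1 r2 t] is the plane-to-image homography, which is invertible in the non-degenerate case, so the pixel maps back through H^-1. A minimal Python/OpenCV sketch, reusing the placeholder K, rvec and tvec from the pose-estimation snippet above (the helper name image_to_world_z0 is just illustrative), might look like:

import numpy as np
import cv2

def image_to_world_z0(u, v, K, rvec, tvec):
    """Back-project pixel (u, v) to the world point (x, y, 0) on the pattern plane.

    With z = 0 the projection K [R|t] collapses to the 3x3 homography
    H = K [r1 r2 t]; the pixel should already be undistorted if lens
    distortion matters.
    """
    R, _ = cv2.Rodrigues(rvec)                      # 3x3 rotation from the Rodrigues vector
    H = K @ np.column_stack((R[:, 0], R[:, 1], np.ravel(tvec)))  # H = K [r1 r2 t]
    xy1 = np.linalg.inv(H) @ np.array([u, v, 1.0])  # homogeneous (x, y, 1), up to scale
    xy1 /= xy1[2]                                   # remove the unknown scale factor
    return xy1[0], xy1[1], 0.0

# Usage, continuing from the solvePnP sketch above:
# x, y, z = image_to_world_z0(300.0, 320.0, K, rvec, tvec)

An equivalent route is to estimate H directly with cv2.findHomography from the four corner correspondences (image pixels as source, the pattern's (x, y) coordinates as destination), which skips the pose estimation entirely when you only care about points on the z = 0 plane.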
