Convert a bounding box in ECEF coordinates to ENU coordinates

Question
I have a geometry with its vertices in Cartesian coordinates. These Cartesian coordinates are ECEF (Earth-centred, Earth-fixed) coordinates. The geometry actually lies on an ellipsoidal model of the Earth using WGS84 coordinates. The Cartesian coordinates were obtained by converting the set of latitudes and longitudes along which the geometry lies, but I no longer have access to them. What I do have is an axis-aligned bounding box with xmax, ymax, zmax and xmin, ymin, zmin obtained by parsing the Cartesian coordinates (there is obviously no Cartesian point of the geometry at xmax, ymax, zmax or xmin, ymin, zmin; the bounding box is just a cuboid enclosing the geometry).
What I want to do is calculate the camera distance in an overview mode such that this geometry's bounding box perfectly fits the camera frustum.
I am not very clear on the approach to take here. A method like using a local-to-world matrix comes to mind, but it's not very clear.
@Specktre I referred to your suggestions on shifting points in 3D, and that led me to another, improved solution, though still not perfect.
- Compute a matrix that can transform from ECEF to ENU. See http://www.navipedia.net/index.php/Transformations_between_ECEF_and_ENU_coordinates
- Rotate all eight corners of my original bounding box using this matrix.
- Compute a new bounding box by finding the min and max of x, y, z of these rotated points.
- Compute the distance:

cameraDistance1 = ((newbb.ymax - newbb.ymin)/2) / tan(fov/2)
cameraDistance2 = ((newbb.xmax - newbb.xmin)/2) / (tan(fov/2) * aspectRatio)
cameraDistance = max(cameraDistance1, cameraDistance2)
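The steps above can be sketched in Python. This is a minimal illustration, not the original implementation: the function names are mine, and it assumes you pick the ENU origin yourself (for example the geodetic latitude/longitude under the box centre) and a symmetric frustum with the vertical FOV:

```python
import math

def ecef_to_enu_matrix(lat_deg, lon_deg):
    """Rotation from ECEF to ENU at the given geodetic origin.
    Rows are the East, North, Up unit vectors expressed in ECEF."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    s_lat, c_lat = math.sin(lat), math.cos(lat)
    s_lon, c_lon = math.sin(lon), math.cos(lon)
    return [
        [-s_lon,          c_lon,         0.0  ],  # East
        [-s_lat * c_lon, -s_lat * s_lon, c_lat],  # North
        [ c_lat * c_lon,  c_lat * s_lon, s_lat],  # Up
    ]

def rotate(m, p):
    """Apply a 3x3 matrix (list of rows) to a point."""
    return tuple(sum(m[r][c] * p[c] for c in range(3)) for r in range(3))

def camera_distance(bb_min, bb_max, lat_deg, lon_deg, fov_y_deg, aspect):
    """Rotate the 8 AABB corners into ENU, re-fit an axis-aligned box,
    and return the distance that fits it into the frustum."""
    m = ecef_to_enu_matrix(lat_deg, lon_deg)
    corners = [rotate(m, (x, y, z))
               for x in (bb_min[0], bb_max[0])
               for y in (bb_min[1], bb_max[1])
               for z in (bb_min[2], bb_max[2])]
    lo = [min(c[i] for c in corners) for i in range(3)]
    hi = [max(c[i] for c in corners) for i in range(3)]
    t = math.tan(math.radians(fov_y_deg) / 2.0)
    d_y = (hi[1] - lo[1]) / 2.0 / t             # fit the vertical extent
    d_x = (hi[0] - lo[0]) / 2.0 / (t * aspect)  # fit the horizontal extent
    return max(d_x, d_y)
```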
This time I had to use the aspect ratio along x, as I had previously expected, since in my application the fov is along y. Although this works almost accurately, there is still a small bug, I guess. I am not very sure if it is a good idea to generate a new bounding box. Maybe it is more accurate to identify two points, point1(xmax, ymin, zmax) and point2(xmax, ymax, zmax), in the original bounding box, find their values after multiplying with the matrix, and then do (point2 - point1).length(). Similarly for y. Would that be more accurate?
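One observation on that alternative: a rotation matrix preserves lengths, so |R·point2 - R·point1| always equals |point2 - point1| = (ymax - ymin). It measures the original box edge, not the rotated box's axis-aligned footprint, so it cannot be more accurate than re-fitting the bounding box. A small sketch (function name illustrative) makes the invariance visible:

```python
import math

def edge_extent(m, p1, p2):
    """Length of one rotated box edge: |R*p2 - R*p1|.
    Because a rotation preserves length, this always equals |p2 - p1|,
    regardless of the matrix m, as long as m is a pure rotation."""
    q1 = tuple(sum(m[r][c] * p1[c] for c in range(3)) for r in range(3))
    q2 = tuple(sum(m[r][c] * p2[c] for c in range(3)) for r in range(3))
    return math.dist(q1, q2)
```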
Solution

transform matrix
- the first thing is to understand that a transform matrix represents a coordinate system
- look here Transform matrix anatomy
- if you use the direct matrix then you are converting
- from the matrix's local space (LCS) to world global space (GCS)
- if you use the inverse matrix then you are converting coordinates from GCS to LCS
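For a pure rotation the two directions are cheap to demonstrate, since the inverse is just the transpose. A minimal sketch (names are illustrative):

```python
def transform_point(m, p):
    """Apply a 3x3 rotation m (the 'direct' matrix): LCS -> GCS."""
    return tuple(sum(m[r][c] * p[c] for c in range(3)) for r in range(3))

def inverse_rotation(m):
    """For an orthonormal rotation the inverse is the transpose: GCS -> LCS."""
    return [[m[c][r] for c in range(3)] for r in range(3)]
```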
camera matrix
- camera matrix converts to camera space so you need the inverse matrix
- you get camera matrix like this:
camera=inverse(camera_space_matrix)
- now how to construct your camera_space_matrix so it fits the bounding box
- look here Frustum distance computation
- so compute the midpoint of the top rectangle of your box
- compute the camera distance as the max of the distances computed from all vertices of the box
- camera position is midpoint + distance * midpoint normal
- orientation depends on your projection matrix
- if you use gluPerspective then you are viewing -Z or +Z according to the selected glDepthFunc
- so set the Z axis of the matrix to the normal
- and the Y,X vectors can be aligned to North/South and East/West
- so for example
Y = Z x (1,0,0)
X = Z x Y
- now put the position and the axis vectors X,Y,Z inside the matrix
- compute the inverse matrix
- and that is it
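The recipe above can be sketched like this. It is only a sketch of the construction, not a drop-in implementation: the row-major 4x4 layout is an assumption, and the handedness implied by X = Z x Y may need a sign flip to match your projection matrix and glDepthFunc:

```python
import math

def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    """Scale a vector to unit length."""
    l = math.sqrt(sum(x * x for x in v))
    return tuple(x / l for x in v)

def camera_space_matrix(midpoint, distance):
    """Build a row-major 4x4 camera-space matrix above the box:
    Z = surface normal (the midpoint itself, since the Earth centre
    is at the origin), Y = Z x (1,0,0), X = Z x Y,
    position = midpoint + distance * Z.
    The view (camera) matrix is the inverse of this matrix."""
    z = normalize(midpoint)
    y = normalize(cross(z, (1.0, 0.0, 0.0)))
    x = normalize(cross(z, y))
    pos = tuple(p + distance * n for p, n in zip(midpoint, z))
    return [list(x) + [0.0],
            list(y) + [0.0],
            list(z) + [0.0],
            list(pos) + [1.0]]
```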
[Notes]
- do not forget that the FOV can have different angles for the X and Y axes (aspect ratio)
- the normal is just midpoint - Earth centre, which is (0,0,0), so the normal is also the midpoint
- just normalize it to size 1.0
- for all computations use the Cartesian world GCS (global coordinate system)