Convert satellite photos of Earth into texture maps on a sphere (OpenGL ES)


Problem description

We have 5 geostationary satellites, spaced around the equator (not equally spaced, but almost), taking photos of Earth every day. The output of each photo is - surprise! - a photo of a sphere, taken from a long distance away.

I need to reassemble those photos into a single texture-mapped sphere, and I'm not sure how best to do this. Key problems:


  1. The photos are - obviously - massively distorted the further you go from the center, since they're looking at a sphere.
  2. There are many hundreds of "sets" of 5 photos, taken at different times of day. Any solution needs to be programmatic - I can't just do this by hand :(
  3. Output platform is the iPad 3: OpenGL ES 2, textures up to 4096x4096 - but not as powerful as a desktop GPU. I'm not great with shaders (although I've done a lot of pre-shader OpenGL).
  4. The photos themselves are high-res, and I'm not sure I can have all 5 textures loaded simultaneously. I've also got a very high-res texture loaded for the planet surface (underneath the satellite photos).

What I've already got: a single rectangular texture mapped onto a sphere (my sphere is a standard mesh wrapped into a sphere, with vertices distributed evenly across the surface). So ... I tried converting the 5 photos of spheres into a single rectangular map (no success so far, although someone pointed me at doing a "polar sin warp", which looks like it might work better).

I've also thought of doing something funky with making a cube-map out of the 5 photos, and being clever about deciding which of the photos to read for a given pixel, but I'm not entirely convinced.

Is there a better way? Something I've overlooked? Or has anyone got a concrete way of achieving the above?

Answer

I would build a single rectangular texture from the photos.

You will need two 2D textures/arrays: one for the r,g,b color sums (avg) and one for a sample count (cnt). Also, I am not convinced I would use OpenGL/GLSL for this; it seems to me that C/C++ would be better suited.

I would do it like this:


1. Leave the target texture empty (avg[][] = 0, cnt[][] = 0)

2. Obtain the satellite position/direction and the time

   From the position and direction, create a transformation matrix that projects the Earth the same way as on the photo. Then, from the time, determine the rotation shift.

3. Loop over the entire Earth surface

   Just two nested loops: a - rotation, and b - distance from the equator.

   Get x,y,z from a,b and the transform matrix + rotation shift (a-axis).

   You can also do it backwards, a,b,z = f(x,y); that is trickier, but faster and more accurate. You can also interpolate x,y,z between neighboring (pixels/areas) [a][b].

4. Add the pixel

   If x,y,z is on the front side (z>0 or z<0, depending on the camera Z direction) then

       avg[a][b]+=image[x][y]; cnt[a][b]++;

5. End of the nested loops from point 3

   Loop through the entire avg texture to restore the average color (a rough C++ sketch of these steps follows this list):

       if (cnt[a][b]) avg[a][b]/=cnt[a][b];

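A minimal C++ sketch of steps 1-5, purely as an illustration and not the answer's actual code: the W x H target resolution, the look-at style construction of the projection basis from the satellite direction, the assumption of an orthogonal projection with Earth's axis along +z, and the hypothetical samplePhoto() helper are all my own assumptions.

    // Sketch only: accumulate one satellite photo into the avg/cnt maps (steps 1-5).
    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };
    static double dot  (Vec3 a, Vec3 b){ return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3   cross(Vec3 a, Vec3 b){ return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
    static Vec3   norm (Vec3 a){ double l = std::sqrt(dot(a,a)); return { a.x/l, a.y/l, a.z/l }; }

    const int W = 2048, H = 1024;                  // resolution of the a,b target map
    std::vector<double> avg(W*H*3, 0.0);           // step 1: r,g,b sums start at zero
    std::vector<int>    cnt(W*H, 0);               //         sample counts start at zero

    void samplePhoto(double x, double y, double rgb[3]);   // hypothetical photo access, x,y in <-1,+1>

    // satDir: unit vector from Earth's center toward the satellite (its position normalized).
    // rotShift: time-dependent rotation shift in radians, applied on the a-axis.
    void accumulatePhoto(Vec3 satDir, double rotShift)
    {
        // step 2: projection basis; camZ looks from the Earth toward the satellite
        Vec3 camZ = norm(satDir);
        Vec3 camX = norm(cross({0,0,1}, camZ));
        Vec3 camY = cross(camZ, camX);

        for (int a = 0; a < W; a++)                // step 3: rotation
        for (int b = 0; b < H; b++)                //         distance from the equator
        {
            double lon = 2.0*M_PI*a/W + rotShift;  // rotation shift on the a-axis
            double lat = M_PI*(0.5 - double(b)/H);
            Vec3 v = { std::cos(lat)*std::cos(lon), std::cos(lat)*std::sin(lon), std::sin(lat) };

            double x = dot(v,camX), y = dot(v,camY), z = dot(v,camZ);
            if (z <= 0.0) continue;                // step 4: only the front side is on this photo

            double rgb[3];
            samplePhoto(x, y, rgb);                // image[x][y] in the answer's notation
            for (int c = 0; c < 3; c++) avg[(b*W + a)*3 + c] += rgb[c];
            cnt[b*W + a]++;
        }
    }

    void finishAverage()                           // step 5: restore the average color
    {
        for (int i = 0; i < W*H; i++)
            if (cnt[i])
                for (int c = 0; c < 3; c++) avg[i*3 + c] /= cnt[i];
    }

Using double accumulators here also sidesteps the color overflow mentioned in the notes below.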
    


[Notes]

1. You can test what the copied pixel is: obtained during day or night (use only what you want and do not mix both together!!!); you can also detect clouds (I think gray/white-ish colors, not snow) and ignore them (a crude example of such a test follows this list).

2. Do not overflow the color sums; you can use 3 separate textures r[][], g[][], b[][] instead of avg to avoid that.

3. You can ignore areas near the Earth's edges to avoid distortion.

4. You can apply a lighting correction: normalize the lighting by the time and the a,b coordinates.

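As a crude illustration of the cloud test in note 1, this heuristic and its thresholds are entirely my own guess, not part of the answer: a sample could be skipped before the avg/cnt accumulation when it is bright and nearly gray.

    #include <algorithm>

    // Guessed heuristic: bright + low saturation (channels nearly equal) = probably cloud.
    bool probablyCloud(double r, double g, double b)   // r,g,b in 0..255
    {
        double mx = std::max(r, std::max(g, b));
        double mn = std::min(r, std::min(g, b));
        return mx > 180.0 && (mx - mn) < 25.0;         // if true, skip this sample in step 4
    }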

Hope it helps ...

[Edit1] orthogonal projection

So, to make it clear, here is what I mean by orthogonal projection:

This is the texture used (I couldn't find anything better suited and free on the web); I wanted to use a real satellite image, not some rendered one ...

This is my orthogonal projection App:


• the red, green, blue lines are the Earth coordinate system (x,y,z axes)
• the (red, green, blue)-white-ish lines are the satellite projection coordinate system (x,y,z axes)

The point is to convert the Earth vertex coordinates (vx,vy,vz) to satellite coordinates (x,y,z). If z >= 0 then it is a valid vertex for the processed texture, so compute the texture coordinates directly from x,y, without any perspective (orthogonally).

For example tx = 0.5*(+x+1); ... if x was scaled to <-1,+1> and the usable texture range is tx in <0,1>. The same goes for the y axis: ty = 0.5*(-y+1); ... if y was scaled to <-1,+1> and the usable texture range is ty in <0,1> (my camera has an inverted y coordinate system with respect to the texture matrix, hence the inverted sign on the y axis).

If z < 0 then you are processing a vertex outside the texture range, so ignore it ... As you can see in the image, the outer boundaries of the texture are distorted, so you should use only the inside (for example 70% of the Earth image area). You can also do some kind of texture coordinate correction dependent on the distance from the texture midpoint. When you have this done, just merge all the satellite image projections into one image, and that is all.

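A small C++ sketch of that mapping: the Vec3 type, the dot() helper, the camX/camY/camZ axis names, and the scaling by the Earth radius R to get coordinates in <-1,+1> are all assumptions of this sketch, not the answer's code.

    struct Vec3 { double x, y, z; };
    static double dot(Vec3 a, Vec3 b){ return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Orthogonal projection of one Earth vertex into satellite texture coordinates.
    // camX, camY, camZ: orthonormal axes of the satellite projection (camZ points toward the satellite).
    // R: Earth radius in the same units as the vertex, so x,y,z end up scaled to <-1,+1>.
    // Returns false when the vertex is on the far side (z < 0), i.e. outside this photo.
    bool orthoTexCoord(Vec3 v, Vec3 camX, Vec3 camY, Vec3 camZ, double R,
                       double& tx, double& ty)
    {
        double x = dot(v, camX) / R;
        double y = dot(v, camY) / R;
        double z = dot(v, camZ) / R;
        if (z < 0.0) return false;
        tx = 0.5 * (+x + 1.0);     // the linear mapping described above
        ty = 0.5 * (-y + 1.0);     // y sign flipped to match the texture matrix
        return true;
    }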

[Edit2] Well, I played with it a little and found out this:


• the reverse projection correction does not work for my texture at all; I think it is possible the image was post-processed ...
• the correction based on the distance from the midpoint seems nice, but the scale coefficient used is odd; I have no clue why to multiply by 6 when it should be 4, I think ...

    tx=0.5*(+(asin(x)*6.0/M_PI)+1); 
    ty=0.5*(-(asin(y)*6.0/M_PI)+1); 
    



• corrected nonlinear projection (by asin)
• corrected nonlinear projection, edge zoom
• the distortions are much, much smaller than without the asin texture coordinate correction

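For completeness, the asin correction could be dropped into the hypothetical orthoTexCoord() sketch from [Edit1] by replacing its two linear assignments; the 6.0 is the empirically found coefficient discussed above.

    #include <cmath>

    // [Edit2] correction applied to camera-space x,y in <-1,+1>; as noted in [Edit1],
    // only the inner part of the photo (roughly 70% of the Earth disc) should be used.
    void correctedTexCoord(double x, double y, double& tx, double& ty)
    {
        tx = 0.5 * (+(std::asin(x) * 6.0 / M_PI) + 1.0);
        ty = 0.5 * (-(std::asin(y) * 6.0 / M_PI) + 1.0);
    }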

