Converting a Cubemap into Equirectangular Panorama


Problem description



I want to convert from cube map [figure1] into an equirectangular panorama [figure2].

Figure1

Figure2

It is possible to go from Spherical to Cubic (by following: Convert 2:1 equirectangular panorama to cube map), but I'm lost on how to reverse it.

Figure2 is to be rendered into a sphere using Unity.

Solution

Assuming the input image is in the following cubemap format:

The goal is to project the image to the equirectangular format like so:

The conversion algorithm is rather straightforward. In order to calculate the best estimate of the color at each pixel in the equirectangular image given a cubemap with 6 faces:

  • Firstly, calculate polar coordinates that correspond to each pixel in the spherical image.
  • Secondly, using the polar coordinates form a vector and determine on which face of the cubemap and which pixel of that face the vector lies; just like a raycast from the center of a cube would hit one of its sides and a specific point on that side.
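The two steps above can be sketched as follows (a minimal Python illustration using the same sign conventions as the C# implementation later in this answer; the function names are my own):

```python
import math

def pixel_to_direction(i, j, width, height):
    """Map an equirectangular pixel (column i, row j) to a unit
    direction vector. Rows start from the bottom and columns from the
    left; the signs on x and z match the C# code in this answer."""
    u = i / width                  # normalized column, 0..1
    v = 1 - j / height             # normalized row, 0..1, bottom-up
    phi = u * 2 * math.pi          # longitude, 0..2*pi
    theta = v * math.pi            # colatitude, 0..pi
    x = -math.sin(phi) * math.sin(theta)
    y = math.cos(theta)
    z = -math.cos(phi) * math.sin(theta)
    return x, y, z

def direction_to_face(x, y, z):
    """Scale the direction so it touches the cube, then name the face.

    Dividing by the largest absolute component pushes the vector onto
    a cube face; the component that becomes exactly +/-1.0 identifies
    the face (this is the raycast-from-the-center idea)."""
    a = max(abs(x), abs(y), abs(z))
    xa, ya, za = x / a, y / a, z / a
    if xa == 1:
        return "right", (za, ya)
    if xa == -1:
        return "left", (za, ya)
    if ya == 1:
        return "up", (xa, za)
    if ya == -1:
        return "down", (xa, za)
    if za == 1:
        return "front", (xa, ya)
    return "back", (xa, ya)
```

The float comparisons against 1 are safe here because dividing a component by its own absolute value is exact in IEEE arithmetic.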

Keep in mind that there are multiple methods to estimate the color of a pixel in the equirectangular image given a normalized coordinate (u,v) on a specific face of a cubemap. The most basic method, which is a very raw approximation and will be used in this answer for simplicity's sake, is to round the coordinates to a specific pixel and use that pixel. Other more advanced methods could calculate an average of a few neighbouring pixels.
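As a toy illustration of the averaging idea, here is a bilinear interpolation sketch (Python, with a grayscale face stored as a list of rows; this is a hypothetical helper, not part of the original code):

```python
def bilinear_sample(face, fx, fy):
    """Bilinearly interpolate a grayscale face image at fractional
    pixel coordinates (fx, fy): instead of rounding to one pixel,
    blend the four surrounding pixels, weighted by proximity."""
    h, w = len(face), len(face[0])
    x0 = min(int(fx), w - 2)       # clamp so x0 + 1 stays in bounds
    y0 = min(int(fy), h - 2)
    tx, ty = fx - x0, fy - y0      # fractional offsets within the cell
    top = face[y0][x0] * (1 - tx) + face[y0][x0 + 1] * tx
    bot = face[y0 + 1][x0] * (1 - tx) + face[y0 + 1][x0 + 1] * tx
    return top * (1 - ty) + bot * ty
```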

The implementation of the algorithm will vary depending on the context. I did a quick implementation in Unity3D C# that shows how to implement the algorithm in a real-world scenario. It runs on the CPU, there is a lot of room for improvement, but it is easy to understand.

using UnityEngine;

public static class CubemapConverter
{
    public static byte[] ConvertToEquirectangular(Texture2D sourceTexture, int outputWidth, int outputHeight)
    {
        Texture2D equiTexture = new Texture2D(outputWidth, outputHeight, TextureFormat.ARGB32, false);
        float u, v; //Normalised texture coordinates, from 0 to 1, starting at lower left corner
        float phi, theta; //Polar coordinates
        int cubeFaceWidth, cubeFaceHeight;

        cubeFaceWidth = sourceTexture.width / 4; //4 horizontal faces
        cubeFaceHeight = sourceTexture.height / 3; //3 vertical faces


        for (int j = 0; j < equiTexture.height; j++)
        {
            //Rows start from the bottom
            v = 1 - ((float)j / equiTexture.height);
            theta = v * Mathf.PI;

            for (int i = 0; i < equiTexture.width; i++)
            {
                //Columns start from the left
                u = ((float)i / equiTexture.width);
                phi = u * 2 * Mathf.PI;

                float x, y, z; //Unit vector
                x = Mathf.Sin(phi) * Mathf.Sin(theta) * -1;
                y = Mathf.Cos(theta);
                z = Mathf.Cos(phi) * Mathf.Sin(theta) * -1;

                float xa, ya, za;
                float a;

                a = Mathf.Max(new float[3] { Mathf.Abs(x), Mathf.Abs(y), Mathf.Abs(z) });

                //Vector Parallel to the unit vector that lies on one of the cube faces
                xa = x / a;
                ya = y / a;
                za = z / a;

                Color color;
                int xPixel, yPixel;
                int xOffset, yOffset;

                if (xa == 1)
                {
                    //Right
                    xPixel = (int)((((za + 1f) / 2f) - 1f) * cubeFaceWidth);
                    xOffset = 2 * cubeFaceWidth; //Offset
                    yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = cubeFaceHeight; //Offset
                }
                else if (xa == -1)
                {
                    //Left
                    xPixel = (int)((((za + 1f) / 2f)) * cubeFaceWidth);
                    xOffset = 0;
                    yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = cubeFaceHeight;
                }
                else if (ya == 1)
                {
                    //Up
                    xPixel = (int)((((xa + 1f) / 2f)) * cubeFaceWidth);
                    xOffset = cubeFaceWidth;
                    yPixel = (int)((((za + 1f) / 2f) - 1f) * cubeFaceHeight);
                    yOffset = 2 * cubeFaceHeight;
                }
                else if (ya == -1)
                {
                    //Down
                    xPixel = (int)((((xa + 1f) / 2f)) * cubeFaceWidth);
                    xOffset = cubeFaceWidth;
                    yPixel = (int)((((za + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = 0;
                }
                else if (za == 1)
                {
                    //Front
                    xPixel = (int)((((xa + 1f) / 2f)) * cubeFaceWidth);
                    xOffset = cubeFaceWidth;
                    yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = cubeFaceHeight;
                }
                else if (za == -1)
                {
                    //Back
                    xPixel = (int)((((xa + 1f) / 2f) - 1f) * cubeFaceWidth);
                    xOffset = 3 * cubeFaceWidth;
                    yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = cubeFaceHeight;
                }
                else
                {
                    Debug.LogWarning("Unknown face, something went wrong");
                    xPixel = 0;
                    yPixel = 0;
                    xOffset = 0;
                    yOffset = 0;
                }

                xPixel = Mathf.Abs(xPixel);
                yPixel = Mathf.Abs(yPixel);

                xPixel += xOffset;
                yPixel += yOffset;

                color = sourceTexture.GetPixel(xPixel, yPixel);
                equiTexture.SetPixel(i, j, color);
            }
        }

        equiTexture.Apply();
        var bytes = equiTexture.EncodeToPNG();
        Object.DestroyImmediate(equiTexture);

        return bytes;
    }
}

In order to utilize the GPU I created a shader that does the same conversion. It is much faster than running the conversion pixel by pixel on the CPU, but unfortunately Unity imposes resolution limitations on cubemaps, so its usefulness is limited in scenarios where a high-resolution input image is to be used.

Shader "Conversion/CubemapToEquirectangular" {
  Properties {
        _MainTex ("Cubemap (RGB)", CUBE) = "" {}
    }

    Subshader {
        Pass {
            ZTest Always Cull Off ZWrite Off
            Fog { Mode off }      

            CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #pragma fragmentoption ARB_precision_hint_fastest
                //#pragma fragmentoption ARB_precision_hint_nicest
                #include "UnityCG.cginc"

                #define PI    3.141592653589793
                #define TWOPI 6.283185307179587

                struct v2f {
                    float4 pos : POSITION;
                    float2 uv : TEXCOORD0;
                };

                samplerCUBE _MainTex;

                v2f vert( appdata_img v )
                {
                    v2f o;
                    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                    o.uv = v.texcoord.xy * float2(TWOPI, PI);
                    return o;
                }

                fixed4 frag(v2f i) : COLOR 
                {
                    float theta = i.uv.y;
                    float phi = i.uv.x;
                    float3 unit = float3(0,0,0);

                    unit.x = sin(phi) * sin(theta) * -1;
                    unit.y = cos(theta) * -1;
                    unit.z = cos(phi) * sin(theta) * -1;

                    return texCUBE(_MainTex, unit);
                }
            ENDCG
        }
    }
    Fallback Off
}

The quality of the resulting images can be greatly improved by employing a more sophisticated method to estimate the color of a pixel during the conversion, by post-processing the resulting image, or both. For example, a larger image could be generated, a blur filter applied, and the result downsampled to the desired size.
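As a toy illustration of this supersample-and-filter idea, here is a minimal pure-Python sketch (grayscale images as lists of rows; a real pipeline would use an image library or the GPU, and all names here are my own):

```python
def box_blur(img):
    """3x3 box blur on a grayscale image (list of rows), edges clamped."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)   # clamp to edges
                    xx = min(max(x + dx, 0), w - 1)
                    total += img[yy][xx]
            out[y][x] = total / 9
    return out

def downsample_2x(img):
    """Halve each dimension by averaging 2x2 blocks."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4
             for x in range(w)] for y in range(h)]

# Render the equirectangular image at 2x the target size, then:
# result = downsample_2x(box_blur(oversized_image))
```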

I created a simple Unity project with two editor wizards that show how to properly utilize either the C# code or the shader shown above. Get it here: https://github.com/Mapiarz/CubemapToEquirectangular

Remember to set proper import settings in Unity for your input images:

  • Point filtering
  • Truecolor format
  • Disable mipmaps
  • Non Power of 2: None (only for 2D textures)
  • Enable Read/Write (only for 2D textures)
