将2:1 equirectangular全景转换为立方体贴图 [英] Convert 2:1 equirectangular panorama to cube map

查看:1193
本文介绍了将2:1 equirectangular全景转换为立方体贴图的处理方法,对大家解决问题具有一定的参考价值,需要的朋友们下面随着小编来一起学习吧!

问题描述

我目前正在为网站制作一个简单的3D全景查看器。出于移动性能原因,我使用 three.js 我们得到

这似乎得到了铺路的大部分线条。



我们得到了一些图像文物。这是因为没有1比1的像素图。我们需要做的是使用逆变换。而不是遍历源中的每个像素并找到目标中的相应像素,我们循环通过目标图像并找到最接近的相应源像素。

 从PIL导入导入sys 
图像
来自数学导入pi,sin,cos,tan,atan2 ,hypot,floor
来自numpy导入剪辑

#get x,y,z coords from out image pixels coords
#i,j是像素坐标
#face是面数
#edge是边长
def outImgToXYZ(i,j,face,edge):
a = 2.0 * float(i)/ edge
b = 2.0 * float( j)/ edge
如果face == 0:#back
(x,y,z)=(-1.0,1.0-a,3.0-b)
elif face == 1: #left
(x,y,z)=(a-3.0,-1.0,3.0-b)
elif face == 2:#front
(x,y,z)= (1.0,a - 5.0,3.0 - b)
elif face == 3:#right
(x,y,z)=(7.0-a,1.0,3.0-b)
elif face == 4:#top
(x,y,z)=(b-1.0,a -5.0,1.0)
elif face == 5:#bottom
(x, y,z)=(5.0-b,a-5.0,-1.0)
return(x,y,z)

#转换使用逆转换
def convertBack( imgIn,imgOut):
inSize = imgIn.size
outSize = imgOut.size
inPix = imgIn.load()
outPix = imgOut.load()
edge = inSize [0] / 4#每条边的长度,以像素为单位
for x in xrange(outSize [0]):
face = int(i / edge)#0 - back,1 - left 2 - front,3 - right
if face == 2:
rng = xrange(0,edge * 3)
else:
rng = xrange(edge,edge * 2)

for r in rng:
if j< edge:
face2 = 4 #top
elif j> = 2 * edge:
face2 = 5#bottom
else:
face2 = face

(x,y,z)= outImgToXYZ(i,j,face2,edge)
theta = atan2(y,x)#range -pi到pi
r = hypot(x, y)
phi = atan2(z,r)#range -pi / 2到pi / 2
#source img coords
uf =(2.0 * edge *(theta + pi)/ pi )
vf =(2.0 * edge *(pi / 2-phi)/ pi)
#使用双线性插值四个周围像素之间的关系
ui = floor(uf)#cord of pixel to left left
vi = floor(vf)
u2 = ui + 1 #cord of pixel to right right
v2 = vi + 1
mu = uf-ui#像素的分数
nu = vf-vi
#四个角的像素值
A = inPix [ ui%inSize [0],clip(vi,0,inSize [1] -1)]
B = inPix [u2%inSize [0],clip(vi,0,inSize [1] -1)]
C = inPix [ui%inSize [0],clip(v2,0,inSize [1] -1)]
D = inPix [u2%inSize [0],clip(v2,0, inSize [1] -1)]
#interpolate
(r,g,b)=(
A [0] *(1-mu)*(1-nu)+ B [ 0] *(mu)*(1-nu)+ C [0] *(1-mu)* nu + D [0] * mu * nu,
A [1] *(1-mu)* (1-nu)+ B [1] *(mu)*(1-nu)+ C [1] *(1-mu)* nu + D [1] * mu * nu,
A [2 ] *(1-mu)*(1-nu)+ B [2] *(mu)*(1-nu)+ C [2] *(1-mu)* nu + D [2] * mu * nu )

outPix [i,j] =(int(round(round) r)),int(round(g)),int(round(b)))

imgIn = Image.open(sys.argv [1])$ ​​b $ b inSize = imgIn.size
imgOut = Image.new(RGB,(inSize [0],inSize [0] * 3/4),black)
convertBack(imgIn,imgOut)
imgOut。 save(sys.argv [1] .split('。')[0] +Out2.png)
imgOut.show()

结果是


I'm currently working on a simple 3D panorama viewer for a website. For mobile performance reasons I'm using the three.js CSS3 renderer. This requires a cube map, split up into 6 single images.

I'm recording the images on the iPhone with the Google Photosphere app, or similar apps that create 2:1 equirectangular panoramas. I then resize and convert these to a cubemap with this website: http://gonchar.me/panorama/ (Flash)

Preferrably, I'd like to do the conversion myself, either on the fly in three.js, if that's possible, or in Photoshop. I found Andrew Hazelden's Photoshop actions, and they seem kind of close, but no direct conversion is available. Is there a mathematical way to convert these, or some sort of script that does it? I'd like to avoid going through a 3D app like Blender, if possible.

Maybe this is a long shot, but I thought I'd ask. I have okay experience with javascript, but I'm pretty new to three.js. I'm also hesitant to rely on the WebGL functionality, since it seems either slow or buggy on mobile devices. Support is also still spotty.

解决方案

If you want to do it server side there are many options. http://www.imagemagick.org/ has a bunch of command line tools which could slice your image into pieces. You could put the command to do this into a script and just run that each time you have a new image.

Its hard to tell quite what algorithm is used in the program. We can try and reverse engineer quite what is happening by feeding a square grid into the program. I've used a grid from wikipedia

Which gives This gives us a clue as to how the box is constructed.

Imaging sphere with lines of latitude and longitude one it, and a cube surrounding it. Now project from the point at center of the sphere produces a distorted grid on the cube.

Mathematically take polar coordinates r, θ, ø, for the sphere r=1, 0 < θ < π, -π/4 < ø < 7π/4

  • x= r sin θ cos ø
  • y= r sin θ sin ø
  • z= r cos θ

centrally project these to the cube. First we divide into four regions by the latitude -π/4 < ø < π/4, π/4 < ø < 3π/4, 3π/4 < ø < 5π/4, 5π/4 < ø < 7π/4. These will either project to one of the four sides the top or the bottom.

Assume we are in the first side -π/4 < ø < π/4. The central projection of (sin θ cos ø, sin θ sin ø, cos θ) will be (a sin θ cos ø, a sin θ sin ø, a cos θ) which hits the x=1 plane when

  • a sin θ cos ø = 1

so

  • a = 1 / (sin θ cos ø)

and the projected point is

  • (1, tan ø, cot θ / cos ø)

If | cot θ / cos ø | < 1 this will be on the front face. Otherwise, it will be projected on the top or bottom and you will need a different projection for that. A better test for the top uses the fact that the minimum value of cos ø will be cos π/4 = 1/√2, so the projected point is always on the top if cot θ / (1/√2) > 1 or tan θ < 1/√2. This works out as θ < 35º or 0.615 radians.

Put this together in python

import sys
from PIL import Image
from math import pi,sin,cos,tan

def cot(angle):
    return 1/tan(angle)

# Project polar coordinates onto a surrounding cube
# assume ranges theta is [0,pi] with 0 the north poll, pi south poll
# phi is in range [0,2pi] 
def projection(theta,phi): 
        if theta<0.615:
            return projectTop(theta,phi)
        elif theta>2.527:
            return projectBottom(theta,phi)
        elif phi <= pi/4 or phi > 7*pi/4:
            return projectLeft(theta,phi)
        elif phi > pi/4 and phi <= 3*pi/4:
            return projectFront(theta,phi)
        elif phi > 3*pi/4 and phi <= 5*pi/4:
            return projectRight(theta,phi)
        elif phi > 5*pi/4 and phi <= 7*pi/4:
            return projectBack(theta,phi)

def projectLeft(theta,phi):
        x = 1
        y = tan(phi)
        z = cot(theta) / cos(phi)
        if z < -1:
            return projectBottom(theta,phi)
        if z > 1:
            return projectTop(theta,phi)
        return ("Left",x,y,z)

def projectFront(theta,phi):
        x = tan(phi-pi/2)
        y = 1
        z = cot(theta) / cos(phi-pi/2)
        if z < -1:
            return projectBottom(theta,phi)
        if z > 1:
            return projectTop(theta,phi)
        return ("Front",x,y,z)

def projectRight(theta,phi):
        x = -1
        y = tan(phi)
        z = -cot(theta) / cos(phi)
        if z < -1:
            return projectBottom(theta,phi)
        if z > 1:
            return projectTop(theta,phi)
        return ("Right",x,-y,z)

def projectBack(theta,phi):
        x = tan(phi-3*pi/2)
        y = -1
        z = cot(theta) / cos(phi-3*pi/2)
        if z < -1:
            return projectBottom(theta,phi)
        if z > 1:
            return projectTop(theta,phi)
        return ("Back",-x,y,z)

def projectTop(theta,phi):
        # (a sin θ cos ø, a sin θ sin ø, a cos θ) = (x,y,1)
        a = 1 / cos(theta)
        x = tan(theta) * cos(phi)
        y = tan(theta) * sin(phi)
        z = 1
        return ("Top",x,y,z)

def projectBottom(theta,phi):
        # (a sin θ cos ø, a sin θ sin ø, a cos θ) = (x,y,-1)
        a = -1 / cos(theta)
        x = -tan(theta) * cos(phi)
        y = -tan(theta) * sin(phi)
        z = -1
        return ("Bottom",x,y,z)

# Convert coords in cube to image coords 
# coords is a tuple with the side and x,y,z coords
# edge is the length of an edge of the cube in pixels
def cubeToImg(coords,edge):
    if coords[0]=="Left":
        (x,y) = (int(edge*(coords[2]+1)/2), int(edge*(3-coords[3])/2) )
    elif coords[0]=="Front":
        (x,y) = (int(edge*(coords[1]+3)/2), int(edge*(3-coords[3])/2) )
    elif coords[0]=="Right":
        (x,y) = (int(edge*(5-coords[2])/2), int(edge*(3-coords[3])/2) )
    elif coords[0]=="Back":
        (x,y) = (int(edge*(7-coords[1])/2), int(edge*(3-coords[3])/2) )
    elif coords[0]=="Top":
        (x,y) = (int(edge*(3-coords[1])/2), int(edge*(1+coords[2])/2) )
    elif coords[0]=="Bottom":
        (x,y) = (int(edge*(3-coords[1])/2), int(edge*(5-coords[2])/2) )
    return (x,y)

# convert the in image to out image
def convert(imgIn,imgOut):
    inSize = imgIn.size
    outSize = imgOut.size
    inPix = imgIn.load()
    outPix = imgOut.load()
    edge = inSize[0]/4   # the length of each edge in pixels
    for i in xrange(inSize[0]):
        for j in xrange(inSize[1]):
            pixel = inPix[i,j]
            phi = i * 2 * pi / inSize[0]
            theta = j * pi / inSize[1]
            res = projection(theta,phi)
            (x,y) = cubeToImg(res,edge)
            #if i % 100 == 0 and j % 100 == 0:
            #   print i,j,phi,theta,res,x,y
            if x >= outSize[0]:
                #print "x out of range ",x,res
                x=outSize[0]-1
            if y >= outSize[1]:
                #print "y out of range ",y,res
                y=outSize[1]-1
            outPix[x,y] = pixel

imgIn = Image.open(sys.argv[1])
inSize = imgIn.size
imgOut = Image.new("RGB",(inSize[0],inSize[0]*3/4),"black")
convert(imgIn,imgOut)
imgOut.show()

The projection function takes the theta and phi values and returns coordinates in a cube from -1 to 1 in each direction. The cubeToImg takes the (x,y,z) coords and translates them to the output image coords.

The above algorithm seems to get the geometry right using an image of buckingham palace we get This seems to get most of the lines in the paving right.

We are getting a few image artefacts. This is due to not having a 1 to 1 map of pixels. What we need to do is use a inverse transformation. Rather than loop through each pixel in the source and find the corresponding pixel in the target we loop through the target images and find the closest corresponding source pixel.

import sys
from PIL import Image
from math import pi,sin,cos,tan,atan2,hypot,floor
from numpy import clip

# get x,y,z coords from out image pixels coords
# i,j are pixel coords
# face is face number
# edge is edge length
def outImgToXYZ(i,j,face,edge):
    a = 2.0*float(i)/edge
    b = 2.0*float(j)/edge
    if face==0: # back
        (x,y,z) = (-1.0, 1.0-a, 3.0 - b)
    elif face==1: # left
        (x,y,z) = (a-3.0, -1.0, 3.0 - b)
    elif face==2: # front
        (x,y,z) = (1.0, a - 5.0, 3.0 - b)
    elif face==3: # right
        (x,y,z) = (7.0-a, 1.0, 3.0 - b)
    elif face==4: # top
        (x,y,z) = (b-1.0, a -5.0, 1.0)
    elif face==5: # bottom
        (x,y,z) = (5.0-b, a-5.0, -1.0)
    return (x,y,z)

# convert using an inverse transformation
def convertBack(imgIn,imgOut):
    inSize = imgIn.size
    outSize = imgOut.size
    inPix = imgIn.load()
    outPix = imgOut.load()
    edge = inSize[0]/4   # the length of each edge in pixels
    for i in xrange(outSize[0]):
        face = int(i/edge) # 0 - back, 1 - left 2 - front, 3 - right
        if face==2:
            rng = xrange(0,edge*3)
        else:
            rng = xrange(edge,edge*2)

        for j in rng:
            if j<edge:
                face2 = 4 # top
            elif j>=2*edge:
                face2 = 5 # bottom
            else:
                face2 = face

            (x,y,z) = outImgToXYZ(i,j,face2,edge)
            theta = atan2(y,x) # range -pi to pi
            r = hypot(x,y)
            phi = atan2(z,r) # range -pi/2 to pi/2
            # source img coords
            uf = ( 2.0*edge*(theta + pi)/pi )
            vf = ( 2.0*edge * (pi/2 - phi)/pi)
            # Use bilinear interpolation between the four surrounding pixels
            ui = floor(uf)  # coord of pixel to bottom left
            vi = floor(vf)
            u2 = ui+1       # coords of pixel to top right
            v2 = vi+1
            mu = uf-ui      # fraction of way across pixel
            nu = vf-vi
            # Pixel values of four corners
            A = inPix[ui % inSize[0],clip(vi,0,inSize[1]-1)]
            B = inPix[u2 % inSize[0],clip(vi,0,inSize[1]-1)]
            C = inPix[ui % inSize[0],clip(v2,0,inSize[1]-1)]
            D = inPix[u2 % inSize[0],clip(v2,0,inSize[1]-1)]
            # interpolate
            (r,g,b) = (
              A[0]*(1-mu)*(1-nu) + B[0]*(mu)*(1-nu) + C[0]*(1-mu)*nu+D[0]*mu*nu,
              A[1]*(1-mu)*(1-nu) + B[1]*(mu)*(1-nu) + C[1]*(1-mu)*nu+D[1]*mu*nu,
              A[2]*(1-mu)*(1-nu) + B[2]*(mu)*(1-nu) + C[2]*(1-mu)*nu+D[2]*mu*nu )

            outPix[i,j] = (int(round(r)),int(round(g)),int(round(b)))

imgIn = Image.open(sys.argv[1])
inSize = imgIn.size
imgOut = Image.new("RGB",(inSize[0],inSize[0]*3/4),"black")
convertBack(imgIn,imgOut)
imgOut.save(sys.argv[1].split('.')[0]+"Out2.png")
imgOut.show()

The results of this are

这篇关于将2:1 equirectangular全景转换为立方体贴图的文章就介绍到这了,希望我们推荐的答案对大家有所帮助,也希望大家多多支持IT屋!

查看全文
登录 关闭
扫码关注1秒登录
发送“验证码”获取 | 15天全站免登陆