Off-axis projection with glFrustum

Problem description

I am trying to do an off-axis projection of a scene with OpenGL. I have read Robert Kooima's document on off-axis projection and now have a much better idea of what actually has to be done, but there are still some pieces I find tricky. I understand the off-axis projection code for OpenGL to be roughly as follows:

Code 1:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(fNear * (-fFov * ratio + headX),   // left
          fNear * ( fFov * ratio + headX),   // right
          fNear * (-fFov + headY),           // bottom
          fNear * ( fFov + headY),           // top
          fNear, fFar);                      // near, far

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(headX * headZ, headY * headZ, 0,   // eye
          headX * headZ, headY * headZ, -1,  // center (looking down -z)
          0, 1, 0);                          // up
glTranslatef(0.0, 0.0, headZ);

Had this been a normal perspective projection with the user at the center of the screen, it would be fairly easy to understand.

               Screen  
                   |
                   |  h = H/2
                   |  
x----- n -----------
                   |
                   |  h = H/2
                   |

With the user at x and the distance from the screen being n, the top and bottom coordinates for glFrustum would be calculated as follows (assume theta is the field of view (fov), which I suppose is 30 degrees):

h = n * tan(theta / 2);              // conceptually
tanValue = DEG_TO_RAD * theta / 2;   // half angle, in radians
fFov = tan(tanValue);                // [EDIT] line added here
h = n * tan(tanValue);
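
For a quick numeric sanity check (theta = 30 degrees as assumed in the question; n = 1.0 is my own assumption), the snippet above evaluates as in this small standalone program:

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif
#define DEG_TO_RAD (M_PI / 180.0)

int main(void)
{
    double theta    = 30.0;                     /* field of view in degrees (from the question) */
    double n        = 1.0;                      /* assumed near distance                        */
    double tanValue = DEG_TO_RAD * theta / 2.0; /* 0.2618 rad                                   */
    double fFov     = tan(tanValue);            /* ~0.2679                                      */
    double h        = n * fFov;                 /* ~0.2679 * n                                  */
    printf("tanValue = %f, fFov = %f, h = %f\n", tanValue, fFov, h);
    return 0;
}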

Hence, top and bottom (bottom being the negation of top) are both obtained as glFrustum arguments. That leaves the left/right values.

// Now, aspect ratio:
r = ofGetWidth() / ofGetHeight();
Right = n * (fFov * r);   // where r is the aspect ratio [Edit1: was written as tanValue * r earlier]

Question 1) Is the above (fFov * r) obtaining the horizontal fov and then applying it to get the left/right values?

   double msX = (double)ofGetMouseX();
   double msY = (double)ofGetMouseY();
   double scrWidth = (double)ofGetWidth();
   double scrHeight = (double)ofGetHeight();

   headX = (msX / scrWidth) - 0.5;
   headY = ((scrHeight - msY) / scrHeight) - 0.5;
   headZ = -2.0;

Now, considering the off-axis projection, we have the headX and headY positions computed as above (using the mouse here instead of the actual user's head):

Question 2) How are headX and headY being computed, and what is the purpose of subtracting 0.5 above? I observed that it brings the x-value into the range (-0.5 to 0.5) and the y-value into (0.5 to -0.5) as msX and msY vary.

Question 3) In the above code (Code 1), how can headY be added to the calculated tan(fov/2) value?

-fFov + headY
fFov + headY

What does this value give us? -fFov was the calculated tan of theta/2, but how can headY be added to it directly?

-fFov * ratio + headX
fFov * ratio + headX

How does the above give us a value which, when multiplied by n (the near value), gives us left and right for the asymmetric glFrustum call for off-axis projection?

Question 4) I understand that gluLookAt has to be applied to the view so the apex of the frustum is shifted to where the user's eye is (in this case, where the mouse is). Notice this line in the above code:

gluLookAt(headX*headZ, headY*headZ, 0, headX*headZ, headY*headZ, -1, 0, 1, 0);

How does headX*headZ give me the x position of the eye, and headY*headZ the y position of the eye, which I can use in gluLookAt() here?

Full problem description added here: pastebin.com/BiSHXspb

Answer

You have made this nice ASCII-art picture:

               Screen  
                   B
                   |  h = H/2
                   |  
x----- n ----------A
                   |
                   |  h = H/2
                   B'

The field of view is defined as the angle fov = angle((x,B), (x,B')) formed between the two tips B, B' of the screen "line" and the point x. The trigonometric function tangent (tan) is defined as

h/n = tan( angle((x,A), (x,B)) )

Furthermore, since length(A,B) == length(A,B') == h == H/2, we know that

H/(2·n) == h/n == tan( fov/2 ) == tan( angle((x,A), (x,B)) )

Since in trigonometry angles are given in radians, but most people are more comfortable with degrees, you may have to convert from degrees to radians.

Since we're interested in only half of the screen span (= h), we have to halve the angle. And if we want to accept degrees, we also have to convert them to radians. That's what this expression is meant for:

tanValue = DEG_TO_RAD * theta/2;

Using that, we then calculate h with

h = tan(tanValue) * n

Whether the FOV refers to the horizontal or the vertical span of the screen depends on how the field span H is scaled with the aspect ratio.

How are headX and headY being computed, and what is the purpose of subtracting 0.5? I observed that it brings the x-value into (-0.5 to 0.5) and the y-value into (0.5 to -0.5) as msX and msY vary.

The calculations you gave assume that screen-space coordinates are in the range [0, screenWidth] × [0, screenHeight]. However, since we're doing our frustum calculations in a normalized range [-1, 1]², we want to bring the absolute device mouse coordinates into normalized, center-relative coordinates. This then allows specifying the axis offset relative to the normalized near-plane size. This is how it looks with 0 offset (the grid has 0.1 units spacing in this picture):
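
As an illustration of that remapping, here is a minimal sketch (my own helper with hypothetical names, not code from the answer) that maps absolute window coordinates to center-relative coordinates in [-1, 1]:

/* Map absolute window coordinates to center-relative, normalized
   coordinates in [-1, 1], with y pointing up. */
void windowToNormalized(double msX, double msY,
                        double scrWidth, double scrHeight,
                        double *nx, double *ny)
{
    *nx = 2.0 * (msX / scrWidth) - 1.0;                 /* -1 at the left edge, +1 at the right edge */
    *ny = 2.0 * ((scrHeight - msY) / scrHeight) - 1.0;  /* -1 at the bottom,    +1 at the top        */
}

The code in the question performs the same kind of mapping, but into [-0.5, 0.5] instead of [-1, 1].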

And with an X offset of -0.5 applied it looks like this (orange outline); as you can see, the left edge of the near plane has been shifted to -0.5.

Now simply imagine that the grid is your screen, and your mouse pointer drags the near-plane bounds of the projection frustum around like that.

What does this value provide us with? -fFov was the calculated tan of theta/2, but how can headY be added to it directly?

Because fFov is not an angle but the span H/2 = h from your ASCII art picture, and headX and headY are relative shifts on the normalized near projection plane.
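
To make that concrete with my own example numbers: with fFov = tan(15°) ≈ 0.268, ratio = 4/3 and fNear = 1, a head offset of headX = 0.25 turns the symmetric horizontal bounds ±0.357 of Code 1 into

left  = 1 * (-0.268 * 4/3 + 0.25) ≈ -0.107
right = 1 * ( 0.268 * 4/3 + 0.25) ≈  0.607

so adding headX (or headY) inside the parentheses simply slides the near-plane rectangle sideways (or up/down) by that amount, in the same units as the span fFov.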

How does headX*headZ give me the x position of the eye, and headY*headZ the y position of the eye, which I can use in gluLookAt() here?

The code you quoted seems to be an ad-hoc solution to emphasize the effect. In a real head-tracking stereoscopic system you do it slightly differently. Technically, headZ should either be used to calculate the near-plane distance or be derived from it.

Anyway, the main idea is that the head is located at some distance from the projection plane, and the center point is shifted in units relative to the projection. So you must scale the relative headX, headY with the actual head distance to the projection plane to make the apex correction work.
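
A minimal sketch of that idea (my own code with hypothetical names, not the snippet from the question): express the eye offset and distance in the same units as the physical screen, and project the screen edges onto the near plane:

#include <GL/gl.h>

/* halfW, halfH: half extents of the screen; headX, headY, headDist:
   eye position relative to the screen center, in the same units;
   zNear, zFar: clip plane distances. */
void offAxisFrustum(double halfW, double halfH,
                    double headX, double headY, double headDist,
                    double zNear, double zFar)
{
    double s = zNear / headDist;   /* scale screen-plane coordinates onto the near plane */
    glFrustum(s * (-halfW - headX), s * (halfW - headX),
              s * (-halfH - headY), s * (halfH - headY),
              zNear, zFar);
    /* The modelview matrix still has to translate the scene by
       (-headX, -headY, -headDist) so the frustum apex ends up at the eye. */
}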

So far we've looked at only one dimension when converting field of view (fov) to screen span. For the image to be undistorted, the aspect ratio of the [left, right] / [bottom, top] extents of the near clipping plane must match the aspect ratio of the viewport (width/height).

If we choose to define the FoV angle to be the vertical FoV, then the horizontal size of the near clipping plane extents is the vertical near clipping plane extent scaled by the width/height aspect ratio.

This is nothing special about off-axis projection, but can be found in every perspective projection helper function; compare the source code of gluPerspective for reference:

void GLAPIENTRY
gluPerspective(GLdouble fovy, GLdouble aspect, GLdouble zNear, GLdouble zFar)
{
   GLdouble xmin, xmax, ymin, ymax;

   ymax = zNear * tan(fovy * M_PI / 360.0); // M_PI / 360.0 == DEG_TO_RAD / 2: convert to radians and halve the angle
   ymin = -ymax;

   xmin = ymin * aspect;
   xmax = ymax * aspect;

   glFrustum(xmin, xmax, ymin, ymax, zNear, zFar);
}

And if we consider the near clipping plane extents to be [-aspect, aspect]×[-1, 1] then of course the headX position is not in the normalized range [-1, 1] but must be given in the range [-aspect, aspect] as well.
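
Putting this together, a hedged sketch of an off-axis variant of gluPerspective (my own helper, not an existing GLU function), where the offsets are given in that same [-aspect, aspect] × [-1, 1] near-plane range, could look like:

#include <math.h>
#include <GL/glu.h>

void offAxisPerspective(GLdouble fovy, GLdouble aspect,
                        GLdouble offX, GLdouble offY,  /* offX in [-aspect, aspect], offY in [-1, 1] */
                        GLdouble zNear, GLdouble zFar)
{
   GLdouble h = tan(fovy * M_PI / 360.0);  /* half vertical span at unit distance */

   /* Shift the normalized near-plane rectangle by (offX, offY),
      then scale it by h and zNear to get the frustum bounds. */
   glFrustum(zNear * h * (-aspect + offX), zNear * h * (aspect + offX),
             zNear * h * (-1.0 + offY),    zNear * h * (1.0 + offY),
             zNear, zFar);
}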

If you look at the paper you linked, you'll find that for each screen the head position, as reported by the tracker, is transformed into coordinates relative to that screen.

Two weeks ago I had the opportunity to test a display system called "Z space" where a polarized stereo display had been combined with a head tracker creating an off-axis frustum / look-at combination that matched your physical head position in front of the display. It also offers a "pen" to interact with the 3D scene in front of you. This is one of the most impressive things I've seen in the last few years and I'm currently begging my boss to buy us one :)
