Off-axis projection with glFrustum


Problem description

I am trying to do an off-axis projection of a scene with OpenGL. I gave Robert Kooima's document on off-axis projection a read and now have a much better idea of what actually has to be done, but there are still some pieces I find tricky. The off-axis projection code for OpenGL that I know of is roughly the following:

Code 1:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(fNear * (-fFov * ratio + headX),
          fNear * ( fFov * ratio + headX),
          fNear * (-fFov + headY),
          fNear * ( fFov + headY),
          fNear, fFar);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(headX * headZ, headY * headZ, 0,
          headX * headZ, headY * headZ, -1,
          0, 1, 0);
glTranslatef(0.0, 0.0, headZ);

Had this been a normal perspective projection with the user at the center of the screen, it would be fairly easy to understand, as far as I comprehend it.

               Screen  
                   |
                   |  h = H/2
                   |  
x----- n -----------
                   |
                   |  h = H/2
                   |

With the user at x and the distance from the screen being n, the top and bottom coordinates for glFrustum would be calculated as follows (assume theta is the field of view (fov), which I suppose is 30 degrees):

h = n * tan(theta/2);

or, with the half-angle converted to radians first:

tanValue = DEG_TO_RAD * theta/2;
fFov = tan(tanValue);   // [EDIT: line added here]
h = n * tan(tanValue);

Hence, top and bottom (negating top's value) are both obtained for the glFrustum arguments. That leaves left and right.

Now, Aspect Ratio, r = ofGetWidth() / ofGetHeight();
Right = n * (fFov * r);   // where r is the aspect ratio [Edit1: was written as tanValue*r earlier]

Question 1) Is the above (tanValue * r) getting the horizontal fov angle, which is then used the same way to get the left/right values?

double msX = (double)ofGetMouseX();
double msY = (double)ofGetMouseY();
double scrWidth = (double)ofGetWidth();
double scrHeight = (double)ofGetHeight();

headX = (msX/scrWidth) - 0.5;
headY = ((scrHeight - msY)/scrHeight) - 0.5;
headZ = -2.0;

Now, consider the projection off-axis, with the headX and headY positions computed (using the mouse here instead of an actual user's head):

Question 2) How are headX and headY being computed, and what is the use of subtracting 0.5 above? I observed that it brings the x-value into (-0.5 to 0.5) and the y-value into (0.5 to -0.5) as msX and msY vary.

Question 3) In the above code (Code 1), how can headY be added to the calculated tan(fov/2) value?

-fFov + headY
fFov + headY

What does this value provide us with? fFov was the calculated tan of theta/2, but how can headY be added to it directly?

-fFov * ratio + headX
fFov * ratio + headX

How does the above give us a value which, when multiplied by n (the near value), yields left and right for the asymmetric glFrustum call of the off-axis projection?

Question 4) I understand that the gluLookAt has to be done for the view point, to shift the apex of the frustum to where the user's eye is (in this case, where the mouse is). Notice this line in the above code:

gluLookAt(headX*headZ, headY*headZ, 0, headX*headZ, headY*headZ, -1, 0, 1, 0);

How does headX*headZ give me the x position of the eye and headY*headZ the y position of the eye, which I can then use in gluLookAt() here?

Full problem description added here: pastebin.com/BiSHXspb

Answer

You have made this nice picture of ASCII art:

               Screen  
                   B
                   |  h = H/2
                   |  
x----- n ----------A
                   |
                   |  h = H/2
                   B'

The field of view is defined as the angle fov = angle((x,B), (x,B')) formed between the two tips B, B' of the screen "line" and the point x. The trigonometric tangent function (tan) is defined as

h/n = tan( angle((x,A), (x,B)) )

And since length(A, B) == length(A, B') == h == H/2, we know

H/(2·n) == tan( fov ) == tan( angle((x,B), (x,B')) ) == tan( 2·angle((x,A), (x,B)) )

Since in trigonometry angles are given in radians, but most people are more comfortable with degrees, you may have to convert from degrees to radians.

Since we're interested in only half of the screen span (= h), we have to halve the angle; and if we want to accept degrees, we also convert to radians. That's what this expression is meant for:

tanValue = DEG_TO_RAD * theta/2;

Using that, we then obtain the half-span via

h = tan(tanValue) * n

Whether the FOV applies to the horizontal or the vertical span of the screen depends on how the field span H is scaled with the aspect ratio.

How are headX and headY being computed, and what is the use of subtracting 0.5 above? I observed that it brings the x-value into (-0.5 to 0.5) and the y-value into (0.5 to -0.5) as msX and msY vary.

The calculations you gave assume that screen space coordinates are in the range [0, screenWidth] × [0, screenHeight]. However, since we're doing our frustum calculations in a normalized range [-1, 1]², we want to bring the device-absolute mouse coordinates into normalized, center-relative coordinates. This then allows specifying the axis offset relative to the normalized near-plane size. This is how it looks with 0 offset (the grid has 0.1 units spacing in this picture):

And with an X offset of -0.5 applied it looks like this (orange outline); as you can see, the left edge of the near plane has been shifted to -0.5.

Now simply imagine that the grid was your screen, and your mouse pointer would drag around the projection frustum's near-plane bounds like that.

What does this value provide us with? fFov was the calculated tan of theta/2, but how can headY be added to it directly?

Because fFov is not an angle but the span H/2 = h in your ASCII art picture. And headX and headY are relative shifts in the normalized near projection plane.

How does headX*headZ give me the x position of the eye and headY*headZ the y position of the eye, which I can then use in gluLookAt() here?

The code you quoted seems to be an ad-hoc solution on that account, meant to emphasize the effect. In a real head-tracking stereoscopic system you do it slightly differently. Technically, headZ should either be used to calculate the near-plane distance or be derived from it.

Anyway, the main idea is that the head is located at some distance from the projection plane, and the center point is shifted in relative units of the projection. So you must scale the relative headX, headY with the actual head distance to the projection plane to make the apex correction work.

So far we've looked at only one dimension when converting field of view (fov) to screen span. For the image to be undistorted, the aspect ratio of the [left, right] / [bottom, top] extents of the near clipping plane must match the aspect ratio of the viewport's width/height.

If we choose to define the FoV angle to be the vertical FoV, then the horizontal size of the near clipping plane extents is the vertical size scaled by the width/height aspect ratio.

This is nothing special about off-axis projection; it can be found in every perspective projection helper function. Compare the source code of gluPerspective for reference:

void GLAPIENTRY
gluPerspective(GLdouble fovy, GLdouble aspect, GLdouble zNear, GLdouble zFar)
{
   GLdouble xmin, xmax, ymin, ymax;

   ymax = zNear * tan(fovy * M_PI / 360.0); // M_PI / 360.0 == DEG_TO_RAD
   ymin = -ymax;

   xmin = ymin * aspect;
   xmax = ymax * aspect;

   glFrustum(xmin, xmax, ymin, ymax, zNear, zFar);
}

And if we consider the near clipping plane extents to be [-aspect, aspect] × [-1, 1], then of course the headX position is not in the normalized range [-1, 1] but must be given in the range [-aspect, aspect] as well.

If you look at the paper you linked, you'll find that for each screen the head position, as reported by the tracker, is transformed into absolute coordinates relative to the screen.

Two weeks ago I had the opportunity to test a display system called "Z space", where a polarized stereo display is combined with a head tracker, creating an off-axis frustum / look-at combination that matches your physical head position in front of the display. It also offers a "pen" to interact with the 3D scene in front of you. This is one of the most impressive things I've seen in the last few years, and I'm currently begging my boss to buy us one :)

