Text/font rendering in OpenGLES 2 (iOS - CoreText?) - options and best practice?


Question


                 There are many questions on OpenGL font rendering, many of them satisfied by texture atlases (fast, but wrong) or string-textures (fixed-text only).

                However, those approaches are poor and appear to be years out of date (what about using shaders to do this better/faster?). For OpenGL 4.1 there's this excellent question looking at "what should you use today?":

                What is state-of-the-art for text rendering in OpenGL as of version 4.1?

                So, what should we be using on iOS GL ES 2 today?

                 I'm disappointed that there appears to be no open-source (or even commercial) solution. I know a lot of teams suck it up and spend weeks of dev time re-inventing this wheel, gradually learning how to kern and space etc (ugh) - but there must be a better way than re-writing the whole of "fonts" from scratch?


                As far as I can see, there are two parts to this:

                1. How do we render text using a font?
                2. How do we display the output?

                For 1 (how to render), Apple provides MANY ways to get the "correct" rendered output - but the "easy" ones don't support OpenGL (maybe some of the others do - e.g. is there a simple way to map CoreText output to OpenGL?).

                 For 2 (how to display), we have shaders, we have VBOs, we have glyph-textures, we have lookup-textures, and other techniques (e.g. the OpenGL 4.1 stuff linked above?)

                Here are the two common OpenGL approaches I know of:

                1. Texture atlas (render all glyphs once, then render 1 x textured quad per character, from the shared texture)

                  1. This is wrong, unless you're using a 1980s era "bitmap font" (and even then: texture atlas requires more work than it may seem, if you need it correct for non-trivial fonts)
                   2. (fonts aren't "a collection of glyphs": there's a vast amount of positioning, layout, wrapping, spacing, kerning, styling, colouring, weighting, etc. Texture atlases fail)

                 2. Fixed string (use any Apple class to render correctly, then screenshot the backing image-data, and upload as a texture - sketched in code just after this list)

                  1. In human terms, this is fast. In frame-rendering, this is very, very slow. If you do this with a lot of changing text, your frame rate goes through the floor
                  2. Technically, it's mostly correct (not entirely: you lose some information this way) but hugely inefficient
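For concreteness, here is a minimal sketch of that "fixed string" approach using the CoreText / CoreGraphics / GL ES 2 C APIs. The function name and the fixed RGBA bitmap size are illustrative, not anything from Apple, and error handling is omitted:

```c
#include <CoreText/CoreText.h>
#include <CoreGraphics/CoreGraphics.h>
#include <OpenGLES/ES2/gl.h>

// Render a whole string once via CoreText into an RGBA bitmap context,
// then upload that bitmap as a single GL texture (re-done whenever the text changes).
static GLuint TextureFromString(CFStringRef text, CTFontRef font,
                                size_t width, size_t height)
{
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                             rgb, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(rgb);

    // Let Apple lay out and draw the text correctly.
    CFStringRef keys[] = { kCTFontAttributeName };
    CFTypeRef values[] = { font };
    CFDictionaryRef attrs = CFDictionaryCreate(NULL, (const void **)keys,
                                               (const void **)values, 1,
                                               &kCFTypeDictionaryKeyCallBacks,
                                               &kCFTypeDictionaryValueCallBacks);
    CFAttributedStringRef as = CFAttributedStringCreate(NULL, text, attrs);
    CTLineRef line = CTLineCreateWithAttributedString(as);
    CGContextSetTextPosition(ctx, 0.0, height / 2.0);   // arbitrary baseline
    CTLineDraw(line, ctx);

    // "Screenshot" the backing store and upload it as a texture.
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, CGBitmapContextGetData(ctx));

    CFRelease(line); CFRelease(as); CFRelease(attrs); CGContextRelease(ctx);
    return tex;
}
```

One quad drawn with this texture reproduces Apple's rendering exactly, but the whole pipeline (layout, rasterise, upload) re-runs for every text change, which is why the frame rate collapses with dynamic text.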

                I've also seen, but heard both good and bad things about:

                1. Imagination/PowerVR "Print3D" (link broken) (from the guys that manufacture the GPU! But their site has moved/removed the text rendering page)
                2. FreeType (requires pre-processing, interpretation, lots of code, extra libraries?)
                3. ...and/or FTGL http://sourceforge.net/projects/ftgl/ (rumors: slow? buggy? not updated in a long time?)
                4. Font-Stash http://digestingduck.blogspot.co.uk/2009/08/font-stash.html (high quality, but very slow?)

                 Within Apple's own OS / standard libraries, I know of several sources of text rendering. NB: I have used most of these in detail on 2D rendering projects; my statements about them producing different rendered output are based on direct experience.

                1. CoreGraphics with NSString

                  1. Simplest of all: render "into a CGRect"
                   2. Seems to be a slightly faster version of the "fixed string" approach people recommend (even though you'd expect it to be much the same)

                2. UILabel and UITextArea with plain text

                   1. NB: they are NOT the same! There are slight differences in how they render the same text

                3. NSAttributedString, rendered to one of the above

                  1. Again: renders differently (the differences I know of are fairly subtle and classified as "bugs", various SO questions about this)

                4. CATextLayer

                  1. A hybrid between iOS fonts and old C rendering. Uses the "not fully" toll-free-bridged CFFont / UIFont, which reveals some more rendering differences / strangeness

                5. CoreText

                  1. ... the ultimate solution? But a beast of its own...

                 Solution

                 I did some more experimenting, and it seems that CoreText might make for a perfect solution when combined with a texture atlas and Valve's signed-distance textures (which can turn a bitmap glyph into a resolution-independent hi-res texture).

                ...but I don't have it working yet, still experimenting.
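For reference, the core of the signed-distance idea is simple even though fast implementations aren't. A brute-force sketch (the function name and the spread parameter are my own; real code would use a linear-time sweep such as 8SSEDT instead of this exhaustive search):

```c
#include <math.h>
#include <stdint.h>

// 'coverage' is a w*h alpha bitmap (0..255) of a glyph rendered at high resolution.
// Output is a w*h byte array: ~128 at the glyph edge, higher inside, lower outside,
// suitable for upload as a single-channel texture and thresholding in a shader.
static void BuildSignedDistanceField(const uint8_t *coverage, int w, int h,
                                     uint8_t *outSDF, float spread)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int inside = coverage[y * w + x] > 127;
            float best = spread;                       // search radius in pixels
            // Find the nearest pixel with the opposite coverage.
            for (int sy = 0; sy < h; sy++) {
                for (int sx = 0; sx < w; sx++) {
                    int otherInside = coverage[sy * w + sx] > 127;
                    if (otherInside == inside) continue;
                    float d = sqrtf((float)((sx - x) * (sx - x) + (sy - y) * (sy - y)));
                    if (d < best) best = d;
                }
            }
            float signedDist = inside ? -best : best;  // negative inside the glyph
            // Remap [-spread, +spread] to [255, 0]; ~128 marks the glyph outline.
            float norm = 0.5f - signedDist / (2.0f * spread);
            if (norm < 0.0f) norm = 0.0f;
            if (norm > 1.0f) norm = 1.0f;
            outSDF[y * w + x] = (uint8_t)(norm * 255.0f);
        }
    }
}
```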


                 UPDATE: Apple's docs say they give you access to everything except the final detail: which glyph + glyph layout to render (you can get the line layout, and the number of glyphs, but not the glyph itself, according to docs). For no apparent reason, this core piece of info is apparently missing from CoreText (if so, that makes CT almost worthless. I'm still hunting to see if I can find a way to get the actual glyphs + per-glyph data)


                 UPDATE2: I now have this working properly with Apple's CT (but no distance textures yet), but it ends up as 3 class files, 10 data structures, about 300 lines of code, plus the OpenGL code to render it. Too much for an SO answer :(.

                The short answer is: yes, you can do it, and it works, if you:

                 1. Create a CTFramesetter
                 2. Create a CTFrame for a theoretical 2D frame
                 3. Create a CGContext that you'll convert to a GL texture
                 4. Go through glyph-by-glyph, allowing Apple to render to the CGContext
                 5. Each time Apple renders a glyph, calculate the bounding box (this is HARD), and save it somewhere
                 6. And save the unique glyph-ID (this will be different for e.g. "o", "f", and "of" (one glyph!))
                 7. Finally, send your CGContext up to GL as a texture
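The full code is too long to post, but a stripped-down sketch of steps 1-7 in the CoreText C API looks roughly like this. GlyphInfo and the function name are illustrative (not Apple API); packing glyphs into non-overlapping atlas cells, error handling, and the final glTexImage2D upload are left out:

```c
#include <CoreText/CoreText.h>
#include <CoreGraphics/CoreGraphics.h>
#include <stdlib.h>

typedef struct {
    CGGlyph glyph;   // unique glyph ID ("o", "f" and the "of" ligature all differ)
    CGRect  box;     // where the glyph landed in the bitmap == its texture sub-rect
} GlyphInfo;

// Lay out 'text' into 'ctx' glyph-by-glyph, recording one GlyphInfo per glyph drawn.
static CFIndex LayoutIntoContext(CFAttributedStringRef text, CGContextRef ctx,
                                 CGSize size, GlyphInfo *outInfo, CFIndex maxInfo)
{
    // Steps 1-2: framesetter + frame for a theoretical 2D rectangle.
    CTFramesetterRef setter = CTFramesetterCreateWithAttributedString(text);
    CGPathRef path = CGPathCreateWithRect(CGRectMake(0, 0, size.width, size.height), NULL);
    CTFrameRef frame = CTFramesetterCreateFrame(setter, CFRangeMake(0, 0), path, NULL);

    CFArrayRef lines = CTFrameGetLines(frame);
    CFIndex lineCount = CFArrayGetCount(lines);
    CGPoint *origins = malloc(sizeof(CGPoint) * lineCount);
    CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), origins);

    CFIndex written = 0;
    for (CFIndex li = 0; li < lineCount; li++) {
        CTLineRef line = (CTLineRef)CFArrayGetValueAtIndex(lines, li);
        CFArrayRef runs = CTLineGetGlyphRuns(line);
        for (CFIndex ri = 0; ri < CFArrayGetCount(runs); ri++) {
            CTRunRef run = (CTRunRef)CFArrayGetValueAtIndex(runs, ri);
            CTFontRef font = (CTFontRef)CFDictionaryGetValue(CTRunGetAttributes(run),
                                                             kCTFontAttributeName);
            CFIndex n = CTRunGetGlyphCount(run);
            CGGlyph glyphs[n];
            CGPoint positions[n];
            CTRunGetGlyphs(run, CFRangeMake(0, 0), glyphs);
            CTRunGetPositions(run, CFRangeMake(0, 0), positions);

            for (CFIndex gi = 0; gi < n && written < maxInfo; gi++) {
                // Step 4: let Apple render this single glyph into the CGContext.
                CGPoint p = CGPointMake(origins[li].x + positions[gi].x,
                                        origins[li].y + positions[gi].y);
                CTFontDrawGlyphs(font, &glyphs[gi], &p, 1, ctx);

                // Steps 5-6: bounding box (offset to where it was drawn) + glyph ID.
                CGRect bbox;
                CTFontGetBoundingRectsForGlyphs(font, kCTFontOrientationDefault,
                                                &glyphs[gi], &bbox, 1);
                outInfo[written].glyph = glyphs[gi];
                outInfo[written].box   = CGRectOffset(bbox, p.x, p.y);
                written++;
            }
        }
    }
    // Step 7: the caller uploads CGBitmapContextGetData(ctx) with glTexImage2D,
    // exactly as in the fixed-string sketch earlier.
    free(origins);
    CFRelease(frame); CGPathRelease(path); CFRelease(setter);
    return written;
}
```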

                 When you render, use the list of glyph-IDs that Apple created, and for each one use the saved info, and the texture, to render quads with texture coordinates that pull individual glyphs out of the texture you uploaded.
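The render side, sketched with the same illustrative GlyphInfo struct from the sketch above (the attribute locations are assumptions, and a suitable shader program that maps these pixel coordinates to clip space is assumed to be bound already):

```c
#include <OpenGLES/ES2/gl.h>
#include <CoreGraphics/CoreGraphics.h>
#include <stdlib.h>
#include <string.h>

typedef struct { GLfloat x, y, u, v; } Vertex;

// Draw one textured quad per recorded glyph. Positions come from where CoreText
// laid the glyph out; UVs pull that glyph's sub-rectangle out of the atlas texture.
static void DrawGlyphQuads(const GlyphInfo *info, int count, GLuint texture,
                           GLuint positionAttrib, GLuint texCoordAttrib,
                           float texW, float texH)
{
    Vertex *verts = malloc(sizeof(Vertex) * 6 * count);   // two triangles per glyph
    for (int i = 0; i < count; i++) {
        CGRect b = info[i].box;
        GLfloat x0 = b.origin.x,                y0 = b.origin.y;
        GLfloat x1 = b.origin.x + b.size.width, y1 = b.origin.y + b.size.height;
        GLfloat u0 = x0 / texW, v0 = y0 / texH, u1 = x1 / texW, v1 = y1 / texH;
        Vertex quad[6] = {
            { x0, y0, u0, v0 }, { x1, y0, u1, v0 }, { x0, y1, u0, v1 },
            { x1, y0, u1, v0 }, { x1, y1, u1, v1 }, { x0, y1, u0, v1 },
        };
        memcpy(&verts[i * 6], quad, sizeof(quad));
    }

    glBindTexture(GL_TEXTURE_2D, texture);
    glEnableVertexAttribArray(positionAttrib);
    glEnableVertexAttribArray(texCoordAttrib);
    glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), &verts[0].x);
    glVertexAttribPointer(texCoordAttrib, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), &verts[0].u);
    glDrawArrays(GL_TRIANGLES, 0, 6 * count);
    free(verts);
}
```

Typically you'd keep these vertices in a VBO and only rebuild them when the text changes; the glyph bitmaps themselves never need re-rendering.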

                This works, it's fast, it works with all fonts, it gets all font layout and kerning correct, etc.
