How to render view into image faster?


Problem description

I'm making a magnifier app that lets the user touch the screen and move a finger around, with a magnifier following the finger path. I implemented it by taking a screenshot and assigning the image to the magnifier image view, as follows:

    CGSize imageSize = frame.size;
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0);
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextScaleCTM(c, scaleFactor, scaleFactor);
    CGContextConcatCTM(c, CGAffineTransformMakeTranslation(-frame.origin.x, -frame.origin.y));
    [self.layer renderInContext:c];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenshot;

The problem is that [self.layer renderInContext:c] is slow, so tracking does not feel smooth while the finger is moving. I tried running renderInContext: on another thread, but that makes the magnifier look odd, because the image it shows lags behind the finger.

Is there a better way to render a view into an image? Does renderInContext: use the GPU?

Recommended answer

No. In iOS 6, renderInContext: is the only way. It is slow. It uses the CPU.

    [view.layer renderInContext:UIGraphicsGetCurrentContext()];

  • Requires iOS 2.0. It runs on the CPU.
  • It doesn't capture views with non-affine transforms, OpenGL, or video content.
  • If an animation is running, you have the option of capturing:
    • view.layer, which captures the final frame of the animation.
    • view.layer.presentationLayer, which captures the current frame of the animation.

    UIView *snapshot = [view snapshotViewAfterScreenUpdates:YES];

  • Requires iOS 7.
  • It is the fastest method.
  • The view contents are immutable. Not good if you want to apply an effect.
  • It captures all content types (UIKit, OpenGL, or video); a usage sketch for the magnifier follows below.
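
Because these snapshot APIs return a UIView rather than a UIImage, a magnifier can scale the snapshot view directly instead of drawing into an image. The following is only a sketch of that idea, not part of the original answer; updateMagnifier:sourceView:touchPoint:, magnifierContainer, and the 2x zoom are assumed names and values:

    // Hypothetical helper (not from the answer): show a zoomed snapshot of
    // `sourceView` inside `magnifierContainer`, centered on `touchPoint`.
    - (void)updateMagnifier:(UIView *)magnifierContainer
                 sourceView:(UIView *)sourceView
                 touchPoint:(CGPoint)touchPoint
    {
        CGFloat scaleFactor = 2.0; // assumed zoom level

        // snapshotViewAfterScreenUpdates: is fast but immutable, so this sketch
        // scales the returned view itself instead of rendering a UIImage.
        UIView *snapshot = [sourceView snapshotViewAfterScreenUpdates:NO];

        // Anchor the snapshot at the touched point and scale it up, so that
        // point appears magnified at the center of the lens.
        snapshot.layer.anchorPoint = CGPointMake(touchPoint.x / sourceView.bounds.size.width,
                                                 touchPoint.y / sourceView.bounds.size.height);
        snapshot.layer.position = CGPointMake(CGRectGetMidX(magnifierContainer.bounds),
                                              CGRectGetMidY(magnifierContainer.bounds));
        snapshot.transform = CGAffineTransformMakeScale(scaleFactor, scaleFactor);

        // Clip the container to a round lens and replace the previous snapshot.
        magnifierContainer.clipsToBounds = YES;
        magnifierContainer.layer.cornerRadius = magnifierContainer.bounds.size.width / 2.0;
        [magnifierContainer.subviews makeObjectsPerformSelector:@selector(removeFromSuperview)];
        [magnifierContainer addSubview:snapshot];
    }
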
    [view resizableSnapshotViewFromRect:rect afterScreenUpdates:YES withCapInsets:edgeInsets]

  • Requires iOS 7.
  • Same as snapshotViewAfterScreenUpdates: but with resizable insets. The content is also immutable (see the sketch below).
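
As another hedged sketch that is not part of the original answer, the resizable variant could snapshot only the region under the finger rather than the whole view; the helper name, touchPoint, and lensSize are assumptions:

    // Hypothetical helper, not from the answer: snapshot only the area under
    // the finger rather than the whole view.
    - (UIView *)regionSnapshotAroundPoint:(CGPoint)touchPoint inView:(UIView *)sourceView {
        CGSize lensSize = CGSizeMake(100.0, 100.0); // assumed lens size
        CGRect regionOfInterest = CGRectMake(touchPoint.x - lensSize.width / 2.0,
                                             touchPoint.y - lensSize.height / 2.0,
                                             lensSize.width,
                                             lensSize.height);
        // Zero cap insets: the snapshot stretches uniformly if it is later resized.
        return [sourceView resizableSnapshotViewFromRect:regionOfInterest
                                      afterScreenUpdates:NO
                                           withCapInsets:UIEdgeInsetsZero];
    }
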
    [view drawViewHierarchyInRect:rect afterScreenUpdates:YES];

  • Requires iOS 7.
  • It draws in the current context.
  • According to session 226, it is faster than renderInContext: (a sketch of using it for the question's screenshot follows below).

See WWDC 2013 Session 226, Implementing Engaging UI on iOS, about the new snapshotting APIs.
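
As a sketch rather than something stated in the answer, on iOS 7 the question's screenshot method could swap renderInContext: for drawViewHierarchyInRect: while keeping the rest of the code unchanged; frame and scaleFactor are assumed to be the same variables as in the question's code:

    // Sketch only: the question's capture, with drawViewHierarchyInRect: (iOS 7+)
    // replacing renderInContext:. `frame` and `scaleFactor` are the same
    // variables as in the question's code.
    CGSize imageSize = frame.size;
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0);
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextScaleCTM(c, scaleFactor, scaleFactor);
    CGContextConcatCTM(c, CGAffineTransformMakeTranslation(-frame.origin.x, -frame.origin.y));
    // Draw the whole view hierarchy into the current image context.
    // afterScreenUpdates:NO avoids blocking until the next screen update.
    [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:NO];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenshot;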

If it is any help, here is some code to discard capture attempts while one is still running.

This throttles block execution to one at a time, and discards the others. From this SO answer.

    dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);
    dispatch_queue_t renderQueue = dispatch_queue_create("com.throttling.queue", NULL);

    - (void) capture {
        if (dispatch_semaphore_wait(semaphore, DISPATCH_TIME_NOW) == 0) {
            dispatch_async(renderQueue, ^{
                // capture
                dispatch_semaphore_signal(semaphore);
            });
        }
    }

What does this do?

  • Create a semaphore for one (1) resource.
  • Create a serial queue.
  • DISPATCH_TIME_NOW means there is no timeout, so dispatch_semaphore_wait returns non-zero immediately on a red light; thus, the body of the if is not executed.
  • If the light is green, run the block asynchronously, then set the green light again (a filled-in usage sketch follows below).
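
As a hedged illustration that is not part of the original answer, the // capture placeholder might be filled in as below; renderScreenshot and magnifierImageView are hypothetical names, and rendering off the main thread still has the caveats described in the question:

    // Hypothetical usage of the throttle above. `renderScreenshot` stands in for
    // the question's screenshot method and `magnifierImageView` for the magnifier's
    // image view; both names are assumptions, not from the answer.
    - (void)capture {
        if (dispatch_semaphore_wait(semaphore, DISPATCH_TIME_NOW) == 0) {
            dispatch_async(renderQueue, ^{
                // Render on the serial queue, then hand the result to the main queue.
                UIImage *screenshot = [self renderScreenshot];
                dispatch_async(dispatch_get_main_queue(), ^{
                    self.magnifierImageView.image = screenshot;
                    // Release the "resource" only after the image is on screen.
                    dispatch_semaphore_signal(semaphore);
                });
            });
        }
    }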
