How to render view into image faster?


Problem description

I'm making a magnifier app that lets the user touch the screen and move a finger around; a magnifier follows the finger path. I implement it by taking a screenshot and assigning the image to the magnifier image view, as follows:

    // `frame` is the area of self to magnify; `scaleFactor` is the zoom level.
    CGSize imageSize = frame.size;
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0);
    CGContextRef c = UIGraphicsGetCurrentContext();
    // Zoom, then shift so that `frame` maps onto the image's origin.
    CGContextScaleCTM(c, scaleFactor, scaleFactor);
    CGContextConcatCTM(c, CGAffineTransformMakeTranslation(-frame.origin.x, -frame.origin.y));
    [self.layer renderInContext:c];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenshot;

The problem is that [self.layer renderInContext:c] is slow, so the movement does not feel smooth while the user drags a finger. I tried running renderInContext: on another thread, but that makes the magnifier look wrong, because the image inside it lags behind the finger.

Is there a better way to render a view into an image? Does renderInContext: use the GPU?

Recommended answer

No. In iOS 6, renderInContext: is the only way. It is slow, and it uses the CPU.

[view.layer renderInContext:UIGraphicsGetCurrentContext()];

  • Requires iOS 2.0. It runs on the CPU.
  • It doesn't capture views with non-affine transforms, OpenGL, or video content.
  • If an animation is running, you have the option of capturing:
    • view.layer, which captures the final frame of the animation.
    • view.layer.presentationLayer, which captures the current frame of the animation.
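
For example, a minimal sketch (assuming an animation is in flight and a bitmap image context is already current) that captures the frame currently on screen rather than the animation's end state:

    // Sketch: render the presentation layer, which reflects what is on
    // screen right now, instead of the model layer's final values.
    [view.layer.presentationLayer renderInContext:UIGraphicsGetCurrentContext()];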

UIView *snapshot = [view snapshotViewAfterScreenUpdates:YES];

  • Requires iOS 7.
  • It is the fastest method.
  • The view contents are immutable. Not good if you want to apply an effect.
  • It captures all content types (UIKit, OpenGL, or video).
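
If converting to a UIImage is not required, the snapshot view can be used directly as the lens content. A hedged sketch, where magnifierView is a hypothetical container view not taken from the question:

    // Sketch only: magnifierView is a hypothetical container for the lens.
    UIView *snapshot = [view snapshotViewAfterScreenUpdates:NO];
    snapshot.frame = magnifierView.bounds;
    [magnifierView addSubview:snapshot];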

[view resizableSnapshotViewFromRect:rect afterScreenUpdates:YES withCapInsets:edgeInsets]

  • Requires iOS 7.
  • Same as snapshotViewAfterScreenUpdates: but with resizable insets. The contents are also immutable.
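
For instance, a minimal sketch that snapshots only the region under the finger; magnifiedRect is an assumed CGRect in view's coordinate space, and zero cap insets mean nothing gets stretched:

    // Sketch only: magnifiedRect is an assumed rect in view's coordinates.
    UIView *patch = [view resizableSnapshotViewFromRect:magnifiedRect
                                     afterScreenUpdates:NO
                                          withCapInsets:UIEdgeInsetsZero];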

[view drawViewHierarchyInRect:rect afterScreenUpdates:YES];

  • Requires iOS 7.
  • It draws in the current context.
  • According to Session 226, it is faster than renderInContext:.

See WWDC 2013 Session 226, Implementing Engaging UI on iOS, about the new snapshotting APIs.
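
As a rough sketch under the question's assumptions (the same frame and scaleFactor, self being the view to capture), the screenshot routine could swap renderInContext: for drawViewHierarchyInRect: on iOS 7; afterScreenUpdates:NO avoids blocking on a pending commit, which matters while the finger is moving:

    CGSize imageSize = frame.size;
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0);
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextScaleCTM(c, scaleFactor, scaleFactor);
    CGContextTranslateCTM(c, -frame.origin.x, -frame.origin.y);
    // Draw the whole hierarchy; the CTM above crops and zooms it to `frame`.
    [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:NO];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenshot;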

If it is any help, here is some code to discard capture attempts while one is still running.

This throttles block execution to one at a time and discards the others. From this SO answer.

    // Created once, e.g. as instance variables set up in -init:
    dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);
    dispatch_queue_t renderQueue = dispatch_queue_create("com.throttling.queue", NULL);

    - (void) capture {
        // Try to take the single token without waiting (timeout is "now").
        if (dispatch_semaphore_wait(semaphore, DISPATCH_TIME_NOW) == 0) {
            dispatch_async(renderQueue, ^{
                // capture
                // Hand the token back so the next attempt can run.
                dispatch_semaphore_signal(semaphore);
            });
        }
    }
                    

What is this doing?

  • Create a semaphore for one (1) resource.
  • Create a serial queue.
  • DISPATCH_TIME_NOW means the timeout is zero, so dispatch_semaphore_wait returns non-zero immediately on a "red light". Thus the body of the if is not executed.
  • On a "green light", run the block asynchronously, and set the light green again once it finishes.
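
A hypothetical call site: invoking the throttled capture from touch handling, so that when the finger moves faster than renders complete, the extra attempts are simply dropped:

    // Sketch only: drive the throttled capture from touch movement.
    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        [super touchesMoved:touches withEvent:event];
        [self capture];
    }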

