Using GPUImage to Recreate iOS 7 Glass Effect


Question


I am trying to use the iOS 7 style glass effect in my app by applying image effects to a screenshot of an MKMapView. This UIImage category, provided by Apple, is what I am using as a baseline. This method desaturates the source image, applies a tint color, and blurs heavily using the input values:

[image applyBlurWithRadius:10.0
                 tintColor:[UIColor colorWithRed:229/255.0f green:246/255.0f blue:255/255.0f alpha:0.33] 
     saturationDeltaFactor:0.66
                 maskImage:nil];

This produces the effect I am looking for, but takes far too long: between 0.3 and 0.5 seconds to render on an iPhone 4.

I would like to use the excellent GPUImage instead, as my preliminary attempts have been about 5-10 times faster, but I just can't seem to get it right:

GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:image];

GPUImageSaturationFilter *saturationFilter = [[GPUImageSaturationFilter alloc] init];
saturationFilter.saturation = 0.33; // 1.0 - 0.66;
[stillImageSource addTarget:saturationFilter];

GPUImageMonochromeFilter *monochromeFilter = [[GPUImageMonochromeFilter alloc] init];
[monochromeFilter setColor:(GPUVector4){229/255.0f, 246/255.0f, 1.0f, 0.33f}];
[monochromeFilter setIntensity:0.2];
[saturationFilter addTarget:monochromeFilter];

GPUImageFastBlurFilter *blurFilter = [[GPUImageFastBlurFilter alloc] init];
blurFilter.blurSize = 2;
blurFilter.blurPasses = 3;
[monochromeFilter addTarget:blurFilter];

[saturationFilter prepareForImageCapture];
[monochromeFilter prepareForImageCapture];

[stillImageSource processImage];
image = [blurFilter imageFromCurrentlyProcessedOutput];

This produces an image which is close, but not quite there.

The blur doesn't seem to be deep enough, but when I try to increase the blurSize above that, it becomes grid-like, almost like a kaleidoscope. You can actually see the grid by zooming in on the second image. The tint color I am trying to mimic seems to just wash out the image instead of overlaying and blending, which I think the Apple sample is doing.

I have tried to set up the filters according to comments made by @BradLarson in another SO question. Am I using the wrong GPUImage filters to reproduce this effect, or am I just setting them up wrong?

Answer

OK, I've been working on something here for a little while, and I finally have it functional. I just rolled a number of changes to GPUImage's blur filters into the framework, and as a result I believe I have a reasonable replica of Apple's blur effect that they use for things like the control center view.

Previously, the blurs that I had in the framework used a single precalculated radius, and the only way to affect their intensity was to tweak the spacing at which they sampled pixels from the input image. With a limited number of samples per pixel, changing the multiple for the spacing between sampled pixels much above 1.5 started introducing serious blocking artifacts as pixels were skipped.
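The skipped-pixel problem can be sketched numerically. A minimal Python model (the tap count and rounding-to-texel-centers behavior are my assumptions for illustration) shows which texels a fixed set of taps actually touches as the spacing multiple grows:

```python
def touched_texels(spacing, taps_per_side):
    """Texels (rounded sample positions) covered by taps at 0, +/-spacing, +/-2*spacing, ..."""
    return sorted({round(i * spacing) for i in range(-taps_per_side, taps_per_side + 1)})

def has_gaps(texels):
    """True if some texel inside the sampled span is never read."""
    return any(b - a > 1 for a, b in zip(texels, texels[1:]))

print(has_gaps(touched_texels(1.0, 4)))  # contiguous coverage, no blocking
print(has_gaps(touched_texels(2.0, 4)))  # every other texel is skipped
```

Once whole texels fall between samples, their content simply vanishes from the blur, which is what shows up visually as blocking.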

The new Gaussian blur implementation that I've built combines the performance benefits of precalculated Gaussian weights with the ability to use an arbitrary radius (sigma) for the Gaussian blur. It does this by generating shaders on the fly as they are needed for various radii. It also reduces the number of texture samples required for a given blur radius by using hardware interpolation to read two texels at a time for each sample point.
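The two-texels-per-sample trick is the standard linear-sampling optimization: adjacent Gaussian taps are merged into one read placed at a fractional offset between the two texels, so hardware bilinear filtering blends them in exactly the Gaussian ratio. A rough Python sketch of the weight/offset math (this illustrates the general technique, not GPUImage's actual shader-generation code):

```python
import math

def gaussian_weights(sigma, radius):
    """Normalized Gaussian weights for taps 0..radius (tap 0 is the center pixel)."""
    w = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(radius + 1)]
    total = w[0] + 2.0 * sum(w[1:])  # off-center taps are mirrored on both sides
    return [x / total for x in w]

def linear_sample_taps(weights):
    """Merge adjacent off-center taps into (offset, weight) pairs.
    Reading at the fractional offset makes hardware interpolation
    return the weighted blend of the two underlying texels."""
    taps = [(0.0, weights[0])]
    for i in range(1, len(weights) - 1, 2):
        w = weights[i] + weights[i + 1]
        offset = (i * weights[i] + (i + 1) * weights[i + 1]) / w
        taps.append((offset, w))
    if len(weights) % 2 == 0:  # an odd number of off-center taps leaves one unpaired
        taps.append((float(len(weights) - 1), weights[-1]))
    return taps

# 4 off-center reads per side collapse to 2 interpolated reads per side
taps = linear_sample_taps(gaussian_weights(sigma=3.0, radius=4))
```

Generating a shader string per requested sigma from these (offset, weight) pairs is how a blur can accept an arbitrary radius while still using precalculated weights.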

The new GPUImageiOSBlurFilter combines this tuned arbitrary-radius Gaussian blur filter with a color-correction filter that appears to replicate the adjustment Apple performs to the colors after they've been blurred. I added the below comparison to my answer here, but it shows Apple's built-in blurring from the control center view on the left, and my new GPUImage blur filter on the right:

As a way of improving performance (Apple's blur appears to occur with a sigma of 48, which requires quite a large area to be sampled for each pixel), I use a 4X downsampling before the Gaussian blur, then a 4X upsampling afterward. This reduces the number of pixels that need to be blurred by 16X, and also reduces the blur sigma from 48 to 12. An iPhone 4S can blur the entire screen in roughly 30 ms using this filter.
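The arithmetic behind that optimization can be sketched as follows. The factor of 4 and the sigma of 48 come from the paragraph above; the screen size and the 3-sigma tap-count cost model are my assumptions:

```python
import math

def blur_cost(width, height, sigma, downsample=1):
    """Relative cost model: pixels processed x taps per pixel x two separable passes."""
    w, h = width // downsample, height // downsample
    eff_sigma = sigma / downsample           # blurring the smaller image needs a smaller sigma
    taps = 2 * math.ceil(3 * eff_sigma) + 1  # kernel covers roughly +/- 3 sigma
    return w * h * taps * 2

full = blur_cost(640, 960, 48)                # direct blur at sigma 48
fast = blur_cost(640, 960, 48, downsample=4)  # 4X downsample, sigma drops to 12
# 16X fewer pixels to blur, and each pixel needs roughly 4X fewer samples
```

The two savings multiply, which is why the downsampled path is so much faster than the raw cost of a sigma-48 blur would suggest.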

Getting the blur right is one thing. Apple still does not provide a fast way of getting the image content behind your views, so that most likely will be your bottleneck here for rapidly changing content.
