Motion Blur effect on UIImage on iOS


Problem Description

Is there a way to get a motion blur effect on a UIImage? I tried GPUImage, Filtrr, and iOS Core Image, but all of these only offer regular blurs - no motion blur.

I also tried UIImage-DSP, but its motion blur is barely visible. I need something much stronger.

Recommended Answer

As I commented on the repository, I just added motion and zoom blurs to GPUImage. These are the GPUImageMotionBlurFilter and GPUImageZoomBlurFilter classes. (An example image of the zoom blur followed here in the original answer.)
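To put those classes in context, here is a minimal usage sketch from Swift, assuming the Objective-C GPUImage framework is bridged into the project. The property names (blurAngle, blurSize, blurCenter) come from the GPUImage filter headers, sourceImage is a placeholder UIImage, and the image(byFilteringImage:) convenience (imageByFilteringImage: in Objective-C) may be spelled slightly differently depending on your GPUImage and Swift versions:

 import GPUImage
 import UIKit

 // Placeholder input image for the sketch; substitute your own UIImage
 let sourceImage = UIImage(named: "example.jpg")!

 // Directional motion blur: streak the image along a 45-degree direction
 let motionBlur = GPUImageMotionBlurFilter()
 motionBlur.blurAngle = 45.0   // direction of the streak, in degrees
 motionBlur.blurSize = 2.5     // scales the per-sample step between taps
 let motionBlurred = motionBlur.image(byFilteringImage: sourceImage)

 // Zoom blur: streak radially around a center given in normalized (0..1) coordinates
 let zoomBlur = GPUImageZoomBlurFilter()
 zoomBlur.blurCenter = CGPoint(x: 0.5, y: 0.5)
 zoomBlur.blurSize = 1.5
 let zoomBlurred = zoomBlur.image(byFilteringImage: sourceImage)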

For the motion blur, I do a 9-hit Gaussian blur over a single direction. This is achieved using the following vertex and fragment shaders:

Vertex:

 attribute vec4 position;
 attribute vec4 inputTextureCoordinate;

 uniform highp vec2 directionalTexelStep;

 varying vec2 textureCoordinate;
 varying vec2 oneStepBackTextureCoordinate;
 varying vec2 twoStepsBackTextureCoordinate;
 varying vec2 threeStepsBackTextureCoordinate;
 varying vec2 fourStepsBackTextureCoordinate;
 varying vec2 oneStepForwardTextureCoordinate;
 varying vec2 twoStepsForwardTextureCoordinate;
 varying vec2 threeStepsForwardTextureCoordinate;
 varying vec2 fourStepsForwardTextureCoordinate;

 void main()
 {
     gl_Position = position;

     // Precompute the nine sampling coordinates (the center texel plus four
     // steps back and four steps forward along the blur direction) so the
     // fragment shader can use them without dependent texture reads
     textureCoordinate = inputTextureCoordinate.xy;
     oneStepBackTextureCoordinate = inputTextureCoordinate.xy - directionalTexelStep;
     twoStepsBackTextureCoordinate = inputTextureCoordinate.xy - 2.0 * directionalTexelStep;
     threeStepsBackTextureCoordinate = inputTextureCoordinate.xy - 3.0 * directionalTexelStep;
     fourStepsBackTextureCoordinate = inputTextureCoordinate.xy - 4.0 * directionalTexelStep;
     oneStepForwardTextureCoordinate = inputTextureCoordinate.xy + directionalTexelStep;
     twoStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 2.0 * directionalTexelStep;
     threeStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 3.0 * directionalTexelStep;
     fourStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 4.0 * directionalTexelStep;
 }

Fragment:

 precision highp float;

 uniform sampler2D inputImageTexture;

 varying vec2 textureCoordinate;
 varying vec2 oneStepBackTextureCoordinate;
 varying vec2 twoStepsBackTextureCoordinate;
 varying vec2 threeStepsBackTextureCoordinate;
 varying vec2 fourStepsBackTextureCoordinate;
 varying vec2 oneStepForwardTextureCoordinate;
 varying vec2 twoStepsForwardTextureCoordinate;
 varying vec2 threeStepsForwardTextureCoordinate;
 varying vec2 fourStepsForwardTextureCoordinate;

 void main()
 {
     // Weighted sum of nine taps along the blur direction; the Gaussian
     // weights (0.18 + 2 * (0.15 + 0.12 + 0.09 + 0.05)) sum to 1.0
     lowp vec4 fragmentColor = texture2D(inputImageTexture, textureCoordinate) * 0.18;
     fragmentColor += texture2D(inputImageTexture, oneStepBackTextureCoordinate) * 0.15;
     fragmentColor += texture2D(inputImageTexture, twoStepsBackTextureCoordinate) *  0.12;
     fragmentColor += texture2D(inputImageTexture, threeStepsBackTextureCoordinate) * 0.09;
     fragmentColor += texture2D(inputImageTexture, fourStepsBackTextureCoordinate) * 0.05;
     fragmentColor += texture2D(inputImageTexture, oneStepForwardTextureCoordinate) * 0.15;
     fragmentColor += texture2D(inputImageTexture, twoStepsForwardTextureCoordinate) *  0.12;
     fragmentColor += texture2D(inputImageTexture, threeStepsForwardTextureCoordinate) * 0.09;
     fragmentColor += texture2D(inputImageTexture, fourStepsForwardTextureCoordinate) * 0.05;

     gl_FragColor = fragmentColor;
 }

As an optimization, I calculate the step size between texture samples outside of the fragment shader by using the angle, blur size, and the image dimensions. This is then passed into the vertex shader, so that I can calculate the texture sampling positions there and interpolate across them in the fragment shader. This avoids dependent texture reads on iOS devices.
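As a rough sketch of that optimization (an illustration of the idea, not the actual GPUImage source), the per-sample step can be derived on the CPU in normalized texture coordinates and then handed to the vertex shader as the directionalTexelStep uniform:

 import Foundation
 import CoreGraphics

 // Sketch: one sampling step along the blur direction, expressed in
 // normalized (0..1) texture coordinates, so each tap in the shader is
 // offset by a whole multiple of this step
 func directionalTexelStep(blurAngleDegrees: CGFloat,
                           blurSize: CGFloat,
                           imageSize: CGSize) -> CGPoint {
     let radians = Double(blurAngleDegrees) * .pi / 180.0
     return CGPoint(x: blurSize * CGFloat(cos(radians)) / imageSize.width,
                    y: blurSize * CGFloat(sin(radians)) / imageSize.height)
 }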

The zoom blur is much slower, because I still do these calculations in the fragment shader. No doubt there's a way I can optimize this, but I haven't tried yet. The zoom blur uses a 9-hit Gaussian blur where the direction and per-sample offset distance vary as a function of the placement of the pixel vs. the center of the blur.

It uses the following fragment shader (and a standard passthrough vertex shader):

 varying highp vec2 textureCoordinate;

 uniform sampler2D inputImageTexture;

 uniform highp vec2 blurCenter;
 uniform highp float blurSize;

 void main()
 {
     // TODO: Do a more intelligent scaling based on resolution here
     // The offset scales with each pixel's distance from blurCenter, so the
     // samples streak radially toward the center of the blur
     highp vec2 samplingOffset = 1.0/100.0 * (blurCenter - textureCoordinate) * blurSize;

     lowp vec4 fragmentColor = texture2D(inputImageTexture, textureCoordinate) * 0.18;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate + samplingOffset) * 0.15;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate + (2.0 * samplingOffset)) *  0.12;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate + (3.0 * samplingOffset)) * 0.09;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate + (4.0 * samplingOffset)) * 0.05;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate - samplingOffset) * 0.15;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate - (2.0 * samplingOffset)) *  0.12;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate - (3.0 * samplingOffset)) * 0.09;
     fragmentColor += texture2D(inputImageTexture, textureCoordinate - (4.0 * samplingOffset)) * 0.05;

     gl_FragColor = fragmentColor;
 }

Note that both of these blurs are hardcoded at 9 samples for performance reasons. This means that at larger blur sizes, you'll start to see artifacts from the limited samples here. For larger blurs, you'll need to run these filters multiple times or extend them to support more Gaussian samples. However, more samples will lead to much slower rendering times because of the limited texture sampling bandwidth on iOS devices.
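One simple way to get a stronger blur without touching the shaders is to run the same filter more than once, feeding each result back in. A sketch, again using a placeholder sourceImage and the bridged image(byFilteringImage:) call:

 import GPUImage
 import UIKit

 // Two passes of the same 9-sample motion blur approximate a wider kernel
 let sourceImage = UIImage(named: "example.jpg")!   // placeholder input
 let motionBlur = GPUImageMotionBlurFilter()
 motionBlur.blurAngle = 45.0
 motionBlur.blurSize = 2.5

 let onePass = motionBlur.image(byFilteringImage: sourceImage)
 let twoPasses = motionBlur.image(byFilteringImage: onePass)

Chaining two filter instances with addTarget: inside a GPUImage pipeline would avoid the intermediate UIImage round trip, at the cost of slightly more setup code.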
