GPUImage apply filter to a buffer of images

Problem description

In GPUImage there are some filters that work only on a stream of frames from a camera, for instance the low pass filter or the high pass filter, but there are plenty of them.
I'm trying to create a buffer of UIImages with a fixed time rate, so that those filters can also be applied between just 2 images, producing a single filtered image for each pair. Something like this:
FirstImage + SecondImage --> FirstFilteredImage
SecondImage + ThirdImage --> SecondFilteredImage
I've found that the filters that work with frames use a GPUImageBuffer, which is a subclass of GPUImageFilter (most probably just to inherit some methods and protocols) that loads a passthrough fragment shader. From what I understood, this is a buffer that keeps the incoming frames already "texturized"; the textures are generated by binding the texture in the current context.
I've also found a -conserveMemoryForNextFrame method that sounds good for what I want to achieve, but I couldn't figure out how it works.
Is it possible to do this? In which method are images converted into textures?
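
As far as I can tell from the GPUImage sources (an editorial note, not part of the original question), the conversion of a UIImage into an OpenGL ES texture happens in GPUImagePicture's initializers (-initWithImage: and friends), and -processImage then pushes that texture to the attached targets. A minimal sketch of the still-image path, where someUIImage and the sepia filter are just placeholder choices:

    // Editorial sketch of GPUImage's still-image path (not from the original post).
    // GPUImagePicture uploads the UIImage bitmap into a texture when initialized;
    // -processImage hands that texture to the targets. someUIImage is a placeholder.
    GPUImagePicture *stillSource = [[GPUImagePicture alloc] initWithImage:someUIImage];
    GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];

    [stillSource addTarget:sepiaFilter];
    [sepiaFilter useNextFrameForImageCapture];
    [stillSource processImage];

    UIImage *filteredImage = [sepiaFilter imageFromCurrentFramebuffer];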

Recommended answer

I made something close to what I'd like to achieve, but first I must say that I probably misunderstood some aspects of how the current filters work.
I thought that some filters could take the time variable into account in their shaders; that's because when I saw the low pass and high pass filters I instantly thought about time. The reality seems to be different: they do take time into account, but it doesn't seem to affect the filtering operation.
Since I'm developing a time-lapse application myself, which saves single images and reassembles them into a different timeline to make a video without audio, I imagined that filters that are a function of time could be fun to apply to the subsequent frames. This is why I posted this question.
Now the answer: to apply a two-input filter to still images, you must do it as in this snippet:

    // Feed the first still image into the two-input filter's first texture slot
    [sourcePicture1 addTarget:twoinputFilter];
    [sourcePicture1 processImage];
    // Feed the second still image into the second texture slot
    [sourcePicture2 addTarget:twoinputFilter];
    [sourcePicture2 processImage];
    // Without this call the framebuffer gets reused and the capture returns nil
    [twoinputFilter useNextFrameForImageCapture];
    UIImage * image = [twoinputFilter imageFromCurrentFramebuffer];
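
The snippet assumes that sourcePicture1 and sourcePicture2 are GPUImagePicture instances wrapping the two stills, and that twoinputFilter is some GPUImageTwoInputFilter subclass. A possible setup (GPUImageDissolveBlendFilter is used here purely as an example; firstImage and secondImage are placeholder UIImages):

    GPUImagePicture *sourcePicture1 = [[GPUImagePicture alloc] initWithImage:firstImage];
    GPUImagePicture *sourcePicture2 = [[GPUImagePicture alloc] initWithImage:secondImage];
    // Any GPUImageTwoInputFilter subclass will do; the dissolve blend is just an example.
    GPUImageDissolveBlendFilter *twoinputFilter = [[GPUImageDissolveBlendFilter alloc] init];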

If you forget to call -useNextFrameForImageCapture, the returned image will be nil because of framebuffer reuse.
Not entirely happy with this, I thought that maybe in the future the good Brad will make something like it built in, so I created a GPUImagePicture subclass that, instead of passing kCMTimeInvalid to the appropriate methods, passes a new ivar called frameTime that contains the frame's CMTime.

#import "GPUImagePicture.h"

// GPUImagePicture subclass that reports a real frame time to its targets
// instead of kCMTimeInvalid, so that time-based filters receive a timeline.
@interface GPUImageFrame : GPUImagePicture
@property (assign, nonatomic) CMTime frameTime;
@end

@implementation GPUImageFrame

- (BOOL)processImageWithCompletionHandler:(void (^)(void))completion;
{
    hasProcessedImage = YES;

    // Non-blocking check: bail out if a previous frame is still being processed.
    if (dispatch_semaphore_wait(imageUpdateSemaphore, DISPATCH_TIME_NOW) != 0)
    {
        return NO;
    }

    runAsynchronouslyOnVideoProcessingQueue(^{
        for (id<GPUImageInput> currentTarget in targets)
        {
            NSInteger indexOfObject = [targets indexOfObject:currentTarget];
            NSInteger textureIndexOfTarget = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];

            [currentTarget setCurrentlyReceivingMonochromeInput:NO];
            [currentTarget setInputSize:pixelSizeOfImage atIndex:textureIndexOfTarget];
            [currentTarget setInputFramebuffer:outputFramebuffer atIndex:textureIndexOfTarget];
            // Pass the stored frame time instead of kCMTimeInvalid.
            [currentTarget newFrameReadyAtTime:_frameTime atIndex:textureIndexOfTarget];
        }

        dispatch_semaphore_signal(imageUpdateSemaphore);

        if (completion != nil) {
            completion();
        }
    });

    return YES;
}

- (void)addTarget:(id<GPUImageInput>)newTarget atTextureLocation:(NSInteger)textureLocation;
{
    [super addTarget:newTarget atTextureLocation:textureLocation];

    // If the image was already processed, immediately hand the new target a
    // frame stamped with the stored frame time.
    if (hasProcessedImage)
    {
        [newTarget setInputSize:pixelSizeOfImage atIndex:textureLocation];
        [newTarget newFrameReadyAtTime:_frameTime atIndex:textureLocation];
    }
}

@end
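
A hypothetical usage sketch of this subclass (editorial, not part of the original answer): each buffered still is stamped with an increasing CMTime before being pushed into a time-aware filter such as the stock GPUImageLowPassFilter. The 30 fps timescale and the firstImage/secondImage variables are arbitrary assumptions:

    // Editorial sketch: feed buffered stills with increasing frame times into a
    // time-aware filter. The 30 fps timeline and the UIImage variables are assumptions.
    GPUImageLowPassFilter *lowPass = [[GPUImageLowPassFilter alloc] init];

    GPUImageFrame *frame1 = [[GPUImageFrame alloc] initWithImage:firstImage];
    frame1.frameTime = CMTimeMake(0, 30);
    GPUImageFrame *frame2 = [[GPUImageFrame alloc] initWithImage:secondImage];
    frame2.frameTime = CMTimeMake(1, 30);

    [frame1 addTarget:lowPass];
    [frame1 processImage];

    [frame2 addTarget:lowPass];
    [lowPass useNextFrameForImageCapture];
    [frame2 processImage];

    UIImage *filtered = [lowPass imageFromCurrentFramebuffer];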
