Metal - Resize video buffer before passing to custom Kernel filter


Question

Within our iOS app, we are using custom filters built with Metal (CIKernel/CIColorKernel wrappers).

Let's assume we have a 4K video and a custom video composition with a 1080p output size that applies an advanced filter to the video buffers.
Obviously, we don't need to filter the video at its original size; doing so would probably get the app terminated with a memory warning (true story).

Here is the video filtering pipeline:

获取4K缓冲区(如CIImage )->
CIImage ->
上应用过滤器 过滤器将CIKernel Metal过滤器功能应用于CIImage ->
将过滤后的CIImage返回到合成

Getting the buffer in 4K (as CIImage) -->
Apply filter on the CIImage -->
the filter applies the CIKernel Metal filter function on the CIImage-->
Return the filtered CIImage to the composition

The only two places I can think of applying the resize are before we send the image into the filter process, or within the Metal kernel function.

import Foundation
import CoreImage

public class VHSFilter: CIFilter {

    // Assumed declarations (not shown in the original snippet): the source frame
    // and the CIKernel loaded from the app's compiled Metal library. The library
    // and function names below are placeholders.
    @objc dynamic var inputImage: CIImage?

    private let kernel: CIKernel = {
        let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIKernel(functionName: "vhsKernel", fromMetalLibraryData: data)
    }()

    public override var outputImage: CIImage? {
        // inputImage size is 4K
        guard let inputImage = self.inputImage else { return nil }

        // Manipulate the image here

        // The kernel may sample anywhere in the source, so report the full extent as the ROI.
        let roiCallback: CIKernelROICallback = { _, _ in
            return inputImage.extent
        }

        // Or inside the Kernel Metal function
        let outputImage = self.kernel.apply(extent: inputImage.extent,
                                            roiCallback: roiCallback,
                                            arguments: [inputImage])

        return outputImage
    }
}

I'm sure I'm not the first one to encounter this issue.

What do you do when the incoming video buffer is too large (memory-wise) to filter and needs to be resized on the fly, without re-encoding the video beforehand?

Answer

As warrenm says, you could use a CILanczosScaleTransform filter to downsample the video frames before processing. However, this would still cause AVFoundation to allocate buffers in full resolution.
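
For illustration, here is a minimal sketch of that downsampling step, assuming a 1080p target height and the VHSFilter from the question; the function and parameter names are placeholders, not an established API:

import CoreGraphics
import CoreImage

// Sketch: scale the 4K source down to the target height before filtering.
// The 1080 target and the VHSFilter wiring are illustrative assumptions.
func filteredFrame(from sourceImage: CIImage, targetHeight: CGFloat = 1080) -> CIImage? {
    let scale = targetHeight / sourceImage.extent.height

    let downscale = CIFilter(name: "CILanczosScaleTransform")!
    downscale.setValue(sourceImage, forKey: kCIInputImageKey)
    downscale.setValue(scale, forKey: kCIInputScaleKey)
    downscale.setValue(1.0, forKey: kCIInputAspectRatioKey)
    guard let downsampled = downscale.outputImage else { return nil }

    // Run the custom Metal-backed filter on the much smaller image.
    let vhsFilter = VHSFilter()
    vhsFilter.setValue(downsampled, forKey: kCIInputImageKey)
    return vhsFilter.outputImage
}

As noted above, though, the full-resolution source buffers are still allocated before this step runs.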

I assume you use an AVMutableVideoComposition to do the filtering? In this case you can just set the renderSize of the composition to the target size. From the docs:

The size at which the video composition should render.

This will tell AVFoundation to resample the frames (efficiently and quickly) before handing them to your filter pipeline.
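
As a rough sketch of that setup (the handler-based composition, the videoURL parameter, and the 1920x1080 render size are assumptions for illustration, not the only way to wire it up):

import AVFoundation
import CoreImage

// Sketch: build a filtering composition and cap its render size at 1080p.
func makeFilteredComposition(for videoURL: URL) -> AVMutableVideoComposition {
    let asset = AVURLAsset(url: videoURL)

    let composition = AVMutableVideoComposition(asset: asset) { request in
        // request.sourceImage arrives already resampled to renderSize, not at 4K.
        let filter = VHSFilter()
        filter.setValue(request.sourceImage, forKey: kCIInputImageKey)
        request.finish(with: filter.outputImage ?? request.sourceImage, context: nil)
    }

    // The target output size; frames are resampled to this before the handler runs.
    composition.renderSize = CGSize(width: 1920, height: 1080)
    return composition
}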

