Adding filters to video with AVFoundation (OSX) - how do I write the resulting image back to AVWriter?

Problem description

Setting the scene

I am working on a video processing app that runs from the command line to read in, process and then export video. I'm working with 4 tracks.

  1. Lots of clips that I append into a single track to make one video. Let's call this the ugcVideoComposition.
  2. Clips with Alpha, which are positioned on a second track and, using layer instructions, are composited on export to play back over the top of the ugcVideoComposition.
  3. A music audio track.
  4. An audio track for the ugcVideoComposition containing the audio from the clips appended into the single track.

I have this all working: I can composite it and export it correctly using AVAssetExportSession.
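
For reference, the export step mentioned above follows the usual AVAssetExportSession pattern - roughly the sketch below, where composition, videoComposition, audioMix and outputURL are placeholders rather than the exact objects in my project:

    AVAssetExportSession *exportSession =
        [AVAssetExportSession exportSessionWithAsset:composition
                                          presetName:AVAssetExportPresetHighestQuality];
    exportSession.videoComposition = videoComposition; // layer instructions for the alpha overlay track
    exportSession.audioMix = audioMix;                 // music track + ugc audio track
    exportSession.outputFileType = AVFileTypeQuickTimeMovie;
    exportSession.outputURL = outputURL;
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        if (exportSession.status != AVAssetExportSessionStatusCompleted) {
            NSLog(@"Export failed: %@", exportSession.error);
        }
    }];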

The problem

What I now want to do is apply filters and gradients to the ugcVideoComposition.

My research so far suggests that this is done by using an AVAssetReader and an AVAssetWriter, extracting a CIImage, manipulating it with filters and then writing it back out.

I haven't yet got all of the functionality above working, but I have managed to read the ugcVideoComposition in and write it back out to disk using the AssetReader and AssetWriter.
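
The loop below assumes the reader and writer already exist. A rough setup sketch (my reconstruction - ugcAsset, outputURL, the settings and the use of a video composition output are assumptions, not the actual project code) looks something like this:

    NSError *error = nil;

    // Reader: pull frames out of the composed asset, rendered through the video composition.
    AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:ugcAsset error:&error];
    NSArray *videoTracks = [ugcAsset tracksWithMediaType:AVMediaTypeVideo];
    AVAssetReaderVideoCompositionOutput *videoCompositionOutput =
        [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks
                                                                                videoSettings:nil];
    videoCompositionOutput.videoComposition = videoComposition; // your AVVideoComposition, if any
    [assetReader addOutput:videoCompositionOutput];

    // Writer: re-encode the (eventually filtered) frames to disk.
    AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outputURL
                                                          fileType:AVFileTypeQuickTimeMovie
                                                             error:&error];
    NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                     AVVideoWidthKey  : @640,
                                     AVVideoHeightKey : @360 };
    AVAssetWriterInput *assetWriterVideoInput =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
    [assetWriter addInput:assetWriterVideoInput];

    [assetReader startReading];
    [assetWriter startWriting];
    [assetWriter startSessionAtSourceTime:kCMTimeZero];

With that in place, the read/write loop itself is: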

    BOOL done = NO;
    while (!done)
    {
        while ([assetWriterVideoInput isReadyForMoreMediaData] && !done)
        {
            CMSampleBufferRef sampleBuffer = [videoCompositionOutput copyNextSampleBuffer];
            if (sampleBuffer)
            {
                // Let's try create an image....
                CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
                CIImage *inputImage = [CIImage imageWithCVImageBuffer:imageBuffer];

                // < Apply filters and transformations to the CIImage here

                // < HOW TO GET THE TRANSFORMED IMAGE BACK INTO SAMPLE BUFFER??? >

                // Write things back out.
                [assetWriterVideoInput appendSampleBuffer:sampleBuffer];

                CFRelease(sampleBuffer);
                sampleBuffer = NULL;
            }
            else
            {
                // Find out why we couldn't get another sample buffer....
                if (assetReader.status == AVAssetReaderStatusFailed)
                {
                    NSError *failureError = assetReader.error;
                    // Do something with this error.
                }
                else
                {
                    // Some kind of success....
                    done = YES;
                    [assetWriter finishWriting];

                }
            }
         }
      }

As you can see, I can even get the CIImage from the CMSampleBuffer, and I'm confident I can work out how to manipulate the image and apply any effects etc. that I need. What I don't know how to do is put the resulting manipulated image BACK into the SampleBuffer so I can write it out again.

The question

Given a CIImage, how can I put that into a sampleBuffer to append it with the assetWriter?

Any help appreciated - the AVFoundation documentation is terrible: it either misses crucial points (like how to put an image back after you've extracted it) or focuses on rendering images to the iPhone screen, which is not what I want to do.

Much appreciated and thanks!

Solution

I eventually found a solution by digging through a lot of half-complete samples and the poor AVFoundation documentation from Apple.

The biggest confusion is that, while AVFoundation is "reasonably" consistent between iOS and OSX at a high level, the lower-level pieces behave differently, with different methods and different techniques. This solution is for OSX.

Setting up your AssetWriter

The first thing is to make sure that when you set up the asset writer, you add an adaptor so the writer input can be fed from a CVPixelBuffer. This buffer will contain the modified frames.

    // Create the asset writer input and add it to the asset writer.
    AVAssetWriterInput *assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[[videoTracks objectAtIndex:0] mediaType] outputSettings:videoSettings];
    // Now create an adaptor that writes pixels too!
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                   assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput
                                                 sourcePixelBufferAttributes:nil];
    assetWriterVideoInput.expectsMediaDataInRealTime = NO;
    [assetWriter addInput:assetWriterVideoInput];
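
One optional tweak (my assumption, not something this answer relies on): if you pass a sourcePixelBufferAttributes dictionary instead of nil, the adaptor sets up a pixel buffer pool that you can pull buffers from later - which also enables the pool-based alternative sketched at the end of this answer:

    NSDictionary *pixelBufferAttributes = @{
        (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32ARGB),
        (__bridge NSString *)kCVPixelBufferWidthKey           : @640,
        (__bridge NSString *)kCVPixelBufferHeightKey          : @360
    };
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput
                                   sourcePixelBufferAttributes:pixelBufferAttributes];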

Reading and Writing

The challenge here is that I couldn't find directly comparable methods between iOS and OSX - iOS has the ability to render a context directly to a PixelBuffer, whereas OSX does NOT support that option. The context is also configured differently between iOS and OSX.

Note that you should also add QuartzCore.framework to your Xcode project.

Creating the context on OSX.

    CIContext *context = [CIContext contextWithCGContext:
                      [[NSGraphicsContext currentContext] graphicsPort]
                                             options: nil]; // We don't want to always create a context so we put it outside the loop
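
If [NSGraphicsContext currentContext] turns out to be nil - which it can be in a windowless command-line tool - one workaround (my own assumption, not part of the original answer) is to back the CIContext with a bitmap context you create yourself:

    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    // Dimensions are placeholders; match them to your output size.
    CGContextRef cgContext = CGBitmapContextCreate(NULL, 640, 360, 8, 0, cs,
                                                   kCGImageAlphaPremultipliedFirst);
    CIContext *context = [CIContext contextWithCGContext:cgContext options:nil];
    CGColorSpaceRelease(cs);
    // Keep cgContext alive for as long as the CIContext is in use, then CGContextRelease() it.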

Now you want to loop through, reading off the AssetReader and writing to the AssetWriter... but note that you are writing via the adaptor created previously, not with the SampleBuffer.

    while ([adaptor.assetWriterInput isReadyForMoreMediaData] && !done)
    {
        CMSampleBufferRef sampleBuffer = [videoCompositionOutput copyNextSampleBuffer];
        if (sampleBuffer)
        {
            CMTime currentTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

            // GRAB AN IMAGE FROM THE SAMPLE BUFFER
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                     [NSNumber numberWithInt:640.0], kCVPixelBufferWidthKey,
                                     [NSNumber numberWithInt:360.0], kCVPixelBufferHeightKey,
                                     nil];

            CIImage *inputImage = [CIImage imageWithCVImageBuffer:imageBuffer options:options];

            //-----------------
            // FILTER IMAGE - APPLY ANY FILTERS IN HERE

            CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
            [filter setDefaults];
            [filter setValue: inputImage forKey: kCIInputImageKey];
            [filter setValue: @1.0f forKey: kCIInputIntensityKey];

            CIImage *outputImage = [filter valueForKey: kCIOutputImageKey];


            //-----------------
            // RENDER OUTPUT IMAGE BACK TO PIXEL BUFFER
            // 1. Firstly render the image
            CGImageRef finalImage = [context createCGImage:outputImage fromRect:[outputImage extent]];

            // 2. Grab the size
            CGSize size = CGSizeMake(CGImageGetWidth(finalImage), CGImageGetHeight(finalImage));

            // 3. Convert the CGImage to a PixelBuffer
            CVPixelBufferRef pxBuffer = NULL;
            // pixelBufferFromCGImage is documented below.
            pxBuffer = [self pixelBufferFromCGImage: finalImage andSize: size];

            // 4. Write things back out.
            // Calculate the frame time
            CMTime frameTime = CMTimeMake(1, 30); // Represents 1 frame at 30 FPS
            CMTime presentTime=CMTimeAdd(currentTime, frameTime); // Note that if you actually had a sequence of images (an animation or transition perhaps), your frameTime would represent the number of images / frames, not just 1 as I've done here.

            // Finally write out using the adaptor.
            [adaptor appendPixelBuffer:pxBuffer withPresentationTime:presentTime];

            // Release the per-frame objects created above so the loop doesn't leak.
            CGImageRelease(finalImage);
            CVPixelBufferRelease(pxBuffer);

            CFRelease(sampleBuffer);
            sampleBuffer = NULL;
        }
        else
        {
            // Find out why we couldn't get another sample buffer....
            if (assetReader.status == AVAssetReaderStatusFailed)
            {
                NSError *failureError = assetReader.error;
                // Do something with this error.
            }
            else
            {
                // Some kind of success....
                done = YES;
                [assetWriter finishWriting];
            }
        }
    }
}

Creating the PixelBuffer

There MUST be an easier way; for now, however, this works and is the only way I found to get directly from a CIImage to a PixelBuffer (via a CGImage) on OSX. The following code is cut and pasted from AVFoundation + AssetWriter: Generate Movie With Images and Audio

    - (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image andSize:(CGSize) size
    {
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                         nil];
        CVPixelBufferRef pxbuffer = NULL;

        CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
                                      size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
                                      &pxbuffer);
        NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
        NSParameterAssert(pxdata != NULL);

        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        // Use the buffer's actual bytes-per-row; CVPixelBuffers may pad rows beyond 4 * width.
        CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
                                             size.height, 8, CVPixelBufferGetBytesPerRow(pxbuffer),
                                             rgbColorSpace, kCGImageAlphaNoneSkipFirst);
        NSParameterAssert(context);
        CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
        CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                       CGImageGetHeight(image)), image);
        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);

        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

        return pxbuffer;
    }
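
As a possible "easier way" (an assumption on my part, not something from the original answer): on OS X 10.11 and later, CIContext gained -render:toCVPixelBuffer:, so if the adaptor was created with sourcePixelBufferAttributes you can pull a buffer from its pixelBufferPool and render the filtered CIImage into it directly, skipping the CGImage round trip:

    CVPixelBufferRef pxBuffer = NULL;
    CVReturn poolStatus = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                             adaptor.pixelBufferPool,
                                                             &pxBuffer);
    if (poolStatus == kCVReturnSuccess && pxBuffer != NULL)
    {
        // Render the filtered image straight into the pool buffer and append it.
        [context render:outputImage toCVPixelBuffer:pxBuffer];
        [adaptor appendPixelBuffer:pxBuffer withPresentationTime:presentTime];
        CVPixelBufferRelease(pxBuffer);
    }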
