Creating a large GIF with CGImageDestinationFinalize - running out of memory

Question

I'm trying to fix a performance issue when creating GIFs with lots of frames. For example, some GIFs could contain > 1200 frames. With my current code I run out of memory. I'm trying to figure out how to solve this; could it be done in batches? My first idea was to append images together, but I don't think there is a method for that, or any other way the ImageIO framework lets you build a GIF up piecemeal. It would be nice if there were a plural CGImageDestinationAddImages method, but there isn't, so I'm lost on how to solve this. I appreciate any help offered. Sorry in advance for the lengthy code, but I felt it was necessary to show the step-by-step creation of the GIF.

It is acceptable for me to make a video file instead of a GIF, as long as the differing GIF frame delays are possible in a video and recording doesn't take as long as the sum of all the animations in each frame.

Note: Jump to the latest update heading below to skip the backstory.

Update 1: Thread lock fixed by using GCD, but the memory issue still remains. 100% CPU utilization is not a concern here, since I show a UIActivityIndicatorView while the work is performed. Using the drawViewHierarchyInRect method might be more efficient/speedy than the renderInContext method; however, I discovered you can't use drawViewHierarchyInRect on a background thread with the afterScreenUpdates property set to YES; it locks up the thread.

There must be some way of writing the GIF out in batches. I believe I've narrowed the memory problem down to CGImageDestinationFinalize. This method seems pretty inefficient for making images with lots of frames, since everything has to be in memory to write out the entire image. I've confirmed this because I use little memory while grabbing the rendered containerView layer images and calling CGImageDestinationAddImage. The moment I call CGImageDestinationFinalize, the memory meter spikes up instantly; sometimes up to 2 GB depending on the number of frames. The amount of memory required just seems crazy for making a ~20-1000 KB GIF.
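
For reference, here is a minimal sketch of the ImageIO path described above (not my exact code; writeGIFToURL:, gifURL, frameCount and the 0.05 s delay are placeholders). Everything before the last call stays cheap; the spike happens on the finalize line:

#import <UIKit/UIKit.h>
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>

- (void)writeGIFToURL:(NSURL *)gifURL frameCount:(NSUInteger)frameCount
{
    NSDictionary *fileProperties = @{ (__bridge id)kCGImagePropertyGIFDictionary:
                                          @{ (__bridge id)kCGImagePropertyGIFLoopCount: @0 } }; // loop forever

    NSDictionary *frameProperties = @{ (__bridge id)kCGImagePropertyGIFDictionary:
                                           @{ (__bridge id)kCGImagePropertyGIFDelayTime: @0.05 } };

    CGImageDestinationRef destination =
        CGImageDestinationCreateWithURL((__bridge CFURLRef)gifURL, kUTTypeGIF, frameCount, NULL);
    CGImageDestinationSetProperties(destination, (__bridge CFDictionaryRef)fileProperties);

    for (NSUInteger i = 0; i < frameCount; i++) {
        @autoreleasepool {
            // The snapshot method shown at the bottom of the question.
            UIImage *frame = [self getImage];
            CGImageDestinationAddImage(destination, frame.CGImage, (__bridge CFDictionaryRef)frameProperties);
        }
    }

    // Little memory is used up to this point; this one call is where the ~2 GB spike happens.
    CGImageDestinationFinalize(destination);
    CFRelease(destination);
}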

Update 2: There is a method I found that might promise some hope. It is:

CGImageDestinationCopyImageSource(CGImageDestinationRef idst,
                                  CGImageSourceRef isrc,
                                  CFDictionaryRef options,
                                  CFErrorRef *err)

My new idea is that for every 10 or some other arbitrary number of frames, I will write those to a destination, and then in the next loop, the prior completed destination with 10 frames will now be my new source. However, there is a problem; the docs state:

Losslessly copies the contents of the image source, 'isrc', to the destination, 'idst'.
The image data will not be modified. No other images should be added to the image destination.
CGImageDestinationFinalize() should not be called afterward -
the result is saved to the destination when this function returns.

This makes me think my idea won't work, but alas I tried. Continue to Update 3.
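
The attempt looked roughly like this (a sketch, not my exact code; appendChunkFromURL:, previousChunkURL and nextChunkURL are hypothetical names for the already-written chunk and the new output):

- (void)appendChunkFromURL:(NSURL *)previousChunkURL toURL:(NSURL *)nextChunkURL totalFrames:(size_t)totalFrameCount
{
    CGImageSourceRef chunkSource = CGImageSourceCreateWithURL((__bridge CFURLRef)previousChunkURL, NULL);
    CGImageDestinationRef chunkDestination =
        CGImageDestinationCreateWithURL((__bridge CFURLRef)nextChunkURL, kUTTypeGIF, totalFrameCount, NULL);

    // Losslessly copy the frames written so far into the new destination...
    CFErrorRef copyError = NULL;
    if (!CGImageDestinationCopyImageSource(chunkDestination, chunkSource, NULL, &copyError)) {
        NSLog(@"Copy failed: %@", copyError);
        if (copyError) CFRelease(copyError);
    }

    // ...but per the docs, no other images may be added afterward and
    // CGImageDestinationFinalize() must not be called, so the next batch of
    // frames can't be appended here.
    CFRelease(chunkSource);
    CFRelease(chunkDestination);
}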

Update 3: I've been trying the CGImageDestinationCopyImageSource method with my updated code below; however, I'm always getting back an image with only one frame, most likely because of the documentation quoted in Update 2 above. There is yet one more method to perhaps try, CGImageSourceCreateIncremental, but I doubt that is what I need.

It seems like I need some way of writing/appending the GIF frames to disk incrementally so I can purge each new chunk out of memory. Perhaps a CGImageDestinationCreateWithDataConsumer with the appropriate callbacks to save the data incrementally would be ideal?

Update 4: I started to try the CGImageDestinationCreateWithDataConsumer method to see if I could manage writing the bytes out as they come in using an NSFileHandle, but again the problem is that calling CGImageDestinationFinalize sends all of the bytes in one shot, which is the same as before - I run out of memory. I really need help to get this solved and will offer a large bounty.
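
For completeness, this is roughly the shape of the data-consumer attempt (a sketch under my own assumptions; putBytes, releaseConsumer and writeGIFFrames:toHandle: are hypothetical names). The callback does receive the bytes through the NSFileHandle, but only once CGImageDestinationFinalize runs, so peak memory is unchanged:

#import <UIKit/UIKit.h>
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>

// Stream whatever ImageIO hands us straight to the file handle.
static size_t putBytes(void *info, const void *buffer, size_t count)
{
    NSFileHandle *handle = (__bridge NSFileHandle *)info;
    [handle writeData:[NSData dataWithBytes:buffer length:count]];
    return count;
}

static void releaseConsumer(void *info)
{
    CFRelease(info); // balances the CFBridgingRetain below
}

- (void)writeGIFFrames:(NSArray *)frames toHandle:(NSFileHandle *)handle
{
    CGDataConsumerCallbacks callbacks = { putBytes, releaseConsumer };
    CGDataConsumerRef consumer = CGDataConsumerCreate((void *)CFBridgingRetain(handle), &callbacks);
    CGImageDestinationRef destination =
        CGImageDestinationCreateWithDataConsumer(consumer, kUTTypeGIF, frames.count, NULL);
    CGDataConsumerRelease(consumer);

    NSDictionary *frameProperties = @{ (__bridge id)kCGImagePropertyGIFDictionary:
                                           @{ (__bridge id)kCGImagePropertyGIFDelayTime: @0.05 } };

    for (UIImage *frame in frames) {
        CGImageDestinationAddImage(destination, frame.CGImage, (__bridge CFDictionaryRef)frameProperties);
    }

    // putBytes only fires when ImageIO flushes, which in practice happens inside
    // this call, so the whole GIF is still built in memory first.
    CGImageDestinationFinalize(destination);
    CFRelease(destination);
}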

Update 5: I've posted a large bounty. I would like to see some brilliant solutions without a 3rd party library or framework to append the raw NSData GIF bytes to each other and write it out incrementally to disk with an NSFileHandle - essentially creating the GIF manually. Or, if you think there is a solution to be found using ImageIO like what I've tried that would be amazing too. Swizzling, subclassing etc.

Update 6: I have been researching how GIFs are made at the lowest level, and I wrote a small test of what I'm going for with the bounty's help. I need to grab the rendered UIImage, get the bytes from it, compress them with LZW, and append the bytes, along with some other work like determining the global color table. Source of info: http://giflib.sourceforge.net/whatsinagif/bits_and_bytes.html

I've spent all week researching this from every angle to see what goes on exactly to build decent quality GIFs based on limitations (such as 256 colors max). I believe and assume what ImageIO is doing is creating a single bitmap context under the hood with all image frames merged as one, and is performing color quantization on this bitmap to generate a single global color table to be used in the GIF. Using a hex editor on some successful GIFs made from ImageIO confirms they have a global color table and never have a local one unless you set it for each frame yourself. Color quantization is performed on this huge bitmap to build a color palette (again assuming, but strongly believe).

I have this weird and crazy idea: the frame images from my app can only differ by one color per frame, and even better, I know what small set of colors my app uses. The first/background frame is a frame that contains colors I cannot control (user-supplied content such as photos), so what I'm thinking is I will snapshot this view, then snapshot another view that has the known colors my app deals with, and make this a single bitmap context that I can pass into the normal ImageIO GIF-making routines. What's the advantage? Well, this gets it down from ~1200 frames to one by merging two images into a single image. ImageIO will then do its thing on the much smaller bitmap and write out a single GIF with one frame.

Now what can I do to build the actual 1200-frame GIF? I'm thinking I can take that single-frame GIF and extract the color table bytes nicely, because they fall between two GIF protocol blocks. I will still need to build the GIF manually, but now I shouldn't have to compute the color palette. I will be stealing the palette ImageIO thought was best and using that for my byte buffer. I still need an LZW compressor implementation with the bounty's help, but that should be a lot easier than color quantization, which can be painfully slow. LZW can be slow too, so I'm not sure if it's even worth it; no idea how LZW will perform sequentially over ~1200 frames.
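
A rough sketch of the palette-stealing step, based on the GIF89a layout (globalColorTableFromGIFAtPath: and singleFrameGIFPath are placeholder names): byte 10 of the file is the logical screen descriptor's packed field, and if its top bit is set, a global color table of 3 * 2^(N+1) bytes starts at offset 13, where N is the low three bits.

- (NSData *)globalColorTableFromGIFAtPath:(NSString *)singleFrameGIFPath
{
    NSData *gifData = [NSData dataWithContentsOfFile:singleFrameGIFPath];
    const uint8_t *bytes = gifData.bytes;

    // Header (6 bytes) + logical screen descriptor (7 bytes) = 13 bytes before the table.
    // Bit 7 of byte 10 is the global color table flag; the low 3 bits give its size.
    if (gifData.length > 13 && (bytes[10] & 0x80)) {
        NSUInteger tableLength = 3 * (1 << ((bytes[10] & 0x07) + 1));
        return [gifData subdataWithRange:NSMakeRange(13, tableLength)];
    }
    return nil;
}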

Any ideas?

@property (nonatomic, strong) NSFileHandle *outputHandle;    

- (void)makeGIF
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0),^
    {
        NSString *filePath = @"/Users/Test/Desktop/Test.gif";

        [[NSFileManager defaultManager] createFileAtPath:filePath contents:nil attributes:nil];

        self.outputHandle = [NSFileHandle fileHandleForWritingAtPath:filePath];

        NSMutableData *openingData = [[NSMutableData alloc]init];

        // GIF89a header

        const uint8_t gif89aHeader [] = { 0x47, 0x49, 0x46, 0x38, 0x39, 0x61 };

        [openingData appendBytes:gif89aHeader length:sizeof(gif89aHeader)];


        // Logical screen descriptor: 10x10 canvas, global color table flag set, 4-color table
        const uint8_t screenDescriptor [] = { 0x0A, 0x00, 0x0A, 0x00, 0x91, 0x00, 0x00 };

        [openingData appendBytes:screenDescriptor length:sizeof(screenDescriptor)];


        // Global color table (4 entries: white, red, blue, black)

        const uint8_t globalColorTable [] = { 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0xFF, 0x00, 0x00, 0x00 };

        [openingData appendBytes:globalColorTable length:sizeof(globalColorTable)];


        // 'Netscape 2.0' - Loop forever

        const uint8_t applicationExtension [] = { 0x21, 0xFF, 0x0B, 0x4E, 0x45, 0x54, 0x53, 0x43, 0x41, 0x50, 0x45, 0x32, 0x2E, 0x30, 0x03, 0x01, 0x00, 0x00, 0x00 };

        [openingData appendBytes:applicationExtension length:sizeof(applicationExtension)];

        [self.outputHandle writeData:openingData];

        for (NSUInteger i = 0; i < 1200; i++)
        {
            // Graphic control extension: disposal method 1, delay 0x0032 = 50 hundredths of a second
            const uint8_t graphicsControl [] = { 0x21, 0xF9, 0x04, 0x04, 0x32, 0x00, 0x00, 0x00 };

            NSMutableData *imageData = [[NSMutableData alloc]init];

            [imageData appendBytes:graphicsControl length:sizeof(graphicsControl)];


            // Image descriptor: frame at (0,0), 10x10, no local color table
            const uint8_t imageDescriptor [] = { 0x2C, 0x00, 0x00, 0x00, 0x00, 0x0A, 0x00, 0x0A, 0x00, 0x00 };

            [imageData appendBytes:imageDescriptor length:sizeof(imageDescriptor)];


            // LZW image data: minimum code size 2, one 22-byte sub-block, block terminator
            const uint8_t image [] = { 0x02, 0x16, 0x8C, 0x2D, 0x99, 0x87, 0x2A, 0x1C, 0xDC, 0x33, 0xA0, 0x02, 0x75, 0xEC, 0x95, 0xFA, 0xA8, 0xDE, 0x60, 0x8C, 0x04, 0x91, 0x4C, 0x01, 0x00 };

            [imageData appendBytes:image length:sizeof(image)];


            [self.outputHandle writeData:imageData];
        }


        NSMutableData *closingData = [[NSMutableData alloc]init];

        // Comment extension (0x21 0xFE) containing the ASCII text "Hi"
        const uint8_t appSignature [] = { 0x21, 0xFE, 0x02, 0x48, 0x69, 0x00 };

        [closingData appendBytes:appSignature length:sizeof(appSignature)];


        const uint8_t trailer [] = { 0x3B };

        [closingData appendBytes:trailer length:sizeof(trailer)];


        [self.outputHandle writeData:closingData];

        [self.outputHandle closeFile];

        self.outputHandle = nil;

        dispatch_async(dispatch_get_main_queue(),^
        {
           // Get back to main thread and do something with the GIF
        });
    });
}

- (UIImage *)getImage
{
    // Read question's 'Update 1' to see why I'm not using the
    // drawViewHierarchyInRect method
    UIGraphicsBeginImageContextWithOptions(self.containerView.bounds.size, NO, 1.0);
    [self.containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Shaves exported gif size considerably
    NSData *data = UIImageJPEGRepresentation(snapShot, 1.0);

    return [UIImage imageWithData:data];
}


Recommended answer

You can use AVFoundation to write a video with your images. I've uploaded a complete working test project to this github repository. When you run the test project in the simulator, it will print a file path to the debug console. Open that path in your video player to check the output.

I'll walk through the important parts of the code in this answer.

Start by creating an AVAssetWriter. I'd give it the AVFileTypeAppleM4V file type so that the video works on iOS devices.

AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:self.url fileType:AVFileTypeAppleM4V error:&error];

Set up an output settings dictionary with the video parameters:

// `size` is the output video size, defined elsewhere in the project.
- (NSDictionary *)videoOutputSettings {
    return @{
             AVVideoCodecKey: AVVideoCodecH264,
             AVVideoWidthKey: @((size_t)size.width),
             AVVideoHeightKey: @((size_t)size.height),
             AVVideoCompressionPropertiesKey: @{
                     AVVideoProfileLevelKey: AVVideoProfileLevelH264Baseline31,
                     AVVideoAverageBitRateKey: @(1200000) }};
}

You can adjust the bit rate to control the size of your video file. I've chosen the codec profile pretty conservatively here (it supports some pretty old devices). You might want to choose a later profile.
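
For example, a variant like the following (illustrative profile and bit rate values, not part of the linked test project) targets a later H.264 profile at a higher bit rate for newer devices:

- (NSDictionary *)highQualityVideoOutputSettings {
    return @{
             AVVideoCodecKey: AVVideoCodecH264,
             AVVideoWidthKey: @((size_t)size.width),
             AVVideoHeightKey: @((size_t)size.height),
             AVVideoCompressionPropertiesKey: @{
                     AVVideoProfileLevelKey: AVVideoProfileLevelH264High40,
                     AVVideoAverageBitRateKey: @(4000000) }};
}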

Then create an AVAssetWriterInput with media type AVMediaTypeVideo and your output settings.

NSDictionary *outputSettings = [self videoOutputSettings];
AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];

Set up a pixel buffer attribute dictionary:

// `fromCF` is a helper macro from the project (presumably something like
// #define fromCF (__bridge id)) that bridges the CF keys to Objective-C object keys.
- (NSDictionary *)pixelBufferAttributes {
    return @{
             fromCF kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
             fromCF kCVPixelBufferCGBitmapContextCompatibilityKey: @YES };
}

You don't have to specify the pixel buffer dimensions here; AVFoundation will get them from the input's output settings. The attributes I've used here are (I believe) optimal for drawing with Core Graphics.

Next, create an AVAssetWriterInputPixelBufferAdaptor for your input using the pixel buffer settings.

AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
    assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
    sourcePixelBufferAttributes:[self pixelBufferAttributes]];

Add the input to the writer and tell the writer to get going:

[writer addInput:input];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];

Next we'll tell the input how to get video frames. Yes, we can do this after we've told the writer to start writing:

    [input requestMediaDataWhenReadyOnQueue:adaptorQueue usingBlock:^{

This block is going to do everything else we need to do with AVFoundation. The input calls it each time it's ready to accept more data. It might be able to accept multiple frames in a single call, so we'll loop as long as it's ready:

        while (input.readyForMoreMediaData && self.frameGenerator.hasNextFrame) {

I'm using self.frameGenerator to actually draw the frames. I'll show that code later. The frameGenerator decides when the video is over (by returning NO from hasNextFrame). It also knows when each frame should appear on screen:

            CMTime time = self.frameGenerator.nextFramePresentationTime;

To actually draw the frame, we need to get a pixel buffer from the adaptor:

            CVPixelBufferRef buffer = 0;
            CVPixelBufferPoolRef pool = adaptor.pixelBufferPool;
            CVReturn code = CVPixelBufferPoolCreatePixelBuffer(0, pool, &buffer);
            if (code != kCVReturnSuccess) {
                errorBlock([self errorWithFormat:@"could not create pixel buffer; CoreVideo error code %ld", (long)code]);
                [input markAsFinished];
                [writer cancelWriting];
                return;
            } else {

If we couldn't get a pixel buffer, we signal an error and abort everything. If we did get a pixel buffer, we need to wrap a bitmap context around it and ask frameGenerator to draw the next frame in the context:

                CVPixelBufferLockBaseAddress(buffer, 0); {
                    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB(); {
                        CGContextRef gc = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(buffer), CVPixelBufferGetWidth(buffer), CVPixelBufferGetHeight(buffer), 8, CVPixelBufferGetBytesPerRow(buffer), rgb, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); {
                            [self.frameGenerator drawNextFrameInContext:gc];
                        } CGContextRelease(gc);
                    } CGColorSpaceRelease(rgb);

Now we can append the buffer to the video. The adaptor does that:

                    [adaptor appendPixelBuffer:buffer withPresentationTime:time];
                } CVPixelBufferUnlockBaseAddress(buffer, 0);
            } CVPixelBufferRelease(buffer);
        }

The loop above pushes frames through the adaptor until either the input says it's had enough, or until frameGenerator says it's out of frames. If the frameGenerator has more frames, we just return, and the input will call us again when it's ready for more frames:

        if (self.frameGenerator.hasNextFrame) {
            return;
        }

If the frameGenerator is out of frames, we shut down the input:

        [input markAsFinished];

And then we tell the writer to finish. It'll call a completion handler when it's done:

        [writer finishWritingWithCompletionHandler:^{
            if (writer.status == AVAssetWriterStatusFailed) {
                errorBlock(writer.error);
            } else {
                dispatch_async(dispatch_get_main_queue(), doneBlock);
            }
        }];
    }];

By comparison, generating the frames is pretty straightforward. Here's the protocol the generator adopts:

@protocol DqdFrameGenerator <NSObject>

@required

// You should return the same size every time I ask for it.
@property (nonatomic, readonly) CGSize frameSize;

// I'll ask for frames in a loop. On each pass through the loop, I'll start by asking if you have any more frames:
@property (nonatomic, readonly) BOOL hasNextFrame;

// If you say NO, I'll stop asking and end the video.

// If you say YES, I'll ask for the presentation time of the next frame:
@property (nonatomic, readonly) CMTime nextFramePresentationTime;

// Then I'll ask you to draw the next frame into a bitmap graphics context:
- (void)drawNextFrameInContext:(CGContextRef)gc;

// Then I'll go back to the top of the loop.

@end

For my test, I draw a background image, and slowly cover it up with solid red as the video progresses.

@implementation TestFrameGenerator {
    UIImage *baseImage;
    CMTime nextTime;
}

- (instancetype)init {
    if (self = [super init]) {
        baseImage = [UIImage imageNamed:@"baseImage.jpg"];
        _totalFramesCount = 100;
        nextTime = CMTimeMake(0, 30);
    }
    return self;
}

- (CGSize)frameSize {
    return baseImage.size;
}

- (BOOL)hasNextFrame {
    return self.framesEmittedCount < self.totalFramesCount;
}

- (CMTime)nextFramePresentationTime {
    return nextTime;
}

Core Graphics puts the origin in the lower left corner of the bitmap context, but I'm using a UIImage, and UIKit likes to have the origin in the upper left.

- (void)drawNextFrameInContext:(CGContextRef)gc {
    CGContextTranslateCTM(gc, 0, baseImage.size.height);
    CGContextScaleCTM(gc, 1, -1);
    UIGraphicsPushContext(gc); {
        [baseImage drawAtPoint:CGPointZero];

        [[UIColor redColor] setFill];
        UIRectFill(CGRectMake(0, 0, baseImage.size.width, baseImage.size.height * self.framesEmittedCount / self.totalFramesCount));
    } UIGraphicsPopContext();

    ++_framesEmittedCount;

I call a callback that my test program uses to update a progress indicator:

    if (self.frameGeneratedCallback != nil) {
        dispatch_async(dispatch_get_main_queue(), ^{
            self.frameGeneratedCallback();
        });
    }

Finally, to demonstrate variable frame rate, I emit the first half of the frames at 30 frames per second, and the second half at 15 frames per second:

    if (self.framesEmittedCount < self.totalFramesCount / 2) {
        nextTime.value += 1;
    } else {
        nextTime.value += 2;
    }
}

@end
