How do I export UIImage array as a movie?


Question




I have a serious problem: I have an NSArray with several UIImage objects. What I now want to do is create a movie from those UIImages, but I have no idea how to do so.

I hope someone can help me or send me a code snippet that does something like what I want.

Edit: For future reference: after applying the solution, if the video looks distorted, make sure the width of the images/area you are capturing is a multiple of 16. I found this after many hours of struggle here:
Why does my movie from UIImages gets distorted?

Here is the complete solution (just ensure the width is a multiple of 16):
http://codethink.no-ip.org/wordpress/archives/673
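If you need to enforce that constraint in code, here is a minimal sketch of a hypothetical helper that rounds a width up to the next multiple of 16 before you configure the writer:

static size_t roundWidthTo16(size_t width) {
    // Round up to the next multiple of 16, e.g. 630 -> 640,
    // to avoid the distorted-output problem described above.
    return (width + 15) & ~(size_t)15;
}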

Solution

Take a look at AVAssetWriter and the rest of the AVFoundation framework. The writer has an input of type AVAssetWriterInput, which in turn has a method called appendSampleBuffer: that lets you add individual frames to a video stream. Essentially you’ll have to:

1) Wire the writer:

NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
    [NSURL fileURLWithPath:somePath] fileType:AVFileTypeQuickTimeMovie
    error:&error];
NSParameterAssert(videoWriter);

NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    AVVideoCodecH264, AVVideoCodecKey,
    [NSNumber numberWithInt:640], AVVideoWidthKey,
    [NSNumber numberWithInt:480], AVVideoHeightKey,
    nil];
AVAssetWriterInput *writerInput = [[AVAssetWriterInput
    assetWriterInputWithMediaType:AVMediaTypeVideo
    outputSettings:videoSettings] retain]; // remove the retain if you use ARC

NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];

2) Start a session:

[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:…]; // use kCMTimeZero if unsure

3) Write some samples:

// Or you can use an AVAssetWriterInputPixelBufferAdaptor.
// That lets you feed the writer input data from a CVPixelBuffer,
// which is quite easy to create from a CGImage (sketched below).
[writerInput appendSampleBuffer:sampleBuffer];
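The adaptor route mentioned in the comment looks roughly like this; a minimal sketch, assuming the writerInput from step 1, a pixelBuffer you created per frame, and a 30 fps timescale (the frame rate is an assumption):

AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
    assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
    sourcePixelBufferAttributes:nil];

// Append one CVPixelBuffer per frame; frame i lands at i/30 s.
CMTime presentationTime = CMTimeMake(frameCount, 30);
if (writerInput.readyForMoreMediaData) {
    [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
}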

4) Finish the session:

[writerInput markAsFinished];
[videoWriter endSessionAtSourceTime:…]; // optional; you can call finishWriting without specifying an endTime
[videoWriter finishWriting]; // deprecated in iOS 6
/*
[videoWriter finishWritingWithCompletionHandler:...]; // iOS 6.0+
*/

You’ll still have to fill in a lot of blanks, but I think the only really hard remaining part is getting a pixel buffer from a CGImage:

- (CVPixelBufferRef)newPixelBufferFromCGImage:(CGImageRef)image
{
    // Ask Core Video for a buffer that Core Graphics can draw into directly.
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
        [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
        nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
        frameSize.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef)options,
        &pxbuffer); // use (__bridge CFDictionaryRef) under ARC
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    // Lock the buffer and wrap its backing memory in a bitmap context.
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    // Use the buffer's own bytes-per-row: Core Video may pad rows beyond
    // 4 * frameSize.width, which is one source of distorted output.
    CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
        frameSize.height, 8, CVPixelBufferGetBytesPerRow(pxbuffer), rgbColorSpace,
        kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);

    // Draw the image into the buffer, applying the caller-supplied transform.
    CGContextConcatCTM(context, frameTransform);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
        CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

frameSize is a CGSize describing your target frame size and frameTransform is a CGAffineTransform that lets you transform the images when you draw them into frames.
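For completeness, a minimal sketch of a driver loop that ties the steps together; it assumes `images` is your NSArray of UIImage objects, `somePath`, `videoWriter`, and `writerInput` are set up as in step 1, and `adaptor` is the one sketched in step 3 (error handling omitted):

int32_t fps = 30; // assumed frame rate
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];

for (NSUInteger i = 0; i < images.count; i++) {
    UIImage *img = [images objectAtIndex:i];
    CVPixelBufferRef buffer = [self newPixelBufferFromCGImage:img.CGImage];

    // Busy-waiting keeps the sketch short; production code should use
    // requestMediaDataWhenReadyOnQueue:usingBlock: instead.
    while (!writerInput.readyForMoreMediaData) {
        [NSThread sleepForTimeInterval:0.05];
    }
    [adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake((int64_t)i, fps)];
    CVPixelBufferRelease(buffer);
}

[writerInput markAsFinished];
[videoWriter finishWritingWithCompletionHandler:^{
    // See the status check sketched in step 4.
    NSLog(@"Movie written to %@", somePath);
}];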
