How to create a video from its frames on iPhone


Problem description

I have done some R&D and succeeded in getting frames, as images, from a video file played in MPMoviePlayerController.

I got all the frames with the code below and saved the images in one array.

// moviePlayerController.duration is in seconds, so this grabs roughly
// one keyframe-aligned thumbnail per second of video.
for (int i = 1; i <= moviePlayerController.duration; i++)
{
    UIImage *img = [moviePlayerController thumbnailImageAtTime:i
                                                    timeOption:MPMovieTimeOptionNearestKeyFrame];
    [arrImages addObject:img];
}
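
Note that thumbnailImageAtTime: samples roughly once per second and snaps to the nearest keyframe, so it cannot preserve the source frame rate. If exact frames are needed, AVAssetImageGenerator can extract an image at a precise timestamp. A minimal sketch, assuming the source file's URL is available in a hypothetical videoURL variable:

#import <AVFoundation/AVFoundation.h>

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:videoURL options:nil]; // videoURL is assumed
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
// Zero tolerance returns the frame at the exact requested time,
// not the nearest keyframe.
generator.requestedTimeToleranceBefore = kCMTimeZero;
generator.requestedTimeToleranceAfter  = kCMTimeZero;

NSError *error = nil;
CMTime time = CMTimeMake(frameNumber, fps); // frameNumber and fps are assumptions
CGImageRef cgImage = [generator copyCGImageAtTime:time actualTime:NULL error:&error];
if (cgImage) {
    [arrImages addObject:[UIImage imageWithCGImage:cgImage]];
    CGImageRelease(cgImage);
}
[generator release]; // pre-ARC, matching the rest of this code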

Now the question: after changing some of the image files, e.g. adding emotion to the images or applying filters such as movie reel and black-and-white, how can we create the video again and store it in the Documents directory at the same frame rate, without losing video quality?
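
To reproduce the source frame rate, it can be read from the video track before re-encoding. A minimal sketch, again assuming a hypothetical videoURL for the original file; the value could then serve as the kRecordingFPS used in the writer code below:

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:videoURL options:nil]; // videoURL is assumed
NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack *videoTrack = (tracks.count > 0) ? [tracks objectAtIndex:0] : nil;
// nominalFrameRate is the track's stated frames per second.
int32_t sourceFPS = videoTrack ? (int32_t)videoTrack.nominalFrameRate : 30; // 30 as a fallback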

After changing some images, I used the following code to save the video again.

- (void) writeImagesAsMovie:(NSString*)path
{
    NSError *error = nil;
    UIImage *first = [arrImages objectAtIndex:0];
    CGSize frameSize = first.size;

    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
                                                           fileType:AVFileTypeQuickTimeMovie
                                                              error:&error];
    NSParameterAssert(videoWriter);

    // Use the actual frame size rather than a hard-coded 640x480, so the
    // output dimensions match the pixel buffers appended below.
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:(int)frameSize.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:(int)frameSize.height], AVVideoHeightKey,
                                   nil];
    // No retain needed: the writer retains its inputs when they are added.
    AVAssetWriterInput *writerInput = [AVAssetWriterInput
                                       assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:videoSettings];

    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                     assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                     sourcePixelBufferAttributes:nil];

    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);
    [videoWriter addInput:writerInput];

    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    int frameCount = 0;
    CVPixelBufferRef buffer = NULL;
    for (UIImage *img in arrImages)
    {
        buffer = [self newPixelBufferFromCGImage:[img CGImage] andFrameSize:frameSize];

        // Wait until the input is ready instead of silently dropping frames.
        while (!adaptor.assetWriterInput.readyForMoreMediaData)
            [NSThread sleepForTimeInterval:0.05];

        // kRecordingFPS must be defined elsewhere; the source frame rate
        // read above is a natural choice.
        CMTime frameTime = CMTimeMake(frameCount, (int32_t)kRecordingFPS);
        [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];

        // Release unconditionally; the original leaked the buffer whenever
        // the input was not ready.
        if (buffer)
            CVBufferRelease(buffer);
        frameCount++;
    }

    [writerInput markAsFinished];
    [videoWriter finishWriting]; // synchronous; iOS 6+ prefers finishWritingWithCompletionHandler:
    [videoWriter release];       // pre-ARC: balance the alloc/init above
}
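
For completeness, one way to call this method with a Documents-directory path (the file name "edited.mov" is arbitrary):

NSString *documentsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                              NSUserDomainMask, YES) objectAtIndex:0];
NSString *outputPath = [documentsDir stringByAppendingPathComponent:@"edited.mov"];
// AVAssetWriter fails if the target file already exists,
// so remove any previous output first.
[[NSFileManager defaultManager] removeItemAtPath:outputPath error:NULL];
[self writeImagesAsMovie:outputPath];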


- (CVPixelBufferRef) newPixelBufferFromCGImage:(CGImageRef)image andFrameSize:(CGSize)frameSize
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
                                          frameSize.height, kCVPixelFormatType_32ARGB,
                                          (CFDictionaryRef)options, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    // Ask the buffer for its bytes-per-row instead of assuming 4 * width;
    // Core Video may pad rows for alignment.
    CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
                                                 frameSize.height, 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    // Draw into the full frame rect so an image whose size differs from
    // frameSize is scaled rather than cropped.
    CGContextDrawImage(context, CGRectMake(0, 0, frameSize.width, frameSize.height), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}
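
Because the method name begins with "new", Cocoa/Core Foundation ownership conventions say the caller owns the returned CVPixelBufferRef; the write loop above honors this by calling CVBufferRelease after each append.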

I am new to this topic, so please help me solve this question.

Recommended answer

You can refer to the following links; hope they help:

  1. Using the FFMPEG library with the iPhone SDK for video encoding

  2. iPhone SDK: creating a video from UIImages

