AVAssetWriterInputPixelBufferAdaptor and CMTime


Question

I'm writing some frames to video with AVAssetWriterInputPixelBufferAdaptor, and the behavior w.r.t. time isn't what I'd expect.

If I write just one frame:

 [videoWriter startSessionAtSourceTime:kCMTimeZero];
 [adaptor appendPixelBuffer:pxBuffer withPresentationTime:kCMTimeZero];

this gets me a video of length zero, which is what I expect.

But if I go on to add a second frame:

 // 3000/600 = 5 sec, right?
 CMTime nextFrame = CMTimeMake(3000, 600); 
 [adaptor appendPixelBuffer:pxBuffer withPresentationTime:nextFrame];

I get ten seconds of video, where I'm expecting five.

What's going on here? Does withPresentationTime somehow set both the start of the frame and the duration?

Note that I'm not calling endSessionAtSourceTime, just finishWriting.

Answer

Try looking at this example and reverse-engineering it to add 1 frame 5 seconds later...

Here is the sample code link: git@github.com:RudyAramayo/AVAssetWriterInputPixelBufferAdaptorSample.git
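
The short version of what the sample shows: withPresentationTime only sets where a frame starts. Each frame is displayed until the next sample's presentation time, and for the final sample the writer has to infer a duration on its own (it appears to reuse the preceding frame interval), which would explain ten seconds instead of five. Below is a minimal sketch of the two-frame case, reusing the videoWriter, writerInput, adaptor, and pxBuffer from the question, under the assumption that you want the movie pinned to exactly five seconds:

 [videoWriter startSessionAtSourceTime:kCMTimeZero];
 [adaptor appendPixelBuffer:pxBuffer withPresentationTime:kCMTimeZero];

 CMTime fiveSeconds = CMTimeMake(3000, 600); // 3000/600 = 5 sec
 [adaptor appendPixelBuffer:pxBuffer withPresentationTime:fiveSeconds];

 // Without an explicit end time, the writer invents a duration for the
 // last frame; ending the session at 5 sec should clamp the movie there.
 [videoWriter endSessionAtSourceTime:fiveSeconds];
 [writerInput markAsFinished];
 [videoWriter finishWriting];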

Here is the code you will need:

- (void)testCompressionSession
{
    CGSize size = CGSizeMake(480, 320);

    NSString *betaCompressionDirectory = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Movie.m4v"];
    NSError *error = nil;

    // Remove any movie left over from a previous run.
    unlink([betaCompressionDirectory UTF8String]);

    //---- initialize compression engine
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:betaCompressionDirectory]
                                                           fileType:AVFileTypeQuickTimeMovie
                                                              error:&error];
    NSParameterAssert(videoWriter);
    if (error)
        NSLog(@"error = %@", [error localizedDescription]);

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:size.height], AVVideoHeightKey, nil];
    AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                         outputSettings:videoSettings];

    NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                                           [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];

    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                                                                     sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);

    if ([videoWriter canAddInput:writerInput])
        NSLog(@"I can add this input");
    else
        NSLog(@"I can't add this input");

    [videoWriter addInput:writerInput];

    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    //---
    // demo debugging code: write the same image repeatedly as a movie

    CGImageRef theImage = [[UIImage imageNamed:@"Lotus.png"] CGImage];

    dispatch_queue_t dispatchQueue = dispatch_queue_create("mediaInputQueue", NULL);
    __block int frame = 0;

    [writerInput requestMediaDataWhenReadyOnQueue:dispatchQueue usingBlock:^{
        while ([writerInput isReadyForMoreMediaData])
        {
            // Stop after ~120 frames; at 20 fps that is about 6 seconds of video.
            if (++frame >= 120)
            {
                [writerInput markAsFinished];
                [videoWriter finishWriting];
                [videoWriter release];
                break;
            }

            CVPixelBufferRef buffer = [self pixelBufferFromCGImage:theImage size:size];
            if (buffer)
            {
                // Timescale 20 means each increment of `frame` advances 1/20 sec.
                if (![adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake(frame, 20)])
                    NSLog(@"FAIL");
                else
                    NSLog(@"Success:%d", frame);
                CFRelease(buffer);
            }
        }
    }];

    NSLog(@"outside for loop");
}


- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image size:(CGSize)size
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height,
                                          kCVPixelFormatType_32ARGB, (CFDictionaryRef)options, &pxbuffer);
    // Alternatively, pull a buffer from the adaptor's pool:
    // CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &pxbuffer);

    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    // Wrap the pixel buffer's memory in a bitmap context and draw the image into it.
    // Use the buffer's actual bytes-per-row; Core Video may pad rows beyond 4 * width.
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace, kCGImageAlphaPremultipliedFirst);
    NSParameterAssert(context);

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    // Returned with a +1 retain count; the caller is responsible for CFRelease.
    return pxbuffer;
}
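
Two side notes on the sample. The commented-out CVPixelBufferPoolCreatePixelBuffer line is the cheaper route, reusing buffers from adaptor.pixelBufferPool instead of creating a fresh one per frame, but the pool only becomes available after startWriting has been called, so the helper would need the adaptor passed in. And to reproduce the question's scenario, replace the 120-frame loop with the two appends sketched above and end the session explicitly.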

