OpenGL ES 2.0 to Video on iPad/iPhone


Question

I am at my wits' end here, despite the good information on StackOverflow...

I am trying to write an OpenGL renderbuffer to a video on the iPad 2 (using iOS 4.3). More exactly, this is what I am attempting:

A) set up an AVAssetWriterInputPixelBufferAdaptor

  1. create an AVAssetWriter that points to a video file

  2. set up an AVAssetWriterInput with appropriate settings

  3. set up an AVAssetWriterInputPixelBufferAdaptor to add data to the video file

B) write data to a video file using that AVAssetWriterInputPixelBufferAdaptor

  1. render OpenGL code to the screen

  2. get the OpenGL buffer via glReadPixels

  3. create a CVPixelBufferRef from the OpenGL data

  4. append that PixelBuffer to the AVAssetWriterInputPixelBufferAdaptor using the appendPixelBuffer method

However, I am having problems doing this. My strategy right now is to set up the AVAssetWriterInputPixelBufferAdaptor when a button is pressed. Once the AVAssetWriterInputPixelBufferAdaptor is valid, I set a flag to signal the EAGLView to create a pixel buffer and append it to the video file via appendPixelBuffer for a given number of frames.

Right now my code is crashing as it tries to append the second pixel buffer, giving me the following error:

-[__NSCFDictionary appendPixelBuffer:withPresentationTime:]: unrecognized selector sent to instance 0x131db0

Here is my AVAsset setup code (a lot of it was based on Rudy Aramayo's code, which does work on normal images but is not set up for textures):

- (void) testVideoWriter {

  //initialize global info
  MOVIE_NAME = @"Documents/Movie.mov";
  CGSize size = CGSizeMake(480, 320);
  frameLength = CMTimeMake(1, 5); 
  currentTime = kCMTimeZero;
  currentFrame = 0;

  NSString *MOVIE_PATH = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
  NSError *error = nil;

  unlink([MOVIE_PATH UTF8String]);

  videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:MOVIE_PATH] fileType:AVFileTypeQuickTimeMovie error:&error];

  NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoCodecH264, AVVideoCodecKey,
                                 [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                 [NSNumber numberWithInt:size.height], AVVideoHeightKey, nil];
  writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

  //writerInput.expectsMediaDataInRealTime = NO;

  NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey, nil];

  adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                  sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
  [adaptor retain];

  [videoWriter addInput:writerInput];

  [videoWriter startWriting];
  [videoWriter startSessionAtSourceTime:kCMTimeZero];

  VIDEO_WRITER_IS_READY = true;
}

Ok, now that my videoWriter and adaptor are set up, I tell my OpenGL renderer to create a pixel buffer for every frame:

- (void) captureScreenVideo {

  if (!writerInput.readyForMoreMediaData) {
    return;
  }

  CGSize esize = CGSizeMake(eagl.backingWidth, eagl.backingHeight);
  NSInteger myDataLength = esize.width * esize.height * 4;
  GLuint *buffer = (GLuint *) malloc(myDataLength);
  glReadPixels(0, 0, esize.width, esize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
  CVPixelBufferRef pixel_buffer = NULL;
  CVPixelBufferCreateWithBytes (NULL, esize.width, esize.height, kCVPixelFormatType_32BGRA, buffer, 4 * esize.width, NULL, 0, NULL, &pixel_buffer);

  /* DON'T FREE THIS BEFORE USING pixel_buffer! */ 
  //free(buffer);

  if (![adaptor appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
    NSLog(@"FAIL");
  } else {
    NSLog(@"Success: %d", currentFrame);
    currentTime = CMTimeAdd(currentTime, frameLength);
  }

  free(buffer);
  CVPixelBufferRelease(pixel_buffer);

  currentFrame++;

  if (currentFrame > MAX_FRAMES) {
    VIDEO_WRITER_IS_READY = false;
    [writerInput markAsFinished];
    [videoWriter finishWriting];
    [videoWriter release];

    [self moveVideoToSavedPhotos]; 
  }
}

And finally, I move the Video to the camera roll:

- (void) moveVideoToSavedPhotos {
  ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
  NSString *localVid = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];    
  NSURL* fileURL = [NSURL fileURLWithPath:localVid];

  [library writeVideoAtPathToSavedPhotosAlbum:fileURL
                              completionBlock:^(NSURL *assetURL, NSError *error) {
                                if (error) {   
                                  NSLog(@"%@: Error saving context: %@", [self class], [error localizedDescription]);
                                }
                              }];
  [library release];
}

However, as I said, I am crashing in the call to appendPixelBuffer.

Sorry for sending so much code, but I really don't know what I am doing wrong. It seemed like it would be trivial to update a project that writes images to a video, but I am unable to take the pixel buffer I create via glReadPixels and append it. It's driving me crazy! If anyone has any advice or a working OpenGL --> video code example, that would be amazing... Thanks!

Solution

I just got something similar to this working in my open-source GPUImage framework, based on the above code, so I thought I'd provide my working solution. In my case, I was able to use a pixel buffer pool, as suggested by Srikumar, instead of manually creating a pixel buffer for each frame.

I first configure the movie to be recorded:

NSError *error = nil;

assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeAppleM4V error:&error];
if (error != nil)
{
    NSLog(@"Error: %@", error);
}


NSMutableDictionary * outputSettings = [[NSMutableDictionary alloc] init];
[outputSettings setObject: AVVideoCodecH264 forKey: AVVideoCodecKey];
[outputSettings setObject: [NSNumber numberWithInt: videoSize.width] forKey: AVVideoWidthKey];
[outputSettings setObject: [NSNumber numberWithInt: videoSize.height] forKey: AVVideoHeightKey];


assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
assetWriterVideoInput.expectsMediaDataInRealTime = YES;

// You need to use BGRA for the video in order to get realtime encoding. I use a color-swizzling shader to line up glReadPixels' normal RGBA output with the movie input's BGRA.
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                                       [NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
                                                       [NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
                                                       nil];

assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];

[assetWriter addInput:assetWriterVideoInput];

Then I use this code to grab each rendered frame using glReadPixels():

CVPixelBufferRef pixel_buffer = NULL;

CVReturn status = CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &pixel_buffer);
if ((pixel_buffer == NULL) || (status != kCVReturnSuccess))
{
    return;
}
else
{
    CVPixelBufferLockBaseAddress(pixel_buffer, 0);
    GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixel_buffer);
    glReadPixels(0, 0, videoSize.width, videoSize.height, GL_RGBA, GL_UNSIGNED_BYTE, pixelBufferData);
}

// May need to add a check here, because if two consecutive times with the same value are added to the movie, it aborts recording
CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime],120);

if(![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) 
{
    NSLog(@"Problem appending pixel buffer at time: %lld", currentTime.value);
} 
else 
{
//        NSLog(@"Recorded pixel buffer at time: %lld", currentTime.value);
}
CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);

CVPixelBufferRelease(pixel_buffer);

One thing I noticed is that if I tried to append two pixel buffers with the same integer time value (at the timescale provided), the entire recording would fail and the input would never take another pixel buffer. Similarly, if I tried to append a pixel buffer after retrieval from the pool had failed, it would abort the recording. Thus the early bailout in the code above, and the guard sketched below.
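
One way to implement that check is to remember the last appended presentation time and drop any frame whose timestamp has not advanced. This is a minimal sketch, not the exact code GPUImage uses; previousFrameTime is a hypothetical CMTime instance variable, assumed to be initialized to kCMTimeNegativeInfinity:

CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime], 120);

// Bail out (dropping this frame) rather than appending a duplicate timestamp,
// which would abort the whole recording.
if (CMTimeCompare(currentTime, previousFrameTime) <= 0)
{
    CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
    CVPixelBufferRelease(pixel_buffer);
    return;
}

if ([assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime])
{
    previousFrameTime = currentTime; // remember the last successfully appended time
}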

In addition to the above code, I use a color-swizzling shader to convert the RGBA rendering in my OpenGL ES scene to BGRA for fast encoding by the AVAssetWriter. With this, I'm able to record 640x480 video at 30 FPS on an iPhone 4.
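
For reference, the swizzle itself amounts to a single line of GLSL. The following is a minimal sketch of the idea, not the exact shader shipped in GPUImage; the inputImageTexture uniform and textureCoordinate varying are assumed to be wired up by the host program:

// Minimal BGRA-swizzling fragment shader (GLSL ES), held as an Objective-C
// string constant. Rendering the scene through this shader reorders the
// channels so that a GL_RGBA glReadPixels readback lands in memory in the
// kCVPixelFormatType_32BGRA layout the asset writer input expects.
static NSString *const kColorSwizzlingFragmentShader =
    @"varying highp vec2 textureCoordinate;\n"
    @"uniform sampler2D inputImageTexture;\n"
    @"\n"
    @"void main()\n"
    @"{\n"
    @"    gl_FragColor = texture2D(inputImageTexture, textureCoordinate).bgra;\n"
    @"}";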

Again, all of the code for this can be found within the GPUImage repository, under the GPUImageMovieWriter class.
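
For completeness, ending a recording follows the same pattern as in the question's code. This is a minimal sketch using the synchronous finishWriting call that existed as of iOS 4.x (later iOS versions deprecate it in favor of finishWritingWithCompletionHandler:):

// Stop accepting new frames, then close out the movie file. finishWriting
// returns NO on failure, in which case assetWriter.error describes why.
[assetWriterVideoInput markAsFinished];
if (![assetWriter finishWriting])
{
    NSLog(@"Failed to finish writing: %@", assetWriter.error);
}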
