OpenGL ES 2.0 to Video on iPad/iPhone


Question

I am at my wits' end despite the good information here on StackOverflow...

I am trying to write an OpenGL renderbuffer to a video on the iPad 2 (using iOS 4.3). More precisely, this is what I am attempting:

A) set up an AVAssetWriterInputPixelBufferAdaptor

  1. create an AVAssetWriter that points to a video file

  2. set up an AVAssetWriterInput with appropriate settings

  3. set up an AVAssetWriterInputPixelBufferAdaptor to add data to the video file

B) write data to a video file using that AVAssetWriterInputPixelBufferAdaptor

  1. render OpenGL code to the screen

  2. get the OpenGL buffer via glReadPixels

  3. create a CVPixelBufferRef from the OpenGL data

  4. append that PixelBuffer to the AVAssetWriterInputPixelBufferAdaptor using the appendPixelBuffer method

However, I am having problems doing this. My strategy right now is to set up the AVAssetWriterInputPixelBufferAdaptor when a button is pressed. Once the AVAssetWriterInputPixelBufferAdaptor is valid, I set a flag to signal the EAGLView to create a pixel buffer and append it to the video file via appendPixelBuffer for a given number of frames.

Right now my code is crashing as it tries to append the second pixel buffer, giving me the following error:

-[__NSCFDictionary appendPixelBuffer:withPresentationTime:]: unrecognized selector sent to instance 0x131db0

Here is my AVAsset setup code (a lot of it was based on Rudy Aramayo's code, which does work on normal images, but is not set up for textures):

- (void) testVideoWriter {

  //initialize global info
  MOVIE_NAME = @"Documents/Movie.mov";
  CGSize size = CGSizeMake(480, 320);
  frameLength = CMTimeMake(1, 5); 
  currentTime = kCMTimeZero;
  currentFrame = 0;

  NSString *MOVIE_PATH = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
  NSError *error = nil;

  // remove any movie left over from a previous run
  unlink([MOVIE_PATH UTF8String]);

  videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:MOVIE_PATH] fileType:AVFileTypeQuickTimeMovie error:&error];

  NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoCodecH264, AVVideoCodecKey,
                                 [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                 [NSNumber numberWithInt:size.height], AVVideoHeightKey, nil];
  writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

  //writerInput.expectsMediaDataInRealTime = NO;

  NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey, nil];

  adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
  [adaptor retain];

  [videoWriter addInput:writerInput];

  [videoWriter startWriting];
  [videoWriter startSessionAtSourceTime:kCMTimeZero];

  VIDEO_WRITER_IS_READY = true;
}

Ok, now that my videoWriter and adaptor are set up, I tell my OpenGL renderer to create a pixel buffer for every frame:

- (void) captureScreenVideo {

  if (!writerInput.readyForMoreMediaData) {
    return;
  }

  CGSize esize = CGSizeMake(eagl.backingWidth, eagl.backingHeight);
  NSInteger myDataLength = esize.width * esize.height * 4;
  GLuint *buffer = (GLuint *) malloc(myDataLength);
  glReadPixels(0, 0, esize.width, esize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
  CVPixelBufferRef pixel_buffer = NULL;
  CVPixelBufferCreateWithBytes (NULL, esize.width, esize.height, kCVPixelFormatType_32BGRA, buffer, 4 * esize.width, NULL, 0, NULL, &pixel_buffer);

  /* DON'T FREE THIS BEFORE USING pixel_buffer! */
  //free(buffer);

  if (![adaptor appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
    NSLog(@"FAIL");
  } else {
    NSLog(@"Success:%d", currentFrame);
    currentTime = CMTimeAdd(currentTime, frameLength);
  }

  free(buffer);
  CVPixelBufferRelease(pixel_buffer);

  currentFrame++;

  if (currentFrame > MAX_FRAMES) {
    VIDEO_WRITER_IS_READY = false;
    [writerInput markAsFinished];
    [videoWriter finishWriting];
    [videoWriter release];

    [self moveVideoToSavedPhotos]; 
  }
}

And finally, I move the Video to the camera roll:

- (void) moveVideoToSavedPhotos {
  ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
  NSString *localVid = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];    
  NSURL* fileURL = [NSURL fileURLWithPath:localVid];

  [library writeVideoAtPathToSavedPhotosAlbum:fileURL
                              completionBlock:^(NSURL *assetURL, NSError *error) {
                                if (error) {   
                                  NSLog(@"%@: Error saving context: %@", [self class], [error localizedDescription]);
                                }
                              }];
  [library release];
}

However, as I said, I am crashing in the call to appendPixelBuffer.

Sorry for sending so much code, but I really don't know what I am doing wrong. It seemed like it would be trivial to update a project which writes images to a video, but I am unable to take the pixel buffer I create via glReadPixels and append it. It's driving me crazy! If anyone has any advice or a working code example of OpenGL --> Video that would be amazing... Thanks!

Answer

I just got something similar to this working in my open source GPUImage framework, based on the above code, so I thought I'd provide my working solution to this. In my case, I was able to use a pixel buffer pool, as suggested by Srikumar, instead of the manually created pixel buffers for each frame.

I first configure the movie to be recorded:

NSError *error = nil;

assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeAppleM4V error:&error];
if (error != nil)
{
    NSLog(@"Error: %@", error);
}


NSMutableDictionary * outputSettings = [[NSMutableDictionary alloc] init];
[outputSettings setObject: AVVideoCodecH264 forKey: AVVideoCodecKey];
[outputSettings setObject: [NSNumber numberWithInt: videoSize.width] forKey: AVVideoWidthKey];
[outputSettings setObject: [NSNumber numberWithInt: videoSize.height] forKey: AVVideoHeightKey];


assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
assetWriterVideoInput.expectsMediaDataInRealTime = YES;

// You need to use BGRA for the video in order to get realtime encoding. I use a color-swizzling shader to line up glReadPixels' normal RGBA output with the movie input's BGRA.
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                                       [NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
                                                       [NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
                                                       nil];

assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];

[assetWriter addInput:assetWriterVideoInput];

Then I use this code to grab each rendered frame using glReadPixels():

CVPixelBufferRef pixel_buffer = NULL;

CVReturn status = CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &pixel_buffer);
if ((pixel_buffer == NULL) || (status != kCVReturnSuccess))
{
    return;
}
else
{
    CVPixelBufferLockBaseAddress(pixel_buffer, 0);
    GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixel_buffer);
    glReadPixels(0, 0, videoSize.width, videoSize.height, GL_RGBA, GL_UNSIGNED_BYTE, pixelBufferData);
}

// May need to add a check here, because if two consecutive times with the same value are added to the movie, it aborts recording
CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime],120);

if(![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) 
{
    NSLog(@"Problem appending pixel buffer at time: %lld", currentTime.value);
} 
else 
{
//        NSLog(@"Recorded pixel buffer at time: %lld", currentTime.value);
}
CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);

CVPixelBufferRelease(pixel_buffer);

One thing I noticed is that if I tried to append two pixel buffers with the same integer time value (in the basis provided), the entire recording would fail and the input would never take another pixel buffer. Similarly, if I tried to append a pixel buffer after retrieval from the pool failed, it would abort the recording. Thus, the early bailout in the code above.
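
The duplicate-timestamp failure above can be avoided with a small guard before each append. This is only an illustrative sketch in plain C, with CMTime reduced to its integer value in a fixed timescale basis (120 units per second, matching the CMTimeMakeWithSeconds call above); the names are hypothetical, not part of AVFoundation:

```c
#include <stdbool.h>
#include <stdint.h>

/* Tracks the integer time value of the last frame that was appended. */
static int64_t lastAppendedValue = -1;

/* Returns true if the frame may be appended; false if appending it would
 * repeat the previous integer time value, which aborts the recording. */
bool shouldAppendFrame(int64_t timeValue) {
    if (timeValue == lastAppendedValue) {
        return false;  /* same integer tick as the last frame: drop it */
    }
    lastAppendedValue = timeValue;
    return true;
}
```

Two frames rendered within the same 1/120 s tick collapse to the same integer value, so the guard drops the second one instead of handing it to appendPixelBuffer:withPresentationTime:.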

In addition to the above code, I use a color-swizzling shader to convert the RGBA rendering in my OpenGL ES scene to BGRA for fast encoding by the AVAssetWriter. With this, I'm able to record 640x480 video at 30 FPS on an iPhone 4.
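
The transformation that shader performs is simply a red/blue channel swap. As a CPU-side sketch of the same operation (illustrative only; in GPUImage the swizzle runs on the GPU in a fragment shader):

```c
#include <stddef.h>
#include <stdint.h>

/* Swap the red and blue channels in place so glReadPixels' RGBA output
 * matches the kCVPixelFormatType_32BGRA layout the adaptor expects.
 * pixels points to pixelCount 4-byte RGBA pixels. */
void swizzleRGBAtoBGRA(uint8_t *pixels, size_t pixelCount) {
    for (size_t i = 0; i < pixelCount; i++) {
        uint8_t r = pixels[i * 4 + 0];
        pixels[i * 4 + 0] = pixels[i * 4 + 2];  /* B moves into byte 0 */
        pixels[i * 4 + 2] = r;                  /* R moves into byte 2 */
        /* G (byte 1) and A (byte 3) stay in place */
    }
}
```

Doing this per pixel on the CPU would be far too slow for realtime recording, which is why the shader approach matters.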

Again, all of the code for this can be found within the GPUImage repository, under the GPUImageMovieWriter class.
