AVCapture appendSampleBuffer


Question

I am going insane with this one; I have looked everywhere and tried anything and everything I can think of.

I am making an iPhone app that uses AVFoundation, specifically AVCapture, to capture video using the iPhone camera.

I need to have a custom image overlaid on the video feed and included in the recording.

So far I have the AVCapture session set up: I can display the feed, access a frame, save it as a UIImage, and merge the overlay image onto it. I then convert this new UIImage into a CVPixelBufferRef. And to double-check that the buffer is working, I converted it back to a UIImage, and it still displays the image fine.
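For reference, the merge itself is plain UIKit drawing; a helper along these lines is what overlaidImage:: does (a simplified sketch, not my exact code; the drawing rects are assumptions):

- (UIImage *)overlaidImage:(UIImage *)baseImage :(UIImage *)overlayImage
{
    // Draw the video frame first, then the overlay on top, into one image context
    UIGraphicsBeginImageContextWithOptions(baseImage.size, NO, baseImage.scale);
    [baseImage drawInRect:CGRectMake(0, 0, baseImage.size.width, baseImage.size.height)];
    [overlayImage drawInRect:CGRectMake(0, 0, baseImage.size.width, baseImage.size.height)];
    UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return merged;
}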

The trouble starts when I try to convert the CVPixelBufferRef into a CMSampleBufferRef to append to the AVCaptureSession's assetWriterInput. The CMSampleBufferRef always comes back NULL when I attempt to create it.

Here is the -(void)captureOutput function:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    UIImage *botImage = [self imageFromSampleBuffer:sampleBuffer];
    UIImage *wheel = [self imageFromView:wheelView];

    UIImage *finalImage = [self overlaidImage:botImage :wheel];
    //[previewImage setImage:finalImage]; <- works -- the image is being merged into one UIImage

    // Wrap the composited image's raw bytes in a CVPixelBuffer.
    // Note: the buffer dimensions come from self.view.bounds, while the bytes
    // and stride come from cgImage; these only line up if the sizes agree.
    CVPixelBufferRef pixelBuffer = NULL;
    CGImageRef cgImage = CGImageCreateCopy(finalImage.CGImage);
    CFDataRef image = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
    CVReturn status = CVPixelBufferCreateWithBytes(NULL,
                          self.view.bounds.size.width,
                          self.view.bounds.size.height,
                          kCVPixelFormatType_32BGRA,
                          (void *)CFDataGetBytePtr(image),
                          CGImageGetBytesPerRow(cgImage),
                          NULL,
                          0,
                          NULL,
                          &pixelBuffer);
    if (status == kCVReturnSuccess) {
        OSStatus result = 0;

        CMVideoFormatDescriptionRef videoInfo = NULL;
        result = CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixelBuffer, &videoInfo);
        NSParameterAssert(result == 0 && videoInfo != NULL);

        // NB: the seventh argument (sampleTiming) is NULL here;
        // CMSampleBufferCreateForImageBuffer expects a valid CMSampleTimingInfo,
        // which is likely why creation fails and myBuffer stays NULL.
        CMSampleBufferRef myBuffer = NULL;
        result = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
                     pixelBuffer, true, NULL, NULL, videoInfo, NULL, &myBuffer);
        NSParameterAssert(result == 0 && myBuffer != NULL); // always null :S

        NSLog(@"Trying to append");

        if (!CMSampleBufferDataIsReady(myBuffer)) {
            NSLog(@"sampleBuffer data is not ready");
            return;
        }

        if (![assetWriterInput isReadyForMoreMediaData]) {
            NSLog(@"Not ready for data :(");
            return;
        }

        if (![assetWriterInput appendSampleBuffer:myBuffer]) {
            NSLog(@"Failed to append pixel buffer");
        }

        // (Also note: cgImage, image, pixelBuffer, videoInfo and myBuffer are
        // never released on any of these paths, so this leaks per frame.)
    }
}

Another solution I keep hearing about is using an AVAssetWriterInputPixelBufferAdaptor, which eliminates the need for the messy CMSampleBufferRef wrapping. However, I have scoured Stack Overflow and the Apple developer forums and docs and can't find a clear description or example of how to set it up or how to use it. If anyone has a working example, could you please show me or help me nut out the above issue? I have been working on this non-stop for a week and am at my wits' end.

Let me know if you need any other info.

Thanks in advance,

Michael

Answer

You need an AVAssetWriterInputPixelBufferAdaptor. Here is the code to create it:

// Create dictionary for pixel buffer adaptor
NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], (NSString *)kCVPixelBufferPixelFormatTypeKey,
    nil];

// Create pixel buffer adaptor
m_pixelsBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc]
       initWithAssetWriterInput:assetWriterInput
    sourcePixelBufferAttributes:bufferAttributes];
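The adaptor wraps an existing AVAssetWriterInput. In case the surrounding setup is unclear, a minimal sketch of the writer it attaches to might look like this (outputURL, the 640x480 dimensions, and the variable names are assumptions, not part of the original answer):

NSError *error = nil;
AVAssetWriter *assetWriter = [[AVAssetWriter alloc] initWithURL:outputURL
                                                       fileType:AVFileTypeQuickTimeMovie
                                                          error:&error];

// Output settings for the video track (codec and frame size are placeholders)
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    AVVideoCodecH264, AVVideoCodecKey,
    [NSNumber numberWithInt:640], AVVideoWidthKey,
    [NSNumber numberWithInt:480], AVVideoHeightKey,
    nil];

AVAssetWriterInput *assetWriterInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:videoSettings];
assetWriterInput.expectsMediaDataInRealTime = YES;
[assetWriter addInput:assetWriterInput];

// Start writing before appending any buffers; the adaptor's pixelBufferPool
// is NULL until startSessionAtSourceTime: has been called on the writer.
[assetWriter startWriting];
[assetWriter startSessionAtSourceTime:kCMTimeZero];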

And the code to use it:

// If ready to have more media data
if (m_pixelsBufferAdaptor.assetWriterInput.readyForMoreMediaData) {
    // Create a pixel buffer from the adaptor's pool
    CVPixelBufferRef pixelsBuffer = NULL;
    CVPixelBufferPoolCreatePixelBuffer(NULL, m_pixelsBufferAdaptor.pixelBufferPool, &pixelsBuffer);

    // Lock pixel buffer address
    CVPixelBufferLockBaseAddress(pixelsBuffer, 0);

    // Create your function to set your pixels data in the buffer
    // (in your case, fill with your finalImage data)
    [self yourFunctionToPutDataInPixelBuffer:CVPixelBufferGetBaseAddress(pixelsBuffer)];

    // Unlock pixel buffer address
    CVPixelBufferUnlockBaseAddress(pixelsBuffer, 0);

    // Append pixel buffer. Calculate currentFrameTime as you need; the
    // simplest way is to start the frame time at zero and increment it by one
    // frame's duration (the inverse of your framerate) each time you write a frame.
    [m_pixelsBufferAdaptor appendPixelBuffer:pixelsBuffer withPresentationTime:currentFrameTime];

    // Release pixel buffer
    CVPixelBufferRelease(pixelsBuffer);
}
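The snippet leaves two things open: the pixel-filling function and currentFrameTime. A minimal sketch of a filler, assuming the pool produces 32BGRA buffers whose dimensions match the image (the method name and signature are placeholders, not part of the original answer):

// Hypothetical filler: draw a UIImage into an already-locked pixel buffer.
// Assumes a 32BGRA buffer whose width/height match the image.
- (void)fillPixelBuffer:(CVPixelBufferRef)pixelsBuffer withImage:(UIImage *)image
{
    void *baseAddress = CVPixelBufferGetBaseAddress(pixelsBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress,
                               CVPixelBufferGetWidth(pixelsBuffer),
                               CVPixelBufferGetHeight(pixelsBuffer),
                               8,
                               CVPixelBufferGetBytesPerRow(pixelsBuffer),
                               colorSpace,
                               kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(context,
                       CGRectMake(0, 0,
                                  CVPixelBufferGetWidth(pixelsBuffer),
                                  CVPixelBufferGetHeight(pixelsBuffer)),
                       image.CGImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
}

And for the timestamp, assuming a fixed 30 fps with a frame counter you increment after each append:

CMTime currentFrameTime = CMTimeMake(frameCount, 30); // frame N plays at N/30 seconds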

And don't forget to release your pixelsBufferAdaptor.

