CVOpenGLESTextureCacheCreateTextureFromImage fails to create IOSurface

Problem Description

For my current project I'm reading the main camera output of the iPhone. I then convert the pixel buffer to a cached OpenGL texture via CVOpenGLESTextureCacheCreateTextureFromImage. This works great when processing camera frames used for previewing, tested on various combinations of the iPhone 3GS, 4, 4S, and iPod Touch (4th gen) running iOS 5 and iOS 6.

But, for the actual final image, which has a very high resolution, this only works on these combinations:

  • iPhone 3GS + iOS 5.1.1
  • iPhone 4 + iOS 5.1.1
  • iPhone 4S + iOS 6.0
  • iPod Touch (4th gen) + iOS 5.0

And it doesn't work on: iPhone 4 + iOS 6.

The exact error message in the console:

Failed to create IOSurface image (texture)
2012-10-01 16:24:30.663 GLCameraRipple[676:907] Error at CVOpenGLESTextureCacheCreateTextureFromImage -6683

I've isolated this problem by changing the GLCameraRipple project from Apple. You can check out my version over here: http://lab.bitshiftcop.com/iosurface.zip

Here's how I add the still output to the current session:

- (void)setupAVCapture
{
    //-- Create CVOpenGLESTextureCacheRef for optimal CVImageBufferRef to GLES texture conversion.
    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, [EAGLContext currentContext], NULL, &_videoTextureCache);
    if (err) 
    {
        NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err);
        return;
    }

    //-- Setup Capture Session.
    _session = [[AVCaptureSession alloc] init];
    [_session beginConfiguration];

    //-- Set preset session size.
    [_session setSessionPreset:_sessionPreset];

    //-- Create a video device and an input from that device. Add the input to the capture session.
    AVCaptureDevice * videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if(videoDevice == nil)
        assert(0);

    //-- Add the device to the session.
    NSError *error;        
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    if(error)
        assert(0);

    [_session addInput:input];

    //-- Create the output for the capture session.
    AVCaptureVideoDataOutput * dataOutput = [[AVCaptureVideoDataOutput alloc] init];
    [dataOutput setAlwaysDiscardsLateVideoFrames:YES]; // Probably want to set this to NO when recording

    //-- Set the output pixel format to 32BGRA.
    [dataOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                             forKey:(id)kCVPixelBufferPixelFormatTypeKey]]; // Necessary for manual preview

    // Set dispatch to be on the main thread so OpenGL can do things with the data
    [dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];


    // Add still output
    stillOutput = [[AVCaptureStillImageOutput alloc] init];
    [stillOutput setOutputSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
    if([_session canAddOutput:stillOutput]) [_session addOutput:stillOutput];

    [_session addOutput:dataOutput];
    [_session commitConfiguration];

    [_session startRunning];
}

And here's how I capture the still output and process it:

- (void)capturePhoto
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    [stillOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
     ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
         // Process hires image
         [self captureOutput:stillOutput didOutputSampleBuffer:imageSampleBuffer fromConnection:videoConnection];
     }];
}

Here's how the texture is created:

- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
       fromConnection:(AVCaptureConnection *)connection
{
    CVReturn err;
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    if (!_videoTextureCache)
    {
        NSLog(@"No video texture cache");
        return;
    }

    if (_ripple == nil ||
        width != _textureWidth ||
        height != _textureHeight)
    {
        _textureWidth = width;
        _textureHeight = height;

        _ripple = [[RippleModel alloc] initWithScreenWidth:_screenWidth 
                                              screenHeight:_screenHeight
                                                meshFactor:_meshFactor
                                               touchRadius:5
                                              textureWidth:_textureWidth
                                             textureHeight:_textureHeight];

        [self setupBuffers];
    }

    [self cleanUpTextures];

    NSLog(@"%zi x %zi", _textureWidth, _textureHeight);

    // RGBA texture
    glActiveTexture(GL_TEXTURE0);
    err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, 
                                                       _videoTextureCache,
                                                       pixelBuffer,
                                                       NULL,
                                                       GL_TEXTURE_2D,
                                                       GL_RGBA,
                                                       _textureWidth,
                                                       _textureHeight,
                                                       GL_BGRA,
                                                       GL_UNSIGNED_BYTE,
                                                       0,
                                                       &_chromaTexture);
    if (err) 
    {
        NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
    }

    glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); 
}

Any suggestions for a solution to this problem?

Recommended Answer

The iPhone 4 (as well as the iPhone 3GS and iPod Touch 4th gen.) uses a PowerVR SGX 535 GPU, for which the maximum OpenGL ES texture size is 2048x2048. This value can be found by calling:

GLint maxTextureSize;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);

The iPod Touch 4th gen. has a camera resolution of 720x960 and the iPhone 3GS, 640x1136, but the iPhone 4's rear-facing camera resolution is 1936x2592, which is too large to fit onto a single texture.
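
As a quick sanity check, you could compare the incoming frame against that limit before handing it to the texture cache. A minimal sketch, assuming it sits inside captureOutput:didOutputSampleBuffer:fromConnection: right after the width and height are read; the placement and log message are illustrative, not part of the original project:

    // Guard against frames the GPU cannot hold in a single texture.
    GLint maxTextureSize;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
    if (width > (size_t)maxTextureSize || height > (size_t)maxTextureSize)
    {
        NSLog(@"%zu x %zu exceeds GL_MAX_TEXTURE_SIZE (%d); downscale before creating the texture", width, height, maxTextureSize);
        return;
    }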

You can always rewrite the captured image at a smaller size while preserving the aspect ratio (2592 scaled down to 2048 gives roughly 1529x2048). Brad Larson does this in his GPUImage framework, but it's pretty straightforward: just redraw the data of the original pixel buffer using Core Graphics, then create another pixel buffer from the redrawn data. The rest of the framework is a great resource as well.
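
For illustration, here's a minimal sketch of that Core Graphics approach, assuming a 32BGRA source buffer. The method name createScaledPixelBuffer:maxSide: is made up for this example, and error handling is omitted:

// Creates a smaller BGRA pixel buffer whose longest side fits maxSide,
// by redrawing the source pixels with Core Graphics. Caller releases the result.
- (CVPixelBufferRef)createScaledPixelBuffer:(CVPixelBufferRef)source maxSide:(size_t)maxSide
{
    size_t srcWidth  = CVPixelBufferGetWidth(source);
    size_t srcHeight = CVPixelBufferGetHeight(source);

    // Uniform scale factor so the longest side fits the GPU limit.
    CGFloat scale = MIN((CGFloat)maxSide / srcWidth, (CGFloat)maxSide / srcHeight);
    size_t dstWidth  = (size_t)(srcWidth * scale);
    size_t dstHeight = (size_t)(srcHeight * scale);

    CVPixelBufferRef scaled = NULL;
    NSDictionary *attrs = [NSDictionary dictionaryWithObject:[NSDictionary dictionary]
                                                      forKey:(id)kCVPixelBufferIOSurfacePropertiesKey];
    CVPixelBufferCreate(kCFAllocatorDefault, dstWidth, dstHeight,
                        kCVPixelFormatType_32BGRA,
                        (__bridge CFDictionaryRef)attrs, &scaled);

    CVPixelBufferLockBaseAddress(source, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferLockBaseAddress(scaled, 0);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Wrap the source pixels in a CGImage...
    CGContextRef srcContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(source),
                                                    srcWidth, srcHeight, 8,
                                                    CVPixelBufferGetBytesPerRow(source), colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef srcImage = CGBitmapContextCreateImage(srcContext);

    // ...and redraw it into the smaller destination buffer.
    CGContextRef dstContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(scaled),
                                                    dstWidth, dstHeight, 8,
                                                    CVPixelBufferGetBytesPerRow(scaled), colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(dstContext, CGRectMake(0, 0, dstWidth, dstHeight), srcImage);

    CGImageRelease(srcImage);
    CGContextRelease(srcContext);
    CGContextRelease(dstContext);
    CGColorSpaceRelease(colorSpace);

    CVPixelBufferUnlockBaseAddress(scaled, 0);
    CVPixelBufferUnlockBaseAddress(source, kCVPixelBufferLock_ReadOnly);
    return scaled;
}

The scaled buffer can then go through CVOpenGLESTextureCacheCreateTextureFromImage exactly as before; release it with CVPixelBufferRelease once the texture has been created.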
