OpenGL ES to video in iOS (rendering to a texture with iOS 5 texture cache)


Problem Description

You know Apple's sample code with the CameraRipple effect? Well, I'm trying to record the camera output to a file after OpenGL has done all the cool water effects.

I've done it with glReadPixels: I read all the pixels into a void * buffer, create a CVPixelBufferRef, and append it to the AVAssetWriterInputPixelBufferAdaptor, but it's too slow because glReadPixels takes tons of time. That path looked roughly like this (a minimal sketch; the helper method name is made up, and it assumes the same pixelAdapter, currentTime, and frameLength ivars used in the code below):
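
- (void)appendFrameUsingReadPixelsAtTime:(CMTime)time // illustrative name
{
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault,
                        (size_t)_screenWidth,
                        (size_t)_screenHeight,
                        kCVPixelFormatType_32BGRA,
                        NULL,
                        &pixelBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    // glReadPixels blocks until the GPU has finished the frame -- this is
    // the slow part. Reading GL_BGRA needs the BGRA read extension;
    // otherwise read GL_RGBA and swap channels. This also assumes the
    // buffer has no row padding.
    glReadPixels(0, 0, (GLsizei)_screenWidth, (GLsizei)_screenHeight,
                 GL_BGRA, GL_UNSIGNED_BYTE,
                 CVPixelBufferGetBaseAddress(pixelBuffer));
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    if ([pixelAdapter appendPixelBuffer:pixelBuffer withPresentationTime:time]) {
        currentTime = CMTimeAdd(currentTime, frameLength);
    }
    CVPixelBufferRelease(pixelBuffer);
}

I found out that using an FBO and a texture cache you can do the same thing, but faster. Here is my code in the drawInRect: method that Apple uses: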

CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);
if (err)
{
    NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
}


CFDictionaryRef empty; // empty value for attr value.
CFMutableDictionaryRef attrs2;
empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                           NULL,
                           NULL,
                           0,
                           &kCFTypeDictionaryKeyCallBacks,
                           &kCFTypeDictionaryValueCallBacks);
attrs2 = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                  1,
                                  &kCFTypeDictionaryKeyCallBacks,
                                  &kCFTypeDictionaryValueCallBacks);

CFDictionarySetValue(attrs2,
                     kCVPixelBufferIOSurfacePropertiesKey,
                     empty);

//CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);
CVPixelBufferRef pixiel_bufer4e = NULL;

CVPixelBufferCreate(kCFAllocatorDefault, 
                    (int)_screenWidth, 
                    (int)_screenHeight,
                    kCVPixelFormatType_32BGRA,
                    attrs2,
                    &pixiel_bufer4e);
CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault,
                                              coreVideoTextureCashe, pixiel_bufer4e,
                                              NULL, // texture attributes
                                              GL_TEXTURE_2D,
                                              GL_RGBA, // opengl format
                                              (int)_screenWidth, 
                                              (int)_screenHeight,
                                              GL_BGRA, // native iOS format
                                              GL_UNSIGNED_BYTE,
                                              0,
                                              &renderTexture);
CFRelease(attrs2);
CFRelease(empty);
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

CVPixelBufferLockBaseAddress(pixiel_bufer4e, 0);

if([pixelAdapter appendPixelBuffer:pixiel_bufer4e withPresentationTime:currentTime]) {
    float result = currentTime.value;
    NSLog(@"appending frame, current time: %f", result);
    currentTime = CMTimeAdd(currentTime, frameLength);
}

CVPixelBufferUnlockBaseAddress(pixiel_bufer4e, 0);
CVPixelBufferRelease(pixiel_bufer4e);
CFRelease(renderTexture);
CFRelease(coreVideoTextureCashe);

It records a video and it's pretty quick, yet the video is just black. I think the texture cache ref is not the right one, or I'm filling it wrong.

As an update, here is another way I've tried. I must be missing something. In viewDidLoad, after I set up the OpenGL context, I do this:

    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);

    if (err)
    {
        NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
    }

    // creates the pixel buffer

    pixel_buffer = NULL;
    CVPixelBufferPoolCreatePixelBuffer (NULL, [pixelAdapter pixelBufferPool], &pixel_buffer);

    CVOpenGLESTextureRef renderTexture;
    CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault, coreVideoTextureCashe, pixel_buffer,
                                                  NULL, // texture attributes
                                                  GL_TEXTURE_2D,
                                                  GL_RGBA, //  opengl format
                                                  (int)screenWidth,
                                                  (int)screenHeight,
                                                  GL_BGRA, // native iOS format
                                                  GL_UNSIGNED_BYTE,
                                                  0,
                                                  &renderTexture);

    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

Then in drawInRect: I do this:

if (isRecording && writerInput.readyForMoreMediaData) {
    CVPixelBufferLockBaseAddress(pixel_buffer, 0);

    if([pixelAdapter appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
        currentTime = CMTimeAdd(currentTime, frameLength);
    }
    CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
    CVPixelBufferRelease(pixel_buffer);
}

Yet it crashes with EXC_BAD_ACCESS on the renderTexture, which is not nil but 0x000000001.

UPDATE

With the code below I actually managed to pull the video file, but there are some green and red flashes. I use the BGRA pixelFormatType.

Here I create the texture cache:

CVReturn err2 = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);
if (err2) 
{
    NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err);
    return;
}

然后在drawInRect中我称之为:

And then in drawInRect I call this:

if (isRecording && writerInput.readyForMoreMediaData) {
    [self cleanUpTextures];

    CFDictionaryRef empty; // empty value for attr value.
    CFMutableDictionaryRef attrs2;
    empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                           NULL,
                           NULL,
                           0,
                           &kCFTypeDictionaryKeyCallBacks,
                           &kCFTypeDictionaryValueCallBacks);
    attrs2 = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                   1,
                                   &kCFTypeDictionaryKeyCallBacks,
                                   &kCFTypeDictionaryValueCallBacks);

    CFDictionarySetValue(attrs2,
                     kCVPixelBufferIOSurfacePropertiesKey,
                     empty);

//CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);
    CVPixelBufferRef pixiel_bufer4e = NULL;

    CVPixelBufferCreate(kCFAllocatorDefault, 
                    (int)_screenWidth, 
                    (int)_screenHeight,
                    kCVPixelFormatType_32BGRA,
                    attrs2,
                    &pixiel_bufer4e);
    CVOpenGLESTextureRef renderTexture;
    CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault,
                                              coreVideoTextureCashe, pixiel_bufer4e,
                                              NULL, // texture attributes
                                              GL_TEXTURE_2D,
                                              GL_RGBA, // opengl format
                                              (int)_screenWidth, 
                                              (int)_screenHeight,
                                              GL_BGRA, // native iOS format
                                              GL_UNSIGNED_BYTE,
                                              0,
                                              &renderTexture);
    CFRelease(attrs2);
    CFRelease(empty);
    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

    CVPixelBufferLockBaseAddress(pixiel_bufer4e, 0);

    if([pixelAdapter appendPixelBuffer:pixiel_bufer4e withPresentationTime:currentTime]) {
        float result = currentTime.value;
        NSLog(@"\n\n\4eta danni i current time e : %f \n\n",result);
        currentTime = CMTimeAdd(currentTime, frameLength);
    }

    CVPixelBufferUnlockBaseAddress(pixiel_bufer4e, 0);
    CVPixelBufferRelease(pixiel_bufer4e);
    CFRelease(renderTexture);
  //  CFRelease(coreVideoTextureCashe);
}

I know I can optimize this a lot by not doing all of these things every frame, but I just wanted to make it work first. In cleanUpTextures I flush the texture cache with:

 CVOpenGLESTextureCacheFlush(coreVideoTextureCashe, 0);
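
The whole method is essentially just that one call (a minimal sketch):

- (void)cleanUpTextures
{
    // The per-frame renderTexture is already CFRelease'd in drawInRect,
    // so this just lets Core Video recycle its texture resources.
    CVOpenGLESTextureCacheFlush(coreVideoTextureCashe, 0);
}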

Something might be wrong with the RGBA stuff, or something else, but it seems it's still getting the wrong kind of cache somehow.

Recommended Answer

For recording video, this isn't the approach I'd use. You're creating a new pixel buffer for each rendered frame, which will be slow, and you're never releasing it, so it's no surprise you're getting memory warnings.

Instead, follow what I describe in this answer. I create a pixel buffer for the cached texture once, assign that texture to the FBO I'm rendering to, then append that pixel buffer using the AVAssetWriter's pixel buffer input on every frame. It's far faster to use the single pixel buffer than recreating one every frame. You also want to leave the pixel buffer associated with your FBO's texture target, rather than associating it on every frame.
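
As a rough sketch of that flow (not the exact GPUImageMovieWriter code; recordingPixelBuffer and recordingTexture are illustrative ivars, and it assumes the writer session has already started so the adaptor's pixelBufferPool exists and hands out IOSurface-backed BGRA buffers):

// One-time setup when recording starts: create the pixel buffer, wrap it
// in a cache texture, and attach that texture to the FBO being rendered to.
- (void)setUpRecordingRenderTarget // illustrative name
{
    CVPixelBufferPoolCreatePixelBuffer(NULL,
                                       [pixelAdapter pixelBufferPool],
                                       &recordingPixelBuffer);

    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                 coreVideoTextureCashe,
                                                 recordingPixelBuffer,
                                                 NULL, // texture attributes
                                                 GL_TEXTURE_2D,
                                                 GL_RGBA,
                                                 (GLsizei)_screenWidth,
                                                 (GLsizei)_screenHeight,
                                                 GL_BGRA,
                                                 GL_UNSIGNED_BYTE,
                                                 0,
                                                 &recordingTexture);

    glBindTexture(CVOpenGLESTextureGetTarget(recordingTexture),
                  CVOpenGLESTextureGetName(recordingTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D,
                           CVOpenGLESTextureGetName(recordingTexture), 0);
}

// Per frame, after rendering into that FBO: append the same buffer.
- (void)appendRecordedFrame // illustrative name
{
    glFinish(); // make sure the GPU has finished writing into the buffer

    CVPixelBufferLockBaseAddress(recordingPixelBuffer, 0);
    if ([pixelAdapter appendPixelBuffer:recordingPixelBuffer
                   withPresentationTime:currentTime]) {
        currentTime = CMTimeAdd(currentTime, frameLength);
    }
    CVPixelBufferUnlockBaseAddress(recordingPixelBuffer, 0);
}

The buffer and texture are created once and released only when recording finishes, and the FBO keeps the texture attached for the whole session.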

I encapsulate this recording code within the GPUImageMovieWriter in my open source GPUImage framework, if you want to see how this works in practice. As I indicate in the above-linked answer, doing the recording in this fashion leads to extremely fast encodes.
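
For example, recording a filtered camera feed with it looks roughly like this (a sketch against the framework's public API; the filter choice, session preset, output size, and file path are placeholders):

// Capture from the camera, run it through a filter, and record the result.
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init];

NSString *moviePath =
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"movie.m4v"];
GPUImageMovieWriter *movieWriter =
    [[GPUImageMovieWriter alloc] initWithMovieURL:[NSURL fileURLWithPath:moviePath]
                                             size:CGSizeMake(480.0, 640.0)];

[videoCamera addTarget:filter];
[filter addTarget:movieWriter];

[videoCamera startCameraCapture];
[movieWriter startRecording];

// ... and later, when done recording:
[movieWriter finishRecording];
[filter removeTarget:movieWriter];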
