CVOpenGLESTextureCacheCreateTextureFromImage on iPad 2 is too slow: it needs almost 30 ms


Problem description


I use OpenGL ES to display BGR24 data on an iPad. I am new to OpenGL ES, so for the video display part I use code from RosyWriter, an Apple sample. It works, but the CVOpenGLESTextureCacheCreateTextureFromImage call costs more than 30 ms, while in RosyWriter its cost is negligible. What I do is first convert BGR24 to the BGRA pixel format, then create a CVPixelBufferRef with CVPixelBufferCreateWithBytes, and then get a CVOpenGLESTextureRef via CVOpenGLESTextureCacheCreateTextureFromImage. My code is as follows:

- (void)transformBGRToBGRA:(const UInt8 *)pict width:(int)width height:(int)height
{
    // rgb, argb, and bgra are vImage_Buffer instance variables set up elsewhere.
    rgb.data = (void *)pict;

    // Expand each 3-byte pixel to 4 bytes, inserting the alpha channel in front.
    vImage_Error error = vImageConvert_RGB888toARGB8888(&rgb, NULL, 0, &argb, NO, kvImageNoFlags);
    if (error != kvImageNoError) {
        NSLog(@"vImageConvert_RGB888toARGB8888 error");
    }

    // Move the alpha channel from the front to the back of each pixel.
    const uint8_t permuteMap[4] = {1, 2, 3, 0};
    error = vImagePermuteChannels_ARGB8888(&argb, &bgra, permuteMap, kvImageNoFlags);
    if (error != kvImageNoError) {
        NSLog(@"vImagePermuteChannels_ARGB8888 error");
    }

    free((void *)pict);
}


After the conversion, a CVPixelBufferRef is generated, as follows:

[self transformBGRToBGRA:pict width:width height:height];

CVPixelBufferRef pixelBuffer;
CVReturn err = CVPixelBufferCreateWithBytes(NULL,
                                            width,
                                            height,
                                            kCVPixelFormatType_32BGRA,
                                            (void *)bgraData,
                                            bytesByRow,
                                            NULL,
                                            0,
                                            NULL,
                                            &pixelBuffer);

if (!pixelBuffer || err) {
    NSLog(@"CVPixelBufferCreateWithBytes failed (error: %d)", err);
    return;
}

CVOpenGLESTextureRef texture = NULL;
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                   videoTextureCache,
                                                   pixelBuffer,
                                                   NULL,
                                                   GL_TEXTURE_2D,
                                                   GL_RGBA,
                                                   width,
                                                   height,
                                                   GL_BGRA,
                                                   GL_UNSIGNED_BYTE,
                                                   0,
                                                   &texture);

if (!texture || err) {
    NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
    CVPixelBufferRelease(pixelBuffer);
    return;
}


The rest of the code, including the shaders, is almost identical to the RosyWriter sample. So I want to know why this happens and how to fix it.

Answer


After researching this for a few days, I found out why CVOpenGLESTextureCacheCreateTextureFromImage costs so much time: when the data is big (3 MB here), the allocation, copy, and move operations are considerable, especially the copy. Using a pixel buffer pool greatly improves the performance of CVOpenGLESTextureCacheCreateTextureFromImage, from 30 ms down to 5 ms, the same level as glTexImage2D(). My solution is as follows:

NSMutableDictionary *attributes = [NSMutableDictionary dictionary];
[attributes setObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
[attributes setObject:[NSNumber numberWithInt:videoWidth] forKey:(NSString *)kCVPixelBufferWidthKey];
[attributes setObject:[NSNumber numberWithInt:videoHeight] forKey:(NSString *)kCVPixelBufferHeightKey];

// Create the pool once; reuse it for every frame.
CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (CFDictionaryRef)attributes, &bufferPool);

// Get a recycled pixel buffer from the pool instead of allocating a new one.
CVPixelBufferPoolCreatePixelBuffer(NULL, bufferPool, &pixelBuffer);

// Copy the converted BGRA frame into the pooled buffer.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
UInt8 *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
memcpy(baseAddress, bgraData, bytesByRow * videoHeight);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);


Using this newly created pixelBuffer, you can make it fast.


Adding the following configuration to the attributes brings the performance to its best, less than 1 ms:

NSDictionary *IOSurfaceProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                                        [NSNumber numberWithBool:YES], @"IOSurfaceOpenGLESFBOCompatibility",
                                        [NSNumber numberWithBool:YES], @"IOSurfaceOpenGLESTextureCompatibility",
                                        nil];

[attributes setObject:IOSurfaceProperties forKey:(NSString *)kCVPixelBufferIOSurfacePropertiesKey];

