iOS OpenGL using parameters for glTexImage2D to make a UIImage?


Question


I am working through some existing code for a project I am assigned to.

I have a successful call to glTexImage2D like this:
glTexImage2D(GL_TEXTURE_2D, 0, texture->format, texture->widthTexture, texture->heightTexture, 0, texture->format, texture->type, texture->data);

I would like to create an image (preferably a CGImage or UIImage) using the variables passed to glTexImage2D, but I don't know if it's possible.

I need to create many sequential images (many per second) from an OpenGL view and save them for later use.

Should I be able to create a CGImage or UIImage using the variables I use in glTexImage2D?

If so, how should I do it?

If not, why not, and what do you suggest for my task of saving/capturing the contents of my OpenGL view many times per second?

Edit: I have already successfully captured images using some of the techniques provided by Apple with glReadPixels, etc. I want something faster so I can get more images per second.

Edit: after reviewing and adding the code from Thomson, here is the resulting image:

The image only very slightly resembles what it should look like: it is duplicated ~5 times horizontally, with some random black space underneath.

Note: the video data (each frame) is coming over an ad-hoc network connection to the iPhone. I believe the camera is capturing each frame in the YCbCr color space.

Edit: after further reviewing Thomson's code, I copied the new code into my project and got a different image as a result:

width: 320 height: 240

I am not sure how to find the number of bytes in texture->data; it is a void pointer.
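For what it's worth, the byte count doesn't have to come from the pointer itself; it can be derived from the width, height, format, and type that were passed to glTexImage2D. Below is a minimal sketch of that calculation (texture_data_size is a hypothetical helper, not part of the project's code; the GL enum values are inlined so the snippet stands alone, and only the two format/type pairs relevant here are handled):

```c
#include <stddef.h>

/* Enum values copied from the OpenGL ES headers so the sketch is
   self-contained. */
#define GL_RGB                  0x1907
#define GL_UNSIGNED_BYTE        0x1401
#define GL_UNSIGNED_SHORT_5_6_5 0x8363

/* Returns the size in bytes of a tightly packed texture upload,
   or 0 for format/type combinations this sketch doesn't handle. */
static size_t texture_data_size(int width, int height,
                                unsigned format, unsigned type)
{
    size_t bytes_per_pixel = 0;
    if (type == GL_UNSIGNED_SHORT_5_6_5 && format == GL_RGB)
        bytes_per_pixel = 2;  /* all three channels packed into one 16 bit value */
    else if (type == GL_UNSIGNED_BYTE && format == GL_RGB)
        bytes_per_pixel = 3;  /* one byte per channel */
    return (size_t)width * (size_t)height * bytes_per_pixel;
}
```

With the 320x240 GL_RGB / GL_UNSIGNED_SHORT_5_6_5 texture described above, this gives 320 * 240 * 2 = 153,600 bytes.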

Edit: format and type

texture.type = GL_UNSIGNED_SHORT_5_6_5

texture.format = GL_RGB

Solution

Hey binnyb, here's a solution for creating a UIImage using the data stored in texture->data. v01d is certainly right that you're not going to get the UIImage as it appears in your GL framebuffer, but this will get you an image from the data before it has passed through the framebuffer.

Turns out your texture data is in 16 bit format, 5 bits for red, 6 bits for green, and 5 bits for blue. I've added code for converting the 16 bit RGB values into 32 bit RGBA values before creating a UIImage. I'm looking forward to hearing how this turns out.

// release callback so the provider can free the pixel buffer once the
// image no longer needs it (CGDataProviderCreateWithData does not copy)
static void releaseRawData(void *info, const void *data, size_t size)
{
    free((void *)data);
}

float width    = 512;
float height   = 512;
int   channels = 4;

// create a buffer for our image after converting it from 565 rgb to 8888 rgba
u_int8_t *rawData = (u_int8_t *)malloc(width * height * channels);
// texture->data is a void pointer, so index it through a typed alias
const u_int8_t *src = (const u_int8_t *)texture->data;

// unpack the 5,6,5 pixel data into 32 bit RGBA
for (int i = 0; i < width * height; ++i)
{
    // append two adjacent bytes in texture->data into a 16 bit int
    u_int16_t pixel16 = (src[i * 2] << 8) + src[i * 2 + 1];
    // mask and shift each channel into a single 8 bit unsigned, then scale
    // the 5/6 bit maximum up to the 8 bit integer maximum; the fourth byte
    // is padding (the bitmap info below tells CG to skip it)
    rawData[channels * i]     = ((pixel16 & 0xF800) >> 11) / 31.0 * 255;
    rawData[channels * i + 1] = ((pixel16 & 0x07E0) >> 5)  / 63.0 * 255;
    rawData[channels * i + 2] =  (pixel16 & 0x001F)        / 31.0 * 255;
    rawData[channels * i + 3] = 0;
}

// same as before, except the bitmap info now declares the padding byte
int                    bitsPerComponent = 8;
int                    bitsPerPixel     = channels * bitsPerComponent;
int                    bytesPerRow      = channels * width;
CGColorSpaceRef        colorSpaceRef    = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo           bitmapInfo       = kCGImageAlphaNoneSkipLast;
CGColorRenderingIntent renderingIntent  = kCGRenderingIntentDefault;

// the provider keeps a reference to rawData rather than copying it, so it
// must not be freed here; the release callback above frees it later
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                          rawData,
                                                          channels * width * height,
                                                          releaseRawData);
CGImageRef imageRef = CGImageCreate(width,
                                    height,
                                    bitsPerComponent,
                                    bitsPerPixel,
                                    bytesPerRow,
                                    colorSpaceRef,
                                    bitmapInfo,
                                    provider, NULL, NO, renderingIntent);

UIImage *newImage = [UIImage imageWithCGImage:imageRef];

CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);

The code for creating the new image comes from Creating UIImage from raw RGBA data, thanks to Rohit. I've tested this with our original 320x240 image dimensions, having converted a 24 bit RGB image into 5,6,5 format and then up to 32 bit. I haven't tested it on a 512x512 image, but I don't expect any problems.
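As a quick sanity check on the masks and scaling in the conversion loop, the 565-to-8888 unpacking can be restated as a standalone C helper and run on known pixel values (unpack565 is a hypothetical name for illustration, not part of the answer's code):

```c
#include <stdint.h>

/* Unpack one RGB565 pixel into 8 bit channels, scaling the 5 and 6 bit
   maxima (31 and 63) up to 255, as the conversion loop above does. */
static void unpack565(uint16_t pixel16, uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = (uint8_t)(((pixel16 & 0xF800) >> 11) * 255 / 31);  /* top 5 bits  */
    *g = (uint8_t)(((pixel16 & 0x07E0) >> 5)  * 255 / 63);  /* middle 6    */
    *b = (uint8_t)( (pixel16 & 0x001F)        * 255 / 31);  /* bottom 5    */
}
```

For example, 0xF800 (all red bits set) should unpack to (255, 0, 0), and 0xFFFF to pure white. Note this assumes the two bytes were already combined in the right order; if the source bytes are little-endian, the high/low bytes must be swapped before unpacking.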
