Take a screenshot programmatically of a UIView + GLView


Problem description



I have a GLView inside my UIView, and now I have to take a screenshot of the combined UIView and GLView. I googled a lot but didn't find anything useful. I know how to take a screenshot of the GLView:

int width = glView.frame.size.width;
int height = glView.frame.size.height;

NSInteger myDataLength = width * height * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y < height; y++)
{
    for(int x = 0; x < width * 4; x++)
    {
        buffer2[((height - 1) - y) * width * 4 + x] = buffer[y * 4 * width + x];
    }
}
// the unflipped buffer is no longer needed.
free(buffer);
// make data provider with data.
// (note: buffer2 leaks here because no release callback is supplied.)
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
// release the intermediate Core Graphics objects; CGImageCreate retains what it needs.
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return myImage;

Solution

It seems like it's pretty tricky to get a screenshot nowadays, especially when you're mixing UIKit and OpenGL ES: there used to be UIGetScreenImage(), but Apple made it private again and rejects apps that use it.

Instead, there are two "solutions" to replace it: Screen capture in UIKit applications and OpenGL ES View Snapshot. The former does not capture OpenGL ES or video content, while the latter is only for OpenGL ES.
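For reference, the UIKit-only technique from the first technote is just a layer render into an image context. A minimal sketch (the OpenGL ES area will come out blank, as noted above; `someView` is a placeholder name):

```objc
#import <UIKit/UIKit.h>

// Capture a UIKit view hierarchy into a UIImage (no GL/video content).
UIGraphicsBeginImageContextWithOptions(someView.bounds.size, NO, 0.0f);
[someView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *uikitImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```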

There is another technical note, How do I take a screenshot of my app that contains both UIKit and Camera elements?, where they essentially say: you need to first capture the camera picture, and then, when rendering the view hierarchy, draw that image into the context.

The very same would apply for OpenGL ES: you would first need to render a snapshot of your OpenGL ES view, then render the UIKit view hierarchy into an image context and draw the image of your OpenGL ES view on top of it. Very ugly, and depending on your view hierarchy it might actually not be what you're seeing on screen (e.g. if there are views in front of your OpenGL view).
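Putting the two steps together, a compositing sketch might look like the following. This is only an illustration under assumed names: `glViewSnapshot` stands for the UIImage produced by the glReadPixels code in the question, and `containerView` for the UIView that contains `glView`:

```objc
#import <UIKit/UIKit.h>

// 1. Render the UIKit hierarchy; the OpenGL ES view renders blank here.
UIGraphicsBeginImageContextWithOptions(containerView.bounds.size, NO, 0.0f);
[containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
// 2. Draw the previously captured OpenGL ES snapshot where the GL view sits.
//    Note: this will also cover any UIKit views overlapping that rect.
[glViewSnapshot drawInRect:glView.frame];
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```

The caveat in the answer applies directly to step 2: because the snapshot is drawn last, any UIKit views layered in front of the GL view will be hidden in the combined image.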

