iOS 5 + GLKView: How to access pixel RGB data for colour-based vertex picking?


Problem description


I've been converting my own personal OGLES 2.0 framework to take advantage of the functionality added by the new iOS 5 framework GLKit.

After pleasing results, I now wish to implement the colour-based picking mechanism described here. For this, you must access the back buffer to retrieve a touched pixel RGBA value, which is then used as a unique identifier for a vertex/primitive/display object. Of course, this requires temporary unique coloring of all vertices/primitives/display objects.
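The ID-to-colour mapping at the heart of this picking scheme is plain integer packing: each pickable object gets a unique 24-bit ID encoded into its R, G and B bytes. A minimal C sketch (function names are mine, not from the question):

```c
#include <stdint.h>

/* Pack a 24-bit object ID into R, G, B bytes (alpha is left at 255
   elsewhere so the picking pass stays opaque). */
static void id_to_rgb(uint32_t id, uint8_t *r, uint8_t *g, uint8_t *b) {
    *r = (id >> 16) & 0xFF;
    *g = (id >> 8)  & 0xFF;
    *b =  id        & 0xFF;
}

/* Recover the object ID from the bytes read back at the touch point. */
static uint32_t rgb_to_id(uint8_t r, uint8_t g, uint8_t b) {
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}
```

This gives 2^24 distinct IDs, which is more than enough for per-object or per-primitive picking.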

I have two questions, and I'd be very grateful for assistance with either:

  1. I have access to a GLKViewController, GLKView, CAEAGLLayer (of the GLKView) and an EAGLContext. I also have access to all OGLES 2.0 buffer related commands. How do I combine these to identify the color of a pixel in the EAGLContext I'm tapping on-screen?

  2. Given that I'm using Vertex Buffer Objects to do my rendering, is there a neat way to override the colour provided to my vertex shader which firstly doesn't involve modifying buffered vertex (colour) attributes, and secondly doesn't involve the addition of an IF statement into the vertex shader?

I assume the answer to (2) is "no", but for reasons of performance and non-arduous code revamping I thought it wise to check with someone more experienced.

Any suggestions would be gratefully received. Thank you for your time.

UPDATE

Well I now know how to read pixel data from the active frame buffer using glReadPixels. So I guess I just have to do the special "unique colours" render to the back buffer, briefly switch to it and read pixels, then switch back. This will inevitably create a visual flicker, but I guess it's the easiest way; certainly quicker (and more sensible) than creating a CGImageContextRef from a screen snapshot and analyzing that way.
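One detail worth noting before wiring up glReadPixels: UIKit's coordinate origin is at the top-left, while OpenGL ES framebuffer coordinates start at the bottom-left, so the touch y must be flipped before reading back. A hedged C sketch (the helper name is mine; the GL call is shown as a comment only):

```c
#include <stdint.h>

/* UIKit touch coordinates have their origin at the top-left; OpenGL ES
   framebuffer coordinates start at the bottom-left, so y must be flipped
   before the read-back. viewHeight must be in pixels, not points
   (multiply by the view's contentScaleFactor on Retina displays). */
static uint32_t flip_y(uint32_t y, uint32_t viewHeight) {
    return viewHeight - 1 - y;
}

/* The actual one-pixel read-back would then look like (sketch only):
   GLubyte rgba[4];
   glReadPixels(x, flip_y(y, viewHeight), 1, 1,
                GL_RGBA, GL_UNSIGNED_BYTE, rgba);
*/
```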

Still, any tips as regards the back buffer would be much appreciated.

Solution

Well, I've worked out exactly how to do this as concisely as possible. Below I explain how to achieve this and list all the code required :)

In order to allow touch interaction to select a pixel, first add a UITapGestureRecognizer to your GLKViewController subclass (assuming you want tap-to-select-pixel), with the following target method inside that class. You must make your GLKViewController subclass a UIGestureRecognizerDelegate:

@interface GLViewController : GLKViewController <GLKViewDelegate, UIGestureRecognizerDelegate>

After instantiating your gesture recognizer, add it to the view property (which in GLKViewController is actually a GLKView):

// Inside GLKViewController subclass init/awakeFromNib:
[[self view] addGestureRecognizer:[self tapRecognizer]];
[[self tapRecognizer] setDelegate:self];

Set the target action for your gesture recognizer; you can do this when creating it using a particular init... however I created mine using Storyboard (aka "the new Interface Builder in Xcode 4.2") and wired it up that way.

Anyway, here's my target action for the tap gesture recognizer:

-(IBAction)onTapGesture:(UIGestureRecognizer*)recognizer {
    const CGPoint loc = [recognizer locationInView:[self view]];
    [self pickAtX:loc.x Y:loc.y];
}

The pick method called in there is one I've defined inside my GLKViewController subclass:

-(void)pickAtX:(GLuint)x Y:(GLuint)y {
    GLKView *glkView = (GLKView*)[self view];
    // Note: x and y arrive in points; on Retina displays, multiply by
    // [glkView contentScaleFactor] to index the snapshot's pixels.
    UIImage *snapshot = [glkView snapshot];
    [snapshot pickPixelAtX:x Y:y];
}

This takes advantage of a handy new method snapshot that Apple kindly included in GLKView to produce a UIImage from the underlying EAGLContext.

What's important to note is a comment in the snapshot API documentation, which states:

This method should be called whenever your application explicitly needs the contents of the view; never attempt to directly read the contents of the underlying framebuffer using OpenGL ES functions.

This gave me a clue as to why my earlier attempts to invoke glReadPixels to access pixel data generated an EXC_BAD_ACCESS, and was the indicator that sent me down the right path instead.

You'll notice that in my pickAtX:Y: method defined a moment ago I call pickPixelAtX:Y: on the UIImage. This is a method I added to UIImage in a custom category:

@interface UIImage (NDBExtensions)
-(void)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y;
@end

Here is the implementation; it's the final code listing required. The code came from this question and has been amended according to the answer received there:

@implementation UIImage (NDBExtensions)

- (void)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y {

    CGImageRef cgImage = [self CGImage];
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    if ((x < width) && (y < height))
    {
        CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
        CFDataRef bitmapData = CGDataProviderCopyData(provider);
        const UInt8* data = CFDataGetBytePtr(bitmapData);
        // Rows may be padded, so index by the image's actual bytes-per-row
        // rather than assuming width * 4.
        size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);
        size_t offset = (bytesPerRow * y) + (x * 4);
        // Channel order here is BGRA; if your colours come out swapped,
        // check CGImageGetBitmapInfo for the actual byte order.
        UInt8 b = data[offset+0];
        UInt8 g = data[offset+1];
        UInt8 r = data[offset+2];
        UInt8 a = data[offset+3];
        CFRelease(bitmapData);
        NSLog(@"R:%i G:%i B:%i A:%i",r,g,b,a);
    }
}

@end
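As a side check, the per-pixel offset arithmetic used in pickPixelAtX:Y: is easy to verify in isolation; a small C helper (the name and bounds convention are mine, not from the answer):

```c
#include <stddef.h>

/* Byte offset of pixel (x, y) in a 32-bit-per-pixel bitmap. bytesPerRow
   is usually width * 4, but rows may be padded, which is why Core Graphics
   exposes CGImageGetBytesPerRow. Returns (size_t)-1 when out of bounds. */
static size_t pixel_offset(size_t x, size_t y,
                           size_t width, size_t height, size_t bytesPerRow) {
    if (x >= width || y >= height) return (size_t)-1;
    return (bytesPerRow * y) + (x * 4);
}
```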

I originally tried some related code from an Apple API doc entitled "Getting the pixel data from a CGImage context". It required two method definitions instead of one, involved considerably more code, and exposed the pixel data as void *, which I was unable to interpret correctly.

That's it! Add this code to your project, then upon tapping a pixel it will output it in the form:

R:24 G:46 B:244 A:255

Of course, you should write some means of extracting those RGBA int values (which will be in the range 0 - 255) and using them however you want. One approach is to return a UIColor from the above method, instantiated like so:

UIColor *color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];
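The 0-255 to 0.0-1.0 conversion feeding colorWithRed:green:blue:alpha: is just a division; as a trivial C sketch:

```c
#include <stdint.h>

/* Normalize an 8-bit colour channel (0-255) to the 0.0-1.0 range that
   UIColor's colorWithRed:green:blue:alpha: expects. */
static float channel_to_float(uint8_t v) {
    return (float)v / 255.0f;
}
```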
