iOS 5 + GLKView: How to access pixel RGB data for colour-based vertex picking?


Question

I've been converting my own personal OGLES 2.0 framework to take advantage of the functionality added by the new iOS 5 framework GLKit.

After pleasing results, I now wish to implement the colour-based picking mechanism described here. For this, you must access the back buffer to retrieve a touched pixel RGBA value, which is then used as a unique identifier for a vertex/primitive/display object. Of course, this requires temporary unique coloring of all vertices/primitives/display objects.
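
As a quick illustration of that idea (my own sketch, not code from the question or the answer below; the helper names encodePickingColour and decodePickingColour are hypothetical), an object index can be packed into the RGB channels of a flat colour and recovered later from the tapped pixel:

#import <Foundation/Foundation.h>
#import <OpenGLES/ES2/gl.h>

// Pack an object index into a unique flat RGB colour (supports ~16.7M objects).
static void encodePickingColour(NSUInteger objectIndex, GLubyte *r, GLubyte *g, GLubyte *b) {
    *r = (objectIndex >> 16) & 0xFF;
    *g = (objectIndex >> 8)  & 0xFF;
    *b =  objectIndex        & 0xFF;
}

// Recover the object index from the RGB bytes read back at the tapped pixel.
static NSUInteger decodePickingColour(GLubyte r, GLubyte g, GLubyte b) {
    return ((NSUInteger)r << 16) | ((NSUInteger)g << 8) | (NSUInteger)b;
}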

I have two questions, and I'd be very grateful for assistance with either:



  1. I have access to a GLKViewController, GLKView, CAEAGLLayer (of the GLKView) and an EAGLContext. I also have access to all OGLES 2.0 buffer related commands. How do I combine these to identify the color of a pixel in the EAGLContext I'm tapping on-screen?

  2. Given that I'm using Vertex Buffer Objects to do my rendering, is there a neat way to override the colour provided to my vertex shader which firstly doesn't involve modifying buffered vertex (colour) attributes, and secondly doesn't involve the addition of an IF statement into the vertex shader?


I assume the answer to (2) is "no", but for reasons of performance and non-arduous code revamping I thought it wise to check with someone more experienced.
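
For what it's worth, one workaround for (2) would be to keep a second, trivial shader program just for the picking pass (purely my own sketch, not something the answer below covers; pickingProgram, uPickingColourLocation and the r/g/b values are hypothetical). It ignores the buffered colour attribute entirely and writes a per-object uniform colour, so the regular shaders and VBOs stay untouched and no branching is introduced:

#import <OpenGLES/ES2/gl.h>

// Hypothetical fragment shader for the picking pass: one flat colour per object.
// (Compile and link this into pickingProgram during setup.)
static const char *kPickingFragmentShaderSrc =
    "precision mediump float;\n"
    "uniform vec4 uPickingColour;\n"
    "void main() { gl_FragColor = uPickingColour; }\n";

// During the picking pass, reuse the existing VBOs and draw calls, but with the
// picking program bound and each object's unique colour set as a uniform:
static void drawObjectForPicking(GLuint pickingProgram, GLint uPickingColourLocation,
                                 GLubyte r, GLubyte g, GLubyte b) {
    glUseProgram(pickingProgram);
    glUniform4f(uPickingColourLocation, r / 255.0f, g / 255.0f, b / 255.0f, 1.0f);
    // ...then issue the same glDrawArrays / glDrawElements calls as the normal
    // pass, with the VBOs already bound exactly as for regular rendering.
}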

Any suggestions would be gratefully received. Thank you for your time.

UPDATE

Well I now know how to read pixel data from the active frame buffer using glReadPixels. So I guess I just have to do the special "unique colours" render to the back buffer, briefly switch to it and read pixels, then switch back. This will inevitably create a visual flicker, but I guess it's the easiest way; certainly quicker (and more sensible) than creating a CGImageContextRef from a screen snapshot and analyzing that way.
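
For reference, the read-back itself is only a few lines (a sketch, assuming x and y are pixel coordinates and that the picking pass was rendered into a framebuffer bound by my own code; as the answer below explains, reading GLKView's own framebuffer this way is exactly what got me into trouble):

#import <OpenGLES/ES2/gl.h>

// Read the RGBA bytes of a single pixel from the currently bound framebuffer.
// Note the Y flip: OpenGL ES places the origin at the bottom-left corner.
static void readPickedPixel(GLint x, GLint y, GLubyte outPixel[4]) {
    GLint viewport[4];
    glGetIntegerv(GL_VIEWPORT, viewport);
    glReadPixels(x, viewport[3] - y - 1, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, outPixel);
    // outPixel[0..3] now hold R, G, B and A of the picked fragment.
}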

Answer

Well, I've worked out exactly how to do this as concisely as possible. Below I explain how to achieve this and list all the code required :)

In order to allow touch interaction to select a pixel, first add a UITapGestureRecognizer to your GLKViewController subclass (assuming you want tap-to-select-pixel), with the following target method inside that class. You must make your GLKViewController subclass a UIGestureRecognizerDelegate:

@interface GLViewController : GLKViewController <GLKViewDelegate, UIGestureRecognizerDelegate>

After instantiating your gesture recognizer, add it to the view property (which in GLKViewController is actually a GLKView):

// Inside GLKViewController subclass init/awakeFromNib:
[[self view] addGestureRecognizer:[self tapRecognizer]];
[[self tapRecognizer] setDelegate:self];

Set the target action for your gesture recognizer; you can do this when creating it using a particular init... however I created mine using Storyboard (aka "the new Interface Builder in Xcode 4.2") and wired it up that way.
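
If you'd rather create the recognizer in code than in Storyboard, the equivalent setup looks roughly like this (a sketch, wired to the onTapGesture: action shown in the next listing and assuming the same tapRecognizer property used above):

// Programmatic alternative, inside the GLKViewController subclass's init/awakeFromNib:
UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self
                                                                      action:@selector(onTapGesture:)];
[self setTapRecognizer:tap];
[[self view] addGestureRecognizer:tap];
[tap setDelegate:self];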

Anyway, here's my target action for the tap gesture recognizer:

-(IBAction)onTapGesture:(UIGestureRecognizer*)recognizer {
    const CGPoint loc = [recognizer locationInView:[self view]];
    [self pickAtX:loc.x Y:loc.y];
}

The pick method called in there is one I've defined inside my GLKViewController subclass:

-(void)pickAtX:(GLuint)x Y:(GLuint)y {
    GLKView *glkView = (GLKView*)[self view];
    UIImage *snapshot = [glkView snapshot];
    [snapshot pickPixelAtX:x Y:y];
}
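
One detail worth double-checking here (my own note, not something covered in the original answer): locationInView: returns values in points, while the CGImage behind the snapshot is sized in pixels, so on a Retina display the coordinates may need scaling by the view's content scale before the lookup. A sketch:

-(void)pickAtX:(GLuint)x Y:(GLuint)y {
    GLKView *glkView = (GLKView*)[self view];
    UIImage *snapshot = [glkView snapshot];
    const CGFloat scale = [glkView contentScaleFactor];   // e.g. 2.0 on Retina hardware
    [snapshot pickPixelAtX:(GLuint)(x * scale) Y:(GLuint)(y * scale)];
}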

This takes advantage of a handy new method snapshot that Apple kindly included in GLKView to produce a UIImage from the underlying EAGLContext.

What's important to note is a comment in the snapshot API documentation, which states:


This method should be called whenever your application explicitly needs the contents of the view; never attempt to directly read the contents of the underlying framebuffer using OpenGL ES functions.

This gave me a clue as to why my earlier attempts to invoke glReadPixels to access the pixel data generated an EXC_BAD_ACCESS, and it was the indicator that sent me down the right path instead.

You'll notice that in my pickAtX:Y: method defined a moment ago I call pickPixelAtX:Y: on the UIImage. This is a method I added to UIImage in a custom category:

@interface UIImage (NDBExtensions)
-(void)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y;
@end

Here is the implementation; it's the final code listing required. The code came from this question and has been amended according to the answer received there:

@implementation UIImage (NDBExtensions)

- (void)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y {

    CGImageRef cgImage = [self CGImage];
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // x and y are pixel coordinates of the backing CGImage; ignore taps outside it.
    if ((x < width) && (y < height))
    {
        CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
        CFDataRef bitmapData = CGDataProviderCopyData(provider);
        const UInt8* data = CFDataGetBytePtr(bitmapData);
        // Assumes 4 bytes per pixel and a row stride of exactly width * 4;
        // an image with padded rows would need CGImageGetBytesPerRow() here.
        size_t offset = ((width * y) + x) * 4;
        // Channel order (BGRA) follows the amended answer referenced above;
        // check the image's CGBitmapInfo if the channels appear swapped.
        UInt8 b = data[offset+0];
        UInt8 g = data[offset+1];
        UInt8 r = data[offset+2];
        UInt8 a = data[offset+3];
        CFRelease(bitmapData);
        NSLog(@"R:%i G:%i B:%i A:%i",r,g,b,a);
    }
}

@end



I originally tried some related code found in an Apple API doc entitled: "Getting the pixel data from a CGImage context" which required 2 method definitions instead of this 1, but much more code is required and there is data of type void * for which I was unable to implement the correct interpretation.

That's it! Add this code to your project, and upon tapping a pixel it will log its colour in the form:

R:24 G:46 B:244 A:255

Of course, you should write some means of extracting those RGBA int values (which will be in the range 0 - 255) and using them however you want. One approach is to return a UIColor from the above method, instantiated like so:

UIColor *color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];
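
A sketch of that variant, adapted from the category method above (pickColourAtX:Y: is a hypothetical name; it would live in the same UIImage category and makes the same layout assumptions):

- (UIColor *)pickColourAtX:(NSUInteger)x Y:(NSUInteger)y {
    CGImageRef cgImage = [self CGImage];
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    if ((x >= width) || (y >= height)) return nil;   // out-of-bounds tap

    CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
    CFDataRef bitmapData = CGDataProviderCopyData(provider);
    const UInt8 *data = CFDataGetBytePtr(bitmapData);
    size_t offset = ((width * y) + x) * 4;           // same layout assumption as above
    UInt8 b = data[offset + 0];
    UInt8 g = data[offset + 1];
    UInt8 r = data[offset + 2];
    UInt8 a = data[offset + 3];
    CFRelease(bitmapData);

    return [UIColor colorWithRed:r / 255.0f
                           green:g / 255.0f
                            blue:b / 255.0f
                           alpha:a / 255.0f];
}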
