Alpha Detection in Layer OK on Simulator, not iPhone


Question

First, check out this very handy extension to CALayer from elsewhere on SO. It helps you determine if a point in a layer's contents-assigned CGImageRef is or isn't transparent.

n.b.: There is no guarantee about a layer's contents being representable or responding as if it was a CGImageRef. (This can have implications for broader use of the extension referenced above, granted.) In my case, however, I know that the layers I'm testing have contents that were assigned a CGImageRef. (Hopefully this can't change out from under me after assignment! Plus I notice that contents is retained.)

OK, back to the problem at hand. Here's how I'm using the extension. For starters, I've changed the selector from containsPoint: to containsNonTransparentPoint: (I need to keep the original method around.)
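The heart of that extension is rendering the layer's image at the tap point into a 1x1 bitmap context and reading back the alpha byte. As a rough, platform-free sketch of the same idea (plain C over a raw RGBA8 buffer; the function name and threshold parameter are mine, not the SO extension's):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of the alpha hit test over an RGBA8 pixel buffer.
 * y is assumed to already be flipped so (0,0) is the lower left,
 * matching the CALayer extension's convention; row 0 of the buffer
 * is the top row, so we flip y back into a row index. */
static bool contains_non_transparent_point(const uint8_t *rgba,
                                           size_t width, size_t height,
                                           size_t x, size_t y,
                                           uint8_t alpha_threshold)
{
    if (x >= width || y >= height)
        return false;
    size_t row = height - 1 - y;
    const uint8_t *pixel = rgba + (row * width + x) * 4;
    return pixel[3] > alpha_threshold;  /* byte 3 is alpha in RGBA8 */
}
```

The real extension gets its pixel by drawing the layer's `contents` into a 1x1 `CGBitmapContext`; the indexing and threshold logic is the same.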

Now, I have a UIImageView subclass that uses seven CALayer objects. These are used for opacity-based animations (pulsing/glowing effects and on/off states). Each of those seven layers has a known CGImageRef in its contents that effectively "covers" (air quotes) one part of the entire view with its own swath of color. The rest of each image in its respective layer is transparent.

In the subclass, I register for single tap gestures. When one arrives, I walk through my layers to see which one was effectively tapped (that is, which one has a non-transparent point where I tapped, first one found wins) and then I can do whatever needs doing.

Here's how I handle the gesture:

- (IBAction)handleSingleTap:(UIGestureRecognizer *)sender {
    CGPoint tapPoint = [sender locationInView:sender.view];

    // Flip y so 0,0 is at lower left. (Required by layer method below.)
    tapPoint.y = sender.view.bounds.size.height - tapPoint.y;

    // Figure out which layer was effectively tapped. First match wins.
    for (CALayer *layer in myLayers) {
        if ([layer containsNonTransparentPoint:tapPoint]) {
            NSLog(@"%@ tapped at (%.0f, %.0f)", layer.name, tapPoint.x, tapPoint.y);

            // We got our layer! Do something useful with it.
            return;
        }
    }
}

The good news? All of this works beautifully on the iPhone Simulator with iOS 4.3.2. (FWIW, I'm on Lion running Xcode 4.1.)

However, on my iPhone 4 (with iOS 4.3.3), it doesn't even come close! None of my taps seem to match up with any of the layers I'd expect them to.

Even if I try the suggestion to use CGContextSetBlendMode when drawing into the 1x1 pixel context, no dice.

I am hoping it's pilot error, but I have yet to figure out what the disparity is. The taps do have a pattern but not a discernible one.

Perhaps there's a data boundary issue. Perhaps I have to do something other than flip the y coordinate to the lower-left of the image. Just not sure yet.

If anyone can please shed some light on what might be amiss, I would be most appreciative!

UPDATE, 22 September 2011: First ah-ha moment acquired! The problem isn't Simulator-vs-iPhone. It's Retina vs. Non-Retina! The same symptoms occur in the Simulator when using the Retina version. Perhaps the solution centers around scaling (CTM?) in some way/shape/form. The Quartz 2D Programming Guide also advises that "iOS applications should use UIGraphicsBeginImageContextWithOptions." I feel like I'm very close to the solution here!

Answer

OK! First, the problem wasn't Simulator-vs-iPhone. Rather, it was Retina vs. Non-Retina. The same symptoms occur in the Simulator when using the Retina version. Right away, one starts to think the solution has to do with scaling.

A very helpful post over on the Apple Dev Quartz 2D forum (along similar "be mindful of scaling" lines) steered me toward a solution. Now, I'm the first to admit, this solution is NOT pretty, but it does work for Retina and Non-Retina cases.

In short, we need to know about the scale. If we divide the image width and height by that scale, ta-dah, the hit test now works on Retina and Non-Retina devices!
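Continuing the earlier platform-free sketch, a scale-aware version of the check can look like this. Multiplying the tap point (which arrives in points) by the scale before indexing the pixel buffer is equivalent to dividing the image dimensions by the scale when bounds-checking; all names here are illustrative, not the actual extension's API:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of the Retina fix: the tap is in points, but the backing
 * image is in pixels (2x as many on Retina). Convert points to pixels
 * with the scale before sampling the alpha byte. y is pre-flipped so
 * (0,0) is the lower left, as in the original extension. */
static bool hit_test_scaled(const uint8_t *rgba,
                            size_t pixel_width, size_t pixel_height,
                            double point_x, double point_y,
                            double scale,  /* 1.0 non-Retina, 2.0 Retina */
                            uint8_t alpha_threshold)
{
    size_t x = (size_t)(point_x * scale);
    size_t y = (size_t)(point_y * scale);
    if (x >= pixel_width || y >= pixel_height)
        return false;
    size_t row = pixel_height - 1 - y;
    return rgba[(row * pixel_width + x) * 4 + 3] > alpha_threshold;
}
```

With a 2x backing image, passing `scale = 1.0` lands on the wrong pixel entirely, which matches the "taps have a pattern but not a discernible one" symptom from the question.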

What I don't like about this is the mess I've had to make of that poor selector, now called containsNonTransparentPoint:Scale:. As mentioned in the question, there is never any guarantee what a layer's contents will contain. In my case I am taking care to only use this on layers with a CGImageRef in there, but this won't fly in a more general/reusable case.

All this makes me wonder if CALayer is not the best place for this particular extension after all, at least in this new incarnation. Perhaps CGImage, with some layer smarts thrown in, would be cleaner. Imagine doing a hit test on a CGImage but returning the name of the first layer that had non-transparent content at that point. There's still the problem of not knowing which layers have CGImageRefs in them, so some hinting might be required. (Left as an exercise for yours truly and the reader!)

UPDATE: After some discussion with a developer at Apple, messing with layers in this fashion is in fact ill-advised. Contrary to what I previously learned (incorrectly?), multiple UIImageViews encapsulated within a UIView are the way to go here. (I always remember learning that you want to keep your views to a minimum. Perhaps in this case it isn't as big a deal.) Nevertheless, I'll keep this answer here for now, but will not mark it as correct. Once I try out and verify the other technique, I will share that here!
