How to change the minimum or maximum value with a CIFilter in Core Image?
Question
I'm drawing several grayscale images in iOS. The values of each grayscale image have a minimum and a maximum, e.g. the range [41, 244] for 8-bit values.
I want to change the minimum or the maximum value. How can I set up a filter to change the minimum value, and how can I set up a filter to change the maximum value?
Maybe the drawing flow makes this clearer:
[1. Read raw grayscale data] -> [2. Create CGImageRef] -> [3. Load ref in GPU]
As of iOS 6, a filter can be applied on the GPU in real time. Manipulating the data at step 1 or 2 is too slow for me, because I have to draw 100 images per second, so I need to use a CIFilter on the GPU. I know how to filter an image, but which filter can do this job?
Some sample code would help.
CIToneCurve should be good for this kind of thing.
- (UIImage *)applyToneCurveToImage:(UIImage *)image
{
    CIContext *context = self.context;
    CIImage *ciImage = [[CIImage alloc] initWithImage:image];
    CIFilter *filter =
        [CIFilter filterWithName:@"CIToneCurve"
                   keysAndValues:
                       kCIInputImageKey, ciImage,
                       @"inputPoint0", [CIVector vectorWithX:0.00 Y:0.3],
                       @"inputPoint1", [CIVector vectorWithX:0.25 Y:0.4],
                       @"inputPoint2", [CIVector vectorWithX:0.50 Y:0.5],
                       @"inputPoint3", [CIVector vectorWithX:0.75 Y:0.6],
                       @"inputPoint4", [CIVector vectorWithX:1.00 Y:0.7],
                       nil];
    CIImage *result = [filter valueForKey:kCIOutputImageKey];
    CGImageRef cgImage = [context createCGImage:result
                                       fromRect:[result extent]];
    UIImage *filteredImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return filteredImage;
}
self.context can be a CPU or GPU (EAGL) context. The points describe a tone curve: X is the input value, Y is the output value. In this example we reduce the slope, and therefore the contrast. You could just as well keep the slope the same but cut off the extremes, reducing your maximum and minimum.
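To make the mapping concrete: the five example points above happen to lie on a straight line, so that particular curve reduces to a simple linear remap. A sketch of the arithmetic (not Core Image's actual spline evaluation):

```python
def tone(v):
    """Linear remap through the points (0, 0.3) ... (1, 0.7): slope 0.4, offset 0.3."""
    return 0.3 + 0.4 * v

# The output range shrinks from [0, 1] to [0.3, 0.7]: the slope (contrast)
# drops from 1.0 to 0.4, raising the minimum and lowering the maximum.
print(tone(0.0), tone(0.5), tone(1.0))
```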
Here are some measurements for processing 100 images on an iPad mini (average of four readings per setting). You will see that the GPU is no magic bullet: around 60-65% of the time is absorbed just moving image data around. These examples start and end with a UIImage; I get very similar results with CGImage.

                                            57x57px PNG   640x800px JPEG
UIImage->CIImage->UIImage (no filtering)
    CPU                                     0.57s         2.83s
    GPU                                     0.36s         2.83s
UIImage->CIImage->CIFilter->UIImage
    CPU                                     0.75s         4.38s
    GPU                                     0.58s         4.32s
Update
For window and level adjustments where inputs outside the window range map to zero, you would want to achieve something like this:
CGFloat w;  // window
CGFloat l;  // level

@"inputPoint0", [CIVector vectorWithX:0.0 Y:0.0],
@"inputPoint1", [CIVector vectorWithX:l - w/2 Y:0.0],
@"inputPoint2", [CIVector vectorWithX:l + w/2 Y:1.0],
@"inputPoint3", [CIVector vectorWithX:l + w/2 Y:0.0],
@"inputPoint4", [CIVector vectorWithX:1.0 Y:0.0]
In practice this will not work. The points describe a spline curve, which does not behave well with sharp changes of direction like the one through points 1, 2 and 3. (Also, points 2 and 3 cannot share the same x value, so in any case you would have to offset one slightly, e.g. point3.x = point2.x * 1.01.)
You can get the result you want with a multi-step filter chain. The first filter applies CIToneCurve appropriately (this is the correct window/level algorithm, without attempting to knock the max levels down to zero):
CIFilter *filter =
    [CIFilter filterWithName:@"CIToneCurve"
               keysAndValues:
                   kCIInputImageKey, ciImage,
                   @"inputPoint0", [CIVector vectorWithX:0.0 Y:0.0],
                   @"inputPoint1", [CIVector vectorWithX:l - w/2 Y:0.0],
                   @"inputPoint2", [CIVector vectorWithX:l Y:0.5],
                   @"inputPoint3", [CIVector vectorWithX:l + w/2 Y:1.0],
                   @"inputPoint4", [CIVector vectorWithX:1.0 Y:1.0],
                   nil];
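Per pixel, this tone curve implements the standard window/level mapping: everything below the window is black, everything above is white, and values inside the window ramp linearly through 0.5 at the level. A sketch of that mapping on normalized [0, 1] values (w and l are the window width and centre):

```python
def window_level(v, w, l):
    """Map v through a window of width w centred on level l (all values in [0, 1])."""
    lo, hi = l - w / 2, l + w / 2
    if v <= lo:
        return 0.0   # below the window: black
    if v >= hi:
        return 1.0   # above the window: white
    return (v - lo) / (hi - lo)  # linear ramp inside the window

print(window_level(0.5, 0.5, 0.5))  # the level itself maps to 0.5
```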
We make a copy of the filter, as we will need its output again in the last step:
CIFilter *filter2 = [filter copy];
Next, apply a CIColorControls filter to the result, pushing contrast up and brightness down so that only the near-white pixels remain white:
filter = [CIFilter filterWithName:@"CIColorControls"
                    keysAndValues:kCIInputImageKey,
                                  [filter valueForKey:kCIOutputImageKey], nil];
[filter setValue:[NSNumber numberWithFloat:-1] forKey:@"inputBrightness"];
[filter setValue:[NSNumber numberWithFloat:4] forKey:@"inputContrast"];
Now posterize to a 1-bit palette
filter = [CIFilter filterWithName:@"CIColorPosterize"
                    keysAndValues:kCIInputImageKey,
                                  [filter valueForKey:kCIOutputImageKey], nil];
[filter setValue:@2 forKey:@"inputLevels"];
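In per-pixel terms, posterizing quantizes each channel to a fixed number of levels; with inputLevels of 2 every value snaps to 0 or 1. A sketch of the idea (not Core Image's exact arithmetic):

```python
def posterize(v, levels=2):
    # Quantize a [0, 1] value to `levels` evenly spaced bands.
    return round(v * (levels - 1)) / (levels - 1)

print(posterize(0.2), posterize(0.8))  # with 2 levels: snaps to 0.0 and 1.0
```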
Invert the result
filter = [CIFilter filterWithName:@"CIColorInvert"
                    keysAndValues:kCIInputImageKey,
                                  [filter valueForKey:kCIOutputImageKey], nil];
Now we use this result as a mask with the window/levelled image so that all max-white levels get reduced to black.
filter = [CIFilter filterWithName:@"CIDarkenBlendMode"
                    keysAndValues:kCIInputImageKey,
                                  [filter valueForKey:kCIOutputImageKey], nil];
[filter setValue:[filter2 valueForKey:kCIOutputImageKey]
          forKey:@"inputBackgroundImage"];
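Putting the five steps together, here is a rough per-pixel simulation of what the chain computes. The 0.75 cut-off stands in for the mask produced by the contrast/posterize/invert steps; it is an assumption based on contrast 4 and brightness -1, not Core Image's exact arithmetic:

```python
def window_level(v, w, l):
    # Filter 1 (CIToneCurve): window/level mapping on normalized [0, 1] values.
    lo, hi = l - w / 2, l + w / 2
    return 0.0 if v <= lo else 1.0 if v >= hi else (v - lo) / (hi - lo)

def pipeline(v, w, l):
    t = window_level(v, w, l)
    # Filters 2-4 (CIColorControls, CIColorPosterize, CIColorInvert) build an
    # approximate binary mask: 0 for near-white pixels, 1 everywhere else.
    mask = 0.0 if t >= 0.75 else 1.0
    # Filter 5 (CIDarkenBlendMode): darkening with the mask forces the
    # maximum-white pixels down to black, leaving the rest untouched.
    return min(mask, t)

print(pipeline(0.9, 0.5, 0.5), pipeline(0.5, 0.5, 0.5))  # above-window -> 0, level -> 0.5
```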
(The original answer shows the image at each stage: the original, then the results of filter1 CIToneCurve, filter2 CIColorControls, filter3 CIColorPosterize, filter4 CIColorInvert, and filter5 CIDarkenBlendMode.)
If you take a look at Brad Larson's GPUImage, you should find that GPUImageLuminanceThresholdFilter makes a better replacement for filters 2->5.
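Conceptually, a luminance threshold collapses the whole mask-building chain into a single comparison per pixel. A sketch of the idea (the real GPUImage filter runs in a fragment shader on computed luminance, and its default threshold may differ):

```python
def luminance_threshold(v, threshold=0.5):
    # Binarize a [0, 1] luminance value against a threshold in one step.
    return 1.0 if v >= threshold else 0.0

print(luminance_threshold(0.3), luminance_threshold(0.7))
```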