iOS GPUImage: bad result of image processing with small-size images?


Problem description

I'm trying to prepare an image for OCR, and I use GPUImage to do it. The code works fine until I crop the image! After cropping I get a bad result...

Crop area:
https://www.dropbox.com/s/e3mlp25sl6m55yk/IMG_0709.PNG

Bad result =(
https://www.dropbox.com/s/wtxw7li6paltx21/IMG_0710.PNG

+ (UIImage *) doBinarize:(UIImage *)sourceImage
{
    // First, grayscale the image using the Core Graphics routine below.
    UIImage *grayScaledImg = [self grayImage:sourceImage];

    GPUImagePicture *imageSource = [[GPUImagePicture alloc] initWithImage:grayScaledImg];

    // Adaptive thresholding: each pixel is compared against the average
    // luminance of its neighborhood (controlled by blurRadiusInPixels).
    GPUImageAdaptiveThresholdFilter *stillImageFilter = [[GPUImageAdaptiveThresholdFilter alloc] init];
    stillImageFilter.blurRadiusInPixels = 8.0;
    [stillImageFilter prepareForImageCapture];

    [imageSource addTarget:stillImageFilter];
    [imageSource processImage];
    UIImage *retImage = [stillImageFilter imageFromCurrentlyProcessedOutput];

    [imageSource removeAllTargets];

    return retImage;
}


+ (UIImage *) grayImage :(UIImage *)inputImage
{
    // Create a graphic context.
    UIGraphicsBeginImageContextWithOptions(inputImage.size, NO, 1.0);
    CGRect imageRect = CGRectMake(0, 0, inputImage.size.width, inputImage.size.height);

    // Draw the image with the luminosity blend mode.
    // On top of a white background, this will give a black and white image.
    [inputImage drawInRect:imageRect blendMode:kCGBlendModeLuminosity alpha:1.0];

    // Get the resulting image.
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}

Update:


In the meantime, when you crop your images, do so to the nearest multiple of 8 pixels in width and you should see the correct result.

Thank you @Brad Larson! I resize the image width to the nearest multiple of 8 and get what I want:

- (UIImage *)imageWithMultiple8ImageWidth:(UIImage *)image
{
    // Pad the width up to the next multiple of 8 and redraw the image at
    // that width (this stretches it slightly in the horizontal direction).
    float fixSize = next8(image.size.width);

    CGSize newSize = CGSizeMake(fixSize, image.size.height);
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return newImage;
}

float next8(float n) {

    int bits = (int)n & 7; // distance past the previous multiple of 8
    if (bits == 0)
        return n;
    return n + (8 - bits);
}


Answer

Before I even get to the core issue here, I should point out that the GPUImageAdaptiveThresholdFilter already does a conversion to grayscale as a first step, so your -grayImage: code in the above is unnecessary and will only slow things down. You can remove all that code and just pass your input image directly to the adaptive threshold filter.
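As an illustration, here is a minimal sketch of that simplification; it is just the question's doBinarize: with the grayscale step removed, assuming the same older GPUImage calls (prepareForImageCapture, imageFromCurrentlyProcessedOutput) used in the question's code:

+ (UIImage *) doBinarize:(UIImage *)sourceImage
{
    // Feed the source image straight into GPUImage; the adaptive
    // threshold filter performs its own luminance conversion.
    GPUImagePicture *imageSource = [[GPUImagePicture alloc] initWithImage:sourceImage];

    GPUImageAdaptiveThresholdFilter *stillImageFilter = [[GPUImageAdaptiveThresholdFilter alloc] init];
    stillImageFilter.blurRadiusInPixels = 8.0;
    [stillImageFilter prepareForImageCapture];

    [imageSource addTarget:stillImageFilter];
    [imageSource processImage];
    UIImage *retImage = [stillImageFilter imageFromCurrentlyProcessedOutput];

    [imageSource removeAllTargets];
    return retImage;
}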

What I believe is the problem here is a recent set of changes to the way that GPUImagePicture pulls in image data. It appears that images which aren't a multiple of 8 pixels wide end up looking like the above when imported. Some fixes were proposed about this, but if the latest code from the repository (not CocoaPods, which is often out of date relative to the GitHub repository) is still doing this, some more work may need to be done.

In the meantime, when you crop your images, do so to the nearest multiple of 8 pixels in width and you should see the correct result.
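A hypothetical call site combining this workaround with the question's helpers (croppedImage is a placeholder for whatever crop you produce, and both helpers are assumed to live on the same class as in the question's code):

// Pad the crop's width to a multiple of 8 before handing it to GPUImage,
// working around the GPUImagePicture import issue described above.
UIImage *paddedImage = [self imageWithMultiple8ImageWidth:croppedImage];
UIImage *binaryImage = [[self class] doBinarize:paddedImage];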
