Trim scanned images with PIL?

Problem description

What would be the approach to trim an image that has been input using a scanner and therefore has a large white/black area?

Recommended answer

The entropy solution seems problematic and computationally intensive. Why not edge detect?

I just wrote this Python code to solve this same problem for myself. My background was dirty white-ish, so the criterion I used was darkness and color. I simplified this criterion by taking the smallest of the R, G, or B values for each pixel, so that black or saturated red both stood out the same. I also used the average of the several darkest pixels (the obviousness setting below) for each row or column. Then I started at each edge and worked my way in until that average crossed a threshold.

Here is my code:

#these values set how sensitive the bounding box detection is
threshold = 200     #the average of the darkest values must be _below_ this to count (0 is darkest, 255 is lightest)
obviousness = 50    #how many of the darkest pixels to include (1 would mean a single dark pixel triggers it)

from PIL import Image
import numpy as np

def find_line(vals):
    #implement edge detection once, use many times 
    for i,tmp in enumerate(vals):
        tmp.sort()
        average = float(sum(tmp[:obviousness]))/len(tmp[:obviousness])
        if average <= threshold:
            return i
    return i    #no line crossed the threshold; fall back to the last index as the bound

def getbox(img):
    #get the bounding box of the interesting part of a PIL image object
    #this is done by getting the darkest of the R, G or B value of each pixel
    #and finding where the edge gets dark/colored enough
    #returns a tuple of (left,upper,right,lower)

    width, height = img.size    #for making a 2d array
    retval = [0,0,width,height] #values will be disposed of, but this is a black image's box 

    pixels = list(img.getdata())
    vals = []                   #store the value of the darkest color
    for pixel in pixels:
        vals.append(min(pixel)) #the darkest of the R, G or B values (assumes an RGB image)

    #make 2d array
    vals = np.array([vals[i * width:(i + 1) * width] for i in range(height)])

    #start with upper bounds
    forupper = vals.copy()
    retval[1] = find_line(forupper)

    #next, do lower bounds
    forlower = vals.copy()
    forlower = np.flipud(forlower)
    retval[3] = height - find_line(forlower)

    #left edge, same as before but rotate the data so the left edge is the top edge
    forleft = vals.copy()
    forleft = np.swapaxes(forleft,0,1)
    retval[0] = find_line(forleft)

    #and right edge is bottom edge of rotated array
    forright = vals.copy()
    forright = np.swapaxes(forright,0,1)
    forright = np.flipud(forright)
    retval[2] = width - find_line(forright)

    if retval[0] >= retval[2] or retval[1] >= retval[3]:
        print "error, bounding box is not legit"
        return None
    return tuple(retval)

if __name__ == '__main__':
    image = Image.open('cat.jpg')
    box = getbox(image)
    print "result is: ",box
    result = image.crop(box)
    result.show()
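
For scans where the border is close to a single colour, a shorter PIL-only variant of the same idea is to difference the image against a solid background sampled from one corner and crop to the bounding box of whatever remains. This is just a sketch under that uniform-background assumption; the trim helper and its tolerance parameter are illustrative names, not part of the code above.

from PIL import Image, ImageChops

def trim(img, tolerance=0):
    #assume the top-left pixel is the background colour of the scan border
    bg = Image.new(img.mode, img.size, img.getpixel((0, 0)))
    diff = ImageChops.difference(img, bg)
    if tolerance:
        #compute diff - tolerance (clipped at 0) so scanner noise below the tolerance is ignored
        diff = ImageChops.add(diff, diff, 2.0, -tolerance)
    bbox = diff.getbbox()   #bounding box of everything that differs from the background
    return img.crop(bbox) if bbox else img

if __name__ == '__main__':
    trimmed = trim(Image.open('cat.jpg'), tolerance=20)
    trimmed.show()

This trades the threshold and obviousness settings above for a single tolerance, but it only works well when the scan border really is uniform.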
