Converting a greyscale image to a smaller 'pixel by pixel' greyscale image


Problem Description



I've got this image, which is massive, but each square in it represents a block of pixels that all share one value, and I want an image where each square becomes just one pixel with that value. The squares are not all the same size.

Some of the columns are narrower and some are wider. Here is an example, which is part of the big image:

As you can see, the squares on the left-hand side are bigger than the ones on the right-hand side. That's the problem!

Actual image:

For example, using the code below, when I try to convert my image to a smaller 'pixel by pixel' one, I get this, which is completely different from the initial picture.

from PIL import Image
import numpy as np

img = Image.open('greyscale_intense.png').convert('L')  # convert image to 8-bit grayscale
WIDTH, HEIGHT = img.size

a = list(img.getdata())  # convert image data to a list of integers
# convert that to a 2D list (list of lists of integers)
a = np.array([a[offset:offset + WIDTH] for offset in range(0, WIDTH * HEIGHT, WIDTH)])

print(" ")
print("Initial array from image:")  # print as array
print(" ")
print(a)

# keep a row/column only where its first value differs from the previous row/column
rows_mask = np.insert(np.diff(a[:, 0]).astype(bool), 0, True)
columns_mask = np.insert(np.diff(a[0]).astype(bool), 0, True)
b = a[np.ix_(rows_mask, columns_mask)]

print(" ")
print("Subarray from Image:")  # print as array
print(" ")
print(b)

print(" ")
print("Subarray from Image (clearer format):")  # print as a table-like format
print(" ")
for row in b:
    print(' '.join('{:3}'.format(value) for value in row))

img = Image.fromarray(b.astype(np.uint8), mode='L')  # mode 'L' expects 8-bit data

img.show()

What I've done in the code is create an array from the initial image and then, by ignoring any repeated values, create a subarray that has no repeated values. The new image was constructed from that.

For example for this image:

The result I get is:

As you can see from the array, 38 is repeated 9 times while 27 is repeated 8 times...

My final aim is to do the same process for a coloured RGB image as shown here.

Please help!

Solution

I don't feel like writing the code, but you could either:

a) "roll" (see here) the image one pixel to the right and difference (subtract) the rolled image from the original and then use np.where to find all pixels greater than zero as those are the "edges" where your "squares" end, i.e. where a pixel is different from its neighbour. Then find columns where any element is nonzero and use those as the indices to get values from your original image. Then roll the image down one pixel and find the horizontal rows of interest, and repeat as above but for the horizontal "edges".

Or

b) convolve the image with a differencing kernel that replaces each pixel with the difference between it and its neighbour to the right and then proceed as above. The kernel for difference between self and neighbour to the right would be:

0  0  0
0 -1  1
0  0  0 

While the difference between self and neighbour below would be:

0  0  0
0 -1  0
0  1  0

The Pillow documentation for creating kernels and applying them is here.
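To make option (a) concrete, here is a rough numpy sketch (the toy array and the names col_mask/row_mask are my own, not part of the original answer); it keeps a row or column whenever any pixel in it differs from its neighbour:

import numpy as np

# Toy "blocky" image: each value fills a rectangle, and the rectangles
# are deliberately not all the same size, as in the question.
a = np.array([[10, 10, 10, 20, 20],
              [10, 10, 10, 20, 20],
              [30, 30, 30, 40, 40]], dtype=np.uint8)

# Difference each pixel with its left/upper neighbour (the "roll and subtract" idea).
# Cast to a signed type so the subtraction cannot wrap around.
col_diff = np.diff(a.astype(np.int16), axis=1)
row_diff = np.diff(a.astype(np.int16), axis=0)

# A new square starts at every column/row where any difference is nonzero.
# Prepend True so the very first row and column are always kept.
col_mask = np.insert(np.any(col_diff != 0, axis=0), 0, True)
row_mask = np.insert(np.any(row_diff != 0, axis=1), 0, True)

# Index the original image with both masks: one pixel per square.
b = a[np.ix_(row_mask, col_mask)]
print(b)   # [[10 20]
           #  [30 40]]

Note the difference from the question's code: the masks here look at every row and column (np.any), not just the first one, which is what makes unevenly sized squares work.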


I'll illustrate what I mean with ImageMagick at the command line. First, I clone your image, and in the copy I roll the image to the right by one pixel, then I difference the result of rolling with the original image and make a new output image - normalised for greater contrast.

convert CwinB.png \( +clone -roll +1+0 \) -compose difference -composite -normalize h.png

Now I do the same again, but roll the image vertically by one pixel:

convert CwinB.png \( +clone -roll +0+1 \) -compose difference -composite -normalize v.png

Now combine both of those and take whichever image is the lighter at each pixel:

convert [vh].png -compose lighten -composite z.png

Hopefully you can see it finds the edges of your squares, and you can now choose any row or column that is entirely black to find your original pixels.
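And since the stated final aim is to do the same for a coloured RGB image, the same masking idea carries over by treating a change in any channel as an edge. A rough sketch along those lines (the file name colour_blocks.png and the variable names are illustrative, not from the original answer):

from PIL import Image
import numpy as np

# hypothetical input file: a blocky RGB image like the one in the question
img = np.array(Image.open('colour_blocks.png').convert('RGB'))

# A new square starts wherever any of the three channels changes between
# neighbouring pixels (cast to a signed type to avoid uint8 wrap-around).
col_change = np.any(np.diff(img.astype(np.int16), axis=1) != 0, axis=(0, 2))
row_change = np.any(np.diff(img.astype(np.int16), axis=0) != 0, axis=(1, 2))

col_mask = np.insert(col_change, 0, True)   # always keep the first column
row_mask = np.insert(row_change, 0, True)   # always keep the first row

small = img[np.ix_(row_mask, col_mask)]     # one pixel per square, all channels kept
Image.fromarray(small).show()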
