numpy.ndarray with shape (height, width, n) from n values per Image pixel


Question

My input is a PIL.Image.Image with mode RGB or RGBA, and I need to fill a numpy.ndarray with 3 float values calculated from the RGB values of each pixel. The output array should be indexable by the pixel coordinates. I have found the following way to do it:

import numpy as np
from PIL import Image

def generate_ycbcr(img: Image.Image):
    for r, g, b in img.getdata():
        yield 0.299 * r + 0.587 * g + 0.114 * b
        yield 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
        yield 128 + 0.5 * r - 0.418688 * g - 0.081312 * b

def get_ycbcr_arr(img: Image.Image):
    width, height = img.size
    arr = np.fromiter(generate_ycbcr(img), float, height * width * 3)
    return arr.reshape(height, width, 3)

It works, but I suspect there is a better and/or faster way. Please tell me if there is one, but also if there is not.

N.B.: I know I can convert() the image to YCbCr, and then fill a numpy.array from that, but the conversion is rounded to integer values, which is not what I need.
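For reference, here is a minimal sketch (not part of the original question; the 4×4 test image is made up for illustration) showing that the built-in conversion indeed returns rounded 8-bit integers:

import numpy as np
from PIL import Image

img = Image.new('RGB', (4, 4), (10, 200, 33))  # hypothetical test image

ycbcr = np.array(img.convert('YCbCr'))
print(ycbcr.dtype)   # uint8 -- channel values are rounded to integers
print(ycbcr.shape)   # (4, 4, 3), i.e. (height, width, 3)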

Answer

For starters, you can convert an image directly to a numpy array and use vectorized operations to do what you want:

def get_ycbcr_vectorized(img: Image.Image):
    R,G,B = np.array(img).transpose(2,0,1)[:3] # ignore alpha if present
    Y = 0.299 * R + 0.587 * G + 0.114 * B
    Cb = 128 - 0.168736 * R - 0.331264 * G + 0.5 * B
    Cr = 128 + 0.5 * R - 0.418688 * G - 0.081312 * B
    return np.array([Y,Cb,Cr]).transpose(1,2,0)

print(np.array_equal(get_ycbcr_arr(img), get_ycbcr_vectorized(img))) # True
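As a side note (a sketch, not part of the original answer; it reuses img and get_ycbcr_vectorized from above), the same per-pixel linear combination can also be written as a single matrix product, which keeps all the coefficients in one place:

# coefficient matrix and offsets matching the formulas above
M = np.array([[ 0.299   ,  0.587   ,  0.114   ],
              [-0.168736, -0.331264,  0.5     ],
              [ 0.5     , -0.418688, -0.081312]])
offset = np.array([0.0, 128.0, 128.0])

def get_ycbcr_matmul(img: Image.Image):
    rgb = np.asarray(img, dtype=float)[..., :3]  # drop alpha if present
    return rgb @ M.T + offset                    # shape (height, width, 3)

print(np.allclose(get_ycbcr_matmul(img), get_ycbcr_vectorized(img)))  # True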

However, are you sure that directly converting to 'YCbCr' will be that much different? I tested the conversion defined in the above function:

import matplotlib.pyplot as plt
def aux():
    # generate every integer R/G/B combination
    R,G,B = np.ogrid[:256,:256,:256]
    Y = 0.299 * R + 0.587 * G + 0.114 * B
    Cb = 128 - 0.168736 * R - 0.331264 * G + 0.5 * B
    Cr = 128 + 0.5 * R - 0.418688 * G - 0.081312 * B

    # plot the maximum error along one of the RGB channels
    for arr,label in zip([Y,Cb,Cr], ['Y', 'Cb', 'Cr']):
        plt.figure()
        plt.imshow((arr - arr.round()).max(-1))
        plt.xlabel('R')
        plt.ylabel('G')
        plt.title(f'max_B ({label} - {label}.round())')
        plt.colorbar()

aux()   
plt.show()

The results suggest that the largest absolute error is 0.5, although these errors happen all over the pixels.
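The same conclusion can be checked numerically without plotting (a small sketch reusing the formulas from aux() above):

R, G, B = np.ogrid[:256, :256, :256]
Y  = 0.299 * R + 0.587 * G + 0.114 * B
Cb = 128 - 0.168736 * R - 0.331264 * G + 0.5 * B
Cr = 128 + 0.5 * R - 0.418688 * G - 0.081312 * B

# largest absolute rounding error over all 256**3 RGB combinations
print(max(np.abs(c - c.round()).max() for c in (Y, Cb, Cr)))  # at most 0.5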

So yeah, this could be a large-ish relative error, but this isn't necessarily a huge issue.

So in case the built-in conversion is accurate enough,

arr = np.array(img.convert('YCbCr'))

is all you need.
