Calculating entropy from GLCM of an image


Question

I am using skimage library for most of image analysis work.

I have an RGB image and I intend to extract texture features like entropy, energy, homogeneity and contrast from the image.

Below are the steps that I am performing:

import numpy as np
from skimage import io, color, feature
from skimage.filters import rank
from skimage.morphology import disk

rgbImg = io.imread(imgFlNm)
grayImg = color.rgb2gray(rgbImg)
print(grayImg.shape)  # (667, 1000), a 2-dimensional grayscale image

glcm = feature.greycomatrix(grayImg, [1], [0, np.pi/4, np.pi/2, 3*np.pi/4])
print(glcm.shape)  # (256, 256, 1, 4)

rank.entropy(glcm, disk(5))  # throws an error since entropy expects a 2-D array in its arguments

rank.entropy(grayImg, disk(5))  # gives an output

My question is: is the entropy calculated directly from the gray-scale image the same as the entropy feature extracted from the GLCM (a texture feature)?

If not, what is the right way to extract all the texture features from an image?

Note: I have already referred to:

Entropy - skimage

GLCM - Texture Features

Answer

Is the entropy calculated directly from the gray-scale image the same as the entropy feature extracted from the GLCM (a texture feature)?

No, these two entropies are rather different:

  1. skimage.filters.rank.entropy(grayImg, disk(5)) yields an array the same size as grayImg containing the local entropy across the image, computed on a circular disk with center at the corresponding pixel and radius 5 pixels. Take a look at Entropy (information theory) to find out how entropy is calculated. The values in this array are useful for segmentation (follow this link to see an example of entropy-based object detection). If your goal is to describe the entropy of the image through a single (scalar) value you can use skimage.measure.shannon_entropy(grayImg). This function basically applies the following formula to the full image:

    H = -\sum_{i=0}^{N-1} p_i \log_b(p_i)

    where N is the number of gray levels (256 for 8-bit images), p_i is the probability of a pixel having gray level i, and b is the base of the logarithm function. When b is set to 2 the returned value is measured in bits.
  2. A gray level co-occurrence matrix (GLCM) is a histogram of co-occurring grayscale values at a given offset over an image. To describe the texture of an image it is usual to extract features such as entropy, energy, contrast, correlation, etc. from several co-occurrence matrices computed for different offsets. In this case the entropy is defined as follows:

    H = -\sum_{i=0}^{N-1} \sum_{j=0}^{N-1} p_{i,j} \log_b(p_{i,j})

    where N and b are again the number of gray levels and the base of the logarithm function, respectively, and p_{i,j} stands for the probability of two pixels separated by the specified offset having intensities i and j. Unfortunately entropy is not one of the GLCM properties that you can calculate through scikit-image*. If you wish to compute this feature you need to pass the GLCM to skimage.measure.shannon_entropy (a NumPy sketch of both entropies follows the footnote below).

*At the time this post was last edited, the latest version of scikit-image was 0.13.1.
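To make the distinction concrete, here is a minimal sketch (not part of the original answer) that evaluates both quantities: skimage.measure.shannon_entropy for the image-level entropy, and a direct NumPy evaluation of the GLCM entropy formula above. It reuses the image URL from the code further below; the names img_entropy, p and glcm_entropy are illustrative only.

import numpy as np
from skimage import io, color, img_as_ubyte
from skimage.feature import greycomatrix
from skimage.measure import shannon_entropy

grayImg = img_as_ubyte(color.rgb2gray(io.imread('https://i.stack.imgur.com/1xDvJ.jpg')))

# Item 1: a single scalar entropy for the whole image (gray-level histogram)
img_entropy = shannon_entropy(grayImg)

# Item 2: entropy of one normalized GLCM (distance 1, angle 0), straight from the formula above
glcm = greycomatrix(grayImg, distances=[1], angles=[0], symmetric=True, normed=True)
p = glcm[:, :, 0, 0]
p = p[p > 0]  # drop zero entries; the convention is 0 * log(0) = 0
glcm_entropy = -np.sum(p * np.log2(p))

print(img_entropy, glcm_entropy)  # the two values are in general different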

If not, what is the right way to extract all the texture features from an image?

There are a wide variety of features to describe the texture of an image, for example local binary patterns, Gabor filters, wavelets, Laws' masks and many others. Haralick's GLCM is one of the most popular texture descriptors. One possible approach to describe the texture of an image through GLCM features consists in computing the GLCM for different offsets (each offset is defined through a distance and an angle), and extracting different properties from each GLCM.

Let us consider for example three distances (1, 2 and 3 pixels), four angles (0, 45, 90 and 135 degrees) and two properties (energy and homogeneity). This results in 3 × 4 = 12 offsets (and hence 12 GLCMs) and a feature vector of dimension 12 × 2 = 24. Here's the code:

import numpy as np
from skimage import io, color, img_as_ubyte
from skimage.feature import greycomatrix, greycoprops
from sklearn.metrics.cluster import entropy

rgbImg = io.imread('https://i.stack.imgur.com/1xDvJ.jpg')
grayImg = img_as_ubyte(color.rgb2gray(rgbImg))  # 8-bit gray levels (0-255)

# 3 distances x 4 angles = 12 offsets, 2 properties per offset
distances = [1, 2, 3]
angles = [0, np.pi/4, np.pi/2, 3*np.pi/4]
properties = ['energy', 'homogeneity']

glcm = greycomatrix(grayImg,
                    distances=distances,
                    angles=angles,
                    symmetric=True,
                    normed=True)

# 12 energies followed by 12 homogeneities: a feature vector of length 24
feats = np.hstack([greycoprops(glcm, prop).ravel() for prop in properties])

Results obtained using this image (https://i.stack.imgur.com/1xDvJ.jpg):

In [56]: entropy(grayImg)
Out[56]: 5.3864158185167534

In [57]: np.set_printoptions(precision=4)

In [58]: print(feats)
[ 0.026   0.0207  0.0237  0.0206  0.0201  0.0207  0.018   0.0206  0.0173
  0.016   0.0157  0.016   0.3185  0.2433  0.2977  0.2389  0.2219  0.2433
  0.1926  0.2389  0.1751  0.1598  0.1491  0.1565]
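If the GLCM entropy should also go into the feature vector, one possible extension (not part of the original answer; the helper name glcm_entropies is illustrative) is to evaluate the entropy formula from the answer for every (distance, angle) slice of the normed glcm computed above and stack the result next to feats:

def glcm_entropies(glcm):
    """Entropy -sum(p_ij * log2(p_ij)) for every (distance, angle) slice of a normed GLCM."""
    n_dist, n_ang = glcm.shape[2], glcm.shape[3]
    ent = np.zeros((n_dist, n_ang))
    for d in range(n_dist):
        for a in range(n_ang):
            p = glcm[:, :, d, a]
            p = p[p > 0]  # convention: 0 * log(0) = 0
            ent[d, a] = -np.sum(p * np.log2(p))
    return ent

# 12 per-offset entropies appended to the 24 energy/homogeneity features
feats_with_entropy = np.hstack([feats, glcm_entropies(glcm).ravel()])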
