Optimal way to compute pairwise mutual information using numpy

Question

For an m x n matrix, what's the optimal (fastest) way to compute the mutual information for all pairs of columns (n x n)?

By mutual information, I mean:

I(X, Y) = H(X) + H(Y) - H(X, Y)

where H(X) is the Shannon entropy of X.
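
As a quick numeric check of this identity, here is a toy example with a made-up 2x2 joint count table (my own illustration, not part of the data below):

import numpy as np

# Made-up 2x2 joint count table, purely illustrative.
c_xy = np.array([[3.0, 1.0],
                 [1.0, 3.0]])
p_xy = c_xy / c_xy.sum()
p_x = p_xy.sum(axis=1)  # marginal distribution of X
p_y = p_xy.sum(axis=0)  # marginal distribution of Y

def H(p):
    p = p[p > 0]  # treat 0 * log(0) as 0
    return -np.sum(p * np.log2(p))

print(H(p_x) + H(p_y) - H(p_xy.ravel()))  # I(X, Y) ≈ 0.189 bits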

Currently I'm using np.histogram2d and np.histogram to calculate the joint (X,Y) and individual (X or Y) counts. For a given matrix A (e.g. a 250000 X 1000 matrix of floats), I am doing a nested for loop,

    n = A.shape[1]
    for ix in np.arange(n):
        for jx in np.arange(ix + 1, n):
            matMI[ix, jx] = calc_MI(A[:, ix], A[:, jx])

Surely there must be better/faster ways to do this?

As an aside, I've also looked for mapping functions on columns (column-wise or row-wise operations) on arrays, but haven't found a good general answer yet.
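
For what it's worth, numpy does ship np.apply_along_axis for this kind of column- or row-wise mapping, but it is only a convenience wrapper around a Python loop, not a vectorization; a minimal sketch:

import numpy as np

B = np.random.rand(6, 4)
# Apply a 1-D function to each column (axis=0); equivalent to B.mean(axis=0).
col_means = np.apply_along_axis(np.mean, 0, B)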

Here is my full implementation, following the conventions in the Wiki page:

import numpy as np

def calc_MI(X, Y, bins):
    # Joint and marginal histogram counts over the same number of bins.
    c_XY = np.histogram2d(X, Y, bins)[0]
    c_X = np.histogram(X, bins)[0]
    c_Y = np.histogram(Y, bins)[0]

    H_X = shan_entropy(c_X)
    H_Y = shan_entropy(c_Y)
    H_XY = shan_entropy(c_XY)

    # I(X, Y) = H(X) + H(Y) - H(X, Y)
    MI = H_X + H_Y - H_XY
    return MI

def shan_entropy(c):
    # Normalize counts to probabilities and drop empty bins,
    # so that 0 * log(0) is treated as 0.
    c_normalized = c / float(np.sum(c))
    c_normalized = c_normalized[np.nonzero(c_normalized)]
    H = -np.sum(c_normalized * np.log2(c_normalized))
    return H

A = np.array([[ 2.0,  140.0,  128.23, -150.5, -5.4  ],
              [ 2.4,  153.11, 130.34, -130.1, -9.5  ],
              [ 1.2,  156.9,  120.11, -110.45,-1.12 ]])

bins = 5 # ?
n = A.shape[1]
matMI = np.zeros((n, n))

for ix in np.arange(n):
    for jx in np.arange(ix+1,n):
        matMI[ix,jx] = calc_MI(A[:,ix], A[:,jx], bins)

Although my working version with nested for loops does it at reasonable speed, I'd like to know if there is a more optimal way to apply calc_MI on all the columns of A (to calculate their pairwise mutual information)?
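
One modest saving inside the nested loop, sketched below (my own suggestion, reusing shan_entropy from above): as written, each column's marginal entropy is recomputed n-1 times, so it can be precomputed once per column, leaving only the joint histogram per pair:

def pairwise_MI(A, bins):
    n = A.shape[1]
    # Marginal entropy of each column, computed once instead of n-1 times.
    H_marg = [shan_entropy(np.histogram(A[:, i], bins)[0]) for i in range(n)]
    matMI = np.zeros((n, n))
    for ix in range(n):
        for jx in range(ix + 1, n):
            c_XY = np.histogram2d(A[:, ix], A[:, jx], bins)[0]
            matMI[ix, jx] = H_marg[ix] + H_marg[jx] - shan_entropy(c_XY)
    return matMI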

I'd also like to know:

  1. Whether there are efficient ways to map functions to operate on columns (or rows) of np.arrays (maybe like np.vectorize, which looks more like a decorator)?

  2. Whether there are other optimal implementations for this specific calculation (mutual information)?

Answer

I can't suggest a faster calculation for the outer loop over the n*(n-1)/2 vectors, but your implementation of calc_MI(x, y, bins) can be simplified if you can use scipy version 0.13 or scikit-learn.

In scipy 0.13, the lambda_ argument was added to scipy.stats.chi2_contingency. This argument controls the statistic that is computed by the function. If you use lambda_="log-likelihood" (or lambda_=0), the log-likelihood ratio is returned. This is also often called the G or G² statistic. Other than a factor of 2*n (where n is the total number of samples in the contingency table), this is the mutual information. So you could implement calc_MI as:

import numpy as np
from scipy.stats import chi2_contingency

def calc_MI(x, y, bins):
    # 2-D histogram serves as the contingency table.
    c_xy = np.histogram2d(x, y, bins)[0]
    # g is the log-likelihood ratio (G) statistic.
    g, p, dof, expected = chi2_contingency(c_xy, lambda_="log-likelihood")
    # Mutual information (in nats) = G / (2 * total count).
    mi = 0.5 * g / c_xy.sum()
    return mi

The only difference between this and your implementation is that this implementation uses the natural logarithm instead of the base-2 logarithm (so it is expressing the information in "nats" instead of "bits"). If you really prefer bits, just divide mi by log(2).
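
For example (reusing A, bins, and the scipy-based calc_MI from above):

mi_nats = calc_MI(A[:, 0], A[:, 1], bins)
mi_bits = mi_nats / np.log(2)  # convert nats to bits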

If you have (or can install) sklearn (i.e. scikit-learn), you can use sklearn.metrics.mutual_info_score, and implement calc_MI as:

from sklearn.metrics import mutual_info_score

def calc_MI(x, y, bins):
    c_xy = np.histogram2d(x, y, bins)[0]
    # Pass the precomputed contingency table; the result is in nats.
    mi = mutual_info_score(None, None, contingency=c_xy)
    return mi
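
As a quick sanity check (my addition, on made-up random data), the two variants should agree, since both return the mutual information in nats:

import numpy as np
from scipy.stats import chi2_contingency
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = x + rng.normal(size=1000)
bins = 5

c_xy = np.histogram2d(x, y, bins)[0]
g, _, _, _ = chi2_contingency(c_xy, lambda_="log-likelihood")
mi_chi2 = 0.5 * g / c_xy.sum()
mi_sklearn = mutual_info_score(None, None, contingency=c_xy)
print(np.isclose(mi_chi2, mi_sklearn))  # True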
