Optimal way to compute pairwise mutual information using numpy

Problem description

For an m x n matrix, what's the optimal (fastest) way to compute the mutual information for all pairs of columns (n x n)?

By mutual information, I mean:

I(X, Y) = H(X) + H(Y) - H(X,Y)

where H(X) refers to the Shannon entropy of X, i.e. H(X) = -sum_x p(x) * log2 p(x).

Currently I'm using np.histogram2d and np.histogram to calculate the joint (X, Y) and individual (X or Y) counts. For a given matrix A (e.g. a 250000 x 1000 matrix of floats), I am doing a nested for loop:

    n = A.shape[1]
    for ix in np.arange(n):
        for jx in np.arange(ix + 1, n):
            matMI[ix, jx] = calc_MI(A[:, ix], A[:, jx])

Surely there must be better/faster ways to do this?

As an aside, I've also looked for mapping functions on columns (column-wise or row-wise operations) on arrays, but haven't found a good general answer yet.
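
On the column-mapping question, a note of mine rather than something from the thread: np.apply_along_axis maps a 1-D function over the columns (axis=0) or rows (axis=1) of an array. Internally it is still a Python-level loop, so it helps readability rather than speed. A minimal sketch:

import numpy as np

def col_entropy(col, bins=5):
    # Shannon entropy (in bits) of one column, via a histogram.
    counts = np.histogram(col, bins)[0]
    p = counts / counts.sum()
    p = p[p > 0]                  # drop empty bins before taking the log
    return -np.sum(p * np.log2(p))

A = np.random.rand(100, 4)
H = np.apply_along_axis(col_entropy, 0, A)   # one entropy per column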

Here is my full implementation, following the conventions in the Wiki page:

import numpy as np

def calc_MI(X, Y, bins):
    # Joint and marginal histogram counts.
    c_XY = np.histogram2d(X, Y, bins)[0]
    c_X = np.histogram(X, bins)[0]
    c_Y = np.histogram(Y, bins)[0]

    H_X = shan_entropy(c_X)
    H_Y = shan_entropy(c_Y)
    H_XY = shan_entropy(c_XY)

    # I(X, Y) = H(X) + H(Y) - H(X, Y)
    MI = H_X + H_Y - H_XY
    return MI

def shan_entropy(c):
    # Normalize counts to probabilities and drop empty bins
    # before taking the log.
    c_normalized = c / float(np.sum(c))
    c_normalized = c_normalized[np.nonzero(c_normalized)]
    H = -np.sum(c_normalized * np.log2(c_normalized))
    return H

A = np.array([[ 2.0,  140.0,  128.23, -150.5, -5.4  ],
              [ 2.4,  153.11, 130.34, -130.1, -9.5  ],
              [ 1.2,  156.9,  120.11, -110.45,-1.12 ]])

bins = 5  # number of histogram bins (a tuning choice)
n = A.shape[1]
matMI = np.zeros((n, n))

for ix in np.arange(n):
    for jx in np.arange(ix+1,n):
        matMI[ix,jx] = calc_MI(A[:,ix], A[:,jx], bins)
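
A side note, not in the original post: the loop above fills only the upper triangle. Since MI is symmetric and I(X, X) = H(X), the full matrix can be recovered afterwards, e.g.:

matMI = matMI + matMI.T            # mirror the upper triangle
for ix in np.arange(n):
    # I(X, X) = H(X): the joint histogram of X with itself is diagonal.
    matMI[ix, ix] = calc_MI(A[:, ix], A[:, ix], bins)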

Although my working version with nested for loops does it at reasonable speed, I'd like to know if there is a more optimal way to apply calc_MI on all the columns of A (to calculate their pairwise mutual information)?
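
One modest saving, offered here as an editorial sketch rather than something from the thread: calc_MI recomputes each column's marginal histogram and entropy n-1 times inside the loop. Precomputing the marginal entropies once leaves only the joint term in the inner loop (reusing shan_entropy from above):

import numpy as np

def pairwise_MI(A, bins):
    n = A.shape[1]
    # Marginal entropies, computed once per column instead of once per pair.
    H = np.array([shan_entropy(np.histogram(A[:, i], bins)[0])
                  for i in range(n)])
    matMI = np.zeros((n, n))
    for ix in range(n):
        for jx in range(ix + 1, n):
            c_XY = np.histogram2d(A[:, ix], A[:, jx], bins)[0]
            # I(X, Y) = H(X) + H(Y) - H(X, Y)
            matMI[ix, jx] = H[ix] + H[jx] - shan_entropy(c_XY)
    return matMI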

I'd also like to know:

  1. Whether there are efficient ways to map functions to operate on columns (or rows) of np.arrays (maybe like np.vectorize, which looks more like a decorator)?

  2. Whether there are other optimal implementations for this specific calculation (mutual information)?

Recommended answer

I can't suggest a faster calculation for the outer loop over the n*(n-1)/2 vectors, but your implementation of calc_MI(x, y, bins) can be simplified if you can use scipy version 0.13 or scikit-learn.

In scipy 0.13, the lambda_ argument was added to scipy.stats.chi2_contingency. This argument controls the statistic that is computed by the function. If you use lambda_="log-likelihood" (or lambda_=0), the log-likelihood ratio is returned. This is also often called the G or G2 statistic. Other than a factor of 2*n (where n is the total number of samples in the contingency table), this is the mutual information. So you could implement calc_MI as:

import numpy as np
from scipy.stats import chi2_contingency

def calc_MI(x, y, bins):
    # 2-D histogram used as a contingency table.
    c_xy = np.histogram2d(x, y, bins)[0]
    # G-test statistic; g == 2 * N * MI (in nats).
    g, p, dof, expected = chi2_contingency(c_xy, lambda_="log-likelihood")
    mi = 0.5 * g / c_xy.sum()
    return mi

The only difference between this and your implementation is that this implementation uses the natural logarithm instead of the base-2 logarithm (so it is expressing the information in "nats" instead of "bits"). If you really prefer bits, just divide mi by log(2).
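
Concretely, a trivial example of the conversion (with made-up data, using calc_MI as defined above):

import numpy as np

x = np.random.rand(1000)
y = np.random.rand(1000)
mi_nats = calc_MI(x, y, 5)       # MI in nats
mi_bits = mi_nats / np.log(2)    # divide by ln(2) to get bits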

If you have (or can install) sklearn (i.e. scikit-learn), you can use sklearn.metrics.mutual_info_score, and implement calc_MI as:

import numpy as np
from sklearn.metrics import mutual_info_score

def calc_MI(x, y, bins):
    c_xy = np.histogram2d(x, y, bins)[0]
    # Pass the precomputed contingency table; the label arguments are not needed.
    mi = mutual_info_score(None, None, contingency=c_xy)
    return mi
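
As a quick sanity check (my sketch, not part of the original answer; the two versions are renamed calc_MI_scipy and calc_MI_sklearn here to coexist), both report MI in nats from the same contingency table, so they should agree to floating-point precision:

import numpy as np
from scipy.stats import chi2_contingency
from sklearn.metrics import mutual_info_score

def calc_MI_scipy(x, y, bins):
    c_xy = np.histogram2d(x, y, bins)[0]
    g, p, dof, expected = chi2_contingency(c_xy, lambda_="log-likelihood")
    return 0.5 * g / c_xy.sum()

def calc_MI_sklearn(x, y, bins):
    c_xy = np.histogram2d(x, y, bins)[0]
    return mutual_info_score(None, None, contingency=c_xy)

x = np.random.rand(10000)
y = x + 0.2 * np.random.rand(10000)   # correlated with x
print(calc_MI_scipy(x, y, 20))
print(calc_MI_sklearn(x, y, 20))      # should match to ~1e-12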
