How to compute Jaccard similarity from a pandas DataFrame
Question
I have a DataFrame of shape (1510, 1399). The columns represent products, the rows represent users, and each cell holds the value (0 or 1) a user assigned to a given product. How can I compute a jaccard_similarity_score?
I created a placeholder DataFrame listing product vs. product:
data_ibs = pd.DataFrame(index=data_g.columns,columns=data_g.columns)
I am not sure how to iterate through data_ibs to compute the similarities.
for i in range(0, len(data_ibs.columns)):
    # Loop through the columns for each column
    for j in range(0, len(data_ibs.columns)):
        .........
Answer
Short and vectorized (fast) answer:
Use 'hamming' from the pairwise distances of scikit-learn:
from sklearn.metrics.pairwise import pairwise_distances
jac_sim = 1 - pairwise_distances(df.T, metric = "hamming")
# optionally convert it to a DataFrame
jac_sim = pd.DataFrame(jac_sim, index=df.columns, columns=df.columns)
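As a quick self-contained check of the one-liner above, here it is applied to a tiny made-up 0/1 user-product matrix (the column values are invented for the example):

```python
import numpy as np
import pandas as pd
from sklearn.metrics.pairwise import pairwise_distances

# A small made-up 0/1 user-product matrix
df = pd.DataFrame({'A': [1, 0, 1, 1], 'B': [1, 1, 1, 0], 'C': [0, 0, 1, 1]})

# 1 - hamming = fraction of rows where each pair of columns agrees
jac_sim = 1 - pairwise_distances(df.T, metric="hamming")
jac_sim = pd.DataFrame(jac_sim, index=df.columns, columns=df.columns)
print(jac_sim)
# A and B agree in 2 of 4 rows -> 0.5; A and C in 3 of 4 -> 0.75
```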
Explanation:
Suppose this is your dataset:
import pandas as pd
import numpy as np
np.random.seed(0)
df = pd.DataFrame(np.random.binomial(1, 0.5, size=(100, 5)), columns=list('ABCDE'))
print(df.head())
A B C D E
0 1 1 1 1 0
1 1 0 1 1 0
2 1 1 1 1 0
3 0 0 1 1 1
4 1 1 0 1 0
Using sklearn's jaccard_similarity_score, the similarity between columns A and B is:
from sklearn.metrics import jaccard_similarity_score
print(jaccard_similarity_score(df['A'], df['B']))
0.43
This is the number of rows that have the same value, divided by the total number of rows, 100.
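In other words, the score is just the fraction of positions where the two columns agree, which can be reproduced without sklearn (re-creating the same seeded df as above):

```python
import numpy as np
import pandas as pd

# Same seeded data as in the explanation above
np.random.seed(0)
df = pd.DataFrame(np.random.binomial(1, 0.5, size=(100, 5)), columns=list('ABCDE'))

# Fraction of the 100 rows where columns A and B hold the same value
manual_score = (df['A'] == df['B']).mean()
print(manual_score)  # 0.43
```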
As far as I know, there is no pairwise version of jaccard_similarity_score, but there are pairwise versions of distances.
However, SciPy defines Jaccard distance as follows:
Given two vectors, u and v, the Jaccard distance is the proportion of those elements u[i] and v[i] that disagree where at least one of them is non-zero.
So it excludes the rows where both columns have 0 values; jaccard_similarity_score doesn't. Hamming distance, on the other hand, is in line with the similarity definition:
The proportion of those vector elements between two n-vectors u and v which disagree.
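A tiny example (made-up vectors) shows the difference between the two SciPy definitions: positions where both entries are 0 count toward hamming but are ignored by jaccard.

```python
import numpy as np
from scipy.spatial.distance import hamming, jaccard

u = np.array([1, 0, 0, 1, 0], dtype=bool)
v = np.array([1, 0, 1, 1, 0], dtype=bool)

# One disagreement (index 2) over all 5 positions -> 1/5 = 0.2
print(hamming(u, v))
# One disagreement over the 3 positions where at least one entry
# is non-zero (indices 0, 2, 3) -> 1/3
print(jaccard(u, v))
```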
So if you want to calculate jaccard_similarity_score, you can use 1 - hamming:
from sklearn.metrics.pairwise import pairwise_distances
print(1 - pairwise_distances(df.T, metric = "hamming"))
array([[ 1. , 0.43, 0.61, 0.55, 0.46],
[ 0.43, 1. , 0.52, 0.56, 0.49],
[ 0.61, 0.52, 1. , 0.48, 0.53],
[ 0.55, 0.56, 0.48, 1. , 0.49],
[ 0.46, 0.49, 0.53, 0.49, 1. ]])
In DataFrame format:
jac_sim = 1 - pairwise_distances(df.T, metric = "hamming")
jac_sim = pd.DataFrame(jac_sim, index=df.columns, columns=df.columns)
# jac_sim = np.triu(jac_sim) to set the lower triangle to zero
# jac_sim = np.tril(jac_sim) to set the upper triangle to zero
A B C D E
A 1.00 0.43 0.61 0.55 0.46
B 0.43 1.00 0.52 0.56 0.49
C 0.61 0.52 1.00 0.48 0.53
D 0.55 0.56 0.48 1.00 0.49
E 0.46 0.49 0.53 0.49 1.00
You can do the same by iterating over combinations of columns, but it will be much slower.
import itertools

sim_df = pd.DataFrame(np.ones((5, 5)), index=df.columns, columns=df.columns)
for col_pair in itertools.combinations(df.columns, 2):
    sim_df.loc[col_pair] = sim_df.loc[tuple(reversed(col_pair))] = \
        jaccard_similarity_score(df[col_pair[0]], df[col_pair[1]])
print(sim_df)
A B C D E
A 1.00 0.43 0.61 0.55 0.46
B 0.43 1.00 0.52 0.56 0.49
C 0.61 0.52 1.00 0.48 0.53
D 0.55 0.56 0.48 1.00 0.49
E 0.46 0.49 0.53 0.49 1.00
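One caveat worth noting: jaccard_similarity_score was later deprecated and removed from scikit-learn (its replacement, jaccard_score, computes the true Jaccard index rather than this match-proportion score). If your installed version no longer provides it, a pandas-only sketch of the same loop, using the seeded df from above, would be:

```python
import itertools
import numpy as np
import pandas as pd

# Same seeded data as in the explanation above
np.random.seed(0)
df = pd.DataFrame(np.random.binomial(1, 0.5, size=(100, 5)), columns=list('ABCDE'))

# Match-proportion similarity (1 - hamming) for every pair of columns
sim_df = pd.DataFrame(np.ones((5, 5)), index=df.columns, columns=df.columns)
for a, b in itertools.combinations(df.columns, 2):
    sim_df.loc[a, b] = sim_df.loc[b, a] = (df[a] == df[b]).mean()
print(sim_df.round(2))
```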