How is term frequency calculated in TfidfVectorizer?
Question
I searched a lot to understand this but I am not able to. I understand that by default TfidfVectorizer applies l2
normalization to the term frequencies. This article explains the equation for it. I am using TfidfVectorizer on text written in the Gujarati language. Here are the details of its output:
My two documents are:
ખુબ વખાણ કરે છે
ખુબ વધારે છે
The code I am using is:
vectorizer = TfidfVectorizer(tokenizer=tokenize_words, sublinear_tf=True, use_idf=True, smooth_idf=False)
Here, tokenize_words
is my function for tokenizing words.
The list of TF-IDF values for my data is:
[[ 0.6088451 0.35959372 0.35959372 0.6088451 0. ]
[ 0. 0.45329466 0.45329466 0. 0.76749457]]
The list of features is:
['કરે', 'ખુબ', 'છે.', 'વખાણ', 'વધારે']
The values of idf are:
{'વખાણ': 1.6931471805599454, 'છે.': 1.0, 'કરે': 1.6931471805599454, 'વધારે': 1.6931471805599454, 'ખુબ': 1.0}
Please explain, using this example, what the term frequency of each term in both of my documents should be.
Answer
OK, now let's go through the documentation I linked in the comments, step by step:
Documents:
`ખુબ વખાણ કરે છે
ખુબ વધારે છે`
- Get all unique terms (features): ['કરે', 'ખુબ', 'છે.', 'વખાણ', 'વધારે']
- Calculate the frequency of each term in each document:
a. Each term present in document1 [ખુબ વખાણ કરે છે] is present once, and વધારે is not present.
b. So the term-frequency vector (sorted according to the features) is: [1 1 1 1 0]
c. Applying steps a and b to document2, we get: [0 1 1 0 1]
d. So our final term-frequency vectors are [[1 1 1 1 0], [0 1 1 0 1]]
Note: this is the term frequency you asked about.
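The counting in steps a-d can be sketched in plain Python. This is not scikit-learn's actual implementation (which builds a sparse matrix), just the same arithmetic, and it assumes the documents are already tokenized the way your tokenize_words would split them:

```python
# Documents, already tokenized (assumption: this is what tokenize_words yields)
docs = [
    ["ખુબ", "વખાણ", "કરે", "છે."],  # document 1
    ["ખુબ", "વધારે", "છે."],        # document 2
]

# Unique terms, sorted - this matches the feature ordering in the question
features = sorted({term for doc in docs for term in doc})

# Raw count of each feature in each document
tf = [[doc.count(term) for term in features] for doc in docs]

print(features)  # ['કરે', 'ખુબ', 'છે.', 'વખાણ', 'વધારે']
print(tf)        # [[1, 1, 1, 1, 0], [0, 1, 1, 0, 1]]
```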
Now find the IDF (this is computed per feature, not per document):
idf(term) = log(number of documents / number of documents containing the term) + 1
A constant 1 is always added to the idf value, so that terms which occur in every document are not ignored entirely. Separately, the "smooth_idf"
parameter (True by default, but set to False in your code) adds 1 to both document counts inside the log, as if one extra document contained every term; that smoothing is what prevents zero divisions.
idf('કરે') = log(2/1)+1 = 0.69314.. + 1 = 1.69314..
idf('ખુબ') = log(2/2)+1 = 0 + 1 = 1
idf('છે.') = log(2/2)+1 = 0 + 1 = 1
idf('વખાણ') = log(2/1)+1 = 0.69314.. + 1 = 1.69314..
idf('વધારે') = log(2/1)+1 = 0.69314.. + 1 = 1.69314..
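These idf values can be checked with a few lines of Python. With smooth_idf=False, as in the question, the formula is log(n/df) + 1:

```python
import math

n_docs = 2
# Document frequency: number of documents containing each term
df = {"કરે": 1, "ખુબ": 2, "છે.": 2, "વખાણ": 1, "વધારે": 1}

# idf formula used when smooth_idf=False
idf = {term: math.log(n_docs / d) + 1 for term, d in df.items()}

print(idf["ખુબ"])   # 1.0
print(idf["વખાણ"])  # 1.6931471805599454
```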
Note: this corresponds to the data you showed in the question.
Now calculate the TF-IDF (this again is computed per document, sorted according to the features):
a. For document1:
For 'કરે', tf-idf = tf(કરે) x idf(કરે) = 1 x 1.69314 = 1.69314
For 'ખુબ', tf-idf = tf(ખુબ) x idf(ખુબ) = 1 x 1 = 1
For 'છે.', tf-idf = tf(છે.) x idf(છે.) = 1 x 1 = 1
For 'વખાણ', tf-idf = tf(વખાણ) x idf(વખાણ) = 1 x 1.69314 = 1.69314
For 'વધારે', tf-idf = tf(વધારે) x idf(વધારે) = 0 x 1.69314 = 0
So for document1, the final tf-idf vector is [1.69314 1 1 1.69314 0]
b. Now normalization is done (l2, Euclidean):
divisor = sqrt(sqr(1.69314) + sqr(1) + sqr(1) + sqr(1.69314) + sqr(0))
        = sqrt(2.8667230596 + 1 + 1 + 2.8667230596 + 0)
        = sqrt(7.7334461192)
        = 2.7809074272977876...
Dividing each element of the tf-idf array by the divisor, we get:
[0.6088445 0.3595948 0.3595948 0.6088445 0]
Note: this is the tfidf of the first document you posted in the question (the tiny differences come from rounding 1.69314... above).
c. Now doing the same steps a and b for document2, we get:
[ 0. 0.453294 0.453294 0. 0.767494]
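Putting steps a-c together, the whole tf-idf computation with l2 normalization can be verified numerically in pure Python, mirroring what TfidfVectorizer does with norm='l2':

```python
import math

tf = [[1, 1, 1, 1, 0], [0, 1, 1, 0, 1]]  # term frequencies from the earlier step
idf = [1 + math.log(2), 1.0, 1.0, 1 + math.log(2), 1 + math.log(2)]

tfidf = []
for row in tf:
    # Multiply tf by idf, feature by feature
    weighted = [t * i for t, i in zip(row, idf)]
    # l2 norm of the row - this is the "divisor" above
    norm = math.sqrt(sum(w * w for w in weighted))
    tfidf.append([w / norm for w in weighted])

print(tfidf[0])  # approx. [0.6088, 0.3596, 0.3596, 0.6088, 0.0]
print(tfidf[1])  # approx. [0.0, 0.4533, 0.4533, 0.0, 0.7675]
```

Each output row has unit Euclidean length, which is exactly what the l2 normalization guarantees.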
Update: regarding sublinear_tf = True OR False
Your original term-frequency vector is [[1 1 1 1 0], [0 1 1 0 1]],
and you are correct in your understanding that using sublinear_tf = True will change the term-frequency vector:
new_tf = 1 + log(tf)
Now the line above is only applied to the non-zero elements of the term frequency, because log(0) is undefined.
And all your non-zero entries are 1: log(1)
is 0, and 1 + log(1) = 1 + 0 = 1.
You can see that the values remain unchanged for elements with value 1, so your new_tf = [[1 1 1 1 0], [0 1 1 0 1]] = tf (original).
The sublinear_tf
transformation is applied to your term frequencies, but the values happen to stay the same.
Hence, all the calculations below are the same, and the output is identical whether you use sublinear_tf=True
or sublinear_tf=False.
Now if you change your documents so that the term-frequency vector contains elements other than 1 and 0, you will see differences with sublinear_tf.
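For example, with a hypothetical document in which one term appears twice, the sublinear scaling does change the weights:

```python
import math

raw_tf = [2, 1, 0]  # hypothetical counts: one term repeated twice

# sublinear_tf=True replaces tf with 1 + log(tf) for non-zero counts;
# zero counts stay zero, since log(0) is undefined
sub_tf = [1 + math.log(c) if c > 0 else 0 for c in raw_tf]

print(sub_tf)  # [1.6931471805599454, 1.0, 0]
```

Here the repeated term's weight grows sublinearly (1.693 rather than 2), which dampens the influence of very frequent terms.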
Hope your doubts are cleared now.