How to read and write TermDocumentMatrix in R?


Question

I made a wordcloud from a CSV file in R, using the TermDocumentMatrix method from the tm package. Here is my code:

library(tm)
library(KoNLP)   # provides extractNoun
# useSejongDic() # optional: load the Sejong dictionary

csvData <- read.csv("word", encoding = "UTF-8", stringsAsFactors = FALSE)
Encoding(csvData$content) <- "UTF-8"

# extract nouns from each document
nouns <- sapply(csvData$content, extractNoun, USE.NAMES = FALSE)

# create corpus
myCorpus <- Corpus(VectorSource(nouns))
myCorpus <- tm_map(myCorpus, removePunctuation)
# remove numbers
myCorpus <- tm_map(myCorpus, removeNumbers)
# remove stop words (myStopwords must be defined beforehand)
myCorpus <- tm_map(myCorpus, removeWords, myStopwords)

# create term-document matrix
TDM <- TermDocumentMatrix(myCorpus, control = list(wordLengths = c(2, 5)))
m <- as.matrix(TDM)

This process seems to take too much time, and I think extractNoun accounts for most of it. To make the code more time-efficient, I want to save the resulting TDM to a file. When I read that saved file back, can I simply use m <- as.matrix(saved TDM file)? Or is there a better alternative?
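On the saving part of the question: a TermDocumentMatrix is an ordinary R object, so it can be serialized with saveRDS() and restored with readRDS(), and as.matrix() works on the restored object just as on the original. A minimal sketch (the sample texts and the file name tdm.rds are placeholders, not from the question):

```r
library(tm)

# build a tiny TDM from stand-in documents
docs <- Corpus(VectorSource(c("apple banana apple", "banana cherry")))
TDM  <- TermDocumentMatrix(docs)

# save the sparse TDM object to disk, then read it back
saveRDS(TDM, "tdm.rds")
savedTDM <- readRDS("tdm.rds")

# the restored object behaves like the original
m <- as.matrix(savedTDM)
```

This way the expensive extractNoun/TermDocumentMatrix pipeline only has to run once; later sessions just call readRDS().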

Answer

I'm not an expert, but I've used NLP sometimes.

I use parSapply from the parallel package. Here's the documentation: http://stat.ethz.ch/R-manual/R-devel/library/parallel/doc/parallel.pdf

parallel ships with base R, and this is a silly usage example:

library(parallel)
no_cores <- detectCores() - 1
cl <- makeCluster(no_cores)

base <- 2                 # define before exporting
clusterExport(cl, "base") # make `base` visible on the workers

parSapply(cl, as.character(2:4),
          function(exponent) {
            x <- as.numeric(exponent)
            c(base = base^x, self = x^x)
          })

stopCluster(cl)           # release the workers

So, parallelize nouns <- sapply(csvData$content, extractNoun, USE.NAMES = F) and it will be faster :)
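Applied to the question's code, the swap could look like the sketch below. On the real data you would run clusterEvalQ(cl, library(KoNLP)) so each worker can see extractNoun; here a simple stand-in tokenizer (whitespace splitting) demonstrates the same pattern without assuming KoNLP is installed:

```r
library(parallel)

# leave one core free; guard against single-core machines
cl <- makeCluster(max(1L, detectCores() - 1L))

# stand-in for csvData$content from the question
content <- c("first document text", "second document text")

# same shape as: parSapply(cl, csvData$content, extractNoun, USE.NAMES = FALSE)
tokens <- parSapply(cl, content, strsplit, split = " ", USE.NAMES = FALSE)

stopCluster(cl)
```

Because each document is processed independently, the work splits cleanly across workers; the main cost is the one-time startup of the cluster and loading KoNLP on each worker.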

