How can I parallelize this word counting function?


Problem description

I have some serial code like this that computes word concordances, i.e. it counts collocated word pairs. The following program works, except that the list of sentences is canned for illustrative purposes.

from collections import defaultdict

# GLOBAL_CONCORDANCE[word][collocate][offset] -> list of sentence indices
GLOBAL_CONCORDANCE = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))

def BuildConcordance(sentences):
    global GLOBAL_CONCORDANCE
    for sentenceIndex, sentence in enumerate(sentences):
        words = sentence.split()

        for index, word in enumerate(words):
            # pair each word with every word at or after it in the sentence
            for i, collocate in enumerate(words[index:]):
                GLOBAL_CONCORDANCE[word][collocate][i].append(sentenceIndex)

def main():
    sentences = ["Sentence 1", "Sentence 2", "Sentence 3", "Sentence 4"]
    BuildConcordance(sentences)
    print(GLOBAL_CONCORDANCE)

if __name__ == "__main__":
    main()
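
To make the nested structure concrete, here is a quick check in a fresh interpreter session, after defining the code above but before running main() (the tiny sentence is illustrative only). GLOBAL_CONCORDANCE[word][collocate][offset] holds the indices of the sentences in which collocate appears offset words after word:

BuildConcordance(["a b a"])
print(GLOBAL_CONCORDANCE["a"]["b"][1])  # [0]: "b" one word after "a" in sentence 0
print(GLOBAL_CONCORDANCE["a"]["a"][2])  # [0]: "a" two words after "a" in sentence 0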

To me, the first for loop can be parallelized because the values being computed are independent. However, the data structure being modified is a global one.

I tried using Python's multiprocessing Pool, but I am facing some pickling problems, which makes me wonder whether I am using the right design pattern. Can someone suggest a good way to parallelize this code?
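
For context, the pickling problem is a known limitation: multiprocessing serializes everything that crosses a process boundary with pickle, and lambdas, such as the default_factory functions inside the nested defaultdict, cannot be pickled. A minimal sketch demonstrating the failure, independent of the concordance code:

import pickle
from collections import defaultdict

# The default_factory here is a lambda, and pickle cannot serialize
# lambdas, so any attempt to ship this structure (or functions closing
# over it) to a worker process fails.
nested = defaultdict(lambda: defaultdict(list))
try:
    pickle.dumps(nested)
except Exception as exc:  # typically PicklingError or AttributeError
    print("cannot pickle:", exc)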

Recommended answer

In general, multiprocessing is easiest when you use a functional style. In this case, my suggestion would be to return a list of result tuples from each instance of the worker function. The extra complexity of the nested defaultdicts doesn't really gain you anything. Something like this:

from collections import defaultdict
from multiprocessing import Pool

# GLOBAL_CONCORDANCE[word][collocate][offset] -> list of sentence indices
GLOBAL_CONCORDANCE = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))

def concordance_worker(index_sentence):
    # Workers return plain tuples, which pickle cleanly, instead of
    # touching the (unpicklable) lambda-based global structure.
    sent_index, sentence = index_sentence
    words = sentence.split()

    return [(word, colo_word, colo_index, sent_index)
            for i, word in enumerate(words)
            for colo_index, colo_word in enumerate(words[i:])]

def build_concordance(sentences):
    global GLOBAL_CONCORDANCE
    with Pool(8) as pool:
        results = pool.map(concordance_worker, enumerate(sentences))

    # Merge the workers' tuples into the global structure in the parent,
    # so the nested defaultdict never crosses a process boundary.
    for result in results:
        for word, colo_word, colo_index, sent_index in result:
            GLOBAL_CONCORDANCE[word][colo_word][colo_index].append(sent_index)

    print(len(GLOBAL_CONCORDANCE))


def main():
    sentences = ["Sentence 1", "Sentence 2", "Sentence 3", "Sentence 4"]
    build_concordance(sentences)

if __name__ == "__main__":
    main()
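
For larger corpora, one possible variant (not from the original answer; build_concordance_streaming and its parameters are illustrative names) streams results with imap_unordered instead of materializing everything with map, merging incrementally as each worker finishes. A sketch, assuming the definitions above are in scope:

from itertools import chain

def build_concordance_streaming(sentences, workers=8, chunk=64):
    # imap_unordered yields each worker's tuple list as it completes,
    # so the parent merges incrementally instead of holding every
    # result list in memory at once.
    with Pool(workers) as pool:
        for word, colo_word, colo_index, sent_index in chain.from_iterable(
                pool.imap_unordered(concordance_worker,
                                    enumerate(sentences),
                                    chunksize=chunk)):
            GLOBAL_CONCORDANCE[word][colo_word][colo_index].append(sent_index)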

Let me know if that doesn't generate what you're looking for.
