Julia @parallel macro does not seem to work
Question
I'm playing around for the first time with parallel computing in julia, and I'm having a bit of a headache. Let's say I start julia as follows: julia -p 4. Then I declare a function for all processors, and use it with pmap and also with @parallel for.
@everywhere function count_heads(n)
    c::Int = 0
    for i = 1:n
        c += rand(Bool)
    end
    n, c # tuple (input, output)
end
###### first part ######
v = pmap(count_heads, 50000:1000:70000)
println("Result first part")
println(v)

###### second part ######
println("Result second part")
@parallel for i in 50000:1000:70000
    println(count_heads(i))
end
The result is as follows.
Result first part
Counting heads function
Any[(50000,24894),(51000,25559),(52000,26141),(53000,26546),(54000,27056),(55000,27426),(56000,28024),(57000,28380),(58000,29001),(59000,29398),(60000,30100),(61000,30608),(62000,31001),(63000,31520),(64000,32200),(65000,32357),(66000,33063),(67000,33674),(68000,34085),(69000,34627),(70000,34902)]
Result second part
From worker 4: (61000, From worker 5: (66000, From worker 2: (50000, From worker 3: (56000
Thus, the function pmap is apparently working fine, but @parallel for is stopping, or it doesn't give me the results. Am I doing something wrong?
Thanks!
UPDATE
If at the end of the code I put sleep(10), it does the work correctly.
From worker 5: (66000,33182)
From worker 3: (56000,27955)
............
From worker 3: (56000,27955)
Answer
Both of your examples work properly on my laptop, so I'm not sure, but I think this answer might solve your problem!
From the Julia Parallel Computing docs (http://docs.julialang.org/en/release-0.4/manual/parallel-computing/):
... the reduction operator can be omitted if it is not needed. In that case, the loop executes asynchronously, i.e. it spawns independent tasks on all available workers and returns an array of RemoteRef immediately without waiting for completion. The caller can wait for the RemoteRef completions at a later point by calling fetch() on them, or wait for completion at the end of the loop by prefixing it with @sync, like @sync @parallel for.
So you may be calling println on the RemoteRefs before they have completed: the main process reaches the end of the script and exits before the workers get a chance to print their results. That also explains why adding sleep(10) makes it "work".
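As the docs above suggest, prefixing the loop with @sync should make the caller block until every spawned task has finished. A minimal sketch of the fix, reusing the count_heads function from the question (Julia 0.4 syntax, where @parallel was still the macro name):

```julia
@everywhere function count_heads(n)
    c::Int = 0
    for i = 1:n
        c += rand(Bool)
    end
    n, c # tuple (input, output)
end

# @sync waits for all tasks spawned by @parallel to complete,
# so the script cannot exit before the workers print their output.
@sync @parallel for i in 50000:1000:70000
    println(count_heads(i))
end
```

Alternatively, you could keep the array of RemoteRefs that @parallel returns and call fetch() on each one later, as the quoted passage describes.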