Parallelize an already linear-time algorithm
Question
In practice, is there any case where an already linear-time algorithm needs to be parallelized anyway? My teacher argues that it is not worth it, but I don't believe so.
Answer
Your teacher is mistaken. The run-time complexity (O(n), O(log n), etc.) of a single-CPU algorithm has no bearing on whether or not it will benefit from parallelization.
Changing your code from using 1 CPU to using K CPUs will at best divide the run time by a factor of K. Since you can't arbitrarily create CPUs out of thin air, K is effectively a constant, so the run-time complexity is not affected by parallelization. All you can hope for is a constant-factor improvement.
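As a minimal sketch of that point, here is a linear-time sum split across K workers (the function name `parallel_sum` and the parameter `k` are illustrative, not from the question). Each worker still does O(n/K) work, so the total remains O(n); at best the wall-clock time drops by the constant factor K. (In CPython, threads only help here if the per-chunk work releases the GIL; with processes instead of threads the same chunking idea gives a true K-way split.)

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, k=4):
    """Sum a sequence by splitting it into k chunks summed concurrently.

    The total work is still O(n); k workers can at best divide the
    wall-clock time by the constant k, never change the complexity.
    """
    data = list(data)
    if not data:
        return 0
    # Split into k roughly equal contiguous chunks.
    chunk = (len(data) + k - 1) // k
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=k) as pool:
        # Each worker sums one chunk; the partial sums are then combined.
        return sum(pool.map(sum, parts))
```

The result is identical to a plain `sum(data)`; only the constant factor in front of the O(n) cost changes.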
Which isn't to say that it's not worth doing: in some cases, a two-fold improvement is hugely beneficial. Plus, in a massively parallel system of thousands of CPUs, that constant gets pretty big.