Merge Sort Time and Space Complexity


Question

Let's take this implementation of Merge Sort as an example:

void mergesort(Item a[], int l, int r) {
    if (r <= l) return;
    int m = (r+l)/2;
    mergesort(a, l, m);   // ------------ (1)
    mergesort(a, m+1, r); // ------------ (2)
    merge(a, l, m, r);
}

a) The time complexity of this Merge Sort is O(n lg(n)). Will parallelizing (1) and (2) give any practical gain? Theoretically, it appears that even after parallelizing them you would still end up with O(n lg(n)). But can we get any gains in practice?

b) The space complexity of this Merge Sort is O(n). However, if I choose to perform an in-place merge sort using linked lists (not sure if it can be done with arrays reasonably), will the space complexity become O(lg(n)), since you have to account for the size of the recursion stack frames? Can we treat O(lg(n)) as constant, since it cannot be more than 64? I may have misunderstood this in a couple of places. What exactly is the significance of 64?

c) http://www.cprogramming.com/tutorial/computersciencetheory/sortcomp.html says merge sort requires constant space using linked lists. How? Did they treat O(lg(n)) as constant?

d) [Added to get more clarity] For the space complexity calculation, is it fair to assume the input array or list is already in memory? When I do complexity calculations I always count the "extra" space I will need besides the space already taken by the input. Otherwise the space complexity will always be O(n) or worse.

Answer

a) Yes - in a perfect world you'd have to do log n merges of size n, n/2, n/4 ... (or, better said, 1, 2, 4, ..., n/4, n/2, n - the merges themselves can't be parallelized), which gives O(n) along the critical path. The total work is still O(n log n). In a not-so-perfect world you don't have an infinite number of processors, and context switching and synchronization offset any potential gains.
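As a concrete illustration of that trade-off, here is a minimal sketch (not part of the original answer) of parallelizing calls (1) and (2) with C++11's std::async, falling back to the serial version below a depth cutoff so that thread creation and synchronization overhead does not swamp the gains. The names parallel_mergesort and depth, and the std::inplace_merge stand-in for the question's merge(), are illustrative assumptions.

#include <algorithm>  // std::inplace_merge
#include <future>     // std::async, std::launch

typedef int Item;     // assumed element type; the question does not show it

// Stand-in for the question's merge(): merges sorted ranges a[l..m] and a[m+1..r].
void merge(Item a[], int l, int m, int r) {
    std::inplace_merge(a + l, a + m + 1, a + r + 1);
}

// Serial version from the question.
void mergesort(Item a[], int l, int r) {
    if (r <= l) return;
    int m = (r + l) / 2;
    mergesort(a, l, m);
    mergesort(a, m + 1, r);
    merge(a, l, m, r);
}

// Parallel variant: (1) runs on another thread, (2) on the current one.
// The two halves are disjoint, so no synchronization is needed until the join.
void parallel_mergesort(Item a[], int l, int r, int depth) {
    if (r <= l) return;
    if (depth == 0) { mergesort(a, l, r); return; }   // serial below the cutoff
    int m = (r + l) / 2;
    auto left = std::async(std::launch::async,
                           parallel_mergesort, a, l, m, depth - 1);
    parallel_mergesort(a, m + 1, r, depth - 1);
    left.wait();                                      // join before merging
    merge(a, l, m, r);
}

With a handful of cores this kind of splitting can give a constant-factor speedup, but as the answer notes, the asymptotic bound does not improve and the overhead can easily eat the benefit for small inputs.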

b) Space complexity is always Ω(n), as you have to store the elements somewhere. The additional space complexity can be O(n) in an implementation using arrays and O(1) in linked-list implementations. In practice, implementations using lists need additional space for the list pointers, so unless you already have the list in memory it shouldn't matter. Edit: if you count stack frames, then it's O(n) + O(log n), so still O(n) in the case of arrays. In the case of lists it's O(log n) additional memory.
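For reference, a typical array-based merge (a sketch, not necessarily the merge() used in the question) copies the range being merged into a temporary buffer, which is where the O(n) additional space of the array implementation comes from:

#include <vector>

typedef int Item;  // assumed element type

// Merges the sorted ranges a[l..m] and a[m+1..r].
// The buffer holds r-l+1 elements, i.e. Theta(n) extra space for the
// top-level merge; the recursion stack adds another O(log n) frames.
void merge(Item a[], int l, int m, int r) {
    std::vector<Item> buf;
    buf.reserve(r - l + 1);
    int i = l, j = m + 1;
    while (i <= m && j <= r)
        buf.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i <= m) buf.push_back(a[i++]);
    while (j <= r) buf.push_back(a[j++]);
    for (int k = 0; k < (int)buf.size(); ++k)   // copy the merged run back
        a[l + k] = buf[k];
}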

c) Lists only need some pointers changed during the merge process. That requires constant additional memory.
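A sketch of the list merge described here, assuming a simple singly linked node type (Node and merge_lists are illustrative names, not from the original): the merge just relinks next pointers, so apart from the recursion stack the additional memory is a constant number of local pointers.

typedef int Item;  // assumed element type

struct Node {
    Item  value;
    Node* next;
};

// Merges two sorted lists by relinking next pointers. No nodes are copied
// or allocated, so the extra memory is a fixed number of local pointers.
Node* merge_lists(Node* a, Node* b) {
    Node  dummy = { Item(), nullptr };   // placeholder head on the stack
    Node* tail  = &dummy;
    while (a != nullptr && b != nullptr) {
        if (a->value <= b->value) { tail->next = a; a = a->next; }
        else                      { tail->next = b; b = b->next; }
        tail = tail->next;
    }
    tail->next = (a != nullptr) ? a : b; // append whichever list remains
    return dummy.next;
}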

d) That's why in merge-sort complexity analysis people mention 'additional space requirement' or things like that. It's obvious that you have to store the elements somewhere, but it's always better to mention 'additional memory' to keep purists at bay.
