Quicksort slower than Mergesort?

Problem Description

I was working on implementing a quicksort yesterday, and when I ran it, I expected a faster runtime than with mergesort (which I had also implemented). I ran the two, and while the quicksort was faster for smaller data sets (<100 elements, and I did verify that it works), the mergesort became the quicker algorithm fairly quickly. I had been taught that quicksort is almost always "quicker" than mergesort, and I understand that there is some debate on the topic, but I at least expected it to be closer than this. For data sets >10000 elements, the mergesort was over 4 times faster. Is this to be expected, or is there an error in my quicksort code?

mergesort:

// Recursively splits the array in half, sorts each half, and merges back.
public static void mergeSort(int[] e)
{
    if (e.length <= 1) return;                        // base case: already sorted
    int[] first = new int[e.length/2];
    int[] second = new int[e.length - first.length];
    System.arraycopy(e, 0, first, 0, first.length);   // copy left half
    System.arraycopy(e, first.length, second, 0, second.length); // copy right half
    mergeSort(first);
    mergeSort(second);
    System.arraycopy(merge(first, second), 0, e, 0, e.length);   // merge into e
}

// Merges two sorted arrays into one sorted array.
private static int[] merge(int[] first, int[] second) {
    int iFirst = 0;
    int iSecond = 0;
    int iCombined = 0;

    int[] combined = new int[first.length + second.length];
    // take the smaller head element until one input is exhausted
    while(iFirst < first.length && iSecond < second.length) {
        if (first[iFirst] > second[iSecond]) {
            combined[iCombined++] = second[iSecond++];
        }
        else combined[iCombined++] = first[iFirst++];
    }
    // copy whatever remains of either input (no further comparisons)
    for(; iFirst < first.length; iFirst++) {
        combined[iCombined++] = first[iFirst];
    }
    for(; iSecond < second.length; iSecond++) {
        combined[iCombined++] = second[iSecond];
    }
    return combined;
}

quicksort:

// Sorts a[first..last] in place by recursing on both sides of the pivot.
public static void quicksort(int[] a, int first, int last) {
    if (first >= last) return;                 // zero or one element

    int partitionIndex = partition(a, first, last);
    quicksort(a, first, partitionIndex - 1);   // sort left of the pivot
    quicksort(a, partitionIndex + 1, last);    // sort right of the pivot
}

// Partitions x[first..last] around the pivot x[first] and returns the pivot's
// final index. Always taking the first element as the pivot degrades on
// already sorted input (see the solution below).
public static int partition(int[] x, int first, int last) {
    int left = first;
    int right = last;
    int pivot = x[first];
    int pivotIdx = first;

    while(left <= right) {
        while(left < x.length && x[left] <= pivot) left++;   // find element > pivot
        while(right >= 0 && x[right] > pivot) right--;       // find element <= pivot
        if (left <= right) {                                 // swap them into place
            int temp = x[left];
            x[left] = x[right];
            x[right] = temp;
        }
    }
    pivotIdx = right;
    x[first] = x[right];     // move the boundary element into the pivot slot
    x[pivotIdx] = pivot;     // drop the pivot at its final position
    return pivotIdx;
}
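
For reference, a minimal harness along these lines (the class name, seed, and input size are illustrative, not from the original post) can reproduce the timing comparison, assuming the four methods above are pasted into the same class:

import java.util.Arrays;
import java.util.Random;

public class SortBench {
    public static void main(String[] args) {
        int n = 1_000_000;
        int[] data = new Random(42).ints(n).toArray();  // fixed seed for repeatability

        int[] a = Arrays.copyOf(data, n);
        long t0 = System.nanoTime();
        mergeSort(a);
        System.out.printf("mergeSort: %d ms%n", (System.nanoTime() - t0) / 1_000_000);

        int[] b = Arrays.copyOf(data, n);
        t0 = System.nanoTime();
        quicksort(b, 0, b.length - 1);
        System.out.printf("quicksort: %d ms%n", (System.nanoTime() - t0) / 1_000_000);
        // Note: a single un-warmed run; JIT warm-up will skew small inputs.
    }
}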

Solution

I actually just wrote a "linked-list comparative sort demo program" in C and arrived at a similar conclusion (that mergesort will beat quicksort for most uses), although I have been told that quicksort is generally not used for linked lists anyway. I would note that the choice of pivot value is a huge factor: my initial version used a random node as the pivot, and when I refined it a bit to take the mean of two (random) nodes, the execution time for 1,000,000 records went from over 4 minutes to less than 10 seconds, putting it right on par with mergesort.

Mergesort and quicksort have the same big-O best case (n log n), and despite what people may try to claim, big O is really about iteration count rather than comparison count. The biggest difference that can arise between the two will always be to quicksort's detriment, and it involves lists that are already largely sorted or that contain a large number of ties (when quicksort does better than mergesort, the difference is nowhere near as great). This is because ties and already-sorted segments streamline straight through mergesort: when two split lists come back to be merged, if one list already contains all the smaller values, every value on the left is compared once to the first element of the right, and then (since the returned lists have an internal order) no further comparisons are needed; the right side is simply iterated onto the end. That is to say, the number of iterations stays constant, but the number of comparisons is cut in half. If you are talking about actual time and are sorting strings, it's the comparisons that are expensive.
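
To make the comparison-count point concrete, here is a lightly instrumented variant of the merge from the question (the counter is illustrative, not part of the original code). Merging the already ordered halves {1,2,3,4} and {5,6,7,8} costs only 4 comparisons, because the entire right half is copied with no further tests, while merging the interleaved halves {1,3,5,7} and {2,4,6,8} costs 7:

static long comparisons = 0;   // hypothetical instrumentation

static int[] countingMerge(int[] first, int[] second) {
    int iFirst = 0, iSecond = 0, iCombined = 0;
    int[] combined = new int[first.length + second.length];
    while (iFirst < first.length && iSecond < second.length) {
        comparisons++;                                    // one comparison per pass
        if (first[iFirst] > second[iSecond]) combined[iCombined++] = second[iSecond++];
        else                                 combined[iCombined++] = first[iFirst++];
    }
    // whatever remains on either side is copied without any comparisons
    while (iFirst < first.length)   combined[iCombined++] = first[iFirst++];
    while (iSecond < second.length) combined[iCombined++] = second[iSecond++];
    return combined;
}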

Ties and already-sorted segments in quicksort easily lead to unbalanced lists if the pivot value is not carefully chosen, and unbalanced lists (e.g., one element on the right, ten on the left) are what cause the slowdown. So if you can get your quicksort to perform as well on an already sorted list as it does on a randomized list, you've got a good method for finding the pivot.
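
One common way to get that behavior (a sketch of the general technique, not the pivot method used in the C demo above) is median-of-three pivot selection: take the median of the first, middle, and last elements and swap it into the pivot slot before partitioning, so already sorted input still splits roughly in half:

// Moves the median of x[first], x[mid], x[last] into x[first], so the
// partition method from the question can pick it up as the pivot.
static void medianOfThreePivot(int[] x, int first, int last) {
    int mid = first + (last - first) / 2;
    if (x[mid]  < x[first]) swap(x, mid,  first);
    if (x[last] < x[first]) swap(x, last, first);
    if (x[last] < x[mid])   swap(x, last, mid);   // now x[mid] holds the median
    swap(x, first, mid);                          // move it into the pivot slot
}

static void swap(int[] x, int i, int j) {
    int t = x[i]; x[i] = x[j]; x[j] = t;
}

Calling medianOfThreePivot(a, first, last) at the top of the question's quicksort, just before partition, is enough to avoid the worst case on sorted input.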

If you're interested, the demo program produces output like this:

[root~/C] ./a.out -1 3 
Using "", 0 records
Primary Criteria offset=128

Command (h for help, Q to quit): N
How many records? 4000000
New list is 562500.00 kb

Command (h for help, Q to quit): m

Mergesorting..............3999999 function calls
123539969 Iterations     Comparison calls: 82696100
Elapsed time: 0 min 9 sec


Command (h for help, Q to quit): S
Shuffled.

Command (h for help, Q to quit): q

Quicksorting..............4000000 function calls
190179315 Iterations     Comparison calls: 100817020
Elapsed time: 0 min 23 sec

Although without the krazy kolors. There's some more stuff about it from me about halfway down this page.

P.S. Neither sort requires extra memory with the linked list.
