How to implement classic sorting algorithms in modern C++?

Problem description


The std::sort algorithm (and its cousins std::partial_sort and std::nth_element) from the C++ Standard Library is in most implementations a complicated and hybrid amalgamation of more elementary sorting algorithms, such as selection sort, insertion sort, quick sort, merge sort, or heap sort.

There are many questions here and on sister sites such as http://codereview.stackexchange.com/ related to bugs, complexity and other aspects of implementations of these classic sorting algorithms. Most of the offered implementations consist of raw loops, use index manipulation and concrete types, and are generally non-trivial to analyse in terms of correctness and efficiency.

Question: How can the above-mentioned classic sorting algorithms be implemented using modern C++?

  • no raw loops, but combining the Standard Library's algorithmic building blocks from <algorithm>
  • iterator interface and use of templates instead of index manipulation and concrete types
  • C++14 style, including the full Standard Library, as well as syntactic noise reducers such as auto, template aliases, transparent comparators and polymorphic lambdas.

Notes:

  • for further references on implementations of sorting algorithms see Wikipedia, Rosetta Code or http://www.sorting-algorithms.com/
  • according to Sean Parent's conventions (slide 39), a raw loop is a for-loop longer than composition of two functions with an operator. So f(g(x)); or f(x); g(x); or f(x) + g(x); are not raw loops, and neither are the loops in selection_sort and insertion_sort below.
  • I follow Scott Meyers's terminology to denote the current C++1y already as C++14, and to denote C++98 and C++03 both as C++98, so don't flame me for that.
  • As suggested in the comments by @Mehrdad, I provide four implementations as a Live Example at the end of the answer: C++14, C++11, C++98 and Boost and C++98.
  • The answer itself is presented in terms of C++14 only. Where relevant, I denote the syntactic and library differences where the various language versions differ.

Solution

Algorithmic building blocks

We begin by assembling the algorithmic building blocks from the Standard Library:

#include <algorithm>    // min_element, iter_swap, 
                        // upper_bound, rotate, 
                        // partition, 
                        // inplace_merge,
                        // make_heap, sort_heap, push_heap, pop_heap,
                        // is_heap, is_sorted
#include <cassert>      // assert 
#include <functional>   // less
#include <iterator>     // distance, begin, end, next

  • the iterator tools such as non-member std::begin() / std::end() as well as std::next() are only available as of C++11 and beyond. For C++98, one needs to write these oneself. There are substitutes from Boost.Range in boost::begin() / boost::end(), and from Boost.Utility in boost::next().
  • the std::is_sorted algorithm is only available for C++11 and beyond. For C++98, this can be implemented in terms of std::adjacent_find and a hand-written function object (a sketch follows this list). Boost.Algorithm also provides a boost::algorithm::is_sorted as a substitute.
  • the std::is_heap algorithm is only available for C++11 and beyond.
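
For illustration, here is a hedged sketch of the C++98 fallback for std::is_sorted mentioned above, built on std::adjacent_find and a hand-written function object (the names is_sorted_98 and not_in_order are mine, not part of the original answer):

#include <algorithm>   // adjacent_find
#include <iterator>    // iterator_traits

// Hand-written function object that applies cmp with its arguments swapped,
// so it detects an adjacent pair that is strictly out of order.
template<class Compare, class T>
struct not_in_order
{
    explicit not_in_order(Compare c) : cmp(c) {}
    bool operator()(T const& a, T const& b) const { return cmp(b, a); }
    Compare cmp;
};

// A range is sorted iff adjacent_find with the swapped comparison
// finds no descending adjacent pair.
template<class FwdIt, class Compare>
bool is_sorted_98(FwdIt first, FwdIt last, Compare cmp)
{
    typedef typename std::iterator_traits<FwdIt>::value_type T;
    return std::adjacent_find(first, last, not_in_order<Compare, T>(cmp)) == last;
}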

Syntactical goodies

C++14 provides transparent comparators of the form std::less<> that act polymorphically on their arguments. This avoids having to spell out the iterator's value type. This can be used in combination with C++11's default function template arguments to create a single overload for sorting algorithms that take < as comparison and those that have a user-defined comparison function object.

template<class It, class Compare = std::less<>>
void xxx_sort(It first, It last, Compare cmp = Compare{});

In C++11, one can define a reusable template alias to extract an iterator's value type which adds minor clutter to the sort algorithms' signatures:

template<class It>
using value_type_t = typename std::iterator_traits<It>::value_type;

template<class It, class Compare = std::less<value_type_t<It>>>
void xxx_sort(It first, It last, Compare cmp = Compare{});

In C++98, one needs to write two overloads and use the verbose typename xxx<yyy>::type syntax:

template<class It, class Compare>
void xxx_sort(It first, It last, Compare cmp); // general implementation

template<class It>
void xxx_sort(It first, It last)
{
    xxx_sort(first, last, std::less<typename std::iterator_traits<It>::value_type>());
}

  • Another syntactical nicety is that C++14 facilitates wrapping user-defined comparators through polymorphic lambdas (with auto parameters that are deduced like function template arguments); a brief sketch contrasting this with C++11 follows this list.
  • C++11 only has monomorphic lambdas, that require the use of the above template alias value_type_t.
  • In C++98, one either needs to write a standalone function object or resort to the verbose std::bind1st / std::bind2nd / std::not1 type of syntax.
  • Boost.Bind improves this with boost::bind and _1 / _2 placeholder syntax.
  • C++11 and beyond also have std::find_if_not, whereas C++98 needs std::find_if with a std::not1 around a function object.
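
As a small illustration of the first two bullets (my own sketch, not part of the original answer; the function names sort_descending_14 / sort_descending_11 and the repeated value_type_t alias are only for this example), wrapping a descending comparison as a lambda looks like this in C++14 versus C++11:

#include <algorithm>   // sort
#include <iterator>    // iterator_traits

template<class It>
using value_type_t = typename std::iterator_traits<It>::value_type;

// C++14: polymorphic lambda, parameters deduced like function template arguments
template<class It>
void sort_descending_14(It first, It last)
{
    std::sort(first, last, [](auto const& a, auto const& b) { return b < a; });
}

// C++11: monomorphic lambda, the value type has to be spelled out via the alias
template<class It>
void sort_descending_11(It first, It last)
{
    std::sort(first, last, [](value_type_t<It> const& a,
                              value_type_t<It> const& b) { return b < a; });
}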

C++ Style

There is no generally accepted C++14 style yet. For better or for worse, I closely follow Scott Meyers's draft Effective Modern C++ and Herb Sutter's revamped GotW. I use the following style recommendations:

  • Herb Sutter's "Almost Always Auto" and Scott Meyers's "Prefer auto to specific type declarations" recommendation, for which the brevity is unsurpassed, although its clarity is sometimes disputed.
  • Scott Meyers's "Distinguish () and {} when creating objects" and consistently choose braced-initialization {} instead of the good old parenthesized initialization () (in order to side-step all most-vexing-parse issues in generic code).
  • Scott Meyers's "Prefer alias declarations to typedefs". For templates this is a must anyway, and using it everywhere instead of typedef saves time and adds consistency.
  • I use a for (auto it = first; it != last; ++it) pattern in some places, in order to allow for loop invariant checking for already sorted sub-ranges. In production code, the use of while (first != last) and a ++first somewhere inside the loop might be slightly better.

Selection sort

Selection sort does not adapt to the data in any way, so its runtime is always O(N^2). However, selection sort has the property of minimizing the number of swaps. In applications where the cost of swapping items is high, selection sort very well may be the algorithm of choice.

To implement it using the Standard Library, repeatedly use std::min_element to find the remaining minimum element, and iter_swap to swap it into place:

template<class FwdIt, class Compare = std::less<>>
void selection_sort(FwdIt first, FwdIt last, Compare cmp = Compare{})
{
    for (auto it = first; it != last; ++it) {
        auto const selection = std::min_element(it, last, cmp);
        std::iter_swap(selection, it); 
        assert(std::is_sorted(first, std::next(it), cmp));
    }
}

Note that selection_sort has the already processed range [first, it) sorted as its loop invariant. The minimal requirements are forward iterators, compared to std::sort's random access iterators.

Details omitted:

  • selection sort can be optimized with an early test if (std::distance(first, last) <= 1) return; (or for forward / bidirectional iterators: if (first == last || std::next(first) == last) return;).
  • for bidirectional iterators, the above test can be combined with a loop over the interval [first, std::prev(last)), because the last element is guaranteed to be the minimal remaining element and doesn't require a swap.
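
Combining those two optimizations, a minimal sketch for bidirectional iterators could look as follows (my own illustration, assuming the headers included above; selection_sort_bidir is a hypothetical name):

template<class BiDirIt, class Compare = std::less<>>
void selection_sort_bidir(BiDirIt first, BiDirIt last, Compare cmp = Compare{})
{
    if (first == last || std::next(first) == last) return;  // early exit for 0 or 1 element
    for (auto it = first; it != std::prev(last); ++it) {     // stop before the last element
        std::iter_swap(std::min_element(it, last, cmp), it); // *it is now the remaining minimum
    }
    // the last element is the remaining maximum, so it needs no swap
}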

Insertion sort

Although it is one of the elementary sorting algorithms with O(N^2) worst-case time, insertion sort is the algorithm of choice either when the data is nearly sorted (because it is adaptive) or when the problem size is small (because it has low overhead). For these reasons, and because it is also stable, insertion sort is often used as the recursive base case (when the problem size is small) for higher overhead divide-and-conquer sorting algorithms, such as merge sort or quick sort.

To implement insertion_sort with the Standard Library, repeatedly use std::upper_bound to find the location where the current element needs to go, and use std::rotate to shift the remaining elements upward in the input range:

template<class FwdIt, class Compare = std::less<>>
void insertion_sort(FwdIt first, FwdIt last, Compare cmp = Compare{})
{
    for (auto it = first; it != last; ++it) {
        auto const insertion = std::upper_bound(first, it, *it, cmp);
        std::rotate(insertion, it, std::next(it)); 
        assert(std::is_sorted(first, std::next(it), cmp));
    }
}

Note that insertion_sort has the already processed range [first, it) sorted as its loop invariant. Insertion sort also works with forward iterators.

Details omitted:

  • insertion sort can be optimized with an early test if (std::distance(first, last) <= 1) return; (or for forward / bidirectional iterators: if (first == last || std::next(first) == last) return;) and a loop over the interval [std::next(first), last), because the first element is guaranteed to be in place and doesn't require a rotate.
  • for bidirectional iterators, the binary search to find the insertion point can be replaced with a reverse linear search using the Standard Library's std::find_if_not algorithm.

Four Live Examples (C++14, C++11, C++98 and Boost, C++98) for the fragment below:

using RevIt = std::reverse_iterator<BiDirIt>;
auto const insertion = std::find_if_not(RevIt(it), RevIt(first), 
    [=](auto const& elem){ return cmp(*it, elem); }
).base();

  • For random inputs this gives O(N^2) comparisons, but this improves to O(N) comparisons for almost sorted inputs. The binary search always uses O(N log N) comparisons.
  • For small input ranges, the better memory locality (cache, prefetching) of a linear search might also dominate a binary search (one should test this, of course).
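
Putting the fragment to use, here is a hedged sketch (my own assembly, not the Live Example itself) of an insertion sort for bidirectional iterators that replaces the binary search with the reverse linear search:

template<class BiDirIt, class Compare = std::less<>>
void insertion_sort_linear(BiDirIt first, BiDirIt last, Compare cmp = Compare{})
{
    using RevIt = std::reverse_iterator<BiDirIt>;
    for (auto it = first; it != last; ++it) {
        // scan backwards over the sorted prefix [first, it) for the first
        // element that does not compare greater than *it
        auto const insertion = std::find_if_not(RevIt(it), RevIt(first),
            [=](auto const& elem){ return cmp(*it, elem); }).base();
        std::rotate(insertion, it, std::next(it));  // rotate *it into place
    }
}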

Quick sort

When carefully implemented, quick sort is robust and has O(N log N) expected complexity, but with O(N^2) worst-case complexity that can be triggered with adversarially chosen input data. When a stable sort is not needed, quick sort is an excellent general-purpose sort.

Even for the simplest versions, quick sort is quite a bit more complicated to implement using the Standard Library than the other classic sorting algorithms. The approach below uses a few iterator utilities to locate the middle element of the input range [first, last) as the pivot, then uses two calls to std::partition (which are O(N)) to three-way partition the input range into segments of elements that are smaller than, equal to, and larger than the selected pivot, respectively. Finally, the two outer segments with elements smaller than and larger than the pivot are recursively sorted:

template<class FwdIt, class Compare = std::less<>>
void quick_sort(FwdIt first, FwdIt last, Compare cmp = Compare{})
{
    auto const N = std::distance(first, last);
    if (N <= 1) return;
    auto const pivot = *std::next(first, N / 2);
    auto const middle1 = std::partition(first, last, [=](auto const& elem){ 
        return cmp(elem, pivot); 
    });
    auto const middle2 = std::partition(middle1, last, [=](auto const& elem){ 
        return !cmp(pivot, elem);
    });
    quick_sort(first, middle1, cmp); // assert(std::is_sorted(first, middle1, cmp));
    quick_sort(middle2, last, cmp);  // assert(std::is_sorted(middle2, last, cmp));
}

However, quick sort is rather tricky to get correct and efficient, as each of the above steps has to be carefully checked and optimized for production-level code. In particular, for O(N log N) complexity, the pivot has to result in a balanced partition of the input data, which cannot be guaranteed in general for an O(1) pivot, but which can be guaranteed if one sets the pivot as the O(N) median of the input range.

Details omitted:

  • the above implementation is particularly vulnerable to special inputs, e.g. it has O(N^2) complexity for the "organ pipe" input 1, 2, 3, ..., N/2, ... 3, 2, 1 (because the middle is always larger than all other elements).
  • median-of-3 pivot selection from randomly chosen elements from the input range guards against almost sorted inputs for which the complexity would otherwise deteriorate to O(N^2).
  • 3-way partitioning (separating elements smaller than, equal to and larger than the pivot) as shown by the two calls to std::partition is not the most efficient O(N) algorithm to achieve this result.
  • for random access iterators, a guaranteed O(N log N) complexity can be achieved through median pivot selection using std::nth_element(first, middle, last), followed by recursive calls to quick_sort(first, middle, cmp) and quick_sort(middle, last, cmp) (a sketch follows this list).
  • this guarantee comes at a cost, however, because the constant factor of the O(N) complexity of std::nth_element can be more expensive than that of the O(1) complexity of a median-of-3 pivot followed by an O(N) call to std::partition (which is a cache-friendly single forward pass over the data).
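
A minimal sketch of the std::nth_element variant mentioned above (my rendering; quick_sort_nth is a name chosen for illustration):

template<class RandomIt, class Compare = std::less<>>
void quick_sort_nth(RandomIt first, RandomIt last, Compare cmp = Compare{})
{
    auto const N = std::distance(first, last);
    if (N <= 1) return;
    auto const middle = std::next(first, N / 2);
    std::nth_element(first, middle, last, cmp); // median at *middle, range partitioned around it
    quick_sort_nth(first, middle, cmp);         // elements not greater than the median
    quick_sort_nth(middle, last, cmp);          // the median and elements not smaller than it
}

Because each level of recursion splits the range exactly in half, the recursion depth is O(log N) and the std::nth_element calls sum to O(N log N), at the price of the larger constant factor noted in the last bullet above.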

Merge sort

If using O(N) extra space is of no concern, then merge sort is an excellent choice: it is the only stable O(N log N) sorting algorithm.

It is simple to implement using Standard algorithms: use a few iterator utilities to locate the middle of the input range [first, last) and combine two recursively sorted segments with a std::inplace_merge:

template<class BiDirIt, class Compare = std::less<>>
void merge_sort(BiDirIt first, BiDirIt last, Compare cmp = Compare{})
{
    auto const N = std::distance(first, last);
    if (N <= 1) return;                   
    auto const middle = std::next(first, N / 2);
    merge_sort(first, middle, cmp); // assert(std::is_sorted(first, middle, cmp));
    merge_sort(middle, last, cmp);  // assert(std::is_sorted(middle, last, cmp));
    std::inplace_merge(first, middle, last, cmp); // assert(std::is_sorted(first, last, cmp));
}

Merge sort requires bidirectional iterators, the bottleneck being the std::inplace_merge. Note that when sorting linked lists, merge sort requires only O(log N) extra space (for recursion). The latter algorithm is implemented by std::list<T>::sort in the Standard Library.
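
As a tiny usage note (my own example, not part of the original answer), the member sort mentioned above is invoked directly on the container:

#include <functional>  // greater
#include <list>

int main()
{
    std::list<int> l{3, 1, 4, 1, 5, 9, 2, 6};
    l.sort();                     // stable merge sort, O(log N) extra space
    l.sort(std::greater<int>{});  // or with a user-supplied comparison
}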

Heap sort

Heap sort is simple to implement, performs an O(N log N) in-place sort, but is not stable.

The first loop, the O(N) "heapify" phase, puts the array into heap order. The second loop, the O(N log N) "sortdown" phase, repeatedly extracts the maximum and restores heap order. The Standard Library makes this extremely straightforward:

template<class RandomIt, class Compare = std::less<>>
void heap_sort(RandomIt first, RandomIt last, Compare cmp = Compare{})
{
    lib::make_heap(first, last, cmp); // assert(std::is_heap(first, last, cmp));
    lib::sort_heap(first, last, cmp); // assert(std::is_sorted(first, last, cmp));
}

In case you consider it "cheating" to use std::make_heap and std::sort_heap, you can go one level deeper and write those functions yourself in terms of std::push_heap and std::pop_heap, respectively:

namespace lib {

// NOTE: is O(N log N), not O(N) as std::make_heap
template<class RandomIt, class Compare = std::less<>>
void make_heap(RandomIt first, RandomIt last, Compare cmp = Compare{})
{
    for (auto it = first; it != last;) {
        std::push_heap(first, ++it, cmp); 
        assert(std::is_heap(first, it, cmp));           
    }
}

template<class RandomIt, class Compare = std::less<>>
void sort_heap(RandomIt first, RandomIt last, Compare cmp = Compare{})
{
    for (auto it = last; it != first;) {
        std::pop_heap(first, it--, cmp);
        assert(std::is_heap(first, it, cmp));           
    } 
}

}   // namespace lib

The Standard Library specifies both push_heap and pop_heap as complexity O(log N). Note however that the outer loop over the range [first, last) results in O(N log N) complexity for make_heap, whereas std::make_heap has only O(N) complexity. For the overall O(N log N) complexity of heap_sort it doesn't matter.

Details omitted: O(N) implementation of make_heap
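
For completeness, here is a hedged sketch of that omitted detail, using the classic bottom-up sift-down construction; unlike the rest of the answer it resorts to raw loops and index arithmetic (make_heap_linear is an illustrative name, not from the original answer):

// O(N) heap construction: sift down every internal node, starting from the last one.
template<class RandomIt, class Compare = std::less<>>
void make_heap_linear(RandomIt first, RandomIt last, Compare cmp = Compare{})
{
    auto const n = last - first;
    for (auto start = n / 2; start-- > 0;) {      // indices n/2-1 down to 0 are internal nodes
        auto parent = start;
        for (;;) {
            auto child = 2 * parent + 1;          // left child
            if (child >= n) break;                // parent is a leaf
            if (child + 1 < n && cmp(first[child], first[child + 1]))
                ++child;                          // pick the larger child for a max-heap
            if (!cmp(first[parent], first[child])) break;  // heap property already holds here
            std::iter_swap(first + parent, first + child);
            parent = child;                       // continue sifting down
        }
    }
}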

Testing

Here are four Live Examples (C++14, C++11, C++98 and Boost, C++98) testing all five algorithms on a variety of inputs (not meant to be exhaustive or rigorous). Just note the huge differences in the LOC: C++11/C++14 need around 130 LOC, C++98 and Boost 190 (+50%) and C++98 more than 270 (+100%).
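
As a minimal sketch (my own, assuming the definitions above are in scope; the actual Live Examples test many more inputs) of how the five algorithms might be exercised:

#include <cassert>
#include <vector>

int main()
{
    std::vector<int> const org{5, 3, 1, 4, 1, 5, 9, 2, 6};

    auto a = org; selection_sort(a.begin(), a.end()); assert(std::is_sorted(a.begin(), a.end()));
    auto b = org; insertion_sort(b.begin(), b.end()); assert(std::is_sorted(b.begin(), b.end()));
    auto c = org; quick_sort(c.begin(), c.end());     assert(std::is_sorted(c.begin(), c.end()));
    auto d = org; merge_sort(d.begin(), d.end());     assert(std::is_sorted(d.begin(), d.end()));
    auto e = org; heap_sort(e.begin(), e.end());      assert(std::is_sorted(e.begin(), e.end()));
}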
