Sorting linked lists in C


Problem Description


I was asked to write a function that takes 3 unsorted linked lists and returns one single sorted linked list that combines all three lists. What is the best way you can think of? I don't really have memory restrictions, but what would you do with/without memory restrictions?

Solution

One option would be to use merge sort on all three of the linked lists, then use one final merge step to merge them together into an overall sorted list.

Unlike most O(n log n) sorting algorithms, merge sort can run efficiently on linked lists. At a high-level, the intuition behind merge sort on a linked list is as follows:

  1. As a base case, if the list has zero or one elements, it's already sorted.
  2. Otherwise:

    1. Split the list into two lists of roughly equal size, perhaps by moving odd elements into one list and even elements into the other.
    2. Recursively use merge sort to sort those lists.
    3. Apply a merge step to combine those lists into one sorted list.

The merge algorithm on linked lists is really beautiful. The pseudocode works roughly like this:

  1. Initialize an empty linked list holding the result.
  2. As long as both lists aren't empty:

    1. If the first element of the first list is less than the first element of the second list, move it to the back of the result list.
    2. Otherwise, move the first element of the second list to the back of the result list.

  3. Now that exactly one list is empty, move all the elements from the remaining (nonempty) list to the back of the result list.

This can be made to run in O(n) time, so the overall complexity of the merge sort is O(n log n).
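Below is a minimal C sketch of that approach, assuming a simple singly linked node type. The struct node layout and the merge/split/merge_sort names are placeholders to adapt to your own list type:

```c
#include <stddef.h>

struct node {
    int value;
    struct node *next;
};

/* Merge two already-sorted lists into one sorted list in O(n). */
static struct node *merge(struct node *a, struct node *b) {
    struct node dummy;                 /* temporary head so appending is easy */
    struct node *tail = &dummy;
    dummy.next = NULL;

    while (a != NULL && b != NULL) {
        if (a->value <= b->value) {
            tail->next = a;
            a = a->next;
        } else {
            tail->next = b;
            b = b->next;
        }
        tail = tail->next;
    }
    tail->next = (a != NULL) ? a : b;  /* exactly one list remains; append it */
    return dummy.next;
}

/* Split a list into two roughly equal halves by alternating nodes. */
static void split(struct node *list, struct node **front, struct node **back) {
    *front = NULL;
    *back = NULL;
    int which = 0;
    while (list != NULL) {
        struct node *next = list->next;
        struct node **half = which ? back : front;
        list->next = *half;            /* prepend to the chosen half */
        *half = list;
        which = !which;
        list = next;
    }
}

/* Merge sort on a singly linked list: O(n log n) time, O(log n) stack space. */
struct node *merge_sort(struct node *list) {
    if (list == NULL || list->next == NULL)
        return list;                   /* base case: zero or one element */

    struct node *front, *back;
    split(list, &front, &back);
    return merge(merge_sort(front), merge_sort(back));
}
```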

Once you've sorted all three lists independently, you can apply the merge algorithm to combine the three lists into one final sorted list. Alternatively, you could consider concatenating together all three linked lists, then using a giant merge sort pass to sort all of the lists at the same time. There's no clear "right way" to do this; it's really up to you.
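For instance, reusing the hypothetical merge and merge_sort from the sketch above, the two variants might look like this:

```c
/* Variant 1: sort each list separately, then merge the three sorted results. */
struct node *sort_three_by_merging(struct node *a, struct node *b, struct node *c) {
    return merge(merge(merge_sort(a), merge_sort(b)), merge_sort(c));
}

/* Variant 2: concatenate the three lists, then run one big merge sort. */
struct node *sort_three_by_concatenating(struct node *a, struct node *b, struct node *c) {
    struct node **tail = &a;
    while (*tail != NULL) tail = &(*tail)->next;  /* find the end of a */
    *tail = b;
    while (*tail != NULL) tail = &(*tail)->next;  /* find the end of a + b */
    *tail = c;
    return merge_sort(a);
}
```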

The above algorithm runs in Θ(n log n) time. It also uses only Θ(log n) memory, since it allocates no new linked list cells and just needs space in each stack frame to store pointers to the various lists. Since the recursion depth is Θ(log n), the memory usage is Θ(log n) as well.


Another O(n log n) sort that you can implement on linked lists is a modification of quicksort. Although the linked list version of quicksort is fast (still O(n log n) expected), it isn't nearly as fast as the in-place version that works on arrays due to the lack of locality effects from array elements being stored contiguously. However, it's a very beautiful algorithm as applied to lists.

The intuition behind quicksort is as follows:

  1. If you have a zero- or one-element list, the list is sorted.
  2. Otherwise:

    1. Choose some element of the list to use as a pivot.
    2. Split the list into three groups - elements less than the pivot, elements equal to the pivot, and elements greater than the pivot.
    3. Recursively sort the smaller and greater elements.
    4. Concatenate the three lists as smaller, then equal, then greater to get back the overall sorted list.

One of the nice aspects of the linked-list version of quicksort is that the partitioning step is substantially easier than in the array case. After you've chosen a pivot (details a bit later), you can do the partitioning step by creating three empty lists for the less-than, equal-to, and greater-than lists, then doing a linear scan over the original linked list. You can then append/prepend each linked list node to the linked list corresponding to the original bucket.
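Here is a sketch of that partitioning step in C, using the same singly linked struct node as in the merge-sort sketch above. Nodes are prepended to each bucket, which reverses their relative order; that does not affect correctness here:

```c
/* Split `list` into three buckets relative to `pivot_value`: <, ==, >. */
static void partition(struct node *list, int pivot_value,
                      struct node **less, struct node **equal,
                      struct node **greater) {
    *less = *equal = *greater = NULL;
    while (list != NULL) {
        struct node *next = list->next;
        struct node **bucket;
        if (list->value < pivot_value)
            bucket = less;
        else if (list->value == pivot_value)
            bucket = equal;
        else
            bucket = greater;
        list->next = *bucket;          /* prepend the node to its bucket */
        *bucket = list;
        list = next;
    }
}
```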

The one challenge in getting this working is picking a good pivot element. It's well known that quicksort can degenerate to O(n²) time if the choice of pivot is bad, but it is also known that if you pick a pivot element at random the runtime is O(n log n) with high probability. In an array this is easy (just pick a random array index), but in the linked list case it's trickier. The easiest way to do this is to pick a random number between 0 and the length of the list, then choose that element of the list in O(n) time. Alternatively, there are some pretty cool methods for picking an element at random out of a linked list; one such algorithm is described here.
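Here is a sketch of the random-pivot selection and the resulting quicksort in C, building on the partition sketch above. The names list_length, random_pivot, and quick_sort are hypothetical, and rand() is used only for illustration, not as a high-quality randomness source:

```c
#include <stdlib.h>

/* Count the nodes in the list. O(n). */
static size_t list_length(const struct node *list) {
    size_t n = 0;
    for (; list != NULL; list = list->next)
        n++;
    return n;
}

/* Pick the value of a uniformly random node to use as the pivot. O(n). */
static int random_pivot(const struct node *list) {
    size_t k = (size_t)rand() % list_length(list);  /* assumes a non-empty list */
    while (k-- > 0)
        list = list->next;
    return list->value;
}

/* Quicksort on a linked list: partition around a random pivot and recurse. */
struct node *quick_sort(struct node *list) {
    if (list == NULL || list->next == NULL)
        return list;                                /* zero or one element */

    struct node *less, *equal, *greater;
    partition(list, random_pivot(list), &less, &equal, &greater);
    less = quick_sort(less);
    greater = quick_sort(greater);

    /* Concatenate less ++ equal ++ greater to get the sorted list back. */
    struct node *parts[3] = { less, equal, greater };
    struct node *result = NULL, **tail = &result;
    for (int i = 0; i < 3; i++) {
        *tail = parts[i];
        while (*tail != NULL)
            tail = &(*tail)->next;
    }
    return result;
}
```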


If you want a simpler algorithm that needs only O(1) space, you can also consider using insertion sort to sort the linked lists. While insertion sort is easier to implement, it runs in O(n²) time in the worst case (though it also has O(n) best-case behavior), so it's probably not a good choice unless you specifically want to avoid merge sort.

The idea behind the insertion sort algorithm is as follows:

  1. Initialize an empty linked list holding the result.
  2. For each of the three linked lists:

    1. While that linked list isn't empty:

      1. Scan across the result list to find the location where the first element of this linked list belongs.
      2. Insert the element at that location.
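A compact C sketch of that insertion sort, again using the singly linked struct node from earlier (insert_sorted and insertion_sort_three are hypothetical names):

```c
/* Insert one node into an already-sorted list, keeping it sorted. */
static void insert_sorted(struct node **result, struct node *n) {
    struct node **pos = result;
    while (*pos != NULL && (*pos)->value < n->value)
        pos = &(*pos)->next;           /* scan for the insertion point */
    n->next = *pos;
    *pos = n;
}

/* Insertion sort over three unsorted lists: O(n^2) worst case, O(1) extra space. */
struct node *insertion_sort_three(struct node *lists[3]) {
    struct node *result = NULL;
    for (int i = 0; i < 3; i++) {
        while (lists[i] != NULL) {
            struct node *n = lists[i];
            lists[i] = n->next;        /* detach the first element */
            insert_sorted(&result, n);
        }
    }
    return result;
}
```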


Another O(n²) sorting algorithm that can be adapted for linked lists is selection sort. This can be implemented very easily (assuming you have a doubly-linked list) by using this algorithm:

  1. Initialize an empty list holding the result.
  2. While the input list is not empty:

    1. Scan across the linked list looking for the smallest remaining element.
    2. Remove that element from the linked list.
    3. Append that element to the result list.

This also runs in O(n²) time and uses only O(1) space, but in practice it's slower than insertion sort; in particular, it always runs in Θ(n²) time.
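A sketch of that selection sort in C, assuming a doubly linked node type (struct dnode and the helper names below are placeholders):

```c
struct dnode {
    int value;
    struct dnode *prev, *next;
};

/* Unlink a node from the doubly linked list whose head is *head. O(1). */
static void detach(struct dnode **head, struct dnode *n) {
    if (n->prev != NULL) n->prev->next = n->next; else *head = n->next;
    if (n->next != NULL) n->next->prev = n->prev;
    n->prev = n->next = NULL;
}

/* Selection sort: repeatedly move the smallest remaining node to the result. */
struct dnode *selection_sort(struct dnode *input) {
    struct dnode *result = NULL, *result_tail = NULL;
    while (input != NULL) {
        struct dnode *min = input;
        for (struct dnode *cur = input->next; cur != NULL; cur = cur->next)
            if (cur->value < min->value)
                min = cur;             /* smallest remaining element so far */
        detach(&input, min);
        if (result_tail == NULL) {     /* append min to the back of the result */
            result = result_tail = min;
        } else {
            result_tail->next = min;
            min->prev = result_tail;
            result_tail = min;
        }
    }
    return result;
}
```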


Depending on how the linked lists are structured, you might be able to get away with some extremely awesome hacks. In particular, if you are given doubly-linked lists, then you have space for two pointers in each of your linked list cells. Given that, you can reinterpret the meaning of those pointers to do some pretty ridiculous sorting tricks.

As a simple example, let's see how we could implement tree sort using the linked list cells. The idea is as follows. When the linked list cells are stored in a linked list, the next and previous pointers have their original meaning. However, our goal will be to iteratively pull the linked list cells out of the linked list, then reinterpret them as nodes in a binary search tree, where the next pointer means "right subtree" and the previous pointer means "left subtree." If you're allowed to do this, here's a really cool way to implement tree sort:

  1. Create a new pointer to a linked list cell that will serve as the pointer to the root of the tree.
  2. For each element of the doubly-linked list:

    1. Remove that cell from the linked list.
    2. Treating that cell as a BST node, insert the node into the binary search tree.

  3. Do an in-order walk of the BST. Whenever you visit a node, remove it from the BST and insert it back into the doubly-linked list.

This runs in best-case O(n log n) time and worst-case O(n²). In terms of memory usage, the first two steps require only O(1) memory, since we're recycling space from the older pointers. The last step can be done in O(1) space as well using some particularly clever algorithms.
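A sketch of that tree sort in C, reusing the struct dnode type from the selection-sort sketch: while a cell sits in the tree, prev is reinterpreted as the left child and next as the right child. The names are hypothetical, and a recursive in-order walk is used here for clarity instead of the O(1)-space variant mentioned above:

```c
/* Insert a cell into the BST, treating prev as "left" and next as "right". */
static void bst_insert(struct dnode **root, struct dnode *n) {
    n->prev = n->next = NULL;
    struct dnode **link = root;
    while (*link != NULL)
        link = (n->value < (*link)->value) ? &(*link)->prev : &(*link)->next;
    *link = n;
}

/* In-order walk: append each visited cell to the back of the result list,
   restoring the ordinary prev/next meaning as we go. */
static void flatten_inorder(struct dnode *cell,
                            struct dnode **head, struct dnode **tail) {
    if (cell == NULL)
        return;
    struct dnode *left = cell->prev, *right = cell->next;  /* save subtrees */
    flatten_inorder(left, head, tail);
    cell->prev = *tail;                 /* relink the cell into the list */
    cell->next = NULL;
    if (*tail != NULL) (*tail)->next = cell; else *head = cell;
    *tail = cell;
    flatten_inorder(right, head, tail);
}

/* Tree sort on a doubly linked list: O(n log n) best case, O(n^2) worst case. */
struct dnode *tree_sort(struct dnode *list) {
    struct dnode *root = NULL;
    while (list != NULL) {
        struct dnode *cell = list;
        list = list->next;              /* remove the cell from the input list */
        bst_insert(&root, cell);
    }
    struct dnode *head = NULL, *tail = NULL;
    flatten_inorder(root, &head, &tail);
    return head;
}
```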

You could also consider implementing heap sort this way as well, though it's a bit tricky.


Hope this helps!
