What is the advantage of using tail recursion here?


Question



I have been reading articles describing how the space complexity of quicksort can be reduced by using the tail recursive version, but I am not able to understand how this is so. Following are the two versions:

QUICKSORT(A, p, r)
       q = PARTITION(A, p, r)
       QUICKSORT(A, p, q-1)
       QUICKSORT(A, q+1, r)


TAIL-RECURSIVE-QUICKSORT(A, p, r)
   while p < r
      q = PARTITION(A, p, r)
      TAIL-RECURSIVE-QUICKSORT(A, p, q-1)
      p = q+1

(Source - http://mypathtothe4.blogspot.com/2013/02/lesson-2-variations-on-quicksort-tail.html)
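For concreteness, here is a direct Python transliteration of the two versions. The article does not show PARTITION, so the usual Lomuto scheme is assumed, and the standard `p < r` base-case guard (which the first version needs to terminate) is added:

```python
def partition(A, p, r):
    """Lomuto partition: A[r] is the pivot; returns its final index."""
    pivot = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= pivot:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

def quicksort(A, p, r):
    if p < r:
        q = partition(A, p, r)
        quicksort(A, p, q - 1)
        quicksort(A, q + 1, r)   # last statement: this one is a tail call

def tail_recursive_quicksort(A, p, r):
    while p < r:
        q = partition(A, p, r)
        tail_recursive_quicksort(A, p, q - 1)
        p = q + 1                # the loop replaces the right-hand call
```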

As far as I understand, both of these would cause recursive calls on both the left and right half of the array. In both cases, only one half would be processed at a time, and therefore at any time only one recursive call would be using the stack space. I am unable to see how the tail recursive quicksort saves space.

The pseudocode above is taken from the article - http://mypathtothe4.blogspot.com/2013/02/lesson-2-variations-on-quicksort-tail.html

The explanation provided in the article confuses me even more -

Quicksort partitions a given sub-array and proceeds to recurse twice; one on the left-sub-array and one on the right. Each of these recursive calls will require its own individual stream of stack space. This space is used to store the indexing variables for the array at some level of recursion. If we picture this occurring from beginning to end of execution, we can see that the stack space doubles at each layer.

So how does Tail-Recursive-Quicksort fix all of this?

Well, instead of recursing on two sub-arrays, we now only recurse on one. This eliminates the need for doubling stack space at every layer of execution. We get around this problem by using the while loop as an iterative control that performs the same task. Instead of needing the stack to save sets of variables for two recursive calls, we simply alter the same set of variables and use the single recursive call on new variables.

I don't see how the stack space doubles at every layer of execution in the case of a regular quicksort.

Note :- There is no mention of compiler optimization in the article.

Solution

A tail recursive function call allows the compiler to perform a special optimization which it normally cannot with regular recursion. In a tail recursive function, the recursive call is the very last thing to be executed. In this case, instead of allocating a stack frame for each call, the compiler can rework the code to simply reuse the current stack frame, meaning a tail-recursive function will only use a single stack frame as opposed to hundreds or even thousands.

This optimization is possible because the compiler knows that once the tail recursive call is made, no previous copies of variables will be needed, because there is no more code to execute. If, for instance, a print statement followed a recursive call, the compiler would need to know the value of the variable to be printed after the recursive call returns, and thus the stack frame cannot be reused.
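To see that rewrite concretely, here is a hand-done version of what such a compiler effectively produces, using `gcd` as a stand-in example (the function names are illustrative, not from the question):

```python
def gcd_recursive(a, b):
    """Tail-recursive gcd: the recursive call is the last thing executed."""
    if b == 0:
        return a
    return gcd_recursive(b, a % b)   # nothing left to do after this call

def gcd_loop(a, b):
    """What tail-call optimization turns it into: rebind the parameters
    and jump back to the top, reusing the same stack frame."""
    while b != 0:
        a, b = b, a % b
    return a
```

Both compute the same result, but the second needs only one frame no matter how many "calls" deep the computation goes.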

Here's the wiki page if you'd like more information on how this "space saving" and stack reuse actually works, along with examples: Tail Call

Edit: I didn't explain how this applies to quicksort, did I? Well, some terms are thrown around in that article which make everything all confusing (and some of it is just plain wrong). The first function given (QUICKSORT) makes a recursive call on the left, a recursive call on the right, and then exits. Notice that the recursive call on the right is the very last thing that happens in the function. If the compiler supports tail recursive optimization (explained above), only the left calls create new stack frames; all the right calls just reuse the current frame. This can save some stack frames, but can still suffer from the case where the partitioning creates a sequence of calls where tail recursion optimization doesn't matter. Plus, even though right-side calls use the same frame, the left-side calls called within the right-side calls still use the stack. In the worst case, the stack depth is N.
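A quick sketch of that worst case (the depth instrumentation is mine, not from the article): with a last-element pivot, an already-sorted array makes every partition maximally lopsided, so the left-side calls nest N deep:

```python
def partition(A, p, r):
    """Lomuto partition with A[r] as the pivot."""
    pivot = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= pivot:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

max_depth = 0

def quicksort(A, p, r, depth=1):
    """Plain quicksort, recording the deepest call reached."""
    global max_depth
    max_depth = max(max_depth, depth)
    if p < r:
        q = partition(A, p, r)
        quicksort(A, p, q - 1, depth + 1)
        quicksort(A, q + 1, r, depth + 1)

A = list(range(100))           # already sorted: every partition is lopsided
quicksort(A, 0, len(A) - 1)    # max_depth is now 100, i.e. N
```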

The second version described is not a tail recursive quicksort, but rather a quicksort where only the left sorting is done recursively, and the right sorting is done using the loop. In fact, this quicksort (as previously described by another user) cannot have the tail recursion optimization applied to it, because the recursive call is not the last thing to execute.

How does this work? When implemented correctly, the first call to quicksort is the same as a left-side call in the original algorithm; however, no right-side recursive calls are ever made. The loop takes care of that: instead of sorting "left then right", it sorts the left with a call, then sorts the right by continually sorting only the lefts of the right. It sounds ridiculous, but it's basically just sorting so many lefts that the rights become single elements and don't need to be sorted. This effectively removes the right recursion, making the function less recursive (pseudo-recursive, if you will).

However, the real implementation does not pick just the left side each time; it picks the smaller side. The idea is still the same: it makes a recursive call on only one side instead of both. Picking the shorter side ensures that the stack depth can never be larger than log2(N), the depth of a proper binary tree, because the shorter side is always at most half the size of the current array section. The implementation given by the article does not ensure this, however, because it can suffer from the same worst-case scenario of "left is the whole tree". This article actually gives a pretty good explanation of it if you're willing to do more reading: Efficient selection and partial sorting based on quicksort
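Here is a sketch of that smaller-side-first variant (the partition helper and depth counter are my own scaffolding, not code from the linked article): recurse on whichever partition is smaller, and loop on the larger one.

```python
def partition(A, p, r):
    """Lomuto partition with A[r] as the pivot."""
    pivot = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= pivot:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

max_depth = 0

def quicksort_small_first(A, p, r, depth=1):
    """Recurse only on the smaller partition; loop on the larger one,
    bounding the stack depth by about log2(N)."""
    global max_depth
    max_depth = max(max_depth, depth)
    while p < r:
        q = partition(A, p, r)
        if q - p < r - q:                      # left side is smaller
            quicksort_small_first(A, p, q - 1, depth + 1)
            p = q + 1                          # keep looping on the right
        else:                                  # right side is smaller
            quicksort_small_first(A, q + 1, r, depth + 1)
            r = q - 1                          # keep looping on the left

A = list(range(1000))                # the sorted-input worst case from above
quicksort_small_first(A, 0, len(A) - 1)
# max_depth stays O(log N) even on this input
```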
