Trying to get my head around recursion in Haskell?


Question

I have used many recursive functions now but still have trouble getting my head around how exactly such a function works (I'm familiar with the second line (i.e. | n==0 = 1) but not so familiar with the final line (i.e. | n>0 = fac (n-1) * n)).

fac :: Int -> Int
fac n
    | n==0 = 1
    | n>0  = fac (n-1) * n

Solution

Recursive algorithms are very closely linked to mathematical induction. Perhaps studying one will help you better understand the other.

You need to keep two key principles in mind when using recursion:

  • Base Case
  • Inductive Step

The Inductive Step is often the most difficult piece, because it assumes that everything it relies upon has already been computed correctly. Making this leap of faith can be difficult (at least it took me a while to get the hang of it), but it is only because we've got preconditions on our functions; those preconditions (in this case, that n is a non-negative integer) must be specified so that the inductive step and base case are always true.

The Base Case is also sometimes difficult: say, you know that the factorial N! is N * (N-1)!, but how exactly do you handle the first step on the ladder? (In this case, it is easy: define 0! := 1. This explicit definition provides you with a way to terminate the recursive application of your Inductive Step.)
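
Concretely, the two guards in the question's fac line up with those two principles. Here it is again, unchanged except for comments marking which guard plays which role:

fac :: Int -> Int
fac n
    | n==0 = 1              -- Base Case: 0! is defined to be 1
    | n>0  = fac (n-1) * n  -- Inductive Step: assumes fac (n-1) really is (n-1)!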

You can see that the type specification and guard patterns in this function provide the preconditions that guarantee the Inductive Step can be used over and over again until it reaches the Base Case, n == 0. If the preconditions can't be met, recursive application of the Inductive Step would fail to reach the Base Case, and your computation would never terminate. (Well, it would terminate when it runs out of memory. :)
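
To see why the precondition matters, here is a hypothetical variant (my own, not from the question) that drops the n > 0 guard, so nothing enforces the precondition any more:

facLoose :: Int -> Int
facLoose n
    | n == 0    = 1
    | otherwise = facLoose (n-1) * n
-- facLoose (-1) steps through ever more negative numbers, never reaches the
-- Base Case, and in practice only stops when the runtime runs out of memory.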

One complicating factor, especially with functional programming languages, is the very strong desire to re-write all 'simple' recursive functions, as you have here, with variants that use Tail Calls or Tail Recursion.

Because this function calls itself, and then performs another operation on the result, you can build a call-chain like this:

fac 3        3 * fac 2
  fac 2      2 * fac 1
    fac 1    1 * fac 0
      fac 0  1
    fac 1    1
  fac 2      2
fac 3        6
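
If you'd like to watch that left-hand column of calls for yourself, one option (my addition, using the standard Debug.Trace module; trace prints to stderr) is a traced copy of the function:

import Debug.Trace (trace)

facTraced :: Int -> Int
facTraced n
    | n == 0 = trace "fac 0" 1
    | n > 0  = trace ("fac " ++ show n) (facTraced (n-1) * n)

-- ghci> facTraced 3
-- fac 3
-- fac 2
-- fac 1
-- fac 0
-- 6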

That deep call stack takes up memory; but a compiler that notices that a function doesn't change any state after making a recursive call can optimize away the recursive calls. These kinds of functions typically pass along an accumulator argument. A fellow stacker has a very nice example: Tail Recursion in Haskell

factorial 1 c = c                      -- Base Case: the accumulator c already holds the product
factorial k c = factorial (k-1) (c*k)  -- tail call: fold k into the accumulator; nothing is left to do after the call

This very complicated change :) means that the previous call chain is turned into this:

fac 3 1       fac 2 3
  fac 2 3     fac 1 6
    fac 1 6   6

(The nesting is there just for show; the runtime system wouldn't actually store details of the execution on the stack.)

This runs in constant memory, regardless of the value of n, and thus this optimization can convert 'impossible' algorithms into 'possible' algorithms. You'll see this kind of technique used extensively in functional programming, much as you'd see char * frequently in C programming or yield frequently in Ruby programming.
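
As a small usage sketch (the wrapper name factorial' and the Integer signatures are my own additions, not part of the linked answer), you would normally hide the accumulator behind a one-argument wrapper that supplies its starting value:

factorial :: Integer -> Integer -> Integer
factorial 1 c = c
factorial k c = factorial (k-1) (c*k)

factorial' :: Integer -> Integer   -- assumes n >= 1, just like the helper above
factorial' n = factorial n 1

-- ghci> factorial' 5
-- 120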
