Can someone explain this lazy Fibonacci solution?



This is the code:

fibs = 0 : 1 : zipWith (+) fibs (drop 1 fibs)

When evaluated, fibs is an infinite list of Fibonacci numbers. What I don't understand is how the list is concatenated.

zipWith returns a list, so zipping fibs would yield this:

0 : 1 : [1] : [1,2] : [1,2,3]

Because 0 : 1 : zipWith (+) [0,1] [1] yields [1] and zipWith (+) [0,1,1] [1,1] yields [1,2] etc).

However, when I run the code, I get the correct result.

What am I not understanding here?

Solution

Your "Because" is not telling the whole story. You're truncating the lists at "the story so far" and evaluating eagerly, then wondering where the rest comes from. That's not quite what's really going on, so it's a good question.

What gets computed when you make the definition

fibs = 0 : 1 : zipWith (+) fibs (drop 1 fibs)

? Very little. Computation starts once you begin to use the list. Lazy computation happens only on demand.
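To see this concretely (a small sketch; the `main` wrapper is just for demonstration), nothing happens at the definition itself; only a consumer such as `take` supplies demand:

```haskell
-- Defining fibs does no work by itself; computation happens only when
-- something demands list cells, e.g. 'take'.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (drop 1 fibs)

main :: IO ()
main = print (take 10 fibs)  -- demands, and hence computes, just ten elements
```

Running this prints the first ten Fibonacci numbers and never touches the rest of the infinite list.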

What's a demand? You get to ask "are you [] or x : xs?" and if it's the latter, you get a handle on the pieces.
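As a sketch (the helper name here is made up for illustration), a demand is literally a case analysis, and it forces only as much as it inspects:

```haskell
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (drop 1 fibs)

-- A demand is a pattern match: "are you [] or x : xs?".
-- This (hypothetical) helper asks exactly one such question.
firstOr :: Integer -> [Integer] -> Integer
firstOr dflt ys = case ys of
  []    -> dflt
  x : _ -> x          -- the tail is handed back unevaluated

main :: IO ()
main = print (firstOr 0 fibs)  -- one question asked of the infinite list
```

Asking this one question of `fibs` evaluates only the outermost cons cell, which is why an infinite list causes no trouble.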

When we ask that question of fibs, we get that

fibs = x0 : xs0
x0  = 0
xs0 = 1 : zipWith (+) fibs (drop 1 fibs)

but that means (substituting for fibs and then x0)

xs0 = 1 : zipWith (+) (0 : xs0) (drop 1 (0 : xs0))

and when we ask again, we get that

xs0 = x1 : xs1
x1  = 1
xs1 = zipWith (+) (0 : xs0) (drop 1 (0 : xs0))

so

xs1 = zipWith (+) (0 : 1 : xs1) (drop 1 (0 : 1 : xs1))

but now it gets interesting, because we have to do some work. Just enough work to answer the question, mind you. When we look at xs1, we force zipWith, which forces drop.
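The standard definitions, paraphrased here with primed names so as not to clash with the Prelude, show why looking at the output of zipWith forces one cons cell of each input:

```haskell
-- zipWith must match both arguments against (:) before it can emit a cell.
zipWith' :: (a -> b -> c) -> [a] -> [b] -> [c]
zipWith' f (a : as) (b : bs) = f a b : zipWith' f as bs
zipWith' _ _        _        = []

-- drop likewise pattern-matches its list argument, so forcing drop' 1 xs
-- forces the first cons of xs.
drop' :: Int -> [a] -> [a]
drop' n xs | n <= 0 = xs
drop' _ []          = []
drop' n (_ : xs)    = drop' (n - 1) xs

fibs' :: [Integer]
fibs' = 0 : 1 : zipWith' (+) fibs' (drop' 1 fibs')

main :: IO ()
main = print (take 10 fibs')
```

Each emitted cell of `zipWith'` needs exactly one cell from each argument, which is the "just enough work" being done at every step below.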

xs1 = zipWith (+) (0 : 1 : xs1) (drop 1 (0 : 1 : xs1))
    = zipWith (+) (0 : 1 : xs1) (1 : xs1)
    = (0 + 1) : zipWith (+) (1 : xs1) xs1

so

xs1 = x2 : xs2
x2  = 0 + 1 = 1
xs2 = zipWith (+) (1 : xs1) xs1
    = zipWith (+) (1 : 1 : xs2) (1 : xs2)

See? We've maintained the invariant that we know the first two elements of one of the zipped lists, and the first element of the other. That means we'll be able to deliver the next output and refresh our "buffer". When we look at xs2, we get

xs2 = zipWith (+) (1 : 1 : xs2) (1 : xs2)
    = (1 + 1) : zipWith (+) (1 : xs2) xs2
xs2 = x3 : xs3
x3  = 1 + 1 = 2
xs3 = zipWith (+) (1 : xs2) xs2
    = zipWith (+) (1 : 2 : xs3) (2 : xs3)

and we're good to go again!

Each time we demand the next element, we also move one step further away from zipWith running out of elements; the values it needs arrive just in the nick of time.

None of the discipline that makes values show up in the nick of time is expressed in the types. At the moment, it's for programmers to make sure that well typed programs don't go wrong by running out of data when a demand is made. (I have plans to do something about that, but I'll try not to digress here.)

The key is that lazy, "on demand" computation means that we don't have to truncate lists to just the elements we can see when the process starts. We just need to know that we can always take the next step.
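That "always able to take the next step" invariant can also be written out with an explicit pair of most-recent values (a different formulation, not from the answer above, shown only for comparison):

```haskell
-- Carry the two most recent values as explicit state; producing each cons
-- is one guaranteed "next step", with no truncation anywhere.
fibsFrom :: Integer -> Integer -> [Integer]
fibsFrom a b = a : fibsFrom b (a + b)

fibs' :: [Integer]
fibs' = fibsFrom 0 1

main :: IO ()
main = print (take 10 fibs')
```

Here the "buffer" from the derivation above is the argument pair `(a, b)`, kept topped up by the recursive call rather than by zipWith's pattern matching.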
