Function to Populate Tree in O(depth)


Problem Description



Purely Functional Data Structures has the following exercise:

-- 2.5 Sharing can be useful within a single object, not just between objects.
-- For example, if the two subtrees of a given node are identical, then they can 
-- be represented by the same tree.
-- Part a: make a `complete a Int` function that creates a tree of 
-- depth Int, putting a in every leaf of the tree.
complete :: a -> Integer -> Maybe (Tree a)
complete x depth 
 | depth < 0  = Nothing
 | otherwise  = Just $ complete' depth
                        where complete' d 
                                | d == 0    = Empty
                                | otherwise = let copiedTree = complete' (d-1) 
                                              in Node x copiedTree copiedTree

Does this implementation run in O(d) time? Could you please say why or why not?
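For the snippet above to compile on its own, a `Tree` datatype is needed; the exercise doesn't show it, but the usual binary-tree shape from Okasaki (an assumption here) suffices. A self-contained version, plus a spine-walking `depthOf` to check the result, might look like:

```haskell
-- Minimal standalone version of the exercise's code.
-- The Tree datatype is assumed (the standard binary tree from Okasaki).
data Tree a = Empty | Node a (Tree a) (Tree a)

complete :: a -> Integer -> Maybe (Tree a)
complete x depth
  | depth < 0 = Nothing
  | otherwise = Just (complete' depth)
  where
    complete' d
      | d == 0    = Empty
      | otherwise = let copiedTree = complete' (d - 1)
                    in Node x copiedTree copiedTree

-- Walking one spine touches each level exactly once, so this runs in O(depth)
-- even though the tree "logically" has 2^depth - 1 nodes.
depthOf :: Tree a -> Integer
depthOf Empty        = 0
depthOf (Node _ l _) = 1 + depthOf l

main :: IO ()
main = print (fmap depthOf (complete 'x' 50))  -- Just 50
```

Note that thanks to the sharing of `copiedTree`, building the tree allocates only one `Node` per level, which is exactly what the question about O(d) is probing.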

Solution

The interesting part of the code is the complete' function:

complete' d 
  | d == 0    = Empty
  | otherwise = let copiedTree = complete' (d-1) 
                in Node x copiedTree copiedTree

As Cirdec's answer suggests, we should be careful to analyze each part of the implementation to make sure our assumptions are valid. As a general rule, we can assume that the following take 1 unit of time each*:

  1. Using a data constructor to construct a value (e.g., using Empty to make an empty tree or using Node to turn a value and two trees into a tree).

  2. Pattern matching on a value to see what data constructor it was built from and what values the data constructor was applied to.

  3. Guards and if/then/else expressions (which are implemented internally using pattern matching).

  4. Comparing an Integer to 0.

Cirdec mentions that the operation of subtracting 1 from an Integer is logarithmic in the size of the integer. As they say, this is essentially an artifact of the way Integer is implemented. It is possible to implement integers so that it takes only one step to compare them to 0 and also takes only one step to decrement them by 1. To keep things very general, it's safe to assume that there is some function c such that the cost of decrementing an Integer is c(depth).
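To make the "it is possible to implement integers so that decrement takes one step" remark concrete, here is a hedged sketch using Peano naturals (a hypothetical stand-in for illustration, not how GHC's `Integer` actually works), where both the zero test and the decrement are a single pattern match:

```haskell
-- Peano naturals: a hypothetical number representation in which comparing
-- to zero and decrementing by 1 are each one pattern match (one unit of time).
data Nat = Z | S Nat

isZero :: Nat -> Bool
isZero Z = True
isZero _ = False

dec :: Nat -> Nat
dec (S n) = n
dec Z     = Z  -- decrement of zero clamped to zero, just for totality

data Tree a = Empty | Node a (Tree a) (Tree a)

-- complete' restated over Nat: every per-level operation is now O(1),
-- so the whole construction takes O(depth) steps.
completeNat :: a -> Nat -> Tree a
completeNat x n
  | isZero n  = Empty
  | otherwise = let t = completeNat x (dec n)
                in Node x t t

depthOf :: Tree a -> Int
depthOf Empty        = 0
depthOf (Node _ l _) = 1 + depthOf l

main :: IO ()
main = print (depthOf (completeNat 'x' (S (S (S Z)))))  -- 3
```

With this representation, the c(depth) factor below collapses to a constant.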


Now that we've taken care of those preliminaries, let's get down to work! As is generally the case, we need to set up a system of equations and solve it. Let f(d) be the number of steps needed to calculate complete' d. Then the first equation is very simple:

f(0) = 2

That's because it costs 1 step to compare d to 0, and another step to check that the result is True.

The other equation is the interesting part. Think about what happens when d > 0:

  1. We calculate d == 0.
  2. We check if that is True (it's not).
  3. We calculate d-1 (let's call the result dm1).
  4. We calculate complete' dm1, saving the result as copiedTree.
  5. We apply a Node constructor to x, copiedTree, and copiedTree.

The first part takes 1 step. The second part takes 1 step. The third part takes c(depth) steps, and the fifth part takes 1 step. What about the fourth part? Well, that takes f(d-1) steps, so this will be a recursive definition.

f(0) = 2
f(d) = (3+c(depth)) + f(d-1)    when d > 0

OK, now we're cooking with gas! Let's calculate the first few values of f:

f(0) = 2
f(1) = (3+c(depth)) + f(0) = (3+c(depth)) + 2
f(2) = (3+c(depth)) + f(1)
     = (3+c(depth)) + ((3+c(depth)) + 2)
     = 2*(3+c(depth)) + 2
f(3) = (3+c(depth)) + f(2)
     = (3+c(depth)) + (2*(3+c(depth)) + 2)
     = 3*(3+c(depth)) + 2

You should be starting to see a pattern by now:

f(d) = d*(3+c(depth)) + 2
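As a quick sanity check before the induction proof, the recurrence and the conjectured closed form can be compared directly in Haskell (fixing the hypothetical decrement cost at c = 1 purely for illustration):

```haskell
-- Sanity check: the recurrence f and the conjectured closed form agree.
-- The decrement cost c is fixed at 1 here, an arbitrary choice for testing.
c :: Integer
c = 1

-- The recurrence derived in the text: f(0) = 2, f(d) = (3+c) + f(d-1).
f :: Integer -> Integer
f 0 = 2
f d = (3 + c) + f (d - 1)

-- The closed form conjectured from the first few values.
closed :: Integer -> Integer
closed d = d * (3 + c) + 2

main :: IO ()
main = print (all (\d -> f d == closed d) [0 .. 100])  -- True
```

Of course, checking 101 values proves nothing in general; that is what the induction below is for.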

We generally prove things about recursive functions using mathematical induction.

Base case:

The claim holds for d=0 because 0*(3+c(depth))+2=0+2=2=f(0).

Suppose that the claim holds for d=D. Then

f(D+1) = (3+c(depth)) + f(D)
       = (3+c(depth)) + (D*(3+c(depth))+2)
       = (D+1)*(3+c(depth))+2

So the claim holds for D+1 as well. Thus by induction, it holds for all natural numbers d. As a reminder, this gives the conclusion that complete' d takes

f(d) = d*(3+c(depth))+2

time. Now how do we express that in big O terms? Well, big O doesn't care about the constant coefficients of any of the terms, and only cares about the highest-order terms. We can safely assume that c(depth)>=1, so we get

f(d) ∈ O(d*c(depth))

Zooming out to complete, this looks like O(depth*c(depth)).

If you use the real cost of Integer decrement, this gives you O(depth*log(depth)). If you pretend that Integer decrement is O(1), this gives you O(depth).

Side note: As you continue to work through Okasaki, you will eventually reach section 10.2.1, where you will see a way to implement natural numbers supporting O(1) decrement and O(1) addition (but not efficient subtraction).

* Haskell's lazy evaluation keeps this from being precisely true, but if you pretend that everything is evaluated strictly, you will get an upper bound for the true value, which will be good enough in this case. If you want to learn how to analyze data structures that use laziness to get good asymptotic bounds, you should keep reading Okasaki.
