What would cause an algorithm to have O(log log n) complexity?


Question

This earlier question addresses some of the factors that might cause an algorithm to have O(log n) complexity.

What would cause an algorithm to have time complexity O(log log n)?

Answer

O(log log n) terms can show up in a variety of different places, but there are typically two main routes that will arrive at this runtime.

Shrinking by square roots

As mentioned in the answer to the linked question, a common way for an algorithm to have time complexity O(log n) is for that algorithm to work by repeatedly cutting the size of the input down by some constant factor on each iteration. If this is the case, the algorithm must terminate after O(log n) iterations, because after doing O(log n) divisions by a constant, the algorithm must shrink the problem size down to 0 or 1. This is why, for example, binary search has complexity O(log n).
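As a concrete reminder of that halving pattern (this is just a standard binary search in Python, not anything specific to this question), note how each pass through the loop cuts the range [lo, hi] roughly in half, which is why it runs for O(log n) iterations:

```python
def binary_search(arr, target):
    """Standard binary search over a sorted list.
    Each iteration halves the range [lo, hi], so the loop runs O(log n) times."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
```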

Interestingly, there is a similar way of shrinking down the size of a problem that yields runtimes of the form O(log log n). Instead of dividing the input in half at each layer, what happens if we take the square root of the size at each layer?

For example, let's take the number 65,536. How many times do we have to divide this by 2 until we get down to 1? If we do this, we get

  • 65,536/2 = 32,768
  • 32,768/2 = 16,384
  • 16,384/2 = 8,192
  • 8,192/2 = 4,096
  • 4,096/2 = 2,048
  • 2,048/2 = 1,024
  • 1,024/2 = 512
  • 512/2 = 256
  • 256/2 = 128
  • 128/2 = 64
  • 64/2 = 32
  • 32/2 = 16
  • 16/2 = 8
  • 8/2 = 4
  • 4/2 = 2
  • 2/2 = 1

This process takes 16 steps, and it's also the case that 65,536 = 2^16.
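If you want to check that count with a few lines of code, here is a quick sketch (the helper name count_halvings is just for illustration):

```python
def count_halvings(n):
    """Count how many integer divisions by 2 it takes to get n down to 1."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(count_halvings(65536))   # 16, matching 65,536 = 2^16
```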

But, if we take the square root at each level, we get

  • √65,536 = 256
  • √256 = 16
  • √16 = 4
  • √4 = 2

Notice that it only takes four steps to get all the way down to 2. Why is this?

First, an intuitive explanation. How many digits are there in the numbers n and √n? There are approximately log n digits in the number n, and approximately log(√n) = log(n^(1/2)) = (1/2) log n digits in √n. This means that, each time you take a square root, you're roughly halving the number of digits in the number. Because you can only halve a quantity k O(log k) times before it drops down to a constant (say, 2), this means you can only take square roots O(log log n) times before you've reduced the number down to some constant (say, 2).

Now, let's do some math to make this rigorous. Let's rewrite the above sequence in terms of powers of two:

  • √65,536 = √(2^16) = (2^16)^(1/2) = 2^8 = 256
  • √256 = √(2^8) = (2^8)^(1/2) = 2^4 = 16
  • √16 = √(2^4) = (2^4)^(1/2) = 2^2 = 4
  • √4 = √(2^2) = (2^2)^(1/2) = 2^1 = 2

Notice that we followed the sequence 2^16 → 2^8 → 2^4 → 2^2 → 2^1. On each iteration, we cut the exponent of the power of two in half. That's interesting, because this connects back to what we already know: you can only divide the number k in half O(log k) times before it drops down to 1.

So take any number n and write it as n = 2^k. Each time you take the square root of n, you halve the exponent in this equation. Therefore, only O(log k) square roots can be applied before k drops to 1 or lower (in which case n drops to 2 or lower). Since n = 2^k, this means that k = log_2 n, and therefore the number of square roots taken is O(log k) = O(log log n). Therefore, if there is an algorithm that works by repeatedly reducing the problem to a subproblem whose size is the square root of the original problem size, that algorithm will terminate after O(log log n) steps.
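The same counting argument is easy to sanity-check in code. This sketch (the name count_square_roots is mine; it uses integer square roots so the arithmetic stays exact) repeatedly replaces n with √n and compares the step count against log_2(log_2 n):

```python
import math

def count_square_roots(n):
    """Count how many times we can take the square root of n before it drops to 2 or below."""
    steps = 0
    while n > 2:
        n = math.isqrt(n)   # integer square root keeps the values exact for n = 2^(2^k)
        steps += 1
    return steps

print(count_square_roots(65536))        # 4
print(math.log2(math.log2(65536)))      # 4.0, i.e. log_2(log_2 n)
```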

One real-world example of this is the van Emde Boas tree (vEB-tree) data structure. A vEB-tree is a specialized data structure for storing integers in the range 0 ... N - 1. It works as follows: the root node of the tree has √N pointers in it, splitting the range 0 ... N - 1 into √N buckets, each holding a range of roughly √N integers. These buckets are then each internally subdivided into √(√N) buckets, each of which holds roughly √(√N) elements. To traverse the tree, you start at the root, determine which bucket you belong to, then continue recursively in the appropriate subtree. Due to the way the vEB-tree is structured, you can determine in O(1) time which subtree to descend into, so after O(log log N) steps you will reach the bottom of the tree. Accordingly, lookups in a vEB-tree take only O(log log N) time.
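The sketch below is not a faithful vEB-tree: a real one also stores per-node minima/maxima and a summary structure, which is what makes predecessor/successor queries fast. It only illustrates the recursive √N splitting, and hence the O(log log N) recursion depth, assuming the universe size has the form 2^(2^k) so the square roots stay exact (the class name SimpleVEB and its layout are my own, not a standard API):

```python
import math

class SimpleVEB:
    """Membership-only sketch of the vEB recursion: a universe of size u is split
    into sqrt(u) clusters of size sqrt(u), so the recursion depth is O(log log u).
    A real vEB tree adds min/max fields and a summary structure; this sketch omits them."""

    def __init__(self, universe_size):
        self.u = universe_size
        if universe_size <= 2:
            self.present = [False] * universe_size    # tiny base-case universe
        else:
            self.sqrt_u = math.isqrt(universe_size)   # assumes u = 2^(2^k)
            self.clusters = {}                        # lazily created sub-structures

    def insert(self, x):
        if self.u <= 2:
            self.present[x] = True
            return
        hi, lo = divmod(x, self.sqrt_u)               # high and low halves of the key
        if hi not in self.clusters:
            self.clusters[hi] = SimpleVEB(self.sqrt_u)
        self.clusters[hi].insert(lo)

    def contains(self, x):
        if self.u <= 2:
            return self.present[x]
        hi, lo = divmod(x, self.sqrt_u)
        return hi in self.clusters and self.clusters[hi].contains(lo)

t = SimpleVEB(65536)
t.insert(12345)
print(t.contains(12345), t.contains(54321))           # True False
```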

Another example is the Hopcroft-Fortune closest pair of points algorithm. This algorithm attempts to find the two closest points in a collection of 2D points. It works by creating a grid of buckets and distributing the points into those buckets. If at any point in the algorithm a bucket is found that has more than √N points in it, the algorithm recursively processes that bucket. The maximum depth of the recursion is therefore O(log log n), and using an analysis of the recursion tree it can be shown that each layer in the tree does O(n) work. Therefore, the total runtime of the algorithm is O(n log log n).

There are some other algorithms that achieve O(log log n) runtimes by using algorithms like binary search on objects of size O(log n). For example, the x-fast trie data structure performs a binary search over the layers of a tree of height O(log U), so the runtime for some of its operations is O(log log U). The related y-fast trie gets some of its O(log log U) runtimes by maintaining balanced BSTs of O(log U) nodes each, allowing searches in those trees to run in time O(log log U). The tango tree and related multisplay tree data structures end up with an O(log log n) term in their analyses because they maintain trees that contain O(log n) items each.
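To illustrate just the "binary search over O(log U) layers" step of an x-fast trie (and nothing else about that structure), the sketch below assumes prefix_tables[i] is a hash set holding the length-i bit prefixes of every stored key, which is roughly what an x-fast trie keeps per level; the helper name longest_prefix_length is mine:

```python
def longest_prefix_length(x, w, prefix_tables):
    """Binary search over the bit-lengths 0..w to find the longest stored prefix
    of the w-bit key x. Works because stored prefixes are closed downward: if a
    length-m prefix exists, so do all of its shorter prefixes. Runs in O(log w),
    i.e. O(log log U) for a universe of size U = 2^w."""
    lo, hi = 0, w
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if (x >> (w - mid)) in prefix_tables[mid]:
            lo = mid          # a prefix this long exists; try something longer
        else:
            hi = mid - 1      # too long; try something shorter
    return lo

# Toy usage: keys 0b1010 and 0b1100 in a 4-bit universe.
w = 4
keys = [0b1010, 0b1100]
prefix_tables = [set(k >> (w - i) for k in keys) for i in range(w + 1)]
print(longest_prefix_length(0b1011, w, prefix_tables))   # 3 (longest stored prefix is 101)
```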

Other algorithms achieve O(log log n) runtimes in other ways. Interpolation search has expected runtime O(log log n) for finding a number in a sorted array, but the analysis is fairly involved. Ultimately, the analysis works by showing that the number of iterations is equal to the number k such that n^(2^(-k)) ≤ 2, for which log log n is the correct solution. Some algorithms, like the Cheriton-Tarjan MST algorithm, arrive at a runtime involving O(log log n) by solving a complex constrained optimization problem.
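For reference, here is the textbook form of interpolation search (a generic sketch, not tied to the analysis mentioned above); the expected O(log log n) bound assumes the keys are roughly uniformly distributed, and the worst case degrades to O(n):

```python
def interpolation_search(arr, target):
    """Search a sorted list by probing where the target 'should' be if the values
    were spread uniformly, rather than always probing the middle."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= target <= arr[hi]:
        if arr[lo] == arr[hi]:                 # all remaining values are equal
            return lo if arr[lo] == target else -1
        # Linear interpolation between arr[lo] and arr[hi] picks the probe index.
        pos = lo + (target - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pos] == target:
            return pos
        elif arr[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1

print(interpolation_search(list(range(0, 1000, 7)), 343))   # 49
```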

Hope this helps!
