Why siftDown is better than siftUp in heapify?

Question

To build a MAX heap tree, we can use either siftDown or siftUp. By sifting down, we start from the root and compare it to its two children; we then swap it with the larger of the two children. If both children are smaller, we stop; otherwise we continue sifting that element down until we reach a leaf node (or, of course, until that element is larger than both of its children).

Now we will only need to do that n/2 times, because the number of leaves is n/2 and the leaves already satisfy the heap property on their own; we are done once we have heapified the last element on the level just above the leaves - so we are left with only n/2 elements to heapify.
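A minimal sketch of this bottom-up, siftDown-based build (the helper names sift_down and build_max_heap are my own, not from the original question):

```python
def sift_down(a, i, n):
    """Move a[i] down until it is >= both of its children (max-heap)."""
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        largest = i
        if left < n and a[left] > a[largest]:
            largest = left
        if right < n and a[right] > a[largest]:
            largest = right
        if largest == i:              # both children are smaller (or missing): stop
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest                   # keep sifting the element down

def build_max_heap(a):
    n = len(a)
    # The leaves (indices n//2 .. n-1) already satisfy the heap property,
    # so only the first n//2 positions are heapified, from the bottom up.
    for i in range(n // 2 - 1, -1, -1):
        sift_down(a, i, n)
```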

Now if we use siftUp, we will start with the leaves, and eventually we will need to heapify all n elements.
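For contrast, a sketch of a siftUp-based build under the same assumptions (again, the names are illustrative only); here every one of the n elements may have to travel up toward the root:

```python
def sift_up(a, i):
    """Move a[i] up until its parent is >= it (max-heap)."""
    while i > 0:
        parent = (i - 1) // 2
        if a[parent] >= a[i]:
            return
        a[i], a[parent] = a[parent], a[i]
        i = parent

def build_max_heap_siftup(a):
    # Every element after the first is inserted by sifting it up,
    # so all n elements are processed.
    for i in range(1, len(a)):
        sift_up(a, i)
```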

My question is: when we use siftDown, aren't we basically doing two comparisons (comparing the element to both of its children), instead of only one comparison when using siftUp, since we only compare that element to its one parent? If yes, wouldn't that mean that we're doubling the complexity and really ending up with the exact same complexity as sifting up?

Answer

Actually, building a heap with repeated calls of siftDown has a complexity of O(n), whereas building it with repeated calls of siftUp has a complexity of O(n log n).

This is due to the fact that when you use siftDown, the time taken by each call decreases with the depth of the node because these nodes are closer to the leaves. When you use siftUp, the number of swaps increases with the depth of the node because if you are at full depth, you may have to swap all the way to the root. As the number of nodes grows exponentially with the depth of the tree, using siftUp gives a more expensive algorithm.
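To make that counting argument concrete (this derivation is my own addition, not part of the original answer): an n-node heap has at most ⌈n/2^(h+1)⌉ nodes at height h, and siftDown from height h does O(h) work, whereas siftUp from depth d does O(d) work.

```latex
% Building with siftDown: sum the work level by level.
\sum_{h=0}^{\lfloor \log_2 n \rfloor}
  \left\lceil \frac{n}{2^{h+1}} \right\rceil \cdot O(h)
  = O\!\left( n \sum_{h=0}^{\infty} \frac{h}{2^{h}} \right)
  = O(n),
  \qquad \text{since } \sum_{h \ge 0} \frac{h}{2^{h}} = 2.

% Building with siftUp: about n/2 of the nodes are leaves at depth
% roughly \log_2 n, so the leaves alone already cost \Theta(n \log n).
```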

Moreover, if you are using a Max-heap to do some sort of sorting where you pop the max element of the heap and then reheapify, it's easier to do so using siftDown. You can reheapify in O(log n) time by popping the max element, putting the last element at the root node (which became empty because you popped it) and then sifting it down all the way back to its correct spot.
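A sketch of that pop-and-reheapify step, reusing the hypothetical sift_down helper from the earlier sketch:

```python
def extract_max(a):
    """Pop the maximum of a max-heap stored in list a, in O(log n) time."""
    top = a[0]
    last = a.pop()               # remove the last element
    if a:
        a[0] = last              # place it at the now-empty root...
        sift_down(a, 0, len(a))  # ...and sift it down to its correct spot
    return top
```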
