Why is ArrayList add() and add(int index, E) complexity amortized constant time? Why not O(1) for add() and O(n) for add(int index, E)?


Problem description


Why is ArrayList add() and add(int index, E) complexity amortized constant time?

Why not O(1) for a single add() operation, O(n) for a single add(int index, E) operation, and O(n) for adding n elements (n add operations) using either add method? Suppose we rarely use add(int index, E) to add to the end of the array.

Isn't the single-operation complexity for an array (and ArrayList) already holding n elements:

  1. add() - O(1)?
  2. add(int index, E) - O(n)?

If we make a million insertions, the average of 1 and 2 cannot be O(1), right?

Why does Oracle say

The add operation runs in amortized constant time, that is, adding n elements requires O(n) time.

I thought complexity is O(1) for add() and O(n) for add(int index, E).

Does it mean that the "integral complexity of n operations" (each operation of complexity O(1)) is, so to speak, n * O(1) = O(n)? What am I missing?

Maybe for Oracle, "add operation" always means only add()? And add(int, E) is an "insertion operation"? Then it's totally clear!

I have a guess that it has to do with the difference between asymptotic analysis and amortized analysis, but I cannot quite grasp it.

What is amortized analysis of algorithms?

Why the time complexity of an array insertion is O(n) and not O(n+1)?

More appropriate to say Amortized O(1) vs O(n) for insertion into unsorted dynamic array? (not quite clear)

Big O when adding together different routines

Solution

In Oracle's terminology (which is implied tacitly), and speaking about List:

  • "add method" (synonym - "append method") always means boolean add(E)
  • "insert method" always means boolean add(int index, E)

When Oracle writes

The add operation runs in amortized constant time, that is, adding n elements requires O(n) time.

it means:

Complexity of a single boolean add(E) operation is amortized O(1).

It cannot be just O(1) asymptotically (always), because occasionally we need to grow the array capacity. Such a single add operation is in fact a "create a new, bigger array; copy the old array into it; then add one element to the end" operation, and it has O(n) asymptotic complexity: copying the array when increasing the List capacity is O(n), so the complexity of growing plus adding is O(n) [calculated as O(n) + O(1) = O(n)]. Without the capacity-growing operation, add complexity would be O(1): the element is always added (appended) to the end of the array (max index). If we "added" (= inserted) somewhere other than the array end, we would need to shift the rightmost elements (those with bigger indexes), and the complexity of a single such operation would be O(n).
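The grow-and-copy behavior can be sketched with a toy growable array. This is a simplified stand-in, not ArrayList's actual implementation: the initial capacity (4) and the doubling growth factor here are assumptions (the real ArrayList grows by roughly 1.5x), but the cost structure is the same.

```java
// A minimal growable array illustrating grow-and-append (a sketch, not ArrayList's real code).
class SimpleDynamicArray {
    private Object[] data = new Object[4]; // assumed initial capacity
    private int size = 0;
    long copies = 0; // total element copies caused by capacity growth

    void add(Object e) {
        if (size == data.length) {                      // capacity exhausted: grow
            Object[] bigger = new Object[data.length * 2]; // doubling is an assumption
            System.arraycopy(data, 0, bigger, 0, size); // the O(n) copy step
            copies += size;
            data = bigger;
        }
        data[size++] = e;                               // the O(1) append step
    }

    int size() { return size; }
}

public class AmortizedAddDemo {
    public static void main(String[] args) {
        SimpleDynamicArray a = new SimpleDynamicArray();
        int n = 1_000_000;
        for (int i = 0; i < n; i++) a.add(i);
        // With doubling, the total copy work over n appends stays below 2n,
        // so the average cost per add is a constant: amortized O(1).
        System.out.println("appends = " + n + ", copies = " + a.copies);
    }
}
```

Counting the copies makes the "dilution" concrete: over a million appends, fewer than two million elements are ever copied in total, so the average cost per add is bounded by a constant even though individual grow-and-add operations cost O(n).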

Now, for a single add operation, the asymptotic complexity is O(1) for add-without-increasing-capacity and O(n) for add-with-increasing-capacity (which happens very rarely).

The amortized complexity of a single add operation is O(1). It reflects the fact that the rare O(n) grow-and-add operations get "diluted" by the far more numerous O(1) add-without-grow operations, so on average a single operation is O(1).

The "asymptotic complexity" of n add operations is O(n). But here we are speaking about the complexity of n operations, not the complexity of one operation. Strictly speaking, "asymptotic complexity" is a loose term for this, but anyway. "Amortized complexity" of n operations makes even less sense.

Finally, the boolean add(int index, E) single-operation complexity is always O(n). If it triggers growth, it is O(n) + O(n) [grow + insert], but 2 * O(n) is the same as O(n).
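The O(n) shifting can also be seen directly: inserting at index i must move the size - i elements to its right one slot over. The real ArrayList does this with System.arraycopy; the helper below is a hypothetical sketch of that mechanism.

```java
import java.util.Arrays;

public class InsertShiftDemo {
    // Insert value at index into the first `size` elements of src, returning a
    // new array: every element from index onward is shifted right, hence O(n).
    static int[] insertAt(int[] src, int size, int index, int value) {
        int[] out = Arrays.copyOf(src, size + 1);
        System.arraycopy(src, index, out, index + 1, size - index); // shift right
        out[index] = value;
        return out;
    }

    public static void main(String[] args) {
        int[] a = {10, 20, 30, 40};
        // Inserting at index 1 shifts 20, 30, 40 one position to the right.
        System.out.println(Arrays.toString(insertAt(a, a.length, 1, 99)));
        // prints [10, 99, 20, 30, 40]
    }
}
```

Inserting at index 0 shifts all n elements (the worst case), while inserting at index size shifts none, which is exactly the append case.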
