ArrayList Vs LinkedList


Question


I was following a previous post on this that says:

For LinkedList

  • get is O(n)
  • add is O(1)
  • remove is O(n)
  • Iterator.remove is O(1)

For ArrayList

  • get is O(1)
  • add is O(1) amortized, but O(n) worst-case since the array must be resized and copied
  • remove is O(n)

So by looking at this, I concluded that if I have to do just sequential inserts into my collection, say 5000000 elements, LinkedList will outclass ArrayList.

And if I just have to fetch the elements from the collection by iterating, i.e. not grabbing an element from the middle, LinkedList will still outclass ArrayList.

Now to verify my two statements above, I wrote the sample program below… but I was surprised to find both statements proven wrong.

ArrayList outclassed LinkedList in both cases: it took less time than LinkedList for adding the elements as well as for fetching them from the collection. Is there anything I'm doing wrong, or do the initial statements about LinkedList and ArrayList not hold true for collections of size 5000000?

I mention the size because if I reduce the number of elements to 50000, LinkedList performs better and the initial statements hold true.

long nano1 = System.nanoTime();

List<Integer> arr = new ArrayList<>();
for(int i = 0; i < 5000000; ++i) {
    arr.add(i);
}
System.out.println( (System.nanoTime() - nano1) );

for(int j : arr) {
    ;
}
System.out.println( (System.nanoTime() - nano1) );

long nano2 = System.nanoTime();

List<Integer> arrL = new LinkedList<>();
for(int i = 0; i < 5000000; ++i) {
    arrL.add(i);
}
System.out.println( (System.nanoTime() - nano2) );

for(int j : arrL) {
    ;
}
System.out.println( (System.nanoTime() - nano2) );

Solution

Remember that big-O complexity describes asymptotic behaviour and may not reflect actual implementation speed. It describes how the cost of each operation grows with the size of the list, not the speed of each operation. For example, the following implementation of add is O(1) but is not fast:

import java.util.LinkedList;

public class MyList<E> extends LinkedList<E> {
    @Override
    public boolean add(E o) {
        // Each call is still O(1) -- it just takes ten seconds.
        try { Thread.sleep(10000); } catch (InterruptedException e) { /* ignored for the example */ }
        return super.add(o);
    }
}

I suspect that in your case ArrayList is performing well because it grows its internal buffer fairly aggressively, so there will not be a large number of reallocations. When the buffer does not need to be resized, ArrayList has faster adds.
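As a quick illustration (not from the original post), you could take resizing out of the picture entirely by constructing the ArrayList with an initial capacity; ArrayList(int) pre-allocates the backing array, so the insert loop never has to reallocate:

// Sketch: pre-size the backing array so no reallocation happens during the inserts.
List<Integer> preSized = new ArrayList<>(5000000);
for (int i = 0; i < 5000000; ++i) {
    preSized.add(i);
}

If this version is not noticeably faster than the default-capacity one, reallocation was never the dominant cost in the benchmark.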

You also need to be very careful when you do this kind of profiling. I'd suggest you change your profiling code to do a warm-up phase (so the JIT has the opportunity to do some optimization without affecting your results) and average the results over a number of runs.

private final static int WARMUP = 1000;
private final static int TEST = 1000;
private final static int SIZE = 500000;

public void perfTest() {
    // Warmup
    for (int i = 0; i < WARMUP; ++i) {
        buildArrayList();
    }
    // Test
    long sum = 0;
    for (int i = 0; i < TEST; ++i) {
        sum += buildArrayList();
    }
    System.out.println("Average time to build array list: " + (sum / TEST));
}

public long buildArrayList() {
    long start = System.nanoTime();
    ArrayList<Integer> a = new ArrayList<>();
    for (int i = 0; i < SIZE; ++i) {
        a.add(i);
    }
    long end = System.nanoTime();
    return end - start;
}

... same for buildLinkedList
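For completeness, a sketch of what that analogous method might look like, assuming it mirrors buildArrayList with only the collection type changed:

public long buildLinkedList() {
    long start = System.nanoTime();
    LinkedList<Integer> l = new LinkedList<>();
    for (int i = 0; i < SIZE; ++i) {
        l.add(i);
    }
    long end = System.nanoTime();
    return end - start;
}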

(Note that sum could overflow; you might be better off using System.currentTimeMillis().)

It's also possible that the compiler is optimizing away your empty get loops. Make sure the loop actually does something to ensure that the right code is getting called.
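For example (a sketch, not part of the original answer), summing the values and printing the total gives the loop an observable side effect that the JIT cannot discard:

// Consume every element so the iteration cannot be eliminated as dead code.
long checksum = 0;
for (int j : arr) {
    checksum += j;
}
System.out.println("checksum = " + checksum);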
