HashSet iteration


Problem description

I have a question regarding the iterator of HashSet in Java. In the book "Java Generics and Collections", the following is stated:


The chief attraction of a hash table implementation for sets is the (ideally) constant-time performance for the basic operations of add, remove, contains, and size. Its main disadvantage is its iteration performance; since iterating through the table involves examining every bucket, its cost is proportional to the table size regardless of the size of the set it contains.

It states that the iterator looks in every bucket of the underlying table. But going through the actual implementation (JDK 8), I see that HashIterator stores a reference to the next node. So it seems the iterator doesn't need to visit every single bucket.

Is the book wrong here, or is my understanding wrong?

Recommended answer

The book is right. Although KeyIterator does indeed call nextNode().key, like this:

final class KeyIterator extends HashIterator implements Iterator<K> {
    public final K next() {
        return nextNode().key;
    }
}

the code for nextNode() in the base class HashIterator contains the loop that the book is talking about:

final Node<K,V> nextNode() {
    Node<K,V>[] t;
    Node<K,V> e = next;
    if (modCount != expectedModCount)
        throw new ConcurrentModificationException();
    if (e == null)
        throw new NoSuchElementException();
    if ((next = (current = e).next) == null && (t = table) != null) {
        do {} while (index < t.length && (next = t[index++]) == null);
    }
    return e;
}

The do/while loop with an empty body traverses the buckets one by one, looking for the next entry.
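To see why a full iteration touches every slot of the table, here is a simplified, hypothetical model of that same scan over a plain array of buckets (nulls stand for empty buckets; this is an illustration, not the JDK source):

```java
// Simplified sketch of bucket scanning: each advance skips forward past
// empty (null) slots, so iterating all entries examines every index of
// the table exactly once, regardless of how few entries it holds.
public class BucketScan {
    static int nextNonEmpty(String[] table, int index) {
        while (index < table.length && table[index] == null) {
            index++; // same idea as: do {} while (... (next = t[index++]) == null);
        }
        return index; // table.length means "no more entries"
    }

    public static void main(String[] args) {
        String[] table = {null, null, "a", null, "b", null, null, "c"};
        int visited = 0;
        for (int i = nextNonEmpty(table, 0); i < table.length; i = nextNonEmpty(table, i + 1)) {
            System.out.println(table[i]);
            visited++;
        }
        // 3 entries found, but all 8 slots were examined along the way.
        System.out.println("entries: " + visited);
    }
}
```

The HashIterator does remember the next node within a bucket's chain, but once a chain is exhausted it must fall back to this slot-by-slot scan to find the next occupied bucket.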

The only time this may be relevant is when you iterate over a hash set which you pre-allocated with a large number of buckets but have not yet populated with a large number of items. When you let your HashSet grow by itself as you add more items, the number of buckets stays proportional to the number of items inserted so far, so the slowdown would not be significant.
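A minimal sketch of that pre-allocation scenario (class and method names here are illustrative, not from the JDK): both sets hold the same ten elements, but the sparse one was constructed with a huge initial capacity, so its iterator scans roughly a million mostly-empty buckets to find them.

```java
import java.util.HashSet;
import java.util.Set;

// Demonstrates that iteration work tracks table capacity, not element count.
public class SparseSetIteration {
    static long sumByIteration(Set<Integer> s) {
        long sum = 0;
        for (int x : s) sum += x;
        return sum;
    }

    public static void main(String[] args) {
        // Same 10 elements; the sparse set pre-allocates ~1M buckets.
        Set<Integer> sparse = new HashSet<>(1 << 20);
        Set<Integer> dense  = new HashSet<>();
        for (int i = 0; i < 10; i++) { sparse.add(i); dense.add(i); }

        long t0 = System.nanoTime();
        long s1 = sumByIteration(sparse);
        long sparseNanos = System.nanoTime() - t0;

        t0 = System.nanoTime();
        long s2 = sumByIteration(dense);
        long denseNanos = System.nanoTime() - t0;

        // Same result either way; the sparse iteration just does more scanning.
        System.out.println("sums equal: " + (s1 == s2));
        System.out.println("sparse ns: " + sparseNanos + ", dense ns: " + denseNanos);
    }
}
```

The exact timings depend on the JVM and warm-up, so treat the printed numbers as a rough illustration rather than a benchmark.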
