Hashtable rehash on remove

Problem description

Does anyone know why the Java JDK implementation of Hashtable does not rehash the table upon remove?

What if space usage is too low? Isn't that a reason to reduce the size and rehash?

Just like the load factor of 0.75 that triggers a rehash on put, we could have a lower bound on the density of the table, say 0.25 (analysis could of course be done to find the best value), and trigger a rehash again, provided the size of the table is greater than the initialCapacity. A sketch of the idea follows below.
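
To make the idea concrete, here is a minimal sketch of such a shrink-on-remove policy, written as a wrapper around HashMap rather than as a change to java.util.Hashtable itself. The 0.25 lower bound and the rebuild-by-copy strategy are only illustrative assumptions, not anything the JDK actually does.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the shrink-on-remove policy proposed in the question.
// java.util.Hashtable/HashMap do NOT behave this way; the thresholds
// and the "rebuild into a smaller map" approach are assumptions.
class ShrinkingMap<K, V> {
    private static final double LOWER_DENSITY = 0.25; // assumed lower bound
    private final int initialCapacity;
    private int capacity;      // current logical table capacity
    private Map<K, V> table;

    ShrinkingMap(int initialCapacity) {
        this.initialCapacity = initialCapacity;
        this.capacity = initialCapacity;
        this.table = new HashMap<>(initialCapacity);
    }

    V put(K key, V value) {
        V old = table.put(key, value);
        if (table.size() > capacity * 0.75) {   // the usual grow rule
            resize(capacity * 2);
        }
        return old;
    }

    V remove(K key) {
        V old = table.remove(key);
        // The proposed rule: shrink when density drops below the lower
        // bound, but never below the initial capacity.
        if (capacity > initialCapacity && table.size() < capacity * LOWER_DENSITY) {
            resize(Math.max(initialCapacity, capacity / 2));
        }
        return old;
    }

    private void resize(int newCapacity) {
        // Rebuilding the map rehashes every entry into the new table size.
        Map<K, V> rebuilt = new HashMap<>(newCapacity);
        rebuilt.putAll(table);
        table = rebuilt;
        capacity = newCapacity;
    }

    int size() {
        return table.size();
    }
}
```

Note that every shrink here pays the full cost of rehashing all remaining entries, which is exactly the cost the JDK implementation is trying to avoid.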

Recommended answer

Rehashing is an expensive operation, and the Java hash-based data structures try to avoid it. They only rehash when lookup performance becomes bad. That is the purpose of this type of data structure: lookup performance.

Here is a quote from the HashMap Javadoc:

The expected number of entries in the map and its load factor should be taken into account when setting its initial capacity, so as to minimize the number of rehash operations. If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur.

If many mappings are to be stored in a HashMap instance, creating it with a sufficiently large capacity will allow the mappings to be stored more efficiently than letting it perform automatic rehashing as needed to grow the table.
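
For example, following the Javadoc's advice, a map sized up front for the expected number of entries never needs to rehash while it is being filled. The entry count and the key/value types below are just placeholders for illustration.

```java
import java.util.HashMap;
import java.util.Map;

public class PresizedMapExample {
    public static void main(String[] args) {
        int expectedEntries = 10_000;   // assumed workload size
        float loadFactor = 0.75f;       // the default load factor

        // Per the Javadoc: if the initial capacity is greater than the
        // expected number of entries divided by the load factor, no
        // rehash operation will ever occur while filling the map.
        int initialCapacity = (int) (expectedEntries / loadFactor) + 1;
        Map<Integer, String> map = new HashMap<>(initialCapacity, loadFactor);

        for (int i = 0; i < expectedEntries; i++) {
            map.put(i, "value-" + i);   // inserts never trigger a resize
        }
        System.out.println("size = " + map.size());
    }
}
```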

Besides this argument, the Java creators might have thought that if you ever had that many elements in your hashtable, the probability of reaching that size again is quite high, so there is no need to rehash the table twice.
