Worst case time complexity of put/get in HashMap


Question




What is the worst case time complexity of a HashMap when the hash codes of its keys are always equal?

In my understanding: since every key has the same hash code, it will always go to the same bucket, which is then traversed to check each entry with the equals method. So for both get and put the time complexity should be O(n). Am I right?

I was looking at this question on HashMap get/put complexity, but it doesn't answer my question.

Also, the Wikipedia article on hash tables states that the worst-case time complexity for insert is O(1) and for get it is O(n). Why is that?

Solution

Yes, in the worst case your hash map will degenerate into a linked list, and you will suffer an O(n) penalty for lookups, as well as for inserts and deletions, both of which require a lookup operation (thanks to the comments for pointing out the mistake in my earlier answer).
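This degeneration is easy to reproduce: give every key the same hashCode and all entries collide into one bucket, so each get must walk the bucket and call equals on every entry. A minimal sketch (the class names `BadKey` and `CollisionDemo` are illustrative, not from the original question):

```java
import java.util.HashMap;
import java.util.Map;

// A key whose hashCode is constant, forcing every entry into the same bucket.
final class BadKey {
    private final int id;
    BadKey(int id) { this.id = id; }

    @Override
    public int hashCode() { return 42; }  // every key collides

    @Override
    public boolean equals(Object o) {
        return o instanceof BadKey && ((BadKey) o).id == this.id;
    }
}

public class CollisionDemo {
    public static void main(String[] args) {
        Map<BadKey, String> map = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            map.put(new BadKey(i), "value-" + i);  // all land in one bucket
        }
        // get() must scan the colliding entries and test equals() on each,
        // so the lookup cost grows linearly with the number of keys: O(n).
        System.out.println(map.get(new BadKey(999)));
    }
}
```

With a well-distributed hashCode the same 1000 entries would spread across many buckets and the lookup would be O(1) on average.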

There are some ways of mitigating the worst-case behavior, such as using a self-balancing tree instead of a linked list for bucket overflow; this reduces the worst-case behavior to O(log n) instead of O(n).
