Are there any cases where you would prefer a higher big-O time complexity algorithm over the lower one?


Question

Are there are any cases where you would prefer O(log n) time complexity to O(1) time complexity? Or O(n) to O(log n)?

Do you have any examples?

Answer

There can be many reasons to prefer an algorithm with a higher big-O time complexity over one with a lower complexity:


  • most of the time, lower big-O complexity is harder to achieve: it requires a skilled implementation, a lot of knowledge, and a lot of testing.
  • big-O hides the details about constants: an algorithm that runs in 10^5 steps is better from the big-O point of view than one that runs in 1/10^5 * log(n) steps (O(1) vs O(log(n))), but for any reasonable n the second one will perform better (see the first sketch after this list). For example, the best known complexity for matrix multiplication is O(n^2.373), but the constant is so high that (to my knowledge) no computational library uses it.
  • big-O makes sense when you are computing over something big. If you need to sort an array of three numbers, it matters very little whether you use an O(n*log(n)) or an O(n^2) algorithm.
  • sometimes the advantage of the lower time complexity can be really negligible. For example, there is a data structure, the tango tree, which gives O(log log N) time complexity to find an item, but there is also a binary tree which finds the same in O(log n). Even for huge numbers like n = 10^20 the difference is negligible.
  • time complexity is not everything. Imagine an algorithm that runs in O(n^2) time and requires O(n^2) memory. It might be preferable to one that takes O(n^3) time and O(1) space when n is not really big. The problem is that you can wait a long time, but it is highly doubtful you can find RAM big enough to use with your algorithm.
  • parallelization is a good feature in our distributed world. Some algorithms are easily parallelizable, and some do not parallelize at all. Sometimes it makes sense to run a higher-complexity algorithm on 1000 commodity machines rather than run an algorithm with slightly better complexity on a single machine.
  • in some places (security) a higher complexity can be a requirement. No one wants a hash algorithm that hashes blazingly fast (because then other people can brute-force you way faster).
  • although this is not related to a switch of complexity class, some security functions should be written in a manner that prevents timing attacks. They mostly stay in the same complexity class, but are modified so that they always do the worst-case amount of work. One example is comparing strings for equality. In most applications it makes sense to break out early if the first bytes differ, but in security you still wait until the very end to tell the bad news (see the constant-time comparison sketch after this list).
  • somebody patented the lower-complexity algorithm, and it is more economical for a company to use the higher complexity than to pay for a license.
  • some algorithms adapt well to particular situations. Insertion sort, for example, has an average time complexity of O(n^2), worse than quicksort or mergesort, but as an online algorithm it can efficiently sort a list of values as they are received (as user input), whereas most other algorithms can only operate efficiently on a complete list of values (see the online insertion-sort sketch after this list).
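As a rough illustration of the hidden-constants point, here is a minimal Python sketch. The step counts 10^5 and 1/10^5 * log2(n) are just the hypothetical constants from the bullet above, not measurements of any real algorithm:

```python
import math

# Hypothetical cost models taken from the bullet above (not real measurements):
# algorithm A is O(1) but always takes 10^5 "steps",
# algorithm B is O(log n) but has a tiny constant of 1/10^5.
def cost_a(n: int) -> float:
    return 1e5                     # constant, independent of n

def cost_b(n: int) -> float:
    return 1e-5 * math.log2(n)     # grows, but extremely slowly

for n in (10, 10**6, 10**12, 10**20):
    print(f"n = {n}: A = {cost_a(n):.0f} steps, B = {cost_b(n):.6f} steps")

# B only overtakes A once log2(n) > 10^10, i.e. n > 2**(10**10),
# far larger than anything that fits in any computer's memory.
```

Under this made-up cost model, the asymptotically "worse" O(log n) algorithm wins for every input size you will ever encounter, which is exactly the point of that bullet.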
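For the timing-attack point, a minimal sketch of the idea in Python. In practice you would use the standard library's hmac.compare_digest rather than a hand-rolled loop; the explicit version below only shows why the deliberately "slower" comparison is preferred:

```python
import hmac

def leaky_equals(a: bytes, b: bytes) -> bool:
    # Fast on average: returns as soon as a byte differs, so the running
    # time leaks how long the matching prefix of the secret is.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equals(a: bytes, b: bytes) -> bool:
    # Deliberately does the worst-case amount of work every time:
    # it looks at every byte even after a mismatch has been found.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

secret = b"correct horse battery staple"
guess = b"correct horse battery stapl0"
print(leaky_equals(secret, guess))          # False, time depends on the input
print(constant_time_equals(secret, guess))  # False, time does not
print(hmac.compare_digest(secret, guess))   # the standard-library equivalent
```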
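And for the online-sorting point, a small sketch of insertion sort consuming a stream of values. It uses the standard bisect module to find each insertion point; the shift performed by each insertion is still O(n), so over n arrivals the behaviour matches insertion sort, but the list is fully sorted after every single arrival, which a batch sort like mergesort cannot offer until it has seen all the input:

```python
import bisect

def online_insert(sorted_values: list, value: int) -> None:
    # One insertion-sort step: find the position (binary search) and insert.
    # The shift inside insort is O(n), so over n arrivals this is O(n^2).
    bisect.insort(sorted_values, value)

stream = [5, 1, 4, 1, 5, 9, 2, 6]   # values arriving one at a time, e.g. user input
seen: list = []
for v in stream:
    online_insert(seen, v)
    print(f"after receiving {v}: {seen}")
```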

