Big O vs Small omega
Question

Why is ω(n) smaller than O(n)?

I know what little omega is (for example, n = ω(log n)), but I can't understand why ω(n) is smaller than O(n).
Answer
Algorithmic complexity has a mathematical definition.
If f and g are two functions, then f = O(g) if you can find two constants c (> 0) and n such that f(x) < c * g(x) for every x > n.
For Ω, it is the opposite: you can find constants such that f(x) > c * g(x) for every x > n. (For the small letters o and ω, the condition is stricter: it must hold for every constant c > 0, not just for one. That is what n = ω(log n) expresses — n strictly dominates log n.)
f = Θ(g) if there are three constants c, d and n such that c * g(x) < f(x) < d * g(x) for every x > n.
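The Θ definition needs two witness constants at once. As a sketch in the same spirit (again, the functions and the witnesses c = 2, d = 3 are my own illustration):

```python
def is_theta_witness(f, g, c, d, n0, xs):
    """Check c * g(x) < f(x) < d * g(x) for every sampled x > n0.

    A numeric spot check of the Theta definition: f must be squeezed
    between two constant multiples of g for all large enough x.
    """
    return all(c * g(x) < f(x) < d * g(x) for x in xs if x > n0)

# Example: f(x) = 2x + 3 is Theta(x): for x > 3 it sits strictly
# between 2x (lower edge) and 3x (upper edge).
f = lambda x: 2 * x + 3
g = lambda x: x
print(is_theta_witness(f, g, c=2, d=3, n0=3, xs=range(1, 1000)))  # True
```

This is exactly the "wrapping between two edges" described below: the same pair (c, d) must work for every large x.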
Then, O means your function is dominated, Θ means your function is equivalent to the other function, and Ω means your function has a lower limit. So, when you use Θ, your approximation is better, because you are "wrapping" your function between two edges; whereas O only sets a maximum. Ditto for Ω (a minimum).
To sum up:

- O(n): in the worst case, your algorithm has a complexity of n
- Ω(n): in the best case, your algorithm has a complexity of n
- Θ(n): in every case, your algorithm has a complexity of n
To conclude, your assumption is wrong: ω is not "smaller" than O — it is a (strict) lower bound, while O is an upper bound. As you may know, n > c * log(n) for any constant c once n is huge, so according to the previous definitions it is logical to say n = ω(log(n)), not n = Θ(log(n)).
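The claim that no constant c can ever keep n below c * log(n) can be made concrete by watching the ratio n / log(n) grow. A small sketch (the sample points are my own choice):

```python
import math

# If n were O(log n), the ratio n / log(n) would stay below some
# constant c. Instead it grows without bound, which is consistent
# with n = omega(log n): n eventually beats c * log(n) for every c.
ratios = [n / math.log(n) for n in (10, 100, 10_000, 10**8)]
print(ratios)
```

Each sample point gives a larger ratio than the last, so any candidate constant c is eventually exceeded.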