Complexity and Run Times


Problem Description


I tried looking around to see if my question had already been answered, but I haven't stumbled on anything that could help me.

When dealing with run-time complexity, do you account for the operands? From my understanding, each different operand can take some x amount of time, so would counting only the loops give you just a lower bound? If this is incorrect, can you please explain where my logic is wrong.

For example:

    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            a[i][j] = b[i][j] + c[i][j];

Would this just be O(n^2)? Or would it be O(a*n^2) because of the addition operand? And "O" is what you usually use for run time, correct?

For example:

    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            a[i][j] -= b[i][j] * c[i][j];

Would this just be O(n^2) again? Or would it be O(a^2*n^2) because of the subtraction and multiplication operands?

Thanks Stack!

Solution

I suggest you read up on what O notation means, but let me give you a brief overview:

When we say f(x) = O(g(x)), we mean that for some constant c independent of the input size,

    f(x) <= c*g(x)   for all x >= k

In other words, beyond a certain point k, the curve f(x) is always bounded above by the curve c*g(x).
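For a concrete instance of this definition (the particular numbers are my own illustration, not from the original answer), take f(x) = 3x^2 + 10x and g(x) = x^2. Since 10x <= x^2 whenever x >= 10,

    3x^2 + 10x <= 4x^2   for all x >= 10

so f(x) = O(x^2) with c = 4 and k = 10.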

Now, in the cases you have considered, addition, subtraction, and multiplication are all primitive operations that take constant time, O(1). Say adding two numbers takes time 'a' and assigning the result takes time 'b'.

So for this code:

    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            a[i][j] = b[i][j] + c[i][j];

Let's be sloppy and ignore the for-loop operations of assignment and update. The running time is T(n) = (a + b)n^2.
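To spell out where that expression comes from: the inner statement runs n * n = n^2 times, and each run costs one addition (time a) plus one assignment (time b), so

    T(n) = n^2 * (a + b) = (a + b)n^2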

Notice that this is just O(n^2). Why?

As per the definition, this means we can identify some point k beyond which, for some constant c, the curve T(n) is always bounded above by c*n^2.

Realize that this is indeed true: we can always pick a sufficiently large constant so that the curve c*n^2 bounds the given curve from above; here, any c >= a + b works.
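To see the constants being dropped in action, here is a minimal, runnable C sketch (my own illustration, not from the original answer, assuming unit costs a = b = 1 for the addition and the assignment). It counts the primitive operations performed by the loop above and prints T(n)/n^2, which settles at the constant a + b that the O notation discards:

    #include <stdio.h>

    #define N 8  /* hypothetical maximum array size for the demo */

    int main(void) {
        int a[N][N], b[N][N], c[N][N];

        /* fill b and c with arbitrary values so the demo is self-contained */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                b[i][j] = i + j;
                c[i][j] = i * j;
            }

        for (int n = 1; n <= N; n++) {
            long ops = 0;  /* T(n): total cost in unit operations */
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++) {
                    a[i][j] = b[i][j] + c[i][j];
                    ops += 2;  /* one addition (a) + one assignment (b) */
                }
            printf("n=%d  T(n)=%ld  T(n)/n^2=%g\n",
                   n, ops, (double)ops / ((double)n * n));
        }
        return 0;
    }

The printed ratio is 2 for every n, i.e. T(n) = 2n^2, and any constant c >= 2 makes c*n^2 an upper bound, so T(n) = O(n^2).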

This is why people say: drop the constants!

Bottom line: f(n) = O(g(n)) means that, to the right of some vertical line, the curve f(n) is always bounded above by a constant multiple of g(n).
