Which is faster: while(1) or while(2)?


Question

This was an interview question asked by a senior manager.

Which is faster?

while(1) {
    // Some code
}

while(2) {
    // Some code
}

I said that both have the same execution speed, as the expression inside while should ultimately evaluate to true or false. In this case both evaluate to true, and there are no extra conditional instructions inside the while condition. So both will have the same execution speed, and I would prefer while (1).

But the interviewer said confidently: "Check your basics. while(1) is faster than while(2)." (He was not testing my confidence.)

Is this true?

Answer

Both loops are infinite, but we can see which one takes more instructions/resources per iteration.

Using gcc, I compiled the following two programs to assembly at varying levels of optimization:

int main(void)
{
    while(1)
    {
    }

    return 0;
}

and

int main(void)
{
    while(2)
    {
    }

    return 0;
}

Even with no optimizations (-O0), the generated assembly was identical for both programs. Therefore, there is no speed difference between the two loops.

For reference, here is the generated assembly (using gcc main.c -S -masm=intel with an optimization flag):

With -O0:

    .file   "main.c"
    .intel_syntax noprefix
    .def    __main; .scl    2;  .type   32; .endef
    .text
    .globl  main
    .def    main;   .scl    2;  .type   32; .endef
    .seh_proc   main
main:
    push    rbp
    .seh_pushreg    rbp
    mov rbp, rsp
    .seh_setframe   rbp, 0
    sub rsp, 32
    .seh_stackalloc 32
    .seh_endprologue
    call    __main
.L2:
    jmp .L2
    .seh_endproc
    .ident  "GCC: (tdm64-2) 4.8.1"

With -O1:

    .file   "main.c"
    .intel_syntax noprefix
    .def    __main; .scl    2;  .type   32; .endef
    .text
    .globl  main
    .def    main;   .scl    2;  .type   32; .endef
    .seh_proc   main
main:
    sub rsp, 40
    .seh_stackalloc 40
    .seh_endprologue
    call    __main
.L2:
    jmp .L2
    .seh_endproc
    .ident  "GCC: (tdm64-2) 4.8.1"

With -O2 and -O3 (same output):

    .file   "main.c"
    .intel_syntax noprefix
    .def    __main; .scl    2;  .type   32; .endef
    .section    .text.startup,"x"
    .p2align 4,,15
    .globl  main
    .def    main;   .scl    2;  .type   32; .endef
    .seh_proc   main
main:
    sub rsp, 40
    .seh_stackalloc 40
    .seh_endprologue
    call    __main
.L2:
    jmp .L2
    .seh_endproc
    .ident  "GCC: (tdm64-2) 4.8.1"

In fact, the assembly generated for the loop is identical at every level of optimization:

 .L2:
    jmp .L2
    .seh_endproc
    .ident  "GCC: (tdm64-2) 4.8.1"

The important bit being:

.L2:
    jmp .L2

I can't read assembly very well, but this is obviously an unconditional loop. The jmp instruction unconditionally resets the program back to the .L2 label, without even comparing a value against true, and of course immediately does so again until the program is somehow ended. This directly corresponds to the C/C++ code:

L2:
    goto L2;

Edit:

Interestingly enough, even with no optimizations, the following loops all produced the exact same output (an unconditional jmp) in assembly:

while(42)
{
}

while(1==1)
{
}

while(2==2)
{
}

while(4<7)
{
}

while(3==3 && 4==4)
{
}

while(8-9 < 0)
{
}

while(4.3 * 3e4 >= 2 << 6)
{
}

while(-0.1 + 02)
{
}

And even, to my amazement:

#include<math.h>

while(sqrt(7))
{
}

while(hypot(3,4))
{
}
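
For reference, any of these conditions can be dropped into a complete file of the same shape as the earlier programs (only the loop fragments are shown above, so this exact layout is an assumption):

#include <math.h>

int main(void)
{
    /* Substitute any of the conditions listed above; each one folds to a
       non-zero constant, so gcc emits the same unconditional jmp. */
    while (sqrt(7))
    {
    }

    return 0;
}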

Things get a little more interesting with user-defined functions:

int x(void)
{
    return 1;
}

while(x())
{
}


#include<math.h>

double x(void)
{
    return sqrt(7);
}

while(x())
{
}
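
To reproduce this, the second fragment can be assembled into a complete program like the following (the full file isn't shown above, so this layout is an assumption); compiling it with gcc main.c -S -masm=intel at -O0 and again at -O1 makes the difference described next easy to see:

#include <math.h>

/* With a user-defined function in the condition, the front end no longer
   sees a literal constant, so at -O0 a real call and comparison are emitted. */
double x(void)
{
    return sqrt(7);
}

int main(void)
{
    while (x())
    {
    }

    return 0;
}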

-O0 ,这两个例子中实际调用 X 并执行每一次迭代的比较。

At -O0, these two examples actually call x and perform a comparison for each iteration.

First example (returning 1):

.L4:
    call    x
    testl   %eax, %eax
    jne .L4
    movl    $0, %eax
    addq    $32, %rsp
    popq    %rbp
    ret
    .seh_endproc
    .ident  "GCC: (tdm64-2) 4.8.1"

Second example (returning sqrt(7)):

.L4:
    call    x
    xorpd   %xmm1, %xmm1
    ucomisd %xmm1, %xmm0
    jp  .L4
    xorpd   %xmm1, %xmm1
    ucomisd %xmm1, %xmm0
    jne .L4
    movl    $0, %eax
    addq    $32, %rsp
    popq    %rbp
    ret
    .seh_endproc
    .ident  "GCC: (tdm64-2) 4.8.1"

However, at -O1 and above, they both produce the same assembly as the previous examples (an unconditional jmp back to the preceding label), presumably because the optimizer can inline x, see that it always returns a non-zero value, and fold the condition away.

When the different loops are compiled to assembly, the compiler evaluates the constant values and doesn't bother performing any actual comparison; the two loops are identical.
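
The same constant folding is visible outside a loop condition. As a hypothetical illustration, a function returning one of the constant expressions above typically compiles, with optimization enabled, to code that just returns the precomputed result:

/* Hypothetical example: gcc evaluates 4.3 * 3e4 >= 2 << 6 at compile time
   and, with optimization enabled, simply returns 1; no multiplication,
   shift, or comparison survives into the generated assembly. */
int always_true(void)
{
    return 4.3 * 3e4 >= 2 << 6;
}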

Even if this doesn't prove that the behavior is consistent across all compilers and platforms, it does prove that the compiler can optimize these loops to be identical, and therefore should. One of the main benefits of using a compiled language is that this sort of thing is supposed to be outside the programmer's concern.
