What are the advantages of just-in-time compilation versus ahead-of-time compilation?


Problem description




I've been thinking about it lately, and it seems to me that most advantages given to JIT compilation should more or less be attributed to the intermediate format instead, and that jitting in itself is not much of a good way to generate code.

So these are the main pro-JIT compilation arguments I usually hear:

1. Just-in-time compilation allows for greater portability. Isn't that attributable to the intermediate format? I mean, nothing keeps you from compiling your virtual bytecode into native bytecode once you've got it on your machine. Portability is an issue in the 'distribution' phase, not during the 'running' phase.
2. Okay, then what about generating code at runtime? Well, the same applies. Nothing keeps you from integrating a just-in-time compiler for a real just-in-time need into your native program.
3. But the runtime compiles it to native code just once anyways, and stores the resulting executable in some sort of cache somewhere on your hard drive. Yeah, sure. But it's optimized your program under time constraints, and it's not making it better from there on. See the next paragraph.

It's not like ahead-of-time compilation has no advantages either. Just-in-time compilation has time constraints: you can't keep the end user waiting forever while your program launches, so it has a tradeoff to make somewhere. Most of the time they just optimize less. A friend of mine had profiling evidence that inlining functions and unrolling loops "manually" (obfuscating source code in the process) had a positive impact on performance on his C# number-crunching program; doing the same on my side, with my C program filling the same task, yielded no positive results, and I believe this is due to the extensive transformations my compiler was allowed to make.

And yet we're surrounded by jitted programs. C# and Java are everywhere, Python scripts can compile to some sort of bytecode, and I'm sure a whole bunch of other programming languages do the same. There must be a good reason that I'm missing. So what makes just-in-time compilation so superior to ahead-of-time compilation?


EDIT To clear some confusion, maybe it would be important to state that I'm all for an intermediate representation of executables. This has a lot of advantages (and really, most arguments for just-in-time compilation are actually arguments for an intermediate representation). My question is about how they should be compiled to native code.

Most runtimes (or compilers for that matter) will prefer to either compile them just-in-time or ahead-of-time. As ahead-of-time compilation looks like a better alternative to me because the compiler has more time to perform optimizations, I'm wondering why Microsoft, Sun and all the others are going the other way around. I'm kind of dubious about profiling-related optimizations, as my experience with just-in-time compiled programs displayed poor basic optimizations.

I used an example with C code only because I needed an example of ahead-of-time compilation versus just-in-time compilation. The fact that C code wasn't emitted to an intermediate representation is irrelevant to the situation, as I just needed to show that ahead-of-time compilation can yield better immediate results.

Solution

The ngen tool page spilled the beans (or at least provided a good comparison of native images versus JIT-compiled images). Here is a list of advantages of executables that are compiled ahead-of-time:

1. Native images load faster because they don't have as much startup activity, and require less memory (the memory otherwise needed by the JIT compiler);
2. Native images can share library code, while JIT-compiled images cannot.

And here is a list of advantages of just-in-time compiled executables:

1. Native images are larger than their bytecode counterpart;
2. Native images must be regenerated whenever the original assembly or one of its dependencies is modified (which makes sense, since it could screw up virtual tables and stuff like that).

And the general considerations of Microsoft on the matter:

1. Large applications generally benefit from being compiled ahead-of-time, and small ones generally don't;
2. Any call to a function loaded from a dynamic library needs the overhead of one additional jump instruction for fixups.

The need to regenerate an image that is ahead-of-time compiled every time one of its components changes is a huge disadvantage for native images. This is the root of the fragile base class problem. In C++, for instance, if the layout of a class from a DLL you use with your native app changes, you're screwed. If you program against interfaces instead, you're still screwed if the interface changes. If you use a more dynamic language instead (say, Objective-C), you're fine, but this comes with a performance hit.

On the other hand, bytecode images don't suffer from this issue and do it without the performance hit. This, in itself, is a very good reason to design a system with an intermediate representation that can easily be regenerated.
