GCC build time doesn't benefit much from precompiled headers


Question



I have a huge project of about 150,000 LOC of C++ code. Build time is about 15 minutes. The project consists of many sub-projects of different sizes.

I have built a separate precompiled header for each sub-project, but when I use them the build time stays roughly the same; it seems to drop by only 5-10%, if that.

The precompiled headers are definitely being used: I pass the -Winvalid-pch option, and when I compile with the -H option my precompiled headers appear in the output marked with a '!' (bang) symbol, which means the compiler is able to use them.

None of my precompiled headers is very large; each file is about 50 MB. I used a Python script, found here, to generate a list of the most-used headers, so my list of precompilation candidates is quite good.
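For reference, a script of that kind can be quite small. The sketch below is my own reconstruction, not the script the question links to; the file extensions it scans are assumptions:

```python
#!/usr/bin/env python3
"""Count how often each header is #included across a source tree.

A minimal sketch of the kind of script mentioned above, not the
original one; the scanned extensions are an assumption.
"""
import os
import re
import sys
from collections import Counter

# Matches both `#include <header>` and `#include "header"` forms.
INCLUDE_RE = re.compile(r'^\s*#\s*include\s*[<"]([^>"]+)[>"]')

def count_includes(root):
    """Return a Counter mapping header name -> number of #include lines."""
    counts = Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(('.cpp', '.cc', '.h', '.hpp')):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding='utf-8', errors='ignore') as f:
                for line in f:
                    m = INCLUDE_RE.match(line)
                    if m:
                        counts[m.group(1)] += 1
    return counts

if __name__ == '__main__' and len(sys.argv) > 1:
    # usage: count_includes.py <source-root>
    for header, n in count_includes(sys.argv[1]).most_common(20):
        print(f'{n:6d}  {header}')
```

Headers near the top of the resulting list are the natural candidates for a precompiled header.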

Are there any free/open-source tools for build optimization? The standard make utility doesn't seem able to measure the build times of different targets, and I can't find a way to get per-target statistics out of make. I'm not talking about dependency analysis or anything advanced; I just want to know which targets most of the time is spent on.
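make itself won't report per-target times, but one common workaround is to point CXX at a small wrapper that logs the wall-clock time of every compiler invocation. A minimal sketch (the wrapper name and log path below are made up, not an existing tool):

```python
#!/usr/bin/env python3
"""Log the wall-clock time of every compiler invocation.

A sketch, not a polished tool: point CXX at this wrapper, e.g.
    make CXX="timed-cxx.py g++"
The log path and format are assumptions.
"""
import subprocess
import sys
import time

LOG = 'build-times.log'

def run_timed(cmd, log_path=LOG):
    """Run cmd, append 'elapsed-seconds  command' to the log, return its exit code."""
    start = time.monotonic()
    result = subprocess.run(cmd)
    elapsed = time.monotonic() - start
    with open(log_path, 'a') as log:
        log.write(f'{elapsed:8.3f}  {" ".join(cmd)}\n')
    return result.returncode

if __name__ == '__main__' and len(sys.argv) > 1:
    sys.exit(run_timed(sys.argv[1:]))
```

After a build, something like `sort -rn build-times.log | head` shows the slowest compilations.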

Also, GCC seems quite inefficient at dealing with precompiled headers. I was unable to get any sub-project to build notably faster; the maximum speedup I got was 20%, on a project that takes three minutes to build. It seems easier and cheaper to buy a faster machine with a solid-state drive than to optimize build time with GCC on Linux.

Solution

If you want to get the most out of this feature, you need to understand how your projects can be structured to make good use of it. The best way is the slow, hard process of manually reducing build times. It sounds really stupid at first, but if all builds going forward are five times faster and you know how to structure your projects and dependencies from now on, then you realize the payoff.

You can set up a continuous integration system with your targets to measure and record your progress/improvements as your changes come in.

I have a huge project of about 150,000 LOC of C++ code. Build time is about 15 minutes. The project consists of many sub-projects of different sizes.

Sounds like it's doing a lot of redundant work, assuming you have a modern machine.

Also consider link times.

None of my precompiled headers is very large; each file is about 50 MB.

That's pretty big, IMO.

I'm not talking about dependency analysis or anything advanced.

Again, continuous integration for stats. For a build that slow, excessive dependencies are very likely the issue (unless you have many, many small cpp files, or something silly like physical memory exhaustion is occurring).
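One cheap way to spot excessive dependencies is to estimate how many project headers each translation unit transitively pulls in. The sketch below is a rough illustration of the idea, not real dependency analysis: it only follows quoted #include "..." lines and resolves them against a single root directory, ignoring system headers and include paths, so the numbers are a lower bound.

```python
#!/usr/bin/env python3
"""Rough estimate of how many project headers each source file drags in.

A sketch: only quoted includes resolvable directly under the given root
are followed, so the counts are a lower bound, not a true include graph.
"""
import os
import re
import sys

QUOTED_INCLUDE_RE = re.compile(r'^\s*#\s*include\s*"([^"]+)"')

def direct_includes(path, root):
    """Quoted includes of `path` that resolve to files under `root`."""
    found = []
    with open(path, encoding='utf-8', errors='ignore') as f:
        for line in f:
            m = QUOTED_INCLUDE_RE.match(line)
            if m:
                candidate = os.path.join(root, m.group(1))
                if os.path.isfile(candidate):
                    found.append(candidate)
    return found

def transitive_count(path, root, seen=None):
    """Number of distinct project headers reachable from `path`."""
    if seen is None:
        seen = set()
    for header in direct_includes(path, root):
        if header not in seen:
            seen.add(header)
            transitive_count(header, root, seen)
    return len(seen)

if __name__ == '__main__' and len(sys.argv) > 1:
    root = sys.argv[1]
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(('.cpp', '.cc')):
                src = os.path.join(dirpath, name)
                print(f'{transitive_count(src, root):5d}  {src}')
```

Translation units with unusually high counts are the ones where trimming includes (or forward-declaring instead) tends to pay off first.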

I was unable to get any sub-project to build notably faster; the maximum speedup I got was 20%.

Understand your structures and dependencies. PCHs slow down most of my projects.

It seems easier and cheaper to buy a faster machine with a solid-state drive than to optimize build time with GCC on Linux.

Chances are, that machine will not make your build times 20x faster, but fixing up your dependencies and project structure can (or whatever the root of the problem ultimately is). The machine helps only so much, considering the build time for 150 KSLOC.

Your build is probably CPU/memory bound.

