Upstream's documentation is in the URL field. It seems to do what USE=pch did in the KDE3 days, or what sqlite does with its amalgamation feature: speed up compilation quite a lot at the expense of GCC using more RAM. The page suggests it might take some pressure off the linker step too; that's currently the part most likely to ruin someone's day with an OOM. The stated downsides (incremental builds being slower) don't really affect us. With the package's ever-increasing build times and frequent updates, a lot of us users would be grateful for something like this, if it works. Thanks for reading.
Could you test this with the latest dev channel ebuild and verify that it actually works?
Very impressive! I did a test with the latest beta channel: I copied chromium-64.0.3282.24.ebuild to my local overlay as chromium-64.0.3282.39 and added the line:

myconf_gn+=" use_jumbo_build=true"

genlop -t chromium:
Fri Dec 15 02:39:06 2017 >>> www-client/chromium-64.0.3282.24
       merge time: 3 hours, 51 minutes and 36 seconds.
Thu Dec 21 19:28:16 2017 >>> www-client/chromium-64.0.3282.39
       merge time: 1 hour, 58 minutes and 37 seconds.

Of course I updated clang from 5.0.0 to 5.0.1 in the meantime, but that should not affect the results.
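For anyone wanting to reproduce this before anything lands in the tree, the overlay steps above sketched out (paths are illustrative and assume a local overlay rooted at /usr/local/portage; adjust to your setup):

```shell
# Copy the existing ebuild into a local overlay under the new version
cd /usr/local/portage/www-client/chromium
cp /usr/portage/www-client/chromium/chromium-64.0.3282.24.ebuild \
   chromium-64.0.3282.39.ebuild

# Then edit the new ebuild and, next to the other myconf_gn lines in
# src_configure(), add:
#   myconf_gn+=" use_jumbo_build=true"

# Regenerate the Manifest so portage will accept the new ebuild
ebuild chromium-64.0.3282.39.ebuild manifest
```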
Oh wow, I didn't think the difference would be that big. Thanks for testing it, Ingo; I'll go give it a try on my netbook (the last build took 35 hours and then died at the final linker step!)
Succeeded on my desktop, but I had to add MAKEOPTS="-l1" to avoid OOM; some of the compiles peaked at over 2.5 GB. Here are the timings:

# qlop -d '1 month ago' -vgH chromium
chromium-63.0.3239.59: Tue Nov 28 11:35:50 2017: 7 hours, 29 minutes, 8 seconds
# qlop -d '1 week ago' -vgH chromium
chromium-64.0.3282.24: Fri Dec 22 04:12:14 2017: 5 hours, 57 minutes, 19 seconds

A significant improvement even without using all cores. I'll probably have to play around with the jumbo_file_merge_limit setting to get it to work on the netbook, which only has 2 GB of RAM. The webpage says the default value is 100, but the source has it as 50.
Successfully built on i686 with 2 GB RAM and jumbo_file_merge_limit=25. The final linker step peaked at about 1.5 GB; the machine didn't even reach swapdeath as it usually does with chromium. For completeness' sake, this is the /etc/portage/env file for chromium on that machine:

---
I_KNOW_WHAT_I_AM_DOING=1
# probably redundant; old ebuilds used to OOM with ld.gold, but filter-flags removes this now
CFLAGS="$CFLAGS -fuse-ld=bfd"
# was also necessary to get the linker to not OOM
LDFLAGS="-Wl,--sort-common,--hash-style=gnu,-z,combreloc,--no-keep-memory,--reduce-memory-overheads"
# -l1 due to jumbo_build
MAKEOPTS="-j2 -l1"
# include server keeps timing out due to size of jumbo_build files
FEATURES="distcc -distcc-pump"
---

And qlop timings; keep in mind my last attempt to build 64.0 on this hardware took ~36 hours:

chromium-56.0.2924.21: Tue Dec 20 18:06:08 2016: 7 hours, 28 minutes, 2 seconds
chromium-60.0.3112.24: Mon Jun 19 20:48:02 2017: 1 day, 2 hours, 8 minutes, 37 seconds
chromium-62.0.3202.18: Sat Sep 30 00:34:10 2017: 1 day, 2 hours, 50 minutes, 43 seconds
chromium-64.0.3282.24: Fri Dec 22 19:10:06 2017: 21 hours, 18 minutes, 46 seconds

Everything went better than I was expecting!
I added a variable called "EXTRA_GN" so that you can play with GN args without modifying the ebuild. I might add a USE flag for jumbo builds to the dev channel.
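With that variable, the GN overrides could live entirely in portage config instead of a modified ebuild. A sketch (the env file name here is illustrative; EXTRA_GN and the GN args come from the comments above):

```shell
# /etc/portage/env/chromium-jumbo.conf  (name is arbitrary)
EXTRA_GN="use_jumbo_build=true jumbo_file_merge_limit=25"

# Then wire it up in /etc/portage/package.env with a line like:
#   www-client/chromium chromium-jumbo.conf
```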
This brings chromium back on par with libreoffice regarding compile time. Wow!

gentoo ~ # genlop -t chromium
 * www-client/chromium
Sat Dec 23 16:49:44 2017 >>> www-client/chromium-65.0.3294.5
       merge time: 1 hour, 19 minutes and 48 seconds.
Sat Dec 23 20:12:02 2017 >>> www-client/chromium-65.0.3294.5
       merge time: 2 hours, 32 minutes and 5 seconds.
Sat Dec 23 21:31:26 2017 >>> www-client/chromium-65.0.3294.5
       merge time: 1 hour, 17 minutes and 47 seconds.
Tue Dec 26 21:35:47 2017 >>> www-client/chromium-65.0.3298.3
       merge time: 55 minutes and 58 seconds.

1. (Sat Dec 23 16:49:44 2017) using clang and use_jumbo_build=true
2. using clang without use_jumbo_build (official ebuild)
3. the first case again, to get an error margin for reproducibility
4. using gcc and EXTRA_GN="use_jumbo_build=true"

I was under the impression that clang is faster at compile time than gcc, but gcc with use_jumbo_build takes only 37% of the time of clang with use_jumbo_build=false. Hmm... Wow again! Nice improvement!
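For what it's worth, the 37% figure checks out; a quick sanity check using the two merge times from the genlop output above (gcc + jumbo vs. clang without jumbo):

```shell
# Convert the two merge times to seconds
gcc_jumbo=$((55*60 + 58))              # gcc + jumbo: 55m58s -> 3358 s
clang_nojumbo=$((2*3600 + 32*60 + 5))  # clang, no jumbo: 2h32m5s -> 9125 s

# Ratio, rounded to the nearest percent
awk "BEGIN { printf \"%.0f%%\n\", 100 * $gcc_jumbo / $clang_nojumbo }"  # prints "37%"
```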
It gives me a 6x speedup in compilation time. Please add a USE flag for that feature.
The bug has been closed via the following commit(s):
https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=121cd55193eda4a31bd6e95a6e61d60072fe4abe

commit 121cd55193eda4a31bd6e95a6e61d60072fe4abe
Author:     Mike Gilbert <floppym@gentoo.org>
AuthorDate: 2018-01-06 22:50:38 +0000
Commit:     Mike Gilbert <floppym@gentoo.org>
CommitDate: 2018-01-06 22:50:38 +0000

    www-client/chromium: add jumbo-build USE flag

    Closes: https://bugs.gentoo.org/641786
    Package-Manager: Portage-2.3.19_p3, Repoman-2.3.6_p37

 www-client/chromium/chromium-64.0.3282.71.ebuild | 5 ++++-
 www-client/chromium/chromium-65.0.3298.3.ebuild  | 5 ++++-
 www-client/chromium/metadata.xml                 | 1 +
 3 files changed, 9 insertions(+), 2 deletions(-)
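With that commit, enabling the feature should be a one-line switch; the jumbo-build flag name comes from the commit above, while the file name under package.use is just the usual convention:

```shell
# /etc/portage/package.use/chromium  (file name is arbitrary)
www-client/chromium jumbo-build
```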