any chance to limit the # of parallel processes to 1 ? :

├─sudo(17192)───chr.sh(17204)───su(17260)───bash(17294)───emerge(22188)───sandbox(30181,portage)───ebuild.sh(30475)───ebuild.sh(30585)───cargo(30604)─┬─{cargo}(18564)
│                                                                                                                                                    ├─rustc(18578)───{rustc}(18684)
│                                                                                                                                                    ├─{cargo}(24506)
│                                                                                                                                                    └─rustc(24513)───{rustc}(24630)
*** Bug 626080 has been marked as a duplicate of this bug. ***
from upstream:

>It should work if you pass -j1 to the ./x.py invocation (process 7838).

https://github.com/rust-lang/rust/issues/47808#issuecomment-361000734
How about passing '-j ${MAKEOPTS}'? The current behaviour is terrible if you have to do a debug build, or on ARM devices.
(In reply to tt_1 from comment #3) Of course - that's presumably what upstream meant.
yeah I mean, imagine someone having a quad-core with hyperthreading. The ebuild would autotune to -j8 and definitely demand more than 1 GB of free RAM per thread. Could you try whether the fix works? Set -j1 in make.conf and let it compile with only one job all the way through? I don't have enough free CPU time for that at the moment, sadly.
(In reply to tt_1 from comment #5) Erm - I *do* have that set, it is not respected by the eclass - *that's* the problem.
FWIW:

MAKEOPTS="-j1"
NINJAFLAGS="-j 1"
EGO_BUILD_FLAGS="-p 1"
GOMAXPROCS="1"
GO19CONCURRENTCOMPILATION=0
RUSTFLAGS="-C codegen-units=1"
You have edited the ebuild and put 'x.py build -j 1' (mind the gap!) and the eclass ignores that? That's a rather, ehm, strange thing. It seems to source config.toml for config switches; maybe there is something contradictory happening there?
or a stupid typo made by me - will retest it:

tinderbox@mr-fox ~ $ diff tb/data/portage/dev-lang/rust/rust-1.23.0-r1.ebuild /usr/portage/dev-lang/rust/rust-1.23.0-r1.ebuild
122c122
<	./x.py build --verbose --config="${S}"/config.toml ${MAKEOPTS} || die
---
>	./x.py build --verbose --config="${S}"/config.toml || die
(In reply to Toralf Förster from comment #9) This patch works flawlessly; it would be a great help for the tinderbox if it were incorporated into the rust ebuilds.
Can you please attach a patch against the existing ebuild?
Created attachment 517078 [details, diff] respect -jX
That's funny, because from my understanding of the build script's syntax it misses the '--jobs' argument and should rather be:

--- rust-1.23.0-r1.ebuild.orig	2018-01-29 19:38:52.160505318 +0100
+++ rust-1.23.0-r1.ebuild	2018-01-29 19:39:32.797490574 +0100
@@ -119,7 +119,7 @@
 }
 
 src_compile() {
-	./x.py build --verbose --config="${S}"/config.toml || die
+	./x.py build --verbose --config="${S}"/config.toml --jobs ${MAKEOPTS} || die
 }
 
 src_install() {

But you tested it and it works, so I have no reason to doubt it - I'm just curious.
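For what it's worth, the tested variant works because of plain shell word splitting: the unquoted ${MAKEOPTS} expansion simply appends "-j1" to the x.py command line, and x.py accepts the short -jN form as well as --jobs. A tiny sketch of just the word-splitting part (no actual x.py is invoked here):

```shell
#!/bin/sh
# With MAKEOPTS="-j1", the unquoted ${MAKEOPTS} in the ebuild expands via
# word splitting, so the resulting command line carries "-j1" as its own word.
MAKEOPTS="-j1"
set -- ./x.py build --verbose ${MAKEOPTS}
echo "$*"   # → ./x.py build --verbose -j1
```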
(In reply to tt_1 from comment #13) I usually see a load of 24 instead of 12 here for about 1-2 hours while rust is built; today there was just a small spike for about 5 min (maybe the install part).
(In reply to tt_1 from comment #13) and here the output - $MAKEOPTS is just appended to the (c)make line:

init,1
└─sudo,9127 /opt/tb/bin/chr.sh /home/tinderbox/run/17.0_libressl_20180127-224621 /bin/bash /tmp/job.sh
  └─chr.sh,9141 /opt/tb/bin/chr.sh /home/tinderbox/run/17.0_libressl_20180127-224621 /bin/bash /tmp/job.sh
    └─su,9293 - root -c /bin/bash /tmp/job.sh
      └─bash,9319 /tmp/job.sh
        └─emerge,25545 -b /usr/lib/python-exec/python3.5/emerge --update sys-apps/habitat
          └─sandbox,24151,portage /usr/lib/portage/python3.5/ebuild.sh compile
            └─ebuild.sh,24281 /usr/lib/portage/python3.5/ebuild.sh compile
              └─ebuild.sh,24400 /usr/lib/portage/python3.5/ebuild.sh compile
                └─python2.7,24417 ./x.py build --verbose --config=/var/tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/config.toml -j1
                  └─python2.7,9371 ./x.py build --verbose --config=/var/tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/config.toml -j1
                    └─bootstrap,9372 build --verbose --config=/var/tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/config.toml -j1
                      └─cmake,13182 --build . --target install --config Release -- -j 1
                        └─gmake,13192 -j 1 install
                          └─gmake,13224 -f CMakeFiles/Makefile2 all
                            └─gmake,20530 -f tools/llvm-readobj/CMakeFiles/llvm-readobj.dir/build.make tools/llvm-readobj/CMakeFiles/llvm-readobj.dir/build
I committed something like this to rust-1.23.0-r1. Thanks!
okay, whatever :) thanks for the fix, I just hope it covers building the bundled llvm copy with -j 1 as well.
Hmm, it does not work for the part between src_unpack and the build of their internal llvm copy. Which is, I guess, some kind of bootstrapping? Is there another hidden switch somewhere?
the next improvement could be to add "${MAKEOPTS}" to "env DESTDIR="${D}" ./x.py install"
something happens between src_unpack() and src_compile(), which does not honor the MAKEOPTS. env DESTDIR="${D}" ./x.py install in src_install() is another story.
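The suggestion from comment #19 would look something like this in the ebuild (a sketch only; it assumes x.py install accepts the job flags from MAKEOPTS the same way x.py build does, which is untested here):

```shell
src_install() {
	# Assumption: x.py install understands -jN just like x.py build;
	# this untested sketch mirrors the src_compile fix.
	env DESTDIR="${D}" ./x.py install ${MAKEOPTS} || die
}
```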
Been building this for the whole night and it's still going. I don't know if some small part honored MAKEOPTS=-j2 now, but after waking up in the morning it's still going and only ever using one core (rather slow as a single core). This part still seems to use only one core:

-- Installing: /tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/build/x86_64-unknown-linux-gnu/llvm/lib/cmake/llvm/./GetSVN.cmake
-- Installing: /tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/build/x86_64-unknown-linux-gnu/llvm/lib/cmake/llvm/./AddLLVM.cmake
cargo:root=/tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/build/x86_64-unknown-linux-gnu/llvm
finished in 14719.411
< Llvm { target: "x86_64-unknown-linux-gnu" }
c Assemble { target_compiler: Compiler { stage: 0, host: "x86_64-unknown-linux-gnu" } }
c Std { target: "x86_64-unknown-linux-gnu", compiler: Compiler { stage: 0, host: "x86_64-unknown-linux-gnu" } }
Building stage0 compiler artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu)
Dirty - /tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/build/x86_64-unknown-linux-gnu/stage0-rustc
c Sysroot { compiler: Compiler { stage: 0, host: "x86_64-unknown-linux-gnu" } }
running: "/tmp/portage/dev-lang/rust-1.23.0-r1/work/rust-stage0/bin/cargo" "build" "--target" "x86_64-unknown-linux-gnu" "-j" "2" "-v" "--release" "--locked" "--frozen" "--features" " jemalloc llvm" "--manifest-path" "/tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/src/rustc/Cargo.toml" "--message-format" "json"
Compiling rustc-demangle v0.1.5
Compiling cfg-if v0.1.2
Running `/tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/build/bootstrap/debug/rustc --crate-name rustc_demangle src/vendor/rustc-demangle/src/lib.rs --error-format json --crate-type lib --emit=dep-info,link -C opt-level=2 -C metadata=efac2aec4f7969d5 -C extra-filename=-efac2aec4f7969d5 --out-dir /tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/build/x86_64-unknown-linux-gnu/stage0-rustc/x86_64-unknown-linux-gnu/release/deps --target x86_64-unknown-linux-gnu -L dependency=/tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/build/x86_64-unknown-linux-gnu/stage0-rustc/x86_64-unknown-linux-gnu/release/deps -L dependency=/tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/build/x86_64-unknown-linux-gnu/stage0-rustc/release/deps --cap-lints allow`
Running `/tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/build/bootstrap/debug/rustc --crate-name cfg_if src/vendor/cfg-if/src/lib.rs --error-format json --crate-type lib --emit=dep-info,link -C opt-level=2 -C metadata=7d3f7f3952300f9b -C extra-filename=-7d3f7f3952300f9b --out-dir /tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/build/x86_64-unknown-linux-gnu/stage0-rustc/release/deps -L dependency=/tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/build/x86_64-unknown-linux-gnu/stage0-rustc/release/deps --cap-lints allow`
rustc command: "/tmp/portage/dev-lang/rust-1.23.0-r1/work/rust-stage0/bin/rustc" "--crate-name" "rustc_demangle" "src/vendor/rustc-demangle/src/lib.rs" "--crate-type" "lib" "--emit=dep-info,link" "-C" "opt-level=2" "-C" "metadata=efac2aec4f7969d5-rustc" "-C" "extra-filename=-efac2aec4f7969d5" "--out-dir" "/tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/build/x86_64-unknown-linux-gnu/stage0-rustc/x86_64-unknown-linux-gnu/release/deps" "--target" "x86_64-unknown-linux-gnu" "-L" "dependency=/tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/build/x86_64-unknown-linux-gnu/stage0-rustc/x86_64-unknown-linux-gnu/release/deps" "-L" "dependency=/tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/build/x86_64-unknown-linux-gnu/stage0-rustc/release/deps" "--cap-lints" "allow" "--cfg" "stage0" "--sysroot" "/tmp/portage/dev-lang/rust-1.23.0-r1/work/rustc-1.23.0-src/build/x86_64-unknown-linux-gnu/stage0-sysroot" "-Cprefer-dynamic" "-Clinker=x86_64-pc-linux-gnu-gcc" "-C" "debug-assertions=n" "-Z" "force-unstable-if-unmarked" "--color=always"

I'm not sure if anything before used more things in parallel, but considering it's been going for over 5 hours and system llvm here takes below 2 hours, I'm willing to bet that not much was parallel before either. Some previous dev-lang/rust versions took less than 4 hours; not 1.23.0-r1 anymore. No end in sight with that.
After some 15 minutes of waiting, it finally seems to queue two rustc at once, once in a while. Before that it just sat there with only one process, half of my already slow system sitting idle for a long time. Additionally, graaff pointed out https://github.com/gentoo/gentoo/pull/7000 - for not having things break with an -l load average and other stuff in MAKEOPTS. I.e., the current fix completely breaks the build for users adding -l to MAKEOPTS (which is a perfectly valid thing to do).
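The multiprocessing.eclass referenced in that PR provides makeopts_jobs, which extracts only the job count from MAKEOPTS so that flags like "-l 5" never leak onto the x.py command line. A minimal standalone sketch of the idea (this is not the eclass's actual implementation, which handles more spellings such as --jobs=N):

```shell
#!/bin/sh
# Pull the numeric value of -j out of MAKEOPTS and ignore everything else;
# fall back to 1 when no -j is given.
makeopts_jobs_sketch() {
    jobs=$(printf '%s\n' "$1" | sed -n 's/.*-j[[:space:]]*\([0-9][0-9]*\).*/\1/p')
    printf '%s\n' "${jobs:-1}"
}

makeopts_jobs_sketch "-j4 -l 5"   # → 4
makeopts_jobs_sketch "-l 2"       # → 1
```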
ah, that's bug 646092 separately by now (-l). I suppose cargo or whatever is just bad at parallelizing things properly across some of the compilation stages? Either way, I'm OK with revisiting this after rust no longer uses the bundled llvm. Before that, it's all small in comparison really.
This bug is about the ebuild not honoring MAKEOPTS and autotuning to one job per core/thread of the CPU, which is a terrible thing for users with a low core-to-memory ratio, such as ARM devices or older x86 hardware. However, there is one section of the compile which uses only one job for a very long time - this bug is not about that one job not using all available cores. For that you should open a bug upstream, but keep in mind that the answer might be that they have to restrain the use of multiple cores for a certain reason. Other than that, I had no idea about the load-average feature, same for the multiprocessing eclass. We will talk about this in bug 646092.
To limit the amount of crosstalk, please limit further discussion to bug 646092, and let this one rest in peace.