Created attachment 445876 [details]
Portage error message

I tried using --jobs to make emerge -avu --deep world faster by running jobs in parallel, but Portage seems to open too many files at the same time while calculating dependencies, probably because of the large number of dependencies. As a sanity check, I tried it without --jobs, and it was able to calculate the dependencies without any issues. When I tried again with --jobs, the same issue occurred.

As a workaround, I launched emerge -avu --deep world, let the compilation begin, terminated it with Ctrl+C, and then launched emerge --resume --jobs.

Is it because --jobs makes Portage also calculate dependencies in parallel? Is there some way to limit it to a safer number of asynchronous file-opening tasks? This is not the first time it has happened to me, but it is the first time I am reporting it. Thank you!
What do you have configured for the maximum number of open files? See ulimit -n and /etc/security/limits.conf.
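For reference, one way to inspect and adjust that limit from a shell looks roughly like this (the numeric values below are illustrative, not recommendations for this machine):

```shell
# Show the current soft limit on open file descriptors
ulimit -n

# Raise the soft limit for the current shell session; this fails if the
# requested value exceeds the hard limit (shown with: ulimit -Hn)
ulimit -n 4096 || echo "requested value exceeds the hard limit"

# To make a higher limit persistent, entries like these (illustrative)
# would go in /etc/security/limits.conf:
#   *   soft   nofile   4096
#   *   hard   nofile   8192
```

Note that ulimit changes apply only to the current shell and its children; limits.conf entries take effect on the next login.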
This behavior is expected if you use --jobs without a limit, and without --load-average (and you don't have an extremely large limit for open files).
(In reply to Simon-Pierre Dubé from comment #0)
> Is it because --jobs makes portage also calculate dependencies in parallel?

Well, it's --dynamic-deps (enabled by default) that is triggering something similar to the emerge --regen behavior observed in bug 462910.

> There would be some way to limit it to have a safer amount of asynchronous
> file opening tasks?

Usually, people use --jobs with an integer limit; --load-average is another way to limit it.
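As a sketch of what such a limit could look like, either per invocation or persistently (the numbers here are illustrative, not recommendations):

```shell
# One-off invocation with an integer job limit and a load cutoff:
#   emerge -avu --deep --jobs=4 --load-average=4.0 world
#
# Or persistently, in /etc/portage/make.conf:
EMERGE_DEFAULT_OPTS="--jobs=4 --load-average=4.0"
```

With --load-average set, emerge stops spawning new build jobs whenever the system load exceeds the given value, which also indirectly bounds how many dependency-calculation tasks run concurrently.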
(In reply to Mike Gilbert from comment #1)
> What do you have configured for the maximum number of open files?
>
> See ulimit -n and /etc/security/limits.conf.

My ulimit -n is 1024. I didn't know about that setting; I will raise it.

(In reply to Zac Medico from comment #2)
> This behavior is expected if you use --jobs without a limit, and without
> --load-average (and you don't have an extremely large limit for open files).

Since I am using DistCC, that's mainly why I didn't want to use --jobs with a limit: the 42 cores can handle a lot of jobs, and I like to see many very small packages checked at the same time and then compiled. I went with MAKEOPTS="-j85 -l4" and limited the number of DistCC jobs on localhost to only 3 cores (out of 4), making sure some resources are left for distributing DistCC jobs and whatever else is running. So if the ulimit still causes problems at 2048, I will try reducing to -l3 or -l2. Thanks for explaining; it's appreciated.
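For anyone finding this later, a rough sketch of that kind of distcc setup (the hostnames and remote entries are illustrative placeholders, not the reporter's actual configuration):

```shell
# /etc/portage/make.conf -- cap make's parallelism and local load average:
MAKEOPTS="-j85 -l4"

# /etc/distcc/hosts -- the "/N" suffix caps concurrent jobs per host;
# "localhost/3" reserves one local core, remote hosts are illustrative:
#   localhost/3 192.168.0.10/16 192.168.0.11/16
```

The -l4 in MAKEOPTS only throttles on local load, so capping localhost in the distcc hosts file is what actually keeps a core free for distributing jobs.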