Summary: | sys-apps/portage-2.1.11.55: emerge --regen --jobs fails to honour system limits | ||
---|---|---|---|
Product: | Portage Development | Reporter: | Patrick Lauer <patrick> |
Component: | Core - Interface (emerge) | Assignee: | Portage team <dev-portage> |
Status: | CONFIRMED | Resolution: | ---
Severity: | normal | CC: | dschridde+gentoobugs, gentoo, yorik.sar+gentoo-bugs |
Priority: | Normal | ||
Version: | unspecified | ||
Hardware: | All | ||
OS: | Linux | ||
See Also: | https://bugs.gentoo.org/show_bug.cgi?id=593914 | ||
Whiteboard: | |||
Package list: | | Runtime testing required: | ---
Description
Patrick Lauer
2013-03-24 07:42:37 UTC
It looks like Python's "resource" module has a getrlimit() function that can query the open file descriptor limit. I don't see a way to get the number of open file descriptors, though. I guess we could use our portage.process.get_open_fds() function, which lists the contents of /proc/self/fd.

Yuriy Taraday
I ran into almost the same exception. Setting ulimit -n 40960 helped. It looks like the problem is that emerge --regen leaks file descriptors. /proc/sys/fs/file-nr grows as it works.

Zac Medico
(In reply to Yuriy Taraday from comment #2)
> I ran into almost the same exception. Setting ulimit -n 40960 helped.
> It looks like the problem is that emerge --regen leaks file descriptors.
> /proc/sys/fs/file-nr grows as it works.

I don't think there's a leak, because if there were one, we would notice it even for people who use sensible --jobs and --load-average settings. Are you doing as in comment #0, using unlimited jobs with no --load-average cap?

Yuriy Taraday
(In reply to Zac Medico from comment #3)
> I don't think there's a leak, because if there were one, we would notice it
> even for people who use sensible --jobs and --load-average settings.
> Are you doing as in comment #0, using unlimited jobs with no
> --load-average cap?

Yes, I was. --load-average fixed this, thanks. So it looks like it's not a leak but overuse. Still, I don't see how 10 processes can eat about 600 descriptors.

Zac Medico
(In reply to Yuriy Taraday from comment #4)
> I don't see how 10 processes can eat about 600 descriptors.

Well, the main emerge process should only create one file descriptor per process (2 if you count the write end of the pipe, which is closed immediately after the fork; see /usr/lib/portage/pym/_emerge/EbuildMetadataPhase.py). Plus one more, for a total of 3 per process, if you also count the Manifest file, which is opened and closed immediately before the fork.
Zac Medico
Actually, since the code that spawns the subprocesses is single-threaded, the number of simultaneously open file descriptors is really only one per process.

Yuriy Taraday
(In reply to Zac Medico from comment #7)
> Actually, since the code that spawns the subprocesses is single-threaded,
> the number of simultaneously open file descriptors is really only one per
> process.

OK... so it might be ebuild.sh's fault.
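The check floated at the top of the thread, comparing the RLIMIT_NOFILE soft limit from Python's resource module against the number of entries in /proc/self/fd (the same approach portage.process.get_open_fds() uses), can be sketched like this. `fd_headroom()` is a hypothetical helper name for illustration, not a portage API.

```python
import os
import resource

def fd_headroom():
    """Return (open_fds, soft_limit) for the current process.

    resource.getrlimit(resource.RLIMIT_NOFILE) yields the soft and
    hard limits on open descriptors; listing /proc/self/fd counts
    the descriptors currently open (Linux-only, and the same idea
    as portage.process.get_open_fds()).
    """
    soft, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    open_fds = len(os.listdir("/proc/self/fd"))
    return open_fds, soft
```

A job scheduler could consult such a check and stop spawning metadata subprocesses as open_fds approaches the soft limit, rather than relying on the user to raise `ulimit -n` or pass --load-average.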