The default Gentoo max open files limit seems to be 11615. Last night our server was rendered unusable because of this limit, thanks to cherokee. The default cherokee configuration allows 16384 open file handles (in advanced.conf), so I assume this caused the errors (Apr 17 15:07:05 hostname VFS: file-max limit 11615 reached) and prevented us from opening remote shells etc. Maybe the cherokee ebuild should sed the config file and lower the max limit? Or am I missing something? It's something very easy to overlook when installing the server, but it can have disastrous results.
Last time I checked, the Gentoo default for ulimit was unlimited, and I'm quite sure nobody changed it.
I must add this is a Gentoo Hardened system. The error message I quoted comes straight from /var/log/messages (and is repeated thousands of times as the web server, sshd, ... try to access files); some people told me this is a ulimit problem.
After I sent a mail about this issue to the cherokee mailing list, the cherokee author contacted me and we'll work together to try to solve this problem, because what happens here (cherokee keeping thousands of files open) should not happen at all. I'll try to keep you updated. Even after setting the MaxFds value in advanced.conf, it opened more than 4096 files (the value I chose). http://www.0x50.org/archive/2005-April/000649.html could be a starting place for more information.
The default rlimit on open files is set to 1024 and not 11615.

cheap pax-utils # ulimit -a
core file size        (blocks, -c) 0
data seg size         (kbytes, -d) unlimited
file size             (blocks, -f) unlimited
max locked memory     (kbytes, -l) unlimited
max memory size       (kbytes, -m) unlimited
open files                    (-n) 1024
pipe size          (512 bytes, -p) 8
stack size            (kbytes, -s) 8192
cpu time             (seconds, -t) unlimited
max user processes            (-u) 4095
virtual memory        (kbytes, -v) unlimited

You can enforce soft and hard ulimits, or change the defaults on the user end, in a number of ways: /etc/limits, enforcing rlimits on execve() via grsec, PAM, setting them outright in /etc/conf.d/* on a per-service basis, and many other well-known Unix ways.

This bug is also misassigned to base-system@.

solar@simple mjpegtools $ epkginfo cherokee
Package:    www-servers/cherokee
Herd:       www-servers
Maintainer: www-servers
Keywords:   cherokee-0.4.5: ~sparc
Keywords:   cherokee-0.4.16: ~x86
Keywords:   cherokee-0.4.17.1: ppc x86
ChangeLog:  7 bass, 4 pvdabeel, 4 stuart, 3 aliz, 2 agriffis, 2 mholzer, 1 hansmi, 1 swegener, 1 weeve, 1 eradicator, 1 spider, 1 dholm, 1 tester
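As a quick sketch of the per-service approach mentioned above: the current soft and hard limits can be checked from a shell, and a Gentoo init script's environment can raise them before the daemon starts. The conf.d path and the value 8192 below are illustrative assumptions, not taken from this report.

```shell
# Show the current soft and hard limits on open file descriptors
# for this shell (these are what a started daemon inherits).
ulimit -Sn
ulimit -Hn

# One common per-service approach on Gentoo: add a ulimit line to the
# service's conf.d file so the init script's shell raises the limit
# before starting the daemon (path and value are hypothetical):
#   echo 'ulimit -n 8192' >> /etc/conf.d/cherokee
```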
Doesn't sound like ulimit to me, but rather the fs.file-max setting available through sysctl. On my server that is set to 50743 by default.
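For reference, the kernel-wide limit the "file-max limit reached" message refers to can be inspected and raised like this on a typical Linux box; the value 100000 is just an illustrative choice, not a recommendation from this thread.

```shell
# Read the system-wide cap on open file handles (what the VFS
# "file-max limit reached" message is about).
sysctl fs.file-max
cat /proc/sys/fs/file-max

# /proc/sys/fs/file-nr shows: allocated handles, free handles, max.
cat /proc/sys/fs/file-nr

# Raising it (as root); add "fs.file-max = 100000" to /etc/sysctl.conf
# to make the change persistent across reboots.
#   sysctl -w fs.file-max=100000
```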
I can confirm ulimit -a gives 1024 for max open files on this system too. When cherokee has been running for an hour or so and has served a lot of files, lsof | grep cherokee | grep www | wc -l (the files are under /var/www) can reach values bigger than 5000. I don't have any /proc/fs/file-max... To be honest, I don't really care what actually caused the problem (well, I do, but that's not the reason I opened this bug report). The problem is that cherokee in its current configuration can bring a "standard" Gentoo-based server (I made few if any modifications in the sysctl/limits area, apart from some IP-related settings) to its knees, which is a severe problem (especially if it prevents you from logging in to your machine).
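A lighter-weight cross-check for the lsof count above is to list a process's fd directory under /proc directly; "cherokee" here is just the daemon name from this report, substitute whatever process you want to inspect.

```shell
# Count the file descriptors each cherokee process currently holds
# open by listing its /proc/<pid>/fd directory. Unlike lsof, this
# counts fds per process rather than lines of lsof output.
for pid in $(pidof cherokee); do
    echo "$pid: $(ls /proc/$pid/fd | wc -l) open fds"
done
```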
Does this problem still exist in the current versions?
Please reopen this bug if it is still a problem for you. AFAICT the problem has been fixed upstream.