On my system net-analyzer/cacti-0.8.7e-r2 seems to spam lots of mysql connections which it never terminates. This is at least true for the web interface, but could also include the poller.

The process list as given by phpmyadmin:

Id  User       Host       Database  Command  Time
5   cactiuser  localhost  cacti     Sleep    71
7   cactiuser  localhost  cacti     Sleep    71
10  cactiuser  localhost  cacti     Sleep    71
11  cactiuser  localhost  cacti     Sleep    72
12  cactiuser  localhost  cacti     Sleep    72
13  cactiuser  localhost  cacti     Sleep    71
16  cactiuser  localhost  cacti     Sleep    36
17  cactiuser  localhost  cacti     Sleep    35
18  cactiuser  localhost  cacti     Sleep    21
19  cactiuser  localhost  cacti     Sleep    34
20  cactiuser  localhost  cacti     Sleep    34
21  cactiuser  localhost  cacti     Sleep    5
22  cactiuser  localhost  cacti     Sleep    20
23  cactiuser  localhost  cacti     Sleep    30
24  cactiuser  localhost  cacti     Sleep    14
25  cactiuser  localhost  cacti     Sleep    5

Relevant packages:

dev-lang/php-5.2.13
dev-php5/suhosin-0.9.29
www-servers/apache-2.2.15
dev-db/mysql-5.0.90-r2

Portage 2.2_rc67 (hardened/linux/ia64/10.0/server, gcc-4.4.3, glibc-2.10.1-r1, 2.6.32-hardened-r6 ia64)
=================================================================
System uname: Linux-2.6.32-hardened-r6-ia64-31-with-gentoo-2.0.1
Timestamp of tree: Wed, 02 Jun 2010 19:15:01 +0000
app-shells/bash:     4.0_p37
dev-lang/python:     2.6.4-r1, 3.1.2-r3
dev-python/pycrypto: 2.1.0_beta1
dev-util/cmake:      2.6.4-r3
sys-apps/baselayout: 2.0.1
sys-apps/openrc:     0.6.1-r1
sys-apps/sandbox:    2.2
sys-devel/autoconf:  2.65
sys-devel/automake:  1.7.9-r1, 1.9.6-r3, 1.10.3, 1.11.1
sys-devel/binutils:  2.20.1-r1
sys-devel/gcc:       4.4.3-r6
sys-devel/gcc-config: 1.4.1
sys-devel/libtool:   2.2.6b
virtual/os-headers:  2.6.30-r1
ACCEPT_KEYWORDS="ia64"
ACCEPT_LICENSE="* -@EULA"
CBUILD="ia64-unknown-linux-gnu"
CFLAGS="-pipe -mtune=mckinley -O2 -ftree-vectorize"
CHOST="ia64-unknown-linux-gnu"
CONFIG_PROTECT="/etc"
CONFIG_PROTECT_MASK="/etc/ca-certificates.conf /etc/env.d /etc/fonts/fonts.conf /etc/gconf /etc/gentoo-release /etc/php/apache2-php5/ext-active/ /etc/php/cgi-php5/ext-active/ /etc/php/cli-php5/ext-active/ /etc/revdep-rebuild /etc/sandbox.d /etc/terminfo /etc/texmf/language.dat.d /etc/texmf/language.def.d /etc/texmf/updmap.d /etc/texmf/web2c /etc/udev/rules.d"
CXXFLAGS="-pipe -mtune=mckinley -O2 -ftree-vectorize"
DISTDIR="/var/cache/portage/distfiles"
EMERGE_DEFAULT_OPTS="--with-bdeps y"
FEATURES="assume-digests distlocks fixpackages news parallel-fetch preserve-libs protect-owned sandbox sfperms strict unmerge-logs unmerge-orphans userfetch userpriv usersandbox usersync"
GENTOO_MIRRORS="http://ftp.spline.inf.fu-berlin.de/mirrors/gentoo/ http://ftp-stud.fht-esslingen.de/pub/Mirrors/gentoo/ http://distfiles.gentoo.org"
LANG="en_GB.UTF-8"
LDFLAGS="-Wl,-O1 -Wl,--hash-style=gnu -Wl,--as-needed"
MAKEOPTS="-j3"
PKGDIR="/var/cache/portage/packages"
PORTAGE_COMPRESS="xz"
PORTAGE_CONFIGROOT="/"
PORTAGE_RSYNC_EXTRA_OPTS=" --checksum -6 --include='/sci-libs/' --include='/sci-libs/gsl/' --exclude='/sci-libs/*/' --include='/x11-libs/' --include='/x11-libs/qt*/' --include='/x11-libs/cairo/' --include='/x11-libs/pango/' --include='/x11-libs/pixman/' --exclude='/x11-libs/*/' --include='/x11-misc/' --include='/x11-misc/util-macros/' --exclude='/x11-misc/*/' --exclude='/games*/' --exclude='/gnome*/' --exclude='/gnustep*/' --exclude='/gpe*/' --exclude='/kde*/' --exclude='/lxde*/' --exclude='/rox*/' --exclude='/sci*/' --exclude='/x11*/' --exclude='/xfce*/'"
PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times --compress --force --whole-file --delete --stats --timeout=180 --exclude=/distfiles --exclude=/local --exclude=/packages"
PORTAGE_TMPDIR="/var/tmp"
PORTDIR="/var/cache/portage/gentoo"
PORTDIR_OVERLAY="/var/cache/portage/layman/hardened-development /var/cache/portage/layman/sunrise /var/cache/portage/layman/anarchy /var/cache/portage/local"
[...]
Unset: CPPFLAGS, CTARGET, FFLAGS, INSTALL_MASK, LC_ALL, LINGUAS, PORTAGE_COMPRESS_FLAGS

Reproducible: Always
Did you mean "spawn" or yet another new meaning of "spam"?
It "spawns" lots and lots of them, either hitting the global mysql connection limit, or the limit for that user. Thus I think "spam" is appropriate, too.
Dennis, is this a regression compared to the previous revision? Do you use cacti-spine?
Regression: I cannot tell you more than "it once worked fine"; today it makes my mysql server block... So yes, it is a regression from a previous combination of apache, mysql, php and cacti.

spine: The issue exists regardless of the poller choice. The issue also appears directly when opening the web interface, without having to run any poller. I will have a look later at whether running just the poller creates the same trouble. (Please remind me, should I not have reported back in a few days!)
All this looks like it is not a cacti issue, but something else... Actually I don't have any idea at the moment.
Maybe notable: phpmyadmin does not show such issues (but it uses a mysqli connection).
Yesterday I upgraded my production cacti-0.8.7e + cacti-spine-0.8.7e to cacti-0.8.7e-r2 and cacti-spine-0.8.7e-r1 with no difference in the number of connections and queries to its MySQL database (gentoo/hardened on amd64). I'm using the spine poller with ~350 data sources on 20 hosts.
(In reply to comment #7)
> Yesterday I upgraded my production cacti-0.8.7e + cacti-spine-0.8.7e to
> cacti-0.8.7e-r2 and cacti-spine-0.8.7e-r1 with no difference in the number of
> connections and queries to its MySQL database (gentoo/hardened on amd64). I'm
> using the spine poller with ~350 data sources on 20 hosts.

Do you have the same issue, or is everything normal on your end?
(In reply to comment #8)
> Do you have the same issue, or is everything normal on your end?

As I said, nothing changed on MySQL after the upgrade. I'm graphing the number of MySQL connections and queries with cacti itself, and you can't "see" any change in the graph. It's keeping the same number of open connections and doing the same number of queries as before.
Created attachment 239707 [details, diff]
Proposed fix

The problem is that cacti uses persistent connections to MySQL, which fail when apache is used with the worker MPM. I've attached a fix for this issue.
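To illustrate why dropping persistent connections bounds the connection count, here is a small toy model in Python. This is not cacti's actual PHP code and all names in it are made up; it just contrasts three situations: persistent connections whose cache is actually reused (roughly the prefork case), persistent connections whose cache is never hit again (the worker-MPM symptom reported in this bug), and plain per-request connections (the attached fix).

```python
# Toy model of the reported leak -- illustrative only, not cacti code.
class Server:
    """Tracks how many connections the MySQL server currently holds open."""
    def __init__(self):
        self.open_connections = 0

def serve(requests, persistent, cache_works):
    """Simulate `requests` web requests against one MySQL server.

    persistent  -- requests ask for a persistent (cached) connection
    cache_works -- whether the persistent-connection cache is actually
                   reused on later requests; modelling the symptom here,
                   with the worker MPM it effectively is not
    """
    server = Server()
    cache = {}
    for _ in range(requests):
        if persistent and cache_works and "db" in cache:
            continue                      # reuse the cached connection
        server.open_connections += 1      # open a new connection
        if persistent:
            cache["db"] = True            # left open for later reuse
        else:
            server.open_connections -= 1  # closed when the request ends
    return server.open_connections

print(serve(100, persistent=True,  cache_works=True))   # 1   (reuse works)
print(serve(100, persistent=True,  cache_works=False))  # 100 (unbounded growth)
print(serve(100, persistent=False, cache_works=True))   # 0   (per-request fix)
```

The point of the model: raising max_connections only moves the ceiling in the middle case, while per-request connections keep the count bounded regardless of how the cache behaves.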
Lex, thank you very much for this fix! Have you tried to contact upstream about this issue? If yes, do you have a reference at hand?
(In reply to comment #11)
> Lex, thank you very much for this fix! Have you tried to contact upstream on
> this issue? If yes, do you have reference at hand?

No, I didn't. But it looks like some other users have figured this out as well and posted it on the cacti forums ( http://forums.cacti.net/viewtopic.php?t=34235 ); the developers don't seem very interested, though.
Since upstream doesn't really care, wouldn't it be an idea to include the patch in the ebuild? What do you think?
Hi, it's TheWitness here. I'd like to comment on this. Firstly, the reason that we use persistent connections is to speed up the web UI. So, this is normal and not harmful to 99% of the LAMP servers out there, as the connections are generally sleeping. What's more of an issue is if your 'max_connections' in my.cnf is not high enough; that's the bogus issue. Another issue is that spine takes a mysql connection per thread, so that's the real hog. It's top on my list of spine issues to correct.

Regards,
Larry Adams
aka TheWitness
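For reference, the 'max_connections' knob mentioned above lives in the [mysqld] section of my.cnf. A sketch with an illustrative value (the MySQL 5.0 default is 100; pick a value to suit your setup):

```ini
# my.cnf -- illustrative value only; raising this lifts the ceiling
# on concurrent connections, it does not fix a connection leak
[mysqld]
max_connections = 300
```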
Well, raising the max_connections will only temporarily fix this because the connection count rises infinitely when using the worker MPM with apache. PHP seems to have problems using persistent connections with this setup. The only real solution IMHO is to stop using persistent connections.
I think this should be fixed in 0.8.7h. Please check.
(In reply to comment #16)
> I think this should be fixed in 0.8.7h. Please check.

Confirming fix in 0.8.7i (haven't checked 0.8.7h).
(In reply to comment #17)
> (In reply to comment #16)
> > I think this should be fixed in 0.8.7h. Please check.
> Confirming fix in 0.8.7i (haven't checked 0.8.7h).

P.S.: There are still several open connections (about 20), but the number will not grow infinitely anymore.