Running updatedb (sys-apps/slocate-2.7-r5), either manually or as a cron job, causes a massive memory leak - the used memory is not freed. This is especially severe when running it for the first time after a reboot. Subsequent runs may or may not increase memory usage. Tested on gentoo-sources-2.4.20-r7 and gentoo-sources-2.4.22-r3, -r4, -r5.

Reproducible: Always

Steps to Reproduce:
0. reboot (optional, since the effect is most visible on a freshly rebooted system)
1. free -m
2. updatedb
3. free -m

Actual Results:

Results for a freshly rebooted system.

Kernel 2.4.22-gentoo-r5:
# free -m
             total       used       free     shared    buffers     cached
Mem:           503         52        451          0          8         22
-/+ buffers/cache:          21        482
Swap:          980          0        980
# updatedb
# free -m
             total       used       free     shared    buffers     cached
Mem:           503        499          4          0        220         11
-/+ buffers/cache:         267        236
Swap:          980          0        980

Kernel 2.4.20-gentoo-r7:
# free -m
             total       used       free     shared    buffers     cached
Mem:           502         50        452          0          8         20
-/+ buffers/cache:          21        481
Swap:          980          0        980
# updatedb
# free -m
             total       used       free     shared    buffers     cached
Mem:           502        491         11          0        267         26
-/+ buffers/cache:         196        305
Swap:          980          0        980

Expected Results: memory usage should be unchanged.

On 2.4.20, running emerge (emerge mplayer, to be exact) after updatedb frees most of the memory:
# free -m
             total       used       free     shared    buffers     cached
Mem:           502        417         84          0        254        116
-/+ buffers/cache:          46        455
Swap:          980          0        980

However, on the 2.4.22 kernel, where I tried rebuilding mozilla ("emerge mozilla"), emerge results in an increase in used memory:
# free -m
             total       used       free     shared    buffers     cached
Mem:           503        484         19          0         74        101
-/+ buffers/cache:         308        195
Swap:          980          0        980

This bug is similar to the now-closed bug 36855, but that bug was ext3-specific and is fixed in the 2.4.22-r5 kernel. I am seeing this problem on reiserfs. Note that in bug 36855 some people already reported that the memory leak also happens on reiserfs, but only the ext3 problem was addressed there. This issue was discussed in the forums: http://forums.gentoo.org/viewtopic.php?t=119590&highlight= (note that the first posts are about the ext3 problem; the problem reported in this bug is discussed in the later posts, pages 2-4). In that thread quite a few people reported that the problem exists on various kernels. I am filing this bug as major, since quite a few people see it and almost everybody runs updatedb as a cron job.
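A minimal sketch of how one might keep the numbers around for comparison while reproducing this (the /tmp file names are arbitrary; this assumes a root shell):

# free -m > /tmp/free-before; cat /proc/slabinfo > /tmp/slab-before
# updatedb
# free -m > /tmp/free-after; cat /proc/slabinfo > /tmp/slab-after
# diff /tmp/free-before /tmp/free-after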
Can you see if this happens with 2.4.22-ck*? Does your system ever run completely out of memory?
No, my system never runs out of memory because of this. As I wrote above, running updatedb a second, third, etc. time causes only a small (~10 MB) increase in used memory, or none at all, so after a few days my memory usage stabilizes at about 90% used (which seems too much to me, since I run xfce4 and use mostly firebird and xterm). Normally, my memory usage used to be about 50% (I run gkrellm). But then again, since discovering the bug my maximum uptime has been about a week, mostly due to frequent updates of gentoo-sources. I have not tried the ck sources; I did try vanilla sources 2.4.22 and had the same problem. Is there a reason why the ck sources should behave better? If there is, I will try them out.
Just tried 2.4.23-ck1 (the latest stable ck kernel); it shows the same behaviour. Here is what free shows after a reboot and updatedb:
# free -m
             total       used       free     shared    buffers     cached
Mem:           503        497          5          0        203         14
-/+ buffers/cache:         280        223
Swap:          980          0        980
Re-emerging mplayer increases memory usage by 2 MB (which is negligible).
My findings are as follows: with gentoo-sources and aa-sources 2.4.22 and 2.4.23, memory is visibly not released after running updatedb. In both cases it runs memory usage almost to my maximum (1 GB), after which both the gkrellm memory monitor and free report almost all memory used. My machine behaves more slowly and erratically afterward until I reboot. I did a world update recently which included kde/gnome/gcc etc., but this problem has been happening since I switched to 2.4.22. I went back to the 2.4.20-gentoo-r7 kernel, and although I had to turn off high memory and ACPI, I found some more strangeness: after running updatedb, within 30 minutes all memory as reported by gkrellm is released back to what it should be, yet the free command still reports all memory used, even 15 hours after running updatedb. I don't know where the blame for this lies, but I suspect several things. Oddly enough, I never got any out-of-memory errors, even when attempting to run programs after updatedb with 2.4.22+ kernels. This just is not right. It makes memory monitoring useless.
I'm having the same problem with 2.4.22-gentoo-r5 and reiserfs on the / partition.
$ uname -a
Linux GEN2 2.4.22-gentoo-r5 #1 Thu Jan 15 21:30:08 CET 2004 i686 AMD Athlon(tm) processor AuthenticAMD GNU/Linux
$ grep "hda3" /etc/fstab
/dev/hda3 / reiserfs noatime 0 0
$ free -m
             total       used       free     shared    buffers     cached
Mem:           756        108        647          0          6         47
-/+ buffers/cache:          55        700
Swap:          517          0        517
$ updatedb
$ free -m
             total       used       free     shared    buffers     cached
Mem:           756        714         41          0        326         51
-/+ buffers/cache:         335        420
Swap:          517          0        517
$ ooffice
$ free -m
             total       used       free     shared    buffers     cached
Mem:           756        735         20          0        288        108
-/+ buffers/cache:         338        417
Swap:          517          0        517
I'm going to take this upstream to LKML. For anyone looking at this bug, there is more info on bug 36855, although it isn't specifically related.
I'm seeing the same sort of behavior on my 2.6.1 development-sources system, so I don't think this is just a 2.4 problem...
Are you people sure that what you are seeing is not the inode cache chewing up lots of memory? That is natural when updatedb looks up lots of files. The fact that OOM never happens on those systems (under normal conditions) only implies that the inode cache is still freed when the system needs the memory. Also, looking at /proc/slabinfo before and after the first updatedb run, and then looking at the amount of free memory after updatedb and again after a "tail /dev/zero" run (which will consume all the memory it can get), will show whether the memory actually was freed. The 2.6 problem where the inode cache was not freed aggressively enough was also recently fixed (I do not remember whether it was also visible in 2.4).
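A minimal sketch of that check, assuming a root shell; note that "tail /dev/zero" will eat all the memory it can until the OOM killer kills it, so only run it on a box where that is acceptable:

# grep -E 'inode_cache|dentry_cache' /proc/slabinfo; free -m
# updatedb
# grep -E 'inode_cache|dentry_cache' /proc/slabinfo; free -m
# tail /dev/zero
# grep -E 'inode_cache|dentry_cache' /proc/slabinfo; free -m

If the inode_cache and dentry_cache object counts collapse after the tail run and the free memory comes back, the memory was reclaimable cache, not a leak.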
I guess if this is not supposed to be a problem, we should file bugs against all the popular memory-monitoring applications, since the definition of "free" memory has apparently been changed.
Created attachment 25523 [details]
memory usage and slabinfo before and after updatedb run (incl. tail /dev/zero)

I am attaching data on memory usage and slabinfo before and after updatedb, and also after "tail /dev/zero" (which takes a lot of memory). The complete data is in the attachment, so I will only summarize it briefly here. I did this on a freshly rebooted system.

Used memory: 22M
Slabinfo shows:
inode_cache      7016    7021   512   1003   1003    1
dentry_cache     7974    7980   128    266    266    1

Run updatedb.

Used memory: 302M
Slabinfo:
inode_cache    392608  392616   512  56088  56088    1
dentry_cache   395025  395040   128  13168  13168    1

Now tail /dev/zero, which tries to use all available memory and swap. The process is killed automatically after a minute or so.

Used memory: 9M
Slabinfo:
inode_cache       209     511   512     73     73    1
dentry_cache      191    1200   128     40     40    1

I do not understand the slabinfo entries very well, so I am not sure I can interpret the results correctly, but since the memory taken by updatedb can be used when necessary, updatedb does not leak memory. But then it means that free does not report memory usage properly, and a bug report against free should be filed instead. At least that is my understanding.
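If I read the 2.4 slabinfo columns correctly (name, active objects, total objects, object size in bytes, then slab counts - I am assuming that layout), the numbers after updatedb translate into roughly:

inode_cache:  392616 objects * 512 bytes ~= 192 MB
dentry_cache: 395040 objects * 128 bytes ~=  48 MB

That is about 240 MB of slab, which accounts for most of the ~280 MB jump in used memory, and it is gone again after the tail /dev/zero run. A quick way to get the same figure (again assuming that column layout):

# awk '/^(inode|dentry)_cache / { printf "%s: %.0f MB\n", $1, $3*$4/1048576 }' /proc/slabinfo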
free reports memory as the kernel sees it. After a program finishes execution, its memory isn't given back immediately; free reports this as it should. Instead, the memory the program was using just sits there waiting to be overwritten. Since the pages were never actually cleared, free won't report them as available, but in actuality those pages are ready for other programs to use. GKrellm, or something similar, is more accurate with respect to how much memory you have available. There is no need for a mad rush to free all of a program's pages when it exits. Memory is there to be used, so why not use it?
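For interpreting the free output in this bug: the "-/+ buffers/cache" line is the one that already discounts reclaimable buffers and page cache, although, as the slabinfo numbers above show, it still counts reclaimable inode/dentry slab as used, which is why even that line jumps after updatedb. Roughly the same "effectively free" figure can be pulled out of /proc/meminfo (a rough approximation that simply adds MemFree + Buffers + Cached):

$ awk '/^MemFree:|^Buffers:|^Cached:/ { kb += $2 } END { printf "effectively free: %d MB\n", kb/1024 }' /proc/meminfo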
Closing. Never really was an issue.