Limit maximum memory usage to 100 MB by default on Linux/*nix. Otherwise Linux mmap() will first exhaust swap space and then RAM, and xdelta will forkbomb along with other critical system daemons due to an out-of-memory condition. The limit can be tweaked by setting the XDELTA_MAX_MEGS environment variable; e.g. export XDELTA_MAX_MEGS=200 sets it to 200 MB. Any call to xdelta will use this tweaked value if available, or default to 100 MB.

This patch was necessary on my system when trying to use xdelta on huge 200MB+ archives. The default behaviour of xdelta is not suited for such large files because there is NO memory usage limit. Now that there are quite a few people using the excellent dynamic-deltup-server by blackpenguin, this patch is necessary for people with low-memory systems, and most dynamic-deltup-server users on modem or low-speed connections fall into that category.

Reproducible: Always

Steps to Reproduce:
1. Try to deltup -p -v mozilla-source-1.5.tar.bz2-mozilla-source3-1.7.2.tar.bz2.dtu
2. Since deltup uses xdelta for patching, it will try to apply the deltup patch to the decompressed 200MB source archive.
3. On a system with only ~500MB of RAM+swap, xdelta will redline the memory usage and cause X to terminate along with several other processes due to heavy swapping and the out-of-memory condition.

Actual Results: X crashed along with squid, samba, openldap and several other critical daemons.

Expected Results: Stuck to a memory usage limit.
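For reference, the core of the env-var lookup is roughly this (a minimal sketch only; the actual change is in the attached patch, and the function name here is hypothetical):

```c
#include <stdlib.h>
#include <stddef.h>

/* Return the memory cap in bytes: XDELTA_MAX_MEGS if set to a positive
 * integer, otherwise the 100 MB default.  Sketch of the patch's approach;
 * xdelta_mem_limit_bytes() is a made-up name for illustration. */
static size_t xdelta_mem_limit_bytes(void)
{
    const char *env = getenv("XDELTA_MAX_MEGS");
    long megs = 100;                      /* default when unset or invalid */

    if (env != NULL) {
        char *end;
        long v = strtol(env, &end, 10);
        if (end != env && v > 0)          /* ignore garbage / non-positive */
            megs = v;
    }
    return (size_t)megs * 1024 * 1024;
}
```

The allocation paths would then refuse (or trim) buffer requests above this cap instead of letting mmap() run away.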
Created attachment 39487 [details, diff] Xdelta-1.1.3-memory-usage-limit.patch

This is a -p0 patch.
Created attachment 39491 [details] Updated ebuild for the memory patch

Set the "lowmem" USE flag to use this patch.
heh... xdelta's memory management is insane, period. When I was pulling the format apart, there were 8 bytes in it I couldn't nail down- as near as I can figure, it's the memory requirement for the patch (which sounds insane because it is).

So... offhand, I'm not much for this since this is purely reconstruction, so the issue is A) your OOM killer thwacking the wrong processes, B) xdelta is developmentally dead, so we have to maintain it long term (not a huge issue), C) we're adding features that will always be Gentoo-specific (minor, and related to B).

The request is valid enough, although I'd think y'all are using the wrong tool for this. Personally, I'd just use diffball for reconstruction- as stated, xdelta's memory management blows (if you think reconstruction is bad, try differencing)- diffball supports the xdelta format and will reconstruct it using _much_ less memory. Basically the memory requirements for what y'all are doing are a couple of buffers, plus structs related to decompression. The reason I'm hawking diffball in this instance is that it would still get the job done instead of bailing.

A quick test of memory requirements for applying a linux-2.4.20 -> linux-2.4.21 patch came out at ~8MB for diffball vs. ~153MB for xdelta. Aside from that, the xdelta patch is more useful for forcing it to bail during differencing when a max memory limit is reached...
Yes, I would try out diffball too if it supported xdelta patches compatible with deltup. If it does, then it could be used for the large database of .dtu patches available.

Still, there is a problem with xdelta memory management which can be fixed by this patch. And it is optional for the user to install: only if they set USE="lowmem" will the patch ever get activated.

This patch is 100% tried and tested on my machine. xdelta without the memory limit will use up all available RAM for very large files and crash. With this patch it sticks to the memory limit set, verified by watching top and ps aux. It sticks to the memory limit and does not bail out or crash.
"Yes, I would try out diffball too if it supported xdelta patches compatible with deltup. If it does, then it could be used for the large database of .dtu patches available."

It's supported uncompressed/compressed xdelta patches since 0.4, along w/ the fdtu wrapper format.

"Still, there is a problem with xdelta memory management which can be fixed by this patch. And it is optional for the user to install: only if they set USE="lowmem" will the patch ever get activated."

Offhand, the thing is it's a design flaw of xdelta- we don't go adding artificial caps to bsdiff (which is even worse in memory usage) for the same reason. That's why I was a bit hesitant to include this- I wouldn't label it a flaw so much as the program's default behaviour. Prior to getting out the torches to flame me... keep reading :)

"It sticks to the memory limit set and does not bail out or crash."

Wasn't aware of that, so... the patch sounds quite a bit better. :)
Add in a manpage modification and I'll slip it in :)
Pardon, I might err, but what is the difference between this patch and using the "-m" parameter with xdelta (aside from setting the limit via an environment variable)?

  -m, --maxmem=SIZE    Set the buffer size limit, e.g. 640K, 16M
-m should control the hash size for differencing, not reconstruction.

Considering upstream is long since dead, closing this; reopen if it's really that much of a pisser.