On a new ARM system with only 128MB RAM I've got problems unpacking libtool and sandbox using xz-utils.
It fails with
lzma: /var/tmp/portage/sys-apps/sandbox-1.6-r2/distdir/sandbox-1.6.tar.lzma: Memory usage limit reached
lzma: Limit was 49 MiB, but 65 MiB would have been needed
I had about 80MB RAM free and much more swap space (about 3GB), so it seems xz-utils wants those 65MiB in one chunk of real memory.
Using -3 or similar for decompression doesn't help; it seems the memory required for decompression can only be set when compressing a file.
I've had this problem with sandbox and libtool, which are very critical packages.
I've marked the bug critical because I assume that needing 65MiB of real memory in one chunk could be a problem on more machines than the small one where I've had these problems.
lzma doesn't have the problem, so it's for xz-utils only.
Thank you for the report, Alexander. Actually, I don't see anything extraordinary here. Please read man unxz:
The memory usage of xz varies from a few hundred kilobytes to several gigabytes depending on the compression settings. The settings used when compressing a file affect also the memory usage of the decompressor. Typically the decompressor needs only 5 % to 20 % of the amount of RAM that the compressor needed when creating the file. Still, the worst-case memory usage of the decompressor is several gigabytes.

To prevent uncomfortable surprises caused by huge memory usage, xz has a built-in memory usage limiter. The default limit is 40 % of total physical RAM. While operating systems provide ways to limit the memory usage of processes, relying on it wasn't deemed to be flexible enough.

When compressing, if the selected compression settings exceed the memory usage limit, the settings are automatically adjusted downwards and a notice about this is displayed. As an exception, if the memory usage limit is exceeded when compressing with --format=raw, an error is displayed and xz will exit with exit status 1.

If the source file cannot be decompressed without exceeding the memory usage limit, an error message is displayed and the file is skipped. Note that compressed files may contain many blocks, which may have been compressed with different settings. Typically all blocks will have roughly the same memory requirements, but it is possible that a block later in the file will exceed the memory usage limit, and an error about a too low memory usage limit gets displayed after some data has already been decompressed.

The absolute value of the active memory usage limit can be seen near the bottom of the output of --long-help. The default limit can be overridden with --memory=limit.
So this is documented behavior...
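The documented behavior and its override can be seen with a small experiment (a sketch assuming xz-utils is installed; file names are arbitrary). Note the thread's xz 4.999.9beta spelled the option --memory, while xz >= 5.0 spells it --memlimit:

```shell
# Compress a small file with -9, which declares a 64 MiB dictionary and
# therefore makes the decompressor need ~65 MiB, matching the error in
# this bug. Then decompress with an explicitly raised memory limit.
cd "$(mktemp -d)"
dd if=/dev/zero of=sample.bin bs=1024 count=64 2>/dev/null
xz -9 sample.bin                                  # replaces sample.bin with sample.bin.xz
xz --decompress --memlimit=100MiB sample.bin.xz   # 100 MiB limit is above the ~65 MiB needed

# The active default limit is shown near the bottom of:
xz --long-help | tail -n 6
```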
Documented behavior or not, this leads to serious problems on embedded and resource-limited systems, where emerge can no longer build a package because of this silly behavior. As a for-instance, I have a Sony Vaio Picturebook PCG-C1VN with about 128MB of RAM (not expandable); using unxz via emerge to emerge sandbox-2.2 fails for exactly the reason mentioned earlier. If, however, I manually unxz the file with --memory=10000000000000000000, it decompresses fine, despite the actual "low memory free" situation (39MB RAM free, 1GB swap available). If patching the source to fix this isn't acceptable, perhaps we should look at patching the ebuild scripts to work around the issue by passing the actual amount of free RAM? Either way, at the moment this bug is breaking my builds.
Confirmed that adding XZ_OPT="--memory=128M" to the emerge command line solves the memory limit problem. Unfortunately, however, that's a workaround for broken behavior, not a solution. I'm going to see about writing a quick patch to xz-utils to solve this issue by setting the memory percentage to 100% rather than 40% -- a hack, yes, but at least it'll solve the actual problem until the upstream maintainers decide to remove this poorly thought out code.
Created attachment 222311 [details, diff]
Quick hack to set the default memory limit to 100%
Here's a quick patch against the xz-utils 4.999.9beta source tree to set the default memory limit to 100% of the actual available RAM, which should remove the problem described in this bug.
Same here with a system with 16MB of RAM.
Why can't it just use swap? The memory limit should be absolute not relative!
A workaround for this is to specify --memory=max as an option to xz in the src_unpack() function in the ebuild.
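A minimal sketch of that workaround (a hypothetical ebuild fragment, not from any real ebuild; ${P} and ${DISTDIR} are standard portage variables, and the --memory spelling matches the xz 4.999.9beta discussed here):

```shell
# Hypothetical src_unpack() override for an ebuild whose tarball trips
# the decompressor's memory limit: decompress to stdout with the limit
# lifted, then untar the stream.
src_unpack() {
	xz --decompress --memory=max --stdout "${DISTDIR}/${P}.tar.xz" \
		| tar -xf - || die "unpack of ${P}.tar.xz failed"
}
```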
Guys, could you forward this issue upstream to hear what they have to say on this issue? (and post link on upstream report here). Thanks.
Reopening... the upstream developer has contacted me on IRC.
@base-system: could you suggest anything here:
22:44 <Larhzu> Hi! I'm xz's developer. I'm aware that the memory usage limiter has caused some trouble in Gentoo's build system. I recently made some changes in xz's git repository to make some of those problems go away.
The latest issue, which happens on a system with 16 MiB RAM, is still present, and avoiding it is harder.
jotik made a test: He compressed a 100 MiB file with "xz -6" to "xz -9" and then tested how fast it was to decompress on his 16 MiB (actually 24 MiB) box. http://pastebin.org/118773
Decompressing "xz -6" takes 9 MiB RAM and thus needs no swapping. It finishes in 40 seconds. "xz -9" takes 65 MiB and thus makes the system swap a lot. It takes 45 seconds of CPU time and 59 _minutes_ of swapping.
Still, he would like xz to decompress source files even if it was slow so that building packages wouldn't break. I agree with this.
But I think that if I ran xz on an interactive command line (maybe indirectly via tar), I would prefer it to abort instead of going to swap _by default_. If I really want swapping, I can easily tell xz to disable the limit. But I like preventing such a DoS with the default settings.
I'm not sure how to solve the problem. One possibility would be to put "export XZ_OPT=--memory=max" into Gentoo's build tool. But there might be other scripts (possibly in other distros) that would have the same problem, so I'm not sure this is optimal.
Some solution is needed. The DoS prevention shouldn't itself be a DoS.
That said, it looks like it's a good idea to modify unpack() to handle low-memory devices.
Can't we just deprecate xz support and remove it in the next EAPI? Why don't we tell developers to ship archives in a stable format that doesn't need this kind of awful hackery?
Here are a few other possible alternatives:
1) set XZ_OPT=--memory=max in the base profile make.defaults
2) patch unpack in all package managers
3) patch our xz-utils package so --memory=max is default
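Option (1) would amount to a one-line addition (a sketch; the profile path shown is the conventional location in the portage tree, used here for illustration):

```shell
# profiles/base/make.defaults (illustrative path):
# disable xz's default decompressor memory limit for all builds.
XZ_OPT="--memory=max"
```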
i like the sound of (1)
Yes, go with the first.
I would prefer the solution of using swap rather than polluting the environment with silly stuff. Using swap is no DoS. xz-utils could print a warning if it detects low memory, but failing by default even when enough swap is available is, in my humble opinion, the wrong way to solve such things.
your comment doesn't make sense. we aren't "polluting the environment" -- this is going into the build environment only, not user login environments. plus, once we enable that option, we *would* be using swap by default. the existing behavior is based on *physical pages*, as documented earlier, and swap doesn't count as that. so no matter how much swap space you add, you aren't going to change the behavior today.
tossing xz isn't feasible. upstream packaging projects have adopted it, as have many projects using them. in today's world, the target is not <512MB systems, so catering to them to the detriment of the majority of people doesn't make sense.
unless there's any other feedback, i'll add (1) in a few days.
I just stumbled on another package, app-admin/metalog, failing on a machine with 128MB of RAM:
lzma: Limit was 46 MiB, but 65 MiB would have been needed
*** Bug 320987 has been marked as a duplicate of this bug. ***
This bug is a blocker for building stages on a BeagleBoard. This fix was going to be done in March but has yet to be committed. Can an active/available dev apply the patch mentioned?
(In reply to comment #21)
> This bug is a blocker for building stages on a BeagleBoard. This fix was going
> to be done in March but has yet to be committed. Can an active/available dev
> apply the patch mentioned?
I solved this on my Neo Freerunner (armv4tl softfloat with 128MB of RAM) by adding this to /etc/env.d/99local:
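(The snippet referred to above was not captured in the comment; given the XZ_OPT workaround confirmed earlier in this thread, it was presumably along these lines:)

```shell
# /etc/env.d/99local -- presumed content, not the commenter's verbatim
# snippet: lift xz's decompressor memory limit system-wide.
XZ_OPT="--memory=max"
```

After editing env.d files, running env-update && source /etc/profile makes the change take effect.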
Cheers to fellow ARM developers!
FYI the default memory limit has been removed as of app-arch/xz-utils-5.0.0, as mentioned in the release announcement: http://sourceforge.net/mailarchive/message.php?msg_name=201010242244.43286.lasse.collin%40tukaani.org