The `sort' utility is capable of external sorting, backed by temporary files, when memory is exhausted. A memory limit may be given on the command line (switch -S). If general numeric sorting is chosen (switch -g), however, sort ignores all memory limits and runs out of memory when the input becomes too large. With other comparison methods (e.g., numeric sort, -n), sort behaves as expected.

Reproducible: Always

Steps to Reproduce:
1. Grab a large file with a numeric field. It should exceed your main memory.
2. Enter `sort -g -S 100M <your_large_file> > /dev/null'
3. If you run `top' in parallel, you will see how `sort' eats up all your memory. (It finally gets killed by the oom-killer.)

Actual Results:
`sort' gets killed by the oom-killer when main memory is exhausted.

Expected Results:
It should have used at most as much memory as the limit given on the command line, and it should have used temporary files to do external sorting. (Test with numeric sort, switch -n.)

Using sys-apps/coreutils 5.2.1 (stable) here.
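The reproduction steps above can be sketched as a small script. The filename and row count are illustrative only; increase COUNT until the generated file exceeds your main memory to trigger the bug:

```shell
# Generate a file of random floating-point numbers in scientific
# notation (the format -g exists to handle), then sort it with a
# 100 MB memory cap using general numeric sort.
COUNT=1000000   # illustrative; raise until the file exceeds your RAM
seq "$COUNT" | awk 'BEGIN{srand()} {printf "%.6e\n", rand()*1e6}' > big_numbers.txt
sort -g -S 100M big_numbers.txt > /dev/null
```

With -n in place of -g, the same run stays within the 100M limit and spills to temporary files as expected.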
Seems like this bug had already been filed as #39515. (I really did search in advance. Now I see it's fixed with the new stable coreutils package.) Anyway, thanks for the fix.

*** This bug has been marked as a duplicate of 39515 ***