The bigcore test creates a huge core file and stops only because it times out. gdb fails other tests as well, but this one causes the most trouble: it uses 17GB of disk space (!).
The creation of such a big file may be related to my ulimit settings, but the test should check for them:
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 8120
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 8120
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Created attachment 135385 [details]
gdb emerge log
Created attachment 135387 [details]
it doesn't actually use ~17GB of disk space ... it's created with holes (a sparse file)
$ stat -c "%b %B %s" core
52440 512 17176354816
that means, on disk, it's only taking up (52440 * 512) bytes (so ~26 megs). the "file size", though, is 17GB
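For reference, the effect is easy to reproduce outside the gdb test (the demo filename and size below are hypothetical, not from the report): `truncate` extends a file's apparent size without allocating data blocks, so `stat` reports a large size but few blocks.

```shell
# Create a 1 GiB sparse file (demo only, not the gdb test itself).
truncate -s 1G sparse_demo
# %s is the apparent size; %b * %B is the space actually allocated on disk.
stat -c "%b %B %s" sparse_demo
rm sparse_demo
```

On a filesystem with sparse-file support, the allocated bytes (%b * %B) come out tiny compared with the 1073741824-byte apparent size (%s).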
(In reply to comment #3)
First, thank you for looking into this report. However, I get different results:
$ stat -c "%b %B %s" gdb.base/bigcore.corefile
33578007 512 17175166976
I'm using reiserfs 3.6, but I don't know whether that makes a difference.
It would seem as though the test case is doing what it was intended to do...
yeah, the real fix is to get sparse-file support into the relevant filesystems
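Holes appear whenever a writer seeks past the current end of file before writing, which is how the kernel's core dumper avoids storing all-zero pages. A minimal sketch of that write pattern (filename and offset are hypothetical):

```shell
# Seek ~100 MB past EOF, then write a single byte; everything skipped over
# becomes a hole on filesystems that support sparse files.
dd if=/dev/zero of=hole_demo bs=1 count=1 seek=104857600 2>/dev/null
# Few allocated blocks, but a ~100 MB apparent size.
stat -c "%b %B %s" hole_demo
rm hole_demo
```

On a filesystem without that support (as the reiserfs 3.6 numbers above suggest), the same write pattern forces real blocks to be allocated for the whole zero-filled region.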