I tried switching to the hardened profile from a working setup with GCC 4.5.1 on unstable (~amd64). Most packages built fine until patch-2.6.1 was recompiled; after that, every subsequent package containing a patch with timestamps in its headers failed. The URL in the bug report links to the forum thread where the problem was discussed.

Reproducible: Always

Steps to Reproduce:
1. Switch to the hardened profile and recompile the toolchain
2. emerge patch-2.6.1
3. Try to use patch on a file containing timestamps ("emerge -1 glibc" should halt on "1040_all_2.3.3-localedef-fix-trampoline.patch")

Actual Results:
patch fails with a segfault.

Expected Results:
patch should work normally and apply the files.

Every hardened profile produces non-working code, whereas switching back to the vanilla profile makes patch work again. We found a workaround on the forum thread: recompiling with "-O1" instead of "-O2". Attached to the bug report you will find my emerge --info and a backtrace of the crash. Stripping "-fsched-pressure -fira-loop-pressure" from CFLAGS didn't help.
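For anyone who wants to try the -O1 workaround in the meantime, something along these lines should do. This is only a sketch: the package atom and the rest of the CFLAGS are assumptions, and it presumes your Portage setup honours CFLAGS overridden from the environment (otherwise adjust make.conf or package.env instead):

CFLAGS="-O1 -pipe" CXXFLAGS="-O1 -pipe" emerge --oneshot sys-devel/patch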
Created attachment 251223 [details] Emerge --info output
Created attachment 251225 [details] Backtrace of the program with "-ggdb -g3"
Even a simple foo.patch makes it segfault:

(gdb) run
Starting program: /usr/bin/patch -p0 -i add_txt.patch

Program received signal SIGSEGV, Segmentation fault.
parse_decimal (s=<value optimized out>, hi=60, resolution=1, res=0x7fffffffda60, fres=0x7fffffffdaf4, lo=0, digits=2) at src/partime.c:421
421         if ((s[0] == ',' || s[0] == '.') && ISDIGIT (s[1]))
(gdb) bt
#0  parse_decimal (s=<value optimized out>, hi=60, resolution=1, res=0x7fffffffda60, fres=0x7fffffffdaf4, lo=0, digits=2) at src/partime.c:421
#1  0x00007ffff7fecfac in parse_pattern_letter (s=0x7ffff820301f "\t2010-10-09 22:40:51.000000000 +0200\n", t=0x7fffffffdba0) at src/partime.c:702
#2  parse_prefix (s=0x7ffff820301f "\t2010-10-09 22:40:51.000000000 +0200\n", t=0x7fffffffdba0) at src/partime.c:349
#3  partime (s=0x7ffff820301f "\t2010-10-09 22:40:51.000000000 +0200\n", t=0x7fffffffdba0) at src/partime.c:892
#4  0x00007ffff7feb6fa in str2time (source=0x7fffffffdce0, default_time=1287516134, default_zone=-90000) at src/maketime.c:432
#5  0x00007ffff7ff83b2 in fetchname (at=0x7ffff8203014 "patchtest.c\t2010-10-09 22:40:51.000000000 +0200\n", strip_leading=<value optimized out>, ptimestr=0x7ffff82023d8, pstamp=0x7fffffffdda8) at src/util.c:1166
#6  0x00007ffff7ff43bb in intuit_diff_type (need_header=true) at src/pch.c:470
#7  0x00007ffff7ff4f52 in there_is_another_patch (need_header=true) at src/pch.c:228
#8  0x00007ffff7feeb69 in main (argc=<value optimized out>, argv=<value optimized out>) at src/patch.c:169
(gdb)
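For reference, the crash is hit while parsing the timestamp in the diff header (partime.c via fetchname), so any unified diff whose file names are followed by timestamps should trigger it. A hypothetical minimal example (file names and content invented for illustration; the whitespace before the timestamp is a tab in a real diff):

--- patchtest.c	2010-10-09 22:40:51.000000000 +0200
+++ patchtest.c	2010-10-09 22:41:00.000000000 +0200
@@ -1 +1 @@
-int x;
+int x = 1;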
Can anyone test adding -fstrict-overflow to the CFLAGS to see if that makes it work?
(In reply to comment #4)
> Can anyone test adding -fstrict-overflow to the CFLAGS to see if that makes it work?

Compiling with -fstrict-overflow works, so it may be used as another workaround. It seems a bit odd, though.
(In reply to comment #5)
> (In reply to comment #4)
> > Can anyone test adding -fstrict-overflow to the CFLAGS to see if that makes it work?
>
> Compiling with -fstrict-overflow works, so it may be used as another workaround.
> It seems a bit odd, though.

We add -fno-strict-overflow to the hardened spec.

From http://gcc.gnu.org/onlinedocs/gcc-4.5.1/gcc/Optimize-Options.html#Optimize-Options:

-fstrict-overflow
    Allow the compiler to assume strict signed overflow rules, depending on the language being compiled. For C (and C++) this means that overflow when doing arithmetic with signed numbers is undefined, which means that the compiler may assume that it will not happen. This permits various optimizations. For example, the compiler will assume that an expression like i + 10 > i will always be true for signed i. This assumption is only valid if signed overflow is undefined, as the expression is false if i + 10 overflows when using twos complement arithmetic. When this option is in effect any attempt to determine whether an operation on signed numbers will overflow must be written carefully to not actually involve overflow.

    This option also allows the compiler to assume strict pointer semantics: given a pointer to an object, if adding an offset to that pointer does not produce a pointer to the same object, the addition is undefined. This permits the compiler to conclude that p + u > p is always true for a pointer p and unsigned integer u. This assumption is only valid because pointer wraparound is undefined, as the expression is false if p + u overflows using twos complement arithmetic.

    See also the -fwrapv option. Using -fwrapv means that integer signed overflow is fully defined: it wraps. When -fwrapv is used, there is no difference between -fstrict-overflow and -fno-strict-overflow for integers. With -fwrapv certain types of overflow are permitted. For example, if the compiler gets an overflow when doing arithmetic on constants, the overflowed value can still be used with -fwrapv, but not otherwise.

    The -fstrict-overflow option is enabled at levels -O2, -O3, -Os.
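To make the manual text concrete, here is a small illustration (unrelated to the patch sources; names invented for this example) of the kind of check the option affects. Under -O2 with the default -fstrict-overflow the compiler may fold the comparison to a constant, while -fno-strict-overflow or -fwrapv keeps the wrap-around observable:

#include <stdio.h>
#include <limits.h>

/* With -fstrict-overflow the compiler may assume i + 10 > i is always
 * true for signed i, because signed overflow is undefined.  With
 * -fno-strict-overflow / -fwrapv it has to emit the real comparison. */
static int fits_after_increment(int i)
{
    return i + 10 > i;
}

int main(void)
{
    /* Near INT_MAX, i + 10 wraps under two's-complement arithmetic, so
     * the printed value can differ between the two compilation modes. */
    printf("%d\n", fits_after_increment(INT_MAX - 5));
    return 0;
}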
(In reply to comment #6)
> (In reply to comment #5)
> > (In reply to comment #4)
> > > Can anyone test adding -fstrict-overflow to the CFLAGS to see if that makes it work?
> >
> > Compiling with -fstrict-overflow works, so it may be used as another workaround.
> > It seems a bit odd, though.
>
> We add -fno-strict-overflow to the hardened spec.

Ah, that makes perfect sense. I read the GCC manual entry you posted, and it was on that basis that I commented "It seems a bit odd". Thank you for explaining :)
I was able to reproduce the bug with non-hardened GCC as well, so it looks like a GCC bug.

flame@yamato mytmpfs % cat testcase.c
#include <stdio.h>

static char const *
parse_ranged (char const *s, int digits)
{
  fprintf(stderr, "%p\n", s);
  int n = 0;
  char const *lim = s + digits;
  while (s < lim)
    {
      unsigned d = *s++ - '0';
      if (9 < d)
        return 0;
      n = 10 * n + d;
    }
  return s && 0 <= n && n <= 59 ? s : 0;
}

int main()
{
  const char *s = "10092240";
  s = parse_ranged (s, 2);
  s = parse_ranged (s, 2);
  s = parse_ranged (s, 2);
  s = parse_ranged (s, 2);
  return 0;
}
flame@yamato mytmpfs % gcc -O2 -fno-strict-overflow testcase.c -o testcase
flame@yamato mytmpfs % ./testcase
0x400770
0x2
zsh: segmentation fault (core dumped)  ./testcase
Created attachment 251447 [details] testcase.c file as reported in the comments

Note that you have to have at least four calls to the function for GCC to properly inline it, and you have to keep the fprintf() call in it, otherwise it will be optimised away since it has no side effects.
To make debugging even more fun, placing too many fprintf calls in the testcase causes it to stop crashing. My first attempt, placing one fprintf after each real source line, resulted in a non-crashing testcase; disabling the fprintf after the assignment of 'd' allowed it to fail again.

In the failing case, s is valid at the bottom of the while loop on the last iteration, but has the value 2 just before the return statement is processed. The function then returns (char*)2, causing the next iteration to crash.

Interestingly, changing the while condition from (s < lim) to (s != lim) causes GCC to generate code identical to what -fstrict-overflow already produced with the posted testcase. Making the equivalent change in patch itself also produces a working patch.

[All this was tested using amd64 gcc-4.5.1 hardened, with appropriate values of -fstrict-overflow / -fno-strict-overflow, as suggested by the preceding comments.]
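For reference, a sketch of the change that was tested, as a drop-in replacement for parse_ranged in the attached testcase.c. This is only the source-level workaround described above, not the upstream fix:

static char const *
parse_ranged (char const *s, int digits)
{
  fprintf(stderr, "%p\n", s);
  int n = 0;
  char const *lim = s + digits;
  /* The only change from the original testcase: != instead of <.
   * This sidesteps the miscompiled pointer comparison seen with
   * gcc-4.5.1 at -O2 -fno-strict-overflow. */
  while (s != lim)
    {
      unsigned d = *s++ - '0';
      if (9 < d)
        return 0;
      n = 10 * n + d;
    }
  return s && 0 <= n && n <= 59 ? s : 0;
}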
A vanilla build of the current 4.5-branch SVN fails as well. Can you file a bug upstream?
(In reply to comment #11)
> A vanilla build of the current 4.5-branch SVN fails as well. Can you file a bug
> upstream?

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46315

Confirmed upstream.
Created attachment 254577 [details, diff] Patch for the bug

http://gcc.gnu.org/ml/gcc-patches/2010-11/msg01735.html
Thanks, I'll roll a new patchset this week.
Fixed in patchset 1.3.