The test program below generates very large numbers for tests 1 and 2. Tested with gcc 4.1.2 on x86 and gcc 3.4.6 on ppc. To me this looks like a gcc bug, but I am no expert on sign extension semantics.

#include <stdio.h>

int main(void)
{
    unsigned int x = 10;
    signed int y = -4;
    double z;

    z = 0 + y;
    printf("0: z=%f \n", z);
    z = (x*0) + y;
    printf("1: z=%f \n", z);
    z = (x-x) + y;
    printf("2: z=%f \n", z);
    z = (x*1) + y;
    printf("3: z=%f \n", z);
    z = (double)(x*0) + y;
    printf("4: z=%f \n", z);
    return 0;
}

/*
#> gcc test.c -O2
#> ./a.out
0: z=-4.000000
1: z=4294967292.000000
2: z=4294967292.000000
3: z=6.000000
4: z=-4.000000
*/
Be wary of mixing signed and unsigned integers.
Yes, but I don't see what is wrong in this case. Can you explain why the sign extension works differently in cases 1 and 2?
x is unsigned, so x-x and x*0 are unsigned, and therefore (x-x)+y and (x*0)+y are unsigned as well. Under C's usual arithmetic conversions, y is converted to unsigned int before the addition, wrapping -4 to 4294967292 (UINT_MAX - 3); only then is that unsigned value converted to double. That's just the way C works, and you will see the same results with other compilers.
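A minimal sketch making the conversion visible (variable names follow the original program; the printed values assume 32-bit unsigned int):

#include <stdio.h>

int main(void)
{
    unsigned int x = 10;
    signed int y = -4;

    /* (x*0) has type unsigned int, so the usual arithmetic
       conversions convert y to unsigned int before the
       addition, wrapping -4 to UINT_MAX - 3. */
    unsigned int u = (x*0) + y;
    printf("unsigned result: %u\n", u);  /* 4294967292 */

    /* The conversion to double happens only after the
       unsigned addition, so z inherits the wrapped value. */
    double z = (x*0) + y;
    printf("as double: %f\n", z);        /* 4294967292.000000 */
    return 0;
}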
x*1 is also unsigned, yet z = (x*1) + y; gives the expected 6. It looks as if the signed/unsigned conversion breaks down only when the x expression becomes zero, which is strange.
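The conversion actually happens in case 3 as well; the result only matches ordinary signed arithmetic because the unsigned addition wraps modulo 2^32 back to 6. A short sketch demonstrating this (assumes 32-bit unsigned int):

#include <stdio.h>

int main(void)
{
    unsigned int x = 10;
    signed int y = -4;

    /* y converts to unsigned: (unsigned)(-4) == 4294967292.
       10 + 4294967292 == 4294967302, which wraps modulo
       2^32 to 6, coinciding with 10 - 4. */
    unsigned int u = (x*1) + y;
    printf("3 (unsigned view): %u\n", u);  /* 6 */

    /* The same unsigned conversion occurred as in cases 1
       and 2; here the wraparound just hides it. */
    double z = (x*1) + y;
    printf("3: z=%f\n", z);                /* 6.000000 */
    return 0;
}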