The integer tests fail on amd64 because of incorrect sign extension when 32-bit values are widened to 64 bits. For example, the first test expects 4294901760 but gets -65536. The two are equivalent on 32-bit targets but not on 64-bit ones:

> printf "%x %x\n" 4294901760 -65536
ffff0000 ffffffffffff0000

The following patch solves the problem:

--- /var/tmp/portage/eel-2.8.0/work/eel-2.8.0/eel/eel-gdk-extensions.h.orig	2004-10-09 16:19:06.076002656 +0200
+++ /var/tmp/portage/eel-2.8.0/work/eel-2.8.0/eel/eel-gdk-extensions.h	2004-10-09 16:15:31.608606656 +0200
@@ -43,10 +43,10 @@
 
 /* Pack RGBA values into 32 bits */
 #define EEL_RGBA_COLOR_PACK(r, g, b, a) \
-( ((a) << 24) | \
-  ((r) << 16) | \
-  ((g) << 8) | \
-  ((b) << 0) )
+( (((unsigned int)a) << 24) | \
+  (((unsigned int)r) << 16) | \
+  (((unsigned int)g) << 8) | \
+  (((unsigned int)b) << 0) )
 
 /* Pack opaque RGBA values into 32 bits */
 #define EEL_RGB_COLOR_PACK(r, g, b) \

Reproducible: Always

Steps to Reproduce:
1.
2.
3.
Created attachment 41422 [details] emerge log
Re-reading my patch, I found that using uint32_t instead of unsigned int for the casts would be a better fix: int may not be 32 bits wide on some platforms.
Thanks for the report. Added a patch which also makes use of your suggestion in comment #2. Fixed in CVS.