When building python 2.7.9-r2, 3.3.5-r1, and 3.4.3 with gcc-4.9.2 and uclibc-0.9.33.2-r14 on mips32r2, the build fails with "Illegal instruction" when CFLAGS has -O1, -O2, or -O3. It succeeds with -O0. It also succeeds at all optimization levels with gcc versions before 4.9, so this is a regression. All other packages in @system build fine with 4.9 at -O2, so this bug is only triggered when building python. I'll attach the full build.log below, but here is the final build error:

LD_LIBRARY_PATH=/var/tmp/portage/dev-lang/python-3.4.3/work/mips-gentoo-linux-uclibc ./python -E -S -m sysconfig --generate-posix-vars ;\
if test $? -ne 0 ; then \
        echo "generate-posix-vars failed" ; \
        rm -f ./pybuilddir.txt ; \
        exit 1 ; \
fi
/bin/sh: line 5:  6899 Illegal instruction     LD_LIBRARY_PATH=/var/tmp/portage/dev-lang/python-3.4.3/work/mips-gentoo-linux-uclibc ./python -E -S -m sysconfig --generate-posix-vars
generate-posix-vars failed
Makefile:569: recipe for target 'pybuilddir.txt' failed

The problem is actually in libpython3.4.so.1.0, since I can get the freshly built ./python to run against the host library just by removing the LD_LIBRARY_PATH. (The host library was built with gcc-4.8.4.) It sounds to me like any level of optimization in gcc-4.9 causes a jump to a wrong address containing garbage. I'm narrowing it down with gdb and will try to produce a reduced test case for upstream. I'm masking >=gcc-4.9 on profiles/linux/uclibc/mips while I sort this out.
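To make the libpython comparison concrete, this is roughly the check I'm doing. It's only a sketch: the path is the work directory from my build above, and the host-side behaviour of course depends on the host python-3.4 having been built with the older gcc.

  cd /var/tmp/portage/dev-lang/python-3.4.3/work/mips-gentoo-linux-uclibc

  # Resolve libpython3.4.so.1.0 from the host (built with gcc-4.8.4): runs fine.
  ./python -E -S -m sysconfig --generate-posix-vars

  # Resolve it from the freshly built gcc-4.9 tree: dies with an illegal instruction.
  LD_LIBRARY_PATH=$PWD ./python -E -S -m sysconfig --generate-posix-vars

  # Confirm which libpython each invocation actually picks up:
  ldd ./python | grep libpython
  LD_LIBRARY_PATH=$PWD ldd ./python | grep libpython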
Created attachment 399950 [details] Build failure with python-3.4.3
Forgot to add the mips folks. I should also say I hit this with isa=mips32r2 as stated above and MIPS I big endian. I have yet to test little endian. Also abi=o32, obviously, since it's the only 32-bit ABI we support for mips.
(In reply to Anthony Basile from comment #2)
> Forgot to add the mips folks. I should also say I hit this with
> isa=mips32r2 as stated above and MIPS I big endian. I have yet to test
> little endian. Also abi=o32, obviously, since it's the only 32-bit ABI we
> support for mips.

This might be fixed with gcc-5.x. I found one of your old ~2012 mips3 uclibc
stages and have successfully compiled gcc-5.3.0 and at least Python-2.7.10-r3
in it, and have not hit any illegal instruction errors yet. Just rebuilt
uclibc itself (after chasing down why NPTL was disabled...), got sandbox
updated, and now python-3.5.0-r1 is building. I'm building with mips4/R1x000
compile flags, so I can't speak for mips32r2. I'm hoping to use this chroot
as a seed stage for new uclibc-based stages, which means new netboots
(hopefully).

I am probably jinxing myself...
(In reply to Joshua Kinard from comment #3)
> (In reply to Anthony Basile from comment #2)
> > Forgot to add the mips folks. I should also say I hit this with
> > isa=mips32r2 as stated above and MIPS I big endian. I have yet to test
> > little endian. Also abi=o32, obviously, since it's the only 32-bit ABI we
> > support for mips.
>
> This might be fixed with gcc-5.x. I found one of your old ~2012 mips3
> uclibc stages and have successfully compiled gcc-5.3.0 and at least
> Python-2.7.10-r3 in it, and have not hit any illegal instruction errors
> yet. Just rebuilt uclibc itself (after chasing down why NPTL was
> disabled...), got sandbox updated, and now python-3.5.0-r1 is building.
> I'm building with mips4/R1x000 compile flags, so I can't speak for
> mips32r2. I'm hoping to use this chroot as a seed stage for new
> uclibc-based stages, which means new netboots (hopefully).
>
> I am probably jinxing myself...

1) I'd like a copy of that updated mips3. I lost it and never did get around
to reconstructing it.

2) It might be ISA dependent. You're running a mips3 CPU while I'm running a
mips32r2.
(In reply to Anthony Basile from comment #4)
>
> 1) I'd like a copy of that updated mips3. I lost it and never did get
> around to reconstructing it.

Google says it's on your university's server:
http://opensource.dyc.edu/pub/ZOLD/

Slow download speed, though; it caps at ~128 kbps. If I can get updated
stage runs out of catalyst, I'll make sure to get you a copy of those as
well.

> 2) It might be ISA dependent. You're running a mips3 CPU while I'm running
> a mips32r2.

Entirely possible. I'd try with 5.3, though, as apparently a lot of fixes
went into that release. I still need to verify that PR66038 is fixed, which
in 5.0 and 5.1 was preventing gcc from building under glibc N32. I never
tested 5.2, so I'm hoping 5.3 fixed whatever was causing it.

If the illegal instruction issue is still present, it might be a GCC bug,
and filing it upstream might yield results.
(In reply to Joshua Kinard from comment #5)
> (In reply to Anthony Basile from comment #4)
> >
> > 1) I'd like a copy of that updated mips3. I lost it and never did get
> > around to reconstructing it.
>
> Google says it's on your university's server:
> http://opensource.dyc.edu/pub/ZOLD/
>
> Slow download speed, though; it caps at ~128 kbps. If I can get updated
> stage runs out of catalyst, I'll make sure to get you a copy of those as
> well.

OMG, I'm getting old and senile. I put it there and forgot.
I'm pretty confident now that whatever regression was causing this bug is fixed in gcc-5.3.0. I've rebuilt/updated most of the packages in the 2012 chroot using gcc-5.3.0, including all three major python versions, and haven't hit any errors like this one. I think gcc-4.9* should stay masked on uclibc and at least 5.3 should be unmasked; I haven't tested 5.2, but that one is probably safe as well. Working around the maskings via catalyst is proving to be a bit... challenging.
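On the profile side, what I have in mind is roughly the following: narrow the blanket ">=sys-devel/gcc-4.9" entry from comment #0 so that only the known-bad series stays hidden. This is just a sketch; the file path follows comment #0 and the exact atoms would need checking against the live profile.

  # profiles/linux/uclibc/mips/package.mask (path as used in comment #0) -- sketch only.
  # Drop the old ">=sys-devel/gcc-4.9" line and mask just the 4.9 series, so
  # gcc-5.3 and later become visible for testing:

  # gcc-4.9 miscompiles libpython on mips/uclibc at -O1 and above (this bug).
  =sys-devel/gcc-4.9*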
Successfully built gcc-6.2.0 in a chroot under binutils-2.27 and uclibc-ng-1.0.19. Had to remove the toolchain.eclass hack, though (see Bug #600966). I'm pretty confident we can drop this mask and close this bug now to open up the newer gcc versions for wider testing. gcc-6.2.x rebuilt all but one @system package in my chroot, the failing one being dev-util/pkgconfig, due to what looks like over-zealous -W flags (a warning treated as an error). I can also state that Python 3.4.x definitely builds, having glanced at the SSH terminal while 3.4.5 was being unpacked and built.
I have newer stages built with gcc-6.3.0 and uclibc-ng-1.0.24 (no obstack), and Python 3.4, 3.5, and 2.7 all compiled fine. Not sure about old uclibc, but I'm starting to think we need to deprecate it sooner rather than later and move on to uclibc-ng by lifting the masks and removing that toolchain.eclass block. I'm going to start testing an upgrade to gcc-7.1.x and uclibc-ng-1.0.25 next.
(In reply to Joshua Kinard from comment #9)
> I have newer stages built with gcc-6.3.0 and uclibc-ng-1.0.24 (no obstack),
> and Python 3.4, 3.5, and 2.7 all compiled fine. Not sure about old uclibc,
> but I'm starting to think we need to deprecate it sooner rather than later
> and move on to uclibc-ng by lifting the masks and removing that
> toolchain.eclass block. I'm going to start testing an upgrade to gcc-7.1.x
> and uclibc-ng-1.0.25 next.

I'm worried about my mips32r2 hardware, which is very slow. It's what I use
to make the stage3s. I'm in the middle of a batch right now; afterwards, I
can try bumping to gcc-6, but I'm afraid of how long it might take. Do you
have any idea what the build time is for gcc-6 compared to gcc-4.8.5?
(In reply to Anthony Basile from comment #10)
> (In reply to Joshua Kinard from comment #9)
> > I have newer stages built with gcc-6.3.0 and uclibc-ng-1.0.24 (no
> > obstack), and Python 3.4, 3.5, and 2.7 all compiled fine. Not sure about
> > old uclibc, but I'm starting to think we need to deprecate it sooner
> > rather than later and move on to uclibc-ng by lifting the masks and
> > removing that toolchain.eclass block. I'm going to start testing an
> > upgrade to gcc-7.1.x and uclibc-ng-1.0.25 next.
>
> I'm worried about my mips32r2 hardware, which is very slow. It's what I use
> to make the stage3s. I'm in the middle of a batch right now; afterwards, I
> can try bumping to gcc-6, but I'm afraid of how long it might take. Do you
> have any idea what the build time is for gcc-6 compared to gcc-4.8.5?

If I recall correctly, the speed issue with your mips32r2 hardware was the
lack of L2 cache, right? Wasn't your system at least a ~600MHz processor?
And what was the endianness of that platform? mipsel?

I use my SGI Octane for builds, which has dual R14K CPUs at 600MHz and 2MB of
L2 cache, but gcc is still a monster to build. Per genlop, this is my build
history for gcc since my reinstall to switch to the n32 ABI userland (using
MAKEOPTS of -j3):

 * sys-devel/gcc

     Sun Aug 16 10:11:58 2015 >>> sys-devel/gcc-4.9.3
       merge time: 5 hours, 39 minutes and 41 seconds.

     Wed Feb 24 07:33:35 2016 >>> sys-devel/gcc-5.3.0
       merge time: 1 minute and 43 seconds.

     Thu Jun 16 16:05:48 2016 >>> sys-devel/gcc-5.4.0
       merge time: 7 hours, 24 minutes and 14 seconds.

     Wed Oct 12 12:29:08 2016 >>> sys-devel/gcc-6.2.0-r1
       merge time: 8 hours, 17 minutes and 34 seconds.

     Wed Nov 16 17:13:12 2016 >>> sys-devel/gcc-6.2.0-r1
       merge time: 8 hours, 17 minutes and 16 seconds.

     Fri Jan 6 12:31:12 2017 >>> sys-devel/gcc-6.3.0
       merge time: 8 hours, 22 minutes and 21 seconds.

The Feb 2016 entry is a fluke where I re-installed from a binpkg. gcc-7
doesn't seem to change the build times much. uclibc-ng builds are *much*
faster, probably on the order of 2-3 hours less than on glibc-based
userlands; the smaller library footprint of uclibc-ng really makes itself
known during large builds like gcc.

I'm updating my new uclibc_mips2 stage3 chroot right now with gcc-7.1.x. You
need uclibc-ng-1.0.25 to do so, though, as it contains a build fix for
gcc-7.1.x's final build of libstdc++. If that works, I'll update the
remaining chroots before trying catalyst runs all over again.
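If you want comparable numbers from your mips32r2 box once you've tried a gcc-6 build: the table above is just genlop reading /var/log/emerge.log, so something like the following should work there too (assuming genlop is installed and your emerge.log goes back to the gcc-4.8.5 merge).

  # Past merge times for gcc, pulled from /var/log/emerge.log:
  genlop -t sys-devel/gcc

  # While the gcc-6 build is running, estimate time remaining from previous merges:
  genlop -c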
blueness@ asked to add toolchain@ back in the hope of tracking this bug down to a gcc problem. To make it more actionable on the toolchain side, I would ideally need a few things:

1. emerge --info output (to get an idea of which CPU has the problem)
2. A gdb backtrace and a disassembly of the python crash, to see what instruction it crashed on and where.
3. An extracted, self-contained source file from python and the gcc command that generates the bad instruction. This will ease bisecting gcc down to the fix, if one happened upstream.
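For item 2, something along these lines should capture what's needed; this is just a sketch, run inside the build chroot, with the paths taken from comment #0 (adjust as necessary):

  cd /var/tmp/portage/dev-lang/python-3.4.3/work/mips-gentoo-linux-uclibc
  LD_LIBRARY_PATH=$PWD gdb ./python

  (gdb) run -E -S -m sysconfig --generate-posix-vars
  # ... once the SIGILL is hit:
  (gdb) bt full                 # backtrace with locals
  (gdb) info registers          # register state at the fault
  (gdb) x/16i $pc-32            # disassemble around the faulting instruction
  (gdb) info sharedlibrary      # confirm libpython3.4.so.1.0 is the freshly built one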
uclibc support in Gentoo has been removed.