This may be slightly beyond the realm of what is considered reasonable (I apologize for that), but it's been brought to my attention that it's not possible to simply unmerge gcc from the standard distro without breaking C++ programs and others, since the runtime libraries they link against are unmerged along with it.
The reason I wished to do this: I was creating a custom install for an ancient laptop (compiled on a separate box, naturally) and I needed to reduce the space the install would take up. Compiling on this laptop is not feasible due to its speed, so leaving gcc out seemed a good choice.
In short, I'd like to request that building gcc install a separate package for the libraries currently bundled with gcc. Is this even possible right now, or is it simply an issue of no concern (not useful to almost anyone)?
Any feedback is appreciated. Thanks.
Steps to Reproduce:
1. Build system.
2. emerge unmerge gcc
3. Watch the system become incapable of running C++ binaries.
The installation becomes incapable of running C++ binaries.
This was the expected result; I'd like it to change.
Good point. If it worked somewhat transparently with "remote-only" distcc, it would be cool...
It would take twice the time it does now, besides getting hairy (gcc needs its own libs to compile anything) ... Really only feasible if you use binary packages, if you ask me.
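For what it's worth, the binary-package route can be sketched today without any virtual: compile on the fast box, install from binaries on the laptop. The hostnames, mount point, and package name below are purely illustrative:

```shell
# On the fast build box: compile everything into binary packages.
# emerge --buildpkg world

# /etc/make.conf on the laptop (snippet) -- 'fastbox' is a placeholder.
FEATURES="distcc"
DISTCC_HOSTS="fastbox/8"        # remote-only: no localhost entry
PKGDIR="/mnt/fastbox/packages"  # e.g. NFS-mounted package directory

# Then install from binaries only, never invoking a local compiler:
# emerge --usepkgonly <package>
```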
The problem would be (in my eyes) that all packages in the tree would need to replace their DEPEND and RDEPEND with appropriate references to a virtual/gcc, which at the current point in time is just clouds in the sky.
We would have to consider at least the following objectives to be able to implement that:
1) On some arches, special gcc versions are available for building kernels (at least on sparc and hppa, to my knowledge).
2) Certain versions of glibc and binutils need to depend on specific gcc versions to build, and vice versa.
3) Cross-compiler support (and SLOTted versions).
4) Differently flavoured toolchains:
- embedded, uclibc and similar trimmed-down environments
- bounds-checking patches for gcc, used mainly in testbeds
- hardened patches for security enhancements
If these problems can all be addressed by a virtual/gcc, it would probably mean we end up with about 10 to 20 versions of such a virtual.
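For concreteness, one SLOT of such a virtual might look something like this (the package names and the ebuild body are pure speculation on my part, not anything in the tree):

```shell
# virtual/gcc-3.4.ebuild -- hypothetical sketch only
DESCRIPTION="Virtual for the system C/C++ compiler, SLOTted per gcc branch"
SLOT="3.4"

# Any one provider satisfies the virtual: plain gcc, a hardened
# flavour, or (for remote-only compiling) distcc alone.
RDEPEND="|| (
	=sys-devel/gcc-3.4*
	=sys-devel/hardened-gcc-3.4*
	net-misc/distcc
)"
```

Multiply that by the kernel-compiler, cross-compiler, and embedded variants above and you quickly reach the 10 to 20 versions mentioned.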
However, if using such a virtual would pave the way for a worthy area of interest, I think this bug could be the beginning of a major development:
distributed compiling (distcc) and remote compiling (no local compiler):
using multiple machines to compile via a virtual/gcc satisfied by distcc, and also allowing virtual/gcc to be satisfied by a remote installation of gcc and/or distcc.
Bootstrapping a local installation completely using remotely available computing power also comes to mind.
Also worth thinking about: native support for cross-compiling code for embedded arches that do not have a full instance of the compiler and build environment on the slimmed-down Gentoo system installed on the embedded device.
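A rough sketch of that workflow, assuming crossdev-style tooling on the build host (the target triplet and paths are illustrative, and the commands are shown commented out since they only make sense on a configured Gentoo build host):

```shell
# On the fast build host: create a cross toolchain for the embedded target.
# crossdev --target arm-softfloat-linux-uclibc

# Build binary packages into the target's root -- the device itself
# never needs gcc installed.
# ROOT=/usr/arm-softfloat-linux-uclibc emerge --buildpkg <package>

# Copy the resulting packages over and unpack them on the device, or
# install with emerge --usepkgonly if portage is present there.
```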
CANTFIX until we have a fully working cross-compiler toolchain.