Right now, if you use 'emerge --jobs' (or otherwise run emerge in parallel), the effective number of make (ninja, etc.) jobs spawned can grow multiplicatively, up to emerge jobs * make jobs. This is likely to be both inefficient and to eat lots of memory.
I think a simple way to work around that would be to have 'locking' wrappers around CC, CXX and other commands known to consume lots of memory and to be run in parallel. These wrappers would block whenever the total number of concurrent invocations exceeds the permitted number (defaulting to -j from MAKEOPTS).
Well, I see three options for implementing this.
An obvious solution would be to use POSIX named semaphores. They have exactly the semantics we need and should be efficient. However, they do not release locks (post the semaphore) if the process crashes, so a single crashed compiler could leave all subsequent jobs blocked forever. Adding a 'reaper' for stale slots would probably make this more complex than the alternatives.
A trivial option would be to use lockfiles. Basically, we create one lockfile per allowed job and try to lock each of them non-blocking in a polling loop. The disadvantage is that polling introduces arbitrary delays while waiting for a lock, and I don't see a good way to avoid that without either adding complexity or trading locking latency against CPU usage.
Finally, we could use a client-server layout, with the server started by the first emerge process and its clients requesting semaphore locks via a UNIX socket. This has the advantage that we can use socket connections to release resources automatically (a closed connection means a freed slot, whether the client exited cleanly or crashed), but it is probably the most complex of the three solutions.