I ran some tests to speed up compilation. For this, I played with a package that I know takes some time to build, but not too much: bash. I "know" that bash takes "some time" to compile, more than some packages (wget, curl) and less than others (vlc), but I don't know how much more or less.
I think providing a way to know how long a compilation will take would be a good idea. Not in actual time, since that depends on many things, but for example in "stars", or a range from 1 to 10.
This rating could also give an idea of how long an emerge -DNuv world will approximately take (so I can decide whether to just make a tea, or whether I have time to make some pasta while upgrading :) )
What do you think? Is it (easily) feasible?
Use either genlop from app-portage/genlop or qlop from app-portage/portage-utils. Both tools parse the emerge logs to estimate build times. Genlop can also query a remote database (http://gentoo.linuxhowtos.org/compiletimeestimator/) if a package has never been emerged on the system before.
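To illustrate the idea behind both tools, here is a minimal sketch of the log parsing they perform: match the "started" and "completed" merge entries by package and subtract the Unix timestamps. The line format mirrors /var/log/emerge.log; the sample log below is made up for illustration, not taken from a real system.

```python
import re

# Illustrative sample in the style of /var/log/emerge.log (timestamps invented).
SAMPLE_LOG = """\
1681000000:  >>> emerge (1 of 2) app-shells/bash-5.1_p16 to /
1681000242:  ::: completed emerge (1 of 2) app-shells/bash-5.1_p16 to /
1681000243:  >>> emerge (2 of 2) net-misc/wget-1.21.3 to /
1681000322:  ::: completed emerge (2 of 2) net-misc/wget-1.21.3 to /
"""

START = re.compile(r"^(\d+):\s+>>> emerge \(\d+ of \d+\) (\S+) to /")
END = re.compile(r"^(\d+):\s+::: completed emerge \(\d+ of \d+\) (\S+) to /")

def build_times(log_text):
    """Return {package: build seconds} for each completed merge in the log."""
    starts, durations = {}, {}
    for line in log_text.splitlines():
        if m := START.match(line):
            # Remember when this package's merge began.
            starts[m.group(2)] = int(m.group(1))
        elif (m := END.match(line)) and m.group(2) in starts:
            # Completed entry: duration is end minus matching start.
            durations[m.group(2)] = int(m.group(1)) - starts.pop(m.group(2))
    return durations

print(build_times(SAMPLE_LOG))
# → {'app-shells/bash-5.1_p16': 242, 'net-misc/wget-1.21.3': 79}
```

Averaging these durations over past merges of the same package is how such tools predict the time for the next merge, which is also why they need at least one prior emerge (or a remote database) to say anything.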
This is pretty much impossible. So, 1 minute to compile bash on my 8-core box is fast, but what about on my 600 MHz laptop? What about the other arches? What about your weird CFLAGS setting that makes compilation take 3x longer? What about ccache? Catch my drift here?
Resolving as CANTFIX because there are too many variables that make this request infeasible. Sorry. If you have a better idea, it would be more appropriate to bring it up on the respective mailing list(s).