Many people have been talking on the forums about install-time estimation for the portage system, and I would like to submit LFS's solution as a possible template for a portage "time estimation" feature.
The link to LFS's explanation of the SBU (Standard Build Unit) is as follows:
In short, the portage system would have to:
1. Store how long it takes to compile the bash shell (the initial install); this becomes the system's baseline.
2. To make SBU findings consistent and easy to gather, a wrapper should be written to time ebuilds and report their installation time in SBUs.
3. Standards would have to be enforced to make sure ebuild authors include SBU times; if no one uses it, it is by default useless (pun intended).
4. A distributed SBU timing package, on the scale of gentoo-stats, could be used to report individual SBUs and therefore fine-tune the official SBU values for developers. The resulting set of SBUs could easily be included in the portage tree and synced by default along with it.
5. Once SBUs were in common use, emerge could simply multiply a system's baseline SBU time by the SBU rating of a given package, giving an approximation of how long the package(s) will take to compile.
6. Since many people modify parts of their system that affect compilation speed without recompiling bash, an "SBU drift" (think ntp drift files) may have to be implemented.
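The multiplication in step 5 can be sketched as follows. This is a minimal illustration, not anything that exists in portage; the function and variable names are hypothetical:

```python
# Sketch of SBU-based estimation (step 5 above). All names here are
# invented for illustration; nothing like this exists in portage.

def estimate_build_time(local_sbu_seconds, package_sbus):
    """local_sbu_seconds: how long this machine took to build bash.
    package_sbus: per-package SBU ratings taken from the tree."""
    return local_sbu_seconds * sum(package_sbus)

# A machine whose bash build took 240 s, installing two packages
# rated at 2.5 and 10 SBUs:
print(estimate_build_time(240, [2.5, 10.0]))  # 3000.0 seconds
```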
Or include an hour and minute field in the lines written when starting and ending an emerge
(currently the log records just the date). Like:
Started emerge on: Nov 10, 2002 03:56
*** emerge sun-jdk
>>> emerge (1 of 1) dev-java/sun-jdk-1.4.1.01 to /
>>> AUTOCLEAN: dev-java/sun-jdk
--- AUTOCLEAN: Nothing unmerged.
::: completed emerge (1 of 1) dev-java/sun-jdk-1.4.1.01 to /
*** exiting successfully.
Stop emerge on: Nov 10, 2002 03:58
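If the log carried times as in the sample above, each emerge's duration could be recovered by parsing the "Started"/"Stop" lines. A minimal sketch, assuming the exact log format shown (the function name is invented):

```python
# Sketch: recover an emerge's duration from a log that includes times,
# assuming lines formatted exactly like the sample above.
from datetime import datetime

def emerge_duration(log_text):
    fmt = "%b %d, %Y %H:%M"           # e.g. "Nov 10, 2002 03:56"
    start = stop = None
    for line in log_text.splitlines():
        if line.startswith("Started emerge on:"):
            start = datetime.strptime(line.split(":", 1)[1].strip(), fmt)
        elif line.startswith("Stop emerge on:"):
            stop = datetime.strptime(line.split(":", 1)[1].strip(), fmt)
    # Return elapsed seconds, or None if either marker is missing.
    return (stop - start).total_seconds() if start and stop else None
```

For the sample log above this would report 120 seconds (03:56 to 03:58).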
*** Bug 10805 has been marked as a duplicate of this bug. ***
*** Bug 18028 has been marked as a duplicate of this bug. ***
*** Bug 36335 has been marked as a duplicate of this bug. ***
*** Bug 36604 has been marked as a duplicate of this bug. ***
*** Bug 37887 has been marked as a duplicate of this bug. ***
A simple version of this could be implemented quite easily. The idea would be to time how long it takes to build every package on each system. That time (compile only, not including download/install) would be recorded somewhere in portage's databases (the world favorites file, maybe?). These times could be uploaded somewhere for statistics, but that's not the point. Once you have that data, when someone updates their system (emerge --update system or world), emerge could add up the times those packages took to compile previously and report that the estimated time (on that PC) would be, say, 5h02m.

If you combined these statistics with CPU data (like from gentoo-statistics), you could probably create (and maintain) very reasonable estimated times for users during their initial install, although that would take a while. But the basic idea of timing builds on individual systems and using those timings for estimates on the same systems would be easy to do and a nice feature.
The DB idea is taking it too far, IMHO. A very simple algorithm could count the total number of steps involved in an ebuild (calls in ./configure, calls to gcc, etc.) and report the percentage of those steps that have been completed. I would think a normal ebuild has many steps (it looks like about half are probing-type queries on the machine), say 1000 nominally, and a simple bar showing 'x/1000 done' would be more than adequate. This could all be done on the fly within portage, with no separate DBs or massive amounts of backend work; and of course those of us with P200s could still turn it off, 'cuz we know it'll take forever to install even the simplest ebuild.
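The step-counting idea above amounts to rendering a running x/total fraction; the step total would come from the nominal guess the comment suggests (~1000) or a prior run. A minimal sketch (all names hypothetical):

```python
# Sketch of the step-counting progress idea: given how many build steps
# have completed out of an assumed total, render a simple text bar.

def progress_bar(steps_done, steps_total, width=30):
    frac = min(steps_done / steps_total, 1.0)   # clamp if total was a guess
    filled = int(frac * width)
    return f"[{'#' * filled}{'.' * (width - filled)}] {steps_done}/{steps_total}"

print(progress_bar(250, 1000))
```

Note this deliberately reports progress, not time, which sidesteps the hardware-variability problems raised in the later comments.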
Yeah, I completely agree with comment #9. We don't need the exact time, it's just important (IMHO) to know whether we're somewhere near the beginning/middle/end of the whole process...
*** Bug 48418 has been marked as a duplicate of this bug. ***
"a very simple algorithm of counting the total number of steps involved with an ebuild (calls in ./configure, calls to gcc, etc.) and a percentage of how many of those steps have been accomplished."
What you're proposing is basically a combination of a hacked-up make binary and a hacked-up ebuild.sh. The former was tried and ultimately abandoned (check the forums if you're curious; "make progress bar", iirc).
Despite what's been stated here, there is no simple solution. You can't just assume that each compilation of a given package-version will take the same amount of time: factor in ccache, distcc, whether the USE flags match exactly, toolchain versions (gcc/linker/bash/make speed improvements or slowdowns), etc.
There is a massive number of variables that affect the run time; yes, they may be minor individually (a slightly sped-up gcc, for example), but combined they make it harder and harder to come up with a number that is actually relevant. Note this assumes the system is left alone to compile; a final solution would also have to account for loadavg, which is another huge can of worms.
Granted, you can lift the LFS approach of an SBU calculated locally, but it's not that relevant: all of the issues listed above still affect the runtime.
Going the SBU route isn't really possible in my book; I'm not much for a hacked-up make either, although it's a step up from relying on SBUs. Meanwhile, we can't really implement this, so closing it CANTFIX.
If you're after a simple 'completed phases %i/%i', that would have to wait till the API is in, in my book. If that would suffice, please reopen and mark it LATER.
*** Bug 68788 has been marked as a duplicate of this bug. ***