Create a script, cron job, and HTML front page for archiving the Planet / Universe pages. Just fetch / parse the HTML once a day, dump it to an archive, and have a cron job delete older copies. How long to keep the backlog is debatable (weeks? months?), as is how often to archive (daily or weekly).
And I just remembered we don't need to parse it, because it's already static HTML to begin with. So just copy it to $(date +%F).html nightly.
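The nightly copy-and-prune step could look roughly like this. This is a runnable demo using temp directories; on the real server SRC and DEST would be the Planet docroot and the archives/ directory (those paths, and the 90-day retention, are assumptions, since the bug leaves retention open):

```shell
#!/bin/sh
# Demo of the nightly archive step. SRC/DEST use mktemp here so the
# sketch runs standalone; real values are assumptions, not the server layout.
SRC=$(mktemp -d)/index.html
DEST=$(mktemp -d)
echo '<html>today</html>' > "$SRC"

# What the cron job does: copy the static page to a dated filename,
# e.g. archives/2006-05-14.html ...
cp "$SRC" "$DEST/$(date +%F).html"

# ... and prune snapshots older than ~90 days (retention period still TBD).
find "$DEST" -name '*.html' -mtime +90 -delete
```

Run from cron as a single nightly entry, e.g. `0 4 * * * /usr/local/bin/planet-archive.sh`.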
Modified the cron job to copy to archives/; now I just need to generate an index page.
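Generating the index page can be as simple as listing the dated snapshots, newest first. A minimal sketch (the archives/ path is assumed, and the two touched files are just demo data so this runs standalone):

```shell
#!/bin/sh
# Sketch: build archives/index.html with one link per dated snapshot.
DEST=$(mktemp -d)   # stands in for the real archives/ directory
touch "$DEST/2006-05-13.html" "$DEST/2006-05-14.html"   # demo snapshots

{
  echo '<html><head><title>Planet archives</title></head><body><ul>'
  # ls -r sorts the YYYY-MM-DD names in reverse, i.e. newest first;
  # the glob deliberately skips index.html itself.
  for f in $(ls -r "$DEST"/????-??-??.html 2>/dev/null); do
    name=$(basename "$f")
    echo "<li><a href=\"$name\">${name%.html}</a></li>"
  done
  echo '</ul></body></html>'
} > "$DEST/index.html"
```

The same script could be appended to the nightly cron job so the index is regenerated right after each copy.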
http://planet.gentoo.org/archives/ http://planet.gentoo.org/universe/archives/
So are we done with this bug then?
I think I might add a google search box
(In reply to comment #5)
> I think I might add a google search box

Will you?
apparently not :)