Summary: running out of space during merge leaves unrecorded files in file system
Product: Portage Development
Component: Core
Status: CONFIRMED
Severity: normal
Priority: Normal
Version: unspecified
Hardware: All
OS: Linux
Reporter: Justin Lecher (RETIRED) <jlec>
Assignee: Portage team <dev-portage>
CC: abandonedaccountubdprczb8hs, alex_y_xu, aoaaxy+gentoobugzilla, esigra, n-roeser, phajdan.jr, zazdxscf+bugs.gentoo.org
Whiteboard:
Package list:
Runtime testing required: ---
Bug Depends on:
Bug Blocks: 23851
Description (Justin Lecher (RETIRED), 2013-01-09 10:07:35 UTC)
I guess we could try to reverse the merge process when we encounter ENOSPC.

---

I thought this would work around it:

  # time ebuild /usr/portage/www-client/firefox/firefox-37.0.1.ebuild qmerge --force

but --force is actually for the manifest only, according to `man ebuild`. Maybe ebuild could ignore the existing files if $I_KNOW_WHAT_I_AM_DOING is set? Or is there a way to tell it to ignore the fact that those files exist, but only if no package owns them? (Because they're left over from the previous merge copy operation, which failed with 'no space left on device', as OP states.)

---

(In reply to EmanueL Czirai from comment #2)

No, we should not be eliding random files based on random env vars. The merge step must fully succeed or it must be rejected.

The ENOSPC case is always hard to handle, as calculating free space before merging does not guarantee that the space continues to exist as you merge :/. Making hard links of all files in the original package, and then removing those only once the merge has finished, would allow for rollback, but it would also increase the amount of free space required beyond what might otherwise be needed (since you're not reclaiming the space as you merge).

Any other ideas on how to implement this safely?

---

(In reply to SpanKY from comment #3)
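The hardlink-based rollback floated in comment #3 could be sketched roughly as follows. This is a toy, flat-directory sketch under assumptions of my own, not Portage code: all names here are made up, and a real merge would also have to recurse into subdirectories, preserve symlinks and ownership, and handle config protection. It shows the core trick: a hard link pins the old inode at zero data-block cost, so the originals survive until the merge either finishes or is rolled back.

```python
import os
import shutil
import tempfile

def merge_with_rollback(image_dir, dest_dir):
    """Sketch: before overwriting each existing file, hard-link it into a
    backup dir on the same filesystem (links consume no data blocks).
    On any OSError (e.g. ENOSPC mid-merge), remove newly created files
    and restore every replaced file from its backup link."""
    backup_root = tempfile.mkdtemp(dir=dest_dir)  # same fs, so links work
    backups = {}   # dest path -> backup link path
    created = []   # dest paths that did not exist before the merge
    try:
        for name in sorted(os.listdir(image_dir)):
            src = os.path.join(image_dir, name)
            dest = os.path.join(dest_dir, name)
            if os.path.lexists(dest):
                bak = os.path.join(backup_root, name)
                os.link(dest, bak)   # keep the old inode alive
                os.unlink(dest)      # so the copy cannot clobber the backup
                backups[dest] = bak
            else:
                created.append(dest)
            shutil.copy2(src, dest)  # may raise ENOSPC partway through
    except OSError:
        for dest in created:
            if os.path.lexists(dest):
                os.unlink(dest)      # drop half-merged new files
        for dest, bak in backups.items():
            os.replace(bak, dest)    # restore the original inode
        shutil.rmtree(backup_root, ignore_errors=True)
        raise
    shutil.rmtree(backup_root)       # success: finally release old copies
```

Note the space tradeoff comment #3 points out: the old files' blocks stay pinned by the backup links until the very end, so nothing is reclaimed while the merge runs.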
It all depends on how you want the system to end up when you're done failing, and what compromises you're willing to make. Options:

0. delete and create
1. create hard links to ${D}
2. create side-by-side temp files and move into place
3. extend and overwrite

0 is the current solution, where the failure case is a mix of old and new files.

1 will not work if $PORTAGE_TMPDIR is on a tmpfs or any other different filesystem, but is otherwise the best solution, since it requires no extra space. The failure case (e.g. if the merge is terminated abnormally) is a mix of old and new files, but is unlikely because moving hard links is fast.

2 requires extra space equal to min(sum(old files), sum(new files)) and leaves temp files in case of failure that need to be cleaned up later. The failure case is a mix of old and new files, but is unlikely because moving files is fast.

3 will not work with executable files. The worst-case failure is files that are mixes of old and new, which hypothetically could lead to security problems (e.g. imagine the file is /usr/bin/sshd, the old data was [JE ...] and the new data is [... JNE]; then you will have [... JNE^WJE ...]).

There are probably other ideas, probably even worse than these. The best first step is to include a simple check that the current free space is greater than or equal to the sum of the file sizes of the package (assuming this is not yet done).

---

(In reply to Alex Xu (Hello71) from comment #4)

We do not do (0). We first attempt the atomic rename from $D to $ROOT, and if that fails, we fall back to (2). But we do this on a per-file, not per-package, basis.

The fact that there's a mix of old and new files is not unique to any of the methods you describe. The only way to avoid that would be to get FS support for transactions. Otherwise, we take the risk that the $D->$ROOT merge is fast or not critical enough to make things blow up. This is the status quo and I don't think we need to discuss that here.
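The per-file behaviour described in comment #5 (try the atomic rename first, fall back to a side-by-side temp copy moved into place) might look something like this sketch. The function name and temp suffix are hypothetical, not Portage's actual implementation; the point is that a reader never observes a half-written destination file, because the destination only ever changes via an atomic rename/replace.

```python
import os
import shutil

def install_file(src, dest):
    """Sketch: move one file from the image ($D) into the live tree.
    os.rename() is atomic when src and dest share a filesystem; when
    they don't (EXDEV), copy beside dest and atomically swap it in."""
    try:
        os.rename(src, dest)              # atomic on the same filesystem
    except OSError:
        tmp = dest + ".__new__"           # hypothetical temp-file suffix
        try:
            shutil.copy2(src, tmp)        # ENOSPC would surface here,
        except OSError:                   # leaving dest untouched
            if os.path.lexists(tmp):
                os.unlink(tmp)            # clean up the partial temp file
            raise
        os.replace(tmp, dest)             # atomic swap into place
        os.unlink(src)
```

This matches Alex Xu's option 2 in the fallback path: the extra space needed is only the size of the one file currently in flight, but a crash between copies still leaves a mix of old and new files across the package.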
Your summary pretty much matches what I posted. Our options are:

- it works: we never leave the system in an inconsistent state
- it sometimes fails, in which case we reserve the right to make it an implementation detail, and we mark this bug as WONTFIX

If no one can think of a method that doesn't require free space matching `df $D`, then let's implement that and be done.

*** Bug 345645 has been marked as a duplicate of this bug. ***
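The advisory free-space precheck suggested in comment #4 (compare the package's total file size against the headroom `df $D` would report for the target filesystem) could be as small as the following sketch. This is a hypothetical helper, POSIX-only, and as the thread notes it is only a hint: free space can still vanish while the merge runs.

```python
import os

def enough_free_space(image_dir, target_dir):
    """Sketch: return True if the target filesystem currently has at
    least as much free space as the total size of the files about to
    be merged from image_dir. Advisory only; it cannot guarantee the
    space is still there mid-merge."""
    needed = 0
    for root, _dirs, files in os.walk(image_dir):
        for name in files:
            needed += os.lstat(os.path.join(root, name)).st_size
    st = os.statvfs(target_dir)           # POSIX: like df, per-fs stats
    return st.f_bavail * st.f_frsize >= needed
```

A check like this would at least turn the common ENOSPC case into an up-front refusal instead of a half-merged package, without addressing the race that keeps the bug open.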