2.0 was added on 2020-12-14 and was stabilized on 2021-01-12; 2.0.5 was stabilized on 2021-07-02.

The maximum kernel supported by 0.8.6 is 5.9, so 5.10 LTS users can't use it; only 4.x LTS and 5.4 LTS work. The ebuild is effectively unmaintained/untested, as is the upstream branch.

I'm opening this bug in advance to gather potential feedback, if any. The current target drop date is sometime mid-Jan 2022. Speak up if you still use it and need more time.
Maybe we should mask it first, to get the message out to users, and reference this bug URL in the mask.
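For the record, a mask entry following the usual profiles/package.mask header convention might look something like this (the wording and dates here are illustrative, not the actual commit):

```
# Georgy Yakovlev <gyakovlev@gentoo.org> (2022-01-15)
# Effectively unmaintained upstream and untested in the tree;
# does not support kernels newer than 5.9. Migrate to 2.x.
# Removal on 2022-06-15.  Bug #830020.
~sys-fs/zfs-0.8.6
~sys-fs/zfs-kmod-0.8.6
```

The `~` prefix masks all revisions of that version, so a later -r bump stays covered.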
(In reply to Georgy Yakovlev from comment #1)
> Maybe we should mask it first, to get the message out to users, and
> reference this bug URL in the mask.

Yeah, "extended last rites" for that version.
The bug has been referenced in the following commit(s):

https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=6a9807315a8dc234c679f1ee176f58482be4ad2f

commit 6a9807315a8dc234c679f1ee176f58482be4ad2f
Author:     Georgy Yakovlev <gyakovlev@gentoo.org>
AuthorDate: 2022-01-15 21:43:03 +0000
Commit:     Georgy Yakovlev <gyakovlev@gentoo.org>
CommitDate: 2022-01-15 21:50:09 +0000

    profiles: mask ~sys-fs/zfs{,-kmod}-0.8.6 for removal

    Bug: https://bugs.gentoo.org/830020
    Signed-off-by: Georgy Yakovlev <gyakovlev@gentoo.org>

 profiles/package.mask | 11 +++++++++++
 1 file changed, 11 insertions(+)
ZFS is one of those beasts that just works... :-) It looks like it's time for me to migrate to the newer version.

Is there a migration doc somewhere? As in upgrading from ZFS v0.8.4 to ZFS v2.0.5?

The only relevant info I seem to be able to pull up is this:

https://wiki.gentoo.org/wiki/ZFS#Advanced

"""
This example uses 0.8.4, but just change it to the latest ~ or stable (when that happens) and you should be good. The only issue you may run into is having zfs and zfs-kmod out of sync with each other - avoid that.
"""

Are there any configuration options in my pools and datasets I need to check first?
(In reply to Jason Cooper from comment #4)
> Is there a migration doc somewhere? As in upgrading from ZFS v0.8.4 to ZFS
> v2.0.5?
>
> Are there any configuration options in my pools and datasets I need to check
> first?

Like you mentioned, it should just work; multiple users have migrated with no issues. The exact steps depend on your particular configuration, but there really should be no surprises. I can't help with exact options, though, it really depends.

Do a backup, and consider making a zpool checkpoint first:
https://openzfs.github.io/openzfs-docs/man/8/zpool-checkpoint.8.html

Install the new version, reboot, and test. If any issues arise, you should be able to go back to the previous version just fine, as long as you DON'T upgrade your pool with 'zpool upgrade'. Leave the old kernel bootable in your bootloader, just in case.

Check 'man zpool-upgrade' and defer the pool upgrade until you are satisfied with the new version:
https://openzfs.github.io/openzfs-docs/man/8/zpool-upgrade.8.html

Once you're confident you'll remain on 2.x.x, remove the checkpoint and do a 'zpool upgrade'.
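Condensed into commands, that sequence might look roughly like this (a sketch only; the pool name "tank" is a placeholder, and the emerge step assumes you have already keyworded/unmasked 2.x):

```shell
# Safety net first: a pool-wide rewind point you can roll back to.
zpool checkpoint tank

# Install the 2.x userland and kernel module together, then reboot
# into the new kernel/modules.
emerge --ask --oneshot sys-fs/zfs sys-fs/zfs-kmod

# After rebooting, verify everything looks sane before going further.
zpool status tank
zfs list

# Only once you are satisfied on 2.x.x (both steps are one-way):
zpool checkpoint -d tank   # discard the rewind point
zpool upgrade tank         # enable the new feature flags
```

Until `zpool upgrade` is run, the pool stays importable by the old 0.8.x modules, which is what makes the rollback path work.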
2.x has serious data-loss bugs. Please keep 0.8 around for those of us who are willing to keep running 4.19 LTS to avoid those issues.

See also: https://github.com/openzfs/zfs/issues/12007
(In reply to Luke-Jr from comment #6)
> 2.x has serious data-loss bugs. Please keep 0.8 around for those of us who
> are willing to keep running 4.19 LTS to avoid those issues.
>
> See also: https://github.com/openzfs/zfs/issues/12007

I will not touch it at all until June 2022, and will re-assess at that time. My main motivation for getting it out is that it's untested, neither by upstream nor by the ebuild maintainers.
(In reply to Georgy Yakovlev from comment #7)
> My main motivation for getting it out is that it's untested, neither by
> upstream nor by the ebuild maintainers.

Untested how? It was stable not long ago. Presumably it was tested then? Things don't become untested. Unmaintained, yes, but that is IMO preferable over poorly maintained (i.e., known major bugs).
Untested in this context means: not tested (build or runtime) against the new kernel patches that come out weekly.

And yes, I was directly using 0.8.6 on a 5.4 kernel until I masked it. Now all my machines have migrated to 2.x, and I no longer have the means (or a strong desire) to test 0.8.6, so if new issues arise I won't be able to catch them.

Most distros that ship zfs have also moved on to 2.x.x; that's where all the testing happens and where the bugs are found and fixed. 0.8.6 is unmaintained, does not receive fixes, and is not tested. I honestly think that's much worse than the 2.x.x situation, which you seem to have configuration-specific issues with.
(In reply to Georgy Yakovlev from comment #9)
> Most distros that ship zfs have also moved on to 2.x.x; that's where all the
> testing happens and where the bugs are found and fixed.

Clearly not all of them are fixed, as I already cited a data-loss bug that remains unfixed.

> 0.8.6 is unmaintained, does not receive fixes, and is not tested.
> I honestly think that's much worse than the 2.x.x situation, which you seem
> to have configuration-specific issues with.

Unclean removal isn't a "configuration-specific issue", but a standard risk that filesystems are supposed to protect against. Even ext2 without a journal recovers more cleanly than ZFS 2.x in this case.
(In reply to Luke-Jr from comment #10)
> Unclean removal isn't a "configuration-specific issue", but a standard risk
> that filesystems are supposed to protect against. Even ext2 without a
> journal recovers more cleanly than ZFS 2.x in this case.

Agreed, but zfs->luks->usb is not a reliable chain; that's what I meant. Below the zfs layer you are at the mercy of LUKS, which should be OK (LUKS is a solid and proven solution), but also at the mercy of the USB storage layer, which is known to have issues regardless of the fs used on top.

Maybe if you could reproduce the bug with a simpler storage setup, or even on loop files, it would attract more action from upstream.
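A loop-file reproduction along those lines might look something like the sketch below. The pool and file names are hypothetical; note that 'zpool offline -f' forces a faulted state, which only approximates a sudden unplug, and ZFS may refuse to fault the last healthy vdev, in which case a mirror with one faulted leg, or zinject(8), may be needed instead. Run as root on a throwaway pool only.

```shell
# Create a sparse backing file and a disposable pool on it.
truncate -s 1G /var/tmp/vdev.img
zpool create testpool /var/tmp/vdev.img

# Generate some in-flight writes...
dd if=/dev/urandom of=/testpool/junk bs=1M count=64 &

# ...and simulate the device failing underneath the pool.
zpool offline -f testpool /var/tmp/vdev.img

# Bring the device back and check whether the pool recovers cleanly.
zpool online testpool /var/tmp/vdev.img
zpool status -v testpool

# Clean up.
zpool destroy testpool
rm /var/tmp/vdev.img
```

A recipe like this, attached to the upstream issue, removes LUKS and the USB stack from the picture entirely, which is what makes it easier for upstream to act on.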
(In reply to Georgy Yakovlev from comment #11)
> Agreed, but zfs->luks->usb is not a reliable chain; that's what I meant.

It should be. But even if not, that's beside the point: a filesystem still shouldn't lose data just because the drive is removed suddenly.

> but also at the mercy of the USB storage layer, which is known to have
> issues regardless of the fs used on top.

Really? :/

> Maybe if you could reproduce the bug with a simpler storage setup, or even
> on loop files, it would attract more action from upstream.

Yes, maybe. But not doing so doesn't mean I want to risk my data unnecessarily...
(In reply to Luke-Jr from comment #12)
> It should be. But even if not, that's beside the point: a filesystem still
> shouldn't lose data just because the drive is removed suddenly.

All software should be perfect and bug-free, but it ain't; that's a simple fact, even for zfs. By introducing more complexity and more layers under zfs, you expose yourself to more edge cases. I don't disagree: it should work, or at least not crash and eat your data. I just don't understand why you are using an unreliable configuration and expecting reliable operation.

> > but also at the mercy of the USB storage layer, which is known to have
> > issues regardless of the fs used on top.
>
> Really? :/

Yes, there are many problems with it. The usb->sata converters used in USB drives can and do have bugs too. Even the USB-stick stall problem is still around in 2022 (who would have thought):
https://lwn.net/Articles/572911/

> Yes, maybe. But not doing so doesn't mean I want to risk my data
> unnecessarily...

You don't have to use real data, but making an effort to isolate the problem would be useful, instead of blaming upstream for buggy releases and bad maintenance, and demanding that old versions be kept in the tree, all on our time, to support your weird use case.
(In reply to Georgy Yakovlev from comment #13)
> I just don't understand why you are using an unreliable configuration and
> expecting reliable operation.

We're getting a bit off-topic here, but I would like to know what you would suggest instead? The drives are USB; there is no other way to attach them (without creating warranty issues). Unfortunately, USB drives are much cheaper and have longer warranties than SATA models. And in any case, I already have them; it would cost quite a large chunk to replace 60+ TB of storage.

> to support your weird use case.

I don't consider USB storage to be a weird use case. It's a fairly common standard for external storage.
Let's stop spamming the bug; I don't see the dialogue going anywhere further, and no mutual understanding.

On-topic: the 0.8.6 situation will be re-assessed before removal and the decision adjusted as needed. End of discussion on this case, but of course we are open to hearing other cases against removal.
The bug has been closed via the following commit(s):

https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=d6c7622e7e719bb30939303a530806bbf788a315

commit d6c7622e7e719bb30939303a530806bbf788a315
Author:     Georgy Yakovlev <gyakovlev@gentoo.org>
AuthorDate: 2022-06-07 18:45:26 +0000
Commit:     Georgy Yakovlev <gyakovlev@gentoo.org>
CommitDate: 2022-06-07 18:45:26 +0000

    profiles: remove zfs-0.8.x mask

    Closes: https://bugs.gentoo.org/830020
    Signed-off-by: Georgy Yakovlev <gyakovlev@gentoo.org>

 profiles/package.mask | 10 ----------
 1 file changed, 10 deletions(-)

Additionally, it has been referenced in the following commit(s):

https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=d63ee4d2e784fceca7406466a7085df61c48f5ba

commit d63ee4d2e784fceca7406466a7085df61c48f5ba
Author:     Georgy Yakovlev <gyakovlev@gentoo.org>
AuthorDate: 2022-06-07 18:44:53 +0000
Commit:     Georgy Yakovlev <gyakovlev@gentoo.org>
CommitDate: 2022-06-07 18:44:53 +0000

    sys-fs/zfs-kmod: drop 0.8.6

    Bug: https://bugs.gentoo.org/830020
    Signed-off-by: Georgy Yakovlev <gyakovlev@gentoo.org>

 sys-fs/zfs-kmod/Manifest                       |   1 -
 sys-fs/zfs-kmod/files/0.8.6-copy-builtin.patch |  27 ----
 sys-fs/zfs-kmod/zfs-kmod-0.8.6.ebuild          | 212 -------------------------
 3 files changed, 240 deletions(-)

https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=24e2149eb5cd99e6628e57c2672be30dbb68e035

commit 24e2149eb5cd99e6628e57c2672be30dbb68e035
Author:     Georgy Yakovlev <gyakovlev@gentoo.org>
AuthorDate: 2022-06-07 18:44:03 +0000
Commit:     Georgy Yakovlev <gyakovlev@gentoo.org>
CommitDate: 2022-06-07 18:44:03 +0000

    sys-fs/zfs: drop 0.8.6-r2

    Bug: https://bugs.gentoo.org/830020
    Signed-off-by: Georgy Yakovlev <gyakovlev@gentoo.org>

 sys-fs/zfs/Manifest            |   1 -
 sys-fs/zfs/zfs-0.8.6-r2.ebuild | 245 -----------------------------------------
 2 files changed, 246 deletions(-)