| Summary: | Odd problem with sys-fs/lvm2-2.02.03 | | |
|---|---|---|---|
| Product: | Gentoo Linux | Reporter: | Matteo Sasso <matteo.sasso> |
| Component: | [OLD] Core system | Assignee: | Eric Edgar (RETIRED) <rocket> |
| Status: | RESOLVED FIXED | | |
| Severity: | normal | CC: | che |
| Priority: | High | Keywords: | InVCS |
| Version: | unspecified | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Package list: | | Runtime testing required: | --- |
Description
Matteo Sasso
2006-04-20 10:43:07 UTC
Was this message on boot? Can you try using 2.02.04 and see if it has the same issue?

Yes, they're shown when init tries to activate (again) the LVs. I say "again" because the LVs are activated by a custom initrd using an older version of lvm2. I also discovered that I cannot activate them using the newer version! A "vgchange -ay", besides the aforementioned warnings, reads: 0 logical volume(s) in volume group "backup" now active. Thanks to the initrd, my system can still boot! ;)

OK, so it sounds like lvm probably needs to be bumped to a newer version in genkernel as well; the older version of lvm2 seems to be interfering with the newer one. Another thing to try is recreating your initrd/initramfs with genkernel: it should pick up the new static binaries as long as you compiled the lvm2 ebuild without the nolvmstatic USE flag. That will get the two versions of lvm2 back in sync.

OK, I emerged the new version (2.02.04) but it doesn't fix the problem. I'm afraid my LVs' metadata have somehow fallen out of sync... They've been through many different lvm2 versions. I also discovered that I cannot deactivate volumes anymore (e.g. in order to remove snapshots):

    WARNING: Duplicate VG name backup: Existing q4SOpG-9Vwx-GtW3-zS7c-bIS0-3v1H-wsZ9Up (created here) takes precedence over q4SOpG-9Vwx-GtW3-zS7c-bIS0-3v1H-wsZ9Up
    Volume group for uuid not found: Yxbak3rpy6tVAP00VrS0L0IVObhV16QfiDaSDnDi7W2hV3nv55jHZMpqHUcny5MN

Now it gives me two identical IDs for the warning and a different one for the error.
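The duplicate-VG-name symptom described above can usually be confirmed with LVM's reporting commands before attempting any repair. A minimal diagnostic sketch (the VG name `backup` comes from the report; everything else is illustrative):

```shell
# Run as root. List every VG with its UUID; a name collision
# shows up as two rows sharing the same vg_name:
vgs -o vg_name,vg_uuid,vg_attr

# Show which physical volumes claim membership of each VG,
# to locate the stale copy of the metadata:
pvs -o pv_name,vg_name,vg_uuid

# Re-scan and retry activation verbosely, to see which
# metadata copy lvm actually picks:
vgscan -v
vgchange -ay -v backup
```

These are read-only (or activation-only) operations, so they are a safe first step before touching on-disk metadata.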
This might be a point where using the vgcfgrestore utility would be useful to restore your metadata to your disks. This *shouldn't* impact your data, but I cannot guarantee that it won't. If possible, take backups before you attempt it, and also try the vgcreate/vgcfgbackup/vgcfgrestore procedure on a spare disk first.

OK, I think I solved the problem by juggling the two versions of lvm and renaming the VG which was causing the warning. I still cannot "change" (activate/deactivate) LVs, but I think that issue is a bit off-topic. Should I open another bug and close this one? WONTFIX, WORKSFORME, ...?

Close this one as WONTFIX, and we will see where you are on the other volume issue; you may just have to do a repair on it via the vgcfgrestore tool.

WONTFIX, as suggested by Eric. Thanks! :)

OK, this seems to be an issue with upstream. I will mask these two ebuilds and see if I can't get upstream to give a reason as to why this is happening. 2.02.03 and 2.02.04 appear to have the same issue.

This bug is fixed and should be resolved in lvm2-2.02.04-r1 and higher. Upstream has the patch already applied and this ebuild backports it. All data on your disk should be fine, and lvm will continue to work as expected once you upgrade to this ebuild.

2.02.05 is out, and 2.02.04-r1 fixed the issue as well.

Yep, 2.02.04-r1 is working for me. It even fixed another problem, where the whole system would hang if I tried to lvremove an active snapshot LV (and couldn't lvchange it to deactivate it). Thanks again, Eric!
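The vgcfgbackup/vgcfgrestore procedure suggested above, plus the VG rename used as a workaround, could look roughly like this. This is a hedged sketch, not the exact commands from the report: the file path and the new VG name are assumptions.

```shell
# Run as root, and only after taking raw backups of the affected disks.

# 1. Dump the current VG metadata to a plain-text file:
vgcfgbackup -f /root/backup-vg.conf backup

# 2. Inspect (and, if needed, hand-edit) /root/backup-vg.conf,
#    then write it back to the PV metadata areas:
vgcfgrestore -f /root/backup-vg.conf backup

# 3. The reporter's workaround: rename the VG that triggered the
#    duplicate-name warning so the two names no longer collide:
vgrename backup backup_old
```

Because vgcfgrestore rewrites on-disk metadata, rehearsing the sequence on a spare disk first, as advised in the thread, is the safer path.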