The portage tree allows lvm2 2.00.08 to be installed, but the upstream developer himself says this version is not stable: http://www.redhat.com/archives/linux-lvm/2004-December/msg00128.html Yet the package is not masked at all. Why is it even in the tree?
I have had numerous problems with 2.00.08 on amd64, including race conditions and segfaults from pvscan, which is run by the init scripts. This problem has occurred on at least 3 of my amd64 systems. 2.00.25 seems to be stable, so the first thing I have to do on a new install is add "sys-fs/lvm2 ~amd64" to /etc/portage/package.keywords just to get LVM working. Please mark 2.00.25 as stable for amd64.
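For anyone else hitting this before a stable keyword lands, the workaround above is a one-line keyword entry; a minimal sketch (the paths are the standard Portage ones, and the `--oneshot` rebuild step is my own suggestion, not something prescribed in this bug):

```shell
# Accept the ~amd64 (testing) keyword for lvm2 only, leaving the rest
# of the system on stable keywords.
mkdir -p /etc/portage
echo 'sys-fs/lvm2 ~amd64' >> /etc/portage/package.keywords

# Then pull in the newer lvm2 without adding it to the world file:
emerge --oneshot sys-fs/lvm2
```

This keeps the keyword change scoped to the single package instead of switching the whole system to ~amd64 via ACCEPT_KEYWORDS.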
Yes, Gentoo doesn't have a working lvm2. Ideally, 2.00.32 should be added.
Since the issue can be resolved by merely adjusting the keywords on the package, perhaps the amd64 folks can at least correct this for amd64...
I don't want to mark it stable before the maintaining arch does, so please hang on a while ;)
Ok, I've hung on for a month... This situation is very confusing for users. Can you give us an idea of how much longer the stable package is going to be broken, with an "unstable" package required for a working system?
I agree; it should have been marked stable when the bug was originally posted. It's kind of silly that Gentoo doesn't support LVM.
lvm2 2.00.08 works fine on all of my x86 and amd64 boxes. No problems at all. So, it's not completely broken.
The developer didn't say it was totally non-functional, only unstable.
This is stable on x86. Waiting for amd64 testing.
This works fine for me on amd64, but I might not be a good test case, because 08 worked fine for me too.
Here is a copy of a message I posted to the "gentoo-amd64" mailing list:

-----BEGIN-----
I'm new to Gentoo. I installed from the "LiveCD 2004.3". While in the "install chroot", I laid out the new system partitions as follows:

3 software RAID (level 1) devices:

  mdadm --create /dev/md0 -l 1 -n 2 /dev/hda9 /dev/sda9
  mdadm --create /dev/md1 -l 1 -n 2 /dev/hda8 /dev/sda8
  mdadm --create /dev/md2 -l 1 -n 2 /dev/hda5 /dev/sda5

  /dev/md2   /      reiserfs
  /dev/md1   PV assigned to VG "system"
  /dev/md0   PV assigned to VG "data"

Logical volumes:

  /dev/system/gentoo0   /usr   reiserfs
  /dev/system/gentoo1   /var   reiserfs
  /dev/system/gentoo2   /tmp   ext2

So the *creation* of the LVM devices went fine.

The first glitch: each time I reboot (from the CD) and have to mount the LVs (after assembling the RAIDs), the first call to "lvdisplay" produces a segfault, after which a second call correctly outputs the list of LVs.

Second, when rebooting, I get the following message:

  Fail to shut LVM down.

Now the big problem: booting from the HDDs, I'm dragged into "maintenance" mode shortly after the following message is printed:

  Setting up the Logical Volume Manager...
  /sbin/rc: line 415: Segmentation fault  /sbin/vgscan >/dev/null

The RAIDs are all OK, but none of the LVs are: they are all "NOT available". So I enter the command:

  lvchange -a y /dev/system/gentoo0

which produces 2 errors:

  Found duplicate PV [follows the UUID]: using hda8 but not sda8
  device-mapper ioctl cmd 9 failed: invalid argument

although a call to "lvdisplay" shows that the LV is now "available" (and further calls to "lvchange" do not display the "ioctl" error message)! Nevertheless, the device "/dev/system/gentoo0" is *not* created (as it was when doing the exact same sequence of commands within the "install chroot"), so I cannot mount the LVs (hence cannot access "/usr" and "/var"...). Also, LVM commands ending in "scan" ("vgscan", "lvscan", ...) seem as though they will run forever, writing nothing to stdout.
  processor: Opteron 244
  kernel:    2.6.9-r14 (gentoo-dev-sources)
  lvm2

How can I make this work? :-{
-----END-----

And this is the answer I got, which solved the problem:

-----BEGIN-----
I seem to remember having a similar problem many months ago, but it was caused by an outdated version of LVM. Based on that, the only idea I would have would be to try an ~amd64 lvm2.
-----END-----

So, I very much agree with comment #5, especially for new users like me who most probably are not yet acquainted with "keywords", "use" and the like. It must be stressed that the current "stable" version of "lvm2" is *completely* non-functional when used in conjunction with software RAID. So it seems indispensable to mark a *working* version as "stable".
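As a side note, the "Found duplicate PV" error in the message above is the classic symptom of LVM scanning the raw component partitions (hda8/sda8) as well as the md device built from them. Independent of the version bump that fixed things here, a common way to avoid that is to restrict LVM's device scan in /etc/lvm/lvm.conf; a sketch (the exact filter regexes depend on your disk layout and are my assumption, not part of this report):

```
# /etc/lvm/lvm.conf (devices section)
devices {
    # Accept only /dev/md* devices and reject everything else, so each
    # PV is seen once on the RAID-1 array, not again on hda8/sda8.
    filter = [ "a|^/dev/md.*|", "r|.*|" ]
}
```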
2.00.33-r1 stable on amd64
Is there any reason this bug can't be closed-resolved?
Well, from what I can see in the keywording and from the reporters, this is fixed.