Summary: 2.6.25 requires >=lvm2-2.02.29 where CONFIG_SYSFS_DEPRECATED_V2=n
Product: Gentoo Linux
Reporter: Michal Januszewski (RETIRED) <spock>
Component: New packages
Assignee: Gentoo's Team for Core System packages <base-system>
Severity: major
CC: gengor, hakan, kfm, vroman, wschlich
Package list:
Runtime testing required: ---
Bug Depends on:
Bug Blocks: 207383, 218127, 255196
Attachments:
- Output of pvscan -vv with lvm2-2.02.28-r5 and device-mapper-1.02.22-r5
- output of lvm dumpconfig
Description Michal Januszewski (RETIRED) 2008-01-26 21:17:16 UTC
After upgrading the kernel to 2.6.24, the logical volumes are no longer detected by vgscan from the version of lvm2 currently available in Portage. The problem can be fixed by upgrading lvm2 to 2.02.31 and device-mapper to 1.02.24.
Comment 1 Victor Roman Archidona 2008-01-28 01:32:09 UTC
Hi, I installed Gentoo yesterday using device-mapper-1.02.22-r5 and lvm2-2.02.28-r4 with a 2.6.24 kernel on ~amd64 and I don't have any problems:

machine ~ # vgscan --version
  LVM version:     2.02.28 (2007-08-24)
  Library version: 1.02.22 (2007-08-21)
  Driver version:  4.12.0
machine ~ # vgscan
  Reading all physical volumes. This may take a while...
  Found volume group "system" using metadata type lvm2
machine ~ #

Posted here only for information :).
Comment 2 Marcin Gil 2008-01-28 07:17:55 UTC
I also haven't got any problems using lvm2. However I am running current versions of ~x86. No problems whatsoever.
Comment 3 Weedy 2008-02-01 07:23:42 UTC
(In reply to comment #2)
> I also haven't got any problems using lvm2. However I am running current
> versions of ~x86. No problems whatsoever.

Same here.
Comment 4 Chris Gianelloni (RETIRED) 2008-02-01 20:01:47 UTC
So do we need a bump or a newer stable?
Comment 5 Michal Januszewski (RETIRED) 2008-02-02 12:10:34 UTC
(In reply to comment #4)
> So do we need a bump or a newer stable?

If this is a real bug (so far I seem to be the only person hitting this problem) and if kernel 2.6.24 goes stable, then we need to stable the bumped versions. Otherwise, we just need a bump.

I can confirm that this is not a fluke (i.e. going back to the current ~amd64 breaks the boot process), but so far I haven't been able to identify what exactly causes the problem (not that I have looked very hard...).
Comment 6 Robin Johnson 2008-02-07 01:23:57 UTC
spock: could you please downgrade to the version that was broken for you, and then show the output from "pvscan -vv"? I'm running 2.6.24-git14 already, and have seen no troubles.
Comment 7 Robin Johnson 2008-02-07 02:03:33 UTC
The new versions are in the tree anyway, but please report your 'pvscan -vv' output with the old ones. I'm wondering if it was just the default filter change that happened recently (/dev/nbd* was added to the filter list, so I wonder about an etc-update snafu).
Comment 8 Michal Januszewski (RETIRED) 2008-02-07 14:24:03 UTC
Created attachment 142890 [details] Output of pvscan -vv with lvm2-2.02.28-r5 and device-mapper-1.02.22-r5
Comment 9 Robin Johnson 2008-02-07 20:50:04 UTC
spock: That looks VERY much like you have a bad filter statement in /etc/lvm/lvm.conf - because it didn't list a single device. Could you grep for it and paste here?
Comment 10 Michal Januszewski (RETIRED) 2008-02-08 17:08:44 UTC
(In reply to comment #9)
> spock: That looks VERY much like you have a bad filter statement in
> /etc/lvm/lvm.conf - because it didn't list a single device.
> Could you grep for it and paste here?

The only uncommented filter line is:

filter = [ "r|/dev/nbd.*|", "r|/dev/cdrom|" ]
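For reference, filter patterns in lvm.conf are tried in order: "a|...|" accepts, "r|...|" rejects, and a device that matches no pattern is accepted by default, so a filter that only rejects nbd and cdrom devices should not hide SCSI disks. A minimal illustrative fragment (not spock's actual config):

```
# /etc/lvm/lvm.conf (illustrative fragment)
devices {
    # First matching pattern wins; unmatched devices are accepted.
    filter = [ "r|/dev/nbd.*|", "r|/dev/cdrom|", "a|/dev/sd.*|" ]
}
```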
Comment 11 Robin Johnson 2008-02-08 23:46:32 UTC
Ok, that's really weird. Why is it ignoring most of your /dev nodes? Please paste/attach the output of "lvm dumpconfig".
Comment 12 Michal Januszewski (RETIRED) 2008-02-10 10:27:10 UTC
Created attachment 143104 [details] output of lvm dumpconfig
Comment 13 Robin Johnson 2008-02-11 00:58:40 UTC
Weird. I totally cannot see why it was ignoring your devices.
Comment 14 Doug Goldstein (RETIRED) 2008-02-13 21:27:10 UTC
Maybe the metadata is in lvm1 format?
Comment 15 Robin Johnson 2008-02-13 21:36:50 UTC
lvm1 support should be enabled in spock's system still, as it's set via the package.use in the base profile. But he can confirm that maybe?
Comment 16 Chris Gianelloni (RETIRED) 2008-02-14 01:19:12 UTC
USE="-* some use flags" will override package.use in the profiles. If he's using that, it'll definitely fail to pull in lvm1 support. Anyway, it doesn't look like I actually need to do anything for 2008.0, so I'm removing release here. If you need us to do something for the release, add us back.
Comment 17 Michal Januszewski (RETIRED) 2008-02-25 22:07:13 UTC
(In reply to comment #15)
> lvm1 support should be enabled in spock's system still, as it's set via the
> package.use in the base profile. But he can confirm that maybe?

I don't think it's an lvm1 problem. lvm2 is installed with the lvm1 USE flag enabled:

[ebuild   R   ] sys-fs/lvm2-2.02.33-r1  USE="lvm1 readline static (-clvm) (-cman) (-gulm) (-selinux)"

and pvscan says:

  PV /dev/sda2   VG vg   lvm2 [120.14 GB / 22.14 GB free]

so I guess it's using lvm2 metadata.
Comment 18 Kevin Bowling 2008-05-13 08:34:29 UTC
Besides the current stable versions being ancient, I hit a similar bug. My volumes were not mounted after a 2.6.22 to 2.6.25 kernel upgrade, which resulted in a week of downtime on my server, since I didn't have access to it. Fixed with a simple bump, and rock stable. Please get the bit rot out of the tree and stabilize sane versions.
Comment 19 Michael Crawford (ali3nx) 2008-05-16 22:56:26 UTC
(In reply to comment #18)
> Besides the current stable versions being ancient, I hit a similar bug. My
> volumes were not mounted from a 2.6.22 to 2.6.25 kernel upgrade and resulted in
> a week of downtime on my server since I didn't have access to it.
>
> Fixed with a simple bump, and rock stable. Please get the bit rot out of the
> tree and stabilize sane versions.

I also ran into this issue with the current stable lvm2-2.02.28-r2 not recognizing lvm volumes on boot on three separate amd64 Gentoo systems. After much debate with zzam in #gentoo-base and testing on multiple systems running 2.6.25 kernels that all had lvm2-2.02.28-r2 installed, I can only conclude that a version bump of stable is definitely required. After keywording and boot-testing two systems with sys-fs/lvm2-2.02.36, both systems booted with lvm volumes present.

The three systems I own that were affected are a days-old 2008.0 amd64 install using baselayout-2, a full stable amd64 using the 2007.0 profile, and my main workstation, which uses a full lvm2 rootfs and the 2007.0 profile with baselayout-2. On all three systems, lvm2 volumes were not started on boot with either baselayout-1 or baselayout-2.

Quoted conversation from #gentoo-base:

<ali3nx> i have a new install i just finished this morning and one existing that both use lvm and so far neither machine's rc scripts have loaded lvm volumes without an initrd.
<ali3nx> tested with both baselayout-1 and openrc with similar results
<ali3nx> http://forums.gentoo.org/viewtopic-t-692484-start-0-postdays-0-postorder-asc-highlight-.html
<UberLord> lvm works fine for me on my laptop
<ali3nx> tryin to fix borked initrd's all night will get anyone good and freaky lol
<ali3nx> hm
<ali3nx> which arch?
<ali3nx> both mine are amd64 and the fresh '08 install wont load lvm volumes before loading runlevel 3
<UberLord> i386
<ali3nx> tested that one with stable and openrc
<UberLord> i will be using lvm on my new core2 quad which hopefully arrives tomorow
<UberLord> which is amd64 (instruction set)
<ali3nx> my workstation uses lvm2 rootfs so i have to use an initrd but the one time the "older" initrd worked during boot testing the lvm volume was not found
<UberLord> ah, my rootfs isn't on lvm
<ali3nx> so strangely similar issues on two systems
<UberLord> rootfs on lvm wasn't advised by the docs
<ali3nx> indeed. and the 2U server with the fresh install only has /usr /opt /var and friends on lvm
<ali3nx> my workstation does use lvm2 root
<ali3nx> workstation boots fine with an initrd using openrc
<UberLord> and i don't use an initrd
<ali3nx> hm
<UberLord> i can't actually help that much as I'm not that knowledgeable about lvm
<zzam> hmm
<UberLord> i just have some so I can ensure that new openrc builds don't break lvm
<zzam> so rootfs on lvm requires an initrd/initramfs always!
<zzam> that does work?
<ali3nx> yes
<ali3nx> but not with baselayout 1
<ali3nx> worked with openrc and baselayout-2
<zzam> what happens with bl-1 then?
<UberLord> should work with b-1 with an initrd as well
<zzam> initramfs should mount / fine and exec init then
<ali3nx> lvm volumes will not pre load with the rcscripts
<zzam> and init should run lvm vgscan to get device nodes populated
<zzam> either udev-start runs this or lvm-start.sh
<ali3nx> yeah the rcscript addons files for lvm have worked reliably for ages
<zzam> hmm, explicit excluding causes
<zzam> did you update udev right before it happened?
<ali3nx> just decided to fix two systems overnight and ran across these
<ali3nx> it's possible i've been multitasking like a nutter all night heh
<zzam> as with baselayout-1 all is a bit strange
<zzam> udev-start runs this for bl-1
<zzam> # /sbin/lvm vgscan -P --mknodes --ignorelockingfailure
<zzam> I am happy to phase that out with bl-2/openrc
<ali3nx> emulated cisco lab and fixing two amd64 lvm setups but the lvm-start.sh file was present on the 2U during the bl-1 boot tests
<ali3nx> could be udev then
<zzam> using an initramfs this line should create the device nodes of already online volumes in /dev
<ali3nx> i'll have to run out to the garage and kick it one more time
<zzam> that should make initramfs + bl-1 up and running
<zzam> the only reason for this strange stuff before running lvm-start is: fsckroot runs before
<ali3nx> however after getting my workstation up with openrc and the rootfs lvm with initrd i assumed bl-1 was hosed and upgraded the server
<ali3nx> still wont boot but i'll have a closer look at udev
<zzam> the server now does not boot with what setup?
<zzam> root on lvm or not?
<ali3nx> fresh 2008.0 install and bl-2
<ali3nx> with usr var tmp opt and home on lvm
<zzam> last time I updated kernel I stepped over too old lvm in my initramfs
<zzam> or was it too old mdev from busybox
<ali3nx> yeah aroud mid last month my workstation with the lvm rootfs wouldnt boot and i didn;t have time to figure out wtf caused it hehe... jus got around it last night
<ali3nx> assumed at the time it was a hosed initrd or some kernel commit messing with me

(next day)

<ali3nx> fyi i just discovered very similar lvm issues affecting my two amd64 systems on a third amd64 2U server that runs full stable
<zzam> pong ali3nx
<ali3nx> lvm array on the third system not found after it's last reboot
<ali3nx> i'm just investigating
<zzam> you did any updates?
<ali3nx> the third system was fully up to date
<ali3nx> nothing in base system keyworded
<ali3nx> the physical disks are present but lvm did not initialize properly for some yet unknown reason. vgscan shows no arrays
<zzam> if you have a shell at that moment
<ali3nx> that system has been using amd64/2007.0 profile for some time
<ali3nx> sure do
<zzam> does the raw devices exist / like sdaX or so
<ali3nx> yes they are all present
<zzam> maybe running vgscan -v -v or other options help
<ali3nx> i'll fetch the results for ya
<ali3nx> http://rafb.net/p/mALNA665.html
<zzam> or try pvscan first
<zzam> that should just list the raw lvm pv partitions
<ali3nx> strange no pv's found
<ali3nx> very bizzare
<zzam> maybe conf file for lvm corrupt
<ali3nx> http://rafb.net/p/vutEeD95.html
<ali3nx> hm possibly
<zzam> this is one of the times emerge --info does not help
<ali3nx> also using current stable lvm2 utils
<ali3nx> nope lvm.conf is there
<ali3nx> http://rafb.net/p/krxHWx20.html
<ali3nx> doesn't appear to be anything unusual
<zzam> kernel is new?
<ali3nx> roughly 2 days since last reboot
<zzam> most content of lvm.conf is cryptic for me, too
<ali3nx> most of the noisy systems live on a 2 post rack in my garage hehe
<ali3nx> pair of 2U smp opterons, 3 u80 quad sparcs, fleet of ciscos ect
<zzam> maybe try pvscan -vvv
<zzam> the device-cache listed in conf-file may be a reason
<ali3nx> http://rafb.net/p/Aay2x435.html
<zzam> conf-file suggests vgscan -vvvv
<zzam> even more -v :D
<ali3nx> :)
<ali3nx> http://rafb.net/p/hBdgTf47.html
<zzam> the skipping messages are the strange ones
<zzam> example: is sdc1 a lvm pv ?
<ali3nx> yes
<zzam> line 304
<zzam> #filters/filter-sysfs.c:255 /dev/sdc1: Skipping (sysfs
<ali3nx> sdb1 and scd1 on that system
<zzam> but why?
<ali3nx> very good question :)
<ali3nx> i'm not very adept with sysfs issues
<ali3nx> unless lvm2 requires a legacy sysfs device which i believe i have disabled in the kernel
<zzam> what version of lvm2 do you have?
<ali3nx> [ebuild   R   ] sys-fs/lvm2-2.02.28-r2  USE="readline (-clvm) (-cman) (-gulm) -nolvm1 -nolvmstatic (-selinux)" 0 kB
<ali3nx> current stable
<ali3nx> if memory serves me i think it was recently version bumped
<zzam> maybe this bug is what you have
<zzam> http://bugs.gentoo.org/show_bug.cgi?id=207612
<ali3nx> hmmm
<zzam> it suggests basically updating lvm2
<zzam> but it seems to me they have no clue why
<zzam> but I guess your vgscan output at low verbose levels is equal
<ali3nx> the similarity does appear to suggest something changed in recent kernels
<zzam> yes
<zzam> maybe it really is some of those deprecated sysfs layout things
<ali3nx> when i first attempted to boot 2.6.24 with my lvm2 rootfs system i had an error
<zzam> but we may be wrong
<ali3nx> but i assumed the initrd was hosed
<ali3nx> then the first 2U opteron... fresh install with 2.6.25 kernel and stable lvm2 utils
<ali3nx> using 2008.0 and either version of baselayout
<ali3nx> still not booted on it's own
<ali3nx> third opteron full stable with bl-1 = no lvm pv's found
<ali3nx> getting to the root of the issue at least :)
<zzam> as kernel 2.6.24 is stable that suggests that maybe a newer lvm2 is needed
<ali3nx> yes it's certainly possible
<ali3nx> i havent followed commits closely for ages however
<zzam> commits of what
<ali3nx> new developments, features in the kernel or gentoo's commits
<ali3nx> i've been afk from irc for some time
<ali3nx> used to follow early developments very closely
<ali3nx> i'll keyword lvm2 on my 2U with the fresh install and test
<ali3nx> see if it has any affects
--- jyujin_ is now known as jyujin
<ali3nx> zzam, i keyworded and updated device-mapper and lvm2 on the new install.
<ali3nx> system booted
<ali3nx> i'm just testing the same on the other 2U with stable bl-1
<ali3nx> so far it appears that there's a bug in either stable device-mapper or lvm2 currently in portage
<zzam> fine
<zzam> looking into lvm-history I find this:
<zzam> Handle new sysfs subsystem/block/devices directory structure
<zzam> this is in the changes of Version 2.02.29 - 5th December 2007
<ali3nx> would suggest a bump to >=2.02.29 is needed but i'll test on my other 2U and provide the results
<ali3nx> bbiab going to watch the local console reboot
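The changelog finding above suggests a quick triage test: if the kernel was built without CONFIG_SYSFS_DEPRECATED_V2, pre-2.02.29 lvm2's sysfs filter will skip every block device. A rough sketch of such a check (the helper name and sample config are made up for illustration; a real check would point at the running kernel's .config or /proc/config.gz):

```shell
# Sketch: given a kernel .config, guess whether pre-2.02.29 lvm2 will
# still see block devices (per this bug: with 2.6.25 and
# CONFIG_SYSFS_DEPRECATED_V2=n, the old sysfs filter skips everything).
needs_new_lvm2() {
    if grep -q '^CONFIG_SYSFS_DEPRECATED_V2=y' "$1"; then
        echo "legacy sysfs layout: stable lvm2 should work"
    else
        echo "new sysfs layout: need >=sys-fs/lvm2-2.02.29"
    fi
}

# Illustrative sample config, not a real kernel build.
cfg=$(mktemp)
printf '%s\n' 'CONFIG_SYSFS=y' '# CONFIG_SYSFS_DEPRECATED_V2 is not set' > "$cfg"
needs_new_lvm2 "$cfg"
rm -f "$cfg"
```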
Comment 20 Victor Roman Archidona 2008-05-18 20:37:08 UTC
Hi, since gentoo-sources/linux-2.6.25 I am having problems with lvm2 prior to 2.02.29 due to sysfs kernel changes. A workaround exists: setting CONFIG_SYSFS_DEPRECATED_V2=y in the kernel's .config. But that isn't the clean way when an LVM2 package supporting 2.6.25's new layout exists. Using the *current* unstable lvm2 package (sys-fs/lvm2-2.02.36), everything works fine, so this message is a request to make it stable, allowing us LVM2 users to continue using it.
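For anyone stuck on the stable lvm2, the workaround amounts to a kernel configuration fragment like the following (option names as discussed in this thread; whether CONFIG_SYSFS_DEPRECATED must also be set alongside it depends on the exact kernel version's Kconfig):

```
# Kernel .config for 2.6.25: keep the legacy sysfs layout so that
# lvm2 older than 2.02.29 can still find block devices.
CONFIG_SYSFS_DEPRECATED=y
CONFIG_SYSFS_DEPRECATED_V2=y
```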
Comment 21 Hanno Böck 2008-06-08 13:45:13 UTC
Archs, I think you can go ahead with stabilizing lvm2-2.02.36 to fix this.
Comment 22 Kerin Millar 2008-06-13 20:07:07 UTC
I ran into exactly the same problem as Victor after upgrading to 2.6.25. Not enabling the legacy sysfs layout was fine with 2.6.24 and lvm2-2.02.28, but it's no longer OK with 2.6.25, because the consequences of this option for the layout differ between 2.6.24 and 2.6.25, even though the option is outwardly identical. Also, they've renamed the symbol from CONFIG_SYSFS_DEPRECATED to CONFIG_SYSFS_DEPRECATED_V2. Although it seems to be set by default in 2.6.25, it is too much to expect the user to be aware of the significance of this option for the current stable lvm2 package. Further, the help text for the option states: "If you are using a distro with the most recent userspace packages, it should be safe to say N here." Well, that sort of distro sounds like Gentoo to me ;)

I agree with Hanno. Assuming that there are no outstanding bugs, I would urge the arch testers to expedite stabilisation of the following two packages to contain the situation (they work very well here):

sys-fs/device-mapper-1.02.26
sys-fs/lvm2-2.02.37

PS: I'm adding this to the 2.6.25 regression tracker so that the kernel herd are kept informed.
Comment 23 nixnut (RETIRED) 2008-06-14 09:36:33 UTC
Comment 24 Christoph Mende (RETIRED) 2008-06-14 09:52:01 UTC
Comment 25 Tobias Klausmann (RETIRED) 2008-06-15 11:36:21 UTC
Stable on alpha.
Comment 26 Jeroen Roovers (RETIRED) 2008-06-16 03:48:21 UTC
* QA Notice: USE Flag 'nolvmstatic' not in IUSE for sys-fs/lvm2-2.02.37
* QA Notice: USE Flag 'nolvm1' not in IUSE for sys-fs/lvm2-2.02.37
Comment 27 Robin Johnson 2008-06-16 05:06:01 UTC
jer: the flags are there DELIBERATELY for migration. If anybody is using them, they can suffer breakage unless they update their system.

[snip]
use nolvmstatic && eerror "USE=nolvmstatic has changed to USE=static via package.use"
use nolvm1 && eerror "USE=nolvm1 has changed to USE=lvm1 via package.use"
[/snip]
Comment 28 Jeroen Roovers (RETIRED) 2008-06-16 05:24:18 UTC
(In reply to comment #27)
> jer: the flags are there DELIBERATELY for migration.

They were deliberately not added or deliberately removed from IUSE? :)
Comment 29 Jeroen Roovers (RETIRED) 2008-06-16 05:44:25 UTC
Stable for HPPA.
Comment 30 Doug Goldstein (RETIRED) 2008-06-16 13:24:02 UTC
(In reply to comment #28)
> > jer: the flags are there DELIBERATELY for migration.
>
> They were deliberately not added or deliberately removed from IUSE? :)

Deliberately removed. You can see the history in CVS as to when they disappeared.
Comment 31 Kerin Millar 2008-06-17 09:54:33 UTC
Why are some of the arch herds not stabilising the latest versions that I mentioned in comment 22, thus closing bug 210879, bug 214194 and bug 202058 (and maybe others)? /me sighs and populates package.keywords again ...
Comment 32 Christian Faulhammer (RETIRED) 2008-06-17 10:15:44 UTC
(In reply to comment #31)
> Why are some of the arch herds not stabilising the latest versions that I
> mentioned in comment 22, thus closing bug 210879, bug 214194 and bug 202058
> (and maybe others)? /me sighs and populates package.keywords again ...

Lack of time/setup to test it properly, maybe?
Comment 33 Markus Rothe (RETIRED) 2008-06-21 19:24:39 UTC
The following packages are stable on ppc64:

sys-fs/device-mapper-1.02.26
sys-fs/lvm2-2.02.37
Comment 34 Markus Meier 2008-06-22 15:24:17 UTC
Comment 35 Kerin Millar 2008-06-22 15:58:16 UTC
Re: comment 32, my apologies, as I did not word my comment very well. The specific issue I was referring to is not that it has taken so long for the bitrot to be attended to; rather that some of the herds (amd64, alpha) _have_ attended to the request but have not stabilised the latest versions. That means that the versions they have stabilised (lvm2-2.02.36/device-mapper-1.02.24-r1) are ones that have open bugs in portage, and I see this as a completely wasted opportunity, especially as it is anyone's guess how long it will take before the next round of stabilisation occurs.

Generally speaking, these packages fall into the category of being low-risk, high-importance "system" updates; in other words, updates are more likely to solve problems than to cause them. As such, when updates are requested, I think every effort should be made to attend to them in a timely fashion. If this isn't practical for whatever reason, then I think that the packages should become part of "system" ... they're simply too important.
Comment 36 Kerin Millar 2008-06-22 16:01:30 UTC
Also, I should have said "applicable bugs" rather than "open bugs". The bugs may be closed, but until the versions that correct said bugs are keyworded stable, it's moot.
Comment 37 Raúl Porcel (RETIRED) 2008-06-23 18:54:04 UTC
ia64/sparc stable, closing
Comment 38 firstname.lastname@example.org 2008-06-29 16:15:27 UTC
I was googling before updating my lvm package and came across this bug, but I'm confused... I generally stick with stable kernels, and the current stable version for amd64 is 2.6.24-r8... but the very first comment says the fix is to upgrade device-mapper to 1.02.24, while the current stable is 1.02.22-r5...

So, if I'm upgrading to kernel 2.6.24-r8, will the current stable lvm packages be fine for this kernel?

device-mapper-1.02.22-r5
lvm2-2.02.36

Or do I need to unmask 1.02.24?

Also, as to the changed USE flags... I never had lvm1 compatibility compiled in, as evidenced by the following emerge -p output:

Calculating dependencies... done!
[ebuild     U ] sys-fs/lvm2-2.02.36 [2.02.28-r2] USE="lvm1%* readline static%* (-clvm) (-cman) (-gulm) (-selinux) (-nolvm1%) (-nolvmstatic%)" 556 kB

I only have a couple of lvm volumes, and they are both lvm2:

myhost init.d # pvscan
  PV /dev/sdb1   VG vg2   lvm2 [190.74 GB / 0    free]
  PV /dev/sdb2   VG vg2   lvm2 [256.25 GB / 206.25 GB free]
  Total: 2 [447.00 GB] / in use: 2 [447.00 GB] / in no VG: 0 [0   ]

lvm1 and static are both in yellow with the trailing '%', meaning they are new... so would the proper way to disable these be to just add -lvm1 and -static package.use entries for lvm2? Tia for any advice...
Comment 39 email@example.com 2008-06-29 16:25:28 UTC
Actually, I think I was wrong about the USE flags...

myhost ~ # equery u lvm2
[ Searching for packages matching lvm2... ]
[ Colour Code : set unset ]
[ Legend : Left column (U) - USE flags from make.conf ]
[        : Right column (I) - USE flags packages was installed with ]
[ Found these USE variables for sys-fs/lvm2-2.02.28-r2 ]
 U I
 - - clvm        : Allow users to build clustered lvm2
 - - cman        : Cman support for clustered lvm
 - - gulm        : Gulm support for clustered lvm
 - - nolvm1      : Allow users to build lvm2 without lvm1 support
 - - nolvmstatic : Allow users to build lvm2 dynamically
 + + readline    : Enables support for libreadline, a GNU line-editing library that almost everyone wants
 - - selinux     : !!internal use only!! Security Enhanced Linux support, this must be set by the selinux profile or breakage will occur
myhost ~ #

So, this means that I *did* have lvm1 and lvm1static support compiled in, right? But I guess its still ok to set -lvm1 and -static, since I don't need it, right?
Comment 40 firstname.lastname@example.org 2008-06-29 19:04:07 UTC
Dang, I need to have my coffee before posting... Current stable device-mapper is 1.02.24-r1, so should be good to go... Only question remaining is the USE flags... I guess it doesn't hurt to have support for lvm1 compiled in, so I'll just leave it that way for now...
Comment 41 Kerin Millar 2008-06-29 20:30:13 UTC
Charles, bugzilla isn't the right place to ask support questions (or, indeed, any question that goes beyond the scope of having the bug resolved satisfactorily). Notwithstanding, I'll address your questions on this occasion ...

Flags that negate a given feature, that is, those that begin with the word "no", are considered to be a very bad thing. Yet doing away with them was not so simple, because developers sometimes want to make a feature optional where having the feature enabled is generally very important for the majority of the userbase. Now that USE flags can be sourced from the package.use in a given profile or sub-profile, and can also be enabled by the package itself in its IUSE declaration, developers are finally in a position to address the problem. That means that "nolvmstatic" is done away with in favour of the standard "static" global, and "nolvm1" is replaced by "lvm1".

If you look at /usr/portage/profiles/base/package.use you'll see that both "lvm1" and "static" are specifically enabled for the lvm2 package. Of course, they can be overridden by a USE flag source that has a greater precedence in the USE_ORDER variable (read man make.conf for details). The sources that are relevant in this case are "defaults" and "pkginternal".

The long and short of it is that building lvm2 statically is a good idea, as it just might save your bacon in a pinch. lvm1 support is less important; it has absolutely no relevance whatsoever to users of the lvm implementation in 2.6 kernels. Nonetheless, it's appropriate as a default for obvious reasons.
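As a concrete sketch of the precedence just described (the user-side line is hypothetical; exact precedence is governed by USE_ORDER, see man make.conf): the profile ships the flags enabled, and a user who genuinely wants them off can override them in their own package.use, which is consulted with higher precedence than the profile defaults:

```
# /usr/portage/profiles/base/package.use (shipped by the profile):
sys-fs/lvm2 lvm1 static

# /etc/portage/package.use (hypothetical user override; wins over the profile):
sys-fs/lvm2 -lvm1 -static
```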