From the manual vgscan(8):

--- In LVM2, vgscans take place automatically; but you might still need to run one explicitly after changing hardware. ---

As I interpret this, vgscan only needs to be run if you add/remove physical LVM2 disks during runtime. And even for this (adding/removing USB hard drives with LVM2 on them) I have only needed vgchange to get mountable device nodes.

So maybe remove this line [1] and the following sleep, or conditionally run them for people adding dolvm1 or something, since I do not think many people still run LVM1 (and thus need LVM1 support) and the extra time it takes to run vgscan and then wait the (for LVM2 even more unneeded) 2 seconds. I cannot find any reference in the changelog to why those two seconds are there, so I guess they also date from the LVM1 days.

Removing those boots my system fine (root on LVM2 residing on an md-raid) and activates all volume groups on my system.

[1] http://git.overlays.gentoo.org/gitweb/?p=proj/genkernel.git;a=blob;f=defaults/initrd.scripts;h=e0710c4ed267a4f85f0606844e156e96f864a5fb;hb=HEAD#l640
Fixed. http://git.overlays.gentoo.org/gitweb/?p=proj/genkernel.git;a=commitdiff;h=b3d07b46a7a01f4e2497a1db62f0ba48e42dc7b1
(In reply to comment #1)
> Fixed.

Any special reason you kept vgscan?
Yes. I worry that it may break things. I may be too cautious here, but I'm still rather new to this kind of material. I'll adjust the subject and re-open for someone else.
Maybe ask the maintainers of the lvm2 package if they have knowledge about its necessity?
After digging into this (for some partly unrelated issues) it seems that vgscan really is only needed in two cases:

* when something screws with /dev: in this case it needs --mknodes to create the device files, but for a standard genkernel initramfs that uses mdev this should not be a problem (and is not, at least for me, on a somewhat recent 2.6 kernel)
* when /etc/lvm/cache needs regeneration/update: however, since /etc/lvm/cache does not (and probably should not) exist in the initramfs, this becomes a non-problem, because vgchange in that case does the scan itself

In other words, it seems to be safe to remove the vgscan from the genkernel-generated initramfs.
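To illustrate the logic described above, here is a minimal hedged sketch (the function and variable names are illustrative, not taken from the actual initrd.scripts): run vgscan only when a cache file exists, and otherwise let vgchange do the device scan itself.

```shell
#!/bin/sh
# Hypothetical sketch of the decision described above; start_volumes
# and LVM_CACHE are illustrative names, not genkernel's real code.
LVM_CACHE="${LVM_CACHE:-/etc/lvm/cache/.cache}"

start_volumes() {
    if [ -e "$LVM_CACHE" ]; then
        # A cache exists: refresh it so vgchange does not work
        # from a stale device list.
        vgscan
    fi
    # Without a cache, vgchange scans all block devices itself.
    vgchange -a y
}
```

In the common genkernel case (no cache inside the initramfs) this degrades to a single vgchange call, which matches the observation that vgscan is redundant there.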
Created attachment 260903 [details, diff]
patch against the experimental branch

vgchange nowadays has a --sysinit parameter (also in the version built with genkernel) that turns off locking, monitoring, polling and all the other stuff that is not supposed to be started until later, when real_root is mounted rw (i.e. in an init script).
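For reference, a sketch of how such an early-boot activation call might look (the function name is illustrative; --sysinit is a real vgchange option that implies --ignorelockingfailure and disables monitoring and polling):

```shell
#!/bin/sh
# Illustrative sketch: activate all volume groups early in boot.
# --sysinit bundles the options suitable before the real root is
# mounted rw, so nothing that needs locking/monitoring is started.
activate_all_vgs() {
    vgchange --sysinit -a y
}
```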
Created attachment 260905 [details, diff]
patch against the experimental branch

As pointed out, there really is no reason for this call in genkernel with lvm2.
With these two patches applied locally, the generated initramfs does not seem to boot my lvm-inside-dmcrypt setup any more. I get an error about a missing /dev/gentoo/root block device.
(In reply to comment #8)
> With these two patches applied locally, the generated initramfs does not seem
> to boot my lvm-inside-dmcrypt setup any more. I get an error about a missing
> /dev/gentoo/root block device.

Try again with only the first patch; function-wise that one is more crucial, to ensure we do not break systems by starting stuff that needs proper locking.

I guess it is the initrd that does not find your /dev/gentoo/root? Does /dev/mapper/gentoo-root exist? Do you have anything in /etc/lvm? What output does "vgscan --mknodes" give, and do the device nodes appear after that?

These questions are because vgchange uses a cache (/etc/lvm/cache/.cache), which is simply a list of devices it should scan for volume groups. If this cache does not exist, it does the same as vgscan. So there really exist only two reasons to run vgscan in the initrd:
1. create device nodes, which should be created by udev/mdev/devtmpfs
2. update the cache, if available
(In reply to comment #9)
> Try again with only the first patch; function-wise that one is more crucial,
> to ensure we do not break systems by starting stuff that needs proper locking.

With only 0002-Use-vgchange-sysinit.patch applied, my system is booting.

> I guess it is the initrd that does not find your /dev/gentoo/root?

Yes.

>> Determining root device...
!! Block device /dev/gentoo/root is not a valid root device

> Does /dev/mapper/gentoo-root exist?

No.

> Do you have anything in /etc/lvm?

Two files right after entering the shell:
- cache/.cache
- lvm.conf

> What output does "vgscan --mknodes" give

Found volume group "gentoo" using metadata type lvm2.

> and do the device nodes appear after that?

No new files in /dev/mapper/* or /dev/gentoo/*. It seems to have replaced some /dev/ram* entries in /etc/lvm/cache/.cache, though. In addition to /dev/ram* and /dev/sda{1,2} I now find
- /dev/mapper/root
- /dev/md-0
in the cache.

When I run "vgchange -ay" after that, it activates 7 volumes, adding these files:
- /dev/gentoo/* (symlinks)
- /dev/mapper/gentoo-*
These files do not appear in /etc/lvm/cache/.cache unless I run vgscan once more.

> These questions are because vgchange uses a cache (/etc/lvm/cache/.cache),
> which is simply a list of devices it should scan for volume groups. If this
> cache does not exist, it does the same as vgscan. So there really exist only
> two reasons to run vgscan in the initrd:
> 1. create device nodes, which should be created by udev/mdev/devtmpfs
> 2. update the cache, if available

So what do we do? Keep vgscan in but remove "--ignorelockingfailure --mknodes" from its call?
I am somewhat interested in where that cache comes from, but I can tackle and experiment with that myself. I think we can leave vgscan as is until that is investigated, but I should cook up another patch for how to run those two commands, which would make it a bit faster.
Created attachment 261124 [details, diff]
conditionalize vgscan and more

Try this patch. As you can see, it conditionalizes vgscan depending on whether the cache exists. It also runs all commands in one go, as that is faster for everyone who also needs vgscan. It boots my computer, which does not have a cache, and should boot yours with a cache (probably generated by other tools using device-mapper) too.
The patch still depends on 0002-Use-vgchange-sysinit.patch
(In reply to comment #12)
> As you can see, it conditionalizes vgscan depending on whether the cache exists.
> It also runs all commands in one go, as that is faster for everyone who also
> needs vgscan.

While I do like the idea of making vgscan depend on the existence of a cache, the rest seems a little hacky to me; e.g. "echo -e" does not seem to be strictly POSIX. I'm undecided whether I should apply the patch as is. Let me think about it.
Created attachment 261151 [details, diff]
Call lvm individually

IIRC from my bootcharts this is still a bit faster than calling the symlinks. Also removed unnecessary parameters (there are no locking failures, since the tmpfs this gets run in is a rw environment, and the /dev/null was probably there to hide the ignored messages about this).
Created attachment 261152 [details, diff]
Do lvm in one go

The same patch, just with the options sent to the command cleaned up and made POSIX-compatible. While passing options to echo is not POSIX-compatible, GNU/busybox echo does not behave like POSIX echo unless -e is passed. The portability of echo is discussed in [1]; for portability the POSIX command printf is preferred.

Also, this may seem more hackish, but speed-wise there is a lot less IO scanning if you do it all in one go. This could really make people with many/big/slow hard drives happier if they need their cache updated.

[1] http://www.opengroup.org/onlinepubs/000095399/utilities/echo.html
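The "one go" idea with a portable printf can be sketched like this (a hypothetical snippet, not the attached patch itself; it assumes the lvm binary's mode of reading commands from stdin):

```shell
#!/bin/sh
# Hypothetical sketch of the "one go" approach: feed all commands to a
# single lvm invocation via POSIX printf instead of the non-portable
# "echo -e", so the disks only get scanned once.
run_lvm_commands() {
    printf '%s\n' 'vgscan' 'vgchange -a y' | lvm
}
```

Compared with invoking vgscan and vgchange separately, one lvm process means one round of device scanning, which is where the time goes on many/big/slow drives.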
Patch 2 and 3 now applied. Thanks for your work on this.

+*genkernel-3.4.12.6 (31 Jan 2011)
+
+  31 Jan 2011; Sebastian Pipping <sping@gentoo.org> -genkernel-3.4.12.1.ebuild,
+  -genkernel-3.4.12.2.ebuild, +genkernel-3.4.12.6.ebuild:
+  Bump to 3.4.12.6
+