Since the LVM script in Gentoo runs pvscan before the `boot' runlevel is
started, devices that are detected by the `coldplug' boot script aren't properly
scanned as PVs.
For example, I have three IDE disks on a PCI IDE controller and two SATA disks
on a PCI SATA controller. The drivers for these PCI controllers are built as
modules, and thus not loaded until the coldplug script runs. Since the LVM
script has already run pvscan long before that has a chance of happening, these
PVs aren't properly detected, and I have to manually run pvscan and vgchange
once the system is up.
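For reference, the manual recovery I mean looks something like the following (run as root; exact volume group names will of course vary per system):

```shell
# Rescan all block devices for LVM physical volumes that were
# missed because the controller modules were not yet loaded...
pvscan

# ...then activate every volume group that the scan found.
vgchange -a y
```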
Personally, I'm not sure what the best way to fix this problem is. I realize
that I could compile the required modules into the kernel, but that would be
ugly. I could also make the coldplug script run pvscan, but that also feels
quite ugly. Maybe the LVM init script should be moved into a separate script in
init.d which `use's coldplug?
Put your modules in the autoload.d directory. As you know what they should be,
that's the best thing to do. Only use "coldplug" as a very last resort,
I don't recommend it at all.
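To illustrate what I mean: list the controller modules in /etc/modules.autoload.d/kernel-2.6 so they are loaded during the boot runlevel, before LVM scans for PVs. The module names below are only examples, not the reporter's actual drivers:

```shell
# /etc/modules.autoload.d/kernel-2.6
# One module name per line; loaded in order at boot.
sata_sil        # example: SATA controller driver
pdc202xx_new    # example: IDE controller driver
```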
Do you mind me asking why you recommend against using coldplug? Any way I look
at it, having the system dynamically detect my hardware is far better than
having to manually maintain a database of the devices I wish to have detected.
For devices that you _know_ about, you are always better off telling the system
exactly what should be loaded; it's just faster, and you remove any future
possibility that things might get a bit messed up.
Also, coldplug is slow, and often loads modules that you really don't want
to have loaded (shpchp is a common one...)
And remember, I'm the coldplug/hotplug maintainer :)
is this still an issue or can we go ahead and close this bug?
Although the Hotplug Czar seems to disagree with me, I would definitely argue that it is still an issue. I even hear that in baselayout 1.12, LVM is to be started even before modules.autoload.d is run through, which means that one will have to build the affected modules into the boot kernel image.
So it comes down to a question of what kind of architecture is wanted. I would argue that it's better to keep the modules outside the boot kernel, since it, among other things, means that the modules can be recompiled and reloaded while the system is running (of course, the LVM will have to be taken down to do so, but in my systems, that's a non-issue). I would also argue that it is better to have the system autodetect the available devices and load drivers for them, rather than having to do that manually (why do anything manually, which can be done automatically?).
If the consensus of the Gentoo High Council is that everything should be built into a large monolithic kernel instead of preserving the possibility to upgrade modules during runtime, and/or that it is better to do stuff manually instead of having a computer do it automatically (which I, otherwise, would argue is the original purpose of a computer), then there is little I can do about it.
If anyone wishes to argue with me on these points, I give two thumbs up for doing so in this bug. Otherwise, if the decision of the High Council is to do as they wish regardless of the wishes of me, a mere user -- again, there is little I can do. I guess I can only go on patching my systems locally then. Unfortunately, I don't think there's any objective truth in this matter.
gregkh: using modules.autoload.d does not solve the problem for me. i'm having this issue on a box with a pci fibre channel card. and i am not even using coldplug.
config: sun x4100, gentoo installed on mirrored raid set on motherboard SAS controller. the driver for that is built into the kernel. data storage is on a FC raid array attached to a pci-x qlogic qla2312 (2340L) pci FC card. we are loading the driver for the qla as a module because it requires firmware to be loaded at module insertion time. we are using LVM on both the internal (/dev/sda, vg0) and external (/dev/sdb, vg1) array.
the system boots with no problem. lvm is built into the kernel so the system finds /dev/sda and vg0. then the qla2xxx and qla2300 modules are inserted. /dev/sdb becomes available. checkfs runs looking to check /dev/vg1/data but vg1 is nowhere to be found. this is because /lib/rcscripts/addons/lvm-start.sh has not run pvscan, so lvm does not know that there is a volume group on /dev/sdb.
udev-096-r1 should fix this issue
Should the LVM startup not be made into an init.d service of its own? That way, the startup order could be manually adjusted, even to the point of starting after coldplug. Is there any particular reason why it is integrated in checkfs right now?
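A sketch of what such a service might look like, assuming baselayout's runscript format. This is a hypothetical /etc/init.d/lvm, not an existing Gentoo script; the depend() block is the point, since it lets LVM activation wait for coldplug/modules while still running before checkfs:

```shell
#!/sbin/runscript
# Hypothetical standalone LVM service. Depends on coldplug so that
# modular controller drivers are loaded before we scan for PVs.

depend() {
    need checkroot
    use modules coldplug
    before checkfs
}

start() {
    ebegin "Scanning for and activating LVM volume groups"
    /sbin/vgscan >/dev/null
    /sbin/vgchange -a y >/dev/null
    eend $?
}

stop() {
    ebegin "Deactivating LVM volume groups"
    /sbin/vgchange -a n >/dev/null
    eend $?
}
```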
(In reply to comment #8)
> Should the LVM startup not be made into an init.d service of its own? That way,
> the startup order could be manually adjusted, even to the point of starting
> after coldplug. Is there any particular reason why it is integrated in checkfs
> right now?
The coldplug init script is no longer used, as udev-096-r1 itself does the coldplugging.
udev 096-r1 fixed this issue for me. when udev starts, it loads the modules for the qlogic card. then when checkfs runs, vg1 is available. thanks.