Quick brief: I have my rootfs on LVM over dm-crypt, and I run cryptsetup luksOpen and lvm vgchange in the initramfs, before the real /sbin/init (and udev) is started. I believe genkernel users have the same issue if they use dmcrypt and/or lvm (or maybe even md-based raid). This ends with an empty /dev/mapper (only the control node is there). Also, the /etc/init.d/device-mapper init script looks like a joke; probably no one uses it, as it does not work. I believe the real solution would be to execute 'dmsetup mknodes' in the udev init script, so it restores my 'enc_root', 'vg-rootfs', 'vg-home' etc. into /dev/mapper. For now I fixed it by adding another init script with a simple: start() { dmsetup mknodes; } to the sysinit runlevel with 'after udev'. (Reported from a fresh gentoo ~amd64 install). Reproducible: Always
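For reference, the workaround init script mentioned above would look roughly like this (a minimal OpenRC sketch; the script name is my own choice, not anything official):

```shell
#!/sbin/runscript
# /etc/init.d/dmsetup-mknodes -- hypothetical name; added to the sysinit
# runlevel with: rc-update add dmsetup-mknodes sysinit

depend() {
        # run only after udev has mounted and populated /dev
        after udev
}

start() {
        ebegin "Restoring device-mapper nodes in /dev/mapper"
        # recreates /dev/mapper/* (enc_root, vg-rootfs, vg-home, ...) for
        # mappings that were set up in the initramfs before udev started
        dmsetup mknodes
        eend $?
}
```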
Good grief, does nobody care about this?
http://bugs.gentoo.org/show_bug.cgi?id=74750 looks like it was fixed 5 years ago, and now we have this issue again.
(In reply to comment #0)
> I believe genkernel users have the same issue if they use dmcrypt and/or
> lvm (or maybe even md-based raid).

Not with newer versions of genkernel. This is because we mount devtmpfs/tmpfs on /dev before we create all these nodes in the initramfs. Then we 'mount --move' /dev from the ramdisk to the real_root just before switch_root. This seems to be what upstream considers a fix.

So the question is really whether 'dmsetup mknodes' is a fix or a workaround for a "broken" ramdisk: upstream seems to consider both it and "vgscan --mknodes" obsolete workarounds, and expects the device nodes to be handled purely by devtmpfs/udev.
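The initramfs flow described above can be sketched like this (a simplified illustration, not the actual genkernel code; device names and the /newroot mountpoint are assumptions for the example):

```shell
#!/bin/sh
# Simplified initramfs /init sketch: populate /dev on devtmpfs, set up
# dm-crypt/LVM, then hand the same /dev over to the real root instead of
# unmounting it.

mount -t proc proc /proc
mount -t sysfs sysfs /sys
mount -t devtmpfs devtmpfs /dev   # kernel-maintained /dev; nodes appear here

# Open the encrypted device and activate the volume group; the resulting
# /dev/mapper/* nodes now live on the devtmpfs we just mounted.
cryptsetup luksOpen /dev/sda2 enc_root   # device name is illustrative
vgchange -ay vg

mount /dev/mapper/vg-rootfs /newroot

# Keep the populated /dev: move it onto the new root instead of
# unmounting it, so /dev/mapper survives into the real system.
mount --move /dev /newroot/dev

exec switch_root /newroot /sbin/init
```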
(In reply to comment #3)
> Not with newer versions of genkernel. This is because we mount devtmpfs/tmpfs
> on /dev before we create all these nodes in the initramfs. Then we mount
> --move /dev from the ramdisk to the real_root just before switch_root. This
> seems to be what upstream considers a fix.

Well, even if you mount devtmpfs on /dev in the initramfs and do 'mount --move /dev /newroot/dev' just before running switch_root, it still does not look like a solution: when udev starts, will it mount its own tmpfs over /dev, or... maybe it will use the devtmpfs one? If it uses the devtmpfs one, then I will just fix my initramfs to do the mount --move.

> So the question is really whether 'dmsetup mknodes' is a fix or a workaround
> for a "broken" ramdisk, as upstream seems to consider it and "vgscan
> --mknodes" obsolete workarounds, and expects the device nodes to be handled
> purely by devtmpfs/udev.

I am 100% sure this was working when we had udev 13x or so; then, after some updates to udev, the nodes were no longer in /dev/mapper. I set up 'dmsetup mknodes' and forgot about it; months later I reinstalled my system and saw this again.
(In reply to comment #4)
> Well, even if you mount devtmpfs on /dev in the initramfs and do 'mount
> --move /dev /newroot/dev' just before running switch_root, it still does not
> look like a solution: when udev starts, will it mount its own tmpfs over
> /dev, or... maybe it will use the devtmpfs one? If it uses the devtmpfs one,
> then I will just fix my initramfs to do the mount --move.

It works at least with openrc; from /etc/init.d/udev-mount:

    if mountinfo -q /dev; then
        einfo "/dev is already mounted"
        return 0
    fi

Also, for some reason, IIRC, if you use devtmpfs, then you can also do stuff like "mount -t devtmpfs udev /dev && vgchange -ay && umount /dev" in the initramfs, and the device nodes will be recreated when devtmpfs is mounted again, but that should probably be considered a bit of a hack and not something to rely on.

> I am 100% sure this was working when we had udev 13x or so; then, after some
> updates to udev, the nodes were no longer in /dev/mapper. I set up 'dmsetup
> mknodes' and forgot about it; months later I reinstalled my system and saw
> this again.

I think it was the same for LVM2. I think older versions of udev handled this differently during "udevadm trigger"; however, since everything works if you set up /dev properly before doing anything device-node related, I think upstream will just tell us we are doing it wrong if we try to change it back.
OK, I removed my 'dmsetup mknodes' script from the sysinit runlevel and hacked my initramfs to not umount /dev before switch_root but to do 'mount --move /dev /newroot/dev' instead; while booting I saw '/dev is already mounted'. Thanks Xake for your support, you were a great help. I will leave this bug as 'NEW' until the udev maintainers confirm that mount --move is the *solution*.
Is there something to be done here for >=sys-fs/udev-197-r4 and >=sys-fs/udev-init-scripts-21 ? Is this bug still relevant with current versions?
Sadly I cannot test it: I already move the initramfs's /dev to /newroot/dev at the initramfs level, and I also switched from udev to mdev some time ago. I can't test it anytime soon, so maybe it's safe to close this and reopen it in the future if needed.