When =sys-fs/lvm2-2.02.183 is compiled with the udev USE flag enabled, it prints multiple warnings after lvm vgscan and other lvm commands in the initramfs, which lacks udev and just uses devtmpfs. The error message is:

WARNING: Device /dev/sda not initialized in udev database even after waiting 1000000 microseconds

After waiting around 30 seconds the command finishes. Compiling lvm2 without the udev USE flag fixes the problem. By the way, sys-fs/lvm2-2.02.145-r2 works normally with the udev USE flag enabled.
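A minimal sketch of the USE-flag workaround described above, assuming a standard Portage layout (the package.use file name is arbitrary):

# disable the udev USE flag for lvm2 only
echo "sys-fs/lvm2 -udev" >> /etc/portage/package.use/lvm2
# rebuild lvm2 with the new flag set
emerge --oneshot sys-fs/lvm2
# then regenerate your initramfs so it picks up the rebuilt binaries
# (how to do that depends on which initramfs tool you use)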
I also see this error when trying to run grub-install with root on LVM etc. :-( Basically grub tries every device for 10s and fails, and again, and again... A workaround is to forward the host's lvm state into the chroot, see https://bbs.archlinux.org/viewtopic.php?id=242594
I guess my last message was not very clear... I chroot into my new root partition, which is on LVM on LUKS, and try to grub-install to /dev/sda -> this fails unless /run/lvm from the host is imported into the chroot.
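A minimal sketch of that workaround, assuming the new root is mounted at /mnt/gentoo (adjust the mount point and target disk to your setup):

# make the host's LVM runtime state visible inside the chroot
mkdir -p /mnt/gentoo/run/lvm
mount --bind /run/lvm /mnt/gentoo/run/lvm
# grub-install from inside the chroot should then stop stalling on udev
chroot /mnt/gentoo grub-install /dev/sda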
This seems to be fixed in the new upstream release 2.02.184.
The bug has been referenced in the following commit(s):
https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=53d8205ff9f4533c17a265a88e44363fd9fd66ed

commit 53d8205ff9f4533c17a265a88e44363fd9fd66ed
Author:     Lars Wendler <polynomial-c@gentoo.org>
AuthorDate: 2019-04-08 18:34:13 +0000
Commit:     Lars Wendler <polynomial-c@gentoo.org>
CommitDate: 2019-04-08 18:34:27 +0000

    sys-fs/lvm2: Bump to version 2.02.184

    this should fix udev related issues as well as scan_lvs issues.

    Bug: https://bugs.gentoo.org/682578
    Bug: https://bugs.gentoo.org/682380
    Package-Manager: Portage-2.3.62, Repoman-2.3.12
    Signed-off-by: Lars Wendler <polynomial-c@gentoo.org>

 sys-fs/lvm2/Manifest             |   1 +
 sys-fs/lvm2/lvm2-2.02.184.ebuild | 258 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 259 insertions(+)
Please test =sys-fs/lvm2-2.02.184 and report back success/failure.
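For anyone who wants to test before the version is keyworded for their arch, roughly like this (the ~amd64 keyword is an assumption, substitute your own arch):

echo "=sys-fs/lvm2-2.02.184 ~amd64" >> /etc/portage/package.accept_keywords/lvm2
emerge --oneshot "=sys-fs/lvm2-2.02.184"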
I tested =sys-fs/lvm2-2.02.184-r1 and it fixes the problem for me, thanks.
Tried again, unfortunately on another box with a not totally identical setup, but grub-install worked OK with 184-r1.
Please change the severity of this bug to critical or blocker. It rendered my system entirely unbootable and I had to do an hour and a half of rescue work to get it back. Perhaps I needed to wait several more minutes? Please mask or remove this ebuild ASAP!!!
I can confirm that this is not about whether or not you have udev started. I start my raid via an init script and it still happens.

depend() {
	after sysfs devfs udev
	before checkfs fsck
}

assemble() {
	/sbin/fsck /dev/nvme0n1p1 || return 1
	/bin/mount /mnt/nvme0 || return 1
	/sbin/mdadm --assemble /dev/md0 /dev/sd{a,b,c,d}1 --bitmap /mnt/nvme0/md0-write-intent.dat || return 1
	/sbin/vgchange -ay /dev/vg0 || return 1
}

disassemble() {
	/sbin/vgchange -an /dev/vg0 || return 1
	/sbin/mdadm --stop /dev/md0 || return 1
	/bin/umount /mnt/nvme0 || return 1
}
Please also share the dmesg output that shows the udev delays. It may be a matter of udev not being started OR udev being unresponsive.
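Something along these lines should capture the relevant messages (the grep pattern is just a guess at what matters here):

dmesg | grep -iE 'udev|lvm|md0' > udev-delays.log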
Also, I was wrong about it making my system unbootable. I tried again with a stopwatch and patience and it proceeded after 4 minutes and 40 seconds. It did loop through all block devices twice and left these love letters in my kernel log:

[ 5.841247] EXT4-fs (nvme0n1p1): mounted filesystem with ordered data mode. Opts: (null)
[ 5.853304] md: md0 stopped.
[ 5.859397] md/raid:md0: device sda1 operational as raid disk 0
[ 5.859519] md/raid:md0: device sdd1 operational as raid disk 3
[ 5.859639] md/raid:md0: device sdc1 operational as raid disk 2
[ 5.859758] md/raid:md0: device sdb1 operational as raid disk 1
[ 5.861881] md/raid:md0: raid level 5 active with 4 out of 4 devices, algorithm 2
[ 5.864511] udevd[2067]: GOTO 'libsane_rules_end' has no matching label in: '/etc/udev/rules.d/S99-2000S1.rules'
[ 5.866365] md0: detected capacity change from 0 to 8796106653696
[ 66.863157] udevd[2067]: worker [2105] /devices/virtual/block/md0 is taking a long time
[ 186.965091] udevd[2105]: timeout 'udisks-lvm-pv-export d2uPGL-1XLV-VdAb-L05Y-KhdX-uTHk-QFspo3'
[ 186.965345] udevd[2105]: slow: 'udisks-lvm-pv-export d2uPGL-1XLV-VdAb-L05Y-KhdX-uTHk-QFspo3' [2109]
[ 186.999448] udevd[2067]: worker [2105] /devices/virtual/block/md0 timeout; kill it
[ 186.999679] udevd[2067]: seq 1385 '/devices/virtual/block/md0' killed
[ 187.000471] udevd[2067]: worker [2105] terminated by signal 9 (Killed)
[ 187.000755] udevd[2067]: worker [2105] failed while handling '/devices/virtual/block/md0'
[ 250.070660] udevd[2067]: worker [2106] /devices/virtual/block/md0 is taking a long time
[ 288.846412] udevd[2426]: failed to execute '/lib/udev/pci-db' 'pci-db /devices/pci0000:00/0000:00:01.3/0000:03:00.1': No such file or directory

<3 <3 <3
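The timeouts above point at a udev rule invoking udisks-lvm-pv-export on the newly assembled md0 device. To find which rule file ships that hook (the paths below are the usual locations, adjust if your rules live elsewhere):

grep -r "udisks-lvm-pv-export" /etc/udev/rules.d /lib/udev/rules.d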
Created attachment 573910: my complete /etc/init.d/assemble-raid script
Created attachment 573912: dmesg.log

My full dmesg.log. I have some kernel hacking options enabled and there's a driver that always gets flagged by the UB sanitizer (unrelated). Incidentally, I also have a misbehaving wifi device that I fixed with another init script hack, but that one also runs after udev and my assemble-raid script are started, which is why that wifi device goes away and comes back.

#!/sbin/openrc-run

name="iwlwifi-rescue"
description="wifi device, driver or firmware loading is screwed"

depend() {
	after udev sysfs
	before NetworkManager
}

start() {
	ebegin "Attempting to resurrect Intel wifi device..."
	set -x
	# unbind and rebind the PCI device to force the driver/firmware to reload
	echo 0000:27:00.0 > /sys/module/iwlwifi/drivers/pci:iwlwifi/unbind || return 1
	sleep 0.25
	echo 0000:27:00.0 > /sys/module/iwlwifi/drivers/pci:iwlwifi/bind || return 1
	set +x
	eend
}
How about stabilizing lvm2-2.02.184.*?