The initrd created with `genkernel --install --lvm --mdadm initramfs` fails to activate the RAID arrays needed to mount my /usr. My /usr lives in a logical volume; the logical volume lives in a volume group built on top of RAID arrays.

WORKAROUND: I can manually instruct the initrd to activate the arrays by passing lvmraid=/dev/md0,/dev/md1,/dev/md2,/dev/md3 as a kernel command-line argument.

PROPOSED ENHANCEMENT: genkernel should examine the environment to figure out which MD devices are needed to bring the system online, and make those the defaults. This will require some smarts from genkernel to correlate the root and extra mounts with the logical volumes and the RAID arrays on which they are built. For bonus points, when the configuration changes and the initrd is operating on stale information, have it diagnose the situation and give the user an informative message that points them at documentation on how to troubleshoot and fix the problem.

alexandria genkernel # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md5 : active raid1 sdd1[2] sdb1[0]
      40064 blocks [3/2] [U_U]
md122 : active raid1 sdd2[1]
      4008128 blocks [3/1] [_U_]
md1 : active raid5 sdd6[3] sdc6[2] sdb6[1] sda6[0]
      187526400 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md2 : active raid5 sdd7[3] sdc7[2] sdb7[1] sda7[0]
      187526400 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md3 : active raid1 sdd8[0] sdb8[1]
      62508800 blocks [3/2] [UU_]
md0 : active raid5 sdd5[3] sdc5[2] sdb5[1] sda5[0]
      187526400 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

alexandria genkernel # pvdisplay -c
  /dev/sda8:vg00:6586587:-1:0:0:-1:32768:100:3:97:ZrvP00-UmLI-86rq-JjaA-vNLj-Cyok-GIPpj4
  /dev/sdb9:vg00:80887212:-1:0:0:-1:32768:1234:1202:32:LdaMtM-SNg2-UnUQ-1rLv-9RtZ-SEKV-q7UMzl
  /dev/md3:vg00:125017600:-1:8:8:-1:32768:1907:0:1907:zUuLoe-sAca-Pe8L-g82i-7Vqq-1Uf6-PCHeNM
  /dev/md0:vg00:375052800:-1:8:8:-1:32768:5722:110:5612:sxEY9T-aQSC-ELTt-Wrxp-Fojr-gzLs-X9jYiQ
  /dev/md1:vg00:375052800:-1:8:8:-1:32768:5722:884:4838:ftaJ7y-Afd7-ctyn-XbN6-hv7E-hyfX-ZzNtJs
  /dev/sdd9:vg00:76887027:-1:0:0:-1:32768:1173:1173:0:ShGO6L-U587-jyyO-I5gl-8ak1-RvfQ-jbFtQ2
  /dev/md2:vg00:375052800:-1:8:8:-1:32768:5722:0:5722:GGvF33-o1ES-TZt4-UFYj-cscn-3RI0-hAmwVf
  "/dev/sdc8" is a new physical volume of "59.61 GiB"
  /dev/sdc8:#orphans_lvm2:125017383:-1:8:8:-1:0:0:0:0:mFxdzq-091B-eOS6-3M3S-T6K0-j3tO-0CvDI3
  "/dev/sdc9" is a new physical volume of "29.81 GiB"
  /dev/sdc9:#orphans_lvm2:62508468:-1:8:8:-1:0:0:0:0:m2Huhq-Q06f-1Hqf-b9sQ-PKa7-s62Z-IeJ8if
  "/dev/sdc10" is a new physical volume of "6.86 GiB"
  /dev/sdc10:#orphans_lvm2:14377728:-1:8:8:-1:0:0:0:0:SIZBTq-S0sr-Dq2B-EmXs-E9PM-WAHe-gKd9zT

Yes, those are small disks. This is a very old system.
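The correlation step the enhancement asks for can be sketched from the data above: `pvdisplay -c` emits one colon-separated record per physical volume (PV:VG:size:...), so finding the MD devices a volume group depends on is a matter of filtering those records. A minimal illustration in Python, using lines captured from this system; `mds_for_vg` is a hypothetical helper, not genkernel code:

```python
def mds_for_vg(pvdisplay_c_output, vg):
    """Return the /dev/mdN physical volumes backing volume group `vg`,
    parsed from `pvdisplay -c` output (colon-separated: PV:VG:size:...)."""
    mds = []
    for line in pvdisplay_c_output.splitlines():
        line = line.strip()
        if not line.startswith("/dev/"):
            continue  # skip '"/dev/sdcN" is a new physical volume' notices
        fields = line.split(":")
        pv, pv_vg = fields[0], fields[1]
        if pv_vg == vg and pv.startswith("/dev/md"):
            mds.append(pv)
    return mds

sample = """\
  /dev/sda8:vg00:6586587:-1:0:0:-1:32768:100:3:97:ZrvP00-UmLI-86rq-JjaA-vNLj-Cyok-GIPpj4
  /dev/md3:vg00:125017600:-1:8:8:-1:32768:1907:0:1907:zUuLoe-sAca-Pe8L-g82i-7Vqq-1Uf6-PCHeNM
  /dev/md0:vg00:375052800:-1:8:8:-1:32768:5722:110:5612:sxEY9T-aQSC-ELTt-Wrxp-Fojr-gzLs-X9jYiQ
  /dev/md1:vg00:375052800:-1:8:8:-1:32768:5722:884:4838:ftaJ7y-Afd7-ctyn-XbN6-hv7E-hyfX-ZzNtJs
  /dev/md2:vg00:375052800:-1:8:8:-1:32768:5722:0:5722:GGvF33-o1ES-TZt4-UFYj-cscn-3RI0-hAmwVf
  "/dev/sdc8" is a new physical volume of "59.61 GiB"
  /dev/sdc8:#orphans_lvm2:125017383:-1:8:8:-1:0:0:0:0:mFxdzq-091B-eOS6-3M3S-T6K0-j3tO-0CvDI3
"""
print(mds_for_vg(sample, "vg00"))
# -> ['/dev/md3', '/dev/md0', '/dev/md1', '/dev/md2']
```

At initrd build time genkernel would additionally map / and /usr (from /etc/fstab) to their logical volumes and VGs before applying this filter; that mapping is the "smarts" part.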
Already implemented: the initrd now carries an mdadm configuration that enables all RAID arrays by default, or it tries to take over the system's mdadm.conf.
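For reference, the "enable all RAIDs by default" behaviour corresponds to an mdadm.conf with no restrictive ARRAY lines, so that assembly by scan starts every array whose superblock is found. A minimal sketch of such a config (an assumption about the shape of the generated file, not the exact file genkernel writes):

```
# mdadm.conf sketch: consider every block device in /proc/partitions.
# With no ARRAY lines present, `mdadm --assemble --scan` assembles
# every array it can find, rather than only a whitelisted set.
DEVICE partitions
```

Copying the system mdadm.conf into the initrd instead restricts assembly to the arrays the administrator listed, which is why the fallback to "all arrays" matters when that file is absent or stale.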