When sys-fs/mdadm (3.3.1-r2) is called by /etc/init.d/mdraid at early boot time, and the kernel has already assembled the RAID-1 arrays, mdadm returns an empty error string. This results in a red asterisk appearing alone on the line following "Starting up RAID devices...".

Reproducible: Always

My amd64 system (not ~) has /boot on /dev/sda1, / on /dev/md5 and everything else in lvm2 volumes on /dev/md7. /dev/md5 is built on /dev/sda5 and /dev/sdb5; /dev/md7 is built on /dev/sda7 and /dev/sdb7. I set metadata 0.9 when creating /dev/md5 because it's the root device, so the kernel needs to be able to assemble it. I thought I'd set metadata 1.x on /dev/md7, but apparently I was mistaken and set 0.9.

The postinst message from sys-fs/mdadm is ambiguous about whether I need mdraid in my boot runlevel, and the installation docs at the time said I definitely do, so I ran "rc-update add mdraid boot". Now, at boot time, the kernel detects /dev/md5 and /dev/md7 and assembles them, so by the time /etc/init.d/mdraid is called the devices are already active. The script fails to assemble them again and returns an error - but the message is empty.

I see two problems:
1. The docs should distinguish between the two metadata types.
2. /etc/init.d/mdraid should act sensibly in the nothing-to-do case.

It would also be good to have a clearer description somewhere of how mdraid and lvm work, including how to use the tools. As far as I can see, at present you have to know most of the answer you're looking for before setting out to find it.
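For anyone unsure which superblock format an existing array uses, mdadm can report it directly. The transcript below is a sketch using the device names from this report, not output captured from this system:

# mdadm --detail /dev/md5 | grep Version
        Version : 0.90
# mdadm --detail /dev/md7 | grep Version
        Version : 0.90

The kernel's built-in RAID autodetection only handles 0.90 superblocks on partitions of type 0xfd (Linux raid autodetect); arrays with 1.x metadata must be assembled from userspace, e.g. by an initramfs, udev, or the mdraid script.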
Created attachment 383812: emerge --info
wieneke ~ # mdadm -As
mdadm: No arrays found in config file or automatically
wieneke ~ # echo $?
1
Peter, could you also attach:

1) /proc/mdstat (to show array configuration)
2) /etc/mdadm.conf
3) Output of mdadm -As and subsequent value of $? when run directly

The latter would be to prove that it prints no error message, yet returns a non-zero exit status, leading to an eerror line with no actual message (the red asterisk in isolation that you report). This makes for extremely poor user-facing behaviour.

I would propose this as a workaround:

ebegin "Starting up RAID devices"
output=$(mdadm -As 2>&1)
eend $(( $? && ${#output} )) "${output}"

Granted, that is a kludge. But, if mdadm can't return a meaningful exit status to indicate that it had no work to do, or even be relied upon to print something to stderr when otherwise seeing fit to return a non-zero exit status, I would say that it is the only viable option.
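For context, here is a minimal sketch of where those lines would sit in the OpenRC service script's start() function. This is not the stock /etc/init.d/mdraid - the shebang, depend() block and surrounding conditionals are simplified assumptions - it only illustrates the placement of the workaround:

#!/sbin/runscript
# Sketch only; the real mdraid script differs in detail.

depend() {
    # Assumed ordering: assemble arrays before filesystem checks.
    before checkfs fsck
}

start() {
    ebegin "Starting up RAID devices"
    # Capture stderr so any message can be passed to eend; treat the run as
    # a failure only if mdadm both exited non-zero and printed something.
    output=$(mdadm -As 2>&1)
    eend $(( $? && ${#output} )) "${output}"
}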
(In reply to Kerin Millar from comment #3)
> Peter, could you also attach:
>
> 1) /proc/mdstat (to show array configuration)

# cat /proc/mdstat
Personalities : [raid1]
md5 : active raid1 sdb5[1] sda5[0]
      20971456 blocks [2/2] [UU]

md7 : active raid1 sdb7[1] sda7[0]
      524287936 blocks [2/2] [UU]

md9 : active raid1 sdb9[1] sda9[0]
      395845248 blocks [2/2] [UU]

md9 is unused; I created it to hold another system which was never installed.

> 2) /etc/mdadm.conf

/etc/mdadm.conf contains nothing but comments - it is as installed.

> 3) Output of mdadm -As and subsequent value of $? when run directly

mdadm -As returns value 1 when run from the command line. It did so during boot as well, when I had inserted eerror "$?" immediately after the call to mdadm -As in /etc/init.d/mdraid.
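(A side note on the documentation point in the original report: if /etc/mdadm.conf is ever to be populated, the ARRAY lines can be generated from the running arrays rather than written by hand. The output below is a sketch; the UUIDs are placeholders, not values from this system:)

# mdadm --detail --scan
ARRAY /dev/md5 metadata=0.90 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md7 metadata=0.90 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md9 metadata=0.90 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
# mdadm --detail --scan >> /etc/mdadm.conf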
(In reply to Peter Humphrey from comment #4)
> mdadm -As returns value 1 when run from the command line. It did so during
> boot as well, when I had inserted eerror "$?" immediately after the call to
> mdadm -As in /etc/init.d/mdraid.

If you adjust the mdraid script per my proposal above, does that prevent the anomalous behaviour from occurring?
Any update on this, Peter?
(In reply to Kerin Millar from comment #6)
> Any update on this, Peter?

Oh, sorry Kerin. Substituting your proposed lines in /etc/init.d/mdraid causes this error during startup: "* mdadm: No arrays found in config file or automatically". But the arrays are assembled automatically anyway. During shutdown I get "Cannot get exclusive access to /dev/md5 ...".

Neither of these happens if I just remove mdraid from the boot runlevel.

I hope I'm not going to be asked to re-create md5 with 1.x metadata and test that. I could do so, but it seems excessive.
The same problem appears because mdadm is called from udev rules, well before the /etc/init.d/mdadm service is run.
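To illustrate, the udev rules shipped with mdadm hand each newly detected RAID member straight to mdadm for incremental assembly. The rule below is a simplified paraphrase, not the exact text installed by sys-fs/mdadm:

# /lib/udev/rules.d/64-md-raid.rules (approximate, simplified)
SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
    RUN+="/sbin/mdadm --incremental $env{DEVNAME}"

So by the time the boot runlevel is reached, every array whose member devices have appeared is already assembled, and mdadm -As finds nothing left to do.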
Yes, the mdadm init script is useless on systems with udev nowadays, since all arrays get incrementally assembled by udev.