| Summary: | sys-fs/mdadm - /etc/init.d/mdraid: mdadm -As returns error 1 when the kernel has already assembled an array | | |
|---|---|---|---|
| Product: | Gentoo Linux | Reporter: | peter <peter> |
| Component: | [OLD] Core system | Assignee: | Gentoo's Team for Core System packages <base-system> |
| Status: | UNCONFIRMED | | |
| Severity: | normal | CC: | alexander, joost, kfm, peter, rossi.f |
| Priority: | Normal | | |
| Version: | unspecified | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Package list: | | Runtime testing required: | --- |
| Attachments: | emerge --info | | |
Description

peter@prh.myzen.co.uk 2014-08-27 09:36:36 UTC

Created attachment 383812 [details]: emerge --info
```
wieneke ~ # mdadm -As
mdadm: No arrays found in config file or automatically
wieneke ~ # echo $?
1
```

---

Peter, could you also attach:

1) /proc/mdstat (to show array configuration)
2) /etc/mdadm.conf
3) Output of mdadm -As and the subsequent value of $? when run directly

The latter would be to prove that it prints no error message, yet returns a non-zero exit status, leading to an eerror line with no actual message (the red asterisk in isolation that you report). This makes for extremely poor user-facing behaviour. I would propose this as a workaround:

```
ebegin "Starting up RAID devices"
output=$(mdadm -As 2>&1)
eend $(( $? && ${#output} )) "${output}"
```

Granted, that is a kludge. But if mdadm can't return a meaningful exit status to indicate that it had no work to do, or even be relied upon to print something to stderr when otherwise seeing fit to return a non-zero exit status, I would say that it is the only viable option.

---

(In reply to Kerin Millar in comment #3)
> Peter, could you also attach:
>
> 1) /proc/mdstat (to show array configuration)

```
# cat /proc/mdstat
Personalities : [raid1]
md5 : active raid1 sdb5[1] sda5[0]
      20971456 blocks [2/2] [UU]

md7 : active raid1 sdb7[1] sda7[0]
      524287936 blocks [2/2] [UU]

md9 : active raid1 sdb9[1] sda9[0]
      395845248 blocks [2/2] [UU]
```

md9 is unused; I created it to hold another system which was never installed.

> 2) /etc/mdadm.conf

/etc/mdadm.conf has nothing but comments - it is as installed.

> 3) Output of mdadm -As and subsequent value of $? when run directly

mdadm -As returns value 1 when run from the command line. It did so during boot as well, when I had inserted eerror "$?" immediately after the call to mdadm -As in /etc/init.d/mdraid.

---

(In reply to Peter Humphrey from comment #4)
> mdadm -As returns value 1 when run from the command line. It did so during
> boot as well, when I had inserted eerror "$?" immediately after the call to
> mdadm -As in /etc/init.d/mdraid.

If you adjust the mdraid script per my proposal above, does that prevent the anomalous behaviour from occurring?

---

Any update on this, Peter?

---

(In reply to Kerin Millar from comment #6)
> Any update on this, Peter?

Oh, sorry Kerin. Substituting your eerror line in /etc/init.d/mdraid causes this error during startup: "* mdadm: No arrays found in config file or automatically". But the arrays are assembled automatically anyway. During shutdown I get "Cannot get exclusive access to /dev/md5 ...". Neither of these happens if I just remove mdraid from the boot phase.

I hope I'm not going to be asked to re-create md5 with 1.x metadata and test that. I could do so, but it seems excessive.

---

The same problem appears because mdadm is called from udev rules, so well before the /etc/init.d/mdadm service is run.

---

Yes, the mdadm init script is useless on systems with udev nowadays, since all arrays get incrementally assembled by udev.
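The arithmetic trick in the proposed workaround, `$(( $? && ${#output} ))`, can be illustrated outside of an OpenRC script. Below is a minimal sketch in plain POSIX sh; the `check` function and its `rc`/`output` parameters are hypothetical stand-ins for the `mdadm -As` exit status and captured output, and plain `echo` replaces `ebegin`/`eend`:

```shell
#!/bin/sh
# Sketch only: 'check' is a hypothetical helper, not part of mdadm or OpenRC.
# The expression $(( rc && ${#output} )) is non-zero only when the command
# BOTH exited non-zero AND printed something, so an empty error message is
# never passed to eend (avoiding the bare red asterisk described in the bug).

check() {
    rc=$1       # stand-in for the exit status of 'mdadm -As 2>&1'
    output=$2   # stand-in for the captured stdout/stderr

    if [ $(( rc && ${#output} )) -ne 0 ]; then
        echo "error: ${output}"
    else
        echo "ok"
    fi
}

check 0 ""                                                 # success, no output
check 1 ""                                                 # failure but silent: treated as success
check 1 "mdadm: No arrays found in config file or automatically"  # failure with message
```

Only the third case would produce an eerror line; a silent non-zero exit, as seen when udev has already assembled the arrays, is swallowed.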