Bug 521280 - sys-fs/mdadm - /etc/init.d/mdraid: mdadm -As returns error 1 when the kernel has already assembled an array
Summary: sys-fs/mdadm - /etc/init.d/mdraid: mdadm -As returns error 1 when the kernel has already assembled an array
Status: UNCONFIRMED
Alias: None
Product: Gentoo Linux
Classification: Unclassified
Component: [OLD] Core system
Hardware: All Linux
Importance: Normal normal
Assignee: Gentoo's Team for Core System packages
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2014-08-27 09:36 UTC by peter@prh.myzen.co.uk
Modified: 2014-11-20 23:54 UTC
CC List: 5 users

See Also:
Package list:
Runtime testing required: ---


Attachments
emerge --info (emerge.info,5.00 KB, text/plain)
2014-08-27 09:38 UTC, peter@prh.myzen.co.uk

Description peter@prh.myzen.co.uk 2014-08-27 09:36:36 UTC
When sys-fs/mdadm (3.3.1-r2) is called by /etc/init.d/mdraid early in the boot sequence, and the kernel has already assembled the RAID-1 arrays, mdadm exits non-zero with an empty error string. This results in a lone red asterisk on the line following "Starting up RAID devices..."

Reproducible: Always




My amd64 system (not ~) has /boot on /dev/sda1, / on /dev/md5 and everything else in lvm2 volumes on /dev/md7. /dev/md5 is built on /dev/sda5 and /dev/sdb5; /dev/md7 is built on /dev/sda7 and /dev/sdb7. I set metadata 0.9 when creating /dev/md5 because it's the root device, so the kernel needs to be able to assemble it. I thought I'd set metadata 1.x on /dev/md7, but apparently I was mistaken and set 0.9.
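[A quick way to confirm which metadata version an array member actually carries is `mdadm --examine` on a member device. The real command must run as root against an actual device, so the sketch below only parses a sample excerpt of its output; the sample text is an assumption based on the usual --examine layout.]

```shell
# Hypothetical excerpt of `mdadm --examine /dev/sda5` output; the real
# command needs root and a live member device, so we parse a sample here.
sample='/dev/sda5:
          Magic : a92b4efc
        Version : 0.90.00
     Raid Level : raid1'

# Extract just the metadata version field.
version=$(printf '%s\n' "$sample" | awk '/Version/ {print $3}')
echo "metadata version: $version"    # -> metadata version: 0.90.00
```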

The postinst message from sys-fs/mdadm is ambiguous about whether I need mdraid in my boot run-level, and the installation docs at the time said I definitely do, so I ran "rc-update add mdraid boot".

Now, at boot time, the kernel detects /dev/md5 and /dev/md7 and assembles them, so by the time /etc/init.d/mdraid is called the devices are already active. The script then fails to assemble them again and returns an error - but the error message is empty.

I see two problems:
1.  The docs should distinguish between the two metadata types.
2.  /etc/init.d/mdraid should act sensibly in the nothing-to-do case.
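[One way point 2 could be handled is a nothing-to-do guard that checks whether arrays are already running before calling mdadm. A minimal sketch, using a sample mdstat file so the logic can be demonstrated without a live array - the real script would read /proc/mdstat, and the "active" line format assumed here matches the /proc/mdstat output shown later in this bug:]

```shell
# Build a sample mdstat file to stand in for /proc/mdstat.
mdstat=$(mktemp)
cat > "$mdstat" <<'EOF'
Personalities : [raid1]
md5 : active raid1 sdb5[1] sda5[0]
      20971456 blocks [2/2] [UU]
EOF

# True if at least one md array is already active.
arrays_already_active() {
    grep -q '^md[0-9].* : active' "$1"
}

if arrays_already_active "$mdstat"; then
    echo "arrays already assembled; skipping mdadm -As"
fi
rm -f "$mdstat"
```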

It would be good also to have a clearer description somewhere of how mdraid and lvm work, including how to use the tools. As far as I can see, at present you have to know most of the answer you're looking for before setting out to find it.
Comment 1 peter@prh.myzen.co.uk 2014-08-27 09:38:18 UTC
Created attachment 383812 [details]
emerge --info
Comment 2 Jeroen Roovers (RETIRED) gentoo-dev 2014-08-27 09:56:56 UTC
wieneke ~ # mdadm -As
mdadm: No arrays found in config file or automatically
wieneke ~ # echo $?
1
Comment 3 kfm 2014-08-27 11:35:02 UTC
Peter, could you also attach:

1) /proc/mdstat (to show array configuration)
2) /etc/mdadm.conf
3) Output of mdadm -As and subsequent value of $? when run directly

The latter would be to prove that it prints no error message, yet returns a non-zero exit status, leading to an eerror line with no actual message (the red asterisk in isolation that you report).

This makes for extremely poor user-facing behaviour. I would propose this as a workaround:

  ebegin "Starting up RAID devices"
  output=$(mdadm -As 2>&1)
  eend $(( $? && ${#output} )) "${output}"

Granted, that is a kludge. But, if mdadm can't return a meaningful exit status to indicate that it had no work to do, or even be relied upon to print something to stderr when otherwise seeing fit to return a non-zero exit status, I would say that it is the only viable option.
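[To illustrate why the arithmetic in the proposed workaround does the right thing: $(( status && ${#output} )) is 0 - success, from eend's point of view - whenever the output is empty, even if mdadm's exit status was non-zero. A standalone sketch with a hypothetical report function standing in for the eend call:]

```shell
# Simulate the cases the workaround distinguishes: mdadm's exit status
# and the length of its captured output are ANDed together, so a silent
# non-zero exit no longer produces the lone red asterisk.
report() {
    status=$1 output=$2
    echo "eend would receive: $(( status && ${#output} ))"
}

report 1 ""                  # silent failure   -> eend would receive: 0
report 1 "mdadm: no arrays"  # failure with msg -> eend would receive: 1
report 0 ""                  # clean success    -> eend would receive: 0
```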
Comment 4 peter@prh.myzen.co.uk 2014-08-27 12:54:30 UTC
(In reply to Kerin Millar from comment #3)
> Peter, could you also attach:
> 
> 1) /proc/mdstat (to show array configuration)

# cat /proc/mdstat
Personalities : [raid1] 
md5 : active raid1 sdb5[1] sda5[0]
      20971456 blocks [2/2] [UU]
      
md7 : active raid1 sdb7[1] sda7[0]
      524287936 blocks [2/2] [UU]
      
md9 : active raid1 sdb9[1] sda9[0]
      395845248 blocks [2/2] [UU]

md9 is unused; I created it to hold another system which was never installed.

> 2) /etc/mdadm.conf

/etc/mdadm.conf has nothing but comments - it is as installed.

> 3) Output of mdadm -As and subsequent value of $? when run directly

mdadm -As returns value 1 when run from the command line. It did so during boot as well, when I had inserted eerror "$?" immediately after the call to mdadm -As in /etc/init.d/mdraid.
Comment 5 kfm 2014-08-27 14:06:04 UTC
(In reply to Peter Humphrey from comment #4)

> mdadm -As returns value 1 when run from the command line. It did so during
> boot as well, when I had inserted eerror "$?" immediately after the call to
> mdadm -As in /etc/init.d/mdraid.

If you adjust the mdraid script per my proposal above, does that prevent the anomalous behaviour from occurring?
Comment 6 kfm 2014-09-07 12:55:20 UTC
Any update on this, Peter?
Comment 7 peter@prh.myzen.co.uk 2014-09-07 16:18:32 UTC
(In reply to Kerin Millar from comment #6)
> Any update on this, Peter?

Oh, sorry Kerin.

Substituting your eerror line in /etc/init.d/mdraid causes this error during startup: "* mdadm: No arrays found in config file or automatically". But the arrays are assembled automatically anyway.

During shutdown I get "Cannot get exclusive access to /dev/md5 ..."

Neither of these happens if I just remove mdraid from the boot phase.

I hope I'm not going to be asked to re-create md5 with 1.x metadata and test that. I could do so but it seems excessive.
Comment 8 Fabio Rossi 2014-11-12 22:38:18 UTC
The same problem appears because mdadm is called from udev rules, well before the /etc/init.d/mdadm service is run.
Comment 9 Alexander Tsoy 2014-11-20 23:54:34 UTC
Yes, the mdadm init script is useless nowadays on systems with udev, since all arrays get incrementally assembled by udev.