
Bug 372567

Summary: sys-kernel/genkernel-3.4.10.907: mdadm in genkernel is not compatible with sys-fs/mdadm-3.1.4, unbootable system
Product: Gentoo Hosted Projects
Component: genkernel
Status: RESOLVED NEEDINFO
Severity: normal
Priority: Normal
Version: unspecified
Hardware: All
OS: Linux
Reporter: Artem V. Ryabov <avryabov>
Assignee: Gentoo Genkernel Maintainers <genkernel>
CC: alexander
Whiteboard:
Package list:
Runtime testing required: ---

Description Artem V. Ryabov 2011-06-22 11:09:17 UTC
The new mdadm version creates /etc/mdadm.conf in an incompatible format:
ARRAY /dev/md/1 metadata=0.90 UUID=60f4438a:3a7387cc:cb201669:f728008a
ARRAY /dev/md/2 metadata=0.90 UUID=bb0fb0ee:41a37b79:cb201669:f728008a

But the mdadm in the initramfs (which is integrated into busybox) needs the format without the slash after "md", like this:
ARRAY /dev/md1 metadata=0.90 UUID=60f4438a:3a7387cc:cb201669:f728008a
ARRAY /dev/md2 metadata=0.90 UUID=bb0fb0ee:41a37b79:cb201669:f728008a

If real_root depends on /dev/md2, the system is unbootable.
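
As an illustration only (a hypothetical workaround, not part of the original report), stripping the extra slash would make the file readable by the busybox mdadm:

  # hypothetical one-liner; turns "ARRAY /dev/md/1 ..." into "ARRAY /dev/md1 ..."
  sed -i 's|^ARRAY /dev/md/|ARRAY /dev/md|' /etc/mdadm.conf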


Reproducible: Always

Steps to Reproduce:
I don't want to reproduce a really unbootable system, but I can show that the mdadm in the initramfs does not understand the new format.
1. create a software RAID (not md0, but md1 and md2)
2. create /etc/mdadm.conf ( mdadm --detail --scan >> /etc/mdadm.conf )
3. create initramfs with genkernel
4. unpack initramfs
5. stop software raid
6. restart the software RAID with the mdadm from busybox in the initramfs. It starts as md0, not as md1 or md2 as listed in /etc/mdadm.conf
Actual Results:  
tau initramfs # mdadm -S /dev/md1
mdadm: stopped /dev/md1
tau initramfs # ./bin/busybox mdadm --assemble
mdadm: /dev/md/1 has been started with 2 drives.
tau initramfs # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[0] sda1[1]
      48064 blocks [2/2] [UU]
      bitmap: 0/6 pages [0KB], 4KB chunk

md2 : active raid1 sdb2[0] sda2[1]
      17872192 blocks [2/2] [UU]
      bitmap: 10/137 pages [40KB], 64KB chunk

unused devices: <none>
tau initramfs # tail /etc/mdadm.conf
# mail address and/or a program.  This can be given with "mailaddr"
# and "program" lines to that monitoring can be started using
#    mdadm --follow --scan & echo $! > /var/run/mdadm
# If the lines are not found, mdadm will exit quietly
#MAILADDR root@mydomain.tld
#PROGRAM /usr/sbin/handle-mdadm-events
#ARRAY /dev/md1 metadata=0.90 UUID=60f4438a:3a7387cc:cb201669:f728008a
#ARRAY /dev/md2 metadata=0.90 UUID=bb0fb0ee:41a37b79:cb201669:f728008a
ARRAY /dev/md/1 metadata=0.90 UUID=60f4438a:3a7387cc:cb201669:f728008a
ARRAY /dev/md/2 metadata=0.90 UUID=bb0fb0ee:41a37b79:cb201669:f728008a
tau initramfs #



Expected Results:  
tau initramfs # mdadm -S /dev/md1
mdadm: stopped /dev/md1
tau initramfs # ./bin/busybox mdadm --assemble
mdadm: /dev/md/1 has been started with 2 drives.
tau initramfs # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb1[0] sda1[1]
      48064 blocks [2/2] [UU]
      bitmap: 0/6 pages [0KB], 4KB chunk

md2 : active raid1 sdb2[0] sda2[1]
      17872192 blocks [2/2] [UU]
      bitmap: 10/137 pages [40KB], 64KB chunk

unused devices: <none>
tau initramfs # tail /etc/mdadm.conf
# mail address and/or a program.  This can be given with "mailaddr"
# and "program" lines to that monitoring can be started using
#    mdadm --follow --scan & echo $! > /var/run/mdadm
# If the lines are not found, mdadm will exit quietly
#MAILADDR root@mydomain.tld
#PROGRAM /usr/sbin/handle-mdadm-events
#ARRAY /dev/md1 metadata=0.90 UUID=60f4438a:3a7387cc:cb201669:f728008a
#ARRAY /dev/md2 metadata=0.90 UUID=bb0fb0ee:41a37b79:cb201669:f728008a
ARRAY /dev/md/1 metadata=0.90 UUID=60f4438a:3a7387cc:cb201669:f728008a
ARRAY /dev/md/2 metadata=0.90 UUID=bb0fb0ee:41a37b79:cb201669:f728008a
tau initramfs #
Comment 1 Sebastian Pipping gentoo-dev 2011-07-01 02:16:01 UTC
Do you run into the same problem with genkernel 3.4.16, too?
Comment 2 Artem V. Ryabov 2011-07-01 15:04:43 UTC
With genkernel 3.4.16 my system boots.

But not cleanly: mdadm assembles my arrays with different numbers than my defaults:
/dev/md126 and /dev/md127
(instead of my /dev/md1 and /dev/md2).
The VG on /dev/md127 was found, so the system boots successfully.

But /boot is not mounted, because fstab contains:
/dev/md1 /boot ext2 noatime 1 2

I think that in other cases, when the real root is on /dev/mdXX rather than on a VG, the system will be unbootable.
Comment 3 Xake 2011-07-01 15:24:32 UTC
(In reply to comment #2)
> With genkernel 3.4.16 my system boots.
> 
> But not cleanly: mdadm assembles my arrays with different numbers than my defaults:
> /dev/md126 and /dev/md127
> (instead of my /dev/md1 and /dev/md2).
> The VG on /dev/md127 was found, so the system boots successfully.
> 
> But /boot is not mounted, because fstab contains:
> /dev/md1 /boot ext2 noatime 1 2
> 
> I think that in other cases, when the real root is on /dev/mdXX rather than on a
> VG, the system will be unbootable.

This is somewhat expected with newer versions of mdadm. I would advise you to hack your mdadm.conf to create the expected devnodes (or use the ones in /dev/md/, which are supposed to be consistent, i.e. the "new" /dev/md* names), or mount by label or UUID...
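
For example (a sketch with hypothetical values, modelled on the conf lines quoted earlier in this bug), pinning the legacy devnode in mdadm.conf or mounting /boot by filesystem UUID would look roughly like:

  # /etc/mdadm.conf: request the legacy devnode explicitly (array UUID as above)
  ARRAY /dev/md1 metadata=0.90 UUID=60f4438a:3a7387cc:cb201669:f728008a

  # /etc/fstab: mount by filesystem UUID (not the array UUID) instead of a devnode
  UUID=<filesystem-uuid-of-the-boot-partition> /boot ext2 noatime 1 2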
Comment 4 Sebastian Pipping gentoo-dev 2011-07-01 19:29:57 UTC
The genkernel parameter --mdadm-config= could be of use here.
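
For example (a sketch, assuming a standard genkernel invocation):

  genkernel --mdadm --mdadm-config=/etc/mdadm.conf initramfs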
Comment 5 Axel Schöner 2011-08-13 13:33:44 UTC
Would it not be better to include "--mdadm-config=/etc/mdadm.conf" as the default when using "genkernel --mdadm"?
That would avoid future problems when users forget to pass "--mdadm-config=/etc/mdadm.conf".
Comment 6 Xake 2011-08-13 21:31:04 UTC
(In reply to comment #5)
> Would it not be better to include "--mdadm-config=/etc/mdadm.conf" as the
> default when using "genkernel --mdadm"?
> That would avoid future problems when users forget to pass
> "--mdadm-config=/etc/mdadm.conf".

No, a broken/empty /etc/mdadm.conf would kill that boot, and there are a number of reasons why we want autodetect as the default.

Also, autodetect *should* create sane defaults. The problem here is that upstream has gone from having a somewhat persistent /dev/md* to a somewhat persistent /dev/md/* instead. And they really urge people to use /dev/md/<name>, LABEL or UUID when addressing or mounting a RAID, as a dynamic /dev causes too many problems with everything else.
Comment 7 Robin Johnson archtester Gentoo Infrastructure gentoo-dev Security 2012-02-12 07:07:18 UTC
Do you use devtmpfs? Which version of the nodes does it create?
Comment 8 Alexander Tsoy 2013-01-31 14:51:32 UTC
(In reply to comment #0)
> The new mdadm version creates /etc/mdadm.conf in an incompatible format:
> ARRAY /dev/md/1 metadata=0.90 UUID=60f4438a:3a7387cc:cb201669:f728008a
> ARRAY /dev/md/2 metadata=0.90 UUID=bb0fb0ee:41a37b79:cb201669:f728008a

I think this is an effect of using an old udev with devfs-compat enabled, and not an mdadm issue. Am I right?
Comment 9 Robin Johnson archtester Gentoo Infrastructure gentoo-dev Security 2017-01-02 23:41:38 UTC
No response from user in 3+ years.