The new version of mdadm creates /etc/mdadm.conf in an incompatible format:

ARRAY /dev/md/1 metadata=0.90 UUID=60f4438a:3a7387cc:cb201669:f728008a
ARRAY /dev/md/2 metadata=0.90 UUID=bb0fb0ee:41a37b79:cb201669:f728008a

But the mdadm in the initramfs (which is integrated into busybox) needs the format without the extra slash, like this:

ARRAY /dev/md1 metadata=0.90 UUID=60f4438a:3a7387cc:cb201669:f728008a
ARRAY /dev/md2 metadata=0.90 UUID=bb0fb0ee:41a37b79:cb201669:f728008a

If real_root depends on /dev/md2, the system is unbootable.

Reproducible: Always

Steps to Reproduce:
I don't want to reproduce a really unbootable system, but I can show that the mdadm in the initramfs does not understand the new format.
1. Create a software RAID (not md0, but md1 and md2).
2. Create /etc/mdadm.conf ( mdadm --detail --scan >> /etc/mdadm.conf ).
3. Create an initramfs with genkernel.
4. Unpack the initramfs.
5. Stop the software RAID.
6. Restart the software RAID with the mdadm from busybox in the initramfs. It starts as md0, not as md1 or md2 as listed in /etc/mdadm.conf.

Actual Results:
tau initramfs # mdadm -S /dev/md1
mdadm: stopped /dev/md1
tau initramfs # ./bin/busybox mdadm --assemble
mdadm: /dev/md/1 has been started with 2 drives.
tau initramfs # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[0] sda1[1]
      48064 blocks [2/2] [UU]
      bitmap: 0/6 pages [0KB], 4KB chunk

md2 : active raid1 sdb2[0] sda2[1]
      17872192 blocks [2/2] [UU]
      bitmap: 10/137 pages [40KB], 64KB chunk

unused devices: <none>
tau initramfs # tail /etc/mdadm.conf
# mail address and/or a program.  This can be given with "mailaddr"
# and "program" lines to that monitoring can be started using
#    mdadm --follow --scan & echo $! > /var/run/mdadm
# If the lines are not found, mdadm will exit quietly
#MAILADDR root@mydomain.tld
#PROGRAM /usr/sbin/handle-mdadm-events
#ARRAY /dev/md1 metadata=0.90 UUID=60f4438a:3a7387cc:cb201669:f728008a
#ARRAY /dev/md2 metadata=0.90 UUID=bb0fb0ee:41a37b79:cb201669:f728008a
ARRAY /dev/md/1 metadata=0.90 UUID=60f4438a:3a7387cc:cb201669:f728008a
ARRAY /dev/md/2 metadata=0.90 UUID=bb0fb0ee:41a37b79:cb201669:f728008a
tau initramfs #

Expected Results:
tau initramfs # mdadm -S /dev/md1
mdadm: stopped /dev/md1
tau initramfs # ./bin/busybox mdadm --assemble
mdadm: /dev/md/1 has been started with 2 drives.
tau initramfs # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb1[0] sda1[1]
      48064 blocks [2/2] [UU]
      bitmap: 0/6 pages [0KB], 4KB chunk

md2 : active raid1 sdb2[0] sda2[1]
      17872192 blocks [2/2] [UU]
      bitmap: 10/137 pages [40KB], 64KB chunk

unused devices: <none>
tau initramfs # tail /etc/mdadm.conf
# mail address and/or a program.  This can be given with "mailaddr"
# and "program" lines to that monitoring can be started using
#    mdadm --follow --scan & echo $! > /var/run/mdadm
# If the lines are not found, mdadm will exit quietly
#MAILADDR root@mydomain.tld
#PROGRAM /usr/sbin/handle-mdadm-events
#ARRAY /dev/md1 metadata=0.90 UUID=60f4438a:3a7387cc:cb201669:f728008a
#ARRAY /dev/md2 metadata=0.90 UUID=bb0fb0ee:41a37b79:cb201669:f728008a
ARRAY /dev/md/1 metadata=0.90 UUID=60f4438a:3a7387cc:cb201669:f728008a
ARRAY /dev/md/2 metadata=0.90 UUID=bb0fb0ee:41a37b79:cb201669:f728008a
tau initramfs #
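The incompatibility above boils down to the extra slash in the generated ARRAY lines. As a workaround sketch (the sed expression is mine, not part of mdadm or genkernel; it assumes numeric array names as in this report), the generated config could be rewritten before building the initramfs:

```shell
# Rewrite "ARRAY /dev/md/N ..." back to "ARRAY /dev/mdN ..." so the busybox
# mdadm inside the initramfs can parse the lines (workaround, not a fix)
sed 's|^ARRAY /dev/md/\([0-9][0-9]*\)|ARRAY /dev/md\1|' /etc/mdadm.conf
```

Redirect the output to a file and point genkernel at it, or edit /etc/mdadm.conf in place with `sed -i` after verifying the result.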
Do you run into the same problem with genkernel 3.4.16, too?
With genkernel 3.4.16 my system boots, but not cleanly.

mdadm assembles my arrays with different numbers than my defaults:
/dev/md126 and /dev/md127
(instead of my /dev/md1 and /dev/md2)

The VG on /dev/md127 was found, so the system boots successfully.

But /boot is not mounted, because fstab contains:
/dev/md1  /boot  ext2  noatime  1 2

I think that in other cases, when the real root is on /dev/mdXX rather than on a VG, the system will be unbootable.
(In reply to comment #2)
> With genkernel 3.4.16 my system boots, but not cleanly.
>
> mdadm assembles my arrays with different numbers than my defaults:
> /dev/md126 and /dev/md127
> (instead of my /dev/md1 and /dev/md2)
>
> The VG on /dev/md127 was found, so the system boots successfully.
>
> But /boot is not mounted, because fstab contains:
> /dev/md1  /boot  ext2  noatime  1 2
>
> I think that in other cases, when the real root is on /dev/mdXX rather than
> on a VG, the system will be unbootable.

This is somewhat expected with newer versions of mdadm. I would advise you to hack your mdadm.conf to create the expected device nodes (or use the ones in /dev/md/, which are supposed to be consistent and are the "new" /dev/md*), or use a LABEL or UUID for mounting...
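As a sketch of the UUID-based mounting suggested above (the UUID shown is a placeholder for the filesystem UUID that `blkid /dev/md126` would report, not the RAID member UUID from mdadm.conf):

```
# fstab sketch: mount /boot by filesystem UUID instead of an unstable /dev/mdN
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  /boot  ext2  noatime  1 2
```

This keeps /boot mountable regardless of which mdN number the array is assembled as.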
The genkernel parameter --mdadm-config= could be of use here.
Would it not be better to include "--mdadm-config=/etc/mdadm.conf" as the default when using "genkernel --mdadm"? This would avoid future problems when a user forgets to use "--mdadm-config=/etc/mdadm.conf".
(In reply to comment #5)
> Would it not be better to include "--mdadm-config=/etc/mdadm.conf" as the
> default when using "genkernel --mdadm"?
> This would avoid future problems when a user forgets to use
> "--mdadm-config=/etc/mdadm.conf".

No, a broken or empty /etc/mdadm.conf would kill that boot, and there are a number of reasons why we want autodetect as the default. Also, autodetect *should* create sane defaults.

The problem here is that upstream has gone from a somewhat persistent /dev/md* to a somewhat persistent /dev/md/* instead. And they really urge people to use /dev/md/<name>, a LABEL, or a UUID when addressing or mounting a RAID, as a dynamic /dev causes too much trouble with everything else.
Do you use devtmpfs? Which version of the nodes does it create?
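The question above can be checked with something like the following (a sketch; it assumes a typical Linux system with /proc mounted):

```shell
# Is devtmpfs available in the running kernel?
grep devtmpfs /proc/filesystems

# Which node style was created: /dev/mdN, /dev/md/N, or both?
ls -l /dev/md* 2>/dev/null || true
```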
(In reply to comment #0)
> The new version of mdadm creates /etc/mdadm.conf in an incompatible format:
> ARRAY /dev/md/1 metadata=0.90 UUID=60f4438a:3a7387cc:cb201669:f728008a
> ARRAY /dev/md/2 metadata=0.90 UUID=bb0fb0ee:41a37b79:cb201669:f728008a

I think this is an effect of using an old udev with devfs-compat enabled, and not an mdadm issue. Am I right?
No response from user in 3+ years.