I know the RAID code is a bit clunky, but here is another minor complaint: /etc/init.d/checkfs (baselayout 1.8.6.10-r1) checks /etc/mdadm.conf for RAID volumes and drops to an admin shell if volumes are listed there but cannot be started. If the entries are old and stale, that should produce a warning, not a full error. It took me a bit of work to figure out where it was getting the non-existent volume from.

Raidtools 1.00.3-r1

Reproducible: Always
nothing to do with me...
So it's not critical for 90% of the people out there if their RAID does not start (or does not seem to)?
I think my point was more that it's taking data that is possibly bad (i.e. from mdadm.conf) and acting on it. Maybe there's a different way to check whether the RAID is there besides looking at mdadm.conf? And yes, it is important that RAID starts :) But it was a PITA to track down why my system was dropping to a shell when all my RAID arrays were starting!
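One alternative to trusting /etc/mdadm.conf would be to ask the kernel which md arrays are actually active via /proc/mdstat, which lists running arrays regardless of what the config file claims. A minimal sketch (the function name `list_active_arrays` is hypothetical, not anything checkfs actually contains):

```shell
#!/bin/sh
# Hypothetical sketch: read active md arrays from an mdstat-format file
# (normally /proc/mdstat) instead of trusting /etc/mdadm.conf.
# Active-array lines in /proc/mdstat look like:
#   md2 : active raid5 hdg1[2] hde1[1] hda1[0]
list_active_arrays() {
    # $1: path to an mdstat-format file
    awk '/^md[0-9]+ : active/ { print $1 }' "$1"
}
```

A boot script could then warn only about arrays named in mdadm.conf that are missing from this list, rather than dropping to a shell over a stale config entry.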
I am running into the same issue, although mine may be caused by a bug in mdadm. I have 2 RAID1 arrays that start up just fine, but my RAID5 array (md2) can't be started automatically by mdadm for some reason.

/etc/mdadm.conf:

DEVICE /dev/hd[aegik]1,/dev/hd[gk]3
DEVICE /dev/discs/*/*
ARRAY /dev/md0 devices=/dev/hdi1,/dev/hdk1
ARRAY /dev/md1 devices=/dev/hdi3,/dev/hdk3
ARRAY /dev/md2 devices=/dev/hda1,/dev/hde1,/dev/hdg1

md0 and md1 (RAID1) have no problems at all. For md2 (RAID5), if I run the same command checkfs runs, I get an error:

# mdadm -As /dev/md2
mdadm: no devices found for /dev/md2

I am, however, able to start the array just fine by specifying the drives manually:

# mdadm -A --run /dev/md2 /dev/hda1 /dev/hde1 /dev/hdg1
mdadm: /dev/md2 has been started with 3 drives.

That's the exact same set of partitions listed in mdadm.conf, so mdadm must not be reading the list of drives properly. I worked around this once by commenting out md2 in fstab and running the above mdadm command plus a mount command in local.start. Several days later, when I had a crash and the array was not synced, boot dropped me to the root shell. Even after it was synced I could not boot without getting dropped to the shell; I had to remove the md2 line from fstab entirely before it would let me boot. If that hadn't worked, I was ready to rip the entire RAID section out of checkfs.
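One possible (unconfirmed) explanation: DEVICE lines in mdadm.conf take space-separated wildcard patterns, so the comma in the first DEVICE line above may be read as part of a single pattern that matches no real device node, leaving /dev/hda1, /dev/hde1, and /dev/hdg1 outside mdadm's scan pool. A sketch of a space-separated rewrite that might behave differently (a guess, not a verified fix):

```
# /etc/mdadm.conf -- DEVICE patterns are separated by whitespace,
# not commas (commas are only used inside ARRAY devices= lists):
DEVICE /dev/hd[aegik]1 /dev/hd[gk]3
DEVICE /dev/discs/*/*
ARRAY /dev/md2 devices=/dev/hda1,/dev/hde1,/dev/hdg1
```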
baselayout-1.11.10+ will execute `mdadm -As` and will not abort if it fails ... it'll just spit out warning/error messages about the problem. However, if you have junk in, say, /etc/fstab that refers to RAID devices which failed to start, your machine will still be halted, and that behavior will not be changed.
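For the fstab half of the problem, one workaround (in line with the manual mounting from local.start described above) is to mark the entry noauto so the boot scripts skip fsck'ing and mounting it; the mount point and filesystem type below are made-up examples:

```
# /etc/fstab -- hypothetical entry for the RAID5 array.
# "noauto" keeps boot scripts from touching it, so a failed
# assembly no longer halts the machine; mount it later by hand.
/dev/md2   /data   ext3   noauto,defaults   0 0
```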