Bug 29243 - checkfs drops to a root shell if raid volumes listed in mdadm.conf don't exist
Summary: checkfs drops to a root shell if raid volumes listed in mdadm.conf don't exist
Status: RESOLVED FIXED
Alias: None
Product: Gentoo Linux
Classification: Unclassified
Component: [OLD] baselayout
Hardware: x86 Linux
Importance: High normal
Assignee: Gentoo's Team for Core System packages
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2003-09-20 23:35 UTC by Alan
Modified: 2005-03-01 20:36 UTC
CC List: 1 user

See Also:
Package list:
Runtime testing required: ---


Description Alan 2003-09-20 23:35:13 UTC
I know the raid code is a bit clunky, but to add another minor complaint:
/etc/init.d/checkfs (baselayout 1.8.6.10-r1) checks /etc/mdadm.conf for raid
volumes and drops to an admin shell if they are listed there but cannot be
started.  If the entries are old and stale this should produce a warning, not a
full error.  It took me a bit of work to figure out where it was getting the
non-existent volume from.
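
Roughly, the behaviour in question amounts to shell logic like the following
(a paraphrased sketch, not the actual checkfs code; drop_to_admin_shell is a
made-up stand-in for whatever the script really calls):

# if mdadm.conf lists any arrays, try to assemble them all via scan mode
if [ -f /etc/mdadm.conf ] && grep -q '^ARRAY' /etc/mdadm.conf ; then
    /sbin/mdadm -As || drop_to_admin_shell   # stand-in: any failure aborts the boot here
fi

The point is that a stale ARRAY entry takes the same failure path as a
genuinely broken array.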

Raidtools 1.00.3-r1

Reproducible: Always
Comment 1 rob holland (RETIRED) gentoo-dev 2003-09-25 07:41:22 UTC
nothing to do with me...
Comment 2 Martin Schlemmer (RETIRED) gentoo-dev 2003-10-13 14:13:30 UTC
So it's not critical for 90% of the people out there if their raid does not
start (or does not seem to)?
Comment 3 Alan 2003-10-13 15:01:05 UTC
I think my point was more that it's taking data that is possibly bad (i.e.
from mdadm.conf) and acting on it.  Maybe there's a different way to check
if the raid is there besides looking at mdadm.conf?

And yes, it is important if raid starts :)  But it was a PITA to track down
why my system was dropping to a shell when all my raid arrays were starting!
Comment 4 Andy Grundman 2004-02-18 06:14:25 UTC
I am running into the same issue, although mine may be caused by a bug in mdadm.

I have 2 RAID1 arrays that start up just fine, but my RAID5 array (md2) can't be started automatically by mdadm for some reason.

/etc/mdadm.conf:
DEVICE /dev/hd[aegik]1,/dev/hd[gk]3
DEVICE /dev/discs/*/*
ARRAY /dev/md0 devices=/dev/hdi1,/dev/hdk1
ARRAY /dev/md1 devices=/dev/hdi3,/dev/hdk3
ARRAY /dev/md2 devices=/dev/hda1,/dev/hde1,/dev/hdg1

md0 and md1 (raid1) have no problems at all.  For md2 (raid5), if I try to run the same command that checkfs runs, I get an error:
# mdadm -As /dev/md2
mdadm: no devices found for /dev/md2

I am, however, able to start the array just fine by specifying the drives manually:
# mdadm -A --run /dev/md2 /dev/hda1 /dev/hde1 /dev/hdg1
mdadm: /dev/md2 has been started with 3 drives.

That's the exact same set of partitions listed in mdadm.conf, so mdadm must not be reading the list of drives properly.
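
(For what it's worth, DEVICE lines in mdadm.conf conventionally take space-separated device names or patterns, so a comma-joined pattern like the first DEVICE line above may match nothing when mdadm scans.  A hedged example of the usual form, using the same paths as above:

DEVICE /dev/hd[aegik]1 /dev/hd[gk]3
DEVICE /dev/discs/*/*

Whether that is actually what trips up the scan here is just a guess.)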

I worked around this once by commenting out md2 in fstab, and manually running the above mdadm command and a mount command in local.start.  Several days later when I had a crash and the array was not synced, it would drop me to the root shell during boot.  Even after it was synced I could not boot without getting dropped to the shell.  I had to completely remove the md2 line from fstab before it would let me boot.  If that didn't work I was ready to rip out the entire RAID section from checkfs.
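
The local.start workaround was roughly the following (a sketch; the mount point is made up for illustration):

# assemble md2 by hand, then mount it -- run from local.start at the end of boot
/sbin/mdadm -A --run /dev/md2 /dev/hda1 /dev/hde1 /dev/hdg1
mount /dev/md2 /mnt/raid5    # example mount point, not the real one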
Comment 5 SpanKY gentoo-dev 2005-03-01 20:36:25 UTC
baselayout-1.11.10+ will execute `mdadm -As` and will not abort if it fails ... it'll just spit out warning/error messages about the problem

However, if you have junk in, say, /etc/fstab that refers to raid devices which failed to start, your machine will still be halted, and that behavior will not be changed.
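
Sketched roughly (not the literal baselayout code), the raid step now behaves more like:

# assemble arrays from /etc/mdadm.conf, but only warn on failure
if [ -x /sbin/mdadm ] && [ -f /etc/mdadm.conf ] ; then
    /sbin/mdadm -As || ewarn "Failed to start some raid arrays listed in /etc/mdadm.conf"
fi

i.e. the failure is reported (ewarn/eerror come from the standard baselayout functions) instead of stopping the boot; a dead array that is still referenced from /etc/fstab will of course still fail later at the fsck/mount stage.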