Currently, the x86 Install Guide recommends a rather cumbersome method of installing GRUB on systems with hardware RAID. Instead of having to create a boot disk, a user can simply run `mount -o bind /dev /mnt/gentoo/dev` immediately before chrooting, and then use the following "setup" command in GRUB during the latter part of the install: `setup --stage2=/boot/grub/stage2 (hd0)`. This method is much simpler and less cumbersome than the installation method currently documented.

Reproducible: Always

Steps to Reproduce:
I've installed Gentoo/GRUB on countless servers, with Adaptec, American Megatrends, Compaq Smart2/* and Smart 5300/5x RAID controllers, and have successfully used this method every time. The URL above points to a forum post I made several months ago regarding this topic; the general directions are applicable to other RAID controllers as well.
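For reference, a minimal sketch of the sequence described above, assuming the standard handbook mount point /mnt/gentoo; the `root (hd0,0)` line is an assumption (it presumes /boot is the first partition on the array) and is not part of the original report:

```
# Before chrooting: bind-mount the host's /dev so the RAID controller's
# device nodes are visible inside the chroot.
mount -o bind /dev /mnt/gentoo/dev
chroot /mnt/gentoo /bin/bash

# Later, during the bootloader step, start the GRUB shell and point it
# explicitly at the stage2 file instead of relying on device.map:
grub --no-floppy
grub> root (hd0,0)
grub> setup --stage2=/boot/grub/stage2 (hd0)
grub> quit
```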
Isn't "setup --stage2=/boot/grub/stage2 (hd0)" the same as just doing "setup (hd0)"? That is, won't stage2 be loaded from /boot/grub/stage2 anyway?
Also, in the configurations you've tested this with, was (hd0) also part of the array or was it a standalone IDE disk? And if it's part of the array, won't the system fail to boot when hd0 crashes but the array still exists in degraded mode?
It would seem that --stage2=/boot/grub/stage2 would be the same, but in practice it doesn't work that way. GRUB's device.map file does not correctly map the BIOS devices on a hardware RAID controller to the OS /dev tree during the bootloader install, so you are forced to explicitly tell it where to get its stage2 file.

As for the second comment: yes and no, depending on the type of RAID you are using. In a redundant RAID level such as 5, 1, or 1+0, the controller will not report a failure to the (hd0) device just because an event degraded the array; events occurring behind the RAID controller should be invisible to normal OS-layer functions. If you're using a non-redundant RAID level like 0, you will have a total failure of the array if one of the disks fails, which is no different from losing a single hard disk. If you're referring to individual containers on a single RAID controller (say, a 5-disk array with 2 disks at RAID 1 and 3 disks at RAID 5), booting would only fail if your (hd0) container had completely failed. Remember that neither GRUB nor Linux cares what's going on behind the RAID controller; that's the whole point of hardware RAID.
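For illustration, here is roughly what a hand-corrected /boot/grub/device.map might look like on a Compaq Smart Array controller; the /dev/cciss/c0d0 node is an assumption based on the usual Linux driver naming and is not taken from the report above:

```
# /boot/grub/device.map -- hypothetical example for a hardware RAID setup.
# The auto-generated map often points (hd0) at the wrong node (or none at
# all), so mapping the BIOS drive to the controller's logical volume by
# hand, or passing --stage2= to setup, works around it.
(hd0)   /dev/cciss/c0d0
```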
Forgot to mention: there was one server I could not get GRUB working on. It was a Compaq ProLiant 6000 with three individual RAID controllers: a Smart2/P, a Smart2/DH, and a SmartArray 3200. That server has three individual backplanes that each require a controller. The problem was that during the GRUB install it was impossible for GRUB to distinguish between the RAID controllers because device.map was incorrect, and specifying --stage2= did not help because GRUB could not tell which controller owned the /boot partition. It understood there were partitions, but it had major issues telling the controllers apart. I had to use LILO in this instance. I don't think it's possible to install GRUB in any fashion on a machine that has more than one RAID controller.
I have added your recommendation to the installation guide. You can view the intermediate result at http://dev.gentoo.org/~swift/gentoo-x86-install.html. If the reviewing team doesn't find any difficulties this will be committed to CVS tomorrow.
Updated.