I am booting my system from an mdadm RAID setup, using the kernel command line: kernel /boot/vmlinuz root=/dev/md6 init=/sbin/bootchartd raid=noautodetect md=6,/dev/sda6,/dev/sdb6 rootfstype=ext4. This works fine, except that when the devfs script starts, it spews out "devfs: waiting for udev", counting down from 60 to 0 in increments of 9. After a minute it finally times out, with udev announcing "queue contains /sys/devices/virtual/block/md6". It seems to me that udev is trying to auto-assemble a RAID array that is already active, fails, and so causes the timeout.
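For reference, a hypothetical GRUB (legacy) grub.conf stanza carrying the same parameters might look like this (the title and root entries are assumptions, not taken from my actual config):

```
title Gentoo Linux (mdadm RAID)
root (hd0,0)
kernel /boot/vmlinuz root=/dev/md6 init=/sbin/bootchartd raid=noautodetect md=6,/dev/sda6,/dev/sdb6 rootfstype=ext4
```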
Please attach the output of emerge --info to this bug.
Created attachment 243467 [details] emerge --info
This turned out to be my fault: my swap partitions had their partition type set to 0xfd (Linux RAID autodetect) instead of 0x82 (Linux swap). Fixing this solved the problem. What made it hard to diagnose, however, was that with the 'raid=noautodetect' parameter the kernel did not report the missing RAID superblock on the swap partitions, and the udev script complained about the already-assembled arrays instead of the actual problem.
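For anyone hitting the same thing, here is a minimal sketch of the fix using non-interactive sfdisk, run against a scratch image file (disk.img is a placeholder) so nothing real is touched. On a live system the device would be the actual disk (e.g. /dev/sda) and interactive fdisk works just as well (commands: t, 82, w):

```shell
# Create a scratch image with one partition wrongly typed 0xfd
# (Linux raid autodetect), mimicking the misconfigured swap partition.
truncate -s 16M disk.img
printf 'label: dos\n,,fd\n' | sfdisk disk.img

# Re-type partition 1 to 0x82 (Linux swap) without touching its data.
sfdisk --part-type disk.img 1 82

# Dump the table; the partition should now show type=82.
sfdisk -d disk.img
```

After fixing the type on the real disk, re-run mkswap on the partition and reboot; the kernel will no longer try to pull it into a RAID array.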