Summary: | sys-apps/hal-0.5.12_rc1 fails to correctly probe md-raid devices. | |
---|---|---|---
Product: | Gentoo Linux | Reporter: | Malcolm Lashley <mlashley>
Component: | [OLD] Core system | Assignee: | Freedesktop bugs <freedesktop-bugs>
Status: | RESOLVED OBSOLETE | |
Severity: | normal | CC: | chainsaw, esigra
Priority: | High | |
Version: | unspecified | |
Hardware: | AMD64 | |
OS: | Linux | |
Whiteboard: | | |
Package list: | | Runtime testing required: | ---
Bug Depends on: | 313389 | |
Bug Blocks: | | |
Description: Malcolm Lashley, 2009-03-19 01:23:07 UTC
Could I see your USE flags (just `emerge -pv` output will do fine) for both HAL versions please. Could you also include instructions on how to quickly create the problematic volume type on, say, an SD card. It seems a genuine bug, but I would like to have an easy way of reproducing it.

```
[ebuild   R   ] sys-apps/hal-0.5.11-r8  USE="X crypt -acpi -apm -debug -dell -disk-partition -doc -laptop (-selinux)" 0 kB
[ebuild     U ] sys-apps/hal-0.5.12_rc1 [0.5.11-r8] USE="X crypt -acpi -apm -debug -dell -disk-partition -doc -laptop (-selinux)" 0 kB
```

And see http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml. Ignore the LVM stuff and follow the example for one of the md1 or md3 (/boot or /root) partitions. (You'll probably need to create 2 partitions on the SD card first.)

If hal-0.5.13 doesn't fix this, I think it's high time to report this upstream.

Not fixed in 0.5.13, but I did find a workaround and some other info. Firstly, my RAID arrays are autodetected by the kernel:

```
root@duality /usr/src/linux $ zgrep MD_AUTO /proc/config.gz
CONFIG_MD_AUTODETECT=y
```

If I stop and re-assemble the RAID array once hald is running, the devices are properly detected:

```
duality ~ # lshal | grep md[01]
duality ~ # ls -l /dev/md[01]
brw-rw---- 1 root disk 9, 0 Aug  3 22:05 /dev/md0
brw-rw---- 1 root disk 9, 1 Aug  3 22:05 /dev/md1
duality ~ # mount | grep md[01]
/dev/md0 on /raid type ext3 (rw,noatime,nodiratime,user_xattr)
/dev/md1 on /space type ext3 (rw,noatime,nodiratime,user_xattr)
duality ~ # umount /dev/md1 && mdadm --manage --stop /dev/md1 && mdadm --assemble /dev/md1 && mount /dev/md1
mdadm: stopped /dev/md1
mdadm: /dev/md1 has been started with 2 drives.
duality ~ # lshal | grep md[01]
  block.device = '/dev/md1'  (string)
  linux.sysfs_path = '/sys/devices/virtual/block/md1'  (string)
  storage.linux_raid.sysfs_path = '/sys/devices/virtual/block/md1'  (string)
  block.device = '/dev/md1'  (string)
  linux.sysfs_path = '/sys/devices/virtual/block/md1/fakevolume'  (string)
```

Ditto for md0.
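For reproducing this on an SD card as suggested above, a minimal command sequence might look like the following. This is a sketch only: the device name `/dev/sdb` and the two-partition layout are assumptions for illustration, not taken from the original report, and the commands need root and will destroy data on the target device.

```shell
# ASSUMPTION: the SD card appears as /dev/sdb and already carries two
# equally-sized partitions, /dev/sdb1 and /dev/sdb2, with partition type
# fd ("Linux raid autodetect") so kernel autodetection applies.

# Build a RAID-1 array from the two partitions (hypothetical device names).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdb2

# Put an ext3 filesystem on it, as in the Gentoo quickinstall guide.
mke2fs -j /dev/md0

# Check whether hald picked up the new md device.
lshal | grep md0
```

With kernel autodetection, a reboot after this setup should reproduce the boot-time ordering problem described below.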
The same issue here.
I have analyzed the debug output of hald (sys-apps/hal-0.5.13-r2) and found that the problem originates in the order in which hald processes hotplug events for block devices. It tries to add an md device to its database before the corresponding physical partitions, yet at the same time assumes that the partitions are already there. Unable to find the partitions by linux.sysfs_path, it refuses to add the md device at all.
> If I stop and re-assemble the raid array once hald is running - the devices are properly detected:
Unfortunately, this workaround may be undesirable or simply impossible (e.g. for the root filesystem). Actually, all you need to do is make hald process the events from the md devices one more time.
This can be done, for example, by running `udevadm trigger --subsystem-match=block` from /etc/conf.d/local.start.
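For reference, the workaround could be dropped into the local init script like this. A sketch only: it assumes a classic Gentoo baselayout setup where /etc/conf.d/local.start runs late in the default runlevel, after hald has started.

```shell
# /etc/conf.d/local.start (fragment)
# Re-emit uevents for all block devices so hald, which is already running
# by this point in the boot, gets a second chance to process the md
# devices it rejected earlier.
udevadm trigger --subsystem-match=block
```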
sys-apps/hal was removed from the tree wrt bug #313389; closing as OBSOLETE.