I'm using lvm2, udisks and gnome-disk-utility from the gnome overlay, and my lvm2 volume group is not shown in palimpsest (the GUI of gnome-disk-utility), while it should be. Just doing:

echo change > /sys/block/sda/sda2/uevent
echo change > /sys/block/sda/sda4/uevent

solves my problem (sda2 and sda4 being the PVs of my VG). It seems that udisks waits for a "change" event on each PV to generate its data in the udev environment, and lvm doesn't send them. This problem seems to exist only on Gentoo and probably comes from either the udisks or the lvm2 package.

Reproducible: Always

Steps to Reproduce:
open palimpsest

Actual Results:
lvm vg is not there

Expected Results:
lvm vg is there
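The manual workaround above can be scripted. A sketch, assuming root privileges and plain sdXN-style PV partition names; the sysfs root is parameterized only so the logic can be exercised without touching the real /sys:

```shell
#!/bin/sh
# Sketch: send a synthetic "change" uevent for a PV partition so that
# udisks re-probes it. Assumptions: run as root, sdXN-style names.
SYSFS_ROOT="${SYSFS_ROOT:-/sys}"

trigger_pv_change() {
    dev=$(basename "$1")     # /dev/sda2 -> sda2
    disk=${dev%%[0-9]*}      # sda2 -> sda
    echo change > "$SYSFS_ROOT/block/$disk/$dev/uevent"
}

# e.g. for all PVs of all VGs:
#   for pv in $(pvs --noheadings -o pv_name); do trigger_pv_change "$pv"; done
```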
You need to set USE=lvm2 on udisks
Reopening; I have the lvm2 flag enabled on udisks.

udevadm info --export-db | grep UDISKS | grep LVM

doesn't show anything and my VG doesn't appear in palimpsest, but if I type

echo change > /sys/block/sda/sda2/uevent
echo change > /sys/block/sda/sda4/uevent

then

udevadm info --export-db | grep UDISKS | grep LVM

returns a list of E: UDISKS_LVM2_PV_VG_* entries and my VG appears in palimpsest.
Sounds like your problem with udev is that lvm is not in your boot runlevel.
Actually, lvm is in my boot runlevel... Are you guys using lvm and seeing your VG in palimpsest? I mean, not all the LVs, I see those too, but the VG itself, in the "multi disks" section at the end.
I'm not using gnome 2.30 yet, but running the udev command you pasted, I indeed have no output even though my LVs are up and running.
(In reply to comment #4)
> actually lvm is in my boot runlevel ...
> are you guys using lvm and seeing your vg in palimpsest ? I mean, not all the
> lvs, I see them too, but in the "multi disks" section at the end, the vg itself

Same here: the volume group appears after echoing change to those uevent files in /sys. lvm is already in my boot runlevel.
Btw, it seems my volume group cannot be activated after I click "start volume group". This leads to my logical volumes being shown as hard disks under peripheral devices.
problem still there with lvm2-2.02.67-r2, udisks-1.0.1-r1 and udev-158
The change that introduced this bug is quite old:
http://cgit.freedesktop.org/udisks/commit/?id=a95d351dcae957e36b1cd3c7c6c1a784de65dcf8
especially
http://cgit.freedesktop.org/udisks/diff/data/95-devkit-disks.rules?id=a95d351dcae957e36b1cd3c7c6c1a784de65dcf8

Reverting the change in the rules file seems to solve the problem for me, but I guess it was done for a reason ...
(In reply to comment #9) I'm sorry, what actually did the trick was the "udevadm trigger" call I did after changing the rules file.
(In reply to comment #10) > (In reply to comment #9) > I'm sorry, what actually did the trick was the "udevadm trigger" call I did > after changing the rules file. > Changing the file doesn't solve anything at all, but yes, indeed, running udevadm trigger does the trick
try 1.0.2
even with udisks 1.0.2, I have to run udevadm trigger by hand to see the lvm volume group
(In reply to comment #13) > even with udisks 1.0.2, I have to run udevadm trigger by hand to see the lvm > volume group > Thanks for testing. Please report this to upstream, at http://bugs.freedesktop.org/
(In reply to comment #14) > (In reply to comment #13) > > even with udisks 1.0.2, I have to run udevadm trigger by hand to see the lvm > > volume group > > > > Thanks for testing. > > Please report this to upstream, at http://bugs.freedesktop.org/ > It already has been reported to upstream for udisks, udev and lvm2 before creating this bug, the three of them said it came from the gentoo packages of one of those three.
I must say I'm not seeing the multi-disk volume either, on Debian squeeze. However, I see RAID volumes just fine.
After adding a new lv to an existing vg with system-config-lvm, the vg and lv are properly displayed in gnome-disk-utility (palimpsest) and in the gnome "Places" menu. I think the important difference is the "UDISKS_LVM2_PV_VG" entries...

udevadm info --export-db (vg and lv not visible):

P: /devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda2
N: sda2
W: 29
S: block/8:2
S: disk/by-id/ata-ST9500420AS_5VJ4DNG8-part2
S: disk/by-id/scsi-SATA_ST9500420AS_5VJ4DNG8-part2
S: disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0-part2
S: disk/by-id/wwn-0x5000c50021b6f0d8-part2
E: UDEV_LOG=3
E: DEVPATH=/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda2
E: MAJOR=8
E: MINOR=2
E: DEVNAME=/dev/sda2
E: DEVTYPE=partition
E: SUBSYSTEM=block
E: ID_ATA=1
E: ID_TYPE=disk
E: ID_BUS=ata
E: ID_MODEL=ST9500420AS
E: ID_MODEL_ENC=ST9500420AS\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
E: ID_REVISION=0006HP1M
E: ID_SERIAL=ST9500420AS_5VJ4DNG8
E: ID_SERIAL_SHORT=5VJ4DNG8
E: ID_ATA_WRITE_CACHE=1
E: ID_ATA_WRITE_CACHE_ENABLED=1
E: ID_ATA_FEATURE_SET_PM=1
E: ID_ATA_FEATURE_SET_PM_ENABLED=1
E: ID_ATA_FEATURE_SET_SECURITY=1
E: ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
E: ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=102
E: ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=102
E: ID_ATA_FEATURE_SET_SECURITY_FROZEN=1
E: ID_ATA_FEATURE_SET_SMART=1
E: ID_ATA_FEATURE_SET_SMART_ENABLED=1
E: ID_ATA_FEATURE_SET_APM=1
E: ID_ATA_FEATURE_SET_APM_ENABLED=1
E: ID_ATA_FEATURE_SET_APM_CURRENT_VALUE=128
E: ID_ATA_DOWNLOAD_MICROCODE=1
E: ID_ATA_SATA=1
E: ID_ATA_SATA_SIGNAL_RATE_GEN2=1
E: ID_ATA_SATA_SIGNAL_RATE_GEN1=1
E: ID_ATA_ROTATION_RATE_RPM=7200
E: ID_WWN=0x5000c50021b6f0d8
E: ID_WWN_WITH_EXTENSION=0x5000c50021b6f0d8
E: ID_SCSI_COMPAT=SATA_ST9500420AS_5VJ4DNG8
E: ID_PATH=pci-0000:00:1f.2-scsi-0:0:0:0
E: ID_PART_TABLE_TYPE=dos
E: ID_FS_UUID=x1duMI-S0Ff-DKIr-8jE6-NoHl-7KuH-N0Qa8w
E: ID_FS_UUID_ENC=x1duMI-S0Ff-DKIr-8jE6-NoHl-7KuH-N0Qa8w
E: ID_FS_VERSION=LVM2\x20001
E: ID_FS_TYPE=LVM2_member
E: ID_FS_USAGE=raid
E: UDISKS_PRESENTATION_NOPOLICY=0
E: UDISKS_PARTITION=1
E: UDISKS_PARTITION_SCHEME=mbr
E: UDISKS_PARTITION_NUMBER=2
E: UDISKS_PARTITION_TYPE=0x8e
E: UDISKS_PARTITION_SIZE=499582500864
E: UDISKS_PARTITION_SLAVE=/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda
E: UDISKS_PARTITION_OFFSET=525336576
E: UDISKS_PARTITION_ALIGNMENT_OFFSET=0
E: DEVLINKS=/dev/block/8:2 /dev/disk/by-id/ata-ST9500420AS_5VJ4DNG8-part2 /dev/disk/by-id/scsi-SATA_ST9500420AS_5VJ4DNG8-part2 /dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0-part2 /dev/disk/by-id/wwn-0x5000c50021b6f0d8-part2

udevadm info --export-db (vg and lv visible):

P: /devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda2
N: sda2
W: 90
S: block/8:2
S: disk/by-id/ata-ST9500420AS_5VJ4DNG8-part2
S: disk/by-id/scsi-SATA_ST9500420AS_5VJ4DNG8-part2
S: disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0-part2
S: disk/by-id/wwn-0x5000c50021b6f0d8-part2
E: UDEV_LOG=3
E: DEVPATH=/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/sda2
E: MAJOR=8
E: MINOR=2
E: DEVNAME=/dev/sda2
E: DEVTYPE=partition
E: SUBSYSTEM=block
E: ID_ATA=1
E: ID_TYPE=disk
E: ID_BUS=ata
E: ID_MODEL=ST9500420AS
E: ID_MODEL_ENC=ST9500420AS\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
E: ID_REVISION=0006HP1M
E: ID_SERIAL=ST9500420AS_5VJ4DNG8
E: ID_SERIAL_SHORT=5VJ4DNG8
E: ID_ATA_WRITE_CACHE=1
E: ID_ATA_WRITE_CACHE_ENABLED=1
E: ID_ATA_FEATURE_SET_PM=1
E: ID_ATA_FEATURE_SET_PM_ENABLED=1
E: ID_ATA_FEATURE_SET_SECURITY=1
E: ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
E: ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=102
E: ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=102
E: ID_ATA_FEATURE_SET_SECURITY_FROZEN=1
E: ID_ATA_FEATURE_SET_SMART=1
E: ID_ATA_FEATURE_SET_SMART_ENABLED=1
E: ID_ATA_FEATURE_SET_APM=1
E: ID_ATA_FEATURE_SET_APM_ENABLED=1
E: ID_ATA_FEATURE_SET_APM_CURRENT_VALUE=128
E: ID_ATA_DOWNLOAD_MICROCODE=1
E: ID_ATA_SATA=1
E: ID_ATA_SATA_SIGNAL_RATE_GEN2=1
E: ID_ATA_SATA_SIGNAL_RATE_GEN1=1
E: ID_ATA_ROTATION_RATE_RPM=7200
E: ID_WWN=0x5000c50021b6f0d8
E: ID_WWN_WITH_EXTENSION=0x5000c50021b6f0d8
E: ID_SCSI_COMPAT=SATA_ST9500420AS_5VJ4DNG8
E: ID_PATH=pci-0000:00:1f.2-scsi-0:0:0:0
E: ID_PART_TABLE_TYPE=dos
E: ID_FS_UUID=x1duMI-S0Ff-DKIr-8jE6-NoHl-7KuH-N0Qa8w
E: ID_FS_UUID_ENC=x1duMI-S0Ff-DKIr-8jE6-NoHl-7KuH-N0Qa8w
E: ID_FS_VERSION=LVM2\x20001
E: ID_FS_TYPE=LVM2_member
E: ID_FS_USAGE=raid
E: UDISKS_PRESENTATION_NOPOLICY=0
E: UDISKS_LVM2_PV_VG_UUID=2W4y2G-UvTX-nkF1-lEsW-u3ZT-2AuR-Uyp4LI
E: UDISKS_LVM2_PV_VG_NAME=vg_8540w
E: UDISKS_LVM2_PV_VG_SIZE=499558383616
E: UDISKS_LVM2_PV_VG_FREE_SIZE=92610232320
E: UDISKS_LVM2_PV_VG_EXTENT_SIZE=33554432
E: UDISKS_LVM2_PV_VG_EXTENT_COUNT=14888
E: UDISKS_LVM2_PV_VG_SEQNUM=39
E: UDISKS_LVM2_PV_VG_PV_LIST=uuid=x1duMI-S0Ff-DKIr-8jE6-NoHl-7KuH-N0Qa8w;size=499558383616;allocated_size=406948151296
E: UDISKS_LVM2_PV_VG_LV_LIST=name=lv_swap;uuid=IXcyGe-Hgpq-Vr3S-kiaX-EV5c-svC2-Q7YhvS;size=4294967296;;active=1 name=lv_home;uuid=9FMy2o-LJo1-6h60-m2R0-r7Ys-8WKa-gKsdVm;size=322122547200;;active=1 name=lv_root_fc14;uuid=0tIGU0-hTyi-QopC-lo44-trhP-LXO7-NfyItA;size=32212254720;;active=1 name=lv_root_gentoo;uuid=HBxM1f-Vmlf-2tQ1-Ke1f-YdnL-e6j3-4qeNUZ;size=48318382080;;active=1
E: UDISKS_LVM2_PV_UUID=x1duMI-S0Ff-DKIr-8jE6-NoHl-7KuH-N0Qa8w
E: UDISKS_LVM2_PV_NUM_MDA=1
E: UDISKS_PARTITION=1
E: UDISKS_PARTITION_SCHEME=mbr
E: UDISKS_PARTITION_NUMBER=2
E: UDISKS_PARTITION_TYPE=0x8e
E: UDISKS_PARTITION_SIZE=499582500864
E: UDISKS_PARTITION_SLAVE=/sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda
E: UDISKS_PARTITION_OFFSET=525336576
E: UDISKS_PARTITION_ALIGNMENT_OFFSET=0
E: DEVLINKS=/dev/block/8:2 /dev/disk/by-id/ata-ST9500420AS_5VJ4DNG8-part2 /dev/disk/by-id/scsi-SATA_ST9500420AS_5VJ4DNG8-part2 /dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0-part2 /dev/disk/by-id/wwn-0x5000c50021b6f0d8-part2
Created attachment 262035 [details]
system-config-lvm.ebuild for testing
The command that makes all this work (without system-config-lvm) is vgchange --refresh
And so does udevadm trigger, as said in comment #11; it basically does the same thing here, exporting the udev properties.
(In reply to comment #20)
> And so does udevadm trigger as said in comment #11, it juste basically do the
> same thing here, exporting udev stuff

With udevadm trigger I still have the problems mentioned in comment #7.
(In reply to comment #21)
> (In reply to comment #20)
> > And so does udevadm trigger as said in comment #11, it juste basically do the
> > same thing here, exporting udev stuff
>
> With udevadm trigger I have still the problems mentioned in comment #7.

With udev 164, lvm2 2.02.84, udisks 1.0.2 and gnome-disk-utility 2.91.6, udevadm trigger works just fine. I no longer have the same bug as in comment #7 (I had it before, but not now; I hadn't tested in a while though, so I don't know which component upgrade solved it). vgchange --refresh does nothing here, actually.
Note that this isn't reproducible with systemd instead of openrc/sysvinit.
Question:

Do you have /usr as a separate mount point? If so, then you are probably hitting bug #364235.

If not, then enable udev_debug in /etc/conf.d/udev, reboot your system, and look for error messages in /dev/.udev/udev.log wrt anything in /lib{,64}/udev/ which is not a rules file (that is where all the probers are, and they are what is failing for you). Especially look for anything returning error code 127 or alike.
I don't think I was the only one to get this issue, and I cannot really test that since I no longer have sysvinit/openrc installed and I don't hit this anymore using systemd... But no, my /usr is not on a separate partition.
(In reply to comment #24)
> Question:
>
> Do you /usr as a separate mount point?

No, I don't.

> Especially look for anything returning errorcode 127 or alike.

The only "error" in my log is:

/lib64/udev/udisks-lvm-pv-export' (stderr) 'Error calling lvm_init(): Read-only file system

And there are some

util_run_program: '/sbin/modprobe' (stderr) 'FATAL: Module input:b0019v0000p0001e0000_e0,1,k74,ramlsfw not found.'

concerning different modules.

I still use vgchange --refresh in /etc/local.d/lvm-udev.start to work around this issue.

~amd64:
sys-apps/openrc-0.8.2
sys-apps/baselayout-2.0.2

amd64:
sys-fs/udev-151-r4
sys-fs/lvm2-2.02.73-r1
sys-fs/udisks-1.0.2-r1
sys-apps/gnome-disk-utility-2.32.1
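The local.d workaround could look like this; a sketch only, using the /etc/local.d/lvm-udev.start path from the comment, not necessarily the exact script the reporter uses:

```shell
#!/bin/sh
# /etc/local.d/lvm-udev.start -- workaround sketch:
# re-export LVM metadata to udev late in boot, once the root fs is
# read-write, so udisks/palimpsest sees the volume group.
# Guarded so the script is a no-op on systems without lvm2 installed.
if command -v vgchange >/dev/null 2>&1; then
    vgchange --refresh
fi
```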
Here is a link to my log: http://poncho.spahan.ch/public/udev.log
The file is too big to attach to the bug... (1000.2 KB)
Compress it with bzip2 (for example) then ;)
Created attachment 270711 [details]
udev.log with failing gnome-disk-utility

(In reply to comment #28)
> Compress it with bzip2 (for example) then ;)

Here it is ...
Sorry for the late answer, I forgot to CC myself.

(In reply to comment #26)
> The only "error" in my log is:
> /lib64/udev/udisks-lvm-pv-export' (stderr) 'Error calling lvm_init(): Read-only
> file system

This just got more fun... In short, this is because udisks-lvm-pv-export uses the locking dir from /etc/lvm/lvm.conf; however, when udev runs, /var/run is still mounted read-only.

A way to work around (or maybe fix?) this is to change locking_dir= in /etc/lvm/lvm.conf from /var/lock/lvm to something residing on a tmpfs (tested and confirmed on my system), like /dev/.lvm (this particular path untested) or something else residing on a tmpfs/RW-mounted fs.

Another fix could be to have udisks-* fail with another exit code than 1 when it fails for this kind of reason, and have udev mark these fails as "failed" so udev-postmount can rerun them.
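As a config fragment, the proposed change would look roughly like this; /dev/.lvm is the untested path suggested above, and the chosen directory must live on a filesystem that is already writable when udev runs the probers:

```
# /etc/lvm/lvm.conf (excerpt) -- sketch of the proposed workaround
global {
    # Default is /var/lock/lvm, which is still read-only when udev
    # runs udisks-lvm-pv-export during early boot.
    locking_dir = "/dev/.lvm"
}
```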
(In reply to comment #30) > Way to work around (or maybe fix?) is to change locking_dir= in > /etc/lvm/lvm.conf from /var/lock/lvm to somethig residing on a tmpfs (tested > and confirmed on my system) like /dev/.lvm (this particular path untested) or > something other which resides on a tmpfs/RW-mounted fs. I can confirm that locking_dir="/dev/.lvm" makes the lvm visible in palimpsest as "Multi-disk Devices". But there are duplicated entries in "Peripheral Devices". This doesn't happen with vgchange --refresh. Instead of symlinks to the /dev/dm-* devices it creates duplicated device nodes under /dev/mapper for already existing devices. /dev/mapper with vgchange --refresh 3138 0 drwxr-xr-x 2 root root 140 24. Apr 22:14 . 1032 0 drwxr-xr-x 19 root root 6440 24. Apr 22:14 .. 3139 0 crw-rw---- 1 root root 10, 236 24. Apr 22:14 control 4787 0 lrwxrwxrwx 1 root root 7 24. Apr 22:14 vg_8540w-lv_home -> ../dm-1 4814 0 lrwxrwxrwx 1 root root 7 24. Apr 22:14 vg_8540w-lv_root_fc15 -> ../dm-2 4841 0 lrwxrwxrwx 1 root root 7 24. Apr 22:14 vg_8540w-lv_root_gentoo -> ../dm-3 4764 0 lrwxrwxrwx 1 root root 7 24. Apr 22:14 vg_8540w-lv_swap -> ../dm-0 /dev/mapper with locking_dir="/dev/.lvm" 2085 0 drwxr-xr-x 2 root root 140 24. Apr 22:31 . 1050 0 drwxr-xr-x 19 root root 6440 24. Apr 22:31 .. 2086 0 crw-rw---- 1 root root 10, 236 24. Apr 22:31 control 3468 0 brw------- 1 root root 253, 1 24. Apr 22:31 vg_8540w-lv_home 3466 0 brw------- 1 root root 253, 2 24. Apr 22:31 vg_8540w-lv_root_fc15 3467 0 brw------- 1 root root 253, 3 24. Apr 22:31 vg_8540w-lv_root_gentoo 3469 0 brw------- 1 root root 253, 0 24. Apr 22:31 vg_8540w-lv_swap
(In reply to comment #31)
> But there are duplicated entries in "Peripheral Devices". This doesn't happen
> with vgchange --refresh.
> Instead of symlinks to the /dev/dm-* devices it creates duplicated device nodes
> under /dev/mapper for already existing devices.

Yeah, and this does not happen with udevadm trigger either, which makes me believe there is something else that also fails during udevadm trigger, which udevd does not mark as "failed".
(In reply to comment #31)
> But there are duplicated entries in "Peripheral Devices". This doesn't happen
> with vgchange --refresh.

This just gets more and more fun.

udevadm trigger --action=add (the old default, and what /etc/init.d/udev does) does not run at least 11-dm-lvm.rules; however, udevadm trigger --action=change (the new default) runs it. Those rules also specifically tell us that with action add they only run during "coldplug"; however, with the current usage of devtmpfs I do not know whether what "udevadm trigger --action=add" does counts as coldplug anymore (not even if you set /etc/conf.d/udev to coldplug is this rule run). So it seems like the handling of udev in openrc has slipped a bit behind.

So what would be interesting to know is what happens with dracut+openrc (i.e. dracut runs the coldplugging). Also how the init scripts may be changed to actually work with those setups where you do not have a udev-enabled ramdisk.
Created attachment 271375 [details]
dracut with genkernel

(In reply to comment #33)
> So what can be interesting to know if what happends with dracut+openrc (i.e.
> dracut runs the coldplugging).

I use genkernel from the aidecoe overlay (https://github.com/aidecoe/aidecoe-overlay) in combination with dracut from the Gentoo ~amd64 tree.

sys-kernel/dracut-010-r1
sys-kernel/genkernel-999999
So a udev-enabled ramdisk changes nothing... I am trying to figure out which rules are not applied and why... udev.log is not very verbose, afaict, on why a specific ruleset was skipped; however, after a closer look, the duplication seems to be caused by some stuff in 10-dm.rules not being run.

So this looks more and more like two bugs:

One bug wrt lvm using an RO lockdir during early init, which makes the udisks rules fail (could possibly hit /etc/init.d/{device-manager,lvm} unless using another lockdir for early init than for the rest of the boot, which is unsupported and discouraged by upstream).

One bug wrt udev's handling of *-dm*.rules, which seems to make 10-dm.rules trigger on

ACTION=="add", ENV{DM_UDEV_RULES_VSN}!="1", ENV{DM_UDEV_PRIMARY_SOURCE_FLAG}!="1", GOTO="dm_disable"

as nothing except the stuff after LABEL=dm_disable really is entered into /dev/.udev/data/b253* until "udevadm trigger" is executed later on.
So not even old udevadm handles this right. On one of my stable systems, using openrc as one of the few unmasked packages, 10-dm.rules is not run before I issue "udevadm trigger --action=change" (for old udevadm, add was the default action), and the info that udisks uses to map a /dev/dm-0 to /dev/mapper/* is not in the db until then.

Example:

# udevadm info --export-db | grep DM_UUID
# udevadm trigger --action=change --subsystem-match=block
# udevadm info --export-db | grep DM_UUID
E: DM_UUID=LVM-3ceAalF0CnBkg9ZxNDqXMHDY3EWpgIcV9dPz9KIA52cD3gk4m4QgbeV9xnxUwz2a
....

Also DM_VG_NAME, DM_LV_NAME and the other variables used by udisks are not in there either. So actually, with the current udev/device-mapper setup in openrc, no dm-related info seems to be correctly recorded into the udev db, neither on amd64 nor on ~amd64.
(In reply to comment #35)
> one bug wrt udevs handling of *-dm*.rules

This is reported as bug #365227 against udev, as it seems to be an issue with how udevadm trigger --action=add works on devtmpfs (which manifests itself in palimpsest as entries in "Peripheral Devices").

That leaves only one issue in this bug: lvm and locking_dir.

I propose that we set the locking dir to /dev/.lvm by default, as that will also remove the need to set it to something RW in other scripts, like:

$ grep locking_dir /lib/rcscripts/addons/lvm-start.sh
config='global { locking_dir = "/dev/.lvm" }'

which essentially gives Gentoo TWO DIFFERENT locking dirs for LVM, which is something upstream frowns upon (guess what may happen if something not related to /lib/rcscripts/addons/lvm-* does something needing locking on Gentoo at the same time as one of those scripts is executed?).
Is this still valid with udisks-2.1.6? (The old slot is completely dead and will probably never be fixed.)