I have hit a problem where LVM randomly fails to create a snapshot logical volume. After some googling, I think I found a temporary workaround.

# lvcreate -s -L200M /dev/system/root -n rootsnap
  LV system/rootsnap in use: not deactivating
  Couldn't deactivate new snapshot

The problem does not happen every time. I have figured out that it is caused by a conflict between lvm and udev.

$ emerge -pv lvm2

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild   R   ] sys-fs/lvm2-2.02.10  USE="readline (-clvm) (-cman) (-gulm) -nolvm1 -nolvmstatic -nomirrors -nosnapshots (-selinux)" 0 kB

$ emerge -pv udev

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild   R   ] sys-fs/udev-114  USE="(-selinux)" 0 kB

Total: 1 package (1 reinstall), Size of downloads: 0 kB

My workaround is to edit /etc/udev/rules.d/64-device-mapper.rules so that the udev rules ignore the snapshot logical volumes:

$ cat /etc/udev/rules.d/64-device-mapper.rules
# device mapper links hook into "change" events, when the dm table
# becomes available; some table-types must be ignored
KERNEL=="device-mapper", NAME="mapper/control"

KERNEL!="dm-*", GOTO="device_mapper_end"
ACTION!="add|change", GOTO="device_mapper_end"

# lookup device name
# use dmsetup, until devmap_name is provided by sys-fs/device-mapper
PROGRAM=="/sbin/dmsetup -j %M -m %m --noopencount --noheadings -c -o name info", ENV{DM_NAME}="%c"

# do not do anything if dmsetup does not provide a name
ENV{DM_NAME}=="", NAME="", OPTIONS="ignore_device"

# the following five rules are the ones I added, to ignore my snapshot volumes:
ENV{DM_NAME}=="system-rootsnap", OPTIONS="ignore_device"
ENV{DM_NAME}=="system-varsnap", OPTIONS="ignore_device"
ENV{DM_NAME}=="system-optsnap", OPTIONS="ignore_device"
ENV{DM_NAME}=="system-usrsnap", OPTIONS="ignore_device"
ENV{DM_NAME}=="home-hmsnap", OPTIONS="ignore_device"

# ignore luks crypt devices while not fully up
ENV{DM_NAME}=="temporary-cryptsetup-*", NAME="", OPTIONS="ignore_device"

# use queried name
ENV{DM_NAME}=="?*", NAME="mapper/$env{DM_NAME}" SYMLINK+="disk/by-id/dm-name-$env{DM_NAME}"

PROGRAM!="/sbin/dmsetup status -j %M -m %m", GOTO="device_mapper_end"
RESULT=="|*snapshot*|*error*", GOTO="device_mapper_end"
IMPORT{program}="vol_id --export $tempnode"
OPTIONS="link_priority=50"
ENV{ID_FS_USAGE}=="filesystem|other|crypto", ENV{ID_FS_UUID_ENC}=="?*", SYMLINK+="disk/by-uuid/$env{ID_FS_UUID_ENC}"
ENV{ID_FS_USAGE}=="filesystem|other", ENV{ID_FS_LABEL_ENC}=="?*", SYMLINK+="disk/by-label/$env{ID_FS_LABEL_ENC}"

LABEL="device_mapper_end"

Reproducible: Sometimes

I am submitting this bug report in the hope that it can be fixed by patching the lvm2 ebuild to install its own udev rules for lvm.
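For anyone adapting the ignore rules above to their own volumes: LVM derives the DM_NAME that udev sees by joining the VG and LV names with "-" and doubling any "-" that is part of either name. A small sketch (my own illustration, not from the report) to compute the exact string a rule must match:

```shell
#!/bin/sh
# Derive the DM_NAME udev sees for a given VG/LV pair.
# LVM joins VG and LV with a single '-' and escapes dashes that are
# part of either name by doubling them (e.g. VG "my-vg" -> "my--vg").
dm_name() {
    vg=$(printf '%s' "$1" | sed 's/-/--/g')
    lv=$(printf '%s' "$2" | sed 's/-/--/g')
    printf '%s-%s\n' "$vg" "$lv"
}

dm_name system rootsnap   # prints: system-rootsnap
dm_name home hmsnap       # prints: home-hmsnap
dm_name my-vg root-snap   # prints: my--vg-root--snap
```

The last example shows why matching on the raw "vg-lv" guess can silently fail when a VG or LV name itself contains a dash.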
What is the real problem here? Is it the later rules that you skip with OPTIONS="ignore_device"? Then we should check why this rule does not already catch snapshots and skip the later rules:

PROGRAM!="/sbin/dmsetup status -j %M -m %m", GOTO="device_mapper_end"
RESULT=="|*snapshot*|*error*", GOTO="device_mapper_end"

Can you please attach the output of the following (with the correct number for #)?

udevtest /sys/block/dm-#
And please also attach output of emerge --info.
I have also run into this bug after the udev update to 104. The bug does not seem to be Gentoo-specific; see http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=343671, for example.

As a workaround, I changed /etc/udev/rules.d/64-device-mapper.rules. If you ignore dm devices as previous versions of udev did, it works again.

The problem appears to be a race condition between the two processes using libdevmapper (udevd uses libdevmapper indirectly through dmsetup). The lvcreate process fails after issuing an ioctl to /dev/mapper/control, as this strace shows:

******************** Beginning "Execution of a failing lvcreate --snapshot command"

strace -o /dev/shm/lvcreate-snapshot.strace -tt lvcreate --name root-snap --size 100M --snapshot /dev/system/root

16:04:06.961204 open("/dev/mapper/control", O_RDWR|O_LARGEFILE) = 3
16:04:06.961763 ioctl(3, DM_DEV_STATUS, 0x81dc568) = 0
16:04:06.962491 write(2, " ", 2) = 2
16:04:06.962937 write(2, "LV system/root-snap in use: not "..., 44) = 44
16:04:06.965131 write(2, "\n", 1) = 1
16:04:06.965575 write(2, " ", 2) = 2
16:04:06.967116 write(2, "Couldn\'t deactivate new snapshot"..., 33) = 33
16:04:06.968161 write(2, "\n", 1) = 1

******************** End "Execution of a failing lvcreate --snapshot command" ********************

Perhaps a more thorough strace analysis could provide more enlightenment here.
*** Bug 189859 has been marked as a duplicate of this bug. ***
I can confirm this problem when creating an LVM snapshot. My temporary solution is to go back to udev-104.
I have now added device-mapper-1.02.22 with a patch to export some data to userspace. The updated udev rules use this to decide which volumes to ignore. I hope this fixes the bug.
(In reply to comment #6)
> I hope this fixes that bug.

I guess not. I still cannot create the backups on our very own Gentoo CVS/SVN server. We have not even been able to run backups for about a week now; that is why I raised the severity of this bug. This is the output of the lvcreate command:

stork ~ # /sbin/lvcreate -s -L6G -v -n cvsrootbackup /dev/vg/cvsroot
    Setting chunksize to 16 sectors.
    Finding volume group "vg"
    Archiving volume group "vg" metadata (seqno 1384).
    Creating logical volume cvsrootbackup
    Creating volume group backup "/etc/lvm/backup/vg" (seqno 1385).
    Found volume group "vg"
    Creating vg-cvsrootbackup
    Loading vg-cvsrootbackup table
    Resuming vg-cvsrootbackup (254:5)
    Clearing start of logical volume "cvsrootbackup"
    Found volume group "vg"
    Removing vg-cvsrootbackup (254:5)
    Found volume group "vg"
    Found volume group "vg"
    Found volume group "vg"
    Clearing inactive table vg-cvsroot (254:0)
    Loading vg-cvsroot-real table
    Suppressed vg-cvsroot-real identical table reload.
    Loading vg-cvsroot table
    Creating vg-cvsrootbackup-cow
  device-mapper: create ioctl failed: Device or resource busy
    Creating vg-cvsrootbackup
    Loading vg-cvsrootbackup table
  device-mapper: reload ioctl failed: No such device or address
  Failed to suspend origin cvsroot
stork ~ #

udev-115 with device-mapper-1.02.22 installed.
I guess it would help to have some output from a udevmonitor --env running in parallel.
Created attachment 129454 [details]
udevmonitor during failed lvcreate on stork.gentoo.org

(In reply to comment #8)
> I guess it will help to have some output of a parallel running
> udevmonitor --env

Sure. See the attached file.
Can we consider this bug fixed? Can some reporters please verify this?
The udev rule changes committed by zzam have triggered another bug, bug 190819. Please revise them.
Still not working for me. I am using sys-fs/udev-114 and sys-fs/device-mapper-1.02.22.
(In reply to comment #12)
> Still not working for me. I am using sys-fs/udev-114 and
> sys-fs/device-mapper-1.02.22.

Did you reboot since you last tried to create a snapshot? Otherwise there could still be temporary volumes left over with the same name lvm is trying to create. See comment #7 above:

>> Creating vg-cvsrootbackup-cow
>> device-mapper: create ioctl failed: Device or resource busy
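If a reboot is inconvenient, leftover mappings from a failed attempt can also be spotted in the `dmsetup ls` output. A hedged sketch (my own, not from this thread; note that -real/-cow mappings are also legitimate while a snapshot is active, so check before removing anything):

```shell
#!/bin/sh
# Given `dmsetup ls` output on stdin, print mappings whose names end in
# -cow or -real. These are normal while a snapshot exists; if no snapshot
# is active, they are likely leftovers from a failed lvcreate.
find_leftovers() {
    awk '$1 ~ /-(cow|real)$/ { print $1 }'
}

# Example on a sample listing (on a real box: dmsetup ls | find_leftovers):
printf 'vg-cvsroot\t(254, 0)\nvg-cvsrootbackup-cow\t(254, 7)\n' | find_leftovers
# prints: vg-cvsrootbackup-cow
# A confirmed leftover could then be removed with: dmsetup remove <name>
```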
After upgrading sys-fs/device-mapper, I issued a "udevcontrol reload_rules". It still was not working, so I killed and restarted the udevd process. Same behavior again.

I tried rebooting. It worked only the first time; the next times, the creation fails. It seems to be a race condition, so I tried dropping the page and buffer caches (echo 3 > /proc/sys/vm/drop_caches). After dropping the caches, it works again, but only the first time.

Before issuing the lvcreate command, both lvs and "dmsetup table" report no extra temporary volumes (only the normal logical volumes).

If you find it necessary, I can provide any extra log or command output.
(In reply to comment #14)
> After upgrading sys-fs/device-mapper, I issued a "udevcontrol reload_rules".
> Still not working, I killed and restarted the udevd process. Same behavior
> again.

Well, as long as the kernel has inotify, that explicit reload should not be necessary.

> I tried the reboot. It worked only the first time. The next ones, the creation
> fails. Seems a race condition. As I tried dropping the page and buffers caches
> (echo 3 > /proc/sys/vm/drop_caches). After dropping the caches, it works again
> the first time.

Very strange workaround. Maybe it helps to either run "udevmonitor --env" in parallel, or to run lvcreate under a tracer like strace.

> Before issuing the lvcreate command, both lvs and "dmsetup table" report no
> extra temporal volumes (only the normal logical volumes).

I have no idea whether "dmsetup table" also lists volumes that do not yet have a table, but I would guess it lists them with an empty table column.

> If you find it necessary, I can provide any extra log or command output.

Yes, see above.
zzam: This issue still exists on stork.gentoo.org with the latest device-mapper.

I ran udevmonitor --env in parallel to:

# dmsetup table
# lvcreate -v -s ...
# echo rc=$? ; dmsetup table

Here's the creation side:

vg-tmp: 0 2097152 linear 8:4 18874752
vg-svnroot: 0 6291456 linear 8:4 12583296
vg-cvsroot: 0 12582912 linear 8:4 384
vg-backup: 0 20971520 linear 8:4 20971904
vg-gitroot: 0 6291456 linear 8:4 41943424
    Setting chunksize to 16 sectors.
    Finding volume group "vg"
    Archiving volume group "vg" metadata (seqno 1596).
    Creating logical volume svnrootbackup
    Creating volume group backup "/etc/lvm/backup/vg" (seqno 1597).
    Found volume group "vg"
    Creating vg-svnrootbackup
    Loading vg-svnrootbackup table
    Resuming vg-svnrootbackup (254:5)
    Clearing start of logical volume "svnrootbackup"
    Found volume group "vg"
  LV vg/svnrootbackup in use: not deactivating
  Couldn't deactivate new snapshot.
rc=5
vg-tmp: 0 2097152 linear 8:4 18874752
vg-svnroot: 0 6291456 linear 8:4 12583296
vg-cvsroot: 0 12582912 linear 8:4 384
vg-backup: 0 20971520 linear 8:4 20971904
vg-svnrootbackup: 0 6291456 linear 8:4 48234880
vg-gitroot: 0 6291456 linear 8:4 41943424

And here is the udevmonitor --env side:

UEVENT[1190777826.977329] add /block/dm-5 (block)
ACTION=add
DEVPATH=/block/dm-5
SUBSYSTEM=block
SEQNUM=2043
MINOR=5
MAJOR=254

UEVENT[1190777826.977377] change /block/dm-5 (block)
ACTION=change
DEVPATH=/block/dm-5
SUBSYSTEM=block
SEQNUM=2044
MINOR=5
MAJOR=254

UDEV  [1190777827.043089] add /block/dm-5 (block)
UDEV_LOG=3
ACTION=add
DEVPATH=/block/dm-5
SUBSYSTEM=block
SEQNUM=2043
MINOR=5
MAJOR=254
UDEVD_EVENT=1
DM_NAME=vg-svnrootbackup
DM_MAJOR=254
DM_MINOR=5
DM_STATUS=ACTIVE
DM_READ_ONLY=0
DM_EXISTS=1
DM_SUSPENDED=0
DM_TABLE_LIVE=1
DM_TABLE_INACTIVE=0
DM_OPEN=1
DM_SEGMENTS=1
DM_EVENTS=0
DM_UUID=LVM-5aaUe6D345rreLUh3v2p4akldKLY13b0XEg9cT4lXSYPzfLk41exQU60ONtVVMck
DM_TARGET_TYPES=linear
DEVNAME=/dev/dm-5
DEVLINKS=/dev/mapper/vg-svnrootbackup /dev/disk/by-id/dm-name-vg-svnrootbackup /dev/disk/by-id/dm-uuid-LVM-5aaUe6D345rreLUh3v2p4akldKLY13b0XEg9cT4lXSYPzfLk41exQU60ONtVVMck

UDEV  [1190777827.057675] change /block/dm-5 (block)
UDEV_LOG=3
ACTION=change
DEVPATH=/block/dm-5
SUBSYSTEM=block
SEQNUM=2044
MINOR=5
MAJOR=254
UDEVD_EVENT=1
DM_NAME=vg-svnrootbackup
DM_MAJOR=254
DM_MINOR=5
DM_STATUS=ACTIVE
DM_READ_ONLY=0
DM_EXISTS=1
DM_SUSPENDED=0
DM_TABLE_LIVE=1
DM_TABLE_INACTIVE=0
DM_OPEN=0
DM_SEGMENTS=1
DM_EVENTS=0
DM_UUID=LVM-5aaUe6D345rreLUh3v2p4akldKLY13b0XEg9cT4lXSYPzfLk41exQU60ONtVVMck
DM_TARGET_TYPES=linear
DEVNAME=/dev/dm-5
DEVLINKS=/dev/mapper/vg-svnrootbackup /dev/disk/by-id/dm-name-vg-svnrootbackup /dev/disk/by-id/dm-uuid-LVM-5aaUe6D345rreLUh3v2p4akldKLY13b0XEg9cT4lXSYPzfLk41exQU60ONtVVMck

UDEV  [1190777829.368985] remove /block/dm-5 (block)
UDEV_LOG=3
ACTION=remove
DEVPATH=/block/dm-5
SUBSYSTEM=block
SEQNUM=2045
MINOR=5
MAJOR=254
UDEVD_EVENT=1
DM_NAME=vg-svnrootbackup
DM_MAJOR=254
DM_MINOR=5
DM_STATUS=ACTIVE
DM_READ_ONLY=0
DM_EXISTS=1
DM_SUSPENDED=0
DM_TABLE_LIVE=1
DM_TABLE_INACTIVE=0
DM_OPEN=0
DM_SEGMENTS=1
DM_EVENTS=0
DM_UUID=LVM-5aaUe6D345rreLUh3v2p4akldKLY13b0XEg9cT4lXSYPzfLk41exQU60ONtVVMck
DM_TARGET_TYPES=linear
DEVNAME=/dev/dm-5
DEVLINKS=/dev/mapper/vg-svnrootbackup /dev/disk/by-id/dm-name-vg-svnrootbackup /dev/disk/by-id/dm-uuid-LVM-5aaUe6D345rreLUh3v2p4akldKLY13b0XEg9cT4lXSYPzfLk41exQU60ONtVVMck
Ok, a bit more tracing shows up the problem. It is a race, and one that disappears on sufficiently slow or non-parallel machines.

lvcreate very early on causes /dev/dm-X to exist. At this point, the node is NOT quite ready for use (not yet activated, and no COW is set up), but it does exist. At this moment, udev fires the IMPORT{program} rule that runs 'dmsetup info --export ....'. dmsetup has the device open at the very moment that lvcreate tries to activate the snapshot; the kernel says the device is busy, and lvcreate gives up.

We need to make dmsetup run later to fix this, or make dmsetup aware that lvcreate is still busy with the device.

For comparison, here is what happens with a normal lvcreate -s that does not get stopped (and the relevant dmsetup table output before and after, trimmed):

# dmsetup table
vg-svnroot: 0 6291456 linear 8:4 12583296
# lvcreate -s ...
    Setting chunksize to 16 sectors.
    Finding volume group "vg"
    Archiving volume group "vg" metadata (seqno 1600).
    Creating logical volume svnrootbackup
    Creating volume group backup "/etc/lvm/backup/vg" (seqno 1601).
    Found volume group "vg"
    Creating vg-svnrootbackup
    Loading vg-svnrootbackup table
    Resuming vg-svnrootbackup (254:5)
    Clearing start of logical volume "svnrootbackup"
    Found volume group "vg"
    Found volume group "vg"
    Found volume group "vg"
    Found volume group "vg"
    Creating vg-svnroot-real
    Loading vg-svnroot-real table
    Resuming (254:6)
    Loading vg-svnroot table
    Creating vg-svnrootbackup-cow
    Loading vg-svnrootbackup-cow table
    Resuming (254:7)
    Loading vg-svnrootbackup table
    Suspending vg-svnroot (254:1) with filesystem sync.
    Suspending vg-svnroot-real (254:6) with filesystem sync.
    Found volume group "vg"
    Loading vg-svnroot-real table
    Suppressed vg-svnroot-real identical table reload.
    Resuming vg-svnroot-real (254:6)
    Loading vg-svnrootbackup-cow table
    Suppressed vg-svnrootbackup-cow identical table reload.
    Resuming vg-svnrootbackup (254:5)
    Resuming vg-svnroot (254:1)
    Creating volume group backup "/etc/lvm/backup/vg" (seqno 1602).
    Logical volume "svnrootbackup" created
# dmsetup table
vg-svnroot-real: 0 6291456 linear 8:4 12583296
vg-svnroot: 0 6291456 snapshot-origin 254:6
vg-svnrootbackup-cow: 0 6291456 linear 8:4 48234880
vg-svnrootbackup: 0 6291456 snapshot 254:6 254:7 P 16
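One way to avoid the race described above is to make udev skip dm devices on "add" events altogether and act only on "change" events, once the table is live. A sketch of what such a rule fragment might look like (my own illustration, not the shipped 64-device-mapper.rules; verify against your installed rules before use):

```
# Sketch only -- not the rules file shipped by any package.
# Ignore "add" events for dm devices entirely; at "add" time the node
# exists but the volume may not be activated yet.
KERNEL=="dm-*", ACTION=="add", OPTIONS="ignore_device"
# Handle only "change" events, which device-mapper emits once the
# table is loaded and the device is usable.
KERNEL=="dm-*", ACTION!="change", GOTO="device_mapper_end"
# ... name lookup / symlink rules would follow here ...
LABEL="device_mapper_end"
```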
On a crazy hunch, I updated to lvm2 2.02.27 from stable amd64, and I cannot reproduce the problem there. If the other reporters could try that, I'd love to hear if it works for them.
*** Bug 193811 has been marked as a duplicate of this bug. ***
"lvcreate very early on causes /dev/dm-X to exist. At this point, the node is
NOT quite ready for use (not yet activated, and no COW is setup), but it does
exist. At this moment, udev fires the IMPORT{program} rule that runs 'dmsetup
info --export ....'."

This is the normal problem with udev:

Firstly, udev must be configured to ignore ADD events against dm devices completely and react *only* to CHANGE events.

Secondly, udev needs to provide a mechanism that lvm2 can use to wait for udev to finish what it is doing before lvm2 proceeds and uses the devices. I'm told to expect the (upstream) code for this to arrive in my inbox very soon.

Until then, if you hit this problem, you need to tell udev to ignore the affected dm devices completely (or add sleep+retry to the userspace dm & lvm2 code).
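Until that wait mechanism lands, the sleep+retry workaround mentioned above can be approximated with a small wrapper around lvcreate. A minimal sketch under that assumption (the retry count, delay, and volume names are arbitrary, not from this thread):

```shell
#!/bin/sh
# retry CMD ARGS...: run CMD up to 5 times, sleeping between attempts,
# to ride out the short window in which udev's dmsetup still holds the
# device open. RETRY_DELAY (seconds) is configurable for testing.
retry() {
    n=0
    until "$@"; do
        n=$((n + 1))
        [ "$n" -ge 5 ] && return 1
        sleep "${RETRY_DELAY:-1}"
    done
}

# Typical use (illustrative volume names):
#   retry lvcreate -s -L200M -n rootsnap /dev/system/root
```

This does not fix the race, it only retries until lvcreate wins it; the real fix is the udev/lvm2 synchronization described above.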
I upgraded to lvm2-2.02.27 (in testing branch for x86). It works for me. Before trying this upgrade, my only solution to this bug was ignoring *both* add and change events. BTW, nice tip about reload_rules, Matthias. Not needed if the kernel has inotify support ;-).
Ok, so we need to stabilize a newer version of lvm2. I'll file the various bugs.
New x86 stable versions of lvm2 and device-mapper work flawlessly for me.
amd64 here, too. Bug can be closed, IMHO.
Not closing until all the minor arches get to the stabilization bugs.
(In reply to comment #25)
> not closing until all the minor arches get to the stabilization bugs.

All done.