Hi, currently the lvm service requires lvmetad:

> # /etc/init.d/lvm ineed
>  * Caching service dependencies ... [ ok ]
> lvmetad sysfs

That doesn't make sense:

1. You typically put lvm into your boot runlevel, but lvmetad can't create its socket in the boot runlevel. To make that visible, add "ls -l /run/lvm" to start_post() in /etc/init.d/lvmetad:

>  * Starting lvmetad ...
> [ ok ]
> ls: cannot access /run/lvm: No such file or directory
> [ ok ]
>  * Setting up the Logical Volume Manager ...
> /run/lvm/lvmetad.socket: connect failed: No such file or directory
> WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
> [ ok ]

2. As you can see from lvm's WARNING, lvm works without lvmetad.

3. rc_need="lvmetad" causes the lvm service to be restarted when you restart lvmetad. That is unnecessary and dangerous, because stopping the lvm service will unmount the file systems on all your VGs if they aren't busy.

Reproducible: Always
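The fallback that lvm's WARNING describes boils down to a simple socket check. A minimal sketch in plain POSIX sh (the socket path is taken from the log above; the echoed messages are illustrative placeholders, not lvm's actual output):

```shell
#!/bin/sh
# Sketch of lvm's fallback: if the lvmetad socket is absent, scan the
# disks directly instead of failing. Messages are illustrative only.
LVMETAD_SOCKET="/run/lvm/lvmetad.socket"

if [ -S "$LVMETAD_SOCKET" ]; then
    mode="cached"
    echo "lvmetad socket found, using cached metadata"
else
    mode="scan"
    echo "WARNING: Failed to connect to lvmetad. Falling back to internal scanning."
fi
```

Since lvm itself already does exactly this kind of fallback, a hard `need lvmetad` dependency buys nothing at boot.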
My proposed patch:

diff -rupN old/sys-fs/lvm2/files/lvm.rc-2.02.105-r2 new/sys-fs/lvm2/files/lvm.rc-2.02.105-r2
--- old/sys-fs/lvm2/files/lvm.rc-2.02.105-r2	2014-02-02 20:52:34.000000000 +0100
+++ new/sys-fs/lvm2/files/lvm.rc-2.02.105-r2	2015-07-17 17:20:48.251018536 +0200
@@ -6,7 +6,8 @@
 depend() {
 	before checkfs fsck
 	after modules device-mapper
-	need lvmetad sysfs
+	need sysfs
+	use lvmetad
 }
 
 config='global { locking_dir = "/run/lock/lvm" }'
@@ -77,6 +78,9 @@
 	then
 		if [ "$VGS" ]
 		then
+			local _ending="eend"
+			[ "$RC_RUNLEVEL" = shutdown ] && _ending="ewend"
+
 			ebegin " Shutting Down LVs & VGs"
 			#still echo stderr for debugging
 			lvm_commands="#! ${lvm_path} --config '${config}'\n"
@@ -86,7 +90,7 @@
 			then
 				lvm_commands="${lvm_commands}vgchange --sysinit -a ln ${VGS}\n"
 				# Order of this is important, have to work around dash and LVM readline
 				printf "%b\n" "${lvm_commands}" | $lvm_path /proc/self/fd/0 --config "${config}" >/dev/null
-				eend $? "Failed (possibly some LVs still needed for /usr or root)"
+				${_ending} $? "Failed (possibly some LVs still needed for /usr or root)"
 			fi
 			einfo "Finished shutting down the Logical Volume Manager"
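To illustrate why the patch switches to ewend at shutdown: eend treats a nonzero status as a failure, while ewend downgrades it to a warning and returns success. A rough standalone sketch with mocked helpers (the real eend/ewend come from OpenRC and format their output differently; RC_RUNLEVEL is normally set by OpenRC and is hardcoded here for the demonstration):

```shell
#!/bin/sh
# Mocked stand-ins for OpenRC's eend/ewend helpers: eend propagates a
# nonzero status as a failure, ewend only warns and returns success.
eend()  { s=$1; shift; [ "$s" -eq 0 ] && echo "[ ok ]" || echo "[ !! ] $*"; return "$s"; }
ewend() { s=$1; shift; [ "$s" -eq 0 ] && echo "[ ok ]" || echo "[ warn ] $*"; return 0; }

# Same selection logic as in the patch: only at shutdown do we tolerate
# LVs that cannot be deactivated (e.g. those still backing /usr or /).
RC_RUNLEVEL=shutdown
_ending="eend"
[ "$RC_RUNLEVEL" = shutdown ] && _ending="ewend"

${_ending} 1 "Failed (possibly some LVs still needed for /usr or root)"
```

With RC_RUNLEVEL=shutdown the nonzero status only prints a warning; in any other runlevel eend would still flag it as a real failure.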
I would love to see this get resolved; I'm getting really tired of having to copy patches over to every system I install that uses LVM. On top of the issues already mentioned, using lvmetad can actually be dangerous on non-server hardware, because a single error in RAM can cause its cached state to diverge from the disks, and that can cause severe data loss. The biggest beneficiaries of lvmetad are management software that refuses to do its own caching and users of near-line storage arrays, neither of which is a common case for most Gentoo users.
*** This bug has been marked as a duplicate of bug 525614 ***
Thanks for the report. Fixed in http://gitweb.gentoo.org/repo/gentoo.git/commit/?id=ef66b97c3c1778c3c8e9f96d80057ad7a1a3e2f4