With the latest lvm2 stable update (sys-fs/lvm2-2.02.187-r1), lvm complains at boot and at poweroff about lvmetad. Other people are affected, see this thread: https://forums.gentoo.org/viewtopic-t-1111740.html

boot:

  * Starting the Logical Volume Manager ...
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  Reading all physical volumes. This may take a while...
  Found volume group "anon" using metadata type lvm2
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  2 logical volume(s) in volume group "anon" now active
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  [ ok ]

poweroff:

  * Stopping the Logical Volume Manager ...
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  Logical volume anon/root contains a filesystem in use.
  Can't deactivate volume group "anon" with 1 open logical volume(s)
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  * Failed to stop Logical Volume Manager (possibly some LVs still needed for /usr or root)
  [ !! ]

Downgrading to the previous stable (2.02.184-r5) fixes the issue. As one user pointed out, it seems related to this change: https://bugs.gentoo.org/689292

Reproducible: Always
Show us

  # lvm dumpconfig global 2>/dev/null | grep -q 'use_lvmetad=1'
  # echo $?

and your runlevel where the LVM service is scheduled to run, i.e.

  # rc-status boot

and the output of

  # /etc/init.d/lvm iuse
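For reference, `grep -q` prints nothing; its result is carried entirely in the exit status, which is what the `echo $?` above inspects. A self-contained sketch of how to read it (using `printf` to stand in for the real `lvm dumpconfig` output):

```shell
# grep -q is silent; the exit status carries the answer:
# 0 = pattern found (use_lvmetad=1 is set), 1 = pattern not found.
printf 'use_lvmetad=1\n' | grep -q 'use_lvmetad=1'
echo $?   # prints 0 -> lvmetad is enabled in lvm.conf

printf 'use_lvmetad=0\n' | grep -q 'use_lvmetad=1'
echo $?   # prints 1 -> lvmetad is NOT enabled in lvm.conf
```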
(In reply to Thomas Deutschmann from comment #1)
> # lvm dumpconfig global 2>/dev/null | grep -q 'use_lvmetad=1'
> # echo $?

1

> and your runlevel where LVM service is scheduled to run, i.e.
>
> # rc-status boot

lvm is not present, although it is correctly added to the boot runlevel. "rc-service lvm start" says:

  * WARNING: lvm has already been started

With the previous version, "rc-status boot" correctly shows lvm in the list.

> and output of
>
> # /etc/init.d/lvm iuse

No output for this command.
If `lvm dumpconfig global 2>/dev/null | grep 'use_lvmetad=1'` (note: I removed "-q") didn't output "use_lvmetad=1", lvm won't connect to lvmetad.

I just verified that:

1) I removed lvmetad from any runlevel.
2) I set use_lvmetad=0 in lvm.conf.

On restart, the lvm service did not complain. If I now set this back to 1, I can reproduce the problem, because lvmetad isn't in a runlevel and therefore the service isn't running.

At the moment you are supposed to also add lvmetad to the same runlevel where you added the lvm service. If lvm doesn't show up in the boot runlevel, it's not started in the boot runlevel. Please check where you added the lvm service.

I am currently thinking about changing "use lvmetad" to "want lvmetad". This would avoid the need to add lvmetad manually to the same runlevel.
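For readers not familiar with OpenRC dependency types: a `use` dependency starts the other service only if that service is itself enabled in some runlevel, while a `want` dependency starts it whenever its service script exists, no `rc-update` needed. A hypothetical sketch of the `depend()` change being considered (illustrative only, not the actual Gentoo runscript):

```sh
# Sketch of /etc/init.d/lvm's depend() block -- an assumption for
# illustration, not the shipped Gentoo file.
depend() {
    # "use lvmetad": lvmetad is started before lvm ONLY if lvmetad
    # was itself added to a runlevel -- the situation in this bug.
    #use lvmetad

    # "want lvmetad": lvmetad is started before lvm whenever the
    # lvmetad service exists, even if it is in no runlevel.
    want lvmetad
}
```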
> At the moment you are supposed to also add lvmetad to the same runlevel
> where you added lvm service. If lvm doesn't show up in boot runlevel, it's
> not started in boot runlevel. Please check where you added lvm service.
>
> I am currently thinking about changing "use lvmetad" to "want lvmetad". This
> would avoid the need to add lvmetad manually to the same runlevel.

The thing is, the current ebuild does not warn about the need to add lvmetad.

So I tried adding lvmetad to the boot runlevel. Before that, to make sure, I removed lvm from the boot runlevel and re-added it:

  $ rc-update del lvm boot
  $ rc-update add lvm boot
  $ rc-update add lvmetad boot

It fixes the issue at boot:

  * Starting the Logical Volume Manager ...
  Reading volume groups from cache.
  Found volume group "anon" using metadata type lvm2
  5 logical volume(s) in volume group "matrix" now active
  [ ok ]

  $ rc-status boot | grep lvm
  lvmetad  [ started ]
  lvm      [ started ]

  $ /etc/init.d/lvm iuse
  lvmetad

But when powering off the system, there is this error:

  * Stopping the Logical Volume Manager ...
  Logical volume anon/root contains a filesystem in use.
  Can't deactivate volume group "anon" with 1 open logical volume(s)
  * Failed to stop Logical Volume Manager (possibly some LVs still needed for /usr or root)
  [ !! ]
  * Stopping lvmetad ... [ ok ]
Please ignore the typo, there is only one VG, called "anon".

Also,

  $ lvm dumpconfig global 2>/dev/null | grep 'use_lvmetad=1'

correctly returns:

  use_lvmetad=1

So boot is somehow fixed, but I don't understand the issue when powering off.
(In reply to David Duchesne from comment #5)
> Also,
>
> $ lvm dumpconfig global 2>/dev/null | grep 'use_lvmetad=1'
>
> correctly returns:
>
> use_lvmetad=1

So your comment #2, which said "1" (meaning use_lvmetad=1 wasn't found), was wrong. This confused me. :)

> The thing is the current ebuild does not warn about the need to add lvmetad.

Like it's not telling you to enable the lvm service ;) This is not new. You always had to manually enable the lvmetad service when you need it.

But I get your point. Like I said, I am thinking about switching from "use" to "want", which would start lvmetad/lvmlockd even when those services aren't enabled in a runlevel. Given that you can control usage in /etc/lvm/lvm.conf and these are just 'helper' services, using "want" seems to be the correct way to do it.

> So boot is somehow fixed but I don't understand the issue when powering off.

OK, maybe you don't get the difference: the message you quoted isn't a failure. It's using ewarn (only visible as a warning when you can see colored output). This is new. The previous runscript just told you it failed to stop the lvm service and entered the error state (not critical, mostly just a visual problem, because the shutdown wasn't interrupted). The new runscript will show this warning instead and make sure that the service won't exit with an error.
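A hedged sketch of the ewarn-vs-error distinction described above, assuming the standard OpenRC helper functions (ebegin/ewarn/eend) and using `vgchange -a n` as the deactivation step; this is illustrative, not the actual shipped runscript:

```sh
# Sketch of a runscript stop() -- an assumption for illustration,
# not the actual lvm.rc shipped by Gentoo.
stop() {
    ebegin "Stopping the Logical Volume Manager"
    if ! vgchange -a n >/dev/null 2>&1; then
        # Old behavior (sketch): "eend 1" here put the service into an
        # error state on shutdown, even though shutdown continued.
        # New behavior (sketch): only warn, then report success below,
        # so the service does not exit with an error.
        ewarn "Failed to stop Logical Volume Manager" \
              "(possibly some LVs still needed for /usr or root)"
    fi
    eend 0
}
```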
> Given that you can control usage in /etc/lvm/lvm.conf and these
> are just 'helper' services, using "want" seems to be the correct way to do
> it.

Yes, exactly what I was going to ask. Thanks for clarifying that part.

> > So boot is somehow fixed but I don't understand the issue when powering off.
>
> OK, you maybe don't get the difference: The message you quoted isn't a
> failure. It's using ewarn (only visible when you can see colored output).
> This is new. Previous runscript just told you it failed to stop lvm service
> and entered error state (not critical, was mostly just a visual problem
> because the shutdown wasn't interrupted). The new runscript will show this
> warning instead and make sure that service won't exit with an error...

Ok, I was a bit worried, because with the previous LVM the shutdown output was the following:

  * Shutting down the Logical Volume Manager
  * Shutting Down LVs & VGs ...
  Logical volume anon/root contains a filesystem in use.
  [ ok ]
  * Finished shutting down the Logical Volume Manager
  * Stopping lvmetad ... [ ok ]
"This is not new. You always had to manually enable lvmetad service when you need it" This is not correct. Up until 2.02.187-r1 lvmetad was started by the lvm rc script. I believe that this was because up until then it had 'need lvmetad'. Bug 689292 changed this to 'use lvmetad' on the 14th April 2020.
The bug has been closed via the following commit(s):

https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=171bc0144212041972ae2e877078cf073b8252a5

commit 171bc0144212041972ae2e877078cf073b8252a5
Author:     Thomas Deutschmann <whissi@gentoo.org>
AuthorDate: 2020-04-22 15:44:44 +0000
Commit:     Thomas Deutschmann <whissi@gentoo.org>
CommitDate: 2020-04-22 15:44:44 +0000

    sys-fs/lvm2: use rc_want instead of rc_use

    Closes: https://bugs.gentoo.org/718748
    Package-Manager: Portage-2.3.99, Repoman-2.3.22
    Signed-off-by: Thomas Deutschmann <whissi@gentoo.org>

 sys-fs/lvm2/files/lvm.rc-2.02.187                  | 22 +++++++++++-----------
 ...-2.02.187-r1.ebuild => lvm2-2.02.187-r2.ebuild} |  0
 2 files changed, 11 insertions(+), 11 deletions(-)