I keep distfiles on a GlusterFS server, mounted locally to save bandwidth during regular updates. As part of regular maintenance after a @world update I restart the services (via app-admin/restart-services). After upgrading to =sys-fs/udev-init-scripts-33 this causes random processes to be killed (most notably syslog-ng and sshd).

How to reproduce:

  mount /usr/portage/distfiles
  emerge util-linux
  restart-services   # restarts udev
  umount /usr/portage/distfiles

  Connection to ssh.example.com closed by remote host.
  Connection to ssh.example.com closed.

The killing combination:

  =sys-fs/udev-init-scripts-33 with sys-fs/udev      -> processes killed
  =sys-fs/udev-init-scripts-32 with sys-fs/udev      -> OK
  =sys-fs/udev-init-scripts-3[23] with sys-fs/eudev  -> OK
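To confirm which combination is installed, something like this should do it (qlist comes from app-portage/portage-utils; a package that isn't installed simply won't be listed):

  qlist -Iv sys-fs/udev sys-fs/eudev sys-fs/udev-init-scripts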
I saw the same behavior: sshd died when unmounting an NFS filesystem. It is no longer an issue after downgrading to sys-fs/udev-init-scripts-32.
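For anyone who wants to stay on the older scripts until this is resolved, a minimal sketch of the downgrade (assuming version 32 is still in the tree, and that package.mask is a file rather than a directory):

  # keep portage from pulling the broken version back in
  echo ">sys-fs/udev-init-scripts-32" >> /etc/portage/package.mask
  # re-emerge; portage now selects the highest unmasked version (32)
  emerge --ask --oneshot sys-fs/udev-init-scripts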
I "resolved" it by migrating to eudev. Now even udev-init-scripts-33 works fine.
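In case it helps anyone else, the switch is roughly this (a sketch, assuming a standard OpenRC system; sys-fs/eudev and sys-fs/udev block each other, so portage unmerges udev as part of the switch):

  # make sure udev can't be re-selected later
  echo "sys-fs/udev" >> /etc/portage/package.mask
  # pull in eudev; the blocker resolution removes sys-fs/udev
  emerge --ask --oneshot sys-fs/eudev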
This has been plaguing me since February as well and I just found this bug. I can't give positive proof that it is fixed, but I can say that it only seemed to happen on machines old enough that udev was still the default when they were installed. My newer hardware runs eudev and none of those machines had issues. Most of mine manifested in ways like:
- random lvmetad crashes
- sshd crashing when (among other things) mounting filesystems

The latter was particularly painful on headless machines. Thanks Tomáš for the fix/workaround!
In udev-init-scripts-33, the --daemon option was removed from the udevd command line. Perhaps this triggers some change in behavior in systemd-udevd. https://gitweb.gentoo.org/proj/udev-gentoo-scripts.git/commit/?id=6f98cf89f54a9ad1c9625577d100f6ca8b8a2b0b
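In OpenRC terms the difference would look roughly like this (a hypothetical, heavily simplified runscript sketch, not the actual init script; the real change is in the commit above):

  #!/sbin/openrc-run
  command=/sbin/udevd
  pidfile=/run/udevd.pid

  # udev-init-scripts-32 style: udevd detaches itself
  #command_args="--daemon"

  # udev-init-scripts-33 style: start-stop-daemon does the
  # backgrounding instead, and udevd stays in the foreground
  command_background="yes"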
Are you able to reproduce this with >=sys-fs/udev-241? https://github.com/systemd/systemd/commit/31cbd2025359e1e8435a2dc371e439591846d8c4
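For testing with a newer udev on a stable system, keywording it would look something like this (a sketch; assumes package.accept_keywords is a file and the arch is amd64):

  echo ">=sys-fs/udev-241 ~amd64" >> /etc/portage/package.accept_keywords
  emerge --ask --oneshot sys-fs/udev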
I'm unable to reproduce on a current (~)amd64 system:

  sys-fs/udev-245.5-r1 + sys-fs/udev-init-scripts-33  OK
  sys-fs/udev-246-r1   + sys-fs/udev-init-scripts-34  OK