nvidia-drivers-334.21 with USE=uvm creates /dev/nvidia-uvm. However, the device node is not owned by the video group:

crw-rw-rw- 1 root root 248, 0 Apr 3 12:37 /dev/nvidia-uvm

It should be:

chown root:video /dev/nvidia-uvm
crw-rw-rw- 1 root video 248, 0 Apr 3 12:37 /dev/nvidia-uvm

Reproducible: Always
Correction: nvidia-drivers-334.21 with USE=uvm does not create /dev/nvidia-uvm, but it should. The node is created when root runs a CUDA-enabled program, and it is created with the wrong group owner (it should be video). Perhaps nvidia-drivers-334.21 with USE=uvm should cause this to happen at boot:

mknod -m 660 /dev/nvidia-uvm c 249 0
chgrp video /dev/nvidia-uvm
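A minimal sketch of how the boot-time approach could look as an /etc/local.d script (the file name is an assumption, and note the major number should not be hard-coded: nvidia-uvm's major is assigned dynamically, which is why 248 and 249 both appear in this report, so a robust script looks it up in /proc/devices):

```
#!/bin/sh
# /etc/local.d/nvidia-uvm.start -- hypothetical sketch, not what the ebuild ships.
# nvidia-uvm's char major is assigned dynamically, so read it from /proc/devices
# instead of hard-coding 249. Requires root and a loaded nvidia-uvm module.
major=$(awk '$2 == "nvidia-uvm" { print $1 }' /proc/devices)
if [ -n "$major" ] && [ ! -e /dev/nvidia-uvm ]; then
    mknod -m 660 /dev/nvidia-uvm c "$major" 0
    chgrp video /dev/nvidia-uvm
fi
```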
I'd been thinking about that. The device node is automatically created by nvidia-modprobe and is accessible to *anyone*. We could get the source code [1], patch it, and install our own sys-fs/nvidia-modprobe package that ships an alternative /usr/bin/nvidia-modprobe which sets proper permissions. But this raises some questions. The big question is whether or not the device node should be accessible only to people in the video group. After all, you should be able to run computations regardless of your access to accelerated video, shouldn't you? And what would the side effects be? Which altcoin mining operations will suddenly stop working? Will Nvidia get inundated with effectively invalid bug reports?

[1] ftp://download.nvidia.com/XFree86/nvidia-modprobe/
See also bug #482336 which discusses whether with GNOME/systemd the video group should be used, or if it's entirely an antiquated and ineffective security kludge.
Jeroen, you seem to miss the fact that you still need access to /dev/nvidia{0,ctl} devices managed by nvidia.ko to run CUDA applications. nvidia-uvm.ko and /dev/nvidia-uvm merely provide a new CUDA feature ("unified virtual memory"); if you're using an older version of CUDA you don't need nvidia-uvm.ko at all. So you still need to belong to the "video" group to run CUDA; nvidia-uvm changes nothing in that regard. The only question I see is whether you want to restrict access to /dev/nvidia-uvm (for instance if, hypothetically, nvidia-uvm is exploitable such that the user having access to /dev/nvidia-uvm but not /dev/nvidia{0,ctl} can elevate privileges).
(In reply to Alexander Monakov from comment #4)
> you seem to miss the fact that you still need access to /dev/nvidia{0,ctl}
> devices managed by nvidia.ko to run CUDA applications.

I fail to see how you arrived at that conclusion.

> So you still need to belong to the "video" group to run CUDA; nvidia-uvm
> changes nothing in that regard.

So you're disagreeing with the reporter/Summary? Does it not matter whether nvidia-uvm is accessible to everyone?

> The only question I see is whether you want
> to restrict access to /dev/nvidia-uvm (for instance if, hypothetically,
> nvidia-uvm is exploitable such that the user having access to
> /dev/nvidia-uvm but not /dev/nvidia{0,ctl} can elevate privileges).

I already suggested in comment #2 that we could fetch, patch, compile and install a version of the nvidia-modprobe sources that restricts access to nvidia-uvm to the video group. I don't see how hypothetical security bugs change the way we look at the problem.
(In reply to Jeroen Roovers from comment #5)
> (In reply to Alexander Monakov from comment #4)
> > you seem to miss the fact that you still need access to /dev/nvidia{0,ctl}
> > devices managed by nvidia.ko to run CUDA applications.
>
> I fail to see how you arrived at that conclusion.

You said, "And what would the side effects be? What altcoin mining operations will suddenly stop working?". Mining apps' users have to be in the video group to open /dev/nvidiactl anyhow. I apologize if I'm badly misinterpreting something.

> > So you still need to belong to the "video" group to run CUDA; nvidia-uvm
> > changes nothing in that regard.
>
> So you're disagreeing with the reporter/Summary? It does not matter whether
> nvidia-uvm is accessible to everyone?

No, I was pointing out that the discussion was confusing. It would make sense to have the same permissions on /dev/nvidia-uvm as on /dev/nvidiactl.

> > The only question I see is whether you want
> > to restrict access to /dev/nvidia-uvm (for instance if, hypothetically,
> > nvidia-uvm is exploitable such that the user having access to
> > /dev/nvidia-uvm but not /dev/nvidia{0,ctl} can elevate privileges).
>
> I already suggested in comment #2 that we could fetch, patch, compile and
> install a version of the nvidia-modprobe sources that restricts access to
> nvidia-uvm to the video group.

Hmm, I don't think going to such great lengths is necessary. Can you simply adjust the permissions from a udev rule (if that works it should be preferable, I guess; the package already ships one udev rule for the main module), or use an "install" command for nvidia-uvm in modprobe.d?

> I don't see how hypothetical security bugs change the way we look at the
> problem.

I'm afraid I didn't catch your meaning here.
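For illustration, the modprobe.d "install" approach mentioned here could look something like the following. This is a hypothetical sketch (the file name and exact commands are assumptions, not what any ebuild ships); the `install` directive replaces modprobe's normal module loading, so it must re-invoke modprobe with --ignore-install before creating and fixing the node:

```
# /etc/modprobe.d/nvidia-uvm.conf -- hypothetical sketch
install nvidia-uvm /sbin/modprobe --ignore-install nvidia-uvm; /usr/bin/nvidia-modprobe -u -c 0; /bin/chgrp video /dev/nvidia-uvm; /bin/chmod 0660 /dev/nvidia-uvm
```

One caveat with the pure udev alternative: since the node is normally created by user-space nvidia-modprobe rather than by the kernel, a plain KERNEL== match may never fire, so a rule would likely have to trigger on the nvidia_uvm module-load event instead.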
Ping. Any progress on this? I think *any* real solution (even if it proves wrong at some point) is better than the current implicit automagic creation.
Any updates on this?
Any progress on this?
Oleg, Michał, I'm not sure whether your questions are directed primarily at me or at Jeroen. Do you agree with what I said above? Would you like to see patches addressing the problem as I see fit?
It should be created by a udev rule with the correct permissions as a workaround until NVIDIA resolves it.
Created attachment 399174 [details] nvidia uvm udev rules
Gentlemen, I am not a developer, but maybe my input is also helpful for you guys. I am running nvidia-drivers-349.12 with a GTX 960, gentoo-sources-3.19.1 and amd64. I was about to create a script that chmods nvidia-modprobe to 0700 (I'll do that anyway) and then creates the device node manually. While fiddling around, I found that OpenCL works regardless of whether /dev/nvidia-uvm is in the video group (nvidia-modprobe leaves it at root:root). Finally I use the udev rules from Oleg now and they work fine. It took me 1.5 days to find this page (or to find it and actually read it to the end ;->>), since as a normal darktable user I searched differently. It would be nice if the nvidia-drivers ebuild installed the rules once USE=uvm is set.
Axel, is the udev rule the only workaround needed to make this work?
I'm working on an ebuild for a package that attempts to autodetect the CUDA hardware arch available on the system. If nvidia-modprobe -u hasn't been run prior to building the package then the nvcc call in the build system silently fails. It would be much cleaner for this and other use cases if the device nodes were at least created by udev on Gentoo rather than relying on the nvidia-modprobe binary. However, NVIDIA does acknowledge that in packaging there may be a preference to exclude or disable the nvidia-modprobe binary or use an alternate method[1]. [1] http://developer.download.nvidia.com/compute/cuda/6_0/rel/docs/CUDA_Getting_Started_Linux.pdf (Section 2.9)
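As an illustration of the kind of guard that would help until then, an ebuild could turn the silent failure into a loud one before probing. This is a hypothetical fragment, not code from any actual ebuild (`die` is the standard ebuild failure function, and creating the node still requires root or a setuid nvidia-modprobe):

```
# Hypothetical pkg_setup fragment: fail loudly instead of letting the
# nvcc hardware probe silently miscompile when /dev/nvidia-uvm is missing.
pkg_setup() {
	if [[ ! -e /dev/nvidia-uvm ]]; then
		nvidia-modprobe -u -c 0 \
			|| die "/dev/nvidia-uvm is missing and could not be created; CUDA hardware detection would fail silently"
	fi
}
```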
*** Bug 591286 has been marked as a duplicate of this bug. ***
*** Bug 615558 has been marked as a duplicate of this bug. ***
(In reply to Daniel M. Weeks from comment #16)
> I'm working on an ebuild for a package that attempts to autodetect the CUDA
> hardware arch available on the system. If nvidia-modprobe -u hasn't been run
> prior to building the package then the nvcc call in the build system
> silently fails.

I am not sure I understand what you are trying to do. You want to "detect" features on the build host that you want to configure at build time?
The bug has been referenced in the following commit(s):

https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=9fd6d358a4e831724c79f9369c3c86dea00cddd0

commit 9fd6d358a4e831724c79f9369c3c86dea00cddd0
Author:     Jeroen Roovers <jer@gentoo.org>
AuthorDate: 2019-03-03 13:09:53 +0000
Commit:     Jeroen Roovers <jer@gentoo.org>
CommitDate: 2019-03-03 13:11:05 +0000

    x11-drivers/nvidia-drivers: USE=uvm: add udev rule, fix rmmod

    Package-Manager: Portage-2.3.62, Repoman-2.3.12
    Bug: https://bugs.gentoo.org/506696
    Bug: https://bugs.gentoo.org/578126
    Signed-off-by: Jeroen Roovers <jer@gentoo.org>

 .../nvidia-drivers/files/nvidia-uvm.udev-rule      |   1 +
 .../nvidia-drivers-390.116-r1.ebuild               | 589 ++++++++++++++++++++
 .../nvidia-drivers-410.104-r1.ebuild               | 592 +++++++++++++++++++++
 .../nvidia-drivers/nvidia-drivers-415.27-r1.ebuild | 592 +++++++++++++++++++++
 .../nvidia-drivers/nvidia-drivers-418.43-r1.ebuild | 585 ++++++++++++++++++++
 5 files changed, 2359 insertions(+)
(In reply to Jeroen Roovers from comment #19)
> (In reply to Daniel M. Weeks from comment #16)
> > I'm working on an ebuild for a package that attempts to autodetect the CUDA
> > hardware arch available on the system. If nvidia-modprobe -u hasn't been run
> > prior to building the package then the nvcc call in the build system
> > silently fails.
>
> I am not sure I understand what you are trying to do. You want to "detect"
> features on the build host that you want to configure at build time?

Yes. It's similar to the probing of the CPU that happens when specifying -march=native, but for the GPU. However, if GPU probing fails, rather than getting a generic build that works across all CUDA architectures, you get a build for *each* CUDA architecture.
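To make the probing concrete, here is a sketch of the idea. It assumes a driver new enough that nvidia-smi supports the compute_cap query field; cap_to_arch is a made-up helper, not part of any NVIDIA tool:

```shell
#!/bin/sh
# Map a CUDA compute capability such as "8.6" to an nvcc -arch value
# such as "sm_86" by dropping the dot. cap_to_arch is a hypothetical helper.
cap_to_arch() {
    echo "sm_${1%.*}${1#*.}"
}

# Probe the GPU; on failure, fall back to building for every architecture,
# which is effectively what the build system described above does.
if cap=$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader 2>/dev/null); then
    echo "nvcc -arch=$(cap_to_arch "$cap")"
else
    echo "GPU probe failed; building for all CUDA architectures"
fi
```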