Bug 376527 - x11-drivers/nvidia-drivers should create devices upon module load (was: dev-util/nvidia-cuda-sdk / local scripts to create /dev/nvidia* nodes)
Status: RESOLVED FIXED
Alias: None
Product: Gentoo Linux
Classification: Unclassified
Component: New packages
Hardware: All Linux
Importance: Normal enhancement
Assignee: Doug Goldstein (RETIRED)
 
Reported: 2011-07-26 21:00 UTC by Tomas Vondra
Modified: 2016-08-15 00:57 UTC
CC List: 9 users



Attachments
/etc/local.d/cuda.start (cuda.start,498 bytes, text/plain)
2011-07-26 21:00 UTC, Tomas Vondra
Details
/etc/local.d/cuda.stop (cuda.stop,526 bytes, text/plain)
2011-07-26 21:00 UTC, Tomas Vondra
Details
udev rule to create and remove dev nodes (99-nvidia.rules,206 bytes, text/plain)
2012-03-15 03:55 UTC, Rick Farina (Zero_Chaos)
Details
new udev rule (99-nvidia.rules,182 bytes, text/plain)
2012-03-25 04:18 UTC, Rick Farina (Zero_Chaos)
Details
script called by udev (nvidia_control.sh,31 bytes, text/plain)
2012-03-25 04:20 UTC, Rick Farina (Zero_Chaos)
Details
old udev rule (99-nvidia.rules,182 bytes, text/plain)
2012-03-26 03:56 UTC, Rick Farina (Zero_Chaos)
Details
new udev rule for when nvidia-smi is installed into /opt/bin (99-nvidia.rules,182 bytes, text/plain)
2012-03-26 16:27 UTC, Rick Farina (Zero_Chaos)
Details
ebuild diff (295.33.ebuild.diff,719 bytes, patch)
2012-03-27 02:10 UTC, Rick Farina (Zero_Chaos)
Details | Diff

Description Tomas Vondra 2011-07-26 21:00:36 UTC
Created attachment 281079 [details]
/etc/local.d/cuda.start

I think we could add two /etc/local.d/ scripts to make this a bit easier. Before using CUDA it's necessary to create at least two device nodes (/dev/nvidiactl and /dev/nvidia0; with more cards, /dev/nvidia1 etc. are needed as well). And it's not sufficient to modprobe the nvidia module; you also have to issue mknod commands with root privileges.

In the end everyone does this with a script, so why not put it into /etc/local.d/ and make it automatic? The "getting started guide" already contains a shell script that does this, so I've just enhanced it a bit (to skip devices that already exist) - it's called "cuda.start" (see the attachment). I've also created "cuda.stop", which does exactly the opposite (removes the devices etc.).
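
For reference, the guide's script (and thus cuda.start) boils down to something like this - a sketch of the approach, not the literal attachment:

#!/bin/sh
# Load the module, then create any /dev/nvidia* nodes that are missing.
/sbin/modprobe nvidia || exit 1

# Count NVIDIA GPUs: 3D controllers (compute-only cards) plus VGA controllers.
N3D=$(lspci | grep -i nvidia | grep -c "3D controller")
NVGA=$(lspci | grep -i nvidia | grep -c "VGA compatible controller")
N=$((N3D + NVGA - 1))

for i in $(seq 0 "$N"); do
    [ -c "/dev/nvidia$i" ] || mknod -m 666 "/dev/nvidia$i" c 195 "$i"
done
[ -c /dev/nvidiactl ] || mknod -m 666 /dev/nvidiactl c 195 255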
Comment 1 Tomas Vondra 2011-07-26 21:00:56 UTC
Created attachment 281081 [details]
/etc/local.d/cuda.stop
Comment 2 Samuli Suominen (RETIRED) gentoo-dev 2011-07-27 05:20:05 UTC
Works fine here without mknod:

$ ls -l /dev/nvidia*
crw-rw-rw- 1 root root 195,   0 Jun 21 03:29 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Jun 21 03:29 /dev/nvidiactl
Comment 3 Samuli Suominen (RETIRED) gentoo-dev 2011-07-27 09:09:30 UTC
I suppose your nvidia-drivers or udev installation is broken if the device nodes don't get automatically created.
Comment 4 Tomas Vondra 2011-07-27 18:02:19 UTC
I don't think it's broken - it's a fresh system (less than a week since I've finally switched to amd64) and I'm not aware of any problems. Right now I'm using nvidia-drivers-270.41.19.

Maybe it's because I'm not using the nVidia card as a graphics card - I use integrated graphics, and the nVidia card is used just for CUDA. Could this be the cause? Are the files in /dev created when the device isn't being used as a graphics card?
Comment 5 Stefan Schmiedl 2011-08-25 08:43:13 UTC
I can confirm Tomas' report.

I have installed a new box with two nvidia GPUs and a Matrox chip for actual display handling. After reboot the nvidia kernel module is loaded, but the nvidia devices appear only after executing deviceQuery.

# ls /dev/nv*
/dev/nvram
# /opt/cuda/sdk/C/bin/linux/release/deviceQuery
[deviceQuery] starting...
/opt/cuda/sdk/C/bin/linux/release/deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Found 2 CUDA Capable device(s)
...snip...
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 4.0, CUDA Runtime Version = 4.0, NumDevs = 2, Device = Tesla M2090, Device = Tesla M2090
[deviceQuery] test results...
PASSED

Press ENTER to exit...

# ls -l /dev/nv*
crw-rw-rw- 1 root root 195,   0 Aug 24 22:38 /dev/nvidia0
crw-rw-rw- 1 root root 195,   1 Aug 24 22:38 /dev/nvidia1
crw-rw-rw- 1 root root 195, 255 Aug 24 22:38 /dev/nvidiactl
crw-r----- 1 root kmem  10, 144 Aug 24 22:34 /dev/nvram
Comment 6 Michal Januszewski (RETIRED) gentoo-dev 2011-08-29 22:16:34 UTC
(In reply to comment #5)
> I can confirm Tomas' report.
> 
> I have installed a new box with two nvidia GPUs and a Matrox chip for actual
> display handling. After reboot the nvidia kernel module is loaded, but the
> nvidia devices appear only after executing deviceQuery.
> 
> # ls /dev/nv*
> /dev/nvram
> # /opt/cuda/sdk/C/bin/linux/release/deviceQuery
> [deviceQuery] starting...
> /opt/cuda/sdk/C/bin/linux/release/deviceQuery Starting...

Interesting that deviceQuery creates the device nodes.  Could you please check if running `nvidia-smi -a` does the same?
Comment 7 Tomas Vondra 2011-09-11 22:43:07 UTC
(In reply to comment #6)
> Interesting that deviceQuery creates the device nodes.  Could you please check
> if running `nvidia-smi -a` does the same?

Yup, it creates the devices too:

# ls /dev/nvidia*
ls: cannot access /dev/nvidia*: No such file or directory

# nvidia-smi -a
... a lot of info about the device ...

# ls /dev/nvidia*
/dev/nvidia0  /dev/nvidiactl
Comment 8 Nicolas Bigaouette 2011-10-06 18:02:08 UTC
I stumbled upon this discussion looking for something else.

I can confirm that if X is not started on an nvidia card, the /dev/nvidia* devices won't be created. This prevents running CUDA/OpenCL code on video cards meant just for GPGPU.

After googling I wrote my own version of the script. It's an init script, though. The devices will be created and deleted.

You will find it on github: https://github.com/nbigaouette/ebuilds/tree/master/dev-util/cuda-init-script
or in my overlay (nbigaouette).
Comment 9 Nicolas Pinto 2011-11-25 14:38:12 UTC
I'm getting the same problems on a fresh system. Any way to get this merged into the nvidia-drivers ebuild?
Comment 10 Rick Farina (Zero_Chaos) gentoo-dev 2012-03-14 22:37:33 UTC
I'm not certain that an init script is the proper way to handle this; it seems needless and messy. udev knows when a new module is loaded, and watching "udevadm monitor" shows me this when the nvidia module gets loaded:

KERNEL[1331764023.288530] add      /module/nvidia (module)
UDEV  [1331764023.288712] add      /module/nvidia (module)
KERNEL[1331764023.288985] add      /kernel/slab/:t-0012288 (slab)
UDEV  [1331764023.289063] add      /kernel/slab/:t-0012288 (slab)
KERNEL[1331764023.289486] add      /bus/pci/drivers/nvidia (drivers)
UDEV  [1331764023.290500] add      /bus/pci/drivers/nvidia (drivers)

Surely someone knows how to convert the init script to a udev rule? I lack the ability right now but if no one else steps up I guess I'll learn.

-ZC
Comment 11 Rick Farina (Zero_Chaos) gentoo-dev 2012-03-14 23:16:41 UTC
I'm no expert; however, it appears that the first card would need:

mknod -m 660 /dev/nvidiactl c 195 255
mknod -m 660 /dev/nvidia0 c 195 0

and each additional card would need:

mknod -m 660 /dev/nvidiaN c 195 N (one per additional card, incrementing N - the node name and minor number go up together)

I'm not sure there is a clean way to run this for each detected card; however, since I doubt many people are hotplugging their video cards, I'm guessing that detecting module load and then adapting one of the scripts is good enough.

The attached cuda.start script could easily be adapted to run via a udev rule, though personally I'm not in love with this part:

	NVDEVS=`lspci | grep -i NVIDIA`
	N3D=`echo "$NVDEVS" | grep "3D controller" | wc -l`
	NVGA=`echo "$NVDEVS" | grep "VGA compatible controller" | wc -l`
	N=`expr $N3D + $NVGA - 1`

seems a bit awkward to me, and I don't really like grepping lspci output... however, I have no way to test since I only have one card.

-ZC
Comment 12 Rick Farina (Zero_Chaos) gentoo-dev 2012-03-15 03:55:20 UTC
Created attachment 305423 [details]
udev rule to create and remove dev nodes

Okay, I have fixed this to the best of my ability; the fix is as good as possible in my eyes.

When the nvidia kernel module is loaded, the rule creates the needed device nodes; when the module is unloaded, the nodes are removed.
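
The rule itself is tiny; roughly this shape (a sketch of the idea with assumed paths - the attached 99-nvidia.rules is the authoritative version):

# create the nodes when the nvidia module appears (running nvidia-smi makes
# the driver create /dev/nvidia* itself)...
ACTION=="add", DEVPATH=="/module/nvidia", SUBSYSTEM=="module", RUN+="/usr/bin/nvidia-smi"
# ...and remove them again when the module goes away
ACTION=="remove", DEVPATH=="/module/nvidia", SUBSYSTEM=="module", RUN+="/bin/rm -f /dev/nvidiactl /dev/nvidia0"

The DEVPATH/SUBSYSTEM matches come straight from the "udevadm monitor" output in comment 10.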

This is NOT a dev-util/nvidia-cuda-sdk bug; it is an x11-drivers/nvidia-drivers bug and must be fixed there, where it is appropriate. Believe it or not, you can use cuda without installing the sdk ;-)

-ZC
Comment 13 Rick Farina (Zero_Chaos) gentoo-dev 2012-03-15 03:56:40 UTC
Note: udev rule (if accepted) obsoletes all other proposed fixes.

-ZC
Comment 14 Kacper Kowalik (Xarthisius) (RETIRED) gentoo-dev 2012-03-24 09:00:33 UTC
I'm stealing this bug, since I agree with most of the people that it should be fixed on the driver level.

@Cardoe, @jer, @spock
What do you think about it? A udev rule seems to be a very neat solution to this rather annoying problem. The Nvidia forums are full of other ugly hacks, and I'm surprised that upstream hasn't done anything about it.

Although, "remove" in proposed rule doesn't work for me. I haven't figured out why yet.
Comment 15 Rick Farina (Zero_Chaos) gentoo-dev 2012-03-25 04:18:47 UTC
Created attachment 306581 [details]
new udev rule

Some cold-hearted guy moved nvidia-smi from /usr/bin to /opt/bin, so this udev rule update ONLY works for the latest nvidia-drivers ebuild and needs to be changed for the older ones.

I also introduce a new script, which the udev rule calls to do the removal properly.
Comment 16 Rick Farina (Zero_Chaos) gentoo-dev 2012-03-25 04:20:30 UTC
Created attachment 306583 [details]
script called by udev

This goes in /lib/udev (this is where all the other scripts called by udev are).

The original rule working for me must have been a fluke, because the remove didn't work for anyone else... so I wrote this script, which should fix that issue.
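
Given that the script is all of 31 bytes, it is presumably nothing more than (a guess at the contents, not the verified attachment):

#!/bin/sh
rm -f /dev/nvidia*

The reason a script is needed at all: udev's RUN+= executes the program directly, without a shell, so a glob like /dev/nvidia* is never expanded when rm is called straight from the rule - which would explain why the remove in the original rule didn't work for anyone else.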
Comment 17 Rick Farina (Zero_Chaos) gentoo-dev 2012-03-26 03:56:23 UTC
Created attachment 306669 [details]
old udev rule

After getting nvidia-drivers 295.33 to install by fixing the missing patch, I found that although the ebuild looks like it installs nvidia-smi into /opt/bin, it doesn't. So this fixes the udev rule to call nvidia-smi from where it actually is, not where it should be.

nvidia-drivers could use a lot of love it seems :-(
Comment 18 Rick Farina (Zero_Chaos) gentoo-dev 2012-03-26 16:27:32 UTC
Created attachment 306747 [details]
new udev rule for when nvidia-smi is installed into /opt/bin

Only one of these udev rules is required per version. I could write a script that calls it from either location, or even write the udev rule to just call both locations, but I feel this is the cleanest solution.

Just tell me and I'll write a script that finds nvidia-smi and calls it, OR I can write a udev rule that blindly calls both locations and lets one silently fail.

Thanks,
ZC
Comment 19 Rick Farina (Zero_Chaos) gentoo-dev 2012-03-27 02:10:33 UTC
Created attachment 306807 [details, diff]
ebuild diff

Here is a diff against the (now working) nvidia-drivers-295.33 in portage, as committed by cardoe today.

This patch will likely apply cleanly to all the nvidia-drivers ebuilds in portage, but note that the "old udev rule" is needed for <nvidia-drivers-295.33 and the "new udev rule" for >=nvidia-drivers-295.33.

Personally I think everyone who wants cuda should be on the latest driver, but I see no harm in fixing all the ebuilds.... Up to committer.
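
For the record, the hunk essentially just installs the rule and the helper script into place - a sketch of the diff's effect, with the FILESDIR names assumed:

insinto /lib/udev/rules.d
doins "${FILESDIR}"/99-nvidia.rules

insinto /lib/udev
doins "${FILESDIR}"/nvidia_control.sh
fowners root:${VIDEOGROUP} /lib/udev/nvidia_control.sh
fperms 0750 /lib/udev/nvidia_control.sh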

Thanks,
ZC
Comment 20 Doug Goldstein (RETIRED) gentoo-dev 2012-03-27 04:13:13 UTC
(In reply to comment #19)
> Created attachment 306807 [details, diff]
> ebuild diff
> 
> Here is a diff against the (now working) nvidia-drivers-295.33 in portage,
> as committed by cardoe today.
> 
> This patch will likely cleanly apply to all the nvidia-driver ebuilds in
> portage, but note that "old udev rule" is needed for <nvidia-drivers-295.33
> and "new udev rule" is needed for >=nvidia-drivers-295.33
> 
> Personally I think everyone who wants cuda should be on the latest driver,
> but I see no harm in fixing all the ebuilds.... Up to committer.
> 
> Thanks,
> ZC

Still not convinced that the remove rule is necessary. It's a bit hackish. Can you explain to me the configuration that results in a case where /dev/nvidia* is not removed?
Comment 21 Rick Farina (Zero_Chaos) gentoo-dev 2012-03-27 06:17:12 UTC
> Still not convinced that the remove rule is necessary. Its a bit hackish.
> Can you explain to me the configuration that results in a case that
> /dev/nvidia* is not removed?

Sure, turns out after testing the situation is much crazier than I thought.

Imagine, if you will: I modprobe -r nvidia and my device nodes are still present.

When something tries to access those /dev/nvidia* files, the kernel realizes that the nvidia module isn't loaded and then autoloads it.

I know, for just a second this sounds like a good thing, but then you remember the user *purposely* did a modprobe -r nvidia. In my case this is done to force the card to power down and stay powered down unless I want to power it back up, so that my Optimus laptop gets reasonable battery life. So basically, after I modprobe -r nvidia, if anything touches the device nodes the card powers right back up.

Personally, when I modprobe -r nvidia, I expect it to stay -r. If we are creating the nodes on module load then we MUST remove them on module unload, or the whole idea doesn't make sense. The states should be consistent imho: nvidia loaded, the nodes show up in /dev; nvidia unloaded, they shouldn't show up in /dev.

Please, think of the batteries....
Comment 22 Rick Farina (Zero_Chaos) gentoo-dev 2012-04-01 03:00:25 UTC
Just to confirm I did a little bit more testing.

Let's assume the user is dumber than a box of rocks and tries to unload nvidia module while X is running.

While I can't test every possible cuda based application, X is easy enough.

Its video card disappears magically in a puff of smoke and X dies. The system stays completely functional (albeit without X) and there are no kernel panics or similar.

If a user is stupid enough to "modprobe -r nvidia" while using it, they will experience whatever is using the graphics card suddenly dying; with or without this patch makes no difference.

That said, I really like the remove rule because it only makes sense that if I create nodes on module insert, I should destroy them on module remove.  

Additionally, the behavior is inconsistent without the rule. With the rule, when the nvidia module is not loaded, cuda applications cannot be used, and when it is loaded they can be. Without the remove rule, on system boot cuda applications cannot be used until you load nvidia, but after you remove the nvidia module you can still magically use cuda applications (because the kernel forces an autoload).

I've made enough of a case for this in my mind; I'm not going to jump up and down anymore. You agree or you don't; either way I'd rather see the create rule merged alone than no rule at all because we can't agree.

Please fix this in the manner you feel is best, even if you still think the remove rule is unneeded. I, of course, feel it is needed, and am extremely unlikely to waver on this matter, as finding out my 8-hour battery had been drained 2 hours into a cross-country flight is the entire reason I wrote the remove rule.

Thanks
Comment 23 Rick Farina (Zero_Chaos) gentoo-dev 2012-04-04 00:42:58 UTC
On request, I looked up some additional info on this bug. It appears to be a long-standing issue (I saw bug reports on this against 1.0.xx).

One of the many advocated fixes was to add a line like this into /etc/modprobe.d/nvidia.conf

options nvidia NVreg_DeviceFileMode=432 NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=27 NVreg_ModifyDeviceFiles=1

If that looks familiar, that's because it's already in gentoo, and it doesn't seem to work. Not that it matters, but I'd also like to point out that it is wrong: using mode 432 means the user can't read and write the device, making the change from GID=0 rather pointless. The mode should be 0660. Again, that's all moot since the module doesn't actually honor any of this. Someone better than me would have to tell us why the nvidia module ignores the options passed to it.

Sadly, I'm back at the same point. This is a permanent bug in nvidia, and they clearly do not care about fixing it upstream, as even the README says that the X driver creates the nodes, not the loading of the module. You can find this in /usr/share/doc/nvidia-drivers-${PV}/README.bz2:


Q. How and when are the NVIDIA device files created?

A. Depending on the target system's configuration, the NVIDIA device files
   used to be created in one of three different ways:
   
      o at installation time, using mknod
   
      o at module load time, via devfs (Linux device file system)
   
      o at module load time, via hotplug/udev
   
   With current NVIDIA driver releases, device files are created or modified
   by the X driver when the X server is started.

   By default, the NVIDIA driver will attempt to create device files with the
   following attributes:
   
         UID:  0     - 'root'
         GID:  0     - 'root'
         Mode: 0666  - 'rw-rw-rw-'
   
   Existing device files are changed if their attributes don't match these
   defaults. If you want the NVIDIA driver to create the device files with
   different attributes, you can specify them with the "NVreg_DeviceFileUID"
   (user), "NVreg_DeviceFileGID" (group) and "NVreg_DeviceFileMode" NVIDIA
   Linux kernel module parameters.

   For example, the NVIDIA driver can be instructed to create device files
   with UID=0 (root), GID=44 (video) and Mode=0660 by passing the following
   module parameters to the NVIDIA Linux kernel module:
   
         NVreg_DeviceFileUID=0 
         NVreg_DeviceFileGID=44 
         NVreg_DeviceFileMode=0660
   
   The "NVreg_ModifyDeviceFiles" NVIDIA kernel module parameter will disable
   dynamic device file management, if set to 0.

--------------------------------------------------

We are doing everything they suggested (aside from the wrong mode setting) and yet it still doesn't work.  I once again fall back to my suggested udev rule.
Comment 24 Doug Goldstein (RETIRED) gentoo-dev 2012-04-05 15:34:26 UTC
(In reply to comment #23)
> On request, I looked up some additional info on this bug. It appears to be a
> long standing issue (I saw bug reports on this against 1.0.xx).
> 
> One of the many advocated fixes was to add a line like this into
> /etc/modprobe.d/nvidia.conf
> 
> options nvidia NVreg_DeviceFileMode=432 NVreg_DeviceFileUID=0
> NVreg_DeviceFileGID=27 NVreg_ModifyDeviceFiles=1
> 
> If that looks familiar, that's because it's already in gentoo, and doesn't
> seem to work.  Not that it matters, but I'd also like to point out that it
> is wrong, using mode 432 means the user can't read and write to the device
> making the change from GID=0 rather pointless.  Mode should be 0660.  Again,
> that's all relative since it doesn't actually honor any of this. Someone
> better than me would have to tell us all why the nvidia module ignores the
> options passed to it.
> 
> Sadly, I'm back at the same point.  This is a permanent bug in nvidia and
> they clearly do not care about fixing it upstream as even the readme says
> that the X driver creates the nodes, not the loading of the module.  You can
> find this in /usr/share/doc/nvidia-drivers-${PV}/README.bz2 :
> 
> 

....snip....

> We are doing everything they suggested (aside from the wrong mode setting)
> and yet it still doesn't work.  I once again fall back to my suggested udev
> rule.

Break out a calculator and take 432 and convert it to octal. You'll notice in their documentation they've only referenced octal. Then do ls -l /dev/nvidia* on your system and you'll see the mode is set correctly. The reason for this is that in the past octal handling has been broken while base 10 handling has always worked.
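
For the record: 0660 octal = 6*64 + 6*8 + 0 = 432 decimal, so the existing modprobe.d line already requests rw-rw----. Easy to check from a shell:

$ printf '%o\n' 432
660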
Comment 25 Rick Farina (Zero_Chaos) gentoo-dev 2012-04-05 16:00:21 UTC
> Break out a calculator and take 432 and convert it to octal. You'll notice
> in their documentation they've only referenced octal. Then do ls -l
> /dev/nvidia* on your system and you'll see the mode is set correctly. The
> reason for this is that in the past octal handling has been broken while
> base 10 handling has always worked.

The docs specifically said:

NVreg_DeviceFileMode=0660

which led me to believe it was in the format I was used to. After a quick test it appears you are correct. <noob>Sorry for the added noise; I've never seen modes set like that before.</noob>
Comment 26 Doug Goldstein (RETIRED) gentoo-dev 2012-04-06 02:02:44 UTC
Give the currently masked 295.33 a whirl and see if that's ok with you.
Comment 27 Rick Farina (Zero_Chaos) gentoo-dev 2012-04-06 03:39:12 UTC
(In reply to comment #26)
> Give the currently masked 295.33 a whirl and see if that's ok with you.

Negative; the udev rule cannot fire the script:

ozzie src # ls /dev/nv*
zsh: no matches found: /dev/nv*
ozzie src # modprobe nvidia
ozzie src # ls /dev/nv*    
zsh: no matches found: /dev/nv*


In my ebuild diff I did this:
+		fowners root:${VIDEOGROUP} /lib/udev/nvidia_control.sh
+		fperms 0750 /lib/udev/nvidia_control.sh

The fowners is likely just preference, since udev runs as root and there's no need to change the group; however, making the script executable is absolutely required.

You can either add an fperms line as I did, or use doexe instead of doins.
Comment 28 Doug Goldstein (RETIRED) gentoo-dev 2012-04-06 03:57:42 UTC
(In reply to comment #27)
> (In reply to comment #26)
> > Give the currently masked 295.33 a whirl and see if that's ok with you.
> 
> Negative, the udev rule cannot fire the script
> 
> ozzie src # ls /dev/nv*
> zsh: no matches found: /dev/nv*
> ozzie src # modprobe nvidia
> ozzie src # ls /dev/nv*    
> zsh: no matches found: /dev/nv*
> 
> 
> In my ebuild diff I did this:
> +		fowners root:${VIDEOGROUP} /lib/udev/nvidia_control.sh
> +		fperms 0750 /lib/udev/nvidia_control.sh
> 
> The fowners is likely completely preference as udev is running as root so no
> need to change the group, however making it executable is completely
> required.
> 
> You can either add an fperms line as I did, or use doexe instead of doins.

Let's keep the snarky comments to a minimum. Give it exec and let me know if it accomplishes what you want.
Comment 29 Rick Farina (Zero_Chaos) gentoo-dev 2012-04-06 04:04:50 UTC
> Let's keep the snarky comments to a minimum. Give it exec and let me know if
> it accomplished what you want.

I re-read what I wrote, and I don't see anything overtly snarky. If I want to get snarky, you won't have to read between the lines; quit being so sensitive.


ozzie nvidia-drivers # chmod 700 /lib/udev/nvidia-udev.sh 
ozzie nvidia-drivers # ls /dev/nv*
zsh: no matches found: /dev/nv*
ozzie nvidia-drivers # modprobe nvidia
ozzie nvidia-drivers # ls /dev/nv*
zsh: no matches found: /dev/nv*

You want snarky, why am I the only one testing this?
Comment 30 Rick Farina (Zero_Chaos) gentoo-dev 2012-04-06 04:26:18 UTC
(In reply to comment #29)
> > Let's keep the snarky comments to a minimum. Give it exec and let me know if
> > it accomplished what you want.
> 
> I re-read what I wrote, and I don't see anything overtly snarky.  If I want
> to get snarky you won't have to read between the lines, quit being so
> sensitive.
> 
> 
> ozzie nvidia-drivers # chmod 700 /lib/udev/nvidia-udev.sh 
> ozzie nvidia-drivers # ls /dev/nv*
> zsh: no matches found: /dev/nv*
> ozzie nvidia-drivers # modprobe nvidia
> ozzie nvidia-drivers # ls /dev/nv*
> zsh: no matches found: /dev/nv*
> 
> You want snarky, why am I the only one testing this?

Additionally, I thought the script was failing, but when I run with "udevadm control --log-priority=debug" I don't even see it trying to run the script. No idea why.

After a quick survey of other scripts in /lib/udev as well as comparing your rule to mine, I have no idea why it's not working.
Comment 31 Doug Goldstein (RETIRED) gentoo-dev 2012-04-06 14:57:58 UTC
(In reply to comment #29)
> > Let's keep the snarky comments to a minimum. Give it exec and let me know if
> > it accomplished what you want.
> 
> I re-read what I wrote, and I don't see anything overtly snarky.  If I want
> to get snarky you won't have to read between the lines, quit being so
> sensitive.
> 
> 
> ozzie nvidia-drivers # chmod 700 /lib/udev/nvidia-udev.sh 
> ozzie nvidia-drivers # ls /dev/nv*
> zsh: no matches found: /dev/nv*
> ozzie nvidia-drivers # modprobe nvidia
> ozzie nvidia-drivers # ls /dev/nv*
> zsh: no matches found: /dev/nv*
> 
> You want snarky, why am I the only one testing this?

meyer ~ # ls -l /dev/nv*
ls: cannot access /dev/nv*: No such file or directory
meyer ~ # modprobe nvidia
meyer ~ # ls -l /dev/nv*
crw-rw---- 1 root video 195,   0 Apr  6 09:56 /dev/nvidia0
crw-rw---- 1 root video 195, 255 Apr  6 09:56 /dev/nvidiactl
meyer ~ # rmmod nvidia
meyer ~ # ls -l /dev/nv*
ls: cannot access /dev/nv*: No such file or directory

works here. What version of udev?
Comment 32 Rick Farina (Zero_Chaos) gentoo-dev 2012-04-06 15:06:41 UTC
> meyer ~ # ls -l /dev/nv*
> ls: cannot access /dev/nv*: No such file or directory
> meyer ~ # modprobe nvidia
> meyer ~ # ls -l /dev/nv*
> crw-rw---- 1 root video 195,   0 Apr  6 09:56 /dev/nvidia0
> crw-rw---- 1 root video 195, 255 Apr  6 09:56 /dev/nvidiactl
> meyer ~ # rmmod nvidia
> meyer ~ # ls -l /dev/nv*
> ls: cannot access /dev/nv*: No such file or directory
> 
> works here. What version of udev?

sys-fs/udev-171-r5, the one marked stable for amd64

I would test it with ~ but that requires a few rather key system changes I'd rather not make.  Perhaps this is a udev difference, or perhaps even a modutils/kmod difference....

I see a few other "ACTION==blah|blah" rules in /lib/udev/rules.d, but I wonder if that isn't supported in the stable version?
Comment 33 Rick Farina (Zero_Chaos) gentoo-dev 2012-04-06 15:52:23 UTC
(In reply to comment #32)
> > meyer ~ # ls -l /dev/nv*
> > ls: cannot access /dev/nv*: No such file or directory
> > meyer ~ # modprobe nvidia
> > meyer ~ # ls -l /dev/nv*
> > crw-rw---- 1 root video 195,   0 Apr  6 09:56 /dev/nvidia0
> > crw-rw---- 1 root video 195, 255 Apr  6 09:56 /dev/nvidiactl
> > meyer ~ # rmmod nvidia
> > meyer ~ # ls -l /dev/nv*
> > ls: cannot access /dev/nv*: No such file or directory
> > 
> > works here. What version of udev?
> 
> sys-fs/udev-171-r5 the one marked stable for amd64
> 
> I would test it with ~ but that requires a few rather key system changes I'd
> rather not make.  Perhaps this is a udev difference, or perhaps even a
> modutils/kmod difference....
> 
> I see a few other "ACTION==blah|blah" rules in /lib/udev/rules.d, but I
> wonder if that isn't supported in the stable version?

ozzie ~ # ls /dev/nv*
zsh: no matches found: /dev/nv*
ozzie ~ # modprobe nvidia
ozzie ~ # ls /dev/nv*
/dev/nvidia0  /dev/nvidiactl
ozzie ~ # modprobe -r nvidia 
ozzie ~ # ls /dev/nv*
zsh: no matches found: /dev/nv*

Confirmed: stable udev doesn't support ACTION=="blah|blah". After I divided this into one rule for add and one rule for remove, it works flawlessly.
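
In other words, something like this works on stable udev (a sketch of the split; the exact RUN target is whatever the ebuild ships):

# one rule per action instead of ACTION=="add|remove"
ACTION=="add", DEVPATH=="/module/nvidia", SUBSYSTEM=="module", RUN+="/lib/udev/nvidia-udev.sh"
ACTION=="remove", DEVPATH=="/module/nvidia", SUBSYSTEM=="module", RUN+="/lib/udev/nvidia-udev.sh"
# the script can branch on the ACTION environment variable that udev sets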

Please make this change so that us poor unsupported users who run part stable and part ~ can have a working system :-)

Thanks!
Comment 34 Rick Farina (Zero_Chaos) gentoo-dev 2012-04-06 17:05:08 UTC
ACTION=="blah|blah" in the udev rule will require sys-fs/udev-182-r3 to be marked stable before this ebuild can be marked stable. With such a simple fix available (simply breaking the rule up onto two lines) I don't see why we should add extra dependencies.

Please consider splitting the rule back out.
Comment 35 Doug Goldstein (RETIRED) gentoo-dev 2012-04-06 18:21:53 UTC
I've tested on udev 171-r5 and udev-182-r2 and it worked fine. Rick recompiled his udev 171-r5 and the problem went away. I've broken out the rules anyway just to be safe.

The changes are now in tree and everything is unmasked.
Comment 36 Sven Eden 2013-12-17 08:50:33 UTC
Just a note:

I just started my desktop with the call to nvidia-smi commented out in the shell script, and the device nodes are created automatically anyway.

sys-fs/udev-208
x11-drivers/nvidia-drivers-331.20
Module loaded via /etc/conf.d/modules
/etc/init.d/nvidia-smi is not started
Comment 37 Doug Goldstein (RETIRED) gentoo-dev 2013-12-17 14:56:49 UTC
(In reply to Sven Eden from comment #36)
> Just a note:
> 
> I Just started my desktop with the call to nvidia-smi commented out in the
> shell script, and the device nodes are created automatically anyway.
> 
> sys-fs/udev-208
> x11-drivers/nvidia-drivers-331.20
> Module loaded via /etc/conf.d/modules
> /etc/init.d/nvidia-smi is not started

Because you started X.
Comment 38 Sven Eden 2013-12-21 11:08:33 UTC
(In reply to Doug Goldstein from comment #37)
> (In reply to Sven Eden from comment #36)
> > Just a note:
> > 
> > I Just started my desktop with the call to nvidia-smi commented out in the
> > shell script, and the device nodes are created automatically anyway.
> > 
> > sys-fs/udev-208
> > x11-drivers/nvidia-drivers-331.20
> > Module loaded via /etc/conf.d/modules
> > /etc/init.d/nvidia-smi is not started
> 
> Because you started X.

I did not check with /etc/init.d/xdm disabled, true. However, the creation of the devices is (as far as I understand it) an option of the module itself. And the module is, as recommended by the Gentoo nvidia guide, autoloaded on boot:

"https://wiki.gentoo.org/wiki/NVidia/nvidia-drivers" :
> To prevent you having to manually load the module on every bootup, you probably want to have this done automatically each time you boot your system, so edit /etc/conf.d/modules and add nvidia to it. 


Another user in the German forums sees long delays when this script is enabled:

[    5.043235] hid-generic 0003:E0FF:0002.0005: input,hiddev0,hidraw4: USB HID v1.00 Keyboard [A..... SPEEDLINK Gaming Mouse] on usb-0000:00:16.0-4/input1
[   24.716746] INFO: rcu_sched self-detected stall on CPU { 3}  (t=2101 jiffies g=18446744073709551372 c=18446744073709551371 q=11029)
[   24.716996] sending NMI to all CPUs:
[   24.717055] NMI backtrace for cpu 3
[   24.717113] CPU: 3 PID: 183 Comm: nvidia-smi Tainted: P        W  O 3.10.21-gentoo #1

I asked him to report back here. The thread is forums.gentoo.org/viewtopic-t-978520.html - but it is in German, of course.

So currently the documentation reads as follows:
> Depending on the target system's configuration, the NVIDIA device files used to be created in one of three different ways:
>
>    at installation time, using mknod
>
>    at module load time, via devfs (Linux device file system)
>
>    at module load time, via hotplug/udev
>
>With current NVIDIA driver releases, device files are created or modified by the X driver when the X server is started.

So of course the devices are created on X startup, but only if it actually has to 'load' the module. Loading the module on boot should create them as well.
Comment 39 Paul 2013-12-21 11:42:27 UTC
In reply to Sven Eden.

On my system nvidia-udev.sh is causing long delays during the start and shutdown process, and error messages like:

[   24.626895] INFO: rcu_sched self-detected stall on CPU { 1}  (t=2101 jiffies g=18446744073709551384 c=18446744073709551383 q=3892)
[   24.627130] sending NMI to all CPUs:
[   24.627189] NMI backtrace for cpu 1
[   24.627240] CPU: 1 PID: 163 Comm: nvidia-smi Tainted: P           O 3.10.21-gentoo #3
[   24.627291] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./870 Extreme3, BIOS P1.60 09/14/2010
[   24.627345] task: ffff88022d4d2700 ti: ffff880228f7c000 task.ti: ffff880228f7c000
[   24.627395] RIP: 0010:[<ffffffff8130693f>]  [<ffffffff8130693f>] __const_udelay+0x19/0x26
[   24.627498] RSP: 0018:ffff880237c43df8  EFLAGS: 00000002
[   24.627546] RAX: 0000000001062560 RBX: 0000000000002710 RCX: 0000000000000007
[   24.627596] RDX: 000000002ce5d7d4 RSI: 0000000000000002 RDI: 0000000000418958
[   24.627645] RBP: ffff880237c43df8 R08: 0000000000000000 R09: 0000000000000000
[   24.627694] R10: ffffffff8151fdf0 R11: ffff880229b52c00 R12: ffff880237c4d530
[   24.627743] R13: ffffffff81688780 R14: ffff880237c4d1a8 R15: ffff880228f7c000
[   24.627794] FS:  00007f1cc7060700(0000) GS:ffff880237c40000(0000) knlGS:0000000000000000
[   24.627845] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   24.627893] CR2: 000000000040f8c0 CR3: 000000022845c000 CR4: 00000000000007e0
[   24.627942] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   24.627991] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[   24.628039] Stack:
[   24.628082]  ffff880237c43e18 ffffffff8101de75 ffff880237c4d6a8 ffffffff81688780
[   24.628296]  ffff880237c43e78 ffffffff81087e37 0000000000000f34 0000000000000001
[   24.628510]  ffff880237c43e68 ffff880228f7c000 ffff88022d4d2700 ffff88022d4d2700
[   24.628724] Call Trace:
[   24.628777]  <IRQ>

[   24.628919]  [<ffffffff8101de75>] arch_trigger_all_cpu_backtrace+0x68/0x73
[   24.628981]  [<ffffffff81087e37>] rcu_check_callbacks+0x1a5/0x4c5
[   24.629042]  [<ffffffff81038242>] update_process_times+0x3a/0x69
[   24.629103]  [<ffffffff810637ea>] tick_sched_handle+0x32/0x34
[   24.629163]  [<ffffffff81063acc>] tick_sched_timer+0x36/0x56
[   24.629223]  [<ffffffff81048fde>] __run_hrtimer.isra.25+0x4e/0xa7
[   24.629283]  [<ffffffff810495c6>] hrtimer_interrupt+0xde/0x1cd
[   24.629343]  [<ffffffff8101d37e>] smp_apic_timer_interrupt+0x81/0x94
[   24.629403]  [<ffffffff814d840a>] apic_timer_interrupt+0x6a/0x70
[   24.629458]  <EOI>

[   24.629829]  [<ffffffffa0749fda>] ? rm_shutdown_gvi_device+0x182/0x290 [nvidia]
[   24.630089]  [<ffffffffa0749fcd>] ? rm_shutdown_gvi_device+0x175/0x290 [nvidia]
[   24.630342]  [<ffffffffa074cba9>] ? _nv000928rm+0x76/0xa4 [nvidia]
[   24.630593]  [<ffffffffa074a4f2>] ? _nv012082rm+0x1a9/0xa00 [nvidia]
[   24.630847]  [<ffffffffa073b510>] ? _nv012360rm+0x92/0x168 [nvidia]
[   24.631099]  [<ffffffffa073f633>] ? _nv000813rm+0x393/0x435 [nvidia]
[   24.631350]  [<ffffffffa073f5e4>] ? _nv000813rm+0x344/0x435 [nvidia]
[   24.631602]  [<ffffffffa073fc1c>] ? _nv000736rm+0x547/0x5e6 [nvidia]
[   24.631854]  [<ffffffffa0747a85>] ? _nv000748rm+0x1d9/0x2f2 [nvidia]
[   24.632105]  [<ffffffffa0741452>] ? rm_disable_adapter+0x74/0x107 [nvidia]
[   24.632358]  [<ffffffffa07607fc>] ? nv_kern_close+0x1fc/0x3bb [nvidia]
[   24.632607]  [<ffffffffa075f552>] ? nv_kern_ioctl+0x38c/0x39e [nvidia]
[   24.632669]  [<ffffffff810c258c>] ? __fput+0xf0/0x1d9
[   24.632727]  [<ffffffff810c267e>] ? ____fput+0x9/0xb
[   24.632786]  [<ffffffff81044503>] ? task_work_run+0x79/0x92
[   24.632845]  [<ffffffff8100250f>] ? do_notify_resume+0x55/0x66
[   24.632905]  [<ffffffff814d7ada>] ? int_signal+0x12/0x17
[   24.632960] Code: fb 48 ff c8 5d c3 55 48 89 e5 ff 15 f4 d8 38 00 5d c3 55 48 8d 04 bd 00 00 00 00 65 48 8b 14 25 60 10 01 00 48 6b d2 19 48 89 e5 <f7> e2 48 8d 7a 01 e8 d0 ff ff ff 5d c3 55 48 69 c7 1c 43 00 00

It only works fine if nvidia-udev.sh is disabled.

VGA compatible controller: NVIDIA Corporation G92 [GeForce 9800 GT] (rev a2)

emerge --info
Portage 2.2.7 (default/linux/amd64/13.0/desktop, gcc-4.7.3, glibc-2.16.0, 3.10.21-gentoo x86_64)
=================================================================
System uname: Linux-3.10.21-gentoo-x86_64-AMD_Phenom-tm-_II_X4_B40_Processor-with-gentoo-2.2
KiB Mem:     8184588 total,   2112872 free
KiB Swap:    9214972 total,   9214972 free
Timestamp of tree: Sat, 21 Dec 2013 09:30:01 +0000
ld GNU ld (GNU Binutils) 2.23.1
app-shells/bash:          4.2_p45
dev-java/java-config:     2.1.12-r1
dev-lang/python:          2.7.5-r3, 3.2.5-r3, 3.3.2-r2
dev-util/cmake:           2.8.11.2
dev-util/pkgconfig:       0.28
sys-apps/baselayout:      2.2
sys-apps/openrc:          0.12.4
sys-apps/sandbox:         2.6-r1
sys-devel/autoconf:       2.13, 2.69
sys-devel/automake:       1.11.6, 1.12.6, 1.13.4
sys-devel/binutils:       2.23.1
sys-devel/gcc:            4.7.3-r1
sys-devel/gcc-config:     1.7.3
sys-devel/libtool:        2.4.2
sys-devel/make:           3.82-r4
sys-kernel/linux-headers: 3.10 (virtual/os-headers)
sys-libs/glibc:           2.16.0
Repositories: gentoo steam-overlay hasufell lokal
ACCEPT_KEYWORDS="amd64"
ACCEPT_LICENSE="* -@EULA"
CBUILD="x86_64-pc-linux-gnu"
CFLAGS="-O2 -pipe -march=native"
CHOST="x86_64-pc-linux-gnu"
CONFIG_PROTECT="/etc /usr/share/config /usr/share/gnupg/qualified.txt /usr/share/themes/oxygen-gtk/gtk-2.0 /usr/share/themes/oxygen-gtk/gtk-3.0"
CONFIG_PROTECT_MASK="/etc/ca-certificates.conf /etc/env.d /etc/fonts/fonts.conf /etc/gconf /etc/gentoo-release /etc/revdep-rebuild /etc/sandbox.d /etc/terminfo /etc/texmf/language.dat.d /etc/texmf/language.def.d /etc/texmf/updmap.d /etc/texmf/web2c"
CXXFLAGS="-O2 -pipe -march=native"
DISTDIR="/usr/portage/distfiles"
FCFLAGS="-O2 -pipe"
FEATURES="assume-digests binpkg-logs buildpkg config-protect-if-modified distlocks ebuild-locks fixlafiles merge-sync news parallel-fetch preserve-libs protect-owned sandbox sfperms strict unknown-features-warn unmerge-logs unmerge-orphans userfetch userpriv usersandbox usersync"
FFLAGS="-O2 -pipe"
GENTOO_MIRRORS="http://distfiles.gentoo.org"
LANG="de_DE.UTF-8"
LDFLAGS="-Wl,-O1 -Wl,--as-needed"
MAKEOPTS="-j4"
PKGDIR="/usr/portage/packages"
PORTAGE_CONFIGROOT="/"
PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times --omit-dir-times --compress --force --whole-file --delete --stats --human-readable --timeout=180 --exclude=/distfiles --exclude=/local --exclude=/packages"
PORTAGE_TMPDIR="/var/tmp"
PORTDIR="/usr/portage"
PORTDIR_OVERLAY="/var/lib/layman/steam /var/lib/layman/hasufell /usr/local/portage/overlay"
USE="X a52 aac acl acpi alsa amd64 berkdb branding bzip2 cairo cdda cdr cli cracklib crypt cryptsetup cups cxx dbus dri dts dvb dvd dvdr emboss encode exif fam firefox flac fortran gdbm gif gpm gtk iconv icu infinality ipv6 jpeg kde kipi lcdfilter lcms ldap libnotify mad mmx mng modules mp3 mp4 mpeg mudflap multilib ncurses networkmanager nls nptl ogg opengl openmp pam pango pcre pdf plasma png policykit ppds qalculate qt3support qt4 readline scanner sdl semantic-desktop session spell sse sse2 sse3 ssl startup-notification svg systemd tcpd tiff truetype udev udisks unicode upower usb vaapi vdpau vorbis wxwidgets x264 xcb xinerama xml xv xvid zeroconf zlib" ABI_X86="64" ALSA_CARDS="ali5451 als4000 atiixp atiixp-modem bt87x ca0106 cmipci emu10k1x ens1370 ens1371 es1938 es1968 fm801 hda-intel intel8x0 intel8x0m maestro3 trident usb-audio via82xx via82xx-modem ymfpci" APACHE2_MODULES="authn_core authz_core socache_shmcb unixd actions alias auth_basic authn_alias authn_anon authn_dbm authn_default authn_file authz_dbm authz_default authz_groupfile authz_host authz_owner authz_user autoindex cache cgi cgid dav dav_fs dav_lock deflate dir disk_cache env expires ext_filter file_cache filter headers include info log_config logio mem_cache mime mime_magic negotiation rewrite setenvif speling status unique_id userdir usertrack vhost_alias" CALLIGRA_FEATURES="kexi words flow plan sheets stage tables krita karbon braindump author" CAMERAS="ptp2" COLLECTD_PLUGINS="df interface irq load memory rrdtool swap syslog" DRACUT_MODULES="device-mapper systemd" ELIBC="glibc" GPSD_PROTOCOLS="ashtech aivdm earthmate evermore fv18 garmin garmintxt gpsclock itrax mtk3301 nmea ntrip navcom oceanserver oldstyle oncore rtcm104v2 rtcm104v3 sirf superstar2 timing tsip tripmate tnt ublox ubx" INPUT_DEVICES="evdev" KERNEL="linux" LCD_DEVICES="bayrad cfontz cfontz633 glk hd44780 lb216 lcdm001 mtxorb ncurses text" LIBREOFFICE_EXTENSIONS="presenter-console presenter-minimizer" LINGUAS="de" OFFICE_IMPLEMENTATION="libreoffice" PHP_TARGETS="php5-5" PYTHON_SINGLE_TARGET="python2_7" PYTHON_TARGETS="python2_7 python3_3" RUBY_TARGETS="ruby19 ruby18" USERLAND="GNU" VIDEO_CARDS="nvidia" XTABLES_ADDONS="quota2 psd pknock lscan length2 ipv4options ipset ipp2p iface geoip fuzzy condition tee tarpit sysrq steal rawnat logmark ipmark dhcpmac delude chaos account"
Unset:  CPPFLAGS, CTARGET, EMERGE_DEFAULT_OPTS, INSTALL_MASK, LC_ALL, PORTAGE_BUNZIP2_COMMAND, PORTAGE_COMPRESS, PORTAGE_COMPRESS_FLAGS, PORTAGE_RSYNC_EXTRA_OPTS, SYNC, USE_PYTHON
Comment 40 Rick Farina (Zero_Chaos) gentoo-dev 2013-12-22 17:19:36 UTC
(In reply to Sven Eden from comment #38)
> So of course the devices are created on X startup, but only if it actually
> has to 'load' the module. Loading the module on boot should create them as
> well.

Yet, without this udev script, it doesn't. You offer the following solutions:

>    at installation time, using mknod

/dev is volatile, so this doesn't work.

>    at module load time, via devfs (Linux device file system)

No idea what this means or how to do it; feel free to report back.

>    at module load time, via hotplug/udev

This is what we do.
Comment 41 Rick Farina (Zero_Chaos) gentoo-dev 2013-12-22 17:24:54 UTC
(In reply to Paul Trunk from comment #39)
> On my system nvidia-udev.sh is causing for long delays during start and
> shutdown process and error messages like
...
> [   24.627291] Hardware name: To Be Filled By O.E.M. To Be Filled By
> O.E.M./870 Extreme3, BIOS P1.60 09/14/2010
No name? Is this even a legit card? Or a Chinese clone?

> [   24.629829]  [<ffffffffa0749fda>] ? rm_shutdown_gvi_device+0x182/0x290
> [nvidia]
> [   24.630089]  [<ffffffffa0749fcd>] ? rm_shutdown_gvi_device+0x175/0x290
> [nvidia]
> [   24.630342]  [<ffffffffa074cba9>] ? _nv000928rm+0x76/0xa4 [nvidia]
> [   24.630593]  [<ffffffffa074a4f2>] ? _nv012082rm+0x1a9/0xa00 [nvidia]
> [   24.630847]  [<ffffffffa073b510>] ? _nv012360rm+0x92/0x168 [nvidia]
> [   24.631099]  [<ffffffffa073f633>] ? _nv000813rm+0x393/0x435 [nvidia]
> [   24.631350]  [<ffffffffa073f5e4>] ? _nv000813rm+0x344/0x435 [nvidia]
> [   24.631602]  [<ffffffffa073fc1c>] ? _nv000736rm+0x547/0x5e6 [nvidia]
> [   24.631854]  [<ffffffffa0747a85>] ? _nv000748rm+0x1d9/0x2f2 [nvidia]
> [   24.632105]  [<ffffffffa0741452>] ? rm_disable_adapter+0x74/0x107 [nvidia]
> [   24.632358]  [<ffffffffa07607fc>] ? nv_kern_close+0x1fc/0x3bb [nvidia]
> [   24.632607]  [<ffffffffa075f552>] ? nv_kern_ioctl+0x38c/0x39e [nvidia]
> [   24.632669]  [<ffffffff810c258c>] ? __fput+0xf0/0x1d9
> [   24.632727]  [<ffffffff810c267e>] ? ____fput+0x9/0xb
> [   24.632786]  [<ffffffff81044503>] ? task_work_run+0x79/0x92
> [   24.632845]  [<ffffffff8100250f>] ? do_notify_resume+0x55/0x66
> [   24.632905]  [<ffffffff814d7ada>] ? int_signal+0x12/0x17

It looks to me like this is happening at shutdown? The udev script rm's the dev nodes when the module is removed. If the driver is trying to access the dev nodes AFTER THE DRIVER IS REMOVED, then I'd have to say some kind of odd black magic is happening.

> It only works fine if nvidia-udev.sh is disabled.

Honestly, if I had to guess, this is either a suspend/resume bug in nvidia, or you have some odd script running at shutdown that tries to touch the nvidia device after it's been unloaded.

Please feel free to open a new bug about your problem, the udev rule is staying.
Comment 42 Paul 2014-01-08 14:27:41 UTC
(In reply to Rick Farina (Zero_Chaos) from comment #41)

Thanks for reply.

> No name? is this even a legit card? Or a chinese clone?
The mainboard is definitely legit. Not a Chinese clone. I have no idea why there is no name :-).

> It looks to me that this is running happening at shutdown?  The udev script
> rm's the dev nodes when the module is removed.  If the driver is trying to
> access the dev nodes AFTER THE DRIVER IS REMOVED then I'd have to say some
> kind of odd black magic is happening.
No, it is happening at start. There is a delay of ~19 sec and then the messages from above appear. At shutdown there is a delay too, but without any messages.

> > It only works fine if nvidia-udev.sh is disabled.
> 
> Honestly if I had to guess this is either a suspend/resume bug in nvidia, or
> you have some odd script running at shutdown that tries to touch the nvidia
> after it's been unloaded.
> 
> Please feel free to open a new bug about your problem, the udev rule is
> staying.

As long as there are no other users with this problem I do not want to open a bug report. The workaround works fine for me. For a long time the script didn't cause any problems; the delays have been there since November. It is OK for me because I can simply turn it off. I just wanted to report that, for me, the script is causing unforeseeable problems.