Launching a container with a forwarded port creates a new iptables entry (yay!), but when the container exits the port-forward rule is still present. This did not happen with 19.03.14, and seems new with 20.10.2.

Reproducible: Always

Steps to Reproduce:

1. Start with a 'clean' iptables:

ScottE-LT-NVIDIA ~ # iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy DROP)
target     prot opt source               destination
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (1 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

2. Launch a container with a port forward, and let it exit:

scotte@ScottE-LT-NVIDIA ~ $ docker run --rm -ti -p 9999:9999 --runtime=nvidia nvcr.io/nvidia/cuda:latest nvidia-smi
Fri Jan 15 08:13:45 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.27.04    Driver Version: 460.27.04    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Quadro M1000M       On   | 00000000:01:00.0 Off |                  N/A |
| N/A   30C    P8    N/A /  N/A |      0MiB /  2004MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
scotte@ScottE-LT-NVIDIA ~ $

3. Note that the port-forward entry is still present:

ScottE-LT-NVIDIA ~ # iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy DROP)
target     prot opt source               destination
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (1 references)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:9999

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere
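Until the daemon cleans up after itself again, the leftover rule can be deleted by hand. This is only a rough sketch: the -D spec has to match the rule exactly, so take it from iptables -S, and the address/port below are just the ones from this example.

# Show the DOCKER chain in rule-spec form to get the exact arguments:
iptables -S DOCKER
# Delete the stale rule by repeating its spec with -D (adjust to what -S printed):
iptables -D DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 9999 -j ACCEPT
# The nat table usually carries a matching DNAT rule as well; check it with:
iptables -t nat -S DOCKER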
I have just added 20.10.3 to the tree. Can you please let me know if this is still an issue? Thanks much.

William
No change in behavior with app-emulation/docker-20.10.3 and net-firewall/iptables-1.8.7. Every run still accumulates portmaps.

ScottE-LT-NVIDIA ~ # iptables -L
[snip]
Chain DOCKER (1 references)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:8888
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:8888
[snip]

scotte@ScottE-LT-NVIDIA ~ $ docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
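To quantify how fast the duplicates pile up, something like this should print each rule spec with its repeat count (a sketch; DOCKER is the chain name from the listing above):

# Count identical rule specs in the DOCKER chain:
iptables -S DOCKER | sort | uniq -c | sort -rn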
Hi, do you mind filing an issue upstream and linking it here? The upstream issue tracker is linked below. If you are comfortable doing that, it will help a lot. Thanks much.

William

https://github.com/moby/moby/issues
Thanks for making that upstream bug, William! Sorry for flaking. I added details on the upstream bug showing the issue without the nvidia runtime (since that doesn't look to be involved).
I checked here based on your instructions in the upstream bug and was unable to reproduce the issue. Also, my docker version output seems to be different from yours.

$ docker version
Client:
 Version:           20.10.0-dev
 API version:       1.41
 Go version:        go1.16.2
 Git commit:        55c4c88966
 Built:             Fri Apr  2 02:18:19 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          20.10.5
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.2
  Git commit:       363e9a88a1
  Built:            Fri Apr  2 02:17:34 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.4
  GitCommit:        05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        fec3683b971d9c3ef73f284f176672c44b448662

$ sudo iptables --version
iptables v1.8.5 (legacy)

$ sudo iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy DROP)
target     prot opt source               destination
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (1 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

$ docker run --rm -ti -p 9999:8888 hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

$ sudo iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy DROP)
target     prot opt source               destination
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (1 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

Can you please verify that you are running the docker from the gentoo tree and try again?

Thanks,

William
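P.S. A quick sketch for checking both things, if it helps (equery assumes app-portage/gentoolkit is installed):

# Print the backend (legacy vs nf_tables) for both address families:
for t in iptables ip6tables; do
    command -v "$t" && "$t" --version
done
# Confirm which package owns the docker binary on Gentoo:
equery belongs "$(command -v docker)"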
Trying to find cycles to investigate this, but I am starting to suspect it's something related to cgroups. For those where it works, are you using OpenRC (like I am) or systemd? I see errors from docker in finding/removing the CONNTRACK entry associated with the container, but can't get the same error to appear in a VM with systemd. One of the big Docker 19.x->20.x changes was related to how it handles cgroups, and I'm feeling like there's some OpenRC cgroups "thing" that's different, and causing the iptables entry to not be there when Docker expects it. (Yes, I know...I need to get some more data, not just half-ass experiment...)
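In case anyone wants to compare notes, the errors I mention show up in the daemon log. A sketch of where to grep (the OpenRC path below is the init script's usual default and may differ on your box):

# OpenRC (Gentoo's default DOCKER_LOGFILE):
grep -iE 'conntrack|iptables' /var/log/docker.log
# systemd:
journalctl -u docker.service | grep -iE 'conntrack|iptables'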
app-containers/docker-20.10.9 doesn't remove iptables entries either.
(In reply to pod from comment #7)
> app-containers/docker-20.10.9 doesn't remove iptables entries either.

For me, 20.xx works fine after making the ip6tables symlink, as described in https://github.com/moby/moby/issues/42127 . It's still failing for you?
(In reply to Scott Ellis from comment #8)
> (In reply to pod from comment #7)
> > app-containers/docker-20.10.9 doesn't remove iptables entries either.
>
> For me, 20.xx works fine after making the ip6tables symlink, as described in
> https://github.com/moby/moby/issues/42127 . It's still failing for you?

Creating the symlinks works for me as well. Thanks!

lrwxrwxrwx 1 root root   20 Jan 11 17:02 ip6tables -> xtables-legacy-multi
lrwxrwxrwx 1 root root   20 Jan 11 17:06 ip6tables-apply -> xtables-legacy-multi
lrwxrwxrwx 1 root root   20 Jan 11 17:06 ip6tables-legacy -> xtables-legacy-multi
lrwxrwxrwx 1 root root   20 Jan 11 17:07 ip6tables-legacy-restore -> xtables-legacy-multi
lrwxrwxrwx 1 root root   20 Jan 11 17:07 ip6tables-legacy-save -> xtables-legacy-multi
lrwxrwxrwx 1 root root   20 Jan 11 17:06 ip6tables-restore -> xtables-legacy-multi
lrwxrwxrwx 1 root root   20 Jan 11 17:06 ip6tables-save -> xtables-legacy-multi
lrwxrwxrwx 1 root root   20 Jan 11 17:07 ip6tables-xml -> xtables-legacy-multi
lrwxrwxrwx 1 root root   20 Sep 10 22:05 iptables -> xtables-legacy-multi
-rwxr-xr-x 1 root root 7057 Sep 10 22:04 iptables-apply
lrwxrwxrwx 1 root root   20 Sep 10 22:04 iptables-legacy -> xtables-legacy-multi
lrwxrwxrwx 1 root root   20 Sep 10 22:04 iptables-legacy-restore -> xtables-legacy-multi
lrwxrwxrwx 1 root root   20 Sep 10 22:04 iptables-legacy-save -> xtables-legacy-multi
lrwxrwxrwx 1 root root   20 Sep 10 22:05 iptables-restore -> xtables-legacy-multi
lrwxrwxrwx 1 root root   20 Sep 10 22:05 iptables-save -> xtables-legacy-multi
lrwxrwxrwx 1 root root   20 Sep 10 22:05 iptables-xml -> xtables-legacy-multi
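For anyone landing here later, this is roughly how the missing links can be created by hand. A sketch only: it assumes xtables-legacy-multi lives in /sbin as in the listing above, it only makes the three links the daemon is most likely to call (extend the list to match the listing if needed), and re-emerging net-firewall/iptables may be the cleaner route.

cd /sbin
for n in ip6tables ip6tables-restore ip6tables-save; do
    [ -e "$n" ] || ln -s xtables-legacy-multi "$n"
done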