Docker exposing ports to the public - how I fixed it

In a recent installation of Alma Linux 9.3, I was configuring the server and checking which ports were open to the external public when I noticed port 8080 was open. I had never allowed that port. When I used firewall-cmd to list ports and services, port 8080 was not there 😕.

My understanding was that firewall-cmd would give me a reliable view of what's open and what's not, in terms of firewall rules. But it doesn't: you can have rules configured in iptables that won't show up in firewall-cmd. When I checked iptables -L, I could see some dynamic firewall rules there, created by docker.

I thought, why is docker messing with my firewall rules? I learned that this is the expected behavior from docker. It's documented on their site:

Let's say I start a container that publishes a port to the host. In that scenario, docker will dynamically insert a rule into the DOCKER iptables chain, allowing traffic from outside to port 8080.
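For example, publishing a port the usual way (the nginx image here is just an illustration; any image behaves the same):

```shell
# Publish container port 80 on host port 8080.
# Without an explicit address, docker binds the port to 0.0.0.0 (all interfaces).
docker run -d -p 8080:80 nginx

# The ACCEPT rule docker inserted can be seen in its own chain:
iptables -L DOCKER -n
```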

But when you start a container that exposes a port to the host, that doesn't mean you necessarily want to expose that port to the external public, right?

You can prevent this docker behavior if you bind the container port to localhost only. Unfortunately, I couldn't do that in my scenario, because that doesn't play well with Nginx Proxy Manager.
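For reference, the localhost-only binding looks like this (image name is again illustrative):

```shell
# Prefixing the host port with 127.0.0.1 binds it to the loopback interface only,
# so the port is reachable from the host itself but not from outside.
docker run -d -p 127.0.0.1:8080:80 nginx
```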

So, to keep the port published on all interfaces and at the same time block access from outside, I inserted new iptables rules. Docker provides a DOCKER-USER chain for exactly that purpose.

The command that worked for me was:

iptables -I DOCKER-USER -i eth0 -p tcp -m conntrack --ctorigdstport 8080 --ctdir ORIGINAL -j DROP
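To unpack what that command does (assuming eth0 is your external interface, as it was in my case):

```shell
# -I DOCKER-USER       insert at the top of the chain docker reserves for user rules
# -i eth0              match only packets arriving on the external interface
# -m conntrack --ctorigdstport 8080
#                      match on the ORIGINAL destination port (8080), i.e. the port
#                      before docker's DNAT rewrote it to the container port
# --ctdir ORIGINAL     match only the original direction of the connection
# -j DROP              silently drop the packet
iptables -I DOCKER-USER -i eth0 -p tcp \
  -m conntrack --ctorigdstport 8080 --ctdir ORIGINAL -j DROP
```

Matching on the original (pre-NAT) destination port matters here, because by the time a packet traverses DOCKER-USER, docker has already rewritten its destination to the container's port.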

You can view the rules with:

iptables -L --line-numbers

And if you want to remove a rule that was added by mistake, you can use:

iptables -D DOCKER-USER 1

(That 1 is the line number of the rule.)

To check if the port is open, from outside of the server, you can telnet:

telnet <server-ip> 8080

It will say "Connected" if the port is open, or "No route to host" otherwise.
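If telnet isn't available, nc works too (replace <server-ip> with the server's public address):

```shell
# -v: verbose output, -z: just probe the port without sending data.
# Exit code 0 means the port is open.
nc -vz <server-ip> 8080
```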