Docker Networking Explained: Bridge, Host, Overlay, and None
Understand Docker's four network drivers -- bridge, host, overlay, and none. Learn how container DNS resolution works, when to use each driver, and how port mapping actually functions.

Four Network Drivers, One Container Runtime
Docker networking is the layer that connects containers to each other and to the outside world. Every container you start gets attached to a network, whether you think about it or not. Docker ships with four built-in network drivers -- bridge, host, overlay, and none -- and picking the right one is the difference between a working architecture and hours of debugging packet drops.
Most developers never move past the default bridge network. That's a mistake. Understanding how Docker assigns IPs, resolves DNS, and maps ports gives you predictable, debuggable infrastructure instead of "it works on my machine."
What Is Docker Networking?
Definition: Docker networking is the subsystem that provides isolated or shared network namespaces for containers, enabling communication between containers, between containers and the host, and between containers across multiple hosts using pluggable drivers.
When Docker starts a container, it creates a virtual Ethernet (veth) pair -- one end inside the container's network namespace and the other attached to a bridge on the host. The driver you choose determines how that plumbing is configured.
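You can see this plumbing on a Linux host with a running Docker daemon (the veth interface names vary per container):

```shell
# docker0 is the default bridge; each running container adds
# one vethXXXX peer interface attached to it
ip link show type bridge
ip link show type veth

# Show the default bridge's subnet configuration
docker network inspect bridge --format '{{json .IPAM.Config}}'
```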
The Four Network Drivers Compared
| Driver | Isolation | Cross-Host | DNS | Use Case |
|---|---|---|---|---|
| bridge | Container-level | No | User-defined only | Single-host dev/prod |
| host | None (shares host) | No | Host resolver | Maximum throughput |
| overlay | Container-level | Yes (Swarm/manual) | Yes | Multi-host clusters |
| none | Full (no networking) | No | No | Batch jobs, security |
Bridge Network: The Default and Its Gotchas
Every Docker installation creates a docker0 bridge interface. Containers attached to it get an IP from the 172.17.0.0/16 subnet by default. They can reach each other by IP, and they reach the internet through NAT on the host.
Here's the catch: the default bridge does not provide automatic DNS resolution. If you have two containers on the default bridge, they can ping each other by IP, but ping my-api will fail. This is the single biggest source of confusion for newcomers.
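You can reproduce the gotcha yourself, assuming a running Docker daemon (container names here are illustrative):

```shell
# Two containers on the default bridge (no --network flag)
docker run -d --name db-default alpine sleep 600
docker run -d --name app-default alpine sleep 600

# Reaching the other container by IP works...
DB_IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' db-default)
docker exec app-default ping -c 1 "$DB_IP"

# ...but resolving by name fails with "bad address 'db-default'"
docker exec app-default ping -c 1 db-default
```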
User-Defined Bridge Networks
A user-defined bridge fixes this. Containers on a user-defined bridge get automatic DNS resolution by container name. They're also isolated from containers on other networks.
# Create a user-defined bridge
docker network create my-app-net
# Run two containers on it
docker run -d --name api --network my-app-net node:20-alpine
docker run -d --name web --network my-app-net nginx:alpine
# From the web container, "api" resolves to the container's IP
docker exec web ping api
Pro tip: Always create a user-defined bridge for your application stack. The default bridge is legacy behavior. User-defined bridges give you DNS, better isolation, and the ability to connect/disconnect containers at runtime without restarting them.
How Bridge DNS Resolution Works
Docker runs an embedded DNS server at 127.0.0.11 inside every container on a user-defined network. When a container looks up another container's name, this embedded resolver responds with the target container's IP on that network. If the name doesn't match any container, the query falls through to the host's DNS.
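You can confirm this from inside a container on a user-defined network -- using the web container from the earlier example, its resolver config typically points at the embedded server:

```shell
# /etc/resolv.conf inside the container names Docker's embedded resolver
docker exec web cat /etc/resolv.conf
# expect a line like: nameserver 127.0.0.11
```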
Host Network: Bypassing the Network Stack
The host driver removes network isolation entirely. The container shares the host's network namespace -- same IP, same interfaces, same port space. There's no NAT, no port mapping, no bridge.
# Container binds directly to host port 80
docker run -d --network host nginx:alpine
When to Use Host Networking
- Performance-critical workloads -- eliminating the veth pair and NAT saves measurable latency. Benchmarks typically show a 2-5% throughput improvement for high-packet-rate services.
- Containers that need to see all host traffic -- monitoring agents, network sniffers, service discovery daemons.
- Port-heavy services -- if your app listens on dozens of ports (like an FTP server), host mode avoids mapping each one.
Watch out: Host networking only works on Linux. On Docker Desktop for Mac and Windows, --network host doesn't behave the same way because Docker runs inside a Linux VM. Your container still won't bind to your Mac's interfaces directly.
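A quick way to verify the shared namespace on a Linux host, assuming a running Docker daemon:

```shell
# With host networking, the container sees the host's real interfaces;
# the output should match `ip addr` run directly on the host (Linux only)
docker run --rm --network host alpine ip addr
```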
Overlay Network: Multi-Host Communication
The overlay driver creates a distributed network that spans multiple Docker hosts. It uses VXLAN encapsulation to tunnel container traffic across the underlying host network. This is what Docker Swarm uses internally, and you can also create overlay networks manually.
# Initialize Swarm (required for overlay)
docker swarm init
# Create an overlay network
docker network create --driver overlay --attachable my-overlay
# Services on different hosts can now communicate by name
docker service create --name api --network my-overlay my-api:latest
docker service create --name worker --network my-overlay my-worker:latest
Overlay vs Kubernetes Networking
If you're running Kubernetes, you won't use Docker overlay networks. Kubernetes has its own networking model where every Pod gets a routable IP, and CNI plugins (Calico, Cilium, Flannel) handle cross-node communication. Docker overlay is relevant for Swarm deployments or standalone multi-host Docker setups.
None Network: Complete Isolation
The none driver gives the container a loopback interface and nothing else. No external connectivity, no DNS, no bridge. The container is a network island.
docker run -d --network none alpine sleep 3600
# This container cannot reach anything
docker exec <container-id> ping 8.8.8.8 # fails
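You can confirm the isolation directly -- with the none driver, the interface list contains nothing but loopback:

```shell
# `ip addr` inside a none-network container lists only `lo`
docker run --rm --network none alpine ip addr
```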
When None Makes Sense
- Batch processing -- jobs that read from a mounted volume, process data, and write results back. No network needed, no attack surface.
- Security-sensitive computation -- cryptographic operations, secret generation, anything where you want zero chance of network exfiltration.
- Testing -- verifying that your app handles network unavailability gracefully.
Port Mapping: EXPOSE vs -p
This is another area where confusion runs rampant. The EXPOSE instruction in a Dockerfile and the -p flag in docker run are completely different things.
| Mechanism | What It Does | Opens a Port? |
|---|---|---|
| EXPOSE 3000 | Documents that the container listens on 3000 | No |
| -p 8080:3000 | Maps host port 8080 to container port 3000 | Yes |
| -P | Maps all EXPOSE'd ports to random host ports | Yes |
Pro tip: Always include EXPOSE in your Dockerfile even though it doesn't publish ports. It serves as documentation for anyone reading the image, and tools like docker-compose and reverse proxies use it for service discovery.
How to Map Ports Correctly
- Specify the bind address when you don't want to expose to all interfaces: -p 127.0.0.1:8080:3000
- Use the same port for simplicity in development: -p 3000:3000
- Avoid conflicts by letting Docker pick the host port: -p 3000 (Docker assigns a random high port)
- Map UDP explicitly if needed: -p 53:53/udp
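Put together, the variants above look like this on the command line (image names and ports are illustrative):

```shell
# Bind only to loopback: reachable from the host, not the LAN
docker run -d -p 127.0.0.1:8080:3000 my-api

# Same port on both sides for development
docker run -d -p 3000:3000 my-api

# Let Docker choose a free high host port, then look it up
docker run -d --name api -p 3000 my-api
docker port api 3000

# UDP mapping must be explicit
docker run -d -p 53:53/udp my-dns
```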
Container DNS Resolution: How Containers Find Each Other
Step-by-Step DNS Lookup in Docker
1. Container A calls getaddrinfo("api")
2. The request goes to the embedded DNS resolver at 127.0.0.11
3. Docker checks if "api" matches any container name or network alias on the same user-defined network
4. If found, Docker returns the container's IP address on that network
5. If not found, Docker forwards the query to the host's configured DNS servers
6. Container A connects to the resolved IP
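You can observe both branches of this lookup from the web container in the earlier bridge example:

```shell
# A container name on the same network: answered by the embedded resolver
docker exec web nslookup api

# An external name: falls through to the host's configured DNS servers
docker exec web nslookup example.com
```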
Network Aliases
You can give a container multiple DNS names using --network-alias:
docker run -d --name api-v2 --network my-app-net --network-alias api my-api:v2
Now both api-v2 and api resolve to this container. This is useful for blue-green deployments -- point the alias at whichever version is live.
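One way to flip the alias during a cutover, sketched with hypothetical api-v1 and api-v2 containers on the my-app-net network:

```shell
# v1 currently owns the "api" alias; v2 is running but not aliased
docker run -d --name api-v1 --network my-app-net --network-alias api my-api:v1
docker run -d --name api-v2 --network my-app-net my-api:v2

# Cut over: reconnect v2 with the alias, then drop the alias from v1
docker network disconnect my-app-net api-v2
docker network connect --alias api my-app-net api-v2
docker network disconnect my-app-net api-v1
docker network connect my-app-net api-v1   # v1 stays reachable by name, minus the alias
```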
Pricing and Tool Recommendations
Docker networking itself is free, but your infrastructure choices affect cost:
| Tool / Service | Cost | Best For |
|---|---|---|
| Docker Engine (CE) | Free | Single-host bridge/host networking |
| Docker Desktop (Business) | $24/user/month | Teams needing managed desktop experience |
| AWS ECS with awsvpc | Per-task ENI (no extra charge) | Native VPC networking per container |
| Cilium (CNI) | Free (OSS) / Enterprise pricing | Advanced eBPF-based networking in K8s |
| Calico | Free (OSS) / Tigera pricing | Network policy enforcement at scale |
Frequently Asked Questions
What is the difference between bridge and host network in Docker?
Bridge creates an isolated network namespace with its own IP and uses NAT for external access. Host removes isolation entirely -- the container shares the host's IP and port space. Bridge is safer and more flexible; host gives better raw performance but means port conflicts are possible between containers.
Can containers on different bridge networks talk to each other?
Not by default. Containers on separate bridge networks are isolated. You can connect a container to multiple networks using docker network connect, which gives it an interface on each network. This is the correct way to let a container act as a bridge between two isolated networks.
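A sketch of the multi-network pattern, with hypothetical network and container names:

```shell
docker network create frontend-net
docker network create backend-net
docker run -d --name proxy --network frontend-net nginx:alpine

# Attach the same container to a second network at runtime;
# it now has one interface (and one IP) on each network
docker network connect backend-net proxy
docker network inspect backend-net --format '{{range .Containers}}{{.Name}} {{end}}'
```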
Why can't my containers resolve each other by name?
You're probably on the default bridge network. Automatic DNS resolution only works on user-defined bridge networks. Create a network with docker network create and attach your containers to it. Container names and network aliases will then resolve automatically.
Is Docker overlay network the same as Kubernetes networking?
No. Docker overlay uses VXLAN to create a flat network across Swarm nodes. Kubernetes uses CNI plugins (Calico, Cilium, Flannel) that implement a different model where every Pod gets a unique, routable IP. If you're running Kubernetes, you don't configure Docker overlay networks.
Does EXPOSE in a Dockerfile actually open a port?
No. EXPOSE is purely documentation metadata. It tells users and tooling which ports the container expects to use. To actually make a port accessible from outside the container, you must use the -p flag at runtime. The -P flag publishes all EXPOSE'd ports to random host ports.
How do I debug Docker networking issues?
Start with docker network inspect <network-name> to see connected containers and their IPs. Use docker exec <container> ping <target> to test connectivity. For deeper issues, run docker exec <container> nslookup <name> to check DNS resolution. On the host, iptables -L -t nat shows Docker's NAT rules.
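The steps above can be strung together into a quick triage pass (network, container, and target names below are placeholders):

```shell
NET=my-app-net; C=web; TARGET=api

# 1. Who is on the network, and with what IPs?
docker network inspect "$NET" --format '{{range .Containers}}{{.Name}}={{.IPv4Address}} {{end}}'

# 2. Basic connectivity, then DNS resolution, from inside the container
docker exec "$C" ping -c 1 "$TARGET"
docker exec "$C" nslookup "$TARGET"

# 3. On the host: Docker's NAT rules for published ports
sudo iptables -L -t nat | grep -i docker
```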
Should I use Docker Compose networks or create networks manually?
Use Compose networks. Docker Compose automatically creates a user-defined bridge for each project and attaches all services to it. Services can reach each other by their Compose service name. Manual network creation is only needed for cross-project communication or advanced topologies.
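A minimal illustration (hypothetical compose.yaml): both services land on one auto-created project network, so web can reach the API at http://api:3000 with no network configuration at all:

```yaml
services:
  api:
    image: my-api:latest   # hypothetical image
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
# No `networks:` section needed: Compose creates a default
# user-defined bridge and attaches both services to it.
```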
Conclusion
Pick bridge (user-defined) for 90% of workloads. Use host only when you've measured a real performance bottleneck. Use overlay for multi-host Swarm deployments. Use none when you genuinely need zero network access. And always, always use user-defined bridges instead of the default -- automatic DNS alone makes it worth the extra line of configuration.
Written by
Abhishek Patel
Infrastructure engineer with 10+ years building production systems on AWS, GCP, and bare metal. Writes practical guides on cloud architecture, containers, networking, and Linux for developers who want to understand how things actually work under the hood.