On this path of learning Docker, it is necessary to understand Docker networking, i.e. how Docker manages networking and routes the requests coming to the containers.
Before that, if you want to learn more about docker, follow -> Learn Docker
Below is a tree showing everything involved in Docker networking, as of Docker version v18.09.
Docker Networking Types
There are two types of networking in docker:
- Single Host Networking
- Multi Host Networking
Let us learn each, one by one.
Single Host Networking
The scope of this type is "local" (available only to a single host), and it comes in 3 types:
- bridge
- host
- none
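As a quick sketch (assuming Docker is installed and the daemon is running), these built-in networks can be listed right after installation:

```
# List the networks Docker creates by default;
# the DRIVER column shows bridge, host and none.
docker network ls
```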
“bridge network driver” in Docker
- default driver used when creating containers
- usually used when one wants to run an application in a “standalone container“
For practical, follow -> Learn Docker Bridge Network (Everything Practical)
Question: What is the meaning of "bridge" in the context of Docker?
Answer: This can be explained in the following points:
- It uses what is called a "software bridge".
- Containers connected to the same bridge network can communicate with each other (but the way differs between the "default = docker0" bridge and a "user-defined bridge").
- When Docker is installed, the bridge driver is installed along with it. The driver adds "iptables" rules to the host's networking, which block communication between different bridge networks (this can still be enabled through other means and available solutions).
Note: The bridge networks only apply to containers which are running on the same docker host.
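A rough way to peek at the iptables rules mentioned above on a Linux host (the chain names DOCKER-USER and DOCKER-ISOLATION-STAGE-1 are what recent Docker versions create; they may differ on older releases):

```
# Inspect the chains the bridge driver adds to the host's iptables
# (run as root / with sudo); the output varies per host.
sudo iptables -L DOCKER-USER -n
sudo iptables -L DOCKER-ISOLATION-STAGE-1 -n
```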
Question: What if we still want communication between docker containers running on different docker daemon hosts?
Answer: In that case, we have two following options available:
- Manually manage the routing of packets at “OS level“
- Use an already baked solution, “Docker Overlay Network Driver”
Coming back to the "bridge network driver", it can be used in two ways:
- default bridge = “docker0”
- user-defined bridge = can be named as anything
Note: "user-defined bridge" networks are superior to (and slightly more complex than) the "default bridge = docker0".
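For illustration (the network name "my-bridge" is just an example), a user-defined bridge can be created and inspected like this:

```
# Create a user-defined bridge network (bridge is the default driver,
# so --driver bridge is optional here).
docker network create --driver bridge my-bridge

# Inspect it to see its subnet, gateway and connected containers.
docker network inspect my-bridge
```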
Question: What is the difference between “user-defined bridge” and “default bridge” in docker networking?
Answer: This can be covered in the following points:
Isolation
- Containers connected to the same "user-defined bridge" automatically expose all their ports to each other, and no ports to the outside world; this is not true for the "default = docker0 bridge".
- The benefit of this is that containerized applications can easily communicate with each other without accidentally opening ports to the outside world.
Automatic DNS resolution
- Containers connected to the "default = docker0 bridge" can only access or communicate with each other by "IP address", or with the help of the "--link" option, which is now considered "legacy" and is not recommended.
- Containers connected to a "user-defined bridge network" can easily resolve and communicate with each other using the "container name" or an "alias" (see the sketch below).
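A minimal sketch of this name-based resolution (the network and container names here are just examples):

```
# Create a user-defined bridge and start a container on it.
docker network create app-net
docker run -d --name web --network app-net nginx

# From another container on the same network, the name "web" resolves.
docker run --rm --network app-net alpine ping -c 2 web
```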
Question: What if we still want to use the “default = docker0 bridge network” and want communication between containers?
Answer: In that case, we have two options:
- We can use the "--link" option (sketched after this list)
- We can manually edit each container’s “/etc/hosts” file
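For completeness, a hedged sketch of the legacy "--link" approach on the default bridge (container names are examples; a user-defined bridge is still the preferred option):

```
# Start a container on the default bridge.
docker run -d --name db redis

# Link a second container to it; Docker adds a "db" entry
# to the second container's /etc/hosts.
docker run --rm --link db:db alpine ping -c 2 db
```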
Question: Can we "attach" and "detach" containers to and from a "user-defined bridge network" on the fly?
Answer: Yes. Containers can be connected to and disconnected from a "user-defined bridge network" on the fly, without restarting them (see the sketch below). For the "default bridge = docker0", however, the container has to be stopped and then recreated with different network options.
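A minimal sketch of attaching and detaching a running container (names reused from the sketches above; any user-defined network works):

```
# Attach a running container to a user-defined bridge on the fly...
docker network connect my-bridge web

# ...and detach it again, without stopping the container.
docker network disconnect my-bridge web
```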
Sharing data such as -> Environment Variables
- Initially, in earlier versions of Docker, "--link" was the only way to share data between containers.
- Now, we can use "docker volumes" (see the sketch below), which can also be defined in a "docker-compose.yml" file.
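As a rough illustration of sharing data through a volume instead of "--link" (the volume name and messages are just examples):

```
# Create a named volume and write into it from one container.
docker volume create shared-data
docker run --rm -v shared-data:/data alpine sh -c 'echo hello > /data/msg'

# Read the same data from a second container.
docker run --rm -v shared-data:/data alpine cat /data/msg
```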
Note: For a port that needs to be accessible by a "non-Docker host" or by a "Docker container" on another network, that port must be published using the "-p" or "--publish" option.
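A small example of publishing a port (host port 8080 is arbitrary):

```
# Map host port 8080 to the container's port 80.
docker run -d --name published-web -p 8080:80 nginx

# The container is now reachable from outside Docker.
curl http://localhost:8080
```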
Note: Overall, the best option is to always use a "user-defined bridge network".
“host network driver” in Docker
- When this driver is used, containers on a Docker host are no longer network-isolated from the Docker host.
- They now use the host's network interface directly, e.g. "eth0" or "enp0s3" for an Ethernet connection, or "wlp2s0" for wireless/WiFi.
- For example: if we run an "nginx container", which we know uses "port = 80" to serve content, then with the "host network driver" there is no need for "port mapping"; instead, the "host's port = 80" and the "host's IP address" are used directly to serve the nginx container's content (see the sketch below).
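A minimal sketch of the nginx example above (a Linux host is assumed, with port 80 free):

```
# No -p/--publish needed: nginx binds directly to the host's port 80.
docker run -d --name host-web --network host nginx

# Reachable on the host's own IP address / localhost.
curl http://localhost:80
```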
For practical, follow -> Docker Host Network V/s Bridge Network (Practical)
Note: As of now, this “host network driver” is only available on “Linux“.
“none network driver” in Docker
- This network is created as part of the Docker installation, and it is used when we want to completely disable networking for a container (see the sketch below).
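A quick sketch showing the effect of the "none" driver:

```
# Only a loopback interface is present inside the container.
docker run --rm --network none alpine ip addr
```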
Multi Host Networking
As of now, this only has one type:
- overlay
“overlay network driver” in Docker
- This requires "swarm mode" to be enabled.
- This network's main scope, and the reason it exists, is "docker swarm mode".
- When "swarm mode" is initialised, two new networks are created on that Docker host: an "overlay network = ingress" and a "bridge network = docker_gwbridge" (see the sketch after this list).
- ingress = handles the "control + data" traffic for swarm services; by default, if no network is specified for a swarm service, the service is connected to this one.
- docker_gwbridge = the Docker daemons on the different nodes of the swarm communicate with each other using this.
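A minimal sketch (run on a Linux host; "my-overlay" is an example name):

```
# Enable swarm mode; this also creates the "ingress" and
# "docker_gwbridge" networks mentioned above.
docker swarm init

# Verify the new networks.
docker network ls

# Create a user-defined overlay network for swarm services.
docker network create --driver overlay my-overlay
```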
Ports and their use in “overlay network driver“:
- tcp / 2377 = cluster management communications
- tcp / udp / 7946 = communication among nodes
- udp / 4789 = overlay network traffic
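If the swarm nodes sit behind a host firewall, these ports have to be open between them; as an illustration with "ufw" (assuming ufw is the firewall in use):

```
# Open the swarm / overlay ports between the nodes.
sudo ufw allow 2377/tcp   # cluster management
sudo ufw allow 7946/tcp   # node-to-node communication
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # overlay (VXLAN) traffic
```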
Note: The built-in “overlay” network is for container communication only.
Question: What if we want our containers to communicate with our "existing VMs" or "physical devices" on our existing networks?
Answer: In that case, “MACVLAN” is our answer.
“macvlan network” in Docker
- This gives each container its own “IP + MAC” Address on the existing network.
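A hedged sketch of creating a macvlan network (the subnet, gateway and parent interface below are placeholders and must match your own LAN and NIC):

```
# Create a macvlan network bound to the host's physical interface.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 my-macvlan

# The container gets its own IP + MAC on the existing network.
docker run --rm --network my-macvlan alpine ip addr
```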
Note: But, this type of network requires “promiscuous mode” on the host’s NIC (network interface card).
Note: It works well locally, but "on the cloud" many providers do not allow "promiscuous mode".
Question: What is the solution for that?
Answer: Use "IPvlan"; this is similar to "MACVLAN" but does not require "promiscuous mode".
“IPvlan network” in Docker
- Make a note that, as of this version of Docker, it is not fully developed and is still under testing.
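For illustration only (the driver may not be available or stable on your Docker build; the subnet and parent interface are placeholders):

```
# Create an ipvlan network in L2 mode; no promiscuous mode needed.
docker network create -d ipvlan \
  --subnet=192.168.1.0/24 \
  -o parent=eth0 \
  -o ipvlan_mode=l2 my-ipvlan
```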