Have you ever seen a huge ship stacked with thousands of containers, docking at one seaport after another? Each container is packed with different cargo such as fruit, vegetables, medicine, and groceries. The containers are isolated from each other and shipped from one place to another with ease. That's essentially what Docker is in the cloud computing world.
Now let's relate that real-life example to the Docker we run on our servers. Docker is a platform for developing, running, and shipping applications, much like the big ship that sails the sea. Docker is also known as an open-source containerization platform: it enables you to package your application, along with everything it needs, into a container.
In the following image, the big blue ship acts like Docker, carrying and shipping containers. Each container holds different cargo and is isolated from the others.
The most exciting part is that once your application is packaged into a container, you can deploy and run it anywhere in the world.
Docker has become hugely popular because it makes it easier, simpler, and safer to build, run, deploy, and manage containers. Getting started with Docker is easy, so let's take a quick look at the Docker terminology we'll use and learn in our upcoming journey.
Docker has drastically changed modern IT. Introduced back in 2013, it has been generating excitement in the IT world ever since, and it has changed the way applications are deployed. Developers love Docker because their application runs the same way whether it's in a test, staging, or production environment.
You might have heard another famous term these days: CI/CD, which stands for continuous integration and continuous deployment. Docker plays a vital part here, bridging the gap between development and operations activities by encouraging automation in the building, testing, and deployment of applications.
Additionally, Docker is a comprehensive end-to-end platform that includes UIs, CLIs, APIs, and security mechanisms that work together across the entire application delivery lifecycle.
Fortunately, many cloud hosting companies offer a built-in Docker cloud deployment strategy.
Docker is based on a client-server architecture. When Docker is installed on your server, a Docker daemon runs in the background and builds, runs, and distributes the Docker containers. Whenever you run a container, the Docker client talks to the Docker daemon.
Let’s check out the following architecture diagram of Docker and let me explain it.
So there are 3 core parts in the Docker architecture.
That's you: the user who executes commands like docker run, docker pull, or docker search. When you run any docker command, say docker run, the client sends a request to the Docker daemon (dockerd) on the Docker host via the Docker API.
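As a quick sketch of that client-daemon interaction (assuming Docker is installed and the daemon is running), each of these everyday client commands is relayed to dockerd over the API:

```shell
# Ask the daemon to download the official alpine image from the registry
docker pull alpine

# Ask the daemon to create and start a container from that image,
# removing it automatically when it exits
docker run --rm alpine echo "hello from a container"

# Ask the daemon to list the currently running containers
docker ps
```

The client itself does very little here; all the real work happens in the daemon.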
Also called the Docker daemon, this is what does the heavy lifting behind all the containerization processes. The Docker host executes the requests coming from the Docker client. It is the man in the middle that coordinates with both the client and the Docker registry: it pulls images from the registry, builds images, and runs them as containers.
The registry is the central hub for Docker images. Docker Hub is the default public registry that anyone can use to keep their images, and you can even run your own private registry.
If you want to know more about Docker Hub and how to set up a Docker Hub account, read my Docker Hub guide.
You can think of a Docker container as a small process running on the main server. These processes are isolated from each other, yet they use the kernel of the underlying server's OS. Did you get my point? Simply put, a container consists of an entire runtime environment: it shares the host OS kernel and, where appropriate, bins and libraries.
A Docker container is built from a Docker image, which bundles all the dependencies, libraries, and configuration files the application needs to run into one package.
If you get into a running container via a bash shell, you will see the layout of a complete running operating system. In short, a Docker container is the smallest footprint of an OS; it can be as small as about 5 MB. For instance, Alpine Linux is a minimal Docker image with a complete package index that is only about 5 MB in size.
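You can see this for yourself (assuming Docker is installed) by pulling Alpine, checking its size, and opening a shell inside it:

```shell
# Pull the minimal Alpine image (roughly 5 MB)
docker pull alpine

# Show the image's size on disk
docker image ls alpine

# Start an interactive shell inside a container
docker run --rm -it alpine /bin/sh
# Inside, running `ls /` shows bin, dev, etc, home, lib, ...
# just like the root filesystem of any Linux machine
```

Despite looking like a full OS from the inside, the container is still just an isolated process sharing the host's kernel.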
Docker Engine manages and runs all the Docker containers.
Finally, Docker containers help by packaging the application platform and its dependencies together, and they abstract away the differences in OS distributions and underlying infrastructure. You can run a CentOS-based container on an Ubuntu host, and vice versa.
If you are going to work with Docker, there are a few terms and tools you should know. You'll learn everything gradually, and you may not need all of these words at once, but you should know the meanings of the following key Docker terms.
Docker Engine is the baby-sitter for Docker containers. It pulls images, builds images, and runs them as containers. It's the daemon that hosts the containers.
A Docker image is a bundled package containing everything needed to run the application as a container. If you are going to deploy a Node.js app, you'll pull the node image, which comes packed with all the tools, libraries, and dependencies the application code needs to run as a container. Pull the image and run the app in a container; it's that simple.
As stated in the Docker container section above, a Docker container is a live, running instance of a Docker image. Containers are isolated, but they can still communicate with each other when they're on the same network.
What I love about Docker is the Dockerfile. It's a simple text file, saved as "Dockerfile" with a capital D, that contains the instructions for building a Docker image. A Dockerfile lets you create images on auto-pilot: it holds the list of commands that Docker Engine will run to build the image, and you can modify it and re-build the image whenever you want.
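As a minimal sketch, a Dockerfile for a hypothetical Node.js app might look like this (the file name server.js and port 3000 are placeholders for whatever your app actually uses):

```dockerfile
# Start from the official Node.js image (Alpine variant for a small footprint)
FROM node:20-alpine

# Set the working directory inside the image
WORKDIR /app

# Copy the dependency manifests first so Docker can cache the install layer
COPY package*.json ./
RUN npm install

# Copy the rest of the application code
COPY . .

# Document the port the app listens on and define the start command
EXPOSE 3000
CMD ["node", "server.js"]
```

You would then build it with something like `docker build -t my-app .` and run it with `docker run -p 3000:3000 my-app`.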
A Docker registry is the image storage point: it's where you push images to and pull them from, with Docker Hub being the default registry. A registry is organized into repositories, and you need to sign up on Docker Hub to create your own image repositories. On the free plan, you can only create one private repository for your images. There are other registries as well, such as Amazon ECR, Google Container Registry, and IBM Cloud Container Registry.
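A typical push workflow (assuming you have a Docker Hub account; `yourname` and `my-app` are placeholder names) looks like this:

```shell
# Log in to Docker Hub with your account credentials
docker login

# Tag a local image with your Docker Hub repository name and a version tag
docker tag my-app yourname/my-app:1.0

# Push the tagged image up to the registry
docker push yourname/my-app:1.0

# Anyone can now pull it (if the repository is public)
docker pull yourname/my-app:1.0
```

The `yourname/my-app` part is the repository, and `1.0` is the tag within it.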
Here's what Docker Hub looks like:
A Docker repository is a collection of Docker images with the same name but different tags. In other words, a repository is where you store one or more versions of a specific Docker image, and the tags help you recognize each version.
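To see tags in practice (assuming Docker is installed), you can pull two tags from the official node repository and list them side by side:

```shell
# Two different tags from the same "node" repository
docker pull node:20
docker pull node:20-alpine

# Both appear under the same repository name, distinguished only by tag
docker image ls node
```

Same repository, two versions: one full-size image and one slim Alpine-based variant.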
I have created a public and a private repository on hub.docker.com.
Docker Swarm is a somewhat advanced tool that you'll use for high-availability applications, so for now I'd recommend just getting an overview of it. It is a container orchestration tool that lets you manage many containers running at the same time. You can describe a multi-container application in a Compose-format YAML file and deploy it to the swarm as a stack. Ultimately, Swarm helps you deploy hundreds of containers across multiple servers in the cloud.
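A minimal Swarm sketch (assuming Docker is installed; the service name `web` and the replica counts are arbitrary choices for illustration):

```shell
# Turn the current host into a Swarm manager node
docker swarm init

# Run a service with 5 replicas of the nginx image across the swarm
docker service create --name web --replicas 5 -p 80:80 nginx

# Check how the replicas are distributed across the nodes
docker service ps web

# Scale the service up later if traffic grows
docker service scale web=10
```

Swarm then keeps the desired number of replicas running, restarting or rescheduling containers if a node fails.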