In 2017, I joined a team responsible for the CI/CD of a platform for processing mortgage applications. This platform processed 1,300 requests per week.
As a team, we implemented a containerized testing solution using Docker, covering both the testing infrastructure and the virtualized interfaces (image below).

Why Docker and why now?
Docker was released in 2013 as an open-source project by a company named dotCloud. Within a year, the project became so popular that the company renamed itself Docker, Inc.
Now, more than five years later, the whole ecosystem has made a huge shift.
Whether you are an operator, sysadmin, developer, tester or build engineer, it would be very smart to jump into Docker. You will gain an advantage in your next assignments; it is only a matter of time until your company or client decides to move to Docker.

The really hard part about these migrations is the migration itself: we have to learn new tools and workflows (which also makes it interesting) and get up to speed on the terminology. Yet unlike previous shifts, Docker is focused on the migration experience, meaning its tools work just as well for developers as they do for system operators. The problem with a lot of the previous technologies was that they were built purely for system admins.
Versions of Docker
- DIRECT
You install it directly on a supported operating system, which means it runs on that machine's kernel. Before 2016, that really just meant Linux. Since then we have seen support for the Raspberry Pi and mainframes, and now even Windows Server 2016 is natively supported.
- MAC / WINDOWS
This version comes with a suite of tools, including a GUI, all with the purpose of making it very easy for a developer to create containers on either a Mac or a Windows machine.
- CLOUD
The last option is the Cloud: the Azure, Google or AWS versions. These come with features specific to that cloud vendor, for example persistent storage for your databases or features that automatically update elastic load balancers.
Different Releases
- Stable Release
By default, when you download Docker, you get the Stable version, which is released once a quarter. Each release is supported for 4 months, so you get an extra month to install the next quarterly stable update before the previous release stops receiving updates.
- Edge Release
Edge is effectively the beta channel in the Docker world. A new release comes out every month, but each one is only supported for a month, until the next beta comes out. Personally, I use the Edge version for testing new Docker features.
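If you are not sure which release you are running, you can check the installed version from the command line:

docker version

The output shows the client and server (engine) versions, so you can see which release you currently have.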

Installing Docker on Windows
Docker for Windows
(this version provides the best experience)
As a side note, this version can only be used on the Windows 10 Professional and Enterprise editions. If you are using Windows 10 Home or an older version of Windows, you have to use the Docker Toolbox version (explained below).
This version uses Hyper-V, which runs a Linux VM for the Linux containers, so you no longer need VirtualBox or VMware.
Docker for Windows can be downloaded from the Docker Store.
Docker Toolbox
(the first version released for Windows; it does not support Windows containers!)
This is a great alternative and still receives updates. The downside is that it lacks the latest innovations of the Docker for Windows version (such as the use of Hyper-V and Windows containers).
Docker Toolbox can also be downloaded from the Docker Store. It uses VirtualBox under the hood, and you manage the VM through docker-machine.
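As a rough sketch of that workflow (Docker Toolbox normally creates a VirtualBox VM named default, so I am assuming that name here), managing the VM through docker-machine looks like this:

docker-machine ls
docker-machine start default
docker-machine env default

The last command prints the environment variables that point your Docker client at the VM; once they are set, the docker commands in the rest of this article work the same way.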
Pulling your first image and creating a container
We will be downloading an image from the Docker registry and creating a container out of it. In this case, we will run a simple webserver (using Nginx).
Before we start, it might be good to explain the differences between images and containers.
Image:
The binaries, libraries and source code necessary for your application to run.

Container:
A container is a running instance of an image, so we can have multiple containers running based on the same image.
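To make that concrete, here is a small sketch: both containers below are started from the same nginx image; they just get different names and host ports (the names and port numbers are only examples):

docker container run --detach --name webserver_one --publish 8081:80 nginx
docker container run --detach --name webserver_two --publish 8082:80 nginx
docker container ls

The last command would now list two containers, both based on the single nginx image. We will walk through these commands step by step below.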
So, let's start by opening the command prompt and making sure Docker for Windows is running (you can verify this via the icon in your system tray).
- Open the command prompt by clicking Start and typing cmd.

- Now, if we execute the command below, we should get an overview of our (currently) running containers.
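Here, docker container ls is the newer syntax; docker ps is the classic, equivalent command:

docker container ls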
As you can see (in the screenshot below), there are no containers running at the moment.

- Now we are ready to pull our first image and start a container from it, using the command below.
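Putting together the options explained in the list that follows, the full command looks like this:

docker container run --publish 4444:80 --name first_container nginx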
What will happen:
- First, Docker checks whether the image already exists locally; as you can see in the screenshot below, it could not find it.
- Because the image was not available locally, it pulled it from the Docker registry.
- --publish is used to publish port 4444 on localhost (your laptop/desktop) to port 80 inside the container.
- --name is used to give the running container a unique name; in our case we used first_container.
- nginx specifies the name of the image being used.

- Now, if you start a web browser and navigate to http://localhost:4444, you should get the Nginx welcome page (similar to the screenshot below).
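If you prefer to check this from the command line, curl works as well (assuming curl is available on your system; run it from a second command prompt, since the first one is still attached to the container):

curl http://localhost:4444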

- If you have a look at your command prompt again, you will see the traffic that has been sent to the container (similar to the image below).
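Another way to see that traffic is to ask Docker for the container's logs, using the name we gave it:

docker container logs first_container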

Summary and next steps
After following the steps above you should be able to:
- Install Docker on your local machine.
- Pull images from the Docker registry and run containers.
- Publish the port between the container and your laptop/desktop.
- Start a simple webserver.
In the next article I am planning to write, we will dive into Dockerfile basics and create a container (from a Dockerfile) that can host your website.
For now, I hope you enjoyed reading the article. If you have any questions, feel free to leave a comment below!