Docker Fundamentals



Docker Service Vs Docker Run

docker service and docker run look similar but behave quite differently. docker service largely supersedes docker run for orchestrated workloads, but docker run has not been replaced, since many workflows still depend on it.
The main difference is that docker run was designed to start a container on one particular node only, whereas docker service can schedule and manage containers across multiple nodes.
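 
As a quick illustration (a minimal sketch; the names and replica count are arbitrary examples), the two commands below start the same nginx image, first as a single container on the local node, then as a swarm service spread across the cluster:
 
$ docker run -d --name web1 nginx                            # one container, on this node only
$ docker service create --name web1-svc --replicas 3 nginx   # three replicas scheduled across swarm nodes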
 
Dockerfile Vs docker-compose
 
This example explains the docker-compose file, which is basically used to launch several containers all at once, and the second file involved, the Dockerfile.
 
1. Dockerfile
The Dockerfile defines how a particular image is built: which base image to start from, whether apt-get update has to run after pulling the base image, and so on. Everything needed to build an image is listed in the Dockerfile, and every custom image has one.
Ex:
 
FROM drupal:8.8.2
 
 RUN apt-get update && apt-get install -y git \
&& rm -rf /var/lib/apt/lists/*
 
WORKDIR /var/www/html/themes
 
RUN git clone --branch 8.x-3.x --single-branch --depth 1 https://git.drupal.org/project/bootstrap.git \
    && chown -R www-data:www-data bootstrap
 
WORKDIR /var/www/html
 
 
The above Dockerfile first pulls the base image, then runs commands such as apt-get on top of it to build further, and finally packs the result into a single new image.
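 
To turn this Dockerfile into an image you run docker build from the directory containing it; a minimal sketch (the tag custom-drupal matches the image name used in the compose file below, and the port mapping is just an example):
 
$ docker build -t custom-drupal .
$ docker container run -d -p 8080:80 custom-drupal   # optional: quick test of the freshly built image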
 
 
2. docker-compose
The docker-compose file is where you declare several different containers that you want to deploy all at once, with all of their settings in a single file. In places it looks like the contents of several Dockerfiles pasted one below the other; if you write a compose file for just one image, it ends up looking pretty much like a Dockerfile anyway.
 
But a Dockerfile alone will not let you launch several containers together and then wire them up further through their networking and so on.
Ex: Docker-compose
 
 
version: '2'
# NOTE: move this answer file up a directory so it’ll work
 
services:
 
  drupal:
    image: custom-drupal
    build: .    # build: . makes compose first build a new image from the Dockerfile in the current working directory and then use that image for this service, so the two files work together
    ports:
      - "8080:80"
    volumes:
      - drupal-modules:/var/www/html/modules
      - drupal-profiles:/var/www/html/profiles
      - drupal-sites:/var/www/html/sites
      - drupal-themes:/var/www/html/themes
 
  postgres:
    image: postgres:12.1
    environment:
      - POSTGRES_PASSWORD=mypasswd
    volumes:
      - drupal-data:/var/lib/postgresql/data
 
volumes:
  drupal-data:
  drupal-modules:
  drupal-profiles:
  drupal-sites:
  drupal-themes:
 
A Dockerfile is a simple text file that contains the commands a user could call to assemble an image whereas Docker Compose is a tool for defining and running multi-container Docker applications.
 
With Docker Compose you define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment, and you get the whole app running with a single command, docker-compose up. Compose uses a Dockerfile if you add a build entry to your project's docker-compose.yml. A sensible workflow is to write a suitable Dockerfile for each image you wish to create, then use Compose to build those images and assemble the application.
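 
The basic Compose workflow, run from the directory containing docker-compose.yml, looks roughly like this (a sketch; the drupal service name comes from the example above):
 
$ docker-compose up -d              # build (if needed) and start all services in the background
$ docker-compose ps                 # list the running services
$ docker-compose logs -f drupal     # follow the logs of one service
$ docker-compose down               # stop and remove the containers and default network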
 
 
 
https://dockerlabs.collabnix.com/docker/cheatsheet/
 
 
Docker Bind Mount
 
Basically we map a folder on the host computer to a folder inside a container. If the container needs to access files from a directory, we map that directory through a bind mount to an external folder, such as one on the desktop; we can then easily change the contents of the host folder and the running container immediately sees the updated files.
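 
A minimal sketch (the host path, names and image are just examples): run nginx and bind-mount a local folder over the directory it serves, so edits made on the host are immediately visible inside the container:
 
$ docker container run -d --name site -p 8080:80 \
    -v /home/user/my-site:/usr/share/nginx/html nginx
# the same thing with the newer --mount syntax:
$ docker container run -d --name site2 -p 8081:80 \
    --mount type=bind,source=/home/user/my-site,target=/usr/share/nginx/html nginx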
 
3.Docker-Machine
 
docker-machine is used to create more than one node (VM). So far, a command such as $ docker service create --name new_service alpine
 
creates only one service on the local machine. But if we need more than one VM, i.e. more than one node so that different services can run on each and be managed through swarm, docker-machine is the tool to use.
 
Creating nodes:
 
$ docker-machine create node1
this creates one node
 
now
$ docker-machine create node2
This creates the second node; remember to run it in a new CMD/terminal window.
 
After this, to get into one of the nodes that were created, we can SSH into it and then create services on that node:
 
$ docker-machine ssh node1
now you’ll be on docker node1
 
4.DOCKER SWARM
> To create a swarm: $ docker swarm init
 
> Copy-paste the join command (with its token) it prints to add new workers.
 
To leave the swarm: $ docker swarm leave
 
 
Docker Service
 
$ docker service ls
 
This gives back service IDs, not container/image IDs.
 
It shows a column named "REPLICAS" marked as 1/1 or 2/3 etc. 2/3 means that of the 3 replicas we asked for, only 2 are currently running; swarm (and the admin) works to keep this at 100%, i.e. bring it back to 3/3.
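 
For example (a sketch; the service name and counts are arbitrary), the desired replica count is set when creating or scaling a service, and docker service ls reports how many are actually running:
 
$ docker service create --name web --replicas 3 nginx
$ docker service ls            # the REPLICAS column shows 3/3 once all tasks are running
$ docker service scale web=5   # raise the desired count; swarm starts two more tasks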
 
Overlay driver
 
This enables networking within the swarm for intranet-style connectivity. The overlay driver is the one to use here because it can span the nodes and let them communicate as if they were all plugged into the same LAN.
 
$ docker network create --driver overlay overlay_network_name
 
For creating service and making them join this network:
 
$ docker service create --name psql --network overlay_network_name -e POSTGRES_PASSWORD=mypass postgres
 
Then run $ docker service ps psql
to view the running service named psql.
 
A must-learn swarm exercise: connecting docker-machine nodes to a swarm over the overlay network, and creating a website with Drupal as the web server and Postgres as the database.
 
We first create multiple nodes, node1, node2 and node3.
 
$ docker-machine create node1
$ docker-machine create node2
$ docker-machine create node3
 
$ docker-machine ls
 
to see the names of all the running nodes.
 
Now we SSH into node1, which will be our manager node. Imagine these three nodes as three different laptops with Docker installed on each. We access the first node by SSHing into it:
 
 
$ docker-machine ssh node1
 
We make this node our swarm manager. Verify that you are on node1: the command line prompt will show docker@node1.
 
Now create an overlay network for the swarm to communicate over:
 
docker@node1:~$ docker network create --driver overlay mydrupal    # mydrupal is the name of the network
 
Run the command below to create a swarm and make this node1 the manager by default:
$ docker swarm init
 
This might throw an error asking you to specify the advertise IP; in that case note/copy the public IP from the error output and then run the command below:
 
docker swarm init --advertise-addr 192.168.99.104    # replace this IP with the one copied from the error
 
This makes node1 our swarm manager.
 
Now copy the whole docker swarm join command (including the token) printed as output of the init command, switch to the node2 CMD window, paste it there and press Enter.
 
This makes node2 join node1's swarm. Do the same with node3. At this point we have four CMD windows: one running plain Docker with no SSH session, and three others, each SSHed into one of the nodes.
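 
The pasted command looks roughly like this (a sketch; the token is a placeholder and the IP is the manager's advertise address from the init output, 2377 being the swarm management port):
 
docker@node2:~$ docker swarm join --token SWMTKN-1-<long-token-string> 192.168.99.104:2377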
 
So far we have created three nodes, connected them to the swarm, and made node1 the manager. We have also created the overlay network on node1. Now we need to start services on these nodes and make them work; however, we give almost all commands to node1 and it automatically schedules the different services onto different nodes as needed.
 
If you run $ docker network ls, it will show our mydrupal overlay network plus two networks created automatically for/by swarm: one named "docker_gwbridge", and one named "ingress" whose driver column shows "overlay".
 
Now we will launch the Postgres service from node1:
 
docker@node1:~$ docker service create --name psql --network mydrupal -e POSTGRES_PASSWORD=mypass postgres
 
This will create postgres on node1.
 
Now that we have a database, we need to set up our website service. So again on node1, run:
 
docker@node1:~$ docker service create --name drupal --network mydrupal -p 80:80 drupal
 
This installs Drupal as well, just like Postgres. But if you run the two commands below one by one:
 
$ docker service ps psql
 
$ docker service ps drupal
 
then you'll observe that psql is on node1 and drupal on either node2 or node3; different services were installed on different nodes by commands given only to the manager node. This is how swarm divides the work among the various nodes.
Now run:
 
$ docker service inspect drupal
 
This gives details including which IP drupal is running on; paste that IP into Chrome and the Drupal website setup will start.
 
Routing Mesh
 
The routing mesh routes incoming (ingress) traffic across the nodes to wherever the matching task is running. We only give commands to the manager, yet different nodes run different services and requests still reach them; this is the routing mesh. Under the hood it relies on standard Linux kernel networking (IPVS).
 
It load-balances the swarm service across the nodes.
 
The routing mesh is a stateless load balancer and works at the TCP/IP level (roughly OSI layer 3/4), not at the application level. For application-level load balancing we need Nginx or HAProxy; these LB containers sit alongside the routing mesh and give you a stateful LB with the ability to handle sessions/cookies, which the default lower-level LB cannot do.
 
Stateless means we cannot use browser cookies to stick a client to a particular instance.
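 
As an illustration (a sketch; the name, port and replica count are arbitrary): publish a service on a port and the routing mesh makes it reachable on that port via every node's IP, even nodes not running a task for it:
 
$ docker service create --name mesh-demo -p 8080:80 --replicas 2 nginx
$ curl http://<ip-of-any-node>:8080    # answered by one of the tasks, wherever it runs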
 
5.Docker Stacks
Till now the best and easiest method to deploy multiple services was the docker-compose file. However, docker-compose by itself does not act on the swarm-specific settings we need, such as replica counts, placement and update behaviour, so those would have to be applied manually with separate commands.
Docker Stacks, however, takes one file and applies everything in it, including those swarm options, across the whole swarm.
 
So basically a stack file is like a single file containing all the CLI commands you were going to enter; like docker-compose it can include network and volume definitions (including bind mounts), plus the swarm deployment options.
But Stack does not support docker build; a build: key in the stack file is simply ignored, because stack expects all images to already be built.
 
Create a yml file on a node that is part of a swarm. So first create the nodes, initialize the swarm and let the members join; then create the stack yml file:
 
$ cat>app_name.yml
 
Press Enter, paste the YML content, then press Ctrl+D to save and exit. Now that the yml file is created, just deploy it:
 
$ docker stack deploy -c app_name.yml my_new_app
 
This deploys the stack described in the yml file and creates all the services across the different nodes. You can then access the deployed website/app on the IP address of any node.
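 
For reference, a minimal sketch of what app_name.yml might contain (the names, ports and replica counts are just illustrative; note the deploy: section, which only takes effect with docker stack deploy, and that there is no build:):
 
version: '3.1'
 
services:
  drupal:
    image: drupal:8.8.2
    ports:
      - "80:80"
    networks:
      - mydrupal
    deploy:
      replicas: 2
 
  postgres:
    image: postgres:12.1
    environment:
      - POSTGRES_PASSWORD=mypass
    networks:
      - mydrupal
    volumes:
      - drupal-data:/var/lib/postgresql/data
 
networks:
  mydrupal:
    driver: overlay
 
volumes:
  drupal-data: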
 
To get the node IP address you can use the command below:
docker node inspect self --format '{{ .Status.Addr }}'
To get a service's virtual IPs, inspect the service instead (docker service inspect, not docker node inspect), like:
docker service inspect --format '{{ json .Endpoint.VirtualIPs }}' service-id
To get the container IP address, use:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container-id
 
 
6. Docker Swarm Secrets
 
  1. Secrets are anything that you would like to save and not publish openly, like passwords, SSH keys, SSL certificates, Twitter API keys, etc. Anything that needs to be stored safely is called a secret.
  2. You can provide secret values in two ways: from a file, or by passing the value in on the CLI (via stdin).
  3. Secrets are a swarm-only feature. Inside a container the secret is not written to the hard disk; it is exposed under /run/secrets on an in-memory (tmpfs) filesystem, so what looks like an ordinary file is actually held in RAM.
  4. Because secrets are a swarm-only feature, the stack maintainers wanted to give backwards compatibility to docker-compose, which can also be used without swarm, on plain Docker. Since docker-compose cannot store files in RAM, the workaround is to create the file on the hard disk. This is not as safe, but it lets stack users mention secrets in their stack file and still deploy it with docker-compose.
  5. Secrets are stored on disk only on manager nodes, and only managers have the power to decrypt them.
  6. The way these secrets reach containers is through what is called the "control plane", the mutually authenticated communication channel between managers and workers.
  7. The secret is stored in swarm's internal database, which is encrypted, and the service/stack definition decides which service gets to use which secret.
  8. We can use a stack file to declare which services are allowed to use which secrets.
  9. Since secrets are a swarm-only feature (compose file format v3.1), that does not mean we cannot use the same yml file with docker-compose. For docker-compose, Docker found a way around: it physically creates a file on the hard disk and bind-mounts it into the container so the container can access the secrets file. Because of this we can only use file-based secrets there, not secrets passed in via CLI commands; it is not very secure, but it gets the work done. This is used for local development where there is no swarm present and you therefore use the docker-compose command.
 
Creating and Using secrets
 
  1. There are two ways: one is to first create a text file and then create a secret from it; the second is to provide the secret value directly in the CLI command (via stdin), which avoids leaving an intermediate file around (a sketch of this is shown after the Way 1 steps below).
 
Way 1
a. First create a file:
$ cat > secret_file.txt
 
Enter whatever password/username you want and press Ctrl+D to save and exit. The file is created; now we need to create a secret from it.
 
b. Run the following command:
$ docker secret create secret_A secret_file.txt
 
This creates the secret. Create another one as an experiment so that you end up with two secrets, and list them (below) to check the secrets created. Remember the flow: first create a text file, then run the create command against it to finally get the secret.
 
c. Now list them:
$ docker secret ls
 
This lists the two secrets created above (one we created ourselves, with its steps not shown here).
 
d. We cannot just view the values stored inside the secrets directly, so we need to associate these secrets with a service and then use the service to read them. Enter the following to create a psql service. The postgres image needs two parameters, a username and a password; instead of writing those values directly, we point the corresponding environment variables at the secrets' paths, so the service goes to those paths for the values.
 
docker service create --name psql --secret psql_user --secret psql_pass -e POSTGRES_PASSWORD_FILE=/run/secrets/psql_pass -e POSTGRES_USER_FILE=/run/secrets/psql_user postgres
 
The paths given are actually in an in-memory (tmpfs) filesystem. To check whether the service has access to the secrets, just exec into the psql container interactively and look at that path:
$ docker exec -it <docker container id> bash
$ ls /run/secrets
$ cat psql_user
This prints the secret value that was stored in the file used to create the psql_user secret.
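 
Way 2 mentioned earlier, where the value is passed on the CLI instead of via a file, can be done by piping the value into docker secret create; a minimal sketch (the secret name and value are just examples):
 
$ echo "moby_dock_pass" | docker secret create psql_pass_v2 -
# the trailing "-" tells docker secret create to read the secret value from stdin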
 
 
Very Important:
  1. For using stacks, the yml file version should be at least 3.
  2. For using secrets with a stack, the version needs to be at least 3.1.
  3. Swarm and stack do not support build; if build is mentioned in your stack file, it is simply ignored.
 
7.Docker for CI, Production, testing etc
 
  1. We can use docker-compose with a main yml file that only includes the image names, and then build several other files containing all the other settings, in different combinations, if you want to try the same images with different configurations. For example:
 
We create a main docker-compose.yml as below:
___________________
version: '3.1'
 
services:
  drupal:
    image: drupal:latest
 
  postgres:
    image: postgres:9.6
____________________
 
Now, in the above file we have not mentioned volumes, secrets or any other kind of configuration. This is where the -f flag comes in: it allows us to pass another file along with this main file to provide those configurations.
 
For example, after the above file is created with the name docker-compose.yml, we create another file in the same directory that mentions everything except the image names, since those are already in the main file. There are two ways to do this. One is to use a completely new file with any name that holds these configurations and pass it with the -f flag. The other is to create a file named exactly "docker-compose.override.yml"; like docker-compose.yml, this name is fixed and cannot be changed, and it is picked up automatically so you do not need the -f flag.
 
Below is the “docker-compose.override.yml”:
 
_____________________________
 
version: '3.1'
 
services:
  drupal:
    build: .
    ports:
      - "8080:80"    # note: space after "-" and before the quote
    volumes:
      - drupal-modules:/var/www/html/modules
      - drupal-profiles:/var/www/html/profiles
 
  postgres:
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/psql-pw
    secrets:
      - psql-pw
    volumes:
      - drupal-data:/var/lib/postgresql/data
 
secrets:
  psql-pw:
    file: psql-fake-password.txt
 
volumes:    # named volumes used above must also be declared at the top level
  drupal-modules:
  drupal-profiles:
  drupal-data:
 
_____________________________
 
Save the above file by the name "docker-compose.override.yml".
 
Now if you run
$ docker-compose up
 
Then the services will be deployed, and if you go into the service logs and inspect, you'll see that even though the main docker-compose.yml never mentions volumes or secrets, the volumes were still mounted as expected. That is because docker-compose up always looks for docker-compose.override.yml and, if found, applies its configuration on top of the main file. Observe also that the override file never contains the image names.
 
In a similar manner, let's say you have different environments in which you want to test your website. Different environments mean the configs differ while the main image stays the same. This also supports Continuous Integration (CI), because we can keep changing the config files and keep rebuilding and redeploying around the same image.
 
So for this, we create a file similar to the override file but named as we like; here we create "docker-compose.prod.yml". Nothing really changes, except that since the file is not named override, you must mention it explicitly with the -f flag alongside docker-compose.yml, unlike the override file which is picked up automatically.
 
So you can actually copy-paste the contents of the override file, name it "docker-compose.prod.yml", and then run the command:
 
$ docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
 
This creates the services using the docker-compose.yml file and takes the settings and configs from "docker-compose.prod.yml", just like with the override file. The advantage is that you can create several such files for testing, production, deployment etc., one per environment it will be deployed to.
 
Lastly, you can append the word "config" (the docker-compose config subcommand) to the command, which logically joins the docker-compose.yml and docker-compose.prod.yml files, and by redirecting the output you can create a file that is the merger of the two:
 
$ docker-compose -f docker-compose.yml -f docker-compose.prod.yml config > output.yml
 
The output.yml will be the physical merger of docker-compose.yml and docker-compose.prod.yml.
 
This is how we manage a Docker stack across different environments and also use it in CI, by merging different files for different deployments.
 
 
 
Updating running services
 
We can update services using update command.
 
$ docker service update --image myapp:1.2.1 myApp
 
This is how we can update the image.
 
But updating different settings works differently, and you may have to look up how to update each parameter; for example, updating a published port requires removing the old port and then adding the new one:
 
$ docker service update --publish-rm 8088 --publish-add 9090:80 myApp
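 
Another common update is changing the replica count; either of the following works (a sketch, reusing the hypothetical myApp service):
 
$ docker service update --replicas 5 myApp
$ docker service scale myApp=5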
 
 
8.Docker HealthCheck
$ docker container run --name p2 -d --health-cmd="pg_isready -U postgres || exit 1" postgres
 
The || exit 1 is there because Docker health checks only understand exit code 0 (healthy) and exit code 1 (unhealthy); if pg_isready fails with some other exit code, the || exit 1 normalizes it to 1 so the container is reported as unhealthy.
 
With a health check defined, the container's health state (starting, healthy or unhealthy) shows up alongside the normal status in the STATUS column of the docker container ls output.
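 
The same check can also be baked into an image with the HEALTHCHECK instruction in a Dockerfile; a minimal sketch (the interval and timeout values are arbitrary examples):
 
FROM postgres:12.1
 
HEALTHCHECK --interval=30s --timeout=3s \
  CMD pg_isready -U postgres || exit 1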
 
9.Docker Registry (Docker Hub, docker cloud, docker ..  etc)
 
Docker Hub is quite unique among registries because it also provides the facility to build images from source code. You can also use Docker Hub to trigger a webhook for automation, e.g. after an image is built, trigger an HTTP webhook that starts a Jenkins pipeline, so it works in the opposite direction too.
That is, you attach Docker Hub to GitHub or some other platform, and when the repository gets updated because new code is pushed, Git can trigger Docker Hub to build a fresh image from the newly updated files. This is great for CI/CD: you just commit new changes and Docker Hub automatically builds the image; you only have to associate the GitHub repo with the Docker image in the automated-build settings.
 
  1. AutoScaling in Docker (One of the reasons to use Kubernetes instead)
      Docker Swarm (or Swarm mode) does not support auto-scaling machines out of the box. You’d need to use another solution for that like docker-machine to create machines (with docker) on your infrastructure and link these to the existing Swarm cluster (with docker swarm join).
This will involve a lot of scripting, but the idea is to monitor the cluster for CPU / memory / network usage (with top or monit) and, once it goes beyond a threshold (say 70% of total cluster resources), trigger a script calling docker-machine to scale up the cluster (a rough sketch of such a script is given below). Using the same idea you can also scale down by draining and removing nodes (preferably agent/worker nodes) from the existing swarm cluster once you are below the lower threshold.
You need to make sure you are monitoring for sustained resource usage if you want to use this criteria or you will have your Infrastructure spawning and destroying nodes from the frequent and sudden changes in resource usage.
You can define a lower bound and an upper bound for machines in the cluster to keep things under control.
Note that Swarm requires at least 3 Manager nodes (recommended 5) to maintain a quorum for the Distributed Consensus algorithm. So the minimum recommended lower bound is 5 nodes (which you can extend with Agent nodes as resources are incrementally being used by services).
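 
A very rough sketch of such a scale-up script, assuming the manager is the docker-machine node1 from earlier and a hypothetical get_cluster_cpu_percent helper that you would implement with your monitoring tool of choice:
 
#!/bin/sh
# get_cluster_cpu_percent is a hypothetical helper returning average cluster CPU usage as an integer %
CPU=$(get_cluster_cpu_percent)
 
if [ "$CPU" -gt 70 ]; then
  NEW_NODE="node$(( $(docker-machine ls -q | wc -l) + 1 ))"
  docker-machine create "$NEW_NODE"                                      # provision a new VM with Docker
  TOKEN=$(docker-machine ssh node1 docker swarm join-token -q worker)    # node1 is the swarm manager
  MANAGER_IP=$(docker-machine ip node1)
  docker-machine ssh "$NEW_NODE" "docker swarm join --token $TOKEN $MANAGER_IP:2377"   # attach as worker
fi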

