Containers are taking over the world.


= [https://docs.docker.com/install/linux/docker-ce/ubuntu/ Installation] =
General procedure:
# Make sure you don't have Docker already installed from your package manager


== Ubuntu 16.04, 18.04, 20.04 ==
<source lang="bash">
# Optional, clear out config files
sudo rm /etc/default/docker # environment file


# The new Docker package is called 'docker-ce'
sudo apt-get remove docker docker-engine docker.io containerd runc docker-ce # start fresh
sudo apt-get -yq install apt-transport-https ca-certificates curl gnupg-agent software-properties-common # allow apt over HTTPS
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - # add Docker's official GPG key
sudo apt-key fingerprint 0EBFCD88 # verify the key fingerprint

# Add the repository
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" # or {edge|test}
sudo apt-get update

# Option 1 - install the latest version
sudo apt-get install docker-ce docker-ce-cli containerd.io

# Option 2 - install a pinned version
sudo apt-cache madison docker-ce # display available versions
sudo apt-get install docker-ce=<VERSION_STRING>          docker-ce-cli=<VERSION_STRING>         containerd.io
sudo apt-get install docker-ce=18.09.0~3-0~ubuntu-bionic docker-ce-cli=18.09.0~3-0~ubuntu-bionic containerd.io
sudo apt-mark hold     docker-ce docker-ce-cli containerd.io
sudo apt-mark showhold # show packages whose upgrades have been put on hold

# Unhold
sudo apt-mark unhold docker-ce docker-ce-cli containerd.io
</source>
[https://docs.docker.com/engine/release-notes/ Newer versions] (>18.09.0) of Docker ship as 3 packages:
* <code>containerd.io</code> - the container runtime daemon that interfaces with the OS (via runc); it decouples Docker from the OS and also provides container services to non-Docker container managers
* <code>docker-ce</code> - the Docker daemon; this is the part that does all the management work, and it requires the other two on Linux
* <code>docker-ce-cli</code> - CLI tools to control the daemon; they can be installed on their own if you only want to control a remote Docker daemon
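A quick sanity check after installation; these verification commands are an addition to the original procedure, but they only use standard Docker CLI and systemd commands:
<source lang="bash">
docker --version              # confirm the client is installed and which version
sudo systemctl status docker  # confirm the daemon is running
sudo docker run --rm hello-world # pull and run the test image end to end
</source>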


Example of how to run a [[Jenkins CI|Jenkins docker image]].

To manage Docker as a non-root user, add yourself to the <code>docker</code> group:
<source lang="bash">
sudo usermod -aG docker $(whoami)
# log in to the new docker group (to avoid having to log out / log in again)
newgrp docker
</source>
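With the group membership in place, a short check (an addition, not part of the original notes) confirms you no longer need sudo:
<source lang="bash">
id -nG | grep -w docker            # confirm the current shell sees the docker group
docker run --rm hello-world        # should work without sudo now
</source>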


Attaching directly to a running container and then exiting the shell will cause the container to stop. Executing another shell in a running container and then exiting that shell will not stop the underlying container process started on instantiation.


= Entrypoint, CMD, PID1 and [https://github.com/krallin/tini tini] =
== Entrypoint and receiving signals ==
Receiving signals and handling them within containers is just as important as for any other application. Remember that a container is a group of processes running on your host, so you need to take care of the signals sent to your applications.

As the principal container management tool, <code>docker stop</code> sends a configurable (in the Dockerfile) signal to the entrypoint of your application, where <code>SIGTERM - 15 - Termination (ANSI)</code> is the default.

;ENTRYPOINT syntax
<source>
# exec form, requires a JSON array; IT SHOULD ALWAYS BE USED
ENTRYPOINT ["/app/bin/your-app", "arg1", "arg2"]

# shell form, it always runs as a subcommand of '/bin/sh -c', thus your application will never see any signal sent to it
ENTRYPOINT "/app/bin/your-app arg1 arg2"
</source>

;ENTRYPOINT is a shell script
If your application is started by a shell script in the regular way, the shell spawns your application in a new process and it won't receive signals from Docker. Therefore you need to tell the shell to replace itself with your application using the <code>[https://stackoverflow.com/questions/18351198/what-are-the-uses-of-the-exec-command-in-shell-scripts exec]</code> command, see also the <code>[https://en.wikipedia.org/wiki/Exec_(system_call) exec syscall]</code>. Use:
<source lang=bash>
/app/bin/my-app      # incorrect, signals won't be received by 'my-app'
exec /app/bin/my-app # correct way
</source>
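To illustrate the point above, a minimal sketch of such a shell-script entrypoint; the script name, environment variable and application path are hypothetical:
<source lang=bash>
#!/bin/sh
# docker-entrypoint.sh (hypothetical) - do setup work, then hand PID 1 over to the app
set -e

# example of preparation work done by the wrapper script
export APP_CONFIG=${APP_CONFIG:-/etc/my-app/config.yml}

# 'exec' replaces the shell with the application, so SIGTERM from 'docker stop' reaches it directly
exec /app/bin/my-app "$@"
</source>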


;ENTRYPOINT exec with piped commands causes a subshell to start
If you <code>exec</code> with a pipe, the command is forced to run in a subshell, with the usual consequence: no signals reach your app.
<source lang=bash>
exec /app/bin/your-app | tai64n # here you want to add timestamps by piping through tai64n,
                                # which causes your command to run in a subshell
</source>

;Let another program be PID 1 and handle signalling
* tini
* dumb-init
<source lang=bash>
ENTRYPOINT ["/tini", "-v", "--", "/app/bin/docker-entrypoint.sh"]
</source>

tini and dumb-init are also able to proxy signals to process groups, which technically allows you to pipe your output. However, your pipe target receives that signal at the same time, so you can't log anything on cleanup unless you crave race conditions and SIGPIPEs. So it's better to avoid logging at termination at all.

;Change the signal that will terminate your container process
Listen for SIGTERM, or set STOPSIGNAL in your Dockerfile.
<source lang=bash>
vi Dockerfile
STOPSIGNAL SIGINT # this will trigger the container termination process if someone presses Ctrl^C
</source>

;References:
* [https://hynek.me/articles/docker-signals/ Why Your Dockerized Application Isn’t Receiving Signals]
* [http://smarden.org/runit/ runit] alternative to tini
 
== Tini ==
It's a tiny but valid init for containers:
* protects you from software that accidentally creates zombie processes
* ensures that the default signal handlers work for the software you run in your Docker image
* does so completely transparently! Docker images that work without Tini will work with Tini without any changes
* Docker 1.13+ has Tini included; to enable it, just pass the <code>--init</code> flag to <code>docker run</code>

;Understanding Tini
After spawning your process, Tini will wait for signals and forward them to the child process, and periodically reap zombie processes that may be created within your container. When the "first" child process exits (/your/program in the examples below), Tini exits as well, with the exit code of the child process (so you can check your container's exit code to know whether the child exited successfully).

Add Tini - generic dynamically-linked binary (in the 10KB range)
<source lang=bash>
ENV TINI_VERSION v0.18.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]

# Run your program under Tini
CMD ["/your/program", "-and", "-its", "arguments"]
# or docker run your-image /your/program ...
</source>

Add Tini to an Alpine based image
<source lang=bash>
RUN apk add --no-cache tini
# Tini is now available at /sbin/tini
ENTRYPOINT ["/sbin/tini", "--"]
</source>

Existing entrypoint
<source>
ENTRYPOINT ["/tini", "--", "/docker-entrypoint.sh"]
</source>

;References:
*[https://github.com/krallin/tini/issues/8 What is advantage of Tini?]
*[https://ahmet.im/blog/minimal-init-process-for-containers/ Choosing an init process for multi-process containers]
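The same effect is available at run time via the <code>--init</code> flag mentioned above; a small sketch (an addition to the original notes) comparing PID 1 with and without it:
<source lang=bash>
# with --init, the Docker engine injects its bundled tini as PID 1
docker run --rm --init alpine ps
# without --init, the command itself runs as PID 1 and must handle signals/zombies on its own
docker run --rm alpine ps
</source>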
 
= Mount directory in container =
We can mount a host directory into a docker container so the content will be available from the container
<source lang="bash">
docker run -it -v /mnt/sdb1:/opt/java pio2pio/java8
# syntax: -v /path/on/host:/path/in/container
</source>
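A read-only variant of the same mount can be useful when the container should not modify host data; this is an addition that reuses the paths and image from the example above:
<source lang="bash">
docker run -it -v /mnt/sdb1:/opt/java:ro pio2pio/java8
# equivalent --mount syntax
docker run -it --mount type=bind,source=/mnt/sdb1,target=/opt/java,readonly pio2pio/java8
</source>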


= Build image =
== Dockerfile ==
Each ''RUN'' line creates a layer (an interim container), so where possible we should join lines so the image ends up with fewer layers (see the sketch after this block).
<source lang="bash">
$ wget jkd1.8.0_111.tar.gz
$ cat Dockerfile <<- EOF #'<<-' heredoc with '-' minus ignores <tab> indent
ARG TAGVERSION=6                    #only command allowed before FROM
FROM ubuntu:${TAGVERSION}
FROM ubuntu:latest                  #defines base image eg. ubuntu:16.04
LABEL maintainer="myname@gmail.com" #key/value pair added to the metadata of the image

ARG ARG1=value1

ENV ENVIRONMENT="prod"
ENV SHARE /usr/local/share  #define env variables with syntax ENV space EnvironmentVariable space Value
ENV JAVA_HOME $SHARE/java

# COPY jkd1.8.0_111.tar.gz /tmp #works only with files, copies a file into the container filesystem, here to /tmp
# ADD http://example.com/file.txt
ADD jkd1.8.0_111.tar.gz /  #add files into the image root folder, can also add URLs

# SHELL ["executable","params"] #overrides /bin/sh -c for RUN, CMD, etc.

# RUN executes commands during the build process in a new layer, e.g. it is often used for installing software packages
RUN mv /jkd1.8.0_111.tar.gz $JAVA_HOME
RUN apt-get update
RUN ["apt-get", "update", "-y"] #in JSON array format, allows running a command without requiring a shell executable

VOLUME /mymount_point #this command does not mount anything from a host, it just creates a mountpoint

EXPOSE 80 #it doesn't automatically map the port to the host

#containers usually don't have system management eg. systemctl/service/init.d as they are designed to run a single process
#the entry point becomes the main command that starts the main process
ENTRYPOINT apachectl "-DFOREGROUND" #think about it as the MAIN_PURPOSE_OF_CONTAINER command.
# It's always run by default; it can only be overridden with --entrypoint

#Single command that runs when a container is started from the image. Only one per Dockerfile, can be overridden.
CMD ["/bin/bash"]

# STOPSIGNAL

EOF
</source>
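As hinted above, chaining commands inside a single RUN keeps the layer count down; a minimal sketch (the package names are only illustrative):
<source lang="bash">
# one layer instead of three; clean the apt cache in the same layer so it is not baked into the image
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl ca-certificates && \
    rm -rf /var/lib/apt/lists/*
</source>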


== Build ==
<source lang="bash">
docker build --tag myrepo/java8 .  #-f points to a custom Dockerfile name eg. -f Dockerfile2
# myrepo :- dockerhub username, java8 :- image name
# .      :- directory where the Dockerfile is

docker build -t myrepo/java8 . --pull --no-cache --squash
# --pull     force downloading a fresh base image regardless of whether a local copy exists
# --no-cache don't use the build cache, forcing a rebuild of all interim layers
# --squash   after the build, squash all layers into a single layer

docker images            #list images
docker push myrepo/java8 #upload the image to a DockerHub repository
</source>

<code>squash</code> is enabled only on a docker daemon with experimental features enabled.
 
= Manage containers and images =
== Run a container ==
When you ''run'' a container you create a new container from an image that has already been built / is available, and put it in the running state
* -d detached mode, the container keeps running in the background after <code>docker run</code> returns
* -i interactive mode, allows you to log in to / ssh into the container

<source lang="bash">
# docker container run [OPTIONS]          IMAGE    [COMMAND] [ARG...] # usage
  docker container run -it --name mycentos centos:6 /bin/bash
  docker           run -it pio2pio/java8 # the 'container' section of the command is optional
# -i      :- run in interactive mode, then run command /bin/bash
# --rm    :- will delete the container after the run
# --publish | -p 80:8080 :- publish exposed container port 8080 on port 80 of the docker-host
# --publish-all | -P     :- publish all exposed container ports to random ports >32768
</source>


== List images ==
<source lang="bash">
ctop #top for containers
docker ps -a #list containers
docker image ls #list images
docker images #short form of the command above
docker images --no-trunc
docker images -q #--quiet
docker images --filter "before=centos:6"

# List exposed ports on a container
docker port CONTAINER [PRIVATE_PORT[/PROTOCOL]]
docker port web2
80/tcp -> 0.0.0.0:81
</source>


== Search images in remote repository ==
Search DockerHub for images. You may be required to run <code>docker login</code> first
<source lang="bash">
IMAGE=ubuntu
docker search $IMAGE
NAME                            DESCRIPTION                                    STARS OFFICIAL  AUTOMATED
ubuntu                          Ubuntu is a Debian-based Linux operating sys…  8206  [OK]
dorowu/ubuntu-desktop-lxde-vnc  Ubuntu with openssh-server and NoVNC            210             [OK]
rastasheep/ubuntu-sshd          Dockerized SSH service, built on top of offi…  167              [OK]

IMAGE=apache
docker search $IMAGE --filter stars=50 # search images that have 50 or more stars
docker search $IMAGE --limit 10        # display top 10 images
</source>
List all available tags
<source lang="bash">
IMAGE=nginx
wget -q https://registry.hub.docker.com/v1/repositories/${IMAGE}/tags -O - | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n'  | awk -F: '{print $3}' | sort -V
</source>
 
== Pull images ==
<source lang="bash">
                <name>:<tag>
docker pull hello-world:latest # pull latest
docker pull --all-tags hello-world # pull all tags
docker pull --disable-content-trust hello-world # disable verification

docker images --digests #displays the sha256: digest of an image

# Dangling images - transitional images
</source>

=== [https://docs.aws.amazon.com/AmazonECR/latest/userguide/registries.html#registry_auth from Amazon ECR] ===
;Docker login to ECR service using IAM
<code>docker login</code> does not support native IAM authentication methods. Therefore use one of the commands below, which retrieve, decode and convert the <code>authorization IAM token</code> into a pre-generated <code>docker login</code> command. The produced login credentials assume your current IAM User/Role permissions. If your current IAM user can only pull from ECR, then after logging in with <code>docker login</code> you still won't be able to push an image to the registry. An example error you may get is <code>not authorized to perform: ecr:InitiateLayerUpload</code>

Login to the ECR service; your IAM user requires the relevant pull/push permissions
<source lang="bash">
eval $(aws ecr get-login --region eu-west-1 --no-include-email)
    # aws ecr get-login # generates the docker command below with the login token
    # docker login -u AWS -p **token** https://$ACCOUNT.dkr.ecr.us-east-1.amazonaws.com # <- output
</source>

Docker login to a single ECR repository, min awscli v1.17
<source lang="bash">
ACCOUNT=111111111111
REPOSITORY=myrepo
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin $ACCOUNT.dkr.ecr.eu-west-1.amazonaws.com/$REPOSITORY
</source>
=== [https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html push to Amazon ECR] ===
<source lang=bash>
# List images
$ docker images
REPOSITORY                                                TAG   IMAGE ID     CREATED        SIZE
ansible-aws                                               2.0.1 b09807c20c96 5 minutes ago  570MB
111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws  1.0.0 9bf35fe9cc0e 4 weeks ago    515MB

# Tag an image 'b09807c20c96'
docker tag b09807c20c96 111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws:2.0.1

# List images, to verify your newly tagged one
$ docker images
REPOSITORY                                                TAG   IMAGE ID     CREATED        SIZE
111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws  2.0.1 b09807c20c96 6 minutes ago  570MB # <- new tagged image
ansible-aws                                               2.0.1 b09807c20c96 6 minutes ago  570MB
111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws  1.0.0 9bf35fe9cc0e 4 weeks ago    515MB

# Push an image to ECR
docker push 111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws:2.0.1
The push refers to repository [111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws]
2c405c66e675: Pushed
...
77cae8ab23bf: Layer already exists
2.0.1: digest: sha256:111111111193969807708e1f6aea2b19a08054f418b07cf64016a6d1111111111 size: 1796
</source>


 
== Save and import image ==
In order to move an image to another filesystem we can save it into a <code>.tar</code>
<source lang="bash">
# Export
docker image save myrepo/centos:v2 > mycentos.v2.tar
tar -tvf mycentos.v2.tar

# Import
docker image import mycentos.v2.tar <new_image_name>

# Load from a stream
docker load < mycentos.v2.tar #or --input mycentos.v2.tar to avoid redirections
</source>


== Export aka commit container into image ==
Let's say we want to modify the stock image centos:6 by installing Apache interactively, set it to autostart, then export it as a new image. Let's do it!
<source lang="bash">
docker pull centos:6
docker container run -it --name apache-centos6 centos:6
# Interactively do: yum -y update; yum install -y httpd; chkconfig httpd on; exit

# Save container changes - option 1
docker commit -m "added httpd daemon" -a "Piotr" b237d65fd197 newcentos:withapache #creates a new image from a container's changes
docker commit -m "added httpd daemon" -a "Piotr" <container_name> <repo>/<image>:<tag>
# -a :- author

# Save container changes - option 2
docker container export apache-centos6 > apache-centos6.tar
docker image    import apache-centos6.tar newcentos:withapache

docker images
REPOSITORY    TAG          IMAGE ID      CREATED             SIZE
newcentos     withapache   ea5215fb46ed  50 seconds ago      272MB

docker image history newcentos:withapache
IMAGE        CREATED        CREATED BY  SIZE   COMMENT
ea5215fb46ed 2 minutes ago              272MB  Imported from -
</source>


I am unsure what the difference is when creating images from a container between:
* <code>docker container commit</code>
* <code>docker container export</code> - this seems to create a smaller image

== Tag images ==
Tags are usually used to give an official image a new name that we are planning to modify. This allows you to create a new image, run a new container from the tag, and delete the original image without affecting the new image or containers started from the newly tagged image.
<source lang="bash">
docker image tag #long version
docker tag centos:6 myucentos:v1 #this will create a duplicate of centos:6 named myucentos:v1
</source>

Tagging allows you to modify the repository name and manages references to images located on a filesystem.


== History of an image ==
We can display the history of layers that created the image, showing interim images in build/creation order. It shows only layers created on the local filesystem.
<source lang="bash">
docker image history myrepo/centos:v2
</source>

== Stop and delete all containers ==
<source lang="bash">
docker stop $(docker ps -aq) && docker rm $(docker ps -aq)
</source>
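An equivalent one-liner, added here for convenience; it assumes GNU xargs (for <code>-r</code>) and force-removes running containers in a single pass:
<source lang="bash">
docker ps -aq | xargs -r docker rm -f
</source>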


== Delete image ==
<source lang="bash">
$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
company-repo        0.1.0               f796d7f843cc        About an hour ago   888MB
<none>              <none>              04fbac2fdf48        3 hours ago         565MB
ubuntu              16.04               7aa3602ab41e        3 weeks ago         115MB

# Delete image
$ docker rmi company-repo:0.1.0
Untagged: company-repo:0.1.0
Deleted: sha256:e5cca6a080a5c65eacff98e1b17eeb7be02651849b431b46b074899c088bd42a
..
Deleted: sha256:bc7cda232a2319578324aae620c4537938743e46081955c4dd0743a89e9e8183

# Prune images - delete dangling (temp/interim) images.
# These are not associated with an end-product image or containers.
docker image prune
docker image prune -a #remove all images not associated with any container
</source>


== Cleaning up space by removing docker objects ==
This applies to both docker container and swarm systems.
<source lang="bash">
docker system df    #show disk usage
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              1                   0                   131.7MB             131.7MB (100%)
Containers          0                   0                   0B                  0B
Local Volumes       0                   0                   0B                  0B
Build Cache         0                   0                   0B                  0B

docker network ls #note all networks below are system created, so they won't get removed
NETWORK ID          NAME                DRIVER              SCOPE
452b1c428209        bridge              bridge              local
528db1bf80f1        docker_gwbridge     bridge              local
832c8c6d73a5        host                host                local
t8jxy5vsy5on        ingress             overlay             swarm
815a9c2c4005        none                null                local

docker system prune #removes objects created by a user only, on the current node only
                    #add --volumes to remove them as well
WARNING! This will remove:
        - all stopped containers
        - all networks not used by at least one container
        - all dangling images
        - all dangling build cache
Are you sure you want to continue? [y/N]

docker system prune -a --volumes #remove all
</source>


== Docker Volumes ==
Docker's 'copy-on-write' philosophy drives both performance and efficiency. Only the top layer is writable and it is a delta of the underlying layers.

Volumes can be mounted into your container instances from the underlying host system.

''_data'' volumes, since they are not controlled by the storage driver (they represent a file/directory on the host filesystem under /var/lib/docker), are able to bypass the storage driver. As a result, their contents are not affected when a container is removed.

Volumes are data mounts created on the host in the <code>/var/lib/docker/volumes/</code> directory and referred to by name in a Dockerfile.
<source lang="bash">
docker volume ls                   #list volumes created by the VOLUME directive in a Dockerfile
sudo tree /var/lib/docker/volumes/ #list volumes on the host side
docker volume create  my-vol-1
docker volume inspect my-vol-1
[
    {
        "CreatedAt": "2019-01-17T08:47:01Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-vol-1/_data",
        "Name": "my-vol-1",
        "Options": {},
        "Scope": "local"
    }
]
</source>




Using volumes with Swarm services
<source lang=bash>
docker container run  --name web1 -p 80:80 --mount source=my-vol-1,target=/internal-mount --replicas 3 httpd #container
docker service create --name web1 -p 80:80 --mount source=my-vol-1,target=/internal-mount --replicas 3 httpd #swarm service
# --volume|-v is not supported with services, use --mount; this will replicate the volume across the swarm when needed,
# but it will not replicate files

docker exec -it web1 /bin/bash #connect to the container
root@c123:/ echo "Created when connected to container: volume-web1" > /internal-mount/local.txt; exit

# prove the file is in the volume created on the host filesystem
user@dockerhost$ cat /var/lib/docker/volumes/my-vol-1/_data/local.txt
</source>




;Host storage mount
Bind mapping binds a host filesystem directory to a container directory. It is not mounting a volume, which would require a mount point and a volume on the host.
<source lang=bash>
mkdir /home/user/web1
echo "web1 index" > /home/user/web1/index.html
docker container run -d --name testweb -p 80:80 --mount type=bind,source=/home/user/web1,target=/usr/local/apache2/htdocs httpd
curl http://localhost:80
</source>

Removing a service is not going to remove the volume unless you delete the volume itself. In that case it will be removed from all swarm nodes.
 
        "IPAM": {
== Networking ==
            "Driver": "default",
=== Container Network Model ===
            "Options": null,
It's a concept of network implementation that is built on multiple private networks across multiple hosts overlayed and managed by IPAM. Protocol that keeps track and provision addesses.
            "Config": [
 
                {
Main 3 components:
                    "Subnet": "192.168.1.0/24",
* sandbox -  contains the configuration of a container's network stack, incl. management of interfaces, routing and DNS. An implementation of a Sandbox could be a eg. Linux Network Namespace. A Sandbox may contain many endpoints from multiple networks.
                    "Gateway": "192.168.1.1"
* endpoint - joins a Sandbox to a Network. Interfaces, switches, ports, etc and belong to only one network at the time. The Endpoint construct exists so the actual connection to the network can be abstracted away from the application. This helps maintain portability.
                }
* network - a clollection of endpoints that can communicate directly (bridges, VLANs, etc.) and can consist of 1toN endpoints
            ]
 
        },
 
        "Internal": false,
[[File:Container-network-model.png|none||left|Container Network Model]]
        "Attachable": false,
 
        "Ingress": false,
;IPAM (Internet Protocol Address Management)
        "ConfigFrom": {
Managing addesees across multiple hosts on a separate physical networks while providing routing to the underlaying swarm networks externally is ''the IPAM prblem'' for Docker. Depends on the netwok driver choice, IPAM is handled at different layers in the stack. ''Network drivers'' enable IPAM through ''DHCP drivers'' or plugin drivers so the complex implementation that would be normally overlapping addesses is supported.
            "Network": ""
 
        },
;References
        "ConfigOnly": false,
* [https://success.docker.com/article/networking Docker Reference Architecture: Designing Scalable, Portable Docker Container Networks]
        "Containers": null,
 
        "Options": {
=== Publish exposed container/service ports ===
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
;Publishing modes
        },
;host: set using <code> --publish mode=host,8080:80</code>, makes ports available only on the undelaying host system not outside the host the service may exist; defits ''routing mesh'' so user is responsible for routing
        "Labels": null
;ingress: responsible for ''routing mesh'' makes sure all published ports are avaialble on all hosts in the swarm cluster regardless is a service replica running on it or not
    }
]
</source>
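A small sketch of the two publishing modes using the long <code>--publish</code> syntax of <code>docker service create</code>; the service names here are illustrative additions, not from the original notes:
<source lang="bash">
# ingress (default) mode - the routing mesh publishes port 8080 on every node in the swarm
docker service create --name web-ingress --publish published=8080,target=80 --replicas 2 httpd
# host mode - port 8081 is bound only on the nodes that actually run a task of this service
docker service create --name web-host --publish mode=host,published=8081,target=80 --replicas 2 httpd
</source>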




<source lang=bash>
# List exposed ports on a container
docker port CONTAINER [PRIVATE_PORT[/PROTOCOL]]
docker port web2
80/tcp -> 0.0.0.0:81

# Publish port
                                          host  :  container
                                             \  :  /
docker container run -d --name web1 --publish 81:80 httpd
# --publish | -p :- publish an exposed container port on the host
# 81             :- port on the host, can use a range eg. 81-85, a port will be picked based on availability
# 80             :- exposed port on the container

ss -lnt
State      Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN     0      100        127.0.0.1:25              *:*
LISTEN     0      128                *:22              *:*
LISTEN     0      100              ::1:25             :::*
LISTEN     0      128               :::81             :::*
LISTEN     0      128               :::22             :::*

docker container run -d --name web1 --publish-all httpd
# --publish-all | -P publish all container exposed ports to random ports above >32768
CONTAINER ID IMAGE COMMAND              CREATED STATUS PORTS                   NAMES
c63efe9cbb94 httpd "httpd-foreground"  2 sec.. Up 1 s 80/tcp                  testweb  #port exposed but not published
cb0711134eb5 httpd "httpd-foreground"  4 sec.. Up 2 s 0.0.0.0:32769->80/tcp   testweb1 #port exposed and published to host:32769
</source>


=== Network drivers ===
The default network for a single docker-host is the ''bridge'' network.

;List of Native (part of Docker Engine) Network Drivers:
;bridge: default on stand-alone hosts; it's a private network internal to the host system; all containers on this host using the bridge network can communicate; external access is granted by port exposure or static routes added with the host as the gateway for that network
;none: used when a container does not need any networking; it can still be accessed from the host using the <code>docker attach [containerID]</code> or <code>docker exec -it [containerID]</code> commands
;host: aka ''Host Only Networking''; only accessible via the underlying host; access to services can be provided by exposing ports to the host system
;overlay: swarm scope driver; allows communication between all Docker daemons in a cluster, self-extending if needed, managed by the Swarm manager; it's the default mode of Swarm communication
;ingress: network extended across all nodes in the cluster; a special overlay network that load balances network traffic amongst a given service's working nodes; maintains a list of all IP addresses from nodes that participate in that service (using the IPVS module) and, when a request comes in, routes it to one of them for the indicated service; provides the ''routing mesh'' that allows services to be exposed to the external network without having a replica running on every node in the Swarm
;docker gateway bridge: special bridge network that allows overlay networks (incl. ingress) to access an individual Docker daemon's physical network; every container run within a service is connected to the local Docker daemon's host network; automatically created when Docker is initialised or joins a swarm via the <code>docker swarm init</code> or <code>docker swarm join</code> commands.

;Docker interfaces
* <code>docker0</code> - adapter installed by default during Docker setup; it is assigned an address range that determines the local host IPs available to the containers running on it

;Create bridge network
<source lang=bash>
docker network ls #default networks list
NETWORK ID    NAME                DRIVER   SCOPE
130833da0920  bridge              bridge   local
528db1bf80f1  docker_gwbridge     bridge   local
832c8c6d73a5  host                host     local
t8jxy5vsy5on  ingress             overlay  swarm  #'ingress' special network, 1 per cluster
815a9c2c4005  none                null     local

docker network inspect bridge #bridge is the default network containers are deployed to

docker container run -d --name web1 -p 8080:80 httpd #expose container port :80 -> :8080 on the docker-host
docker container inspect web1 | grep IPAdd
docker container inspect --format="{{.NetworkSettings.Networks.bridge.IPAddress}}" web1 #get container ip
curl http://$(IPAddr)
</source>


;Create bridge network
<source lang=bash>
docker network create --driver=bridge --subnet=192.168.1.0/24 --opt "com.docker.network.driver.mtu"=1501 deviceeth0

docker network ls
docker network inspect deviceeth0
</source>
= Squash an image =
Docker uses <code>Union</code> filesystem that allows multiple volumes (layers) to share common and override changes by applying them on top layer.
There is no official way to ''flatten'' layers to a single storage layer or minimize an image size (2017). Below it's just practical approach.
# Start container from an image
# Export a container to <code>.tar</code> with all it's file systems
# Import container with new image name




Once the process completes and original image gets deleted the new image <code>docker image history</code> command will show only one layer. Often the image will be smaller.
;Create overlay network
<source lang=bash>
<source lang=bash>
# run a container from an image
docker network create --driver=overlay --subnet=192.168.1.0/24 --gateway=192.168.1.1 overlay0
docker run myweb:v3
docker network ls
# export container to .tar
NETWORK ID          NAME                DRIVER              SCOPE
docker export <contr_name> > myweb.v3.tar
130833da0920        bridge              bridge              local
docker save  <image id>  > image.tar #not verified command
528db1bf80f1        docker_gwbridge    bridge              local
docker import myweb.v3.tar  myweb:v4
832c8c6d73a5        host                host                local
docker load  myweb.v3.tar             #not verified command 
t8jxy5vsy5on        ingress            overlay            swarm
815a9c2c4005        none                null                local
2x6bq1czzdc1        overlay0            overlay             swarm
</source>
</source>
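As a compact variant of the steps above (a sketch; the container name and image tag are placeholders), the export can be piped straight into the import. Note that <code>import</code> keeps only the filesystem, so image metadata such as CMD/ENTRYPOINT and exposed ports has to be set again:
<source lang=bash>
docker export myweb_container | docker import - myweb:v4
docker image history myweb:v4   # shows a single layer
</source>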


;Resources
*[https://github.com/jwilder/docker-squash docker-squash] GitHub
= Gracefully stop / kill a container =
''all below are only notes''


Trap ctrl_c then kill/rm container.
;Inspect network
*--init
<source lang=json>
*--sig-proxy this only works when --tty=false but by default is true
docker network inspect overlay0
 
[
= Proxy =
    {
If you are behind a corporate proxy, you should use the Docker client <code>~/.docker/config.json</code> config file. It requires Docker
        "Name": "overlay0",
17.07 or newer.
        "Id": "2x6bq1czzdc102sl6ge7gpm3w",
<source>
        "Created": "2019-01-19T11:24:02.146339562Z",
{
        "Scope": "swarm",
"proxies":
        "Driver": "overlay",
{
        "EnableIPv6": false,
  "default":
        "IPAM": {
  {
            "Driver": "default",
    "httpProxy": "http://10.0.0.1:3128",
            "Options": null,
    "httpsProxy": "http://10.0.0.1:3128",
            "Config": [
    "noProxy": "localhost,127.0.0.1,*.test.example.com,.example2.com"
                {
  }
                    "Subnet": "192.168.1.0/24",
}
                    "Gateway": "192.168.1.1"
}
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": null
    }
]
</source>
</source>
More details can be found [https://docs.docker.com/network/proxy/#configure-the-docker-client here].
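To confirm the client config is picked up, the proxy values should appear as environment variables inside newly started containers (a quick sketch; assumes the busybox image can be pulled):
<source lang=bash>
# the CLI injects HTTP_PROXY/HTTPS_PROXY/NO_PROXY from ~/.docker/config.json
docker run --rm busybox env | grep -i proxy
</source>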


== Insecure proxy ==
 
These settings can be added in different places; the order below reflects the latest practices and versioning.
;Inspect container network
;docker-ce 18.6
<source lang=bash>
<source lang="json">
docker container inspect testweb --format {{.HostConfig.NetworkMode}}
{
overlay0
    "insecure-registries" : [ "localhost:443","10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16" ]
docker container inspect testweb --format {{.NetworkSettings.Networks.dev_bridge.IPAddress}}
}
192.168.1.3
</source>
</source>
<source lang="bash">
 
sudo systemctl daemon-reload
 
sudo systemctl restart docker
Connecting to or disconnecting from a network can be done while a container is running. Connecting does not disconnect the container from its current network.
sudo systemctl show docker | grep Env
<source lang=bash>
docker info #check Insecure Registries
docker network connect --ip=192.168.1.10 deviceeth0 web1
docker container inspect --format="{{.NetworkSettings.Networks.bridge.IPAddress}}" web1
docker container inspect --format="{{.NetworkSettings.Networks.deviceeth0.IPAddress}}" web1
curl http://$(docker container inspect --format='{{.NetworkSettings.Networks.deviceeth0.IPAddress}}' web1)
docker network disconnect deviceeth0 web1
</source>
</source>
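To confirm which networks a container is currently attached to (a sketch; <code>jq</code> is only used for readability):
<source lang=bash>
docker container inspect web1 --format '{{json .NetworkSettings.Networks}}' | jq keys
</source>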


;Using environment file, prior version 18
=== Overlay network in Swarm cluster ===
An overlay network can be created/removed/updated like any other Docker object. It allows inter-service (container) communication, where the <code>--gateway</code> IP address is used to reach the outside, e.g. the Internet or the host network. When the <code>overlay</code> network is created on the manager host, it only gets recreated on worker nodes once it is referenced by a service that uses it. See below.
<source lang="bash">
<source lang="bash">
$ sudo vi /etc/default/docker
swarm-mgr$ docker network create --driver=overlay --subnet=192.168.1.0/24 --gateway=192.168.1.1 overlay0
DOCKER_HOME='--graph=/tank/docker'
swarm-mgr$ docker service create --name web1 -p 8080:80 --network=overlay0 --replicas 2 httpd
DOCKER_GROUP='--group=docker'
uvxymzdkcfwvs2oznbnk7nv03
DOCKER_LOG_DRIVER='--log-driver=json-file'
overall progress: 2 out of 2 tasks
DOCKER_STORAGE_DRIVER='--storage-driver=btrfs'
1/2: running  [==================================================>]
DOCKER_ICC='--icc=false'
2/2: running  [==================================================>]
DOCKER_IPMASQ='--ip-masq=true'
 
DOCKER_IPTABLES='--iptables=true'
swarm-wkr$ docker network ls
DOCKER_IPFORWARD='--ip-forward=true'
NETWORK ID          NAME                DRIVER              SCOPE
DOCKER_ADDRESSES='--host=unix:///var/run/docker.sock'
ba175ebd2a6f        bridge              bridge              local
DOCKER_INSECURE_REGISTRIES='--insecure-registry 10.0.0.0/8 --insecure-registry 172.16.0.0/12 --insecure-registry 192.168.0.0/16'
a5848f607d8c        docker_gwbridge    bridge              local
DOCKER_OPTS="${DOCKER_HOME} ${DOCKER_GROUP} ${DOCKER_LOG_DRIVER} ${DOCKER_STORAGE_DRIVER} ${DOCKER_ICC} ${DOCKER_IPMASQ} ${DOCKER_IPTABLES} ${DOCKER_IPFORWARD} ${DOCKER_ADDRESSES} ${DOCKER_INSECURE_REGISTRIES}"
fccfb9c1fdc3        host                host               local
t8jxy5vsy5on        ingress            overlay            swarm
127b10783faa        none                null                local
2x6bq1czzdc1        overlay0            overlay            swarm


$ sudo vi /etc/systemd/system/docker.service.d/docker.conf
# remove network; this only affects newly created services, not the running ones
[Service]
swarm-mgr$ docker service update --network-rm=overlay0 web1
EnvironmentFile=-/etc/default/docker
</source>
ExecStart=/usr/bin/dockerd $DOCKER_HOME $DOCKER_GROUP $DOCKER_LOG_DRIVER $DOCKER_STORAGE_DRIVER $DOCKER_ICC $DOCKER_IPMASQ $DOCKER_IPTABLES $DOCKER_IPFORWARD $DOCKER_ADDRESSES $DOCKER_INSECURE_REGISTRIES


$ sudo vi /etc/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target docker.socket firewalld.service
Wants=network-online.target
Requires=docker.socket


[Service]
=== DNS ===
EnvironmentFile=-/etc/default/docker
<source lang="bash">
Type=notify
docker container run -d --name testweb1 -P --dns=8.8.8.8 \
# the default is not to use systemd for cgroups because the delegate issues still
                                          --dns=8.8.4.4 \
# exists and systemd currently does not support the cgroup feature set required
                                          --dns-search "mydomain.local" \
# for containers run by docker
                                          httpd
ExecStart=/usr/bin/dockerd -H fd://
# -P :- publish-all exposed ports to random port >32768
ExecReload=/bin/kill -s HUP $MAINPID
 
LimitNOFILE=1048576
docker container exec -it testweb1 /bin/bash -c 'cat /etc/resolv.conf'
# Having non-zero Limit*s causes performance problems due to accounting overhead
search us-east-2.compute.internal
# in the kernel. We recommend using cgroups to do container-local accounting.
nameserver 8.8.8.8
LimitNPROC=infinity
nameserver 8.8.4.4
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s


[Install]
# System wide settings, requires docker.service restart
WantedBy=multi-user.target
cat > /etc/docker/daemon.json <<EOF
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}
EOF
sudo systemctl restart docker.service #required
</source>
</source>
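After the daemon-wide change and restart, new containers should pick the servers up automatically; a quick check (a sketch, assuming the busybox image is available):
<source lang=bash>
docker container run --rm busybox cat /etc/resolv.conf   # should list 8.8.8.8 and 8.8.4.4
</source>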


== Run docker without sudo ==
= Examples =
Adding a user to the docker group should be sufficient. However, on AppArmor, SELinux or a filesystem with ACLs enabled, additional permissions might be required to access the <tt>socket file</tt>.
== Lint - best practices ==
<source>
<source lang=bash>
$ ll /var/run/docker.sock
$ docker run --rm -i hadolint/hadolint < Dockerfile
srw-rw---- 1 root docker 0 Sep  6 12:31 /var/run/docker.sock=
/dev/stdin:9:16 unexpected newline expecting "\ ", '=', a space followed by the value for the variable 'MAC_ADDRESS', or at least one space
# ACL
</source>
$ sudo getfacl /var/run/docker.sock
getfacl: Removing leading '/' from absolute path names
# file: var/run/docker.sock
# owner: root
# group: docker
user::rw-
group::rw-
other::---


# Grant ACL to jenkins user
== Default project ==
$ sudo setfacl -m user:username:rw /var/run/docker.sock
As good practice, all Docker files should be source controlled. A basic, self-explanatory structure can look like the one below, and the skeleton can be created with the command below:
<source lang="bash">
mkdir APROJECT && d=$_; touch $d/{build.sh,run.sh,Dockerfile,README.md,VERSION};mkdir $d/assets; touch $_/{entrypoint.sh,install.sh}


$ sudo getfacl /var/run/docker.sock
└── APROJECT
getfacl: Removing leading '/' from absolute path names
    ├── assets
# file: var/run/docker.sock
    │   ├── entrypoint.sh
# owner: root
    │   └── install.sh
# group: docker
    ├── build.sh
user::rw-
    ├── Dockerfile
user:jenkins:rw-
    ├── README.md
group::rw-
    └── VERSION
mask::rw-
other::---
</source>
</source>
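Note that group membership changes only apply to new login sessions; to pick them up without re-logging (a sketch):
<source lang=bash>
newgrp docker   # start a shell with the new group applied
docker info     # should now work without sudo
</source>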
;References
* [https://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo how-can-i-use-docker-without-sudo]


== References ==
== Dockerfile ==
*[https://www.weave.works/blog/my-container-wont-stop-on-ctrl-c-and-other-minor-tragedies/ my-container-wont-stop-on-ctrl-c-and-other-minor-tragedies]
A <code>Dockerfile</code> is simply a build file.
*[https://github.com/moby/moby/pull/12228 PID1 in container aka tinit]
=== Semantics ===
*[https://container-solutions.com/understanding-volumes-docker/ understanding-volumes-docker]
;<code>entrypoint</code>: Container config: what to start when this image is run.
 
= Docker Swarm =
== Swarm - sizing ==
;Universal Control Plane (UCP)
This is only available in the Enterprise Edition
* port requirements for managers and workers (inbound/outbound)
 
Hardware requirements:
* 8 GB RAM for managers or DTR (Docker Trusted Registry) nodes
* 4 GB RAM for workers
* 3 GB of free disk space


;<code>entrypoint</code> and <code>cmd</code>: Docker allows you to define an Entrypoint and Cmd which you can mix and match in a Dockerfile. Entrypoint is the executable, and Cmd are the arguments passed to the Entrypoint. The Dockerfile schema is quite lenient and allows users to set Cmd without Entrypoint, which means that the first argument in Cmd will be the executable to run.
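A minimal sketch of the interplay: ENTRYPOINT fixes the executable while CMD only supplies default arguments that <code>docker run</code> can override:
<source>
FROM busybox
ENTRYPOINT ["ping", "-c", "3"]
CMD ["localhost"]
# docker run <image>          -> ping -c 3 localhost
# docker run <image> 8.8.8.8  -> ping -c 3 8.8.8.8
</source>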


Performance Consideration (Timing)
<source>
Component                               Timeout(ms)  Configurable
Raft consensus between manager nodes    3000         no
Gossip protocol for overlay networking  5000         no
etcd                                    500          yes
RethinkDB                               10000        no
Stand-alone swarm                       90000        no
</source>

=== User management ===
<source>
RUN addgroup --gid 1001 jenkins -q
RUN adduser --gid 1001 --home /tank --disabled-password --gecos '' --uid 1001 jenkins
# --gid add user to group 1001
# --gecos parameter is used to set the additional information. In this case it is just empty.
# --disabled-password it's like  --disabled-login,  but  logins  are still possible (for example using SSH RSA keys) but not using password authentication
USER jenkins:jenkins #sets user for next RUN, CMD and ENTRYPOINT command
WORKDIR /tank #changes cwd for next RUN, CMD, ENTRYPOINT, COPY and ADD
</source>


=== Multiple stage build ===
Introduced in Docker 17.06, multi-stage builds allow the use of multiple <code>FROM</code> statements in a single Dockerfile.
<source>
FROM microsoft/aspnetcore-build AS build-env
WORKDIR /app


Compatibility Docker EE
# copy csproj and restore as distinct layers
* Docker Engine 17.06+
COPY *.csproj ./
* DTR 2.3+
RUN dotnet restore
* UCP 2.2+
== Swarm with single host manager ==
<source lang=bash>
# Initialise Swarm
docker swarm init --advertise-addr 172.31.16.10 #you get a SWMTKN join token
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo 172.31.16.10:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.


# Join tokens
# copy everything else and build
docker swarm join-token manager #display manager join-token, run on manager
COPY . ./
docker swarm join-token worker  #display worker  join-token, run on manager
RUN dotnet publish -c Release -o output


# Join worker, run new-worker-node
# build runtime image
#                                 ->           swarm cluster id                    <-> this part is mgr/wkr <- -> mgr node <-
FROM microsoft/aspnetcore
docker swarm join --token SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo 172.31.16.10:2377
WORKDIR /app
COPY --from=build-env /app/output .  #multi stage: copy files from previous container [as build-env]
ENTRYPOINT ["dotnet", "LetsKube.dll"]
</source>
 
= Squash an image =
Docker uses a <code>Union</code> filesystem that allows multiple volumes (layers) to share common content and apply changes as overrides in the top layer.
There is no official way to ''flatten'' layers into a single storage layer or to minimise an image size (2017). Below is just a practical approach.
# Start container from an image
# Export a container to <code>.tar</code> with all it's file systems
# Import container with new image name


# Join another manager, run on new-manager-node
docker swarm join-token manager #run on the primary manager if you wish to add another manager
# the output gives you a token. Notice that the 1st part, up to the dash, identifies the Swarm cluster and the remainder identifies the role.


# join to swarm (cluster), token will identify a role in the cluster manager or worker
Once the process completes and the original image gets deleted, running <code>docker image history</code> against the new image will show only one layer. Often the image will also be smaller.
docker swarm join --token SWMTKN-xxxx
<source lang=bash>
docker swarm join --token SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo 172.31.16.10:2377
# run a container from an image
This node joined a swarm as a worker.
docker run myweb:v3
# export container to .tar
docker export <contr_name> > myweb.v3.tar
docker save  <image id>  > image.tar #not verified command
docker import myweb.v3.tar  myweb:v4
docker load -i image.tar              #not verified command
</source>
</source>


;Resources
*[https://github.com/jwilder/docker-squash docker-squash] GitHub


Check Swarm status
= Gracefully stop / kill a container =
<source lang=bash>
''all below are only notes''
docker node ls
[cloud_user@ip-172-31-16-10 swarm-manager]$ docker node ls
ID                            HOSTNAME                          STATUS  AVAILABILITY MANAGER STATUS ENGINE VERSION
641bfndn49b1i1dj17s8cirgw *  ip-172-31-16-10.mylabserver.com  Ready    Active      Leader        18.09.0
vlw7te728z7bvd7ulb3hn08am    ip-172-31-16-94.mylabserver.com  Ready    Active                      18.09.0


docker system info | grep -A 7 Swarm
Trap ctrl_c then kill/rm container.
Swarm: active
*--init
NodeID: 641bfndn49b1i1dj17s8cirgw
*--sig-proxy this only works when --tty=false but by default is true
Is Manager: true
ClusterID: 4jqxdmfd0w5pc4if4fskgd5nq
Managers: 1
Nodes: 2
Default Address Pool: 10.0.0.0/8 
SubnetSize: 24
</source>


;Troubleshooting
= Proxy =
<source lang=bash>
If you are behind a corporate proxy, you should use the Docker client <code>~/.docker/config.json</code> config file. It requires Docker
sudo systemctl disable firewalld && sudo systemctl stop firewalld # CentOS
17.07 or newer.
sudo -i; printf "\n10.0.0.11 mgr01\n10.0.0.12 node01\n" >> /etc/hosts # Add nodes to hosts file; exit
<source>
{
"proxies":
{
  "default":
  {
    "httpProxy": "http://10.0.0.1:3128",
    "httpsProxy": "http://10.0.0.1:3128",
    "noProxy": "localhost,127.0.0.1,*.test.example.com,.example2.com"
  }
}
}
</source>
</source>
== Swarm cluster ==
More details can be found [https://docs.docker.com/network/proxy/#configure-the-docker-client here].
<source lang=bash>
docker node update --availability drain [node] #drain services from a node, e.g. for manager-only nodes
docker service update --force [service_name] #force re-balance services across cluster


docker swarm leave #node leaves a cluster
== Insecure proxy ==
These settings can be added in different places; the order below reflects the latest practices and versioning.
;docker-ce 18.6
<source lang="json">
{
    "insecure-registries" : [ "localhost:443","10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16" ]
}
</source>
</source>
 
<source lang="bash">
== Locking / unlocking swarm cluster ==
sudo systemctl daemon-reload
Raft logs used by the Swarm manager are encrypted on disk. Access to the nodes gives access to the keys that encrypt them. Auto-lock further protects the cluster, as it requires an unlock key when a manager node restarts.
sudo systemctl restart docker
<source lang=bash>
sudo systemctl show docker | grep Env
docker swarm init  --auto-lock=true #initialize with
docker info #check Insecure Registries
docker swarm update --auto-lock=true #update current swarm
# both will produce unlock token STKxxx
docker swarm unlock #it'll ask for the unlock token
docker swarm update --auto-lock=false #disable key locking
</source>
</source>


;Using environment file, prior version 18
<source lang="bash">
$ sudo vi /etc/default/docker
DOCKER_HOME='--graph=/tank/docker'
DOCKER_GROUP='--group=docker'
DOCKER_LOG_DRIVER='--log-driver=json-file'
DOCKER_STORAGE_DRIVER='--storage-driver=btrfs'
DOCKER_ICC='--icc=false'
DOCKER_IPMASQ='--ip-masq=true'
DOCKER_IPTABLES='--iptables=true'
DOCKER_IPFORWARD='--ip-forward=true'
DOCKER_ADDRESSES='--host=unix:///var/run/docker.sock'
DOCKER_INSECURE_REGISTRIES='--insecure-registry 10.0.0.0/8 --insecure-registry 172.16.0.0/12 --insecure-registry 192.168.0.0/16'
DOCKER_OPTS="${DOCKER_HOME} ${DOCKER_GROUP} ${DOCKER_LOG_DRIVER} ${DOCKER_STORAGE_DRIVER} ${DOCKER_ICC} ${DOCKER_IPMASQ} ${DOCKER_IPTABLES} ${DOCKER_IPFORWARD} ${DOCKER_ADDRESSES} ${DOCKER_INSECURE_REGISTRIES}"


If you have access to a manager you can always get unlock key using:
$ sudo vi /etc/systemd/system/docker.service.d/docker.conf
<source lang=bash>
[Service]
docker swarm unlock-key
EnvironmentFile=-/etc/default/docker
</source>
ExecStart=/usr/bin/dockerd $DOCKER_HOME $DOCKER_GROUP $DOCKER_LOG_DRIVER $DOCKER_STORAGE_DRIVER $DOCKER_ICC $DOCKER_IPMASQ $DOCKER_IPTABLES $DOCKER_IPFORWARD $DOCKER_ADDRESSES $DOCKER_INSECURE_REGISTRIES


Key management
$ sudo vi /etc/systemd/system/docker.service
<source lang=bash>
[Unit]
docker swarm unlock-key --rotate #could be in a cron
Description=Docker Application Container Engine
</source>
Documentation=https://docs.docker.com
== Backup and restore ==
After=network-online.target docker.socket firewalld.service
This process describes how to back up the whole cluster configuration so that it can be restored on a new set of servers.
Wants=network-online.target
Requires=docker.socket


Create docker apps running across swarm
[Service]
<source lang=bash>
EnvironmentFile=-/etc/default/docker
docker service create --name bkweb --publish 80:80 --replicas 2 httpd
Type=notify
$ docker service ls
# the default is not to use systemd for cgroups because the delegate issues still
ID                  NAME                MODE                REPLICAS            IMAGE              PORTS
# exists and systemd currently does not support the cgroup feature set required
q9jki3n2hffm        bkweb              replicated          2/2                httpd:latest        *:80->80/tcp
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd://
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s


$ docker service ps bkweb #note containers run on 2 different nodes
[Install]
ID                  NAME                IMAGE              NODE                              DESIRED STATE      CURRENT STATE         
WantedBy=multi-user.target
j964jm1lq3q5        bkweb.1            httpd:latest        server2c.mylabserver.com  Running            Running about a minute ago           
jpjx3mk7hhm0        bkweb.2            httpd:latest        server1c.mylabserver.com  Running            Running about a minute ago
</source>
</source>


Backup state files
== Run docker without sudo ==
<source lang=bash>
Adding a user to the docker group should be sufficient. However, on AppArmor, SELinux or a filesystem with ACLs enabled, additional permissions might be required to access the <tt>socket file</tt>.
sudo -i
<source>
cd /var/lib/docker/swarm
$ ll /var/run/docker.sock
cat docker-state.json #contains info about managers, workers, certificates, etc..
srw-rw---- 1 root docker 0 Sep  6 12:31 /var/run/docker.sock=
cat state.json
# ACL
sudo systemctl stop docker
$ sudo getfacl /var/run/docker.sock
cp -r /var/lib/docker/swarm /root #backup state files
getfacl: Removing leading '/' from absolute path names
systemctl start docker #the running docker containers should be brought up as they were before stopping the service
# file: var/run/docker.sock
# owner: root
# group: docker
user::rw-
group::rw-
other::---


Backup Swarm, this file can be then used to recover whole swarm cluster on another set of servers
# Grant ACL to jenkins user
tar -cvf swarm.tar /root/swarm/
$ sudo setfacl -m user:username:rw /var/run/docker.sock
</source>


Recover using swarm.tar backup
$ sudo getfacl /var/run/docker.sock
<source lang=bash>
getfacl: Removing leading '/' from absolute path names
# scp swarm.tar to recovery node - what it'd be a node with just installed docker
# file: var/run/docker.sock
sudo rm -rf /var/lib/docker/swarm
# owner: root
sudo systemctl stop docker
# group: docker
sudo tar -xvf swarm.tar -C /var/lib/docker/swarm
user::rw-
sudo systemctl start docker
user:jenkins:rw-
docker swarm init --force-new-cluster # produces exactly the same token
group::rw-
mask::rw-
other::---
</source>
</source>
;References
* [https://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo how-can-i-use-docker-without-sudo]


== Run containers as a services ==
== References ==
A single Docker container has a number of limitations, so running it as a service, where a cluster manager (Swarm or Kubernetes) handles networking, access, load balancing, etc., is a way to scale with ease. The service uses e.g. mesh routing to deal with access to the containers.
*[https://www.weave.works/blog/my-container-wont-stop-on-ctrl-c-and-other-minor-tragedies/ my-container-wont-stop-on-ctrl-c-and-other-minor-tragedies]
*[https://github.com/moby/moby/pull/12228 PID1 in container aka tinit]
*[https://container-solutions.com/understanding-volumes-docker/ understanding-volumes-docker]


Swarm nodes setup 1 manager and 2 workers
= Docker Enterprise Edition =
<source lang=bash>
*[https://success.docker.com/article/compatibility-matrix Compatibility Matrix]
ID                            HOSTNAME                          STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
Components:
641bfndn49b1i1dj17s8cirgw *   swarm-mgr-1.example.com   Ready  Active Leader      18.09.1
* Docker daemon (fka "Engine")
vlw7te728z7bvd7ulb3hn08am    swarm-wkr-1.example.com  Ready  Active              18.09.1
* Docker Trusted Registry (DTR)
r8h7xmevue9v2mgysmld59py2    swarm-wkr-2.example.com  Ready  Active              18.09.0
* Docker Universal Control Plane (UCP)
</source>


= Docker Swarm =
== Swarm - sizing ==
;Universal Control Plane (UCP)
This is only available in the Enterprise Edition
* port requirements for managers and workers (inbound/outbound)


Create and run a service
Hardware requirements:
<source lang=bash>
* 8 GB RAM for managers or DTR (Docker Trusted Registry) nodes
docker pull httpd
* 4 GB RAM for workers
docker service create --name serviceweb --publish 80:80 httpd
* 3 GB of free disk space
# --publish|-p -expose a port on all containers in the running cluster


docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE              PORTS
vt0ftkifbd84        serviceweb          replicated          1/1                httpd:latest        *:80->80/tcp


docker service ps serviceweb #show nodes that a container is running on, here on mgr-1 node
Performance Consideration (Timing)
ID          NAME         IMAGE       NODE                    DESIRED STATE CURRENT STATE  ERROR  PORTS
<source>
e6rx3tzgp1e5 serviceweb.1 httpd:latest swarm-mgr-1.example.com Running      Running about                 
Component                              Timeout(ms)  Configurable
Raft consensus between manager nodes   3000        no
Gossip protocol for overlay networking 5000         no
etcd                                  500          yes
RethinkDB                              10000       no
Stand-alone swarm                     90000        no
</source>
</source>




When running as a service, even if a container runs on a single node (replicas=1), it can be accessed from any of the swarm nodes. This is because the service's published port is exposed on the ingress routing-mesh network that spans all nodes in the cluster.
Compatibility Docker EE
* Docker Engine 17.06+
* DTR 2.3+
* UCP 2.2+
== Swarm with single host manager ==
<source lang=bash>
<source lang=bash>
[user@swarm-mgr-1 ~]$ curl -k http://swarm-mgr-1.example.com
# Initialise Swarm
  <html><body><h1>It works!</h1></body></html>
docker swarm init --advertise-addr 172.31.16.10 #you get a SWMTKN join token
[user@swarm-mgr-1 ~]$ curl -k http://swarm-wkr-1.example.com
To add a worker to this swarm, run the following command:
  <html><body><h1>It works!</h1></body></html>
    docker swarm join --token SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo 172.31.16.10:2377
[user@swarm-mgr-1 ~]$ curl -k http://swarm-wkr-2.example.com
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
  <html><body><h1>It works!</h1></body></html>
 
# Join tokens
docker swarm join-token manager #display manager join-token, run on manager
docker swarm join-token worker  #display worker  join-token, run on manager
 
# Join worker, run new-worker-node
#                                ->           swarm cluster id                    <-> this part is mgr/wkr <- -> mgr node <-
docker swarm join --token SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo 172.31.16.10:2377
 
# Join another manager, run on new-manager-node
docker swarm join-token manager #run on the primary manager if you wish to add another manager
# the output gives you a token. Notice that the 1st part, up to the dash, identifies the Swarm cluster and the remainder identifies the role.
 
# join to swarm (cluster), token will identify a role in the cluster manager or worker
docker swarm join --token SWMTKN-xxxx
docker swarm join --token SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo 172.31.16.10:2377
This node joined a swarm as a worker.
</source>
</source>




Service update, can be done to limits, volumes, env-variables and more...
Check Swarm status
<source lang=bash>
<source lang=bash>
docker service scale devweb=3                #or
docker node ls
docker service update --replicas 3 serviceweb #--detach=false shows visual progress in older versions, default in v18.06
[cloud_user@ip-172-31-16-10 swarm-manager]$ docker node ls
serviceweb
ID                            HOSTNAME                          STATUS  AVAILABILITY MANAGER STATUS ENGINE VERSION
overall progress: 3 out of 3 tasks
641bfndn49b1i1dj17s8cirgw *   ip-172-31-16-10.mylabserver.com   Ready    Active      Leader        18.09.0
1/3: running   [==================================================>]
vlw7te728z7bvd7ulb3hn08am    ip-172-31-16-94.mylabserver.com   Ready    Active                      18.09.0
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged


# Limits(soft limit) and reservations(hard limit), this causes to start new services(containers)
docker system info | grep -A 7 Swarm
docker service update --limit-cpu=.5 --reserve-cpu=.75 --limit-memory=128m --reserve-memory=256m serviceweb
Swarm: active
NodeID: 641bfndn49b1i1dj17s8cirgw
Is Manager: true
ClusterID: 4jqxdmfd0w5pc4if4fskgd5nq
Managers: 1
Nodes: 2
Default Address Pool: 10.0.0.0/8 
SubnetSize: 24
</source>
</source>


== Templating service names ==
;Troubleshooting
This allows control of e.g. the hostname within a cluster. It is useful in big clusters, making it easier to identify from the hostname where a service runs.
<source lang=bash>
<source lang=bash>
docker service create --name web --hostname"{{.Node.ID}}-{{.Service.Name}}" httpd
sudo systemctl disable firewalld && sudo systemctl stop firewalld # CentOS
docker service ps --no-trunc web
sudo -i; printf "\n10.0.0.11 mgr01\n10.0.0.12 node01\n" >> /etc/hosts # Add nodes to hosts file; exit
docker inspect --format="{{}.Config.Hostname}" web.1.ab10_serviceID_cd
aa_nodeID_bb-web
</source>
</source>
== Node labels for task/service placement ==
== Swarm cluster ==
<source lang=bash>
<source lang=bash>
docker node ls
docker node update --availability drain [node] #drain services from a node, e.g. for manager-only nodes
ID                            HOSTNAME                  STATUS AVAILABILITY        MANAGER STATUS      ENGINE VERSION
docker service update --force [service_name] #force re-balance services across cluster
641bfndn49b1i1dj17s8cirgw *  swarm-mgr-1.example.com  Ready Active              Leader              18.09.1
 
vlw7te728z7bvd7ulb3hn08am    swarm-wkr-1.example.com  Ready  Active                                  18.09.1
docker swarm leave #node leaves a cluster
r8h7xmevue9v2mgysmld59py2    swarm-wkr-2.example.com  Ready  Active                                  18.09.1
</source>


docker node inspect 641bfndn49b1i1dj17s8cirgw --pretty
== Locking / unlocking swarm cluster ==
ID:                    641bfndn49b1i1dj17s8cirgw
Raft logs used by the Swarm manager are encrypted on disk. Access to the nodes gives access to the keys that encrypt them. Auto-lock further protects the cluster, as it requires an unlock key when a manager node restarts.
Hostname:              swarm-mgr-1.example.com
<source lang=bash>
Joined at:              2019-01-08 12:16:56.277717163 +0000 utc
docker swarm init  --auto-lock=true #initialize with
Status:
docker swarm update --auto-lock=true #update current swarm
State:                Ready
# both will produce unlock token STKxxx
Availability:          Active
docker swarm unlock #it'll ask for the unlock token
Address:              172.31.10.10
docker swarm update --auto-lock=false #disable key locking
Manager Status:
Address:              172.31.10.10:2377
Raft Status:          Reachable
Leader:                Yes
Platform:
Operating System:      linux
Architecture:          x86_64
Resources:
CPUs:                  2
Memory:                3.699GiB
Plugins:
Log:          awslogs, fluentd, gcplogs, gelf, journald, json-file, local, logentries, splunk, syslog
Network:              bridge, host, macvlan, null, overlay
Volume:                local
Engine Version:        18.09.1
TLS Info:
TrustRoot:
-----BEGIN CERTIFICATE-----
MIIBajCCARCgAwIBAgIUKXz3wtc8OA8uzTo1pO86ko+PB+EwCgYIKoZIzj0EAwIw
..
-----END CERTIFICATE-----
Issuer Subject:        MBMxETAPBgNVBAMTCHN3YX.....h
Issuer Public Key:    MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEy......==
</source>
</source>




Apply label to a node
If you have access to a manager you can always get unlock key using:
<source lang=bash>
<source lang=bash>
docker node update --label-add node-env=testnode r8h7xmevue9v2mgysmld59py2
docker swarm unlock-key
docker node inspect r8h7xmevue9v2mgysmld59py2 --pretty | grep -B1 -A2 Labels
ID:                    r8h7xmevue9v2mgysmld59py2
Labels:
- node-env=testnode
Hostname:              swarm-wkr-1.example.com
</source>
</source>
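A label can be removed again with <code>--label-rm</code> when the placement rule is no longer needed (a sketch reusing the node ID from above):
<source lang=bash>
docker node update --label-rm node-env r8h7xmevue9v2mgysmld59py2
</source>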


Key management
<source lang=bash>
docker swarm unlock-key --rotate #could be in a cron
</source>
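A minimal sketch of such a cron entry (an assumption, not part of the original notes): rotate weekly on a manager node and keep the current key in a root-only file so it is still available after a restart:
<source lang=bash>
# /etc/cron.d/swarm-unlock-key  (hypothetical file; runs as root on a manager)
0 3 * * 0  root  docker swarm unlock-key --rotate > /dev/null && docker swarm unlock-key -q > /root/swarm-unlock-key && chmod 600 /root/swarm-unlock-key
</source>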
== Backup and restore swarm cluster ==
This process describes how to back up the whole cluster configuration so that it can be restored on a new set of servers.


How to use it: run a service with the <code>--constraint</code> option, which pins the service's tasks to nodes meeting the given criteria; in our case, to nodes where <code>node.labels.node-env == testnode</code>. Note that all replicas end up running on the same node, instead of being distributed across the cluster.
Create docker apps running across swarm
<source lang=bash>
<source lang=bash>
docker service create --name constraints -p 80:80 --constraint 'node.labels.node-env == testnode' --replicas 3 httpd #node.role, node.id, node.hostname
docker service create --name bkweb --publish 80:80 --replicas 2 httpd
zrk15vfdaitc1rvw9wqh2s0ot
$ docker service ls
overall progress: 3 out of 3 tasks
ID          NAME     MODE         REPLICAS IMAGE        PORTS
1/3: running  [==================================================>]
q9jki3n2hffm bkweb    replicated   2/2      httpd:latest  *:80->80/tcp
2/3: running  [==================================================>]
3/3: running  [==================================================>]
verify: Service converged
[cloud_user@mrpiotrpawlak1c ~]$ docker service ls
ID          NAME          MODE        REPLICAS IMAGE        PORTS
zrk15vfdaitc constraints  replicated   3/3      httpd:latest  *:80->80/tcp


[user@swarm-wkr-2 ~]$ docker service ps constraints
$ docker service ps bkweb #note containers run on 2 different nodes
ID          NAME         IMAGE       NODE                      DESIRED STATE       CURRENT STATE           ERROR              PORTS
ID          NAME     IMAGE         NODE                      DESIRED STATE CURRENT STATE        
y5z4mt99uzpo constraints.1 httpd:latest swarm-wkr-2.example.com   Running Running 41 seconds ago                      
j964jm1lq3q5 bkweb.1   httpd:latest server2c.mylabserver.com Running       Running about a minute ago
zqbn4ips969q constraints.2 httpd:latest swarm-wkr-2.example.com   Running Running 41 seconds ago                     
jpjx3mk7hhm0 bkweb.2  httpd:latest server1c.mylabserver.com Running       Running about a minute ago
vnb10jcs2915 constraints.3 httpd:latest swarm-wkr-2.example.com   Running Running 41 seconds ago  
</source>
</source>


== Scaling services ==
;Backup state files
These commands should be issued on a manager node.
<source lang=bash>
<source lang=bash>
docker pull nginx
sudo -i
docker service create --name web --publish 80:80 httpd
cd /var/lib/docker/swarm
docker service ps web                  #there is only 1 replica
cat docker-state.json #contains info about managers, workers, certificates, etc..
docker service update --replicas 3 web #update to 3 replicas
cat state.json
docker service create --name nginx --publish 5901:80 nginx
sudo systemctl stop docker.service
elinks http://swarm-mgr-1.com:5901    #nginx website will be presented
 
# Backup swarm cluster, this file can be then used to recover whole swarm cluster on another set of servers
sudo tar -czvf swarm.tar.gz /var/lib/docker/swarm/


# scale is equivalent to update --replicas command for a single or multiple services
#the running docker containers should be brought up as they were before stopping the service
docker service scale web=3 nginx=3
systemctl start docker
docker service ls
</source>
</source>


== Replicated services vs global services ==
Recover using swarm.tar backup
;Global mode: runs at least one copy of the service on each swarm node; even if another node joins later, the service will converge onto it as well. In global mode you cannot use the <code>update --replicas</code> or <code>scale</code> commands. It is not possible to change the mode type after creation.
;Replicated mode: allows for greater control and flexibility over the number of running replicas.
<source lang=bash>
<source lang=bash>
# creates a single service running across whole cluster in replicated mode
# scp swarm.tar to recovery node - what it'd be a node with just installed docker
docker service create --name web --publish 80:80 httpd
sudo rm -rf /var/lib/docker/swarm
sudo systemctl stop docker


# run in a global node
# Option1 untar directly
docker service create --name web --publish 5901:80 --mode global httpd
sudo tar -xzvf swarm.tar.gz -C /var/lib/docker/swarm
docker service ls #note distinct mode names: global and replicated
</source>


= Docker compose and deploy to Swarm =
# Option2 copy recursivly, -f override if a file exists
Install
tar -xzvf swarm.tar.gz; cd /var/lib/docker
<source lang=bash>
cp -rf swarm/ /var/lib/docker/
sudo yum install epel
sudo yum install pip
sudo pip install --upgrade pip
# install docker CE or EE to avoid Python libs conflits
sudo pip install docker-compose


# Troubleshooting
sudo systemctl start docker
## Err: Cannot uninstall 'requests'. It is a distutils installed project...
docker swarm init --force-new-cluster # produces exactly the same token
pip install --ignore-installed requests
# you should join all required nodes to new manager ip
# scale services down to 1, then scale back up so they get distributed to the other nodes
</source>
</source>
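A quick sanity check after the installation (a sketch; the reported version will vary):
<source lang=bash>
docker-compose --version
</source>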


Dockerfile
== Run containers as a services ==
A single Docker container has a number of limitations, so running it as a service, where a cluster manager (Swarm or Kubernetes) handles networking, access, load balancing, etc., is a way to scale with ease. The service uses e.g. mesh routing to deal with access to the containers.
 
Swarm nodes setup 1 manager and 2 workers
<source lang=bash>
<source lang=bash>
cat >Dockerfile <<EOF
ID                            HOSTNAME                          STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
FROM centos:latest
641bfndn49b1i1dj17s8cirgw *  swarm-mgr-1.example.com  Ready  Active Leader      18.09.1
RUN yum install -y httpd
vlw7te728z7bvd7ulb3hn08am    swarm-wkr-1.example.com  Ready  Active              18.09.1
RUN echo "Website1" >> /var/www/html/index.html
r8h7xmevue9v2mgysmld59py2    swarm-wkr-2.example.com  Ready  Active              18.09.0
EXPOSE 80
ENTRYPOINT apachectl -DFOREGROUND
EOF
</source>
</source>




Docker compose file
Create and run a service
<source lang=bash>
<source lang=bash>
cat >docker-compose.yml <<EOF
docker pull httpd
version: '3'
docker service create --name serviceweb --publish 80:80 httpd
services:
# --publish|-p -expose a port on all containers in the running cluster
  apiweb1:
 
    image: httpd_1:v1
docker service ls
    build: .
ID                  NAME                MODE                REPLICAS            IMAGE              PORTS
    ports:
vt0ftkifbd84        serviceweb          replicated          1/1                httpd:latest        *:80->80/tcp
      - "81:80"
 
  apiweb2:
docker service ps serviceweb #show nodes that a container is running on, here on mgr-1 node
    image: httpd_1:v1
ID          NAME        IMAGE        NODE                    DESIRED STATE CURRENT STATE  ERROR  PORTS
    ports:
e6rx3tzgp1e5 serviceweb.1 httpd:latest swarm-mgr-1.example.com Running       Running about                 
      - "82:80"
  load-balancer:
    image: nginx:latest
    ports:
       - "80:80"
EOF
</source>
</source>
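Before bringing the stack up, the file can be validated and the resolved configuration viewed (a sketch; run from the directory containing <code>docker-compose.yml</code>):
<source lang=bash>
docker-compose config            # validates the file and prints the merged configuration
docker-compose config --services # lists only the defined service names
</source>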




Run docker compose, on the current node only
When running as a service even if a container runs on a single node (replica=1) the container can be accessed from any of swarm nodes. It's because service exposed port has been exposed to extended mesh private network that the container is running on.
<source lang=bash>
<source lang=bash>
docker-compose up -d
[user@swarm-mgr-1 ~]$ curl -k http://swarm-mgr-1.example.com
WARNING: The Docker Engine you're using is running in swarm mode.
  <html><body><h1>It works!</h1></body></html>
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
[user@swarm-mgr-1 ~]$ curl -k http://swarm-wkr-1.example.com
To deploy your application across the swarm, use `docker stack deploy`.
  <html><body><h1>It works!</h1></body></html>
Creating compose_apiweb2_1      ... done
[user@swarm-mgr-1 ~]$ curl -k http://swarm-wkr-2.example.com
Creating compose_apiweb1_1      ... done
  <html><body><h1>It works!</h1></body></html>
Creating compose_load-balancer_1 ... done
</source>
 


docker ps
Service update, can be done to limits, volumes, env-variables and more...
CONTAINER ID IMAGE        COMMAND                 CREATED  STATUS  PORTS              NAMES
<source lang=bash>
14f8b6b10c2d nginx:latest "nginx -g 'daemon of…"  2 minutesUp 2 min 0.0.0.0:80->80/tcp compose_load-balancer_1
docker service scale devweb=3                 #or
e9b5b37fe4e5 httpd_1:v1  "/bin/sh -c 'apachec…"  2 minutesUp 2 min 0.0.0.0:81->80/tcp compose_apiweb1_1
docker service update --replicas 3 serviceweb #--detach=false shows visual progress in older versions, default in v18.06
28ee22a8eae0 httpd_1:v1   "/bin/sh -c 'apachec…"  2 minutesUp 2 min 0.0.0.0:82->80/tcp compose_apiweb2_1
serviceweb
overall progress: 3 out of 3 tasks
1/3: running  [==================================================>]
2/3: running   [==================================================>]
3/3: running  [==================================================>]
verify: Service converged


# Verify
# Limits(soft limit) and reservations(hard limit), this causes to start new services(containers)
curl http://localhost:81
docker service update --limit-cpu=.5 --reserve-cpu=.75 --limit-memory=128m --reserve-memory=256m serviceweb
curl http://localhost:82
</source>
curl http://localhost:80 #nginx


# Prep before deploying docker-compose to Swarm. Also images needs to be build before hand.
== Templating service names ==
# Docker stack does not support building images
This allows control of e.g. the hostname within a cluster. It is useful in big clusters, making it easier to identify from the hostname where a service runs.
docker-compose down --volumes #save everything to storage volumes
<source lang=bash>
Stopping compose_load-balancer_1 ... done
docker service create --name web --hostname"{{.Node.ID}}-{{.Service.Name}}" httpd
Stopping compose_apiweb1_1      ... done
docker service ps --no-trunc web
Stopping compose_apiweb2_1      ... done
docker inspect --format="{{}.Config.Hostname}" web.1.ab10_serviceID_cd
Removing compose_load-balancer_1 ... done
aa_nodeID_bb-web
Removing compose_apiweb1_1      ... done
Removing compose_apiweb2_1      ... done
Removing network compose_default
</source>
</source>
 
== Node labels for task/service placement ==
 
;Deploy compose to Swarm
<source lang=bash>
<source lang=bash>
docker stack deploy --compose-file docker-compose.yml customcompose-stack #customcompose-stack is a prefix for service name
docker node ls
Ignoring unsupported options: build
ID                            HOSTNAME                  STATUS AVAILABILITY        MANAGER STATUS      ENGINE VERSION
Creating network customcompose-stack_default
641bfndn49b1i1dj17s8cirgw *  swarm-mgr-1.example.com  Ready  Active              Leader              18.09.1
Creating service customcompose-stack_apiweb1
vlw7te728z7bvd7ulb3hn08am    swarm-wkr-1.example.com  Ready  Active                                  18.09.1
Creating service customcompose-stack_apiweb2
r8h7xmevue9v2mgysmld59py2    swarm-wkr-2.example.com  Ready  Active                                  18.09.1
Creating service customcompose-stack_load-balancer


docker stack services customcompose-stack #or
docker node inspect 641bfndn49b1i1dj17s8cirgw --pretty
docker service ls
ID:                    641bfndn49b1i1dj17s8cirgw
ID           NAME                              MODE      REPLICAS IMAGE        PORTS
Hostname:              swarm-mgr-1.example.com
k7wwkncov49p customcompose-stack_apiweb1        replicated 0/1     httpd_1:v1  *:81->80/tcp
Joined at:              2019-01-08 12:16:56.277717163 +0000 utc
nl0j5folpmha customcompose-stack_apiweb2        replicated 0/1      httpd_1:v1  *:82->80/tcp
Status:
x6p14gmpjyra customcompose-stack_load-balancer replicated 1/1      nginx:latest *:80->80/tcp
State:                 Ready
 
Availability:         Active
docker stack rm customcompose-stack #remove stack
Address:              172.31.10.10
 
Manager Status:
</source>
  Address:              172.31.10.10:2377
 
Raft Status:           Reachable
= Selecting a Storage Driver =
Leader:                Yes
Check the Docker version matrix to verify which storage drivers are supported on your platform. Changing the storage driver is destructive and you lose all containers and volumes, so export/back up first and re-import after the storage driver change.
Platform:
 
Operating System:      linux
;CentOS
Architecture:          x86_64
Device mapper is officially supported on CentOS. It can either run on top of a file, using a loopback device to provide block storage, or be given a real block-storage device that Docker manages directly.
Resources:
 
CPUs:                  2
<source lang=bash>
Memory:                3.699GiB
docker info --format '{{json .Driver}}'
Plugins:
docker info -f '{{json .}}' | jq .Driver
Log:          awslogs, fluentd, gcplogs, gelf, journald, json-file, local, logentries, splunk, syslog
docker info | grep Storage
Network:              bridge, host, macvlan, null, overlay
 
Volume:                local
sudo touch /etc/docker/daemon.json
Engine Version:        18.09.1
sudo vi    /etc/docker/daemon.json #additional options are available
TLS Info:
{
TrustRoot:
  "storage-driver":"devicemapper"
-----BEGIN CERTIFICATE-----
}
MIIBajCCARCgAwIBAgIUKXz3wtc8OA8uzTo1pO86ko+PB+EwCgYIKoZIzj0EAwIw
..
-----END CERTIFICATE-----
Issuer Subject:        MBMxETAPBgNVBAMTCHN3YX.....h
Issuer Public Key:     MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEy......==
</source>
</source>




Preserving any current images requires an export/backup and a re-import after the storage driver change.
Apply label to a node
<source lang=bash>
<source lang=bash>
docker images
docker node update --label-add node-env=testnode r8h7xmevue9v2mgysmld59py2
sudo systemctl docker restart
docker node inspect r8h7xmevue9v2mgysmld59py2 --pretty | grep -B1 -A2 Labels
ls -l /var/lib/docker/devicemapper #new location to storing images
ID:                    r8h7xmevue9v2mgysmld59py2
Labels:
- node-env=testnode
Hostname:              swarm-wkr-1.example.com
</source>
</source>


Note that a new <code>devicemapper</code> directory has been created in <code>/var/lib/docker</code>; images will be stored there from now on.


;Update 2019 - Docker Engine 18.09.1
How to use it: run a service with the <code>--constraint</code> option, which pins the service's tasks to nodes meeting the given criteria; in our case, to nodes where <code>node.labels.node-env == testnode</code>. Note that all replicas end up running on the same node, instead of being distributed across the cluster.
<source>
<source lang=bash>
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
docker service create --name constraints -p 80:80 --constraint 'node.labels.node-env == testnode' --replicas 3 httpd #node.role, node.id, node.hostname
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
zrk15vfdaitc1rvw9wqh2s0ot
        Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
overall progress: 3 out of 3 tasks
</source>
1/3: running   [==================================================>]
 
2/3: running   [==================================================>]
= Selecting a logging driver =
3/3: running   [==================================================>]
The list of available [https://docs.docker.com/config/containers/logging/configure/#supported-logging-drivers logging drivers] can be found on the Docker documentation page. The most popular are:
verify: Service converged
*none - No logs are available for the container and docker logs does not return any output.
[cloud_user@mrpiotrpawlak1c ~]$ docker service ls
*json-file - (default) the logs are formatted as JSON. The default logging driver for Docker.
ID          NAME          MODE        REPLICAS IMAGE        PORTS
*syslog - Writes logging messages to the syslog facility. The syslog daemon must be running on the host machine.
zrk15vfdaitc constraints  replicated  3/3      httpd:latest  *:80->80/tcp
*journald - Writes log messages to journald. The journald daemon must be running on the host machine.
*fluentd - Writes log messages to fluentd (forward input). The fluentd daemon must be running on the host machine.
*awslogs - Writes log messages to Amazon CloudWatch Logs.
*splunk - Writes log messages to splunk using the HTTP Event Collector.
*etwlogs - (Windows) Writes log messages as Event Tracing for Windows (ETW) events


[user@swarm-wkr-2 ~]$ docker service ps constraints
ID          NAME          IMAGE        NODE                      DESIRED STATE      CURRENT STATE            ERROR              PORTS
y5z4mt99uzpo constraints.1 httpd:latest swarm-wkr-2.example.com  Running Running 41 seconds ago                     
zqbn4ips969q constraints.2 httpd:latest swarm-wkr-2.example.com  Running Running 41 seconds ago                     
vnb10jcs2915 constraints.3 httpd:latest swarm-wkr-2.example.com  Running Running 41 seconds ago
</source>


== Scaling services ==
These commands should be issued on a manager node.
<source lang=bash>
<source lang=bash>
docker info | grep logging
docker pull nginx
docker container run -d --name <webjson> --log-driver json-file httpd #per-container setup
docker service create --name web --publish 80:80 httpd
docker logs <webjson>
docker service ps web                  #there is only 1 replica
 
docker service update --replicas 3 web #update to 3 replicas
docker container run -d --name <web> httpd #start new container
docker service create --name nginx --publish 5901:80 nginx
docker logs -f <web>                  #display standard-out logs
elinks http://swarm-mgr-1.com:5901    #nginx website will be presented
docker service logs -f <web> #in swarm, aggregates logs from all container replicas
 
# scale is equivalent to update --replicas command for a single or multiple services
docker service scale web=3 nginx=3
docker service ls
</source>
</source>
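The json-file driver also accepts per-container options to cap log growth; a minimal sketch (standard json-file options, arbitrary values):
<source lang=bash>
docker container run -d --name webcapped \
  --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 httpd
</source>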


 
== Replicated services vs global services ==
Enable syslog logginig driver
;Global mode: runs at least one copy of the service on each swarm node; even if another node joins later, the service will converge onto it as well. In global mode you cannot use the <code>update --replicas</code> or <code>scale</code> commands. It is not possible to change the mode type after creation.
;Replicated mode: allows for greater control and flexibility over the number of running replicas.
<source lang=bash>
<source lang=bash>
sudo vi /etc/rsyslog.conf
# creates a single service running across whole cluster in replicated mode
#uncomment below
docker service create --name web --publish 80:80 httpd
$ModLoad imudp
$UDPServerRun 514


sudo systemctl start rsyslog
# run in a global node
docker service create --name web --publish 5901:80 --mode global httpd
docker service ls #note distinct mode names: global and replicated
</source>
</source>


 
= Docker compose and deploy to Swarm =
Change the logging driver. Note that <code>docker logs</code> (standard output) won't be available after the change.
Install
<source lang=bash>
<source lang=bash>
{
sudo yum install epel
  "log-driver": "syslog",
sudo yum install pip
  "log-opts": {
sudo pip install --upgrade pip
    "syslog-address": "udp://172.31.10.1"
# install docker CE or EE to avoid Python libs conflits
  }
sudo pip install docker-compose
}


sudo systemctl restart docker
# Troubleshooting
docker info | grep logging
## Err: Cannot uninstall 'requests'. It is a distutils installed project...
tail -f /var/log/messages #this will show all logging eg. access logs for httpd server
pip install --ignore-installed requests
</source>
</source>
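Instead of changing the daemon-wide default, the driver can also be set per container; a sketch (using the same example syslog server as above):
<source lang=bash>
docker container run -d --name weblog \
  --log-driver syslog --log-opt syslog-address=udp://172.31.10.1:514 httpd
</source>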
== Docker daemon logs ==
 
System level logs
Dockerfile
<source lang=bash>
<source lang=bash>
# CentOS
cat >Dockerfile <<EOF
grep -i docker /var/log/messages
FROM centos:latest
 
RUN yum install -y httpd
# Ubuntu
RUN echo "Website1" >> /var/www/html/index.html
sudo journalctl -u docker.service --no-hostname
EXPOSE 80
sudo journalctl -u docker -o json | jq -cMr '.MESSAGE'
ENTRYPOINT apachectl -DFOREGROUND
sudo journalctl -u docker -o json | jq -cMr 'select(has("CONTAINER_ID") | not) | .MESSAGE'
EOF
grep -i docker /var/log/syslog
</source>
</source>


;Docker container or service logs
<source lang=bash>
docker container logs [OPTIONS] containerID  #single container logs
docker service  logs [OPTIONS] service|task #aggregate logs across all container replicas deployed in the cluster
</source>


= Container life-cycle policies - eg. autostart =
Docker compose file
<source lang=bash>
<source lang=bash>
docker container run -d --name web --restart <no(default)|on-failure|always|unless-stopped> httpd
cat >docker-compose.yml <<EOF
# --restart :- restart on crash/non-zero exit, or when the Docker daemon or system restarts (policy-dependent)
version: '3'
</source>
services:
Definitions:
  apiweb1:
* always - always restarts the container when it stops; if it was stopped manually, it will be started again when the Docker daemon restarts
    image: httpd_1:v1
* unless-stopped - like always, except a container stopped manually with <code>docker container stop</code> stays stopped, even after a daemon restart
    build: .
* on-failure - restarts only if the container exits with a non-zero exit code; an optional retry limit can be given, as sketched below
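A minimal sketch of the retry-limited variant (the <code>:5</code> suffix caps restart attempts; the image is just an example):
<source lang=bash>
docker container run -d --name flaky --restart on-failure:5 httpd
</source>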
    ports:
      - "81:80"
  apiweb2:
    image: httpd_1:v1
    ports:
      - "82:80"
  load-balancer:
    image: nginx:latest
    ports:
      - "80:80"
EOF
</source>


= Universal Control Plane - UCP =
It's an application that lets you see all operational details of a Swarm cluster when using the Docker EE edition. A 30-day trial is available.


;Communication between Docker Engine, UCP and DTR (Docker Trusted Registry)
Run docker compose, on the current node only
* over TCP/UDP - depends on a port, and whether a response is required, or if a message is a notification
<source lang=bash>
* IPC - interprocess communication (intra-host), services on the same node
docker-compose up -d
* API - over TCP, uses the API directly to query or update components in a cluster
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Creating compose_apiweb2_1      ... done
Creating compose_apiweb1_1      ... done
Creating compose_load-balancer_1 ... done


;References
docker ps
* [https://docs.docker.com/ee/ucp/ucp-architecture/ UCP architecture]
CONTAINER ID IMAGE        COMMAND                CREATED  STATUS  PORTS              NAMES
14f8b6b10c2d nginx:latest "nginx -g 'daemon of…"  2 minutesUp 2 min 0.0.0.0:80->80/tcp compose_load-balancer_1
e9b5b37fe4e5 httpd_1:v1  "/bin/sh -c 'apachec…"  2 minutesUp 2 min 0.0.0.0:81->80/tcp compose_apiweb1_1
28ee22a8eae0 httpd_1:v1  "/bin/sh -c 'apachec…"  2 minutesUp 2 min 0.0.0.0:82->80/tcp compose_apiweb2_1


== Install/uninstall UCP <code>image: docker/ucp</code> ==
# Verify
OS support:  
curl http://localhost:81
* UCP 2.2.11 is supported running on RHEL 7.5 and Ubuntu 18.04
curl http://localhost:82
curl http://localhost:80 #nginx


For lab purposes we can use e.g. <code>ucp.example.com</code>; the domain <code>example.com</code> is included in the UCP and DTR wildcard self-signed certificate.
# Prep before deploying docker-compose to Swarm. Also images needs to be build before hand.
# Docker stack does not support building images
docker-compose down --volumes #save everything to storage volumes
Stopping compose_load-balancer_1 ... done
Stopping compose_apiweb1_1      ... done
Stopping compose_apiweb2_1      ... done
Removing compose_load-balancer_1 ... done
Removing compose_apiweb1_1      ... done
Removing compose_apiweb2_1      ... done
Removing network compose_default
</source>




Install on a manager node
;Deploy compose to Swarm
<source lang=bash>
<source lang=bash>
export UCP_USERNAME=ucp-admin
docker stack deploy --compose-file docker-compose.yml customcompose-stack #customcompose-stack is a prefix for service name
export UCP_PASSWORD=ucp-admin
Ignoring unsupported options: build
export UCP_MGR_NODE_IP=172.31.101.248
Creating network customcompose-stack_default
Creating service customcompose-stack_apiweb1
Creating service customcompose-stack_apiweb2
Creating service customcompose-stack_load-balancer
 
docker stack services customcompose-stack #or
docker service ls
ID          NAME                              MODE      REPLICAS IMAGE        PORTS
k7wwkncov49p customcompose-stack_apiweb1        replicated 0/1      httpd_1:v1  *:81->80/tcp
nl0j5folpmha customcompose-stack_apiweb2        replicated 0/1      httpd_1:v1  *:82->80/tcp
x6p14gmpjyra customcompose-stack_load-balancer  replicated 1/1      nginx:latest *:80->80/tcp


docker container run --rm -it --name ucp \
docker stack rm customcompose-stack #remove stack
  -v /var/run/docker.sock:/var/run/docker.sock docker/ucp:2.2.15 \
  install --host-address=$UCP_MGR_NODE_IP --interactive --debug


# --rm  :- because this container will be only transitinal container
# -it  :- because installation we want interactive
# -v    :- link the container with a file on a host
# --san :- add subject alternative names to certificates (e.g. --san www1.acme.com --san www2.acme.com)
# --host-address    :- IP address or network interface name to advertise to other nodes
# docker/ucp:2.2.11 :- image version
# --dns        :- custom DNS servers for the UCP containers
# --dns-search :- ustom DNS search domains for the UCP containers
# --admin-username "$UCP_USERNAME" --admin-password "$UCP_PASSWORD" #seems these are not supported, although are in a guide
</source>
</source>
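A quick sanity check after the stack is deployed is to scale one of its replicated services; a minimal sketch using the service names from the example above:

<source lang=bash>
# scale the apiweb1 service of the stack to 3 replicas
docker service scale customcompose-stack_apiweb1=3

# watch the replicas converge
docker service ps customcompose-stack_apiweb1
</source>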


= Selecting a Storage Driver =
Go to the Docker version matrix to verify which drivers are supported on your platform. Changing the storage driver is destructive and you lose all container volumes, therefore you need to export/backup them and re-import them after the storage driver change.


;CentOS
Device mapper is officially supported on CentOS. It can either use a loopback device on top of the filesystem as block storage, or it can be given a dedicated block storage device that Docker manages itself.


<source lang=bash>
docker info --format '{{json .Driver}}'
docker info -f '{{json .}}' | jq .Driver
docker info | grep Storage

sudo touch /etc/docker/daemon.json
sudo vi    /etc/docker/daemon.json #additional options are available
{
  "storage-driver":"devicemapper"
}
</source>


Preserving any current images requires export/backup and re-import after the storage driver change.
<source lang=bash>
docker images
sudo systemctl restart docker
ls -l /var/lib/docker/devicemapper #new location for storing images
</source>
Note, in <code>/var/lib/docker</code> a new directory <code>devicemapper</code> has been created to store images from now on.


;Update 2019 - Docker Engine 18.09.1
<source>
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
        Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
</source>
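On most modern kernels ''overlay2'' is the preferred driver; a minimal sketch of switching to it (the same caveat applies: existing images and containers are not migrated):

<source lang=bash>
# /etc/docker/daemon.json
{
  "storage-driver": "overlay2"
}

sudo systemctl restart docker
docker info --format '{{.Driver}}'   #should now print: overlay2
</source>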


= Selecting a logging driver =
The list of available [https://docs.docker.com/config/containers/logging/configure/#supported-logging-drivers logging drivers] can be seen on the Docker documentation page. The most popular are:
*none - No logs are available for the container and docker logs does not return any output.
*json-file - (default) the logs are formatted as JSON. The default logging driver for Docker.
*syslog - Writes logging messages to the syslog facility. The syslog daemon must be running on the host machine.
*journald - Writes log messages to journald. The journald daemon must be running on the host machine.
*fluentd - Writes log messages to fluentd (forward input). The fluentd daemon must be running on the host machine.
*awslogs - Writes log messages to Amazon CloudWatch Logs.
*splunk - Writes log messages to splunk using the HTTP Event Collector.
*etwlogs - (Windows) Writes log messages as Event Tracing for Windows (ETW) events


<source lang=bash>
docker info | grep -i logging
docker container run -d --name webjson --log-driver json-file httpd #per-container setup
docker logs webjson

docker container run -d --name web httpd  #start a new container
docker logs -f web                        #display standard-out logs
docker service logs -f <web>              #for swarm, logs of all container replicas
</source>


Enable the syslog logging driver
<source lang=bash>
sudo vi /etc/rsyslog.conf
#uncomment below
$ModLoad imudp
$UDPServerRun 514

sudo systemctl start rsyslog
</source>


Change the default logging driver in <code>/etc/docker/daemon.json</code>. Note that <code>docker logs</code> (standard output) won't be available after the change.
<source lang=bash>
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://172.31.10.1"
  }
}

sudo systemctl restart docker
docker info | grep -i logging
tail -f /var/log/messages #this will show all logging eg. access logs for the httpd server
</source>

== Docker daemon logs ==
System level logs
<source lang=bash>
# CentOS
grep -i docker /var/log/messages

# Ubuntu
sudo journalctl -u docker.service --no-hostname
sudo journalctl -u docker -o json | jq -cMr '.MESSAGE'
sudo journalctl -u docker -o json | jq -cMr 'select(has("CONTAINER_ID") | not) | .MESSAGE'
grep -i docker /var/log/syslog
</source>


;Docker container or service logs
<source lang=bash>
docker container logs [OPTIONS] containerID   #single container logs
docker service   logs [OPTIONS] service|task  #aggregate logs across all container replicas deployed in the cluster
</source>
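The default json-file driver also accepts per-container options; for example log rotation can be enabled so that container logs don't fill up the disk (a short sketch, the container name is illustrative):

<source lang=bash>
docker container run -d --name webrotate \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  httpd
docker inspect -f '{{.HostConfig.LogConfig}}' webrotate   #verify the options took effect
</source>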
= Container life-cycle policies - eg. autostart =
<source lang=bash>
docker container run -d --name web --restart <on-failure|unless-stopped|no(default)|always> httpd
# --restart - restart on crash (non-zero exit), or on Docker daemon/system restart
</source>
Definitions:
* always - it will always restart the container, even if it was stopped manually; restarting the docker daemon will start the container
* unless-stopped - it will always restart the container unless it was stopped manually by <code>docker container stop</code>
* on-failure - restart if the container exits with a non-zero exit code
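The restart policy of an existing container can also be changed without recreating it, e.g.:

<source lang=bash>
docker update  --restart unless-stopped web
docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' web   #verify
</source>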
 
= Universal Control Plane - UCP =
It's an application that allows you to see all operational details of a Swarm cluster when using the Docker EE edition. A 30-day trial is available.


;Communication between Docker Engine, UCP and DTR (Docker Trusted Registry)
* over TCP/UDP - depends on a port, and whether a response is required, or if a message is a notification
* IPC - interprocess communication (intra-host), services on the same node
* API - over TCP, uses the API directly to query or update components in a cluster


;References
* [https://docs.docker.com/ee/ucp/ucp-architecture/ UCP architecture]


== Install/uninstall UCP <code>image: docker/ucp</code> ==
OS support:
* UCP 2.2.11 is supported running on RHEL 7.5 and Ubuntu 18.04


For lab purposes we can use e.g. <code>ucp.example.com</code>; the domain <code>example.com</code> is included in the UCP and DTR wildcard self-signed certificate.


Install on a manager node
<source lang=bash>
export UCP_USERNAME=ucp-admin
export UCP_PASSWORD=ucp-admin
export UCP_MGR_NODE_IP=172.31.101.248

docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock docker/ucp:2.2.15 \
  install --host-address=$UCP_MGR_NODE_IP --interactive --debug

# --rm   :- because this container is only a transitional container
# -it    :- because we want the installation to be interactive
# -v     :- link the container with a file on the host
# --san  :- add subject alternative names to certificates (e.g. --san www1.acme.com --san www2.acme.com)
# --host-address    :- IP address or network interface name to advertise to other nodes
# docker/ucp:2.2.15 :- image version
# --dns        :- custom DNS servers for the UCP containers
# --dns-search :- custom DNS search domains for the UCP containers
# --admin-username "$UCP_USERNAME" --admin-password "$UCP_PASSWORD" #seems these are not supported, although they are in a guide
</source>


If not provided you will be asked for:
* Admin password during the process
* You may enter additional aliases (SANs) now or press enter to proceed with the above list:
** Additional aliases: ucp ucp.example.com
DEBU[0062] User entered: ucp ucp.ciscolinux.co.uk
DEBU[0062] Hostnames: [host1c.mylabserver.com 127.0.0.1 172.17.0.1 172.31.101.248 ucp ucp.ciscolinux.co.uk]




You may want to add DNS entries in <code>/etc/hosts</code> for
* ''ucp'' or ''ucp.example.com'' pointing to the manager public IP
* ''dtr'' or ''dtr.example.com'' pointing to a worker node public IP


;Verify
* connect to https://ucp.example.com:443
* <code>docker ps</code> should show a number of containers running now; they need to see each other, therefore we used the <code>hosts</code> entries to allow this

;Uninstall
<source lang=bash>
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp uninstall-ucp --interactive

INFO[0000] Your engine version 18.09.1, build 4c52b90 (4.15.0-1031-aws) is compatible with UCP 3.1.2 (b822777)
INFO[0000] We're about to uninstall from this swarm cluster. UCP ID: t0ltwwcw5tdbtjo2fxlzmj8p4
Do you want to proceed with the uninstall? (y/n): y
INFO[0000] Uninstalling UCP on each node...
INFO[0031] UCP has been removed from this cluster successfully.
INFO[0033] Removing UCP Services
</source>
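A quick post-uninstall check that nothing was left behind (assuming the usual ''ucp'' prefix on UCP system containers and volumes):

<source lang=bash>
docker ps -a --filter name=ucp- --format '{{.Names}}'   #expect no output
docker volume ls --filter name=ucp                      #any leftover UCP volumes
</source>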


== Install DTR Docker Trusted Repository <code>image: docker/dtr</code> ==
It's recommended on single-core systems to wait 5 minutes after the UCP deployment to release more CPU cycles. You can see the load peaking at around 1.0 using the <code>w</code> command.

Connect to the UCP service https://ucp.example.com, login with the credentials created earlier and upload a license.lic file.
Go to Admin Settings > Docker Trusted Registry > Pick one of UCP Nodes [worker]
You may disable TLS verification on a self-signed certificate


Run the command below on the node where you want to install DTR. <code>UCP_NODE</code> in a lab environment can cause a few issues. For convenience, to avoid port conflicts on :80 and :443, use a different node than the one UCP is installed on, e.g. DNS name ''user2c.mylabserver.com'' or a private IP.
<source lang=bash>
export UCP_NODE=wkr-172.31.107.250 #for convenience, to avoid port conflicts on :80,:443 use a worker IP
export UCP_USERNAME=ucp-admin
export UCP_PASSWORD=ucp-admin
export UCP_URL=https://ucp.example.com:443 #avoid using example.com to avoid SSL name validation issues
docker pull docker/dtr

# Optional. Download UCP public certificate
curl -k https://ucp.ciscolinux.co.uk/ca > ucp-ca.pem

docker container run -it --rm docker/dtr install \
  --ucp-node $UCP_NODE --ucp-url $UCP_URL --debug \
  --ucp-username $UCP_USERNAME --ucp-password $UCP_PASSWORD \
  --ucp-insecure-tls  # --ucp-ca "$(cat ucp-ca.pem)"

# --ucp-node :- hostname/IP of the UCP node (any node managed by UCP) to deploy DTR on. Random by default
# --ucp-url  :- the UCP URL including domain and port
</source>


It will ask for, if not specified:
* ucp-password: you know it from the UCP installation step


Significant installation logs
<source lang=bash>
..
INFO[0006] Only one available UCP node detected. Picking UCP node 'user2c.labserver.com'
..
INFO[0006] verifying [80 443] ports on user2c.labserver.com
..
INFO[0000] Using default overlay subnet: 10.1.0.0/24
INFO[0000] Creating network: dtr-ol
INFO[0000] Connecting to network: dtr-ol
..
INFO[0008] Generated TLS certificate. dnsNames="[*.com *.*.com example.com *.dtr *.*.dtr]" domains="[*.com *.*.com 172.17.0.1 example.com *.dtr *.*.dtr]" ipAddresses="[172.17.0.1]"
..
INFO[0073] You can use flag '--existing-replica-id 10e168476b49' when joining other replicas to your Docker Trusted Registry Cluster
</source>


;Verify by logging in to https://dtr.example.com
The DTR installation process above has also installed a number of new containers named <code>ucp-agent</code> on the manager/worker nodes, and a number of containers on the dedicated DTR node.
You can verify DTR by logging in to https://dtr.example.com with the UCP credentials <code>ucp-admin</code> and the same password, if you haven't changed any of the commands above. You should be presented with a registry.docker.io-like theme. Any images stored there will be trusted from the perspective of our organisation.


;Verify by going to UCP https://ucp.example.com, admin settings > Docker Trusted Registry
[[File:Ucp-dtr-in-admin.png|none|400px|left|Ucp-dtr-in-admin]]


== Backup UCP and DTR configuration ==
This is built into UCP. The process is to start a special container that exports the UCP configuration to a tar file. This can be run as a <code>cron</code> job.


<source lang=bash>
docker container run --log-driver none --rm -i --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp backup > backup.tar
# --rm :- it's a transitional container
# -i   :- run interactively

# At first run it will error with --id m79xxxxxxxxx, asking to re-run the command with this id.

# Restore command
docker container run --log-driver none --rm -i --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp restore --id m79xxx < backup.tar
</source>

;DTR
During a backup DTR will not be available.
<source lang=bash>
docker container run --log-driver none --rm docker/dtr backup  --ucp-insecure-tls --ucp-url <ucp_server_dns:443> --ucp-username admin --ucp-password <password> > dtr-backup.tar

# you will be asked for:
# Choose a replica to back up from: enter

# Restore command
docker container run --log-driver none --rm docker/dtr restore --ucp-insecure-tls --ucp-url <ucp_server_dns:443> --ucp-username admin --ucp-password <password> < dtr-backup.tar
</source>
== UCP RBAC ==
The main concept is:
* administrators can make changes to the UCP swarm/kubernetes, User Management, Organisation, Team and Roles
* users - range of access from Full Control of resources to no access


[[File:Ucp-rbac.png|500px|none|left|Ucp-rbac]]

Note that only the Scheduler role allows access to Node to view nodes, plus of course to schedule workloads.

= UCP Client bundle =
The UCP client bundle allows you to export a bundle containing a certificate and environment settings that point the docker client to UCP in order to use the cluster, create images and services.

;Download bundle
# Create a user with the privileges that you wish the docker client to run as
# Download a client bundle from User Profile > Client bundle > + New Client Bundle
# File <code>ucp-bundle-[username].zip will get downloaded</code> <p>
<source lang=bash>
unzip ucp-bundle-bob.zip
Archive:  ucp-bundle-bob.zip
extracting: ca.pem                 
extracting: cert.pem               
extracting: key.pem               
extracting: cert.pub               
extracting: env.sh                 
extracting: env.ps1               
extracting: env.cmd   
 
cat env.sh
export COMPOSE_TLS_VERSION=TLSv1_2
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$PWD"
export DOCKER_HOST=tcp://3.16.143.49:443
#
# Bundle for user bob
# UCP Instance ID t0ltwwcw5tdbtjo2fxlzmj8p4
#
# This admin cert will also work directly against Swarm and the individual
# engine proxies for troubleshooting.  After sourcing this env file, use
# "docker info" to discover the location of Swarm managers and engines.
# and use the --host option to override $DOCKER_HOST
#
# Run this command from within this directory to configure your shell:
# eval $(<env.sh)
 
eval $(<env.sh) # apply ucp-bundle
 
docker images # to list UCP managed images
</source></p>
# <li value="4"> In my lab I had to update DOCKER_HOST from public IP to private IP </li>
Err: error during connect: Get https://3.16.143.49:443/v1.39/images/json: x509: certificate is valid for 127.0.0.1, 172.31.101.248, 172.17.0.1, not 3.16.143.49
<source lang=bash>
export DOCKER_HOST=tcp://172.31.101.248:443
</source>
 
 
# <li value="5"> Verify if you have permissions to create a service</li>
<source lang=bash>
docker service create --name test111 httpd
Error response from daemon: access denied:
no access to Service Create, on collection swarm
</source>
 
# <li value="6"> Add Grants to the user</li>
## Go to User Management > Granst > Create Grant
## Base on a Roles, select Full Control
## Select Subjects, All Users, select the user
## Click Create
# Re-run service create command that should succeed now. This service can be managed now also within UCP console.
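To point the docker client back at the local engine, unset the variables the bundle exported (or simply open a new shell):

<source lang=bash>
unset DOCKER_HOST DOCKER_TLS_VERIFY DOCKER_CERT_PATH COMPOSE_TLS_VERSION
docker info   #now talks to the local daemon again
</source>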
 
= Docker Secure Registry | image: registry =
Docker provides a special Docker image that can be used to host and manage Docker images, both internally and externally; the steps below therefore include securing access with an SSL certificate.
 
 
Create certificate
<source lang=bash>
mkdir ~/{auth,certs}
# create self-signed certificate for Docker Repository
mkdir certs; cd $_ #cd into the last argument of the previous command
openssl req -newkey rsa:4096 -nodes -sha256 -keyout repo-key.pem -x509 -days 365 -out repo-cer.pem -subj /CN=myrepo.com
# trusted-certs docker client directory; the docker client looks for trusted certs here when connecting to a remote repo
sudo mkdir -p /etc/docker/certs.d/myrepo.com:5000 #port 5000 is a default port
sudo cp repo-cer.pem /etc/docker/certs.d/myrepo.com:5000/ca.crt
</source>
<code>ca.crt</code> is the default/required CA root trust-cert file name that the docker client (docker login API) uses when connecting to a remote repository. In our case we trust any cert signed by CA=ca.crt when connecting to myrepo.com:5000, as the same (self-signed) certs got installed in the <code>registry:2</code> container via the <code>-v /certs/</code> option.
 
Optionally, for development purposes, add the domain ''myrepo.com'' to the hosts file, bound to a local interface IP address.
<source lang=bash>
echo "172.16.10.10 myrepo.com" | sudo tee -a /etc/hosts
</source>
 
 
Optional add insecure-registry entry
<source>
sudo vi /etc/docker/daemon.json
{
  "insecure-registries" : [ "myrepo.com:5000"]
}
</source>
 
 
Pull special Docker Registry image
<source lang=bash>
mkdir -p ~/auth #authentication directory, used when deploying local repository
docker pull registry:2
docker run --entrypoint htpasswd registry:2 -Bbn reg-admin Passw0rd123 > ~/auth/htpasswd
# -Bbn        -parameters
# reg-admin  -user
# Passw0rd123 -password string for basic htpasswd authentication method, the hashed password will be displayed to STDOUT
 
$ cat ~/auth/htpasswd
reg-admin:$2y$05$DnTWDHp7uTwaDrw4sXpUbuDDIlLwu3c8MEMsHPjK/AcUMdK/TD6fO
</source>
 
 
Run Registry container
<source lang=bash>
cd ~
docker run -d -p 5000:5000 --name myrepo \
      -v $(pwd)/certs:/certs \
      -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/repo-cer.pem \
      -e REGISTRY_HTTP_TLS_KEY=/certs/repo-key.pem \
      -v $(pwd)/auth:/auth \
      -e REGISTRY_AUTH=htpasswd \
      -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
      -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
      registry:2
# -v                              -indicate where our certificates will be mounted within a container
# -e REGISTRY_HTTP_TLS_CERTIFICATE -path to cert inside the container
# -v $(pwd)/auth:/auth            -mounting authentication directory where a file with password is
# -e REGISTRY_AUTH htpasswd        -setting up to use 'htpasswd' authentication method
# registry:2                      -image name, positinal params 
</source>
 
 
Verify
<source lang=bash>
docker pull  alpine
docker tag  alpine    myrepo.com:5000/aa-alpine #create a tagged image (copy) on a local filesystem,
    # it must be prefixed with the private repo name '/' image name you want to upload as
 
docker logout  # if logged in to another repository
docker login myrepo.com:5000 #login to the repository that runs as a container, stays logged in until logout/reboot
docker login myrepo.com:5000 --username=reg-admin --password Passw0rd123
docker push  myrepo.com:5000/aa-alpine       
 
docker image rmi alpine myrepo.com:5000/aa-alpine #delete image stored locally
docker pull            myrepo.com:5000/aa-alpine #pull image from a container repository
 
# List private-repository images
curl --insecure -u "reg-admin:password" https://myrepo.com:5000/v2/_catalog
{"repositories":["aa-alpine"]}
 
wget --no-check-certificate --http-user=reg-admin --http-password=password https://myrepo.com:5000/v2/_catalog
cat _catalog                                                                                                                                                                     
{"repositories":["my-alpine","myalpine","new-aa-busybox"]}
 
# List tags
curl --insecure -u "reg-admin:password" https://myrepo.com:5000/v2/aa-alpine/tags/list
{"name":"myalpine","tags":["latest"]}
curl --insecure -u "reg-admin:password" https://myrepo.com:5000/v2/aa-alpine/manifests/latest #entire image metadata
</source>
 
 
Note: there is no easy way to delete images from the <code>registry:2</code> container.
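Deletion is possible but clunky: the registry must be started with deletes enabled, the manifest is deleted by digest via the API, and space is only reclaimed by the registry's garbage collector. A rough sketch (the digest value is illustrative):

<source lang=bash>
# 1. add this to the 'docker run ... registry:2' command above and recreate the container
#    -e REGISTRY_STORAGE_DELETE_ENABLED=true

# 2. read the manifest digest from the Docker-Content-Digest response header, then delete by digest
curl --insecure -u "reg-admin:password" -I \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  https://myrepo.com:5000/v2/aa-alpine/manifests/latest
curl --insecure -u "reg-admin:password" -X DELETE \
  https://myrepo.com:5000/v2/aa-alpine/manifests/sha256:<digest>

# 3. reclaim the space inside the registry container
docker exec myrepo registry garbage-collect /etc/docker/registry/config.yml
</source>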
 
= Docker push =
;Login to a docker repository
<source lang=bash>
docker info | grep -B1 Registry #check if you are logged in to docker.hub repository
WARNING: No swap limit support
Registry: https://index.docker.io/v1/
 
docker login
 
docker info | grep -B1 Registry
Username: pio2pio
Registry: https://index.docker.io/v1/
</source>
 
 
; Tag and push an image
<source lang=bash>
# docker tag local-image:tagname new-repo:tagname  #create a local copy of an image
# docker push new-repo:tagname                   
 
docker pull busybox
docker tag busybox:latest pio2pio/testrepo
docker push pio2pio/testrepo
The push refers to repository [docker.io/pio2pio/testrepo]
683f499823be: Mounted from library/busybox
latest: digest: sha256:bbb143159af9eabdf45511fd5aab4fd2475d4c0e7fd4a5e154b98e838488e510
size: 527
</source>
 
; Docker Content Trust
All images are implicitly trusted by your Docker daemon, but you can require that ONLY signed images are allowed. You can configure your systems to trust only image tags that have been signed.
<source lang=bash>
export DOCKER_CONTENT_TRUST=1 #enable signing of images during the push process
docker build -t myrepo.com:5000/untrusted.latest .
docker push myrepo.com:5000/untrusted.latest
...
No tag specified, skipping trust metadata push
# 2nd attempt, with a tag specified now
docker push myrepo.com:5000/untrusted.latest:latest
Error: error contacting notary server: x509: certificate signed by unknown authority
 
docker pull myrepo.com:5000/untrusted.latest:latest
Error: error contacting notary server: x509: certificate signed by unknown authority
</source>
 
;Errors explained:
Err: No tag specified, skipping trust metadata push<br />
* Explanation: an image is signed by tag. Therefore if you skip the tag it won't get signed and the trust metadata push is skipped.
Err: error contacting notary server: x509: certificate signed by unknown authority
* when uploading, the image gets uploaded, but it is not trusted because it is signed with a self-signed CA
* when downloading, and <code>DOCKER_CONTENT_TRUST=1</code> is enabled, the image cannot be downloaded because it is untrusted
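On newer Docker versions you can also inspect the trust data for a tag, which makes it easier to see whether and by whom an image was signed, e.g.:

<source lang=bash>
docker trust inspect --pretty pio2pio/testrepo:latest   #lists signers and signed digests, errors if unsigned
</source>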
 
= Theory =
== What is a docker ==
Docker is a container runtime platform, where Swarm is a container orchestration platform.
 
== Security ==
=== Mutually Authenticated TLS ===
Docker Swarm is ''secure by default'': all communication is encrypted. ''Mutually Authenticated TLS'' (MTLS) is the implementation that was chosen to secure that communication. Any time a swarm is initialised, a self-signed CA is generated and issues certificates to every node (manager or worker) to facilitate registration (joining as manager or worker) and, later, those secure communications. A transient container is brought up to generate CA certs every time a cert is needed. MTLS communication takes place between managers and workers.
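The swarm CA and node certificates can be inspected and rotated from a manager node; for example (the certificate path is the default location):

<source lang=bash>
docker swarm ca             #print the current root CA certificate
docker swarm ca --rotate    #generate a new CA and re-issue certificates to all nodes
sudo openssl x509 -text -noout \
  -in /var/lib/docker/swarm/certificates/swarm-node.crt | grep -E 'Issuer|Not After|Subject:'
</source>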
 
== [[Linux Namespaces and Control Groups]] ==
''namespaces'' provide security and isolation, ''control groups'' provide resource management and reporting.

;Namespaces provide: ''isolation'' so that other pieces of the system remain unaffected by whatever is within the namespace. ''Docker'' uses namespaces of various kinds to provide the isolation that containers need in order to remain portable and refrain from affecting the remainder of the host system

Namespace Type:
* Process ID - containers encapsulate everything into a single process
* Mount - volume mounts
* IPC Inter Process Isolation - eg. swarm services allowed to communicate with containers but not outside
* Users (1.12+ experimental) map container users to host users. This can break other isolation items.
* Network - ips, routing, etc..

;Control Groups: provide ''resource limitation'' and reporting capability within the container space. They allow for granular control over what host resources are allocated to container/s and when they are allocated.

Common Control Groups
* CPU
* Memory
* Network Bandwidth
* Disk
* Priority
 
== Difference between docker attach and docker exec ==
;Attach
The docker attach command allows you to attach to a running container using the containers ID or name, either to view its ongoing output or to control it interactively. You can attach to the same contained process multiple times simultaneously, screen sharing style, or quickly view the progress of your detached process.
 
The command docker attach is for attaching to the existing process. So when you exit, you exit the existing process.
 
If we use docker attach, we can use only one instance of shell. So if we want open new terminal with new instance of container's shell, we just need run docker exec
 
If the docker container was started using /bin/bash command, you can access it using attach, if not then you need to execute the command to create a bash instance inside the container using exec. Attach isn't for running an extra thing in a container, it's for attaching to the running process.
 
To stop a container, use CTRL-c. This key sequence sends SIGKILL to the container. If --sig-proxy is true (the default),CTRL-c sends a SIGINT to the container. You can detach from a container and leave it running using the CTRL-p CTRL-q key sequence.
 
;exec
 
"docker exec" is specifically for running new things in a already started container, be it a shell or some other process. The docker exec command runs a new command in a running container.
 
The command started using docker exec only runs while the container’s primary process (PID 1) is running, and it is not restarted if the container is restarted.
 
exec command works only on already running container. If the container is currently stopped, you need to first run it. So now you can run any command in running container just knowing its ID (or name):
 
<source>
docker exec <container_id_or_name> echo "Hello from container!"
docker run -it -d shykes/pybuilder /bin/bash
</source>
 
The most important here is the -d option, which stands for detached. It means that the command you initially provided to the container (/bin/bash) will be run in background and the container will not stop immediately.


= Dockerfile - python =
* [https://luis-sena.medium.com/creating-the-perfect-python-dockerfile-51bdec41f1c8 perfect python dockerfile] Medium


= References =
= References =
*[https://www.fromlatest.io/#/ Docker online lintel]
*[https://www.fromlatest.io/#/ Docker online lintel]
*[https://hub.docker.com/r/portainer/portainer/ portainer] Monitor your containers via Web GUI
*[https://hub.docker.com/r/portainer/portainer/ portainer] Monitor your containers via Web GUI
*[https://treescale.com/ treescale.com] Free private Docker registry

Latest revision as of 23:29, 31 May 2024

Containers taking a world J

Installation

General procedure:

  1. Make sure you don't have docker already installed from your packet manager
  2. The /var/lib/docker may be

To install the latest version of Docker with curl:

curl -sSL https://get.docker.com/ | sh

CentOS

sudo yum install bash-completion bash-completion-extras #optional, requires you log out
sudo yum install -y yum-utils device-mapper-persistent-data lvm2 #utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo #docker-ee.repo for EE edition
                      # --enable docker-ce-{edge|test} #for beta releases
sudo yum update
sudo yum clean all #not sure why this command is here
sudo yum install docker-ce
#old: sudo yum install -y --setopt=obsoletes=0 docker-ce-17.03.1.ce-1.el7.centos docker-ce-selinux-17.03.1.ce-1.el7.centos
sudo systemctl enable docker && sudo systemctl start docker && sudo systemctl status docker
yum-config-manager --disable jenkins #disable source to prevent accidental update ?jenkins?

Ubuntu 16.04, 18.04, 20.04

# Optional, clear out config files
sudo rm /etc/systemd/system/docker.service.d/docker.conf
sudo rm /etc/systemd/system/docker.service
sudo rm /etc/default/docker #environment file

# New docker package is called now 'docker-ce'
sudo apt-get remove docker docker-engine docker.io containerd runc docker-ce  # start fresh
sudo apt-get -yq install apt-transport-https ca-certificates curl gnupg-agent software-properties-common # apt over HTTPs
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - # Docker official GPG key
sudo apt-key fingerprint 0EBFCD88 #verify

#add the repository
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" # or {edge|test}
sudo apt-get update # optional

# Option 1 - install latest
sudo apt-get install docker-ce docker-ce-cli containerd.io

# Option 2 - install fixed version
sudo apt-cache madison docker-ce # display available versions
sudo apt-get   install docker-ce=<VERSION_STRING>          docker-ce-cli=<VERSION_STRING>          containerd.io
sudo apt-get   install docker-ce=18.09.0~3-0~ubuntu-bionic docker-ce-cli=18.09.0~3-0~ubuntu-bionic containerd.io
sudo apt-mark  hold    docker-ce docker-ce-cli containerd.io
sudo apt-mark  showhold # show packages that version upgrade has been put on hold

# Unhold
sudo apt-mark unhold   docker-ce docker-ce-cli containerd.io


Newer versions (>18.09.0) of Docker come with 3 packages:

  • containerd.io - daemon to interface with the OS API (in this case, LXC - Linux Containers), essentially decouples Docker from the OS, also provides container services for non-Docker container managers
  • docker-ce - Docker daemon, this is the part that does all the management work, requires the other two on Linux
  • docker-ce-cli - CLI tools to control the daemon, you can install them on their own if you want to control a remote Docker daemon


Example of how to run Jenkins docker image

Add a user to docker group

Add your user to docker group to be able to run docker commands without need of sudo as the docker.socket is owned by group docker.

sudo usermod -aG docker $(whoami)

# log in to the new docker group (to avoid having to log out / log in again)
newgrp docker


Reason

[root@piotr]$ ls -al /var/run/docker.sock
srw-rw----. 1 root docker 7 Jan 09:00 docker.sock

HTTP proxy

Configure docker if you run behind a proxy server. In this example CNTLM proxy runs on the host machine listening on localhost:3128. This example overrides the default docker.service file by adding configuration to the Docker systemd service file.

First, create a systemd drop-in directory for the docker service:

sudo mkdir /etc/systemd/system/docker.service.d
sudo vi    /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="HTTP_PROXY=http://172.31.1.1:3128/" #overrides previous entry
Environment="HTTPS_PROXY=http://172.31.1.1:3128/"
# If you have internal Docker registries that you need to contact without proxying you can specify them via the NO_PROXY environment variable
Environment="NO_PROXY=localhost,127.0.0.1,10.6.96.172,proxy.example.com:80"


Flush changes:

$ sudo systemctl daemon-reload

Verify that the configuration has been loaded:

$ systemctl show --property=Environment docker
Environment=HTTP_PROXY=http://proxy.example.com:80/

Restart Docker:

$ sudo systemctl restart docker

Docker create and run, basic options

# It will create a container but won't start it up
docker container create -it --name="my-container" ubuntu:latest /bin/bash
docker container start my-container

docker run -it --name="mycentos" centos:latest /bin/bash
# -i   :- interactive mode (attach to STDIN)          \command to execute when instantiating container 
# -t   :- attach to the current terminal (pseudo-TTY)
# -d   :- disconnect mode, daemon mode, detached mode, running the task in the background
# -p   :- publish to host exposed container port [ host_port(8080):container_exposedPort(80) ]
# --rm :- remove container after command has been executed
# --name="name_your_container"

# -e|--env MYVAR=123 exports/passing variable to the container, echo $MYVAR will have a value 123
# --privileged :- option will allow Docker to perform actions normally restricted, 
#                 like binding a device path to an internal container path.

Docker inspect

inspect image

docker image inspect centos:6
docker image inspect centos:6 --format '{{.ContainerConfig.Hostname}}' #just a single value
docker image inspect centos:6 --format '{{json .ContainerConfig}}'     #json key/value output
docker image inspect centos:6 --format '{{.RepoTags}}'                 #shows all associated tags with the image

Using --format is similar to jq

inspect container

Shows current configuration state of a docker container or an image.

docker inspect <container_name> | grep IPAddress
           "SecondaryIPAddresses": null,
           "IPAddress": "172.17.0.3",
                   "IPAddress": "172.17.0.3",

Attach/exec to a docker process

If you are running eg. /bin/bash as a command you can get attached to this running docker process. Note that when you exit the container will stop.

docker attach mycentos


To avoid stopping a container on exit of attach command we can use exec command.

docker exec -it mycentos /bin/bash


Attaching directly to a running container and then exiting the shell will cause the container to stop. Executing another shell in a running container and then exiting that shell will not stop the underlying container process started on instantiation.

Entrypoint, CMD, PID1 and tini

Entrypoint and receiving signals

Receiving and handling signals within Docker containers is just as important as for any other application. Remember that a container is just a group of processes running on your host, so you need to take care of the signals sent to your applications.


As a principal container management tool eg. docker stop sends a configurable (in Dockerfile) signal to the entrypoint of your application where SIGTERM - 15 - Termination (ANSI) is default.


ENTRYPOINT syntax
# exec form, require JSON array; IT SHOULD ALWAYS BE USED
ENTRYPOINT ["/app/bin/your-app", "arg1", "arg2"]

# shell form, it always runs as a subcommand of '/bin/sh -c', thus your application will never see any signal sent to it
ENTRYPOINT "/app/bin/your-app arg1 arg2"


ENTRYPOINT is a shell script

If application is started by shell script regular way, your shell spawns your application in a new process and you won’t receive signals from Docker. Therefore we need to tell shell to replace itself with your application using the exec command, check also exec syscall. Use:

/app/bin/my-app      # incorrect, signal won't be received by 'my-app'
exec /app/bin/my-app # correct way
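A minimal sketch of such an entrypoint script (file name and application path are illustrative):

#!/bin/sh
# docker-entrypoint.sh - run one-off setup, then replace the shell with the app,
# so the app becomes PID 1 and receives SIGTERM from 'docker stop'
set -e
: "${APP_CONFIG:=/etc/my-app.conf}"   # example setup step
exec /app/bin/my-app --config "$APP_CONFIG" "$@"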


ENTRYPOINT exec with a pipe commands causing starting a subshell

If you pipe in the exec line, the pipe forces the command to run in a subshell, with the usual consequence: no signals reach the app.

exec /app/bin/your-app | tai64n # here you want to add timestamps by piping through tai64n,
                                # causing running your command in a subshell


Let another program to be PID1 and handle signalling
  • tini
  • dump-init
ENTRYPOINT ["/tini", "-v", "--", "/app/bin/docker-entrypoint.sh"]

tini and dumb-init are also able to proxy signals to process groups which technically allows you to pipe your output. However, your pipe target receives that signal at the same time so you can't log anything on cleanup lest you crave race conditions and SIGPIPEs. So, it's better to avoid logging at termination at all.


Change signal that will terminate your container process

Listen for SIGTERM or set STOPSIGNAL in your Dockerfile.

vi Dockerfile
STOPSIGNAL SIGINT # this will trigger container termination process if someone press Ctrl^C
References

Tini

It's a tiny but valid init for containers:

  • protects you from software that accidentally creates zombie processes
  • ensures that the default signal handlers work for the software you run in your Docker image
  • does so completely transparently! Docker images that work without Tini will work with Tini without any changes
  • Docker 1.13+ has Tini included, to enable Tini, just pass the --init flag to docker run


Understanding Tini

After spawning your process, Tini will wait for signals and forward those to the child process, and periodically reap zombie processes that may be created within your container. When the "first" child process exits (/your/program in the examples above), Tini exits as well, with the exit code of the child process (so you can check your container's exit code to know whether the child exited successfully).


Add Tini - general dynamicly-linked library (in the 10KB range)

ENV TINI_VERSION v0.18.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]

# Run your program under Tini
CMD ["/your/program", "-and", "-its", "arguments"]
# or docker run your-image /your/program ...


Add Tini to Alpine based image

RUN apk add --no-cache tini
# Tini is now available at /sbin/tini
ENTRYPOINT ["/sbin/tini", "--"]


Existing entrypoint

ENTRYPOINT ["/tini", "--", "/docker-entrypoint.sh"]


References

Mount directory in container

We can mount host directory into docker container so the content will be available from the container

docker run -it -v /mnt/sdb1:/opt/java pio2pio/java8
# syntax: -v /path/on/host:/path/in/container

Build image

Dockerfile

Each RUN line creates a layer (via an intermediate container), so where possible we should join lines so the image ends up with fewer layers.

$ wget jkd1.8.0_111.tar.gz
$ cat > Dockerfile <<- EOF #'<<-' heredoc with '-' minus ignores <tab> indent
ARG TAGVERSION=6                    #only command allowed b4 FROM
FROM ubuntu:${TAGVERSION}
FROM ubuntu:latest                  #defines base image eg. ubuntu:16.04
LABEL maintainer="myname@gmail.com" #key/value pair added to a metadata of the image

ARG ARG1=value1

ENV ENVIRONMENT="prod"
ENV SHARE /usr/local/share  #define env variables with syntax ENV space EnvironmetVariable space Value
ENV JAVA_HOME $SHARE/java

# COPY jkd1.8.0_111.tar.gz /tmp #works only with files, copy a file to container filesystem, here to /tmp
# ADD http://example.com/file.txt
ADD jkd1.8.0_111.tar.gz /  #add files into the image root folder, can add also URLs

# SHELL ["executable","params"] #overrides /bin/sh -c for RUN,CMD, etc..

# Executes commands during build process in a new layer E.g., it is often used for installing software packages
RUN mv /jkd1.8.0_111.tar.gz $JAVA_HOME 
RUN apt-get update
RUN ["apt-get", "update", "-y"] #in json array format, allows to run a commands but does not require shell executable

VOLUME /mymount_point #this command does not mount anything from a host, just creates a mountpoint

EXPOSE 80 #it doesn't automatically map the port to a hosts

#containers usually don't have system maangement eg. systemctl/service/init.d as designed to run as single process
#the entrypoint becomes the main command that starts the main process
ENTRYPOINT apachectl "-DFOREGROUND" #think about it as the MAIN_PURPOSE_OF_CONTAINER command. 
# It's always run by default it cannot be overridden

#Single command that will run after the image has been created. Only one per dockerfile, can be overriden.
CMD ["/bin/bash"]

# STOPSIGNAL

EOF

Build

docker build --tag myrepo/java8 .  #-f point to custom Dockerfile name eg. -f Dockerfile2
# myrepo dockerhub username, java8 -image name, 
# .      directory where is the Dockerfile

docker build -t myrepo/java8 . --pull --no-cache --squash
# --pull     regardless a local copy of an image can exist force to download a new image
# --no-cache don't use cache to build, forcing to rebuild all interim containers
# --squash   after the build squash all layers into a single layer. 

docker images             #list images
docker push myrepo/java8 #upload the image to DockerHub repository


--squash is available only on a docker daemon with experimental features enabled.

Manage containers and images

Run a container

When you run a container you create a new container from an image that has already been built / is available locally, and then put it in the running state

  • -d detached mode, the container will continue to run after the CMD or passed on command exited
  • -i interactive mode, allows you to login in /ssh to the container


# docker container run [OPTIONS]           IMAGE    [COMMAND] [ARG...] # usage
  docker container run -it --name mycentos centos:6 /bin/bash
  docker           run -it pio2pio/java8 #container section command is optional
# -i       :- run in interactive mode, then run command /bin/bash
# --rm     :- will delete container after run
# --publish | -p 80:8080 :- publish exposed container port 80-> to 8080 on the docker-host
# --publish-all | -P     :- publish all exposed container ports to random port >32768

List images

ctop #top for containers
docker ps -a #list containers
docker image ls #list images
docker images #short form of the command above
docker images --no-trunc
docker images -q #--quiet
docker images --filer "before=centos:6"

# List exposed ports on a container
docker port CONTAINER [PRIVATE_PORT[/PROTOCOL]]
docker port web2
80/tcp -> 0.0.0.0:81


Search images in remote repository

Search DockerHub for images. You may need to do `docker login` first

IMAGE=ubuntu
docker search $IMAGE
NAME                            DESCRIPTION                                     STARS OFFICIAL   AUTOMATED
ubuntu                          Ubuntu is a Debian-based Linux operating sys…   8206  [OK]       
dorowu/ubuntu-desktop-lxde-vnc  Ubuntu with openssh-server and NoVNC            210              [OK]
rastasheep/ubuntu-sshd          Dockerized SSH service, built on top of offi…   167              [OK]

IMAGE=apache
docker search $IMAGE --filter stars=50 # search images that have 50 or more stars
docker search $IMAGE --limit 10        # display top 10 images


List all available tags

IMAGE=nginx
wget -q https://registry.hub.docker.com/v1/repositories/${IMAGE}/tags -O - | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n'  | awk -F: '{print $3}' | sort -V

Pull images

<name>:<tag>
docker pull hello-world:latest # pull latest
docker pull --all hello-world  # pull all tags
docker pull --disable-content-trust hello-world # disable verification 

docker images --digests #displays sha256: digest of an image

# Dangling images - transitional images

from Amazon ECR

Docker login to ECR service using IAM

docker login does not support native IAM authentication methods. Therefore use a command below that will retrieve, decode, and convert the authorization IAM token into a pre-generated docker login command. Therefore produced login credentials will assume your current IAM User/Role permissions. If your current IAM user can only pull from ECR, after login with docker login you still won't be able to push image to the registry. Example error you may get is not authorized to perform: ecr:InitiateLayerUpload


Login to ECR service, your IAM user requires to have relevant pull/push permissions

eval $(aws ecr get-login --region eu-west-1 --no-include-email)
     # aws ecr get-login # generates below docker command with the login token
     # docker login -u AWS -p **token** https://$ACCOUNT.dkr.ecr.us-east-1.amazonaws.com # <- output


Docker login to ECR singular repository, min awscli v1.17

ACCOUNT=111111111111
REPOSITORY=myrepo
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin $ACCOUNT.dkr.ecr.eu-west-1.amazonaws.com/$REPOSITORY

push to Amazon ECR

# List images
$ docker images
REPOSITORY                                                 TAG   IMAGE ID     CREATED        SIZE
ansible-aws                                                2.0.1 b09807c20c96 5 minutes ago  570MB
111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws   1.0.0 9bf35fe9cc0e 4 weeks ago    515MB

# Tag an image 'b09807c20c96'
docker tag b09807c20c96 111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws:2.0.1

# List images, to verify your newly tagged one
$ docker images
REPOSITORY                                                 TAG   IMAGE ID     CREATED        SIZE
111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws   2.0.1 b09807c20c96 6 minutes ago  570MB # <- new tagged image
ansible-aws                                                2.0.1 b09807c20c96 6 minutes ago  570MB
111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws   1.0.0 9bf35fe9cc0e 4 weeks ago    515MB

# Push an image to ECR
docker push 111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws:2.0.1
The push refers to repository [111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws]
2c405c66e675: Pushed 
...
77cae8ab23bf: Layer already exists 
2.0.1: digest: sha256:111111111193969807708e1f6aea2b19a08054f418b07cf64016a6d1111111111 size: 1796

Save and import image

In order to move an image to another filesystem we can save it into a .tar file

# Export
docker image save myrepo/centos:v2 > mycentos.v2.tar
tar -tvf mycentos.v2.tar

# Import (creates a new filesystem image from the tarball; pairs with 'docker container export')
docker image import mycentos.v2.tar <new_image_name>

# Load from a stream (pairs with 'docker image save' and preserves layers and tags)
docker load < mycentos.v2.tar #or --input mycentos.v2.tar to avoid redirections

Export or commit a container into an image

Let's say we want to modify the stock image centos:6 by installing Apache interactively, set it to autostart, then export it as a new image. Let's do it!

docker pull centos:6
docker container run -it --name apache-centos6 centos:6
# Interactively do: yum -y update; yum install -y httpd; chkconfig httpd on; exit

# Save container changes - option1
docker commit -m "added httpd daemon" -a "Piotr" b237d65fd197 newcentos:withapache #creates new image from a container's changes
docker commit -m "added httpd daemon" -a "Piotr" <container_name> <repo>/<image>:<tag>
# -a :- author

# Save container changes - option2
docker container export apache-centos6 > apache-centos6.tar
docker image     import apache-centos6.tar newcentos:withapache

docker images
REPOSITORY    TAG          IMAGE ID            CREATED             SIZE
newcentos     withapache   ea5215fb46ed        50 seconds ago      272MB

docker image history newcentos:withapache
IMAGE        CREATED        CREATED BY   SIZE   COMMENT
ea5215fb46ed 2 minutes ago               272MB  Imported from -


The difference between the two ways of creating an image from a container:

  • docker container commit - keeps the parent image's layer history and image metadata (CMD, ENV, EXPOSE, ...)
  • docker container export - flattens the container's filesystem into a single layer and drops the metadata, which is why the resulting image is usually smaller (see the comparison sketch below)
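
A quick way to compare the two, as a sketch reusing the apache-centos6 container from the example above (the tag names viacommit/viaexport are just examples):

docker container commit apache-centos6 newcentos:viacommit     # keeps parent layers and image config (CMD, ENV, ...)
docker container export apache-centos6 > apache-centos6.tar
docker image     import apache-centos6.tar newcentos:viaexport # single flattened layer, metadata dropped

docker image history newcentos:viacommit  # full layer history inherited from centos:6
docker image history newcentos:viaexport  # a single 'Imported from -' layer
docker images newcentos                   # compare sizes; the imported image is usually smaller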

Tag images

Tags are usually used to give an official image a new name that we plan to modify. This allows you to create a new image, run a new container from the tag, and delete the original image without affecting the new image or any container started from the newly tagged image.

docker image tag #long version
docker tag centos:6 myucentos:v1 #this will create a duplicate of centos:6 named myucentos:v1

Tagging allows you to modify the repository name and manages references to images located on the filesystem.

History of an image

We can display the history of layers that created an image by showing the interim images, built in creation order. It shows only layers created on the local filesystem.

docker image history myrepo/centos:v2

Stop and delete all containers

docker stop $(docker ps -aq) && docker rm $(docker ps -aq)

Delete image

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
company-repo        0.1.0               f796d7f843cc        About an hour ago   888MB
<none>              <none>              04fbac2fdf48        3 hours ago         565MB
ubuntu              16.04               7aa3602ab41e        3 weeks ago         115MB

# Delete image
$ docker rmi company-repo:0.1.0
Untagged: company-repo:0.1.0
Deleted: sha256:e5cca6a080a5c65eacff98e1b17eeb7be02651849b431b46b074899c088bd42a
..
Deleted: sha256:bc7cda232a2319578324aae620c4537938743e46081955c4dd0743a89e9e8183

# Prune image - delete dangling (temp/interim) images. 
# These are not associated with end-product image or containers.
docker image prune
docker image prune -a #remove all images not associated with any container

Cleaning up space by removing docker objects

This applies to both standalone Docker hosts and swarm systems.

docker system df     #show disk usage
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              1                   0                   131.7MB             131.7MB (100%)
Containers          0                   0                   0B                  0B
Local Volumes       0                   0                   0B                  0B
Build Cache         0                   0                   0B                  0B

docker network ls #note all networks below are system created, so won't get removed
NETWORK ID          NAME                DRIVER              SCOPE
452b1c428209        bridge              bridge              local
528db1bf80f1        docker_gwbridge     bridge              local
832c8c6d73a5        host                host                local
t8jxy5vsy5on        ingress             overlay             swarm
815a9c2c4005        none                null                local

docker system prune #removes objects created by a user only, on the current node only
                    #add --volumes to remove them as well
WARNING! This will remove:
        - all stopped containers
        - all networks not used by at least one container
        - all dangling images
        - all dangling build cache
Are you sure you want to continue? [y/N]

docker system prune -a --volumes #remove all

Docker Volumes

Docker's 'copy-on-write' philosophy drives both performance and efficiency. Only the top layer is writable, and it stores just the delta against the underlying layers.

Volumes can be mounted to your container instances from your underlying host systems.

Volumes are not controlled by the storage driver (they are plain files/directories on the host filesystem under /var/lib/docker/volumes), so they bypass it. As a result, their contents are not affected when a container is removed.

Volumes are data mounts created on the host in the /var/lib/docker/volumes/ directory and referenced by name, e.g. in a Dockerfile VOLUME directive.

docker volume ls                   #list volumes created by VOLUME directive in a Dockerfile
sudo tree /var/lib/docker/volumes/ #list volumes on host-side
docker volume create  my-vol-1
docker volume inspect my-vol-1
[
    {
        "CreatedAt": "2019-01-17T08:47:01Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-vol-1/_data",
        "Name": "my-vol-1",
        "Options": {},
        "Scope": "local"
    }
]


Using volumes with Swarm services

docker container run  -d --name web1 -p 80:80 --mount source=my-vol-1,target=/internal-mount httpd            #container (no --replicas)
docker service create --name web1 -p 80:80 --mount source=my-vol-1,target=/internal-mount --replicas 3 httpd #swarm service
# --volume|-v is not supported with services, use --mount; the named volume is created on every node
# that runs a replica, but the files inside it are not replicated across nodes

docker exec -it web1 /bin/bash #connect to the container
root@c123:/ echo "Created when connected to container: volume-web1" > /internal-mount/local.txt; exit

# prove the file is on a host filesystem created volume
user@dockerhost$ cat /var/lib/docker/volumes/my-vol-1/_data/local.txt


Host storage mount

Bind mounting binds a host filesystem directory to a container directory. It is not a volume mount, so it does not require a mount point or a Docker-managed volume on the host.

mkdir /home/user/web1
echo "web1 index" > /home/user/web1/index.html
docker container run -d --name testweb -p 80:80 --mount type=bind,source=/home/user/web1,target=/usr/local/apache2/htdocs httpd
curl http://localhost:80

Removing a service does not remove the volume unless you delete the volume itself. In that case it will be removed from all swarm nodes.

Networking

Container Network Model

It's a network implementation concept built on multiple private networks overlaid across multiple hosts and managed by IPAM, the protocol that keeps track of and provisions addresses.

Main 3 components:

  • sandbox - contains the configuration of a container's network stack, incl. management of interfaces, routing and DNS. An implementation of a Sandbox could be e.g. a Linux network namespace. A Sandbox may contain many endpoints from multiple networks.
  • endpoint - joins a Sandbox to a Network. Interfaces, switches, ports, etc., which belong to only one network at a time. The Endpoint construct exists so the actual connection to the network can be abstracted away from the application. This helps maintain portability.
  • network - a collection of endpoints that can communicate directly (bridges, VLANs, etc.) and can consist of 1..N endpoints


IPAM (Internet Protocol Address Management)

Managing addresses across multiple hosts on separate physical networks, while providing routing from the underlying swarm networks to the outside, is the IPAM problem for Docker. Depending on the network driver chosen, IPAM is handled at different layers of the stack. Network drivers enable IPAM through DHCP or plugin drivers so that the complex implementation, which would otherwise lead to overlapping addresses, is supported.
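
As an illustration only (the network name custom-ipam-net is made up), the IPAM settings can be supplied explicitly when creating a network:

docker network create \
  --driver bridge \
  --subnet 10.10.0.0/16 \
  --ip-range 10.10.1.0/24 \
  --gateway 10.10.0.1 \
  custom-ipam-net
docker network inspect custom-ipam-net --format '{{json .IPAM}}' #shows the driver, subnet and gateway chosen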


Publish exposed container/service ports

Publishing modes
host
set using --publish mode=host,published=8080,target=80; makes the port available only on the underlying hosts where a service replica is running, not across the rest of the swarm; defeats the routing mesh, so the user is responsible for routing
ingress
the default; the routing mesh makes sure all published ports are available on all hosts in the swarm cluster, regardless of whether a service replica is running on them or not. See the sketch below.
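
A minimal sketch of both modes using the long --publish syntax with docker service create (the service names are examples):

docker service create --name web-ingress --publish published=8080,target=80 httpd
# ingress (default): port 8080 answers on every node in the swarm via the routing mesh

docker service create --name web-host --publish mode=host,published=8081,target=80 httpd
# host: port 8081 is bound only on nodes actually running a replica; no routing mesh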


# List exposed ports on a container
docker port CONTAINER [PRIVATE_PORT[/PROTOCOL]]
docker port web2
80/tcp -> 0.0.0.0:81

# Publish port
                                          host  :  container
                                             \  :  /
docker container run -d --name web1 --publish 81:80 httpd
# --publish | -p :- publish to host exposed container port
# 81             :- port on a host, can use range eg. 81-85, so based on port availability port will be used
# 80             :- exposed port on a container

ss -lnt
State       Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN      0      100        127.0.0.1:25              *:*
LISTEN      0      128                *:22              *:*
LISTEN      0      100              ::1:25             :::*
LISTEN      0      128               :::81             :::*
LISTEN      0      128               :::22             :::*

docker container run -d --name testweb1 --publish-all httpd
# --publish-all | -P publish all container exposed ports to random host ports >32768
CONTAINER ID IMAGE COMMAND              CREATED STATUS PORTS                   NAMES
c63efe9cbb94 httpd "httpd-foreground"   2 sec.. Up 1 s 80/tcp                  testweb  #port exposed but not published
cb0711134eb5 httpd "httpd-foreground"   4 sec.. Up 2 s 0.0.0.0:32769->80/tcp   testweb1 #port exposed and published to host:32769

Network drivers

The default network on a stand-alone Docker host is the bridge network.

List of native (part of Docker Engine) network drivers
bridge
default on stand-alone hosts; it's a private network internal to the host system, all containers on this host using the bridge network can communicate, and external access is granted by port exposure or static routes added with the host as the gateway for that network
none
used when a container does not need any networking; it can still be accessed from the host using docker attach [containerID] or docker exec -it [containerID] commands
host
aka host-only networking, only accessible via the underlying host; access to services can be provided by exposing ports to the host system
overlay
swarm-scope driver, allows communication between all Docker daemons in a cluster, self-extending if needed, managed by the Swarm manager; it's the default mode of Swarm communication
ingress
extended network across all nodes in the cluster; a special overlay network that load-balances network traffic amongst a given service's working nodes; maintains a list of all IP addresses from nodes that participate in that service (using the IPVS module) and, when a request comes in, routes it to one of them for the indicated service; provides the 'routing mesh' that allows services to be exposed to the external network without having a replica running on every node in the swarm
docker gateway bridge
a special bridge network that allows overlay networks (incl. ingress) to access an individual Docker daemon's physical network; every container run within a service is connected to the local Docker daemon's host network; automatically created when Docker is initialised or joined to a swarm by the docker swarm init or docker swarm join commands.
Docker interfaces
  • docker0 - adapter installed by default during Docker setup; it is assigned an address range that determines the local host IPs available to the containers running on it (see the check below)
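
A quick check of the docker0 address range on the host, and the matching subnet reported by Docker:

ip addr show docker0 #host-side bridge interface, e.g. 172.17.0.1/16
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' #same subnet as seen by Docker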
Default bridge network
docker network ls #default networks list
NETWORK ID    NAME                DRIVER   SCOPE
130833da0920  bridge              bridge   local
528db1bf80f1  docker_gwbridge     bridge   local
832c8c6d73a5  host                host     local
t8jxy5vsy5on  ingress             overlay  swarm  #'ingress' special network 1 per cluster
815a9c2c4005  none                null     local

docker network inspect bridge #bridge is a default network containers are deployed to

docker container run -d --name web1 -p 8080:80 httpd #expose container port :80 -> :8080 on the docker host
docker container inspect web1 | grep IPAdd
IPAddr=$(docker container inspect --format="{{.NetworkSettings.Networks.bridge.IPAddress}}" web1) #get container ip
curl http://${IPAddr}


Create bridge network
docker network create --driver=bridge --subnet=192.168.1.0/24 --opt "com.docker.network.driver.mtu"=1501 deviceeth0

docker network ls
docker network inspect deviceeth0


Create overlay network
docker network create --driver=overlay --subnet=192.168.1.0/24 --gateway=192.168.1.1 overlay0
docker network ls 
NETWORK ID          NAME                DRIVER              SCOPE
130833da0920        bridge              bridge              local
528db1bf80f1        docker_gwbridge     bridge              local
832c8c6d73a5        host                host                local
t8jxy5vsy5on        ingress             overlay             swarm
815a9c2c4005        none                null                local
2x6bq1czzdc1        overlay0            overlay             swarm


Inspect network
docker network inspect overlay0
[
    {
        "Name": "overlay0",
        "Id": "2x6bq1czzdc102sl6ge7gpm3w",
        "Created": "2019-01-19T11:24:02.146339562Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "192.168.1.0/24",
                    "Gateway": "192.168.1.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": null
    }
]


Inspect container network
docker container inspect testweb --format {{.HostConfig.NetworkMode}}
overlay0
docker container inspect testweb --format {{.NetworkSettings.Networks.overlay0.IPAddress}}
192.168.1.3


Connecting to or disconnecting from a network can be done while a container is running. Connecting does not disconnect the container from its current network.

docker network connect --ip=192.168.1.10 deviceeth0 web1
docker container inspect --format="{{.NetworkSettings.Networks.bridge.IPAddress}}" web1
IPAddr=$(docker container inspect --format="{{.NetworkSettings.Networks.deviceeth0.IPAddress}}" web1)
curl http://${IPAddr}
docker network disconnect deviceeth0 web1

Overlay network in Swarm cluster

An overlay network can be created/removed/updated like any other Docker object. It allows inter-service (container) communication, where the --gateway IP address is used to reach the outside, e.g. the Internet or the host network. When the overlay network is created on the manager host, it gets created on worker nodes only once it is referenced by a service that uses it. See below.

swarm-mgr$ docker network create --driver=overlay --subnet=192.168.1.0/24 --gateway=192.168.1.1 overlay0
swarm-mgr$ docker service create --name web1 -p 8080:80 --network=overlay0 --replicas 2 httpd
uvxymzdkcfwvs2oznbnk7nv03
overall progress: 2 out of 2 tasks 
1/2: running   [==================================================>] 
2/2: running   [==================================================>] 

swarm-wkr$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
ba175ebd2a6f        bridge              bridge              local
a5848f607d8c        docker_gwbridge     bridge              local
fccfb9c1fdc3        host                host                local
t8jxy5vsy5on        ingress             overlay             swarm
127b10783faa        none                null                local
2x6bq1czzdc1        overlay0            overlay             swarm

# remove the network; only affects newly created services, not the ones already running
swarm-mgr$ docker service update --network-rm=overlay0 web1


DNS

docker container run -d --name testweb1 -P --dns=8.8.8.8 \
                                           --dns=8.8.4.4 \
                                           --dns-search "mydomain.local" \
                                           httpd
# -P :- publish-all exposed ports to random port >32768

docker container exec -it testweb1 /bin/bash -c 'cat /etc/resolv.conf'
search us-east-2.compute.internal
nameserver 8.8.8.8
nameserver 8.8.4.4

# System-wide settings, requires docker.service restart
sudo tee /etc/docker/daemon.json >/dev/null <<EOF
{ 
  "dns": ["8.8.8.8", "8.8.4.4"]
}
EOF
sudo systemctl restart docker.service #required

Examples

Lint - best practices

$ docker run --rm -i hadolint/hadolint < Dockerfile
/dev/stdin:9:16 unexpected newline expecting "\ ", '=', a space followed by the value for the variable 'MAC_ADDRESS', or at least one space

Default project

As good practice, all Docker files should be source controlled. A basic self-explanatory structure can look like the one below, and the skeleton can be created with this command:

mkdir APROJECT && d=$_; touch $d/{build.sh,run.sh,Dockerfile,README.md,VERSION};mkdir $d/assets; touch $_/{entrypoint.sh,install.sh}

└── APROJECT
    ├── assets
    │   ├── entrypoint.sh
    │   └── install.sh
    ├── build.sh
    ├── Dockerfile
    ├── README.md
    └── VERSION

Dockerfile

A Dockerfile is simply a build file.

Semantics

entrypoint
Container config: what to start when this image is run.
entrypoint and cmd
Docker allows you to define an Entrypoint and Cmd which you can mix and match in a Dockerfile. Entrypoint is the executable, and Cmd are the arguments passed to the Entrypoint. The Dockerfile schema is quite lenient and allows users to set Cmd without Entrypoint, in which case the first argument in Cmd becomes the executable to run.
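
A minimal sketch illustrating how Entrypoint and Cmd combine (the image name entrypoint-demo is made up):

cat >Dockerfile <<EOF
FROM alpine:latest
# ENTRYPOINT is the executable, CMD provides the default arguments
ENTRYPOINT ["echo", "Hello"]
CMD ["world"]
EOF
docker build -t entrypoint-demo .

docker run --rm entrypoint-demo                             # prints: Hello world
docker run --rm entrypoint-demo Docker                      # arguments replace CMD, prints: Hello Docker
docker run --rm --entrypoint printenv entrypoint-demo PATH  # override ENTRYPOINT itself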

User management

RUN addgroup --gid 1001 jenkins -q
RUN adduser  --gid 1001 --home /tank --disabled-password --gecos '' --uid 1001 jenkins
# --gid add user to group 1001
# --gecos parameter is used to set the additional information. In this case it is just empty.
# --disabled-password it's like  --disabled-login,  but  logins  are still possible (for example using SSH RSA keys) but not using password authentication
USER jenkins:jenkins #sets user for next RUN, CMD and ENTRYPOINT command
WORKDIR /tank #changes cwd for next RUN, CMD, ENTRYPOINT, COPY and ADD

Multi-stage build

Introduced in Docker 17.06, multi-stage builds allow multiple FROM statements in a single Dockerfile.

FROM microsoft/aspnetcore-build AS build-env
WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o output

# build runtime image
FROM microsoft/aspnetcore
WORKDIR /app
COPY --from=build-env /app/output .   #multi stage: copy files from previous container [as build-env]
ENTRYPOINT ["dotnet", "LetsKube.dll"]

Squash an image

Docker uses a union filesystem that allows multiple layers to share common content, with changes applied in the writable top layer. There is no official way to flatten layers to a single storage layer or minimize an image size (as of 2017). Below is just a practical approach.

  1. Start a container from the image
  2. Export the container to a .tar with its whole filesystem
  3. Import the tar under a new image name


Once the process completes and the original image gets deleted, the docker image history command will show only one layer for the new image. Often the image will be smaller.

# run a container from an image
docker run -d --name <contr_name> myweb:v3
# export the container's filesystem to .tar
docker export <contr_name> > myweb.v3.tar
# import it back as a new, single-layer image
docker import myweb.v3.tar myweb:v4
# note: 'docker save' / 'docker load' work on images (not containers) and preserve all layers, so they do not squash
docker save <image id> > image.tar; docker load < image.tar

Gracefully stop / kill a container

all below are only notes

Trap Ctrl-C, then kill/rm the container (see the sketch below). Relevant docker run flags:

  • --init - run an init process as PID 1 inside the container that forwards signals and reaps processes
  • --sig-proxy - proxy received signals to the container process; this only works when --tty=false, and it defaults to true
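
A minimal sketch of such a wrapper; the container name graceful-web and the httpd image are just examples:

#!/usr/bin/env bash
NAME=graceful-web

cleanup() {
  echo "Stopping and removing ${NAME}..."
  docker stop -t 10 "${NAME}" >/dev/null  # SIGTERM first, SIGKILL after 10s
  docker rm "${NAME}" >/dev/null
  exit 0
}
trap cleanup INT TERM

docker run -d --name "${NAME}" httpd
docker logs -f "${NAME}" &   # follow logs in the background
wait                         # keep the script alive until Ctrl-C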

Proxy

If you are behind a corporate proxy, you should use the Docker client config file ~/.docker/config.json. It requires Docker 17.07 or newer.

{
 "proxies":
 {
   "default":
   {
     "httpProxy": "http://10.0.0.1:3128",
     "httpsProxy": "http://10.0.0.1:3128",
     "noProxy": "localhost,127.0.0.1,*.test.example.com,.example2.com"
   }
 }
}

More details can be found in the Docker documentation.

Insecure registries

These can be added in different places; the order below reflects the latest practices and versioning.

docker-ce 18.6 - add to /etc/docker/daemon.json
{
    "insecure-registries" : [ "localhost:443","10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16" ]
}
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl show docker | grep Env
docker info #check Insecure Registries
Using an environment file, prior to version 18
$ sudo vi /etc/default/docker
DOCKER_HOME='--graph=/tank/docker'
DOCKER_GROUP='--group=docker'
DOCKER_LOG_DRIVER='--log-driver=json-file'
DOCKER_STORAGE_DRIVER='--storage-driver=btrfs'
DOCKER_ICC='--icc=false'
DOCKER_IPMASQ='--ip-masq=true'
DOCKER_IPTABLES='--iptables=true'
DOCKER_IPFORWARD='--ip-forward=true'
DOCKER_ADDRESSES='--host=unix:///var/run/docker.sock'
DOCKER_INSECURE_REGISTRIES='--insecure-registry 10.0.0.0/8 --insecure-registry 172.16.0.0/12 --insecure-registry 192.168.0.0/16'
DOCKER_OPTS="${DOCKER_HOME} ${DOCKER_GROUP} ${DOCKER_LOG_DRIVER} ${DOCKER_STORAGE_DRIVER} ${DOCKER_ICC} ${DOCKER_IPMASQ} ${DOCKER_IPTABLES} ${DOCKER_IPFORWARD} ${DOCKER_ADDRESSES} ${DOCKER_INSECURE_REGISTRIES}"

$ sudo vi /etc/systemd/system/docker.service.d/docker.conf
[Service]
EnvironmentFile=-/etc/default/docker
ExecStart=/usr/bin/dockerd $DOCKER_HOME $DOCKER_GROUP $DOCKER_LOG_DRIVER $DOCKER_STORAGE_DRIVER $DOCKER_ICC $DOCKER_IPMASQ $DOCKER_IPTABLES $DOCKER_IPFORWARD $DOCKER_ADDRESSES $DOCKER_INSECURE_REGISTRIES

$ sudo vi /etc/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target docker.socket firewalld.service
Wants=network-online.target
Requires=docker.socket

[Service]
EnvironmentFile=-/etc/default/docker
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd://
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Run docker without sudo

Adding a user to the docker group should be sufficient. However, on AppArmor, SELinux or a filesystem with ACLs enabled, additional permissions might be required to access the socket file.

$ ll /var/run/docker.sock
srw-rw---- 1 root docker 0 Sep  6 12:31 /var/run/docker.sock=
# ACL
$ sudo getfacl /var/run/docker.sock
getfacl: Removing leading '/' from absolute path names
# file: var/run/docker.sock
# owner: root
# group: docker
user::rw-
group::rw-
other::---

# Grant ACL to the jenkins user
$ sudo setfacl -m user:jenkins:rw /var/run/docker.sock

$ sudo getfacl /var/run/docker.sock
getfacl: Removing leading '/' from absolute path names
# file: var/run/docker.sock
# owner: root
# group: docker
user::rw-
user:jenkins:rw-
group::rw-
mask::rw-
other::---

Docker Enterprise Edition

Components:

  • Docker daemon (fka "Engine")
  • Docker Trusted Registry (DTR)
  • Docker Universal Control Plane (UCP)

Docker Swarm

Swarm - sizing

Universal Control Plane (UCP)

This is only for the Enterprise Edition

  • ports for managers and workers, inbound/outbound

Hardware requirements:

  • 8 GB RAM for managers or DTR (Docker Trusted Registry) nodes
  • 4 GB RAM for workers
  • 3 GB free disk space


Performance Consideration (Timing)

Component                              Timeout(ms)  Configurable
Raft consensus between manager nodes   3000         no
Gossip protocol for overlay networking 5000         no
etcd                                   500          yes
RethinkDB                              10000        no
Stand-alone swarm                      90000        no


Compatibility Docker EE

  • Docker Engine 17.06+
  • DTR 2.3+
  • UCP 2.2+

Swarm with single host manager

# Initialise Swarm
docker swarm init --advertise-addr 172.31.16.10 # you get a SWMTKN- join token
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo 172.31.16.10:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

# Join tokens
docker swarm join-token manager #display manager join-token, run on manager
docker swarm join-token worker  #display worker  join-token, run on manager

# Join worker, run new-worker-node
#                                 ->            swarm cluster id                    <-> this part is mgr/wkr <- -> mgr node <-
docker swarm join --token SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo 172.31.16.10:2377

# Join another manager, run on new-manager-node
docker swarm join-token manager #run on primary manager if you wish add another manager
# in output you get a token. You notice that 1st part up to dash identifies Swarm cluster and the other part is role id.

# join to swarm (cluster), token will identify a role in the cluster manager or worker
docker swarm join --token SWMTKN-xxxx
docker swarm join --token SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo 172.31.16.10:2377
This node joined a swarm as a worker.


Check Swarm status

docker node ls
[cloud_user@ip-172-31-16-10 swarm-manager]$ docker node ls
ID                            HOSTNAME                          STATUS   AVAILABILITY MANAGER STATUS ENGINE VERSION
641bfndn49b1i1dj17s8cirgw *   ip-172-31-16-10.mylabserver.com   Ready    Active       Leader         18.09.0
vlw7te728z7bvd7ulb3hn08am     ip-172-31-16-94.mylabserver.com   Ready    Active                      18.09.0

docker system info | grep -A 7 Swarm
Swarm: active
 NodeID: 641bfndn49b1i1dj17s8cirgw
 Is Manager: true
 ClusterID: 4jqxdmfd0w5pc4if4fskgd5nq
 Managers: 1
 Nodes: 2
 Default Address Pool: 10.0.0.0/8  
 SubnetSize: 24
Troubleshooting
sudo systemctl disable firewalld && sudo systemctl stop firewalld # CentOS
printf "\n10.0.0.11 mgr01\n10.0.0.12 node01\n" | sudo tee -a /etc/hosts # Add nodes to the hosts file

Swarm cluster

docker node update --availability drain [node] #drain services off manager-only nodes
docker service update --force [service_name]   #force re-balance of services across the cluster

docker swarm leave #node leaves a cluster

Locking / unlocking swarm cluster

The Raft logs used by Swarm managers are encrypted on disk, but by default the encryption keys are stored on the nodes, so access to a node gives access to the keys. Auto-lock protects the cluster further by requiring an unlock key whenever a manager node restarts.

docker swarm init   --auto-lock=true #initialize with auto-lock enabled
docker swarm update --auto-lock=true #update the current swarm
# both will print an unlock key (SWMKEY-...)
docker swarm unlock #it'll ask for the unlock token
docker swarm update --auto-lock=false #disable key locking


If you have access to a manager you can always get unlock key using:

docker swarm unlock-key

Key management

docker swarm unlock-key --rotate #could be in a cron

Backup and restore swarm cluster

This process describes how to back up the whole cluster configuration so it can be restored on a new set of servers.

Create docker apps running across swarm

docker service create --name bkweb --publish 80:80 --replicas 2 httpd
$ docker service ls
ID           NAME      MODE          REPLICAS  IMAGE         PORTS
q9jki3n2hffm bkweb     replicated    2/2       httpd:latest  *:80->80/tcp

$ docker service ps bkweb #note containers run on 2 different nodes
ID           NAME      IMAGE         NODE                      DESIRED STATE CURRENT STATE          
j964jm1lq3q5 bkweb.1   httpd:latest  server2c.mylabserver.com  Running       Running about a minute ago
jpjx3mk7hhm0 bkweb.2   httpd:latest  server1c.mylabserver.com  Running       Running about a minute ago
Backup state files
sudo -i
cd /var/lib/docker/swarm
cat docker-state.json #contains info about managers, workers, certificates, etc..
cat state.json
sudo systemctl stop docker.service

# Backup swarm cluster, this file can be then used to recover whole swarm cluster on another set of servers
sudo tar -czvf swarm.tar.gz /var/lib/docker/swarm/

#the running docker containers should be brought up as they were before stopping the service
systemctl start docker

Recover using swarm.tar backup

# scp swarm.tar.gz to the recovery node - a node with a freshly installed docker
sudo rm -rf /var/lib/docker/swarm
sudo systemctl stop docker

# Option1 untar directly
sudo tar -xzvf swarm.tar.gz -C /var/lib/docker/swarm

# Option2 copy recursively, -f overrides if a file exists
tar -xzvf swarm.tar.gz; cd /var/lib/docker
cp -rf swarm/ /var/lib/docker/

sudo systemctl start docker
docker swarm init --force-new-cluster # produces the exactly same token
# you should join all required nodes to new manager ip
# scale services down to 1, then scale up so get distributed to other nodes

Run containers as services

A single Docker container has a number of limitations, so running it as a service, where a cluster manager (Swarm or Kubernetes) handles networking, access, load balancing etc., is a way to scale with ease. The service uses e.g. mesh routing to deal with access to containers.

Swarm node setup: 1 manager and 2 workers

ID                            HOSTNAME                          STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
641bfndn49b1i1dj17s8cirgw *   swarm-mgr-1.example.com   Ready   Active Leader       18.09.1
vlw7te728z7bvd7ulb3hn08am     swarm-wkr-1.example.com   Ready   Active              18.09.1
r8h7xmevue9v2mgysmld59py2     swarm-wkr-2.example.com   Ready   Active              18.09.0


Create and run a service

docker pull httpd
docker service create --name serviceweb --publish 80:80 httpd
# --publish|-p -expose a port on all containers in the running cluster

docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
vt0ftkifbd84        serviceweb          replicated          1/1                 httpd:latest        *:80->80/tcp

docker service ps serviceweb #show nodes that a container is running on, here on mgr-1 node
ID           NAME         IMAGE        NODE                    DESIRED STATE CURRENT STATE  ERROR  PORTS
e6rx3tzgp1e5 serviceweb.1 httpd:latest swarm-mgr-1.example.com Running       Running about


When running as a service, even if a container runs on a single node (replicas=1), it can be accessed from any of the swarm nodes. This is because the service's published port is exposed on the extended routing-mesh network that the container is attached to.

[user@swarm-mgr-1 ~]$ curl -k http://swarm-mgr-1.example.com
  <html><body><h1>It works!</h1></body></html>
[user@swarm-mgr-1 ~]$ curl -k http://swarm-wkr-1.example.com
  <html><body><h1>It works!</h1></body></html>
[user@swarm-mgr-1 ~]$ curl -k http://swarm-wkr-2.example.com
  <html><body><h1>It works!</h1></body></html>


A service update can change limits, volumes, environment variables and more...

docker service scale serviceweb=3             #or
docker service update --replicas 3 serviceweb #--detach=false shows visual progress in older versions, default in v18.06
serviceweb
overall progress: 3 out of 3 tasks 
1/3: running   [==================================================>] 
2/3: running   [==================================================>] 
3/3: running   [==================================================>] 
verify: Service converged 

# Limits (maximum a task may use) and reservations (guaranteed minimum); changing them re-creates the service tasks (containers)
docker service update --limit-cpu=.5 --reserve-cpu=.75 --limit-memory=128m --reserve-memory=256m serviceweb

Templating service names

This allows control over e.g. the hostname in a cluster. Useful for big clusters, making it easier to identify from the hostname which node a service runs on.

docker service create --name web --hostname="{{.Node.ID}}-{{.Service.Name}}" httpd
docker service ps --no-trunc web
docker inspect --format="{{.Config.Hostname}}" web.1.ab10_serviceID_cd
aa_nodeID_bb-web

Node labels for task/service placement

docker node ls
ID                            HOSTNAME                  STATUS AVAILABILITY        MANAGER STATUS      ENGINE VERSION
641bfndn49b1i1dj17s8cirgw *   swarm-mgr-1.example.com   Ready  Active              Leader              18.09.1
vlw7te728z7bvd7ulb3hn08am     swarm-wkr-1.example.com   Ready  Active                                  18.09.1
r8h7xmevue9v2mgysmld59py2     swarm-wkr-2.example.com   Ready  Active                                  18.09.1

docker node inspect 641bfndn49b1i1dj17s8cirgw --pretty
ID:                     641bfndn49b1i1dj17s8cirgw
Hostname:               swarm-mgr-1.example.com 
Joined at:              2019-01-08 12:16:56.277717163 +0000 utc
Status:
 State:                 Ready
 Availability:          Active
 Address:               172.31.10.10
Manager Status:
 Address:               172.31.10.10:2377
 Raft Status:           Reachable
 Leader:                Yes
Platform:
 Operating System:      linux
 Architecture:          x86_64
Resources:
 CPUs:                  2
 Memory:                3.699GiB
Plugins:
 Log:           awslogs, fluentd, gcplogs, gelf, journald, json-file, local, logentries, splunk, syslog
 Network:               bridge, host, macvlan, null, overlay
 Volume:                local
Engine Version:         18.09.1
TLS Info:
 TrustRoot:
-----BEGIN CERTIFICATE-----
MIIBajCCARCgAwIBAgIUKXz3wtc8OA8uzTo1pO86ko+PB+EwCgYIKoZIzj0EAwIw
..
-----END CERTIFICATE-----
 Issuer Subject:        MBMxETAPBgNVBAMTCHN3YX.....h
 Issuer Public Key:     MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEy......==


Apply label to a node

docker node update --label-add node-env=testnode r8h7xmevue9v2mgysmld59py2
docker node inspect r8h7xmevue9v2mgysmld59py2 --pretty | grep -B1 -A2 Labels
ID:                     r8h7xmevue9v2mgysmld59py2
Labels:
 - node-env=testnode
Hostname:               swarm-wkr-1.example.com


How to use it: run a service with the --constraint option, which pins services to nodes meeting the given criteria. In our case, to run on a node where node.labels.node-env == testnode. Note that all replicas end up running on the same node, rather than being distributed across the cluster.

docker service create --name constraints -p 80:80 --constraint 'node.labels.node-env == testnode' --replicas 3 httpd #node.role, node.id, node.hostname
zrk15vfdaitc1rvw9wqh2s0ot
overall progress: 3 out of 3 tasks 
1/3: running   [==================================================>] 
2/3: running   [==================================================>] 
3/3: running   [==================================================>] 
verify: Service converged 
[cloud_user@mrpiotrpawlak1c ~]$ docker service ls
ID           NAME          MODE         REPLICAS IMAGE         PORTS
zrk15vfdaitc constraints   replicated   3/3      httpd:latest  *:80->80/tcp

[user@swarm-wkr-2 ~]$ docker service ps constraints
ID           NAME          IMAGE        NODE                      DESIRED STATE       CURRENT STATE            ERROR               PORTS
y5z4mt99uzpo constraints.1 httpd:latest swarm-wkr-2.example.com   Running Running 41 seconds ago                       
zqbn4ips969q constraints.2 httpd:latest swarm-wkr-2.example.com   Running Running 41 seconds ago                       
vnb10jcs2915 constraints.3 httpd:latest swarm-wkr-2.example.com   Running Running 41 seconds ago

Scaling services

These commands should be issued on a manager node

docker pull httpd; docker pull nginx
docker service create --name web --publish 80:80 httpd
docker service ps web                  #there is only 1 replica
docker service update --replicas 3 web #update to 3 replicas
docker service create --name nginx --publish 5901:80 nginx
elinks http://swarm-mgr-1.example.com:5901 #nginx website will be presented

# scale is equivalent to update --replicas command for a single or multiple services
docker service scale web=3 nginx=3
docker service ls

Replicated services vs global services

Global mode
runs at least one copy of a service on each swarm node; even if you join another node, the service will converge there as well. In global mode you cannot use the update --replicas or scale commands. It is not possible to change the mode type of an existing service.
Replicated mode
allows for greater control and flexibility over the number of running service replicas.
# creates a single service running across whole cluster in replicated mode
docker service create --name web --publish 80:80 httpd

# run in global mode
docker service create --name web --publish 5901:80 --mode global httpd
docker service ls #note distinct mode names: global and replicated

Docker compose and deploy to Swarm

Install

sudo yum install epel-release
sudo yum install python-pip
sudo pip install --upgrade pip
# install docker CE or EE to avoid Python libs conflicts
sudo pip install docker-compose

# Troubleshooting
## Err: Cannot uninstall 'requests'. It is a distutils installed project...
pip install --ignore-installed requests

Dockerfile

cat >Dockerfile <<EOF
FROM centos:latest
RUN yum install -y httpd
RUN echo "Website1" >> /var/www/html/index.html
EXPOSE 80
ENTRYPOINT apachectl -DFOREGROUND
EOF


Docker compose file

cat >docker-compose.yml <<EOF
version: '3'
services:
  apiweb1:
    image: httpd_1:v1
    build: .
    ports:
      - "81:80"
  apiweb2:
    image: httpd_1:v1
    ports:
      - "82:80"
  load-balancer:
    image: nginx:latest
    ports:
      - "80:80"
EOF


Run docker compose, on the current node only

docker-compose up -d
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Creating compose_apiweb2_1       ... done
Creating compose_apiweb1_1       ... done
Creating compose_load-balancer_1 ... done

docker ps
CONTAINER ID IMAGE        COMMAND                 CREATED  STATUS   PORTS              NAMES
14f8b6b10c2d nginx:latest "nginx -g 'daemon of…"  2 minutesUp 2 min 0.0.0.0:80->80/tcp compose_load-balancer_1
e9b5b37fe4e5 httpd_1:v1   "/bin/sh -c 'apachec…"  2 minutesUp 2 min 0.0.0.0:81->80/tcp compose_apiweb1_1
28ee22a8eae0 httpd_1:v1   "/bin/sh -c 'apachec…"  2 minutesUp 2 min 0.0.0.0:82->80/tcp compose_apiweb2_1

# Verify
curl http://localhost:81
curl http://localhost:82
curl http://localhost:80 #nginx

# Prep before deploying docker-compose to Swarm. Images also need to be built beforehand,
# because docker stack does not support building images
docker-compose down --volumes #stop and remove the containers, networks and named volumes
Stopping compose_load-balancer_1 ... done
Stopping compose_apiweb1_1       ... done
Stopping compose_apiweb2_1       ... done
Removing compose_load-balancer_1 ... done
Removing compose_apiweb1_1       ... done
Removing compose_apiweb2_1       ... done
Removing network compose_default


Deploy compose to Swarm
docker stack deploy --compose-file docker-compose.yml customcompose-stack #customcompose-stack is a prefix for service name
Ignoring unsupported options: build
Creating network customcompose-stack_default
Creating service customcompose-stack_apiweb1
Creating service customcompose-stack_apiweb2
Creating service customcompose-stack_load-balancer

docker stack services customcompose-stack #or
docker service ls
ID           NAME                               MODE       REPLICAS IMAGE        PORTS
k7wwkncov49p customcompose-stack_apiweb1        replicated 0/1      httpd_1:v1   *:81->80/tcp
nl0j5folpmha customcompose-stack_apiweb2        replicated 0/1      httpd_1:v1   *:82->80/tcp
x6p14gmpjyra customcompose-stack_load-balancer  replicated 1/1      nginx:latest *:80->80/tcp

docker stack rm customcompose-stack #remove stack

Selecting a Storage Driver

Go to the Docker version matrix to verify which drivers are supported on your platform. Changing the storage driver is destructive and you lose access to all existing containers and images. Therefore you need to export/backup them and re-import after the storage driver change.

CentOS

Device mapper is officially supported on CentOS. It can be used on a disk as block storage via a loopback adapter, or on a real block storage device that Docker is allowed to manage.

docker info --format '{{json .Driver}}'
docker info -f '{{json .}}' | jq .Driver
docker info | grep Storage

sudo touch /etc/docker/daemon.json
sudo vi    /etc/docker/daemon.json #additional options are available
{
  "storage-driver":"devicemapper"
}


To preserve any current images, export/back them up and re-import them after the storage driver change. A minimal sketch follows.
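
One way to do it, as a sketch, assuming all images you care about are tagged (dangling <none> images are skipped):

docker save -o all-images.tar $(docker images --format '{{.Repository}}:{{.Tag}}' | grep -v '<none>')
# ... switch the storage driver in /etc/docker/daemon.json and restart docker ...
docker load -i all-images.tar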

docker images
sudo systemctl restart docker
ls -l /var/lib/docker/devicemapper #new location for storing images

Note: a new devicemapper directory has been created in /var/lib/docker to store images from now on.

Update 2019 - Docker Engine 18.09.1
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
         Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.

Selecting a logging driver

The available list of drivers can be found on the Docker documentation page. The most popular are:

  • none - No logs are available for the container and docker logs does not return any output.
  • json-file - (default) the logs are formatted as JSON. The default logging driver for Docker.
  • syslog - Writes logging messages to the syslog facility. The syslog daemon must be running on the host machine.
  • journald - Writes log messages to journald. The journald daemon must be running on the host machine.
  • fluentd - Writes log messages to fluentd (forward input). The fluentd daemon must be running on the host machine.
  • awslogs - Writes log messages to Amazon CloudWatch Logs.
  • splunk - Writes log messages to splunk using the HTTP Event Collector.
  • etwlogs - (Windows) Writes log messages as Event Tracing for Windows (ETW) events


docker info | grep -i logging
docker container run -d --name webjson --log-driver json-file httpd #per-container logging driver setup
docker logs webjson

docker container run -d --name testweb httpd #start a new container
docker logs -f testweb                       #follow standard-out logs
docker service logs -f <service> #for swarm: aggregate logs from all container replicas of the service


Enable the syslog logging driver

sudo vi /etc/rsyslog.conf
#uncomment below
$ModLoad imudp
$UDPServerRun 514

sudo systemctl start rsyslog


Change the logging driver in /etc/docker/daemon.json. Note that docker logs (standard output) will no longer be available after the change.

{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://172.31.10.1"
  }
}

sudo systemctl restart docker
docker info | grep logging
tail -f /var/log/messages #this will show all logging eg. access logs for httpd server

Docker daemon logs

System level logs

# CentOS
grep -i docker /var/log/messages

# Ubuntu
sudo journalctl -u docker.service --no-hostname
sudo journalctl -u docker -o json | jq -cMr '.MESSAGE'
sudo journalctl -u docker -o json | jq -cMr 'select(has("CONTAINER_ID") | not) | .MESSAGE'
grep -i docker /var/log/syslog
Docker container or service logs
docker container logs [OPTIONS] containerID  #single container logs
docker service   logs [OPTIONS] service|task #aggregate logs across all cluster-deployed container replicas

Container life-cycle policies - eg. autostart

docker container run -d --name web --restart <no(default)|on-failure|always|unless-stopped> httpd
# --restart - restart on crash/non-zero exit, or on docker daemon / system restart, depending on the policy

Definitions:

  • always - always restarts the container, even if it was stopped manually; restarting the docker daemon will start the container again
  • unless-stopped - always restarts the container unless it was stopped manually with docker container stop
  • on-failure - restarts only if the container exits with a non-zero exit code (a verification sketch follows below)
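
To check what a running container is configured with, a small sketch using the web container from above:

docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' web #e.g. unless-stopped
docker inspect --format '{{.RestartCount}}' web                  #number of automatic restarts so far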

Universal Control Plane - UCP

It's an application that lets you see all operational details of a Swarm cluster when using the Docker EE edition. A 30-day trial is available.

Communication between Docker Engine, UCP and DTR (Docker Trusted Registry)
  • over TCP/UDP - depends on a port, and whether a response is required, or if a message is a notification
  • IPC - interprocess communication (intra-host), services on the same node
  • API - over TCP, uses the API directly to query or update components in a cluster

Install/uninstall UCP image: docker/ucp

OS support:

  • UCP 2.2.11 is supported running on RHEL 7.5 and Ubuntu 18.04

For lab purposes we can use e.g. ucp.example.com; the domain example.com is included in the UCP and DTR wildcard self-signed certificate.


Install on a manager node

export UCP_USERNAME=ucp-admin
export UCP_PASSWORD=ucp-admin
export UCP_MGR_NODE_IP=172.31.101.248

docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock docker/ucp:2.2.15 \
  install --host-address=$UCP_MGR_NODE_IP --interactive --debug

# --rm  :- because this is only a transitional container
# -it   :- because we want the installation to be interactive
# -v    :- link the container with the docker socket file on the host
# --san :- add subject alternative names to certificates (e.g. --san www1.acme.com --san www2.acme.com)
# --host-address    :- IP address or network interface name to advertise to other nodes
# docker/ucp:2.2.11 :- image version
# --dns        :- custom DNS servers for the UCP containers
# --dns-search :- custom DNS search domains for the UCP containers
# --admin-username "$UCP_USERNAME" --admin-password "$UCP_PASSWORD" # these seem not to be supported, although they are in a guide


If not provided you will be asked for:

  • Admin password during the process
  • You may enter additional aliases (SANs) now or press enter to proceed with the above list:
    • Additional aliases: ucp ucp.example.com
DEBU[0062] User entered: ucp ucp.ciscolinux.co.uk
DEBU[0062] Hostnames: [host1c.mylabserver.com 127.0.0.1 172.17.0.1 172.31.101.248 ucp ucp.ciscolinux.co.uk] 


You may want to add DNS entries in /etc/hosts for

  • ucp or ucp.example.com pointing to manager public ip
  • dtr or dtr.example.com pointing to the worker node public IPs.


Verify
  • connect to https://ucp.example.com:443.
  • docker ps should now show a number of containers running; they need to see each other, which is why we used the hosts entries to allow this.
Uninstall
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp uninstall-ucp --interactive

INFO[0000] Your engine version 18.09.1, build 4c52b90 (4.15.0-1031-aws) is compatible with UCP 3.1.2 (b822777) 
INFO[0000] We're about to uninstall from this swarm cluster. UCP ID: t0ltwwcw5tdbtjo2fxlzmj8p4 
Do you want to proceed with the uninstall? (y/n): y
INFO[0000] Uninstalling UCP on each node...             
INFO[0031] UCP has been removed from this cluster successfully. 
INFO[0033] Removing UCP Services

Install DTR (Docker Trusted Registry) image: docker/dtr

On single-core systems it's recommended to wait 5 minutes after the UCP deployment to release some CPU cycles. You can see the load peaking at around 1.0 using the w command.

Connect to the UCP service at https://ucp.example.com and log in with the credentials created earlier. Upload a license.lic file. Go to Admin Settings > Docker Trusted Registry > pick one of the UCP nodes [worker]. You may disable TLS verification for the self-signed certificate.


Run the command below on the node where you want to install DTR. In a lab environment UCP_NODE can cause a few issues. For convenience, and to avoid port conflicts on :80 and :443, use a different node than the one UCP is installed on, e.g. the DNS name user2c.mylabserver.com or its private IP.

export UCP_NODE=wkr-172.31.107.250 #for convenience, to avoid port conflicts on :80,:443 use a worker IP
export UCP_USERNAME=ucp-admin
export UCP_PASSWORD=ucp-admin
export UCP_URL=https://ucp.example.com:443 #avoid using example.com to avoid SSL name validation issues
docker pull docker/dtr

# Optional. Download UCP public certificate
curl -k https://ucp.ciscolinux.co.uk/ca > ucp-ca.pem

docker container run -it --rm docker/dtr install \
  --ucp-node $UCP_NODE --ucp-url $UCP_URL --debug \
  --ucp-username $UCP_USERNAME --ucp-password $UCP_PASSWORD \
  --ucp-insecure-tls  # --ucp-ca "$(cat ucp-ca.pem)"

# --ucp-node :- hostname/IP of the UCP node (any node managed by UCP) to deploy DTR. Random by default
# --ucp-url  :- the UCP URL including domain and port.


If not specified, it will ask for:

  • ucp-password: you know it from the UCP installation step


Significant installation logs

..
INFO[0006] Only one available UCP node detected. Picking UCP node 'user2c.labserver.com' 
..
INFO[0006] verifying [80 443] ports on user2c.labserver.com 
..
INFO[0000] Using default overlay subnet: 10.1.0.0/24    
INFO[0000] Creating network: dtr-ol                     
INFO[0000] Connecting to network: dtr-ol                
..
INFO[0008] Generated TLS certificate. dnsNames="[*.com *.*.com example.com *.dtr *.*.dtr]" domains="[*.com *.*.com 172.17.0.1 example.com *.dtr *.*.dtr]" ipAddresses="[172.17.0.1]"
..
INFO[0073] You can use flag '--existing-replica-id 10e168476b49' when joining other replicas to your Docker Trusted Registry Cluster


Verify by logging in to https://dtr.example.com

The DTR installation process above has also installed a number of containers named ucp-agent on the manager/worker nodes and a number of containers on the dedicated DTR node. You can verify DTR by logging in to https://dtr.example.com with the UCP credentials ucp-admin and the same password, if you haven't changed any commands above. You should then be presented with a registry.docker.io-like theme. Any images stored there will be trusted from the perspective of our organisation.


Verify by going to UCP https://ucp.example.com, admin settings > Docker Trusted Registry
[Image: Ucp-dtr-in-admin]

Backup UCP and DTR configuration

This is built into UCP. The process is to start a special container that exports the UCP configuration to a tar file. This can be run as a cron job.

docker container run --log-driver none --rm -i --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp backup > backup.tar
# --rm it's a transitional container
# -i run interactively

# On the first run it will error with --id m79xxxxxxxxx, asking you to re-run the command with this id.

# Restore command
docker container run --log-driver none --rm -i --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp restore --id m79xxx < backup.tar
DTR

During a backup DTR will not be available.

docker container run --log-driver none --rm docker/dtr backup  --ucp-insecure-tls --ucp-url <ucp_server_dns:443> --ucp-username admin --ucp-password <password> > dtr-backup.tar

# you will be asked to:
# Choose a replica to back up from: enter

# Restore command
docker container run --log-driver none --rm docker/dtr restore --ucp-insecure-tls --ucp-url <ucp_server_dns:443> --ucp-username admin --ucp-password <password> < dtr-backup.tar

UCP RBAC

The main concept is:

  • administrators can make changes to the UCP swarm/kubernetes, User Management, Organisation, Team and Roles
  • users - range of access from Full Control of resources to no access


[Image: Ucp-rbac]

Note that only the Scheduler role allows access to Nodes, to view nodes and, of course, schedule workloads on them.

UCP Client bundle

The UCP client bundle allows you to export a bundle containing a certificate and environment settings that point a docker client at UCP, in order to use the cluster and create images and services.

Download bundle
  1. Create a user with the privileges that you wish the docker client to run as
  2. Download a client bundle from User Profile > Client bundle > + New Client Bundle
  3. File ucp-bundle-[username].zip will get downloaded

unzip ucp-bundle-bob.zip 
Archive:  ucp-bundle-bob.zip
 extracting: ca.pem                  
 extracting: cert.pem                
 extracting: key.pem                 
 extracting: cert.pub                
 extracting: env.sh                  
 extracting: env.ps1                 
 extracting: env.cmd     

cat env.sh 
export COMPOSE_TLS_VERSION=TLSv1_2
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$PWD"
export DOCKER_HOST=tcp://3.16.143.49:443
#
# Bundle for user bob
# UCP Instance ID t0ltwwcw5tdbtjo2fxlzmj8p4
#
# This admin cert will also work directly against Swarm and the individual
# engine proxies for troubleshooting.  After sourcing this env file, use
# "docker info" to discover the location of Swarm managers and engines.
# and use the --host option to override $DOCKER_HOST
#
# Run this command from within this directory to configure your shell:
# eval $(<env.sh)

eval $(<env.sh) # apply ucp-bundle

docker images # to list UCP managed images

  1. In my lab I had to update DOCKER_HOST from public IP to private IP

Err: error during connect: Get https://3.16.143.49:443/v1.39/images/json: x509: certificate is valid for 127.0.0.1, 172.31.101.248, 172.17.0.1, not 3.16.143.49

export DOCKER_HOST=tcp://172.31.101.248:443


  1. Verify if you have permissions to create a service
docker service create --name test111 httpd
Error response from daemon: access denied:
no access to Service Create, on collection swarm
  1. Add Grants to the user
    1. Go to User Management > Grants > Create Grant
    2. Based on Roles, select Full Control
    3. Select Subjects, All Users, select the user
    4. Click Create
  2. Re-run the service create command, which should succeed now. This service can now also be managed within the UCP console.

Docker Secure Registry | image: registry

Docker provides a special image that can be used to manage docker images, both internally and externally; the steps below therefore include securing the access with an SSL certificate.


Create certificate

mkdir ~/{auth,certs}
# create a self-signed certificate for the Docker Repository
cd ~/certs #certificates directory created above
openssl req -newkey rsa:4096 -nodes -sha256 -keyout repo-key.pem -x509 -days 365 -out repo-cer.pem -subj /CN=myrepo.com
# trusted-certs docker client directory; the docker client looks for trusted certs when connecting to a remote repo
sudo mkdir -p /etc/docker/certs.d/myrepo.com:5000 #port 5000 is the default port
sudo cp repo-cer.pem /etc/docker/certs.d/myrepo.com:5000/ca.crt

ca.crt is the default/required CA root trust cert file name that the docker client (docker login API) uses when connecting to a remote repository. In our case we trust any cert signed by CA=ca.crt when connecting to myrepo.com:5000, as the same (self-signed) certs got installed in the registry:2 container via the -v /certs/ option.

Optionally, for development purposes, add the domain myrepo.com to the hosts file, binding it to a local interface IP address.

echo "172.16.10.10 myrepo.com" | sudo tee -a /etc/hosts


Optional add insecure-registry entry

sudo vi /etc/docker/daemon.json
{
  "insecure-registries" : [ "myrepo.com:5000"]
}


Pull special Docker Registry image

mkdir -p ~/auth #authentication directory, used when deploying local repository
docker pull registry:2
docker run --entrypoint htpasswd registry:2 -Bbn reg-admin Passw0rd123 > ~/auth/htpasswd
# -Bbn        -parameters
# reg-admin   -user
# Passw0rd123 -password string for basic htpasswd authentication method, the hashed password will be displayed to STDOUT

$ cat ~/auth/htpasswd
reg-admin:$2y$05$DnTWDHp7uTwaDrw4sXpUbuDDIlLwu3c8MEMsHPjK/AcUMdK/TD6fO


Run Registry container

cd ~
docker run -d -p 5000:5000 --name myrepo \
       -v $(pwd)/certs:/certs \
       -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/repo-cer.pem \
       -e REGISTRY_HTTP_TLS_KEY=/certs/repo-key.pem \
       -v $(pwd)/auth:/auth \
       -e REGISTRY_AUTH=htpasswd \
       -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
       -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
       registry:2
# -v $(pwd)/certs:/certs           - mount the directory with our certificates into the container
# -e REGISTRY_HTTP_TLS_CERTIFICATE - path to the certificate inside the container
# -v $(pwd)/auth:/auth             - mount the authentication directory containing the htpasswd file
# -e REGISTRY_AUTH=htpasswd        - use the 'htpasswd' authentication method
# registry:2                       - image name, positional parameter
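A quick check that the registry is up and enforcing authentication (using the self-signed certificate and hosts entry from above):
<source lang="bash">
docker ps --filter name=myrepo                                        # the registry container should be running
curl --insecure https://myrepo.com:5000/v2/                           # 401 Unauthorized without credentials
curl --insecure -u reg-admin:Passw0rd123 https://myrepo.com:5000/v2/  # returns {} once authenticated
</source>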


== Verify ==

docker pull  alpine
docker tag   alpine     myrepo.com:5000/aa-alpine # create a tagged image (copy) on the local filesystem;
     # it must be prefixed with the private registry name, '/', and the image name you want to upload it as

docker logout                 # if logged in to another registry
docker login myrepo.com:5000  # login to the registry that runs as a container; you stay logged in until logout/reboot
docker login myrepo.com:5000 --username=reg-admin --password Passw0rd123
docker push  myrepo.com:5000/aa-alpine

docker rmi  alpine myrepo.com:5000/aa-alpine # delete the images stored locally
docker pull        myrepo.com:5000/aa-alpine # pull the image back from the container registry

# List private-repository images
curl --insecure -u "reg-admin:password" https://myrepo.com:5000/v2/_catalog
{"repositories":["aa-alpine"]}

wget --no-check-certificate --http-user=reg-admin --http-password=password https://myrepo.com:5000/v2/_catalog
cat _catalog                                                                                                                                                                       
{"repositories":["my-alpine","myalpine","new-aa-busybox"]}

# List tags
curl --insecure -u "reg-admin:password" https://myrepo.com:5000/v2/aa-alpine/tags/list
{"name":"myalpine","tags":["latest"]}
curl --insecure -u "reg-admin:password" https://myrepo.com:5000/v2/aa-alpine/manifests/latest #entire image metadata


Note: there is no easy way to delete images from the registry:2 container.
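Deletion is possible only through the registry HTTP API, and only if the registry was started with deletes enabled (REGISTRY_STORAGE_DELETE_ENABLED=true). A rough sketch, reusing the container and credentials from above:
<source lang="bash">
# get the manifest digest of the tag (look at the Docker-Content-Digest response header)
curl --insecure -u reg-admin:Passw0rd123 -I \
     -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
     https://myrepo.com:5000/v2/aa-alpine/manifests/latest

# delete the manifest by digest (replace sha256:<digest> with the value from the header above)
curl --insecure -u reg-admin:Passw0rd123 -X DELETE \
     https://myrepo.com:5000/v2/aa-alpine/manifests/sha256:<digest>

# reclaim the disk space inside the registry container
docker exec myrepo bin/registry garbage-collect /etc/docker/registry/config.yml
</source>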

= Docker push =

== Login to a Docker repository ==
docker info | grep -B1 Registry # check if you are logged in to the Docker Hub registry
WARNING: No swap limit support
Registry: https://index.docker.io/v1/

docker login

docker info | grep -B1 Registry
Username: pio2pio
Registry: https://index.docker.io/v1/


== Tag and push an image ==
# docker tag local-image:tagname new-repo:tagname  #create a local copy of an image
# docker push new-repo:tagname                     

docker pull busybox
docker tag  busybox:latest pio2pio/testrepo
docker push pio2pio/testrepo
The push refers to repository [docker.io/pio2pio/testrepo]
683f499823be: Mounted from library/busybox 
latest: digest: sha256:bbb143159af9eabdf45511fd5aab4fd2475d4c0e7fd4a5e154b98e838488e510 
size: 527
= Docker Content Trust =

By default, all images are implicitly trusted by your Docker daemon. You can, however, configure your systems to allow ONLY signed images, i.e. to trust only image tags that have been signed.

export DOCKER_CONTENT_TRUST=1 # enable signing of image tags during the push process
docker build -t myrepo.com:5000/untrusted.latest .
docker push myrepo.com:5000/untrusted.latest
...
No tag specified, skipping trust metadata push
# 2nd attempt, with a tag specified now
docker push myrepo.com:5000/untrusted.latest:latest
Error: error contacting notary server: x509: certificate signed by unknown authority

docker pull myrepo.com:5000/untrusted.latest:latest
Error: error contacting notary server: x509: certificate signed by unknown authority
== Errors explained ==

Err: No tag specified, skipping trust metadata push

  • Explanation: an image is signed by its tag. Therefore, if you omit the tag, the image does not get signed and the trust metadata push is skipped.

Err: error contacting notary server: x509: certificate signed by unknown authority

  • on push, the image gets uploaded but is not trusted, because it is signed with a self-signed CA
  • on pull, with DOCKER_CONTENT_TRUST=1 enabled, the image cannot be downloaded because it is untrusted
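For contrast, against Docker Hub (where official image tags are signed) content trust behaves like this; the unsigned repository name below is just an example:
<source lang="bash">
export DOCKER_CONTENT_TRUST=1
docker pull alpine:latest                                    # succeeds - the official tag carries trust data
docker pull pio2pio/testrepo:latest                          # fails if the tag has no trust (signature) data
docker pull --disable-content-trust pio2pio/testrepo:latest  # per-command override
export DOCKER_CONTENT_TRUST=0                                # turn enforcement off again for this shell
</source>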

= Theory =

== What is Docker ==

Docker is a container runtime platform, whereas Swarm is a container orchestration platform.

== Security ==

=== Mutually Authenticated TLS ===

Docker Swarm is secure by default, meaning all of its communication is encrypted. Mutually authenticated TLS (MTLS) is the implementation chosen to secure that communication. Whenever a swarm is initialised, a self-signed CA is generated; it issues certificates to every node (manager or worker) to facilitate joining the swarm and to secure subsequent communication. It runs as a transient container, brought up to issue certificates whenever one is needed. MTLS protects the communication between managers and workers.
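The root CA and the per-node certificates it issued can be inspected on a swarm node; a sketch, assuming a node that has already joined a swarm:
<source lang="bash">
docker swarm ca                                            # print the swarm root CA certificate (run on a manager)
sudo openssl x509 -noout -text \
     -in /var/lib/docker/swarm/certificates/swarm-node.crt # this node's certificate, issued by the swarm CA
</source>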

=== Linux Namespaces and Control Groups ===

= Difference between docker attach and docker exec =

== Attach ==

The docker attach command allows you to attach to a running container using the container's ID or name, either to view its ongoing output or to control it interactively. You can attach to the same contained process multiple times simultaneously, screen-sharing style, or quickly view the progress of your detached process.

The command docker attach is for attaching to the existing process. So when you exit, you exit the existing process.

With docker attach we can only use the single existing instance of the shell. If we want to open a new terminal with a new instance of the container's shell, we need to run docker exec.

If the container was started with the /bin/bash command, you can access it using attach; if not, you need to use exec to create a bash instance inside the container. Attach isn't for running an extra thing in a container, it's for attaching to the running process.

To stop a container, use CTRL-c. This key sequence sends SIGKILL to the container. If --sig-proxy is true (the default), CTRL-c sends a SIGINT to the container. You can detach from a container and leave it running using the CTRL-p CTRL-q key sequence.

== exec ==

"docker exec" is specifically for running new things in a already started container, be it a shell or some other process. The docker exec command runs a new command in a running container.

The command started using docker exec only runs while the container’s primary process (PID 1) is running, and it is not restarted if the container is restarted.

The exec command works only on an already running container. If the container is currently stopped, you need to start it first. You can then run any command in the running container just by knowing its ID (or name):

docker exec <container_id_or_name> echo "Hello from container!"
docker run -it -d shykes/pybuilder /bin/bash

The most important thing here is the -d option, which stands for detached. It means that the command you initially provided to the container (/bin/bash) will run in the background and the container will not stop immediately.
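A short sketch contrasting the two commands; the container name is just an example:
<source lang="bash">
docker run -d -it --name demo ubuntu /bin/bash   # PID 1 of the container is this bash

docker exec -it demo bash   # starts a NEW bash inside the container; exiting it leaves the container running
docker attach demo          # attaches to PID 1 (the original bash); typing 'exit' here stops the container
# to detach from an attached session without stopping the container, press CTRL-p CTRL-q
</source>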

= Dockerfile - python =
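A minimal sketch of a Python Dockerfile written and built from the shell; the file names and the app.py content are placeholders:
<source lang="bash">
cat > Dockerfile <<'EOF'
FROM python:3.8-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
EOF

echo 'print("Hello from container!")' > app.py
docker build -t my-python-app .
docker run --rm my-python-app
</source>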

= References =