- Docker is a container technology based on LXC
- It uses far fewer resources than a VM by sharing the host kernel
- while still providing full runtime isolation
- It makes system deployment much faster
- and system operation easier
Still don’t know why? Check out the use cases here
The Docker engine is a command-line client that interacts with the Docker daemon through RESTful requests to pull images, spawn containers, and so on.
Once Docker is installed successfully, you can use
docker on the command line to talk to the engine.
This error appears when
docker cannot connect to $DOCKER_HOST.
Three env vars are needed to run the
docker command correctly:
- DOCKER_HOST: The Docker daemon host. It could be something like “tcp://xxx.xxx.xxx.xxx:2376”, depending on the daemon configuration.
- DOCKER_TLS_VERIFY: Whether to require TLS on the connection. It can be “1” – TLS only, or “0” – TLS disabled. For production, “1” is a must.
- DOCKER_CERT_PATH: If TLS is enabled, where to find the related certificates. It points to a folder containing the current client’s private key, certificate, and a trusted CA certificate (ca / cert / key).pem. For more information about TLS, please see here.
To switch between Docker daemon instances, change the 3 env vars above to the correct values. With the help of Docker Machine, this does not need to be done manually.
This command will pull the image from Docker Hub if it does not exist locally and then run the image’s default command.
- -i: Keep STDIN open
- -t: Allocate a pseudo-TTY. This is essential for programs that write logs directly to stdout/stderr
- -d: Detached mode; run the container in the background
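Putting the flags together, a sketch might look like this (the image name is illustrative):

```shell
# Pull ubuntu:16.04 from Docker Hub if it is not present locally,
# then run its default command detached, with STDIN open and a pseudo-TTY
docker run -itd ubuntu:16.04
```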
This will prompt a bash shell that allows typing commands. Typically it is used to debug an image and check that everything in the Dockerfile is correct.
- --rm: Remove the container after it exits. Without it, the container will remain in the container list shown by
docker ps -a.
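A typical debugging invocation might look like this (the image name is an assumption):

```shell
# Open an interactive bash shell inside the image and
# automatically remove the container when the shell exits
docker run -it --rm ubuntu:16.04 bash
```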
This will make a running container execute the provided command. It is very useful for debugging a running container.
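For example (the container name "blog" is hypothetical):

```shell
# Run an interactive shell inside an already-running container
docker exec -it blog bash
```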
Docker Compose uses a single configuration file to describe multiple Docker containers (services). It allows starting and stopping them as a batch.
The power of Docker Compose is that it gives a very clear picture of the system architecture:
- The dependencies between components (services)
- How services are allocated to different servers (nodes)
- How the network is configured
- What ports are exposed to the public
- What environment is configured for each service
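A minimal sketch of such a file for a WordPress-style blog (service names, images, and passwords are all illustrative):

```yaml
version: "2"

services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example     # illustrative password
  blog:
    image: wordpress
    depends_on:
      - db                             # dependency between services
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: example
  nginx:
    image: nginx
    ports:
      - "80:80"                        # port exposed to the public
```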
The example above shows a cluster with three parts that together compose a WordPress blog system.
For more about Docker Compose, see here.
Docker Compose will automatically create an overlay network with all services registered under the same network. That means every service can reach the others by name. In the example, it is possible to
ping blog from the nginx container.
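As a sketch (nginx and blog are the service names from the example):

```shell
# Reach the "blog" service by name from inside the nginx container
docker exec -it nginx ping -c 1 blog
```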
Docker Machine is a VM-level tool. It helps provision a bare VM (or an existing one) to be Docker-ready.
It is highly recommended to provision the Docker daemon this way on your own VMs.
It has a bunch of built-in drivers (DigitalOcean, AWS, etc.). Even without one, as long as you have SSH access and a root account, provisioning should not be a big problem.
- -d: The driver to use. See here for the list of drivers.
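For example, with the generic driver over SSH (the IP address and machine name are placeholders):

```shell
# Provision an existing VM reachable over SSH as a Docker host
docker-machine create -d generic \
  --generic-ip-address=xxx.xxx.xxx.xxx \
  --generic-ssh-user=root \
  node-1
```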
Once provisioning finishes, the new machine can be listed with docker-machine ls.
This command will set the essential env vars for the docker engine.
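For example, assuming a machine named node-1:

```shell
# Print the env vars (DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH)
docker-machine env node-1

# Apply them to the current shell
eval "$(docker-machine env node-1)"
```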
docker-machine will store the TLS certificates and keys it generates locally (by default under ~/.docker/machine).
Back this directory up and protect it.
Docker Swarm is an awesome concept that treats multiple Docker daemons as one. This means you can horizontally scale your cluster without many changes to your DevOps process.
Docker Swarm is another RESTful service layer that exposes exactly the same endpoints as the Docker daemon does. It has built-in strategies (like spread) to pick the actual node (server) to use.
The Docker engine works 100% with the Swarm service; Docker Swarm is transparent to any Docker engine.
No. Some configuration is still needed to make things work, such as networking and volume mapping. With the help of Docker Compose, the swarm can be configured easily. See here for the limitations of Docker Swarm.
Docker Swarm depends on a discovery service, which itself can run as a Docker service.
The swarm network and discovery service are decentralised and clustered, which means better availability.
swarm join registers the node with the discovery service, and the swarm manager can then collect node information from it.
The following discovery backends can be used:
- The token protocol: hosted on docker.com, typically not used for production
- Key-value stores such as Consul, etcd, and ZooKeeper
Generate a token, then use
token://<tokenID> as the discovery service URL.
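With classic Docker Swarm, the flow might look like this sketch (node IPs and ports are placeholders; <tokenID> is whatever the create step prints):

```shell
# Generate a cluster token on the hosted discovery service
docker run --rm swarm create

# On each node, join the swarm using the token
docker run -d swarm join --advertise=<node-ip>:2375 token://<tokenID>

# On the manager node, start the swarm manager
# (the swarm image listens on 2375 inside the container)
docker run -d -p 3376:2375 swarm manage token://<tokenID>
```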
As the swarm manager needs to actively manage swarm agents, it needs its own TLS certificates signed by the same CA. Otherwise, the swarm manager will not be able to talk to the swarm agents.
node-1 can be the name of the docker machine.
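Docker Machine can handle this for you: it signs each node's certificates with its own CA and joins the swarm in one step. A sketch, where the driver, IP, and discovery URL are illustrative:

```shell
# Create the swarm master; certificates are generated and
# signed by docker-machine's own CA
docker-machine create -d generic \
  --generic-ip-address=xxx.xxx.xxx.xxx \
  --swarm --swarm-master \
  --swarm-discovery token://<tokenID> \
  node-1

# Additional nodes join the same swarm (no --swarm-master)
docker-machine create -d generic \
  --generic-ip-address=xxx.xxx.xxx.xxx \
  --swarm \
  --swarm-discovery token://<tokenID> \
  node-2
```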