Abstract

This article collects practical Docker solutions for developers in a compact problem-and-solution format.

The following solutions have been tested on POSIX-compatible machines running macOS, CentOS, and Ubuntu. They should work on Windows in a PowerShell as well.

Problem & Solution

Clean & Reset Docker

docker system prune -a

Delete all Docker containers

docker rm $(docker ps -a -q)
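
Note that docker rm refuses to delete containers that are still running; adding the -f flag forces removal by stopping them first:

docker rm -f $(docker ps -a -q)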

Remove a single Docker image

docker rmi <imageId>

Load a Docker image into the local repository

docker load -i services.img

Export a Docker image from local repository to the host file system

docker save -o services.img <imageName>

Fetch a Docker image from a (remote) repository

docker pull <repository>:<tag>

List all local Docker images

docker images -a

List all Docker containers (running and stopped)

docker ps -a

Create a GeoJSON Dumpfile using GDAL via Docker

Pull the GDAL image.

docker pull osgeo/gdal:ubuntu-small-latest

Run ogr2ogr to fetch the DB records using the given SQL statement and convert them with the GeoJSON driver. The result is saved under the given file name in the container's /home path, which is mapped to the host's /home path; you can change the mapping to any host directory you want. The --rm parameter makes Docker remove the container instance right after use, to save resources.

docker run --rm -v /home:/home osgeo/gdal:ubuntu-small-latest ogr2ogr -f "GeoJSON" "/home/geojson-export-limited.json" PG:"host=<ip> dbname='<db_name>' user='<db_user>' password='<db_user_password>'" -sql "select * from <db-table> order by <pkey> asc limit <size_limit>"

Usage of Docker Compose to build a service stack using multiple containers

  1. Create a docker-compose.yml file to specify the stack (a minimal sketch follows below).
  2. Trigger a build by running the following command in the directory containing the docker-compose.yml file:
docker-compose build
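
A minimal sketch of such a docker-compose.yml, assuming a hypothetical web service built from a local Dockerfile alongside a stock postgres image (service names, ports, and credentials are illustrative only):

# hypothetical example stack
version: "3"
services:
  web:
    build: .                 # build from the Dockerfile in this directory
    ports:
      - "8080:8080"          # map host port 8080 to container port 8080
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example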

Usage of Docker Compose to start a service stack using multiple containers

Run the following command within the directory of the corresponding docker-compose.yml file. Omit the -d parameter if you want immediate log output on stdout.

docker-compose up -d

Run the following command if you need to recreate the containers at startup.

docker-compose up -d --force-recreate

Usage of Docker Compose to stop a service stack using multiple containers

Run the following command within the directory of the corresponding docker-compose.yml file.

docker-compose down

Usage of Docker Compose to follow the logs of a service stack’s running docker containers

Run the following command within the directory of the corresponding docker-compose.yml file.

docker-compose logs -f

Usage of Docker Compose to follow the logs of one container in a service stack

Run the following command within the directory of the corresponding docker-compose.yml file.

docker-compose logs -f <service-name>

Enable Docker Swarm Mode

Docker Swarm is a mechanism to distribute and orchestrate containers over more than one machine node. You can easily build a simple server cluster with neat cloud semantics, without any custom programming, just by using preconfigured base images with the corresponding applications.

A simple load-balancing mechanism is also provided, comparable to using the Spring Cloud Netflix Eureka discovery service. In Docker Swarm mode, however, load balancing takes place not at the application level but at the "system" level. If you need fine-grained control and discovery features within the service stack, you should still use the Eureka approach; Docker Swarm simply eases the pain of cluster orchestration (i.e. distributed deployments).

Eureka vs Docker Swarm (redundancies)

In swarm mode Docker Compose is used to configure the actual service stack, just like in a single node environment.

The master/manager node is initialized using the following command:

docker swarm init --advertise-addr <MANAGER-IP>

The above command prints out the instructions for letting other nodes join the swarm. Example:

Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
    192.168.99.100:2377

If you want to test a single-node swarm, you can omit the manager node's IP and use this command:

docker swarm init

Deploy a Docker Compose stack to Docker in swarm mode

To deploy a Docker Compose stack to Docker in swarm mode use the following command:

docker stack deploy --compose-file docker-compose.yml <stack_name>
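
The companion stack subcommands help verify and tear down the deployment:

# List the services of the stack
docker stack services <stack_name>

# List the tasks (container instances) of the stack
docker stack ps <stack_name>

# Remove the stack
docker stack rm <stack_name>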

Display information about a Docker Swarm

docker info

Display information about nodes of a Docker swarm

docker node ls

Inspect a Docker Container

Get detailed information about a container’s configuration and state.

docker inspect <container_name_or_id>
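
The output is one large JSON document. A Go template passed via --format extracts individual values, for example the container's IP address:

docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_name_or_id>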

Monitor Docker Containers’ Resource Usage

Track CPU, memory, network I/O, and block I/O for all running containers.

docker stats
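
The output refreshes continuously by default; use --no-stream to print a single snapshot, e.g. for scripting:

docker stats --no-stream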

Tail Docker Container Logs

Follow the log output of a container. This is useful for troubleshooting and monitoring container activity in real time.

docker logs -f <container_name_or_id>
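
For long-running containers the backlog can be huge; --tail limits how many old lines are printed before following:

docker logs -f --tail 100 <container_name_or_id>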

Copy Files from/to a Docker Container

Copy files or directories between a container and the local filesystem.

# Copy from container to host
docker cp <container_name_or_id>:/path/to/container/file /path/to/host/destination

# Copy from host to container
docker cp /path/to/host/file <container_name_or_id>:/path/to/container/destination

Execute Commands in a Running Container

Run a command in a running container. This is especially useful for debugging.

docker exec -it <container_name_or_id> /bin/bash
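
Minimal images (e.g. Alpine-based ones) often do not ship with bash; fall back to /bin/sh in that case:

docker exec -it <container_name_or_id> /bin/sh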

Filter Docker Resources

List resources (containers, images, volumes, networks) based on specific criteria.

# List containers that are running
docker ps --filter "status=running"

# List images based on pattern
docker images --filter=reference='pattern*'

Remove Unused Docker Objects

Clean up unused Docker images, containers, volumes, and networks.

docker system prune
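
By default, prune keeps volumes and all tagged images. Add --volumes to remove unused volumes as well, and -a to remove all images not used by any container:

docker system prune -a --volumes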

Limiting Resources for Containers

Control the amount of CPU, memory, and I/O resources a container can use, preventing any single container from monopolizing system resources.

# Limit CPU usage
docker run -it --cpus=".5" ubuntu /bin/bash

# Limit Memory usage
docker run -it --memory="256m" ubuntu /bin/bash

Docker Networking: Connect Containers

Create a custom network and connect containers to it, facilitating communication between containers.

# Create a custom network
docker network create my-network

# Attach a container to the network (run a long-lived command so it stays up)
docker run -d --network=my-network --name my-container alpine sleep 3600
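
On a user-defined network, containers resolve each other by name via Docker's built-in DNS. A quick connectivity check, assuming the my-container instance from above is still running:

docker run --rm --network=my-network alpine ping -c 1 my-container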

Backup and Restore a Docker Container

Create a snapshot of a container's filesystem as a tar archive and restore it later. Note that docker export does not include data stored in volumes.

# Backup
docker export <container_name_or_id> > container_backup.tar

# Restore (import the backup as a new image, then run containers from it)
cat container_backup.tar | docker import - new_image_name

Conclusion

These Docker commands and techniques offer developers a broader toolkit for managing and optimizing their Docker environments. From resource monitoring to advanced networking, understanding these “cheats” can lead to more efficient development practices and deployment strategies.