For developers and DevOps engineers who regularly work with containers, proficiency in Docker is a must. In this blog post we look at five commands that are often underutilised but can save you valuable time and streamline the management of your containers.
docker system df
The docker system df command serves as a critical tool when managing storage in Docker. As you run more containers and create more images, keeping track of resources can become challenging, so this command can help you by providing a concise report on disk usage.
The output of the command is divided into four sections: Images, Containers, Local Volumes and the Build Cache. Each section summarises the amount of storage used, the amount reclaimable (unused but not deleted), the total number for each type of object and how many are actively being used.
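For a quick sense of what this looks like in practice (the exact figures will of course depend on your own system):

```shell
# Summarise disk usage per object type: Images, Containers,
# Local Volumes and Build Cache
docker system df

# Add -v for a verbose, per-image / per-container / per-volume breakdown,
# useful for pinpointing exactly which objects are taking up space
docker system df -v
```

If the RECLAIMABLE column shows a large amount, `docker system prune` is the usual follow-up for freeing that space.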
Using this command frequently can help you to maintain a clean and efficient Docker environment. By having a constant overview of the disk usage, you can quickly identify potential areas for cleaning or optimisation, helping to maintain a balanced, healthy Docker system.
docker inspect
If you have been using Docker for some time, you are likely familiar with the docker ps command, which lists all running containers. However, what if you need more detailed information about a container or an image? This is where docker inspect can help.
For example, it can give you valuable information about container or image configuration, including environment variables, the CMD or ENTRYPOINT commands, ports, and restart policy among other things. This information can be critical when debugging issues related to a container's runtime behaviour or replicating an image's configuration.
Another advantage of docker inspect is the ability to access detailed information about a container's network settings. This includes the container's IP address, MAC address, network gateway, and subnet mask. This information is extremely useful when you're troubleshooting network connectivity issues or setting up container-to-container communication.
The output of the command is also in JSON format, which means it's machine readable. You can parse this output in scripts or use command-line tools like jq for filtering and transforming the data. This allows for the automation of tasks that rely on the information obtained from Docker objects, for example in a CI/CD pipeline.
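As an example, here are two ways to pull a container's IP address out of that JSON (mycontainer is a placeholder name; note that docker inspect returns an array, so the first element is indexed):

```shell
# Full JSON description of the container
docker inspect mycontainer

# Extract just the IP address with jq
docker inspect mycontainer | jq -r '.[0].NetworkSettings.IPAddress'

# The same thing without jq, using docker inspect's built-in
# Go-template formatting via the -f flag
docker inspect -f '{{.NetworkSettings.IPAddress}}' mycontainer
```

The -f/--format flag is often enough for simple lookups in scripts, while jq is more convenient when you need to filter or combine several fields.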
docker build --cache-from
The --cache-from flag when used with the docker build command is a powerful and often underutilised feature in Docker that can significantly optimise the build process.
Before we talk about what the flag does, it's important to understand how Docker's build process works. When Docker builds an image, it processes each instruction in the Dockerfile one-by-one, in order. For each instruction, Docker creates a layer in the image (an intermediate image) that it caches locally. The next time you then execute a docker build command, Docker first checks its cache to see if there's a cached layer it can reuse, rather than executing the instruction again.
For instance, if you're building an image that starts from a particular base image, and you've built this image before, Docker won't actually pull the base image again — it will use the one it has in its local cache. This dramatically speeds up the building process.
However, by default, Docker can only use the build cache of the machine where the build command is running. This means that when you're building the same Docker image on a different machine (like on a CI/CD server), Docker can't use the cache from your local machine and has to start building the image from scratch.
This is where the --cache-from flag comes into play.
The --cache-from option allows you to specify another image as a reference, which Docker will use for caching purposes while building a new image. Essentially, you're telling Docker: "If you can't find a suitable layer in your local cache, look in this image I'm pointing you to."
This allows you to effectively share your cache across different machines, which can lead to faster, more efficient builds, and can greatly benefit your CI/CD pipeline.
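In a CI job, that typically looks something like the following sketch (registry.example.com/myapp is a hypothetical image name; note that when building with BuildKit, the referenced image must have been built with inline cache metadata for its layers to be reusable):

```shell
# Pull the most recently published image; tolerate failure on the very
# first run, when no image exists yet
docker pull registry.example.com/myapp:latest || true

# Build, telling Docker it may reuse layers from the pulled image.
# BUILDKIT_INLINE_CACHE=1 embeds cache metadata into the result so that
# future builds can in turn use this image with --cache-from
docker build \
  --cache-from registry.example.com/myapp:latest \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  -t registry.example.com/myapp:latest .

# Push the fresh image so the next pipeline run has a cache source
docker push registry.example.com/myapp:latest
```

The pull-build-push cycle means each pipeline run seeds the cache for the next one, even when builds land on different runner machines.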
docker cp
The docker cp command is remarkably versatile, bringing simplicity to what can often be a complex task: managing files within Docker containers. It makes transferring files to and from a container as easy as copying files on your local system.
For example, you may have a scenario where you have a log file or a configuration file inside a running Docker container that you want to analyse on your host machine. Here, docker cp allows you to effortlessly copy the required file from the running (or stopped) container to your host machine.
Alternatively, there might be situations where you have a script or an application on your host machine that you want to test in a containerised environment. docker cp simplifies the process by enabling you to quickly copy the required files into a running Docker container.
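Both directions use the same container:path syntax (the container and file names below are purely illustrative):

```shell
# Copy a log file out of a running (or stopped) container to the host
docker cp mycontainer:/var/log/app.log ./app.log

# Copy a script from the host into the container, then run it there
docker cp ./debug.sh mycontainer:/tmp/debug.sh
docker exec mycontainer sh /tmp/debug.sh
```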
It's important to note that while docker cp is very handy, careful thought should be given to how it's used. Frequent copying of files to and from containers can lead to an unmanageable system and should not be a replacement for proper use of Docker volumes and bind mounts.
docker stats
A key aspect of managing Docker containers is the ability to monitor their performance in real time, and docker stats is designed for exactly this purpose. It offers a live stream of data, providing critical insights into your running containers.
Upon execution, docker stats returns a continuously updating dashboard within your terminal, showcasing vital statistics for all your active containers. It reports on CPU usage, memory usage, network I/O, block I/O, PIDs, and more.
Let's unpack what each of these performance metrics means:

- CPU %: the share of the host's CPU the container is currently using.
- MEM USAGE / LIMIT: how much memory the container is consuming, alongside the maximum it is allowed to use.
- MEM %: memory usage expressed as a percentage of that limit.
- NET I/O: the amount of data the container has sent and received over its network interfaces.
- BLOCK I/O: the amount of data the container has read from and written to block devices on the host.
- PIDS: the number of processes or threads the container has created.
By providing these insights, docker stats allows you to monitor the health of your containers continuously. With this ability, you can troubleshoot issues before they become critical, manage resources more effectively, and ensure the smooth running of your applications.
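When the full live dashboard is more than you need, docker stats can also take a one-off snapshot with a custom set of columns, which is handy in scripts (the columns chosen below are just one possible selection):

```shell
# Live dashboard for all running containers (press Ctrl+C to exit)
docker stats

# A single snapshot instead of a live stream, showing only the
# container name, CPU percentage and memory usage
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```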
Learn to create and deploy containers with Docker and Kubernetes.