To understand what a volume is, let's launch a container in interactive mode and associate the host folder /tmp/data with the folder /data on the container:
docker run -it -v /tmp/data:/data ubuntu /bin/bash
Inside the container, navigate to this folder and create a file:
cd /data/
touch testfile
Then exit the container using the `exit` command.
After exiting the container, list the contents of the folder on the host using the following command or with the Ubuntu file browser:
ls /tmp/data/
The file `testfile` was created by the container in the host folder we mounted with `-v /tmp/data:/data`.
To avoid interference with the second part of the TP:
* Stop all redis and moby-counter containers with `docker stop` or with Portainer.
* Remove stopped containers with `docker container prune`.
* Run `docker volume prune` to clean up volumes created in previous labs.
* Also run `docker network prune` to clean up unused networks.
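If you prefer the command line to Portainer, the whole cleanup could look like this (a sketch; adjust the container names to whatever is actually running on your machine):

docker stop redis moby-counter
docker container prune
docker volume prune
docker network prune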
Let's move on to exploring volumes:
Recreate the moby-network network and the redis and moby-counter containers inside it:
docker network create moby-network
docker run -d --name redis --network moby-network redis
docker run -d --name moby-counter --network moby-network -p 8000:80 russmckendrick/moby-counter
Visit your application in the browser. Make a recognizable pattern by clicking.
* Delete the redis container: `docker stop redis` then `docker rm redis`
* Visit your application in the browser. It is now disconnected from its backend.
* Have we really lost the data from our previous container? No! The Dockerfile for the official Redis image looks like this:
FROM alpine:3.5
...
VOLUME /data
...
Many Docker containers are stateful applications, meaning they store data. Because the image declares a `VOLUME`, Docker automatically creates an anonymous volume in the background for each such container; these volumes then have to be deleted manually (with `docker volume rm` or `docker volume prune`).
Inspect the list of volumes (for example, with Portainer) to find the ID of the hidden volume. Normally, there should be a `portainer_data` volume (if you use Portainer) and an anonymous volume with a hash.
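If you prefer the command line, the same list can be obtained with:

docker volume ls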
Create a new redis container attaching it to the “hidden” redis volume you found (by copying the ID of the anonymous volume):
docker container run -d --name redis -v <volume_id>:/data --network moby-network redis:alpine
Visit the application page. Normally, a pattern of moby logos from a previous session should appear (after a delay of up to several minutes).
Display the content of the volume with the command:
docker exec redis ls -lha /data
Finally, we will recreate a container with a volume that is not anonymous. The proper way to manage persistent data is to create volumes explicitly, as named volumes:
docker volume create redis_data
Delete the old redis container then create a new container attached to this named volume:
docker container stop redis
docker container rm redis
docker container run -d --name redis -v redis_data:/data --network moby-network redis:alpine
When a specific host directory is used in a volume (the syntax `-v HOST_DIR:CONTAINER_DIR`), it is often called bind mounting. This is somewhat misleading because all volumes are technically “bind mounted”. The difference is that the mount point is explicit rather than hidden in a directory managed by Docker.
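To illustrate the two syntaxes side by side (simplified commands, without the `--name` and `--network` options used above):

# Bind mount: the host directory is chosen explicitly
docker run -d -v /tmp/data:/data redis:alpine
# Named volume: Docker manages the host directory itself (by default under /var/lib/docker/volumes/)
docker run -d -v redis_data:/data redis:alpine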
Run `docker volume inspect redis_data`.
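The output is a JSON description of the volume; the most interesting field is `Mountpoint`, the host directory where Docker actually stores the data. It should look roughly like this (the values will differ on your machine):

[
    {
        "CreatedAt": "2024-01-01T00:00:00Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/redis_data/_data",
        "Name": "redis_data",
        "Options": {},
        "Scope": "local"
    }
]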
To clean up all this work, first stop the various redis and moby-counter containers.
Run the prune function for the containers first, then for the networks, and finally for the volumes.
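In commands, in that order:

docker container prune
docker network prune
docker volume prune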
Since the networks and volumes were no longer attached to running containers, they were deleted.
In general, you need to be much more careful with volume pruning (risk of losing data) than with container pruning (there is little to lose, since containers are built from immutable images that usually remain available in a registry).
Navigate to your root directory by typing `cd`.
After entering the microblog repo using `cd microblog`, retrieve a pre-dockerized version of the app by loading the content of the Git branch `tp2-dockerfile` with `git checkout tp2-dockerfile -- Dockerfile`.
If you didn't have the microblog repo yet:
git clone https://github.com/uptime-formation/microblog/
cd microblog
git checkout tp2-dockerfile
Read the Dockerfile of the microblog application.
A Docker volume appears as a folder inside the container. Here we will mount the volume at the location `/data` inside the container.
To make the Python app aware of the location of the database, add to your Dockerfile an environment variable `DATABASE_URL` like so (this variable is read by the Python program):
ENV DATABASE_URL=sqlite:////data/app.db
Add to the Dockerfile a `VOLUME` instruction to store the SQLite database of the application.
Solution:
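A minimal solution is to declare the `/data` folder (the one `DATABASE_URL` points to) as a volume in the Dockerfile:

# /data holds app.db; declaring it as a volume makes the database persistent
VOLUME /data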
Create a named volume called `microblog_db`, launch a container using it, then create an account and write a message.
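For example (assuming the image was built and tagged `microblog` in the previous TP, and that the app listens on port 5000 inside the container; adjust the image name and ports to your own Dockerfile):

docker volume create microblog_db
docker run -d --name microblog -v microblog_db:/data -p 8000:5000 microblog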
Verify that the named volume is being used correctly by connecting a second microblog container using the same named volume.
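One way to do this is to start a second container on another host port with the same volume, then check in the browser that your account and message are visible there too (same assumptions about image name and port as above):

docker run -d --name microblog2 -v microblog_db:/data -p 8001:5000 microblog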
You now have all the ingredients to package the app of your choice! Pick a base image, draw inspiration from an existing Dockerfile if it helps, and get started!