Creating a Multi-container Application

Combining two images with Docker Compose

  • Install docker-compose with `sudo apt install docker-compose`.

identidock: a Flask application that connects to Redis

  • Start a new project in VSCode (create a folder called identidock and load it with the Add folder to workspace function).
  • In a sub-folder named `app`, add a small Python application by creating the following file, `identidock.py`:
from flask import Flask, Response, request
import requests
import hashlib
import redis
 
app = Flask(__name__)
cache = redis.StrictRedis(host='redis', port=6379, db=0)
salt = "UNIQUE_SALT"
default_name = 'Joe Bloggs'
 
@app.route('/', methods=['GET', 'POST'])
def mainpage():
 
    name = default_name
    if request.method == 'POST':
        name = request.form['name']
 
    salted_name = salt + name
    name_hash = hashlib.sha256(salted_name.encode()).hexdigest()
    header = '<html><head><title>Identidock</title></head><body>'
    body = '''<form method="POST">
                Hello <input type="text" name="name" value="{0}">
                <input type="submit" value="submit">
                </form>
                <p>You look like a:
                <img src="/monster/{1}"/>
            '''.format(name, name_hash)
    footer = '</body></html>'
    return header + body + footer
 
 
@app.route('/monster/<name>')
def get_identicon(name):
 
    image = cache.get(name)
 
    if image is None:
        print("Cache miss", flush=True)
        r = requests.get('http://dnmonster:8080/monster/' + name + '?size=80')
        image = r.content
        cache.set(name, image)
 
    return Response(image, mimetype='image/png')
 
if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=9090)
  • uWSGI is a production-grade Python application server that is well suited to serving our Flask application, so we will use it instead of Flask's built-in development server.
  • Dockerize this new application with the following Dockerfile:
FROM python:3.7
 
RUN groupadd -r uwsgi && useradd -r -g uwsgi uwsgi
RUN pip install Flask uWSGI requests redis
WORKDIR /app
COPY app/identidock.py /app
 
EXPOSE 9090 9191
USER uwsgi
CMD ["uwsgi", "--http", "0.0.0.0:9090", "--wsgi-file", "/app/identidock.py", \
"--callable", "app", "--stats", "0.0.0.0:9191"]
  • Let's go through the Dockerfile together if anything is unclear. Just before launching the application, we switch users with the `USER` instruction. Why?
  • Build the image (for now with plain `docker build`), launch a container from it, and use `docker exec` together with `whoami` and `id` to check which user the container is running as (see the example commands below).

Answer: by default, the processes in a container run as root. Creating a dedicated `uwsgi` user and switching to it with `USER` applies the principle of least privilege: if the application is compromised, the attacker does not get root privileges inside the container.
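
For reference, a possible sequence of commands (the image tag and container name `identidock` are arbitrary choices):

# Build the image from the directory containing the Dockerfile
docker build -t identidock .

# Run a container in the background and publish the HTTP port
docker run -d --name identidock -p 9090:9090 identidock

# Check which user the application is running as inside the container
docker exec identidock whoami
docker exec identidock id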

The Docker Compose File

  • At the root of our identidock project (next to the Dockerfile), create a file declaring our application called `docker-compose.yml` with the following content:
version: "3.7"
services:
  identidock:
    build: .
    ports:
      - "9090:9090"

Several remarks:

  • The first line after `services` declares the service (the container) for our application
  • The following lines describe how to run our container
  • 'build: .' indicates that the original image of our container is the result of building an image from the current directory (equivalent to `docker build -t identidock .`)
  • The next line describes the port mapping between the outside of the container and the inside.
  • Launch the service (single-container for the moment) with `docker-compose up` (on the first run, this command also builds the image, as `docker-compose build` would).
  • Visit the web page of the app to check that it responds (see the example below).
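
Concretely, this first launch could look like the following (`curl` is just one way to check that the page answers):

# Build the image if needed and start the service
docker-compose up

# In another terminal, check that the application answers on port 9090
curl http://localhost:9090
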
  • Now let's add a second container. We will take advantage of an existing image that generates an “identicon”. Add the following to the end of the Compose file (be careful with the indentation!):
dnmonster:
  image: amouat/dnmonster:1.0

The `docker-compose.yml` should now look like this:

version: "3.7"
services:
  identidock:
    build: .
    ports:
      - "9090:9090"

  dnmonster:
    image: amouat/dnmonster:1.0

Finally, we also declare a network called `identinet` in which to place the two containers of our application.

  • Declare this network at the end of the file (note that we must specify the network driver):
networks:
  identinet:
    driver: bridge
  • Also, put our two services `identidock` and `dnmonster` on the same network by adding this piece of code in both places where it is needed (be careful with the indentation!):
networks:
  - identinet
  • Also add a redis container (be careful with the indentation!). This database will be used to cache the images so that they do not have to be regenerated on every request.
redis:
  image: redis
  networks:
    - identinet

Final `docker-compose.yml`:

version: "3.7"
services:
  identidock:
    build: .
    ports:
      - "9090:9090"
    networks:
      - identinet

  dnmonster:
    image: amouat/dnmonster:1.0
    networks:
      - identinet

  redis:
    image: redis
    networks:
      - identinet

networks:
  identinet:
    driver: bridge
  • Launch the application and verify that the cache works by looking for cache misses in the application logs (example commands below).
  • Feel free to spend time exploring the options and commands of docker-compose, as well as the official documentation of the Compose file format. This documentation also describes the differences between version 2 and version 3 of Docker Compose files.
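
One way to check the cache (the `identidock` service name matches the one declared in the Compose file; the exact log lines may differ):

# Rebuild the image and start the whole stack in the background
docker-compose up -d --build

# Request the same identicon twice: only the first request should trigger a cache miss
curl -s http://localhost:9090/monster/test > /dev/null
curl -s http://localhost:9090/monster/test > /dev/null

# Look for "Cache miss" lines in the application logs
docker-compose logs identidock | grep "Cache miss"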

Other services

Google-fu Exercise: a CodiMD pad
  • Retrieve (and adapt if necessary) from the Internet a `docker-compose.yml` file that launches a CodiMD pad together with its database. I advise you to always look in the official documentation or the official repository (often on GitHub) first. Note that CodiMD used to be called HackMD.
  • Make sure the pad is accessible on the given port.
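
Once the containers are up, a quick way to check (assuming CodiMD's default port 3000 is mapped to the host):

curl -I http://localhost:3000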

Elastic Stack

Centralizing logs

The benefit of the Elastic stack here is that, with a very simple Filebeat configuration, we can centralize the logs of all our Docker containers. To do this, we just need to download a Filebeat configuration designed for this purpose:

curl -L -O https://raw.githubusercontent.com/elastic/beats/7.10/deploy/docker/filebeat.docker.yml

Let's rename this configuration file and change its owner to root to satisfy a Filebeat security constraint:

mv filebeat.docker.yml filebeat.yml
sudo chown root filebeat.yml

Finally, let's create a `docker-compose.yml` file to launch an Elastic stack (Elasticsearch, Filebeat and Kibana):

version: "3"

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    networks:
      - logging-network

  filebeat:
    image: docker.elastic.co/beats/filebeat:7.5.0
    user: root
    depends_on:
      - elasticsearch
    volumes:
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - logging-network
    environment:
      - -strict.perms=false

  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.0
    depends_on:
      - elasticsearch
    ports:
      - 5601:5601
    networks:
      - logging-network

networks:
  logging-network:
    driver: bridge
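
To start the stack and check that Filebeat is shipping logs (the last command assumes `curl` is available inside the Elasticsearch container, which is the case for the official image):

# Start Elasticsearch, Filebeat and Kibana in the background
docker-compose up -d

# Follow Filebeat's own logs to see it harvesting the containers' log files
docker-compose logs -f filebeat

# List the Elasticsearch indices: a filebeat-* index should appear after a few seconds
docker-compose exec elasticsearch curl -s http://localhost:9200/_cat/indices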

Then go to Kibana (port 5601) and configure the index pattern: type * in the indicated field, validate, select the @timestamp field, then validate again. The index pattern required by Kibana is now created; you can go to the Discover section (the compass icon 🧭 in the left menu) to read your logs.
