Creating a Multi-container Application

Combining two images with Docker Compose

`identidock`: a Flask application that connects to Redis

from flask import Flask, Response, request
import requests
import hashlib
import redis
 
app = Flask(__name__)
cache = redis.StrictRedis(host='redis', port=6379, db=0)  # 'redis' will resolve to the Redis service declared in Docker Compose
salt = "UNIQUE_SALT"
default_name = 'Joe Bloggs'
 
@app.route('/', methods=['GET', 'POST'])
def mainpage():
 
    name = default_name
    if request.method == 'POST':
        name = request.form['name']
 
    salted_name = salt + name
    name_hash = hashlib.sha256(salted_name.encode()).hexdigest()
    header = '<html><head><title>Identidock</title></head><body>'
    body = '''<form method="POST">
                Hello <input type="text" name="name" value="{0}">
                <input type="submit" value="submit">
                </form>
                <p>You look like a:
                <img src="/monster/{1}"/>
            '''.format(name, name_hash)
    footer = '</body></html>'
    return header + body + footer
 
 
@app.route('/monster/<name>')
def get_identicon(name):
 
    image = cache.get(name)
 
    if image is None:
        print("Cache miss", flush=True)
        # 'dnmonster' will resolve to the identicon service declared in Docker Compose
        r = requests.get('http://dnmonster:8080/monster/' + name + '?size=80')
        image = r.content
        cache.set(name, image)  # write to the cache only on a miss
 
    return Response(image, mimetype='image/png')
 
if __name__ == '__main__':
    # development server only: in the container, the app is served by uWSGI (see the Dockerfile)
    app.run(debug=True, host='0.0.0.0', port=9090)

The `Dockerfile` used to build this application:

FROM python:3.7

# run uWSGI as an unprivileged user rather than as root
RUN groupadd -r uwsgi && useradd -r -g uwsgi uwsgi
RUN pip install Flask uWSGI requests redis
WORKDIR /app
COPY app/identidock.py /app

# 9090: HTTP port of the application, 9191: uWSGI stats port
EXPOSE 9090 9191
USER uwsgi
CMD ["uwsgi", "--http", "0.0.0.0:9090", "--wsgi-file", "/app/identidock.py", \
     "--callable", "app", "--stats", "0.0.0.0:9191"]

Answer:

The Docker Compose File

version: "3.7"
services:
  identidock:
    build: .
    ports:
      - "9090:9090"

A few remarks on this first version: `build: .` tells Compose to build the image from the `Dockerfile` in the current directory, and `ports` publishes container port 9090 on port 9090 of the host.

The application also needs the `dnmonster` service, which generates the identicon images. It is available as a public image, so we simply declare:

dnmonster:
  image: amouat/dnmonster:1.0

The `docker-compose.yml` should now look like this:

version: "3.7"
services:
  identidock:
    build: .
    ports:
      - "9090:9090"

  dnmonster:
    image: amouat/dnmonster:1.0

Finally, we also declare a network called `identinet` to put the two containers of our application in:

networks:
  identinet:
    driver: bridge

Each service must then be attached to this network by adding to its definition:

networks:
  - identinet

The application still needs a Redis server; the official `redis` image does the job, so we declare a third service:

redis:
  image: redis
  networks:
    - identinet

Final `docker-compose.yml`. Note that the service names `dnmonster` and `redis` match the hostnames hard-coded in `identidock.py`: on the `identinet` network, Compose's internal DNS resolves each service name to the corresponding container:

version: "3.7"
services:
  identidock:
    build: .
    ports:
      - "9090:9090"
    networks:
      - identinet

  dnmonster:
    image: amouat/dnmonster:1.0
    networks:
      - identinet

  redis:
    image: redis
    networks:
      - identinet

networks:
  identinet:
    driver: bridge
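
With all three services attached to `identinet`, the stack should work end to end. A possible smoke test (the commands below assume the files are laid out as above):

docker-compose up -d --build
curl http://localhost:9090          # the page now embeds a generated identicon
docker-compose logs identidock      # "Cache miss" should appear only on the first request for a given name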

Other services

Google-fu Exercise: a CodiMD pad

Using your favorite search engine, find a `docker-compose.yml` for CodiMD (a collaborative Markdown editor) and adapt it to launch your own pad.

Elastic Stack

Centralizing logs

Elasticsearch is useful here because, with a very simple configuration of Filebeat (the Elastic stack's log shipper), we can centralize the logs of all our Docker containers. To do this, we just need to download a Filebeat configuration designed for this purpose:

curl -L -O https://raw.githubusercontent.com/elastic/beats/7.10/deploy/docker/filebeat.docker.yml

Let's rename this configuration and change the file's owner to satisfy a security constraint of Filebeat (it refuses to start if its configuration file can be modified by users other than its owner):

mv filebeat.docker.yml filebeat.yml
sudo chown root filebeat.yml
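
A quick check that the permissions now satisfy Filebeat (the file must belong to root and not be writable by other users):

ls -l filebeat.yml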

Finally, let's create a `docker-compose.yml` file to launch the Elastic stack. The image versions are pinned to 7.10.0 to match the `7.10` branch of the Filebeat configuration downloaded above:

version: "3"

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    networks:
      - logging-network

  filebeat:
    image: docker.elastic.co/beats/filebeat:7.10.0
    user: root
    depends_on:
      - elasticsearch
    volumes:
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - logging-network
    # relax Filebeat's strict permission checks on the bind-mounted config
    command: filebeat -e -strict.perms=false

  kibana:
    image: docker.elastic.co/kibana/kibana:7.10.0
    depends_on:
      - elasticsearch
    ports:
      - 5601:5601
    networks:
      - logging-network

networks:
  logging-network:
    driver: bridge
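
Launch the stack, then generate a log line for Filebeat to pick up; with the downloaded `filebeat.docker.yml`, the output of every container on the host is collected:

docker-compose up -d
docker run --rm alpine echo "hello from a test container"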

Then go to Kibana (port 5601) and create the index pattern: type `*` in the indicated field, validate, then select the `@timestamp` field and validate again. The index pattern required by Kibana is now created; open the Discover section on the left (the compass icon 🧭) to browse your logs.
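
If nothing shows up in Kibana, a first troubleshooting step (a suggestion beyond the original instructions) is to check that Filebeat itself started cleanly:

docker-compose logs filebeat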