Deploy containers from A to Z

This practical work will consist of creating Kubernetes objects to deploy an example stack: monster_stack. It is composed of:

  • A front-end in Flask (Python),
  • A backend that generates images (a monster avatar corresponding to a string),
  • And a database serving as a cache for these images, Redis.

You can use either your Cloud environment or Minikube.
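
If you go with Minikube, a typical way to start a local cluster (assuming Minikube and kubectl are already installed) is:

minikube start
kubectl get nodes   # the node should eventually show as Ready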

Reminder: Install Lens

Lens is a nice graphical interface for Kubernetes.

It connects using the default ~/.kube/config configuration and allows us to access a much more pleasant dashboard.

You can install it by running these commands:

sudo apt-get update; sudo apt-get install -y libxss-dev
curl -fSL https://github.com/lensapp/lens/releases/download/v4.0.6/Lens-4.0.6.AppImage -o ~/Lens.AppImage
chmod +x ~/Lens.AppImage
~/Lens.AppImage &

Deploying the monsterstack stack

Pods are sets of containers that are always scheduled and run together.

We would like to deploy our monster_app stack. We will start by creating a pod with only our monstericon container.

  • Create an empty project monster_app_k8s.
  • Create the following deployment file: monstericon.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monstericon 
  labels:
    <labels>

This file expresses an empty deployment object.

Add the label app: monsterstack to this Deployment object.
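
After this addition, the metadata: block should look like this:

metadata:
  name: monstericon
  labels:
    app: monsterstack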

For now, our deployment is incomplete because it does not have a spec: section.

The first step is to give our deployment a pod template, i.e. the model its ReplicaSet will use to create pods. Add the following (spec: should be at the same level as kind: and metadata:):

spec:
  template:
    spec:

Fill in the spec: section of our monstericon pod, starting from this pod template that launches an Nginx container:

      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
  • Replace the container name with monstericon, and the container image with tecpi/monster_icon:0.1; this will pull the image previously uploaded to Docker Hub (version 0.1).
  • Complete the port by setting containerPort to the production port of our application, 9090.
  • Objects in Kubernetes are highly dynamic. To associate and designate them, we assign them labels, i.e., tags with which we can select or match them precisely. It is thanks to labels that k8s associates pods with ReplicaSets. Add the following at the same level as the pod's spec:
    metadata:
      labels:
        app: monsterstack
        partie: monstericon

At this stage, we have described the pods of our deployment with their labels (one label common to all objects of the app, and a more specific one for this sub-part of the app).

Now let's add a few options to configure our deployment (at the same level as template:):

  selector:
    matchLabels:
      app: monsterstack
      partie: monstericon
  strategy:
    type: Recreate

This section indicates the labels to use to identify the pods of this deployment among others.

Then the update strategy (rollout) of the pods for the deployment is specified: Recreate designates the most brutal strategy of complete pod deletion and redeployment.

Finally, just before the line selector: and at the level of the strategy: keyword, add replicas: 3. Kubernetes will create 3 identical pods during the monstericon deployment.

The monstericon.yaml file so far:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: monstericon
  labels:
    app: monsterstack
spec:
  template:
    spec:
      containers:
      - name: monstericon
        image: tecpi/monster_icon:0.1
        ports:
        - containerPort: 9090
    metadata:
      labels:
        app: monsterstack
        partie: monstericon
  selector:
    matchLabels:
      app: monsterstack
      partie: monstericon
  strategy:
    type: Recreate
  replicas: 3
Apply our deployment
  • Apply our deployment file with kubectl apply -f.
  • Display the deployments with kubectl get deploy -o wide.
  • Also list the pods by running kubectl get pods --watch to verify that the containers are running.
  • Let's add a readinessProbe health check to the container in the pod with the following syntax (the readinessProbe keyword should be at the same level as image:):
        readinessProbe:
          failureThreshold: 5 # Retry 5 times
          httpGet:
            path: /
            port: 9090
            scheme: HTTP
          initialDelaySeconds: 30 # Wait 30s before testing
          periodSeconds: 10 # Wait 10s between each try
          timeoutSeconds: 5 # Wait 5s for response

This way, k8s will be able to know if the container is working well by calling the / route. This is a good practice that lets Kubernetes know when a pod is ready to receive traffic.

  • Let's also add constraints on CPU and RAM usage, by adding at the same level as image:
      resources:
        requests:
          cpu: "100m"
          memory: "50Mi"

Our pods will then be guaranteed one-tenth of a CPU and 50 MiB of RAM: these requests are reservations that the scheduler takes into account when placing pods.
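
If you also want to cap consumption (optional here), a limits: block can sit next to requests: in the same resources: section; the values below are only illustrative:

      resources:
        requests:
          cpu: "100m"
          memory: "50Mi"
        limits:           # example caps, adjust to your application
          cpu: "200m"
          memory: "100Mi"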

  • Run kubectl apply -f monstericon.yaml to apply.
  • With kubectl get pods --watch, let's observe the deployment strategy type: Recreate in real time.
  • With kubectl describe deployment monstericon, let's read the results of our readinessProbe, as well as how the deployment strategy type: Recreate went.

Final monstericon.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: monstericon
  labels:
    app: monsterstack
spec:
  template:
    spec:
      containers:
      - name: monstericon
        image: tecpi/monster_icon:0.1
        ports:
        - containerPort: 9090
        readinessProbe:
          failureThreshold: 5 # Retry 5 times
          httpGet:
            path: /
            port: 9090
            scheme: HTTP
          initialDelaySeconds: 30 # Wait 30s before testing
          periodSeconds: 10 # Wait 10s between each try
          timeoutSeconds: 5 # Wait 5s for response
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
    metadata:
      labels:
        app: monsterstack
        partie: monstericon
  selector:
    matchLabels:
      app: monsterstack
      partie: monstericon
  strategy:
    type: Recreate
  replicas: 3

Similar deployment for dnmonster

Now let's create a similar deployment for dnmonster:

  • Create dnmonster.yaml and paste the following code into it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dnmonster
  labels:
    app: monsterstack
spec:
  selector:
    matchLabels:
      app: monsterstack
      partie: dnmonster
  strategy:
    type: Recreate
  replicas: 5
  template:
    metadata:
      labels:
        app: monsterstack
        partie: dnmonster
    spec:
      containers:
      - image: amouat/dnmonster:1.0
        name: dnmonster
        ports:
        - containerPort: 8080

Finally, configure a third deployment, redis, in redis.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis 
  labels:
    app: monsterstack
spec:
  selector:
    matchLabels:
      app: monsterstack
      partie: redis
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        app: monsterstack
        partie: redis
    spec:
      containers:
      - image: redis:latest
        name: redis
        ports:
        - containerPort: 6379
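
As before, apply these two new files and check that their pods start correctly:

kubectl apply -f dnmonster.yaml
kubectl apply -f redis.yaml
kubectl get pods --watch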

Expose our stack with services

K8s services are network endpoints that automatically balance traffic to a set of pods designated by certain labels.

To create a Service object, use the following code, to be completed:

apiVersion: v1
kind: Service
metadata:
  name: <service_name>
  labels:
    app: monsterstack
spec:
  ports:
    - port: <port>
  selector:
    app: <app_selector> 
    partie: <tier_selector>
  type: <type>

Add this Service block at the beginning of each deployment file (separated from the Deployment by a --- line). Complete it for each part of our application:

  • the service name and the part (partie) name with the name of our program (monstericon, dnmonster, and redis),
  • the port with the service port,
  • the app and partie selectors with those of the corresponding ReplicaSet.

The type will be: ClusterIP for dnmonster and redis because these are services that only need to be accessed internally, and LoadBalancer for monstericon.
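
As an example, the completed Service for monstericon could look like this (dnmonster and redis follow the same pattern with their own names, ports and selectors):

apiVersion: v1
kind: Service
metadata:
  name: monstericon
  labels:
    app: monsterstack
spec:
  ports:
    - port: 9090
  selector:
    app: monsterstack
    partie: monstericon
  type: LoadBalancer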

Apply your three files.

  • List the services with kubectl get services.
  • Visit your application in the browser with minikube service <name-of-the-monstericon-service>.

Let's gather the three objects with a kustomization.

A kustomization gathers objects spread across multiple files in one place, so that the whole stack can be launched easily:

  • Create a folder monster_stack to store the three files:
    • monstericon.yaml
    • dnmonster.yaml
    • redis.yaml

Also create a kustomization.yaml file inside with:

resources:
    - monstericon.yaml
    - dnmonster.yaml
    - redis.yaml
  • Try running the kustomization with kubectl apply -k . from the monster_stack folder.
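
Kustomization files commonly also declare an apiVersion and a kind; a slightly fuller version of the same file, which works just as well, would be:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
    - monstericon.yaml
    - dnmonster.yaml
    - redis.yaml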

Add an Ingress load balancer to expose our application on the standard port

Let's install the Nginx Ingress controller with minikube addons enable ingress.

This is a dynamic load balancer implementation based on Nginx, configured to interface with a k8s cluster.
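
Before going further, you can check that the controller pod is running (depending on the Minikube version it lives in the ingress-nginx or kube-system namespace):

kubectl get pods -A | grep ingress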

Also, add the following load balancer configuration object in the monster-ingress.yaml file:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: monster-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
        - path: /monstericon
          pathType: Prefix
          backend:
            service:
              name: monstericon
              port:
                number: 9090
  • Add this file to our kustomization.yaml
  • Rerun the kustomization.

You should be able to access the application by running minikube service monstericon --url and adding /monstericon to the resulting URL.
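
If that URL does not answer on /monstericon, you can also go through the Ingress itself, which listens on port 80 of the Minikube VM (assuming the standard ingress addon setup):

curl http://$(minikube ip)/monstericon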
