How to handle security updates within Docker containers?

When deploying applications onto servers, there is typically a separation between what the application bundles with itself and what it expects the platform (operating system and installed packages) to provide. One benefit of this separation is that the platform can be updated independently of the application. This is useful, for example, when security updates need to be applied urgently to platform packages without rebuilding the entire application.

Traditionally, security updates have been applied simply by running a package manager command to install updated versions of packages on the operating system (for example, “yum update” on RHEL). But with the advent of container technology such as Docker, where container images essentially bundle both the application and the platform, what is the canonical way of keeping a system with containers up to date? Both the host and the containers have their own, independent sets of packages that need updating, and updating on the host will not update any packages inside the containers. With the release of RHEL 7, where Docker containers are a prominently featured technology, it would be interesting to hear what Red Hat’s recommended way of handling security updates in containers is.

Thoughts on a few of the options:

  • Letting the package manager update packages on the host will not update packages inside the containers.
  • Having to regenerate all container images to apply updates seems to break the separation between the application and the platform (updating the platform requires access to the application build process which generates the Docker images).
  • Running manual commands inside each of the running containers seems cumbersome and changes are at risk of being overwritten the next time containers are updated from the application release artifacts.

So none of these approaches seems satisfactory.
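To make the second option concrete, the usual rebuild-and-redeploy cycle looks roughly like the sketch below. The image name, tag, and run options are placeholders, and the upgrade step baked into the Dockerfile depends on the base distribution (apt, yum/dnf, apk, and so on).

# Rebuild so the latest base image is pulled and package-upgrade layers re-run
docker build --pull --no-cache -t myapp:2019-06-01 .

# Replace the running container with one from the fresh image
docker stop myapp && docker rm myapp
docker run -d --name myapp -p 8080:8080 myapp:2019-06-01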

A New, Better Way to Automatically Update Docker Containers

Docker is the buzz these days, right? Package your application with a Dockerfile, build, push to a registry, and somehow get it to your cloud provider. There are a million ways to skin this cat. 🐱

The reality of it is, someone sets up a container somewhere that “just works” and often forgets about it. I’ve seen a single container run for months on end. This results in missing functionality from newer versions of an application or, even worse, leaves potential security vulnerabilities unpatched, all because somebody was too lazy to update the image and recreate the container on their on-prem server or in the cloud.

Container Orchestration

The proper way to handle such a scenario is to utilize rolling updates via Kubernetes or Docker Swarm, but to most developers these are black boxes that “only ops people know how to use.” So what happens now? Usually a developer will just issue a docker run and say, “Cool, my app is deployed, now consumers can quit hounding me.”

It’s not ideal… but again… the reality.
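For reference, once a cluster and a Deployment exist, the orchestrated path is close to a one-liner: pointing the Deployment at a newer image triggers a rolling update. The deployment and image names below are hypothetical:

kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.2.3
kubectl rollout status deployment/myapp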


Automate the Process

There’s a popular open source project called Watchtower that has the ability to “watch” running Docker containers on either the local host or a remote host, check whether there is a newer image in the remote registry, and then update the container with the new image using the same configuration options it was instantiated with. Pretty cool, right?

Absolutely.
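If all you want is the behavior, Watchtower itself typically runs as a container with access to the Docker socket. From memory, the invocation is roughly the following; check the project’s documentation for the current image name and options:

docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower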

This application really intrigued me, so naturally I started digging through the source, which is written in Go. The problem was that I couldn’t follow a lot of what was going on in the application due to its lack of readability. I’m not a Go expert, but I can usually still follow the logic. This was not one of those cases…

I thought it was natural that it was written in Go since, well, Docker is. I knew there were two Docker SDK options available in Go and Python, so I thought to myself, “OK, Watchtower is the Go version of auto-updating containers, where’s the Python version?” Well, guess what… it didn’t exist.

Reinventing the Wheel

Why?

I was somewhat shocked to see that no one had done a similar thing using the Docker Python SDK. And so went the thought process: “Hey, I dig Python. I’ll try my hand at it.” It is Hacktoberfest season, after all.

After a weekend of playing with the Docker Python SDK, I saw how feasible the implementation would be and had something minimal working.

Kubernetes WordPress Installation (Helm)

Introduction

In this article we will learn how to set up WordPress on a Kubernetes cluster using Helm.

Helm: Helm is a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources.

  • Let’s begin deploying WordPress using Helm in Kubernetes. If you are new to Helm, then download and initialize Helm as follows:
root@kube-master:#  helm init
root@kube-master:# kubectl create serviceaccount --namespace kube-system tiller
root@kube-master:# kubectl create clusterrolebinding tiller-cluster-rule \
   --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
root@kube-master:#  kubectl patch deploy --namespace kube-system tiller-deploy \
   -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
  • Make sure the tiller-deploy pod is up and running
root@kube-master:/home/ansible# kubectl get pods -n kube-system 
NAME                                  READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-jvmlb              1/1       Running   0          1h
coredns-78fcdf6894-xstbn              1/1       Running   0          1h
etcd-kube-master                      1/1       Running   0          1h
kube-apiserver-kube-master            1/1       Running   0          1h
kube-controller-manager-kube-master   1/1       Running   0          1h
kube-flannel-ds-5gzn9                 1/1       Running   0          1h
kube-flannel-ds-tlc8j                 1/1       Running   0          1h
kube-proxy-kl4fg                      1/1       Running   0          1h
kube-proxy-krt6n                      1/1       Running   0          1h
kube-scheduler-kube-master            1/1       Running   0          1h
tiller-deploy-85744d9bfb-wh98g        1/1       Running   0          1h
  • Once the tiller pod is up and running, we can deploy WordPress. The deployment uses Bitnami Docker images; before installing it, we need to create a PersistentVolume and a PersistentVolumeClaim.
  • Define the PersistentVolume mariadb-pv where the MariaDB data will be stored. The hostPath tells Kubernetes that the database directory is at the /bitnami/mariadb location on the node, as sketched below.
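Below is a minimal sketch of what the mariadb-pv manifest and a matching claim could look like; the claim name, storage size, and access mode are assumptions for a single-node hostPath setup and should be adjusted to your cluster before the WordPress chart is installed.

root@kube-master:# cat > mariadb-pv.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mariadb-pv
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /bitnami/mariadb
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
EOF
root@kube-master:# kubectl create -f mariadb-pv.yaml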