Webinar Series: Deploying and Scaling Microservices in Kubernetes

This article supplements a webinar series on deploying and managing containerized workloads in the cloud. The series covers the essentials of containers, including managing container lifecycles, deploying multi-container applications, scaling workloads, and working with Kubernetes. It also highlights best practices for running stateful applications.

How To Back Up and Restore a Kubernetes Cluster on DigitalOcean Using Heptio Ark

Heptio Ark is a convenient backup tool for Kubernetes clusters that compresses and backs up Kubernetes objects to object storage. It also takes snapshots of your cluster’s Persistent Volumes using your cloud provider’s block storage snapshot features, and can then restore your cluster’s objects and Persistent Volumes to a previous state.

StackPointCloud’s DigitalOcean Ark Plugin allows you to use DigitalOcean block storage to snapshot your Persistent Volumes, and Spaces to back up your Kubernetes objects. When running a Kubernetes cluster on DigitalOcean, this allows you to quickly back up your cluster’s state and restore it should disaster strike.

In this tutorial we’ll set up and configure the Ark client on a local machine, and deploy the Ark server into our Kubernetes cluster. We’ll then deploy a sample Nginx app that uses a Persistent Volume for logging, and simulate a disaster recovery scenario.
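For a sense of what that sample app looks like, the sketch below defines a PersistentVolumeClaim and an Nginx Deployment that writes its access and error logs to the claimed volume. The names, namespace, image tag, and volume size are placeholders rather than the tutorial's exact manifest.

    # Hypothetical sketch: an Nginx app that logs to a Persistent Volume,
    # similar in spirit to the sample app backed up in the tutorial.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nginx-logs          # placeholder name
      namespace: nginx-example  # placeholder namespace
      labels:
        app: nginx
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi          # provisioned as a block storage volume by the cluster's default StorageClass
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      namespace: nginx-example
      labels:
        app: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
            - name: nginx
              image: nginx:1.15
              ports:
                - containerPort: 80
              volumeMounts:
                - name: nginx-logs
                  mountPath: /var/log/nginx   # access and error logs land on the Persistent Volume
          volumes:
            - name: nginx-logs
              persistentVolumeClaim:
                claimName: nginx-logs

With Ark installed, backing up just this app with a label selector (something along the lines of ark backup create nginx-backup --selector app=nginx) and later restoring it with ark restore create --from-backup nginx-backup is the disaster recovery flow the tutorial simulates; the tutorial's exact names and flags may differ.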

How to run a multi-tenant WordPress platform on Google Kubernetes Engine

For a service provider running WordPress sites, it is all about density, density, density.

WordPress is said to run 28% of all websites on the Internet. That is a phenomenal installed base of some 75 million sites. While some of these are massive sites like TechCrunch or The New Yorker, the vast majority of WordPress sites are much smaller.

That means that as a WordPress hoster, your business probably follows the 80-20 rule: 80% of your revenue comes from 20% of your sites, or, put another way, 80% of your sites account for only 20% of your traffic.

That means that you need to think about your business in two ways:

  1. You need to provide a reliable service to a large number of low-traffic sites while minimizing infrastructure costs, since your margins depend in large part on packing more sites onto the same physical infrastructure.
  2. You need to provide a white-glove, highly performant, and reliable experience to the small number of sites that make up the bulk of your revenue.

At the same time, you need a migration path that lets sites move from a low-volume plan to a high-volume plan without disrupting the customer or your own internal operations teams.


“Our clusters are highly dense, meaning we run a lot of containers per host. On AWS, we use huge instances. The recommendation from Kubernetes is 100 pods per VM. Already, we’re running 200-300 pods per host. Also, since most of the apps that we run are stateful, we can easily have 200-300 volumes per host as well. And we’re working to push these limits even further. Because of these densities enabled by Kubernetes and Portworx, we’re easily saving 60-90% on our compute costs. Portworx itself was between 30-50% cheaper than any other storage solution we tested.”


If you knew with certainty which 20% of your customers would account for 80% of your traffic at all times, solving the noisy neighbor problem would be a one-time migration. But because traffic patterns change over time, this is a hard problem to solve. Portworx does a few critical things to help.

First, in addition to using Kubernetes to limit pod resources like memory and CPU, you can use Portworx to automatically place different workloads on different storage hardware for different classes of service. For instance, you might sell a premium “performance” plan to customers who expect heavy usage and are performance sensitive, while cost-conscious customers opt for a “budget” plan that offers reliability but doesn’t guarantee blazing-fast performance. On the backend, these plans can be mapped to Portworx-backed storage classes that automatically place high-end plans on SSDs and low-end plans on HDDs.
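As a rough sketch of how plans can map to storage tiers, the two Kubernetes StorageClasses below use the Portworx provisioner with different replication and I/O-priority settings. The class names are invented for this example, and the parameter values are assumptions to be checked against the Portworx documentation for your version.

    # Hypothetical tier mapping: two StorageClasses backed by Portworx.
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: wp-performance            # placeholder name for the premium plan
    provisioner: kubernetes.io/portworx-volume
    parameters:
      repl: "3"                       # three replicas for durability
      priority_io: "high"             # prefer fast, SSD-backed pools
      io_profile: "db"
    ---
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: wp-budget                 # placeholder name for the budget plan
    provisioner: kubernetes.io/portworx-volume
    parameters:
      repl: "2"
      priority_io: "low"              # allow placement on slower, HDD-backed pools

A tenant's PersistentVolumeClaims then simply reference the StorageClass that matches the plan they purchased.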


Often a hosting customer will call their service provider the day before they are going to be on national TV and say, “I really need my site to work tomorrow.” This often leads to a lot of scrambling and manual tuning, but with PX-Motion, moving the customer to an environment with more resources is as easy as kubectl apply -f wp-migration.yaml.

The above describes moving one heavy-traffic site off a multi-tenant cluster. This is often the best option if you have some advance warning before a large traffic spike. However, in the middle of a large traffic event, it is often better to move the low-traffic sites away from the heavy-traffic site, rather than the other way around. This is also possible with PX-Motion.
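PX-Motion migrations can be described declaratively through Stork's Migration custom resource, so a file like the wp-migration.yaml mentioned above might look roughly like the sketch below. The cluster pair, namespace, and field set here are assumptions based on the Stork Migration CRD, not the provider's actual manifest.

    # Hypothetical sketch of a PX-Motion migration spec (Stork Migration CRD).
    apiVersion: stork.libopenstorage.org/v1alpha1
    kind: Migration
    metadata:
      name: wp-migration
      namespace: wordpress-tenant-42        # placeholder tenant namespace
    spec:
      clusterPair: dedicated-cluster-pair   # placeholder ClusterPair pointing at the destination cluster
      namespaces:
        - wordpress-tenant-42               # migrate only this tenant's namespace
      includeResources: true                # copy Kubernetes objects (Deployments, Services, PVCs, ...)
      includeVolumes: true                  # replicate the Portworx volumes' data
      startApplications: true               # scale the apps up on the destination once data is in place

Applying it with kubectl apply -f wp-migration.yaml kicks off the copy of both the Kubernetes objects and the volume data, with no changes required on the tenant's side.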