Easily run a multinode Kubernetes cluster on your CI

Written by: Fabrice JAMMES. Date: Jan 4, 2020 · 10 min read

This tutorial shows how to automate the tests of a cloud-native application. It will make your life much easier by letting you automatically run and test your Kubernetes applications inside a CI server.

Pre-requisites

The example below is based on GitHub and Travis-CI, but you can easily use any SCM and CI server able to spawn a virtual machine for each commit.

  • Your Kubernetes application will be stored inside a GitHub project; GitHub may be used freely, but you need to create a GitHub account.
  • The continuous integration server will rely on Travis-CI, which may also be used freely; you need to create a Travis-CI account.
  • Connect the GitHub project to Travis-CI by going to https://travis-ci.org/<GITHUB_USER>/<PROJECT_NAME> and activating the project inside the Travis-CI web interface. Travis-CI will then launch a new virtual machine for each new GitHub commit.

Kind: what is that thing?

kind is a tool for running local Kubernetes clusters using Docker container “nodes”. It is very helpful for developers who want to test their cloud-native applications on their workstation, and for system administrators who need to provide Kubernetes clusters for CI or development.
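As an illustration, here is a minimal sketch of how kind can spin up a multi-node cluster on a workstation. The file name kind-3-nodes.yaml is arbitrary, and the apiVersion shown is the one expected by the kind v0.5.x releases used later in this tutorial; newer kind releases use a different config apiVersion.

# kind-3-nodes.yaml — one control-plane node and two workers
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
- role: worker

# Create the cluster from this configuration
kind create cluster --config kind-3-nodes.yaml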

Install Kind inside Travis-CI

Travis-CI now launches a new virtual machine for each new Github commit in your project. Our goal is to run kind inside this Travis-CI virtual machine, so that we can test that our project runs correctly inside a Kubernetes cluster.

  • Luckily, K8s-school provides an example project to do exactly this; it is available at https://github.com/k8s-school/ci-example.

So, let’s clone our GitHub project. In the following lines, we will use ci-example as our example project, but you should use your own GitHub project.

git clone https://github.com/k8s-school/ci-example.git
cd ci-example
ls -l .travis.yml
-rw-rw-r-- 1 user group 571 Oct 14 11:56 .travis.yml

The hidden file .travis.yml tells Travis-CI what to do during the build.

sudo: required
dist: xenial

before_script:
  - git clone --depth 1 -b "v0.5.1" --single-branch https://github.com/k8s-school/kind-helper.git
  - ./kind-helper/kind/k8s-create.sh
  - export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"

script:
  - ./build.sh
  - ./deploy.sh
  - ./wait-app-ready.sh 
  - kubectl get all,endpoints,cm,pvc,pv -o wide
  - ./run-integration-tests.sh

The before_script section clones kind-helper inside the Travis-CI virtual machine and then launches the embedded script k8s-create.sh. This script creates a 3-node Kubernetes cluster using kind and installs the kubectl client. The KUBECONFIG variable then allows kubectl and other Kubernetes clients to talk to the kind cluster.
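If you want to verify that the cluster is really up at this point, a quick sanity check can be appended to before_script; the command below is not part of ci-example, it is just a suggestion:

# Optional sanity check: list the kind nodes (one control-plane, two workers)
kubectl get nodes -o wide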

All you have to do now is fill in the script section: build the container images for your application, deploy them to kind using kubectl or any other Kubernetes client, wait for your application to be up and running, and then launch the integration tests.
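For example, a build.sh / deploy.sh / wait-app-ready.sh trio could boil down to something like the sketch below. The image name, manifest directory, and deployment name (my-app) are placeholders, not the actual content of the ci-example scripts:

# build.sh: build the image and make it visible to the kind nodes
docker build -t my-app:ci .
kind load docker-image my-app:ci

# deploy.sh: apply the application manifests
kubectl apply -f manifests/

# wait-app-ready.sh: block until the deployment is available
kubectl rollout status deployment/my-app --timeout=120s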

Pretty easy and lightweight, isn’t it?

This technique has been used for a few years by the Large Synoptic Survey Telescope (LSST) data access team, mainly based at Stanford University. This team is developing Qserv, a distributed, petascale, shared-nothing database. You can see Kubernetes inside Travis-CI in action in the qserv-operator project, the Kubernetes operator in charge of installing and managing Qserv in production.
