Local Development with Kubernetes

I wrote this howto for work while learning Kubernetes, mostly for myself, but also so that other devs could jump in and learn without racking up cloud costs.

Knowing full well that there are 234924385734 other similar blog posts out there, I'm sharing my take on falling into the K-hole, er, getting started with Kubernetes.


Introduction

Kubernetes is a standard with more than one implementation, and each distribution caters to a different set of requirements and considerations. Much as with Linux distributions, some aim to pack in every current and historical feature and add-on, while others aim to be as lightweight and portable as possible. This guide focuses on a distribution at the lighter end of the spectrum, well suited to local testing and development: k3s by Rancher Labs.

Installation

k3s claims to run on any Linux distribution (amd64, armv7, and arm64). For this guide we're running Ubuntu 16.04 LTS.

Install it with:

$ curl -sfL https://get.k3s.io | sh -

Disclaimer:

Piping a shell script straight from the internet is living on the edge. But you’re probably following this in a throwaway VM anyway, right? Right? I mean, who goes running random commands they find on internet blogs on their local computers?!

Uh. Anyway. The installation script will download the k3s binary to /usr/local/bin by default and add a systemd service definition to run the agent in server mode, listening on https://localhost:6443. It will start a local single-node cluster and write a Kubernetes config file to /etc/rancher/k3s/k3s.yaml.
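
You can sanity-check that the service came up properly with systemd before going any further:

$ sudo systemctl status k3s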

Option 1: Copy kubectl config

By default, k3s creates the config file with 0600 permissions (only root can read or write to it) because it contains authentication information for controlling your Kubernetes “cluster”.

In order to interact with our cluster as a non-root user, copy this config to your local user directory and set the permissions so you can read it:

$ mkdir -p ~/.kube
$ sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config-k3s.yaml
$ sudo chmod a+r ~/.kube/config-k3s.yaml

Set the KUBECONFIG environment variable in your shell to point to your copy of the config file so kubectl knows which cluster you want to work with:

$ export KUBECONFIG=~/.kube/config-k3s.yaml
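
To avoid repeating this in every new shell, you can append that export line to your ~/.bashrc:

$ echo 'export KUBECONFIG=~/.kube/config-k3s.yaml' >> ~/.bashrc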

You can now check that the installation succeeded with:

$ kubectl get nodes
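
You should get back a single node in the Ready state, something like this (your hostname, age, and version will differ):

NAME      STATUS   ROLES    AGE   VERSION
k3s-dev   Ready    master   2m    v1.14.6-k3s.1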

Option 2: Modify the installation

Another option is instructing k3s to write the config file with less restrictive permissions in the first place. This makes your cluster credentials readable by every user on the machine, so don't do it unless you have a good reason.

$ sudo chmod 0644 /etc/rancher/k3s/k3s.yaml
$ sudo chmod 0600 /etc/systemd/system/k3s.service.env
$ echo "K3S_KUBECONFIG_MODE=0644" | sudo tee -a /etc/systemd/system/k3s.service.env

Check that this works by restarting the service with sudo systemctl restart k3s.service and looking at the permissions of /etc/rancher/k3s/k3s.yaml (which should now be 0644).
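
Alternatively, if you haven't run the installer yet, you can set this at install time; the install script copies K3S_* environment variables into the service environment file for you [8]:

$ curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="0644" sh -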

Which kubectl?

You may have multiple kubectl binaries available. If not, skip this section.

Using k3s kubectl over kubectl shouldn’t make a difference as long as you set your Kubernetes config properly.

The following commands are equivalent:

$ KUBECONFIG=~/.kube/config-k3s.yaml kubectl get nodes
$ kubectl get nodes --kubeconfig=~/.kube/config-k3s.yaml
$ sudo k3s kubectl get nodes

The alternative to explicitly setting KUBECONFIG is to merge your existing Kubernetes config file with the contents of the k3s config in /etc/rancher/k3s, but this is not recommended [5]. Proceed at your own risk!
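
If you do decide to merge, let kubectl do the work instead of hand-editing YAML: list both files in KUBECONFIG and flatten them into a single config (a trick covered in [5]; adjust the paths to your setup):

$ KUBECONFIG=~/.kube/config:~/.kube/config-k3s.yaml \
    kubectl config view --flatten > ~/.kube/config-merged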

Command Completion

It is extremely handy to have command completion available for kubectl, since the commands tend to get pretty verbose. We can install it with:

$ kubectl completion bash | sudo tee /usr/share/bash-completion/completions/kubectl
$ source ~/.bashrc

Also, it’s tiring to type kubectl repeatedly, so let’s alias it, and make sure completions are loaded for the alias too.
Add these lines to your ~/.bash_aliases:

alias kl=kubectl
type __start_kubectl &>/dev/null || _completion_loader kubectl
complete -o default -o nospace -F __start_kubectl kl

NOTE: On Ubuntu 16.04, bash completion is disabled by default system-wide. So let’s enable it:

$ sudo sed -i.bak -E '/if ! shopt -oq posix; then/,+6 s/^#//g' /etc/bash.bashrc

Make sure to restart your shell for changes to take effect.

Storage Class

When using a managed Kubernetes service like GKE, a default storage class for persistent data is pre-defined to use a cloud block storage solution like GCE persistent disks. k3s does not ship a default storage class, so any deployment that relies on persistence will probably fail unless a storage class is explicitly defined for each volume claim. We can fix that by installing Rancher's local-path provisioner [2], which defines a storage class called local-path that provisions volumes from a directory on the local host:

$ kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

(Also note that kubectl can apply a manifest straight from the internet – like piping a script straight to your shell but cooler!)

You can now set the local-path storage class to be the default with:

$ kubectl patch storageclass local-path -p '
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"'

Verify that the storage class is now set to default with kubectl get storageclass:

NAME                   PROVISIONER             AGE
local-path (default)   rancher.io/local-path   121m
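
To convince yourself the provisioner actually works, create a small PersistentVolumeClaim (a sketch; the name and size are arbitrary). Note that local-path binds volumes lazily, so the claim will show Pending until a pod mounts it:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi

Save this as test-pvc.yaml, apply it with kubectl apply -f test-pvc.yaml, and clean up with kubectl delete pvc test-pvc when you're done.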

Now that we’re set up, let’s run some stuff in our cluster.


Running a Container

Run a container inside your cluster:

$ kubectl run -it ubuntu --image=ubuntu:xenial -- bash

If you’ve used Docker before, you’ll appreciate the similarity to the docker run syntax.

This does a few things behind the scenes:

  • Creates a Deployment named ubuntu
  • Creates a ReplicaSet of scale 1 to satisfy the Deployment
  • Creates a Pod controlled by the ReplicaSet, running a single container
  • Pulls the image ubuntu:xenial from the public Docker registry
  • Drops an interactive shell into the container

You can poke around, install some packages, peek at /etc/resolv.conf, or inspect the running environment with env, mount, or top.

When you are done, exit the shell. This kills the container.
Note, however, that because this is a Deployment, Kubernetes will immediately restart the container to keep the pod running (notice the RESTARTS column below). This can be confirmed by running kubectl get pods:

NAME                      READY   STATUS    RESTARTS   AGE
ubuntu-5cb45865f9-6rj62   1/1     Running   1          75s

To see a list of all the objects we created by invoking kubectl run, we can run kubectl get all -l run=ubuntu:

NAME                          READY   STATUS    RESTARTS   AGE
pod/ubuntu-5cb45865f9-6rj62   1/1     Running   1          3m23s

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ubuntu   1/1     1            1           3m23s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/ubuntu-5cb45865f9   1         1         1       3m23s

To delete the Deployment and all of its dependent resources, run:

$ kubectl delete deployment ubuntu

Configuration as Code

One of the real superpowers of Kubernetes, in my opinion, is the ability to do almost anything either completely imperatively (“do this! do that! then do these three things!”) or declaratively (“here’s what I want it to look like at the end – you figure out how to get there”).

Let’s take our imperative kubectl run example from above, and generate a declarative manifest from it that we can save to a file and, say, commit to version control:

$ kubectl run ubuntu --image=ubuntu:xenial --dry-run --output yaml

What you’ll get back instead of a shell prompt inside your container is the instructions for building your deployment in YAML form, which you can then modify as needed, and then apply with kubectl apply -f. Brilliant.
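
For reference, the generated manifest looks roughly like this (abbreviated; your kubectl version may emit a few extra fields):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: ubuntu
  name: ubuntu
spec:
  replicas: 1
  selector:
    matchLabels:
      run: ubuntu
  template:
    metadata:
      labels:
        run: ubuntu
    spec:
      containers:
      - image: ubuntu:xenial
        name: ubuntu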


Helm

What would Linux be without apt? Helm is the apt of Kubernetes, and more. It uses charts, which are like package manifests, to instantiate and manage Kubernetes objects such as Deployments, Secrets, and ConfigMaps as a unified, versioned set called a release. It can template values using the Go templating language, gotpl. It can roll back releases to previous versions. It can… well, you get it.

Installation

Install Helm as a snap package (on Ubuntu):

$ sudo snap install helm --classic

Helm can also be installed on other unix-y OSes with:

$ curl -L https://git.io/get_helm.sh | bash

Install Helm bash completion:

$ helm completion bash | sudo tee /usr/share/bash-completion/completions/helm
$ source ~/.bashrc

Helm installs a server-side component to your Kubernetes cluster called Tiller. Create a service account for Tiller to manage the cluster and initialize Helm:

$ kubectl -n kube-system create serviceaccount tiller
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller
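
Give Tiller a few moments to come up, then confirm that the client and server can talk to each other:

$ helm version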

Note: Helm 3, which is in beta at the time of writing, does away with Tiller. Instead of using a server-side component to manage state and deploy releases, it uses the credentials of the client invoking the request, which is much more secure. Read more about Helm 3 in the Helm documentation.

Example: Install Grafana

Let's test our local Kubernetes cluster by installing a relatively straightforward Helm chart: Grafana.
Make sure the KUBECONFIG environment variable is set properly (see above), and run:

$ helm install --name grafana stable/grafana

After a few moments you should see some output listing the Kubernetes objects that make up the release, as well as some notes about next steps. Following those notes, retrieve the autogenerated admin password:

$ kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Since the Grafana chart deploys a service of type ClusterIP by default with no ingress to route traffic to the cluster from outside, we have to use port forwarding to access the pod.

Think of port forwarding as the ssh -L of Kubernetes – it abstracts away all the complex networking stuff happening under the hood (and there’s a lot of it) to give you access to your services through the control plane, like this:

$ export POD_NAME=$(kubectl get pods --namespace default -l "app=grafana,release=grafana" -o jsonpath="{.items[0].metadata.name}")
$ kubectl --namespace default port-forward $POD_NAME 3000

Note that if you installed kubectl bash completion, the generated pod name will be available as a completion in an interactive shell.

Now we can access Grafana at http://localhost:3000, just as if we had deployed Grafana to a remote server and logged in to it with ssh -L.

Getting Info

Let’s get some more info about our Grafana deployment.

Every Kubernetes object has as part of its metadata a set of key-value pairs called labels. We can perform operations on multiple objects at a time using label selectors. An example of this can be seen above in the port forwarding command – specifically the -l app=grafana,release=grafana part. This is useful if, for example, you have two versions of the same app deployed simultaneously, and you want to differentiate between two sets of objects by release.

Most Helm charts provide a label called release with the name of the release as the value. The Grafana chart is no exception.

So let’s get a list of all objects matching this label selector:

$ kl get all -l app=grafana,release=grafana

You should get back something like this:

NAME                          READY   STATUS    RESTARTS   AGE
pod/grafana-6c6d9cfd6-4fhnn   1/1     Running   0          2m18s


NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/grafana   ClusterIP   10.43.184.148   <none>        80/TCP    2m18s


NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana   1/1     1            1           2m18s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/grafana-6c6d9cfd6   1         1         1       2m18s

kubectl also supports different output formats like YAML, JSON, and even jsonpath, which can extract specific fields from the manifest for programmatic use [7].

For example, to get the ClusterIP address of the Grafana service above, for use in a script:

$ kubectl get svc -l app=grafana,release=grafana \
  -o jsonpath='{.items[0].spec.clusterIP}'
10.43.184.148

A more complicated example, which extracts the port number of the port named service:

$ kubectl get svc -l app=grafana,release=grafana \
  -ojsonpath='{.items[*].spec.ports[?(@.name=="service")].port}'
80

Here’s the equivalent in jq, which some may feel a little more at home with (I know I do):

$ kubectl get svc -l app=grafana,release=grafana -ojson \
  | jq '.items[].spec.ports[] | select(.name=="service").port'
80

When done testing, we can delete the release and make it like it never existed with:

$ helm delete --purge grafana

When all else fails:

$ kubectl cluster-info dump

Using a Private Registry

You might be working on a project the world isn’t yet ready for, in which case you’re probably using a private container registry that requires credentials. If we want to use a container image hosted on a private registry like gcr.io, we need to tell k3s how to authenticate. In GKE this would normally just work, but since we’re not running in Google’s walled garden we have to specify an ImagePullSecret.

First we need to create a service account key with read-only access to Google Cloud Storage, which Google Container Registry uses as its backend:

$ gcloud iam service-accounts keys create \
  --iam-account=gcr-readonly@<your_project_id>.iam.gserviceaccount.com \
  gcr-readonly.json
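
This assumes the gcr-readonly service account already exists. If it doesn't, here's a sketch of creating it and granting read access to the registry's storage (roles/storage.objectViewer is one reasonable choice of role):

$ gcloud iam service-accounts create gcr-readonly
$ gcloud projects add-iam-policy-binding <your_project_id> \
  --member=serviceAccount:gcr-readonly@<your_project_id>.iam.gserviceaccount.com \
  --role=roles/storage.objectViewer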

Create a Kubernetes secret of type docker-registry from the service account key.
We are going to call our secret gcr-readonly-secret:

$ kubectl create secret docker-registry gcr-readonly-secret \
  --docker-server=https://gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat gcr-readonly.json)"

Now we can do one of two things: (a) specify an imagePullSecret for every deployment that needs to pull images from this registry, or (b) patch our default service account to include a reference to our image pull secret. We're going to go with option (b) to keep our manifests DRY:

$ kubectl patch serviceaccount default -p \
'{"imagePullSecrets": [{"name": "gcr-readonly-secret"}]}'

Although not strictly necessary, we also want to create a generic secret with this service account key to be used by pods themselves to interact with GCS:

$ kubectl create secret generic gcs-service-account-key --from-file=gcr-readonly.json

Now we can create pods and deployments from images in our private container registry, gcr.io/<your_project_id>, and k3s will be able to pull them.

For the sake of brevity I’m leaving out some of the GCP-specific steps of interacting with a private registry, but the overall development process would go something like:

  1. write a Dockerfile for your code and its dependencies
  2. build a container with docker build -t gcr.io/<your_project_id>/super-cool-app:0.0.1 .
  3. push it to your registry with docker push gcr.io/<your_project_id>/super-cool-app:0.0.1
  4. write some Kubernetes manifests (or even your very own Helm chart!) to deploy your app (see the sketch below)
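
Here's a minimal sketch of what step 4 might look like, assuming the patched default service account from above (so no explicit imagePullSecrets needed) and mounting the gcs-service-account-key secret for GCS access. The app name and mount path are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: super-cool-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: super-cool-app
  template:
    metadata:
      labels:
        app: super-cool-app
    spec:
      containers:
      - name: super-cool-app
        image: gcr.io/<your_project_id>/super-cool-app:0.0.1
        env:
        # Standard variable that Google Cloud client libraries pick up
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /secrets/gcr-readonly.json
        volumeMounts:
        - name: gcs-key
          mountPath: /secrets
          readOnly: true
      volumes:
      - name: gcs-key
        secret:
          secretName: gcs-service-account-key

Apply it with kubectl apply -f and you're off to the races.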

And that’s it, folks.

Obviously there is a whole lot more to Kubernetes, but I hope this guide helps you get your feet wet.

Please leave a comment and let me know if you found this useful!


References

  1. https://medium.com/@marcovillarreal_40011/cheap-and-local-kubernetes-playground-with-k3s-helm-5a0e2a110de9
  2. https://github.com/rancher/local-path-provisioner
  3. https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class
  4. https://helm.sh/docs/using_helm/#installing-helm
  5. https://ahmet.im/blog/mastering-kubeconfig
  6. https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry
  7. https://kubernetes.io/docs/reference/kubectl/jsonpath
  8. https://rancher.com/docs/k3s/latest/en/configuration
