Local Development with Kubernetes
With the knowledge that there are 234924385734 other similar blog posts out there, I’m sharing my take on getting started with Kubernetes.
July 2020: This post has been updated with instructions for the latest version of k3s at the time of writing (1.18.4), Ubuntu 18.04, and Helm 3.
A lot has changed in a year!
Introduction
Kubernetes is a standard with more than one implementation. Each distribution caters to a different set of requirements and considerations. Much like the diversity of available Linux distributions, some Kubernetes distributions aim to be completely packed with all current and historical features and add-ons, while some aim to be as lightweight and portable as possible. This guide will focus on installing a distribution on the lighter end of the spectrum for the purposes of local testing and development, k3s by Rancher Labs.
Installation
k3s claims to be runnable on any Linux (amd64, armv7, arm64). For this guide we’re running Ubuntu 18.04 LTS.
Install it with
$ curl -sfL https://get.k3s.io | sh -
Disclaimer:
Piping a shell script straight from the internet is living on the edge. But you’re probably following this in a throwaway VM anyway, right? Right? I mean, who goes running random commands they find on internet blogs on their local computers?
Anyway. The installation script will download the k3s binary to /usr/local/bin by default and add a systemd service definition to run the agent in server mode, listening on https://localhost:6443. It will start a local cluster of 1 node and write a Kubernetes config file to /etc/rancher/k3s/k3s.yaml.
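To sanity-check the install before going any further, you can look at the systemd unit the installer creates (named k3s by default) and confirm the single node comes up Ready:
$ sudo systemctl status k3s
$ sudo k3s kubectl get nodes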
Client configuration
By default, k3s creates the config file with 0600 permissions (only root can read or write to it) because it contains authentication information for controlling your Kubernetes “cluster”.
In order to interact with our cluster as a non-root user, copy this config to your local user directory where your kubectl client expects it to be.
NOTE: If you’ve already got some cluster credentials stored in the default location, they will be overwritten by the following command!
$ mkdir -p ~/.kube
$ sudo k3s kubectl config view --raw > ~/.kube/config
If you want to preserve your existing Kubernetes cluster authentication credentials, kubectl can merge two YAML configs together – but be warned, it’s a bit kludgy.
First, copy the k3s credentials to a temporary file using the kubectl command that comes built-in to k3s:
$ sudo k3s kubectl config view --raw > ~/.kube/config-k3s
Set the KUBECONFIG environment variable in your shell to point to the k3s configuration in addition to your original config using the same syntax you’d use for appending multiple paths to a PATH variable:
$ export KUBECONFIG=~/.kube/config-k3s:~/.kube/config
Now we merge the configurations together, replace the original, and clean up:
$ kubectl config view --raw > ~/.kube/config-merged
$ mv ~/.kube/config-merged ~/.kube/config
$ rm ~/.kube/config-k3s
$ unset KUBECONFIG
You can now check to make sure both your original clusters and your new local k3s cluster are present:
$ kubectl config get-contexts
CURRENT   NAME        CLUSTER     AUTHINFO   NAMESPACE
*         cluster-1   cluster-1   default
          default     default     default
Now let’s set our k3s cluster (named default by default) as the current context:
$ kubectl config use-context default
You might want to rename the nondescript default context to something more descriptive:
$ kubectl config rename-context default k3s-local
And now if we check our contexts, we should see this:
$ kubectl config get-contexts
CURRENT   NAME        CLUSTER     AUTHINFO   NAMESPACE
          cluster-1   cluster-1   default
*         k3s-local   k3s-local   default
You may have more than one kubectl available.
Using k3s kubectl over kubectl shouldn’t make a difference as long as you set your Kubernetes config properly.
The following commands are equivalent:
$ KUBECONFIG=~/.kube/config-k3s kubectl get nodes
$ kubectl get nodes --kubeconfig=~/.kube/config-k3s
$ sudo k3s kubectl get nodes
Command completion
It is extremely handy to have command completion available for kubectl since the commands tend to get pretty verbose. If your package manager didn’t install completion alongside kubectl, you can install it manually with:
$ kubectl completion bash | sudo tee /usr/share/bash-completion/completions/kubectl
$ source ~/.bashrc
Also, it’s tiring to type kubectl repeatedly, so let’s alias it to kl, and make sure completions are loaded for the alias too.
Add these lines to your ~/.bash_aliases:
alias kl=kubectl
type __start_kubectl >/dev/null 2>&1 || _completion_loader kubectl
complete -o default -o nospace -F __start_kubectl kl
NOTE: On Ubuntu 16.04 right on up through 20.04, bash completion is disabled by default system-wide. So let’s make sure it’s installed and enabled:
$ sudo apt -y install bash-completion
$ sudo sed -i.bak -E '/if ! shopt -oq posix; then/,+6 s/^#//g' /etc/bash.bashrc
Make sure to restart your shell for changes to take effect.
Running a Container
Run a container inside your cluster:
$ kubectl run -it ubuntu --image=ubuntu:bionic -- bash
If you’ve used Docker before, you’ll appreciate the similarity to the docker run syntax.
This does a few things behind the scenes:
- Create a Pod named ubuntu running a single container
- Pull the image ubuntu:bionic from the public Docker registry
- Drop an interactive shell into the container
You can poke around, install some packages, take a look at /etc/resolv.conf, or inspect the running environment with env, mount, or top.
When you are done, exit the shell. This will terminate the pod.
Note, however, that Kubernetes will restart the container and bring the pod back up (kubectl run creates the pod with a restartPolicy of Always). This can be confirmed by running kubectl get pods:
NAME     READY   STATUS    RESTARTS   AGE
ubuntu   1/1     Running   1          75s
To get rid of the pod for good, run:
$ kubectl delete pod ubuntu
Configuration as Code
One of the real superpowers of Kubernetes, in my opinion, is the ability to do almost anything either completely imperatively (“do this! do that! then do these three things!”) or declaratively (“here’s what I want it to look like at the end - you figure out how to get there”).
Let’s take our imperative kubectl run example from above, and generate a declarative manifest from it that we can save to a file and, say, commit to version control:
$ kubectl run ubuntu --image=ubuntu:bionic --dry-run=client --output yaml > pod.yaml
What you’ll get back instead of a shell prompt inside your container is a YAML manifest describing the pod, which you can then modify as needed and apply with kubectl apply -f pod.yaml.
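For reference, the generated pod.yaml should look roughly like this (trimmed slightly; the exact fields vary between kubectl versions):
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: ubuntu
  name: ubuntu
spec:
  containers:
  - image: ubuntu:bionic
    name: ubuntu
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always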
Helm
What would Linux be without apt? Helm is the apt of Kubernetes, and more. It uses charts, which are like package manifests, to instantiate and manage Kubernetes objects such as Deployments, Secrets, and ConfigMaps as a unified, versioned set called a release. It can template values using the Go templating language, gotpl. It can roll back releases to previous versions. It can… well, you get it.
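To give a flavour of the templating, here’s a minimal, hypothetical chart template (the file name, value name, and default are made up for illustration) that renders a ConfigMap from the release name and a value supplied in values.yaml:
# templates/configmap.yaml in a hypothetical chart
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
  labels:
    release: {{ .Release.Name }}
data:
  log_level: {{ .Values.logLevel | default "info" | quote }}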
Installation
Helm can be installed on all supported OSes with:
$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
Install Helm bash completion:
$ helm completion bash | sudo tee /usr/share/bash-completion/completions/helm
$ source ~/.bashrc
Example: Install Grafana
Let’s test our local Kubernetes cluster by installing a relatively straightforward Helm chart, Grafana.
First, let’s add the Helm stable chart repository:
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
$ helm repo update
Now install the chart:
$ helm install grafana stable/grafana
After a few moments you should see some output describing the Kubernetes objects created that make up the release, as well as some notes about next steps. Following these notes, retrieve the auto-generated admin password:
$ kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
Since the Grafana chart deploys a service of type ClusterIP by default with no ingress to route traffic to the cluster from outside, we have to use port forwarding to access the pod.
Think of port forwarding as the ssh -L of Kubernetes - it abstracts away all the complex networking stuff happening under the hood (and there’s a lot of it) to give you access to your services through the control plane, like this:
$ export POD_NAME=$(kubectl get pods --namespace default -l "app=grafana,release=grafana" -o jsonpath="{.items[0].metadata.name}")
$ kubectl --namespace default port-forward $POD_NAME 3000
Note that if you installed kubectl bash completion, the generated pod name will be available as a completion in an interactive shell.
Now we can access Grafana at http://localhost:3000, just as if we had deployed Grafana to a remote server and logged in to it with ssh -L.
Getting Info
Let’s get some more info about our Grafana deployment.
Every Kubernetes object has as part of its metadata a set of key-value pairs called labels. We can perform operations on multiple objects at a time using label selectors. An example of this can be seen above in the port forwarding command - specifically the -l app=grafana,release=grafana part. This is useful if, for example, you have two versions of the same app deployed simultaneously, and you want to differentiate between two sets of objects by release.
Most Helm charts provide a label called release with the name of the release as the value. The Grafana chart is no exception.
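If you want to see which labels each object actually carries, --show-labels prints them alongside the usual columns:
$ kubectl get pods -l release=grafana --show-labels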
So let’s get a list of all objects matching this label selector:
$ kl get all -l app=grafana,release=grafana
You should get back something like this:
NAME                          READY   STATUS    RESTARTS   AGE
pod/grafana-6c6d9cfd6-4fhnn   1/1     Running   0          2m18s

NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/grafana   ClusterIP   10.43.184.148   <none>        80/TCP    2m18s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana   1/1     1            1           2m18s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/grafana-6c6d9cfd6   1         1         1       2m18s
kubectl also supports different output formats like YAML, JSON, and even jsonpath which can extract specific fields from the manifest to be used programmatically. [7]
For example, to get the ClusterIP address for the Grafana service from above to be used in a script:
$ kubectl get svc -l app=grafana,release=grafana \
-o jsonpath='{.items[0].spec.clusterIP}'
10.43.184.148
A more complicated example which gets the port number for a port named service:
$ kubectl get svc -l app=grafana,release=grafana \
-ojsonpath='{.items[*].spec.ports[?(@.name=="service")].port}'
80
Here’s the equivalent in jq, which some may feel a little more at home with (I know I do):
$ kubectl get svc -l app=grafana,release=grafana -ojson \
| jq '.items[].spec.ports[] | select(.name=="service").port'
80
When done testing, we can delete the release and make it like it never existed with:
$ helm delete grafana
When all else fails:
$ kubectl cluster-info dump
Using a Private Registry
You might be working on a project the world isn’t yet ready for, in which case you’re probably using a private container registry that requires credentials. If we want to use a container image hosted on a private registry like gcr.io, we need to tell k3s how to authenticate. In GKE this would normally just work, but since we’re not running in Google’s walled garden we have to specify an ImagePullSecret.
First we need to create a service account key with read-only access to Google Cloud Storage, which Google Container Registry uses as its backend:
$ gcloud iam service-accounts keys create \
--iam-account=gcr-readonly@<your_project_id>.iam.gserviceaccount.com \
gcr-readonly.json
Create a Kubernetes secret of type docker-registry from the service account key.
We are going to call our secret gcr-readonly-secret:
$ kubectl create secret docker-registry gcr-readonly-secret \
--docker-server=https://gcr.io \
--docker-username=_json_key \
--docker-password="$(cat gcr-readonly.json)"
Now we can do one of two things: a) Specify an imagePullSecret for every deployment that needs to pull images from this registry, or b) patch our default service account to include a reference to our image pull secret. We’re going to go with option B to keep our manifests DRY:
$ kubectl patch serviceaccount default -p \
'{"imagePullSecrets": [{"name": "gcr-readonly-secret"}]}'Although not strictly necessary, we also want to create a generic secret with this service account key to be used by pods themselves to interact with GCS:
$ kubectl create secret generic gcs-service-account-key --from-file=gcr-readonly.json
Now we can create pods and deployments from images in our private container registry, gcr.io/<your_project_id>, and k3s will be able to pull them.
For the sake of brevity I’m leaving out some of the GCP-specific steps of interacting with a private registry, but the overall development process would go something like:
- write a Dockerfile for your code and its dependencies
- build a container with docker build -t gcr.io/<your_project_id>/super-cool-app:0.0.1 .
- push it to your registry with docker push gcr.io/<your_project_id>/super-cool-app:0.0.1
- write some Kubernetes manifests (or even your very own Helm chart!) to deploy your app (a sketch follows below)
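As a rough sketch of that last step (assuming the hypothetical image name from above and an app listening on port 8080, which is just a placeholder), a minimal Deployment manifest might look like this; apply it with kubectl apply -f deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: super-cool-app
  labels:
    app: super-cool-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: super-cool-app
  template:
    metadata:
      labels:
        app: super-cool-app
    spec:
      # Not needed if you patched the default service account above;
      # uncomment to go with option A instead.
      # imagePullSecrets:
      # - name: gcr-readonly-secret
      containers:
      - name: super-cool-app
        image: gcr.io/<your_project_id>/super-cool-app:0.0.1
        ports:
        - containerPort: 8080  # placeholder port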
And that’s it, folks.
Obviously there is a whole lot more to Kubernetes, but I hope this guide helps you get your feet wet.
Please leave a comment and let me know if you found this useful!
References
- https://medium.com/@marcovillarreal_40011/cheap-and-local-kubernetes-playground-with-k3s-helm-5a0e2a110de9
- https://helm.sh/docs/intro/install/
- https://ahmet.im/blog/mastering-kubeconfig/
- https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
- https://kubernetes.io/docs/reference/kubectl/jsonpath/
- https://rancher.com/docs/k3s/latest/en/configuration