
How to Set Up Tailscale in Kubernetes

Update (11-01-2021): Since writing this blog post, Tailscale has added official support for Kubernetes. You can read more about it here.

Networking for Dummies

I am a huge fan of a newer startup called Tailscale. Their tagline is "Private networks made easy," and even though they are young, they already live up to it. My first experience with Tailscale was setting up a personal network to connect a couple of personal computers, mobile devices, and a Raspberry Pi. It probably took me 15 minutes to get all of these devices on one network and communicating with internal DNS. I was very impressed. While I'm no networking expert, I have set up OpenVPN and AWS Client VPN endpoints, and Tailscale was an order of magnitude easier to configure, secure, and maintain. It made me feel like a networking specialist; Tailscale is private networking for dummies.

Tailscale is a VPN mesh built on WireGuard. It authenticates against an external identity provider (IdP) like GSuite or Azure Active Directory, and one of the many benefits this provides is MFA on your network. The software runs as a daemon on each machine and doesn't require opening firewall ports. You can restrict each user's access to different machines and/or subnets. 10/10 would recommend Tailscale to anyone looking for a VPN mesh, and I'm excited to watch them roll out more and more features.

Setting up Tailscale on user machines and server instances was easy; however, there is a lack of support and documentation for setting it up in a containerized environment. Looking through issues and blog posts, it appears that Tailscale does not officially support containers yet. That said, it wasn't very difficult to get Tailscale running in a container inside Kubernetes. Let's take a look at how to do that.

The Dockerfile

The first step in any Kubernetes deployment is the Dockerfile. As shown below, this part is pretty straightforward: to run Tailscale, you need to install it. We also need a way to maintain state. In Kubernetes, pods come and go, and you'll most likely want a particular service to keep the same IP address on the Tailscale network. kubectl gives us an easy way to persist that state in a config map, as I'll show later in this post, but for now we just need to install it alongside Tailscale.

The one gotcha with the Dockerfile is that the base image needs to include a sysctl executable. The first time I tried this I used Amazon Linux 2 as the image base, which doesn't, and after some fruitless debugging I couldn't figure out a way around it. If you are smarter than me and know how to fix that, I'd love to hear it!
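
If you want to check a candidate base image before committing to it, one quick sanity check is to run the image locally and look for the binary. The image name below is just an example; whether sysctl is present depends on the image you pick.

# Check whether a candidate base image ships a sysctl executable (image name is an example)
docker run --rm ubuntu:focal sh -c 'command -v sysctl || echo "sysctl not found"'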

FROM ubuntu:focal

# Install Tailscale from the official package repository (the base image is pinned
# to focal because the repository lists below target focal)
RUN apt-get update -y && \
    apt-get install -y curl gpg && \
    curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.gpg | apt-key add - && \
    curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.list | tee /etc/apt/sources.list.d/tailscale.list && \
    apt-get update -y && \
    apt-get install -y tailscale

# Install kubectl so the entrypoint script can persist Tailscale state to a config map
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl && \
    chmod +x kubectl && \
    mv kubectl /usr/bin/kubectl

# Copy the entrypoint script into the image
COPY ./src /app

CMD ["bash", "-c", "/app/entrypoint.sh"]

Kubernetes Deployments and Sidecars

Once you have your image, there are two ways you could deploy Tailscale in your cluster. I think the most useful way is to deploy it as a sidecar to an internal application that you want to expose on the Tailscale network, as sketched below. (In Kubernetes, a pod can contain multiple containers. Sidecars are a modular way of enhancing the behavior of another container because they share the same volumes and network.) You could also deploy it as a stand-alone container if you want to set up a relay for your network and use that container to advertise subnets on the Tailscale network. This is useful, for example, if you have a managed database in the cloud and can't install Tailscale alongside it.
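
To make the sidecar option concrete, here is a minimal sketch of a pod spec fragment with an application container and a Tailscale container side by side. The names and images are placeholders, and the Tailscale container still needs the device mount and capabilities described just below.

# Sketch only: an app container and a Tailscale container in the same pod.
# They share the pod's network namespace, so the app becomes reachable over Tailscale.
# Names and images are placeholders.
spec:
  containers:
    - name: my-app
      image: your-registry/my-app:latest
      ports:
        - containerPort: 8080
    - name: tailscale
      image: your-registry/tailscale:latest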

When you are writing the container definition, two things are necessary for Tailscale to run. The container needs access to /dev/net/tun on the Kubernetes node, which has to be mounted into the container, and it needs two Linux kernel capabilities added to the container security context: NET_ADMIN and SYS_MODULE.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tailscale-relay
  labels:
    app: tailscale
spec:
  selector:
    matchLabels:
      app: tailscale
  replicas: 1
  template:
    metadata:
      labels:
        app: tailscale
    spec:
      # Service account with permission to create/patch the state config map
      # (see the RBAC sketch at the end of this post)
      serviceAccountName: tailscale-relay
      volumes:
        - name: devnet
          hostPath:
            path: /dev/net/tun
      containers:
        - name: tailscale
          # Placeholder for the image built from the Dockerfile above
          image: your-registry/tailscale:latest
          securityContext:
            capabilities:
              add: ['NET_ADMIN', 'SYS_MODULE']
          env:
            # These names match what the entrypoint script below expects;
            # the values and the secret name are placeholders
            - name: TAILSCALE_CONFIG_MAP
              value: tailscale-relay-state
            - name: TAILSCALE_HOSTNAME
              value: tailscale-relay
            - name: TAILSCALE_ADVERTISE_ROUTES
              value: "10.0.0.0/16"
            - name: TAILSCALE_AUTH_KEY
              valueFrom:
                secretKeyRef:
                  name: tailscale-auth
                  key: authkey
          volumeMounts:
            - mountPath: /dev/net/tun
              name: devnet
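
The container in this deployment is driven by a handful of environment variables (shown above with placeholder values) that the entrypoint script in the next section reads. Assuming you generate a reusable auth key in the Tailscale admin console, wiring it up might look like this; the secret name matches the placeholder in the manifest, and the file name is whatever you saved the manifest as.

# Store the Tailscale auth key in the secret referenced by the deployment (key value is a placeholder)
kubectl create secret generic tailscale-auth --from-literal=authkey=tskey-your-auth-key
# Apply the deployment manifest
kubectl apply -f tailscale-relay.yaml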

Entry Script and Maintaining State

Below is an example script that you could use to start up the Tailscale service. The script stores and restores the data in /var/lib/tailscale/tailscaled.state using a config map and kubectl. (Notice the awesome sed wizardry going on to escape the JSON? That took some serious time and many Stack Overflow searches.) Because the state persists in the config map, each time the container boots it will use the same hostname and key and appear as the same machine in Tailscale, making it a stable addition to your network.

#!/bin/bash
set -m # enable job control so we can use the fg command later

kubectl create configmap $TAILSCALE_CONFIG_MAP # create the config map that backs up tailscale state (errors harmlessly if it already exists)

# restore previously saved state (if any) so the pod keeps the same identity across restarts
state=$(kubectl get configmap $TAILSCALE_CONFIG_MAP -o jsonpath='{.data.state}' 2>/dev/null)
if [ -n "$state" ]; then
    mkdir -p /var/lib/tailscale
    echo "$state" > /var/lib/tailscale/tailscaled.state
fi

tailscaled >/dev/null 2>&1 & # start the tailscale daemon in the background
sleep 5 # allow the daemon to boot and connect before trying to register the container with tailscale
tailscale up -hostname $TAILSCALE_HOSTNAME -authkey $TAILSCALE_AUTH_KEY -advertise-routes $TAILSCALE_ADVERTISE_ROUTES # connect to the VPN

data=$(cat /var/lib/tailscale/tailscaled.state | sed 's/\"/\\\"/g' | sed ':a;N;$!ba;s/\n/ /g') # escape quotes and collapse newlines so the state can be embedded in the JSON patch
kubectl patch configmap $TAILSCALE_CONFIG_MAP -p "{\"data\": {\"state\": \"$data\"}}" # back up updated tailscale state to the config map

fg # bring the tailscaled process back to the foreground
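
One last piece worth calling out: because the script runs kubectl inside the pod to create and patch the config map, the pod's service account needs permission to do that. A minimal RBAC sketch, using the same placeholder names as the deployment above, might look like this.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tailscale-relay
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tailscale-relay
rules:
  # allow the entrypoint script to create, read, and patch the state config map
  - apiGroups: ['']
    resources: ['configmaps']
    verbs: ['create', 'get', 'patch']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tailscale-relay
subjects:
  - kind: ServiceAccount
    name: tailscale-relay
roleRef:
  kind: Role
  name: tailscale-relay
  apiGroup: rbac.authorization.k8s.io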

And that's it! You will need to adapt these code examples to your use case, but they should get you started running Tailscale in your Kubernetes cluster.