How to install Canonical Kubernetes in air-gapped environments

There are situations where it is necessary or desirable to run Canonical Kubernetes on a machine that is not connected to the internet. Depending on the degree of separation from the network, different solutions are available to accomplish this goal. This guide documents any necessary extra preparation for air-gap deployments, as well as the steps needed to successfully deploy Canonical Kubernetes in such environments.

Prepare for deployment

In preparation for the offline deployment, download the Canonical Kubernetes snap, fulfill the networking requirements for your scenario, and handle the images required by workloads and Canonical Kubernetes features.

Download the Canonical Kubernetes snap

From a machine with access to the internet, download the k8s and core22 snaps with:

sudo snap download k8s --channel 1.33-classic/stable --basename k8s
sudo snap download core22 --basename core22

Besides the snaps, this also downloads the corresponding assert files, which are necessary to verify the integrity of the packages.

Check network requirements

Air-gap deployments are typically subject to a number of constraints and restrictions on the network connectivity of the machines. The requirements the deployment needs to fulfill are discussed below.

Cluster node communication

Ensure that all cluster nodes are reachable from each other.

Default Gateway

In cases where the air-gap environment does not have a default gateway, add a placeholder default route on the eth0 interface using the following command:

ip route add default dev eth0

Note

Ensure that eth0 is the name of the default network interface used for pod-to-pod communication.

The placeholder gateway will only be used by the Kubernetes services to determine which interface to use; actual connectivity to the internet is not required. Ensure that the placeholder gateway rule survives a node reboot.
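One way to make the placeholder route survive reboots is a small systemd unit that re-adds it at boot. The sketch below is an assumption, not part of the official packaging: the unit name and the eth0 interface name must be adapted to your nodes, and ip route replace is used so re-running it is harmless. If netplan manages the interface, declaring the route there is an equally valid alternative.

```ini
# /etc/systemd/system/placeholder-route.service -- a sketch; the unit name
# and the interface name (eth0) are assumptions, adjust for your nodes.
[Unit]
Description=Placeholder default route for air-gapped Canonical Kubernetes
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
# "replace" is idempotent, so running this at every boot is safe.
ExecStart=/bin/sh -c '/sbin/ip route replace default dev eth0'
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable placeholder-route.service.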

Ensure proxy access

This section is only relevant if access to upstream image registries (e.g. docker.io, quay.io, rocks.canonical.com, etc.) is only allowed through an HTTP proxy (e.g. squid).

Ensure that all nodes can use the proxy to access the image registry. For example, if using http://squid.internal:3128 to access docker.io, an easy way to test connectivity is:

export https_proxy=http://squid.internal:3128
curl -v https://registry-1.docker.io/v2

Access images

All workloads in a Kubernetes cluster run as OCI images. Kubernetes needs to be able to fetch these images and load them into the container runtime. For Canonical Kubernetes, it is also necessary to fetch the images used by its features (network, DNS, etc.), as well as any images needed to run specific workloads.

List images

If the k8s snap is already installed, list the images in use with the following command:

k8s list-images

The output will look similar to the following:

ghcr.io/canonical/cilium-operator-generic:1.15.2-ck2
ghcr.io/canonical/cilium:1.15.2-ck2
ghcr.io/canonical/coredns:1.11.1-ck4
ghcr.io/canonical/k8s-snap/pause:3.10
ghcr.io/canonical/k8s-snap/sig-storage/csi-node-driver-registrar:v2.10.1
ghcr.io/canonical/k8s-snap/sig-storage/csi-provisioner:v5.0.1
ghcr.io/canonical/k8s-snap/sig-storage/csi-resizer:v1.11.1
ghcr.io/canonical/k8s-snap/sig-storage/csi-snapshotter:v8.0.1
ghcr.io/canonical/metrics-server:0.7.0-ck2
ghcr.io/canonical/rawfile-localpv:0.8.2

A list of images can also be found in the images.txt file when the downloaded k8s snap is unsquashed (for example, with unsquashfs k8s.snap).

Please ensure that the images used by workloads are tracked as well.
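When planning proxy or mirror access, it helps to know which registries an image list pulls from. Since every reference printed by k8s list-images is fully qualified, the registry is simply the first /-separated component. A small sketch, using a hypothetical images.txt:

```shell
# Build a sample image list (in practice, use the output of `k8s list-images`
# plus the images of your own workloads).
cat > images.txt <<'EOF'
ghcr.io/canonical/cilium:1.15.2-ck2
ghcr.io/canonical/coredns:1.11.1-ck4
docker.io/library/nginx:1.27
EOF

# The registry is the first '/'-separated component of each reference.
cut -d/ -f1 images.txt | sort -u
# Prints:
#   docker.io
#   ghcr.io
```

Note that this only works for fully qualified references; short names such as nginx:latest have no registry component and would need to be expanded first.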

Choose how to access images

You must select how the container runtime accesses OCI images in your air-gapped installation:

Note

The image options are presented in the order of increasing complexity of implementation. It may be helpful to combine these options for different scenarios.

In many cases, the nodes of the air-gap deployment may not have direct access to upstream registries, but can reach them through the use of an HTTP proxy.

Create or edit the /etc/systemd/system/snap.k8s.containerd.service.d/http-proxy.conf file on each node and set the appropriate http_proxy, https_proxy and no_proxy variables as described in the adding proxy configuration section.
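As a sketch, such a drop-in could look like the following. The proxy address matches the earlier squid example; the no_proxy list shown here is an assumption and must be adjusted to include your node, pod and service CIDRs:

```ini
# /etc/systemd/system/snap.k8s.containerd.service.d/http-proxy.conf
[Service]
Environment="http_proxy=http://squid.internal:3128"
Environment="https_proxy=http://squid.internal:3128"
# Assumed ranges -- replace with your cluster's actual node/pod/service CIDRs.
Environment="no_proxy=127.0.0.1,localhost,10.0.0.0/8,192.168.0.0/16"
```

After editing the file, apply it with sudo systemctl daemon-reload followed by sudo systemctl restart snap.k8s.containerd.service.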

Deploy Canonical Kubernetes

Once you’ve completed all the preparatory steps for your air-gapped cluster, you can proceed with the deployment.

Install Canonical Kubernetes

Transfer the following files to the target node:

  • k8s.snap

  • k8s.assert

  • core22.snap

  • core22.assert

On the target node, run the following commands to install the Kubernetes snap:

sudo snap ack core22.assert && sudo snap install ./core22.snap
sudo snap ack k8s.assert && sudo snap install ./k8s.snap --classic

Repeat the above for all nodes of the cluster.
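If the nodes are reachable over SSH from the machine holding the files, the transfer and install steps can be scripted. This is only a sketch: the hostnames are hypothetical, and it assumes SSH access and passwordless sudo on each node.

```shell
# Hypothetical node names; replace with the hostnames or IPs of your nodes.
for host in node-1 node-2 node-3; do
  scp k8s.snap k8s.assert core22.snap core22.assert "$host":
  ssh "$host" 'sudo snap ack core22.assert && sudo snap install ./core22.snap'
  ssh "$host" 'sudo snap ack k8s.assert && sudo snap install ./k8s.snap --classic'
done
```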

Configure container runtime

Based on the image access type you chose in the step choose how to access images, configure the container runtime to fetch images properly:

The HTTP proxy should already be configured as part of the earlier choose how to access images step. Refer to the adding proxy configuration section for more help if necessary.

Bootstrap cluster

Now, bootstrap the cluster, replacing MY-NODE-IP with the IP of the node:

sudo k8s bootstrap --address MY-NODE-IP

After a while, confirm that all the nodes show up in the output of the sudo k8s kubectl get node command.

Adding nodes requires the same preparation and installation steps to be repeated; instead of bootstrapping, however, generate a join token and join the new node to the cluster.
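As a sketch of that flow, assuming the k8s CLI's join commands and a hypothetical node name (check the output of k8s --help on your release for the exact command names and flags):

```shell
# On an existing cluster node: generate a token for the new node.
sudo k8s get-join-token new-node

# On the new node, after installing the snap as above:
# join the cluster using the token printed by the previous command.
sudo k8s join-cluster <token-from-get-join-token>
```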