Deploy Ubuntu OKE nodes using Terraform¶
Ubuntu images are available for OKE worker nodes, with support for a select set of Ubuntu suites and Kubernetes versions. For a list of supported OKE configurations, see our Ubuntu availability on OKE page.
Prerequisites¶
You’ll need:
- An Oracle Cloud compartment in which to create the nodes.
- Oracle’s oci CLI installed (one way to install it is shown below).
- kubectl installed.
- A Domain, Dynamic Group and Policy configured (Self-Managed only).
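If you don’t already have these tools, one way to install them on Ubuntu is sketched below; other installation methods are covered in the Oracle and Kubernetes documentation.
# Install the OCI CLI from PyPI and kubectl from the snap store
pip install oci-cli
sudo snap install kubectl --classic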
Get Ubuntu image access¶
Attention
Ubuntu worker node images for OKE are currently in a Limited Availability (LA) release. To gain access to the images, reach out to your Oracle account or support teams to express interest in this LA.
Once access has been granted to your tenancy and region, you must add an additional Policy to your root compartment to permit the listing of these images.
Use the Show manual editor button while creating the policy and add:
Define tenancy oke as ocid1.tenancy.oc1..aaaaaaaa5vrtu4bjcqpjvbworiwffgccrgrbkum64mtn33yrccjrqpzuyara
Endorse any-user to read instance-images in tenancy oke
Note
The tenancy OCID listed above for the policy belongs to the OCI OKE service; it is not owned by Canonical.
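If you prefer the CLI to the console, the same policy can be created with oci iam policy create; the policy name and description below are illustrative.
oci iam policy create \
  --compartment-id <tenancy-ocid> \
  --name ubuntu-oke-image-access \
  --description "Allow listing of Ubuntu OKE worker node images" \
  --statements '["Define tenancy oke as ocid1.tenancy.oc1..aaaaaaaa5vrtu4bjcqpjvbworiwffgccrgrbkum64mtn33yrccjrqpzuyara", "Endorse any-user to read instance-images in tenancy oke"]'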
Find an Ubuntu image¶
Once your tenancy has been enabled for the LA, you will be able to list the images from the CLI or view them through the OCI Console.
Before running any oci commands, make sure that your ~/.oci/config is configured to use the region specified for your LA enablement.
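A minimal ~/.oci/config looks similar to the following; every value here is a placeholder, and the region entry must match the region of your LA enablement.
[DEFAULT]
user=ocid1.user.oc1..xxxxxxxxxxx
fingerprint=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
key_file=~/.oci/oci.pem
tenancy=ocid1.tenancy.oc1..xxxxxxxxxxx
region=us-phoenix-1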
Using the oci command, list all OKE-compatible images and save them to the node-pool-options.json file. The file will contain a list of all available OKE images, including your newly added Ubuntu images.
oci ce node-pool-options get --compartment-id <compartment-id> --node-pool-option-id all | tee node-pool-options.json
Use the following to filter it down to only the newly added Ubuntu images.
jq -r '.data.sources[] | select(."source-name" | contains("ubuntu"))' node-pool-options.json
The result will contain objects of the form:
{
  "image-id": "ocid1.image.oc1.phx.aaaaaaaajqzb5jcbcvh5obyg2l2hzzw5qewwhrknvlyb3zaglduivigvo4sq",
  "source-name": "ubuntu-amd64-minimal-22.04-jammy-v20250604.1-OKE-1.31.1",
  "source-type": "IMAGE"
}
Make note of the image-id for the image(s) that you wish to use, since it will be required in later steps.
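To extract just the image OCIDs, the same jq filter can be extended:
jq -r '.data.sources[] | select(."source-name" | contains("ubuntu")) | ."image-id"' node-pool-options.json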
You can create nodes using either Create Cluster > Quick Create or Add node pool; both let you select your desired node image. The image list will contain all available OKE images, including your newly added Ubuntu images.

After selecting your desired image based on the architecture, Ubuntu suite and Kubernetes version, your configuration should look similar to the image below.

Warning
Pay attention to which image you select and ensure its architecture corresponds to your selected pod shape.
When launching Ubuntu nodes from the console during the LA, you must provide custom cloud-init user-data to the instances to ensure they join the cluster properly.
The Managed node user-data is quite simple:
#cloud-config
runcmd:
  - oke bootstrap
For Self-Managed user-data, use the following with appropriate variable substitutions for the cluster certificate (cluster_ca_cert) and private control plane IP (api_server_endpoint):
#cloud-config
runcmd:
  - oke bootstrap --ca ${cluster_ca_cert} --apiserver-host ${api_server_endpoint}
write_files:
  - path: /etc/oke/oke-apiserver
    permissions: '0644'
    content: ${api_server_endpoint}
  - encoding: b64
    path: /etc/kubernetes/ca.crt
    permissions: '0644'
    content: ${cluster_ca_cert}
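If you need to recover these values for an existing cluster, one possible approach (a sketch, assuming yq is installed) is to generate a kubeconfig against the private endpoint and read both values from it:
oci ce cluster create-kubeconfig --cluster-id <cluster-ocid> --kube-endpoint PRIVATE_ENDPOINT --file oke-kubeconfig
# api_server_endpoint: the host portion of the server URL (strip https:// and the port)
yq '.clusters[0].cluster.server' oke-kubeconfig
# cluster_ca_cert: the base64-encoded cluster CA certificate
yq '.clusters[0].cluster."certificate-authority-data"' oke-kubeconfig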
Copy the user-data required for your node type and paste it into the console.

Deploy an OKE cluster using Terraform¶
Before getting started, note the architecture of the image you have selected, either AMD64 or ARM64, as you want to ensure that nodes are launched with instance shapes of the matching architecture.
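For example, among common OCI shapes, VM.Standard.A1.Flex is an Arm (ARM64) shape while VM.Standard.E4.Flex is an x86 (AMD64) shape; you can list the shapes available in your compartment with:
oci compute shape list --compartment-id <compartment-id> | jq -r '.data[].shape'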
For this guide we’ll use our GitHub example repository as a base. It contains all of the HCL to launch the networking, cluster and nodes using the OCI Terraform Provider and OKE Terraform Module.
Set up the Terraform project¶
Clone the GitHub example repository and change directory to the Terraform example:
git clone https://github.com/canonical/oracle-doc-examples
cd oracle-doc-examples/deploy-oke-using-ubuntu/terraform
The Terraform example directory should look like the following:
.
├── data.tf
├── locals.tf
├── modules.tf
├── outputs.tf
├── providers.tf
├── README.md
├── self_managed.tf
├── terraform.tfvars.example
├── user-data
│   ├── managed.yaml
│   └── self-managed.yaml
└── variables.tf
Before deploying the OKE resources you must initialize the Terraform project. This will download all of the required providers and modules to launch the configuration.
terraform init
The Terraform example includes a template to create your terraform.tfvars file. This file contains all of the cluster and image configurations, including the Kubernetes version and image OCID.
cp terraform.tfvars.example terraform.tfvars
Create an OKE cluster¶
The terraform.tfvars file should now contain the following Terraform variable assignments:
# Required
tenancy_ocid = "ocid1.tenancy.oc1..xxxxxxxxxxx"
user_ocid = "ocid1.user.oc1..xxxxxxxxxxx"
compartment_ocid = "ocid1.compartment.oc1..xxxxxxxxxxx"
fingerprint = "xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx"
private_key_path = "~/.oci/oci.pem"
region = "us-phoenix-1"
kubernetes_version = "v1.32.1"
image_id = "ocid1.image.oc1.phx.xxxxxxxxxxx"
ssh_public_key_path = "~/.ssh/id_rsa.pub"
ssh_private_key_path = "~/.ssh/id_rsa"
# Optional
public_nodes = false
architecture = "amd64" # or "arm64"
add_managed_nodes = false
add_self_managed_nodes = false
Most of the values for the terraform.tfvars file can be found in your ~/.oci/config file or by searching for each service in the OCI Web Console.
Note
You cannot use just any Ubuntu image_id: only Ubuntu OKE-specific images will work, whether listed from the CLI as shown above or found through the oci_core_images Terraform data source. Images found through the data source must be specifically denoted as Ubuntu OKE images.
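As an illustration, such a lookup might use the oci_core_images data source as in the sketch below; the display name pattern is an assumption based on the source-name format shown earlier and may need adjusting for your tenancy.
data "oci_core_images" "ubuntu_oke" {
  compartment_id = var.compartment_ocid

  # Match display names such as
  # ubuntu-amd64-minimal-22.04-jammy-v20250604.1-OKE-1.31.1
  filter {
    name   = "display_name"
    values = ["ubuntu-.*-OKE-.*"]
    regex  = true
  }
}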
After configuring your terraform.tfvars, deploy a cluster using:
terraform apply
Terraform will then prompt for approval before creating all of the required OKE cluster resources:
Plan: 61 to add, 0 to change, 0 to destroy.
Changes to Outputs:
  + apiserver_private_host = (known after apply)
  + cluster_ca_cert        = (known after apply)
  + cluster_endpoints      = (known after apply)
  + cluster_id             = (known after apply)
  + cluster_kubeconfig     = (known after apply)
  + vcn_id                 = (known after apply)
  + worker_subnet_id       = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes
After agreeing, the required cluster resources will be created, but by default you will not have access to the cluster. The Terraform example does, however, expose the cluster’s kubeconfig via an output. The following commands create the ~/.kube/ directory and write the kubeconfig to ~/.kube/config:
mkdir -p ~/.kube/
terraform output -json cluster_kubeconfig | yq -p json | tee ~/.kube/config
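If you already have a ~/.kube/config that you would rather not overwrite, an alternative is to write the kubeconfig to a separate file and point KUBECONFIG at it:
terraform output -json cluster_kubeconfig | yq -p json | tee oke-kubeconfig
export KUBECONFIG="$PWD/oke-kubeconfig"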
You can then verify cluster connectivity using:
kubectl cluster-info
The output should indicate that you are connected to the cluster:
Kubernetes control plane is running at https://<public-ip>:6443
CoreDNS is running at https://<public-ip>:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Add OKE nodes¶
You now have a cluster at your disposal, but there aren’t any nodes added yet. You can view the node status with:
kubectl get nodes
The output will be:
No resources found
OKE offers two different node types to add to the cluster: Managed and Self-Managed. From an Ubuntu OKE node perspective, the differences lie in the user-data provided to the nodes through cloud-init and in how the nodes are provisioned: node pools for Managed and instances for Self-Managed.
These differences are illustrated in the two .yaml files present in the user-data folder of the Terraform example.
The Managed nodes user-data is quite simple:
#cloud-config
runcmd:
  - oke bootstrap
While the Self-Managed user-data requires variable substitution from Terraform for the cluster certificate and private control plane IP:
#cloud-config
runcmd:
  - oke bootstrap --ca ${cluster_ca_cert} --apiserver-host ${api_server_endpoint}
write_files:
  - path: /etc/oke/oke-apiserver
    permissions: '0644'
    content: ${api_server_endpoint}
  - encoding: b64
    path: /etc/kubernetes/ca.crt
    permissions: '0644'
    content: ${cluster_ca_cert}
To launch Ubuntu OKE nodes of these types, set the add_managed_nodes or add_self_managed_nodes variables to true in terraform.tfvars:
# Optional
public_nodes = false
architecture = "amd64"
add_managed_nodes = true
add_self_managed_nodes = true
Note
Both node types can be enabled independently, so you can have just Managed nodes, just Self-Managed nodes, or both.
Once these variables are set, the next time you run terraform apply, Terraform will attempt to update the state and create the nodes.
Alternatively, you can set these variables directly on the command line, in which case they override the terraform.tfvars file:
terraform apply -var="add_managed_nodes=true" -var="add_self_managed_nodes=true"
After executing terraform apply with one of the methods above, Terraform will begin updating the state and creating the appropriate nodes. You can watch the nodes register in the cluster with:
kubectl get nodes --watch
This should provide output similar to:
NAME          STATUS   ROLES   AGE   VERSION
10.0.101.01   Ready    node    2m    v1.32.1
10.0.102.02   Ready    node    2m    v1.32.1
10.0.103.03   Ready    node    2m    v1.32.1
Delete the OKE cluster¶
If you wish to delete the nodes but keep the cluster, disable the node variables using:
terraform apply -var="add_managed_nodes=false" -var="add_self_managed_nodes=false"
Alternatively, you can tear down the nodes and all other resources deployed by Terraform using:
terraform destroy
Further references¶
For more information about the oci CLI and managing self-managed nodes on your cluster, refer to the Oracle documentation.