1.33

Canonical Kubernetes 1.33 - Release notes - 30 June 2025

Requirements and compatibility

Canonical Kubernetes can be installed on a variety of operating systems using several methods. For specific requirements, see the Installation guides.

What’s new

  • Kubernetes 1.33 - read more about the upstream release here.

  • Controlled feature upgrade process - With the second release of Canonical Kubernetes, you can seamlessly upgrade from one version to the next. Feature upgrades and snap refreshes are now coordinated, which prevents version drift and ensures a smooth, predictable upgrade path for cluster features.

  • Support updating node certificates - Users can now update their external certificates on a running node. See our external certificate refresh guide for more information.

  • k8s certs-status command - This new command provides a detailed view of the status of the certificates on a node; see the example after this list.

  • k8s inspect command - This new command collects diagnostics and other relevant information from a Kubernetes node, either a control plane or a worker node, and compiles them into a tarball report. To find out more about what information is collected in the tarball, see our inspection reports reference guide.
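
As a quick illustration of the commands above (a sketch only; the exact flags and output are covered in the linked guides, and the refresh-certs invocation is an assumption based on the certificate refresh documentation):

    # Show the status of the certificates on this node
    sudo k8s certs-status

    # Collect diagnostics from a control plane or worker node into a tarball
    sudo k8s inspect

    # Refresh certificates on a running node; for externally managed
    # certificates, follow the external certificate refresh guide instead
    sudo k8s refresh-certs --expires-in 1y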

Also in this release

  • Update CNI to v1.6.2

  • Update Helm to v3.18.3

  • Update k8s-dqlite to v1.3.2

  • Update Cilium version to 1.17.1

  • Update CoreDNS version to 1.12, chart to 1.39.2

  • Update Containerd version to 1.7.27

  • Update GetNodeStatus and GetClusterConfig RPC endpoints

  • Enable Cilium protocol differentiation

  • Enable Cilium session affinity

  • Allow Cilium SCTP configuration through annotations (see the sketch after the note below)

  • Enable cluster-config.load-balancer.l2-mode by default (see the note and sketch below)

  • Add revision implementation for pebble

  • Improve the DISA STIG hardening guides

  • Other documentation improvements

Note

Changes to the default configuration of the LoadBalancer apply only to new clusters and do not affect existing clusters during upgrade.
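
Both of the Cilium and load balancer settings mentioned above can be chosen when bootstrapping a new cluster. A minimal sketch, assuming the usual bootstrap configuration layout; the SCTP annotation key shown is an assumption, so consult the annotations reference for the exact name:

    # bootstrap.yaml
    cluster-config:
      load-balancer:
        enabled: true
        l2-mode: false   # opt out of the new L2 mode default on a new cluster
      annotations:
        # assumed annotation key; check the annotations reference
        k8sd/v1alpha1/cilium/sctp/enabled: "true"

    sudo k8s bootstrap --file bootstrap.yaml

On a running cluster, the load balancer option can also be toggled with sudo k8s set load-balancer.l2-mode=true.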

Deprecations and API changes

  • Upstream - Please review the upstream release notes, which include deprecation notices and API changes for Kubernetes 1.33.

Fixed bugs and issues

  • Fixed performing PreInitChecks (#1423)

  • Fixed custom containerd path on cleanup (#1469)

  • Fixed node label controller (#1363)

  • Fixed service argument quotations (#1222)

  • Fixed IPv6 parsing for k8s-apiserver-proxy (#1370)

  • Fixed snap refresh on worker nodes (#1239)

  • Fixed certificates refresh panic (#1150)

  • Fixed cluster config merge checks (#1089)

  • Fixed memory leak in k8s-dqlite (#1061)

  • Fixed custom containerd paths (#1046)

  • Fixed certificates usage during control plane join (#1029)

Upgrade notes

See our upgrade notes page for instructions on how to upgrade to 1.33. For dual-stack environments, there are additional configuration steps you may need to implement for a successful upgrade.
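
As a rough sketch of the refresh step itself (the authoritative procedure, channel names and dual-stack specifics are in the upgrade notes; the channel shown follows the usual track naming but is an assumption):

    # Refresh one node at a time, control plane nodes before workers
    sudo snap refresh k8s --channel=1.33-classic/stable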

Patch notices

Nov 18, 2025

  • Version bumps

    • containerd v1.7.29

    • runc v1.3.3

    • metrics-server 0.7.2-ck7

    • CoreDNS 1.12.4-ck0

  • Revert change on patching upgrade object during rolling upgrades (#1971)

  • Remove unsupported recycle reclaim policy in local storage

  • Add a fix to force-remove lost nodes from the cluster (see the sketch after this list)

  • For greater security, bump Helm version to v3.18.6 and introduce value sanitization

  • Improve how k8s-api-server discovers Kubernetes endpoints: the current endpoints are used directly instead of querying through the proxy

  • During a k8s version downgrade, sanitize any feature gates that were introduced in later k8s versions
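
For the force-removal fix noted above, a sketch of removing a lost node, assuming remove-node accepts a --force flag as described in the node management documentation (the node name is hypothetical):

    # Remove a node that is no longer reachable
    sudo k8s remove-node worker-3 --force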

Aug 25, 2025

  • Change the default cluster datastore to managed etcd (#1561). Existing clusters deployed with k8s-dqlite will not be affected. See the bootstrap sketch at the end of this section for pinning the datastore explicitly.

    • Add managed etcd service to the inspection report (#1724)

    • Add etcd port checks to bootstrap verification (#1770)

    • Build etcd v3.6.2 from upstream (#1709)

    • Add etcd service definition to pebble services (#1695)

    • Sort --initial-clusters in etcd setup (#1631)

    • Derive initial-cluster from etcd member list (#1764)

  • Update CIS hardening docs and quorum recovery pages to include etcd recommendations

  • Improve the feature controller by:

    • Triggering the feature controller on k8sd startup (#1783)

    • Updating node IP addresses on k8sd post-refresh (#1714)

    • Collecting upgrade information in the inspection report

    • Waiting for worker nodes to be upgraded before triggering the feature controller upgrades (#1643)

    • Using controller-gen to generate the CRD YAML file from Go structs (#1327)

    • Preventing multiple refreshes on worker nodes

  • Set auto-tls options explicitly to false

  • Bump k8s-dqlite to v1.3.4

  • Update Kubernetes to v1.33.4 and Helm to v3.18.6

  • Use -ck tags for MetalLB FRR and rawfile-localpv rocks (#1735)

  • Include control-plane-taints in the k8sd ConfigMap (#1716)

  • Update containerd to v1.7.28 and runc to v1.3.0

  • Add coordinator to use a single manager for managed controllers (#1453)

  • Add upgrade docs for 1.33

  • Bump Helm to v3.18.4 to fix CVE-2025-53547

  • Bump Microcluster to v2.2.0 and Go to v1.24.5

  • Bump rawfile-localpv chart to v0.9.1

  • Update Kubernetes to v1.33.2

  • Fix issues with the charm e2e tests
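
Because the managed etcd default applies only to new clusters, the datastore can be pinned explicitly at bootstrap. A sketch, assuming the bootstrap configuration's datastore-type field and the values shown (check the bootstrap configuration reference for the exact names):

    # bootstrap.yaml
    # keep the previous dqlite datastore, or use "etcd" for the new
    # managed etcd default (values assumed; see the bootstrap reference)
    datastore-type: k8s-dqlite

    sudo k8s bootstrap --file bootstrap.yaml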

Jul 22, 2025

  • Added fixes to the feature controller for smoother updates: additional tests, jitter on reconciliation retries, and a pending state with a feature lock that prevents multiple simultaneous upgrades of the same resource (#1615, #1637)

  • Update Helm to v3.18.4

  • Add 1.33 release notes

  • Update the ports and services page, document how to configure the containerd override_path, and document Auto Namespace creation and disabling RBD

  • Ask the user about removing Cilium VXLAN interface (#1598)