Kubeadm is the official tool for installing and maintaining a cluster based on the default Kubernetes distribution. Clusters it creates don’t upgrade themselves automatically, and holding back package updates for the Kubernetes components is part of the setup process. This means you have to manually migrate your cluster when a new Kubernetes release arrives.
In this article you’ll learn the steps involved in a Kubernetes upgrade by walking through a transition from v1.24 to v1.25 on Ubuntu 22.04. The process is usually similar for any Kubernetes minor release, but you should always refer to the official documentation before you start in case a new release has special requirements.
Identifying the Precise Version to Install
The first step is determining the version you’re going to upgrade to. You can’t skip minor versions – going directly from v1.23 to v1.25 is unsupported, for example – so you should pick the most recent patch release for the minor version that follows your cluster’s current release.
You can discover the latest patch version with the following command:
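(The sketch below assumes the official Kubernetes Apt repository is already configured on the machine.)

```bash
# Refresh the package index, then list the kubeadm builds in the 1.25 series
sudo apt update
apt-cache madison kubeadm | grep 1.25
```

The output lists one line per packaged patch release; the repository details depend on your package source, but it will look broadly like this:

```
kubeadm | 1.25.1-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
```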
This shows that 1.25.1-00 is the newest release of Kubernetes v1.25. Replace 1.25 in the command with the minor version that you’re going to be moving to.
Upgrading the Control Plane
Complete this section on the machine that’s running your control plane. Don’t touch the worker nodes yet – they can continue using their current Kubernetes release while the control plane is updated. If you have multiple control plane nodes, run this sequence on the first one and follow the worker node procedure in the next section on the others.
Update Kubeadm
First release the hold on the Kubeadm package and install the new version. Specify the exact release identified earlier so that Apt doesn’t automatically grab the latest one, which could be an unsupported minor version bump.
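The sequence looks like this, where 1.25.1-00 is the release found above (substitute your own target):

```bash
# Lift the hold, then install the pinned kubeadm release
sudo apt-mark unhold kubeadm
sudo apt update
sudo apt install -y kubeadm=1.25.1-00
```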
Now reapply the hold so that apt upgrade doesn’t deliver unwanted releases in the future:
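```bash
# Pin the kubeadm package again
sudo apt-mark hold kubeadm
```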
Verify that Kubeadm is now the expected version:
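```bash
kubeadm version
```

The GitVersion field in the output should report v1.25.1 (or whichever release you installed).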
Create the Upgrade Plan
Kubeadm automates the control plane upgrade process. First use the upgrade plan command to establish which versions you can migrate to. This checks your cluster to make sure it can accept the new release.
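In its basic form the command takes no arguments:

```bash
sudo kubeadm upgrade plan
```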
The output is quite long but it’s worth closely inspecting. The first section should report that all the Kubernetes components will upgrade to the version number you selected earlier. New versions may also be displayed for CoreDNS and etcd.
The end of the output includes a table that surfaces any required config changes. You may occasionally need to take manual action to adjust these config files and supply them to the cluster. Refer to the documentation for your release if you get a “yes” in the “Manual Upgrade Required” column.
This cluster is now ready to upgrade. The plan has confirmed that Kubernetes v1.25.1 is available and no manual actions are required. If no plan is produced or errors appear, check that you’ve installed the correct Kubeadm version; you might be trying to jump more than one minor version at once.
Applying the Upgrade Plan
Now you can instruct Kubeadm to proceed with applying the upgrade plan by running upgrade apply with the correct version number:
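(Here v1.25.1 stands in for the release reported by your upgrade plan.)

```bash
sudo kubeadm upgrade apply v1.25.1
```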
A confirmation prompt will appear:
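The exact wording changes between releases, but it ends with a question similar to this:

```
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]:
```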
Press y to continue with the upgrade. The process may take several minutes while it pulls the images for the new components and restarts your control plane. You won’t be able to reliably interact with your cluster’s API during this time but any running Pods should remain operational on your Nodes.
Eventually you should see a success message:
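Version number aside, it should look similar to this:

```
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.25.1". Enjoy!
```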
The control plane has now been upgraded.
Upgrading Worker Nodes
Now you can upgrade your worker nodes. These steps also need to be performed on your control plane nodes so that their kubelet and kubectl packages get updated too. Upgrade each node in sequence to minimize the effect of capacity being removed from your cluster. Pods will be rescheduled onto other nodes while each one gets upgraded.
First drain the node of its existing Pods and place a cordon around it. Substitute in the name of the node instead of node-1 in the following commands.
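A minimal sketch using the standard kubectl commands looks like this (node-1 is a placeholder name):

```bash
# Mark the node unschedulable, then evict its workloads
kubectl cordon node-1
kubectl drain node-1 --ignore-daemonsets
```

If the drain is blocked by Pods that use emptyDir volumes, you may need to add the --delete-emptydir-data flag.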
This evicts the node’s Pods and prevents any new ones from being scheduled. The node’s now inactive in your cluster.
Next release the package manager hold on the kubeadm, kubectl, and kubelet packages. Install the new version of each one. The versions of all three packages should exactly match. Remember to set the hold status again after you’ve got the new releases.
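Using the same pinned version as before, the sequence looks like this:

```bash
# Lift the holds, install matching versions of all three packages, then pin them again
sudo apt-mark unhold kubeadm kubectl kubelet
sudo apt update
sudo apt install -y kubeadm=1.25.1-00 kubectl=1.25.1-00 kubelet=1.25.1-00
sudo apt-mark hold kubeadm kubectl kubelet
```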
Next use Kubeadm’s upgrade node command to apply the upgrade and update your node’s configuration:
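```bash
sudo kubeadm upgrade node
```

This refreshes the node’s local kubelet configuration to match the upgraded control plane.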
Finally restart the Kubelet service and uncordon the node. It should rejoin the cluster and start accepting new Pods.
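The commands look like this; run the uncordon step from a machine with kubectl access to the cluster, again substituting your node’s name for node-1:

```bash
# Reload systemd units and restart the upgraded kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# Allow Pods to be scheduled onto the node again
kubectl uncordon node-1
```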
Checking Your Cluster
Once you’ve finished your upgrade, run kubectl version to check the active release matches your expectations:
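```bash
kubectl version
```

The Server Version reported in the output should match the release you upgraded to.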
Next check that all your nodes are reporting their new version and have entered the Ready state:
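```bash
kubectl get nodes
```

Every node should show Ready in the STATUS column and the new release in the VERSION column; any node still reporting the old version hasn’t completed the steps above.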
The upgrade is now complete.
Recovering From an Upgrade Failure
Occasionally an upgrade can fail even though Kubeadm successfully plans a pathway and verifies your cluster’s health. Problems can occur if the upgrade gets interrupted or a Kubernetes component stops responding. Kubeadm should automatically roll back to the previous version if this happens.
The upgrade apply command can be safely repeated to retry a failed upgrade. It will detect the ways in which your cluster differs from the expected version, allowing it to attempt a recovery of both total failures and partial upgrades.
When repeating the command doesn’t work, you can try forcing the upgrade by adding the --force flag to the command:
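```bash
sudo kubeadm upgrade apply v1.25.1 --force
```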
This will allow the upgrade to continue in situations where requirements are missing or can no longer be fulfilled.
When disaster strikes and your cluster seems to be totally broken, you should be able to restore it using the backup files that Kubeadm writes automatically:
Copy the contents of /etc/kubernetes/tmp/kubeadm-backup-etcd-<date>-<time> back into /var/lib/etcd to restore your etcd data, and the files in /etc/kubernetes/tmp/kubeadm-backup-manifests-<date>-<time> back into /etc/kubernetes/manifests to restore the control plane’s static Pod definitions.
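The copy itself can be as simple as this sketch; the timestamped directory names will differ on your system, so check the layout of the backup folders before overwriting anything:

```bash
# Restore etcd data and static Pod manifests from Kubeadm's automatic backups
sudo cp -r /etc/kubernetes/tmp/kubeadm-backup-etcd-<date>-<time>/* /var/lib/etcd/
sudo cp -r /etc/kubernetes/tmp/kubeadm-backup-manifests-<date>-<time>/* /etc/kubernetes/manifests/
```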
These backups can be used to manually restore the previous Kubernetes version to a working state.
Summary
Upgrading Kubernetes with Kubeadm shouldn’t be too stressful. Most of the process is automated with your involvement limited to installing the new packages and checking the upgrade plan.
Before upgrading you should always consult the Kubernetes changelog and any documentation published by components you use in your cluster. Pod networking interfaces, Ingress controllers, storage providers, and other addons may all have incompatibilities with a new Kubernetes release or require their own upgrade routines.