
Upgrading kubeadm clusters

This page explains how to upgrade a Kubernetes cluster created with kubeadm from version 1.15.x to version 1.16.x, and from version 1.16.x to 1.16.y (where y > x).

For information about upgrading clusters created with older versions of kubeadm, refer to the upgrade documentation for those versions instead.

The upgrade workflow at a high level is the following:

  1. Upgrade the primary control plane node.
  2. Upgrade additional control plane nodes.
  3. Upgrade worker nodes.

Before you begin

  • You need to have a kubeadm Kubernetes cluster running version 1.15.0 or later.
  • Swap must be disabled.
  • The cluster should use a static control plane and etcd pods or external etcd.
  • Make sure you read the release notes carefully.
  • Make sure to back up any important components, such as app-level state stored in a database. kubeadm upgrade does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice.
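
As a quick pre-flight check, you can confirm the versions your nodes currently report and make sure swap is off on each node. A minimal sketch (assumes kubectl is already configured for the cluster):

# show the version each node is currently running
kubectl get nodes

# on each node: disable swap for the current boot; also remove or
# comment out any swap entries in /etc/fstab so it stays off after a reboot
sudo swapoff -a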

Additional information

  • All containers are restarted after upgrade, because the container spec hash value changes.
  • You can only upgrade from one MINOR version to the next MINOR version, or between PATCH versions of the same MINOR. That is, you cannot skip MINOR versions when you upgrade. For example, you can upgrade from 1.y to 1.y+1, but not from 1.y to 1.y+2.

Determine which version to upgrade to

  • Find the latest stable 1.16 version:

    # Ubuntu, Debian or HypriotOS
    apt update
    apt-cache policy kubeadm
    # find the latest 1.16 version in the list
    # it should look like 1.16.x-00, where x is the latest patch

    # CentOS, RHEL or Fedora
    yum list --showduplicates kubeadm --disableexcludes=kubernetes
    # find the latest 1.16 version in the list
    # it should look like 1.16.x-0, where x is the latest patch
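
For example, on a Debian-based host you might narrow the package list down to the 1.16 entries like this (the grep pattern is only illustrative):

apt-cache madison kubeadm | grep 1.16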

Upgrading control plane nodes

Upgrade the first control plane node

  • On your first control plane node, upgrade kubeadm:

    # Ubuntu, Debian or HypriotOS
    # replace x in 1.16.x-00 with the latest patch version
    apt-mark unhold kubeadm && \
    apt-get update && apt-get install -y kubeadm=1.16.x-00 && \
    apt-mark hold kubeadm

    # CentOS, RHEL or Fedora
    # replace x in 1.16.x-0 with the latest patch version
    yum install -y kubeadm-1.16.x-0 --disableexcludes=kubernetes

  • Verify that the download works and has the expected version:

    kubeadm version
  • On the control plane node, run:

    sudo kubeadm upgrade plan

    You should see output similar to this:

    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade] Fetching available versions to upgrade to
    [upgrade/versions] Cluster version: v1.15.2
    [upgrade/versions] kubeadm version: v1.16.0
    
    Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
    COMPONENT   CURRENT       AVAILABLE
    Kubelet     1 x v1.15.2   v1.16.0
    
    Upgrade to the latest version in the v1.16 series:
    
    COMPONENT            CURRENT   AVAILABLE
    API Server           v1.15.2   v1.16.0
    Controller Manager   v1.15.2   v1.16.0
    Scheduler            v1.15.2   v1.16.0
    Kube Proxy           v1.15.2   v1.16.0
    CoreDNS              1.3.1     1.6.2
    Etcd                 3.3.10    3.3.15
    
    You can now apply the upgrade by executing the following command:
    
        kubeadm upgrade apply v1.16.0
    
    _____________________________________________________________________
    

    This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to.

    Note: kubeadm upgrade also automatically renews the certificates that it manages on this node. To opt out of certificate renewal, the flag --certificate-renewal=false can be used. For more information, see the certificate management guide.
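
    For example, to run the upgrade while opting out of automatic certificate renewal (a sketch; the version shown is illustrative):

    sudo kubeadm upgrade apply v1.16.x --certificate-renewal=false
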
  • Choose a version to upgrade to, and run the appropriate command. For example:

    sudo kubeadm upgrade apply v1.16.x
    • Replace x with the patch version you picked for this upgrade.

    You should see output similar to this:

    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [upgrade/version] You have chosen to change the cluster version to "v1.16.0"
    [upgrade/versions] Cluster version: v1.15.2
    [upgrade/versions] kubeadm version: v1.16.0
    [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
    [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
    [upgrade/prepull] Prepulling image for component etcd.
    [upgrade/prepull] Prepulling image for component kube-apiserver.
    [upgrade/prepull] Prepulling image for component kube-controller-manager.
    [upgrade/prepull] Prepulling image for component kube-scheduler.
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
    [upgrade/prepull] Prepulled image for component etcd.
    [upgrade/prepull] Prepulled image for component kube-controller-manager.
    [upgrade/prepull] Prepulled image for component kube-apiserver.
    [upgrade/prepull] Prepulled image for component kube-scheduler.
    [upgrade/prepull] Successfully prepulled the images for all the control plane components
    [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.16.0"...
    Static pod: kube-apiserver-luboitvbox hash: 8d931c2296a38951e95684cbcbe3b923
    Static pod: kube-controller-manager-luboitvbox hash: 2480bf6982ad2103c05f6764e20f2787
    Static pod: kube-scheduler-luboitvbox hash: 9b290132363a92652555896288ca3f88
    [upgrade/etcd] Upgrading to TLS for etcd
    [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests446257614"
    [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
    [upgrade/staticpods] Renewing "apiserver-etcd-client" certificate
    [upgrade/staticpods] Renewing "apiserver" certificate
    [upgrade/staticpods] Renewing "apiserver-kubelet-client" certificate
    [upgrade/staticpods] Renewing "front-proxy-client" certificate
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-05-23-38-03/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-apiserver-luboitvbox hash: 8d931c2296a38951e95684cbcbe3b923
    Static pod: kube-apiserver-luboitvbox hash: 1b4e2b09a408c844f9d7b535e593ead9
    [apiclient] Found 1 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
    [upgrade/staticpods] Renewing certificate embedded in "controller-manager.conf"
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-05-23-38-03/kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-controller-manager-luboitvbox hash: 2480bf6982ad2103c05f6764e20f2787
    Static pod: kube-controller-manager-luboitvbox hash: 6617d53423348aa619f1d6e568bb894a
    [apiclient] Found 1 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
    [upgrade/staticpods] Renewing certificate embedded in "scheduler.conf"
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-05-23-38-03/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-scheduler-luboitvbox hash: 9b290132363a92652555896288ca3f88
    Static pod: kube-scheduler-luboitvbox hash: edf58ab819741a5d1eb9c33de756e3ca
    [apiclient] Found 1 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [upgrade/staticpods] Renewing certificate embedded in "admin.conf"
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.16.0". Enjoy!
    
    [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
    
  • Manually upgrade your CNI provider plugin.

    Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow. Check the addons page to find your CNI provider and see whether additional upgrade steps are required.

    This step is not required on additional control plane nodes if the CNI provider runs as a DaemonSet.

    One common issue you may encounter if you are using the Flannel network is that, after the upgrade, the cluster nodes remain in the NotReady state. If you run kubectl get nodes -o yaml, you may see the following line in the output:

    message: 'docker: network plugin is not ready: cni config uninitialized'
    

    In this case, you will need to update the /etc/cni/net.d/10-flannel.conflist file to include the cniVersion line, as shown below:

    {
      "name": "cbr0",
      "cniVersion": "0.2.0",
      "plugins": [ ...
    
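    After editing the file, you can quickly confirm the field is present (adjust the path if your conflist lives elsewhere):

    grep cniVersion /etc/cni/net.d/10-flannel.conflist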

Upgrade additional control plane nodes

  • Same as the first control plane node, but use:

    sudo kubeadm upgrade node

    instead of:

    sudo kubeadm upgrade apply

    Also, sudo kubeadm upgrade plan is not needed.

Note: If you’re using kubeadm’s support for certificate management, add --certificate-renewal=true to the kubeadm upgrade node command line to work around a known bug.
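
Putting the steps together, each additional control plane node is upgraded roughly like this (Debian-based packages shown; use the yum equivalents on CentOS/RHEL/Fedora, and replace x with your chosen patch version):

# upgrade kubeadm first
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.16.x-00 && \
apt-mark hold kubeadm

# upgrade this node's control plane, renewing certificates explicitly
sudo kubeadm upgrade node --certificate-renewal=true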

Drain the control plane node

  • Prepare the node for maintenance by marking it unschedulable and evicting the workloads:

    kubectl drain $MASTER --ignore-daemonsets
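
    Here $MASTER stands for the node's name as reported by kubectl get nodes. For example (the node name is illustrative):

    kubectl get nodes
    kubectl drain cp-node-1 --ignore-daemonsets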

Upgrade kubelet and kubectl

  • Upgrade the kubelet and kubectl on all control plane nodes:

    # Ubuntu, Debian or HypriotOS
    # replace x in 1.16.x-00 with the latest patch version
    apt-mark unhold kubelet kubectl && \
    apt-get update && apt-get install -y kubelet=1.16.x-00 kubectl=1.16.x-00 && \
    apt-mark hold kubelet kubectl

    # CentOS, RHEL or Fedora
    # replace x in 1.16.x-0 with the latest patch version
    yum install -y kubelet-1.16.x-0 kubectl-1.16.x-0 --disableexcludes=kubernetes

  • Restart the kubelet

    sudo systemctl restart kubelet
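
    Optionally, confirm that the kubelet came back up and reports the new version:

    systemctl status kubelet --no-pager
    kubelet --version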

Uncordon the control plane node

  • Bring the node back online by marking it schedulable:

    kubectl uncordon $MASTER

Upgrade worker nodes

The upgrade procedure on worker nodes should be executed one node at a time, or a few nodes at a time, without compromising the minimum required capacity for running your workloads.

Upgrade kubeadm

  • Upgrade kubeadm on all worker nodes:

    # Ubuntu, Debian or HypriotOS
    # replace x in 1.16.x-00 with the latest patch version
    apt-mark unhold kubeadm && \
    apt-get update && apt-get install -y kubeadm=1.16.x-00 && \
    apt-mark hold kubeadm

    # CentOS, RHEL or Fedora
    # replace x in 1.16.x-0 with the latest patch version
    yum install -y kubeadm-1.16.x-0 --disableexcludes=kubernetes

Upgrade the kubelet configuration

  • Call the following command:

    sudo kubeadm upgrade node

Drain the node

  • Prepare the node for maintenance by marking it unschedulable and evicting the workloads. Run:

    kubectl drain $NODE --ignore-daemonsets

    You should see output similar to this:

    node/ip-172-31-85-18 cordoned
    WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-dj7d7, kube-system/weave-net-z65qx
    node/ip-172-31-85-18 drained
    

Upgrade kubelet and kubectl

  • Upgrade the kubelet and kubectl on all worker nodes:

    # Ubuntu, Debian or HypriotOS
    # replace x in 1.16.x-00 with the latest patch version
    apt-mark unhold kubelet kubectl && \
    apt-get update && apt-get install -y kubelet=1.16.x-00 kubectl=1.16.x-00 && \
    apt-mark hold kubelet kubectl

    # CentOS, RHEL or Fedora
    # replace x in 1.16.x-0 with the latest patch version
    yum install -y kubelet-1.16.x-0 kubectl-1.16.x-0 --disableexcludes=kubernetes

  • Restart the kubelet

    sudo systemctl restart kubelet

Uncordon the node

  • Bring the node back online by marking it schedulable:

    kubectl uncordon $NODE

Verify the status of the cluster

After the kubelet is upgraded on all nodes, verify that all nodes are available again by running the following command from anywhere kubectl can access the cluster:

kubectl get nodes

The STATUS column should show Ready for all your nodes, and the version number should be updated.
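
Illustrative output after a successful upgrade (node names and ages are examples):

NAME        STATUS   ROLES    AGE   VERSION
cp-node-1   Ready    master   91d   v1.16.0
worker-1    Ready    <none>   91d   v1.16.0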

Recovering from a failure state

If kubeadm upgrade fails and does not roll back, for example because of an unexpected shutdown during execution, you can run kubeadm upgrade again. This command is idempotent and eventually makes sure that the actual state is the desired state you declare.

To recover from a bad state, you can also run kubeadm upgrade apply --force without changing the version that your cluster is running.
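
For example, to re-apply v1.16.x on a cluster that should already be running it (the version shown is illustrative):

sudo kubeadm upgrade apply v1.16.x --force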

How it works

kubeadm upgrade apply does the following:

  • Checks that your cluster is in an upgradeable state:
    • The API server is reachable
    • All nodes are in the Ready state
    • The control plane is healthy
  • Enforces the version skew policies.
  • Makes sure the control plane images are available locally or can be pulled to the machine.
  • Upgrades the control plane components, or rolls back if any of them fails to come up.
  • Applies the new kube-dns and kube-proxy manifests and makes sure that all necessary RBAC rules are created.
  • Creates new certificate and key files for the API server, and backs up the old files if they’re about to expire within 180 days.

kubeadm upgrade node does the following on additional control plane nodes:

  • Fetches the kubeadm ClusterConfiguration from the cluster.
  • Optionally backs up the kube-apiserver certificate.
  • Upgrades the static Pod manifests for the control plane components.
  • Upgrades the kubelet configuration for this node.

kubeadm upgrade node does the following on worker nodes:

  • Fetches the kubeadm ClusterConfiguration from the cluster.
  • Upgrades the kubelet configuration for this node.
