Runtime Class

FEATURE STATE: Kubernetes v1.14 beta
This feature is currently in a beta state, meaning:

  • The version names contain beta (e.g. v2beta3).
  • Code is well tested. Enabling the feature is considered safe. Enabled by default.
  • Support for the overall feature will not be dropped, though details may change.
  • The schema and/or semantics of objects may change in incompatible ways in a subsequent beta or stable release. When this happens, we will provide instructions for migrating to the next version. This may require deleting, editing, and re-creating API objects. The editing process may require some thought. This may require downtime for applications that rely on the feature.
  • Recommended for only non-business-critical uses because of potential for incompatible changes in subsequent releases. If you have multiple clusters that can be upgraded independently, you may be able to relax this restriction.
  • Please do try our beta features and give feedback on them! After they exit beta, it may not be practical for us to make more changes.

This page describes the RuntimeClass resource and runtime selection mechanism.

Warning: RuntimeClass includes breaking changes in the beta upgrade in v1.14. If you were using RuntimeClass prior to v1.14, see Upgrading RuntimeClass from Alpha to Beta.

Runtime Class

RuntimeClass is a feature for selecting the container runtime configuration. The container runtime configuration is used to run a Pod’s containers.

Motivation

You can set a different RuntimeClass between different Pods to provide a balance of performance versus security. For example, if part of your workload deserves a high level of information security assurance, you might choose to schedule those Pods so that they run in a container runtime that uses hardware virtualization. You’d then benefit from the extra isolation of the alternative runtime, at the expense of some additional overhead.

You can also use RuntimeClass to run different Pods with the same container runtime but with different settings.

Set Up

Ensure the RuntimeClass feature gate is enabled (it is by default). See Feature Gates for an explanation of enabling feature gates. The RuntimeClass feature gate must be enabled on apiservers and kubelets.
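
If you need to set the gate explicitly, it uses the standard --feature-gates flag on both components. This is an illustrative sketch; how the flags are passed depends on how your cluster components are launched:

    kube-apiserver --feature-gates=RuntimeClass=true
    kubelet --feature-gates=RuntimeClass=true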

  1. Configure the CRI implementation on nodes (runtime dependent)
  2. Create the corresponding RuntimeClass resources

1. Configure the CRI implementation on nodes

The configurations available through RuntimeClass are Container Runtime Interface (CRI) implementation dependent. See the corresponding documentation (below) for your CRI implementation for how to configure.

Note: RuntimeClass assumes a homogeneous node configuration across the cluster by default (which means that all nodes are configured the same way with respect to container runtimes). To support heterogeneous node configurations, see Scheduling below.

The configurations have a corresponding handler name, referenced by the RuntimeClass. The handler must be a valid DNS 1123 label (alphanumeric characters and -).

2. Create the corresponding RuntimeClass resources

The configurations set up in step 1 should each have an associated handler name, which identifies the configuration. For each handler, create a corresponding RuntimeClass object.

The RuntimeClass resource currently has only two significant fields: the RuntimeClass name (metadata.name) and the handler (handler). The object definition looks like this:

apiVersion: node.k8s.io/v1beta1  # RuntimeClass is defined in the node.k8s.io API group
kind: RuntimeClass
metadata:
  name: myclass  # The name the RuntimeClass will be referenced by
  # RuntimeClass is a non-namespaced resource
handler: myconfiguration  # The name of the corresponding CRI configuration
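
For example, if the definition above is saved to a file (the filename here is just an example), you can create and inspect the RuntimeClass with kubectl:

    kubectl apply -f runtimeclass.yaml
    kubectl get runtimeclasses.node.k8s.io
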
Note: It is recommended that RuntimeClass write operations (create/update/patch/delete) be restricted to the cluster administrator. This is typically the default. See Authorization Overview for more details.

Usage

Once RuntimeClasses are configured for the cluster, using them is very simple. Specify a runtimeClassName in the Pod spec. For example:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  runtimeClassName: myclass
  # ...

This will instruct the kubelet to use the named RuntimeClass to run this Pod. If the named RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the Pod will enter the Failed terminal phase. Look for a corresponding event for an error message.
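
To see the error message, you can describe the Pod and inspect its events (the Pod name here matches the example above):

    kubectl describe pod mypod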

If no runtimeClassName is specified, the default RuntimeHandler will be used, which is equivalent to the behavior when the RuntimeClass feature is disabled.

CRI Configuration

For more details on setting up CRI runtimes, see CRI installation.

dockershim

Kubernetes' built-in dockershim CRI does not support runtime handlers.

containerd

Runtime handlers are configured through containerd’s configuration at /etc/containerd/config.toml. Valid handlers are configured under the runtimes section:

[plugins.cri.containerd.runtimes.${HANDLER_NAME}]
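
For example, a handler for a hypothetical sandboxed runtime might look like the following (the handler name and runtime_type value are illustrative; use the values that match your containerd installation):

[plugins.cri.containerd.runtimes.myconfiguration]
  runtime_type = "io.containerd.runsc.v1"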

See containerd’s config documentation for more details: https://github.com/containerd/cri/blob/master/docs/config.md

cri-o

Runtime handlers are configured through cri-o’s configuration at /etc/crio/crio.conf. Valid handlers are configured under the crio.runtime table:

[crio.runtime.runtimes.${HANDLER_NAME}]
  runtime_path = "${PATH_TO_BINARY}"
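
For example, a handler backed by a hypothetical kata-runtime binary might be configured as (the handler name and binary path are illustrative):

[crio.runtime.runtimes.kata]
  runtime_path = "/usr/bin/kata-runtime"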

See cri-o’s config documentation for more details: https://github.com/kubernetes-sigs/cri-o/blob/master/cmd/crio/config.go

Scheduling

FEATURE STATE: Kubernetes v1.16 beta

As of Kubernetes v1.16, RuntimeClass includes support for heterogeneous clusters through its scheduling fields. Through the use of these fields, you can ensure that Pods running with this RuntimeClass are scheduled to nodes that support it. To use the scheduling support, you must have the RuntimeClass admission controller enabled (the default, as of v1.16).

To ensure pods land on nodes supporting a specific RuntimeClass, that set of nodes should have a common label which is then selected by the runtimeclass.scheduling.nodeSelector field. The RuntimeClass’s nodeSelector is merged with the pod’s nodeSelector in admission, effectively taking the intersection of the set of nodes selected by each. If there is a conflict, the pod will be rejected.

If the supported nodes are tainted to prevent other RuntimeClass pods from running on the node, you can add tolerations to the RuntimeClass. As with the nodeSelector, the tolerations are merged with the pod’s tolerations in admission, effectively taking the union of the set of nodes tolerated by each.
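
For example, a RuntimeClass whose Pods should only run on nodes carrying a hypothetical runtime=myconfiguration label, and which tolerates a matching taint, could be defined like this (a sketch; the label key, value, and taint are placeholders for your own node setup):

apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: myclass
handler: myconfiguration
scheduling:
  nodeSelector:
    runtime: myconfiguration
  tolerations:
  - key: runtime
    operator: Equal
    value: myconfiguration
    effect: NoSchedule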

To learn more about configuring the node selector and tolerations, see Assigning Pods to Nodes.

Pod Overhead

FEATURE STATE: Kubernetes v1.16 alpha
This feature is currently in an alpha state, meaning:

  • The version names contain alpha (e.g. v1alpha1).
  • Might be buggy. Enabling the feature may expose bugs. Disabled by default.
  • Support for feature may be dropped at any time without notice.
  • The API may change in incompatible ways in a later software release without notice.
  • Recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support.

As of Kubernetes v1.16, RuntimeClass includes support for specifying overhead associated with running a pod, as part of the PodOverhead feature. To use PodOverhead, you must have the PodOverhead feature gate enabled (it is off by default).

Pod overhead is defined in RuntimeClass through the Overhead fields. Through the use of these fields, you can specify the overhead of running pods utilizing this RuntimeClass and ensure these overheads are accounted for in Kubernetes.
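
For example, the overhead of a sandboxed runtime could be declared like this (a sketch; the resource values are purely illustrative, and the overhead field is only honored when the PodOverhead feature gate is enabled):

apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: myclass
handler: myconfiguration
overhead:
  podFixed:
    cpu: 250m
    memory: 120Mi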

Upgrading RuntimeClass from Alpha to Beta

The RuntimeClass Beta feature includes the following changes:

  • The node.k8s.io API group and runtimeclasses.node.k8s.io resource have been migrated to a built-in API from a CustomResourceDefinition.
  • The spec has been inlined in the RuntimeClass definition (i.e. there is no more RuntimeClassSpec).
  • The runtimeHandler field has been renamed handler.
  • The handler field is now required in all API versions. This means the runtimeHandler field in the Alpha API is also required.
  • The handler field must be a valid DNS label (RFC 1123), meaning it can no longer contain . characters (in all versions). Valid handlers match the following regular expression: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$.

Action Required: The following actions are required to upgrade from the alpha version of the RuntimeClass feature to the beta version:

  • RuntimeClass resources must be recreated after upgrading to v1.14, and the runtimeclasses.node.k8s.io CRD should be manually deleted:

    kubectl delete customresourcedefinitions.apiextensions.k8s.io runtimeclasses.node.k8s.io
    
  • Alpha RuntimeClasses with an unspecified or empty runtimeHandler or those using a . character in the handler are no longer valid, and must be migrated to a valid handler configuration (see above).
