
Foundational

Introduction

If you’re a developer looking to run applications on Kubernetes, this page and its linked topics can help you get started with the fundamentals. Though this page primarily describes development workflows, the subsequent page in the series covers more advanced, production setups.

Note: This app developer “user journey” is not a comprehensive overview of Kubernetes. It focuses more on what you develop, test, and deploy to Kubernetes, rather than how the underlying infrastructure works.

Though it’s possible for a single person to manage both, in many organizations it’s common to assign the latter to a dedicated cluster operator (a person who configures, controls, and monitors clusters).

Get started with a cluster

Web-based environment

If you’re brand new to Kubernetes and simply want to experiment without setting up a full development environment, web-based environments are a good place to start:

  • Kubernetes Basics - Introduces you to six common Kubernetes workflows. Each section walks you through browser-based, interactive exercises complete with their own Kubernetes environment.

  • Katacoda - The playground equivalent of the environment used in Kubernetes Basics above. Katacoda also provides more advanced tutorials, such as “Liveness and Readiness Healthchecks”.

  • Play with Kubernetes - A less structured environment than the Katacoda playground, for those who are more comfortable with Kubernetes concepts and want to explore further. It supports the ability to spin up multiple nodes.

Web-based environments are easy to access, but are not persistent. If you want to continue exploring Kubernetes in a workspace that you can come back to and change, Minikube is a good option.

Minikube can be installed locally, and runs a simple, single-node Kubernetes cluster inside a virtual machine (VM). This cluster is fully functioning and contains all core Kubernetes components. Many developers have found this sufficient for local application development.
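
As a quick sketch (the VM driver Minikube chooses depends on your platform), bringing up and checking the local cluster looks something like this:

    minikube start     # boots the VM and provisions a single-node Kubernetes cluster
    minikube status    # reports the state of the local cluster components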

Minikube includes a Docker daemon, but if you’re developing applications locally, you’ll want an independent Docker instance to support your workflow. This allows you to create containers (lightweight and portable executable images that contain software and all of its dependencies) and push them to a container registry.
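
For example, a typical local workflow builds an image with that independent Docker instance and pushes it to a registry the cluster can pull from (the image name and registry below are placeholders, not real endpoints):

    # Build the image from the Dockerfile in the current directory
    docker build -t registry.example.com/my-app:v1 .

    # Push it to a registry that your cluster can reach
    docker push registry.example.com/my-app:v1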

Note: Docker version 1.12 is recommended for full compatibility with Kubernetes, but a few other versions are tested and known to work.

You can get basic information about your cluster with the commands kubectl cluster-info and kubectl get nodes. However, to get a good idea of what’s really going on, you need to deploy an application to your cluster. This is covered in the next section.

MicroK8s

On Linux, MicroK8s is a good alternative to Minikube for a local install of Kubernetes:

  • Runs on the native OS, so there is no overhead from running a virtual machine.
  • Always provides the latest stable version of Kubernetes, using built-in auto-upgrade functionality.
  • Installs in less than a minute.

  • Install MicroK8s.
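
On a Linux distribution with snap support, installation is a single command (a sketch; the default channel tracks the latest stable Kubernetes, as noted above):

    # Install MicroK8s from the snap store
    sudo snap install microk8s --classic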

After you install MicroK8s, you can use its tab-completion functionality. All MicroK8s commands start with the microk8s. prefix. Type microk8s. (including the period) and then press the Tab key to see a list of available commands.

It also includes commands to enable Kubernetes subsystems. For example:

  • the Kubernetes Dashboard
  • the DNS service
  • GPU passthrough (for NVIDIA)
  • Ingress
  • Istio
  • Metrics server
  • Registry
  • Storage
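
For example, enabling the DNS service and the Dashboard, and then checking the result, looks like this (add-on names can vary slightly between MicroK8s releases):

    # Enable the DNS add-on and the Kubernetes Dashboard
    microk8s.enable dns dashboard

    # See which add-ons are enabled and whether the cluster is running
    microk8s.status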

Deploy an application

Basic workloads

The following examples demonstrate the fundamentals of deploying Kubernetes apps.
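
As a minimal sketch of what such a deployment looks like (the names and the nginx image are illustrative, not taken from a specific tutorial), a basic Deployment manifest might be:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-web            # illustrative name
    spec:
      replicas: 2                # run two identical Pods
      selector:
        matchLabels:
          app: hello-web
      template:
        metadata:
          labels:
            app: hello-web
        spec:
          containers:
          - name: web
            image: nginx         # illustrative image
            ports:
            - containerPort: 80

You would submit this manifest to the cluster with kubectl apply -f <filename>.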

Through these deployment tasks, you’ll gain familiarity with common workload objects such as Pods, Deployments, and Services.

The subsequent topics are also useful to know for basic application deployment.

Metadata

You can also specify custom information about your Kubernetes API objects by attaching key/value fields. Kubernetes provides two ways of doing this: labels, which identify and select objects, and annotations, which attach arbitrary non-identifying information.
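
For instance, the metadata section of an object’s manifest might carry both (the keys and values here are hypothetical):

    metadata:
      name: hello-web                    # illustrative object name
      labels:                            # identifying metadata, used by selectors
        app: hello-web
        tier: frontend
      annotations:                       # non-identifying, free-form metadata
        example.com/build: "2019-10-02"  # hypothetical annotation key/value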

Storage

You’ll also want to think about storage. Kubernetes provides different types of storage API objects for different storage needs: Volumes, whose lifetime is tied to the Pod that uses them, and PersistentVolumes with PersistentVolumeClaims, for storage that outlives any individual Pod.
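
As a rough sketch (the names are illustrative), an application that needs storage that survives Pod restarts requests it through a PersistentVolumeClaim and mounts it as a volume:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data              # illustrative name
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi            # ask the cluster for 1 GiB of persistent storage
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: storage-demo          # illustrative name
    spec:
      containers:
      - name: app
        image: nginx              # illustrative image
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: app-data     # binds the Pod to the claim above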

Configuration

To avoid unnecessarily rebuilding your container images, you should decouple your application’s configuration data from the code required to run it. There are a couple of ways of doing this, which you should choose according to your use case:

  • Using a manifest’s container definition
      Type of data: Non-confidential
      How it’s mounted: Environment variable
      Example: Command-line flag

  • Using ConfigMaps (an API object used to store non-confidential data in key/value pairs; can be consumed as environment variables, command-line arguments, or config files in a volume)
      Type of data: Non-confidential
      How it’s mounted: Environment variable or local file
      Example: nginx configuration

  • Using Secrets (an API object that stores sensitive information, such as passwords, OAuth tokens, and SSH keys)
      Type of data: Confidential
      How it’s mounted: Environment variable or local file
      Example: Database credentials
Note: If you have any data that you want to keep private, you should be using a Secret. Otherwise there is nothing stopping that data from being exposed to malicious users.
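
As a minimal sketch of the ConfigMap approach (the names, keys, and values are illustrative), a ConfigMap and a Pod that consumes it as environment variables might look like this:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config            # illustrative name
    data:
      LOG_LEVEL: "info"           # illustrative key/value
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: config-demo           # illustrative name
    spec:
      containers:
      - name: app
        image: nginx              # illustrative image
        envFrom:
        - configMapRef:
            name: app-config      # exposes LOG_LEVEL to the container as an environment variable

A Secret can be consumed in the same way, by using secretRef in place of configMapRef, or by mounting it as a file in a volume.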

Understand basic Kubernetes architecture

As an app developer, you don’t need to know everything about the inner workings of Kubernetes, but you may find it helpful to understand it at a high level.

What Kubernetes offers

Say that your team is deploying an ordinary Rails application. You’ve run some calculations and determined that you need five instances of your app running at any given time, in order to handle external traffic.

If you’re not running Kubernetes or a similar automated system, you might find the following scenario familiar:

  1. One instance of your app (a complete machine instance or just a container) goes down.

  2. Because your team has monitoring set up, this pages the person on call.

  3. The on-call person has to go in, investigate, and manually spin up a new instance.

  4. Depending on how your team handles DNS/networking, the on-call person may also need to update the service discovery mechanism to point at the IP of the new Rails instance rather than the old one.

This process can be tedious and inconvenient, especially if (2) happens in the early hours of the morning!

If you have Kubernetes set up, however, manual intervention is not as necessary. The Kubernetes control plane, which runs on your cluster’s master node, gracefully handles (3) and (4) on your behalf. As a result, Kubernetes is often referred to as a self-healing system.

There are two key parts of the control plane that facilitate this behavior: the Kubernetes API server and the Controllers.

Kubernetes API server

For Kubernetes to be useful, it needs to know what sort of cluster state you want it to maintain. Your YAML or JSON configuration files declare this desired state in terms of one or more API objects, such as Deployments (an API object that manages a replicated application). To make updates to your cluster’s state, you submit these files to the Kubernetes API server (kube-apiserver), which serves Kubernetes functionality through a RESTful interface and stores the state of the cluster.

Examples of state include but are not limited to the following:

  • The applications or other workloads to run
  • The container images for your applications and workloads
  • Allocation of network and disk resources

Note that the API server is just the gateway, and that object data is actually stored in a highly available datastore called etcd. For most purposes, though, you can focus on the API server; most reads and writes to cluster state take place as API requests.
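
In practice, kubectl translates your commands into those API requests. For example (the manifest filename and the hello-web object name are the illustrative ones used earlier):

    # Write: submit or update the desired state declared in a manifest
    kubectl apply -f deployment.yaml

    # Read: fetch the state the API server currently holds for that object
    kubectl get deployment hello-web -o yaml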

For more information, see Understanding Kubernetes Objects.

Controllers

Once you’ve declared your desired state through the Kubernetes API, the controllers work to make the cluster’s current state match this desired state.

The standard controller processes are kube-controller-manager and cloud-controller-manager, but you can also write your own controllers.

All of these controllers implement a control loop. For simplicity, you can think of this as the following:

  1. What is the current state of the cluster (X)?

  2. What is the desired state of the cluster (Y)?

  3. X == Y ?

    • true - Do nothing.
    • false - Perform tasks to get to Y, such as starting or restarting containers, or scaling the number of replicas of a given application. Return to 1.

By continuously looping, these controllers ensure the cluster can pick up new updates and avoid drifting from the desired state. These ideas are covered in more detail in the Kubernetes concepts documentation.
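
You can observe this reconciliation from the outside. For example, if you delete a Pod that belongs to a Deployment (using the illustrative hello-web Deployment from earlier), its controller notices the mismatch between the desired and actual number of replicas and starts a replacement:

    # Watch the Deployment's Pods in one terminal
    kubectl get pods -l app=hello-web --watch

    # In another terminal, delete one of the listed Pods; a replacement appears shortly
    kubectl delete pod <name-of-one-hello-web-pod>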

Additional resources

The Kubernetes documentation is rich in detail. Here’s a curated list of resources to help you start digging deeper.

Basic concepts

Tutorials

What’s next

If you feel fairly comfortable with the topics on this page and want to learn more, check out the subsequent user journeys in this series.
