// Module included in the following assemblies:
//
// * architecture/control-plane.adoc

[id="architecture-machine-roles_{context}"]
= Machine roles in {product-title}

{product-title} assigns hosts different roles. These roles define the function of the machine within the cluster. The cluster contains definitions for the standard `master` and `worker` role types.
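
For example, you can list the nodes in a cluster and the roles assigned to them by using the `oc get nodes` command. The following output is illustrative only; node names, counts, and versions vary by cluster:

[source,terminal]
----
$ oc get nodes
----

.Example output
[source,terminal]
----
NAME              STATUS   ROLES                  AGE   VERSION
control-plane-0   Ready    control-plane,master   5d    v1.29.4
control-plane-1   Ready    control-plane,master   5d    v1.29.4
control-plane-2   Ready    control-plane,master   5d    v1.29.4
worker-a          Ready    worker                 5d    v1.29.4
worker-b          Ready    worker                 5d    v1.29.4
----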

ifndef::openshift-dedicated,openshift-rosa[]
[NOTE]
====
The cluster also contains the definition for the `bootstrap` role. Because the bootstrap machine is used only during cluster installation, its function is explained in the cluster installation documentation.
====
endif::openshift-dedicated,openshift-rosa[]

ifndef::openshift-dedicated,openshift-rosa[]
== Control plane and node host compatibility

The {product-title} version must match between control plane hosts and node hosts. For example, in a {product-version} cluster, all control plane hosts must be {product-version} and all nodes must be {product-version}.

Temporary mismatches during cluster upgrades are acceptable. For example, when upgrading from the previous {product-title} version to {product-version}, some nodes upgrade to {product-version} before others. Prolonged skew between control plane hosts and node hosts might expose older compute machines to bugs and missing features. Resolve skewed control plane hosts and node hosts as soon as possible.

The `kubelet` service must not be newer than `kube-apiserver`, and it can be up to two minor versions older, depending on whether your {product-title} version is odd or even. The following table shows the supported version skew:

[cols="2",options="header"]
|===
| {product-title} version
| Supported `kubelet` skew

| Odd {product-title} minor versions ^[1]^
| Up to one version older

| Even {product-title} minor versions ^[2]^
| Up to two versions older
|===
[.small]
--
1. For example, {product-title} 4.11, 4.13.
2. For example, {product-title} 4.10, 4.12.
--
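
To check for skew during or after an upgrade, you can compare the version that the API server reports with the `kubelet` version on each node. The following commands are illustrative:

[source,terminal]
----
$ oc version
$ oc get nodes
----

In the `oc version` output, the `Kubernetes Version` field reflects `kube-apiserver`; in the `oc get nodes` output, the `VERSION` column shows the `kubelet` version that each node reports.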

endif::openshift-dedicated,openshift-rosa[]

[id="defining-workers_{context}"]
== Cluster workers

In a Kubernetes cluster, worker nodes run and manage the actual workloads requested by Kubernetes users. The worker nodes advertise their capacity, and the scheduler, which is a control plane service, determines on which nodes to start pods and containers. The following important services run on each worker node:

* CRI-O, which is the container engine.
* kubelet, which is the service that accepts and fulfills requests for running and stopping container workloads.
* A service proxy, which manages communication for pods across workers.
* The crun or runC low-level container runtime, which creates and runs containers.
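
You can confirm that these services are active on a particular worker node by starting a debug session on the node. This sketch assumes a node named `worker-a`; substitute a node name from your cluster:

[source,terminal]
----
$ oc debug node/worker-a -- chroot /host systemctl is-active crio kubelet
----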

[NOTE]
====
For information about how to enable runC instead of the default crun, see the documentation for creating a `ContainerRuntimeConfig` CR.
====
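
As a rough sketch only, a `ContainerRuntimeConfig` CR that selects runC for worker nodes might look like the following. The `defaultRuntime` field and the worker pool label shown here are assumptions; verify both against the `ContainerRuntimeConfig` documentation for your version before applying anything like this:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: enable-runc # hypothetical name for this example
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  containerRuntimeConfig:
    defaultRuntime: runc
----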

In {product-title}, compute machine sets control the compute machines, which are assigned the `worker` machine role. Machines with the `worker` role run compute workloads and are governed by a specific machine pool that autoscales them. Because {product-title} can support multiple machine types, machines with the `worker` role are classed as _compute_ machines. In this release, the terms _worker machine_ and _compute machine_ are used interchangeably because the only default type of compute machine is the worker machine. In future versions of {product-title}, different types of compute machines, such as infrastructure machines, might be used by default.
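
For example, you can list the compute machine sets in a cluster with the following command; for most clusters, the Machine API resources live in the `openshift-machine-api` namespace:

[source,terminal]
----
$ oc get machinesets -n openshift-machine-api
----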

[NOTE]
====
Compute machine sets are groupings of compute machine resources under the `machine-api` namespace. Compute machine sets are configurations that are designed to start new compute machines on a specific cloud provider. Conversely, machine config pools (MCPs) are part of the Machine Config Operator (MCO) namespace. An MCP is used to group machines together so the MCO can manage their configurations and facilitate their upgrades.
====
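
To see the contrast, you can list the machine config pools that the MCO manages. Output varies; a default cluster typically shows a `master` pool and a `worker` pool:

[source,terminal]
----
$ oc get machineconfigpools
----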
[id="defining-masters_{context}"]
|
|
== Cluster control planes
|
|
|
|
In a Kubernetes cluster, the _master_ nodes run services that are required to control the Kubernetes cluster. In {product-title}, the control plane is comprised of control plane machines that have a `master` machine role. They contain more than just the Kubernetes services for managing the {product-title} cluster.
|
|
|
|
For most {product-title} clusters, control plane machines are defined by a series of standalone machine API resources.
|
|
ifndef::openshift-dedicated,openshift-rosa[]
|
|
For supported cloud provider and {product-title} version combinations, control planes can be managed with control plane machine sets.
|
|
endif::openshift-dedicated,openshift-rosa[]
|
|
ifdef::openshift-dedicated,openshift-rosa[]
|
|
Control planes are managed with control plane machine sets.
|
|
endif::openshift-dedicated,openshift-rosa[]
|
|
Extra controls apply to control plane machines to prevent you from deleting all of the control plane machines and breaking your cluster.
|
|
|
|
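
For example, you can list the machines that have the `master` role by filtering on the machine role label. The label shown here is the one commonly applied by the Machine API; confirm it in your cluster before relying on it:

[source,terminal]
----
$ oc get machines -n openshift-machine-api -l machine.openshift.io/cluster-api-machine-role=master
----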

[NOTE]
====
ifndef::openshift-dedicated,openshift-rosa[]
Exactly three control plane nodes must be used for all production deployments. However, on bare metal platforms, clusters can be scaled up to five control plane nodes.
endif::openshift-dedicated,openshift-rosa[]
ifdef::openshift-dedicated,openshift-rosa[]
Single availability zone clusters and multiple availability zone clusters require a minimum of three control plane nodes.
endif::openshift-dedicated,openshift-rosa[]
====

Services that fall under the Kubernetes category on the control plane include the Kubernetes API server, etcd, the Kubernetes controller manager, and the Kubernetes scheduler.

.Kubernetes services that run on the control plane
[cols="1,2",options="header"]
|===
|Component |Description

|Kubernetes API server
|The Kubernetes API server validates and configures the data for pods, services, and replication controllers. It also provides a focal point for the shared state of the cluster.

|etcd
|etcd stores the persistent control plane state while other components watch etcd for changes to bring themselves into the specified state.
//etcd can be optionally configured for high availability, typically deployed with 2n+1 peer services.

|Kubernetes controller manager
|The Kubernetes controller manager watches etcd for changes to objects such as replication, namespace, and service account controller objects, and then uses the API to enforce the specified state. Several such processes create a cluster with one active leader at a time.

|Kubernetes scheduler
|The Kubernetes scheduler watches for newly created pods without an assigned node and selects the best node to host the pod.
|===
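
On a running cluster, each of these services is visible as pods in its own namespace. For example, the following commands list the API server and etcd pods; the output varies by cluster:

[source,terminal]
----
$ oc get pods -n openshift-kube-apiserver
$ oc get pods -n openshift-etcd
----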

There are also OpenShift services that run on the control plane, which include the OpenShift API server, OpenShift controller manager, OpenShift OAuth API server, and OpenShift OAuth server.

.OpenShift services that run on the control plane
[cols="1,2",options="header"]
|===
|Component |Description

|OpenShift API server
|The OpenShift API server validates and configures the data for OpenShift resources, such as projects, routes, and templates.

The OpenShift API server is managed by the OpenShift API Server Operator.

|OpenShift controller manager
|The OpenShift controller manager watches etcd for changes to OpenShift objects, such as project, route, and template controller objects, and then uses the API to enforce the specified state.

The OpenShift controller manager is managed by the OpenShift Controller Manager Operator.

|OpenShift OAuth API server
|The OpenShift OAuth API server validates and configures the data to authenticate to {product-title}, such as users, groups, and OAuth tokens.

The OpenShift OAuth API server is managed by the Cluster Authentication Operator.

|OpenShift OAuth server
|Users request tokens from the OpenShift OAuth server to authenticate themselves to the API.

The OpenShift OAuth server is managed by the Cluster Authentication Operator.
|===

Some of these services on the control plane machines run as systemd services, while others run as static pods.
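
Static pods are managed by the `kubelet` from manifests on the node itself, and their mirror pods carry the node name as a suffix, which makes them easy to identify. For example, the following illustrative command lists the API server static pods across the control plane nodes:

[source,terminal]
----
$ oc get pods -n openshift-kube-apiserver -o wide
----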

Systemd services are appropriate for services that you need to always come up on that particular system shortly after it starts. For control plane machines, those include sshd, which allows remote login. They also include services such as:

* The CRI-O container engine (crio), which runs and manages the containers. {product-title} {product-version} uses CRI-O instead of the Docker Container Engine.
* Kubelet (kubelet), which accepts requests for managing containers on the machine from control plane services.

CRI-O and Kubelet must run directly on the host as systemd services because they need to be running before you can run other containers.

The [x-]`installer-*` and [x-]`revision-pruner-*` control plane pods must run with root permissions because they write to the `/etc/kubernetes` directory, which is owned by the root user. These pods are in the following namespaces:

* `openshift-etcd`
* `openshift-kube-apiserver`
* `openshift-kube-controller-manager`
* `openshift-kube-scheduler`
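
You can find these pods in any of the listed namespaces. The following command and `grep` filter are an illustrative way to locate them:

[source,terminal]
----
$ oc get pods -n openshift-kube-apiserver | grep -E 'installer-|revision-pruner-'
----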