// Module included in the following assemblies:
//
// * architecture/architecture.adoc

[id="architecture-machine-roles_{context}"]
= Machine roles in {product-title}

{product-title} assigns hosts different roles. These roles define the function
of the machine within the cluster. The cluster contains definitions for the
standard master and worker role types.

[NOTE]
====
The cluster also contains the definition for the bootstrap role. Because the
bootstrap machine is used only during cluster installation, its function is
explained in the cluster installation documentation.
====

[id="defining-workers_{context}"]
== Cluster workers

In a Kubernetes cluster, the worker nodes are where the actual workloads
requested by Kubernetes users run and are managed. The worker nodes advertise
their capacity, and the scheduler, which is part of the master services,
determines on which nodes to start containers and pods. Important services run
on each worker node, including CRI-O, which is the container engine; Kubelet,
which is the service that accepts and fulfills requests for running and
stopping container workloads; and a service proxy, which manages communication
for pods across workers.

In {product-title}, machine sets control the worker machines. Machines with
the worker role drive compute workloads that are governed by a specific machine
pool that autoscales them. Because {product-title} has the capacity to support
multiple machine types, the worker machines are classed as _compute_ machines.
In this release, the terms "worker machine" and "compute machine" are
used interchangeably because the only default type of compute machine
is the worker machine. In future versions of {product-title}, different types
of compute machines, such as infrastructure machines, might be used by default.
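As an illustration of how a machine set groups worker machines, the following is a minimal, hypothetical excerpt of a `MachineSet` resource. The name, namespace layout, and replica count are invented for this sketch; only the `machine.openshift.io` role and type labels reflect how machines are marked with the worker role:

```yaml
# Hypothetical MachineSet excerpt: the name and replica count are
# illustrative, and provider-specific fields are omitted. The two labels
# under the machine template mark the resulting machines as workers.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: example-worker-a            # invented name
  namespace: openshift-machine-api
spec:
  replicas: 2                       # illustrative count
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
```

In a running cluster, you can list the machine sets that govern your compute machines with `oc get machinesets -n openshift-machine-api`.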

[id="defining-masters_{context}"]
== Cluster masters

In a Kubernetes cluster, the master nodes run services that are required to
control the Kubernetes cluster. In {product-title}, the master machines are
the control plane. They contain more
than just the Kubernetes services for managing the {product-title} cluster.
Because all of the machines with the control plane role are master machines,
the terms _master_ and _control plane_ are used interchangeably to describe
them. Instead of being grouped into a
machine set, master machines are defined by a series of standalone machine API
resources. Extra controls apply to master machines to prevent you from deleting
all master machines and breaking your cluster.

[NOTE]
====
Use three master nodes. Although you can theoretically use any number of
master nodes, the number is constrained by etcd quorum because the master
static pods and etcd static pods run on the same hosts.
====
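The quorum constraint behind this guidance is simple arithmetic: an etcd cluster of `n` members needs a majority, `floor(n/2) + 1`, to keep functioning, so it tolerates the loss of the remaining members. A minimal sketch of the arithmetic (this is an illustration, not an {product-title} command):

```shell
# etcd quorum arithmetic: a cluster of n members needs floor(n/2) + 1
# members to reach quorum; the remainder is its failure tolerance.
for n in 1 3 5; do
  quorum=$(( n / 2 + 1 ))
  tolerance=$(( n - quorum ))
  echo "members=$n quorum=$quorum tolerated_failures=$tolerance"
done
# → members=1 quorum=1 tolerated_failures=0
# → members=3 quorum=2 tolerated_failures=1
# → members=5 quorum=3 tolerated_failures=2
```

Three masters therefore tolerate the loss of one; a fourth master raises quorum to three without improving the failure tolerance.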

Services that fall under the Kubernetes category on the master include the
Kubernetes API server, etcd, the Kubernetes controller manager, and the
Kubernetes scheduler.

.Kubernetes services that run on the control plane
[cols="1,2",options="header"]
|===
|Component |Description
|Kubernetes API server
|The Kubernetes API server validates and configures the data for pods, services,
and replication controllers. It also provides a focal point for the shared state of the cluster.
|etcd
|etcd stores the persistent master state while other components watch etcd for
changes to bring themselves into the specified state.
//etcd can be optionally configured for high availability, typically deployed with 2n+1 peer services.
|Kubernetes controller manager
|The Kubernetes controller manager watches etcd for changes to objects such as
replication, namespace, and service account controller objects, and then uses the
API to enforce the specified state. Several such processes create a cluster with
one active leader at a time.
|Kubernetes scheduler
|The Kubernetes scheduler watches for newly created pods that have no assigned
node and selects a node to host them.
|===

There are also OpenShift services that run on the control plane: the OpenShift
API server, the OpenShift controller manager, the OpenShift OAuth API server,
and the OpenShift OAuth server.

.OpenShift services that run on the control plane
[cols="1,2",options="header"]
|===
|Component |Description
|OpenShift API server
|The OpenShift API server validates and configures the data for OpenShift resources, such as projects, routes, and templates.
The OpenShift API server is managed by the OpenShift API Server Operator.
|OpenShift controller manager
|The OpenShift controller manager watches etcd for changes to OpenShift objects, such as project, route, and template controller objects, and then uses the API to enforce the specified state.
The OpenShift controller manager is managed by the OpenShift Controller Manager Operator.
|OpenShift OAuth API server
|The OpenShift OAuth API server validates and configures the data to authenticate to OpenShift Container Platform, such as users, groups, and OAuth tokens.
The OpenShift OAuth API server is managed by the Cluster Authentication Operator.
|OpenShift OAuth server
|Users request tokens from the OpenShift OAuth server to authenticate themselves to the API.
The OpenShift OAuth server is managed by the Cluster Authentication Operator.
|===

Some of these services on the master machines run as systemd services, while
others run as static pods.

Systemd services are appropriate for services that must always come up on a
particular machine shortly after it starts. For master machines, those include
sshd, which allows remote login. They also include services such as:

* The CRI-O container engine (crio), which runs and
manages the containers. {product-title} {product-version} uses CRI-O instead of
the Docker Container Engine.
* Kubelet (kubelet), which accepts requests for managing containers on the
machine from master services.

CRI-O and Kubelet must run directly on the host as systemd services because
they need to be running before you can run other containers.
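If you have access to a running cluster, one way to observe this split on a master node is through a debug shell. These commands assume a live cluster and cluster-admin privileges, and the node name is a placeholder; the exact manifest file names vary by release:

```shell
# Environment-dependent: requires a running cluster and cluster-admin access.
# Open a debug shell on a master node (substitute a real node name):
oc debug node/<master-node-name>

# Inside the debug shell, inspect the host:
chroot /host
systemctl status crio kubelet   # the systemd services discussed above
ls /etc/kubernetes/manifests    # static pod manifests read by the kubelet
```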