From 1557ef3884abedb486ebcf3e4cbb563eba0b9048 Mon Sep 17 00:00:00 2001 From: Vikram Goyal Date: Thu, 8 Jul 2021 16:28:38 +1000 Subject: [PATCH] Conscious lang update: master to control plane --- architecture/understanding-development.adoc | 2 +- backup_and_restore/backing-up-etcd.adoc | 4 +- .../about-disaster-recovery.adoc | 4 +- .../replacing-unhealthy-etcd-member.adoc | 4 +- contributing_to_docs/term_glossary.adoc | 29 +++++++------- .../creating-infrastructure-machinesets.adoc | 2 +- modules/accessing-hosts-on-aws.adoc | 2 +- .../architecture-kubernetes-introduction.adoc | 2 +- modules/architecture-machine-roles.adoc | 8 ++-- modules/backup-etcd.adoc | 6 +-- modules/builds-webhook-triggers.adoc | 2 +- modules/cluster-logging-deploy-cli.adoc | 2 +- modules/cluster-logging-deploy-console.adoc | 2 +- ...ster-logging-log-store-status-viewing.adoc | 2 +- modules/creating-an-infra-node.adoc | 2 +- ...ining-where-installation-issues-occur.adoc | 8 ++-- modules/dr-restoring-cluster-state.adoc | 16 ++++---- .../file-integrity-important-attributes.adoc | 2 +- ...grity-operator-defining-custom-config.adoc | 6 +-- .../gathering-bootstrap-diagnostic-data.adoc | 4 +- modules/gathering-crio-logs.adoc | 4 +- modules/gathering-operator-logs.adoc | 6 +-- modules/graceful-restart.adoc | 10 ++--- modules/graceful-shutdown.adoc | 4 +- modules/installation-aws-limits.adoc | 4 +- ...tallation-aws-user-infra-requirements.adoc | 2 +- modules/installation-azure-config-yaml.adoc | 2 +- modules/installation-bootstrap-gather.adoc | 2 +- modules/installation-common-issues.adoc | 2 +- ...stallation-creating-aws-control-plane.adoc | 4 +- ...allation-creating-azure-control-plane.adoc | 4 +- modules/installation-dns-user-infra.adoc | 2 +- modules/installation-process.adoc | 2 +- ...lation-rhv-creating-bootstrap-machine.adoc | 2 +- .../installation-special-config-kargs.adoc | 4 +- .../installation-special-config-rtkernel.adoc | 8 ++-- ...nvestigating-etcd-installation-issues.adoc | 10 ++--- ...ating-kubelet-api-installation-issues.adoc | 16 ++++---- ...ating-master-node-installation-issues.adoc | 38 +++++++++---------- ...ating-worker-node-installation-issues.adoc | 2 +- ...leshooting-cluster-nodes-will-not-pxe.adoc | 4 +- ...i-install-troubleshooting-misc-issues.adoc | 2 +- modules/ldap-failover-generate-certs.adoc | 2 +- modules/machine-config-overview.adoc | 2 +- modules/master-node-sizing.adoc | 4 +- ...gmt-power-remediation-baremetal-about.adoc | 4 +- ...onitoring-configuring-etcd-monitoring.adoc | 4 +- modules/monitoring-installation-progress.adoc | 8 ++-- modules/nodes-nodes-audit-log-advanced.adoc | 3 +- .../nodes-nodes-audit-log-basic-viewing.adoc | 8 ++-- ...odes-nodes-working-master-schedulable.adoc | 15 ++++---- ...es-scheduler-taints-tolerations-about.adoc | 2 +- ...s-scheduler-taints-tolerations-adding.adoc | 2 +- modules/nw-create-load-balancer-service.adoc | 2 +- modules/nw-sriov-configuring-operator.adoc | 6 +-- .../querying-bootstrap-node-journal-logs.adoc | 2 +- .../querying-cluster-node-journal-logs.adoc | 6 +-- .../restore-determine-state-etcd-member.adoc | 2 +- ...tore-replace-crashlooping-etcd-member.adoc | 2 +- .../restore-replace-stopped-etcd-member.adoc | 8 ++-- modules/rhcos-about.adoc | 2 +- modules/rhcos-enabling-multipath.adoc | 2 +- modules/running-compliance-scans.adoc | 2 +- .../security-context-constraints-about.adoc | 4 +- modules/security-hardening-how.adoc | 2 +- ...ice-accounts-configuration-parameters.adoc | 2 +- modules/storage-expanding-flexvolume.adoc | 2 +- 
...shooting-disabling-autoreboot-mco-cli.adoc | 3 +- modules/understanding-control-plane.adoc | 2 +- modules/upi-installation-considerations.adoc | 2 +- networking/accessing-hosts.adoc | 2 +- .../scheduler-config-openshift-io-v1.adoc | 2 +- .../troubleshooting-installations.adoc | 4 +- 73 files changed, 175 insertions(+), 181 deletions(-) diff --git a/architecture/understanding-development.adoc b/architecture/understanding-development.adoc index a823ff8b22..095b6d6d40 100644 --- a/architecture/understanding-development.adoc +++ b/architecture/understanding-development.adoc @@ -124,7 +124,7 @@ so there is less overhead in running them. When you ultimately run your containers in {product-title}, you use the link:https://cri-o.io/[CRI-O] container engine. CRI-O runs on every worker and -master machine in an {product-title} cluster, but CRI-O is not yet supported as +control plane machine (also known as the master machine) in an {product-title} cluster, but CRI-O is not yet supported as a standalone runtime outside of {product-title}. [id="base-image-options"] diff --git a/backup_and_restore/backing-up-etcd.adoc b/backup_and_restore/backing-up-etcd.adoc index 90d5a5e936..8c3e0c7596 100644 --- a/backup_and_restore/backing-up-etcd.adoc +++ b/backup_and_restore/backing-up-etcd.adoc @@ -19,13 +19,13 @@ Be sure to take an etcd backup after you upgrade your cluster. This is important [IMPORTANT] ==== -Back up your cluster's etcd data by performing a single invocation of the backup script on a master host. Do not take a backup for each master host. +Back up your cluster's etcd data by performing a single invocation of the backup script on a control plane host (also known as the master host). Do not take a backup for each control plane host. ==== After you have an etcd backup, you can xref:../backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore to a previous cluster state]. You can perform the xref:../backup_and_restore/backing-up-etcd.adoc#backing-up-etcd-data_backup-etcd[etcd data backup process] -on any master host that has a running etcd instance. +on any control plane host that has a running etcd instance. // Backing up etcd data include::modules/backup-etcd.adoc[leveloffset=+1] diff --git a/backup_and_restore/disaster_recovery/about-disaster-recovery.adoc b/backup_and_restore/disaster_recovery/about-disaster-recovery.adoc index 9699f91ced..10d8d49f26 100644 --- a/backup_and_restore/disaster_recovery/about-disaster-recovery.adoc +++ b/backup_and_restore/disaster_recovery/about-disaster-recovery.adoc @@ -13,13 +13,13 @@ state. [IMPORTANT] ==== -Disaster recovery requires you to have at least one healthy master host. +Disaster recovery requires you to have at least one healthy control plane host (also known as the master host). ==== xref:../../backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[Restoring to a previous cluster state]:: This solution handles situations where you want to restore your cluster to a previous state, for example, if an administrator deletes something critical. -This also includes situations where you have lost the majority of your master hosts, leading to etcd quorum loss and the cluster going offline. As long as you have taken an etcd backup, you can follow this procedure to restore your cluster to a previous state. 
+This also includes situations where you have lost the majority of your control plane hosts, leading to etcd quorum loss and the cluster going offline. As long as you have taken an etcd backup, you can follow this procedure to restore your cluster to a previous state. + If applicable, you might also need to xref:../../backup_and_restore/disaster_recovery/scenario-3-expired-certs.adoc#dr-recovering-expired-certs[recover from expired control plane certificates]. + diff --git a/backup_and_restore/replacing-unhealthy-etcd-member.adoc b/backup_and_restore/replacing-unhealthy-etcd-member.adoc index 9546604450..c872db9066 100644 --- a/backup_and_restore/replacing-unhealthy-etcd-member.adoc +++ b/backup_and_restore/replacing-unhealthy-etcd-member.adoc @@ -11,11 +11,11 @@ This process depends on whether the etcd member is unhealthy because the machine [NOTE] ==== -If you have lost the majority of your master hosts, leading to etcd quorum loss, then you must follow the disaster recovery procedure to xref:../backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore to a previous cluster state] instead of this procedure. +If you have lost the majority of your control plane hosts (also known as the master hosts), leading to etcd quorum loss, then you must follow the disaster recovery procedure to xref:../backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore to a previous cluster state] instead of this procedure. If the control plane certificates are not valid on the member being replaced, then you must follow the procedure to xref:../backup_and_restore/disaster_recovery/scenario-3-expired-certs.adoc#dr-recovering-expired-certs[recover from expired control plane certificates] instead of this procedure. -If a master node is lost and a new one is created, the etcd cluster Operator handles generating the new TLS certificates and adding the node as an etcd member. +If a control plane node is lost and a new one is created, the etcd cluster Operator handles generating the new TLS certificates and adding the node as an etcd member. ==== == Prerequisites diff --git a/contributing_to_docs/term_glossary.adoc b/contributing_to_docs/term_glossary.adoc index f7c2d30254..ad5f510362 100644 --- a/contributing_to_docs/term_glossary.adoc +++ b/contributing_to_docs/term_glossary.adoc @@ -211,7 +211,7 @@ Usage: Ignition config file or Ignition config files The file that Ignition uses to configure {op-system-first} during operating system initialization. The installation program generates different -Ignition config files to initialize bootstrap, master, and worker nodes. +Ignition config files to initialize bootstrap, control plane, and worker nodes. === Ingress @@ -241,17 +241,16 @@ Usage: kubelet(s) as appropriate The agent that controls a Kubernetes node. Each node runs a kubelet, which handles starting and stopping containers on a node, based on the desired state -defined by the master. +defined by the control plane (also known as master). '''' -=== Kubernetes master +=== Kubernetes control plane -Usage: Kubernetes master(s) as appropriate +Usage: Kubernetes control plane -The Kubernetes-native equivalent to the link:#project[OpenShift master]. -An OpenShift system runs OpenShift masters, not Kubernetes masters, and -an OpenShift master provides a superset of the functionality of a Kubernetes -master, so it is generally preferred to use the term OpenShift master. 
+The Kubernetes-native equivalent to the link:#project[OpenShift control plane]. +An OpenShift system runs OpenShift control planes (also known as masters), not Kubernetes control planes, and +an OpenShift control plane provides a superset of the functionality of a Kubernetes control plane, so it is generally preferred to use the term OpenShift control plane. == M @@ -309,16 +308,16 @@ Usage: OpenShift CLI The `oc` tool is the command line interface of OpenShift 3 and 4. '''' -=== OpenShift master +=== OpenShift control plane (also known as master) -Usage: OpenShift master(s) as appropriate +Usage: OpenShift control plane Provides a REST endpoint for interacting with the system and manages the state of the system, ensuring that all containers expected to be running are actually running and that other requests such as builds and deployments are serviced. New deployments and configurations are created with the REST API, and the state of the system can be interrogated through this endpoint as well. An OpenShift -master comprises the apiserver, scheduler, and SkyDNS. +control plane comprises the API server, scheduler, and SkyDNS. '''' === Operator @@ -424,8 +423,8 @@ caching, or traffic controls on the Service content. Usage: scheduler(s) as appropriate -Component of the Kubernetes master or OpenShift master that manages the state of -the system, places Pods on nodes, and ensures that all containers that are +Component of the Kubernetes control plane or OpenShift control plane that manages the state of +the system, places pods on nodes, and ensures that all containers that are expected to be running are actually running. '''' @@ -474,8 +473,8 @@ A service account binds together: Usage: SkyDNS -Component of the Kubernetes master or OpenShift master that provides -cluster-wide DNS resolution of internal host names for Services and Pods. +Component of the Kubernetes control plane or OpenShift control plane that provides +cluster-wide DNS resolution of internal host names for services and pods. '''' === Source-to-Image (S2I) diff --git a/machine_management/creating-infrastructure-machinesets.adoc b/machine_management/creating-infrastructure-machinesets.adoc index f8bc93b3b1..ea2ebc3399 100644 --- a/machine_management/creating-infrastructure-machinesets.adoc +++ b/machine_management/creating-infrastructure-machinesets.adoc @@ -9,7 +9,7 @@ You can create a machine set to host only infrastructure components. You apply s [IMPORTANT] ==== -Unlike earlier versions of {product-title}, you cannot move the infrastructure components to the master machines. To move the components, you must create a new machine set. +Unlike earlier versions of {product-title}, you cannot move the infrastructure components to the control plane machines (also known as the master machines). To move the components, you must create a new machine set. ==== include::modules/infrastructure-components.adoc[leveloffset=+1] diff --git a/modules/accessing-hosts-on-aws.adoc b/modules/accessing-hosts-on-aws.adoc index ac973e0f91..a51a89b056 100644 --- a/modules/accessing-hosts-on-aws.adoc +++ b/modules/accessing-hosts-on-aws.adoc @@ -41,7 +41,7 @@ API is responsive, run privileged pods instead. master. The host name looks similar to `ip-10-0-1-163.ec2.internal`. . From the bastion SSH host you manually deployed into Amazon EC2, SSH into that -master host. Ensure that you use the same SSH key you specified during the +control plane host (also known as the master host). 
Ensure that you use the same SSH key you specified during the installation: + [source,terminal] diff --git a/modules/architecture-kubernetes-introduction.adoc b/modules/architecture-kubernetes-introduction.adoc index 06756c169c..15d8ac09c8 100644 --- a/modules/architecture-kubernetes-introduction.adoc +++ b/modules/architecture-kubernetes-introduction.adoc @@ -15,7 +15,7 @@ deployment, scaling, and management of containerized applications. The general concept of Kubernetes is fairly simple: * Start with one or more worker nodes to run the container workloads. -* Manage the deployment of those workloads from one or more master nodes. +* Manage the deployment of those workloads from one or more control plane nodes (also known as the master nodes). * Wrap containers in a deployment unit called a pod. Using pods provides extra metadata with the container and offers the ability to group several containers in a single deployment entity. diff --git a/modules/architecture-machine-roles.adoc b/modules/architecture-machine-roles.adoc index 7beb5db304..403ec40f55 100644 --- a/modules/architecture-machine-roles.adoc +++ b/modules/architecture-machine-roles.adoc @@ -26,11 +26,11 @@ Machine sets are groupings of machine resources under the `machine-api` namespac [id="defining-masters_{context}"] == Cluster masters -In a Kubernetes cluster, the master nodes run services that are required to control the Kubernetes cluster. In {product-title}, the master machines are the control plane. They contain more than just the Kubernetes services for managing the {product-title} cluster. Because all of the machines with the control plane role are master machines, the terms _master_ and _control plane_ are used interchangeably to describe them. Instead of being grouped into a machine set, master machines are defined by a series of standalone machine API resources. Extra controls apply to master machines to prevent you from deleting all master machines and breaking your cluster. +In a Kubernetes cluster, the control plane nodes (also known as the master nodes) run services that are required to control the Kubernetes cluster. In {product-title}, the control plane machines are the control plane. They contain more than just the Kubernetes services for managing the {product-title} cluster. Because all of the machines with the control plane role are control plane machines, the terms _master_ and _control plane_ are used interchangeably to describe them. Instead of being grouped into a machine set, control plane machines are defined by a series of standalone machine API resources. Extra controls apply to control plane machines to prevent you from deleting all control plane machines and breaking your cluster. [NOTE] ==== -Exactly three master nodes must be used for all production deployments. +Exactly three control plane nodes must be used for all production deployments. ==== Services that fall under the Kubernetes category on the master include the Kubernetes API server, etcd, the Kubernetes controller manager, and the Kubernetes scheduler. @@ -82,9 +82,9 @@ The OpenShift OAuth API server is managed by the Cluster Authentication Operator The OpenShift OAuth server is managed by the Cluster Authentication Operator. |=== -Some of these services on the master machines run as systemd services, while others run as static pods. +Some of these services on the control plane machines run as systemd services, while others run as static pods. 
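+For example, assuming a running cluster, you can observe both forms: the kubelet writes its logs as a systemd unit, which you can read with `oc adm node-logs`, while the Kubernetes API server runs as static pods in the `openshift-kube-apiserver` namespace. This is only an illustrative check, not part of the original procedure:
+
+[source,terminal]
+----
+$ oc adm node-logs --role=master -u kubelet <1>
+$ oc get pods -n openshift-kube-apiserver -o wide <2>
+----
+<1> Reads the journald logs of the kubelet systemd unit on the control plane nodes.
+<2> Lists the API server pods; the `kube-apiserver` static pods run one per control plane machine.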
-Systemd services are appropriate for services that you need to always come up on that particular system shortly after it starts. For master machines, those include sshd, which allows remote login. It also includes services such as: +Systemd services are appropriate for services that you need to always come up on that particular system shortly after it starts. For control plane machines, those include sshd, which allows remote login. It also includes services such as: * The CRI-O container engine (crio), which runs and manages the containers. {product-title} {product-version} uses CRI-O instead of the Docker Container Engine. * Kubelet (kubelet), which accepts requests for managing containers on the machine from master services. diff --git a/modules/backup-etcd.adoc b/modules/backup-etcd.adoc index 9c448fa35d..79b827c5d8 100644 --- a/modules/backup-etcd.adoc +++ b/modules/backup-etcd.adoc @@ -10,7 +10,7 @@ Follow these steps to back up etcd data by creating an etcd snapshot and backing [IMPORTANT] ==== -Only save a backup from a single master host. Do not take a backup from each master host in the cluster. +Only save a backup from a single control plane host (also known as the master host). Do not take a backup from each control plane host in the cluster. ==== .Prerequisites @@ -25,7 +25,7 @@ You can check whether the proxy is enabled by reviewing the output of `oc get pr .Procedure -. Start a debug session for a master node: +. Start a debug session for a control plane node: + [source,terminal] ---- @@ -74,7 +74,7 @@ Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db snapshot db and kube resources are successfully saved to /home/core/assets/backup ---- + -In this example, two files are created in the `/home/core/assets/backup/` directory on the master host: +In this example, two files are created in the `/home/core/assets/backup/` directory on the control plane host: * `snapshot_.db`: This file is the etcd snapshot. The `cluster-backup.sh` script confirms its validity. * `static_kuberesources_.tar.gz`: This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot. diff --git a/modules/builds-webhook-triggers.adoc b/modules/builds-webhook-triggers.adoc index fbea051a10..8e34863399 100644 --- a/modules/builds-webhook-triggers.adoc +++ b/modules/builds-webhook-triggers.adoc @@ -9,7 +9,7 @@ Webhook triggers allow you to trigger a new build by sending a request to the {p Currently, {product-title} webhooks only support the analogous versions of the push event for each of the Git-based Source Code Management (SCM) systems. All other event types are ignored. -When the push events are processed, the {product-title} master host confirms if the branch reference inside the event matches the branch reference in the corresponding `BuildConfig`. If so, it then checks out the exact commit reference noted in the webhook event on the {product-title} build. If they do not match, no build is triggered. +When the push events are processed, the {product-title} control plane host (also known as the master host) confirms if the branch reference inside the event matches the branch reference in the corresponding `BuildConfig`. If so, it then checks out the exact commit reference noted in the webhook event on the {product-title} build. If they do not match, no build is triggered. 
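+For reference, a minimal webhook trigger sketch on a `BuildConfig` might look like the following, where the secret name `mysecret` is a placeholder:
+
+[source,yaml]
+----
+spec:
+  triggers:
+  - type: "GitHub"
+    github:
+      secretReference:
+        name: "mysecret" <1>
+----
+<1> References a secret in the same namespace; its value becomes part of the webhook URL that the SCM system calls.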
[NOTE] ==== diff --git a/modules/cluster-logging-deploy-cli.adoc b/modules/cluster-logging-deploy-cli.adoc index ad4622764e..253576b032 100644 --- a/modules/cluster-logging-deploy-cli.adoc +++ b/modules/cluster-logging-deploy-cli.adoc @@ -397,7 +397,7 @@ However, an unmanaged deployment does not receive updates until OpenShift Loggin [NOTE] + ==== -The maximum number of Elasticsearch master nodes is three. If you specify a `nodeCount` greater than `3`, {product-title} creates three Elasticsearch nodes that are Master-eligible nodes, with the master, client, and data roles. The additional Elasticsearch nodes are created as Data-only nodes, using client and data roles. Master nodes perform cluster-wide actions such as creating or deleting an index, shard allocation, and tracking nodes. Data nodes hold the shards and perform data-related operations such as CRUD, search, and aggregations. Data-related operations are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and to add more Data nodes if the current nodes are overloaded. +The maximum number of Elasticsearch control plane nodes is three. If you specify a `nodeCount` greater than `3`, {product-title} creates three Elasticsearch nodes that are Master-eligible nodes, with the master, client, and data roles. The additional Elasticsearch nodes are created as Data-only nodes, using client and data roles. Control plane nodes perform cluster-wide actions such as creating or deleting an index, shard allocation, and tracking nodes. Data nodes hold the shards and perform data-related operations such as CRUD, search, and aggregations. Data-related operations are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and to add more Data nodes if the current nodes are overloaded. For example, if `nodeCount=4`, the following nodes are created: diff --git a/modules/cluster-logging-deploy-console.adoc b/modules/cluster-logging-deploy-console.adoc index 4e1544a343..f23fa8b52f 100644 --- a/modules/cluster-logging-deploy-console.adoc +++ b/modules/cluster-logging-deploy-console.adoc @@ -235,7 +235,7 @@ However, an unmanaged deployment does not receive updates until OpenShift Loggin [NOTE] + ==== -The maximum number of Elasticsearch master nodes is three. If you specify a `nodeCount` greater than `3`, {product-title} creates three Elasticsearch nodes that are Master-eligible nodes, with the master, client, and data roles. The additional Elasticsearch nodes are created as Data-only nodes, using client and data roles. Master nodes perform cluster-wide actions such as creating or deleting an index, shard allocation, and tracking nodes. Data nodes hold the shards and perform data-related operations such as CRUD, search, and aggregations. Data-related operations are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and to add more Data nodes if the current nodes are overloaded. +The maximum number of Elasticsearch control plane nodes (also known as the master nodes) is three. If you specify a `nodeCount` greater than `3`, {product-title} creates three Elasticsearch nodes that are Master-eligible nodes, with the master, client, and data roles. The additional Elasticsearch nodes are created as Data-only nodes, using client and data roles. Control plane nodes perform cluster-wide actions such as creating or deleting an index, shard allocation, and tracking nodes. Data nodes hold the shards and perform data-related operations such as CRUD, search, and aggregations. 
Data-related operations are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and to add more Data nodes if the current nodes are overloaded. For example, if `nodeCount=4`, the following nodes are created: diff --git a/modules/cluster-logging-log-store-status-viewing.adoc b/modules/cluster-logging-log-store-status-viewing.adoc index 618f5d387c..6d0666e588 100644 --- a/modules/cluster-logging-log-store-status-viewing.adoc +++ b/modules/cluster-logging-log-store-status-viewing.adoc @@ -202,7 +202,7 @@ status: type: InvalidRedundancy ---- -This status message indicates your cluster has too many master nodes: +This status message indicates your cluster has too many control plane nodes (also known as the master nodes): [source,yaml] ---- diff --git a/modules/creating-an-infra-node.adoc b/modules/creating-an-infra-node.adoc index d37d570d25..857ad35e72 100644 --- a/modules/creating-an-infra-node.adoc +++ b/modules/creating-an-infra-node.adoc @@ -7,7 +7,7 @@ [IMPORTANT] ==== -See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the master nodes are managed by the machine API. +See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes (also known as the master nodes) are managed by the machine API. ==== Requirements of the cluster dictate that infrastructure, also called `infra` nodes, be provisioned. The installer only provides provisions for master and worker nodes. Worker nodes can be designated as infrastructure nodes or application, also called `app`, nodes through labeling. diff --git a/modules/determining-where-installation-issues-occur.adoc b/modules/determining-where-installation-issues-occur.adoc index ba22dd8ae4..2fa5489305 100644 --- a/modules/determining-where-installation-issues-occur.adoc +++ b/modules/determining-where-installation-issues-occur.adoc @@ -11,15 +11,15 @@ When troubleshooting {product-title} installation issues, you can monitor instal . Ignition configuration files are created. -. The bootstrap machine boots and starts hosting the remote resources required for the master machines to boot. +. The bootstrap machine boots and starts hosting the remote resources required for the control plane machines (also known as the master machines) to boot. -. The master machines fetch the remote resources from the bootstrap machine and finish booting. +. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. -. The master machines use the bootstrap machine to form an etcd cluster. +. The control plane machines use the bootstrap machine to form an etcd cluster. . The bootstrap machine starts a temporary Kubernetes control plane using the new etcd cluster. -. The temporary control plane schedules the production control plane to the master machines. +. The temporary control plane schedules the production control plane to the control plane machines. . The temporary control plane shuts down and passes control to the production control plane. diff --git a/modules/dr-restoring-cluster-state.adoc b/modules/dr-restoring-cluster-state.adoc index 74f48c5268..5e448c3b41 100644 --- a/modules/dr-restoring-cluster-state.adoc +++ b/modules/dr-restoring-cluster-state.adoc @@ -7,7 +7,7 @@ [id="dr-scenario-2-restoring-cluster-state_{context}"] = Restoring to a previous cluster state -You can use a saved etcd backup to restore back to a previous cluster state. 
You use the etcd backup to restore a single control plane host. Then the etcd cluster Operator handles scaling to the remaining master hosts. +You can use a saved etcd backup to restore back to a previous cluster state. You use the etcd backup to restore a single control plane host. Then the etcd cluster Operator handles scaling to the remaining control plane hosts (also known as the master hosts). [IMPORTANT] ==== @@ -17,8 +17,8 @@ When you restore your cluster, you must use an etcd backup that was taken from t .Prerequisites * Access to the cluster as a user with the `cluster-admin` role. -* A healthy master host to use as the recovery host. -* SSH access to master hosts. +* A healthy control plane host to use as the recovery host. +* SSH access to control plane hosts. * A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: `snapshot_.db` and `static_kuberesources_.tar.gz`. .Procedure @@ -31,7 +31,7 @@ The Kubernetes API server becomes inaccessible after the restore process starts, + [IMPORTANT] ==== -If you do not complete this step, you will not be able to access the master hosts to complete the restore procedure, and you will be unable to recover your cluster from this state. +If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state. ==== . Copy the etcd backup directory to the recovery control plane host. @@ -86,7 +86,7 @@ The output of this command should be empty. If it is not empty, wait a few minut [core@ip-10-0-154-194 ~]$ sudo mv /var/lib/etcd/ /tmp ---- -.. Repeat this step on each of the other master hosts that is not the recovery host. +.. Repeat this step on each of the other control plane hosts that is not the recovery host. . Access the recovery control plane host. @@ -134,7 +134,7 @@ starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml ---- -. Restart the kubelet service on all master hosts. +. Restart the kubelet service on all control plane hosts. .. From the recovery host, run the following command: + @@ -143,7 +143,7 @@ static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml [core@ip-10-0-143-125 ~]$ sudo systemctl restart kubelet.service ---- -.. Repeat this step on all other master hosts. +.. Repeat this step on all other control plane hosts. . Verify that the single member control plane has started successfully. @@ -305,7 +305,7 @@ AllNodesAtLatestRevision + If the output includes multiple revision numbers, such as `2 nodes are at revision 6; 1 nodes are at revision 7`, this means that the update is still in progress. Wait a few minutes and try again. -. Verify that all master hosts have started and joined the cluster. +. Verify that all control plane hosts have started and joined the cluster. + In a terminal that has access to the cluster as a `cluster-admin` user, run the following command: + diff --git a/modules/file-integrity-important-attributes.adoc b/modules/file-integrity-important-attributes.adoc index da0428c6b2..225512bc2f 100644 --- a/modules/file-integrity-important-attributes.adoc +++ b/modules/file-integrity-important-attributes.adoc @@ -25,7 +25,7 @@ pods would output extra information. |`spec.tolerations` |Specify tolerations to schedule on nodes with custom taints. 
When not specified, -a default toleration is applied, which allows tolerations to run on master nodes. +a default toleration is applied, which allows tolerations to run on control plane nodes (also known as the master nodes). |`spec.config.gracePeriod` |The number of seconds to pause in between AIDE integrity checks. Frequent AIDE diff --git a/modules/file-integrity-operator-defining-custom-config.adoc b/modules/file-integrity-operator-defining-custom-config.adoc index 52d4c6a8f4..8c14f4a74f 100644 --- a/modules/file-integrity-operator-defining-custom-config.adoc +++ b/modules/file-integrity-operator-defining-custom-config.adoc @@ -6,10 +6,10 @@ = Defining a custom File Integrity Operator configuration This example focuses on defining a custom configuration for a scanner that runs -on the master nodes based on the default configuration provided for the +on the control plane nodes (also known as the master nodes) based on the default configuration provided for the `worker-fileintegrity` CR. This workflow might be useful if you are planning to deploy a custom software running as a daemon set and storing its data under -`/opt/mydaemon` on the master nodes. +`/opt/mydaemon` on the control plane nodes. .Procedure @@ -49,7 +49,7 @@ $ vim aide.conf !/hostroot/etc/openvswitch/conf.db ---- + -Exclude a path specific to master nodes: +Exclude a path specific to control plane nodes: + [source,terminal] ---- diff --git a/modules/gathering-bootstrap-diagnostic-data.adoc b/modules/gathering-bootstrap-diagnostic-data.adoc index e5f7ef789c..7d775e3937 100644 --- a/modules/gathering-bootstrap-diagnostic-data.adoc +++ b/modules/gathering-bootstrap-diagnostic-data.adoc @@ -57,7 +57,7 @@ $ ssh core@ journalctl -b -f -u bootkube.service + [NOTE] ==== -The `bootkube.service` log on the bootstrap node outputs etcd `connection refused` errors, indicating that the bootstrap server is unable to connect to etcd on master nodes. After etcd has started on each master node and the nodes have joined the cluster, the errors should stop. +The `bootkube.service` log on the bootstrap node outputs etcd `connection refused` errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes (also known as the master nodes). After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. ==== + . Collect logs from the bootstrap node containers. @@ -71,4 +71,4 @@ $ ssh core@ 'for pod in $(sudo podman ps -a -q); do sudo podman . If the bootstrap process fails, verify the following. + * You can resolve `api..` from the installation host. -* The load balancer proxies port 6443 connections to bootstrap and master nodes. Ensure that the proxy configuration meets {product-title} installation requirements. +* The load balancer proxies port 6443 connections to bootstrap and control plane nodes. Ensure that the proxy configuration meets {product-title} installation requirements. diff --git a/modules/gathering-crio-logs.adoc b/modules/gathering-crio-logs.adoc index 5fed36502c..4d5494d825 100644 --- a/modules/gathering-crio-logs.adoc +++ b/modules/gathering-crio-logs.adoc @@ -12,11 +12,11 @@ If you experience CRI-O issues, you can obtain CRI-O journald unit logs from a n * You have access to the cluster as a user with the `cluster-admin` role. * Your API service is still functional. * You have installed the OpenShift CLI (`oc`). -* You have the fully qualified domain names of the control plane, or master machines. 
+* You have the fully qualified domain names of the control plane machines (also known as the master machines). .Procedure -. Gather CRI-O journald unit logs. The following example collects logs from all master nodes within the cluster: +. Gather CRI-O journald unit logs. The following example collects logs from all control plane nodes within the cluster: + [source,terminal] ---- diff --git a/modules/gathering-operator-logs.adoc b/modules/gathering-operator-logs.adoc index db29db0f34..cf7adf0632 100644 --- a/modules/gathering-operator-logs.adoc +++ b/modules/gathering-operator-logs.adoc @@ -12,7 +12,7 @@ If you experience Operator issues, you can gather detailed diagnostic informatio * You have access to the cluster as a user with the `cluster-admin` role. * Your API service is still functional. * You have installed the OpenShift CLI (`oc`). -* You have the fully qualified domain names of the control plane, or master machines. +* You have the fully qualified domain names of the control plane machines (also known as the master machines). .Procedure @@ -37,8 +37,8 @@ If an Operator pod has multiple containers, the preceding command will produce a $ oc logs pod/ -c -n ---- -. If the API is not functional, review Operator pod and container logs on each master node by using SSH instead. Replace `..` with appropriate values. -.. List pods on each master node: +. If the API is not functional, review Operator pod and container logs on each control plane node by using SSH instead. Replace `..` with appropriate values. +.. List pods on each control plane node: + [source,terminal] ---- diff --git a/modules/graceful-restart.adoc b/modules/graceful-restart.adoc index f03abf6f26..5b75b25573 100644 --- a/modules/graceful-restart.adoc +++ b/modules/graceful-restart.adoc @@ -20,16 +20,16 @@ You can restart your cluster after it has been shut down gracefully. + Use the appropriate method for your cloud environment to start the machines, for example, from your cloud provider's web console. + -Wait approximately 10 minutes before continuing to check the status of master nodes. +Wait approximately 10 minutes before continuing to check the status of control plane nodes (also known as the master nodes). -. Verify that all master nodes are ready. +. Verify that all control plane nodes are ready. + [source,terminal] ---- $ oc get nodes -l node-role.kubernetes.io/master ---- + -The master nodes are ready if the status is `Ready`, as shown in the following output: +The control plane nodes are ready if the status is `Ready`, as shown in the following output: + [source,terminal] ---- @@ -39,7 +39,7 @@ ip-10-0-170-223.ec2.internal Ready master 75m v1.21.0 ip-10-0-211-16.ec2.internal Ready master 75m v1.21.0 ---- -. If the master nodes are _not_ ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved. +. If the control plane nodes are _not_ ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved. .. Get the list of current CSRs: + @@ -63,7 +63,7 @@ $ oc describe csr <1> $ oc adm certificate approve ---- -. After the master nodes are ready, verify that all worker nodes are ready. +. After the control plane nodes are ready, verify that all worker nodes are ready.
+ [source,terminal] ---- diff --git a/modules/graceful-shutdown.adoc b/modules/graceful-shutdown.adoc index aae8997719..5c43868cd8 100644 --- a/modules/graceful-shutdown.adoc +++ b/modules/graceful-shutdown.adoc @@ -43,9 +43,9 @@ Shutting down the nodes using one of these methods allows pods to terminate grac + [NOTE] ==== -It is not necessary to drain master nodes of the standard pods that ship with {product-title} prior to shutdown. +It is not necessary to drain control plane nodes (also known as the master nodes) of the standard pods that ship with {product-title} prior to shutdown. -Cluster administrators are responsible for ensuring a clean restart of their own workloads after the cluster is restarted. If you drained master nodes prior to shutdown because of custom workloads, you must mark the master nodes as schedulable before the cluster will be functional again after restart. +Cluster administrators are responsible for ensuring a clean restart of their own workloads after the cluster is restarted. If you drained control plane nodes prior to shutdown because of custom workloads, you must mark the control plane nodes as schedulable before the cluster will be functional again after restart. ==== . Shut off any cluster dependencies that are no longer needed, such as external storage or an LDAP server. Be sure to consult your vendor's documentation before doing so. diff --git a/modules/installation-aws-limits.adoc b/modules/installation-aws-limits.adoc index 0a687a144c..1926e19cdc 100644 --- a/modules/installation-aws-limits.adoc +++ b/modules/installation-aws-limits.adoc @@ -26,7 +26,7 @@ ability to install and run {product-title} clusters. |By default, each cluster creates the following instances: * One bootstrap machine, which is removed after installation -* Three master nodes +* Three control plane nodes (also known as the master nodes) * Three worker nodes These instance type counts are within a new account's default limit. To deploy @@ -35,7 +35,7 @@ different instance type, review your account limits to ensure that your cluster can deploy the machines that you need. In most regions, the bootstrap and worker machines uses an `m4.large` machines -and the master machines use `m4.xlarge` instances. In some regions, including +and the control plane machines use `m4.xlarge` instances. In some regions, including all regions that do not support these instance types, `m5.large` and `m5.xlarge` instances are used instead. diff --git a/modules/installation-aws-user-infra-requirements.adoc b/modules/installation-aws-user-infra-requirements.adoc index 6be3a82a6e..8aa5c40cf4 100644 --- a/modules/installation-aws-user-infra-requirements.adoc +++ b/modules/installation-aws-user-infra-requirements.adoc @@ -138,7 +138,7 @@ balancer. The cluster also requires load balancers and listeners for port 6443, which are required for the Kubernetes API and its extensions, and port 22623, which are required for the Ignition config files for new machines. The targets will be the -master nodes. Port 6443 must be accessible to both clients external to the +control plane nodes (also known as the master nodes). Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster. 
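+As an illustrative sketch only, assuming an AWS Network Load Balancer and placeholder ARNs (on the user-provisioned AWS path, the provided CloudFormation templates typically create equivalent resources for you), the two listeners could be defined as follows, with the bootstrap and control plane machines registered in the target groups:
+
+[source,terminal]
+----
+$ aws elbv2 create-listener --load-balancer-arn <nlb_arn> --protocol TCP --port 6443 \
+    --default-actions Type=forward,TargetGroupArn=<api_target_group_arn>
+$ aws elbv2 create-listener --load-balancer-arn <nlb_arn> --protocol TCP --port 22623 \
+    --default-actions Type=forward,TargetGroupArn=<machine_config_target_group_arn>
+----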
diff --git a/modules/installation-azure-config-yaml.adoc b/modules/installation-azure-config-yaml.adoc index 2b1b17b713..26177eb4ee 100644 --- a/modules/installation-azure-config-yaml.adoc +++ b/modules/installation-azure-config-yaml.adoc @@ -179,7 +179,7 @@ endif::gov[] ==== If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as `Standard_D8s_v3`, for your machines if you disable simultaneous multithreading. ==== -<5> You can specify the size of the disk to use in GB. Minimum recommendation for master nodes is 1024 GB. +<5> You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes (also known as the master nodes) is 1024 GB. //To configure faster storage for etcd, especially for larger clusters, set the //storage type as `io1` and set `iops` to `2000`. <6> Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. diff --git a/modules/installation-bootstrap-gather.adoc b/modules/installation-bootstrap-gather.adoc index 049fd8f1a5..c3cf6bc089 100644 --- a/modules/installation-bootstrap-gather.adoc +++ b/modules/installation-bootstrap-gather.adoc @@ -20,7 +20,7 @@ running cluster, use the `oc adm must-gather` command. * Your {product-title} installation failed before the bootstrap process finished. The bootstrap node is running and accessible through SSH. * The `ssh-agent` process is active on your computer, and you provided the same SSH key to both the `ssh-agent` process and the installation program. -* If you tried to install a cluster on infrastructure that you provisioned, you must have the fully qualified domain names of the bootstrap and master nodes. +* If you tried to install a cluster on infrastructure that you provisioned, you must have the fully qualified domain names of the bootstrap and control plane nodes (also known as the master nodes). .Procedure diff --git a/modules/installation-common-issues.adoc b/modules/installation-common-issues.adoc index 453f7d296e..5c933c6a78 100644 --- a/modules/installation-common-issues.adoc +++ b/modules/installation-common-issues.adoc @@ -11,7 +11,7 @@ Here are some common issues you might encounter, along with proposed causes and == CPU load increases and nodes go into a `Not Ready` state * *Symptom*: CPU load increases significantly and nodes start going into a `Not Ready` state. -* *Cause*: The storage domain latency might be too high, especially for master nodes. +* *Cause*: The storage domain latency might be too high, especially for control plane nodes (also known as the master nodes). * *Solution*: + Make the nodes ready again by restarting the kubelet service: diff --git a/modules/installation-creating-aws-control-plane.adoc b/modules/installation-creating-aws-control-plane.adoc index cd08d75bca..f4e3a395d5 100644 --- a/modules/installation-creating-aws-control-plane.adoc +++ b/modules/installation-creating-aws-control-plane.adoc @@ -135,7 +135,7 @@ displayed in the AWS console. <11> A subnet, preferably private, to launch the control plane machines on. <12> Specify a subnet from the `PrivateSubnets` value from the output of the CloudFormation template for DNS and load balancing. -<13> The master security group ID to associate with master nodes. +<13> The master security group ID to associate with control plane nodes (also known as the master nodes). 
<14> Specify the `MasterSecurityGroupId` value from the output of the CloudFormation template for the security group and roles. <15> The location to fetch control plane Ignition config file from. @@ -145,7 +145,7 @@ CloudFormation template for the security group and roles. <18> Specify the value from the `master.ign` file that is in the installation directory. This value is the long string with the format `data:text/plain;charset=utf-8;base64,ABC...xYz==`. -<19> The IAM profile to associate with master nodes. +<19> The IAM profile to associate with control plane nodes. <20> Specify the `MasterInstanceProfile` parameter value from the output of the CloudFormation template for the security group and roles. <21> The type of AWS instance to use for the control plane machines. diff --git a/modules/installation-creating-azure-control-plane.adoc b/modules/installation-creating-azure-control-plane.adoc index f019e1a595..ac1bd224ef 100644 --- a/modules/installation-creating-azure-control-plane.adoc +++ b/modules/installation-creating-azure-control-plane.adoc @@ -50,7 +50,7 @@ $ az deployment group create -g ${RESOURCE_GROUP} \ --parameters privateDNSZoneName="${CLUSTER_NAME}.${BASE_DOMAIN}" \ <3> --parameters baseName="${INFRA_ID}"<4> ---- -<1> The Ignition content for the master nodes. +<1> The Ignition content for the control plane nodes (also known as the master nodes). <2> The SSH RSA public key file as a string. -<3> The name of the private DNS zone to which the master nodes are attached. +<3> The name of the private DNS zone to which the control plane nodes are attached. <4> The base name to be used in resource names; this is usually the cluster's infrastructure ID. diff --git a/modules/installation-dns-user-infra.adoc b/modules/installation-dns-user-infra.adoc index 00eda8eb18..7e7718d0ea 100644 --- a/modules/installation-dns-user-infra.adoc +++ b/modules/installation-dns-user-infra.adoc @@ -92,7 +92,7 @@ machine. These records must be resolvable by the nodes within the cluster. |Control plane machines |`...` |DNS A/AAAA or CNAME records and DNS PTR records to identify each machine -for the master nodes. These records must be resolvable by the nodes within the cluster. +for the control plane nodes (also known as the master nodes). These records must be resolvable by the nodes within the cluster. |Compute machines |`...` diff --git a/modules/installation-process.adoc b/modules/installation-process.adoc index ec3825e631..34a63881ed 100644 --- a/modules/installation-process.adoc +++ b/modules/installation-process.adoc @@ -68,7 +68,7 @@ If your cluster uses user-provisioned infrastructure, you have the option of add [discrete] == Installation process details -Because each machine in the cluster requires information about the cluster when it is provisioned, {product-title} uses a temporary _bootstrap_ machine during initial configuration to provide the required information to the permanent control plane. It boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the master machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process: +Because each machine in the cluster requires information about the cluster when it is provisioned, {product-title} uses a temporary _bootstrap_ machine during initial configuration to provide the required information to the permanent control plane. 
It boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines (also known as the master machines) that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process: .Creating the bootstrap, master, and worker machines image::create-nodes.png[Creating bootstrap, master, and worker machines] diff --git a/modules/installation-rhv-creating-bootstrap-machine.adoc b/modules/installation-rhv-creating-bootstrap-machine.adoc index b44482dccb..8182b76e35 100644 --- a/modules/installation-rhv-creating-bootstrap-machine.adoc +++ b/modules/installation-rhv-creating-bootstrap-machine.adoc @@ -35,5 +35,5 @@ $ ssh core@ + [NOTE] ==== -The `bootkube.service` log on the bootstrap node outputs etcd `connection refused` errors, indicating that the bootstrap server is unable to connect to etcd on master nodes. After etcd has started on each master node and the nodes have joined the cluster, the errors should stop. +The `bootkube.service` log on the bootstrap node outputs etcd `connection refused` errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes (also known as the master nodes). After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. ==== diff --git a/modules/installation-special-config-kargs.adoc b/modules/installation-special-config-kargs.adoc index 265f0f80ae..4c9fdf61c2 100644 --- a/modules/installation-special-config-kargs.adoc +++ b/modules/installation-special-config-kargs.adoc @@ -32,12 +32,12 @@ It is best to only add kernel arguments with this procedure if they are needed t $ ./openshift-install create manifests --dir= ---- -. Decide if you want to add kernel arguments to worker or master nodes. +. Decide if you want to add kernel arguments to worker or control plane nodes (also known as the master nodes). . In the `openshift` directory, create a file (for example, `99-openshift-machineconfig-master-kargs.yaml`) to define a `MachineConfig` object to add the kernel settings. -This example adds a `loglevel=7` kernel argument to master nodes: +This example adds a `loglevel=7` kernel argument to control plane nodes: + [source,terminal] ---- diff --git a/modules/installation-special-config-rtkernel.adoc b/modules/installation-special-config-rtkernel.adoc index 8e31bc3e8d..1f643772bd 100644 --- a/modules/installation-special-config-rtkernel.adoc +++ b/modules/installation-special-config-rtkernel.adoc @@ -12,7 +12,7 @@ kernel includes a preemptive scheduler that provides the operating system with real-time characteristics. If your {product-title} workloads require these real-time characteristics, -you can set up your compute (worker) and/or master machines to use the +you can set up your compute (worker) and/or control plane machines (also known as the master machines) to use the Linux real-time kernel when you first install the cluster. To do this, create a `MachineConfig` object and inject that object into the set of manifest files used by Ignition during cluster setup, as described in the following @@ -47,7 +47,7 @@ $ ./openshift-install create install-config --dir= $ ./openshift-install create manifests --dir= ---- -. Decide if you want to add the real-time kernel to worker or master nodes. +. Decide if you want to add the real-time kernel to worker or control plane nodes. . 
In the `openshift` directory, create a file (for example, `99-worker-realtime.yaml`) to define a `MachineConfig` object that applies a @@ -67,7 +67,7 @@ spec: EOF ---- + -You can change `worker` to `master` to add kernel arguments to master nodes instead. +You can change `worker` to `master` to add kernel arguments to control plane nodes instead. Create a separate YAML file to add to both master and worker nodes. . Create the cluster. You can now continue on to create the {product-title} cluster. @@ -79,7 +79,7 @@ $ ./openshift-install create cluster --dir= . Check the real-time kernel: Once the cluster comes up, log in to the cluster and run the following commands to make sure that the real-time kernel has -replaced the regular kernel for the set of worker or master nodes you +replaced the regular kernel for the set of worker or control plane nodes you configured: + [source,terminal] diff --git a/modules/investigating-etcd-installation-issues.adoc b/modules/investigating-etcd-installation-issues.adoc index 4ad393e063..a963aa84ad 100644 --- a/modules/investigating-etcd-installation-issues.adoc +++ b/modules/investigating-etcd-installation-issues.adoc @@ -5,14 +5,14 @@ [id="investigating-etcd-installation-issues_{context}"] = Investigating etcd installation issues -If you experience etcd issues during installation, you can check etcd pod status and collect etcd pod logs. You can also verify etcd DNS records and check DNS availability on master nodes. +If you experience etcd issues during installation, you can check etcd pod status and collect etcd pod logs. You can also verify etcd DNS records and check DNS availability on control plane nodes (also known as the master nodes). .Prerequisites * You have access to the cluster as a user with the `cluster-admin` role. * You have installed the OpenShift CLI (`oc`). * You have SSH access to your hosts. -* You have the fully qualified domain names of the master nodes. +* You have the fully qualified domain names of the control plane nodes. .Procedure @@ -53,8 +53,8 @@ $ oc logs pod/ -n $ oc logs pod/ -c -n ---- -. If the API is not functional, review etcd pod and container logs on each master node by using SSH instead. Replace `..` with appropriate values. -.. List etcd pods on each master node: +. If the API is not functional, review etcd pod and container logs on each control plane node by using SSH instead. Replace `..` with appropriate values. +.. List etcd pods on each control plane node: + [source,terminal] ---- @@ -100,4 +100,4 @@ $ ssh core@.. sudo crictl logs -f ..`. ==== + -. Validate primary and secondary DNS server connectivity from master nodes. +. Validate primary and secondary DNS server connectivity from control plane nodes. diff --git a/modules/investigating-kubelet-api-installation-issues.adoc b/modules/investigating-kubelet-api-installation-issues.adoc index 88211d5729..a8bc1abb6c 100644 --- a/modules/investigating-kubelet-api-installation-issues.adoc +++ b/modules/investigating-kubelet-api-installation-issues.adoc @@ -3,26 +3,26 @@ // * support/troubleshooting/troubleshooting-installations.adoc [id="investigating-kubelet-api-installation-issues_{context}"] -= Investigating master node kubelet and API server issues += Investigating control plane node kubelet and API server issues -To investigate master node kubelet and API server issues during installation, check DNS, DHCP, and load balancer functionality. Also, verify that certificates have not expired. 
+To investigate control plane node (also known as the master node) kubelet and API server issues during installation, check DNS, DHCP, and load balancer functionality. Also, verify that certificates have not expired. .Prerequisites * You have access to the cluster as a user with the `cluster-admin` role. * You have installed the OpenShift CLI (`oc`). * You have SSH access to your hosts. -* You have the fully qualified domain names of the master nodes. +* You have the fully qualified domain names of the control plane nodes. .Procedure -. Verify that the API server's DNS record directs the kubelet on master nodes to [x-]`https://api-int..:6443`. Ensure that the record references the load balancer. +. Verify that the API server's DNS record directs the kubelet on control plane nodes to [x-]`https://api-int..:6443`. Ensure that the record references the load balancer. -. Ensure that the load balancer's port 6443 definition references each master node. +. Ensure that the load balancer's port 6443 definition references each control plane node. -. Check that unique master node host names have been provided by DHCP. +. Check that unique control plane node host names have been provided by DHCP. -. Inspect the `kubelet.service` journald unit logs on each master node. +. Inspect the `kubelet.service` journald unit logs on each control plane node. .. Retrieve the logs using `oc`: + [source,terminal] ---- @@ -42,7 +42,7 @@ $ ssh core@.. journalctl -b -f -u kubele {product-title} {product-version} cluster nodes running {op-system-first} are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as _accessed_. Before attempting to collect diagnostic data over SSH, review whether the data collected by running `oc adm must gather` and other `oc` commands is sufficient instead. However, if the {product-title} API is not available, or the kubelet is not properly functioning on the target node, `oc` operations will be impacted. In such situations, it is possible to access nodes using `ssh core@..`. ==== + -. Check for certificate expiration messages in the master node kubelet logs. +. Check for certificate expiration messages in the control plane node kubelet logs. .. Retrieve the log using `oc`: + [source,terminal] diff --git a/modules/investigating-master-node-installation-issues.adoc b/modules/investigating-master-node-installation-issues.adoc index 96445e8d01..9248658510 100644 --- a/modules/investigating-master-node-installation-issues.adoc +++ b/modules/investigating-master-node-installation-issues.adoc @@ -3,16 +3,16 @@ // * support/troubleshooting/troubleshooting-installations.adoc [id="investigating-master-node-installation-issues_{context}"] -= Investigating master node installation issues += Investigating control plane node installation issues -If you experience master node installation issues, determine the master node, {product-title} software defined network (SDN), and network Operator status. Collect `kubelet.service`, `crio.service` journald unit logs, and master node container logs for visibility into master node agent, CRI-O container runtime, and pod activity. +If you experience control plane node (also known as the master node) installation issues, determine the control plane node, {product-title} software defined network (SDN), and network Operator status.
Collect `kubelet.service`, `crio.service` journald unit logs, and control plane node container logs for visibility into control plane node agent, CRI-O container runtime, and pod activity. .Prerequisites * You have access to the cluster as a user with the `cluster-admin` role. * You have installed the OpenShift CLI (`oc`). * You have SSH access to your hosts. -* You have the fully qualified domain names of the bootstrap and master nodes. +* You have the fully qualified domain names of the bootstrap and control plane nodes. * If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server's fully qualified domain name and the port number. You must also have SSH access to the HTTP host. + [NOTE] @@ -22,13 +22,13 @@ The initial `kubeadmin` password can be found in `/auth/kubea .Procedure -. If you have access to the master node's console, monitor the console until the node reaches the login prompt. During the installation, Ignition log messages are output to the console. +. If you have access to the console for the control plane node, monitor the console until the node reaches the login prompt. During the installation, Ignition log messages are output to the console. . Verify Ignition file configuration. + * If you are hosting Ignition configuration files by using an HTTP server. + -.. Verify the master node Ignition file URL. Replace `` with HTTP server's fully qualified domain name: +.. Verify the control plane node Ignition file URL. Replace `` with HTTP server's fully qualified domain name: + [source,terminal] ---- @@ -36,7 +36,7 @@ $ curl -I http://:/master.ign <1> ---- <1> The `-I` option returns the header only. If the Ignition file is available on the specified URL, the command returns `200 OK` status. If it is not available, the command returns `404 file not found`. + -.. To verify that the Ignition file was received by the master node, query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files: +.. To verify that the Ignition file was received by the control plane node query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files: + [source,terminal] ---- @@ -49,21 +49,21 @@ If the master Ignition file is received, the associated `HTTP GET` log message w + * If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment. + -.. Review the master node's console to determine if the mechanism is injecting the master node Ignition file correctly. +.. Review the console for the control plane node to determine if the mechanism is injecting the control plane node Ignition file correctly. -. Check the availability of the master node's assigned storage device. +. Check the availability of the storage device assigned to the control plane node. -. Verify that the master node has been assigned an IP address from the DHCP server. +. Verify that the control plane node has been assigned an IP address from the DHCP server. -. Determine master node status. -.. Query master node status: +. Determine control plane node status. +.. Query control plane node status: + [source,terminal] ---- $ oc get nodes ---- + -.. If one of the master nodes does not reach a `Ready` status, retrieve a detailed node description: +.. 
If one of the control plane nodes does not reach a `Ready` status, retrieve a detailed node description: + [source,terminal] ---- @@ -127,7 +127,7 @@ $ oc get pods -n openshift-network-operator $ oc logs pod/ -n openshift-network-operator ---- -. Monitor `kubelet.service` journald unit logs on master nodes, after they have booted. This provides visibility into master node agent activity. +. Monitor `kubelet.service` journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node agent activity. .. Retrieve the logs using `oc`: + [source,terminal] @@ -147,7 +147,7 @@ $ ssh core@.. journalctl -b -f -u kubele {product-title} {product-version} cluster nodes running {op-system-first} are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as _accessed_. Before attempting to collect diagnostic data over SSH, review whether the data collected by running `oc adm must gather` and other `oc` commands is sufficient instead. However, if the {product-title} API is not available, or the kubelet is not properly functioning on the target node, `oc` operations will be impacted. In such situations, it is possible to access nodes using `ssh core@..`. ==== + -. Retrieve `crio.service` journald unit logs on master nodes, after they have booted. This provides visibility into master node CRI-O container runtime activity. +. Retrieve `crio.service` journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node CRI-O container runtime activity. .. Retrieve the logs using `oc`: + [source,terminal] @@ -162,15 +162,15 @@ $ oc adm node-logs --role=master -u crio $ ssh core@.. journalctl -b -f -u crio.service ---- -. Collect logs from specific subdirectories under `/var/log/` on master nodes. -.. Retrieve a list of logs contained within a `/var/log/` subdirectory. The following example lists files in `/var/log/openshift-apiserver/` on all master nodes: +. Collect logs from specific subdirectories under `/var/log/` on control plane nodes. +.. Retrieve a list of logs contained within a `/var/log/` subdirectory. The following example lists files in `/var/log/openshift-apiserver/` on all control plane nodes: + [source,terminal] ---- $ oc adm node-logs --role=master --path=openshift-apiserver ---- + -.. Inspect a specific log within a `/var/log/` subdirectory. The following example outputs `/var/log/openshift-apiserver/audit.log` contents from all master nodes: +.. Inspect a specific log within a `/var/log/` subdirectory. The following example outputs `/var/log/openshift-apiserver/audit.log` contents from all control plane nodes: + [source,terminal] ---- @@ -184,7 +184,7 @@ $ oc adm node-logs --role=master --path=openshift-apiserver/audit.log $ ssh core@.. sudo tail -f /var/log/openshift-apiserver/audit.log ---- -. Review master node container logs using SSH. +. Review control plane node container logs using SSH. .. List the containers: + [source,terminal] @@ -199,7 +199,7 @@ $ ssh core@.. sudo crictl ps -a $ ssh core@.. sudo crictl logs -f ---- -. If you experience master node configuration issues, verify that the MCO, MCO endpoint, and DNS record are functioning. The Machine Config Operator (MCO) manages operating system configuration during the installation procedure. Also verify system clock accuracy and certificate validity. +. 
If you experience control plane node configuration issues, verify that the MCO, MCO endpoint, and DNS record are functioning. The Machine Config Operator (MCO) manages operating system configuration during the installation procedure. Also verify system clock accuracy and certificate validity. .. Test whether the MCO endpoint is available. Replace `` with appropriate values: + [source,terminal] diff --git a/modules/investigating-worker-node-installation-issues.adoc index 6086cedae3..f836496de2 100644 --- a/modules/investigating-worker-node-installation-issues.adoc +++ b/modules/investigating-worker-node-installation-issues.adoc @@ -75,7 +75,7 @@ $ oc describe node It is not possible to run `oc` commands if an installation issue prevents the {product-title} API from running or if the kubelet is not running yet on each node. ==== + -. Unlike master nodes, worker nodes are deployed and scaled using the Machine API Operator. Check the status of the Machine API Operator. +. Unlike control plane nodes (also known as the master nodes), worker nodes are deployed and scaled using the Machine API Operator. Check the status of the Machine API Operator. .. Review Machine API Operator pod status: + [source,terminal] diff --git a/modules/ipi-install-troubleshooting-cluster-nodes-will-not-pxe.adoc index c5cee83f3e..8368804fb3 100644 --- a/modules/ipi-install-troubleshooting-cluster-nodes-will-not-pxe.adoc +++ b/modules/ipi-install-troubleshooting-cluster-nodes-will-not-pxe.adoc @@ -15,11 +15,11 @@ When {product-title} cluster nodes will not PXE boot, execute the following chec . Verify that the `install-config.yaml` configuration file has the proper hardware profile and boot MAC address for the NIC connected to the `provisioning` network. For example: + -.Master node settings +.Control plane node settings + ---- bootMACAddress: 24:6E:96:1B:96:90 # MAC of bootable provisioning NIC -hardwareProfile: default #master node settings +hardwareProfile: default #control plane node settings ---- + .Worker node settings diff --git a/modules/ipi-install-troubleshooting-misc-issues.adoc index b37260607e..ac0f63a039 100644 --- a/modules/ipi-install-troubleshooting-misc-issues.adoc +++ b/modules/ipi-install-troubleshooting-misc-issues.adoc @@ -121,7 +121,7 @@ If the hostname is `localhost`, proceed with the following steps. + [NOTE] ==== -Where `X` is the master node number. +Where `X` is the control plane node (also known as the master node) number. ==== . Force the cluster node to renew the DHCP lease: diff --git a/modules/ldap-failover-generate-certs.adoc index f7eb537ec8..2afe11cd34 100644 --- a/modules/ldap-failover-generate-certs.adoc +++ b/modules/ldap-failover-generate-certs.adoc @@ -5,7 +5,7 @@ [id="sssd-generating-certificates_{context}"] = Generating and sharing certificates with the remote basic authentication server -Complete the following steps on the first master host listed in the Ansible host inventory file, +Complete the following steps on the first control plane host (also known as the master host) listed in the Ansible host inventory file, by default `/etc/ansible/hosts`.
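If you are not sure which inventory host that is, one quick way to check is to list the first entry of the inventory's control plane group. The following is a minimal sketch, not part of the original procedure; it assumes the default `masters` group name used by openshift-ansible inventories, and it only prints the matched host without running any tasks:

[source,terminal]
----
$ ansible 'masters[0]' -i /etc/ansible/hosts --list-hosts
----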
.Procedure diff --git a/modules/machine-config-overview.adoc index 08139e9481..acb42b004f 100644 --- a/modules/machine-config-overview.adoc +++ b/modules/machine-config-overview.adoc @@ -10,7 +10,7 @@ The Machine Config Operator (MCO) manages updates to systemd, CRI-O and Kubelet, * A machine config can make a specific change to a file or service on the operating system of each system representing a pool of {product-title} nodes. -* MCO applies changes to operating systems in pools of machines. All {product-title} clusters start with worker and master node pools. By adding more role labels, you can configure custom pools of nodes. For example, you can set up a custom pool of worker nodes that includes particular hardware features needed by an application. However, examples in this section focus on changes to the default pool types. +* MCO applies changes to operating systems in pools of machines. All {product-title} clusters start with worker and control plane node (also known as the master node) pools. By adding more role labels, you can configure custom pools of nodes. For example, you can set up a custom pool of worker nodes that includes particular hardware features needed by an application. However, examples in this section focus on changes to the default pool types. + [IMPORTANT] ==== diff --git a/modules/master-node-sizing.adoc index 33f7c7e917..81b5ae73ff 100644 --- a/modules/master-node-sizing.adoc +++ b/modules/master-node-sizing.adoc @@ -40,14 +40,14 @@ The control plane node resource requirements depend on the number of nodes in th |=== -On a cluster with three masters or control plane nodes, the CPU and memory usage will spike up when one of the nodes is stopped, rebooted or fails because the remaining two nodes must handle the load in order to be highly available. This is also expected during upgrades because the masters are cordoned, drained, and rebooted serially to apply the operating system updates, as well as the control plane Operators update. To avoid cascading failures on large and dense clusters, keep the overall resource usage on the master nodes to at most half of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the master nodes accordingly. +On a cluster with three control plane nodes (also known as the master nodes), the CPU and memory usage spikes when one of the nodes is stopped, rebooted, or fails, because the remaining two nodes must handle the load in order to be highly available. This is also expected during upgrades because the control plane nodes are cordoned, drained, and rebooted serially to apply the operating system updates, as well as the control plane Operator updates. To avoid cascading failures on large and dense clusters, keep the overall resource usage on the control plane nodes to at most half of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the control plane nodes accordingly. [IMPORTANT] ==== The node sizing varies depending on the number of nodes and object counts in the cluster. It also depends on whether the objects are actively being created on the cluster. During object creation, the control plane is more active in terms of resource usage compared to when the objects are in the `running` phase.
==== -Operator Lifecycle Manager (OLM ) runs on the master nodes and it's memory footprint depends on the number of namespaces and user installed operators that OLM needs to manage on the cluster. Master nodes need to be sized accordingly to avoid OOM kills. Following data points are based on the results from cluster maximums testing. +Operator Lifecycle Manager (OLM) runs on the control plane nodes, and its memory footprint depends on the number of namespaces and user-installed Operators that OLM needs to manage on the cluster. Control plane nodes need to be sized accordingly to avoid OOM kills. The following data points are based on the results from cluster maximums testing. [options="header",cols="3*"] |=== diff --git a/modules/mgmt-power-remediation-baremetal-about.adoc index a5a28c2e8f..0024a2f0d4 100644 --- a/modules/mgmt-power-remediation-baremetal-about.adoc +++ b/modules/mgmt-power-remediation-baremetal-about.adoc @@ -43,7 +43,7 @@ The remediation process operates as follows: [NOTE] ==== -If the power operations did not complete, the bare metal machine controller triggers the reprovisioning of the unhealthy node unless this is a master node or a node that was provisioned externally. +If the power operations did not complete, the bare metal machine controller triggers the reprovisioning of the unhealthy node unless this is a control plane node (also known as the master node) or a node that was provisioned externally. ==== [id="mgmt-creating-mhc-baremetal_{context}"] @@ -110,4 +110,4 @@ The `matchLabels` are examples only; you must map your machine groups based on y To troubleshoot an issue with power-based remediation, verify the following: * You have access to the BMC. -* BMC is connected to the master node that is responsible for running the remediation task. +* BMC is connected to the control plane node that is responsible for running the remediation task. diff --git a/modules/monitoring-configuring-etcd-monitoring.adoc index 414b1e6455..416d783860 100644 --- a/modules/monitoring-configuring-etcd-monitoring.adoc +++ b/modules/monitoring-configuring-etcd-monitoring.adoc @@ -39,7 +39,7 @@ $ oc -n openshift-monitoring edit configmap cluster-monitoring-config . Under `config.yaml: |+`, add the `etcd` section. + -.. If you run `etcd` in static pods on your master nodes, you can specify the `etcd` nodes using the selector: +.. If you run `etcd` in static pods on your control plane nodes (also known as the master nodes), you can specify the `etcd` nodes using the selector: + [subs="quotes"] ---- @@ -118,7 +118,7 @@ image::etcd-no-certificate.png[] While `etcd` is being monitored, Prometheus is not yet able to authenticate against `etcd`, and so cannot gather metrics. To configure Prometheus authentication against `etcd`: -. Copy the `/etc/etcd/ca/ca.crt` and `/etc/etcd/ca/ca.key` credentials files from the master node to the local machine: +.
Copy the `/etc/etcd/ca/ca.crt` and `/etc/etcd/ca/ca.key` credentials files from the control plane node (also known as the master node) to the local machine: + [subs="quotes"] ---- diff --git a/modules/monitoring-installation-progress.adoc b/modules/monitoring-installation-progress.adoc index 33f8e6c6e4..d75c048dea 100644 --- a/modules/monitoring-installation-progress.adoc +++ b/modules/monitoring-installation-progress.adoc @@ -12,7 +12,7 @@ You can monitor high-level installation, bootstrap, and control plane logs as an * You have access to the cluster as a user with the `cluster-admin` role. * You have installed the OpenShift CLI (`oc`). * You have SSH access to your hosts. -* You have the fully qualified domain names of the bootstrap and master nodes. +* You have the fully qualified domain names of the bootstrap and control plane nodes (also known as the master nodes). + [NOTE] ==== @@ -37,10 +37,10 @@ $ ssh core@ journalctl -b -f -u bootkube.service + [NOTE] ==== -The `bootkube.service` log on the bootstrap node outputs etcd `connection refused` errors, indicating that the bootstrap server is unable to connect to etcd on master nodes. After etcd has started on each master node and the nodes have joined the cluster, the errors should stop. +The `bootkube.service` log on the bootstrap node outputs etcd `connection refused` errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. ==== + -. Monitor `kubelet.service` journald unit logs on master nodes, after they have booted. This provides visibility into master node agent activity. +. Monitor `kubelet.service` journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node agent activity. .. Monitor the logs using `oc`: + [source,terminal] @@ -54,7 +54,7 @@ $ oc adm node-logs --role=master -u kubelet $ ssh core@.. journalctl -b -f -u kubelet.service ---- -. Monitor `crio.service` journald unit logs on master nodes, after they have booted. This provides visibility into master node CRI-O container runtime activity. +. Monitor `crio.service` journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node CRI-O container runtime activity. .. Monitor the logs using `oc`: + [source,terminal] diff --git a/modules/nodes-nodes-audit-log-advanced.adoc b/modules/nodes-nodes-audit-log-advanced.adoc index ed5d789df8..30ef17f9a3 100644 --- a/modules/nodes-nodes-audit-log-advanced.adoc +++ b/modules/nodes-nodes-audit-log-advanced.adoc @@ -19,7 +19,7 @@ openshift_master_audit_config={"enabled": true, "auditFilePath": "/var/lib/origi [IMPORTANT] ==== -The policy file *_/etc/origin/master/adv-audit.yaml_* must be available on each master node. +The policy file *_/etc/origin/master/adv-audit.yaml_* must be available on each control plane node (also known as the master node). ==== @@ -137,4 +137,3 @@ that group. 
For more information on advanced audit, see the link:https://kubernetes.io/docs/tasks/debug-application-cluster/audit[Kubernetes documentation] - diff --git a/modules/nodes-nodes-audit-log-basic-viewing.adoc b/modules/nodes-nodes-audit-log-basic-viewing.adoc index 569f6ebc6c..cb3949b7ea 100644 --- a/modules/nodes-nodes-audit-log-basic-viewing.adoc +++ b/modules/nodes-nodes-audit-log-basic-viewing.adoc @@ -5,7 +5,7 @@ [id="nodes-nodes-audit-log-basic-viewing_{context}"] = Viewing the audit logs -You can view the logs for the OpenShift API server, Kubernetes API server, and OpenShift OAuth API server for each master node. +You can view the logs for the OpenShift API server, Kubernetes API server, and OpenShift OAuth API server for each control plane node (also known as the master node). .Procedure @@ -13,7 +13,7 @@ To view the audit logs: * View the OpenShift API server logs: -.. List the OpenShift API server logs that are available for each master node: +.. List the OpenShift API server logs that are available for each control plane node: + [source,terminal] ---- @@ -53,7 +53,7 @@ $ oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=openshift-apiserver * View the Kubernetes API server logs: -.. List the Kubernetes API server logs that are available for each master node: +.. List the Kubernetes API server logs that are available for each control plane node: + [source,terminal] ---- @@ -93,7 +93,7 @@ $ oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=kube-apiserver/audi * View the OpenShift OAuth API server logs: -.. List the OpenShift OAuth API server logs that are available for each master node: +.. List the OpenShift OAuth API server logs that are available for each control plane node: + [source,terminal] ---- diff --git a/modules/nodes-nodes-working-master-schedulable.adoc b/modules/nodes-nodes-working-master-schedulable.adoc index 0f9d7f6e3e..ab0464c0d7 100644 --- a/modules/nodes-nodes-working-master-schedulable.adoc +++ b/modules/nodes-nodes-working-master-schedulable.adoc @@ -3,22 +3,21 @@ // * nodes/nodes-nodes-working.adoc [id="nodes-nodes-working-master-schedulable_{context}"] -= Configuring master nodes as schedulable += Configuring control plane nodes as schedulable -You can configure master nodes to be +You can configure control plane nodes (also known as the master nodes) to be schedulable, meaning that new pods are allowed for placement on the master -nodes. By default, master nodes are not schedulable. +nodes. By default, control plane nodes are not schedulable. You can set the masters to be schedulable, but must retain the worker nodes. [NOTE] ==== You can deploy {product-title} with no worker nodes on a bare metal cluster. -In this case, the master nodes are marked schedulable by default. +In this case, the control plane nodes are marked schedulable by default. ==== -You can allow or disallow master nodes to be schedulable by configuring the -`mastersSchedulable` field. +You can allow or disallow control plane nodes to be schedulable by configuring the `mastersSchedulable` field. .Procedure @@ -48,7 +47,7 @@ spec: name: "" status: {} ---- -<1> Set to `true` to allow master nodes to be schedulable, or `false` to -disallow master nodes to be schedulable. +<1> Set to `true` to allow control plane nodes to be schedulable, or `false` to +disallow control plane nodes to be schedulable. . Save the file to apply the changes. 
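As a side note, the same `mastersSchedulable` change can be applied without opening an editor. The following one-liner is only a sketch, not part of the documented procedure; it assumes the cluster-scoped scheduler resource named `cluster`, which is the default on {product-title} 4 clusters:

[source,terminal]
----
$ oc patch schedulers.config.openshift.io cluster --type merge -p '{"spec":{"mastersSchedulable":true}}'
----

Set the value to `false` to make the control plane nodes unschedulable again.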
diff --git a/modules/nodes-scheduler-taints-tolerations-about.adoc index f6ae0f423e..b83631f4e0 100644 --- a/modules/nodes-scheduler-taints-tolerations-about.adoc +++ b/modules/nodes-scheduler-taints-tolerations-about.adoc @@ -89,7 +89,7 @@ Taints and tolerations consist of a key, value, and effect. |=== [.small] -- -1. If you add a `NoSchedule` taint to a master node, the node must have the `node-role.kubernetes.io/master=:NoSchedule` taint, which is added by default. +1. If you add a `NoSchedule` taint to a control plane node (also known as the master node), the node must have the `node-role.kubernetes.io/master=:NoSchedule` taint, which is added by default. + For example: + diff --git a/modules/nodes-scheduler-taints-tolerations-adding.adoc index f703513968..71468be245 100644 --- a/modules/nodes-scheduler-taints-tolerations-adding.adoc +++ b/modules/nodes-scheduler-taints-tolerations-adding.adoc @@ -68,7 +68,7 @@ This command places a taint on `node1` that has key `key1`, value `value1`, and + [NOTE] ==== -If you add a `NoSchedule` taint to a master node, the node must have the `node-role.kubernetes.io/master=:NoSchedule` taint, which is added by default. +If you add a `NoSchedule` taint to a control plane node (also known as the master node), the node must have the `node-role.kubernetes.io/master=:NoSchedule` taint, which is added by default. For example: diff --git a/modules/nw-create-load-balancer-service.adoc index 6cbde8cf68..1b64286835 100644 --- a/modules/nw-create-load-balancer-service.adoc +++ b/modules/nw-create-load-balancer-service.adoc @@ -24,7 +24,7 @@ To create a load balancer service: $ oc project project1 ---- -. Open a text file on the master node and paste the following text, editing the +. Open a text file on the control plane node (also known as the master node) and paste the following text, editing the file as needed: + .Sample load balancer configuration file diff --git a/modules/nw-sriov-configuring-operator.adoc index 413cfde7b0..832f103903 100644 --- a/modules/nw-sriov-configuring-operator.adoc +++ b/modules/nw-sriov-configuring-operator.adoc @@ -36,7 +36,7 @@ application. It provides the following capabilities: * Mutation of resource requests and limits in a pod specification to add an SR-IOV resource name according to an SR-IOV network attachment definition annotation. * Mutation of a pod specification with a Downward API volume to expose pod annotations, labels, and huge pages requests and limits. Containers that run in the pod can access the exposed information as files under the `/etc/podnetinfo` path. -By default, the Network Resources Injector is enabled by the SR-IOV Network Operator and runs as a daemon set on all master nodes. The following is an example of Network Resources Injector pods running in a cluster with three master nodes: +By default, the Network Resources Injector is enabled by the SR-IOV Network Operator and runs as a daemon set on all control plane nodes (also known as the master nodes). The following is an example of Network Resources Injector pods running in a cluster with three control plane nodes: [source,terminal] ---- @@ -61,8 +61,8 @@ Admission Controller application. It provides the following capabilities: * Validation of the `SriovNetworkNodePolicy` CR when it is created or updated.
* Mutation of the `SriovNetworkNodePolicy` CR by setting the default value for the `priority` and `deviceType` fields when the CR is created or updated. -By default the SR-IOV Network Operator Admission Controller webhook is enabled by the Operator and runs as a daemon set on all master nodes. -The following is an example of the Operator Admission Controller webhook pods running in a cluster with three master nodes: +By default the SR-IOV Network Operator Admission Controller webhook is enabled by the Operator and runs as a daemon set on all control plane nodes. +The following is an example of the Operator Admission Controller webhook pods running in a cluster with three control plane nodes: [source,terminal] ---- diff --git a/modules/querying-bootstrap-node-journal-logs.adoc b/modules/querying-bootstrap-node-journal-logs.adoc index e634e20fe6..a609235db1 100644 --- a/modules/querying-bootstrap-node-journal-logs.adoc +++ b/modules/querying-bootstrap-node-journal-logs.adoc @@ -23,7 +23,7 @@ $ ssh core@ journalctl -b -f -u bootkube.service + [NOTE] ==== -The `bootkube.service` log on the bootstrap node outputs etcd `connection refused` errors, indicating that the bootstrap server is unable to connect to etcd on master nodes. After etcd has started on each master node and the nodes have joined the cluster, the errors should stop. +The `bootkube.service` log on the bootstrap node outputs etcd `connection refused` errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes (also known as the master nodes). After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. ==== + . Collect logs from the bootstrap node containers using `podman` on the bootstrap node. Replace `` with the bootstrap node's fully qualified domain name: diff --git a/modules/querying-cluster-node-journal-logs.adoc b/modules/querying-cluster-node-journal-logs.adoc index deb58270f7..482ee41832 100644 --- a/modules/querying-cluster-node-journal-logs.adoc +++ b/modules/querying-cluster-node-journal-logs.adoc @@ -17,7 +17,7 @@ You can gather `journald` unit logs and other logs within `/var/log` on individu .Procedure -. Query `kubelet` `journald` unit logs from {product-title} cluster nodes. The following example queries master nodes only: +. Query `kubelet` `journald` unit logs from {product-title} cluster nodes. The following example queries control plane nodes (also known as the master nodes) only: + [source,terminal] ---- @@ -26,14 +26,14 @@ $ oc adm node-logs --role=master -u kubelet <1> <1> Replace `kubelet` as appropriate to query other unit logs. . Collect logs from specific subdirectories under `/var/log/` on cluster nodes. -.. Retrieve a list of logs contained within a `/var/log/` subdirectory. The following example lists files in `/var/log/openshift-apiserver/` on all master nodes: +.. Retrieve a list of logs contained within a `/var/log/` subdirectory. The following example lists files in `/var/log/openshift-apiserver/` on all control plane nodes: + [source,terminal] ---- $ oc adm node-logs --role=master --path=openshift-apiserver ---- + -.. Inspect a specific log within a `/var/log/` subdirectory. The following example outputs `/var/log/openshift-apiserver/audit.log` contents from all master nodes: +.. Inspect a specific log within a `/var/log/` subdirectory. 
The following example outputs `/var/log/openshift-apiserver/audit.log` contents from all control plane nodes: + [source,terminal] ---- diff --git a/modules/restore-determine-state-etcd-member.adoc b/modules/restore-determine-state-etcd-member.adoc index fa92da6518..74ab9b8726 100644 --- a/modules/restore-determine-state-etcd-member.adoc +++ b/modules/restore-determine-state-etcd-member.adoc @@ -83,7 +83,7 @@ If the *node is not ready*, then follow the _Replacing an unhealthy etcd member + If the machine is running and the node is ready, then check whether the etcd pod is crashlooping. -.. Verify that all master nodes are listed as `Ready`: +.. Verify that all control plane nodes (also known as the master nodes) are listed as `Ready`: + [source,terminal] ---- diff --git a/modules/restore-replace-crashlooping-etcd-member.adoc b/modules/restore-replace-crashlooping-etcd-member.adoc index a0400e72f2..745f936d3c 100644 --- a/modules/restore-replace-crashlooping-etcd-member.adoc +++ b/modules/restore-replace-crashlooping-etcd-member.adoc @@ -196,7 +196,7 @@ $ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "single-master- ---- <1> The `forceRedeploymentReason` value must be unique, which is why a timestamp is appended. + -When the etcd cluster Operator performs a redeployment, it ensures that all master nodes have a functioning etcd pod. +When the etcd cluster Operator performs a redeployment, it ensures that all control plane nodes (also known as the master nodes) have a functioning etcd pod. .Verification diff --git a/modules/restore-replace-stopped-etcd-member.adoc b/modules/restore-replace-stopped-etcd-member.adoc index 973404974c..c546019ee9 100644 --- a/modules/restore-replace-stopped-etcd-member.adoc +++ b/modules/restore-replace-stopped-etcd-member.adoc @@ -146,7 +146,7 @@ $ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal $ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal ---- -. Delete and recreate the master machine. After this machine is recreated, a new revision is forced and etcd scales up automatically. +. Delete and recreate the control plane machine (also known as the master machine). After this machine is recreated, a new revision is forced and etcd scales up automatically. + If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new master using the same method that was used to originally create it. @@ -170,7 +170,7 @@ clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running ---- -<1> This is the master machine for the unhealthy node, `ip-10-0-131-183.ec2.internal`. +<1> This is the control plane machine for the unhealthy node, `ip-10-0-131-183.ec2.internal`. .. Save the machine configuration to a file on your file system: + @@ -181,7 +181,7 @@ $ oc get machine clustername-8qw5l-master-0 \ <1> -o yaml \ > new-master-machine.yaml ---- -<1> Specify the name of the master machine for the unhealthy node. +<1> Specify the name of the control plane machine for the unhealthy node. .. 
Edit the `new-master-machine.yaml` file that was created in the previous step to assign a new name and remove unnecessary fields. @@ -276,7 +276,7 @@ metadata: ---- $ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 <1> ---- -<1> Specify the name of the master machine for the unhealthy node. +<1> Specify the name of the control plane machine for the unhealthy node. .. Verify that the machine was deleted: + diff --git a/modules/rhcos-about.adoc b/modules/rhcos-about.adoc index ad1f4bb869..0d6afcb007 100644 --- a/modules/rhcos-about.adoc +++ b/modules/rhcos-about.adoc @@ -122,7 +122,7 @@ The way that Ignition configures machines is similar to how tools like https://c The Ignition process for an {op-system} machine in an {product-title} cluster involves the following steps: -* The machine gets its Ignition config file. Master machines get their Ignition config files from the bootstrap machine, and worker machines get Ignition config files from a master. +* The machine gets its Ignition config file. Control plane machines (also known as the master machines) get their Ignition config files from the bootstrap machine, and worker machines get Ignition config files from a master. * Ignition creates disk partitions, file systems, directories, and links on the machine. It supports RAID arrays but does not support LVM volumes. * Ignition mounts the root of the permanent file system to the `/sysroot` directory in the initramfs and starts working in that `/sysroot` directory. * Ignition configures all defined file systems and sets them up to mount appropriately at runtime. diff --git a/modules/rhcos-enabling-multipath.adoc b/modules/rhcos-enabling-multipath.adoc index f943f01ebf..5543446064 100644 --- a/modules/rhcos-enabling-multipath.adoc +++ b/modules/rhcos-enabling-multipath.adoc @@ -24,7 +24,7 @@ On IBM Z and LinuxONE, you can enable multipathing only if you configured your c .Procedure -. To enable multipathing on master nodes: +. To enable multipathing on control plane nodes (also known as the master nodes): * Create a machine config file, such as `99-master-kargs-mpath.yaml`, that instructs the cluster to add the `master` label and that identifies the multipath kernel argument, for example: diff --git a/modules/running-compliance-scans.adoc b/modules/running-compliance-scans.adoc index 5f2d7a08b8..a9b453801e 100644 --- a/modules/running-compliance-scans.adoc +++ b/modules/running-compliance-scans.adoc @@ -42,7 +42,7 @@ schedule: 0 1 * * * <6> <2> The Compliance Operator keeps results of three subsequent scans in the volume; older scans are rotated. <3> The Compliance Operator will allocate one GB of storage for the scan results. <4> If the scan setting uses any profiles that scan cluster nodes, scan these node roles. -<5> The default scan setting object also scans the master nodes. +<5> The default scan setting object also scans the control plane nodes (also known as the master nodes). <6> The default scan setting object runs scans at 01:00 each day. 
+ As an alternative to the default scan setting, you can use `default-auto-apply`, which has the following settings: diff --git a/modules/security-context-constraints-about.adoc index bb9a8beddd..563c2bef9f 100644 --- a/modules/security-context-constraints-about.adoc +++ b/modules/security-context-constraints-about.adoc @@ -40,9 +40,7 @@ The cluster contains nine default SCCs: + [WARNING] ==== -If additional workloads are run on master hosts, use caution when providing -access to `hostnetwork`. A workload that runs `hostnetwork` on a master host is -effectively root on the cluster and must be trusted accordingly. +If additional workloads are run on control plane hosts (also known as the master hosts), use caution when providing access to `hostnetwork`. A workload that runs `hostnetwork` on a control plane host is effectively root on the cluster and must be trusted accordingly. ==== * `node-exporter` * `nonroot` diff --git a/modules/security-hardening-how.adoc index 42f0636133..c442a23d99 100644 --- a/modules/security-hardening-how.adoc +++ b/modules/security-hardening-how.adoc @@ -8,7 +8,7 @@ Direct modification of {op-system} systems in {product-title} is discouraged. Instead, you should think of modifying systems in pools of nodes, such -as worker nodes and master nodes. When a new node is needed, in +as worker nodes and control plane nodes (also known as the master nodes). When a new node is needed, in non-bare metal installs, you can request a new node of the type you want and it will be created from an {op-system} image plus the modifications you created earlier. diff --git a/modules/service-accounts-configuration-parameters.adoc index bbaad7f9c9..8ec41bf1df 100644 --- a/modules/service-accounts-configuration-parameters.adoc +++ b/modules/service-accounts-configuration-parameters.adoc @@ -6,7 +6,7 @@ = Service account configuration parameters You can provide values for the following service account parameters in the -*_/etc/origin/master/master-config.yml_* file on the master host. +*_/etc/origin/master/master-config.yml_* file on the control plane host (also known as the master host). .Service account configuration parameters [cols="3a,3a,6a",options="header"] diff --git a/modules/storage-expanding-flexvolume.adoc index d58807ba62..f0e53951dc 100644 --- a/modules/storage-expanding-flexvolume.adoc +++ b/modules/storage-expanding-flexvolume.adoc @@ -30,5 +30,5 @@ If `true`, calls `ExpandFS` to resize filesystem after physical volume expansion [IMPORTANT] ==== -Because {product-title} does not support installation of FlexVolume plugins on master nodes, it does not support control-plane expansion of FlexVolume. +Because {product-title} does not support installation of FlexVolume plugins on control plane nodes (also known as the master nodes), it does not support control-plane expansion of FlexVolume.
==== diff --git a/modules/troubleshooting-disabling-autoreboot-mco-cli.adoc b/modules/troubleshooting-disabling-autoreboot-mco-cli.adoc index f83d3f68c5..367ab1a3ab 100644 --- a/modules/troubleshooting-disabling-autoreboot-mco-cli.adoc +++ b/modules/troubleshooting-disabling-autoreboot-mco-cli.adoc @@ -73,7 +73,7 @@ master rendered-master-33cf0a1254318755d7b48002c597bf91 True False worker rendered-worker-e405a5bdb0db1295acea08bcca33fa60 False False ---- + -If the *UPDATED* column is *False* and *UPDATING* is *False*, there are pending changes. When *UPDATED* is *True* and *UPDATING* is *False*, there are no pending changes. In the previous example, the worker node has pending changes. The master node does not have any pending changes. +If the *UPDATED* column is *False* and *UPDATING* is *False*, there are pending changes. When *UPDATED* is *True* and *UPDATING* is *False*, there are no pending changes. In the previous example, the worker node has pending changes. The control plane node (also known as the master node) does not have any pending changes. + [IMPORTANT] ==== @@ -138,4 +138,3 @@ worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True ---- + If the MCP is applying any pending changes, the *UPDATED* column is *False* and the *UPDATING* column is *True*. When *UPDATED* is *True* and *UPDATING* is *False*, there are no further changes being made. In the previous example, the MCO is updating the worker node. - diff --git a/modules/understanding-control-plane.adoc b/modules/understanding-control-plane.adoc index ae9e3be3a2..1f2cd67ca1 100644 --- a/modules/understanding-control-plane.adoc +++ b/modules/understanding-control-plane.adoc @@ -5,7 +5,7 @@ [id="understanding-control-plane_{context}"] = Understanding the {product-title} control plane -The control plane, which is composed of master machines, manages the +The control plane, which is composed of control plane machines (also known as the master machines), manages the {product-title} cluster. The control plane machines manage workloads on the compute machines, which are also known as worker machines. The cluster itself manages all upgrades to the machines by the actions of the Cluster Version Operator, the diff --git a/modules/upi-installation-considerations.adoc b/modules/upi-installation-considerations.adoc index 3696bf6071..59ff6b62be 100644 --- a/modules/upi-installation-considerations.adoc +++ b/modules/upi-installation-considerations.adoc @@ -26,4 +26,4 @@ It is not possible to enable cloud provider integration in {product-title} envir * Storage needs to be manually provisioned if you want to leverage optional framework components such as the embedded container registry, ElasticSearch, or Prometheus. Default storage classes are not defined in user-provisioned infrastructure installations unless explicitly configured. -* A load balancer is required to distribute API requests across all master nodes in highly available {product-title} environments. You can use any TCP-based load balancing solution that meets {product-title} DNS routing and port requirements. +* A load balancer is required to distribute API requests across all control plane nodes (also known as the master nodes) in highly available {product-title} environments. You can use any TCP-based load balancing solution that meets {product-title} DNS routing and port requirements. 
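A quick way to confirm that such a load balancer is passing API traffic is to query the API server health endpoint through it. The following is only a sketch; it assumes the standard `api.<cluster_name>.<base_domain>` DNS record and port 6443, and `-k` skips certificate verification:

[source,terminal]
----
$ curl -k https://api.<cluster_name>.<base_domain>:6443/readyz
----

A plain `ok` response indicates that at least one control plane node behind the load balancer is serving the API.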
diff --git a/networking/accessing-hosts.adoc b/networking/accessing-hosts.adoc index e61998c6c8..116c21d53c 100644 --- a/networking/accessing-hosts.adoc +++ b/networking/accessing-hosts.adoc @@ -6,6 +6,6 @@ include::modules/common-attributes.adoc[] toc::[] Learn how to create a bastion host to access {product-title} instances and -access the master nodes with secure shell (SSH) access. +access the control plane nodes (also known as the master nodes) with secure shell (SSH) access. include::modules/accessing-hosts-on-aws.adoc[leveloffset=+1] diff --git a/rest_api/config_apis/scheduler-config-openshift-io-v1.adoc b/rest_api/config_apis/scheduler-config-openshift-io-v1.adoc index be7d1bd31c..6a6a7e0562 100644 --- a/rest_api/config_apis/scheduler-config-openshift-io-v1.adoc +++ b/rest_api/config_apis/scheduler-config-openshift-io-v1.adoc @@ -70,7 +70,7 @@ Type:: | `mastersSchedulable` | `boolean` -| MastersSchedulable allows masters nodes to be schedulable. When this flag is turned on, all the master nodes in the cluster will be made schedulable, so that workload pods can run on them. The default value for this field is false, meaning none of the master nodes are schedulable. Important Note: Once the workload pods start running on the master nodes, extreme care must be taken to ensure that cluster-critical control plane components are not impacted. Please turn on this field after doing due diligence. +| MastersSchedulable allows control plane nodes (also known as the master nodes) to be schedulable. When this flag is turned on, all the control plane nodes in the cluster will be made schedulable, so that workload pods can run on them. The default value for this field is false, meaning none of the control plane nodes are schedulable. Important Note: Once the workload pods start running on the control plane nodes, extreme care must be taken to ensure that cluster-critical control plane components are not impacted. Please turn on this field after doing due diligence. | `policy` | `object` diff --git a/support/troubleshooting/troubleshooting-installations.adoc b/support/troubleshooting/troubleshooting-installations.adoc index 8844b5807d..372d1bc90c 100644 --- a/support/troubleshooting/troubleshooting-installations.adoc +++ b/support/troubleshooting/troubleshooting-installations.adoc @@ -26,13 +26,13 @@ include::modules/monitoring-installation-progress.adoc[leveloffset=+1] // Gathering bootstrap node diagnostic data include::modules/gathering-bootstrap-diagnostic-data.adoc[leveloffset=+1] -// Investigating master node installation issues +// Investigating control plane node installation issues include::modules/investigating-master-node-installation-issues.adoc[leveloffset=+1] // Investigating etcd installation issues include::modules/investigating-etcd-installation-issues.adoc[leveloffset=+1] -// Investigating master node kubelet and API server issues +// Investigating control plane node kubelet and API server issues include::modules/investigating-kubelet-api-installation-issues.adoc[leveloffset=+1] // Investigating worker node installation issues