
TELCODOCS#2122: Configuring a local arbiter node

This commit is contained in:
srir
2025-04-17 15:38:28 +05:30
committed by openshift-cherrypick-robot
parent 968c9b52cc
commit d97d1a337c
4 changed files with 130 additions and 0 deletions

View File

@@ -57,6 +57,18 @@ include::modules/nw-osp-configuring-external-load-balancer.adoc[leveloffset=+2]
// Setting the cluster node hostnames through DHCP
include::modules/ipi-install-setting-cluster-node-hostnames-dhcp.adoc[leveloffset=+1]
// Configuring a local arbiter node
include::modules/ipi-install-config-local-arbiter-node.adoc[leveloffset=+1]
.Next steps
* xref:../../../installing/installing_bare_metal/ipi/ipi-install-installing-a-cluster.adoc#ipi-install-installing-a-cluster[Installing a cluster]
[role="_additional-resources"]
.Additional resources
* xref:../../../nodes/clusters/nodes-cluster-enabling-features.adoc#nodes-cluster-enabling-features-about_nodes-cluster-enabling[Understanding feature gates]
[id="ipi-install-configuration-files"]
[id="additional-resources_config"]
== Configuring the install-config.yaml file

View File

@@ -112,6 +112,23 @@ controlPlane:
|
|Replicas sets the number of control plane nodes included as part of the {product-title} cluster.
a|
----
arbiter:
name: arbiter
----
|
|The {product-title} cluster requires a name for arbiter nodes.
a|
----
arbiter:
replicas: 1
----
|
|The `replicas` parameter sets the number of arbiter nodes for the {product-title} cluster.
a| `provisioningNetworkInterface` | | The name of the network interface on nodes connected to the provisioning network. For {product-title} 4.9 and later releases, use the `bootMACAddress` configuration setting to enable Ironic to identify the IP address of the NIC instead of using the `provisioningNetworkInterface` configuration setting to identify the name of the NIC.
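Taken together, the two arbiter parameters form a short stanza in the `install-config.yaml` file. The following is a minimal sketch that only combines the values already shown in the table rows above:

[source,yaml]
----
arbiter:
  name: arbiter   # required name for the arbiter machine pool
  replicas: 1     # number of arbiter nodes; cannot be greater than 1
----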

View File

@@ -0,0 +1,100 @@
// Module included in the following assemblies:
//
// *installing/installing_bare_metal/ipi/ipi-install-installation-workflow.adoc
:_mod-docs-content-type: PROCEDURE
[id="ipi-install-config-local-arbiter-node_{context}"]
= Configuring a local arbiter node
You can configure an {product-title} cluster with two control plane nodes and one local arbiter node to retain high availability (HA) while reducing infrastructure costs for your cluster. This configuration is supported only for bare-metal installations.
:FeatureName: Configuring a local arbiter node
include::snippets/technology-preview.adoc[]
A local arbiter node is a lower-cost, co-located machine that participates in control plane quorum decisions. Unlike a standard control plane node, the arbiter node does not run the full set of control plane services. You can use this configuration to maintain HA in your cluster with only two fully provisioned control plane nodes instead of three.
[IMPORTANT]
====
You can configure a local arbiter node only. Remote arbiter nodes are not supported.
====
To deploy a cluster with two control plane nodes and one local arbiter node, you must define the following nodes in the `install-config.yaml` file:
* 2 control plane nodes
* 1 arbiter node
You must enable the `TechPreviewNoUpgrade` feature set in the `FeatureGate` custom resource (CR) to use the arbiter node feature.
For more information about feature gates, see "Understanding feature gates".
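On a running cluster, the feature set is recorded on the cluster-scoped `FeatureGate` CR. The following is a minimal sketch of that CR, assuming the standard `config.openshift.io/v1` API; during installation, you set the equivalent `featureSet` field directly in the `install-config.yaml` file, as shown in the example later in this module.

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster                      # the FeatureGate CR is a singleton named "cluster"
spec:
  featureSet: TechPreviewNoUpgrade   # enables Technology Preview features; this setting cannot be reverted
----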
The arbiter node must meet the following minimum system requirements:
* 2 threads
* 8 GB of RAM
* 120 GB of SSD or equivalent storage
The arbiter node must be located in a network environment with an end-to-end latency of less than 500 milliseconds, including disk I/O. In high-latency environments, you might need to apply the `etcd` slow profile.
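As an illustration only, the following is a minimal sketch of how such a profile might be applied, assuming the cluster-scoped `Etcd` resource of the `operator.openshift.io/v1` API and its `controlPlaneHardwareSpeed` field; verify the field name and supported values for your {product-title} version before using it:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Etcd
metadata:
  name: cluster                      # the Etcd operator resource is a singleton named "cluster"
spec:
  controlPlaneHardwareSpeed: Slower  # relaxes etcd heartbeat and election timeouts for high-latency environments
----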
The control plane nodes must meet the following minimum system requirements:
* 4 threads
* 16 GB of RAM
* 120 GB of SSD or equivalent storage
The control plane nodes must also have enough storage for the workload.
.Prerequisites
* You have downloaded the {oc-first} and the installation program.
* You have logged in to the {oc-first}.
.Procedure
. Edit the `install-config.yaml` file to define the arbiter node alongside control plane nodes.
+
.Example `install-config.yaml` configuration for deploying an arbiter node
[source,yaml]
----
apiVersion: v1
baseDomain: devcluster.openshift.com
compute:
- architecture: amd64
hyperthreading: Enabled
name: worker
platform: {}
replicas: 0
arbiter: <1>
architecture: amd64
hyperthreading: Enabled
replicas: 1 <2>
name: arbiter <3>
platform:
baremetal: {}
controlPlane: <4>
architecture: amd64
hyperthreading: Enabled
name: master
platform:
baremetal: {}
replicas: 2 <5>
featureSet: TechPreviewNoUpgrade
platform:
baremetal:
# ...
hosts:
- name: cluster-master-0
role: master
# ...
- name: cluster-master-1
role: master
# ...
- name: cluster-arbiter-0
role: arbiter
# ...
----
<1> Defines the arbiter machine pool. You must configure this field to deploy a cluster with an arbiter node.
<2> Set the `replicas` field to `1` for the arbiter pool. You cannot set this field to a value that is greater than 1.
<3> Specifies a name for the arbiter machine pool.
<4> Defines the control plane machine pool.
<5> When an arbiter machine pool is defined, you can set the number of control plane replicas to `2`.
. Save the modified `install-config.yaml` file.

View File

@@ -28,6 +28,7 @@ The following Technology Preview features are enabled by this feature set:
** Pod security admission enforcement. Enables the restricted enforcement mode for pod security admission. Instead of only logging a warning, pods are rejected if they violate pod security standards. (`OpenShiftPodSecurityAdmission`)
** StatefulSet pod availability upgrading limits. Enables users to define the maximum number of StatefulSet pods that can be unavailable during updates, which reduces application downtime. (`MaxUnavailableStatefulSet`)
** Import mode behavior of image streams. Enables a new API for controlling the import mode behavior of image streams. (`imageStreamImportMode`)
** Configuring a local arbiter node. You can configure an {product-title} cluster with two control plane nodes and one local arbiter node to retain high availability (HA) while reducing infrastructure costs. This configuration is supported only for bare-metal installations.
** `OVNObservability` resource allows you to verify expected network behavior. Supports the following network APIs: `NetworkPolicy`, `AdminNetworkPolicy`, `BaselineNetworkPolicy`, `UserDefinedNetwork` isolation, multicast ACLs, and egress firewalls. When enabled, you can view network events in the terminal.
** `gcpLabelsTags`
** `vSphereStaticIPs`