
OSDOCS#12987: Two-node OpenShift cluster with fencing (Technology Preview)

Author: srir
Date: 2025-09-04 18:18:18 +05:30
parent 87874cd000
commit c966c2cf45
26 changed files with 1049 additions and 81 deletions

View File

@@ -0,0 +1 @@
../../_attributes/

View File

@@ -0,0 +1,16 @@
:_mod-docs-content-type: ASSEMBLY
[id="about-two-node-arbiter-installation"]
= Two-Node with Arbiter
:context: about-two-node-arbiter-installation
A Two-Node OpenShift with Arbiter (TNA) cluster is a compact, cost-effective {product-title} topology. The topology consists of two control plane nodes and a lightweight arbiter node. The arbiter node stores a full copy of the etcd data, maintaining etcd quorum and preventing split-brain scenarios. The arbiter node does not run the additional control plane components `kube-apiserver` and `kube-controller-manager`, nor does it run workloads.

To install a Two-Node OpenShift with Arbiter cluster, assign the arbiter role to at least one of the nodes and set the control plane node count for the cluster to 2. Although {product-title} does not currently impose a limit on the number of arbiter nodes, a typical deployment includes only one arbiter node to minimize the use of hardware resources.

After installation, you can add additional arbiter nodes to a Two-Node OpenShift with Arbiter cluster, but not to a standard multi-node cluster. It is also not possible to convert between the Two-Node OpenShift with Arbiter topology and a standard topology.
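For orientation only, a hypothetical `install-config.yaml` excerpt such as the following illustrates the idea of a two-node control plane paired with a single arbiter machine pool. The `arbiter` stanza, names, and values shown here are illustrative placeholders; the linked procedures below contain the supported configuration.

[source,yaml]
----
# Hypothetical excerpt only: the field layout and every value shown here are
# placeholders. See the linked bare-metal and Agent-based procedures for the
# supported sample configurations.
apiVersion: v1
baseDomain: example.com        # placeholder base domain
metadata:
  name: tna-cluster            # placeholder cluster name
controlPlane:
  name: master
  replicas: 2                  # two full control plane nodes
arbiter:
  name: arbiter
  replicas: 1                  # one lightweight arbiter node that maintains etcd quorum
compute:
- name: worker
  replicas: 0                  # no dedicated compute nodes in this minimal example
----
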
You can install a Two-Node OpenShift with Arbiter cluster by using one of the following methods:
* Installing on bare metal: xref:../installing_bare_metal/ipi/ipi-install-installation-workflow.adoc#ipi-install-config-local-arbiter-node_ipi-install-installation-workflow[Configuring a local arbiter node]
* Installing with the Agent-based Installer: xref:../../installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc#installing-ocp-agent-local-arbiter-node_installing-with-agent-based-installer[Configuring a local arbiter node]

View File

@@ -0,0 +1 @@
../../images/

View File

@@ -0,0 +1 @@
../../_attributes/

View File

@@ -0,0 +1 @@
../../images/

View File

@@ -0,0 +1,33 @@
:_mod-docs-content-type: ASSEMBLY
[id="installing-post-tnf"]
= Post-installation troubleshooting and recovery
include::_attributes/common-attributes.adoc[]
:context: install-post-tnf
toc::[]
The following sections help you recover from issues in a two-node OpenShift cluster with fencing.

:FeatureName: Two-node OpenShift cluster with fencing
include::snippets/technology-preview.adoc[leveloffset=+1]

// Manually recovering from a disruption event when automated recovery is unavailable
include::modules/installation-manual-recovering-when-auto-recovery-is-unavail.adoc[leveloffset=+1]

[role="_additional-resources"]
== Additional resources

* xref:../../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd-restoring_backing-up-etcd[Restoring etcd from a backup]
* xref:../installing_tnf/install-post-tnf.adoc#installation-verifying-etcd-health_install-post-tnf[Verifying etcd health in a two-node OpenShift cluster with fencing]

// Replacing control plane nodes
include::modules/installation-replacing-control-plane-nodes.adoc[leveloffset=+1]

[role="_additional-resources"]
== Additional resources

* xref:../../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd-restoring_backing-up-etcd[Restoring etcd from a backup]

// Verifying etcd health in a two-node OpenShift cluster with fencing
include::modules/installation-verifying-etcd-health.adoc[leveloffset=+1]

View File

@@ -0,0 +1,18 @@
:_mod-docs-content-type: ASSEMBLY
[id="installing-tnf"]
= Installing a two-node OpenShift cluster with fencing
include::_attributes/common-attributes.adoc[]
:context: install-tnf
toc::[]
You can deploy a two-node OpenShift cluster with fencing by using either the installer-provisioned infrastructure or the user-provisioned infrastructure installation method. The following examples provide sample `install-config.yaml` configurations for both methods.

:FeatureName: Two-node OpenShift cluster with fencing
include::snippets/technology-preview.adoc[leveloffset=+1]

// Sample install-config.yaml for a two-node installer-provisioned infrastructure cluster with fencing
include::modules/installation-sample-install-config-two-node-fencing-ipi.adoc[leveloffset=+1]

// Sample install-config.yaml for a two-node user-provisioned infrastructure cluster with fencing
include::modules/installation-sample-install-config-two-node-fencing-upi.adoc[leveloffset=+1]
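
The included modules contain the authoritative installer-provisioned and user-provisioned samples. For quick orientation only, the following hypothetical sketch shows the topology-level settings that both samples have in common; every name and value is a placeholder.

[source,yaml]
----
# Hypothetical orientation sketch; the included IPI and UPI sample modules are
# authoritative. All names and values are placeholders.
apiVersion: v1
baseDomain: example.com
metadata:
  name: tnf-cluster
compute:
- name: worker
  replicas: 0                  # no dedicated compute nodes; the control plane nodes run workloads
controlPlane:
  name: master
  replicas: 2                  # exactly two control plane nodes
networking:
  machineNetwork:
  - cidr: 192.168.111.0/24     # placeholder machine network
----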

View File

@@ -0,0 +1,67 @@
:_mod-docs-content-type: ASSEMBLY
[id="installing-two-node-fencing"]
= Preparing to install a two-node OpenShift cluster with fencing
include::_attributes/common-attributes.adoc[]
:context: installing-two-node-fencing
toc::[]
:FeatureName: Two-node OpenShift cluster with fencing
include::snippets/technology-preview.adoc[leveloffset=+1]
A two-node OpenShift cluster with fencing provides high availability (HA) with a reduced hardware footprint. This configuration is designed for distributed or edge environments where deploying a full three-node control plane cluster is not practical.

A two-node cluster does not include compute nodes. The two control plane machines run user workloads in addition to managing the cluster.

Fencing is managed by Pacemaker, which can isolate an unresponsive node by using the baseboard management controller (BMC) of the node. After the unresponsive node is fenced, the remaining node can safely continue operating the cluster without the risk of resource corruption.
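For illustration only, the following hypothetical excerpt shows the kind of per-host BMC access details that an installer-provisioned `install-config.yaml` records under `platform.baremetal.hosts`, and that Pacemaker-based fencing depends on having available. All addresses, credentials, and MAC addresses are placeholders; the sample configurations in the installation assembly are authoritative.

[source,yaml]
----
# Hypothetical excerpt: addresses, credentials, and MAC values are placeholders.
platform:
  baremetal:
    hosts:
    - name: control-plane-0
      role: master
      bootMACAddress: 52:54:00:00:00:01
      bmc:
        address: redfish-virtualmedia://192.168.111.10/redfish/v1/Systems/1
        username: admin
        password: <bmc-password>
    - name: control-plane-1
      role: master
      bootMACAddress: 52:54:00:00:00:02
      bmc:
        address: redfish-virtualmedia://192.168.111.11/redfish/v1/Systems/1
        username: admin
        password: <bmc-password>
----
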
[NOTE]
====
You can deploy a two-node OpenShift cluster with fencing by using either the user-provisioned infrastructure method or the installer-provisioned infrastructure method.
====
The two-node OpenShift cluster with fencing requires the following hosts:

.Minimum required hosts
[options="header"]
|===
|Hosts |Description

|Two control plane machines
|The control plane machines run the Kubernetes and {product-title} services that form the control plane.

|One temporary bootstrap machine
|You need a bootstrap machine to deploy the {product-title} cluster on the control plane machines. You can remove the bootstrap machine after you install the cluster.
|===

The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. For instructions on installing RHCOS and starting the bootstrap process, see xref:../../../installing/installing_bare_metal/upi/installing-bare-metal-network-customizations.adoc#creating-machines-bare-metal_installing-bare-metal-network-customizations[Installing {op-system} and starting the {product-title} bootstrap process].

[NOTE]
====
The requirement to use RHCOS applies only to user-provisioned infrastructure deployments. For installer-provisioned infrastructure deployments, the bootstrap and control plane machines are provisioned automatically by the installation program, and you do not need to manually install RHCOS.
====
include::modules/installation-two-node-cluster-min-resource-reqs.adoc[leveloffset=+1]

// Two-node DNS requirements - user-provisioned infrastructure
include::modules/installation-dns-user-infra.adoc[leveloffset=+1]

// Two-node DNS requirements - installer-provisioned infrastructure
include::modules/installation-dns-installer-infra.adoc[leveloffset=+1]

// Configuring the Ingress load balancer to work with Pacemaker
include::modules/installation-two-node-ingress-lb-configuration.adoc[leveloffset=+1]

// Creating a manifest object that includes a customized br-ex bridge
include::modules/installation-two-node-creating-manifest-custom-br-ex.adoc[leveloffset=+1]

[role="_additional-resources"]
== Additional resources

* xref:../../../installing/installing_bare_metal/ipi/ipi-install-installation-workflow.adoc#creating-manifest-file-customized-br-ex-bridge_ipi-install-installation-workflow[Creating a manifest file for a customized br-ex bridge]
* link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/configuring_and_managing_high_availability_clusters/index[Configuring and managing high availability clusters in RHEL]

View File

@@ -0,0 +1 @@
../../modules

View File

@@ -0,0 +1 @@
../../snippets/

View File

@@ -0,0 +1 @@
../../modules

View File

@@ -0,0 +1 @@
../../snippets/