Mirror of https://github.com/openshift/openshift-docs.git, synced 2026-02-05 12:46:18 +01:00

Updated with its attributes

commit 5b034183fe (parent e5cfeb1c9d)
Author: xenolinux
Date: 2025-01-16 18:00:28 +05:30
Committed by: openshift-cherrypick-robot
16 changed files with 32 additions and 32 deletions
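
The change is mechanical: each literal occurrence of the phrase "hosted control planes" in these assemblies and modules is replaced with the `{hcp}` AsciiDoc attribute reference, which resolves to the same phrase at build time and keeps the terminology consistent across the documentation set. A minimal sketch of how such an attribute works, assuming a definition like the following (the exact definition lives in `_attributes/common-attributes.adoc` in the repository):

[source,asciidoc]
----
// assumed attribute definition in _attributes/common-attributes.adoc
:hcp: hosted control planes
----

Any page that includes the attributes file before the first `{hcp}` reference renders the attribute as the full phrase. This is also why one of the diffs below moves the `include::_attributes/common-attributes.adoc[]` line above a page title that now uses `{hcp}`: an attribute must be defined before it is referenced.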

View File

@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]
toc::[]
-You can deploy hosted control planes by configuring a cluster to function as a hosting cluster. The hosting cluster is an {product-title} cluster where the control planes are hosted. The hosting cluster is also known as the _management_ cluster.
+You can deploy {hcp} by configuring a cluster to function as a hosting cluster. The hosting cluster is an {product-title} cluster where the control planes are hosted. The hosting cluster is also known as the _management_ cluster.
[NOTE]
====

View File

@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]
toc::[]
-When you provision hosted control planes on bare metal, you use the Agent platform. The Agent platform and {mce} work together to enable disconnected deployments. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For an introduction to the central infrastructure management service, see link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/clusters/cluster_mce_overview#enable-cim[Enabling the central infrastructure management service].
+When you provision {hcp} on bare metal, you use the Agent platform. The Agent platform and {mce} work together to enable disconnected deployments. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For an introduction to the central infrastructure management service, see link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/clusters/cluster_mce_overview#enable-cim[Enabling the central infrastructure management service].
include::modules/hcp-dc-bm-arch.adoc[leveloffset=+1]
include::modules/hcp-dc-bm-reqs.adoc[leveloffset=+1]
@@ -51,4 +51,4 @@ include::modules/hcp-nodepool-hc.adoc[leveloffset=+2]
include::modules/hcp-dc-infraenv.adoc[leveloffset=+2]
include::modules/hcp-worker-hc.adoc[leveloffset=+2]
include::modules/hcp-bm-hosts.adoc[leveloffset=+2]
-include::modules/hcp-dc-scale-np.adoc[leveloffset=+2]
\ No newline at end of file
+include::modules/hcp-dc-scale-np.adoc[leveloffset=+2]

View File

@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]
toc::[]
-Before you get started with hosted control planes for {product-title}, you must properly label nodes so that the pods of hosted clusters can be scheduled into infrastructure nodes. Node labeling is also important for the following reasons:
+Before you get started with {hcp} for {product-title}, you must properly label nodes so that the pods of hosted clusters can be scheduled into infrastructure nodes. Node labeling is also important for the following reasons:
* To ensure high availability and proper workload deployment. For example, you can set the `node-role.kubernetes.io/infra` label to avoid having the control-plane workload count toward your {product-title} subscription.
* To ensure that control plane workloads are separate from other workloads in the management cluster.
@@ -14,7 +14,7 @@ Before you get started with hosted control planes for {product-title}, you must
//* To ensure that control plane workloads are configured at one of the following multi-tenancy distribution levels:
//** Everything shared: Control planes for hosted clusters can run on any node that is designated for control planes.
//** Request serving isolation: Serving pods are requested in their own dedicated nodes.
-//** Nothing shared: Every control plane has its own dedicated nodes.
+//** Nothing shared: Every control plane has its own dedicated nodes.
[IMPORTANT]
====
@@ -23,4 +23,4 @@ Do not use the management cluster for your workload. Workloads must not run on n
include::modules/hcp-labels-taints.adoc[leveloffset=+1]
include::modules/hcp-priority-classes.adoc[leveloffset=+1]
-include::modules/hcp-virt-taints-tolerations.adoc[leveloffset=+1]
\ No newline at end of file
+include::modules/hcp-virt-taints-tolerations.adoc[leveloffset=+1]
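
For context on the labeling this page describes, a node is typically marked as an infrastructure node with the `node-role.kubernetes.io/infra` label. A minimal sketch, where `worker-1` is a placeholder node name:

[source,terminal]
----
$ oc label node worker-1 node-role.kubernetes.io/infra=""
----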

View File

@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]
toc::[]
-Many factors, including hosted cluster workload and worker node count, affect how many hosted control planes can fit within a certain number of worker nodes. Use this sizing guide to help with hosted cluster capacity planning. This guidance assumes a highly available {hcp} topology. The load-based sizing examples were measured on a bare-metal cluster. Cloud-based instances might have different limiting factors, such as memory size.
+Many factors, including hosted cluster workload and worker node count, affect how many {hcp} can fit within a certain number of worker nodes. Use this sizing guide to help with hosted cluster capacity planning. This guidance assumes a highly available {hcp} topology. The load-based sizing examples were measured on a bare-metal cluster. Cloud-based instances might have different limiting factors, such as memory size.
You can override the following resource utilization sizing measurements and disable the metric service monitoring.
@@ -38,4 +38,4 @@ include::modules/hcp-shared-infra.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
-* xref:../../hosted_control_planes/hcp-prepare/hcp-sizing-guidance.adoc[Sizing guidance for {hcp}]
\ No newline at end of file
+* xref:../../hosted_control_planes/hcp-prepare/hcp-sizing-guidance.adoc[Sizing guidance for {hcp}]

View File

@@ -1,12 +1,12 @@
:_mod-docs-content-type: ASSEMBLY
[id="about-hcp-ha"]
-= About high availability for hosted control planes
include::_attributes/common-attributes.adoc[]
+= About high availability for {hcp}
:context: about-hcp-ha
toc::[]
-You can maintain high availability (HA) of hosted control planes by implementing the following actions:
+You can maintain high availability (HA) of {hcp} by implementing the following actions:
* Recover etcd members for a hosted cluster.
* Back up and restore etcd for a hosted cluster.

View File

@@ -26,7 +26,7 @@ You must meet the following prerequisites on the management cluster:
* You have access to the {oadp-short} subscription through a catalog source.
* You have access to a cloud storage provider that is compatible with {oadp-short}, such as S3, {azure-full}, {gcp-full}, or MinIO.
* In a disconnected environment, you have access to a self-hosted storage provider, for example link:https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/[{odf-full}] or link:https://min.io/[MinIO], that is compatible with {oadp-short}.
-* Your hosted control planes pods are up and running.
+* Your {hcp} pods are up and running.
[id="prepare-aws-oadp_{context}"]
== Preparing {aws-short} to use {oadp-short}

View File

@@ -5,11 +5,11 @@
[id="hcp-enable-manual-addon_{context}"]
= Manually enabling the hypershift-addon managed cluster add-on for local-cluster
-Enabling the hosted control planes feature automatically enables the `hypershift-addon` managed cluster add-on. If you need to enable the `hypershift-addon` managed cluster add-on manually, complete the following steps to use the `hypershift-addon` to install the HyperShift Operator on `local-cluster`.
+Enabling the {hcp} feature automatically enables the `hypershift-addon` managed cluster add-on. If you need to enable the `hypershift-addon` managed cluster add-on manually, complete the following steps to use the `hypershift-addon` to install the HyperShift Operator on `local-cluster`.
.Procedure
-. Create the `ManagedClusterAddon` HyperShift add-on by creating a file that resembles the following example:
+. Create the `ManagedClusterAddon` add-on named `hypershift-addon` by creating a file that resembles the following example:
+
[source,yaml]
----
@@ -17,7 +17,7 @@ apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: hypershift-addon
-  namespace: local-cluster
+  namespace: local-cluster
spec:
  installNamespace: open-cluster-management-agent-addon
----
@@ -29,9 +29,9 @@ spec:
$ oc apply -f <filename>
----
+
-Replace `filename` with the name of the file that you created.
+Replace `filename` with the name of the file that you created.
-. Confirm that the `hypershift-addon` is installed by running the following command:
+. Confirm that the `hypershift-addon` managed cluster add-on is installed by running the following command:
+
[source,terminal]
----
@@ -46,4 +46,4 @@ NAME AVAILABLE DEGRADED PROGRESSING
hypershift-addon True
----
-Your HyperShift add-on is installed and the hosting cluster is available to create and manage hosted clusters.
+Your `hypershift-addon` managed cluster add-on is installed and the hosting cluster is available to create and manage hosted clusters.
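
For reference, the confirmation step in this procedure reads the add-on status; a sketch of the command that produces output like the example in the hunk above, assuming the add-on runs on `local-cluster`:

[source,terminal]
----
$ oc get managedclusteraddons -n local-cluster hypershift-addon
NAME               AVAILABLE   DEGRADED   PROGRESSING
hypershift-addon   True
----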

View File

@@ -37,7 +37,7 @@ api-int IN A 1xx.2x.2xx.1xx
;
;EOF
----
-<1> The record refers to the IP address of the API load balancer that handles ingress and egress traffic for hosted control planes.
+<1> The record refers to the IP address of the API load balancer that handles ingress and egress traffic for {hcp}.
For {ibm-title} z/VM, add IP addresses that correspond to the IP address of the agent.
@@ -45,4 +45,4 @@ For {ibm-title} z/VM, add IP addresses that correspond to the IP address of the
----
compute-0 IN A 1xx.2x.2xx.1yy
compute-1 IN A 1xx.2x.2xx.1yy
-----
\ No newline at end of file
+----
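
A quick way to verify that records like these resolve, assuming `hosted.example.com` stands in for the cluster's base domain (the IP address is masked as in the example above):

[source,terminal]
----
$ dig +short api-int.hosted.example.com
1xx.2x.2xx.1xx
----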

View File

@@ -6,7 +6,7 @@
[id="hcp-labels-taints_{context}"]
= Labeling management cluster nodes
-Proper node labeling is a prerequisite to deploying hosted control planes.
+Proper node labeling is a prerequisite to deploying {hcp}.
As a management cluster administrator, you use the following labels and taints in management cluster nodes to schedule a control plane workload:
@@ -30,7 +30,7 @@ $ oc label node/worker-2a node/worker-2b topology.kubernetes.io/zone=rack2
Pods for a hosted cluster have tolerations, and the scheduler uses affinity rules to schedule them. Pods tolerate taints for `control-plane` and the `cluster` for the pods. The scheduler prioritizes the scheduling of pods into nodes that are labeled with `hypershift.openshift.io/control-plane` and `hypershift.openshift.io/cluster: ${HostedControlPlane Namespace}`.
-For the `ControllerAvailabilityPolicy` option, use `HighlyAvailable`, which is the default value that the hosted control planes command line interface, `hcp`, deploys. When you use that option, you can schedule pods for each deployment within a hosted cluster across different failure domains by setting `topology.kubernetes.io/zone` as the topology key. Control planes that are not highly available are not supported.
+For the `ControllerAvailabilityPolicy` option, use `HighlyAvailable`, which is the default value that the {hcp} command-line interface, `hcp`, deploys. When you use that option, you can schedule pods for each deployment within a hosted cluster across different failure domains by setting `topology.kubernetes.io/zone` as the topology key. Control planes that are not highly available are not supported.
.Procedure
@@ -43,4 +43,4 @@ To enable a hosted cluster to require its pods to be scheduled into infrastructu
role.kubernetes.io/infra: ""
----
-This way, hosted control planes for each hosted cluster are eligible infrastructure node workloads, and you do not need to entitle the underlying {product-title} nodes.
+This way, {hcp} for each hosted cluster are eligible infrastructure node workloads, and you do not need to entitle the underlying {product-title} nodes.
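
The `role.kubernetes.io/infra: ""` fragment in the last hunk is a node selector on the hosted cluster. A minimal sketch of where it sits, assuming it belongs under `spec.nodeSelector` of the `HostedCluster` resource that this module configures (names are placeholders):

[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: example        # placeholder hosted cluster name
  namespace: clusters  # placeholder namespace
spec:
  nodeSelector:
    role.kubernetes.io/infra: ""
----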

View File

@@ -6,7 +6,7 @@
[id="hcp-managed-aws-infra-mgmt_{context}"]
= Infrastructure requirements for a management {aws-short} account
-When your infrastructure is managed by hosted control planes in a management AWS account, the infrastructure requirements differ depending on whether your clusters are public, private, or a combination.
+When your infrastructure is managed by {hcp} in a management AWS account, the infrastructure requirements differ depending on whether your clusters are public, private, or a combination.
For accounts with public clusters, the infrastructure requirements are as follows:

View File

@@ -20,7 +20,7 @@ You can use the {mce-short} with {product-title} as a standalone cluster manager
A management cluster is also known as the hosting cluster.
====
-You can deploy {product-title} clusters by using two different control plane configurations: standalone or hosted control planes. The standalone configuration uses dedicated virtual machines or physical machines to host the control plane. With {hcp} for {product-title}, you create control planes as pods on a management cluster without the need for dedicated virtual or physical machines for each control plane.
+You can deploy {product-title} clusters by using two different control plane configurations: standalone or {hcp}. The standalone configuration uses dedicated virtual machines or physical machines to host the control plane. With {hcp} for {product-title}, you create control planes as pods on a management cluster without the need for dedicated virtual or physical machines for each control plane.
.{rh-rhacm} and the {mce-short} introduction diagram
image::rhacm-flow.png[{rh-rhacm} and the {mce-short} introduction diagram]

View File

@@ -10,7 +10,7 @@ If the management cluster component fails, your workload remains unaffected. In
The following table covers the impact of a failed management cluster component on the control plane and the data plane. However, the table does not cover all scenarios for the management cluster component failures.
-.Impact of the failed component on hosted control planes
+.Impact of the failed component on {hcp}
[cols="1,1,1",options="header"]
|===
|Name of the failed component |Hosted control plane API status |Hosted cluster data plane status

View File

@@ -7,4 +7,4 @@
The `maxPods` setting for each node affects how many hosted clusters can fit in a control-plane node. It is important to note the `maxPods` value on all control-plane nodes. Plan for about 75 pods for each highly available hosted control plane.
-For bare-metal nodes, the default `maxPods` setting of 250 is likely to be a limiting factor because roughly three hosted control planes fit for each node given the pod requirements, even if the machine has plenty of resources to spare. Setting the `maxPods` value to 500 by configuring the `KubeletConfig` value allows for greater hosted control plane density, which can help you take advantage of additional compute resources.
+For bare-metal nodes, the default `maxPods` setting of 250 is likely to be a limiting factor because roughly three {hcp} fit for each node given the pod requirements, even if the machine has plenty of resources to spare. Setting the `maxPods` value to 500 by configuring the `KubeletConfig` value allows for greater hosted control plane density, which can help you take advantage of additional compute resources.
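
A sketch of raising the limit as described above, assuming the standard worker machine config pool label; the resource name is a placeholder:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods  # placeholder name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    maxPods: 500
----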

View File

@@ -5,6 +5,6 @@
[id="hcp-resource-limit_{context}"]
= Request-based resource limit
-The maximum number of hosted control planes that the cluster can host is calculated based on the hosted control plane CPU and memory requests from the pods.
+The maximum number of {hcp} that the cluster can host is calculated based on the hosted control plane CPU and memory requests from the pods.
-A highly available hosted control plane consists of 78 pods that request 5 vCPUs and 18 GB memory. These baseline numbers are compared to the cluster worker node resource capacities to estimate the maximum number of hosted control planes.
+A highly available hosted control plane consists of 78 pods that request 5 vCPUs and 18 GB memory. These baseline numbers are compared to the cluster worker node resource capacities to estimate the maximum number of {hcp}.
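
A worked example of that estimate, with hypothetical capacities: a worker node with 64 vCPUs and 256 GB of memory supports at most 64 / 5 = 12.8 control planes by CPU and 256 / 18 ≈ 14.2 by memory, so the CPU request is the limiting factor and roughly 12 highly available hosted control planes fit on that node.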

View File

@@ -64,7 +64,7 @@ You can use the `hypershift.openshift.io` API resources, such as, `HostedCluster
The API version policy generally aligns with the policy for link:https://kubernetes.io/docs/reference/using-api/#api-versioning[Kubernetes API versioning].
-Updates for {hcp} involve updating the hosted cluster and the node pools. For more information, see "Updates for hosted control planes".
+Updates for {hcp} involve updating the hosted cluster and the node pools. For more information, see "Updates for {hcp}".
[id="hcp-versioning-cpo_{context}"]
== Control Plane Operator
@@ -73,4 +73,4 @@ The Control Plane Operator is released as part of each {product-title} payload r
* amd64
* arm64
-* multi-arch
\ No newline at end of file
+* multi-arch

View File

@@ -6,7 +6,7 @@
[id="hosted-restart-hcp-components_{context}"]
= Restarting hosted control plane components
-If you are an administrator for hosted control planes, you can use the `hypershift.openshift.io/restart-date` annotation to restart all control plane components for a particular `HostedCluster` resource. For example, you might need to restart control plane components for certificate rotation.
+If you are an administrator for {hcp}, you can use the `hypershift.openshift.io/restart-date` annotation to restart all control plane components for a particular `HostedCluster` resource. For example, you might need to restart control plane components for certificate rotation.
.Procedure
@@ -50,4 +50,4 @@ The following components are restarted:
* openshift-oauth-apiserver
* packageserver
* redhat-marketplace-catalog
-* redhat-operators-catalog
\ No newline at end of file
+* redhat-operators-catalog
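
A sketch of applying the annotation that this module describes, assuming a hosted cluster named `example` in the `clusters` namespace:

[source,terminal]
----
$ oc annotate hostedcluster -n clusters example \
    hypershift.openshift.io/restart-date="$(date --iso-8601=seconds)"
----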