mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

OSDOCS-15427: Completed CQA for Deploying HCP on IBM Power

This commit is contained in:
dfitzmau
2025-08-27 12:56:08 +01:00
committed by openshift-cherrypick-robot
parent aecd1673ee
commit 2d49b0788f
14 changed files with 103 additions and 92 deletions

View File

@@ -6,21 +6,22 @@ include::_attributes/common-attributes.adoc[]
toc::[]
You can deploy {hcp} by configuring a cluster to function as a hosting cluster. This configuration provides an efficient and scalable solution for managing many clusters. The hosting cluster is an {product-title} cluster that hosts control planes. The hosting cluster is also known as the _management_ cluster.
[NOTE]
====
The _management_ cluster is not the _managed_ cluster. A managed cluster is a cluster that the hub cluster manages.
====
The {mce-short} supports only the default `local-cluster`, which is a managed hub cluster, and the hub cluster as the hosting cluster.
To provision {hcp} on bare-metal infrastructure, you can use the Agent platform. The Agent platform uses the central infrastructure management service to add compute nodes to a hosted cluster. For more information, see "Enabling the central infrastructure management service".
You must start each {ibm-power-title} host with a Discovery image that the central infrastructure management provides. After each host starts, it runs an Agent process to discover the details of the host and completes the installation. An Agent custom resource represents each host.
When you create a hosted cluster with the Agent platform, HyperShift installs the Agent Cluster API provider in the hosted control plane namespace.
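A quick way to confirm that the provider is running is to list the deployments in the hosted control plane namespace after you create the hosted cluster. The following terminal sketch assumes that the provider deployment name contains `capi-provider`, which can vary by version:

[source,terminal]
----
# List the control plane deployments; a deployment such as capi-provider is expected (the name is an assumption).
$ oc -n <hosted_control_plane_namespace> get deployments
----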
// Prerequisites to configure HCP on IBM Power
include::modules/hcp-ibm-power-prereqs.adoc[leveloffset=+1]
[role="_additional-resources"]
@@ -28,44 +29,50 @@ include::modules/hcp-ibm-power-prereqs.adoc[leveloffset=+1]
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.14/html/clusters/cluster_mce_overview#advanced-config-engine[Advanced configuration]
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.14/html/clusters/cluster_mce_overview#enable-cim[Enabling the central infrastructure management service]
* xref:../../hosted_control_planes/hcp-prepare/hcp-cli.adoc#hcp-cli-terminal_hcp-cli[Installing the hosted control plane command-line interface]
* xref:../../hosted_control_planes/hcp-prepare/hcp-enable-disable.adoc#hcp-enable-manual_hcp-enable-disable[Manually enabling the {hcp} feature]
* xref:../../hosted_control_planes/hcp-prepare/hcp-enable-disable.adoc#hcp-disable_hcp-enable-disable[Disabling the {hcp} feature]
// IBM Power infrastructure requirements
include::modules/hcp-ibm-power-infra-reqs.adoc[leveloffset=+1]
// DNS configuration for {hcp} on {ibm-power-title}
include::modules/hcp-ibm-power-dns.adoc[leveloffset=+1]
// Defining a custom DNS name
include::modules/hcp-custom-dns.adoc[leveloffset=+2]
[id="hcp-bm-create-hc-ibm-power"]
== Creating a hosted cluster on bare metal
You can create a hosted cluster or import one. When the Assisted Installer is enabled as an add-on to {mce-short} and you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace.
include::modules/hcp-bm-hc.adoc[leveloffset=+2]
// Creating a hosted cluster by using the CLI
include::modules/hcp-bm-hc.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../hosted_control_planes/hcp-prepare/hcp-requirements.adoc[Requirements for hosted control planes]
* xref:../../hosted_control_planes/hcp-deploy/hcp-deploy-bm.adoc#hcp-bm-dns_hcp-deploy-bm[DNS configurations on bare metal]
* xref:../../hosted_control_planes/hcp-import.adoc[Manually importing a hosted cluster]
* xref:../../hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc#hcp-dc-extract_hcp-deploy-dc-bm[Extracting the release image digest]
[id="hcp-ibm-power-heterogeneous-nodepools_{context}"]
== Creating heterogeneous node pools on agent hosted clusters
On the agent platform, you can create heterogeneous node pools so that your clusters can run diverse machine types, such as `x86_64` or `ppc64le`, within a single hosted cluster.
include::modules/hcp-ibm-power-create-heterogeneous-nodepools-agent-hc-con.adoc[leveloffset=+2]
// About creating heterogeneous node pools on agent hosted clusters
include::modules/hcp-ibm-power-create-heterogeneous-nodepools-agent-hc-con.adoc[leveloffset=+1]
// Creating the AgentServiceConfig custom resource
include::modules/hcp-ibm-power-create-heterogeneous-nodepools-agent-hc.adoc[leveloffset=+2]
// Create an agent cluster
include::modules/hcp-ibm-power-heterogeneous-nodepools-create-agent-cluster.adoc[leveloffset=+2]
// Creating heterogeneous node pools
include::modules/hcp-create-heterogeneous-nodepools.adoc[leveloffset=+2]
// DNS configuration for hosted control planes
include::modules/hcp-ibm-power-heterogeneous-nodepools-agent-hc-dns.adoc[leveloffset=+2]
// Creating infrastructure environment resources
include::modules/hcp-create-infraenv.adoc[leveloffset=+2]
// Adding agents to the heterogeneous cluster
include::modules/hcp-adding-agents.adoc[leveloffset=+2]
// Scaling the node pool
include::modules/hcp-scale-the-nodepool.adoc[leveloffset=+2]

View File

@@ -6,7 +6,7 @@
[id="hcp-adding-agents_{context}"]
= Adding agents to the heterogeneous cluster
You add agents by manually configuring the machine to boot with a live ISO. You can download the live ISO and use it to boot a bare-metal node or a virtual machine. On boot, the node communicates with the `assisted-service` and registers as an agent in the same namespace as the `InfraEnv` resource. After the creation of each agent, you can optionally set its `installation_disk_id` and `hostname` parameters in the specifications. You can then approve the agent to indicate that the agent is ready for use.
.Procedure
@@ -14,7 +14,7 @@ You add agents by manually configuring the machine to boot with a live ISO. You
+
[source,terminal]
----
$ oc -n <hosted_control_plane_namespace> get agents
----
+
.Example output
@@ -28,21 +28,21 @@ e57a637f-745b-496e-971d-1abbf03341ba auto-assign
+
[source,terminal]
----
$ oc -n <hosted_control_plane_namespace> patch agent 86f7ac75-4fc4-4b36-8130-40fa12602218 -p '{"spec":{"installation_disk_id":"/dev/sda","approved":true,"hostname":"worker-0.example.krnl.es"}}' --type merge
----
. Patch the second agent by running the following command:
+
[source,terminal]
----
$ oc -n <hosted_control_plane_namespace> patch agent 23d0c614-2caa-43f5-b7d3-0b3564688baa -p '{"spec":{"installation_disk_id":"/dev/sda","approved":true,"hostname":"worker-1.example.krnl.es"}}' --type merge
----
. Check the agent approval status by running the following command:
+
[source,terminal]
----
$ oc -n <hosted_control_plane_namespace> get agents
----
+
.Example output

View File

@@ -8,21 +8,23 @@
[id="hcp-bm-hc_{context}"]
= Creating a hosted cluster by using the CLI
On bare-metal infrastructure, you can create or import a hosted cluster. After you enable the Assisted Installer as an add-on to {mce-short} and you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace. The Agent Cluster API provider connects a management cluster that hosts the control plane and a hosted cluster that consists of only the compute nodes.
.Prerequisites
- Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as any existing managed cluster. Otherwise, the {mce-short} cannot manage the hosted cluster.
- Do not use the word `clusters` as a hosted cluster name.
- You cannot create a hosted cluster in the namespace of a {mce-short} managed cluster.
- For best security and management practices, create a hosted cluster separate from other hosted clusters.
- Verify that you have a default storage class configured for your cluster. Otherwise, you might see pending persistent volume claims (PVCs).
- By default, when you use the `hcp create cluster agent` command, the command creates a hosted cluster with configured node ports. The preferred publishing strategy for hosted clusters on bare metal exposes services through a load balancer. If you create a hosted cluster by using the web console or by using {rh-rhacm-title}, to set a publishing strategy for a service besides the Kubernetes API server, you must manually specify the `servicePublishingStrategy` information in the `HostedCluster` custom resource.
- Ensure that you meet the requirements described in "Preparing to deploy {hcp} on bare metal", which includes requirements related to infrastructure, firewalls, ports, and services. For example, those requirements describe how to add the appropriate zone labels to the bare-metal hosts in your management cluster, as shown in the following example commands:
- Ensure that you meet the requirements described in "Requirements for {hcp} on bare metal", which includes requirements related to infrastructure, firewalls, ports, and services. For example, those requirements describe how to add the appropriate zone labels to the bare-metal hosts in your management cluster, as shown in the following example commands:
+
[source,terminal]
----
@@ -50,7 +52,7 @@ $ oc label node [compute-node-3] topology.kubernetes.io/zone=zone3
$ oc create ns <hosted_cluster_namespace>
----
+
Replace `<hosted_cluster_namespace>` with an identifier for your hosted cluster namespace. Typically, the HyperShift Operator creates the namespace. However, during the hosted cluster creation process on bare-metal infrastructure, a generated Cluster API provider role requires that the namespace already exists.
. Create the configuration file for your hosted cluster by entering the following command:
+
@@ -73,28 +75,28 @@ $ hcp create cluster agent \
--ssh-key <home_directory>/<path_to_ssh_key>/<ssh_key> > hosted-cluster-config.yaml <12>
----
+
<1> Specify the name of your hosted cluster, such as `example`.
<2> Specify the path to your pull secret, such as `/user/name/pullsecret`.
<3> Specify your hosted control plane namespace, such as `clusters-example`. Ensure that agents are available in this namespace by using the `oc get agent -n <hosted_control_plane_namespace>` command.
<4> Specify your base domain, such as `krnl.es`.
<5> The `--api-server-address` flag defines the IP address that the hosted cluster uses for Kubernetes API communication. If you do not set the `--api-server-address` flag, you must log in to connect to the management cluster.
<6> Specify the etcd storage class name, such as `lvm-storageclass`.
<7> Specify the path to your SSH public key. The default file path is `~/.ssh/id_rsa.pub`.
<8> Specify your hosted cluster namespace.
<9> Specify the availability policy for the hosted control plane components. Supported options are `SingleReplica` and `HighlyAvailable`. The default value is `HighlyAvailable`.
<10> Specify the supported {product-title} version that you want to use, such as `4.19.0-multi`. If you are using a disconnected environment, replace `<ocp_release_image>` with the digest image. To extract the {product-title} release image digest, see _Extracting the {product-title} release image digest_.
<11> Specify the node pool replica count, such as `3`. You must specify the replica count as `0` or greater to create the same number of replicas. Otherwise, no node pools are created.
<12> After the `--ssh-key` flag, specify the path to the SSH key, such as `user/.ssh/id_rsa`.
. Configure the service publishing strategy. By default, hosted clusters use the `NodePort` service publishing strategy because node ports are always available without additional infrastructure. However, you can configure the service publishing strategy to use a load balancer.
** If you are using the default `NodePort` strategy, configure the DNS to point to the hosted cluster compute nodes, not the management cluster nodes. For more information, see "DNS configurations on bare metal".
** For production environments, use the `LoadBalancer` strategy because this strategy provides certificate handling and automatic DNS resolution. The following example shows how to change the service publishing strategy to `LoadBalancer` in your hosted cluster configuration file:
+
[source,yaml]
----
# ...
spec:
services:
- service: APIServer
@@ -114,7 +116,7 @@ spec:
type: Route
sshKey:
name: <ssh_key>
# ...
----
+
<1> Specify `LoadBalancer` as the API Server type. For all other services, specify `Route` as the type.
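After you apply the configuration, you can confirm that the API server is exposed through a load balancer. The following check is a sketch; the service name `kube-apiserver` in the hosted control plane namespace is an assumption:

[source,terminal]
----
# Confirm that the API server service is of type LoadBalancer and has an external address (the service name is an assumption).
$ oc -n <hosted_control_plane_namespace> get service kube-apiserver
----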
@@ -126,7 +128,7 @@ spec:
$ oc apply -f hosted-cluster-config.yaml
----
. Check for the creation of the hosted cluster, node pools, and pods by entering the following commands:
+
[source,terminal]
----
@@ -149,7 +151,7 @@ $ oc get nodepool \
$ oc get pods -n <hosted_cluster_namespace>
----
. Confirm that the hosted cluster is ready. The cluster is ready when its status is `Available: True`, the node pool status shows `AllMachinesReady: True`, and all cluster Operators are healthy.
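+
The following commands show one way to check these conditions. The use of `hcp create kubeconfig` to fetch the hosted cluster kubeconfig is a sketch and assumes that the `hcp` CLI is installed:
+
[source,terminal]
----
# Review the AVAILABLE column for the hosted cluster.
$ oc get hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace>

# Generate a kubeconfig for the hosted cluster and verify that the cluster Operators are healthy.
$ hcp create kubeconfig --name <hosted_cluster_name> > <hosted_cluster_name>-kubeconfig
$ oc get clusteroperators --kubeconfig <hosted_cluster_name>-kubeconfig
----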
. Install MetalLB in the hosted cluster:
+
@@ -196,6 +198,7 @@ spec:
source: redhat-operators
sourceNamespace: openshift-marketplace
installPlanApproval: Automatic
# ...
----
+
.. Apply the file by entering the following command:
@@ -227,6 +230,7 @@ metadata:
spec:
ipAddressPools:
- metallb
# ...
----
+
.. Apply the configuration by entering the following command:
@@ -236,7 +240,7 @@ spec:
$ oc apply -f deploy-metallb-ipaddresspool.yaml
----
+
.. Verify the installation of MetalLB by checking the Operator status, the IP address pool, and the `L2Advertisement` resource by entering the following commands:
+
[source,terminal]
----
@@ -279,6 +283,7 @@ spec:
selector:
ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
type: LoadBalancer
# ...
----
+
.. Apply the configuration by entering the following command:
@@ -343,7 +348,7 @@ Ensure that all Operators show `AVAILABLE: True`, `PROGRESSING: False`, and `DEG
$ oc get nodes
----
+
Ensure that each node has the `READY` status.
. Test access to the console by entering the following URL in a web browser:
+

View File

@@ -6,7 +6,7 @@
[id="hcp-create-heterogeneous-nodepools_{context}"]
= Creating heterogeneous node pools
You create heterogeneous node pools by using the `NodePool` custom resource (CR), so that you can optimize costs and performance by associating different workloads to specific hardware.
.Procedure
@@ -40,4 +40,4 @@ spec:
EOF
----
+
<1> The selector block selects the agents that match the specified label. To create a node pool of the `ppc64le` architecture with zero replicas, specify `ppc64le`. This setting ensures that the selector block selects only agents with the `ppc64le` architecture during a scaling operation.
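The following minimal sketch shows how such a selector can fit into a `NodePool` CR. The field layout under `spec.platform.agent` and the label key `inventory.agent-install.openshift.io/cpu-architecture` are assumptions and might differ in your environment:

[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: <hosted_cluster_name>-ppc64le
  namespace: <hosted_cluster_namespace>
spec:
  arch: ppc64le
  clusterName: <hosted_cluster_name>
  replicas: 0
  management:
    upgradeType: InPlace
  release:
    image: <ocp_release_image>
  platform:
    type: Agent
    agent:
      agentLabelSelector:   # selects only agents that carry the matching architecture label
        matchLabels:
          inventory.agent-install.openshift.io/cpu-architecture: ppc64le   # label key is an assumption
----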

View File

@@ -6,11 +6,11 @@
[id="hcp-create-infraenv_{context}"]
= Creating infrastructure environment resources
For heterogeneous node pools, you must create an `InfraEnv` custom resource (CR) for each architecture. This configuration ensures that the correct architecture-specific operating system and boot artifacts are used during the node provisioning process. For example, for node pools with `x86_64` and `ppc64le` architectures, create an `InfraEnv` CR for `x86_64` and `ppc64le`.
[NOTE]
====
Before starting the procedure, ensure that you add the operating system images for both `x86_64` and `ppc64le` architectures to the `AgentServiceConfig` resource. After this, you can use the `InfraEnv` resources to get the minimal ISO image.
====
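The following minimal sketch shows the shape of an `InfraEnv` CR for the `ppc64le` architecture. The pull secret name and the SSH key value are placeholders, and the trimmed field set is an assumption:

[source,yaml]
----
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: <hosted_cluster_name>-ppc64le
  namespace: <hosted_control_plane_namespace>
spec:
  cpuArchitecture: ppc64le            # create a second InfraEnv with x86_64 for the other pool
  pullSecretRef:
    name: <pull_secret_name>          # placeholder: secret in the same namespace
  sshAuthorizedKey: <ssh_public_key>
----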
.Procedure
@@ -36,7 +36,7 @@ EOF
<1> The hosted cluster name.
<2> The `x86_64` architecture.
<3> The hosted control plane namespace.
<4> The SSH public key.
. Create the `InfraEnv` resource with `ppc64le` architecture for heterogeneous node pools by running the following command:
+
@@ -59,18 +59,18 @@ EOF
<1> The hosted cluster name.
<2> The `ppc64le` architecture.
<3> The hosted control plane namespace.
<4> The SSH public key.
. Verify the successful creation of the `InfraEnv` resources by running the following commands:
** Verify the successful creation of the `x86_64` `InfraEnv` resource:
+
[source,terminal]
----
$ oc describe InfraEnv <hosted_cluster_name>-<arch_x86>
----
+
** Verify the successful creation of the `ppc64le` `InfraEnv` resource:
+
[source,terminal]
----

View File

@@ -11,22 +11,22 @@
[id="hcp-custom-dns_{context}"]
= Defining a custom DNS name
As a cluster administrator, you can create a hosted cluster with an external API DNS name that differs from the internal endpoint that is used for node bootstraps and control plane communication. You might want to define a different DNS name for the following reasons:
* To replace the user-facing TLS certificate with one from a public CA without breaking the control plane functions that bind to the internal root CA.
* To support split-horizon DNS and NAT scenarios.
* To ensure a similar experience to standalone control planes, where you can use functions, such as the `Show Login Command` function, with the correct `kubeconfig` and DNS configuration.
You can define a DNS name either during your initial setup or during postinstallation operations, by entering a domain name in the `kubeAPIServerDNSName` parameter of a `HostedCluster` object.
.Prerequisites
* You have a valid TLS certificate that covers the DNS name that you set in the `kubeAPIServerDNSName` parameter.
* You have a resolvable DNS name that points to the correct and reachable address.
.Procedure
* In the specification for the `HostedCluster` object, add the `kubeAPIServerDNSName` parameter with the address for the domain, and specify which certificate to use, as shown in the following example:
+
[source,yaml]
----
@@ -41,20 +41,20 @@ spec:
- yyy.example.com
servingCertificate:
name: <my_serving_certificate>
  kubeAPIServerDNSName: <custom_address> # <1>
----
+
<1> The value for the `kubeAPIServerDNSName` parameter must be a valid and addressable domain.
After you define the `kubeAPIServerDNSName` parameter and specify the certificate, the Control Plane Operator controllers create a `kubeconfig` file named `custom-admin-kubeconfig` and store the file in the `HostedControlPlane` namespace. The certificates are generated from the root CA, and the `HostedControlPlane` namespace manages their expiration and renewal.
The Control Plane Operator reports a new `kubeconfig` file named `CustomKubeconfig` in the `HostedControlPlane` namespace. That file uses the new server that you define in the `kubeAPIServerDNSName` parameter.
The custom `kubeconfig` file is referenced as `CustomKubeconfig` in the `status` parameter of the `HostedCluster` object. The `CustomKubeConfig` parameter is optional, and you can add the parameter only if the `kubeAPIServerDNSName` parameter is not empty. After you set the `CustomKubeConfig` parameter, a secret named `<hosted_cluster_name>-custom-admin-kubeconfig` is generated in the `HostedCluster` namespace. You can use the secret to access the `HostedCluster` API server. If you remove the `CustomKubeConfig` parameter during postinstallation operations, all related secrets and status references are deleted.
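A minimal sketch for retrieving and using that secret follows; the `kubeconfig` data key inside the secret is an assumption:

[source,terminal]
----
# Extract the custom kubeconfig from the generated secret (the data key name is an assumption).
$ oc get secret <hosted_cluster_name>-custom-admin-kubeconfig -n <hosted_cluster_namespace> \
  -o jsonpath='{.data.kubeconfig}' | base64 -d > custom-admin-kubeconfig

# Use the extracted kubeconfig to reach the API server through the custom DNS name.
$ oc --kubeconfig custom-admin-kubeconfig get nodes
----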
[NOTE]
====
Defining a custom DNS name does not directly affect the data plane, so no rollouts are expected to occur. The `HostedControlPlane` namespace receives the changes from the HyperShift Operator and deletes the corresponding parameters.
====
If you remove the `kubeAPIServerDNSName` parameter from the specification for the `HostedCluster` object, all newly generated secrets and the `CustomKubeconfig` reference are removed from the cluster and from the `status` parameter.

View File

@@ -6,13 +6,13 @@
[id="hcp-ibm-power-create-heterogeneous-nodepools-agent-hc-con_{context}"]
= About creating heterogeneous node pools on agent hosted clusters
A node pool is a group of nodes within a cluster that share the same configuration. Heterogeneous node pools have different configurations, so that you can create pools that are optimized for various workloads.
You can create heterogeneous node pools on the agent platform. The platform enables clusters to run diverse machine types, such as `x86_64` or `ppc64le`, within a single hosted cluster.
Creating a heterogeneous node pool requires completion of the following general steps:
* Create an `AgentServiceConfig` custom resource (CR) that informs the Operator how much storage it needs for components such as the database and filesystem. The CR also defines which {product-title} versions to support.
* Create an agent cluster.
* Create the heterogeneous node pool.
* Configure DNS for hosted control planes.

View File

@@ -5,7 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="hcp-ibm-power-create-heterogeneous-nodepools-agent-hc_{context}"]
= Creating the AgentServiceConfig custom resource
To create heterogeneous node pools on an agent hosted cluster, you need to create the `AgentServiceConfig` CR with two heterogeneous architecture operating system (OS) images.
.Procedure
@@ -50,7 +50,7 @@ EOF
<3> Specify the current version of {product-title}.
<4> Specify the current {product-title} release version for x86.
<5> Specify the ISO URL for x86.
<6> Specify the root filesystem URL for x86.
<7> Specify the CPU architecture for x86.
<8> Specify the current {product-title} version.
<9> Specify the {product-title} release version for `ppc64le`.
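The callouts reference entries in the `osImages` list of the `AgentServiceConfig` CR. The following minimal sketch shows one entry per architecture; the URLs and version strings are placeholders, the storage fields are omitted, and the field names follow the `AgentServiceConfig` API as an assumption:

[source,yaml]
----
apiVersion: agent-install.openshift.io/v1beta1
kind: AgentServiceConfig
metadata:
  name: agent
spec:
  # databaseStorage, filesystemStorage, and imageStorage are omitted from this sketch.
  osImages:
  - openshiftVersion: "<ocp_version>"
    version: "<coreos_version>"
    url: "<x86_64_live_iso_url>"
    rootFSUrl: "<x86_64_rootfs_url>"
    cpuArchitecture: x86_64
  - openshiftVersion: "<ocp_version>"
    version: "<coreos_version>"
    url: "<ppc64le_live_iso_url>"
    rootFSUrl: "<ppc64le_rootfs_url>"
    cpuArchitecture: ppc64le
----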

View File

@@ -6,11 +6,11 @@
[id="hcp-ibm-power-dns_{context}"]
= DNS configuration for {hcp} on {ibm-power-title}
Clients outside the network can access the API server for the hosted cluster. A DNS entry must exist for the `api.<hosted_cluster_name>.<basedomain>` entry that points to the destination where the API server is reachable.
The DNS entry can be as simple as a record that points to one of the nodes in the managed cluster that runs the hosted control plane.
The entry can also point to a deployed load balancer to redirect incoming traffic to the ingress pods.
See the following example of a DNS configuration:
@@ -33,7 +33,7 @@ api IN A 1xx.2x.2xx.1xx <1>
api-int IN A 1xx.2x.2xx.1xx
;
;
*.apps.<hosted_cluster_name>.<basedomain> IN A 1xx.2x.2xx.1xx
;
;EOF
----

View File

@@ -6,6 +6,6 @@
[id="hcp-ibm-power-heterogeneous-nodepools-agent-hc-dns_{context}"]
= DNS configuration for hosted control planes
A Domain Name System (DNS) configuration for hosted control planes enables external clients to reach the ingress controllers, so that the clients can route traffic to internal components. Configuring this setting ensures that traffic is routed to either a `ppc64le` or an `x86_64` compute node.
You can point an `*.apps.<cluster_name>` record to either of the compute nodes that host the ingress application. Or, if you can set up a load balancer on top of the compute nodes, point the record to that load balancer. When you create a heterogeneous node pool, make sure that the compute nodes can reach each other, or keep them in the same network.
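A wildcard record sketch, mirroring the earlier zone-file example, might look like the following; the IP address is a placeholder for a compute node or for the load balancer in front of the compute nodes:

[source,text]
----
; Route application traffic to a ppc64le or x86_64 compute node, or to a load balancer.
*.apps.<cluster_name>.<basedomain> IN A 1xx.2x.2xx.1xx
----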

View File

@@ -6,7 +6,7 @@
[id="hcp-ibm-power-heterogeneous-nodepools-create-agent-cluster_{context}"]
= Create an agent cluster
An agent cluster is a cluster that you manage and provision by using an agent-based approach. An agent cluster can use heterogeneous node pools, which allows the use of different types of compute nodes within the same cluster.
.Prerequisites

View File

@@ -8,6 +8,5 @@
The Agent platform does not create any infrastructure, but requires the following resources for infrastructure:
* Agents: An _Agent_ represents a host that boots with a Discovery image and that you can provision as an {product-title} node.
* DNS: The API and Ingress endpoints must be routable.
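One quick way to check that these endpoints are routable is to resolve them from a workstation; this sketch assumes that the `dig` utility is available:

[source,terminal]
----
# Confirm that the API and wildcard application records resolve to the expected addresses.
$ dig +short api.<hosted_cluster_name>.<basedomain>
$ dig +short console-openshift-console.apps.<hosted_cluster_name>.<basedomain>
----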

View File

@@ -15,10 +15,10 @@
$ oc get managedclusters local-cluster
----
* You need a hosting cluster with at least 3 compute nodes to run the HyperShift Operator.
* You need to enable the central infrastructure management service. For more information, see "Enabling the central infrastructure management service".
* You need to install the hosted control planes command-line interface. For more information, see "Installing the hosted control plane command-line interface".
The {hcp} feature is enabled by default. If you disabled the feature and want to manually enable the feature, see "Manually enabling the {hcp} feature". If you need to disable the feature, see "Disabling the {hcp} feature".

View File

@@ -6,7 +6,7 @@
[id="hcp-scale-the-nodepool_{context}"]
= Scaling the node pool
After you approve your agents, you can scale the node pools. The `agentLabelSelector` value that you configured in the node pool ensures that only matching agents get added to the cluster. This also helps scale down the node pool. To remove specific architecture nodes from the cluster, scale down the corresponding node pool.
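The following sketch previews the scale operation that the procedure walks through; the assumption is that the `NodePool` resources live in your hosted cluster namespace:

[source,terminal]
----
# Scale the ppc64le node pool up to add matching agents, or down to zero to remove its nodes.
$ oc -n <hosted_cluster_namespace> scale nodepool <nodepool_name> --replicas=2
----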
.Procedure
@@ -53,7 +53,7 @@ BMH: ocp-worker-0 Agent: d9198891-39f4-4930-a679-65fb142b108b State: known-unbou
BMH: ocp-worker-1 Agent: da503cf1-a347-44f2-875c-4960ddb04091 State: insufficient
----
. After the agents reach the `added-to-existing-cluster` state, verify that the {product-title} nodes are ready by running the following command:
+
[source,terminal]
----
@@ -67,7 +67,7 @@ ocp-worker-1 Ready worker 5m41s v1.24.0+3882f8f
ocp-worker-2 Ready worker 6m3s v1.24.0+3882f8f
----
. Adding workloads to the nodes can reconcile some cluster operators. The following command shows that two machines were created after you scaled up the node pool:
+
[source,terminal]
----