
Moving IBM Z docs for HCP

This commit is contained in:
Laura Hinson
2024-09-04 20:12:41 -04:00
committed by openshift-cherrypick-robot
parent 65be5318f9
commit e1cfc55c50
12 changed files with 589 additions and 9 deletions

View File

@@ -2413,8 +2413,6 @@ Topics:
File: hcp-manage-virt
- Name: Managing hosted control planes on non-bare metal agent machines
File: hcp-manage-non-bm
- Name: Managing hosted control planes on IBM Z
File: hcp-manage-ibmz
- Name: Managing hosted control planes on IBM Power
File: hcp-manage-ibmpower
- Name: Preparing to deploy hosted control planes in a disconnected environment

View File

@@ -5,3 +5,68 @@ include::_attributes/common-attributes.adoc[]
:context: hcp-deploy-ibmz
toc::[]
You can deploy {hcp} by configuring a cluster to function as a management cluster. The management cluster is the {product-title} cluster where the control planes are hosted. The management cluster is also known as the _hosting_ cluster.
[NOTE]
====
The _management_ cluster is not the _managed_ cluster. A managed cluster is a cluster that the hub cluster manages.
====
You can convert a managed cluster to a management cluster by using the `hypershift` add-on to deploy the HyperShift Operator on that cluster. Then, you can start to create the hosted cluster.
{mce-short} version 2.5 supports only the default `local-cluster`, which is a hub cluster that is managed, and the hub cluster as the management cluster.
:FeatureName: {hcp-capital} on {ibm-z-title}
include::snippets/technology-preview.adoc[]
To provision {hcp} on bare metal, you can use the Agent platform. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For more information, see _Enabling the central infrastructure management service_.
Each {ibm-z-title} system host must be started with the PXE images provided by the central infrastructure management. After each host starts, it runs an Agent process to discover the details of the host and completes the installation. An Agent custom resource represents each host.
When you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace.
include::modules/hcp-ibmz-prereqs.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/clusters/cluster_mce_overview#advanced-config-engine[Advanced configuration]
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/clusters/cluster_mce_overview#enable-cim[Enabling the central infrastructure management service]
// * Installing the hosted control plane command line interface
* xref:../../hosted_control_planes/hcp-prepare/hcp-enable-disable.adoc[Enabling or disabling the {hcp} feature]
include::modules/hcp-ibmz-infra-reqs.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../hosted_control_planes/hcp-prepare/hcp-enable-disable.adoc[Enabling or disabling the {hcp} feature]
include::modules/hcp-ibmz-dns.adoc[leveloffset=+1]
include::modules/hcp-bm-hc.adoc[leveloffset=+1]
include::modules/hcp-ibmz-infraenv.adoc[leveloffset=+1]
[id="hcp-ibmz-add-agents"]
== Adding {ibm-z-title} agents to the InfraEnv resource
To attach compute nodes to a hosted control plane, create agents that help you to scale the node pool. Adding agents in an {ibm-z-title} environment requires additional steps, which are described in detail in this section.
Unless stated otherwise, these procedures apply to both z/VM and RHEL KVM installations on {ibm-z-title} and {ibm-linuxone-title}.
include::modules/hcp-ibmz-kvm-agents.adoc[leveloffset=+2]
include::modules/hcp-ibmz-lpar-agents.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/performing_a_standard_rhel_8_installation/installing-in-an-lpar_installing-rhel[Installing in an LPAR]
include::modules/hcp-ibmz-zvm-agents.adoc[leveloffset=+2]
include::modules/hcp-ibmz-scale-np.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../installing/installing_ibm_z/installing-ibm-z.adoc#installation-operators-config[Initial Operator configuration]

View File

@@ -1,7 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="hcp-manage-ibmz"]
include::_attributes/common-attributes.adoc[]
= Managing {hcp} on {ibm-z-title}
:context: hcp-manage-ibmz
toc::[]

View File

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-deploy/hcp-deploy-bm.adoc
// * hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc
:_mod-docs-content-type: PROCEDURE
[id="hcp-bm-hc_{context}"]

modules/hcp-ibmz-dns.adoc Normal file
View File

@@ -0,0 +1,48 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc
:_mod-docs-content-type: CONCEPT
[id="hcp-ibmz-dns_{context}"]
= DNS configuration for {hcp} on {ibm-z-title}
The API server for the hosted cluster is exposed as a `NodePort` service. A DNS entry must exist for `api.<hosted_cluster_name>.<base_domain>` that points to the destination where the API server is reachable.
The DNS entry can be as simple as a record that points to one of the nodes in the management cluster that is running the hosted control plane.
The entry can also point to a load balancer deployed to redirect incoming traffic to the Ingress pods.
See the following example of a DNS configuration:
[source,terminal]
----
$ cat /var/named/<example.krnl.es.zone>
----
.Example output
[source,terminal]
----
$TTL 900
@ IN SOA bastion.example.krnl.es.com. hostmaster.example.krnl.es.com. (
        2019062002
        1D 1H 1W 3H )
        IN NS bastion.example.krnl.es.com.
;
;
api     IN A 1xx.2x.2xx.1xx <1>
api-int IN A 1xx.2x.2xx.1xx
;
;
*.apps  IN A 1xx.2x.2xx.1xx
;
;EOF
----
<1> The record refers to the IP address of the API load balancer that handles ingress and egress traffic for hosted control planes.
For {ibm-title} z/VM, add IP addresses that correspond to the IP addresses of the agents:
[source,terminal]
----
compute-0 IN A 1xx.2x.2xx.1yy
compute-1 IN A 1xx.2x.2xx.1yy
----
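To confirm that the records resolve before you deploy the hosted cluster, you can query the name server. The following check is a minimal sketch; the `dig` utility and the `test.apps` host name are illustrative assumptions, not part of the original procedure:
[source,terminal]
----
$ dig +short api.<hosted_cluster_name>.<base_domain>
$ dig +short test.apps.<hosted_cluster_name>.<base_domain>
----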

View File

@@ -0,0 +1,15 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc
:_mod-docs-content-type: CONCEPT
[id="hcp-ibmz-infra-reqs_{context}"]
= {ibm-z-title} infrastructure requirements
The Agent platform does not create any infrastructure, but requires the following resources for infrastructure:
* Agents: An _Agent_ represents a host that is booted with a discovery image or PXE image and is ready to be provisioned as an {product-title} node.
* DNS: The API and Ingress endpoints must be routable.
The {hcp} feature is enabled by default. If you disabled the feature and want to manually enable it, or if you need to disable the feature, see _Enabling or disabling the {hcp} feature_.

View File

@@ -0,0 +1,43 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc
:_mod-docs-content-type: PROCEDURE
[id="hcp-ibmz-infraenv_{context}"]
= Creating an InfraEnv resource for {hcp} on {ibm-z-title}
An `InfraEnv` is an environment where hosts that are booted with PXE images can join as agents. In this case, the agents are created in the same namespace as your hosted control plane.
.Procedure
. Create a YAML file to contain the configuration. See the following example:
+
[source,yaml]
----
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
name: <hosted_cluster_name>
namespace: <hosted_control_plane_namespace>
spec:
cpuArchitecture: s390x
pullSecretRef:
name: pull-secret
sshAuthorizedKey: <ssh_public_key>
----
. Save the file as `infraenv-config.yaml`.
. Apply the configuration by entering the following command:
+
[source,terminal]
----
$ oc apply -f infraenv-config.yaml
----
. To fetch the URLs for downloading the PXE images, such as `initrd.img`, `kernel.img`, or `rootfs.img`, which allow {ibm-z-title} machines to join as agents, enter the following command:
+
[source,terminal]
----
$ oc -n <hosted_control_plane_namespace> get InfraEnv <hosted_cluster_name> -o json
----
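+
If you only need the download URLs, the `InfraEnv` status also reports the boot artifacts directly. The following command is a minimal sketch that assumes the `status.bootArtifacts` field is populated in your version:
+
[source,terminal]
----
$ oc -n <hosted_control_plane_namespace> get InfraEnv <hosted_cluster_name> \
  -o jsonpath='{.status.bootArtifacts}'
----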

View File

@@ -0,0 +1,60 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc
:_mod-docs-content-type: PROCEDURE
[id="hcp-ibmz-kvm-agents_{context}"]
= Adding {ibm-z-title} KVM as agents
For {ibm-z-title} with KVM, run the following command to start your {ibm-z-title} environment with the downloaded PXE images from the `InfraEnv` resource. After the Agents are created, the host communicates with the Assisted Service and registers in the same namespace as the `InfraEnv` resource on the management cluster.
.Procedure
. Run the following command:
+
[source,terminal]
----
virt-install \
--name "<vm_name>" \ <1>
--autostart \
--ram=16384 \
--cpu host \
--vcpus=4 \
--location "<path_to_kernel_initrd_image>,kernel=kernel.img,initrd=initrd.img" \ <2>
--disk <qcow_image_path> \ <3>
--network network:macvtap-net,mac=<mac_address> \ <4>
--graphics none \
--noautoconsole \
--wait=-1 \
--extra-args "rd.neednet=1 nameserver=<nameserver> coreos.live.rootfs_url=http://<http_server>/rootfs.img random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=console=tty1 console=ttyS1,115200n8" <5>
----
+
<1> Specify the name of the virtual machine.
<2> Specify the location of the `kernel_initrd_image` file.
<3> Specify the disk image path.
<4> Specify the MAC address.
<5> Specify the name server for the agents and the HTTP server that serves the `rootfs.img` file.
. For ISO boot, download the ISO from the `InfraEnv` resource and boot the nodes by running the following command:
+
[source,terminal]
----
virt-install \
--name "<vm_name>" \ <1>
--autostart \
--memory=16384 \
--cpu host \
--vcpus=4 \
--network network:macvtap-net,mac=<mac_address> \ <2>
--cdrom "<path_to_image.iso>" \ <3>
--disk <qcow_image_path> \
--graphics none \
--noautoconsole \
--os-variant <os_version> \ <4>
--wait=-1
----
+
<1> Specify the name of the virtual machine.
<2> Specify the MAC address.
<3> Specify the location of the `image.iso` file.
<4> Specify the operating system version that you are using.
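After the virtual machines boot, confirm that the hosts registered as agents in the hosted control plane namespace. The following verification is a minimal sketch that reuses a command shown later in this assembly:
[source,terminal]
----
$ oc -n <hosted_control_plane_namespace> get agents
----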

View File

@@ -0,0 +1,109 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc
:_mod-docs-content-type: PROCEDURE
[id="hcp-ibmz-lpar-agents_{context}"]
= Adding {ibm-z-title} LPAR as agents
You can add the Logical Partition (LPAR) on {ibm-z-title} or {ibm-linuxone-title} as a compute node to a hosted control plane.
.Procedure
. Create a boot parameter file for the agents:
+
.Example parameter file
[source,yaml]
----
rd.neednet=1 cio_ignore=all,!condev \
console=ttysclp0 \
ignition.firstboot ignition.platform.id=metal
coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \// <1>
coreos.inst.persistent-kargs=console=ttysclp0
ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \// <2>
rd.znet=qeth,<network_adaptor_range>,layer2=1
rd.<disk_type>=<adapter> \// <3>
zfcp.allow_lun_scan=0
ai.ip_cfg_override=1 \// <4>
random.trust_cpu=on rd.luks.options=discard
----
+
<1> For the `coreos.live.rootfs_url` artifact, specify the matching `rootfs` artifact for the `kernel` and `initramfs` that you are starting. Only HTTP and HTTPS protocols are supported.
<2> For the `ip` parameter, manually assign the IP address, as described in _Installing a cluster with z/VM on {ibm-z-title} and {ibm-linuxone-title}_.
<3> For installations on DASD-type disks, use `rd.dasd` to specify the DASD where {op-system-first} is to be installed. For installations on FCP-type disks, use `rd.zfcp=<adapter>,<wwpn>,<lun>` to specify the FCP disk where {op-system} is to be installed.
<4> Specify this parameter when you use an Open Systems Adapter (OSA) or HiperSockets.
. Generate the `.ins` and `initrd.img.addrsize` files.
+
The `.ins` file includes installation data and is on the FTP server. You can access the file from the HMC system. The `.ins` file contains details such as the location of the installation data on the disk or FTP server and the memory locations where the data is to be copied.
+
[NOTE]
====
In {product-title} 4.16, the `.ins` file and the `initrd.img.addrsize` file are not automatically generated as part of the boot artifacts from the installation program. You must manually generate these files.
====
.. Run the following commands to get the size of the `kernel` and `initrd`:
+
[source,terminal]
----
KERNEL_IMG_PATH='./kernel.img'
INITRD_IMG_PATH='./initrd.img'
CMDLINE_PATH='./generic.prm'
kernel_size=$(stat -c%s $KERNEL_IMG_PATH)
initrd_size=$(stat -c%s $INITRD_IMG_PATH)
----
.. Round the `kernel` size up to the next MiB boundary. This value is the starting address of `initrd.img`.
+
[source,terminal]
----
offset=$(( (kernel_size + 1048575) / 1048576 * 1048576 ))
----
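+
For example, a hypothetical `kernel.img` of 9,641,984 bytes rounds up to `offset=10485760`, which later becomes the `initrd.img` load address `0x00a00000`.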
.. Create the kernel binary patch file that contains the `initrd` address and size by running the following commands:
+
[source,terminal]
----
INITRD_IMG_NAME=$(echo $INITRD_IMG_PATH | rev | cut -d '/' -f 1 | rev)
KERNEL_OFFSET=0x00000000
KERNEL_CMDLINE_OFFSET=0x00010480
INITRD_ADDR_SIZE_OFFSET=0x00010408
OFFSET_HEX=$(printf '0x%08x\n' $offset)
----
.. Convert the address and size to binary format by running the following commands:
+
[source,terminal]
----
printf "$(printf '%016x\n' $initrd_size)" | xxd -r -p > temp_size.bin
----
.. Merge the address and size binaries by running the following command:
+
[source,terminal]
----
cat temp_address.bin temp_size.bin > "$INITRD_IMG_NAME.addrsize"
----
.. Clean up temporary files by running the following command:
+
[source,terminal]
----
rm -rf temp_address.bin temp_size.bin
----
.. Create the `.ins` file. The file is based on the paths of the `kernel.img`, `initrd.img`, `initrd.img.addrsize`, and `cmdline` files and the memory locations where the data is to be copied.
+
[source,yaml]
----
$KERNEL_IMG_PATH $KERNEL_OFFSET
$INITRD_IMG_PATH $OFFSET_HEX
$INITRD_IMG_NAME.addrsize $INITRD_ADDR_SIZE_OFFSET
$CMDLINE_PATH $KERNEL_CMDLINE_OFFSET
----
. Transfer the `initrd`, `kernel`, `generic.ins`, and `initrd.img.addrsize` parameter files to the file server. For more information about how to transfer the files with FTP and boot, see _Installing in an LPAR_.
. Start the machine.
. Repeat the procedure for all other machines in the cluster.

View File

@@ -0,0 +1,22 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc
:_mod-docs-content-type: CONCEPT
[id="hcp-ibmz-prereqs_{context}"]
= Prerequisites to configure {hcp} on {ibm-z-title}
* The {mce} version 2.5 or later must be installed on an {product-title} cluster. You can install {mce-short} as an Operator from the {product-title} OperatorHub.
* The {mce-short} must have at least one managed {product-title} cluster. The `local-cluster` is automatically imported in {mce-short} 2.5 and later. For more information about the `local-cluster`, see _Advanced configuration_ in the Red{nbsp}Hat Advanced Cluster Management documentation. You can check the status of your hub cluster by running the following command; example output is shown after this list:
+
[source,terminal]
----
$ oc get managedclusters local-cluster
----
* You need a hosting cluster with at least three worker nodes to run the HyperShift Operator.
* You need to enable the central infrastructure management service. For more information, see _Enabling the central infrastructure management service_.
* You need to install the hosted control plane command line interface. For more information, see _Installing the hosted control plane command line interface_.
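When the hub cluster is healthy, the `oc get managedclusters local-cluster` command returns output similar to the following example. The values are illustrative and the column layout can vary by {mce-short} version:
.Example output
[source,terminal]
----
NAME            HUB ACCEPTED   MANAGED CLUSTER URLS   JOINED   AVAILABLE   AGE
local-cluster   true           <api_server_url>       True     True        77h
----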

View File

@@ -0,0 +1,127 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc
:_mod-docs-content-type: PROCEDURE
[id="hcp-ibmz-scale-np_{context}"]
= Scaling the NodePool object for a hosted cluster on {ibm-z-title}
The `NodePool` object is created when you create a hosted cluster. By scaling the `NodePool` object, you can add more compute nodes to the hosted control plane.
When you scale up a node pool, a machine is created. The Cluster API provider finds an Agent that is approved, is passing validations, is not currently in use, and meets the requirements that are specified in the node pool specification. You can monitor the installation of an Agent by checking its status and conditions.
When you scale down a node pool, agents are unbound from the corresponding cluster. Before you can reuse the agents, you must boot them again by using the PXE image.
.Procedure
. Run the following command to scale the `NodePool` object to two nodes:
+
[source,terminal]
----
$ oc -n <clusters_namespace> scale nodepool <nodepool_name> --replicas 2
----
+
The Cluster API agent provider randomly picks two agents that are then assigned to the hosted cluster. Those agents go through different states and finally join the hosted cluster as {ocp-short} nodes. The agents pass through the transition phases in the following order:
* `binding`
* `discovering`
* `insufficient`
* `installing`
* `installing-in-progress`
* `added-to-existing-cluster`
. Run the following command to see the status of a specific scaled agent:
+
[source,terminal]
----
$ oc -n <hosted_control_plane_namespace> get agent -o \
jsonpath='{range .items[*]}BMH: {@.metadata.labels.agent-install\.openshift\.io/bmh} \
Agent: {@.metadata.name} State: {@.status.debugInfo.state}{"\n"}{end}'
----
+
.Example output
[source,terminal]
----
BMH: Agent: 50c23cda-cedc-9bbd-bcf1-9b3a5c75804d State: known-unbound
BMH: Agent: 5e498cd3-542c-e54f-0c58-ed43e28b568a State: insufficient
----
. Run the following command to see the transition phases:
+
[source,terminal]
----
$ oc -n <hosted_control_plane_namespace> get agent
----
+
.Example output
[source,terminal]
----
NAME CLUSTER APPROVED ROLE STAGE
50c23cda-cedc-9bbd-bcf1-9b3a5c75804d hosted-forwarder true auto-assign
5e498cd3-542c-e54f-0c58-ed43e28b568a true auto-assign
da503cf1-a347-44f2-875c-4960ddb04091 hosted-forwarder true auto-assign
----
. Run the following command to generate the `kubeconfig` file to access the hosted cluster:
+
[source,terminal]
----
$ hcp create kubeconfig --namespace <clusters_namespace> --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig
----
. After the agents reach the `added-to-existing-cluster` state, verify that you can see the {ocp-short} nodes by entering the following command:
+
[source,terminal]
----
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
----
+
.Example output
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
worker-zvm-0.hostedn.example.com Ready worker 5m41s v1.24.0+3882f8f
worker-zvm-1.hostedn.example.com Ready worker 6m3s v1.24.0+3882f8f
----
+
Cluster Operators start to reconcile by adding workloads to the nodes.
. Enter the following command to verify that two machines were created when you scaled up the `NodePool` object:
+
[source,terminal]
----
$ oc -n <hosted_control_plane_namespace> get machine.cluster.x-k8s.io
----
+
.Example output
[source,terminal]
----
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
hosted-forwarder-79558597ff-5tbqp hosted-forwarder-crqq5 worker-zvm-0.hostedn.example.com agent://50c23cda-cedc-9bbd-bcf1-9b3a5c75804d Running 41h 4.15.0
hosted-forwarder-79558597ff-lfjfk hosted-forwarder-crqq5 worker-zvm-1.hostedn.example.com agent://5e498cd3-542c-e54f-0c58-ed43e28b568a Running 41h 4.15.0
----
. Run the following command to check the cluster version:
+
[source,terminal]
----
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusterversion,co
----
+
.Example output
[source,terminal]
----
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
clusterversion.config.openshift.io/version 4.15.0-ec.2 True False 40h Cluster version is 4.15.0-ec.2
----
. Run the following command to check the cluster operator status:
+
[source,terminal]
----
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusteroperators
----
For each component of your cluster, the output shows the following cluster operator statuses: `NAME`, `VERSION`, `AVAILABLE`, `PROGRESSING`, `DEGRADED`, `SINCE`, and `MESSAGE`.
For an output example, see _Initial Operator configuration_.
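If you no longer need the additional compute nodes, you can scale the `NodePool` object back down by using the same command with a lower replica count. This is a minimal sketch that reuses the placeholders from the scale-up step; as noted earlier, agents that are unbound from the cluster must be booted with the PXE image again before you reuse them:
[source,terminal]
----
$ oc -n <clusters_namespace> scale nodepool <nodepool_name> --replicas 0
----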

View File

@@ -0,0 +1,99 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-deploy/hcp-deploy-ibmz.adoc
:_mod-docs-content-type: PROCEDURE
[id="hcp-ibmz-zvm-agents_{context}"]
= Adding {ibm-title} z/VM as agents
If you want to use a static IP address for a z/VM guest, you must configure the `NMStateConfig` attribute for the z/VM agent so that the IP parameter persists through the second boot.
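The following `NMStateConfig` resource is a minimal sketch for illustration only. The label key, interface name, and address values are assumptions, and the label must match the `nmStateConfigLabelSelector` field that you set in the associated `InfraEnv` resource:
[source,yaml]
----
apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
  name: <nmstateconfig_name>
  namespace: <hosted_control_plane_namespace>
  labels:
    infraenv: <hosted_cluster_name> <1>
spec:
  interfaces:
  - name: <interface_name>
    macAddress: <mac_address>
  config:
    interfaces:
    - name: <interface_name>
      type: ethernet
      state: up
      mac-address: <mac_address>
      ipv4:
        enabled: true
        dhcp: false
        address:
        - ip: <ip_address>
          prefix-length: <prefix_length>
    dns-resolver:
      config:
        server:
        - <dns_server>
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: <gateway>
        next-hop-interface: <interface_name>
----
<1> The label is an example; use a label that the `nmStateConfigLabelSelector` field in your `InfraEnv` resource selects.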
Complete the following steps to start your {ibm-z-title} environment with the downloaded PXE images from the `InfraEnv` resource. After the Agents are created, the host communicates with the Assisted Service and registers in the same namespace as the `InfraEnv` resource on the management cluster.
.Procedure
. Update the parameter file to add the `rootfs_url`, `network_adaptor`, and `disk_type` values.
+
.Example parameter file
[source,yaml]
----
rd.neednet=1 cio_ignore=all,!condev \
console=ttysclp0 \
ignition.firstboot ignition.platform.id=metal \
coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \// <1>
coreos.inst.persistent-kargs=console=ttysclp0
ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \// <2>
rd.znet=qeth,<network_adaptor_range>,layer2=1
rd.<disk_type>=<adapter> \// <3>
zfcp.allow_lun_scan=0
ai.ip_cfg_override=1 \// <4>
----
<1> For the `coreos.live.rootfs_url` artifact, specify the matching `rootfs` artifact for the `kernel` and `initramfs` that you are starting. Only HTTP and HTTPS protocols are supported.
<2> For the `ip` parameter, manually assign the IP address, as described in _Installing a cluster with z/VM on {ibm-z-title} and {ibm-linuxone-title}_.
<3> For installations on DASD-type disks, use `rd.dasd` to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. For installations on FCP-type disks, use `rd.zfcp=<adapter>,<wwpn>,<lun>` to specify the FCP disk where RHCOS is to be installed.
<4> Specify this parameter when you use an Open Systems Adapter (OSA) or HiperSockets.
. Move `initrd`, kernel images, and the parameter file to the guest VM by running the following commands:
+
[source,terminal]
----
vmur pun -r -u -N kernel.img $INSTALLERKERNELLOCATION/<image_name>
----
+
[source,terminal]
----
vmur pun -r -u -N generic.parm $PARMFILELOCATION/paramfilename
----
+
[source,terminal]
----
vmur pun -r -u -N initrd.img $INSTALLERINITRAMFSLOCATION/<image_name>
----
. Run the following command from the guest VM console:
+
[source,terminal]
----
cp ipl c
----
. To list the agents and their properties, enter the following command:
+
[source,terminal]
----
$ oc -n <hosted_control_plane_namespace> get agents
----
+
.Example output
[source,terminal]
----
NAME CLUSTER APPROVED ROLE STAGE
50c23cda-cedc-9bbd-bcf1-9b3a5c75804d auto-assign
5e498cd3-542c-e54f-0c58-ed43e28b568a auto-assign
----
. Run the following command to approve the agent:
+
[source,terminal]
----
$ oc -n <hosted_control_plane_namespace> patch agent \
50c23cda-cedc-9bbd-bcf1-9b3a5c75804d -p \
'{"spec":{"installation_disk_id":"/dev/sda","approved":true,"hostname":"worker-zvm-0.hostedn.example.com"}}' \// <1>
--type merge
----
<1> Optionally, you can set the installation disk ID and the hostname for the agent in the specification.
. Run the following command to verify that the agents are approved:
+
[source,terminal]
----
$ oc -n <hosted_control_plane_namespace> get agents
----
+
.Example output
[source,terminal]
----
NAME CLUSTER APPROVED ROLE STAGE
50c23cda-cedc-9bbd-bcf1-9b3a5c75804d true auto-assign
5e498cd3-542c-e54f-0c58-ed43e28b568a true auto-assign
----