IBM Z docs changes for HCP 4.20 Release

Committed by openshift-cherrypick-robot
Parent 07a9dd2150 / Commit 9f47340d27
@@ -10,7 +10,7 @@ You can deploy {hcp} by configuring a cluster to function as a management cluste

[NOTE]
====
-The _management_ cluster is not the _managed_ cluster. A managed cluster is a cluster that the hub cluster manages.
+The _management_ cluster is not the _managed_ cluster. A managed cluster is a cluster that the hub cluster manages. The _management_ cluster can run on either the x86_64 architecture, supported beginning with {product-title} 4.17 and {mce} 2.7, or the s390x architecture, supported beginning with {product-title} 4.20 and {mce} 2.10.
====

You can convert a managed cluster to a management cluster by using the `hypershift` add-on to deploy the HyperShift Operator on that cluster. Then, you can start to create the hosted cluster.
@@ -19,7 +19,7 @@ The {mce-short} supports only the default `local-cluster`, which is a hub cluste

To provision {hcp} on bare metal, you can use the Agent platform. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For more information, see "Enabling the central infrastructure management service".

-Each {ibm-z-title} system host must be started with the PXE images provided by the central infrastructure management. After each host starts, it runs an Agent process to discover the details of the host and completes the installation. An Agent custom resource represents each host.
+Each {ibm-z-title} system host must be started with the PXE or ISO images that are provided by the central infrastructure management. After each host starts, it runs an Agent process to discover the details of the host and completes the installation. An Agent custom resource represents each host.

When you create a hosted cluster with the Agent platform, HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace.
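Because each discovered host is represented by an `Agent` custom resource, a quick way to confirm that hosts have checked in is to list those resources. A minimal check, assuming your hosted control plane namespace is the hypothetical `<hosted_control_plane_namespace>`:

[source,terminal]
----
$ oc get agents -n <hosted_control_plane_namespace>
----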
@@ -31,20 +31,18 @@ include::modules/hcp-ibm-z-prereqs.adoc[leveloffset=+1]

* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.14/html/clusters/cluster_mce_overview#advanced-config-engine[Advanced configuration]
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.14/html/clusters/cluster_mce_overview#enable-cim[Enabling the central infrastructure management service]
* xref:../../hosted_control_planes/hcp-prepare/hcp-cli.adoc#hcp-cli-terminal_hcp-cli[Installing the {hcp} command-line interface]
* xref:../../hosted_control_planes/hcp-prepare/hcp-enable-disable.adoc[Enabling or disabling the {hcp} feature]

include::modules/hcp-ibm-z-infra-reqs.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* xref:../../hosted_control_planes/hcp-prepare/hcp-enable-disable.adoc[Enabling or disabling the {hcp} feature]

include::modules/hcp-ibm-z-dns.adoc[leveloffset=+1]

include::modules/hcp-custom-dns.adoc[leveloffset=+2]

include::modules/hcp-bm-hc.adoc[leveloffset=+1]
[id="hcp-bm-create-hc-ibm-z"]
== Creating a hosted cluster on bare metal for {ibm-z-title}

You can create a hosted cluster or import one. When the Assisted Installer is enabled as an add-on to {mce-short} and you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace.

include::modules/hcp-bm-hc.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources
@@ -73,8 +71,3 @@ include::modules/hcp-ibm-z-lpar-agents.adoc[leveloffset=+2]

include::modules/hcp-ibm-z-zvm-agents.adoc[leveloffset=+2]

include::modules/hcp-ibm-z-scale-np.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* xref:../../installing/installing_ibm_z/upi/installing-ibm-z.adoc#installation-operators-config[Initial Operator configuration]
@@ -22,27 +22,6 @@ On bare-metal infrastructure, you can create or import a hosted cluster. After y

- Verify that you have a default storage class configured for your cluster. Otherwise, you might see pending persistent volume claims (PVCs). You can confirm the default class with the check that follows this list.

- By default, when you use the `hcp create cluster agent` command, the command creates a hosted cluster with configured node ports. The preferred publishing strategy for hosted clusters on bare metal exposes services through a load balancer. If you create a hosted cluster by using the web console or by using {rh-rhacm-title}, to set a publishing strategy for a service other than the Kubernetes API server, you must manually specify the `servicePublishingStrategy` information in the `HostedCluster` custom resource.

- Ensure that you meet the requirements described in "Requirements for {hcp} on bare metal", including requirements related to infrastructure, firewalls, ports, and services. For example, those requirements describe how to add the appropriate zone labels to the bare-metal hosts in your management cluster, as shown in the following example commands:
+
[source,terminal]
----
$ oc label node [compute-node-1] topology.kubernetes.io/zone=zone1
----
+
[source,terminal]
----
$ oc label node [compute-node-2] topology.kubernetes.io/zone=zone2
----
+
[source,terminal]
----
$ oc label node [compute-node-3] topology.kubernetes.io/zone=zone3
----

- Ensure that you have added bare-metal nodes to a hardware inventory.
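The default storage class check mentioned in the first item is quick to run. A minimal sketch, assuming cluster-admin access to the management cluster; the class annotated `(default)` in the output is the one that PVCs without an explicit class use:

[source,terminal]
----
$ oc get storageclass
----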
.Procedure

. Create a namespace by entering the following command:
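The command itself falls outside this hunk. A typical invocation, assuming the hypothetical namespace name `<hosted_cluster_namespace>`:

[source,terminal]
----
$ oc create namespace <hosted_cluster_namespace>
----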
@@ -88,39 +67,6 @@ $ hcp create cluster agent \

<11> Specify the node pool replica count, such as `3`. You must specify the replica count as `0` or greater to create the same number of replicas. Otherwise, no node pools are created.
<12> After the `--ssh-key` flag, specify the path to the SSH key, such as `user/.ssh/id_rsa`.

. Configure the service publishing strategy. By default, hosted clusters use the `NodePort` service publishing strategy because node ports are always available without additional infrastructure. However, you can configure the service publishing strategy to use a load balancer.

** If you are using the default `NodePort` strategy, configure the DNS to point to the hosted cluster compute nodes, not the management cluster nodes. For more information, see "DNS configurations on bare metal" and the record sketch after this step.

** For production environments, use the `LoadBalancer` strategy because it provides certificate handling and automatic DNS resolution. The following example shows how to set the `LoadBalancer` service publishing strategy in your hosted cluster configuration file:
+
[source,yaml]
----
# ...
spec:
  services:
  - service: APIServer
    servicePublishingStrategy:
      type: LoadBalancer #<1>
  - service: Ignition
    servicePublishingStrategy:
      type: Route
  - service: Konnectivity
    servicePublishingStrategy:
      type: Route
  - service: OAuthServer
    servicePublishingStrategy:
      type: Route
  - service: OIDC
    servicePublishingStrategy:
      type: Route
  sshKey:
    name: <ssh_key>
# ...
----
+
<1> Specify `LoadBalancer` as the API Server type. For all other services, specify `Route` as the type.
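As the `NodePort` item above notes, with the default strategy the DNS must resolve to the hosted cluster compute nodes. A sketch of the records, using hypothetical compute node addresses (10.11.176.81 and 10.11.176.82 are placeholders) and the sample domain used later in this procedure:

[source,text]
----
api.my-hosted-cluster.sample-base-domain.com. IN A 10.11.176.81
api.my-hosted-cluster.sample-base-domain.com. IN A 10.11.176.82
----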
. Apply the changes to the hosted cluster configuration file by entering the following command:
+
[source,terminal]
@@ -153,183 +99,6 @@ $ oc get pods -n <hosted_cluster_namespace>

. Confirm that the hosted cluster is ready. A status of `Available: True` on the hosted cluster and `AllMachinesReady: True` on the node pool indicate that the cluster and all of its machines are healthy.
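A minimal way to read those two statuses, assuming the resources live in the hypothetical `<hosted_cluster_namespace>` namespace used throughout this procedure:

[source,terminal]
----
$ oc get hostedcluster -n <hosted_cluster_namespace>
----

[source,terminal]
----
$ oc get nodepool -n <hosted_cluster_namespace>
----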
. Install MetalLB in the hosted cluster:
+
.. Extract the `kubeconfig` file from the hosted cluster and set the environment variable for hosted cluster access by entering the following commands:
+
[source,terminal]
----
$ oc get secret \
  <hosted_cluster_namespace>-admin-kubeconfig \
  -n <hosted_cluster_namespace> \
  -o jsonpath='{.data.kubeconfig}' \
  | base64 -d > \
  kubeconfig-<hosted_cluster_namespace>.yaml
----
+
[source,terminal]
----
$ export KUBECONFIG="/path/to/kubeconfig-<hosted_cluster_namespace>.yaml"
----
+
.. Install the MetalLB Operator by creating the `install-metallb-operator.yaml` file:
+
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: metallb-operator
  namespace: metallb-system
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator
  namespace: metallb-system
spec:
  channel: "stable"
  name: metallb-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
# ...
----
+
.. Apply the file by entering the following command:
+
[source,terminal]
----
$ oc apply -f install-metallb-operator.yaml
----
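Before moving on, you can optionally wait for the Operator installation to finish. A sketch, assuming the OLM subscription places its cluster service version in `metallb-system`; the CSV phase should reach `Succeeded`:

[source,terminal]
----
$ oc get csv -n metallb-system
----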
+
.. Configure the MetalLB IP address pool by creating the `deploy-metallb-ipaddresspool.yaml` file:
+
[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb
  namespace: metallb-system
spec:
  autoAssign: true
  addresses:
  - 10.11.176.71-10.11.176.75
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - metallb
# ...
----
+
.. Apply the configuration by entering the following command:
+
[source,terminal]
----
$ oc apply -f deploy-metallb-ipaddresspool.yaml
----
+
.. Verify the MetalLB installation by checking the Operator status, the IP address pool, and the `L2Advertisement` resource. Enter the following commands:
+
[source,terminal]
----
$ oc get pods -n metallb-system
----
+
[source,terminal]
----
$ oc get ipaddresspool -n metallb-system
----
+
[source,terminal]
----
$ oc get l2advertisement -n metallb-system
----
. Configure the load balancer for ingress:
+
.. Create the `ingress-loadbalancer.yaml` file:
+
[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  annotations:
    metallb.universe.tf/address-pool: metallb
  name: metallb-ingress
  namespace: openshift-ingress
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
  selector:
    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
  type: LoadBalancer
# ...
----
+
.. Apply the configuration by entering the following command:
+
[source,terminal]
----
$ oc apply -f ingress-loadbalancer.yaml
----
+
.. Verify that the load balancer service works as expected by entering the following command:
+
[source,terminal]
----
$ oc get svc metallb-ingress -n openshift-ingress
----
+
.Example output
+
[source,text]
----
NAME              TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
metallb-ingress   LoadBalancer   172.31.127.129   10.11.176.71   80:30961/TCP,443:32090/TCP   16h
----
. Configure the DNS to work with the load balancer:
+
.. Configure the DNS for the `apps` domain by pointing the `*.apps.<hosted_cluster_namespace>.<base_domain>` wildcard DNS record to the load balancer IP address.
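For example, in a BIND zone file the wildcard record might look like the following sketch, which reuses the load balancer external IP and the sample domain from this procedure's example output:

[source,text]
----
*.apps.my-hosted-cluster.sample-base-domain.com. IN A 10.11.176.71
----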
+
.. Verify the DNS resolution by entering the following command:
+
[source,terminal]
----
$ nslookup console-openshift-console.apps.<hosted_cluster_namespace>.<base_domain> <load_balancer_ip_address>
----
+
.Example output
+
[source,text]
----
Server:    10.11.176.1
Address:   10.11.176.1#53

Name:      console-openshift-console.apps.my-hosted-cluster.sample-base-domain.com
Address:   10.11.176.71
----

.Verification

. Check the cluster Operators by entering the following command:
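The verification command itself is truncated by this hunk. A typical check, assuming `KUBECONFIG` still points at the hosted cluster; every Operator should report `AVAILABLE: True`:

[source,terminal]
----
$ oc get clusteroperators
----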
@@ -9,7 +9,7 @@ You can install the {hcp} command-line interface (CLI), `hcp`, by using the cont

.Prerequisites

-* On an {product-title} cluster, you have installed {mce} 2.5 or later. The {mce-short} is automatically installed when you install Red{nbsp}Hat Advanced Cluster Management. You can also install {mce-short} without Red{nbsp}Hat Advanced Cluster Management as an Operator from the {product-title} software catalog.
+* On an {product-title} cluster, you have installed {mce} 2.7 or later. The {mce-short} is automatically installed when you install Red{nbsp}Hat Advanced Cluster Management. You can also install {mce-short} without Red{nbsp}Hat Advanced Cluster Management as an Operator from {product-title} OperatorHub.

.Procedure
@@ -35,7 +35,7 @@ spec:
$ oc apply -f infraenv-config.yaml
----

-. To fetch the URL to download the PXE images, such as `initrd.img`, `kernel.img`, or `rootfs.img`, which allow {ibm-z-title} machines to join as agents, enter the following command:
+. To fetch the URL to download the PXE or ISO images, such as `initrd.img`, `kernel.img`, or `rootfs.img`, which allow {ibm-z-title} machines to join as agents, enter the following command:
+
[source,terminal]
----
@@ -20,7 +20,7 @@ console=ttysclp0 \
ignition.firstboot ignition.platform.id=metal
coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \// <1>
coreos.inst.persistent-kargs=console=ttysclp0
-ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \// <2>
+ip=<ip>::<gateway>:<netmask>::<interface>:none nameserver=<dns> \// <2>
rd.znet=qeth,<network_adaptor_range>,layer2=1
rd.<disk_type>=<adapter> \// <3>
zfcp.allow_lun_scan=0
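The change swaps the `<hostname>` field for an `<interface>` field in dracut's positional `ip=` syntax, which is `ip=<client_ip>:[<peer>]:<gateway>:<netmask>:<hostname>:<interface>:{none|dhcp|...}`. Filled in with hypothetical placeholder values for illustration (address, gateway, and device name are not from the source), the new form looks like:

[source,text]
----
ip=10.11.176.90::10.11.176.1:255.255.255.0::encbdd0:none nameserver=10.11.176.1
----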
@@ -6,9 +6,9 @@
[id="hcp-ibm-z-prereqs_{context}"]
= Prerequisites to configure {hcp} on {ibm-z-title}

-* The {mce} version 2.5 or later must be installed on an {product-title} cluster. You can install {mce-short} as an Operator from the {product-title} software catalog.
+* The {mce} version 2.7 or later must be installed on an {product-title} cluster. You can install {mce-short} as an Operator from the {product-title} OperatorHub.

-* The {mce-short} must have at least one managed {product-title} cluster. The `local-cluster` is automatically imported in {mce-short} 2.5 and later. For more information about the `local-cluster`, see _Advanced configuration_ in the Red{nbsp}Hat Advanced Cluster Management documentation. You can check the status of your hub cluster by running the following command:
+* The {mce-short} must have at least one managed {product-title} cluster. The `local-cluster` is automatically imported in {mce-short} 2.7 and later. For more information about the `local-cluster`, see _Advanced configuration_ in the Red{nbsp}Hat Advanced Cluster Management documentation. You can check the status of your hub cluster by running the following command:
+
[source,terminal]
----
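The hunk cuts off before the command output; the command itself appears in the next hunk's header (`$ oc get managedclusters local-cluster`). Illustrative output for a healthy hub, with placeholder column values:

[source,text]
----
NAME            HUB ACCEPTED   MANAGED CLUSTER URLS   JOINED   AVAILABLE   AGE
local-cluster   true                                  True     True        10d
----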
@@ -20,3 +20,8 @@ $ oc get managedclusters local-cluster
* You need to enable the central infrastructure management service. For more information, see _Enabling the central infrastructure management service_.

* You need to install the hosted control plane command-line interface. For more information, see _Installing the hosted control plane command-line interface_.

+[NOTE]
+====
+The _management_ cluster can run on either the x86_64 architecture, supported beginning with {product-title} 4.17 and {mce} 2.7, or the s390x architecture, supported beginning with {product-title} 4.20 and {mce} 2.10.
+====
@@ -23,7 +23,7 @@ console=ttysclp0 \
ignition.firstboot ignition.platform.id=metal \
coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \// <1>
coreos.inst.persistent-kargs=console=ttysclp0
-ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \// <2>
+ip=<ip>::<gateway>:<netmask>::<interface>:none nameserver=<dns> \// <2>
rd.znet=qeth,<network_adaptor_range>,layer2=1
rd.<disk_type>=<adapter> \// <3>
zfcp.allow_lun_scan=0