
multi-arch compute add IBM Z KVM

This commit is contained in:
SNiemann15
2023-08-23 15:17:59 +02:00
committed by openshift-cherrypick-robot
parent 4d5a7de2b0
commit 88435b85b1
6 changed files with 297 additions and 3 deletions


@@ -539,6 +539,10 @@ Topics:
File: creating-multi-arch-compute-nodes-aws
- Name: Creating a cluster with multi-architecture compute machines on bare metal
File: creating-multi-arch-compute-nodes-bare-metal
- Name: Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with z/VM
File: creating-multi-arch-compute-nodes-ibm-z
- Name: Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with RHEL KVM
File: creating-multi-arch-compute-nodes-ibm-z-kvm
- Name: Managing your cluster with multi-architecture compute machines
File: multi-architecture-compute-managing
- Name: Enabling encryption on a vSphere cluster


@@ -0,0 +1,100 @@
// Module included in the following assemblies:
//
// * post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-z-kvm.adoc
:_content-type: PROCEDURE
[id="machine-user-infra-machines-ibm-z-kvm_{context}"]
= Creating {op-system} machines using `virt-install`
You can create more {op-system-first} compute machines for your cluster by using `virt-install`.
.Prerequisites
* You have at least one LPAR running on {op-system-base} 8.7 or later with KVM, referred to as the {op-system-base} KVM host in this procedure.
* The KVM/QEMU hypervisor is installed on the {op-system-base} KVM host.
* You have a domain name server (DNS) that can perform hostname and reverse lookup for the nodes.
* An HTTP or HTTPS server is set up.
.Procedure
. Extract the Ignition config file from the cluster by running the following command:
+
[source,terminal]
----
$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign
----
. Upload the `worker.ign` Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file.
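+
For example, a minimal sketch of the upload, assuming an Apache-style web server on the host `<HTTP_server>` with the document root `/var/www/html`; the user name, host name, and path are placeholders for your environment:
+
[source,terminal]
----
$ scp worker.ign <user>@<HTTP_server>:/var/www/html/worker.ign
----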
. You can validate that the Ignition config file is available at the URL. The following example retrieves the Ignition config file for the compute node:
+
[source,terminal]
----
$ curl -k http://<HTTP_server>/worker.ign
----
. Download the {op-system} live `kernel`, `initramfs`, and `rootfs` files by running the following commands:
+
[source,terminal]
----
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
| jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')
----
+
[source,terminal]
----
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
| jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')
----
+
[source,terminal]
----
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
| jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')
----
. Move the downloaded {op-system} live `kernel`, `initramfs`, and `rootfs` files to an HTTP or HTTPS server before you launch `virt-install`.
. Create the new KVM guest nodes by using the {op-system} `kernel`, `initramfs`, and Ignition files; the new disk image; and the adjusted parm line arguments.
+
--
[source,terminal]
----
$ virt-install \
--connect qemu:///system \
--name {vn_name} \
--autostart \
--os-variant rhel9.2 \ <1>
--cpu host \
--vcpus {vcpus} \
--memory {memory_mb} \
--disk {vn_name}.qcow2,size={image_size | default(100,true)} \
--network network={virt_network_parm} \
--location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} \ <2>
--extra-args "rd.neednet=1" \
--extra-args "coreos.inst.install_dev=/dev/vda" \
--extra-args "coreos.inst.ignition_url={worker_ign}" \ <3>
--extra-args "coreos.live.rootfs_url={rhcos_rootfs}" \ <4>
--extra-args "ip={ip}::{default_gateway}:{subnet_mask_length}:{vn_name}::none:{MTU}" \
--extra-args "nameserver={dns}" \
--extra-args "console=ttysclp0" \
--noautoconsole \
--wait
----
<1> For `os-variant`, specify the {op-system-base} version for the {op-system} compute machine. `rhel9.2` is the recommended version. To query the operating system variants that your {op-system-base} KVM host supports, run the following command:
+
[source,terminal]
----
$ osinfo-query os -f short-id
----
+
[NOTE]
====
The `os-variant` value is case sensitive.
====
+
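For example, a quick way to narrow the output to the available {op-system-base} 9 variants on the KVM host; the exact variants that are listed depend on the `osinfo-db` package that is installed:
+
[source,terminal]
----
$ osinfo-query os -f short-id | grep rhel9
----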
<2> For `--location`, specify the location of the kernel/initrd on the HTTP or HTTPS server.
<3> For `coreos.inst.ignition_url=`, specify the `worker.ign` Ignition file for the machine role. Only HTTP and HTTPS protocols are supported.
<4> For `coreos.live.rootfs_url=`, specify the matching rootfs artifact for the `kernel` and `initramfs` you are booting. Only HTTP and HTTPS protocols are supported.
--
. Continue to create more compute machines for your cluster.
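+
As a quick check from a machine with access to the cluster, you can confirm that the new compute machines have booted and requested certificates before you approve the certificate signing requests:
+
[source,terminal]
----
$ oc get csr
----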


@@ -0,0 +1,148 @@
// Module included in the following assemblies:
//
// * post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-z.adoc
:_content-type: PROCEDURE
[id="machine-user-infra-machines-ibm-z_{context}"]
= Creating {op-system} machines on {ibmzProductName} with z/VM
You can create more {op-system-first} compute machines running on {ibmzProductName} with z/VM and attach them to your existing cluster.
.Prerequisites
* You have a domain name server (DNS) that can perform hostname and reverse lookup for the nodes.
* You have an HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create.
.Procedure
. Extract the Ignition config file from the cluster by running the following command:
+
[source,terminal]
----
$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign
----
. Upload the `worker.ign` Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file.
. You can validate that the Ignition config file is available at the URL. The following example retrieves the Ignition config file for the compute node:
+
[source,terminal]
----
$ curl -k http://<HTTP_server>/worker.ign
----
. Download the {op-system} live `kernel`, `initramfs`, and `rootfs` files by running the following commands:
+
[source,terminal]
----
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
| jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')
----
+
[source,terminal]
----
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
| jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')
----
+
[source,terminal]
----
$ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \
| jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')
----
. Move the downloaded {op-system} live `kernel`, `initramfs`, and `rootfs` files to an HTTP or HTTPS server that is accessible from the z/VM guest you want to add.
. Create a parameter file for the z/VM guest. The following parameters are specific to the virtual machine:
** Optional: To specify a static IP address, add an `ip=` parameter with the following entries, with each separated by a colon:
... The IP address for the machine.
... An empty string.
... The gateway.
... The netmask.
... The machine host and domain name in the form `hostname.domainname`. Omit this value to let {op-system} decide.
... The network interface name. Omit this value to let {op-system} decide.
... The value `none`.
** For `coreos.inst.ignition_url=`, specify the URL to the `worker.ign` file. Only HTTP and HTTPS protocols are supported.
** For `coreos.live.rootfs_url=`, specify the matching rootfs artifact for the `kernel` and `initramfs` you are booting. Only HTTP and HTTPS protocols are supported.
** For installations on DASD-type disks, complete the following tasks:
... For `coreos.inst.install_dev=`, specify `/dev/dasda`.
... Use `rd.dasd=` to specify the DASD where {op-system} is to be installed.
... Leave all other parameters unchanged.
+
The following is an example parameter file, `additional-worker-dasd.parm`:
+
[source,terminal]
----
rd.neednet=1 \
console=ttysclp0 \
coreos.inst.install_dev=/dev/dasda \
coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \
coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \
ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \
rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \
zfcp.allow_lun_scan=0 \
rd.dasd=0.0.3490
----
+
Write all options in the parameter file as a single line and make sure that you have no newline characters.
** For installations on FCP-type disks, complete the following tasks:
... Use `rd.zfcp=<adapter>,<wwpn>,<lun>` to specify the FCP disk where {op-system} is to be installed. For multipathing, repeat this step for each additional path.
+
[NOTE]
====
When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, because enabling it later can cause problems.
====
... Set the install device as: `coreos.inst.install_dev=/dev/sda`.
+
[NOTE]
====
If additional LUNs are configured with NPIV, FCP requires `zfcp.allow_lun_scan=0`. If you must enable `zfcp.allow_lun_scan=1` because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node.
====
... Leave all other parameters unchanged.
+
[IMPORTANT]
====
Additional post-installation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on {op-system}" in _Post-installation machine configuration tasks_.
====
// Add xref once it's allowed.
+
The following is an example parameter file, `additional-worker-fcp.parm`, for a worker node with multipathing:
+
[source,terminal]
----
rd.neednet=1 \
console=ttysclp0 \
coreos.inst.install_dev=/dev/sda \
coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \
coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \
ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \
rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \
zfcp.allow_lun_scan=0 \
rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \
rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \
rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \
rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000
----
+
Write all options in the parameter file as a single line and make sure that you have no newline characters.
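+
If you keep the parameter file with backslash line continuations for readability, the following sketch flattens it into a single line before you transfer it. It assumes that a space precedes each backslash, as in the examples above, and that `tr` is available on your workstation; the output file name is a placeholder:
+
[source,terminal]
----
$ tr -d '\\\n' < additional-worker-fcp.parm > <single_line_parm_file>
----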
. Transfer the `initramfs`, `kernel`, parameter files, and {op-system} images to z/VM, for example, by using FTP. For details about how to transfer the files with FTP and boot from the virtual reader, see link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/installation_guide/sect-installing-zvm-s390[Installing under z/VM].
. Punch the files to the virtual reader of the z/VM guest virtual machine.
+
See link:https://www.ibm.com/docs/en/zvm/latest?topic=commands-punch[PUNCH] in IBM Documentation.
+
[TIP]
====
You can use the CP PUNCH command or, if you use Linux, the `vmur` command to transfer files between two z/VM guest virtual machines.
====
+
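For example, a minimal sketch of punching the files from another Linux guest that runs under the same z/VM instance, using the `vmur` command from the s390-tools package; the guest ID and local file names are placeholders:
+
[source,terminal]
----
$ vmur punch -r -u <zvm_guest_id> -N kernel.img <rhcos_live_kernel_file>
$ vmur punch -r -u <zvm_guest_id> -N worker.parm additional-worker-dasd.parm
$ vmur punch -r -u <zvm_guest_id> -N initrd.img <rhcos_live_initramfs_file>
----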
. Log in to CMS on the compute machine.
. IPL the compute machine from the reader by running the following command:
+
[source,terminal]
----
$ ipl c
----
+
See link:https://www.ibm.com/docs/en/zvm/latest?topic=commands-ipl[IPL] in IBM Documentation.


@@ -0,0 +1,19 @@
:_content-type: ASSEMBLY
:context: creating-multi-arch-compute-nodes-ibm-z-kvm
[id="creating-multi-arch-compute-nodes-ibm-z-kvm"]
= Creating a cluster with multi-architecture compute machines on {ibmzProductName} and {linuxoneProductName} with {op-system-base} KVM
include::_attributes/common-attributes.adoc[]
toc::[]
To create a cluster with multi-architecture compute machines on {ibmzProductName} and {linuxoneProductName} (`s390x`) with {op-system-base} KVM, you must have an existing single-architecture `x86_64` cluster. You can then add `s390x` compute machines to your {product-title} cluster.
Before you can add `s390x` nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see xref:../../updating/updating_a_cluster/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines].
The following procedures explain how to create a {op-system} compute machine by using a {op-system-base} KVM instance. This allows you to add `s390x` nodes to your cluster and deploy a cluster with multi-architecture compute machines.
include::modules/multi-architecture-verifying-cluster-compatibility.adoc[leveloffset=+1]
include::modules/machine-user-infra-machines-ibm-z-kvm.adoc[leveloffset=+1]
include::modules/installation-approve-csrs.adoc[leveloffset=+1]


@@ -0,0 +1,19 @@
:_content-type: ASSEMBLY
:context: creating-multi-arch-compute-nodes-ibm-z
[id="creating-multi-arch-compute-nodes-ibm-z"]
= Creating a cluster with multi-architecture compute machines on {ibmzProductName} and {linuxoneProductName} with z/VM
include::_attributes/common-attributes.adoc[]
toc::[]
To create a cluster with multi-architecture compute machines on {ibmzProductName} and {linuxoneProductName} (`s390x`) with z/VM, you must have an existing single-architecture `x86_64` cluster. You can then add `s390x` compute machines to your {product-title} cluster.
Before you can add `s390x` nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see xref:../../updating/updating_a_cluster/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines].
The following procedures explain how to create a {op-system} compute machine by using a z/VM instance. This allows you to add `s390x` nodes to your cluster and deploy a cluster with multi-architecture compute machines.
include::modules/multi-architecture-verifying-cluster-compatibility.adoc[leveloffset=+1]
include::modules/machine-user-infra-machines-ibm-z.adoc[leveloffset=+1]
include::modules/installation-approve-csrs.adoc[leveloffset=+1]


@@ -1,12 +1,12 @@
:_content-type: CONCEPT
:context: multi-architecture-configuration
[id="post-install-multi-architecture-configuration"]
= About clusters with multi-architecture compute machines
include::_attributes/common-attributes.adoc[]
toc::[]
An {product-title} cluster with multi-architecture compute machines is a cluster that supports compute machines with different architectures. Clusters with multi-architecture compute machines are available only on Amazon Web Services (AWS) or Microsoft Azure installer-provisioned infrastructures and bare metal, {ibmpowerProductName}, and {ibmzProductName} user-provisioned infrastructures with x86_64 control plane machines.
[NOTE]
====
@@ -20,7 +20,7 @@ The Cluster Samples Operator is not supported on clusters with multi-architectur
For information on migrating your single-architecture cluster to a cluster that supports multi-architecture compute machines, see xref:../../updating/updating_a_cluster/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines].
== Configuring your cluster with multi-architecture compute machines
To create a cluster with multi-architecture compute machines for various platforms, you can use the documentation in the following sections:
@@ -29,3 +29,7 @@ To create a cluster with multi-architecture compute machines for various platfor
* xref:../../post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-aws.adoc#creating-multi-arch-compute-nodes-aws[Creating a cluster with multi-architecture compute machines on AWS]
* xref:../../post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-bare-metal.adoc#creating-multi-arch-compute-nodes-bare-metal[Creating a cluster with multi-architecture compute machines on bare metal]
* xref:../../post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-z.adoc#creating-multi-arch-compute-nodes-ibm-z[Creating a cluster with multi-architecture compute machines on {ibmzProductName} and {linuxoneProductName} with z/VM]
* xref:../../post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-z-kvm.adoc#creating-multi-arch-compute-nodes-ibm-z-kvm[Creating a cluster with multi-architecture compute machines on {ibmzProductName} and {linuxoneProductName} with {op-system-base} KVM]