
Add installation for IBM Z in an LPAR

Author: SNiemann15
Date: 2024-06-05 14:11:38 +02:00
Committed by: openshift-cherrypick-robot
parent efaa432aa9
commit cb103aa732
4 changed files with 43 additions and 16 deletions

View File

@@ -601,6 +601,8 @@ Topics:
   File: creating-multi-arch-compute-nodes-bare-metal
 - Name: Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with z/VM
   File: creating-multi-arch-compute-nodes-ibm-z
+- Name: Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE in an LPAR
+  File: creating-multi-arch-compute-nodes-ibm-z-lpar
 - Name: Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with RHEL KVM
   File: creating-multi-arch-compute-nodes-ibm-z-kvm
 - Name: Creating a cluster with multi-architecture compute machines on IBM Power

View File

@@ -89,7 +89,7 @@ $ virt-install \
 --connect qemu:///system \
 --name <vm_name> \
 --autostart \
---os-variant rhel9.2 \ <1>
+--os-variant rhel9.4 \ <1>
 --cpu host \
 --vcpus <vcpus> \
 --memory <memory_mb> \
@@ -98,15 +98,15 @@ $ virt-install \
 --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \ <2>
 --extra-args "rd.neednet=1" \
 --extra-args "coreos.inst.install_dev=/dev/vda" \
---extra-args "coreos.inst.ignition_url=<worker_ign>" \ <3>
---extra-args "coreos.live.rootfs_url=<rhcos_rootfs>" \ <4>
---extra-args "ip=<ip>::<default_gateway>:<subnet_mask_length>:<hostname>::none:<MTU>" \ <5>
+--extra-args "coreos.inst.ignition_url=http://<http_server>/worker.ign" \ <3>
+--extra-args "coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img" \ <4>
+--extra-args "ip=<ip>::<gateway>:<netmask>:<hostname>::none" \ <5>
 --extra-args "nameserver=<dns>" \
 --extra-args "console=ttysclp0" \
 --noautoconsole \
 --wait
 ----
-<1> For `os-variant`, specify the {op-system-base} version for the {op-system} compute machine. `rhel9.2` is the recommended version. To query the supported {op-system-base} version of your operating system, run the following command:
+<1> For `os-variant`, specify the {op-system-base} version for the {op-system} compute machine. `rhel9.4` is the recommended version. To query the supported {op-system-base} version of your operating system, run the following command:
 +
 [source,terminal]
 ----
@@ -119,8 +119,8 @@ The `os-variant` is case sensitive.
 ====
 +
 <2> For `--location`, specify the location of the kernel/initrd on the HTTP or HTTPS server.
-<3> For `coreos.inst.ignition_url=`, specify the `worker.ign` Ignition file for the machine role. Only HTTP and HTTPS protocols are supported.
-<4> For `coreos.live.rootfs_url=`, specify the matching rootfs artifact for the `kernel` and `initramfs` you are booting. Only HTTP and HTTPS protocols are supported.
+<3> Specify the location of the `worker.ign` config file. Only HTTP and HTTPS protocols are supported.
+<4> Specify the location of the `rootfs` artifact for the `kernel` and `initramfs` you are booting. Only HTTP and HTTPS protocols are supported.
 <5> Optional: For `hostname`, specify the fully qualified hostname of the client machine.
 --
 +
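For orientation, a fully substituted invocation might look like the sketch below. Every concrete value (the VM name, sizing, the `example.com` host, and the TEST-NET addresses) is a hypothetical placeholder, and the site-specific arguments that fall outside the hunks above, such as disk and network options, are elided with `...`:

[source,terminal]
----
$ virt-install \
    --connect qemu:///system \
    --name worker-2 \
    --autostart \
    --os-variant rhel9.4 \
    --cpu host \
    --vcpus 4 \
    --memory 16384 \
    ... \
    --location <media_location>,kernel=rhcos-live-kernel-s390x,initrd=rhcos-live-initramfs.s390x.img \
    --extra-args "rd.neednet=1" \
    --extra-args "coreos.inst.install_dev=/dev/vda" \
    --extra-args "coreos.inst.ignition_url=http://example.com:8080/worker.ign" \
    --extra-args "coreos.live.rootfs_url=http://example.com:8080/rhcos-live-rootfs.s390x.img" \
    --extra-args "ip=192.0.2.10::192.0.2.1:255.255.255.0:worker-2.example.com::none" \
    --extra-args "nameserver=192.0.2.1" \
    --extra-args "console=ttysclp0" \
    --noautoconsole \
    --wait
----

Note how the `ip=` argument fills the dracut slots in the order the callout describes: client IP, gateway, netmask, then hostname, with `none` disabling further autoconfiguration.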

View File

@@ -53,7 +53,7 @@ $ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=use
 +
 [source,terminal]
 ----
-$ curl -k http://<HTTP_server>/worker.ign
+$ curl -k http://<http_server>/worker.ign
 ----
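The check above assumes that `<http_server>` is already serving the Ignition file over plain HTTP. As a hypothetical convenience, a directory of installation artifacts can be published quickly with Python's built-in web server:

[source,terminal]
----
$ cd /var/www/html && python3 -m http.server 8080
----

With that server running, `http://<http_server>:8080/worker.ign` resolves for both the `curl` verification and the `coreos.inst.ignition_url=` kernel argument.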
 . Download the {op-system-base} live `kernel`, `initramfs`, and `rootfs` files by running the following commands:
@@ -93,7 +93,7 @@ $ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootim
 ** For installations on DASD-type disks, complete the following tasks:
 ... For `coreos.inst.install_dev=`, specify `/dev/dasda`.
 ... Use `rd.dasd=` to specify the DASD where {op-system} is to be installed.
-... Leave all other parameters unchanged.
+... You can adjust further parameters if required.
 +
 The following is an example parameter file, `additional-worker-dasd.parm`:
 +
@@ -102,9 +102,9 @@ The following is an example parameter file, `additional-worker-dasd.parm`:
 rd.neednet=1 \
 console=ttysclp0 \
 coreos.inst.install_dev=/dev/dasda \
-coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \
-coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \
-ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \
+coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \
+coreos.inst.ignition_url=http://<http_server>/worker.ign \
+ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \
 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \
 zfcp.allow_lun_scan=0 \
 rd.dasd=0.0.3490
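Substituting the placeholders with the concrete sample values from the previous revision of this example (`cl1.provide.example.com`, `172.18.78.2`) gives a complete parameter file; the hostname `worker-0.example.com` is an added hypothetical value:

----
rd.neednet=1 \
console=ttysclp0 \
coreos.inst.install_dev=/dev/dasda \
coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \
coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \
ip=172.18.78.2::172.18.78.1:255.255.255.0:worker-0.example.com::none nameserver=172.18.78.1 \
rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \
zfcp.allow_lun_scan=0 \
rd.dasd=0.0.3490
----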
@@ -125,7 +125,7 @@ When you install with multiple paths, you must enable multipathing directly afte
 ====
 If additional LUNs are configured with NPIV, FCP requires `zfcp.allow_lun_scan=0`. If you must enable `zfcp.allow_lun_scan=1` because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node.
 ====
-... Leave all other parameters unchanged.
+... You can adjust further parameters if required.
 +
 [IMPORTANT]
 ====
@@ -140,9 +140,9 @@ The following is an example parameter file, `additional-worker-fcp.parm` for a w
 rd.neednet=1 \
 console=ttysclp0 \
 coreos.inst.install_dev=/dev/sda \
-coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \
-coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \
-ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \
+coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \
+coreos.inst.ignition_url=http://<http_server>/worker.ign \
+ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \
 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \
 zfcp.allow_lun_scan=0 \
 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \

View File

@@ -0,0 +1,25 @@
+:_mod-docs-content-type: ASSEMBLY
+:context: creating-multi-arch-compute-nodes-ibm-z-lpar
+include::_attributes/common-attributes.adoc[]
+[id="creating-multi-arch-compute-nodes-ibm-z-lpar"]
+= Creating a cluster with multi-architecture compute machines on {ibm-z-title} and {ibm-linuxone-title} in an LPAR
+
+toc::[]
+
+To create a cluster with multi-architecture compute machines on {ibm-z-name} and {ibm-linuxone-name} (`s390x`) in an LPAR, you must have an existing single-architecture `x86_64` cluster. You can then add `s390x` compute machines to your {product-title} cluster.
+
+Before you can add `s390x` nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see xref:../../updating/updating_a_cluster/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines].
+
+The following procedures explain how to create a {op-system} compute machine using an LPAR instance. This allows you to add `s390x` nodes to your cluster and deploy a cluster with multi-architecture compute machines.
+
+[NOTE]
+====
+To create an {ibm-z-name} or {ibm-linuxone-name} (`s390x`) cluster with multi-architecture compute machines on `x86_64`, follow the instructions for
+xref:../../installing/installing_ibm_z/preparing-to-install-on-ibm-z.adoc#preparing-to-install-on-ibm-z[Installing a cluster on {ibm-z-name} and {ibm-linuxone-name}]. You can then add `x86_64` compute machines as described in xref:./creating-multi-arch-compute-nodes-bare-metal.adoc#creating-multi-arch-compute-nodes-bare-metal[Creating a cluster with multi-architecture compute machines on bare metal, {ibm-power-title}, or {ibm-z-title}].
+====
+
+include::modules/multi-architecture-verifying-cluster-compatibility.adoc[leveloffset=+1]
+
+include::modules/machine-user-infra-machines-ibm-z.adoc[leveloffset=+1]
+
+include::modules/installation-approve-csrs.adoc[leveloffset=+1]
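Taken together, the included modules cover compatibility verification, machine creation, and CSR approval. As a minimal sketch of the two `oc` checkpoints (the expected output and the CSR name are illustrative), first confirm that the cluster runs the multi-architecture payload:

[source,terminal]
----
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
----

For a multi-architecture cluster, the output should include `"release.openshift.io/architecture":"multi"`. New `s390x` nodes only join after their certificate signing requests are listed and approved:

[source,terminal]
----
$ oc get csr
$ oc adm certificate approve <csr_name>
----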