diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index fd818f98e4..72a3c25651 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -601,6 +601,8 @@ Topics: File: creating-multi-arch-compute-nodes-bare-metal - Name: Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with z/VM File: creating-multi-arch-compute-nodes-ibm-z + - Name: Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE in an LPAR + File: creating-multi-arch-compute-nodes-ibm-z-lpar - Name: Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with RHEL KVM File: creating-multi-arch-compute-nodes-ibm-z-kvm - Name: Creating a cluster with multi-architecture compute machines on IBM Power diff --git a/modules/machine-user-infra-machines-ibm-z-kvm.adoc b/modules/machine-user-infra-machines-ibm-z-kvm.adoc index 72471a5269..d90215ae2a 100644 --- a/modules/machine-user-infra-machines-ibm-z-kvm.adoc +++ b/modules/machine-user-infra-machines-ibm-z-kvm.adoc @@ -89,7 +89,7 @@ $ virt-install \ --connect qemu:///system \ --name \ --autostart \ - --os-variant rhel9.2 \ <1> + --os-variant rhel9.4 \ <1> --cpu host \ --vcpus \ --memory \ @@ -98,15 +98,15 @@ $ virt-install \ --location ,kernel=,initrd= \ <2> --extra-args "rd.neednet=1" \ --extra-args "coreos.inst.install_dev=/dev/vda" \ - --extra-args "coreos.inst.ignition_url=" \ <3> - --extra-args "coreos.live.rootfs_url=" \ <4> - --extra-args "ip=::::::none:" \ <5> + --extra-args "coreos.inst.ignition_url=http:///worker.ign " \ <3> + --extra-args "coreos.live.rootfs_url=http:///rhcos--live-rootfs..img" \ <4> + --extra-args "ip=::::::none" \ <5> --extra-args "nameserver=" \ --extra-args "console=ttysclp0" \ --noautoconsole \ --wait ---- -<1> For `os-variant`, specify the {op-system-base} version for the {op-system} compute machine. `rhel9.2` is the recommended version. 
To query the supported {op-system-base} version of your operating system, run the following command: +<1> For `os-variant`, specify the {op-system-base} version for the {op-system} compute machine. `rhel9.4` is the recommended version. To query the supported {op-system-base} version of your operating system, run the following command: + [source,terminal] ---- @@ -119,8 +119,8 @@ The `os-variant` is case sensitive. ==== + <2> For `--location`, specify the location of the kernel/initrd on the HTTP or HTTPS server. -<3> For `coreos.inst.ignition_url=`, specify the `worker.ign` Ignition file for the machine role. Only HTTP and HTTPS protocols are supported. -<4> For `coreos.live.rootfs_url=`, specify the matching rootfs artifact for the `kernel` and `initramfs` you are booting. Only HTTP and HTTPS protocols are supported. +<3> Specify the location of the `worker.ign` config file. Only HTTP and HTTPS protocols are supported. +<4> Specify the location of the `rootfs` artifact for the `kernel` and `initramfs` you are booting. Only HTTP and HTTPS protocols are supported. <5> Optional: For `hostname`, specify the fully qualified hostname of the client machine. -- + diff --git a/modules/machine-user-infra-machines-ibm-z.adoc b/modules/machine-user-infra-machines-ibm-z.adoc index d303036e48..44c215613d 100644 --- a/modules/machine-user-infra-machines-ibm-z.adoc +++ b/modules/machine-user-infra-machines-ibm-z.adoc @@ -53,7 +53,7 @@ $ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=use + [source,terminal] ---- -$ curl -k http:///worker.ign +$ curl -k http:///worker.ign ---- . Download the {op-system-base} live `kernel`, `initramfs`, and `rootfs` files by running the following commands: @@ -93,7 +93,7 @@ $ curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootim ** For installations on DASD-type disks, complete the following tasks: ... For `coreos.inst.install_dev=`, specify `/dev/dasda`. ...
Use `rd.dasd=` to specify the DASD where {op-system} is to be installed. -... Leave all other parameters unchanged. +... You can adjust further parameters if required. + The following is an example parameter file, `additional-worker-dasd.parm`: + @@ -102,9 +102,9 @@ The following is an example parameter file, `additional-worker-dasd.parm`: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/dasda \ -coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ -coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \ -ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ +coreos.live.rootfs_url=http:///rhcos--live-rootfs..img \ +coreos.inst.ignition_url=http:///worker.ign \ +ip=::::::none nameserver= \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.dasd=0.0.3490 @@ -125,7 +125,7 @@ When you install with multiple paths, you must enable multipathing directly afte ==== If additional LUNs are configured with NPIV, FCP requires `zfcp.allow_lun_scan=0`. If you must enable `zfcp.allow_lun_scan=1` because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. ==== -... Leave all other parameters unchanged. +... You can adjust further parameters if required. 
+ [IMPORTANT] ==== @@ -140,9 +140,9 @@ The following is an example parameter file, `additional-worker-fcp.parm` for a w rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/sda \ -coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ -coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \ -ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ +coreos.live.rootfs_url=http:///rhcos--live-rootfs..img \ +coreos.inst.ignition_url=http:///worker.ign \ +ip=::::::none nameserver= \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ diff --git a/post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-z-lpar.adoc b/post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-z-lpar.adoc new file mode 100644 index 0000000000..1f14102109 --- /dev/null +++ b/post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-ibm-z-lpar.adoc @@ -0,0 +1,25 @@ +:_mod-docs-content-type: ASSEMBLY +:context: creating-multi-arch-compute-nodes-ibm-z-lpar +include::_attributes/common-attributes.adoc[] +[id="creating-multi-arch-compute-nodes-ibm-z-lpar"] += Creating a cluster with multi-architecture compute machines on {ibm-z-title} and {ibm-linuxone-title} in an LPAR + +toc::[] + +To create a cluster with multi-architecture compute machines on {ibm-z-name} and {ibm-linuxone-name} (`s390x`) in an LPAR, you must have an existing single-architecture `x86_64` cluster. You can then add `s390x` compute machines to your {product-title} cluster. + +Before you can add `s390x` nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. 
For more information on migrating to the multi-architecture payload, see xref:../../updating/updating_a_cluster/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines]. + +The following procedures explain how to create a {op-system} compute machine using an LPAR instance. This allows you to add `s390x` nodes to your cluster and deploy a cluster with multi-architecture compute machines. + +[NOTE] +==== +To create an {ibm-z-name} or {ibm-linuxone-name} (`s390x`) cluster with multi-architecture compute machines on `x86_64`, follow the instructions for +xref:../../installing/installing_ibm_z/preparing-to-install-on-ibm-z.adoc#preparing-to-install-on-ibm-z[Installing a cluster on {ibm-z-name} and {ibm-linuxone-name}]. You can then add `x86_64` compute machines as described in xref:./creating-multi-arch-compute-nodes-bare-metal.adoc#creating-multi-arch-compute-nodes-bare-metal[Creating a cluster with multi-architecture compute machines on bare metal, {ibm-power-title}, or {ibm-z-title}]. +==== + +include::modules/multi-architecture-verifying-cluster-compatibility.adoc[leveloffset=+1] + +include::modules/machine-user-infra-machines-ibm-z.adoc[leveloffset=+1] + +include::modules/installation-approve-csrs.adoc[leveloffset=+1] \ No newline at end of file
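The parameter files edited in `machine-user-infra-machines-ibm-z.adoc` are easy to get wrong, because every value ends up in a single kernel command line. As a minimal sketch of the genericized DASD example above (the server name, network settings, and device IDs below are hypothetical placeholders, not values from this change), a small shell script can assemble the parameter line from variables and fail early if a placeholder was left empty:

```shell
#!/bin/sh
# Sketch: assemble a DASD worker parameter line from placeholder variables.
# All values are hypothetical examples; substitute your own HTTP server
# (which serves worker.ign and the rootfs image), network settings, and
# device IDs before using the generated file.
HTTP_SERVER="bastion.example.com:8080"
IP="192.0.2.10"
GATEWAY="192.0.2.1"
NETMASK="255.255.255.0"
HOSTNAME="worker-2.example.com"
NIC="encbdf0"
DNS="192.0.2.1"

# Refuse to emit a parameter file with an empty field.
for v in HTTP_SERVER IP GATEWAY NETMASK HOSTNAME NIC DNS; do
  eval "val=\$$v"
  [ -n "$val" ] || { echo "error: $v is empty" >&2; exit 1; }
done

# Backslash-newline inside double quotes joins this into one line.
PARM="rd.neednet=1 console=ttysclp0 \
coreos.inst.install_dev=/dev/dasda \
coreos.live.rootfs_url=http://${HTTP_SERVER}/rhcos-live-rootfs.s390x.img \
coreos.inst.ignition_url=http://${HTTP_SERVER}/worker.ign \
ip=${IP}::${GATEWAY}:${NETMASK}:${HOSTNAME}:${NIC}:none nameserver=${DNS} \
rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \
zfcp.allow_lun_scan=0 rd.dasd=0.0.3490"

printf '%s\n' "$PARM" > additional-worker-dasd.parm
echo "wrote additional-worker-dasd.parm"
```

The same pattern works for the FCP variant by swapping `rd.dasd=` for the `rd.zfcp=` entries and `coreos.inst.install_dev=/dev/sda`.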