// This is included in the following assemblies:
//
// installing_sno/install-sno-installing-sno.adoc
:_mod-docs-content-type: PROCEDURE
[id="installing-sno-on-ibm-z-lpar_{context}"]
= Installing {sno} in an LPAR on {ibm-z-title} and {ibm-linuxone-title}

.Prerequisites
* If you are deploying a single-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In single-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the _Load balancing requirements for user-provisioned infrastructure_ section for more information.

.Procedure
. Set the {product-title} version by running the following command:
+
[source,terminal]
----
$ OCP_VERSION=<ocp_version> <1>
----
<1> Replace `<ocp_version>` with the version of {product-title} that you want to install. For example, `latest-{product-version}`.
. Set the host architecture by running the following command:
+
[source,terminal]
----
$ ARCH=<architecture> <1>
----
<1> Replace `<architecture>` with the target host architecture, `s390x`.
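+
For example, to target the latest available {product-version} release on {ibm-z-name} hardware, you might set the two variables as follows; the version string is illustrative:
+
[source,terminal,subs="attributes+"]
----
$ OCP_VERSION=latest-{product-version}
----
+
[source,terminal]
----
$ ARCH=s390x
----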
. Download the {product-title} client (`oc`) and make it available for use by entering the following commands:
+
[source,terminal]
----
$ curl -k https://mirror.openshift.com/pub/openshift-v4/${ARCH}/clients/ocp/${OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz
----
+
[source,terminal]
----
$ tar zxvf oc.tar.gz
----
+
[source,terminal]
----
$ chmod +x oc
----
. Download the {product-title} installer and make it available for use by entering the following commands:
+
[source,terminal]
----
$ curl -k https://mirror.openshift.com/pub/openshift-v4/${ARCH}/clients/ocp/${OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz
----
+
[source,terminal]
----
$ tar zxvf openshift-install-linux.tar.gz
----
+
[source,terminal]
----
$ chmod +x openshift-install
----
. Prepare the `install-config.yaml` file:
+
[source,yaml]
----
apiVersion: v1
baseDomain: <domain> <1>
compute:
- name: worker
  replicas: 0 <2>
controlPlane:
  name: master
  replicas: 1 <3>
metadata:
  name: <name> <4>
networking: <5>
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16 <6>
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
pullSecret: '<pull_secret>' <7>
sshKey: |
  <ssh_key> <8>
----
<1> Add the cluster domain name.
<2> Set the `compute` replicas to `0`. This makes the control plane node schedulable.
<3> Set the `controlPlane` replicas to `1`. In conjunction with the previous `compute` setting, this setting ensures the cluster runs on a single node.
<4> Set the `metadata` name to the cluster name.
<5> Set the `networking` details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters.
<6> Set the `cidr` value to match the subnet of the {sno} cluster.
<7> Copy the {cluster-manager-url-pull} and add the contents to this configuration setting.
<8> Add the public SSH key from the administration host so that you can log in to the cluster after installation.
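+
If the administration host does not already have an SSH key pair, you can generate one with the standard `ssh-keygen` utility before filling in `<ssh_key>`; the key type and file path in this example are suggestions only:
+
[source,terminal]
----
$ ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
----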
. Generate {product-title} assets by running the following commands:
+
[source,terminal]
----
$ mkdir ocp
----
+
[source,terminal]
----
$ cp install-config.yaml ocp
----
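+
Because the installation program consumes the `install-config.yaml` file in the following steps, consider keeping a backup copy outside the `ocp` directory, for example:
+
[source,terminal]
----
$ cp install-config.yaml install-config.yaml.bak
----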
. Change to the directory that contains the {product-title} installation program and generate the Kubernetes manifests for the cluster:
+
[source,terminal]
----
$ ./openshift-install create manifests --dir <installation_directory> <1>
----
<1> For `<installation_directory>`, specify the installation directory that contains the `install-config.yaml` file you created.
. Check that the `mastersSchedulable` parameter in the `<installation_directory>/manifests/cluster-scheduler-02-config.yml` Kubernetes manifest file is set to `true`.
+
--
.. Open the `<installation_directory>/manifests/cluster-scheduler-02-config.yml` file.
.. Locate the `mastersSchedulable` parameter and ensure that it is set to `true` as shown in the following `spec` stanza:
+
[source,yaml]
----
spec:
  mastersSchedulable: true
status: {}
----
.. Save and exit the file.
--
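+
As a quick alternative to opening the file in an editor, you can check the current value from the command line; this one-line check is a convenience rather than a required step:
+
[source,terminal]
----
$ grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml
----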
. Create the Ignition configuration files by running the following command from the directory that contains the installation program:
+
[source,terminal]
----
$ ./openshift-install create ignition-configs --dir <installation_directory> <1>
----
<1> For `<installation_directory>`, specify the same installation directory.
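+
When the command completes, the installation directory contains the Ignition config files, including `bootstrap.ign`, `master.ign`, and `worker.ign`. You can list them to confirm:
+
[source,terminal]
----
$ ls <installation_directory>/*.ign
----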
. Obtain the {op-system-base} `kernel`, `initramfs`, and `rootfs` artifacts from the link:https://access.redhat.com/downloads/content/290[Product Downloads] page on the Red Hat Customer Portal or from the link:https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/latest/[{op-system} image mirror] page.
+
[IMPORTANT]
====
The {op-system} images might not change with every release of {product-title}. You must download images with the highest version that is less than or equal to the {product-title} version that you install. Only use the appropriate `kernel`, `initramfs`, and `rootfs` artifacts described in the following procedure.
====
+
The file names contain the {product-title} version number. They resemble the following examples:
+
`kernel`:: `rhcos-<version>-live-kernel-<architecture>`
`initramfs`:: `rhcos-<version>-live-initramfs.<architecture>.img`
`rootfs`:: `rhcos-<version>-live-rootfs.<architecture>.img`
+
[NOTE]
====
The `rootfs` image is the same for FCP and DASD.
====
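+
If you download from the {op-system} image mirror page, the transfer might look like the following example for the `kernel` artifact; substitute the placeholders with the file names that you identified above:
+
[source,terminal]
----
$ curl -LO https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/latest/rhcos-<version>-live-kernel-<architecture>
----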
. Move the following artifacts and files to an HTTP or HTTPS server (see the verification example after this list):
** Downloaded {op-system-base} live `kernel`, `initramfs`, and `rootfs` artifacts
** Ignition files
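+
After copying the files, you can confirm that the server serves each one; the host name and file name in this check are placeholders:
+
[source,terminal]
----
$ curl -I http://<http_server>/bootstrap.ign
----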
. Create a parameter file for the bootstrap machine in an LPAR:
+
.Example parameter file for the bootstrap machine
[source,terminal]
----
cio_ignore=all,!condev rd.neednet=1 \
console=ttysclp0 \
coreos.inst.install_dev=/dev/<block_device> \// <1>
coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \// <2>
coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \// <3>
ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \// <4>
rd.znet=qeth,0.0.1140,0.0.1141,0.0.1142,layer2=1,portno=0 \
rd.dasd=0.0.4411 \// <5>
rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \// <6>
zfcp.allow_lun_scan=0
----
<1> Specify the block device on the system to install to. For installations on DASD-type disks, use `dasda`; for installations on FCP-type disks, use `sda`.
<2> Specify the location of the `bootstrap.ign` config file. Only HTTP and HTTPS protocols are supported.
<3> For the `coreos.live.rootfs_url=` artifact, specify the matching `rootfs` artifact for the `kernel` and `initramfs` you are booting. Only HTTP and HTTPS protocols are supported.
<4> For the `ip=` parameter, assign the IP address manually as described in "Installing a cluster in an LPAR on {ibm-z-name} and {ibm-linuxone-name}".
<5> For installations on DASD-type disks, use `rd.dasd=` to specify the DASD where {op-system} is to be installed. Omit this entry for FCP-type disks.
<6> For installations on FCP-type disks, use `rd.zfcp=<adapter>,<wwpn>,<lun>` to specify the FCP disk where {op-system} is to be installed. Omit this entry for DASD-type disks.
+
You can adjust further parameters if required.
. Create a parameter file for the control plane machine in an LPAR:
+
.Example parameter file for the control plane machine
[source,terminal]
----
cio_ignore=all,!condev rd.neednet=1 \
console=ttysclp0 \
coreos.inst.install_dev=/dev/<block_device> \
coreos.inst.ignition_url=http://<http_server>/master.ign \// <1>
coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \
ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \
rd.znet=qeth,0.0.1140,0.0.1141,0.0.1142,layer2=1,portno=0 \
rd.dasd=0.0.4411 \
rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \
zfcp.allow_lun_scan=0
----
<1> Specify the location of the `master.ign` config file. Only HTTP and HTTPS protocols are supported.
. Transfer the following artifacts, files, and images to the LPAR, for example, by using FTP:
** `kernel` and `initramfs` artifacts
** Parameter files
** {op-system} images
+
For details about how to transfer the files with FTP and boot, see link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/performing_a_standard_rhel_9_installation/assembly_installing-on-64-bit-ibm-z_installing-rhel#installing-in-an-lpar_installing-in-an-lpar[Installing in an LPAR].
. Boot the bootstrap machine.
. Boot the control plane machine.
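+
To follow the progress of the deployment from the administration host, you can run the installation program's wait command against the same installation directory:
+
[source,terminal]
----
$ ./openshift-install wait-for install-complete --dir <installation_directory>
----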