IBM Z add sno support
Commit face448c51, parent 7d98f867bf, committed by openshift-cherrypick-robot
@@ -75,3 +75,34 @@ include::modules/install-sno-installing-with-usb-media.adoc[leveloffset=+1]
include::modules/install-booting-from-an-iso-over-http-redfish.adoc[leveloffset=+1]

include::modules/creating-custom-live-rhcos-iso.adoc[leveloffset=+1]

== Installing {sno} with {ibmzProductName} and {linuxoneProductName}

Installing a single-node cluster on {ibmzProductName} and {linuxoneProductName} requires user-provisioned installation, using either the "Installing a cluster with {op-system-base} KVM on {ibmzProductName} and {linuxoneProductName}" or the "Installing a cluster with z/VM on {ibmzProductName} and {linuxoneProductName}" procedure.

[NOTE]
====
Installing a single-node cluster on {ibmzProductName} simplifies installation for development and test environments and requires fewer resources at entry level.
====

[discrete]
=== Hardware requirements

* The equivalent of two Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster.
* At least one network connection, both to connect to the `LoadBalancer` service and to serve data for traffic outside the cluster.

[NOTE]
====
You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of {ibmzProductName}. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every {product-title} cluster.
====

[role="_additional-resources"]
.Additional resources

* xref:../../installing/installing_ibm_z/installing-ibm-z.adoc#installing-ibm-z[Installing a cluster with z/VM on {ibmzProductName} and {linuxoneProductName}]

* xref:../../installing/installing_ibm_z/installing-ibm-z-kvm.adoc#installing-ibm-z-kvm[Installing a cluster with {op-system-base} KVM on {ibmzProductName} and {linuxoneProductName}]

include::modules/install-sno-ibm-z.adoc[leveloffset=+2]

include::modules/install-sno-ibm-z-kvm.adoc[leveloffset=+2]

@@ -6,7 +6,7 @@

[id="install-sno-about-installing-on-a-single-node_{context}"]
= About OpenShift on a single node

You can create a single-node cluster with standard installation methods. {product-title} on a single node is a specialized installation that requires the creation of a special Ignition configuration file. The primary use case is for edge computing workloads, including intermittent connectivity, portable clouds, and 5G radio access networks (RAN) close to a base station. The major tradeoff with an installation on a single node is the lack of high availability.

[IMPORTANT]
====

modules/install-sno-ibm-z-kvm.adoc (new file, 173 lines)
@@ -0,0 +1,173 @@

// This is included in the following assemblies:
//
// installing_sno/install-sno-installing-sno.adoc

:_content-type: PROCEDURE
[id="installing-sno-on-ibm-z-kvm_{context}"]
= Installing {sno} with {op-system-base} KVM on {ibmzProductName} and {linuxoneProductName}

.Prerequisites

* You have installed `podman`.

.Procedure

. Set the {product-title} version by running the following command:
+
[source,terminal]
----
$ OCP_VERSION=<ocp_version> <1>
----
+
<1> Replace `<ocp_version>` with the current version, for example, `latest-{product-version}`.

. Set the host architecture by running the following command:
+
[source,terminal]
----
$ ARCH=<architecture> <1>
----
<1> Replace `<architecture>` with the target host architecture, `s390x`.

. Download the {product-title} client (`oc`) and make it available for use by entering the following commands:
+
[source,terminal]
----
$ curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz
----
+
[source,terminal]
----
$ tar zxf oc.tar.gz
----
+
[source,terminal]
----
$ chmod +x oc
----
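+
Optionally, you can verify that the client works; this check is a suggested addition, not part of the original procedure:
+
[source,terminal]
----
$ ./oc version --client
----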

. Download the {product-title} installer and make it available for use by entering the following commands:
+
[source,terminal]
----
$ curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz
----
+
[source,terminal]
----
$ tar zxvf openshift-install-linux.tar.gz
----
+
[source,terminal]
----
$ chmod +x openshift-install
----
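+
Optionally, confirm that the installer runs; as with the client check, this verification is a suggested addition:
+
[source,terminal]
----
$ ./openshift-install version
----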

. Prepare the `install-config.yaml` file:
+
[source,yaml]
----
apiVersion: v1
baseDomain: <domain> <1>
compute:
- name: worker
  replicas: 0 <2>
controlPlane:
  name: master
  replicas: 1 <3>
metadata:
  name: <name> <4>
networking: <5>
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16 <6>
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
bootstrapInPlace:
  installationDisk: /dev/disk/by-id/<disk_id> <7>
pullSecret: '<pull_secret>' <8>
sshKey: |
  <ssh_key> <9>
----
<1> Add the cluster domain name.
<2> Set the `compute` replicas to `0`. This makes the control plane node schedulable.
<3> Set the `controlPlane` replicas to `1`. In conjunction with the previous `compute` setting, this setting ensures the cluster runs on a single node.
<4> Set the `metadata` name to the cluster name.
<5> Set the `networking` details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters.
<6> Set the `cidr` value to match the subnet of the {sno} cluster.
<7> Set the path to the installation disk drive, for example, `/dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2`.
<8> Copy the {cluster-manager-url-pull} and add the contents to this configuration setting.
<9> Add the public SSH key from the administration host so that you can log in to the cluster after installation.
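+
If you do not already have an SSH key pair on the administration host, you can generate one; this command is a suggested addition, and the key path is only an example:
+
[source,terminal]
----
$ ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
----
+
Add the contents of the resulting `~/.ssh/id_ed25519.pub` file to the `sshKey` field.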

. Generate {product-title} assets by running the following commands:
+
[source,terminal]
----
$ mkdir ocp
----
+
[source,terminal]
----
$ cp install-config.yaml ocp
----
+
[source,terminal]
----
$ ./openshift-install --dir=ocp create single-node-ignition-config
----
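+
The command writes the single-node Ignition configuration into the asset directory. For example, you can confirm that it was generated; the listing below is illustrative:
+
[source,terminal]
----
$ ls ocp/bootstrap-in-place-for-live-iso.ign
----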

. Obtain the {op-system-base} `kernel`, `initramfs`, and `rootfs` artifacts from the link:https://access.redhat.com/downloads/content/290[Product Downloads] page on the Red Hat Customer Portal or from the link:https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/latest/[{op-system} image mirror] page.
+
[IMPORTANT]
====
The {op-system} images might not change with every release of {product-title}. You must download images with the highest version that is less than or equal to the {product-title} version that you install. Only use the appropriate `kernel`, `initramfs`, and `rootfs` artifacts described in the following procedure.
====
+
The file names contain the {product-title} version number. They resemble the following examples:
+
`kernel`:: `rhcos-<version>-live-kernel-<architecture>`
`initramfs`:: `rhcos-<version>-live-initramfs.<architecture>.img`
`rootfs`:: `rhcos-<version>-live-rootfs.<architecture>.img`
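+
For example, you might fetch the artifacts from the image mirror as follows; the `latest` directory and the exact file names are illustrative, so substitute the names that you see on the mirror page:
+
[source,terminal]
----
$ curl -LO https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/latest/rhcos-<version>-live-kernel-s390x
$ curl -LO https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/latest/rhcos-<version>-live-initramfs.s390x.img
$ curl -LO https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/latest/rhcos-<version>-live-rootfs.s390x.img
----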

. Before you launch `virt-install`, move the following files and artifacts to an HTTP or HTTPS server:

** Downloaded {op-system-base} live `kernel`, `initramfs`, and `rootfs` artifacts
** Ignition files
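+
A minimal way to serve these files over HTTP, assuming Python 3 is available on the web server host (this sketch is not part of the original procedure, and the directory is an example):
+
[source,terminal]
----
$ cd /var/www/sno-artifacts && python3 -m http.server 8080
----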

. Create the KVM guest nodes by using the following components:

** {op-system-base} `kernel` and `initramfs` artifacts
** Ignition files
** The new disk image
** Adjusted parm line arguments

[source,terminal]
----
$ virt-install \
   --name {vn_name} \
   --autostart \
   --memory={memory_mb} \
   --cpu host \
   --vcpus {vcpus} \
   --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} \// <1>
   --disk size=100 \
   --network network={virt_network_parm} \
   --graphics none \
   --noautoconsole \
   --extra-args "ip=${IP}::${GATEWAY}:${MASK}:${VM_NAME}::none" \
   --extra-args "nameserver=${NAME_SERVER}" \
   --extra-args "ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot" \
   --extra-args "coreos.live.rootfs_url={rhcos_liveos}" \// <2>
   --extra-args "ignition.config.url={rhcos_ign}" \// <3>
   --extra-args "random.trust_cpu=on rd.luks.options=discard" \
   --extra-args "console=tty1 console=ttyS1,115200n8" \
   --wait
----
<1> For the `--location` parameter, specify the location of the `kernel` and `initrd` on the HTTP or HTTPS server.
<2> For the `coreos.live.rootfs_url=` artifact, specify the matching `rootfs` artifact for the `kernel` and `initramfs` that you are booting. Only HTTP and HTTPS protocols are supported.
<3> For the `ignition.config.url=` parameter, specify the Ignition file for the machine role. Only HTTP and HTTPS protocols are supported.
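+
As a possible way to track the installation from the administration host (a suggested addition, not part of the original procedure), you can wait for the cluster to report completion:
+
[source,terminal]
----
$ ./openshift-install --dir=ocp wait-for install-complete
----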

modules/install-sno-ibm-z.adoc (new file, 264 lines)
@@ -0,0 +1,264 @@

// This is included in the following assemblies:
//
// installing_sno/install-sno-installing-sno.adoc

:_content-type: PROCEDURE
[id="installing-sno-on-ibm-z_{context}"]
= Installing {sno} with z/VM on {ibmzProductName} and {linuxoneProductName}

.Prerequisites

* You have installed `podman`.

.Procedure

. Set the {product-title} version by running the following command:
+
[source,terminal]
----
$ OCP_VERSION=<ocp_version> <1>
----
+
<1> Replace `<ocp_version>` with the current version, for example, `latest-{product-version}`.

. Set the host architecture by running the following command:
+
[source,terminal]
----
$ ARCH=<architecture> <1>
----
<1> Replace `<architecture>` with the target host architecture, `s390x`.

. Download the {product-title} client (`oc`) and make it available for use by entering the following commands:
+
[source,terminal]
----
$ curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz
----
+
[source,terminal]
----
$ tar zxf oc.tar.gz
----
+
[source,terminal]
----
$ chmod +x oc
----

. Download the {product-title} installer and make it available for use by entering the following commands:
+
[source,terminal]
----
$ curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz
----
+
[source,terminal]
----
$ tar zxvf openshift-install-linux.tar.gz
----
+
[source,terminal]
----
$ chmod +x openshift-install
----

. Prepare the `install-config.yaml` file:
+
[source,yaml]
----
apiVersion: v1
baseDomain: <domain> <1>
compute:
- name: worker
  replicas: 0 <2>
controlPlane:
  name: master
  replicas: 1 <3>
metadata:
  name: <name> <4>
networking: <5>
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16 <6>
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
bootstrapInPlace:
  installationDisk: /dev/disk/by-id/<disk_id> <7>
pullSecret: '<pull_secret>' <8>
sshKey: |
  <ssh_key> <9>
----
<1> Add the cluster domain name.
<2> Set the `compute` replicas to `0`. This makes the control plane node schedulable.
<3> Set the `controlPlane` replicas to `1`. In conjunction with the previous `compute` setting, this setting ensures the cluster runs on a single node.
<4> Set the `metadata` name to the cluster name.
<5> Set the `networking` details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters.
<6> Set the `cidr` value to match the subnet of the {sno} cluster.
<7> Set the path to the installation disk drive, for example, `/dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2`.
<8> Copy the {cluster-manager-url-pull} and add the contents to this configuration setting.
<9> Add the public SSH key from the administration host so that you can log in to the cluster after installation.

. Generate {product-title} assets by running the following commands:
+
[source,terminal]
----
$ mkdir ocp
----
+
[source,terminal]
----
$ cp install-config.yaml ocp
----
+
[source,terminal]
----
$ ./openshift-install --dir=ocp create single-node-ignition-config
----

. Obtain the {op-system-base} `kernel`, `initramfs`, and `rootfs` artifacts from the link:https://access.redhat.com/downloads/content/290[Product Downloads] page on the Red Hat Customer Portal or from the link:https://mirror.openshift.com/pub/openshift-v4/s390x/dependencies/rhcos/latest/[{op-system} image mirror] page.
+
[IMPORTANT]
====
The {op-system} images might not change with every release of {product-title}. You must download images with the highest version that is less than or equal to the {product-title} version that you install. Only use the appropriate `kernel`, `initramfs`, and `rootfs` artifacts described in the following procedure.
====
+
The file names contain the {product-title} version number. They resemble the following examples:
+
`kernel`:: `rhcos-<version>-live-kernel-<architecture>`
`initramfs`:: `rhcos-<version>-live-initramfs.<architecture>.img`
`rootfs`:: `rhcos-<version>-live-rootfs.<architecture>.img`
+
[NOTE]
====
The `rootfs` image is the same for FCP and DASD.
====

. Move the following artifacts and files to an HTTP or HTTPS server:

** Downloaded {op-system-base} live `kernel`, `initramfs`, and `rootfs` artifacts
** Ignition files

. Create parameter files for a particular virtual machine:
+
.Example parameter file
[source,terminal]
----
rd.neednet=1 \
console=ttysclp0 \
coreos.live.rootfs_url={rhcos_liveos}:8080/rootfs.img \// <1>
ignition.config.url={rhcos_ign}:8080/ignition/bootstrap-in-place-for-live-iso.ign \// <2>
ip=encbdd0:dhcp::02:00:00:02:34:02 \// <3>
rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \
rd.dasd=0.0.4411 \// <4>
rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \// <5>
zfcp.allow_lun_scan=0 \
rd.luks.options=discard \
ignition.firstboot ignition.platform.id=metal \
console=tty1 console=ttyS1,115200n8
----
<1> For the `coreos.live.rootfs_url=` artifact, specify the matching `rootfs` artifact for the `kernel` and `initramfs` that you are booting. Only HTTP and HTTPS protocols are supported.
<2> For the `ignition.config.url=` parameter, specify the Ignition file for the machine role. Only HTTP and HTTPS protocols are supported.
<3> For the `ip=` parameter, assign the IP address automatically by using DHCP, or manually as described in "Installing a cluster with z/VM on {ibmzProductName} and {linuxoneProductName}".
<4> For installations on DASD-type disks, use `rd.dasd=` to specify the DASD where {op-system} is to be installed. Omit this entry for FCP-type disks.
<5> For installations on FCP-type disks, use `rd.zfcp=<adapter>,<wwpn>,<lun>` to specify the FCP disk where {op-system} is to be installed. Omit this entry for DASD-type disks.
+
Leave all other parameters unchanged.

. Transfer the following artifacts, files, and images to z/VM, for example, by using FTP:

** `kernel` and `initramfs` artifacts
** Parameter files
** {op-system} images
+
For details about how to transfer the files with FTP and boot from the virtual reader, see link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/installation_guide/sect-installing-zvm-s390[Installing under z/VM].
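+
An FTP session run from the z/VM guest might look like the following sketch, which follows the pattern in the linked guide; the server name and the CMS file names are illustrative assumptions:
+
[source,terminal]
----
$ ftp <http_server>
ftp> ascii
ftp> get sno.prm sno.prm.a
ftp> locsite fix 80
ftp> binary
ftp> get rhcos-<version>-live-kernel-s390x kernel.img.a
ftp> get rhcos-<version>-live-initramfs.s390x.img initrd.img.a
ftp> quit
----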

. Punch the files to the virtual reader of the z/VM guest virtual machine that is to become your bootstrap node.

. Log in to CMS on the bootstrap machine.

. IPL the bootstrap machine from the reader by running the following command:
+
[source,terminal]
----
$ cp ipl c
----

. After the first reboot of the virtual machine, run the following commands directly after one another:

.. To boot a DASD device after first reboot, run the following commands:
+
--
[source,terminal]
----
$ cp i <devno> clear loadparm prompt
----

where:

`<devno>`:: Specifies the device number of the boot device as seen by the guest.

[source,terminal]
----
$ cp vi vmsg 0 <kernel_parameters>
----

where:

`<kernel_parameters>`:: Specifies a set of kernel parameters to be stored as system control program data (SCPDATA). When booting Linux, these kernel parameters are concatenated to the end of the existing kernel parameters that are used by your boot configuration. The combined parameter string must not exceed 896 characters.
--
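+
For example, with the DASD device number from the example parameter file above (an illustrative value), the first command might read:
+
[source,terminal]
----
$ cp i 4411 clear loadparm prompt
----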

.. To boot an FCP device after first reboot, run the following commands:
+
--
[source,terminal]
----
$ cp set loaddev portname <wwpn> lun <lun>
----

where:

`<wwpn>`:: Specifies the target port and `<lun>` specifies the logical unit, both in hexadecimal format.

[source,terminal]
----
$ cp set loaddev bootprog <n>
----

where:

`<n>`:: Specifies the kernel to be booted.

[source,terminal]
----
$ cp set loaddev scpdata {APPEND|NEW} '<kernel_parameters>'
----

where:

`<kernel_parameters>`:: Specifies a set of kernel parameters to be stored as system control program data (SCPDATA). When booting Linux, these kernel parameters are concatenated to the end of the existing kernel parameters that are used by your boot configuration. The combined parameter string must not exceed 896 characters.

`<APPEND|NEW>`:: Optional: Specify `APPEND` to append kernel parameters to existing SCPDATA. This is the default. Specify `NEW` to replace existing SCPDATA.

.Example
[source,terminal]
----
$ cp set loaddev scpdata 'rd.zfcp=0.0.8001,0x500507630a0350a4,0x4000409D00000000
ip=encbdd0:dhcp::02:00:00:02:34:02 rd.neednet=1'
----

To start the IPL and boot process, run the following command:

[source,terminal]
----
$ cp i <devno>
----

where:

`<devno>`:: Specifies the device number of the boot device as seen by the guest.
--

@@ -10,9 +10,14 @@ Installing {product-title} on a single node alleviates some of the requirements

* *Administration host:* You must have a computer to prepare the ISO, to create the USB boot drive, and to monitor the installation.
+
[NOTE]
====
An ISO is not required for {ibmzProductName} installations.
====

* *CPU Architecture:* Installing {product-title} on a single node supports `x86_64`, `arm64`, `ppc64le`, and `s390x` CPU architectures.

* *Supported platforms:* Installing {product-title} on a single node is supported on bare metal, vSphere, AWS, Red Hat OpenStack, {VirtProductName}, {ibmpowerProductName}, and {ibmzProductName} platforms.

* *Production-grade server:* Installing {product-title} on a single node requires a server with sufficient resources to run {product-title} services and a production workload.
+
@@ -20,7 +25,7 @@ Installing {product-title} on a single node alleviates some of the requirements
[options="header"]
|====
|Profile|vCPU|Memory|Storage
|Minimum|8 vCPU cores|16 GB of RAM|120 GB
|====
+
[NOTE]
@@ -32,7 +37,12 @@ Installing {product-title} on a single node alleviates some of the requirements
* Adding Operators during the installation process might increase the minimum resource requirements.
====
+
The server must have a Baseboard Management Controller (BMC) when booting with virtual media.
+
[NOTE]
====
BMC is not supported on {ibmzProductName}.
====

* *Networking:* The server must have access to the internet or access to a local registry if it is not connected to a routable network. The server must have a DHCP reservation or a static IP address for the Kubernetes API, ingress route, and cluster node domain names. You must configure the DNS to resolve the IP address to each of the following fully qualified domain names (FQDN):
+