mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

Merge pull request #105360 from openshift-cherrypick-robot/cherry-pick-105295-to-enterprise-4.21

[enterprise-4.21] OSDOCS-17013-bm-upi-8: CQA BM UPI doc
Ben Scott
2026-01-26 10:21:01 -05:00
committed by GitHub
19 changed files with 149 additions and 97 deletions

View File

@@ -66,6 +66,11 @@ data:
// Updating the global cluster pull secret
include::modules/images-update-global-pull-secret.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* link:https://docs.redhat.com/en/documentation/openshift_cluster_manager/1-latest/html-single/managing_clusters/index#transferring-cluster-ownership_downloading-and-updating-pull-secrets[Transferring cluster ownership]
[id="update-service-install_{context}"]
== Installing the OpenShift Update Service Operator

View File

@@ -274,6 +274,8 @@ include::modules/rhcos-install-iscsi-ibft.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://www.man7.org/linux/man-pages/man7/dracut.cmdline.7.html[`dracut.cmdline` manual page]
* xref:../../../installing/installing_bare_metal/upi/installing-bare-metal.adoc#creating-machines-bare-metal_installing-bare-metal[Installing {op-system} and starting the {product-title} bootstrap process]
include::modules/installation-installing-bare-metal.adoc[leveloffset=+1]
@@ -287,6 +289,11 @@ include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]
include::modules/installation-approve-csrs.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* link:https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/[Certificate Signing Requests]
include::modules/installation-operators-config.adoc[leveloffset=+1]
[role="_additional-resources"]
@@ -313,13 +320,11 @@ include::modules/cluster-telemetry.adoc[leveloffset=+1]
.Additional resources
* xref:../../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring]
== Next steps
* xref:../../../installing/validation_and_troubleshooting/validating-an-installation.adoc#validating-an-installation[Validating an installation]
* xref:../../../post_installation_configuration/cluster-tasks.adoc#available_cluster_customizations[Customize your cluster]
* xref:../../../support/remote_health_monitoring/remote-health-reporting.adoc#remote-health-reporting[Remote health reporting]
* xref:../../../registry/configuring_registry_storage/configuring-registry-storage-baremetal.adoc#configuring-registry-storage-baremetal[Set up your registry and configure registry storage]
* link:https://access.redhat.com/solutions/4656511[Data Gathered and Used by Red Hat's subscription services]
ifeval::["{context}" == "installing-with-agent-based-installer"]
:!agent:

View File

@@ -72,9 +72,10 @@ endif::[]
[id="cli-logging-in-kubeadmin_{context}"]
= Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster `kubeconfig` file.
The `kubeconfig` file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.
The file is specific to a cluster and is created during {product-title} installation.
[role="_abstract"]
To log in to your cluster as the default system user, export the `kubeconfig` file. This configuration enables the CLI to authenticate and connect to the specific API server created during {product-title} installation.
The `kubeconfig` file is specific to a cluster and is created during {product-title} installation.
.Prerequisites
ifndef::gcp[]
@@ -91,10 +92,12 @@ endif::gcp[]
+
[source,terminal]
----
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig <1>
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
----
<1> For `<installation_directory>`, specify the path to the directory that you stored
the installation files in.
+
where:
+
`<installation_directory>`:: Specifies the path to the directory that stores the installation files.
. Verify that you can run `oc` commands successfully with the exported configuration by running the following command:
+
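The export-and-verify flow in this hunk can be sketched end to end. The directory name below is a hypothetical stand-in for whatever you passed to the installation program, and the `touch` only simulates the file that the installer actually creates:

```shell
# Sketch of the kubeconfig export flow; "./ocp-install" is a hypothetical
# installation directory standing in for your real one.
installation_directory=./ocp-install
mkdir -p "$installation_directory/auth"
touch "$installation_directory/auth/kubeconfig"  # placeholder; the installer creates the real file
export KUBECONFIG=$installation_directory/auth/kubeconfig
echo "$KUBECONFIG"
```

With `KUBECONFIG` pointing at the real file, `oc whoami` returns `system:admin` on a freshly installed cluster.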

View File

@@ -68,8 +68,8 @@
ifndef::openshift-origin[]
= Telemetry access for {product-title}
In {product-title} {product-version}, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to {cluster-manager-url}.
After you confirm that your {cluster-manager-url} inventory is correct, either maintained automatically by Telemetry or manually by using {cluster-manager}, link:https://access.redhat.com/documentation/en-us/subscription_central/2020-04/html/getting_started_with_subscription_watch/con-how-to-select-datacollection-tool_assembly-requirements-and-your-responsibilities-ctxt#red_hat_openshift[use subscription watch] to track your {product-title} subscriptions at the account or multi-cluster level.
[role="_abstract"]
To provide metrics about cluster health and the success of updates, the Telemetry service requires internet access. When connected, this service runs automatically by default and registers your cluster to {cluster-manager-url}.
After you confirm that your {cluster-manager-url} inventory is correct, either maintained automatically by Telemetry or manually by using {cluster-manager}, use subscription watch to track your {product-title} subscriptions at the account or multi-cluster level. For more information about subscription watch, see "Data Gathered and Used by Red Hat's subscription services" in the _Additional resources_ section.
endif::openshift-origin[]

View File

@@ -46,7 +46,8 @@ endif::[]
[id="installation-approve-csrs_{context}"]
= Approving the certificate signing requests for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
[role="_abstract"]
To add machines to a cluster, verify the status of the certificate signing requests (CSRs) generated for each machine. If manual approval is required, approve the client requests first, followed by the server requests.
.Prerequisites
@@ -127,9 +128,12 @@ For clusters running on platforms that are not machine API enabled, such as bare
+
[source,terminal]
----
$ oc adm certificate approve <csr_name> <1>
$ oc adm certificate approve <csr_name>
----
<1> `<csr_name>` is the name of a CSR from the list of current CSRs.
+
where:
+
`<csr_name>`:: Specifies the name of a CSR from the list of current CSRs.
+
** To approve all pending CSRs, run the following command:
+
@@ -165,9 +169,12 @@ csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal
+
[source,terminal]
----
$ oc adm certificate approve <csr_name> <1>
$ oc adm certificate approve <csr_name>
----
<1> `<csr_name>` is the name of a CSR from the list of current CSRs.
+
where:
+
`<csr_name>`:: Specifies the name of a CSR from the list of current CSRs.
+
** To approve all pending CSRs, run the following command:
+
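The bulk-approval command that this hunk truncates follows a list-then-pipe shape: emit the names of pending CSRs, then hand them to the approve command. A sketch of that shape, with `echo` standing in for `oc` so it runs anywhere, and with hypothetical CSR names:

```shell
# Pipeline shape for approving all pending CSRs at once: emit pending CSR
# names, then pipe them to the approve command. `echo` stands in for `oc`
# here so the pipeline is runnable; the CSR names are hypothetical.
printf 'csr-8b2br\ncsr-8vnps\n' \
  | xargs --no-run-if-empty echo oc adm certificate approve
# prints: oc adm certificate approve csr-8b2br csr-8vnps
```

The `--no-run-if-empty` flag keeps `xargs` from invoking the command when no CSRs are pending.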
@@ -223,9 +230,6 @@ endif::ibm-power[]
It can take a few minutes after approval of the server CSRs for the machines to transition to the `Ready` status.
====
.Additional information
* link:https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/[Certificate Signing Requests]
ifeval::["{context}" == "installing-ibm-z"]
:!ibm-z:
endif::[]

View File

@@ -54,8 +54,8 @@ endif::[]
[id="installation-complete-user-infra_{context}"]
= Completing installation on user-provisioned infrastructure
After you complete the Operator configuration, you can finish installing the
cluster on infrastructure that you provide.
[role="_abstract"]
To finalize the installation on user-provisioned infrastructure, complete the cluster deployment after configuring the Operators. This ensures the cluster is fully operational on the infrastructure that you provide.
.Prerequisites
@@ -108,13 +108,16 @@ service-ca {product-version}.0 True Fa
storage {product-version}.0 True False False 37m
----
+
Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials:
Alternatively, the following command notifies you when all of the clusters are available. The command also retrieves and displays credentials:
+
[source,terminal]
----
$ ./openshift-install --dir <installation_directory> wait-for install-complete <1>
$ ./openshift-install --dir <installation_directory> wait-for install-complete
----
<1> For `<installation_directory>`, specify the path to the directory that you
+
where:
+
`<installation_directory>`:: Specifies the path to the directory that stores the installation files.
+
.Example output
@@ -159,7 +162,10 @@ openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8
----
$ oc logs <pod_name> -n <namespace>
----
* `<namespace>`: Specify the pod name and namespace, as shown in the output of an earlier command.
+
where:
+
`<namespace>`:: Specifies the pod name and namespace, as shown in the output of an earlier command.
+
If the pod logs display, the Kubernetes API server can communicate with the cluster machines.

View File

@@ -34,12 +34,13 @@ endif::[]
[id="installation-installing-bare-metal_{context}"]
= Waiting for the bootstrap process to complete
The {product-title} bootstrap process begins after the cluster nodes first boot into the persistent {op-system} environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install {product-title} on the machines. You must wait for the bootstrap process to complete.
[role="_abstract"]
To install {product-title}, use Ignition configuration files to initialize the bootstrap process after the cluster nodes boot into {op-system}. You must wait for this process to complete to ensure the cluster is fully installed.
.Prerequisites
* You have created the Ignition config files for your cluster.
* You have configured suitable network, DNS and load balancing infrastructure.
* You have configured suitable network, DNS, and load balancing infrastructure.
* You have obtained the installation program and generated the Ignition config files for your cluster.
* You installed {op-system} on your cluster machines and provided the Ignition config files that the {product-title} installation program generated.
ifndef::restricted[]
@@ -52,11 +53,14 @@ endif::restricted[]
+
[source,terminal]
----
$ ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ <1>
--log-level=info <2>
$ ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \
--log-level=info
----
<1> For `<installation_directory>`, specify the path to the directory that you stored the installation files in.
<2> To view different installation details, specify `warn`, `debug`, or `error` instead of `info`.
+
where:
+
`<installation_directory>`:: Specifies the path to the directory that stores the installation files.
`--log-level=info`:: Specifies `warn`, `debug`, or `error` instead of `info` to view different installation details.
+
.Example output
[source,terminal]

View File

@@ -74,7 +74,7 @@ balancer after the bootstrap machine initializes the cluster control plane.
|Machine config server
|===
+
[NOTE]
====
The load balancer must be configured to take a maximum of 30 seconds from the

View File

@@ -19,8 +19,8 @@
[id="installation-operators-config_{context}"]
= Initial Operator configuration
After the control plane initializes, you must immediately configure some
Operators so that they all become available.
[role="_abstract"]
To ensure all Operators become available, configure the required Operators immediately after the control plane initializes. This configuration is essential for stabilizing the cluster environment following the installation.
.Prerequisites
@@ -71,4 +71,5 @@ operator-lifecycle-manager-packageserver {product-version}.0 True Fa
service-ca {product-version}.0 True False False 38m
storage {product-version}.0 True False False 37m
----
. Configure the Operators that are not available.

View File

@@ -42,9 +42,9 @@ ifndef::aws[]
The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available.
endif::aws[]
Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.
Configure a persistent volume, which is required for production clusters. Where applicable, you can configure an empty directory as the storage location for non-production clusters.
Additional instructions are provided for allowing the image registry to use block storage types by using the `Recreate` rollout strategy during upgrades.
You can also allow the image registry to use block storage types by using the `Recreate` rollout strategy during upgrades.
ifeval::["{context}" == "installing-aws-user-infra"]
:!aws:

View File

@@ -32,7 +32,7 @@ $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --pa
+
[WARNING]
====
Configure this option for only non-production clusters.
Configure this option only for non-production clusters.
====
+
If you run this command before the Image Registry Operator initializes its

View File

@@ -13,7 +13,7 @@ You can use `oc-mirror` to perform a dry run, without actually mirroring any ima
.Prerequisites
* You have access to the internet to obtain the necessary container images.
* You have installed the {oc=first}.
* You have installed the {oc-first}.
* You have installed the oc-mirror CLI plugin.
* You have created the image set configuration file.

View File

@@ -53,7 +53,7 @@ ifdef::ibm-power[]
endif::ibm-power[]
[role="_abstract"]
As a cluster administrator, following installation you must configure your registry to use storage.
To ensure the registry is fully operational, configure the registry to use storage immediately after the cluster installation. This configuration is required for the registry to store data.
.Prerequisites
@@ -69,7 +69,7 @@ ifdef::ibm-power[on {ibm-power-name}.]
{product-title} supports `ReadWriteOnce` access for image registry storage when you have only one replica. `ReadWriteOnce` access also requires that the registry uses the `Recreate` rollout strategy. To deploy an image registry that supports high availability with two or more replicas, `ReadWriteMany` access is required.
====
+
* Must have 100Gi capacity.
* You must have a system with at least 100Gi capacity.
.Procedure

View File

@@ -12,9 +12,10 @@ endif::[]
[id="rhcos-enabling-multipath_{context}"]
= Enabling multipathing with kernel arguments on {op-system}
{op-system} supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability.
[role="_abstract"]
To achieve higher host availability and stronger resilience against hardware failure, enable multipathing on the primary disk. This configuration uses kernel arguments on {op-system} to ensure continuous storage access if path failure occurs.
You can enable multipathing at installation time for nodes that were provisioned in {product-title} 4.8 or later. While postinstallation support is available by activating multipathing via the machine config, enabling multipathing during installation is recommended.
You can enable multipathing at installation time for nodes that were provisioned in {product-title} 4.8 or later. While postinstallation support is available by activating multipathing through the machine config, Red{nbsp}Hat recommends enabling multipathing during installation.
In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time.
@@ -22,7 +23,6 @@ In setups where any I/O to non-optimized paths results in I/O system errors, you
====
On {ibm-z-name} and {ibm-linuxone-name}, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing {op-system} and starting the {product-title} bootstrap process" in _Installing a cluster with z/VM on {ibm-z-name} and {ibm-linuxone-name}_.
====
// Add xref once it's allowed.
The following procedure enables multipath at installation time and appends kernel arguments to the `coreos-installer install` command so that the installed system itself will use multipath beginning from the first boot.
@@ -50,12 +50,12 @@ $ mpathconf --enable && systemctl start multipathd.service
. Append the kernel arguments by invoking the `coreos-installer` program:
+
* If there is only one multipath device connected to the machine, it should be available at path `/dev/mapper/mpatha`. For example:
* If there is only one multipath device connected to the machine, the device should be available at path `/dev/mapper/mpatha`. For example:
+
ifndef::restricted[]
[source,terminal]
----
$ coreos-installer install /dev/mapper/mpatha \//
$ coreos-installer install /dev/mapper/mpatha \
--ignition-url=http://host/worker.ign \
--append-karg rd.multipath=default \
--append-karg root=/dev/disk/by-label/dm-mpath-root \
@@ -65,7 +65,7 @@ endif::[]
ifdef::restricted[]
[source,terminal]
----
$ coreos-installer install /dev/mapper/mpatha \//
$ coreos-installer install /dev/mapper/mpatha \
--ignition-url=http://host/worker.ign \
--append-karg rd.multipath=default \
--append-karg root=/dev/disk/by-label/dm-mpath-root \
@@ -78,12 +78,12 @@ endif::[]
* `/dev/mapper/mpatha`: Indicates the path of the single multipathed device.
--
+
* If there are multiple multipath devices connected to the machine, or to be more explicit, instead of using `/dev/mapper/mpatha`, it is recommended to use the World Wide Name (WWN) symlink available in `/dev/disk/by-id`. For example:
* If there are multiple multipath devices connected to the machine, instead of using `/dev/mapper/mpatha`, Red{nbsp}Hat recommends using the World Wide Name (WWN) symlink. The symlink is available in `/dev/disk/by-id`. For example:
+
ifndef::restricted[]
[source,terminal]
----
$ coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \// <1>
$ coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \
--ignition-url=http://host/worker.ign \
--append-karg rd.multipath=default \
--append-karg root=/dev/disk/by-label/dm-mpath-root \
@@ -93,7 +93,7 @@ endif::[]
ifdef::restricted[]
[source,terminal]
----
$ coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \// <1>
$ coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \
--ignition-url=http://host/worker.ign \
--append-karg rd.multipath=default \
--append-karg root=/dev/disk/by-label/dm-mpath-root \
@@ -102,9 +102,9 @@ $ coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \// <1>
----
endif::[]
+
--
* `<wwn_ID>`: Indicates the WWN ID of the target multipathed device. For example, `0xx194e957fcedb4841`.
--
where:
+
* `<wwn_ID>`:: Indicates the WWN ID of the target multipathed device. For example, `0xx194e957fcedb4841`.
+
This symlink can also be used as the `coreos.inst.install_dev` kernel argument when using special `coreos.inst.*` arguments to direct the live installer. For more information, see "Installing {op-system} and starting the {product-title} bootstrap process".
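The WWN symlinks referenced above are populated by udev, so their names vary per host. A sketch of the lookup, with a fallback message so it is runnable on hosts without multipathed devices:

```shell
# Sketch: locate the WWN symlink for a multipathed device under
# /dev/disk/by-id. Contents vary per host, so fall back to a message
# when none exist.
ls /dev/disk/by-id/ 2>/dev/null | grep '^wwn-' || echo "no WWN symlinks on this host"
```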

View File

@@ -12,12 +12,14 @@ endif::[]
[id="rhcos-install-iscsi-ibft_{context}"]
= Installing {op-system} on an iSCSI boot device using iBFT
On a completely diskless machine, the iSCSI target and initiator values can be passed through iBFT. iSCSI multipathing is also supported.
[role="_abstract"]
To configure a completely diskless machine, pass the iSCSI target and initiator values by using the iSCSI Boot Firmware Table (iBFT). With this setup, you can use iSCSI multipathing to ensure storage resilience.
.Prerequisites
. You are in the {op-system} live environment.
. You have an iSCSI target you want to install {op-system} on.
. Optional: you have multipathed your iSCSI target.
. Optional: You have configured multipathing for your iSCSI target.
.Procedure
@@ -28,10 +30,13 @@ On a completely diskless machine, the iSCSI target and initiator values can be p
$ iscsiadm \
--mode discovery \
--type sendtargets
--portal <IP_address> \ <1>
--portal <IP_address> \
--login
----
<1> The IP address of the target portal.
+
where:
+
`<IP_address>`:: Specifies the IP address of the target portal.
. Optional: Enable multipathing and start the daemon with the following command:
+
@@ -46,27 +51,30 @@ $ mpathconf --enable && systemctl start multipathd.service
----
ifndef::restricted[]
$ coreos-installer install \
/dev/mapper/mpatha \ <1>
--append-karg rd.iscsi.firmware=1 \ <2>
--append-karg rd.multipath=default \ <3>
/dev/mapper/mpatha \
--append-karg rd.iscsi.firmware=1 \
--append-karg rd.multipath=default \
--console ttyS0 \
--ignition-file <path_to_file>
endif::[]
ifdef::restricted[]
$ coreos-installer install \
/dev/mapper/mpatha \ <1>
--append-karg rd.iscsi.firmware=1 \ <2>
--append-karg rd.multipath=default \ <3>
/dev/mapper/mpatha \
--append-karg rd.iscsi.firmware=1 \
--append-karg rd.multipath=default \
--console ttyS0 \
--ignition-file <path_to_file> \
--offline
endif::[]
----
<1> The path of a single multipathed device. If there are multiple multipath devices connected, or to be explicit, you can use the World Wide Name (WWN) symlink available in `/dev/disk/by-path`.
<2> The iSCSI parameter is read from the BIOS firmware.
<3> Optional: include this parameter if you are enabling multipathing.
+
For more information about the iSCSI options supported by `dracut`, see the link:https://www.man7.org/linux/man-pages/man7/dracut.cmdline.7.html[`dracut.cmdline` manual page].
where:
+
`/dev/mapper/mpatha`:: Specifies the path of a single multipathed device. If there are multiple multipath devices connected, or to be explicit, you can use the World Wide Name (WWN) symlink available in `/dev/disk/by-id`.
`rd.iscsi.firmware=1`:: Specifies that the iSCSI parameter is read from the BIOS firmware.
`rd.multipath=default`:: Specifies to enable multipathing. Optional parameter.
+
For more information about the iSCSI options supported by `dracut`, see the `dracut.cmdline` manual page.
. Unmount the iSCSI disk:
+
@@ -74,8 +82,8 @@ For more information about the iSCSI options supported by `dracut`, see the link
----
$ iscsiadm --mode node --logout=all
----
This procedure can also be performed using the `coreos-installer iso customize` or `coreos-installer pxe customize` subcommands.
+
You can also perform this procedure by using the `coreos-installer iso customize` or `coreos-installer pxe customize` subcommands.
ifeval::["{context}" == "installing-restricted-networks-bare-metal"]
:!restricted:

View File

@@ -12,7 +12,8 @@ endif::[]
[id="rhcos-install-iscsi-manual_{context}"]
= Installing {op-system} manually on an iSCSI boot device
You can manually install {op-system} on an iSCSI target.
[role="_abstract"]
To deploy {op-system} by using networked storage, manually install the operating system on an iSCSI target. This configuration enables the system to boot from a remote storage array, eliminating the need for local disks.
.Prerequisites
. You are in the {op-system} live environment.
@@ -27,10 +28,13 @@ You can manually install {op-system} on an iSCSI target.
$ iscsiadm \
--mode discovery \
--type sendtargets
--portal <IP_address> \ <1>
--portal <IP_address> \
--login
----
<1> The IP address of the target portal.
+
where:
+
`<IP_address>`:: Specifies the IP address of the target portal.
. Install {op-system} onto the iSCSI target by running the following command and using the necessary kernel arguments, for example:
+
@@ -38,25 +42,28 @@ $ iscsiadm \
----
ifndef::restricted[]
$ coreos-installer install \
/dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \ <1>
--append-karg rd.iscsi.initiator=<initiator_iqn> \ <2>
--append.karg netroot=<target_iqn> \ <3>
/dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \
--append-karg rd.iscsi.initiator=<initiator_iqn> \
--append.karg netroot=<target_iqn> \
--console ttyS0,115200n8
--ignition-file <path_to_file>
endif::[]
ifdef::restricted[]
$ coreos-installer install \
/dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \ <1>
--append-karg rd.iscsi.initiator=<initiator_iqn> \ <2>
--append.karg netroot=<target_iqn> \ <3>
/dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \
--append-karg rd.iscsi.initiator=<initiator_iqn> \
--append.karg netroot=<target_iqn> \
--console ttyS0,115200n8 \
--ignition-file <path_to_file> \
--offline
endif::[]
----
<1> The location you are installing to. You must provide the IP address of the target portal, the associated port number, the target iSCSI node in IQN format, and the iSCSI logical unit number (LUN).
<2> The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect to the iSCSI target.
<3> The the iSCSI target, or server, name in IQN format.
+
where:
+
`/dev/disk/by-path/ip`:: Specifies the installation location. You must provide the IP address of the target portal, the associated port number, the target iSCSI node in IQN format, and the iSCSI logical unit number (LUN).
`<initiator_iqn>`:: Specifies the iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect to the iSCSI target.
`<target_iqn>`:: Specifies the iSCSI target, or server, name in IQN format.
+
For more information about the iSCSI options supported by `dracut`, see the link:https://www.man7.org/linux/man-pages/man7/dracut.cmdline.7.html[`dracut.cmdline` manual page].
@@ -66,7 +73,7 @@ For more information about the iSCSI options supported by `dracut`, see the link
----
$ iscsiadm --mode node --logoutall=all
----
+
This procedure can also be performed using the `coreos-installer iso customize` or `coreos-installer pxe customize` subcommands.
ifeval::["{context}" == "installing-restricted-networks-bare-metal"]

View File

@@ -8,7 +8,8 @@
[id="rhcos-multipath-secondary-disk_{context}"]
= Enabling multipathing on secondary disks
{op-system} also supports multipathing on a secondary disk. Instead of kernel arguments, you use Ignition to enable multipathing for the secondary disk at installation time.
[role="_abstract"]
To enable multipathing on a secondary disk during installation, use Ignition configuration. This setup ensures storage resilience for additional disks on {op-system} without relying on the kernel arguments used for primary disks.
.Prerequisites
@@ -34,12 +35,12 @@ systemd:
Description=Configure Multipath on Secondary Disk
ConditionFirstBoot=true
ConditionPathExists=!/etc/multipath.conf
Before=multipathd.service <1>
Before=multipathd.service
DefaultDependencies=no
[Service]
Type=oneshot
ExecStart=/usr/sbin/mpathconf --enable <2>
ExecStart=/usr/sbin/mpathconf --enable
[Install]
WantedBy=multi-user.target
@@ -48,14 +49,14 @@ systemd:
contents: |
[Unit]
Description=Set Up Multipath On /var/lib/containers
ConditionFirstBoot=true <3>
ConditionFirstBoot=true
Requires=dev-mapper-mpatha.device
After=dev-mapper-mpatha.device
After=ostree-remount.service
Before=kubelet.service
DefaultDependencies=no
[Service] <4>
[Service]
Type=oneshot
ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha
ExecStart=/usr/bin/mkdir -p /var/lib/containers
@@ -68,9 +69,9 @@ systemd:
[Unit]
Description=Mount /var/lib/containers
After=mpath-var-lib-containers.service
Before=kubelet.service <5>
Before=kubelet.service
[Mount] <6>
[Mount]
What=/dev/disk/by-label/dm-mpath-containers
Where=/var/lib/containers
Type=xfs
@@ -78,12 +79,15 @@ systemd:
[Install]
WantedBy=multi-user.target
----
<1> The configuration must be set before launching the multipath daemon.
<2> Starts the `mpathconf` utility.
<3> This field must be set to the value `true`.
<4> Creates the filesystem and directory `/var/lib/containers`.
<5> The device must be mounted before starting any nodes.
<6> Mounts the device to the `/var/lib/containers` mount point. This location cannot be a symlink.
+
where:
+
`Before=multipathd.service`:: Specifies that the configuration must be set before launching the multipath daemon.
`ExecStart=/usr/sbin/mpathconf`:: Specifies starting the `mpathconf` utility.
`ConditionFirstBoot=true`:: Specifies that this field must be set to the value `true`.
`[Service]`:: Specifies the creation of the filesystem and directory `/var/lib/containers`.
`Before=kubelet.service`:: Specifies that the device must be mounted before starting any nodes.
`[Mount]`:: Specifies to mount the device to the `/var/lib/containers` mount point. This location cannot be a symlink.
. Create the Ignition configuration by running the following command:
+
@@ -96,5 +100,5 @@ $ butane --pretty --strict multipath-config.bu > multipath-config.ign
+
[IMPORTANT]
====
Do not add the `rd.multipath` or `root` kernel arguments on the command-line during installation unless the primary disk is also multipathed.
Do not add the `rd.multipath` or `root` kernel arguments on the CLI during installation unless the primary disk is also multipathed.
====
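After `butane` generates `multipath-config.ign`, you can confirm that the systemd units landed in the Ignition JSON. Since `butane` is not assumed here, a minimal stand-in file shows the check; the file content and unit name are hypothetical:

```shell
# Stand-in for the butane output so the check runs anywhere; the file
# content and unit name are hypothetical.
cat > multipath-config.ign <<'EOF'
{"ignition":{"version":"3.4.0"},"systemd":{"units":[{"name":"mpath-configure.service","enabled":true}]}}
EOF
# Confirm the expected unit names are present in the generated Ignition config.
grep -o '"name":"[^"]*"' multipath-config.ign
# prints: "name":"mpath-configure.service"
```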

View File

@@ -30,4 +30,9 @@ include::modules/images-pulling-from-private-registries.adoc[leveloffset=+2]
ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
include::modules/images-update-global-pull-secret.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* link:https://docs.redhat.com/en/documentation/openshift_cluster_manager/1-latest/html-single/managing_clusters/index#transferring-cluster-ownership_downloading-and-updating-pull-secrets[Transferring cluster ownership]
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]

View File

@@ -36,6 +36,6 @@ include::modules/images-update-global-pull-secret.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* link:https://access.redhat.com/documentation/en-us/openshift_cluster_manager/2023/html-single/managing_clusters/index#transferring-cluster-ownership_downloading-and-updating-pull-secrets[Transferring cluster ownership]
* link:https://docs.redhat.com/en/documentation/openshift_cluster_manager/1-latest/html-single/managing_clusters/index#transferring-cluster-ownership_downloading-and-updating-pull-secrets[Transferring cluster ownership]
endif::openshift-enterprise,openshift-webscale,openshift-origin[]