// Module included in the following assemblies:
//
// * installing/installing_bare_metal/ipi/ipi-install-installation-workflow.adoc
:_mod-docs-content-type: PROCEDURE
[id="configuring-the-install-config-file_{context}"]
= Configuring the install-config.yaml file

The `install-config.yaml` file requires some additional details.
Most of the information provides the installation program and the resulting cluster with enough detail about the available hardware to manage it fully.

[NOTE]
====
The installation program no longer needs the `clusterOSImage` {op-system} image because the correct image is in the release payload.
====

.Procedure
. Configure `install-config.yaml`. Change the appropriate variables to match the environment, including `pullSecret` and `sshKey`:
+
[source,yaml]
----
apiVersion: v1
baseDomain: <domain>
metadata:
  name: <cluster_name>
networking:
  machineNetwork:
  - cidr: <public_cidr>
  networkType: OVNKubernetes
compute:
- name: worker
  replicas: 2 <1>
controlPlane:
  name: master
  replicas: 3
  platform:
    baremetal: {}
platform:
  baremetal:
    additionalNTPServers: <2>
    - <ntp_domain_or_ip>
    apiVIPs:
    - <api_ip>
    ingressVIPs:
    - <wildcard_ip>
    provisioningNetworkCIDR: <CIDR>
    bootstrapExternalStaticIP: <bootstrap_static_ip_address> <3>
    bootstrapExternalStaticGateway: <bootstrap_static_gateway> <4>
    bootstrapExternalStaticDNS: <bootstrap_static_dns> <5>
    hosts:
    - name: openshift-master-0
      role: master
      bmc:
        address: ipmi://<out_of_band_ip> <6>
        username: <user>
        password: <password>
      bootMACAddress: <NIC1_mac_address>
      rootDeviceHints:
        deviceName: "<installation_disk_drive_path>" <7>
    - name: <openshift_master_1>
      role: master
      bmc:
        address: ipmi://<out_of_band_ip>
        username: <user>
        password: <password>
      bootMACAddress: <NIC1_mac_address>
      rootDeviceHints:
        deviceName: "<installation_disk_drive_path>"
    - name: <openshift_master_2>
      role: master
      bmc:
        address: ipmi://<out_of_band_ip>
        username: <user>
        password: <password>
      bootMACAddress: <NIC1_mac_address>
      rootDeviceHints:
        deviceName: "<installation_disk_drive_path>"
    - name: <openshift_worker_0>
      role: worker
      bmc:
        address: ipmi://<out_of_band_ip>
        username: <user>
        password: <password>
      bootMACAddress: <NIC1_mac_address>
    - name: <openshift_worker_1>
      role: worker
      bmc:
        address: ipmi://<out_of_band_ip>
        username: <user>
        password: <password>
      bootMACAddress: <NIC1_mac_address>
      rootDeviceHints:
        deviceName: "<installation_disk_drive_path>"
pullSecret: '<pull_secret>'
sshKey: '<ssh_pub_key>'
----
+
--
<1> Scale the compute machines based on the number of compute nodes that are part of the {product-title} cluster. Valid options for the `replicas` value are `0` and integers greater than or equal to `2`. Set the number of replicas to `0` to deploy a three-node cluster, which contains only three control plane machines, as shown in the first sketch after this list. A three-node cluster is a smaller, more resource-efficient cluster that can be used for testing, development, and production. You cannot install the cluster with only one compute node.
<2> An optional list of additional NTP server domain names or IP addresses to add to each host configuration when the cluster host clocks are out of synchronization.
<3> When deploying a cluster with static IP addresses, you must set the `bootstrapExternalStaticIP` configuration setting to specify the static IP address of the bootstrap VM when there is no DHCP server on the bare metal network.
<4> When deploying a cluster with static IP addresses, you must set the `bootstrapExternalStaticGateway` configuration setting to specify the gateway IP address for the bootstrap VM when there is no DHCP server on the bare metal network.
<5> When deploying a cluster with static IP addresses, you must set the `bootstrapExternalStaticDNS` configuration setting to specify the DNS address for the bootstrap VM when there is no DHCP server on the bare metal network.
<6> See the BMC addressing sections for more options; a Redfish sketch appears after this list.
<7> To set the path to the installation disk drive, enter the kernel name of the disk. For example, `/dev/sda`.
+
[IMPORTANT]
====
Because the disk discovery order is not guaranteed, the kernel name of the disk can change across booting options for machines with multiple disks. For example, `/dev/sda` becomes `/dev/sdb` and vice versa. To avoid this issue, you must use persistent disk attributes, such as the disk World Wide Name (WWN) or `/dev/disk/by-path/`. It is recommended to use the `/dev/disk/by-path/<device_path>` link to the storage location. To use the disk WWN, replace the `deviceName` parameter with the `wwnWithExtension` parameter. Depending on the parameter that you use, enter either of the following values:

* The disk name. For example, `/dev/sda`, or `/dev/disk/by-path/`.
* The disk WWN. For example, `"0x64cd98f04fde100024684cf3034da5c2"`. Ensure that you enter the disk WWN value within quotes so that it is used as a string value and not a hexadecimal value.

Failure to meet these requirements for the `rootDeviceHints` parameter might result in the following error:

[source,text]
----
ironic-inspector inspection failed: No disks satisfied root device hints
----
====
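
For example, a `rootDeviceHints` stanza that uses a persistent identifier instead of a kernel device name might look like the following sketch. The `by-path` link and WWN values are illustrative placeholders; substitute the values for your own installation disk:

[source,yaml]
----
rootDeviceHints:
  # Preferred: a persistent /dev/disk/by-path/ link to the installation disk.
  deviceName: "/dev/disk/by-path/<device_path>"
  # Alternative: replace deviceName with the disk WWN, quoted as a string.
  # wwnWithExtension: "0x64cd98f04fde100024684cf3034da5c2"
----
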
[NOTE]
====
Before {product-title} 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the `apiVIP` and `ingressVIP` configuration settings. In {product-title} 4.12 and later, these configuration settings are deprecated. Instead, use a list format in the `apiVIPs` and `ingressVIPs` configuration settings to specify IPv4 addresses, IPv6 addresses, or both IP address formats.
====
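
For example, a dual-stack configuration that lists both an IPv4 and an IPv6 address for each virtual IP might look like the following sketch. The placeholders are illustrative; substitute the addresses for your environment:

[source,yaml]
----
platform:
  baremetal:
    apiVIPs:
    - <api_ipv4_address>
    - <api_ipv6_address>
    ingressVIPs:
    - <wildcard_ipv4_address>
    - <wildcard_ipv6_address>
----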
--
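+
As referenced in the first callout, a three-node cluster sets the compute replica count to `0`. A minimal `compute` stanza for that case might look like the following sketch:
+
[source,yaml]
----
compute:
- name: worker
  replicas: 0
----
+
Similarly, for the BMC address referenced in the sixth callout, IPMI is only one of the supported protocols. As a sketch of one common alternative, a Redfish-based `bmc` entry typically takes a form like the following; the system path and the optional `disableCertificateVerification` setting are illustrative, so consult the BMC addressing sections for the exact formats that your hardware supports:
+
[source,yaml]
----
bmc:
  # Illustrative Redfish address; the system ID path varies by vendor.
  address: redfish://<out_of_band_ip>/redfish/v1/Systems/<system_id>
  username: <user>
  password: <password>
  # Optional: skip BMC certificate verification, for example with self-signed certificates.
  disableCertificateVerification: True
----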
. Create a directory to store the cluster configuration:
+
[source,terminal]
----
$ mkdir ~/clusterconfigs
----
. Copy the `install-config.yaml` file to the new directory:
+
[source,terminal]
----
$ cp install-config.yaml ~/clusterconfigs
----
. Ensure all bare metal nodes are powered off prior to installing the {product-title} cluster:
+
[source,terminal]
----
$ ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off
----
. Remove old bootstrap resources if any are left over from a previous deployment attempt:
+
[source,bash]
----
for i in $(sudo virsh list | tail -n +3 | grep bootstrap | awk '{print $2}');
do
  sudo virsh destroy $i;
  sudo virsh undefine $i;
  sudo virsh vol-delete $i --pool $i;
  sudo virsh vol-delete $i.ign --pool $i;
  sudo virsh pool-destroy $i;
  sudo virsh pool-undefine $i;
done
----