Added new guide Deploying IPI Bare Metal
Signed-off-by: John Wilkins <jowilkin@redhat.com>
Committed by: Ashley Hardin
parent 950cbfb25d
commit 3bfad910c3
@@ -177,6 +177,18 @@ Topics:
    File: installing-bare-metal-network-customizations
  - Name: Restricted network bare metal installation
    File: installing-restricted-networks-bare-metal
- Name: Deploying IPI Bare Metal
  Dir: installing_bare_metal_ipi
  Distros: openshift-webscale
  Topics:
  - Name: Deploying IPI bare metal
    File: deploying-ipi-bare-metal
  - Name: Setting up the environment for an OpenShift installation
    File: ipi-install-installation-workflow
  - Name: Prerequisites
    File: ipi-install-prerequisites
  - Name: Troubleshooting
    File: ipi-install-troubleshooting
- Name: Installing on IBM Z
  Dir: installing_ibm_z
  Topics:
@@ -0,0 +1,27 @@
[id="deploying-ipi-bare-metal"]
= Deploying IPI Bare Metal
include::modules/common-attributes.adoc[]
:context: ipi-install

[IMPORTANT]
====
The Bare Metal IPI images and code described in this document are for *Developer Preview*
purposes and are *not supported* by Red Hat at this time.
====

Installer Provisioned Infrastructure (IPI) installation provides support for installing {product-title} on bare metal nodes.
This guide provides a methodology for achieving a successful installation.

The bare metal node labeled as `provisioner` contains two network bridges, `provisioning` and `baremetal`,
each connected to a different network.
During an IPI installation on bare metal, a bootstrap VM is created and connected to both the `provisioning` and
`baremetal` networks via those bridges. The role of the VM is to assist in the process of deploying an {product-title} cluster.

image::71_OpenShift_Baremetal_IPI_Depoyment_0320_1.png[Deployment phase one]

When the installation of the {product-title} control plane (master) nodes is complete and the control plane is fully operational,
the bootstrap VM is destroyed automatically and the appropriate virtual IPs (VIPs) are moved accordingly.
The API and DNS VIPs move to the control plane nodes, and the Ingress VIP services applications that
reside within the worker nodes.

image::71_OpenShift_Baremetal_IPI_Depoyment_0320_2.png[Deployment phase two]
@@ -0,0 +1,26 @@
[id="ipi-install-installation-workflow"]
= Setting up the environment for an OpenShift installation
include::modules/common-attributes.adoc[]
:context: ipi-install-installation-workflow

toc::[]

After an environment has been prepared according to the documented prerequisites, the installation process is the same as other IPI-based platforms.

include::modules/ipi-install-preparing-the-provision-node-for-openshift-install.adoc[leveloffset=+1]

include::modules/ipi-install-retrieving-the-openshift-installer.adoc[leveloffset=+1]

include::modules/ipi-install-extracting-the-openshift-installer.adoc[leveloffset=+1]

include::modules/ipi-install-configuring-the-install-config-file.adoc[leveloffset=+1]

include::modules/ipi-install-configuring-the-metal3-config-file.adoc[leveloffset=+1]

include::modules/ipi-install-creating-the-openshift-manifests.adoc[leveloffset=+1]

include::modules/ipi-install-additional-install-config-parameters.adoc[leveloffset=+1]

include::modules/ipi-install-validation-checklist-for-installation.adoc[leveloffset=+1]

include::modules/ipi-install-deploying-the-cluster-via-the-openshift-installer.adoc[leveloffset=+1]
@@ -0,0 +1,15 @@
[id="ipi-install-prerequisites"]
= Prerequisites
include::modules/common-attributes.adoc[]
:context: ipi-install-prerequisites

toc::[]

Before installing {product-title}, ensure the hardware environment meets the following requirements.

include::modules/ipi-install-network-requirements.adoc[leveloffset=+1]
include::modules/ipi-install-node-requirements.adoc[leveloffset=+1]
include::modules/ipi-install-configuring-nodes.adoc[leveloffset=+1]
include::modules/ipi-install-out-of-band-management.adoc[leveloffset=+1]
include::modules/ipi-install-required-data-for-installation.adoc[leveloffset=+1]
include::modules/ipi-install-validation-checklist-for-nodes.adoc[leveloffset=+1]
@@ -0,0 +1,14 @@
[id="ipi-install-troubleshooting"]
= Troubleshooting
include::modules/common-attributes.adoc[]
:context: ipi-install-troubleshooting

toc::[]

The following sections address common issues with Installer Provisioned Infrastructure (IPI) installations.

// For general, {product-title} troubleshooting, see <xref>.

include::modules/ipi-install-troubleshooting-the-bootstrap-vm.adoc[leveloffset=+1]

include::modules/ipi-install-troubleshooting-the-control-plane.adoc[leveloffset=+1]
@@ -0,0 +1,71 @@
// Module included in the following assemblies:
//
// * installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc

[id="additional-install-config-parameters_{context}"]
= Additional `install-config` parameters

This topic describes the required parameters, the `hosts` parameter, and the `bmc` `address` parameter
for the `install-config.yaml` file.

.Required parameters

|===
|Parameters |Default |Description

| `provisioningNetworkInterface` | | The name of the network interface on control plane nodes connected to the
provisioning network. ({product-title} 4.4 only)

| `hosts` | | Details about bare metal hosts to use to build the cluster.

| `defaultMachinePlatform` | | The default configuration used for machine pools without a platform configuration.

| `apiVIP` | `api.<clusterdomain>` | The VIP to use for internal API communication.

This setting must either be provided or pre-configured in DNS so that the
default name resolves correctly.

| `ingressVIP` | `test.apps.<clusterdomain>` | The VIP to use for ingress traffic.

This setting must either be provided or pre-configured in DNS so that the
default name resolves correctly.

| `dnsVIP` | | The VIP to use for internal DNS communication.

This setting has no default and must always be provided.
|===

.Hosts

The `hosts` parameter is a list of separate bare metal assets that should be used to build the cluster.

|===
|Name |Default |Description
| `name` | | The name of the `BareMetalHost` resource to associate with the details.
| `role` | | Either `master` or `worker`.
| `bmc` | | Connection details for the baseboard management controller. See below for details.
| `bootMACAddress` | | The MAC address of the NIC the host will use to boot on the provisioning network.
|===

The `bmc` parameter for each host is a set of values for accessing the baseboard management controller in the host.

|===
|Name |Default |Description
| `username` | | The username for authenticating to the BMC.
| `password` | | The password associated with `username`.
| `address` | | The URL for communicating with the BMC controller, based on the provider being used.
See BMC addressing for details.
|===

.BMC addressing

Keep the following in mind when providing values for the `bmc` `address` field.

* The `address` field for each `bmc` entry is a URL with details for connecting to the controller,
including the type of controller in the URL scheme and its location on the network.

* IPMI hosts use `ipmi://<host>:<port>`. An unadorned `<host>:<port>` is also accepted.
If the port is omitted, the default of 623 is used.

* Dell iDRAC hosts use `idrac://` (or `idrac+http://` to disable TLS).

* Fujitsu iRMC hosts use `irmc://<host>:<port>`, where `<port>` is optional if using the default.

* For Redfish, use `redfish://` (or `redfish+http://` to disable TLS).
The hostname (or IP address) and the path to the system ID are both required.
For example, `redfish://myhost.example/redfish/v1/Systems/System.Embedded.1`
or `redfish://myhost.example/redfish/v1/Systems/1`.
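
For illustration, a hypothetical `bmc` entry for a host addressed via Redfish might look like the following sketch, reusing the example system path from above; the credentials are placeholders:

----
bmc:
  address: redfish://myhost.example/redfish/v1/Systems/1
  username: <user>
  password: <password>
----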
modules/ipi-install-configuring-nodes.adoc (new file)
@@ -0,0 +1,43 @@
// Module included in the following assemblies:
//
// * installing/installing_bare_metal_ipi/ipi-install-prerequisites.adoc

[id="configuring-nodes_{context}"]
= Configuring nodes

Each node in the cluster requires the following configuration for proper installation.

[WARNING]
====
A mismatch between nodes will cause an installation failure.
====

While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs:

|===
|NIC |Network |VLAN
| NIC1 | `provisioning` | <provisioning-vlan>
| NIC2 | `baremetal` | <baremetal-vlan>
|===

The RHEL 8.1 installation process on the provisioning node may vary. For this procedure, NIC2 is PXE-enabled to ensure easy installation using a local Satellite server.

NIC1 is connected to a non-routable network (`provisioning`) that is used only for the installation of the {product-title} cluster. Configure the provisioning node as follows:

|===
|PXE |Boot order
| NIC1 PXE-enabled (provisioning network) | 1
| NIC2 PXE-enabled (baremetal network) | 2
|===

[NOTE]
====
Ensure PXE is disabled on all other NICs.
====

Configure the control plane (master) and worker nodes as follows:

|===
|PXE |Boot order
| NIC1 PXE-enabled (provisioning network) | 1
|===
modules/ipi-install-configuring-the-install-config-file.adoc (new file)
@@ -0,0 +1,105 @@
// Module included in the following assemblies:
//
// * installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc

[id="configuring-the-install-config-file_{context}"]
= Configuring the `install-config` file

The `install-config.yaml` file requires some additional details.
Most of that information teaches the installer, and the resulting cluster, enough about the available hardware to be able to fully manage it.

. Configure `install-config.yaml`. Change the appropriate variables to match your environment, including `pullSecret` and `sshKey`.
+
----
apiVersion: v1
baseDomain: <domain>
metadata:
  name: <cluster-name>
networking:
  machineCIDR: <public-cidr>
  networkType: OVNKubernetes
compute:
- name: worker
  replicas: 2
controlPlane:
  name: master
  replicas: 3
  platform:
    baremetal: {}
platform:
  baremetal:
    apiVIP: <api-ip>
    ingressVIP: <wildcard-ip>
    dnsVIP: <dns-ip>
    provisioningBridge: provisioning
    externalBridge: baremetal
    hosts:
    - name: openshift-master-0
      role: master
      bmc:
        address: ipmi://<out-of-band-ip>
        username: <user>
        password: <password>
      bootMACAddress: <NIC1-mac-address>
      hardwareProfile: default
    - name: openshift-master-1
      role: master
      bmc:
        address: ipmi://<out-of-band-ip>
        username: <user>
        password: <password>
      bootMACAddress: <NIC1-mac-address>
      hardwareProfile: default
    - name: openshift-master-2
      role: master
      bmc:
        address: ipmi://<out-of-band-ip>
        username: <user>
        password: <password>
      bootMACAddress: <NIC1-mac-address>
      hardwareProfile: default
    - name: openshift-worker-0
      role: worker
      bmc:
        address: ipmi://<out-of-band-ip>
        username: <user>
        password: <password>
      bootMACAddress: <NIC1-mac-address>
      hardwareProfile: unknown
    - name: openshift-worker-1
      role: worker
      bmc:
        address: ipmi://<out-of-band-ip>
        username: <user>
        password: <password>
      bootMACAddress: <NIC1-mac-address>
      hardwareProfile: unknown
pullSecret: '<pull_secret>'
sshKey: '<ssh_pub_key>'
----

. Create a directory to store cluster configs.
+
----
# mkdir ~/clusterconfigs
# cp install-config.yaml ~/clusterconfigs
----

. Ensure all bare metal nodes are powered off prior to installing the {product-title} cluster.
+
----
# ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off
----
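+
For clusters with many nodes, a short loop can issue the power-off command to every BMC in one pass. This is a sketch only; the `bmc-ips.txt` file holding one BMC IP address per line is a hypothetical convention.
+
----
# Power off every node listed in a (hypothetical) file of BMC IP addresses
for ip in $(cat bmc-ips.txt); do
  ipmitool -I lanplus -U <user> -P <password> -H "$ip" power off
done
----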

. Ensure that old bootstrap resources are removed, if any are left over from a previous deployment attempt.
+
----
for i in $(sudo virsh list | tail -n +3 | grep bootstrap | awk '{print $2}');
do
  sudo virsh destroy $i;
  sudo virsh undefine $i;
  sudo virsh vol-delete $i --pool default;
  sudo virsh vol-delete $i.ign --pool default;
done
----
modules/ipi-install-configuring-the-metal3-config-file.adoc (new file)
@@ -0,0 +1,44 @@
// Module included in the following assemblies:
//
// * installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc

[id="configuring-the-metal3-config-file_{context}"]
= Configuring the `metal3-config` file ({product-title} 4.3 only)

If you are working with {product-title} 4.3, you must create a `metal3-config.yaml.sample` file for the `metal3-config` ConfigMap.

. Create the `metal3-config.yaml.sample` file.
+
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: metal3-config
  namespace: openshift-machine-api
data:
  cache_url: ''
  deploy_kernel_url: http://<provisioning_ip>:6180/images/ironic-python-agent.kernel
  deploy_ramdisk_url: http://<provisioning_ip>:6180/images/ironic-python-agent.initramfs
  dhcp_range: 172.22.0.10,172.22.0.100
  http_port: "6180"
  ironic_endpoint: http://<provisioning_ip>:6385/v1/
  ironic_inspector_endpoint: http://172.22.0.3:5050/v1/
  provisioning_interface: <NIC1>
  provisioning_ip: <provisioning_ip>/24
  rhcos_image_url: ${RHCOS_PATH}${RHCOS_URI}
----
+
[NOTE]
====
Modify `provisioning_ip` to an available IP address on the `provisioning` network. The default is `172.22.0.3`.
====

. Create the final ConfigMap.
+
----
export COMMIT_ID=$(./openshift-baremetal-install version | grep '^built from commit' | awk '{print $4}')
export RHCOS_URI=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json | jq .images.openstack.path | sed 's/"//g')
export RHCOS_PATH=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json | jq .baseURI | sed 's/"//g')
envsubst < metal3-config.yaml.sample > metal3-config.yaml
----
modules/ipi-install-creating-the-openshift-manifests.adoc (new file)
@@ -0,0 +1,25 @@
// Module included in the following assemblies:
//
// * installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc

[id="creating-the-openshift-manifests_{context}"]
= Creating the {product-title} manifests

. Create the {product-title} manifests.
+
----
# ./openshift-baremetal-install --dir ~/clusterconfigs create manifests
----
+
----
INFO Consuming Install Config from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
WARNING Discarding the Openshift Manifests that was provided in the target directory because its dependencies are dirty and it needs to be regenerated
----

. If you are working with {product-title} 4.3 and you created the `metal3-config.yaml` file, copy the
file to the `clusterconfigs/openshift` directory.
+
----
# cp ~/metal3-config.yaml clusterconfigs/openshift/99_metal3-config.yaml
----
@@ -0,0 +1,12 @@
// Module included in the following assemblies:
//
// * installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc

[id="deploying-the-cluster-via-the-openshift-installer_{context}"]
= Deploying the cluster via the {product-title} installer

Run the {product-title} installer:

----
# ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster
----
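
While the installer runs, you can follow its progress in `.openshift_install.log`, the log file the installer writes in the directory passed to `--dir`:

----
$ tail -f ~/clusterconfigs/.openshift_install.log
----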
modules/ipi-install-extracting-the-openshift-installer.adoc (new file)
@@ -0,0 +1,19 @@
// Module included in the following assemblies:
//
// * installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc

[id="extracting-the-openshift-installer_{context}"]
= Extracting the {product-title} installer

After retrieving the installer, the next step is to extract it:

----
$ export cmd=openshift-baremetal-install
$ export pullsecret_file=~/pull-secret.txt
$ export extract_dir=$(pwd)
# Get the oc binary
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/openshift-client-linux-$VERSION.tar.gz | tar zxvf - oc
$ sudo cp ./oc /usr/local/bin/oc
# Extract the baremetal installer
$ oc adm release extract --registry-config "${pullsecret_file}" --command=$cmd --to "${extract_dir}" ${RELEASE_IMAGE}
----
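
To confirm that the extraction succeeded, check the extracted binary's version; the same `version` subcommand is used later when building the `metal3-config` file:

----
$ ./openshift-baremetal-install version
----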
modules/ipi-install-network-requirements.adoc (new file)
@@ -0,0 +1,82 @@
// Module included in the following assemblies:
//
// * installing/installing_bare_metal_ipi/ipi-install-prerequisites.adoc

[id="network-requirements_{context}"]
= Network requirements

IPI installation involves several network requirements.
It requires a non-routable `provisioning` network for provisioning the OS on
each bare metal node and a routable `baremetal` network for access to the public network.
Since IPI installation deploys `ironic-dnsmasq`, the networks should have no other DHCP servers running
on the same broadcast domain.
Network administrators *must* reserve IP addresses for each node in the {product-title} cluster.

.Network Time Protocol (NTP)

Each {product-title} node in the cluster must have access to an NTP server.
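
For example, on a RHEL-based node you can confirm time synchronization with `chronyc` (a quick check, assuming `chronyd` is the configured time service):

----
$ chronyc sources
----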
.Configuring NICs
|
||||
|
||||
{product-title} deploys with two networks:
|
||||
|
||||
- `provisioning`: The `provisioning` network is a non-routable network used for
|
||||
provisioning the underlying operating system on each node that is a part of the
|
||||
{product-title} cluster. The first NIC on each node, such as `eth0` or `eno1`,
|
||||
*must* interface with the `provisioning` network.
|
||||
|
||||
- `baremetal`: The `baremetal` network is a routable network used for external
|
||||
network access to the outside world. The second NIC on each node, such as `eth1`
|
||||
or `eno2`, *must* interface with the `baremetal` network.
|
||||
|
||||
[IMPORTANT]
|
||||
====
|
||||
Each NIC should be on a separate VLAN corresponding to the appropriate network.
|
||||
====
|
||||
|
||||
.Configuring the DNS server
|
||||
|
||||
Clients access the {product-title} cluster nodes over the `baremetal` network.
|
||||
A network administrator *must* configure a subdomain or subzone where the canonical name extension is the cluster name.
|
||||
|
||||
----
|
||||
<cluster-name>.<domain-name>
|
||||
----
|
||||
|
||||
For example:
|
||||
|
||||
----
|
||||
test-cluster.example.com
|
||||
----
|
||||
|
||||
.Reserving IP Addresses for Nodes with the DHCP Server
|
||||
|
||||
For the `baremetal` network, a network administrator must reserve a number of IP addresses, including:
|
||||
|
||||
. Three virtual IP addresses.
|
||||
+
|
||||
- 1 IP address for the API endpoint
|
||||
- 1 IP address for the wildcard ingress endpoint
|
||||
- 1 IP address for the name server
|
||||
|
||||
. One IP Address for the Provisioning node.
|
||||
. One IP address for each Control Plane (Master) node.
|
||||
. One IP address for each worker node.
|
||||
|
||||
|
||||
The following table provides an exemplary embodiment of hostnames for each node in the {product-title} cluster.
|
||||
|
||||
[width="100%", cols="3,5e,2e", frame="topbot",options="header"]
|
||||
|=====
|
||||
| Usage | Hostname | IP
|
||||
| API | api.<cluster-name>.<domain> | <ip>
|
||||
| Ingress LB (apps) | *.apps.<cluster-name>.<domain> | <ip>
|
||||
| Nameserver | ns1.<cluster-name>.<domain> | <ip>
|
||||
| Provisioning node | provisioner.<cluster-name>.<domain> | <ip>
|
||||
| Master-0 | master-0.<cluster-name>.<domain> | <ip>
|
||||
| Master-1 | master-1.<cluster-name>-.<domain> | <ip>
|
||||
| Master-2 | master-2.<cluster-name>.<domain> | <ip>
|
||||
| Worker-0 | worker-0.<cluster-name>.<domain> | <ip>
|
||||
| Worker-1 | worker-1.<cluster-name>.<domain> | <ip>
|
||||
| Worker-n | worker-n.<cluster-name>.<domain> | <ip>
|
||||
|=====
|
||||
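
As an illustration, with a dnsmasq-based DHCP server the reservations for the `baremetal` network could look like the following sketch; the file name and MAC address placeholders are hypothetical:

----
# /etc/dnsmasq.d/baremetal.conf (hypothetical) -- one reservation per node
dhcp-host=<NIC2-mac-address>,provisioner.test-cluster.example.com,<ip>
dhcp-host=<NIC2-mac-address>,master-0.test-cluster.example.com,<ip>
dhcp-host=<NIC2-mac-address>,worker-0.test-cluster.example.com,<ip>
----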
modules/ipi-install-node-requirements.adoc (new file)
@@ -0,0 +1,29 @@
// Module included in the following assemblies:
//
// * installing/installing_bare_metal_ipi/ipi-install-prerequisites.adoc

[id="node-requirements_{context}"]
= Node requirements

IPI installation involves a number of hardware node requirements:

- **CPU architecture:** All nodes *must* use the `x86_64` CPU architecture.

- **Similar nodes:** Nodes *should* have an identical configuration per role. That is, control plane nodes *should* be the same brand and model with the same CPU, RAM, and storage configuration. Worker nodes should be *identical*.

//<IS CPU PINNING/NUMA AN ISSUE???>

- **Intelligent Platform Management Interface (IPMI):** IPI installation requires IPMI enabled on each node.

- **Latest generation:** Nodes should be of the most recent generation. IPI installation relies on IPMI, which should be compatible across nodes. Additionally, RHEL 8 ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support RHEL 8 for the provisioning node and RHCOS 8 for the worker nodes.

- **Network interfaces:** Each node *must* have at least two 10 Gb network interfaces (NICs): one for the `provisioning` network and one for the public `baremetal` network.
Network interface names *must* follow the same naming convention across all nodes.
For example, the first NIC name on a node, such as `eth0` or `eno1`, should be the same name on all of the other nodes.
The same principle applies to the remaining NICs on each node.

- **Provisioning node:** IPI installation requires one provisioning node.

- **Control plane:** IPI installation requires three control plane (master) nodes for high availability.

- **Worker nodes:** A typical production cluster will have many worker nodes. IPI installation in a high-availability environment requires at least two worker nodes in an initial cluster.
modules/ipi-install-out-of-band-management.adoc (new file)
@@ -0,0 +1,13 @@
// Module included in the following assemblies:
//
// * installing/installing_bare_metal_ipi/ipi-install-prerequisites.adoc

[id="out-of-band-management_{context}"]
= Out-of-band management

Nodes will typically have an additional NIC used by the Baseboard Management Controllers (BMCs). These BMCs must be accessible from the provisioning node.

Each node must be accessible via out-of-band management. The provisioning node requires access to the out-of-band management network for a successful {product-title} 4 installation.

The out-of-band management setup is out of scope for this document. We recommend setting up a separate management network for out-of-band management. However, using the `provisioning` network or the `baremetal` network is also a valid option.
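
To verify out-of-band access from the provisioning node, you can query each BMC's power state with `ipmitool`, using the same connection options that appear later in this guide:

----
# ipmitool -I lanplus -U <user> -P <password> -H <out-of-band-ip> power status
----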
@@ -0,0 +1,129 @@
// Module included in the following assemblies:
//
// * installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc

[id="preparing-the-provision-node-for-openshift-install_{context}"]
= Preparing the provision node for {product-title} installation

Perform the following steps to prepare the environment.

. Log in to the provision node via `ssh`.

. Create a user (for example, `kni`) to deploy as non-root and provide that user with `sudo` privileges.
+
----
# useradd kni
# echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni
# chmod 0440 /etc/sudoers.d/kni
----

. Create an `ssh` key for the new user.
+
----
# su - kni -c "ssh-keygen -t rsa -f /home/kni/.ssh/id_rsa -N ''"
----

. Use Red Hat Subscription Manager to register your environment.
+
----
# subscription-manager register --username=<user> --password=<pass> --auto-attach
# subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms
----
+
[NOTE]
====
For more information about Red Hat Subscription Manager, see link:https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html-single/rhsm/index[Using and Configuring Red Hat Subscription Manager].
====

. Install the following packages.
+
----
# dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool
----

. Add the newly created user to the `libvirt` group.
+
----
# usermod --append --groups libvirt <user>
----

. Start `firewalld`, enable the `http` service, and enable port 5000.
+
----
# systemctl start firewalld
# firewall-cmd --zone=public --add-service=http --permanent
# firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent
# firewall-cmd --add-port=5000/tcp --zone=public --permanent
# firewall-cmd --reload
----

. Start and enable the `libvirtd` service.
+
----
# systemctl start libvirtd
# systemctl enable libvirtd --now
----

. Create the default storage pool and start it.
+
----
# virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images
# virsh pool-start default
# virsh pool-autostart default
----

. Configure networking.
+
[NOTE]
====
This step can also be run from the web console.
====
+
----
# You will be disconnected, but can reconnect via your ssh session after running
export PUB_CONN=<baremetal_nic_name>
export PROV_CONN=<prov_nic_name>
nohup bash -c '
    nmcli con down "$PROV_CONN"
    nmcli con down "$PUB_CONN"
    nmcli con delete "$PROV_CONN"
    nmcli con delete "$PUB_CONN"
    # RHEL 8.1 appends the word "System" in front of the connection, delete in case it exists
    nmcli con down "System $PUB_CONN"
    nmcli con delete "System $PUB_CONN"
    nmcli connection add ifname provisioning type bridge con-name provisioning
    nmcli con add type bridge-slave ifname "$PROV_CONN" master provisioning
    nmcli connection add ifname baremetal type bridge con-name baremetal
    nmcli con add type bridge-slave ifname "$PUB_CONN" master baremetal
    nmcli con down "$PUB_CONN"; pkill dhclient; dhclient baremetal
    nmcli connection modify provisioning ipv4.addresses 172.22.0.1/24 ipv4.method manual
    nmcli con down provisioning
    nmcli con up provisioning
'
----

. `ssh` back into your terminal session, if required.

. Verify the connection bridges have been properly created.
+
----
# nmcli con show
----
+
----
NAME               UUID                                  TYPE      DEVICE
baremetal          4d5133a5-8351-4bb9-bfd4-3af264801530  bridge    baremetal
provisioning       43942805-017f-4d7d-a2c2-7cb3324482ed  bridge    provisioning
virbr0             d9bca40f-eee1-410b-8879-a2d4bb0465e7  bridge    virbr0
bridge-slave-eno1  76a8ed50-c7e5-4999-b4f6-6d9014dd0812  ethernet  eno1
bridge-slave-eno2  f31c3353-54b7-48de-893a-02d2b34c4736  ethernet  eno2
----

. Log in as the new user on the provision node.
+
----
# su - kni
----

. Copy the pull secret (`pull-secret.txt`) generated earlier and place it in the new user's home directory on the provision node.
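+
For example, one way to copy it, assuming `ssh` access as the `kni` user from the machine holding the pull secret (the hostname is a placeholder):
+
----
$ scp pull-secret.txt kni@provisioner.<cluster-name>.<domain>:~/
----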
modules/ipi-install-required-data-for-installation.adoc (new file)
@@ -0,0 +1,15 @@
// Module included in the following assemblies:
//
// * installing/installing_bare_metal_ipi/ipi-install-prerequisites.adoc

[id="required-data-for-installation_{context}"]
= Required data for installation

Prior to the installation of the {product-title} cluster, gather the following information from all cluster nodes:

* Out-of-band management IP
** Examples
*** Dell (iDRAC) IP
*** HP (iLO) IP
* NIC1 (`provisioning`) MAC address
* NIC2 (`baremetal`) MAC address
modules/ipi-install-retrieving-the-openshift-installer.adoc (new file)
@@ -0,0 +1,15 @@
// Module included in the following assemblies:
//
// * installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc

[id="retrieving-the-openshift-installer_{context}"]
= Retrieving the {product-title} installer

Use the `latest-4.x` version of the installer (currently, `latest-4.3`) to deploy the latest Generally
Available version of {product-title}:

----
export VERSION=latest-4.3
export RELEASE_IMAGE=$(curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print $3}' | xargs)
----
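
To confirm that both variables are set before continuing:

----
echo "$VERSION" "$RELEASE_IMAGE"
----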
modules/ipi-install-troubleshooting-the-bootstrap-vm.adoc (new file)
@@ -0,0 +1,48 @@
// Module included in the following assemblies:
//
// * installing/installing_bare_metal_ipi/ipi-install-troubleshooting.adoc

[id="troubleshooting-the-bootstrap-vm_{context}"]
= Troubleshooting the bootstrap VM

By default, the bootstrap VM runs on the same node as the {product-title}
installer. The bootstrap VM runs the Ironic services needed to provision the
control plane. However, running Ironic depends upon successfully downloading the
machine OS and the Ironic agent images. In some cases, the download can fail,
and the installer will report a timeout waiting for the Ironic API.

The bootstrap VM obtains an IP address from the DHCP server on the externally
routable `baremetal` network. To retrieve the IP address, execute the following:

----
# virsh net-dhcp-leases baremetal
----

If the installation program activates the provisioning network, you can use the
provisioning bootstrap IP, which defaults to `172.22.0.2`. Viewing the bootstrap
VM's console with `virt-manager` can also be helpful.

To troubleshoot the Ironic services on the bootstrap VM, log in to the VM using
the `core` user and the SSH key defined in the installation configuration.

----
# ssh core@<bootstrap-vm> -i /path/to/ssh-key/key.txt
----

//note: Is there a specific username and default path to the key?

To view the Ironic logs, execute the following:

----
# journalctl -u ironic
----

To view the logs of the individual containers, execute the following:

----
# podman logs ipa-downloader
# podman logs coreos-downloader
# podman logs ironic
# podman logs ironic-inspector
# podman logs ironic-dnsmasq
----
modules/ipi-install-troubleshooting-the-control-plane.adoc (new file)
@@ -0,0 +1,16 @@
// Module included in the following assemblies:
//
// * installing/installing_bare_metal_ipi/ipi-install-troubleshooting.adoc

[id="troubleshooting-the-control-plane_{context}"]
= Troubleshooting the control plane

Once Ironic is available, the installer will provision three control plane
nodes. For early failures, use IPMI to access the Baseboard Management
Controller (BMC) of each control plane node to see if it received any error
reports. You can also use a proprietary solution such as iDRAC or iLO.

If commands like `oc get clusteroperators` show degraded Operators when the
cluster comes up and after the installer destroys the bootstrap VM, it can be
useful to examine the logs of the pods within the `openshift-kni-infra`
namespace.
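
For example, to list those pods and view the logs of one of them (the pod name is a placeholder):

----
$ oc get pods -n openshift-kni-infra
$ oc logs <pod-name> -n openshift-kni-infra
----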
@@ -0,0 +1,14 @@
// Module included in the following assemblies:
//
// * installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc

[id="validation-checklist-for-installation_{context}"]
= Validation checklist for installation

* [ ] {product-title} installer has been retrieved.
* [ ] {product-title} installer has been extracted.
* [ ] Required parameters for the `install-config.yaml` have been configured.
* [ ] The `hosts` parameter for the `install-config.yaml` has been configured.
* [ ] The `bmc` parameter for the `install-config.yaml` has been configured.
* [ ] Conventions for the values configured in the `bmc` `address` field have been applied.
modules/ipi-install-validation-checklist-for-nodes.adoc (new file)
@@ -0,0 +1,17 @@
// Module included in the following assemblies:
//
// * installing/installing_bare_metal_ipi/ipi-install-prerequisites.adoc

[id="validation-checklist-for-nodes_{context}"]
= Validation checklist for nodes

* [ ] NIC1 VLAN is configured for the `provisioning` network.
* [ ] NIC2 VLAN is configured for the `baremetal` network.
* [ ] NIC1 is PXE-enabled on the provisioning, control plane (master), and worker nodes.
* [ ] NIC2 is PXE-enabled on the provisioning node.
* [ ] PXE has been disabled on all other NICs.
* [ ] Control plane (master) and worker nodes are configured.
* [ ] All nodes are accessible via out-of-band management.
* [ ] A separate management network has been created.
* [ ] Required data for installation has been gathered.