
Added additional edits from Roger Lopez.

Signed-off-by: John Wilkins <jowilkin@redhat.com>

Conflicts:
	_topic_map.yml
John Wilkins authored on 2020-04-23 20:00:57 -07:00; committed by Ashley Hardin
parent 3726e458b6
commit f430c5c7fb
9 changed files with 71 additions and 46 deletions

View File

@@ -177,12 +177,14 @@ Topics:
    File: installing-bare-metal-network-customizations
  - Name: Restricted network bare metal installation
    File: installing-restricted-networks-bare-metal
-- Name: Deploying IPI Bare Metal
+- Name: Deploying IPI bare metal
  Dir: installing_bare_metal_ipi
  Distros: openshift-webscale
  Topics:
-  - Name: Deploying IPI bare metal
-    File: deploying-ipi-bare-metal
+  - Name: Overview
+    File: ipi-install-overview
+  - Name: Prerequisites
+    File: ipi-install-prerequisites
+  - Name: Setting up the environment for an OpenShift installation
+    File: ipi-install-installation-workflow
- Name: Prerequisites

View File

@@ -5,7 +5,12 @@ include::modules/common-attributes.adoc[]
toc::[]
-After an environment has been prepared according to the documented prerequisites, the installation process is the same as other IPI-based platforms.
+After an environment has been prepared according to the documented prerequisites, the provisioner node provisions a {product-title} cluster consisting of:
+. Three control plane or master nodes.
+. Two worker nodes.
+The installation process is the same as on other IPI-based platforms.
include::modules/ipi-install-preparing-the-provisioner-node-for-openshift-install.adoc[leveloffset=+1]

View File

@@ -1,5 +1,5 @@
[id="deploying-ipi-bare-metal"]
= Deploying IPI Bare Metal
[id="ipi-install-overview"]
= Overview
include::modules/common-attributes.adoc[]
:context: ipi-install

View File

@@ -5,10 +5,21 @@ include::modules/common-attributes.adoc[]
toc::[]
+Installing {product-title} requires:
+. One provisioner node with RHEL 8.1 installed.
+. Three control plane or master nodes.
+. At least two worker nodes.
+. IPMI access to each node.
+. At least two networks:
+.. One network for provisioning nodes.
+.. One network routable to the internet.
+.. One optional management network.
Before installing {product-title}, ensure the hardware environment meets the following requirements.
-include::modules/ipi-install-network-requirements.adoc[leveloffset=+1]
+include::modules/ipi-install-node-requirements.adoc[leveloffset=+1]
+include::modules/ipi-install-network-requirements.adoc[leveloffset=+1]
include::modules/ipi-install-configuring-nodes.adoc[leveloffset=+1]
include::modules/ipi-install-out-of-band-management.adoc[leveloffset=+1]
include::modules/ipi-install-required-data-for-installation.adoc[leveloffset=+1]
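Because IPMI access to every node is a hard prerequisite, it is worth a pre-flight check from the provisioner node before starting. A minimal sketch, assuming `ipmitool` is installed and with `<out-of-band-ip>`, `<user>`, and `<password>` standing in for each node's BMC details; this check is an illustration, not part of the module:

----
# Assumption: BMC credentials and address are placeholders for each node.
[kni@provisioner ~]$ ipmitool -I lanplus -H <out-of-band-ip> -U <user> -P <password> power status
Chassis Power is on
----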

View File

@@ -33,7 +33,6 @@ platform:
ingressVIP: <wildcard-ip>
dnsVIP: <dns-ip>
-provisioningBridge: provisioning
externalBridge: baremetal
hosts:
- name: openshift-master-0
role: master
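For orientation, these fields live under the `baremetal` platform stanza of `install-config.yaml`. A sketch of how the stanza presumably reads after this change, assuming the hunk removes `provisioningBridge`; the `apiVIP` and `bmc` lines are assumptions not shown in this hunk:

----
platform:
  baremetal:
    apiVIP: <api-ip>          # assumed; not shown in the hunk above
    ingressVIP: <wildcard-ip>
    dnsVIP: <dns-ip>
    externalBridge: baremetal
    hosts:
      - name: openshift-master-0
        role: master
        bmc:                  # assumed host detail; not shown in the hunk above
          address: ipmi://<out-of-band-ip>
          username: <user>
          password: <password>
----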

View File

@@ -4,11 +4,19 @@
[id="configuring-the-metal3-config-file_{context}"]
-= Configuring the `metal3-config.yaml` file ({product-title} 4.3 only)
+= Configuring the `metal3-config.yaml` file
-If you are you working in {product-title} 4.3, you must create the `ConfigMap metal3-config.yaml.sample` file.
+You must create and configure a ConfigMap `metal3-config.yaml` file.
-. Create `ConfigMap metal3-config.yaml.sample`.
+.Procedure
+. Create a ConfigMap `metal3-config.yaml.sample`.
+
----
[kni@provisioner ~]$ vim metal3-config.yaml.sample
----
+
+Provide the following contents:
+
----
apiVersion: v1
@@ -31,7 +39,7 @@ data:
+
[NOTE]
====
-The `provisioning_ip` should be modified to an available IP on the `provisioning` network. The default is `172.22.0.3`.
+Replace `<provisioning_ip>` with an available IP on the `provisioning` network. The default is `172.22.0.3`.
====
. Create the final ConfigMap.
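The hunk ends before the final step's commands. A minimal sketch of one way to finish it, assuming the sample file carries the `<provisioning_ip>` placeholder shown in the note and that the ConfigMap targets the `openshift-machine-api` namespace; both are assumptions:

----
# Assumption: the sample file uses the <provisioning_ip> placeholder.
[kni@provisioner ~]$ sed 's/<provisioning_ip>/172.22.0.3/' metal3-config.yaml.sample > metal3-config.yaml
# Assumption: the target namespace is openshift-machine-api.
[kni@provisioner ~]$ oc create -f metal3-config.yaml -n openshift-machine-api
----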

View File

@@ -68,10 +68,10 @@ The following table provides an exemplary embodiment of hostnames for each node
| Ingress LB (apps) | *.apps.<cluster-name>.<domain> | <ip>
| Nameserver | ns1.<cluster-name>.<domain> | <ip>
| Provisioner node | provisioner.<cluster-name>.<domain> | <ip>
-| Master-0 | master-0.<cluster-name>.<domain> | <ip>
-| Master-1 | master-1.<cluster-name>-.<domain> | <ip>
-| Master-2 | master-2.<cluster-name>.<domain> | <ip>
-| Worker-0 | worker-0.<cluster-name>.<domain> | <ip>
-| Worker-1 | worker-1.<cluster-name>.<domain> | <ip>
-| Worker-n | worker-n.<cluster-name>.<domain> | <ip>
+| Master-0 | openshift-master-0.<cluster-name>.<domain> | <ip>
+| Master-1 | openshift-master-1.<cluster-name>.<domain> | <ip>
+| Master-2 | openshift-master-2.<cluster-name>.<domain> | <ip>
+| Worker-0 | openshift-worker-0.<cluster-name>.<domain> | <ip>
+| Worker-1 | openshift-worker-1.<cluster-name>.<domain> | <ip>
+| Worker-n | openshift-worker-n.<cluster-name>.<domain> | <ip>
|=====
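Before installing, it can help to spot-check that these records resolve from the provisioner node. The `dig` invocations below are an illustration, not part of the module:

----
# Illustration only: each command should return the <ip> from the table above.
[kni@provisioner ~]$ dig +short api.<cluster-name>.<domain>
[kni@provisioner ~]$ dig +short test.apps.<cluster-name>.<domain>
[kni@provisioner ~]$ dig +short openshift-master-0.<cluster-name>.<domain>
----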

View File

@@ -13,16 +13,16 @@ Perform the following steps need to prepare the environment.
. Create a user (for example, `kni`) to deploy as non-root and provide that user `sudo` privileges.
+
----
-[root@provision ~]# useradd kni
-[root@provision ~]# passwd kni
-[root@provision ~]# echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni
-[root@provision ~]# chmod 0440 /etc/sudoers.d/kni
+[root@provisioner ~]# useradd kni
+[root@provisioner ~]# passwd kni
+[root@provisioner ~]# echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni
+[root@provisioner ~]# chmod 0440 /etc/sudoers.d/kni
----
. Create an `ssh` key for the new user.
+
----
-[root@provision ~]# su - kni -c "ssh-keygen -t rsa -f /home/kni/.ssh/id_rsa -N ''"
+[root@provisioner ~]# su - kni -c "ssh-keygen -t rsa -f /home/kni/.ssh/id_rsa -N ''"
----
. Log in as the new user on the provision node.
@@ -34,8 +34,8 @@ Perform the following steps need to prepare the environment.
. Use Red Hat Subscription Manager to register your environment.
+
----
-[kni@provision ~]$ sudo subscription-manager register --username=<user> --password=<pass> --auto-attach
-[kni@provision ~]$ sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms
+[kni@provisioner ~]$ sudo subscription-manager register --username=<user> --password=<pass> --auto-attach
+[kni@provisioner ~]$ sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms
----
+
[NOTE]
@@ -46,38 +46,38 @@ For more information about Red Hat Subscription Manager, see https://access.redh
. Install the following packages.
+
----
-[kni@provision ~]$ sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool
+[kni@provisioner ~]$ sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool
----
. Add the newly created user to the `libvirt` group.
+
----
-[kni@provision ~]$ sudo usermod --append --groups libvirt <user>
+[kni@provisioner ~]$ sudo usermod --append --groups libvirt <user>
----
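A quick way to confirm the group change took effect, assuming the user is `kni`; this check is an illustration, not part of the module:

----
# Illustration only: the output should list libvirt among the groups.
[kni@provisioner ~]$ id kni
----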
. Start `firewalld`, enable the `http` service, and enable port 5000.
+
----
-[kni@provision ~]$ sudo systemctl start firewalld
-[kni@provision ~]$ sudo firewall-cmd --zone=public --add-service=http --permanent
-[kni@provision ~]$ sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent
-[kni@provision ~]$ sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent
-[kni@provision ~]$ sudo firewall-cmd --reload
+[kni@provisioner ~]$ sudo systemctl start firewalld
+[kni@provisioner ~]$ sudo firewall-cmd --zone=public --add-service=http --permanent
+[kni@provisioner ~]$ sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent
+[kni@provisioner ~]$ sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent
+[kni@provisioner ~]$ sudo firewall-cmd --reload
----
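A follow-up check that the service and port registered, assuming the default `public` zone; illustration only:

----
# Illustration only: expect http among the services and 5000/tcp among the ports.
[kni@provisioner ~]$ sudo firewall-cmd --zone=public --list-services
[kni@provisioner ~]$ sudo firewall-cmd --zone=public --list-ports
----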
. Start and enable the `libvirtd` service.
+
----
-[kni@provision ~]$ sudo systemctl start libvirtd
-[kni@provision ~]$ sudo systemctl enable libvirtd --now
+[kni@provisioner ~]$ sudo systemctl start libvirtd
+[kni@provisioner ~]$ sudo systemctl enable libvirtd --now
----
. Create the default storage pool and start it.
+
----
-[kni@provision ~]$ sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images
-[kni@provision ~]$ sudo virsh pool-start default
-[kni@provision ~]$ sudo virsh pool-autostart default
+[kni@provisioner ~]$ sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images
+[kni@provisioner ~]$ sudo virsh pool-start default
+[kni@provisioner ~]$ sudo virsh pool-autostart default
----
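To confirm the pool is active and set to autostart; illustration only, with the output format shown indicatively:

----
[kni@provisioner ~]$ sudo virsh pool-list --all
 Name      State    Autostart
-------------------------------
 default   active   yes
----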
. Configure networking.
@@ -89,9 +89,9 @@ This step can also be run from the console.
+
----
-[kni@provision ~]$ export PUB_CONN=<baremetal_nic_name>
-[kni@provision ~]$ export PROV_CONN=<prov_nic_name>
-[kni@provision ~]$ sudo nohup bash -c '
+[kni@provisioner ~]$ export PUB_CONN=<baremetal_nic_name>
+[kni@provisioner ~]$ export PROV_CONN=<prov_nic_name>
+[kni@provisioner ~]$ sudo nohup bash -c '
nmcli con down "$PROV_CONN"
nmcli con down "$PUB_CONN"
nmcli con delete "$PROV_CONN"
@@ -121,7 +121,7 @@ NOTE: The `ssh` connection may disconnect after executing this step.
. Verify the connection bridges have been properly created.
+
----
-[kni@provision ~]$ sudo nmcli con show
+[kni@provisioner ~]$ sudo nmcli con show
----
+
----
@@ -137,7 +137,7 @@ bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2
. Create a `pull-secret.txt` file.
+
----
-[kni@provision ~]$ vim pull-secret.txt
+[kni@provisioner ~]$ vim pull-secret.txt
----
+
In a web browser, navigate to https://cloud.redhat.com/openshift/install/metal/user-provisioned[Install on Bare Metal with user-provisioned infrastructure], and scroll down to the **Downloads** section. Click **Copy pull secret**. Paste the contents into the `pull-secret.txt` file and save the contents in the `kni` user's home directory.
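Since `jq` is installed earlier in this procedure, a quick sanity check that the pasted pull secret is valid JSON; illustration only:

----
# Illustration only: jq exits non-zero if the file is not valid JSON.
[kni@provisioner ~]$ jq . pull-secret.txt > /dev/null && echo "pull secret is valid JSON"
----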

View File

@@ -57,7 +57,7 @@ For more information about Red Hat Subscription Manager, see link:https://access
[kni@provisioner ~]$ sudo usermod --append --groups libvirt <user>
----
-. Start `firewalld` and enable the `http` service.
+. Restart `firewalld` and enable the `http` service.
+
----
[kni@provisioner ~]$ sudo systemctl start firewalld
@@ -72,7 +72,7 @@ For more information about Red Hat Subscription Manager, see link:https://access
[kni@provisioner ~]$ sudo systemctl enable libvirtd --now
----
-. Create the default storage pool and start it.
+. Create the `default` storage pool and start it.
+
----
[kni@provisioner ~]$ sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images
@@ -115,10 +115,10 @@ This step can also be run from the web console.
The `ssh` connection may disconnect after executing this step.
====
-. `ssh` back into the provisioner node (if required).
+. `ssh` back into the `provisioner` node (if required).
+
----
-# ssh kni@provisioner
+# ssh provisioner.<cluster-name>.<domain>
----
. Verify the connection bridges have been properly created.
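The hunk cuts off before this step's command; as in the earlier module, the verification is presumably:

----
[kni@provisioner ~]$ sudo nmcli con show
----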