mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 21:46:22 +01:00

Squashing 2 commits into 1 commit.

This commit is contained in:
John Wilkins
2021-06-11 11:58:57 -07:00
committed by openshift-cherrypick-robot
parent 9e5cdf49d3
commit b0ea95149e
3 changed files with 104 additions and 6 deletions

@@ -3,6 +3,10 @@
include::modules/common-attributes.adoc[]
:context: ipi-install-post-installation-configuration
toc::[]
After successfully deploying an installer-provisioned cluster, consider the following post-installation procedures.
include::modules/ipi-install-configuring-ntp-for-disconnected-clusters.adoc[leveloffset=+1]
include::modules/nw-enabling-a-provisioning-network-after-installation.adoc[leveloffset=+1]

@@ -33,20 +33,31 @@ endif::[]
{product-title} deploys with two networks:
- `provisioning`: The `provisioning` network is an optional non-routable network used for provisioning the underlying operating system on each node that is a part of the {product-title} cluster. The network interface for the `provisioning` network on each cluster node must have the BIOS or UEFI configured to PXE boot.
+
In {product-title} 4.3, when deploying using the `provisioning` network, the first NIC on each node, such as `eth0` or `eno1`, must interface with the `provisioning` network.
+
In {product-title} 4.4 and later releases, you can specify the provisioning network NIC with the `provisioningNetworkInterface` configuration setting.
- `baremetal`: The `baremetal` network is a routable network.
+
In {product-title} 4.3, when deploying using the `provisioning` network, the second NIC on each node, such as `eth1` or `eno2`, must interface with the `baremetal` network.
+
In {product-title} 4.4 and later releases, you can use any NIC order to interface with the `baremetal` network, provided it is the same NIC order across worker and control plane nodes and not the NIC specified in the `provisioningNetworkInterface` configuration setting for the `provisioning` network.
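+
For illustration, the `provisioningNetworkInterface` setting lives under the bare metal platform section of `install-config.yaml`. The interface name `eno1` below is a placeholder; use the NIC that is cabled to your `provisioning` network:
+
[source,yaml]
----
platform:
  baremetal:
    provisioningNetworkInterface: eno1
----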
[NOTE]
====
Use the same NIC ordering on all cluster nodes. NICs must use the same naming convention, such as `eth0` or `eno1`, on every node, even if the underlying hardware is heterogeneous.
====
[IMPORTANT]
====
When using a VLAN, each NIC must be on a separate VLAN corresponding to the appropriate network.
====
.Configuring the DNS server
Clients access the {product-title} cluster nodes over the `baremetal` network. A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name.
----
<cluster-name>.<domain-name>
----

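With this naming scheme, typical records the network administrator creates include the API endpoint and the wildcard for application routes. The record names below are illustrative:

[source,text]
----
api.<cluster-name>.<domain-name>      ; Kubernetes API endpoint
*.apps.<cluster-name>.<domain-name>   ; Ingress wildcard for application routes
----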
@@ -0,0 +1,83 @@
// This is included in the following assemblies:
//
// ipi-install-post-installation-configuration.adoc
[id="enabling-a-provisioning-network-after-installation_{context}"]
= Enabling a provisioning network after installation
The assisted installer and installer-provisioned installation for bare metal clusters provide the ability to deploy a cluster without a `provisioning` network. This capability is for scenarios such as proof-of-concept clusters or deploying exclusively with Redfish virtual media when each node's baseboard management controller is routable via the `baremetal` network.
In {product-title} 4.8 and later, you can enable a `provisioning` network after installation using the Cluster Baremetal Operator (CBO).
.Prerequisites
. The `provisioning` network must exist.
. The `provisioning` network must be enabled.
. The cluster nodes must be connected to the `provisioning` network using the same network interface on both worker nodes and control plane nodes.
. The cluster nodes must be homogeneous. If the cluster nodes use different network interface names for the same network interface order, such as `eth0` and `eno1` for the first network interface, the procedure fails.
.Procedure
. Identify the provisioning interface name for the cluster nodes. For example, `eth0` or `eno1`.
. Enable the preboot execution environment (PXE) on the `provisioning` network interface of the cluster nodes.
. Retrieve the current state of the `provisioning` network and save it to a provisioning configuration resource file:
+
[source,terminal]
----
$ oc get provisioning -o yaml > enable-provisioning-nw.yaml
----
. Modify the provisioning configuration resource file:
+
[source,terminal]
----
$ vim ~/enable-provisioning-nw.yaml
----
+
Scroll down to the `provisioningNetwork` configuration setting and change it from `Disabled` to `Managed`. Then, add the `provisioningOSDownloadURL`, `provisioningIP`, `provisioningNetworkCIDR`, `provisioningDHCPRange`, `provisioningInterface`, and `watchAllNameSpaces` configuration settings after the `provisioningNetwork` setting. Provide appropriate values for each setting.
+
[source,yaml]
----
apiVersion: v1
items:
- apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
name: provisioning-configuration
spec:
provisioningNetwork: <1>
provisioningOSDownloadURL: <2>
provisioningIP: <3>
provisioningNetworkCIDR: <4>
provisioningDHCPRange: <5>
provisioningInterface: <6>
watchAllNameSpaces: <7>
----
+
where:
+
<1> The `provisioningNetwork` is one of `Managed`, `Unmanaged`, or `Disabled`. When set to `Managed`, Metal3 manages the `provisioning` network and the CBO deploys the Metal3 pod with a configured DHCP server. When set to `Unmanaged`, the system administrator configures the DHCP server manually.
+
<2> The `provisioningOSDownloadURL` is a valid HTTPS URL with a valid sha256 checksum that enables the Metal3 pod to download a qcow2 operating system image ending in `.qcow2.gz` or `.qcow2.xz`. This field is required whether the provisioning network is `Managed`, `Unmanaged`, or `Disabled`. For example: `\http://192.168.0.1/images/rhcos-_<version>_.x86_64.qcow2.gz?sha256=_<sha>_`.
+
<3> The `provisioningIP` is the static IP address that the DHCP server and Ironic use to provision the network. This static IP address must be within the `provisioning` subnet, and outside of the DHCP range. If you configure this setting, it must have a valid IP address even if the `provisioning` network is `Disabled`. The static IP address is bound to the Metal3 pod. If the Metal3 pod fails and moves to another server, the static IP address also moves to the new server.
+
<4> The Classless Inter-Domain Routing (CIDR) address. If you configure this setting, it must have a valid CIDR address even if the `provisioning` network is `Disabled`. For example: `192.168.0.1/24`.
+
<5> The DHCP range. This setting is only applicable to a `Managed` provisioning network. Omit this configuration setting if the `provisioning` network is `Disabled`. For example: `192.168.0.64, 192.168.0.253`.
+
<6> The NIC name for the `provisioning` interface on cluster nodes. This setting is only applicable to `Managed` and `Unmanaged` provisioning networks. Omit this configuration setting if the `provisioning` network is `Disabled`.
+
<7> Set this to `true` if you want Metal3 to watch namespaces other than the default `openshift-machine-api` namespace. The default value is `false`.
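+
For illustration, a fully populated `spec` might look like the following. All addresses are placeholder values chosen to be consistent with the constraints above (the `provisioningIP` is inside the CIDR but outside the DHCP range), and the image URL keeps the `<version>` and `<sha>` placeholders:
+
[source,yaml]
----
spec:
  provisioningNetwork: Managed
  provisioningOSDownloadURL: http://192.168.0.1/images/rhcos-<version>.x86_64.qcow2.gz?sha256=<sha>
  provisioningIP: 192.168.0.10
  provisioningNetworkCIDR: 192.168.0.0/24
  provisioningDHCPRange: 192.168.0.64,192.168.0.253
  provisioningInterface: eno1
  watchAllNameSpaces: false
----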
. Save the changes to the provisioning configuration resource file.
. Apply the provisioning configuration resource file to the cluster:
+
[source,terminal]
----
$ oc apply -f enable-provisioning-nw.yaml
----
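. Optional: verify that the setting was applied by reading it back from the cluster. The exact output depends on your cluster; a `Managed` value indicates the `provisioning` network is enabled:
+
[source,terminal]
----
$ oc get provisioning provisioning-configuration -o jsonpath='{.spec.provisioningNetwork}'
----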