// Module included in the following assemblies:
//
// * installing/installing_bare_metal_ipi/ipi-install-prerequisites.adoc

:_mod-docs-content-type: CONCEPT
[id="network-requirements_{context}"]
= Network requirements

Installer-provisioned installation of {product-title} involves several network requirements. First, it requires an optional non-routable `provisioning` network for provisioning the operating system on each bare metal node. Second, it requires a routable `baremetal` network.

image::210_OpenShift_Baremetal_IPI_Deployment_updates_0122_2.png[Installer-provisioned networking]

[id="network-requirements-ensuring-required-ports-are-open_{context}"]
|
|
== Ensuring required ports are open
|
|
|
|
Certain ports must be open between cluster nodes for installer-provisioned installations to complete successfully. In certain situations, such as using separate subnets for far edge worker nodes, you must ensure that the nodes in these subnets can communicate with nodes in the other subnets on the following required ports.
|
|
|
|
.Required ports
|
|
[options="header"]
|
|
|====
|
|
|Port|Description
|
|
|
|
|`67`,`68` | When using a provisioning network, cluster nodes access the `dnsmasq` DHCP server over their provisioning network interfaces using ports `67` and `68`.
|
|
|
|
| `69` | When using a provisioning network, cluster nodes communicate with the TFTP server on port `69` using their provisioning network interfaces. The TFTP server runs on the bootstrap VM. The bootstrap VM runs on the provisioner node.
|
|
|
|
| `80` | When not using the image caching option or when using virtual media, the provisioner node must have port `80` open on the `baremetal` machine network interface to stream the {op-system-first} image from the provisioner node to the cluster nodes.
|
|
|
|
| `123` | The cluster nodes must access the NTP server on port `123` using the `baremetal` machine network.
|
|
|
|
|`5050`| The Ironic Inspector API runs on the control plane nodes and listens on port `5050`. The Inspector API is responsible for hardware introspection, which collects information about the hardware characteristics of the bare metal nodes.
|
|
|
|
|`6180`| When deploying with virtual media and not using TLS, the provisioner node and the control plane nodes must have port `6180` open on the `baremetal` machine network interface so that the baseboard management controller (BMC) of the worker nodes can access the {op-system} image. Starting with {product-title} 4.13, the default HTTP port is `6180`.
|
|
|
|
|`6183`| When deploying with virtual media and using TLS, the provisioner node and the control plane nodes must have port `6183` open on the `baremetal` machine network interface so that the BMC of the worker nodes can access the {op-system} image.
|
|
|
|
|`6385`| The Ironic API server runs initially on the bootstrap VM and later on the control plane nodes and listens on port `6385`. The Ironic API allows clients to interact with Ironic for bare metal node provisioning and management, including operations like enrolling new nodes, managing their power state, deploying images, and cleaning the hardware.
|
|
|
|
|`8080`| When using image caching without TLS, port `8080` must be open on the provisioner node and accessible by the BMC interfaces of the cluster nodes.
|
|
|
|
|`8083`| When using the image caching option with TLS, port `8083` must be open on the provisioner node and accessible by the BMC interfaces of the cluster nodes.
|
|
|
|
|`9999`| By default, the Ironic Python Agent (IPA) listens on TCP port `9999` for API calls from the Ironic conductor service. This port is used for communication between the bare metal node where IPA is running and the Ironic conductor service.
|
|
|
|
|====
|
|
|
|
[NOTE]
|
|
====
|
|
For installer-provisioned infrastructure installations, CoreDNS exposes port 53 at the node level, making it accessible from other routable networks.
|
|
====
|
|
|
|
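
As an informal check, which is not part of the official installation procedure, you can probe a required port from a host in another subnet with a utility such as `nc`, if it is installed. The IP address and port in this sketch are placeholders:

[source,terminal]
----
# Report whether TCP port 6385 (the Ironic API) is reachable on a remote node.
$ nc -zv <node_ip> 6385
----
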
[id="network-requirements-increase-mtu_{context}"]
|
|
== Increase the network MTU
|
|
|
|
Before deploying {product-title}, increase the network maximum transmission unit (MTU) to 1500 or more. If the MTU is lower than 1500, the Ironic image that is used to boot the node might fail to communicate with the Ironic inspector pod, and inspection will fail. If this occurs, installation stops because the nodes are not available for installation.
|
|
|
|
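
As an informal check, you can confirm the MTU of a NIC and the path MTU toward another host with commands similar to the following, where `<interface>` and `<target_ip>` are placeholders:

[source,terminal]
----
# Display the link settings, including the current mtu value, for the interface.
$ ip link show <interface>

# Optionally confirm that a 1500-byte path MTU works end to end:
# 1472 bytes of payload plus 28 bytes of ICMP and IP headers equals 1500 bytes.
$ ping -c 3 -M do -s 1472 <target_ip>
----
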
[id="network-requirements-config-nics_{context}"]
|
|
== Configuring NICs
|
|
|
|
{product-title} deploys with two networks:
|
|
|
|
- `provisioning`: The `provisioning` network is an optional non-routable network used for provisioning the underlying operating system on each node that is a part of the {product-title} cluster. The network interface for the `provisioning` network on each cluster node must have the BIOS or UEFI configured to PXE boot.
|
|
+
|
|
The `provisioningNetworkInterface` configuration setting specifies the `provisioning` network NIC name on the control plane nodes, which must be identical on the control plane nodes. The `bootMACAddress` configuration setting provides a means to specify a particular NIC on each node for the `provisioning` network.
|
|
+
|
|
The `provisioning` network is optional, but it is required for PXE booting. If you deploy without a `provisioning` network, you must use a virtual media BMC addressing option such as `redfish-virtualmedia` or `idrac-virtualmedia`.
|
|
|
|
- `baremetal`: The `baremetal` network is a routable network. You can use any NIC to interface with the `baremetal` network provided the NIC is not configured to use the `provisioning` network.
|
|
|
|
[IMPORTANT]
|
|
====
|
|
When using a VLAN, each NIC must be on a separate VLAN corresponding to the appropriate network.
|
|
====
|
|
|
|
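
If you have shell access to a host, one informal way to confirm the NIC names and MAC addresses that you plan to use for the `provisioningNetworkInterface` and `bootMACAddress` settings is the `ip` command:

[source,terminal]
----
# List each interface on one line, including its name and its MAC address.
$ ip -o link show
----
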
[id="network-requirements-dns_{context}"]
|
|
== DNS requirements
|
|
|
|
Clients access the {product-title} cluster nodes over the `baremetal` network. A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name.
|
|
|
|
[source,text]
|
|
----
|
|
<cluster_name>.<base_domain>
|
|
----
|
|
|
|
For example:
|
|
|
|
[source,text]
|
|
----
|
|
test-cluster.example.com
|
|
----
|
|
|
|
{product-title} includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS.
|
|
|
|
In {product-title} deployments, DNS name resolution is required for the following components:
|
|
|
|
* The Kubernetes API
|
|
* The {product-title} application wildcard ingress API
|
|
|
|
A/AAAA records are used for name resolution and PTR records are used for reverse name resolution. {op-system-first} uses the reverse records or DHCP to set the hostnames for all the nodes.
|
|
|
|
Installer-provisioned installation includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. In each record, `<cluster_name>` is the cluster name and `<base_domain>` is the base domain that you specify in the `install-config.yaml` file. A complete DNS record takes the form: `<component>.<cluster_name>.<base_domain>.`.
|
|
|
|
.Required DNS records
|
|
[cols="1a,3a,5a",options="header"]
|
|
|===
|
|
|
|
|Component
|
|
|Record
|
|
|Description
|
|
|
|
|Kubernetes API
|
|
|`api.<cluster_name>.<base_domain>.`
|
|
|An A/AAAA record and a PTR record identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
|
|
|
|
|Routes
|
|
|`*.apps.<cluster_name>.<base_domain>.`
|
|
|The wildcard A/AAAA record refers to the application ingress load balancer. The application ingress load balancer targets the nodes that run the Ingress Controller pods. The Ingress Controller pods run on the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
|
|
|
|
For example, `console-openshift-console.apps.<cluster_name>.<base_domain>` is used as a wildcard route to the {product-title} console.
|
|
|
|
|===
|
|
|
|
[TIP]
|
|
====
|
|
You can use the `dig` command to verify DNS resolution.
|
|
====
|
|
|
|
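
For example, assuming the `test-cluster.example.com` cluster domain used earlier in this section, commands similar to the following confirm that the required records resolve:

[source,terminal]
----
# Resolve the Kubernetes API record.
$ dig +short api.test-cluster.example.com

# Resolve a host name that is covered by the wildcard ingress record.
$ dig +short console-openshift-console.apps.test-cluster.example.com
----
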
[id="network-requirements-dhcp-reqs_{context}"]
|
|
== Dynamic Host Configuration Protocol (DHCP) requirements
|
|
|
|
By default, installer-provisioned installation deploys `ironic-dnsmasq` with DHCP enabled for the `provisioning` network. No other DHCP servers should be running on the `provisioning` network when the `provisioningNetwork` configuration setting is set to `managed`, which is the default value. If you have a DHCP server running on the `provisioning` network, you must set the `provisioningNetwork` configuration setting to `unmanaged` in the `install-config.yaml` file.
|
|
|
|
Network administrators must reserve IP addresses for each node in the {product-title} cluster for the `baremetal` network on an external DHCP server.
|
|
|
|
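
As an informal check after you create the reservations, you can confirm the address that a running host received on its `baremetal` NIC by querying NetworkManager, where `<interface>` is a placeholder for the NIC name:

[source,terminal]
----
# Show the device details, including the IP4.ADDRESS values assigned by DHCP.
$ nmcli device show <interface>
----
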
[id="network-requirements-reserving-ip-addresses_{context}"]
|
|
== Reserving IP addresses for nodes with the DHCP server
|
|
|
|
For the `baremetal` network, a network administrator must reserve a number of IP addresses, including:
|
|
|
|
. Two unique virtual IP addresses.
|
|
+
|
|
- One virtual IP address for the API endpoint.
|
|
- One virtual IP address for the wildcard ingress endpoint.
|
|
+
|
|
. One IP address for the provisioner node.
|
|
. One IP address for each control plane node.
|
|
. One IP address for each worker node, if applicable.
|
|
|
|
[IMPORTANT]
|
|
.Reserving IP addresses so they become static IP addresses
|
|
====
|
|
Some administrators prefer to use static IP addresses so that each node's IP address remains constant in the absence of a DHCP server. To configure static IP addresses with NMState, see "(Optional) Configuring node network interfaces" in the "Setting up the environment for an OpenShift installation" section.
|
|
====
|
|
|
|
[IMPORTANT]
|
|
.Networking between external load balancers and control plane nodes
|
|
====
|
|
External load balancing services and the control plane nodes must run on the same L2 network, and on the same VLAN when using VLANs to route traffic between the load balancing services and the control plane nodes.
|
|
====
|
|
|
|
[IMPORTANT]
|
|
====
|
|
The storage interface requires a DHCP reservation or a static IP.
|
|
====
|
|
|
|
The following table provides examples of fully qualified domain names. The API and name server addresses begin with canonical name extensions. The hostnames of the control plane and worker nodes are examples, so you can use any host naming convention you prefer.

[width="100%", cols="3,5,2", options="header"]
|=====
| Usage | Host Name | IP
| API | `api.<cluster_name>.<base_domain>` | `<ip>`
| Ingress LB (apps) | `*.apps.<cluster_name>.<base_domain>` | `<ip>`
| Provisioner node | `provisioner.<cluster_name>.<base_domain>` | `<ip>`
| Control-plane-0 | `openshift-control-plane-0.<cluster_name>.<base_domain>` | `<ip>`
| Control-plane-1 | `openshift-control-plane-1.<cluster_name>.<base_domain>` | `<ip>`
| Control-plane-2 | `openshift-control-plane-2.<cluster_name>.<base_domain>` | `<ip>`
| Worker-0 | `openshift-worker-0.<cluster_name>.<base_domain>` | `<ip>`
| Worker-1 | `openshift-worker-1.<cluster_name>.<base_domain>` | `<ip>`
| Worker-n | `openshift-worker-n.<cluster_name>.<base_domain>` | `<ip>`
|=====

[NOTE]
====
If you do not create DHCP reservations, the installer requires reverse DNS resolution to set the hostnames for the Kubernetes API node, the provisioner node, the control plane nodes, and the worker nodes.
====

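
Because reverse DNS resolution can be used to set the hostnames, it can be useful to confirm that the PTR records exist. For example, a reverse lookup similar to the following should return the expected fully qualified domain name, where `<ip>` is a placeholder for a node IP address:

[source,terminal]
----
# Perform a reverse (PTR) lookup for the node IP address.
$ dig +short -x <ip>
----
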
[id="network-requirements-provisioner_{context}"]
|
|
== Provisioner node requirements
|
|
|
|
You must specify the MAC address for the provisioner node in your installation configuration. The `bootMacAddress` specification is typically associated with PXE network booting. However, the Ironic provisioning service also requires the `bootMacAddress` specification to identify nodes during the inspection of the cluster, or during node redeployment in the cluster.
|
|
|
|
The provisioner node requires layer 2 connectivity for network booting, DHCP and DNS resolution, and local network communication. The provisioner node requires layer 3 connectivity for virtual media booting.
|
|
|
|
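
As an informal way to confirm the layer 3 connectivity that virtual media booting relies on, you can check that the provisioner node can reach a BMC address, where `<bmc_ip>` is a placeholder:

[source,terminal]
----
# Confirm that the provisioner node can route to the out-of-band management network.
$ ping -c 3 <bmc_ip>
----
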
[id="network-requirements-ntp_{context}"]
|
|
== Network Time Protocol (NTP)
|
|
|
|
Each {product-title} node in the cluster must have access to an NTP server. {product-title} nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync.
|
|
|
|
[IMPORTANT]
|
|
====
|
|
Define a consistent clock date and time format in each cluster node's BIOS settings, or installation might fail.
|
|
====
|
|
|
|
You can reconfigure the control plane nodes to act as NTP servers on disconnected clusters, and reconfigure worker nodes to retrieve time from the control plane nodes.
|
|
|
|
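
Because {op-system} uses `chronyd` for time synchronization, a quick informal check on a host with shell access is to query chrony, for example:

[source,terminal]
----
# Show the time sources that chronyd is currently using.
$ chronyc sources

# Show synchronization status, including the offset from the reference clock.
$ chronyc tracking
----
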
[id="network-requirements-out-of-band_{context}"]
|
|
== Port access for the out-of-band management IP address
|
|
|
|
The out-of-band management IP address is on a separate network from the node. To ensure that the out-of-band management can communicate with the provisioner node during installation, the out-of-band management IP address must be granted access to port `6180` on the provisioner node and on the {product-title} control plane nodes. TLS port `6183` is required for virtual media installation, for example, by using Redfish.
|
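
As an informal check from a host on the out-of-band management network, you can confirm that these ports are reachable on the provisioner node with a utility such as `nc`, where `<provisioner_ip>` is a placeholder:

[source,terminal]
----
# Check the HTTP port used when not using TLS.
$ nc -zv <provisioner_ip> 6180

# Check the TLS port used for virtual media, for example with Redfish.
$ nc -zv <provisioner_ip> 6183
----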