
OSDOCS-14356-New: Added bond best practices info to networking docs

This commit is contained in:
dfitzmau
2025-05-08 16:56:16 +01:00
committed by openshift-cherrypick-robot
parent 2ec1076c99
commit 36dbaa98d3
14 changed files with 159 additions and 58 deletions

View File

@@ -1761,6 +1761,8 @@ Topics:
File: verifying-connectivity-endpoint
- Name: Changing the cluster network MTU
File: changing-cluster-network-mtu
- Name: Network bonding considerations
File: network-bonding-considerations
- Name: Using Stream Control Transmission Protocol
File: using-sctp
- Name: Associating secondary interfaces metrics to network attachments

View File

@@ -38,9 +38,6 @@ include::modules/creating-manifest-file-customized-br-ex-bridge.adoc[leveloffset
// Scale each machine set to compute nodes
include::modules/creating-scaling-machine-sets-compute-nodes-networking.adoc[leveloffset=+2]
// Enabling OVS balance-slb mode for your cluster
include::modules/enabling-OVS-balance-slb-mode.adoc[leveloffset=+1]
// Establishing communication between subnets
include::modules/ipi-install-establishing-communication-between-subnets.adoc[leveloffset=+1]

View File

@@ -84,9 +84,6 @@ include::modules/creating-manifest-file-customized-br-ex-bridge.adoc[leveloffset
// Scale each machine set to compute nodes
include::modules/creating-scaling-machine-sets-compute-nodes-networking.adoc[leveloffset=+2]
// Enabling OVS balance-slb mode for your cluster
include::modules/enabling-OVS-balance-slb-mode.adoc[leveloffset=+1]
include::modules/installation-infrastructure-user-infra.adoc[leveloffset=+1]
[role="_additional-resources"]

View File

@@ -99,9 +99,6 @@ include::modules/creating-manifest-file-customized-br-ex-bridge.adoc[leveloffset
// Scale each machine set to compute nodes
include::modules/creating-scaling-machine-sets-compute-nodes-networking.adoc[leveloffset=+2]
// Enabling OVS balance-slb mode for your cluster
include::modules/enabling-OVS-balance-slb-mode.adoc[leveloffset=+1]
include::modules/installation-infrastructure-user-infra.adoc[leveloffset=+1]
[role="_additional-resources"]

View File

@@ -18,6 +18,11 @@ When attaching a secondary network, you can either use the existing `br-ex` brid
- If your nodes include only a single network interface, you must use the existing bridge. This network interface is owned and managed by OVN-Kubernetes and you must not remove it from the `br-ex` bridge or alter the interface configuration. If you remove or alter the network interface, your cluster network stops working correctly.
- If your nodes include several network interfaces, you can attach a different network interface to a new bridge, and use that for your secondary network. This approach provides for traffic isolation from your primary cluster network.
[NOTE]
====
You cannot make configuration changes to the `br-ex` bridge or its underlying interfaces in the `NodeNetworkConfigurationPolicy` (NNCP) resource as a postinstallation task. As a workaround, use a secondary network interface connected to your host or switch.
====
The `localnet1` network is mapped to the `br-ex` bridge in the following sharing-a-bridge example:
[source,yaml]
@@ -35,17 +40,16 @@ spec:
- localnet: localnet1
bridge: br-ex
state: present
# ...
----
+
where:
+
`metadata.name`:: The name for the configuration object.
`spec.nodeSelector.node-role.kubernetes.io/worker`:: A node selector that specifies the nodes to apply the node network configuration policy to.
`spec.desiredState.ovn.bridge-mappings.localnet`:: The name for the secondary network from which traffic is forwarded to the OVS bridge. This secondary network must match the name of the `spec.config.name` field of the `NetworkAttachmentDefinition` CRD that defines the OVN-Kubernetes secondary network.
`spec.desiredState.ovn.bridge-mappings.bridge`:: The name of the OVS bridge on the node. This value is required only if you specify `state: present`.
`spec.desiredState.ovn.bridge-mappings.state`:: The state for the mapping. Must be either `present` to add the bridge or `absent` to remove the bridge. The default value is `present`.
+
The following JSON example configures a localnet secondary network that is named `localnet1`. Note that the value for the `mtu` parameter must match the MTU value that was set for the secondary network interface that is mapped to the `br-ex` bridge interface.
[source,json]
@@ -96,18 +100,18 @@ spec:
bridge: ovs-br1
state: present
----
+
where:
+
`metadata.name`:: Specifies the name of the configuration object.
`node-role.kubernetes.io/worker`:: Specifies a node selector that identifies the nodes to which the node network configuration policy applies.
`desiredState.interfaces.name`:: Specifies a new OVS bridge that operates separately from the default bridge used by OVN-Kubernetes for cluster traffic.
`options.mcast-snooping-enable`:: Specifies whether to enable multicast snooping. When enabled, multicast snooping prevents network devices from flooding multicast traffic to all network members. By default, an OVS bridge does not enable multicast snooping. The default value is `false`.
`bridge.port.name`:: Specifies the network device on the host system to associate with the new OVS bridge.
`ovn.bridge-mappings.localnet`:: Specifies the name of the secondary network that forwards traffic to the OVS bridge. This name must match the value of the `spec.config.name` field in the `NetworkAttachmentDefinition` CRD that defines the OVN-Kubernetes secondary network.
`ovn.bridge-mappings.bridge`:: Specifies the name of the OVS bridge on the node. The value is required only when `state: present` is set.
`ovn.bridge-mappings.state`:: Specifies the state of the mapping. Valid values are `present` to add the bridge or `absent` to remove the bridge. The default value is `present`.
+
The following JSON example configures a localnet secondary network that is named `localnet2`. Note that the value for the `mtu` parameter must match the MTU value that was set for the `eth1` secondary network interface.
[source,json]
@@ -125,4 +129,3 @@ The following JSON example configures a localnet secondary network that is named
"excludeSubnets": "10.100.200.0/29" "excludeSubnets": "10.100.200.0/29"
} }
---- ----

View File

@@ -6,6 +6,7 @@
[id="configuring-ovnk-use-second-ovs-bridge_{context}"] [id="configuring-ovnk-use-second-ovs-bridge_{context}"]
= Configuring OVN-Kubernetes to use a secondary OVS bridge = Configuring OVN-Kubernetes to use a secondary OVS bridge
[role="_abstract"]
You can create an additional or _secondary_ Open vSwitch (OVS) bridge, `br-ex1`, that OVN-Kubernetes manages and the Multiple External Gateways (MEG) implementation uses for defining external gateways for an {product-title} node. You can define a MEG in an `AdminPolicyBasedExternalRoute` custom resource (CR). The MEG implementation provides a pod with access to multiple gateways, equal-cost multipath (ECMP) routes, and the Bidirectional Forwarding Detection (BFD) implementation.
Consider a use case where pods are impacted by the Multiple External Gateways (MEG) feature and you want to egress traffic through a different interface, for example `br-ex1`, on a node. Egress traffic for pods that are not impacted by MEG is routed to the default OVS `br-ex` bridge.
@@ -17,6 +18,11 @@ Currently, MEG is unsupported for use with other egress features, such as egress
You must define the additional bridge in an interface definition of a machine configuration manifest file. The Machine Config Operator uses the manifest to create a new file at `/etc/ovnk/extra_bridge` on the host. The new file includes the name of the network interface that the additional OVS bridge configures for a node.
[IMPORTANT]
====
Do not use the `nmstate` API to make configuration changes to the secondary interface that is defined in the `/etc/ovnk/extra_bridge` file. The `configure-ovs.sh` configuration script creates and manages OVS bridge interfaces, so any disruptive changes to these interfaces by the `nmstate` API can lead to network configuration instability.
====
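For illustration only, the following hedged example shows what the generated `/etc/ovnk/extra_bridge` file might contain, assuming a node whose spare network interface is named `eth1`. The interface name is hypothetical and depends on your hardware.
[source,terminal]
----
$ cat /etc/ovnk/extra_bridge
----
.Example output
[source,terminal]
----
eth1
----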
After you create and edit the manifest file, the Machine Config Operator completes tasks in the following order:
. Drains nodes one by one, based on the selected machine configuration pool.
@@ -39,9 +45,9 @@ For more information about useful situations for the additional `br-ex1` bridge
+
[IMPORTANT]
====
Do not use the Kubernetes NMState Operator or a `NodeNetworkConfigurationPolicy` (NNCP) manifest file to define the additional interface. Ensure that the additional interface, or the sub-interfaces when you define a `bond` interface, are not used by an existing `br-ex` OVN-Kubernetes network deployment.
You cannot make configuration changes to the `br-ex` bridge or its underlying interfaces as a postinstallation task. As a workaround, use a secondary network interface connected to your host or switch.
====
+
.. Create the following interface definition files. These files get added to a machine configuration manifest file so that host nodes can access the definition files.

View File

@@ -0,0 +1,18 @@
// Module included in the following assemblies:
//
// * networking/advanced_networking/network-bonding-considerations.adoc
:_mod-docs-content-type: PROCEDURE
[id="enable-active-backup-mode_{context}"]
= Enabling active-backup mode for your cluster
[role="_abstract"]
The `active-backup` mode provides fault tolerance for network connections by switching to a backup link when the primary link fails.
The mode specifies the following ports for your cluster:
* An active port, where one physical interface sends and receives traffic at any given time.
* A standby port, where all other ports act as backup links and continuously monitor their link status.
During a failover process, if an active port or its link fails, the bonding logic switches all network traffic to a standby port. This standby port becomes the new active port. For failover to work, all ports in a bond must share the same Media Access Control (MAC) address.
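For reference, a minimal sketch of `active-backup` mode expressed as `bond=` kernel arguments, reusing the `em1` and `em2` interface names and the DHCP-based `ip=` setting from the bonding examples elsewhere in this documentation; your interface names might differ:
[source,terminal]
----
bond=bond0:em1,em2:mode=active-backup
ip=bond0:dhcp
----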

View File

@@ -74,8 +74,8 @@ endif::[]
[id="installation-network-user-infra_{context}"] [id="installation-network-user-infra_{context}"]
= Networking requirements for user-provisioned infrastructure = Networking requirements for user-provisioned infrastructure
[role="_abstract"]
You must configure networking for all the {op-system-first} machines in `initramfs` during boot, so that they can fetch their Ignition config files.
[IMPORTANT]
====
@@ -94,17 +94,13 @@ During the initial boot, the machines require an IP address configuration that i
[NOTE]
====
* Consider using a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines.
* If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at {op-system} install time. These can be passed as boot arguments if you are installing from an ISO image. See the _Installing {op-system} and starting the {product-title} bootstrap process_ section for more information about static IP provisioning and advanced networking options.
====
endif::ibm-z[]
The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests.
endif::azure,gcp[]
ifndef::ibm-z,azure[]
@@ -119,9 +115,7 @@ endif::ibm-z,azure[]
[id="installation-network-connectivity-user-infra_{context}"] [id="installation-network-connectivity-user-infra_{context}"]
== Network connectivity requirements == Network connectivity requirements
You must configure the network connectivity between machines to allow {product-title} cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster.
This section provides details about the ports that are required.

View File

@@ -45,7 +45,10 @@ endif::[]
[id="installation-user-infra-machines-static-network_{context}"] [id="installation-user-infra-machines-static-network_{context}"]
= Advanced {op-system} installation reference = Advanced {op-system} installation reference
[role="_abstract"]
You can configure networking and other advanced options, so that you can modify the {op-system-first} manual installation process.
The following tables describe the kernel arguments and command-line options you can use with the {op-system} live installer and the `coreos-installer` command.
[id="installation-user-infra-machines-routing-bonding_{context}"] [id="installation-user-infra-machines-routing-bonding_{context}"]
ifndef::ibm-z-kvm[] ifndef::ibm-z-kvm[]
@@ -172,7 +175,6 @@ ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none
ip=::::core0.example.com:enp2s0:none
----
=== Combining DHCP and static IP configurations
You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example:
@@ -183,7 +185,6 @@ ip=enp1s0:dhcp
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none
----
=== Configuring VLANs on individual interfaces
Optional: You can configure VLANs on individual interfaces by using the `vlan=` parameter.
@@ -220,7 +221,9 @@ ifndef::ibm-z-kvm[]
=== Bonding multiple network interfaces to a single interface
As an optional configuration, you can bond multiple network interfaces to a single interface by using the `bond=` option. To apply this configuration to your cluster, complete the procedure steps for each node that runs on your cluster.
.Procedure
* The syntax for configuring a bonded interface is: `bond=<name>[:<network_interfaces>][:options]`
+
@@ -229,33 +232,31 @@ and _options_ is a comma-separated list of bonding options. Enter `modinfo bondi
* When you create a bonded interface using `bond=`, you must specify how the IP address is assigned and other
information for the bonded interface.
+
** To configure the bonded interface to use DHCP, set the bond's IP address to `dhcp`. For example:
+
[source,terminal]
----
bond=bond0:em1,em2:mode=active-backup
ip=bond0:dhcp
----
+
** To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example:
ifndef::ibm-z[]
+
[source,terminal]
----
bond=bond0:em1,em2:mode=active-backup
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none
----
endif::ibm-z[]
ifdef::ibm-z[]
+
[source,terminal]
----
bond=bond0:em1,em2:mode=active-backup
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none::AA:BB:CC:DD:EE:FF
ip=em1:none::AA:BB:CC:DD:EE:FF
ip=em2:none::AA:BB:CC:DD:EE:FF
----
{ibm-z-title} supports the value `1` for the `fail_over_mac` parameter, so always set the `fail_over_mac=1` option in active-backup mode to avoid problems when shared OSA/RoCE cards are used.
endif::ibm-z[]
ifdef::ibm-z[]
@@ -287,9 +288,9 @@ ifndef::ibm-z[]
=== Bonding multiple SR-IOV network interfaces to a dual port NIC interface
As an optional configuration, you can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the `bond=` option.
.Procedure
ifndef::installing-ibm-power[]
. Create the SR-IOV virtual functions (VFs) following the guidance in link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/managing-virtual-devices_configuring-and-managing-virtualization#managing-sr-iov-devices_managing-virtual-devices[Managing SR-IOV devices]. Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section.
@@ -308,12 +309,14 @@ The following examples illustrate the syntax you must use:
* When you create a bonded interface using `bond=`, you must specify how the IP address is assigned and other information for the bonded interface.
** To configure the bonded interface to use DHCP, set the `ip` parameter to `dhcp` as demonstrated in the following example:
+
[source,terminal]
----
bond=bond0:eno1f0,eno2f0:mode=active-backup
ip=bond0:dhcp::AA:BB:CC:DD:EE:FF
ip=eno1f0:none::AA:BB:CC:DD:EE:FF
ip=eno2f0:none::AA:BB:CC:DD:EE:FF
----
** To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example:

View File

@@ -0,0 +1,27 @@
// Module included in the following assemblies:
//
// * networking/advanced_networking/network-bonding-considerations.adoc
:_mod-docs-content-type: CONCEPT
[id="nw-kernel-bonding_{context}"]
= Kernel bonding
[role="_abstract"]
You can use kernel bonding, a built-in Linux kernel function that aggregates many Ethernet interfaces, to create a single logical interface. Combining multiple network interfaces into one logical interface can enhance network performance by increasing bandwidth and provides redundancy in case of a link failure.
Kernel bonding is the default mode when no bond interfaces depend on OVS bonds. This bonding type does not provide the same level of customization as OVS bonding.
For `kernel-bonding` mode, the bond interfaces exist outside of the bridge interface, which means that they are not in the data path of the bridge. Network traffic in this mode is not sent or received on the bond interface port but instead requires additional bridging capabilities for MAC address assignment at the kernel level.
If you enable `kernel-bonding` mode on the network interface controllers (NICs) for your nodes, you must specify a Media Access Control (MAC) address failover. This configuration prevents node communication issues with the bond interfaces, such as `eno1f0` and `eno2f0`.
Red{nbsp}Hat supports only the following value for the `fail_over_mac` parameter:
* `0`: Specifies the `none` value, which disables MAC address failover so that all interfaces receive the same MAC address as the bond interface. This is the default value.
Red{nbsp}Hat does not support the following values for the `fail_over_mac` parameter:
* `1`: Specifies the `active` value, which sets the MAC address of the bond interface to always match the MAC address of the currently active interface. If the active interface changes during a failover, the MAC address of the bond interface changes to match the MAC address of the new active interface.
* `2`: Specifies the `follow` value, so that during a failover the newly active interface takes on the MAC address of the bond interface and the formerly active interface receives the MAC address of the newly active interface.
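As a hedged sketch only, the following kernel arguments show how you might set the supported `fail_over_mac=0` value explicitly alongside a `bond=` definition. The `eno1f0` and `eno2f0` interface names are assumptions borrowed from the examples in this documentation, and because `0` is the default value, omitting the parameter has the same effect.
[source,terminal]
----
bond=bond0:eno1f0,eno2f0:mode=active-backup,fail_over_mac=0
ip=bond0:dhcp
----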

View File

@@ -0,0 +1,21 @@
// Module included in the following assemblies:
//
// * networking/advanced_networking/network-bonding-considerations.adoc
:_mod-docs-content-type: CONCEPT
[id="nw-ovs-bonding_{context}"]
= Open vSwitch (OVS) bonding
[role="_abstract"]
With an OVS bonding configuration, you create a single, logical interface by connecting each physical network interface controller (NIC) as a port to a specific bond. This single bond then handles all network traffic, effectively replacing the function of individual interfaces.
Consider the following architectural layout for OVS bridges that interact with OVS interfaces:
* A network interface uses a bridge Media Access Control (MAC) address for managing protocol-level traffic and other administrative tasks, such as IP address assignment.
* The physical MAC addresses of physical interfaces do not handle traffic.
* OVS handles all MAC address management at the OVS bridge level.
This layout simplifies bond interface management because bonds act as data paths and centralized MAC address management happens at the OVS bridge level.
For OVS bonding, you can select either `active-backup` mode or `balance-slb` mode. A bonding mode specifies the policy for how the bonded interfaces are used during network transmission.
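As an optional, hedged verification sketch, you can inspect the mode and member status of an existing OVS bond directly on a node by using the `ovs-appctl` utility. The bond port name `bond0` is an assumption; substitute the name that your deployment uses, and note that the command requires root access on the node.
[source,terminal]
----
# ovs-appctl bond/show bond0
----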

View File

@@ -9,6 +9,11 @@
[role="_abstract"] [role="_abstract"]
You can create multiple interfaces in the same node network configuration policy. These interfaces can reference each other, allowing you to build and deploy a network configuration by using a single policy manifest. You can create multiple interfaces in the same node network configuration policy. These interfaces can reference each other, allowing you to build and deploy a network configuration by using a single policy manifest.
[IMPORTANT]
====
If multiple interfaces use the same default configuration, a single NetworkManager connection profile activates on multiple interfaces simultaneously, and this causes connections to have the same universally unique identifier (UUID). To avoid this issue, ensure that each interface has a specific configuration that is different from the default configuration.
====
The following example YAML file creates a bond that is named `bond10` across two NICs and a VLAN that is named `bond10.103`, which connects to the bond.
[source,yaml]

View File

@@ -0,0 +1,25 @@
:_mod-docs-content-type: ASSEMBLY
[id="network-bonding-considerations"]
= Network bonding considerations
include::_attributes/common-attributes.adoc[]
:context: network-bonding-considerations
toc::[]
[role="_abstract"]
You can use network bonding, also known as _link aggregation_, to combine many network interfaces into a single, logical interface. You can choose from different modes that determine how network traffic is distributed across the bonded interfaces. Each mode provides fault tolerance, and some modes provide load balancing capabilities for your network. Red{nbsp}Hat supports Open vSwitch (OVS) bonding and kernel bonding.
// Open vSwitch (OVS) bonding
include::modules/nw-ovs-bonding.adoc[leveloffset=+1]
// Enabling active-backup mode for your cluster
include::modules/enable-active-backup-mode.adoc[leveloffset=+2]
// Enabling OVS balance-slb mode for your cluster
include::modules/enabling-OVS-balance-slb-mode.adoc[leveloffset=+2]
// Kernel bonding
include::modules/nw-kernel-bonding.adoc[leveloffset=+1]

View File

@@ -7,6 +7,7 @@ include::_attributes/common-attributes.adoc[]
toc::[]
[role="_abstract"]
The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the {product-title} cluster's nodes with NMState. The Kubernetes NMState Operator provides users with functionality to configure various network interface types, DNS, and routing on cluster nodes. Additionally, the daemons on the cluster nodes periodically report on the state of each node's network interfaces to the API server.
[IMPORTANT]
@@ -38,6 +39,11 @@ Node networking is monitored and updated by the following objects:
`NodeNetworkConfigurationPolicy`:: Describes the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a `NodeNetworkConfigurationPolicy` CR to the cluster.
`NodeNetworkConfigurationEnactment`:: Reports the network policies enacted upon each node.
[NOTE]
====
Do not make configuration changes to the `br-ex` bridge or its underlying interfaces as a postinstallation task.
====
[id="installing-the-kubernetes-nmstate-operator-cli"] [id="installing-the-kubernetes-nmstate-operator-cli"]
== Installing the Kubernetes NMState Operator == Installing the Kubernetes NMState Operator