mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

MNW-2: CQA 2.0

This commit is contained in:
JoeAldinger
2025-12-10 16:01:42 -05:00
committed by openshift-cherrypick-robot
parent 9e5b5f844f
commit 694e8fca2c
22 changed files with 208 additions and 147 deletions


@@ -6,7 +6,8 @@
[id="cudn-status-conditions_{context}"]
= User-defined network status condition types
The following tables explain the status condition types returned for `ClusterUserDefinedNetwork` and `UserDefinedNetwork` CRs when describing the resource. These conditions can be used to troubleshoot your deployment.
[role="_abstract"]
To troubleshoot your network deployment in {product-title}, evaluate the status condition types returned for `ClusterUserDefinedNetwork` and `UserDefinedNetwork` custom resources (CRs). Reviewing these conditions ensures that you can identify and resolve configuration errors.
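To review these conditions from the CLI (a sketch; `<cudn_name>` is a placeholder for your own resource name), you can describe the CR:

[source,terminal]
----
$ oc describe clusteruserdefinednetwork <cudn_name>
----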
.NetworkCreated condition types (`ClusterDefinedNetwork` and `UserDefinedNetwork` CRs)
[cols="2a,2a,3a,6a",options="header"]


@@ -6,9 +6,10 @@
[id="about-cudn_{context}"]
= About the ClusterUserDefinedNetwork CR
The `ClusterUserDefinedNetwork` (UDN) custom resource (CR) provides cluster-scoped network segmentation and isolation for administrators only.
[role="_abstract"]
The `ClusterUserDefinedNetwork` (CUDN) custom resource (CR) provides cluster-scoped network segmentation and isolation in {product-title} for administrators only. Defining this resource ensures that network traffic is securely partitioned across the entire cluster.
The following diagram demonstrates how a cluster administrator can use the `ClusterUserDefinedNetwork` CR to create network isolation between tenants. This network configuration allows a network to span across many namespaces. In the diagram, network isolation is achieved through the creation of two user-defined networks, `udn-1` and `udn-2`. These networks are not connected and the `spec.namespaceSelector.matchLabels` field is used to select different namespaces. For example, `udn-1` configures and isolates communication for `namespace-1` and `namespace-2`, while `udn-2` configures and isolates communication for `namespace-3` and `namespace-4`. Isolated tenants (Tenants 1 and Tenants 2) are created by separating namespaces while also allowing pods in the same namespace to communicate.
The following diagram demonstrates how a cluster administrator can use the CUDN CR to create network isolation between tenants. This network configuration allows a network to span across many namespaces. In the diagram, network isolation is achieved through the creation of two user-defined networks, `udn-1` and `udn-2`. These networks are not connected and the `spec.namespaceSelector.matchLabels` field is used to select different namespaces. For example, `udn-1` configures and isolates communication for `namespace-1` and `namespace-2`, while `udn-2` configures and isolates communication for `namespace-3` and `namespace-4`. Isolated tenants (Tenants 1 and Tenants 2) are created by separating namespaces while also allowing pods in the same namespace to communicate.
.Tenant isolation using a ClusterUserDefinedNetwork CR
image::528-OpenShift-multitenant-0225.png[The tenant isolation concept in a user-defined network (UDN)]
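As an illustrative sketch of the diagram, `udn-1` could select `namespace-1` and `namespace-2` with a `matchLabels` query similar to the following. The label key and values here are assumptions for illustration, not values defined by this documentation:

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: udn-1
spec:
  namespaceSelector:
    matchLabels:
      tenant: tenant-1 # assumed label applied to namespace-1 and namespace-2
  network:
    topology: Layer2
    layer2:
      role: Primary
      subnets:
      - "10.100.0.0/16" # assumed subnet for illustration
----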


@@ -2,11 +2,14 @@
//
// * networking/multiple_networks/primary_networks/about-user-defined-networks.adoc
:_mod-docs-content-type: CONCEPT
:_mod-docs-content-type: REFERENCE
[id="considerations-for-cudn_{context}"]
= Best practices for ClusterUserDefinedNetwork CRs
Before setting up a `ClusterUserDefinedNetwork` custom resource (CR), users should consider the following information:
[role="_abstract"]
To create and deploy a successful instance of the `ClusterUserDefinedNetwork` (CUDN) CR, administrators must follow best practices such as avoiding the `default` and `openshift-*` namespaces, using the proper namespace selector configuration, and ensuring physical network parameter matching.
The following details provide administrators with best practices for designing a CUDN CR:
* A `ClusterUserDefinedNetwork` CR is intended for use by cluster administrators and should not be used by non-administrators. If used incorrectly, it might result in security issues with your deployment, cause disruptions, or break the cluster network.


@@ -6,7 +6,8 @@
[id="nw-cudn-cr-ui_{context}"]
= Creating a ClusterUserDefinedNetwork CR by using the web console
You can create a `ClusterUserDefinedNetwork` custom resource (CR) with a `Layer2` topology in the {product-title} web console.
[role="_abstract"]
To implement isolated network segments with layer 2 connectivity in {product-title}, create a `ClusterUserDefinedNetwork` custom resource (CR) by using the web console. Defining this resource ensures that your cluster workloads can communicate directly at the data link layer.
[NOTE]
====


@@ -6,7 +6,10 @@
[id="nw-cudn-cr_{context}"]
= Creating a ClusterUserDefinedNetwork CR by using the CLI
The following procedure creates a `ClusterUserDefinedNetwork` custom resource (CR) by using the CLI. Based upon your use case, create your request using either the `cluster-layer-two-udn.yaml` example for a `Layer2` topology type or the `cluster-layer-three-udn.yaml` example for a `Layer3` topology type.
[role="_abstract"]
To implement cluster-wide network segmentation and isolation across multiple namespaces, supporting either layer 2 or layer 3 in {product-title}, create a `ClusterUserDefinedNetwork` CR by using the CLI. Defining this resource ensures that network traffic is securely partitioned across the cluster.
Based upon your use case, create your request by using either the `cluster-layer-two-udn.yaml` example for a `Layer2` topology type or the `cluster-layer-three-udn.yaml` example for a `Layer3` topology type.
[IMPORTANT]
====
@@ -43,29 +46,31 @@ EOF
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: <cudn_name> # <1>
  name: <cudn_name>
spec:
  namespaceSelector: # <2>
    matchLabels: # <3>
      "<label_1_key>": "<label_1_value>" # <4>
      "<label_2_key>": "<label_2_value>" # <4>
  network: # <5>
    topology: Layer2 # <6>
    layer2: # <7>
      role: Primary # <8>
  namespaceSelector:
    matchLabels:
      "<label_1_key>": "<label_1_value>"
      "<label_2_key>": "<label_2_value>"
  network:
    topology: Layer2
    layer2:
      role: Primary
      subnets:
      - "2001:db8::/64"
      - "10.100.0.0/16" # <9>
      - "10.100.0.0/16"
----
<1> Name of your `ClusterUserDefinedNetwork` CR.
<2> A label query over the set of namespaces that the CUDN CR applies to. Uses the standard Kubernetes `MatchLabel` selector. Must not point to `default` or `openshift-*` namespaces.
<3> Uses the `matchLabels` selector type, where terms are evaluated with an `AND` relationship.
<4> In this example, the CUDN CR is deployed to namespaces that contain both `<label_1_key>=<label_1_value>` and `<label_2_key>=<label_2_value>` labels.
<5> Describes the network configuration.
<6> The `topology` field describes the network configuration; accepted values are `Layer2` and `Layer3`. Specifying a `Layer2` topology type creates one logical switch that is shared by all nodes.
<7> This field specifies the topology configuration. It can be `layer2` or `layer3`.
<8> Specifies `Primary` or `Secondary`. `Primary` is the only `role` specification supported in {product-version}.
<9> For `Layer2` topology types the following specifies config details for the `subnet` field:
+
where:
`name`:: Specifies the name of your `ClusterUserDefinedNetwork` CR.
`namespaceSelector`:: Specifies a label query over the set of namespaces that the CUDN CR applies to. Uses the standard Kubernetes `MatchLabel` selector. Must not point to `default` or `openshift-*` namespaces.
`matchLabels`:: Uses the `matchLabels` selector type, where terms are evaluated with an `AND` relationship. In this example, the CUDN CR is deployed to namespaces that contain both `<label_1_key>=<label_1_value>` and `<label_2_key>=<label_2_value>` labels.
`network`:: Describes the network configuration.
`topology`:: Describes the network configuration; accepted values are `Layer2` and `Layer3`. Specifying a `Layer2` topology type creates one logical switch that is shared by all nodes.
`layer2`:: Specifies the topology configuration. The field name matches the value of `topology` and can be `layer2` or `layer3`.
`role`:: Specifies `Primary` or `Secondary`. `Primary` is the only `role` specification supported in {product-version}.
`subnets`:: For `Layer2` topology types, specifies the configuration details for the `subnets` field:
+
* The `subnets` field is optional.
* The `subnets` field is of type `string` and accepts standard CIDR formats for both IPv4 and IPv6.
@@ -79,32 +84,31 @@ spec:
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: <cudn_name> # <1>
  name: <cudn_name>
spec:
  namespaceSelector: # <2>
    matchExpressions: # <3>
    - key: kubernetes.io/metadata.name # <4>
      operator: In # <5>
      values: ["<example_namespace_one>", "<example_namespace_two>"] # <6>
  network: # <7>
    topology: Layer3 # <8>
    layer3: # <9>
      role: Primary # <10>
      subnets: # <11>
  namespaceSelector:
    matchExpressions:
    - key: kubernetes.io/metadata.name
      operator: In
      values: ["<example_namespace_one>", "<example_namespace_two>"]
  network:
    topology: Layer3
    layer3:
      role: Primary
      subnets:
      - cidr: 10.100.0.0/16
        hostSubnet: 24
----
<1> Name of your `ClusterUserDefinedNetwork` CR.
<2> A label query over the set of namespaces that the cluster UDN applies to. Uses the standard Kubernetes `MatchLabel` selector. Must not point to `default` or `openshift-*` namespaces.
<3> Uses the `matchExpressions` selector type, where terms are evaluated with an `OR` relationship.
<4> Specifies the label key to match.
<5> Specifies the operator. Valid values include: `In`, `NotIn`, `Exists`, and `DoesNotExist`.
<6> Because the `matchExpressions` type is used, provisions namespaces matching either `<example_namespace_one>` or `<example_namespace_two>`.
<7> Describes the network configuration.
<8> The `topology` field describes the network configuration; accepted values are `Layer2` and `Layer3`. Specifying a `Layer3` topology type creates a layer 2 segment per node, each with a different subnet. Layer 3 routing is used to interconnect node subnets.
<9> This field specifies the topology configuration. Valid values are `layer2` or `layer3`.
<10> Specifies a `Primary` or `Secondary` role. `Primary` is the only `role` specification supported in {product-version}.
<11> For `Layer3` topology types the following specifies config details for the `subnet` field:
+
where:
`name`:: Specifies the name of your `ClusterUserDefinedNetwork` CR.
`namespaceSelector`:: Specifies a label query over the set of namespaces that the CUDN CR applies to. Uses the standard Kubernetes `MatchLabel` selector. Must not point to `default` or `openshift-*` namespaces. Uses the `matchExpressions` selector type, where terms are evaluated with an `OR` relationship.
`key`:: Specifies the label key to match. The `operator` field takes one of the following valid values: `In`, `NotIn`, `Exists`, and `DoesNotExist`. Because the `matchExpressions` type is used, this example provisions namespaces matching either `<example_namespace_one>` or `<example_namespace_two>`.
`network`:: Describes the network configuration.
`topology`:: The `topology` field describes the network configuration; accepted values are `Layer2` and `Layer3`. Specifying a `Layer3` topology type creates a layer 2 segment per node, each with a different subnet. Layer 3 routing is used to interconnect node subnets.
`role`:: Specifies `Primary` or `Secondary`. `Primary` is the only `role` specification supported in {product-version}.
`subnets`:: For `Layer3` topology types, specifies the configuration details for the `subnets` field:
+
* The `subnets` field is mandatory.
* The type for the `subnets` field is `cidr` and `hostSubnet`:


@@ -6,7 +6,8 @@
[id="nw-cudn-localnet_{context}"]
= Creating a ClusterUserDefinedNetwork CR for a Localnet topology
A `Localnet` topology connects the secondary network to the physical underlay. This enables both east-west cluster traffic and access to services running outside the cluster. This topology type requires the additional configuration of the underlying Open vSwitch (OVS) system on cluster nodes.
[role="_abstract"]
You deploy a `Localnet` topology to connect the secondary network to the physical underlay. This enables both east-west cluster traffic and access to services running outside the cluster. This topology type requires the additional configuration of the underlying Open vSwitch (OVS) system on cluster nodes.
.Prerequisites
@@ -25,29 +26,30 @@ A `Localnet` topology connects the secondary network to the physical underlay. T
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: <cudn_name> # <1>
  name: <cudn_name>
spec:
  namespaceSelector: # <2>
    matchLabels: # <3>
      "<label_1_key>": "<label_1_value>" # <4>
      "<label_2_key>": "<label_2_value>" # <4>
  network: # <5>
    topology: Localnet # <6>
    localnet: # <7>
      role: Secondary # <8>
  namespaceSelector:
    matchLabels:
      "<label_1_key>": "<label_1_value>"
      "<label_2_key>": "<label_2_value>"
  network:
    topology: Localnet
    localnet:
      role: Secondary
      physicalNetworkName: test
      ipam: {lifecycle: Persistent}
      subnets: ["192.168.0.0/16", "2001:dbb::/64"] # <9>
      subnets: ["192.168.0.0/16", "2001:dbb::/64"]
----
<1> Name of your `ClusterUserDefinedNetwork` (CUDN) CR.
<2> A label query over the set of namespaces that the cluster CUDN CR applies to. Uses the standard Kubernetes `MatchLabel` selector. Must not point to `default`, `openshift-*`, or any other system namespaces.
<3> Uses the `matchLabels` selector type, where terms are evaluated with an `AND` relationship.
<4> In this example, the CUDN CR is deployed to namespaces that contain both `<label_1_key>=<label_1_value>` and `<label_2_key>=<label_2_value>` labels.
<5> Describes the network configuration.
<6> Specifying a `Localnet` topology type creates one logical switch that is directly bridged to one provider network.
<7> This field specifies the `localnet` topology.
<8> Specifies the `role` for the network configuration. `Secondary` is the only `role` specification supported for the `localnet` topology.
<9> For `Localnet` topology types the following specifies config details for the `subnet` field:
+
where:
`name`:: Specifies the name of your `ClusterUserDefinedNetwork` CR.
`namespaceSelector`:: Specifies a label query over the set of namespaces that the CUDN CR applies to. Uses the standard Kubernetes `MatchLabel` selector. Must not point to `default` or `openshift-*` namespaces.
`matchLabels`:: Uses the `matchLabels` selector type, where terms are evaluated with an `AND` relationship. In this example, the CUDN CR is deployed to namespaces that contain both `<label_1_key>=<label_1_value>` and `<label_2_key>=<label_2_value>` labels.
`network`:: Describes the network configuration.
`topology`:: Specifying a `Localnet` topology type creates one logical switch that is directly bridged to one provider network.
`role`:: Specifies the `role` for the network configuration. `Secondary` is the only `role` specification supported for the `localnet` topology.
`subnets`:: For `Localnet` topology types, specifies the configuration details for the `subnets` field:
+
* The `subnets` field is optional.
* The `subnets` field is of type `string` and accepts standard CIDR formats for both IPv4 and IPv6.


@@ -6,6 +6,9 @@
[id="nw-multus-create-network-apply_{context}"]
= Creating a primary network attachment by applying a YAML manifest
[role="_abstract"]
Create a primary network attachment by directly applying a `NetworkAttachmentDefinition` YAML manifest. This gives you full control over the network configuration without relying on the Cluster Network Operator to manage the resource automatically.
.Prerequisites
* You have installed the {oc-first}.
@@ -27,7 +30,7 @@ spec:
{
  "cniVersion": "0.3.1",
  "name": "work-network",
  "namespace": "namespace2", #<1>
  "namespace": "namespace2",
  "type": "host-device",
  "device": "eth1",
  "ipam": {
@@ -35,7 +38,8 @@ spec:
  }
}
----
<1> Optional: You can specify a namespace to which the NAD is applied. If you are working in the namespace where the NAD is to be deployed, this spec is not necessary.
+
.. Optional: You can specify a namespace to which the NAD is applied. If you are working in the namespace where the NAD is to be deployed, the `namespace` specification is not necessary.
. To create the primary network, enter the following command:
+


@@ -6,7 +6,8 @@
[id="nw-multus-create-network_{context}"]
= Creating a primary network attachment with the Cluster Network Operator
The Cluster Network Operator (CNO) manages additional network definitions. When you specify a primary network to create, the CNO creates the `NetworkAttachmentDefinition` custom resource definition (CRD) automatically.
[role="_abstract"]
When you specify a primary network to create by using the Cluster Network Operator (CNO), the CNO automatically creates and manages the `NetworkAttachmentDefinition` custom resource definition (CRD).
[IMPORTANT]
====


@@ -4,9 +4,10 @@
:_mod-docs-content-type: REFERENCE
[id="nw-nad-cr_{context}"]
== Configuration for a primary network attachment
= Configuration for a primary network attachment
A primary network is configured by using the `NetworkAttachmentDefinition` API in the `k8s.cni.cncf.io` API group.
[role="_abstract"]
You configure a primary network by using the `NetworkAttachmentDefinition` API in the `k8s.cni.cncf.io` API group.
The configuration for the API is described in the following table:


@@ -0,0 +1,26 @@
// Module included in the following assemblies:
//
// * networking/multiple_networks/creating-primary-nad.adoc
:_mod-docs-content-type: REFERENCE
[id="approaches-managing-additional-network_{context}"]
= Approaches to managing a primary network
[role="_abstract"]
You can manage the life cycle of a primary network created by a `NetworkAttachmentDefinition` (NAD) CR through the Cluster Network Operator (CNO) or a YAML manifest. Using the CNO provides automated management of the network resource, while applying a YAML manifest allows for direct control over the network configuration.
Modifying the Cluster Network Operator (CNO) configuration:: With this method, the CNO automatically creates and manages the `NetworkAttachmentDefinition` object. In addition to managing the object lifecycle, the CNO ensures that a DHCP server is available for a primary network that uses a DHCP-assigned IP address.
Applying a YAML manifest:: With this method, you can manage the primary network directly by creating a `NetworkAttachmentDefinition` object. This approach allows for the invocation of multiple CNI plugins to attach primary network interfaces in a pod.
These approaches are mutually exclusive, and you can use only one approach for managing a primary network at a time. For either approach, the primary network is managed by a Container Network Interface (CNI) plugin that you configure.
[NOTE]
====
When deploying {product-title} nodes with multiple network interfaces on {rh-openstack-first} with OVN SDN, DNS configuration of the secondary interface might take precedence over the DNS configuration of the primary interface. In this case, remove the DNS nameservers for the subnet ID that is attached to the secondary interface by running the following command:
[source,terminal]
----
$ openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id>
----
====


@@ -6,7 +6,8 @@
[id="about-udn_{context}"]
= About the UserDefinedNetwork CR
The `UserDefinedNetwork` (UDN) custom resource (CR) provides advanced network segmentation and isolation for users and administrators.
[role="_abstract"]
For advanced network segmentation and isolation, users and administrators create `UserDefinedNetwork` (UDN) custom resources (CRs). UDNs provide granular control over network traffic within specific namespaces.
The following diagram shows four cluster namespaces, where each namespace has a single assigned user-defined network (UDN), and each UDN has an assigned custom subnet for its pod IP allocations. OVN-Kubernetes handles any overlapping UDN subnets. Without using a Kubernetes network policy, a pod attached to a UDN can communicate with other pods in that UDN. By default, these pods are isolated from communicating with pods that exist in other UDNs. For microsegmentation, you can apply network policy within a UDN. You can assign one or more UDNs to a namespace, with a limitation of only one primary UDN to a namespace, and one or more namespaces to a UDN.
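A minimal sketch of one such per-namespace network follows; the resource name, namespace, and subnet are illustrative assumptions, not values from this documentation:

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-1            # assumed name; only one primary UDN per namespace
  namespace: namespace-1 # assumed namespace from the diagram
spec:
  topology: Layer2
  layer2:
    role: Primary
    subnets:
    - "10.20.0.0/24" # assumed custom subnet for pod IP allocations
----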


@@ -6,13 +6,17 @@
[id="nw-udn-additional-config-details_{context}"]
= Additional configuration details for user-defined networks
The following table explains additional configurations for `ClusterUserDefinedNetwork` and `UserDefinedNetwork` custom resources (CRs) that are optional. It is not recommended to set these fields without explicit need and understanding of OVN-Kubernetes network topology.
[role="_abstract"]
Configure optional advanced settings for `ClusterUserDefinedNetwork` and `UserDefinedNetwork` CRs when default values conflict with your network topology or when you need persistent IP addresses, custom gateways, or specific subnet configurations.
It is not recommended to set these fields without explicit need and understanding of OVN-Kubernetes network topology.
.Optional configurations for user-defined networks
[cols="2,2,1,7",options="header"]
|====
|CUDN field|UDN field|Type|Description
|*CUDN field*|*UDN field*|*Type*|*Description*
|`spec.network.<topology>.joinSubnets`
|`spec.<topology>.joinSubnets`
@@ -54,13 +58,9 @@ Setting a value of Persistent is only supported when `ipam.mode` parameter is se
|`spec.network.<topology>.ipam.mode`
|`spec.<topology>.ipam.mode`
|object
|The `mode` parameter controls how much of the IP configuration is managed by OVN-Kubernetes. The following options are available:
**Enabled:** +
When enabled, OVN-Kubernetes applies the IP configuration to the SDN infrastructure and assigns IP addresses from the selected subnet to the individual pods. This is the default setting. When set to `Enabled`, the `subnets` field must be defined. `Enabled` is the default configuration.
**Disabled:** +
When disabled, OVN-Kubernetes only assigns MAC addresses and provides layer 2 communication, which allows users to configure IP addresses. `Disabled` is only available for layer 2 (secondary) networks. By disabling IPAM, features that rely on selecting pods by IP, for example, network policy, services, and so on, no longer function. Additionally, IP port security is also disabled for interfaces attached to this network. The `subnets` field must be empty when `spec.ipam.mode` is set to `Disabled.`
a|The `mode` parameter controls how much of the IP configuration is managed by OVN-Kubernetes. The following options are available:
* `Enabled`: When enabled, OVN-Kubernetes applies the IP configuration to the SDN infrastructure and assigns IP addresses from the selected subnet to the individual pods. When set to `Enabled`, the `subnets` field must be defined. `Enabled` is the default configuration.
* `Disabled`: When disabled, OVN-Kubernetes only assigns MAC addresses and provides layer 2 communication, which allows users to configure IP addresses. `Disabled` is only available for layer 2 (secondary) networks. By disabling IPAM, features that rely on selecting pods by IP, for example, network policy, services, and so on, no longer function. Additionally, IP port security is also disabled for interfaces attached to this network. The `subnets` field must be empty when `spec.ipam.mode` is set to `Disabled`.
|`spec.network.<topology>.mtu`
|`spec.<topology>.mtu`


@@ -5,7 +5,10 @@
[id="nw-udn-benefits_{context}"]
= Benefits of a user-defined network
User-defined networks provide the following benefits:
[role="_abstract"]
User-defined networks enable tenant isolation by providing each namespace with its own isolated primary network, reducing cross-tenant traffic risks and simplifying network management by eliminating the need for complex network policies.
User-defined networks offer the following benefits:
. Enhanced network isolation for security
+


@@ -6,7 +6,10 @@
[id="considerations-for-udn_{context}"]
= Best practices for UserDefinedNetwork CRs
Before setting up a `UserDefinedNetwork` custom resource (CR), you should consider the following information:
[role="_abstract"]
To deploy a successful instance of the `UserDefinedNetwork` (UDN) CR, you must follow masquerade IP address requirements, avoid the `default` and `openshift-*` namespaces, set a proper namespace selector configuration, and ensure physical network parameter matching.
The following details provide best practices for designing a UDN CR:
//These will not go live till 4.18 GA
//* To eliminate errors and ensure connectivity, you should create a namespace scoped UDN CR before creating any workload in the namespace.


@@ -2,11 +2,12 @@
//
// * networking/multiple_networks/primary_networks/about-user-defined-networks.adoc
:_mod-docs-content-type: PROCEDURE
[id="nw-udn-cr-ui_{context}"]
= Creating a UserDefinedNetwork CR by using the web console
You can create a `UserDefinedNetwork` custom resource (CR) with a `Layer2` topology and `Primary` role by using the {product-title} web console.
[role="_abstract"]
To implement isolated network segments with layer 2 connectivity in {product-title}, create a `UserDefinedNetwork` custom resource (CR) by using the web console. Defining this resource ensures that your cluster workloads can communicate directly at the data link layer.
[NOTE]
====


@@ -6,9 +6,12 @@
[id="nw-udn-cr_{context}"]
= Creating a UserDefinedNetwork CR by using the CLI
[role="_abstract"]
Create a `UserDefinedNetwork` CR by using the CLI to enable namespace-scoped network segmentation and isolation, allowing you to define custom Layer 2 or Layer 3 network topologies for pods within specific namespaces.
The following procedure creates a `UserDefinedNetwork` CR that is namespace scoped. Based upon your use case, create your request by using either the `my-layer-two-udn.yaml` example for a `Layer2` topology type or the `my-layer-three-udn.yaml` example for a `Layer3` topology type.
.Perquisites
.Prerequisites
* You have logged in with `cluster-admin` privileges, or you have `view` and `edit` role-based access control (RBAC).
@@ -38,21 +41,23 @@ EOF
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-1 # <1>
  name: udn-1
  namespace: <some_custom_namespace>
spec:
  topology: Layer2 # <2>
  topology: Layer2
  layer2: # <3>
    role: Primary # <4>
    role: Primary
    subnets:
    - "10.0.0.0/24"
    - "2001:db8::/60" # <5>
    - "2001:db8::/60"
----
<1> Name of your `UserDefinedNetwork` resource. This should not be `default` or duplicate any global namespaces created by the Cluster Network Operator (CNO).
<2> The `topology` field describes the network configuration; accepted values are `Layer2` and `Layer3`. Specifying a `Layer2` topology type creates one logical switch that is shared by all nodes.
<3> This field specifies the topology configuration. It can be `layer2` or `layer3`.
<4> Specifies a `Primary` or `Secondary` role.
<5> For `Layer2` topology types the following specifies config details for the `subnet` field:
+
where:
`name`:: Name of your `UserDefinedNetwork` resource. This should not be `default` or duplicate any global namespaces created by the Cluster Network Operator (CNO).
`topology`:: Specifies the network configuration; accepted values are `Layer2` and `Layer3`. Specifying a `Layer2` topology type creates one logical switch that is shared by all nodes.
`role`:: Specifies a `Primary` or `Secondary` role.
`subnets`:: For `Layer2` topology types, specifies the configuration details for the `subnets` field:
+
* The `subnets` field is optional.
* The `subnets` field is of type `string` and accepts standard CIDR formats for both IPv4 and IPv6.
@@ -67,24 +72,26 @@ spec:
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-2-primary # <1>
  name: udn-2-primary
  namespace: <some_custom_namespace>
spec:
  topology: Layer3 # <2>
  layer3: # <3>
    role: Primary # <4>
    subnets: # <5>
  topology: Layer3
  layer3:
    role: Primary
    subnets:
    - cidr: 10.150.0.0/16
      hostSubnet: 24
    - cidr: 2001:db8::/60
      hostSubnet: 64
# ...
----
<1> Name of your `UserDefinedNetwork` resource. This should not be `default` or duplicate any global namespaces created by the Cluster Network Operator (CNO).
<2> The `topology` field describes the network configuration; accepted values are `Layer2` and `Layer3`. Specifying a `Layer3` topology type creates a layer 2 segment per node, each with a different subnet. Layer 3 routing is used to interconnect node subnets.
<3> This field specifies the topology configuration. Valid values are `layer2` or `layer3`.
<4> Specifies a `Primary` or `Secondary` role.
<5> For `Layer3` topology types the following specifies config details for the `subnet` field:
+
where:
`name`:: Name of your `UserDefinedNetwork` resource. This should not be `default` or duplicate any global namespaces created by the Cluster Network Operator (CNO).
`topology`:: Specifies the network configuration; accepted values are `Layer2` and `Layer3`. Specifying a `Layer3` topology type creates a layer 2 segment per node, each with a different subnet. Layer 3 routing is used to interconnect node subnets.
`role`:: Specifies a `Primary` or `Secondary` role.
`subnets`:: For `Layer3` topology types, specifies the configuration details for the `subnets` field:
+
* The `subnets` field is mandatory.
* The type for the `subnets` field is `cidr` and `hostSubnet`:


@@ -5,19 +5,22 @@
[id="nw-udn-l2-l3_{context}"]
= Layer 2 and layer 3 topologies
A flat layer 2 topology creates a virtual switch that is distributed across all nodes in a cluster. Virtual machines and pods connect to this virtual switch so that all these components can communicate with each other within the same subnet. A flat layer 2 topology is useful for live migration of virtual machines across nodes that exist in a cluster. The following diagram shows a flat layer 2 topology with two nodes that use the virtual switch for live migration purposes:
[role="_abstract"]
A layer 2 topology creates a distributed virtual switch across cluster nodes; this network topology provides smooth live migration of virtual machines (VMs) within the same subnet. A layer 3 topology creates unique segments per node with routing between them; this network topology effectively manages large broadcast domains.
In a flat layer 2 topology, virtual machines and pods connect to the virtual switch so that all these components can communicate with each other within the same subnet. This topology is useful for the live migration of VMs across nodes in the cluster. The following diagram shows a flat layer 2 topology with two nodes that use the virtual switch for live migration purposes:
.A flat layer 2 topology that uses a virtual switch for component communication
image::504_OpenShift_UDN_L2_0325.png[A flat layer 2 topology with a virtual switch so that virtual machines in node-1 to node-2 can communicate with each other]
If you decide not to specify a layer 2 subnet, then you must manually configure IP addresses for each pod in your cluster. When you do not specify a layer 2 subnet, port security is limited to preventing Media Access Control (MAC) spoofing only, and does not include IP spoofing. A layer 2 topology creates a single broadcast domain that can be challenging in large network environments, where the topology might cause a broadcast storm that can degrade network performance.
To access more configurable options for your network, you can integrate a layer 2 topology with a user-defined network (UDN). The following diagram shows two nodes that use a UDN with a layer 2 topology that includes pods that exist on each node. Each node includes two interfaces:
* A node interface, which connects networking components to the node.
* An Open vSwitch (OVS) bridge such as `br-ex`, which creates a layer 2 OVN switch so that pods can communicate with each other and share resources.
An external switch connects these two interfaces, while the gateway or router routes traffic between the external switch and the layer 2 OVN switch. VMs and pods in a node can use the UDN to communicate with each other. The layer 2 OVN switch handles node traffic over a UDN so that live migration of a VM from one node to another is possible.
.A user-defined network (UDN) that uses a layer 2 topology
image::503_OpenShift_UDN_L2_0425.png[A UDN that uses a layer 2 topology for migrating a VM from node-1 to node-2]

View File

@@ -6,7 +6,10 @@
[id="limitations-for-udn_{context}"]
= Limitations of a user-defined network
[role="_abstract"]
To deploy a user-defined network (UDN) successfully, you must consider its limitations, including DNS resolution behavior, restricted access to default network services such as the image registry, network policy constraints between isolated networks, and the requirement to create namespaces and networks before pods.
Consider the following limitations before implementing a UDN.
//Check on the removal of the DNS limitation for 4.18 or 4.17.z.
* *DNS limitations*:

View File

@@ -0,0 +1,17 @@
//module included in the following assembly:
//
// *networking/multiple_networks/about-user-defined-networks.adoc
:_mod-docs-content-type: CONCEPT
[id="nw-udn-overview_{context}"]
= Overview of user-defined networks
[role="_abstract"]
To secure and improve network segmentation and isolation, cluster administrators can use the `ClusterUserDefinedNetwork` custom resource (CR) to create primary or secondary networks that span namespaces at the cluster level, while developers can use the `UserDefinedNetwork` CR to define secondary networks at the namespace level.
Before the implementation of user-defined networks (UDN), the OVN-Kubernetes CNI plugin for {product-title} supported only a layer 3 topology on the primary or _main_ network. Due to Kubernetes design principles, all pods are attached to the main network, all pods communicate with each other by their IP addresses, and inter-pod traffic is restricted according to network policy.
While the Kubernetes design is useful for simple deployments, this layer 3 topology restricts customization of primary network segment configurations, especially for modern multi-tenant deployments.
UDN improves the flexibility and segmentation capabilities of the default layer 3 topology for a Kubernetes pod network by enabling custom layer 2 and layer 3 network segments, where all these segments are isolated by default. These segments act as either primary or secondary networks for container pods and virtual machines that use the default OVN-Kubernetes CNI plugin. UDNs enable a wide range of network architectures and topologies, enhancing network flexibility, security, and performance.
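As an illustrative sketch, a namespace-scoped `UserDefinedNetwork` CR that defines a secondary layer 2 segment might look like the following example. The name, namespace, and subnet are placeholder values; verify the exact schema against the CRD installed in your cluster:

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-sample # placeholder name
  namespace: my-namespace # placeholder namespace
spec:
  topology: Layer2
  layer2:
    role: Secondary # attaches as an additional network for pods in this namespace
    subnets:
      - "10.200.0.0/24" # placeholder CIDR range
----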
The following sections further emphasize the benefits and limitations of user-defined networks, the best practices when creating a `ClusterUserDefinedNetwork` or `UserDefinedNetwork` CR, how to create the CR, and additional configuration details that might be relevant to your deployment.

View File

@@ -6,10 +6,11 @@
[id="opening-default-network-ports-udn_{context}"]
= Opening default network ports on user-defined network pods
[role="_abstract"]
To allow default network pods to connect to a user-defined network pod, you can use the `k8s.ovn.org/open-default-ports` annotation. This annotation opens specific ports on the user-defined network pod for access from the default network.
By default, pods on a user-defined network (UDN) are isolated from the default network. This means that default network pods, such as those running monitoring services (Prometheus or Alertmanager) or the {product-title} image registry, cannot initiate connections to UDN pods.
The following pod specification allows incoming TCP connections on port `80` and UDP traffic on port `53` from the default network:
[source,yaml]
----
@@ -24,7 +25,7 @@ metadata:
port: 53
# ...
----
[NOTE]
====
Open ports are accessible on the pod's default network IP, not its UDN network IP.

View File

@@ -6,28 +6,10 @@ include::_attributes/common-attributes.adoc[]
toc::[]
[role="_abstract"]
Use the `NetworkAttachmentDefinition` (NAD) resource to create primary networks when you need to use CNI plugins other than OVN-Kubernetes, such as IPVLAN or MACVLAN, or when you require direct control over the Container Network Interface (CNI) configuration for advanced networking scenarios.
[id="{context}_approaches-managing-additional-network"]
== Approaches to managing a primary network
You can manage the life cycle of a primary network created by NAD with one of the following two approaches:
* By modifying the Cluster Network Operator (CNO) configuration. With this method, the CNO automatically creates and manages the `NetworkAttachmentDefinition` object. In addition to managing the object lifecycle, the CNO ensures that a DHCP server is available for a primary network that uses a DHCP-assigned IP address.
* By applying a YAML manifest. With this method, you can manage the primary network directly by creating a `NetworkAttachmentDefinition` object. This approach allows for the invocation of multiple CNI plugins to attach primary network interfaces in a pod.
These approaches are mutually exclusive, and you can use only one approach to manage a primary network at a time. For either approach, the primary network is managed by a Container Network Interface (CNI) plugin that you configure.
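For the YAML manifest approach, a minimal `NetworkAttachmentDefinition` sketch that invokes the MACVLAN CNI plugin with DHCP-based IPAM might look like the following example. The network name, namespace, and `master` interface are placeholder values that you must adapt to your environment:

[source,yaml]
----
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net # placeholder name
  namespace: my-namespace # placeholder namespace
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "macvlan-net",
      "type": "macvlan",
      "master": "eth1",
      "ipam": {
        "type": "dhcp"
      }
    }
----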
[NOTE]
====
When deploying {product-title} nodes with multiple network interfaces on {rh-openstack-first} with OVN SDN, DNS configuration of the secondary interface might take precedence over the DNS configuration of the primary interface. In this case, remove the DNS nameservers for the subnet ID that is attached to the secondary interface by running the following command:
[source,terminal]
----
$ openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id>
----
====
include::modules/nw-nad-management.adoc[leveloffset=+1]
include::modules/nw-multus-create-network.adoc[leveloffset=+1]

View File

@@ -1,20 +1,16 @@
:_mod-docs-content-type: ASSEMBLY
[id="about-user-defined-networks"]
= About user-defined networks
include::_attributes/common-attributes.adoc[]
:context: user-defined-networks
toc::[]
[role="_abstract"]
User-defined networks (UDNs) extend OVN-Kubernetes to enable custom layer 2 and layer 3 network segments with default isolation, providing enhanced network flexibility, security, and segmentation capabilities for multi-tenant deployments and custom network architectures.
//about UDN
include::modules/nw-udn-overview.adoc[leveloffset=+1]
//benefits of UDN
include::modules/nw-udn-benefits.adoc[leveloffset=+1]