
OSDOCS-16860: CQA 2.0 for Networking Overview and Fundamentals

This commit is contained in:
dfitzmau
2026-01-13 17:14:10 +00:00
committed by openshift-cherrypick-robot
parent 0662c09e1e
commit fe2aa5a4bd
14 changed files with 163 additions and 145 deletions

View File

@@ -6,9 +6,12 @@ include::_attributes/attributes-openshift-dedicated.adoc[]
toc::[]
{openshift-networking} is an ecosystem of features, plugins, and advanced networking capabilities that enhance Kubernetes networking with advanced networking-related features that your cluster needs to manage network traffic for one or multiple hybrid clusters. This ecosystem of networking capabilities integrates ingress, egress, load balancing, high-performance throughput, security, and inter- and intra-cluster traffic management. The {openshift-networking} ecosystem also provides role-based observability tooling to reduce its natural complexities.
[role="_abstract"]
To optimize network traffic management and security across hybrid clusters, configure {openshift-networking}.
The following are some of the most commonly used {openshift-networking} features available on your cluster:
The {openshift-networking} ecosystem of networking capabilities integrates ingress, egress, load balancing, high-performance throughput, security, and inter- and intra-cluster traffic management. The {openshift-networking} ecosystem also provides role-based observability tooling to reduce its natural complexities.
The following list details some of the most commonly used {openshift-networking} features available on your cluster:
* Cluster Network Operator for network plugin management.
@@ -29,7 +32,6 @@ ifdef::openshift-rosa,openshift-dedicated[]
Before upgrading {product-title} clusters that are configured with the OpenShift SDN network plugin to version 4.17, you must migrate to the OVN-Kubernetes network plugin. For more information, see _Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin_.
====
[role="_additional-resources"]
[id="additional-resources_{context}"]
== Additional resources

View File

@@ -6,7 +6,9 @@ include::_attributes/common-attributes.adoc[]
toc::[]
Learn how to create a bastion host to access {product-title} instances and
access the control plane nodes with secure shell (SSH) access.
[role="_abstract"]
To establish secure administrative access to {product-title} instances and control plane nodes, create a bastion host.
Configuring a bastion host provides an entry point for Secure Shell (SSH) traffic, ensuring that your cluster remains protected while allowing for remote management.
include::modules/accessing-hosts-on-aws.adoc[leveloffset=+1]

View File

@@ -9,11 +9,12 @@ endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
toc::[]
If your cluster uses OVN-Kubernetes, you must specify non-overlapping ranges for Classless Inter-Domain Routing (CIDR) subnet ranges.
[role="_abstract"]
To ensure stable and accurate network routing in {product-title} clusters that use OVN-Kubernetes, define non-overlapping Classless Inter-Domain Routing (CIDR) subnet ranges. Establishing unique ranges prevents IP address conflicts so that internal traffic reaches its intended destination without interference.
[IMPORTANT]
====
For {product-title} 4.17 and later versions, clusters use `169.254.0.0/17` for IPv4 and `fd69::/112` for IPv6 as the default masquerade subnet. Users must avoid these ranges. For upgraded clusters, there is no change to the default masquerade subnet.
For {product-title} 4.17 and later versions, clusters use `169.254.0.0/17` for IPv4 and `fd69::/112` for IPv6 as the default masquerade subnet. You must avoid these ranges. For upgraded clusters, there is no change to the default masquerade subnet.
====
[TIP]
@@ -23,7 +24,7 @@ You can use the link:https://access.redhat.com/labs/ocpnc/[Red Hat OpenShift Net
You must have a Red Hat account to use the calculator.
====
The following subnet types and are mandatory for a cluster that uses OVN-Kubernetes:
The following subnet types are mandatory for a cluster that uses OVN-Kubernetes:
* Join: Uses a join switch to connect gateway routers to distributed routers. A join switch reduces the number of IP addresses for a distributed router. For a cluster that uses the OVN-Kubernetes plugin, an IP address from a dedicated subnet is assigned to any logical port that attaches to the join switch.
* Masquerade: Prevents collisions for identical source and destination IP addresses that are sent from a node as hairpin traffic to the same node after a load balancer makes a routing decision.
@@ -46,7 +47,6 @@ OVN-Kubernetes, the default network provider in {product-title} 4.14 and later v
* `V6TransitSwitchSubnet`: `fd97::/64`
* `defaultV4MasqueradeSubnet`: `169.254.0.0/17`
* `defaultV6MasqueradeSubnet`: `fd69::/112`
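If these defaults overlap with existing infrastructure, you can override the internal subnets through the Cluster Network Operator. The following is a minimal sketch of a `Network` custom resource that sets custom IPv4 join and transit switch subnets; the field names follow the _Configuring OVN-Kubernetes internal IP address subnets_ topic referenced on this page, the `100.65.0.0/16` and `100.89.0.0/16` values are illustrative only, and you should verify both against your cluster version before applying any change:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      ipv4:
        internalJoinSubnet: 100.65.0.0/16          # illustrative replacement for the default join subnet
        internalTransitSwitchSubnet: 100.89.0.0/16 # illustrative replacement for the default transit switch subnet
----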
// TODO OSDOCS-11830 validate for HCP clusters
[IMPORTANT]
====
@@ -57,33 +57,10 @@ ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
[role="_additional-resources"]
.Additional resources
* For more information about configuring join subnets or transit subnets, see xref:../../networking/ovn_kubernetes_network_provider/configure-ovn-kubernetes-subnets.adoc#configure-ovn-kubernetes-subnets[Configuring OVN-Kubernetes internal IP address subnets].
* xref:../../networking/ovn_kubernetes_network_provider/configure-ovn-kubernetes-subnets.adoc#configure-ovn-kubernetes-subnets[Configuring OVN-Kubernetes internal IP address subnets]
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
[id="machine-cidr-description"]
== Machine CIDR
In the Machine classless inter-domain routing (CIDR) field, you must specify the IP address range for machines or cluster nodes.
[NOTE]
====
You cannot change the Machine CIDR range after you create your cluster.
====
ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
This range must encompass all CIDR address ranges for your virtual private cloud (VPC) subnets. Subnets must be contiguous. A minimum IP address range of 128 addresses, using the subnet prefix `/25`, is supported for single availability zone deployments. A minimum address range of 256 addresses, using the subnet prefix `/24`, is supported for deployments that use multiple availability zones.
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
//TODO OSDOCS-11830 does this mean that machine CIDR can only be in /25 and /24?
The default is `10.0.0.0/16`. This range must not conflict with any connected networks.
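As a minimal sketch for self-managed installations that define networking in an `install-config.yaml` file, the Machine CIDR corresponds to the `networking.machineNetwork` stanza; managed offerings set the equivalent value at cluster creation, and the range shown here is only the documented default:

[source,yaml]
----
networking:
  machineNetwork:
  - cidr: 10.0.0.0/16 # must contain all VPC subnet CIDR ranges and must not conflict with connected networks
----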
ifdef::openshift-rosa-hcp[]
[NOTE]
====
When using {product-title}, the static IP address `172.20.0.1` is reserved for the internal Kubernetes API address. The machine, pod, and service CIDR ranges must not conflict with this IP address.
====
endif::openshift-rosa-hcp[]
include::modules/machine-cidr-description.adoc[leveloffset=+1]
ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
[role="_additional-resources"]
@@ -92,55 +69,18 @@ ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
* xref:../../networking/networking_operators/cluster-network-operator.adoc#nw-operator-cr_cluster-network-operator[Cluster Network Operator configuration]
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
[id="service-cidr-description"]
== Service CIDR
In the Service CIDR field, you must specify the IP address range for services.
ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
It is recommended, but not required, that the address block is the same between clusters. Using the same address block across clusters does not create IP address conflicts.
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
The range must be large enough to accommodate your workload. The address block must not overlap with any external service accessed from within the cluster. The default is `172.30.0.0/16`.
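For installations that define networking in an `install-config.yaml` file, the Service CIDR maps to the `networking.serviceNetwork` stanza. The following minimal sketch uses the documented default range; managed offerings expose the same value at cluster creation:

[source,yaml]
----
networking:
  serviceNetwork:
  - 172.30.0.0/16 # must not overlap with any external service that is accessed from within the cluster
----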
include::modules/service-cidr-description.adoc[leveloffset=+1]
[id="pod-cidr-description"]
== Pod CIDR
In the pod CIDR field, you must specify the IP address range for pods.
include::modules/pod-cidr-description.adoc[leveloffset=+1]
ifdef::openshift-enterprise[]
The pod CIDR is the same as the `clusterNetwork` CIDR and the cluster CIDR.
endif::openshift-enterprise[]
ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
It is recommended, but not required, that the address block is the same between clusters. Using the same address block across clusters does not create IP address conflicts.
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
The range must be large enough to accommodate your workload. The address block must not overlap with any external service accessed from within the cluster. The default is `10.128.0.0/14`.
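In an `install-config.yaml` file, the pod CIDR is the `cidr` value of a `networking.clusterNetwork` entry, as in the following minimal sketch that uses the documented default; the accompanying `hostPrefix` value is described later on this page:

[source,yaml]
----
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14 # pod IP address pool for the whole cluster
    hostPrefix: 23      # per-node slice, described in the host prefix section
----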
ifdef::openshift-enterprise[]
You can expand the range after cluster installation.
[role="_additional-resources"]
.Additional resources
* xref:../../networking/networking_operators/cluster-network-operator.adoc#nw-operator-cr_cluster-network-operator[Cluster Network Operator configuration]
* xref:../../networking/configuring_network_settings/configuring-cluster-network-range.adoc#configuring-cluster-network-range[Configuring the cluster network range]
endif::openshift-enterprise[]
[id="host-prefix-description"]
== Host prefix
In the `hostPrefix` parameter, you must specify the subnet prefix length assigned to pods scheduled to individual machines. The host prefix determines the pod IP address pool for each machine.
ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
For example, if you set the `hostPrefix` parameter to `/23`, each machine is assigned a `/23` subnet from the pod CIDR address range. The default is `/23`, allowing 512 cluster nodes and 512 pods per node, both of which exceed the supported maximums.
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
ifdef::openshift-enterprise,openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
For example, if the host prefix is set to `/23`, each machine is assigned a `/23` subnet from the pod CIDR address range. The default is `/23`, allowing 510 cluster nodes, and 510 pod IP addresses per node.
Consider another example in which you set the `clusterNetwork.cidr` parameter to `10.128.0.0/16` to define the complete address space for the cluster. This assigns a pool of 65536 IP addresses to your cluster. If you then set the `hostPrefix` parameter to `/23`, each node in the cluster receives a `/23` slice of that `/16` network. This assigns 512 IP addresses to each node, 2 of which are reserved as the network and broadcast addresses. The following example calculation uses these figures to determine the maximum number of nodes that you can create for your cluster:
[source,text]
----
65536 / 512 = 128
----
You can use the link:https://access.redhat.com/labs/ocpnc/[Red Hat OpenShift Network Calculator] to calculate the maximum number of nodes for your cluster.
endif::openshift-enterprise,openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
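As a worked sketch of the preceding calculation, the following `install-config.yaml` fragment pairs a `/16` cluster network with a `/23` host prefix. The comments restate the arithmetic from this section and are not additional configuration:

[source,yaml]
----
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/16 # 65536 pod IP addresses for the entire cluster
    hostPrefix: 23      # each node receives a /23 slice: 512 addresses, 510 usable by pods
# maximum number of nodes: 65536 / 512 = 128
----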
include::modules/host-prefix-description.adoc[leveloffset=+1]
// CIDR ranges for HCP
ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]

View File

@@ -6,28 +6,29 @@ include::_attributes/common-attributes.adoc[]
toc::[]
Networking metrics are viewable in dashboards within the {product-title} web console, under *Observe* -> *Dashboards*.
[role="_abstract"]
To monitor and analyze network performance within your cluster, view networking metrics in the {product-title} web console. By accessing these dashboards through *Observe* -> *Dashboards*, you can identify traffic patterns and troubleshoot connectivity issues to ensure consistent workload availability.
Network Observability Operator::
[id="network-observability-operator-operator-dashboards"]
== Network Observability Operator
If you have the Network Observability Operator installed, you can view network traffic metrics dashboards by selecting the *Netobserv* dashboard from the *Dashboards* drop-down list. For more information about metrics available in this *Dashboard*, see xref:../../observability/network_observability/metrics-alerts-dashboards.adoc#network-observability-viewing-dashboards_metrics-dashboards-alerts[Network Observability metrics dashboards].
[id="general-networking-ovnk-dashboards"]
== Networking and OVN-Kubernetes dashboard
You can view both general networking metrics as well as OVN-Kubernetes metrics from the dashboard.
Networking and OVN-Kubernetes dashboard::
You can view both general networking metrics and OVN-Kubernetes metrics from the dashboard.
+
To view general networking metrics, select *Networking/Linux Subsystem Stats* from the *Dashboards* drop-down list. You can view the following networking metrics from the dashboard: *Network Utilisation*, *Network Saturation*, and *Network Errors*.
+
To view OVN-Kubernetes metrics, select *Networking/Infrastructure* from the *Dashboards* drop-down list. You can view the following OVN-Kubernetes metrics: *Networking Configuration*, *TCP Latency Probes*, *Control Plane Resources*, and *Worker Resources*.
Ingress Operator dashboard::
[id="ingress-dashboards"]
== Ingress Operator dashboard
You can view networking metrics handled by the Ingress Operator from the dashboard. This includes metrics like the following:
+
* Incoming and outgoing bandwidth
* HTTP error rates
* HTTP server response latency
+
To view these Ingress metrics, select *Networking/Ingress* from the *Dashboards* drop-down list. You can view Ingress metrics for the following categories: *Top 10 Per Route*, *Top 10 Per Namespace*, and *Top 10 Per Shard*.

View File

@@ -6,7 +6,8 @@ include::_attributes/common-attributes.adoc[]
toc::[]
Understanding networking is essential for building resilient, secure, and scalable applications in {product-title}. From basic pod-to-pod communication to complex traffic routing and security rules, every component of your application relies on the network to function correctly.
[role="_abstract"]
To build resilient and secure applications in {product-title}, configure the networking infrastructure for your cluster. Defining reliable pod-to-pod communication and traffic routing rules ensures that every application component functions correctly within the environment.
include::modules/nw-understanding-networking-core-layers-and-components.adoc[leveloffset=+1]