Mirror of https://github.com/openshift/openshift-docs.git (synced 2026-02-05 12:46:18 +01:00)

Commit fe2aa5a4bd (parent 0662c09e1e), committed by openshift-cherrypick-robot

OSDOCS-16860: CQA 2.0 for Networking Overview and Fundamentals
@@ -6,46 +6,31 @@

[id="accessing-hosts-on-aws_{context}"]
= Accessing hosts on Amazon Web Services in an installer-provisioned infrastructure cluster

[role="_abstract"]
To establish Secure Shell (SSH) access to {product-title} hosts on Amazon EC2 instances that lack public IP addresses, configure a bastion host or secure gateway. Defining this access path ensures that you can safely manage and troubleshoot your private infrastructure within an installer-provisioned environment.

.Procedure

. Create a security group that allows SSH access into the virtual private cloud (VPC) that the `openshift-install` command-line interface creates.

. Create an Amazon EC2 instance on one of the public subnets that the installation program created.

. Associate a public IP address with the Amazon EC2 instance that you created.
+
Unlike with the {product-title} installation, associate the Amazon EC2 instance that you created with an SSH keypair. The operating system selection is not important for this instance, because the instance serves as an SSH bastion to bridge the internet into the VPC of your {product-title} cluster. The Amazon Machine Image (AMI) you use does matter. With {op-system-first}, for example, you can provide keys through Ignition by using a similar method to the installation program.

. After you provision your Amazon EC2 instance and can SSH into the instance, add the SSH key that you associated with your {product-title} installation. This key can be different from the key for the bastion instance, but this is not a strict requirement.
+
[NOTE]
====
Use direct SSH access only for disaster recovery. When the Kubernetes API is responsive, run privileged pods instead.
====

. Run `oc get nodes`, inspect the output, and choose one of the nodes that is a control plane node. The hostname looks similar to `ip-10-0-1-163.ec2.internal`.

. From the bastion SSH host that you manually deployed into Amazon EC2, SSH into that control plane host by entering the following command. Ensure that you use the same SSH key that you specified during installation:
+
[source,terminal]
----
$ ssh -i <ssh-key-path> core@<control_plane_hostname>
----

@@ -1,16 +1,17 @@

// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-prepare/hcp-requirements.adoc
// * /networking/networking_overview/cidr-range-definitions.adoc

:_mod-docs-content-type: REFERENCE
[id="hcp-cidr-ranges_{context}"]
= CIDR ranges for {hcp}

[role="_abstract"]
To successfully deploy {hcp} on {product-title}, define the network environment by using specific Classless Inter-Domain Routing (CIDR) subnet ranges. Establishing these nonoverlapping ranges ensures reliable communication between cluster components and prevents internal IP address conflicts.

For deploying {hcp} on {product-title}, use the following required CIDR subnet ranges:

* `v4InternalSubnet`: 100.65.0.0/16 (OVN-Kubernetes)
* `clusterNetwork`: 10.132.0.0/14 (pod network)
* `serviceNetwork`: 172.31.0.0/16
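
The following sketch is for illustration only and shows where the `clusterNetwork` and `serviceNetwork` ranges could appear in the `spec.networking` stanza of a `HostedCluster` resource. The resource name and namespace are hypothetical, and the `v4InternalSubnet` value is typically configured in the OVN-Kubernetes network settings rather than in this stanza.

[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: example         # hypothetical hosted cluster name
  namespace: clusters   # hypothetical namespace
spec:
  networking:
    clusterNetwork:
    - cidr: 10.132.0.0/14   # pod network
    serviceNetwork:
    - cidr: 172.31.0.0/16
----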

For more information about {product-title} CIDR range definitions, see "CIDR range definitions".

modules/host-prefix-description.adoc (new file, 27 lines)
@@ -0,0 +1,27 @@

// Module included in the following assemblies:
//
// * /networking/networking_overview/cidr-range-definitions.adoc

:_mod-docs-content-type: CONCEPT
[id="host-prefix-description_{context}"]
= Host prefix

[role="_abstract"]
To allocate a dedicated pool of IP addresses for pods on each node in {product-title}, specify the subnet prefix length in the `hostPrefix` parameter. Defining an appropriate prefix ensures that every machine has sufficient unique addresses to support its scheduled workloads without exhausting the cluster's network resources.

ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
For example, if you set the `hostPrefix` parameter to `/23`, each machine is assigned a `/23` subnet from the pod CIDR address range. The default is `/23`, allowing 512 cluster nodes and 512 pods per node, both of which are beyond the maximum supported.
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]

ifdef::openshift-enterprise,openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
For example, if the host prefix is set to `/23`, each machine is assigned a `/23` subnet from the pod CIDR address range. The default is `/23`, allowing 510 cluster nodes and 510 pod IP addresses per node.

Consider another example where you set the `clusterNetwork.cidr` parameter to `10.128.0.0/16` to define the complete address space for the cluster. This assigns a pool of 65,536 IP addresses to your cluster. If you then set the `hostPrefix` parameter to `/23`, you assign a subnet slice to each node in the cluster, where each `/23` slice becomes a subnet of the `/16` network. This assigns 512 IP addresses to each node, where 2 IP addresses are reserved for networking and broadcasting purposes. The following example calculation uses these IP address figures to determine the maximum number of nodes that you can create for your cluster:

[source,text]
----
65536 / 512 = 128
----

You can use the link:https://access.redhat.com/labs/ocpnc/[Red Hat OpenShift Network Calculator] to calculate the maximum number of nodes for your cluster.
endif::openshift-enterprise,openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
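
As a minimal sketch that mirrors the preceding calculation, you set the host prefix together with the pod CIDR in the `clusterNetwork` entry of the cluster network configuration, for example in an `install-config.yaml` file:

[source,yaml]
----
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/16  # complete pod address space for the cluster
    hostPrefix: 23       # each node receives a /23 slice (512 addresses)
----
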
modules/machine-cidr-description.adoc (new file, 28 lines)
@@ -0,0 +1,28 @@

// Module included in the following assemblies:
//
// * /networking/networking_overview/cidr-range-definitions.adoc

:_mod-docs-content-type: CONCEPT
[id="machine-cidr-description_{context}"]
= Machine CIDR

[role="_abstract"]
To establish the network scope for cluster nodes in {product-title}, specify an IP address range in the Machine Classless Inter-Domain Routing (CIDR) parameter. Defining this range ensures that all machines within the environment have valid, routable addresses for internal cluster communication.

[NOTE]
====
You cannot change Machine CIDR ranges after you create your cluster.
====

ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
This range must encompass all CIDR address ranges for your virtual private cloud (VPC) subnets. Subnets must be contiguous. A minimum range of 128 IP addresses, using the subnet prefix `/25`, is supported for single availability zone deployments. A minimum range of 256 IP addresses, using the subnet prefix `/24`, is supported for deployments that use multiple availability zones.
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]

The default is `10.0.0.0/16`. This range must not conflict with any connected networks.

ifdef::openshift-rosa-hcp[]
[NOTE]
====
When using {product-title}, the static IP address `172.20.0.1` is reserved for the internal Kubernetes API address. The machine, pod, and service CIDR ranges must not conflict with this IP address.
====
endif::openshift-rosa-hcp[]
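
As a minimal sketch, the Machine CIDR corresponds to the `machineNetwork` entry in the networking configuration, for example in an `install-config.yaml` file. The value shown is the default:

[source,yaml]
----
networking:
  machineNetwork:
  - cidr: 10.0.0.0/16  # must cover all VPC subnet ranges and not conflict with connected networks
----
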
@@ -6,18 +6,17 @@

[id="nw-understanding-networking-core-layers-and-components_{context}"]
= Core network layers and components

[role="_abstract"]
To build and expose resilient applications in {product-title}, configure the pod and service network layers. Defining these foundational layers ensures that your application workloads have a secure environment to run and remain reliably accessible to other services.

The pod network::
The pod network is a flat network space where every pod in the cluster receives its own unique IP address. This network is managed by the Container Network Interface (CNI) plugin, which is responsible for wiring each pod into the cluster network.
+
This design allows pods to communicate directly with each other by using their IP addresses, regardless of which node they are running on. However, these pod IP addresses are ephemeral: the IP address is destroyed when the pod is destroyed, and a new IP address is assigned when a new pod is created. Because of this, never rely on pod IP addresses directly for long-lived communication.

The service network::
A service is a networking object that provides a single, stable virtual IP address, called a ClusterIP, and a DNS name for a logical group of pods.
+
When a request is sent to the ClusterIP of the service, {product-title} automatically load balances the traffic to one of the healthy pods backing that service. {product-title} uses Kubernetes labels and selectors to keep track of which pods belong to which service. This abstraction makes your applications resilient because individual pods can be created or destroyed without affecting the applications trying to reach them.
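+
For illustration, the following sketch shows a minimal `Service` object that groups pods by label and exposes them behind a stable ClusterIP and DNS name. The name, label, and ports are hypothetical:
+
[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: example-app      # hypothetical service name
spec:
  selector:
    app: example-app     # pods with this label back the service
  ports:
  - protocol: TCP
    port: 8080           # stable port on the ClusterIP
    targetPort: 8080     # container port on the backing pods
----
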
@@ -6,31 +6,27 @@

[id="nw-understanding-networking-managing-traffic-entering-leaving_{context}"]
= Managing traffic entering and leaving the cluster

[role="_abstract"]
To enable external access and securely manage traffic flow into and out of your {product-title} cluster, configure ingress and egress mechanisms. Establishing these traffic rules ensures that external users can reach your applications reliably while maintaining secure communication with external services.

Exposing applications with Ingress and Route objects::
To allow external traffic to reach services inside your cluster, you use an Ingress Controller. The Ingress Controller acts as the front door that directs incoming requests to the correct application. You define the traffic rules by using one of two primary resources:
+
* Ingress: The standard Kubernetes resource for managing external access to services, typically for HTTP and HTTPS traffic.
* `Route` object: A resource that provides the same functionality as Ingress but includes additional features, such as more advanced TLS termination options and traffic splitting. `Route` objects are specific to {product-title}.
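+
As a minimal sketch, assuming a Service named `example-app` already exists, a `Route` object that exposes it with edge TLS termination might look like the following. The names are hypothetical:
+
[source,yaml]
----
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-route    # hypothetical route name
spec:
  to:
    kind: Service
    name: example-app    # the Service that receives the traffic
  tls:
    termination: edge    # terminate TLS at the Ingress Controller
----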

Distributing traffic with load balancers::
A load balancer provides a single, highly available IP address for directing traffic to your cluster. A load balancer typically runs outside the cluster, on a cloud provider or by using MetalLB on bare-metal infrastructure, and distributes incoming requests across multiple nodes that are running the Ingress Controller. This prevents any single node from becoming a bottleneck or a point of failure and ensures that your applications remain accessible.

Controlling egress traffic::
Egress refers to outbound traffic that originates from a pod inside the cluster and is destined for an external system. {product-title} provides several mechanisms to manage this traffic:
+
* EgressIP: You can assign a specific, predictable source IP address to all outbound traffic from a given project. Consider this configuration when you need to access an external service, such as a database, that is protected by a firewall that allows only specific source IP addresses. A minimal example follows this list.
* Egress Router: This is a dedicated pod that acts as a gateway for outbound traffic. By using an Egress Router, you can route connections through a single, controlled exit point.
* Egress Firewall: This acts as a cluster-level firewall for all outbound traffic. The Egress Firewall enhances your security posture by letting you create rules that explicitly allow or deny connections from pods to specific external destinations.
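+
The following is a minimal sketch of an OVN-Kubernetes `EgressIP` object, as referenced in the EgressIP item in this list. It assumes that the target namespaces carry the label `env: production`; the name, label, and IP address are illustrative:
+
[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip-example      # hypothetical name
spec:
  egressIPs:
  - 192.0.2.10                # example source IP from a documentation range
  namespaceSelector:
    matchLabels:
      env: production         # apply to namespaces with this label
----
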
@@ -6,16 +6,15 @@

[id="nw-understanding-networking-managing-traffic-within_{context}"]
= Managing traffic within the cluster

[role="_abstract"]
To ensure reliable communication between applications in {product-title}, configure pod-to-pod traffic and service discovery mechanisms. Implementing these mechanisms allows cluster workloads to exchange data efficiently through either direct connections or robust discovery rules.

Pod-to-pod communication::
Pods communicate directly by using the unique IP addresses assigned by the pod network. A pod on one node can send traffic directly to a pod on another node without any network address translation (NAT). This direct communication model is efficient for services that need to exchange data quickly. Applications can simply target the IP address of another pod to establish a connection.

Service discovery with DNS::
Pods need a reliable way to find each other because pod IP addresses are ephemeral. {product-title} uses `CoreDNS`, a built-in DNS server, to provide this service discovery.
+
Every service you create automatically receives a stable DNS name. A pod can use this DNS name to connect to the service. The DNS system resolves the name to the stable `ClusterIP` address of the service. This process ensures reliable communication even when individual pod IP addresses change.
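+
As an illustrative sketch only, the following pod reaches a hypothetical Service named `example-app` in the `demo` namespace through its stable DNS name rather than a pod IP address. The image and names are placeholders:
+
[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: example-client
spec:
  containers:
  - name: client
    image: registry.access.redhat.com/ubi9/ubi-minimal  # example image
    command: ["sleep", "infinity"]
    env:
    - name: BACKEND_URL
      value: http://example-app.demo.svc.cluster.local:8080  # <service>.<namespace>.svc.cluster.local
----
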
modules/pod-cidr-description.adoc (new file, 21 lines)
@@ -0,0 +1,21 @@

// Module included in the following assemblies:
//
// * /networking/networking_overview/cidr-range-definitions.adoc

:_mod-docs-content-type: CONCEPT
[id="pod-cidr-description_{context}"]
= Pod CIDR

[role="_abstract"]
To allocate internal network addresses for cluster workloads in {product-title}, specify an IP address range in the pod Classless Inter-Domain Routing (CIDR) field. Defining this range ensures that pods can communicate with each other reliably without overlapping with the node or service networks.

ifdef::openshift-enterprise[]
The pod CIDR is the same as the `clusterNetwork` CIDR and the cluster CIDR.
endif::openshift-enterprise[]
ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
Red{nbsp}Hat recommends, but does not require, that the address block is the same between clusters. Using the same address block does not create IP address conflicts.
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
The range must be large enough to accommodate your workload. The address block must not overlap with any external service accessed from within the cluster. The default is `10.128.0.0/14`.
ifdef::openshift-enterprise[]
You can expand the range after cluster installation.
endif::openshift-enterprise[]
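
As a minimal sketch, the pod CIDR corresponds to the `clusterNetwork` entry in the networking configuration, for example in an `install-config.yaml` file. The value shown is the default:

[source,yaml]
----
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14  # default pod CIDR; must not overlap with other networks
    hostPrefix: 23
----
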
modules/service-cidr-description.adoc (new file, 16 lines)
@@ -0,0 +1,16 @@

// Module included in the following assemblies:
//
// * /networking/networking_overview/cidr-range-definitions.adoc

:_mod-docs-content-type: CONCEPT
[id="service-cidr-description_{context}"]
= Service CIDR

[role="_abstract"]
To allocate IP addresses for cluster services in {product-title}, specify an IP address range in the Service Classless Inter-Domain Routing (CIDR) parameter. Defining this range ensures that internal services have a dedicated block of addresses for reliable communication without overlapping with node or pod networks.

ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
Red{nbsp}Hat recommends, but does not require, that the address block is the same between clusters. Using the same address block does not create IP address conflicts.
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]

The range must be large enough to accommodate your workload. The address block must not overlap with any external service accessed from within the cluster. The default is `172.30.0.0/16`.
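
As a minimal sketch, the Service CIDR corresponds to the `serviceNetwork` entry in the networking configuration, for example in an `install-config.yaml` file. The value shown is the default:

[source,yaml]
----
networking:
  serviceNetwork:
  - 172.30.0.0/16  # default service CIDR
----
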
@@ -6,9 +6,12 @@ include::_attributes/attributes-openshift-dedicated.adoc[]

toc::[]

[role="_abstract"]
To optimize network traffic management and security across hybrid clusters, configure {openshift-networking}.

The {openshift-networking} ecosystem of networking capabilities integrates ingress, egress, load balancing, high-performance throughput, security, and inter- and intra-cluster traffic management. The {openshift-networking} ecosystem also provides role-based observability tooling to reduce its natural complexities.

The following list details some of the most commonly used {openshift-networking} features available on your cluster:

* Cluster Network Operator for network plugin management.

@@ -29,7 +32,6 @@ ifdef::openshift-rosa,openshift-dedicated[]

Before upgrading {product-title} clusters that are configured with the OpenShift SDN network plugin to version 4.17, you must migrate to the OVN-Kubernetes network plugin. For more information, see _Migrating from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin_.
====

[role="_additional-resources"]
[id="additional-resources_{context}"]
== Additional resources

@@ -6,7 +6,9 @@ include::_attributes/common-attributes.adoc[]

toc::[]

[role="_abstract"]
To establish secure administrative access to {product-title} instances and control plane nodes, create a bastion host.

Configuring a bastion host provides an entry point for Secure Shell (SSH) traffic, ensuring that your cluster remains protected while allowing for remote management.

include::modules/accessing-hosts-on-aws.adoc[leveloffset=+1]

@@ -9,11 +9,12 @@ endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]

toc::[]

[role="_abstract"]
To ensure stable and accurate network routing in {product-title} clusters that use OVN-Kubernetes, define non-overlapping Classless Inter-Domain Routing (CIDR) subnet ranges. Establishing unique ranges prevents IP address conflicts so that internal traffic reaches its intended destination without interference.

[IMPORTANT]
====
For {product-title} 4.17 and later versions, clusters use `169.254.0.0/17` for IPv4 and `fd69::/112` for IPv6 as the default masquerade subnet. You must avoid these ranges. For upgraded clusters, there is no change to the default masquerade subnet.
====

[TIP]

@@ -23,7 +24,7 @@ You can use the link:https://access.redhat.com/labs/ocpnc/[Red Hat OpenShift Net

You must have a Red Hat account to use the calculator.
====

The following subnet types are mandatory for a cluster that uses OVN-Kubernetes:

* Join: Uses a join switch to connect gateway routers to distributed routers. A join switch reduces the number of IP addresses for a distributed router. For a cluster that uses the OVN-Kubernetes plugin, an IP address from a dedicated subnet is assigned to any logical port that attaches to the join switch.
* Masquerade: Prevents collisions for identical source and destination IP addresses that are sent from a node as hairpin traffic to the same node after a load balancer makes a routing decision.

@@ -46,7 +47,6 @@ OVN-Kubernetes, the default network provider in {product-title} 4.14 and later v

* `V6TransitSwitchSubnet`: `fd97::/64`
* `defaultV4MasqueradeSubnet`: `169.254.0.0/17`
* `defaultV6MasqueradeSubnet`: `fd69::/112`

[IMPORTANT]
====

@@ -57,33 +57,10 @@ ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]

[role="_additional-resources"]
.Additional resources

* xref:../../networking/ovn_kubernetes_network_provider/configure-ovn-kubernetes-subnets.adoc#configure-ovn-kubernetes-subnets[Configuring OVN-Kubernetes internal IP address subnets]
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]

include::modules/machine-cidr-description.adoc[leveloffset=+1]

ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
[role="_additional-resources"]

@@ -92,55 +69,18 @@ ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]

* xref:../../networking/networking_operators/cluster-network-operator.adoc#nw-operator-cr_cluster-network-operator[Cluster Network Operator configuration]
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]

include::modules/service-cidr-description.adoc[leveloffset=+1]

include::modules/pod-cidr-description.adoc[leveloffset=+1]

ifdef::openshift-enterprise[]
[role="_additional-resources"]
.Additional resources

* xref:../../networking/networking_operators/cluster-network-operator.adoc#nw-operator-cr_cluster-network-operator[Cluster Network Operator configuration]
* xref:../../networking/configuring_network_settings/configuring-cluster-network-range.adoc#configuring-cluster-network-range[Configuring the cluster network range]
endif::openshift-enterprise[]

include::modules/host-prefix-description.adoc[leveloffset=+1]

// CIDR ranges for HCP
ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]

@@ -6,28 +6,29 @@ include::_attributes/common-attributes.adoc[]

toc::[]

[role="_abstract"]
To monitor and analyze network performance within your cluster, view networking metrics in the {product-title} web console. By accessing these dashboards through *Observe* -> *Dashboards*, you can identify traffic patterns and troubleshoot connectivity issues to ensure consistent workload availability.

Network Observability Operator::
If you have the Network Observability Operator installed, you can view network traffic metrics dashboards by selecting the *Netobserv* dashboard from the *Dashboards* drop-down list. For more information about the metrics available in this dashboard, see xref:../../observability/network_observability/metrics-alerts-dashboards.adoc#network-observability-viewing-dashboards_metrics-dashboards-alerts[Network Observability metrics dashboards].

Networking and OVN-Kubernetes dashboard::
You can view both general networking metrics and OVN-Kubernetes metrics from the dashboard.
+
To view general networking metrics, select *Networking/Linux Subsystem Stats* from the *Dashboards* drop-down list. You can view the following networking metrics from the dashboard: *Network Utilisation*, *Network Saturation*, and *Network Errors*.
+
To view OVN-Kubernetes metrics, select *Networking/Infrastructure* from the *Dashboards* drop-down list. You can view the following OVN-Kubernetes metrics: *Networking Configuration*, *TCP Latency Probes*, *Control Plane Resources*, and *Worker Resources*.

Ingress Operator dashboard::
You can view networking metrics handled by the Ingress Operator from the dashboard. This includes metrics such as the following:
+
* Incoming and outgoing bandwidth
* HTTP error rates
* HTTP server response latency
+
To view these Ingress metrics, select *Networking/Ingress* from the *Dashboards* drop-down list. You can view Ingress metrics for the following categories: *Top 10 Per Route*, *Top 10 Per Namespace*, and *Top 10 Per Shard*.

@@ -6,7 +6,8 @@ include::_attributes/common-attributes.adoc[]

toc::[]

[role="_abstract"]
To build resilient and secure applications in {product-title}, configure the networking infrastructure for your cluster. Defining reliable pod-to-pod communication and traffic routing rules ensures that every application component functions correctly within the environment.

include::modules/nw-understanding-networking-core-layers-and-components.adoc[leveloffset=+1]