// Module included in the following assemblies:
//
// * osd_architecture/osd_policy/osd-service-definition.adoc

:_mod-docs-content-type: CONCEPT
[id="sdpolicy-networking_{context}"]
= Networking

[role="_abstract"]
Review the networking aspects of {product-title}.

[id="custom-domains_{context}"]
== Custom domains for applications

[WARNING]
====
Starting with {product-title} 4.14, the Custom Domain Operator is deprecated. To manage Ingress in {product-title} 4.14 or later, use the Ingress Operator. The functionality is unchanged for {product-title} 4.13 and earlier versions.
====

To use a custom hostname for a route, you must update your DNS provider by creating a canonical name (CNAME) record. Your CNAME record should map your custom domain to the OpenShift canonical router hostname. The OpenShift canonical router hostname is shown on the *Route Details* page after a route is created. Alternatively, a wildcard CNAME record can be created once to route all subdomains for a given hostname to the cluster's router.
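
For example, after creating the CNAME record, you can confirm that your custom domain resolves to the canonical router hostname. The hostname `www.example.com` is a placeholder; use your own custom domain:

[source,terminal]
----
$ dig +short CNAME www.example.com
----

The command should return the OpenShift canonical router hostname shown on the *Route Details* page for your cluster.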

[id="custom-domains-cluster_{context}"]
== Custom domains for cluster services
Custom domains and subdomains are not available for the platform service routes, for example, the API or web console routes, or for the default application routes.

[id="domain-validated-certificates_{context}"]
== Domain validated certificates
{product-title} includes TLS security certificates needed for both internal and external services on the cluster. For external routes, there are two separate TLS wildcard certificates that are provided and installed on each cluster: one for the web console and route default hostnames, and the second for the API endpoint. _Let’s Encrypt_ is the certificate authority used for these certificates. Routes within the cluster, for example, the internal link:https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod[API endpoint], use TLS certificates signed by the cluster's built-in certificate authority and require the CA bundle available in every pod for trusting the TLS certificate.
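
Because the CA bundle is mounted into every pod, a pod can verify internal TLS certificates directly. The following check is an illustration only; it assumes a running pod whose image includes `curl`, and `<pod_name>` is a placeholder for that pod:

[source,terminal]
----
$ oc rsh <pod_name> curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt https://kubernetes.default.svc/version
----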

[id="custom-certificate-authorities_{context}"]
== Custom certificate authorities for builds
{product-title} supports the use of custom certificate authorities that are trusted by builds when pulling images from an image registry.
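
One way to configure this, shown here only as an illustration with placeholder values (`registry-cas`, `registry.example.com`, and `/tmp/ca.crt`), is to store the certificate authority in a config map in the `openshift-config` namespace and reference it from the cluster image configuration so that builds trust that registry:

[source,terminal]
----
$ oc create configmap registry-cas -n openshift-config --from-file=registry.example.com=/tmp/ca.crt
$ oc patch image.config.openshift.io/cluster --type=merge -p '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}'
----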

[id="load-balancers_{context}"]
== Load balancers
{product-title} uses up to 5 different load balancers:

* Internal control plane load balancer that is internal to the cluster and used to balance traffic for internal cluster communications.
* External control plane load balancer that is used for accessing the {OCP} and Kubernetes APIs. This load balancer can be disabled in {cluster-manager-first}. If this load balancer is disabled, Red{nbsp}Hat reconfigures the API DNS to point to the internal control plane load balancer.
* External control plane load balancer for Red{nbsp}Hat that is reserved for cluster management by Red{nbsp}Hat. Access is strictly controlled, and communication is only possible from allowlisted bastion hosts.
* Default router/ingress load balancer that is the default application load balancer, denoted by `apps` in the URL. The default load balancer can be configured in {cluster-manager} to be either publicly accessible over the internet or only privately accessible over a pre-existing private connection. All application routes on the cluster are exposed on this default router load balancer, including cluster services such as the logging UI, metrics API, and registry.
* Optional: Secondary router/ingress load balancer that is a secondary application load balancer, denoted by `apps2` in the URL. The secondary load balancer can be configured in {cluster-manager} to be either publicly accessible over the internet or only privately accessible over a pre-existing private connection. If a *Label match* is configured for this router load balancer, then only application routes matching this label are exposed on it; otherwise, all application routes are also exposed on it.
* Optional: Load balancers for services that can be mapped to a service running on {product-title} to enable advanced ingress features, such as non-HTTP/SNI traffic or the use of non-standard ports, as illustrated in the example after this list. These can be purchased in groups of 4 for non-CCS clusters, or they can be provisioned through the cloud provider console in Customer Cloud Subscription (CCS) clusters; however, each AWS account has a quota that link:https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-limits.html[limits the number of Classic Load Balancers] that can be used within each cluster.
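
As an illustration of the last, optional load balancer type, a service load balancer corresponds to a Kubernetes `Service` of type `LoadBalancer`. The following example uses placeholder values (`tcp-echo`, `my-project`, and the port numbers) to expose a non-HTTP TCP service on a non-standard port:

[source,terminal]
----
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: tcp-echo
  namespace: my-project
spec:
  type: LoadBalancer
  selector:
    app: tcp-echo
  ports:
  - protocol: TCP
    port: 5555
    targetPort: 8080
EOF
----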

[id="network-usage_{context}"]
== Network usage
For non-CCS {product-title} clusters, network usage is measured based on data transfer between inbound, VPC peering, VPN, and AZ traffic. On a non-CCS {product-title} base cluster, 12 TB of network I/O is provided. Additional network I/O can be purchased in 12 TB increments. For CCS {product-title} clusters, network usage is not monitored and is billed directly by the cloud provider.

[id="cluster-ingress_{context}"]
== Cluster ingress
Project administrators can add route annotations for many different purposes, including ingress control through IP allowlisting.
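
For example, IP allowlisting is available through the `haproxy.router.openshift.io/ip_whitelist` route annotation. The route name and CIDR ranges below are placeholder values:

[source,terminal]
----
$ oc annotate route my-route haproxy.router.openshift.io/ip_whitelist="192.168.1.0/24 10.0.0.1"
----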

Ingress policies can also be changed by using `NetworkPolicy` objects, which leverage the `ovs-networkpolicy` plugin. This allows for full control over the ingress network policy down to the pod level, including between pods on the same cluster and even in the same namespace.
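
For example, the following `NetworkPolicy` object, with placeholder name and namespace values, restricts ingress so that only pods in the same namespace can reach the pods in that namespace:

[source,terminal]
----
$ cat <<EOF | oc apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: my-project
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
EOF
----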

All cluster ingress traffic goes through the defined load balancers. Direct access to all nodes is blocked by cloud configuration.

[id="cluster-egress_{context}"]
== Cluster egress
Pod egress traffic control through `EgressNetworkPolicy` objects can be used to prevent or limit outbound traffic in {product-title}.
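
For example, an `EgressNetworkPolicy` object contains an ordered list of allow and deny rules. The following illustration uses placeholder values for the namespace, DNS name, and CIDR; it allows egress to a single external host and denies all other outbound traffic from the project:

[source,terminal]
----
$ cat <<EOF | oc apply -f -
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
  namespace: my-project
spec:
  egress:
  - type: Allow
    to:
      dnsName: registry.example.com
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
EOF
----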

Public outbound traffic from the control plane and infrastructure nodes is required to maintain cluster image security and cluster monitoring. This requires the `0.0.0.0/0` route to belong only to the internet gateway; it is not possible to route this range over private connections.

{product-title} clusters use NAT Gateways to present a public, static IP for any public outbound traffic leaving the cluster. Each subnet a cluster is deployed into receives a distinct NAT Gateway. For clusters deployed on AWS with multiple availability zones, up to 3 unique static IP addresses can exist for cluster egress traffic. For clusters deployed on {gcp-full}, regardless of availability zone topology, there will be 1 static IP address for worker node egress traffic. Any traffic that remains inside the cluster or does not go out to the public internet will not pass through the NAT Gateway and will have a source IP address belonging to the node that the traffic originated from. Node IP addresses are dynamic, and therefore a customer should not rely on allowlisting individual IP addresses when accessing private resources.

Customers can determine their public static IP addresses by running a pod on the cluster and then querying an external service. For example:

[source,terminal]
----
$ oc run ip-lookup --image=busybox -i -t --restart=Never --rm -- /bin/sh -c "/bin/nslookup -type=a myip.opendns.com resolver1.opendns.com | grep -E 'Address: [0-9.]+'"
----

[id="cloud-network-configuration_{context}"]
== Cloud network configuration
{product-title} allows for the configuration of a private network connection through several cloud provider managed technologies:

* VPN connections
* AWS VPC peering
* AWS Transit Gateway
* AWS Direct Connect
* {gcp-full} VPC Network peering
* {gcp-full} Classic VPN
* {gcp-full} HA VPN

[IMPORTANT]
====
Red{nbsp}Hat SREs do not monitor private network connections. Monitoring these connections is the responsibility of the customer.
====

[id="dns-forwarding_{context}"]
== DNS forwarding
For {product-title} clusters that have a private cloud network configuration, a customer can specify internal DNS servers available on that private connection that should be queried for explicitly provided domains.
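
As an illustration, DNS forwarding maps one or more domains to internal resolvers in the cluster DNS Operator configuration. The zone `example.corp` and the upstream address `10.0.0.10` below are placeholder values, and the server name is arbitrary:

[source,terminal]
----
$ oc patch dns.operator/default --type=merge -p '{"spec":{"servers":[{"name":"example-forwarder","zones":["example.corp"],"forwardPlugin":{"upstreams":["10.0.0.10"]}}]}}'
----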

[id="osd-network-verification_{context}"]
== Network verification

Network verification checks run automatically when you deploy an {product-title} cluster into an existing Virtual Private Cloud (VPC) or create an additional machine pool with a subnet that is new to your cluster. The checks validate your network configuration and highlight errors, enabling you to resolve configuration issues prior to deployment.

You can also run the network verification checks manually to validate the configuration for an existing cluster.