// Module included in the following assemblies:
//
// * rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc
// * rosa_architecture/rosa_policy_service_definition/rosa-hcp-service-definition.adoc

ifeval::["{context}" == "rosa-hcp-service-definition"]
:rosa-with-hcp:
endif::[]

[id="rosa-sdpolicy-networking_{context}"]
= Networking

This section provides information about the service definition for {product-title} networking.

[id="rosa-sdpolicy-custom-domains_{context}"]
== Custom domains for applications

[WARNING]
====
Starting with {product-title} 4.14, the Custom Domain Operator is deprecated. To manage Ingress in {product-title} 4.14 or later, use the Ingress Operator. The functionality is unchanged for {product-title} 4.13 and earlier versions.
====
To use a custom hostname for a route, you must update your DNS provider by creating a canonical name (CNAME) record. Your CNAME record should map the OpenShift canonical router hostname to your custom domain. The OpenShift canonical router hostname is shown on the _Route Details_ page after a route is created. Alternatively, you can create a single wildcard CNAME record that routes all subdomains for a given hostname to the cluster's router.
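
For example, a CNAME record in a DNS provider's zone file might look like the following. The custom domain and the canonical router hostname shown here are hypothetical placeholders; substitute your own values:

[source,text]
----
; Map a single custom hostname to the cluster's canonical router hostname
app.example.com.     CNAME   router-default.apps.mycluster.abc1.p1.openshiftapps.com.

; Alternatively, a wildcard record routes all subdomains to the cluster's router
*.apps.example.com.  CNAME   router-default.apps.mycluster.abc1.p1.openshiftapps.com.
----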

[id="rosa-sdpolicy-validated-certificates_{context}"]
== Domain validated certificates
{product-title} includes TLS security certificates needed for both internal and external services on the cluster. For external routes, two separate TLS wildcard certificates are provided and installed on each cluster: one for the web console and route default hostnames, and one for the API endpoint. Let’s Encrypt is the certificate authority used for certificates. Routes within the cluster, such as the internal link:https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod[API endpoint], use TLS certificates signed by the cluster's built-in certificate authority and require the CA bundle available in every pod for trusting the TLS certificate.
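
As a quick way to inspect the wildcard certificate that the default router serves, you can use `openssl` from any machine with network access to the cluster. The hostname below is a hypothetical placeholder for a route on your cluster:

[source,terminal]
----
$ echo | openssl s_client -connect console-openshift-console.apps.mycluster.abc1.p1.openshiftapps.com:443 2>/dev/null | openssl x509 -noout -subject -issuer -dates
----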

[id="rosa-sdpolicy-custom-certificates_{context}"]
== Custom certificate authorities for builds
{product-title} supports the use of custom certificate authorities to be trusted by builds when pulling images from an image registry.
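
One way to supply such a certificate authority is through the cluster image configuration: store the CA in a config map in the `openshift-config` namespace, keyed by the registry hostname, and reference it from the cluster `image.config.openshift.io` resource. This is a minimal sketch; the registry hostname, config map name, and file path are placeholders:

[source,terminal]
----
$ oc create configmap registry-cas -n openshift-config \
    --from-file=registry.example.com=/path/to/ca.crt

$ oc patch image.config.openshift.io/cluster --type=merge \
    -p '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}'
----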

[id="rosa-sdpolicy-load-balancers_{context}"]
== Load balancers

ifdef::rosa-with-hcp[]
{hcp-title-first} only deploys load balancers from the default ingress controller. All other load balancers can be optionally deployed by a customer for secondary ingress controllers or Service load balancers.
endif::rosa-with-hcp[]
ifndef::rosa-with-hcp[]
{product-title} uses up to five different load balancers:

- An internal control plane load balancer that is internal to the cluster and used to balance traffic for internal cluster communications.
- An external control plane load balancer that is used for accessing the OpenShift and Kubernetes APIs. This load balancer can be disabled in {cluster-manager}. If this load balancer is disabled, Red Hat reconfigures the API DNS to point to the internal control plane load balancer.
- An external control plane load balancer for Red Hat that is reserved for cluster management by Red Hat. Access is strictly controlled, and communication is only possible from allowlisted bastion hosts.
- A default external router/ingress load balancer that is the default application load balancer, denoted by `apps` in the URL. The default load balancer can be configured in {cluster-manager} to be either publicly accessible over the Internet or only privately accessible over a pre-existing private connection. All application routes on the cluster are exposed on this default router load balancer, including cluster services such as the logging UI, metrics API, and registry.
- Optional: A secondary router/ingress load balancer that is a secondary application load balancer, denoted by `apps2` in the URL. The secondary load balancer can be configured in {cluster-manager} to be either publicly accessible over the Internet or only privately accessible over a pre-existing private connection. If a `Label match` is configured for this router load balancer, then only application routes matching this label are exposed on this router load balancer; otherwise, all application routes are also exposed on this router load balancer.
- Optional: Load balancers for services, which can be mapped to a service running on {product-title} to enable advanced ingress features, such as non-HTTP/SNI traffic or the use of non-standard ports, as shown in the sketch after this list. Each AWS account has a quota that link:https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-limits.html[limits the number of Classic Load Balancers] that can be used within each cluster.
endif::rosa-with-hcp[]
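
A Service load balancer is requested through a standard Kubernetes `Service` object of type `LoadBalancer`. The following is a minimal sketch; the workload name, namespace, labels, and port are placeholders:

[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: tcp-echo-lb
  namespace: my-project
spec:
  type: LoadBalancer # requests an AWS load balancer for this service
  selector:
    app: tcp-echo # must match the labels on the backing pods
  ports:
  - protocol: TCP
    port: 1025 # non-standard port exposed by the load balancer
    targetPort: 1025
----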
[id="rosa-sdpolicy-cluster-ingress_{context}"]
|
||
== Cluster ingress
|
||
Project administrators can add route annotations for many different purposes, including ingress control through IP allow-listing.
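
One such annotation is the HAProxy IP allow-list, which restricts a route to a set of source addresses and CIDR ranges. The route name and addresses here are placeholders:

[source,terminal]
----
$ oc annotate route my-route \
    haproxy.router.openshift.io/ip_whitelist="192.168.1.10 10.0.0.0/8"
----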

Ingress policies can also be changed by using `NetworkPolicy` objects, which leverage the `ovs-networkpolicy` plugin. This allows for full control over the ingress network policy down to the pod level, including between pods on the same cluster and even in the same namespace.
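
The following is a minimal sketch of such a policy. It assumes a hypothetical `web` workload and allows ingress to those pods only from pods in the same namespace that carry the `app=frontend` label:

[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-web
  namespace: my-project
spec:
  podSelector:
    matchLabels:
      app: web # the policy applies to pods labeled app=web
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend # only pods labeled app=frontend may connect
----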

All cluster ingress traffic goes through the defined load balancers. Direct access to all nodes is blocked by cloud configuration.

[id="rosa-sdpolicy-cluster-egress_{context}"]
== Cluster egress
Pod egress traffic control through `EgressNetworkPolicy` objects can be used to prevent or limit outbound traffic in
ifdef::rosa-with-hcp[]
{hcp-title-first}.
endif::rosa-with-hcp[]
ifndef::rosa-with-hcp[]
{product-title}.
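
The following is a minimal sketch of such a policy; the domain and CIDR values are placeholders. Rules are evaluated in order, so traffic to the named host is allowed while all other external traffic is denied:

[source,yaml]
----
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
  namespace: my-project
spec:
  egress:
  - type: Allow
    to:
      dnsName: www.example.com # allow egress to this external host
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0 # deny all other external egress
----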

Public outbound traffic from the control plane and infrastructure nodes is required to maintain cluster image security and cluster monitoring. This requires that the `0.0.0.0/0` route belongs only to the Internet gateway; it is not possible to route this range over private connections.

OpenShift 4 clusters use NAT gateways to present a public, static IP for any public outbound traffic leaving the cluster. Each availability zone a cluster is deployed into receives a distinct NAT gateway; therefore, up to three unique static IP addresses can exist for cluster egress traffic. Any traffic that remains inside the cluster, or that does not go out to the public Internet, does not pass through the NAT gateway and has a source IP address belonging to the node that the traffic originated from. Node IP addresses are dynamic; therefore, a customer must not rely on allowlisting individual IP addresses when accessing private resources.

Customers can determine their public static IP addresses by running a pod on the cluster and then querying an external service. For example:

[source,terminal]
----
$ oc run ip-lookup --image=busybox -i -t --restart=Never --rm -- /bin/sh -c "/bin/nslookup -type=a myip.opendns.com resolver1.opendns.com | grep -E 'Address: [0-9.]+'"
----
endif::rosa-with-hcp[]

[id="rosa-sdpolicy-cloud-network-config_{context}"]
== Cloud network configuration
{product-title} allows for the configuration of a private network connection through AWS-managed technologies, such as:

- VPN connections
- VPC peering
- Transit Gateway
- Direct Connect

[IMPORTANT]
====
Red Hat site reliability engineers (SREs) do not monitor private network connections. Monitoring these connections is the responsibility of the customer.
====

[id="rosa-sdpolicy-dns-forwarding_{context}"]
== DNS forwarding
For {product-title} clusters that have a private cloud network configuration, a customer can specify internal DNS servers available on that private connection that should be queried for explicitly provided domains.
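
The underlying mechanism is the cluster DNS Operator's forwarding configuration. The following sketch shows its general shape with placeholder zone and upstream values; on a managed cluster, this configuration is applied through the provided management tooling rather than edited directly:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  servers:
  - name: internal-dns
    zones:
    - internal.example.com # queries for this domain ...
    forwardPlugin:
      upstreams:
      - 10.0.0.10 # ... are forwarded to this internal DNS server
----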

[id="rosa-sdpolicy-network-verification_{context}"]
== Network verification

Network verification checks run automatically when you deploy a {product-title} cluster into an existing Virtual Private Cloud (VPC) or create an additional machine pool with a subnet that is new to your cluster. The checks validate your network configuration and highlight errors, enabling you to resolve configuration issues prior to deployment.

You can also run the network verification checks manually to validate the configuration for an existing cluster.
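
For example, the manual checks can be run from the `rosa` CLI; the cluster name is a placeholder:

[source,terminal]
----
$ rosa verify network --cluster mycluster
----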

ifeval::["{context}" == "rosa-hcp-service-definition"]
:!rosa-with-hcp:
endif::[]