Merge pull request #104695 from openshift-cherrypick-robot/cherry-pick-104519-to-enterprise-4.21
[enterprise-4.21] OSDOCS-16862-2: CQA2.0 of CORE-3: Ingress Controllers and Load Balancing
@@ -2,10 +2,11 @@
//
// * networking/load-balancing-openstack.adoc

:_mod-docs-content-type: PROCEDURE
:_mod-docs-content-type: CONCEPT
[id="installation-osp-api-octavia_{context}"]
= Scaling clusters for application traffic by using Octavia

{product-title} clusters that run on {rh-openstack-first} can use the Octavia load balancing service to distribute traffic across multiple virtual machines (VMs) or floating IP addresses. This feature mitigates the bottleneck that single machines or addresses create.
[role="_abstract"]
To distribute traffic across multiple virtual machines (VMs), configure your cluster that runs on {rh-openstack-first} to use the Octavia load balancing service. By using this feature, you can mitigate the bottleneck that single machines or addresses create.

You must create your own Octavia load balancer to use it for application network scaling.
@@ -6,7 +6,8 @@
[id="installation-osp-api-scaling_{context}"]
= Scaling clusters by using Octavia

If you want to use multiple API load balancers, create an Octavia load balancer and then configure your cluster to use it.
[role="_abstract"]
To ensure high availability and distribute traffic across multiple cluster API access points in {product-title} on {rh-openstack}, create an Octavia load balancer. Configuring your cluster to use multiple balancers prevents network bottlenecks and ensures continuous access to your API services.

.Prerequisites

@@ -14,7 +15,7 @@ If you want to use multiple API load balancers, create an Octavia load balancer

.Procedure

. From a command line, create an Octavia load balancer that uses the Amphora driver:
. From the command-line interface (CLI), create an Octavia load balancer that uses the Amphora driver:
+
[source,terminal]
----
@@ -35,14 +36,14 @@ $ openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol
To view the status of the load balancer, enter `openstack loadbalancer list`.
====

. Create a pool that uses the round robin algorithm and has session persistence enabled:
. Create a pool that uses the round-robin algorithm and has session persistence enabled:
+
[source,terminal]
----
$ openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=SOURCE_IP --listener API_OCP_CLUSTER_6443 --protocol HTTPS
----

. To ensure that control plane machines are available, create a health monitor:
. To ensure that control-plane machines are available, create a health monitor:
+
[source,terminal]
----
@@ -72,5 +73,5 @@ $ openstack floating ip unset $API_FIP
----
$ openstack floating ip set --port $(openstack loadbalancer show -c vip_port_id -f value API_OCP_CLUSTER) $API_FIP
----

+
Your cluster now uses Octavia for load balancing.
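The health monitor command body falls outside the hunks captured above. A minimal sketch of such a command, assuming TCP checks against the API pool created in the previous step (the timing values here are assumptions, not values from this commit):

[source,terminal]
----
$ openstack loadbalancer healthmonitor create --delay 10 --max-retries 4 --timeout 5 --type TCP API_OCP_CLUSTER_pool_6443
----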
@@ -6,7 +6,10 @@
[id="nw-ingress-controller-endpoint-publishing-strategies_{context}"]
= Ingress Controller endpoint publishing strategy

*`NodePortService` endpoint publishing strategy*
[role="_abstract"]
To expose Ingress Controller endpoints to external networks in {product-title}, configure either the `NodePortService` endpoint publishing strategy or the `HostNetwork` endpoint publishing strategy.

`NodePortService` endpoint publishing strategy::

The `NodePortService` endpoint publishing strategy publishes the Ingress Controller using a Kubernetes NodePort service.

@@ -29,7 +32,7 @@ By default, ports are allocated automatically and you can access the port alloca

For more information, see the link:https://kubernetes.io/docs/concepts/services-networking/service/#nodeport[Kubernetes Services documentation on `NodePort`].

*`HostNetwork` endpoint publishing strategy*
`HostNetwork` endpoint publishing strategy::

The `HostNetwork` endpoint publishing strategy publishes the Ingress Controller on node ports where the Ingress Controller is deployed.
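For reference, both strategies are set through the same field on the `IngressController` spec, as the YAML examples later in this commit show. A minimal sketch (the controller name `example` is a hypothetical value):

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: example  # hypothetical name
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: HostNetwork  # or NodePortService
----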
@@ -6,11 +6,14 @@
[id="nw-ingress-controller-nodeportservice-projects_{context}"]
= Adding a single NodePort service to an Ingress Controller

Instead of creating a `NodePort`-type `Service` for each project, you can create a custom Ingress Controller to use the `NodePortService` endpoint publishing strategy. To prevent port conflicts, consider this configuration for your Ingress Controller when you want to apply a set of routes, through Ingress sharding, to nodes that might already have a `HostNetwork` Ingress Controller.
[role="_abstract"]
To prevent port conflicts, instead of creating a `NodePort`-type `Service` for each project, create a custom Ingress Controller that can use the `NodePortService` endpoint publishing strategy.

Consider this configuration for your Ingress Controller when you want to apply a set of routes, through Ingress sharding, to nodes that might already have a `HostNetwork` Ingress Controller.

Before you set a `NodePort`-type `Service` for each project, read the following considerations:

* You must create a wildcard DNS record for the Nodeport Ingress Controller domain. A Nodeport Ingress Controller route can be reached from the address of a worker node. For more information about the required DNS records for routes, see "User-provisioned DNS requirements".
* You must create a wildcard DNS record for the `NodePort` Ingress Controller domain. A `NodePort` Ingress Controller route can be reached from the address of a worker node. For more information about the required DNS records for routes, see "User-provisioned DNS requirements".
* You must expose a route for your service and specify the `--hostname` argument for your custom Ingress Controller domain.
* You must append the port that is assigned to the `NodePort`-type `Service` in the route so that you can access application pods.
@@ -33,34 +36,38 @@ items:
- apiVersion: operator.openshift.io/v1
  kind: IngressController
  metadata:
    name: <custom_ic_name> <1>
    name: <custom_ic_name>
    namespace: openshift-ingress-operator
  spec:
    replicas: 1
    domain: <custom_ic_domain_name> <2>
    domain: <custom_ic_domain_name>
    nodePlacement:
      nodeSelector:
        matchLabels:
          <key>: <value> <3>
          <key>: <value>
    namespaceSelector:
      matchLabels:
        <key>: <value> <4>
        <key>: <value>
    endpointPublishingStrategy:
      type: NodePortService
# ...
----
<1> Specify the a custom `name` for the `IngressController` CR.
<2> The DNS name that the Ingress Controller services. As an example, the default ingresscontroller domain is `apps.ipi-cluster.example.com`, so you would specify the `<custom_ic_domain_name>` as `nodeportsvc.ipi-cluster.example.com`.
<3> Specify the label for the nodes that include the custom Ingress Controller.
<4> Specify the label for a set of namespaces. Substitute `<key>:<value>` with a map of key-value pairs where `<key>` is a unique name for the new label and `<value>` is its value. For example: `ingresscontroller: custom-ic`.
+
where:
+
`metadata.name`:: Specifies a custom `name` for the `IngressController` CR.
`spec.domain`:: Specifies the DNS name that the Ingress Controller services. For example, the default ingresscontroller domain is `apps.ipi-cluster.example.com`, so you would specify the `<custom_ic_domain_name>` as `nodeportsvc.ipi-cluster.example.com`.
`nodeSelector.matchLabels.<key>`:: Specifies the label for the nodes that include the custom Ingress Controller.
`namespaceSelector.matchLabels.<key>`:: Specifies the label for a set of namespaces. Substitute `<key>:<value>` with a map of key-value pairs where `<key>` is a unique name for the new label and `<value>` is its value. For example: `ingresscontroller: custom-ic`.
. Add a label to a node by using the `oc label node` command:
+
[source,terminal]
----
$ oc label node <node_name> <key>=<value> <1>
$ oc label node <node_name> <key>=<value>
----
<1> Where `<value>` must match the key-value pair specified in the `nodePlacement` section of your `IngressController` CR.
+
* `<key>=<value>`: Where `<value>` must match the key-value pair specified in the `nodePlacement` section of your `IngressController` CR.

. Create the `IngressController` object:
+
@@ -95,23 +102,25 @@ $ oc new-project <project_name>
+
[source,terminal]
----
$ oc label namespace <project_name> <key>=<value> <1>
$ oc label namespace <project_name> <key>=<value>
----
<1> Where `<key>=<value>` must match the value in the `namespaceSelector` section of your Ingress Controller CR.
+
* `<key>=<value>`:: Where `<key>=<value>` must match the value in the `namespaceSelector` section of your Ingress Controller CR.

. Create a new application in your cluster:
+
[source,terminal]
----
$ oc new-app --image=<image_name> <1>
$ oc new-app --image=<image_name>
----
<1> An example of `<image_name>` is `quay.io/openshifttest/hello-openshift:multiarch`.
+
* `<image_name>`: An example of `<image_name>` is `quay.io/openshifttest/hello-openshift:multiarch`.

. Create a `Route` object for a service, so that the pod can use the service to expose the application external to the cluster.
+
[source,terminal]
----
$ oc expose svc/<service_name> --hostname=<svc_name>-<project_name>.<custom_ic_domain_name> <1>
$ oc expose svc/<service_name> --hostname=<svc_name>-<project_name>.<custom_ic_domain_name>
----
+
[NOTE]
@@ -170,10 +179,10 @@ $ dig +short <svc_name>-<project_name>.<custom_ic_domain_name>
----
$ curl <svc_name>-<project_name>.<custom_ic_domain_name>:<port> <1>
----
<1> Where `<port>` is the node port from the `NodePort`-type `Service`. Based on example output from the `oc get svc -n openshift-ingress` command, the `80:32432/TCP` HTTP route means that `32432` is the node port.
+
* `<custom_ic_domain_name>:<port>`: Where `<port>` is the node port from the `NodePort`-type `Service`. Based on example output from the `oc get svc -n openshift-ingress` command, the `80:32432/TCP` HTTP route means that `32432` is the node port.
+
.Output example
+
[source,terminal]
----
Hello OpenShift!
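Pulling the placeholders in this procedure together, a hypothetical end-to-end run using values already named in the module (`ingresscontroller: custom-ic`, domain `nodeportsvc.ipi-cluster.example.com`, node port `32432`; the node, project, and service names are assumptions) might look like:

[source,terminal]
----
$ oc label node worker-0 ingresscontroller=custom-ic
$ oc label namespace hello-project ingresscontroller=custom-ic
$ oc new-app --image=quay.io/openshifttest/hello-openshift:multiarch
$ oc expose svc/hello-openshift --hostname=hello-openshift-hello-project.nodeportsvc.ipi-cluster.example.com
$ curl hello-openshift-hello-project.nodeportsvc.ipi-cluster.example.com:32432
----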
@@ -6,15 +6,12 @@
[id="nw-ingress-gateway-api-deployment_{context}"]
= Gateway API deployment topologies

Gateway API is designed to accomodate two topologies: shared gateways or dedicated gateways. Each topology has its own advantages and different security implications.
[role="_abstract"]
To optimize network security and resource allocation in {product-title}, choose between shared or dedicated gateway topologies when implementing the Gateway API. Selecting the appropriate topology ensures that your infrastructure meets the specific security and operational requirements of your workloads.

Dedicated gateway:: Routes and any load balancers or proxies are served from the same namespace. The `Gateway`
object restricts routes to a particular application namespace. This is the default topology when deploying a Gateway API resource in {product-title}.

Shared gateway:: Routes are served from multiple namespaces or with multiple hostnames. The `Gateway` object filters allowed routes from application namespaces by using the `spec.listeners.allowedRoutes.namespaces` field.

[id="dedicated-gateway-example_{context}"]
== Dedicated gateway example
The following example shows a dedicated `Gateway` resource, `fin-gateway`:

.Example dedicated `Gateway` resource
@@ -26,13 +23,13 @@ metadata:
  name: fin-gateway
  namespace: openshift-ingress
spec:
  listeners: <1>
  listeners:
  - name: http
    protocol: HTTP
    port: 8080
    hostname: "example.com"
----
<1> Creating a `Gateway` resource without setting `spec.listeners[].allowedRoutes` results in implicitly setting the `namespaces.from` field to have the value `Same`.
* `spec.listeners`:: If you do not set `spec.listeners[].allowedRoutes` for a `Gateway` resource, the system implicitly sets the `namespaces.from` field to the value of `Same`.

The following example shows the associated `HTTPRoute` resource, `sales-db`, which attaches to the dedicated `Gateway` object:

@@ -55,10 +52,10 @@ spec:
      port: 8080
----

The `HTTPRoute` resource must have the name of the `Gateway` object as the value for its `parentRefs` field in order to attach to the gateway. Implicitly, the route is assumed to be in the same namespace as the `Gateway` object.
The `HTTPRoute` resource must have the name of the `Gateway` object as the value for its `parentRefs` field in order to attach to the gateway. The system implicitly assumes that the route exists in the same namespace as the `Gateway` object.

Shared gateway:: Routes are served from multiple namespaces or multiple hostnames. The `Gateway` object allows routes from application namespaces by using the `spec.listeners.allowedRoutes.namespaces` field.

[id="shared-gateway-example_{context}"]
== Shared gateway example
The following example shows a `Gateway` resource, `devops-gateway`, that has a `spec.listeners.allowedRoutes.namespaces` label selector set to match any namespaces containing `shared-gateway-access: "true"`:

.Example shared `Gateway` resource
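The body of the shared example is cut off by the next hunk. A sketch consistent with the description above (the listener name, protocol, and port are assumptions; the selector follows the Gateway API `allowedRoutes` schema):

[source,yaml]
----
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: devops-gateway
  namespace: openshift-ingress
spec:
  gatewayClassName: openshift-default
  listeners:
  - name: http        # assumed listener name
    protocol: HTTP    # assumed protocol and port
    port: 80
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            shared-gateway-access: "true"
----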
@@ -6,7 +6,8 @@
[id="nw-ingress-gateway-api-enable_{context}"]
= Getting started with Gateway API for the Ingress Operator

When you create a GatewayClass as shown in the first step, it configures Gateway API for use on your cluster.
[role="_abstract"]
To implement routing policies in your {product-title} cluster, create a GatewayClass resource. This resource initializes the Gateway API infrastructure, providing the foundational template required to define and manage how external traffic reaches your internal services.

[IMPORTANT]
====
@@ -14,7 +15,7 @@ The {product-title} Gateway API implementation relies on the Cluster Ingress Ope

A conflict occurs if your cluster already has an active OpenShift Service Mesh (OSSM v2.x) subscription in any namespace. OSSM v2.x and OSSM v3.x cannot coexist on the same cluster.

If a conflicting OSSM v2.x subscription is present when you create a GatewayClass resource, the Cluster Ingress Operator attempts to install the required OSSM v3.x components but fails this installation operation. As a result, Gateway API resources ,such as Gateway or HTTPRoute, have no effect and no proxy gets configured to route traffic. In {product-title} 4.19, this failure is silent; In {product-title} 4.20 and later, this conflict causes the ingress ClusterOperator to report a Degraded status.
If a conflicting OSSM v2.x subscription is present when you create a GatewayClass resource, the Cluster Ingress Operator attempts to install the required OSSM v3.x components but fails this installation operation. As a result, Gateway API resources, such as Gateway or HTTPRoute, have no effect and no proxy gets configured to route traffic. In {product-title} 4.19, this failure is silent. For {product-title} 4.20 and later, this conflict causes the ingress ClusterOperator to report a Degraded status.

Before enabling Gateway API by creating a `GatewayClass`, verify that you do not have an active OSSM v2.x subscription on the cluster.
====
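One hedged way to perform that check before creating the `GatewayClass` (the `servicemeshoperator` package name is the usual OSSM operator package, an assumption here):

[source,terminal]
----
$ oc get subscriptions --all-namespaces | grep servicemeshoperator
----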
@@ -22,7 +23,7 @@ Before enabling Gateway API by creating a `GatewayClass`, verify that you do not
.Procedure

. Create a `GatewayClass` object:

+
.. Create a YAML file, `openshift-default.yaml`, that contains the following information:
+
.Example `GatewayClass` CR
@@ -33,15 +34,16 @@ kind: GatewayClass
metadata:
  name: openshift-default
spec:
  controllerName: openshift.io/gateway-controller/v1 <1>
  controllerName: openshift.io/gateway-controller/v1
----
<1> The controller name.
+
* `controllerName`: The controller name.
+
[IMPORTANT]
====
The controller name must be exactly as shown for the Ingress Operator to manage it. If you set this field to anything else, the Ingress Operator ignores the `GatewayClass` object and all associated `Gateway`, `GRPCRoute`, and `HTTPRoute` objects. The controller name is tied to the implementation of Gateway API in {product-title}, and `openshift.io/gateway-controller/v1` is the only controller name allowed.
====

+
.. Run the following command to create the `GatewayClass` resource:
+
[source,terminal]
@@ -56,8 +58,8 @@ gatewayclass.gateway.networking.k8s.io/openshift-default created
----
+
During the creation of the `GatewayClass` resource, the Ingress Operator installs a lightweight version of {SMProductName}, an Istio custom resource, and a new deployment in the `openshift-ingress` namespace.

.. Optional: Verify that the new deployment, `istiod-openshift-gateway` is ready and available:
+
.. Optional: Verify that the new deployment, `istiod-openshift-gateway`, is ready and available:
+
[source,terminal]
----
@@ -87,7 +89,7 @@ $ DOMAIN=$(oc get ingresses.config/cluster -o jsonpath={.spec.domain})
----
. Create a `Gateway` object:

+
.. Create a YAML file, `example-gateway.yaml`, that contains the following information:
+
.Example `Gateway` CR
@@ -97,27 +99,30 @@ apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
  namespace: openshift-ingress <1>
  namespace: openshift-ingress
spec:
  gatewayClassName: openshift-default <2>
  gatewayClassName: openshift-default
  listeners:
  - name: https <3>
    hostname: "*.gwapi.${DOMAIN}" <4>
  - name: https
    hostname: "*.gwapi.${DOMAIN}"
    port: 443
    protocol: HTTPS
    tls:
      mode: Terminate
      certificateRefs:
      - name: gwapi-wildcard <5>
      - name: gwapi-wildcard
    allowedRoutes:
      namespaces:
        from: All
----
<1> The `Gateway` object must be created in the `openshift-ingress` namespace.
<2> The `Gateway` object must reference the name of the previously created `GatewayClass` object.
<3> The HTTPS listener listens for HTTPS requests that match a subdomain of the cluster domain. You use this listener to configure ingress to your applications by using Gateway API `HTTPRoute` resources.
<4> The hostname must be a subdomain of the Ingress Operator domain. If you use a domain, the listener tries to serve all traffic in that domain.
<5> The name of the previously created secret.
+
where:
+
`metadata.namespace`:: The `Gateway` object must be created in the `openshift-ingress` namespace.
`gatewayClassName`:: The `Gateway` object must reference the name of the previously created `GatewayClass` object.
`listeners.name`:: The HTTPS listener listens for HTTPS requests that match a subdomain of the cluster domain. You use this listener to configure ingress to your applications by using Gateway API `HTTPRoute` resources.
`listeners.hostname`:: The hostname must be a subdomain of the Ingress Operator domain. If you use a domain, the listener tries to serve all traffic in that domain.
`tls.name`:: The name of the previously created secret.

.. Apply the resource by running the following command:
+
@@ -125,7 +130,7 @@ spec:
----
$ oc apply -f example-gateway.yaml
----

+
.. Optional: When you create a `Gateway` object, {SMProductName} automatically provisions a deployment and service with the same name. Verify this by running the following commands:
*** To verify the deployment, run the following command:
+
@@ -187,7 +192,7 @@ status:
----
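The verification command bodies are cut off by the hunks above. Because the module states that the deployment and service carry the same name as the `Gateway` object, a sketch of such a check could be:

[source,terminal]
----
$ oc get deployment,service -n openshift-ingress example-gateway
----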
. Create an `HTTPRoute` resource that directs requests to your already-created namespace and application called `example-app/example-app`:

+
.. Create a YAML file, `example-route.yaml`, that contains the following information:
+
.Example `HTTPRoute` CR
@@ -197,23 +202,26 @@ apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
  namespace: example-app-ns <1>
  namespace: example-app-ns
spec:
  parentRefs: <2>
  parentRefs:
  - name: example-gateway
    namespace: openshift-ingress
  hostnames: ["example.gwapi.${DOMAIN}"] <3>
  hostnames: ["example.gwapi.${DOMAIN}"]
  rules:
  - backendRefs: <4>
  - backendRefs:
    - name: example-app <5>
      port: 8443
----
<1> The namespace you are deploying your application.
<2> This field must point to the `Gateway` object you previously configured.
<3> The hostname must match the one specified in the `Gateway` object. In this case, the listeners use a wildcard hostname.
<4> This field specifies the backend references that point to your service.
<5> The name of the `Service` for your application.

+
where:
+
`metadata.namespace`:: The namespace in which you are deploying your application.
`spec.parentRefs`:: This field must point to the `Gateway` object you previously configured.
`spec.hostnames`:: The hostname must match the one specified in the `Gateway` object. In this case, the listeners use a wildcard hostname.
`rules.backendRefs`:: This field specifies the backend references that point to your service.
`backendRefs.name`:: The name of the `Service` for your application.
+
.. Apply the resource by running the following command:
+
[source,terminal]
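After the route is applied, a hedged smoke test (assuming DNS for `*.gwapi.${DOMAIN}` resolves to the gateway address; `-k` skips certificate verification for a self-signed wildcard certificate):

[source,terminal]
----
$ curl -k https://example.gwapi.${DOMAIN}
----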
@@ -6,9 +6,10 @@
[id="nw-ingress-gateway-api-implementation_{context}"]
= Gateway API implementation for {product-title}

The Ingress Operator manages the lifecycle of Gateway API CRDs in a way that enables other vendor implementations to make use of CRDs defined in an {product-title} cluster.
[role="_abstract"]
To ensure interoperability between external vendor implementations and your networking infrastructure in {product-title}, use the Ingress Operator to manage the lifecycle of Gateway API custom resource definitions (CRDs).

In some situations, Gateway API provides one or more fields that a vendor implementation does not support, but that implementation is otherwise compatible in schema with the rest of the fields. These "dead fields" can result in disrupted Ingress workloads, improperly provisioned applications and services, and security related issues. Because {product-title} uses a specific version of Gateway API CRDs, any use of third-party implementations of Gateway API must conform to the {product-title} implementation to ensure that all fields work as expected.
In some situations, Gateway API provides one or more fields that a vendor implementation does not support, but that implementation is otherwise compatible in schema with the rest of the fields. These "dead fields" can result in disrupted Ingress workloads, improperly provisioned applications and services, and security-related issues. Because {product-title} uses a specific version of Gateway API CRDs, any use of third-party implementations of Gateway API must conform to the {product-title} implementation to ensure that all fields work as expected.

Any CRDs created within an {product-title} {product-version} cluster are compatibly versioned and maintained by the Ingress Operator. If CRDs are already present but were not previously managed by the Ingress Operator, the Ingress Operator checks whether these configurations are compatible with the Gateway API version supported by {product-title}, and creates an admin-gate that requires your acknowledgment of CRD succession.
@@ -6,11 +6,14 @@
[id="nw-ingress-gateway-api-overview_{context}"]
= Overview of Gateway API

Gateway API is an open source, community-managed, Kubernetes networking mechanism. It focuses on routing within the transport layer, L4, and the application layer, L7, for clusters. A variety of vendors offer many link:https://gateway-api.sigs.k8s.io/implementations/[implementations of Gateway API].
[role="_abstract"]
To optimize network traffic management and implement routing policies in {product-title}, use the Gateway API. By adopting this community-managed Kubernetes mechanism, you can configure advanced routing at both the transport (L4) and application (L7) layers while leveraging various vendor-supported implementations to meet your specific networking requirements.

A variety of vendors offer many link:https://gateway-api.sigs.k8s.io/implementations/[implementations of Gateway API].

The project is an effort to provide a standardized ecosystem by using a portable API with broad community support. By integrating Gateway API functionality into the Ingress Operator, it enables a networking solution that aligns with existing community and upstream development efforts.

Gateway API extends the functionality of the Ingress Operator to handle more granular cluster traffic and routing configurations. With these capabilities, you can create instances of Gateway APIs custom resource definitions (CRDs). For {product-title} clusters, the Ingress Operator creates the following resources:
Gateway API extends the functionality of the Ingress Operator to handle more granular cluster traffic and routing configurations. With these capabilities, you can create instances of Gateway API custom resource definitions (CRDs). For {product-title} clusters, the Ingress Operator creates the following resources:

Gateway:: This resource describes how traffic can be translated to services within the cluster. For example, a specific load balancer configuration.
GatewayClass:: This resource defines a set of `Gateway` objects that share a common configuration and behavior. For example, two separate `GatewayClass` objects might be created to distinguish a set of `Gateway` resources used for public or private applications.
@@ -25,7 +28,7 @@ In {product-title}, the implementation of Gateway API is based on `gateway.netwo
Gateway API provides the following benefits:

* Portability: While {product-title} uses HAProxy to improve Ingress performance, Gateway API does not rely on vendor-specific annotations to provide certain behavior. To get performance comparable to HAProxy, the `Gateway` objects need to be horizontally scaled or their associated nodes need to be vertically scaled.
* Separation of concerns: Gateway API uses a role-based approach to its resources, and more neatly fits into how a large organization structures its responsibilities and teams. Platform engineers might focus on `GatewayClass` resources, cluster admins might focus on configuring `Gateway` resources, and application developers might focus on routing their services with `HTTPRoute` resources.
* Separation of concerns: Gateway API uses a role-based approach to its resources, and more neatly fits into how a large organization structures its responsibilities and teams. Platform engineers might focus on `GatewayClass` resources, cluster administrators might focus on configuring `Gateway` resources, and application developers might focus on routing their services with `HTTPRoute` resources.
* Extensibility: Additional functionality is developed as a standardized CRD.

[id="gateway-api-limitations_{context}"]
@@ -4,9 +4,10 @@

:_mod-docs-content-type: PROCEDURE
[id="nw-ingress-gateway-api-troubleshooting-degraded_{context}"]
= Ingress Operator is Degraded due to Gateway API and OSSM conflict
= Removing conflicts between the Gateway API and OSSM v2.x

In {product-title} 4.20 and later, if you create a `GatewayClass` resource while a conflicting OpenShift Service Mesh (OSSM) v2.x subscription exists, the `ingress` Cluster Operator (CIO) reports a `Degraded` status. This procedure details how to verify and resolve this conflict.
[role="_abstract"]
To restore cluster health and resolve operator degradation in {product-title} 4.20 and later, identify and remove conflicts between the Gateway API and OpenShift Service Mesh (OSSM) v2.x. Ensuring these subscriptions do not overlap allows the ingress Cluster Operator to maintain a healthy status when you create a GatewayClass resource.

The conflict occurs because the Gateway API implementation requires OSSM v3.x, which cannot coexist with OSSM v2.x. The CIO detects this conflict, stops the Gateway API provisioning, and reports the `Degraded` status to alert administrators.

@@ -36,7 +37,7 @@ status:
  type: Degraded
----

You can resolve this issue and clear the `Degraded` status either by removing the `GatewayClass` resource or by using Openshift Gateway API to remove the conflicting OpenShift Service Mesh v2.x subscription from the cluster.
You can resolve this issue and clear the `Degraded` status either by removing the `GatewayClass` resource or by using OpenShift Gateway API to remove the conflicting OpenShift Service Mesh v2.x subscription from the cluster.

.Procedure

@@ -54,4 +55,4 @@ $ oc delete gatewayclass <gatewayclass-name>
$ oc -n openshift-operators delete subscription <OSSM v2.x subscription name>
----
+
After the v2.x subscription is removed, the Ingress Operator automatically retries the installation of OSSM v3.x and completes the Gateway API provisioning.
After you remove the v2.x subscription, the Ingress Operator automatically retries the installation of OSSM v3.x and completes the Gateway API provisioning.
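A quick way to see the reported condition described above:

[source,terminal]
----
$ oc get clusteroperator ingress
----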
@@ -6,9 +6,12 @@
[id="nw-ingresscontroller-change-external_{context}"]
= Configuring the Ingress Controller endpoint publishing scope to External

[role="_abstract"]
To expose cluster services to public networks or the internet in {product-title}, configure the Ingress Controller endpoint publishing scope to `External`.

When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a `scope` set to `External`.

The Ingress Controller's scope can be configured to be `Internal` during installation or after, and cluster administrators can change an `Internal` Ingress Controller to `External`.
As an installation or post-installation task, a cluster administrator can configure the Ingress Controller to `Internal`. Additionally, a cluster administrator can change an `Internal` Ingress Controller to `External`.

[IMPORTANT]
====
@@ -19,19 +22,19 @@ Changing the scope can cause disruption to Ingress traffic, potentially for seve

.Prerequisites

* You installed the `oc` CLI.
* You installed the {oc-first}.

.Procedure

* To change an `Internal` scoped Ingress Controller to `External`, enter the following command:
* To change an `Internal`-scoped Ingress Controller to `External`, enter the following command:
+
[source,terminal]
----
$ oc -n openshift-ingress-operator patch ingresscontrollers/private --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"External"}}}}'
----
+

.Verification
+

* To check the status of the Ingress Controller, enter the following command:
+
[source,terminal]
@@ -6,23 +6,26 @@
[id="nw-ingresscontroller-change-internal_{context}"]
= Configuring the Ingress Controller endpoint publishing scope to Internal

When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a `scope` set to `External`. Cluster administrators can change an `External` scoped Ingress Controller to `Internal`.
[role="_abstract"]
To restrict cluster access to internal traffic and enhance network security in {product-title}, change the Ingress Controller scope from `External` to `Internal`.

When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a `scope` set to `External`.

.Prerequisites

* You installed the `oc` CLI.
* You installed the {oc-first}.

.Procedure

* To change an `External` scoped Ingress Controller to `Internal`, enter the following command:
* To change an `External`-scoped Ingress Controller to `Internal`, enter the following command:
+
[source,terminal]
----
$ oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"Internal"}}}}'
----
+

.Verification
+

* To check the status of the Ingress Controller, enter the following command:
+
[source,terminal]
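The verification command bodies are cut off in both of the preceding modules. One sketch for confirming the current scope (the jsonpath expression is an assumption):

[source,terminal]
----
$ oc -n openshift-ingress-operator get ingresscontroller/default -o jsonpath='{.status.endpointPublishingStrategy.loadBalancer.scope}'
----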
@@ -0,0 +1,39 @@
// Module included in the following assemblies:
//
// * networking/ingress_load_balancing/configuring_ingress_cluster_traffic/overview-traffic.adoc

:_mod-docs-content-type: REFERENCE
[id="nw-ingresscontroller-communication-service-methods_{context}"]
= Methods for communicating from outside the cluster

[role="_abstract"]
To enable communication between external networks and services in {product-title}, configure the appropriate ingress method.

{product-title} provides the following methods for communicating from outside the cluster with services running in the cluster. Note that the methods are listed in order of preference.

* If you have HTTP/HTTPS, use an Ingress Controller.
* If you have a TLS-encrypted protocol other than HTTPS (for example, TLS with the SNI header), use an Ingress Controller.
* Otherwise, use a Load Balancer, an External IP, or a `NodePort`.

[options="header"]
|===
|Method |Purpose

|Use an Ingress Controller
|Allows access to HTTP/HTTPS traffic and TLS-encrypted protocols other than HTTPS (for example, TLS with the SNI header).

|Automatically assign an external IP using a load balancer service
|Allows traffic to non-standard ports through an IP address assigned from a pool.
Most cloud platforms offer a method to start a service with a load-balancer IP address.

|About MetalLB and the MetalLB Operator
|Allows traffic to a specific IP address or address from a pool on the machine network.
For bare-metal installations or platforms that are like bare metal, MetalLB provides a way to start a service with a load-balancer IP address.

|Manually assign an external IP to a service
|Allows traffic to non-standard ports through a specific IP address.

|Configure a `NodePort`
|Expose a service on all nodes in the cluster.
|===
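As a sketch of the last method in the table, a `NodePort`-type `Service` exposes a fixed port on every node (all names and port numbers here are hypothetical):

[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport  # hypothetical name
spec:
  type: NodePort
  selector:
    app: example          # hypothetical pod label
  ports:
  - port: 80              # cluster-internal port
    targetPort: 8080      # container port
    nodePort: 30080       # port opened on every node
----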
@@ -0,0 +1,22 @@
// Module included in the following assemblies:
//
// * networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-nodeport.adoc

:_mod-docs-content-type: CONCEPT
[id="overview-traffic-comparision_{context}"]
= Comparison: Fault-tolerant access to external IP addresses

[role="_abstract"]
To ensure continuous service availability and maintain external IP access in {product-title}, configure fault-tolerant networking features.

For the communication methods that provide access to an external IP address, fault-tolerant access to the IP address is another consideration. The following features provide fault-tolerant access to an external IP address.

IP failover::
IP failover manages a pool of virtual IP addresses for a set of nodes. IP failover is implemented with Keepalived and Virtual Router Redundancy Protocol (VRRP). IP failover is a layer 2 mechanism only and relies on multicast. Multicast can have disadvantages for some networks.

MetalLB::
MetalLB has a layer 2 mode, but it does not use multicast. Layer 2 mode has the disadvantage that it transfers all traffic for an external IP address through one node.

Manually assigning external IP addresses::
You can configure your cluster with an IP address block that is used to assign external IP addresses to services. By default, this feature is disabled. This feature is flexible, but places the largest burden on the cluster or network administrator. The cluster is prepared to receive traffic that is destined for the external IP, but you must decide how to route traffic to nodes.
@@ -34,11 +34,12 @@ endif::[]
[id="nw-osp-configuring-external-load-balancer_{context}"]
= Configuring a user-managed load balancer

You can configure an {product-title} cluster
[role="_abstract"]
To integrate your infrastructure with existing network standards or gain more control over traffic management in {product-title}
ifdef::openstack[]
on {rh-openstack-first}
endif::openstack[]
to use a user-managed load balancer in place of the default load balancer.
, use a user-managed load balancer in place of the default load balancer.

[IMPORTANT]
====
@@ -52,7 +53,9 @@ Read the following prerequisites that apply to the service that you want to conf
MetalLB, which runs on a cluster, functions as a user-managed load balancer.
====

.OpenShift API prerequisites
.Prerequisites

The following list details OpenShift API prerequisites:

* You defined a front-end IP address.
* TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items:
@@ -62,21 +65,19 @@ MetalLB, which runs on a cluster, functions as a user-managed load balancer.
* The front-end IP address and port 22623 are reachable only by {product-title} nodes.
* The load balancer backend can communicate with {product-title} control plane nodes on ports 6443 and 22623.

.Ingress Controller prerequisites
The following list details Ingress Controller prerequisites:

* You defined a front-end IP address.
* TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer.
* The front-end IP address, port 80 and port 443 are be reachable by all users of your system with a location external to your {product-title} cluster.
* The front-end IP address, port 80 and port 443 are reachable to all nodes that operate in your {product-title} cluster.
* TCP port 443 and port 80 are exposed on the front-end IP address of your load balancer.
* The front-end IP address, port 80 and port 443 are reachable by all users of your system with a location external to your {product-title} cluster.
* The front-end IP address, port 80 and port 443 are reachable by all nodes that operate in your {product-title} cluster.
* The load balancer backend can communicate with {product-title} nodes that run the Ingress Controller on ports 80, 443, and 1936.

.Prerequisite for health check URL specifications
The following list details prerequisites for health check URL specifications:

You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. {product-title} provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services.
The following examples show health check specifications for the previously listed backend services:

.Example of a Kubernetes API health check specification
The following example shows a Kubernetes API health check specification for a backend service:

[source,terminal]
----
@@ -87,7 +88,7 @@ Timeout: 10
Interval: 10
----

.Example of a Machine Config API health check specification
The following example shows a Machine Config API health check specification for a backend service:

[source,terminal]
----
@@ -98,7 +99,7 @@ Timeout: 10
Interval: 10
----
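The bodies of these health check examples are cut off by the hunk markers, which preserve only the `Timeout: 10` and `Interval: 10` lines. A sketch of what such a specification typically contains, using the Kubernetes API service as an example (the path and threshold values are assumptions, not taken from this commit):

[source,terminal]
----
Path: HTTPS:6443/readyz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
----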
.Example of an Ingress Controller health check specification
The following example shows an Ingress Controller health check specification for a backend service:

[source,terminal]
----
@@ -303,9 +304,7 @@ set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; p
cache-control: private
----

. Configure the DNS records for your cluster to target the front-end IP addresses of the user-managed load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer.
+
.Examples of modified DNS records
. Configure the DNS records for your cluster to target the front-end IP addresses of the user-managed load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. The following examples show modified DNS records:
+
[source,dns]
----
@@ -343,16 +342,19 @@ ifdef::vsphere[]
vsphere:
endif::vsphere[]
  loadBalancer:
    type: UserManaged <1>
    type: UserManaged
    apiVIPs:
    - <api_ip> <2>
    ingressVIPs:
    - <ingress_ip> <3>
# ...
----
<1> Set `UserManaged` for the `type` parameter to specify a user-managed load balancer for your cluster. The parameter defaults to `OpenShiftManagedDefault`, which denotes the default internal load balancer. For services defined in an `openshift-kni-infra` namespace, a user-managed load balancer can deploy the `coredns` service to pods in your cluster but ignores `keepalived` and `haproxy` services.
<2> Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the Kubernetes API can communicate with the user-managed load balancer.
<3> Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the user-managed load balancer can manage ingress traffic for your cluster.
+
where:
+
`loadBalancer.type`:: Set `UserManaged` for the `type` parameter to specify a user-managed load balancer for your cluster. The parameter defaults to `OpenShiftManagedDefault`, which denotes the default internal load balancer. For services defined in an `openshift-kni-infra` namespace, a user-managed load balancer can deploy the `coredns` service to pods in your cluster but ignores `keepalived` and `haproxy` services.
`loadBalancer.<api_ip>`:: Specifies a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the Kubernetes API can communicate with the user-managed load balancer. Mandatory parameter.
`loadBalancer.<ingress_ip>`:: Specifies a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the user-managed load balancer can manage ingress traffic for your cluster. Mandatory parameter.

.Verification

@@ -397,7 +399,7 @@ HTTP/1.1 200 OK
Content-Length: 0
----
+
.. Verify that you can access each cluster application on port, by running the following command and observing the output:
.. Verify that you can access each cluster application on port 80, by running the following command and observing the output:
+
[source,terminal]
----
@@ -5,10 +5,13 @@
[id="nw-osp-loadbalancer-etp-local_{context}"]
= Local external traffic policies

You can set the external traffic policy (ETP) parameter, `.spec.externalTrafficPolicy`, on a load balancer service to preserve the source IP address of incoming traffic when it reaches service endpoint pods. However, if your cluster uses the Amphora Octavia provider, the source IP of the traffic is replaced with the IP address of the Amphora VM. This behavior does not occur if your cluster uses the OVN Octavia provider.
[role="_abstract"]
You can set the external traffic policy (ETP) parameter, `.spec.externalTrafficPolicy`, on a load balancer service to preserve the source IP address of incoming traffic when it reaches service endpoint pods.

Having the `ETP` option set to `Local` requires that health monitors be created for the load balancer. Without health monitors, traffic can be routed to a node that does not have a functional endpoint, which causes the connection to drop. To force Cloud Provider OpenStack to create health monitors, you must set the value of the `create-monitor` option in the cloud provider configuration to `true`.
If your cluster uses the Amphora Octavia provider, the source IP of the traffic is replaced with the IP address of the Amphora VM. This behavior does not occur if your cluster uses the OVN Octavia provider.

In {rh-openstack} 16.2, the OVN Octavia provider does not support health monitors. Therefore, setting the ETP to local is unsupported.
Having the `ETP` option set to `Local` requires creating health monitors for the load balancer. Without health monitors, traffic can be routed to a node that does not have a functional endpoint, which causes the connection to drop. To force Cloud Provider OpenStack to create health monitors, you must set the value of the `create-monitor` option in the cloud provider configuration to `true`.

In {rh-openstack} 16.2, the OVN Octavia provider does not support health monitors. Therefore, setting the ETP to `Local` is unsupported.

In {rh-openstack} 16.2, the Amphora Octavia provider does not support HTTP monitors on UDP pools. As a result, UDP load balancer services have `UDP-CONNECT` monitors created instead. Due to implementation details, this configuration only functions properly with the OVN-Kubernetes CNI plugin.
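A sketch of the corresponding cloud provider configuration (the `[LoadBalancer]` keys follow the upstream cloud-provider-openstack options; the timing values are assumptions):

[source,ini]
----
[LoadBalancer]
create-monitor = true
monitor-delay = 10s
monitor-timeout = 10s
monitor-max-retries = 1
----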
@@ -6,6 +6,9 @@
[id="nw-osp-loadbalancer-limitations_{context}"]
= Limitations of load balancer services

{product-title} clusters on {rh-openstack-first} use Octavia to handle load balancer services. As a result of this choice, such clusters have a number of functional limitations.
[role="_abstract"]
To optimize network resource management and mitigate operational risks in {product-title} clusters on {rh-openstack-first}, review the implementation of Octavia for load balancer services.

{rh-openstack} Octavia has two supported providers: Amphora and OVN. These providers differ in terms of available features as well as implementation details. These distinctions affect load balancer services that are created on your cluster.
{product-title} clusters on {rh-openstack-first} use Octavia to handle load balancer services. As a result, your cluster has several functional limitations.

{rh-openstack} Octavia has two supported providers: Amphora and OVN. These providers differ in available features and implementation details. These distinctions affect load balancer services that you create on your cluster.
@@ -15,11 +15,12 @@
[id="nw-osp-services-external-load-balancer_{context}"]
= Services for a user-managed load balancer

You can configure an {product-title} cluster
[role="_abstract"]
To integrate your infrastructure with existing network standards or gain more control over traffic management in {product-title}
ifeval::["{context}" == "load-balancing-openstack"]
on {rh-openstack-first}
endif::[]
to use a user-managed load balancer in place of the default load balancer.
, configure services for a user-managed load balancer.

[IMPORTANT]
====
@@ -6,9 +6,10 @@
[id="nw-osp-specify-floating-ip_{context}"]
= Specifying a floating IP address in the Ingress Controller

By default, a floating IP address gets randomly assigned to your {product-title} cluster on {rh-openstack-first} upon deployment. This floating IP address is associated with your Ingress port.
[role="_abstract"]
To establish external access to your {product-title} cluster on {rh-openstack-first}, use the automatically assigned floating IP address. The floating IP address is associated with your Ingress port.

You might want to pre-create a floating IP address before updating your DNS records and cluster deployment. In this situation, you can define a floating IP address to the Ingress Controller. You can do this regardless of whether you are using Octavia or a user-managed cluster.
You might want to precreate a floating IP address before updating your DNS records and cluster deployment. In this situation, you can define a floating IP address for the Ingress Controller. You can do this regardless of whether you are using Octavia or a user-managed cluster.

.Procedure

@@ -21,22 +22,25 @@ apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  namespace: openshift-ingress-operator
  name: <name> <1>
  name: <name>
spec:
  domain: <domain> <2>
  domain: <domain>
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External <3>
      scope: External
      providerParameters:
        type: OpenStack
        openstack:
          floatingIP: <ingress_port_IP> <4>
          floatingIP: <ingress_port_IP>
----
<1> The name of your Ingress Controller. If you are using the default Ingress Controller, the value for this field is `default`.
<2> The DNS name serviced by the Ingress Controller.
<3> You must set the scope to `External` to use a floating IP address.
<4> The floating IP address associated with the port your Ingress Controller is listening on.
+
where:
+
`metadata.name`:: Specifies the name of your Ingress Controller. If you are using the default Ingress Controller, the value for this field is `default`.
`spec.domain`:: Specifies the DNS name serviced by the Ingress Controller.
`loadBalancer.scope`:: You must set the scope to `External` to use a floating IP address.
`openstack.floatingIP`:: Specifies the floating IP address associated with the port your Ingress Controller is listening on.

. Apply the CR file by running the following command:
+
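The command body is cut off here. Assuming the CR was saved as `ingress-controller.yaml` (a hypothetical file name), the apply step would follow the same pattern as the other procedures in this commit:

[source,terminal]
----
$ oc apply -f ingress-controller.yaml
----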
@@ -1,12 +1,13 @@
:_mod-docs-content-type: ASSEMBLY
[id="ingress-gateway-api"]
= Gateway API with {product-title} Networking
= Gateway API with {product-title} networking
include::_attributes/common-attributes.adoc[]
:context: ingress-gateway-api

toc::[]

{product-title} provides additional ways of configuring network traffic by using Gateway API with the Ingress Operator.
[role="_abstract"]
To manage complex network traffic and implement advanced routing policies in {product-title}, use the Ingress Operator to configure the Gateway API.

[IMPORTANT]
====
@@ -24,4 +25,4 @@ include::modules/nw-ingress-gateway-api-deployment-topologies.adoc[leveloffset=+
include::modules/nw-ingress-gateway-api-troubleshooting-degraded.adoc[leveloffset=+1]

.Additional resources
* xref:configuring-ingress-cluster-traffic-ingress-controller.adoc#nw-ingress-sharding-concept_configuring-ingress-cluster-traffic-ingress-controller[Ingress Controller sharding].
* xref:configuring-ingress-cluster-traffic-ingress-controller.adoc#nw-ingress-sharding-concept_configuring-ingress-cluster-traffic-ingress-controller[Ingress Controller sharding]
@@ -6,7 +6,8 @@ include::_attributes/common-attributes.adoc[]

toc::[]

The `endpointPublishingStrategy` is used to publish the Ingress Controller endpoints to other networks, enable load balancer integrations, and provide access to other systems.
[role="_abstract"]
To expose Ingress Controller endpoints to external systems and enable load balancer integrations in {product-title}, configure the `endpointPublishingStrategy` parameter.

[IMPORTANT]
====
@@ -6,58 +6,24 @@ include::_attributes/common-attributes.adoc[]

toc::[]

{product-title} provides the following methods for communicating from
outside the cluster with services running in the cluster.
[role="_abstract"]
To enable communication between external networks and services in {product-title}, configure ingress cluster traffic.

The methods are recommended, in order or preference:
include::modules/nw-ingresscontroller-communication-service-methods.adoc[leveloffset=+1]

* If you have HTTP/HTTPS, use an Ingress Controller.
* If you have a TLS-encrypted protocol other than HTTPS. For example, for TLS
with the SNI header, use an Ingress Controller.
* Otherwise, use a Load Balancer, an External IP, or a `NodePort`.
[role="_additional-resources"]
[id="additional-resources_{context}"]
== Additional resources

[[external-access-options-table]]
[options="header"]
|===
* xref:../../../networking/ingress_load_balancing/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-ingress-controller.adoc#configuring-ingress-cluster-traffic-ingress-controller[Use an Ingress Controller]

|Method |Purpose
* xref:../../../networking/ingress_load_balancing/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-load-balancer.adoc#configuring-ingress-cluster-traffic-load-balancer[Automatically assign an external IP using a load balancer service]

|xref:../../../networking/ingress_load_balancing/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-ingress-controller.adoc#configuring-ingress-cluster-traffic-ingress-controller[Use an Ingress Controller]
|Allows access to HTTP/HTTPS traffic and TLS-encrypted protocols other than HTTPS (for example, TLS with the SNI header).
* xref:../../../networking/networking_operators/metallb-operator/about-metallb.adoc#about-metallb[About MetalLB and the MetalLB Operator]

|xref:../../../networking/ingress_load_balancing/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-load-balancer.adoc#configuring-ingress-cluster-traffic-load-balancer[Automatically assign an external IP using a load balancer service]
|Allows traffic to non-standard ports through an IP address assigned from a pool.
Most cloud platforms offer a method to start a service with a load-balancer IP address.
* xref:../../../networking/ingress_load_balancing/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-service-external-ip.adoc#configuring-ingress-cluster-traffic-service-external-ip[Manually assign an external IP to a service]

|xref:../../../networking/networking_operators/metallb-operator/about-metallb.adoc#about-metallb[About MetalLB and the MetalLB Operator]
|Allows traffic to a specific IP address or address from a pool on the machine network.
For bare-metal installations or platforms that are like bare metal, MetalLB provides a way to start a service with a load-balancer IP address.
* xref:../../../networking/ingress_load_balancing/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-nodeport.adoc#configuring-ingress-cluster-traffic-nodeport[Configure a `NodePort`]

|xref:../../../networking/ingress_load_balancing/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-service-external-ip.adoc#configuring-ingress-cluster-traffic-service-external-ip[Manually assign an external IP to a service]
|Allows traffic to non-standard ports through a specific IP address.
include::modules/nw-ingresscontroller-overview-traffic-comparision.adoc[leveloffset=+1]

|xref:../../../networking/ingress_load_balancing/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-nodeport.adoc#configuring-ingress-cluster-traffic-nodeport[Configure a `NodePort`]
|Expose a service on all nodes in the cluster.
|===

[id="overview-traffic-comparision_{context}"]
== Comparision: Fault tolerant access to external IP addresses

For the communication methods that provide access to an external IP address, fault tolerant access to the IP address is another consideration.
The following features provide fault tolerant access to an external IP address.

IP failover::
IP failover manages a pool of virtual IP address for a set of nodes.
It is implemented with Keepalived and Virtual Router Redundancy Protocol (VRRP).
IP failover is a layer 2 mechanism only and relies on multicast.
Multicast can have disadvantages for some networks.

MetalLB::
MetalLB has a layer 2 mode, but it does not use multicast.
Layer 2 mode has a disadvantage that it transfers all traffic for an external IP address through one node.

Manually assigning external IP addresses::
You can configure your cluster with an IP address block that is used to assign external IP addresses to services.
By default, this feature is disabled.
This feature is flexible, but places the largest burden on the cluster or network administrator.
The cluster is prepared to receive traffic that is destined for the external IP, but each customer has to decide how they want to route traffic to nodes.
@@ -6,7 +6,9 @@ include::_attributes/common-attributes.adoc[]

toc::[]

//limitations of OSP loadbalancer
[role="_abstract"]
To distribute network traffic and communications activity evenly across your compute instances in {rh-openstack}, configure load balancing services.

include::modules/nw-osp-loadbalancer-limitations.adoc[leveloffset=+1]

include::modules/nw-osp-loadbalancer-etp-local.adoc[leveloffset=+2]
@@ -15,11 +17,8 @@ include::modules/installation-osp-api-octavia.adoc[leveloffset=+1]

include::modules/installation-osp-api-scaling.adoc[leveloffset=+2]

// Services for a user-managed load balancer
include::modules/nw-osp-services-external-load-balancer.adoc[leveloffset=+1]

// Configuring a user-managed load balancer
include::modules/nw-osp-configuring-external-load-balancer.adoc[leveloffset=+2]

// Configuring an Ingress controller to use floating IPs
include::modules/nw-osp-specify-floating-ip.adoc[leveloffset=+1]