diff --git a/images/202_OpenShift_Ingress_0222_load_balancer.png b/images/202_OpenShift_Ingress_0222_load_balancer.png
new file mode 100644
index 0000000000..3d74aa8057
Binary files /dev/null and b/images/202_OpenShift_Ingress_0222_load_balancer.png differ
diff --git a/images/202_OpenShift_Ingress_0222_node_port.png b/images/202_OpenShift_Ingress_0222_node_port.png
new file mode 100644
index 0000000000..6474dd2be4
Binary files /dev/null and b/images/202_OpenShift_Ingress_0222_node_port.png differ
diff --git a/modules/nw-ingress-controller-endpoint-publishing-strategies.adoc b/modules/nw-ingress-controller-endpoint-publishing-strategies.adoc
index 80e042bbfc..31dc3b3363 100644
--- a/modules/nw-ingress-controller-endpoint-publishing-strategies.adoc
+++ b/modules/nw-ingress-controller-endpoint-publishing-strategies.adoc
@@ -11,6 +11,14 @@ The `NodePortService` endpoint publishing strategy publishes the Ingress Control
 
 In this configuration, the Ingress Controller deployment uses container networking. A `NodePortService` is created to publish the deployment. The specific node ports are dynamically allocated by {product-title}; however, to support static port allocations, your changes to the node port field of the managed `NodePortService` are preserved.
 
+.Diagram of NodePortService
+image::202_OpenShift_Ingress_0222_node_port.png[{product-title} Ingress NodePort endpoint publishing strategy]
+
+The preceding graphic shows the following concepts pertaining to the {product-title} Ingress NodePort endpoint publishing strategy:
+
+* All the available nodes in the cluster have their own, externally accessible IP addresses. The service running in the cluster is bound to the same unique NodePort on all the nodes.
+* When a client connects to a node that is down, for example the node with the `193.10.0.10` IP address in the graphic, the client must use the IP address of another node instead, as the image shows. The node port then connects the client directly to an available node that is running the service. In this scenario, no load balancing is required.
+
 [NOTE]
 ====
 The Ingress Operator ignores any updates to `.spec.ports[].nodePort` fields of the service.
diff --git a/modules/nw-ingress-setting-internal-lb.adoc b/modules/nw-ingress-setting-internal-lb.adoc
index 070722a504..6d7563f737 100644
--- a/modules/nw-ingress-setting-internal-lb.adoc
+++ b/modules/nw-ingress-setting-internal-lb.adoc
@@ -20,6 +20,14 @@ If you do not, all of your nodes will lose egress connectivity to the internet.
 
 If you want to change the `scope` for an `IngressController` object, you must delete and then recreate that `IngressController` object. You cannot change the `.spec.endpointPublishingStrategy.loadBalancer.scope` parameter after the custom resource (CR) is created.
 ====
+.Diagram of LoadBalancer
+image::202_OpenShift_Ingress_0222_load_balancer.png[{product-title} Ingress LoadBalancerService endpoint publishing strategy]
+
+The preceding graphic shows the following concepts pertaining to the {product-title} Ingress LoadBalancerService endpoint publishing strategy:
+
+* You can load balance externally, by using the cloud provider load balancer, or internally, by using the OpenShift Ingress Controller Load Balancer.
+* You can use the single IP address of the load balancer and more familiar ports, such as 8080 and 4200, as shown on the cluster depicted in the graphic.
+* Traffic from the external load balancer is directed at the pods and is managed by the load balancer, as depicted in the instance of a down node.
See the link:https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer[Kubernetes Services documentation] for implementation details.
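
The `NodePortService` strategy described in the first hunk is selected through the `endpointPublishingStrategy` field of the `IngressController` custom resource. The following is a minimal sketch of such a resource, assuming the `operator.openshift.io/v1` API group; the name, domain, and replica count are placeholder values:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: example-nodeport               # placeholder name
  namespace: openshift-ingress-operator
spec:
  domain: nodeport.example.com         # placeholder domain
  replicas: 2
  endpointPublishingStrategy:
    type: NodePortService              # publish the controller through a NodePortService
----

As the module text notes, the node ports themselves are allocated dynamically by {product-title}, but changes to the node port field of the managed `NodePortService` are preserved to support static allocations.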
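The internal load balancer configuration in the second hunk corresponds to setting the `.spec.endpointPublishingStrategy.loadBalancer.scope` parameter named in the existing note to `Internal`. A minimal sketch, again assuming the `operator.openshift.io/v1` API group and placeholder metadata:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internal-ingress               # placeholder name
  namespace: openshift-ingress-operator
spec:
  domain: apps.internal.example.com    # placeholder domain
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal                  # internal rather than external load balancer
----

Because the `scope` parameter cannot be changed after the CR is created, switching between internal and external load balancing requires deleting and recreating the `IngressController` object, as the note states.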