From e2304b3047bc78ccc2423e98705c1bb24c8c324c Mon Sep 17 00:00:00 2001 From: dfitzmau Date: Tue, 23 Apr 2024 16:57:33 +0100 Subject: [PATCH] OSDOCS-7074: Documented external LB for managing api/ingress traffic --- .../ipi-install-installation-workflow.adoc | 6 + ...stall-post-installation-configuration.adoc | 4 +- ...etworks-installer-provisioned-vsphere.adoc | 13 +- ...-installer-provisioned-customizations.adoc | 13 +- ...er-provisioned-network-customizations.adoc | 13 +- modules/installation-launching-installer.adoc | 2 +- ...nstallation-load-balancing-user-infra.adoc | 2 +- ...allation-osp-balancing-external-loads.adoc | 2 +- ...sp-configuring-external-load-balancer.adoc | 82 +-- ...w-osp-services-external-load-balancer.adoc | 59 +- networking/load-balancing-openstack.adoc | 4 +- ...sscontroller-operator-openshift-io-v1.adoc | 520 +++++++++--------- 12 files changed, 357 insertions(+), 363 deletions(-) diff --git a/installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc b/installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc index b1562b4966..e6e128fed6 100644 --- a/installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc +++ b/installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc @@ -29,6 +29,12 @@ include::modules/ipi-install-extracting-the-openshift-installer.adoc[leveloffset include::modules/ipi-install-creating-an-rhcos-images-cache.adoc[leveloffset=+1] +// Services for a user-managed load balancer +include::modules/nw-osp-services-external-load-balancer.adoc[leveloffset=+1] + +// Configuring a user-managed load balancer +include::modules/nw-osp-configuring-external-load-balancer.adoc[leveloffset=+2] + include::modules/ipi-install-setting-cluster-node-hostnames-dhcp.adoc[leveloffset=+1] [id="ipi-install-configuration-files"] diff --git a/installing/installing_bare_metal_ipi/ipi-install-post-installation-configuration.adoc b/installing/installing_bare_metal_ipi/ipi-install-post-installation-configuration.adoc index c557898a9e..c5922fef95 100644 --- a/installing/installing_bare_metal_ipi/ipi-install-post-installation-configuration.adoc +++ b/installing/installing_bare_metal_ipi/ipi-install-post-installation-configuration.adoc @@ -12,8 +12,8 @@ include::modules/ipi-install-configuring-ntp-for-disconnected-clusters.adoc[leve include::modules/nw-enabling-a-provisioning-network-after-installation.adoc[leveloffset=+1] -// Configuring an external load balancer +// Configuring a user-managed load balancer include::modules/nw-osp-services-external-load-balancer.adoc[leveloffset=+1] -// Services for an external load balancer +// Services for a user-managed load balancer include::modules/nw-osp-configuring-external-load-balancer.adoc[leveloffset=+2] diff --git a/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.adoc b/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.adoc index 74a102c6eb..a03a71f50b 100644 --- a/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.adoc +++ b/installing/installing_vsphere/ipi/installing-restricted-networks-installer-provisioned-vsphere.adoc @@ -64,6 +64,13 @@ include::modules/installation-configure-proxy.adoc[leveloffset=+2] include::modules/configuring-vsphere-regions-zones.adoc[leveloffset=+2] +// Services for a user-managed load balancer +include::modules/nw-osp-services-external-load-balancer.adoc[leveloffset=+1] + +// Configuring a user-managed 
load balancer +include::modules/nw-osp-configuring-external-load-balancer.adoc[leveloffset=+2] + +// Deploying the cluster include::modules/installation-launching-installer.adoc[leveloffset=+1] include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1] @@ -88,12 +95,6 @@ include::modules/cluster-telemetry.adoc[leveloffset=+1] * See xref:../../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service -// Services for an external load balancer -include::modules/nw-osp-services-external-load-balancer.adoc[leveloffset=+1] - -// Configuring an external load balancer -include::modules/nw-osp-configuring-external-load-balancer.adoc[leveloffset=+2] - [id="next-steps_installing-restricted-networks-installer-provisioned-vsphere"] == Next steps diff --git a/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.adoc b/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.adoc index 119f177357..2dc7eced94 100644 --- a/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.adoc +++ b/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.adoc @@ -56,6 +56,13 @@ include::modules/installation-configure-proxy.adoc[leveloffset=+2] include::modules/configuring-vsphere-regions-zones.adoc[leveloffset=+2] +// Services for a user-managed load balancer +include::modules/nw-osp-services-external-load-balancer.adoc[leveloffset=+1] + +// Configuring a user-managed load balancer +include::modules/nw-osp-configuring-external-load-balancer.adoc[leveloffset=+2] + +// Deploying the cluster include::modules/installation-launching-installer.adoc[leveloffset=+1] include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1] @@ -81,12 +88,6 @@ include::modules/cluster-telemetry.adoc[leveloffset=+1] * See xref:../../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service -// Services for an external load balancer -include::modules/nw-osp-services-external-load-balancer.adoc[leveloffset=+1] - -// Configuring an external load balancer -include::modules/nw-osp-configuring-external-load-balancer.adoc[leveloffset=+2] - [id="next-steps_installing-vsphere-installer-provisioned-customizations"] == Next steps diff --git a/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.adoc b/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.adoc index c2754563b4..46228fcd78 100644 --- a/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.adoc +++ b/installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.adoc @@ -66,6 +66,13 @@ include::modules/nw-modifying-operator-install-config.adoc[leveloffset=+1] include::modules/nw-operator-cr.adoc[leveloffset=+1] // end network customization +// Services for a user-managed load balancer +include::modules/nw-osp-services-external-load-balancer.adoc[leveloffset=+1] + +// Configuring a user-managed load balancer +include::modules/nw-osp-configuring-external-load-balancer.adoc[leveloffset=+2] + +// Deploying the cluster include::modules/installation-launching-installer.adoc[leveloffset=+1] 
include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1] @@ -91,12 +98,6 @@ include::modules/cluster-telemetry.adoc[leveloffset=+1] * See xref:../../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service -// Services for an external load balancer -include::modules/nw-osp-services-external-load-balancer.adoc[leveloffset=+1] - -// Configuring an external load balancer -include::modules/nw-osp-configuring-external-load-balancer.adoc[leveloffset=+2] - include::modules/ipi-install-configure-network-components-to-run-on-the-control-plane.adoc[leveloffset=+1] [id="next-steps_installing-vsphere-installer-provisioned-network-customizations"] diff --git a/modules/installation-launching-installer.adoc b/modules/installation-launching-installer.adoc index 803921839d..1bf1c5c68d 100644 --- a/modules/installation-launching-installer.adoc +++ b/modules/installation-launching-installer.adoc @@ -311,7 +311,7 @@ ifdef::vsphere[] + [IMPORTANT] ==== -You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring an external load balancer". +You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring a user-managed load balancer". ==== endif::vsphere[] diff --git a/modules/installation-load-balancing-user-infra.adoc b/modules/installation-load-balancing-user-infra.adoc index a686345ac7..ef4b902cd1 100644 --- a/modules/installation-load-balancing-user-infra.adoc +++ b/modules/installation-load-balancing-user-infra.adoc @@ -227,4 +227,4 @@ If you are using HAProxy as a load balancer, you can check that the `haproxy` pr ifeval::["{context}" == "installing-openstack-installer-custom"] :!user-managed-lb: -endif::[] \ No newline at end of file +endif::[] diff --git a/modules/installation-osp-balancing-external-loads.adoc b/modules/installation-osp-balancing-external-loads.adoc index 36b26703c5..1dd1a4ebff 100644 --- a/modules/installation-osp-balancing-external-loads.adoc +++ b/modules/installation-osp-balancing-external-loads.adoc @@ -3,7 +3,7 @@ // * installing/installing_openstack/installing-openstack-load-balancing.adoc [id="installation-osp-balancing-external-loads_{context}"] -= Configuring an external load balancer += Configuring a user-managed load balancer Configure an external load balancer in {rh-openstack-first} to use your own load balancer, resolve external networking needs, or scale beyond what the default {product-title} load balancer can provide. 
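The modules referenced above document the user-managed load balancer procedure in detail. As a quick orientation, the following HAProxy fragment is a minimal sketch of such a configuration for the API (6443), MachineConfig (22623), and ingress (80 and 443) services. The listen names, IP addresses, and backend servers are illustrative assumptions only, not values defined by these modules; consult the module content and your vendor's load balancer documentation for the authoritative settings.

[source,text]
----
# Minimal HAProxy sketch of a user-managed load balancer.
# All names and IP addresses below are illustrative assumptions.
listen my-cluster-api-6443
  bind 192.168.1.100:6443
  mode tcp
  balance roundrobin
  option httpchk GET /readyz HTTP/1.0
  option log-health-checks
  server my-cluster-master-0 192.168.1.101:6443 check check-ssl verify none
  server my-cluster-master-1 192.168.1.102:6443 check check-ssl verify none
  server my-cluster-master-2 192.168.1.103:6443 check check-ssl verify none

listen my-cluster-machine-config-api-22623
  bind 192.168.1.100:22623
  mode tcp
  balance roundrobin
  option httpchk GET /healthz HTTP/1.0
  server my-cluster-master-0 192.168.1.101:22623 check check-ssl verify none
  server my-cluster-master-1 192.168.1.102:22623 check check-ssl verify none
  server my-cluster-master-2 192.168.1.103:22623 check check-ssl verify none

listen my-cluster-apps-443
  bind 192.168.1.100:443
  mode tcp
  balance roundrobin
  option httpchk GET /healthz/ready HTTP/1.0
  server my-cluster-worker-0 192.168.1.111:443 check check-ssl verify none
  server my-cluster-worker-1 192.168.1.112:443 check check-ssl verify none

listen my-cluster-apps-80
  bind 192.168.1.100:80
  mode tcp
  balance roundrobin
  option httpchk GET /healthz/ready HTTP/1.0
  server my-cluster-worker-0 192.168.1.111:80 check
  server my-cluster-worker-1 192.168.1.112:80 check
----

The health check paths shown here (`/readyz` for the Kubernetes API, `/healthz` for the Machine Config API, and `/healthz/ready` for the Ingress Controller) correspond to the backend health checks that the included modules describe.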
diff --git a/modules/nw-osp-configuring-external-load-balancer.adoc b/modules/nw-osp-configuring-external-load-balancer.adoc index 33cb758046..00d96ad28d 100644 --- a/modules/nw-osp-configuring-external-load-balancer.adoc +++ b/modules/nw-osp-configuring-external-load-balancer.adoc @@ -1,45 +1,39 @@ // Module included in the following assemblies: -// * networking/load-balancing-openstack.adoc ( Load balancing on OpenStack) -// * installing/installing_bare_metal_ipi/ipi-install-post-installation-configuration.adoc (Post-installation configuration) -// * installing/installing-vsphere-installer-provisioned.adoc(Installing a cluster) -// * installing/installing-vsphere-installer-provisioned-customizations.adoc (Installing a cluster on vSphere with customizations) -// * installing/installing-vsphere-installer-provisioned-network-customizations.adoc (Installing a cluster on vSphere with network customizations) -// * installing/installing-restricted-networks-installer-provisioned-vsphere.adoc (Installing a cluster on vSphere in a restricted network) +// OpenStack +// * networking/load-balancing-openstack.adoc +// Bare metal +// * installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc +// * installing/installing_bare_metal_ipi/ipi-install-post-installation-configuration.adoc +// vSphere +// * installing/installing-vsphere-installer-provisioned-customizations.adoc +// * installing/installing-vsphere-installer-provisioned-network-customizations.adoc +// * installing/installing-restricted-networks-installer-provisioned-vsphere.adoc -ifeval::["{context}" == "installing-vsphere-installer-provisioned"] -:vsphere: -endif::[] -ifeval::["{context}" == "installing-vsphere-installer-provisioned-customizations"] -:vsphere: -endif::[] -ifeval::["{context}" == "installing-vsphere-installer-provisioned-network-customizations"] -:vsphere: -endif::[] -ifeval::["{context}" == installing-restricted-networks-installer-provisioned-vsphere] -:vsphere: +ifeval::["{context}" == "ipi-install-installation-workflow"] +:bare-metal: endif::[] :_mod-docs-content-type: PROCEDURE [id="nw-osp-configuring-external-load-balancer_{context}"] -= Configuring an external load balancer += Configuring a user-managed load balancer You can configure an {product-title} cluster ifeval::["{context}" == "load-balancing-openstack"] on {rh-openstack-first} endif::[] -to use an external load balancer in place of the default load balancer. +to use a user-managed load balancer in place of the default load balancer. [IMPORTANT] ==== -Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. +Before you configure a user-managed load balancer, ensure that you read the "Services for a user-managed load balancer" section. ==== -Read the following prerequisites that apply to the service that you want to configure for your external load balancer. +Read the following prerequisites that apply to the service that you want to configure for your user-managed load balancer. [NOTE] ==== -MetalLB, that runs on a cluster, functions as an external load balancer. +MetalLB, which runs on a cluster, functions as a user-managed load balancer. ==== .OpenShift API prerequisites @@ -64,7 +58,7 @@ MetalLB, that runs on a cluster, functions as an external load balancer. You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. 
{product-title} provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. -The following examples demonstrate health check specifications for the previously listed backend services: +The following examples show health check specifications for the previously listed backend services: .Example of a Kubernetes API health check specification @@ -157,7 +151,7 @@ listen my-cluster-apps-80 # ... ---- -. Use the `curl` CLI command to verify that the external load balancer and its resources are operational: +. Use the `curl` CLI command to verify that the user-managed load balancer and its resources are operational: + .. Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: + @@ -239,7 +233,7 @@ set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; p cache-control: private ---- -. Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. +. Configure the DNS records for your cluster to target the front-end IP addresses of the user-managed load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. + .Examples of modified DNS records + @@ -260,7 +254,30 @@ A record pointing to Load Balancer Front End DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. ==== -. Use the `curl` CLI command to verify that the external load balancer and DNS record configuration are operational: +ifdef::bare-metal[] +. For your {product-title} cluster to use the user-managed load balancer, you must specify the following configuration in your cluster's `install-config.yaml` file: ++ +[source,yaml] +---- +# ... +platform: + baremetal: + loadBalancer: + type: UserManaged <1> + apiVIPs: + - <2> + ingressVIPs: + - <3> +# ... +---- +<1> Set `UserManaged` for the `type` parameter to specify a user-managed load balancer for your cluster. The parameter defaults to `OpenShiftManagedDefault`, which denotes the default internal load balancer. For services defined in an `openshift-kni-infra` namespace, a user-managed load balancer can deploy the `coredns` service to pods in your cluster but ignores `keepalived` and `haproxy` services. +<2> Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the Kubernetes API can communicate with the user-managed load balancer. +<3> Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the user-managed load balancer can manage ingress traffic for your cluster. +endif::bare-metal[] + +.Verification + +. Use the `curl` CLI command to verify that the user-managed load balancer and DNS record configuration are operational: + .. 
Verify that you can access the cluster API, by running the following command and observing the output: + @@ -352,15 +369,6 @@ set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; p cache-control: private ---- -ifeval::["{context}" == "installing-vsphere-installer-provisioned"] -:!vsphere: -endif::[] -ifeval::["{context}" == "installing-vsphere-installer-provisioned-customizations"] -:!vsphere: -endif::[] -ifeval::["{context}" == "installing-vsphere-installer-provisioned-network-customizations"] -:!vsphere: -endif::[] -ifeval::["{context}" == installing-restricted-networks-installer-provisioned-vsphere] -:!vsphere: +ifeval::["{context}" == "ipi-install-installation-workflow"] +:!bare-metal: endif::[] diff --git a/modules/nw-osp-services-external-load-balancer.adoc b/modules/nw-osp-services-external-load-balancer.adoc index eab14b4b71..a419323829 100644 --- a/modules/nw-osp-services-external-load-balancer.adoc +++ b/modules/nw-osp-services-external-load-balancer.adoc @@ -1,49 +1,39 @@ // Module included in the following assemblies: -// * networking/load-balancing-openstack.adoc ( Load balancing on OpenStack) -// * installing/installing_bare_metal_ipi/ipi-install-post-installation-configuration.adoc (Post-installation configuration) -// * installing/installing-vsphere-installer-provisioned.adoc(Installing a cluster) -// * installing/installing-vsphere-installer-provisioned-customizations.adoc (Installing a cluster on vSphere with customizations) -// * installing/installing-vsphere-installer-provisioned-network-customizations.adoc (Installing a cluster on vSphere with network customizations) -// * installing/installing-restricted-networks-installer-provisioned-vsphere.adoc (Installing a cluster on vSphere in a restricted network) - -ifeval::["{context}" == "installing-vsphere-installer-provisioned"] -:vsphere: -endif::[] -ifeval::["{context}" == "installing-vsphere-installer-provisioned-customizations"] -:vsphere: -endif::[] -ifeval::["{context}" == "installing-vsphere-installer-provisioned-network-customizations"] -:vsphere: -endif::[] -ifeval::["{context}" == installing-restricted-networks-installer-provisioned-vsphere] -:vsphere: -endif::[] +// OpenStack +// * networking/load-balancing-openstack.adoc +// Bare metal +// * installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc +// * installing/installing_bare_metal_ipi/ipi-install-post-installation-configuration.adoc +// vSphere +// * installing/installing-vsphere-installer-provisioned-customizations.adoc +// * installing/installing-vsphere-installer-provisioned-network-customizations.adoc +// * installing/installing-restricted-networks-installer-provisioned-vsphere.adoc :_mod-docs-content-type: CONCEPT [id="nw-osp-services-external-load-balancer_{context}"] -= Services for an external load balancer += Services for a user-managed load balancer You can configure an {product-title} cluster ifeval::["{context}" == "load-balancing-openstack"] on {rh-openstack-first} endif::[] -to use an external load balancer in place of the default load balancer. +to use a user-managed load balancer in place of the default load balancer. [IMPORTANT] ==== -Configuring an external load balancer depends on your vendor's load balancer. +Configuring a user-managed load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. 
==== -Red Hat supports the following services for an external load balancer: +Red Hat supports the following services for a user-managed load balancer: * Ingress Controller * OpenShift API * OpenShift MachineConfig API -You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: +You can choose whether you want to configure one or all of these services for a user-managed load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: .Example network workflow that shows an Ingress Controller operating in an {product-title} environment image::external-load-balancer-default.png[An image that shows an example network workflow of an Ingress Controller operating in an {product-title} environment.] @@ -54,7 +44,7 @@ image::external-load-balancer-openshift-api.png[An image that shows an example n .Example network workflow that shows an OpenShift MachineConfig API operating in an {product-title} environment image::external-load-balancer-machine-config-api.png[An image that shows an example network workflow of an OpenShift MachineConfig API operating in an {product-title} environment.] -The following configuration options are supported for external load balancers: +The following configuration options are supported for user-managed load balancers: * Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. @@ -65,25 +55,12 @@ The following configuration options are supported for external load balancers: You can list all IP addresses that exist in a network by checking the machine config pool's resources. ==== -Before you configure an external load balancer for your {product-title} cluster, consider the following information: +Before you configure a user-managed load balancer for your {product-title} cluster, consider the following information: * For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and API load balancer. Check the vendor's documentation for this capability. -* For a back-end IP address, ensure that an IP address for an {product-title} control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: +* For a back-end IP address, ensure that an IP address for an {product-title} control plane node does not change during the lifetime of the user-managed load balancer. You can achieve this by completing one of the following actions: ** Assign a static IP address to each control plane node. ** Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. -* Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 
- -ifeval::["{context}" == "installing-vsphere-installer-provisioned"] -:!vsphere: -endif::[] -ifeval::["{context}" == "installing-vsphere-installer-provisioned-customizations"] -:!vsphere: -endif::[] -ifeval::["{context}" == "installing-vsphere-installer-provisioned-network-customizations"] -:!vsphere: -endif::[] -ifeval::["{context}" == installing-restricted-networks-installer-provisioned-vsphere] -:!vsphere: -endif::[] +* Manually define each node that runs the Ingress Controller in the user-managed load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. diff --git a/networking/load-balancing-openstack.adoc b/networking/load-balancing-openstack.adoc index 2db8630441..cecb255182 100644 --- a/networking/load-balancing-openstack.adoc +++ b/networking/load-balancing-openstack.adoc @@ -11,8 +11,8 @@ include::modules/nw-osp-loadbalancer-etp-local.adoc[leveloffset=+2] include::modules/installation-osp-api-octavia.adoc[leveloffset=+1] include::modules/installation-osp-api-scaling.adoc[leveloffset=+2] -// Services for an external load balancer +// Services for a user-managed load balancer include::modules/nw-osp-services-external-load-balancer.adoc[leveloffset=+1] -// Configuring an external load balancer +// Configuring a user-managed load balancer include::modules/nw-osp-configuring-external-load-balancer.adoc[leveloffset=+2] diff --git a/rest_api/operator_apis/ingresscontroller-operator-openshift-io-v1.adoc b/rest_api/operator_apis/ingresscontroller-operator-openshift-io-v1.adoc index 4f3b31fc3c..b8cde0895d 100644 --- a/rest_api/operator_apis/ingresscontroller-operator-openshift-io-v1.adoc +++ b/rest_api/operator_apis/ingresscontroller-operator-openshift-io-v1.adoc @@ -11,10 +11,10 @@ toc::[] Description:: + -- -IngressController describes a managed ingress controller for the cluster. The controller can service OpenShift Route and Kubernetes Ingress resources. - When an IngressController is created, a new ingress controller deployment is created to allow external traffic to reach the services that expose Ingress or Route resources. Updating this resource may lead to disruption for public facing network connections as a new ingress controller revision may be rolled out. - https://kubernetes.io/docs/concepts/services-networking/ingress-controllers - Whenever possible, sensible defaults for the platform are used. See each field for more details. +IngressController describes a managed ingress controller for the cluster. The controller can service OpenShift Route and Kubernetes Ingress resources. + When an IngressController is created, a new ingress controller deployment is created to allow external traffic to reach the services that expose Ingress or Route resources. Updating this resource may lead to disruption for public facing network connections as a new ingress controller revision may be rolled out. + https://kubernetes.io/docs/concepts/services-networking/ingress-controllers + Whenever possible, sensible defaults for the platform are used. See each field for more details. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). -- @@ -73,28 +73,28 @@ Type:: | `defaultCertificate` | `object` -| defaultCertificate is a reference to a secret containing the default certificate served by the ingress controller. When Routes don't specify their own certificate, defaultCertificate is used. 
- The secret must contain the following keys and data: - tls.crt: certificate file contents tls.key: key file contents - If unset, a wildcard certificate is automatically generated and used. The certificate is valid for the ingress controller domain (and subdomains) and the generated certificate's CA will be automatically integrated with the cluster's trust store. - If a wildcard certificate is used and shared by multiple HTTP/2 enabled routes (which implies ALPN) then clients (i.e., notably browsers) are at liberty to reuse open connections. This means a client can reuse a connection to another route and that is likely to fail. This behaviour is generally known as connection coalescing. +| defaultCertificate is a reference to a secret containing the default certificate served by the ingress controller. When Routes don't specify their own certificate, defaultCertificate is used. + The secret must contain the following keys and data: + tls.crt: certificate file contents tls.key: key file contents + If unset, a wildcard certificate is automatically generated and used. The certificate is valid for the ingress controller domain (and subdomains) and the generated certificate's CA will be automatically integrated with the cluster's trust store. + If a wildcard certificate is used and shared by multiple HTTP/2 enabled routes (which implies ALPN) then clients (i.e., notably browsers) are at liberty to reuse open connections. This means a client can reuse a connection to another route and that is likely to fail. This behaviour is generally known as connection coalescing. The in-use certificate (whether generated or user-specified) will be automatically integrated with OpenShift's built-in OAuth server. | `domain` | `string` -| domain is a DNS name serviced by the ingress controller and is used to configure multiple features: - * For the LoadBalancerService endpoint publishing strategy, domain is used to configure DNS records. See endpointPublishingStrategy. - * When using a generated default certificate, the certificate will be valid for domain and its subdomains. See defaultCertificate. - * The value is published to individual Route statuses so that end-users know where to target external DNS records. - domain must be unique among all IngressControllers, and cannot be updated. +| domain is a DNS name serviced by the ingress controller and is used to configure multiple features: + * For the LoadBalancerService endpoint publishing strategy, domain is used to configure DNS records. See endpointPublishingStrategy. + * When using a generated default certificate, the certificate will be valid for domain and its subdomains. See defaultCertificate. + * The value is published to individual Route statuses so that end-users know where to target external DNS records. + domain must be unique among all IngressControllers, and cannot be updated. If empty, defaults to ingress.config.openshift.io/cluster .spec.domain. | `endpointPublishingStrategy` | `object` -| endpointPublishingStrategy is used to publish the ingress controller endpoints to other networks, enable load balancer integrations, etc. 
- If unset, the default is based on infrastructure.config.openshift.io/cluster .status.platform: - AWS: LoadBalancerService (with External scope) Azure: LoadBalancerService (with External scope) GCP: LoadBalancerService (with External scope) IBMCloud: LoadBalancerService (with External scope) AlibabaCloud: LoadBalancerService (with External scope) Libvirt: HostNetwork - Any other platform types (including None) default to HostNetwork. +| endpointPublishingStrategy is used to publish the ingress controller endpoints to other networks, enable load balancer integrations, etc. + If unset, the default is based on infrastructure.config.openshift.io/cluster .status.platform: + AWS: LoadBalancerService (with External scope) Azure: LoadBalancerService (with External scope) GCP: LoadBalancerService (with External scope) IBMCloud: LoadBalancerService (with External scope) AlibabaCloud: LoadBalancerService (with External scope) Libvirt: HostNetwork + Any other platform types (including None) default to HostNetwork. endpointPublishingStrategy cannot be updated. | `httpCompression` @@ -103,7 +103,7 @@ Type:: | `httpEmptyRequestsPolicy` | `string` -| httpEmptyRequestsPolicy describes how HTTP connections should be handled if the connection times out before a request is received. Allowed values for this field are "Respond" and "Ignore". If the field is set to "Respond", the ingress controller sends an HTTP 400 or 408 response, logs the connection (if access logging is enabled), and counts the connection in the appropriate metrics. If the field is set to "Ignore", the ingress controller closes the connection without sending a response, logging the connection, or incrementing metrics. The default value is "Respond". +| httpEmptyRequestsPolicy describes how HTTP connections should be handled if the connection times out before a request is received. Allowed values for this field are "Respond" and "Ignore". If the field is set to "Respond", the ingress controller sends an HTTP 400 or 408 response, logs the connection (if access logging is enabled), and counts the connection in the appropriate metrics. If the field is set to "Ignore", the ingress controller closes the connection without sending a response, logging the connection, or incrementing metrics. The default value is "Respond". Typically, these connections come from load balancers' health probes or Web browsers' speculative connections ("preconnect") and can be safely ignored. However, these requests may also be caused by network errors, and so setting this field to "Ignore" may impede detection and diagnosis of problems. In addition, these requests may be caused by port scans, in which case logging empty requests may aid in detecting intrusion attempts. | `httpErrorCodePages` @@ -112,7 +112,7 @@ Type:: | `httpHeaders` | `object` -| httpHeaders defines policy for HTTP headers. +| httpHeaders defines policy for HTTP headers. If this field is empty, the default values are used. | `logging` @@ -121,39 +121,39 @@ Type:: | `namespaceSelector` | `object` -| namespaceSelector is used to filter the set of namespaces serviced by the ingress controller. This is useful for implementing shards. +| namespaceSelector is used to filter the set of namespaces serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. | `nodePlacement` | `object` -| nodePlacement enables explicit control over the scheduling of the ingress controller. 
+| nodePlacement enables explicit control over the scheduling of the ingress controller. If unset, defaults are used. See NodePlacement for more details. | `replicas` | `integer` -| replicas is the desired number of ingress controller replicas. If unset, the default depends on the value of the defaultPlacement field in the cluster config.openshift.io/v1/ingresses status. - The value of replicas is set based on the value of a chosen field in the Infrastructure CR. If defaultPlacement is set to ControlPlane, the chosen field will be controlPlaneTopology. If it is set to Workers the chosen field will be infrastructureTopology. Replicas will then be set to 1 or 2 based whether the chosen field's value is SingleReplica or HighlyAvailable, respectively. +| replicas is the desired number of ingress controller replicas. If unset, the default depends on the value of the defaultPlacement field in the cluster config.openshift.io/v1/ingresses status. + The value of replicas is set based on the value of a chosen field in the Infrastructure CR. If defaultPlacement is set to ControlPlane, the chosen field will be controlPlaneTopology. If it is set to Workers the chosen field will be infrastructureTopology. Replicas will then be set to 1 or 2 based whether the chosen field's value is SingleReplica or HighlyAvailable, respectively. These defaults are subject to change. | `routeAdmission` | `object` -| routeAdmission defines a policy for handling new route claims (for example, to allow or deny claims across namespaces). +| routeAdmission defines a policy for handling new route claims (for example, to allow or deny claims across namespaces). If empty, defaults will be applied. See specific routeAdmission fields for details about their defaults. | `routeSelector` | `object` -| routeSelector is used to filter the set of Routes serviced by the ingress controller. This is useful for implementing shards. +| routeSelector is used to filter the set of Routes serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. | `tlsSecurityProfile` | `object` -| tlsSecurityProfile specifies settings for TLS connections for ingresscontrollers. - If unset, the default is based on the apiservers.config.openshift.io/cluster resource. +| tlsSecurityProfile specifies settings for TLS connections for ingresscontrollers. + If unset, the default is based on the apiservers.config.openshift.io/cluster resource. Note that when using the Old, Intermediate, and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the ingress controller, resulting in a rollout. | `tuningOptions` | `object` -| tuningOptions defines parameters for adjusting the performance of ingress controller pods. All fields are optional and will use their respective defaults if not set. See specific tuningOptions fields for more details. +| tuningOptions defines parameters for adjusting the performance of ingress controller pods. All fields are optional and will use their respective defaults if not set. See specific tuningOptions fields for more details. Setting fields within tuningOptions is generally not recommended. The default values are suitable for most configurations. 
| `unsupportedConfigOverrides` @@ -191,7 +191,7 @@ Required:: | `clientCertificatePolicy` | `string` -| clientCertificatePolicy specifies whether the ingress controller requires clients to provide certificates. This field accepts the values "Required" or "Optional". +| clientCertificatePolicy specifies whether the ingress controller requires clients to provide certificates. This field accepts the values "Required" or "Optional". Note that the ingress controller only checks client certificates for edge-terminated and reencrypt TLS routes; it cannot check certificates for cleartext HTTP or passthrough TLS routes. |=== @@ -223,11 +223,11 @@ Required:: Description:: + -- -defaultCertificate is a reference to a secret containing the default certificate served by the ingress controller. When Routes don't specify their own certificate, defaultCertificate is used. - The secret must contain the following keys and data: - tls.crt: certificate file contents tls.key: key file contents - If unset, a wildcard certificate is automatically generated and used. The certificate is valid for the ingress controller domain (and subdomains) and the generated certificate's CA will be automatically integrated with the cluster's trust store. - If a wildcard certificate is used and shared by multiple HTTP/2 enabled routes (which implies ALPN) then clients (i.e., notably browsers) are at liberty to reuse open connections. This means a client can reuse a connection to another route and that is likely to fail. This behaviour is generally known as connection coalescing. +defaultCertificate is a reference to a secret containing the default certificate served by the ingress controller. When Routes don't specify their own certificate, defaultCertificate is used. + The secret must contain the following keys and data: + tls.crt: certificate file contents tls.key: key file contents + If unset, a wildcard certificate is automatically generated and used. The certificate is valid for the ingress controller domain (and subdomains) and the generated certificate's CA will be automatically integrated with the cluster's trust store. + If a wildcard certificate is used and shared by multiple HTTP/2 enabled routes (which implies ALPN) then clients (i.e., notably browsers) are at liberty to reuse open connections. This means a client can reuse a connection to another route and that is likely to fail. This behaviour is generally known as connection coalescing. The in-use certificate (whether generated or user-specified) will be automatically integrated with OpenShift's built-in OAuth server. -- @@ -250,10 +250,10 @@ Type:: Description:: + -- -endpointPublishingStrategy is used to publish the ingress controller endpoints to other networks, enable load balancer integrations, etc. - If unset, the default is based on infrastructure.config.openshift.io/cluster .status.platform: - AWS: LoadBalancerService (with External scope) Azure: LoadBalancerService (with External scope) GCP: LoadBalancerService (with External scope) IBMCloud: LoadBalancerService (with External scope) AlibabaCloud: LoadBalancerService (with External scope) Libvirt: HostNetwork - Any other platform types (including None) default to HostNetwork. +endpointPublishingStrategy is used to publish the ingress controller endpoints to other networks, enable load balancer integrations, etc. 
+ If unset, the default is based on infrastructure.config.openshift.io/cluster .status.platform: + AWS: LoadBalancerService (with External scope) Azure: LoadBalancerService (with External scope) GCP: LoadBalancerService (with External scope) IBMCloud: LoadBalancerService (with External scope) AlibabaCloud: LoadBalancerService (with External scope) Libvirt: HostNetwork + Any other platform types (including None) default to HostNetwork. endpointPublishingStrategy cannot be updated. -- @@ -287,21 +287,21 @@ Required:: | `type` | `string` -| type is the publishing strategy to use. Valid values are: - * LoadBalancerService - Publishes the ingress controller using a Kubernetes LoadBalancer Service. - In this configuration, the ingress controller deployment uses container networking. A LoadBalancer Service is created to publish the deployment. - See: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer - If domain is set, a wildcard DNS record will be managed to point at the LoadBalancer Service's external name. DNS records are managed only in DNS zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. - Wildcard DNS management is currently supported only on the AWS, Azure, and GCP platforms. - * HostNetwork - Publishes the ingress controller on node ports where the ingress controller is deployed. - In this configuration, the ingress controller deployment uses host networking, bound to node ports 80 and 443. The user is responsible for configuring an external load balancer to publish the ingress controller via the node ports. - * Private - Does not publish the ingress controller. - In this configuration, the ingress controller deployment uses container networking, and is not explicitly published. The user must manually publish the ingress controller. - * NodePortService - Publishes the ingress controller using a Kubernetes NodePort Service. +| type is the publishing strategy to use. Valid values are: + * LoadBalancerService + Publishes the ingress controller using a Kubernetes LoadBalancer Service. + In this configuration, the ingress controller deployment uses container networking. A LoadBalancer Service is created to publish the deployment. + See: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer + If domain is set, a wildcard DNS record will be managed to point at the LoadBalancer Service's external name. DNS records are managed only in DNS zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. + Wildcard DNS management is currently supported only on the AWS, Azure, and GCP platforms. + * HostNetwork + Publishes the ingress controller on node ports where the ingress controller is deployed. + In this configuration, the ingress controller deployment uses host networking, bound to node ports 80 and 443. The user is responsible for configuring a user-managed load balancer to publish the ingress controller via the node ports. + * Private + Does not publish the ingress controller. + In this configuration, the ingress controller deployment uses container networking, and is not explicitly published. The user must manually publish the ingress controller. + * NodePortService + Publishes the ingress controller using a Kubernetes NodePort Service. In this configuration, the ingress controller deployment uses container networking. A NodePort Service is created to publish the deployment. 
The specific node ports are dynamically allocated by OpenShift; however, to support static port allocations, user changes to the node port field of the managed NodePort Service will preserved. |=== @@ -332,10 +332,10 @@ Type:: | `protocol` | `string` -| protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. - PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. - The following values are valid for this field: - * The empty string. * "TCP". * "PROXY". +| protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. + PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. + The following values are valid for this field: + * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. | `statsPort` @@ -365,7 +365,7 @@ Required:: | `allowedSourceRanges` | `` -| allowedSourceRanges specifies an allowlist of IP address ranges to which access to the load balancer should be restricted. Each range must be specified using CIDR notation (e.g. "10.0.0.0/8" or "fd00::/8"). If no range is specified, "0.0.0.0/0" for IPv4 and "::/0" for IPv6 are used by default, which allows all source addresses. +| allowedSourceRanges specifies an allowlist of IP address ranges to which access to the load balancer should be restricted. Each range must be specified using CIDR notation (e.g. "10.0.0.0/8" or "fd00::/8"). If no range is specified, "0.0.0.0/0" for IPv4 and "::/0" for IPv6 are used by default, which allows all source addresses. To facilitate migration from earlier versions of OpenShift that did not have the allowedSourceRanges field, you may set the service.beta.kubernetes.io/load-balancer-source-ranges annotation on the "router-" service in the "openshift-ingress" namespace, and this annotation will take effect if allowedSourceRanges is empty on OpenShift 4.12. | `dnsManagementPolicy` @@ -374,7 +374,7 @@ Required:: | `providerParameters` | `object` -| providerParameters holds desired load balancer information specific to the underlying infrastructure provider. +| providerParameters holds desired load balancer information specific to the underlying infrastructure provider. 
If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. | `scope` @@ -386,7 +386,7 @@ Required:: Description:: + -- -providerParameters holds desired load balancer information specific to the underlying infrastructure provider. +providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. -- @@ -404,17 +404,17 @@ Required:: | `aws` | `object` -| aws provides configuration settings that are specific to AWS load balancers. +| aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. See specific aws fields for details about their defaults. | `gcp` | `object` -| gcp provides configuration settings that are specific to GCP load balancers. +| gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. | `ibm` | `object` -| ibm provides configuration settings that are specific to IBM Cloud load balancers. +| ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. | `type` @@ -426,7 +426,7 @@ Required:: Description:: + -- -aws provides configuration settings that are specific to AWS load balancers. +aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. See specific aws fields for details about their defaults. -- @@ -452,11 +452,11 @@ Required:: | `type` | `string` -| type is the type of AWS load balancer to instantiate for an ingresscontroller. - Valid values are: - * "Classic": A Classic Load Balancer that makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). See the following for additional details: - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#clb - * "NLB": A Network Load Balancer that makes routing decisions at the transport layer (TCP/SSL). See the following for additional details: +| type is the type of AWS load balancer to instantiate for an ingresscontroller. + Valid values are: + * "Classic": A Classic Load Balancer that makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). See the following for additional details: + https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#clb + * "NLB": A Network Load Balancer that makes routing decisions at the transport layer (TCP/SSL). See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#nlb |=== @@ -499,7 +499,7 @@ Type:: Description:: + -- -gcp provides configuration settings that are specific to GCP load balancers. +gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. -- @@ -515,10 +515,10 @@ Type:: | `clientAccess` | `string` -| clientAccess describes how client access is restricted for internal load balancers. - Valid values are: * "Global": Specifying an internal load balancer with Global client access allows clients from any region within the VPC to communicate with the load balancer. 
- https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing#global_access - * "Local": Specifying an internal load balancer with Local client access means only clients within the same region (and VPC) as the GCP load balancer can communicate with the load balancer. Note that this is the default behavior. +| clientAccess describes how client access is restricted for internal load balancers. + Valid values are: * "Global": Specifying an internal load balancer with Global client access allows clients from any region within the VPC to communicate with the load balancer. + https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing#global_access + * "Local": Specifying an internal load balancer with Local client access means only clients within the same region (and VPC) as the GCP load balancer can communicate with the load balancer. Note that this is the default behavior. https://cloud.google.com/load-balancing/docs/internal#client_access |=== @@ -526,7 +526,7 @@ Type:: Description:: + -- -ibm provides configuration settings that are specific to IBM Cloud load balancers. +ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. -- @@ -542,8 +542,8 @@ Type:: | `protocol` | `string` -| protocol specifies whether the load balancer uses PROXY protocol to forward connections to the IngressController. See "service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol"" at https://cloud.ibm.com/docs/containers?topic=containers-vpc-lbaas" - PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. +| protocol specifies whether the load balancer uses PROXY protocol to forward connections to the IngressController. See "service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol"" at https://cloud.ibm.com/docs/containers?topic=containers-vpc-lbaas" + PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. Valid values for protocol are TCP, PROXY and omitted. When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time. The current default is TCP, without the proxy protocol enabled. 
|=== @@ -566,10 +566,10 @@ Type:: | `protocol` | `string` -| protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. - PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. - The following values are valid for this field: - * The empty string. * "TCP". * "PROXY". +| protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. + PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. + The following values are valid for this field: + * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. |=== @@ -592,10 +592,10 @@ Type:: | `protocol` | `string` -| protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. - PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. - The following values are valid for this field: - * The empty string. * "TCP". * "PROXY". +| protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. + PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. 
Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. + The following values are valid for this field: + * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. |=== @@ -618,7 +618,7 @@ Type:: | `mimeTypes` | `array (string)` -| mimeTypes is a list of MIME types that should have compression applied. This list can be empty, in which case the ingress controller does not apply compression. +| mimeTypes is a list of MIME types that should have compression applied. This list can be empty, in which case the ingress controller does not apply compression. Note: Not all MIME types benefit from compression, but HAProxy will still use resources to try to compress if instructed to. Generally speaking, text (html, css, js, etc.) formats benefit from compression, but formats that are already compressed (image, audio, video, etc.) benefit little in exchange for the time and cpu spent on compressing again. See https://joehonton.medium.com/the-gzip-penalty-d31bd697f1a2 |=== @@ -650,7 +650,7 @@ Required:: Description:: + -- -httpHeaders defines policy for HTTP headers. +httpHeaders defines policy for HTTP headers. If this field is empty, the default values are used. -- @@ -670,23 +670,23 @@ Type:: | `forwardedHeaderPolicy` | `string` -| forwardedHeaderPolicy specifies when and how the IngressController sets the Forwarded, X-Forwarded-For, X-Forwarded-Host, X-Forwarded-Port, X-Forwarded-Proto, and X-Forwarded-Proto-Version HTTP headers. The value may be one of the following: - * "Append", which specifies that the IngressController appends the headers, preserving existing headers. - * "Replace", which specifies that the IngressController sets the headers, replacing any existing Forwarded or X-Forwarded-* headers. - * "IfNone", which specifies that the IngressController sets the headers if they are not already set. - * "Never", which specifies that the IngressController never sets the headers, preserving any existing headers. +| forwardedHeaderPolicy specifies when and how the IngressController sets the Forwarded, X-Forwarded-For, X-Forwarded-Host, X-Forwarded-Port, X-Forwarded-Proto, and X-Forwarded-Proto-Version HTTP headers. The value may be one of the following: + * "Append", which specifies that the IngressController appends the headers, preserving existing headers. + * "Replace", which specifies that the IngressController sets the headers, replacing any existing Forwarded or X-Forwarded-* headers. + * "IfNone", which specifies that the IngressController sets the headers if they are not already set. + * "Never", which specifies that the IngressController never sets the headers, preserving any existing headers. By default, the policy is "Append". | `headerNameCaseAdjustments` | `` -| headerNameCaseAdjustments specifies case adjustments that can be applied to HTTP header names. Each adjustment is specified as an HTTP header name with the desired capitalization. For example, specifying "X-Forwarded-For" indicates that the "x-forwarded-for" HTTP header should be adjusted to have the specified capitalization. - These adjustments are only applied to cleartext, edge-terminated, and re-encrypt routes, and only when using HTTP/1. 
- For request headers, these adjustments are applied only for routes that have the haproxy.router.openshift.io/h1-adjust-case=true annotation. For response headers, these adjustments are applied to all HTTP responses. +| headerNameCaseAdjustments specifies case adjustments that can be applied to HTTP header names. Each adjustment is specified as an HTTP header name with the desired capitalization. For example, specifying "X-Forwarded-For" indicates that the "x-forwarded-for" HTTP header should be adjusted to have the specified capitalization. + These adjustments are only applied to cleartext, edge-terminated, and re-encrypt routes, and only when using HTTP/1. + For request headers, these adjustments are applied only for routes that have the haproxy.router.openshift.io/h1-adjust-case=true annotation. For response headers, these adjustments are applied to all HTTP responses. If this field is empty, no request headers are adjusted. | `uniqueId` | `object` -| uniqueId describes configuration for a custom HTTP header that the ingress controller should inject into incoming HTTP requests. Typically, this header is configured to have a value that is unique to the HTTP request. The header can be used by applications or included in access logs to facilitate tracing individual HTTP requests. +| uniqueId describes configuration for a custom HTTP header that the ingress controller should inject into incoming HTTP requests. Typically, this header is configured to have a value that is unique to the HTTP request. The header can be used by applications or included in access logs to facilitate tracing individual HTTP requests. If this field is empty, no such header is injected into requests. |=== @@ -916,7 +916,7 @@ Required:: Description:: + -- -uniqueId describes configuration for a custom HTTP header that the ingress controller should inject into incoming HTTP requests. Typically, this header is configured to have a value that is unique to the HTTP request. The header can be used by applications or included in access logs to facilitate tracing individual HTTP requests. +uniqueId describes configuration for a custom HTTP header that the ingress controller should inject into incoming HTTP requests. Typically, this header is configured to have a value that is unique to the HTTP request. The header can be used by applications or included in access logs to facilitate tracing individual HTTP requests. If this field is empty, no such header is injected into requests. -- @@ -958,7 +958,7 @@ Type:: | `access` | `object` -| access describes how the client requests should be logged. +| access describes how the client requests should be logged. If this field is empty, access logging is disabled. |=== @@ -966,7 +966,7 @@ Type:: Description:: + -- -access describes how the client requests should be logged. +access describes how the client requests should be logged. If this field is empty, access logging is disabled. -- @@ -992,13 +992,13 @@ Required:: | `httpCaptureHeaders` | `object` -| httpCaptureHeaders defines HTTP headers that should be captured in access logs. If this field is empty, no headers are captured. +| httpCaptureHeaders defines HTTP headers that should be captured in access logs. If this field is empty, no headers are captured. Note that this option only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). Headers cannot be captured for TLS passthrough connections. 
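An illustrative sketch of how the access-logging and header-capture fields described above combine; the destination type, header names, and lengths shown are assumptions chosen only as examples.

[source,yaml]
----
spec:
  logging:
    access:
      destination:
        type: Container            # logs are written to the "logs" sidecar container
      httpCaptureHeaders:
        request:
        - name: Host               # example request header to capture; maxLength truncates the logged value
          maxLength: 128
        response:
        - name: Content-Type       # example response header to capture
          maxLength: 64
----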
| `httpLogFormat` | `string` -| httpLogFormat specifies the format of the log message for an HTTP request. - If this field is empty, log messages use the implementation's default HTTP log format. For HAProxy's default HTTP log format, see the HAProxy documentation: http://cbonte.github.io/haproxy-dconv/2.0/configuration.html#8.2.3 +| httpLogFormat specifies the format of the log message for an HTTP request. + If this field is empty, log messages use the implementation's default HTTP log format. For HAProxy's default HTTP log format, see the HAProxy documentation: http://cbonte.github.io/haproxy-dconv/2.0/configuration.html#8.2.3 Note that this format only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). It does not affect the log format for TLS passthrough connections. | `logEmptyRequests` @@ -1035,10 +1035,10 @@ Required:: | `type` | `string` -| type is the type of destination for logs. It must be one of the following: - * Container - The ingress operator configures the sidecar container named "logs" on the ingress controller pod and configures the ingress controller to write logs to the sidecar. The logs are then available as container logs. The expectation is that the administrator configures a custom logging solution that reads logs from this sidecar. Note that using container logs means that logs may be dropped if the rate of logs exceeds the container runtime's or the custom logging solution's capacity. - * Syslog +| type is the type of destination for logs. It must be one of the following: + * Container + The ingress operator configures the sidecar container named "logs" on the ingress controller pod and configures the ingress controller to write logs to the sidecar. The logs are then available as container logs. The expectation is that the administrator configures a custom logging solution that reads logs from this sidecar. Note that using container logs means that logs may be dropped if the rate of logs exceeds the container runtime's or the custom logging solution's capacity. + * Syslog Logs are sent to a syslog endpoint. The administrator must specify an endpoint that can receive syslog messages. The expectation is that the administrator has configured a custom syslog instance. |=== @@ -1061,8 +1061,8 @@ Type:: | `maxLength` | `integer` -| maxLength is the maximum length of the log message. - Valid values are integers in the range 480 to 8192, inclusive. +| maxLength is the maximum length of the log message. + Valid values are integers in the range 480 to 8192, inclusive. When omitted, the default value is 1024. |=== @@ -1092,13 +1092,13 @@ Required:: | `facility` | `string` -| facility specifies the syslog facility of log messages. +| facility specifies the syslog facility of log messages. If this field is empty, the facility is "local1". | `maxLength` | `integer` -| maxLength is the maximum length of the log message. - Valid values are integers in the range 480 to 4096, inclusive. +| maxLength is the maximum length of the log message. + Valid values are integers in the range 480 to 4096, inclusive. When omitted, the default value is 1024. | `port` @@ -1110,7 +1110,7 @@ Required:: Description:: + -- -httpCaptureHeaders defines HTTP headers that should be captured in access logs. If this field is empty, no headers are captured. +httpCaptureHeaders defines HTTP headers that should be captured in access logs. 
If this field is empty, no headers are captured. Note that this option only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). Headers cannot be captured for TLS passthrough connections. -- @@ -1126,12 +1126,12 @@ Type:: | `request` | `` -| request specifies which HTTP request headers to capture. +| request specifies which HTTP request headers to capture. If this field is empty, no request headers are captured. | `response` | `` -| response specifies which HTTP response headers to capture. +| response specifies which HTTP response headers to capture. If this field is empty, no response headers are captured. |=== @@ -1139,7 +1139,7 @@ Type:: Description:: + -- -namespaceSelector is used to filter the set of namespaces serviced by the ingress controller. This is useful for implementing shards. +namespaceSelector is used to filter the set of namespaces serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. -- @@ -1216,7 +1216,7 @@ Required:: Description:: + -- -nodePlacement enables explicit control over the scheduling of the ingress controller. +nodePlacement enables explicit control over the scheduling of the ingress controller. If unset, defaults are used. See NodePlacement for more details. -- @@ -1232,20 +1232,20 @@ Type:: | `nodeSelector` | `object` -| nodeSelector is the node selector applied to ingress controller deployments. - If set, the specified selector is used and replaces the default. - If unset, the default depends on the value of the defaultPlacement field in the cluster config.openshift.io/v1/ingresses status. - When defaultPlacement is Workers, the default is: - kubernetes.io/os: linux node-role.kubernetes.io/worker: '' - When defaultPlacement is ControlPlane, the default is: - kubernetes.io/os: linux node-role.kubernetes.io/master: '' - These defaults are subject to change. +| nodeSelector is the node selector applied to ingress controller deployments. + If set, the specified selector is used and replaces the default. + If unset, the default depends on the value of the defaultPlacement field in the cluster config.openshift.io/v1/ingresses status. + When defaultPlacement is Workers, the default is: + kubernetes.io/os: linux node-role.kubernetes.io/worker: '' + When defaultPlacement is ControlPlane, the default is: + kubernetes.io/os: linux node-role.kubernetes.io/master: '' + These defaults are subject to change. Note that using nodeSelector.matchExpressions is not supported. Only nodeSelector.matchLabels may be used. This is a limitation of the Kubernetes API: the pod spec does not allow complex expressions for node selectors. | `tolerations` | `array` -| tolerations is a list of tolerations applied to ingress controller deployments. - The default is an empty list. +| tolerations is a list of tolerations applied to ingress controller deployments. + The default is an empty list. See https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ | `tolerations[]` @@ -1257,14 +1257,14 @@ Type:: Description:: + -- -nodeSelector is the node selector applied to ingress controller deployments. - If set, the specified selector is used and replaces the default. - If unset, the default depends on the value of the defaultPlacement field in the cluster config.openshift.io/v1/ingresses status. 
- When defaultPlacement is Workers, the default is: - kubernetes.io/os: linux node-role.kubernetes.io/worker: '' - When defaultPlacement is ControlPlane, the default is: - kubernetes.io/os: linux node-role.kubernetes.io/master: '' - These defaults are subject to change. +nodeSelector is the node selector applied to ingress controller deployments. + If set, the specified selector is used and replaces the default. + If unset, the default depends on the value of the defaultPlacement field in the cluster config.openshift.io/v1/ingresses status. + When defaultPlacement is Workers, the default is: + kubernetes.io/os: linux node-role.kubernetes.io/worker: '' + When defaultPlacement is ControlPlane, the default is: + kubernetes.io/os: linux node-role.kubernetes.io/master: '' + These defaults are subject to change. Note that using nodeSelector.matchExpressions is not supported. Only nodeSelector.matchLabels may be used. This is a limitation of the Kubernetes API: the pod spec does not allow complex expressions for node selectors. -- @@ -1341,8 +1341,8 @@ Required:: Description:: + -- -tolerations is a list of tolerations applied to ingress controller deployments. - The default is an empty list. +tolerations is a list of tolerations applied to ingress controller deployments. + The default is an empty list. See https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ -- @@ -1394,7 +1394,7 @@ Type:: Description:: + -- -routeAdmission defines a policy for handling new route claims (for example, to allow or deny claims across namespaces). +routeAdmission defines a policy for handling new route claims (for example, to allow or deny claims across namespaces). If empty, defaults will be applied. See specific routeAdmission fields for details about their defaults. -- @@ -1410,18 +1410,18 @@ Type:: | `namespaceOwnership` | `string` -| namespaceOwnership describes how host name claims across namespaces should be handled. - Value must be one of: - - Strict: Do not allow routes in different namespaces to claim the same host. - - InterNamespaceAllowed: Allow routes to claim different paths of the same host name across namespaces. +| namespaceOwnership describes how host name claims across namespaces should be handled. + Value must be one of: + - Strict: Do not allow routes in different namespaces to claim the same host. + - InterNamespaceAllowed: Allow routes to claim different paths of the same host name across namespaces. If empty, the default is Strict. | `wildcardPolicy` | `string` -| wildcardPolicy describes how routes with wildcard policies should be handled for the ingress controller. WildcardPolicy controls use of routes [1] exposed by the ingress controller based on the route's wildcard policy. - [1] https://github.com/openshift/api/blob/master/route/v1/types.go - Note: Updating WildcardPolicy from WildcardsAllowed to WildcardsDisallowed will cause admitted routes with a wildcard policy of Subdomain to stop working. These routes must be updated to a wildcard policy of None to be readmitted by the ingress controller. - WildcardPolicy supports WildcardsAllowed and WildcardsDisallowed values. +| wildcardPolicy describes how routes with wildcard policies should be handled for the ingress controller. WildcardPolicy controls use of routes [1] exposed by the ingress controller based on the route's wildcard policy. 
+ [1] https://github.com/openshift/api/blob/master/route/v1/types.go + Note: Updating WildcardPolicy from WildcardsAllowed to WildcardsDisallowed will cause admitted routes with a wildcard policy of Subdomain to stop working. These routes must be updated to a wildcard policy of None to be readmitted by the ingress controller. + WildcardPolicy supports WildcardsAllowed and WildcardsDisallowed values. If empty, defaults to "WildcardsDisallowed". |=== @@ -1429,7 +1429,7 @@ Type:: Description:: + -- -routeSelector is used to filter the set of Routes serviced by the ingress controller. This is useful for implementing shards. +routeSelector is used to filter the set of Routes serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. -- @@ -1506,8 +1506,8 @@ Required:: Description:: + -- -tlsSecurityProfile specifies settings for TLS connections for ingresscontrollers. - If unset, the default is based on the apiservers.config.openshift.io/cluster resource. +tlsSecurityProfile specifies settings for TLS connections for ingresscontrollers. + If unset, the default is based on the apiservers.config.openshift.io/cluster resource. Note that when using the Old, Intermediate, and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the ingress controller, resulting in a rollout. -- @@ -1523,36 +1523,36 @@ Type:: | `custom` | `` -| custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: +| custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 | `intermediate` | `` -| intermediate is a TLS security profile based on: - https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 - and looks like this (yaml): +| intermediate is a TLS security profile based on: + https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 + and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: VersionTLS12 | `modern` | `` -| modern is a TLS security profile based on: - https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility - and looks like this (yaml): - ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 minTLSVersion: VersionTLS13 +| modern is a TLS security profile based on: + https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility + and looks like this (yaml): + ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 minTLSVersion: VersionTLS13 NOTE: Currently unsupported. 
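To make the custom-profile shape above concrete, a sketch of how it sits under tlsSecurityProfile, reusing the ciphers and minTLSVersion from the example in the field description:

[source,yaml]
----
spec:
  tlsSecurityProfile:
    type: Custom                   # type selects the profile; Custom requires the custom block below
    custom:
      ciphers:
      - ECDHE-ECDSA-CHACHA20-POLY1305
      - ECDHE-RSA-CHACHA20-POLY1305
      - ECDHE-RSA-AES128-GCM-SHA256
      - ECDHE-ECDSA-AES128-GCM-SHA256
      minTLSVersion: VersionTLS11
----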
| `old` | `` -| old is a TLS security profile based on: - https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility - and looks like this (yaml): +| old is a TLS security profile based on: + https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility + and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 - DHE-RSA-CHACHA20-POLY1305 - ECDHE-ECDSA-AES128-SHA256 - ECDHE-RSA-AES128-SHA256 - ECDHE-ECDSA-AES128-SHA - ECDHE-RSA-AES128-SHA - ECDHE-ECDSA-AES256-SHA384 - ECDHE-RSA-AES256-SHA384 - ECDHE-ECDSA-AES256-SHA - ECDHE-RSA-AES256-SHA - DHE-RSA-AES128-SHA256 - DHE-RSA-AES256-SHA256 - AES128-GCM-SHA256 - AES256-GCM-SHA384 - AES128-SHA256 - AES256-SHA256 - AES128-SHA - AES256-SHA - DES-CBC3-SHA minTLSVersion: VersionTLS10 | `type` | `string` -| type is one of Old, Intermediate, Modern or Custom. Custom provides the ability to specify individual TLS security profile parameters. Old, Intermediate and Modern are TLS security profiles based on: - https://wiki.mozilla.org/Security/Server_Side_TLS#Recommended_configurations - The profiles are intent based, so they may change over time as new ciphers are developed and existing ciphers are found to be insecure. Depending on precisely which ciphers are available to a process, the list may be reduced. +| type is one of Old, Intermediate, Modern or Custom. Custom provides the ability to specify individual TLS security profile parameters. Old, Intermediate and Modern are TLS security profiles based on: + https://wiki.mozilla.org/Security/Server_Side_TLS#Recommended_configurations + The profiles are intent based, so they may change over time as new ciphers are developed and existing ciphers are found to be insecure. Depending on precisely which ciphers are available to a process, the list may be reduced. Note that the Modern profile is currently not supported because it is not yet well adopted by common software libraries. |=== @@ -1560,7 +1560,7 @@ Type:: Description:: + -- -tuningOptions defines parameters for adjusting the performance of ingress controller pods. All fields are optional and will use their respective defaults if not set. See specific tuningOptions fields for more details. +tuningOptions defines parameters for adjusting the performance of ingress controller pods. All fields are optional and will use their respective defaults if not set. See specific tuningOptions fields for more details. Setting fields within tuningOptions is generally not recommended. The default values are suitable for most configurations. -- @@ -1576,75 +1576,75 @@ Type:: | `clientFinTimeout` | `string` -| clientFinTimeout defines how long a connection will be held open while waiting for the client response to the server/backend closing the connection. +| clientFinTimeout defines how long a connection will be held open while waiting for the client response to the server/backend closing the connection. If unset, the default timeout is 1s | `clientTimeout` | `string` -| clientTimeout defines how long a connection will be held open while waiting for a client response. +| clientTimeout defines how long a connection will be held open while waiting for a client response. 
If unset, the default timeout is 30s | `headerBufferBytes` | `integer` -| headerBufferBytes describes how much memory should be reserved (in bytes) for IngressController connection sessions. Note that this value must be at least 16384 if HTTP/2 is enabled for the IngressController (https://tools.ietf.org/html/rfc7540). If this field is empty, the IngressController will use a default value of 32768 bytes. +| headerBufferBytes describes how much memory should be reserved (in bytes) for IngressController connection sessions. Note that this value must be at least 16384 if HTTP/2 is enabled for the IngressController (https://tools.ietf.org/html/rfc7540). If this field is empty, the IngressController will use a default value of 32768 bytes. Setting this field is generally not recommended as headerBufferBytes values that are too small may break the IngressController and headerBufferBytes values that are too large could cause the IngressController to use significantly more memory than necessary. | `headerBufferMaxRewriteBytes` | `integer` -| headerBufferMaxRewriteBytes describes how much memory should be reserved (in bytes) from headerBufferBytes for HTTP header rewriting and appending for IngressController connection sessions. Note that incoming HTTP requests will be limited to (headerBufferBytes - headerBufferMaxRewriteBytes) bytes, meaning headerBufferBytes must be greater than headerBufferMaxRewriteBytes. If this field is empty, the IngressController will use a default value of 8192 bytes. +| headerBufferMaxRewriteBytes describes how much memory should be reserved (in bytes) from headerBufferBytes for HTTP header rewriting and appending for IngressController connection sessions. Note that incoming HTTP requests will be limited to (headerBufferBytes - headerBufferMaxRewriteBytes) bytes, meaning headerBufferBytes must be greater than headerBufferMaxRewriteBytes. If this field is empty, the IngressController will use a default value of 8192 bytes. Setting this field is generally not recommended as headerBufferMaxRewriteBytes values that are too small may break the IngressController and headerBufferMaxRewriteBytes values that are too large could cause the IngressController to use significantly more memory than necessary. | `healthCheckInterval` | `string` -| healthCheckInterval defines how long the router waits between two consecutive health checks on its configured backends. This value is applied globally as a default for all routes, but may be overridden per-route by the route annotation "router.openshift.io/haproxy.health.check.interval". - Expects an unsigned duration string of decimal numbers, each with optional fraction and a unit suffix, eg "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs" U+00B5 or "μs" U+03BC), "ms", "s", "m", "h". - Setting this to less than 5s can cause excess traffic due to too frequent TCP health checks and accompanying SYN packet storms. Alternatively, setting this too high can result in increased latency, due to backend servers that are no longer available, but haven't yet been detected as such. - An empty or zero healthCheckInterval means no opinion and IngressController chooses a default, which is subject to change over time. Currently the default healthCheckInterval value is 5s. +| healthCheckInterval defines how long the router waits between two consecutive health checks on its configured backends. 
This value is applied globally as a default for all routes, but may be overridden per-route by the route annotation "router.openshift.io/haproxy.health.check.interval". + Expects an unsigned duration string of decimal numbers, each with optional fraction and a unit suffix, eg "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs" U+00B5 or "μs" U+03BC), "ms", "s", "m", "h". + Setting this to less than 5s can cause excess traffic due to too frequent TCP health checks and accompanying SYN packet storms. Alternatively, setting this too high can result in increased latency, due to backend servers that are no longer available, but haven't yet been detected as such. + An empty or zero healthCheckInterval means no opinion and IngressController chooses a default, which is subject to change over time. Currently the default healthCheckInterval value is 5s. Currently the minimum allowed value is 1s and the maximum allowed value is 2147483647ms (24.85 days). Both are subject to change over time. | `maxConnections` | `integer` -| maxConnections defines the maximum number of simultaneous connections that can be established per HAProxy process. Increasing this value allows each ingress controller pod to handle more connections but at the cost of additional system resources being consumed. - Permitted values are: empty, 0, -1, and the range 2000-2000000. - If this field is empty or 0, the IngressController will use the default value of 50000, but the default is subject to change in future releases. - If the value is -1 then HAProxy will dynamically compute a maximum value based on the available ulimits in the running container. Selecting -1 (i.e., auto) will result in a large value being computed (~520000 on OpenShift >=4.10 clusters) and therefore each HAProxy process will incur significant memory usage compared to the current default of 50000. - Setting a value that is greater than the current operating system limit will prevent the HAProxy process from starting. - If you choose a discrete value (e.g., 750000) and the router pod is migrated to a new node, there's no guarantee that that new node has identical ulimits configured. In such a scenario the pod would fail to start. If you have nodes with different ulimits configured (e.g., different tuned profiles) and you choose a discrete value then the guidance is to use -1 and let the value be computed dynamically at runtime. - You can monitor memory usage for router containers with the following metric: 'container_memory_working_set_bytes{container="router",namespace="openshift-ingress"}'. +| maxConnections defines the maximum number of simultaneous connections that can be established per HAProxy process. Increasing this value allows each ingress controller pod to handle more connections but at the cost of additional system resources being consumed. + Permitted values are: empty, 0, -1, and the range 2000-2000000. + If this field is empty or 0, the IngressController will use the default value of 50000, but the default is subject to change in future releases. + If the value is -1 then HAProxy will dynamically compute a maximum value based on the available ulimits in the running container. Selecting -1 (i.e., auto) will result in a large value being computed (~520000 on OpenShift >=4.10 clusters) and therefore each HAProxy process will incur significant memory usage compared to the current default of 50000. + Setting a value that is greater than the current operating system limit will prevent the HAProxy process from starting. 
+ If you choose a discrete value (e.g., 750000) and the router pod is migrated to a new node, there's no guarantee that that new node has identical ulimits configured. In such a scenario the pod would fail to start. If you have nodes with different ulimits configured (e.g., different tuned profiles) and you choose a discrete value then the guidance is to use -1 and let the value be computed dynamically at runtime. + You can monitor memory usage for router containers with the following metric: 'container_memory_working_set_bytes{container="router",namespace="openshift-ingress"}'. You can monitor memory usage of individual HAProxy processes in router containers with the following metric: 'container_memory_working_set_bytes{container="router",namespace="openshift-ingress"}/container_processes{container="router",namespace="openshift-ingress"}'. | `reloadInterval` | `string` -| reloadInterval defines the minimum interval at which the router is allowed to reload to accept new changes. Increasing this value can prevent the accumulation of HAProxy processes, depending on the scenario. Increasing this interval can also lessen load imbalance on a backend's servers when using the roundrobin balancing algorithm. Alternatively, decreasing this value may decrease latency since updates to HAProxy's configuration can take effect more quickly. - The value must be a time duration value; see . Currently, the minimum value allowed is 1s, and the maximum allowed value is 120s. Minimum and maximum allowed values may change in future versions of OpenShift. Note that if a duration outside of these bounds is provided, the value of reloadInterval will be capped/floored and not rejected (e.g. a duration of over 120s will be capped to 120s; the IngressController will not reject and replace this disallowed value with the default). - A zero value for reloadInterval tells the IngressController to choose the default, which is currently 5s and subject to change without notice. - This field expects an unsigned duration string of decimal numbers, each with optional fraction and a unit suffix, e.g. "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs" U+00B5 or "μs" U+03BC), "ms", "s", "m", "h". +| reloadInterval defines the minimum interval at which the router is allowed to reload to accept new changes. Increasing this value can prevent the accumulation of HAProxy processes, depending on the scenario. Increasing this interval can also lessen load imbalance on a backend's servers when using the roundrobin balancing algorithm. Alternatively, decreasing this value may decrease latency since updates to HAProxy's configuration can take effect more quickly. + The value must be a time duration value; see . Currently, the minimum value allowed is 1s, and the maximum allowed value is 120s. Minimum and maximum allowed values may change in future versions of OpenShift. Note that if a duration outside of these bounds is provided, the value of reloadInterval will be capped/floored and not rejected (e.g. a duration of over 120s will be capped to 120s; the IngressController will not reject and replace this disallowed value with the default). + A zero value for reloadInterval tells the IngressController to choose the default, which is currently 5s and subject to change without notice. + This field expects an unsigned duration string of decimal numbers, each with optional fraction and a unit suffix, e.g. "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs" U+00B5 or "μs" U+03BC), "ms", "s", "m", "h". 
Note: Setting a value significantly larger than the default of 5s can cause latency in observing updates to routes and their endpoints. HAProxy's configuration will be reloaded less frequently, and newly created routes will not be served until the subsequent reload. | `serverFinTimeout` | `string` -| serverFinTimeout defines how long a connection will be held open while waiting for the server/backend response to the client closing the connection. +| serverFinTimeout defines how long a connection will be held open while waiting for the server/backend response to the client closing the connection. If unset, the default timeout is 1s | `serverTimeout` | `string` -| serverTimeout defines how long a connection will be held open while waiting for a server/backend response. +| serverTimeout defines how long a connection will be held open while waiting for a server/backend response. If unset, the default timeout is 30s | `threadCount` | `integer` -| threadCount defines the number of threads created per HAProxy process. Creating more threads allows each ingress controller pod to handle more connections, at the cost of more system resources being used. HAProxy currently supports up to 64 threads. If this field is empty, the IngressController will use the default value. The current default is 4 threads, but this may change in future releases. +| threadCount defines the number of threads created per HAProxy process. Creating more threads allows each ingress controller pod to handle more connections, at the cost of more system resources being used. HAProxy currently supports up to 64 threads. If this field is empty, the IngressController will use the default value. The current default is 4 threads, but this may change in future releases. Setting this field is generally not recommended. Increasing the number of HAProxy threads allows ingress controller pods to utilize more CPU time under load, potentially starving other pods if set too high. Reducing the number of threads may cause the ingress controller to perform poorly. | `tlsInspectDelay` | `string` -| tlsInspectDelay defines how long the router can hold data to find a matching route. - Setting this too short can cause the router to fall back to the default certificate for edge-terminated or reencrypt routes even when a better matching certificate could be used. +| tlsInspectDelay defines how long the router can hold data to find a matching route. + Setting this too short can cause the router to fall back to the default certificate for edge-terminated or reencrypt routes even when a better matching certificate could be used. If unset, the default inspect delay is 5s | `tunnelTimeout` | `string` -| tunnelTimeout defines how long a tunnel connection (including websockets) will be held open while the tunnel is idle. +| tunnelTimeout defines how long a tunnel connection (including websockets) will be held open while the tunnel is idle. If unset, the default timeout is 1h |=== @@ -1671,12 +1671,12 @@ Type:: | `conditions` | `array` -| conditions is a list of conditions and their status. - Available means the ingress controller deployment is available and servicing route and ingress resources (i.e, .status.availableReplicas equals .spec.replicas) - There are additional conditions which indicate the status of other ingress controller features and capabilities. - * LoadBalancerManaged - True if the following conditions are met: * The endpoint publishing strategy requires a service load balancer. - False if any of those conditions are unsatisfied. 
- * LoadBalancerReady - True if the following conditions are met: * A load balancer is managed. * The load balancer is ready. - False if any of those conditions are unsatisfied. - * DNSManaged - True if the following conditions are met: * The endpoint publishing strategy and platform support DNS. * The ingress controller domain is set. * dns.config.openshift.io/cluster configures DNS zones. - False if any of those conditions are unsatisfied. +| conditions is a list of conditions and their status. + Available means the ingress controller deployment is available and servicing route and ingress resources (i.e, .status.availableReplicas equals .spec.replicas) + There are additional conditions which indicate the status of other ingress controller features and capabilities. + * LoadBalancerManaged - True if the following conditions are met: * The endpoint publishing strategy requires a service load balancer. - False if any of those conditions are unsatisfied. + * LoadBalancerReady - True if the following conditions are met: * A load balancer is managed. * The load balancer is ready. - False if any of those conditions are unsatisfied. + * DNSManaged - True if the following conditions are met: * The endpoint publishing strategy and platform support DNS. * The ingress controller domain is set. * dns.config.openshift.io/cluster configures DNS zones. - False if any of those conditions are unsatisfied. * DNSReady - True if the following conditions are met: * DNS is managed. * DNS records have been successfully created. - False if any of those conditions are unsatisfied. | `conditions[]` @@ -1716,12 +1716,12 @@ Type:: Description:: + -- -conditions is a list of conditions and their status. - Available means the ingress controller deployment is available and servicing route and ingress resources (i.e, .status.availableReplicas equals .spec.replicas) - There are additional conditions which indicate the status of other ingress controller features and capabilities. - * LoadBalancerManaged - True if the following conditions are met: * The endpoint publishing strategy requires a service load balancer. - False if any of those conditions are unsatisfied. - * LoadBalancerReady - True if the following conditions are met: * A load balancer is managed. * The load balancer is ready. - False if any of those conditions are unsatisfied. - * DNSManaged - True if the following conditions are met: * The endpoint publishing strategy and platform support DNS. * The ingress controller domain is set. * dns.config.openshift.io/cluster configures DNS zones. - False if any of those conditions are unsatisfied. +conditions is a list of conditions and their status. + Available means the ingress controller deployment is available and servicing route and ingress resources (i.e, .status.availableReplicas equals .spec.replicas) + There are additional conditions which indicate the status of other ingress controller features and capabilities. + * LoadBalancerManaged - True if the following conditions are met: * The endpoint publishing strategy requires a service load balancer. - False if any of those conditions are unsatisfied. + * LoadBalancerReady - True if the following conditions are met: * A load balancer is managed. * The load balancer is ready. - False if any of those conditions are unsatisfied. + * DNSManaged - True if the following conditions are met: * The endpoint publishing strategy and platform support DNS. * The ingress controller domain is set. * dns.config.openshift.io/cluster configures DNS zones. 
- False if any of those conditions are unsatisfied. * DNSReady - True if the following conditions are met: * DNS is managed. * DNS records have been successfully created. - False if any of those conditions are unsatisfied. -- @@ -1750,23 +1750,23 @@ Type:: | `lastTransitionTime` | `string` -| +| | `message` | `string` -| +| | `reason` | `string` -| +| | `status` | `string` -| +| | `type` | `string` -| +| |=== === .status.endpointPublishingStrategy @@ -1806,21 +1806,21 @@ Required:: | `type` | `string` -| type is the publishing strategy to use. Valid values are: - * LoadBalancerService - Publishes the ingress controller using a Kubernetes LoadBalancer Service. - In this configuration, the ingress controller deployment uses container networking. A LoadBalancer Service is created to publish the deployment. - See: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer - If domain is set, a wildcard DNS record will be managed to point at the LoadBalancer Service's external name. DNS records are managed only in DNS zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. - Wildcard DNS management is currently supported only on the AWS, Azure, and GCP platforms. - * HostNetwork - Publishes the ingress controller on node ports where the ingress controller is deployed. - In this configuration, the ingress controller deployment uses host networking, bound to node ports 80 and 443. The user is responsible for configuring an external load balancer to publish the ingress controller via the node ports. - * Private - Does not publish the ingress controller. - In this configuration, the ingress controller deployment uses container networking, and is not explicitly published. The user must manually publish the ingress controller. - * NodePortService - Publishes the ingress controller using a Kubernetes NodePort Service. +| type is the publishing strategy to use. Valid values are: + * LoadBalancerService + Publishes the ingress controller using a Kubernetes LoadBalancer Service. + In this configuration, the ingress controller deployment uses container networking. A LoadBalancer Service is created to publish the deployment. + See: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer + If domain is set, a wildcard DNS record will be managed to point at the LoadBalancer Service's external name. DNS records are managed only in DNS zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. + Wildcard DNS management is currently supported only on the AWS, Azure, and GCP platforms. + * HostNetwork + Publishes the ingress controller on node ports where the ingress controller is deployed. + In this configuration, the ingress controller deployment uses host networking, bound to node ports 80 and 443. The user is responsible for configuring a user-managed load balancer to publish the ingress controller via the node ports. + * Private + Does not publish the ingress controller. + In this configuration, the ingress controller deployment uses container networking, and is not explicitly published. The user must manually publish the ingress controller. + * NodePortService + Publishes the ingress controller using a Kubernetes NodePort Service. In this configuration, the ingress controller deployment uses container networking. A NodePort Service is created to publish the deployment. 
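As a sketch only, the NodePortService strategy described here would be expressed in the spec along the following lines; the nodePort stanza and the TCP value are assumptions based on the protocol fields documented in this reference.

[source,yaml]
----
spec:
  endpointPublishingStrategy:
    type: NodePortService
    nodePort:
      protocol: TCP                # use PROXY only if the user-managed load balancer forwards PROXY protocol
----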
The specific node ports are dynamically allocated by OpenShift; however, to support static port allocations, user changes to the node port field of the managed NodePort Service will preserved. |=== @@ -1851,10 +1851,10 @@ Type:: | `protocol` | `string` -| protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. - PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. - The following values are valid for this field: - * The empty string. * "TCP". * "PROXY". +| protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. + PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. + The following values are valid for this field: + * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. | `statsPort` @@ -1884,7 +1884,7 @@ Required:: | `allowedSourceRanges` | `` -| allowedSourceRanges specifies an allowlist of IP address ranges to which access to the load balancer should be restricted. Each range must be specified using CIDR notation (e.g. "10.0.0.0/8" or "fd00::/8"). If no range is specified, "0.0.0.0/0" for IPv4 and "::/0" for IPv6 are used by default, which allows all source addresses. +| allowedSourceRanges specifies an allowlist of IP address ranges to which access to the load balancer should be restricted. Each range must be specified using CIDR notation (e.g. "10.0.0.0/8" or "fd00::/8"). If no range is specified, "0.0.0.0/0" for IPv4 and "::/0" for IPv6 are used by default, which allows all source addresses. To facilitate migration from earlier versions of OpenShift that did not have the allowedSourceRanges field, you may set the service.beta.kubernetes.io/load-balancer-source-ranges annotation on the "router-" service in the "openshift-ingress" namespace, and this annotation will take effect if allowedSourceRanges is empty on OpenShift 4.12. | `dnsManagementPolicy` @@ -1893,7 +1893,7 @@ Required:: | `providerParameters` | `object` -| providerParameters holds desired load balancer information specific to the underlying infrastructure provider. 
+| providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. | `scope` @@ -1905,7 +1905,7 @@ Required:: Description:: + -- -providerParameters holds desired load balancer information specific to the underlying infrastructure provider. +providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. -- @@ -1923,17 +1923,17 @@ Required:: | `aws` | `object` -| aws provides configuration settings that are specific to AWS load balancers. +| aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. See specific aws fields for details about their defaults. | `gcp` | `object` -| gcp provides configuration settings that are specific to GCP load balancers. +| gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. | `ibm` | `object` -| ibm provides configuration settings that are specific to IBM Cloud load balancers. +| ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. | `type` @@ -1945,7 +1945,7 @@ Required:: Description:: + -- -aws provides configuration settings that are specific to AWS load balancers. +aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. See specific aws fields for details about their defaults. -- @@ -1971,11 +1971,11 @@ Required:: | `type` | `string` -| type is the type of AWS load balancer to instantiate for an ingresscontroller. - Valid values are: - * "Classic": A Classic Load Balancer that makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). See the following for additional details: - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#clb - * "NLB": A Network Load Balancer that makes routing decisions at the transport layer (TCP/SSL). See the following for additional details: +| type is the type of AWS load balancer to instantiate for an ingresscontroller. + Valid values are: + * "Classic": A Classic Load Balancer that makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). See the following for additional details: + https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#clb + * "NLB": A Network Load Balancer that makes routing decisions at the transport layer (TCP/SSL). See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#nlb |=== @@ -2018,7 +2018,7 @@ Type:: Description:: + -- -gcp provides configuration settings that are specific to GCP load balancers. +gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. -- @@ -2034,10 +2034,10 @@ Type:: | `clientAccess` | `string` -| clientAccess describes how client access is restricted for internal load balancers. 
- Valid values are: * "Global": Specifying an internal load balancer with Global client access allows clients from any region within the VPC to communicate with the load balancer. - https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing#global_access - * "Local": Specifying an internal load balancer with Local client access means only clients within the same region (and VPC) as the GCP load balancer can communicate with the load balancer. Note that this is the default behavior. +| clientAccess describes how client access is restricted for internal load balancers. + Valid values are: * "Global": Specifying an internal load balancer with Global client access allows clients from any region within the VPC to communicate with the load balancer. + https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing#global_access + * "Local": Specifying an internal load balancer with Local client access means only clients within the same region (and VPC) as the GCP load balancer can communicate with the load balancer. Note that this is the default behavior. https://cloud.google.com/load-balancing/docs/internal#client_access |=== @@ -2045,7 +2045,7 @@ Type:: Description:: + -- -ibm provides configuration settings that are specific to IBM Cloud load balancers. +ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. -- @@ -2061,8 +2061,8 @@ Type:: | `protocol` | `string` -| protocol specifies whether the load balancer uses PROXY protocol to forward connections to the IngressController. See "service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol"" at https://cloud.ibm.com/docs/containers?topic=containers-vpc-lbaas" - PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. +| protocol specifies whether the load balancer uses PROXY protocol to forward connections to the IngressController. See "service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol"" at https://cloud.ibm.com/docs/containers?topic=containers-vpc-lbaas" + PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. Valid values for protocol are TCP, PROXY and omitted. 
When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time. The current default is TCP, without the proxy protocol enabled. |=== @@ -2085,10 +2085,10 @@ Type:: | `protocol` | `string` -| protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. - PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. - The following values are valid for this field: - * The empty string. * "TCP". * "PROXY". +| protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. + PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. + The following values are valid for this field: + * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. |=== @@ -2111,10 +2111,10 @@ Type:: | `protocol` | `string` -| protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. - PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. - The following values are valid for this field: - * The empty string. * "TCP". * "PROXY". +| protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. + PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. 
Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. + The following values are valid for this field: + * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. |=== @@ -2289,13 +2289,13 @@ Type:: | `ciphers` | `array (string)` -| ciphers is used to specify the cipher algorithms that are negotiated during the TLS handshake. Operators may remove entries their operands do not support. For example, to use DES-CBC3-SHA (yaml): +| ciphers is used to specify the cipher algorithms that are negotiated during the TLS handshake. Operators may remove entries their operands do not support. For example, to use DES-CBC3-SHA (yaml): ciphers: - DES-CBC3-SHA | `minTLSVersion` | `string` -| minTLSVersion is used to specify the minimal version of the TLS protocol that is negotiated during the TLS handshake. For example, to use TLS versions 1.1, 1.2 and 1.3 (yaml): - minTLSVersion: VersionTLS11 +| minTLSVersion is used to specify the minimal version of the TLS protocol that is negotiated during the TLS handshake. For example, to use TLS versions 1.1, 1.2 and 1.3 (yaml): + minTLSVersion: VersionTLS11 NOTE: currently the highest minTLSVersion allowed is VersionTLS12 |=== @@ -2414,7 +2414,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../operator_apis/ingresscontroller-operator-openshift-io-v1.adoc#ingresscontroller-operator-openshift-io-v1[`IngressController`] schema -| +| |=== .HTTP responses @@ -2547,7 +2547,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../operator_apis/ingresscontroller-operator-openshift-io-v1.adoc#ingresscontroller-operator-openshift-io-v1[`IngressController`] schema -| +| |=== .HTTP responses @@ -2649,7 +2649,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../autoscale_apis/scale-autoscaling-v1.adoc#scale-autoscaling-v1[`Scale`] schema -| +| |=== .HTTP responses @@ -2751,7 +2751,7 @@ Description:: | Parameter | Type | Description | `body` | xref:../operator_apis/ingresscontroller-operator-openshift-io-v1.adoc#ingresscontroller-operator-openshift-io-v1[`IngressController`] schema -| +| |=== .HTTP responses