// Module included in the following assemblies:
// Bare metal
// * installing/installing_bare_metal/ipi/ipi-install-installation-workflow.adoc
// * installing/installing_bare_metal/bare-metal-postinstallation-configuration.adoc
// OpenStack
// * networking/load-balancing-openstack.adoc
// Nutanix
// * installing/installing_nutanix/installing-nutanix-installer-provisioned.adoc
// vSphere
// * installing/installing-vsphere-installer-provisioned-customizations.adoc
// * installing/installing-restricted-networks-installer-provisioned-vsphere.adoc
ifeval::["{context}" == "ipi-install-installation-workflow"]
:bare-metal:
endif::[]
ifeval::["{context}" == "load-balancing-openstack"]
:openstack:
endif::[]
ifeval::["{context}" == "installing-nutanix-installer-provisioned"]
:nutanix:
endif::[]
ifeval::["{context}" == "installing-vsphere-installer-provisioned-customizations"]
:vsphere:
endif::[]
ifeval::["{context}" == "installing-restricted-networks-installer-provisioned-vsphere"]
:vsphere:
endif::[]
:_mod-docs-content-type: PROCEDURE
[id="nw-osp-configuring-external-load-balancer_{context}"]
= Configuring a user-managed load balancer

[role="_abstract"]
To integrate your infrastructure with existing network standards or gain more control over traffic management in {product-title}
ifdef::openstack[]
on {rh-openstack-first}
endif::openstack[]
, use a user-managed load balancer in place of the default load balancer.

[IMPORTANT]
====
Before you configure a user-managed load balancer, ensure that you read the "Services for a user-managed load balancer" section.
====

Read the following prerequisites that apply to the service that you want to configure for your user-managed load balancer.

[NOTE]
====
MetalLB, which runs on a cluster, functions as a user-managed load balancer.
====

.Prerequisites

The following list details OpenShift API prerequisites:

* You defined a front-end IP address.
* TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items:
** Port 6443 provides access to the OpenShift API service.
** Port 22623 can provide ignition startup configurations to nodes.
* The front-end IP address and port 6443 are reachable by all users of your system from a location external to your {product-title} cluster.
* The front-end IP address and port 22623 are reachable only by {product-title} nodes.
* The load balancer backend can communicate with {product-title} control plane nodes on ports 6443 and 22623.

The following list details Ingress Controller prerequisites:

* You defined a front-end IP address.
* TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer.
* The front-end IP address, port 80, and port 443 are reachable by all users of your system from a location external to your {product-title} cluster.
* The front-end IP address, port 80, and port 443 are reachable by all nodes that operate in your {product-title} cluster.
* The load balancer backend can communicate with {product-title} nodes that run the Ingress Controller on ports 80, 443, and 1936.
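
Before you proceed, you can confirm that each exposed front-end port accepts TCP connections. The following check is a minimal sketch that uses the Bash `/dev/tcp` device; the load balancer IP address is a placeholder:

[source,terminal]
----
$ for port in 6443 22623 443 80; do
    # Attempt a TCP connection to each front-end port with a 3-second timeout
    timeout 3 bash -c "</dev/tcp/<loadbalancer_ip_address>/${port}" \
      && echo "port ${port}: open" \
      || echo "port ${port}: closed or unreachable"
  done
----
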
The following details apply to health check URL specifications:

You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. {product-title} provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services.

The following example shows a Kubernetes API health check specification for a backend service:

[source,terminal]
----
Path: HTTPS:6443/readyz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
----
The following example shows a Machine Config API health check specification for a backend service:

[source,terminal]
----
Path: HTTPS:22623/healthz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
----
The following example shows an Ingress Controller health check specification for a backend service:

[source,terminal]
----
Path: HTTP:1936/healthz/ready
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 5
Interval: 10
----
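
You can also probe each health endpoint directly before you add it to the load balancer configuration. A minimal sketch, assuming that you can reach a control plane node and a node that runs the Ingress Controller; the IP addresses are placeholders:

[source,terminal]
----
$ curl -k https://<control_plane_node_ip>:6443/readyz
$ curl -k https://<control_plane_node_ip>:22623/healthz
$ curl http://<ingress_node_ip>:1936/healthz/ready
----
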
.Procedure
. Configure HAProxy so that you can enable access to the cluster from your load balancer on ports 6443, 22623, 443, and 80. Depending on your needs, you can specify the IP address of a single subnet or IP addresses from multiple subnets in your HAProxy configuration.
+
.Example HAProxy configuration with one listed subnet
[source,terminal,subs="quotes"]
----
# ...
listen my-cluster-api-6443
    bind 192.168.1.100:6443
    mode tcp
    balance roundrobin
    option httpchk
    http-check connect
    http-check send meth GET uri /readyz
    http-check expect status 200
    server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2
    server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2
    server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2

listen my-cluster-machine-config-api-22623
    bind 192.168.1.100:22623
    mode tcp
    balance roundrobin
    option httpchk
    http-check connect
    http-check send meth GET uri /healthz
    http-check expect status 200
    server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2
    server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2
    server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2

listen my-cluster-apps-443
    bind 192.168.1.100:443
    mode tcp
    balance roundrobin
    option httpchk
    http-check connect
    http-check send meth GET uri /healthz/ready
    http-check expect status 200
    server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2
    server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2
    server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2

listen my-cluster-apps-80
    bind 192.168.1.100:80
    mode tcp
    balance roundrobin
    option httpchk
    http-check connect
    http-check send meth GET uri /healthz/ready
    http-check expect status 200
    server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2
    server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2
    server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2
# ...
----
+
.Example HAProxy configuration with multiple listed subnets
[source,terminal,subs="quotes"]
----
# ...
listen api-server-6443
    bind *:6443
    mode tcp
    server master-00 192.168.83.89:6443 check inter 1s
    server master-01 192.168.84.90:6443 check inter 1s
    server master-02 192.168.85.99:6443 check inter 1s
    server bootstrap 192.168.80.89:6443 check inter 1s

listen machine-config-server-22623
    bind *:22623
    mode tcp
    server master-00 192.168.83.89:22623 check inter 1s
    server master-01 192.168.84.90:22623 check inter 1s
    server master-02 192.168.85.99:22623 check inter 1s
    server bootstrap 192.168.80.89:22623 check inter 1s

listen ingress-router-80
    bind *:80
    mode tcp
    balance source
    server worker-00 192.168.83.100:80 check inter 1s
    server worker-01 192.168.83.101:80 check inter 1s

listen ingress-router-443
    bind *:443
    mode tcp
    balance source
    server worker-00 192.168.83.100:443 check inter 1s
    server worker-01 192.168.83.101:443 check inter 1s

listen ironic-api-6385
    bind *:6385
    mode tcp
    balance source
    server master-00 192.168.83.89:6385 check inter 1s
    server master-01 192.168.84.90:6385 check inter 1s
    server master-02 192.168.85.99:6385 check inter 1s
    server bootstrap 192.168.80.89:6385 check inter 1s

listen inspector-api-5050
    bind *:5050
    mode tcp
    balance source
    server master-00 192.168.83.89:5050 check inter 1s
    server master-01 192.168.84.90:5050 check inter 1s
    server master-02 192.168.85.99:5050 check inter 1s
    server bootstrap 192.168.80.89:5050 check inter 1s
# ...
----
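+
After you update the HAProxy configuration, you can validate the syntax and reload the service before you continue. A minimal sketch, assuming that HAProxy runs under systemd and reads `/etc/haproxy/haproxy.cfg`; adjust the path for your environment:
+
[source,terminal]
----
$ sudo haproxy -c -f /etc/haproxy/haproxy.cfg
$ sudo systemctl reload haproxy
----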
. Use the `curl` CLI command to verify that the user-managed load balancer and its resources are operational:
+
.. Verify that the Kubernetes API server resource is accessible by running the following command and observing the response:
+
[source,terminal]
----
$ curl https://<loadbalancer_ip_address>:6443/version --insecure
----
+
If the configuration is correct, you receive a JSON object in response:
+
[source,json]
----
{
"major": "1",
"minor": "11+",
"gitVersion": "v1.11.0+ad103ed",
"gitCommit": "ad103ed",
"gitTreeState": "clean",
"buildDate": "2019-01-09T06:44:10Z",
"goVersion": "go1.10.3",
"compiler": "gc",
"platform": "linux/amd64"
}
----
+
.. Verify that the machine config server resource is accessible by running the following command and observing the output:
+
[source,terminal]
----
$ curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure
----
+
If the configuration is correct, the output from the command shows the following response:
+
[source,terminal]
----
HTTP/1.1 200 OK
Content-Length: 0
----
+
.. Verify that the Ingress Controller resource is accessible on port 80 by running the following command and observing the output:
+
[source,terminal]
----
$ curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address>
----
+
If the configuration is correct, the output from the command shows the following response:
+
[source,terminal]
----
HTTP/1.1 302 Found
content-length: 0
location: https://console-openshift-console.apps.<cluster_name>.<base_domain>/
cache-control: no-cache
----
+
.. Verify that the Ingress Controller resource is accessible on port 443 by running the following command and observing the output:
+
[source,terminal]
----
$ curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>
----
+
If the configuration is correct, the output from the command shows the following response:
+
[source,terminal]
----
HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Wed, 04 Oct 2023 16:29:38 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
----
. Configure the DNS records for your cluster to target the front-end IP addresses of the user-managed load balancer. You must update the records on your DNS server so that the cluster API and applications resolve through the load balancer. The following examples show modified DNS records:
+
[source,dns]
----
api.<cluster_name>.<base_domain>.    A    <load_balancer_ip_address>
; A record that points to the load balancer front end
----
+
[source,dns]
----
*.apps.<cluster_name>.<base_domain>.    A    <load_balancer_ip_address>
; A record that points to the load balancer front end
----
+
[IMPORTANT]
====
DNS propagation might take some time for each record to become available. Ensure that each DNS record propagates before you validate the record.
====
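+
To confirm that a record propagated, you can query your DNS server directly. A minimal sketch that uses `dig`; the name server IP address is a placeholder:
+
[source,terminal]
----
$ dig +short api.<cluster_name>.<base_domain> @<dns_server_ip>
$ dig +short console-openshift-console.apps.<cluster_name>.<base_domain> @<dns_server_ip>
----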
. For your {product-title} cluster to use the user-managed load balancer, you must specify the following configuration in your cluster's `install-config.yaml` file:
+
[source,yaml]
----
# ...
platform:
ifdef::bare-metal[]
baremetal:
endif::bare-metal[]
ifdef::openstack[]
openstack:
endif::openstack[]
ifdef::nutanix[]
nutanix:
endif::nutanix[]
ifdef::vsphere[]
vsphere:
endif::vsphere[]
loadBalancer:
type: UserManaged
apiVIPs:
- <api_ip> <2>
ingressVIPs:
- <ingress_ip> <3>
# ...
----
+
where:
+
`loadBalancer.type`:: Set `UserManaged` for the `type` parameter to specify a user-managed load balancer for your cluster. The parameter defaults to `OpenShiftManagedDefault`, which denotes the default internal load balancer. For services defined in the `openshift-kni-infra` namespace, a user-managed load balancer can deploy the `coredns` service to pods in your cluster, but it ignores the `keepalived` and `haproxy` services.
`apiVIPs`:: Specify the public IP address of the user-managed load balancer so that the Kubernetes API can communicate with it. This parameter is mandatory.
`ingressVIPs`:: Specify the public IP address of the user-managed load balancer so that it can manage ingress traffic for your cluster. This parameter is mandatory.
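+
After installation completes, you can check which load balancer type the cluster recorded in its infrastructure status and, on platforms that use the `openshift-kni-infra` namespace, review the remaining infrastructure pods. A sketch, assuming cluster-admin access with `oc`:
+
[source,terminal]
----
$ oc get infrastructure cluster -o jsonpath='{.status.platformStatus}'
$ oc get pods -n openshift-kni-infra
----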
.Verification
. Use the `curl` CLI command to verify that the user-managed load balancer and DNS record configuration are operational:
+
.. Verify that you can access the cluster API by running the following command and observing the output:
+
[source,terminal]
----
$ curl https://api.<cluster_name>.<base_domain>:6443/version --insecure
----
+
If the configuration is correct, you receive a JSON object in response:
+
[source,json]
----
{
"major": "1",
"minor": "11+",
"gitVersion": "v1.11.0+ad103ed",
"gitCommit": "ad103ed",
"gitTreeState": "clean",
"buildDate": "2019-01-09T06:44:10Z",
"goVersion": "go1.10.3",
"compiler": "gc",
"platform": "linux/amd64"
}
----
+
.. Verify that you can access the cluster machine configuration API by running the following command and observing the output:
+
[source,terminal]
----
$ curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure
----
+
If the configuration is correct, the output from the command shows the following response:
+
[source,terminal]
----
HTTP/1.1 200 OK
Content-Length: 0
----
+
.. Verify that you can access each cluster application on port 80 by running the following command and observing the output:
+
[source,terminal]
----
$ curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
----
+
If the configuration is correct, the output from the command shows the following response:
+
[source,terminal]
----
HTTP/1.1 302 Found
content-length: 0
location: https://console-openshift-console.apps.<cluster_name>.<base_domain>/
cache-control: no-cache

HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Tue, 17 Nov 2020 08:42:10 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
----
+
.. Verify that you can access each cluster application on port 443 by running the following command and observing the output:
+
[source,terminal]
----
$ curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
----
+
If the configuration is correct, the output from the command shows the following response:
+
[source,terminal]
----
HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Wed, 04 Oct 2023 16:29:38 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
----
ifeval::["{context}" == "ipi-install-installation-workflow"]
:!bare-metal:
endif::[]
ifeval::["{context}" == "load-balancing-openstack"]
:!openstack:
endif::[]
ifeval::["{context}" == "installing-nutanix-installer-provisioned"]
:!nutanix:
endif::[]
ifeval::["{context}" == "installing-vsphere-installer-provisioned-customizations"]
:!vsphere:
endif::[]
ifeval::["{context}" == "installing-restricted-networks-installer-provisioned-vsphere"]
:!vsphere:
endif::[]