
OSDOCS-2174: MetalLB Operator and load balancer

* Mohamed clarified that the namespace doesn't matter.

* Add the 10s typical client fail over duration.

fix: ipv4 or ipv6 with dual-stack

Remove the IP failover procedure. That
is covered in the IP failover section.

Feedback from Arti.

Review from Samantha.
Mike McKiernan
2021-07-15 15:28:05 -04:00
committed by openshift-cherrypick-robot
parent af85dece07
commit aba5706f2c
23 changed files with 942 additions and 1 deletions


@@ -1078,6 +1078,17 @@ Topics:
Distros: openshift-enterprise,openshift-origin
- Name: Load balancing on OpenStack
File: load-balancing-openstack
- Name: Load balancing with MetalLB
Dir: metallb
Topics:
- Name: About MetalLB and the MetalLB Operator
File: about-metallb
- Name: Installing the MetalLB Operator
File: metallb-operator-install
- Name: Configuring MetalLB address pools
File: metallb-configure-address-pools
- Name: Configuring services to use MetalLB
File: metallb-configure-services
- Name: Associating secondary interfaces metrics to network attachments
File: associating-secondary-interfaces-metrics-to-network-attachments
---

Binary file not shown (new image, 105 KiB)


@@ -0,0 +1,69 @@
[id="nw-metallb-addresspool-cr_{context}"]
= About the address pool custom resource
The fields for the address pool custom resource are described in the following table.
.MetalLB address pool custom resource
[cols="1,1,3", options="header"]
|===
|Field
|Type
|Description
|`metadata.name`
|`string`
|Specifies the name for the address pool.
When you add a service, you can specify this pool name in the `metallb.universe.tf/address-pool` annotation to select an IP address from a specific pool.
The names `doc-example`, `silver`, and `gold` are used throughout the documentation.
|`metadata.namespace`
|`string`
|Specifies the namespace for the address pool.
Specify the same namespace that the MetalLB Operator uses.
|`spec.protocol`
|`string`
|Specifies the protocol for announcing the load balancer IP address to peer nodes.
The only supported value is `layer2`.
|`spec.autoAssign`
|`boolean`
|Optional: Specifies whether MetalLB automatically assigns IP addresses from this pool.
Specify `false` if you want to explicitly request an IP address from this pool with the `metallb.universe.tf/address-pool` annotation.
The default value is `true`.
|`spec.addresses`
|`array`
|Specifies a list of IP addresses for MetalLB to assign to services.
You can specify multiple ranges in a single pool.
Specify each range in CIDR notation or as starting and ending IP addresses separated with a hyphen.
|===
////
.Address pool object
[source,yaml]
----
apiVersion: metallb.io/v1alpha1
kind: AddressPool
metadata:
  name: <pool_name> <.>
  namespace: metallb-system <.>
spec:
  protocol: <protocol_type> <.>
  autoAssign: true <.>
  addresses: <.>
  - <range_or_CIDR>
...
----
<.> Specify the name for the address pool. When you add a service, you can specify this pool name in the `metallb.universe.tf/address-pool` annotation to select an IP address from a specific pool.
<.> Specify the namespace for the address pool.
<.> Specify the protocol for announcing the load balancer IP address to peer nodes. The only supported value is `layer2`.
<.> Optional: Specify whether MetalLB automatically assigns IP addresses from this pool. Specify `false` if you want to explicitly request an IP address from this pool with the `metallb.universe.tf/address-pool` annotation. The default value is `true`.
<.> Specify a list of IP addresses for MetalLB to assign to services. You can specify multiple ranges in a single pool. Specify each range in CIDR notation or as starting and ending IP addresses separated with a hyphen.
////


@@ -0,0 +1,66 @@
[id="nw-metallb-configure-address-pool_{context}"]
= Configuring an address pool
As a cluster administrator, you can add address pools to your cluster to control the IP addresses that MetalLB can assign to load-balancer services.
.Prerequisites
* Install the OpenShift CLI (`oc`).
* Log in as a user with `cluster-admin` privileges.
.Procedure
. Create a file, such as `addresspool.yaml`, with content like the following example:
+
[source,yaml]
----
apiVersion: metallb.io/v1alpha1
kind: AddressPool
metadata:
  namespace: metallb-system
  name: doc-example
spec:
  protocol: layer2
  addresses:
  - 203.0.113.1-203.0.113.10
  - 203.0.113.65-203.0.113.75
----
. Apply the configuration for the address pool:
+
[source,terminal]
----
$ oc apply -f addresspool.yaml
----
.Verification
* View the address pool:
+
[source,terminal]
----
$ oc describe -n metallb-system addresspool doc-example
----
+
.Example output
[source,terminal]
----
Name: doc-example
Namespace: metallb-system
Labels: <none>
Annotations: <none>
API Version: metallb.io/v1alpha1
Kind: AddressPool
Metadata:
...
Spec:
  Addresses:
    203.0.113.1-203.0.113.10
    203.0.113.65-203.0.113.75
  Auto Assign:  true
  Protocol:     layer2
Events:         <none>
----
Confirm that the address pool name, such as `doc-example`, and the IP address ranges appear in the output.


@@ -0,0 +1,72 @@
[id="nw-metallb-configure-svc_{context}"]
= Configuring a service with MetalLB
You can configure a load-balancing service to use an external IP address from an address pool.
.Prerequisites
* Install the OpenShift CLI (`oc`).
* Install the MetalLB Operator and start MetalLB.
* Configure at least one address pool.
* Configure your network to route traffic from the clients to the host network for the cluster.
.Procedure
. Create a `<service_name>.yaml` file. In the file, ensure that the `spec.type` field is set to `LoadBalancer`.
+
Refer to the examples for information about how to request the external IP address that MetalLB assigns to the service.
. Create the service:
+
[source,terminal]
----
$ oc apply -f <service_name>.yaml
----
+
.Example output
[source,terminal]
----
service/<service_name> created
----
.Verification
* Describe the service:
+
[source,terminal]
----
$ oc describe service <service_name>
----
+
.Example output
----
Name: <service_name>
Namespace: default
Labels: <none>
Annotations: metallb.universe.tf/address-pool: doc-example <.>
Selector: app=service_name
Type: LoadBalancer <.>
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.105.237.254
IPs: 10.105.237.254
LoadBalancer Ingress: 192.168.100.5 <.>
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30550/TCP
Endpoints: 10.244.0.50:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <.>
Type Reason Age From Message
---- ------ ---- ---- -------
Normal nodeAssigned 32m (x2 over 32m) metallb-speaker announcing from node "<node_name>"
----
<.> The annotation is present if you request an IP address from a specific pool.
<.> The service type must indicate `LoadBalancer`.
<.> The load-balancer ingress field indicates the external IP address if the service is assigned correctly.
<.> The events field indicates the node name that is assigned to announce the external IP address.
If you experience an error, the events field indicates the reason for the error.


@@ -0,0 +1,61 @@
[id="nw-metallb-example-addresspool_{context}"]
= Example address pool configurations
== Example: IPv4 and CIDR ranges
You can specify a range of IP addresses in CIDR notation.
You can combine CIDR notation with the notation that uses a hyphen to separate lower and upper bounds.
[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: AddressPool
metadata:
  name: doc-example-cidr
  namespace: metallb-system
spec:
  protocol: layer2
  addresses:
  - 192.168.100.0/24
  - 192.168.200.0/24
  - 192.168.255.1-192.168.255.5
----
== Example: Reserve IP addresses
You can set the `autoAssign` field to `false` to prevent MetalLB from automatically assigning the IP addresses from the pool.
When you add a service, you can request a specific IP address from the pool or you can specify the pool name in an annotation to request any IP address from the pool.
[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: AddressPool
metadata:
  name: doc-example-reserved
  namespace: metallb-system
spec:
  protocol: layer2
  addresses:
  - 10.0.100.0/28
  autoAssign: false
----
== Example: IPv6 address pool
You can add address pools that use IPv6.
The following example shows a single IPv6 range.
However, you can specify multiple ranges in the `addresses` list, just as in the IPv4 examples.
[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: AddressPool
metadata:
  name: doc-example-ipv6
  namespace: metallb-system
spec:
  protocol: layer2
  addresses:
  - 2002:2:2::1-2002:2:2::100
----


@@ -0,0 +1,16 @@
[id="nw-metallb-infra-considerations_{context}"]
= Infrastructure considerations for MetalLB
MetalLB is primarily useful for on-premise, bare metal installations because these installations do not include a native load-balancer capability.
In addition to bare metal installations, installations of {product-title} on some infrastructures might not include a native load-balancer capability.
For example, the following infrastructures might benefit from adding the MetalLB Operator:
* Bare metal
* VMware vSphere
* {rh-virtualization-first}
* {rh-openstack-first} when it is installed without Octavia
The MetalLB Operator and MetalLB are supported with the OpenShift SDN and OVN-Kubernetes network providers.


@@ -0,0 +1,126 @@
[id="nw-metallb-installing-operator-cli_{context}"]
= Installing from OperatorHub using the CLI
Instead of using the {product-title} web console, you can install an Operator from OperatorHub using the CLI. Use the `oc` command to create or update a `Subscription` object.
.Prerequisites
* Install the OpenShift CLI (`oc`).
* Log in as a user with `cluster-admin` privileges.
.Procedure
. Confirm that the MetalLB Operator is available:
+
[source,terminal]
----
$ oc get packagemanifests -n openshift-marketplace metallb-operator
----
+
.Example output
[source,terminal]
----
NAME CATALOG AGE
metallb-operator Community Operators 9h
----
. Create the `metallb-system` namespace:
+
[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
EOF
----
. Create an Operator group custom resource in the namespace:
+
[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: metallb-operator
  namespace: metallb-system
spec:
  targetNamespaces:
  - metallb-system
EOF
----
. Confirm the Operator group is installed in the namespace:
+
[source,terminal]
----
$ oc get operatorgroup -n metallb-system
----
+
.Example output
[source,terminal]
----
NAME AGE
metallb-operator 14m
----
. Confirm the install plan is in the namespace:
+
[source,terminal]
----
$ oc get installplan -n metallb-system
----
+
.Example output
[source,terminal]
----
NAME CSV APPROVAL APPROVED
install-wzg94 metallb-operator.4.9.0-nnnnnnnnnnnn Automatic true
----
. Subscribe to the MetalLB Operator.
.. Run the following command to get the {product-title} major and minor version. You use the values to set the `channel` value in the next
step.
+
[source,terminal]
----
$ OC_VERSION=$(oc version -o yaml | grep openshiftVersion | \
grep -o '[0-9]*[.][0-9]*' | head -1)
----
.. To create a subscription custom resource for the Operator, enter the following command:
+
[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator-sub
  namespace: metallb-system
spec:
  channel: "${OC_VERSION}"
  name: metallb-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF
----
. To verify that the Operator is installed, enter the following command:
+
[source,terminal]
----
$ oc get clusterserviceversion -n metallb-system \
-o custom-columns=Name:.metadata.name,Phase:.status.phase
----
+
.Example output
[source,terminal]
----
Name Phase
metallb-operator.4.9.0-nnnnnnnnnnnn Succeeded
----


@@ -0,0 +1,21 @@
[id="nw-metallb-layer2-extern-traffic-pol_{context}"]
= Layer 2 and external traffic policy
With layer 2 mode, one node in your cluster receives all the traffic for the service IP address.
How your cluster handles the traffic after it enters the node is affected by the external traffic policy.
`cluster`::
This is the default value for `spec.externalTrafficPolicy`.
+
With the `cluster` traffic policy, after the node receives the traffic, the service proxy distributes the traffic to all the pods in your service.
This policy provides uniform traffic distribution across the pods, but it obscures the client IP address and it can appear to the application in your pods that the traffic originates from the node rather than the client.
`local`::
With the `local` traffic policy, after the node receives the traffic, the service proxy only sends traffic to the pods on the same node.
For example, if the `speaker` pod on node A announces the external service IP, then all traffic is sent to node A.
After the traffic enters node A, the service proxy only sends traffic to pods for the service that are also on node A.
Pods for the service that are on additional nodes do not receive any traffic from node A.
Pods for the service on additional nodes act as replicas in case failover is needed.
+
This policy does not affect the client IP address.
Application pods can determine the client IP address from the incoming connections.
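For reference, the following is a minimal sketch of a service manifest that sets the `local` traffic policy. The service name, selector, and port values are placeholders; note that the API accepts the capitalized values `Cluster` and `Local` for `spec.externalTrafficPolicy`.
[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: <service_name>
spec:
  selector:
    <label_key>: <label_value>
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  type: LoadBalancer
  externalTrafficPolicy: Local
----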


@@ -0,0 +1,31 @@
[id="nw-metallb-layer2-limitations_{context}"]
= Limitations for layer 2 mode
[id="nw-metallb-layer2-limitations-bottleneck_{context}"]
== Single-node bottleneck
Because MetalLB routes all traffic for a service through a single node, the node can become a bottleneck and limit performance.
Layer 2 mode limits the ingress bandwidth for your service to the bandwidth of a single node.
This is a fundamental limitation of using ARP and NDP to direct traffic.
[id="nw-metallb-layer2-limitations-failover_{context}"]
== Slow failover performance
Failover between nodes depends on cooperation from the clients.
When a failover occurs, MetalLB sends gratuitous ARP packets to notify clients that the MAC address associated with the service IP has changed.
Most client operating systems handle gratuitous ARP packets correctly and update their neighbor caches promptly.
When clients update their caches quickly, failover completes within a few seconds.
Clients typically fail over to a new node within 10 seconds.
However, some client operating systems either do not handle gratuitous ARP packets at all or have outdated implementations that delay the cache update.
Recent versions of common operating systems such as Windows, macOS, and Linux implement layer 2 failover correctly.
Issues with slow failover are not expected except for older and less common client operating systems.
// FIXME: I think "leadership" is from an old algorithm.
// If there is a way to perform a planned failover, let's cover it. `oc drain`?
To minimize the impact from a planned failover on outdated clients, keep the old node running for a few minutes after flipping leadership.
The old node can continue to forward traffic for outdated clients until their caches refresh.
During an unplanned failover, the service IPs are unreachable until the outdated clients refresh their cache entries.
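If you want to observe this behavior, one rough check is to inspect the neighbor cache on a Linux client before and after a failover. This is a minimal sketch; `192.168.100.200` is a placeholder service IP address.
[source,terminal]
----
$ ip neigh show | grep 192.168.100.200
----
After the client processes the gratuitous ARP packet, the `lladdr` value in the output changes to the MAC address of the node that now announces the service IP address.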


@@ -0,0 +1,42 @@
[id="nw-metallb-layer2_{context}"]
= MetalLB concepts for layer 2 mode
In layer 2 mode, the `speaker` pod on one node announces the external IP address for a service to the host network.
From a network perspective, the node appears to have multiple IP addresses assigned to a network interface.
The `speaker` pod responds to ARP requests for IPv4 services and NDP requests for IPv6.
In layer 2 mode, all traffic for a service IP address is routed through one node.
After traffic enters the node, the service proxy for the CNI network provider distributes the traffic to all the pods for the service.
Because all traffic for a service enters through a single node in layer 2 mode, in a strict sense, MetalLB does not implement a load balancer for layer 2.
Rather, MetalLB implements a failover mechanism for layer 2 so that when a `speaker` pod becomes unavailable, a `speaker` pod on a different node can announce the service IP address.
When a node becomes unavailable, failover is automatic.
The `speaker` pods on the other nodes detect that a node is unavailable and a new `speaker` pod and node take ownership of the service IP address from the failed node.
image::nw-metallb-layer2.png[Conceptual diagram for MetalLB and layer 2 mode]
The preceding graphic shows the following concepts related to MetalLB:
* An application is available through a service that has a cluster IP on the `172.130.0.0/16` subnet.
That IP address is accessible from inside the cluster.
The service also has an external IP address that MetalLB assigned to the service, `192.168.100.200`.
* Nodes 1 and 3 have a pod for the application.
* The `speaker` daemon set runs a pod on each node.
The MetalLB Operator starts these pods.
* Each `speaker` pod is a host-networked pod.
The IP address for the pod is identical to the IP address for the node on the host network.
* The `speaker` pod on node 1 uses ARP to announce the external IP address for the service, `192.168.100.200`.
The `speaker` pod that announces the external IP address must be on the same node as an endpoint for the service and the endpoint must be in the `Ready` condition.
* Client traffic is routed to the host network and connects to the `192.168.100.200` IP address.
After traffic enters the node, the service proxy sends the traffic to the application pod on the same node or another node according to the external traffic policy that you set for the service.
* If node 1 becomes unavailable, the external IP address fails over to another node.
On another node that has an instance of the application pod and service endpoint, the `speaker` pod begins to announce the external IP address, `192.168.100.200`, and the new node receives the client traffic.
In the diagram, the only candidate is node 3.
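To check which node currently answers for the external IP address, you can send ARP requests from a client on the same layer 2 segment. The following is a minimal sketch that assumes a Linux client with the `arping` utility installed; the interface name and IP address are placeholders.
[source,terminal]
----
$ arping -I eth0 -c 3 192.168.100.200
----
The replies contain the MAC address of the node whose `speaker` pod announces the external IP address. After a failover, the replies come from a different MAC address.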


@@ -0,0 +1,20 @@
[id="nw-metallb-operator-custom-resources_{context}"]
= MetalLB Operator custom resources
The MetalLB Operator monitors its own namespace for two custom resources:
`MetalLB`::
When you add a `MetalLB` custom resource to the cluster, the MetalLB Operator deploys MetalLB on the cluster.
The Operator only supports a single instance of the custom resource.
If the instance is deleted, the Operator removes MetalLB from the cluster.
`AddressPool`::
MetalLB requires one or more pools of IP addresses that it can assign to a service when you add a service of type `LoadBalancer`.
When you add an `AddressPool` custom resource to the cluster, the MetalLB Operator configures MetalLB so that it can assign IP addresses from the pool.
An address pool includes a list of IP addresses.
The list can be a single IP address, a range specified in CIDR notation, a range specified as a starting and ending address separated by a hyphen, or a combination of the three.
An address pool requires a name.
The documentation uses names like `doc-example`, `doc-example-reserved`, and `doc-example-ipv6`.
An address pool specifies whether MetalLB can automatically assign IP addresses from the pool or whether the IP addresses are reserved for services that explicitly specify the pool by name.
After you add the `MetalLB` custom resource to the cluster and the Operator deploys MetalLB, the MetalLB software components, `controller` and `speaker`, begin running.
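As a quick check after you install the Operator, you can list the resource kinds that the `metallb.io` API group serves. This is a minimal sketch; the exact output depends on the Operator version.
[source,terminal]
----
$ oc api-resources --api-group=metallb.io
----
The output includes the `MetalLB` and `AddressPool` kinds that are described in this section.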


@@ -0,0 +1,61 @@
[id="nw-metallb-operator-initial-config_{context}"]
= Starting MetalLB on your cluster
After you install the Operator, you need to configure a single instance of a MetalLB custom resource. After you configure the custom resource, the Operator starts MetalLB on your cluster.
.Prerequisites
* Install the OpenShift CLI (`oc`).
* Log in as a user with `cluster-admin` privileges.
* Install the MetalLB Operator.
.Procedure
. Create a single instance of a MetalLB custom resource:
+
[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: metallb.io/v1alpha1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
EOF
----
.Verification
Confirm that the deployment for the MetalLB controller and the daemon set for the MetalLB speaker are running.
. Check that the deployment for the controller is running:
+
[source,terminal]
----
$ oc get deployment -n metallb-system controller
----
+
.Example output
[source,terminal]
----
NAME READY UP-TO-DATE AVAILABLE AGE
controller 1/1 1 1 11m
----
. Check that the daemon set for the speaker is running:
+
[source,terminal]
----
$ oc get daemonset -n metallb-system speaker
----
+
.Example output
[source,terminal]
----
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
speaker 6 6 6 6 6 kubernetes.io/os=linux 18m
----
+
The example output indicates 6 speaker pods. The number of speaker pods in your cluster might differ from the example output. Make sure the output indicates one pod for each node in your cluster.


@@ -0,0 +1,25 @@
[id="nw-metallb-software-components_{context}"]
= MetalLB software components
When you install the MetalLB Operator, the `metallb-operator-controller-manager` deployment starts a pod.
The pod is the implementation of the Operator.
The pod monitors for changes to the `MetalLB` custom resource and `AddressPool` custom resources.
When the Operator starts an instance of MetalLB, it starts a `controller` deployment and a `speaker` daemon set.
`controller`::
The Operator starts the deployment and a single pod.
When you add a service of type `LoadBalancer`, Kubernetes uses the `controller` to allocate an IP address from an address pool.
`speaker`::
The Operator starts a daemon set with one `speaker` pod for each node in your cluster.
+
For layer 2 mode, after the `controller` allocates an IP address for the service, each `speaker` pod determines if it is on the same node as an endpoint for the service.
An algorithm that involves hashing the node name and the service name is used to select a single `speaker` pod to announce the load balancer IP address.
// IETF treats protocol names as proper nouns.
The `speaker` uses Address Resolution Protocol (ARP) to announce IPv4 addresses and Neighbor Discovery Protocol (NDP) to announce IPv6 addresses.
+
Requests for the load balancer IP address are routed to the node with the `speaker` that announces the IP address.
After the node receives the packets, the service proxy routes the packets to an endpoint for the service.
The endpoint can be on the same node in the optimal case, or it can be on another node.
The service proxy chooses an endpoint each time a connection is established.
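After MetalLB starts, you can view both software components as pods in the Operator namespace. This is a minimal sketch; pod names and counts vary by cluster.
[source,terminal]
----
$ oc get pods -n metallb-system
----
The output includes the pod for the `metallb-operator-controller-manager` deployment, one `controller` pod, and one `speaker` pod for each node in the cluster.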


@@ -0,0 +1,8 @@
[id="nw-metallb-when-metallb_{context}"]
= When to use MetalLB
Using MetalLB is valuable when you have a bare-metal cluster, or an infrastructure that is like bare metal, and you want fault-tolerant access to an application through an external IP address.
You must configure your networking infrastructure to ensure that network traffic for the external IP address is routed from clients to the host network for the cluster.
After deploying MetalLB with the MetalLB Operator, when you add a service of type `LoadBalancer`, MetalLB provides a platform-native load balancer.


@@ -8,7 +8,8 @@
// https://projects.engineering.redhat.com/projects/RHEC/summary
// Add additional ifevals here, but before context == olm-adding-operators-to-a-cluster
ifeval::["{context}" != "olm-adding-operators-to-a-cluster"]
ifndef::filter-type[]
//ifeval::["{context}" != "olm-adding-operators-to-a-cluster"]
:filter-type: jaeger
:filter-operator: Jaeger
:olm-admin:


@@ -26,6 +26,11 @@ with the SNI header, use an Ingress Controller.
|xref:../../networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-load-balancer.adoc#configuring-ingress-cluster-traffic-load-balancer[Automatically assign an external IP using a load balancer service]
|Allows traffic to non-standard ports through an IP address assigned from a pool.
Most cloud platforms offer a method to start a service with a load-balancer IP address.
|xref:../../networking/metallb/about-metallb.adoc#about-metallb[About MetalLB and the MetalLB Operator]
|Allows traffic to a specific IP address or an address from a pool on the machine network.
For bare-metal installations or platforms that are like bare metal, MetalLB provides a way to start a service with a load-balancer IP address.
|xref:../../networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-service-external-ip.adoc#configuring-ingress-cluster-traffic-service-external-ip[Manually assign an external IP to a service]
|Allows traffic to non-standard ports through a specific IP address.
@@ -33,3 +38,25 @@ with the SNI header, use an Ingress Controller.
|xref:../../networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-nodeport.adoc#configuring-ingress-cluster-traffic-nodeport[Configure a `NodePort`]
|Expose a service on all nodes in the cluster.
|===
[id="overview-traffic-comparision_{context}"]
== Comparison: Fault tolerant access to external IP addresses
For the communication methods that provide access to an external IP address, fault tolerant access to the IP address is another consideration.
The following features provide fault tolerant access to an external IP address.
IP failover::
IP failover manages a pool of virtual IP addresses for a set of nodes.
It is implemented with Keepalived and Virtual Router Redundancy Protocol (VRRP).
IP failover is a layer 2 mechanism only and relies on multicast.
Multicast can have disadvantages for some networks.
MetalLB::
MetalLB has a layer 2 mode, but it does not use multicast.
Layer 2 mode has the disadvantage that it transfers all traffic for an external IP address through one node.
Manually assigning external IP addresses::
You can configure your cluster with an IP address block that is used to assign external IP addresses to services.
By default, this feature is disabled.
This feature is flexible, but places the largest burden on the cluster or network administrator.
The cluster is prepared to receive traffic that is destined for the external IP, but the cluster or network administrator must decide how to route that traffic to the nodes.


@@ -0,0 +1,65 @@
[id="about-metallb"]
= About MetalLB and the MetalLB Operator
include::modules/common-attributes.adoc[]
:context: about-metallb-and-metallb-operator
toc::[]
As a cluster administrator, you can add the MetalLB Operator to your cluster so that when a service of type `LoadBalancer` is added to the cluster, MetalLB can add a fault-tolerant external IP address for the service.
The external IP address is added to the host network for your cluster.
// When to deploy MetalLB
include::modules/nw-metallb-when-metallb.adoc[leveloffset=+1]
// MetalLB Operator custom resources
include::modules/nw-metallb-operator-custom-resources.adoc[leveloffset=+1]
// MetalLB software components
include::modules/nw-metallb-software-components.adoc[leveloffset=+1]
// Layer 2
include::modules/nw-metallb-layer2.adoc[leveloffset=+1]
// Layer 2 and external traffic policy
include::modules/nw-metallb-layer2-extern-traffic-pol.adoc[leveloffset=+2]
[id="limitations-and-restrictions_{context}"]
== Limitations and restrictions
// With fair confidence, this topic is temporary for 4.9.
[id="support-layer2-only_{context}"]
=== Support for layer 2 only
When you install and configure MetalLB on {product-title} 4.9 with the MetalLB Operator, support is restricted to layer 2 mode only.
In comparison, the open source MetalLB project offers load balancing for layer 2 mode and a mode for layer 3 that uses border gateway protocol (BGP).
// Ditto. This limitation should be lifted in 4.10.
[id="support-single-stack_{context}"]
=== Support for single stack networking
Although you can specify IPv4 addresses and IPv6 addresses in the same address pool, MetalLB only assigns one IP address for the load balancer.
When MetalLB is deployed on a cluster that is configured for dual-stack networking, MetalLB assigns one IPv4 or IPv6 address for the load balancer, depending on the IP address family of the cluster IP for the service.
For example, if the cluster IP of the service is IPv4, then MetalLB assigns an IPv4 address for the load balancer.
MetalLB does not assign an IPv4 and an IPv6 address simultaneously.
IPv6 is only supported for clusters that use the OVN-Kubernetes network provider.
// Infra considerations
include::modules/nw-metallb-infra-considerations.adoc[leveloffset=+2]
// Layer 2 limitations
include::modules/nw-metallb-layer2-limitations.adoc[leveloffset=+2]
// Incompat with IP failover
[id="incompatibility-with-ip-failover_{context}"]
=== Incompatibility with IP failover
MetalLB is incompatible with the IP failover feature. Before you install the MetalLB Operator, remove IP failover.
[id="additional-resources_{context}"]
== Additional resources
* xref:../../networking/configuring_ingress_cluster_traffic/overview-traffic.adoc#overview-traffic-comparision_overview-traffic[Comparison: Fault tolerant access to external IP addresses]
* xref:../../networking/configuring-ipfailover.adoc#nw-ipfailover-remove_configuring-ipfailover[Removing IP failover]

networking/metallb/images Symbolic link

@@ -0,0 +1 @@
../../images


@@ -0,0 +1,23 @@
[id="metallb-configure-address-pools"]
= Configuring MetalLB address pools
include::modules/common-attributes.adoc[]
:context: configure-metallb-address-pools
toc::[]
As a cluster administrator, you can add, modify, and delete address pools.
The MetalLB Operator uses the address pool custom resources to set the IP addresses that MetalLB can assign to services.
// Address pool custom resource
include::modules/nw-metallb-addresspool-cr.adoc[leveloffset=+1]
// Add an address pool
include::modules/nw-metallb-configure-address-pool.adoc[leveloffset=+1]
// Examples
include::modules/nw-metallb-example-addresspool.adoc[leveloffset=+1]
[id="next-steps_{context}"]
== Next steps
* xref:../../networking/metallb/metallb-configure-services.adoc#metallb-configure-services[Configuring services to use MetalLB]


@@ -0,0 +1,162 @@
[id="metallb-configure-services"]
= Configuring services to use MetalLB
include::modules/common-attributes.adoc[]
:context: configure-services-metallb
toc::[]
As a cluster administrator, when you add a service of type `LoadBalancer`, you can control how MetalLB assigns an IP address.
// Request a specific IP address
[id="request-specific-ip-address_{context}"]
== Request a specific IP address
Like some other load-balancer implementations, MetalLB accepts the `spec.loadBalancerIP` field in the service specification.
If the requested IP address is within a range from any address pool, MetalLB assigns the requested IP address.
If the requested IP address is not within any range, MetalLB reports a warning.
.Example service YAML for a specific IP address
[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: <service_name>
  annotations:
    metallb.universe.tf/address-pool: <address_pool_name>
spec:
  selector:
    <label_key>: <label_value>
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  type: LoadBalancer
  loadBalancerIP: <ip_address>
----
If MetalLB cannot assign the requested IP address, the `EXTERNAL-IP` for the service reports `<pending>`, and running `oc describe service <service_name>` shows an event like the following example.
.Example event when MetalLB cannot assign a requested IP address
[source,terminal]
----
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for "default/invalid-request": "4.3.2.1" is not allowed in config
----
[id="request-ip-address-from-pool_{context}"]
== Request an IP address from a specific pool
To assign an IP address from a specific pool when you are not concerned about the specific IP address, you can use the `metallb.universe.tf/address-pool` annotation to request an IP address from the specified address pool.
.Example service YAML for an IP address from a specific pool
[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: <service_name>
  annotations:
    metallb.universe.tf/address-pool: <address_pool_name>
spec:
  selector:
    <label_key>: <label_value>
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  type: LoadBalancer
----
If the address pool that you specify for `<address_pool_name>` does not exist, MetalLB attempts to assign an IP address from any pool that permits automatic assignment.
[id="accept-any-ip-address_{context}"]
== Accept any IP address
By default, address pools are configured to permit automatic assignment.
MetalLB assigns an IP address from these address pools.
To accept any IP address from any pool that is configured for automatic assignment, no special annotation or configuration is required.
.Example service YAML for accepting any IP address
[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: <service_name>
spec:
  selector:
    <label_key>: <label_value>
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  type: LoadBalancer
----
[id="share-specific-ip-address_{context}"]
== Share a specific IP address
By default, services do not share IP addresses.
However, if you need to colocate services on a single IP address, you can enable selective IP sharing by adding the `metallb.universe.tf/allow-shared-ip` annotation to the services.
[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: service-http
  annotations:
    metallb.universe.tf/address-pool: doc-example
    metallb.universe.tf/allow-shared-ip: "web-server-svc" <1>
spec:
  ports:
  - name: http
    port: 80 <2>
    protocol: TCP
    targetPort: 8080
  selector:
    <label_key>: <label_value> <3>
  type: LoadBalancer
  loadBalancerIP: 172.31.249.7 <4>
---
apiVersion: v1
kind: Service
metadata:
  name: service-https
  annotations:
    metallb.universe.tf/address-pool: doc-example
    metallb.universe.tf/allow-shared-ip: "web-server-svc" <1>
spec:
  ports:
  - name: https
    port: 443 <2>
    protocol: TCP
    targetPort: 8080
  selector:
    <label_key>: <label_value> <3>
  type: LoadBalancer
  loadBalancerIP: 172.31.249.7 <4>
----
<1> Specify the same value for the `metallb.universe.tf/allow-shared-ip` annotation. This value is referred to as the _sharing key_.
<2> Specify different port numbers for the services.
<3> Specify identical pod selectors if you must specify `externalTrafficPolicy: Local` so that the services send traffic to the same set of pods. If you use the `cluster` external traffic policy, then the pod selectors do not need to be identical.
<4> Optional: If you specify the three preceding items, MetalLB might colocate the services on the same IP address. To ensure that services share an IP address, specify the IP address to share.
By default, Kubernetes does not allow multiprotocol load balancer services.
This limitation would normally make it impossible to run a service like DNS that needs to listen on both TCP and UDP.
To work around this limitation of Kubernetes with MetalLB, create two services (see the sketch after this list):
* For one service, specify TCP and for the second service, specify UDP.
* In both services, specify the same pod selector.
* Specify the same sharing key and `spec.loadBalancerIP` value to colocate the TCP and UDP services on the same IP address.
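The following is a minimal sketch of that pattern for a DNS-style workload. The service names, sharing key, selector, target port, and IP address are placeholders.
[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: dns-tcp
  annotations:
    metallb.universe.tf/allow-shared-ip: "dns-svc"
spec:
  ports:
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 5353
  selector:
    <label_key>: <label_value>
  type: LoadBalancer
  loadBalancerIP: <ip_address>
---
apiVersion: v1
kind: Service
metadata:
  name: dns-udp
  annotations:
    metallb.universe.tf/allow-shared-ip: "dns-svc"
spec:
  ports:
  - name: dns-udp
    port: 53
    protocol: UDP
    targetPort: 5353
  selector:
    <label_key>: <label_value>
  type: LoadBalancer
  loadBalancerIP: <ip_address>
----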
// Configuring a service with MetalLB
include::modules/nw-metallb-configure-svc.adoc[leveloffset=+1]


@@ -0,0 +1,32 @@
[id="metallb-operator-install"]
= Installing the MetalLB Operator
include::modules/common-attributes.adoc[]
:context: metallb-operator-install
toc::[]
As a cluster administrator, you can add the MetalLB Operator so that the Operator can manage the lifecycle for an instance of MetalLB on your cluster.
The installation procedures use the `metallb-system` namespace.
You can install the Operator and configure custom resources in a different namespace.
The Operator starts MetalLB in the same namespace that the Operator is installed in.
MetalLB and IP failover are incompatible. If you configured IP failover for your cluster, perform the steps to xref:../../networking/configuring-ipfailover.adoc#nw-ipfailover-remove_configuring-ipfailover[remove IP failover] before you install the Operator.
// Install the Operator with console
:filter-type: metallb
:filter-operator: MetalLB
include::modules/olm-installing-from-operatorhub-using-web-console.adoc[leveloffset=+1]
:!filter-type:
:!filter-operator:
// Install the Operator with CLI
include::modules/nw-metallb-installing-operator-cli.adoc[leveloffset=+1]
// Starting MetalLB on your cluster
include::modules/nw-metallb-operator-initial-config.adoc[leveloffset=+1]
[id="next-steps_{context}"]
== Next steps
* xref:../../networking/metallb/metallb-configure-address-pools.adoc#metallb-configure-address-pools[Configuring MetalLB address pools]

networking/metallb/modules Symbolic link

@@ -0,0 +1 @@
../../modules