
ADV edits for some MetalLB content

Steven Smith
2026-02-02 15:55:08 -05:00
committed by openshift-cherrypick-robot
parent a82a149165
commit 5212e402d2
9 changed files with 54 additions and 39 deletions

View File

@@ -6,7 +6,8 @@
[id="upgrading-metallb-operator_{context}"] [id="upgrading-metallb-operator_{context}"]
= Manually upgrading the MetalLB Operator = Manually upgrading the MetalLB Operator
To manually control upgrading the MetalLB Operator, you must edit the `Subscription` custom resource (CR) that subscribes the namespace to `metallb-system`. A `Subscription` CR is created as part of the Operator installation and the CR has the `installPlanApproval` parameter set to `Automatic` by default. [role="_abstract"]
To manually control when the MetalLB Operator upgrades in {product-title}, you set `installPlanApproval` to Manual in the Subscription custom resource and approve the install plan. You then verify the upgrade by using the `ClusterServiceVersion` status.
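For reference, a `Subscription` CR with manual approval might look like the following sketch. The subscription name is illustrative; the channel, source, and namespace values follow the CLI install module shown later in this commit:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator-sub      # illustrative name
  namespace: metallb-system
spec:
  channel: stable
  name: metallb-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual     # upgrades wait until an InstallPlan is approved
----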

.Prerequisites

View File

@@ -6,7 +6,8 @@
[id="nw-metallb-installing-operator-cli_{context}"] [id="nw-metallb-installing-operator-cli_{context}"]
= Installing from the software catalog using the CLI = Installing from the software catalog using the CLI
Instead of using the {product-title} web console, you can install an Operator from the software catalog using the CLI. You can use the OpenShift CLI (`oc`) to install the MetalLB Operator. [role="_abstract"]
To install the MetalLB Operator from the software catalog in {product-title} without using the web console, you can use the {oc-first}.
It is recommended that when using the CLI you install the Operator in the `metallb-system` namespace. It is recommended that when using the CLI you install the Operator in the `metallb-system` namespace.
@@ -70,10 +71,11 @@ metadata:
spec: spec:
channel: stable channel: stable
name: metallb-operator name: metallb-operator
source: redhat-operators <1> source: redhat-operators
sourceNamespace: openshift-marketplace sourceNamespace: openshift-marketplace
---- ----
<1> You must specify the `redhat-operators` value. +
** For the `spec.source` parameter, must specify the `redhat-operators` value.
.. To create the `Subscription` CR, run the following command: .. To create the `Subscription` CR, run the following command:
+ +
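The command itself falls outside this hunk; a minimal sketch of that step, assuming the `Subscription` manifest was saved as `metallb-sub.yaml` (an illustrative filename), is:

[source,terminal]
----
$ oc create -f metallb-sub.yaml
----

Any file name works, and `oc apply -f` is equally valid if you expect to re-apply the manifest.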

View File

@@ -2,11 +2,14 @@
//
// * networking/metallb/metallb-operator-install.adoc

-:_mod-docs-content-type: REFERENCE
+:_mod-docs-content-type: CONCEPT
[id="nw-metallb-operator-deployment-specifications-for-metallb_{context}"]
= Deployment specifications for MetalLB

-When you start an instance of MetalLB using the `MetalLB` custom resource, you can configure deployment specifications in the `MetalLB` custom resource to manage how the `controller` or `speaker` pods deploy and run in your cluster. Use these deployment specifications to manage the following tasks:
+[role="_abstract"]
+Deployment specifications in the `MetalLB` custom resource control how the MetalLB `controller` and `speaker` pods deploy and run in {product-title}.
+Use deployment specifications to manage the following tasks:

* Select nodes for MetalLB pod deployment.
* Manage scheduling by using pod priority and pod affinity.
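As an illustration of how these specifications combine, the following sketch sets a node selector, a toleration, and a priority class in one `MetalLB` CR. It reuses only fields that appear in the modules that follow; the `high-priority` class must already exist:

[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  nodeSelector:                  # run speaker pods on worker nodes only
    node-role.kubernetes.io/worker: ""
  speakerTolerations:            # let speaker pods tolerate an example taint
  - key: "Example"
    operator: "Exists"
    effect: "NoExecute"
  controllerConfig:
    priorityClassName: high-priority   # PriorityClass defined separately
----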

View File

@@ -6,7 +6,8 @@
[id="nw-metallb-operator-initial-config_{context}"] [id="nw-metallb-operator-initial-config_{context}"]
= Starting MetalLB on your cluster = Starting MetalLB on your cluster
After you install the Operator, you need to configure a single instance of a MetalLB custom resource. After you configure the custom resource, the Operator starts MetalLB on your cluster. [role="_abstract"]
To start MetalLB on your cluster after installing the MetalLB Operator in {product-title}, you create a single MetalLB custom resource.
.Prerequisites .Prerequisites
@@ -16,11 +17,8 @@ After you install the Operator, you need to configure a single instance of a Met
* Install the MetalLB Operator.

.Procedure

-This procedure assumes the MetalLB Operator is installed in the `metallb-system` namespace. If you installed using the web console substitute `openshift-operators` for the namespace.

. Create a single instance of a MetalLB custom resource:
+
[source,terminal]

@@ -33,6 +31,8 @@ metadata:
  namespace: metallb-system
EOF
----
++
+** For the `metadata.namespace` parameter, substitute `metallb-system` with `openshift-operators` if you installed the MetalLB Operator by using the web console.
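The beginning of the creation command falls between the hunks above. Pieced together from the visible fragments and the `MetalLB` example used elsewhere in this commit, the complete step is likely close to the following sketch:

[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
EOF
----

After a short delay, `oc get pods -n metallb-system` should show the `controller` and `speaker` pods starting.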

.Verification

View File

@@ -6,16 +6,14 @@
[id="nw-metallb-operator-limit-speaker-to-nodes_{context}"] [id="nw-metallb-operator-limit-speaker-to-nodes_{context}"]
= Limit speaker pods to specific nodes = Limit speaker pods to specific nodes
By default, when you start MetalLB with the MetalLB Operator, the Operator starts an instance of a `speaker` pod on each node in the cluster. [role="_abstract"]
Only the nodes with a `speaker` pod can advertise a load balancer IP address. You can limit MetalLB `speaker` pods to specific nodes in {product-title} by configuring a node selector in the `MetalLB` custom resource. Only nodes that run a `speaker` pod advertise load balancer IP addresses, so you control which nodes serve MetalLB traffic.
You can configure the `MetalLB` custom resource with a node selector to specify which nodes run the `speaker` pods.
The most common reason to limit the `speaker` pods to specific nodes is to ensure that only nodes with network interfaces on specific networks advertise load balancer IP addresses. The most common reason to limit the `speaker` pods to specific nodes is to ensure that only nodes with network interfaces on specific networks advertise load balancer IP addresses.
Only the nodes with a running `speaker` pod are advertised as destinations of the load balancer IP address.
If you limit the `speaker` pods to specific nodes and specify `local` for the external traffic policy of a service, then you must ensure that the application pods for the service are deployed to the same nodes. If you limit the `speaker` pods to specific nodes and specify `local` for the external traffic policy of a service, then you must ensure that the application pods for the service are deployed to the same nodes.
.Example configuration to limit speaker pods to worker nodes .Example configuration to limit `speaker` pods to worker nodes
[source,yaml] [source,yaml]
---- ----
apiVersion: metallb.io/v1beta1 apiVersion: metallb.io/v1beta1
@@ -24,15 +22,16 @@ metadata:
  name: metallb
  namespace: metallb-system
spec:
-  nodeSelector: <1>
+  nodeSelector:
    node-role.kubernetes.io/worker: ""
-  speakerTolerations: <2>
+  speakerTolerations:
  - key: "Example"
    operator: "Exists"
    effect: "NoExecute"
----
-<1> The example configuration specifies to assign the speaker pods to worker nodes, but you can specify labels that you assigned to nodes or any valid node selector.
-<2> In this example configuration, the pod that this toleration is attached to tolerates any taint that matches the `key` value and `effect` value using the `operator`.
+** In this example configuration, the `spec.nodeSelector` field assigns the `speaker` pods to worker nodes. You can specify labels that you assigned to nodes or any valid node selector.
+** In this example configuration, the `spec.speakerTolerations` field specifies that the pod to which this toleration is attached tolerates any taint that matches the `key` and `effect` values by using the `operator` value.

After you apply a manifest with the `spec.nodeSelector` field, you can check the number of pods that the Operator deployed with the `oc get daemonset -n metallb-system speaker` command.
Similarly, you can display the nodes that match your labels with a command like `oc get nodes -l node-role.kubernetes.io/worker=`.
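The note about using any valid node selector can be made concrete with a custom label. The `metallb-speaker=enabled` key and value below are illustrative, not a MetalLB convention:

[source,terminal]
----
$ oc label node <node_name> metallb-speaker=enabled
----

The `MetalLB` CR then selects on that label instead of the worker role:

[source,yaml]
----
spec:
  nodeSelector:
    metallb-speaker: enabled
----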

View File

@@ -6,7 +6,8 @@
[id="nw-metallb-operator-setting-pod-CPU-limits_{context}"] [id="nw-metallb-operator-setting-pod-CPU-limits_{context}"]
= Configuring pod CPU limits in a MetalLB deployment = Configuring pod CPU limits in a MetalLB deployment
You can optionally assign pod CPU limits to `controller` and `speaker` pods by configuring the `MetalLB` custom resource. Defining CPU limits for the `controller` or `speaker` pods helps you to manage compute resources on the node. This ensures all pods on the node have the necessary compute resources to manage workloads and cluster housekeeping. [role="_abstract"]
To manage compute resources on nodes running MetalLB in {product-title}, you can assign CPU limits to the `controller` and `speaker` pods in the `MetalLB` custom resource. This ensures that all pods on the node have the necessary compute resources to manage workloads and cluster housekeeping.
.Prerequisites .Prerequisites
@@ -15,6 +16,7 @@ You can optionally assign pod CPU limits to `controller` and `speaker` pods by c
* You have installed the MetalLB Operator.

.Procedure

. Create a `MetalLB` custom resource file, such as `CPULimits.yaml`, to specify the `cpu` value for the `controller` and `speaker` pods:
+
[source,yaml]
@@ -44,6 +46,7 @@ $ oc apply -f CPULimits.yaml
----
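The contents of `CPULimits.yaml` are not part of this hunk. A sketch of what such a file might contain follows; the `controllerConfig.resources` and `speakerConfig.resources` field names are an assumption about the `MetalLB` CRD that this commit does not show, and the CPU values are arbitrary examples:

[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  controllerConfig:
    resources:
      limits:
        cpu: "200m"   # example value
  speakerConfig:
    resources:
      limits:
        cpu: "300m"   # example value
----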

.Verification

* To view compute resources for a pod, run the following command, replacing `<pod_name>` with your target pod:
+
[source,bash]

View File

@@ -6,7 +6,10 @@
[id="nw-metallb-operator-setting-pod-priority-affinity_{context}"] [id="nw-metallb-operator-setting-pod-priority-affinity_{context}"]
= Configuring pod priority and pod affinity in a MetalLB deployment = Configuring pod priority and pod affinity in a MetalLB deployment
You can optionally assign pod priority and pod affinity rules to `controller` and `speaker` pods by configuring the `MetalLB` custom resource. The pod priority indicates the relative importance of a pod on a node and schedules the pod based on this priority. Set a high priority on your `controller` or `speaker` pod to ensure scheduling priority over other pods on the node. [role="_abstract"]
To control scheduling of MetalLB controller and `speaker` pods in {product-title}, you can assign pod priority and pod affinity in the `MetalLB` custom resource. You create a `PriorityClass` and set `priorityClassName` and affinity in the `MetalLB` spec, then apply the configuration.
The pod priority indicates the relative importance of a pod on a node and schedules the pod based on this priority. Set a high priority on your `controller` or `speaker` pod to ensure scheduling priority over other pods on the node.
Pod affinity manages relationships among pods. Assign pod affinity to the `controller` or `speaker` pods to control on what node the scheduler places the pod in the context of pod relationships. For example, you can use pod affinity rules to ensure that certain pods are located on the same node or nodes, which can help improve network communication and reduce latency between those components. Pod affinity manages relationships among pods. Assign pod affinity to the `controller` or `speaker` pods to control on what node the scheduler places the pod in the context of pod relationships. For example, you can use pod affinity rules to ensure that certain pods are located on the same node or nodes, which can help improve network communication and reduce latency between those components.
@@ -19,6 +22,7 @@ Pod affinity manages relationships among pods. Assign pod affinity to the `contr
* You have started the MetalLB Operator on your cluster.

.Procedure

. Create a `PriorityClass` custom resource, such as `myPriorityClass.yaml`, to configure the priority level. This example defines a `PriorityClass` named `high-priority` with a value of `1000000`. Pods that are assigned this priority class are considered higher priority during scheduling compared to pods with lower priority classes:
+
[source,yaml]
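----
# Sketch only: the actual contents of myPriorityClass.yaml fall outside this hunk.
# The name and value below come from the step text above; everything else is the
# standard PriorityClass shape.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
----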
@@ -49,9 +53,9 @@ metadata:
spec:
  logLevel: debug
  controllerConfig:
-    priorityClassName: high-priority <1>
+    priorityClassName: high-priority
    affinity:
-      podAffinity: <2>
+      podAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
@@ -68,26 +72,31 @@ spec:
          topologyKey: kubernetes.io/hostname
----
+
-<1> Specifies the priority class for the MetalLB controller pods. In this case, it is set to `high-priority`.
-<2> Specifies that you are configuring pod affinity rules. These rules dictate how pods are scheduled in relation to other pods or nodes. This configuration instructs the scheduler to schedule pods that have the label `app: metallb` onto nodes that share the same hostname. This helps to co-locate MetalLB-related pods on the same nodes, potentially optimizing network communication, latency, and resource usage between these pods.
+where:
++
+--
+`spec.controllerConfig.priorityClassName`:: Specifies the priority class for the MetalLB controller pods. In this case, it is set to `high-priority`.
+`spec.controllerConfig.affinity.podAffinity`:: Specifies that you are configuring pod affinity rules. These rules dictate how pods are scheduled in relation to other pods or nodes. This configuration instructs the scheduler to schedule pods that have the label `app: metallb` onto nodes that share the same hostname. This helps to co-locate MetalLB-related pods on the same nodes, potentially optimizing network communication, latency, and resource usage between these pods.
+--

-. Apply the `MetalLB` custom resource configuration:
+. Apply the `MetalLB` custom resource configuration by running the following command:
+
-[source,bash]
+[source,terminal]
----
$ oc apply -f MetalLBPodConfig.yaml
----

.Verification

* To view the priority class that you assigned to pods in the `metallb-system` namespace, run the following command:
+
-[source,bash]
+[source,terminal]
----
$ oc get pods -n metallb-system -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priorityClassName
----
+
.Example output
++
[source,terminal]
----
NAME PRIORITY
@@ -97,9 +106,9 @@ metallb-operator-webhook-server-c895594d4-shjgx <none>
speaker-dddf7 high-priority
----

-* To verify that the scheduler placed pods according to pod affinity rules, view the metadata for the pod's node or nodes by running the following command:
+* Verify that the scheduler placed pods according to pod affinity rules by viewing the metadata for the node of the pod. For example:
+
-[source,bash]
+[source,terminal]
----
$ oc get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name -n metallb-system
----

View File

@@ -39,9 +39,4 @@ include::modules/nw-metallb-operator-setting-pod-CPU-limits.adoc[leveloffset=+2]
* xref:../../../nodes/scheduling/nodes-scheduler-node-selectors.adoc#nodes-scheduler-node-selectors[Placing pods on specific nodes using node selectors]
* xref:../../../nodes/scheduling/nodes-scheduler-taints-tolerations.adoc#nodes-scheduler-taints-tolerations-about[Controlling pod placement using node taints]
* xref:../../../nodes/pods/nodes-pods-priority.adoc#nodes-pods-priority-about_nodes-pods-priority[Understanding pod priority]
* xref:../../../nodes/scheduling/nodes-scheduler-pod-affinity.adoc#nodes-scheduler-pod-affinity-about_nodes-scheduler-pod-affinity[Understanding pod affinity]

-[id="next-steps_{context}"]
-== Next steps
-* xref:../../../networking/ingress_load_balancing/metallb/metallb-configure-address-pools.adoc#nw-metallb-configure-address-pool_configure-metallb-address-pools[Configuring MetalLB address pools]

View File

@@ -6,7 +6,10 @@ include::_attributes/common-attributes.adoc[]
toc::[]

-A `Subscription` custom resource (CR) that subscribes the namespace to `metallb-system` by default, automatically sets the `installPlanApproval` parameter to `Automatic`. This means that when Red{nbsp}Hat-provided Operator catalogs include a newer version of the MetalLB Operator, the MetalLB Operator is automatically upgraded.
+[role="_abstract"]
+The `Subscription` custom resource (CR) for the MetalLB Operator manages whether the Operator upgrades automatically or manually.
+
+By default, the `Subscription` CR assigns the namespace to `metallb-system` and automatically sets the `installPlanApproval` parameter to `Automatic`. This means that when Red{nbsp}Hat-provided Operator catalogs include a newer version of the MetalLB Operator, the MetalLB Operator is automatically upgraded.

If you need to manually control upgrading the MetalLB Operator, set the `installPlanApproval` parameter to `Manual`.
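When `installPlanApproval` is `Manual`, a pending `InstallPlan` must be approved before an upgrade proceeds. A sketch of that flow with the CLI follows; the namespace assumes the recommended `metallb-system` installation and `<install_plan_name>` is a placeholder:

[source,terminal]
----
$ oc get installplan -n metallb-system
----

[source,terminal]
----
$ oc patch installplan <install_plan_name> -n metallb-system --type merge --patch '{"spec":{"approved":true}}'
----

You can then confirm the upgraded version by checking the `ClusterServiceVersion` status, for example with `oc get csv -n metallb-system -o custom-columns=Name:.metadata.name,Phase:.status.phase`.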