// Module included in the following assemblies:
//
// * networking/ovn_kubernetes_network_provider/configuring-egress-traffic-for-vrf-loadbalancer-services.adoc

:_mod-docs-content-type: PROCEDURE
[id="nw-metallb-configure-return-traffic-proc_{context}"]
= Configuring symmetric routing by using VRFs with MetalLB

[role="_abstract"]
To ensure that applications behind a MetalLB service use the same network path for both ingress and egress traffic, you can configure symmetric routing by using Virtual Routing and Forwarding (VRF).
The example in this procedure associates a VRF routing table with MetalLB and an egress service to enable symmetric routing for ingress and egress traffic for pods behind a `LoadBalancer` service.

[IMPORTANT]
====
* If you use the `sourceIPBy: "LoadBalancerIP"` setting in the `EgressService` CR, you must specify the load-balancer node in the `BGPAdvertisement` custom resource (CR).
* You can use the `sourceIPBy: "Network"` setting only on clusters that use OVN-Kubernetes with the `gatewayConfig.routingViaHost` specification set to `true`. Additionally, if you use the `sourceIPBy: "Network"` setting, you must schedule the application workload on nodes configured with the network VRF instance.
====

.Prerequisites
* Install the {oc-first}.
* Log in as a user with `cluster-admin` privileges.
* Install the Kubernetes NMState Operator.
* Install the MetalLB Operator.

.Procedure
. Create a `NodeNetworkConfigurationPolicy` CR to define the VRF instance:
+
.. Create a file, such as `node-network-vrf.yaml`, with content like the following example:
+
[source,yaml]
----
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vrfpolicy
spec:
  nodeSelector:
    vrf: "true"
  maxUnavailable: 3
  desiredState:
    interfaces:
    - name: ens4vrf
      type: vrf
      state: up
      vrf:
        port:
        - ens4
        route-table-id: 2
    - name: ens4
      type: ethernet
      state: up
      ipv4:
        address:
        - ip: 192.168.130.130
          prefix-length: 24
        dhcp: false
        enabled: true
    routes:
      config:
      - destination: 0.0.0.0/0
        metric: 150
        next-hop-address: 192.168.130.1
        next-hop-interface: ens4
        table-id: 2
    route-rules:
      config:
      - ip-to: 172.30.0.0/16
        priority: 998
        route-table: 254
      - ip-to: 10.132.0.0/14
        priority: 998
        route-table: 254
      - ip-to: 169.254.0.0/17
        priority: 998
        route-table: 254
# ...
----
+
where:
+
`metadata.name`:: Specifies the name of the policy.
`nodeSelector.vrf`:: Applies the policy to all nodes with the label `vrf: "true"`.
`interfaces.name.ens4vrf`:: Specifies the name of the interface.
`interfaces.type`:: Specifies the type of interface. This example creates a VRF instance.
`vrf.port`:: Specifies the node interface that the VRF attaches to.
`vrf.route-table-id`:: Specifies the route table ID for the VRF.
`interfaces.name.ens4`:: Specifies the IPv4 address of the interface associated with the VRF.
`routes`:: Specifies the configuration for network routes. The `next-hop-address` field defines the IP address of the next hop for the route. The `next-hop-interface` field defines the outgoing interface for the route. In this example, the VRF routing table is `2`, which references the ID that you define in the `EgressService` CR.
`route-rules`:: Specifies additional route rules. The `ip-to` fields must match the `Cluster Network` CIDR, `Service Network` CIDR, and `Internal Masquerade` subnet CIDR. You can view the values for these CIDR address specifications by running the `oc describe network.operator/cluster` command, as shown in the example after this list.
`route-rules.route-table`:: Specifies the main routing table, which has the ID `254` and which the Linux kernel uses when calculating routes.
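+
For example, the following command displays the cluster network configuration. The exact output depends on your cluster, but the `clusterNetwork` and `serviceNetwork` CIDR values appear in the output:
+
[source,terminal]
----
$ oc describe network.operator/cluster
----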
+
.. Apply the policy by running the following command:
+
[source,terminal]
----
$ oc apply -f node-network-vrf.yaml
----
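+
Optionally, confirm that the nodes reconciled the policy. This is a minimal check that assumes the `vrfpolicy` name from the preceding example; the policy reports an `Available` status when the configuration is applied on all selected nodes:
+
[source,terminal]
----
$ oc get nncp vrfpolicy
----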
. Create a `BGPPeer` CR:
+
.. Create a file, such as `frr-via-vrf.yaml`, with content like the following example:
+
[source,yaml]
----
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: frrviavrf
  namespace: metallb-system
spec:
  myASN: 100
  peerASN: 200
  peerAddress: 192.168.130.1
  vrf: ens4vrf
# ...
----
+
where:
+
`spec.vrf`:: Specifies the VRF instance to associate with the BGP peer. MetalLB can advertise services and make routing decisions based on the routing information in the VRF.
+
.. Apply the configuration for the BGP peer by running the following command:
+
[source,terminal]
----
$ oc apply -f frr-via-vrf.yaml
----
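+
Optionally, verify that the BGP session for the VRF is established. The following command is a sketch only: it assumes that the MetalLB speaker pods run FRR in a container named `frr`, and `<speaker_pod_name>` is a placeholder for one of your speaker pod names:
+
[source,terminal]
----
$ oc exec -n metallb-system <speaker_pod_name> -c frr -- vtysh -c "show bgp vrf ens4vrf summary"
----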
. Create an `IPAddressPool` CR:
+
.. Create a file, such as `first-pool.yaml`, with content like the following example:
+
[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.169.10.0/32
# ...
----
+
.. Apply the configuration for the IP address pool by running the following command:
+
[source,terminal]
----
$ oc apply -f first-pool.yaml
----
. Create a `BGPAdvertisement` CR:
+
.. Create a file, such as `first-adv.yaml`, with content like the following example:
+
[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: first-adv
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
  peers:
  - frrviavrf
  nodeSelectors:
  - matchLabels:
      egress-service.k8s.ovn.org/test-server1: ""
# ...
----
+
where:
+
`peers`:: In this example, MetalLB advertises a range of IP addresses from the `first-pool` IP address pool to the `frrviavrf` BGP peer.
`nodeSelectors`:: In this example, the `EgressService` CR configures the source IP address for egress traffic to use the load-balancer service IP address. Therefore, you must specify the load-balancer node so that return traffic uses the same path as the traffic that originates from the pod.
+
.. Apply the configuration for the BGP advertisement by running the following command:
+
[source,terminal]
----
$ oc apply -f first-adv.yaml
----
. Create an `EgressService` CR:
+
.. Create a file, such as `egress-service.yaml`, with content like the following example:
+
[source,yaml,options="nowrap",role="white-space-pre"]
----
apiVersion: k8s.ovn.org/v1
kind: EgressService
metadata:
  name: server1
  namespace: test
spec:
  sourceIPBy: "LoadBalancerIP"
  nodeSelector:
    matchLabels:
      vrf: "true"
  network: "2"
# ...
----
+
where:
+
`metadata.name`:: Specifies the name for the egress service. The name of the `EgressService` resource must match the name of the load-balancer service that you want to modify.
`metadata.namespace`:: Specifies the namespace for the egress service. The namespace for the `EgressService` must match the namespace of the load-balancer service that you want to modify. The egress service is namespace-scoped.
`spec.sourceIPBy`:: Specifies the `LoadBalancer` service ingress IP address as the source IP address for egress traffic.
`matchLabels.vrf`:: If you specify `LoadBalancerIP` for the `sourceIPBy` specification, a single node handles the `LoadBalancer` service traffic. In this example, only a node with the label `vrf: "true"` can handle the service traffic. If you do not specify a node, OVN-Kubernetes selects a worker node to handle the service traffic. When a node is selected, OVN-Kubernetes labels the node in the following format: `egress-service.k8s.ovn.org/<svc_namespace>-<svc_name>: ""`. You can query this label after you apply the CR, as shown in the example that follows the next substep.
`network`:: Specifies the routing table ID for egress traffic. Ensure that the value matches the `route-table-id` ID defined in the `NodeNetworkConfigurationPolicy` resource, for example, `route-table-id: 2`.
+
.. Apply the configuration for the egress service by running the following command:
+
[source,terminal]
----
$ oc apply -f egress-service.yaml
----
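+
For example, assuming the `test` namespace and the `server1` service name that this procedure uses, the following command lists the node that OVN-Kubernetes labeled to handle the service traffic:
+
[source,terminal]
----
$ oc get nodes -l egress-service.k8s.ovn.org/test-server1=
----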

.Verification
. Verify that you can access the application endpoint of the pods running behind the MetalLB service by running the following command:
+
[source,terminal]
----
$ curl <external_ip_address>:<port_number>
----
* `<external_ip_address>:<port_number>`: Specifies the external IP address and port number of your application endpoint.
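+
For example, if the service received the `192.169.10.0` address from the `first-pool` example and the application listens on port `8080`, which is an assumed port for illustration, the command might look like the following:
+
[source,terminal]
----
$ curl 192.169.10.0:8080
----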
. Optional: If you assigned the `LoadBalancer` service ingress IP address as the source IP address for egress traffic, verify this configuration by using tools such as `tcpdump` to analyze packets received at the external client, as shown in the example that follows.
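+
For example, the following capture is a sketch only: it assumes that the external client receives the traffic on an interface named `eth0` and that the service uses the `192.169.10.0` address from the `first-pool` example. Run the command on the external client and confirm that the source IP address of the egress packets matches the load-balancer service IP address:
+
[source,terminal]
----
$ sudo tcpdump -i eth0 -nn host 192.169.10.0
----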