:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-consistent-egress-ip"]
= Tutorial: Assigning a consistent egress IP for external traffic
include::_attributes/attributes-openshift-dedicated.adoc[]
include::_attributes/common-attributes.adoc[]
:context: cloud-experts-consistent-egress-ip
toc::[]
// Mobb content metadata
// Brought into ROSA product docs 2023-09-19
// ---
// date: '2023-02-28'
// title: Assign Consistent Egress IP for External Traffic
// tags: ["OSD", "ROSA", "ARO"]
// authors:
// - 'Dustin Scott'
// ---
You can assign a consistent IP address for traffic that leaves your cluster, which is useful when external systems, such as security groups, require an IP-based configuration to meet security standards.
By default, {product-title} uses the OVN-Kubernetes container network interface (CNI) to assign random IP addresses from a pool, which can make configuring security lockdowns unpredictable or overly permissive.
ifndef::openshift-rosa-hcp[]
See xref:../networking/ovn_kubernetes_network_provider/configuring-egress-ips-ovn.adoc#configuring-egress-ips-ovn[Configuring an egress IP address] for more information.
endif::openshift-rosa-hcp[]
ifdef::openshift-rosa-hcp[]
See link:https://docs.openshift.com/rosa/networking/ovn_kubernetes_network_provider/configuring-egress-ips-ovn.html[Configuring an egress IP address] for more information.
endif::openshift-rosa-hcp[]
.Objectives
* Learn how to configure a set of predictable IP addresses for egress cluster traffic.
.Prerequisites
* A {product-title} cluster deployed with OVN-Kubernetes
* The xref:../cli_reference/openshift_cli/getting-started-cli.adoc#cli-getting-started[OpenShift CLI] (`oc`)
* The xref:../cli_reference/rosa_cli/rosa-get-started-cli.adoc#rosa-get-started-cli[ROSA CLI] (`rosa`)
* link:https://stedolan.github.io/jq/[`jq`]
== Setting your environment variables
* Set your environment variables by running the following command:
+
[NOTE]
====
Replace the value of the `ROSA_MACHINE_POOL_NAME` variable to target a different machine pool.
====
+
[source,terminal]
----
$ export ROSA_CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}" | sed 's/-[a-z0-9]\{5\}$//')
$ export ROSA_MACHINE_POOL_NAME=worker
----
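+
You can confirm that the variables are set as expected by echoing them:
+
[source,terminal]
----
$ echo "${ROSA_CLUSTER_NAME} ${ROSA_MACHINE_POOL_NAME}"
----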
== Ensuring capacity
Each public cloud provider limits the number of IP addresses that can be assigned to a node.
* Verify sufficient capacity by running the following command:
+
[source,terminal]
----
$ oc get node -o json | \
  jq '.items[] |
    {
      "name": .metadata.name,
      "ips": (.status.addresses | map(select(.type == "InternalIP") | .address)),
      "capacity": (.metadata.annotations."cloud.network.openshift.io/egress-ipconfig" | fromjson[] | .capacity.ipv4)
    }'
----
+
.Example output
[source,terminal]
----
{
  "name": "ip-10-10-145-88.ec2.internal",
  "ips": [
    "10.10.145.88"
  ],
  "capacity": 14
}
{
  "name": "ip-10-10-154-175.ec2.internal",
  "ips": [
    "10.10.154.175"
  ],
  "capacity": 14
}
----
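+
If you want to list only the nodes that report a nonzero IPv4 egress capacity, you can extend the `jq` filter. This is a minimal sketch that assumes the annotation format shown in the example output above and inspects only the first interface entry:
+
[source,terminal]
----
$ oc get node -o json | \
  jq -r '.items[] |
    select((.metadata.annotations."cloud.network.openshift.io/egress-ipconfig" | fromjson | .[0].capacity.ipv4) > 0) |
    .metadata.name'
----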
== Creating the egress IP rules
. Before creating the egress IP rules, identify which egress IPs you will use.
+
[NOTE]
====
The egress IPs that you select should be part of the subnets in which the worker nodes are provisioned.
====
+
. *Optional*: Reserve the egress IPs that you requested to avoid conflicts with the AWS Virtual Private Cloud (VPC) Dynamic Host Configuration Protocol (DHCP) service.
+
For instructions on creating explicit IP reservations, see the link:https://docs.aws.amazon.com/vpc/latest/userguide/subnet-cidr-reservation.html[AWS documentation for CIDR reservations].
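+
For example, you can create an explicit CIDR reservation with the AWS CLI. This is a sketch; the subnet ID is a placeholder that you must replace with the ID of the subnet that contains the address:
+
[source,terminal]
----
$ aws ec2 create-subnet-cidr-reservation \
  --subnet-id subnet-0123456789abcdef0 \
  --reservation-type explicit \
  --cidr 10.10.100.253/32
----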
== Assigning an egress IP to a namespace
. Create a new project by running the following command:
+
[source,terminal]
----
$ oc new-project demo-egress-ns
----
+
. Create the egress rule for all pods within the namespace by running the following command:
+
[source,terminal]
----
$ cat <<EOF | oc apply -f -
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: demo-egress-ns
spec:
  # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes
  # are deployed.
  egressIPs:
    - 10.10.100.253
    - 10.10.150.253
    - 10.10.200.253
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: demo-egress-ns
EOF
----
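+
You can confirm that the resource was created by running the following command. The `ASSIGNED NODE` column remains empty until you label nodes as egress-assignable, as described later in this tutorial:
+
[source,terminal]
----
$ oc get egressip demo-egress-ns
----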
== Assigning an egress IP to a pod
. Create a new project by running the following command:
+
[source,terminal]
----
$ oc new-project demo-egress-pod
----
+
. Create the egress rule for the pod by running the following command:
+
[NOTE]
====
`spec.namespaceSelector` is a mandatory field.
====
+
[source,terminal]
----
$ cat <<EOF | oc apply -f -
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: demo-egress-pod
spec:
  # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes
  # are deployed.
  egressIPs:
    - 10.10.100.254
    - 10.10.150.254
    - 10.10.200.254
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: demo-egress-pod
  podSelector:
    matchLabels:
      run: demo-egress-pod
EOF
----
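+
Only pods whose labels match the `podSelector` use these egress IPs. The test pod that you create later with `oc run demo-egress-pod` automatically receives the matching `run=demo-egress-pod` label. You can list the pods that match the selector by running the following command:
+
[source,terminal]
----
$ oc get pods -n demo-egress-pod -l run=demo-egress-pod
----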
=== Labeling the nodes
. Obtain your pending egress IP assignments by running the following command:
+
[source,terminal]
----
$ oc get egressips
----
+
.Example output
[source,terminal]
----
NAME              EGRESSIPS       ASSIGNED NODE   ASSIGNED EGRESSIPS
demo-egress-ns    10.10.100.253
demo-egress-pod   10.10.100.254
----
+
The egress IP rule that you created applies only to nodes with the `k8s.ovn.org/egress-assignable` label. Make sure that the label is applied only to a specific machine pool.
+
. Assign the label to your machine pool by running the following command:
+
[WARNING]
====
If you rely on node labels for your machine pool, this command will replace those labels. Be sure to input your desired labels into the `--labels` field to ensure your node labels remain.
====
+
[source,terminal]
----
$ rosa update machinepool ${ROSA_MACHINE_POOL_NAME} \
  --cluster="${ROSA_CLUSTER_NAME}" \
  --labels "k8s.ovn.org/egress-assignable="
----
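+
The machine pool update can take a few minutes to roll out. You can verify which nodes carry the label by running the following command:
+
[source,terminal]
----
$ oc get nodes -l k8s.ovn.org/egress-assignable
----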
=== Reviewing the egress IPs
* Review the egress IP assignments by running the following command:
+
[source,terminal]
----
$ oc get egressips
----
+
.Example output
[source,terminal]
----
NAME              EGRESSIPS       ASSIGNED NODE                   ASSIGNED EGRESSIPS
demo-egress-ns    10.10.100.253   ip-10-10-156-122.ec2.internal   10.10.150.253
demo-egress-pod   10.10.100.254   ip-10-10-156-122.ec2.internal   10.10.150.254
----
== Verification
=== Deploying a sample application
To test the egress IP rule, create a service that is restricted to the egress IP addresses that you specified. This simulates an external service that accepts connections from only a small subset of IP addresses.
. Deploy a pod from the `echoserver` image, which echoes details about incoming requests back to the client, by running the following command:
+
[source,terminal]
----
$ oc -n default run demo-service --image=gcr.io/google_containers/echoserver:1.4
----
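+
Optionally, wait for the pod to become ready before you expose it:
+
[source,terminal]
----
$ oc wait --for=condition=Ready pod/demo-service -n default --timeout=120s
----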
+
. Expose the pod as a service and limit the ingress to the egress IP addresses you specified by running the following command:
+
[source,terminal]
----
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  namespace: default
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  selector:
    run: demo-service
  ports:
    - port: 80
      targetPort: 8080
  type: LoadBalancer
  externalTrafficPolicy: Local
  # NOTE: this limits the source IPs that are allowed to connect to our service. It
  # is being used as part of this demo, restricting connectivity to our egress
  # IP addresses only.
  # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes
  # are deployed.
  loadBalancerSourceRanges:
    - 10.10.100.254/32
    - 10.10.150.254/32
    - 10.10.200.254/32
    - 10.10.100.253/32
    - 10.10.150.253/32
    - 10.10.200.253/32
EOF
----
+
. Retrieve the load balancer hostname and save it as an environment variable by running the following command:
+
[source,terminal]
----
$ export LOAD_BALANCER_HOSTNAME=$(oc get svc -n default demo-service -o json | jq -r '.status.loadBalancer.ingress[].hostname')
----
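+
It can take a few minutes for AWS to provision the load balancer and for its DNS name to resolve. If the variable is empty, you can watch the service until an ingress hostname appears, then rerun the export command:
+
[source,terminal]
----
$ oc get svc demo-service -n default -w
----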
=== Testing the namespace egress
. Start an interactive shell to test the namespace egress rule:
+
[source,terminal]
----
$ oc run \
  demo-egress-ns \
  -it \
  --namespace=demo-egress-ns \
  --env=LOAD_BALANCER_HOSTNAME=$LOAD_BALANCER_HOSTNAME \
  --image=registry.access.redhat.com/ubi9/ubi -- \
  bash
----
+
. Send a request to the load balancer and ensure that you can successfully connect:
+
[source,terminal]
----
$ curl -s http://$LOAD_BALANCER_HOSTNAME
----
+
. Check the output for a successful connection:
+
[NOTE]
====
The `client_address` value is the internal IP address of the load balancer, not your egress IP. Because the service accepts connections from only the addresses listed in `.spec.loadBalancerSourceRanges`, receiving a response verifies that your egress IP is configured correctly.
====
+
.Example output
[source,terminal]
----
CLIENT VALUES:
client_address=10.10.207.247
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com
user-agent=curl/7.76.1
BODY:
-no body in request-
----
+
. Exit the pod by running the following command:
+
[source,terminal]
----
$ exit
----
=== Testing the pod egress
. Start an interactive shell to test the pod egress rule:
+
[source,terminal]
----
$ oc run \
  demo-egress-pod \
  -it \
  --namespace=demo-egress-pod \
  --env=LOAD_BALANCER_HOSTNAME=$LOAD_BALANCER_HOSTNAME \
  --image=registry.access.redhat.com/ubi9/ubi -- \
  bash
----
+
. Send a request to the load balancer by running the following command:
+
[source,terminal]
----
$ curl -s http://$LOAD_BALANCER_HOSTNAME
----
+
. Check the output for a successful connection:
+
[NOTE]
====
The `client_address` value is the internal IP address of the load balancer, not your egress IP. Because the service accepts connections from only the addresses listed in `.spec.loadBalancerSourceRanges`, receiving a response verifies that your egress IP is configured correctly.
====
+
.Example output
[source,terminal]
----
CLIENT VALUES:
client_address=10.10.207.247
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com
user-agent=curl/7.76.1
BODY:
-no body in request-
----
+
. Exit the pod by running the following command:
+
[source,terminal]
----
$ exit
----
=== Optional: Testing blocked egress
. *Optional:* Test that the traffic is successfully blocked when the egress rules do not apply by running the following command:
+
[source,terminal]
----
$ oc run \
  demo-egress-pod-fail \
  -it \
  --namespace=demo-egress-pod \
  --env=LOAD_BALANCER_HOSTNAME=$LOAD_BALANCER_HOSTNAME \
  --image=registry.access.redhat.com/ubi9/ubi -- \
  bash
----
+
. Send a request to the load balancer by running the following command:
+
[source,terminal]
----
$ curl -s http://$LOAD_BALANCER_HOSTNAME
----
+
. If the command fails to connect, the traffic is blocked as expected: the pod's `run=demo-egress-pod-fail` label does not match the `podSelector` in the egress rule, so its traffic leaves the cluster from an IP address that is not in the `loadBalancerSourceRanges` of the service.
. Exit the pod by running the following command:
+
[source,terminal]
----
$ exit
----
== Cleaning up your cluster
. Clean up your cluster by running the following commands:
+
[source,terminal]
----
$ oc delete svc demo-service -n default
$ oc delete pod demo-service -n default
$ oc delete project demo-egress-ns
$ oc delete project demo-egress-pod
$ oc delete egressip demo-egress-ns
$ oc delete egressip demo-egress-pod
----
+
. Clean up the assigned node labels by running the following command:
+
[WARNING]
====
If you rely on node labels for your machine pool, this command replaces those labels. Input your desired labels into the `--labels` field to ensure your node labels remain.
====
+
[source,terminal]
----
$ rosa update machinepool ${ROSA_MACHINE_POOL_NAME} \
  --cluster="${ROSA_CLUSTER_NAME}" \
  --labels ""
----