
OSDOCS-5458: Networking doc improvements for MicroShift

This commit is contained in:
Kelly Brown
2023-03-23 14:50:24 -04:00
committed by openshift-cherrypick-robot
parent 6943941f30
commit 08556e1c58
4 changed files with 221 additions and 1 deletions


@@ -28,4 +28,6 @@ include::modules/microshift-restart-ovnkube-master.adoc[leveloffset=+1]
include::modules/microshift-http-proxy.adoc[leveloffset=+1]
include::modules/microshift-cri-o-container-runtime.adoc[leveloffset=+1]
include::modules/microshift-ovs-snapshot.adoc[leveloffset=+1]
include::modules/microshift-deploying-a-load-balancer.adoc[leveloffset=+1]
include::modules/microshift-blocking-nodeport-access.adoc[leveloffset=+1]
include::modules/microshift-mDNS.adoc[leveloffset=+1]


@@ -0,0 +1,61 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-networking.adoc
:_content-type: PROCEDURE
[id="microshift-blocking-nodeport-access_{context}"]
= Blocking external access to the NodePort service on a specific host interface

OVN-Kubernetes does not restrict the host interface where a NodePort service can be accessed from outside a {product-title} node.
The following procedure explains how to block the NodePort service on a specific host interface and restrict external access.

.Prerequisites

* You have access to the cluster as a user with the `cluster-admin` role.

.Procedure

. Set the `NODEPORT` variable to the host port number assigned to your Kubernetes `NodePort` service by running the following command:
+
[source,terminal]
----
$ export NODEPORT=30700
----
. Set the `INTERFACE_IP` value to the IP address of the host interface that you want to block. For example:
+
[source,terminal]
----
$ export INTERFACE_IP=192.168.150.33
----
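+
Optionally, sanity-check the value before using it. The following sketch is an addition to the documented procedure: it only checks that the value has a dotted-quad IPv4 format and does not verify that the address actually exists on the host.
+
[source,terminal]
----
$ echo "$INTERFACE_IP" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$' && echo ok || echo invalid
----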
. Insert a new rule into the `nat` table PREROUTING chain to drop all packets that match the destination port and IP address by running the following command:
+
[source,terminal]
----
$ sudo nft -a insert rule ip nat PREROUTING tcp dport $NODEPORT ip daddr $INTERFACE_IP drop
----
. List the new rule by running the following command:
+
[source,terminal]
----
$ sudo nft -a list chain ip nat PREROUTING
table ip nat {
chain PREROUTING { # handle 1
type nat hook prerouting priority dstnat; policy accept;
tcp dport 30700 ip daddr 192.168.150.33 drop # handle 134
counter packets 108 bytes 18074 jump OVN-KUBE-ETP # handle 116
counter packets 108 bytes 18074 jump OVN-KUBE-EXTERNALIP # handle 114
counter packets 108 bytes 18074 jump OVN-KUBE-NODEPORT # handle 112
}
}
----
+
[NOTE]
====
Note the `handle` number of the newly added rule. You need the `handle` number to remove the rule in the following step.
====
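+
If you script this procedure, you can recover the `handle` number from the rule listing instead of reading it manually. The following sketch is an illustration only: it assumes the `# handle N` comment format shown in the example output, and the rule text is inlined here rather than read from `nft`.
+
[source,terminal]
----
$ RULE='tcp dport 30700 ip daddr 192.168.150.33 drop # handle 134'
$ HANDLE=$(echo "$RULE" | awk '{print $NF}')
$ echo "$HANDLE"
134
----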
. Remove the custom rule with the following sample command:
+
[source,terminal]
----
$ sudo nft -a delete rule ip nat PREROUTING handle 134
----


@@ -69,7 +69,7 @@ mtu: 1400
|mtu
|uint32
|1400
|auto
|MTU value used for the pods
|1300
|===


@@ -0,0 +1,157 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-networking.adoc
:_content-type: PROCEDURE
[id="microshift-deploying-a-load-balancer_{context}"]
= Deploying a TCP load balancer on a workload

{product-title} offers a built-in implementation of network load balancers.
The following example procedure uses the node IP address as the external IP address for the `LoadBalancer` service configuration file.

.Prerequisites

* You installed the OpenShift CLI (`oc`).
* You have access to the cluster as a user with the `cluster-admin` role.
* You installed a cluster on an infrastructure configured with the OVN-Kubernetes network plugin.
* The `KUBECONFIG` environment variable is set.

.Procedure

. Verify that your pods are running by entering the following command:
+
[source,terminal]
----
$ oc get pods -A
----
. Create a namespace by running the following commands:
+
[source,terminal]
----
$ NAMESPACE=nginx-lb-test
----
+
[source,terminal]
----
$ oc create ns $NAMESPACE
----
. Deploy three replicas of the test `nginx` application in your namespace by running the following command:
+
[source,terminal]
----
$ oc apply -n $NAMESPACE -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx
data:
headers.conf: |
add_header X-Server-IP \$server_addr always;
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: quay.io/packit/nginx-unprivileged
imagePullPolicy: Always
name: nginx
ports:
- containerPort: 8080
volumeMounts:
- name: nginx-configs
subPath: headers.conf
mountPath: /etc/nginx/conf.d/headers.conf
securityContext:
allowPrivilegeEscalation: false
seccompProfile:
type: RuntimeDefault
capabilities:
drop: ["ALL"]
runAsNonRoot: true
volumes:
- name: nginx-configs
configMap:
name: nginx
items:
- key: headers.conf
path: headers.conf
EOF
----
. Verify that the three sample replicas started successfully by running the following command:
+
[source,terminal]
----
$ oc get pods -n $NAMESPACE
----
. Create a `LoadBalancer` service for the `nginx` test application with the following sample commands:
+
[source,terminal]
----
$ oc create -n $NAMESPACE -f - <<EOF
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
ports:
- port: 81
targetPort: 8080
selector:
app: nginx
type: LoadBalancer
EOF
----
+
[NOTE]
====
You must ensure that the `port` parameter is a host port that is not occupied by other `LoadBalancer` services or {product-title} components.
====
. Verify that the service exists, that an external IP address is assigned to it, and that the external IP address is identical to the node IP by running the following command:
+
[source,terminal]
----
$ oc get svc -n $NAMESPACE
----
+
.Example output
[source,terminal]
----
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.43.183.104 192.168.1.241 81:32434/TCP 2m
----
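+
The `PORT(S)` column encodes both the `LoadBalancer` port and the node port allocated for the service, separated by a colon. The following sketch is an illustration only; it splits the sample output line shown above into the external IP address, the load balancer port, and the node port:
+
[source,terminal]
----
$ LINE='nginx   LoadBalancer   10.43.183.104   192.168.1.241   81:32434/TCP   2m'
$ echo "$LINE" | awk '{split($5, p, /[:\/]/); print $4, p[1], p[2]}'
192.168.1.241 81 32434
----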

.Verification

* The following commands form five connections to the `nginx` application by using the external IP address of the `LoadBalancer` service. Verify that the load balancer distributes the requests across all of the running application replicas:
+
[source,terminal]
----
$ EXTERNAL_IP=192.168.1.241
$ seq 5 | xargs -Iz curl -s -I http://$EXTERNAL_IP:81 | grep X-Server-IP
----
+
The output contains different IP addresses, which shows that the load balancer is successfully distributing the traffic across the application replicas.
+
.Example output
[source,terminal]
----
X-Server-IP: 10.42.0.41
X-Server-IP: 10.42.0.41
X-Server-IP: 10.42.0.43
X-Server-IP: 10.42.0.41
X-Server-IP: 10.42.0.43
----
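+
As a quick check on the captured output, you can count the distinct backend addresses; a value greater than `1` means that more than one replica served the requests. The following sketch is an illustration only, with the sample output inlined:
+
[source,terminal]
----
$ printf 'X-Server-IP: 10.42.0.41\nX-Server-IP: 10.42.0.41\nX-Server-IP: 10.42.0.43\nX-Server-IP: 10.42.0.41\nX-Server-IP: 10.42.0.43\n' | awk '{print $2}' | sort -u | wc -l
2
----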