// Module included in the following assemblies:
//
// * networking/ovn_kubernetes_network_provider/ovn-kubernetes-architecture.adoc
:_mod-docs-content-type: PROCEDURE
[id="nw-ovn-kubernetes-running-ovnkube-trace_{context}"]
= Running ovnkube-trace

Run `ovnkube-trace` to simulate packet forwarding within an OVN logical network.

.Prerequisites
* You installed the OpenShift CLI (`oc`).
* You are logged in to the cluster with a user with `cluster-admin` privileges.
* You installed `ovnkube-trace` on your local host.

.Example: Testing that DNS resolution works from a deployed pod
This example illustrates how to test DNS resolution from a deployed pod to the core DNS pod that runs in the cluster.

.Procedure
. Start a web service in the `default` namespace by entering the following command:
+
[source,terminal]
----
$ oc run web --namespace=default --image=quay.io/openshifttest/nginx --labels="app=web" --expose --port=80
----
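+
Optionally, you can confirm that the `web` pod is running before you trace against it. The `app=web` label in this query matches the label that the previous command applied:
+
[source,terminal]
----
$ oc get pods -n default -l app=web
----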
. List the pods running in the `openshift-dns` namespace:
+
[source,terminal]
----
$ oc get pods -n openshift-dns
----
+
.Example output
[source,terminal]
----
NAME                  READY   STATUS    RESTARTS   AGE
dns-default-8s42x     2/2     Running   0          5h8m
dns-default-mdw6r     2/2     Running   0          4h58m
dns-default-p8t5h     2/2     Running   0          4h58m
dns-default-rl6nk     2/2     Running   0          5h8m
dns-default-xbgqx     2/2     Running   0          5h8m
dns-default-zv8f6     2/2     Running   0          4h58m
node-resolver-62jjb   1/1     Running   0          5h8m
node-resolver-8z4cj   1/1     Running   0          4h59m
node-resolver-bq244   1/1     Running   0          5h8m
node-resolver-hc58n   1/1     Running   0          4h59m
node-resolver-lm6z4   1/1     Running   0          5h8m
node-resolver-zfx5k   1/1     Running   0          5h
----
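+
Any of the `dns-default` pods can serve as the trace destination. As an optional convenience, assuming a POSIX shell on your local host, you can extract a single pod name with a pipeline similar to the following:
+
[source,terminal]
----
$ oc get pods -n openshift-dns -o name | grep dns-default | head -n 1 | cut -d/ -f2
----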
. Run the following `ovnkube-trace` command to verify DNS resolution is working:
+
[source,terminal]
----
$ ./ovnkube-trace \
  -src-namespace default \ <1>
  -src web \ <2>
  -dst-namespace openshift-dns \ <3>
  -dst dns-default-p8t5h \ <4>
  -udp -dst-port 53 \ <5>
  -loglevel 0 <6>
----
+
<1> Namespace of the source pod.
<2> Name of the source pod.
<3> Namespace of the destination pod.
<4> Name of the destination pod.
<5> Use the `udp` transport protocol. Port 53 is the port the DNS service uses.
<6> Set the log level to 0 (0 is minimal and 5 is debug).
+
.Example output if the source and destination pods land on the same node
[source,terminal]
----
ovn-trace source pod to destination pod indicates success from web to dns-default-p8t5h
ovn-trace destination pod to source pod indicates success from dns-default-p8t5h to web
ovs-appctl ofproto/trace source pod to destination pod indicates success from web to dns-default-p8t5h
ovs-appctl ofproto/trace destination pod to source pod indicates success from dns-default-p8t5h to web
ovn-detrace source pod to destination pod indicates success from web to dns-default-p8t5h
ovn-detrace destination pod to source pod indicates success from dns-default-p8t5h to web
----
+
.Example output if the source and destination pods land on different nodes
[source,terminal]
----
ovn-trace source pod to destination pod indicates success from web to dns-default-8s42x
ovn-trace (remote) source pod to destination pod indicates success from web to dns-default-8s42x
ovn-trace destination pod to source pod indicates success from dns-default-8s42x to web
ovn-trace (remote) destination pod to source pod indicates success from dns-default-8s42x to web
ovs-appctl ofproto/trace source pod to destination pod indicates success from web to dns-default-8s42x
ovs-appctl ofproto/trace destination pod to source pod indicates success from dns-default-8s42x to web
ovn-detrace source pod to destination pod indicates success from web to dns-default-8s42x
ovn-detrace destination pod to source pod indicates success from dns-default-8s42x to web
----
+
The output indicates success from the deployed pod to the DNS port and also indicates success in the reverse direction. This confirms that bidirectional traffic is supported on UDP port 53, so the `web` pod can perform DNS resolution against core DNS.
+
If, for example, the trace did not succeed and you wanted the `ovn-trace`, `ovs-appctl ofproto/trace`, and `ovn-detrace` output, along with more debug-type information, increase the log level to 2 and run the command again as follows:
+
[source,terminal]
----
$ ./ovnkube-trace \
  -src-namespace default \
  -src web \
  -dst-namespace openshift-dns \
  -dst dns-default-p8t5h \
  -udp -dst-port 53 \
  -loglevel 2
----
+
The output from this increased log level is too much to list here. In a failure situation, the output of this command shows which flow is dropping the traffic. For example, an egress or ingress network policy that is configured on the cluster might not allow that traffic.
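+
As an optional first check, which is not part of the `ovnkube-trace` output itself, you can list the network policies in the cluster and review whether any of them select the source or destination pod:
+
[source,terminal]
----
$ oc get networkpolicy --all-namespaces
----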

.Example: Verifying a configured default deny by using debug output
This example illustrates how to use the debug output to identify that an ingress default deny policy blocks traffic.

.Procedure
. Create the following YAML that defines a `deny-by-default` policy to deny ingress from all pods in all namespaces. Save the YAML in the `deny-by-default.yaml` file:
+
[source,yaml]
----
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
  namespace: default
spec:
  podSelector: {}
  ingress: []
. Apply the policy by entering the following command:
+
[source,terminal]
----
$ oc apply -f deny-by-default.yaml
----
+
.Example output
[source,terminal]
----
networkpolicy.networking.k8s.io/deny-by-default created
----
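+
Optionally, you can review the policy as the cluster stored it. Because the pod selector is empty and no ingress rules are defined, the policy denies all ingress traffic to every pod in the `default` namespace:
+
[source,terminal]
----
$ oc describe networkpolicy deny-by-default -n default
----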
. Start a web service in the `default` namespace by entering the following command:
+
[source,terminal]
----
$ oc run web --namespace=default --image=quay.io/openshifttest/nginx --labels="app=web" --expose --port=80
----
. Run the following command to create the `prod` namespace:
+
[source,terminal]
----
$ oc create namespace prod
----
. Run the following command to label the `prod` namespace:
+
[source,terminal]
----
$ oc label namespace/prod purpose=production
----
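+
Optionally, confirm that the label is set, because the allow policy that you create later in this procedure matches namespaces on this label:
+
[source,terminal]
----
$ oc get namespace prod --show-labels
----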
. Run the following command to deploy an `alpine` image in the `prod` namespace and start a shell:
+
[source,terminal]
----
$ oc run test-6459 --namespace=prod --rm -i -t --image=alpine -- sh
----
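+
Optionally, from inside the `alpine` shell, you can confirm that the `deny-by-default` policy blocks the connection. The following request is expected to time out while that policy is in place:
+
[source,terminal]
----
wget -qO- --timeout=2 http://web.default
----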
. Open another terminal session.
. In the new terminal session, run `ovnkube-trace` to verify the failure in communication between the source pod `test-6459` running in the `prod` namespace and the destination pod running in the `default` namespace:
+
[source,terminal]
----
$ ./ovnkube-trace \
  -src-namespace prod \
  -src test-6459 \
  -dst-namespace default \
  -dst web \
  -tcp -dst-port 80 \
  -loglevel 0
----
+
.Example output
[source,terminal]
----
ovn-trace source pod to destination pod indicates failure from test-6459 to web
----
. Increase the log level to 2 to expose the reason for the failure by running the following command:
+
[source,terminal]
----
$ ./ovnkube-trace \
  -src-namespace prod \
  -src test-6459 \
  -dst-namespace default \
  -dst web \
  -tcp -dst-port 80 \
  -loglevel 2
----
+
.Example output
[source,terminal]
----
...
------------------------------------------------
 3. ls_out_acl_hint (northd.c:7454): !ct.new && ct.est && !ct.rpl && ct_mark.blocked == 0, priority 4, uuid 12efc456
    reg0[8] = 1;
    reg0[10] = 1;
    next;
 5. ls_out_acl_action (northd.c:7835): reg8[30..31] == 0, priority 500, uuid 69372c5d
    reg8[30..31] = 1;
    next(4);
 5. ls_out_acl_action (northd.c:7835): reg8[30..31] == 1, priority 500, uuid 2fa0af89
    reg8[30..31] = 2;
    next(4);
 4. ls_out_acl_eval (northd.c:7691): reg8[30..31] == 2 && reg0[10] == 1 && (outport == @a16982411286042166782_ingressDefaultDeny), priority 2000, uuid 447d0dab
    reg8[17] = 1;
    ct_commit { ct_mark.blocked = 1; }; <1>
    next;
...
----
+
<1> Ingress traffic is blocked due to the default deny policy being in place.
. Create a policy that allows traffic from all pods in namespaces that have the label `purpose=production`. Save the YAML in the `web-allow-prod.yaml` file:
+
[source,yaml]
----
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-prod
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          purpose: production
----
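+
Optionally, you can list which namespaces currently carry the `purpose=production` label that the `namespaceSelector` in this policy matches:
+
[source,terminal]
----
$ oc get namespaces -l purpose=production
----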
. Apply the policy by entering the following command:
+
[source,terminal]
----
$ oc apply -f web-allow-prod.yaml
----
. Run `ovnkube-trace` to verify that traffic is now allowed by entering the following command:
+
[source,terminal]
----
$ ./ovnkube-trace \
-src-namespace prod \
-src test-6459 \
-dst-namespace default \
-dst web \
-tcp -dst-port 80 \
-loglevel 0
----
+
.Expected output
[source,terminal]
----
ovn-trace source pod to destination pod indicates success from test-6459 to web
ovn-trace destination pod to source pod indicates success from web to test-6459
ovs-appctl ofproto/trace source pod to destination pod indicates success from test-6459 to web
ovs-appctl ofproto/trace destination pod to source pod indicates success from web to test-6459
ovn-detrace source pod to destination pod indicates success from test-6459 to web
ovn-detrace destination pod to source pod indicates success from web to test-6459
----
. Run the following command in the shell that you opened when you deployed the `alpine` image to connect to the nginx web server:
+
[source,terminal]
----
wget -qO- --timeout=2 http://web.default
----
+
.Expected output
[source,terminal]
----
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
----
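
If you want to remove the objects that this example created, the following commands are one way to clean them up. They assume that you used the names shown in this procedure; the `test-6459` pod is removed automatically when you exit its shell because it was started with the `--rm` option:

[source,terminal]
----
$ oc delete pod,service web -n default
$ oc delete networkpolicy deny-by-default web-allow-prod -n default
$ oc delete namespace prod
----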