
Merge pull request #101212 from openshift-cherrypick-robot/cherry-pick-99819-to-enterprise-4.21

[enterprise-4.21] OSDOCS-13364-egfw:separates OVN-K egress firewall from SDN egressnetw…
Joe Aldinger
2025-10-28 13:26:30 -04:00
committed by GitHub
17 changed files with 513 additions and 333 deletions

View File

@@ -19,20 +19,22 @@ include::snippets/technology-preview.adoc[]
apiVersion: networking.openshift.io/v1alpha1
kind: DNSNameResolver
spec:
  name: www.example.com.
status:
  resolvedNames:
  - dnsName: www.example.com.
    resolvedAddress:
    - ip: "1.2.3.4"
      ttlSeconds: 60
      lastLookupTime: "2023-08-08T15:07:04Z"
----
where:
<name>:: Specifies the DNS name. This can be either a standard DNS name or a wildcard DNS name. For a wildcard DNS name, the DNS name resolution information contains all of the DNS names that match the wildcard DNS name.
<dnsName>:: Specifies the resolved DNS name matching the `spec.name` field. If the `spec.name` field contains a wildcard DNS name, then multiple `dnsName` entries are created that contain the standard DNS names that match the wildcard DNS name when resolved. If the wildcard DNS name can also be successfully resolved, then this field also stores the wildcard DNS name.
<ip>:: Specifies the current IP addresses associated with the DNS name.
<ttlSeconds>:: Specifies the last time-to-live (TTL) duration.
<lastLookupTime>:: Specifies the last lookup time.
If during DNS resolution the DNS name in the query matches any name defined in a `DNSNameResolver` CR, then the previous information is updated accordingly in the CR `status` field. For unsuccessful DNS wildcard name lookups, the request is retried after a default TTL of 30 minutes.
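To check the DNS name resolution information that the cluster currently tracks, you can list the `DNSNameResolver` resources across all namespaces. This is a minimal sketch that assumes the Technology Preview feature providing the `DNSNameResolver` API is enabled on your cluster:

[source,terminal]
----
$ oc get dnsnameresolvers -A -o yaml
----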

View File

@@ -0,0 +1,96 @@
// Module included in the following assemblies:
//
// * networking/network_security/configuring-egress-firewall-ovn.adoc
:_mod-docs-content-type: CONCEPT
[id="nw-egress-firewall-about_{context}"]
= How an egress firewall works in a project
As a cluster administrator, you can use an _egress firewall_ to limit the external hosts that some or all pods can access from within the
cluster. An egress firewall supports the following scenarios:
- A pod can only connect to internal hosts and cannot initiate connections to
the public internet.
- A pod can only connect to the public internet and cannot initiate connections
to internal hosts that are outside the {product-title} cluster.
- A pod cannot reach specified internal subnets or hosts outside the {product-title} cluster.
- A pod can only connect to specific external hosts.
For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or, you can restrict application developers from updating from Python pip mirrors, and force updates to come only from approved sources.
You configure an egress firewall policy by creating an `EgressFirewall` custom resource (CR). The egress firewall matches network traffic that meets any of the following criteria:
- An IP address range in CIDR format
- A DNS name that resolves to an IP address
- A port number
- A protocol that is one of the following protocols: TCP, UDP, and SCTP
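The following sketch shows how these criteria can be combined in a single `EgressFirewall` CR. The domain name, CIDR ranges, and port are illustrative values only:

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  - type: Allow        # allow traffic to a DNS name
    to:
      dnsName: updates.example.com
  - type: Allow        # allow traffic to a CIDR range on TCP port 443 only
    to:
      cidrSelector: 192.0.2.0/24
    ports:
    - port: 443
      protocol: TCP
  - type: Deny         # deny all other external traffic
    to:
      cidrSelector: 0.0.0.0/0
----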
[id="limitations-of-an-egress-firewall-ovn-k_{context}"]
== Limitations of an egress firewall
An egress firewall has the following limitations:
* No project can have more than one `EgressFirewall` CR.
* Egress firewall rules do not apply to traffic that goes through routers. Any user with permission to create a `Route` CR object can bypass egress firewall policy rules by creating a route that points to a forbidden destination.
* Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules.
* If your egress firewall includes a deny rule for `0.0.0.0/0`, access to your {product-title} API servers is blocked. You must either add allow rules for each IP address or use the `nodeSelector` type allow rule in your egress policy rules to connect to API servers.
+
The following example illustrates the order of the egress firewall rules necessary to ensure API server access:
+
.`EgressFirewall` API server access example
[source,yaml,subs="attributes+"]
----
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
  namespace: <namespace>
spec:
  egress:
  - to:
      cidrSelector: <api_server_address_range>
    type: Allow
# ...
  - to:
      cidrSelector: 0.0.0.0/0
    type: Deny
----
+
where:
<namespace>:: Specifies the namespace for the egress firewall.
<api_server_address_range>:: Specifies the IP address range that includes your {product-title} API servers.
<cidrSelector>:: Specifies a value of `0.0.0.0/0` to set a global deny rule that prevents access to the {product-title} API servers.
+
To find the IP address for your API servers, run `oc get ep kubernetes -n default`.
+
For more information, see link:https://bugzilla.redhat.com/show_bug.cgi?id=1988324[BZ#1988324].
* A maximum of one `EgressFirewall` object with a maximum of 8,000 rules can be defined per project.
* If you are using the OVN-Kubernetes network plugin with shared gateway mode in Red{nbsp}Hat OpenShift Networking, ingress reply traffic is affected by egress firewall rules. If the egress firewall rules drop the ingress reply destination IP, the traffic is dropped.
* In general, using Domain Name Server (DNS) names in your egress firewall policy does not affect local DNS resolution through CoreDNS. However, if your egress firewall policy uses domain names and an external DNS server handles DNS resolution for an affected pod, you must include egress firewall rules that permit access to the IP addresses of your DNS server.
Violating any of these restrictions results in a broken egress firewall for the project. Consequently, all external network traffic is dropped, which can cause security risks for your organization.
You can create an `EgressFirewall` resource in the `kube-node-lease`, `kube-public`, `kube-system`, `openshift`, and `openshift-` projects.
[id="policy-rule-order-ovn-k_{context}"]
== Matching order for egress firewall policy rules
OVN-Kubernetes evaluates egress firewall policy rules in the order they are defined in, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection.
[id="domain-name-server-resolution-ovn-k_{context}"]
== How Domain Name Server (DNS) resolution works
If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions:
* Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 minutes. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL and the TTL is less than 30 minutes, the controller sets the duration for that DNS name to the returned value. Each DNS name is queried after the TTL for the DNS record expires.
* The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently.
* Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in `EgressFirewall` objects is only recommended for domains with infrequent IP address changes.
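Because the controller and the pod must see the same resolution results, it can help to verify what an affected pod resolves through the cluster name servers. The following sketch assumes the pod image provides `nslookup`; the pod name and domain are placeholders:

[source,terminal]
----
$ oc exec -n <project> <pod_name> -- nslookup updates.example.com
----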

View File

@@ -0,0 +1,38 @@
// Module included in the following assemblies:
//
// * networking/network_security/configuring-egress-firewall-ovn.adoc
:_mod-docs-content-type: PROCEDURE
[id="nw-egress-firewall-policy-create_{context}"]
= Creating an EgressFirewall custom resource (CR)
As a cluster administrator, you can create an egress firewall policy object for a project.
[IMPORTANT]
====
If the project already has an `EgressFirewall` resource, you must edit the existing policy to make changes to egress firewall rules.
====
.Prerequisites
* A cluster that uses the OVN-Kubernetes network plugin.
* Install the OpenShift CLI (`oc`).
* You must log in to the cluster as a cluster administrator.
.Procedure
. Create a policy rule:
.. Create a `<policy_name>.yaml` file where `<policy_name>` describes the egress
policy rules.
.. Define the `EgressFirewall` object in the file. A minimal example follows this procedure.
. Create the policy object by entering the following command. Replace `<policy_name>` with the name of the policy and `<project>` with the project that the rule applies to.
+
[source,terminal]
----
$ oc create -f <policy_name>.yaml -n <project>
----
+
Successful output lists the `egressfirewall.k8s.ovn.org/v1` name and the `created` status.
. Optional: Save the `<policy_name>.yaml` file so that you can make changes later.
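For reference, a minimal `<policy_name>.yaml` file might look like the following sketch. The CIDR ranges are illustrative values only; the object name must be `default`:

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 192.0.2.0/24
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
----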

View File

@@ -0,0 +1,31 @@
// Module included in the following assemblies:
//
// * networking/ovn_kubernetes_network_provider/removing-egress-firewall-ovn.adoc
:_mod-docs-content-type: PROCEDURE
[id="nw-egress-firewall-delete_{context}"]
= Removing an EgressFirewall CR
As a cluster administrator, you can remove an egress firewall from a project.
.Prerequisites
* A cluster using the OVN-Kubernetes network plugin.
* Install the OpenShift CLI (`oc`).
* You must log in to the cluster as a cluster administrator.
.Procedure
. Find the name of the `EgressFirewall` CR for the project. Replace `<project>` with the name of the project.
+
[source,terminal,subs="attributes+"]
----
$ oc get egressfirewall -n <project>
----
. Delete the `EgressFirewall` CR by entering the following command. Replace `<project>` with the name of the project and `<name>` with the name of the object.
+
[source,terminal,subs="attributes+"]
----
$ oc delete -n <project> egressfirewall <name>
----
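To confirm that the egress firewall was removed, you can list the `EgressFirewall` resources for the project again; a `No resources found` response indicates that the CR no longer exists:

[source,terminal]
----
$ oc get egressfirewall -n <project>
----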

View File

@@ -0,0 +1,40 @@
// Module included in the following assemblies:
//
// * networking/ovn_kubernetes_network_provider/editing-egress-firewall-ovn.adoc
:_mod-docs-content-type: PROCEDURE
[id="nw-egress-firewall-edit_{context}"]
= Editing an EgressFirewall custom resource (CR)
As a cluster administrator, you can update the egress firewall for a project.
.Prerequisites
* A cluster using the OVN-Kubernetes network plugin.
* Install the OpenShift CLI (`oc`).
* You must log in to the cluster as a cluster administrator.
.Procedure
. Find the name of the `EgressFirewall` CR for the project. Replace `<project>` with the name of the project.
+
[source,terminal,subs="attributes+"]
----
$ oc get -n <project> egressfirewall
----
. Optional: If you did not save a copy of the `EgressFirewall` object when you created the egress network firewall, enter the following command to create a copy.
+
[source,terminal,subs="attributes+"]
----
$ oc get -n <project> egressfirewall <name> -o yaml > <filename>.yaml
----
+
Replace `<project>` with the name of the project. Replace `<name>` with the name of the object. Replace `<filename>` with the name of the file to save the YAML to.
. After making changes to the policy rules, enter the following command to replace the `EgressFirewall` CR. Replace `<filename>` with the name of the file containing the updated `EgressFirewall` CR.
+
[source,terminal]
----
$ oc replace -f <filename>.yaml
----
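To confirm that the updated rules are active, you can inspect the replaced object; `<project>` and `<name>` are the same values that you used earlier in this procedure:

[source,terminal]
----
$ oc get -n <project> egressfirewall <name> -o yaml
----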

View File

@@ -0,0 +1,131 @@
// Module included in the following assemblies:
//
// * networking/network_security/configuring-egress-firewall-ovn.adoc
:_mod-docs-content-type: REFERENCE
[id="nw-egress-firewall-object_{context}"]
= EgressFirewall custom resource (CR)
You can define one or more rules for an egress firewall. A rule is either an `Allow` rule or a `Deny` rule, with a specification for the traffic that the rule applies to.
The following YAML describes an `EgressFirewall` CR:
.EgressFirewall object
[source,yaml,subs="attributes+"]
----
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: <name>
spec:
  egress: <egress_rules>
...
----
where:
<name>:: Specifies the name of the object. The name must be `default`.
<egress_rules>:: Specifies a collection of one or more egress network policy rules as described in the following section.
[id="egress-firewall-rules_{context}"]
== EgressFirewall rules
The following YAML describes the rules for an `EgressFirewall` resource. You can specify an IP address range in CIDR format, a domain name, or the `nodeSelector` field to allow or deny egress traffic. The `egress` stanza expects an array of one or more objects.
.Egress policy rule stanza
[source,yaml]
----
egress:
- type: <type>
to:
cidrSelector: <cidr_range>
dnsName: <dns_name>
nodeSelector: <label_name>: <label_value>
ports: <optional_port>
...
----
where:
<type>:: Specifies the type of rule. The value must be either `Allow` or `Deny`.
<to>:: Specifies a stanza describing an egress traffic match rule that specifies the `cidrSelector` field or the `dnsName` field. You cannot use both fields in the same rule.
<cidr_range>:: Specifies an IP address range in CIDR format.
<dns_name>:: Specifies a DNS domain name.
<nodeSelector>:: Specifies labels, which are key-value pairs that the user defines. Labels are attached to objects, such as nodes. The `nodeSelector` field selects one or more node labels so that the rule applies to traffic to the nodes that carry those labels.
<ports>:: Specifies an optional field that describes a collection of network ports and protocols for the rule.
.Ports stanza
[source,yaml]
----
ports:
- port:
protocol:
----
where:
<port>:: Specifies a network port, such as `80` or `443`. If you specify a value for this field, you must also specify a value for the `protocol` field.
<protocol>:: Specifies a network protocol. The value must be either `TCP`, `UDP`, or `SCTP`.
[id="egress-firewall-example_{context}"]
== Example EgressFirewall CR
The following example defines several egress firewall policy rules:
[source,yaml,subs="attributes+"]
----
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 1.2.3.0/24
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
----
where:
<egress>:: Specifies a collection of egress firewall policy rule objects.
The following example defines a policy rule that denies traffic to the host at the `172.16.1.1/32` IP address if the traffic uses either the TCP protocol and destination port `80`, or any protocol and destination port `443`.
[source,yaml,subs="attributes+"]
----
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
name: default
spec:
egress:
- type: Deny
to:
cidrSelector: 172.16.1.1/32
ports:
- port: 80
protocol: TCP
- port: 443
----
[id="configuring-NodeSelector-egfw-example_{context}"]
== Example EgressFirewall CR using nodeSelector
As a cluster administrator, you can allow or deny egress traffic to nodes in your cluster by specifying a label with the `nodeSelector` field. Labels can be applied to one or more nodes. Labels are helpful because, instead of adding manual rules for each node IP address, you can use node selectors to apply a label that allows pods behind an egress firewall to access host network pods. The following is an example with the `region=east` label:
[source,yaml,subs="attributes+"]
----
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
name: default
spec:
egress:
- to:
nodeSelector:
matchLabels:
region: east
type: Allow
----
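To see which nodes the `region=east` selector from the preceding example matches, you can list the nodes by label:

[source,terminal]
----
$ oc get nodes -l region=east
----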

View File

@@ -0,0 +1,45 @@
// Module included in the following assemblies:
//
// * networking/network_security/configuring-egress-firewall-ovn.adoc
:_mod-docs-content-type: PROCEDURE
[id="nw-egress-firewall-view_{context}"]
= Viewing an EgressFirewall custom resource (CR)
You can view an `EgressFirewall` CR in your cluster.
.Prerequisites
* A cluster using the OVN-Kubernetes network plugin.
* Install the OpenShift Command-line Interface (CLI), commonly known as `oc`.
* You must log in to the cluster.
.Procedure
. Optional: To view the names of the `EgressFirewall` CRs defined in your cluster, enter the following command:
+
[source,terminal,subs="attributes"]
----
$ oc get egressfirewall --all-namespaces
----
. To inspect a policy, enter the following command. Replace `<policy_name>` with the name of the policy to inspect.
+
[source,terminal,subs="attributes+"]
----
$ oc describe egressfirewall <policy_name>
----
+
.Example output
[source,terminal]
----
Name: default
Namespace: project1
Created: 20 minutes ago
Labels: <none>
Annotations: <none>
Rule: Allow to 1.2.3.0/24
Rule: Allow to www.example.com
Rule: Deny to 0.0.0.0/0
----

View File

@@ -1,127 +1,75 @@
// Module included in the following assemblies:
//
// * networking/openshift_sdn/configuring-egress-firewall.adoc
:_mod-docs-content-type: CONCEPT
[id="nw-egressnetworkpolicy-about_{context}"]
= How an EgressNetworkPolicy custom resource works in a project
As a cluster administrator, you can use an _egress firewall_ to
limit the external hosts that some or all pods can access from within the cluster. You configure an egress firewall policy by creating an `EgressNetworkPolicy` custom resource (CR).
An egress firewall supports the following scenarios:
- A pod can only connect to internal hosts and cannot start connections to
the public internet.
- A pod can only connect to the public internet and cannot start connections
to internal hosts that are outside the {product-title} cluster.
- A pod cannot reach specified internal subnets or hosts outside the {product-title} cluster.
- A pod can only connect to specific external hosts.
For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or you can restrict application developers from updating from Python pip mirrors, and force updates to come only from approved sources.
In your `EgressNetworkPolicy` CR you can match network traffic that meets any of the following criteria:
- An IP address range in CIDR format
- A DNS name that resolves to an IP address
[IMPORTANT]
====
If your egress firewall includes a deny rule for `0.0.0.0/0`, the rule blocks access to your {product-title} API servers. You must either add allow rules for each IP address or use the `nodeSelector` type allow rule in your egress policy rules to connect to API servers.
The following example illustrates the order of the egress firewall rules necessary to ensure API server access:
[source,yaml,subs="attributes+"]
----
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
name: default
namespace: <namespace> <1>
spec:
egress:
- to:
cidrSelector: <api_server_address_range> <2>
type: Allow
# ...
- to:
cidrSelector: 0.0.0.0/0 <3>
type: Deny
----
<1> The namespace for the egress firewall.
<2> The IP address range that includes your {product-title} API servers.
<3> A global deny rule prevents access to the {product-title} API servers.
To find the IP address for your API servers, run `oc get ep kubernetes -n default`.
For more information, see link:https://bugzilla.redhat.com/show_bug.cgi?id=1988324[BZ#1988324].
====
[id="limitations-of-an-egress-firewall_{context}"]
== Limitations of an EgressNetworkPolicy CR
An egress firewall has the following limitations:
* No project can have more than one `EgressNetworkPolicy` CR. Although creating more than one `EgressNetworkPolicy` CR is allowed, you should not do it. When you create more than one CR, you receive a `dropping all rules` message, and all external traffic is dropped, which can cause security risks for your organization.
* You must have OpenShift SDN configured to use either the network policy or multitenant mode to configure an egress firewall. If you use network policy mode, an egress firewall is compatible with only one policy per namespace and will not work with projects that share a network, such as global projects.
* A maximum of one `EgressNetworkPolicy` CR with a maximum of 1,000 rules can be defined per project.
* The `default` project cannot use an egress firewall.
* When using the OpenShift SDN network plugin in multitenant mode, the following limitations apply:
- Global projects cannot use an egress firewall. You can make a project global by using the `oc adm pod-network make-projects-global` command.
- Projects merged by using the `oc adm pod-network join-projects` command cannot use an egress firewall in any of the joined projects.
* If you create a selectorless service and manually define endpoints or `EndpointSlices` that point to external IPs, traffic to the service IP might still be allowed, even if your `EgressNetworkPolicy` is configured to deny all egress traffic. This occurs because OpenShift SDN does not fully enforce egress network policies for these external endpoints. Consequently, this might result in unexpected access to external services.
* Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules.
* Egress firewall rules do not apply to traffic that goes through routers. Any user with permission to create a Route CR object can bypass egress firewall policy rules by creating a route that points to a forbidden destination.
Violating any of these restrictions results in a broken egress firewall for the project. As a result, all external network traffic drops, which can cause security risks for your organization.
You can create an egress firewall resource in the `kube-node-lease`, `kube-public`, `kube-system`, `openshift`, and `openshift-` projects.
[id="policy-rule-order_{context}"]
== Matching order for egress firewall policy rules
[id="policy-rule-order-sdn_{context}"]
== Matching order for EgressNetworkPolicy CR rules
Egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection.
[id="domain-name-server-resolution_{context}"]
== Domain Name Server (DNS) resolution
[id="domain-name-server-resolution-sdn_{context}"]
== How Domain Name Server (DNS) resolution works
If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions:
* Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 seconds. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL that is less than 30 seconds, the controller sets the duration to the returned value. If the TTL in the response is greater than 30 minutes, the controller sets the duration to 30 minutes. If the TTL is between 30 seconds and 30 minutes, the controller ignores the value and sets the duration to 30 seconds.
* The pod must resolve the domain from the same local name servers when necessary. Otherwise, the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently.
* Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in `EgressNetworkPolicy` CRs is recommended only for domains with infrequent IP address changes.
* Using DNS names in your `EgressNetworkPolicy` CR does not affect local DNS resolution through CoreDNS.
+
However, if your policy uses domain names, and an external DNS server handles DNS resolution for an affected pod, you must include egress firewall rules that permit access to the IP addresses of your DNS server.
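As a sketch of how these DNS considerations apply, the following `EgressNetworkPolicy` CR allows traffic to a single domain and denies all other external traffic; the domain name is an illustrative value only:

[source,yaml]
----
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
spec:
  egress:
  - type: Allow
    to:
      dnsName: updates.example.com
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
----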

View File

@@ -2,58 +2,36 @@
//
// * networking/network_security/configuring-egress-firewall-ovn.adoc
ifeval::["{context}" == "openshift-sdn-egress-firewall"]
:kind: EgressNetworkPolicy
:obj: egressnetworkpolicy.network.openshift.io/v1
:cni: OpenShift SDN
endif::[]
ifeval::["{context}" == "configuring-egress-firewall-ovn"]
:kind: EgressFirewall
:obj: egressfirewall.k8s.ovn.org/v1
:cni: OVN-Kubernetes
endif::[]
:_mod-docs-content-type: PROCEDURE
[id="nw-networkpolicy-create_{context}"]
= Creating an EgressNetworkPolicy custom resource (CR)
As a cluster administrator, you can create an `EgressNetworkPolicy` CR for a project.
[IMPORTANT]
====
If the project already has an `EgressNetworkPolicy` object defined, you must edit the existing policy to make changes to the egress firewall rules.
====
.Prerequisites
* A cluster that uses the OpenShift SDN network plugin.
* Install the OpenShift CLI (`oc`).
* You must log in to the cluster as a cluster administrator.
.Procedure
. Create a policy rule:
.. Create a `<policy_name>.yaml` file where `<policy_name>` describes the egress policy rules.
.. Define the `EgressNetworkPolicy` object in the file. A minimal example follows this procedure.
. Create the policy object by entering the following command. Replace `<policy_name>` with the name of the policy and `<project>` with the project that the rule applies to.
+
[source,terminal]
----
$ oc create -f <policy_name>.yaml -n <project>
----
+
Successful output lists the `egressnetworkpolicy.network.openshift.io/v1` name and the `created` status.
. Optional: Save the `<policy_name>.yaml` file so that you can make changes later.
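For reference, a minimal `<policy_name>.yaml` file might look like the following sketch; the CIDR ranges are illustrative values only:

[source,yaml]
----
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 192.0.2.0/24
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
----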

View File

@@ -1,52 +1,35 @@
// Module included in the following assemblies:
//
// * networking/openshift_sdn/removing-egress-firewall.adoc
:_mod-docs-content-type: PROCEDURE
[id="nw-egressnetworkpolicy-delete_{context}"]
= Removing an EgressNetworkPolicy custom resource (CR)
As a cluster administrator, you can remove an egress firewall from a project.
.Prerequisites
* A cluster using the OpenShift SDN network plugin.
* Install the OpenShift CLI (`oc`).
* You must log in to the cluster as a cluster administrator.
.Procedure
. Find the name of the `EgressNetworkPolicy` CR for the project.
+
[source,terminal,subs="attributes+"]
----
$ oc get -n <project> egressnetworkpolicy
----
+
Replace `<project>` with the name of the project.
. Enter the following command to delete the `EgressNetworkPolicy` CR.
+
[source,terminal,subs="attributes+"]
----
$ oc delete -n <project> egressnetworkpolicy <name>
----
+
Replace `<project>` with the name of the project and `<name>` with the name of the object.

View File

@@ -1,61 +1,44 @@
// Module included in the following assemblies:
//
// * networking/openshift_sdn/editing-egress-firewall.adoc
:_mod-docs-content-type: PROCEDURE
[id="nw-egressnetworkpolicy-edit_{context}"]
= Editing an EgressNetworkPolicy custom resource (CR)
As a cluster administrator, you can update the egress firewall for a project.
.Prerequisites
* A cluster using the OpenShift SDN network plugin.
* Install the OpenShift CLI (`oc`).
* You must log in to the cluster as a cluster administrator.
.Procedure
. Find the name of the `EgressNetworkPolicy` CR for the project.
+
[source,terminal,subs="attributes+"]
----
$ oc get -n <project> egressnetworkpolicy
----
+
Replace `<project>` with the name of the project.
. Optional: If you did not save a copy of the `EgressNetworkPolicy` CR when you created the egress firewall, enter the following command to create a copy.
+
[source,terminal,subs="attributes+"]
----
$ oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml
----
+
Replace `<project>` with the name of the project. Replace `<name>` with the name of the object. Replace `<filename>` with the name of the file to save the YAML to.
. After making changes to the policy rules, enter the following command to replace the `EgressNetworkPolicy` CR.
+
[source,terminal]
----
$ oc replace -f <filename>.yaml
----
+
Replace `<filename>` with the name of the file containing the updated `EgressNetworkPolicy` CR.

View File

@@ -1,86 +1,64 @@
// Module included in the following assemblies:
//
// * networking/openshift_sdn/configuring-egress-firewall.adoc
[id="nw-egressnetworkpolicy-object_{context}"]
= EgressNetworkPolicy custom resource (CR)
You can define one or more rules for an egress firewall. A rule is either an `Allow` rule or a `Deny` rule, with a specification for the traffic that the rule applies to.
The following YAML describes an `EgressNetworkPolicy` CR:
.EgressNetworkPolicy object
[source,yaml,subs="attributes+"]
----
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: <name>
spec:
  egress: <egress>
...
----
where:
<name>:: Specifies the name for your egress firewall policy.
<egress>:: Specifies a collection of one or more egress network policy rules as described in the following section.
[id="egressnetworkpolicy-rules_{context}"]
== EgressNetworkPolicy rules
You can specify an IP address range in CIDR format, a domain name, or the `nodeSelector` field to allow or deny egress traffic. The `egress` stanza expects an array of one or more objects. The following YAML describes an egress firewall rule object:
[source,yaml,subs="attributes+"]
----
egress:
- type: <type>
to:
cidrSelector: <cidr>
dnsName: <dns_name>
nodeSelector: <label_name>: <label_value>
----
where:
<type>:: Specifies the type of rule. The value must be either `Allow` or `Deny`.
<to>:: Specifies a stanza describing an egress traffic match rule that specifies the `cidrSelector` field or the `dnsName` field. You cannot use both fields in the same rule.
<cidr>:: Specifies an IP address range in CIDR format.
<dns_name>:: Specifies a DNS domain name.
<nodeSelector>:: Specifies labels, which are key-value pairs that the user defines. Labels are attached to objects, such as nodes. The `nodeSelector` field selects one or more node labels so that the rule applies to traffic to the nodes that carry those labels.
[id="egressnetworkpolicy-example_{context}"]
== Example EgressNetworkPolicy CR objects
The following example defines several egress firewall rules:
[source,yaml,subs="attributes+"]
----
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
name: default
spec:
@@ -92,60 +70,7 @@ spec:
to:
cidrSelector: 0.0.0.0/0
----
where:
<egress>:: Specifies a collection of egress firewall policy rule objects.

View File

@@ -1,45 +1,33 @@
// Module included in the following assemblies:
//
// * networking/network_security/configuring-egress-firewall-ovn.adoc
ifeval::["{context}" == "openshift-sdn-viewing-egress-firewall"]
:kind: EgressNetworkPolicy
:res: egressnetworkpolicy
:cni: OpenShift SDN
endif::[]
ifeval::["{context}" == "viewing-egress-firewall-ovn"]
:kind: EgressFirewall
:res: egressfirewall
:cni: OVN-Kubernetes
endif::[]
// * networking/openshift_sdn/configuring-egress-firewall.adoc
:_mod-docs-content-type: PROCEDURE
[id="nw-egressnetworkpolicy-view_{context}"]
= Viewing an EgressNetworkPolicy custom resource (CR)
You can view an `EgressNetworkPolicy` CR in your cluster.
.Prerequisites
* A cluster using the OpenShift SDN network plugin.
* Install the OpenShift Command-line Interface (CLI), commonly known as `oc`.
* You must log in to the cluster.
.Procedure
. Optional: To view the names of the `EgressNetworkPolicy` CRs defined in your cluster, enter the following command:
+
[source,terminal,subs="attributes"]
----
$ oc get egressnetworkpolicy --all-namespaces
----
. To inspect a policy, enter the following command. Replace `<policy_name>` with the name of the policy to inspect.
+
[source,terminal,subs="attributes+"]
----
$ oc describe egressnetworkpolicy <policy_name>
----
+
[source,terminal]
@@ -54,13 +42,3 @@ Rule: Allow to 1.2.3.0/24
Rule: Allow to www.example.com
Rule: Deny to 0.0.0.0/0
----

View File

@@ -8,9 +8,10 @@ toc::[]
As a cluster administrator, you can create an egress firewall for a project that restricts egress traffic leaving your {product-title} cluster.
include::modules/nw-egress-firewall-about.adoc[leveloffset=+1]
include::modules/nw-coredns-egress-firewall.adoc[leveloffset=+3]
include::modules/nw-egress-firewall-object.adoc[leveloffset=+1]
include::modules/nw-egress-firewall-create.adoc[leveloffset=+1]

View File

@@ -8,4 +8,4 @@ toc::[]
As a cluster administrator, you can modify network traffic rules for an existing egress firewall.
include::modules/nw-egress-firewall-edit.adoc[leveloffset=+1]

View File

@@ -8,4 +8,4 @@ toc::[]
As a cluster administrator, you can remove an egress firewall from a project to remove all restrictions on network traffic from the project that leaves the {product-title} cluster.
include::modules/nw-egress-firewall-delete.adoc[leveloffset=+1]

View File

@@ -8,4 +8,5 @@ toc::[]
As a cluster administrator, you can list the names of any existing egress firewalls and view the traffic rules for a specific egress firewall.
include::modules/nw-egress-firewall-view.adoc[leveloffset=+1]