mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

OSDOCS-4693: Update firewall docs to allow MicroShift API

This commit is contained in:
Shauna Diaz
2022-12-13 16:10:10 -05:00
committed by openshift-cherrypick-robot
parent b71948620b
commit ea301f0a0b
20 changed files with 253 additions and 165 deletions


@@ -22,5 +22,16 @@ include::modules/microshift-configuring-ovn.adoc[leveloffset=+1]
include::modules/microshift-http-proxy.adoc[leveloffset=+1]
include::modules/microshift-cri-o-container-runtime.adoc[leveloffset=+1]
include::modules/microshift-ovs-snapshot.adoc[leveloffset=+1]
include::modules/microshift-mDNS.adoc[leveloffset=+1]
include::modules/microshift-firewall-config.adoc[leveloffset=+1]
include::modules/microshift-firewalld-install.adoc[leveloffset=+1]
include::modules/microshift-firewall-req-settings.adoc[leveloffset=+1]
include::modules/microshift-firewall-opt-settings.adoc[leveloffset=+1]
include::modules/microshift-firewall-allow-traffic.adoc[leveloffset=+1]
include::modules/microshift-firewall-verify-settings.adoc[leveloffset=+1]
include::modules/microshift-firewall-known-issue.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* link:https://access.redhat.com/documentation/en-us/microshift/4.12/troubleshooting/index[Troubleshooting and known issues].


@@ -1,13 +1,13 @@
// Module included in the following assemblies:
//
// microshift/understanding-microshift.adoc
[id="con-about-microshift_{context}"]
= About {product-title}
Working with low-resource field environments and hardware presents many challenges not experienced with cloud computing. {product-title} enables you to solve problems for edge devices by:
* Overcoming the operational challenge of minimal system resources, for example, a {op-system-chip}.
* Addressing the environmental challenges of severe networking constraints such as low or no connectivity.
* Meeting the physical constraint of hard-to-access locations by installing your system images directly on edge devices.
* Building on and integrating with edge-optimized operating systems such as {op-system-first}.


@@ -4,7 +4,7 @@
:_content-type: PROCEDURE
[id="microshift-config-OVN-K_{context}"]
= Configuring OVN-Kubernetes
An OVN-Kubernetes config file can be written to `/etc/microshift/ovn.yaml`. {product-title} uses the default OVN-Kubernetes configuration values if an OVN-Kubernetes config file is not customized.
.Default `ovn.yaml` config values:


@@ -4,7 +4,7 @@
:_content-type: PROCEDURE
[id="microshift-CRI-O-container-engine_{context}"]
= CRI-O container runtime
To use an HTTP(S) proxy in `CRI-O`, you need to set the `HTTP_PROXY` and `HTTPS_PROXY` environment variables. You can also set the `NO_PROXY` variable to exclude a list of hosts from being proxied.
.Procedure


@@ -0,0 +1,40 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-networking.adoc
:_content-type: PROCEDURE
[id="microshift-firewall-network-traffic_{context}"]
= Allowing network traffic through the firewall
You can allow network traffic through the firewall by first configuring the IP address range with either default or custom values, and then allowing internal traffic from pods through the network gateway by inserting the DNS server.
.Procedure
. To configure the IP address range with default values, run the following command:
+
[source,terminal]
----
$ sudo firewall-offline-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
----
. Alternatively, you can configure the IP address range with custom values by running the following command:
+
[source,terminal]
----
$ sudo firewall-offline-cmd --permanent --zone=trusted --add-source=<custom IP range>
----
. To allow internal traffic from pods through the network gateway, run the following command:
+
[source,terminal]
----
$ sudo firewall-offline-cmd --permanent --zone=trusted --add-source=169.254.169.1
----
[id="microshift-firewall-applying-settings_{context}"]
== Applying firewall settings
After you have finished configuring the firewall, run the following command to reload it and apply your settings:
[source,terminal]
----
$ sudo firewall-cmd --reload
----
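After reloading, you can confirm that both sources were added to the trusted zone. The following is a minimal verification sketch, assuming the default `10.42.0.0/16` range was used; substitute your custom range if you configured one:

```shell
# List the sources attached to the trusted zone; expect to see
# 10.42.0.0/16 (or your custom range) and 169.254.169.1.
sudo firewall-cmd --zone=trusted --list-sources
```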


@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-networking.adoc
:_content-type: CONCEPT
[id="microshift-firewall-config_{context}"]
@@ -22,136 +22,4 @@ The Kubernetes pod that uses host network
[IMPORTANT]
====
{product-title} pods must have access to the internal CoreDNS component and API servers.
====
[id="microshift-firewall-install_{context}"]
== Installing the `firewalld` service
To install and run the `firewalld` service, run the following commands:
.Procedure
. To install the `firewalld` service:
+
[source,terminal]
----
$ sudo dnf install -y firewalld
----
. To initiate the firewall:
+
[source,terminal]
----
$ sudo systemctl enable firewalld --now
----
[id="microshift-required-settings_{context}"]
== Required settings
An IP address range for pods is a required part of the firewall configuration. You can use the default values or customize the IP address range. You must also configure pod access to the internal CoreDNS component.
.Required settings
[cols="1,1",options="header"]
|===
^| IP Range ^| Description
|10.42.0.0/16
|Host network pod access to CoreDNS and {product-title} API
|169.254.169.1
|Host network pod access to {product-title} API Server
|===
.Procedure
. Run the following commands to allow network traffic by first configuring the IP address range with either default or custom values, and then allowing internal traffic from pods through the network gateway by inserting the DNS server.
.. To use default values for the IP address range:
+
[source,terminal]
----
$ sudo firewall-offline-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
----
.. To allow internal traffic from pods through the network gateway:
+
[source,terminal]
----
$ sudo firewall-offline-cmd --permanent --zone=trusted --add-source=169.254.169.1
----
. To use custom values for the IP address range:
+
[source,terminal]
----
$ sudo firewall-offline-cmd --permanent --zone=trusted --add-source=<custom IP range>
----
. To allow internal traffic from pods through the network gateway:
+
[source,terminal]
----
$ sudo firewall-offline-cmd --permanent --zone=trusted --add-source=169.254.169.1
----
. Reload the firewall rules:
+
[source,terminal]
----
$ sudo firewall-cmd --reload
----
[id="microshift-firewall-optional-settings_{context}"]
== Optional settings
.Procedure
. To add customized ports to your firewall configuration, use the following command syntax:
+
[source,terminal]
----
$ sudo firewall-offline-cmd --permanent --zone=public --add-port=<port number>/<port protocol>
----
+
.Optional ports
[options="header"]
|===
|Port(s)|Protocol(s)|Description
|80
|TCP
|HTTP port used to serve applications through the {ocp} router.
|443
|TCP
|HTTPS port used to serve applications through the {ocp} router.
|5353
|UDP
|mDNS service to respond for {ocp} route mDNS hosts.
|30000-32767
|TCP
|Port range reserved for NodePort services; can be used to expose applications on the LAN.
|30000-32767
|UDP
|Port range reserved for NodePort services; can be used to expose applications on the LAN.
|6443
|TCP
|HTTPS API port for the {product-title} API.
|===
=== Known firewall issue
To avoid breaking traffic flows with a firewall restart, run firewall commands before starting the OVN-Kubernetes pods. OVN-Kubernetes uses iptables rules for some traffic flows, such as those using the NodePort service. The iptables rules are generated and inserted by the `ovnkube-master` container, but are deleted when the firewall restarts. The absence of the iptables rules breaks traffic flows. If you must run firewall commands after the OVN-Kubernetes pods have started, manually restart the `ovnkube-master` pod to trigger a reconciliation of the iptables rules.
//See Troubleshooting for a detailed procedure. Need hard link to troubleshooting section
[id="microshift-firewall-applying-settings_{context}"]
== Applying firewall settings
After you have finished configuring the firewall, run the following command to reload it and apply your settings:
[source,terminal]
----
$ sudo firewall-cmd --reload
----
//Q: How do we verify? What should we see after running this command?


@@ -0,0 +1,8 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-networking.adoc
:_content-type: CONCEPT
[id="microshift-firewall-known-issue_{context}"]
= Known firewall issue
* To avoid breaking traffic flows with a firewall reload or restart, run firewall commands before starting {product-title}. The CNI driver in {product-title} uses iptables rules for some traffic flows, such as those using the NodePort service. The iptables rules are generated and inserted by the CNI driver, but are deleted when the firewall reloads or restarts. The absence of the iptables rules breaks traffic flows. If you must run firewall commands after {product-title} is running, manually restart the `ovnkube-master` pod in the `openshift-ovn-kubernetes` namespace to reset the rules controlled by the CNI driver.
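As a sketch of the manual restart described above, you can delete the `ovnkube-master` pod so that it is recreated and reinserts its iptables rules. The label selector shown here is an assumption, not confirmed by this document; check the actual pod labels in your cluster first:

```shell
# Assumed label selector; verify with:
#   oc -n openshift-ovn-kubernetes get pods --show-labels
oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-master
```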


@@ -0,0 +1,71 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-networking.adoc
:_content-type: PROCEDURE
[id="microshift-firewall-optional-settings_{context}"]
= Optional port settings
.Procedure
. To add customized ports to your firewall configuration, use the following command syntax:
+
[source,terminal]
----
$ sudo firewall-cmd --permanent --zone=public --add-port=<port number>/<port protocol>
----
+
.Optional ports
[options="header"]
|===
|Port(s)|Protocol(s)|Description
|80
|TCP
|HTTP port used to serve applications through the {ocp} router.
|443
|TCP
|HTTPS port used to serve applications through the {ocp} router.
|5353
|UDP
|mDNS service to respond for {ocp} route mDNS hosts.
|30000-32767
|TCP
|Port range reserved for NodePort services; can be used to expose applications on the LAN.
|30000-32767
|UDP
|Port range reserved for NodePort services; can be used to expose applications on the LAN.
|6443
|TCP
|HTTPS API port for the {product-title} API.
|===
The following are examples of commands used when you require external access through the firewall to services running on {product-title}, such as port 6443 for the API server, or ports 80 and 443 for applications exposed through the router.
.Example commands
* Configuring a port for the {product-title} API server:
+
[source,terminal]
----
$ sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp
----
* Configuring ports for applications exposed through the router:
+
[source,terminal]
----
$ sudo firewall-cmd --permanent --zone=public --add-port=80/tcp
----
+
[source,terminal]
----
$ sudo firewall-cmd --permanent --zone=public --add-port=443/tcp
----
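Because the commands above use `--permanent`, the new port rules do not take effect until the firewall reloads. A minimal sketch of applying and checking them:

```shell
# Apply the permanent port rules to the running firewall,
# then confirm the ports are open in the public zone.
sudo firewall-cmd --reload
sudo firewall-cmd --zone=public --list-ports
```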


@@ -0,0 +1,42 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-networking.adoc
:_content-type: CONCEPT
[id="microshift-required-settings_{context}"]
= Required firewall settings
An IP address range for the cluster network must be enabled during firewall configuration. You can use the default values or customize the IP address range. If you choose to customize the cluster network IP address range from the default `10.42.0.0/16` setting, you must also use the same custom range in the firewall configuration.
.Firewall IP address settings
[cols="3",options="header"]
|===
|IP Range
|Firewall rule required
|Description
|10.42.0.0/16
|No
|Host network pod access to other pods
|169.254.169.1
|Yes
|Host network pod access to {product-title} API server
|===
The following example commands configure settings that are mandatory for the firewall configuration:
.Example commands
* Configure host network pod access to other pods:
+
[source,terminal]
----
$ sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
----
* Configure host network pod access to services backed by Host endpoints, such as the {product-title} API:
+
[source,terminal]
----
$ sudo firewall-cmd --permanent --zone=trusted --add-source=169.254.169.1
----
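If you customized the cluster network range, substitute your CIDR for the default in the first rule. The following sketch uses a hypothetical custom range of `10.45.0.0/16`, which is an example value only; it must match the cluster network range configured for {product-title}:

```shell
# Hypothetical custom cluster network range; this value must match
# the cluster network CIDR configured for MicroShift.
sudo firewall-cmd --permanent --zone=trusted --add-source=10.45.0.0/16
sudo firewall-cmd --permanent --zone=trusted --add-source=169.254.169.1
sudo firewall-cmd --reload
```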


@@ -0,0 +1,24 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-networking.adoc
:_content-type: PROCEDURE
[id="microshift-firewall-verifying-settings_{context}"]
= Verifying firewall settings
After you have restarted the firewall, you can verify your settings by listing them.
.Procedure
* To verify rules added in the default public zone, such as ports-related rules, run the following command:
+
[source,terminal]
----
$ sudo firewall-cmd --list-all
----
* To verify rules added in the trusted zone, such as IP-range related rules, run the following command:
+
[source,terminal]
----
$ sudo firewall-cmd --zone=trusted --list-all
----
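You can also query individual rules rather than listing an entire zone. The following is a sketch; `--query-port` and `--query-source` print `yes` or `no` and set the exit status accordingly, which is convenient in scripts:

```shell
# Check whether a specific port or source is present in a zone.
sudo firewall-cmd --zone=public --query-port=6443/tcp
sudo firewall-cmd --zone=trusted --query-source=169.254.169.1
```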


@@ -0,0 +1,24 @@
// Module included in the following assemblies:
//
// * microshift_configuring/microshift-networking.adoc
:_content-type: PROCEDURE
[id="microshift-firewall-install_{context}"]
= Installing the firewalld service
Use the following procedure to install and run the `firewalld` service for {product-title}.
.Procedure
. To install the `firewalld` service, run the following command:
+
[source,terminal]
----
$ sudo dnf install -y firewalld
----
. To initiate the firewall, run the following command:
+
[source,terminal]
----
$ sudo systemctl enable firewalld --now
----
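To confirm that the service started successfully before adding rules, you can check its state; `firewall-cmd --state` prints `running` when the daemon is active:

```shell
# Verify the firewalld daemon is active and enabled at boot.
sudo firewall-cmd --state
sudo systemctl is-enabled firewalld
```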


@@ -4,7 +4,7 @@
:_content-type: CONCEPT
[id="microshift-http-proxy_{context}"]
= Deploying {product-title} behind an HTTP(S) proxy
Deploy a {product-title} cluster behind an HTTP(S) proxy when you want to add basic anonymity and security measures to your pods.
When deploying {product-title} behind a proxy, you must configure the host operating system so that all components initiating HTTP(S) requests use the proxy service.


@@ -4,13 +4,13 @@
:_content-type: CONCEPT
[id="lvms-configuring"]
= Configuring the LVMS
{product-title} supports passing through a user's LVMS configuration and allows users to specify custom volume groups, thin volume provisioning parameters, and reserved unallocated volume group space. The LVMS configuration file can be edited at any time. You must restart {product-title} to deploy configuration changes.
The following `config.yaml` file shows a basic LVMS configuration:
.LVMS YAML configuration example
[source,yaml]
----
socket-name: <1>


@@ -4,8 +4,8 @@
:_content-type: CONCEPT
[id="lvms-deployment"]
= LVMS Deployment
LVMS is automatically deployed onto the cluster in the `openshift-storage` namespace after {product-title} boots.
LVMS uses `StorageCapacity` tracking to ensure that pods with an LVMS PVC are not scheduled if the requested storage is greater than the volume group's remaining free storage. For more information about `StorageCapacity` tracking, see link:https://kubernetes.io/docs/concepts/storage/storage-capacity/[Storage Capacity].


@@ -4,22 +4,22 @@
:_content-type: CONCEPT
[id="lvms-system-requirements"]
= LVMS system requirements
{product-title}'s LVMS requires the following system specifications.
[id="lvms-volume-group-name"]
== Volume Group Name
The default integration of LVMS assumes a volume group named `rhel`. Prior to launching, the `lvmd.yaml` configuration file must specify an existing volume group on the node with sufficient capacity for workload storage. If the volume group does not exist, the node controller will fail to start and enter a `CrashLoopBackoff` state.
[id="lvms-volume-size-increments"]
== Volume size increments
The LVMS provisions storage in increments of 1 GB. Storage requests are rounded up to the nearest gigabyte (GB). When a volume group's capacity is less than 1 GB, the `PersistentVolumeClaim` registers a `ProvisioningFailed` event, for example:
[source,terminal]
----
Warning ProvisioningFailed 3s (x2 over 5s) topolvm.cybozu.com_topolvm-controller-858c78d96c-xttzp_0fa83aef-2070-4ae2-bcb9-163f818dcd9f failed to provision volume with
StorageClass "topolvm-provisioner": rpc error: code = ResourceExhausted desc = no enough space left on VG: free=(BYTES_INT), requested=(BYTES_INT)
----


@@ -4,7 +4,7 @@
:_content-type: CONCEPT
[id="microshift-mDNS_{context}"]
= The multicast DNS protocol
The multicast DNS protocol (mDNS) allows name resolution and service discovery within a Local Area Network (LAN) using multicast exposed on the `5353/UDP` port.
{product-title} includes an embedded mDNS server for deployment scenarios in which the authoritative DNS server cannot be reconfigured to point clients to services on {product-title}. The embedded DNS server allows `.local` domains exposed by {product-title} to be discovered by other elements on the LAN.
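As an illustration of mDNS discovery from another machine on the LAN, you could browse advertised services and resolve an exposed `.local` hostname. This sketch assumes the Avahi client tools are installed, and `myapp.local` is a hypothetical route name, not one defined by this document:

```shell
# Browse all mDNS-advertised services on the LAN (assumes avahi-utils).
avahi-browse --all --terminate
# Resolve a hypothetical .local hostname exposed by MicroShift.
avahi-resolve --name myapp.local
```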


@@ -4,7 +4,7 @@
:_content-type: PROCEDURE
[id="microshift-OVS-snapshot_{context}"]
= Getting a snapshot of OVS interfaces from a running cluster
.Procedure
To see a snapshot of OVS interfaces from a running {product-title} cluster, use the following command:


@@ -4,7 +4,7 @@
:_content-type: PROCEDURE
[id="microshift-rpm-ostree-package-system_{context}"]
= rpm-ostree image and package system
To use the HTTP(S) proxy in rpm-ostree, set the `http_proxy` environment variable for the `rpm-ostreed` service.
.Procedure


@@ -4,13 +4,13 @@
:_content-type: CONCEPT
[id="setting-lvms-path"]
= Setting the LVMS path
The `config.yaml` file for the LVMS should be written to the same directory as the {product-title} `config.yaml` file. If a {product-title} `config.yaml` file does not exist, {product-title} creates an LVMS YAML file and automatically populates the configuration fields with the default settings. The following paths are checked for the `config.yaml` file, depending on which user runs {product-title}:
.LVMS paths
[options="header",cols="1,3"]
|===
|{product-title} user | Configuration directory
|Global administrator | `/etc/microshift/lvmd.yaml`
|===


@@ -6,11 +6,11 @@
[id="using-lvms"]
= Using the LVMS
The LVMS `StorageClass` is deployed with a default `StorageClass`. Any `PersistentVolumeClaim` object without a `.spec.storageClassName` defined automatically has a `PersistentVolume` provisioned from the default `StorageClass`.
Use the following procedure to provision and mount a logical volume to a pod.
.Procedure
* Enter the following command to provision and mount a logical volume to a pod:
+