Mirror of https://github.com/openshift/openshift-docs.git (synced 2026-02-05 03:47:04 +01:00)

Commit 4a0bd7b8b1 (parent bd98da653c), committed by openshift-cherrypick-robot
OSDOCS-16338: fixes cluster mentions MicroShift

@@ -250,7 +250,7 @@ Topics:
File: microshift-operators-olm
- Name: Creating custom catalogs with oc-mirror
File: microshift-operators-oc-mirror
- Name: Adding OLM-based Operators to a disconnected cluster
- Name: Adding OLM-based Operators to a disconnected node
File: microshift-operators-oc-mirror-disconnected
---
Name: Backup and restore

@@ -270,12 +270,12 @@ Topics:
File: microshift-etcd
- Name: The sos report tool
File: microshift-sos-report
- Name: Getting your cluster ID
File: microshift-getting-cluster-id
- Name: Getting your node ID
File: microshift-getting-node-id
- Name: Getting support
File: microshift-getting-support
- Name: Remote health monitoring with a connected cluster
File: microshift-remote-cluster-monitoring
- Name: Remote health monitoring with a connected node
File: microshift-remote-node-monitoring
---
Name: Troubleshooting
Dir: microshift_troubleshooting

@@ -283,8 +283,8 @@ Distros: microshift
Topics:
- Name: Check your version
File: microshift-version
- Name: Troubleshoot the cluster
File: microshift-troubleshoot-cluster
- Name: Troubleshoot the node
File: microshift-troubleshoot-node
- Name: Troubleshoot installation issues
File: microshift-installing-troubleshooting
- Name: Troubleshoot backup and restore

@@ -11,7 +11,7 @@ The optional {oc-first} tool provides a subset of `oc` commands for {microshift-
include::modules/microshift-cli-oc-about.adoc[leveloffset=+1]

[id="cli-using-cli_{context}"]
== Using oc with a {microshift-short} cluster
== Using oc with a {microshift-short} node

Review the following sections to learn how to complete common tasks in {microshift-short} using the `oc` CLI.
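
For instance, a minimal sketch of such a task, assuming the kubeconfig for the node is already exported, is listing every pod the node is running (the namespace-wide flag is standard `oc` behavior, not specific to this assembly):

[source,terminal]
----
$ oc get pods -A
----
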
@@ -11,7 +11,7 @@ The Kubernetes command-line interface (CLI), `kubectl`, can be used to run comma
[id="microshift-kubectl-binary_{context}"]
== The kubectl CLI tool

You can use the `kubectl` CLI tool to interact with Kubernetes primitives on your {microshift-short} cluster. You can also use existing `kubectl` workflows and scripts for users coming from another Kubernetes environment, or for those who prefer to use the `kubectl` CLI.
You can use the `kubectl` CLI tool to interact with Kubernetes primitives on your {microshift-short} node. You can also use existing `kubectl` workflows and scripts for users coming from another Kubernetes environment, or for those who prefer to use the `kubectl` CLI.

* The `kubectl` CLI tool is included in the archive when you download `oc`.
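
As a hedged illustration of those workflows, assuming the `kubectl` binary from that archive is on your `PATH` and `KUBECONFIG` points at the node, the usual verification commands apply unchanged:

[source,terminal]
----
$ kubectl get nodes
$ kubectl get pods -A
----
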
@@ -6,7 +6,7 @@ include::_attributes/attributes-microshift.adoc[]
toc::[]

Learn about how `kubeconfig` files are used with {microshift-short} deployments. CLI tools use `kubeconfig` files to communicate with the API server of a cluster. These files provide cluster details, IP addresses, and other information needed for authentication.
Learn about how `kubeconfig` files are used with {microshift-short} deployments. CLI tools use `kubeconfig` files to communicate with the API server of a node. These files provide node details, IP addresses, and other information needed for authentication.
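
As a sketch only, assuming local root access on the node and the default file location generated by the service (the path is the commonly documented default and may differ in your deployment), pointing a CLI tool at a `kubeconfig` file typically looks like this:

[source,terminal]
----
$ export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
$ oc get nodes
----
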
include::modules/microshift-kubeconfig-overview.adoc[leveloffset=+1]

@@ -20,7 +20,7 @@ include::modules/microshift-nw-advertise-address.adoc[leveloffset=+2]
include::modules/microshift-config-nodeport-limits.adoc[leveloffset=+2]

[id="additional-resources_microshift-using-config-yaml_{context}"]
[id="additional-resources_microshift-using-config-yaml"]
[role="_additional-resources"]
== Additional resources

@@ -20,5 +20,5 @@ include::modules/microshift-tls-default-cipher-suites.adoc[leveloffset=+2]
* xref:../../microshift_configuring/microshift-config-snippets.adoc#microshift-config-snippets[Using configuration snippets]
* xref:../../microshift_running_apps/microshift-authentication.adoc#authentication-microshift[Pod security authentication and authorization with SCC]
* xref:../../microshift_configuring/microshift-node-access-kubeconfig#microshift-node-access-kubeconfig[Cluster access with kubeconfig]
* xref:../../microshift_configuring/microshift-node-access-kubeconfig#microshift-node-access-kubeconfig[Node access with kubeconfig]
* xref:../microshift_auth_security/microshift-custom-ca.adoc#microshift-custom-ca[Configuring custom certificate authorities]

@@ -6,7 +6,7 @@ include::_attributes/attributes-microshift.adoc[]
toc::[]

You can use a custom container registry when you deploy {microshift-short} in a disconnected network. Running your cluster in a restricted network without direct internet connectivity is possible by installing the cluster from a mirrored set of container images in a private registry.
You can use a custom container registry when you deploy {microshift-short} in a disconnected network. Running your node in a restricted network without direct internet connectivity is possible by installing the node from a mirrored set of container images in a private registry.

include::modules/microshift-mirror-container-images.adoc[leveloffset=+1]

@@ -6,7 +6,7 @@ include::_attributes/attributes-microshift.adoc[]
toc::[]

You can install optional RPM packages with {microshift-short} to provide additional cluster and application services.
You can install optional RPM packages with {microshift-short} to provide additional node and application services.

[id="microshift-install-rpm-add-ons"]
== Installing optional packages

@@ -6,7 +6,7 @@ include::_attributes/attributes-microshift.adoc[]
toc::[]

Embedding {microshift-short} containers in an `rpm-ostree` commit means that you can run a cluster in air-gapped, disconnected, or offline environments. You can embed {product-title} containers in a {op-system-ostree-first} image so that container engines do not need to pull images over a network from a container registry. Workloads can start immediately without network connectivity.
Embedding {microshift-short} containers in an `rpm-ostree` commit means that you can run a node in air-gapped, disconnected, or offline environments. You can embed {product-title} containers in a {op-system-ostree-first} image so that container engines do not need to pull images over a network from a container registry. Workloads can start immediately without network connectivity.

include::modules/microshift-embed-microshift-image-offline-deploy.adoc[leveloffset=+1]

@@ -6,7 +6,7 @@ include::_attributes/attributes-microshift.adoc[]
toc::[]

The OVN-Kubernetes Container Network Interface (CNI) plugin is the default networking solution for the {microshift-short} node. OVN-Kubernetes is a virtualized network for pods and services that is based on Open Virtual Network (OVN).
The OVN-Kubernetes Container Network Interface (CNI) plugin is the default networking solution for a {microshift-short} node. OVN-Kubernetes is a virtualized network for pods and services that is based on Open Virtual Network (OVN).

* Default network configuration and connections are applied automatically in {microshift-short} with the `microshift-networking` RPM during installation.
* A node that uses the OVN-Kubernetes network plugin also runs Open vSwitch (OVS) on the node.

@@ -6,7 +6,7 @@ include::_attributes/attributes-microshift.adoc[]
toc::[]

You can configure routes for services to have {microshift-short} cluster access.
You can configure routes for services to have {microshift-short} node access.
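
For example, a minimal sketch of exposing an existing service as a route, assuming a service named `hello-microshift` already exists in the current namespace (the service name is a placeholder):

[source,terminal]
----
$ oc expose service/hello-microshift
$ oc get routes
----
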
//OCP module, edit with care; Creating an insecure/http route
include::modules/microshift-nw-create-http-based-route.adoc[leveloffset=+1]

@@ -8,13 +8,13 @@ toc::[]
Learn how to apply networking customization and default settings to {microshift-short} deployments. Each node is contained to a single machine and single {microshift-short}, so each deployment requires individual configuration, pods, and settings.

Cluster Administrators have several options for exposing applications that run inside a cluster to external traffic and securing network connections:
{microshift-short} administrators have several options for exposing applications that run inside a node to external traffic and securing network connections:

* A service such as NodePort

* API resources, such as `Ingress` and `Route`

By default, Kubernetes allocates each pod an internal IP address for applications running within the pod. Pods and their containers can have traffic between them, but clients outside the cluster do not have direct network access to pods except when exposed with a service such as NodePort.
By default, Kubernetes allocates each pod an internal IP address for applications running within the pod. Pods and their containers can have traffic between them, but clients outside the node do not have direct network access to pods except when exposed with a service such as NodePort.

include::modules/microshift-configuring-ovn.adoc[leveloffset=+1]

@@ -31,7 +31,7 @@ include::modules/microshift-ovs-snapshot.adoc[leveloffset=+1]
[id="microshift-about-load-balancer-service_{context}"]
== The {microshift-short} LoadBalancer service for workloads

{microshift-short} has a built-in implementation of network load balancers that you can use for your workloads and applications within the cluster. You can create a `LoadBalancer` service by configuring a pod to interpret ingress rules and serve as an ingress controller. The following procedure gives an example of a deployment-based `LoadBalancer` service.
{microshift-short} has a built-in implementation of network load balancers that you can use for your workloads and applications within the node. You can create a `LoadBalancer` service by configuring a pod to interpret ingress rules and serve as an ingress controller. The following procedure gives an example of a deployment-based `LoadBalancer` service.
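
As an illustrative sketch, assuming a deployment named `example-app` that listens on port 8080 (both are placeholders, not values from this assembly), a `LoadBalancer` service can be created directly from the CLI:

[source,terminal]
----
$ oc expose deployment/example-app --type=LoadBalancer --port=8080
$ oc get service example-app
----
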
include::modules/microshift-deploying-a-load-balancer.adoc[leveloffset=+1]

@@ -10,7 +10,7 @@ In addition to the default OVN-Kubernetes Container Network Interface (CNI) plug
include::modules/microshift-multus-intro.adoc[leveloffset=+1]

include::modules/microshift-install-multus-running-cluster.adoc[leveloffset=+1]
include::modules/microshift-install-multus-running-node.adoc[leveloffset=+1]

//OCP module, edit with conditionals and care
include::modules/nw-multus-bridge-object.adoc[leveloffset=+1]

@@ -6,7 +6,7 @@ include::_attributes/attributes-microshift.adoc[]
toc::[]

Learn how network policies work for {microshift-short} to restrict or allow network traffic to pods in your cluster.
Learn how network policies work for {microshift-short} to restrict or allow network traffic to pods in your node.
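
A minimal sketch of such a policy, assuming a namespace named `example-ns` and standard Kubernetes `NetworkPolicy` semantics, is a default-deny rule for ingress traffic:

[source,terminal]
----
$ oc apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-by-default
  namespace: example-ns
spec:
  podSelector: {}
  ingress: []
EOF
----
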
include::modules/microshift-nw-network-policy-intro.adoc[leveloffset=+1]

@@ -9,4 +9,4 @@ toc::[]
Use the following procedure to view a network policy for a namespace.

include::modules/nw-networkpolicy-view-cli.adoc[leveloffset=+1]
include::modules/nw-networkpolicy-view-cli.adoc[leveloffset=+1]

@@ -8,7 +8,7 @@ toc::[]
{microshift-short} supports the deletion of manifest resources in the following situations:

* Manifest removal: Manifests can be removed when you need to completely remove a resource from the cluster.
* Manifest removal: Manifests can be removed when you need to completely remove a resource from the node.
* Manifest upgrade: During an application upgrade, some resources might need to be removed while others are retained to preserve data.

When creating new manifests, you can use manifest resource deletion to remove or update old objects, ensuring there are no conflicts or issues.

@@ -6,7 +6,7 @@ include::_attributes/attributes-microshift.adoc[]
toc::[]

You can embed microservices-based workloads and applications in a {op-system-ostree-first} image. Embedding means you can run a {product-title} cluster in air-gapped, disconnected, or offline environments.
You can embed microservices-based workloads and applications in a {op-system-ostree-first} image. Embedding means you can run a {microshift-short} node in air-gapped, disconnected, or offline environments.

include::modules/microshift-embed-images-offline-use.adoc[leveloffset=+1]

@@ -6,7 +6,7 @@ include::_attributes/attributes-microshift.adoc[]
toc::[]

You can embed microservices-based workloads and applications in a {op-system-ostree-first} image to run in a {microshift-short} cluster. Embedded applications can be installed directly on edge devices to run in disconnected or offline environments.
You can embed microservices-based workloads and applications in a {op-system-ostree-first} image to run in a {microshift-short} node. Embedded applications can be installed directly on edge devices to run in disconnected or offline environments.

[id="microshift-add-app-RPMs-to-rpm-ostree-image_{context}"]
== Adding application RPMs to an rpm-ostree image

@@ -6,7 +6,7 @@ include::_attributes/attributes-microshift.adoc[]
toc::[]

The following tutorial gives a detailed example of how to embed applications in a {op-system-ostree} image for use in a {microshift-short} cluster in various environments.
The following tutorial gives a detailed example of how to embed applications in a {op-system-ostree} image for use in a {microshift-short} node in various environments.

include::modules/microshift-embed-app-rpms-tutorial.adoc[leveloffset=+1]

@@ -6,14 +6,14 @@ include::_attributes/attributes-microshift.adoc[]
toc::[]

GitOps with Argo CD for {microshift-short} is a lightweight, optional add-on controller derived from the Red Hat OpenShift GitOps Operator. GitOps for {microshift-short} uses the command-line interface (CLI) of Argo CD to interact with the GitOps controller that acts as the declarative GitOps engine. You can consistently configure and deploy Kubernetes-based infrastructure and applications across clusters and development lifecycles.
GitOps with Argo CD for {microshift-short} is a lightweight, optional add-on controller derived from the Red Hat OpenShift GitOps Operator. GitOps for {microshift-short} uses the command-line interface (CLI) of Argo CD to interact with the GitOps controller that acts as the declarative GitOps engine. You can consistently configure and deploy Kubernetes-based infrastructure and applications across node and development lifecycles.

[id="microshift-gitops-can-do_{context}"]
== What you can do with the GitOps agent
By using the GitOps with Argo CD agent with {microshift-short}, you can utilize the following principles:

* Implement application lifecycle management.
** Create and manage your clusters and application configuration files using the core principles of developing and maintaining software in a Git repository.
** Create and manage your node and application configuration files using the core principles of developing and maintaining software in a Git repository.
** You can update the single repository and GitOps automates the deployment of new applications or updates to existing ones.
** For example, if you have 1,000 edge devices, each using {microshift-short} and a local GitOps agent, you can easily add or update an application on all 1,000 devices with just one change in your central Git repository.
* The Git repository contains a declarative description of the infrastructure you need in your specified environment and contains an automated process to make your environment match the described state.
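
As a hedged sketch of that flow, assuming the Argo CD CLI is installed and logged in to the local GitOps agent, and using a placeholder repository and path, registering an application can look like this:

[source,terminal]
----
$ argocd app create example-app \
    --repo https://github.com/example/gitops-config.git \
    --path apps/example \
    --dest-server https://kubernetes.default.svc \
    --dest-namespace example
----
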
@@ -31,7 +31,7 @@ GitOps with Argo CD for {microshift-short} has the following differences from th
* The `gitops-operator` component is not used with {microshift-short}.
* To maintain the small resource use of {microshift-short}, the Argo CD web console is not available. You can use the Argo CD CLI.
* Because {microshift-short} is single-node, there is no multi-cluster support. Each instance of {microshift-short} is paired with a local GitOps agent.
* Because {microshift-short} is single-node, there is no multi-node support. Each instance of {microshift-short} is paired with a local GitOps agent.
* The `oc adm must-gather` command is not available in {microshift-short}.

[id="microshift-gitops-troubleshooting_{context}"]

@@ -1,6 +1,6 @@
:_mod-docs-content-type: ASSEMBLY
[id="microshift-operators-oc-mirror-disconnected"]
= Adding OLM-based Operators to a disconnected cluster
= Adding OLM-based Operators to a disconnected node
include::_attributes/attributes-microshift.adoc[]
:context: microshift-operators-oc-mirror-disconnected

@@ -28,8 +28,8 @@ include::modules/microshift-oc-mirror-to-mirror.adoc[leveloffset=+1]
//Convert the imageset file and add configuration to CRI-O
include::modules/microshift-oc-mirror-transform-imageset-to-crio.adoc[leveloffset=+1]

//Apply changes to cluster so it can use Operators
include::modules/microshift-oc-mirror-install-catalog-cluster.adoc[leveloffset=+1]
//Apply changes to node so it can use Operators
include::modules/microshift-oc-mirror-install-catalog-node.adoc[leveloffset=+1]

[id="Additional-resources_microshift-operators-oc-mirror_{context}"]
[role="_additional-resources"]

@@ -15,10 +15,10 @@ Operator Lifecycle Manager (OLM) is used in {microshift-short} for installing an
* Cluster Operators as applied in {ocp} are not used in {microshift-short}.
* You must create your own catalogs for the add-on Operators you want to use with your applications. Catalogs are not provided by default.
** Each catalog must have an accessible `CatalogSource` added to a cluster, so that the OLM catalog Operator can use the catalog for content.
* You must use the CLI to conduct OLM activities with {microshift-short}. The console, software catalog, and catalog management GUIs are not available.
** Use the link:https://access.redhat.com/documentation/en-us/openshift_container_platform/{ocp-version}/html/cli_tools/opm-cli#cli-opm-install[Operator Package Manager `opm` CLI] with network-connected clusters, or for building catalogs for custom Operators that use an internal registry.
** To mirror your catalogs and Operators for disconnected or offline clusters, install link:https://docs.openshift.com/container-platform/{ocp-version}/installing/disconnected_install/installing-mirroring-disconnected.html#installation-oc-mirror-installing-plugin_installing-mirroring-disconnected[the oc-mirror OpenShift CLI plugin].
** Each catalog must have an accessible `CatalogSource` added to a node, so that the OLM catalog Operator can use the catalog for content.
* You must use the CLI to conduct OLM activities with {microshift-short}. The console and OperatorHub GUIs are not available.
** Use the link:https://access.redhat.com/documentation/en-us/openshift_container_platform/{ocp-version}/html/cli_tools/opm-cli#cli-opm-install[Operator Package Manager `opm` CLI] with a network-connected node, or for building catalogs for custom Operators that use an internal registry.
** To mirror your catalogs and Operators for disconnected or offline nodes, install link:https://docs.openshift.com/container-platform/{ocp-version}/installing/disconnected_install/installing-mirroring-disconnected.html#installation-oc-mirror-installing-plugin_installing-mirroring-disconnected[the oc-mirror OpenShift CLI plugin].
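
As an illustrative sketch, assuming an image set configuration file has already been written and a mirror registry is reachable, a typical oc-mirror invocation pushes the catalog and Operator images in one step (the file name and registry host are placeholders):

[source,terminal]
----
$ oc mirror --config=imageset-config.yaml docker://registry.example.com:8443
----
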
[IMPORTANT]
====

@@ -27,7 +27,7 @@ Before using an Operator, verify with the provider that the Operator is supporte
[id="microshift-installing-olm-options_{context}"]
== Determining your OLM installation type
You can install the OLM package manager for use with {microshift-short} 4.15 or newer versions. There are different ways to install OLM for {microshift-short} clusters, depending on your use case.
You can install the OLM package manager for use with {microshift-short} 4.15 or newer versions. There are different ways to install OLM for a {microshift-short} node, depending on your use case.

* You can install the `microshift-olm` RPM at the same time you install the {microshift-short} RPM on {op-system-base-full}.
* You can install the `microshift-olm` on an existing {microshift-short} {product-version}. Restart the {microshift-short} service after installing OLM for the changes to apply.
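
A minimal sketch of the second path, assuming a {op-system-base} host with the {microshift-short} repositories already enabled:

[source,terminal]
----
$ sudo dnf install -y microshift-olm
$ sudo systemctl restart microshift
----
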
@@ -59,5 +59,5 @@ include::modules/microshift-olm-deploy-ops-spec-ns.adoc[leveloffset=+2]
//additional resources for working with operators after deployment
[role="_additional-resources"]
.Additional resources
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/{ocp-version}/html/operators/administrator-tasks#olm-upgrading-operators[Updating installed Operators]
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/{ocp-version}/html/operators/administrator-tasks#olm-deleting-operator-from-a-cluster-using-cli_olm-deleting-operators-from-a-cluster[Deleting Operators from a cluster using the CLI]
* link:https://docs.redhat.com/en/documentation/openshift_container_platform/{ocp-version}/html/operators/administrator-tasks#olm-upgrading-operators[Updating installed Operators]
* link:https://docs.redhat.com/en/documentation/openshift_container_platform/{ocp-version}/html/operators/administrator-tasks#olm-deleting-operator-from-a-cluster-using-cli_olm-deleting-operators-from-a-cluster[Deleting Operators from a cluster using the CLI]

@@ -6,16 +6,16 @@ include::_attributes/attributes-microshift.adoc[]
toc::[]

You can use Operators with {microshift-short} to create applications that monitor the running services in your cluster. Operators can manage applications and their resources, such as deploying a database or message bus. As customized software running inside your cluster, Operators can be used to implement and automate common operations.
You can use Operators with {microshift-short} to create applications that monitor the running services in your node. Operators can manage applications and their resources, such as deploying a database or message bus. As customized software running inside your node, Operators can be used to implement and automate common operations.

Operators offer a more localized configuration experience and integrate with Kubernetes APIs and CLI tools such as `kubectl` and `oc`. Operators are designed specifically for your applications. Operators enable you to configure components instead of modifying a global configuration file.

{microshift-short} applications are generally expected to be deployed in static environments. However, Operators are available if helpful in your use case. To determine the compatibility of an Operator with {microshift-short}, check the Operator documentation.

[id="microshift-operators-installation-paths_{context}"]
== How to use Operators with {microshift-short} clusters
== How to use Operators with a {microshift-short} node

There are two ways to use Operators for your {microshift-short} clusters:
There are two ways to use Operators for your {microshift-short} node:

[id="microshift-operators-paths-manifests_{context}"]
=== Manifests for Operators

@@ -25,6 +25,6 @@ Operators can be installed and managed directly by using manifests. You can use
[id="microshift-operators-paths-olm_{context}"]
=== Operator Lifecycle Manager for Operators
You can also install add-on Operators to a {microshift-short} cluster using Operator Lifecycle Manager (OLM). OLM can be used to manage both custom Operators and Operators that are widely available. Building catalogs is required to use OLM with {microshift-short}.
You can also install add-on Operators to a {microshift-short} node by using Operator Lifecycle Manager (OLM). OLM can be used to manage both custom Operators and Operators that are widely available. Building catalogs is required to use OLM with {microshift-short}.

* For details, see xref:../../microshift_running_apps/microshift_operators/microshift-operators-olm.adoc#microshift-operators-olm[Using Operator Lifecycle Manager with {microshift-short}].

@@ -6,7 +6,7 @@ include::_attributes/attributes-microshift.adoc[]
toc::[]

{microshift-short} supports multiple types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in a {product-title} cluster.
{microshift-short} supports multiple types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in a {microshift-short} node.

[id="microshift-storage-types"]
== Storage types

@@ -21,7 +21,7 @@ Pods and containers are ephemeral or transient in nature and designed for statel
[id="microshift-persistent-storage"]
=== Persistent storage

Stateful applications deployed in containers require persistent storage. {microshift-short} uses a pre-provisioned storage framework called persistent volumes (PV) to allow cluster administrators to provision persistent storage. The data inside these volumes can exist beyond the lifecycle of an individual pod. Developers can use persistent volume claims (PVCs) to request storage requirements. For persistent storage details, read xref:../microshift_storage/understanding-persistent-storage-microshift.adoc#understanding-persistent-storage-microshift[Understanding persistent storage].
Stateful applications deployed in containers require persistent storage. {microshift-short} uses a pre-provisioned storage framework called persistent volumes (PV) to allow node administrators to provision persistent storage. The data inside these volumes can exist beyond the lifecycle of an individual pod. Developers can use persistent volume claims (PVCs) to request storage requirements. For persistent storage details, read xref:../microshift_storage/understanding-persistent-storage-microshift.adoc#understanding-persistent-storage-microshift[Understanding persistent storage].

[id="microshift-dynamic-provisioning-overview"]
=== Dynamic storage provisioning

@@ -6,6 +6,6 @@ include::_attributes/attributes-microshift.adoc[]
toc::[]

Storage version migration is used to update existing objects in the cluster from their current version to the latest version. The Kube Storage Version Migrator embedded controller is used in {microshift-short} to migrate resources without having to recreate those resources. Either you or a controller can create a `StorageVersionMigration` custom resource (CR) that requests a migration through the Migrator Controller.
Storage version migration is used to update existing objects in the {microshift-short} node from their current version to the latest version. The Kube Storage Version Migrator embedded controller is used in {microshift-short} to migrate resources without having to re-create those resources. Either you or a controller can create a `StorageVersionMigration` custom resource (CR) that requests a migration through the Migrator Controller.
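
As a hedged check only, assuming the migrator's custom resources are exposed through the API server in the same way as in upstream Kubernetes, you can list requested migrations with the CLI:

[source,terminal]
----
$ oc get storageversionmigrations
----
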
include::modules/microshift-making-storage-migration-request.adoc[leveloffset=+1]

@@ -6,7 +6,7 @@ include::_attributes/attributes-microshift.adoc[]
toc::[]

Managing storage is a distinct problem from managing compute resources. {microshift-short} uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use persistent volume claims (PVCs) to request PV resources without having specific knowledge of the underlying storage infrastructure.
Managing storage is a distinct problem from managing compute resources. {microshift-short} uses the Kubernetes persistent volume (PV) framework to allow {microshift-short} administrators to provision persistent storage for a node. Developers can use persistent volume claims (PVCs) to request PV resources without having specific knowledge of the underlying storage infrastructure.

include::modules/microshift-control-permissions-security-context-constraints.adoc[leveloffset=+1]

@@ -6,11 +6,11 @@ include::_attributes/attributes-microshift.adoc[]
toc::[]

Cluster administrators can use volume snapshots to help protect against data loss by using the supported {microshift-short} logical volume manager storage (LVMS) Container Storage Interface (CSI) provider. Familiarity with xref:../microshift_storage/understanding-persistent-storage-microshift.adoc#persistent-volumes_understanding-persistent-storage-microshift[persistent volumes] is required.
{microshift-short} administrators can use volume snapshots to help protect against data loss by using the supported {microshift-short} logical volume manager storage (LVMS) Container Storage Interface (CSI) provider. Familiarity with xref:../microshift_storage/understanding-persistent-storage-microshift.adoc#persistent-volumes_understanding-persistent-storage-microshift[persistent volumes] is required.

A snapshot represents the state of the storage volume in a cluster at a particular point in time. Volume snapshots can also be used to provision new volumes. Snapshots are created as read-only logical volumes (LVs) located on the same device as the original data.
A snapshot represents the state of the storage volume in a node at a particular point in time. Volume snapshots can also be used to provision new volumes. Snapshots are created as read-only logical volumes (LVs) located on the same device as the original data.

A cluster administrator can complete the following tasks using CSI volume snapshots:
A {microshift-short} administrator can complete the following tasks by using CSI volume snapshots:

* Create a snapshot of an existing persistent volume claim (PVC).
* Back up a volume snapshot to a secure location.

@@ -1,18 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="microshift-getting-cluster-id"]
= Getting your cluster ID
include::_attributes/attributes-microshift.adoc[]
:context: microshift-getting-cluster-id

toc::[]

When providing information to Red{nbsp}Hat Support, it is helpful to provide the unique identifier of your cluster. For {microshift-short}, you can get your cluster ID manually by using the {oc-first} or by retrieving the ID from a file.

[NOTE]
====
A cluster ID is created only after the {microshift-short} service runs for the first time after installation.
====

include::modules/microshift-get-cluster-id-kubesystem.adoc[leveloffset=+1]

include::modules/microshift-get-nonrunning-cluster-id-kubesystem.adoc[leveloffset=+1]

microshift_support/microshift-getting-node-id.adoc (new file, 18 lines)
@@ -0,0 +1,18 @@
:_mod-docs-content-type: ASSEMBLY
[id="microshift-getting-node-id"]
= Getting your node ID
include::_attributes/attributes-microshift.adoc[]
:context: microshift-getting-node-id

toc::[]
When providing information to Red{nbsp}Hat Support, it is helpful to provide the unique identifier of your node. For {microshift-short}, you can get your node ID manually by using the {oc-first} or by retrieving the ID from a file.
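
As an illustrative sketch, assuming the documented convention that the ID is derived from the `kube-system` namespace, the running-service case can be handled with one `oc` query; the file path shown for the offline case is an assumption and may differ by release:

[source,terminal]
----
$ oc get namespaces kube-system -o jsonpath='{.metadata.uid}'
$ sudo cat /var/lib/microshift/cluster-id
----
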
[NOTE]
====
A node ID is created only after the {microshift-short} service runs for the first time after installation.
====

include::modules/microshift-get-node-id-kubesystem.adoc[leveloffset=+1]

include::modules/microshift-get-nonrunning-node-id-kubesystem.adoc[leveloffset=+1]

@@ -1,12 +1,12 @@
:_mod-docs-content-type: ASSEMBLY
[id="microshift-remote-cluster-monitoring"]
= Remote health monitoring with a connected cluster
[id="microshift-remote-node-monitoring"]
= Remote health monitoring with a connected node
include::_attributes/attributes-microshift.adoc[]
:context: microshift-remote-cluster-monitoring
:context: microshift-remote-node-monitoring

toc::[]

Telemetry and configuration data about your cluster can be collected and reported to Red{nbsp}Hat.
Telemetry and configuration data about your node can be collected and reported to Red{nbsp}Hat.

include::modules/microshift-about-remote-health-monitoring.adoc[leveloffset=+1]

@@ -14,7 +14,8 @@ include::modules/microshift-info-collected-telemetry.adoc[leveloffset=+1]
include::modules/microshift-opt-out-telemetry.adoc[leveloffset=+1]

[id="additional-resources_microshift-remote-cluster-monitoring_{context}"]
[role="_additional-resources"]
[id="additional-resources_microshift-remote-node-monitoring_{context}"]
== Additional resources

* xref:../microshift_configuring/microshift-config-snippets.adoc#microshift-config-snippets[Using configuration snippets]
* xref:../microshift_configuring/microshift-config-snippets.adoc#microshift-config-snippets[Using configuration snippets]

@@ -1,11 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="microshift-troubleshoot-cluster"]
= Troubleshooting a cluster
include::_attributes/attributes-microshift.adoc[]
:context: microshift-troubleshoot-cluster

toc::[]

To begin troubleshooting a {microshift-short} cluster, first access the cluster status.

include::modules/microshift-check-cluster-status.adoc[leveloffset=+1]

microshift_troubleshooting/microshift-troubleshoot-node.adoc (new file, 11 lines)
@@ -0,0 +1,11 @@
:_mod-docs-content-type: ASSEMBLY
[id="microshift-troubleshoot-node"]
= Troubleshooting a node
include::_attributes/attributes-microshift.adoc[]
:context: microshift-troubleshoot-node

toc::[]

To begin troubleshooting a {microshift-short} node, first access the node status.

include::modules/microshift-check-node-status.adoc[leveloffset=+1]

@@ -87,7 +87,7 @@ You can update {op-system-ostree} or {op-system-base} and update {microshift-sho
[id="microshift-update-options-edge-to-image_{context}"]
== Migrating {microshift-short} from {op-system-ostree} to {op-system-image}

Starting with {microshift-short} 4.19, you can migrate your {microshift-short} cluster from {op-system-ostree} to {op-system-image} if the versions are compatible. Check compatibilities before beginning a migration. See the {op-system-base} documentation for instructions to migrate your image-based {op-system-base} system.
Starting with {microshift-short} 4.19, you can migrate your {microshift-short} node from {op-system-ostree} to {op-system-image} if the versions are compatible. Check compatibilities before beginning a migration. See the {op-system-base} documentation for instructions to migrate your image-based {op-system-base} system.
//RHEL docs are coming soon

//additional resources for updating RHEL and MicroShift

@@ -1,14 +1,14 @@
// Module included in the following assemblies:
//
// microshift_support/microshift-remote-cluster-monitoring.adoc
// microshift_support/microshift-remote-node-monitoring.adoc

:_mod-docs-content-type: CONCEPT
[id="microshift-about-remote-health-monitoring_{context}"]
= About remote health monitoring with {microshift-short}

Remote health monitoring is conducted in {microshift-short} by the collection of telemetry and configuration data about your cluster that is reported to Red{nbsp}Hat with the Telemeter API. A cluster that reports Telemetry to Red{nbsp}Hat is considered a _connected cluster_.
Remote health monitoring is conducted in {microshift-short} by the collection of telemetry and configuration data about your node that is reported to Red{nbsp}Hat with the Telemeter API. A node that reports Telemetry to Red{nbsp}Hat is considered a _connected node_.

*Telemetry* is the term that Red{nbsp}Hat uses to describe the information being sent to Red{nbsp}Hat by the {microshift-short} Telemeter API. Lightweight attributes are sent from a connected cluster to Red{nbsp}Hat to monitor the health of clusters.
*Telemetry* is the term that Red{nbsp}Hat uses to describe the information being sent to Red{nbsp}Hat by the {microshift-short} Telemeter API. Lightweight attributes are sent from a connected node to Red{nbsp}Hat to monitor the health of a node.

.Telemetry benefits

@@ -18,11 +18,11 @@ Telemetry provides the following benefits:
* *Targeted prioritization of new features and functionality*. The data collected provides information about system capabilities and usage characteristics. With this information, Red{nbsp}Hat can focus on developing the new features and functionality that have the greatest impact for our customers.

Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red{nbsp}Hat. The Telemeter API fetches the metrics values every hour and uploads the data to Red{nbsp}Hat. This stream of data is used by Red{nbsp}Hat to monitor a cluster over time.
Telemetry sends a carefully chosen subset of the node monitoring metrics to Red{nbsp}Hat. The Telemeter API fetches the metrics values every hour and uploads the data to Red{nbsp}Hat. This stream of data is used by Red{nbsp}Hat to monitor nodes over time.

This debugging information is available to Red{nbsp}Hat Support and Engineering teams with the same restrictions as accessing data reported through support cases. All _connected cluster_ information is used by Red{nbsp}Hat to help make {microshift-short} better.
This debugging information is available to Red{nbsp}Hat Support and Engineering teams with the same restrictions as accessing data reported through support cases. All _connected node_ information is used by Red{nbsp}Hat to help make {microshift-short} better.

[NOTE]
====
{microshift-short} does not support Prometheus. To view the Telemetry gathered from your cluster, you must contact Red{nbsp}Hat Support.
{microshift-short} does not support Prometheus. To view the Telemetry gathered from your node, you must contact Red{nbsp}Hat Support.
====

@@ -6,9 -6,8 @@
[id="about-microshift-sos-reports_{context}"]
= About sos reports

The `sos` tool is composed of different plugins that help you gather information from different applications. A {microshift-short}-specific plugin has been added from sos version 4.5.1, and it can gather the following data:
The `sos` tool is composed of different plugins that help you gather information from different applications. A {microshift-short}-specific plugin from sos version 4.5.1 can gather the following data:

* {microshift-short} configuration and version
* YAML output for cluster-wide and system namespaced resources
* YAML output for node and system namespaced resources
* OVN-Kubernetes information
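
A hedged example of invoking the plugin directly, assuming sos 4.5.1 or later is installed on the node (plugin selection with `-o` and unattended runs with `--batch` are standard sos options):

[source,terminal]
----
$ sudo sos report -o microshift --batch
----
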
@@ -6,7 +6,7 @@
[id="microshift-applying-manifests-example_{context}"]
= Using manifests example

This example demonstrates automatic deployment of a BusyBox container using `kustomize` manifests in the `/etc/microshift/manifests` directory.
This example demonstrates automatic deployment of a BusyBox container by using `kustomize` manifests in the `/etc/microshift/manifests` directory.

.Procedure
. Create the BusyBox manifest files by running the following commands:

@@ -8,7 +8,7 @@
{microshift-short} is a single-node container orchestration runtime designed to extend the benefits of using containers for running applications to low-resource edge environments. Because {microshift-short} is primarily a platform for deploying applications, only the APIs and features essential to operating in edge and small form factor computing environments are included.

For example, {microshift-short} has only the following Kubernetes cluster capabilities:
For example, {microshift-short} has only the following Kubernetes node capabilities:

* Networking
* Ingress

@@ -22,27 +22,27 @@ For example, {microshift-short} has only the following Kubernetes cluster capabi
To optimize your deployments, use {microshift-short} with a compatible operating system, such as {op-system-ostree-first}. Using {microshift-short} and {op-system-ostree-first} together forms {op-system-bundle}. Virtual machines are handled by the operating system in {microshift-short} deployments.

.{product-title} as part of {op-system-bundle}.
image::311_RHDevice_Edge_Overview_0223_1.png[<{product-title} is tasked with only the Kubernetes cluster services networking, ingress, storage, helm, with additional Kubernetes functions of orchestration and security, as the following diagram illustrates.>]
image::311_RHDevice_Edge_Overview_0223_1.png[<{product-title} is tasked with only the Kubernetes node services networking, ingress, storage, helm, with additional Kubernetes functions of orchestration and security, as the following diagram illustrates.>]

The following operational differences from {oke} can help you understand where {microshift-short} can be deployed:
The following operational differences from {oke} can help you understand where you can deploy {microshift-short}:

[id="microshift-differences-oke_{context}"]
== Key differences from {oke}

* Devices with {microshift-short} installed are self-managing
* Compatible with RPM-OStree-based systems
* Compatible with `rpm-ostree`-based systems
* Uses only the APIs needed for essential functions, such as security and runtime controls
* Enables a subset of commands from the OpenShift CLI (`oc`) tool
* Enables a subset of commands from the {oc-first} tool
* Does not support workload high availability (HA) or horizontal scalability with the addition of worker nodes

.{product-title} differences from {oke}.
image::311_RHDevice_Edge_Overview_0223_2.png[<{microshift-short} is tasked with only the Kubernetes cluster capabilities of networking, ingress, storage, helm, with the additional Kubernetes functions of orchestration and security, as the following diagram illustrates.>]
image::311_RHDevice_Edge_Overview_0223_2.png[<{microshift-short} is tasked with only the Kubernetes node capabilities of networking, ingress, storage, helm, with the additional Kubernetes functions of orchestration and security, as the following diagram illustrates.>]

The figure "{product-title} differences from {oke}" shows that {oke} has the same cluster capabilities as {product-title}, and adds the following information:
The figure "{product-title} differences from {oke}" shows that {oke} has the same cluster capabilities as a {product-title} node, and adds the following information:

* Install
* Over-the-air updates
* Cluster Operators
* Operators
* Operator Lifecycle Manager
* Monitoring
* Logging

@@ -6,7 +6,7 @@
[id="microshift-audit-logs-config-intro_{context}"]
= About setting limits on audit log files

Controlling the rotation and retention of the {microshift-short} audit log file by using configuration values helps keep the limited storage capacities of far-edge devices from being exceeded. On such devices, logging data accumulation can limit host system or cluster workloads, potentially causing the device stop working. Setting audit log policies can help ensure that critical processing space is continually available.
Controlling the rotation and retention of the {microshift-short} audit log file by using configuration values helps keep the limited storage capacities of far-edge devices from being exceeded. On such devices, logging data accumulation can limit host system or node workloads, potentially causing the device stop working. Setting audit log policies can help ensure that critical processing space is continually available.
The values you set to limit {microshift-short} audit logs enable you to enforce the size, number, and age limits of audit log backups. Field values are processed independently of one another and without prioritization.
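
As a sketch only, the shape of such a policy in `/etc/microshift/config.yaml` might look like the following; the nesting under `apiServer` and the companion field names are assumptions based on the parameters described here, so verify them against your release before use:

[source,terminal]
----
$ sudo tee /etc/microshift/config.yaml >/dev/null <<'EOF'
apiServer:
  auditLog:
    maxFileAge: 7
    maxFiles: 10
    maxFileSize: 200
EOF
$ sudo systemctl restart microshift
----
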
@@ -38,5 +38,5 @@ If you do not specify a value for a field, the default value is used. If you rem
[IMPORTANT]
====
You must configure audit log retention and rotation in {op-system-base-full} for logs that are generated by application pods. These logs print to the console and are saved. Ensure that your log preferences are configured for the {op-system-base} `/var/log/audit/audit.log` file to maintain {microshift-short} cluster health.
You must configure audit log retention and rotation in {op-system-base-full} for logs that are generated by application pods. These logs print to the console and are saved. Ensure that your log preferences are configured for the {op-system-base} `/var/log/audit/audit.log` file to maintain {microshift-short} node health.
====

@@ -6,4 +6,4 @@
[id="microshift-ca-adding-bundle_{context}"]
= Adding a certificate authority bundle

{microshift-short} uses the host trust bundle when clients evaluate server certificates. You can also use a customized security certificate chain to improve the compatibility of your endpoint certificates with clients specific to your deployments. To do this, you can add a certificate authority (CA) bundle with root and intermediate certificates to the {op-system-ostree-first} system-wide trust store.
{microshift-short} uses the host trust bundle when clients evaluate server certificates. You can also use a customized security certificate chain to improve the compatibility of your endpoint certificates with clients specific to your deployments. To do this, you can add a certificate authority (CA) bundle with root and intermediate certificates to the {op-system-ostree-first} system-wide truststore.
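
A minimal sketch of that step on {op-system-base-full}, assuming a PEM bundle named `custom-ca-bundle.pem` (this is the standard `update-ca-trust` workflow, not a {microshift-short}-specific command):

[source,terminal]
----
$ sudo cp custom-ca-bundle.pem /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust
----
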
@@ -1,15 +1,16 @@
//Module included in the following assemblies:
//
//* microshift_troubleshooting/microshift-troubleshoot-cluster
//* microshift_troubleshooting/microshift-troubleshoot-node

:_mod-docs-content-type: PROCEDURE
[id="microshift-check-cluster-status_{context}"]
= Checking the status of a cluster
[id="microshift-check-node-status_{context}"]
= Checking the status of a node

You can check the status of a {microshift-short} cluster or see active pods. Given in the following procedure are three different commands you can use to check cluster status. You can choose to run one, two, or all commands to help you get the information you need to troubleshoot the cluster.
You can check the status of a {microshift-short} node or see active pods. You can choose to run any or all of the following commands to help you get the information you need to troubleshoot the node.

.Procedure
* Check the system status, which returns the cluster status, by running the following command:
* Check the system status, which returns the node status, by running the following command:
+
[source,terminal]
----

@@ -14,5 +14,5 @@ With the OpenShift command-line interface (CLI), the `oc` command, you can deplo
[NOTE]
====
A `kubeconfig` file must exist for the cluster to be accessible. The values are applied from built-in default values or a `config.yaml`, if one was created.
A `kubeconfig` file must exist for the node to be accessible. The values are applied from built-in default values or a `config.yaml`, if you created one.
====

@@ -16,12 +16,12 @@ If you use `bridge` and the `dhcp` IPAM, a DHCP server listening on the bridged
.Prerequisites

* The {microshift-short} Multus CNI is installed.
* The OpenShift CLI (`oc`) is installed.
* The cluster is running.
* The {oc-first} is installed.
* {microshift-short} is running.

.Procedure

. Optional: Verify that the {microshift-short} cluster is running with the Multus CNI by running the following command:
. Optional: Verify that the {microshift-short} node is running with the Multus CNI by running the following command:
+
[source,terminal]
----

@@ -6,7 +6,7 @@
[id="microshift-understanding-generic-device-plugin-con_{context}"]
= Understanding the Generic Device Plugin

The Generic Device Plugin (GDP) is a Kubernetes device plugin that enables applications running in pods to access host devices such as serial ports, cameras, and sound cards securely. This capability is especially important for edge and IoT environments where direct hardware interaction is a common requirement. The GDP integrates with the kubelet to advertise available devices to the cluster and facilitate their allocation to pods without requiring elevated privileges within the container itself.
The Generic Device Plugin (GDP) is a Kubernetes device plugin that enables applications running in pods to access host devices such as serial ports, cameras, and sound cards securely. This capability is especially important for edge and IoT environments where direct hardware interaction is a common requirement. The GDP integrates with the kubelet to advertise available devices to the node and facilitate their allocation to pods without requiring elevated privileges within the container itself.

The GDP is designed to handle devices that are initialized and managed by the operating system and do not require any special initialization procedures or drivers for a pod to use them.

@@ -15,7 +15,7 @@ The following table explains {microshift-short} configuration YAML parameters an
|`advertiseAddress`
|`string`
|A string that specifies the IP address from which the API server is advertised to members of the cluster. The default value is calculated based on the address of the service network.
|A string that specifies the IP address from which the API server is advertised to members of the node. The default value is calculated based on the address of the service network.

|`auditLog.maxFileAge`
|`number`

@@ -71,7 +71,7 @@ The following table explains {microshift-short} configuration YAML parameters an
|`dns.baseDomain`
|`valid domain`
|Base domain of the cluster. All managed DNS records are subdomains of this base.
|Base domain of the node. All managed DNS records are subdomains of this base.

|`etcd.memoryLimitMB`
|`number`

@@ -147,7 +147,7 @@ When unspecified, the value defaults to `mrw`.
|`generic.Device.Plugin.domain`
|`string`
|Specifies the domain prefix with which devices are advertised and present in the cluster. For example, `device.microshift.io/serial`. The default value is `device.microshift.io`.
|Specifies the domain prefix with which devices are advertised and present in the node. For example, `device.microshift.io/serial`. The default value is `device.microshift.io`.

|`generic.Device.Plugin.status`
|`Enabled`, `Disabled`

@@ -162,13 +162,13 @@ The secret must contain the following keys and data:
* `tls.crt`: certificate file contents
* `tls.key`: key file contents

If you do not set one of these values, a wildcard certificate is automatically generated and used. The certificate is valid for the ingress controller `domain` and `subdomains` fields, and the generated CA for the certificate is automatically integrated with the truststore for the cluster.
If you do not set one of these values, a wildcard certificate is automatically generated and used. The certificate is valid for the ingress controller `domain` and `subdomains` fields, and the generated CA for the certificate is automatically integrated with the truststore for the node.

Any certificate in use is automatically integrated in the {microshift-short} OAuth server.

|`ingress.clientTLS`
|`AllowedSubjectPatterns`, `spec.clientTLS.ClientCA`, `spec.clientTLS.clientCertificatePolicy`
|Authenticates client access to the cluster and services. Mutual TLS authentication is enabled when using these settings. If you do not set values for the `spec.clientTLS.clientCertificatePolicy` and `spec.clientTLS.ClientCA` required subfields, client TLS is not enabled.
|Authenticates client access to the node and services. Mutual TLS authentication is enabled when using these settings. If you do not set values for the `spec.clientTLS.clientCertificatePolicy` and `spec.clientTLS.ClientCA` required subfields, client TLS is not enabled.
//are the values in the config.yaml defaults?
//if I don't want to use client TLS, do I leave all three subfields empty?

@@ -17,7 +17,7 @@ Restart {microshift-short} after changing any configuration settings to have the
[id="microshift-yaml-custom-settings_{context}"]
== Separate restarts
Applications and other optional services used with your {microshift-short} cluster might also need to be restarted separately to apply configuration changes throughout the cluster. For example, when making changes to certain networking settings, you must stop and restart service and application pods to apply those changes. See each procedure for the task you are completing for more information.
Applications and other optional services used with your {microshift-short} node might also need to be restarted separately to apply configuration changes throughout the node. For example, when making changes to certain networking settings, you must stop and restart service and application pods to apply those changes. See each procedure for the task you are completing for more information.

[TIP]
====

@@ -8,6 +8,6 @@
At startup, {microshift-short} checks the system-wide `/etc/microshift/` directory for a configuration file named `config.yaml`. If the configuration file does not exist in the directory, built-in default values are used to start the service.

You must use the {microshift-short} configuration file in combination with host and, sometimes, application and service settings. Ensure that you configure each function in tandem when you adjust settings for your {microshift-short} cluster.
You must use the {microshift-short} configuration file in combination with host and, sometimes, application and service settings. Ensure that you configure each function in tandem when you adjust settings for your {microshift-short} node.
For your convenience, a `config.yaml.default` file ready for your inputs is automatically installed.
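For example, one way to start from the provided defaults is to copy the file into place before editing it:

[source,terminal]
----
$ sudo cp /etc/microshift/config.yaml.default /etc/microshift/config.yaml
----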
|
||||
|
||||
@@ -6,7 +6,7 @@
|
||||
[id=microshift-control-permissions-security-context-constraints_{context}]
|
||||
= Control permissions with security context constraints
|
||||
|
||||
You can use security context constraints (SCCs) to control permissions for the pods in your cluster. These permissions determine the actions that a pod can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system.
|
||||
You can use security context constraints (SCCs) to control permissions for the pods in your node. These permissions determine the actions that a pod can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system.
|
||||
|
||||
For more information see link:https://docs.openshift.com/container-platform/4.16/authentication/managing-security-context-constraints.html[Managing security context constraints].
|
||||
|
||||
|
||||
@@ -6,11 +6,11 @@
|
||||
[id="microshift-creating-backups-auto-recovery_{context}"]
|
||||
= Creating backups using the auto-recovery feature
|
||||
|
||||
Use the following procedure to create backups using automatic recovery options.
|
||||
Use the following procedure to create backups by using automatic recovery options.
|
||||
|
||||
[NOTE]
|
||||
====
|
||||
Creating backups require stopping {microshift-short}, so you must determine the best time to stop {microshift-short}.
|
||||
Creating backups requires stopping {microshift-short}. You must decide on the best time to stop {microshift-short}.
|
||||
====
|
||||
|
||||
.Prerequisites
|
||||
|
||||
@@ -6,7 +6,7 @@
|
||||
[id="microshift-custom-ca-certificates-cleaning_{context}"]
|
||||
= Cleaning up and recreating the custom certificates
|
||||
|
||||
To stop the {microshift-short} services, clean up the custom certificates and recreate the custom certificates, use the following steps.
|
||||
To stop the {microshift-short} services, clean up the custom certificates, and re-create the custom certificates, use the following steps.
|
||||
|
||||
.Procedure
|
||||
|
||||
|
||||
@@ -6,7 +6,7 @@
|
||||
[id="microshift-custom-cas_{context}"]
|
||||
= Using custom certificate authorities for the {microshift-short} API server
|
||||
|
||||
When {microshift-short} starts, an internal {microshift-short} cluster certificate authority (CA) issues the default API server certificate. By default, clients outside of the cluster cannot verify the {microshift-short}-issued API server certificate. You can grant secure access and encrypt connections between the {microshift-short} API server and external clients. Replace the default certificate with a custom server certificate issued externally by a CA that clients trust.
|
||||
When {microshift-short} starts, an internal {microshift-short} node certificate authority (CA) issues the default API server certificate. By default, clients outside of the node cannot verify the {microshift-short}-issued API server certificate. You can grant secure access and encrypt connections between the {microshift-short} API server and external clients. Replace the default certificate with a custom server certificate issued externally by a CA that clients trust.
|
||||
|
||||
The following steps illustrate the workflow for customizing the API server certificate configuration in {microshift-short}:
|
||||
|
||||
|
||||
@@ -16,7 +16,7 @@ Externally generated `kubeconfig` files are created in the `/var/lib/microshift/
|
||||
.Prerequisites
|
||||
|
||||
* The {oc-first} is installed.
|
||||
* You have access to the cluster as a user with the cluster administration role.
|
||||
* You have root access to the node.
|
||||
* The certificate authority has issued the custom certificates.
|
||||
* A {microshift-short} `/etc/microshift/config.yaml` configuration file exists.
|
||||
|
||||
|
||||
@@ -18,7 +18,7 @@ The following certificate problems cause {microshift-short} to ignore certificat
|
||||
|Address |Type |Comment
|
||||
|`localhost` |DNS |
|
||||
|`127.0.0.1` |IP Address |
|
||||
|`10.42.0.0` |IP Address |Cluster Network
|
||||
|`10.42.0.0` |IP Address |Node Network
|
||||
|`10.43.0.0/16,10.44.0.0/16` |IP Address |Service Network
|
||||
|169.254.169.2/29 |IP Address |br-ex Network
|
||||
|`kubernetes.default.svc` |DNS |
|
||||
|
||||
@@ -10,8 +10,8 @@ The following example procedure uses the node IP address as the external IP addr
|
||||
|
||||
.Prerequisites
|
||||
|
||||
* The OpenShift CLI (`oc`) is installed.
|
||||
* You installed a cluster on an infrastructure configured with the OVN-Kubernetes network plugin.
|
||||
* The {oc-first} is installed.
|
||||
* You installed a node on an infrastructure configured with the OVN-Kubernetes network plugin.
|
||||
* The `KUBECONFIG` environment variable is set.
|
||||
|
||||
.Procedure
|
||||
@@ -158,7 +158,7 @@ nginx LoadBalancer 10.43.183.104 192.168.1.241 81:32434/TCP 2m
|
||||
|
||||
.Verification
|
||||
|
||||
The following command forms five connections to the example `nginx` application using the external IP address of the `LoadBalancer` service configuration. The result of the command is a list of those server IP addresses.
|
||||
The following command forms five connections to the example `nginx` application by using the external IP address of the `LoadBalancer` service configuration. The result of the command is a list of the server IP addresses that responded.
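A loop of that kind might look like the following sketch, using the external IP address and port from the example service output. The exact command in your procedure can differ, and this sketch assumes the sample application returns its server address in the response body:

[source,terminal]
----
$ for i in 1 2 3 4 5; do curl -s http://192.168.1.241:81 | grep -i "server address"; done
----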
|
||||
|
||||
* Verify that the load balancer sends requests to all the running applications by running the following command:
|
||||
+
|
||||
|
||||
@@ -7,16 +7,11 @@
|
||||
[id="microshift-disabling-lvms-csi-driver_{context}"]
|
||||
= Disabling deployments that run the CSI driver implementations
|
||||
|
||||
Use the following procedure to disable installation of the CSI implementation pods.
|
||||
Use the following procedure to disable installation of the CSI implementation pods. {microshift-short} does not delete CSI driver implementation pods. You must configure {microshift-short} to disable installation of the CSI driver implementation pods during the startup process.
|
||||
|
||||
[IMPORTANT]
|
||||
====
|
||||
This procedure is for users who are defining the configuration file before installing and running {microshift-short}. If {microshift-short} is already started then CSI driver implementation will be running. Users must manually remove it by following the uninstallation instructions.
|
||||
====
|
||||
|
||||
[NOTE]
|
||||
====
|
||||
{microshift-short} will not delete CSI driver implementation pods. You must configure {microshift-short} to disable installation of the CSI driver implementation pods during the startup process.
|
||||
This procedure is for defining the configuration file before installing and running {microshift-short}. If {microshift-short} is already started, then the CSI driver implementation is running. You must manually remove it by following the uninstallation instructions.
|
||||
====
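As an illustration only, a configuration of this kind might resemble the following sketch. The field name and value are assumptions here; use the exact settings given in the procedure that follows:

[source,yaml]
----
# /etc/microshift/config.yaml (sketch; field names are assumptions)
storage:
  driver: none
----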
|
||||
|
||||
.Procedure
|
||||
|
||||
@@ -14,7 +14,7 @@ You can reduce the use of runtime resources such as RAM, CPU, and storage by rem
|
||||
|
||||
[IMPORTANT]
|
||||
====
|
||||
Automated uninstallation is not supported as this can cause orphaning of the provisioned volumes. Without the LVMS CSI driver, the cluster does not have knowledge of the underlying storage interface and cannot perform provisioning and deprovisioning or mounting and unmounting operations.
|
||||
Automated uninstallation is not supported as this can cause orphaning of the provisioned volumes. Without the LVMS CSI driver, the node does not detect the underlying storage interface and cannot perform provisioning and deprovisioning or mounting and unmounting operations.
|
||||
====
|
||||
|
||||
[NOTE]
|
||||
|
||||
@@ -6,13 +6,13 @@
|
||||
[id="microshift-disconnected-host-preparation_{context}"]
|
||||
= Preparing networking for fully disconnected hosts
|
||||
|
||||
Use the procedure that follows to start and run {microshift-short} clusters on devices running fully disconnected operating systems. A {microshift-short} host is considered fully disconnected if it has no external network connectivity.
|
||||
Use the procedure that follows to start and run {microshift-short} nodes on devices running fully disconnected operating systems. A {microshift-short} host is considered fully disconnected if it has no external network connectivity.
|
||||
|
||||
Typically this means that the device does not have an attached network interface controller (NIC) to provide a subnet. These steps can also be completed on a host with a NIC that is removed after setup. You can also automate these steps on a host that does not have a NIC by using the `%post` phase of a Kickstart file.
|
||||
|
||||
[IMPORTANT]
|
||||
====
|
||||
Configuring networking settings for disconnected environments is necessary because {microshift-short} requires a network device to support cluster communication. To meet this requirement, you must configure {microshift-short} networking settings to use the "fake" IP address you assign to the system loopback device during setup.
|
||||
Configuring networking settings for disconnected environments is necessary because {microshift-short} requires a network device to support node communication. To meet this requirement, you must configure {microshift-short} networking settings to use the "fake" IP address you assign to the system loopback device during setup.
|
||||
====
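For orientation, assigning such an address to the loopback device can be done with NetworkManager. The connection name and address in this sketch are assumptions for illustration:

[source,terminal]
----
$ sudo nmcli con add type loopback con-name stable-microshift ifname lo ip4 10.44.0.1/32
----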
|
||||
|
||||
[id="microshift-disconnected-host-procedure-summary_{context}"]
|
||||
@@ -35,4 +35,4 @@ For the changes to take effect::
|
||||
* Disable the default NIC if attached.
|
||||
* Restart the host or device.
|
||||
|
||||
After starting, {microshift-short} runs using the loopback device for within-cluster communication.
|
||||
After starting, {microshift-short} runs using the loopback device for intra-node communication.
|
||||
|
||||
@@ -10,14 +10,14 @@ To configure the networking settings for running {microshift-short} on a fully d
|
||||
|
||||
.Prerequisites
|
||||
* RHEL 9 or newer.
|
||||
* {microshift-short} 4.14 or newer.
|
||||
* {microshift-short} 4.16 or newer.
|
||||
* Access to the host CLI.
|
||||
* A valid IP address chosen to avoid both internal and potential future external IP conflicts when running {microshift-short}.
|
||||
* {microshift-short} networking settings are set to defaults.
|
||||
|
||||
[IMPORTANT]
|
||||
====
|
||||
The following procedure is for use cases in which access to the {microshift-short} cluster is not required after devices are deployed in the field. There is no remote cluster access after the network connection is removed.
|
||||
The following procedure is for use cases in which access to the {microshift-short} node is not required after devices are deployed in the field. There is no remote node access after the network connection is removed.
|
||||
====
|
||||
|
||||
.Procedure
|
||||
@@ -77,7 +77,7 @@ node:
|
||||
EOF
|
||||
----
|
||||
|
||||
. {microshift-short} is now ready to use the loopback device for cluster communications. Finish preparing the device for offline use.
|
||||
. {microshift-short} is now ready to use the loopback device for intra-node communications. Finish preparing the device for offline use.
|
||||
|
||||
.. If the device currently has a NIC attached, disconnect the device from the network.
|
||||
.. Shut down the device and disconnect the NIC.
|
||||
@@ -89,13 +89,13 @@ EOF
|
||||
----
|
||||
$ sudo systemctl reboot <1>
|
||||
----
|
||||
<1> This step restarts the cluster. Wait for the greenboot health check to report the system healthy before implementing verification.
|
||||
<1> This step restarts the node. Wait for the greenboot health check to report the system healthy before implementing verification.
|
||||
|
||||
.Verification
|
||||
|
||||
At this point, network access to the {microshift-short} host has been severed. If you have access to the host terminal, you can use the host CLI to verify that the cluster has started in a stable state.
|
||||
At this point, network access to the {microshift-short} host has been severed. If you have access to the host terminal, you can use the host CLI to verify that the node has started in a stable state.
|
||||
|
||||
. Verify that the {microshift-short} cluster is running by entering the following commands:
|
||||
. Verify that the {microshift-short} node is running by entering the following commands:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
|
||||
@@ -4,10 +4,9 @@
|
||||
|
||||
:_mod-docs-content-type: PROCEDURE
|
||||
[id="microshift-exposed-audit-ports-loadbalancer_{context}"]
|
||||
|
||||
= NodePort and LoadBalancer services
|
||||
|
||||
OVN-Kubernetes opens host ports for `NodePort` and `LoadBalancer` service types. These services add iptables rules that take the ingress traffic from the host port and forwards it to the clusterIP. Logs for the `NodePort` and `LoadBalancer` services are presented in the following examples:
|
||||
OVN-Kubernetes opens host ports for `NodePort` and `LoadBalancer` service types. These services add iptables rules that take the ingress traffic from the host port and forward it to the node IP address. Logs for the `NodePort` and `LoadBalancer` services are presented in the following examples:
|
||||
|
||||
.Procedure
|
||||
|
||||
|
||||
@@ -14,7 +14,7 @@ Using FIPS with {microshift-short} requires enabling the cryptographic module se
|
||||
+
|
||||
[IMPORTANT]
|
||||
====
|
||||
Because FIPS must be enabled before the operating system that your cluster uses starts for the first time, you cannot enable FIPS after you deploy a cluster.
|
||||
Because FIPS must be enabled before the operating system that your node uses starts for the first time, you cannot enable FIPS after you deploy a node.
|
||||
====
|
||||
|
||||
* {microshift-short} uses a FIPS-compatible Golang compiler.
|
||||
|
||||
@@ -6,4 +6,4 @@
|
||||
[id="microshift-firewall-known-issue_{context}"]
|
||||
= Known firewall issue
|
||||
|
||||
* To avoid breaking traffic flows with a firewall reload or restart, execute firewall commands before starting {op-system-base-full}. The CNI driver in {microshift-short} makes use of iptable rules for some traffic flows, such as those using the NodePort service. The iptable rules are generated and inserted by the CNI driver, but are deleted when the firewall reloads or restarts. The absence of the iptable rules breaks traffic flows. If firewall commands have to be executed after {microshift-short} is running, manually restart `ovnkube-master` pod in the `openshift-ovn-kubernetes` namespace to reset the rules controlled by the CNI driver.
|
||||
To avoid breaking traffic flows with a firewall reload or restart, run firewall commands before starting {op-system-base-full}. The CNI driver in {microshift-short} uses iptables rules for some traffic flows, such as those using the NodePort service. The iptables rules are generated and inserted by the CNI driver, but are deleted when the firewall reloads or restarts. The absence of the iptables rules breaks traffic flows. If firewall commands must run after {microshift-short} is started, manually restart the `ovnkube-master` pod in the `openshift-ovn-kubernetes` namespace to reset the rules controlled by the CNI driver.
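A hedged example of such a restart deletes the pod so that it is re-created with its rules reapplied. The `app=ovnkube-master` label selector is an assumption; you can also delete the pod by its full name:

[source,terminal]
----
$ oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-master
----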
|
||||
|
||||
@@ -6,7 +6,7 @@
|
||||
[id="microshift-firewall-req-settings_{context}"]
|
||||
= Required firewall settings
|
||||
|
||||
An IP address range for the cluster network must be enabled during firewall configuration. You can use the default values or customize the IP address range. If you choose to customize the cluster network IP address range from the default `10.42.0.0/16` setting, you must also use the same custom range in the firewall configuration.
|
||||
An IP address range for the node network must be enabled during firewall configuration. You can use the default values or customize the IP address range. If you choose to customize the node network IP address range from the default `10.42.0.0/16` setting, you must also use the same custom range in the firewall configuration.
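For example, allowing the default pod network range through firewalld might look like the following; the `trusted` zone is an assumption based on a typical setup:

[source,terminal]
----
$ sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
$ sudo firewall-cmd --reload
----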
|
||||
|
||||
.Firewall IP address settings
|
||||
[cols="3",options="header"]
|
||||
|
||||
@@ -18,4 +18,4 @@ For `HostPort` pods, the CRI-O config sets up iptables DNAT (Destination Network
|
||||
|
||||
These methods function for clients whether they are on the same host or on a remote host. The iptables rules, which are added by OVN-Kubernetes and CRI-O, attach to the PREROUTING and OUTPUT chains. The local traffic goes through the OUTPUT chain with the interface set to the `lo` type. The DNAT runs before it hits filler rules in the INPUT chain.
|
||||
|
||||
Because the {microshift-short} API server does not run in CRI-O, it is subject to the firewall configurations. You can open port 6443 in the firewall to access the API server in your {microshift-short} cluster.
|
||||
Because the {microshift-short} API server does not run in CRI-O, it is subject to the firewall configurations. You can open port 6443 in the firewall to access the API server in your {microshift-short} node.
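A minimal example of opening that port with firewalld:

[source,terminal]
----
$ sudo firewall-cmd --permanent --add-port=6443/tcp
$ sudo firewall-cmd --reload
----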
|
||||
@@ -12,7 +12,7 @@
|
||||
|
||||
.Procedure
|
||||
|
||||
. Log into the failing host as a root user.
|
||||
. Log in to the failing host as a root user.
|
||||
|
||||
. Perform the debug report creation procedure by running the following command:
|
||||
+
|
||||
|
||||
@@ -1,16 +1,16 @@
|
||||
// Module included in the following assemblies:
|
||||
//
|
||||
// microshift_support/microshift-getting-cluster-id.adoc
|
||||
// microshift_support/microshift-getting-node-id.adoc
|
||||
|
||||
:_mod-docs-content-type: PROCEDURE
|
||||
[id="microshift-get-cluster-id-kubesystem_{context}"]
|
||||
= Getting the cluster ID of a running cluster
|
||||
[id="microshift-get-node-id-kubesystem_{context}"]
|
||||
= Getting the node ID of a running node
|
||||
|
||||
Use either the of the following steps to get the ID of a running cluster.
|
||||
Use either of the following steps to get the ID of a running node.
|
||||
|
||||
.Procedure
|
||||
|
||||
* Get the ID of a running cluster using `oc get` by entering the following command:
|
||||
* Get the ID of a running node using `oc get` by entering the following command:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
@@ -23,7 +23,7 @@ $ oc get namespaces kube-system -o jsonpath={.metadata.uid}
|
||||
7cf13853-68f4-454e-8f5c-1af748cbfb1a
|
||||
----
|
||||
|
||||
* Get the ID of a running cluster by retrieving it from the `cluster-id` file by entering the following command:
|
||||
* Get the ID of a running node by retrieving it from the `cluster-id` file by entering the following command:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
@@ -1,24 +0,0 @@
|
||||
// Module included in the following assemblies:
|
||||
//
|
||||
// microshift_support/microshift-getting-cluster-id.adoc
|
||||
|
||||
:_mod-docs-content-type: PROCEDURE
|
||||
[id="microshift-get-nonrunning-cluster-id-kubesystem_{context}"]
|
||||
= Getting the cluster ID of a stopped cluster
|
||||
|
||||
For a cluster that ran before, but is not running now, you can get the cluster ID from the `cluster-id` file in the `/var/lib/microshift` directory.
|
||||
|
||||
.Procedure
|
||||
|
||||
* Get the ID of a stopped cluster by retrieving it from the `cluster-id` file by entering the following command:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ sudo cat /var/lib/microshift/cluster-id
|
||||
----
|
||||
.Example output
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
7cf13853-68f4-454e-8f5c-1af748cbfb1a
|
||||
----
|
||||
24
modules/microshift-get-nonrunning-node-id-kubesystem.adoc
Normal file
@@ -0,0 +1,24 @@
|
||||
// Module included in the following assemblies:
|
||||
//
|
||||
// microshift_support/microshift-getting-node-id.adoc
|
||||
|
||||
:_mod-docs-content-type: PROCEDURE
|
||||
[id="microshift-get-nonrunning-node-id-kubesystem_{context}"]
|
||||
= Getting the node ID of a stopped node
|
||||
|
||||
For a node that ran before, but is not running now, you can get the node ID from the `cluster-id` file in the `/var/lib/microshift` directory.
|
||||
|
||||
.Procedure
|
||||
|
||||
* Get the ID of a stopped node by retrieving it from the `cluster-id` file by entering the following command:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ sudo cat /var/lib/microshift/cluster-id
|
||||
----
|
||||
.Example output
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
7cf13853-68f4-454e-8f5c-1af748cbfb1a
|
||||
----
|
||||
@@ -6,7 +6,7 @@
|
||||
[id="microshift-http-proxy_{context}"]
|
||||
= Deploying {microshift-short} behind an HTTP or HTTPS proxy
|
||||
|
||||
Deploy a {microshift-short} cluster behind an HTTP or HTTPS proxy when you want to add basic anonymity and security measures to your pods.
|
||||
Deploy a {microshift-short} node behind an HTTP or HTTPS proxy when you want to add basic anonymity and security measures to your pods.
|
||||
|
||||
You must configure the host operating system to use the proxy service with all components initiating HTTP or HTTPS requests when deploying {microshift-short} behind a proxy.
|
||||
|
||||
|
||||
@@ -1,28 +1,28 @@
|
||||
// Module included in the following assemblies:
|
||||
//
|
||||
// * microshift_support/microshift-remote-cluster-monitoring.adoc
|
||||
// * microshift_support/microshift-remote-node-monitoring.adoc
|
||||
|
||||
:_mod-docs-content-type: REFERENCE
|
||||
[id="microshift-info-collected-by-telemetry_{context}"]
|
||||
= Information collected by the {microshift-short} Telemetry API
|
||||
|
||||
All metrics combined are generally under 2KB and not expected to consume cluster resources.
|
||||
All metrics combined are generally under 2 KB and are not expected to consume node resources.
|
||||
|
||||
The following information is collected by Telemetry:
|
||||
|
||||
[id="microshift-telemetry-system-information_{context}"]
|
||||
== System information
|
||||
|
||||
The system information describes the basic configuration of your {microshift-short} cluster and where it is running, for example:
|
||||
The system information describes the basic configuration of your {microshift-short} node and where it is running, for example:
|
||||
|
||||
* Version information, including the {microshift-short} cluster version.
|
||||
* Version information, including the {microshift-short} node version.
|
||||
* The {op-system-base-full} version.
|
||||
* The {op-system-base} deployment type.
|
||||
|
||||
[id="microshift-telemetry-sizing-information_{context}"]
|
||||
== Sizing information
|
||||
|
||||
Sizing information details the cluster capacity, for example:
|
||||
Sizing information details the node capacity, for example:
|
||||
|
||||
* The CPU cores {microshift-short} can use.
|
||||
* Architecture information.
|
||||
@@ -31,7 +31,7 @@ Sizing information details the cluster capacity, for example:
|
||||
[id="microshift-telemetry-usage-information_{context}"]
|
||||
== Usage information
|
||||
|
||||
Usage information outlines what is happening in the cluster, for example:
|
||||
Usage information outlines what is happening in the node, for example:
|
||||
|
||||
* The CPU usage in percentage.
|
||||
* The memory usage in percentage.
|
||||
|
||||
@@ -3,10 +3,10 @@
|
||||
// * microshift_networking/microshift-cni-multus.adoc
|
||||
|
||||
:_mod-docs-content-type: CONCEPT
|
||||
[id="microshift-multus-installing-on-running-cluster_{context}"]
|
||||
= Installing the Multus CNI plugin on a running cluster
|
||||
[id="microshift-multus-installing-on-running-node_{context}"]
|
||||
= Installing the Multus CNI plugin on a running node
|
||||
|
||||
If you want to attach additional networks to a pod for high-performance network configurations, you can install the {microshift-short} Multus RPM package. After installation, a host restart is required to recreate all the pods with the Multus annotation.
|
||||
If you want to attach additional networks to a pod for high-performance network configurations, you can install the {microshift-short} Multus RPM package. After installation, a host restart is required to re-create all the pods with the Multus annotation.
|
||||
|
||||
[IMPORTANT]
|
||||
====
|
||||
@@ -31,7 +31,7 @@ $ sudo dnf install microshift-multus
|
||||
If you create your custom resources (CRs) for additional networks now, you can complete your installation and apply configurations with one restart.
|
||||
====
|
||||
|
||||
. To apply the package manifest to an active cluster, restart the host by running the following command:
|
||||
. To apply the package manifest to an active node, restart the host by running the following command:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
@@ -16,13 +16,13 @@ For most installation types, you must also take the following steps:
|
||||
|
||||
** link:https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/{ocp-version}/html/configuring/using-the-microshift-configuration-file[Using the {microshift-short} configuration file]
|
||||
|
||||
* Decide whether you need to configure storage for the application and tasks you are using in your {microshift-short} cluster, or disable the {microshift-short} storage plug-in completely.
|
||||
* Decide whether you need to configure storage for the application and tasks you are using in your {microshift-short} node, or disable the {microshift-short} storage plugin completely.
|
||||
|
||||
* For more information about creating volume groups and persistent volumes on {op-system-base}, see the following link:
|
||||
|
||||
** link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_logical_volumes/overview-of-logical-volume-management_configuring-and-managing-logical-volumes[Overview of logical volume management]
|
||||
|
||||
* Configure networking settings according to the access needs you plan for your {microshift-short} cluster and applications. Consider whether you want to use single or dual-stack networks, configure a firewall, or configure routes.
|
||||
* Configure networking settings according to the access needs you plan for your {microshift-short} node and applications. Consider whether you want to use single or dual-stack networks, configure a firewall, or configure routes.
|
||||
+
|
||||
[NOTE]
|
||||
====
|
||||
|
||||
@@ -6,10 +6,10 @@
|
||||
[id="microshift-install-rhel-types_{context}"]
|
||||
= {op-system-base} installation types
|
||||
|
||||
Choose the best {op-system-base-full} installation type based on where you want to run your cluster and what your applications need to do. For the best results, apply the following principles:
|
||||
Choose the best {op-system-base-full} installation type based on where you want to run your node and what your applications need to do. For the best results, apply the following principles:
|
||||
|
||||
* For every installation target, you must configure both the operating system and {microshift-short}.
|
||||
* Consider your application storage needs, networking for cluster or application access, and your authentication and security requirements.
|
||||
* Consider your application storage needs, networking for node or application access, and your authentication and security requirements.
|
||||
* Understand the differences between the {op-system-base} installation types, including the support scope of each, and the tools used.
|
||||
|
||||
[id="microshift-get-ready-install-rpm_{context}"]
|
||||
|
||||
@@ -6,7 +6,7 @@
|
||||
[id="microshift-install-rpm-before_{context}"]
|
||||
= Before installing {microshift-short} from an RPM package
|
||||
|
||||
Preparation of the host machine is recommended prior to installing {microshift-short} for memory configuration and FIPS mode.
|
||||
Preparing the host machine for memory configuration and FIPS mode is recommended before you install {microshift-short}.
|
||||
|
||||
[id="microshift-configuring-volume-groups_{context}"]
|
||||
== Configuring volume groups
|
||||
@@ -22,5 +22,5 @@ If your use case requires running {microshift-short} containers in FIPS mode, yo
|
||||
|
||||
[IMPORTANT]
|
||||
====
|
||||
Because FIPS must be enabled before the operating system that your cluster uses starts for the first time, you cannot enable FIPS after you deploy a cluster.
|
||||
Because FIPS must be enabled before the operating system that your node uses starts for the first time, you cannot enable FIPS after you deploy a node.
|
||||
====
|
||||
@@ -6,7 +6,7 @@
|
||||
[id="microshift-installing-with-olm-from-rpm-package_{context}"]
|
||||
= Installing the Operator Lifecycle Manager (OLM) from an RPM package
|
||||
|
||||
When you install {microshift-short}, the Operator Lifecycle Manager (OLM) package is not installed by default. You can install the OLM on your {microshift-short} instance using an RPM package.
|
||||
When you install {microshift-short}, the Operator Lifecycle Manager (OLM) package is not installed by default. You can install the OLM on your {microshift-short} instance by using an RPM package.
|
||||
|
||||
.Procedure
|
||||
|
||||
@@ -17,7 +17,7 @@ When you install {microshift-short}, the Operator Lifecycle Manager (OLM) packag
|
||||
$ sudo dnf install microshift-olm
|
||||
----
|
||||
|
||||
. To apply the manifest from the package to an active cluster, run the following command:
|
||||
. To apply the manifest from the package to an active node, run the following command:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
|
||||
@@ -71,7 +71,7 @@ $ sudo systemctl restart microshift
|
||||
$ cat /var/lib/microshift/resources/kubeadmin/alt-name-1/kubeconfig
|
||||
----
|
||||
|
||||
. Choose the `kubeconfig` file to use that contains the SAN or IP address you want to use to connect your cluster. In this example, the `kubeconfig` containing`alt-name-1` in the `cluster.server` field is the correct file.
|
||||
. Choose the `kubeconfig` file to use that contains the SAN or IP address that you want to use to connect to your node. In this example, the `kubeconfig` containing `alt-name-1` in the `cluster.server` field is the correct file.
|
||||
+
|
||||
.Example contents of an additional `kubeconfig` file
|
||||
[source,yaml]
|
||||
|
||||
@@ -6,7 +6,7 @@
|
||||
[id="microshift-kubeconfig-local-access_{context}"]
|
||||
= Local access kubeconfig file
|
||||
|
||||
The local access `kubeconfig` file is written to `/var/lib/microshift/resources/kubeadmin/kubeconfig`. This `kubeconfig` file provides access to the API server by using `localhost`. Choose this file when you are connecting the cluster locally.
|
||||
The local access `kubeconfig` file is written to `/var/lib/microshift/resources/kubeadmin/kubeconfig`. This `kubeconfig` file provides access to the API server by using `localhost`. Choose this file when you are connecting to the node locally.
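For example, a local session might point `KUBECONFIG` at this file before running commands; the `oc get` query is only illustrative:

[source,terminal]
----
$ export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
$ oc get pods -A
----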
|
||||
|
||||
.Example contents of `kubeconfig` for local access
|
||||
[source,yaml]
|
||||
|
||||
@@ -4,7 +4,7 @@
|
||||
|
||||
:_mod-docs-content-type: CONCEPT
|
||||
[id="kubeconfig-files-overview_{context}"]
|
||||
= Kubeconfig files for configuring cluster access
|
||||
= Kubeconfig files for configuring node access
|
||||
|
||||
The two categories of `kubeconfig` files used in {microshift-short} are local access and remote access. Every time {microshift-short} starts, a set of `kubeconfig` files for local and remote access to the API server are generated. These files are generated in the `/var/lib/microshift/resources/kubeadmin/` directory by using preexisting configuration information.
|
||||
|
||||
|
||||
@@ -6,7 +6,7 @@
|
||||
[id="microshift-low-latency-concept_{context}"]
|
||||
= Lowering latency in {microshift-short} applications
|
||||
|
||||
Latency is defined as the time from an event to the response to that event. You can use low latency configurations and tuning in a {microshift-short} cluster running in an operational or software-defined control system where an edge device has to respond quickly to an external event. You can fully optimize low latency performance by combining {microshift-short} configurations with operating system tuning and workload partitioning.
|
||||
Latency is defined as the time from an event to the response to that event. You can use low latency configurations and tuning in a {microshift-short} node running in an operational or software-defined control system where an edge device has to respond quickly to an external event. You can fully optimize low latency performance by combining {microshift-short} configurations with operating system tuning and workload partitioning.
|
||||
|
||||
[IMPORTANT]
|
||||
====
|
||||
@@ -15,7 +15,7 @@ The CPU set for management applications, such as the {microshift-short} service,
|
||||
|
||||
[id="microshift-low-latency-workflow_{context}"]
|
||||
== Workflow for configuring low latency for {microshift-short} applications
|
||||
To configure low latency for applications running in a {microshift-short} cluster, you must complete the following tasks:
|
||||
To configure low latency for applications running in a {microshift-short} node, you must complete the following tasks:
|
||||
|
||||
Required::
|
||||
* Install the `microshift-low-latency` RPM.
|
||||
|
||||
@@ -6,12 +6,12 @@
|
||||
[id="microshift-low-latency-config-yaml_{context}"]
|
||||
= Configuration kubelet parameters and values in {microshift-short}
|
||||
|
||||
The first step in enabling low latency to a {microshift-short} cluster is to add configurations to the {microshift-short} `config.yaml` file.
|
||||
The first step in enabling low latency for a {microshift-short} node is to add configurations to the {microshift-short} `config.yaml` file.
|
||||
|
||||
.Prerequisites
|
||||
|
||||
* You installed the OpenShift CLI (`oc`).
|
||||
* You have root access to the cluster.
|
||||
* You installed the {oc-first}.
|
||||
* You have root access to the node.
|
||||
* You made a copy of the provided `config.yaml.default` file in the `/etc/microshift/` directory, and renamed it `config.yaml`.
|
||||
|
||||
.Procedure
|
||||
|
||||
@@ -6,7 +6,7 @@
|
||||
[id="microshift-low-latency-install-kernelrt-rhel-edge_{context}"]
|
||||
= Installing the {op-system-rt-kernel} in a {op-system-ostree-first} image
|
||||
|
||||
You can include the real-time kernel in a {op-system-ostree} image deployment using image builder. The following example blueprint sections include references gathered from the previous steps required to configure low latency for a {microshift-short} cluster.
|
||||
You can include the real-time kernel in a {op-system-ostree} image deployment using image builder. The following example blueprint sections include references gathered from the previous steps required to configure low latency for a {microshift-short} node.
|
||||
|
||||
.Prerequisites
|
||||
* You have a Red Hat subscription enabled on the host that includes {op-system-rt-kernel}.
|
||||
|
||||
@@ -6,7 +6,7 @@
|
||||
[id="microshift-low-latency-install-kernelrt_{context}"]
|
||||
= Installing the {op-system-rt-kernel}
|
||||
|
||||
Although the real-time kernel is not necessary for low latency workloads, using the {op-system-rtk} can optimize low latency performance. You can install it on a host using RPM packages, and include it in a {op-system-ostree-first} image deployment.
|
||||
Although the real-time kernel is not necessary for low latency workloads, using the {op-system-rtk} can optimize low latency performance. You can install it on a host by using RPM packages, and include it in a {op-system-ostree-first} image deployment.
|
||||
|
||||
.Prerequisites
|
||||
|
||||
|
||||
@@ -80,11 +80,11 @@ spec:
|
||||
<3> Disables the CPU completely fair scheduler (CFS) quota at the pod run time.
|
||||
<4> Enables or disables C-states for each CPU. Set the value to `disable` to provide the best performance for a high-priority pod.
|
||||
<5> Sets the `cpufreq` governor for each CPU. The `performance` governor is recommended for high-priority workloads.
|
||||
<6> The `runtimeClassName` must match the name of the performance profile configured in the cluster. For example, `microshift-low-latency`.
|
||||
<6> The `runtimeClassName` must match the name of the performance profile configured in the node. For example, `microshift-low-latency`.
|
||||
+
|
||||
[NOTE]
|
||||
====
|
||||
Disable CPU load balancing only when the CPU manager static policy is enabled and for pods with guaranteed QoS that use whole CPUs. Otherwise, disabling CPU load balancing can affect the performance of other containers in the cluster.
|
||||
Disable CPU load balancing only when the CPU manager static policy is enabled and for pods with guaranteed QoS that use whole CPUs. Otherwise, disabling CPU load balancing can affect the performance of other containers in the node.
|
||||
====
|
||||
+
|
||||
[IMPORTANT]
|
||||
|
||||
@@ -8,7 +8,7 @@
|
||||
|
||||
As a {op-system-base-full} system administrator, you can use the TuneD service to optimize the performance profile of {op-system-base} for a variety of use cases. TuneD monitors and optimizes system performance under certain workloads, including latency performance.
|
||||
|
||||
* Use TuneD profiles to tune your system for different use cases, such as deploying a low-latency {microshift-short} cluster.
|
||||
* Use TuneD profiles to tune your system for different use cases, such as deploying a low-latency {microshift-short} node.
|
||||
* You can modify the rules defined for each profile and customize tuning for a specific device.
|
||||
* When you switch to another profile or deactivate TuneD, all changes made to the system settings by the previous profile revert back to their original state.
|
||||
* You can also configure TuneD to react to changes in device usage and adjusts settings to improve performance of active devices and reduce power consumption of inactive devices.
|
||||
@@ -6,11 +6,11 @@
|
||||
[id="microshift-low-latency-tuned-profile_{context}"]
|
||||
= Configuring the {microshift-short} TuneD profile
|
||||
|
||||
Configure a TuneD profile for your host to use low latency with {microshift-short} workloads using the `microshift-baseline-variables.conf` configuration file provided in the {op-system-base-full} `/etc/tuned/` host directory after you install the `microshift-low-latency` RPM package.
|
||||
Configure a TuneD profile for your host to use low latency with {microshift-short} workloads by using the `microshift-baseline-variables.conf` configuration file provided in the {op-system-base-full} `/etc/tuned/` host directory after you install the `microshift-low-latency` RPM package.
|
||||
|
||||
.Prerequisites
|
||||
|
||||
* You have root access to the cluster.
|
||||
* You have root access to the node.
|
||||
* You installed the `microshift-low-latency` RPM package.
|
||||
* Your {op-system-base} host has TuneD installed. See link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/getting-started-with-tuned_monitoring-and-managing-system-status-and-performance#the-location-of-tuned-profiles_getting-started-with-tuned[Getting started with TuneD] (RHEL documentation).
|
||||
|
||||
|
||||
@@ -8,7 +8,7 @@
|
||||
|
||||
To use advanced storage features such as creating volume snapshots or volume cloning, you must perform the following actions:
|
||||
|
||||
* Configure both the logical volume manager storage (LVMS) provider and the cluster.
|
||||
* Configure both the logical volume manager storage (LVMS) provider and the node.
|
||||
* Provision a logical volume manager (LVM) thin-pool on the {op-system-ostree} host.
|
||||
* Attach LVM thin-pools to a volume group.
|
||||
|
||||
@@ -24,7 +24,7 @@ When using thin provisioning, it is important that you monitor the storage pool
|
||||
|
||||
For LVMS to manage thin logical volumes (LVs), a thin-pool `device-class` array must be specified in the `etc/lvmd.yaml` configuration file. Multiple thin-pool device classes are permitted.
|
||||
|
||||
If additional storage pools are configured with device classes, then additional storage classes must also exist to expose the storage pools to users and workloads. To enable dynamic provisioning on a thin-pool, a `StorageClass` resource must be present on the cluster. The `StorageClass` resource specifies the source `device-class` array in the `topolvm.io/device-class` parameter.
|
||||
If additional storage pools are configured with device classes, then additional storage classes must also exist to expose the storage pools to users and workloads. To enable dynamic provisioning on a thin-pool, a `StorageClass` resource must be present on the node. The `StorageClass` resource specifies the source `device-class` array in the `topolvm.io/device-class` parameter.
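For illustration, a `StorageClass` of that kind might look like the following sketch; the resource name and the `thin` device-class value are assumptions:

[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topolvm-thin # assumed name
provisioner: topolvm.io
parameters:
  "topolvm.io/device-class": thin # must match a device-class defined in lvmd.yaml
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
----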
|
||||
|
||||
.Example `lvmd.yaml` file that specifies a single device class for a thin-pool
|
||||
[source, yaml]
|
||||
|
||||
@@ -6,6 +6,6 @@
|
||||
[id="microshift-lvms-deployment_{context}"]
|
||||
= LVMS deployment
|
||||
|
||||
LVMS is automatically deployed on to the cluster in the `openshift-storage` namespace after {microshift-short} starts.
|
||||
LVMS is automatically deployed to the node in the `openshift-storage` namespace after {microshift-short} starts.
|
||||
|
||||
LVMS uses `StorageCapacity` tracking to ensure that pods with an LVMS PVC are not scheduled if the requested storage is greater than the free storage of the volume group. For more information about `StorageCapacity` tracking, read link:https://kubernetes.io/docs/concepts/storage/storage-capacity/[Storage Capacity].
|
||||
@@ -6,16 +6,16 @@
|
||||
[id="microshift-manifests-overview_{context}"]
|
||||
= How Kustomize works with manifests to deploy applications
|
||||
|
||||
The `kustomize` configuration management tool is integrated with {microshift-short}. You can use Kustomize and the {oc-first} together to apply customizations to your application manifests and deploy those applications to a {microshift-short} cluster.
|
||||
The `kustomize` configuration management tool is integrated with {microshift-short}. You can use Kustomize and the {oc-first} together to apply customizations to your application manifests and deploy those applications to a {microshift-short} node.
|
||||
|
||||
* A `kustomization.yaml` file is a specification of resources plus customizations.
|
||||
* Kustomize uses a `kustomization.yaml` file to load a resource, such as an application, then applies any changes you want to that application manifest and produces a copy of the manifest with the changes overlaid.
|
||||
* Using a manifest copy with an overlay keeps the original configuration file for your application intact, while enabling you to deploy iterations and customizations of your applications efficiently.
|
||||
* You can then deploy the application in your {microshift-short} cluster with an `oc` command.
|
||||
* You can then deploy the application in your {microshift-short} node with an `oc` command.
|
||||
|
||||
[NOTE]
|
||||
====
|
||||
At each system start, {microshift-short} deletes the manifests found in the `delete` subdirectories and then applies the manifest files found in the manifest directories to the cluster.
|
||||
At each system start, {microshift-short} deletes the manifests found in the `delete` subdirectories and then applies the manifest files found in the manifest directories to the node.
|
||||
====
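As a minimal sketch, a `kustomization.yaml` placed in one of the manifest directories might look like the following; the resource file name and namespace are assumptions:

[source,yaml]
----
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-app # assumed namespace
resources:
  - deployment.yaml # assumed application manifest in the same directory
----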
|
||||
|
||||
[id="how-microshift-uses-manifests"]
|
||||
@@ -27,7 +27,7 @@ At every start, {microshift-short} searches the following manifest directories f
|
||||
* `/usr/lib/microshift/`
|
||||
* `/usr/lib/microshift/manifests.d/++*++`
|
||||
|
||||
{microshift-short} automatically runs the equivalent of the `kubectl apply -k` command to apply the manifests to the cluster if any of the following file types exists in the searched directories:
|
||||
{microshift-short} automatically runs the equivalent of the `kubectl apply -k` command to apply the manifests to the node if any of the following file types exists in the searched directories:
|
||||
|
||||
* `kustomization.yaml`
|
||||
* `kustomization.yml`
|
||||
|
||||
@@ -6,7 +6,7 @@
|
||||
[id="microshift-multus-intro_{context}"]
|
||||
= Secondary networks in {microshift-short}
|
||||
|
||||
During cluster installation, the _default_ pod network is configured with default values unless you customize the configuration. The default network handles all ordinary network traffic for the cluster. Using the {microshift-short} Multus CNI plugin, you can add additional interfaces to pods from other networks. This gives you flexibility when you configure pods that deliver network functionality, such as switching or routing.
|
||||
During node installation, the _default_ pod network is configured with default values unless you customize the configuration. The default network handles all ordinary network traffic for the node. Using the {microshift-short} Multus CNI plugin, you can add additional interfaces to pods from other networks. This gives you flexibility when you configure pods that deliver network functionality, such as switching or routing.
|
||||
|
||||
[id="microshift-supported-additional-networks_{context}"]
|
||||
== Supported secondary networks for network isolation
|
||||
@@ -43,7 +43,7 @@ The Multus CNI plugin is deployed when the {microshift-short} service starts up.
|
||||
[id="microshift-additional-network-how-implemented_{context}"]
|
||||
== How secondary networks are implemented
|
||||
|
||||
All of the pods in the cluster still use the cluster-wide default network to maintain connectivity across the cluster. Every pod has an `eth0` interface that is attached to the cluster-wide pod network.
|
||||
All of the pods in the node still use the node-wide default network to maintain connectivity across the node. Every pod has an `eth0` interface that is attached to the node-wide pod network.
|
||||
|
||||
* You can view the interfaces for a pod by using the `oc get pod <pod_name> -o=jsonpath='{ .metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status }'` command.
|
||||
* If you add secondary network interfaces that use the {microshift-short} Multus CNI, they are named `net1`, `net2`, ..., `netN`.
|
||||
|
||||
@@ -6,11 +6,11 @@
|
||||
[id="microshift-yaml-advertiseAddress_{context}"]
|
||||
= Configuring the advertise address network flag
|
||||
|
||||
The `apiserver.advertiseAddress` flag specifies the IP address on which to advertise the API server to members of the cluster. This address must be reachable by the cluster. You can set a custom IP address here, but you must also add the IP address to a host interface. Customizing this parameter preempts {microshift-short} from adding a default IP address to the `br-ex` network interface.
|
||||
The `apiserver.advertiseAddress` flag specifies the IP address on which to advertise the API server to members of the node. This address must be reachable by the node. You can set a custom IP address here, but you must also add the IP address to a host interface. Customizing this parameter preempts {microshift-short} from adding a default IP address to the `br-ex` network interface.
|
||||
|
||||
[IMPORTANT]
|
||||
====
|
||||
If you customize the `advertiseAddress` IP address, make sure it is reachable by the cluster when {microshift-short} starts by adding the IP address to a host interface.
|
||||
If you customize the `advertiseAddress` IP address, make sure it is reachable by the node when {microshift-short} starts by adding the IP address to a host interface.
|
||||
====
|
||||
|
||||
If unset, the default value is set to the next immediate subnet after the service network. For example, when the service network is `10.43.0.0/16`, the `advertiseAddress` is set to `10.44.0.0/32`.
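Going by the parameter name and default described above, a customized setting in `config.yaml` might be sketched as follows; the key nesting and the address value are assumptions for illustration:

[source,yaml]
----
apiServer:
  advertiseAddress: 10.44.0.1 # assumed custom address; must also be added to a host interface
----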
|
||||
|
||||
@@ -11,8 +11,9 @@ A route allows you to host your application at a public URL. It can either be se
|
||||
The following procedure describes how to create a simple HTTP-based route to a web application, using the `hello-microshift` application as an example.
|
||||
|
||||
.Prerequisites
|
||||
* You installed the OpenShift CLI (`oc`).
|
||||
* You have access to your {microshift-short} cluster.
|
||||
|
||||
* You installed the {oc-first}.
|
||||
* You have access to your {microshift-short} node.
|
||||
* You have a web application that exposes a port and a TCP endpoint listening for traffic on the port.
|
||||
|
||||
.Procedure
|
||||
|
||||
@@ -6,7 +6,7 @@
|
||||
[id="microshift-nw-enforcing-hsts-per-domain_{context}"]
|
||||
= Enforcing HTTP Strict Transport Security per-domain
|
||||
|
||||
You can configure a route with a compliant HSTS policy annotation. To handle upgraded clusters with non-compliant HSTS routes, you can update the manifests at the source and apply the updates.
|
||||
You can configure a route with a compliant HSTS policy annotation. To handle an upgraded node with noncompliant HSTS routes, you can update the manifests at the source and apply the updates.
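For example, a compliant HSTS policy annotation on a route might be sketched as follows; the route name and the exact `max-age` value are assumptions:

[source,yaml]
----
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-microshift # assumed route name
  annotations:
    haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload
----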
|
||||
|
||||
You cannot use `oc expose route` or `oc create route` commands to add a route in a domain that enforces HSTS because the API for these commands does not accept annotations.
|
||||
|
||||
@@ -16,12 +16,12 @@ HSTS cannot be applied to insecure, or non-TLS, routes.
|
||||
====
|
||||
|
||||
.Prerequisites
|
||||
* You have root access to the cluster.
|
||||
* You have root access to the node.
|
||||
* You installed the {oc-first}.
|
||||
|
||||
.Procedure
|
||||
|
||||
* Apply HSTS to all routes in the cluster by running the following `oc annotate command`:
|
||||
* Apply HSTS to all routes in the node by running the following `oc annotate` command:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
|
||||
@@ -6,16 +6,16 @@
|
||||
[id="microshift-intro-ipv6_{context}"]
|
||||
= IPv6 networking with {microshift-short}
|
||||
|
||||
The {microshift-short} service defaults to IPv4 address families cluster-wide. However, IPv6 single-stack and IPv4/IPv6 dual-stack networking is available on supported platforms.
|
||||
The {microshift-short} service defaults to IPv4 address families node-wide. However, IPv6 single-stack and IPv4/IPv6 dual-stack networking is available on supported platforms.
|
||||
|
||||
* When you set the values for IPv6 in the {microshift-short} configuration file and restart the service, settings managed by the OVN-Kubernetes network plugin are updated automatically.
|
||||
* After migrating to dual-stack networking, both new and existing pods have dual-stack networking enabled.
|
||||
* If you require cluster-wide IPv6 access, such as for the control plane and other services, use the following configuration examples. The {microshift-short} Multus Container Network Interface (CNI) plugin can enable IPv6 for pods.
|
||||
* For dual-stack networking, each {microshift-short} cluster network and service network supports up to two values in the cluster and service network configuration parameters.
|
||||
* If you require node-wide IPv6 access, such as for the control plane and other services, use the following configuration examples. The {microshift-short} Multus Container Network Interface (CNI) plugin can enable IPv6 for pods.
|
||||
* For dual-stack networking, each {microshift-short} node network and service network supports up to two values in the node and service network configuration parameters.
|
||||
|
||||
[IMPORTANT]
|
||||
====
|
||||
Plan for IPv6 before starting {microshift-short} for the first time. Switching a cluster to and from different IP families is not supported unless you are migrating a cluster from default single-stack to dual-stack networking.
|
||||
Plan for IPv6 before starting {microshift-short} for the first time. Switching a node to and from different IP families is not supported unless you are migrating a node from default single-stack to dual-stack networking.
|
||||
|
||||
If you configure your networking for either IPv6 single stack or IPv4/IPv6 dual stack, you must restart application pods and services. Otherwise pods and services remain configured with the default IP family.
|
||||
====
|
||||
|
||||
@@ -6,10 +6,10 @@
|
||||
[id="microshift-configuring-ipv6-dual-stack-config_{context}"]
|
||||
= Configuring IPv6 dual-stack networking before {microshift-short} starts
|
||||
|
||||
You can configure your {microshift-short} cluster to run on dual-stack networking that supports IPv4 and IPv6 address families by using the configuration file before starting the service.
|
||||
You can configure your {microshift-short} node to run on dual-stack networking that supports IPv4 and IPv6 address families by using the configuration file before starting the service.
|
||||
|
||||
* The first IP family in the configuration is the primary IP stack in the cluster.
|
||||
* After the cluster is running with dual-stack networking, enable application pods and add-on services for dual-stack by restarting them.
|
||||
* The first IP family in the configuration is the primary IP stack in the node.
|
||||
* After the node is running with dual-stack networking, enable application pods and add-on services for dual-stack by restarting them.
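A minimal sketch of such a dual-stack configuration follows; the IPv6 CIDRs are assumptions, and the exact parameter names are those used by your {microshift-short} version:

[source,yaml]
----
network:
  clusterNetwork:
    - 10.42.0.0/16
    - fd01::/48 # assumed IPv6 range
  serviceNetwork:
    - 10.43.0.0/16
    - fd02::/112 # assumed IPv6 range
----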
|
||||
|
||||
[IMPORTANT]
|
||||
====
|
||||
@@ -23,9 +23,9 @@ When using dual-stack networking where IPv6 is required, you cannot use IPv4-map
|
||||
|
||||
.Prerequisites
|
||||
|
||||
* You installed the OpenShift CLI (`oc`).
|
||||
* You have root access to the cluster.
|
||||
* Your cluster uses the OVN-Kubernetes network plugin.
|
||||
* You installed the {oc-first}.
|
||||
* You have root access to the node.
|
||||
* Your node uses the OVN-Kubernetes network plugin.
|
||||
* The host has both IPv4 and IPv6 addresses and routes, including a default for each.
|
||||
* The host has at least two L3 networks, IPv4 and IPv6.
|
||||
|
||||
|
||||
@@ -4,13 +4,13 @@
|
||||
|
||||
:_mod-docs-content-type: PROCEDURE
|
||||
[id="microshift-nw-ipv6-dual-stack-migrating-config_{context}"]
|
||||
= Migrating a {microshift-short} cluster to IPv6 dual-stack networking
|
||||
= Migrating a {microshift-short} node to IPv6 dual-stack networking
|
||||
|
||||
You can convert a single-stack cluster to dual-stack cluster networking that supports IPv4 and IPv6 address families by setting two entries in the service and cluster network parameters in the {microshift-short} configuration file.
|
||||
You can convert a single-stack node to dual-stack node networking that supports IPv4 and IPv6 address families by setting two entries in the service and node network parameters in the {microshift-short} configuration file.
|
||||
|
||||
* The first IP family in the configuration is the primary IP stack in the cluster.
|
||||
* The first IP family in the configuration is the primary IP stack in the node.
|
||||
* {microshift-short} system pods and services are automatically updated upon {microshift-short} restart.
|
||||
* After the cluster is migrated to dual-stack networking and has restarted, enable workload pods and services for dual-stack networking by restarting them.
|
||||
* After the node is migrated to dual-stack networking and has restarted, enable workload pods and services for dual-stack networking by restarting them.
|
||||
|
||||
[IMPORTANT]
|
||||
====
|
||||
@@ -24,9 +24,9 @@ When using dual-stack networking where IPv6 is required, you cannot use IPv4-map
|
||||
|
||||
.Prerequisites
|
||||
|
||||
* You installed the OpenShift CLI (`oc`).
|
||||
* You have root access to the cluster.
|
||||
* Your cluster uses the OVN-Kubernetes network plugin.
|
||||
* You installed the {oc-first}.
|
||||
* You have root access to the node.
|
||||
* Your node uses the OVN-Kubernetes network plugin.
|
||||
* The host has both IPv4 and IPv6 addresses and routes, including a default for each.
|
||||
* The host has at least two L3 networks, IPv4 and IPv6.
|
||||
|
||||
@@ -141,8 +141,8 @@ $ oc get pod -n openshift-ingress router-default-5b75594b4-228z7 -o jsonpath='{.
----
[{"ip":"10.42.0.3"},{"ip":"fd01:0:0:1::3"}]
----
+
[NOTE]
====
To return to single-stack networking, you can remove the second entry from the networks and return to the single stack that was configured before migrating to dual-stack.
====
====

@@ -10,9 +10,9 @@ You can use the IPv6 network protocol by updating the {microshift-short} service
.Prerequisites

* You installed the OpenShift CLI (`oc`).
* You have root access to the cluster.
* Your cluster uses the OVN-Kubernetes network plugin.
* You installed the {oc-first}.
* You have root access to the node.
* Your node uses the OVN-Kubernetes network plugin.
* The host has an IPv6 address and IPv6 routes, including the default.
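
Before the procedure, the following is a minimal sketch of what single-stack IPv6 settings can look like in the {microshift-short} configuration file; the CIDR values are assumptions and must match the IPv6 routing available on your host.

[source,yaml]
----
# Minimal single-stack IPv6 sketch (CIDRs are assumptions).
network:
  clusterNetwork:
    - fd01::/48      # IPv6-only pod network
  serviceNetwork:
    - fd02::/112     # IPv6-only service network
----
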
.Procedure
@@ -6,14 +6,14 @@
[id="microshift-nw-multus-add-pod_{context}"]
= Adding a pod to an additional network

You can add a pod to an additional network. At the time a pod is created, additional networks are attached to it. The pod continues to send normal cluster-related network traffic over the default network.
You can add a pod to an additional network. At the time a pod is created, additional networks are attached to it. The pod continues to send normal node-related network traffic over the default network.

If you want to attach additional networks to a pod that is already running, you must restart the pod.

.Prerequisites

* The {oc-first} is installed.
* The cluster is running.
* The node is running.
* A network defined by a `NetworkAttachmentDefinition` object that you want to attach the pod to exists.
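
As a minimal sketch of how a pod requests the additional network at creation time, the following manifest uses the standard Multus annotation; the network name `bridge-conf`, the pod name, and the image are assumptions for illustration and must match resources in your environment.

[source,yaml]
----
# Minimal sketch: attach a pod to an additional network named "bridge-conf" (name is an assumption).
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: bridge-conf   # must match an existing NetworkAttachmentDefinition
spec:
  containers:
  - name: sample
    image: registry.access.redhat.com/ubi9/ubi-minimal   # placeholder image
    command: ["sleep", "infinity"]
----
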
.Procedure
@@ -6,16 +6,13 @@
[id="microshift-nw-network-policy-intro_{context}"]
= How network policy works in {microshift-short}

In a cluster using the default OVN-Kubernetes Container Network Interface (CNI) plugin for {microshift-short}, network isolation is controlled by both firewalld, which is configured on the host, and by `NetworkPolicy` objects created within {microshift-short}. Simultaneous use of firewalld and `NetworkPolicy` is supported.
In a node that is using the default OVN-Kubernetes Container Network Interface (CNI) plugin for {microshift-short}, network isolation is controlled by both firewalld, which is configured on the host, and by `NetworkPolicy` objects created within {microshift-short}. Simultaneous use of firewalld and `NetworkPolicy` is supported.

* Network policies work only within boundaries of OVN-Kubernetes-controlled traffic, so they can apply to every situation except for `hostPort/hostNetwork` enabled pods.

* Firewalld settings also do not apply to `hostPort/hostNetwork` enabled pods.

[NOTE]
====
Firewalld rules run before any `NetworkPolicy` is enforced.
====
* Firewalld rules run before any `NetworkPolicy` is enforced.

[WARNING]
====
@@ -24,7 +21,7 @@ Network policy does not apply to the host network namespace. Pods with host netw
Network policies cannot block traffic from localhost.
====

By default, all pods in a {microshift-short} node are accessible from other pods and network endpoints. To isolate one or more pods in a cluster, you can create `NetworkPolicy` objects to indicate allowed incoming connections. You can create and delete `NetworkPolicy` objects.
By default, all pods in a {microshift-short} node are accessible from other pods and network endpoints. To isolate one or more pods in a node, you can create `NetworkPolicy` objects to indicate allowed incoming connections. You can create and delete `NetworkPolicy` objects.

If a pod is matched by selectors in one or more `NetworkPolicy` objects, then the pod accepts only connections that are allowed by at least one of those `NetworkPolicy` objects. A pod that is not selected by any `NetworkPolicy` objects is fully accessible.
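
As a minimal sketch of the isolation model described above, the following `NetworkPolicy` selects every pod in one namespace and allows ingress only from other pods in that same namespace; the policy and namespace names are assumptions for illustration.

[source,yaml]
----
# Minimal sketch: allow ingress only from pods in the same namespace (names are assumptions).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: example-namespace
spec:
  podSelector: {}        # applies to all pods in the namespace
  ingress:
  - from:
    - podSelector: {}    # any pod in the same namespace
----
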
@@ -27,7 +27,7 @@ A virtual switch named `<node-name>`. The OVN node switch is named according to
OVN cluster router::
A virtual router named `ovn_cluster_router`, also known as the distributed router.
** In this example, the cluster network is `10.42.0.0/16`.
** In this example, the node network is `10.42.0.0/16`.

OVN join switch::
A virtual switch named `join`.

@@ -4,30 +4,34 @@
:_mod-docs-content-type: CONCEPT
[id="microshift-oc-apis-errors_{context}"]
= oc command errors in {product-title}
= oc command errors in {microshift-short}

Not all {oc-first} commands are relevant for {microshift-short} deployments. When you use `oc` to make a request call against an unsupported API, the `oc` binary usually generates an error message about a resource that cannot be found.

.Example output

For example, when the following `new-project` command is run:

* For example, when you run the following `new-project` command:
+
[source,terminal]
----
$ oc new-project test
----
+
The following error message can be generated:
+
[source,terminal]
----
Error from server (NotFound): the server could not find the requested resource (get projectrequests.project.openshift.io)
----

And when the `get projects` command is run, another error can be generated as follows:

* When you run the `get projects` command, another error can be generated as follows:
+
[source,terminal]
----
$ oc get projects
----
+
The following error message can be generated:
+
[source,terminal]
----
error: the server doesn't have a resource type "projects"
----
----

@@ -8,4 +8,4 @@
You can use the oc-mirror OpenShift CLI (oc) plugin with {microshift-short} to filter and delete images from Operator catalogs. You can then mirror the filtered catalog contents to a mirror registry or use the container images in disconnected or offline deployments.

The procedure to mirror content from Red Hat-hosted registries connected to the internet to a disconnected image registry is the same, independent of the registry you select. After you mirror the contents of your catalog, configure each cluster to retrieve this content from your mirror registry.
The procedure to mirror content from Red Hat-hosted registries connected to the internet to a disconnected image registry is the same, independent of the registry you select. After you mirror the contents of your catalog, configure each node to retrieve this content from your mirror registry.
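
For orientation, the following is a minimal sketch of an `ImageSetConfiguration` that filters an Operator catalog before mirroring with oc-mirror. The API version shown assumes the oc-mirror plugin v2, and the catalog tag and package name are placeholders; check the documented configuration for your plugin version before using it.

[source,yaml]
----
# Minimal sketch (placeholders): filter one Operator package from a Red Hat catalog for mirroring.
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v2alpha1   # assumes oc-mirror plugin v2; plugin v1 uses mirror.openshift.io/v1alpha2
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18   # replace with the catalog index for your version
    packages:
    - name: example-operator   # hypothetical package name; replace with the Operator you need
----
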