diff --git a/modules/accessing-running-pods.adoc b/modules/accessing-running-pods.adoc index 6f98fef840..78d4ca6085 100644 --- a/modules/accessing-running-pods.adoc +++ b/modules/accessing-running-pods.adoc @@ -3,9 +3,9 @@ // * support/troubleshooting/investigating-pod-issues.adoc [id="accessing-running-pods_{context}"] -= Accessing running Pods += Accessing running pods -You can review running Pods dynamically by opening a shell inside a Pod or by gaining network access through port forwarding. +You can review running pods dynamically by opening a shell inside a pod or by gaining network access through port forwarding. .Prerequisites @@ -15,29 +15,29 @@ You can review running Pods dynamically by opening a shell inside a Pod or by ga .Procedure -. Switch into the project that contains the Pod you would like to access. This is necessary because the `oc rsh` command does not accept the `-n` namespace option: +. Switch into the project that contains the pod you would like to access. This is necessary because the `oc rsh` command does not accept the `-n` namespace option: + [source,terminal] ---- $ oc project ---- -. Start a remote shell into a Pod: +. Start a remote shell into a pod: + [source,terminal] ---- $ oc rsh <1> ---- -<1> If a Pod has multiple containers, `oc rsh` defaults to the first container unless `-c ` is specified. +<1> If a pod has multiple containers, `oc rsh` defaults to the first container unless `-c ` is specified. -. Start a remote shell into a specific container within a Pod: +. Start a remote shell into a specific container within a pod: + [source,terminal] ---- $ oc rsh -c pod/ ---- -. Create a port forwarding session to a port on a Pod: +. Create a port forwarding session to a port on a pod: + [source,terminal] ---- diff --git a/modules/apiserversource-kn.adoc b/modules/apiserversource-kn.adoc index b094e64959..c8cf06244f 100644 --- a/modules/apiserversource-kn.adoc +++ b/modules/apiserversource-kn.adoc @@ -86,7 +86,7 @@ Conditions: You can verify that the Kubernetes events were sent to Knative by looking at the message dumper function logs. -. Get the Pods: +. Get the pods: + [source,terminal] @@ -94,7 +94,7 @@ You can verify that the Kubernetes events were sent to Knative by looking at the $ oc get pods ---- -. View the message dumper function logs for the Pods: +. View the message dumper function logs for the pods: + [source,terminal] diff --git a/modules/apiserversource-yaml.adoc b/modules/apiserversource-yaml.adoc index e1f70b1103..93147da643 100644 --- a/modules/apiserversource-yaml.adoc +++ b/modules/apiserversource-yaml.adoc @@ -216,7 +216,7 @@ spec: To verify that the Kubernetes events were sent to Knative, you can look at the message dumper function logs. -. Get the Pods: +. Get the pods: + [source,terminal] @@ -224,7 +224,7 @@ To verify that the Kubernetes events were sent to Knative, you can look at the m $ oc get pods ---- -. View the message dumper function logs for the Pods: +. View the message dumper function logs for the pods: + [source,terminal] diff --git a/modules/application-health-about.adoc b/modules/application-health-about.adoc index e9f2ad7ae0..2459bc81e1 100644 --- a/modules/application-health-about.adoc +++ b/modules/application-health-about.adoc @@ -16,7 +16,7 @@ container has its IP address removed from the endpoints of all services. A readiness probe can be used to signal to the endpoints controller that even though a container is running, it should not receive any traffic from a proxy. 
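A minimal HTTP readiness probe in a container specification might look like the following sketch; the endpoint path, port, and timing values are illustrative assumptions rather than recommended settings:

[source,yaml]
----
readinessProbe:
  httpGet:
    path: /healthz   # hypothetical readiness endpoint exposed by the application
    port: 8080
  initialDelaySeconds: 5   # wait before the first check after the container starts
  periodSeconds: 10        # how often the kubelet repeats the check
----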
-For example, a Readiness check can control which Pods are used. When a Pod is not ready, +For example, a Readiness check can control which pods are used. When a pod is not ready, it is removed. Liveness Probe:: diff --git a/modules/applications-create-using-cli-modify.adoc b/modules/applications-create-using-cli-modify.adoc index b4ff44809e..c6c6be3487 100644 --- a/modules/applications-create-using-cli-modify.adoc +++ b/modules/applications-create-using-cli-modify.adoc @@ -214,9 +214,9 @@ repository. If this is not the intent, specify the required builder image for the source using the `~` separator. ==== -== Grouping images and source in a single Pod +== Grouping images and source in a single pod -The `new-app` command allows deploying multiple images together in a single Pod. +The `new-app` command allows deploying multiple images together in a single pod. In order to specify which images to group together, use the `+` separator. The `--group` command line argument can also be used to specify the images that should be grouped together. To group the image built from a source repository with diff --git a/modules/architecture-kubernetes-introduction.adoc b/modules/architecture-kubernetes-introduction.adoc index 8fe322da61..06756c169c 100644 --- a/modules/architecture-kubernetes-introduction.adoc +++ b/modules/architecture-kubernetes-introduction.adoc @@ -16,14 +16,14 @@ concept of Kubernetes is fairly simple: * Start with one or more worker nodes to run the container workloads. * Manage the deployment of those workloads from one or more master nodes. -* Wrap containers in a deployment unit called a Pod. Using Pods provides extra +* Wrap containers in a deployment unit called a pod. Using pods provides extra metadata with the container and offers the ability to group several containers in a single deployment entity. * Create special kinds of assets. For example, services are represented by a -set of Pods and a policy that defines how they are accessed. This policy +set of pods and a policy that defines how they are accessed. This policy allows containers to connect to the services that they need even if they do not have the specific IP addresses for the services. Replication controllers are -another special asset that indicates how many Pod Replicas are required to run +another special asset that indicates how many pod replicas are required to run at a time. You can use this capability to automatically scale your application to adapt to its current demand. diff --git a/modules/architecture-machine-roles.adoc b/modules/architecture-machine-roles.adoc index e78aefee1a..1f2ab1cb32 100644 --- a/modules/architecture-machine-roles.adoc +++ b/modules/architecture-machine-roles.adoc @@ -21,7 +21,7 @@ explained in the cluster installation documentation. In a Kubernetes cluster, the worker nodes are where the actual workloads requested by Kubernetes users run and are managed. The worker nodes advertise their capacity and the scheduler, which is part of the master services, -determines on which nodes to start containers and Pods. Important services run +determines on which nodes to start containers and pods. Important services run on each worker node, including CRI-O, which is the container engine, Kubelet, which is the service that accepts and fulfills requests for running and stopping container workloads, and a service proxy, which manages communication @@ -54,7 +54,7 @@ all master machines and breaking your cluster. ==== Use three master nodes. 
Although you can theoretically use any number of master nodes, the number is constrained by etcd quorum due to -master static Pods and etcd static Pods working on the same hosts. +master static pods and etcd static pods working on the same hosts. ==== Services that fall under the Kubernetes category on the master include the @@ -65,7 +65,7 @@ Kubernetes API server, etcd, Kubernetes controller manager, and HAProxy services |=== |Component |Description |Kubernetes API server -|The Kubernetes API server validates and configures the data for Pods, Services, +|The Kubernetes API server validates and configures the data for pods, Services, and replication controllers. It also provides a focal point for the shared state of the cluster. |etcd |etcd stores the persistent master state while other components watch etcd for @@ -103,7 +103,7 @@ The OpenShift OAuth server is managed by the Cluster Authentication Operator. |=== Some of these services on the master machines run as systemd services, while -others run as static Pods. +others run as static pods. Systemd services are appropriate for services that you need to always come up on that particular system shortly after it starts. For master machines, those diff --git a/modules/backup-etcd.adoc b/modules/backup-etcd.adoc index 1854126e0c..c8b86a47ed 100644 --- a/modules/backup-etcd.adoc +++ b/modules/backup-etcd.adoc @@ -6,7 +6,7 @@ [id="backing-up-etcd-data_{context}"] = Backing up etcd data -Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static Pods. This backup can be saved and used at a later time if you need to restore etcd. +Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd. [IMPORTANT] ==== @@ -65,7 +65,7 @@ snapshot db and kube resources are successfully saved to /home/core/assets/backu In this example, two files are created in the `/home/core/assets/backup/` directory on the master host: * `snapshot_.db`: This file is the etcd snapshot. -* `static_kuberesources_.tar.gz`: This file contains the resources for the static Pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot. +* `static_kuberesources_.tar.gz`: This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot. + [NOTE] ==== diff --git a/modules/bound-sa-tokens-about.adoc b/modules/bound-sa-tokens-about.adoc index 9c4dcc3a0d..3e178d1768 100644 --- a/modules/bound-sa-tokens-about.adoc +++ b/modules/bound-sa-tokens-about.adoc @@ -5,7 +5,7 @@ [id="bound-sa-tokens-about_{context}"] = About bound service account tokens -You can use bound service account tokens to limit the scope of permissions for a given service account token. These tokens are audience and time-bound. This facilitates the authentication of a service account to an IAM role and the generation of temporary credentials mounted to a Pod. You can request bound service account tokens by using volume projection and the TokenRequest API. +You can use bound service account tokens to limit the scope of permissions for a given service account token. These tokens are audience and time-bound. This facilitates the authentication of a service account to an IAM role and the generation of temporary credentials mounted to a pod. 
You can request bound service account tokens by using volume projection and the TokenRequest API. [IMPORTANT] ==== diff --git a/modules/bound-sa-tokens-configuring.adoc b/modules/bound-sa-tokens-configuring.adoc index e4b3a62214..cf3b5a31bd 100644 --- a/modules/bound-sa-tokens-configuring.adoc +++ b/modules/bound-sa-tokens-configuring.adoc @@ -5,7 +5,7 @@ [id="bound-sa-tokens-configuring_{context}"] = Configuring bound service account tokens using volume projection -You can configure Pods to request bound service account tokens by using volume projection. +You can configure pods to request bound service account tokens by using volume projection. .Prerequisites @@ -34,7 +34,7 @@ spec: ---- <1> This value should be a URL from which the recipient of a bound token can source the public keys necessary to verify the signature of the token. The default is [x-]`https://kubernetes.default.svc`. -. Configure a Pod to use a bound service account token by using volume projection. +. Configure a pod to use a bound service account token by using volume projection. .. Create a file called `pod-projected-svc-token.yaml` with the following contents: + @@ -66,14 +66,14 @@ spec: <3> Optionally set the expiration of the service account token, in seconds. The default is 3600 seconds (1 hour) and must be at least 600 seconds (10 minutes). The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. <4> Optionally set the intended audience of the token. The recipient of a token should verify that the recipient identity matches the audience claim of the token, and should otherwise reject the token. The audience defaults to the identifier of the API server. -.. Create the Pod: +.. Create the pod: + [source,terminal] ---- $ oc create -f pod-projected-svc-token.yaml ---- + -The kubelet requests and stores the token on behalf of the Pod, makes the token available to the Pod at a configurable file path, and refreshes the token as it approaches expiration. +The kubelet requests and stores the token on behalf of the pod, makes the token available to the pod at a configurable file path, and refreshes the token as it approaches expiration. . The application that uses the bound token must handle reloading the token when it rotates. + diff --git a/modules/building-memcached-operator-using-osdk.adoc b/modules/building-memcached-operator-using-osdk.adoc index 78da388966..a56e860685 100644 --- a/modules/building-memcached-operator-using-osdk.adoc +++ b/modules/building-memcached-operator-using-osdk.adoc @@ -165,7 +165,7 @@ The example controller executes the following reconciliation logic for each -- * Create a Memcached Deployment if it does not exist. * Ensure that the Deployment size is the same as specified by the `Memcached` CR spec. -* Update the `Memcached` CR status with the names of the Memcached Pods. +* Update the `Memcached` CR status with the names of the Memcached pods. -- + The next two sub-steps inspect how the Controller watches resources and how the @@ -374,8 +374,8 @@ memcached-operator 1 1 1 1 2m example-memcached 3 3 3 3 1m ---- -.. Check the Pods and CR status to confirm the status is updated with the -`memcached` Pod names: +.. 
Check the pods and CR status to confirm the status is updated with the +`memcached` pod names: + [source,terminal] ---- diff --git a/modules/builds-adding-source-clone-secrets.adoc b/modules/builds-adding-source-clone-secrets.adoc index ca02a97d30..6d324b6597 100644 --- a/modules/builds-adding-source-clone-secrets.adoc +++ b/modules/builds-adding-source-clone-secrets.adoc @@ -5,7 +5,7 @@ [id="builds-adding-source-clone-secrets_{context}"] = Source Clone Secrets -Builder Pods require access to any Git repositories defined as source for a build. Source clone secrets are used to provide the builder Pod with access it would not normally have access to, such as private repositories or repositories with self-signed or untrusted SSL certificates. +Builder pods require access to any Git repositories defined as source for a build. Source clone secrets are used to provide the builder pod with access it would not normally have access to, such as private repositories or repositories with self-signed or untrusted SSL certificates. * The following source clone secret configurations are supported. ** .gitconfig File diff --git a/modules/builds-configmap-overview.adoc b/modules/builds-configmap-overview.adoc index 4f2199a0ae..97d52559dc 100644 --- a/modules/builds-configmap-overview.adoc +++ b/modules/builds-configmap-overview.adoc @@ -9,7 +9,7 @@ Many applications require configuration using some combination of configuration The ConfigMap object provides mechanisms to inject containers with configuration data while keeping containers agnostic of {product-title}. A ConfigMap can be used to store fine-grained information like individual properties or coarse-grained information like entire configuration files or JSON blobs. -The ConfigMap API object holds key-value pairs of configuration data that can be consumed in Pods or used to store configuration data for system components such as controllers. For example: +The ConfigMap API object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. For example: .ConfigMap Object Definition [source,yaml] @@ -38,7 +38,7 @@ binaryData: You can use the `binaryData` field when you create a ConfigMap from a binary file, such as an image. ==== -Configuration data can be consumed in Pods in a variety of ways. A ConfigMap can be used to: +Configuration data can be consumed in pods in a variety of ways. A ConfigMap can be used to: * Populate environment variable values in containers * Set command-line arguments in a container @@ -51,14 +51,14 @@ A ConfigMap is similar to a secret, but designed to more conveniently support wo [discrete] == ConfigMap restrictions -*A ConfigMap must be created before its contents can be consumed in Pods.* +*A ConfigMap must be created before its contents can be consumed in pods.* Controllers can be written to tolerate missing configuration data. Consult individual components configured by using ConfigMaps on a case-by-case basis. *ConfigMap objects reside in a project.* -They can only be referenced by Pods in the same project. +They can only be referenced by pods in the same project. -*The Kubelet only supports the use of a ConfigMap for Pods it gets from the API server.* +*The Kubelet only supports the use of a ConfigMap for pods it gets from the API server.* -This includes any Pods created by using the CLI, or indirectly from a replication controller. 
It does not include Pods created by using the {product-title} node's `--manifest-url` flag, its `--config` flag, or its REST API because these are not common ways to create Pods. +This includes any pods created by using the CLI, or indirectly from a replication controller. It does not include pods created by using the {product-title} node's `--manifest-url` flag, its `--config` flag, or its REST API because these are not common ways to create pods. diff --git a/modules/builds-configmaps-use-case-consuming-in-env-vars.adoc b/modules/builds-configmaps-use-case-consuming-in-env-vars.adoc index 69709b7476..01dfa37deb 100644 --- a/modules/builds-configmaps-use-case-consuming-in-env-vars.adoc +++ b/modules/builds-configmaps-use-case-consuming-in-env-vars.adoc @@ -22,7 +22,7 @@ data: special.type: charm <3> ---- <1> Name of the ConfigMap. -<2> The project in which the ConfigMap resides. ConfigMaps can only be referenced by Pods in the same project. +<2> The project in which the ConfigMap resides. ConfigMaps can only be referenced by pods in the same project. <3> Environment variables to inject. .ConfigMap with one environment variable @@ -41,9 +41,9 @@ data: .Procedure -* You can consume the keys of this ConfigMap in a Pod using `configMapKeyRef` sections. +* You can consume the keys of this ConfigMap in a pod using `configMapKeyRef` sections. + -.Sample Pod specification configured to inject specific environment variables +.Sample `Pod` specification configured to inject specific environment variables [source,yaml] ---- apiVersion: v1 diff --git a/modules/builds-configmaps-use-case-setting-command-line-arguments.adoc b/modules/builds-configmaps-use-case-setting-command-line-arguments.adoc index fa2ac31249..88de8bfd9c 100644 --- a/modules/builds-configmaps-use-case-setting-command-line-arguments.adoc +++ b/modules/builds-configmaps-use-case-setting-command-line-arguments.adoc @@ -23,7 +23,7 @@ data: * To inject values into a command in a container, you must consume the keys you want to use as environment variables, as in the consuming ConfigMaps in environment variables use case. Then you can refer to them in a container's command using the `$(VAR_NAME)` syntax. + -.Sample Pod specification configured to inject specific environment variables +.Sample `Pod` specification configured to inject specific environment variables [source,yaml] ---- apiVersion: v1 diff --git a/modules/cli-administrator-cluster-management.adoc b/modules/cli-administrator-cluster-management.adoc index 3fc5345ef9..5ced456748 100644 --- a/modules/cli-administrator-cluster-management.adoc +++ b/modules/cli-administrator-cluster-management.adoc @@ -39,7 +39,7 @@ $ oc adm must-gather Show usage statistics of resources on the server. -.Example: Show CPU and memory usage for Pods +.Example: Show CPU and memory usage for pods [source,terminal] ---- $ oc adm top pods diff --git a/modules/cli-administrator-maintenance.adoc b/modules/cli-administrator-maintenance.adoc index eecc1b55a5..2158ef81a3 100644 --- a/modules/cli-administrator-maintenance.adoc +++ b/modules/cli-administrator-maintenance.adoc @@ -16,7 +16,7 @@ subcommand used. 
$ oc adm migrate storage ---- -.Example: Perform an update of only Pods +.Example: Perform an update of only pods [source,terminal] ---- $ oc adm migrate storage --include=pods diff --git a/modules/cli-developer-application-management.adoc b/modules/cli-developer-application-management.adoc index 19a2185356..dda28364fa 100644 --- a/modules/cli-developer-application-management.adoc +++ b/modules/cli-developer-application-management.adoc @@ -36,7 +36,7 @@ $ oc apply -f pod.json Autoscale a DeploymentConfig or ReplicationController. -.Example: Autoscale to a minimum of two and maximum of five Pods +.Example: Autoscale to a minimum of two and maximum of five pods [source,terminal] ---- $ oc autoscale deploymentconfig/parksmap-katacoda --min=2 --max=5 @@ -62,7 +62,7 @@ Delete a resource. $ oc delete pod/parksmap-katacoda-1-qfqz4 ---- -.Example: Delete all Pods with the `app=parksmap-katacoda` label +.Example: Delete all pods with the `app=parksmap-katacoda` label [source,terminal] ---- $ oc delete pods -l app=parksmap-katacoda @@ -78,7 +78,7 @@ Return detailed information about a specific object. $ oc describe deployment/example ---- -.Example: Describe all Pods +.Example: Describe all pods [source,terminal] ---- $ oc describe pods @@ -126,7 +126,7 @@ $ oc expose service/parksmap-katacoda --hostname=www.my-host.com Display one or more resources. -.Example: List Pods in the `default` namespace +.Example: List pods in the `default` namespace [source,terminal] ---- $ oc get pods -n default @@ -153,7 +153,7 @@ $ oc label pod/python-1-mz2rf status=unhealthy Set the desired number of replicas for a ReplicationController or a DeploymentConfig. -.Example: Scale the `ruby-app` DeploymentConfig to three Pods +.Example: Scale the `ruby-app` DeploymentConfig to three pods [source,terminal] ---- $ oc scale deploymentconfig/ruby-app --replicas=3 diff --git a/modules/cli-developer-basic.adoc b/modules/cli-developer-basic.adoc index 2b4eda0a1e..03fa81352c 100644 --- a/modules/cli-developer-basic.adoc +++ b/modules/cli-developer-basic.adoc @@ -9,7 +9,7 @@ Display documentation for a certain resource. -.Example: Display documentation for Pods +.Example: Display documentation for pods [source,terminal] ---- $ oc explain pods diff --git a/modules/cli-developer-troubleshooting.adoc b/modules/cli-developer-troubleshooting.adoc index 2024d56a41..5659cb3c68 100644 --- a/modules/cli-developer-troubleshooting.adoc +++ b/modules/cli-developer-troubleshooting.adoc @@ -9,7 +9,7 @@ Attach the shell to a running container. -.Example: Get output from the `python` container from Pod `python-1-mz2rf` +.Example: Get output from the `python` container from pod `python-1-mz2rf` [source,terminal] ---- $ oc attach python-1-mz2rf -c python @@ -19,7 +19,7 @@ $ oc attach python-1-mz2rf -c python Copy files and directories to and from containers. -.Example: Copy a file from the `python-1-mz2rf` Pod to the local file system +.Example: Copy a file from the `python-1-mz2rf` pod to the local file system [source,terminal] ---- $ oc cp default/python-1-mz2rf:/opt/app-root/src/README.md ~/mydirectory/. @@ -39,7 +39,7 @@ $ oc debug deploymentconfig/python Execute a command in a container. 
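To pass flags that belong to the executed command rather than to `oc`, separate them with `--`. This sketch reuses the hypothetical pod, container, and path names from the surrounding examples:

[source,terminal]
----
$ oc exec python-1-mz2rf -c python -- ls -l /opt/app-root/src
----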
-.Example: Execute the `ls` command in the `python` container from Pod `python-1-mz2rf` +.Example: Execute the `ls` command in the `python` container from pod `python-1-mz2rf` [source,terminal] ---- $ oc exec python-1-mz2rf -c python ls @@ -48,7 +48,7 @@ $ oc exec python-1-mz2rf -c python ls == logs Retrieve the log output for a specific build, BuildConfig, DeploymentConfig, or -Pod. +pod. .Example: Stream the latest logs from the `python` DeploymentConfig [source,terminal] @@ -58,9 +58,9 @@ $ oc logs -f deploymentconfig/python == port-forward -Forward one or more local ports to a Pod. +Forward one or more local ports to a pod. -.Example: Listen on port `8888` locally and forward to port `5000` in the Pod +.Example: Listen on port `8888` locally and forward to port `5000` in the pod [source,terminal] ---- $ oc port-forward python-1-mz2rf 8888:5000 @@ -80,7 +80,7 @@ $ oc proxy --port=8011 --www=./local/www/ Open a remote shell session to a container. -.Example: Open a shell session on the first container in the `python-1-mz2rf` Pod +.Example: Open a shell session on the first container in the `python-1-mz2rf` pod [source,terminal] ---- $ oc rsh python-1-mz2rf @@ -88,10 +88,10 @@ $ oc rsh python-1-mz2rf == rsync -Copy contents of a directory to or from a running Pod container. Only changed +Copy contents of a directory to or from a running pod container. Only changed files are copied using the `rsync` command from your operating system. -.Example: Synchronize files from a local directory with a Pod directory +.Example: Synchronize files from a local directory with a pod directory [source,terminal] ---- $ oc rsync ~/mydirectory/ python-1-mz2rf:/opt/app-root/src/ @@ -99,9 +99,9 @@ $ oc rsync ~/mydirectory/ python-1-mz2rf:/opt/app-root/src/ == run -Create a Pod running a particular image. +Create a pod running a particular image. -.Example: Start a Pod running the `perl` image +.Example: Start a pod running the `perl` image [source,terminal] ---- $ oc run my-test --image=perl @@ -116,7 +116,7 @@ Wait for a specific condition on one or more resources. This command is experimental and might change without notice. ==== -.Example: Wait for the `python-1-mz2rf` Pod to be deleted +.Example: Wait for the `python-1-mz2rf` pod to be deleted [source,terminal] ---- $ oc wait --for=delete pod/python-1-mz2rf diff --git a/modules/cli-getting-help.adoc b/modules/cli-getting-help.adoc index 832c04780c..0bbcbf6ce4 100644 --- a/modules/cli-getting-help.adoc +++ b/modules/cli-getting-help.adoc @@ -59,7 +59,7 @@ Usage: * Use the `oc explain` command to view the description and fields for a particular resource: + -.Example: View documentation for the Pod resource +.Example: View documentation for the `Pod` resource [source,terminal] ---- $ oc explain pods diff --git a/modules/cluster-autoscaler-about.adoc b/modules/cluster-autoscaler-about.adoc index 68aefb94e1..9cac722876 100644 --- a/modules/cluster-autoscaler-about.adoc +++ b/modules/cluster-autoscaler-about.adoc @@ -12,7 +12,7 @@ provide infrastructure management that does not rely on objects of a specific cloud provider. The ClusterAutoscaler has a cluster scope, and is not associated with a particular namespace. -The ClusterAutoscaler increases the size of the cluster when there are Pods +The ClusterAutoscaler increases the size of the cluster when there are pods that failed to schedule on any of the current nodes due to insufficient resources or when another node is necessary to meet deployment needs. 
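For reference, a minimal `ClusterAutoscaler` definition might look like the following sketch; the resource is conventionally named `default`, and the limit and scale-down values shown are placeholder assumptions, not recommendations:

[source,yaml]
----
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    maxNodesTotal: 24   # assumed upper bound; size it to your quota and subscription
  scaleDown:
    enabled: true
    delayAfterAdd: 10m  # wait after a scale-up before considering scale-down
    unneededTime: 10m   # how long a node must be unneeded before removal
----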
The ClusterAutoscaler does not increase the cluster resources beyond the limits @@ -25,30 +25,30 @@ Ensure that the `maxNodesTotal` value in the `ClusterAutoscaler` definition that The ClusterAutoscaler decreases the size of the cluster when some nodes are consistently not needed for a significant period, such as when it has low -resource use and all of its important Pods can fit on other nodes. +resource use and all of its important pods can fit on other nodes. -If the following types of Pods are present on a node, the ClusterAutoscaler +If the following types of pods are present on a node, the ClusterAutoscaler will not remove the node: * Pods with restrictive PodDisruptionBudgets (PDBs). -* Kube-system Pods that do not run on the node by default. -* Kube-system Pods that do not have a PDB or have a PDB that is too restrictive. +* Kube-system pods that do not run on the node by default. +* Kube-system pods that do not have a PDB or have a PDB that is too restrictive. * Pods that are not backed by a controller object such as a Deployment, ReplicaSet, or StatefulSet. * Pods with local storage. * Pods that cannot be moved elsewhere because of a lack of resources, incompatible node selectors or affinity, matching anti-affinity, and so on. * Unless they also have a `"cluster-autoscaler.kubernetes.io/safe-to-evict": "true"` -annotation, Pods that have a `"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"` +annotation, pods that have a `"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"` annotation. If you configure the ClusterAutoscaler, additional usage restrictions apply: * Do not modify the nodes that are in autoscaled node groups directly. All nodes within the same node group have the same capacity and labels and run the same -system Pods. -* Specify requests for your Pods. -* If you have to prevent Pods from being deleted too quickly, configure +system pods. +* Specify requests for your pods. +* If you have to prevent pods from being deleted too quickly, configure appropriate PDBs. * Confirm that your cloud provider quota is large enough to support the maximum node pools that you configure. @@ -62,23 +62,23 @@ number of replicas based on the current CPU load. If the load increases, the HPA creates new replicas, regardless of the amount of resources available to the cluster. If there are not enough resources, the ClusterAutoscaler adds resources so that -the HPA-created Pods can run. +the HPA-created pods can run. If the load decreases, the HPA stops some replicas. If this action causes some nodes to be underutilized or completely empty, the ClusterAutoscaler deletes the unnecessary nodes. -The ClusterAutoscaler takes Pod priorities into account. The Pod Priority and -Preemption feature enables scheduling Pods based on priorities if the cluster +The ClusterAutoscaler takes pod priorities into account. The Pod Priority and +Preemption feature enables scheduling pods based on priorities if the cluster does not have enough resources, but the ClusterAutoscaler ensures that the -cluster has resources to run all Pods. To honor the intention of both features, +cluster has resources to run all pods. To honor the intention of both features, the ClusterAutoscaler inclues a priority cutoff function. You can use this cutoff to -schedule "best-effort" Pods, which do not cause the ClusterAutoscaler to +schedule "best-effort" pods, which do not cause the ClusterAutoscaler to increase resources but instead run only when spare resources are available. 
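As an illustrative sketch, such best-effort workloads could reference a `PriorityClass` whose value falls below the configured cutoff; the class name and value here are assumptions:

[source,yaml]
----
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overflow-only   # hypothetical class for expendable workloads
value: -10              # any value below the priority cutoff
globalDefault: false
description: "Pods that run only on spare capacity and never trigger a scale-up."
----

Pods opt in by setting `priorityClassName: overflow-only` in their specification.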
Pods with priority lower than the cutoff value do not cause the cluster to scale up or prevent the cluster from scaling down. No new nodes are added to run the -Pods, and nodes running these Pods might be deleted to free resources. +pods, and nodes running these pods might be deleted to free resources. //// Default priority cutoff is 0. It can be changed using `--expendable-pods-priority-cutoff` flag, diff --git a/modules/cluster-dns-operator.adoc b/modules/cluster-dns-operator.adoc index a3913dcd16..3cae7f43f4 100644 --- a/modules/cluster-dns-operator.adoc +++ b/modules/cluster-dns-operator.adoc @@ -9,7 +9,7 @@ == Purpose The DNS Operator deploys and manages CoreDNS to provide a name resolution -service to Pods that enables DNS-based Kubernetes Service discovery in +service to pods that enables DNS-based Kubernetes Service discovery in {product-title}. The Operator creates a working default deployment based on the cluster's configuration. diff --git a/modules/cluster-logging-collector-legacy-fluentd.adoc b/modules/cluster-logging-collector-legacy-fluentd.adoc index 0eecdab5a3..87617bf75e 100644 --- a/modules/cluster-logging-collector-legacy-fluentd.adoc +++ b/modules/cluster-logging-collector-legacy-fluentd.adoc @@ -5,7 +5,7 @@ [id="cluster-logging-collector-legacy-fluentd_{context}"] = Forwarding logs using the legacy Fluentd method -You can use the Fluentd *forward* protocol to send logs to destinations outside of your {product-title} cluster instead of the default Elasticsearch log store by creating a configuration file and ConfigMap. You are responsible for configuring the external log aggregator to receive the logs from {product-title}. +You can use the Fluentd *forward* protocol to send logs to destinations outside of your {product-title} cluster instead of the default Elasticsearch log store by creating a configuration file and ConfigMap. You are responsible for configuring the external log aggregator to receive the logs from {product-title}. [IMPORTANT] ==== @@ -16,7 +16,7 @@ ifdef::openshift-origin[] The *forward* protocols are provided with the Fluentd image as of v1.4.0. endif::openshift-origin[] -To send logs using the Fluentd *forward* protocol, create a configuration file called `secure-forward.conf`, that points to an external log aggregator. Then, use that file to create a ConfigMap called called `secure-forward` in the `openshift-logging` namespace, which {product-title} uses when forwarding the logs. +To send logs using the Fluentd *forward* protocol, create a configuration file called `secure-forward.conf`, that points to an external log aggregator. Then, use that file to create a ConfigMap called called `secure-forward` in the `openshift-logging` namespace, which {product-title} uses when forwarding the logs. .Sample Fluentd configuration file @@ -25,7 +25,7 @@ To send logs using the Fluentd *forward* protocol, create a configuration file c @type forward - self_hostname fluentd.example.com + self_hostname fluentd.example.com shared_key "fluent-receiver" transport tls @@ -68,7 +68,7 @@ To configure {product-title} to forward logs using the legacy Fluentd method: tls_verify_hostname <4> tls_cert_path <5> <6> - @type file + @type file path '/var/lib/fluentd/secureforwardlegacy' queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '1024' }" chunk_limit_size "#{ENV['BUFFER_SIZE_LIMIT'] || '1m' }" @@ -100,7 +100,7 @@ To configure {product-title} to forward logs using the legacy Fluentd method: <8> Specify the host name or IP of the server. 
<9> Specify the host label of the server. <10> Specify the port of the server. -<11> Optionally, add additional servers. +<11> Optionally, add additional servers. If you specify two or more servers, *forward* uses these server nodes in a round-robin order. + To use Mutual TLS (mTLS) authentication, see the link:https://docs.fluentd.org/output/forward#tips-and-tricks[Fluentd documentation] for information about client certificate, key parameters, and other settings. @@ -112,8 +112,8 @@ To use Mutual TLS (mTLS) authentication, see the link:https://docs.fluentd.org/o $ oc create configmap secure-forward --from-file=secure-forward.conf -n openshift-logging ---- -The Cluster Logging Operator redeploys the Fluentd Pods. If the Pods do not redeploy, you can delete the Fluentd -Pods to force them to redeploy. +The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd +pods to force them to redeploy. [source,terminal] ---- diff --git a/modules/cluster-logging-collector-legacy-syslog.adoc b/modules/cluster-logging-collector-legacy-syslog.adoc index d66dc96d77..213c12f729 100644 --- a/modules/cluster-logging-collector-legacy-syslog.adoc +++ b/modules/cluster-logging-collector-legacy-syslog.adoc @@ -110,8 +110,8 @@ rfc 3164 <6> $ oc create configmap syslog --from-file=syslog.conf -n openshift-logging ---- -The Cluster Logging Operator redeploys the Fluentd Pods. If the Pods do not redeploy, you can delete the Fluentd -Pods to force them to redeploy. +The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd +pods to force them to redeploy. [source,terminal] ---- diff --git a/modules/cluster-logging-collector-log-forward-es.adoc b/modules/cluster-logging-collector-log-forward-es.adoc index 5161d2e00d..3afc7739ce 100644 --- a/modules/cluster-logging-collector-log-forward-es.adoc +++ b/modules/cluster-logging-collector-log-forward-es.adoc @@ -9,7 +9,7 @@ You can optionally forward logs to an external Elasticsearch v5.x or v6.x instan To configure log forwarding to an external Elasticsearch instance, create a `ClusterLogForwarder` Custom Resource (CR) with an output to that instance and a pipeline that uses the output. The external Elasticsearch output can use the HTTP (insecure) or HTTPS (secure HTTP) connection. -To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the `default` output to forward logs to the internal instance. You do not need to create a `default` output. If you do configure a `default` output, you receive an error message because the `default` output is reserved for the Cluster Logging Operator. +To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the `default` output to forward logs to the internal instance. You do not need to create a `default` output. If you do configure a `default` output, you receive an error message because the `default` output is reserved for the Cluster Logging Operator. [NOTE] ==== @@ -65,7 +65,7 @@ spec: <8> Specify which log types should be forwarded using that pipeline: `application,` `infrastructure`, or `audit`. <9> Specify the output to use with that pipeline for forwarding the logs. <10> Optional: Specify the `default` output to send the logs to the internal Elasticsearch instance. -<11> Optional: One or more labels to add to the logs. 
+<11> Optional: One or more labels to add to the logs. <12> Optional: Configure multiple outputs to forward logs to other external log aggregtors of any supported type: ** Optional. A name to describe the pipeline. ** The `inputRefs` is the log type to forward using that pipeline: `application,` `infrastructure`, or `audit`. @@ -79,11 +79,10 @@ spec: $ oc create -f .yaml ---- -The Cluster Logging Operator redeploys the Fluentd Pods. If the Pods do not redeploy, you can delete the Fluentd -Pods to force them to redeploy. +The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd +pods to force them to redeploy. [source,terminal] ---- $ oc delete pod --selector logging-infra=fluentd ---- - diff --git a/modules/cluster-logging-collector-log-forward-fluentd.adoc b/modules/cluster-logging-collector-log-forward-fluentd.adoc index fd29f58635..de87eaca96 100644 --- a/modules/cluster-logging-collector-log-forward-fluentd.adoc +++ b/modules/cluster-logging-collector-log-forward-fluentd.adoc @@ -11,7 +11,7 @@ To configure log forwarding using the *forward* protocol, create a `ClusterLogFo [NOTE] ==== -Alternately, you can use a ConfigMap to forward logs using the *forward* protocols. However, this method is deprecated in {product-title} and will be removed in a future release. +Alternately, you can use a ConfigMap to forward logs using the *forward* protocols. However, this method is deprecated in {product-title} and will be removed in a future release. ==== .Procedure @@ -77,8 +77,8 @@ spec: $ oc create -f .yaml ---- -The Cluster Logging Operator redeploys the Fluentd Pods. If the Pods do not redeploy, you can delete the Fluentd -Pods to force them to redeploy. +The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd +pods to force them to redeploy. [source,terminal] ---- diff --git a/modules/cluster-logging-collector-log-forward-kafka.adoc b/modules/cluster-logging-collector-log-forward-kafka.adoc index f8280515e1..d18d00bcea 100644 --- a/modules/cluster-logging-collector-log-forward-kafka.adoc +++ b/modules/cluster-logging-collector-log-forward-kafka.adoc @@ -5,7 +5,7 @@ [id="cluster-logging-collector-log-forward-kafka_{context}"] = Forwarding logs to a Kafka broker -You can forward logs to an external Kafka broker in addition to, or instead of, the default Elasticsearch log store. +You can forward logs to an external Kafka broker in addition to, or instead of, the default Elasticsearch log store. To configure log forwarding to an external Kafka instance, create a `ClusterLogForwarder` Custom Resource (CR) with an output to that instance and a pipeline that uses the output. You can include a specific Kafka topic in the output or use the default. The Kafka output can use a TCP (insecure) or TLS (secure TCP) connection. @@ -40,9 +40,9 @@ spec: inputRefs: <8> - application outputRefs: <9> - - app-logs + - app-logs labels: - logType: application <10> + logType: application <10> - name: infra-topic <11> inputRefs: - infrastructure @@ -83,8 +83,8 @@ spec: $ oc create -f .yaml ---- -The Cluster Logging Operator redeploys the Fluentd Pods. If the Pods do not redeploy, you can delete the Fluentd -Pods to force them to redeploy. +The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd +pods to force them to redeploy. 
[source,terminal] ---- diff --git a/modules/cluster-logging-collector-log-forward-syslog.adoc b/modules/cluster-logging-collector-log-forward-syslog.adoc index 1aff5a20f6..63c623d30f 100644 --- a/modules/cluster-logging-collector-log-forward-syslog.adoc +++ b/modules/cluster-logging-collector-log-forward-syslog.adoc @@ -92,8 +92,8 @@ spec: $ oc create -f .yaml ---- -The Cluster Logging Operator redeploys the Fluentd Pods. If the Pods do not redeploy, you can delete the Fluentd -Pods to force them to redeploy. +The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd +pods to force them to redeploy. [source,terminal] ---- diff --git a/modules/cluster-logging-collector-tolerations.adoc b/modules/cluster-logging-collector-tolerations.adoc index bd26b4512e..07db0d9c0b 100644 --- a/modules/cluster-logging-collector-tolerations.adoc +++ b/modules/cluster-logging-collector-tolerations.adoc @@ -3,16 +3,16 @@ // * logging/cluster-logging-collector.adoc [id="cluster-logging-collector-tolerations_{context}"] -= Using tolerations to control the log collector Pod placement += Using tolerations to control the log collector pod placement -You can ensure which nodes the logging collector Pods run on and prevent -other workloads from using those nodes by using tolerations on the Pods. +You can ensure which nodes the logging collector pods run on and prevent +other workloads from using those nodes by using tolerations on the pods. -You apply tolerations to logging collector Pods through the Cluster Logging Custom Resource (CR) +You apply tolerations to logging collector pods through the Cluster Logging Custom Resource (CR) and apply taints to a node through the node specification. You can use taints and tolerations -to ensure the Pod does not get evicted for things like memory and CPU issues. +to ensure the pod does not get evicted for things like memory and CPU issues. -By default, the logging collector Pods have the following toleration: +By default, the logging collector pods have the following toleration: [source,yaml] ---- @@ -28,7 +28,7 @@ tolerations: .Procedure -. Use the following command to add a taint to a node where you want logging collector Pods to schedule logging collector Pods: +. Use the following command to add a taint to a node where you want logging collector pods to schedule logging collector pods: + [source,terminal] ---- @@ -43,10 +43,10 @@ $ oc adm taint nodes node1 collector=node:NoExecute ---- + This example places a taint on `node1` that has key `collector`, value `node`, and taint effect `NoExecute`. -You must use the `NoExecute` taint effect. `NoExecute` schedules only Pods that match the taint and removes existing Pods +You must use the `NoExecute` taint effect. `NoExecute` schedules only pods that match the taint and removes existing pods that do not match. -. Edit the `collection` section of the Cluster Logging Custom Resource (CR) to configure a toleration for the logging collector Pods: +. Edit the `collection` section of the Cluster Logging Custom Resource (CR) to configure a toleration for the logging collector pods: + [source,yaml] ---- @@ -54,16 +54,15 @@ that do not match. logs: type: "fluentd" rsyslog: - tolerations: + tolerations: - key: "collector" <1> operator: "Exists" <2> effect: "NoExecute" <3> tolerationSeconds: 6000 <4> ---- <1> Specify the key that you added to the node. -<2> Specify the `Exists` operator to require the `key`/`value`/`effect` parameters to match. 
+<2> Specify the `Exists` operator to require the `key`/`value`/`effect` parameters to match. <3> Specify the `NoExecute` effect. -<4> Optionally, specify the `tolerationSeconds` parameter to set how long a Pod can remain bound to a node before being evicted. - -This toleration matches the taint created by the `oc adm taint` command. A Pod with this toleration would be able to schedule onto `node1`. +<4> Optionally, specify the `tolerationSeconds` parameter to set how long a pod can remain bound to a node before being evicted. +This toleration matches the taint created by the `oc adm taint` command. A pod with this toleration would be able to schedule onto `node1`. diff --git a/modules/cluster-logging-collector-tuning.adoc b/modules/cluster-logging-collector-tuning.adoc index 14eaa6644d..60bccddb62 100644 --- a/modules/cluster-logging-collector-tuning.adoc +++ b/modules/cluster-logging-collector-tuning.adoc @@ -15,9 +15,9 @@ Fluentd collects log data in a single blob called a _chunk_. When Fluentd create By default in {product-title}, Fluentd uses the _exponential backoff_ method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the _periodic_ retry method instead, which retries flushing the chunks at a specified interval. By default, Fluentd retries chunk flushing indefinitely. In {product-title}, you cannot change the indefinite retry behavior. -These parameters can help you determine the trade-offs between latency and throughput. +These parameters can help you determine the trade-offs between latency and throughput. -* To optimize Fluentd for throughput, you could use these parameters to reduce network packet count by configuring larger buffers and queues, delaying flushes, and setting longer times between retries. Be aware that larger buffers require more space on the node file system. +* To optimize Fluentd for throughput, you could use these parameters to reduce network packet count by configuring larger buffers and queues, delaying flushes, and setting longer times between retries. Be aware that larger buffers require more space on the node file system. * To optimize for low latency, you could use the parameters to send data as soon as possible, avoid the build-up of batches, have shorter queues and buffers, and use more frequent flush and retries. @@ -52,8 +52,8 @@ These parameters are: |`flushMode` a| The method to perform flushes: - -* `lazy`: Flush chunks based on the `timekey` parameter. You cannot modify the `timekey` parameter. + +* `lazy`: Flush chunks based on the `timekey` parameter. You cannot modify the `timekey` parameter. * `interval`: Flush chunks based on the `flushInterval` parameter. * `immediate`: Flush chunks immediately after data is added to a chunk. |`interval` @@ -70,7 +70,7 @@ a|The chunking behavior when the queue is full: * `drop_oldest_chunk`: Drop the oldest chunk to accept new incoming chunks. Older chunks have less value than newer chunks. |`block` -|`queuedChunkLimitSize` +|`queuedChunkLimitSize` |The number of chunks in the queue. 
|`32` @@ -109,13 +109,13 @@ $ oc edit ClusterLogging instance ---- apiVersion: logging.openshift.io/v1 kind: ClusterLogging -metadata: +metadata: name: instance namespace: openshift-logging -spec: - forwarder: - fluentd: - buffer: +spec: + forwarder: + fluentd: + buffer: chunkLimitSize: 8m <1> flushInterval: 5s <2> flushMode: interval <3> @@ -137,7 +137,7 @@ spec: <8> Specify the time in seconds before the next chunk flush. <9> Specify the maximum size of the chunk buffer. -. Verify that the Fluentd Pods are redeployed: +. Verify that the Fluentd pods are redeployed: + [source,terminal] ---- diff --git a/modules/cluster-logging-deploy-cli.adoc b/modules/cluster-logging-deploy-cli.adoc index 34b3fe334d..0c9af47214 100644 --- a/modules/cluster-logging-deploy-cli.adoc +++ b/modules/cluster-logging-deploy-cli.adoc @@ -415,9 +415,9 @@ $ oc create -f clo-instance.yaml + This creates the Cluster Logging components, the Elasticsearch Custom Resource and components, and the Kibana interface. -. Verify the install by listing the Pods in the *openshift-logging* project. +. Verify the install by listing the pods in the *openshift-logging* project. + -You should see several Pods for Cluster Logging, Elasticsearch, Fluentd, and Kibana similar to the following list: +You should see several pods for Cluster Logging, Elasticsearch, Fluentd, and Kibana similar to the following list: + [source,terminal] ---- diff --git a/modules/cluster-logging-deploy-console.adoc b/modules/cluster-logging-deploy-console.adoc index 9b0adf8aeb..239a960962 100644 --- a/modules/cluster-logging-deploy-console.adoc +++ b/modules/cluster-logging-deploy-console.adoc @@ -87,7 +87,7 @@ If the Operator does not appear as installed, to troubleshoot further: + * Switch to the *Operators* → *Installed Operators* page and inspect the *Status* column for any errors or failures. -* Switch to the *Workloads* → *Pods* page and check the logs in any Pods in the +* Switch to the *Workloads* → *Pods* page and check the logs in any pods in the `openshift-logging` project that are reporting issues. . Create a cluster logging instance: @@ -243,7 +243,7 @@ The number of primary shards for the index templates is equal to the number of E .. Select the *openshift-logging* project. 
+ -You should see several Pods for cluster logging, Elasticsearch, Fluentd, and Kibana similar to the following list: +You should see several pods for cluster logging, Elasticsearch, Fluentd, and Kibana similar to the following list: + * cluster-logging-operator-cb795f8dc-xkckc * elasticsearch-cdm-b3nqzchd-1-5c6797-67kfz diff --git a/modules/cluster-logging-deploy-storage-considerations.adoc b/modules/cluster-logging-deploy-storage-considerations.adoc index 131d6a61bb..ea1151e2e5 100644 --- a/modules/cluster-logging-deploy-storage-considerations.adoc +++ b/modules/cluster-logging-deploy-storage-considerations.adoc @@ -28,7 +28,7 @@ Baseline (256 characters per minute -> 15KB/min) [cols="3,4",options="header"] |=== -|Logging Pods +|Logging pods |Storage Throughput |3 es diff --git a/modules/cluster-logging-elasticsearch-tolerations.adoc b/modules/cluster-logging-elasticsearch-tolerations.adoc index fa22041bfa..2283105f19 100644 --- a/modules/cluster-logging-elasticsearch-tolerations.adoc +++ b/modules/cluster-logging-elasticsearch-tolerations.adoc @@ -3,17 +3,17 @@ // * logging/cluster-logging-elasticsearch.adoc [id="cluster-logging-elasticsearch-tolerations_{context}"] -= Using tolerations to control the log store Pod placement += Using tolerations to control the log store pod placement -You can control which nodes the log store Pods runs on and prevent -other workloads from using those nodes by using tolerations on the Pods. +You can control which nodes the log store pods runs on and prevent +other workloads from using those nodes by using tolerations on the pods. -You apply tolerations to the log store Pods through the Cluster Logging Custom Resource (CR) -and apply taints to a node through the node specification. A taint on a node is a `key:value pair` that -instructs the node to repel all Pods that do not tolerate the taint. Using a specific `key:value` pair -that is not on other Pods ensures only the log store Pods can run on that node. +You apply tolerations to the log store pods through the Cluster Logging Custom Resource (CR) +and apply taints to a node through the node specification. A taint on a node is a `key:value pair` that +instructs the node to repel all pods that do not tolerate the taint. Using a specific `key:value` pair +that is not on other pods ensures only the log store pods can run on that node. -By default, the log store Pods have the following toleration: +By default, the log store pods have the following toleration: [source,yaml] ---- @@ -29,7 +29,7 @@ tolerations: .Procedure -. Use the following command to add a taint to a node where you want to schedule the cluster logging Pods: +. Use the following command to add a taint to a node where you want to schedule the cluster logging pods: + [source,terminal] ---- @@ -44,10 +44,10 @@ $ oc adm taint nodes node1 elasticsearch=node:NoExecute ---- + This example places a taint on `node1` that has key `elasticsearch`, value `node`, and taint effect `NoExecute`. -Nodes with the `NoExecute` effect schedule only Pods that match the taint and remove existing Pods +Nodes with the `NoExecute` effect schedule only pods that match the taint and remove existing pods that do not match. -. Edit the `logstore` section of the Cluster Logging Custom Resource (CR) to configure a toleration for the Elasticsearch Pods: +. Edit the `logstore` section of the Cluster Logging Custom Resource (CR) to configure a toleration for the Elasticsearch pods: + [source,yaml] ---- @@ -55,16 +55,15 @@ that do not match. 
type: "elasticsearch" elasticsearch: nodeCount: 1 - tolerations: + tolerations: - key: "elasticsearch" <1> operator: "Exists" <2> effect: "NoExecute" <3> tolerationSeconds: 6000 <4> ---- <1> Specify the key that you added to the node. -<2> Specify the `Exists` operator to require a taint with the key `elasticsearch` to be present on the Node. +<2> Specify the `Exists` operator to require a taint with the key `elasticsearch` to be present on the Node. <3> Specify the `NoExecute` effect. -<4> Optionally, specify the `tolerationSeconds` parameter to set how long a Pod can remain bound to a node before being evicted. - -This toleration matches the taint created by the `oc adm taint` command. A Pod with this toleration could be scheduled onto `node1`. +<4> Optionally, specify the `tolerationSeconds` parameter to set how long a pod can remain bound to a node before being evicted. +This toleration matches the taint created by the `oc adm taint` command. A pod with this toleration could be scheduled onto `node1`. diff --git a/modules/cluster-logging-eventrouter-deploy.adoc b/modules/cluster-logging-eventrouter-deploy.adoc index 6ab956b874..ddbd662db0 100644 --- a/modules/cluster-logging-eventrouter-deploy.adoc +++ b/modules/cluster-logging-eventrouter-deploy.adoc @@ -7,7 +7,7 @@ Use the following steps to deploy the Event Router into your cluster. You should always deploy the Event Router to the `openshift-logging` project to ensure it collects events from across the cluster. -The following Template object creates the service account, cluster role, and cluster role binding required for the Event Router. The template also configures and deploys the Event Router Pod. You can use this template without making changes, or change the Deployment object CPU and memory requests. +The following Template object creates the service account, cluster role, and cluster role binding required for the Event Router. The template also configures and deploys the Event Router pod. You can use this template without making changes, or change the Deployment object CPU and memory requests. .Prerequisites @@ -17,7 +17,7 @@ The following Template object creates the service account, cluster role, and clu .Procedure -. Create a template for the Event Router: +. Create a template for the Event Router: + [source,yaml] ---- @@ -104,7 +104,7 @@ objects: configMap: name: eventrouter parameters: - - name: IMAGE + - name: IMAGE displayName: Image value: "registry.redhat.io/openshift4/ose-logging-eventrouter:latest" - name: CPU <6> @@ -113,7 +113,7 @@ parameters: - name: MEMORY <7> displayName: Memory value: "128Mi" - - name: NAMESPACE + - name: NAMESPACE displayName: Namespace value: "openshift-logging" <8> ---- @@ -121,9 +121,9 @@ parameters: <2> Creates a ClusterRole to monitor for events in the cluster. <3> Creates a ClusterRoleBinding to bind the ClusterRole to the ServiceAccount. <4> Creates a ConfigMap in the `openshift-logging` project to generate the required `config.json` file. -<5> Creates a Deployment in the `openshift-logging` project to generate and configure the Event Router Pod. -<6> Specifies the minimum amount of memory to allocate to the Event Router Pod. Defaults to `128Mi`. -<7> Specifies the minimum amount of CPU to allocate to the Event Router Pod. Defaults to `100m`. +<5> Creates a Deployment in the `openshift-logging` project to generate and configure the Event Router pod. +<6> Specifies the minimum amount of memory to allocate to the Event Router pod. Defaults to `128Mi`. 
+<7> Specifies the minimum amount of CPU to allocate to the Event Router pod. Defaults to `100m`. <8> Specifies the `openshift-logging` project to install objects in. . Use the following command to process and apply the template: @@ -169,14 +169,14 @@ pod/cluster-logging-eventrouter-d649f97c8-qvv8r + [source,terminal] ---- -$ oc logs -n openshift-logging +$ oc logs -n openshift-logging ---- + For example: + [source,terminal] ---- -$ oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging +$ oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging ---- + .Example output @@ -186,4 +186,3 @@ $ oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging ---- + You can also use Kibana to view events by creating an index pattern using the Elasticsearch `infra` index. - diff --git a/modules/cluster-logging-kibana-tolerations.adoc b/modules/cluster-logging-kibana-tolerations.adoc index ac43972a5e..fc442adae0 100644 --- a/modules/cluster-logging-kibana-tolerations.adoc +++ b/modules/cluster-logging-kibana-tolerations.adoc @@ -3,15 +3,15 @@ // * logging/cluster-logging-visualizer.adoc [id="cluster-logging-kibana-tolerations_{context}"] -= Using tolerations to control the log visualizer Pod placement += Using tolerations to control the log visualizer pod placement -You can control the node where the log visualizer Pod runs and prevent -other workloads from using those nodes by using tolerations on the Pods. +You can control the node where the log visualizer pod runs and prevent +other workloads from using those nodes by using tolerations on the pods. -You apply tolerations to the log visualizer Pod through the Cluster Logging Custom Resource (CR) -and apply taints to a node through the node specification. A taint on a node is a `key:value pair` that -instructs the node to repel all Pods that do not tolerate the taint. Using a specific `key:value` pair -that is not on other Pods ensures only the Kibana Pod can run on that node. +You apply tolerations to the log visualizer pod through the Cluster Logging Custom Resource (CR) +and apply taints to a node through the node specification. A taint on a node is a `key:value pair` that +instructs the node to repel all pods that do not tolerate the taint. Using a specific `key:value` pair +that is not on other pods ensures only the Kibana pod can run on that node. .Prerequisites @@ -19,7 +19,7 @@ that is not on other Pods ensures only the Kibana Pod can run on that node. .Procedure -. Use the following command to add a taint to a node where you want to schedule the log visualizer Pod: +. Use the following command to add a taint to a node where you want to schedule the log visualizer pod: + [source,terminal] ---- @@ -34,27 +34,26 @@ $ oc adm taint nodes node1 kibana=node:NoExecute ---- + This example places a taint on `node1` that has key `kibana`, value `node`, and taint effect `NoExecute`. -You must use the `NoExecute` taint effect. `NoExecute` schedules only Pods that match the taint and remove existing Pods +You must use the `NoExecute` taint effect. `NoExecute` schedules only pods that match the taint and remove existing pods that do not match. -. Edit the `visualization` section of the Cluster Logging Custom Resource (CR) to configure a toleration for the Kibana Pod: +. 
Edit the `visualization` section of the Cluster Logging Custom Resource (CR) to configure a toleration for the Kibana pod: + [source,yaml] ---- visualization: - type: "kibana" + type: "kibana" kibana: - tolerations: + tolerations: - key: "kibana" <1> operator: "Exists" <2> effect: "NoExecute" <3> tolerationSeconds: 6000 <4> ---- <1> Specify the key that you added to the node. -<2> Specify the `Exists` operator to require the `key`/`value`/`effect` parameters to match. +<2> Specify the `Exists` operator to require the `key`/`value`/`effect` parameters to match. <3> Specify the `NoExecute` effect. -<4> Optionally, specify the `tolerationSeconds` parameter to set how long a Pod can remain bound to a node before being evicted. +<4> Optionally, specify the `tolerationSeconds` parameter to set how long a pod can remain bound to a node before being evicted. -This toleration matches the taint created by the `oc adm taint` command. A Pod with this toleration would be able to schedule onto `node1`. - +This toleration matches the taint created by the `oc adm taint` command. A pod with this toleration would be able to schedule onto `node1`. diff --git a/modules/cluster-logging-log-store-status-comp.adoc b/modules/cluster-logging-log-store-status-comp.adoc index 87b5809cab..a31d8a3c38 100644 --- a/modules/cluster-logging-log-store-status-comp.adoc +++ b/modules/cluster-logging-log-store-status-comp.adoc @@ -49,8 +49,8 @@ green open .kibana_-1595131456_user1 g ---- -Log store Pods:: -You can view the status of the Pods that host the log store. +Log store pods:: +You can view the status of the pods that host the log store. . Get the name of a pod: + diff --git a/modules/cluster-logging-logstore-limits.adoc b/modules/cluster-logging-logstore-limits.adoc index ba4bfae348..7f41674cd9 100644 --- a/modules/cluster-logging-logstore-limits.adoc +++ b/modules/cluster-logging-logstore-limits.adoc @@ -10,7 +10,7 @@ You should not have to manually adjust these values as the Elasticsearch Operator sets values sufficient for your environment. Each Elasticsearch node can operate with a lower memory setting though this is *not* recommended for production deployments. -For production use, you should have no less than the default 16Gi allocated to each Pod. Preferably you should allocate as much as possible, up to 64Gi per Pod. +For production use, you should have no less than the default 16Gi allocated to each pod. Preferably you should allocate as much as possible, up to 64Gi per pod. .Prerequisites diff --git a/modules/cluster-logging-updating-logging.adoc b/modules/cluster-logging-updating-logging.adoc index 000095bded..5716dc4ef4 100644 --- a/modules/cluster-logging-updating-logging.adoc +++ b/modules/cluster-logging-updating-logging.adoc @@ -14,7 +14,7 @@ When you update: + Kibana is unusable when the Elasticsearch Operator has been updated but the Cluster Logging Operator has not been updated. + -If you update the Cluster Logging Operator before the Elasticsearch Operator, Kibana does not update and the Kibana Custom Resource (CR) is not created. To work around this problem, delete the Cluster Logging Operator Pod. When the Cluster Logging Operator Pod redeploys, the Kibana CR is created. +If you update the Cluster Logging Operator before the Elasticsearch Operator, Kibana does not update and the Kibana Custom Resource (CR) is not created. To work around this problem, delete the Cluster Logging Operator pod. When the Cluster Logging Operator pod redeploys, the Kibana CR is created. 
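For quick reference, the workaround described above amounts to a single pod deletion. The following is a minimal sketch, assuming the Operator runs in the `openshift-logging` project and that its pod carries the default `name=cluster-logging-operator` label; verify the label in your cluster before relying on it:

[source,terminal]
----
$ oc delete pod -l name=cluster-logging-operator -n openshift-logging
----

The Operator's Deployment recreates the pod, and the redeployed Cluster Logging Operator then creates the Kibana CR as described above.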
[IMPORTANT]
====
@@ -27,7 +27,7 @@ If your cluster logging version is prior to 4.5, you must upgrade cluster loggin
* Make sure the cluster logging status is healthy:
+
-** All Pods are `ready`.
+** All pods are `ready`.
** The Elasticsearch cluster is healthy.
* Back up your Elasticsearch and Kibana data.
@@ -86,7 +86,7 @@ Wait for the *Status* field to report *Succeeded*.
. Check the logging components:
-.. Ensure that all Elasticsearch Pods are in the *Ready* status:
+.. Ensure that all Elasticsearch pods are in the *Ready* status:
+
[source,terminal]
----
@@ -208,7 +208,7 @@ You should see a `fluentd-init` container:
$ oc get kibana kibana -o json
----
+
-You should see a Kibana Pod with the `ready` status:
+You should see a Kibana pod with the `ready` status:
+
[source,json]
----
diff --git a/modules/cluster-logging-visualizer-launch.adoc b/modules/cluster-logging-visualizer-launch.adoc
index 599bda6d9c..5082037993 100644
--- a/modules/cluster-logging-visualizer-launch.adoc
+++ b/modules/cluster-logging-visualizer-launch.adoc
@@ -12,7 +12,7 @@ pie charts, heat maps, built-in geospatial support, and other visualizations.
* To list the *infra* and *audit* indices in Kibana, a user must have the `cluster-admin` role, the `cluster-reader` role, or both roles. The default `kubeadmin` user has proper permissions to list these indices.
+
-If you can view the Pods and logs in the `default` project, you should be able to access the these indices. You can use the following command to check if the current user has proper permissions:
+If you can view the pods and logs in the `default` project, you should be able to access these indices. You can use the following command to check if the current user has proper permissions:
+
[source,terminal]
----
diff --git a/modules/cluster-node-tuning-operator-verify-profiles.adoc b/modules/cluster-node-tuning-operator-verify-profiles.adoc
index 2b15e6eea0..77e3fce208 100644
--- a/modules/cluster-node-tuning-operator-verify-profiles.adoc
+++ b/modules/cluster-node-tuning-operator-verify-profiles.adoc
@@ -9,7 +9,7 @@ Use this procedure to check which Tuned profiles are applied on every node.
.Procedure
-. Check which Tuned Pods are running on each node:
+. Check which Tuned pods are running on each node:
+
[source,terminal]
----
diff --git a/modules/cnf-installing-the-performance-addon-operator.adoc b/modules/cnf-installing-the-performance-addon-operator.adoc
index a86b040467..649d449287 100644
--- a/modules/cnf-installing-the-performance-addon-operator.adoc
+++ b/modules/cnf-installing-the-performance-addon-operator.adoc
@@ -150,5 +150,5 @@ If the Operator does not appear as installed, to troubleshoot further:
* Go to the *Operators* -> *Installed Operators* page and inspect the *Operator Subscriptions* and *Install Plans* tabs for any failure or errors under *Status*.
-* Go to the *Workloads* -> *Pods* page and check the logs for Pods in the
+* Go to the *Workloads* -> *Pods* page and check the logs for pods in the
`performance-addon-operator` project.
diff --git a/modules/configuring-cluster-loader.adoc b/modules/configuring-cluster-loader.adoc
index c9a353d0dd..56502a692d 100644
--- a/modules/configuring-cluster-loader.adoc
+++ b/modules/configuring-cluster-loader.adoc
@@ -6,7 +6,7 @@
= Configuring Cluster Loader
The tool creates multiple namespaces (projects), which contain multiple
-templates or Pods.
+templates or pods.
== Example Cluster Loader configuration file
@@ -68,7 +68,7 @@ ClusterLoader:
----
<1> Optional setting for end-to-end tests.
Set to `local` to avoid extra log messages. <2> The tuning sets allow rate limiting and stepping, the ability to create several -batches of Pods while pausing in between sets. Cluster Loader monitors +batches of pods while pausing in between sets. Cluster Loader monitors completion of the previous step before continuing. <3> Stepping will pause for `M` seconds after each `N` objects are created. <4> Rate limiting will wait `M` milliseconds between the creation of objects. @@ -137,7 +137,7 @@ path to a file from which you create the ConfigMap. a file from which you create the secret. |`pods` -|A sub-object with one or many definition(s) of Pods to deploy. +|A sub-object with one or many definition(s) of pods to deploy. |`templates` |A sub-object with one or many definition(s) of templates to deploy. @@ -148,7 +148,7 @@ a file from which you create the secret. |Field |Description |`num` -|An integer. The number of Pods or templates to deploy. +|An integer. The number of pods or templates to deploy. |`image` |A string. The docker image URL to a repository where it can be pulled. @@ -173,7 +173,7 @@ override in the pod or template. defining a tuning in a project. |`pods` -|A sub-object identifying the `tuningsets` that will apply to Pods. +|A sub-object identifying the `tuningsets` that will apply to pods. |`templates` |A sub-object identifying the `tuningsets` that will apply to templates. @@ -221,18 +221,18 @@ whether to start an HTTP server for pod synchronization. The integer `port` defines the HTTP server port to listen on (`9090` by default). |`running` -|A boolean. Wait for Pods with labels matching `selectors` to go into `Running` +|A boolean. Wait for pods with labels matching `selectors` to go into `Running` state. |`succeeded` -|A boolean. Wait for Pods with labels matching `selectors` to go into `Completed` +|A boolean. Wait for pods with labels matching `selectors` to go into `Completed` state. |`selectors` -|A list of selectors to match Pods in `Running` or `Completed` states. +|A list of selectors to match pods in `Running` or `Completed` states. |`timeout` -|A string. The synchronization timeout period to wait for Pods in `Running` or +|A string. The synchronization timeout period to wait for pods in `Running` or `Completed` states. For values that are not `0`, use units: [ns\|us\|ms\|s\|m\|h]. |=== diff --git a/modules/configuring-scale-bounds-knative.adoc b/modules/configuring-scale-bounds-knative.adoc index 8b487bd2b4..d6a392c916 100644 --- a/modules/configuring-scale-bounds-knative.adoc +++ b/modules/configuring-scale-bounds-knative.adoc @@ -5,12 +5,12 @@ [id="configuring-scale-bounds-knative_{context}"] = Configuring scale bounds Knative Serving autoscaling -The `minScale` and `maxScale` annotations can be used to configure the minimum and maximum number of Pods that can serve applications. +The `minScale` and `maxScale` annotations can be used to configure the minimum and maximum number of pods that can serve applications. These annotations can be used to prevent cold starts or to help control computing costs. -minScale:: If the `minScale` annotation is not set, Pods will scale to zero (or to 1 if enable-scale-to-zero is false per the `ConfigMap`). +minScale:: If the `minScale` annotation is not set, pods will scale to zero (or to 1 if enable-scale-to-zero is false per the `ConfigMap`). -maxScale:: If the `maxScale` annotation is not set, there will be no upper limit for the number of Pods created. 
+maxScale:: If the `maxScale` annotation is not set, there will be no upper limit for the number of pods created. `minScale` and `maxScale` can be configured as follows in the revision template: diff --git a/modules/copying-files-pods-and-containers.adoc b/modules/copying-files-pods-and-containers.adoc index 03298a6b42..bba6fce6db 100644 --- a/modules/copying-files-pods-and-containers.adoc +++ b/modules/copying-files-pods-and-containers.adoc @@ -3,7 +3,7 @@ // * support/troubleshooting/investigating-pod-issues.adoc [id="copying-files-pods-and-containers_{context}"] -= Copying files to and from Pods and containers += Copying files to and from pods and containers You can copy files to and from a Pod to test configuration changes or gather diagnostic information. diff --git a/modules/create-a-kubeletconfig-crd-to-edit-kubelet-parameters.adoc b/modules/create-a-kubeletconfig-crd-to-edit-kubelet-parameters.adoc index a4833788b5..2ce18e2f67 100644 --- a/modules/create-a-kubeletconfig-crd-to-edit-kubelet-parameters.adoc +++ b/modules/create-a-kubeletconfig-crd-to-edit-kubelet-parameters.adoc @@ -26,7 +26,7 @@ This provides a list of the available machine configuration objects you can select. By default, the two kubelet-related configs are `01-master-kubelet` and `01-worker-kubelet`. -. To check the current value of max Pods per node, run: +. To check the current value of max pods per node, run: + [source,terminal] ---- @@ -54,7 +54,7 @@ Allocatable: pods: 250 ---- -. To set the max Pods per node on the worker nodes, create a custom resource file +. To set the max pods per node on the worker nodes, create a custom resource file that contains the kubelet configuration. For example, `change-maxPods-cr.yaml`: + [source,yaml] diff --git a/modules/creating-serverless-apps-yaml.adoc b/modules/creating-serverless-apps-yaml.adoc index 49a5a2ab79..7f632d2cf9 100644 --- a/modules/creating-serverless-apps-yaml.adoc +++ b/modules/creating-serverless-apps-yaml.adoc @@ -39,4 +39,4 @@ $ oc apply -f After the Service is created and the application is deployed, Knative creates an immutable Revision for this version of the application. -Knative also performs network programming to create a Route, Ingress, Service, and load balancer for your application and automatically scales your Pods up and down based on traffic, including inactive Pods. +Knative also performs network programming to create a Route, Ingress, Service, and load balancer for your application and automatically scales your pods up and down based on traffic, including inactive pods. diff --git a/modules/customize-certificates-manually-rotate-service-ca.adoc b/modules/customize-certificates-manually-rotate-service-ca.adoc index f94df01ea8..d73aa2c440 100644 --- a/modules/customize-certificates-manually-rotate-service-ca.adoc +++ b/modules/customize-certificates-manually-rotate-service-ca.adoc @@ -11,7 +11,7 @@ If necessary, you can manually refresh the service CA by using the following pro [WARNING] ==== -A manually-rotated service CA does not maintain trust with the previous service CA. You might experience a temporary service disruption until the Pods in the cluster are restarted, which ensures that Pods are using service serving certificates issued by the new service CA. +A manually-rotated service CA does not maintain trust with the previous service CA. You might experience a temporary service disruption until the pods in the cluster are restarted, which ensures that pods are using service serving certificates issued by the new service CA. 
==== .Prerequisites @@ -39,7 +39,7 @@ which will be used to sign the new service certificates. $ oc delete secret/signing-key -n openshift-service-ca ---- -. To apply the new certificates to all services, restart all the Pods +. To apply the new certificates to all services, restart all the pods in your cluster. This command ensures that all services use the updated certificates. + diff --git a/modules/customize-certificates-understanding-service-serving.adoc b/modules/customize-certificates-understanding-service-serving.adoc index fe76dc8daf..4b18de035d 100644 --- a/modules/customize-certificates-understanding-service-serving.adoc +++ b/modules/customize-certificates-understanding-service-serving.adoc @@ -21,7 +21,7 @@ The service CA certificate, which issues the service certificates, is valid for [NOTE] ==== -You can use the following command to manually restart all Pods in the cluster. Be aware that running this command causes a service interruption, because it deletes every running Pod in every namespace. These Pods will automatically restart after they are deleted. +You can use the following command to manually restart all pods in the cluster. Be aware that running this command causes a service interruption, because it deletes every running pod in every namespace. These pods will automatically restart after they are deleted. [source,terminal] ---- diff --git a/modules/dedicated-cluster-install-deploy.adoc b/modules/dedicated-cluster-install-deploy.adoc index 1d9d98b745..98d3289ce8 100644 --- a/modules/dedicated-cluster-install-deploy.adoc +++ b/modules/dedicated-cluster-install-deploy.adoc @@ -122,7 +122,7 @@ spec: .. Click *Create* to deploy the logging instance, which creates the Cluster Logging and Elasticsearch Custom Resources. -. Verify that the Pods for the Cluster Logging instance deployed: +. Verify that the pods for the Cluster Logging instance deployed: .. Switch to the *Workloads* → *Pods* page. diff --git a/modules/dedicated-storage-expanding-filesystem-pvc.adoc b/modules/dedicated-storage-expanding-filesystem-pvc.adoc index 74f48066c8..35b782c6eb 100644 --- a/modules/dedicated-storage-expanding-filesystem-pvc.adoc +++ b/modules/dedicated-storage-expanding-filesystem-pvc.adoc @@ -9,9 +9,9 @@ Expanding PVCs based on volume types that need file system re-sizing, such as AWS EBS, is a two-step process. This process involves expanding volume objects in the cloud provider and then expanding the file system on the actual node. These steps occur automatically -after the PVC object is edited and might require a Pod restart to take effect. +after the PVC object is edited and might require a pod restart to take effect. -Expanding the file system on the node only happens when a new Pod is started +Expanding the file system on the node only happens when a new pod is started with the volume. .Prerequisites @@ -77,13 +77,13 @@ Mounted By: mysql-1-q4nz7 <3> ---- <1> The current capacity of the PVC. <2> Any relevant conditions are displayed here. -<3> The Pod that is currently mounting this volume +<3> The pod that is currently mounting this volume -. If the output of the previous command included a message to restart the Pod, delete the mounting Pod that it specified: +. If the output of the previous command included a message to restart the pod, delete the mounting pod that it specified: + ---- $ oc delete pod mysql-1-q4nz7 ---- -. Once the Pod is running, the newly requested size is available and the +. 
Once the pod is running, the newly requested size is available and the `FileSystemResizePending` condition is removed from the PVC. diff --git a/modules/deployments-ab-testing-lb.adoc b/modules/deployments-ab-testing-lb.adoc index 3db138ea78..e841546c7d 100644 --- a/modules/deployments-ab-testing-lb.adoc +++ b/modules/deployments-ab-testing-lb.adoc @@ -18,7 +18,7 @@ between `0` and `256`. When the `weight` is `0`, the service does not participat but continues to serve existing persistent connections. When the service `weight` is not `0`, each endpoint has a minimum `weight` of `1`. Because of this, a service with a lot of endpoints can end up with higher `weight` than desired. -In this case, reduce the number of Pods to get the desired load balance +In this case, reduce the number of pods to get the desired load balance `weight`. //// @@ -80,7 +80,7 @@ in load-balancing, but continues to serve existing persistent connections. [NOTE] ==== Changes to the route just change the portion of traffic to the various services. -You might have to scale the DeploymentConfigs to adjust the number of Pods +You might have to scale the DeploymentConfigs to adjust the number of pods to handle the anticipated loads. ==== + @@ -171,7 +171,7 @@ This means 99% of traffic is sent to service `ab-example-a` and 1% to service `ab-example-b`. + This command does not scale the DeploymentConfigs. You might be required to do -so to have enough Pods to handle the request load. +so to have enough pods to handle the request load. . Run the command with no flags to verify the current configuration: + @@ -266,7 +266,7 @@ $ oc new-app openshift/deployment-example:v2 \ SUBTITLE="shard B" COLOR="red" ---- -. At this point, both sets of Pods are being served under the route. However, +. At this point, both sets of pods are being served under the route. However, because both browsers (by leaving a connection open) and the router (by default, through a cookie) attempt to preserve your connection to a back-end server, you might not see both shards being returned to you. @@ -291,7 +291,7 @@ $ oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0 + Refresh your browser to show `v1` and `shard A` (in blue). -. If you trigger a deployment on either shard, only the Pods in that shard are +. If you trigger a deployment on either shard, only the pods in that shard are affected. You can trigger a deployment by changing the `SUBTITLE` environment variable in either DeploymentConfig: + diff --git a/modules/deployments-ab-testing.adoc b/modules/deployments-ab-testing.adoc index 72d0685281..f0f4ea2820 100644 --- a/modules/deployments-ab-testing.adoc +++ b/modules/deployments-ab-testing.adoc @@ -13,7 +13,7 @@ to the new version. Because you control the portion of requests to each version, as testing progresses you can increase the fraction of requests to the new version and ultimately stop using the previous version. As you adjust the request load on -each version, the number of Pods in each service might have to be scaled as well +each version, the number of pods in each service might have to be scaled as well to provide the expected performance. 
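As an illustration of the scaling point made above, the following sketch shifts more traffic to the newer service and scales its pods to match. It assumes the `ab-example` route and the `ab-example-a`/`ab-example-b` services used elsewhere in this module; the weights and replica count are illustrative only:

[source,terminal]
----
$ oc set route-backends ab-example ab-example-a=75 ab-example-b=25
$ oc scale dc/ab-example-b --replicas=2
----

Changing route weights only redistributes requests; scaling the DeploymentConfig is what adds pods to absorb the larger share of traffic.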
In addition to upgrading software, you can use this feature to experiment with diff --git a/modules/deployments-assigning-pods-to-nodes.adoc b/modules/deployments-assigning-pods-to-nodes.adoc index 42086bf38e..7d92d5e0f5 100644 --- a/modules/deployments-assigning-pods-to-nodes.adoc +++ b/modules/deployments-assigning-pods-to-nodes.adoc @@ -5,18 +5,18 @@ [id="deployments-assigning-pods-to-nodes_{context}"] = Assigning pods to specific nodes -You can use node selectors in conjunction with labeled nodes to control Pod +You can use node selectors in conjunction with labeled nodes to control pod placement. Cluster administrators can set the default node selector for a project in order -to restrict Pod placement to specific nodes. As a developer, you can set a node -selector on a Pod configuration to restrict nodes even further. +to restrict pod placement to specific nodes. As a developer, you can set a node +selector on a `Pod` configuration to restrict nodes even further. .Procedure -. To add a node selector when creating a pod, edit the Pod configuration, and add -the `nodeSelector` value. This can be added to a single Pod configuration, or in -a Pod template: +. To add a node selector when creating a pod, edit the `Pod` configuration, and add +the `nodeSelector` value. This can be added to a single `Pod` configuration, or in +a `Pod` template: + [source,yaml] ---- @@ -34,12 +34,12 @@ labels added by a cluster administrator. + For example, if a project has the `type=user-node` and `region=east` labels added to a project by the cluster administrator, and you add the above -`disktype: ssd` label to a Pod, the Pod is only ever scheduled on nodes that +`disktype: ssd` label to a pod, the pod is only ever scheduled on nodes that have all three labels. + [NOTE] ==== Labels can only be set to one value, so setting a node selector of `region=west` -in a Pod configuration that has `region=east` as the administrator-set default, -results in a Pod that will never be scheduled. +in a `Pod` configuration that has `region=east` as the administrator-set default, +results in a pod that will never be scheduled. ==== diff --git a/modules/deployments-comparing-deploymentconfigs.adoc b/modules/deployments-comparing-deploymentconfigs.adoc index deee453964..8180578433 100644 --- a/modules/deployments-comparing-deploymentconfigs.adoc +++ b/modules/deployments-comparing-deploymentconfigs.adoc @@ -21,11 +21,11 @@ properties of the link:https://en.wikipedia.org/wiki/CAP_theorem[CAP theorem] that each design has chosen for the rollout process. DeploymentConfigs prefer consistency, whereas Deployments take availability over consistency. -For DeploymentConfigs, if a node running a deployer Pod goes down, it will +For DeploymentConfigs, if a node running a deployer pod goes down, it will not get replaced. The process waits until the node comes back online or is -manually deleted. Manually deleting the node also deletes the corresponding Pod. -This means that you can not delete the Pod to unstick the rollout, as the -kubelet is responsible for deleting the associated Pod. +manually deleted. Manually deleting the node also deletes the corresponding pod. +This means that you can not delete the pod to unstick the rollout, as the +kubelet is responsible for deleting the associated pod. However, Deployments rollouts are driven from a controller manager. 
The controller manager runs in high availability mode on masters and uses leader diff --git a/modules/deployments-creating-rolling-deployment.adoc b/modules/deployments-creating-rolling-deployment.adoc index 75083a03e1..48ba481090 100644 --- a/modules/deployments-creating-rolling-deployment.adoc +++ b/modules/deployments-creating-rolling-deployment.adoc @@ -46,8 +46,8 @@ $ oc tag deployment-example:v2 deployment-example:latest . In your browser, refresh the page until you see the `v2` image. -. When using the CLI, the following command shows how many Pods are on version 1 -and how many are on version 2. In the web console, the Pods are progressively +. When using the CLI, the following command shows how many pods are on version 1 +and how many are on version 2. In the web console, the pods are progressively added to v2 and removed from v1: + [source,terminal] @@ -56,8 +56,8 @@ $ oc describe dc deployment-example ---- During the deployment process, the new ReplicationController is incrementally -scaled up. After the new Pods are marked as `ready` (by passing their readiness +scaled up. After the new pods are marked as `ready` (by passing their readiness check), the deployment process continues. -If the Pods do not become ready, the process aborts, and the DeploymentConfig +If the pods do not become ready, the process aborts, and the DeploymentConfig rolls back to its previous version. diff --git a/modules/deployments-deploymentconfigs.adoc b/modules/deployments-deploymentconfigs.adoc index 60893f8ba3..a2cccc1476 100644 --- a/modules/deployments-deploymentconfigs.adoc +++ b/modules/deployments-deploymentconfigs.adoc @@ -8,7 +8,7 @@ Building on ReplicationControllers, {product-title} adds expanded support for the software development and deployment lifecycle with the concept of _DeploymentConfigs_. In the simplest case, a DeploymentConfig creates a new -ReplicationController and lets it start up Pods. +ReplicationController and lets it start up pods. However, {product-title} deployments from DeploymentConfigs also provide the ability to transition from an existing deployment of an image to a new one and diff --git a/modules/deployments-lifecycle-hooks.adoc b/modules/deployments-lifecycle-hooks.adoc index 40c202cdbb..58dabd91f9 100644 --- a/modules/deployments-lifecycle-hooks.adoc +++ b/modules/deployments-lifecycle-hooks.adoc @@ -16,7 +16,7 @@ pre: failurePolicy: Abort execNewPod: {} <1> ---- -<1> `execNewPod` is a Pod-based lifecycle hook. +<1> `execNewPod` is a pod-based lifecycle hook. Every hook has a `failurePolicy`, which defines the action the strategy should take when a hook failure is encountered: @@ -35,13 +35,13 @@ take when a hook failure is encountered: |=== Hooks have a type-specific field that describes how to execute the hook. -Currently, Pod-based hooks are the only supported hook type, specified by the +Currently, pod-based hooks are the only supported hook type, specified by the `execNewPod` field. [discrete] ==== Pod-based lifecycle hook -Pod-based lifecycle hooks execute hook code in a new Pod derived from the +Pod-based lifecycle hooks execute hook code in a new pod derived from the template in a DeploymentConfig. The following simplified example DeploymentConfig uses the Rolling strategy. @@ -84,14 +84,14 @@ spec: <3> `env` is an optional set of environment variables for the hook container. <4> `volumes` is an optional set of volume references for the hook container. 
-In this example, the `pre` hook will be executed in a new Pod using the +In this example, the `pre` hook will be executed in a new pod using the `openshift/origin-ruby-sample` image from the `helloworld` container. The hook -Pod has the following properties: +pod has the following properties: * The hook command is `/usr/bin/command arg1 arg2`. * The hook container has the `CUSTOM_VAR1=custom_value1` environment variable. * The hook failure policy is `Abort`, meaning the deployment process fails if the hook fails. -* The hook Pod inherits the `data` volume from the DeploymentConfig Pod. +* The hook pod inherits the `data` volume from the DeploymentConfig pod. [id="deployments-setting-lifecycle-hooks_{context}"] == Setting lifecycle hooks diff --git a/modules/deployments-replicationcontrollers.adoc b/modules/deployments-replicationcontrollers.adoc index 3b6b041b8d..9434c4a165 100644 --- a/modules/deployments-replicationcontrollers.adoc +++ b/modules/deployments-replicationcontrollers.adoc @@ -5,22 +5,22 @@ [id="deployments-replicationcontrollers_{context}"] = ReplicationControllers -A ReplicationController ensures that a specified number of replicas of a Pod are running at -all times. If Pods exit or are deleted, the ReplicationController acts to +A ReplicationController ensures that a specified number of replicas of a pod are running at +all times. If pods exit or are deleted, the ReplicationController acts to instantiate more up to the defined number. Likewise, if there are more running than desired, it deletes as many as necessary to match the defined amount. A ReplicationController configuration consists of: * The number of replicas desired (which can be adjusted at runtime). -* A Pod definition to use when creating a replicated Pod. -* A selector for identifying managed Pods. +* A `Pod` definition to use when creating a replicated pod. +* A selector for identifying managed pods. A selector is a set of labels assigned to -the Pods that are managed by the ReplicationController. These labels are -included in the Pod definition that the ReplicationController instantiates. +the pods that are managed by the ReplicationController. These labels are +included in the `Pod` definition that the ReplicationController instantiates. The ReplicationController uses the selector to determine how many -instances of the Pod are already running in order to adjust as needed. +instances of the pod are already running in order to adjust as needed. The ReplicationController does not perform auto-scaling based on load or traffic, as it does not track either. Rather, this requires its replica @@ -51,8 +51,8 @@ spec: protocol: TCP restartPolicy: Always ---- -<1> The number of copies of the Pod to run. -<2> The label selector of the Pod to run. -<3> A template for the Pod the controller creates. -<4> Labels on the Pod should include those from the label selector. +<1> The number of copies of the pod to run. +<2> The label selector of the pod to run. +<3> A template for the pod the controller creates. +<4> Labels on the pod should include those from the label selector. <5> The maximum name length after expanding any parameters is 63 characters. diff --git a/modules/deployments-rolling-strategy.adoc b/modules/deployments-rolling-strategy.adoc index a41bf1dcd5..5fcacbdb92 100644 --- a/modules/deployments-rolling-strategy.adoc +++ b/modules/deployments-rolling-strategy.adoc @@ -56,15 +56,15 @@ replica count and the old ReplicationController has been scaled to zero. 
[IMPORTANT] ==== -When scaling down, the Rolling strategy waits for Pods to become ready so it can -decide whether further scaling would affect availability. If scaled up Pods +When scaling down, the Rolling strategy waits for pods to become ready so it can +decide whether further scaling would affect availability. If scaled up pods never become ready, the deployment process will eventually time out and result in a deployment failure. ==== -The `maxUnavailable` parameter is the maximum number of Pods that can be +The `maxUnavailable` parameter is the maximum number of pods that can be unavailable during the update. The `maxSurge` parameter is the maximum number -of Pods that can be scheduled above the original number of Pods. Both parameters +of pods that can be scheduled above the original number of pods. Both parameters can be set to either a percentage (e.g., `10%`) or an absolute value (e.g., `2`). The default value for both is `25%`. diff --git a/modules/deployments-setting-resources.adoc b/modules/deployments-setting-resources.adoc index a148f56ade..fc5008d009 100644 --- a/modules/deployments-setting-resources.adoc +++ b/modules/deployments-setting-resources.adoc @@ -12,8 +12,8 @@ ephemeral storage technology preview. This feature is disabled by default. ==== A deployment is completed by a Pod that consumes resources (memory, CPU, and -ephemeral storage) on a node. By default, Pods consume unbounded node resources. -However, if a project specifies default container limits, then Pods consume +ephemeral storage) on a node. By default, pods consume unbounded node resources. +However, if a project specifies default container limits, then pods consume resources up to those limits. You can also limit resource use by specifying resource limits as part of the @@ -59,7 +59,7 @@ items is required: the list of resources in the quota. - A limit range defined in your project, where the defaults from the `LimitRange` -object apply to Pods created during the deployment process. +object apply to pods created during the deployment process. -- + To set deployment resources, choose one of the above options. Otherwise, deploy diff --git a/modules/deployments-viewing-logs.adoc b/modules/deployments-viewing-logs.adoc index 3f25997c23..d3d5065ae6 100644 --- a/modules/deployments-viewing-logs.adoc +++ b/modules/deployments-viewing-logs.adoc @@ -19,7 +19,7 @@ process that is responsible for deploying your pods. If it is successful, it returns the logs from a Pod of your application. . You can also view logs from older failed deployment processes, if and only if -these processes (old ReplicationControllers and their deployer Pods) exist and +these processes (old ReplicationControllers and their deployer pods) exist and have not been pruned or deleted manually: + [source,terminal] diff --git a/modules/developer-cli-odo-creating-services-from-yaml-files.adoc b/modules/developer-cli-odo-creating-services-from-yaml-files.adoc index 84df525a39..6279287903 100644 --- a/modules/developer-cli-odo-creating-services-from-yaml-files.adoc +++ b/modules/developer-cli-odo-creating-services-from-yaml-files.adoc @@ -49,7 +49,7 @@ spec: $ odo service create --from-file etcd.yaml ---- -. Verify that the `EtcdCluster` service has started with one Pod instead of the pre-configured three Pods: +. 
Verify that the `EtcdCluster` service has started with one pod instead of the pre-configured three pods:
+
[source,terminal]
----
diff --git a/modules/developer-cli-odo-openshift-cluster-objects.adoc b/modules/developer-cli-odo-openshift-cluster-objects.adoc
index 1be85b0de7..8e91807db5 100644
--- a/modules/developer-cli-odo-openshift-cluster-objects.adoc
+++ b/modules/developer-cli-odo-openshift-cluster-objects.adoc
@@ -8,7 +8,7 @@
== Init Containers
Init containers are specialized containers that run before the application container starts and configure the necessary environment for the application containers to run. Init containers can have files that application images do not have, for example setup scripts. Init containers always run to completion and the application container does not start if any of the init containers fails.
-The Pod created by {odo-title} executes two Init Containers:
+The pod created by {odo-title} executes two Init Containers:
* The `copy-supervisord` Init container.
* The `copy-files-to-volume` Init container.
@@ -35,14 +35,14 @@ The `copy-supervisord` Init container copies necessary files onto an `emptyDir`
The `emptyDir Volume` is mounted at the `/opt/odo` mount point for both the Init container and the application container.
=== `copy-files-to-volume`
-The `copy-files-to-volume` Init container copies files that are in `/opt/app-root` in the S2I builder image onto the Persistent Volume. The volume is then mounted at the same location (`/opt/app-root`) in an application container.
+The `copy-files-to-volume` Init container copies files that are in `/opt/app-root` in the S2I builder image onto the Persistent Volume. The volume is then mounted at the same location (`/opt/app-root`) in an application container.
Without the `PersistentVolume` on `/opt/app-root` the data in this directory is lost when `PersistentVolumeClaim` is mounted at the same location.
The `PVC` is mounted at the `/mnt` mount point inside the Init container.
== Application container
-Application container is the main container inside of which the user-source code executes.
+Application container is the main container inside of which the user-source code executes.
Application container is mounted with two Volumes:
@@ -54,7 +54,7 @@ Application container is mounted with two Volumes:
`SupervisorD` executes and monitors the user-assembled source code. If the user process crashes, `SupervisorD` restarts it.
== `PersistentVolume` and `PersistentVolumeClaim`
-`PersistentVolumeClaim` (`PVC`) is a volume type in Kubernetes which provisions a `PersistentVolume`. The life of a `PersistentVolume` is independent of a Pod lifecycle. The data on the `PersistentVolume` persists across Pod restarts.
+`PersistentVolumeClaim` (`PVC`) is a volume type in Kubernetes which provisions a `PersistentVolume`. The life of a `PersistentVolume` is independent of a pod lifecycle. The data on the `PersistentVolume` persists across pod restarts.
The `copy-files-to-volume` Init container copies necessary files onto the `PersistentVolume`. The main application container utilizes these files at runtime for execution.
@@ -71,7 +71,7 @@ The naming convention of the `PersistentVolume` is -s2idata.
|===
== `emptyDir` Volume
-An `emptyDir` Volume is created when a Pod is assigned to a node, and exists as long as that Pod is running on the node. If the container is restarted or moved, the content of the `emptyDir` is removed, Init container restores the data back to the `emptyDir`. `emptyDir` is initially empty.
+An `emptyDir` Volume is created when a pod is assigned to a node, and exists as long as that pod is running on the node. If the container is restarted or moved, the content of the `emptyDir` is removed, Init container restores the data back to the `emptyDir`. `emptyDir` is initially empty. The `copy-supervisord` Init container copies necessary files onto the `emptyDir` volume. These files are then utilized by the main application container at runtime for execution. @@ -86,6 +86,6 @@ The `copy-supervisord` Init container copies necessary files onto the `emptyDir` |=== == Service -Service is a Kubernetes concept of abstracting the way of communicating with a set of Pods. +Service is a Kubernetes concept of abstracting the way of communicating with a set of pods. -{odo-title} creates a Service for every application Pod to make it accessible for communication. +{odo-title} creates a Service for every application pod to make it accessible for communication. diff --git a/modules/dr-restoring-cluster-state.adoc b/modules/dr-restoring-cluster-state.adoc index de116fbd1c..142424ab73 100644 --- a/modules/dr-restoring-cluster-state.adoc +++ b/modules/dr-restoring-cluster-state.adoc @@ -13,7 +13,7 @@ You can use a saved etcd backup to restore back to a previous cluster state. You * Access to the cluster as a user with the `cluster-admin` role. * SSH access to master hosts. -* A backup directory containing both the etcd snapshot and the resources for the static Pods, which were from the same backup. The file names in the directory must be in the following formats: `snapshot_.db` and `static_kuberesources_.tar.gz`. +* A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: `snapshot_.db` and `static_kuberesources_.tar.gz`. .Procedure @@ -30,13 +30,13 @@ If you do not complete this step, you will not be able to access the master host . Copy the etcd backup directory to the recovery control plane host. + -This procedure assumes that you copied the `backup` directory containing the etcd snapshot and the resources for the static Pods to the `/home/core/` directory of your recovery control plane host. +This procedure assumes that you copied the `backup` directory containing the etcd snapshot and the resources for the static pods to the `/home/core/` directory of your recovery control plane host. -. Stop the static Pods on all other control plane nodes. +. Stop the static pods on all other control plane nodes. + [NOTE] ==== -It is not required to manually stop the Pods on the recovery host. The recovery script will stop the Pods on the recovery host. +It is not required to manually stop the pods on the recovery host. The recovery script will stop the pods on the recovery host. ==== .. Access a control plane host that is not the recovery host. @@ -48,7 +48,7 @@ It is not required to manually stop the Pods on the recovery host. The recovery [core@ip-10-0-154-194 ~]$ sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp ---- -.. Verify that the etcd Pods are stopped. +.. Verify that the etcd pods are stopped. + [source,terminal] ---- @@ -64,7 +64,7 @@ The output of this command should be empty. If it is not empty, wait a few minut [core@ip-10-0-154-194 ~]$ sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp ---- -.. Verify that the Kubernetes API server Pods are stopped. +.. Verify that the Kubernetes API server pods are stopped. 
+ [source,terminal] ---- @@ -168,7 +168,7 @@ NAME READY STATUS RESTARTS etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s ---- + -If the status is `Pending`, or the output lists more than one running etcd Pod, wait a few minutes and check again. +If the status is `Pending`, or the output lists more than one running etcd pod, wait a few minutes and check again. . Force etcd redeployment. + @@ -180,7 +180,7 @@ $ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( ---- <1> The `forceRedeploymentReason` value must be unique, which is why a timestamp is appended. + -When the etcd cluster Operator performs a redeployment, the existing nodes are started with new Pods similar to the initial bootstrap scale up. +When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up. . Verify all nodes are updated to the latest revision. + @@ -288,4 +288,4 @@ etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h ---- -Note that it might take several minutes after completing this procedure for all services to be restored. For example, authentication by using `oc login` might not immediately work until the OAuth server Pods are restarted. +Note that it might take several minutes after completing this procedure for all services to be restored. For example, authentication by using `oc login` might not immediately work until the OAuth server pods are restarted. diff --git a/modules/ephemeral-storage-csi-inline-overview.adoc b/modules/ephemeral-storage-csi-inline-overview.adoc index 173f91a58e..51233b3e13 100644 --- a/modules/ephemeral-storage-csi-inline-overview.adoc +++ b/modules/ephemeral-storage-csi-inline-overview.adoc @@ -7,7 +7,7 @@ Traditionally, volumes that are backed by Container Storage Interface (CSI) drivers can only be used with a PersistentVolume and PersistentVolumeClaim object combination. -This feature allows you to specify CSI volumes directly in the Pod specification, rather than in a PersistentVolume. Inline volumes are ephemeral and do not persist across Pod restarts. +This feature allows you to specify CSI volumes directly in the `Pod` specification, rather than in a PersistentVolume. Inline volumes are ephemeral and do not persist across pod restarts. == Support limitations diff --git a/modules/ephemeral-storage-csi-inline-pod.adoc b/modules/ephemeral-storage-csi-inline-pod.adoc index 521e24a375..790f3a70ab 100644 --- a/modules/ephemeral-storage-csi-inline-pod.adoc +++ b/modules/ephemeral-storage-csi-inline-pod.adoc @@ -3,13 +3,13 @@ // * storage/container_storage_interface/ephemeral-storage-csi-inline-pod-scheduling.adoc [id="ephemeral-storage-csi-inline-pod_{context}"] -= Embedding a CSI inline ephemeral volume in the Pod specification += Embedding a CSI inline ephemeral volume in the `Pod` specification -You can embed a CSI inline ephemeral volume in the Pod specification in {product-title}. At runtime, nested inline volumes follow the ephemeral lifecycle of their associated Pods so that the CSI driver handles all phases of volume operations as Pods are created and destroyed. +You can embed a CSI inline ephemeral volume in the `Pod` specification in {product-title}. At runtime, nested inline volumes follow the ephemeral lifecycle of their associated pods so that the CSI driver handles all phases of volume operations as pods are created and destroyed. .Procedure -. Create the Pod object definition and save it to a file. +. 
Create the `Pod` object definition and save it to a file. . Embed the CSI inline ephemeral volume in the file. + @@ -35,7 +35,7 @@ spec: volumeAttributes: foo: bar ---- -<1> The name of the volume that is used by Pods. +<1> The name of the volume that is used by pods. . Create the object definition file that you saved in the previous step. + diff --git a/modules/feature-gate-features.adoc b/modules/feature-gate-features.adoc index 271434204e..70df7a5909 100644 --- a/modules/feature-gate-features.adoc +++ b/modules/feature-gate-features.adoc @@ -17,7 +17,7 @@ The following features are affected by FeatureGates: |True |`SupportPodPidsLimit` -|Enables support for limiting the number of processes (PIDs) running in a Pod. +|Enables support for limiting the number of processes (PIDs) running in a pod. |True |`MachineHealthCheck` diff --git a/modules/gathering-application-diagnostic-data.adoc b/modules/gathering-application-diagnostic-data.adoc index b07ad53619..fbb8c05e1e 100644 --- a/modules/gathering-application-diagnostic-data.adoc +++ b/modules/gathering-application-diagnostic-data.adoc @@ -5,10 +5,10 @@ [id="gathering-application-diagnostic-data_{context}"] = Gathering application diagnostic data to investigate application failures -Application failures can occur within running application Pods. In these situations, you can retrieve diagnostic information with these strategies: +Application failures can occur within running application pods. In these situations, you can retrieve diagnostic information with these strategies: -* Review events relating to the application Pods. -* Review the logs from the application Pods, including application-specific log files that are not collected by the {product-title} logging framework. +* Review events relating to the application pods. +* Review the logs from the application pods, including application-specific log files that are not collected by the {product-title} logging framework. * Test application functionality interactively and run diagnostic tools in an application container. .Prerequisites @@ -18,30 +18,30 @@ Application failures can occur within running application Pods. In these situati .Procedure -. List events relating to a specific application Pod. The following example retrieves events for an application Pod named `my-app-1-akdlg`: +. List events relating to a specific application pod. The following example retrieves events for an application pod named `my-app-1-akdlg`: + [source,terminal] ---- $ oc describe pod/my-app-1-akdlg ---- -. Review logs from an application Pod: +. Review logs from an application pod: + [source,terminal] ---- $ oc logs -f pod/my-app-1-akdlg ---- -. Query specific logs within a running application Pod. Logs that are sent to stdout are collected by the {product-title} logging framework and are included in the output of the preceding command. The following query is only required for logs that are not sent to stdout. +. Query specific logs within a running application pod. Logs that are sent to stdout are collected by the {product-title} logging framework and are included in the output of the preceding command. The following query is only required for logs that are not sent to stdout. + -.. If an application log can be accessed without root privileges within a Pod, concatenate the log file as follows: +.. If an application log can be accessed without root privileges within a pod, concatenate the log file as follows: + [source,terminal] ---- $ oc exec my-app-1-akdlg -- cat /var/log/my-application.log ---- + -.. 
If root access is required to view an application log, you can start a debug container with root privileges and then view the log file from within the container. Start the debug container from the project's deployment configuration. Pod users typically run with non-root privileges, but running troubleshooting Pods with temporary root privileges can be useful during issue investigation:
+.. If root access is required to view an application log, you can start a debug container with root privileges and then view the log file from within the container. Start the debug container from the project's deployment configuration. Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation:
+
[source,terminal]
----
@@ -50,7 +50,7 @@ $ oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-applicati
+
[NOTE]
====
-You can access an interactive shell with root access within the debug Pod if you run `oc debug dc/ --as-root` without appending `-- `.
+You can access an interactive shell with root access within the debug pod if you run `oc debug dc/ --as-root` without appending `-- `.
====
. Test application functionality interactively and run diagnostic tools in an application container with an interactive shell.
@@ -67,18 +67,18 @@ $ oc exec -it my-app-1-akdlg /bin/bash
+
[NOTE]
====
-Root privileges are required to run some diagnostic binaries. In these situations you can start a debug Pod with root access, based on a problematic Pod's deployment configuration, by running `oc debug dc/ --as-root`. Then, you can run diagnostic binaries as root from within the debug Pod.
+Root privileges are required to run some diagnostic binaries. In these situations you can start a debug pod with root access, based on a problematic pod's deployment configuration, by running `oc debug dc/ --as-root`. Then, you can run diagnostic binaries as root from within the debug pod.
====
. If diagnostic binaries are not available within a container, you can run a host's diagnostic binaries within a container's namespace by using `nsenter`. The following example runs `ip ad` within a container's namespace, using the host's `ip` binary.
-.. Enter into a debug session on the target node. This step instantiates a debug Pod called `-debug`:
+.. Enter into a debug session on the target node. This step instantiates a debug pod called `-debug`:
+
[source,terminal]
----
$ oc debug node/my-cluster-node
----
+
-.. Set `/host` as the root directory within the debug shell. The debug Pod mounts the host's root file system in `/host` within the Pod. By changing the root directory to `/host`, you can run binaries contained in the host's executable paths:
+.. Set `/host` as the root directory within the debug shell. The debug pod mounts the host's root file system in `/host` within the pod. By changing the root directory to `/host`, you can run binaries contained in the host's executable paths:
+
[source,terminal]
----
diff --git a/modules/gathering-operator-logs.adoc b/modules/gathering-operator-logs.adoc
index 8fedca8da3..db29db0f34 100644
--- a/modules/gathering-operator-logs.adoc
+++ b/modules/gathering-operator-logs.adoc
@@ -5,7 +5,7 @@
[id="gathering-operator-logs_{context}"]
= Gathering Operator logs
-If you experience Operator issues, you can gather detailed diagnostic information from Operator Pod logs.
+If you experience Operator issues, you can gather detailed diagnostic information from Operator pod logs.
.Prerequisites @@ -16,43 +16,43 @@ If you experience Operator issues, you can gather detailed diagnostic informatio .Procedure -. List the Operator Pods that are running in the Operator's namespace, plus the Pod status, restarts, and age: +. List the Operator pods that are running in the Operator's namespace, plus the pod status, restarts, and age: + [source,terminal] ---- $ oc get pods -n ---- -. Review logs for an Operator Pod: +. Review logs for an Operator pod: + [source,terminal] ---- $ oc logs pod/ -n ---- + -If an Operator Pod has multiple containers, the preceding command will produce an error that includes the name of each container. Query logs from an individual container: +If an Operator pod has multiple containers, the preceding command will produce an error that includes the name of each container. Query logs from an individual container: + [source,terminal] ---- $ oc logs pod/ -c -n ---- -. If the API is not functional, review Operator Pod and container logs on each master node by using SSH instead. Replace `..` with appropriate values. -.. List Pods on each master node: +. If the API is not functional, review Operator pod and container logs on each master node by using SSH instead. Replace `..` with appropriate values. +.. List pods on each master node: + [source,terminal] ---- $ ssh core@.. sudo crictl pods ---- + -.. For any Operator Pods not showing a `Ready` status, inspect the Pod's status in detail. Replace `` with the Operator Pod's ID listed in the output of the preceding command: +.. For any Operator pods not showing a `Ready` status, inspect the pod's status in detail. Replace `` with the Operator pod's ID listed in the output of the preceding command: + [source,terminal] ---- $ ssh core@.. sudo crictl inspectp ---- + -.. List containers related to an Operator Pod: +.. List containers related to an Operator pod: + [source,terminal] ---- diff --git a/modules/gathering-s2i-diagnostic-data.adoc b/modules/gathering-s2i-diagnostic-data.adoc index 60e5b3d8b8..d65c9f6a85 100644 --- a/modules/gathering-s2i-diagnostic-data.adoc +++ b/modules/gathering-s2i-diagnostic-data.adoc @@ -5,7 +5,7 @@ [id="gathering-s2i-diagnostic-data_{context}"] = Gathering Source-to-Image diagnostic data -The S2I tool runs a build Pod and a deployment Pod in sequence. The deployment Pod is responsible for deploying the application Pods based on the application container image created in the build stage. Watch build, deployment and application Pod status to determine where in the S2I process a failure occurs. Then, focus diagnostic data collection accordingly. +The S2I tool runs a build pod and a deployment pod in sequence. The deployment pod is responsible for deploying the application pods based on the application container image created in the build stage. Watch build, deployment and application pod status to determine where in the S2I process a failure occurs. Then, focus diagnostic data collection accordingly. .Prerequisites @@ -15,17 +15,17 @@ The S2I tool runs a build Pod and a deployment Pod in sequence. The deployment P .Procedure -. Watch the Pod status throughout the S2I process to determine at which stage a failure occurs: +. Watch the pod status throughout the S2I process to determine at which stage a failure occurs: + [source,terminal] ---- $ oc get pods -w <1> ---- -<1> Use `-w` to monitor Pods for changes until you quit the command using `Ctrl+C`. +<1> Use `-w` to monitor pods for changes until you quit the command using `Ctrl+C`. -. Review a failed Pod's logs for errors. +. 
Review a failed pod's logs for errors. + -* *If the build Pod fails*, review the build Pod's logs: +* *If the build pod fails*, review the build pod's logs: + [source,terminal] ---- @@ -34,10 +34,10 @@ $ oc logs -f pod/--build + [NOTE] ==== -Alternatively, you can review the build configuration's logs using `oc logs -f bc/`. The build configuration's logs include the logs from the build Pod. +Alternatively, you can review the build configuration's logs using `oc logs -f bc/`. The build configuration's logs include the logs from the build pod. ==== + -* *If the deployment Pod fails*, review the deployment Pod's logs: +* *If the deployment pod fails*, review the deployment pod's logs: + [source,terminal] ---- @@ -46,10 +46,10 @@ $ oc logs -f pod/--deploy + [NOTE] ==== -Alternatively, you can review the deployment configuration's logs using `oc logs -f dc/`. This outputs logs from the deployment Pod until the deployment Pod completes successfully. The command outputs logs from the application Pods if you run it after the deployment Pod has completed. After a deployment Pod completes, its logs can still be accessed by running `oc logs -f pod/--deploy`. +Alternatively, you can review the deployment configuration's logs using `oc logs -f dc/`. This outputs logs from the deployment pod until the deployment pod completes successfully. The command outputs logs from the application pods if you run it after the deployment pod has completed. After a deployment pod completes, its logs can still be accessed by running `oc logs -f pod/--deploy`. ==== + -* *If an application Pod fails, or if an application is not behaving as expected within a running application Pod*, review the application Pod's logs: +* *If an application pod fails, or if an application is not behaving as expected within a running application pod*, review the application pod's logs: + [source,terminal] ---- diff --git a/modules/graceful-shutdown.adoc b/modules/graceful-shutdown.adoc index 61a02cd3ba..e5d1294248 100644 --- a/modules/graceful-shutdown.adoc +++ b/modules/graceful-shutdown.adoc @@ -43,7 +43,7 @@ Shutting down the nodes using one of these methods allows pods to terminate grac + [NOTE] ==== -It is not necessary to drain master nodes of the standard Pods that ship with {product-title} prior to shutdown. +It is not necessary to drain master nodes of the standard pods that ship with {product-title} prior to shutdown. Cluster administrators are responsible for ensuring a clean restart of their own workloads after the cluster is restarted. If you drained master nodes prior to shutdown because of custom workloads, you must mark the master nodes as schedulable before the cluster will be functional again after restart. ==== diff --git a/modules/how-to-plan-your-environment-according-to-application-requirements.adoc b/modules/how-to-plan-your-environment-according-to-application-requirements.adoc index f4b6ce86c1..f68bb0f603 100644 --- a/modules/how-to-plan-your-environment-according-to-application-requirements.adoc +++ b/modules/how-to-plan-your-environment-according-to-application-requirements.adoc @@ -70,14 +70,14 @@ of applications that would not allow for overcommitment. That memory can not be used for other applications. In the example above, the environment would be roughly 30 percent overcommitted, a common ratio. -The application Pods can access a service either by using environment variables or DNS. 
-If using environment variables, for each active service the variables are injected by the -kubelet when a Pod is run on a node. A cluster-aware DNS server watches the Kubernetes API -for new services and creates a set of DNS records for each one. If DNS is enabled throughout -your cluster, then all Pods should automatically be able to resolve services by their DNS name. -Service discovery using DNS can be used in case you must go beyond 5000 services. When using -environment variables for service discovery, the argument list exceeds the allowed length after -5000 services in a namespace, then the Pods and deployments will start failing. Disable the service +The application pods can access a service either by using environment variables or DNS. +If using environment variables, for each active service the variables are injected by the +kubelet when a pod is run on a node. A cluster-aware DNS server watches the Kubernetes API +for new services and creates a set of DNS records for each one. If DNS is enabled throughout +your cluster, then all pods should automatically be able to resolve services by their DNS name. +Service discovery using DNS can be used if you must go beyond 5000 services. When using +environment variables for service discovery, the argument list exceeds the allowed length after +5000 services in a namespace, and the pods and deployments will start failing. Disable the service links in the deployment's service specification file to overcome this: [source,yaml] diff --git a/modules/how-to-plan-your-environment-according-to-cluster-maximums.adoc index 9bfd41b191..5695270f41 100644 --- a/modules/how-to-plan-your-environment-according-to-cluster-maximums.adoc +++ b/modules/how-to-plan-your-environment-according-to-cluster-maximums.adoc @@ -23,7 +23,7 @@ While planning your environment, determine how many pods are expected to fit per node: ---- -Required Pods per Cluster / Pods per Node = Total Number of Nodes Needed +Required pods per Cluster / pods per Node = Total Number of Nodes Needed ---- The current maximum number of pods per node is 250. However, the number of pods @@ -49,5 +49,5 @@ If you increase the number of nodes to 20, then the pod distribution changes to Where: ---- -Required Pods per Cluster / Total Number of Nodes = Expected Pods per Node +Required pods per Cluster / Total Number of Nodes = Expected pods per Node ---- diff --git a/modules/images-allow-pods-to-reference-images-across-projects.adoc index 16be6b1e50..a2847c8d92 100644 --- a/modules/images-allow-pods-to-reference-images-across-projects.adoc +++ b/modules/images-allow-pods-to-reference-images-across-projects.adoc @@ -2,15 +2,15 @@ // * openshift_images/using-image-pull-secrets [id="images-allow-pods-to-reference-images-across-projects_{context}"] -= Allowing Pods to reference images across projects += Allowing pods to reference images across projects -When using the internal registry, to allow Pods in `project-a` to reference +When using the internal registry, to allow pods in `project-a` to reference images in `project-b`, a service account in `project-a` must be bound to the `system:image-puller` role in `project-b`. .Procedure -. To allow Pods in `project-a` to reference images in `project-b`, bind a service +.
To allow pods in `project-a` to reference images in `project-b`, bind a service account in `project-a` to the `system:image-puller` role in `project-b`: + [source,terminal] diff --git a/modules/images-allow-pods-to-reference-images-from-secure-registries.adoc b/modules/images-allow-pods-to-reference-images-from-secure-registries.adoc index 8cc777347c..f8d202d02f 100644 --- a/modules/images-allow-pods-to-reference-images-from-secure-registries.adoc +++ b/modules/images-allow-pods-to-reference-images-from-secure-registries.adoc @@ -3,7 +3,7 @@ // * virt/virtual_machines/importing_vms/virt-importing-vmware-vm.adoc [id="images-allow-pods-to-reference-images-from-secure-registries_{context}"] -= Allowing Pods to reference images from other secured registries += Allowing pods to reference images from other secured registries The `.dockercfg` `$HOME/.docker/config.json` file for Docker clients is a Docker credentials file that stores your authentication information if you have @@ -46,9 +46,9 @@ $ oc create secret docker-registry \ --docker-email= ---- -* To use a secret for pulling images for Pods, you must add the secret to your +* To use a secret for pulling images for pods, you must add the secret to your service account. The name of the service account in this example should match -the name of the service account the Pod uses. `default` is the default +the name of the service account the pod uses. `default` is the default service account: + [source,terminal] diff --git a/modules/images-other-jenkins-kubernetes-plugin.adoc b/modules/images-other-jenkins-kubernetes-plugin.adoc index 52dfbadf8c..fc8f771566 100644 --- a/modules/images-other-jenkins-kubernetes-plugin.adoc +++ b/modules/images-other-jenkins-kubernetes-plugin.adoc @@ -60,7 +60,7 @@ items: ---- It is also possible to override the specification of the dynamically created -Jenkins agent Pod. The following is a modification to the previous example, which +Jenkins agent pod. The following is a modification to the previous example, which overrides the container memory and specifies an environment variable: The following example is a BuildConfig that the Jenkins Kubernetes Plug-in, diff --git a/modules/images-other-jenkins-permissions.adoc b/modules/images-other-jenkins-permissions.adoc index eb1668e715..9e4f8bff2c 100644 --- a/modules/images-other-jenkins-permissions.adoc +++ b/modules/images-other-jenkins-permissions.adoc @@ -6,12 +6,12 @@ = Jenkins permissions If in the ConfigMap the `` element of the Pod Template XML is -the {product-title} Service Account used for the resulting Pod, the service -account credentials are mounted into the Pod. The permissions are associated +the {product-title} Service Account used for the resulting pod, the service +account credentials are mounted into the pod. The permissions are associated with the service account and control which operations against the -{product-title} master are allowed from the Pod. +{product-title} master are allowed from the pod. -Consider the following scenario with service accounts used for the Pod, which +Consider the following scenario with service accounts used for the pod, which is launched by the Kubernetes Plug-in that runs in the {product-title} Jenkins image: @@ -36,4 +36,4 @@ is the XML configuration for a Pod Template. account is used. * Ensure that whatever service account is used has the necessary permissions, roles, and so on defined within {product-title} to manipulate -whatever projects you choose to manipulate from the within the Pod. 
+whatever projects you choose to manipulate from within the pod. diff --git a/modules/images-update-global-pull-secret.adoc index ffd65fee4a..fdc29121fb 100644 --- a/modules/images-update-global-pull-secret.adoc +++ b/modules/images-update-global-pull-secret.adoc @@ -26,4 +26,4 @@ $ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjs ---- <1> Provide the path to the new pull secret file. -This update is rolled out to all nodes, which can take some time depending on the size of your cluster. During this time, nodes are drained and Pods are rescheduled on the remaining nodes. +This update is rolled out to all nodes, which can take some time depending on the size of your cluster. During this time, nodes are drained and pods are rescheduled on the remaining nodes. diff --git a/modules/infrastructure-moving-logging.adoc index e1590ad913..8436907740 100644 --- a/modules/infrastructure-moving-logging.adoc +++ b/modules/infrastructure-moving-logging.adoc @@ -146,7 +146,7 @@ metadata: .... ---- -* To move the Kibana Pod, edit the Cluster Logging CR to add a node selector: +* To move the Kibana pod, edit the Cluster Logging CR to add a node selector: + [source,yaml] ---- diff --git a/modules/infrastructure-moving-monitoring.adoc index d5447a5376..6760ed97bd 100644 --- a/modules/infrastructure-moving-monitoring.adoc +++ b/modules/infrastructure-moving-monitoring.adoc @@ -65,7 +65,7 @@ to infrastructure nodes. $ oc create -f cluster-monitoring-configmap.yaml ---- -. Watch the monitoring Pods move to the new machines: +. Watch the monitoring pods move to the new machines: + [source,terminal] ---- diff --git a/modules/infrastructure-node-sizing.adoc index 4b017c9f48..f2b7a84d05 100644 --- a/modules/infrastructure-node-sizing.adoc +++ b/modules/infrastructure-node-sizing.adoc @@ -31,7 +31,7 @@ The infrastructure node resource requirements depend on the cluster age, nodes, [IMPORTANT] ==== -These sizing recommendations are based on scale tests, which create a large number of objects across the cluster. These tests include reaching some of the cluster maximums. In the case of 250 and 500 node counts on a {product-title} {product-version} cluster, these maximums are 10000 namespaces with 61000 Pods, 10000 deployments, 181000 secrets, 400 ConfigMaps, and so on. Prometheus is a highly memory intensive application; the resource usage depends on various factors including the number of nodes, objects, the Prometheus metrics scraping interval, metrics or time series, and the age of the cluster. The disk size also depends on the retention period. You must take these factors into consideration and size them accordingly. +These sizing recommendations are based on scale tests, which create a large number of objects across the cluster. These tests include reaching some of the cluster maximums. In the case of 250 and 500 node counts on a {product-title} {product-version} cluster, these maximums are 10000 namespaces with 61000 pods, 10000 deployments, 181000 secrets, 400 ConfigMaps, and so on. Prometheus is a highly memory intensive application; the resource usage depends on various factors including the number of nodes, objects, the Prometheus metrics scraping interval, metrics or time series, and the age of the cluster. The disk size also depends on the retention period.
You must take these factors into consideration and size them accordingly. The sizing recommendations are applicable only for the infrastructure components which get installed during the cluster install - Prometheus, Router and Registry. Logging is a day two operation and the recommendations do not take it into account. ==== diff --git a/modules/insights-operator-about.adoc index ba0d73648a..ef6d30ea9e 100644 --- a/modules/insights-operator-about.adoc +++ b/modules/insights-operator-about.adoc @@ -18,4 +18,3 @@ Red Hat uses all connected cluster information to: * Make {product-title} more intuitive The information the Insights Operator sends is available only to Red Hat Support and engineering teams with the same restrictions as accessing data reported in support cases. Red Hat does not share this information with third parties. - diff --git a/modules/insights-operator-what-information-is-collected.adoc index 73b422bd74..dbccf6e4b4 100644 --- a/modules/insights-operator-what-information-is-collected.adoc +++ b/modules/insights-operator-what-information-is-collected.adoc @@ -12,5 +12,4 @@ The Insights Operator collects: * Errors that occurred in the cluster components * Progress and health information of running updates, and the status of any component upgrades * Details of the platform that {product-title} is deployed on, such as Amazon Web Services, and the region that the cluster is located in -* Information about infrastructure Pods - +* Information about infrastructure pods diff --git a/modules/inspecting-pod-and-container-logs.adoc index d2f1f424b9..bbea9e61d6 100644 --- a/modules/inspecting-pod-and-container-logs.adoc +++ b/modules/inspecting-pod-and-container-logs.adoc @@ -3,9 +3,9 @@ // * support/troubleshooting/investigating-pod-issues.adoc [id="inspecting-pod-and-container-logs_{context}"] -= Inspecting Pod and container logs += Inspecting pod and container logs -You can inspect Pod and container logs for warnings and error messages related to explicit Pod failures. Depending on policy and exit code, Pod and container logs remain available after Pods have been terminated. +You can inspect pod and container logs for warnings and error messages related to explicit pod failures. Depending on policy and exit code, pod and container logs remain available after pods have been terminated. .Prerequisites @@ -15,31 +15,31 @@ You can inspect Pod and container logs for warnings and error messages related t .Procedure -. Query logs for a specific Pod: +. Query logs for a specific pod: + [source,terminal] ---- $ oc logs ---- -. Query logs for a specific container within a Pod: +. Query logs for a specific container within a pod: + [source,terminal] ---- $ oc logs -c ---- + -Logs retrieved using the preceding `oc logs` commands are composed of messages sent to stdout within Pods or containers. +Logs retrieved using the preceding `oc logs` commands are composed of messages sent to stdout within pods or containers. -. Inspect logs contained in `/var/log/` within a Pod. -.. List log files and subdirectories contained in `/var/log` within a Pod: +. Inspect logs contained in `/var/log/` within a pod. +.. List log files and subdirectories contained in `/var/log` within a pod: + [source,terminal] ---- $ oc exec ls -alh /var/log ---- + -.. Query a specific log file contained in `/var/log` within a Pod: +..
Query a specific log file contained in `/var/log` within a pod: + [source,terminal] ---- diff --git a/modules/installation-bare-metal-config-yaml.adoc index 16d03cd51f..346a3d7ab7 100644 --- a/modules/installation-bare-metal-config-yaml.adoc +++ b/modules/installation-bare-metal-config-yaml.adoc @@ -145,8 +145,8 @@ machines for the cluster to use before you finish installing {product-title}. the cluster uses this value as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. <6> The cluster name that you specified in your DNS records. -<7> A block of IP addresses from which Pod IP addresses are allocated. This block must -not overlap with existing physical networks. These IP addresses are used for the Pod network. If you need to access the Pods from an external network, you must configure load balancers and routers to manage the traffic. +<7> A block of IP addresses from which pod IP addresses are allocated. This block must +not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. <8> The subnet prefix length to assign to each individual node. For example, if `hostPrefix` is set to `23`, then each node is assigned a `/23` subnet out of the given `cidr`, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If diff --git a/modules/installation-complete-user-infra.adoc index a5e60c5195..2fea247b1f 100644 --- a/modules/installation-complete-user-infra.adoc +++ b/modules/installation-complete-user-infra.adoc @@ -97,8 +97,8 @@ The command succeeds when the Cluster Version Operator finishes deploying the The Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished. ==== -. Confirm that the Kubernetes API server is communicating with the Pods. -.. To view a list of all Pods, use the following command: +. Confirm that the Kubernetes API server is communicating with the pods. +.. To view a list of all pods, use the following command: + [source,terminal] ---- diff --git a/modules/installation-configuration-parameters.adoc index 150068e64f..04f7300704 100644 --- a/modules/installation-configuration-parameters.adoc +++ b/modules/installation-configuration-parameters.adoc @@ -251,7 +251,7 @@ Not all CCO modes are supported for all cloud providers. For more information on |Object |`networking.clusterNetwork` -|The IP address pools for Pods. The default is `10.128.0.0/14` with a host prefix of `/23`. +|The IP address pools for pods. The default is `10.128.0.0/14` with a host prefix of `/23`. |Array of objects |`networking.clusterNetwork.cidr` diff --git a/modules/installation-dns-user-infra.adoc index 9ea263ebd4..703cf574a0 100644 --- a/modules/installation-dns-user-infra.adoc +++ b/modules/installation-dns-user-infra.adoc @@ -39,7 +39,7 @@ nodes within the cluster. ==== The API server must be able to resolve the worker nodes by the host names that are recorded in Kubernetes. If it cannot resolve the node names, proxied -API calls can fail, and you cannot retrieve logs from Pods.
+API calls can fail, and you cannot retrieve logs from pods. ==== |Routes diff --git a/modules/installation-gcp-user-infra-completing.adoc b/modules/installation-gcp-user-infra-completing.adoc index 0bde2f3542..b1d8b59f03 100644 --- a/modules/installation-gcp-user-infra-completing.adoc +++ b/modules/installation-gcp-user-infra-completing.adoc @@ -98,7 +98,7 @@ service-catalog-controller-manager 4.5.4 True False F storage 4.5.4 True False False 17m ---- -.. Run the following command to view your cluster Pods: +.. Run the following command to view your cluster pods: + [source,terminal] ---- diff --git a/modules/installation-osp-about-kuryr.adoc b/modules/installation-osp-about-kuryr.adoc index 293ffccf45..e33a8f5fc3 100644 --- a/modules/installation-osp-about-kuryr.adoc +++ b/modules/installation-osp-about-kuryr.adoc @@ -9,15 +9,15 @@ link:https://docs.openstack.org/kuryr-kubernetes/latest/[Kuryr] is a container network interface (CNI) plug-in solution that uses the link:https://docs.openstack.org/neutron/latest/[Neutron] and link:https://docs.openstack.org/octavia/latest/[Octavia] {rh-openstack-first} services -to provide networking for Pods and Services. +to provide networking for pods and Services. Kuryr and {product-title} integration is primarily designed for {product-title} clusters running on {rh-openstack} VMs. Kuryr improves the -network performance by plugging {product-title} Pods into {rh-openstack} SDN. -In addition, it provides interconnectivity between Pods and +network performance by plugging {product-title} pods into {rh-openstack} SDN. +In addition, it provides interconnectivity between pods and {rh-openstack} virtual instances. -Kuryr components are installed as Pods in {product-title} using the +Kuryr components are installed as pods in {product-title} using the `openshift-kuryr` namespace: * `kuryr-controller` - a single Service instance installed on a `master` node. @@ -25,7 +25,7 @@ This is modeled in {product-title} as a `Deployment`. * `kuryr-cni` - a container installing and configuring Kuryr as a CNI driver on each {product-title} node. This is modeled in {product-title} as a `DaemonSet`. -The Kuryr controller watches the OpenShift API server for Pod, Service, and +The Kuryr controller watches the OpenShift API server for pod, Service, and namespace create, update, and delete events. It maps the {product-title} API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be diff --git a/modules/installation-osp-default-kuryr-deployment.adoc b/modules/installation-osp-default-kuryr-deployment.adoc index 9a8e4a0659..3257f674a6 100644 --- a/modules/installation-osp-default-kuryr-deployment.adoc +++ b/modules/installation-osp-default-kuryr-deployment.adoc @@ -5,7 +5,7 @@ [id="installation-osp-default-kuryr-deployment_{context}"] = Resource guidelines for installing {product-title} on {rh-openstack} with Kuryr -When using Kuryr SDN, the Pods, Services, namespaces, and network policies are +When using Kuryr SDN, the pods, Services, namespaces, and network policies are using resources from the {rh-openstack} quota; this increases the minimum requirements. Kuryr also has some additional requirements on top of what a default install requires. @@ -47,9 +47,9 @@ If you are using {rh-openstack-first} version 16 with the Amphora driver rather Take the following notes into consideration when setting resources: -* The number of ports that are required is larger than the number of Pods. 
Kuryr -uses ports pools to have pre-created ports ready to be used by Pods and speed up -the Pods' booting time. +* The number of ports that are required is larger than the number of pods. Kuryr +uses ports pools to have pre-created ports ready to be used by pods and speed up +the pods' booting time. * Each NetworkPolicy is mapped into an {rh-openstack} security group, and depending on the NetworkPolicy spec, one or more rules are added to the diff --git a/modules/installation-osp-kuryr-config-yaml.adoc b/modules/installation-osp-kuryr-config-yaml.adoc index 68e1e2a0de..25f55f0f56 100644 --- a/modules/installation-osp-kuryr-config-yaml.adoc +++ b/modules/installation-osp-kuryr-config-yaml.adoc @@ -61,6 +61,6 @@ sshKey: ssh-ed25519 AAAA... Both `trunkSupport` and `octaviaSupport` are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not properly work. Trunks are needed -to connect the Pods to the {rh-openstack} network and Octavia is required to create the +to connect the pods to the {rh-openstack} network and Octavia is required to create the OpenShift Services. ==== diff --git a/modules/installation-osp-kuryr-increase-quota.adoc b/modules/installation-osp-kuryr-increase-quota.adoc index dc2846b10c..e2a3a08c7b 100644 --- a/modules/installation-osp-kuryr-increase-quota.adoc +++ b/modules/installation-osp-kuryr-increase-quota.adoc @@ -6,7 +6,7 @@ = Increasing quota When using Kuryr SDN, you must increase quotas to satisfy the {rh-openstack-first} -resources used by Pods, Services, namespaces, and network policies. +resources used by pods, Services, namespaces, and network policies. .Procedure diff --git a/modules/installation-osp-kuryr-known-limitations.adoc b/modules/installation-osp-kuryr-known-limitations.adoc index 2500336ac8..c493567e4b 100644 --- a/modules/installation-osp-kuryr-known-limitations.adoc +++ b/modules/installation-osp-kuryr-known-limitations.adoc @@ -44,7 +44,7 @@ and UDP, are not supported. There are limitations when using Kuryr SDN that depend on your deployment environment. -Because of Octavia's lack of support for the UDP protocol and multiple listeners, if the {rh-openstack} version is earlier than 16, Kuryr forces Pods to use TCP for DNS resolution. +Because of Octavia's lack of support for the UDP protocol and multiple listeners, if the {rh-openstack} version is earlier than 16, Kuryr forces pods to use TCP for DNS resolution. In Go versions 1.12 and earlier, applications that are compiled with CGO support disabled use UDP only. In this case, the native Go resolver does not recognize the `use-vc` option in `resolv.conf`, which controls whether TCP is forced for DNS resolution. diff --git a/modules/installation-osp-kuryr-octavia-upgrade.adoc b/modules/installation-osp-kuryr-octavia-upgrade.adoc index 8d3d617f69..77d81b50df 100644 --- a/modules/installation-osp-kuryr-octavia-upgrade.adoc +++ b/modules/installation-osp-kuryr-octavia-upgrade.adoc @@ -53,7 +53,7 @@ metadata: ---- <1> Delete this line. The cluster will regenerate it with `ovn` as the value. + -Wait for the Cluster Network Operator to detect the modification and to redeploy the `kuryr-controller` and `kuryr-cni` Pods. This process might take several minutes. +Wait for the Cluster Network Operator to detect the modification and to redeploy the `kuryr-controller` and `kuryr-cni` pods. This process might take several minutes. . 
Verify that the `kuryr-config` ConfigMap annotation is present with `ovn` as its value. On a command line, enter: + diff --git a/modules/installation-osp-verifying-cluster-status.adoc b/modules/installation-osp-verifying-cluster-status.adoc index 52b5f2346e..de95af1ecb 100644 --- a/modules/installation-osp-verifying-cluster-status.adoc +++ b/modules/installation-osp-verifying-cluster-status.adoc @@ -58,7 +58,7 @@ $ oc get clusterversion $ oc get clusteroperator ---- -. View all running Pods in the cluster: +. View all running pods in the cluster: + [source,terminal] ---- diff --git a/modules/installation-user-infra-generate-k8s-manifest-ignition.adoc b/modules/installation-user-infra-generate-k8s-manifest-ignition.adoc index 7d29281a7b..aea3c882f8 100644 --- a/modules/installation-user-infra-generate-k8s-manifest-ignition.adoc +++ b/modules/installation-user-infra-generate-k8s-manifest-ignition.adoc @@ -150,7 +150,7 @@ ifdef::baremetal,baremetal-restricted[] If you are running a three-node cluster, skip the following step to allow the masters to be schedulable. ==== endif::baremetal,baremetal-restricted[] -. Modify the `/manifests/cluster-scheduler-02-config.yml` Kubernetes manifest file to prevent Pods from being scheduled on the control plane machines: +. Modify the `/manifests/cluster-scheduler-02-config.yml` Kubernetes manifest file to prevent pods from being scheduled on the control plane machines: + -- .. Open the `/manifests/cluster-scheduler-02-config.yml` file. diff --git a/modules/investigating-etcd-installation-issues.adoc b/modules/investigating-etcd-installation-issues.adoc index b462e5054d..4ad393e063 100644 --- a/modules/investigating-etcd-installation-issues.adoc +++ b/modules/investigating-etcd-installation-issues.adoc @@ -5,7 +5,7 @@ [id="investigating-etcd-installation-issues_{context}"] = Investigating etcd installation issues -If you experience etcd issues during installation, you can check etcd Pod status and collect etcd Pod logs. You can also verify etcd DNS records and check DNS availability on master nodes. +If you experience etcd issues during installation, you can check etcd pod status and collect etcd pod logs. You can also verify etcd DNS records and check DNS availability on master nodes. .Prerequisites @@ -16,59 +16,59 @@ If you experience etcd issues during installation, you can check etcd Pod status .Procedure -. Check the status of etcd Pods. -.. Review the status of Pods in the `openshift-etcd` namespace: +. Check the status of etcd pods. +.. Review the status of pods in the `openshift-etcd` namespace: + [source,terminal] ---- $ oc get pods -n openshift-etcd ---- + -.. Review the status of Pods in the `openshift-etcd-operator` namespace: +.. Review the status of pods in the `openshift-etcd-operator` namespace: + [source,terminal] ---- $ oc get pods -n openshift-etcd-operator ---- -. If any of the Pods listed by the previous commands are not showing a `Running` or a `Completed` status, gather diagnostic information for the Pod. -.. Review events for the Pod: +. If any of the pods listed by the previous commands are not showing a `Running` or a `Completed` status, gather diagnostic information for the pod. +.. Review events for the pod: + [source,terminal] ---- $ oc describe pod/ -n ---- + -.. Inspect the Pod's logs: +.. Inspect the pod's logs: + [source,terminal] ---- $ oc logs pod/ -n ---- + -.. If the Pod has more than one container, the preceding command will create an error, and the container names will be provided in the error message. 
Inspect logs for each container: +.. If the pod has more than one container, the preceding command will create an error, and the container names will be provided in the error message. Inspect logs for each container: + [source,terminal] ---- $ oc logs pod/ -c -n ---- -. If the API is not functional, review etcd Pod and container logs on each master node by using SSH instead. Replace `..` with appropriate values. -.. List etcd Pods on each master node: +. If the API is not functional, review etcd pod and container logs on each master node by using SSH instead. Replace `..` with appropriate values. +.. List etcd pods on each master node: + [source,terminal] ---- $ ssh core@.. sudo crictl pods --name=etcd- ---- + -.. For any Pods not showing `Ready` status, inspect Pod status in detail. Replace `` with the Pod's ID listed in the output of the preceding command: +.. For any pods not showing `Ready` status, inspect pod status in detail. Replace `` with the pod's ID listed in the output of the preceding command: + [source,terminal] ---- $ ssh core@.. sudo crictl inspectp ---- + -.. List containers related to a Pod: +.. List containers related to a pod: + // TODO: Once https://bugzilla.redhat.com/show_bug.cgi?id=1858239 has been resolved, replace the `grep` command below: //[source,terminal] diff --git a/modules/investigating-master-node-installation-issues.adoc b/modules/investigating-master-node-installation-issues.adoc index 64d2cbbbf6..0c8837fb34 100644 --- a/modules/investigating-master-node-installation-issues.adoc +++ b/modules/investigating-master-node-installation-issues.adoc @@ -84,14 +84,14 @@ It is not possible to run `oc` commands if an installation issue prevents the {p $ oc get daemonsets -n openshift-sdn ---- + -.. If those resources are listed as `Not found`, review Pods in the `openshift-sdn` namespace: +.. If those resources are listed as `Not found`, review pods in the `openshift-sdn` namespace: + [source,terminal] ---- $ oc get pods -n openshift-sdn ---- + -.. Review logs relating to failed {product-title} SDN Pods in the `openshift-sdn` namespace: +.. Review logs relating to failed {product-title} SDN pods in the `openshift-sdn` namespace: + [source,terminal] ---- diff --git a/modules/ipi-install-troubleshooting-bootstrap-vm-cannot-boot.adoc b/modules/ipi-install-troubleshooting-bootstrap-vm-cannot-boot.adoc index dd656bbb80..40e2a524e5 100644 --- a/modules/ipi-install-troubleshooting-bootstrap-vm-cannot-boot.adoc +++ b/modules/ipi-install-troubleshooting-bootstrap-vm-cannot-boot.adoc @@ -31,7 +31,7 @@ To verify the issue, there are three containers related to `ironic`: [core@localhost ~]$ sudo podman logs -f ---- + -Replace `` with one of `ironic-api`, `ironic-conductor`, or `ironic-inspector`. If you encounter an issue where the master nodes are not booting up via PXE, check the `ironic-conductor` Pod. The `ironic-conductor` Pod contains the most detail about the attempt to boot the cluster nodes, because it attempts to log in to the node over IPMI. +Replace `` with one of `ironic-api`, `ironic-conductor`, or `ironic-inspector`. If you encounter an issue where the master nodes are not booting up via PXE, check the `ironic-conductor` pod. The `ironic-conductor` pod contains the most detail about the attempt to boot the cluster nodes, because it attempts to log in to the node over IPMI. .Potential reason The cluster nodes might be in the `ON` state when deployment started. 
diff --git a/modules/ipi-install-troubleshooting-bootstrap-vm-inspecting-logs.adoc b/modules/ipi-install-troubleshooting-bootstrap-vm-inspecting-logs.adoc index 8a4869912a..ba14da6243 100644 --- a/modules/ipi-install-troubleshooting-bootstrap-vm-inspecting-logs.adoc +++ b/modules/ipi-install-troubleshooting-bootstrap-vm-inspecting-logs.adoc @@ -53,14 +53,14 @@ If the bootstrap VM cannot access the URL to the images, use the `curl` command [core@localhost ~]$ journalctl -b -f -u bootkube.service ---- -. Verify all the Pods, including `dnsmasq`, `mariadb`, `httpd`, and `ironic`, are running: +. Verify all the pods, including `dnsmasq`, `mariadb`, `httpd`, and `ironic`, are running: + [source,terminal] ---- [core@localhost ~]$ sudo podman ps ---- -. If there are issues with the Pods, check the logs of the containers with issues. To check the log of the `ironic-api`, execute the following: +. If there are issues with the pods, check the logs of the containers with issues. To check the log of the `ironic-api`, execute the following: + [source,terminal] ---- diff --git a/modules/ipi-install-troubleshooting-misc-issues.adoc b/modules/ipi-install-troubleshooting-misc-issues.adoc index a331f39402..b09a4fb445 100644 --- a/modules/ipi-install-troubleshooting-misc-issues.adoc +++ b/modules/ipi-install-troubleshooting-misc-issues.adoc @@ -17,7 +17,7 @@ The Cluster Network Operator is responsible for deploying the networking compone .Procedure -. Inspect the Pods in the `openshift-network-operator` namespace: +. Inspect the pods in the `openshift-network-operator` namespace: + [source,terminal] ---- diff --git a/modules/jaeger-deploy-default.adoc b/modules/jaeger-deploy-default.adoc index ef88d58ed9..6ed9382019 100644 --- a/modules/jaeger-deploy-default.adoc +++ b/modules/jaeger-deploy-default.adoc @@ -94,7 +94,7 @@ metadata: $ oc create -n jaeger-system -f jaeger.yaml ---- -. Run the following command to watch the progress of the Pods during the installation process: +. Run the following command to watch the progress of the pods during the installation process: + [source,terminal] ---- diff --git a/modules/jaeger-deploy-production-es.adoc b/modules/jaeger-deploy-production-es.adoc index fc12b4b300..c7486e57c2 100644 --- a/modules/jaeger-deploy-production-es.adoc +++ b/modules/jaeger-deploy-production-es.adoc @@ -109,7 +109,7 @@ $ oc new-project jaeger-system $ oc create -n jaeger-system -f jaeger-production.yaml ---- + -. Run the following command to watch the progress of the Pods during the installation process: +. Run the following command to watch the progress of the pods during the installation process: + [source,terminal] ---- diff --git a/modules/jaeger-deploy-streaming.adoc b/modules/jaeger-deploy-streaming.adoc index 77868c4132..d45b8e11db 100644 --- a/modules/jaeger-deploy-streaming.adoc +++ b/modules/jaeger-deploy-streaming.adoc @@ -116,7 +116,7 @@ $ oc new-project jaeger-system $ oc create -n jaeger-system -f jaeger-streaming.yaml ---- + -. Run the following command to watch the progress of the Pods during the installation process: +. Run the following command to watch the progress of the pods during the installation process: + [source,terminal] ---- diff --git a/modules/jaeger-upgrading-es.adoc b/modules/jaeger-upgrading-es.adoc index 00cb3f28e2..1aa8bf0997 100644 --- a/modules/jaeger-upgrading-es.adoc +++ b/modules/jaeger-upgrading-es.adoc @@ -79,7 +79,7 @@ $ oc delete -f jaeger-prod-elasticsearch.yaml $ oc create -f ---- + -. Validate that your Pods have restarted: +. 
Validate that your pods have restarted: + [source,terminal] ---- diff --git a/modules/machine-api-overview.adoc index 510e695e86..0a7bf1fa64 100644 --- a/modules/machine-api-overview.adoc +++ b/modules/machine-api-overview.adoc @@ -24,7 +24,7 @@ providerSpec, which describes the types of compute nodes that are offered for di cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata. MachineSets:: Groups of machines. MachineSets are to machines as -ReplicaSets are to Pods. If you need more machines or must scale them down, +ReplicaSets are to pods. If you need more machines or must scale them down, you change the *replicas* field on the MachineSet to meet your compute need. The following custom resources add more capabilities to your cluster: diff --git a/modules/managing-memcached-operator-using-olm.adoc index 67b4948ab2..bebcf85abb 100644 --- a/modules/managing-memcached-operator-using-olm.adoc +++ b/modules/managing-memcached-operator-using-olm.adoc @@ -111,7 +111,7 @@ resource has the kind `Memcached`. Native Kubernetes RBAC also applies to each Operator. + Creating instances of Memcached in this namespace will now trigger the Memcached -Operator to instantiate Pods running the memcached server that are managed by +Operator to instantiate pods running the memcached server that are managed by the Operator. The more `CustomResources` you create, the more unique instances of Memcached are managed by the Memcached Operator running in this namespace. + @@ -181,7 +181,7 @@ execution. It is up to the Operators themselves to execute any data migrations required to upgrade resources to run under a new version of the Operator. + The following commands demonstrate applying a new Operator manifest file using a -new version of the Operator and shows that the Pods remain executing: +new version of the Operator and show that the pods remain executing: .. Download the manifest: + @@ -198,7 +198,7 @@ $ curl -Lo memcachedoperator.0.18.1.csv.yaml \ $ oc apply -f memcachedoperator.0.18.1.csv.yaml ---- -.. View the Pods: +.. View the pods: + [source,terminal] ---- diff --git a/modules/metering-debugging.adoc index 8323573201..174aedaaf1 100644 --- a/modules/metering-debugging.adoc +++ b/modules/metering-debugging.adoc @@ -23,7 +23,7 @@ $ oc -n openshift-metering logs -f "$(oc -n openshift-metering get pods -l app=r [id="metering-query-presto-using-presto-cli_{context}"] == Query Presto using presto-cli -The following command opens an interactive presto-cli session where you can query Presto. This session runs in the same container as Presto and launches an additional Java instance, which can create memory limits for the Pod. If this occurs, you should increase the memory request and limits of the Presto Pod. +The following command opens an interactive presto-cli session where you can query Presto. This session runs in the same container as Presto and launches an additional Java instance, which can create memory limits for the pod. If this occurs, you should increase the memory request and limits of the Presto pod. By default, Presto is configured to communicate using TLS.
You must use the following command to run Presto queries: @@ -87,7 +87,7 @@ presto:default> [id="metering-query-hive-using-beeline_{context}"] == Query Hive using beeline -The following opens an interactive beeline session where you can query Hive. This session runs in the same container as Hive and launches an additional Java instance, which can create memory limits for the Pod. If this occurs, you should increase the memory request and limits of the Hive Pod. +The following opens an interactive beeline session where you can query Hive. This session runs in the same container as Hive and launches an additional Java instance, which can create memory limits for the pod. If this occurs, you should increase the memory request and limits of the Hive pod. [source,terminal] ---- @@ -172,7 +172,7 @@ Run the following command to port-forward to the first HDFS datanode: ---- $ oc -n openshift-metering port-forward hdfs-datanode-0 9864 <1> ---- -<1> To check other datanodes, replace `hdfs-datanode-0` with the Pod you want to view information on. +<1> To check other datanodes, replace `hdfs-datanode-0` with the pod you want to view information on. [id="metering-ansible-operator_{context}"] == Metering Ansible Operator @@ -180,7 +180,7 @@ Metering uses the Ansible Operator to watch and reconcile resources in a cluster [id="metering-accessing-ansible-logs_{context}"] === Accessing Ansible logs -In the default installation, the Metering Operator is deployed as a Pod. In this case, you can check the logs of the Ansible container within this Pod: +In the default installation, the Metering Operator is deployed as a pod. In this case, you can check the logs of the Ansible container within this pod: [source,terminal] ---- diff --git a/modules/metering-install-operator.adoc b/modules/metering-install-operator.adoc index ee69f32d16..9e221cacf6 100644 --- a/modules/metering-install-operator.adoc +++ b/modules/metering-install-operator.adoc @@ -32,7 +32,7 @@ metadata: openshift.io/cluster-monitoring: "true" ---- <1> It is strongly recommended to deploy metering in the `openshift-metering` namespace. -<2> Include this annotation before configuring specific node selectors for the operand Pods. +<2> Include this annotation before configuring specific node selectors for the operand pods. . In the {product-title} web console, click *Operators* -> *OperatorHub*. Filter for `metering` to find the Metering Operator. @@ -72,7 +72,7 @@ metadata: openshift.io/cluster-monitoring: "true" ---- <1> It is strongly recommended to deploy metering in the `openshift-metering` namespace. -<2> Include this annotation before configuring specific node selectors for the operand Pods. +<2> Include this annotation before configuring specific node selectors for the operand pods. . Create the namespace object: + diff --git a/modules/metering-install-verify.adoc b/modules/metering-install-verify.adoc index c746bbf314..a4cf43dd2b 100644 --- a/modules/metering-install-verify.adoc +++ b/modules/metering-install-verify.adoc @@ -33,19 +33,19 @@ metering-operator.v{product-version}.0 Metering ---- -- -* Check that all required Pods in the `openshift-metering` namespace are created. This can be done through either the web console or CLI. +* Check that all required pods in the `openshift-metering` namespace are created. This can be done through either the web console or CLI. + -- [NOTE] ==== -Many Pods rely on other components to function before they themselves can be considered ready. Some Pods may restart if other Pods take too long to start. 
This is to be expected during the Metering Operator installation. +Many pods rely on other components to function before they themselves can be considered ready. Some pods may restart if other pods take too long to start. This is to be expected during the Metering Operator installation. ==== .Procedure (UI) -* Navigate to *Workloads* -> *Pods* in the metering namespace and verify that Pods are being created. This can take several minutes after installing the metering stack. +* Navigate to *Workloads* -> *Pods* in the metering namespace and verify that pods are being created. This can take several minutes after installing the metering stack. .Procedure (CLI) -* Check that all required Pods in the `openshift-metering` namespace are created: +* Check that all required pods in the `openshift-metering` namespace are created: + [source,terminal] ---- @@ -92,4 +92,4 @@ pod-usage-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T1 pod-usage-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T18:54:20Z 9m49s ---- -After all Pods are ready and you have verified that data is being imported, you can begin using metering to collect data and report on your cluster. +After all pods are ready and you have verified that data is being imported, you can begin using metering to collect data and report on your cluster. diff --git a/modules/metering-troubleshooting.adoc b/modules/metering-troubleshooting.adoc index b58e8156a6..1829153a56 100644 --- a/modules/metering-troubleshooting.adoc +++ b/modules/metering-troubleshooting.adoc @@ -5,7 +5,7 @@ [id="metering-troubleshooting_{context}"] = Troubleshooting metering -A common issue with metering is Pods failing to start. Pods might fail to start due to lack of resources or if they have a dependency on a resource that does not exist, such as a StorageClass or Secret. +A common issue with metering is pods failing to start. Pods might fail to start due to lack of resources or if they have a dependency on a resource that does not exist, such as a StorageClass or Secret. [id="metering-not-enough-compute-resources_{context}"] == Not enough compute resources diff --git a/modules/metering-uninstall.adoc b/modules/metering-uninstall.adoc index 1be37958c2..d1ebf57b69 100644 --- a/modules/metering-uninstall.adoc +++ b/modules/metering-uninstall.adoc @@ -20,7 +20,7 @@ Uninstall your metering namespace, for example the `openshift-metering` namespac $ oc --namespace openshift-metering delete meteringconfig --all ---- -. After the previous step is complete, verify that all Pods in the `openshift-metering` namespace are deleted or are reporting a terminating state: +. 
After the previous step is complete, verify that all pods in the `openshift-metering` namespace are deleted or are reporting a terminating state: + [source,terminal] ---- diff --git a/modules/migrating-reconcile-code.adoc b/modules/migrating-reconcile-code.adoc index b5e002878f..85d96312cf 100644 --- a/modules/migrating-reconcile-code.adoc +++ b/modules/migrating-reconcile-code.adoc @@ -56,7 +56,7 @@ func add(mgr manager.Manager, r reconcile.Reconciler) error { // Watch for changes to the primary resource Memcached err = c.Watch(&source.Kind{Type: &cachev1alpha1.Memcached{}}, &handler.EnqueueRequestForObject{}) - // Watch for changes to the secondary resource Pods and enqueue reconcile requests for the owner Memcached + // Watch for changes to the secondary resource pods and enqueue reconcile requests for the owner Memcached err = c.Watch(&source.Kind{Type: &corev1.Pod{}}, &handler.EnqueueRequestForOwner{ IsController: true, OwnerType: &cachev1alpha1.Memcached{}, diff --git a/modules/migration-changing-migration-plan-limits.adoc b/modules/migration-changing-migration-plan-limits.adoc index ae7fdae645..98efc6e0ac 100644 --- a/modules/migration-changing-migration-plan-limits.adoc +++ b/modules/migration-changing-migration-plan-limits.adoc @@ -43,7 +43,7 @@ mig_namespace_limit: 10 <7> <3> Specifies the number of CPU units available for Migration Controller requests. `100m` represents 0.1 CPU units (100 * 1e-3). <4> Specifies the amount of memory available for Migration Controller requests. <5> Specifies the number of PVs that can be migrated. -<6> Specifies the number of Pods that can be migrated. +<6> Specifies the number of pods that can be migrated. <7> Specifies the number of namespaces that can be migrated. . Create a migration plan that uses the updated parameters to verify the changes. diff --git a/modules/migration-installing-cam-operator-ocp-3.adoc b/modules/migration-installing-cam-operator-ocp-3.adoc index 99872b2cce..e5d44db179 100644 --- a/modules/migration-installing-cam-operator-ocp-3.adoc +++ b/modules/migration-installing-cam-operator-ocp-3.adoc @@ -141,7 +141,7 @@ rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists $ oc create -f controller-3.yml ---- -. Verify that the Velero and Restic Pods are running: +. Verify that the Velero and Restic pods are running: + [source,terminal] ---- diff --git a/modules/migration-installing-cam-operator-ocp-4.adoc b/modules/migration-installing-cam-operator-ocp-4.adoc index b9102a759f..162bd0cbf7 100644 --- a/modules/migration-installing-cam-operator-ocp-4.adoc +++ b/modules/migration-installing-cam-operator-ocp-4.adoc @@ -87,8 +87,8 @@ endif::[] . Click *Create*. ifdef::source-4-1-4,source-4-2-4[] -. Click *Workloads* -> *Pods* to verify that the Restic and Velero Pods are running. +. Click *Workloads* -> *Pods* to verify that the Restic and Velero pods are running. endif::[] ifdef::disconnected-3-4,disconnected-target-4-1-4,disconnected-target-4-2-4,migrating-3-4,target-4-2-4,target-4-1-4[] -. Click *Workloads* -> *Pods* to verify that the Controller Manager, Migration UI, Restic, and Velero Pods are running. +. Click *Workloads* -> *Pods* to verify that the Controller Manager, Migration UI, Restic, and Velero pods are running. 
endif::[] diff --git a/modules/migration-running-migration-plan-cam.adoc b/modules/migration-running-migration-plan-cam.adoc index 0114c025e4..8595668e5e 100644 --- a/modules/migration-running-migration-plan-cam.adoc +++ b/modules/migration-running-migration-plan-cam.adoc @@ -45,5 +45,5 @@ You can run *Stage* multiple times to reduce the actual migration time. .. Click *Home* -> *Projects*. .. Click the migrated project to view its status. .. In the *Routes* section, click *Location* to verify that the application is functioning, if applicable. -.. Click *Workloads* -> *Pods* to verify that the Pods are running in the migrated namespace. +.. Click *Workloads* -> *Pods* to verify that the pods are running in the migrated namespace. .. Click *Storage* -> *Persistent volumes* to verify that the migrated persistent volume is correctly provisioned. diff --git a/modules/nodes-cluster-limit-ranges-about.adoc b/modules/nodes-cluster-limit-ranges-about.adoc index c95a85972f..a0ade988f2 100644 --- a/modules/nodes-cluster-limit-ranges-about.adoc +++ b/modules/nodes-cluster-limit-ranges-about.adoc @@ -7,7 +7,7 @@ A limit range, defined by a LimitRange object, restricts resource consumption in a project. In the project you can set specific resource -limits for a Pod, container, image, image stream, or +limits for a pod, container, image, image stream, or persistent volume claim (PVC). All requests to create and modify resources are evaluated against each @@ -23,7 +23,7 @@ container memory that you can specify is 100Mi. ==== endif::[] -The following shows a limit range object for all components: Pod, container, +The following shows a limit range object for all components: pod, container, image, image stream, or PVC. You can configure limits for any or all of these components in the same object. You create a different limit range object for each project where you want to control resources. diff --git a/modules/nodes-cluster-limit-ranges-creating.adoc b/modules/nodes-cluster-limit-ranges-creating.adoc index 9183d950bf..9b74d681e5 100644 --- a/modules/nodes-cluster-limit-ranges-creating.adoc +++ b/modules/nodes-cluster-limit-ranges-creating.adoc @@ -55,9 +55,9 @@ spec: <1> Specify a name for the LimitRange object. <2> To set limits for a pod, specify the minimum and maximum CPU and memory requests as needed. <3> To set limits for a container, specify the minimum and maximum CPU and memory requests as needed. -<4> Optional. For a container, specify the default amount of CPU or memory that a container can use, if not specified in the Pod spec. -<5> Optional. For a container, specify the default amount of CPU or memory that a container can request, if not specified in the Pod spec. -<6> Optional. For a container, specify the maximum limit-to-request ratio that can be specified in the Pod spec. +<4> Optional. For a container, specify the default amount of CPU or memory that a container can use, if not specified in the `Pod` spec. +<5> Optional. For a container, specify the default amount of CPU or memory that a container can request, if not specified in the `Pod` spec. +<6> Optional. For a container, specify the maximum limit-to-request ratio that can be specified in the `Pod` spec. <7> To set limits for an Image object, set the maximum size of an image that can be pushed to an internal registry. <8> To set limits for an image stream, set the maximum number of image tags and references that can be in the imagestream object file, as needed. 
<9> To set limits for a persistent volume claim, set the minimum and maximum amount of storage that can be requested. diff --git a/modules/nodes-cluster-limit-ranges-limits.adoc b/modules/nodes-cluster-limit-ranges-limits.adoc index 365e2c003e..af16189580 100644 --- a/modules/nodes-cluster-limit-ranges-limits.adoc +++ b/modules/nodes-cluster-limit-ranges-limits.adoc @@ -5,34 +5,34 @@ [id="nodes-cluster-limit-ranges-limits_{context}"] = About component limits -The following examples show limit range parameters for each component. The -examples are broken out for clarity. You can create a single limit range object -for any or all components as necessary. +The following examples show limit range parameters for each component. The +examples are broken out for clarity. You can create a single limit range object +for any or all components as necessary. [id="nodes-cluster-limit-container-limits"] == Container limits -A limit range allows you to specify the minimum and maximum CPU and memory that each container -in a Pod can request for a specific project. If a container is created in the project, -the container CPU and memory requests in the Pod spec must comply with the values set in the -limit range object. If not, the Pod does not get created. +A limit range allows you to specify the minimum and maximum CPU and memory that each container +in a pod can request for a specific project. If a container is created in the project, +the container CPU and memory requests in the `Pod` spec must comply with the values set in the +limit range object. If not, the pod does not get created. -* The container CPU or memory request and limit must be greater than or equal to the +* The container CPU or memory request and limit must be greater than or equal to the `min` resource constraint for containers that are specified in the limit range object. -* The container CPU or memory request must be less than or equal to the +* The container CPU or memory request must be less than or equal to the `max` resource constraint for containers that are specified in the limit range object. + If the limit range defines a `max` CPU, you do not need to define a CPU -`request` value in the Pod spec. But you must specify a CPU `limit` value that +`request` value in the `Pod` spec. But you must specify a CPU `limit` value that satisfies the maximum CPU constraint specified in the limit range. - -* The ratio of the container limits to requests must be + +* The ratio of the container limits to requests must be less than or equal to the `maxLimitRequestRatio` value for containers that is specified in the limit range object. + If the limit range defines a `maxLimitRequestRatio` constraint, any new -containers must have both a `request` and a `limit` value. {product-title} +containers must have both a `request` and a `limit` value. {product-title} calculates the limit-to-request ratio by dividing the `limit` by the `request`. This value should be a non-negative integer greater than 1. + @@ -40,8 +40,8 @@ For example, if a container has `cpu: 500` in the `limit` value, and `cpu: 100` in the `request` value, the limit-to-request ratio for `cpu` is `5`. This ratio must be less than or equal to the `maxLimitRequestRatio`. 
-If the Pod spec does not specify a container resource memory or limit, -the `default` or `defaultRequest` CPU and memory values for containers +If the `Pod` spec does not specify a container resource memory or limit, +the `default` or `defaultRequest` CPU and memory values for containers specified in the limit range object are assigned to the container. .Container LimitRange object definition @@ -74,35 +74,35 @@ spec: <2> The maximum amount of CPU that a single container in a Pod can request. <3> The maximum amount of memory that a single container in a Pod can request. <4> The minimum amount of CPU that a single container in a Pod can request. -Not setting a `min` value or setting `0` is unlimited, allowing the +Not setting a `min` value or setting `0` is unlimited, allowing the Pod to consume more than the `max` CPU value. <5> The minimum amount of memory that a single container in a Pod can request. -Not setting a `min` value or setting `0` is unlimited, allowing the +Not setting a `min` value or setting `0` is unlimited, allowing the Pod to consume more than the `max` memory value. -<6> The default amount of CPU that a container can use if not specified in the Pod spec. -<7> The default amount of memory that a container can use if not specified in the Pod spec. -<8> The default amount of CPU that a container can request if not specified in the Pod spec. -<9> The default amount of memory that a container can request if not specified in the Pod spec. +<6> The default amount of CPU that a container can use if not specified in the `Pod` spec. +<7> The default amount of memory that a container can use if not specified in the `Pod` spec. +<8> The default amount of CPU that a container can request if not specified in the `Pod` spec. +<9> The default amount of memory that a container can request if not specified in the `Pod` spec. <10> The maximum limit-to-request ratio for a container. [id="nodes-cluster-limit-pod-limits"] == Pod limits -A limit range allows you to specify the minimum and maximum CPU and memory limits for all containers -across a Pod in a given project. To create a container in the project, the container CPU and memory -requests in the Pod spec must comply with the values set in the limit range object. If not, -the Pod does not get created. +A limit range allows you to specify the minimum and maximum CPU and memory limits for all containers +across a pod in a given project. To create a container in the project, the container CPU and memory +requests in the `Pod` spec must comply with the values set in the limit range object. If not, +the pod does not get created. -Across all containers in a Pod, the following must hold true: +Across all containers in a pod, the following must hold true: -* The container CPU or memory request and limit must be greater than or equal to the -`min` resource constraints for Pods that are specified in the limit range object. +* The container CPU or memory request and limit must be greater than or equal to the +`min` resource constraints for pods that are specified in the limit range object. -* The container CPU or memory request and limit must be less than or equal to the -`max` resource constraints for Pods that are specified in the limit range object. +* The container CPU or memory request and limit must be less than or equal to the +`max` resource constraints for pods that are specified in the limit range object. 
-* The ratio of the container limits to requests must be less than or equal to +* The ratio of the container limits to requests must be less than or equal to the `maxLimitRequestRatio` constraint specified in the limit range object. .Pod LimitRange object definition @@ -128,23 +128,23 @@ spec: <1> The name of the limit range object. <2> The maximum amount of CPU that a Pod can request across all containers. <3> The maximum amount of memory that a Pod can request across all containers. -<4> The minimum amount of CPU that a Pod can request across all containers. -Not setting a `min` value or setting `0` is unlimited, allowing the Pod to +<4> The minimum amount of CPU that a Pod can request across all containers. +Not setting a `min` value or setting `0` is unlimited, allowing the Pod to consume more than the `max` CPU value. -<5> The minimum amount of memory that a Pod can request across all containers. -Not setting a `min` value or setting `0` is unlimited, allowing the Pod to -consume more than the `max` memory value. +<5> The minimum amount of memory that a Pod can request across all containers. +Not setting a `min` value or setting `0` is unlimited, allowing the Pod to +consume more than the `max` memory value. <6> The maximum limit-to-request ratio for a container. [id="nodes-cluster-limit-image-limits"] == Image limits -A limit range allows you to specify the maximum size of an image +A limit range allows you to specify the maximum size of an image that can be pushed to an internal registry. When pushing images to an internal registry, the following must hold true: -* The size of the image must be less than or equal to the `max` size for +* The size of the image must be less than or equal to the `max` size for images that is specified in the limit range object. .Image LimitRange object definition @@ -168,7 +168,7 @@ ifdef::openshift-enterprise,openshift-origin[] [NOTE] ==== To prevent blobs that exceed the limit from being uploaded to the registry, the -registry must be configured to enforce quotas. +registry must be configured to enforce quotas. ==== endif::[] @@ -192,13 +192,13 @@ A limit range allows you to specify limits for image streams. For each image stream, the following must hold true: -* The number of image tags in an imagestream specification must be less -than or equal to the `openshift.io/image-tags` constraint in the limit range -object. +* The number of image tags in an imagestream specification must be less +than or equal to the `openshift.io/image-tags` constraint in the limit range +object. -* The number of unique references to images in an imagestream specification -must be less than or equal to the `openshift.io/images` constraint in the limit -range object. +* The number of unique references to images in an imagestream specification +must be less than or equal to the `openshift.io/images` constraint in the limit +range object. .Imagestream LimitRange object definition @@ -215,22 +215,22 @@ spec: openshift.io/image-tags: 20 <2> openshift.io/images: 30 <3> ---- -<1> The name of the limit range object. +<1> The name of the limit range object. <2> The maximum number of unique image tags in the `imagestream.spec.tags` parameter in imagestream spec. -<3> The maximum number of unique image references in the `imagestream.status.tags` +<3> The maximum number of unique image references in the `imagestream.status.tags` parameter in the imagestream spec. The `openshift.io/image-tags` resource represents unique image references. 
Possible references are an `*ImageStreamTag*`, an -`*ImageStreamImage*` and a `*DockerImage*`. Tags can be created using +`*ImageStreamImage*` and a `*DockerImage*`. Tags can be created using the `oc tag` and `oc import-image` commands. No distinction is made between internal and external references. However, each unique reference tagged in an imagestream specification is counted just once. It does not restrict pushes to an internal container image registry in any way, but is useful for tag restriction. -The `openshift.io/images` resource represents unique image names recorded in +The `openshift.io/images` resource represents unique image names recorded in imagestream status. It allows for restriction of a number of images that can be pushed to the internal registry. Internal and external references are not distinguished. @@ -238,14 +238,14 @@ distinguished. [id="nodes-cluster-limit-pvc-limits"] == Persistent volume claim limits -A limit range allows you to restrict the storage requested in a persistent volume claim (PVC). +A limit range allows you to restrict the storage requested in a persistent volume claim (PVC). Across all persistent volume claims in a project, the following must hold true: -* The resource request in a persistent volume claim (PVC) must be greater than or equal +* The resource request in a persistent volume claim (PVC) must be greater than or equal the `min` constraint for PVCs that is specified in the limit range object. -* The resource request in a persistent volume claim (PVC) must be less than or equal +* The resource request in a persistent volume claim (PVC) must be less than or equal the `max` constraint for PVCs that is specified in the limit range object. .PVC LimitRange object definition @@ -259,7 +259,7 @@ metadata: spec: limits: - type: "PersistentVolumeClaim" - min: + min: storage: "2Gi" <2> max: storage: "50Gi" <3> diff --git a/modules/nodes-cluster-resource-configure.adoc b/modules/nodes-cluster-resource-configure.adoc index e6ef819c1b..db36688d31 100644 --- a/modules/nodes-cluster-resource-configure.adoc +++ b/modules/nodes-cluster-resource-configure.adoc @@ -14,7 +14,7 @@ and a label for each project where you want the Operator to control overcommit. * The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange -object or configure limits in Pod specs in order for the overrides to apply. +object or configure limits in `Pod` specs in order for the overrides to apply. .Procedure diff --git a/modules/nodes-cluster-resource-override-deploy-cli.adoc b/modules/nodes-cluster-resource-override-deploy-cli.adoc index 8fc3932cb8..141a7c789a 100644 --- a/modules/nodes-cluster-resource-override-deploy-cli.adoc +++ b/modules/nodes-cluster-resource-override-deploy-cli.adoc @@ -12,7 +12,7 @@ You can use the {product-title} CLI to install the Cluster Resource Override Ope * The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange -object or configure limits in Pod specs in order for the overrides to apply. +object or configure limits in `Pod` specs in order for the overrides to apply. 
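As a rough sketch of the first option, a LimitRange object similar to the following provides container defaults for the Operator to act on. The project name and the resource values are illustrative placeholders, not values required by the Operator:

[source,yaml]
----
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-defaults
  namespace: my-project       # example project; use the project you label for overrides
spec:
  limits:
  - type: Container
    default:                  # limit applied when a container does not set one
      cpu: 500m
      memory: 512Mi
    defaultRequest:           # request applied when a container does not set one
      cpu: 250m
      memory: 256Mi
----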
.Procedure diff --git a/modules/nodes-cluster-resource-override-deploy-console.adoc b/modules/nodes-cluster-resource-override-deploy-console.adoc index 0852d7a813..76611489e6 100644 --- a/modules/nodes-cluster-resource-override-deploy-console.adoc +++ b/modules/nodes-cluster-resource-override-deploy-console.adoc @@ -11,7 +11,7 @@ You can use the {product-title} web console to install the Cluster Resource Over * The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange -object or configure limits in Pod specs in order for the overrides to apply. +object or configure limits in `Pod` specs in order for the overrides to apply. .Procedure diff --git a/modules/nodes-cluster-resource-override.adoc b/modules/nodes-cluster-resource-override.adoc index 77e09a440e..38082fd2c2 100644 --- a/modules/nodes-cluster-resource-override.adoc +++ b/modules/nodes-cluster-resource-override.adoc @@ -33,7 +33,7 @@ spec: ==== The Cluster Resource Override Operator overrides have no effect if limits have not been set on containers. Create a LimitRange object with default limits per individual project -or configure limits in Pod specs in order for the overrides to apply. +or configure limits in `Pod` specs in order for the overrides to apply. ==== When configured, overrides can be enabled per-project by applying the following diff --git a/modules/nodes-containers-sysctls-about.adoc b/modules/nodes-containers-sysctls-about.adoc index 0f714e2942..c03779a4f1 100644 --- a/modules/nodes-containers-sysctls-about.adoc +++ b/modules/nodes-containers-sysctls-about.adoc @@ -15,7 +15,7 @@ process file system. The parameters cover various subsystems, such as: - MDADM (common prefix: *_dev._*) More subsystems are described in -link:https://www.kernel.org/doc/Documentation/sysctl/README[Kernel documentation]. +link:https://www.kernel.org/doc/Documentation/sysctl/README[Kernel documentation]. To get a list of all parameters, run: [source,terminal] @@ -55,7 +55,7 @@ them that need those sysctl settings. Use the taints and toleration feature to m [[safe-vs-unsafe-sysclts]] == Safe versus unsafe sysctls -Sysctls are grouped into _safe_ and _unsafe_ sysctls. +Sysctls are grouped into _safe_ and _unsafe_ sysctls. For a sysctl to be considered safe, it must use proper namespacing and must be properly isolated between pods on the same @@ -73,12 +73,12 @@ in the safe set: - *_net.ipv4.tcp_syncookies_* All safe sysctls are enabled by default. You can use a sysctl in a pod by modifying -the pod specification. +the pod specification. Any sysctl not whitelisted by {product-title} is considered unsafe for {product-title}. Note that being namespaced alone is not sufficient for the sysctl to be considered safe. -All unsafe sysctls are disabled by default, and the cluster administrator must +All unsafe sysctls are disabled by default, and the cluster administrator must manually enable them on a per-node basis. Pods with disabled unsafe sysctls are scheduled but do not launch. diff --git a/modules/nodes-descheduler-about.adoc b/modules/nodes-descheduler-about.adoc index a8b247cf72..491a0523f6 100644 --- a/modules/nodes-descheduler-about.adoc +++ b/modules/nodes-descheduler-about.adoc @@ -5,27 +5,27 @@ [id="nodes-descheduler-about_{context}"] = About the descheduler -You can use the descheduler to evict Pods based on specific strategies so that the Pods can be rescheduled onto more appropriate nodes. 
+You can use the descheduler to evict pods based on specific strategies so that the pods can be rescheduled onto more appropriate nodes.

-You can benefit from descheduling running Pods in situations such as the following:
+You can benefit from descheduling running pods in situations such as the following:

* Nodes are underutilized or overutilized.
* Pod and node affinity requirements, such as taints or labels, have changed and the original scheduling decisions are no longer appropriate for certain nodes.
-* Node failure requires Pods to be moved.
+* Node failure requires pods to be moved.
* New nodes are added to clusters.
* Pods have been restarted too many times.

[IMPORTANT]
====
-The descheduler does not schedule replacement of evicted Pods. The scheduler automatically performs this task for the evicted Pods.
+The descheduler does not schedule replacement of evicted pods. The scheduler automatically performs this task for the evicted pods.
====

-When the descheduler decides to evict Pods from a node, it employs the following general mechanism:
+When the descheduler decides to evict pods from a node, it employs the following general mechanism:

-* Critical Pods with `priorityClassName` set to `system-cluster-critical` or `system-node-critical` are never evicted.
-* Static, mirrored, or stand-alone Pods that are not part of a ReplicationController, ReplicaSet, Deployment or Job are never evicted because these Pods will not be recreated.
+* Critical pods with `priorityClassName` set to `system-cluster-critical` or `system-node-critical` are never evicted.
+* Static, mirrored, or stand-alone pods that are not part of a ReplicationController, ReplicaSet, Deployment, or Job are never evicted because these pods will not be recreated.
* Pods associated with DaemonSets are never evicted.
* Pods with local storage are never evicted.
-* `BestEffort` Pods are evicted before `Burstable` and `Guaranteed` Pods.
-* All types of Pods with the `descheduler.alpha.kubernetes.io/evict` annotation are evicted. This annotation is used to override checks that prevent eviction, and the user can select which Pod is evicted. Users should know how and if the Pod will be recreated.
-* Pods subject to Pod Disruption Budget (PDB) are not evicted if descheduling violates its Pod disruption budget (PDB). The Pods are evicted by using eviction subresource to handle PDB.
+* `BestEffort` pods are evicted before `Burstable` and `Guaranteed` pods.
+* All types of pods with the `descheduler.alpha.kubernetes.io/evict` annotation are evicted. This annotation is used to override checks that prevent eviction, and the user can select which pod is evicted. Users should understand whether, and how, the pod will be recreated.
+* Pods that are subject to a pod disruption budget (PDB) are not evicted if descheduling would violate the PDB. Such pods are evicted by using the eviction subresource, which honors the PDB.
diff --git a/modules/nodes-descheduler-configuring-other-settings.adoc b/modules/nodes-descheduler-configuring-other-settings.adoc
index ff81712562..718764723a 100644
--- a/modules/nodes-descheduler-configuring-other-settings.adoc
+++ b/modules/nodes-descheduler-configuring-other-settings.adoc
@@ -36,7 +36,7 @@ spec:
...
----
<1> Set number of seconds between descheduler runs. A value of `0` in this field runs the descheduler once and exits.
-<2> Set one or more flags to append to the descheduler Pod. This flag must be in the format ready to pass to the binary.
+<2> Set one or more flags to append to the descheduler pod.
This flag must be in the format ready to pass to the binary. <3> Set the descheduler container image to deploy. . Save the file to apply the changes. diff --git a/modules/nodes-descheduler-configuring-strategies.adoc b/modules/nodes-descheduler-configuring-strategies.adoc index a8ec92e155..a5b7ddb06a 100644 --- a/modules/nodes-descheduler-configuring-strategies.adoc +++ b/modules/nodes-descheduler-configuring-strategies.adoc @@ -5,7 +5,7 @@ [id="nodes-descheduler-configuring-strategies_{context}"] = Configuring descheduler strategies -You can configure which strategies the descheduler uses to evict Pods. +You can configure which strategies the descheduler uses to evict pods. .Prerequisites * Cluster administrator privileges. diff --git a/modules/nodes-descheduler-strategies.adoc b/modules/nodes-descheduler-strategies.adoc index ca99ad5b04..798e305682 100644 --- a/modules/nodes-descheduler-strategies.adoc +++ b/modules/nodes-descheduler-strategies.adoc @@ -8,45 +8,45 @@ The following descheduler strategies are available: Low node utilization:: -The `LowNodeUtilization` strategy finds nodes that are underutilized and evicts Pods, if possible, from other nodes in the hope that recreation of evicted Pods will be scheduled on these underutilized nodes. +The `LowNodeUtilization` strategy finds nodes that are underutilized and evicts pods, if possible, from other nodes in the hope that recreation of evicted pods will be scheduled on these underutilized nodes. + -The underutilization of nodes is determined by several configurable threshold parameters: CPU, memory, and number of Pods. If a node's usage is below the configured thresholds for all parameters (CPU, memory, and number of Pods), then the node is considered to be underutilized. +The underutilization of nodes is determined by several configurable threshold parameters: CPU, memory, and number of pods. If a node's usage is below the configured thresholds for all parameters (CPU, memory, and number of pods), then the node is considered to be underutilized. + -You can also set a target threshold for CPU, memory, and number of Pods. If a node's usage is above the configured target thresholds for any of the parameters, then the node's Pods might be considered for eviction. +You can also set a target threshold for CPU, memory, and number of pods. If a node's usage is above the configured target thresholds for any of the parameters, then the node's pods might be considered for eviction. + Additionally, you can use the `NumberOfNodes` parameter to set the strategy to activate only when the number of underutilized nodes is above the configured value. This can be helpful in large clusters where a few nodes might be underutilized frequently or for a short period of time. -Duplicate Pods:: -The `RemoveDuplicates` strategy ensures that there is only one Pod associated with a ReplicaSet, ReplicationController, Deployment, or Job running on same node. If there are more, then those duplicate Pods are evicted for better spreading of Pods in a cluster. +Duplicate pods:: +The `RemoveDuplicates` strategy ensures that there is only one pod associated with a ReplicaSet, ReplicationController, Deployment, or Job running on same node. If there are more, then those duplicate pods are evicted for better spreading of pods in a cluster. + -This situation could occur after a node failure, when a Pod is moved to another node, leading to more than one Pod associated with a ReplicaSet, ReplicationController, Deployment, or Job on that node. 
After the failed node is ready again, this strategy evicts the duplicate Pod. +This situation could occur after a node failure, when a pod is moved to another node, leading to more than one pod associated with a ReplicaSet, ReplicationController, Deployment, or Job on that node. After the failed node is ready again, this strategy evicts the duplicate pod. + -This strategy has an optional parameter, `ExcludeOwnerKinds`, that allows you to specify a list of `Kind` types. If a Pod has any of these types listed as an `OwnerRef`, that Pod is not considered for eviction. +This strategy has an optional parameter, `ExcludeOwnerKinds`, that allows you to specify a list of `Kind` types. If a pod has any of these types listed as an `OwnerRef`, that pod is not considered for eviction. Violation of inter-pod anti-affinity:: -The `RemovePodsViolatingInterPodAntiAffinity` strategy ensures that Pods violating inter-pod anti-affinity are removed from nodes. +The `RemovePodsViolatingInterPodAntiAffinity` strategy ensures that pods violating inter-pod anti-affinity are removed from nodes. + -This situation could occur when anti-affinity rules are created for Pods that are already running on the same node. +This situation could occur when anti-affinity rules are created for pods that are already running on the same node. Violation of node affinity:: -The `RemovePodsViolatingNodeAffinity` strategy ensures that Pods violating node affinity are removed from nodes. +The `RemovePodsViolatingNodeAffinity` strategy ensures that pods violating node affinity are removed from nodes. + -This situation could occur if a node no longer satisfies a Pod's affinity rule. If another node is available that satisfies the affinity rule, then the Pod is evicted. +This situation could occur if a node no longer satisfies a pod's affinity rule. If another node is available that satisfies the affinity rule, then the pod is evicted. Violation of node taints:: -The `RemovePodsViolatingNodeTaints` strategy ensures that Pods violating `NoSchedule` taints on nodes are removed. +The `RemovePodsViolatingNodeTaints` strategy ensures that pods violating `NoSchedule` taints on nodes are removed. + -This situation could occur if a Pod is set to tolerate a taint `key=value:NoSchedule` and is running on a tainted node. If the node's taint is updated or removed, the taint is no longer satisfied by the Pod's tolerations and the Pod is evicted. +This situation could occur if a pod is set to tolerate a taint `key=value:NoSchedule` and is running on a tainted node. If the node's taint is updated or removed, the taint is no longer satisfied by the pod's tolerations and the pod is evicted. Too many restarts:: -The `RemovePodsHavingTooManyRestarts` strategy ensures that Pods that have been restarted too many times are removed from nodes. +The `RemovePodsHavingTooManyRestarts` strategy ensures that pods that have been restarted too many times are removed from nodes. + -This situation could occur if a Pod is scheduled on a node that is unable to start it. For example, if the node is having network issues and is unable to mount a networked persistent volume, then the Pod should be evicted so that it can be scheduled on another node. Another example is if the Pod is crashlooping. +This situation could occur if a pod is scheduled on a node that is unable to start it. For example, if the node is having network issues and is unable to mount a networked persistent volume, then the pod should be evicted so that it can be scheduled on another node. 
Another example is if the pod is crashlooping. + -This strategy has two configurable parameters: `PodRestartThreshold` and `IncludingInitContainers`. If a Pod is restarted more than the configured `PodRestartThreshold` value, then the Pod is evicted. You can use the `IncludingInitContainers` parameter to specify whether restarts for Init Containers should be calculated into the `PodRestartThreshold` value. +This strategy has two configurable parameters: `PodRestartThreshold` and `IncludingInitContainers`. If a pod is restarted more than the configured `PodRestartThreshold` value, then the pod is evicted. You can use the `IncludingInitContainers` parameter to specify whether restarts for Init Containers should be calculated into the `PodRestartThreshold` value. Pod life time:: -The `PodLifeTime` strategy evicts Pods that are too old. +The `PodLifeTime` strategy evicts pods that are too old. + -After a Pod reaches the age, in seconds, set by the `MaxPodLifeTimeSeconds` parameter, it is evicted. +After a pod reaches the age, in seconds, set by the `MaxPodLifeTimeSeconds` parameter, it is evicted. diff --git a/modules/nodes-nodes-audit-policy.adoc b/modules/nodes-nodes-audit-policy.adoc index 19851d63d4..dfef909b3a 100644 --- a/modules/nodes-nodes-audit-policy.adoc +++ b/modules/nodes-nodes-audit-policy.adoc @@ -36,7 +36,7 @@ $ oc edit apiserver cluster . Save the file to apply the changes. -. Verify that a new revision of the Kubernetes API server Pods has rolled out. This will take several minutes. +. Verify that a new revision of the Kubernetes API server pods has rolled out. This will take several minutes. + [source,terminal] ---- diff --git a/modules/nodes-nodes-jobs-about.adoc b/modules/nodes-nodes-jobs-about.adoc index 5f383a801e..d4d6ecc721 100644 --- a/modules/nodes-nodes-jobs-about.adoc +++ b/modules/nodes-nodes-jobs-about.adoc @@ -18,20 +18,20 @@ A regular Job is a run-once object that creates a task and ensures the Job finis There are three main types of task suitable to run as a Job: * Non-parallel Jobs: -** A Job that starts only one Pod, unless the Pod fails. -** The Job is complete as soon as its Pod terminates successfully. +** A Job that starts only one pod, unless the pod fails. +** The Job is complete as soon as its pod terminates successfully. * Parallel Jobs with a fixed completion count: ** a Job that starts multiple pods. -** The Job represents the overall task and is complete when there is one successful Pod for each value in the range `1` to the `completions` value. +** The Job represents the overall task and is complete when there is one successful pod for each value in the range `1` to the `completions` value. * Parallel Jobs with a work queue: ** A Job with multiple parallel worker processes in a given pod. ** {product-title} coordinates pods to determine what each should work on or use an external queue service. -** Each Pod is independently capable of determining whether or not all peer pods are complete and that the entire Job is done. -** When any Pod from the Job terminates with success, no new Pods are created. -** When at least one Pod has terminated with success and all Pods are terminated, the Job is successfully completed. -** When any Pod has exited with success, no other Pod should be doing any work for this task or writing any output. Pods should all be in the process of exiting. +** Each pod is independently capable of determining whether or not all peer pods are complete and that the entire Job is done. 
+** When any pod from the Job terminates with success, no new pods are created. +** When at least one pod has terminated with success and all pods are terminated, the Job is successfully completed. +** When any pod has exited with success, no other pod should be doing any work for this task or writing any output. Pods should all be in the process of exiting. For more information about how to make use of the different types of Job, see link:https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-patterns[Job Patterns] in the Kubernetes documentation. @@ -84,7 +84,7 @@ an execution. After reaching the specified timeout, the Job is terminated by {pr == Understanding how to set a Job back off policy for pod failure A Job can be considered failed, after a set amount of retries due to a -logical error in configuration or other similar reasons. Failed Pods associated with the Job are recreated by the controller with +logical error in configuration or other similar reasons. Failed pods associated with the Job are recreated by the controller with an exponential back off delay (`10s`, `20s`, `40s` …) capped at six minutes. The limit is reset if no new failed pods appear between controller checks. diff --git a/modules/nodes-nodes-managing-max-pods-proc.adoc b/modules/nodes-nodes-managing-max-pods-proc.adoc index cde20b152d..6cf08a3edd 100644 --- a/modules/nodes-nodes-managing-max-pods-proc.adoc +++ b/modules/nodes-nodes-managing-max-pods-proc.adoc @@ -4,7 +4,7 @@ // * post_installation_configuration/node-tasks.adoc [id="nodes-nodes-managing-max-pods-about_{context}"] -= Configuring the maximum number of Pods per Node += Configuring the maximum number of pods per Node //// The following section is included in the Scaling and Performance Guide. diff --git a/modules/nodes-nodes-working-deleting-bare-metal.adoc b/modules/nodes-nodes-working-deleting-bare-metal.adoc index cd52474a5a..766692dc63 100644 --- a/modules/nodes-nodes-working-deleting-bare-metal.adoc +++ b/modules/nodes-nodes-working-deleting-bare-metal.adoc @@ -7,10 +7,10 @@ = Deleting nodes from a bare metal cluster When you delete a node using the CLI, the node object is deleted in Kubernetes, -but the Pods that exist on the node are not deleted. Any bare Pods not backed by +but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to {product-title}. Pods backed by replication controllers are rescheduled to other available nodes. You must -delete local manifest Pods. +delete local manifest pods. .Procedure @@ -24,7 +24,7 @@ the following steps: $ oc adm cordon ---- -. Drain all Pods on your node: +. Drain all pods on your node: + [source,terminal] ---- diff --git a/modules/nodes-nodes-working-deleting.adoc b/modules/nodes-nodes-working-deleting.adoc index c4aade37c0..a52414939c 100644 --- a/modules/nodes-nodes-working-deleting.adoc +++ b/modules/nodes-nodes-working-deleting.adoc @@ -6,10 +6,10 @@ = Deleting nodes from a cluster When you delete a node using the CLI, the node object is deleted in Kubernetes, -but the Pods that exist on the node are not deleted. Any bare Pods not +but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to {product-title}. Pods backed by replication controllers are rescheduled to other available -nodes. You must delete local manifest Pods. +nodes. You must delete local manifest pods. 
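Before deleting a node, it can help to see which pods are still scheduled on it. The following check is a suggested sketch; `<node_name>` is a placeholder for your node:

[source,terminal]
----
$ oc get pods --all-namespaces -o wide --field-selector spec.nodeName=<node_name>
----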
.Procedure diff --git a/modules/nodes-nodes-working-evacuating.adoc b/modules/nodes-nodes-working-evacuating.adoc index 1f2021aba6..b78e58a849 100644 --- a/modules/nodes-nodes-working-evacuating.adoc +++ b/modules/nodes-nodes-working-evacuating.adoc @@ -5,13 +5,13 @@ [id="nodes-nodes-working-evacuating_{context}"] = Understanding how to evacuate pods on nodes -Evacuating Pods allows you to migrate all or selected Pods from a given node or +Evacuating pods allows you to migrate all or selected pods from a given node or nodes. -You can only evacuate Pods backed by a replication controller. The replication controller creates new Pods on +You can only evacuate pods backed by a replication controller. The replication controller creates new pods on other nodes and removes the existing pods from the specified node(s). -Bare Pods, meaning those not backed by a replication controller, are unaffected by default. +Bare pods, meaning those not backed by a replication controller, are unaffected by default. You can evacuate a subset of pods by specifying a pod-selector. Pod selectors are based on labels, so all the pods with the specified label will be evacuated. @@ -46,17 +46,17 @@ NAME STATUS ROLES AGE VERSION NotReady,SchedulingDisabled worker 1d v1.19.0 ---- -. Evacuate the Pods using one of the following methods: +. Evacuate the pods using one of the following methods: -** Evacuate all or selected Pods on one or more nodes: +** Evacuate all or selected pods on one or more nodes: + [source,terminal] ---- $ oc adm drain [--pod-selector=] ---- -** Force the deletion of bare Pods using the `--force` option. When set to -`true`, deletion continues even if there are Pods not managed by a replication +** Force the deletion of bare pods using the `--force` option. When set to +`true`, deletion continues even if there are pods not managed by a replication controller, ReplicaSet, job, daemonset, or StatefulSet: + [source,terminal] @@ -73,7 +73,7 @@ be used: $ oc adm drain --grace-period=-1 ---- -** Ignore DaemonSet-managed Pods using the `--ignore-daemonsets` flag set to `true`: +** Ignore DaemonSet-managed pods using the `--ignore-daemonsets` flag set to `true`: + [source,terminal] ---- @@ -88,7 +88,7 @@ value of `0` sets an infinite length of time: $ oc adm drain --timeout=5s ---- -** Delete Pods even if there are Pods using emptyDir using the `--delete-local-data` flag set to `true`. Local data is deleted when the node +** Delete pods even if there are pods using emptyDir using the `--delete-local-data` flag set to `true`. Local data is deleted when the node is drained: + [source,terminal] diff --git a/modules/nodes-nodes-working-master-schedulable.adoc b/modules/nodes-nodes-working-master-schedulable.adoc index 1bee5f2725..0f9d7f6e3e 100644 --- a/modules/nodes-nodes-working-master-schedulable.adoc +++ b/modules/nodes-nodes-working-master-schedulable.adoc @@ -6,7 +6,7 @@ = Configuring master nodes as schedulable You can configure master nodes to be -schedulable, meaning that new Pods are allowed for placement on the master +schedulable, meaning that new pods are allowed for placement on the master nodes. By default, master nodes are not schedulable. You can set the masters to be schedulable, but must retain the worker nodes. 
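As a minimal sketch of how this toggle is typically applied, assuming the `schedulers.config.openshift.io` cluster resource used in {product-title} 4 (verify the resource and field against your version), you edit the cluster scheduler object and set `mastersSchedulable` to `true`:

[source,terminal]
----
$ oc edit schedulers.config.openshift.io cluster
----

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: true   # allow new pods to be placed on master nodes
----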
diff --git a/modules/nodes-pods-about.adoc b/modules/nodes-pods-about.adoc index 3efb24e71a..d0322aebe9 100644 --- a/modules/nodes-pods-about.adoc +++ b/modules/nodes-pods-about.adoc @@ -9,4 +9,4 @@ together on one host, and the smallest compute unit that can be defined, deployed, and managed. Pods are the rough equivalent of a machine instance (physical or virtual) to a container. -You can view a list of pods associated with a specific project or view usage statistics about pods. \ No newline at end of file +You can view a list of pods associated with a specific project or view usage statistics about pods. diff --git a/modules/nodes-pods-autoscaling-creating-cpu.adoc b/modules/nodes-pods-autoscaling-creating-cpu.adoc index e2a8825d16..c68227fc67 100644 --- a/modules/nodes-pods-autoscaling-creating-cpu.adoc +++ b/modules/nodes-pods-autoscaling-creating-cpu.adoc @@ -7,12 +7,12 @@ = Creating a horizontal pod autoscaler for CPU utilization You can create a horizontal pod autoscaler (HPA) for an existing DeploymentConfig or ReplicationController object -that automatically scales the Pods associated with that object in order to maintain the CPU usage you specify. +that automatically scales the pods associated with that object in order to maintain the CPU usage you specify. -The HPA increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified CPU utilization across all Pods. +The HPA increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified CPU utilization across all pods. -When autoscaling for CPU utilization, you can use the `oc autoscale` command and specify the minimum and maximum number of Pods you want to run at any given time and the average CPU utilization your Pods should target. If you do not specify a minimum, the Pods are given default values from the {product-title} server. -To autoscale for a specific CPU value, create a `HorizontalPodAutoscaler` object with the target CPU and Pod limits. +When autoscaling for CPU utilization, you can use the `oc autoscale` command and specify the minimum and maximum number of pods you want to run at any given time and the average CPU utilization your pods should target. If you do not specify a minimum, the pods are given default values from the {product-title} server. +To autoscale for a specific CPU value, create a `HorizontalPodAutoscaler` object with the target CPU and pod limits. .Prerequisites @@ -69,7 +69,7 @@ $ oc autoscale dc/ \// <1> <1> Specify the name of the DeploymentConfig. The object must exist. <2> Optionally, specify the minimum number of replicas when scaling down. <3> Specify the maximum number of replicas when scaling up. -<4> Specify the target average CPU utilization over all the Pods, represented as a percent of requested CPU. If not specified or negative, a default autoscaling policy is used. +<4> Specify the target average CPU utilization over all the pods, represented as a percent of requested CPU. If not specified or negative, a default autoscaling policy is used. ** To scale based on the percent of CPU utilization, create a `HorizontalPodAutoscaler` object for an existing ReplicationController: + @@ -84,7 +84,7 @@ $ oc autoscale rc/ <1> <1> Specify the name of the ReplicationController. The object must exist. <2> Specify the minimum number of replicas when scaling down. <3> Specify the maximum number of replicas when scaling up. 
-<4> Specify the target average CPU utilization over all the Pods, represented as a percent of requested CPU. If not specified or negative, a default autoscaling policy is used. +<4> Specify the target average CPU utilization over all the pods, represented as a percent of requested CPU. If not specified or negative, a default autoscaling policy is used. ** To scale for a specific CPU value, create a YAML file similar to the following for an existing DeploymentConfig or ReplicationController: + @@ -147,7 +147,7 @@ NAME REFERENCE TARGETS MINPODS MAXPOD cpu-autoscale ReplicationController/example 173m/500m 1 10 1 20m ---- -For example, the following command creates a horizontal pod autoscaler that maintains between 3 and 7 replicas of the Pods that are controlled by the `image-registry` DeploymentConfig in order to maintain an average CPU utilization of 75% across all Pods. +For example, the following command creates a horizontal pod autoscaler that maintains between 3 and 7 replicas of the pods that are controlled by the `image-registry` DeploymentConfig in order to maintain an average CPU utilization of 75% across all pods. [source,terminal] ---- @@ -192,7 +192,7 @@ status: desiredReplicas: 0 ---- -The following example shows autoscaling for the `image-registry` DeploymentConfig. The initial deployment requires 3 Pods. The HPA object increased that minimum to 5 and will increase the Pods up to 7 if CPU usage on the Pods reaches 75%: +The following example shows autoscaling for the `image-registry` DeploymentConfig. The initial deployment requires 3 pods. The HPA object increased that minimum to 5 and will increase the pods up to 7 if CPU usage on the pods reaches 75%: . View the current state of the `image-registry` deployment: + diff --git a/modules/nodes-pods-autoscaling-creating-memory.adoc b/modules/nodes-pods-autoscaling-creating-memory.adoc index 804345bf8a..b81ebfc1da 100644 --- a/modules/nodes-pods-autoscaling-creating-memory.adoc +++ b/modules/nodes-pods-autoscaling-creating-memory.adoc @@ -7,14 +7,14 @@ = Creating a horizontal pod autoscaler object for memory utilization You can create a horizontal pod autoscaler (HPA) for an existing DeploymentConfig or ReplicationController object -that automatically scales the Pods associated with that object in order to maintain the average memory utilization you specify, +that automatically scales the pods associated with that object in order to maintain the average memory utilization you specify, either a direct value or a percentage of requested memory. The HPA increases and decreases the number of replicas between the minimum and maximum numbers to maintain -the specified memory utilization across all Pods. +the specified memory utilization across all pods. -For memory utilization, you can specify the minimum and maximum number of Pods and the average memory utilization -your Pods should target. If you do not specify a minimum, the Pods are given default values from the {product-title} server. +For memory utilization, you can specify the minimum and maximum number of pods and the average memory utilization +your pods should target. If you do not specify a minimum, the pods are given default values from the {product-title} server. [IMPORTANT] ==== @@ -150,7 +150,7 @@ spec: <8> Use the `metrics` parameter for memory utilization. <9> Specify `memory` for memory utilization. <10> Set to `Utilization`. 
-<11> Specify `averageUtilization` and a target average memory utilization over all the Pods,
+<11> Specify `averageUtilization` and a target average memory utilization over all the pods,
represented as a percent of requested memory. The target pods must have memory requests configured.

. Create the horizontal pod autoscaler:
diff --git a/modules/nodes-pods-pod-disruption-about.adoc b/modules/nodes-pods-pod-disruption-about.adoc
index 8777aa12d0..9701d6c199 100644
--- a/modules/nodes-pods-pod-disruption-about.adoc
+++ b/modules/nodes-pods-pod-disruption-about.adoc
@@ -5,12 +5,12 @@
// * post_installation_configuration/cluster-tasks.adoc

[id="nodes-pods-configuring-pod-distruption-about_{context}"]
-= Understanding how to use Pod disruption budgets to specify the number of Pods that must be up
+= Understanding how to use pod disruption budgets to specify the number of pods that must be up

A _pod disruption budget_ is part of the
link:http://kubernetes.io/docs/admin/disruptions/[Kubernetes] API, which can be
managed with `oc` commands like other object types. They
-allow the specification of safety constraints on Pods during operations, such as
+allow the specification of safety constraints on pods during operations, such as
draining a node for maintenance.

`PodDisruptionBudget` is an API object that specifies the minimum number or
@@ -21,11 +21,11 @@ upgrade) and is only honored on voluntary evictions (not on node failures).

A `PodDisruptionBudget` object's configuration consists of the following key
parts:

-* A label selector, which is a label query over a set of Pods.
-* An availability level, which specifies the minimum number of Pods that must be
+* A label selector, which is a label query over a set of pods.
+* An availability level, which specifies the minimum number of pods that must be
available simultaneously, either:
-** `minAvailable` is the number of Pods must always be available, even during a disruption.
-** `maxUnavailable` is the number of Pods can be unavailable during a disruption.
+** `minAvailable` is the number of pods that must always be available, even during a disruption.
+** `maxUnavailable` is the number of pods that can be unavailable during a disruption.

[NOTE]
====
@@ -33,7 +33,7 @@ A `maxUnavailable` of `0%` or `0` or a `minAvailable` of `100%` or equal to the
is permitted but can block nodes from being drained.
====

-You can check for Pod disruption budgets across all projects with the following:
+You can check for pod disruption budgets across all projects with the following:

[source,terminal]
----
@@ -49,10 +49,10 @@ test-project my-pdb 2 foo=bar
----

The `PodDisruptionBudget` is considered healthy when there are at least
-`minAvailable` Pods running in the system. Every Pod above that limit can be evicted.
+`minAvailable` pods running in the system. Every pod above that limit can be evicted.

[NOTE]
====
-Depending on your Pod priority and preemption settings,
-lower-priority Pods might be removed despite their Pod disruption budget requirements.
+Depending on your pod priority and preemption settings,
+lower-priority pods might be removed despite their pod disruption budget requirements.
====
diff --git a/modules/nodes-pods-using-about.adoc b/modules/nodes-pods-using-about.adoc
index 06a246844c..434cd76742 100644
--- a/modules/nodes-pods-using-about.adoc
+++ b/modules/nodes-pods-using-about.adoc
@@ -3,7 +3,7 @@
// * nodes/nodes-pods-using.adoc

[id="nodes-pods-using-about_{context}"]
-= Understanding pods
+= Understanding pods

Pods are the rough equivalent of a machine instance (physical or virtual) to a Container. Each pod is allocated its own internal IP address, therefore owning its entire port space, and Containers within pods can share their local storage and networking.
diff --git a/modules/nodes-pods-using-example.adoc b/modules/nodes-pods-using-example.adoc
index 3be2a7cc4b..ddeb78030c 100644
--- a/modules/nodes-pods-using-example.adoc
+++ b/modules/nodes-pods-using-example.adoc
@@ -15,7 +15,7 @@ integrated container image registry. It demonstrates many features of pods, most
which are discussed in other topics and thus only briefly mentioned here:

[id="example-pod-definition_{context}"]
-.Pod object definition (YAML)
+.`Pod` object definition (YAML)

[source,yaml]
----
diff --git a/modules/nodes-pods-vertical-autoscaler-about.adoc b/modules/nodes-pods-vertical-autoscaler-about.adoc
index 5ad43ad3d0..eaec032450 100644
--- a/modules/nodes-pods-vertical-autoscaler-about.adoc
+++ b/modules/nodes-pods-vertical-autoscaler-about.adoc
@@ -5,19 +5,19 @@
[id="nodes-pods-vertical-autoscaler-about_{context}"]
= About the Vertical Pod Autoscaler Operator

-The Vertical Pod Autoscaler Operator (VPA) is implemented as an API resource and a custom resource (CR). The CR determines the actions the Vertical Pod Autoscaler Operator should take with the Pods associated with a specific workload object, such as a Daemonset, ReplicationController, and so forth.
+The Vertical Pod Autoscaler Operator (VPA) is implemented as an API resource and a custom resource (CR). The CR determines the actions the Vertical Pod Autoscaler Operator should take with the pods associated with a specific workload object, such as a DaemonSet, ReplicationController, and so forth.

-The VPA automatically computes historic and current CPU and memory usage for the containers in those Pods and can use this data to automatically re-deploy Pods with optimized resource limits and requests to ensure that these Pods are operating efficiently at all times. When re-deploying Pods, the VPA honors any Pod Disruption Budget set for applications. If you do not want the VPA to automatically re-deploy Pods, you can use this resource information to manually update the Pods as needed.
+The VPA automatically computes historic and current CPU and memory usage for the containers in those pods and can use this data to automatically re-deploy pods with optimized resource limits and requests to ensure that these pods are operating efficiently at all times. When re-deploying pods, the VPA honors any Pod Disruption Budget set for applications. If you do not want the VPA to automatically re-deploy pods, you can use this resource information to manually update the pods as needed.

-When configured to update Pods automatically, the VPA reduces resources for Pods that are requesting more resources then they are using and increase resources for Pods that are not requesting enough.
+When configured to update pods automatically, the VPA reduces resources for pods that are requesting more resources than they are using and increases resources for pods that are not requesting enough.
-For example, if you have a pod that uses 50% of the CPU but only requests 10%, the VPA determines that the Pod is consuming more CPU than requested and restarts the Pods with higher resources. +For example, if you have a pod that uses 50% of the CPU but only requests 10%, the VPA determines that the pod is consuming more CPU than requested and restarts the pods with higher resources. -For developers, the VPA helps ensure their Pods stay up during periods of high demand by scheduling Pods onto nodes so that appropriate resources are available for each Pod. +For developers, the VPA helps ensure their pods stay up during periods of high demand by scheduling pods onto nodes so that appropriate resources are available for each pod. -Administrators can use the VPA to better utilize cluster resources, such as preventing Pods from reserving more CPU resources than needed. The VPA monitors the resources that workloads are actually using and adjusts the resource requirements so capacity is available to other workloads. The VPA also maintains the ratios between limits and requests that are specified in initial container configuration. +Administrators can use the VPA to better utilize cluster resources, such as preventing pods from reserving more CPU resources than needed. The VPA monitors the resources that workloads are actually using and adjusts the resource requirements so capacity is available to other workloads. The VPA also maintains the ratios between limits and requests that are specified in initial container configuration. [NOTE] ==== -If you stop running the VPA or delete a specific VPA CR in your cluster, the resource requests for the Pods already modified by the VPA do not change. Any new Pods get the resources defined in the workload object, not the previous recommendations made by the VPA. +If you stop running the VPA or delete a specific VPA CR in your cluster, the resource requests for the pods already modified by the VPA do not change. Any new pods get the resources defined in the workload object, not the previous recommendations made by the VPA. ==== diff --git a/modules/nodes-pods-vertical-autoscaler-configuring.adoc b/modules/nodes-pods-vertical-autoscaler-configuring.adoc index c401ab043e..4832bb098d 100644 --- a/modules/nodes-pods-vertical-autoscaler-configuring.adoc +++ b/modules/nodes-pods-vertical-autoscaler-configuring.adoc @@ -5,7 +5,7 @@ [id="nodes-pods-vertical-autoscaler-configuring_{context}"] = Using the Vertical Pod Autoscaler Operator -You can use the Vertical Pod Autoscaler Operator (VPA) by creating a VPA custom resource (CR). The CR indicates which Pods it should analyze and determines the actions the VPA should take with those Pods. +You can use the Vertical Pod Autoscaler Operator (VPA) by creating a VPA custom resource (CR). The CR indicates which pods it should analyze and determines the actions the VPA should take with those pods. .Procedure @@ -36,10 +36,10 @@ spec: <1> Specify the type of workload object you want this VPA to manage: `Deployment`, `StatefulSet`, `Job`, `DaemonSet`, `ReplicaSet`, or `ReplicationController`. <2> Specify the name of an existing workload object you want this VPA to manage. <3> Specify the VPA mode: -* `auto` to automatically apply the recommended resources on Pods associated with the controller. The VPA terminates existing Pods and creates new Pods with the recommended resource limits and requests. -* `recreate` to automatically apply the recommended resources on Pods associated with the workload object. 
The VPA terminates existing Pods and creates new Pods with the recommended resource limits and requests. The `recreate` mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. -* `initial` to automatically apply the recommended resources when Pods associated with the workload object are created. The VPA does not update the Pods as it learns new resource recommendations. -* `off` to only generate resource recommendations for the Pods associated with the workload object. The VPA does not update the Pods as it learns new resource recommendations and does not apply the recommendations to new Pods. +* `auto` to automatically apply the recommended resources on pods associated with the controller. The VPA terminates existing pods and creates new pods with the recommended resource limits and requests. +* `recreate` to automatically apply the recommended resources on pods associated with the workload object. The VPA terminates existing pods and creates new pods with the recommended resource limits and requests. The `recreate` mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. +* `initial` to automatically apply the recommended resources when pods associated with the workload object are created. The VPA does not update the pods as it learns new resource recommendations. +* `off` to only generate resource recommendations for the pods associated with the workload object. The VPA does not update the pods as it learns new resource recommendations and does not apply the recommendations to new pods. <4> Optional. Specify the containers you want to opt-out and set the mode to `Off`. @@ -50,7 +50,7 @@ spec: $ oc create -f .yaml ---- + -After a few moments, the VPA learns the resource usage of the containers in the Pods associated with the workload object. +After a few moments, the VPA learns the resource usage of the containers in the pods associated with the workload object. + You can view the VPA recommendations using the following command: + diff --git a/modules/nodes-pods-vertical-autoscaler-install.adoc b/modules/nodes-pods-vertical-autoscaler-install.adoc index c020a651ac..2dea3695e5 100644 --- a/modules/nodes-pods-vertical-autoscaler-install.adoc +++ b/modules/nodes-pods-vertical-autoscaler-install.adoc @@ -23,7 +23,7 @@ is automatically created if it does not exist. .. Navigate to *Workloads* -> *Pods*. -.. Select the `openshift-vertical-pod-autoscaler` project from the drop-down menu and verify that there are four Pods running. +.. Select the `openshift-vertical-pod-autoscaler` project from the drop-down menu and verify that there are four pods running. .. Navigate to *Workloads* -> *Deployments* to verify that there are four Deployments running. @@ -34,7 +34,7 @@ is automatically created if it does not exist. 
$ oc get all -n openshift-vertical-pod-autoscaler
----
+
-The output shows four Pods and four Deplyoments:
+The output shows four pods and four Deployments:
+
.Example output
[source,terminal]
diff --git a/modules/nodes-pods-vertical-autoscaler-uninstall.adoc b/modules/nodes-pods-vertical-autoscaler-uninstall.adoc
index 6ae1558f4b..1af94a8ebb 100644
--- a/modules/nodes-pods-vertical-autoscaler-uninstall.adoc
+++ b/modules/nodes-pods-vertical-autoscaler-uninstall.adoc
@@ -5,7 +5,7 @@
[id="nodes-pods-vertical-autoscaler-uninstall_{context}"]
= Uninstalling the Vertical Pod Autoscaler Operator

-You can remove the Vertical Pod Autoscaler Operator (VPA) from your {product-title} cluster. After uninstalling, the resource requests for the Pods already modified by an existing VPA CR do not change. Any new Pods get the resources defined in the workload object, not the previous recommendations made by the Vertical Pod Autoscaler Operator.
+You can remove the Vertical Pod Autoscaler Operator (VPA) from your {product-title} cluster. After uninstalling, the resource requests for the pods already modified by an existing VPA CR do not change. Any new pods get the resources defined in the workload object, not the previous recommendations made by the Vertical Pod Autoscaler Operator.

[NOTE]
====
diff --git a/modules/nodes-pods-vertical-autoscaler-using-about.adoc b/modules/nodes-pods-vertical-autoscaler-using-about.adoc
index 0891eae973..62ad2797ad 100644
--- a/modules/nodes-pods-vertical-autoscaler-using-about.adoc
+++ b/modules/nodes-pods-vertical-autoscaler-using-about.adoc
@@ -5,17 +5,17 @@
[id="nodes-pods-vertical-autoscaler-using-about_{context}"]
= About Using the Vertical Pod Autoscaler Operator

-To use the Vertical Pod Autoscaler Operator (VPA), you create a VPA custom resource (CR) for a workload object in your cluster. The VPA learns and applies the optimal CPU and memory resources for the Pods associated with that workload object. You can use a VPA with and Deployment, StatefulSet, Job, DaemonSet, ReplicaSet, or ReplicationController workload object. The VPA CR must be in the same project as the Pods you want to monitor.
+To use the Vertical Pod Autoscaler Operator (VPA), you create a VPA custom resource (CR) for a workload object in your cluster. The VPA learns and applies the optimal CPU and memory resources for the pods associated with that workload object. You can use a VPA with a Deployment, StatefulSet, Job, DaemonSet, ReplicaSet, or ReplicationController workload object. The VPA CR must be in the same project as the pods you want to monitor.

You use the VPA CR to associate a workload object and specify which mode the VPA operates in:

-* The `Auto` and `Recreate` modes automatically apply the VPA CPU and memory recommendations throughout the Pod lifetime.
-* The `Initial` mode automatically applies VPA recommendations only at Pod creation.
+* The `Auto` and `Recreate` modes automatically apply the VPA CPU and memory recommendations throughout the pod lifetime.
+* The `Initial` mode automatically applies VPA recommendations only at pod creation.
* The `Off` mode only provides recommended resource limits and requests, allowing you to manually apply the recommendations. The `off` mode does not update pods.

You can also use the CR to opt-out certain containers from VPA evaluation and updates.
-For example, a Pod has the following limits and requests: +For example, a pod has the following limits and requests: [source,yaml] ---- @@ -28,7 +28,7 @@ resources: memory: 100Mi ---- -After creating a VPA that is set to `auto`, the VPA learns the resource usage and terminates and recreates the Pod with new resource limits and requests: +After creating a VPA that is set to `auto`, the VPA learns the resource usage and terminates and recreates the pod with new resource limits and requests: [source,yaml] ---- @@ -90,14 +90,14 @@ status: ... ---- -The output shows the recommended resources, `target`, the minimum recommended resources, `lowerBound`, the highest recommended resources, `upperBound`, and the most recent resource recommendations, `uncappedTarget`. +The output shows the recommended resources, `target`, the minimum recommended resources, `lowerBound`, the highest recommended resources, `upperBound`, and the most recent resource recommendations, `uncappedTarget`. -The VPA uses the `lowerBound` and `upperBound` values to determine if a Pod needs to be updated. If a Pod has resource requests below the `lowerBound` values or above the `upperBound` values, the VPA terminates and recreates the Pod with the `target` values. +The VPA uses the `lowerBound` and `upperBound` values to determine if a pod needs to be updated. If a pod has resource requests below the `lowerBound` values or above the `upperBound` values, the VPA terminates and recreates the pod with the `target` values. Automatically applying VPA recommendations:: -To use the VPA to automatically update Pods, create a VPA CR for a specific workload object with `updateMode` set to `Auto` or `Recreate`. +To use the VPA to automatically update pods, create a VPA CR for a specific workload object with `updateMode` set to `Auto` or `Recreate`. -When the Pods are created for the workload object, the VPA constantly monitors the containers to analyze their CPU and memory needs. The VPA deletes and redeploys Pods with new container resource limits and requests to meet those needs, honoring any Pod Disruption Budget set for your applications. The recommendations are added to the `status` field of the VPA CR for reference. +When the pods are created for the workload object, the VPA constantly monitors the containers to analyze their CPU and memory needs. The VPA deletes and redeploys pods with new container resource limits and requests to meet those needs, honoring any Pod Disruption Budget set for your applications. The recommendations are added to the `status` field of the VPA CR for reference. .Example VPA CR for the `Auto` mode [source,yaml] @@ -117,18 +117,18 @@ spec: <1> The type of workload object you want this VPA CR to manage. <2> The name of workload object you want this VPA CR to manage. <3> Set the mode to `Auto` or `Recreate`: -* `Auto`. The VPA assigns resource requests on Pod creation and updates the existing Pods by terminating them when the requested resources differ significantly from the new recommendation. -* `Recreate`. The VPA assigns resource requests on Pod creation and updates the existing Pods by terminating them when the requested resources differ significantly from the new recommendation. This mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. +* `Auto`. The VPA assigns resource requests on pod creation and updates the existing pods by terminating them when the requested resources differ significantly from the new recommendation. 
+* `Recreate`. The VPA assigns resource requests on pod creation and updates the existing pods by terminating them when the requested resources differ significantly from the new recommendation. This mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes. [NOTE] ==== -There must be operating Pods in the project before the VPA can determine recommended resources and apply the recommendations to new pods. +There must be operating pods in the project before the VPA can determine recommended resources and apply the recommendations to new pods. ==== -Automatically applying VPA recommendations on Pod creation:: -To use the VPA to apply the recommended resources only when a Pod is first deployed, create a VPA CR for a specific workload object with `updateMode` set to `Initial`. +Automatically applying VPA recommendations on pod creation:: +To use the VPA to apply the recommended resources only when a pod is first deployed, create a VPA CR for a specific workload object with `updateMode` set to `Initial`. -When the Pods are created for that workload object, the VPA analyzes the CPU and memory needs of the containers and assigns the recommended container resource limits and requests. The VPA does not update the Pods as it learns new resource recommendations. +When the pods are created for that workload object, the VPA analyzes the CPU and memory needs of the containers and assigns the recommended container resource limits and requests. The VPA does not update the pods as it learns new resource recommendations. .Example VPA CR for the `Initial` mode [source,yaml] @@ -147,17 +147,17 @@ spec: ---- <1> The type of workload object you want this VPA CR to manage. <2> The name of workload object you want this VPA CR to manage. -<3> Set the mode to `Initial`. The VPA assigns resources when Pods are created and does not change the resources during the lifetime of the Pod. +<3> Set the mode to `Initial`. The VPA assigns resources when pods are created and does not change the resources during the lifetime of the pod. [NOTE] ==== -There must be operating Pods in the project before a VPA can determine recommended resources and apply the recommendations to new pods. +There must be operating pods in the project before a VPA can determine recommended resources and apply the recommendations to new pods. ==== Manually applying VPA recommendations:: -To use the VPA to only determine the recommended CPU and memory values, create a VPA CR for a specific workload object with `updateMode` set to `off`. +To use the VPA to only determine the recommended CPU and memory values, create a VPA CR for a specific workload object with `updateMode` set to `off`. -When the Pods are created for that workload object, the VPA analyzes the CPU and memory needs of the containers and records those recommendations in the `status` field of the VPA CR. The VPA does not update the Pods as it determines new resource recommendations. +When the pods are created for that workload object, the VPA analyzes the CPU and memory needs of the containers and records those recommendations in the `status` field of the VPA CR. The VPA does not update the pods as it determines new resource recommendations. .Example VPA CR for the `Off` mode [source,yaml] @@ -176,7 +176,7 @@ spec: ---- <1> The type of workload object you want this VPA CR to manage. <2> The name of workload object you want this VPA CR to manage. -<3> Set the mode to `Off`. +<3> Set the mode to `Off`. 
You can view the recommendations using the following command.
@@ -185,17 +185,17 @@ You can view the recommendations using the following command.
$ oc get vpa --output yaml
----
-With the recommendations, you can edit the workload object to add CPU and memory requests, then delete and redeploy the Pods using the recommended resources.
+With the recommendations, you can edit the workload object to add CPU and memory requests, then delete and redeploy the pods using the recommended resources.
[NOTE]
====
-There must be operating Pods in the project before a VPA can determine recommended resources.
+There must be operating pods in the project before a VPA can determine recommended resources.
====
Exempting containers from applying VPA recommendations::
If your workload object has multiple containers and you do not want the VPA to evaluate and act on all of the containers, create a VPA CR for a specific workload object and add a `resourcePolicy` to opt-out specific containers.
-When the VPA updates the Pods with recommended resources, any containers with a `resourcePolicy` are not updated and the VPA does not present recommendations for those containers in the Pod.
+When the VPA updates the pods with recommended resources, any containers with a `resourcePolicy` are not updated and the VPA does not present recommendations for those containers in the pod.
[source,yaml]
----
@@ -217,7 +217,7 @@ spec:
----
<1> The type of workload object you want this VPA CR to manage.
<2> The name of workload object you want this VPA CR to manage.
-<3> Set the mode to `Auto`, `Recreate`, or `Off`. The `Recreate` mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes.
+<3> Set the mode to `Auto`, `Recreate`, or `Off`. The `Recreate` mode should be used rarely, only if you need to ensure that the pods are restarted whenever the resource request changes.
<4> Specify the containers you want to opt-out and set `mode` to `Off`.
For example, a pod has two containers, the same resource requests and limits:
diff --git a/modules/nodes-scheduler-pod-affinity-example.adoc b/modules/nodes-scheduler-pod-affinity-example.adoc
index babd4a1f1d..1b2cb5c137 100644
--- a/modules/nodes-scheduler-pod-affinity-example.adoc
+++ b/modules/nodes-scheduler-pod-affinity-example.adoc
@@ -160,4 +160,3 @@ spec:
 NAME      READY     STATUS    RESTARTS   AGE       IP        NODE
 pod-s2    0/1       Pending   0          32s
 ----
-
diff --git a/modules/nodes-scheduler-taints-tolerations-about.adoc b/modules/nodes-scheduler-taints-tolerations-about.adoc
index fd85bb3376..ff262e03f1 100644
--- a/modules/nodes-scheduler-taints-tolerations-about.adoc
+++ b/modules/nodes-scheduler-taints-tolerations-about.adoc
@@ -8,9 +8,9 @@
[id="nodes-scheduler-taints-tolerations-about_{context}"]
= Understanding taints and tolerations
-A _taint_ allows a node to refuse Pod to be scheduled unless that Pod has a matching _toleration_.
+A _taint_ allows a node to refuse a pod to be scheduled unless that pod has a matching _toleration_.
-You apply taints to a node through the node specification (`NodeSpec`) and apply tolerations to a Pod through the Pod specification (`PodSpec`). A taint on a node instructs the node to repel all Pods that do not tolerate the taint.
+You apply taints to a node through the node specification (`NodeSpec`) and apply tolerations to a pod through the `Pod` specification (`PodSpec`). A taint on a node instructs the node to repel all pods that do not tolerate the taint.
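As a quick sketch of how the two halves fit together (the node name `node1` and key `key1` are illustrative; the full worked example appears later in this module), you might taint a node and give a pod a matching toleration as follows:

[source,terminal]
----
$ oc adm taint nodes node1 key1=value1:NoSchedule
----

[source,yaml]
----
tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"
----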
Taints and tolerations consist of a key, value, and effect. An operator allows you to leave one of these parameters empty. @@ -34,14 +34,14 @@ Taints and tolerations consist of a key, value, and effect. An operator allows y [cols="2a,3a"] !==== !`NoSchedule` -!* New Pods that do not match the taint are not scheduled onto that node. -* Existing Pods on the node remain. +!* New pods that do not match the taint are not scheduled onto that node. +* Existing pods on the node remain. !`PreferNoSchedule` -!* New Pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. -* Existing Pods on the node remain. +!* New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. +* Existing pods on the node remain. !`NoExecute` -!* New Pods that do not match the taint cannot be scheduled onto that node. -* Existing Pods on the node that do not have a matching toleration are removed. +!* New pods that do not match the taint cannot be scheduled onto that node. +* Existing pods on the node that do not have a matching toleration are removed. !==== |`operator` @@ -71,7 +71,7 @@ The following taints are built into kubernetes: * `node.kubernetes.io/not-ready`: The node is not ready. This corresponds to the node condition `Ready=False`. * `node.kubernetes.io/unreachable`: The node is unreachable from the node controller. This corresponds to the node condition `Ready=Unknown`. -* `node.kubernetes.io/out-of-disk`: The node has insufficient free space on the node for adding new Pods. This corresponds to the node condition `OutOfDisk=True`. +* `node.kubernetes.io/out-of-disk`: The node has insufficient free space on the node for adding new pods. This corresponds to the node condition `OutOfDisk=True`. * `node.kubernetes.io/memory-pressure`: The node has memory pressure issues. This corresponds to the node condition `MemoryPressure=True`. * `node.kubernetes.io/disk-pressure`: The node has disk pressure issues. This corresponds to the node condition `DiskPressure=True`. * `node.kubernetes.io/network-unavailable`: The node network is unavailable. @@ -79,9 +79,9 @@ The following taints are built into kubernetes: * `node.cloudprovider.kubernetes.io/uninitialized`: When the node controller is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint. [id="nodes-scheduler-taints-tolerations-about-seconds_{context}"] -== Understanding how to use toleration seconds to delay Pod evictions +== Understanding how to use toleration seconds to delay pod evictions -You can specify how long a Pod can remain bound to a node before being evicted by specifying the `tolerationSeconds` parameter in the Pod specification. If a taint with the `NoExecute` effect is added to a node, any Pods that do not tolerate the taint are evicted immediately. Pods that do tolerate the taint are not evicted. However, if a Pod that does tolerate the taint has the `tolerationSeconds` parameter, the Pod is not evicted until that time period expires. +You can specify how long a pod can remain bound to a node before being evicted by specifying the `tolerationSeconds` parameter in the `Pod` specification. If a taint with the `NoExecute` effect is added to a node, any pods that do not tolerate the taint are evicted immediately. Pods that do tolerate the taint are not evicted. 
However, if a pod that does tolerate the taint has the `tolerationSeconds` parameter, the pod is not evicted until that time period expires.
.Example output
[source,yaml]
@@ -94,19 +94,19 @@ tolerations:
tolerationSeconds: 3600
----
-Here, if this Pod is running but does not have a matching taint, the Pod stays bound to the node for 3,600 seconds and then be evicted. If the taint is removed before that time, the Pod is not evicted.
+Here, if this pod is running but does not have a matching taint, the pod stays bound to the node for 3,600 seconds and is then evicted. If the taint is removed before that time, the pod is not evicted.
[id="nodes-scheduler-taints-tolerations-about-multiple_{context}"]
== Understanding how to use multiple taints
-You can put multiple taints on the same node and multiple tolerations on the same Pod. {product-title} processes multiple taints and tolerations as follows:
+You can put multiple taints on the same node and multiple tolerations on the same pod. {product-title} processes multiple taints and tolerations as follows:
-. Process the taints for which the Pod has a matching toleration.
-. The remaining unmatched taints have the indicated effects on the Pod:
+. Process the taints for which the pod has a matching toleration.
+. The remaining unmatched taints have the indicated effects on the pod:
+
-* If there is at least one unmatched taint with effect `NoSchedule`, {product-title} cannot schedule a Pod onto that node.
-* If there is no unmatched taint with effect `NoSchedule` but there is at least one unmatched taint with effect `PreferNoSchedule`, {product-title} tries to not schedule the Pod onto the node.
-* If there is at least one unmatched taint with effect `NoExecute`, {product-title} evicts the Pod from the node (if it is already running on the node), or the Pod is not scheduled onto the node (if it is not yet running on the node).
+* If there is at least one unmatched taint with effect `NoSchedule`, {product-title} cannot schedule a pod onto that node.
+* If there is no unmatched taint with effect `NoSchedule` but there is at least one unmatched taint with effect `PreferNoSchedule`, {product-title} tries to not schedule the pod onto the node.
+* If there is at least one unmatched taint with effect `NoExecute`, {product-title} evicts the pod from the node (if it is already running on the node), or the pod is not scheduled onto the node (if it is not yet running on the node).
+
** Pods that do not tolerate the taint are evicted immediately.
+
@@ -133,7 +133,7 @@ $ oc adm taint nodes node1 key1=value1:NoExecute
$ oc adm taint nodes node1 key2=value2:NoSchedule
----
-* The Pod has the following tolerations:
+* The pod has the following tolerations:
+
[source,yaml]
----
@@ -148,34 +148,34 @@ tolerations:
effect: "NoExecute"
----
-In this case, the Pod cannot be scheduled onto the node, because there is no toleration matching the third taint. The Pod continues running if it is already running on the node when the taint is added, because the third taint is the only
-one of the three that is not tolerated by the Pod.
+In this case, the pod cannot be scheduled onto the node, because there is no toleration matching the third taint. The pod continues running if it is already running on the node when the taint is added, because the third taint is the only
+one of the three that is not tolerated by the pod.
[id="nodes-scheduler-taints-tolerations-about-prevent_{context}"] -== Preventing Pod eviction for node problems +== Preventing pod eviction for node problems -The Taint-Based Evictions feature, enabled by default, adds a taint with the `NoExecute` effect to nodes that are not ready or are unreachable. This allows you to specify how long a Pod should remain bound to a node that becomes unreachable or not ready, rather than using the default of five minutes. For example, you might want to allow a Pod on an unreachable node if the workload is safe to remain running while a networking issue resolves. +The Taint-Based Evictions feature, enabled by default, adds a taint with the `NoExecute` effect to nodes that are not ready or are unreachable. This allows you to specify how long a pod should remain bound to a node that becomes unreachable or not ready, rather than using the default of five minutes. For example, you might want to allow a pod on an unreachable node if the workload is safe to remain running while a networking issue resolves. If a node enters a not ready state, the node controller adds the `node.kubernetes.io/not-ready:NoExecute` taint to the node. If a node enters an unreachable state, the node controller adds the `node.kubernetes.io/unreachable:NoExecute` taint to the node. -The `NoExecute` taint affects Pods that are already running on the node in the following ways: +The `NoExecute` taint affects pods that are already running on the node in the following ways: * Pods that do not tolerate the taint are evicted immediately. * Pods that tolerate the taint without specifying `tolerationSeconds` in their toleration specification remain bound forever. * Pods that tolerate the taint with a specified `tolerationSeconds` remain bound for the specified amount of time. [id="nodes-scheduler-taints-tolerations-about-taintNodesByCondition_{context}"] -== Understanding Pod scheduling and node conditions (Taint Node by Condition) +== Understanding pod scheduling and node conditions (Taint Node by Condition) -{product-title} automatically taints nodes that report conditions such as memory pressure and disk pressure. If a node reports a condition, a taint is added until the condition clears. The taints have the `NoSchedule` effect, which means no Pod can be scheduled on the node unless the Pod has a matching toleration. This feature, *Taint Nodes By Condition*, is enabled by default. +{product-title} automatically taints nodes that report conditions such as memory pressure and disk pressure. If a node reports a condition, a taint is added until the condition clears. The taints have the `NoSchedule` effect, which means no pod can be scheduled on the node unless the pod has a matching toleration. This feature, *Taint Nodes By Condition*, is enabled by default. -The scheduler checks for these taints on nodes before scheduling Pods. If the taint is present, the Pod is scheduled on a different node. Because the scheduler checks for taints and not the actual node conditions, you configure the scheduler to ignore some of these node conditions by adding appropriate Pod tolerations. +The scheduler checks for these taints on nodes before scheduling pods. If the taint is present, the pod is scheduled on a different node. Because the scheduler checks for taints and not the actual node conditions, you configure the scheduler to ignore some of these node conditions by adding appropriate pod tolerations. 
To ensure backward compatibility, the DaemonSet controller automatically adds the following tolerations to all daemons:
* node.kubernetes.io/memory-pressure
* node.kubernetes.io/disk-pressure
-* node.kubernetes.io/out-of-disk (only for critical Pods)
+* node.kubernetes.io/out-of-disk (only for critical pods)
* node.kubernetes.io/unschedulable (1.10 or later)
* node.kubernetes.io/network-unavailable (host network only)
@@ -184,19 +184,19 @@ You can also add arbitrary tolerations to DaemonSets.
[id="nodes-scheduler-taints-tolerations-about-taintBasedEvictions_{context}"]
== Understanding evicting pods by condition (Taint-Based Evictions)
-The Taint-Based Evictions feature, enabled by default, evicts Pods from a node that experiences specific conditions, such as `not-ready` and `unreachable`.
-When a node experiences one of these conditions, {product-title} automatically adds taints to the node, and starts evicting and rescheduling the Pods on different nodes.
+The Taint-Based Evictions feature, enabled by default, evicts pods from a node that experiences specific conditions, such as `not-ready` and `unreachable`.
+When a node experiences one of these conditions, {product-title} automatically adds taints to the node, and starts evicting and rescheduling the pods on different nodes.
-Taint Based Evictions has a `NoExecute` effect, where any Pod that does not tolerate the taint will be evicted immediately and any Pod that does tolerate the taint will never be evicted.
+Taint-Based Evictions has a `NoExecute` effect, where any pod that does not tolerate the taint will be evicted immediately and any pod that does tolerate the taint will never be evicted.
[NOTE]
====
-{product-title} evicts Pods in a rate-limited way to prevent massive Pod evictions in scenarios such as the master becoming partitioned from the nodes.
+{product-title} evicts pods in a rate-limited way to prevent massive pod evictions in scenarios such as the master becoming partitioned from the nodes.
====
-This feature, in combination with `tolerationSeconds`, allows you to specify how long a Pod stays bound to a node that has a node condition. If the condition still exists after the `tolerationSections` period, the taint remains on the node and the Pods are evicted in a rate-limited manner. If the condition clears before the `tolerationSeconds` period, Pods are not removed.
+This feature, in combination with `tolerationSeconds`, allows you to specify how long a pod stays bound to a node that has a node condition. If the condition still exists after the `tolerationSeconds` period, the taint remains on the node and the pods are evicted in a rate-limited manner. If the condition clears before the `tolerationSeconds` period, pods are not removed.
-{product-title} automatically adds a toleration for `node.kubernetes.io/not-ready` and `node.kubernetes.io/unreachable` with `tolerationSeconds=300`, unless the Pod configuration specifies either toleration.
+{product-title} automatically adds a toleration for `node.kubernetes.io/not-ready` and `node.kubernetes.io/unreachable` with `tolerationSeconds=300`, unless the `Pod` configuration specifies either toleration.
[source,yaml]
----
@@ -212,13 +212,13 @@ spec
tolerationSeconds: 300
----
-These tolerations ensure that the default Pod behavior is to remain bound for five minutes after one of these node conditions problems is detected.
+These tolerations ensure that the default pod behavior is to remain bound for five minutes after one of these node conditions is detected.
-You can configure these tolerations as needed. For example, if you have an application with a lot of local state, you might want to keep the Pods bound to node for a longer time in the event of network partition, allowing for the partition to recover and avoiding Pod eviction.
+You can configure these tolerations as needed. For example, if you have an application with a lot of local state, you might want to keep the pods bound to the node for a longer time in the event of a network partition, allowing for the partition to recover and avoiding pod eviction.
-DaemonSet Pods are created with NoExecute tolerations for the following taints with no tolerationSeconds:
+DaemonSet pods are created with `NoExecute` tolerations for the following taints with no `tolerationSeconds`:
* `node.kubernetes.io/unreachable`
* `node.kubernetes.io/not-ready`
-This ensures that DaemonSet Pods are never evicted due to these node conditions, even if the `DefaultTolerationSeconds` admission controller is disabled.
+This ensures that DaemonSet pods are never evicted due to these node conditions, even if the `DefaultTolerationSeconds` admission controller is disabled.
diff --git a/modules/nw-about-multicast.adoc b/modules/nw-about-multicast.adoc
index f747ea6e74..2963cf586c 100644
--- a/modules/nw-about-multicast.adoc
+++ b/modules/nw-about-multicast.adoc
@@ -23,23 +23,23 @@ At this time, multicast is best used for low-bandwidth coordination or service
discovery and not a high-bandwidth solution.
====
-Multicast traffic between {product-title} Pods is disabled by default. If you are using the {sdn} default Container Network Interface (CNI) network provider, you can enable multicast on a per-project basis.
+Multicast traffic between {product-title} pods is disabled by default. If you are using the {sdn} default Container Network Interface (CNI) network provider, you can enable multicast on a per-project basis.
ifdef::openshift-sdn[]
When using the OpenShift SDN network plug-in in `networkpolicy` isolation mode:
-* Multicast packets sent by a Pod will be delivered to all other Pods in the
+* Multicast packets sent by a pod will be delivered to all other pods in the
project, regardless of NetworkPolicy objects. Pods might be able to communicate
over multicast even when they cannot communicate over unicast.
-* Multicast packets sent by a Pod in one project will never be delivered to Pods
+* Multicast packets sent by a pod in one project will never be delivered to pods
in any other project, even if there are NetworkPolicy objects that allow
communication between the projects.
When using the OpenShift SDN network plug-in in `multitenant` isolation mode:
-* Multicast packets sent by a Pod will be delivered to all other Pods in the
+* Multicast packets sent by a pod will be delivered to all other pods in the
project.
-* Multicast packets sent by a Pod in one project will be delivered to Pods in
+* Multicast packets sent by a pod in one project will be delivered to pods in
other projects only if each project is joined together and multicast is
enabled in each joined project.
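As a sketch of the per-project enablement mentioned above (assuming the OpenShift SDN provider; the project name is illustrative), multicast is typically turned on by annotating the project's `netnamespace`:

[source,terminal]
----
$ oc annotate netnamespace my-project \
    netnamespace.network.openshift.io/multicast-enabled=true
----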
endif::openshift-sdn[] diff --git a/modules/nw-configuring-high-performance-multicast-with-sriov.adoc b/modules/nw-configuring-high-performance-multicast-with-sriov.adoc index c136abe454..d3693815df 100644 --- a/modules/nw-configuring-high-performance-multicast-with-sriov.adoc +++ b/modules/nw-configuring-high-performance-multicast-with-sriov.adoc @@ -5,7 +5,7 @@ [id="nw-configuring-high-performance-multicast-with-sriov_{context}"] = Configuring high performance multicast -The OpenShift SDN default Container Network Interface (CNI) network provider supports multicast between Pods on the default network. This is best used for low-bandwidth coordination or service discovery, and not high-bandwidth applications. +The OpenShift SDN default Container Network Interface (CNI) network provider supports multicast between pods on the default network. This is best used for low-bandwidth coordination or service discovery, and not high-bandwidth applications. For applications such as streaming media, like Internet Protocol television (IPTV) and multipoint videoconferencing, you can utilize Single Root I/O Virtualization (SR-IOV) hardware to provide near-native performance. When using additional SR-IOV interfaces for multicast: diff --git a/modules/nw-disabling-multicast.adoc b/modules/nw-disabling-multicast.adoc index 3ead9aa7c8..800085e0b4 100644 --- a/modules/nw-disabling-multicast.adoc +++ b/modules/nw-disabling-multicast.adoc @@ -13,9 +13,9 @@ ifeval::["{context}" == "ovn-kubernetes-disabling-multicast"] endif::[] [id="nw-disabling-multicast_{context}"] -= Disabling multicast between Pods += Disabling multicast between pods -You can disable multicast between Pods for your project. +You can disable multicast between pods for your project. .Prerequisites diff --git a/modules/nw-egressnetworkpolicy-about.adoc b/modules/nw-egressnetworkpolicy-about.adoc index cef6d88fe1..6cab9e5926 100644 --- a/modules/nw-egressnetworkpolicy-about.adoc +++ b/modules/nw-egressnetworkpolicy-about.adoc @@ -16,15 +16,15 @@ endif::[] = How an egress firewall works in a project As a cluster administrator, you can use an _egress firewall_ to -limit the external hosts that some or all Pods can access from within the +limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios: -- A Pod can only connect to internal hosts and cannot initiate connections to +- A pod can only connect to internal hosts and cannot initiate connections to the public Internet. -- A Pod can only connect to the public Internet and cannot initiate connections +- A pod can only connect to the public Internet and cannot initiate connections to internal hosts that are outside the {product-title} cluster. -- A Pod cannot reach specified internal subnets or hosts outside the {product-title} cluster. -- A Pod can connect to only specific external hosts. +- A pod cannot reach specified internal subnets or hosts outside the {product-title} cluster. +- A pod can connect to only specific external hosts. For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or you can restrict application developers from updating from Python pip mirrors, and force updates to come only from approved sources. 
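To make the scenarios above concrete, a minimal sketch of such a policy (assuming the OpenShift SDN `EgressNetworkPolicy` kind; the CIDR values are illustrative) might allow one internal range and deny everything else:

[source,yaml]
----
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 172.16.1.0/24
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
----

Because the rules are evaluated in order, the final `Deny` entry only applies to traffic that no earlier rule matched.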
@@ -77,7 +77,7 @@ Violating any of these restrictions results in a broken egress firewall for the
[id="policy-rule-order_{context}"]
== Matching order for egress firewall policy rules
-The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a Pod applies. Any subsequent rules are ignored for that connection.
+The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection.
ifdef::openshift-sdn[]
[id="domain-name-server-resolution_{context}"]
@@ -87,15 +87,15 @@ If you use DNS names in any of your egress firewall policy rules, proper resolut
* Domain name updates are polled based on the TTL (time to live) value of the domain returned by the local non-authoritative servers.
-* The Pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the Pod can be different. If the IP addresses for a host name differ, the egress firewall might not be enforced consistently.
+* The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a host name differ, the egress firewall might not be enforced consistently.
-* Because the egress firewall controller and Pods asynchronously poll the same local name server, the Pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in {kind} objects is only recommended for domains with infrequent IP address changes.
+* Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in {kind} objects is only recommended for domains with infrequent IP address changes.
[NOTE]
====
-The egress firewall always allows Pods access to the external interface of the node that the Pod is on for DNS resolution.
-If you use domain names in your egress firewall policy and your DNS resolution is not handled by a DNS server on the local node, then you must add egress firewall rules that allow access to your DNS server’s IP addresses. if you are using domain names in your Pods.
+The egress firewall always allows pods access to the external interface of the node that the pod is on for DNS resolution.
+If you use domain names in your egress firewall policy and your DNS resolution is not handled by a DNS server on the local node, then you must add egress firewall rules that allow access to your DNS server’s IP addresses if you are using domain names in your pods.
====
endif::openshift-sdn[]
diff --git a/modules/nw-enabling-multicast.adoc b/modules/nw-enabling-multicast.adoc
index f07e9b1f16..569d142d89 100644
--- a/modules/nw-enabling-multicast.adoc
+++ b/modules/nw-enabling-multicast.adoc
@@ -13,9 +13,9 @@ ifeval::["{context}" == "ovn-kubernetes-enabling-multicast"]
endif::[]
[id="nw-enabling-multicast_{context}"]
-= Enabling multicast between Pods
+= Enabling multicast between pods
-You can enable multicast between Pods for your project.
+You can enable multicast between pods for your project. .Prerequisites diff --git a/modules/nw-ingress-creating-an-edge-route-with-a-custom-certificate.adoc b/modules/nw-ingress-creating-an-edge-route-with-a-custom-certificate.adoc index 230329d200..d01e6aa14f 100644 --- a/modules/nw-ingress-creating-an-edge-route-with-a-custom-certificate.adoc +++ b/modules/nw-ingress-creating-an-edge-route-with-a-custom-certificate.adoc @@ -8,7 +8,7 @@ You can configure a secure route using edge TLS termination with a custom certificate by using the `oc create route` command. With an edge route, the Ingress Controller terminates TLS encryption before forwarding traffic to the -destination Pod. The route specifies the TLS certificate and key that the +destination pod. The route specifies the TLS certificate and key that the Ingress Controller uses for the route. .Prerequisites diff --git a/modules/nw-multitenant-isolation.adoc b/modules/nw-multitenant-isolation.adoc index 03264d2d04..4cc779d781 100644 --- a/modules/nw-multitenant-isolation.adoc +++ b/modules/nw-multitenant-isolation.adoc @@ -4,8 +4,8 @@ [id="nw-multitenant-isolation_{context}"] = Isolating a project -You can isolate a project so that Pods and services in other projects cannot -access its Pods and services. +You can isolate a project so that pods and services in other projects cannot +access its pods and services. .Prerequisites diff --git a/modules/nw-multitenant-joining.adoc b/modules/nw-multitenant-joining.adoc index 814f5161f1..028dcf9b2a 100644 --- a/modules/nw-multitenant-joining.adoc +++ b/modules/nw-multitenant-joining.adoc @@ -4,7 +4,7 @@ [id="nw-multitenant-joining_{context}"] = Joining projects -You can join two or more projects to allow network traffic between Pods and +You can join two or more projects to allow network traffic between pods and services in different projects. .Prerequisites diff --git a/modules/nw-multus-add-pod.adoc b/modules/nw-multus-add-pod.adoc index cda067d781..0e85bd702f 100644 --- a/modules/nw-multus-add-pod.adoc +++ b/modules/nw-multus-add-pod.adoc @@ -18,18 +18,18 @@ ifeval::["{product-version}" == "4.5"] endif::[] [id="nw-multus-add-pod_{context}"] -= Adding a Pod to an additional network += Adding a pod to an additional network -You can add a Pod to an additional network. The Pod continues to send normal cluster-related network traffic over the default network. +You can add a pod to an additional network. The pod continues to send normal cluster-related network traffic over the default network. -When a Pod is created additional networks are attached to it. However, if a Pod already exists, you cannot attach additional networks to it. +When a pod is created additional networks are attached to it. However, if a pod already exists, you cannot attach additional networks to it. -The Pod must be in the same namespace as the additional network. +The pod must be in the same namespace as the additional network. ifdef::sriov[] [NOTE] ===== -If a NetworkAttachmentDefinition is managed by the SR-IOV Network Operator, the SR-IOV Network Resource Injector adds the `resource` field to the Pod object automatically. +If a NetworkAttachmentDefinition is managed by the SR-IOV Network Operator, the SR-IOV Network Resource Injector adds the `resource` field to the `Pod` object automatically. ===== ifdef::bz[] @@ -46,14 +46,14 @@ endif::sriov[] * Log in to the cluster. ifdef::sriov[] * Install the SR-IOV Operator. -* Create either an `SriovNetwork` object or an `SriovIBNetwork` object to attach the Pod to. 
+* Create either an `SriovNetwork` object or an `SriovIBNetwork` object to attach the pod to. endif::sriov[] .Procedure -. Add an annotation to the Pod object. Only one of the following annotation formats can be used: +. Add an annotation to the `Pod` object. Only one of the following annotation formats can be used: -.. To attach an additional network without any customization, add an annotation with the following format. Replace `` with the name of the additional network to associate with the Pod: +.. To attach an additional network without any customization, add an annotation with the following format. Replace `` with the name of the additional network to associate with the pod: + [source,yaml] ---- @@ -63,7 +63,7 @@ metadata: ---- <1> To specify more than one additional network, separate each network with a comma. Do not include whitespace between the comma. If you specify -the same additional network multiple times, that Pod will have multiple network +the same additional network multiple times, that pod will have multiple network interfaces attached to that network. .. To attach an additional network with customizations, add an annotation with the following format: @@ -86,21 +86,21 @@ metadata: <3> Optional: Specify an override for the default route, such as `192.168.17.1`. -. To create the Pod, enter the following command. Replace `` with the name of the Pod. +. To create the pod, enter the following command. Replace `` with the name of the pod. + [source,terminal] ---- $ oc create -f .yaml ---- -. Optional: To Confirm that the annotation exists in the Pod CR, enter the following command, replacing `` with the name of the Pod. +. Optional: To Confirm that the annotation exists in the `Pod` CR, enter the following command, replacing `` with the name of the pod. + [source,terminal] ---- $ oc get pod -o yaml ---- + -In the following example, the `example-pod` Pod is attached to the `net1` +In the following example, the `example-pod` pod is attached to the `net1` additional network: + [source,terminal] @@ -138,7 +138,7 @@ status: ---- <1> The `k8s.v1.cni.cncf.io/networks-status` parameter is a JSON array of objects. Each object describes the status of an additional network attached -to the Pod. The annotation value is stored as a plain text value. +to the pod. The annotation value is stored as a plain text value. ifeval::["{context}" == "configuring-sr-iov"] :!sriov: diff --git a/modules/nw-multus-advanced-annotations.adoc b/modules/nw-multus-advanced-annotations.adoc index 924d36a91d..8373386eae 100644 --- a/modules/nw-multus-advanced-annotations.adoc +++ b/modules/nw-multus-advanced-annotations.adoc @@ -3,15 +3,15 @@ // * networking/multiple_networks/attaching-pod.adoc [id="nw-multus-advanced-annotations_{context}"] -= Specifying Pod-specific addressing and routing options += Specifying pod-specific addressing and routing options -When attaching a Pod to an additional network, you may want to specify further properties -about that network in a particular Pod. This allows you to change some aspects of routing, as well +When attaching a pod to an additional network, you may want to specify further properties +about that network in a particular pod. This allows you to change some aspects of routing, as well as specify static IP addresses and MAC addresses. In order to accomplish this, you can use the JSON formatted annotations. .Prerequisites -* The Pod must be in the same namespace as the additional network. +* The pod must be in the same namespace as the additional network. 
* Install the OpenShift Command-line Interface (`oc`). * You must log in to the cluster. ifdef::sriov[] @@ -20,19 +20,19 @@ endif::sriov[] .Procedure -To add a Pod to an additional network while specifying addressing and/or routing options, complete the following steps: +To add a pod to an additional network while specifying addressing and/or routing options, complete the following steps: -. Edit the Pod resource definition. If you are editing an existing Pod, run the +. Edit the `Pod` resource definition. If you are editing an existing `Pod` resource, run the following command to edit its definition in the default editor. Replace `` -with the name of the Pod to edit. +with the name of the `Pod` resource to edit. + [source,terminal] ---- $ oc edit pod ---- -. In the Pod resource definition, add the `k8s.v1.cni.cncf.io/networks` -parameter to the Pod `metadata` mapping. The `k8s.v1.cni.cncf.io/networks` +. In the `Pod` resource definition, add the `k8s.v1.cni.cncf.io/networks` +parameter to the pod `metadata` mapping. The `k8s.v1.cni.cncf.io/networks` accepts a JSON string of a list of objects that reference the name of NetworkAttachmentDefinition Custom Resource (CR) names in addition to specifying additional properties. + @@ -69,20 +69,20 @@ spec: image: centos/tools ---- <1> The `name` key is the name of the additional network to associate -with the Pod. +with the pod. <2> The `default-route` key specifies a value of a gateway for traffic to be routed over if no other routing entry is present in the routing table. If more than one `default-route` key is specified, -this will cause the Pod to fail to become active. +this will cause the pod to fail to become active. The default route will cause any traffic that is not specified in other routes to be routed to the gateway. [IMPORTANT] ==== Setting the default route to an interface other than the default network interface for {product-title} -may cause traffic that is anticipated for Pod-to-Pod traffic to be routed over another interface. +may cause traffic that is anticipated for pod-to-pod traffic to be routed over another interface. ==== -To verify the routing properties of a Pod, the `oc` command may be used to execute the `ip` command within a Pod. +To verify the routing properties of a pod, the `oc` command may be used to execute the `ip` command within a pod. [source,terminal] ---- @@ -91,12 +91,12 @@ $ oc exec -it -- ip route [NOTE] ==== -You may also reference the Pod's `k8s.v1.cni.cncf.io/networks-status` to see which additional network has been +You may also reference the pod's `k8s.v1.cni.cncf.io/networks-status` to see which additional network has been assigned the default route, by the presence of the `default-route` key in the JSON-formatted list of objects. ==== -To set a static IP address or MAC address for a Pod you can use the JSON formatted annotations. This requires you create networks that specifically allow for this functionality. This can be specified in a rawCNIConfig for the CNO. +To set a static IP address or MAC address for a pod you can use the JSON formatted annotations. This requires you create networks that specifically allow for this functionality. This can be specified in a rawCNIConfig for the CNO. . Edit the CNO CR by running the following command: + @@ -158,9 +158,9 @@ and IP address using the macvlan CNI plug-in: <4> Here the `capabilities` key denotes that a request is made to enable the static MAC address functionality of a CNI plug-in. 
The above network attachment may then be referenced in a JSON formatted annotation, along with keys to specify which -static IP and MAC address will be assigned to a given Pod. +static IP and MAC address will be assigned to a given pod. -Edit the desired Pod with: +Edit the desired pod with: [source,terminal] ---- @@ -197,7 +197,7 @@ Static IP addresses and MAC addresses do not have to be used at the same time, y individually, or together. ==== -To verify the IP address and MAC properties of a Pod with additional networks, use the `oc` command to execute the ip command within a Pod. +To verify the IP address and MAC properties of a pod with additional networks, use the `oc` command to execute the ip command within a pod. [source,terminal] ---- diff --git a/modules/nw-multus-delete-network.adoc b/modules/nw-multus-delete-network.adoc index 3a20cc8568..8aedf56e5f 100644 --- a/modules/nw-multus-delete-network.adoc +++ b/modules/nw-multus-delete-network.adoc @@ -6,7 +6,7 @@ = Removing an additional network attachment definition As a cluster administrator, you can remove an additional network from your -{product-title} cluster. The additional network is not removed from any Pods it +{product-title} cluster. The additional network is not removed from any pods it is attached to. .Prerequisites diff --git a/modules/nw-multus-edit-network.adoc b/modules/nw-multus-edit-network.adoc index 15bce91254..823b179ee0 100644 --- a/modules/nw-multus-edit-network.adoc +++ b/modules/nw-multus-edit-network.adoc @@ -6,7 +6,7 @@ = Modifying an additional network attachment definition As a cluster administrator, you can make changes to an existing additional -network. Any existing Pods attached to the additional network will not be updated. +network. Any existing pods attached to the additional network will not be updated. .Prerequisites diff --git a/modules/nw-multus-ipam-object.adoc b/modules/nw-multus-ipam-object.adoc index fc62c62a94..fdfd8c8086 100644 --- a/modules/nw-multus-ipam-object.adoc +++ b/modules/nw-multus-ipam-object.adoc @@ -83,7 +83,7 @@ IPv4 and IPv6 IP addresses are supported. <3> The default gateway to route egress network traffic to. -<4> An array describing routes to configure inside the Pod. +<4> An array describing routes to configure inside the pod. <5> The IP address range in CIDR format. @@ -280,7 +280,7 @@ interface. Both IPv4 and IPv6 IP addresses are supported. <3> The default gateway to route egress network traffic to. -<4> A collection of mappings describing routes to configure inside the Pod. +<4> A collection of mappings describing routes to configure inside the pod. <5> The IP address range in CIDR format. diff --git a/modules/nw-ne-openshift-dns.adoc b/modules/nw-ne-openshift-dns.adoc index f86e6f44cf..3bf03d6c30 100644 --- a/modules/nw-ne-openshift-dns.adoc +++ b/modules/nw-ne-openshift-dns.adoc @@ -6,14 +6,14 @@ = {product-title} DNS If you are running multiple services, such as front-end and back-end services for -use with multiple Pods, environment variables are created for user names, -service IPs, and more so the front-end Pods can communicate with the back-end +use with multiple pods, environment variables are created for user names, +service IPs, and more so the front-end pods can communicate with the back-end services. 
If the service is deleted and recreated, a new IP address can be -assigned to the service, and requires the front-end Pods to be recreated to pick +assigned to the service, and requires the front-end pods to be recreated to pick up the updated values for the service IP environment variable. Additionally, the -back-end service must be created before any of the front-end Pods to ensure that +back-end service must be created before any of the front-end pods to ensure that the service IP is generated properly, and that it can be provided to the -front-end Pods as an environment variable. +front-end pods as an environment variable. For this reason, {product-title} has a built-in DNS so that the services can be reached by the service DNS as well as the service IP/port. diff --git a/modules/nw-networkpolicy-about.adoc b/modules/nw-networkpolicy-about.adoc index 6c3acc4468..aae0e054ad 100644 --- a/modules/nw-networkpolicy-about.adoc +++ b/modules/nw-networkpolicy-about.adoc @@ -15,7 +15,7 @@ In {product-title} {product-version}, OpenShift SDN supports using NetworkPolicy ==== IPBlock is supported in NetworkPolicy with limitations for OpenShift SDN; it supports IPBlock without except clauses. If you create a policy with an IPBlock -section including an except clause, the SDN Pods log generates warnings and the +section including an except clause, the SDN pods log generates warnings and the entire IPBlock section of that policy is ignored. ==== @@ -24,15 +24,15 @@ entire IPBlock section of that policy is ignored. Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by NetworkPolicy object rules. ==== -By default, all Pods in a project are accessible from other Pods and network -endpoints. To isolate one or more Pods in a project, you can create +By default, all pods in a project are accessible from other pods and network +endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project. -If a Pod is matched by selectors in one or more NetworkPolicy objects, then the -Pod will accept only connections that are allowed by at least one of those -NetworkPolicy objects. A Pod that is not selected by any NetworkPolicy objects +If a pod is matched by selectors in one or more NetworkPolicy objects, then the +pod will accept only connections that are allowed by at least one of those +NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects is fully accessible. The following example NetworkPolicy objects demonstrate supporting different @@ -41,7 +41,7 @@ scenarios: * Deny all traffic: + To make a project deny by default, add a NetworkPolicy object that matches all -Pods but accepts no traffic: +pods but accepts no traffic: + [source,yaml] ---- @@ -77,15 +77,15 @@ spec: ---- + -If the Ingress Controller is configured with `endpointPublishingStrategy: HostNetwork`, then the Ingress Controller Pod runs on the host network. +If the Ingress Controller is configured with `endpointPublishingStrategy: HostNetwork`, then the Ingress Controller pod runs on the host network. When running on the host network, the traffic from the Ingress Controller is assigned the `netid:0` Virtual Network ID (VNID). 
The `netid` for the namespace that is associated with the Ingress Operator is different, so the `matchLabel` in the `allow-from-openshift-ingress` network policy does not match traffic from the `default` Ingress Controller. Because the `default` namespace is assigned the `netid:0` VNID, you can allow traffic from the `default` Ingress Controller by labeling your `default` namespace with `network.openshift.io/policy-group: ingress`. -* Only accept connections from Pods within a project: +* Only accept connections from pods within a project: + -To make Pods accept connections from other Pods in the same project, but reject -all other connections from Pods in other projects, add the following +To make pods accept connections from other pods in the same project, but reject +all other connections from pods in other projects, add the following NetworkPolicy object: + [source,yaml] @@ -101,9 +101,9 @@ spec: - podSelector: {} ---- -* Only allow HTTP and HTTPS traffic based on Pod labels: +* Only allow HTTP and HTTPS traffic based on pod labels: + -To enable only HTTP and HTTPS access to the Pods with a specific label +To enable only HTTP and HTTPS access to the pods with a specific label (`role=frontend` in following example), add a NetworkPolicy object similar to the following: + [source,yaml] @@ -124,9 +124,9 @@ spec: port: 443 ---- -* Accept connections by using both namespace and Pod selectors: +* Accept connections by using both namespace and pod selectors: + -To match network traffic by combining namespace and Pod selectors, you can use a NetworkPolicy object similar to the following: +To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following: + [source,yaml] ---- @@ -154,7 +154,7 @@ NetworkPolicy objects together to satisfy complex network requirements. For example, for the NetworkPolicy objects defined in previous samples, you can define both `allow-same-namespace` and `allow-http-and-https` policies -within the same project. Thus allowing the Pods with the label `role=frontend`, +within the same project. Thus allowing the pods with the label `role=frontend`, to accept any connection allowed by each policy. That is, connections on any -port from Pods in the same namespace, and connections on ports `80` and -`443` from Pods in any namespace. +port from pods in the same namespace, and connections on ports `80` and +`443` from pods in any namespace. diff --git a/modules/nw-networkpolicy-multitenant-isolation.adoc b/modules/nw-networkpolicy-multitenant-isolation.adoc index ea8e62ffcc..ce553c37b9 100644 --- a/modules/nw-networkpolicy-multitenant-isolation.adoc +++ b/modules/nw-networkpolicy-multitenant-isolation.adoc @@ -7,7 +7,7 @@ [id="nw-networkpolicy-multitenant-isolation_{context}"] = Configuring multitenant isolation using NetworkPolicy -You can configure your project to isolate it from Pods and Services in other +You can configure your project to isolate it from pods and Services in other project namespaces. .Prerequisites diff --git a/modules/nw-networkpolicy-object.adoc b/modules/nw-networkpolicy-object.adoc index 2d306ea95f..b3d9ea1f41 100644 --- a/modules/nw-networkpolicy-object.adoc +++ b/modules/nw-networkpolicy-object.adoc @@ -32,8 +32,8 @@ spec: port: 27017 ---- <1> The `name` of the NetworkPolicy object. -<2> A selector describing the Pods the policy applies to. The policy object can -only select Pods in the project that the NetworkPolicy object is defined. 
-<3> A selector matching the Pods that the policy object allows ingress traffic -from. The selector will match Pods in any project. +<2> A selector describing the pods the policy applies to. The policy object can +only select pods in the project that the NetworkPolicy object is defined. +<3> A selector matching the pods that the policy object allows ingress traffic +from. The selector will match pods in any project. <4> A list of one or more destination ports to accept traffic on. diff --git a/modules/nw-ovn-kubernetes-migration.adoc b/modules/nw-ovn-kubernetes-migration.adoc index 1f89d20fa3..eefa9da786 100644 --- a/modules/nw-ovn-kubernetes-migration.adoc +++ b/modules/nw-ovn-kubernetes-migration.adoc @@ -46,7 +46,7 @@ $ oc patch Network.config.openshift.io cluster \ --type='merge' --patch '{ "spec": { "networkType": "OVNKubernetes" } }' ---- -. To confirm the migration disabled the OpenShift SDN default CNI network provider and removed all OpenShift SDN Pods, enter the following command. It might take several moments for all the OpenShift SDN Pods to stop. +. To confirm the migration disabled the OpenShift SDN default CNI network provider and removed all OpenShift SDN pods, enter the following command. It might take several moments for all the OpenShift SDN pods to stop. + [source,terminal] ---- @@ -84,14 +84,14 @@ $ oc get nodes + If a node is stuck in the `NotReady` state, reboot the node again. -.. To confirm that your Pods are not in an error state, enter the following command: +.. To confirm that your pods are not in an error state, enter the following command: + [source,terminal] ---- $ oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}' ---- + -If Pods on a node are in an error state, reboot that node. +If pods on a node are in an error state, reboot that node. . Complete the following steps only if the migration succeeds and your cluster is in a good state: diff --git a/modules/nw-ovn-kubernetes-rollback.adoc b/modules/nw-ovn-kubernetes-rollback.adoc index 2c8e8adc03..427a45dd6c 100644 --- a/modules/nw-ovn-kubernetes-rollback.adoc +++ b/modules/nw-ovn-kubernetes-rollback.adoc @@ -42,7 +42,7 @@ $ oc patch Network.config.openshift.io cluster \ $ oc edit Network.config.openshift.io cluster ---- -. To confirm that the migration disabled the OVN-Kubernetes default CNI network provider and removed all the OVN-Kubernetes Pods, enter the following command. It might take several moments for all the OVN-Kubernetes Pods to stop. +. To confirm that the migration disabled the OVN-Kubernetes default CNI network provider and removed all the OVN-Kubernetes pods, enter the following command. It might take several moments for all the OVN-Kubernetes pods to stop. + [source,terminal] ---- @@ -69,7 +69,7 @@ done $ oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}' ---- -. To confirm that the OpenShift SDN Pods are in the `READY` state, enter the following command: +. To confirm that the OpenShift SDN pods are in the `READY` state, enter the following command: + [source,terminal] ---- diff --git a/modules/nw-ptp-installing-operator.adoc b/modules/nw-ptp-installing-operator.adoc index 38d942dddf..3606eed066 100644 --- a/modules/nw-ptp-installing-operator.adoc +++ b/modules/nw-ptp-installing-operator.adoc @@ -135,5 +135,5 @@ If the operator does not appear as installed, to troubleshoot further: * Go to the *Operators* -> *Installed Operators* page and inspect the *Operator Subscriptions* and *Install Plans* tabs for any failure or errors under *Status*. 
-* Go to the *Workloads* -> *Pods* page and check the logs for Pods in the +* Go to the *Workloads* -> *Pods* page and check the logs for pods in the `openshift-ptp` project. diff --git a/modules/nw-sctp-about.adoc b/modules/nw-sctp-about.adoc index b6f211c550..619ab864b2 100644 --- a/modules/nw-sctp-about.adoc +++ b/modules/nw-sctp-about.adoc @@ -10,7 +10,7 @@ On {op-system-first}, the SCTP module is disabled by default. SCTP is a reliable message based protocol that runs on top of an IP network. -When enabled, you can use SCTP as a protocol with Pods, Services, and network policy. +When enabled, you can use SCTP as a protocol with pods, Services, and network policy. A Service must be defined with the `type` parameter set to either the `ClusterIP` or `NodePort` value. [id="example_configurations_{context}"] @@ -56,7 +56,7 @@ spec: type: ClusterIP ---- -In the following example, a NetworkPolicy object is configured to apply to SCTP network traffic on port `80` from any Pods with a specific label: +In the following example, a NetworkPolicy object is configured to apply to SCTP network traffic on port `80` from any pods with a specific label: [source,yaml] ---- diff --git a/modules/nw-sctp-verifying.adoc b/modules/nw-sctp-verifying.adoc index c9872f145d..1e76581d99 100644 --- a/modules/nw-sctp-verifying.adoc +++ b/modules/nw-sctp-verifying.adoc @@ -53,7 +53,7 @@ spec: $ oc create -f sctp-server.yaml ---- -. Create a Service for the SCTP listener Pod. +. Create a Service for the SCTP listener pod. .. Create a file named `sctp-service.yaml` that defines a Service with the following YAML: + @@ -104,7 +104,7 @@ spec: ["dnf install -y nc && sleep inf"] ---- -.. To create the Pod object, enter the following command: +.. To create the `Pod` object, enter the following command: + [source,terminal] ---- @@ -113,7 +113,7 @@ $ oc apply -f sctp-client.yaml . Run an SCTP listener on the server. -.. To connect to the server Pod, enter the following command: +.. To connect to the server pod, enter the following command: + [source,terminal] ---- @@ -138,7 +138,7 @@ $ nc -l 30102 --sctp $ oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{"\n"}}' ---- -.. To connect to the client Pod, enter the following command: +.. To connect to the client pod, enter the following command: + [source,terminal] ---- diff --git a/modules/nw-sriov-add-pod-runtimeconfig.adoc b/modules/nw-sriov-add-pod-runtimeconfig.adoc index a2804fc58b..1d0e066fb0 100644 --- a/modules/nw-sriov-add-pod-runtimeconfig.adoc +++ b/modules/nw-sriov-add-pod-runtimeconfig.adoc @@ -57,7 +57,7 @@ $ oc get net-attach-def -n [NOTE] ===== -Do not modify or delete a SriovNetwork Custom Resource (CR) if it is attached to any Pods in the `running` state. +Do not modify or delete a SriovNetwork Custom Resource (CR) if it is attached to any pods in the `running` state. ===== . Create the following SR-IOV pod spec, and then save the YAML in the `-sriov-pod.yaml` file. Replace `` with a name for this pod. diff --git a/modules/nw-sriov-configuring-device.adoc b/modules/nw-sriov-configuring-device.adoc index 4fa0ce97bf..f3edd064d2 100644 --- a/modules/nw-sriov-configuring-device.adoc +++ b/modules/nw-sriov-configuring-device.adoc @@ -75,7 +75,7 @@ If you specify both `pfNames` and `rootDevices` at the same time, ensure that th <11> Optional: The parameter accepts an array of one or more physical function (PF) names for the Ethernet device. 
<12> The parameter accepts an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: `0000:02:00.1`. <13> The `vfio-pci` driver type is required for virtual functions in {VirtProductName}. -<14> Optional: Specify whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set `isRdma` to `false`. The default value is `false`. +<14> Optional: Specify whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set `isRdma` to `false`. The default value is `false`. + [NOTE] ==== @@ -94,7 +94,7 @@ $ oc create -f -sriov-node-network.yaml + where `` specifies the name for this configuration. + -After applying the configuration update, all the Pods in `sriov-network-operator` namespace transition to the `Running` status. +After applying the configuration update, all the pods in `sriov-network-operator` namespace transition to the `Running` status. . To verify that the SR-IOV network device is configured, enter the following command. Replace `` with the name of a node with the SR-IOV network device that you just configured. + diff --git a/modules/nw-sriov-configuring-operator.adoc b/modules/nw-sriov-configuring-operator.adoc index bb8b376e2b..60d8ce6ce3 100644 --- a/modules/nw-sriov-configuring-operator.adoc +++ b/modules/nw-sriov-configuring-operator.adoc @@ -33,10 +33,10 @@ The SriovOperatorConfig CR provides several fields for configuring the operator: The Network Resources Injector is a Kubernetes Dynamic Admission Controller application. It provides the following capabilities: -* Mutation of resource requests and limits in Pod specification to add an SR-IOV resource name according to an SR-IOV network attachment definition annotation. -* Mutation of Pod specifications with downward API volume to expose pod annotations and labels to the running container as files under the `/etc/podnetinfo` path. +* Mutation of resource requests and limits in `Pod` specification to add an SR-IOV resource name according to an SR-IOV network attachment definition annotation. +* Mutation of `Pod` specifications with downward API volume to expose pod annotations and labels to the running container as files under the `/etc/podnetinfo` path. -By default the Network Resources Injector is enabled by the SR-IOV operator and runs as a DaemonSet on all master nodes. The following is an example of Network Resources Injector Pods running in a cluster with three master nodes: +By default the Network Resources Injector is enabled by the SR-IOV operator and runs as a DaemonSet on all master nodes. The following is an example of Network Resources Injector pods running in a cluster with three master nodes: [source,terminal] ---- @@ -62,7 +62,7 @@ Admission Controller application. It provides the following capabilities: * Mutation of the `SriovNetworkNodePolicy` CR by setting the default value for the `priority` and `deviceType` fields when the CR is created or updated. By default the SR-IOV Operator Admission Controller webook is enabled by the operator and runs as a DaemonSet on all master nodes. 
-The following is an example of the Operator Admission Controller webook Pods running in a cluster with three master nodes:
+The following is an example of the Operator Admission Controller webhook pods running in a cluster with three master nodes:

[source,terminal]
----
diff --git a/modules/nw-sriov-dpdk-example-intel.adoc b/modules/nw-sriov-dpdk-example-intel.adoc
index 26967d2e29..668be30409 100644
--- a/modules/nw-sriov-dpdk-example-intel.adoc
+++ b/modules/nw-sriov-dpdk-example-intel.adoc
@@ -39,7 +39,7 @@ When applying the configuration specified in a SriovNetworkNodePolicy CR, the SR
 It may take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand.
 +
-After the configuration update is applied, all the Pods in `openshift-sriov-network-operator` namespace will change to a `Running` status.
+After the configuration update is applied, all the pods in the `openshift-sriov-network-operator` namespace will change to a `Running` status.
 =====
 
 . Create the SriovNetworkNodePolicy CR by running the following command:
@@ -78,7 +78,7 @@ Please refer to the `Configuring SR-IOV additional network` section for a detail
 $ oc create -f intel-dpdk-network.yaml
 ----
 
-. Create the following Pod spec, and then save the YAML in the `intel-dpdk-pod.yaml` file.
+. Create the following `Pod` spec, and then save the YAML in the `intel-dpdk-pod.yaml` file.
 +
 [source,yaml]
 ----
@@ -116,13 +116,13 @@ spec:
      emptyDir:
        medium: HugePages
----
-<1> Specify the same `target_namespace` where the SriovNetwork CR `intel-dpdk-network` is created. If you would like to create the Pod in a different namespace, change `target_namespace` in both the Pod spec and the SriovNetowrk CR.
+<1> Specify the same `target_namespace` where the SriovNetwork CR `intel-dpdk-network` is created. If you would like to create the pod in a different namespace, change `target_namespace` in both the `Pod` spec and the SriovNetwork CR.
<2> Specify the DPDK image which includes your application and the DPDK library used by application.
<3> Specify the `IPC_LOCK` capability which is required by the application to allocate hugepage memory inside container.
<4> Mount a hugepage volume to the DPDK Pod under `/dev/hugepages`. The hugepage volume is backed by the emptyDir volume type with the medium being `Hugepages`.
-<5> Optional: Specify the number of DPDK devices allocated to DPDK Pod. This resource request and limit, if not explicitly specified, will be automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by the SR-IOV Operator. It is enabled by default and can be disabled by setting `enableInjector` option to `false` in the default `SriovOperatorConfig` CR.
+<5> Optional: Specify the number of DPDK devices allocated to the DPDK pod. This resource request and limit, if not explicitly specified, will be automatically added by the SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by the SR-IOV Operator. It is enabled by default and can be disabled by setting `enableInjector` option to `false` in the default `SriovOperatorConfig` CR.
<6> Specify the number of CPUs. The DPDK Pod usually requires exclusive CPUs to be allocated from the kubelet. This is achieved by setting CPU Manager policy to `static` and creating a Pod with `Guaranteed` QoS.
-<7> Specify hugepage size `hugepages-1Gi` or `hugepages-2Mi` and the quantity of hugepages that will be allocated to the DPDK Pod. Configure `2Mi` and `1Gi` hugepages separately. Configuring `1Gi` hugepage requires adding kernel arguments to Nodes. For example, adding kernel arguments `default_hugepagesz=1GB`, `hugepagesz=1G` and `hugepages=16` will result in `16*1Gi` hugepages be allocated during system boot.
+<7> Specify hugepage size `hugepages-1Gi` or `hugepages-2Mi` and the quantity of hugepages that will be allocated to the DPDK pod. Configure `2Mi` and `1Gi` hugepages separately. Configuring `1Gi` hugepage requires adding kernel arguments to Nodes. For example, adding kernel arguments `default_hugepagesz=1GB`, `hugepagesz=1G` and `hugepages=16` will result in `16*1Gi` hugepages being allocated during system boot.

. Create the DPDK Pod by running the following command:
+
diff --git a/modules/nw-sriov-dpdk-example-mellanox.adoc b/modules/nw-sriov-dpdk-example-mellanox.adoc
index 967c42af6b..82ef697e0a 100644
--- a/modules/nw-sriov-dpdk-example-mellanox.adoc
+++ b/modules/nw-sriov-dpdk-example-mellanox.adoc
@@ -42,7 +42,7 @@ When applying the configuration specified in a SriovNetworkNodePolicy CR, the SR
 It may take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand.
 +
-After the configuration update is applied, all the Pods in the `openshift-sriov-network-operator` namespace will change to a `Running` status.
+After the configuration update is applied, all the pods in the `openshift-sriov-network-operator` namespace will change to a `Running` status.
 =====
 
 . Create the SriovNetworkNodePolicy CR by running the following command:
@@ -82,7 +82,7 @@ Please refer to `Configuring SR-IOV additional network` section for detailed exp
 $ oc create -f mlx-dpdk-network.yaml
 ----
 
-. Create the following Pod spec, and then save the YAML in the `mlx-dpdk-pod.yaml` file.
+. Create the following `Pod` spec, and then save the YAML in the `mlx-dpdk-pod.yaml` file.
 +
 [source,yaml]
 ----
@@ -120,15 +120,15 @@ spec:
      emptyDir:
        medium: HugePages
----
-<1> Specify the same `target_namespace` where SriovNetwork CR `mlx-dpdk-network` is created. If you would like to create the Pod in a different namespace, change `target_namespace` in both Pod spec and SriovNetowrk CR.
+<1> Specify the same `target_namespace` where SriovNetwork CR `mlx-dpdk-network` is created. If you would like to create the pod in a different namespace, change `target_namespace` in both `Pod` spec and SriovNetwork CR.
<2> Specify the DPDK image which includes your application and the DPDK library used by application.
<3> Specify the `IPC_LOCK` capability which is required by the application to allocate hugepage memory inside the container.
-<4> Mount the hugepage volume to the DPDK Pod under `/dev/hugepages`. The hugepage volume is backed by the emptyDir volume type with the medium being `Hugepages`.
-<5> Optional: Specify the number of DPDK devices allocated to the DPDK Pod. This resource request and limit, if not explicitly specified, will be automatically added by SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by SR-IOV Operator. It is enabled by default and can be disabled by setting the `enableInjector` option to `false` in the default `SriovOperatorConfig` CR.
-<6> Specify the number of CPUs. The DPDK Pod usually requires exclusive CPUs be allocated from kubelet. 
This is achieved by setting CPU Manager policy to `static` and creating a Pod with `Guaranteed` QoS. -<7> Specify hugepage size `hugepages-1Gi` or `hugepages-2Mi` and the quantity of hugepages that will be allocated to DPDK Pod. Configure `2Mi` and `1Gi` hugepages separately. Configuring `1Gi` hugepage requires adding kernel arguments to Nodes. +<4> Mount the hugepage volume to the DPDK pod under `/dev/hugepages`. The hugepage volume is backed by the emptyDir volume type with the medium being `Hugepages`. +<5> Optional: Specify the number of DPDK devices allocated to the DPDK pod. This resource request and limit, if not explicitly specified, will be automatically added by SR-IOV network resource injector. The SR-IOV network resource injector is an admission controller component managed by SR-IOV Operator. It is enabled by default and can be disabled by setting the `enableInjector` option to `false` in the default `SriovOperatorConfig` CR. +<6> Specify the number of CPUs. The DPDK pod usually requires exclusive CPUs be allocated from kubelet. This is achieved by setting CPU Manager policy to `static` and creating a pod with `Guaranteed` QoS. +<7> Specify hugepage size `hugepages-1Gi` or `hugepages-2Mi` and the quantity of hugepages that will be allocated to DPDK pod. Configure `2Mi` and `1Gi` hugepages separately. Configuring `1Gi` hugepage requires adding kernel arguments to Nodes. -. Create the DPDK Pod by running the following command: +. Create the DPDK pod by running the following command: + [source,terminal] ---- diff --git a/modules/nw-sriov-example-vf-function-in-pod.adoc b/modules/nw-sriov-example-vf-function-in-pod.adoc index ebfd21e603..ea191fc0b7 100644 --- a/modules/nw-sriov-example-vf-function-in-pod.adoc +++ b/modules/nw-sriov-example-vf-function-in-pod.adoc @@ -9,7 +9,7 @@ You can run a remote direct memory access (RDMA) or a Data Plane Development Kit This example shows a Pod using a virtual function (VF) in RDMA mode: -.Pod spec that uses RDMA mode +.`Pod` spec that uses RDMA mode [source,yaml] ---- apiVersion: v1 @@ -31,7 +31,7 @@ spec: The following example shows a Pod with a VF in DPDK mode: -.Pod spec that uses DPDK mode +.`Pod` spec that uses DPDK mode [source,yaml] ---- apiVersion: v1 diff --git a/modules/nw-sriov-installing-operator.adoc b/modules/nw-sriov-installing-operator.adoc index 1d68592900..be479e0090 100644 --- a/modules/nw-sriov-installing-operator.adoc +++ b/modules/nw-sriov-installing-operator.adoc @@ -175,7 +175,7 @@ If the operator does not appear as installed, to troubleshoot further: + * Inspect the *Operator Subscriptions* and *Install Plans* tabs for any failure or errors under *Status*. -* Navigate to the *Workloads* -> *Pods* page and check the logs for Pods in the +* Navigate to the *Workloads* -> *Pods* page and check the logs for pods in the `openshift-sriov-network-operator` project. ifdef::run-level[] diff --git a/modules/nw-sriov-rdma-example-mellanox.adoc b/modules/nw-sriov-rdma-example-mellanox.adoc index 36f28b2c74..2a343895fe 100644 --- a/modules/nw-sriov-rdma-example-mellanox.adoc +++ b/modules/nw-sriov-rdma-example-mellanox.adoc @@ -45,7 +45,7 @@ When applying the configuration specified in a SriovNetworkNodePolicy CR, the SR It may take several minutes for a configuration change to apply. Ensure that there are enough available nodes in your cluster to handle the evicted workload beforehand. + -After the configuration update is applied, all the Pods in the `openshift-sriov-network-operator` namespace will change to a `Running` status. 
+After the configuration update is applied, all the pods in the `openshift-sriov-network-operator` namespace will change to a `Running` status.
 =====
 
 . Create the SriovNetworkNodePolicy CR by running the following command:
@@ -85,7 +85,7 @@ Please refer to `Configuring SR-IOV additional network` section for detailed exp
 $ oc create -f mlx-rdma-network.yaml
 ----
 
-. Create the following Pod spec, and then save the YAML in the `mlx-rdma-pod.yaml` file.
+. Create the following `Pod` spec, and then save the YAML in the `mlx-rdma-pod.yaml` file.
 +
 [source,yaml]
 ----
@@ -121,14 +121,14 @@ spec:
      emptyDir:
        medium: HugePages
----
-<1> Specify the same `target_namespace` where SriovNetwork CR `mlx-rdma-network` is created. If you would like to create the Pod in a different namespace, change `target_namespace` in both Pod spec and SriovNetowrk CR.
+<1> Specify the same `target_namespace` where SriovNetwork CR `mlx-rdma-network` is created. If you would like to create the pod in a different namespace, change `target_namespace` in both `Pod` spec and SriovNetwork CR.
<2> Specify the RDMA image which includes your application and RDMA library used by application.
<3> Specify the `IPC_LOCK` capability which is required by the application to allocate hugepage memory inside the container.
-<4> Mount the hugepage volume to RDMA Pod under `/dev/hugepages`. The hugepage volume is backed by the emptyDir volume type with the medium being `Hugepages`.
-<5> Specify number of CPUs. The RDMA Pod usually requires exclusive CPUs be allocated from the kubelet. This is achieved by setting CPU Manager policy to `static` and create Pod with `Guaranteed` QoS.
-<6> Specify hugepage size `hugepages-1Gi` or `hugepages-2Mi` and the quantity of hugepages that will be allocated to the RDMA Pod. Configure `2Mi` and `1Gi` hugepages separately. Configuring `1Gi` hugepage requires adding kernel arguments to Nodes.
+<4> Mount the hugepage volume to RDMA pod under `/dev/hugepages`. The hugepage volume is backed by the emptyDir volume type with the medium being `Hugepages`.
+<5> Specify the number of CPUs. The RDMA pod usually requires exclusive CPUs be allocated from the kubelet. This is achieved by setting CPU Manager policy to `static` and creating a pod with `Guaranteed` QoS.
+<6> Specify hugepage size `hugepages-1Gi` or `hugepages-2Mi` and the quantity of hugepages that will be allocated to the RDMA pod. Configure `2Mi` and `1Gi` hugepages separately. Configuring `1Gi` hugepage requires adding kernel arguments to Nodes.

-. Create the RDMA Pod by running the following command:
+. Create the RDMA pod by running the following command:
 +
 [source,terminal]
 ----
diff --git a/modules/nw-throughput-troubleshoot.adoc b/modules/nw-throughput-troubleshoot.adoc
index cf339c630e..6d6e9b637e 100644
--- a/modules/nw-throughput-troubleshoot.adoc
+++ b/modules/nw-throughput-troubleshoot.adoc
@@ -17,20 +17,20 @@
 to analyze traffic between a Pod and its node. For example, run the tcpdump
tool on each Pod while reproducing the behavior that led to the issue. Review
the captures on both sides to compare send and receive timestamps to
-analyze the latency of traffic to and from a Pod.
+analyze the latency of traffic to and from a pod.
Latency can occur in {product-title} if a node interface is overloaded with
-traffic from other Pods, storage devices, or the data plane.
+traffic from other pods, storage devices, or the data plane.
+ [source,terminal] ---- $ tcpdump -s 0 -i any -w /tmp/dump.pcap host && host <1> ---- + -<1> `podip` is the IP address for the Pod. Run the `oc get pod -o wide` command to get -the IP address of a Pod. +<1> `podip` is the IP address for the pod. Run the `oc get pod -o wide` command to get +the IP address of a pod. + tcpdump generates a file at `/tmp/dump.pcap` containing all traffic between -these two Pods. Ideally, run the analyzer shortly +these two pods. Ideally, run the analyzer shortly before the issue is reproduced and stop the analyzer shortly after the issue is finished reproducing to minimize the size of the file. You can also run a packet analyzer between the nodes (eliminating the SDN from @@ -42,7 +42,7 @@ $ tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789 ---- * Use a bandwidth measuring tool, such as iperf, to measure streaming throughput -and UDP throughput. Run the tool from the Pods first, then from the nodes, +and UDP throughput. Run the tool from the pods first, then from the nodes, to locate any bottlenecks. ifdef::openshift-enterprise,openshift-webscale[] diff --git a/modules/nw-using-cookies-keep-route-statefulness.adoc b/modules/nw-using-cookies-keep-route-statefulness.adoc index dc7bb975a3..4e0db7cc13 100644 --- a/modules/nw-using-cookies-keep-route-statefulness.adoc +++ b/modules/nw-using-cookies-keep-route-statefulness.adoc @@ -18,4 +18,4 @@ controller selects an endpoint to handle any user requests, and creates a cookie for the session. The cookie is passed back in the response to the request and the user sends the cookie back with the next request in the session. The cookie tells the Ingress Controller which endpoint is handling the session, ensuring -that client requests use the cookie so that they are routed to the same Pod. +that client requests use the cookie so that they are routed to the same pod. diff --git a/modules/oauth-configuring-token-inactivity-timeout.adoc b/modules/oauth-configuring-token-inactivity-timeout.adoc index 010d186970..7e7718ccea 100644 --- a/modules/oauth-configuring-token-inactivity-timeout.adoc +++ b/modules/oauth-configuring-token-inactivity-timeout.adoc @@ -44,7 +44,7 @@ spec: .. Save the file to apply the changes. -. Check that the OAuth server Pods have restarted: +. Check that the OAuth server pods have restarted: + [source,terminal] ---- @@ -60,7 +60,7 @@ NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.6.0 True False False 145m ---- -. Check that a new revision of the Kubernetes API server Pods has rolled out. This will take several minutes. +. Check that a new revision of the Kubernetes API server pods has rolled out. This will take several minutes. + [source,terminal] ---- diff --git a/modules/odc-importing-codebase-from-git-to-create-application.adoc b/modules/odc-importing-codebase-from-git-to-create-application.adoc index 33247f3f2b..6a64813c32 100644 --- a/modules/odc-importing-codebase-from-git-to-create-application.adoc +++ b/modules/odc-importing-codebase-from-git-to-create-application.adoc @@ -72,7 +72,7 @@ Click the *Build Configuration* and *Deployment* links to see the respective con For serverless applications, the *Deployment* option is not displayed as the Knative configuration resource maintains the desired state for your deployment instead of a DeploymentConfig. Scaling:: -Click the *Scaling* link to define the number of Pods or instances of the application you want to deploy initially. 
+Click the *Scaling* link to define the number of pods or instances of the application you want to deploy initially. + For serverless applications, you can: diff --git a/modules/odc-interacting-with-applications-and-components.adoc b/modules/odc-interacting-with-applications-and-components.adoc index cdd5b6858f..a89050f872 100644 --- a/modules/odc-interacting-with-applications-and-components.adoc +++ b/modules/odc-interacting-with-applications-and-components.adoc @@ -20,7 +20,7 @@ This feature is available only when you create applications using the *From Git* * Use the *List View* icon to see a list of all your applications and use the *Topology View* icon to switch back to the *Topology* view. * Use the *Find by name* field to select the components with component names that match the query. Search results may appear outside of the visible area; click *Fit to Screen* from the lower-left toolbar to resize the *Topology* view to show all components. * Use the *Display Options* drop-down list to configure the *Topology* view of the various application groupings. The options are available depending on the types of components deployed in the project: -** *Pod Count*: Select to show the number of Pods of a component in the component icon. +** *Pod Count*: Select to show the number of pods of a component in the component icon. ** *Event Sources*: Toggle to show or hide the event sources. ** *Virtual Machines*: Toggle to show or hide the virtual machines. ** *Labels*: Toggle to show or hide the component labels. diff --git a/modules/odc-monitoring-your-project-metrics.adoc b/modules/odc-monitoring-your-project-metrics.adoc index c2e7e29e59..925960cd94 100644 --- a/modules/odc-monitoring-your-project-metrics.adoc +++ b/modules/odc-monitoring-your-project-metrics.adoc @@ -21,7 +21,7 @@ Use the following options to see further details: ** Select a workload from the *All Workloads* list to see the filtered metrics for the selected workload. ** Select an option from the *Time Range* list to determine the time frame for the data being captured. ** Select an option from the *Refresh Interval* list to determine the time period after which the data is refreshed. -** Hover your cursor over the graphs to see specific details for your Pod. +** Hover your cursor over the graphs to see specific details for your pod. ** Click on any of the graphs displayed to see the details for that particular metric in the *Metrics* page. * Use the *Metrics* tab to query for the required project metric. @@ -29,8 +29,8 @@ Use the following options to see further details: .Monitoring metrics image::odc_project_metrics.png[] + -.. In the *Select Query* list, select an option to filter the required details for your project. The filtered metrics for all the application Pods in your project are displayed in the graph. The Pods in your project are also listed below. -.. From the list of Pods, clear the colored square boxes to remove the metrics for specific Pods to further filter your query result. +.. In the *Select Query* list, select an option to filter the required details for your project. The filtered metrics for all the application pods in your project are displayed in the graph. The pods in your project are also listed below. +.. From the list of pods, clear the colored square boxes to remove the metrics for specific pods to further filter your query result. .. Click *Show PromQL* to see the Prometheus query. 
You can further modify this query with the help of prompts to customize the query and filter the metrics you want to see for that namespace. .. Use the drop-down list to set a time range for the data being displayed. You can click *Reset Zoom* to reset it to the default time range. .. Optionally, in the *Select Query* list, select *Custom Query* to create a custom Prometheus query and filter relevant metrics. diff --git a/modules/odc-scaling-application-pods-and-checking-builds-and-routes.adoc b/modules/odc-scaling-application-pods-and-checking-builds-and-routes.adoc index b3ef710da3..ba5cbb97c3 100644 --- a/modules/odc-scaling-application-pods-and-checking-builds-and-routes.adoc +++ b/modules/odc-scaling-application-pods-and-checking-builds-and-routes.adoc @@ -5,17 +5,17 @@ [id="odc-scaling-application-pods-and-checking-builds-and-routes_{context}"] = Scaling application pods and checking builds and routes -The *Topology* view provides the details of the deployed components in the *Overview* panel. You can use the *Overview* and *Resources* tabs to scale the application Pods, check build status, services, and routes as follows: +The *Topology* view provides the details of the deployed components in the *Overview* panel. You can use the *Overview* and *Resources* tabs to scale the application pods, check build status, services, and routes as follows: * Click on the component node to see the *Overview* panel to the right. Use the *Overview* tab to: -** Scale your Pods using the up and down arrows to increase or decrease the number of instances of the application manually. For serverless applications, the Pods are automatically scaled down to zero when idle and scaled up depending on the channel traffic. +** Scale your pods using the up and down arrows to increase or decrease the number of instances of the application manually. For serverless applications, the pods are automatically scaled down to zero when idle and scaled up depending on the channel traffic. ** Check the *Labels*, *Annotations*, and *Status* of the application. * Click the *Resources* tab to: -** See the list of all the Pods, view their status, access logs, and click on the Pod to see the Pod details. +** See the list of all the pods, view their status, access logs, and click on the pod to see the pod details. ** See the builds, their status, access logs, and start a new build if needed. ** See the services and routes used by the component. diff --git a/modules/odc-starting-recreate-deployment.adoc b/modules/odc-starting-recreate-deployment.adoc index 7db7796e05..28a2b1f7fd 100644 --- a/modules/odc-starting-recreate-deployment.adoc +++ b/modules/odc-starting-recreate-deployment.adoc @@ -18,7 +18,7 @@ To switch to a Recreate update strategy and to upgrade an application: . In the *Actions* drop-down menu, select *Edit Deployment Config* to see the deployment configuration details of the application. . In the YAML editor, change the `spec.strategy.type` to `Recreate` and click *Save*. . In the *Topology* view, select the node to see the *Overview* tab in the side panel. The *Update Strategy* is now set to *Recreate*. -. Use the *Actions* drop-down menu to select *Start Rollout* to start an update using the Recreate strategy. The Recreate strategy first terminates Pods for the older version of the application and then spins up Pods for the new version. +. Use the *Actions* drop-down menu to select *Start Rollout* to start an update using the Recreate strategy. 
The Recreate strategy first terminates pods for the older version of the application and then spins up pods for the new version. + .Recreate update image::odc-recreate-update.png[] diff --git a/modules/odc-viewing-application-topology.adoc b/modules/odc-viewing-application-topology.adoc index 28a72702cc..b69fd97270 100644 --- a/modules/odc-viewing-application-topology.adoc +++ b/modules/odc-viewing-application-topology.adoc @@ -5,7 +5,7 @@ [id="odc-viewing-application-topology_{context}"] = Viewing the topology of your application -You can navigate to the *Topology* view using the left navigation panel in the *Developer* perspective. After you create an application, you are directed automatically to the *Topology* view where you can see the status of the application Pods, quickly access the application on a public URL, access the source code to modify it, and see the status of your last build. You can zoom in and out to see more details for a particular application. +You can navigate to the *Topology* view using the left navigation panel in the *Developer* perspective. After you create an application, you are directed automatically to the *Topology* view where you can see the status of the application pods, quickly access the application on a public URL, access the source code to modify it, and see the status of your last build. You can zoom in and out to see more details for a particular application. The status or phase of the Pod is indicated by different colors and tooltips as *Running* (image:odc_pod_running.png[title="Pod Running"]), *Not Ready* (image:odc_pod_not_ready.png[title="Pod Not Ready"]), *Warning*(image:odc_pod_warning.png[title="Pod Warning"]), *Failed*(image:odc_pod_failed.png[title="Pod Failed"]), *Pending*(image:odc_pod_pending.png[title="Pod Pending"]), *Succeeded*(image:odc_pod_succeeded.png[title="Pod Succeeded"]), *Terminating*(image:odc_pod_terminating.png[title="Pod Terminating"]), or *Unknown*(image:odc_pod_unknown.png[title="Pod Unknown"]). diff --git a/modules/olm-catalogsource.adoc b/modules/olm-catalogsource.adoc index e946264437..94015ac04b 100644 --- a/modules/olm-catalogsource.adoc +++ b/modules/olm-catalogsource.adoc @@ -5,13 +5,13 @@ [id="olm-catalogsource_{context}"] = CatalogSource -A CatalogSource represents a store of metadata that OLM can query to discover and install Operators and their dependencies. The spec of a CatalogSource indicates how to construct a Pod or how to communicate with a service that serves the Operator Registry gRPC API. +A CatalogSource represents a store of metadata that OLM can query to discover and install Operators and their dependencies. The spec of a CatalogSource indicates how to construct a pod or how to communicate with a service that serves the Operator Registry gRPC API. There are three primary `sourceTypes` for a CatalogSource: -* `grpc` with an `image` reference: OLM pulls the image and runs the Pod, which is expected to serve a compliant API. +* `grpc` with an `image` reference: OLM pulls the image and runs the pod, which is expected to serve a compliant API. * `grpc` with an `address` field: OLM attempts to contact the gRPC API at the given address. This should not be used in most cases. -* `internal` or `configmap`: OLM parses the ConfigMap data and runs a Pod that can serve the gRPC API over it. +* `internal` or `configmap`: OLM parses the ConfigMap data and runs a pod that can serve the gRPC API over it. 
.Example CatalogSource [source,yaml] diff --git a/modules/olm-creating-catalog-from-index.adoc b/modules/olm-creating-catalog-from-index.adoc index cb335de0fd..2254754987 100644 --- a/modules/olm-creating-catalog-from-index.adoc +++ b/modules/olm-creating-catalog-from-index.adoc @@ -59,7 +59,7 @@ $ oc create -f catalogsource.yaml . Verify the following resources are created successfully. -.. Check the Pods: +.. Check the pods: + [source,terminal] ---- diff --git a/modules/olm-creating-etcd-cluster-from-operator.adoc b/modules/olm-creating-etcd-cluster-from-operator.adoc index 652a45bf11..a5e6abad6f 100644 --- a/modules/olm-creating-etcd-cluster-from-operator.adoc +++ b/modules/olm-creating-etcd-cluster-from-operator.adoc @@ -47,14 +47,14 @@ objects work similar to the built-in native Kubernetes ones, such as .. The next screen allows you to make any modifications to the minimal starting template of an `EtcdCluster` object, such as the size of the cluster. For now, -click *Create* to finalize. This triggers the Operator to start up the Pods, +click *Create* to finalize. This triggers the Operator to start up the pods, Services, and other components of the new etcd cluster. . Click the *Resources* tab to see that your project now contains a number of resources created and configured automatically by the Operator. + Verify that a Kubernetes service has been created that allows you to access the -database from other Pods in your project. +database from other pods in your project. . All users with the `edit` role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators diff --git a/modules/olm-deleting-operators-from-a-cluster-using-web-console.adoc b/modules/olm-deleting-operators-from-a-cluster-using-web-console.adoc index 2cda3bf2e4..75eddaa379 100644 --- a/modules/olm-deleting-operators-from-a-cluster-using-web-console.adoc +++ b/modules/olm-deleting-operators-from-a-cluster-using-web-console.adoc @@ -36,7 +36,7 @@ configured off-cluster resources, these will continue to run and need to be cleaned up manually.* -- + -The Operator, any Operator deployments, and Pods are removed by this action. Any +The Operator, any Operator deployments, and pods are removed by this action. Any resources managed by the Operator, including CRDs and CRs are not removed. The web console enables dashboards and navigation items for some Operators. To remove these after uninstalling the Operator, you might need to manually delete diff --git a/modules/olm-injecting-custom-ca.adoc b/modules/olm-injecting-custom-ca.adoc index a59a8ccb88..dab98211a6 100644 --- a/modules/olm-injecting-custom-ca.adoc +++ b/modules/olm-injecting-custom-ca.adoc @@ -72,7 +72,7 @@ spec: readOnly: true ---- <1> Add a `config` section if it does not exist. -<2> Specify labels to match Pods that are owned by the Operator. +<2> Specify labels to match pods that are owned by the Operator. <3> Create a `trusted-ca` volume. <4> `ca-bundle.crt` is required as the ConfigMap key. <5> `tls-ca-bundle.pem` is required as the ConfigMap path. diff --git a/modules/olm-installing-from-operatorhub-using-web-console.adoc b/modules/olm-installing-from-operatorhub-using-web-console.adoc index d38ec282b2..40e71657db 100644 --- a/modules/olm-installing-from-operatorhub-using-web-console.adoc +++ b/modules/olm-installing-from-operatorhub-using-web-console.adoc @@ -118,7 +118,7 @@ For the *All namespaces...* Installation Mode, the status resolves to + If it does not: -.. 
Check the logs in any Pods in the `openshift-operators` project (or other +.. Check the logs in any pods in the `openshift-operators` project (or other relevant namespace if *A specific namespace...* Installation Mode was selected) on the *Workloads → Pods* page that are reporting issues to troubleshoot further. diff --git a/modules/olm-overriding-proxy-settings.adoc b/modules/olm-overriding-proxy-settings.adoc index 32efccabd2..bfb7995093 100644 --- a/modules/olm-overriding-proxy-settings.adoc +++ b/modules/olm-overriding-proxy-settings.adoc @@ -7,7 +7,7 @@ If a cluster-wide egress proxy is configured, applications created from Operators using Operator Lifecycle Manager (OLM) inherit the cluster-wide proxy -settings on their Deployments and Pods. Cluster administrators can also override +settings on their Deployments and pods. Cluster administrators can also override these proxy settings by configuring the Operator's Subscription. .Prerequisites diff --git a/modules/olm-policy-fine-grained-permissions.adoc b/modules/olm-policy-fine-grained-permissions.adoc index aaf2632e5d..b14da00b59 100644 --- a/modules/olm-policy-fine-grained-permissions.adoc +++ b/modules/olm-policy-fine-grained-permissions.adoc @@ -45,7 +45,7 @@ rules: resources: ["pods"] verbs: ["list", "watch", "get", "create", "update", "patch", "delete"] ---- -<1> Add permissions to create other resources, such as Deployments and Pods shown +<1> Add permissions to create other resources, such as Deployments and pods shown here. In addition, if any Operator specifies a pull secret, the following permissions diff --git a/modules/op-about-tasks.adoc b/modules/op-about-tasks.adoc index 250af782b9..98ff612a7b 100644 --- a/modules/op-about-tasks.adoc +++ b/modules/op-about-tasks.adoc @@ -7,7 +7,7 @@ _Tasks_ are the building blocks of a Pipeline and consist of sequentially executed Steps. Tasks are reusable and can be used in multiple Pipelines. -_Steps_ are a series of commands that achieve a specific goal, such as building an image. Every Task runs as a Pod and each Step runs in its own container within the same Pod. Because Steps run within the same Pod, they have access to the same volumes for caching files, ConfigMaps, and Secrets. +_Steps_ are a series of commands that achieve a specific goal, such as building an image. Every Task runs as a pod and each Step runs in its own container within the same pod. Because Steps run within the same pod, they have access to the same volumes for caching files, ConfigMaps, and Secrets. The following example shows the `apply-manifests` Task. @@ -43,6 +43,6 @@ spec: <4> <3> Unique name of this Task. <4> Lists the parameters and Steps in the Task and the workspace used by the Task. -This Task starts the Pod and runs a container inside that Pod using the `maven:3.6.0-jdk-8-slim` image to run the specified commands. It receives an input directory called `workspace-git` that contains the source code of the application. +This Task starts the pod and runs a container inside that pod using the `maven:3.6.0-jdk-8-slim` image to run the specified commands. It receives an input directory called `workspace-git` that contains the source code of the application. The Task only declares the placeholder for the Git repository, it does not specify which Git repository to use. This allows Tasks to be reusable for multiple Pipelines and purposes. 
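As a minimal sketch of the "every Task runs as a pod, every Step runs as a container in that pod" model described above (the Task name, image, and workspace name are illustrative assumptions, not taken from any module in this patch), two Steps can exchange a file through a shared workspace because they run in the same pod:

[source,yaml]
----
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: example-shared-pod-task
spec:
  workspaces:
    - name: source # one volume, visible to every Step container in the pod
  steps:
    - name: write  # first container in the pod
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        echo "artifact" > $(workspaces.source.path)/artifact.txt
    - name: read   # second container in the same pod, same volume
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        cat $(workspaces.source.path)/artifact.txt
----

When a TaskRun executes this Task, with the workspace bound to an `emptyDir` volume for example, `oc get pods` shows a single pod for the run with one container per Step.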
diff --git a/modules/op-creating-pipeline-tasks.adoc b/modules/op-creating-pipeline-tasks.adoc index c586271794..e2edf1afc9 100644 --- a/modules/op-creating-pipeline-tasks.adoc +++ b/modules/op-creating-pipeline-tasks.adoc @@ -33,7 +33,7 @@ update-deployment 48 seconds ago + [NOTE] ==== -You must use a privileged Pod container to run the `buildah` ClusterTask because it requires a privileged security context. To learn more about Security Context Constraints (SCC) for Pods, see the Additional resources section. +You must use a privileged Pod container to run the `buildah` ClusterTask because it requires a privileged security context. To learn more about Security Context Constraints (SCC) for pods, see the Additional resources section. ==== + ---- diff --git a/modules/op-release-notes-1-1.adoc b/modules/op-release-notes-1-1.adoc index 40736784b3..4e8d1e8e3a 100644 --- a/modules/op-release-notes-1-1.adoc +++ b/modules/op-release-notes-1-1.adoc @@ -32,7 +32,7 @@ In addition to the fixes and stability improvements, here is a highlight of what * The names of the `feature-flags` and the `config-defaults` ConfigMaps are now customizable. * Support for HostNetwork in the PodTemplate used by TaskRun is now available. * An Affinity Assistant is now available to support node affinity in TaskRuns that share workspace volume. By default, this is disabled on OpenShift Pipelines. -* The PodTemplate has been updated to specify `imagePullSecrets` to identify secrets that the container runtime should use to authorize container image pulls when starting a Pod. +* The PodTemplate has been updated to specify `imagePullSecrets` to identify secrets that the container runtime should use to authorize container image pulls when starting a pod. * Support for emitting warning events from the TaskRun controller if the controller fails to update the TaskRun. * Standard or recommended k8s labels have been added to all resources to identify resources belonging to an application or component. * The Entrypoint process is now notified for signals and these signals are then propagated using a dedicated PID Group of the Entrypoint process. diff --git a/modules/openshift-cluster-maximums-major-releases.adoc b/modules/openshift-cluster-maximums-major-releases.adoc index 73a30040da..aef72b9fdc 100644 --- a/modules/openshift-cluster-maximums-major-releases.adoc +++ b/modules/openshift-cluster-maximums-major-releases.adoc @@ -16,15 +16,15 @@ Tested Cloud Platforms for {product-title} 4.x: Amazon Web Services, Microsoft A | 2,000 | 2,000 -| Number of Pods ^[1]^ +| Number of pods ^[1]^ | 150,000 | 150,000 -| Number of Pods per node +| Number of pods per node | 250 | 500 ^[2]^ -| Number of Pods per core +| Number of pods per core | There is no default value. | There is no default value. @@ -36,7 +36,7 @@ Tested Cloud Platforms for {product-title} 4.x: Amazon Web Services, Microsoft A | 10,000 (Default pod RAM 512 Mi) - Pipeline Strategy | 10,000 (Default pod RAM 512 Mi) - Source-to-Image (S2I) build strategy -| Number of Pods per namespace ^[4]^ +| Number of pods per namespace ^[4]^ | 25,000 | 25,000 @@ -59,8 +59,8 @@ Tested Cloud Platforms for {product-title} 4.x: Amazon Web Services, Microsoft A |=== [.small] -- -1. The Pod count displayed here is the number of test Pods. The actual number of Pods depends on the application’s memory, CPU, and storage requirements. -2. This was tested on a cluster with 100 worker nodes with 500 Pods per worker node. The default `maxPods` is still 250. 
To get to 500 `maxPods`, the cluster must be created with a `maxPods` set to `500` using a custom KubeletConfig. If you need 500 user pods, you need a `hostPrefix` of `22` because there are 10-15 system Pods already running on the node. The maximum number of Pods with attached Persistent Volume Claims (PVC) depends on storage backend from where PVC are allocated. In our tests, only OpenShift Container Storage v4 (OCS v4) was able to satisfy the number of Pods per node discussed in this document. +1. The pod count displayed here is the number of test pods. The actual number of pods depends on the application’s memory, CPU, and storage requirements. +2. This was tested on a cluster with 100 worker nodes with 500 pods per worker node. The default `maxPods` is still 250. To get to 500 `maxPods`, the cluster must be created with a `maxPods` set to `500` using a custom KubeletConfig. If you need 500 user pods, you need a `hostPrefix` of `22` because there are 10-15 system pods already running on the node. The maximum number of pods with attached Persistent Volume Claims (PVC) depends on storage backend from where PVC are allocated. In our tests, only OpenShift Container Storage v4 (OCS v4) was able to satisfy the number of pods per node discussed in this document. 3. When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentaion, is highly recommended to free etcd storage. 4. There are a number of control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a given type in a single namespace can make those loops expensive and slow down processing given state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements. 5. Each Service port and each Service back-end has a corresponding entry in iptables. The number of back-ends of a given Service impact the size of the endpoints objects, which impacts the size of data that is being sent all over the system. diff --git a/modules/openshift-cluster-maximums.adoc b/modules/openshift-cluster-maximums.adoc index 615f7e1819..d129ab47ca 100644 --- a/modules/openshift-cluster-maximums.adoc +++ b/modules/openshift-cluster-maximums.adoc @@ -16,21 +16,21 @@ | 500 | 2,000 -| Number of Pods ^[1]^ +| Number of pods ^[1]^ | 150,000 | 150,000 | 62,500 | 62,500 | 150,000 -| Number of Pods per node +| Number of pods per node | 250 | 500 | 500 | 500 | 500 -| Number of Pods per core +| Number of pods per core | There is no default value. | There is no default value. | There is no default value. @@ -51,7 +51,7 @@ | 10,000 (Default pod RAM 512 Mi) - Source-to-Image (S2I) build strategy | 10,000 (Default pod RAM 512 Mi) - Source-to-Image (S2I) build strategy -| Number of Pods per Namespace ^[3]^ +| Number of pods per Namespace ^[3]^ | 25,000 | 25,000 | 25,000 @@ -89,7 +89,7 @@ |=== [.small] -- -1. The Pod count displayed here is the number of test Pods. The actual number of Pods depends on the application’s memory, CPU, and storage requirements. +1. The pod count displayed here is the number of test pods. The actual number of pods depends on the application’s memory, CPU, and storage requirements. 2. When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. 
Periodic maintenance of etcd, including defragmentaion, is highly recommended to free etcd storage. 3. There are a number of control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a given type in a single namespace can make those loops expensive and slow down processing given state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements. 4. Each service port and each service back-end has a corresponding entry in iptables. The number of back-ends of a given service impact the size of the endpoints objects, which impacts the size of data that is being sent all over the system. diff --git a/modules/osdk-building-ansible-operator.adoc b/modules/osdk-building-ansible-operator.adoc index e0ef880265..43dc229459 100644 --- a/modules/osdk-building-ansible-operator.adoc +++ b/modules/osdk-building-ansible-operator.adoc @@ -321,7 +321,7 @@ memcached-operator 1 1 1 1 2m example-memcached 3 3 3 3 1m ---- -.. Check the Pods to confirm three replicas were created: +.. Check the pods to confirm three replicas were created: + [source,terminal] ---- diff --git a/modules/osdk-building-helm-operator.adoc b/modules/osdk-building-helm-operator.adoc index 96627d2b8e..fb1aba32a1 100644 --- a/modules/osdk-building-helm-operator.adoc +++ b/modules/osdk-building-helm-operator.adoc @@ -271,7 +271,7 @@ NAME DESIRED CURRENT UP-TO-DATE example-nginx-b9phnoz9spckcrua7ihrbkrt1 2 2 2 2 1m ---- + -Check the Pods to confirm two replicas were created: +Check the pods to confirm two replicas were created: + [source,terminal] ---- diff --git a/modules/osdk-manually-defined-csv-fields.adoc b/modules/osdk-manually-defined-csv-fields.adoc index 5036fc50f6..2b7fa57549 100644 --- a/modules/osdk-manually-defined-csv-fields.adoc +++ b/modules/osdk-manually-defined-csv-fields.adoc @@ -54,7 +54,7 @@ Operator SDK if any CRD YAML files are present in `deploy/`. However, several fields not in the CRD manifest spec require user input: - `description`: description of the CRD. -- `resources`: any Kubernetes resources leveraged by the CRD, for example Pods and StatefulSets. +- `resources`: any Kubernetes resources leveraged by the CRD, for example pods and StatefulSets. - `specDescriptors`: UI hints for inputs and outputs of the Operator. |=== diff --git a/modules/osdk-monitoring-prometheus-servicemonitor.adoc b/modules/osdk-monitoring-prometheus-servicemonitor.adoc index ffcadb9d6e..045fbd3d2f 100644 --- a/modules/osdk-monitoring-prometheus-servicemonitor.adoc +++ b/modules/osdk-monitoring-prometheus-servicemonitor.adoc @@ -7,7 +7,7 @@ A ServiceMonitor is a Custom Resource Definition (CRD) provided by the Prometheus Operator that discovers the `Endpoints` in Service objects and -configures Prometheus to monitor those Pods. +configures Prometheus to monitor those pods. In Go-based Operators generated using the Operator SDK, the `GenerateServiceMonitor()` helper function can take a Service object and diff --git a/modules/osdk-owned-crds.adoc b/modules/osdk-owned-crds.adoc index 172e8284b2..758ad918f4 100644 --- a/modules/osdk-owned-crds.adoc +++ b/modules/osdk-owned-crds.adoc @@ -83,7 +83,7 @@ in general. 
The following example depicts a `MongoDB Standalone` CRD that requires some user input in the form of a Secret and ConfigMap, and orchestrates Services, -StatefulSets, Pods and ConfigMaps: +StatefulSets, pods and ConfigMaps: [id="osdk-crds-owned-example_{context}"] .Example owned CRD @@ -123,7 +123,7 @@ StatefulSets, Pods and ConfigMaps: x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:label' statusDescriptors: - - description: The status of each of the Pods for the MongoDB cluster. + - description: The status of each of the pods for the MongoDB cluster. displayName: Pod Status path: pods x-descriptors: diff --git a/modules/ossm-configuring-jaeger.adoc b/modules/ossm-configuring-jaeger.adoc index 5ca650bf63..cd6396ddd3 100644 --- a/modules/ossm-configuring-jaeger.adoc +++ b/modules/ossm-configuring-jaeger.adoc @@ -152,7 +152,7 @@ Minimum deployment =1 Minimum deployment = 16Gi* | -4+|{asterisk} Each Elasticsearch node can operate with a lower memory setting though this is *not* recommended for production deployments. For production use, you should have no less than 16Gi allocated to each Pod by default, but preferably allocate as much as you can, up to 64Gi per Pod. +4+|{asterisk} Each Elasticsearch node can operate with a lower memory setting though this is *not* recommended for production deployments. For production use, you should have no less than 16Gi allocated to each pod by default, but preferably allocate as much as you can, up to 64Gi per pod. |=== diff --git a/modules/ossm-control-plane-deploy.adoc b/modules/ossm-control-plane-deploy.adoc index aeb27d4d0f..78e3f8f5b6 100644 --- a/modules/ossm-control-plane-deploy.adoc +++ b/modules/ossm-control-plane-deploy.adoc @@ -133,5 +133,5 @@ The installation has finished successfully when the `STATUS` column is `Componen + ---- NAME READY STATUS PROFILES VERSION AGE IMAGE REGISTRY -basic 9/9 ComponentsReady ["default"] 2.0.0 3m31s +basic 9/9 ComponentsReady ["default"] 2.0.0 3m31s ---- diff --git a/modules/ossm-operatorhub-remove.adoc b/modules/ossm-operatorhub-remove.adoc index fa1a9442cf..d43471e7c4 100644 --- a/modules/ossm-operatorhub-remove.adoc +++ b/modules/ossm-operatorhub-remove.adoc @@ -33,7 +33,7 @@ Operator* from the *Actions* drop-down menu. . When prompted by the *Remove Operator Subscription* window, optionally select the *Also completely remove the Operator from the selected namespace* check box if you want all components related to the installation to be removed. -This removes the CSV, which in turn removes the Pods, Deployments, CRDs, and CRs +This removes the CSV, which in turn removes the pods, Deployments, CRDs, and CRs associated with the Operator. @@ -60,7 +60,7 @@ Operator* from the *Actions* drop-down menu. . When prompted by the *Remove Operator Subscription* window, optionally select the *Also completely remove the Operator from the selected namespace* check box if you want all components related to the installation to be removed. -This removes the CSV, which in turn removes the Pods, Deployments, CRDs, and CRs +This removes the CSV, which in turn removes the pods, Deployments, CRDs, and CRs associated with the Operator. [id="ossm-remove-operator-kiali_{context}"] @@ -86,7 +86,7 @@ Operator* from the *Actions* drop-down menu. . When prompted by the *Remove Operator Subscription* window, optionally select the *Also completely remove the Operator from the selected namespace* check box if you want all components related to the installation to be removed. 
-This removes the CSV, which in turn removes the Pods, Deployments, CRDs, and CRs +This removes the CSV, which in turn removes the pods, Deployments, CRDs, and CRs associated with the Operator. [id="ossm-remove-operator-elasticsearch_{context}"] @@ -112,7 +112,7 @@ Operator* from the *Actions* drop-down menu. . When prompted by the *Remove Operator Subscription* window, optionally select the *Also completely remove the Operator from the selected namespace* check box if you want all components related to the installation to be removed. -This removes the CSV, which in turn removes the Pods, Deployments, CRDs, and CRs +This removes the CSV, which in turn removes the pods, Deployments, CRDs, and CRs associated with the Operator. [id="ossm-remove-cleanup_{context}"] diff --git a/modules/persistent-storage-cinder-volume-security.adoc b/modules/persistent-storage-cinder-volume-security.adoc index af5ab3810b..78cf821e8a 100644 --- a/modules/persistent-storage-cinder-volume-security.adoc +++ b/modules/persistent-storage-cinder-volume-security.adoc @@ -57,7 +57,7 @@ spec: <1> The number of copies of the Pod to run. <2> The label selector of the Pod to run. <3> A template for the Pod that the controller creates. -<4> The labels on the Pod. They must include labels from the label selector. +<4> The labels on the pod. They must include labels from the label selector. <5> The maximum name length after expanding any parameters is 63 characters. <6> Specifies the service account you created. -<7> Specifies an `fsGroup` for the Pods. +<7> Specifies an `fsGroup` for the pods. diff --git a/modules/persistent-storage-csi-cloning-provisioning.adoc b/modules/persistent-storage-csi-cloning-provisioning.adoc index c5ef289834..a42bb86ddd 100644 --- a/modules/persistent-storage-csi-cloning-provisioning.adoc +++ b/modules/persistent-storage-csi-cloning-provisioning.adoc @@ -59,9 +59,9 @@ $ oc get pvc pvc-1-clone + The `pvc-1-clone` shows that it is `Bound`. + -You are now ready to use the newly cloned PVC to configure a Pod. +You are now ready to use the newly cloned PVC to configure a pod. -. Create and save a file with the Pod object described by the YAML. For example: +. Create and save a file with the `Pod` object described by the YAML. For example: + [source,yaml] @@ -85,4 +85,4 @@ spec: + <1> The cloned PVC created during the CSI volume cloning operation. + -The created Pod object is now ready to consume, clone, snapshot, or delete your cloned PVC independently of its original `dataSource` PVC. +The created `Pod` object is now ready to consume, clone, snapshot, or delete your cloned PVC independently of its original `dataSource` PVC. diff --git a/modules/persistent-storage-csi-snapshots-create.adoc b/modules/persistent-storage-csi-snapshots-create.adoc index ec0618bcc3..d91022d59b 100644 --- a/modules/persistent-storage-csi-snapshots-create.adoc +++ b/modules/persistent-storage-csi-snapshots-create.adoc @@ -12,7 +12,7 @@ When you create a VolumeSnapshot object, {product-title} creates a volume snapsh * Logged in to a running {product-title} cluster. * A PVC created using a CSI driver that supports VolumeSnapshot objects. * A storage class to provision the storage backend. -* No Pods are using the persistent volume claim (PVC) that you want to take a snapshot of. +* No pods are using the persistent volume claim (PVC) that you want to take a snapshot of. 
+ [NOTE] ==== diff --git a/modules/persistent-storage-flexvolume-consuming.adoc b/modules/persistent-storage-flexvolume-consuming.adoc index 7ad437ba2d..fdac0bbdfd 100644 --- a/modules/persistent-storage-flexvolume-consuming.adoc +++ b/modules/persistent-storage-flexvolume-consuming.adoc @@ -31,7 +31,7 @@ spec: fooServer: 192.168.0.1:1234 fooVolumeName: bar ---- -<1> The name of the volume. This is how it is identified through persistent volume claims or from Pods. This name can be different from the name of the volume on +<1> The name of the volume. This is how it is identified through persistent volume claims or from pods. This name can be different from the name of the volume on back-end storage. <2> The amount of storage allocated to this volume. <3> The name of the driver. This field is mandatory. diff --git a/modules/persistent-storage-flexvolume-installing.adoc b/modules/persistent-storage-flexvolume-installing.adoc index bc2650ddec..bd3de6fa84 100644 --- a/modules/persistent-storage-flexvolume-installing.adoc +++ b/modules/persistent-storage-flexvolume-installing.adoc @@ -34,7 +34,7 @@ Unmounts a volume from a directory. This can include anything that is necessary ** Expected output: default JSON `mountdevice`:: -Mounts a volume's device to a directory where individual Pods can then bind mount. +Mounts a volume's device to a directory where individual pods can then bind mount. This call-out does not pass "secrets" specified in the FlexVolume spec. If your driver requires secrets, do not implement this call-out. diff --git a/modules/persistent-storage-hostpath-pod.adoc b/modules/persistent-storage-hostpath-pod.adoc index ea0002d8da..cc503f7c53 100644 --- a/modules/persistent-storage-hostpath-pod.adoc +++ b/modules/persistent-storage-hostpath-pod.adoc @@ -5,7 +5,7 @@ [id="persistent-storage-hostpath-pod_{context}"] = Mounting the hostPath share in a privileged Pod -After the PersistentVolumeClaim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a Pod. +After the PersistentVolumeClaim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a pod. .Prerequisites * A PersistentVolumeClaim exists that is mapped to the underlying hostPath share. @@ -35,7 +35,7 @@ spec: persistentVolumeClaim: claimName: task-pvc-volume <4> ---- -<1> The name of the Pod. +<1> The name of the pod. <2> The Pod must run as privileged to access the node's storage. -<3> The path to mount the hostPath share inside the privileged Pod. +<3> The path to mount the hostPath share inside the privileged pod. <4> The name of the PersistentVolumeClaim that has been previously created. diff --git a/modules/persistent-storage-hostpath-static-provisioning.adoc b/modules/persistent-storage-hostpath-static-provisioning.adoc index 9611f41343..8f94cbb0ec 100644 --- a/modules/persistent-storage-hostpath-static-provisioning.adoc +++ b/modules/persistent-storage-hostpath-static-provisioning.adoc @@ -29,7 +29,7 @@ A Pod that uses a hostPath volume must be referenced by manual (static) provisio hostPath: path: "/mnt/data" <4> ---- -<1> The name of the volume. This name is how it is identified by PersistentVolumeClaims or Pods. +<1> The name of the volume. This name is how it is identified by PersistentVolumeClaims or pods. <2> Used to bind PersistentVolumeClaim requests to this PersistentVolume. <3> The volume can be mounted as `read-write` by a single node. 
<4> The configuration file specifies that the volume is at `/mnt/data` on the cluster’s node. diff --git a/modules/persistent-storage-local-pvc.adoc b/modules/persistent-storage-local-pvc.adoc index 4b0de76337..711faf0dbe 100644 --- a/modules/persistent-storage-local-pvc.adoc +++ b/modules/persistent-storage-local-pvc.adoc @@ -6,7 +6,7 @@ = Create the local volume PersistentVolumeClaim Local volumes must be statically created as a PersistentVolumeClaim (PVC) -to be accessed by the Pod. +to be accessed by the pod. .Prerequisite diff --git a/modules/persistent-storage-local-tolerations.adoc b/modules/persistent-storage-local-tolerations.adoc index 48e1f48ca9..43cf651334 100644 --- a/modules/persistent-storage-local-tolerations.adoc +++ b/modules/persistent-storage-local-tolerations.adoc @@ -3,12 +3,12 @@ // storage/persistent_storage/persistent-storage-local.adoc [id="local-tolerations_{context}"] -= Using tolerations with Local Storage Operator Pods += Using tolerations with Local Storage Operator pods -Taints can be applied to nodes to prevent them from running general workloads. To allow the Local Storage Operator to use tainted nodes, you must add tolerations to the Pod or DaemonSet definition. This allows the created resources to run on these tainted nodes. +Taints can be applied to nodes to prevent them from running general workloads. To allow the Local Storage Operator to use tainted nodes, you must add tolerations to the `Pod` or `DaemonSet` definition. This allows the created resources to run on these tainted nodes. -You apply tolerations to the Local Storage Operator Pod through the LocalVolume resource -and apply taints to a node through the node specification. A taint on a node instructs the node to repel all Pods that do not tolerate the taint. Using a specific taint that is not on other Pods ensures that the Local Storage Operator Pod can also run on that node. +You apply tolerations to the Local Storage Operator pod through the LocalVolume resource +and apply taints to a node through the node specification. A taint on a node instructs the node to repel all pods that do not tolerate the taint. Using a specific taint that is not on other pods ensures that the Local Storage Operator pod can also run on that node. [IMPORTANT] ==== @@ -26,7 +26,7 @@ Taints and tolerations consist of a key, value, and effect. As an argument, it i .Procedure To configure local volumes for scheduling on tainted nodes: -. Modify the YAML file that defines the Pod and add the `LocalVolume` spec, as shown in the following example: +. Modify the YAML file that defines the `Pod` and add the `LocalVolume` spec, as shown in the following example: + [source,yaml] ---- @@ -53,4 +53,4 @@ To configure local volumes for scheduling on tainted nodes: <4> The volume mode, either `Filesystem` or `Block`, defining the type of the local volumes. <5> The path containing a list of local storage devices to choose from. -The defined tolerations will be passed to the resulting DaemonSets, allowing the diskmaker and provisioner Pods to be created for nodes that contain the specified taints. +The defined tolerations will be passed to the resulting DaemonSets, allowing the diskmaker and provisioner pods to be created for nodes that contain the specified taints. 
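NOTE (illustration only): the tolerations hunk above pairs a node taint with a toleration so that the diskmaker and provisioner pods can land on tainted nodes. The following sketch is not the module's own elided YAML; the node name and the `localstorage` taint key and value are hypothetical placeholders.

[source,terminal]
----
$ oc adm taint nodes <node_name> localstorage=true:NoSchedule
----

[source,yaml]
----
# Fragment of a pod or DaemonSet spec that tolerates the taint above.
tolerations:
- key: localstorage
  operator: Equal
  value: "true"
  effect: NoSchedule
----

Any pod carrying this toleration can be scheduled onto the tainted node, while pods without it are repelled, which is the behavior the hunk describes.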
diff --git a/modules/persistent-storage-vsphere-static-provisioning.adoc b/modules/persistent-storage-vsphere-static-provisioning.adoc index 700f24886e..9238777ca8 100644 --- a/modules/persistent-storage-vsphere-static-provisioning.adoc +++ b/modules/persistent-storage-vsphere-static-provisioning.adoc @@ -47,9 +47,9 @@ spec: volumePath: "[datastore1] volumes/myDisk" <4> fsType: ext4 <5> ---- -<1> The name of the volume. This name is how it is identified by PersistentVolumeClaims or Pods. +<1> The name of the volume. This name is how it is identified by PersistentVolumeClaims or pods. <2> The amount of storage allocated to this volume. -<3> The volume type used, with `vsphereVolume` for vSphere volumes. The label is used to mount a vSphere VMDK volume into Pods. The contents of a volume are preserved when it is unmounted. The volume type supports VMFS and VSAN datastore. +<3> The volume type used, with `vsphereVolume` for vSphere volumes. The label is used to mount a vSphere VMDK volume into pods. The contents of a volume are preserved when it is unmounted. The volume type supports VMFS and VSAN datastore. <4> The existing VMDK volume to use. If you used `vmkfstools`, you must enclose the datastore name in square brackets, `[]`, in the volume definition, as shown previously. <5> The file system type to mount. For example, ext4, xfs, or other file systems. + diff --git a/modules/pod-interactions-with-topology-manager.adoc b/modules/pod-interactions-with-topology-manager.adoc index ccc93a633a..bebe1be59d 100644 --- a/modules/pod-interactions-with-topology-manager.adoc +++ b/modules/pod-interactions-with-topology-manager.adoc @@ -5,9 +5,9 @@ [id="pod-interactions-with-topology-manager_{context}"] = Pod interactions with Topology Manager policies -The example Pod specs below help illustrate Pod interactions with Topology Manager. +The example `Pod` specs below help illustrate pod interactions with Topology Manager. -The following Pod runs in the `BestEffort` QoS class because no resource requests or +The following pod runs in the `BestEffort` QoS class because no resource requests or limits are specified. [source,yaml] @@ -18,7 +18,7 @@ spec: image: nginx ---- -The next Pod runs in the `Burstable` QoS class because requests are less than limits. +The next pod runs in the `Burstable` QoS class because requests are less than limits. [source,yaml] ---- @@ -34,9 +34,9 @@ spec: ---- If the selected policy is anything other than `none`, Topology Manager would -not consider either of these Pod specifications. +not consider either of these `Pod` specifications. -The last example Pod below runs in the Guaranteed QoS class because requests are equal to limits. +The last example pod below runs in the Guaranteed QoS class because requests are equal to limits. [source,yaml] ---- @@ -55,11 +55,11 @@ spec: example.com/device: "1" ---- -Topology Manager would consider this Pod. The Topology Manager consults the +Topology Manager would consider this pod. The Topology Manager consults the CPU Manager static policy, which returns the topology of available CPUs. Topology Manager also consults Device Manager to discover the topology of available devices for example.com/device. Topology Manager will use this information to store the best Topology for this -container. In the case of this Pod, CPU Manager and Device Manager will use this stored +container. In the case of this pod, CPU Manager and Device Manager will use this stored information at the resource allocation stage. 
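NOTE (illustration only): the Guaranteed QoS class referenced in the hunk above requires every container's requests to equal its limits. This is a minimal sketch rather than the module's own elided example; the pod name is a placeholder and `nginx` is reused from the module's earlier examples.

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-example
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "2"
        memory: 200Mi
      limits:
        cpu: "2"
        memory: 200Mi
----

Because CPU and memory requests equal their limits, the kubelet classifies this pod as Guaranteed, which is the case where Topology Manager consults CPU Manager and Device Manager for aligned allocations.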
diff --git a/modules/prometheus-database-storage-requirements.adoc b/modules/prometheus-database-storage-requirements.adoc index c38c080467..3be08a61b1 100644 --- a/modules/prometheus-database-storage-requirements.adoc +++ b/modules/prometheus-database-storage-requirements.adoc @@ -11,7 +11,7 @@ Red Hat performed various tests for different scale sizes. .Prometheus Database storage requirements based on number of nodes/pods in the cluster [options="header"] |=== -|Number of Nodes |Number of Pods |Prometheus storage growth per day |Prometheus storage growth per 15 days |RAM Space (per scale size) |Network (per tsdb chunk) +|Number of Nodes |Number of pods |Prometheus storage growth per day |Prometheus storage growth per 15 days |RAM Space (per scale size) |Network (per tsdb chunk) |50 |1800 diff --git a/modules/pruning-images-manual.adoc b/modules/pruning-images-manual.adoc index bad2451ea1..e2283ba477 100644 --- a/modules/pruning-images-manual.adoc +++ b/modules/pruning-images-manual.adoc @@ -185,8 +185,8 @@ You can apply conditions to your manually pruned images. ** Created at least `--keep-younger-than` minutes ago and are not currently referenced by any: *** Pods created less than `--keep-younger-than` minutes ago *** Imagestreams created less than `--keep-younger-than` minutes ago -*** Running Pods -*** Pending Pods +*** Running pods +*** Pending pods *** ReplicationControllers *** Deployments *** DeploymentConfigs @@ -195,8 +195,8 @@ You can apply conditions to your manually pruned images. *** Builds *** `--keep-tag-revisions` most recent items in `stream.status.tags[].items` ** That are exceeding the smallest limit defined in the same project and are not currently referenced by any: -*** Running Pods -*** Pending Pods +*** Running pods +*** Pending pods *** ReplicationControllers *** Deployments *** DeploymentConfigs @@ -239,7 +239,7 @@ registry storage by hard pruning the registry. . To see what a pruning operation would delete: .. Keeping up to three tag revisions, and keeping resources (images, imagestreams, -and Pods) younger than 60 minutes: +and pods) younger than 60 minutes: + [source,terminal] ---- diff --git a/modules/pruning-images.adoc b/modules/pruning-images.adoc index 948133016c..058807ce7f 100644 --- a/modules/pruning-images.adoc +++ b/modules/pruning-images.adoc @@ -60,7 +60,7 @@ status: <2> `suspend`: If set to `true`, the `CronJob` running pruning is suspended. This is an optional field, and it defaults to `false`. <3> `keepTagRevisions`: The number of revisions per tag to keep. This is an optional field, and it defaults to `3` if not set. <4> `keepYoungerThan`: Retain images younger than this duration. This is an optional field, and it defaults `60m` if not set. -<5> `resources`: Standard Pod resource requests and limits. This is an optional field. +<5> `resources`: Standard `Pod` resource requests and limits. This is an optional field. <6> `affinity`: Standard Pod affinity. This is an optional field. <7> `nodeSelector`: Standard Pod node selector for the image pruner pod. This is an optional field. <8> `tolerations`: Standard Pod tolerations. This is an optional field. diff --git a/modules/querying-kubelet-status-on-a-node.adoc b/modules/querying-kubelet-status-on-a-node.adoc index 4aea5e9d62..06ca8c475d 100644 --- a/modules/querying-kubelet-status-on-a-node.adoc +++ b/modules/querying-kubelet-status-on-a-node.adoc @@ -15,15 +15,15 @@ You can review cluster node health status, resource consumption statistics, and .Procedure -. 
The kubelet is managed using a systemd service on each node. Review the kubelet's status by querying the `kubelet` systemd service within a debug Pod. -.. Start a debug Pod for a node: +. The kubelet is managed using a systemd service on each node. Review the kubelet's status by querying the `kubelet` systemd service within a debug pod. +.. Start a debug pod for a node: + [source,terminal] ---- $ oc debug node/my-node ---- + -.. Set `/host` as the root directory within the debug shell. The debug Pod mounts the host's root file system in `/host` within the Pod. By changing the root directory to `/host`, you can run binaries contained in the host's executable paths: +.. Set `/host` as the root directory within the debug shell. The debug pod mounts the host's root file system in `/host` within the pod. By changing the root directory to `/host`, you can run binaries contained in the host's executable paths: + [source,terminal] ---- diff --git a/modules/querying-operator-pod-status.adoc b/modules/querying-operator-pod-status.adoc index 3d2a5a0b1e..1088d8486b 100644 --- a/modules/querying-operator-pod-status.adoc +++ b/modules/querying-operator-pod-status.adoc @@ -3,9 +3,9 @@ // * support/troubleshooting/troubleshooting-operator-issues.adoc [id="querying-operator-pod-status_{context}"] -= Querying Operator Pod status += Querying Operator pod status -You can list Operator Pods within a cluster and their status. You can also collect a detailed Operator Pod summary. +You can list Operator pods within a cluster and their status. You can also collect a detailed Operator pod summary. .Prerequisites @@ -22,14 +22,14 @@ You can list Operator Pods within a cluster and their status. You can also colle $ oc get clusteroperators ---- -. List Operator Pods running in the Operator's namespace, plus Pod status, restarts, and age: +. List Operator pods running in the Operator's namespace, plus pod status, restarts, and age: + [source,terminal] ---- $ oc get pod -n ---- -. Output a detailed Operator Pod summary: +. Output a detailed Operator pod summary: + [source,terminal] ---- @@ -37,14 +37,14 @@ $ oc describe pod -n ---- . If an Operator issue is node-specific, query Operator container status on that node. -.. Start a debug Pod for the node: +.. Start a debug pod for the node: + [source,terminal] ---- $ oc debug node/my-node ---- + -.. Set `/host` as the root directory within the debug shell. The debug Pod mounts the host's root file system in `/host` within the Pod. By changing the root directory to `/host`, you can run binaries contained in the host's executable paths: +.. Set `/host` as the root directory within the debug shell. The debug pod mounts the host's root file system in `/host` within the pod. By changing the root directory to `/host`, you can run binaries contained in the host's executable paths: + [source,terminal] ---- @@ -56,7 +56,7 @@ $ oc debug node/my-node {product-title} {product-version} cluster nodes running {op-system-first} are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as _accessed_. However, if the {product-title} API is not available, or the kubelet is not properly functioning on the target node, `oc` operations will be impacted. In such situations, it is possible to access nodes using `ssh core@..` instead. ==== + -.. List details about the node's containers, including state and associated Pod IDs: +.. 
List details about the node's containers, including state and associated pod IDs: + [source,terminal] ---- diff --git a/modules/querying-operator-status-after-installation.adoc b/modules/querying-operator-status-after-installation.adoc index e011726c07..1881f4c3af 100644 --- a/modules/querying-operator-status-after-installation.adoc +++ b/modules/querying-operator-status-after-installation.adoc @@ -5,7 +5,7 @@ [id="querying-operator-status-after-installation_{context}"] = Querying Operator status after installation -You can check Operator status at the end of an installation. Retrieve diagnostic data for Operators that do not become available. Review logs for any Operator Pods that are listed as `Pending` or have an error status. Validate base images used by problematic Pods. +You can check Operator status at the end of an installation. Retrieve diagnostic data for Operators that do not become available. Review logs for any Operator pods that are listed as `Pending` or have an error status. Validate base images used by problematic pods. .Prerequisites @@ -35,7 +35,7 @@ $ oc describe clusteroperator $ oc get pods -n ---- -. Obtain a detailed description for Pods that do not have `Running` status: +. Obtain a detailed description for pods that do not have `Running` status: + [source,terminal] ---- diff --git a/modules/registry-checking-the-status-of-registry-pods.adoc b/modules/registry-checking-the-status-of-registry-pods.adoc index df805b544f..849376c843 100644 --- a/modules/registry-checking-the-status-of-registry-pods.adoc +++ b/modules/registry-checking-the-status-of-registry-pods.adoc @@ -3,9 +3,9 @@ // * registry/accessing-the-registry.adoc [id="checking-the-status-of-registry-pods_{context}"] -= Checking the status of the registry Pods += Checking the status of the registry pods -As a cluster administrator, you can list the image registry Pods running in the `openshift-image-registry` project and check their status. +As a cluster administrator, you can list the image registry pods running in the `openshift-image-registry` project and check their status. .Prerequisites @@ -14,7 +14,7 @@ As a cluster administrator, you can list the image registry Pods running in the .Procedure -. List the Pods in the `openshift-image-registry` project and view their status: +. List the pods in the `openshift-image-registry` project and view their status: + [source,terminal] ---- diff --git a/modules/restore-replace-crashlooping-etcd-member.adoc b/modules/restore-replace-crashlooping-etcd-member.adoc index 7bd7305136..4760885116 100644 --- a/modules/restore-replace-crashlooping-etcd-member.adoc +++ b/modules/restore-replace-crashlooping-etcd-member.adoc @@ -21,7 +21,7 @@ It is important to take an etcd backup before performing this procedure so that .Procedure -. Stop the crashlooping etcd Pod. +. Stop the crashlooping etcd pod. .. Debug the node that is crashlooping. + @@ -196,7 +196,7 @@ $ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "single-master- ---- <1> The `forceRedeploymentReason` value must be unique, which is why a timestamp is appended. + -When the etcd cluster Operator performs a redeployment, it ensures that all master nodes have a functioning etcd Pod. +When the etcd cluster Operator performs a redeployment, it ensures that all master nodes have a functioning etcd pod. . Verify that the new member is available and healthy. 
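NOTE (illustration only): one way to start verifying the new member after the redeployment described above is to list the etcd pods and confirm they are `Running` and ready. The `openshift-etcd` namespace is the default location for these pods in current {product-title} releases; confirm it against your cluster.

[source,terminal]
----
$ oc get pods -n openshift-etcd
----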
diff --git a/modules/restore-replace-stopped-etcd-member.adoc b/modules/restore-replace-stopped-etcd-member.adoc index f04aef620d..3bf9df2ed2 100644 --- a/modules/restore-replace-stopped-etcd-member.adoc +++ b/modules/restore-replace-stopped-etcd-member.adoc @@ -308,7 +308,7 @@ clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 + It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. -. Verify that all etcd Pods are running properly: +. Verify that all etcd pods are running properly: + In a terminal that has access to the cluster as a `cluster-admin` user, run the following command: + diff --git a/modules/reviewing-pod-status.adoc b/modules/reviewing-pod-status.adoc index 2a8db149e5..d82d4807a0 100644 --- a/modules/reviewing-pod-status.adoc +++ b/modules/reviewing-pod-status.adoc @@ -3,9 +3,9 @@ // * support/troubleshooting/investigating-pod-issues.adoc [id="reviewing-pod-status_{context}"] -= Reviewing Pod status += Reviewing pod status -You can query Pod status and error states. You can also query a Pod's associated deployment configuration and review base image availability. +You can query pod status and error states. You can also query a pod's associated deployment configuration and review base image availability. .Prerequisites @@ -22,7 +22,7 @@ You can query Pod status and error states. You can also query a Pod's associated $ oc project ---- -. List Pods running within the namespace, as well as Pod status, error states, restarts, and age: +. List pods running within the namespace, as well as pod status, error states, restarts, and age: + [source,terminal] ---- @@ -52,14 +52,14 @@ $ skopeo inspect docker:// $ oc edit deployment/my-deployment ---- -. When deployment configuration changes on exit, the configuration will automatically redeploy. Watch Pod status as the deployment progresses, to determine whether the issue has been resolved: +. When deployment configuration changes on exit, the configuration will automatically redeploy. Watch pod status as the deployment progresses, to determine whether the issue has been resolved: + [source,terminal] ---- $ oc get pods -w ---- -. Review events within the namespace for diagnostic information relating to Pod failures: +. Review events within the namespace for diagnostic information relating to pod failures: + [source,terminal] ---- diff --git a/modules/serverless-rn-1-9-0.adoc b/modules/serverless-rn-1-9-0.adoc index d0b31985ce..e16b03e272 100644 --- a/modules/serverless-rn-1-9-0.adoc +++ b/modules/serverless-rn-1-9-0.adoc @@ -25,4 +25,4 @@ [id="known-issues-1-9-0_{context}"] == Known issues -* After deleting the `KnativeEventing` custom resource, the `v0.15.0-upgrade-xr55x` and `storage-version-migration-eventing-99c7q` Pods remain on the cluster and show a `Completed` status. You can delete the namespace where the `KnativeEventing` custom resource was installed to completely remove these Pods. +* After deleting the `KnativeEventing` custom resource, the `v0.15.0-upgrade-xr55x` and `storage-version-migration-eventing-99c7q` pods remain on the cluster and show a `Completed` status. You can delete the namespace where the `KnativeEventing` custom resource was installed to completely remove these pods. 
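NOTE (illustration only): the known issue above suggests deleting the namespace that held the `KnativeEventing` custom resource to clear the leftover `Completed` pods. A minimal sketch of that workaround, with the namespace name as a placeholder:

[source,terminal]
----
$ oc delete namespace <knative_eventing_namespace>
----

Only do this if nothing else in that namespace needs to be preserved, because deleting the namespace removes every resource in it.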
diff --git a/modules/service-ca-certificates.adoc b/modules/service-ca-certificates.adoc index 64f0c007ba..68d609620f 100644 --- a/modules/service-ca-certificates.adoc +++ b/modules/service-ca-certificates.adoc @@ -45,8 +45,8 @@ prior to the expiration of the pre-rotation CA. [WARNING] ==== A manually-rotated service CA does not maintain trust with the previous service -CA. You might experience a temporary service disruption until the Pods in the -cluster are restarted, which ensures that Pods are using service serving +CA. You might experience a temporary service disruption until the pods in the +cluster are restarted, which ensures that pods are using service serving certificates issued by the new service CA. ==== diff --git a/modules/setting-up-cpu-manager.adoc b/modules/setting-up-cpu-manager.adoc index ad669adfcc..ba135b2131 100644 --- a/modules/setting-up-cpu-manager.adoc +++ b/modules/setting-up-cpu-manager.adoc @@ -258,7 +258,7 @@ of one core is subtracted from the total capacity of the node to arrive at the `Node Allocatable` amount. You can see that `Allocatable CPU` is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a -second Pod, the system will accept the Pod, but it will never be scheduled: +second pod, the system will accept the pod, but it will never be scheduled: + [source, terminal] ---- diff --git a/modules/starting-debug-pods-with-root-access.adoc b/modules/starting-debug-pods-with-root-access.adoc index a30be3ab9a..0b4ea42bd6 100644 --- a/modules/starting-debug-pods-with-root-access.adoc +++ b/modules/starting-debug-pods-with-root-access.adoc @@ -3,9 +3,9 @@ // * support/troubleshooting/investigating-pod-issues.adoc [id="starting-debug-pods-with-root-access_{context}"] -= Starting debug Pods with root access += Starting debug pods with root access -You can start a debug Pod with root access, based on a problematic Pod's deployment or deployment configuration. Pod users typically run with non-root privileges, but running troubleshooting Pods with temporary root privileges can be useful during issue investigation. +You can start a debug pod with root access, based on a problematic pod's deployment or deployment configuration. Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation. .Prerequisites @@ -15,7 +15,7 @@ You can start a debug Pod with root access, based on a problematic Pod's deploym .Procedure -. Start a debug Pod with root access, based on a deployment. +. Start a debug pod with root access, based on a deployment. .. Obtain a project's deployment name: + [source,terminal] @@ -23,14 +23,14 @@ You can start a debug Pod with root access, based on a problematic Pod's deploym $ oc get deployment -n ---- -.. Start a debug Pod with root privileges, based on the deployment: +.. Start a debug pod with root privileges, based on the deployment: + [source,terminal] ---- $ oc debug deployment/my-deployment --as-root -n ---- -. Start a debug Pod with root access, based on a deployment configuration. +. Start a debug pod with root access, based on a deployment configuration. .. Obtain a project's deployment configuration name: + [source,terminal] @@ -38,7 +38,7 @@ $ oc debug deployment/my-deployment --as-root -n $ oc get deploymentconfigs -n ---- -.. Start a debug Pod with root privileges, based on the deployment configuration: +.. 
Start a debug pod with root privileges, based on the deployment configuration: + [source,terminal] ---- @@ -47,5 +47,5 @@ $ oc debug deploymentconfig/my-deployment-configuration --as-root -n ` to the preceding `oc debug` commands to run individual commands within a debug Pod, instead of running an interactive shell. +You can append `-- ` to the preceding `oc debug` commands to run individual commands within a debug pod, instead of running an interactive shell. ==== diff --git a/modules/storage-expanding-flexvolume.adoc b/modules/storage-expanding-flexvolume.adoc index d36aa35c5b..ae2df4b9e3 100644 --- a/modules/storage-expanding-flexvolume.adoc +++ b/modules/storage-expanding-flexvolume.adoc @@ -9,7 +9,7 @@ When using FlexVolume to connect to your backend storage system, you can expand FlexVolume allows expansion if the driver is set with `RequiresFSResize` to `true`. The FlexVolume can be expanded on Pod restart. -Similar to other volume types, FlexVolume volumes can also be expanded when in use by a Pod. +Similar to other volume types, FlexVolume volumes can also be expanded when in use by a pod. .Prerequisites diff --git a/modules/storage-persistent-storage-azure-file-pod.adoc b/modules/storage-persistent-storage-azure-file-pod.adoc index f390d018d8..41daf6ba14 100644 --- a/modules/storage-persistent-storage-azure-file-pod.adoc +++ b/modules/storage-persistent-storage-azure-file-pod.adoc @@ -5,7 +5,7 @@ [id="create-azure-file-pod_{context}"] = Mount the Azure File share in a Pod -After the PersistentVolumeClaim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a Pod. +After the PersistentVolumeClaim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a pod. .Prerequisites @@ -32,6 +32,6 @@ spec: persistentVolumeClaim: claimName: claim1 <3> ---- -<1> The name of the Pod. -<2> The path to mount the Azure File share inside the Pod. +<1> The name of the pod. +<2> The path to mount the Azure File share inside the pod. <3> The name of the PersistentVolumeClaim that has been previously created. diff --git a/modules/storage-persistent-storage-block-volume-examples.adoc b/modules/storage-persistent-storage-block-volume-examples.adoc index 2ead69ad7b..08b99d9866 100644 --- a/modules/storage-persistent-storage-block-volume-examples.adoc +++ b/modules/storage-persistent-storage-block-volume-examples.adoc @@ -48,7 +48,7 @@ spec: <1> `volumeMode` must be set to `Block` to indicate that a raw block PVC is requested. -.Pod specification example +.`Pod` specification example [source,yaml] ---- apiVersion: v1 diff --git a/modules/storage-persistent-storage-efs-pvc.adoc b/modules/storage-persistent-storage-efs-pvc.adoc index 5c7c33b17a..09ffa914f2 100644 --- a/modules/storage-persistent-storage-efs-pvc.adoc +++ b/modules/storage-persistent-storage-efs-pvc.adoc @@ -5,7 +5,7 @@ [id="efs-pvc_{context}"] = Create the EFS PersistentVolumeClaim -EFS PersistentVolumeClaims are created to allow Pods +EFS PersistentVolumeClaims are created to allow pods to mount the underlying EFS storage. .Prerequisites diff --git a/modules/storage-persistent-storage-lifecycle.adoc b/modules/storage-persistent-storage-lifecycle.adoc index 7314ace47b..d20631b439 100644 --- a/modules/storage-persistent-storage-lifecycle.adoc +++ b/modules/storage-persistent-storage-lifecycle.adoc @@ -43,32 +43,32 @@ PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster. 
[id="using-pods_{context}"] -== Use Pods and claimed PVs +== Use pods and claimed PVs Pods use claims as volumes. The cluster inspects the claim to find the bound -volume and mounts that volume for a Pod. For those volumes that support +volume and mounts that volume for a pod. For those volumes that support multiple access modes, you must specify which mode applies when you use -the claim as a volume in a Pod. +the claim as a volume in a pod. Once you have a claim and that claim is bound, the bound PV belongs to you -for as long as you need it. You can schedule Pods and access claimed -PVs by including `persistentVolumeClaim` in the Pod's volumes block. +for as long as you need it. You can schedule pods and access claimed +PVs by including `persistentVolumeClaim` in the pod's volumes block. ifdef::openshift-origin,openshift-enterprise,openshift-webscale[] [id="pvcprotection_{context}"] == Storage Object in Use Protection -The Storage Object in Use Protection feature ensures that PVCs in active use by a Pod and PVs that are bound to PVCs are not removed from the system, as this can result in data loss. +The Storage Object in Use Protection feature ensures that PVCs in active use by a pod and PVs that are bound to PVCs are not removed from the system, as this can result in data loss. Storage Object in Use Protection is enabled by default. [NOTE] ==== -A PVC is in active use by a Pod when a Pod object exists that uses the PVC. +A PVC is in active use by a pod when a `Pod` object exists that uses the PVC. ==== -If a user deletes a PVC that is in active use by a Pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any Pods. Also, if a cluster admin deletes a PV that is bound to a PVC, the PV is not removed immediately. PV removal is postponed until the PV is no longer bound to a PVC. +If a user deletes a PVC that is in active use by a pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any pods. Also, if a cluster admin deletes a PV that is bound to a PVC, the PV is not removed immediately. PV removal is postponed until the PV is no longer bound to a PVC. endif::openshift-origin,openshift-enterprise,openshift-webscale[] diff --git a/modules/storage-persistent-storage-nfs-export-settings.adoc b/modules/storage-persistent-storage-nfs-export-settings.adoc index 62829435aa..5b879f781c 100644 --- a/modules/storage-persistent-storage-nfs-export-settings.adoc +++ b/modules/storage-persistent-storage-nfs-export-settings.adoc @@ -44,6 +44,6 @@ conditions: ---- * The NFS export and directory must be set up so that they are accessible -by the target Pods. Either set the export to be owned by the container's +by the target pods. Either set the export to be owned by the container's primary UID, or supply the Pod group access using `supplementalGroups`, as shown in the group IDs above. diff --git a/modules/storage-persistent-storage-nfs-group-ids.adoc b/modules/storage-persistent-storage-nfs-group-ids.adoc index 6379502a45..da841051c0 100644 --- a/modules/storage-persistent-storage-nfs-group-ids.adoc +++ b/modules/storage-persistent-storage-nfs-group-ids.adoc @@ -33,7 +33,7 @@ spec: ---- <1> `securityContext` must be defined at the Pod level, not under a specific container. -<2> An array of GIDs defined for the Pod. In this case, there is +<2> An array of GIDs defined for the pod. In this case, there is one element in the array. Additional GIDs would be comma-separated. 
Assuming there are no custom SCCs that might satisfy the Pod's @@ -51,5 +51,5 @@ and a group ID of `5555` is allowed. ==== To use a custom SCC, you must first add it to the appropriate service account. For example, use the `default` service account in the given project -unless another has been specified on the Pod specification. +unless another has been specified on the `Pod` specification. ==== diff --git a/modules/storage-persistent-storage-nfs-user-ids.adoc b/modules/storage-persistent-storage-nfs-user-ids.adoc index e42ace459c..135680f2c3 100644 --- a/modules/storage-persistent-storage-nfs-user-ids.adoc +++ b/modules/storage-persistent-storage-nfs-user-ids.adoc @@ -28,7 +28,7 @@ spec: ---- <1> Pods contain a `securityContext` specific to each container and a Pod's `securityContext` which applies to all containers defined in -the Pod. +the pod. <2> `65534` is the `nfsnobody` user. Assuming the `default` project and the `restricted` SCC, the Pod's requested @@ -53,5 +53,5 @@ are defined, UID range checking is still enforced, and the UID of `65534` ==== To use a custom SCC, you must first add it to the appropriate service account. For example, use the `default` service account in the given project -unless another has been specified on the Pod specification. +unless another has been specified on the `Pod` specification. ==== diff --git a/modules/storage-persistent-storage-overview.adoc b/modules/storage-persistent-storage-overview.adoc index c4d13b3248..2764accb17 100644 --- a/modules/storage-persistent-storage-overview.adoc +++ b/modules/storage-persistent-storage-overview.adoc @@ -15,7 +15,7 @@ PVCs are specific to a project, and are created and used by developers as a means to use a PV. PV resources on their own are not scoped to any single project; they can be shared across the entire {product-title} cluster and claimed from any project. After a PV is bound to a PVC, -that PV can not then be bound to additional PVCs. This has the effect of +that PV can not then be bound to additional PVCs. This has the effect of scoping a bound PV to a single namespace, that of the binding project. PVs are defined by a `PersistentVolume` API object, which represents a @@ -24,7 +24,7 @@ by the cluster administrator or dynamically provisioned using a StorageClass obj node is a cluster resource. PVs are volume plug-ins like `Volumes` but -have a lifecycle that is independent of any individual Pod that uses the +have a lifecycle that is independent of any individual pod that uses the PV. PV objects capture the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system. @@ -35,8 +35,8 @@ storage provider. ==== PVCs are defined by a `PersistentVolumeClaim` API object, which represents a -request for storage by a developer. It is similar to a Pod in that Pods -consume node resources and PVCs consume PV resources. For example, Pods +request for storage by a developer. It is similar to a pod in that pods +consume node resources and PVCs consume PV resources. For example, pods can request specific levels of resources, such as CPU and memory, while PVCs can request specific storage capacity and access modes. For example, they can be mounted once read-write or many times read-only. 
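NOTE (illustration only): to make the pod analogy in the overview hunk concrete, just as a pod spec requests CPU and memory, a PVC requests capacity and an access mode. The claim name and size below are placeholders, not values from the module.

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
----

The cluster binds this claim to a PV that satisfies the requested size and access mode, or dynamically provisions one if an applicable storage class exists.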
diff --git a/modules/storage-persistent-storage-pv.adoc b/modules/storage-persistent-storage-pv.adoc index 4601281649..2d2c9c80d9 100644 --- a/modules/storage-persistent-storage-pv.adoc +++ b/modules/storage-persistent-storage-pv.adoc @@ -132,7 +132,7 @@ iSCSI and Fibre Channel volumes do not currently have any fencing mechanisms. You must ensure the volumes are only used by one node at a time. In certain situations, such as draining a node, the volumes can be used simultaneously by two nodes. Before draining the node, first ensure -the Pods that use these volumes are deleted. +the pods that use these volumes are deleted. ==== .Supported access modes for PVs @@ -162,7 +162,7 @@ endif::[] [.small] -- 1. ReadWriteOnce (RWO) volumes cannot be mounted on multiple nodes. If a node fails, the system does not allow the attached RWO volume to be mounted on a new node because it is already assigned to the failed node. If you encounter a multi-attach error message as a result, you can either recover or delete the failed node to make the volume available to other nodes. -2. Use a recreate deployment strategy for Pods that rely on AWS EBS. +2. Use a recreate deployment strategy for pods that rely on AWS EBS. // GCE Persistent Disks, or Openstack Cinder PVs. -- @@ -178,9 +178,9 @@ ifdef::openshift-dedicated[] depending on where the cluster is provisioned. * Only RWO access mode is applicable, as EBS volumes and GCE Persistent Disks cannot be mounted to multiple nodes. - * *emptyDir* has the same lifecycle as the Pod: + * *emptyDir* has the same lifecycle as the pod: ** *emptyDir* volumes survive container crashes/restarts. - ** *emptyDir* volumes are deleted when the Pod is deleted. + ** *emptyDir* volumes are deleted when the pod is deleted. endif::[] ifdef::openshift-online[] @@ -192,13 +192,13 @@ Disks cannot be mounted to multiple nodes. instantiated . * *emptyDir* is restricted to 512 Mi per project (group) per node. - ** A single Pod for a project on a particular node can use up to 512 Mi + ** A single pod for a project on a particular node can use up to 512 Mi of *emptyDir* storage. - ** Multiple Pods for a project on a particular node share the 512 Mi of + ** Multiple pods for a project on a particular node share the 512 Mi of *emptyDir* storage. - * *emptyDir* has the same lifecycle as the Pod: + * *emptyDir* has the same lifecycle as the pod: ** *emptyDir* volumes survive container crashes/restarts. - ** *emptyDir* volumes are deleted when the Pod is deleted. + ** *emptyDir* volumes are deleted when the pod is deleted. endif::[] [id="pv-phase_{context}"] diff --git a/modules/storage-persistent-storage-pvc.adoc b/modules/storage-persistent-storage-pvc.adoc index 790246a532..5a287655f2 100644 --- a/modules/storage-persistent-storage-pvc.adoc +++ b/modules/storage-persistent-storage-pvc.adoc @@ -70,7 +70,7 @@ specific access modes. [id="pvc-resources_{context}"] == Resources -Claims, such as Pods, can request specific quantities of a resource. In +Claims, such as pods, can request specific quantities of a resource. In this case, the request is for storage. The same resource model applies to volumes and claims. @@ -78,11 +78,11 @@ volumes and claims. == Claims as volumes Pods access storage by using the claim as a volume. Claims must exist in the -same namespace as the Pod by using the claim. The cluster finds the claim -in the Pod's namespace and uses it to get the `PersistentVolume` backing -the claim. 
The volume is mounted to the host and into the Pod, for example: +same namespace as the pod by using the claim. The cluster finds the claim +in the pod's namespace and uses it to get the `PersistentVolume` backing +the claim. The volume is mounted to the host and into the pod, for example: -.Mount volume to the host and into the Pod example +.Mount volume to the host and into the pod example [source,yaml] ---- kind: Pod @@ -101,6 +101,6 @@ spec: persistentVolumeClaim: claimName: myclaim <3> ---- -<1> Path to mount the volume inside the Pod +<1> Path to mount the volume inside the pod <2> Name of the volume to mount <3> Name of the PVC, that exists in the same namespace, to use diff --git a/modules/strategies-for-s2i-troubleshooting.adoc b/modules/strategies-for-s2i-troubleshooting.adoc index a8a7a7d622..c98a4da41a 100644 --- a/modules/strategies-for-s2i-troubleshooting.adoc +++ b/modules/strategies-for-s2i-troubleshooting.adoc @@ -7,16 +7,16 @@ Use Source-to-Image (S2I) to build reproducible, Docker-formatted container images. You can create ready-to-run images by injecting application source code into a container image and assembling a new image. The new image incorporates the base image (the builder) and built source. -To determine where in the S2I process a failure occurs, you can observe the state of the Pods relating to each of the following S2I stages: +To determine where in the S2I process a failure occurs, you can observe the state of the pods relating to each of the following S2I stages: -. *During the build configuration stage*, a build Pod is used to create an application container image from a base image and application source code. +. *During the build configuration stage*, a build pod is used to create an application container image from a base image and application source code. -. *During the deployment configuration stage*, a deployment Pod is used to deploy application Pods from the application container image that was built in the build configuration stage. The deployment Pod also deploys other resources such as services and routes. The deployment configuration begins after the build configuration succeeds. +. *During the deployment configuration stage*, a deployment pod is used to deploy application pods from the application container image that was built in the build configuration stage. The deployment pod also deploys other resources such as services and routes. The deployment configuration begins after the build configuration succeeds. -. *After the deployment Pod has started the application Pods*, application failures can occur within the running application Pods. For instance, an application might not behave as expected even though the application Pods are in a `Running` state. In this scenario, you can access running application Pods to investigate application failures within a Pod. +. *After the deployment pod has started the application pods*, application failures can occur within the running application pods. For instance, an application might not behave as expected even though the application pods are in a `Running` state. In this scenario, you can access running application pods to investigate application failures within a pod. When troubleshooting S2I issues, follow this strategy: -. Monitor build, deployment, and application Pod status +. Monitor build, deployment, and application pod status . Determine the stage of the S2I process where the problem occurred . 
Review logs corresponding to the failed stage diff --git a/modules/support-collecting-network-trace.adoc b/modules/support-collecting-network-trace.adoc index 3a91db2ff1..e9b79e5250 100644 --- a/modules/support-collecting-network-trace.adoc +++ b/modules/support-collecting-network-trace.adoc @@ -5,7 +5,7 @@ [id="support-collecting-network-trace_{context}"] = Collecting a network trace from an {product-title} node or container -When investigating potential network-related {product-title} issues, Red Hat Support might request a network packet trace from a specific {product-title} cluster node or from a specific container. The recommended method to capture a network trace in {product-title} is through a debug Pod. +When investigating potential network-related {product-title} issues, Red Hat Support might request a network packet trace from a specific {product-title} cluster node or from a specific container. The recommended method to capture a network trace in {product-title} is through a debug pod. .Prerequisites @@ -32,7 +32,7 @@ $ oc get nodes $ oc debug node/my-cluster-node ---- -. Set `/host` as the root directory within the debug shell. The debug Pod mounts the host's root file system in `/host` within the Pod. By changing the root directory to `/host`, you can run binaries contained in the host's executable paths: +. Set `/host` as the root directory within the debug shell. The debug pod mounts the host's root file system in `/host` within the pod. By changing the root directory to `/host`, you can run binaries contained in the host's executable paths: + [source,terminal] ---- diff --git a/modules/support-generating-a-sosreport-archive.adoc b/modules/support-generating-a-sosreport-archive.adoc index 4fa858be8d..b7b670a151 100644 --- a/modules/support-generating-a-sosreport-archive.adoc +++ b/modules/support-generating-a-sosreport-archive.adoc @@ -5,7 +5,7 @@ [id="support-generating-a-sosreport-archive_{context}"] = Generating a `sosreport` archive for an {product-title} cluster node -The recommended way to generate a `sosreport` for an {product-title} {product-version} cluster node is through a debug Pod. +The recommended way to generate a `sosreport` for an {product-title} {product-version} cluster node is through a debug pod. .Prerequisites @@ -32,7 +32,7 @@ $ oc get nodes $ oc debug node/my-cluster-node ---- -. Set `/host` as the root directory within the debug shell. The debug Pod mounts the host's root file system in `/host` within the Pod. By changing the root directory to `/host`, you can run binaries contained in the host's executable paths: +. Set `/host` as the root directory within the debug shell. The debug pod mounts the host's root file system in `/host` within the pod. By changing the root directory to `/host`, you can run binaries contained in the host's executable paths: + [source,terminal] ---- diff --git a/modules/support-providing-diagnostic-data-to-red-hat.adoc b/modules/support-providing-diagnostic-data-to-red-hat.adoc index d4e5bea209..a2606640a3 100644 --- a/modules/support-providing-diagnostic-data-to-red-hat.adoc +++ b/modules/support-providing-diagnostic-data-to-red-hat.adoc @@ -51,7 +51,7 @@ $ oc get nodes $ oc debug node/my-cluster-node ---- + -. Set `/host` as the root directory within the debug shell. The debug Pod mounts the host's root file system in `/host` within the Pod. By changing the root directory to `/host`, you can run binaries contained in the host's executable paths: +. Set `/host` as the root directory within the debug shell. 
The debug pod mounts the host's root file system in `/host` within the pod. By changing the root directory to `/host`, you can run binaries contained in the host's executable paths: + [source,terminal] ---- diff --git a/modules/topology-manager-policies.adoc b/modules/topology-manager-policies.adoc index 77370bdb9b..160032eccd 100644 --- a/modules/topology-manager-policies.adoc +++ b/modules/topology-manager-policies.adoc @@ -6,11 +6,11 @@ [id="topology_manager_policies_{context}"] = Topology Manager policies -Topology Manager aligns Pod resources of all Quality of Service (QoS) classes by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the Pod resources. +Topology Manager aligns `Pod` resources of all Quality of Service (QoS) classes by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the `Pod` resources. [NOTE] ==== -To align CPU resources with other requested resources in a Pod spec, the CPU Manager must be enabled with the `static` CPU Manager policy. +To align CPU resources with other requested resources in a `Pod` spec, the CPU Manager must be enabled with the `static` CPU Manager policy. ==== Topology Manager supports four allocation policies, which you assign in the `cpumanager-enabled` custom resource (CR): diff --git a/modules/understanding-pod-error-states.adoc b/modules/understanding-pod-error-states.adoc index 3259470447..1cea891fe9 100644 --- a/modules/understanding-pod-error-states.adoc +++ b/modules/understanding-pod-error-states.adoc @@ -3,11 +3,11 @@ // * support/troubleshooting/investigating-pod-issues.adoc [id="understanding-pod-error-states_{context}"] -= Understanding Pod error states += Understanding pod error states -Pod failures return explicit error states that can be observed in the `status` field in the output of `oc get Pods`. Pod error states cover image, container, and container network related failures. +Pod failures return explicit error states that can be observed in the `status` field in the output of `oc get pods`. Pod error states cover image, container, and container network related failures. -The following table provides a list of Pod error states along with their descriptions. +The following table provides a list of pod error states along with their descriptions. .Pod error states [cols="1,4",options="header"] @@ -33,16 +33,16 @@ The following table provides a list of Pod error states along with their descrip | When attempting to retrieve an image from a registry, an HTTP error was encountered. | `ErrContainerNotFound` -| The specified container is either not present or not managed by the kubelet, within the declared Pod. +| The specified container is either not present or not managed by the kubelet, within the declared pod. | `ErrRunInitContainer` | Container initialization failed. | `ErrRunContainer` -| None of the Pod's containers started successfully. +| None of the pod's containers started successfully. | `ErrKillContainer` -| None of the Pod's containers were killed successfully. +| None of the pod's containers were killed successfully. | `ErrCrashLoopBackOff` | A container has terminated. The kubelet will not attempt to restart it. 
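NOTE (illustration only): as a practical companion to the error-state table above, a field selector can surface pods that are not in the `Running` or `Succeeded` phase across the cluster. The selector filters on the pod phase, while the more specific error states listed in the table appear in the `STATUS` column of the output.

[source,terminal]
----
$ oc get pods --all-namespaces --field-selector=status.phase!=Running,status.phase!=Succeeded
----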
diff --git a/modules/verifying-crio-status.adoc b/modules/verifying-crio-status.adoc index 68d02b3a1f..5416b52c75 100644 --- a/modules/verifying-crio-status.adoc +++ b/modules/verifying-crio-status.adoc @@ -14,15 +14,15 @@ You can verify CRI-O container runtime engine status on each cluster node. .Procedure -. Review CRI-O status by querying the `crio` systemd service on a node, within a debug Pod. -.. Start a debug Pod for a node: +. Review CRI-O status by querying the `crio` systemd service on a node, within a debug pod. +.. Start a debug pod for a node: + [source,terminal] ---- $ oc debug node/my-node ---- + -.. Set `/host` as the root directory within the debug shell. The debug Pod mounts the host's root file system in `/host` within the Pod. By changing the root directory to `/host`, you can run binaries contained in the host's executable paths: +.. Set `/host` as the root directory within the debug shell. The debug pod mounts the host's root file system in `/host` within the pod. By changing the root directory to `/host`, you can run binaries contained in the host's executable paths: + [source,terminal] ---- diff --git a/modules/virt-about-cpu-and-memory-quota-namespace.adoc b/modules/virt-about-cpu-and-memory-quota-namespace.adoc index ccc5186d5c..2d879b3b6c 100644 --- a/modules/virt-about-cpu-and-memory-quota-namespace.adoc +++ b/modules/virt-about-cpu-and-memory-quota-namespace.adoc @@ -12,6 +12,6 @@ consumed by resources within that namespace. The `CDIConfig` object defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values for the `CDIConfig` object are set to a default value of 0. -This ensures that Pods created by CDI that make no compute resource requirements +This ensures that pods created by CDI that make no compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota. diff --git a/modules/virt-about-the-overview-dashboard.adoc b/modules/virt-about-the-overview-dashboard.adoc index 7ccab4111b..36aa662f0f 100644 --- a/modules/virt-about-the-overview-dashboard.adoc +++ b/modules/virt-about-the-overview-dashboard.adoc @@ -21,7 +21,7 @@ Status include *ok*, *error*, *warning*, *in progress*, and *unknown*. Resources ** Version * *Cluster Inventory* details number of resources and associated statuses. It is helpful when intervention is required to resolve problems, including information about: ** Number of nodes -** Number of Pods +** Number of pods ** Persistent storage volume claims ifdef::virt-cluster[] ** Virtual machines (available if {VirtProductName} is installed) @@ -36,8 +36,8 @@ endif::virt-cluster[] ** Storage consumed ** Network resources consumed * *Cluster Utilization* shows the capacity of various resources over a specified period of time, to help administrators understand the scale and frequency of high resource consumption. -* *Events* lists messages related to recent activity in the cluster, such as Pod creation or virtual machine migration to another host. -* *Top Consumers* helps administrators understand how cluster resources are consumed. Click on a resource to jump to a detailed page listing Pods and nodes that consume the largest amount of the specified cluster resource (CPU, memory, or storage). +* *Events* lists messages related to recent activity in the cluster, such as pod creation or virtual machine migration to another host. +* *Top Consumers* helps administrators understand how cluster resources are consumed. 
Click on a resource to jump to a detailed page listing pods and nodes that consume the largest amount of the specified cluster resource (CPU, memory, or storage). ifeval::["{context}" == "virt-using-dashboard-to-get-cluster-info"] :!virt-cluster: diff --git a/modules/virt-about-upgrading-virt.adoc b/modules/virt-about-upgrading-virt.adoc index 105a74136a..dd1b3e1661 100644 --- a/modules/virt-about-upgrading-virt.adoc +++ b/modules/virt-about-upgrading-virt.adoc @@ -22,8 +22,8 @@ connection. Most automatic updates complete within fifteen minutes. == How {VirtProductName} upgrades affect your cluster * Upgrading does not interrupt virtual machine workloads. -** Virtual machine Pods are not restarted or migrated during an upgrade. If you -need to update the `virt-launcher` Pod, you must restart or live migrate the +** Virtual machine pods are not restarted or migrated during an upgrade. If you +need to update the `virt-launcher` pod, you must restart or live migrate the virtual machine. + [NOTE] diff --git a/modules/virt-additional-scc-for-kubevirt-controller.adoc b/modules/virt-additional-scc-for-kubevirt-controller.adoc index a6cad9dcae..aa29004007 100644 --- a/modules/virt-additional-scc-for-kubevirt-controller.adoc +++ b/modules/virt-additional-scc-for-kubevirt-controller.adoc @@ -5,13 +5,13 @@ [id="virt-additional-scc-for-kubevirt-controller_{context}"] = Additional {product-title} Security Context Constraints and Linux capabilities for the `kubevirt-controller` service account -Security Context Constraints (SCCs) control permissions for Pods. These permissions include actions that a Pod, a collection of containers, can perform and what resources it can access. You can use SCCs to define a set of conditions that a Pod must run with in order to be accepted into the system. +Security Context Constraints (SCCs) control permissions for pods. These permissions include actions that a pod, a collection of containers, can perform and what resources it can access. You can use SCCs to define a set of conditions that a Pod must run with in order to be accepted into the system. -The `kubevirt-controller` is a cluster controller that creates the virt-launcher Pods for virtual machines in the cluster. These virt-launcher Pods are granted permissions by the `kubevirt-controller` service account. +The `kubevirt-controller` is a cluster controller that creates the virt-launcher pods for virtual machines in the cluster. These virt-launcher pods are granted permissions by the `kubevirt-controller` service account. == Additional SCCs granted to the `kubevirt-controller` service account -The `kubevirt-controller` service account is granted additional SCCs and Linux capabilities so that it can create virt-launcher Pods with the appropriate permissions. These extended permissions allow virtual machines to take advantage of {VirtProductName} features that are beyond the scope of typical Pods. +The `kubevirt-controller` service account is granted additional SCCs and Linux capabilities so that it can create virt-launcher pods with the appropriate permissions. These extended permissions allow virtual machines to take advantage of {VirtProductName} features that are beyond the scope of typical pods. 
 The `kubevirt-controller` service account is granted the following SCCs:
diff --git a/modules/virt-configuring-guest-memory-overcommitment.adoc b/modules/virt-configuring-guest-memory-overcommitment.adoc
index 85e4623a66..2ed000c03c 100644
--- a/modules/virt-configuring-guest-memory-overcommitment.adoc
+++ b/modules/virt-configuring-guest-memory-overcommitment.adoc
@@ -43,7 +43,7 @@ spec:
 +
 [NOTE]
 ====
-The same eviction rules as those for Pods apply to the virtual machine instance if
+The same eviction rules as those for pods apply to the virtual machine instance if
 the node is under memory pressure.
 ====
 
diff --git a/modules/virt-connecting-to-the-terminal.adoc b/modules/virt-connecting-to-the-terminal.adoc
index 97433aba20..cc3964fc35 100644
--- a/modules/virt-connecting-to-the-terminal.adoc
+++ b/modules/virt-connecting-to-the-terminal.adoc
@@ -14,6 +14,6 @@ list and select the appropriate project.
 . Click *Workloads* -> *Virtualization* from the side menu.
 . Click the *Virtual Machines* tab.
 . Select a virtual machine to open the *Virtual Machine Overview* screen.
-. In the *Details* tab, click the `virt-launcher-` Pod.
+. In the *Details* tab, click the `virt-launcher-` pod.
 . Click the *Terminal* tab. If the terminal is blank, select the terminal and press any key to initiate connection.
diff --git a/modules/virt-creating-bridge-nad-cli.adoc b/modules/virt-creating-bridge-nad-cli.adoc
index f004da78b9..7038095232 100644
--- a/modules/virt-creating-bridge-nad-cli.adoc
+++ b/modules/virt-creating-bridge-nad-cli.adoc
@@ -6,7 +6,7 @@
 = Creating a Linux bridge NetworkAttachmentDefinition in the CLI
 
 As a network administrator, you can configure a NetworkAttachmentDefinition
-of type `cnv-bridge` to provide Layer-2 networking to Pods and virtual machines.
+of type `cnv-bridge` to provide Layer-2 networking to pods and virtual machines.
 
 [NOTE]
 ====
diff --git a/modules/virt-creating-bridge-nad-web.adoc b/modules/virt-creating-bridge-nad-web.adoc
index 6b7a703d16..1a26c29f44 100644
--- a/modules/virt-creating-bridge-nad-web.adoc
+++ b/modules/virt-creating-bridge-nad-web.adoc
@@ -11,7 +11,7 @@ The NetworkAttachmentDefinition is a custom resource that exposes layer-2
 device to a specific namespace in your {VirtProductName} cluster.
 
 Network administrators can create NetworkAttachmentDefinitions
-to provide existing layer-2 networking to Pods and virtual machines.
+to provide existing layer-2 networking to pods and virtual machines.
 
 .Procedure
 
diff --git a/modules/virt-creating-vm.adoc b/modules/virt-creating-vm.adoc
index a1dfbb6762..2c05230c4c 100644
--- a/modules/virt-creating-vm.adoc
+++ b/modules/virt-creating-vm.adoc
@@ -26,7 +26,7 @@ instance by starting it.
 
 [NOTE]
 ====
-A https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/[ReplicaSet]’s purpose is often used to guarantee the availability of a specified number of identical Pods.
+A https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/[ReplicaSet] is often used to guarantee the availability of a specified number of identical pods.
 ReplicaSet is not currently supported in {VirtProductName}.
 ====
 
diff --git a/modules/virt-deploying-virt.adoc b/modules/virt-deploying-virt.adoc
index e575024ef7..5c482b8568 100644
--- a/modules/virt-deploying-virt.adoc
+++ b/modules/virt-deploying-virt.adoc
@@ -33,6 +33,6 @@ to the next step, ensure that the custom resource is named the default
 
 . Click *Create* to launch {VirtProductName}.
 
-. Navigate to the *Workloads* -> *Pods* page and monitor the {VirtProductName} Pods
-until they are all *Running*. After all the Pods display the *Running* state,
+. Navigate to the *Workloads* -> *Pods* page and monitor the {VirtProductName} pods
+until they are all *Running*. After all the pods display the *Running* state,
 you can access {VirtProductName}.
diff --git a/modules/virt-extended-selinux-policies-for-virt-launcher.adoc b/modules/virt-extended-selinux-policies-for-virt-launcher.adoc
index d5a78c6599..48e30817fa 100644
--- a/modules/virt-extended-selinux-policies-for-virt-launcher.adoc
+++ b/modules/virt-extended-selinux-policies-for-virt-launcher.adoc
@@ -3,9 +3,9 @@
 // * virt/virt-additional-security-privileges-controller-and-launcher.adoc
 
 [id="virt-extended-selinux-policies-for-virt-launcher_{context}"]
-= Extended SELinux policies for virt-launcher Pods
+= Extended SELinux policies for virt-launcher pods
 
-The `container_t` SELinux policy for virt-launcher Pods is extended with the following rules:
+The `container_t` SELinux policy for virt-launcher pods is extended with the following rules:
 
 * `allow process self (tun_socket (relabelfrom relabelto attach_queue))`
 * `allow process sysfs_t (file (write))`
@@ -16,7 +16,7 @@ These rules enable the following virtualization features:
 
 * Relabel and attach queues to its own TUN sockets, which is required to support network multi-queue. Multi-queue enables network performance to scale as the number of available vCPUs increases.
 
-* Allows virt-launcher Pods to write information to sysfs (`/sys`) files, which is required to enable Single Root I/O Virtualization (SR-IOV).
+* Allow virt-launcher pods to write information to sysfs (`/sys`) files, which is required to enable Single Root I/O Virtualization (SR-IOV).
 
 * Read/write `hugetlbfs` entries, which is required to support huge pages. Huge pages are a method of managing large amounts of memory by increasing the memory page size.
 
diff --git a/modules/virt-networking-glossary.adoc b/modules/virt-networking-glossary.adoc
index 75e2d46f7b..1097f85125 100644
--- a/modules/virt-networking-glossary.adoc
+++ b/modules/virt-networking-glossary.adoc
@@ -23,7 +23,7 @@ API resource that allows you to define custom resources, or an object defined
 by using the CRD API resource.
 
 NetworkAttachmentDefinition:: a CRD introduced by the Multus project that
-allows you to attach Pods, virtual machines, and virtual machine instances to one or more networks.
+allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
 
 Preboot eXecution Environment (PXE):: an interface that enables an administrator to boot a client machine from a server over the network. Network booting allows
diff --git a/modules/virt-understanding-logs.adoc b/modules/virt-understanding-logs.adoc
index 85a50cf7c4..80f3a59a4a 100644
--- a/modules/virt-understanding-logs.adoc
+++ b/modules/virt-understanding-logs.adoc
@@ -5,7 +5,7 @@
 [id="virt-understanding-logs_{context}"]
 = Understanding virtual machine logs
 
-Logs are collected for {product-title} Builds, Deployments, and Pods.
+Logs are collected for {product-title} Builds, Deployments, and pods.
 In {VirtProductName}, virtual machine logs can be retrieved from the virtual machine launcher Pod in either the web console or the CLI.
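For quick reference alongside the logging module above, retrieving these logs from the CLI might look like the following minimal sketch. The pod name is illustrative only: each virtual machine gets a generated `virt-launcher-<vm_name>-<id>` pod, so list the pods first to find the actual name.

[source,terminal]
----
$ oc get pods
$ oc logs virt-launcher-<vm_name>-<id>
----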
diff --git a/modules/virt-understanding-scratch-space.adoc b/modules/virt-understanding-scratch-space.adoc
index 0866de530e..fffe36f3e0 100644
--- a/modules/virt-understanding-scratch-space.adoc
+++ b/modules/virt-understanding-scratch-space.adoc
@@ -3,36 +3,36 @@
 // * virt/virtual_machines/virtual_disks/virt-preparing-cdi-scratch-space.adoc
 
 [id="virt-understanding-scratch-space_{context}"]
-= Understanding scratch space 
+= Understanding scratch space
 
-The Containerized Data Importer (CDI) requires scratch space (temporary storage) 
+The Containerized Data Importer (CDI) requires scratch space (temporary storage)
 to complete some operations, such as importing and uploading virtual machine images.
 
-During this process, the CDI provisions a scratch space PVC equal to the size of 
-the PVC backing the destination DataVolume (DV). The scratch space PVC is deleted 
+During this process, the CDI provisions a scratch space PVC equal to the size of
+the PVC backing the destination DataVolume (DV). The scratch space PVC is deleted
 after the operation completes or aborts.
 
-The CDIConfig object allows you to define which StorageClass to use to bind the 
-scratch space PVC by setting the `scratchSpaceStorageClass` in the `spec:` 
-section of the CDIConfig object. 
+The CDIConfig object allows you to define which StorageClass to use to bind the
+scratch space PVC by setting the `scratchSpaceStorageClass` in the `spec:`
+section of the CDIConfig object.
 
-If the defined StorageClass does not match a StorageClass in the cluster, then 
-the default StorageClass defined for the cluster is used. If there is no 
-default StorageClass defined in the cluster, the StorageClass used to provision 
-the original DV or PVC is used. 
+If the defined StorageClass does not match a StorageClass in the cluster, then
+the default StorageClass defined for the cluster is used. If there is no
+default StorageClass defined in the cluster, the StorageClass used to provision
+the original DV or PVC is used.
 
 [NOTE]
 ====
-The CDI requires requesting scratch space with a `file` volume mode, regardless 
-of the PVC backing the origin DataVolume. If the origin PVC is backed by 
-`block` volume mode, you must define a StorageClass capable of provisioning 
+The CDI requires requesting scratch space with a `file` volume mode, regardless
+of the PVC backing the origin DataVolume. If the origin PVC is backed by
+`block` volume mode, you must define a StorageClass capable of provisioning
 `file` volume mode PVCs.
 ====
 
 [discrete]
-== Manual provisioning 
+== Manual provisioning
 
-If there are no storage classes, the CDI will use any PVCs in the project that 
-match the size requirements for the image. If there are no PVCs that match these 
-requirements, the CDI import Pod will remain in a *Pending* state until an 
-appropriate PVC is made available or until a timeout function kills the Pod.
+If there are no storage classes, the CDI will use any PVCs in the project that
+match the size requirements for the image. If there are no PVCs that match these
+requirements, the CDI import pod will remain in a *Pending* state until an
+appropriate PVC is made available or until a timeout function kills the pod.
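As a companion to the scratch space module above, setting the scratch space storage class from the CLI might look like the following sketch. The object name and storage class name are placeholders, and on some versions the `CDIConfig` object is reconciled by the CDI operator, so verify how your cluster manages it before applying any change.

[source,terminal]
----
$ oc get cdiconfig
$ oc patch cdiconfig <cdiconfig_name> --type merge -p '{"spec": {"scratchSpaceStorageClass": "<storage_class_name>"}}'
----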
diff --git a/modules/virt-viewing-virtual-machine-logs-cli.adoc b/modules/virt-viewing-virtual-machine-logs-cli.adoc
index 11e0e25488..a4a83028dd 100644
--- a/modules/virt-viewing-virtual-machine-logs-cli.adoc
+++ b/modules/virt-viewing-virtual-machine-logs-cli.adoc
@@ -5,7 +5,7 @@
 [id="virt-viewing-virtual-machine-logs-cli_{context}"]
 = Viewing virtual machine logs in the CLI
 
-Get virtual machine logs from the virtual machine launcher Pod.
+Get virtual machine logs from the virtual machine launcher pod.
 
 .Procedure
 
diff --git a/modules/virt-viewing-virtual-machine-logs-web.adoc b/modules/virt-viewing-virtual-machine-logs-web.adoc
index 0a3393c73a..2717538100 100644
--- a/modules/virt-viewing-virtual-machine-logs-web.adoc
+++ b/modules/virt-viewing-virtual-machine-logs-web.adoc
@@ -5,7 +5,7 @@
 [id="virt-viewing-virtual-machine-logs-web_{context}"]
 = Viewing virtual machine logs in the web console
 
-Get virtual machine logs from the associated virtual machine launcher Pod.
+Get virtual machine logs from the associated virtual machine launcher pod.
 
 .Procedure
 
diff --git a/modules/web-console-overview.adoc b/modules/web-console-overview.adoc
index 40489d87be..b6bd9ef853 100644
--- a/modules/web-console-overview.adoc
+++ b/modules/web-console-overview.adoc
@@ -4,8 +4,8 @@
 [id="web-console-overview_{context}"]
 = Understanding and accessing the web console
 
-The web console runs as a Pod on the master. The static assets required to run
-the web console are served by the Pod. Once {product-title} is successfully
+The web console runs as a pod on the master. The static assets required to run
+the web console are served by the pod. Once {product-title} is successfully
 installed, find the URL for the web console and login credentials for your
 installed cluster in the CLI output of the installation program. For example: