openshift/openshift-docs (mirror of https://github.com/openshift/openshift-docs.git)

commit b96bfbf63e (parent 84d28e7f83), committed by openshift-cherrypick-robot

    First sweep of updating term formatting per new style guidelines
@@ -3,9 +3,9 @@
 // * support/troubleshooting/investigating-pod-issues.adoc

 [id="accessing-running-pods_{context}"]
-= Accessing running Pods
+= Accessing running pods

-You can review running Pods dynamically by opening a shell inside a Pod or by gaining network access through port forwarding.
+You can review running pods dynamically by opening a shell inside a pod or by gaining network access through port forwarding.

 .Prerequisites

@@ -15,29 +15,29 @@ You can review running Pods dynamically by opening a shell inside a Pod or by ga
 .Procedure

-. Switch into the project that contains the Pod you would like to access. This is necessary because the `oc rsh` command does not accept the `-n` namespace option:
+. Switch into the project that contains the pod you would like to access. This is necessary because the `oc rsh` command does not accept the `-n` namespace option:
 +
 [source,terminal]
 ----
 $ oc project <namespace>
 ----

-. Start a remote shell into a Pod:
+. Start a remote shell into a pod:
 +
 [source,terminal]
 ----
 $ oc rsh <pod_name> <1>
 ----
-<1> If a Pod has multiple containers, `oc rsh` defaults to the first container unless `-c <container_name>` is specified.
+<1> If a pod has multiple containers, `oc rsh` defaults to the first container unless `-c <container_name>` is specified.

-. Start a remote shell into a specific container within a Pod:
+. Start a remote shell into a specific container within a pod:
 +
 [source,terminal]
 ----
 $ oc rsh -c <container_name> pod/<pod_name>
 ----

-. Create a port forwarding session to a port on a Pod:
+. Create a port forwarding session to a port on a pod:
 +
 [source,terminal]
 ----
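The port-forwarding command itself is truncated by this hunk boundary. For orientation only, a hedged sketch of the general form, with illustrative placeholders:

[source,terminal]
----
$ oc port-forward <pod_name> <local_port>:<remote_port>
----

For example, `oc port-forward mypod 8080:80` would make the pod's port 80 reachable on localhost port 8080 for as long as the command runs.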
@@ -86,7 +86,7 @@ Conditions:

 You can verify that the Kubernetes events were sent to Knative by looking at the message dumper function logs.

-. Get the Pods:
+. Get the pods:
 +
 [source,terminal]
@@ -94,7 +94,7 @@ You can verify that the Kubernetes events were sent to Knative by looking at the
 $ oc get pods
 ----

-. View the message dumper function logs for the Pods:
+. View the message dumper function logs for the pods:
 +
 [source,terminal]
@@ -216,7 +216,7 @@ spec:

 To verify that the Kubernetes events were sent to Knative, you can look at the message dumper function logs.

-. Get the Pods:
+. Get the pods:
 +
 [source,terminal]
@@ -224,7 +224,7 @@ To verify that the Kubernetes events were sent to Knative, you can look at the m
 $ oc get pods
 ----

-. View the message dumper function logs for the Pods:
+. View the message dumper function logs for the pods:
 +
 [source,terminal]
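The log command is cut off by these hunk boundaries. One possible way to view the message dumper logs, as a hedged sketch that assumes the dumper runs as a Knative service named `event-display` (that name is an assumption, not part of this commit):

[source,terminal]
----
$ oc logs $(oc get pod -o name | grep event-display) -c user-container
----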
@@ -16,7 +16,7 @@ container has its IP address removed from the endpoints of all services. A
 readiness probe can be used to signal to the endpoints controller that even
 though a container is running, it should not receive any traffic from a proxy.

-For example, a Readiness check can control which Pods are used. When a Pod is not ready,
+For example, a Readiness check can control which pods are used. When a pod is not ready,
 it is removed.

 Liveness Probe::
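As background for the readiness discussion above, a minimal readiness probe sketch using standard Kubernetes fields; the pod name, image, path, and port are illustrative assumptions, not taken from this commit:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                      # illustrative name
spec:
  containers:
  - name: app
    image: quay.io/example/app:latest    # illustrative image
    readinessProbe:                      # traffic is sent only while this probe succeeds
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
----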
@@ -214,9 +214,9 @@ repository. If this is not the intent, specify the required builder image for
 the source using the `~` separator.
 ====

-== Grouping images and source in a single Pod
+== Grouping images and source in a single pod

-The `new-app` command allows deploying multiple images together in a single Pod.
+The `new-app` command allows deploying multiple images together in a single pod.
 In order to specify which images to group together, use the `+` separator. The
 `--group` command line argument can also be used to specify the images that should
 be grouped together. To group the image built from a source repository with
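For context on the `+` separator and `--group` argument described in this hunk, a hedged sketch; the image names and repository URL are illustrative:

[source,terminal]
----
$ oc new-app ruby+mysql

$ oc new-app ruby~https://github.com/openshift/ruby-hello-world mysql --group=ruby+mysql
----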
@@ -16,14 +16,14 @@ concept of Kubernetes is fairly simple:

 * Start with one or more worker nodes to run the container workloads.
 * Manage the deployment of those workloads from one or more master nodes.
-* Wrap containers in a deployment unit called a Pod. Using Pods provides extra
+* Wrap containers in a deployment unit called a pod. Using pods provides extra
 metadata with the container and offers the ability to group several containers
 in a single deployment entity.
 * Create special kinds of assets. For example, services are represented by a
-set of Pods and a policy that defines how they are accessed. This policy
+set of pods and a policy that defines how they are accessed. This policy
 allows containers to connect to the services that they need even if they do not
 have the specific IP addresses for the services. Replication controllers are
-another special asset that indicates how many Pod Replicas are required to run
+another special asset that indicates how many pod replicas are required to run
 at a time. You can use this capability to automatically scale your application
 to adapt to its current demand.

@@ -21,7 +21,7 @@ explained in the cluster installation documentation.
 In a Kubernetes cluster, the worker nodes are where the actual workloads
 requested by Kubernetes users run and are managed. The worker nodes advertise
 their capacity and the scheduler, which is part of the master services,
-determines on which nodes to start containers and Pods. Important services run
+determines on which nodes to start containers and pods. Important services run
 on each worker node, including CRI-O, which is the container engine, Kubelet,
 which is the service that accepts and fulfills requests for running and
 stopping container workloads, and a service proxy, which manages communication

@@ -54,7 +54,7 @@ all master machines and breaking your cluster.
 ====
 Use three master nodes. Although you can theoretically
 use any number of master nodes, the number is constrained by etcd quorum due to
-master static Pods and etcd static Pods working on the same hosts.
+master static pods and etcd static pods working on the same hosts.
 ====

 Services that fall under the Kubernetes category on the master include the

@@ -65,7 +65,7 @@ Kubernetes API server, etcd, Kubernetes controller manager, and HAProxy services
 |===
 |Component |Description
 |Kubernetes API server
-|The Kubernetes API server validates and configures the data for Pods, Services,
+|The Kubernetes API server validates and configures the data for pods, Services,
 and replication controllers. It also provides a focal point for the shared state of the cluster.
 |etcd
 |etcd stores the persistent master state while other components watch etcd for

@@ -103,7 +103,7 @@ The OpenShift OAuth server is managed by the Cluster Authentication Operator.
 |===

 Some of these services on the master machines run as systemd services, while
-others run as static Pods.
+others run as static pods.

 Systemd services are appropriate for services that you need to always come up on
 that particular system shortly after it starts. For master machines, those
@@ -6,7 +6,7 @@
 [id="backing-up-etcd-data_{context}"]
 = Backing up etcd data

-Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static Pods. This backup can be saved and used at a later time if you need to restore etcd.
+Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd.

 [IMPORTANT]
 ====

@@ -65,7 +65,7 @@ snapshot db and kube resources are successfully saved to /home/core/assets/backu
 In this example, two files are created in the `/home/core/assets/backup/` directory on the master host:

 * `snapshot_<datetimestamp>.db`: This file is the etcd snapshot.
-* `static_kuberesources_<datetimestamp>.tar.gz`: This file contains the resources for the static Pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot.
+* `static_kuberesources_<datetimestamp>.tar.gz`: This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot.
 +
 [NOTE]
 ====
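The command that produces the snapshot and static pod resources referenced above lies outside these hunks. As a hedged sketch of the usual flow on a master node, with the node name as a placeholder:

[source,terminal]
----
$ oc debug node/<node_name>

sh-4.4# chroot /host
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup
----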
@@ -5,7 +5,7 @@
 [id="bound-sa-tokens-about_{context}"]
 = About bound service account tokens

-You can use bound service account tokens to limit the scope of permissions for a given service account token. These tokens are audience and time-bound. This facilitates the authentication of a service account to an IAM role and the generation of temporary credentials mounted to a Pod. You can request bound service account tokens by using volume projection and the TokenRequest API.
+You can use bound service account tokens to limit the scope of permissions for a given service account token. These tokens are audience and time-bound. This facilitates the authentication of a service account to an IAM role and the generation of temporary credentials mounted to a pod. You can request bound service account tokens by using volume projection and the TokenRequest API.

 [IMPORTANT]
 ====
@@ -5,7 +5,7 @@
 [id="bound-sa-tokens-configuring_{context}"]
 = Configuring bound service account tokens using volume projection

-You can configure Pods to request bound service account tokens by using volume projection.
+You can configure pods to request bound service account tokens by using volume projection.

 .Prerequisites

@@ -34,7 +34,7 @@ spec:
 ----
 <1> This value should be a URL from which the recipient of a bound token can source the public keys necessary to verify the signature of the token. The default is [x-]`https://kubernetes.default.svc`.

-. Configure a Pod to use a bound service account token by using volume projection.
+. Configure a pod to use a bound service account token by using volume projection.

 .. Create a file called `pod-projected-svc-token.yaml` with the following contents:
 +

@@ -66,14 +66,14 @@ spec:
 <3> Optionally set the expiration of the service account token, in seconds. The default is 3600 seconds (1 hour) and must be at least 600 seconds (10 minutes). The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.
 <4> Optionally set the intended audience of the token. The recipient of a token should verify that the recipient identity matches the audience claim of the token, and should otherwise reject the token. The audience defaults to the identifier of the API server.

-.. Create the Pod:
+.. Create the pod:
 +
 [source,terminal]
 ----
 $ oc create -f pod-projected-svc-token.yaml
 ----
 +
-The kubelet requests and stores the token on behalf of the Pod, makes the token available to the Pod at a configurable file path, and refreshes the token as it approaches expiration.
+The kubelet requests and stores the token on behalf of the pod, makes the token available to the pod at a configurable file path, and refreshes the token as it approaches expiration.

 . The application that uses the bound token must handle reloading the token when it rotates.
 +
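The contents of `pod-projected-svc-token.yaml` are elided by the hunk boundaries. A hedged sketch of a projected service account token volume using standard Kubernetes fields; the image, paths, audience, and expiration values are illustrative, not from this commit:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-svc-token
spec:
  serviceAccountName: default            # service account whose token is projected
  containers:
  - name: app
    image: quay.io/example/app:latest    # illustrative image
    volumeMounts:
    - mountPath: /var/run/secrets/tokens
      name: bound-token
  volumes:
  - name: bound-token
    projected:
      sources:
      - serviceAccountToken:
          path: bound-token              # file name under the mount path
          expirationSeconds: 7200        # must be at least 600
          audience: vault                # intended recipient of the token
----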
@@ -165,7 +165,7 @@ The example controller executes the following reconciliation logic for each
 --
 * Create a Memcached Deployment if it does not exist.
 * Ensure that the Deployment size is the same as specified by the `Memcached` CR spec.
-* Update the `Memcached` CR status with the names of the Memcached Pods.
+* Update the `Memcached` CR status with the names of the Memcached pods.
 --
 +
 The next two sub-steps inspect how the Controller watches resources and how the

@@ -374,8 +374,8 @@ memcached-operator 1 1 1 1 2m
 example-memcached 3 3 3 3 1m
 ----

-.. Check the Pods and CR status to confirm the status is updated with the
-`memcached` Pod names:
+.. Check the pods and CR status to confirm the status is updated with the
+`memcached` pod names:
 +
 [source,terminal]
 ----
@@ -5,7 +5,7 @@
 [id="builds-adding-source-clone-secrets_{context}"]
 = Source Clone Secrets

-Builder Pods require access to any Git repositories defined as source for a build. Source clone secrets are used to provide the builder Pod with access it would not normally have access to, such as private repositories or repositories with self-signed or untrusted SSL certificates.
+Builder pods require access to any Git repositories defined as source for a build. Source clone secrets are used to provide the builder pod with access it would not normally have access to, such as private repositories or repositories with self-signed or untrusted SSL certificates.

 * The following source clone secret configurations are supported.
 ** .gitconfig File
@@ -9,7 +9,7 @@ Many applications require configuration using some combination of configuration

 The ConfigMap object provides mechanisms to inject containers with configuration data while keeping containers agnostic of {product-title}. A ConfigMap can be used to store fine-grained information like individual properties or coarse-grained information like entire configuration files or JSON blobs.

-The ConfigMap API object holds key-value pairs of configuration data that can be consumed in Pods or used to store configuration data for system components such as controllers. For example:
+The ConfigMap API object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. For example:

 .ConfigMap Object Definition
 [source,yaml]
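The ConfigMap definition that follows this hunk in the source file is not shown. A hedged minimal sketch of such an object; the names, keys, and values are illustrative:

[source,yaml]
----
kind: ConfigMap
apiVersion: v1
metadata:
  name: example-config          # illustrative name
  namespace: my-project         # illustrative project
data:
  example.property.1: hello
  example.properties: |-        # an entire configuration file stored under one key
    property.1=value-1
    property.2=value-2
binaryData:
  logo.png: iVBORw0KGgo=        # base64-encoded binary content (truncated example)
----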
@@ -38,7 +38,7 @@ binaryData:
 You can use the `binaryData` field when you create a ConfigMap from a binary file, such as an image.
 ====

-Configuration data can be consumed in Pods in a variety of ways. A ConfigMap can be used to:
+Configuration data can be consumed in pods in a variety of ways. A ConfigMap can be used to:

 * Populate environment variable values in containers
 * Set command-line arguments in a container

@@ -51,14 +51,14 @@ A ConfigMap is similar to a secret, but designed to more conveniently support wo
 [discrete]
 == ConfigMap restrictions

-*A ConfigMap must be created before its contents can be consumed in Pods.*
+*A ConfigMap must be created before its contents can be consumed in pods.*

 Controllers can be written to tolerate missing configuration data. Consult individual components configured by using ConfigMaps on a case-by-case basis.

 *ConfigMap objects reside in a project.*

-They can only be referenced by Pods in the same project.
+They can only be referenced by pods in the same project.

-*The Kubelet only supports the use of a ConfigMap for Pods it gets from the API server.*
+*The Kubelet only supports the use of a ConfigMap for pods it gets from the API server.*

-This includes any Pods created by using the CLI, or indirectly from a replication controller. It does not include Pods created by using the {product-title} node's `--manifest-url` flag, its `--config` flag, or its REST API because these are not common ways to create Pods.
+This includes any pods created by using the CLI, or indirectly from a replication controller. It does not include pods created by using the {product-title} node's `--manifest-url` flag, its `--config` flag, or its REST API because these are not common ways to create pods.
@@ -22,7 +22,7 @@ data:
 special.type: charm <3>
 ----
 <1> Name of the ConfigMap.
-<2> The project in which the ConfigMap resides. ConfigMaps can only be referenced by Pods in the same project.
+<2> The project in which the ConfigMap resides. ConfigMaps can only be referenced by pods in the same project.
 <3> Environment variables to inject.

 .ConfigMap with one environment variable

@@ -41,9 +41,9 @@ data:

 .Procedure

-* You can consume the keys of this ConfigMap in a Pod using `configMapKeyRef` sections.
+* You can consume the keys of this ConfigMap in a pod using `configMapKeyRef` sections.
 +
-.Sample Pod specification configured to inject specific environment variables
+.Sample `Pod` specification configured to inject specific environment variables
 [source,yaml]
 ----
 apiVersion: v1
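The sample `Pod` specification above is cut off by the hunk boundary. A hedged sketch of what a `configMapKeyRef` environment variable reference looks like, reusing the `special.type` key shown earlier; the pod name, image, and ConfigMap name are illustrative:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-example        # illustrative name
spec:
  containers:
  - name: test-container
    image: quay.io/example/busybox   # illustrative image
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: SPECIAL_TYPE_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config       # the ConfigMap to read from
          key: special.type          # key shown in the hunk above
  restartPolicy: Never
----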
@@ -23,7 +23,7 @@ data:

 * To inject values into a command in a container, you must consume the keys you want to use as environment variables, as in the consuming ConfigMaps in environment variables use case. Then you can refer to them in a container's command using the `$(VAR_NAME)` syntax.
 +
-.Sample Pod specification configured to inject specific environment variables
+.Sample `Pod` specification configured to inject specific environment variables
 [source,yaml]
 ----
 apiVersion: v1
@@ -39,7 +39,7 @@ $ oc adm must-gather

 Show usage statistics of resources on the server.

-.Example: Show CPU and memory usage for Pods
+.Example: Show CPU and memory usage for pods
 [source,terminal]
 ----
 $ oc adm top pods

@@ -16,7 +16,7 @@ subcommand used.
 $ oc adm migrate storage
 ----

-.Example: Perform an update of only Pods
+.Example: Perform an update of only pods
 [source,terminal]
 ----
 $ oc adm migrate storage --include=pods

@@ -36,7 +36,7 @@ $ oc apply -f pod.json

 Autoscale a DeploymentConfig or ReplicationController.

-.Example: Autoscale to a minimum of two and maximum of five Pods
+.Example: Autoscale to a minimum of two and maximum of five pods
 [source,terminal]
 ----
 $ oc autoscale deploymentconfig/parksmap-katacoda --min=2 --max=5

@@ -62,7 +62,7 @@ Delete a resource.
 $ oc delete pod/parksmap-katacoda-1-qfqz4
 ----

-.Example: Delete all Pods with the `app=parksmap-katacoda` label
+.Example: Delete all pods with the `app=parksmap-katacoda` label
 [source,terminal]
 ----
 $ oc delete pods -l app=parksmap-katacoda

@@ -78,7 +78,7 @@ Return detailed information about a specific object.
 $ oc describe deployment/example
 ----

-.Example: Describe all Pods
+.Example: Describe all pods
 [source,terminal]
 ----
 $ oc describe pods

@@ -126,7 +126,7 @@ $ oc expose service/parksmap-katacoda --hostname=www.my-host.com

 Display one or more resources.

-.Example: List Pods in the `default` namespace
+.Example: List pods in the `default` namespace
 [source,terminal]
 ----
 $ oc get pods -n default

@@ -153,7 +153,7 @@ $ oc label pod/python-1-mz2rf status=unhealthy
 Set the desired number of replicas for a ReplicationController or a
 DeploymentConfig.

-.Example: Scale the `ruby-app` DeploymentConfig to three Pods
+.Example: Scale the `ruby-app` DeploymentConfig to three pods
 [source,terminal]
 ----
 $ oc scale deploymentconfig/ruby-app --replicas=3

@@ -9,7 +9,7 @@

 Display documentation for a certain resource.

-.Example: Display documentation for Pods
+.Example: Display documentation for pods
 [source,terminal]
 ----
 $ oc explain pods
@@ -9,7 +9,7 @@

 Attach the shell to a running container.

-.Example: Get output from the `python` container from Pod `python-1-mz2rf`
+.Example: Get output from the `python` container from pod `python-1-mz2rf`
 [source,terminal]
 ----
 $ oc attach python-1-mz2rf -c python

@@ -19,7 +19,7 @@ $ oc attach python-1-mz2rf -c python

 Copy files and directories to and from containers.

-.Example: Copy a file from the `python-1-mz2rf` Pod to the local file system
+.Example: Copy a file from the `python-1-mz2rf` pod to the local file system
 [source,terminal]
 ----
 $ oc cp default/python-1-mz2rf:/opt/app-root/src/README.md ~/mydirectory/.

@@ -39,7 +39,7 @@ $ oc debug deploymentconfig/python

 Execute a command in a container.

-.Example: Execute the `ls` command in the `python` container from Pod `python-1-mz2rf`
+.Example: Execute the `ls` command in the `python` container from pod `python-1-mz2rf`
 [source,terminal]
 ----
 $ oc exec python-1-mz2rf -c python ls

@@ -48,7 +48,7 @@ $ oc exec python-1-mz2rf -c python ls
 == logs

 Retrieve the log output for a specific build, BuildConfig, DeploymentConfig, or
-Pod.
+pod.

 .Example: Stream the latest logs from the `python` DeploymentConfig
 [source,terminal]

@@ -58,9 +58,9 @@ $ oc logs -f deploymentconfig/python

 == port-forward

-Forward one or more local ports to a Pod.
+Forward one or more local ports to a pod.

-.Example: Listen on port `8888` locally and forward to port `5000` in the Pod
+.Example: Listen on port `8888` locally and forward to port `5000` in the pod
 [source,terminal]
 ----
 $ oc port-forward python-1-mz2rf 8888:5000

@@ -80,7 +80,7 @@ $ oc proxy --port=8011 --www=./local/www/

 Open a remote shell session to a container.

-.Example: Open a shell session on the first container in the `python-1-mz2rf` Pod
+.Example: Open a shell session on the first container in the `python-1-mz2rf` pod
 [source,terminal]
 ----
 $ oc rsh python-1-mz2rf

@@ -88,10 +88,10 @@ $ oc rsh python-1-mz2rf

 == rsync

-Copy contents of a directory to or from a running Pod container. Only changed
+Copy contents of a directory to or from a running pod container. Only changed
 files are copied using the `rsync` command from your operating system.

-.Example: Synchronize files from a local directory with a Pod directory
+.Example: Synchronize files from a local directory with a pod directory
 [source,terminal]
 ----
 $ oc rsync ~/mydirectory/ python-1-mz2rf:/opt/app-root/src/

@@ -99,9 +99,9 @@ $ oc rsync ~/mydirectory/ python-1-mz2rf:/opt/app-root/src/

 == run

-Create a Pod running a particular image.
+Create a pod running a particular image.

-.Example: Start a Pod running the `perl` image
+.Example: Start a pod running the `perl` image
 [source,terminal]
 ----
 $ oc run my-test --image=perl

@@ -116,7 +116,7 @@ Wait for a specific condition on one or more resources.
 This command is experimental and might change without notice.
 ====

-.Example: Wait for the `python-1-mz2rf` Pod to be deleted
+.Example: Wait for the `python-1-mz2rf` pod to be deleted
 [source,terminal]
 ----
 $ oc wait --for=delete pod/python-1-mz2rf

@@ -59,7 +59,7 @@ Usage:
 * Use the `oc explain` command to view the description and fields for a
 particular resource:
 +
-.Example: View documentation for the Pod resource
+.Example: View documentation for the `Pod` resource
 [source,terminal]
 ----
 $ oc explain pods
@@ -12,7 +12,7 @@ provide infrastructure management that does not rely on objects of a specific
 cloud provider. The ClusterAutoscaler has a cluster scope, and is not associated
 with a particular namespace.

-The ClusterAutoscaler increases the size of the cluster when there are Pods
+The ClusterAutoscaler increases the size of the cluster when there are pods
 that failed to schedule on any of the current nodes due to insufficient
 resources or when another node is necessary to meet deployment needs. The
 ClusterAutoscaler does not increase the cluster resources beyond the limits

@@ -25,30 +25,30 @@ Ensure that the `maxNodesTotal` value in the `ClusterAutoscaler` definition that

 The ClusterAutoscaler decreases the size of the cluster when some nodes are
 consistently not needed for a significant period, such as when it has low
-resource use and all of its important Pods can fit on other nodes.
+resource use and all of its important pods can fit on other nodes.

-If the following types of Pods are present on a node, the ClusterAutoscaler
+If the following types of pods are present on a node, the ClusterAutoscaler
 will not remove the node:

 * Pods with restrictive PodDisruptionBudgets (PDBs).
-* Kube-system Pods that do not run on the node by default.
-* Kube-system Pods that do not have a PDB or have a PDB that is too restrictive.
+* Kube-system pods that do not run on the node by default.
+* Kube-system pods that do not have a PDB or have a PDB that is too restrictive.
 * Pods that are not backed by a controller object such as a Deployment,
 ReplicaSet, or StatefulSet.
 * Pods with local storage.
 * Pods that cannot be moved elsewhere because of a lack of resources,
 incompatible node selectors or affinity, matching anti-affinity, and so on.
 * Unless they also have a `"cluster-autoscaler.kubernetes.io/safe-to-evict": "true"`
-annotation, Pods that have a `"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"`
+annotation, pods that have a `"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"`
 annotation.

 If you configure the ClusterAutoscaler, additional usage restrictions apply:

 * Do not modify the nodes that are in autoscaled node groups directly. All nodes
 within the same node group have the same capacity and labels and run the same
-system Pods.
-* Specify requests for your Pods.
-* If you have to prevent Pods from being deleted too quickly, configure
+system pods.
+* Specify requests for your pods.
+* If you have to prevent pods from being deleted too quickly, configure
 appropriate PDBs.
 * Confirm that your cloud provider quota is large enough to support the
 maximum node pools that you configure.

@@ -62,23 +62,23 @@ number of replicas based on the current CPU load.
 If the load increases, the HPA creates new replicas, regardless of the amount
 of resources available to the cluster.
 If there are not enough resources, the ClusterAutoscaler adds resources so that
-the HPA-created Pods can run.
+the HPA-created pods can run.
 If the load decreases, the HPA stops some replicas. If this action causes some
 nodes to be underutilized or completely empty, the ClusterAutoscaler deletes
 the unnecessary nodes.

-The ClusterAutoscaler takes Pod priorities into account. The Pod Priority and
-Preemption feature enables scheduling Pods based on priorities if the cluster
+The ClusterAutoscaler takes pod priorities into account. The Pod Priority and
+Preemption feature enables scheduling pods based on priorities if the cluster
 does not have enough resources, but the ClusterAutoscaler ensures that the
-cluster has resources to run all Pods. To honor the intention of both features,
+cluster has resources to run all pods. To honor the intention of both features,
 the ClusterAutoscaler inclues a priority cutoff function. You can use this cutoff to
-schedule "best-effort" Pods, which do not cause the ClusterAutoscaler to
+schedule "best-effort" pods, which do not cause the ClusterAutoscaler to
 increase resources but instead run only when spare resources are available.

 Pods with priority lower than the cutoff value do not cause the cluster to scale
 up or prevent the cluster from scaling down. No new nodes are added to run the
-Pods, and nodes running these Pods might be deleted to free resources.
+pods, and nodes running these pods might be deleted to free resources.

 ////
 Default priority cutoff is 0. It can be changed using `--expendable-pods-priority-cutoff` flag,
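For context on the `maxNodesTotal` value mentioned in the hunk above, a hedged sketch of a `ClusterAutoscaler` definition; the limit values are illustrative, not taken from this commit:

[source,yaml]
----
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    maxNodesTotal: 24        # illustrative upper bound on total nodes
  scaleDown:
    enabled: true            # allow the autoscaler to remove underused nodes
----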
@@ -9,7 +9,7 @@
 == Purpose

 The DNS Operator deploys and manages CoreDNS to provide a name resolution
-service to Pods that enables DNS-based Kubernetes Service discovery in
+service to pods that enables DNS-based Kubernetes Service discovery in
 {product-title}.

 The Operator creates a working default deployment based on the cluster's configuration.
@@ -5,7 +5,7 @@
 [id="cluster-logging-collector-legacy-fluentd_{context}"]
 = Forwarding logs using the legacy Fluentd method

-You can use the Fluentd *forward* protocol to send logs to destinations outside of your {product-title} cluster instead of the default Elasticsearch log store by creating a configuration file and ConfigMap. You are responsible for configuring the external log aggregator to receive the logs from {product-title}.
+You can use the Fluentd *forward* protocol to send logs to destinations outside of your {product-title} cluster instead of the default Elasticsearch log store by creating a configuration file and ConfigMap. You are responsible for configuring the external log aggregator to receive the logs from {product-title}.

 [IMPORTANT]
 ====

@@ -16,7 +16,7 @@ ifdef::openshift-origin[]
 The *forward* protocols are provided with the Fluentd image as of v1.4.0.
 endif::openshift-origin[]

-To send logs using the Fluentd *forward* protocol, create a configuration file called `secure-forward.conf`, that points to an external log aggregator. Then, use that file to create a ConfigMap called called `secure-forward` in the `openshift-logging` namespace, which {product-title} uses when forwarding the logs.
+To send logs using the Fluentd *forward* protocol, create a configuration file called `secure-forward.conf`, that points to an external log aggregator. Then, use that file to create a ConfigMap called called `secure-forward` in the `openshift-logging` namespace, which {product-title} uses when forwarding the logs.

 .Sample Fluentd configuration file

@@ -25,7 +25,7 @@ To send logs using the Fluentd *forward* protocol, create a configuration file c
 <store>
 @type forward
 <security>
-self_hostname fluentd.example.com
+self_hostname fluentd.example.com
 shared_key "fluent-receiver"
 </security>
 transport tls

@@ -68,7 +68,7 @@ To configure {product-title} to forward logs using the legacy Fluentd method:
 tls_verify_hostname <value> <4>
 tls_cert_path <path_to_file> <5>
 <buffer> <6>
-@type file
+@type file
 path '/var/lib/fluentd/secureforwardlegacy'
 queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '1024' }"
 chunk_limit_size "#{ENV['BUFFER_SIZE_LIMIT'] || '1m' }"

@@ -100,7 +100,7 @@ To configure {product-title} to forward logs using the legacy Fluentd method:
 <8> Specify the host name or IP of the server.
 <9> Specify the host label of the server.
 <10> Specify the port of the server.
-<11> Optionally, add additional servers.
+<11> Optionally, add additional servers.
 If you specify two or more servers, *forward* uses these server nodes in a round-robin order.
 +
 To use Mutual TLS (mTLS) authentication, see the link:https://docs.fluentd.org/output/forward#tips-and-tricks[Fluentd documentation] for information about client certificate, key parameters, and other settings.

@@ -112,8 +112,8 @@ To use Mutual TLS (mTLS) authentication, see the link:https://docs.fluentd.org/o
 $ oc create configmap secure-forward --from-file=secure-forward.conf -n openshift-logging
 ----

-The Cluster Logging Operator redeploys the Fluentd Pods. If the Pods do not redeploy, you can delete the Fluentd
-Pods to force them to redeploy.
+The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd
+pods to force them to redeploy.

 [source,terminal]
 ----
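The force-redeploy command truncated at the end of this hunk appears in full in a later hunk of the same commit; as shown there, it is:

[source,terminal]
----
$ oc delete pod --selector logging-infra=fluentd
----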
@@ -110,8 +110,8 @@ rfc 3164 <6>
 $ oc create configmap syslog --from-file=syslog.conf -n openshift-logging
 ----

-The Cluster Logging Operator redeploys the Fluentd Pods. If the Pods do not redeploy, you can delete the Fluentd
-Pods to force them to redeploy.
+The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd
+pods to force them to redeploy.

 [source,terminal]
 ----

@@ -9,7 +9,7 @@ You can optionally forward logs to an external Elasticsearch v5.x or v6.x instan

 To configure log forwarding to an external Elasticsearch instance, create a `ClusterLogForwarder` Custom Resource (CR) with an output to that instance and a pipeline that uses the output. The external Elasticsearch output can use the HTTP (insecure) or HTTPS (secure HTTP) connection.

-To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the `default` output to forward logs to the internal instance. You do not need to create a `default` output. If you do configure a `default` output, you receive an error message because the `default` output is reserved for the Cluster Logging Operator.
+To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the `default` output to forward logs to the internal instance. You do not need to create a `default` output. If you do configure a `default` output, you receive an error message because the `default` output is reserved for the Cluster Logging Operator.

 [NOTE]
 ====

@@ -65,7 +65,7 @@ spec:
 <8> Specify which log types should be forwarded using that pipeline: `application,` `infrastructure`, or `audit`.
 <9> Specify the output to use with that pipeline for forwarding the logs.
 <10> Optional: Specify the `default` output to send the logs to the internal Elasticsearch instance.
-<11> Optional: One or more labels to add to the logs.
+<11> Optional: One or more labels to add to the logs.
 <12> Optional: Configure multiple outputs to forward logs to other external log aggregtors of any supported type:
 ** Optional. A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward using that pipeline: `application,` `infrastructure`, or `audit`.

@@ -79,11 +79,10 @@ spec:
 $ oc create -f <file-name>.yaml
 ----

-The Cluster Logging Operator redeploys the Fluentd Pods. If the Pods do not redeploy, you can delete the Fluentd
-Pods to force them to redeploy.
+The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd
+pods to force them to redeploy.

 [source,terminal]
 ----
 $ oc delete pod --selector logging-infra=fluentd
 ----
@@ -11,7 +11,7 @@ To configure log forwarding using the *forward* protocol, create a `ClusterLogFo

 [NOTE]
 ====
-Alternately, you can use a ConfigMap to forward logs using the *forward* protocols. However, this method is deprecated in {product-title} and will be removed in a future release.
+Alternately, you can use a ConfigMap to forward logs using the *forward* protocols. However, this method is deprecated in {product-title} and will be removed in a future release.
 ====

 .Procedure

@@ -77,8 +77,8 @@ spec:
 $ oc create -f <file-name>.yaml
 ----

-The Cluster Logging Operator redeploys the Fluentd Pods. If the Pods do not redeploy, you can delete the Fluentd
-Pods to force them to redeploy.
+The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd
+pods to force them to redeploy.

 [source,terminal]
 ----

@@ -5,7 +5,7 @@
 [id="cluster-logging-collector-log-forward-kafka_{context}"]
 = Forwarding logs to a Kafka broker

-You can forward logs to an external Kafka broker in addition to, or instead of, the default Elasticsearch log store.
+You can forward logs to an external Kafka broker in addition to, or instead of, the default Elasticsearch log store.

 To configure log forwarding to an external Kafka instance, create a `ClusterLogForwarder` Custom Resource (CR) with an output to that instance and a pipeline that uses the output. You can include a specific Kafka topic in the output or use the default. The Kafka output can use a TCP (insecure) or TLS (secure TCP) connection.

@@ -40,9 +40,9 @@ spec:
 inputRefs: <8>
 - application
 outputRefs: <9>
-- app-logs
+- app-logs
 labels:
-logType: application <10>
+logType: application <10>
 - name: infra-topic <11>
 inputRefs:
 - infrastructure

@@ -83,8 +83,8 @@ spec:
 $ oc create -f <file-name>.yaml
 ----

-The Cluster Logging Operator redeploys the Fluentd Pods. If the Pods do not redeploy, you can delete the Fluentd
-Pods to force them to redeploy.
+The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd
+pods to force them to redeploy.

 [source,terminal]
 ----

@@ -92,8 +92,8 @@ spec:
 $ oc create -f <file-name>.yaml
 ----

-The Cluster Logging Operator redeploys the Fluentd Pods. If the Pods do not redeploy, you can delete the Fluentd
-Pods to force them to redeploy.
+The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd
+pods to force them to redeploy.

 [source,terminal]
 ----
@@ -3,16 +3,16 @@
 // * logging/cluster-logging-collector.adoc

 [id="cluster-logging-collector-tolerations_{context}"]
-= Using tolerations to control the log collector Pod placement
+= Using tolerations to control the log collector pod placement

-You can ensure which nodes the logging collector Pods run on and prevent
-other workloads from using those nodes by using tolerations on the Pods.
+You can ensure which nodes the logging collector pods run on and prevent
+other workloads from using those nodes by using tolerations on the pods.

-You apply tolerations to logging collector Pods through the Cluster Logging Custom Resource (CR)
+You apply tolerations to logging collector pods through the Cluster Logging Custom Resource (CR)
 and apply taints to a node through the node specification. You can use taints and tolerations
-to ensure the Pod does not get evicted for things like memory and CPU issues.
+to ensure the pod does not get evicted for things like memory and CPU issues.

-By default, the logging collector Pods have the following toleration:
+By default, the logging collector pods have the following toleration:

 [source,yaml]
 ----

@@ -28,7 +28,7 @@ tolerations:

 .Procedure

-. Use the following command to add a taint to a node where you want logging collector Pods to schedule logging collector Pods:
+. Use the following command to add a taint to a node where you want logging collector pods to schedule logging collector pods:
 +
 [source,terminal]
 ----

@@ -43,10 +43,10 @@ $ oc adm taint nodes node1 collector=node:NoExecute
 ----
 +
 This example places a taint on `node1` that has key `collector`, value `node`, and taint effect `NoExecute`.
-You must use the `NoExecute` taint effect. `NoExecute` schedules only Pods that match the taint and removes existing Pods
+You must use the `NoExecute` taint effect. `NoExecute` schedules only pods that match the taint and removes existing pods
 that do not match.

-. Edit the `collection` section of the Cluster Logging Custom Resource (CR) to configure a toleration for the logging collector Pods:
+. Edit the `collection` section of the Cluster Logging Custom Resource (CR) to configure a toleration for the logging collector pods:
 +
 [source,yaml]
 ----

@@ -54,16 +54,15 @@ that do not match.
 logs:
 type: "fluentd"
 rsyslog:
-tolerations:
+tolerations:
 - key: "collector" <1>
 operator: "Exists" <2>
 effect: "NoExecute" <3>
 tolerationSeconds: 6000 <4>
 ----
 <1> Specify the key that you added to the node.
-<2> Specify the `Exists` operator to require the `key`/`value`/`effect` parameters to match.
+<2> Specify the `Exists` operator to require the `key`/`value`/`effect` parameters to match.
 <3> Specify the `NoExecute` effect.
-<4> Optionally, specify the `tolerationSeconds` parameter to set how long a Pod can remain bound to a node before being evicted.
-
-This toleration matches the taint created by the `oc adm taint` command. A Pod with this toleration would be able to schedule onto `node1`.
+<4> Optionally, specify the `tolerationSeconds` parameter to set how long a pod can remain bound to a node before being evicted.
+
+This toleration matches the taint created by the `oc adm taint` command. A pod with this toleration would be able to schedule onto `node1`.
@@ -15,9 +15,9 @@ Fluentd collects log data in a single blob called a _chunk_. When Fluentd create

 By default in {product-title}, Fluentd uses the _exponential backoff_ method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the _periodic_ retry method instead, which retries flushing the chunks at a specified interval. By default, Fluentd retries chunk flushing indefinitely. In {product-title}, you cannot change the indefinite retry behavior.

-These parameters can help you determine the trade-offs between latency and throughput.
+These parameters can help you determine the trade-offs between latency and throughput.

-* To optimize Fluentd for throughput, you could use these parameters to reduce network packet count by configuring larger buffers and queues, delaying flushes, and setting longer times between retries. Be aware that larger buffers require more space on the node file system.
+* To optimize Fluentd for throughput, you could use these parameters to reduce network packet count by configuring larger buffers and queues, delaying flushes, and setting longer times between retries. Be aware that larger buffers require more space on the node file system.

 * To optimize for low latency, you could use the parameters to send data as soon as possible, avoid the build-up of batches, have shorter queues and buffers, and use more frequent flush and retries.

@@ -52,8 +52,8 @@ These parameters are:

 |`flushMode`
 a| The method to perform flushes:

-* `lazy`: Flush chunks based on the `timekey` parameter. You cannot modify the `timekey` parameter.
+* `lazy`: Flush chunks based on the `timekey` parameter. You cannot modify the `timekey` parameter.
 * `interval`: Flush chunks based on the `flushInterval` parameter.
 * `immediate`: Flush chunks immediately after data is added to a chunk.

 |`interval`

@@ -70,7 +70,7 @@ a|The chunking behavior when the queue is full:
 * `drop_oldest_chunk`: Drop the oldest chunk to accept new incoming chunks. Older chunks have less value than newer chunks.
 |`block`

-|`queuedChunkLimitSize`
+|`queuedChunkLimitSize`
 |The number of chunks in the queue.
 |`32`

@@ -109,13 +109,13 @@ $ oc edit ClusterLogging instance
 ----
 apiVersion: logging.openshift.io/v1
 kind: ClusterLogging
-metadata:
+metadata:
 name: instance
 namespace: openshift-logging
-spec:
-forwarder:
-fluentd:
-buffer:
+spec:
+forwarder:
+fluentd:
+buffer:
 chunkLimitSize: 8m <1>
 flushInterval: 5s <2>
 flushMode: interval <3>

@@ -137,7 +137,7 @@ spec:
 <8> Specify the time in seconds before the next chunk flush.
 <9> Specify the maximum size of the chunk buffer.

-. Verify that the Fluentd Pods are redeployed:
+. Verify that the Fluentd pods are redeployed:
 +
 [source,terminal]
 ----
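The verification command is cut off by the hunk boundary. A hedged sketch, reusing the `logging-infra=fluentd` selector that appears elsewhere in this commit:

[source,terminal]
----
$ oc get pods --selector logging-infra=fluentd -n openshift-logging
----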
@@ -415,9 +415,9 @@ $ oc create -f clo-instance.yaml
 +
 This creates the Cluster Logging components, the Elasticsearch Custom Resource and components, and the Kibana interface.

-. Verify the install by listing the Pods in the *openshift-logging* project.
+. Verify the install by listing the pods in the *openshift-logging* project.
 +
-You should see several Pods for Cluster Logging, Elasticsearch, Fluentd, and Kibana similar to the following list:
+You should see several pods for Cluster Logging, Elasticsearch, Fluentd, and Kibana similar to the following list:
 +
 [source,terminal]
 ----

@@ -87,7 +87,7 @@ If the Operator does not appear as installed, to troubleshoot further:
 +
 * Switch to the *Operators* → *Installed Operators* page and inspect
 the *Status* column for any errors or failures.
-* Switch to the *Workloads* → *Pods* page and check the logs in any Pods in the
+* Switch to the *Workloads* → *Pods* page and check the logs in any pods in the
 `openshift-logging` project that are reporting issues.

 . Create a cluster logging instance:

@@ -243,7 +243,7 @@ The number of primary shards for the index templates is equal to the number of E

 .. Select the *openshift-logging* project.
 +
-You should see several Pods for cluster logging, Elasticsearch, Fluentd, and Kibana similar to the following list:
+You should see several pods for cluster logging, Elasticsearch, Fluentd, and Kibana similar to the following list:
 +
 * cluster-logging-operator-cb795f8dc-xkckc
 * elasticsearch-cdm-b3nqzchd-1-5c6797-67kfz
@@ -28,7 +28,7 @@ Baseline (256 characters per minute -> 15KB/min)

 [cols="3,4",options="header"]
 |===
-|Logging Pods
+|Logging pods
 |Storage Throughput

 |3 es
@@ -3,17 +3,17 @@
 // * logging/cluster-logging-elasticsearch.adoc

 [id="cluster-logging-elasticsearch-tolerations_{context}"]
-= Using tolerations to control the log store Pod placement
+= Using tolerations to control the log store pod placement

-You can control which nodes the log store Pods runs on and prevent
-other workloads from using those nodes by using tolerations on the Pods.
+You can control which nodes the log store pods runs on and prevent
+other workloads from using those nodes by using tolerations on the pods.

-You apply tolerations to the log store Pods through the Cluster Logging Custom Resource (CR)
-and apply taints to a node through the node specification. A taint on a node is a `key:value pair` that
-instructs the node to repel all Pods that do not tolerate the taint. Using a specific `key:value` pair
-that is not on other Pods ensures only the log store Pods can run on that node.
+You apply tolerations to the log store pods through the Cluster Logging Custom Resource (CR)
+and apply taints to a node through the node specification. A taint on a node is a `key:value pair` that
+instructs the node to repel all pods that do not tolerate the taint. Using a specific `key:value` pair
+that is not on other pods ensures only the log store pods can run on that node.

-By default, the log store Pods have the following toleration:
+By default, the log store pods have the following toleration:

 [source,yaml]
 ----

@@ -29,7 +29,7 @@ tolerations:

 .Procedure

-. Use the following command to add a taint to a node where you want to schedule the cluster logging Pods:
+. Use the following command to add a taint to a node where you want to schedule the cluster logging pods:
 +
 [source,terminal]
 ----

@@ -44,10 +44,10 @@ $ oc adm taint nodes node1 elasticsearch=node:NoExecute
 ----
 +
 This example places a taint on `node1` that has key `elasticsearch`, value `node`, and taint effect `NoExecute`.
-Nodes with the `NoExecute` effect schedule only Pods that match the taint and remove existing Pods
+Nodes with the `NoExecute` effect schedule only pods that match the taint and remove existing pods
 that do not match.

-. Edit the `logstore` section of the Cluster Logging Custom Resource (CR) to configure a toleration for the Elasticsearch Pods:
+. Edit the `logstore` section of the Cluster Logging Custom Resource (CR) to configure a toleration for the Elasticsearch pods:
 +
 [source,yaml]
 ----

@@ -55,16 +55,15 @@ that do not match.
 type: "elasticsearch"
 elasticsearch:
 nodeCount: 1
-tolerations:
+tolerations:
 - key: "elasticsearch" <1>
 operator: "Exists" <2>
 effect: "NoExecute" <3>
 tolerationSeconds: 6000 <4>
 ----
 <1> Specify the key that you added to the node.
-<2> Specify the `Exists` operator to require a taint with the key `elasticsearch` to be present on the Node.
+<2> Specify the `Exists` operator to require a taint with the key `elasticsearch` to be present on the Node.
 <3> Specify the `NoExecute` effect.
-<4> Optionally, specify the `tolerationSeconds` parameter to set how long a Pod can remain bound to a node before being evicted.
-
-This toleration matches the taint created by the `oc adm taint` command. A Pod with this toleration could be scheduled onto `node1`.
+<4> Optionally, specify the `tolerationSeconds` parameter to set how long a pod can remain bound to a node before being evicted.
+
+This toleration matches the taint created by the `oc adm taint` command. A pod with this toleration could be scheduled onto `node1`.
@@ -7,7 +7,7 @@

 Use the following steps to deploy the Event Router into your cluster. You should always deploy the Event Router to the `openshift-logging` project to ensure it collects events from across the cluster.

-The following Template object creates the service account, cluster role, and cluster role binding required for the Event Router. The template also configures and deploys the Event Router Pod. You can use this template without making changes, or change the Deployment object CPU and memory requests.
+The following Template object creates the service account, cluster role, and cluster role binding required for the Event Router. The template also configures and deploys the Event Router pod. You can use this template without making changes, or change the Deployment object CPU and memory requests.

 .Prerequisites

@@ -17,7 +17,7 @@ The following Template object creates the service account, cluster role, and clu

 .Procedure

-. Create a template for the Event Router:
+. Create a template for the Event Router:
 +
 [source,yaml]
 ----

@@ -104,7 +104,7 @@ objects:
 configMap:
 name: eventrouter
 parameters:
-- name: IMAGE
+- name: IMAGE
 displayName: Image
 value: "registry.redhat.io/openshift4/ose-logging-eventrouter:latest"
 - name: CPU <6>

@@ -113,7 +113,7 @@ parameters:
 - name: MEMORY <7>
 displayName: Memory
 value: "128Mi"
-- name: NAMESPACE
+- name: NAMESPACE
 displayName: Namespace
 value: "openshift-logging" <8>
 ----

@@ -121,9 +121,9 @@ parameters:
 <2> Creates a ClusterRole to monitor for events in the cluster.
 <3> Creates a ClusterRoleBinding to bind the ClusterRole to the ServiceAccount.
 <4> Creates a ConfigMap in the `openshift-logging` project to generate the required `config.json` file.
-<5> Creates a Deployment in the `openshift-logging` project to generate and configure the Event Router Pod.
-<6> Specifies the minimum amount of memory to allocate to the Event Router Pod. Defaults to `128Mi`.
-<7> Specifies the minimum amount of CPU to allocate to the Event Router Pod. Defaults to `100m`.
+<5> Creates a Deployment in the `openshift-logging` project to generate and configure the Event Router pod.
+<6> Specifies the minimum amount of memory to allocate to the Event Router pod. Defaults to `128Mi`.
+<7> Specifies the minimum amount of CPU to allocate to the Event Router pod. Defaults to `100m`.
 <8> Specifies the `openshift-logging` project to install objects in.

 . Use the following command to process and apply the template:
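The process-and-apply command itself falls outside this hunk. A hedged sketch of the usual pattern, with the template file name as a placeholder:

[source,terminal]
----
$ oc process -f <template_file> | oc apply -n openshift-logging -f -
----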
@@ -169,14 +169,14 @@ pod/cluster-logging-eventrouter-d649f97c8-qvv8r
 +
 [source,terminal]
 ----
-$ oc logs <cluster_logging_eventrouter_pod> -n openshift-logging
+$ oc logs <cluster_logging_eventrouter_pod> -n openshift-logging
 ----
 +
 For example:
 +
 [source,terminal]
 ----
-$ oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging
+$ oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging
 ----
 +
 .Example output

@@ -186,4 +186,3 @@ $ oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging
 ----
 +
 You can also use Kibana to view events by creating an index pattern using the Elasticsearch `infra` index.
@@ -3,15 +3,15 @@
// * logging/cluster-logging-visualizer.adoc

[id="cluster-logging-kibana-tolerations_{context}"]
= Using tolerations to control the log visualizer Pod placement
= Using tolerations to control the log visualizer pod placement

You can control the node where the log visualizer Pod runs and prevent
other workloads from using those nodes by using tolerations on the Pods.
You can control the node where the log visualizer pod runs and prevent
other workloads from using those nodes by using tolerations on the pods.

You apply tolerations to the log visualizer Pod through the Cluster Logging Custom Resource (CR)
and apply taints to a node through the node specification. A taint on a node is a `key:value pair` that
instructs the node to repel all Pods that do not tolerate the taint. Using a specific `key:value` pair
that is not on other Pods ensures only the Kibana Pod can run on that node.
You apply tolerations to the log visualizer pod through the Cluster Logging Custom Resource (CR)
and apply taints to a node through the node specification. A taint on a node is a `key:value pair` that
instructs the node to repel all pods that do not tolerate the taint. Using a specific `key:value` pair
that is not on other pods ensures only the Kibana pod can run on that node.

.Prerequisites

@@ -19,7 +19,7 @@ that is not on other Pods ensures only the Kibana Pod can run on that node.

.Procedure

. Use the following command to add a taint to a node where you want to schedule the log visualizer Pod:
. Use the following command to add a taint to a node where you want to schedule the log visualizer pod:
+
[source,terminal]
----
@@ -34,27 +34,26 @@ $ oc adm taint nodes node1 kibana=node:NoExecute
----
+
This example places a taint on `node1` that has key `kibana`, value `node`, and taint effect `NoExecute`.
You must use the `NoExecute` taint effect. `NoExecute` schedules only Pods that match the taint and removes existing Pods
You must use the `NoExecute` taint effect. `NoExecute` schedules only pods that match the taint and removes existing pods
that do not match.

. Edit the `visualization` section of the Cluster Logging Custom Resource (CR) to configure a toleration for the Kibana Pod:
. Edit the `visualization` section of the Cluster Logging Custom Resource (CR) to configure a toleration for the Kibana pod:
+
[source,yaml]
----
  visualization:
    type: "kibana"
    kibana:
      tolerations:
      - key: "kibana" <1>
        operator: "Exists" <2>
        effect: "NoExecute" <3>
        tolerationSeconds: 6000 <4>
----
<1> Specify the key that you added to the node.
<2> Specify the `Exists` operator to require the `key`/`value`/`effect` parameters to match.
<3> Specify the `NoExecute` effect.
<4> Optionally, specify the `tolerationSeconds` parameter to set how long a Pod can remain bound to a node before being evicted.
<4> Optionally, specify the `tolerationSeconds` parameter to set how long a pod can remain bound to a node before being evicted.

This toleration matches the taint created by the `oc adm taint` command. A Pod with this toleration would be able to schedule onto `node1`.

This toleration matches the taint created by the `oc adm taint` command. A pod with this toleration would be able to schedule onto `node1`.

@@ -49,8 +49,8 @@ green open .kibana_-1595131456_user1 g
|
||||
----
|
||||
|
||||
|
||||
Log store Pods::
|
||||
You can view the status of the Pods that host the log store.
|
||||
Log store pods::
|
||||
You can view the status of the pods that host the log store.
|
||||
|
||||
. Get the name of a pod:
|
||||
+
|
||||
|
||||
@@ -10,7 +10,7 @@ You should not have to manually adjust these values as the Elasticsearch
|
||||
Operator sets values sufficient for your environment.
|
||||
|
||||
Each Elasticsearch node can operate with a lower memory setting though this is *not* recommended for production deployments.
|
||||
For production use, you should have no less than the default 16Gi allocated to each Pod. Preferably you should allocate as much as possible, up to 64Gi per Pod.
|
||||
For production use, you should have no less than the default 16Gi allocated to each pod. Preferably you should allocate as much as possible, up to 64Gi per pod.
|
||||
|
||||
.Prerequisites
|
||||
|
||||
|
||||
@@ -14,7 +14,7 @@ When you update:
|
||||
+
|
||||
Kibana is unusable when the Elasticsearch Operator has been updated but the Cluster Logging Operator has not been updated.
|
||||
+
|
||||
If you update the Cluster Logging Operator before the Elasticsearch Operator, Kibana does not update and the Kibana Custom Resource (CR) is not created. To work around this problem, delete the Cluster Logging Operator Pod. When the Cluster Logging Operator Pod redeploys, the Kibana CR is created.
|
||||
If you update the Cluster Logging Operator before the Elasticsearch Operator, Kibana does not update and the Kibana Custom Resource (CR) is not created. To work around this problem, delete the Cluster Logging Operator pod. When the Cluster Logging Operator pod redeploys, the Kibana CR is created.
|
||||
|
||||
[IMPORTANT]
|
||||
====
|
||||
@@ -27,7 +27,7 @@ If your cluster logging version is prior to 4.5, you must upgrade cluster loggin
|
||||
|
||||
* Make sure the cluster logging status is healthy:
|
||||
+
|
||||
** All Pods are `ready`.
|
||||
** All pods are `ready`.
|
||||
** The Elasticsearch cluster is healthy.
|
||||
|
||||
* Back up your Elasticsearch and Kibana data.
|
||||
@@ -86,7 +86,7 @@ Wait for the *Status* field to report *Succeeded*.
|
||||
|
||||
. Check the logging components:
|
||||
|
||||
.. Ensure that all Elasticsearch Pods are in the *Ready* status:
|
||||
.. Ensure that all Elasticsearch pods are in the *Ready* status:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
@@ -208,7 +208,7 @@ You should see a `fluentd-init` container:
|
||||
$ oc get kibana kibana -o json
|
||||
----
|
||||
+
|
||||
You should see a Kibana Pod with the `ready` status:
|
||||
You should see a Kibana pod with the `ready` status:
|
||||
+
|
||||
[source,json]
|
||||
----
|
||||
|
||||
@@ -12,7 +12,7 @@ pie charts, heat maps, built-in geospatial support, and other visualizations.
|
||||
|
||||
* To list the *infra* and *audit* indices in Kibana, a user must have the `cluster-admin` role, the `cluster-reader` role, or both roles. The default `kubeadmin` user has proper permissions to list these indices.
|
||||
+
|
||||
If you can view the Pods and logs in the `default` project, you should be able to access these indices. You can use the following command to check if the current user has proper permissions:
If you can view the pods and logs in the `default` project, you should be able to access these indices. You can use the following command to check if the current user has proper permissions:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
|
||||
@@ -9,7 +9,7 @@ Use this procedure to check which Tuned profiles are applied on every node.
|
||||
|
||||
.Procedure
|
||||
|
||||
. Check which Tuned Pods are running on each node:
|
||||
. Check which Tuned pods are running on each node:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
|
||||
@@ -150,5 +150,5 @@ If the Operator does not appear as installed, to troubleshoot further:
|
||||
* Go to the *Operators* -> *Installed Operators* page and inspect
|
||||
the *Operator Subscriptions* and *Install Plans* tabs for any failure or errors
|
||||
under *Status*.
|
||||
* Go to the *Workloads* -> *Pods* page and check the logs for Pods in the
|
||||
* Go to the *Workloads* -> *Pods* page and check the logs for pods in the
|
||||
`performance-addon-operator` project.
|
||||
|
||||
@@ -6,7 +6,7 @@
|
||||
= Configuring Cluster Loader
|
||||
|
||||
The tool creates multiple namespaces (projects), which contain multiple
|
||||
templates or Pods.
|
||||
templates or pods.
|
||||
|
||||
== Example Cluster Loader configuration file
|
||||
|
||||
@@ -68,7 +68,7 @@ ClusterLoader:
|
||||
----
|
||||
<1> Optional setting for end-to-end tests. Set to `local` to avoid extra log messages.
|
||||
<2> The tuning sets allow rate limiting and stepping, the ability to create several
|
||||
batches of Pods while pausing in between sets. Cluster Loader monitors
|
||||
batches of pods while pausing in between sets. Cluster Loader monitors
|
||||
completion of the previous step before continuing.
|
||||
<3> Stepping will pause for `M` seconds after each `N` objects are created.
|
||||
<4> Rate limiting will wait `M` milliseconds between the creation of objects.
|
||||
@@ -137,7 +137,7 @@ path to a file from which you create the ConfigMap.
|
||||
a file from which you create the secret.
|
||||
|
||||
|`pods`
|
||||
|A sub-object with one or many definition(s) of Pods to deploy.
|
||||
|A sub-object with one or many definition(s) of pods to deploy.
|
||||
|
||||
|`templates`
|
||||
|A sub-object with one or many definition(s) of templates to deploy.
|
||||
@@ -148,7 +148,7 @@ a file from which you create the secret.
|
||||
|Field |Description
|
||||
|
||||
|`num`
|
||||
|An integer. The number of Pods or templates to deploy.
|
||||
|An integer. The number of pods or templates to deploy.
|
||||
|
||||
|`image`
|
||||
|A string. The docker image URL to a repository where it can be pulled.
|
||||
@@ -173,7 +173,7 @@ override in the pod or template.
|
||||
defining a tuning in a project.
|
||||
|
||||
|`pods`
|
||||
|A sub-object identifying the `tuningsets` that will apply to Pods.
|
||||
|A sub-object identifying the `tuningsets` that will apply to pods.
|
||||
|
||||
|`templates`
|
||||
|A sub-object identifying the `tuningsets` that will apply to templates.
|
||||
@@ -221,18 +221,18 @@ whether to start an HTTP server for pod synchronization. The integer `port`
|
||||
defines the HTTP server port to listen on (`9090` by default).
|
||||
|
||||
|`running`
|
||||
|A boolean. Wait for Pods with labels matching `selectors` to go into `Running`
|
||||
|A boolean. Wait for pods with labels matching `selectors` to go into `Running`
|
||||
state.
|
||||
|
||||
|`succeeded`
|
||||
|A boolean. Wait for Pods with labels matching `selectors` to go into `Completed`
|
||||
|A boolean. Wait for pods with labels matching `selectors` to go into `Completed`
|
||||
state.
|
||||
|
||||
|`selectors`
|
||||
|A list of selectors to match Pods in `Running` or `Completed` states.
|
||||
|A list of selectors to match pods in `Running` or `Completed` states.
|
||||
|
||||
|`timeout`
|
||||
|A string. The synchronization timeout period to wait for Pods in `Running` or
|
||||
|A string. The synchronization timeout period to wait for pods in `Running` or
|
||||
`Completed` states. For values that are not `0`, use units:
|
||||
[ns\|us\|ms\|s\|m\|h].
|
||||
|===
|
||||
|
||||
@@ -5,12 +5,12 @@
[id="configuring-scale-bounds-knative_{context}"]
= Configuring scale bounds Knative Serving autoscaling

The `minScale` and `maxScale` annotations can be used to configure the minimum and maximum number of Pods that can serve applications.
The `minScale` and `maxScale` annotations can be used to configure the minimum and maximum number of pods that can serve applications.
These annotations can be used to prevent cold starts or to help control computing costs.

minScale:: If the `minScale` annotation is not set, Pods will scale to zero (or to 1 if enable-scale-to-zero is false per the `ConfigMap`).
minScale:: If the `minScale` annotation is not set, pods will scale to zero (or to 1 if enable-scale-to-zero is false per the `ConfigMap`).

maxScale:: If the `maxScale` annotation is not set, there will be no upper limit for the number of Pods created.
maxScale:: If the `maxScale` annotation is not set, there will be no upper limit for the number of pods created.

`minScale` and `maxScale` can be configured as follows in the revision template:

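For illustration, a minimal sketch of how these annotations might appear in a Knative `Service` revision template; the service name and the values are assumed placeholders:

[source,yaml]
----
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "2"
        autoscaling.knative.dev/maxScale: "10"
----
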
@@ -3,7 +3,7 @@
|
||||
// * support/troubleshooting/investigating-pod-issues.adoc
|
||||
|
||||
[id="copying-files-pods-and-containers_{context}"]
|
||||
= Copying files to and from Pods and containers
|
||||
= Copying files to and from pods and containers
|
||||
|
||||
You can copy files to and from a Pod to test configuration changes or gather diagnostic information.
|
||||
|
||||
|
||||
@@ -26,7 +26,7 @@ This provides a list of the available machine configuration objects you can
|
||||
select. By default, the two kubelet-related configs are `01-master-kubelet` and
|
||||
`01-worker-kubelet`.
|
||||
|
||||
. To check the current value of max Pods per node, run:
|
||||
. To check the current value of max pods per node, run:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
@@ -54,7 +54,7 @@ Allocatable:
  pods: 250
----

. To set the max Pods per node on the worker nodes, create a custom resource file
. To set the max pods per node on the worker nodes, create a custom resource file
that contains the kubelet configuration. For example, `change-maxPods-cr.yaml`:
+
[source,yaml]
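----
# Illustrative sketch only: an assumed KubeletConfig custom resource that raises
# maxPods for nodes selected by a machine config pool label. The pool label and
# the maxPods value are placeholders, not taken from the original file.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: change-maxpods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: large-pods
  kubeletConfig:
    maxPods: 500
----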
@@ -39,4 +39,4 @@ $ oc apply -f <filename>
|
||||
|
||||
After the Service is created and the application is deployed, Knative creates an immutable Revision for this version of the application.
|
||||
|
||||
Knative also performs network programming to create a Route, Ingress, Service, and load balancer for your application and automatically scales your Pods up and down based on traffic, including inactive Pods.
|
||||
Knative also performs network programming to create a Route, Ingress, Service, and load balancer for your application and automatically scales your pods up and down based on traffic, including inactive pods.
|
||||
|
||||
@@ -11,7 +11,7 @@ If necessary, you can manually refresh the service CA by using the following pro
|
||||
|
||||
[WARNING]
|
||||
====
|
||||
A manually-rotated service CA does not maintain trust with the previous service CA. You might experience a temporary service disruption until the Pods in the cluster are restarted, which ensures that Pods are using service serving certificates issued by the new service CA.
|
||||
A manually-rotated service CA does not maintain trust with the previous service CA. You might experience a temporary service disruption until the pods in the cluster are restarted, which ensures that pods are using service serving certificates issued by the new service CA.
|
||||
====
|
||||
|
||||
.Prerequisites
|
||||
@@ -39,7 +39,7 @@ which will be used to sign the new service certificates.
|
||||
$ oc delete secret/signing-key -n openshift-service-ca
|
||||
----
|
||||
|
||||
. To apply the new certificates to all services, restart all the Pods
|
||||
. To apply the new certificates to all services, restart all the pods
|
||||
in your cluster. This command ensures that all services use the
|
||||
updated certificates.
|
||||
+
|
||||
|
||||
@@ -21,7 +21,7 @@ The service CA certificate, which issues the service certificates, is valid for
|
||||
|
||||
[NOTE]
|
||||
====
|
||||
You can use the following command to manually restart all Pods in the cluster. Be aware that running this command causes a service interruption, because it deletes every running Pod in every namespace. These Pods will automatically restart after they are deleted.
|
||||
You can use the following command to manually restart all pods in the cluster. Be aware that running this command causes a service interruption, because it deletes every running pod in every namespace. These pods will automatically restart after they are deleted.
|
||||
|
||||
[source,terminal]
|
||||
----
|
||||
|
||||
@@ -122,7 +122,7 @@ spec:
|
||||
.. Click *Create* to deploy the logging instance, which creates the Cluster
|
||||
Logging and Elasticsearch Custom Resources.
|
||||
|
||||
. Verify that the Pods for the Cluster Logging instance deployed:
|
||||
. Verify that the pods for the Cluster Logging instance deployed:
|
||||
|
||||
.. Switch to the *Workloads* → *Pods* page.
|
||||
|
||||
|
||||
@@ -9,9 +9,9 @@ Expanding PVCs based on volume types that need file system re-sizing,
such as AWS EBS, is a two-step process.
This process involves expanding volume objects in the cloud provider and
then expanding the file system on the actual node. These steps occur automatically
after the PVC object is edited and might require a Pod restart to take effect.
after the PVC object is edited and might require a pod restart to take effect.

Expanding the file system on the node only happens when a new Pod is started
Expanding the file system on the node only happens when a new pod is started
with the volume.

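For illustration, the first step, requesting a larger size on the PVC, can be done with a patch of this form; the claim name and the size are assumed placeholders:

[source,terminal]
----
$ oc patch pvc <pvc_name> -p '{"spec":{"resources":{"requests":{"storage":"8Gi"}}}}'
----
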
.Prerequisites
@@ -77,13 +77,13 @@ Mounted By: mysql-1-q4nz7 <3>
|
||||
----
|
||||
<1> The current capacity of the PVC.
|
||||
<2> Any relevant conditions are displayed here.
|
||||
<3> The Pod that is currently mounting this volume
|
||||
<3> The pod that is currently mounting this volume
|
||||
|
||||
. If the output of the previous command included a message to restart the Pod, delete the mounting Pod that it specified:
|
||||
. If the output of the previous command included a message to restart the pod, delete the mounting pod that it specified:
|
||||
+
|
||||
----
|
||||
$ oc delete pod mysql-1-q4nz7
|
||||
----
|
||||
|
||||
. Once the Pod is running, the newly requested size is available and the
|
||||
. Once the pod is running, the newly requested size is available and the
|
||||
`FileSystemResizePending` condition is removed from the PVC.
|
||||
|
||||
@@ -18,7 +18,7 @@ between `0` and `256`. When the `weight` is `0`, the service does not participat
but continues to serve existing persistent connections. When the service `weight`
is not `0`, each endpoint has a minimum `weight` of `1`. Because of this, a
service with a lot of endpoints can end up with higher `weight` than desired.
In this case, reduce the number of Pods to get the desired load balance
In this case, reduce the number of pods to get the desired load balance
`weight`.

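For illustration, service weights on a route can be set with a command of this form; the route and service names below follow the `ab-example` services referenced later in this procedure:

[source,terminal]
----
$ oc set route-backends ab-example ab-example-a=99 ab-example-b=1
----
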
////
|
||||
@@ -80,7 +80,7 @@ in load-balancing, but continues to serve existing persistent connections.
|
||||
[NOTE]
|
||||
====
|
||||
Changes to the route just change the portion of traffic to the various services.
|
||||
You might have to scale the DeploymentConfigs to adjust the number of Pods
|
||||
You might have to scale the DeploymentConfigs to adjust the number of pods
|
||||
to handle the anticipated loads.
|
||||
====
|
||||
+
|
||||
@@ -171,7 +171,7 @@ This means 99% of traffic is sent to service `ab-example-a` and 1% to
service `ab-example-b`.
+
This command does not scale the DeploymentConfigs. You might be required to do
so to have enough Pods to handle the request load.
so to have enough pods to handle the request load.

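+
For illustration, the DeploymentConfigs can be scaled with commands of this form; the replica counts are assumed placeholders:
+
[source,terminal]
----
$ oc scale dc/ab-example-a --replicas=3
$ oc scale dc/ab-example-b --replicas=1
----
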
. Run the command with no flags to verify the current configuration:
|
||||
+
|
||||
@@ -266,7 +266,7 @@ $ oc new-app openshift/deployment-example:v2 \
|
||||
SUBTITLE="shard B" COLOR="red"
|
||||
----
|
||||
|
||||
. At this point, both sets of Pods are being served under the route. However,
|
||||
. At this point, both sets of pods are being served under the route. However,
|
||||
because both browsers (by leaving a connection open) and the router (by default,
|
||||
through a cookie) attempt to preserve your connection to a back-end server,
|
||||
you might not see both shards being returned to you.
|
||||
@@ -291,7 +291,7 @@ $ oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0
|
||||
+
|
||||
Refresh your browser to show `v1` and `shard A` (in blue).
|
||||
|
||||
. If you trigger a deployment on either shard, only the Pods in that shard are
|
||||
. If you trigger a deployment on either shard, only the pods in that shard are
|
||||
affected. You can trigger a deployment by changing the `SUBTITLE` environment
|
||||
variable in either DeploymentConfig:
|
||||
+
|
||||
|
||||
@@ -13,7 +13,7 @@ to the new version.
|
||||
Because you control the portion of requests to each version, as testing
|
||||
progresses you can increase the fraction of requests to the new version and
|
||||
ultimately stop using the previous version. As you adjust the request load on
|
||||
each version, the number of Pods in each service might have to be scaled as well
|
||||
each version, the number of pods in each service might have to be scaled as well
|
||||
to provide the expected performance.
|
||||
|
||||
In addition to upgrading software, you can use this feature to experiment with
|
||||
|
||||
@@ -5,18 +5,18 @@
|
||||
[id="deployments-assigning-pods-to-nodes_{context}"]
|
||||
= Assigning pods to specific nodes
|
||||
|
||||
You can use node selectors in conjunction with labeled nodes to control Pod
|
||||
You can use node selectors in conjunction with labeled nodes to control pod
|
||||
placement.
|
||||
|
||||
Cluster administrators can set the default node selector for a project in order
|
||||
to restrict Pod placement to specific nodes. As a developer, you can set a node
|
||||
selector on a Pod configuration to restrict nodes even further.
|
||||
to restrict pod placement to specific nodes. As a developer, you can set a node
|
||||
selector on a `Pod` configuration to restrict nodes even further.
|
||||
|
||||
.Procedure
|
||||
|
||||
. To add a node selector when creating a pod, edit the Pod configuration, and add
|
||||
the `nodeSelector` value. This can be added to a single Pod configuration, or in
|
||||
a Pod template:
|
||||
. To add a node selector when creating a pod, edit the `Pod` configuration, and add
|
||||
the `nodeSelector` value. This can be added to a single `Pod` configuration, or in
|
||||
a `Pod` template:
|
||||
+
|
||||
[source,yaml]
|
||||
----
|
||||
@@ -34,12 +34,12 @@ labels added by a cluster administrator.
+
For example, if a project has the `type=user-node` and `region=east` labels
added to a project by the cluster administrator, and you add the above
`disktype: ssd` label to a Pod, the Pod is only ever scheduled on nodes that
`disktype: ssd` label to a pod, the pod is only ever scheduled on nodes that
have all three labels.
+
[NOTE]
====
Labels can only be set to one value, so setting a node selector of `region=west`
in a Pod configuration that has `region=east` as the administrator-set default,
results in a Pod that will never be scheduled.
in a `Pod` configuration that has `region=east` as the administrator-set default,
results in a pod that will never be scheduled.
====

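+
For illustration, a minimal sketch of a `Pod` configuration that uses the `disktype: ssd` node selector from this example; the pod name and image are assumed placeholders:
+
[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: app
    image: registry.example.com/example/app:latest
----
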
@@ -21,11 +21,11 @@ properties of the link:https://en.wikipedia.org/wiki/CAP_theorem[CAP theorem]
|
||||
that each design has chosen for the rollout process. DeploymentConfigs prefer
|
||||
consistency, whereas Deployments take availability over consistency.
|
||||
|
||||
For DeploymentConfigs, if a node running a deployer Pod goes down, it will
|
||||
For DeploymentConfigs, if a node running a deployer pod goes down, it will
|
||||
not get replaced. The process waits until the node comes back online or is
|
||||
manually deleted. Manually deleting the node also deletes the corresponding Pod.
|
||||
This means that you can not delete the Pod to unstick the rollout, as the
|
||||
kubelet is responsible for deleting the associated Pod.
|
||||
manually deleted. Manually deleting the node also deletes the corresponding pod.
|
||||
This means that you can not delete the pod to unstick the rollout, as the
|
||||
kubelet is responsible for deleting the associated pod.
|
||||
|
||||
However, Deployments rollouts are driven from a controller manager. The
|
||||
controller manager runs in high availability mode on masters and uses leader
|
||||
|
||||
@@ -46,8 +46,8 @@ $ oc tag deployment-example:v2 deployment-example:latest
|
||||
|
||||
. In your browser, refresh the page until you see the `v2` image.
|
||||
|
||||
. When using the CLI, the following command shows how many Pods are on version 1
|
||||
and how many are on version 2. In the web console, the Pods are progressively
|
||||
. When using the CLI, the following command shows how many pods are on version 1
|
||||
and how many are on version 2. In the web console, the pods are progressively
|
||||
added to v2 and removed from v1:
|
||||
+
|
||||
[source,terminal]
|
||||
@@ -56,8 +56,8 @@ $ oc describe dc deployment-example
|
||||
----
|
||||
|
||||
During the deployment process, the new ReplicationController is incrementally
|
||||
scaled up. After the new Pods are marked as `ready` (by passing their readiness
|
||||
scaled up. After the new pods are marked as `ready` (by passing their readiness
|
||||
check), the deployment process continues.
|
||||
|
||||
If the Pods do not become ready, the process aborts, and the DeploymentConfig
|
||||
If the pods do not become ready, the process aborts, and the DeploymentConfig
|
||||
rolls back to its previous version.
|
||||
|
||||
@@ -8,7 +8,7 @@
|
||||
Building on ReplicationControllers, {product-title} adds expanded support for
|
||||
the software development and deployment lifecycle with the concept of
|
||||
_DeploymentConfigs_. In the simplest case, a DeploymentConfig creates a new
|
||||
ReplicationController and lets it start up Pods.
|
||||
ReplicationController and lets it start up pods.
|
||||
|
||||
However, {product-title} deployments from DeploymentConfigs also provide the
|
||||
ability to transition from an existing deployment of an image to a new one and
|
||||
|
||||
@@ -16,7 +16,7 @@ pre:
|
||||
failurePolicy: Abort
|
||||
execNewPod: {} <1>
|
||||
----
|
||||
<1> `execNewPod` is a Pod-based lifecycle hook.
|
||||
<1> `execNewPod` is a pod-based lifecycle hook.
|
||||
|
||||
Every hook has a `failurePolicy`, which defines the action the strategy should
|
||||
take when a hook failure is encountered:
|
||||
@@ -35,13 +35,13 @@ take when a hook failure is encountered:
|
||||
|===
|
||||
|
||||
Hooks have a type-specific field that describes how to execute the hook.
|
||||
Currently, Pod-based hooks are the only supported hook type, specified by the
|
||||
Currently, pod-based hooks are the only supported hook type, specified by the
|
||||
`execNewPod` field.
|
||||
|
||||
[discrete]
|
||||
==== Pod-based lifecycle hook
|
||||
|
||||
Pod-based lifecycle hooks execute hook code in a new Pod derived from the
|
||||
Pod-based lifecycle hooks execute hook code in a new pod derived from the
|
||||
template in a DeploymentConfig.
|
||||
|
||||
The following simplified example DeploymentConfig uses the Rolling strategy.
|
||||
@@ -84,14 +84,14 @@ spec:
|
||||
<3> `env` is an optional set of environment variables for the hook container.
|
||||
<4> `volumes` is an optional set of volume references for the hook container.
|
||||
|
||||
In this example, the `pre` hook will be executed in a new Pod using the
|
||||
In this example, the `pre` hook will be executed in a new pod using the
|
||||
`openshift/origin-ruby-sample` image from the `helloworld` container. The hook
|
||||
Pod has the following properties:
|
||||
pod has the following properties:
|
||||
|
||||
* The hook command is `/usr/bin/command arg1 arg2`.
|
||||
* The hook container has the `CUSTOM_VAR1=custom_value1` environment variable.
|
||||
* The hook failure policy is `Abort`, meaning the deployment process fails if the hook fails.
|
||||
* The hook Pod inherits the `data` volume from the DeploymentConfig Pod.
|
||||
* The hook pod inherits the `data` volume from the DeploymentConfig pod.
|
||||
|
||||
[id="deployments-setting-lifecycle-hooks_{context}"]
|
||||
== Setting lifecycle hooks
|
||||
|
||||
@@ -5,22 +5,22 @@
|
||||
[id="deployments-replicationcontrollers_{context}"]
|
||||
= ReplicationControllers
|
||||
|
||||
A ReplicationController ensures that a specified number of replicas of a Pod are running at
|
||||
all times. If Pods exit or are deleted, the ReplicationController acts to
|
||||
A ReplicationController ensures that a specified number of replicas of a pod are running at
|
||||
all times. If pods exit or are deleted, the ReplicationController acts to
|
||||
instantiate more up to the defined number. Likewise, if there are more running
|
||||
than desired, it deletes as many as necessary to match the defined amount.
|
||||
|
||||
A ReplicationController configuration consists of:
|
||||
|
||||
* The number of replicas desired (which can be adjusted at runtime).
|
||||
* A Pod definition to use when creating a replicated Pod.
|
||||
* A selector for identifying managed Pods.
|
||||
* A `Pod` definition to use when creating a replicated pod.
|
||||
* A selector for identifying managed pods.
|
||||
|
||||
A selector is a set of labels assigned to
|
||||
the Pods that are managed by the ReplicationController. These labels are
|
||||
included in the Pod definition that the ReplicationController instantiates.
|
||||
the pods that are managed by the ReplicationController. These labels are
|
||||
included in the `Pod` definition that the ReplicationController instantiates.
|
||||
The ReplicationController uses the selector to determine how many
|
||||
instances of the Pod are already running in order to adjust as needed.
|
||||
instances of the pod are already running in order to adjust as needed.
|
||||
|
||||
The ReplicationController does not perform auto-scaling based on load or
|
||||
traffic, as it does not track either. Rather, this requires its replica
|
||||
@@ -51,8 +51,8 @@ spec:
|
||||
protocol: TCP
|
||||
restartPolicy: Always
|
||||
----
|
||||
<1> The number of copies of the Pod to run.
|
||||
<2> The label selector of the Pod to run.
|
||||
<3> A template for the Pod the controller creates.
|
||||
<4> Labels on the Pod should include those from the label selector.
|
||||
<1> The number of copies of the pod to run.
|
||||
<2> The label selector of the pod to run.
|
||||
<3> A template for the pod the controller creates.
|
||||
<4> Labels on the pod should include those from the label selector.
|
||||
<5> The maximum name length after expanding any parameters is 63 characters.
|
||||
|
||||
@@ -56,15 +56,15 @@ replica count and the old ReplicationController has been scaled to zero.

[IMPORTANT]
====
When scaling down, the Rolling strategy waits for Pods to become ready so it can
decide whether further scaling would affect availability. If scaled up Pods
When scaling down, the Rolling strategy waits for pods to become ready so it can
decide whether further scaling would affect availability. If scaled up pods
never become ready, the deployment process will eventually time out and result in a
deployment failure.
====

The `maxUnavailable` parameter is the maximum number of Pods that can be
The `maxUnavailable` parameter is the maximum number of pods that can be
unavailable during the update. The `maxSurge` parameter is the maximum number
of Pods that can be scheduled above the original number of Pods. Both parameters
of pods that can be scheduled above the original number of pods. Both parameters
can be set to either a percentage (e.g., `10%`) or an absolute value (e.g.,
`2`). The default value for both is `25%`.

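For illustration, a minimal sketch of how these parameters might be set in the rolling strategy of a DeploymentConfig; the values shown are the defaults:

[source,yaml]
----
strategy:
  type: Rolling
  rollingParams:
    maxSurge: "25%"
    maxUnavailable: "25%"
----
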
@@ -12,8 +12,8 @@ ephemeral storage technology preview. This feature is disabled by default.
|
||||
====
|
||||
|
||||
A deployment is completed by a Pod that consumes resources (memory, CPU, and
|
||||
ephemeral storage) on a node. By default, Pods consume unbounded node resources.
|
||||
However, if a project specifies default container limits, then Pods consume
|
||||
ephemeral storage) on a node. By default, pods consume unbounded node resources.
|
||||
However, if a project specifies default container limits, then pods consume
|
||||
resources up to those limits.
|
||||
|
||||
You can also limit resource use by specifying resource limits as part of the
|
||||
@@ -59,7 +59,7 @@ items is required:
|
||||
the list of resources in the quota.
|
||||
|
||||
- A limit range defined in your project, where the defaults from the `LimitRange`
|
||||
object apply to Pods created during the deployment process.
|
||||
object apply to pods created during the deployment process.
|
||||
--
|
||||
+
|
||||
To set deployment resources, choose one of the above options. Otherwise, deploy
|
||||
|
||||
@@ -19,7 +19,7 @@ process that is responsible for deploying your pods. If it is successful, it
|
||||
returns the logs from a Pod of your application.
|
||||
|
||||
. You can also view logs from older failed deployment processes, if and only if
|
||||
these processes (old ReplicationControllers and their deployer Pods) exist and
|
||||
these processes (old ReplicationControllers and their deployer pods) exist and
|
||||
have not been pruned or deleted manually:
|
||||
+
|
||||
[source,terminal]
|
||||
|
||||
@@ -49,7 +49,7 @@ spec:
|
||||
$ odo service create --from-file etcd.yaml
|
||||
----
|
||||
|
||||
. Verify that the `EtcdCluster` service has started with one Pod instead of the pre-configured three Pods:
|
||||
. Verify that the `EtcdCluster` service has started with one pod instead of the pre-configured three pods:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
|
||||
@@ -8,7 +8,7 @@
|
||||
== Init Containers
|
||||
Init containers are specialized containers that run before the application container starts and configure the necessary environment for the application containers to run. Init containers can have files that application images do not have, for example setup scripts. Init containers always run to completion and the application container does not start if any of the init containers fails.
|
||||
|
||||
The Pod created by {odo-title} executes two Init Containers:
|
||||
The pod created by {odo-title} executes two Init Containers:
|
||||
|
||||
* The `copy-supervisord` Init container.
|
||||
* The `copy-files-to-volume` Init container.
|
||||
@@ -35,14 +35,14 @@ The `copy-supervisord` Init container copies necessary files onto an `emptyDir`
|
||||
The `emptyDir` Volume is mounted at the `/opt/odo` mount point for both the Init container and the application container.
|
||||
|
||||
=== `copy-files-to-volume`
|
||||
The `copy-files-to-volume` Init container copies files that are in `/opt/app-root` in the S2I builder image onto the Persistent Volume. The volume is then mounted at the same location (`/opt/app-root`) in an application container.
|
||||
|
||||
Without the `PersistentVolume` on `/opt/app-root` the data in this directory is lost when `PersistentVolumeClaim` is mounted at the same location.
|
||||
|
||||
The `PVC` is mounted at the `/mnt` mount point inside the Init container.
|
||||
|
||||
== Application container
|
||||
Application container is the main container inside of which the user-source code executes.
|
||||
|
||||
Application container is mounted with two Volumes:
|
||||
|
||||
@@ -54,7 +54,7 @@ Application container is mounted with two Volumes:
|
||||
`SupervisorD` executes and monitors the user-assembled source code. If the user process crashes, `SupervisorD` restarts it.
|
||||
|
||||
== `PersistentVolume` and `PersistentVolumeClaim`
|
||||
`PersistentVolumeClaim` (`PVC`) is a volume type in Kubernetes which provisions a `PersistentVolume`. The life of a `PersistentVolume` is independent of a Pod lifecycle. The data on the `PersistentVolume` persists across Pod restarts.
|
||||
`PersistentVolumeClaim` (`PVC`) is a volume type in Kubernetes which provisions a `PersistentVolume`. The life of a `PersistentVolume` is independent of a pod lifecycle. The data on the `PersistentVolume` persists across pod restarts.
|
||||
|
||||
The `copy-files-to-volume` Init container copies necessary files onto the `PersistentVolume`. The main application container utilizes these files at runtime for execution.
|
||||
|
||||
@@ -71,7 +71,7 @@ The naming convention of the `PersistentVolume` is <component-name>-s2idata.
|
||||
|===
|
||||
|
||||
== `emptyDir` Volume
|
||||
An `emptyDir` Volume is created when a Pod is assigned to a node, and exists as long as that Pod is running on the node. If the container is restarted or moved, the content of the `emptyDir` is removed, and the Init container restores the data to the `emptyDir`. `emptyDir` is initially empty.
An `emptyDir` Volume is created when a pod is assigned to a node, and exists as long as that pod is running on the node. If the container is restarted or moved, the content of the `emptyDir` is removed, and the Init container restores the data to the `emptyDir`. `emptyDir` is initially empty.
|
||||
|
||||
The `copy-supervisord` Init container copies necessary files onto the `emptyDir` volume. These files are then utilized by the main application container at runtime for execution.
|
||||
|
||||
@@ -86,6 +86,6 @@ The `copy-supervisord` Init container copies necessary files onto the `emptyDir`
|
||||
|===
|
||||
|
||||
== Service
|
||||
Service is a Kubernetes concept of abstracting the way of communicating with a set of Pods.
|
||||
Service is a Kubernetes concept of abstracting the way of communicating with a set of pods.
|
||||
|
||||
{odo-title} creates a Service for every application Pod to make it accessible for communication.
|
||||
{odo-title} creates a Service for every application pod to make it accessible for communication.
|
||||
|
||||
@@ -13,7 +13,7 @@ You can use a saved etcd backup to restore back to a previous cluster state. You
|
||||
|
||||
* Access to the cluster as a user with the `cluster-admin` role.
|
||||
* SSH access to master hosts.
|
||||
* A backup directory containing both the etcd snapshot and the resources for the static Pods, which were from the same backup. The file names in the directory must be in the following formats: `snapshot_<datetimestamp>.db` and `static_kuberesources_<datetimestamp>.tar.gz`.
|
||||
* A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: `snapshot_<datetimestamp>.db` and `static_kuberesources_<datetimestamp>.tar.gz`.
|
||||
|
||||
.Procedure
|
||||
|
||||
@@ -30,13 +30,13 @@ If you do not complete this step, you will not be able to access the master host
|
||||
|
||||
. Copy the etcd backup directory to the recovery control plane host.
|
||||
+
|
||||
This procedure assumes that you copied the `backup` directory containing the etcd snapshot and the resources for the static Pods to the `/home/core/` directory of your recovery control plane host.
|
||||
This procedure assumes that you copied the `backup` directory containing the etcd snapshot and the resources for the static pods to the `/home/core/` directory of your recovery control plane host.
|
||||
|
||||
. Stop the static Pods on all other control plane nodes.
|
||||
. Stop the static pods on all other control plane nodes.
|
||||
+
|
||||
[NOTE]
|
||||
====
|
||||
It is not required to manually stop the Pods on the recovery host. The recovery script will stop the Pods on the recovery host.
|
||||
It is not required to manually stop the pods on the recovery host. The recovery script will stop the pods on the recovery host.
|
||||
====
|
||||
|
||||
.. Access a control plane host that is not the recovery host.
|
||||
@@ -48,7 +48,7 @@ It is not required to manually stop the Pods on the recovery host. The recovery
|
||||
[core@ip-10-0-154-194 ~]$ sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp
|
||||
----
|
||||
|
||||
.. Verify that the etcd Pods are stopped.
|
||||
.. Verify that the etcd pods are stopped.
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
@@ -64,7 +64,7 @@ The output of this command should be empty. If it is not empty, wait a few minut
|
||||
[core@ip-10-0-154-194 ~]$ sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp
|
||||
----
|
||||
|
||||
.. Verify that the Kubernetes API server Pods are stopped.
|
||||
.. Verify that the Kubernetes API server pods are stopped.
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
@@ -168,7 +168,7 @@ NAME READY STATUS RESTARTS
|
||||
etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s
|
||||
----
|
||||
+
|
||||
If the status is `Pending`, or the output lists more than one running etcd Pod, wait a few minutes and check again.
|
||||
If the status is `Pending`, or the output lists more than one running etcd pod, wait a few minutes and check again.
|
||||
|
||||
. Force etcd redeployment.
|
||||
+
|
||||
@@ -180,7 +180,7 @@ $ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$(
|
||||
----
|
||||
<1> The `forceRedeploymentReason` value must be unique, which is why a timestamp is appended.
|
||||
+
|
||||
When the etcd cluster Operator performs a redeployment, the existing nodes are started with new Pods similar to the initial bootstrap scale up.
|
||||
When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up.
|
||||
|
||||
. Verify all nodes are updated to the latest revision.
|
||||
+
|
||||
@@ -288,4 +288,4 @@ etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0
|
||||
etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h
|
||||
----
|
||||
|
||||
Note that it might take several minutes after completing this procedure for all services to be restored. For example, authentication by using `oc login` might not immediately work until the OAuth server Pods are restarted.
|
||||
Note that it might take several minutes after completing this procedure for all services to be restored. For example, authentication by using `oc login` might not immediately work until the OAuth server pods are restarted.
|
||||
|
||||
@@ -7,7 +7,7 @@
|
||||
|
||||
Traditionally, volumes that are backed by Container Storage Interface (CSI) drivers can only be used with a PersistentVolume and PersistentVolumeClaim object combination.
|
||||
|
||||
This feature allows you to specify CSI volumes directly in the Pod specification, rather than in a PersistentVolume. Inline volumes are ephemeral and do not persist across Pod restarts.
|
||||
This feature allows you to specify CSI volumes directly in the `Pod` specification, rather than in a PersistentVolume. Inline volumes are ephemeral and do not persist across pod restarts.
|
||||
|
||||
== Support limitations
|
||||
|
||||
|
||||
@@ -3,13 +3,13 @@
|
||||
// * storage/container_storage_interface/ephemeral-storage-csi-inline-pod-scheduling.adoc
|
||||
|
||||
[id="ephemeral-storage-csi-inline-pod_{context}"]
|
||||
= Embedding a CSI inline ephemeral volume in the Pod specification
|
||||
= Embedding a CSI inline ephemeral volume in the `Pod` specification
|
||||
|
||||
You can embed a CSI inline ephemeral volume in the Pod specification in {product-title}. At runtime, nested inline volumes follow the ephemeral lifecycle of their associated Pods so that the CSI driver handles all phases of volume operations as Pods are created and destroyed.
|
||||
You can embed a CSI inline ephemeral volume in the `Pod` specification in {product-title}. At runtime, nested inline volumes follow the ephemeral lifecycle of their associated pods so that the CSI driver handles all phases of volume operations as pods are created and destroyed.
|
||||
|
||||
.Procedure
|
||||
|
||||
. Create the Pod object definition and save it to a file.
|
||||
. Create the `Pod` object definition and save it to a file.
|
||||
|
||||
. Embed the CSI inline ephemeral volume in the file.
|
||||
+
|
||||
@@ -35,7 +35,7 @@ spec:
|
||||
volumeAttributes:
|
||||
foo: bar
|
||||
----
|
||||
<1> The name of the volume that is used by Pods.
|
||||
<1> The name of the volume that is used by pods.
|
||||
|
||||
. Create the object definition file that you saved in the previous step.
|
||||
+
|
||||
|
||||
@@ -17,7 +17,7 @@ The following features are affected by FeatureGates:
|
||||
|True
|
||||
|
||||
|`SupportPodPidsLimit`
|
||||
|Enables support for limiting the number of processes (PIDs) running in a Pod.
|
||||
|Enables support for limiting the number of processes (PIDs) running in a pod.
|
||||
|True
|
||||
|
||||
|`MachineHealthCheck`
|
||||
|
||||
@@ -5,10 +5,10 @@
|
||||
[id="gathering-application-diagnostic-data_{context}"]
|
||||
= Gathering application diagnostic data to investigate application failures
|
||||
|
||||
Application failures can occur within running application Pods. In these situations, you can retrieve diagnostic information with these strategies:
|
||||
Application failures can occur within running application pods. In these situations, you can retrieve diagnostic information with these strategies:
|
||||
|
||||
* Review events relating to the application Pods.
|
||||
* Review the logs from the application Pods, including application-specific log files that are not collected by the {product-title} logging framework.
|
||||
* Review events relating to the application pods.
|
||||
* Review the logs from the application pods, including application-specific log files that are not collected by the {product-title} logging framework.
|
||||
* Test application functionality interactively and run diagnostic tools in an application container.
|
||||
|
||||
.Prerequisites
|
||||
@@ -18,30 +18,30 @@ Application failures can occur within running application Pods. In these situati
|
||||
|
||||
.Procedure
|
||||
|
||||
. List events relating to a specific application Pod. The following example retrieves events for an application Pod named `my-app-1-akdlg`:
|
||||
. List events relating to a specific application pod. The following example retrieves events for an application pod named `my-app-1-akdlg`:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ oc describe pod/my-app-1-akdlg
|
||||
----
|
||||
|
||||
. Review logs from an application Pod:
|
||||
. Review logs from an application pod:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ oc logs -f pod/my-app-1-akdlg
|
||||
----
|
||||
|
||||
. Query specific logs within a running application Pod. Logs that are sent to stdout are collected by the {product-title} logging framework and are included in the output of the preceding command. The following query is only required for logs that are not sent to stdout.
|
||||
. Query specific logs within a running application pod. Logs that are sent to stdout are collected by the {product-title} logging framework and are included in the output of the preceding command. The following query is only required for logs that are not sent to stdout.
|
||||
+
|
||||
.. If an application log can be accessed without root privileges within a Pod, concatenate the log file as follows:
|
||||
.. If an application log can be accessed without root privileges within a pod, concatenate the log file as follows:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ oc exec my-app-1-akdlg -- cat /var/log/my-application.log
|
||||
----
|
||||
+
|
||||
.. If root access is required to view an application log, you can start a debug container with root privileges and then view the log file from within the container. Start the debug container from the project's deployment configuration. Pod users typically run with non-root privileges, but running troubleshooting Pods with temporary root privileges can be useful during issue investigation:
|
||||
.. If root access is required to view an application log, you can start a debug container with root privileges and then view the log file from within the container. Start the debug container from the project's deployment configuration. Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
@@ -50,7 +50,7 @@ $ oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-applicati
|
||||
+
|
||||
[NOTE]
|
||||
====
|
||||
You can access an interactive shell with root access within the debug Pod if you run `oc debug dc/<deployment_configuration> --as-root` without appending `-- <command>`.
|
||||
You can access an interactive shell with root access within the debug pod if you run `oc debug dc/<deployment_configuration> --as-root` without appending `-- <command>`.
|
||||
====
|
||||
|
||||
. Test application functionality interactively and run diagnostic tools, in an application container with an interactive shell.
|
||||
@@ -67,18 +67,18 @@ $ oc exec -it my-app-1-akdlg /bin/bash
|
||||
+
|
||||
[NOTE]
|
||||
====
|
||||
Root privileges are required to run some diagnostic binaries. In these situations you can start a debug Pod with root access, based on a problematic Pod's deployment configuration, by running `oc debug dc/<deployment_configuration> --as-root`. Then, you can run diagnostic binaries as root from within the debug Pod.
|
||||
Root privileges are required to run some diagnostic binaries. In these situations you can start a debug pod with root access, based on a problematic pod's deployment configuration, by running `oc debug dc/<deployment_configuration> --as-root`. Then, you can run diagnostic binaries as root from within the debug pod.
|
||||
====
|
||||
|
||||
. If diagnostic binaries are not available within a container, you can run a host's diagnostic binaries within a container's namespace by using `nsenter`. The following example runs `ip ad` within a container's namespace, using the host's `ip` binary.
|
||||
.. Enter into a debug session on the target node. This step instantiates a debug Pod called `<node_name>-debug`:
|
||||
.. Enter into a debug session on the target node. This step instantiates a debug pod called `<node_name>-debug`:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ oc debug node/my-cluster-node
|
||||
----
|
||||
+
|
||||
.. Set `/host` as the root directory within the debug shell. The debug Pod mounts the host's root file system in `/host` within the Pod. By changing the root directory to `/host`, you can run binaries contained in the host's executable paths:
|
||||
.. Set `/host` as the root directory within the debug shell. The debug pod mounts the host's root file system in `/host` within the pod. By changing the root directory to `/host`, you can run binaries contained in the host's executable paths:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
|
||||
@@ -5,7 +5,7 @@
|
||||
[id="gathering-operator-logs_{context}"]
|
||||
= Gathering Operator logs
|
||||
|
||||
If you experience Operator issues, you can gather detailed diagnostic information from Operator Pod logs.
|
||||
If you experience Operator issues, you can gather detailed diagnostic information from Operator pod logs.
|
||||
|
||||
.Prerequisites
|
||||
|
||||
@@ -16,43 +16,43 @@ If you experience Operator issues, you can gather detailed diagnostic informatio
|
||||
|
||||
.Procedure
|
||||
|
||||
. List the Operator Pods that are running in the Operator's namespace, plus the Pod status, restarts, and age:
|
||||
. List the Operator pods that are running in the Operator's namespace, plus the pod status, restarts, and age:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ oc get pods -n <operator_namespace>
|
||||
----
|
||||
|
||||
. Review logs for an Operator Pod:
|
||||
. Review logs for an Operator pod:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ oc logs pod/<pod_name> -n <operator_namespace>
|
||||
----
|
||||
+
|
||||
If an Operator Pod has multiple containers, the preceding command will produce an error that includes the name of each container. Query logs from an individual container:
|
||||
If an Operator pod has multiple containers, the preceding command will produce an error that includes the name of each container. Query logs from an individual container:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>
|
||||
----
|
||||
|
||||
. If the API is not functional, review Operator Pod and container logs on each master node by using SSH instead. Replace `<master-node>.<cluster_name>.<base_domain>` with appropriate values.
|
||||
.. List Pods on each master node:
|
||||
. If the API is not functional, review Operator pod and container logs on each master node by using SSH instead. Replace `<master-node>.<cluster_name>.<base_domain>` with appropriate values.
|
||||
.. List pods on each master node:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods
|
||||
----
|
||||
+
|
||||
.. For any Operator Pods not showing a `Ready` status, inspect the Pod's status in detail. Replace `<operator_pod_id>` with the Operator Pod's ID listed in the output of the preceding command:
|
||||
.. For any Operator pods not showing a `Ready` status, inspect the pod's status in detail. Replace `<operator_pod_id>` with the Operator pod's ID listed in the output of the preceding command:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>
|
||||
----
|
||||
+
|
||||
.. List containers related to an Operator Pod:
|
||||
.. List containers related to an Operator pod:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
|
||||
@@ -5,7 +5,7 @@
[id="gathering-s2i-diagnostic-data_{context}"]
= Gathering Source-to-Image diagnostic data

The S2I tool runs a build Pod and a deployment Pod in sequence. The deployment Pod is responsible for deploying the application Pods based on the application container image created in the build stage. Watch build, deployment and application Pod status to determine where in the S2I process a failure occurs. Then, focus diagnostic data collection accordingly.
The S2I tool runs a build pod and a deployment pod in sequence. The deployment pod is responsible for deploying the application pods based on the application container image created in the build stage. Watch build, deployment and application pod status to determine where in the S2I process a failure occurs. Then, focus diagnostic data collection accordingly.

.Prerequisites

@@ -15,17 +15,17 @@ The S2I tool runs a build Pod and a deployment Pod in sequence. The deployment P

.Procedure

. Watch the Pod status throughout the S2I process to determine at which stage a failure occurs:
. Watch the pod status throughout the S2I process to determine at which stage a failure occurs:
+
[source,terminal]
----
$ oc get pods -w <1>
----
<1> Use `-w` to monitor Pods for changes until you quit the command using `Ctrl+C`.
<1> Use `-w` to monitor pods for changes until you quit the command using `Ctrl+C`.

. Review a failed Pod's logs for errors.
. Review a failed pod's logs for errors.
+
* *If the build Pod fails*, review the build Pod's logs:
* *If the build pod fails*, review the build pod's logs:
+
[source,terminal]
----
@@ -34,10 +34,10 @@ $ oc logs -f pod/<application_name>-<build_number>-build
+
[NOTE]
====
Alternatively, you can review the build configuration's logs using `oc logs -f bc/<application_name>`. The build configuration's logs include the logs from the build Pod.
Alternatively, you can review the build configuration's logs using `oc logs -f bc/<application_name>`. The build configuration's logs include the logs from the build pod.
====
+
* *If the deployment Pod fails*, review the deployment Pod's logs:
* *If the deployment pod fails*, review the deployment pod's logs:
+
[source,terminal]
----
@@ -46,10 +46,10 @@ $ oc logs -f pod/<application_name>-<build_number>-deploy
+
[NOTE]
====
Alternatively, you can review the deployment configuration's logs using `oc logs -f dc/<application_name>`. This outputs logs from the deployment Pod until the deployment Pod completes successfully. The command outputs logs from the application Pods if you run it after the deployment Pod has completed. After a deployment Pod completes, its logs can still be accessed by running `oc logs -f pod/<application_name>-<build_number>-deploy`.
Alternatively, you can review the deployment configuration's logs using `oc logs -f dc/<application_name>`. This outputs logs from the deployment pod until the deployment pod completes successfully. The command outputs logs from the application pods if you run it after the deployment pod has completed. After a deployment pod completes, its logs can still be accessed by running `oc logs -f pod/<application_name>-<build_number>-deploy`.
====
+
* *If an application Pod fails, or if an application is not behaving as expected within a running application Pod*, review the application Pod's logs:
* *If an application pod fails, or if an application is not behaving as expected within a running application pod*, review the application pod's logs:
+
[source,terminal]
----

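If an application pod keeps restarting, the logs of the previous container instance are often more revealing than the current ones. A minimal sketch, assuming the same placeholder pod name used in this procedure:

[source,terminal]
----
$ oc logs --previous pod/<pod_name>
----
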
@@ -43,7 +43,7 @@ Shutting down the nodes using one of these methods allows pods to terminate grac
+
[NOTE]
====
It is not necessary to drain master nodes of the standard Pods that ship with {product-title} prior to shutdown.
It is not necessary to drain master nodes of the standard pods that ship with {product-title} prior to shutdown.

Cluster administrators are responsible for ensuring a clean restart of their own workloads after the cluster is restarted. If you drained master nodes prior to shutdown because of custom workloads, you must mark the master nodes as schedulable before the cluster will be functional again after restart.
====

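Marking a previously drained master node as schedulable again is typically a single `oc adm uncordon` call. A minimal sketch, assuming a placeholder node name:

[source,terminal]
----
$ oc adm uncordon <master_node_name>
----
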
@@ -70,14 +70,14 @@ of applications that would not allow for overcommitment. That memory can not be
used for other applications. In the example above, the environment would be
roughly 30 percent overcommitted, a common ratio.

The application Pods can access a service either by using environment variables or DNS.
If using environment variables, for each active service the variables are injected by the
kubelet when a Pod is run on a node. A cluster-aware DNS server watches the Kubernetes API
for new services and creates a set of DNS records for each one. If DNS is enabled throughout
your cluster, then all Pods should automatically be able to resolve services by their DNS name.
Service discovery using DNS can be used in case you must go beyond 5000 services. When using
environment variables for service discovery, the argument list exceeds the allowed length after
5000 services in a namespace, then the Pods and deployments will start failing. Disable the service
The application pods can access a service either by using environment variables or DNS.
If using environment variables, for each active service the variables are injected by the
kubelet when a pod is run on a node. A cluster-aware DNS server watches the Kubernetes API
for new services and creates a set of DNS records for each one. If DNS is enabled throughout
your cluster, then all pods should automatically be able to resolve services by their DNS name.
Service discovery using DNS can be used if you must go beyond 5000 services. When using
environment variables for service discovery, the argument list exceeds the allowed length after
5000 services in a namespace, and the pods and deployments start failing. Disable the service
links in the deployment's service specification file to overcome this:

[source,yaml]

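As a minimal sketch of disabling service links on an existing workload, assuming a placeholder deployment name and the standard `enableServiceLinks` field in the pod template:

[source,terminal]
----
$ oc patch deployment/<deployment_name> --type=merge \
  -p '{"spec":{"template":{"spec":{"enableServiceLinks":false}}}}'
----
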
@@ -23,7 +23,7 @@ While planning your environment, determine how many pods are expected to fit per
node:

----
Required Pods per Cluster / Pods per Node = Total Number of Nodes Needed
Required pods per Cluster / pods per Node = Total Number of Nodes Needed
----

The current maximum number of pods per node is 250. However, the number of pods
@@ -49,5 +49,5 @@ If you increase the number of nodes to 20, then the pod distribution changes to
Where:

----
Required Pods per Cluster / Total Number of Nodes = Expected Pods per Node
Required pods per Cluster / Total Number of Nodes = Expected pods per Node
----

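As a worked example with hypothetical numbers, 2200 required pods at the 250 pods-per-node maximum rounds up to nine nodes:

[source,terminal]
----
$ echo $(( (2200 + 250 - 1) / 250 ))
9
----
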
@@ -2,15 +2,15 @@
// * openshift_images/using-image-pull-secrets

[id="images-allow-pods-to-reference-images-across-projects_{context}"]
= Allowing Pods to reference images across projects
= Allowing pods to reference images across projects

When using the internal registry, to allow Pods in `project-a` to reference
When using the internal registry, to allow pods in `project-a` to reference
images in `project-b`, a service account in `project-a` must be bound to the
`system:image-puller` role in `project-b`.

.Procedure

. To allow Pods in `project-a` to reference images in `project-b`, bind a service
. To allow pods in `project-a` to reference images in `project-b`, bind a service
account in `project-a` to the `system:image-puller` role in `project-b`:
+
[source,terminal]

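The bind step itself is a single `oc policy` call. A minimal sketch, assuming the `default` service account in `project-a`:

[source,terminal]
----
$ oc policy add-role-to-user \
    system:image-puller system:serviceaccount:project-a:default \
    --namespace=project-b
----
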
@@ -3,7 +3,7 @@
// * virt/virtual_machines/importing_vms/virt-importing-vmware-vm.adoc

[id="images-allow-pods-to-reference-images-from-secure-registries_{context}"]
= Allowing Pods to reference images from other secured registries
= Allowing pods to reference images from other secured registries

The `.dockercfg` `$HOME/.docker/config.json` file for Docker clients is a
Docker credentials file that stores your authentication information if you have
@@ -46,9 +46,9 @@ $ oc create secret docker-registry <pull_secret_name> \
--docker-email=<email>
----

* To use a secret for pulling images for Pods, you must add the secret to your
* To use a secret for pulling images for pods, you must add the secret to your
service account. The name of the service account in this example should match
the name of the service account the Pod uses. `default` is the default
the name of the service account the pod uses. `default` is the default
service account:
+
[source,terminal]

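A minimal sketch of linking the secret for pulls, assuming the `default` service account and the placeholder secret name used above:

[source,terminal]
----
$ oc secrets link default <pull_secret_name> --for=pull
----
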
@@ -60,7 +60,7 @@ items:
----

It is also possible to override the specification of the dynamically created
Jenkins agent Pod. The following is a modification to the previous example, which
Jenkins agent pod. The following is a modification to the previous example, which
overrides the container memory and specifies an environment variable:

The following example is a BuildConfig that the Jenkins Kubernetes Plug-in,

@@ -6,12 +6,12 @@
= Jenkins permissions

If in the ConfigMap the `<serviceAccount>` element of the Pod Template XML is
the {product-title} Service Account used for the resulting Pod, the service
account credentials are mounted into the Pod. The permissions are associated
the {product-title} Service Account used for the resulting pod, the service
account credentials are mounted into the pod. The permissions are associated
with the service account and control which operations against the
{product-title} master are allowed from the Pod.
{product-title} master are allowed from the pod.

Consider the following scenario with service accounts used for the Pod, which
Consider the following scenario with service accounts used for the pod, which
is launched by the Kubernetes Plug-in that runs in the {product-title} Jenkins
image:

@@ -36,4 +36,4 @@ is the XML configuration for a Pod Template.
account is used.
* Ensure that whatever service account is used has the necessary
permissions, roles, and so on defined within {product-title} to manipulate
whatever projects you choose to manipulate from the within the Pod.
whatever projects you choose to manipulate from within the pod.

@@ -26,4 +26,4 @@ $ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjs
----
<1> Provide the path to the new pull secret file.

This update is rolled out to all nodes, which can take some time depending on the size of your cluster. During this time, nodes are drained and Pods are rescheduled on the remaining nodes.
This update is rolled out to all nodes, which can take some time depending on the size of your cluster. During this time, nodes are drained and pods are rescheduled on the remaining nodes.

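One way to follow that rollout, assuming the nodes are managed by machine config pools, is to watch the pools until they report as updated:

[source,terminal]
----
$ watch oc get machineconfigpool
----
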
@@ -146,7 +146,7 @@ metadata:
....
----

* To move the Kibana Pod, edit the Cluster Logging CR to add a node selector:
* To move the Kibana pod, edit the Cluster Logging CR to add a node selector:
+
[source,yaml]
----

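A minimal sketch of that edit, assuming the default `instance` Cluster Logging custom resource in the `openshift-logging` namespace and that the node selector belongs under the Kibana visualization spec:

[source,terminal]
----
$ oc edit ClusterLogging instance -n openshift-logging
# add a nodeSelector, for example node-role.kubernetes.io/infra: '', to the Kibana visualization spec
----
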
@@ -65,7 +65,7 @@ to infrastructure nodes.
$ oc create -f cluster-monitoring-configmap.yaml
----

. Watch the monitoring Pods move to the new machines:
. Watch the monitoring pods move to the new machines:
+
[source,terminal]
----

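A sketch of that watch, assuming the default `openshift-monitoring` namespace:

[source,terminal]
----
$ watch 'oc get pod -n openshift-monitoring -o wide'
----
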
@@ -31,7 +31,7 @@ The infrastructure node resource requirements depend on the cluster age, nodes,

[IMPORTANT]
====
These sizing recommendations are based on scale tests, which create a large number of objects across the cluster. These tests include reaching some of the cluster maximums. In the case of 250 and 500 node counts on a {product-title} {product-version} cluster, these maximums are 10000 namespaces with 61000 Pods, 10000 deployments, 181000 secrets, 400 ConfigMaps, and so on. Prometheus is a highly memory intensive application; the resource usage depends on various factors including the number of nodes, objects, the Prometheus metrics scraping interval, metrics or time series, and the age of the cluster. The disk size also depends on the retention period. You must take these factors into consideration and size them accordingly.
These sizing recommendations are based on scale tests, which create a large number of objects across the cluster. These tests include reaching some of the cluster maximums. In the case of 250 and 500 node counts on a {product-title} {product-version} cluster, these maximums are 10000 namespaces with 61000 pods, 10000 deployments, 181000 secrets, 400 ConfigMaps, and so on. Prometheus is a highly memory intensive application; the resource usage depends on various factors including the number of nodes, objects, the Prometheus metrics scraping interval, metrics or time series, and the age of the cluster. The disk size also depends on the retention period. You must take these factors into consideration and size them accordingly.

The sizing recommendations are applicable only to the infrastructure components that are installed during the cluster install - Prometheus, Router, and Registry. Logging is a day-two operation and the recommendations do not take it into account.
====

@@ -18,4 +18,3 @@ Red Hat uses all connected cluster information to:
* Make {product-title} more intuitive

The information the Insights Operator sends is available only to Red Hat Support and engineering teams with the same restrictions as accessing data reported in support cases. Red Hat does not share this information with third parties.

@@ -12,5 +12,4 @@ The Insights Operator collects:
* Errors that occurred in the cluster components
* Progress and health information of running updates, and the status of any component upgrades
* Details of the platform that {product-title} is deployed on, such as Amazon Web Services, and the region that the cluster is located in
* Information about infrastructure Pods

* Information about infrastructure pods

@@ -3,9 +3,9 @@
// * support/troubleshooting/investigating-pod-issues.adoc

[id="inspecting-pod-and-container-logs_{context}"]
= Inspecting Pod and container logs
= Inspecting pod and container logs

You can inspect Pod and container logs for warnings and error messages related to explicit Pod failures. Depending on policy and exit code, Pod and container logs remain available after Pods have been terminated.
You can inspect pod and container logs for warnings and error messages related to explicit pod failures. Depending on policy and exit code, pod and container logs remain available after pods have been terminated.

.Prerequisites

@@ -15,31 +15,31 @@ You can inspect Pod and container logs for warnings and error messages related t

.Procedure

. Query logs for a specific Pod:
. Query logs for a specific pod:
+
[source,terminal]
----
$ oc logs <pod_name>
----

. Query logs for a specific container within a Pod:
. Query logs for a specific container within a pod:
+
[source,terminal]
----
$ oc logs <pod_name> -c <container_name>
----
+
Logs retrieved using the preceding `oc logs` commands are composed of messages sent to stdout within Pods or containers.
Logs retrieved using the preceding `oc logs` commands are composed of messages sent to stdout within pods or containers.

. Inspect logs contained in `/var/log/` within a Pod.
.. List log files and subdirectories contained in `/var/log` within a Pod:
. Inspect logs contained in `/var/log/` within a pod.
.. List log files and subdirectories contained in `/var/log` within a pod:
+
[source,terminal]
----
$ oc exec <pod_name> ls -alh /var/log
----
+
.. Query a specific log file contained in `/var/log` within a Pod:
.. Query a specific log file contained in `/var/log` within a pod:
+
[source,terminal]
----

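The query follows the same `oc exec` pattern as the listing. A minimal sketch, assuming a placeholder log file name:

[source,terminal]
----
$ oc exec <pod_name> cat /var/log/<log_file_name>
----
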
@@ -145,8 +145,8 @@ machines for the cluster to use before you finish installing {product-title}.
the cluster uses this value as the number of etcd endpoints in the cluster, the
value must match the number of control plane machines that you deploy.
<6> The cluster name that you specified in your DNS records.
<7> A block of IP addresses from which Pod IP addresses are allocated. This block must
not overlap with existing physical networks. These IP addresses are used for the Pod network. If you need to access the Pods from an external network, you must configure load balancers and routers to manage the traffic.
<7> A block of IP addresses from which pod IP addresses are allocated. This block must
not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic.
<8> The subnet prefix length to assign to each individual node. For example, if
`hostPrefix` is set to `23`, then each node is assigned a `/23` subnet out of
the given `cidr`, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If

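The pod IP arithmetic for a host prefix is easy to verify in a shell. A sketch of the `/23` case from the callout above:

[source,terminal]
----
$ echo $(( 2 ** (32 - 23) - 2 ))
510
----
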
@@ -97,8 +97,8 @@ The command succeeds when the Cluster Version Operator finishes deploying the
The Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.
====

. Confirm that the Kubernetes API server is communicating with the Pods.
.. To view a list of all Pods, use the following command:
. Confirm that the Kubernetes API server is communicating with the pods.
.. To view a list of all pods, use the following command:
+
[source,terminal]
----

@@ -251,7 +251,7 @@ Not all CCO modes are supported for all cloud providers. For more information on
|Object

|`networking.clusterNetwork`
|The IP address pools for Pods. The default is `10.128.0.0/14` with a host prefix of `/23`.
|The IP address pools for pods. The default is `10.128.0.0/14` with a host prefix of `/23`.
|Array of objects

|`networking.clusterNetwork.cidr`

@@ -39,7 +39,7 @@ nodes within the cluster.
====
The API server must be able to resolve the worker nodes by the host names
that are recorded in Kubernetes. If it cannot resolve the node names, proxied
API calls can fail, and you cannot retrieve logs from Pods.
API calls can fail, and you cannot retrieve logs from pods.
====

|Routes

@@ -98,7 +98,7 @@ service-catalog-controller-manager 4.5.4 True False F
storage 4.5.4 True False False 17m
----

.. Run the following command to view your cluster Pods:
.. Run the following command to view your cluster pods:
+
[source,terminal]
----

@@ -9,15 +9,15 @@ link:https://docs.openstack.org/kuryr-kubernetes/latest/[Kuryr] is a container
network interface (CNI) plug-in solution that uses the
link:https://docs.openstack.org/neutron/latest/[Neutron] and
link:https://docs.openstack.org/octavia/latest/[Octavia] {rh-openstack-first} services
to provide networking for Pods and Services.
to provide networking for pods and Services.

Kuryr and {product-title} integration is primarily designed for
{product-title} clusters running on {rh-openstack} VMs. Kuryr improves the
network performance by plugging {product-title} Pods into {rh-openstack} SDN.
In addition, it provides interconnectivity between Pods and
network performance by plugging {product-title} pods into {rh-openstack} SDN.
In addition, it provides interconnectivity between pods and
{rh-openstack} virtual instances.

Kuryr components are installed as Pods in {product-title} using the
Kuryr components are installed as pods in {product-title} using the
`openshift-kuryr` namespace:

* `kuryr-controller` - a single Service instance installed on a `master` node.
@@ -25,7 +25,7 @@ This is modeled in {product-title} as a `Deployment`.
* `kuryr-cni` - a container installing and configuring Kuryr as a CNI driver on
each {product-title} node. This is modeled in {product-title} as a `DaemonSet`.

The Kuryr controller watches the OpenShift API server for Pod, Service, and
The Kuryr controller watches the OpenShift API server for pod, Service, and
namespace create, update, and delete events. It maps the {product-title} API
calls to corresponding objects in Neutron and Octavia. This means that every
network solution that implements the Neutron trunk port functionality can be

@@ -5,7 +5,7 @@
[id="installation-osp-default-kuryr-deployment_{context}"]
= Resource guidelines for installing {product-title} on {rh-openstack} with Kuryr

When using Kuryr SDN, the Pods, Services, namespaces, and network policies are
When using Kuryr SDN, the pods, Services, namespaces, and network policies are
using resources from the {rh-openstack} quota; this increases the minimum
requirements. Kuryr also has some additional requirements on top of what a
default install requires.
@@ -47,9 +47,9 @@ If you are using {rh-openstack-first} version 16 with the Amphora driver rather

Take the following notes into consideration when setting resources:

* The number of ports that are required is larger than the number of Pods. Kuryr
uses ports pools to have pre-created ports ready to be used by Pods and speed up
the Pods' booting time.
* The number of ports that are required is larger than the number of pods. Kuryr
uses port pools to have pre-created ports ready to be used by pods and to speed up
the pods' booting time.

* Each NetworkPolicy is mapped into an {rh-openstack} security group, and
depending on the NetworkPolicy spec, one or more rules are added to the

@@ -61,6 +61,6 @@ sshKey: ssh-ed25519 AAAA...
Both `trunkSupport` and `octaviaSupport` are automatically discovered by the
installer, so there is no need to set them. But if your environment does not
meet both requirements, Kuryr SDN will not properly work. Trunks are needed
to connect the Pods to the {rh-openstack} network and Octavia is required to create the
to connect the pods to the {rh-openstack} network and Octavia is required to create the
OpenShift Services.
====

@@ -6,7 +6,7 @@
= Increasing quota

When using Kuryr SDN, you must increase quotas to satisfy the {rh-openstack-first}
resources used by Pods, Services, namespaces, and network policies.
resources used by pods, Services, namespaces, and network policies.

.Procedure

@@ -44,7 +44,7 @@ and UDP, are not supported.

There are limitations when using Kuryr SDN that depend on your deployment environment.

Because of Octavia's lack of support for the UDP protocol and multiple listeners, if the {rh-openstack} version is earlier than 16, Kuryr forces Pods to use TCP for DNS resolution.
Because of Octavia's lack of support for the UDP protocol and multiple listeners, if the {rh-openstack} version is earlier than 16, Kuryr forces pods to use TCP for DNS resolution.

In Go versions 1.12 and earlier, applications that are compiled with CGO support disabled use UDP only. In this case,
the native Go resolver does not recognize the `use-vc` option in `resolv.conf`, which controls whether TCP is forced for DNS resolution.

@@ -53,7 +53,7 @@ metadata:
----
<1> Delete this line. The cluster will regenerate it with `ovn` as the value.
+
Wait for the Cluster Network Operator to detect the modification and to redeploy the `kuryr-controller` and `kuryr-cni` Pods. This process might take several minutes.
Wait for the Cluster Network Operator to detect the modification and to redeploy the `kuryr-controller` and `kuryr-cni` pods. This process might take several minutes.

. Verify that the `kuryr-config` ConfigMap annotation is present with `ovn` as its value. On a command line, enter:
+

@@ -58,7 +58,7 @@ $ oc get clusterversion
$ oc get clusteroperator
----

. View all running Pods in the cluster:
. View all running pods in the cluster:
+
[source,terminal]
----

@@ -150,7 +150,7 @@ ifdef::baremetal,baremetal-restricted[]
If you are running a three-node cluster, skip the following step to allow the masters to be schedulable.
====
endif::baremetal,baremetal-restricted[]
. Modify the `<installation_directory>/manifests/cluster-scheduler-02-config.yml` Kubernetes manifest file to prevent Pods from being scheduled on the control plane machines:
. Modify the `<installation_directory>/manifests/cluster-scheduler-02-config.yml` Kubernetes manifest file to prevent pods from being scheduled on the control plane machines:
+
--
.. Open the `<installation_directory>/manifests/cluster-scheduler-02-config.yml` file.

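That edit sets the scheduler's `mastersSchedulable` field to `false`. A minimal sketch of making the change non-interactively, assuming the field is currently `true`:

[source,terminal]
----
$ sed -i 's/mastersSchedulable: true/mastersSchedulable: false/' <installation_directory>/manifests/cluster-scheduler-02-config.yml
----
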
@@ -5,7 +5,7 @@
[id="investigating-etcd-installation-issues_{context}"]
= Investigating etcd installation issues

If you experience etcd issues during installation, you can check etcd Pod status and collect etcd Pod logs. You can also verify etcd DNS records and check DNS availability on master nodes.
If you experience etcd issues during installation, you can check etcd pod status and collect etcd pod logs. You can also verify etcd DNS records and check DNS availability on master nodes.

.Prerequisites

@@ -16,59 +16,59 @@ If you experience etcd issues during installation, you can check etcd Pod status

.Procedure

. Check the status of etcd Pods.
.. Review the status of Pods in the `openshift-etcd` namespace:
. Check the status of etcd pods.
.. Review the status of pods in the `openshift-etcd` namespace:
+
[source,terminal]
----
$ oc get pods -n openshift-etcd
----
+
.. Review the status of Pods in the `openshift-etcd-operator` namespace:
.. Review the status of pods in the `openshift-etcd-operator` namespace:
+
[source,terminal]
----
$ oc get pods -n openshift-etcd-operator
----

. If any of the Pods listed by the previous commands are not showing a `Running` or a `Completed` status, gather diagnostic information for the Pod.
.. Review events for the Pod:
. If any of the pods listed by the previous commands are not showing a `Running` or a `Completed` status, gather diagnostic information for the pod.
.. Review events for the pod:
+
[source,terminal]
----
$ oc describe pod/<pod_name> -n <namespace>
----
+
.. Inspect the Pod's logs:
.. Inspect the pod's logs:
+
[source,terminal]
----
$ oc logs pod/<pod_name> -n <namespace>
----
+
.. If the Pod has more than one container, the preceding command will create an error, and the container names will be provided in the error message. Inspect logs for each container:
.. If the pod has more than one container, the preceding command will create an error, and the container names will be provided in the error message. Inspect logs for each container:
+
[source,terminal]
----
$ oc logs pod/<pod_name> -c <container_name> -n <namespace>
----

. If the API is not functional, review etcd Pod and container logs on each master node by using SSH instead. Replace `<master-node>.<cluster_name>.<base_domain>` with appropriate values.
.. List etcd Pods on each master node:
. If the API is not functional, review etcd pod and container logs on each master node by using SSH instead. Replace `<master-node>.<cluster_name>.<base_domain>` with appropriate values.
.. List etcd pods on each master node:
+
[source,terminal]
----
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods --name=etcd-
----
+
.. For any Pods not showing `Ready` status, inspect Pod status in detail. Replace `<pod_id>` with the Pod's ID listed in the output of the preceding command:
.. For any pods not showing `Ready` status, inspect pod status in detail. Replace `<pod_id>` with the pod's ID listed in the output of the preceding command:
+
[source,terminal]
----
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <pod_id>
----
+
.. List containers related to a Pod:
.. List containers related to a pod:
+
// TODO: Once https://bugzilla.redhat.com/show_bug.cgi?id=1858239 has been resolved, replace the `grep` command below:
//[source,terminal]

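Until that Bugzilla is resolved, the listing relies on filtering `crictl ps` output with `grep`. A minimal sketch, assuming the pod ID placeholder from the previous step:

[source,terminal]
----
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps | grep '<pod_id>'
----
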