Mirror of https://github.com/openshift/openshift-docs.git
Commit: no to 'need to'
@@ -46,7 +46,7 @@ $pre-padding: .8em .8em .6em .8em;
 $list-side-margin: emCalc(24px);
 $definition-list-header-margin-bottom: emCalc(5px);
 $definition-list-margin-bottom: emCalc(20px);
-// FIXME need to account for nested definition list
+// FIXME must account for nested definition list
 //$definition-list-content-margin-left: 0;

 // Blockquotes
@@ -41,7 +41,7 @@ above strategies.

 The route-based deployment strategies do not scale the number of Pods in the
 services. To maintain desired performance characteristics the deployment
-configurations may need to be scaled.
+configurations might have to be scaled.

 include::modules/deployments-proxy-shards.adoc[leveloffset=+1]
 include::modules/deployments-n1-compatibility.adoc[leveloffset=+1]
@@ -40,14 +40,14 @@ Deployments and DeploymentConfigs are enabled by the use of native Kubernetes
 API objects ReplicationControllers and ReplicaSets, respectively, as their
 building blocks.

-Users do not need to manipulate ReplicationControllers, ReplicaSets, or Pods
+Users do not have to manipulate ReplicationControllers, ReplicaSets, or Pods
 owned by DeploymentConfigs or Deployments. The deployment system ensures
 changes are propagated appropriately.

 [TIP]
 ====
 If the existing deployment strategies are not suited for your use case and you
-have the need to run manual steps during the lifecycle of your deployment, then
+must run manual steps during the lifecycle of your deployment, then
 you should consider creating a Custom deployment strategy.
 ====
@@ -6,7 +6,7 @@ toc::[]

 [IMPORTANT]
 ====
-Configuring these options will need to change because they're set in the master
+Configuring these options must change because they're set in the master
 config file now.
 ====
build.py (6 changed lines)
@@ -318,7 +318,7 @@ def reformat_for_drupal(info):
     distro = info['distro']

     # Build a mapping of files to ids
-    # Note: For all-in-one we need to collect ids from all books first
+    # Note: For all-in-one we have to collect ids from all books first
     file_to_id_map = {}
     if info['all_in_one']:
         book_ids = []
@@ -941,9 +941,9 @@ def main():

     # Copy the original data and reformat for drupal
     reformat_for_drupal(info)

     if has_errors:
         sys.exit(1)

     if args.push:
         # Parse the repo urls
@@ -28,7 +28,7 @@ with the remote repository.
 ====
 Because most changes in this repository must be committed to the `master`
 branch, the following process always uses `master` as the name of the source
-branch. If you need to use another branch as the source for your change, make
+branch. If you must use another branch as the source for your change, make
 sure that you consistently use that branch name instead of `master`
 ====

@@ -278,7 +278,7 @@ even if you are linking to the same directory that you are writing in. This make
 operations to fix broken links much easier.

 For example, if you are writing in *_architecture/core_concepts/deployments.adoc_* and you want to
-link to *_architecture/core_concepts/routes.adoc_* then you would need to include the path back to the first
+link to *_architecture/core_concepts/routes.adoc_* then you must include the path back to the first
 level of the assembly directory:

 ----
@@ -385,7 +385,7 @@ If you want to link to an image:
 image::<name_of_image>[image]
 ----

-You only need to specify `<name_of_image>`. The build mechanism automatically specifies the file path.
+You only have to specify `<name_of_image>`. The build mechanism automatically specifies the file path.

 == Formatting
@@ -401,7 +401,7 @@ For all of the system blocks including table delimiters, use four characters. Fo
 Code blocks are used to show examples of command screen outputs, or
 configuration files. When using command blocks, always use the actual values for
 any items that a user would normally replace. Code blocks should represent
-exactly what a customer would see on their screen. If you need to expand or
+exactly what a customer would see on their screen. If you must expand or
 provide information on what some of the contents of a screen output or
 configuration file represent, then use callouts to provide that information.

@@ -561,7 +561,7 @@ This will render as such:

 . Item 3

-If you need to add any text, admonitions, or code blocks you need to add the continuous +, as shown in the example:
+If you must add any text, admonitions, or code blocks you have to add the continuous +, as shown in the example:

 ....
 . Item 1

@@ -322,7 +322,7 @@ more on Operator naming.

 Usage: Pod(s) as appropriate

-Kubernetes object that groups related Docker containers that need to share
+Kubernetes object that groups related Docker containers that have to share
 network, filesystem, or memory together for placement on a node. Multiple
 instances of a Pod can run to provide scaling and redundancy.
@@ -26,7 +26,7 @@ $ ssh -T git@github.com

 == Fork and clone the OpenShift documentation repository
 You must fork and set up the OpenShift documentation repository on your
-workstation so that you can create PRs and contribute. These steps only need to
+workstation so that you can create PRs and contribute. These steps must only
 be performed during initial setup.

 1. Fork the https://github.com/openshift/openshift-docs repository into your

@@ -1,9 +0,0 @@
-[id="uninstalling-existing-hosts"]
-= Uninstalling a cluster from existing hosts
-include::modules/common-attributes.adoc[]
-:context: uninstall-existing-hosts
-
-toc::[]
-
-We don't know how this will be managed at release of 4.0, but this topic
-will need to say something.
@@ -102,7 +102,7 @@ data:
   blacklist: ['elasticsearch', 'urllib3']
 ----

-Optionally, you can use the actions file, *_actions.yaml_*, directly. Editing this file allows you to use any action that Curator has available to it to be run periodically. However, this is only recommended for advanced users as modifying the file can be destructive to the cluster and can cause removal of required indices/settings from Elasticsearch. Most users only need to modify the Curator configuration map and never edit the *action file*.
+Optionally, you can use the actions file, *_actions.yaml_*, directly. Editing this file allows you to use any action that Curator has available to it to be run periodically. However, this is only recommended for advanced users as modifying the file can be destructive to the cluster and can cause removal of required indices/settings from Elasticsearch. Most users must only modify the Curator configuration map and never edit the *action file*.

 [NOTE]
 ====

@@ -9,8 +9,8 @@ toc::[]
 The Cluster Logging Operator and Elasticsearch Operator can be in a _Managed_ or _Unmanaged_ state.

 By default, the Cluster Logging and Elasticsearch operators are set to Managed. However, in order to modify the components managed by the Cluster Logging Operator,
-you need to set cluster logging to the _unmanaged_ state. In order to modify the components managed by the Elasticsearch Operator,
-you need to set Elasticsearch to the _unmanaged_ state.
+you must set cluster logging to the _unmanaged_ state. In order to modify the components managed by the Elasticsearch Operator,
+you must set Elasticsearch to the _unmanaged_ state.

 // The following include statements pull in the module files that comprise
 // the assembly. Include any combination of concept, procedure, or reference
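For reference, the management state described above is a single field on the ClusterLogging Custom Resource named `instance`. A minimal sketch follows; the exact `apiVersion` can differ between releases and the value shown is illustrative only:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Unmanaged
----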
@@ -16,7 +16,7 @@ and managing utilities that your apps use.

 {product-title} 4 runs on top of a Kubernetes cluster, with data about the
 objects stored in etcd, a reliable clustered key-value store. The cluster is
-enhanced with standard components that you need to run your cluster, including
+enhanced with standard components that are required to run your cluster, including
 network, ingress, logging, and monitoring, that run as Operators to increase the
 ease and automation of installation, scaling, and maintenance.

@@ -15,7 +15,7 @@ entire build process.
 {product-title} uses Kubernetes by creating containers
 from build images and pushing them to a container image registry.

-Build objects share common characteristics including inputs for a build, the need to
+Build objects share common characteristics including inputs for a build, the requirement to
 complete a build process, logging the build process, publishing resources from
 successful builds, and publishing the final status of the build. Builds take
 advantage of resource restrictions, specifying limitations on resources such as

@@ -98,7 +98,7 @@ Docker build.
 <3> The runtime image is used as the source image for the Docker build.

 The result of this setup is that the output image of the second build does not
-need to contain any of the build tools that are needed to create the WAR file.
+have to contain any of the build tools that are needed to create the WAR file.
 Also, because the second build contains an image change trigger,
 whenever the first build is run and produces a new image with the binary
 artifact, the second build is automatically triggered to produce a runtime image
@@ -7,7 +7,7 @@
 [id="builds-create-custom-build-artifacts-{context}"]
 = Creating custom build artifacts

-You need to create the image you want to use as your custom build image.
+You must create the image you want to use as your custom build image.

 .Procedure

@@ -58,7 +58,7 @@ $ oc create secret generic <secret_name> \
 [IMPORTANT]
 ====
 To avoid having to enter your password again, be sure to specify the S2I image in
-your builds. However, if you cannot clone the repository, you still need to
+your builds. However, if you cannot clone the repository, you still must
 specify your user name and password to promote the build.
 ====

@@ -42,7 +42,7 @@ If you configure the ClusterAutoscaler, additional usage restrictions apply:
 within the same node group have the same capacity and labels and run the same
 system pods.
 * Specify requests for your pods.
-* If you need to prevent pods from being deleted too quickly, configure
+* If you have to prevent pods from being deleted too quickly, configure
 appropriate PDBs.
 * Confirm that your cloud provider quota is large enough to support the
 maximum node pools that you configure.
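For context, the pod disruption budget (PDB) referenced above can be a very small object. A minimal sketch follows; the name, selector, and `minAvailable` value are placeholders:

[source,yaml]
----
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app
----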
@@ -37,7 +37,7 @@ Once bound to a node, a Pod will never be bound to another node. This means that
 |xref:../../architecture/core_concepts/deployments.adoc#replication-controllers[Replication Controller]
 | `Always`.

-|Pods that need to run one-per-machine
+|Pods that must run one-per-machine
 |xref:../../dev_guide/daemonsets.adoc#dev-guide-daemonsets[Daemonset]
 |Any
 |===
@@ -46,8 +46,8 @@ If a container on a Pod fails and the restart policy is set to `OnFailure`, the
 restart, use a restart policy of `Never`.

 //https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#handling-pod-and-container-failures
-If an entire Pod fails, {product-title} starts a new Pod. Developers need to address the possibility that applications might be restarted in a new Pod. In particular,
-applications need to handle temporary files, locks, incomplete output, and so forth caused by previous runs.
+If an entire Pod fails, {product-title} starts a new Pod. Developers must address the possibility that applications might be restarted in a new Pod. In particular,
+applications must handle temporary files, locks, incomplete output, and so forth caused by previous runs.

 For details on how {product-title} uses restart policy with failed containers, see
 the link:https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#example-states[Example States] in the Kubernetes documentation.

@@ -33,6 +33,6 @@ endif::[]
 ifdef::openshift-online[]
 is limited.
 endif::[]
-Once your limit is reached, you might need to delete an existing project in
+After your limit is reached, you might have to delete an existing project in
 order to create a new one.
 ====
@@ -96,7 +96,7 @@ Topics:
 ----

 . On the command line, run `asciibinder` from the root folder of openshift-docs.
-You don't need to add or commit your changes for asciibinder to run.
+You don't have to add or commit your changes for asciibinder to run.

 . After the asciibinder build completes, open up your browser and navigate to
 <YOUR-LOCAL-GIT-REPO-LOCATION>/openshift-docs/_preview/openshift-enterprise/my_first_mod_docs/my_guide/assembly_my-first-assembly.html

@@ -71,7 +71,7 @@ in load-balancing, but continues to serve existing persistent connections.
 [NOTE]
 ====
 Changes to the route just change the portion of traffic to the various services.
-You might need to scale the DeploymentConfigs to adjust the number of Pods
+You might have to scale the DeploymentConfigs to adjust the number of Pods
 to handle the anticipated loads.
 ====
 +
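As a point of reference, the traffic split discussed above is expressed through backend weights on the route. A minimal sketch follows; the route name, service names, and weights are placeholders:

[source,yaml]
----
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: ab-example
spec:
  to:
    kind: Service
    name: example-v1
    weight: 75
  alternateBackends:
  - kind: Service
    name: example-v2
    weight: 25
----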
@@ -13,7 +13,7 @@ to the new version.
 Because you control the portion of requests to each version, as testing
 progresses you can increase the fraction of requests to the new version and
 ultimately stop using the previous version. As you adjust the request load on
-each version, the number of Pods in each service may need to be scaled as well
+each version, the number of Pods in each service might have to be scaled as well
 to provide the expected performance.

 In addition to upgrading software, you can use this feature to experiment with

@@ -11,7 +11,7 @@ the readiness check never succeeds, the canary instance is removed and the
 DeploymentConfig will be automatically rolled back.

 The readiness check is part of the application code and can be as sophisticated
-as necessary to ensure the new instance is ready to be used. If you need to
+as necessary to ensure the new instance is ready to be used. If you must
 implement more complex checks of the application (such as sending real user
 workloads to the new instance), consider implementing a Custom deployment or
 using a blue-green deployment strategy.
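For illustration, a readiness check of the kind described above is typically wired up as a readiness probe on the container. In the sketch below, the container name, image, endpoint, and timings are placeholders:

[source,yaml]
----
containers:
- name: example
  image: example/app:latest
  readinessProbe:
    httpGet:
      path: /healthz/ready
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
----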
@@ -78,6 +78,6 @@ capacity (an in-place update).
 - `maxUnavailable*=10%` and `maxSurge*=10%` scales up and down quickly with
 some potential for capacity loss.

-Generally, if you want fast rollouts, use `maxSurge`. If you need to take into
+Generally, if you want fast rollouts, use `maxSurge`. If you have to take into
 account resource quota and can accept partial unavailability, use
 `maxUnavailable`.
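As a quick reference, both parameters live under the rolling strategy of a DeploymentConfig. The values in this sketch are placeholders:

[source,yaml]
----
spec:
  strategy:
    type: Rolling
    rollingParams:
      maxSurge: 25%
      maxUnavailable: 10%
----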
@@ -5,14 +5,14 @@
 [id="defining-storage-classes-{context}"]
 = Defining a StorageClass

-StorageClass objects are currently a globally scoped object and need to be
+StorageClass objects are currently a globally scoped object and must be
 created by `cluster-admin` or `storage-admin` users.

 [NOTE]
 ====
 For GCE and AWS, a default StorageClass is created during {product-title}
 installation. You can change the default StorageClass or delete it.
 ====

 The following sections describe the basic object definition for a
 StorageClass and specific examples for each of the supported plug-in types.
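To give a sense of the shape of that object definition, a minimal StorageClass might look like the following sketch; the provisioner and parameters shown are placeholders for one possible back end:

[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
----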
@@ -5,10 +5,10 @@
 [id="efk-logging-configuring-crd-{context}"]
 = About the Cluster logging CRD

 The Cluster Logging Operator Custom Resource Definition (CRD) defines a complete cluster logging deployment
 that includes all the components of the logging stack to collect, store and visualize logs.

-You should never need to modify this CRD. To make changes to your deployment, create and modify a specific Custom Resource (CR).
+You should never have to modify this CRD. To make changes to your deployment, create and modify a specific Custom Resource (CR).
 Instructions for creating or modifying a CR are provided in this documentation as appropriate.

 The following is an example of a typical Custom Resource Definition for cluster logging.
@@ -8,14 +8,14 @@
 ////
 An Elasticsearch index is a collection of primary shards and its corresponding replica
 shards. This is how Elasticsearch implements high availability internally, therefore there
-is little need to use hardware based mirroring RAID variants. RAID 0 can still
+is little requirement to use hardware based mirroring RAID variants. RAID 0 can still
 be used to increase overall disk performance.

 //Following paragraph also in nodes/efk-logging-elasticsearch

 Elasticsearch is a memory-intensive application. The default cluster logging installation deploys 16G of memory for both memory requests and CPU limits.
 The initial set of {product-title} nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the
 {product-title} cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower
 memory setting though this is not recommended for production deployments.

 Each Elasticsearch data node requires its own individual storage, but an {product-title} deployment
@@ -89,13 +89,13 @@ absolute storage consumption around 50% and below 70% at all times]. This
 helps to avoid Elasticsearch becoming unresponsive during large merge
 operations.

 By default, at 85% Elasticsearch stops allocating new data to the node, at 90% Elasticsearch attempts to relocate
 existing shards from that node to other nodes if possible. But if no nodes have free capacity below 85%, Elasticsearch effectively rejects creating new indices
 and becomes RED.

 [NOTE]
 ====
 These low and high watermark values are Elasticsearch defaults in the current release. You can modify these values,
-but you will also need to apply any modifications to the alerts also. The alerts are based
+but you also must apply any modifications to the alerts also. The alerts are based
 on these defaults.
 ====
@@ -22,9 +22,9 @@ in {product-title} for installing the operators individually.
 . Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node
 requires its own storage volume.
 +
 Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16G of memory for both memory requests and CPU limits.
 The initial set of {product-title} nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the
 {product-title} cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower
 memory setting though this is not recommended for production deployments.

 . Create a project for cluster logging. You must create the project with the CLI:
@@ -104,11 +104,11 @@ under *Status*.

 .. On the *Custom Resource Definitions* page, click *ClusterLogging*.

 .. On the *Custom Resource Definition Overview* page, select *View Instances* from the *Actions* menu.

 .. On the *Cluster Loggings* page, click *Create Cluster Logging*.
 +
-You might need to refresh the page to load the data.
+You might have to refresh the page to load the data.

 .. In the YAML, replace the code with the following:
 +
@@ -143,7 +143,7 @@ spec:
       fluentd: {}
 ----
 <1> The name of the CR. This must be `instance`.
-<2> The cluster logging management state. In most cases, if you change the default cluster logging defaults, you will need to set this to `Unmanaged`.
+<2> The cluster logging management state. In most cases, if you change the default cluster logging defaults, you must set this to `Unmanaged`.
 However, an unmanaged deployment does not receive updates until the cluster logging is placed back into a managed state. For more information, see *Changing cluster logging management state*.
 <3> Settings for configuring Elasticsearch. Using the CR, you can configure shard replication policy and persistent storage. For more information, see *Configuring Elasticsearch*.
 <4> Settings for configuring Kibana. Using the CR, you can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes. For more information, see *Configuring Kibana*.
@@ -152,9 +152,9 @@ However, an unmanaged deployment does not receive updates until the cluster logg
 +
 [NOTE]
 ====
 The default cluster logging configuration should support a wide array of environments. Review the topics on tuning and
 configuring the cluster logging components for information on modifications you can make to your cluster logging cluster.
 ====
 ====

 .. Click *Create*. This creates the Cluster Logging Custom Resource and Elasticsearch Custom Resource, which you
 can edit to make changes to your cluster logging cluster.
@@ -7,13 +7,13 @@

 An Elasticsearch index is a collection of primary shards and its corresponding replica
 shards. This is how ES implements high availability internally, therefore there
-is little need to use hardware based mirroring RAID variants. RAID 0 can still
+is little requirement to use hardware based mirroring RAID variants. RAID 0 can still
 be used to increase overall disk performance.

 //Following paragraph also in nodes/efk-logging-elasticsearch

 Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16G of memory for both memory requests and CPU limits
 unless you specify otherwise in the Cluster Logging Custom Resource. The initial set of {product-title} nodes might not be large enough
 to support the Elasticsearch cluster. You must add additional nodes to the {product-title} cluster to run with the recommended
 or higher memory. Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments.
@@ -95,14 +95,14 @@ absolute storage consumption around 50% and below 70% at all times]. This
 helps to avoid Elasticsearch becoming unresponsive during large merge
 operations.

 By default, at 85% ES stops allocating new data to the node, at 90% ES starts de-allocating
 existing shards from that node to other nodes if possible. But if no nodes have
 free capacity below 85% then ES will effectively reject creating new indices
 and becomes RED.

 [NOTE]
 ====
 These low and high watermark values are Elasticsearch defaults in the current release. You can modify these values,
-but you will also need to apply any modifications to the alerts also. The alerts are based
+but you must also apply any modifications to the alerts also. The alerts are based
 on these defaults.
 ====
@@ -6,7 +6,7 @@
 = Exposing Elasticsearch as a route

 By default, Elasticsearch deployed with cluster logging is not
 accessible from outside the logging cluster. You can enable a route with re-encryption termination
 for external access to Elasticsearch for those tools that want to access its data.

 Externally, you can access Elasticsearch by creating a reencrypt route, your {product-title} token and the installed
@@ -76,7 +76,7 @@ metadata:
   name: elasticsearch
   namespace: openshift-logging
 spec:
   host:
   to:
     kind: Service
     name: elasticsearch
@@ -84,14 +84,14 @@ spec:
     termination: reencrypt
     destinationCACertificate: | <1>
 ----
-<1> Add the Elasticsearch CA certificate or use the command in the next step. You do not need to set the `spec.tls.key`, `spec.tls.certificate`, and `spec.tls.caCertificate` parameters
+<1> Add the Elasticsearch CA certificate or use the command in the next step. You do not have to set the `spec.tls.key`, `spec.tls.certificate`, and `spec.tls.caCertificate` parameters
 required by some reencrypt routes.

 .. Run the following command to add the Elasticsearch CA certificate to the route YAML you created:
 +
 ----
 cat ./admin-ca | sed -e "s/^/ /" >> <file-name>.yaml
 ----

 .. Run the following command to create the route:
 +
@@ -5,8 +5,8 @@
 [id="efk-logging-elasticsearch-limits-{context}"]
 = Configuring Elasticsearch CPU and memory limits

 Each component specification allows for adjustments to both the CPU and memory limits.
-You should not need to manually adjust these values as the Elasticsearch
+You should not have to manually adjust these values as the Elasticsearch
 Operator sets values sufficient for your environment.

 .Prerequisite
@@ -25,7 +25,7 @@ instance 112m

 .Procedure

 Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:

 [source,yaml]
 ----
@@ -6,13 +6,13 @@
 = Scaling up systemd-journald

 As you scale up your project, the default logging environment might need some
 adjustments.

-For example, if you are missing logs, you might need to increase the rate limits for journald.
+For example, if you are missing logs, you might have to increase the rate limits for journald.

 .Procedure

 . Update to *systemd-219-22.el7.x86_64*.

 . Add the following to the *_/etc/systemd/journald.conf_* file:

@@ -20,7 +20,7 @@ Endpoints: <none>
 ----
 +
 If any Kibana pods are live, endpoints are listed. If they are not, check
-the state of the Kibana pods and deployment. You may need to scale the
+the state of the Kibana pods and deployment. You might have to scale the
 deployment down and back up again.

 * The route for accessing the Kibana service is masked. This can happen if you perform a test deployment in one
@@ -100,7 +100,7 @@ by default *_/etc/ansible/hosts_*.
 [discrete]
 ===== Configuring Apache

-This proxy does not need to reside on the same
+This proxy does not have to reside on the same
 host as the master. It uses a client certificate to connect to the master, which
 is configured to trust the `X-Remote-User` header.

@@ -8,7 +8,7 @@
 The following Custom Resource (CR)s shows the parameters and acceptable values for an
 OpenID Connect identity provider.

-If you need to specify a custom certificate bundle, extra scopes, extra authorization request
+If you must specify a custom certificate bundle, extra scopes, extra authorization request
 parameters, or a `userInfo` URL, use the full OpenID Connect CR.

 .Standard OpenID Connect CR
@@ -78,7 +78,7 @@ spec:
   type: OpenID
   openID:
     clientID: ...
     clientSecret:
       name: idp-secret
     ca: <1>
       name: ca-config-map
@@ -104,8 +104,8 @@ spec:
     userInfo: https://myidp.example.com/oauth2/userinfo <7>

 ----
 <1> Optional: Reference to an {product-title} ConfigMap containing the
 PEM-encoded certificate authority bundle to use in validating server
 certificates for the configured URL.
 <2> Optional list of scopes to request, in addition to the *openid* scope,
 during the authorization token request.
@@ -52,7 +52,7 @@ by grouping them into a single pod.
 This colocation ensures the containers share a network namespace and storage
 for communication. Updates are also less disruptive as each image can be updated
 less frequently and independently. Signal handling flows are also clearer with a
-single process as you do not need to manage routing signals to spawned
+single process as you do not have to manage routing signals to spawned
 processes.

 [discrete]
@@ -147,8 +147,8 @@ RUN yum -y install mypackage && yum clean all -y
 ----

 Then each time you changed *_myfile_* and reran `podman build` or `docker build`, the `ADD`
-operation would invalidate the `RUN` layer cache, so the `yum` operation would
-need to be rerun as well.
+operation would invalidate the `RUN` layer cache, so the `yum` operation must be
+rerun as well.

 [discrete]
 == Mark important ports
@@ -174,7 +174,7 @@ advertising a path on the system that could be used by another process, such as
 It is best to avoid setting default passwords. Many people will extend the image
 and forget to remove or change the default password. This can lead to security
 issues if a user in production is assigned a well-known password. Passwords
 should be configurable using an environment variable instead.

 If you do choose to set a default password, ensure that an appropriate warning
 message is displayed when the container is started. The message should inform
@@ -208,7 +208,7 @@ writing data to ephemeral storage in a container. Designing your image around
 that capability now will make it easier to take advantage of it later.

 Furthermore, explicitly defining volumes in your `Dockerfile` makes it easy
-for consumers of the image to understand what volumes they need to define when
+for consumers of the image to understand what volumes they must define when
 running your image.

 See the

@@ -14,7 +14,7 @@ ifdef::openshift-online[]
 == Privileges and volume builds

 container images cannot be built using the `VOLUME` directive in the `DOCKERFILE`.
-Images using a read/write file system need to use persistent volumes or
+Images using a read/write file system must use persistent volumes or
 `emptyDir` volumes instead of local storage. Instead of specifying a volume in
 the Dockerfile, specify a directory for local storage and mount either a
 persistent volume or `emptyDir` volume to that directory when deploying the pod.
@@ -207,7 +207,7 @@ image, or offer suggestions on other images that may also be needed.

 You must fully understand what it means to run multiple instances of your image.
 In the simplest case, the load balancing function of a service handles routing
-traffic to all instances of your image. However, many frameworks need to share
+traffic to all instances of your image. However, many frameworks must share
 information in order to perform leader election or failover state; for example,
 in session replication.

@@ -221,7 +221,7 @@ important for your clustering scheme to be dynamic.

 It is best to send all logging to standard out. {product-title} collects
 standard out from containers and sends it to the centralized logging service
-where it can be viewed. If you need to separate log content, prefix the output
+where it can be viewed. If you must separate log content, prefix the output
 with an appropriate keyword, which makes it possible to filter the messages.

 If your image logs to a file, users must use manual operations to enter the

@@ -35,7 +35,7 @@ variables with the information needed to proceed with the build:
 |Variable Name |Description

 |`BUILD`
-|The entire serialized JSON of the `Build` object definition. If you need to
+|The entire serialized JSON of the `Build` object definition. If you must
 use a specific API version for serialization, you can set the
 `buildAPIVersion` parameter in the custom strategy
 specification of the build configuration.
@@ -23,5 +23,5 @@ $ oc tag docker.io/python:3.6.0 python:3.6
 Tag python:3.6 set to docker.io/python:3.6.0.
 ----
 +
-If the external image is secured, you will need to create a secret with
+If the external image is secured, you must create a secret with
 credentials for accessing that registry.

@@ -26,7 +26,7 @@ $ ansible-playbook [-i /path/to/inventory] \

 [IMPORTANT]
 ====
-For Amazon Web Services, you might need to override the detected IP Addresses and host names.
+For Amazon Web Services, you might have to override the detected IP Addresses and host names.
 ====

 Now, verify the detected common settings. If they are not what you expect them
@@ -105,7 +105,7 @@ totaling 4 CPU cores and 19 GB of RAM.

 Node hosts::
 The size of a node host depends on the expected size of its workload. As an
-{product-title} cluster administrator, you need to calculate the expected
+{product-title} cluster administrator, you must calculate the expected
 workload and add about 10 percent for overhead. For production environments,
 allocate enough resources so that a node host failure does not affect your
 maximum capacity.

@@ -13,7 +13,7 @@ Web Services (AWS).
 ====

 You can install either a standard cluster or a customized cluster. With a
-standard cluster, you provide minimum details that you need to install the
+standard cluster, you provide minimum details that are required to install the
 cluster. With a customized cluster, you can specify more details about the
 platform, such as the number of machines that the control plane uses, the type
 of virtual machine that the cluster deploys, or the CIDR range for the
@@ -20,7 +20,7 @@ certificates to use during the other phases of this set up. Run the following co
   --write-config=/etc/origin/
 ----
 +
 The output includes the *_/etc/origin/master/ca.crt_* and
 *_/etc/origin/master/ca.key_* signing certificates.
 . Use the signing certificate to generate keys to use on the remote basic
 authentication server:
@@ -36,7 +36,7 @@ authentication server:
   --signer-serial='/etc/origin/master/ca.serial.txt'
 ----
 +
-<1> A comma-separated list of all the host names and interface IP addresses that need to access the
+<1> A comma-separated list of all the host names and interface IP addresses that must access the
 remote basic authentication server.
 +
 [NOTE]
@@ -48,7 +48,7 @@ but for security reasons, do not make them greater than 730.
 +
 [IMPORTANT]
 ====
-If you do not list all host names and interface IP addresses that need to access the
+If you do not list all host names and interface IP addresses that must access the
 remote basic authentication server, the HTTPS connection will fail.
 ====
 . Copy the necessary certificates and key to the remote basic authentication server:
@@ -5,7 +5,7 @@
 [id="sssd-for-ldap-prereqs-{context}"]
 = Prerequisites for configuring basic remote authentication

-* Before starting setup, you need to know the following information about your
+* Before starting setup, you must know the following information about your
 LDAP server:
 ** Whether the directory server is powered by
 http://www.freeipa.org/page/Main_Page[FreeIPA], Active Directory, or another
@@ -16,10 +16,10 @@ LDAP solution.
 ** Whether the LDAP server corresponds to RFC 2307 or RFC2307bis for user groups.
 * Prepare the servers:
 ** `remote-basic.example.com`: A VM to use as the remote basic authentication server.
 *** Select an operating system that includes SSSD version 1.12.0 for this server
 such as Red Hat Enterprise Linux 7.0 or later.
 ifeval::["{context}" == "sssd-ldap-failover-extend"]
 *** Install mod_lookup_identity version 0.9.4 or later. You can obtain this
 package link:https://github.com/adelton/mod_lookup_identity/releases[from
 upstream].
 endif::[]
@@ -15,7 +15,7 @@ providerSpec, which describes the types of compute nodes that are offered for di
 cloud platforms. For example, a `machine` type for a worker node on Amazon Web
 Services (AWS) might define a specific machine type and required metadata.
 `MachineSets`:: Groups of machines. `MachineSets` are to `machines` as
-`ReplicaSets` are to `Pods`. If you need more `machines` or need to scale them down,
+`ReplicaSets` are to `Pods`. If you need more `machines` or must scale them down,
 you change the *replicas* field on the `MachineSet` to meet your compute need.

 The following custom resources add more capabilities to your cluster:
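For orientation, the *replicas* field mentioned above sits near the top of the MachineSet spec. A trimmed-down sketch follows; the MachineSet name and replica count are placeholders:

[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: example-worker-us-east-1a
  namespace: openshift-machine-api
spec:
  replicas: 3
----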
@@ -44,6 +44,6 @@ architecture easily because the cluster did not manage machine provisioning.
 Beginning with 4.1 this process is easier. Each `MachineSet` is scoped to a
 single zone, so the installation program sends out `MachineSets` across
 availability zones on your behalf. And then because your compute is dynamic, and
-in the face of a zone failure, you always have a zone for when you need to
+in the face of a zone failure, you always have a zone for when you must
 rebalance your machines. The autoscaler provides best-effort balancing over the
 life of a cluster.

@@ -5,7 +5,7 @@
 [id="machineset-manually-scaling-{context}"]
 = Scaling a MachineSet manually

-If you need to add or remove an instance of a machine in a MachineSet, you can
+If you must add or remove an instance of a machine in a MachineSet, you can
 manually scale the MachineSet.

 .Prerequisites
@@ -260,7 +260,7 @@ The main function for a v0.1.0 operator in `cmd/manager/main.go` sets up the
 link:https://godoc.org/github.com/kubernetes-sigs/controller-runtime/pkg/manager[Manager],
 which registers the custom resources and starts all the controllers.
 +
-There is no need to migrate the SDK functions `sdk.Watch()`,`sdk.Handle()`, and
+There is no requirement to migrate the SDK functions `sdk.Watch()`,`sdk.Handle()`, and
 `sdk.Run()` from the old `main.go` since that logic is now defined in a
 controller.
 +

@@ -5,7 +5,7 @@ You can access Prometheus, Alertmanager, and Grafana web UIs using the `oc` tool

 .Prerequisites

-* Authentication is performed against the {product-title} identity and uses the same credentials or means of authentication as is used elsewhere in {product-title}. You need to use a role that has read access to all namespaces, such as the `cluster-monitoring-view` cluster role.
+* Authentication is performed against the {product-title} identity and uses the same credentials or means of authentication as is used elsewhere in {product-title}. You must use a role that has read access to all namespaces, such as the `cluster-monitoring-view` cluster role.

 .Procedure
@@ -5,7 +5,7 @@
 [id="configuring-a-persistent-volume-claim-{context}"]
 = Configuring a persistent volume claim

-For the Prometheus or Alertmanager to use a persistent volume (PV), you first need to configure a persistent volume claim (PVC).
+For the Prometheus or Alertmanager to use a persistent volume (PV), you first must configure a persistent volume claim (PVC).

 .Prerequisites

@@ -9,7 +9,7 @@ Before application developers can monitor their applications, the human operator

 .Prerequisites

-* You need to log in as a user belonging to a role with administrative privileges for the cluster.
+* You must log in as a user that belongs to a role with administrative privileges for the cluster.

 .Procedure
@@ -54,7 +54,7 @@ data:
     openshift.io/control-plane: "true"*
 ----
 +
-.. If you run `etcd` on separate hosts, you need to specify the nodes using IP addresses:
+.. If you run `etcd` on separate hosts, you must specify the nodes using IP addresses:
 +
 [subs="quotes"]
 ----
@@ -70,7 +70,7 @@ data:
     - "127.0.0.3"*
 ----
 +
-If `etcd` nodes IP addresses change, you need to update this list.
+If `etcd` nodes IP addresses change, you must update this list.

 . Verify that the `etcd` service monitor is now running:
 +
@@ -5,7 +5,7 @@
 [id="creating-cluster-monitoring-configmap-{context}"]
 = Creating cluster monitoring ConfigMap

-To configure the Prometheus Cluster Monitoring stack, you need to create the cluster monitoring Configmap.
+To configure the Prometheus Cluster Monitoring stack, you must create the cluster monitoring Configmap.

 .Prerequisites
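As a rough sketch of the shape of that ConfigMap, the name and namespace below follow the cluster monitoring convention, while the `config.yaml` contents are placeholders:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 24h
----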
@@ -66,7 +66,7 @@ If the `BUFFER_QUEUE_LIMIT` variable has the default set of values:
 * `number_of_outputs = 1`
 * `BUFFER_SIZE_LIMIT = 8Mi`

-The value of `buffer_queue_limit` will be `32`. To change the `buffer_queue_limit`, you need to change the value of `FILE_BUFFER_LIMIT`.
+The value of `buffer_queue_limit` will be `32`. To change the `buffer_queue_limit`, you must change the value of `FILE_BUFFER_LIMIT`.

 In this formula, `number_of_outputs` is `1` if all the logs are sent to a single resource, and it is incremented by `1` for each additional resource. For example, the value of `number_of_outputs` is:
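Spelling the arithmetic out, and assuming a default `FILE_BUFFER_LIMIT` of `256Mi` (an assumption consistent with the result of `32` stated above):

----
buffer_queue_limit = FILE_BUFFER_LIMIT / (number_of_outputs * BUFFER_SIZE_LIMIT)
                   = 256Mi / (1 * 8Mi)
                   = 32
----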
@@ -42,7 +42,7 @@ immediately terminated if it exceeds the limit amount.
 This topic applies only if you enabled the ephemeral storage technology preview.
 This feature is disabled by default. If enabled, the
 {product-title} cluster uses ephemeral storage to store information that does
-not need to persist after the cluster is destroyed. To enable this feature, see
+not have to persist after the cluster is destroyed. To enable this feature, see
 configuring for ephemeral storage.
 ====
@@ -13,7 +13,7 @@ containers allow you to reorganize setup scripts and binding code.
 An Init Container can:

 * Contain and run utilities that are not desirable to include in the app Container image for security reasons.
-* Contain utilities or custom code for setup that is not present in an app image. For example, there is no need to make an image FROM another image just to use a tool like sed, awk, python, or dig during setup.
+* Contain utilities or custom code for setup that is not present in an app image. For example, there is no requirement to make an image FROM another image just to use a tool like sed, awk, python, or dig during setup.
 * Use Linux namespaces so that they have different filesystem views from app containers, such as access to Secrets that application containers are not able to access.

 Each Init Container must complete successfully before the next one is started. So, Init Containers provide an easy way to block or delay the startup of app containers until some set of preconditions are met.
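To make that blocking behavior concrete, a minimal pod sketch with one Init Container that waits for a precondition could look like the following; the pod name, image, service name, and command are placeholders:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  initContainers:
  - name: wait-for-service
    image: busybox
    command: ['sh', '-c', 'until nslookup myservice; do sleep 2; done']
  containers:
  - name: myapp
    image: myapp:latest
----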
@@ -16,7 +16,7 @@ The following general scenarios show how you can use projected volumes.

 *ConfigMap, Secrets, Downward API.*::
 Projected volumes allow you to deploy containers with configuration data that includes passwords.
-An application using these resources could be deploying OpenStack on Kubernetes. The configuration data may need to be assembled differently depending on if the services are going to be used for production or for testing. If a pod is labeled with production or testing, the downward API selector `metadata.labels` can be used to produce the correct OpenStack configs.
+An application using these resources could be deploying OpenStack on Kubernetes. The configuration data might have to be assembled differently depending on if the services are going to be used for production or for testing. If a pod is labeled with production or testing, the downward API selector `metadata.labels` can be used to produce the correct OpenStack configs.

 *ConfigMap + Secrets.*::
 Projected volumes allow you to deploy containers involving configuration data and passwords.
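For a sense of how those sources combine, a projected volume that merges a Secret, a ConfigMap, and Downward API labels into one mount could be sketched as follows; the volume, Secret, and ConfigMap names are placeholders:

[source,yaml]
----
volumes:
- name: all-in-one
  projected:
    sources:
    - secret:
        name: mysecret
    - configMap:
        name: myconfigmap
    - downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels
----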
@@ -3,13 +3,13 @@
 // * nodes/nodes-nodes-managing.adoc

 [id="nodes-nodes-managing-about-{context}"]
 = Modifying Nodes

-To make configuration changes to a cluster, or MachinePool, you need to create a Custom Resource Definition, or KubeletConfig instance. {product-title} uses the Machine Config Controller to watch for changes introduced through the CRD and applies the changes to the cluster.
+To make configuration changes to a cluster, or MachinePool, you must create a Custom Resource Definition, or KubeletConfig instance. {product-title} uses the Machine Config Controller to watch for changes introduced through the CRD and applies the changes to the cluster.

 .Procedure

 . Obtain the label associated with the static CRD, Machine Config Pool, for the type of node you want to configure.
 Perform one of the following steps:

 .. Edit the machineconfigpool master, add the label "custom-kubelet: small-pods"
@@ -28,7 +28,7 @@ metadata:
 ----
 <1> If a label has been added it appears under `labels`.

 .. If the label is not present, add a key/value pair under `labels`.

 . Create a Custom Resource (CR) for your configuration change.
 +
@@ -42,7 +42,7 @@ metadata:
   name: set-max-pods <1>
 spec:
   machineConfigPoolSelector:
     matchLabels:
       custom-kubelet: small-pods <2>
   kubeletConfig: <3>
     maxPods: 100
@@ -60,7 +60,7 @@ $ oc create -f <file-name>
 For example:
 +
 ----
 $ oc create -f master-kube-config.yaml
 ----

 Most https://github.com/kubernetes/kubernetes/blob/release-1.11/pkg/kubelet/apis/kubeletconfig/v1beta1/types.go#L45[KubeletConfig Options] may be set by the user. The following options are not allowed to be overwritten:
@@ -21,7 +21,7 @@ The process to install the Node Problem Detector involves installing the Node Pr

 . Install the Node Problem Detector Operator:

 .. In the {product-title} console, click *Catalog* -> *Operator Hub*.

 .. Choose *Node Problem Detector* from the list of available Operators, and click *Install*.

@@ -30,30 +30,30 @@ Then, click *Subscribe*.

 . Verify the operator installations:

 .. Switch to the *Catalog* → *Installed Operators* page.

 .. Ensure that *Node Problem Detector* is listed on
 the *InstallSucceeded* tab with a *Status* of *InstallSucceeded*. Change the project to *openshift-node-problem-detector* if necessary.
 +
 If either operator does not appear as installed, to troubleshoot further:

 * On the *Copied* tab of the *Installed Operators* page, if an operator shows a *Status* of
 *Copied*, this indicates the installation is in process and is expected behavior.
 * Switch to the *Catalog* → *Operator Management* page and inspect
 the *Operator Subscriptions* and *Install Plans* tabs for any failure or errors
 under *Status*.
 * Switch to the *Workloads* → *Pods* page and check the logs in any Pods in the
 `openshift-logging` and `openshift-operators` projects that are reporting issues.

 . Create a cluster logging instance:

 .. Switch to the *Administration* -> *CRD* page.

 .. On the *Custom Resource Definitions* page, click *NodeProblemDetector*.

 .. On the *Node Problem Detector* page, click *Create Node Problem Detector*.
 +
-You might need to refresh the page to load the data.
+You might have to refresh the page to load the data.

 .. Specify a name and enter the *openshift-node-problem-detector* namespace.
 +
@@ -9,7 +9,7 @@ When you delete a node using the CLI, the node object is deleted in Kubernetes,
 but the pods that exist on the node are not deleted. Any bare pods not
 backed by a replication controller become inaccessible to {product-title}.
 Pods backed by replication controllers are rescheduled to other available
-nodes. You need to delete local manifest pods.
+nodes. You must delete local manifest pods.

 .Procedure

@@ -29,6 +29,6 @@ The MachineSets are listed in the form of <clusterid>-worker-<aws-region-az>.
 $ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api
 ----

 For more information on scaling your cluster using a MachineSet, see Manually scaling a MachineSet.
@@ -32,7 +32,7 @@ After the pod is bound to a node, the pod will never be bound to another node. T
|
||||
|Replication Controller
|
||||
| `Always`.
|
||||
|
||||
|Pods that need to run one-per-machine
|
||||
|Pods that must run one-per-machine
|
||||
|Daemonset
|
||||
|Any
|
||||
|===
|
||||

@@ -40,8 +40,8 @@ After the pod is bound to a node, the pod will never be bound to another node. T
If a Container on a pod fails and the restart policy is set to `OnFailure`, the pod stays on the node and the Container is restarted. If you do not want the Container to
restart, use a restart policy of `Never`.

If an entire pod fails, {product-title} starts a new pod. Developers need to address the possibility that applications might be restarted in a new pod. In particular,
applications need to handle temporary files, locks, incomplete output, and so forth caused by previous runs.
If an entire pod fails, {product-title} starts a new pod. Developers must address the possibility that applications might be restarted in a new pod. In particular,
applications must handle temporary files, locks, incomplete output, and so forth caused by previous runs.

For details on how {product-title} uses restart policy with failed Containers, see
the link:https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#example-states[Example States] in the Kubernetes documentation.
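
To check which restart policy a running pod actually carries, you can read it back from the pod spec; a minimal sketch (the pod name is a placeholder):

----
$ oc get pod <pod_name> -o jsonpath='{.spec.restartPolicy}'
----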

@@ -66,7 +66,7 @@ in the Device Manager code:
* Daemonsets are the recommended approach for device plug-in deployments.
* Upon start, the device plug-in will try to create a UNIX domain socket at
*_/var/lib/kubelet/device-plugin/_* on the node to serve RPCs from Device Manager.
* Since device plug-ins need to manage hardware resources, access to the host
* Since device plug-ins must manage hardware resources, access to the host
file system, as well as socket creation, they must be run in a privileged
security context.
* More specific details regarding deployment steps can be found with each device

@@ -109,7 +109,7 @@ The `CheckVolumeBinding` predicate must be enabled in non-default schedulers.
== General Predicates

The following general predicates check whether non-critical predicates and essential predicates pass. Non-critical predicates are the predicates
that only non-critical pods need to pass and essential predicates are the predicates that all pods need to pass.
that only non-critical pods must pass and essential predicates are the predicates that all pods must pass.

_The default scheduler policy includes the general predicates._

@@ -5,13 +5,13 @@
[id="oauth-server-metadata-{context}"]
= OAuth server metadata

Applications running in {product-title} may need to discover information
about the built-in OAuth server. For example, they might need to discover
what the address of the `<namespace_route>` is without manual
Applications running in {product-title} might have to discover information
about the built-in OAuth server. For example, they might have to discover
what the address of the `<namespace_route>` is without manual
configuration. To aid in this, {product-title} implements the IETF
link:https://tools.ietf.org/html/draft-ietf-oauth-discovery-10[OAuth 2.0 Authorization Server Metadata] draft specification.

Thus, any application running inside the cluster can issue a `GET` request
to *_\https://openshift.default.svc/.well-known/oauth-authorization-server_*
to fetch the following information:
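
The metadata document itself falls outside this hunk. As a rough illustration of issuing the `GET` request from inside the cluster (a sketch, assuming `curl` is available in the pod and certificate verification is skipped with `-k`):

----
$ curl -k https://openshift.default.svc/.well-known/oauth-authorization-server
----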

@@ -46,7 +46,7 @@
|TBD

| Number of pods per namespace footnoteref:[objectpernamespace,There are
a number of control loops in the system that need to iterate over all objects
a number of control loops in the system that must iterate over all objects
in a given namespace as a reaction to some changes in state. Having a large
number of objects of a given type in a single namespace can make those loops
expensive and slow down processing given state changes.]
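
To see how close an existing namespace is to a per-namespace pod limit, a quick count can help; a minimal sketch (the namespace is a placeholder):

----
$ oc get pods -n <namespace> --no-headers | wc -l
----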

@@ -55,7 +55,7 @@ all environments using a `kubectl` or `oc` command:
$ kubectl get Tomcats --all-namespaces
----

There is no need to use the Helm CLI or install Tiller; Helm-based Operators
import code from the Helm project. All you need to do is have an instance of the
There is no requirement to use the Helm CLI or install Tiller; Helm-based Operators
import code from the Helm project. All you have to do is have an instance of the
Operator running and register the CR with a Custom Resource Definition (CRD).
And because it obeys RBAC, you can more easily prevent production changes.
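
As a rough way to confirm that the CRD backing such a CR is registered with the cluster (a sketch; `tomcat` here is a hypothetical CRD name used only for illustration):

----
$ oc get crd | grep -i tomcat
----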

@@ -71,7 +71,7 @@ For example:

{product-title} includes a set of default cluster roles that you can bind to
users and groups cluster-wide or locally. You can manually modify the default
cluster roles, if required, but you will need to take extra steps each time
cluster roles, if required, but you must take extra steps each time
you restart a master node.
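
To review the default cluster roles before binding or modifying them, something like the following can be used; a minimal sketch:

----
$ oc get clusterroles
$ oc describe clusterrole.rbac admin
----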

[cols="1,4",options="header"]

@@ -79,7 +79,7 @@ testing that was possibly completed against these OpenShift core components.
In a non-scaled/high-availability (HA) {product-title} registry cluster deployment:

* The preferred storage technology is object storage followed by block storage. The
storage technology does not need to support RWX access mode.
storage technology does not have to support RWX access mode.
* The storage technology must ensure read-after-write consistency. All NAS storage are not
recommended for {product-title} Registry cluster deployment with production workloads.
* While `hostPath` volumes are configurable for a non-scaled/HA {product-title} Registry, they are not recommended for cluster deployment.

@@ -68,8 +68,8 @@ $ oc create -f -
$ oc adm policy add-cluster-role-to-user prometheus-scraper <username>
----

. Access the metrics using cluster role. You still need to
enable the endpoint, but you do not need to specify a `<secret>`. The part of
. Access the metrics using cluster role. You still must
enable the endpoint, but you do not have to specify a `<secret>`. The part of
the configuration file responsible for metrics should look like this:

----

@@ -5,7 +5,7 @@
[id="service-accounts-using-credentials-externally-{context}"]
= Using a service account's credentials externally

You can distribute a service account's token to external applications that need to
You can distribute a service account's token to external applications that must
authenticate to the API.
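
One way to obtain such a token for distribution is through the CLI; a minimal sketch (the service account and project names are placeholders):

----
$ oc sa get-token <service_account_name> -n <project>
----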

In order to pull an image, the authenticated user must have `get` rights on the

@@ -5,7 +5,7 @@
= Configuring application for {product-title}

To have your application communicate with the PostgreSQL database
service running in {product-title} you will need to edit the
service running in {product-title} you must edit the
`default` section in your `config/database.yml` to use environment
variables, which you will define later, upon the database service creation.
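
Those environment variables are typically supplied when the database service is created. A sketch of that step, assuming the conventional `POSTGRESQL_*` variable names used by the PostgreSQL image (the values are placeholders):

----
$ oc new-app postgresql -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name
----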

@@ -8,7 +8,7 @@ Your Rails application expects a running database service. For this service use
PostgreSQL database image.

To create the database service you will use the `oc new-app` command. To this
command you will need to pass some necessary environment variables which will be
command you must pass some necessary environment variables which will be
used inside the database container. These environment variables are required to
set the username, password, and name of the database. You can change the values
of these environment variables to anything you would like. The variables are as

@@ -4,7 +4,7 @@
[id="templates-rails-creating-frontend-service-{context}"]
= Creating the frontend service

To bring your application to {product-title}, you need to specify a repository
To bring your application to {product-title}, you must specify a repository
in which your application lives.

.Procedure

@@ -66,7 +66,7 @@ $ oc get pods
You should see a line starting with `myapp-<number>-<hash>`, and that is your
application running in {product-title}.

. Before your application will be functional, you need to initialize the database
. Before your application will be functional, you must initialize the database
by running the database migration script. There are two ways you can do this:
+
* Manually from the running frontend container:

@@ -5,7 +5,7 @@
= Storing your application in Git

Building an application in {product-title} usually requires that the source code
be stored in a git repository, so you will need to
be stored in a git repository, so you must
install `git` if you do not already have it.

.Prerequisites

@@ -45,7 +45,7 @@ $ git add .
$ git commit -m "initial commit"
----
+
Once your application is committed you need to push it to a remote repository.
After your application is committed you must push it to a remote repository.
GitHub account, in which you create a new repository.

. Set the remote that points to your `git` repository:
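
The command listing for this step falls outside the hunk; as a rough sketch of what setting the remote and pushing usually looks like (the repository URL is a placeholder):

----
$ git remote add origin git@github.com:<namespace>/<repository_name>.git
$ git push -u origin master
----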

@@ -4,7 +4,7 @@
[id="templates-rails-writing-application-{context}"]
= Writing your application

If you are starting your Rails application from scratch, you need to install the
If you are starting your Rails application from scratch, you must install the
Rails gem first. Then you can proceed with writing your application.
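
Installing the gem and generating a skeleton application is usually a two-command step; a minimal sketch (the application name is a placeholder):

----
$ gem install rails
$ rails new myapp
----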

.Procedure

@@ -43,8 +43,8 @@ gem 'pg'
$ bundle install
----

. In addition to using the `postgresql` database with the `pg` gem, you will also
need to ensure the `config/database.yml` is using the `postgresql` adapter.
. In addition to using the `postgresql` database with the `pg` gem, you also
must ensure that the `config/database.yml` is using the `postgresql` adapter.
+
Make sure you updated the `default` section in the `config/database.yml` file, so it
looks like this:

@@ -46,7 +46,7 @@ message: "Your admin credentials are ${ADMIN_USERNAME}:${ADMIN_PASSWORD}" <10>
<1> The unique name of the template.
<2> A brief, user-friendly name, which can be employed by user interfaces.
<3> A description of the template. Include enough detail that the user will
understand what is being deployed and any caveats they need to know before
understand what is being deployed and any caveats they must know before
deploying. It should also provide links to additional information, such as a
*_README_* file. Newlines can be included to create paragraphs.
<4> Additional template description. This may be displayed by the service

@@ -8,7 +8,7 @@ toc::[]
== Overview

Multus CNI provides the capability to attach multiple network interfaces to Pods in {product-title}.
This gives you flexibility when you need to configure
This gives you flexibility when you must configure
Pods that deliver network functionality, such as switching or routing.

Multus CNI is useful in situations where network isolation is needed, including

@@ -14,7 +14,7 @@ access the registry used by the samples.
* Deploy an {product-title} cluster.

//Add any other prereqs that are always valid before you modify the CRD.
//If you need to configure a different resource before you configure this one, point it out here.
//If you must configure a different resource before you configure this one, point it out here.

include::modules/samples-operator-overview.adoc[leveloffset=+1]
include::modules/samples-operator-configuration.adoc[leveloffset=+1]

@@ -32,7 +32,7 @@ include::modules/images-create-metadata.adoc[leveloffset=+1]


//Testing may need to move
//Testing may have to move
include::modules/images-test-s2i.adoc[leveloffset=+1]