mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

Use ASCII ' instead of ’ for apostrophe

Jason Boxman
2021-07-18 23:32:55 -04:00
committed by openshift-cherrypick-robot
parent 6cac789f62
commit d4fa3f90c1
92 changed files with 144 additions and 144 deletions

View File

@@ -25,7 +25,7 @@ workflows.
* xref:../registry/architecture-component-imageregistry.adoc[Image Registry] -
The image registry provides a scalable repository for storing and retrieving
container images that are produced by and run on the cluster. Image access is
integrated with the cluster’s role-based access controls and user authentication
integrated with the cluster's role-based access controls and user authentication
system.
* xref:../openshift_images/images-understand.adoc[Image
streams] - The imagestream API provides an abstraction over container images

View File

@@ -13,7 +13,7 @@ Using a _continuous integration/continuous delivery_ (CI/CD) methodology enables
_Continuous integration_ is an automation process for developers. Code changes to an application are regularly built, tested, and merged to a shared repository.
_Continuous delivery_ and _continuous deployment_ are closely related concepts that are sometimes used interchangeably and refer to automation of the pipeline.
Continuous delivery uses automation to ensure that a developer’s changes to an application are tested and sent to a repository, where an operations team can deploy them to a production environment. Continuous deployment enables the release of changes, starting from the repository and ending in production. Continuous deployment speeds up application delivery and prevents the operations team from getting overloaded.
Continuous delivery uses automation to ensure that a developer's changes to an application are tested and sent to a repository, where an operations team can deploy them to a production environment. Continuous deployment enables the release of changes, starting from the repository and ending in production. Continuous deployment speeds up application delivery and prevents the operations team from getting overloaded.
[id="cicd_gitops_methodology"]
== The GitOps methodology and practice

View File

@@ -378,7 +378,7 @@ like updating the Operator, can happen automatically and invisibly to the
Operator's users.
An example of a useful Operator is one that is set up to automatically back up
data at particular times. Having an Operator manage an application’s backup at
data at particular times. Having an Operator manage an application's backup at
set times can save a system administrator from remembering to do it.
Any application maintenance that has traditionally been completed manually,

View File

@@ -20,7 +20,7 @@ For an overview of {gitops-title}, see xref:../../cicd/gitops/understanding-open
[id="gitops-inclusive-language"]
== Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see link:https://www.redhat.com/en/blog/making-open-source-more-inclusive-eradicating-problematic-language[Red Hat CTO Chris Wright’s message].
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see link:https://www.redhat.com/en/blog/making-open-source-more-inclusive-eradicating-problematic-language[Red Hat CTO Chris Wright's message].
// Modules included, most to least recent
include::modules/gitops-release-notes-1-1.adoc[leveloffset=+1]

View File

@@ -21,7 +21,7 @@ For an overview of {pipelines-title}, see xref:../../cicd/pipelines/understandin
[id="openshift-pipelines-inclusive-language"]
== Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see link:https://www.redhat.com/en/blog/making-open-source-more-inclusive-eradicating-problematic-language[Red Hat CTO Chris Wright’s message].
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see link:https://www.redhat.com/en/blog/making-open-source-more-inclusive-eradicating-problematic-language[Red Hat CTO Chris Wright's message].
// Modules included, most to least recent
include::modules/op-release-notes-1-4.adoc[leveloffset=+1]

View File

@@ -72,7 +72,7 @@ include::modules/jaeger-config-ingester.adoc[leveloffset=+2]
[id="injecting-sidecars"]
== Injecting sidecars
{ProductName} relies on a proxy sidecar within the application’s pod to provide the agent. The Jaeger Operator can inject Jaeger Agent sidecars into Deployment workloads. You can enable automatic sidecar injection or manage it manually.
{ProductName} relies on a proxy sidecar within the application's pod to provide the agent. The Jaeger Operator can inject Jaeger Agent sidecars into Deployment workloads. You can enable automatic sidecar injection or manage it manually.
include::modules/jaeger-sidecar-automatic.adoc[leveloffset=+2]
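As an aside on the passage above: automatic injection is usually requested by annotating the workload. A minimal, hypothetical sketch (the annotation value and all names here are assumptions, not part of the original text):

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                                   # hypothetical workload name
  annotations:
    "sidecar.jaegertracing.io/inject": "true"   # asks the Jaeger Operator to add the agent sidecar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: quay.io/example/myapp:latest     # hypothetical image
----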

View File

@@ -9,4 +9,4 @@ The Operator Lifecycle Manager (OLM) controls the installation, upgrade, and rol
The OLM queries for available Operators as well as upgrades for installed Operators.
For more information about how {product-title} handled upgrades, refer to the xref:../../operators/understanding/olm/olm-understanding-olm.adoc#olm-understanding-olm[Operator Lifecycle Manager] documentation.
The update approach used by the Jaeger Operator upgrades the managed Jaeger instances to the version associated with the Operator. Whenever a new version of the Jaeger Operator is installed, all the Jaeger application instances managed by the Operator will be upgraded to the Operator’s version. For example, if version 1.10 is installed (both Operator and backend components) and the Operator is upgraded to version 1.11, then as soon as the Operator upgrade has completed, the Operator will scan for running Jaeger instances and upgrade them to 1.11 as well.
The update approach used by the Jaeger Operator upgrades the managed Jaeger instances to the version associated with the Operator. Whenever a new version of the Jaeger Operator is installed, all the Jaeger application instances managed by the Operator will be upgraded to the Operator's version. For example, if version 1.10 is installed (both Operator and backend components) and the Operator is upgraded to version 1.11, then as soon as the Operator upgrade has completed, the Operator will scan for running Jaeger instances and upgrade them to 1.11 as well.

View File

@@ -19,7 +19,7 @@ The following advisories are available for {ProductName} 5.0:
[id="openshift-logging-5-0-inclusive-language"]
== Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see link:https://www.redhat.com/en/blog/making-open-source-more-inclusive-eradicating-problematic-language[Red Hat CTO Chris Wright’s message].
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see link:https://www.redhat.com/en/blog/making-open-source-more-inclusive-eradicating-problematic-language[Red Hat CTO Chris Wright's message].
[id="openshift-logging-5-0-deprecated-removed-features"]
== Deprecated and removed features

View File

@@ -20,7 +20,7 @@ endif::[]
ifdef::openshift-dedicated[]
* {product-title} clusters are deployed on AWS environments and can be used as part of a hybrid approach for application management.
endif::[]
* Integrated Red Hat technology. Major components in {product-title} come from {op-system-base-full} and related Red Hat technologies. {product-title} benefits from the intense testing and certification initiatives for Red Hat’s enterprise quality software.
* Integrated Red Hat technology. Major components in {product-title} come from {op-system-base-full} and related Red Hat technologies. {product-title} benefits from the intense testing and certification initiatives for Red Hat's enterprise quality software.
* Open source development model. Development is completed in the open, and the source code is available from public software repositories. This open collaboration fosters rapid innovation and development.
Although Kubernetes excels at managing your applications, it does not specify

View File

@@ -22,7 +22,7 @@ data: <2>
stringData: <4>
hostname: myapp.mydomain.com <5>
----
<1> Indicates the structure of the secret’s key names and values.
<1> Indicates the structure of the secret's key names and values.
<2> The allowable format for the keys in the `data` field must meet the guidelines in the `DNS_SUBDOMAIN` value in the Kubernetes identifiers glossary.
<3> The value associated with keys in the `data` map must be base64 encoded.
<4> Entries in the `stringData` map are converted to base64 and the entry are then moved to the `data` map automatically. This field is write-only. The value is only be returned by the `data` field.
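To illustrate the callouts above, a minimal hypothetical Secret that uses both maps (the names and values are examples only; `stringData` entries end up base64-encoded in `data`):

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: example-secret            # hypothetical name
type: Opaque
data:
  username: ZXhhbXBsZS11c2Vy      # base64 of "example-user", e.g. `echo -n example-user | base64`
stringData:
  hostname: myapp.mydomain.com    # plain text; converted to base64 and merged into data
----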

View File

@@ -23,7 +23,7 @@ mappings of `default` in the template's mapping section.
| `@timestamp`
| The UTC value marking when the log payload was created, or when the log payload
was first collected if the creation time is not known. This is the log
processing pipeline’s best effort determination of when the log payload was
processing pipeline's best effort determination of when the log payload was
generated. Add the `@` prefix convention to note a field as being reserved for a
particular use. With Elasticsearch, most tools look for `@timestamp` by default.
For example, the format would be 2015-01-24 14:06:05.071000.

View File

@@ -167,7 +167,7 @@ The current release fixes this issue. Now, when a rollover occurs in the `indexm
* Previously, if you deleted the secret, it was not recreated. Even though the certificates were on a disk local to the operator, they weren't rewritten because they hadn't changed. That is, certificates were only written if they changed. The current release fixes this issue. It rewrites the secret if the certificate changes or is not found. Now, if you delete the master-certs, they are replaced. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1901869[*BZ#1901869*])
* Previously, if a cluster had multiple custom resources with the same name, the resource would get selected alphabetically when not fully qualified with the API group. As a result, if you installed both Red Hat’s OpenShift Elasticsearch Operator alongside the OpenShift Elasticsearch Operator, you would see failures when collected data via a must-gather report. The current release fixes this issue by ensuring must-gathers now use the full API group when gathering information about the cluster's custom resources. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1897731[*BZ#1897731*])
* Previously, if a cluster had multiple custom resources with the same name, the resource would get selected alphabetically when not fully qualified with the API group. As a result, if you installed both Red Hat's OpenShift Elasticsearch Operator alongside the OpenShift Elasticsearch Operator, you would see failures when collected data via a must-gather report. The current release fixes this issue by ensuring must-gathers now use the full API group when gathering information about the cluster's custom resources. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1897731[*BZ#1897731*])
* An earlier bug fix to address issues related to certificate generation introduced an error. Trying to read the certificates caused them to be regenerated because they were recognized as missing. This, in turn, triggered the OpenShift Elasticsearch Operator to perform a rolling upgrade on the Elasticsearch cluster and, potentially, to have mismatched certificates. This bug was caused by the operator incorrectly writing certificates to the working directory. The current release fixes this issue. Now the operator consistently reads and writes certificates to the same working directory, and the certificates are only regenerated if needed. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1905910[*BZ#1905910*])

View File

@@ -252,4 +252,4 @@ metadata:
EOF
----
<1> Modify this line to match the DU’s networking.
<1> Modify this line to match the DU's networking.

View File

@@ -73,7 +73,7 @@ $ oc cp pv-extract:/workers-scan-results .
+
[IMPORTANT]
====
Spawning a pod that mounts the persistent volume will keep the claim as `Bound`. If the volume’s storage class in use has permissions set to `ReadWriteOnce`, the volume is only mountable by one pod at a time. You must delete the pod upon completion, or it will be possible for the Operator to schedule a pod and continue storing results in this location.
Spawning a pod that mounts the persistent volume will keep the claim as `Bound`. If the volume's storage class in use has permissions set to `ReadWriteOnce`, the volume is only mountable by one pod at a time. You must delete the pod upon completion, or it will be possible for the Operator to schedule a pod and continue storing results in this location.
====
. After the extraction is complete, the pod can be deleted:

View File

@@ -40,9 +40,9 @@ caveats around concurrent create and removal.
* *Samples Registry:* Overrides the registry from which images are imported.
* *Architecture:* Place holder to choose an architecture type. Currently only x86
is supported.
* *Skipped Imagestreams:* Imagestreams that are in the operator’s
* *Skipped Imagestreams:* Imagestreams that are in the operator's
inventory, but that the cluster administrator wants the operator to ignore or not manage.
* *Skipped Templates:* Templates that are in the operator’s inventory, but that
* *Skipped Templates:* Templates that are in the operator's inventory, but that
the cluster administrator wants the operator to ignore or not manage.
|Infrastructure

View File

@@ -9,7 +9,7 @@ The tool creates multiple namespaces (projects), which contain multiple template
== Example Cluster Loader configuration file
Cluster Loader’s configuration file is a basic YAML file:
Cluster Loader's configuration file is a basic YAML file:
[source,yaml]
----

View File

@@ -8,7 +8,7 @@
|If you are looking for Red Hat OpenShift Container Storage information about...
|See the following Red Hat OpenShift Container Storage documentation:
|What’s new, known issues, notable bug fixes, and Technology Previews
|What's new, known issues, notable bug fixes, and Technology Previews
|link:https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.5/html/4.5_release_notes/[OpenShift Container Storage 4.5 Release Notes]
|Supported workloads, layouts, hardware and software requirements, sizing and scaling recommendations

View File

@@ -13,14 +13,14 @@
|Description
|`spec.nodeSelector`
|A map of key-values pairs that must match with node’s labels in order for the
|A map of key-values pairs that must match with node's labels in order for the
AIDE pods to be schedulable on that node. The typical use is to set only a
single key-value pair where `node-role.kubernetes.io/worker: ""` schedules AIDE on
all worker nodes, `node.openshift.io/os_id: "rhcos"` schedules on all
{op-system-first} nodes.
|`spec.debug`
|A boolean attribute. If set to `true`, the daemon running in the AIDE deamon set’s
|A boolean attribute. If set to `true`, the daemon running in the AIDE deamon set's
pods would output extra information.
|`spec.tolerations`

View File

@@ -14,6 +14,6 @@ easier.
[NOTE]
====
`/hostroot` is the directory where the pods running AIDE mount the host’s
`/hostroot` is the directory where the pods running AIDE mount the host's
file system. Changing the configuration triggers a reinitializing of the database.
====

View File

@@ -17,7 +17,7 @@ After you answer a few questions, the `bootstrap.ign`, `master.ign`, and
`worker.ign` files appear in the directory you entered.
To see the contents of the `bootstrap.ign` file, pipe it through the `jq` filter.
Here’s a snippet from that file:
Here's a snippet from that file:
[source,terminal]
----
@@ -56,7 +56,7 @@ $ cat $HOME/testconfig/bootstrap.ign | jq
To decode the contents of a file listed in the `bootstrap.ign` file, pipe the
base64-encoded data string representing the contents of that file to the `base64
-d` command. Here’s an example using the contents of the `/etc/motd` file added to
-d` command. Here's an example using the contents of the `/etc/motd` file added to
the bootstrap machine from the output shown above:
[source,terminal]
@@ -89,16 +89,16 @@ Here are a few things you can learn from the `bootstrap.ign` file: +
* Format: The format of the file is defined in the
https://coreos.github.io/ignition/configuration-v3_2/[Ignition config spec].
Files of the same format are used later by the MCO to merge changes into a
machine’s configuration.
machine's configuration.
* Contents: Because the bootstrap machine serves the Ignition configs for other
machines, both master and worker machine Ignition config information is stored in the
`bootstrap.ign`, along with the bootstrap machine’s configuration.
`bootstrap.ign`, along with the bootstrap machine's configuration.
* Size: The file is more than 1300 lines long, with path to various types of resources.
* The content of each file that will be copied to the machine is actually encoded
into data URLs, which tends to make the content a bit clumsy to read. (Use the
`jq` and `base64` commands shown previously to make the content more readable.)
* Configuration: The different sections of the Ignition config file are generally
meant to contain files that are just dropped into a machine’s file system, rather
meant to contain files that are just dropped into a machine's file system, rather
than commands to modify existing files. For example, instead of having a section
on NFS that configures that service, you would just add an NFS configuration
file, which would then be started by the init process when the system comes up.
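A hedged sketch of the decoding flow described above, assuming the Ignition v3 layout in which each entry under `.storage.files[]` carries its contents as a base64 data URL (the `/etc/motd` path follows the example in the text; the config path is the one used earlier):

[source,terminal]
----
$ cat $HOME/testconfig/bootstrap.ign \
    | jq -r '.storage.files[] | select(.path == "/etc/motd") | .contents.source' \
    | sed 's/^data:.*;base64,//' \
    | base64 -d
----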

View File

@@ -30,7 +30,7 @@ This colocation ensures the containers share a network namespace and storage for
[discrete]
== Use `exec` in wrapper scripts
Many images use wrapper scripts to do some setup before starting a process for the software being run. If your image uses such a script, that script uses `exec` so that the script’s process is replaced by your software. If you do not use `exec`, then signals sent by your container runtime go to your wrapper script instead of your software’s process. This is not what you want.
Many images use wrapper scripts to do some setup before starting a process for the software being run. If your image uses such a script, that script uses `exec` so that the script's process is replaced by your software. If you do not use `exec`, then signals sent by your container runtime go to your wrapper script instead of your software's process. This is not what you want.
If you have a wrapper script that starts a process for some server. You start your container, for example, using `podman run -i`, which runs the wrapper script, which in turn starts your process. If you want to close your container with `CTRL+C`. If your wrapper script used `exec` to start the server process, `podman` sends SIGINT to the server process, and everything works as you expect. If you did not use `exec` in your wrapper script, `podman` sends SIGINT to the process for the wrapper script and your process keeps running like nothing happened.
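A minimal illustration of the point above (the script and binary names are hypothetical): because the wrapper calls `exec`, the server replaces the shell as the container's main process and receives signals directly.

[source,bash]
----
#!/bin/bash
# wrapper.sh - hypothetical entrypoint: perform setup, then hand the process over
generate-config > /etc/myserver.conf                  # hypothetical setup step
exec /usr/bin/myserver --config /etc/myserver.conf    # replaces this shell, so SIGINT/SIGTERM reach the server
----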

View File

@@ -42,7 +42,7 @@ The cluster must be able to access the resource group that contains the existing
Your VNet must meet the following characteristics:
* The VNet’s CIDR block must contain the `Networking.MachineCIDR` range, which is the IP address pool for cluster machines.
* The VNet's CIDR block must contain the `Networking.MachineCIDR` range, which is the IP address pool for cluster machines.
* The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses.
You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default.

View File

@@ -30,7 +30,7 @@ Your VPC must meet the following characteristics:
* The VPC's CIDR block must contain the `Networking.MachineCIDR` range, which is the IP address pool for cluster machines.
* The VPC must not use the `kubernetes.io/cluster/.*: owned` tag.
* You must enable the `enableDnsSupport` and `enableDnsHostnames` attributes in your VPC so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster’s internal DNS records. See link:https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-support[DNS Support in Your VPC] in the AWS documentation. If you prefer using your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the `platform.aws.hostedZone` field in the `install-config.yaml` file.
* You must enable the `enableDnsSupport` and `enableDnsHostnames` attributes in your VPC so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See link:https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-support[DNS Support in Your VPC] in the AWS documentation. If you prefer using your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the `platform.aws.hostedZone` field in the `install-config.yaml` file.
If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses.

View File

@@ -330,7 +330,7 @@ ocpadmin@internal
... For `oVirt cluster`, select the cluster for installing {product-title}.
... For `oVirt storage domain`, select the storage domain for installing {product-title}.
... For `oVirt network`, select a virtual network that has access to the {rh-virtualization} Manager REST API.
... For `Internal API Virtual IP`, enter the static IP address you set aside for the cluster’s REST API.
... For `Internal API Virtual IP`, enter the static IP address you set aside for the cluster's REST API.
... For `Ingress virtual IP`, enter the static IP address you reserved for the wildcard apps domain.
... For `Base Domain`, enter the base domain of the {product-title} cluster. If this cluster is exposed to the outside world, this must be a valid domain recognized by DNS infrastructure. For example, enter: `virtlab.example.com`
... For `Cluster Name`, enter the name of the cluster. For example, `my-cluster`. Use cluster name from the externally registered/resolvable DNS entries you created for the {product-title} REST API and apps domain names. The installation program also gives this name to the cluster in the {rh-virtualization} environment.

View File

@@ -351,7 +351,7 @@ endif::openshift-origin[]
.. For `Cluster`, select the {rh-virtualization} cluster for installing {product-title}.
.. For `Storage domain`, select the storage domain for installing {product-title}.
.. For `Network`, select a virtual network that has access to the {rh-virtualization} Manager REST API.
.. For `Internal API Virtual IP`, enter the static IP address you set aside for the cluster’s REST API.
.. For `Internal API Virtual IP`, enter the static IP address you set aside for the cluster's REST API.
.. For `Ingress virtual IP`, enter the static IP address you reserved for the wildcard apps domain.
.. For `Base Domain`, enter the base domain of the {product-title} cluster. If this cluster is exposed to the outside world, this must be a valid domain recognized by DNS infrastructure. For example, enter: `virtlab.example.com`
.. For `Cluster Name`, enter the name of the cluster. For example, `my-cluster`. Use cluster name from the externally registered/resolvable DNS entries you created for the {product-title} REST API and apps domain names. The installation program also gives this name to the cluster in the {rh-virtualization} environment.

View File

@@ -10,14 +10,14 @@ Clusters using a restricted network must import the default must-gather image to
.Procedure
. If you have not added your mirror registry's trusted CA to your cluster's image configuration object as part of the Cluster Samples Operator configuration, perform the following steps:
.. Create the cluster’s image configuration object:
.. Create the cluster's image configuration object:
+
[source,terminal]
----
$ oc create configmap registry-config --from-file=${MIRROR_ADDR_HOSTNAME}..5000=$path/ca.crt -n openshift-config
----
.. Add the required trusted CAs for the mirror in the cluster’s image
.. Add the required trusted CAs for the mirror in the cluster's image
configuration object:
+
[source,terminal]

View File

@@ -77,14 +77,14 @@ endif::[]
$ oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest ${MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest
----
. Create the cluster’s image configuration object:
. Create the cluster's image configuration object:
+
[source,terminal]
----
$ oc create configmap registry-config --from-file=${MIRROR_ADDR_HOSTNAME}..5000=$path/ca.crt -n openshift-config
----
. Add the required trusted CAs for the mirror in the cluster’s image
. Add the required trusted CAs for the mirror in the cluster's image
configuration object:
+
[source,terminal]

View File

@@ -46,7 +46,7 @@ is best provided with a valid RHEL subscription.
Before deploying kernel modules to your {product-title} cluster,
you can test the process on a separate RHEL system.
Gather the kernel module’s source code, the KVC framework, and the
Gather the kernel module's source code, the KVC framework, and the
kmod-via-containers software. Then build and test the module. To do
that on a RHEL 8 system, do the following:

View File

@@ -30,7 +30,7 @@ You must install the {product-title} cluster on a VMware vSphere version 6 or 7
|Networking (NSX-T)
|vSphere 6.5U3 or vSphere 6.7U2 and later
|vSphere 6.5U3 or vSphere 6.7U2+ are required for {product-title}. VMware’s NSX Container Plug-in (NCP) is certified with {product-title} 4.6 and NSX-T 3.x+.
|vSphere 6.5U3 or vSphere 6.7U2+ are required for {product-title}. VMware's NSX Container Plug-in (NCP) is certified with {product-title} 4.6 and NSX-T 3.x+.
|Storage with in-tree drivers

View File

@@ -46,7 +46,7 @@ These requirements are based on the default resources the installation program u
** 16 GiB for each of the three compute machines
.. Record the amount of *Max free Memory for scheduling new virtual machines* for use later on.
+
. Verify that the virtual network for installing {product-title} has access to the {rh-virtualization} Manager’s REST API. From a virtual machine on this network, use curl to reach the {rh-virtualization} Manager’s REST API:
. Verify that the virtual network for installing {product-title} has access to the {rh-virtualization} Manager's REST API. From a virtual machine on this network, use curl to reach the {rh-virtualization} Manager's REST API:
+
[source,terminal]
----

View File

@@ -37,7 +37,7 @@ kourier LoadBalancer 172.30.51.103 a83e86291bcdd11e993af02b7a65e514-335442
+
The public address is surfaced in the `EXTERNAL-IP` field, and in this case is `a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com`.
. Manually set the host header of your HTTP request to the application’s host, but direct the request itself against the public address of the ingress gateway.
. Manually set the host header of your HTTP request to the application's host, but direct the request itself against the public address of the ingress gateway.
+
[source,terminal]
@@ -53,7 +53,7 @@ Hello Serverless!
----
+
You can also make a gRPC request by setting the authority to the application’s host, while directing the request against the ingress gateway directly:
You can also make a gRPC request by setting the authority to the application's host, while directing the request against the ingress gateway directly:
+
[source,yaml]

View File

@@ -7,7 +7,7 @@
== Enabling kdump
The `kdump` service, included in `kexec-tools`, provides a crash-dumping mechanism. You can use this service to save the contents of the system’s memory for later analysis.
The `kdump` service, included in `kexec-tools`, provides a crash-dumping mechanism. You can use this service to save the contents of the system's memory for later analysis.
The `kdump` service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

View File

@@ -95,7 +95,7 @@ Minimum deployment =3
resources:
requests:
cpu:
|Number of central processing units for requests, based on your environment’s configuration.
|Number of central processing units for requests, based on your environment's configuration.
|Specified in cores or millicores (for example, 200m, 0.5, 1). For example, Proof of concept = 500m,
Minimum deployment =1
|1
@@ -104,7 +104,7 @@ Minimum deployment =1
resources:
requests:
memory:
|Available memory for requests, based on your environment’s configuration.
|Available memory for requests, based on your environment's configuration.
|Specified in bytes (for example, 200Ki, 50Mi, 5Gi). For example, Proof of concept = 1Gi,
Minimum deployment = 16Gi*
|16Gi
@@ -113,7 +113,7 @@ Minimum deployment = 16Gi*
resources:
limits:
cpu:
|Limit on number of central processing units, based on your environment’s configuration.
|Limit on number of central processing units, based on your environment's configuration.
|Specified in cores or millicores (for example, 200m, 0.5, 1). For example, Proof of concept = 500m,
Minimum deployment =1
|
@@ -122,7 +122,7 @@ Minimum deployment =1
resources:
limits:
memory:
|Available memory limit based on your environment’s configuration.
|Available memory limit based on your environment's configuration.
|Specified in bytes (for example, 200Ki, 50Mi, 5Gi). For example, Proof of concept = 1Gi,
Minimum deployment = 16Gi*
|

View File

@@ -125,7 +125,7 @@ NOTE: There is no `..` at the starting of the path.
////
* If your assembly is in a subfolder of a guide/book directory, you must add a
statement to the assembly’s metadata to use `relfileprefix`.
statement to the assembly's metadata to use `relfileprefix`.
+
This adjusts all the xref links in your modules to start from the root
directory.

View File

@@ -21,7 +21,7 @@ Note the following about memory requests and memory limits:
container to a node, then fences off the requested memory on the chosen node
for the use of the container.
- If a node’s memory is exhausted, {product-title} prioritizes evicting its
- If a node's memory is exhausted, {product-title} prioritizes evicting its
containers whose memory usage most exceeds their memory request. In serious
cases of memory exhaustion, the node OOM killer may select and kill a
process in a container based on a similar metric.
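For reference, a minimal sketch of the request and limit fields this passage refers to (the pod name, image, and sizes are illustrative):

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo                     # hypothetical pod name
spec:
  containers:
  - name: app
    image: quay.io/example/app:latest   # hypothetical image
    resources:
      requests:
        memory: 256Mi                   # the scheduler fences off this much memory on the chosen node
      limits:
        memory: 512Mi                   # hard ceiling; usage far above the request raises eviction priority under pressure
----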

View File

@@ -5,7 +5,7 @@
[id="nodes-cluster-resource-configure-evicted_{context}"]
= Understanding pod eviction
{product-title} may evict a pod from its node when the node’s memory is
{product-title} may evict a pod from its node when the node's memory is
exhausted. Depending on the extent of memory exhaustion, the eviction may or
may not be graceful. Graceful eviction implies the main process (PID 1) of each
container receiving a SIGTERM signal, then some time later a SIGKILL signal if
@@ -14,7 +14,7 @@ process of each container immediately receiving a SIGKILL signal.
An evicted pod has phase *Failed* and reason *Evicted*. It will not be
restarted, regardless of the value of `restartPolicy`. However, controllers
such as the replication controller will notice the pod’s failed status and create
such as the replication controller will notice the pod's failed status and create
a new pod to replace the old one.
[source,terminal]

View File

@@ -29,7 +29,7 @@ this documentation, and may involve setting multiple additional JVM options.
For many Java workloads, the JVM heap is the largest single consumer of memory.
Currently, the OpenJDK defaults to allowing up to 1/4 (1/`-XX:MaxRAMFraction`)
of the compute node’s memory to be used for the heap, regardless of whether the
of the compute node's memory to be used for the heap, regardless of whether the
OpenJDK is running in a container or not. It is therefore *essential* to
override this behavior, especially if a container memory limit is also set.

View File

@@ -74,7 +74,7 @@ If one or more processes in a pod are OOM killed, when the pod subsequently
exits, whether immediately or not, it will have phase *Failed* and reason
*OOMKilled*. An OOM-killed pod might be restarted depending on the value of
`restartPolicy`. If not restarted, controllers such as the
replication controller will notice the pod’s failed status and create a new pod
replication controller will notice the pod's failed status and create a new pod
to replace the old one.
+
Use the follwing command to get the pod status:

View File

@@ -29,9 +29,9 @@ the pod.
in the pod.
{product-title} handles port-forward requests from clients. Upon receiving a request, {product-title} upgrades the response and waits for the client
to create port-forwarding streams. When {product-title} receives a new stream, it copies data between the stream and the pod’s port.
to create port-forwarding streams. When {product-title} receives a new stream, it copies data between the stream and the pod's port.
Architecturally, there are options for forwarding to a pod’s port. The supported {product-title} implementation invokes `nsenter` directly on the node host
to enter the pod’s network namespace, then invokes `socat` to copy data between the stream and the pod’s port. However, a custom implementation could
Architecturally, there are options for forwarding to a pod's port. The supported {product-title} implementation invokes `nsenter` directly on the node host
to enter the pod's network namespace, then invokes `socat` to copy data between the stream and the pod's port. However, a custom implementation could
include running a _helper_ pod that then runs `nsenter` and `socat`, so that those binaries are not required to be installed on the host.
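For reference, the client side of the flow described above is typically driven with `oc port-forward`; a small assumed example that forwards local port 8080 to port 80 in a hypothetical pod named `mypod`:

[source,terminal]
----
$ oc port-forward mypod 8080:80
----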

View File

@@ -33,7 +33,7 @@ data: <2>
stringData: <4>
hostname: myapp.mydomain.com <5>
----
<1> Indicates the structure of the secret’s key names and values.
<1> Indicates the structure of the secret's key names and values.
<2> The allowable format for the keys in the `data` field must meet the
guidelines in the *DNS_SUBDOMAIN* value in
link:https://github.com/kubernetes/kubernetes/blob/v1.0.0/docs/design/identifiers.md[the

View File

@@ -27,7 +27,7 @@ This admission controller has the following behavior:
. If the Namespace has an annotation with a key scheduler.alpha.kubernetes.io/node-selector, use its value as the node selector.
. If the namespace lacks such an annotation, use the `clusterDefaultNodeSelector` defined in the `PodNodeSelector` plug-in configuration file as the node selector.
. Evaluate the pod’s node selector against the namespace node selector for conflicts. Conflicts result in rejection.
. Evaluate the pod’s node selector against the namespace-specific whitelist defined the plug-in configuration file. Conflicts result in rejection.
. Evaluate the pod's node selector against the namespace node selector for conflicts. Conflicts result in rejection.
. Evaluate the pod's node selector against the namespace-specific whitelist defined the plug-in configuration file. Conflicts result in rejection.
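A hedged example of the namespace annotation named in the first step above (the namespace name and label value are illustrative):

[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: restricted-project                                      # hypothetical namespace
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: "region=west"  # evaluated against each pod's own node selector
----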

View File

@@ -97,7 +97,7 @@ endif::ovn[]
====
The egress firewall always allows pods access to the external interface of the node that the pod is on for DNS resolution.
If you use domain names in your egress firewall policy and your DNS resolution is not handled by a DNS server on the local node, then you must add egress firewall rules that allow access to your DNS server’s IP addresses. if you are using domain names in your pods.
If you use domain names in your egress firewall policy and your DNS resolution is not handled by a DNS server on the local node, then you must add egress firewall rules that allow access to your DNS server's IP addresses. if you are using domain names in your pods.
====
ifdef::ovn[]

View File

@@ -44,7 +44,7 @@ proxying HTTPS connections.
+
[NOTE]
====
You can skip this step if the proxy’s identity certificate is signed by an
You can skip this step if the proxy's identity certificate is signed by an
authority from the RHCOS trust bundle.
====

View File

@@ -16,7 +16,7 @@ Using a node port requires additional port resources.
A `NodePort` exposes the service on a static port on the node's IP address.
``NodePort``s are in the `30000` to `32767` range by default, which means a
`NodePort` is unlikely to match a service’s intended port. For example, port
`NodePort` is unlikely to match a service's intended port. For example, port
`8080` may be exposed as port `31020` on the node.
The administrator must ensure the external IP addresses are routed to the nodes.
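A minimal sketch of the mapping described above (the service name, selector, and port numbers are assumed): the service's intended port stays 8080 while each node exposes it on a port from the NodePort range.

[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: example-service        # hypothetical name
spec:
  type: NodePort
  selector:
    app: example               # hypothetical pod selector
  ports:
  - port: 8080                 # the service's intended port
    targetPort: 8080
    nodePort: 31020            # static port on every node's IP address (30000-32767 by default)
----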

View File

@@ -18,7 +18,7 @@ Why deploy on Kubernetes?::
Kubernetes (and by extension, {product-title}) contains all of the primitives needed to build complex distributed systems secret handling, load balancing, service discovery, autoscaling that work across on-premise and cloud providers.
Why manage your app with Kubernetes APIs and `kubectl` tooling?::
These APIs are feature rich, have clients for all platforms and plug into the cluster’s access control/auditing. An Operator uses the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom object, link:https://marketplace.redhat.com/en-us/products/mongodb-enterprise-advanced-from-ibm[for example `MongoDB`], looks and acts just like the built-in, native Kubernetes objects.
These APIs are feature rich, have clients for all platforms and plug into the cluster's access control/auditing. An Operator uses the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom object, link:https://marketplace.redhat.com/en-us/products/mongodb-enterprise-advanced-from-ibm[for example `MongoDB`], looks and acts just like the built-in, native Kubernetes objects.
How do Operators compare with service brokers?::
A service broker is a step towards programmatic discovery and deployment of an app. However, because it is not a long running process, it cannot execute Day 2 operations like upgrade, failover, or scaling. Customizations and parameterization of tunables are provided at install time, versus an Operator that is constantly watching the current state of your cluster. Off-cluster services are a good match for a service broker, although Operators exist for these as well.

View File

@@ -75,7 +75,7 @@ As a cluster admin, you can disable installation of the default pipeline templat
+
[NOTE]
====
Because `openshift` is the default namespace used by the operator-installed pipeline templates, you must create the custom pipeline template in the `openshift` namespace. When an application uses a pipeline template, the template is automatically copied to the respective project’s namespace.
Because `openshift` is the default namespace used by the operator-installed pipeline templates, you must create the custom pipeline template in the `openshift` namespace. When an application uses a pipeline template, the template is automatically copied to the respective project's namespace.
====
+
.. Under the *Details* tab of the created pipeline, ensure that the *Labels* in the custom template match the labels in the default pipeline. The custom pipeline template must have the correct labels for the runtime, type, and strategy of the application. For example, the required labels for a `node.js` application deployed on {product-title} are as follows:

View File

@@ -59,7 +59,7 @@ Tested Cloud Platforms for {product-title} 4.x: Amazon Web Services, Microsoft A
|===
[.small]
--
1. The pod count displayed here is the number of test pods. The actual number of pods depends on the application’s memory, CPU, and storage requirements.
1. The pod count displayed here is the number of test pods. The actual number of pods depends on the application's memory, CPU, and storage requirements.
2. This was tested on a cluster with 100 worker nodes with 500 pods per worker node. The default `maxPods` is still 250. To get to 500 `maxPods`, the cluster must be created with a `maxPods` set to `500` using a custom kubelet config. If you need 500 user pods, you need a `hostPrefix` of `22` because there are 10-15 system pods already running on the node. The maximum number of pods with attached persistent volume claims (PVC) depends on storage backend from where PVC are allocated. In our tests, only OpenShift Container Storage v4 (OCS v4) was able to satisfy the number of pods per node discussed in this document.
3. When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentaion, is highly recommended to free etcd storage.
4. There are a number of control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a given type in a single namespace can make those loops expensive and slow down processing given state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements.

View File

@@ -80,7 +80,7 @@ The values in the following table were tested independently of each other and re
|===
[.small]
--
1. The pod count displayed here is the number of test pods. The actual number of pods depends on the application’s memory, CPU, and storage requirements.
1. The pod count displayed here is the number of test pods. The actual number of pods depends on the application's memory, CPU, and storage requirements.
2. When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentaion, is highly recommended to free etcd storage.
3. There are a number of control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a given type in a single namespace can make those loops expensive and slow down processing given state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements.
4. Each service port and each service back end has a corresponding entry in iptables. The number of back ends of a given service impact the size of the endpoints objects, which impacts the size of data that is being sent all over the system.

View File

@@ -126,7 +126,7 @@ Minimum deployment =3
|requests:
cpu:
|Number of central processing units for requests, based on your environment’s configuration.
|Number of central processing units for requests, based on your environment's configuration.
|Specified in cores or millicores (for example, 200m, 0.5, 1).
|1Gi
|Proof of concept = 500m,
@@ -134,7 +134,7 @@ Minimum deployment =1
|requests:
memory:
|Available memory for requests, based on your environment’s configuration.
|Available memory for requests, based on your environment's configuration.
|Specified in bytes (for example, 200Ki, 50Mi, 5Gi).
|500m
|Proof of concept = 1Gi,
@@ -142,7 +142,7 @@ Minimum deployment = 16Gi*
|limits:
cpu:
|Limit on number of central processing units, based on your environment’s configuration.
|Limit on number of central processing units, based on your environment's configuration.
|Specified in cores or millicores (for example, 200m, 0.5, 1).
|
|Proof of concept = 500m,
@@ -150,7 +150,7 @@ Minimum deployment =1
|limits:
memory:
|Available memory limit based on your environment’s configuration.
|Available memory limit based on your environment's configuration.
|Specified in bytes (for example, 200Ki, 50Mi, 5Gi).
|
|Proof of concept = 1Gi,

View File

@@ -102,7 +102,7 @@ The following table lists the specifications for the `ServiceMeshControlPlane` r
|Not configurable
|`conditions`
|Represents the latest available observations of the object’s current state. `Reconciled` indicates whether the operator has finished reconciling the actual state of deployed components with the configuration in the `ServiceMeshControlPlane` resource. `Installed` indicates whether the control plane has been installed. `Ready` indicates whether all control plane components are ready.
|Represents the latest available observations of the object's current state. `Reconciled` indicates whether the operator has finished reconciling the actual state of deployed components with the configuration in the `ServiceMeshControlPlane` resource. `Installed` indicates whether the control plane has been installed. `Ready` indicates whether all control plane components are ready.
|string
|`components`

View File

@@ -78,24 +78,24 @@ These parameters are specific to the proxy subset of global parameters.
|`requests`
|`cpu`
|The amount of CPU resources requested for Envoy proxy.
|CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment’s configuration.
|CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment's configuration.
|`10m`
|
|`memory`
|The amount of memory requested for Envoy proxy
|Available memory in bytes(for example, 200Ki, 50Mi, 5Gi) based on your environment’s configuration.
|Available memory in bytes(for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration.
|`128Mi`
|Limits
|`cpu`
|The maximum amount of CPU resources requested for Envoy proxy.
|CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment’s configuration.
|CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment's configuration.
|`2000m`
|
|`memory`
|The maximum amount of memory Envoy proxy is permitted to use.
|Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment’s configuration.
|Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration.
|`1024Mi`
|===

View File

@@ -78,24 +78,24 @@ These parameters are specific to the proxy subset of global parameters.
|`requests`
|`cpu`
|The amount of CPU resources requested for Envoy proxy.
|CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment’s configuration.
|CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment's configuration.
|`10m`
|
|`memory`
|The amount of memory requested for Envoy proxy
|Available memory in bytes(for example, 200Ki, 50Mi, 5Gi) based on your environment’s configuration.
|Available memory in bytes(for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration.
|`128Mi`
|`limits`
|`cpu`
|The maximum amount of CPU resources requested for Envoy proxy.
|CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment’s configuration.
|CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment's configuration.
|`2000m`
|
|`memory`
|The maximum amount of memory Envoy proxy is permitted to use.
|Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment’s configuration.
|Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration.
|`1024Mi`
|===

View File

@@ -63,7 +63,7 @@ mixer:
|
|`memory`
|The amount of memory requested for Mixer telemetry.
|Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment’s configuration.
|Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration.
|`128Mi`
|Limits
@@ -75,6 +75,6 @@ mixer:
|
|`memory`
|The maximum amount of memory Mixer telemetry is permitted to use.
|Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment’s configuration.
|Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration.
|`4G`
|===

View File

@@ -63,7 +63,7 @@ mixer:
|
|`memory`
|The amount of memory requested for Mixer telemetry.
|Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment’s configuration.
|Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration.
|`128Mi`
|`limits`
@@ -75,6 +75,6 @@ mixer:
|
|`memory`
|The maximum amount of memory Mixer telemetry is permitted to use.
|Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment’s configuration.
|Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration.
|`4G`
|===

View File

@@ -30,7 +30,7 @@ Here is an example that illustrates the Istio Pilot parameters for the `ServiceM
|`memory`
|The amount of memory requested for Pilot.
|Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment’s configuration.
|Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration.
|`128Mi`
|`autoscaleEnabled`

View File

@@ -53,7 +53,7 @@ spec:
|`memory`
|The amount of memory requested for Pilot.
|Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment’s configuration.
|Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration.
|`128Mi`
|`autoscaleEnabled`

View File

@@ -73,7 +73,7 @@ Minimum deployment =3
|requests:
cpu:
|Number of central processing units for requests, based on your environment’s configuration.
|Number of central processing units for requests, based on your environment's configuration.
|Specified in cores or millicores (for example, 200m, 0.5, 1).
|1Gi
|Proof of concept = 500m,
@@ -81,7 +81,7 @@ Minimum deployment =1
|requests:
memory:
|Available memory for requests, based on your environment’s configuration.
|Available memory for requests, based on your environment's configuration.
|Specified in bytes (for example, 200Ki, 50Mi, 5Gi).
|500m
|Proof of concept = 1Gi,
@@ -89,7 +89,7 @@ Minimum deployment = 16Gi*
|limits:
cpu:
|Limit on number of central processing units, based on your environment’s configuration.
|Limit on number of central processing units, based on your environment's configuration.
|Specified in cores or millicores (for example, 200m, 0.5, 1).
|
|Proof of concept = 500m,
@@ -97,7 +97,7 @@ Minimum deployment =1
|limits:
memory:
|Available memory limit based on your environment’s configuration.
|Available memory limit based on your environment's configuration.
|Specified in bytes (for example, 200Ki, 50Mi, 5Gi).
|
|Proof of concept = 1Gi,

View File

@@ -17,10 +17,10 @@ In addition, Kiali depends on external services and components provided by the c
* *Red Hat Service Mesh* (Istio) - Istio is a Kiali requirement. Istio is the component that provides and controls the service mesh. Although Kiali and Istio can be installed separately, Kiali depends on Istio and will not work if it is not present. Kiali needs to retrieve Istio data and configurations, which are exposed through Prometheus and the cluster API.
* *Prometheus* - A dedicated Prometheus instance is included as part of the {ProductName} installation. When Istio telemetry is enabled, metrics data are stored in Prometheus. Kiali uses this Prometheus data to determine the mesh topology, display metrics, calculate health, show possible problems, and so on. Kiali communicates directly with Prometheus and assumes the data schema used by Istio Telemetry. Prometheus is an Istio dependency and a hard dependency for Kiali, and many of Kiali’s features will not work without Prometheus.
* *Prometheus* - A dedicated Prometheus instance is included as part of the {ProductName} installation. When Istio telemetry is enabled, metrics data are stored in Prometheus. Kiali uses this Prometheus data to determine the mesh topology, display metrics, calculate health, show possible problems, and so on. Kiali communicates directly with Prometheus and assumes the data schema used by Istio Telemetry. Prometheus is an Istio dependency and a hard dependency for Kiali, and many of Kiali's features will not work without Prometheus.
* *Cluster API* - Kiali uses the API of the {product-title} (cluster API) to fetch and resolve service mesh configurations. Kiali queries the cluster API to retrieve, for example, definitions for namespaces, services, deployments, pods, and other entities. Kiali also makes queries to resolve relationships between the different cluster entities. The cluster API is also queried to retrieve Istio configurations like virtual services, destination rules, route rules, gateways, quotas, and so on.
* *Jaeger* - Jaeger is optional, but is installed by default as part of the {ProductName} installation. When you install Jaeger as part of the default {ProductName} installation, the Kiali console includes a tab to display Jaeger’s tracing data. Note that tracing data will not be available if you disable Istio’s distributed tracing feature. Also note that user must have access to the namespace where the control plane is installed to view Jaeger data.
* *Jaeger* - Jaeger is optional, but is installed by default as part of the {ProductName} installation. When you install Jaeger as part of the default {ProductName} installation, the Kiali console includes a tab to display Jaeger's tracing data. Note that tracing data will not be available if you disable Istio's distributed tracing feature. Also note that user must have access to the namespace where the control plane is installed to view Jaeger data.
* *Grafana* - Grafana is optional, but is installed by default as part of the {ProductName} installation. When available, the metrics pages of Kiali display links to direct the user to the same metric in Grafana. Note that user must have access to the namespace where the control plane is installed to view links to the Grafana dashboard and view Grafana data.

View File

@@ -5,7 +5,7 @@
[id="ossm-mixer-policy-1x_{context}"]
= Updating Mixer policy enforcement
In previous versions of {ProductName}, Mixer’s policy enforcement was enabled by default. Mixer policy enforcement is now disabled by default. You must enable it before running policy tasks.
In previous versions of {ProductName}, Mixer's policy enforcement was enabled by default. Mixer policy enforcement is now disabled by default. You must enable it before running policy tasks.
.Prerequisites
* Access to the {product-title} Command-line Interface (CLI) also known as `oc`.

View File

@@ -5,7 +5,7 @@
[id="ossm-mixer-policy_{context}"]
= Updating Mixer policy enforcement
In previous versions of {ProductName}, Mixer’s policy enforcement was enabled by default. Mixer policy enforcement is now disabled by default. You must enable it before running policy tasks.
In previous versions of {ProductName}, Mixer's policy enforcement was enabled by default. Mixer policy enforcement is now disabled by default. You must enable it before running policy tasks.
.Prerequisites
* Access to the {product-title} Command-line Interface (CLI) also known as `oc`.

View File

@@ -81,7 +81,7 @@ spec:
[id="ossm-routing-routing-rules_{context}"]
=== Routing rules
The `http` section contains the virtual service’s routing rules which describe match conditions and actions for routing HTTP/1.1, HTTP2, and gRPC traffic sent to the destination as specified in the hosts field. A routing rule consists of the destination where you want the traffic to go and any specified match conditions.
The `http` section contains the virtual service's routing rules which describe match conditions and actions for routing HTTP/1.1, HTTP2, and gRPC traffic sent to the destination as specified in the hosts field. A routing rule consists of the destination where you want the traffic to go and any specified match conditions.
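As a rough, self-contained illustration (hostnames, subsets, and labels are made up, not taken from this commit), a virtual service with one match-based routing rule and a default route might look like this:

[source,yaml]
----
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews                  # destination host as known to the service registry
  http:
  - match:
    - headers:
        end-user:
          exact: jason       # match condition: requests from this user
    route:
    - destination:
        host: reviews
        subset: v2           # route matched traffic to the v2 subset
  - route:
    - destination:
        host: reviews
        subset: v1           # default route for everything else
----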
.Match condition
@@ -101,7 +101,7 @@ spec:
.Destination
The `destination` field in the route section specifies the actual destination for traffic that matches this condition. Unlike the virtual service’s host, the destination’s host must be a real destination that exists in the {ProductName} service registry. This can be a mesh service with proxies or a non-mesh service added using a service entry. In this example, the host name is a Kubernetes service name:
The `destination` field in the route section specifies the actual destination for traffic that matches this condition. Unlike the virtual service's host, the destination's host must be a real destination that exists in the {ProductName} service registry. This can be a mesh service with proxies or a non-mesh service added using a service entry. In this example, the host name is a Kubernetes service name:
[source,yaml]
----
@@ -122,7 +122,7 @@ spec:
[id="ossm-routing-dr_{context}"]
=== Destination rules
Destination rules are applied after virtual service routing rules are evaluated, so they apply to the traffic’s real destination. Virtual services route traffic to a destination. Destination rules configure what happens to traffic at that destination.
Destination rules are applied after virtual service routing rules are evaluated, so they apply to the traffic's real destination. Virtual services route traffic to a destination. Destination rules configure what happens to traffic at that destination.
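For example (a sketch with assumed names, not content from this commit), a destination rule that defines the subsets referenced by the routing rules above could look like this:

[source,yaml]
----
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-destination
spec:
  host: reviews              # the real destination in the service registry
  trafficPolicy:
    loadBalancer:
      simple: RANDOM         # load balancing policy applied at the destination
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
----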
[id="ossm-routing-lb_{context}"]
==== Load balancing options
@@ -200,7 +200,7 @@ spec:
This gateway configuration lets HTTPS traffic from `ext-host.example.com` into the mesh on port 443, but doesn’t specify any routing for the traffic.
To specify routing and for the gateway to work as intended, you must also bind the gateway to a virtual service. You do this using the virtual service’s gateways field, as shown in the following example:
To specify routing and for the gateway to work as intended, you must also bind the gateway to a virtual service. You do this using the virtual service's gateways field, as shown in the following example:
[source,yaml]
----

View File

@@ -32,7 +32,7 @@ A pod that uses a hostPath volume must be referenced by manual (static) provisio
<1> The name of the volume. This name is how it is identified by persistent volume claims or pods.
<2> Used to bind persistent volume claim requests to this persistent volume.
<3> The volume can be mounted as `read-write` by a single node.
<4> The configuration file specifies that the volume is at `/mnt/data` on the cluster’s node.
<4> The configuration file specifies that the volume is at `/mnt/data` on the cluster's node.
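The file that these callouts annotate is not reproduced in this hunk; a minimal sketch of such a hostPath persistent volume (all values are illustrative) could look like this:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume            # volume name referenced by claims and pods
spec:
  storageClassName: manual        # helps bind matching claims to this volume
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce                 # read-write by a single node
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data               # the path on the cluster node
----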
. Create the PV from the file:
+

View File

@@ -23,7 +23,7 @@ The internal load balancer relies on instance groups rather than the target pool
* The cluster IP address is internal only.
* One forwarding rule manages both the Kubernetes API and machine config server ports.
* The backend service is comprised of each zone’s instance group and, while it exists, the bootstrap instance group.
* The backend service is comprised of each zone's instance group and, while it exists, the bootstrap instance group.
* The firewall uses a single rule that is based on only internal source ranges.
[id="private-clusters-limitations-gcp_{context}"]

View File

@@ -11,7 +11,7 @@ By default, {product-title} is provisioned using publicly-accessible DNS and end
[id="private-clusters-about-dns_{context}"]
== DNS
If you install {product-title} on installer-provisioned infrastructure, the installation program creates records in a pre-existing public zone and, where possible, creates a private zone for the cluster’s own DNS resolution. In both the public zone and the private zone, the installation program or cluster creates DNS entries for `*.apps`, for the `Ingress` object, and `api`, for the API server.
If you install {product-title} on installer-provisioned infrastructure, the installation program creates records in a pre-existing public zone and, where possible, creates a private zone for the cluster's own DNS resolution. In both the public zone and the private zone, the installation program or cluster creates DNS entries for `*.apps`, for the `Ingress` object, and `api`, for the API server.
The `*.apps` records in the public and private zone are identical, so when you delete the public zone, the private zone seamlessly provides all DNS resolution for the cluster.

View File

@@ -135,7 +135,7 @@ Set up your environment.
** *Incorrect example*:
+
----
Let’s set up our environment.
Let's set up our environment.
----
[id="quick-start-content-guidelines-check-your-work-module_{context}"]

View File

@@ -101,7 +101,7 @@ To create machines by using Ignition, you need Ignition config files. The {produ
The way that Ignition configures machines is similar to how tools like https://cloud-init.io/[cloud-init] or Linux Anaconda https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/installation_guide/index#chap-kickstart-installations[kickstart] configure systems, but with some important differences:
* Ignition runs from an initial RAM disk that is separate from the system you are installing to. Because of that, Ignition can repartition disks, set up file systems, and perform other changes to the machine’s permanent file system. In contrast, cloud-init runs as part of a machine init system when the system boots, so making foundational changes to things like disk partitions cannot be done as easily. With cloud-init, it is also difficult to reconfigure the boot process while you are in the middle of the node boot process.
* Ignition runs from an initial RAM disk that is separate from the system you are installing to. Because of that, Ignition can repartition disks, set up file systems, and perform other changes to the machine's permanent file system. In contrast, cloud-init runs as part of a machine init system when the system boots, so making foundational changes to things like disk partitions cannot be done as easily. With cloud-init, it is also difficult to reconfigure the boot process while you are in the middle of the node boot process.
* Ignition is meant to initialize systems, not change existing systems. After a machine initializes and the kernel is running from the installed system, the Machine Config Operator from the {product-title} cluster completes all future machine configuration.
* Instead of completing a defined set of actions, Ignition implements a declarative configuration. It checks that all partitions, files, services, and other items are in place before the new machine starts. It then makes the changes, like copying files to disk that are necessary for the new machine to meet the specified configuration.
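As a loose illustration of this declarative style (a hypothetical example, not taken from this commit), a post-installation `MachineConfig` that declares a file rather than scripting its creation might look like this:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-example-file
  labels:
    machineconfiguration.openshift.io/role: worker   # apply to worker nodes
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/example.conf          # desired end state, not a script step
        mode: 0644
        contents:
          source: data:,example%20setting%3Dtrue
----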
@@ -134,5 +134,5 @@ The Ignition process for an {op-system} machine in an {product-title} cluster in
At the end of this process, the machine is ready to join the cluster and does not require a reboot.
////
After Ignition finishes its work on an individual machine, the kernel pivots to the installed system. The initial RAM disk is no longer used and the kernel goes on to run the init service to start up everything on the host from the installed disk. When the last machine under the bootstrap machine’s control is completed, and the services on those machines come up, the work of the bootstrap machine is over.
After Ignition finishes its work on an individual machine, the kernel pivots to the installed system. The initial RAM disk is no longer used and the kernel goes on to run the init service to start up everything on the host from the installed disk. When the last machine under the bootstrap machine's control is completed, and the services on those machines come up, the work of the bootstrap machine is over.
////

View File

@@ -33,7 +33,7 @@ Creation or update of RHEL content is not gated by the existence of the pull sec
|Placeholder to choose an architecture type.
|`skippedImagestreams`
|Image streams that are in the Cluster Samples Operator’s inventory but that the cluster administrator wants the Operator to ignore or not manage. You can add a list of image stream names to this parameter. For example, `["httpd","perl"]`.
|Image streams that are in the Cluster Samples Operator's inventory but that the cluster administrator wants the Operator to ignore or not manage. You can add a list of image stream names to this parameter. For example, `["httpd","perl"]`.
|`skippedTemplates`
|Templates that are in the Cluster Samples Operator's inventory, but that the cluster administrator wants the Operator to ignore or not manage.

View File

@@ -26,13 +26,13 @@ image::serverless-search.png[{ServerlessOperatorName} in the {product-title} web
.. Select the *stable* channel as the *Update Channel*. The *stable* channel will enable installation of the latest stable release of the {ServerlessOperatorName}.
.. Select *Automatic* or *Manual* approval strategy.
. Click *Install* to make the Operator available to the selected namespaces on this {product-title} cluster.
. From the *Catalog* -> *Operator Management* page, you can monitor the {ServerlessOperatorName} subscription’s installation and upgrade progress.
.. If you selected a *Manual* approval strategy, the subscription’s upgrade status will remain *Upgrading* until you review and approve its install plan. After approving on the *Install Plan* page, the subscription upgrade status moves to *Up to date*.
. From the *Catalog* -> *Operator Management* page, you can monitor the {ServerlessOperatorName} subscription's installation and upgrade progress.
.. If you selected a *Manual* approval strategy, the subscription's upgrade status will remain *Upgrading* until you review and approve its install plan. After approving on the *Install Plan* page, the subscription upgrade status moves to *Up to date*.
.. If you selected an *Automatic* approval strategy, the upgrade status should resolve to *Up to date* without intervention.
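For readers who prefer the CLI, an equivalent subscription can be expressed as a resource similar to the following sketch (the namespace and catalog source names are assumptions and may differ in your cluster):

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: serverless-operator
  namespace: openshift-serverless        # target namespace (assumed)
spec:
  channel: stable                        # the stable update channel
  name: serverless-operator
  source: redhat-operators               # catalog source (assumed)
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic         # or Manual
----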
.Verification
After the Subscription’s upgrade status is *Up to date*, select *Catalog* -> *Installed Operators* to verify that the {ServerlessOperatorName} eventually shows up and its *Status* ultimately resolves to *InstallSucceeded* in the relevant namespace.
After the Subscription's upgrade status is *Up to date*, select *Catalog* -> *Installed Operators* to verify that the {ServerlessOperatorName} eventually shows up and its *Status* ultimately resolves to *InstallSucceeded* in the relevant namespace.
If it does not:

View File

@@ -10,7 +10,7 @@
+
[NOTE]
====
If you have selected Manual updates, you will need to complete additional steps after updating the channel as described in this guide. The Subscription’s upgrade status will remain *Upgrading* until you review and approve its Install Plan. Information about the Install Plan can be found in the {product-title} Operators documentation.
If you have selected Manual updates, you will need to complete additional steps after updating the channel as described in this guide. The Subscription's upgrade status will remain *Upgrading* until you review and approve its Install Plan. Information about the Install Plan can be found in the {product-title} Operators documentation.
====
* You have logged in to the {product-title} web console.

View File

@@ -69,7 +69,7 @@ If an existing `toolbox` pod is already running, the `toolbox` command outputs `
----
$ tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap <1>
----
<1> The `tcpdump` capture file's path is outside of the `chroot` environment because the toolbox container mounts the host’s root directory at `/host`.
<1> The `tcpdump` capture file's path is outside of the `chroot` environment because the toolbox container mounts the host's root directory at `/host`.
. If a `tcpdump` capture is required for a specific container on the node, follow these steps.
.. Determine the target container ID. The `chroot host` command precedes the `crictl` command in this step because the toolbox container mounts the host's root directory at `/host`:
@@ -92,7 +92,7 @@ $ tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_$(date +%d_%m_%Y-%H_
----
# nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap <1>
----
<1> The `tcpdump` capture file's path is outside of the `chroot` environment because the toolbox container mounts the host’s root directory at `/host`.
<1> The `tcpdump` capture file's path is outside of the `chroot` environment because the toolbox container mounts the host's root directory at `/host`.
. Provide the `tcpdump` capture file to Red Hat Support for analysis, using one of the following methods.
+
@@ -103,7 +103,7 @@ $ tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_$(date +%d_%m_%Y-%H_
----
# redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-tcpdump-capture-file.pcap <1>
----
<1> The toolbox container mounts the host’s root directory at `/host`. Reference the absolute path from the toolbox container's root directory, including `/host/`, when specifying files to upload through the `redhat-support-tool` command.
<1> The toolbox container mounts the host's root directory at `/host`. Reference the absolute path from the toolbox container's root directory, including `/host/`, when specifying files to upload through the `redhat-support-tool` command.
+
* Upload the file to an existing Red Hat support case.
.. Concatenate the `sosreport` archive by running the `oc debug node/<node_name>` command and redirect the output to a file. This command assumes you have exited the previous `oc debug` session:
@@ -112,7 +112,7 @@ $ tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_$(date +%d_%m_%Y-%H_
----
$ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap <1>
----
<1> The debug container mounts the host’s root directory at `/host`. Reference the absolute path from the debug container's root directory, including `/host`, when specifying target files for concatenation.
<1> The debug container mounts the host's root directory at `/host`. Reference the absolute path from the debug container's root directory, including `/host`, when specifying target files for concatenation.
+
[NOTE]
====

View File

@@ -78,7 +78,7 @@ Your sosreport has been generated and saved in:
The checksum is: 382ffc167510fd71b4f12a4f40b97a4e
----
<1> The `sosreport` archive's file path is outside of the `chroot` environment because the toolbox container mounts the host’s root directory at `/host`.
<1> The `sosreport` archive's file path is outside of the `chroot` environment because the toolbox container mounts the host's root directory at `/host`.
. Provide the `sosreport` archive to Red Hat Support for analysis, using one of the following methods.
+
@@ -89,7 +89,7 @@ The checksum is: 382ffc167510fd71b4f12a4f40b97a4e
----
# redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-sosreport.tar.xz <1>
----
<1> The toolbox container mounts the host’s root directory at `/host`. Reference the absolute path from the toolbox container's root directory, including `/host/`, when specifying files to upload through the `redhat-support-tool` command.
<1> The toolbox container mounts the host's root directory at `/host`. Reference the absolute path from the toolbox container's root directory, including `/host/`, when specifying files to upload through the `redhat-support-tool` command.
+
* Upload the file to an existing Red Hat support case.
.. Concatenate the `sosreport` archive by running the `oc debug node/<node_name>` command and redirect the output to a file. This command assumes you have exited the previous `oc debug` session:
@@ -98,7 +98,7 @@ The checksum is: 382ffc167510fd71b4f12a4f40b97a4e
----
$ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz <1>
----
<1> The debug container mounts the host’s root directory at `/host`. Reference the absolute path from the debug container's root directory, including `/host`, when specifying target files for concatenation.
<1> The debug container mounts the host's root directory at `/host`. Reference the absolute path from the debug container's root directory, including `/host`, when specifying target files for concatenation.
+
[NOTE]
====

View File

@@ -25,7 +25,7 @@ When investigating {product-title} issues, Red Hat Support might ask you to uplo
----
$ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz <1>
----
<1> The debug container mounts the host’s root directory at `/host`. Reference the absolute path from the debug container's root directory, including `/host`, when specifying target files for concatenation.
<1> The debug container mounts the host's root directory at `/host`. Reference the absolute path from the debug container's root directory, including `/host`, when specifying target files for concatenation.
+
[NOTE]
====
@@ -81,4 +81,4 @@ If an existing `toolbox` pod is already running, the `toolbox` command outputs `
----
# redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-diagnostic-data.tar.gz <1>
----
<1> The toolbox container mounts the host’s root directory at `/host`. Reference the absolute path from the toolbox container's root directory, including `/host/`, when specifying files to upload through the `redhat-support-tool` command.
<1> The toolbox container mounts the host's root directory at `/host`. Reference the absolute path from the toolbox container's root directory, including `/host/`, when specifying files to upload through the `redhat-support-tool` command.

View File

@@ -31,7 +31,7 @@ endif::virt-cluster[]
* Information about the validity of certificates
* Number of application builds by build strategy type
Telemetry does not collect identifying information such as user names or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, please refer to the link:https://www.redhat.com/en/about/privacy-policy[Red Hat Privacy Statement] for more information about Red Hat’s privacy practices.
Telemetry does not collect identifying information such as user names or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, please refer to the link:https://www.redhat.com/en/about/privacy-policy[Red Hat Privacy Statement] for more information about Red Hat's privacy practices.
ifeval::["{context}" == "virt-openshift-cluster-monitoring"]
:!virt-cluster:

View File

@@ -21,7 +21,7 @@ No VMI is present when a virtual machine is created, which is the same behavior
Different combinations of the `start`, `stop` and `restart` virtctl commands affect which `RunStrategy` is used.
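For context, a sketch of where the run strategy lives on a virtual machine resource is shown here (this is an illustrative example, not part of this commit, and the API version and fields may differ by {VirtProductName} release); the `virtctl` commands in the table below effectively change this field:

[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  runStrategy: RerunOnFailure   # one of Always, RerunOnFailure, Manual, Halted
  template:
    spec:
      domain:
        devices: {}
        resources:
          requests:
            memory: 1Gi         # minimal guest definition for illustration
----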
The following table follows a VM’s transition from different states. The first column shows the VM's initial `RunStrategy`. Each additional column shows a virtctl command and the new `RunStrategy` after that command is run.
The following table follows a VM's transition from different states. The first column shows the VM's initial `RunStrategy`. Each additional column shows a virtctl command and the new `RunStrategy` after that command is run.
|===
|Initial RunStrategy |start |stop |restart

View File

@@ -57,7 +57,7 @@ However, note that `Reason` is `Completed` and the `Message` field indicates
In the `Events` section, the `Reason` and `Message` contain additional
troubleshooting information about the failed operation. In this example,
the `Message` displays an inability to connect due to a `404`, listed in the
`Events` section’s first `Warning`.
`Events` section's first `Warning`.
+
From this information, you conclude that an import operation was running,
creating contention for other operations that are

View File

@@ -10,7 +10,7 @@
[IMPORTANT]
====
The numbers noted in this documentation are based on Red Hat’s test methodology and setup. These numbers can vary based on your own individual setup and environments.
The numbers noted in this documentation are based on Red Hat's test methodology and setup. These numbers can vary based on your own individual setup and environments.
====
[id="memory-overhead_{context}"]

View File

@@ -5,7 +5,7 @@
[id="virt-configuring-guest-memory-overcommitment_{context}"]
= Configuring guest memory overcommitment
If your virtual workload requires more memory than available, you can use memory overcommitment to allocate all or most of the host’s memory to your virtual machine instances (VMIs). Enabling memory overcommitment means that you can maximize resources that are normally reserved for the host.
If your virtual workload requires more memory than available, you can use memory overcommitment to allocate all or most of the host's memory to your virtual machine instances (VMIs). Enabling memory overcommitment means that you can maximize resources that are normally reserved for the host.
For example, if the host has 32 GB RAM, you can use memory overcommitment to fit 8 virtual machines (VMs) with 4 GB RAM each. This allocation works under the assumption that the virtual machines will not use all of their memory at the same time.
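One possible shape of such a configuration is shown below as a sketch only; the exact fields depend on the {VirtProductName} version, and setting `memory.guest` higher than the memory request is the assumed overcommitment mechanism:

[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: overcommitted-vm
spec:
  template:
    spec:
      domain:
        memory:
          guest: 4Gi            # memory advertised to the guest
        resources:
          requests:
            memory: 2Gi         # memory actually requested from the host
        devices: {}
----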

View File

@@ -26,7 +26,7 @@ instance by starting it.
[NOTE]
====
A https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/[ReplicaSet]’s purpose is to guarantee the availability of a specified number of identical pods.
A https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/[ReplicaSet]'s purpose is to guarantee the availability of a specified number of identical pods.
ReplicaSet is not currently supported in {VirtProductName}.
====

View File

@@ -39,7 +39,7 @@ CRs are as follows:
|Pull secret for the disconnected registry.
|AgentClusterInstall
|Specifies the single node cluster’s configuration such as networking, number of supervisor (control plane) nodes, and so on.
|Specifies the single node cluster's configuration such as networking, number of supervisor (control plane) nodes, and so on.
|ClusterDeployment
|Defines the cluster name, domain, and other details.

View File

@@ -11,7 +11,7 @@ toc::[]
The _Downward API_ is a mechanism that allows containers to consume information
about API objects without coupling to {product-title}.
Such information includes the pod’s name, namespace, and resource values.
Such information includes the pod's name, namespace, and resource values.
Containers can consume information from the downward API using environment
variables or a volume plug-in.
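A minimal sketch of the environment-variable approach (the image name and variable names are illustrative, not taken from this commit) looks like this:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["/bin/sh", "-c", "echo ${MY_POD_NAME} ${MY_POD_NAMESPACE}"]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name        # pod name from the downward API
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace   # pod namespace from the downward API
----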

View File

@@ -7,7 +7,7 @@ toc::[]
Use the following topics to discover the different Source-to-Image (S2I), database, and other container images that are available for {product-title} users.
Red Hat official container images are provided in the Red Hat Registry at link:https://registry.redhat.io[registry.redhat.io]. {product-title}’s supported S2I, database, and Jenkins images are provided in the `openshift4` repository in the Red Hat Quay Registry. For example, `quay.io/openshift-release-dev/ocp-v4.0-<address>` is the name of the OpenShift Application Platform image.
Red Hat official container images are provided in the Red Hat Registry at link:https://registry.redhat.io[registry.redhat.io]. {product-title}'s supported S2I, database, and Jenkins images are provided in the `openshift4` repository in the Red Hat Quay Registry. For example, `quay.io/openshift-release-dev/ocp-v4.0-<address>` is the name of the OpenShift Application Platform image.
The xPaaS middleware images are provided in their respective product repositories on the Red Hat Registry but suffixed with a `-openshift`. For example, `registry.redhat.io/jboss-eap-6/eap64-openshift` is the name of the JBoss EAP image.

View File

@@ -11,7 +11,7 @@ used in active pods on the cluster. The CSO:
* Watches containers associated with pods on all or specified namespaces
* Queries the container registry where the containers came from for
vulnerability information, provided an image’s registry is running image
vulnerability information, provided an image's registry is running image
scanning (such as
link:https://quay.io[Quay.io] or a
link:https://access.redhat.com/products/red-hat-quay[Red Hat Quay] registry with Clair scanning)

View File

@@ -9,7 +9,7 @@ Installing the {ProductShortName} involves installing the OpenShift Elasticsearc
[NOTE]
====
Mixer’s policy enforcement is disabled by default. You must enable it to run policy tasks. See xref:../../service_mesh/v1x/prepare-to-deploy-applications-ossm.adoc#ossm-mixer-policy-1x_deploying-applications-ossm-v1x[Update Mixer policy enforcement] for instructions on enabling Mixer policy enforcement.
Mixer's policy enforcement is disabled by default. You must enable it to run policy tasks. See xref:../../service_mesh/v1x/prepare-to-deploy-applications-ossm.adoc#ossm-mixer-policy-1x_deploying-applications-ossm-v1x[Update Mixer policy enforcement] for instructions on enabling Mixer policy enforcement.
====
[NOTE]

View File

@@ -124,7 +124,7 @@ The following annotations are no longer supported in v2.0. If you are using one
* `sidecar.maistra.io/proxyMemoryLimit` has been replaced with `sidecar.istio.io/proxyMemoryLimit`
* `sidecar.istio.io/discoveryAddress` is no longer supported. Also, the default discovery address has moved from `pilot.<control_plane_namespace>.svc:15010` (or port 15011, if mtls is enabled) to `istiod-<smcp_name>.<control_plane_namespace>.svc:15012`.
* The health status port is no longer configurable and is hard-coded to 15021.
* If you were defining a custom status port, for example, `status.sidecar.istio.io/port`, you must remove the override before moving the workload to a v2.0 control plane. Readiness checks can still be disabled by setting the status port to `0`.
* Kubernetes Secret resources are no longer used to distribute client certificates for sidecars. Certificates are now distributed through Istiod’s SDS service. If you were relying on mounted secrets, they are no longer available for workloads in v2.0 control planes.
* Kubernetes Secret resources are no longer used to distribute client certificates for sidecars. Certificates are now distributed through Istiod's SDS service. If you were relying on mounted secrets, they are no longer available for workloads in v2.0 control planes.
[id="ossm-upgrading-differences-behavior_{context}"]
=== Behavioral changes
@@ -145,7 +145,7 @@ Policy resources must be migrated to new resource types for use with v2.0 contro
.Mutual TLS
Mutual TLS enforcement is accomplished using the `security.istio.io/v1beta1` PeerAuthentication resource. The legacy `spec.peers.mtls.mode` field maps directly to the new resource’s `spec.mtls.mode` field. Selection criteria has changed from specifying a service name in `spec.targets[x].name` to a label selector in `spec.selector.matchLabels`. In PeerAuthentication, the labels must match the selector on the services named in the targets list. Any port-specific settings will need to be mapped into `spec.portLevelMtls`.
Mutual TLS enforcement is accomplished using the `security.istio.io/v1beta1` PeerAuthentication resource. The legacy `spec.peers.mtls.mode` field maps directly to the new resource's `spec.mtls.mode` field. Selection criteria has changed from specifying a service name in `spec.targets[x].name` to a label selector in `spec.selector.matchLabels`. In PeerAuthentication, the labels must match the selector on the services named in the targets list. Any port-specific settings will need to be mapped into `spec.portLevelMtls`.
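For illustration (the namespace, labels, and port are assumptions, not content from this commit), a migrated policy following this pattern might look like:

[source,yaml]
----
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: bookinfo
spec:
  selector:
    matchLabels:
      app: ratings            # replaces the legacy spec.targets[x].name
  mtls:
    mode: STRICT              # replaces the legacy spec.peers.mtls.mode
  portLevelMtls:
    8080:
      mode: PERMISSIVE        # port-specific override
----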
.Authentication
@@ -174,7 +174,7 @@ AuthorizationPolicy includes configuration for both the selector to which the co
.ServiceMeshRbacConfig (maistra.io/v1)
This resource is replaced by using a `security.istio.io/v1beta1` AuthorizationPolicy resource with an empty spec.selector in the control plane’s namespace. This policy will be the default authorization policy applied to all workloads in the mesh. For specific migration details, see RbacConfig above.
This resource is replaced by using a `security.istio.io/v1beta1` AuthorizationPolicy resource with an empty spec.selector in the control plane's namespace. This policy will be the default authorization policy applied to all workloads in the mesh. For specific migration details, see RbacConfig above.
[id="ossm-upgrading-mig-mixer_{context}"]
=== Mixer plugins

View File

@@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[]
toc::[]
A hostPath volume in an {product-title} cluster mounts a file or directory from the host node’s filesystem into your pod. Most pods will not need a hostPath volume, but it does offer a quick option for testing should an application require it.
A hostPath volume in an {product-title} cluster mounts a file or directory from the host node's filesystem into your pod. Most pods will not need a hostPath volume, but it does offer a quick option for testing should an application require it.
[IMPORTANT]
====

View File

@@ -22,7 +22,7 @@ OpenShift Container Storage on top of Red Hat Hyperconverged Infrastructure (RHH
2+^| *Planning*
|What’s new, known issues, notable bug fixes, and Technology Previews
|What's new, known issues, notable bug fixes, and Technology Previews
|link:https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.7/html/4.7_release_notes/[Red Hat OpenShift Container Storage 4.7 Release Notes]
|Supported workloads, layouts, hardware and software requirements, sizing and scaling recommendations

View File

@@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[]
toc::[]
{product-title} allows use of VMware vSphere’s Virtual Machine Disk (VMDK) volumes. You can provision your {product-title} cluster with persistent storage using VMware vSphere. Some familiarity with Kubernetes and VMware vSphere is assumed.
{product-title} allows use of VMware vSphere's Virtual Machine Disk (VMDK) volumes. You can provision your {product-title} cluster with persistent storage using VMware vSphere. Some familiarity with Kubernetes and VMware vSphere is assumed.
VMware vSphere volumes can be provisioned dynamically. {product-title} creates the disk in vSphere and attaches this disk to the correct image.
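As a rough sketch of dynamic provisioning (the storage class name and disk format are assumptions), a vSphere-backed storage class can be declared like this:

[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: thin-example
provisioner: kubernetes.io/vsphere-volume   # in-tree vSphere volume provisioner
parameters:
  diskformat: thin                          # VMDK disk format for new volumes
----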

View File

@@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[]
toc::[]
Operators are a method of packaging, deploying, and managing an {product-title} application. They act like an extension of the software vendor’s engineering team, watching over an {product-title} environment and using its current state to make decisions in real time. Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, such as skipping a software backup process to save time.
Operators are a method of packaging, deploying, and managing an {product-title} application. They act like an extension of the software vendor's engineering team, watching over an {product-title} environment and using its current state to make decisions in real time. Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, such as skipping a software backup process to save time.
{product-title} {product-version} includes a default set of Operators that are required for proper functioning of the cluster. These default Operators are managed by the Cluster Version Operator (CVO).

View File

@@ -34,5 +34,5 @@ include::modules/quick-start-content-guidelines.adoc[leveloffset=+1]
[id="quick-start-tutorials-additional-resources"]
== Additional resources
* For voice and tone requirements, refer to link:https://www.patternfly.org/v4/ux-writing/brand-voice-and-tone[PatternFly’s brand voice and tone guidelines].
* For other UX content guidance, refer to all areas of link:https://www.patternfly.org/v4/ux-writing/about[PatternFly’s UX writing style guide].
* For voice and tone requirements, refer to link:https://www.patternfly.org/v4/ux-writing/brand-voice-and-tone[PatternFly's brand voice and tone guidelines].
* For other UX content guidance, refer to all areas of link:https://www.patternfly.org/v4/ux-writing/about[PatternFly's UX writing style guide].

View File

@@ -168,7 +168,7 @@ that offers OpenTracing observability for containerized services on
=== Advanced networking
The standard networking solutions in {product-title} are not supported with an
{oce} subscription. {product-title}’s Kubernetes CNI plug-in for automation of
{oce} subscription. {product-title}'s Kubernetes CNI plug-in for automation of
multi-tenant network segmentation between {product-title} projects is not
entitled for use with {oce}. {product-title} offers more granular control of the
source IP addresses that are used by application services on the cluster.
@@ -184,9 +184,9 @@ derived from the istio.io open source project, is not supported in {oce}.
With {oce}, the following capabilities are not supported:
* The developer experience utilities and tools.
* {product-title}’s pipeline feature that integrates a streamlined,
Kubernetes-enabled Jenkins experience in the user’s project space.
* The {product-title}’s source-to-image feature, which allows you to easily
* {product-title}'s pipeline feature that integrates a streamlined,
Kubernetes-enabled Jenkins experience in the user's project space.
* The {product-title}'s source-to-image feature, which allows you to easily
deploy source code, dockerfiles, or container images across the cluster.
* Build strategies, builder pods, or imagestreams for end user container
deployments.

View File

@@ -187,7 +187,7 @@ not supported in {oke}.
=== Advanced networking
The standard networking solutions in {product-title} are supported with an
{oke} subscription. {product-title}’s Kubernetes CNI plug-in for automation of
{oke} subscription. {product-title}'s Kubernetes CNI plug-in for automation of
multi-tenant network segmentation between {product-title} projects is
entitled for use with {oke}. {oke} offers all the granular control of the
source IP addresses that are used by application services on the cluster.
@@ -205,9 +205,9 @@ on {oke}.
With {oke}, the following capabilities are not supported:
* The CodeReady developer experience utilities and tools, such as CodeReady Workspaces.
* {product-title}’s pipeline feature that integrates a streamlined,
Kubernetes-enabled Jenkins and Tekton experience in the user’s project space.
* The {product-title}’s source-to-image feature, which allows you to easily
* {product-title}'s pipeline feature that integrates a streamlined,
Kubernetes-enabled Jenkins and Tekton experience in the user's project space.
* The {product-title}'s source-to-image feature, which allows you to easily
deploy source code, dockerfiles, or container images across the cluster.
* Build strategies, builder pods, or Tekton for end user container
deployments.

View File

@@ -15,7 +15,7 @@ See the link:https://github.com/openshift/okd/releases[*Releases*] page in the
xref:../operators/understanding/olm-what-operators-are.adoc#olm-what-operators-are[Operators]
are pieces of software that ease the operational complexity of running another
piece of software. They act like an extension of the software vendor’s
piece of software. They act like an extension of the software vendor's
engineering team, watching over a Kubernetes environment (such as
{product-title}) and using its current state to make decisions in real time.
Advanced Operators are designed to handle upgrades seamlessly, react to failures