OSDOCS-10115: Substituted 60+ contractions over 40+ files.
Committed by: openshift-cherrypick-robot
Parent: b96897b263
Commit: 8becf54314
@@ -9,7 +9,7 @@ endif::[]
 [id="builds-using-build-volumes_{context}"]
 = Using build volumes

-You can mount build volumes to give running builds access to information that you don't want to persist in the output container image.
+You can mount build volumes to give running builds access to information that you do not want to persist in the output container image.

 Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from xref:../../cicd/builds/creating-build-inputs.adoc#builds-define-build-inputs_creating-build-inputs[build inputs], whose data can persist in the output container image.

@@ -55,7 +55,7 @@ spec:
 attribute: value
 ----
 <1> Required. A unique name.
-<2> Required. The absolute path of the mount point. It must not contain `..` or `:` and doesn't collide with the destination path generated by the builder. The `/opt/app-root/src` is the default home directory for many Red Hat S2I-enabled images.
+<2> Required. The absolute path of the mount point. It must not contain `..` or `:` and does not collide with the destination path generated by the builder. The `/opt/app-root/src` is the default home directory for many Red Hat S2I-enabled images.
 <3> Required. The type of source, `ConfigMap`, `Secret`, or `CSI`.
 <4> Required. The name of the source.
 <5> Required. The driver that provides the ephemeral CSI volume.

@@ -105,7 +105,7 @@ spec:
 ----

 <1> Required. A unique name.
-<2> Required. The absolute path of the mount point. It must not contain `..` or `:` and doesn't collide with the destination path generated by the builder. The `/opt/app-root/src` is the default home directory for many Red Hat S2I-enabled images.
+<2> Required. The absolute path of the mount point. It must not contain `..` or `:` and does not collide with the destination path generated by the builder. The `/opt/app-root/src` is the default home directory for many Red Hat S2I-enabled images.
 <3> Required. The type of source, `ConfigMap`, `Secret`, or `CSI`.
 <4> Required. The name of the source.
 <5> Required. The driver that provides the ephemeral CSI volume.
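For reference, the five callouts above correspond to fields of a single build volume entry under a `BuildConfig` build strategy. A minimal sketch, with the resource names and mount path as assumptions:

[source,yaml]
----
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: example-build
spec:
  strategy:
    sourceStrategy:
      volumes:
      - name: repo-credentials                            # <1> unique name
        mounts:
        - destinationPath: /opt/app-root/src/credentials  # <2> absolute mount point
        source:
          type: Secret                                    # <3> ConfigMap, Secret, or CSI
          secret:
            secretName: my-repo-secret                    # <4> name of the source
----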
@@ -191,7 +191,7 @@ This option is unsupported if you use {rh-openstack} earlier than version 17 wit
 // | The id of the loadbalancer flavor to use. Uses octavia default if not set.

 // | `availability-zone`
-// | The name of the loadbalancer availability zone to use. The Octavia availability zone capabilities will not be used if it is not set. The parameter will be ignored if the Octavia version doesn't support availability zones yet.
+// | The name of the loadbalancer availability zone to use. The Octavia availability zone capabilities will not be used if it is not set. The parameter will be ignored if the Octavia version does not support availability zones yet.

 | `LoadBalancerClass "ClassName"`
 a| This is a config section that comprises a set of options:

@@ -17,7 +17,7 @@ You can view these alerting rules in the {product-title} web console.

 |`ElasticsearchClusterNotHealthy`
 |The cluster health status has been RED for at least 2 minutes. The cluster does not accept writes, shards may be missing, or the master
-node hasn't been elected yet.
+node has not been elected yet.
 |Critical

 |`ElasticsearchClusterNotHealthy`

@@ -79,7 +79,7 @@ The following values come from link:http://sourceware.org/git/?p=glibc.git;a=blo
 The two following values are not part of `syslog.h` but are widely used:

 * `8` = `trace`, trace-level messages, which are more verbose than `debug` messages.
-* `9` = `unknown`, when the logging system gets a value it doesn't recognize.
+* `9` = `unknown`, when the logging system gets a value it does not recognize.

 Map the log levels or priorities of other logging systems to their nearest match in the preceding list. For example, from link:https://docs.python.org/2.7/library/logging.html#logging-levels[python logging], you can match `CRITICAL` with `crit`, `ERROR` with `err`, and so on.
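As a quick reference, the Python-to-syslog mapping suggested above can be written out as a lookup table. Only `CRITICAL`/`crit` and `ERROR`/`err` are named in the source; the remaining rows follow the same nearest-match rule:

[source,yaml]
----
# Nearest-match mapping from Python logging levels to syslog priorities
CRITICAL: crit
ERROR: err
WARNING: warning
INFO: info
DEBUG: debug
----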
@@ -118,7 +118,7 @@ Starting pod/<node-name>-debug ...
 To use host binaries, run `chroot /host`

 Pod IP: <ip-address>
-If you don't see a command prompt, try pressing enter.
+If you do not see a command prompt, try pressing enter.

 sh-4.4#
 ----

@@ -96,7 +96,7 @@ Topics:
 ----

 . On the command line, run `asciibinder` from the root folder of openshift-docs.
-You don't have to add or commit your changes for asciibinder to run.
+You do not have to add or commit your changes for asciibinder to run.

 . After the asciibinder build completes, open up your browser and navigate to
 <YOUR-LOCAL-GIT-REPO-LOCATION>/openshift-docs/_preview/openshift-enterprise/my_first_mod_docs/my_guide/assembly_my-first-assembly.html

@@ -105,4 +105,4 @@ You don't have to add or commit your changes for asciibinder to run.
 contents from your module `My First Module`.

 NOTE: You can delete this branch now if you are done testing. This branch
-shouldn't be submitted to the upstream openshift-docs repository.
+should not be submitted to the upstream openshift-docs repository.
@@ -122,7 +122,7 @@ Component for displaying an error status popover.
 |Parameter Name |Description
 |`title` |(optional) status text
 |`iconOnly` |(optional) if true, only displays icon
-|`noTooltip` |(optional) if true, tooltip won't be displayed
+|`noTooltip` |(optional) if true, tooltip is not displayed
 |`className` |(optional) additional class name for the component
 |`popoverTitle` |(optional) title for popover
 |===

@@ -143,7 +143,7 @@ Component for displaying an information status popover.
 |Parameter Name |Description
 |`title` |(optional) status text
 |`iconOnly` |(optional) if true, only displays icon
-|`noTooltip` |(optional) if true, tooltip won't be displayed
+|`noTooltip` |(optional) if true, tooltip is not displayed
 |`className` |(optional) additional class name for the component
 |`popoverTitle` |(optional) title for popover
 |===

@@ -164,7 +164,7 @@ Component for displaying a progressing status popover.
 |Parameter Name |Description
 |`title` |(optional) status text
 |`iconOnly` |(optional) if true, only displays icon
-|`noTooltip` |(optional) if true, tooltip won't be displayed
+|`noTooltip` |(optional) if true, tooltip is not displayed
 |`className` |(optional) additional class name for the component
 |`popoverTitle` |(optional) title for popover
 |===

@@ -185,7 +185,7 @@ Component for displaying a success status popover.
 |Parameter Name |Description
 |`title` |(optional) status text
 |`iconOnly` |(optional) if true, only displays icon
-|`noTooltip` |(optional) if true, tooltip won't be displayed
+|`noTooltip` |(optional) if true, tooltip is not displayed
 |`className` |(optional) additional class name for the component
 |`popoverTitle` |(optional) title for popover
 |===

@@ -219,7 +219,7 @@ Hook that provides information about user access to a given resource. It returns

 React hook for consuming Console extensions with resolved `CodeRef` properties. This hook accepts the same argument(s) as `useExtensions` hook and returns an adapted list of extension instances, resolving all code references within each extension's properties.

-Initially, the hook returns an empty array. After the resolution is complete, the React component is re-rendered with the hook returning an adapted list of extensions. When the list of matching extensions changes, the resolution is restarted. The hook will continue to return the previous result until the resolution completes.
+Initially, the hook returns an empty array. After the resolution is complete, the React component is re-rendered with the hook returning an adapted list of extensions. When the list of matching extensions changes, the resolution is restarted. The hook continues to return the previous result until the resolution completes.

 The hook's result elements are guaranteed to be referentially stable across re-renders. It returns a tuple containing a list of adapted extension instances with resolved code references, a boolean flag indicating whether the resolution is complete, and a list of errors detected during the resolution.

@@ -364,7 +364,7 @@ A hook that provides a list of user-selected active TableColumns.
 TableColumns

 |`\{boolean} [options.showNamespaceOverride]` |(optional) If true, a
-namespace column will be included, regardless of column management
+namespace column is included, regardless of column management
 selections

 |`\{string} [options.columnManagementID]` |(optional) A unique ID

@@ -757,7 +757,7 @@ const Component: React.FC = () => {
 |===
 |Parameter Name |Description
 |`initResources` |Resources must be watched as key-value pair,
-wherein key will be unique to resource and value will be options needed
+wherein key is unique to resource and value is options needed
 to watch for the respective resource.
 |===
@@ -837,8 +837,8 @@ model. In case of failure, the promise gets rejected with HTTP error response.

 |`options.model` |k8s model

-|`options.name` |The name of the resource, if not provided then it will
-look for all the resources matching the model.
+|`options.name` |The name of the resource, if not provided then it
+looks for all the resources matching the model.

 |`options.ns` | The namespace to look into, should not be specified
 for cluster-scoped resources.

@@ -949,7 +949,7 @@ request headers, method, redirect, etc. See link:{power-bi-url}[Interface Reques

 |`options.json` |Can control garbage collection of resources
-explicitly if provided else will default to model's "propagationPolicy".
+explicitly if provided or else it defaults to the model's "propagationPolicy".
 |===

 [discrete]

@@ -990,7 +990,7 @@ Provides apiVersion for a k8s model.
 [discrete]
 == `getGroupVersionKindForResource`

-Provides a group, version, and kind for a resource. It returns the group, version, kind for the provided resource. If the resource does not have an API group, group "core" will be returned. If the resource has an invalid apiVersion, then it will throw an Error.
+Provides a group, version, and kind for a resource. It returns the group, version, kind for the provided resource. If the resource does not have an API group, group "core" is returned. If the resource has an invalid apiVersion, then it throws an Error.

 [cols=",",options="header",]
 |===

@@ -1001,7 +1001,7 @@ Provides a group, version, and kind for a resource. It returns the group, versio
 [discrete]
 == `getGroupVersionKindForModel`

-Provides a group, version, and kind for a k8s model. This returns the group, version, kind for the provided model. If the model does not have an apiGroup, group "core" will be returned.
+Provides a group, version, and kind for a k8s model. This returns the group, version, kind for the provided model. If the model does not have an apiGroup, group "core" is returned.

 [cols=",",options="header",]
 |===

@@ -1294,7 +1294,7 @@ the editor. This prop is used only during the initial render

 |`header` |Add a header on top of the YAML editor

-|`onSave` |Callback for the Save button. Passing it will override the
+|`onSave` |Callback for the Save button. Passing it overrides the
 default update performed on the resource by the editor
 |===
@@ -1404,7 +1404,7 @@ Component that allows to receive contributions from other plugins for the `conso
 [discrete]
 == `NamespaceBar`

-A component that renders a horizontal toolbar with a namespace dropdown menu in the leftmost position. Additional components can be passed in as children and will be rendered to the right of the namespace dropdown. This component is designed to be used at the top of the page. It should be used on pages where the user needs to be able to change the active namespace, such as on pages with k8s resources.
+A component that renders a horizontal toolbar with a namespace dropdown menu in the leftmost position. Additional components can be passed in as children and are rendered to the right of the namespace dropdown. This component is designed to be used at the top of the page. It should be used on pages where the user needs to be able to change the active namespace, such as on pages with k8s resources.

 .Example
 [source,text]

@@ -1429,7 +1429,7 @@ namespace option is selected. It accepts the new namespace in the form
 of a string as its only argument. The active namespace is updated
 automatically when an option is selected, but additional logic can be
 applied via this function. When the namespace is changed, the namespace
-parameter in the URL will be changed from the previous namespace to the
+parameter in the URL is changed from the previous namespace to the
 newly selected namespace.

 |`isDisabled` |(optional) A boolean flag that disables the namespace

@@ -1493,14 +1493,14 @@ A component that renders a graph of the results from a Prometheus PromQL query a
 |`customDataSource` |(optional) Base URL of an API endpoint that handles PromQL queries. If provided, this is used instead of the default API for fetching data.
 |`defaultSamples` |(optional) The default number of data samples plotted for each data series. If there are many data series, QueryBrowser might automatically pick a lower number of data samples than specified here.
 |`defaultTimespan` |(optional) The default timespan for the graph in milliseconds - defaults to 1,800,000 (30 minutes).
-|`disabledSeries` |(optional) Disable (don't display) data series with these exact label / value pairs.
+|`disabledSeries` |(optional) Disable (do not display) data series with these exact label / value pairs.
 |`disableZoom` |(optional) Flag to disable the graph zoom controls.
 |`filterLabels` |(optional) Optionally filter the returned data series to only those that match these label / value pairs.
 |`fixedEndTime` |(optional) Set the end time for the displayed time range rather than showing data up to the current time.
 |`formatSeriesTitle` |(optional) Function that returns a string to use as the title for a single data series.
 |`GraphLink` |(optional) Component for rendering a link to another page (for example getting more information about this query).
 |`hideControls` |(optional) Flag to hide the graph controls for changing the graph timespan, and so on.
-|`isStack` |(optional) Flag to display a stacked graph instead of a line graph. If showStackedControl is set, it will still be possible for the user to switch to a line graph.
+|`isStack` |(optional) Flag to display a stacked graph instead of a line graph. If showStackedControl is set, it is still possible for the user to switch to a line graph.
 |`namespace` |(optional) If provided, data is only returned for this namespace (only series that have this namespace label).
 |`onZoom` |(optional) Callback called when the graph is zoomed.
 |`pollInterval` |(optional) If set, determines how often the graph is updated to show the latest data (in milliseconds).

@@ -1533,7 +1533,7 @@ const PodAnnotationsButton = ({ pod }) => {
 |===

 .Returns
-A function which will launch a modal for editing a resource's annotations.
+A function which launches a modal for editing a resource's annotations.

 [discrete]
 == `useDeleteModal`

@@ -1561,7 +1561,7 @@ const DeletePodButton = ({ pod }) => {
 |===

 .Returns
-A function which will launch a modal for deleting a resource.
+A function which launches a modal for deleting a resource.

 [discrete]
 == `useLabelsModel`

@@ -1585,7 +1585,7 @@ const PodLabelsButton = ({ pod }) => {
 |===

 .Returns
-A function which will launch a modal for editing a resource's labels.
+A function which launches a modal for editing a resource's labels.

 [discrete]
 == `useActiveNamespace`
@@ -24,7 +24,7 @@ The current release adds the following improvements.

 * As an administrative user, when you give Argo CD access to a namespace by using the `argocd.argoproj.io/managed-by` label, it assumes namespace-admin privileges. These privileges are an issue for administrators who provide namespaces to non-administrators, such as development teams, because the privileges enable non-administrators to modify objects such as network policies.
 +
-With this update, administrators can configure a common cluster role for all the managed namespaces. In role bindings for the Argo CD application controller, the Operator refers to the `CONTROLLER_CLUSTER_ROLE` environment variable. In role bindings for the Argo CD server, the Operator refers to the `SERVER_CLUSTER_ROLE` environment variable. If these environment variables contain custom roles, the Operator doesn't create the default admin role. Instead, it uses the existing custom role for all managed namespaces. link:https://issues.redhat.com/browse/GITOPS-1290[GITOPS-1290]
+With this update, administrators can configure a common cluster role for all the managed namespaces. In role bindings for the Argo CD application controller, the Operator refers to the `CONTROLLER_CLUSTER_ROLE` environment variable. In role bindings for the Argo CD server, the Operator refers to the `SERVER_CLUSTER_ROLE` environment variable. If these environment variables contain custom roles, the Operator does not create the default admin role. Instead, it uses the existing custom role for all managed namespaces. link:https://issues.redhat.com/browse/GITOPS-1290[GITOPS-1290]

 * With this update, the *Environments* page in the {product-title} *Developer* perspective displays a broken heart icon to indicate degraded resources, excluding ones whose status is `Progressing`, `Missing`, and `Unknown`. The console displays a yellow yield sign icon to indicate out-of-sync resources. link:https://issues.redhat.com/browse/GITOPS-1307[GITOPS-1307]

@@ -37,9 +37,9 @@ The following issues have been resolved in the current release:

 * Before this update, setting a resource quota in the namespace of the Argo CD custom resource might cause the setup of the Red Hat SSO (RH SSO) instance to fail. This update fixes this issue by setting a minimum resource request for the RH SSO deployment pods. link:https://issues.redhat.com/browse/GITOPS-1297[GITOPS-1297]

-* Before this update, if you changed the log level for the `argocd-repo-server` workload, the Operator didn't reconcile this setting. The workaround was to delete the deployment resource so that the Operator recreated it with the new log level. With this update, the log level is correctly reconciled for existing `argocd-repo-server` workloads. link:https://issues.redhat.com/browse/GITOPS-1387[GITOPS-1387]
+* Before this update, if you changed the log level for the `argocd-repo-server` workload, the Operator did not reconcile this setting. The workaround was to delete the deployment resource so that the Operator recreated it with the new log level. With this update, the log level is correctly reconciled for existing `argocd-repo-server` workloads. link:https://issues.redhat.com/browse/GITOPS-1387[GITOPS-1387]

-* Before this update, if the Operator managed an Argo CD instance that lacked the `.data` field in the `argocd-secret` Secret, the Operator on that instance crashed. This update fixes the issue so that the Operator doesn't crash when the `.data` field is missing. Instead, the secret regenerates and the `gitops-operator-controller-manager` resource is redeployed. link:https://issues.redhat.com/browse/GITOPS-1402[GITOPS-1402]
+* Before this update, if the Operator managed an Argo CD instance that lacked the `.data` field in the `argocd-secret` Secret, the Operator on that instance crashed. This update fixes the issue so that the Operator does not crash when the `.data` field is missing. Instead, the secret regenerates and the `gitops-operator-controller-manager` resource is redeployed. link:https://issues.redhat.com/browse/GITOPS-1402[GITOPS-1402]

 * Before this update, the `gitopsservice` service was annotated as an internal object. This update removes the annotation so you can update or delete the default Argo CD instance and run GitOps workloads on infrastructure nodes by using the UI. link:https://issues.redhat.com/browse/GITOPS-1429[GITOPS-1429]
@@ -96,7 +96,7 @@ Find the subnet ID and replace it with the ID of the private subnet created in t

 * Specify MTU value for the Network Provider
 +
-Outpost service links support a maximum packet size of 1300 bytes. It's required to modify the MTU of the Network Provider to follow this requirement.
+Outpost service links support a maximum packet size of 1300 bytes. You must modify the MTU of the Network Provider to follow this requirement.
 Create a new file under the manifests directory and name the file `cluster-network-03-config.yml`. For the OVN-Kubernetes network provider, set the MTU value to 1200.
 +
 [source,yaml]
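The YAML body of this manifest is cut off in the hunk. A sketch of what a `cluster-network-03-config.yml` that sets the OVN-Kubernetes MTU to 1200 could look like, assuming the standard cluster `Network` operator manifest:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      mtu: 1200  # stay under the 1300-byte Outpost service link limit
----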
@@ -415,7 +415,7 @@ ifdef::aws-outposts[]

 * Unlike AWS Regions, which offer near-infinite scale, AWS Outposts are limited by their provisioned capacity, EC2 family and generations, configured instance sizes, and availability of compute capacity that is not already consumed by other workloads. Therefore, when creating new {product-title} cluster, you need to provide the supported instance type in the `compute.platform.aws.type` section in the configuration file.
 * When deploying {product-title} cluster with remote workers running in AWS Outposts, only one Availability Zone can be used for the compute instances - the Availability Zone in which the Outpost instance was created in. Therefore, when creating new {product-title} cluster, it is recommended to provide the relevant Availability Zone in the `compute.platform.aws.zones` section in the configuration file, in order to limit the compute instances to this Availability Zone.
-* Amazon Elastic Block Store (EBS) gp3 volumes aren't supported by the AWS Outposts service. This volume type is the default type used by the {product-title} cluster. Therefore, when creating new {product-title} cluster, you must change the volume type in the `compute.platform.aws.rootVolume.type` section to gp2.
+* Amazon Elastic Block Store (EBS) gp3 volumes are not supported by the AWS Outposts service. This volume type is the default type used by the {product-title} cluster. Therefore, when creating new {product-title} cluster, you must change the volume type in the `compute.platform.aws.rootVolume.type` section to gp2.
 You will find more information about how to change these values below.
 endif::aws-outposts[]
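Taken together, the constraints above touch three `install-config.yaml` fields. A sketch with hypothetical values:

[source,yaml]
----
compute:
- name: worker
  platform:
    aws:
      type: m5.xlarge     # an instance type supported by the Outpost
      zones:
      - us-east-1a        # the Availability Zone the Outpost was created in
      rootVolume:
        type: gp2         # gp3 is not supported by AWS Outposts
----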
@@ -20,7 +20,7 @@ Create three control plane machines by using the Ignition config files that you

 . On a command line, change the working directory to the location of the playbooks.

-. If the control plane Ignition config files aren't already in your working directory, copy them into it.
+. If the control plane Ignition config files are not already in your working directory, copy them into it.

 . On a command line, run the `control-plane.yaml` playbook:
 +

@@ -253,7 +253,7 @@ spec:
 ----
 <1> The etcd database must be mounted by the device, not a label, to ensure that `systemd` generates the device dependency used in this config to trigger filesystem creation.
 <2> Do not run if the file system `dev/disk/by-label/local-etcd` already exists.
-<3> Fails with an alert message if `/dev/disk/by-label/ephemeral0` doesn't exist.
+<3> Fails with an alert message if `/dev/disk/by-label/ephemeral0` does not exist.
 <4> Migrates existing data to local etcd database. This config does so after `/var/lib/etcd` is mounted, but before CRI-O starts so etcd is not running yet.
 <5> Requires that etcd is mounted and does not contain a member directory, but the ostree does.
 <6> Cleans up any previous migration state.

@@ -143,7 +143,7 @@ Adjust the commands to match your actual interface names and gateway.
 $ ping <remote_worker_node_ip_address>
 ----
 +
-If the ping is successful, it means the control plane nodes in the first subnet can reach the remote worker nodes in the second subnet. If you don't receive a response, review the network configurations and repeat the procedure for the node.
+If the ping is successful, it means the control plane nodes in the first subnet can reach the remote worker nodes in the second subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node.

 .. From the remote worker nodes in the second subnet, ping a control plane node in the first subnet by running the following command:
 +

@@ -152,4 +152,4 @@ If the ping is successful, it means the control plane nodes in the first subnet
 $ ping <control_plane_node_ip_address>
 ----
 +
-If the ping is successful, it means the remote worker nodes in the second subnet can reach the control plane in the first subnet. If you don't receive a response, review the network configurations and repeat the procedure for the node.
+If the ping is successful, it means the remote worker nodes in the second subnet can reach the control plane in the first subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node.
@@ -6,7 +6,7 @@
 [id='ipi-install-modifying-install-config-for-slaac-dual-stack-network_{context}']
 = Optional: Configuring address generation modes for SLAAC in dual-stack networks

-For dual-stack clusters that use Stateless Address AutoConfiguration (SLAAC), you must specify a global value for the `ipv6.addr-gen-mode` network setting. You can set this value using NMState to configure the ramdisk and the cluster configuration files. If you don't configure a consistent `ipv6.addr-gen-mode` in these locations, IPv6 address mismatches can occur between CSR resources and `BareMetalHost` resources in the cluster.
+For dual-stack clusters that use Stateless Address AutoConfiguration (SLAAC), you must specify a global value for the `ipv6.addr-gen-mode` network setting. You can set this value using NMState to configure the RAM disk and the cluster configuration files. If you do not configure a consistent `ipv6.addr-gen-mode` in these locations, IPv6 address mismatches can occur between CSR resources and `BareMetalHost` resources in the cluster.

 .Prerequisites
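A minimal NMState snippet that pins a consistent `ipv6.addr-gen-mode`; the interface name and mode value are assumptions:

[source,yaml]
----
interfaces:
- name: eth0
  ipv6:
    addr-gen-mode: eui64  # must match in both the RAM disk and the cluster configuration files
----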
@@ -179,7 +179,7 @@ record. `mail` or `sAMAccountName` are preferred choices in most installations.
 |string array

 |`tolerateMemberNotFoundErrors`
-|Determines the behavior of the LDAP sync job when missing user entries are encountered. If `true`, an LDAP query for users that does not find any will be tolerated and an only and error will be logged. If `false`, the LDAP sync job will fail if a query for users doesn't find any. The default value is `false`. Misconfigured LDAP sync jobs with this flag set to `true` can cause group membership to be removed, so it is recommended to use this flag with caution.
+|Determines the behavior of the LDAP sync job when missing user entries are encountered. If `true`, an LDAP query for users that does not find any will be tolerated and only an error will be logged. If `false`, the LDAP sync job will fail if a query for users does not find any. The default value is `false`. Misconfigured LDAP sync jobs with this flag set to `true` can cause group membership to be removed, so it is recommended to use this flag with caution.
 |boolean

 |`tolerateMemberOutOfScopeErrors`
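For orientation, this flag sits in the group sync configuration file, shown here as a sketch under the `rfc2307` schema section with connection and query details omitted:

[source,yaml]
----
kind: LDAPSyncConfig
apiVersion: v1
url: ldap://ldap.example.com:389
rfc2307:
  tolerateMemberNotFoundErrors: false   # fail the sync job when a user query finds nothing
  tolerateMemberOutOfScopeErrors: false
----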
@@ -6,7 +6,7 @@
 [id="logging-loki-zone-fail-recovery_{context}"]
 = Recovering Loki pods from failed zones

-In {product-title} a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider's data center, aimed at enhancing redundancy and fault tolerance. If your {product-title} cluster isn't configured to handle this, a zone failure can lead to service or data loss.
+In {product-title} a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider's data center, aimed at enhancing redundancy and fault tolerance. If your {product-title} cluster is not configured to handle this, a zone failure can lead to service or data loss.

 Loki pods are part of a link:https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/[StatefulSet], and they come with Persistent Volume Claims (PVCs) provisioned by a `StorageClass` object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone.

@@ -26,7 +26,7 @@ With this update, slashes are replaced with underscores, resolving the issue. (l

 * Before this update, the Cluster Logging Operator terminated unexpectedly when set to an unmanaged state. With this update, a check to ensure that the `ClusterLogging` resource is in the correct Management state before initiating the reconciliation of the `ClusterLogForwarder` CR, resolving the issue. (link:https://issues.redhat.com/browse/LOG-4177[LOG-4177])

-* Before this update, when viewing logs within the {product-title} web console, selecting a time range by dragging over the histogram didn't work on the aggregated logs view inside the pod detail. With this update, the time range can be selected by dragging on the histogram in this view. (link:https://issues.redhat.com/browse/LOG-4108[LOG-4108])
+* Before this update, when viewing logs within the {product-title} web console, selecting a time range by dragging over the histogram did not work on the aggregated logs view inside the pod detail. With this update, the time range can be selected by dragging on the histogram in this view. (link:https://issues.redhat.com/browse/LOG-4108[LOG-4108])

 * Before this update, when viewing logs within the {product-title} web console, queries longer than 30 seconds timed out. With this update, the timeout value can be configured in the configmap/logging-view-plugin. (link:https://issues.redhat.com/browse/LOG-3498[LOG-3498])

@@ -24,7 +24,7 @@ $ oc describe pvc <pvc_name> <1>
 +
 - *FailedMount or FailedUnMount:* This error indicates problems when trying to mount the volume to a node or unmount a volume from a node. If the disk has failed, this error might appear when a pod tries to use the PVC.
 +
-- *Volume is already exclusively attached to one node and can't be attached to another:* This error can appear with storage solutions that do not support `ReadWriteMany` access modes.
+- *Volume is already exclusively attached to one node and cannot be attached to another:* This error can appear with storage solutions that do not support `ReadWriteMany` access modes.

 . Establish a direct connection to the host where the problem is occurring.
@@ -24,7 +24,7 @@ NAME AGE
 my-lvmcluster 65m
 ----

-. If the cluster doesn't have an `LVMCluster` resource, create one by running the following command:
+. If the cluster does not have an `LVMCluster` resource, create one by running the following command:
 +
 [source,terminal]
 ----
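The command body is cut off in this hunk. A minimal `LVMCluster` manifest of the kind being created might look like the following; the device-class values are assumptions:

[source,yaml]
----
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
  storage:
    deviceClasses:
    - name: vg1
      thinPoolConfig:
        name: thin-pool-1
        sizePercent: 90
        overprovisionRatio: 10
----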
@@ -61,7 +61,7 @@ To manually configure, or disable OAuth in the Reporting Operator, you must set
 This also disables all TLS and authentication between the Reporting Operator, Presto, and Hive. You would need to manually configure these resources yourself.
 ====

-Authentication can be enabled by configuring the following options. Enabling authentication configures the Reporting Operator pod to run the {product-title} auth-proxy as a sidecar container in the pod. This adjusts the ports so that the reporting API isn't exposed directly, but instead is proxied to via the auth-proxy sidecar container.
+Authentication can be enabled by configuring the following options. Enabling authentication configures the Reporting Operator pod to run the {product-title} auth-proxy as a sidecar container in the pod. This adjusts the ports so that the reporting API is not exposed directly, but instead is proxied via the auth-proxy sidecar container.

 * `reporting-operator.spec.authProxy.enabled`
 * `reporting-operator.spec.authProxy.cookie.createSecret`
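A sketch of how the two listed options could look together in the Reporting Operator configuration; the surrounding nesting is an assumption read off the option paths:

[source,yaml]
----
reporting-operator:
  spec:
    authProxy:
      enabled: true        # run the auth-proxy sidecar in the Reporting Operator pod
      cookie:
        createSecret: true # auto-create the cookie seed secret
----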
@@ -49,7 +49,7 @@ $ crane tunnel-api [--namespace <namespace>] \
 --source-context <source-cluster>
 ----
 +
-If you don't specify a namespace, the command uses the default value `openvpn`.
+If you do not specify a namespace, the command uses the default value `openvpn`.
 +
 For example:
 +

@@ -88,7 +88,7 @@ of the openshift-docs repository. These modules must follow the file naming
 conventions specified in the
 link:https://redhat-documentation.github.io/modular-docs/[modular docs guidelines].

-* All assemblies must go in the relevant guide/book. If you can't find a relevant
+* All assemblies must go in the relevant guide/book. If you cannot find a relevant
 guide/book, reach out to a member of the OpenShift CCS team. So guides/books contain assemblies, which
 contain modules.
@@ -4,7 +4,7 @@
 :_mod-docs-content-type: REFERENCE
 [id="network-observability-without-loki_{context}"]
 = Network Observability without Loki
-You can use Network Observability without Loki by not performing the Loki installation steps and skipping directly to "Installing the Network Observability Operator". If you only want to export flows to a Kafka consumer or IPFIX collector, or you only need dashboard metrics, then you do not need to install Loki or provide storage for Loki. Without Loki, there won't be a Network Traffic panel under Observe, which means there is no overview charts, flow table, or topology. The following table compares available features with and without Loki:
+You can use Network Observability without Loki by not performing the Loki installation steps and skipping directly to "Installing the Network Observability Operator". If you only want to export flows to a Kafka consumer or IPFIX collector, or you only need dashboard metrics, then you do not need to install Loki or provide storage for Loki. Without Loki, there is no *Network Traffic* panel under *Observe*, which means there are no overview charts, flow table, or topology. The following table compares available features with and without Loki:

 .Comparison of feature availability with and without Loki
 [options="header"]
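Opting out of Loki is a deployment-time choice. A sketch, assuming the `FlowCollector` resource exposes a `loki.enable` switch (API version and field name are assumptions):

[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  loki:
    enable: false  # export flows and metrics only; no Network Traffic panel
----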
@@ -35,7 +35,7 @@ ifdef::openshift-rosa,openshift-dedicated[]
 Although the default configuration works in most cases, {product-title} offers a second worker latency profile for situations where the network is experiencing higher latency than usual. The two worker latency profiles are described in the following sections:
 endif::openshift-rosa,openshift-dedicated[]

-Default worker latency profile:: With the `Default` profile, each `Kubelet` updates it's status every 10 seconds (`node-status-update-frequency`). The `Kube Controller Manager` checks the statuses of `Kubelet` every 5 seconds (`node-monitor-grace-period`).
+Default worker latency profile:: With the `Default` profile, each `Kubelet` updates its status every 10 seconds (`node-status-update-frequency`). The `Kube Controller Manager` checks the statuses of `Kubelet` every 5 seconds (`node-monitor-grace-period`).
 +
 The Kubernetes Controller Manager waits 40 seconds for a status update from `Kubelet` before considering the `Kubelet` unhealthy. If no status is made available to the Kubernetes Controller Manager, it then marks the node with the `node.kubernetes.io/not-ready` or `node.kubernetes.io/unreachable` taint and evicts the pods on that node.
 +
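Worker latency profiles are selected cluster-wide. A sketch, assuming the `Node` configuration resource's `workerLatencyProfile` field:

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  workerLatencyProfile: Default  # switch to a higher-latency profile when the network is slow
----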
@@ -85,7 +85,7 @@ source: <1>
 * `ExternalName`
 <4> Ensures that the controller considers only those resources which matches with label filter.
 <5> The default value for `hostnameAnnotation` is `Ignore` which instructs `ExternalDNS` to generate DNS records using the templates specified in the field `fqdnTemplates`. When the value is `Allow` the DNS records get generated based on the value specified in the `external-dns.alpha.kubernetes.io/hostname` annotation.
-<6> The External DNS Operator uses a string to generate DNS names from sources that don't define a hostname, or to add a hostname suffix when paired with the fake source.
+<6> The External DNS Operator uses a string to generate DNS names from sources that do not define a hostname, or to add a hostname suffix when paired with the fake source.

 [source,yaml]
 ----
@@ -7,7 +7,7 @@

 You can set the external traffic policy (ETP) parameter, `.spec.externalTrafficPolicy`, on a load balancer service to preserve the source IP address of incoming traffic when it reaches service endpoint pods. However, if your cluster uses the Amphora Octavia provider, the source IP of the traffic is replaced with the IP address of the Amphora VM. This behavior does not occur if your cluster uses the OVN Octavia provider.

-Having the `ETP` option set to `Local` requires that health monitors be created for the load balancer. Without health monitors, traffic can be routed to a node that doesn't have a functional endpoint, which causes the connection to drop. To force Cloud Provider OpenStack to create health monitors, you must set the value of the `create-monitor` option in the cloud provider configuration to `true`.
+Having the `ETP` option set to `Local` requires that health monitors be created for the load balancer. Without health monitors, traffic can be routed to a node that does not have a functional endpoint, which causes the connection to drop. To force Cloud Provider OpenStack to create health monitors, you must set the value of the `create-monitor` option in the cloud provider configuration to `true`.

 In {rh-openstack} 16.2, the OVN Octavia provider does not support health monitors. Therefore, setting the ETP to local is unsupported.
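A sketch of forcing health-monitor creation through the cloud provider configuration; the config map location and key are assumptions:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloud-provider-config
  namespace: openshift-config
data:
  config: |
    [LoadBalancer]
    create-monitor = true  # required for ETP Local with Cloud Provider OpenStack
----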
@@ -122,7 +122,7 @@ spec:
 +
 [NOTE]
 ====
-If you don't provide custom values for the optional fields in the `TektonHub` CR, the default values configured in the {tekton-hub} API config map is used.
+If you do not provide custom values for the optional fields in the `TektonHub` CR, the default values configured in the {tekton-hub} API config map are used.
 ====

 . Apply the `TektonHub` CR.
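A minimal `TektonHub` CR that relies on those config-map defaults; the API version and spec values are assumptions:

[source,yaml]
----
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonHub
metadata:
  name: hub
spec:
  targetNamespace: openshift-pipelines  # optional fields omitted; API config map defaults apply
----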
@@ -66,7 +66,7 @@ spec:
 +
 [NOTE]
 ====
-If you don't provide custom values for the optional fields in the `TektonHub` CR, the default values configured in the {tekton-hub} API config map is used.
+If you do not provide custom values for the optional fields in the `TektonHub` CR, the default values configured in the {tekton-hub} API config map are used.
 ====

 . Apply the `TektonHub` CR.
@@ -18,7 +18,7 @@ The `tkn pac` CLI tool offers the following capabilities:

 [TIP]
 ====
-You can use the commands corresponding to the capabilities for testing and experimentation, so that you don't have to make changes to the Git repository containing the application source code.
+You can use the commands corresponding to the capabilities for testing and experimentation, so that you do not have to make changes to the Git repository containing the application source code.
 ====

 == Basic syntax

@@ -605,7 +605,7 @@ config 1.8.1 True
 * Before this update, using the `tkn` CLI tool, you could not remove task runs and pipeline runs that contained a `result` object whose type was `array`. With this update, you can use the `tkn` CLI tool to remove task runs and pipeline runs that contain a `result` object whose type is `array`.
 // https://issues.redhat.com/browse/SRVKP-2478

-* Before this update, if a pipeline specification contained a task with an `ENV_VARS` parameter of `array` type, the pipeline run failed with the following error: `invalid input params for task func-buildpacks: param types don't match the user-specified type: [ENV_VARS]`. With this update, pipeline runs with such pipeline and task specifications do not fail.
+* Before this update, if a pipeline specification contained a task with an `ENV_VARS` parameter of `array` type, the pipeline run failed with the following error: `invalid input params for task func-buildpacks: param types do not match the user-specified type: [ENV_VARS]`. With this update, pipeline runs with such pipeline and task specifications do not fail.
 // https://issues.redhat.com/browse/SRVKP-2422

 * Before this update, cluster administrators could not provide a `config.json` file to the `Buildah` cluster task for accessing a container registry. With this update, cluster administrators can provide the `Buildah` cluster task with a `config.json` file by using the `dockerconfig` workspace.

@@ -30,7 +30,7 @@ For each example, to get the corresponding sub-commands and flags, run `tkn hub
 == hub downgrade
 Downgrade an installed resource.

-.Example: Downgrade the `mytask` task in the `mynamespace` namespace to it's older version
+.Example: Downgrade the `mytask` task in the `mynamespace` namespace to its older version
 [source,terminal]
 ----
 $ tkn hub downgrade task mytask --to version -n mynamespace

@@ -123,7 +123,7 @@ The following table lists the specifications for the `ServiceMeshControlPlane` r
 |string

 |`observedGeneration`
-|The generation observed by the controller during the most recent reconciliation. The information in the status pertains to this particular generation of the object. The `status.conditions` are not up-to-date if the `status.observedGeneration` field doesn't match `metadata.generation`.
+|The generation observed by the controller during the most recent reconciliation. The information in the status pertains to this particular generation of the object. The `status.conditions` are not up-to-date if the `status.observedGeneration` field does not match `metadata.generation`.
 |integer

 |`operatorVersion`
@@ -13,7 +13,7 @@ The `status` parameter describes the current state of your service mesh. This in
 |Name |Description |Type

 |`observedGeneration`
-|The generation observed by the controller during the most recent reconciliation. The information in the status pertains to this particular generation of the object. The `status.conditions` are not up-to-date if the `status.observedGeneration` field doesn't match `metadata.generation`.
+|The generation observed by the controller during the most recent reconciliation. The information in the status pertains to this particular generation of the object. The `status.conditions` are not up-to-date if the `status.observedGeneration` field does not match `metadata.generation`.
 |integer

 |`annotations`
@@ -23,7 +23,7 @@ When you set the `importAsLocal` parameter to `true` to aggregate the remote end

 [NOTE]
 ====
-You can configure services for export even if they don't exist yet. When a service that matches the value specified in the ExportedServiceSet is deployed, it will be automatically exported.
+You can configure services for export even if they do not exist yet. When a service that matches the value specified in the ExportedServiceSet is deployed, it will be automatically exported.
 ====

 ////
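A sketch of an `ExportedServiceSet` that declares a service for export before it exists; the API group and rule fields are assumptions:

[source,yaml]
----
apiVersion: federation.maistra.io/v1
kind: ExportedServiceSet
metadata:
  name: green-mesh
  namespace: red-mesh-system
spec:
  exportRules:
  - type: NameSelector
    nameSelector:
      namespace: bookinfo
      name: ratings  # exported automatically once a matching service is deployed
----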
@@ -18,7 +18,7 @@ Services are imported with the name `<exported-name>.<exported-namespace>.svc.<S

 [NOTE]
 ====
-You can configure services for import even if they haven't been exported yet. When a service that matches the value specified in the ImportedServiceSet is deployed and exported, it will be automatically imported.
+You can configure services for import even if they have not been exported yet. When a service that matches the value specified in the ImportedServiceSet is deployed and exported, it will be automatically imported.
 ====

 ////
@@ -131,4 +131,4 @@ status:
 namespace: ""
 ----
 +
-In the preceding example only the ratings service is imported, as indicated by the populated fields under `localService`. The reviews service is available for import, but isn't currently imported because it does not match any `importRules` in the `ImportedServiceSet` object.
+In the preceding example only the ratings service is imported, as indicated by the populated fields under `localService`. The reviews service is available for import, but is not currently imported because it does not match any `importRules` in the `ImportedServiceSet` object.

@@ -115,13 +115,13 @@ The `status.discoveryStatus.active.remotes` field shows that istiod in the peer
 +
 The `status.discoveryStatus.active.watch` field shows that istiod in the current mesh is connected to istiod in the peer mesh.
 +
-If you check the `servicemeshpeer` named `red-mesh` in `green-mesh-system`, you'll find information about the same two connections from the perspective of the green mesh.
+If you check the `servicemeshpeer` named `red-mesh` in `green-mesh-system`, you can find information about the same two connections from the perspective of the green mesh.
 +
 When the connection between two meshes is not established, the `ServiceMeshPeer` status indicates this in the `status.discoveryStatus.inactive` field.
 +
 For more information on why a connection attempt failed, inspect the Istiod log, the access log of the egress gateway handling egress traffic for the peer, and the ingress gateway handling ingress traffic for the current mesh in the peer mesh.
 +
-For example, if the red mesh can't connect to the green mesh, check the following logs:
+For example, if the red mesh cannot connect to the green mesh, check the following logs:

 * istiod-red-mesh in red-mesh-system
 * egress-green-mesh in red-mesh-system
@@ -60,7 +60,7 @@ The memory consumption of the proxy depends on the total configuration state the
 A large number of listeners, clusters, and routes can increase memory usage.
 //Istio 1.1 introduced namespace isolation to limit the scope of the configuration sent to a proxy. In a large namespace, the proxy consumes approximately 50 MB of memory.

-Since the proxy normally doesn't buffer the data passing through, request rate doesn't affect the memory consumption.
+Since the proxy normally does not buffer the data passing through, request rate does not affect the memory consumption.

 === Additional latency

@@ -69,7 +69,7 @@ Every additional filter adds to the path length inside the proxy and affects lat

 The Envoy proxy collects raw telemetry data after a response is sent to the client.
 The time spent collecting raw telemetry for a request does not contribute to the total time taken to complete that request.
-However, since the worker is busy handling the request, the worker won't start handling the next request immediately.
+However, because the worker is busy handling the request, the worker does not start handling the next request immediately.
 This process adds to the queue wait time of the next request and affects average and tail latencies.
 The actual tail latency depends on the traffic pattern.
@@ -6,7 +6,7 @@
 [id="ossm-members_{context}"]
 = Creating the {SMProductName} members

-`ServiceMeshMember` resources provide a way for {SMProductName} administrators to delegate permissions to add projects to a service mesh, even when the respective users don't have direct access to the service mesh project or member roll. While project administrators are automatically given permission to create the `ServiceMeshMember` resource in their project, they cannot point it to any `ServiceMeshControlPlane` until the service mesh administrator explicitly grants access to the service mesh. Administrators can grant users permissions to access the mesh by granting them the `mesh-user` user role. In this example, `istio-system` is the name of the {SMProductShortName} control plane project.
+`ServiceMeshMember` resources provide a way for {SMProductName} administrators to delegate permissions to add projects to a service mesh, even when the respective users do not have direct access to the service mesh project or member roll. While project administrators are automatically given permission to create the `ServiceMeshMember` resource in their project, they cannot point it to any `ServiceMeshControlPlane` until the service mesh administrator explicitly grants access to the service mesh. Administrators can grant users permissions to access the mesh by granting them the `mesh-user` user role. In this example, `istio-system` is the name of the {SMProductShortName} control plane project.

 [source,terminal]
 ----

@@ -15,7 +15,7 @@ Upgrading from version 1.1 to 2.0 requires manual steps that migrate your worklo
 [id="ossm-migrating_{context}"]
 == Upgrading {SMProductName}

-To upgrade {SMProductName}, you must create an instance of {SMProductName} `ServiceMeshControlPlane` v2 resource in a new namespace. Then, once it's configured, move your microservice applications and workloads from your old mesh to the new service mesh.
+To upgrade {SMProductName}, you must create an instance of {SMProductName} `ServiceMeshControlPlane` v2 resource in a new namespace. Then, once it is configured, move your microservice applications and workloads from your old mesh to the new service mesh.

 .Procedure

@@ -104,7 +104,7 @@ Alternatively, you can use the console to create the {SMProductShortName} contro
 [id="ossm-migrating-smcp_{context}"]
 == Configuring the 2.0 ServiceMeshControlPlane

-The `ServiceMeshControlPlane` resource has been changed for {SMProductName} version 2.0. After you created a v2 version of the `ServiceMeshControlPlane` resource, modify it to take advantage of the new features and to fit your deployment. Consider the following changes to the specification and behavior of {SMProductName} 2.0 as you're modifying your `ServiceMeshControlPlane` resource. You can also refer to the {SMProductName} 2.0 product documentation for new information to features you use. The v2 resource must be used for {SMProductName} 2.0 installations.
+The `ServiceMeshControlPlane` resource has been changed for {SMProductName} version 2.0. After you created a v2 version of the `ServiceMeshControlPlane` resource, modify it to take advantage of the new features and to fit your deployment. Consider the following changes to the specification and behavior of {SMProductName} 2.0 as you are modifying your `ServiceMeshControlPlane` resource. You can also refer to the {SMProductName} 2.0 product documentation for new information to features you use. The v2 resource must be used for {SMProductName} 2.0 installations.

 [id="ossm-migrating-differences-arch_{context}"]
 === Architecture changes
@@ -70,7 +70,7 @@ $ export TCP_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgat

 [NOTE]
 ====
-In some environments, the load balancer may be exposed using a hostname instead of an IP address. For that case, the ingress gateway's `EXTERNAL-IP` value is not an IP address. Instead, it's a hostname, and the previous command fails to set the `INGRESS_HOST` environment variable.
+In some environments, the load balancer may be exposed using a hostname instead of an IP address. For that case, the ingress gateway's `EXTERNAL-IP` value is not an IP address. Instead, it is a hostname, and the previous command fails to set the `INGRESS_HOST` environment variable.

 In that case, use the following command to correct the `INGRESS_HOST` value:
 ====
@@ -38,7 +38,7 @@ spec:
 .Behavioral changes

 * `AuthorizationPolicy` updates:
-** With the PROXY protocol, if you're using `ipBlocks` and `notIpBlocks` to specify remote IP addresses, update the configuration to use `remoteIpBlocks` and `notRemoteIpBlocks` instead.
+** With the PROXY protocol, if you are using `ipBlocks` and `notIpBlocks` to specify remote IP addresses, update the configuration to use `remoteIpBlocks` and `notRemoteIpBlocks` instead.
 ** Added support for nested JSON Web Token (JWT) claims.
 * `EnvoyFilter` breaking changes:
 ** Must use `typed_config`
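The `remoteIpBlocks` update described above, as a minimal sketch; the policy name and CIDR are hypothetical:

[source,yaml]
----
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ingress-allow
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        remoteIpBlocks:   # replaces ipBlocks when the PROXY protocol is in use
        - 192.0.2.0/24
----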
@@ -98,7 +98,7 @@ Catch-all domains ("\*") are not supported. If one is found in the Gateway defin

 [id="ossm-subdomains_{context}"]
 === Subdomains
-Subdomains (e.g.: "*.domain.com") are supported. However this ability doesn't come enabled by default in {product-title}. This means that {SMProductName} _will_ create the route with the subdomain, but it will only be in effect if {product-title} is configured to enable it.
+Subdomains (e.g.: "*.domain.com") are supported. However this ability does not come enabled by default in {product-title}. This means that {SMProductName} _will_ create the route with the subdomain, but it will only be in effect if {product-title} is configured to enable it.

 [id="ossm-tls_{context}"]
 === Transport layer security

@@ -165,7 +165,7 @@ Catch-all domains ("\*") are not supported. If one is found in the Gateway defin

 [id="ossm-subdomains_{context}"]
 === Subdomains
-Subdomains (e.g.: "*.domain.com") are supported. However this ability doesn't come enabled by default in {product-title}. This means that {SMProductName} _will_ create the route with the subdomain, but it will only be in effect if {product-title} is configured to enable it.
+Subdomains (e.g.: "*.domain.com") are supported. However this ability does not come enabled by default in {product-title}. This means that {SMProductName} _will_ create the route with the subdomain, but it will only be in effect if {product-title} is configured to enable it.

 [id="ossm-tls_{context}"]
 === Transport layer security
@@ -17,7 +17,7 @@ Data Collection and Processing Hub:: It acts as a central component that gathers

 Customizable telemetry data pipeline:: The OpenTelemetry Collector is designed to be customizable. It supports various processors, exporters, and receivers.

-Auto-instrumentation features:: Automatic instrumentation simplifies the process of adding observability to applications. Developers don't need to manually instrument their code for basic telemetry data.
+Auto-instrumentation features:: Automatic instrumentation simplifies the process of adding observability to applications. Developers do not need to manually instrument their code for basic telemetry data.

 Here are some of the use cases for the OpenTelemetry Collector:
@@ -155,13 +155,13 @@ Cloud Infrastructure Account refers to the underlying AWS or Google Cloud accoun
 5. Limited to what is granted through RBAC by the customer administrator, as well as namespaces created by the user.
 --

-// TODO: The above uses an asterisk as a footnote I think for the first sentence (though it doesn't show it as a reference below the table), then numbers for the rest of the footnote items. I'd suggest bumping all the numbers and using a number for the first header asterisk as well.
+// TODO: The above uses an asterisk as a footnote I think for the first sentence (though it does not show it as a reference below the table), then numbers for the rest of the footnote items. I would suggest bumping all the numbers and using a number for the first header asterisk as well.

 [id="customer-access_{context}"]
 == Customer access
 Customer access is limited to namespaces created by the customer and permissions that are granted using RBAC by the customer administrator role. Access to the underlying infrastructure or product namespaces is generally not permitted without `cluster-admin` access. More information on customer access and authentication can be found in the Understanding Authentication section of the documentation.

-// TODO: I don't think there is this "Understanding Authentication" section in the OSD docs
+// TODO: I do not think there is this "Understanding Authentication" section in the OSD docs

 [id="access-approval_{context}"]
 == Access approval and review
@@ -52,7 +52,7 @@ To validate the hardware for etcd before or after you create the {product-title}

 .Prerequisites

-* Container runtimes such as Podman or Docker are installed on the machine that you're testing.
+* Container runtimes such as Podman or Docker are installed on the machine that you are testing.
 * Data is written to the `/var/lib/etcd` path.

 .Procedure
@@ -65,4 +65,4 @@ When planning a cluster update, check the configuration of the `PodDisruptionBud

 * For highly available workloads, make sure there are replicas that can be temporarily taken offline without being prohibited by the `PodDisruptionBudget`.

-* For workloads that aren't highly available, make sure they are either not protected by a `PodDisruptionBudget` or have some alternative mechanism for draining these workloads eventually, such as periodic restart or guaranteed eventual termination.
+* For workloads that are not highly available, make sure they are either not protected by a `PodDisruptionBudget` or have some alternative mechanism for draining these workloads eventually, such as periodic restart or guaranteed eventual termination.
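For the highly available case, a standard `PodDisruptionBudget` that keeps at least one replica schedulable during drains; names are hypothetical:

[source,yaml]
----
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1          # leave at least one replica running
  selector:
    matchLabels:
      app: my-app
----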
@@ -70,7 +70,7 @@ ztp-site.example1-perf-policy inform No
 .. To check policy status from the {rh-rhacm} web console, perform the following actions:

 ... Click *Governance* -> *Find policies*.
-... Click on a cluster policy to check it's status.
+... Click on a cluster policy to check its status.

 When all of the cluster policies become compliant, {ztp} installation and configuration for the cluster is complete. The `ztp-done` label is added to the cluster.