mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

Fixing some typos

This commit is contained in:
Andrea Hoffer
2025-01-14 10:19:57 -05:00
committed by openshift-cherrypick-robot
parent 01e3510ca6
commit 89a84a1add
16 changed files with 55 additions and 57 deletions


@@ -7,9 +7,9 @@ include::_attributes/common-attributes.adoc[]
toc::[]
{product-title} generates a large amount of data, such as performance metrics and logs from both the platform and the workloads running on it.
As an administrator, you can use various tools to collect and analyze all the data available.
What follows is an outline of best practices for system engineers, architects, and administrators configuring the observability stack.
Unless explicitly stated, the material in this document refers to both Edge and Core deployments.
@@ -49,4 +49,4 @@ include::modules/telco-observability-workload-monitoring.adoc[leveloffset=+1]
* xref:../../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects]
* xref:../../../observability/monitoring/managing-alerts.adoc#managing-alerting-rules-for-user-defined-projects_managing-alerts[Managing alerting rules for user-defined projects]


@@ -1,12 +1,12 @@
// Module included in the following assemblies:
//
// * edge_computing/ibi-edge-image-based-install.adoc
:_mod-docs-content-type: PROCEDURE
[id="ibi-create-config-iso_{context}"]
= Deploying a managed {sno} cluster using the IBI Operator
Create the site-specific configuration resources in the hub cluster to initiate the image-based deployment of a preinstalled host.
When you create these configuration resources in the hub cluster, the Image Based Install (IBI) Operator generates a configuration ISO and attaches it to the target host to begin the site-specific configuration process. When the configuration process completes, the {sno} cluster is ready.
@@ -64,7 +64,7 @@ $ oc create -f secret-image-registry.yaml
.Example `host-network-config-secret.yaml` file
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
name: host-network-config-secret <1>
@@ -137,7 +137,7 @@ data:
password: <password> <11>
----
<1> Specify the name for the `BareMetalHost` resource.
<2> Specify if the host should be online.
<3> Specify the host boot MAC address.
<4> Specify the BMC address. You can only use bare-metal host drivers that support virtual media network booting, for example, `redfish-virtualmedia` and `idrac-virtualmedia`.
<5> Specify the name of the bare-metal host `Secret` resource.
@@ -245,7 +245,7 @@ spec:
baseDomain: example.com <3>
clusterInstallRef:
group: extensions.hive.openshift.io
kind: ImageClusterInstall
name: ibi-image-install <4>
version: v1alpha1
clusterName: ibi-cluster <5>
@@ -283,7 +283,7 @@ spec:
hubAcceptsClient: true <2>
----
<1> Specify the name for the `ManagedCluster` resource.
<2> Specify `true` to enable {rh-rhacm} to manage the cluster.
.. Create the `ManagedCluster` resource by running the following command:
+
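A command of the following form is typically used here; it mirrors the earlier `oc create -f secret-image-registry.yaml` step, and the file name is an assumption:
+
[source,terminal]
----
$ oc create -f <managed_cluster_filename>.yaml
----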


@@ -15,8 +15,8 @@ As a use case, consider the following example situation in which you want to aut
.Prerequisites
* You have created the `10-auto-recovery.conf` and `microshift-auto-recovery.service` files as explained in the "Automating the integration process with systemd for RPM systems" section.
* You have created the `microshift-auto-recovery` script as explained in the "Automating the integration process with systemd for RPM systems" section.
.Procedure


@@ -22,7 +22,7 @@ You can use detailed ingress control settings by updating the {microshift-short}
.. Update the {microshift-short} `config.yaml` configuration file by making a copy of the provided `config.yaml.default` file in the `/etc/microshift/` directory, naming it `config.yaml` and keeping it in the source directory.
* After you create it, the `config.yaml` file takes precedence over built-in settings. The configuration file is read every time the {microshift-short} service starts.
.. Use a configuration snippet to apply the ingress control settings you want. To do this, create a configuration snippet YAML file and put it in the `/etc/microshift/config.d/` configuration directory.
* Configuration snippet YAMLs take precedence over both built-in settings and a `config.yaml` configuration file. See the Additional resources links for more information.
. Replace the default values in the `network` section of the {microshift-short} YAML with your valid values, or create a configuration snippet file with the sections you need.
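+
As a minimal sketch only, a snippet file might look like the following; the file name, the `ingress` section, and the port values are assumptions for illustration:
+
[source,yaml]
----
# /etc/microshift/config.d/10-ingress.yaml (example file name)
ingress:
  ports:
    http: 80   # host port for HTTP traffic to the router
    https: 443 # host port for HTTPS traffic to the router
----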
@@ -164,4 +164,4 @@ $ oc get pods -n openshift-ingress
----
NAME READY STATUS RESTARTS AGE
router-default-8649b5bf65-w29cn 1/1 Running 0 6m10s
----


@@ -5,7 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="network-observability-multi-tenancy_{context}"]
= Enabling multi-tenancy in Network Observability
Multi-tenancy in the Network Observability Operator allows and restricts individual user access, or group access, to the flows stored in Loki and/or Prometheus. Access is enabled for project administrators. Project administrators who have limited access to some namespaces can access flows for only those namespaces.
For Developers, multi-tenancy is available for both Loki and Prometheus but requires different access rights.
@@ -42,4 +42,4 @@ $ oc adm policy add-cluster-role-to-user cluster-monitoring-view <user_group_or_
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user netobserv-metrics-reader <user_group_or_name>
----
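For access to the flows stored in Loki, a similar grant is typically made with the `netobserv-reader` cluster role. This is a hedged example; verify the role name against your installed Network Observability version:

[source,terminal]
----
$ oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name>
----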


@@ -94,7 +94,7 @@ Or:
$ oc edit machineset <machineset> -n openshift-machine-api
----
+
After each machine is scaled up, the machine controller creates an `IPAddressClaim` resource.
. Optional: Check that the `IPAddressClaim` resource exists in the `openshift-machine-api` namespace by entering the following command:
+
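A command along the following lines is typically used; the exact resource API group is an assumption and might differ between versions:
+
[source,terminal]
----
$ oc get ipaddressclaims.ipam.cluster.x-k8s.io -n openshift-machine-api
----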


@@ -82,7 +82,7 @@ go-xdp-counter-example xdp_stats {} ReconcileSuccess
$ oc logs <pod_name> -n go-xdp-counter
----
+
Replace `<pod_name>` with the name of an XDP program pod, such as `go-xdp-counter-ds-4m9cw`.
+
.Example output
[source,text]


@@ -52,7 +52,7 @@ domains:
pattern: ".*\\.otherzonedomain\\.com" <5>
----
<1> Ensures that the `ExternalDNS` resource includes the domain name.
<2> Instructs `ExternalDNS` that the domain matching must be exact, as opposed to a regular expression match.
<3> Defines the name of the domain.
<4> Sets the `regex-domain-filter` flag in the `ExternalDNS` resource. You can limit possible domains by using a Regex filter.
<5> Defines the regex pattern to be used by the `ExternalDNS` resource to filter the domains of the target zones.
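Assembling the callouts above in one place, a fuller `domains` stanza might look like the following sketch; the `filterType` values and the example domain names are assumptions:

[source,yaml]
----
spec:
  domains:
  - filterType: Include
    matchType: Exact                        # exact domain-name match
    name: external-dns-domain.example.com
  - filterType: Include
    matchType: Pattern                      # regular-expression match
    pattern: ".*\\.otherzonedomain\\.com"
----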


@@ -67,7 +67,7 @@ To avoid a node in an unhealthy MCP from blocking the application of node networ
+
[NOTE]
====
When `externallyManaged` is set to `true`, you must manually create the Virtual Functions (VFs) on the physical function (PF) before applying the `SriovNetworkNodePolicy` resource. If the VFs are not pre-created, the SR-IOV Network Operator's webhook will block the policy request.
When `externallyManaged` is set to `false`, the SR-IOV Network Operator automatically creates and manages the VFs, including resetting them if necessary.
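As an illustration only, a minimal `SriovNetworkNodePolicy` that lets the Operator manage the VFs might look like the following sketch; the policy, resource, and interface names are assumptions:

[source,yaml]
----
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-example
  namespace: openshift-sriov-network-operator
spec:
  resourceName: example_resource
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 4
  nicSelector:
    pfNames: ["ens1f0"]
  deviceType: netdevice
  externallyManaged: false   # the SR-IOV Network Operator creates and manages the VFs
----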
@@ -109,7 +109,7 @@ When `linkType` is set to `ib`, `isRdma` is automatically set to `true` by the S
+
Do not set `linkType` to `eth` for `SriovNetworkNodePolicy`, because this can lead to an incorrect number of available devices reported by the device plugin.
<19> Optional: To enable hardware offloading, you must set the `eSwitchMode` field to `"switchdev"`. For more information about hardware offloading, see "Configuring hardware offloading".
<20> Optional: To exclude advertising an SR-IOV network resource's NUMA node to the Topology Manager, set the value to `true`. The default value is `false`.
@@ -163,4 +163,4 @@ spec:
<1> The `numVfs` field is always set to `1` when configuring the node network policy for a virtual machine.
<2> The `netFilter` field must refer to a network ID when the virtual machine is deployed on {rh-openstack}. Valid values for `netFilter` are available from an `SriovNetworkNodeState` object.


@@ -6,7 +6,7 @@
[id="oadp-usecase-include-ca-cert-backup_{context}"]
= Backing up an application and its self-signed CA certificate
The `s3.openshift-storage.svc` service, provided by {odf-short}, uses a Transport Layer Security protocol (TLS) certificate that is signed with the self-signed service CA.
To prevent a `certificate signed by unknown authority` error, you must include a self-signed CA certificate in the backup storage location (BSL) section of the `DataProtectionApplication` custom resource (CR). For this situation, you must complete the following tasks:
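For orientation, the following is a hedged sketch of how the CA certificate typically appears in the BSL section of the `DataProtectionApplication` CR; the bucket name and the base64-encoded certificate are placeholders:

[source,yaml]
----
backupLocations:
- velero:
    objectStorage:
      bucket: <bucket_name>
      prefix: oadp
      caCert: <base64_encoded_service_ca_certificate>
----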
@@ -71,11 +71,11 @@ backup-c20...41fd
s3.openshift-storage.svc
----
. To get the bucket credentials from the `secret` object, run the following command:
+
[source,terminal]
----
$ oc extract --to=- secret/test-obc
----
+
.Example output
@@ -92,8 +92,8 @@ YXf...+NaCkdyC3QPym
[source,terminal]
----
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
----
. Create the `cloud-credentials` secret with the `cloud-credentials` file content by running the following command:
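+
A command of the following form is typically used; the `openshift-adp` namespace and the `cloud=cloud-credentials` key-to-file mapping are assumptions based on the defaults used elsewhere in this procedure:
+
[source,terminal]
----
$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=cloud-credentials
----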
@@ -142,20 +142,20 @@ spec:
- aws
- openshift
- csi
defaultSnapshotMoveData: true
backupLocations:
- velero:
config:
profile: "default"
region: noobaa
s3Url: https://s3.openshift-storage.svc
s3ForcePathStyle: "true"
insecureSkipTLSVerify: "false" # <1>
provider: aws
default: true
credential:
key: cloud
name: cloud-credentials
objectStorage:
bucket: <bucket_name> # <2>
prefix: oadp
@@ -169,7 +169,7 @@ spec:
+
[source,terminal]
----
$ oc apply -f <dpa_filename>
----
. Verify that the `DataProtectionApplication` CR is created successfully by running the following command:
@@ -211,7 +211,7 @@ metadata:
+
[source,terminal]
----
$ oc get backupstoragelocations.velero.io -n openshift-adp
----
+
.Example output
@@ -241,7 +241,7 @@ spec:
+
[source,terminal]
----
$ oc apply -f <backup_cr_filename>
----
.Verification
@@ -250,7 +250,7 @@ $ oc apply -f <backup_cr_filename>
+
[source,terminal]
----
$ oc describe backup test-backup -n openshift-adp
----
+
.Example output
@@ -273,4 +273,4 @@ Status:
Start Timestamp: 2024-09-25T10:16:31Z
Version: 1
Events: <none>
----


@@ -17,11 +17,11 @@ Here are some key metrics that you should pay attention to:
* OVN health
* Overall cluster operator health
A good rule to follow is that if you decide that a metric is important, there should be an alert for it.
[NOTE]
====
You can check the available metrics by running the following command:
[source,terminal]
----
$ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -qsk http://localhost:9090/api/v1/metadata | jq '.data
@@ -31,12 +31,12 @@ $ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -qsk ht
[id="example-queries-promql"]
== Example queries in PromQL
The following tables show some queries that you can explore in the metrics query browser using the {product-title} console.
[NOTE]
====
The URL for the console is https://<OpenShift Console FQDN>/monitoring/query-browser.
You can get the OpenShift Console FQDN by running the following command:
[source,terminal]
----
$ oc get routes -n openshift-console console -o jsonpath='{.status.ingress[0].host}'
@@ -79,7 +79,7 @@ $ oc get routes -n openshift-console console -o jsonpath='{.status.ingress[0].ho
|`POST`
|`histogram_quantile (0.99, sum by (le,managed_cluster) (sum_over_time(apiserver_request_duration_seconds_bucket{apiserver="kube-apiserver\|openshift-apiserver", verb="POST"}[60m])))`
|`LIST`
|`histogram_quantile (0.99, sum by (le,managed_cluster) (sum_over_time(apiserver_request_duration_seconds_bucket{apiserver="kube-apiserver\|openshift-apiserver", verb="LIST"}[60m])))`
|`PUT`
@@ -130,14 +130,14 @@ $ oc get routes -n openshift-console console -o jsonpath='{.status.ingress[0].ho
[id="recommendations-for-storage-of-metrics"]
== Recommendations for storage of metrics
Out of the box, Prometheus does not back up saved metrics with persistent storage.
If you restart the Prometheus pods, all metrics data are lost.
You should configure the monitoring stack to use the back-end storage that is available on the platform.
To meet the high I/O demands of Prometheus, you should use local storage.
For Telco core clusters, you can use the Local Storage Operator for persistent storage for Prometheus.
{odf-first}, which deploys a Ceph cluster for block, file, and object storage, is also a suitable candidate for a Telco core cluster.
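As a sketch only, persistent storage for Prometheus is typically enabled through the `cluster-monitoring-config` ConfigMap; the storage class name and storage size below are assumptions:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      volumeClaimTemplate:
        spec:
          storageClassName: local-sc   # for example, a storage class backed by the Local Storage Operator
          resources:
            requests:
              storage: 100Gi
----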
To keep system resource requirements low on a RAN {sno} or far edge cluster, you should not provision back-end storage for the monitoring stack.
Such clusters forward all metrics to the hub cluster where you can provision a third-party monitoring platform.


@@ -14,6 +14,6 @@ You must consider these minor components and how the MCO can help you manage you
[IMPORTANT]
====
You must use the MCO to perform all changes on worker or control plane nodes.
Do not manually make changes to {op-system} or node files.
====


@@ -17,7 +17,7 @@ You can configure virtual machine (VM) access to a USB device. This configuratio
$ oc /dev/serial/by-id/usb-VENDOR_device_name
----
. Open the virtual machine instance custom resource (CR) by running the following command:
+
[source,terminal]
----
@@ -44,4 +44,3 @@ spec:
# ...
----
<1> The name of the USB device.
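For orientation, the host device reference in the VM spec usually takes a form like the following sketch; the resource name and device name are assumptions:

[source,yaml]
----
spec:
  domain:
    devices:
      hostDevices:
      - deviceName: kubevirt.io/peripherals   # resource name that exposes the USB device
        name: usb-storage                     # the name of the USB device
----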


@@ -23,7 +23,7 @@ You specify a resource name and USB device name for each device you want first t
$ lsusb
----
. Open the HCO CR by running the following command:
+
[source,terminal]
----
@@ -57,4 +57,4 @@ spec:
----
<1> Lists the host devices that have permission to be used in the cluster.
<2> Lists the available USB devices.
<3> Uses `resourceName: deviceName` for each device you want to add and assign to the VM. In this example, the resource is bound to three devices, each of which is identified by `vendor` and `product` and is known as a `selector`.
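Assembling those callouts, the relevant section of the HCO CR might look like the following sketch; the `usbHostDevices` key and the vendor and product IDs are assumptions for illustration:

[source,yaml]
----
spec:
  permittedHostDevices:
    usbHostDevices:
    - resourceName: kubevirt.io/peripherals   # resourceName bound to the devices below
      selectors:                              # each vendor/product pair is one selector
      - vendor: "045e"
        product: "07a5"
      - vendor: "062a"
        product: "4102"
      - vendor: "072f"
        product: "b100"
----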


@@ -21,7 +21,7 @@ Each approach is mutually exclusive and you can only use one approach for managi
[NOTE]
====
When deploying {product-title} nodes with multiple network interfaces on {rh-openstack-first} with OVN SDN, DNS configuration of the secondary interface might take precedence over the DNS configuration of the primary interface. In this case, remove the DNS nameservers for the subnet ID that is attached to the secondary interface by running the following command:
[source,terminal]
----
@@ -33,4 +33,4 @@ include::modules/nw-multus-create-network.adoc[leveloffset=+1]
include::modules/nw-nad-cr.adoc[leveloffset=+1]
include::modules/nw-multus-create-network-apply.adoc[leveloffset=+1]


@@ -69,7 +69,7 @@ include::modules/images-configuration-registry-mirror-configuring.adoc[leveloffs
include::modules/nodes-nodes-rebooting-gracefully.adoc[leveloffset=+1]
.Additional resources
* xref:../nodes/nodes/nodes-nodes-rebooting.adoc#nodes-nodes-rebooting-gracefully_nodes-nodes-rebooting[Rebooting a {product-title} node gracefully]
* xref:../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[Backing up etcd data]
@@ -78,4 +78,3 @@ include::modules/nodes-nodes-rebooting-gracefully.adoc[leveloffset=+1]
* xref:../installing/installing_azure/ipi/installing-azure-default.adoc#ssh-agent-using_installing-azure-default[Generating a key pair for cluster node SSH access]
* xref:../operators/admin/olm-adding-operators-to-cluster.adoc#olm-adding-operators-to-a-cluster[Adding Operators to a cluster]