mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

Merge pull request #94611 from openshift-cherrypick-robot/cherry-pick-94585-to-enterprise-4.19

[enterprise-4.19] Updating a bunch of typos
Andrea Hoffer authored 2025-06-11 12:41:52 -04:00, committed by GitHub
35 changed files with 167 additions and 168 deletions

View File

@@ -75,7 +75,7 @@ include::modules/oc-mirror-IDMS-ITMS-about.adoc[leveloffset=+1]
// Configuring your cluster to use the resources generated by oc-mirror
include::modules/oc-mirror-updating-cluster-manifests-v2.adoc[leveloffset=+2]
After your cluster is configured to use the resources generated by oc-mirror plugin v2, see xref:../../disconnected/mirroring/about-installing-oc-mirror-v2.adoc#next-steps_about-installing-oc-mirror-v2[Next Steps] for information about tasks that you can perform using your mirrored images.
After your cluster is configured to use the resources generated by oc-mirror plugin v2, see xref:../../disconnected/mirroring/about-installing-oc-mirror-v2.adoc#next-steps_about-installing-oc-mirror-v2[Next steps] for information about tasks that you can perform using your mirrored images.
[role="_additional-resources"]
.Additional resources
@@ -140,4 +140,4 @@ After you mirror images to your disconnected environment using oc-mirror plugin
* xref:../../disconnected/using-olm.adoc#olm-restricted-networks[Using Operator Lifecycle Manager in disconnected environments]
* xref:../../disconnected/updating/disconnected-update-osus.adoc#updating-disconnected-cluster-osus[Updating a cluster in a disconnected environment using the OpenShift Update Service]
// Intentionally linking to the OSUS update procedure since we probably want to steer users to do that workflow as much as possible. But I can change to the index of the update section if I shouldn't be as prescriptive.
// Intentionally linking to the OSUS update procedure since we probably want to steer users to do that workflow as much as possible. But I can change to the index of the update section if I shouldn't be as prescriptive.
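For reference, the IDMS and ITMS resources that oc-mirror plugin v2 generates follow the standard `config.openshift.io/v1` schema. The following is only a minimal hand-written sketch of an `ImageDigestMirrorSet`, assuming a mirror registry at `mirror.registry.example.com`; the files generated in your oc-mirror workspace contain your actual mirror mappings and names.
[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: idms-example # hypothetical name; oc-mirror v2 names the generated files for you
spec:
  imageDigestMirrors:
  - mirrors:
    - mirror.registry.example.com/redhat # assumed mirror registry host
    source: registry.redhat.io/redhat
----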

View File

@@ -8,13 +8,13 @@ toc::[]
You can use the following Day 2 operations to manage telco core CNF clusters.
Updating a telco core CNF cluster:: Updating your cluster is a critical task that ensures that bugs and potential security vulnerabilities are patched.
Updating a telco core CNF cluster:: Updating your cluster is a critical task that ensures that bugs and potential security vulnerabilities are patched.
For more information, see xref:../day_2_core_cnf_clusters/updating/telco-update-welcome.adoc#telco-update-welcome[Updating a telco core CNF cluster].
Troubleshooting and maintaining telco core CNF clusters:: To maintain and troubleshoot a bare-metal environment where high-bandwidth network throughput is required, see see xref:../day_2_core_cnf_clusters/troubleshooting/telco-troubleshooting-intro.adoc#telco-troubleshooting-intro[Troubleshooting and maintaining telco core CNF clusters].
Troubleshooting and maintaining telco core CNF clusters:: To maintain and troubleshoot a bare-metal environment where high-bandwidth network throughput is required, see xref:../day_2_core_cnf_clusters/troubleshooting/telco-troubleshooting-intro.adoc#telco-troubleshooting-intro[Troubleshooting and maintaining telco core CNF clusters].
Observability in telco core CNF clusters:: {product-title} generates a large amount of data, such as performance metrics and logs from the platform and the workloads running on it.
As an administrator, you can use tools to collect and analyze the available data.
Observability in telco core CNF clusters:: {product-title} generates a large amount of data, such as performance metrics and logs from the platform and the workloads running on it.
As an administrator, you can use tools to collect and analyze the available data.
For more information, see xref:../day_2_core_cnf_clusters/observability/telco-observability.adoc#telco-observability[Observability in telco core CNF clusters].
Security:: You can enhance security for high-bandwidth network deployments in telco environments by following key security considerations.

View File

@@ -34,7 +34,7 @@ a|
* Create a new persistent volume (PV) in the same namespace.
* Copy data from the source PV to the target PV, and change the VM definition to point to the new PV.
** If you have the `liveMigrate` flag set, the VM migrates live.
** If you do not have the `liveMigrate` flag set, the VM shuts down, the source PV contents are copied to the target PV, and the the VM is started.
** If you do not have the `liveMigrate` flag set, the VM shuts down, the source PV contents are copied to the target PV, and the VM is started.
|Move
|No

View File

@@ -76,7 +76,7 @@ spec:
$ oc create -f redis-backup.yaml
----
+
.Example output:
.Example output
+
[source,terminal]
----
@@ -93,7 +93,7 @@ backup.velero.io/redis-backup created
$ oc get backups.velero.io redis-backup -o yaml
----
+
.Example output:
.Example output
+
[source,terminal]
----
@@ -107,4 +107,4 @@ phase: Completed
progress: {}
startTimestamp: "2025-04-17T13:25:16Z"
version: 1
----
----
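The `redis-backup.yaml` file referenced above is not shown in this hunk. A minimal Velero `Backup` CR of that shape might look like the following sketch; the namespace to back up is an assumption.
[source,yaml]
----
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: redis-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
  - threescale # assumed application namespace for this procedure
----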

View File

@@ -10,7 +10,7 @@ In large scale environments, the default `PriorityClass` object can be too low t
.Prerequisites
* Optional: You have created a `PriorityClass` object. For more information, see "Configuring priority and preemption" in the _Additional Resources_.
* Optional: You have created a `PriorityClass` object. For more information, see "Configuring priority and preemption" in the _Additional resources_.
.Procedure
@@ -55,4 +55,4 @@ roles:
scanTolerations:
- operator: Exists
----
<1> If the `PriorityClass` referenced in the `ScanSetting` cannot be found, the Operator will leave the `PriorityClass` empty, issue a warning, and continue scheduling scans without a `PriorityClass`.
<1> If the `PriorityClass` referenced in the `ScanSetting` cannot be found, the Operator will leave the `PriorityClass` empty, issue a warning, and continue scheduling scans without a `PriorityClass`.
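If you have not yet created the `PriorityClass` object mentioned in the prerequisite, a standard Kubernetes definition is enough; the name and value below are examples only, and the `ScanSetting` then references the class by name.
[source,yaml]
----
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: compliance-high-priority # example name referenced from the ScanSetting
value: 1000000
globalDefault: false
description: "Priority class for Compliance Operator scan pods."
----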

View File

@@ -6,8 +6,8 @@
[id="coreos-layering-configuring-on-remove_{context}"]
= Removing an on-cluster custom layered image
To prevent the custom layered images from taking up excessive space in your registry, you can automatically remove an on-cluster custom layered image from the repository by deleting the `MachineOSBuild` object that created the image.
To prevent the custom layered images from taking up excessive space in your registry, you can automatically remove an on-cluster custom layered image from the repository by deleting the `MachineOSBuild` object that created the image.
The credentials provided by the registry push secret that you added to the the `MachineOSBuild` object must grant the permission for deleting an image from the registry. If the delete permission is not provided, the image is not removed when you delete the `MachineOSBuild` object.
The credentials provided by the registry push secret that you added to the `MachineOSBuild` object must grant the permission for deleting an image from the registry. If the delete permission is not provided, the image is not removed when you delete the `MachineOSBuild` object.
Note that the custom layered image is not deleted if the image is either currently in use on a node or is desired by the nodes, as indicated by the `machineconfiguration.openshift.io/currentConfig` or `machineconfiguration.openshift.io/desiredConfig` annotation on the node.

View File

@@ -200,7 +200,7 @@ h|Resolution
|`Invalid value...When 2 CIDRs are set, they must be from different IP families`.
|You must change one of your CIDR ranges to a different IP family.
|The `spec.network.localnet.ipam.mode` is `Disabled` but the ``spec.network.localnet.lifecycle` has a value of `Persistent`.
|The `spec.network.localnet.ipam.mode` is `Disabled` but the `spec.network.localnet.lifecycle` has a value of `Persistent`.
|`lifecycle Persistent is only supported when ipam.mode is Enabled`
|You must set the `ipam.mode` to `Enabled` when the optional field `lifecycle` has a value of `Persistent`.
|===

View File

@@ -92,7 +92,7 @@ In a terminal that has access to the cluster as a `cluster-admin` user, run the
$ oc get machines -n openshift-machine-api -o wide
----
+
.Example output:
.Example output
+
[source,terminal]
----
@@ -123,7 +123,7 @@ A new machine is automatically provisioned after deleting the machine of the off
$ oc get machines -n openshift-machine-api -o wide
----
+
.Example output:
.Example output
+
[source,terminal]
----

View File

@@ -25,7 +25,7 @@ You must include the entire `auto-recovery` process for {op-system-image} system
+
[IMPORTANT]
====
The location of the the `10-auto-recovery.conf` and `microshift-auto-recovery.service` must be relative to the Containerfile.
The location of the `10-auto-recovery.conf` and `microshift-auto-recovery.service` must be relative to the Containerfile.
For example, if the path to the Containerfile is `/home/microshift/my-build/Containerfile`, the systemd files need to be adjacent for proper embedding. The following paths are correct for this example:
@@ -90,4 +90,4 @@ $ sudo podman images "${IMAGE_NAME}"
----
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/microshift-4.18-bootc latest 193425283c00 2 minutes ago 2.31 GB
----
----

View File

@@ -67,7 +67,7 @@ The following table explains {microshift-short} configuration YAML parameters an
|`debugging.logLevel`
|`Normal`, `Debug`, `Trace`, or `TraceAll`
|Log verbosity. Default value is is `Normal`.
|Log verbosity. Default value is `Normal`.
|`dns.baseDomain`
|`valid domain`
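As a reminder of where these parameters live, a minimal {microshift-short} `config.yaml`, typically at `/etc/microshift/config.yaml`, using the two parameters from this part of the table might look like the following sketch; the base domain is a placeholder.
[source,yaml]
----
debugging:
  logLevel: Normal # one of Normal, Debug, Trace, TraceAll
dns:
  baseDomain: microshift.example.com # placeholder domain
----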

View File

@@ -6,7 +6,7 @@
[id="microshift-rhoai-get-model-ready-inference_{context}"]
= Getting your AI model ready for inference
Before querying your AI model through the API, get the model ready to provide answers based on the the training data. The following examples continue with the OVMS model.
Before querying your AI model through the API, get the model ready to provide answers based on the training data. The following examples continue with the OVMS model.
.Prerequisites

View File

@@ -37,7 +37,7 @@ spec:
----
<1> An additional argument to make {ovms} ({ov}) accept the request input data in a different layout than the model was originally exported with. Extra arguments are passed through to the {ov} container.
. Save the the `InferenceService` example to a file, then create it on the cluster by running the following command:
. Save the `InferenceService` example to a file, then create it on the cluster by running the following command:
+
[source,terminal,subs="+quotes"]
----
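The `InferenceService` example trimmed from this hunk follows the KServe `serving.kserve.io/v1beta1` schema. The following is an illustrative sketch only; the model name, model format, storage URI, and layout argument are assumptions to replace with the values from your deployment.
[source,yaml]
----
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: ovms-resnet50 # matches the model name queried later in this documentation
spec:
  predictor:
    model:
      modelFormat:
        name: openvino_ir # assumed format for an OVMS model
      storageUri: "oci://registry.example.com/models/resnet50:latest" # placeholder model image
      args:
      - --layout=NHWC:NCHW # example extra argument passed through to the OpenVINO Model Server container
----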

View File

@@ -58,7 +58,7 @@ $ sudo podman build -t $IMAGE_REF <1>
----
<1> Because CRI-O and Podman share storage, using `sudo` is required to make the image part of the root's container storage and usable by {microshift-short}.
+
.Example output:
.Example output
+
[source,text]
----

View File

@@ -34,7 +34,7 @@ Workflow for configuring a model-serving runtime::
* If the {microshift-short} cluster is already running, you can export the required `ServingRuntime` CR to a file and edit it.
* If the {microshift-short} cluster is not running, or if you want to manually prepare a manifest, you can use the original definition on the disk, which is is part of the `microshift-ai-model-serving` RPM.
* If the {microshift-short} cluster is not running, or if you want to manually prepare a manifest, you can use the original definition on the disk, which is part of the `microshift-ai-model-serving` RPM.
* Create the `InferenceService` CR in your workload namespace.
//CRD is shipped with product; the CR is what users are creating.

View File

@@ -65,7 +65,7 @@ set-cookie: 56bb4b6df4f80f0b59f56aa0a5a91c1a=4af1408b4a1c40925456f73033d4a7d1; p
$ curl "${DOMAIN}/v2/models/ovms-resnet50" --connect-to "${DOMAIN}::${IP}:"
----
+
.Example output:
.Example output
[source,json]
----
{"name":"ovms-resnet50","versions":["1"],"platform":"OpenVINO","inputs":[{"name":"0","datatype":"FP32","shape":[1,224,224,3]}],"outputs":[{"name":"1463","datatype":"FP32","shape":[1,1000]}]

View File

@@ -27,7 +27,7 @@ When searching for a namespace in the *Select Namespace* step of the migration p
.Unable to create a migration plan due to a reconciliation failure
In {mtc-short}, when creating a migration plan , the UI remains on *Persistent Volumes* and you cannot continue. This issue occurs due to a critical reconciliation failure and returns a 404 API error when you attempt to fetch the migration plan from the backend. These issues cause the migration plan to remain in a *Not Ready* state, and you are prevented from continuing. This issue has been resolved in {mtc-short} 1.8.6. link:https://issues.redhat.com/browse/MIG-1705[(MIG-1705)]
In {mtc-short}, when creating a migration plan, the UI remains on *Persistent Volumes* and you cannot continue. This issue occurs due to a critical reconciliation failure and returns a 404 API error when you attempt to fetch the migration plan from the backend. These issues cause the migration plan to remain in a *Not Ready* state, and you are prevented from continuing. This issue has been resolved in {mtc-short} 1.8.6. link:https://issues.redhat.com/browse/MIG-1705[(MIG-1705)]
.Migration process fails to complete after the `StageBackup` phase
@@ -35,7 +35,7 @@ When migrating a Django and PostgreSQL application, the migration becomes fails
.Migration shown as succeeded despite a failed phase due to a misleading UI status
After running a migration using {mtc-short}, the UI incorrectly indicates that the migration was successful, with the status shown as *Migration succeeded*. However, the Direct Volume Migration (DVM) phase failed. This misleading status appears on both the *Migration* and the *Migration Details* pages. This issue has been resolved in {mtc-short} 1.8.6. link:https://issues.redhat.com/browse/MIG-1711[(MIG-1711)]
After running a migration using {mtc-short}, the UI incorrectly indicates that the migration was successful, with the status shown as *Migration succeeded*. However, the Direct Volume Migration (DVM) phase failed. This misleading status appears on both the *Migration* and the *Migration Details* pages. This issue has been resolved in {mtc-short} 1.8.6. link:https://issues.redhat.com/browse/MIG-1711[(MIG-1711)]
.Persistent Volumes page hangs indefinitely for namespaces without persistent volume claims
When a migration plan includes a namespace that does not have any persistent volume claims (PVCs), the *Persistent Volumes* selection page remains indefinitely with the following message shown: `Discovering persistent volumes attached to source projects...`. The page never completes loading, preventing you from proceeding with the migration. This issue has been resolved in {mtc-short} 1.8.6. link:https://issues.redhat.com/browse/MIG-1713[(MIG-1713)]

View File

@@ -10,7 +10,7 @@ As a cluster administrator, you can tune the performance of your Vertical Pod Au
Additionally, you can configure the VPA Operator to monitor only those workloads that are being managed by a VPA custom resource (CR). By default, the VPA Operator monitors every workload in the cluster. This allows the VPA Operator to accrue and store 8 days of historical data for all workloads, which the Operator can use if a new VPA CR is created for a workload. However, this causes the VPA Operator to use significant CPU and memory, which could cause the Operator to fail, particularly on larger clusters. By configuring the VPA Operator to monitor only workloads with a VPA CR, you can save on CPU and memory resources. One trade-off is that if you have a workload that has been running, and you create a VPA CR to manage that workload, the VPA Operator does not have any historical data for that workload. As a result, the initial recommendations are not as useful as those after the workload has been running for some time.
These tunings allow you to ensure the VPA has sufficient resources to operate at peak efficiency and to prevent throttling and a possible delay in pod admissions.
These tunings allow you to ensure the VPA has sufficient resources to operate at peak efficiency and to prevent throttling and a possible delay in pod admissions.
You can perform the following tunings on the VPA components by editing the `VerticalPodAutoscalerController` custom resource (CR):
@@ -20,7 +20,7 @@ You can perform the following tunings on the VPA components by editing the `Vert
* To configure the VPA Operator to monitor only workloads that are being managed by a VPA CR, set the `memory-saver` parameter to `true` for the recommender component.
For guidelines on the resources and rate limits that you could set for each VPA component, the following tables provide recommended baseline values, depending on the size of your cluster and other factors.
For guidelines on the resources and rate limits that you could set for each VPA component, the following tables provide recommended baseline values, depending on the size of your cluster and other factors.
[IMPORTANT]
====
@@ -32,7 +32,7 @@ These recommended values were derived from internal Red{nbsp}Hat testing on clus
|===
| Component 2+| 1-500 containers 2+| 500-1000 containers 2+| 1000-2000 containers 2+| 2000-4000 containers 2+| 4000+ containers
|
|
| *CPU*
| *Memory*
| *CPU*
@@ -44,15 +44,15 @@ These recommended values were derived from internal Red{nbsp}Hat testing on clus
| *CPU*
| *Memory*
s| Admission
| 25m
s| Admission
| 25m
| 50Mi
| 25m
| 75Mi
| 40m
| 150Mi
| 75m
| 260Mi
| 25m
| 75Mi
| 40m
| 150Mi
| 75m
| 260Mi
| (0.03c)/2 + 10 ^[1]^
| (0.1c)/2 + 50 ^[1]^
@@ -94,7 +94,7 @@ It is recommended that you set the memory limit on your containers to at least d
|===
| Component 2+| 1 - 150 VPAs 2+| 151 - 500 VPAs 2+| 501-2000 VPAs 2+| 2001-4000 VPAs
|
|
| *QPS Limit* ^[1]^
| *Burst* ^[2]^
| *QPS Limit*
@@ -126,7 +126,7 @@ s| Updater
|===
[.small]
. QPS specifies the queries per second (QPS) limit when making requests to Kubernetes API server. The default for the updater and recommender pods is `5.0`.
. QPS specifies the queries per second (QPS) limit when making requests to Kubernetes API server. The default for the updater and recommender pods is `5.0`.
. Burst specifies the burst limit when making requests to Kubernetes API server. The default for the updater and recommender pods is `10.0`.
[NOTE]
@@ -147,7 +147,7 @@ Hiding as autoscaling custom resources not supported
|===
| Component 2+| 1-25 CR pod creation surge ^[1]^ 2+| 26-50 CR pod creation surge 2+| 50+ CR pod creation surge
|
|
| *QPS Limit* ^[2]^
| *Burst* ^[3]^
| *QPS Limit*
@@ -166,7 +166,7 @@ s| Admission
|===
[.small]
. _Pod creation surge_ refers to the maximum number of pods that you expect to be created in a single second at any given time.
. QPS specifies the queries per second (QPS) limit when making requests to Kubernetes API server. The default is `5.0`.
. QPS specifies the queries per second (QPS) limit when making requests to Kubernetes API server. The default is `5.0`.
. Burst specifies the burst limit when making requests to Kubernetes API server. The default is `10.0`.
[NOTE]
@@ -175,7 +175,7 @@ The admission pod can get throttled if you are using the VPA on custom resources
====
////
The following example VPA controller CR is for a cluster with 1000 to 2000 containers and a pod creation surge of 26 to 50. The CR sets the following values:
The following example VPA controller CR is for a cluster with 1000 to 2000 containers and a pod creation surge of 26 to 50. The CR sets the following values:
* The container memory and CPU requests for all three VPA components
* The container memory limit for all three VPA components
@@ -202,7 +202,7 @@ spec:
cpu: 40m
memory: 150Mi
limits:
memory: 300Mi
memory: 300Mi
recommender: <4>
container:
args:
@@ -234,7 +234,7 @@ spec:
----
<1> Specifies the tuning parameters for the VPA admission controller.
<2> Specifies the API QPS and burst rates for the VPA admission controller.
+
+
--
* `kube-api-qps`: Specifies the queries per second (QPS) limit when making requests to Kubernetes API server. The default is `5.0`.
* `kube-api-burst`: Specifies the burst limit when making requests to Kubernetes API server. The default is `10.0`.
@@ -248,7 +248,7 @@ spec:
Hiding these three callouts as not supported
<5> Specifies how often the VPA should collect the container metrics for the recommender pod. Valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, and `h`. The default is one minute.
<6> Specifies the timeout for writing VPA checkpoints after the start of the recommender interval. If you increase the `recommender-interval` value, it is recommended setting this value to the same value. Valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, and `h`. The default is one minute.
<9> Specifies how often the VPA should collect the container metrics for the updater pod. Valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, and `h`. The default is one minute.
<9> Specifies how often the VPA should collect the container metrics for the updater pod. Valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, and `h`. The default is one minute.
- '--recommender-interval=2m' <5>
- '--checkpoints-timeout=' <6>
- '--updater-interval=30m0s' <9>
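For context, the truncated example above comes from a `VerticalPodAutoscalerController` object. The following standalone sketch shows roughly where the QPS, burst, and resource settings sit; the apiVersion, object name, and specific values are assumptions based on the fragment above and should be checked against your cluster.
[source,yaml]
----
apiVersion: autoscaling.openshift.io/v1 # assumed API group for the VPA Operator
kind: VerticalPodAutoscalerController
metadata:
  name: default
  namespace: openshift-vertical-pod-autoscaler
spec:
  deploymentOverrides:
    admission:
      container:
        args:
        - '--kube-api-qps=30.0'   # example values; pick them from the tables above
        - '--kube-api-burst=40.0'
        resources:
          requests:
            cpu: 40m
            memory: 150Mi
          limits:
            memory: 300Mi
    recommender:
      container:
        args:
        - '--kube-api-qps=20.0'
        - '--kube-api-burst=60.0'
        - '--memory-saver=true' # monitor only workloads that have a VPA CR
    updater:
      container:
        args:
        - '--kube-api-qps=20.0'
        - '--kube-api-burst=80.0'
----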

View File

@@ -36,4 +36,4 @@ Before setting up a `ClusterUserDefinedNetwork` custom resource (CR), users shou
** Avoid overlapping subnets between your physical network and your other network interfaces. Overlapping network subnets can cause routing conflicts and network instability. To prevent conflicts when using the `spec.network.localnet.subnets` parameter, you might use the `spec.network.localnet.excludeSubnets` parameter.
** When you configure a Virtual Local Area Network (VLAN), you must ensure that both your underlying physical infrastructure (switches, routers, and so on) and your nodes are properly configured to accept VLAN IDs (VIDs). This means that you configure the physical network interface, for example `eth1`, as an access port for the VLAN, for example `20`, that you are connecting to through the physical switch. In addition, you must verify that an OVS bridge mapping, for example `eth1`, exists on your nodes to ensure that that the physical interface is properly connected with OVN-Kubernetes.
** When you configure a Virtual Local Area Network (VLAN), you must ensure that both your underlying physical infrastructure (switches, routers, and so on) and your nodes are properly configured to accept VLAN IDs (VIDs). This means that you configure the physical network interface, for example `eth1`, as an access port for the VLAN, for example `20`, that you are connecting to through the physical switch. In addition, you must verify that an OVS bridge mapping, for example `eth1`, exists on your nodes to ensure that the physical interface is properly connected with OVN-Kubernetes.
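To make the VLAN and subnet guidance above concrete, the following is a rough sketch of a localnet `ClusterUserDefinedNetwork` CR. Treat the field names and values as assumptions to validate against the API reference: the physical network name must match an OVS bridge mapping on your nodes, and the VLAN ID, subnets, and excluded subnets are placeholders.
[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: cudn-localnet-example
spec:
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: my-namespace # placeholder namespace
  network:
    topology: Localnet
    localnet:
      role: Secondary
      physicalNetworkName: localnet1 # assumed name; must match the OVS bridge mapping, for example for eth1
      vlan:
        mode: Access
        access:
          id: 20 # the VLAN ID that your switches accept
      subnets:
      - 192.0.2.0/24 # placeholder subnet
      excludeSubnets:
      - 192.0.2.1/32 # placeholder exclusion, for example the gateway address
----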

View File

@@ -45,7 +45,7 @@ $ oc delete OperatorGroup dpu-operators -n openshift-dpu-operator
$ oc get csv -n openshift-dpu-operator
----
+
.Example output:
.Example output
+
[source,terminal]
----
@@ -76,7 +76,7 @@ $ oc delete namespace openshift-dpu-operator
$ oc get csv -n openshift-dpu-operator
----
+
.Example output:
.Example output
+
[source,terminal]
----

View File

@@ -6,11 +6,11 @@
[id="nw-understanding-networking-features_{context}"]
= Networking features
{product-title} offers several networking features and enhancements. These features and enhancements are listed as follows:
{product-title} offers several networking features and enhancements. These features and enhancements are listed as follows:
* Ingress Operator and Route API: {product-title} includes an Ingress Operator that implements the Ingress Controller API. This component enables external access to cluster services by deploying and managing HAProxy-based Ingress Controllers that support advanced routing configurations and load balancing. {product-title} uses the Route API to translate upstream Ingress objects to route objects. Routes are specific to networking in {product-title}, but you can also use third-party Ingress Controllers.
* Enhanced security: {product-title} provides advanced network security features, such as the egress firewall and and the ingress node firewall.
* Enhanced security: {product-title} provides advanced network security features, such as the egress firewall and the ingress node firewall.
+
** Egress firewall: The egress firewall controls and restricts outbound traffic from pods within the cluster. You can set rules to limit which external hosts or IP ranges with which pods can communicate.
** Ingress node firewall: The ingress node firewall is managed by the Ingress Firewall Operator and provides firewall rules at the node level. You can protect your nodes from threats by configuring this firewall on specific nodes within the cluster to filter incoming traffic before it reaches these nodes.
@@ -32,4 +32,4 @@
* Egress IP: Egress IP allows you to assign a fixed source IP address for all egress traffic originating from pods within a namespace. Egress IP can improve security and access control by ensuring consistent source IP addresses for external services. For example, if a pod needs to access an external database that only allows traffic from specific IP adresses, you can configure an egress IP for that pod to meet the access requirements.
* Egress router: An egress router is a pod that acts as a bridge between the cluster and external systems. Egress routers allow traffic from pods to be routed through a specific IP address that is not used for any other purpose. With egress routers, you can enforce access controls or route traffic through a specific gateway.
* Egress router: An egress router is a pod that acts as a bridge between the cluster and external systems. Egress routers allow traffic from pods to be routed through a specific IP address that is not used for any other purpose. With egress routers, you can enforce access controls or route traffic through a specific gateway.

View File

@@ -21,9 +21,9 @@ You can use the Self-Service feature only after the cluster administrator instal
link:https://issues.redhat.com/browse/OADP-4001[OADP-4001]
.Collecting logs with the `must-gather` tool has been improved with a Markdown summary
.Collecting logs with the `must-gather` tool has been improved with a Markdown summary
You can collect logs, and information about {oadp-first} custom resources by using the `must-gather` tool. The `must-gather` data must be attached to all customer cases.
You can collect logs, and information about {oadp-first} custom resources by using the `must-gather` tool. The `must-gather` data must be attached to all customer cases.
This tool generates a Markdown output file with the collected information, which is located in the `must-gather` logs clusters directory.
link:https://issues.redhat.com/browse/OADP-5384[OADP-5384]
@@ -44,8 +44,8 @@ Velero no longer uses the `node-agent-config` config map for configuring the `no
link:https://issues.redhat.com/browse/OADP-5042[OADP-5042]
.Configuring DPA with with the backup repository configuration config map is now possible
.Configuring DPA with the backup repository configuration config map is now possible
With Velero 1.15 and later, you can now configure the total size of a cache per repository. This prevents pods from being removed due to running out of ephemeral storage. See the following new parameters added to the `NodeAgentConfig` field in DPA:
* `cacheLimitMB`: Sets the local data cache size limit in megabytes.
@@ -78,7 +78,7 @@ link:https://issues.redhat.com/browse/OADP-5031[OADP-5031]
.Adds DPA support for parallel item backup
By default, only one thread processes an item block. Velero 1.16 supports a parallel item backup, where multiple items within a backup can be processed in parallel.
By default, only one thread processes an item block. Velero 1.16 supports a parallel item backup, where multiple items within a backup can be processed in parallel.
You can use the optional Velero server parameter `--item-block-worker-count` to run additional worker threads to process items in parallel. To enable this in OADP, set the `dpa.Spec.Configuration.Velero.ItemBlockWorkerCount` parameter to an integer value greater than zero.
[NOTE]
@@ -112,7 +112,7 @@ link:https://issues.redhat.com/browse/OADP-1338[OADP-1338]
.Containers now use `FallbackToLogsOnError` for `terminationMessagePolicy`
With this release, the `terminationMessagePolicy` field can now set the `FallbackToLogsOnError` value for the {oadp-first} Operator containers such as `operator-manager`, `velero`, `node-agent`, and `non-admin-controller`.
With this release, the `terminationMessagePolicy` field can now set the `FallbackToLogsOnError` value for the {oadp-first} Operator containers such as `operator-manager`, `velero`, `node-agent`, and `non-admin-controller`.
This change ensures that if a container exits with an error and the termination message file is empty, {OCP-short} uses the last portion of the container logs output as the termination message.
@@ -120,7 +120,7 @@ link:https://issues.redhat.com/browse/OADP-5183[OADP-5183]
.Namespace admin can now access the application after restore
Previously, the namespace admin could not execute an application after the restore operation with the following errors:
Previously, the namespace admin could not execute an application after the restore operation with the following errors:
* `exec operation is not allowed because the pod's security context exceeds your permissions`
* `unable to validate against any security context constraint`
@@ -132,7 +132,7 @@ link:https://issues.redhat.com/browse/OADP-5611[OADP-5611]
.Specifying status restoration at the individual resource instance level using the annotation is now possible
Previously, status restoration was only configured at the resource type using the `restoreStatus` field in the `Restore` custom resource (CR).
Previously, status restoration was only configured at the resource type using the `restoreStatus` field in the `Restore` custom resource (CR).
With this release, you can now specify the status restoration at the individual resource instance level using the following annotation:
@@ -148,7 +148,7 @@ link:https://issues.redhat.com/browse/OADP-5968[OADP-5968]
.Restore is now successful with `excludedClusterScopedResources`
Previously, on performing the backup of an application with the `excludedClusterScopedResources` field set to `storageclasses`, `Namespace` parameter, the backup was successful but the restore partially failed.
Previously, on performing the backup of an application with the `excludedClusterScopedResources` field set to `storageclasses`, `Namespace` parameter, the backup was successful but the restore partially failed.
With this update, the restore is successful.
link:https://issues.redhat.com/browse/OADP-5239[OADP-5239]
@@ -168,7 +168,7 @@ link:https://issues.redhat.com/browse/OADP-2941[OADP-2941]
.Error messages are now more informative when the` disableFsbackup` parameter is set to `true` in DPA
Previously, when the `spec.configuration.velero.disableFsBackup` field from a Data Protection Application (DPA) was set to `true`, the backup partially failed with an error, which was not informative.
Previously, when the `spec.configuration.velero.disableFsBackup` field from a Data Protection Application (DPA) was set to `true`, the backup partially failed with an error, which was not informative.
This update makes error messages more useful for troubleshooting issues. For example, error messages indicating that `disableFsBackup: true` is the issue in a DPA or not having access to a DPA if it is for non-administrator users.
@@ -176,15 +176,15 @@ link:https://issues.redhat.com/browse/OADP-5952[OADP-5952]
.Handles AWS STS credentials in the parseAWSSecret
Previously, AWS credentials using STS authentication were not properly validated.
Previously, AWS credentials using STS authentication were not properly validated.
With this update, the `parseAWSSecret` function detects STS-specific fields, and updates the `ensureSecretDataExists` function to handle STS profiles correctly.
link:https://issues.redhat.com/browse/OADP-6105[OADP-6105]
.The `repositoryMaintenance` job affinity config is available to configure
Previously, the new configurations for repository maintenance job pod affinity was missing from a DPA specification.
Previously, the new configurations for repository maintenance job pod affinity was missing from a DPA specification.
With this update, the `repositoryMaintenance` job affinity config is now available to map a `BackupRepository` identifier to its configuration.
@@ -192,7 +192,7 @@ link:https://issues.redhat.com/browse/OADP-6134[OADP-6134]
.The `ValidationErrors` field fades away once the CR specification is correct
Previously, when a schedule CR was created with a wrong `spec.schedule` value and the same was later patched with a correct value, the `ValidationErrors` field still existed. Consequently, the `ValidationErrors` field was displaying incorrect information even though the spec was correct.
Previously, when a schedule CR was created with a wrong `spec.schedule` value and the same was later patched with a correct value, the `ValidationErrors` field still existed. Consequently, the `ValidationErrors` field was displaying incorrect information even though the spec was correct.
With this update, the `ValidationErrors` field fades away once the CR specification is correct.
@@ -202,7 +202,7 @@ link:https://issues.redhat.com/browse/OADP-5419[OADP-5419]
Previously, when a restore operation was triggered with the `includedNamespace` field in a restore specification, restore operation was completed successfully but no `volumeSnapshotContents` custom resources (CR) were created and the PVCs were in a `Pending` status.
With this update, `volumeSnapshotContents` CR are restored even when the `includedNamesapces` field is used in `restoreSpec`. As a result, an application pod is in a `Running` state after restore.
With this update, `volumeSnapshotContents` CR are restored even when the `includedNamesapces` field is used in `restoreSpec`. As a result, an application pod is in a `Running` state after restore.
link:https://issues.redhat.com/browse/OADP-5939[OADP-5939]
@@ -234,10 +234,10 @@ For a complete list of all issues resolved in this release, see the list of link
Even after deleting a backup, Kopia does not delete the volume artifacts from the `${bucket_name}/kopia/${namespace}` on the S3 location after the backup expired. Information related to the expired and removed data files remains in the metadata.
To ensure that {oadp-first} functions properly, the data is not deleted, and it exists in the `/kopia/` directory, for example:
* `kopia.repository`: Main repository format information such as encryption, version, and other details.
* `kopia.blobcfg`: Configuration for how data blobs are named.
* `kopia.maintenance`: Tracks maintenance owner, schedule, and last successful build.
* `log`: Log blobs.
* `kopia.repository`: Main repository format information such as encryption, version, and other details.
* `kopia.blobcfg`: Configuration for how data blobs are named.
* `kopia.maintenance`: Tracks maintenance owner, schedule, and last successful build.
* `log`: Log blobs.
link:https://issues.redhat.com/browse/OADP-5131[OADP-5131]

View File

@@ -8,9 +8,9 @@
In this use case, you back up an application by using {oadp-short} and store the backup in an object storage provided by {odf-first}.
* You create a object bucket claim (OBC) to configure the backup storage location. You use {odf-short} to configure an Amazon S3-compatible object storage bucket. {odf-short} provides MultiCloud Object Gateway (NooBaa MCG) and Ceph Object Gateway, also known as RADOS Gateway (RGW), object storage service. In this use case, you use NooBaa MCG as the backup storage location.
* You use the NooBaa MCG service with {oadp-short} by using the `aws` provider plugin.
* You configure the Data Protection Application (DPA) with the backup storage location (BSL).
* You create an object bucket claim (OBC) to configure the backup storage location. You use {odf-short} to configure an Amazon S3-compatible object storage bucket. {odf-short} provides MultiCloud Object Gateway (NooBaa MCG) and Ceph Object Gateway, also known as RADOS Gateway (RGW), object storage service. In this use case, you use NooBaa MCG as the backup storage location.
* You use the NooBaa MCG service with {oadp-short} by using the `aws` provider plugin.
* You configure the Data Protection Application (DPA) with the backup storage location (BSL).
* You create a backup custom resource (CR) and specify the application namespace to back up.
* You create and verify the backup.
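The object bucket claim from the first bullet is an `ObjectBucketClaim` resource provided by {odf-short}. A minimal sketch, assuming the claim name `test-obc` used later in this procedure, the `openshift-adp` namespace, and the NooBaa MCG storage class, might look like this:
[source,yaml]
----
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: test-obc
  namespace: openshift-adp # assumed namespace so that the generated secret sits next to the DPA
spec:
  generateBucketName: test-backup-bucket # prefix for the generated bucket name
  storageClassName: openshift-storage.noobaa.io # assumed MCG object bucket storage class
----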
@@ -74,7 +74,7 @@ s3.openshift-storage.svc
+
[source,terminal]
----
$ oc extract --to=- secret/test-obc
$ oc extract --to=- secret/test-obc
----
+
.Example output
@@ -98,8 +98,8 @@ $ oc get route s3 -n openshift-storage
[source,terminal]
----
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
----
. Create the `cloud-credentials` secret with the `cloud-credentials` file content as shown in the following command:
@@ -139,13 +139,13 @@ spec:
profile: "default"
region: noobaa
s3Url: https://s3.openshift-storage.svc # <2>
s3ForcePathStyle: "true"
s3ForcePathStyle: "true"
insecureSkipTLSVerify: "true"
provider: aws
default: true
credential:
key: cloud
name: cloud-credentials
name: cloud-credentials
objectStorage:
bucket: <bucket_name> # <3>
prefix: oadp
@@ -158,7 +158,7 @@ spec:
+
[source,terminal]
----
$ oc apply -f <dpa_filename>
$ oc apply -f <dpa_filename>
----
. Verify that the DPA is created successfully by running the following command. In the example output, you can see the `status` object has `type` field set to `Reconciled`. This means, the DPA is successfully created.
@@ -200,7 +200,7 @@ metadata:
+
[source,terminal]
----
$ oc get backupstoragelocations.velero.io -n openshift-adp
$ oc get backupstoragelocations.velero.io -n openshift-adp
----
+
.Example output
@@ -230,7 +230,7 @@ spec:
+
[source,terminal]
----
$ oc apply -f <backup_cr_filename>
$ oc apply -f <backup_cr_filename>
----
.Verification
@@ -239,7 +239,7 @@ $ oc apply -f <backup_cr_filename>
+
[source,terminal]
----
$ oc describe backup test-backup -n openshift-adp
$ oc describe backup test-backup -n openshift-adp
----
+
.Example output
@@ -262,4 +262,4 @@ Status:
Start Timestamp: 2024-09-25T10:16:31Z
Version: 1
Events: <none>
----
----

View File

@@ -18,55 +18,55 @@ The following tables describe the `oc mirror` subcommands and flags for deleting
|`--authfile <string>`
|Path of the authentication file. The default value is `${XDG_RUNTIME_DIR}/containers/auth.json`.
|`--cache-dir <string>`
|`--cache-dir <string>`
|oc-mirror cache directory location. The default is `$HOME`.
|`-c <string>`, `--config <string>`
|`-c <string>`, `--config <string>`
|Path to the delete imageset configuration file.
|`--delete-id <string>`
|`--delete-id <string>`
|Used to differentiate between versions for files created by the delete functionality.
|`--delete-v1-images`
|`--delete-v1-images`
|Used during the migration, along with `--generate`, in order to target images previously mirrored with oc-mirror plugin v1.
|`--delete-yaml-file <string>`
|`--delete-yaml-file <string>`
|If set, uses the generated or updated yaml file to delete contents.
|`--dest-tls-verify`
|Require HTTPS and verify certificates when talking to the container registry or daemon. TThe default value is `true`.
|`--force-cache-delete`
|Used to force delete the local cache manifests and blobs.
|`--generate`
|Used to generate the delete yaml for the list of manifests and blobs, used when deleting from local cache and remote registry.
|`-h`, `--help`
|Displays help.
|`--log-level <string>`
|Log level one of `info`, `debug`, `trace`, and `error`. The default value is `info`.
|`--parallel-images <unit>`
|Indicates the number of images deleted in parallel. The default value is `8`.
|`--parallel-layers <unit>`
|Indicates the number of image layers mirrored in parallel. The default value is `10`.
|`-p <unit>`, `--port <unit>`
|HTTP port used by oc-mirror's local storage instance. The default value is `55000`.
|`--retry-delay`
|Duration delay between 2 retries. The default value is `1s`.
|`--retry-times <int>`
|The number of times to retry. The default value is `2`.
|`--src-tls-verify`
|`--dest-tls-verify`
|Require HTTPS and verify certificates when talking to the container registry or daemon. The default value is `true`.
|`--workspace <string>`
|`--force-cache-delete`
|Used to force delete the local cache manifests and blobs.
|`--generate`
|Used to generate the delete yaml for the list of manifests and blobs, used when deleting from local cache and remote registry.
|`-h`, `--help`
|Displays help.
|`--log-level <string>`
|Log level one of `info`, `debug`, `trace`, and `error`. The default value is `info`.
|`--parallel-images <unit>`
|Indicates the number of images deleted in parallel. The default value is `8`.
|`--parallel-layers <unit>`
|Indicates the number of image layers mirrored in parallel. The default value is `10`.
|`-p <unit>`, `--port <unit>`
|HTTP port used by oc-mirror's local storage instance. The default value is `55000`.
|`--retry-delay`
|Duration delay between 2 retries. The default value is `1s`.
|`--retry-times <int>`
|The number of times to retry. The default value is `2`.
|`--src-tls-verify`
|Require HTTPS and verify certificates when talking to the container registry or daemon. The default value is `true`.
|`--workspace <string>`
|oc-mirror workspace where resources and internal artifacts are generated.
|===
|===

View File

@@ -180,7 +180,7 @@ spec:
requests:
storage: 2048Gi <2>
----
<1> PVC references the the storage pool-specific storage class. In this example, `hyperdisk-sc`.
<1> PVC references the storage pool-specific storage class. In this example, `hyperdisk-sc`.
<2> Target storage capacity of the hyperdisk-balanced volume. In this example, `2048Gi`.
. Create a deployment that uses the PVC that you just created. Using a deployment helps ensure that your application has access to the persistent storage even after the pod restarts and rescheduling:
@@ -284,4 +284,4 @@ $ gcloud compute storage-pools list-disks pool-us-east4-c --zone=us-east4-c
----
NAME STATUS PROVISIONED_IOPS PROVISIONED_THROUGHPUT SIZE_GB
pvc-1ff52479-4c81-4481-aa1d-b21c8f8860c6 READY 3000 140 2048
----
----
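The `hyperdisk-sc` storage class referenced by the PVC callout is created earlier in this procedure. As a rough sketch only, a storage pool-backed class for the GCP PD CSI driver might look like the following; the project ID, zone, and `storage-pools` parameter value are assumptions taken from the `gcloud` output above.
[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hyperdisk-sc
provisioner: pd.csi.storage.gke.io
parameters:
  type: hyperdisk-balanced
  storage-pools: projects/<project_id>/zones/us-east4-c/storagePools/pool-us-east4-c # assumed path format
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
----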

View File

@@ -21,7 +21,7 @@ You can restore the back-end Redis database by deleting the deployment and speci
$ oc delete deployment backend-redis -n threescale
----
+
.Example output:
.Example output
+
[source,terminal]
----
@@ -71,7 +71,7 @@ $ oc create -f restore-backend.yaml
restore.velerio.io/restore-backend created
----
.Verification
.Verification
* Verify that the `PodVolumeRestore` restore is completed by running the following command:
+
@@ -85,4 +85,4 @@ $ oc get podvolumerestores.velero.io -n openshift-adp
----
NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE
restore-backend-jmrwx threescale backend-redis-1-bsfmv kopia backend-redis-storage Completed 76123 76123 21m
----
----
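The `restore-backend.yaml` file applied earlier in this procedure is not shown in the hunk. A minimal Velero `Restore` CR of that shape might look like the following sketch; the backup name and the choice to restore PVs are assumptions.
[source,yaml]
----
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-backend
  namespace: openshift-adp
spec:
  backupName: redis-backup # assumed name of the backup created earlier
  restorePVs: true
----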

View File

@@ -8,7 +8,7 @@
Restoring a MySQL database re-creates the following resources:
* The `Pod`, `ReplicationController`, and `Deployment` objects.
* The additional persistent volumes (PVs) and associated persistent volume claims (PVCs).
* The additional persistent volumes (PVs) and associated persistent volume claims (PVCs).
* The MySQL dump, which the `example-claim` PVC contains.
[WARNING]
@@ -29,7 +29,7 @@ Do not delete the default PV and PVC associated with the database. If you do, yo
$ oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale
----
+
.Example output:
.Example output
[source,terminal]
----
deployment.apps/threescale-operator-controller-manager-v2 scaled
@@ -54,10 +54,10 @@ done
+
[source,terminal]
----
$ ./scaledowndeployment.sh
$ ./scaledowndeployment.sh
----
+
.Example output:
.Example output
[source,terminal]
----
deployment.apps.openshift.io/apicast-production scaled
@@ -84,7 +84,7 @@ deployment.apps.openshift.io/zync-que scaled
$ oc delete deployment system-mysql -n threescale
----
+
.Example output:
.Example output
[source,terminal]
----
Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+
@@ -123,7 +123,7 @@ spec:
- '-c'
- >
sleep 30
mysql -h 127.0.0.1 -D system -u root
--password=$MYSQL_ROOT_PASSWORD <
/var/lib/mysqldump/data/dump.sql <2>
@@ -151,7 +151,7 @@ $ oc create -f restore-mysql.yaml
restore.velerio.io/restore-mysql created
----
.Verification
.Verification
. Verify that the `PodVolumeRestore` restore is completed by running the following command:
+
@@ -160,7 +160,7 @@ restore.velerio.io/restore-mysql created
$ oc get podvolumerestores.velero.io -n openshift-adp
----
+
.Example output:
.Example output
[source,terminal]
----
NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE
@@ -175,7 +175,7 @@ restore-mysql-z7x7l threescale system-mysql-2-kjkhl kopia exa
$ oc get pvc -n threescale
----
+
.Example output:
.Example output
[source,terminal]
----
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
@@ -184,4 +184,4 @@ example-claim Bound pvc-cbaa49b0-06cd-4b1a-9e90-0ef755c67a54 1Gi
mysql-storage Bound pvc-4549649f-b9ad-44f7-8f67-dd6b9dbb3896 1Gi RWO gp3-csi <unset> 68m
system-redis-storage Bound pvc-04dadafd-8a3e-4d00-8381-6041800a24fc 1Gi RWO gp3-csi <unset> 68m
system-searchd Bound pvc-afbf606c-d4a8-4041-8ec6-54c5baf1a3b9 1Gi RWO gp3-csi <unset> 68m
----
----

View File

@@ -30,7 +30,7 @@ $ oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n
deployment.apps/threescale-operator-controller-manager-v2 scaled
----
. Ensure that the 3scale pod is running to verify if the 3scale operator was deployed by running the following command:
. Ensure that the 3scale pod is running to verify if the 3scale operator was deployed by running the following command:
+
[source,terminal]
----
@@ -63,10 +63,10 @@ done
+
[source,terminal]
----
$ ./scaledeployment.sh
$ ./scaledeployment.sh
----
+
.Example output:
.Example output
[source,terminal]
----
deployment.apps.openshift.io/apicast-production scaled
@@ -107,4 +107,4 @@ zync-3scale-provider-mjxlb 3scale.apps.custom-cluster-name.openshift.com
+
In this example, `3scale-admin.apps.custom-cluster-name.openshift.com` is the 3scale-admin URL.
. Use the URL from this output to log in to the 3scale operator as an administrator. You can verify that the data, when you took backup, is available.
. Use the URL from this output to log in to the 3scale operator as an administrator. You can verify that the data, when you took backup, is available.

View File

@@ -14,7 +14,7 @@ Shrinking persistent volumes (PVs) is _not_ supported.
.Prerequisites
* The underlying CSI driver supports resize. See "CSI drivers supported by {product-title}" in the "Additional Resources" section.
* The underlying CSI driver supports resize. See "CSI drivers supported by {product-title}" in the "Additional resources" section.
* Dynamic provisioning is used.
@@ -24,4 +24,4 @@ Shrinking persistent volumes (PVs) is _not_ supported.
. For the persistent volume claim (PVC), set `.spec.resources.requests.storage` to the desired new size.
. Watch the `status.conditions` field of the PVC to see if the resize has completed. {product-title} adds the `Resizing` condition to the PVC during expansion, which is removed after expansion completes.
. Watch the `status.conditions` field of the PVC to see if the resize has completed. {product-title} adds the `Resizing` condition to the PVC during expansion, which is removed after expansion completes.
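For example, a minimal PVC manifest requesting a larger size looks like the following sketch; the claim name and target size are placeholders, and only `spec.resources.requests.storage` changes when you expand an existing claim.
[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <pvc_name>
spec:
  resources:
    requests:
      storage: 8Gi # new, larger requested size
----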

View File

@@ -8,12 +8,12 @@
You can enable port isolation for a Linux bridge network attachment definition (NAD) so that virtual machines (VMs) or pods that run on the same virtual LAN (VLAN) can operate in isolation from one another. The Linux bridge NAD creates a virtual bridge, or _virtual switch_, between network interfaces and the physical network.
Isolating ports in this way can provide enhanced security for VM workloads that run on the same node.
Isolating ports in this way can provide enhanced security for VM workloads that run on the same node.
.Prerequisites
* For VMs, you configured either a static or dynamic IP address for each VM. See "Configuring IP addresses for virtual machines".
* You created a Linux bridge NAD by using either the web console or the command-line interface.
* You created a Linux bridge NAD by using either the web console or the command-line interface.
* You have installed the {oc-first}.
.Procedure
@@ -25,9 +25,9 @@ Isolating ports in this way can provide enhanced security for VM workloads that
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: bridge-network
name: bridge-network
annotations:
k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1
k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1
spec:
config: |
{
@@ -42,7 +42,7 @@ spec:
}
# ...
----
<1> The name for the configuration. The name must match the the value in the `metadata.name` of the NAD.
<1> The name for the configuration. The name must match the value in the `metadata.name` of the NAD.
<2> The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. Do not change this field unless you want to use a different CNI.
<3> The name of the Linux bridge that is configured on the node. The name must match the interface bridge name defined in the NodeNetworkConfigurationPolicy manifest.
<4> Enables or disables port isolation on the virtual bridge. Default value is `false`. When set to `true`, each VM or pod is assigned to an isolated port. The virtual bridge prevents traffic from one isolated port from reaching another isolated port.

View File

@@ -18,8 +18,7 @@ You can restore a virtual machine (VM) to a previous configuration represented b
. Click the Options menu {kebab} and select *Restore VirtualMachine from snapshot*.
. Click *Restore*.
. Optional: You can also create a new VM based on the snapshot. To do so:
.. In the Options menu {kebab} of the the snapshot, select *Create VirtualMachine from Snapshot*.
. Optional: You can also create a new VM based on the snapshot. To do so:
.. In the Options menu {kebab} of the snapshot, select *Create VirtualMachine from Snapshot*.
.. Provide a name for the new VM.
.. Click *Create*

View File

@@ -6,7 +6,7 @@
[id="ztp-configuring-the-hub-cluster-for-backup-and-restore_{context}"]
= Configuring the hub cluster for backup and restore
You can use {ztp} to configure a set of policies to back up `BareMetalHost` resources.
You can use {ztp} to configure a set of policies to back up `BareMetalHost` resources.
This allows you to recover data from a failed hub cluster and deploy a replacement cluster using {rh-rhacm-first}.
.Prerequisites
@@ -18,7 +18,7 @@ This allows you to recover data from a failed hub cluster and deploy a replaceme
.Procedure
. Create a policy to add the `cluster.open-cluster-management.io/backup=cluster-activation` label to all `BareMetalHost` resources that have the `infraenvs.agent-install.openshift.io` label.
Save the policy as `BareMetalHostBackupPolicy.yaml`.
Save the policy as `BareMetalHostBackupPolicy.yaml`.
+
The following example adds the `cluster.open-cluster-management.io/backup` label to all `BareMetalHost` resources that have the `infraenvs.agent-install.openshift.io` label:
+
@@ -104,7 +104,7 @@ $ oc apply -f BareMetalHostBackupPolicy.yaml
.Verification
. Find all `BareMetalHost` resources with the label `infraenvs.agent-install.openshift.io` by running the following command:
. Find all `BareMetalHost` resources with the label `infraenvs.agent-install.openshift.io` by running the following command:
+
[source,terminal]
----
@@ -133,9 +133,9 @@ baremetal-ns baremetal-name false 50s
----
+
The output must show the same list as in the previous step, which listed all `BareMetalHost` resources with the label `infraenvs.agent-install.openshift.io`.
This confirms that that all the `BareMetalHost` resources with the `infraenvs.agent-install.openshift.io` label also have the `cluster.open-cluster-management.io/backup: cluster-activation` label.
This confirms that all the `BareMetalHost` resources with the `infraenvs.agent-install.openshift.io` label also have the `cluster.open-cluster-management.io/backup: cluster-activation` label.
+
The following example shows a `BareMetalHost` resource with the `infraenvs.agent-install.openshift.io` label.
The following example shows a `BareMetalHost` resource with the `infraenvs.agent-install.openshift.io` label.
The resource must also have the `cluster.open-cluster-management.io/backup: cluster-activation` label, which was added by the policy created in step 1.
+
[source,yaml]
@@ -150,7 +150,7 @@ metadata:
namespace: baremetal-ns
----
You can now use {rh-rhacm-title} to restore a managed cluster.
You can now use {rh-rhacm-title} to restore a managed cluster.
[IMPORTANT]
====
@@ -169,9 +169,9 @@ spec:
veleroCredentialsBackupName: latest
veleroResourcesBackupName: latest
restoreStatus:
includedResources:
includedResources:
- BareMetalHosts<2>
----
====
<1> Set `veleroManagedClustersBackupName: latest` to restore activation resources.
<2> Restores the status for `BareMetalHosts` resources.
<2> Restores the status for `BareMetalHosts` resources.

View File

@@ -107,7 +107,7 @@ For more information, see xref:../../observability/network_observability/netobse
* Previously, a resource using multiple IPs was displayed separately in the *Topology* view. Now, the resource shows as a single topology node in the view. (link:https://issues.redhat.com/browse/NETOBSERV-1818[*NETOBSERV-1818*])
* Previously, the console refreshed the *Network traffic* table view contents when the mouse pointer hovered over the columns. Now, the the display is fixed, so row height remains constant with a mouse hover. (link:https://issues.redhat.com/browse/NETOBSERV-2049[*NETOBSERV-2049*])
* Previously, the console refreshed the *Network traffic* table view contents when the mouse pointer hovered over the columns. Now, the display is fixed, so row height remains constant with a mouse hover. (link:https://issues.redhat.com/browse/NETOBSERV-2049[*NETOBSERV-2049*])
[id="network-observability-operator-1-8-known-issues_{context}"]
=== Known issues

View File

@@ -8,19 +8,19 @@ toc::[]
The Multiarch Tuning Operator optimizes workload management within multi-architecture clusters and in single-architecture clusters transitioning to multi-architecture environments.
Architecture-aware workload scheduling allows the scheduler to place pods onto nodes that match the architecture of the pod images.
Architecture-aware workload scheduling allows the scheduler to place pods onto nodes that match the architecture of the pod images.
By default, the scheduler does not consider the architecture of a pod's container images when determining the placement of new pods onto nodes.
To enable architecture-aware workload scheduling, you must create the `ClusterPodPlacementConfig` object. When you create the `ClusterPodPlacementConfig` object, the Multiarch Tuning Operator deploys the necessary operands to support architecture-aware workload scheduling. You can also use the `nodeAffinityScoring` plugin in the `ClusterPodPlacementConfig` object to set cluster-wide scores for node architectures. If you enable the `nodeAffinityScoring` plugin, the scheduler first filters nodes with compatible architectures and then places the pod on the node with the highest score.
When a pod is created, the operands perform the following actions:
When a pod is created, the operands perform the following actions:
. Add the `multiarch.openshift.io/scheduling-gate` scheduling gate that prevents the scheduling of the pod.
. Compute a scheduling predicate that includes the supported architecture values for the `kubernetes.io/arch` label.
. Compute a scheduling predicate that includes the supported architecture values for the `kubernetes.io/arch` label.
. Integrate the scheduling predicate as a `nodeAffinity` requirement in the pod specification.
. Remove the scheduling gate from the pod.
[IMPORTANT]
====
Note the following operand behaviors:
@@ -31,7 +31,7 @@ Note the following operand behaviors:
* If the `nodeName` field is already set, the Multiarch Tuning Operator does not process the pod.
* If the pod is owned by a DaemonSet, the operand does not update the the `nodeAffinity` field.
* If the pod is owned by a DaemonSet, the operand does not update the `nodeAffinity` field.
* If both `nodeSelector` or `nodeAffinity` and `preferredAffinity` fields are set for the `kubernetes.io/arch` label, the operand does not update the `nodeAffinity` field.
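A basic `ClusterPodPlacementConfig` object that enables the `nodeAffinityScoring` plugin might look like the following sketch; the apiVersion, the object name `cluster`, and the architecture weights are assumptions to verify against the Multiarch Tuning Operator documentation.
[source,yaml]
----
apiVersion: multiarch.openshift.io/v1beta1 # assumed API group/version
kind: ClusterPodPlacementConfig
metadata:
  name: cluster
spec:
  plugins:
    nodeAffinityScoring:
      enabled: true
      platforms:
      - architecture: amd64
        weight: 100 # prefer amd64 nodes when both architectures can run the pod
      - architecture: arm64
        weight: 50
----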
@@ -69,4 +69,4 @@ include::modules/multi-arch-deleting-podplacment-config-using-web-console.adoc[l
//Uninstalling Multiarch Tuning Operator
include::modules/multi-arch-uninstalling-using-cli.adoc[leveloffset=+1]
include::modules/multi-arch-uninstalling-using-web-console.adoc[leveloffset=+1]
include::modules/multi-arch-uninstalling-using-web-console.adoc[leveloffset=+1]

View File

@@ -35,7 +35,7 @@ After you complete these steps, you can xref:../tutorials/dev-app-cli.adoc#getti
[id="prerequisites_{context}"]
== Prerequisites
Before you start this tutorial, ensure that you have the the following required prerequisites:
Before you start this tutorial, ensure that you have the following required prerequisites:
* You have installed the xref:../cli_reference/openshift_cli/getting-started-cli.adoc#installing-openshift-cli[{oc-first}].
* You have access to a test {product-title} cluster.

View File

@@ -35,7 +35,7 @@ After you complete these steps, you can xref:../tutorials/dev-app-web-console.ad
[id="prerequisites_{context}"]
== Prerequisites
Before you start this tutorial, ensure that you have the the following required prerequisites:
Before you start this tutorial, ensure that you have the following required prerequisites:
* You have access to a test {product-title} cluster.
+