Updating a bunch of typos
@@ -34,7 +34,7 @@ a|
* Create a new persistent volume (PV) in the same namespace.
* Copy data from the source PV to the target PV, and change the VM definition to point to the new PV.
** If you have the `liveMigrate` flag set, the VM migrates live.
-** If you do have the `liveMigrate` flag set, the VM shuts down, the source PV contents are copied to the target PV, and the the VM is started.
+** If you do not have the `liveMigrate` flag set, the VM shuts down, the source PV contents are copied to the target PV, and the VM is started.

|Move
|No
@@ -76,7 +76,7 @@ spec:
$ oc create -f redis-backup.yaml
----
+
-.Example output:
+.Example output
+
[source,terminal]
----
@@ -93,7 +93,7 @@ backup.velero.io/redis-backup created
$ oc get backups.velero.io redis-backup -o yaml
----
+
-.Example output:
+.Example output
+
[source,terminal]
----
@@ -107,4 +107,4 @@ phase: Completed
progress: {}
startTimestamp: "2025-04-17T13:25:16Z"
version: 1
----

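The `redis-backup.yaml` file referenced above falls outside these hunks. As a rough sketch, a Velero `Backup` CR for this flow might look like the following; the namespace values are assumptions, not taken from the source:

[source,yaml]
----
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: redis-backup
  namespace: openshift-adp        # OADP operator namespace (assumed)
spec:
  includedNamespaces:
    - threescale                  # application namespace to back up (assumed)
  defaultVolumesToFsBackup: true  # back up pod volumes with the file system backup uploader
----

Applying such a file with `oc create -f redis-backup.yaml` would produce the `backup.velero.io/redis-backup created` message shown in the hunk context.
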
@@ -10,7 +10,7 @@ In large scale environments, the default `PriorityClass` object can be too low t

.Prerequisites

-* Optional: You have created a `PriorityClass` object. For more information, see "Configuring priority and preemption" in the _Additional Resources_.
+* Optional: You have created a `PriorityClass` object. For more information, see "Configuring priority and preemption" in the _Additional resources_.

.Procedure

@@ -55,4 +55,4 @@ roles:
scanTolerations:
  - operator: Exists
----
<1> If the `PriorityClass` referenced in the `ScanSetting` cannot be found, the Operator will leave the `PriorityClass` empty, issue a warning, and continue scheduling scans without a `PriorityClass`.
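For the optional prerequisite, a `PriorityClass` is a standard Kubernetes object; a minimal sketch might look like the following, where the name and value are examples rather than values from the source. Its name would then be referenced from the `ScanSetting` field that callout <1> describes:

[source,yaml]
----
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: compliance-high-priority   # example name
value: 99                          # pods with higher values are scheduled first
description: "Priority class for Compliance Operator scan pods."
----
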
@@ -27,7 +27,7 @@ When searching for a namespace in the *Select Namespace* step of the migration p

.Unable to create a migration plan due to a reconciliation failure

-In {mtc-short}, when creating a migration plan , the UI remains on *Persistent Volumes* and you cannot continue. This issue occurs due to a critical reconciliation failure and returns a 404 API error when you attempt to fetch the migration plan from the backend. These issues cause the migration plan to remain in a *Not Ready* state, and you are prevented from continuing. This issue has been resolved in {mtc-short} 1.8.6. link:https://issues.redhat.com/browse/MIG-1705[(MIG-1705)]
+In {mtc-short}, when creating a migration plan, the UI remains on *Persistent Volumes* and you cannot continue. This issue occurs due to a critical reconciliation failure and returns a 404 API error when you attempt to fetch the migration plan from the backend. These issues cause the migration plan to remain in a *Not Ready* state, and you are prevented from continuing. This issue has been resolved in {mtc-short} 1.8.6. link:https://issues.redhat.com/browse/MIG-1705[(MIG-1705)]

.Migration process fails to complete after the `StageBackup` phase

@@ -35,7 +35,7 @@ When migrating a Django and PostgreSQL application, the migration becomes fails

.Migration shown as succeeded despite a failed phase due to a misleading UI status

After running a migration using {mtc-short}, the UI incorrectly indicates that the migration was successful, with the status shown as *Migration succeeded*. However, the Direct Volume Migration (DVM) phase failed. This misleading status appears on both the *Migration* and the *Migration Details* pages. This issue has been resolved in {mtc-short} 1.8.6. link:https://issues.redhat.com/browse/MIG-1711[(MIG-1711)]

.Persistent Volumes page hangs indefinitely for namespaces without persistent volume claims
When a migration plan includes a namespace that does not have any persistent volume claims (PVCs), the *Persistent Volumes* selection page remains indefinitely with the following message shown: `Discovering persistent volumes attached to source projects...`. The page never completes loading, preventing you from proceeding with the migration. This issue has been resolved in {mtc-short} 1.8.6. link:https://issues.redhat.com/browse/MIG-1713[(MIG-1713)]
@@ -10,7 +10,7 @@ As a cluster administrator, you can tune the performance of your Vertical Pod Au

Additionally, you can configure the VPA Operator to monitor only those workloads that are being managed by a VPA custom resource (CR). By default, the VPA Operator monitors every workload in the cluster. This allows the VPA Operator to accrue and store 8 days of historical data for all workloads, which the Operator can use if a new VPA CR is created for a workload. However, this causes the VPA Operator to use significant CPU and memory, which could cause the Operator to fail, particularly on larger clusters. By configuring the VPA Operator to monitor only workloads with a VPA CR, you can save on CPU and memory resources. One trade-off is that if you have a workload that has been running, and you create a VPA CR to manage that workload, the VPA Operator does not have any historical data for that workload. As a result, the initial recommendations are not as useful as those after the workload has been running for some time.

These tunings allow you to ensure the VPA has sufficient resources to operate at peak efficiency and to prevent throttling and a possible delay in pod admissions.

You can perform the following tunings on the VPA components by editing the `VerticalPodAutoscalerController` custom resource (CR):

@@ -20,7 +20,7 @@ You can perform the following tunings on the VPA components by editing the `Vert

* To configure the VPA Operator to monitor only workloads that are being managed by a VPA CR, set the `memory-saver` parameter to `true` for the recommender component.

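As a sketch of that `memory-saver` tuning, the flag is passed as a container argument for the recommender in the `VerticalPodAutoscalerController` CR. The surrounding field names below follow the deployment-override pattern shown later in this file and should be treated as assumptions rather than the exact example from the source:

[source,yaml]
----
apiVersion: autoscaling.openshift.io/v1
kind: VerticalPodAutoscalerController
metadata:
  name: default
  namespace: openshift-vertical-pod-autoscaler
spec:
  deploymentOverrides:
    recommender:
      container:
        args:
          - '--memory-saver=true'   # monitor only workloads that have a VPA CR
----
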
For guidelines on the resources and rate limits that you could set for each VPA component, the following tables provide recommended baseline values, depending on the size of your cluster and other factors.

[IMPORTANT]
====
@@ -32,7 +32,7 @@ These recommended values were derived from internal Red{nbsp}Hat testing on clus
|===
| Component 2+| 1-500 containers 2+| 500-1000 containers 2+| 1000-2000 containers 2+| 2000-4000 containers 2+| 4000+ containers

|
| *CPU*
| *Memory*
| *CPU*
@@ -44,15 +44,15 @@ These recommended values were derived from internal Red{nbsp}Hat testing on clus
| *CPU*
| *Memory*

s| Admission
| 25m
| 50Mi
| 25m
| 75Mi
| 40m
| 150Mi
| 75m
| 260Mi
| (0.03c)/2 + 10 ^[1]^
| (0.1c)/2 + 50 ^[1]^

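As a rough reading of those formulas: the footnote that defines `c` falls outside this hunk, so interpreting `c` as the number of containers in the cluster is an assumption. Under that assumption, a cluster with 5,000 containers would work out to approximately (0.03 × 5000)/2 + 10 = 85m CPU and (0.1 × 5000)/2 + 50 = 300Mi memory for the admission requests.
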
@@ -94,7 +94,7 @@ It is recommended that you set the memory limit on your containers to at least d
|===
| Component 2+| 1 - 150 VPAs 2+| 151 - 500 VPAs 2+| 501-2000 VPAs 2+| 2001-4000 VPAs

|
| *QPS Limit* ^[1]^
| *Burst* ^[2]^
| *QPS Limit*
@@ -126,7 +126,7 @@ s| Updater

|===
[.small]
. QPS specifies the queries per second (QPS) limit when making requests to the Kubernetes API server. The default for the updater and recommender pods is `5.0`.
. Burst specifies the burst limit when making requests to the Kubernetes API server. The default for the updater and recommender pods is `10.0`.

[NOTE]
@@ -147,7 +147,7 @@ Hiding as autoscaling custom resources not supported
|===
| Component 2+| 1-25 CR pod creation surge ^[1]^ 2+| 26-50 CR pod creation surge 2+| 50+ CR pod creation surge

|
| *QPS Limit* ^[2]^
| *Burst* ^[3]^
| *QPS Limit*
@@ -166,7 +166,7 @@ s| Admission
|===
[.small]
. _Pod creation surge_ refers to the maximum number of pods that you expect to be created in a single second at any given time.
. QPS specifies the queries per second (QPS) limit when making requests to the Kubernetes API server. The default is `5.0`.
. Burst specifies the burst limit when making requests to the Kubernetes API server. The default is `10.0`.

[NOTE]
@@ -175,7 +175,7 @@ The admission pod can get throttled if you are using the VPA on custom resources
====
////

The following example VPA controller CR is for a cluster with 1000 to 2000 containers and a pod creation surge of 26 to 50. The CR sets the following values:

* The container memory and CPU requests for all three VPA components
* The container memory limit for all three VPA components
@@ -202,7 +202,7 @@ spec:
            cpu: 40m
            memory: 150Mi
          limits:
            memory: 300Mi
    recommender: <4>
      container:
        args:
@@ -234,7 +234,7 @@ spec:
----
<1> Specifies the tuning parameters for the VPA admission controller.
<2> Specifies the API QPS and burst rates for the VPA admission controller.
+
--
* `kube-api-qps`: Specifies the queries per second (QPS) limit when making requests to the Kubernetes API server. The default is `5.0`.
* `kube-api-burst`: Specifies the burst limit when making requests to the Kubernetes API server. The default is `10.0`.
@@ -248,7 +248,7 @@ spec:
Hiding these three callouts as not supported
<5> Specifies how often the VPA should collect the container metrics for the recommender pod. Valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, and `h`. The default is one minute.
<6> Specifies the timeout for writing VPA checkpoints after the start of the recommender interval. If you increase the `recommender-interval` value, it is recommended that you set this value to the same value. Valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, and `h`. The default is one minute.
<9> Specifies how often the VPA should collect the container metrics for the updater pod. Valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, and `h`. The default is one minute.
- '--recommender-interval=2m' <5>
- '--checkpoints-timeout=' <6>
- '--updater-interval=30m0s' <9>

@@ -8,9 +8,9 @@

In this use case, you back up an application by using {oadp-short} and store the backup in an object storage provided by {odf-first}.

-* You create a object bucket claim (OBC) to configure the backup storage location. You use {odf-short} to configure an Amazon S3-compatible object storage bucket. {odf-short} provides MultiCloud Object Gateway (NooBaa MCG) and Ceph Object Gateway, also known as RADOS Gateway (RGW), object storage service. In this use case, you use NooBaa MCG as the backup storage location.
+* You create an object bucket claim (OBC) to configure the backup storage location. You use {odf-short} to configure an Amazon S3-compatible object storage bucket. {odf-short} provides MultiCloud Object Gateway (NooBaa MCG) and Ceph Object Gateway, also known as RADOS Gateway (RGW), object storage services. In this use case, you use NooBaa MCG as the backup storage location.
* You use the NooBaa MCG service with {oadp-short} by using the `aws` provider plugin.
* You configure the Data Protection Application (DPA) with the backup storage location (BSL).
* You create a backup custom resource (CR) and specify the application namespace to back up.
* You create and verify the backup.

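The OBC itself is not shown in these hunks. As a sketch, an `ObjectBucketClaim` that requests a NooBaa-backed bucket typically looks like the following; the object name `test-obc` matches the secret name used later in this procedure, and the remaining values are assumptions:

[source,yaml]
----
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: test-obc
  namespace: openshift-adp                       # assumed namespace
spec:
  generateBucketName: test-backup-bucket         # assumed bucket name prefix
  storageClassName: openshift-storage.noobaa.io  # NooBaa MCG object bucket class
----
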
@@ -74,7 +74,7 @@ s3.openshift-storage.svc
+
[source,terminal]
----
$ oc extract --to=- secret/test-obc
----
+
.Example output
@@ -98,8 +98,8 @@ $ oc get route s3 -n openshift-storage
[source,terminal]
----
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
----

. Create the `cloud-credentials` secret with the `cloud-credentials` file content as shown in the following command:
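That command falls outside the hunk; it typically takes the following form, sketched here with the OADP operator namespace assumed. The key name `cloud` matches the `credential.key: cloud` setting shown in the DPA snippet that follows:

[source,terminal]
----
$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=cloud-credentials
----
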
@@ -139,13 +139,13 @@ spec:
          profile: "default"
          region: noobaa
          s3Url: https://s3.openshift-storage.svc # <2>
          s3ForcePathStyle: "true"
          insecureSkipTLSVerify: "true"
        provider: aws
        default: true
        credential:
          key: cloud
          name: cloud-credentials
        objectStorage:
          bucket: <bucket_name> # <3>
          prefix: oadp
@@ -158,7 +158,7 @@ spec:
+
[source,terminal]
----
$ oc apply -f <dpa_filename>
----

. Verify that the DPA is created successfully by running the following command. In the example output, you can see that the `status` object has the `type` field set to `Reconciled`, which means that the DPA is created successfully.
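The verification command itself falls outside the hunk. One way to inspect the DPA status is sketched below; this is not necessarily the exact command used in the source:

[source,terminal]
----
$ oc get dataprotectionapplications.oadp.openshift.io -n openshift-adp -o jsonpath='{.items[0].status.conditions[0].type}{"\n"}'
----

If the DPA reconciled successfully, the output reports `Reconciled`.
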
@@ -200,7 +200,7 @@ metadata:
+
[source,terminal]
----
$ oc get backupstoragelocations.velero.io -n openshift-adp
----
+
.Example output
@@ -230,7 +230,7 @@ spec:
+
[source,terminal]
----
$ oc apply -f <backup_cr_filename>
----

.Verification
@@ -239,7 +239,7 @@ $ oc apply -f <backup_cr_filename>
+
[source,terminal]
----
$ oc describe backup test-backup -n openshift-adp
----
+
.Example output
@@ -262,4 +262,4 @@ Status:
Start Timestamp: 2024-09-25T10:16:31Z
Version: 1
Events: <none>
----

@@ -21,7 +21,7 @@ You can restore the back-end Redis database by deleting the deployment and speci
$ oc delete deployment backend-redis -n threescale
----
+
-.Example output:
+.Example output
+
[source,terminal]
----
@@ -71,7 +71,7 @@ $ oc create -f restore-backend.yaml
restore.velero.io/restore-backend created
----

.Verification

* Verify that the `PodVolumeRestore` restore is completed by running the following command:
+
@@ -85,4 +85,4 @@ $ oc get podvolumerestores.velero.io -n openshift-adp
----
NAME                    NAMESPACE    POD                     UPLOADER TYPE   VOLUME                  STATUS      TOTALBYTES   BYTESDONE   AGE
restore-backend-jmrwx   threescale   backend-redis-1-bsfmv   kopia           backend-redis-storage   Completed   76123        76123       21m
----

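The `restore-backend.yaml` file named in the hunk header above is not included in the diff. A minimal Velero `Restore` CR for this step might look like the following sketch; the backup name is an assumption:

[source,yaml]
----
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-backend
  namespace: openshift-adp
spec:
  backupName: redis-backup   # backup to restore from (assumed)
  restorePVs: true
----
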
@@ -8,7 +8,7 @@
Restoring a MySQL database re-creates the following resources:

* The `Pod`, `ReplicationController`, and `Deployment` objects.
* The additional persistent volumes (PVs) and associated persistent volume claims (PVCs).
* The MySQL dump, which the `example-claim` PVC contains.

[WARNING]
@@ -29,7 +29,7 @@ Do not delete the default PV and PVC associated with the database. If you do, yo
$ oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale
----
+
-.Example output:
+.Example output
[source,terminal]
----
deployment.apps/threescale-operator-controller-manager-v2 scaled
@@ -54,10 +54,10 @@ done
+
[source,terminal]
----
$ ./scaledowndeployment.sh
----
+
-.Example output:
+.Example output
[source,terminal]
----
deployment.apps.openshift.io/apicast-production scaled
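The `scaledowndeployment.sh` script itself is not included in the diff (the `done` in the hunk header is its closing line). A minimal sketch of what such a script does, with the namespace assumed from the surrounding commands, is:

[source,bash]
----
#!/bin/bash
# Scale every deployment in the threescale namespace down to zero replicas.
# The exact resource type iterated by the documented script may differ;
# the example output refers to deployment.apps.openshift.io resources.
for deployment in $(oc get deployment -n threescale -o name); do
  oc scale "${deployment}" --replicas=0 -n threescale
done
----

The matching `scaledeployment.sh` script used later in the restore procedure does the reverse, scaling the deployments back up with `--replicas=1`.
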
@@ -84,7 +84,7 @@ deployment.apps.openshift.io/zync-que scaled
$ oc delete deployment system-mysql -n threescale
----
+
-.Example output:
+.Example output
[source,terminal]
----
Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+
@@ -123,7 +123,7 @@ spec:
            - '-c'
            - >
              sleep 30

              mysql -h 127.0.0.1 -D system -u root
              --password=$MYSQL_ROOT_PASSWORD <
              /var/lib/mysqldump/data/dump.sql <2>
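For context, argument lines like these typically live inside a post-restore exec hook of the Velero `Restore` CR (`restore-mysql.yaml`). The following sketch shows one plausible shape; the hook, backup, and container names are assumptions rather than values from the diff:

[source,yaml]
----
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-mysql
  namespace: openshift-adp
spec:
  backupName: mysql-backup            # assumed backup name
  restorePVs: true
  hooks:
    resources:
      - name: restore-mysql-hook      # assumed hook name
        includedNamespaces:
          - threescale
        postHooks:
          - exec:
              container: system-mysql # assumed container name
              command:
                - /bin/sh
                - '-c'
                - >
                  sleep 30

                  mysql -h 127.0.0.1 -D system -u root
                  --password=$MYSQL_ROOT_PASSWORD <
                  /var/lib/mysqldump/data/dump.sql
----
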
@@ -151,7 +151,7 @@ $ oc create -f restore-mysql.yaml
restore.velero.io/restore-mysql created
----

.Verification

. Verify that the `PodVolumeRestore` restore is completed by running the following command:
+
@@ -160,7 +160,7 @@ restore.velerio.io/restore-mysql created
$ oc get podvolumerestores.velero.io -n openshift-adp
----
+
-.Example output:
+.Example output
[source,terminal]
----
NAME                  NAMESPACE    POD                     UPLOADER TYPE   VOLUME          STATUS      TOTALBYTES   BYTESDONE   AGE
@@ -175,7 +175,7 @@ restore-mysql-z7x7l threescale system-mysql-2-kjkhl kopia exa
$ oc get pvc -n threescale
----
+
-.Example output:
+.Example output
[source,terminal]
----
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
@@ -184,4 +184,4 @@ example-claim Bound pvc-cbaa49b0-06cd-4b1a-9e90-0ef755c67a54 1Gi
mysql-storage          Bound    pvc-4549649f-b9ad-44f7-8f67-dd6b9dbb3896   1Gi        RWO            gp3-csi        <unset>                 68m
system-redis-storage   Bound    pvc-04dadafd-8a3e-4d00-8381-6041800a24fc   1Gi        RWO            gp3-csi        <unset>                 68m
system-searchd         Bound    pvc-afbf606c-d4a8-4041-8ec6-54c5baf1a3b9   1Gi        RWO            gp3-csi        <unset>                 68m
----

@@ -30,7 +30,7 @@ $ oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n
deployment.apps/threescale-operator-controller-manager-v2 scaled
----

. To verify that the 3scale operator was deployed, ensure that the 3scale pod is running by entering the following command:
+
[source,terminal]
----
@@ -63,10 +63,10 @@ done
+
[source,terminal]
----
$ ./scaledeployment.sh
----
+
-.Example output:
+.Example output
[source,terminal]
----
deployment.apps.openshift.io/apicast-production scaled
@@ -107,4 +107,4 @@ zync-3scale-provider-mjxlb 3scale.apps.custom-cluster-name.openshift.com
+
In this example, `3scale-admin.apps.custom-cluster-name.openshift.com` is the 3scale-admin URL.

. Use the URL from this output to log in to the 3scale operator as an administrator. You can verify that the data that was available when you took the backup is present.

@@ -90,7 +90,7 @@ For more information, see xref:../../observability/network_observability/netobse

* Previously, a resource using multiple IPs was displayed separately in the *Topology* view. Now, the resource shows as a single topology node in the view. (link:https://issues.redhat.com/browse/NETOBSERV-1818[*NETOBSERV-1818*])

-* Previously, the console refreshed the *Network traffic* table view contents when the mouse pointer hovered over the columns. Now, the the display is fixed, so row height remains constant with a mouse hover. (link:https://issues.redhat.com/browse/NETOBSERV-2049[*NETOBSERV-2049*])
+* Previously, the console refreshed the *Network traffic* table view contents when the mouse pointer hovered over the columns. Now, the display is fixed, so row height remains constant with a mouse hover. (link:https://issues.redhat.com/browse/NETOBSERV-2049[*NETOBSERV-2049*])

[id="network-observability-operator-1-8-known-issues_{context}"]
=== Known issues

@@ -8,19 +8,19 @@ toc::[]

The Multiarch Tuning Operator optimizes workload management within multi-architecture clusters and in single-architecture clusters transitioning to multi-architecture environments.

Architecture-aware workload scheduling allows the scheduler to place pods onto nodes that match the architecture of the pod images.

By default, the scheduler does not consider the architecture of a pod's container images when determining the placement of new pods onto nodes.

To enable architecture-aware workload scheduling, you must create the `ClusterPodPlacementConfig` object. When you create the `ClusterPodPlacementConfig` object, the Multiarch Tuning Operator deploys the necessary operands to support architecture-aware workload scheduling. You can also use the `nodeAffinityScoring` plugin in the `ClusterPodPlacementConfig` object to set cluster-wide scores for node architectures. If you enable the `nodeAffinityScoring` plugin, the scheduler first filters nodes with compatible architectures and then places the pod on the node with the highest score.

When a pod is created, the operands perform the following actions:

. Add the `multiarch.openshift.io/scheduling-gate` scheduling gate that prevents the scheduling of the pod.
. Compute a scheduling predicate that includes the supported architecture values for the `kubernetes.io/arch` label.
. Integrate the scheduling predicate as a `nodeAffinity` requirement in the pod specification.
. Remove the scheduling gate from the pod.

[IMPORTANT]
====
Note the following operand behaviors:
@@ -31,7 +31,7 @@ Note the following operand behaviors:

* If the `nodeName` field is already set, the Multiarch Tuning Operator does not process the pod.

-* If the pod is owned by a DaemonSet, the operand does not update the the `nodeAffinity` field.
+* If the pod is owned by a DaemonSet, the operand does not update the `nodeAffinity` field.

* If both the `nodeSelector` (or `nodeAffinity`) and `preferredAffinity` fields are set for the `kubernetes.io/arch` label, the operand does not update the `nodeAffinity` field.

@@ -69,4 +69,4 @@ include::modules/multi-arch-deleting-podplacment-config-using-web-console.adoc[l
//Uninstalling Multiarch Tuning Operator
include::modules/multi-arch-uninstalling-using-cli.adoc[leveloffset=+1]

include::modules/multi-arch-uninstalling-using-web-console.adoc[leveloffset=+1]

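To make the `ClusterPodPlacementConfig` discussion above concrete, a minimal object that enables the `nodeAffinityScoring` plugin might look like the following sketch; the architecture weights are example values, not taken from the source:

[source,yaml]
----
apiVersion: multiarch.openshift.io/v1beta1
kind: ClusterPodPlacementConfig
metadata:
  name: cluster
spec:
  plugins:
    nodeAffinityScoring:
      enabled: true
      platforms:
      - architecture: arm64   # preferred architecture, highest score
        weight: 100
      - architecture: amd64
        weight: 50
----
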