OSDOCS-2893: Updating kube references from 1.22 to 1.23

Commit: 834223014d (parent: d3fcc2192f)
Committed by: openshift-cherrypick-robot
@@ -77,7 +77,7 @@ Data type:: group

== kubernetes.event

-The kubernetes event obtained from kubernetes master API The event is already JSON object and as whole nested under kubernetes field This description should loosely follow 'type Event' in https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#event-v1-core
+The kubernetes event obtained from kubernetes master API The event is already JSON object and as whole nested under kubernetes field This description should loosely follow 'type Event' in https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#event-v1-core

[horizontal]
Data type:: group
@@ -155,12 +155,12 @@ $ oc get nodes
[source,terminal]
----
NAME                                         STATUS   ROLES    AGE   VERSION
-ip-10-0-147-106.us-east-2.compute.internal   Ready    master   14h   v1.22.1
-ip-10-0-150-175.us-east-2.compute.internal   Ready    worker   14h   v1.22.1
-ip-10-0-175-23.us-east-2.compute.internal    Ready    master   14h   v1.22.1
-ip-10-0-189-6.us-east-2.compute.internal     Ready    worker   14h   v1.22.1
-ip-10-0-205-158.us-east-2.compute.internal   Ready    master   14h   v1.22.1
-ip-10-0-210-167.us-east-2.compute.internal   Ready    worker   14h   v1.22.1
+ip-10-0-147-106.us-east-2.compute.internal   Ready    master   14h   v1.23.0
+ip-10-0-150-175.us-east-2.compute.internal   Ready    worker   14h   v1.23.0
+ip-10-0-175-23.us-east-2.compute.internal    Ready    master   14h   v1.23.0
+ip-10-0-189-6.us-east-2.compute.internal     Ready    worker   14h   v1.23.0
+ip-10-0-205-158.us-east-2.compute.internal   Ready    master   14h   v1.23.0
+ip-10-0-210-167.us-east-2.compute.internal   Ready    worker   14h   v1.23.0
----
--
@@ -103,7 +103,7 @@ $ oc get nodes
+
----
NAME                                 STATUS                     ROLES    AGE    VERSION
-ci-ln-tkbxyft-f76d1-nvwhr-master-1   Ready,SchedulingDisabled   master   133m   v1.22.0-rc.0+75ee307
+ci-ln-tkbxyft-f76d1-nvwhr-master-1   Ready,SchedulingDisabled   master   133m   v1.23.0
----
+
. Mark the node schedulable. You will know that the scheduling is enabled when `SchedulingDisabled` is no longer in status:
@@ -118,6 +118,6 @@ $ oc adm uncordon <nodename>
+
----
NAME                                 STATUS   ROLES    AGE    VERSION
-ci-ln-tkbxyft-f76d1-nvwhr-master-1   Ready    master   133m   v1.22.0-rc.0+75ee307
+ci-ln-tkbxyft-f76d1-nvwhr-master-1   Ready    master   133m   v1.23.0
----
+
@@ -84,7 +84,7 @@ Data type:: group

=== kubernetes.event

-The Kubernetes event obtained from the Kubernetes master API. This event description loosely follows `type Event` in link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#event-v1-core[Event v1 core].
+The Kubernetes event obtained from the Kubernetes master API. This event description loosely follows `type Event` in link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#event-v1-core[Event v1 core].

[horizontal]
Data type:: group
@@ -131,9 +131,9 @@ Note the worker with the role `worker-rt` that contains the string `4.18.0-211.r
NAME                      STATUS   ROLES              AGE     VERSION   INTERNAL-IP
EXTERNAL-IP               OS-IMAGE                    KERNEL-VERSION
CONTAINER-RUNTIME
-rt-worker-0.example.com   Ready    worker,worker-rt   5d17h   v1.22.1
+rt-worker-0.example.com   Ready    worker,worker-rt   5d17h   v1.23.0
128.66.135.107            <none>   Red Hat Enterprise Linux CoreOS 46.82.202008252340-0 (Ootpa)
-4.18.0-211.rt5.23.el8.x86_64   cri-o://1.22.1-90.rhaos4.9.git4a0ac05.el8-rc.1
+4.18.0-211.rt5.23.el8.x86_64   cri-o://1.23.0-90.rhaos4.9.git4a0ac05.el8-rc.1
[...]
----
@@ -5,7 +5,7 @@
[id="connected-to-disconnected-verify_{context}"]
= Ensure applications continue to work

-Before disconnecting the cluster from the network, ensure that your cluster is working as expected and all of your applications are working as expected.
+Before disconnecting the cluster from the network, ensure that your cluster is working as expected and all of your applications are working as expected.

.Procedure
@@ -43,10 +43,10 @@ $ oc get nodes
[source,terminal]
----
NAME                                       STATUS   ROLES    AGE   VERSION
-ci-ln-47ltxtb-f76d1-mrffg-master-0         Ready    master   42m   v1.21.1+a620f50
-ci-ln-47ltxtb-f76d1-mrffg-master-1         Ready    master   42m   v1.21.1+a620f50
-ci-ln-47ltxtb-f76d1-mrffg-master-2         Ready    master   42m   v1.21.1+a620f50
-ci-ln-47ltxtb-f76d1-mrffg-worker-a-gsxbz   Ready    worker   35m   v1.21.1+a620f50
-ci-ln-47ltxtb-f76d1-mrffg-worker-b-5qqdx   Ready    worker   35m   v1.21.1+a620f50
-ci-ln-47ltxtb-f76d1-mrffg-worker-c-rjkpq   Ready    worker   34m   v1.21.1+a620f50
+ci-ln-47ltxtb-f76d1-mrffg-master-0         Ready    master   42m   v1.23.0
+ci-ln-47ltxtb-f76d1-mrffg-master-1         Ready    master   42m   v1.23.0
+ci-ln-47ltxtb-f76d1-mrffg-master-2         Ready    master   42m   v1.23.0
+ci-ln-47ltxtb-f76d1-mrffg-worker-a-gsxbz   Ready    worker   35m   v1.23.0
+ci-ln-47ltxtb-f76d1-mrffg-worker-b-5qqdx   Ready    worker   35m   v1.23.0
+ci-ln-47ltxtb-f76d1-mrffg-worker-c-rjkpq   Ready    worker   34m   v1.23.0
----
@@ -34,9 +34,9 @@ The control plane nodes are ready if the status is `Ready`, as shown in the foll
[source,terminal]
----
NAME                           STATUS   ROLES    AGE   VERSION
-ip-10-0-168-251.ec2.internal   Ready    master   75m   v1.22.1
-ip-10-0-170-223.ec2.internal   Ready    master   75m   v1.22.1
-ip-10-0-211-16.ec2.internal    Ready    master   75m   v1.22.1
+ip-10-0-168-251.ec2.internal   Ready    master   75m   v1.23.0
+ip-10-0-170-223.ec2.internal   Ready    master   75m   v1.23.0
+ip-10-0-211-16.ec2.internal    Ready    master   75m   v1.23.0
----

. If the control plane nodes are _not_ ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved.
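For reference, the pending CSRs mentioned in the step above can be listed and approved with the standard `oc` commands. A minimal sketch, with `<csr_name>` as a placeholder:

[source,terminal]
----
# List CSRs and look for entries in the Pending condition
$ oc get csr

# Approve a specific pending CSR
$ oc adm certificate approve <csr_name>
----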
@@ -75,9 +75,9 @@ The worker nodes are ready if the status is `Ready`, as shown in the following o
[source,terminal]
----
NAME                           STATUS   ROLES    AGE   VERSION
-ip-10-0-179-95.ec2.internal    Ready    worker   64m   v1.22.1
-ip-10-0-182-134.ec2.internal   Ready    worker   64m   v1.22.1
-ip-10-0-250-100.ec2.internal   Ready    worker   64m   v1.22.1
+ip-10-0-179-95.ec2.internal    Ready    worker   64m   v1.23.0
+ip-10-0-182-134.ec2.internal   Ready    worker   64m   v1.23.0
+ip-10-0-250-100.ec2.internal   Ready    worker   64m   v1.23.0
----

. If the worker nodes are _not_ ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved.
@@ -141,12 +141,12 @@ Check that the status for all nodes is `Ready`.
[source,terminal]
----
NAME                           STATUS   ROLES    AGE   VERSION
-ip-10-0-168-251.ec2.internal   Ready    master   82m   v1.22.1
-ip-10-0-170-223.ec2.internal   Ready    master   82m   v1.22.1
-ip-10-0-179-95.ec2.internal    Ready    worker   70m   v1.22.1
-ip-10-0-182-134.ec2.internal   Ready    worker   70m   v1.22.1
-ip-10-0-211-16.ec2.internal    Ready    master   82m   v1.22.1
-ip-10-0-250-100.ec2.internal   Ready    worker   69m   v1.22.1
+ip-10-0-168-251.ec2.internal   Ready    master   82m   v1.23.0
+ip-10-0-170-223.ec2.internal   Ready    master   82m   v1.23.0
+ip-10-0-179-95.ec2.internal    Ready    worker   70m   v1.23.0
+ip-10-0-182-134.ec2.internal   Ready    worker   70m   v1.23.0
+ip-10-0-211-16.ec2.internal    Ready    master   82m   v1.23.0
+ip-10-0-250-100.ec2.internal   Ready    worker   69m   v1.23.0
----

If the cluster did not start properly, you might need to restore your cluster using an etcd backup.
@@ -75,10 +75,10 @@ $ oc get nodes
[source,terminal]
----
NAME                                       STATUS                        ROLES    AGE   VERSION
-ci-ln-j5cd0qt-f76d1-vfj5x-master-0         Ready                         master   98m   v1.22.1
-ci-ln-j5cd0qt-f76d1-vfj5x-master-1         Ready,SchedulingDisabled      master   99m   v1.22.1
-ci-ln-j5cd0qt-f76d1-vfj5x-master-2         Ready                         master   98m   v1.22.1
-ci-ln-j5cd0qt-f76d1-vfj5x-worker-b-nsnd4   Ready                         worker   90m   v1.22.1
-ci-ln-j5cd0qt-f76d1-vfj5x-worker-c-5z2gz   NotReady,SchedulingDisabled   worker   90m   v1.22.1
-ci-ln-j5cd0qt-f76d1-vfj5x-worker-d-stsjv   Ready                         worker   90m   v1.22.1
+ci-ln-j5cd0qt-f76d1-vfj5x-master-0         Ready                         master   98m   v1.23.0
+ci-ln-j5cd0qt-f76d1-vfj5x-master-1         Ready,SchedulingDisabled      master   99m   v1.23.0
+ci-ln-j5cd0qt-f76d1-vfj5x-master-2         Ready                         master   98m   v1.23.0
+ci-ln-j5cd0qt-f76d1-vfj5x-worker-b-nsnd4   Ready                         worker   90m   v1.23.0
+ci-ln-j5cd0qt-f76d1-vfj5x-worker-c-5z2gz   NotReady,SchedulingDisabled   worker   90m   v1.23.0
+ci-ln-j5cd0qt-f76d1-vfj5x-worker-d-stsjv   Ready                         worker   90m   v1.23.0
----
@@ -110,12 +110,12 @@ $ oc get node
[source,terminal]
----
NAME                           STATUS                     ROLES    AGE   VERSION
-ip-10-0-137-44.ec2.internal    Ready                      worker   7m    v1.22.1
-ip-10-0-138-148.ec2.internal   Ready                      master   11m   v1.22.1
-ip-10-0-139-122.ec2.internal   Ready                      master   11m   v1.22.1
-ip-10-0-147-35.ec2.internal    Ready,SchedulingDisabled   worker   7m    v1.22.1
-ip-10-0-153-12.ec2.internal    Ready                      worker   7m    v1.22.1
-ip-10-0-154-10.ec2.internal    Ready                      master   11m   v1.22.1
+ip-10-0-137-44.ec2.internal    Ready                      worker   7m    v1.23.0
+ip-10-0-138-148.ec2.internal   Ready                      master   11m   v1.23.0
+ip-10-0-139-122.ec2.internal   Ready                      master   11m   v1.23.0
+ip-10-0-147-35.ec2.internal    Ready,SchedulingDisabled   worker   7m    v1.23.0
+ip-10-0-153-12.ec2.internal    Ready                      worker   7m    v1.23.0
+ip-10-0-154-10.ec2.internal    Ready                      master   11m   v1.23.0
----
+
You can see that scheduling on each worker node is disabled as the change is being applied.
@@ -98,13 +98,13 @@ $ oc get nodes
[source,terminal]
----
NAME                                         STATUS   ROLES    AGE   VERSION
-ip-10-0-133-216.us-east-2.compute.internal   Ready    master   60m   v1.22.1
-ip-10-0-139-146.us-east-2.compute.internal   Ready    master   60m   v1.22.1
-ip-10-0-139-192.us-east-2.compute.internal   Ready    worker   51m   v1.22.1
-ip-10-0-139-241.us-east-2.compute.internal   Ready    worker   51m   v1.22.1
-ip-10-0-147-79.us-east-2.compute.internal    Ready    worker   51m   v1.22.1
-ip-10-0-152-241.us-east-2.compute.internal   Ready    master   60m   v1.22.1
-ip-10-0-139-48.us-east-2.compute.internal    Ready    infra    51m   v1.22.1
+ip-10-0-133-216.us-east-2.compute.internal   Ready    master   60m   v1.23.0
+ip-10-0-139-146.us-east-2.compute.internal   Ready    master   60m   v1.23.0
+ip-10-0-139-192.us-east-2.compute.internal   Ready    worker   51m   v1.23.0
+ip-10-0-139-241.us-east-2.compute.internal   Ready    worker   51m   v1.23.0
+ip-10-0-147-79.us-east-2.compute.internal    Ready    worker   51m   v1.23.0
+ip-10-0-152-241.us-east-2.compute.internal   Ready    master   60m   v1.23.0
+ip-10-0-139-48.us-east-2.compute.internal    Ready    infra    51m   v1.23.0
----
+
Note that the node has a `node-role.kubernetes.io/infra: ''` label:
@@ -97,7 +97,7 @@ $ oc get node <node_name> <1>
[source,terminal]
----
NAME                           STATUS   ROLES          AGE   VERSION
-ip-10-0-217-226.ec2.internal   Ready    infra,worker   17h   v1.22.1
+ip-10-0-217-226.ec2.internal   Ready    infra,worker   17h   v1.23.0
----
+
Because the role list includes `infra`, the pod is running on the correct node.
@@ -56,9 +56,9 @@ $ oc get nodes
[source,terminal]
----
NAME       STATUS   ROLES    AGE   VERSION
-master-0   Ready    master   63m   v1.22.1
-master-1   Ready    master   63m   v1.22.1
-master-2   Ready    master   64m   v1.22.1
+master-0   Ready    master   63m   v1.23.0
+master-1   Ready    master   63m   v1.23.0
+master-2   Ready    master   64m   v1.23.0
----
+
The output lists all of the machines that you created.
@@ -178,11 +178,11 @@ $ oc get nodes
[source,terminal]
----
NAME       STATUS   ROLES    AGE   VERSION
-master-0   Ready    master   73m   v1.22.1
-master-1   Ready    master   73m   v1.22.1
-master-2   Ready    master   74m   v1.22.1
-worker-0   Ready    worker   11m   v1.22.1
-worker-1   Ready    worker   11m   v1.22.1
+master-0   Ready    master   73m   v1.23.0
+master-1   Ready    master   73m   v1.23.0
+master-2   Ready    master   74m   v1.23.0
+worker-0   Ready    worker   11m   v1.23.0
+worker-1   Ready    worker   11m   v1.23.0
----
+
[NOTE]
@@ -39,7 +39,7 @@ stored the installation files in.
[source,terminal]
----
INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443...
-INFO API v1.22.1 up
+INFO API v1.23.0 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources
INFO Time elapsed: 1s
@@ -58,7 +58,7 @@ $ ./openshift-install --dir <installation_directory> wait-for bootstrap-complete
[source,terminal]
----
INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443...
-INFO API v1.22.1 up
+INFO API v1.23.0 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources
----
@@ -39,7 +39,7 @@ You will see messages that confirm that the control plane machines are running a
+
[source,terminal]
----
-INFO API v1.22.1 up
+INFO API v1.23.0 up
INFO Waiting up to 30m0s for bootstrapping to complete...
...
INFO It is now safe to remove the bootstrap resources
@@ -27,7 +27,7 @@ $ openshift-install wait-for bootstrap-complete --dir $ASSETS_DIR
.Example output
[source,terminal]
----
-INFO API v1.22.1 up
+INFO API v1.23.0 up
INFO Waiting up to 40m0s for bootstrapping to complete...
----
@@ -91,12 +91,12 @@ $ oc get nodes
[source,terminal]
----
NAME                                         STATUS   ROLES    AGE    VERSION
-ip-10-0-139-200.us-east-2.compute.internal   Ready    master   111m   v1.22.1
-ip-10-0-143-147.us-east-2.compute.internal   Ready    worker   103m   v1.22.1
-ip-10-0-146-92.us-east-2.compute.internal    Ready    worker   101m   v1.22.1
-ip-10-0-156-255.us-east-2.compute.internal   Ready    master   111m   v1.22.1
-ip-10-0-164-74.us-east-2.compute.internal    Ready    master   111m   v1.22.1
-ip-10-0-169-2.us-east-2.compute.internal     Ready    worker   102m   v1.22.1
+ip-10-0-139-200.us-east-2.compute.internal   Ready    master   111m   v1.23.0
+ip-10-0-143-147.us-east-2.compute.internal   Ready    worker   103m   v1.23.0
+ip-10-0-146-92.us-east-2.compute.internal    Ready    worker   101m   v1.23.0
+ip-10-0-156-255.us-east-2.compute.internal   Ready    master   111m   v1.23.0
+ip-10-0-164-74.us-east-2.compute.internal    Ready    master   111m   v1.23.0
+ip-10-0-169-2.us-east-2.compute.internal     Ready    worker   102m   v1.23.0
----
+
[source,terminal]
@@ -34,12 +34,12 @@ $ oc get nodes
[source,bash]
----
NAME                                       STATUS   ROLES    AGE   VERSION
-provisioner.openshift.example.com          Ready    master   30h   v1.22.1
-openshift-master-1.openshift.example.com   Ready    master   30h   v1.22.1
-openshift-master-2.openshift.example.com   Ready    master   30h   v1.22.1
-openshift-master-3.openshift.example.com   Ready    master   30h   v1.22.1
-openshift-worker-0.openshift.example.com   Ready    master   30h   v1.22.1
-openshift-worker-1.openshift.example.com   Ready    master   30h   v1.22.1
+provisioner.openshift.example.com          Ready    master   30h   v1.23.0
+openshift-master-1.openshift.example.com   Ready    master   30h   v1.23.0
+openshift-master-2.openshift.example.com   Ready    master   30h   v1.23.0
+openshift-master-3.openshift.example.com   Ready    master   30h   v1.23.0
+openshift-worker-0.openshift.example.com   Ready    master   30h   v1.23.0
+openshift-worker-1.openshift.example.com   Ready    master   30h   v1.23.0
----

. Get the machine set.
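The command behind the "Get the machine set" step falls outside this hunk. A minimal sketch, assuming the default `openshift-machine-api` namespace for machine sets:

[source,terminal]
----
# List the machine sets that manage the cluster's machines
$ oc get machinesets -n openshift-machine-api
----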
@@ -99,13 +99,13 @@ $ oc get nodes
[source,bash]
----
NAME                                           STATUS   ROLES    AGE     VERSION
-provisioner.openshift.example.com              Ready    master   30h     v1.22.1
-openshift-master-1.openshift.example.com       Ready    master   30h     v1.22.1
-openshift-master-2.openshift.example.com       Ready    master   30h     v1.22.1
-openshift-master-3.openshift.example.com       Ready    master   30h     v1.22.1
-openshift-worker-0.openshift.example.com       Ready    master   30h     v1.22.1
-openshift-worker-1.openshift.example.com       Ready    master   30h     v1.22.1
-openshift-worker-<num>.openshift.example.com   Ready    worker   3m27s   v1.22.1
+provisioner.openshift.example.com              Ready    master   30h     v1.23.0
+openshift-master-1.openshift.example.com       Ready    master   30h     v1.23.0
+openshift-master-2.openshift.example.com       Ready    master   30h     v1.23.0
+openshift-master-3.openshift.example.com       Ready    master   30h     v1.23.0
+openshift-worker-0.openshift.example.com       Ready    master   30h     v1.23.0
+openshift-worker-1.openshift.example.com       Ready    master   30h     v1.23.0
+openshift-worker-<num>.openshift.example.com   Ready    worker   3m27s   v1.23.0
----
+
You can also check the kubelet.
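The kubelet check itself is outside this hunk. One way to perform it, assuming SSH access to the node as the `core` user:

[source,terminal]
----
$ ssh core@openshift-worker-<num>.openshift.example.com
[core@openshift-worker ~]$ sudo systemctl status kubelet
----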
@@ -19,10 +19,10 @@ $ oc get nodes
[source,terminal]
----
NAME                         STATUS   ROLES    AGE    VERSION
-master-0.cloud.example.com   Ready    master   145m   v1.22.1
-master-1.cloud.example.com   Ready    master   135m   v1.22.1
-master-2.cloud.example.com   Ready    master   145m   v1.22.1
-worker-2.cloud.example.com   Ready    worker   100m   v1.22.1
+master-0.cloud.example.com   Ready    master   145m   v1.23.0
+master-1.cloud.example.com   Ready    master   135m   v1.23.0
+master-2.cloud.example.com   Ready    master   145m   v1.23.0
+worker-2.cloud.example.com   Ready    worker   100m   v1.23.0
----

. Check for inconsistent timing delays due to clock drift. For example:
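The clock-drift example itself is outside this hunk. One possible way to inspect clock synchronization on a node, assuming `chronyd` is the host's time service:

[source,terminal]
----
# Report the node's clock offset and drift from its host namespace
$ oc debug node/master-0.cloud.example.com -- chroot /host chronyc tracking
----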
@@ -19,9 +19,9 @@ $ oc get nodes
[source,bash]
----
NAME                   STATUS   ROLES           AGE   VERSION
-master-0.example.com   Ready    master,worker   4h    v1.22.1
-master-1.example.com   Ready    master,worker   4h    v1.22.1
-master-2.example.com   Ready    master,worker   4h    v1.22.1
+master-0.example.com   Ready    master,worker   4h    v1.23.0
+master-1.example.com   Ready    master,worker   4h    v1.23.0
+master-2.example.com   Ready    master,worker   4h    v1.23.0
----

. Confirm the installer deployed all pods successfully. The following command
@@ -14,9 +14,9 @@ $ oc get nodes
NAME                                      STATUS   ROLES    AGE     VERSION
-ip-10-0-0-1.us-east-2.compute.internal    Ready    worker   3h19m   v1.22.1
+ip-10-0-0-1.us-east-2.compute.internal    Ready    worker   3h19m   v1.23.0
-ip-10-0-0-39.us-east-2.compute.internal   Ready    master   3h37m   v1.22.1
+ip-10-0-0-39.us-east-2.compute.internal   Ready    master   3h37m   v1.23.0
…
@@ -108,12 +108,12 @@ $ oc get nodes
[source,terminal]
----
NAME                                       STATUS                     ROLES    AGE   VERSION
-ci-ln-fm1qnwt-72292-99kt6-master-0         Ready                      master   58m   v1.22.1+6859754
-ci-ln-fm1qnwt-72292-99kt6-master-1         Ready                      master   58m   v1.22.1+6859754
-ci-ln-fm1qnwt-72292-99kt6-master-2         Ready                      master   58m   v1.22.1+6859754
-ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4   Ready,SchedulingDisabled   worker   48m   v1.22.1+6859754
-ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd   Ready                      worker   48m   v1.22.1+6859754
-ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv   Ready                      worker   48m   v1.22.1+6859754
+ci-ln-fm1qnwt-72292-99kt6-master-0         Ready                      master   58m   v1.23.0
+ci-ln-fm1qnwt-72292-99kt6-master-1         Ready                      master   58m   v1.23.0
+ci-ln-fm1qnwt-72292-99kt6-master-2         Ready                      master   58m   v1.23.0
+ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4   Ready,SchedulingDisabled   worker   48m   v1.23.0
+ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd   Ready                      worker   48m   v1.23.0
+ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv   Ready                      worker   48m   v1.23.0
----

. After a node returns to the `Ready` state, you can verify that cgroups v2 is enabled by checking that the `sys/fs/cgroup/cgroup.controllers` file is present on the node. This file is created by cgroups v2.
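A minimal sketch of that verification, assuming `oc debug` access to the node; `<node_name>` is a placeholder:

[source,terminal]
----
# The file exists only when the node is running cgroups v2
$ oc debug node/<node_name> -- chroot /host stat /sys/fs/cgroup/cgroup.controllers
----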
@@ -139,4 +139,3 @@ cgroup.procs cpu.pressure io.pressure memory.stat
.Additional resources

* For information about enabling cgroups v2 during installation, see the _Optional parameters_ table in the _Installation configuration parameters_ section of your installation process.
@@ -143,12 +143,12 @@ $ oc get nodes
[source,terminal]
----
NAME                           STATUS                     ROLES    AGE   VERSION
-ip-10-0-136-161.ec2.internal   Ready                      worker   28m   v1.22.1
-ip-10-0-136-243.ec2.internal   Ready                      master   34m   v1.22.1
-ip-10-0-141-105.ec2.internal   Ready,SchedulingDisabled   worker   28m   v1.22.1
-ip-10-0-142-249.ec2.internal   Ready                      master   34m   v1.22.1
-ip-10-0-153-11.ec2.internal    Ready                      worker   28m   v1.22.1
-ip-10-0-153-150.ec2.internal   Ready                      master   34m   v1.22.1
+ip-10-0-136-161.ec2.internal   Ready                      worker   28m   v1.23.0
+ip-10-0-136-243.ec2.internal   Ready                      master   34m   v1.23.0
+ip-10-0-141-105.ec2.internal   Ready,SchedulingDisabled   worker   28m   v1.23.0
+ip-10-0-142-249.ec2.internal   Ready                      master   34m   v1.23.0
+ip-10-0-153-11.ec2.internal    Ready                      worker   28m   v1.23.0
+ip-10-0-153-150.ec2.internal   Ready                      master   34m   v1.23.0
----
+
You can see that scheduling on each worker node is disabled as the change is being applied.
@@ -57,9 +57,9 @@ $ oc get nodes
[source,terminal]
----
NAME                                         STATUS   ROLES    AGE    VERSION
-ip-10-0-143-147.us-east-2.compute.internal   Ready    worker   103m   v1.22.1
-ip-10-0-146-92.us-east-2.compute.internal    Ready    worker   101m   v1.22.1
-ip-10-0-169-2.us-east-2.compute.internal     Ready    worker   102m   v1.22.1
+ip-10-0-143-147.us-east-2.compute.internal   Ready    worker   103m   v1.23.0
+ip-10-0-146-92.us-east-2.compute.internal    Ready    worker   101m   v1.23.0
+ip-10-0-169-2.us-east-2.compute.internal     Ready    worker   102m   v1.23.0
----
+
[source,terminal]
@@ -25,9 +25,9 @@ $ oc get nodes
[source,terminal]
----
NAME                 STATUS   ROLES    AGE   VERSION
-master.example.com   Ready    master   7h    v1.22.1
-node1.example.com    Ready    worker   7h    v1.22.1
-node2.example.com    Ready    worker   7h    v1.22.1
+master.example.com   Ready    master   7h    v1.23.0
+node1.example.com    Ready    worker   7h    v1.23.0
+node2.example.com    Ready    worker   7h    v1.23.0
----
+
The following example is a cluster with one unhealthy node:
@@ -41,9 +41,9 @@ $ oc get nodes
[source,terminal]
----
NAME                 STATUS                        ROLES    AGE   VERSION
-master.example.com   Ready                         master   7h    v1.22.1
-node1.example.com    NotReady,SchedulingDisabled   worker   7h    v1.22.1
-node2.example.com    Ready                         worker   7h    v1.22.1
+master.example.com   Ready                         master   7h    v1.23.0
+node1.example.com    NotReady,SchedulingDisabled   worker   7h    v1.23.0
+node2.example.com    Ready                         worker   7h    v1.23.0
----
+
The conditions that trigger a `NotReady` status are shown later in this section.
@@ -59,9 +59,9 @@ $ oc get nodes -o wide
[source,terminal]
----
NAME                 STATUS   ROLES    AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                                       KERNEL-VERSION                 CONTAINER-RUNTIME
-master.example.com   Ready    master   171m   v1.22.1   10.0.129.108   <none>        Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa)   4.18.0-240.15.1.el8_3.x86_64   cri-o://1.22.1-30.rhaos4.9.gitf2f339d.el8-dev
-node1.example.com    Ready    worker   72m    v1.22.1   10.0.129.222   <none>        Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa)   4.18.0-240.15.1.el8_3.x86_64   cri-o://1.22.1-30.rhaos4.9.gitf2f339d.el8-dev
-node2.example.com    Ready    worker   164m   v1.22.1   10.0.142.150   <none>        Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa)   4.18.0-240.15.1.el8_3.x86_64   cri-o://1.22.1-30.rhaos4.9.gitf2f339d.el8-dev
+master.example.com   Ready    master   171m   v1.23.0   10.0.129.108   <none>        Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa)   4.18.0-240.15.1.el8_3.x86_64   cri-o://1.23.0-30.rhaos4.9.gitf2f339d.el8-dev
+node1.example.com    Ready    worker   72m    v1.23.0   10.0.129.222   <none>        Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa)   4.18.0-240.15.1.el8_3.x86_64   cri-o://1.23.0-30.rhaos4.9.gitf2f339d.el8-dev
+node2.example.com    Ready    worker   164m   v1.23.0   10.0.142.150   <none>        Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa)   4.18.0-240.15.1.el8_3.x86_64   cri-o://1.23.0-30.rhaos4.9.gitf2f339d.el8-dev
----

* The following command lists information about a single node:
@@ -82,7 +82,7 @@ $ oc get node node1.example.com
[source,terminal]
----
NAME                STATUS   ROLES    AGE   VERSION
-node1.example.com   Ready    worker   7h    v1.22.1
+node1.example.com   Ready    worker   7h    v1.23.0
----

* The following command provides more detailed information about a specific node, including the reason for
@@ -155,8 +155,8 @@ System Info: <9>
Operating System:            linux
Architecture:                amd64
Container Runtime Version:   cri-o://1.16.0-0.6.dev.rhaos4.3.git9ad059b.el8-rc2
-Kubelet Version:             v1.22.1
-Kube-Proxy Version:          v1.22.1
+Kubelet Version:             v1.23.0
+Kube-Proxy Version:          v1.23.0
PodCIDR:                     10.128.4.0/24
ProviderID:                  aws:///us-east-2a/i-04e87b31dc6b3e171
Non-terminated Pods:         (13 in total) <10>
@@ -43,7 +43,7 @@ $ oc get node <node1>
[source,terminal]
----
NAME      STATUS                        ROLES    AGE   VERSION
-<node1>   NotReady,SchedulingDisabled   worker   1d    v1.22.1
+<node1>   NotReady,SchedulingDisabled   worker   1d    v1.23.0
----

. Evacuate the pods using one of the following methods:
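The evacuation methods themselves are outside this hunk. One common approach is `oc adm drain`; a sketch, with flag names that can vary by `oc` release:

[source,terminal]
----
# Evict all pods from the node, ignoring DaemonSet-managed pods
$ oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data
----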
@@ -170,7 +170,7 @@ $ oc get nodes -l type=user-node
[source,terminal]
----
NAME                                       STATUS   ROLES    AGE   VERSION
-ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp   Ready    worker   61s   v1.22.1
+ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp   Ready    worker   61s   v1.23.0
----

* Add labels directly to a node:
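A minimal sketch of labeling a node directly, reusing the `type` and `region` labels queried above; `<node_name>` is a placeholder:

[source,terminal]
----
$ oc label nodes <node_name> type=user-node region=east
----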
@@ -223,5 +223,5 @@ $ oc get nodes -l type=user-node,region=east
[source,terminal]
----
NAME                                       STATUS   ROLES    AGE   VERSION
-ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49   Ready    worker   17m   v1.22.1
+ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49   Ready    worker   17m   v1.23.0
----
@@ -158,7 +158,7 @@ $ oc get nodes -l type=user-node,region=east
[source,terminal]
----
NAME                          STATUS   ROLES    AGE   VERSION
-ip-10-0-142-25.ec2.internal   Ready    worker   17m   v1.22.1
+ip-10-0-142-25.ec2.internal   Ready    worker   17m   v1.23.0
----

. Add the matching node selector to a pod:
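The pod spec for this step is outside the hunk. A minimal sketch of a pod whose `nodeSelector` matches the labels queried above; the pod name and image are hypothetical:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                           # hypothetical name
spec:
  nodeSelector:                               # must match the node labels exactly
    type: user-node
    region: east
  containers:
  - name: app
    image: registry.example.com/app:latest    # placeholder image
----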
@@ -160,7 +160,7 @@ $ oc get nodes -l type=user-node,region=east
[source,terminal]
----
NAME                                       STATUS   ROLES    AGE   VERSION
-ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp   Ready    worker   61s   v1.22.1
+ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp   Ready    worker   61s   v1.23.0
----

* Add labels directly to a node:
@@ -213,5 +213,5 @@ $ oc get nodes -l type=user-node,region=east
[source,terminal]
----
NAME                                       STATUS   ROLES    AGE   VERSION
-ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49   Ready    worker   17m   v1.22.1
+ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49   Ready    worker   17m   v1.23.0
----
@@ -64,7 +64,7 @@ metadata:
      "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}"
spec:
  displayName: Example Catalog
-  image: quay.io/example-org/example-catalog:v1.22
+  image: quay.io/example-org/example-catalog:v1.23
  priority: -400
  publisher: Example Org
----
@@ -77,11 +77,11 @@ If the `spec.image` field and the `olm.catalogImageTemplate` annotation are both
If the `spec.image` field is not set and the annotation does not resolve to a usable pull spec, OLM stops reconciliation of the catalog source and sets it into a human-readable error condition.
====

-For an {product-title} 4.9 cluster, which uses Kubernetes 1.22, the `olm.catalogImageTemplate` annotation in the preceding example resolves to the following image reference:
+For an {product-title} 4.9 cluster, which uses Kubernetes 1.23, the `olm.catalogImageTemplate` annotation in the preceding example resolves to the following image reference:

[source,terminal]
----
-quay.io/example-org/example-catalog:v1.22
+quay.io/example-org/example-catalog:v1.23
----

For future releases of {product-title}, you can create updated index images for your custom catalogs that target the later Kubernetes version that is used by the later {product-title} version. With the `olm.catalogImageTemplate` annotation set before the upgrade, upgrading the cluster to the later {product-title} version would then automatically update the catalog's index image as well.
@@ -25,12 +25,12 @@ $ oc get nodes
[source,terminal]
----
NAME                          STATUS   ROLES    AGE   VERSION
-compute-1.example.com         Ready    worker   33m   v1.22.1
-control-plane-1.example.com   Ready    master   41m   v1.22.1
-control-plane-2.example.com   Ready    master   45m   v1.22.1
-compute-2.example.com         Ready    worker   38m   v1.22.1
-compute-3.example.com         Ready    worker   33m   v1.22.1
-control-plane-3.example.com   Ready    master   41m   v1.22.1
+compute-1.example.com         Ready    worker   33m   v1.23.0
+control-plane-1.example.com   Ready    master   41m   v1.23.0
+control-plane-2.example.com   Ready    master   45m   v1.23.0
+compute-2.example.com         Ready    worker   38m   v1.23.0
+compute-3.example.com         Ready    worker   33m   v1.23.0
+control-plane-3.example.com   Ready    master   41m   v1.23.0
----

. Review CPU and memory resource availability for each cluster node:
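A sketch of reviewing node resource availability; `oc adm top nodes` assumes cluster metrics are available, and `oc describe node` reports allocatable capacity without them:

[source,terminal]
----
# Current CPU and memory usage per node (requires metrics)
$ oc adm top nodes

# Allocatable capacity plus requests and limits per node
$ oc describe node <node_name>
----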
@@ -70,7 +70,7 @@ $ oc get nodes -l node-role.kubernetes.io/master | grep "NotReady"
.Example output
[source,terminal]
----
-ip-10-0-131-183.ec2.internal   NotReady   master   122m   v1.22.1 <1>
+ip-10-0-131-183.ec2.internal   NotReady   master   122m   v1.23.0 <1>
----
<1> If the node is listed as `NotReady`, then the *node is not ready*.
@@ -94,9 +94,9 @@ $ oc get nodes -l node-role.kubernetes.io/master
[source,terminal]
----
NAME                           STATUS   ROLES    AGE     VERSION
-ip-10-0-131-183.ec2.internal   Ready    master   6h13m   v1.22.1
-ip-10-0-164-97.ec2.internal    Ready    master   6h13m   v1.22.1
-ip-10-0-154-204.ec2.internal   Ready    master   6h13m   v1.22.1
+ip-10-0-131-183.ec2.internal   Ready    master   6h13m   v1.23.0
+ip-10-0-164-97.ec2.internal    Ready    master   6h13m   v1.23.0
+ip-10-0-154-204.ec2.internal   Ready    master   6h13m   v1.23.0
----

.. Check whether the status of an etcd pod is either `Error` or `CrashloopBackoff`:
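A minimal sketch of that etcd pod check, assuming the standard `openshift-etcd` namespace and `k8s-app=etcd` pod label:

[source,terminal]
----
$ oc -n openshift-etcd get pods -l k8s-app=etcd
----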
@@ -91,7 +91,7 @@ $ oc get node | grep worker
[source,terminal]
----
NAME                                       STATUS   ROLES    AGE    VERSION
-ip-10-0-169-2.us-east-2.compute.internal   Ready    worker   102m   v1.22.1
+ip-10-0-169-2.us-east-2.compute.internal   Ready    worker   102m   v1.23.0
----
+
[source,terminal]
@@ -103,12 +103,12 @@ $ oc get nodes
[source,terminal]
----
NAME                           STATUS                     ROLES    AGE   VERSION
-ip-10-0-136-161.ec2.internal   Ready                      worker   28m   v1.22.1
-ip-10-0-136-243.ec2.internal   Ready                      master   34m   v1.22.1
-ip-10-0-141-105.ec2.internal   Ready,SchedulingDisabled   worker   28m   v1.22.1
-ip-10-0-142-249.ec2.internal   Ready                      master   34m   v1.22.1
-ip-10-0-153-11.ec2.internal    Ready                      worker   28m   v1.22.1
-ip-10-0-153-150.ec2.internal   Ready                      master   34m   v1.22.1
+ip-10-0-136-161.ec2.internal   Ready                      worker   28m   v1.23.0
+ip-10-0-136-243.ec2.internal   Ready                      master   34m   v1.23.0
+ip-10-0-141-105.ec2.internal   Ready,SchedulingDisabled   worker   28m   v1.23.0
+ip-10-0-142-249.ec2.internal   Ready                      master   34m   v1.23.0
+ip-10-0-153-11.ec2.internal    Ready                      worker   28m   v1.23.0
+ip-10-0-153-150.ec2.internal   Ready                      master   34m   v1.23.0
----
+
You can see that scheduling on each worker node is disabled as the change is being applied.
@@ -87,13 +87,13 @@ By default, the base OS RHEL with "Minimal" installation option enables firewall
[source,terminal]
----
NAME                        STATUS                        ROLES    AGE    VERSION
-mycluster-control-plane-0   Ready                         master   145m   v1.22.1
-mycluster-control-plane-1   Ready                         master   145m   v1.22.1
-mycluster-control-plane-2   Ready                         master   145m   v1.22.1
-mycluster-rhel7-0           NotReady,SchedulingDisabled   worker   98m    v1.22.1
-mycluster-rhel7-1           Ready                         worker   98m    v1.22.1
-mycluster-rhel7-2           Ready                         worker   98m    v1.22.1
-mycluster-rhel7-3           Ready                         worker   98m    v1.22.1
+mycluster-control-plane-0   Ready                         master   145m   v1.23.0
+mycluster-control-plane-1   Ready                         master   145m   v1.23.0
+mycluster-control-plane-2   Ready                         master   145m   v1.23.0
+mycluster-rhel7-0           NotReady,SchedulingDisabled   worker   98m    v1.23.0
+mycluster-rhel7-1           Ready                         worker   98m    v1.23.0
+mycluster-rhel7-2           Ready                         worker   98m    v1.23.0
+mycluster-rhel7-3           Ready                         worker   98m    v1.23.0
----
+
Note which machine has the `NotReady,SchedulingDisabled` status.
@@ -144,13 +144,13 @@ The `upgrade` playbook only upgrades the {product-title} packages. It does not u
[source,terminal]
----
NAME                        STATUS                        ROLES    AGE    VERSION
-mycluster-control-plane-0   Ready                         master   145m   v1.22.1
-mycluster-control-plane-1   Ready                         master   145m   v1.22.1
-mycluster-control-plane-2   Ready                         master   145m   v1.22.1
-mycluster-rhel7-0           NotReady,SchedulingDisabled   worker   98m    v1.22.1
-mycluster-rhel7-1           Ready                         worker   98m    v1.22.1
-mycluster-rhel7-2           Ready                         worker   98m    v1.22.1
-mycluster-rhel7-3           Ready                         worker   98m    v1.22.1
+mycluster-control-plane-0   Ready                         master   145m   v1.23.0
+mycluster-control-plane-1   Ready                         master   145m   v1.23.0
+mycluster-control-plane-2   Ready                         master   145m   v1.23.0
+mycluster-rhel7-0           NotReady,SchedulingDisabled   worker   98m    v1.23.0
+mycluster-rhel7-1           Ready                         worker   98m    v1.23.0
+mycluster-rhel7-2           Ready                         worker   98m    v1.23.0
+mycluster-rhel7-3           Ready                         worker   98m    v1.23.0
----
. Optional: Update the operating system packages that were not updated by the `upgrade` playbook. To update packages that are not on {product-version}, use the following command:
+
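The exact package command is outside this hunk. On a RHEL 7 compute node this is plain `yum`; a sketch that assumes the OpenShift excluder packages already pin the `openshift*` packages:

[source,terminal]
----
# Update remaining OS packages on the RHEL node
# yum update
----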
@@ -13,12 +13,12 @@ To see which workers and masters are running on your cluster, type:
$ oc get nodes

NAME                                     STATUS   ROLES    AGE     VERSION
-ip-10-0-0-1.us-east-2.compute.internal   Ready    worker   4h20m   v1.22.1
-ip-10-0-0-2.us-east-2.compute.internal   Ready    master   4h39m   v1.22.1
-ip-10-0-0.3.us-east-2.compute.internal   Ready    worker   4h20m   v1.22.1
-ip-10-0-0-4.us-east-2.compute.internal   Ready    master   4h39m   v1.22.1
-ip-10-0-0-5.us-east-2.compute.internal   Ready    master   4h39m   v1.22.1
-ip-10-0-0-6.us-east-2.compute.internal   Ready    worker   4h20m   v1.22.1
+ip-10-0-0-1.us-east-2.compute.internal   Ready    worker   4h20m   v1.23.0
+ip-10-0-0-2.us-east-2.compute.internal   Ready    master   4h39m   v1.23.0
+ip-10-0-0.3.us-east-2.compute.internal   Ready    worker   4h20m   v1.23.0
+ip-10-0-0-4.us-east-2.compute.internal   Ready    master   4h39m   v1.23.0
+ip-10-0-0-5.us-east-2.compute.internal   Ready    master   4h39m   v1.23.0
+ip-10-0-0-6.us-east-2.compute.internal   Ready    worker   4h20m   v1.23.0
----

To see more information about internal and external IP addresses, the type of operating system ({op-system}), kernel version, and container runtime (CRI-O), add the `-o wide` option.
@@ -27,7 +27,7 @@ To see more information about internal and external IP addresses, the type of op
$ oc get nodes -o wide

NAME                                         STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION              CONTAINER-RUNTIME
-ip-10-0-134-252.us-east-2.compute.internal   Ready    worker   17h   v1.22.1   10.0.134.252   <none>        Red Hat CoreOS 4.0   3.10.0-957.5.1.el7.x86_64   cri-o://1.22.1-1.rhaos4.0.git2f0cb0d.el7
+ip-10-0-134-252.us-east-2.compute.internal   Ready    worker   17h   v1.23.0   10.0.134.252   <none>        Red Hat CoreOS 4.0   3.10.0-957.5.1.el7.x86_64   cri-o://1.23.0-1.rhaos4.0.git2f0cb0d.el7

....
----
@@ -187,10 +187,10 @@ $ oc get nodes
[source,terminal]
----
NAME                           STATUS   ROLES    AGE   VERSION
-ip-10-0-168-251.ec2.internal   Ready    master   82m   v1.22.1
-ip-10-0-170-223.ec2.internal   Ready    master   82m   v1.22.1
-ip-10-0-179-95.ec2.internal    Ready    worker   70m   v1.22.1
-ip-10-0-182-134.ec2.internal   Ready    worker   70m   v1.22.1
-ip-10-0-211-16.ec2.internal    Ready    master   82m   v1.22.1
-ip-10-0-250-100.ec2.internal   Ready    worker   69m   v1.22.1
+ip-10-0-168-251.ec2.internal   Ready    master   82m   v1.23.0
+ip-10-0-170-223.ec2.internal   Ready    master   82m   v1.23.0
+ip-10-0-179-95.ec2.internal    Ready    worker   70m   v1.23.0
+ip-10-0-182-134.ec2.internal   Ready    worker   70m   v1.23.0
+ip-10-0-211-16.ec2.internal    Ready    master   82m   v1.23.0
+ip-10-0-250-100.ec2.internal   Ready    worker   69m   v1.23.0
----
@@ -30,9 +30,9 @@ $ oc get nodes -l node-role.kubernetes.io/worker
[source,terminal]
----
NAME             STATUS   ROLES    AGE   VERSION
-compute-node-0   Ready    worker   30m   v1.22.1
-compute-node-1   Ready    worker   30m   v1.22.1
-compute-node-2   Ready    worker   30m   v1.22.1
+compute-node-0   Ready    worker   30m   v1.23.0
+compute-node-1   Ready    worker   30m   v1.23.0
+compute-node-2   Ready    worker   30m   v1.23.0
----
+
Note the names of your compute nodes.
@@ -25,9 +25,9 @@ $ oc get nodes -l node-role.kubernetes.io/master
[source,terminal]
----
NAME                   STATUS   ROLES    AGE   VERSION
-control-plane-node-0   Ready    master   75m   v1.22.1
-control-plane-node-1   Ready    master   75m   v1.22.1
-control-plane-node-2   Ready    master   75m   v1.22.1
+control-plane-node-0   Ready    master   75m   v1.23.0
+control-plane-node-1   Ready    master   75m   v1.23.0
+control-plane-node-2   Ready    master   75m   v1.23.0
----
+
Note the names of your control plane nodes.
@@ -158,7 +158,7 @@ $ oc get node __<node_name>__ -o wide
[source,terminal]
----
NAME     STATUS   ROLES    AGE     VERSION   INTERNAL-IP      EXTERNAL-IP
-node01   Ready    worker   6d22h   v1.22.1   192.168.55.101   <none>
+node01   Ready    worker   6d22h   v1.23.0   192.168.55.101   <none>
----

. Log in to the VM via SSH by specifying the IP address of the node where the VM is running and the port number. Use the port number displayed by the `oc get svc` command and the IP address of the node displayed by the `oc get node` command. The following example shows the `ssh` command with the username, node's IP address, and the port number:
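The `ssh` example itself is outside this hunk. A placeholder sketch using the node IP address from the output above; `<username>` and `<port>` stand in for the values described in the step:

[source,terminal]
----
$ ssh <username>@192.168.55.101 -p <port>
----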