mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

OSDOCS-1651: Updating 1.19 to 1.20 in example output

This commit is contained in:
Andrea Hoffer
2020-12-14 13:20:33 -05:00
committed by openshift-cherrypick-robot
parent 3a9b2ef592
commit 8b720c08d0
17 changed files with 84 additions and 84 deletions


@@ -152,12 +152,12 @@ $ oc get nodes
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-ip-10-0-147-106.us-east-2.compute.internal Ready master 14h v1.19.0+6025c28
-ip-10-0-150-175.us-east-2.compute.internal Ready worker 14h v1.19.0+6025c28
-ip-10-0-175-23.us-east-2.compute.internal Ready master 14h v1.19.0+6025c28
-ip-10-0-189-6.us-east-2.compute.internal Ready worker 14h v1.19.0+6025c28
-ip-10-0-205-158.us-east-2.compute.internal Ready master 14h v1.19.0+6025c28
-ip-10-0-210-167.us-east-2.compute.internal Ready worker 14h v1.19.0+6025c28
+ip-10-0-147-106.us-east-2.compute.internal Ready master 14h v1.20.0+6025c28
+ip-10-0-150-175.us-east-2.compute.internal Ready worker 14h v1.20.0+6025c28
+ip-10-0-175-23.us-east-2.compute.internal Ready master 14h v1.20.0+6025c28
+ip-10-0-189-6.us-east-2.compute.internal Ready worker 14h v1.20.0+6025c28
+ip-10-0-205-158.us-east-2.compute.internal Ready master 14h v1.20.0+6025c28
+ip-10-0-210-167.us-east-2.compute.internal Ready worker 14h v1.20.0+6025c28
----
--


@@ -88,9 +88,9 @@ Note the worker with the role `worker-rt` that contains the string `4.18.0-211.r
NAME STATUS ROLES AGE VERSION INTERNAL-IP
EXTERNAL-IP OS-IMAGE KERNEL-VERSION
CONTAINER-RUNTIME
-cnf-worker-0.example.com Ready worker,worker-rt 5d17h v1.19.0-rc.2+aaf4ce1-dirty
+cnf-worker-0.example.com Ready worker,worker-rt 5d17h v1.20.0
128.66.135.107 <none> Red Hat Enterprise Linux CoreOS 46.82.202008252340-0 (Ootpa)
-4.18.0-211.rt5.23.el8.x86_64 cri-o://1.19.0-90.rhaos4.6.git4a0ac05.el8-rc.1
+4.18.0-211.rt5.23.el8.x86_64 cri-o://1.20.0-90.rhaos4.6.git4a0ac05.el8-rc.1
[...]
----


@@ -34,9 +34,9 @@ The master nodes are ready if the status is `Ready`, as shown in the following o
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-ip-10-0-168-251.ec2.internal Ready master 75m v1.19.0
-ip-10-0-170-223.ec2.internal Ready master 75m v1.19.0
-ip-10-0-211-16.ec2.internal Ready master 75m v1.19.0
+ip-10-0-168-251.ec2.internal Ready master 75m v1.20.0
+ip-10-0-170-223.ec2.internal Ready master 75m v1.20.0
+ip-10-0-211-16.ec2.internal Ready master 75m v1.20.0
----
. If the master nodes are _not_ ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved.
@@ -75,9 +75,9 @@ The worker nodes are ready if the status is `Ready`, as shown in the following o
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-ip-10-0-179-95.ec2.internal Ready worker 64m v1.19.0
-ip-10-0-182-134.ec2.internal Ready worker 64m v1.19.0
-ip-10-0-250-100.ec2.internal Ready worker 64m v1.19.0
+ip-10-0-179-95.ec2.internal Ready worker 64m v1.20.0
+ip-10-0-182-134.ec2.internal Ready worker 64m v1.20.0
+ip-10-0-250-100.ec2.internal Ready worker 64m v1.20.0
----
. If the worker nodes are _not_ ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved.
@@ -141,12 +141,12 @@ Check that the status for all nodes is `Ready`.
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-ip-10-0-168-251.ec2.internal Ready master 82m v1.19.0
-ip-10-0-170-223.ec2.internal Ready master 82m v1.19.0
-ip-10-0-179-95.ec2.internal Ready worker 70m v1.19.0
-ip-10-0-182-134.ec2.internal Ready worker 70m v1.19.0
-ip-10-0-211-16.ec2.internal Ready master 82m v1.19.0
-ip-10-0-250-100.ec2.internal Ready worker 69m v1.19.0
+ip-10-0-168-251.ec2.internal Ready master 82m v1.20.0
+ip-10-0-170-223.ec2.internal Ready master 82m v1.20.0
+ip-10-0-179-95.ec2.internal Ready worker 70m v1.20.0
+ip-10-0-182-134.ec2.internal Ready worker 70m v1.20.0
+ip-10-0-211-16.ec2.internal Ready master 82m v1.20.0
+ip-10-0-250-100.ec2.internal Ready worker 69m v1.20.0
----
If the cluster did not start properly, you might need to restore your cluster using an etcd backup.


@@ -133,12 +133,12 @@ $ oc get node
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-ip-10-0-137-44.ec2.internal Ready worker 7m v1.19.0
-ip-10-0-138-148.ec2.internal Ready master 11m v1.19.0
-ip-10-0-139-122.ec2.internal Ready master 11m v1.19.0
-ip-10-0-147-35.ec2.internal Ready,SchedulingDisabled worker 7m v1.19.0
-ip-10-0-153-12.ec2.internal Ready worker 7m v1.19.0
-ip-10-0-154-10.ec2.internal Ready master 11m v1.19.0
+ip-10-0-137-44.ec2.internal Ready worker 7m v1.20.0
+ip-10-0-138-148.ec2.internal Ready master 11m v1.20.0
+ip-10-0-139-122.ec2.internal Ready master 11m v1.20.0
+ip-10-0-147-35.ec2.internal Ready,SchedulingDisabled worker 7m v1.20.0
+ip-10-0-153-12.ec2.internal Ready worker 7m v1.20.0
+ip-10-0-154-10.ec2.internal Ready master 11m v1.20.0
----
+
You can see that scheduling on each worker node is disabled as the change is being applied.


@@ -111,13 +111,13 @@ $ oc get nodes
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.19.0
-ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.19.0
-ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.19.0
-ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.19.0
-ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.19.0
-ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.19.0
-ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.19.0
+ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.20.0
+ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.20.0
+ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.20.0
+ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.20.0
+ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.20.0
+ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.20.0
+ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.20.0
----
+
Note that the node has a `node-role.kubernetes.io/infra: ''` label:


@@ -97,7 +97,7 @@ $ oc get node <node_name> <1>
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.19.0
+ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.20.0
----
+
Because the role list includes `infra`, the pod is running on the correct node.


@@ -42,11 +42,11 @@ $ oc get nodes
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-master-0 Ready master 63m v1.19.0
-master-1 Ready master 63m v1.19.0
-master-2 Ready master 64m v1.19.0
-worker-0 NotReady worker 76s v1.19.0
-worker-1 NotReady worker 70s v1.19.0
+master-0 Ready master 63m v1.20.0
+master-1 Ready master 63m v1.20.0
+master-2 Ready master 64m v1.20.0
+worker-0 NotReady worker 76s v1.20.0
+worker-1 NotReady worker 70s v1.20.0
----
+
The output lists all of the machines that you created.


@@ -53,7 +53,7 @@ $ ./openshift-install --dir=<installation_directory> wait-for bootstrap-complete
[source,terminal]
----
INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443...
-INFO API v1.19.0 up
+INFO API v1.20.0 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources
----


@@ -91,12 +91,12 @@ $ oc get nodes
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-ip-10-0-139-200.us-east-2.compute.internal Ready master 111m v1.19.0
-ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.19.0
-ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.19.0
-ip-10-0-156-255.us-east-2.compute.internal Ready master 111m v1.19.0
-ip-10-0-164-74.us-east-2.compute.internal Ready master 111m v1.19.0
-ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.19.0
+ip-10-0-139-200.us-east-2.compute.internal Ready master 111m v1.20.0
+ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.20.0
+ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.20.0
+ip-10-0-156-255.us-east-2.compute.internal Ready master 111m v1.20.0
+ip-10-0-164-74.us-east-2.compute.internal Ready master 111m v1.20.0
+ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.20.0
----
+
[source,terminal]


@@ -14,9 +14,9 @@ $ oc get nodes
NAME STATUS ROLES AGE VERSION
-ip-10-0-0-1.us-east-2.compute.internal Ready worker 3h19m v1.19.0
+ip-10-0-0-1.us-east-2.compute.internal Ready worker 3h19m v1.20.0
-ip-10-0-0-39.us-east-2.compute.internal Ready master 3h37m v1.19.0
+ip-10-0-0-39.us-east-2.compute.internal Ready master 3h37m v1.20.0


@@ -137,12 +137,12 @@ $ oc get nodes
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-ip-10-0-136-161.ec2.internal Ready worker 28m v1.19.0
-ip-10-0-136-243.ec2.internal Ready master 34m v1.19.0
-ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.19.0
-ip-10-0-142-249.ec2.internal Ready master 34m v1.19.0
-ip-10-0-153-11.ec2.internal Ready worker 28m v1.19.0
-ip-10-0-153-150.ec2.internal Ready master 34m v1.19.0
+ip-10-0-136-161.ec2.internal Ready worker 28m v1.20.0
+ip-10-0-136-243.ec2.internal Ready master 34m v1.20.0
+ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.20.0
+ip-10-0-142-249.ec2.internal Ready master 34m v1.20.0
+ip-10-0-153-11.ec2.internal Ready worker 28m v1.20.0
+ip-10-0-153-150.ec2.internal Ready master 34m v1.20.0
----
+
You can see that scheduling on each worker node is disabled as the change is being applied.


@@ -57,9 +57,9 @@ $ oc get nodes
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.19.0
-ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.19.0
-ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.19.0
+ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.20.0
+ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.20.0
+ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.20.0
----
+
[source,terminal]


@@ -18,9 +18,9 @@ $ oc get nodes
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-master.example.com Ready master 7h v1.19.0
-node1.example.com Ready worker 7h v1.19.0
-node2.example.com Ready worker 7h v1.19.0
+master.example.com Ready master 7h v1.20.0
+node1.example.com Ready worker 7h v1.20.0
+node2.example.com Ready worker 7h v1.20.0
----
* The `-o wide` option provides additional information on all nodes.
@@ -127,8 +127,8 @@ System Info: <9>
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.16.0-0.6.dev.rhaos4.3.git9ad059b.el8-rc2
-Kubelet Version: v1.19.0
-Kube-Proxy Version: v1.19.0
+Kubelet Version: v1.20.0
+Kube-Proxy Version: v1.20.0
PodCIDR: 10.128.4.0/24
ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171
Non-terminated Pods: (13 in total) <10>


@@ -43,7 +43,7 @@ $ oc get node <node1>
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-<node1> NotReady,SchedulingDisabled worker 1d v1.19.0
+<node1> NotReady,SchedulingDisabled worker 1d v1.20.0
----
. Evacuate the pods using one of the following methods:


@@ -70,7 +70,7 @@ $ oc get nodes -l node-role.kubernetes.io/master | grep "NotReady"
.Example output
[source,terminal]
----
-ip-10-0-131-183.ec2.internal NotReady master 122m v1.19.0 <1>
+ip-10-0-131-183.ec2.internal NotReady master 122m v1.20.0 <1>
----
<1> If the node is listed as `NotReady`, then the *node is not ready*.
@@ -94,9 +94,9 @@ $ oc get nodes -l node-role.kubernetes.io/master
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-ip-10-0-131-183.ec2.internal Ready master 6h13m v1.19.0
-ip-10-0-164-97.ec2.internal Ready master 6h13m v1.19.0
-ip-10-0-154-204.ec2.internal Ready master 6h13m v1.19.0
+ip-10-0-131-183.ec2.internal Ready master 6h13m v1.20.0
+ip-10-0-164-97.ec2.internal Ready master 6h13m v1.20.0
+ip-10-0-154-204.ec2.internal Ready master 6h13m v1.20.0
----
.. Check whether the status of an etcd pod is either `Error` or `CrashLoopBackOff`:


@@ -76,9 +76,9 @@ You must not enable firewalld later. If you do, you cannot access {product-title
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-mycluster-control-plane-0 Ready master 145m v1.19.0
-mycluster-control-plane-1 Ready master 145m v1.19.0
-mycluster-control-plane-2 Ready master 145m v1.19.0
+mycluster-control-plane-0 Ready master 145m v1.20.0
+mycluster-control-plane-1 Ready master 145m v1.20.0
+mycluster-control-plane-2 Ready master 145m v1.20.0
mycluster-rhel7-0 NotReady,SchedulingDisabled worker 98m v1.14.6+97c81d00e
mycluster-rhel7-1 Ready worker 98m v1.14.6+97c81d00e
mycluster-rhel7-2 Ready worker 98m v1.14.6+97c81d00e
@@ -130,11 +130,11 @@ that you created.
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-mycluster-control-plane-0 Ready master 145m v1.19.0
-mycluster-control-plane-1 Ready master 145m v1.19.0
-mycluster-control-plane-2 Ready master 145m v1.19.0
-mycluster-rhel7-0 NotReady,SchedulingDisabled worker 98m v1.19.0
-mycluster-rhel7-1 Ready worker 98m v1.19.0
-mycluster-rhel7-2 Ready worker 98m v1.19.0
-mycluster-rhel7-3 Ready worker 98m v1.19.0
+mycluster-control-plane-0 Ready master 145m v1.20.0
+mycluster-control-plane-1 Ready master 145m v1.20.0
+mycluster-control-plane-2 Ready master 145m v1.20.0
+mycluster-rhel7-0 NotReady,SchedulingDisabled worker 98m v1.20.0
+mycluster-rhel7-1 Ready worker 98m v1.20.0
+mycluster-rhel7-2 Ready worker 98m v1.20.0
+mycluster-rhel7-3 Ready worker 98m v1.20.0
----


@@ -13,12 +13,12 @@ To see which workers and masters are running on your cluster, type:
$ oc get nodes
NAME STATUS ROLES AGE VERSION
-ip-10-0-0-1.us-east-2.compute.internal Ready worker 4h20m v1.19.0
-ip-10-0-0-2.us-east-2.compute.internal Ready master 4h39m v1.19.0
-ip-10-0-0.3.us-east-2.compute.internal Ready worker 4h20m v1.19.0
-ip-10-0-0-4.us-east-2.compute.internal Ready master 4h39m v1.19.0
-ip-10-0-0-5.us-east-2.compute.internal Ready master 4h39m v1.19.0
-ip-10-0-0-6.us-east-2.compute.internal Ready worker 4h20m v1.19.0
+ip-10-0-0-1.us-east-2.compute.internal Ready worker 4h20m v1.20.0
+ip-10-0-0-2.us-east-2.compute.internal Ready master 4h39m v1.20.0
+ip-10-0-0.3.us-east-2.compute.internal Ready worker 4h20m v1.20.0
+ip-10-0-0-4.us-east-2.compute.internal Ready master 4h39m v1.20.0
+ip-10-0-0-5.us-east-2.compute.internal Ready master 4h39m v1.20.0
+ip-10-0-0-6.us-east-2.compute.internal Ready worker 4h20m v1.20.0
----
To see more information about internal and external IP addresses, the type of operating system ({op-system}), kernel version, and container runtime (CRI-O), add the `-o wide` option.
@@ -27,7 +27,7 @@ To see more information about internal and external IP addresses, the type of op
$ oc get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
-ip-10-0-134-252.us-east-2.compute.internal Ready worker 17h v1.19.0 10.0.134.252 <none> Red Hat CoreOS 4.0 3.10.0-957.5.1.el7.x86_64 cri-o://1.13.6-1.rhaos4.0.git2f0cb0d.el7
+ip-10-0-134-252.us-east-2.compute.internal Ready worker 17h v1.20.0 10.0.134.252 <none> Red Hat CoreOS 4.0 3.10.0-957.5.1.el7.x86_64 cri-o://1.13.6-1.rhaos4.0.git2f0cb0d.el7
....
----
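Every hunk in this commit is the same mechanical substitution of `1.19.0` with `1.20.0` in example output, and the perfectly symmetric count (84 additions, 84 deletions across 17 files) suggests a scripted find-and-replace. A minimal sketch of how such a bulk update could be done (hypothetical paths and fixture content; assumes GNU `sed`, whose `-i` flag edits files in place):

```shell
# Create a throwaway doc fragment containing old-style version strings.
# The real commit touched 17 .adoc modules in the openshift-docs repo;
# this path and content are illustrative only.
mkdir -p /tmp/docs-demo
cat > /tmp/docs-demo/nodes.adoc <<'EOF'
ip-10-0-147-106.us-east-2.compute.internal Ready master 14h v1.19.0+6025c28
Kubelet Version: v1.19.0
EOF

# Replace every 1.19.0 with 1.20.0 in all .adoc files under the tree.
# The \. escapes keep the dots literal so e.g. "1x19y0" is not matched.
find /tmp/docs-demo -name '*.adoc' -exec sed -i 's/1\.19\.0/1.20.0/g' {} +

cat /tmp/docs-demo/nodes.adoc
```

A regex this broad also rewrites other `1.19.0` occurrences (such as `cri-o://1.19.0-...` strings elsewhere in the diff), which matches what this commit did; a tighter pattern like `v1\.19\.0` would limit the change to kubelet version fields.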