diff --git a/modules/cli-installing-cli.adoc b/modules/cli-installing-cli.adoc
index 7992ea44f2..a5fabde871 100644
--- a/modules/cli-installing-cli.adoc
+++ b/modules/cli-installing-cli.adoc
@@ -68,6 +68,7 @@ ifndef::openshift-origin[]
 endif::[]
 . Unpack the archive:
 +
+[source,terminal]
 ----
 $ tar xvzf
 ----
@@ -75,12 +76,14 @@ $ tar xvzf
 +
 To check your `PATH`, execute the following command:
 +
+[source,terminal]
 ----
 $ echo $PATH
 ----

 After you install the CLI, it is available using the `oc` command:

+[source,terminal]
 ----
 $ oc
 ----
@@ -106,12 +109,14 @@ endif::[]
 +
 To check your `PATH`, open the command prompt and execute the following command:
 +
+[source,terminal]
 ----
 C:\> path
 ----

 After you install the CLI, it is available using the `oc` command:

+[source,terminal]
 ----
 C:\> oc
 ----
@@ -137,12 +142,14 @@ endif::[]
 +
 To check your `PATH`, open a terminal and execute the following command:
 +
+[source,terminal]
 ----
 $ echo $PATH
 ----

 After you install the CLI, it is available using the `oc` command:

+[source,terminal]
 ----
 $ oc
 ----
diff --git a/modules/cli-logging-in-kubeadmin.adoc b/modules/cli-logging-in-kubeadmin.adoc
index 82321e8a3a..4dd4c5448b 100644
--- a/modules/cli-logging-in-kubeadmin.adoc
+++ b/modules/cli-logging-in-kubeadmin.adoc
@@ -49,6 +49,7 @@ The file is specific to a cluster and is created during {product-title} installa
 . Export the `kubeadmin` credentials:
 +
+[source,terminal]
 ----
 $ export KUBECONFIG=/auth/kubeconfig <1>
 ----
@@ -57,7 +58,13 @@ the installation files in.
 . Verify you can run `oc` commands successfully using the exported configuration:
 +
+[source,terminal]
 ----
 $ oc whoami
+----
++
+.Example output
+[source,terminal]
+----
 system:admin
 ----
diff --git a/modules/installation-approve-csrs.adoc b/modules/installation-approve-csrs.adoc
index 2e65154f67..213efb9a1e 100644
--- a/modules/installation-approve-csrs.adoc
+++ b/modules/installation-approve-csrs.adoc
@@ -33,9 +33,14 @@ these CSRs are approved or, if necessary, approve them yourself.
 . Confirm that the cluster recognizes the machines:
 +
 ifdef::ibm-z[]
+[source,terminal]
 ----
 # oc get nodes
-
+----
++
+.Example output
+[source,terminal]
+----
 NAME                            STATUS   ROLES    AGE   VERSION
 master-0.cl1mstr0.example.com   Ready    master   20h   v1.14.6+888f9c630
 master-1.cl1mstr1.example.com   Ready    master   20h   v1.14.6+888f9c630
@@ -45,9 +50,14 @@ worker-1.cl1wrk01.example.com   Ready    worker   20h   v1.14.6+888f9c630
 ----
 endif::ibm-z[]
 ifndef::ibm-z[]
+[source,terminal]
 ----
 $ oc get nodes
-
+----
++
+.Example output
+[source,terminal]
+----
 NAME       STATUS   ROLES    AGE   VERSION
 master-0   Ready    master   63m   v1.18.3
 master-1   Ready    master   63m   v1.18.3
@@ -63,9 +73,14 @@ The output lists all of the machines that you created.
 you see a client and server request with `Pending` or `Approved` status for each machine that you added to the cluster:
 +
+[source,terminal]
 ----
 $ oc get csr
-
+----
++
+.Example output
+[source,terminal]
+----
 NAME        AGE   REQUESTOR                                                                   CONDITION
 csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending <1>
 csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
@@ -96,6 +111,7 @@ of automatically approving the kubelet serving certificate requests.
 ** To approve them individually, run the following command for each valid CSR:
 +
+[source,terminal]
 ----
 $ oc adm certificate approve <1>
 ----
@@ -103,6 +119,7 @@ $ oc adm certificate approve <1>
 ** To approve all pending CSRs, run the following command:
 +
+[source,terminal]
 ----
 $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
 ----
diff --git a/modules/installation-aws-user-infra-bootstrap.adoc b/modules/installation-aws-user-infra-bootstrap.adoc
index 2210ceff7f..606f5c20ef 100644
--- a/modules/installation-aws-user-infra-bootstrap.adoc
+++ b/modules/installation-aws-user-infra-bootstrap.adoc
@@ -25,6 +25,7 @@ you can install the cluster.
 . Change to the directory that contains the installation program and run the following command:
 +
+[source,terminal]
 ----
 $ ./openshift-install wait-for bootstrap-complete --dir= \ <1>
     --log-level=info <2>
diff --git a/modules/installation-aws-user-infra-delete-bootstrap.adoc b/modules/installation-aws-user-infra-delete-bootstrap.adoc
index 2757d5c350..c49d5e7933 100644
--- a/modules/installation-aws-user-infra-delete-bootstrap.adoc
+++ b/modules/installation-aws-user-infra-delete-bootstrap.adoc
@@ -17,6 +17,7 @@ After you complete the initial Operator configuration for the cluster, remove th
 . Delete the bootstrap resources. If you used the CloudFormation template, link:https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html[delete its stack]:
 +
+[source,terminal]
 ----
 $ aws cloudformation delete-stack --stack-name <1>
 ----
diff --git a/modules/installation-aws-user-infra-installation.adoc b/modules/installation-aws-user-infra-installation.adoc
index d7c440887e..c46b2516ed 100644
--- a/modules/installation-aws-user-infra-installation.adoc
+++ b/modules/installation-aws-user-infra-installation.adoc
@@ -20,16 +20,27 @@ user-provisioned infrastructure, monitor the deployment to completion.
 .Procedure

-* Complete the cluster installation:
+ifdef::restricted[]
+. Complete
+endif::restricted[]
+ifndef::restricted[]
+* Complete
+endif::restricted[]
+the cluster installation:
 +
+[source,terminal]
 ----
 $ ./openshift-install --dir= wait-for install-complete <1>
-
-INFO Waiting up to 30m0s for the cluster to initialize...
 ----
 <1> For ``, specify the path to the directory that you stored the installation files in.
 +
+.Example output
+[source,terminal]
+----
+INFO Waiting up to 30m0s for the cluster to initialize...
+----
++
 [IMPORTANT]
 ====
 The Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.
diff --git a/modules/installation-azure-create-ingress-dns-records.adoc b/modules/installation-azure-create-ingress-dns-records.adoc
index 46be94419b..eb0d12f796 100644
--- a/modules/installation-azure-create-ingress-dns-records.adoc
+++ b/modules/installation-azure-create-ingress-dns-records.adoc
@@ -23,14 +23,21 @@ records per your requirements.
 . Confirm the Ingress router has created a load balancer and populated the `EXTERNAL-IP` field:
 +
+[source,terminal]
 ----
 $ oc -n openshift-ingress get service router-default
+----
++
+.Example output
+[source,terminal]
+----
 NAME             TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
 router-default   LoadBalancer   172.30.20.10   35.130.120.110   80:32288/TCP,443:31215/TCP   20
 ----

 . Export the Ingress router IP as a variable:
 +
+[source,terminal]
 ----
 $ export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`
 ----
@@ -39,20 +46,29 @@ $ export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --
 .. If you are adding this cluster to a new public zone, run:
 +
+[source,terminal]
 ----
 $ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps -a ${PUBLIC_IP_ROUTER} --ttl 300
 ----

 .. If you are adding this cluster to an already existing public zone, run:
 +
+[source,terminal]
 ----
 $ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${BASE_DOMAIN} -n *.apps.${CLUSTER_NAME} -a ${PUBLIC_IP_ROUTER} --ttl 300
 ----

 . Add a `*.apps` record to the private DNS zone:
+.. Create a `*.apps` record by using the following command:
 +
+[source,terminal]
 ----
 $ az network private-dns record-set a create -g ${RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps --ttl 300
+----
+.. Add the `*.apps` record to the private DNS zone by using the following command:
++
+[source,terminal]
+----
 $ az network private-dns record-set a add-record -g ${RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps -a ${PUBLIC_IP_ROUTER}
 ----
diff --git a/modules/installation-complete-user-infra.adoc b/modules/installation-complete-user-infra.adoc
index 2e143cb277..c3f0848fe6 100644
--- a/modules/installation-complete-user-infra.adoc
+++ b/modules/installation-complete-user-infra.adoc
@@ -28,9 +28,14 @@ cluster on infrastructure that you provide.
 . Confirm that all the cluster components are online:
 +
+[source,terminal]
 ----
 $ watch -n5 oc get clusteroperators
-
+----
++
+.Example output
+[source,terminal]
+----
 NAME                 VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
 authentication       4.5.4     True        False         False      7m56s
 cloud-credential     4.5.4     True        False         False      31m
@@ -68,13 +73,19 @@ When all of the cluster Operators are `AVAILABLE`, you can complete the installa
 . Monitor for cluster completion:
 +
+[source,terminal]
 ----
 $ ./openshift-install --dir= wait-for install-complete <1>
-INFO Waiting up to 30m0s for the cluster to initialize...
 ----
 <1> For ``, specify the path to the directory that you stored the installation files in.
 +
+.Example output
+[source,terminal]
+----
+INFO Waiting up to 30m0s for the cluster to initialize...
+----
++
 The command succeeds when the Cluster Version Operator finishes deploying the {product-title} cluster from Kubernetes API server.
 +
@@ -86,9 +97,14 @@ The Ignition config files that the installation program generates contain certif
 . Confirm that the Kubernetes API server is communicating with the Pods.
 .. To view a list of all Pods, use the following command:
 +
+[source,terminal]
 ----
 $ oc get pods --all-namespaces
-
+----
++
+.Example output
+[source,terminal]
+----
 NAMESPACE                            NAME                                             READY   STATUS    RESTARTS   AGE
 openshift-apiserver-operator         openshift-apiserver-operator-85cb746d55-zqhs8    1/1     Running   1          9m
 openshift-apiserver                  apiserver-67b9g                                  1/1     Running   0          3m
@@ -101,6 +117,7 @@ openshift-authentication-operator   authentication-operator-69d5d8bf84-vh2n8
 .. View the logs for a Pod that is listed in the output of the previous command by using the following command:
 +
+[source,terminal]
 ----
 $ oc logs -n <1>
 ----
diff --git a/modules/installation-create-ingress-dns-records.adoc b/modules/installation-create-ingress-dns-records.adoc
index 6a3fe4b1c1..1c202d7e56 100644
--- a/modules/installation-create-ingress-dns-records.adoc
+++ b/modules/installation-create-ingress-dns-records.adoc
@@ -23,8 +23,14 @@ link:https://docs.aws.amazon.com/cli/latest/userguide/install-bundle.html[Install
 ** To create a wildcard record, use `*.apps..`, where `` is your cluster name, and `` is the Route53 base domain for your {product-title} cluster.
 ** To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command:
 +
+[source,terminal]
 ----
 $ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes
+----
++
+.Example output
+[source,terminal]
+----
 oauth-openshift.apps..
 console-openshift-console.apps..
 downloads-openshift-console.apps..
@@ -35,39 +41,57 @@ prometheus-k8s-openshift-monitoring.apps..
 . Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the `EXTERNAL-IP` column:
 +
+[source,terminal]
 ----
 $ oc -n openshift-ingress get service router-default
+----
++
+.Example output
+[source,terminal]
+----
 NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)                      AGE
 router-default   LoadBalancer   172.30.62.215   ab3...28.us-east-2.elb.amazonaws.com   80:31499/TCP,443:30693/TCP   5m
 ----

 . Locate the hosted zone ID for the load balancer:
 +
+[source,terminal]
 ----
 $ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "").CanonicalHostedZoneNameID' <1>
-
-Z3AADJGX6KTTL2
 ----
 <1> For ``, specify the value of the external IP address of the Ingress Operator load balancer that you obtained.
++
+.Example output
+[source,terminal]
+----
+Z3AADJGX6KTTL2
+----
+
 +
 The output of this command is the load balancer hosted zone ID.

 . Obtain the public hosted zone ID for your cluster's domain:
 +
+[source,terminal]
 ----
 $ aws route53 list-hosted-zones-by-name \
             --dns-name "" \ <1>
             --query 'HostedZones[? Config.PrivateZone != `true` && Name == `.`].Id' <1>
             --output text
-
-/hostedzone/Z3URY6TWQ91KVV
 ----
 <1> For ``, specify the Route53 base domain for your {product-title} cluster.
 +
+.Example output
+[source,terminal]
+----
+/hostedzone/Z3URY6TWQ91KVV
+----
++
 The public hosted zone ID for your domain is shown in the command output. In this example, it is `Z3URY6TWQ91KVV`.

 . Add the alias records to your private zone:
 +
+[source,terminal]
 ----
 $ aws route53 change-resource-record-sets --hosted-zone-id "" --change-batch '{ <1>
 >   "Changes": [
@@ -93,6 +117,7 @@ $ aws route53 change-resource-record-sets --hosted-zone-id """ --change-batch '{ <1>
 >   "Changes": [
diff --git a/modules/installation-creating-aws-bootstrap.adoc b/modules/installation-creating-aws-bootstrap.adoc
index c4abcf15a1..b85e4533d3 100644
--- a/modules/installation-creating-aws-bootstrap.adoc
+++ b/modules/installation-creating-aws-bootstrap.adoc
@@ -52,6 +52,7 @@ address that the bootstrap machine can reach.
 .. Create the bucket:
 +
+[source,terminal]
 ----
 $ aws s3 mb s3://-infra <1>
 ----
@@ -59,15 +60,21 @@ $ aws s3 mb s3://-infra <1>
 .. Upload the `bootstrap.ign` Ignition config file to the bucket:
 +
+[source,terminal]
 ----
 $ aws s3 cp bootstrap.ign s3://-infra/bootstrap.ign
 ----

 .. Verify that the file uploaded:
 +
+[source,terminal]
 ----
 $ aws s3 ls s3://-infra/
-
+----
++
+.Example output
+[source,terminal]
+----
 2019-04-03 16:15:16     314878 bootstrap.ign
 ----
@@ -176,6 +183,7 @@ describes the bootstrap machine that your cluster requires.
 You must enter the command on a single line.
 ====
 +
+[source,terminal]
 ----
 $ aws cloudformation create-stack --stack-name <1>
      --template-body file://