updating AWS installation codeblocks

committed by openshift-cherrypick-robot
parent e3c2a9028a
commit ae6851dbda
@@ -68,6 +68,7 @@ ifndef::openshift-origin[]
endif::[]
. Unpack the archive:
+
[source,terminal]
----
$ tar xvzf <file>
----
@@ -75,12 +76,14 @@ $ tar xvzf <file>
+
To check your `PATH`, execute the following command:
+
[source,terminal]
----
$ echo $PATH
----

After you install the CLI, it is available using the `oc` command:

[source,terminal]
----
$ oc <command>
----
@@ -106,12 +109,14 @@ endif::[]
+
To check your `PATH`, open the command prompt and execute the following command:
+
[source,terminal]
----
C:\> path
----

After you install the CLI, it is available using the `oc` command:

[source,terminal]
----
C:\> oc <command>
----
@@ -137,12 +142,14 @@ endif::[]
+
To check your `PATH`, open a terminal and execute the following command:
+
[source,terminal]
----
$ echo $PATH
----

After you install the CLI, it is available using the `oc` command:

[source,terminal]
----
$ oc <command>
----
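As a quick sanity check after installation, you can run any `oc` subcommand; a minimal sketch, assuming the binary is on your `PATH`, is to print the client version:

[source,terminal]
----
$ oc version
----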

@@ -49,6 +49,7 @@ The file is specific to a cluster and is created during {product-title} installa

. Export the `kubeadmin` credentials:
+
[source,terminal]
----
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig <1>
----
@@ -57,7 +58,13 @@ the installation files in.

. Verify you can run `oc` commands successfully using the exported configuration:
+
[source,terminal]
----
$ oc whoami
----
+
.Example output
[source,terminal]
----
system:admin
----

@@ -33,9 +33,14 @@ these CSRs are approved or, if necessary, approve them yourself.
. Confirm that the cluster recognizes the machines:
+
ifdef::ibm-z[]
[source,terminal]
----
# oc get nodes

----
+
.Example output
[source,terminal]
----
NAME                            STATUS   ROLES    AGE   VERSION
master-0.cl1mstr0.example.com   Ready    master   20h   v1.14.6+888f9c630
master-1.cl1mstr1.example.com   Ready    master   20h   v1.14.6+888f9c630
@@ -45,9 +50,14 @@ worker-1.cl1wrk01.example.com Ready worker 20h v1.14.6+888f9c630
----
endif::ibm-z[]
ifndef::ibm-z[]
[source,terminal]
----
$ oc get nodes

----
+
.Example output
[source,terminal]
----
NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   63m   v1.18.3
master-1   Ready    master   63m   v1.18.3
@@ -63,9 +73,14 @@ The output lists all of the machines that you created.
you see a client and server request with `Pending` or `Approved` status for
each machine that you added to the cluster:
+
[source,terminal]
----
$ oc get csr

----
+
.Example output
[source,terminal]
----
NAME        AGE   REQUESTOR                                                                    CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending <1>
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
@@ -96,6 +111,7 @@ of automatically approving the kubelet serving certificate requests.
** To approve them individually, run the following command for each valid
CSR:
+
[source,terminal]
----
$ oc adm certificate approve <csr_name> <1>
----
@@ -103,6 +119,7 @@ $ oc adm certificate approve <csr_name> <1>

** To approve all pending CSRs, run the following command:
+
[source,terminal]
----
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
----
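The go-template in the preceding command prints the name of every CSR that has no `status` field, that is, every pending request. To preview which requests would be approved before piping them to `oc adm certificate approve`, you can run the same filter on its own:

[source,terminal]
----
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}'
----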

@@ -25,6 +25,7 @@ you can install the cluster.
. Change to the directory that contains the installation program and run the
following command:
+
[source,terminal]
----
$ ./openshift-install wait-for bootstrap-complete --dir=<installation_directory> \ <1>
--log-level=info <2>

@@ -17,6 +17,7 @@ After you complete the initial Operator configuration for the cluster, remove th
. Delete the bootstrap resources. If you used the CloudFormation template,
link:https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html[delete its stack]:
+
[source,terminal]
----
$ aws cloudformation delete-stack --stack-name <name> <1>
----
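If you want the shell to block until the stack is actually gone, the AWS CLI waiter is one option (a sketch, assuming the same stack name):

[source,terminal]
----
$ aws cloudformation wait stack-delete-complete --stack-name <name>
----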

@@ -20,16 +20,27 @@ user-provisioned infrastructure, monitor the deployment to completion.

.Procedure

* Complete the cluster installation:
ifdef::restricted[]
. Complete
endif::restricted[]
ifndef::restricted[]
* Complete
endif::restricted[]
the cluster installation:
+
[source,terminal]
----
$ ./openshift-install --dir=<installation_directory> wait-for install-complete <1>

INFO Waiting up to 30m0s for the cluster to initialize...
----
<1> For `<installation_directory>`, specify the path to the directory that you
stored the installation files in.
+
.Example output
[source,terminal]
----
INFO Waiting up to 30m0s for the cluster to initialize...
----
+
[IMPORTANT]
====
The Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.

@@ -23,14 +23,21 @@ records per your requirements.
. Confirm the Ingress router has created a load balancer and populated the
`EXTERNAL-IP` field:
+
[source,terminal]
----
$ oc -n openshift-ingress get service router-default
----
+
.Example output
[source,terminal]
----
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
router-default   LoadBalancer   172.30.20.10   35.130.120.110   80:32288/TCP,443:31215/TCP   20
----

. Export the Ingress router IP as a variable:
+
[source,terminal]
----
$ export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`
----
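Before you create the DNS records, you can confirm that the variable actually captured an address; a minimal check, assuming a POSIX shell, is:

[source,terminal]
----
$ echo ${PUBLIC_IP_ROUTER}
----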
@@ -39,20 +46,29 @@ $ export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --

.. If you are adding this cluster to a new public zone, run:
+
[source,terminal]
----
$ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps -a ${PUBLIC_IP_ROUTER} --ttl 300
----

.. If you are adding this cluster to an already existing public zone, run:
+
[source,terminal]
----
$ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${BASE_DOMAIN} -n *.apps.${CLUSTER_NAME} -a ${PUBLIC_IP_ROUTER} --ttl 300
----

. Add a `*.apps` record to the private DNS zone:
.. Create a `*.apps` record by using the following command:
+
[source,terminal]
----
$ az network private-dns record-set a create -g ${RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps --ttl 300
----
.. Add the `*.apps` record to the private DNS zone by using the following command:
+
[source,terminal]
----
$ az network private-dns record-set a add-record -g ${RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps -a ${PUBLIC_IP_ROUTER}
----
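To confirm that the record now exists in the private zone, you can query it back; a sketch that reuses the same resource group and zone variables:

[source,terminal]
----
$ az network private-dns record-set a show -g ${RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps
----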

@@ -28,9 +28,14 @@ cluster on infrastructure that you provide.

. Confirm that all the cluster components are online:
+
[source,terminal]
----
$ watch -n5 oc get clusteroperators

----
+
.Example output
[source,terminal]
----
NAME               VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication     4.5.4     True        False         False      7m56s
cloud-credential   4.5.4     True        False         False      31m
@@ -68,13 +73,19 @@ When all of the cluster Operators are `AVAILABLE`, you can complete the installa

. Monitor for cluster completion:
+
[source,terminal]
----
$ ./openshift-install --dir=<installation_directory> wait-for install-complete <1>
INFO Waiting up to 30m0s for the cluster to initialize...
----
<1> For `<installation_directory>`, specify the path to the directory that you
stored the installation files in.
+
.Example output
[source,terminal]
----
INFO Waiting up to 30m0s for the cluster to initialize...
----
+
The command succeeds when the Cluster Version Operator finishes deploying the
{product-title} cluster from the Kubernetes API server.
+
@@ -86,9 +97,14 @@ The Ignition config files that the installation program generates contain certif
. Confirm that the Kubernetes API server is communicating with the Pods.
.. To view a list of all Pods, use the following command:
+
[source,terminal]
----
$ oc get pods --all-namespaces

----
+
.Example output
[source,terminal]
----
NAMESPACE                      NAME                                            READY   STATUS    RESTARTS   AGE
openshift-apiserver-operator   openshift-apiserver-operator-85cb746d55-zqhs8   1/1     Running   1          9m
openshift-apiserver            apiserver-67b9g                                 1/1     Running   0          3m
@@ -101,6 +117,7 @@ openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8
.. View the logs for a Pod that is listed in the output of the previous command
by using the following command:
+
[source,terminal]
----
$ oc logs <pod_name> -n <namespace> <1>
----
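When a Pod is failing, its events are often as useful as its logs; one additional check, using the same placeholders as above, is:

[source,terminal]
----
$ oc describe pod <pod_name> -n <namespace>
----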

@@ -23,8 +23,14 @@ link:https://docs.aws.amazon.com/cli/latest/userguide/install-bundle.html[Instal
** To create a wildcard record, use `*.apps.<cluster_name>.<domain_name>`, where `<cluster_name>` is your cluster name, and `<domain_name>` is the Route53 base domain for your {product-title} cluster.
** To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command:
+
[source,terminal]
----
$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes
----
+
.Example output
[source,terminal]
----
oauth-openshift.apps.<cluster_name>.<domain_name>
console-openshift-console.apps.<cluster_name>.<domain_name>
downloads-openshift-console.apps.<cluster_name>.<domain_name>
@@ -35,39 +41,57 @@ prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>

. Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the `EXTERNAL-IP` column:
+
[source,terminal]
----
$ oc -n openshift-ingress get service router-default
----
+
.Example output
[source,terminal]
----
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)                      AGE
router-default   LoadBalancer   172.30.62.215   ab3...28.us-east-2.elb.amazonaws.com   80:31499/TCP,443:30693/TCP   5m
----

. Locate the hosted zone ID for the load balancer:
+
[source,terminal]
----
$ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' <1>

Z3AADJGX6KTTL2
----
<1> For `<external_ip>`, specify the value of the external IP address of the Ingress Operator load balancer that you obtained.
+
.Example output
[source,terminal]
----
Z3AADJGX6KTTL2
----

+
The output of this command is the load balancer hosted zone ID.

. Obtain the public hosted zone ID for your cluster's domain:
+
[source,terminal]
----
$ aws route53 list-hosted-zones-by-name \
--dns-name "<domain_name>" \ <1>
--query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' <1>
--output text

/hostedzone/Z3URY6TWQ91KVV
----
<1> For `<domain_name>`, specify the Route53 base domain for your {product-title} cluster.
+
.Example output
[source,terminal]
----
/hostedzone/Z3URY6TWQ91KVV
----
+
The public hosted zone ID for your domain is shown in the command output. In this example, it is `Z3URY6TWQ91KVV`.
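If you plan to reuse this value in the record-set commands that follow, keeping it in a shell variable can help; a sketch with a hypothetical variable name and the example value above:

[source,terminal]
----
$ export PUBLIC_HOSTED_ZONE_ID=Z3URY6TWQ91KVV
----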

. Add the alias records to your private zone:
+
[source,terminal]
----
$ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ <1>
> "Changes": [
@@ -93,6 +117,7 @@ $ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone

. Add the records to your public zone:
+
[source,terminal]
----
$ aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{ <1>
> "Changes": [

@@ -52,6 +52,7 @@ address that the bootstrap machine can reach.

.. Create the bucket:
+
[source,terminal]
----
$ aws s3 mb s3://<cluster-name>-infra <1>
----
@@ -59,15 +60,21 @@ $ aws s3 mb s3://<cluster-name>-infra <1>

.. Upload the `bootstrap.ign` Ignition config file to the bucket:
+
[source,terminal]
----
$ aws s3 cp bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign
----

.. Verify that the file uploaded:
+
[source,terminal]
----
$ aws s3 ls s3://<cluster-name>-infra/

----
+
.Example output
[source,terminal]
----
2019-04-03 16:15:16     314878 bootstrap.ign
----
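As an additional spot check, you can request the object's metadata directly; a sketch that assumes the same bucket and key names:

[source,terminal]
----
$ aws s3api head-object --bucket <cluster-name>-infra --key bootstrap.ign
----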

@@ -176,6 +183,7 @@ describes the bootstrap machine that your cluster requires.
You must enter the command on a single line.
====
+
[source,terminal]
----
$ aws cloudformation create-stack --stack-name <name> <1>
--template-body file://<template>.yaml <2>
@@ -191,6 +199,7 @@ parameters JSON file.

. Confirm that the template components exist:
+
[source,terminal]
----
$ aws cloudformation describe-stacks --stack-name <name>
----
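If you only need the stack status rather than the full description, a narrower query (assuming the AWS CLI's JMESPath `--query` support) is:

[source,terminal]
----
$ aws cloudformation describe-stacks --stack-name <name> --query 'Stacks[].StackStatus' --output text
----
This prints a value such as `CREATE_COMPLETE` once the stack has finished.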

@@ -195,6 +195,7 @@ in the CloudFormation template.
You must enter the command on a single line.
====
+
[source,terminal]
----
$ aws cloudformation create-stack --stack-name <name> <1>
--template-body file://<template>.yaml <2>
@@ -209,6 +210,7 @@ parameters JSON file.

. Confirm that the template components exist:
+
[source,terminal]
----
$ aws cloudformation describe-stacks --stack-name <name>
----

@@ -38,6 +38,7 @@ AWS console or by running the following command:
You must enter the command on a single line.
====
+
[source,terminal]
----
$ aws route53 list-hosted-zones-by-name |
jq --arg name "<route53_domain>." \ <1>
@@ -117,6 +118,7 @@ describes the networking and load balancing objects that your cluster requires.
You must enter the command on a single line.
====
+
[source,terminal]
----
$ aws cloudformation create-stack --stack-name <name> <1>
--template-body file://<template>.yaml <2>
@@ -132,6 +134,7 @@ parameters JSON file.

. Confirm that the template components exist:
+
[source,terminal]
----
$ aws cloudformation describe-stacks --stack-name <name>
----

@@ -75,6 +75,7 @@ describes the security groups and roles that your cluster requires.
You must enter the command on a single line.
====
+
[source,terminal]
----
$ aws cloudformation create-stack --stack-name <name> <1>
--template-body file://<template>.yaml <2>
@@ -90,6 +91,7 @@ parameters JSON file.

. Confirm that the template components exist:
+
[source,terminal]
----
$ aws cloudformation describe-stacks --stack-name <name>
----

@@ -64,6 +64,7 @@ describes the VPC that your cluster requires.
You must enter the command on a single line.
====
+
[source,terminal]
----
$ aws cloudformation create-stack --stack-name <name> <1>
--template-body file://<template>.yaml <2>
@@ -78,6 +79,7 @@ parameters JSON file.

. Confirm that the template components exist:
+
[source,terminal]
----
$ aws cloudformation describe-stacks --stack-name <name>
----

@@ -145,6 +145,7 @@ in the CloudFormation template.
You must enter the command on a single line.
====
+
[source,terminal]
----
$ aws cloudformation create-stack --stack-name <name> <1>
--template-body file://<template>.yaml \ <2>
@@ -159,6 +160,7 @@ parameters JSON file.

.. Confirm that the template components exist:
+
[source,terminal]
----
$ aws cloudformation describe-stacks --stack-name <name>
----

@@ -72,13 +72,19 @@ endif::azure[]
* To extract and view the infrastructure name from the Ignition config file
metadata, run the following command:
+
[source,terminal]
----
$ jq -r .infraID <installation_directory>/metadata.json <1>
openshift-vw9j6 <2>
----
<1> For `<installation_directory>`, specify the path to the directory that you stored the
installation files in.
<2> The output of this command is your cluster name and a random string.
+
.Example output
[source,terminal]
----
openshift-vw9j6 <1>
----
<1> The output of this command is your cluster name and a random string.
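If you need the infrastructure name again in later commands, you can capture it in an environment variable, for example:

[source,terminal]
----
$ export INFRA_ID=$(jq -r .infraID <installation_directory>/metadata.json)
----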

ifeval::["{context}" == "installing-aws-user-infra"]
:!cp-first:

@@ -26,6 +26,7 @@ endif::restricted[]
. Obtain the `install-config.yaml` file.
.. Run the following command:
+
[source,terminal]
----
$ ./openshift-install create install-config --dir=<installation_directory> <1>
----

@@ -9,6 +9,7 @@ You can use any of the following actions to get debug information from the insta

* Look at debug messages from a past installation in the hidden `.openshift_install.log` file. For example, enter:
+
[source,terminal]
----
$ cat ~/<installation_directory>/.openshift_install.log <1>
----
@@ -16,6 +17,7 @@ $ cat ~/<installation_directory>/.openshift_install.log <1>

* Re-run the installation program with `--log-level=debug`:
+
[source,terminal]
----
$ ./openshift-install create cluster --dir=<installation_directory> --log-level=debug <1>
----
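To follow the same hidden log while an installation is still in progress, one simple option is:

[source,terminal]
----
$ tail -f ~/<installation_directory>/.openshift_install.log
----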

@@ -126,6 +126,7 @@ endif::vsphere[]
ifndef::rhv[]
.. Run the following command:
+
[source,terminal]
----
$ ./openshift-install create install-config --dir=<installation_directory> <1>
----

@@ -131,6 +131,7 @@ endif::gcp[]
. Run the installation program:
+
ifndef::rhv[]
[source,terminal]
----
$ ./openshift-install create cluster --dir=<installation_directory> \ <1>
--log-level=info <2>
@@ -146,6 +147,7 @@ endif::no-config[]
`error` instead of `info`.
endif::rhv[]
ifdef::rhv[]
[source,terminal]
----
$ sudo ./openshift-install create cluster --dir=<installation_directory> \ <1>
--log-level=info <2>

@@ -78,6 +78,7 @@ provider to remove your cluster entirely.
. Extract the installation program. For example, on a computer that uses a Linux
operating system, run the following command:
+
[source,terminal]
----
$ tar xvf <installation_program>.tar.gz
----

@@ -22,9 +22,14 @@ Operators so that they all become available.

. Watch the cluster components come online:
+
[source,terminal]
----
$ watch -n5 oc get clusteroperators

----
+
.Example output
[source,terminal]
----
NAME               VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication     4.5.4     True        False         False      69s
cloud-credential   4.5.4     True        False         False      12m

@@ -15,6 +15,7 @@ You can verify your {product-title} cluster's status during or after installatio

. In the cluster environment, export the administrator's kubeconfig file:
+
[source,terminal]
----
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig <1>
----
@@ -25,24 +26,28 @@ The `kubeconfig` file contains information about the cluster that is used by the

. View the control plane and compute machines created after a deployment:
+
[source,terminal]
----
$ oc get nodes
----

. View your cluster's version:
+
[source,terminal]
----
$ oc get clusterversion
----

. View your operators' status:
+
[source,terminal]
----
$ oc get clusteroperator
----

. View all running Pods in the cluster:
+
[source,terminal]
----
$ oc get pods -A
----

@@ -19,6 +19,7 @@ all images are lost if you restart the registry.

* To set the image registry storage to an empty directory:
+
[source,terminal]
----
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
----
@@ -31,6 +32,7 @@ Configure this option for only non-production clusters.
If you run this command before the Image Registry Operator initializes its
components, the `oc patch` command fails with the following error:
+
[source,terminal]
----
Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found
----
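One way to check whether the Operator has created this resource yet, before attempting the patch, is:

[source,terminal]
----
$ oc get configs.imageregistry.operator.openshift.io cluster
----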

@@ -13,6 +13,7 @@ This is required on most Dell systems. Check the manual for your computer.

. Generate the Kubernetes manifests for the cluster:
+
[source,terminal]
----
$ ./openshift-install create manifests --dir=<installation_directory>
----
@@ -20,6 +21,7 @@ $ ./openshift-install create manifests --dir=<installation_directory>
. In the `openshift` directory, create a master and/or worker file to encrypt
disks for those nodes. Here are examples of those two files:
+
[source,terminal]
----
$ cat << EOF > ./99-openshift-worker-tpmv2-encryption.yaml
apiVersion: machineconfiguration.openshift.io/v1
@@ -43,6 +45,7 @@ EOF
----

+
[source,terminal]
----
$ cat << EOF > ./99-openshift-master-tpmv2-encryption.yaml
apiVersion: machineconfiguration.openshift.io/v1

@@ -5,7 +5,7 @@
[id="installation-special-config-kargs_{context}"]

= Adding day-1 kernel arguments
Although it is often preferable to modify kernel arguments as a day-2 activity,
you might want to add kernel arguments to all master or worker nodes during initial cluster
installation. Here are some reasons you might want
to add kernel arguments during cluster installation so they take effect before
@@ -27,8 +27,9 @@ It is best to only add kernel arguments with this procedure if they are needed t

. Generate the Kubernetes manifests for the cluster:
+
[source,terminal]
----
$ ./openshift-install create manifests --dir=<installation_directory>
----

. Decide if you want to add kernel arguments to worker or master nodes.
@@ -38,6 +39,7 @@ $ ./openshift-install create manifests --dir=<installation_directory>
object to add the kernel settings.
This example adds a `loglevel=7` kernel argument to master nodes:
+
[source,terminal]
----
$ cat << EOF > 99-openshift-machineconfig-master-kargs.yaml
apiVersion: machineconfiguration.openshift.io/v1

@@ -35,12 +35,14 @@ This procedure is also supported for use with Google Cloud Platform.
. Create the `install-config.yaml` file using the installer or prepare it manually.
To create it by using the installer, run:
+
[source,terminal]
----
$ ./openshift-install create install-config --dir=<installation_directory>
----

. Generate the Kubernetes manifests for the cluster:
+
[source,terminal]
----
$ ./openshift-install create manifests --dir=<installation_directory>
----
@@ -51,6 +53,7 @@ $ ./openshift-install create manifests --dir=<installation_directory>
`99-worker-realtime.yaml`) to define a MachineConfig object that applies a
real-time kernel to the selected nodes (worker nodes in this case):
+
[source,terminal]
----
$ cat << EOF > 99-worker-realtime.yaml
apiVersion: machineconfiguration.openshift.io/v1
@@ -69,6 +72,7 @@ Create a separate YAML file to add to both master and worker nodes.

. Create the cluster. You can now continue on to create the {product-title} cluster.
+
[source,terminal]
----
$ ./openshift-install create cluster --dir=<installation_directory>
----
@@ -78,8 +82,14 @@ and run the following commands to make sure that the real-time kernel has
replaced the regular kernel for the set of worker or master nodes you
configured:
+
[source,terminal]
----
$ oc get nodes
----
+
.Example output
[source,terminal]
----
NAME                                         STATUS   ROLES    AGE    VERSION
ip-10-0-139-200.us-east-2.compute.internal   Ready    master   111m   v1.18.3
ip-10-0-143-147.us-east-2.compute.internal   Ready    worker   103m   v1.18.3
@@ -87,12 +97,28 @@ ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.18.3
ip-10-0-156-255.us-east-2.compute.internal   Ready    master   111m   v1.18.3
ip-10-0-164-74.us-east-2.compute.internal    Ready    master   111m   v1.18.3
ip-10-0-169-2.us-east-2.compute.internal     Ready    worker   102m   v1.18.3

----
+
[source,terminal]
----
$ oc debug node/ip-10-0-143-147.us-east-2.compute.internal
----
+
.Example output
[source,terminal]
----
Starting pod/ip-10-0-143-147us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`

----
+
[source,terminal]
----
sh-4.4# uname -a
----
+
.Example output
[source,terminal]
----
Linux <worker_node> 4.18.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT
Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
----

@@ -23,6 +23,7 @@ cluster.

. From the computer that you used to install the cluster, run the following command:
+
[source,terminal]
----
$ ./openshift-install destroy cluster \
--dir=<installation_directory> --log-level=info <1> <2>

@@ -49,6 +49,7 @@ $ export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster

. Export the kubeadmin credentials:
+
[source,terminal]
----
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig <1>
----

@@ -82,9 +82,14 @@ endif::restricted,baremetal-restricted[]

. Generate the Kubernetes manifests for the cluster:
+
[source,terminal]
----
$ ./openshift-install create manifests --dir=<installation_directory> <1>

----
+
.Example output
[source,terminal]
----
INFO Consuming Install Config from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
----
@@ -97,6 +102,7 @@ you can safely ignore this warning.
ifdef::aws,azure,gcp[]
. Remove the Kubernetes manifest files that define the control plane machines:
+
[source,terminal]
----
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml
----
@@ -115,6 +121,7 @@ ifdef::aws,azure,user-infra-vpc[]
endif::aws,azure,user-infra-vpc[]
ifdef::aws,azure,gcp[]
+
[source,terminal]
----
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml
----
@@ -126,6 +133,7 @@ endif::aws,azure,gcp[]
ifdef::osp,vsphere[]
. Remove the Kubernetes manifest files that define the control plane machines and compute machineSets:
+
[source,terminal]
----
$ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml
----
@@ -245,17 +253,26 @@ ifdef::azure-user-infra[]
. When configuring Azure on user-provisioned infrastructure, you must export
some common variables defined in the manifest files to use later in the Azure
Resource Manager (ARM) templates:
.. Export the infrastructure ID by using the following command:
+
[source,terminal]
----
$ export INFRA_ID=<infra_id><1>
$ export RESOURCE_GROUP=<resource_group><2>
$ export INFRA_ID=<infra_id> <1>
----
<1> The {product-title} cluster has been assigned an identifier (`INFRA_ID`) in the form of `<cluster_name>-<random_string>`. This will be used as the base name for most resources created using the provided ARM templates. This is the value of the `.status.infrastructureName` attribute from the `manifests/cluster-infrastructure-02-config.yml` file.
<2> All resources created in this Azure deployment exist as part of a link:https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview#resource-groups[resource group]. The resource group name is also based on the `INFRA_ID`, in the form of `<cluster_name>-<random_string>-rg`. This is the value of the `.status.platformStatus.azure.resourceGroupName` attribute from the `manifests/cluster-infrastructure-02-config.yml` file.

.. Export the resource group by using the following command:
+
[source,terminal]
----
$ export RESOURCE_GROUP=<resource_group> <1>
----
<1> All resources created in this Azure deployment exist as part of a link:https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview#resource-groups[resource group]. The resource group name is also based on the `INFRA_ID`, in the form of `<cluster_name>-<random_string>-rg`. This is the value of the `.status.platformStatus.azure.resourceGroupName` attribute from the `manifests/cluster-infrastructure-02-config.yml` file.
endif::azure-user-infra[]

. Obtain the Ignition config files:
+
[source,terminal]
----
$ ./openshift-install create ignition-configs --dir=<installation_directory> <1>
----
@@ -277,6 +294,7 @@ The following files are generated in the directory:
ifdef::osp[]
. Export the metadata file's `infraID` key as an environment variable:
+
[source,terminal]
----
$ export INFRA_ID=$(jq -r .infraID metadata.json)
----

@@ -30,13 +30,15 @@ The initial `kubeadmin` password can be found in `<install_directory>/auth/kubea
+
.. Verify the master node Ignition file URL. Replace `<http_server_fqdn>` with the HTTP server's fully qualified domain name:
+
[source,terminal]
----
curl -I http://<http_server_fqdn>:<port>/master.ign <1>
$ curl -I http://<http_server_fqdn>:<port>/master.ign <1>
----
<1> The `-I` option returns the header only. If the Ignition file is available on the specified URL, the command returns `200 OK` status. If it is not available, the command returns `404 file not found`.
+
.. To verify that the Ignition file was received by the master node, query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files:
+
[source,terminal]
----
$ grep -is 'master.ign' /var/log/httpd/access_log
----
@@ -56,12 +58,14 @@ If the master Ignition file is received, the associated `HTTP GET` log message w
. Determine master node status.
.. Query master node status:
+
[source,terminal]
----
$ oc get nodes
----
+
.. If one of the master nodes does not reach a `Ready` status, retrieve a detailed node description:
+
[source,terminal]
----
$ oc describe node <master_node>
----
@@ -75,18 +79,21 @@ It is not possible to run `oc` commands if an installation issue prevents the {p
+
.. Review `sdn-controller`, `sdn`, and `ovs` DaemonSet status, in the `openshift-sdn` namespace:
+
[source,terminal]
----
$ oc get daemonsets -n openshift-sdn
----
+
.. If those resources are listed as `Not found`, review Pods in the `openshift-sdn` namespace:
+
[source,terminal]
----
$ oc get pods -n openshift-sdn
----
+
.. Review logs relating to failed {product-title} SDN Pods in the `openshift-sdn` namespace:
+
[source,terminal]
----
$ oc logs <sdn_pod> -n openshift-sdn
----
@@ -94,24 +101,28 @@ $ oc logs <sdn_pod> -n openshift-sdn
. Determine cluster network configuration status.
.. Review whether the cluster's network configuration exists:
+
[source,terminal]
----
$ oc get network.config.openshift.io cluster -o yaml
----
+
.. If the installer failed to create the network configuration, generate the Kubernetes manifests again and review message output:
+
[source,terminal]
----
$ ./openshift-install create manifests
----
+
.. Review Pod status in the `openshift-network-operator` namespace to determine whether the network Operator is running:
+
[source,terminal]
----
$ oc get pods -n openshift-network-operator
----
+
.. Gather network Operator Pod logs from the `openshift-network-operator` namespace:
+
[source,terminal]
----
$ oc logs pod/<network_operator_pod_name> -n openshift-network-operator
----
@@ -119,12 +130,14 @@ $ oc logs pod/<network_operator_pod_name> -n openshift-network-operator
. Monitor `kubelet.service` journald unit logs on master nodes, after they have booted. This provides visibility into master node agent activity.
.. Retrieve the logs using `oc`:
+
[source,terminal]
----
$ oc adm node-logs --role=master -u kubelet
----
+
.. If the API is not functional, review the logs using SSH instead. Replace `<master-node>.<cluster_name>.<base_domain>` with appropriate values:
+
[source,terminal]
----
$ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service
----
@@ -137,12 +150,14 @@ $ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubele
. Retrieve `crio.service` journald unit logs on master nodes, after they have booted. This provides visibility into master node CRI-O container runtime activity.
.. Retrieve the logs using `oc`:
+
[source,terminal]
----
$ oc adm node-logs --role=master -u crio
----
+
.. If the API is not functional, review the logs using SSH instead:
+
[source,terminal]
----
$ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service
----
@@ -150,18 +165,21 @@ $ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.s
. Collect logs from specific subdirectories under `/var/log/` on master nodes.
.. Retrieve a list of logs contained within a `/var/log/` subdirectory. The following example lists files in `/var/log/openshift-apiserver/` on all master nodes:
+
[source,terminal]
----
$ oc adm node-logs --role=master --path=openshift-apiserver
----
+
.. Inspect a specific log within a `/var/log/` subdirectory. The following example outputs `/var/log/openshift-apiserver/audit.log` contents from all master nodes:
+
[source,terminal]
----
$ oc adm node-logs --role=master --path=openshift-apiserver/audit.log
----
+
.. If the API is not functional, review the logs on each node using SSH instead. The following example tails `/var/log/openshift-apiserver/audit.log`:
+
[source,terminal]
----
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log
----
@@ -169,12 +187,14 @@ $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/open
. Review master node container logs using SSH.
.. List the containers:
+
[source,terminal]
----
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps -a
----
+
.. Retrieve a container's logs using `crictl`:
+
[source,terminal]
----
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>
----
@@ -182,6 +202,7 @@ $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <conta
. If you experience master node configuration issues, verify that the MCO, MCO endpoint, and DNS record are functioning. The Machine Config Operator (MCO) manages operating system configuration during the installation procedure. Also verify system clock accuracy and certificate validity.
.. Test whether the MCO endpoint is available. Replace `<cluster_name>` with appropriate values:
+
[source,terminal]
----
$ curl https://api-int.<cluster_name>:22623/config/master
----
@@ -191,30 +212,35 @@ $ curl https://api-int.<cluster_name>:22623/config/master
.. Verify that the MCO endpoint's DNS record is configured and resolves to the load balancer.
... Run a DNS lookup for the defined MCO endpoint name:
+
[source,terminal]
----
$ dig api-int.<cluster_name> @<dns_server>
----
+
... Run a reverse lookup to the assigned MCO IP address on the load balancer:
+
[source,terminal]
----
$ dig -x <load_balancer_mco_ip_address> @<dns_server>
----
+
.. Verify that the MCO is functioning from the bootstrap node directly. Replace `<bootstrap_fqdn>` with the bootstrap node's fully qualified domain name:
+
[source,terminal]
----
$ ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/master
----
+
.. System clock time must be synchronized between bootstrap, master, and worker nodes. Check each node's system clock reference time and time synchronization statistics:
+
[source,terminal]
----
$ ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking
----
+
.. Review certificate validity:
+
[source,terminal]
----
$ openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text
----
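If you only need the certificate's validity window rather than the full text dump, a narrower variant of the same check is:

[source,terminal]
----
$ openssl s_client -connect api-int.<cluster_name>:22623 </dev/null 2>/dev/null | openssl x509 -noout -dates
----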

@@ -26,6 +26,7 @@ credential secret in the cluster `kube-system` namespace.

. Run the {product-title} installer to generate manifests:
+
[source,terminal]
----
$ openshift-install create manifests --dir=mycluster
----
@@ -33,6 +34,7 @@ $ openshift-install create manifests --dir=mycluster
. Insert a ConfigMap into the manifests directory so that the Cloud Credential
Operator is placed in manual mode:
+
[source,terminal]
----
$ cat <<EOF > mycluster/manifests/cco-configmap.yaml
apiVersion: v1
@@ -50,6 +52,7 @@ EOF
. Remove the `admin` credential secret created using your local cloud credentials.
This removal prevents your `admin` credential from being stored in the cluster:
+
[source,terminal]
----
$ rm mycluster/openshift/99_cloud-creds-secret.yaml
----
@@ -57,11 +60,13 @@ $ rm mycluster/openshift/99_cloud-creds-secret.yaml
. Obtain the {product-title} release image your `openshift-install` binary is built
to use:
+
[source,terminal]
----
$ bin/openshift-install version
----
+
.Example output
[source,terminal]
----
release image quay.io/openshift-release-dev/ocp-release:4.z.z-x86_64
----
@@ -69,12 +74,14 @@ release image quay.io/openshift-release-dev/ocp-release:4.z.z-x86_64
. Locate all `CredentialsRequests` in this release image that target the cloud you
are deploying on:
+
[source,terminal]
----
$ oc adm release extract quay.io/openshift-release-dev/ocp-release:4.z.z-x86_64 --to ./release-image
----

. Locate the `CredentialsRequests` in the extracted file:
+
[source,terminal]
----
$ grep -l "apiVersion: cloudcredential.openshift.io" * | xargs cat
----
@@ -168,6 +175,7 @@ secret data varies for each cloud provider.

. Proceed with cluster creation:
+
[source,terminal]
----
$ openshift-install create cluster --dir=mycluster
----

@@ -32,9 +32,14 @@ to abort incomplete multipart uploads that are one day old.
. Fill in the storage configuration in
`configs.imageregistry.operator.openshift.io/cluster`:
+
[source,terminal]
----
$ oc edit configs.imageregistry.operator.openshift.io/cluster

----
+
.Example configuration
[source,terminal]
----
storage:
  s3:
    bucket: <bucket-name>

@@ -22,9 +22,14 @@ cloud credentials.

. Fill in the storage configuration in `configs.imageregistry.operator.openshift.io/cluster`:
+
[source,terminal]
----
$ oc edit configs.imageregistry.operator.openshift.io/cluster

----
+
.Example configuration
[source,terminal]
----
storage:
  azure:
    accountName: <storage-account-name>

@@ -104,6 +104,7 @@ on your computer, create one.
For example, on a computer that uses a Linux operating system, run the
following command:
+
[source,terminal]
----
$ ssh-keygen -t rsa -b 4096 -N '' \
-f <path>/<file_name> <1>
@@ -120,17 +121,27 @@ If you create a new SSH key pair, avoid overwriting existing SSH keys.
+
. Start the `ssh-agent` process as a background task:
+
[source,terminal]
----
$ eval "$(ssh-agent -s)"

----
+
.Example output
[source,terminal]
----
Agent pid 31874
----

. Add your SSH private key to the `ssh-agent`:
+
[source,terminal]
----
$ ssh-add <path>/<file_name> <1>

----
+
.Example output
[source,terminal]
----
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
----
<1> Specify the path and file name for your SSH private key, such as `~/.ssh/id_rsa`.
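To confirm that the key is now loaded in the agent, you can list the cached identities:

[source,terminal]
----
$ ssh-add -l
----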