diff --git a/modules/configuring-vsphere-host-groups.adoc b/modules/configuring-vsphere-host-groups.adoc index dcbd85e565..01a389faac 100644 --- a/modules/configuring-vsphere-host-groups.adoc +++ b/modules/configuring-vsphere-host-groups.adoc @@ -35,7 +35,6 @@ To enable host group support, you must define multiple failure domains for your ==== If you specify different names for the `openshift-region` and `openshift-zone` vCenter tag categories, the installation of the {product-title} cluster fails. ==== - + [source,terminal] ---- @@ -76,7 +75,7 @@ $ govc tags.attach -c //datastore/" resourcePool: "//host//Resources/" folder: "//vm/" ----- \ No newline at end of file +---- diff --git a/modules/configuring-vsphere-regions-zones.adoc b/modules/configuring-vsphere-regions-zones.adoc index a12937245e..88da37772c 100644 --- a/modules/configuring-vsphere-regions-zones.adoc +++ b/modules/configuring-vsphere-regions-zones.adoc @@ -21,6 +21,7 @@ The default `install-config.yaml` file configuration from the previous release o ==== You must specify at least one failure domain for your {product-title} cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your {product-title} cluster. ==== ++ * You have installed the `govc` command line tool. + [IMPORTANT] @@ -76,30 +77,30 @@ $ govc tags.attach -c //host/" - "" ---- +# ... controlPlane: ---- +# ... vsphere: zones: - "" - "" ---- +# ... platform: vsphere: vcenters: ---- +# ... datacenters: - - @@ -128,5 +129,5 @@ platform: datastore: "//datastore/" resourcePool: "//host//Resources/" folder: "//vm/" ---- +# ... ---- diff --git a/modules/dynamic-plug-in-development.adoc b/modules/dynamic-plug-in-development.adoc index b9c34d7f30..d1b18333d6 100644 --- a/modules/dynamic-plug-in-development.adoc +++ b/modules/dynamic-plug-in-development.adoc @@ -10,6 +10,7 @@ You can run the plugin using a local development environment. The {product-title} web console runs in a container connected to the cluster you have logged into. .Prerequisites + * You must have cloned the link:https://github.com/openshift/console-plugin-template[`console-plugin-template`] repository, which contains a template for creating plugins. + [IMPORTANT] @@ -41,7 +42,6 @@ $ yarn install ---- . After installing, run the following command to start yarn. - + [source,terminal] ---- @@ -69,11 +69,24 @@ The `yarn run start-console` command runs an `amd64` image and might fail when r [source,terminal] ---- $ podman machine ssh +---- + +[source,terminal] +---- $ sudo -i +---- + +[source,terminal] +---- $ rpm-ostree install qemu-user-static +---- + +[source,terminal] +---- $ systemctl reboot ---- ==== .Verification -* Visit link:http://localhost:9000/example[localhost:9000] to view the running plugin. Inspect the value of `window.SERVER_FLAGS.consolePlugins` to see the list of plugins which load at runtime. \ No newline at end of file + +* Visit link:http://localhost:9000/example[localhost:9000] to view the running plugin. Inspect the value of `window.SERVER_FLAGS.consolePlugins` to see the list of plugins which load at runtime. 
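Before inspecting `window.SERVER_FLAGS.consolePlugins` in the browser, a quick terminal check can confirm that the locally running console is serving the plugin route. This is a minimal sketch only, assuming the default port `9000` used in the verification step above:

[source,terminal]
----
# Check that the local console responds on the route used in the verification step;
# an HTTP 200 status line indicates the console container is up and routing /example.
$ curl -sI http://localhost:9000/example | head -n 1
----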
diff --git a/modules/gitops-default-permissions-of-an-argocd-instance.adoc b/modules/gitops-default-permissions-of-an-argocd-instance.adoc index 3a1c39a6b3..0194427089 100644 --- a/modules/gitops-default-permissions-of-an-argocd-instance.adoc +++ b/modules/gitops-default-permissions-of-an-argocd-instance.adoc @@ -9,34 +9,38 @@ By default Argo CD instance has the following permissions: -* Argo CD instance has the `admin` privileges to manage resources only in the namespace where it is deployed. For instance, an Argo CD instance deployed in the **foo** namespace has the `admin` privileges to manage resources only for that namespace. +* Argo CD instance has the `admin` privileges to manage resources only in the namespace where it is deployed. For instance, an Argo CD instance deployed in the *foo* namespace has the `admin` privileges to manage resources only for that namespace. * Argo CD has the following cluster-scoped permissions because Argo CD requires cluster-wide `read` privileges on resources to function appropriately: + -[source,yaml] +[source,yaml,subs="attributes+"] ---- - verbs: - get - list - watch apiGroups: - - '*' + - /'*' resources: - - '*' + - /'*' - verbs: - get - list nonResourceURLs: - - '*' + - /'*' ---- [NOTE] ==== * You can edit the cluster roles used by the `argocd-server` and `argocd-application-controller` components where Argo CD is running such that the `write` privileges are limited to only the namespaces and resources that you wish Argo CD to manage. -+ + [source,terminal] ---- $ oc edit clusterrole argocd-server +---- + +[source,terminal] +---- $ oc edit clusterrole argocd-application-controller ---- ==== diff --git a/modules/hosted-cluster-etcd-backup-restore-on-premise.adoc b/modules/hosted-cluster-etcd-backup-restore-on-premise.adoc index e75d125679..2e00e477b0 100644 --- a/modules/hosted-cluster-etcd-backup-restore-on-premise.adoc +++ b/modules/hosted-cluster-etcd-backup-restore-on-premise.adoc @@ -18,7 +18,7 @@ include::snippets/technology-preview.adoc[] .Procedure . First, set up your environment variables: - ++ .. Set up environment variables for your hosted cluster by entering the following commands, replacing values as necessary: + [source,terminal] @@ -35,7 +35,7 @@ $ HOSTED_CLUSTER_NAMESPACE=clusters ---- $ CONTROL_PLANE_NAMESPACE="${HOSTED_CLUSTER_NAMESPACE}-${CLUSTER_NAME}" ---- - ++ .. Pause reconciliation of the hosted cluster by entering the following command, replacing values as necessary: + [source,terminal] @@ -45,17 +45,18 @@ $ oc patch -n ${HOSTED_CLUSTER_NAMESPACE} hostedclusters/${CLUSTER_NAME} \ ---- . Next, take a snapshot of etcd by using one of the following methods: - ++ .. Use a previously backed-up snapshot of etcd. ++ .. If you have an available etcd pod, take a snapshot from the active etcd pod by completing the following steps: - ++ ... List etcd pods by entering the following command: + [source,terminal] ---- $ oc get -n ${CONTROL_PLANE_NAMESPACE} pods -l app=etcd ---- - ++ ... Take a snapshot of the pod database and save it locally to your machine by entering the following commands: + [source,terminal] @@ -73,7 +74,7 @@ $ oc exec -n ${CONTROL_PLANE_NAMESPACE} -c etcd -t ${ETCD_POD} -- \ --endpoints=https://localhost:2379 \ snapshot save /var/lib/snapshot.db ---- - ++ ... 
Verify that the snapshot is successful by entering the following command: + [source,terminal] @@ -82,7 +83,7 @@ $ oc exec -n ${CONTROL_PLANE_NAMESPACE} -c etcd -t ${ETCD_POD} -- \ env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status \ /var/lib/snapshot.db ---- - ++ .. Make a local copy of the snapshot by entering the following command: + [source,terminal] @@ -90,7 +91,7 @@ $ oc exec -n ${CONTROL_PLANE_NAMESPACE} -c etcd -t ${ETCD_POD} -- \ $ oc cp -c etcd ${CONTROL_PLANE_NAMESPACE}/${ETCD_POD}:/var/lib/snapshot.db \ /tmp/etcd.snapshot.db ---- - ++ ... Make a copy of the snapshot database from etcd persistent storage: + .... List etcd pods by entering the following command: @@ -99,7 +100,7 @@ $ oc cp -c etcd ${CONTROL_PLANE_NAMESPACE}/${ETCD_POD}:/var/lib/snapshot.db \ ---- $ oc get -n ${CONTROL_PLANE_NAMESPACE} pods -l app=etcd ---- - ++ .... Find a pod that is running and set its name as the value of `ETCD_POD: ETCD_POD=etcd-0`, and then copy its snapshot database by entering the following command: + [source,terminal] @@ -115,16 +116,16 @@ $ oc cp -c etcd \ ---- $ oc scale -n ${CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=0 ---- - ++ .. Delete volumes for second and third members by entering the following command: + [source,terminal] ---- $ oc delete -n ${CONTROL_PLANE_NAMESPACE} pvc/data-etcd-1 pvc/data-etcd-2 ---- - ++ .. Create a pod to access the first etcd member's data: - ++ ... Get the etcd image by entering the following command: + [source,terminal] @@ -135,7 +136,7 @@ $ ETCD_IMAGE=$(oc get -n ${CONTROL_PLANE_NAMESPACE} statefulset/etcd \ + ... Create a pod that allows access to etcd data: + -[source,yaml] +[source,yaml,subs="attributes+"] ---- $ cat << EOF | oc apply -n ${CONTROL_PLANE_NAMESPACE} -f - apiVersion: apps/v1 @@ -170,16 +171,16 @@ spec: - name: data persistentVolumeClaim: claimName: data-etcd-0 -EOF + EOF ---- - ++ ... Check the status of the `etcd-data` pod and wait for it to be running by entering the following command: + [source,terminal] ---- $ oc get -n ${CONTROL_PLANE_NAMESPACE} pods -l app=etcd-data ---- - ++ ... Get the name of the `etcd-data` pod by entering the following command: + [source,terminal] @@ -187,7 +188,7 @@ $ oc get -n ${CONTROL_PLANE_NAMESPACE} pods -l app=etcd-data $ DATA_POD=$(oc get -n ${CONTROL_PLANE_NAMESPACE} pods --no-headers \ -l app=etcd-data -o name | cut -d/ -f2) ---- - ++ .. Copy an etcd snapshot into the pod by entering the following command: + [source,terminal] @@ -195,7 +196,7 @@ $ DATA_POD=$(oc get -n ${CONTROL_PLANE_NAMESPACE} pods --no-headers \ $ oc cp /tmp/etcd.snapshot.db \ ${CONTROL_PLANE_NAMESPACE}/${DATA_POD}:/var/lib/restored.snap.db ---- - ++ .. Remove old data from the `etcd-data` pod by entering the following commands: + [source,terminal] @@ -207,7 +208,7 @@ $ oc exec -n ${CONTROL_PLANE_NAMESPACE} ${DATA_POD} -- rm -rf /var/lib/data ---- $ oc exec -n ${CONTROL_PLANE_NAMESPACE} ${DATA_POD} -- mkdir -p /var/lib/data ---- - ++ .. Restore the etcd snapshot by entering the following command: + [source,terminal] @@ -220,7 +221,7 @@ $ oc exec -n ${CONTROL_PLANE_NAMESPACE} ${DATA_POD} -- \ --initial-cluster etcd-0=https://etcd-0.etcd-discovery.${CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-1=https://etcd-1.etcd-discovery.${CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-2=https://etcd-2.etcd-discovery.${CONTROL_PLANE_NAMESPACE}.svc:2380 \ --initial-advertise-peer-urls https://etcd-0.etcd-discovery.${CONTROL_PLANE_NAMESPACE}.svc:2380 ---- - ++ .. 
Remove the temporary etcd snapshot from the pod by entering the following command: + [source,terminal] @@ -228,21 +229,21 @@ $ oc exec -n ${CONTROL_PLANE_NAMESPACE} ${DATA_POD} -- \ $ oc exec -n ${CONTROL_PLANE_NAMESPACE} ${DATA_POD} -- \ rm /var/lib/restored.snap.db ---- - ++ .. Delete data access deployment by entering the following command: + [source,terminal] ---- $ oc delete -n ${CONTROL_PLANE_NAMESPACE} deployment/etcd-data ---- - ++ .. Scale up the etcd cluster by entering the following command: + [source,terminal] ---- $ oc scale -n ${CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=3 ---- - ++ .. Wait for the etcd member pods to return and report as available by entering the following command: + [source,terminal] @@ -291,4 +292,4 @@ $ oc annotate hostedcluster -n \ --overwrite ---- + -After a few minutes, the control plane pods start running. \ No newline at end of file +After a few minutes, the control plane pods start running. diff --git a/modules/ibi-generate-seed-image.adoc b/modules/ibi-generate-seed-image.adoc index f574632f50..6698e65b93 100644 --- a/modules/ibi-generate-seed-image.adoc +++ b/modules/ibi-generate-seed-image.adoc @@ -35,37 +35,42 @@ To avoid including any {rh-rhacm-first} resources in your seed image, you need t .Procedure . If the cluster is a managed cluster, detach the cluster from the hub to delete any cluster-specific resources from the seed cluster that must not be in the seed image: - ++ .. Manually detach the seed cluster by running the following command: + [source,terminal] ---- $ oc delete managedcluster sno-worker-example ---- - ++ ... Wait until the `ManagedCluster` CR is removed. After the CR is removed, create the proper `SeedGenerator` CR. The {lcao} cleans up the {rh-rhacm} artifacts. . Create the `Secret` object so that you can push the seed image to your registry. - ++ .. Create the authentication file by running the following command: + --- [source,terminal] ---- $ MY_USER=myuserid +---- ++ +[source,terminal] +---- $ AUTHFILE=/tmp/my-auth.json +---- ++ +[source,terminal] +---- $ podman login --authfile ${AUTHFILE} -u ${MY_USER} quay.io/${MY_USER} ---- - ++ [source,terminal] ---- $ base64 -w 0 ${AUTHFILE} ; echo ---- --- - ++ .. Copy the output into the `seedAuth` field in the `Secret` YAML file named `seedgen` in the `openshift-lifecycle-agent` namespace: + --- [source,yaml] ---- apiVersion: v1 @@ -79,8 +84,7 @@ data: ---- <1> The `Secret` resource must have the `name: seedgen` and `namespace: openshift-lifecycle-agent` fields. <2> Specifies a base64-encoded authfile for write-access to the registry for pushing the generated seed images. --- - ++ .. Apply the `Secret` by running the following command: + [source,terminal] @@ -90,7 +94,6 @@ $ oc apply -f secretseedgenerator.yaml . Create the `SeedGenerator` CR: + --- [source,yaml] ---- apiVersion: lca.openshift.io/v1 @@ -102,7 +105,7 @@ spec: ---- <1> The `SeedGenerator` CR must be named `seedimage`. <2> Specify the container image URL, for example, `quay.io/example/seed-container-image:`. It is recommended to use the `:` format. --- + . Generate the seed image by running the following command: + @@ -110,7 +113,6 @@ spec: ---- $ oc apply -f seedgenerator.yaml ---- - + [IMPORTANT] ==== @@ -121,14 +123,13 @@ If you want to generate further seed images, you must provision a new seed clust .Verification -. 
After the cluster recovers and it is available, you can check the status of the `SeedGenerator` CR: +* After the cluster recovers and it is available, you can check the status of the `SeedGenerator` CR: + --- [source,terminal] ---- $ oc get seedgenerator -oyaml ---- - ++ .Example output [source,yaml] ---- @@ -149,4 +150,4 @@ status: observedGeneration: 1 ---- <1> The seed image generation is complete. --- + diff --git a/modules/installation-approve-csrs.adoc b/modules/installation-approve-csrs.adoc index 3019f60e9d..077ae8cccf 100644 --- a/modules/installation-approve-csrs.adoc +++ b/modules/installation-approve-csrs.adoc @@ -122,7 +122,7 @@ Because the CSRs rotate automatically, approve your CSRs within an hour of addin ==== For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the `oc exec`, `oc rsh`, and `oc logs` commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the `node-bootstrapper` service account in the `system:node` or `system:admin` groups, and confirm the identity of the node. ==== - ++ ** To approve them individually, run the following command for each valid CSR: + [source,terminal] @@ -130,7 +130,7 @@ For clusters running on platforms that are not machine API enabled, such as bare $ oc adm certificate approve <1> ---- <1> `` is the name of a CSR from the list of current CSRs. - ++ ** To approve all pending CSRs, run the following command: + [source,terminal] @@ -160,7 +160,7 @@ csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal ---- . If the remaining CSRs are not approved, and are in the `Pending` status, approve the CSRs for your cluster machines: - ++ ** To approve them individually, run the following command for each valid CSR: + [source,terminal] @@ -168,7 +168,7 @@ csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal $ oc adm certificate approve <1> ---- <1> `` is the name of a CSR from the list of current CSRs. - ++ ** To approve all pending CSRs, run the following command: + [source,terminal] @@ -178,28 +178,35 @@ $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name} . After all client and server CSRs have been approved, the machines have the `Ready` status. 
Verify this by running the following command: + +ifndef::ibm-power[] [source,terminal] ---- -ifndef::ibm-power[] $ oc get nodes +---- endif::ibm-power[] ifdef::ibm-power[] -$ oc get nodes -o wide -endif::ibm-power[] +[source,terminal] ---- +$ oc get nodes -o wide +---- +endif::ibm-power[] + +ifndef::ibm-power[] .Example output [source,terminal] ---- -ifndef::ibm-power[] NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.33.4 master-1 Ready master 73m v1.33.4 master-2 Ready master 74m v1.33.4 worker-0 Ready worker 11m v1.33.4 worker-1 Ready worker 11m v1.33.4 +---- endif::ibm-power[] ifdef::ibm-power[] +.Example output +[source,terminal] +---- NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME worker-0-ppc64le Ready worker 42d v1.33.4 192.168.200.21 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.33.4-3.rhaos4.15.gitb36169e.el9 worker-1-ppc64le Ready worker 42d v1.33.4 192.168.200.20 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.33.4-3.rhaos4.15.gitb36169e.el9 @@ -208,8 +215,8 @@ master-1-x86 Ready control-plane,master 75d v1.33.4 10.248.0.39 master-2-x86 Ready control-plane,master 75d v1.33.4 10.248.0.40 10.248.0.40 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.33.4-3.rhaos4.15.gitb36169e.el9 worker-0-x86 Ready worker 75d v1.33.4 10.248.0.43 10.248.0.43 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.33.4-3.rhaos4.15.gitb36169e.el9 worker-1-x86 Ready worker 75d v1.33.4 10.248.0.44 10.248.0.44 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.33.4-3.rhaos4.15.gitb36169e.el9 -endif::ibm-power[] ---- +endif::ibm-power[] + [NOTE] ==== diff --git a/modules/installation-azure-preparing-diskencryptionsets.adoc b/modules/installation-azure-preparing-diskencryptionsets.adoc index 95ebd9d93e..950adb3be6 100644 --- a/modules/installation-azure-preparing-diskencryptionsets.adoc +++ b/modules/installation-azure-preparing-diskencryptionsets.adoc @@ -5,6 +5,7 @@ :_mod-docs-content-type: PROCEDURE [id="preparing-disk-encryption-sets_{context}"] = Preparing an {azure-short} Disk Encryption Set + The {product-title} installer can use an existing Disk Encryption Set with a user-managed key. To enable this feature, you can create a Disk Encryption Set in {azure-short} and provide the key to the installer. .Procedure @@ -18,7 +19,7 @@ $ export RESOURCEGROUP="" \// <1> ---- <1> Specifies the name of the {azure-short} resource group where the Disk Encryption Set and encryption key are to be created. To prevent losing access to your keys when you destroy the cluster, create the Disk Encryption Set in a separate resource group from the one where you install the cluster. <2> Specifies the {azure-short} location where the resource group is to be created. -+ + . Set the environment variables for the {azure-short} Key Vault and Disk Encryption Set by running the following command: + [source,terminal] @@ -30,7 +31,7 @@ $ export KEYVAULT_NAME="" \// <1> <1> Specifies the name of the {azure-short} Key Vault to be created. <2> Specifies the name of the encryption key to be created. <3> Specifies the name of the disk encryption set to be created. -+ + . 
Set the environment variable for the ID of your {azure-short} service principal by running the following command: + [source,terminal] @@ -38,7 +39,7 @@ $ export KEYVAULT_NAME="" \// <1> $ export CLUSTER_SP_ID="" <1> ---- <1> Specifies the ID of the service principal to be used for installation. -+ + . Enable host-level encryption in {azure-short} by running the following command: + [source,terminal] @@ -55,14 +56,14 @@ $ az feature show --namespace Microsoft.Compute --name EncryptionAtHost ---- $ az provider register -n Microsoft.Compute ---- -+ + . Create an {azure-short} resource group to hold the disk encryption set and associated resources by running the following command: + [source,terminal] ---- $ az group create --name $RESOURCEGROUP --location $LOCATION ---- -+ + . Create an {azure-short} Key Vault by running the following command: + [source,terminal] @@ -70,7 +71,7 @@ $ az group create --name $RESOURCEGROUP --location $LOCATION $ az keyvault create -n $KEYVAULT_NAME -g $RESOURCEGROUP -l $LOCATION \ --enable-purge-protection true ---- -+ + . Create an encryption key in the key vault by running the following command: + [source,terminal] @@ -78,14 +79,14 @@ $ az keyvault create -n $KEYVAULT_NAME -g $RESOURCEGROUP -l $LOCATION \ $ az keyvault key create --vault-name $KEYVAULT_NAME -n $KEYVAULT_KEY_NAME \ --protection software ---- -+ + . Capture the ID of the key vault by running the following command: + [source,terminal] ---- $ KEYVAULT_ID=$(az keyvault show --name $KEYVAULT_NAME --query "[id]" -o tsv) ---- -+ + . Capture the key URL in the key vault by running the following command: + [source,terminal] @@ -93,7 +94,7 @@ $ KEYVAULT_ID=$(az keyvault show --name $KEYVAULT_NAME --query "[id]" -o tsv) $ KEYVAULT_KEY_URL=$(az keyvault key show --vault-name $KEYVAULT_NAME --name \ $KEYVAULT_KEY_NAME --query "[key.kid]" -o tsv) ---- -+ + . Create a disk encryption set by running the following command: + [source,terminal] @@ -101,7 +102,7 @@ $ KEYVAULT_KEY_URL=$(az keyvault key show --vault-name $KEYVAULT_NAME --name \ $ az disk-encryption-set create -n $DISK_ENCRYPTION_SET_NAME -l $LOCATION -g \ $RESOURCEGROUP --source-vault $KEYVAULT_ID --key-url $KEYVAULT_KEY_URL ---- -+ + . Grant the `DiskEncryptionSet` resource access to the key vault by running the following commands: + [source,terminal] @@ -115,7 +116,7 @@ $ DES_IDENTITY=$(az disk-encryption-set show -n $DISK_ENCRYPTION_SET_NAME -g \ $ az keyvault set-policy -n $KEYVAULT_NAME -g $RESOURCEGROUP --object-id \ $DES_IDENTITY --key-permissions wrapkey unwrapkey get ---- -+ + . Grant the {azure-short} service principal permission to read the Disk Encryption Set by running the following commands: + [source,terminal] diff --git a/modules/installation-azure-user-infra-wait-for-bootstrap.adoc b/modules/installation-azure-user-infra-wait-for-bootstrap.adoc index bc9b58ef60..b5b34e3f34 100644 --- a/modules/installation-azure-user-infra-wait-for-bootstrap.adoc +++ b/modules/installation-azure-user-infra-wait-for-bootstrap.adoc @@ -52,15 +52,43 @@ has initialized. 
[source,terminal] ---- $ az network nsg rule delete -g ${RESOURCE_GROUP} --nsg-name ${INFRA_ID}-nsg --name bootstrap_ssh_in +---- ++ +[source,terminal] +---- $ az vm stop -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap +---- ++ +[source,terminal] +---- $ az vm deallocate -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap +---- ++ +[source,terminal] +---- $ az vm delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap --yes +---- ++ +[source,terminal] +---- $ az disk delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap_OSDisk --no-wait --yes +---- ++ +[source,terminal] +---- $ az network nic delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap-nic --no-wait +---- ++ +[source,terminal] +---- $ az storage blob delete --account-key ${ACCOUNT_KEY} --account-name ${CLUSTER_NAME}sa --container-name files --name bootstrap.ign +---- ++ +[source,terminal] +---- $ az network public-ip delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap-ssh-pip ---- - ++ [NOTE] ==== If you do not delete the bootstrap server, installation may not succeed due to API traffic being routed to the bootstrap server. diff --git a/modules/installation-creating-gcp-iam-shared-vpc.adoc b/modules/installation-creating-gcp-iam-shared-vpc.adoc index 6012a25df5..857e68b3af 100644 --- a/modules/installation-creating-gcp-iam-shared-vpc.adoc +++ b/modules/installation-creating-gcp-iam-shared-vpc.adoc @@ -17,10 +17,7 @@ to modify the provided Deployment Manager template. [NOTE] ==== -If you do not use the provided Deployment Manager template to create your {gcp-short} -infrastructure, you must review the provided information and manually create -the infrastructure. If your cluster does not initialize correctly, you might -have to contact Red Hat support with your installation logs. +If you do not use the provided Deployment Manager template to create your {gcp-short} infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. ==== .Prerequisites @@ -29,10 +26,7 @@ have to contact Red Hat support with your installation logs. .Procedure -. Copy the template from the -*Deployment Manager template for IAM roles* -section of this topic and save it as `03_iam.py` on your computer. This -template describes the IAM roles that your cluster requires. +. Copy the template from the *Deployment Manager template for IAM roles* section of this topic and save it as `03_iam.py` on your computer. This template describes the IAM roles that your cluster requires. . Create a `03_iam.yaml` resource definition file: + @@ -82,35 +76,35 @@ endif::shared-vpc[] ifdef::shared-vpc[] . Assign the permissions that the installation program requires to the service accounts for the subnets that host the control plane and compute subnets: - ++ .. Grant the `networkViewer` role of the project that hosts your shared VPC to the master service account: + [source,terminal] ---- $ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} projects add-iam-policy-binding ${HOST_PROJECT} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkViewer" ---- - ++ .. 
Grant the `networkUser` role to the master service account for the control plane subnet: + [source,terminal] ---- $ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} compute networks subnets add-iam-policy-binding "${HOST_PROJECT_CONTROL_SUBNET}" --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region ${REGION} ---- - ++ .. Grant the `networkUser` role to the worker service account for the control plane subnet: + [source,terminal] ---- $ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} compute networks subnets add-iam-policy-binding "${HOST_PROJECT_CONTROL_SUBNET}" --member "serviceAccount:${WORKER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region ${REGION} ---- - ++ .. Grant the `networkUser` role to the master service account for the compute subnet: + [source,terminal] ---- $ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} compute networks subnets add-iam-policy-binding "${HOST_PROJECT_COMPUTE_SUBNET}" --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region ${REGION} ---- - ++ .. Grant the `networkUser` role to the worker service account for the compute subnet: + [source,terminal] @@ -125,12 +119,35 @@ Manager, so you must create them manually: [source,terminal] ---- $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.instanceAdmin" +---- ++ +[source,terminal] +---- $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkAdmin" +---- ++ +[source,terminal] +---- $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.securityAdmin" +---- ++ +[source,terminal] +---- $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/iam.serviceAccountUser" +---- ++ +[source,terminal] +---- $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/storage.admin" - +---- ++ +[source,terminal] +---- $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SERVICE_ACCOUNT}" --role "roles/compute.viewer" +---- ++ +[source,terminal] +---- $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SERVICE_ACCOUNT}" --role "roles/storage.admin" ---- diff --git a/modules/installation-creating-gcp-private-dns.adoc b/modules/installation-creating-gcp-private-dns.adoc index 8958caf68e..db247b8489 100644 --- a/modules/installation-creating-gcp-private-dns.adoc +++ b/modules/installation-creating-gcp-private-dns.adoc @@ -72,16 +72,32 @@ endif::shared-vpc[] . The templates do not create DNS entries due to limitations of Deployment Manager, so you must create them manually: - ++ .. Add the internal DNS entries: + ifdef::shared-vpc[] [source,terminal] ---- $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. 
--ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api-int.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} ---- endif::shared-vpc[] @@ -89,21 +105,49 @@ ifndef::shared-vpc[] [source,terminal] ---- $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api-int.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone ---- endif::shared-vpc[] - ++ .. For an external cluster, also add the external DNS entries: + ifdef::shared-vpc[] [source,terminal] ---- $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi +---- ++ +[source,terminal] +---- $ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME} +---- ++ +[source,terminal] +---- $ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} dns record-sets transaction add ${CLUSTER_PUBLIC_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${BASE_DOMAIN_ZONE_NAME} +---- ++ +[source,terminal] +---- $ gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME} ---- endif::shared-vpc[] @@ -111,8 +155,20 @@ ifndef::shared-vpc[] [source,terminal] ---- $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME} +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction add ${CLUSTER_PUBLIC_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${BASE_DOMAIN_ZONE_NAME} +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME} ---- endif::shared-vpc[] diff --git a/modules/installation-creating-gcp-worker.adoc b/modules/installation-creating-gcp-worker.adoc index 44f9f0edc0..0821aac171 100644 --- a/modules/installation-creating-gcp-worker.adoc +++ b/modules/installation-creating-gcp-worker.adoc @@ -133,13 +133,14 @@ file. $ gcloud deployment-manager deployments create ${INFRA_ID}-worker --config 06_worker.yaml ---- -[.small] --- -1. To use a {gcp-short} Marketplace image, specify the offer to use: +. 
To use a {gcp-short} Marketplace image, specify the offer to use: ++ ** {product-title}: `\https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736` ++ ** {opp}: `\https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736` ++ ** {oke}: `\https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736` --- + ifeval::["{context}" == "installing-gcp-user-infra"] :!three-node-cluster: diff --git a/modules/installation-disk-partitioning-upi-templates.adoc b/modules/installation-disk-partitioning-upi-templates.adoc index 429d98658f..1639dfe46f 100644 --- a/modules/installation-disk-partitioning-upi-templates.adoc +++ b/modules/installation-disk-partitioning-upi-templates.adoc @@ -21,6 +21,7 @@ :_mod-docs-content-type: PROCEDURE [id="installation-disk-partitioning-upi-templates_{context}"] = Optional: Creating a separate `/var` partition + It is recommended that disk partitioning for {product-title} be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. {product-title} supports the addition of a single partition to attach storage to either the `/var` partition or a subdirectory of `/var`. For example: @@ -130,8 +131,13 @@ $ butane $HOME/clusterconfig/98-var-partition.bu -o $HOME/clusterconfig/openshif [source,terminal] ---- $ openshift-install create ignition-configs --dir $HOME/clusterconfig +---- ++ +[source,terminal] +---- $ ls $HOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign ---- ++ +You can now use the Ignition config files as input to the installation procedures to install {op-system-first} systems. -Now you can use the Ignition config files as input to the installation procedures to install {op-system-first} systems. diff --git a/modules/installation-gcp-user-infra-adding-ingress.adoc b/modules/installation-gcp-user-infra-adding-ingress.adoc index 56038bfc93..95a5afd770 100644 --- a/modules/installation-gcp-user-infra-adding-ingress.adoc +++ b/modules/installation-gcp-user-infra-adding-ingress.adoc @@ -71,8 +71,20 @@ ifndef::shared-vpc[] [source,terminal] ---- $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${INFRA_ID}-private-zone +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone ---- endif::shared-vpc[] @@ -81,8 +93,20 @@ ifdef::shared-vpc[] [source,terminal] ---- $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. 
--ttl 300 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} ---- endif::shared-vpc[] @@ -92,8 +116,20 @@ ifndef::shared-vpc[] [source,terminal] ---- $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME} +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME} +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME} ---- endif::shared-vpc[] @@ -102,8 +138,20 @@ ifdef::shared-vpc[] [source,terminal] ---- $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} +---- ++ +[source,terminal] +---- $ gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} ---- endif::shared-vpc[] diff --git a/modules/installation-gcp-user-infra-completing.adoc b/modules/installation-gcp-user-infra-completing.adoc index ef96c28579..bab684b7ad 100644 --- a/modules/installation-gcp-user-infra-completing.adoc +++ b/modules/installation-gcp-user-infra-completing.adoc @@ -41,7 +41,6 @@ stored the installation files in. . Observe the running state of your cluster. + --- .. Run the following command to view the current cluster version and status: + [source,terminal] @@ -55,7 +54,7 @@ $ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete ---- - ++ .. Run the following command to view the Operators managed on the control plane by the Cluster Version Operator (CVO): + @@ -99,7 +98,7 @@ service-catalog-apiserver 4.5.4 True False F service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m ---- - ++ .. Run the following command to view your cluster pods: + [source,terminal] @@ -126,6 +125,5 @@ openshift-service-ca service-serving-cert-sig openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m ---- --- + When the current cluster version is `AVAILABLE`, the installation is complete. 
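If you prefer to block until that point rather than rerunning the commands above by hand, the following is a sketch only, assuming an `oc` client that supports `oc wait` against cluster-scoped resources such as `ClusterVersion`:

[source,terminal]
----
# Wait up to 45 minutes for the ClusterVersion resource to report Available=True,
# which corresponds to the cluster version showing AVAILABLE in `oc get clusterversion`.
$ oc wait clusterversion/version --for=condition=Available=True --timeout=45m
----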
diff --git a/modules/installation-user-infra-exporting-common-variables-arm-templates.adoc b/modules/installation-user-infra-exporting-common-variables-arm-templates.adoc index cbf9971fdd..f4321dbd37 100644 --- a/modules/installation-user-infra-exporting-common-variables-arm-templates.adoc +++ b/modules/installation-user-infra-exporting-common-variables-arm-templates.adoc @@ -38,45 +38,60 @@ Specific ARM templates can also require additional exported variables, which are ---- $ export CLUSTER_NAME= ---- -* ``: The value of the `.metadata.name` attribute from the `install-config.yaml` file. ++ +where: ++ +``:: The value of the `.metadata.name` attribute from the `install-config.yaml` file. + [source,terminal] ---- $ export AZURE_REGION= ---- ++ +where: ++ ifndef::ash[] -* ``: The region to deploy the cluster into, for example `centralus`. This is the value of the `.platform.azure.region` attribute from the `install-config.yaml` file. +``:: The region to deploy the cluster into, for example `centralus`. This is the value of the `.platform.azure.region` attribute from the `install-config.yaml` file. endif::ash[] ifdef::ash[] -* ``: The region to deploy the cluster into. This is the value of the `.platform.azure.region` attribute from the `install-config.yaml` file. +``:: The region to deploy the cluster into. This is the value of the `.platform.azure.region` attribute from the `install-config.yaml` file. endif::ash[] + [source,terminal] ---- $ export SSH_KEY= ---- -* ``: The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it contains spaces. This is the value of the `.sshKey` attribute from the `install-config.yaml` file. ++ +where: ++ +``:: The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it contains spaces. This is the value of the `.sshKey` attribute from the `install-config.yaml` file. + [source,terminal] ---- $ export BASE_DOMAIN= ---- ++ +where: ++ ifndef::ash[] -* ``: The base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. This is the value of the `.baseDomain` attribute from the `install-config.yaml` file. +``:: The base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. This is the value of the `.baseDomain` attribute from the `install-config.yaml` file. endif::ash[] ifdef::ash[] -* ``: The base domain to deploy the cluster to. The base domain corresponds to the DNS zone that you created for your cluster. This is the value of the `.baseDomain` attribute from the `install-config.yaml` file. +``:: The base domain to deploy the cluster to. The base domain corresponds to the DNS zone that you created for your cluster. This is the value of the `.baseDomain` attribute from the `install-config.yaml` file. endif::ash[] + [source,terminal] ---- $ export BASE_DOMAIN_RESOURCE_GROUP= ---- ++ +where: ++ ifndef::ash[] -* ``: The resource group where the public DNS zone exists. This is the value of the `.platform.azure.baseDomainResourceGroupName` attribute from the `install-config.yaml` file. +``:: The resource group where the public DNS zone exists. This is the value of the `.platform.azure.baseDomainResourceGroupName` attribute from the `install-config.yaml` file. endif::ash[] ifdef::ash[] -* ``: The resource group where the DNS zone exists. This is the value of the `.platform.azure.baseDomainResourceGroupName` attribute from the `install-config.yaml` file. 
+``:: The resource group where the DNS zone exists. This is the value of the `.platform.azure.baseDomainResourceGroupName` attribute from the `install-config.yaml` file. endif::ash[] + For example: @@ -112,7 +127,10 @@ $ export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster ---- $ export KUBECONFIG=/auth/kubeconfig ---- -* ``: Specify the path to the directory that you stored the installation files in. ++ +where: ++ +``:: Specify the path to the directory that you stored the installation files in. ifeval::["{context}" == "installing-azure-user-infra"] :!cp: diff --git a/modules/installation-user-infra-exporting-common-variables.adoc b/modules/installation-user-infra-exporting-common-variables.adoc index 9663057cc9..98204e910a 100644 --- a/modules/installation-user-infra-exporting-common-variables.adoc +++ b/modules/installation-user-infra-exporting-common-variables.adoc @@ -38,9 +38,7 @@ endif::[] [id="installation-user-infra-exporting-common-variables_{context}"] = Exporting common variables for {cp-template} templates -You must export a common set of variables that are used with the provided -{cp-template} templates used to assist in completing a user-provided -infrastructure install on {cp-first}. +You must export a common set of variables that are used with the provided {cp-template} templates used to assist in completing a user-provided infrastructure install on {cp-first}. [NOTE] ==== @@ -49,8 +47,7 @@ Specific {cp-template} templates can also require additional exported variables, .Procedure -. Export the following common variables to be used by the provided {cp-template} -templates: +. Export the following common variables to be used by the provided {cp-template} templates. For any command with ``, specify the path to the directory that you stored the installation files in. + ifndef::shared-vpc[] [source,terminal] @@ -82,7 +79,6 @@ $ export WORKER_SUBNET_CIDR='10.0.128.0/17' ---- $ export KUBECONFIG=/auth/kubeconfig ---- -* ``: Specify the path to the directory that you stored the installation files in. + [source,terminal] ---- @@ -116,7 +112,6 @@ $ export BASE_DOMAIN='' ---- $ export BASE_DOMAIN_ZONE_NAME='' ---- -* ``: Supply the values for the host project. + [source,terminal] ---- @@ -127,7 +122,6 @@ $ export NETWORK_CIDR='10.0.0.0/16' ---- $ export KUBECONFIG=/auth/kubeconfig ---- -* ``: Specify the path to the directory that you stored the installation files in. + [source,terminal] ----