// Module included in the following assemblies:
//
// * installing/installing_gcp/installing-gcp-user-infra.adoc
// * installing/installing_gcp/installing-restricted-networks-gcp.adoc
ifeval::["{context}" == "installing-gcp-user-infra"]
:three-node-cluster:
endif::[]
ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
:shared-vpc:
endif::[]
:_mod-docs-content-type: PROCEDURE
[id="installation-creating-gcp-worker_{context}"]
= Creating additional worker machines in {gcp-short}

You can create worker machines in {gcp-first} for your cluster
to use by launching individual instances discretely or by automated processes
outside the cluster, such as auto scaling groups. You can also take advantage of
the built-in cluster scaling mechanisms and the machine API in {product-title}.

ifdef::three-node-cluster[]
[NOTE]
====
If you are installing a three-node cluster, skip this step. A three-node cluster consists of three control plane machines, which also act as compute machines.
====
endif::three-node-cluster[]

In this example, you manually launch one instance by using the Deployment
Manager template. Additional instances can be launched by including additional
resources of type `06_worker.py` in the file.

[NOTE]
====
If you do not use the provided Deployment Manager template to create your worker
machines, you must review the provided information and manually create
the infrastructure. If your cluster does not initialize correctly, you might
have to contact Red Hat support with your installation logs.
====

.Prerequisites
* Ensure that you defined the variables in the _Exporting common variables_, _Creating load balancers in {gcp-short}_, and _Creating the bootstrap machine in {gcp-short}_ sections.
* Create the bootstrap machine.
* Create the control plane machines.

.Procedure
. Copy the template from the *Deployment Manager template for worker machines*
section of this topic and save it as `06_worker.py` on your computer. This
template describes the worker machines that your cluster requires.
. Export the variables that the resource definition uses.
.. Export the subnet that hosts the compute machines:
+
ifndef::shared-vpc[]
[source,terminal]
----
$ export COMPUTE_SUBNET=(`gcloud compute networks subnets describe ${INFRA_ID}-worker-subnet --region=${REGION} --format json | jq -r .selfLink`)
----
endif::shared-vpc[]
ifdef::shared-vpc[]
[source,terminal]
----
$ export COMPUTE_SUBNET=(`gcloud compute networks subnets describe ${HOST_PROJECT_COMPUTE_SUBNET} --region=${REGION} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)
----
endif::shared-vpc[]
.. Export the email address for your service account:
+
[source,terminal]
----
$ export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^${INFRA_ID}-w@${PROJECT_NAME}." --format json | jq -r '.[0].email'`)
----
.. Export the location of the compute machine Ignition config file:
+
[source,terminal]
----
$ export WORKER_IGNITION=`cat <installation_directory>/worker.ign`
----
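+
After all three variables are exported, you can optionally confirm that each one is populated. This check is a suggestion rather than part of the documented procedure; because `WORKER_IGNITION` holds the entire Ignition config, the check tests that it is non-empty instead of printing it:
+
[source,terminal]
----
$ echo ${COMPUTE_SUBNET} ${WORKER_SERVICE_ACCOUNT}
$ test -n "${WORKER_IGNITION}" && echo "worker Ignition config loaded"
----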
. Create a `06_worker.yaml` resource definition file:
+
[source,terminal]
----
$ cat <<EOF >06_worker.yaml
imports:
- path: 06_worker.py
resources:
- name: 'worker-0' <1>
  type: 06_worker.py
  properties:
    infra_id: '${INFRA_ID}' <2>
    zone: '${ZONE_0}' <3>
    compute_subnet: '${COMPUTE_SUBNET}' <4>
    image: '${CLUSTER_IMAGE}' <5>
    machine_type: 'n1-standard-4' <6>
    root_volume_size: '128'
    service_account_email: '${WORKER_SERVICE_ACCOUNT}' <7>
    ignition: '${WORKER_IGNITION}' <8>
- name: 'worker-1'
  type: 06_worker.py
  properties:
    infra_id: '${INFRA_ID}' <2>
    zone: '${ZONE_1}' <3>
    compute_subnet: '${COMPUTE_SUBNET}' <4>
    image: '${CLUSTER_IMAGE}' <5>
    machine_type: 'n1-standard-4' <6>
    root_volume_size: '128'
    service_account_email: '${WORKER_SERVICE_ACCOUNT}' <7>
    ignition: '${WORKER_IGNITION}' <8>
EOF
----
<1> `name` is the name of the worker machine, for example `worker-0`.
<2> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
<3> `zone` is the zone to deploy the worker machine into, for example `us-central1-a`.
<4> `compute_subnet` is the `selfLink` URL to the compute subnet.
<5> `image` is the `selfLink` URL to the {op-system} image. To use a {gcp-short} Marketplace image, specify the offer to use:
+
--
** {product-title}: `\https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736`
** {opp}: `\https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736`
** {oke}: `\https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736`
--
<6> `machine_type` is the machine type of the instance, for example `n1-standard-4`.
<7> `service_account_email` is the email address for the worker service account that you created.
<8> `ignition` is the contents of the `worker.ign` file.
. Optional: If you want to launch additional instances, include additional
resources of type `06_worker.py` in your `06_worker.yaml` resource definition
file, as shown in the sketch that follows.
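+
For example, the following command appends a hypothetical `worker-2` entry to the file by reusing the same properties. This sketch is not part of the documented procedure and assumes that a `ZONE_2` variable is exported in your shell, in the same way as `ZONE_0` and `ZONE_1`:
+
[source,terminal]
----
$ cat <<EOF >>06_worker.yaml
- name: 'worker-2'
  type: 06_worker.py
  properties:
    infra_id: '${INFRA_ID}'
    zone: '${ZONE_2}'
    compute_subnet: '${COMPUTE_SUBNET}'
    image: '${CLUSTER_IMAGE}'
    machine_type: 'n1-standard-4'
    root_volume_size: '128'
    service_account_email: '${WORKER_SERVICE_ACCOUNT}'
    ignition: '${WORKER_IGNITION}'
EOF
----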
. Create the deployment by using the `gcloud` CLI:
+
[source,terminal]
----
$ gcloud deployment-manager deployments create ${INFRA_ID}-worker --config 06_worker.yaml
----
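+
You can optionally verify that the new instances are running by listing the compute instances that match the cluster's infrastructure ID. This check is a suggestion rather than part of the documented procedure:
+
[source,terminal]
----
$ gcloud compute instances list --filter="name~${INFRA_ID}-worker"
----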
ifeval::["{context}" == "installing-gcp-user-infra"]
:!three-node-cluster:
endif::[]
ifeval::["{context}" == "installing-gcp-user-infra-vpc"]
:!shared-vpc:
endif::[]