//Module included in the following assembly
//
//post_installation_configuration/configuring-multi-arch-compute-machines/creating-multi-arch-compute-nodes-gcp.adoc

:_mod-docs-content-type: PROCEDURE
[id="multi-architecture-modify-machine-set-gcp_{context}"]
= Adding a multi-architecture compute machine set to your {gcp-short} cluster

After creating a multi-architecture cluster, you can add nodes with different architectures.

You can add multi-architecture compute machines to a multi-architecture cluster in the following ways:

* Adding 64-bit x86 compute machines to a cluster that uses 64-bit ARM control plane machines and already includes 64-bit ARM compute machines. In this case, 64-bit x86 is considered the secondary architecture.
* Adding 64-bit ARM compute machines to a cluster that uses 64-bit x86 control plane machines and already includes 64-bit x86 compute machines. In this case, 64-bit ARM is considered the secondary architecture.

include::snippets/about-multiarch-tuning-operator.adoc[]

.Prerequisites

* You installed the {oc-first}.
* You used the installation program to create a 64-bit x86 or 64-bit ARM single-architecture {gcp-short} cluster with the multi-architecture installer binary.
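+
To confirm that the cluster uses the multi-architecture payload, you can inspect the release metadata. This is a quick check that assumes the standard release image metadata; a cluster that was installed with the multi-architecture installer binary reports `multi` as the release architecture:
+
[source,terminal]
----
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
----
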
.Procedure

. Log in to the {oc-first}.

. Create a YAML file, and add the configuration to create a compute machine set to control the 64-bit ARM or 64-bit x86 compute nodes in your cluster.
+
--
.Example `MachineSet` object for a {gcp-short} 64-bit ARM or 64-bit x86 compute node
[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> <1>
  name: <infrastructure_id>-w-a
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <role> <2>
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: ""
      providerSpec:
        value:
          apiVersion: gcpprovider.openshift.io/v1beta1
          canIPForward: false
          credentialsSecret:
            name: gcp-cloud-credentials
          deletionProtection: false
          disks:
          - autoDelete: true
            boot: true
            image: <path_to_image> <3>
            labels: null
            sizeGb: 128
            type: pd-ssd
          gcpMetadata: <4>
          - key: <custom_metadata_key>
            value: <custom_metadata_value>
          kind: GCPMachineProviderSpec
          machineType: n1-standard-4 <5>
          metadata:
            creationTimestamp: null
          networkInterfaces:
          - network: <infrastructure_id>-network
            subnetwork: <infrastructure_id>-worker-subnet
          projectID: <project_name> <6>
          region: us-central1 <7>
          serviceAccounts:
          - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com
            scopes:
            - https://www.googleapis.com/auth/cloud-platform
          tags:
            - <infrastructure_id>-worker
          userDataSecret:
            name: worker-user-data
          zone: us-central1-a
----
<1> Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command:
+
[source,terminal]
----
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
----
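+
The command returns a single string. For example, a hypothetical cluster might return a value such as the following, which you then substitute for every `<infrastructure_id>` placeholder in the machine set:
+
[source,terminal]
----
mycluster-8x7nq
----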
<2> Specify the role node label to add.
<3> Specify the path to the image that is used in current compute machine sets. You need the project and image name for the path to your image.
+
To obtain the project and image name, run the following command:
+
[source,terminal]
----
$ oc get configmap/coreos-bootimages \
  -n openshift-machine-config-operator \
  -o jsonpath='{.data.stream}' | jq \
  -r '.architectures.aarch64.images.gcp'
----
+
.Example output
[source,terminal]
----
  "gcp": {
    "release": "415.92.202309142014-0",
    "project": "rhcos-cloud",
    "name": "rhcos-415-92-202309142014-0-gcp-aarch64"
  }
----
+
Use the `project` and `name` parameters from the output to build the path to the image in your machine set. The path must use the following format:
+
[source,terminal]
----
projects/<project>/global/images/<image_name>
----
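+
With the example output above, the assembled value would look like the following. This is shown only to illustrate the format; use the `project` and `name` values from your own cluster output:
+
[source,terminal]
----
projects/rhcos-cloud/global/images/rhcos-415-92-202309142014-0-gcp-aarch64
----
+
If you are adding 64-bit x86 compute machines instead, query the `x86_64` entry of the same stream data, for example:
+
[source,terminal]
----
$ oc get configmap/coreos-bootimages \
  -n openshift-machine-config-operator \
  -o jsonpath='{.data.stream}' | jq \
  -r '.architectures.x86_64.images.gcp'
----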
<4> Optional: Specify custom metadata in the form of a `key:value` pair. For example use cases, see the {gcp-short} documentation for link:https://cloud.google.com/compute/docs/metadata/setting-custom-metadata[setting custom metadata].
<5> Specify a machine type that aligns with the CPU architecture of the chosen OS image. For more information, see "Tested instance types for {gcp-short} on 64-bit ARM infrastructures".
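+
For a 64-bit ARM compute node, choose an ARM-based machine type instead of `n1-standard-4`. The following is only an illustrative sketch that assumes a Tau T2A instance type; verify your choice against the tested instance types and confirm that it is available in your selected zone:
+
[source,yaml]
----
machineType: t2a-standard-4
----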
<6> Specify the name of the {gcp-short} project that you use for your cluster.
<7> Specify the region. For example, `us-central1`. Ensure that the zone you select has machines with the required architecture.
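+
To check whether a zone offers machine types for your target architecture, you can query {gcp-short} directly. The following is a sketch that uses the `gcloud` CLI and assumes you are looking for Tau T2A (64-bit ARM) types in `us-central1-a`; adjust the filter and zone for your case:
+
[source,terminal]
----
$ gcloud compute machine-types list \
  --zones=us-central1-a \
  --filter="name:t2a-standard*"
----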
--

. Create the compute machine set by running the following command:
+
[source,terminal]
----
$ oc create -f <file_name> <1>
----
<1> Replace `<file_name>` with the name of the YAML file that contains the compute machine set configuration. For example, `gcp-arm64-machine-set-0.yaml` or `gcp-amd64-machine-set-0.yaml`.

.Verification

. View the list of compute machine sets by running the following command:
+
[source,terminal]
----
$ oc get machineset -n openshift-machine-api
----
+
The output must include the machine set that you created.
+
.Example output
[source,terminal]
----
NAME                                    DESIRED   CURRENT   READY   AVAILABLE   AGE
<infrastructure_id>-gcp-machine-set-0   2         2         2       2           10m
----
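+
Optionally, you can also list the machines that the machine set created and confirm that they reach the `Running` phase. The `Machine` resources are created in the same namespace:
+
[source,terminal]
----
$ oc get machines -n openshift-machine-api
----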

. Check that the nodes are ready and schedulable by running the following command:
+
[source,terminal]
----
$ oc get nodes
----
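+
Each node exposes its CPU architecture through the `kubernetes.io/arch` node label, so you can also confirm that the new nodes report the expected architecture, for example `arm64` or `amd64`:
+
[source,terminal]
----
$ oc get nodes -L kubernetes.io/arch
----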