Add GCP UPI install docs
@@ -117,6 +117,11 @@ Topics:
   Topics:
   - Name: Installing a cluster on AWS using CloudFormation templates
     File: installing-aws-user-infra
+- Name: Installing on user-provisioned GCP
+  Dir: installing_gcp_user_infra
+  Topics:
+  - Name: Installing a cluster on GCP using Deployment Manager templates
+    File: installing-gcp-user-infra
 - Name: Installing on bare metal
   Dir: installing_bare_metal
   Topics:
@@ -6,7 +6,7 @@ include::modules/common-attributes.adoc[]
 toc::[]
 
 In {product-title} version {product-version}, you can install a
-cluster on Amazon Web Services (AWS) using infrastructure that you provide.
+cluster on Amazon Web Services (AWS) by using infrastructure that you provide.
 
 One way to create this infrastructure is to use the provided
 CloudFormation templates. You can modify the templates to customize your
@@ -53,13 +53,13 @@ include::modules/installation-obtaining-installer.adoc[leveloffset=+1]
 
 include::modules/ssh-agent-using.adoc[leveloffset=+1]
 
-include::modules/installation-generate-aws-user-infra.adoc[leveloffset=+1]
+include::modules/installation-user-infra-generate.adoc[leveloffset=+1]
 
 include::modules/installation-generate-aws-user-infra-install-config.adoc[leveloffset=+2]
 
 include::modules/installation-configure-proxy.adoc[leveloffset=+2]
 
-include::modules/installation-generate-aws-user-infra-ignition.adoc[leveloffset=+2]
+include::modules/installation-user-infra-generate-k8s-manifest-ignition.adoc[leveloffset=+2]
 
 include::modules/installation-extracting-infraid.adoc[leveloffset=+1]
 
@@ -0,0 +1,84 @@
+[id="installing-gcp-user-infra"]
+= Installing a cluster on GCP using Deployment Manager templates
+include::modules/common-attributes.adoc[]
+:context: installing-gcp-user-infra
+
+toc::[]
+
+In {product-title} version {product-version}, you can install a cluster on
+Google Cloud Platform (GCP) by using infrastructure that you provide.
+
+The steps for performing a user-provisioned infrastructure installation are outlined here. Several
+link:https://cloud.google.com/deployment-manager/docs[Deployment Manager] templates are provided to assist in
+completing these steps or to help model your own. You are also free to create
+the required resources through other methods; the templates are just an
+example.
+
+[id="installation-gcp-user-infra-config-project"]
+== Configuring your GCP project
+
+Before you can install {product-title}, you must configure a Google Cloud
+Platform (GCP) project to host it.
+
+include::modules/installation-gcp-dns.adoc[leveloffset=+2]
+include::modules/installation-gcp-limits.adoc[leveloffset=+2]
+include::modules/installation-gcp-service-account.adoc[leveloffset=+2]
+include::modules/installation-gcp-permissions.adoc[leveloffset=+3]
+include::modules/installation-gcp-enabling-api-services.adoc[leveloffset=+2]
+include::modules/installation-gcp-regions.adoc[leveloffset=+2]
+include::modules/installation-gcp-install-cli.adoc[leveloffset=+2]
+
+include::modules/installation-user-infra-generate.adoc[leveloffset=+1]
+
+include::modules/installation-initializing.adoc[leveloffset=+2]
+
+include::modules/installation-configure-proxy.adoc[leveloffset=+2]
+
+include::modules/installation-user-infra-generate-k8s-manifest-ignition.adoc[leveloffset=+2]
+.Additional resources
+
+* xref:../../installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc#installation-gcp-user-infra-adding-ingress_installing-gcp-user-infra[Optional: Adding the ingress DNS records]
+
+[id="installation-gcp-user-infra-exporting-common-variables"]
+== Exporting common variables
+
+include::modules/installation-extracting-infraid.adoc[leveloffset=+2]
+include::modules/installation-user-infra-exporting-common-variables.adoc[leveloffset=+2]
+
+include::modules/installation-creating-gcp-vpc.adoc[leveloffset=+1]
+include::modules/installation-deployment-manager-vpc.adoc[leveloffset=+2]
+
+include::modules/installation-creating-gcp-dns.adoc[leveloffset=+1]
+include::modules/installation-deployment-manager-dns.adoc[leveloffset=+2]
+
+include::modules/installation-creating-gcp-security.adoc[leveloffset=+1]
+include::modules/installation-deployment-manager-security.adoc[leveloffset=+2]
+
+include::modules/installation-gcp-user-infra-rhcos.adoc[leveloffset=+1]
+
+include::modules/installation-creating-gcp-bootstrap.adoc[leveloffset=+1]
+include::modules/installation-deployment-manager-bootstrap.adoc[leveloffset=+2]
+
+include::modules/installation-creating-gcp-control-plane.adoc[leveloffset=+1]
+include::modules/installation-deployment-manager-control-plane.adoc[leveloffset=+2]
+
+include::modules/installation-gcp-user-infra-wait-for-bootstrap.adoc[leveloffset=+1]
+
+include::modules/installation-creating-gcp-worker.adoc[leveloffset=+1]
+include::modules/installation-deployment-manager-worker.adoc[leveloffset=+2]
+
+include::modules/cli-installing-cli.adoc[leveloffset=+1]
+
+include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]
+
+include::modules/installation-approve-csrs.adoc[leveloffset=+1]
+
+include::modules/installation-gcp-user-infra-adding-ingress.adoc[leveloffset=+1]
+
+include::modules/installation-gcp-user-infra-completing.adoc[leveloffset=+1]
+
+.Next steps
+
+* xref:../../installing/install_config/customizations.adoc#customizations[Customize your cluster].
+* If necessary, you can
+xref:../../support/remote_health_monitoring/opting-out-of-remote-health-reporting.adoc#opting-out-remote-health-reporting_opting-out-remote-health-reporting[opt out of remote health reporting].
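
The two *Exporting common variables* modules referenced in this assembly reduce to a handful of environment variables that every later Deployment Manager step assumes is already set in the shell. A sketch of that set, with illustrative placeholder values only; the variable names come from the module steps later in this diff, and you substitute your own values:

----
$ export BASE_DOMAIN='example.com'
$ export BASE_DOMAIN_ZONE_NAME='example'
$ export NETWORK_CIDR='10.0.0.0/16'
$ export MASTER_SUBNET_CIDR='10.0.0.0/19'
$ export WORKER_SUBNET_CIDR='10.0.32.0/19'
$ export PROJECT_NAME='my-gcp-project'
$ export CLUSTER_NAME='openshift'
$ export REGION='us-east1'
----
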
@@ -70,13 +70,13 @@ include::modules/installation-aws-permissions.adoc[leveloffset=+2]
 
 include::modules/ssh-agent-using.adoc[leveloffset=+1]
 
-include::modules/installation-generate-aws-user-infra.adoc[leveloffset=+1]
+include::modules/installation-user-infra-generate.adoc[leveloffset=+1]
 
 include::modules/installation-generate-aws-user-infra-install-config.adoc[leveloffset=+2]
 
 include::modules/installation-configure-proxy.adoc[leveloffset=+2]
 
-include::modules/installation-generate-aws-user-infra-ignition.adoc[leveloffset=+2]
+include::modules/installation-user-infra-generate-k8s-manifest-ignition.adoc[leveloffset=+2]
 
 include::modules/installation-extracting-infraid.adoc[leveloffset=+1]
 
@@ -10,6 +10,7 @@
 // * installing/installing_bare_metal/installing-bare-metal.adoc
 // * installing/installing_gcp/installing-gcp-customizations.adoc
 // * installing/installing_gcp/installing-gcp-default.adoc
+// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc
 // * installing/installing_restricted_networks/installing-restricted-networks-preparations.adoc
 // * installing/installing_vsphere/installing-vsphere.adoc
 //
@@ -9,6 +9,7 @@
 // * installing/installing_bare_metal/installing-bare-metal.adoc
 // * installing/installing_gcp/installing-gcp-customizations.adoc
 // * installing/installing_gcp/installing-gcp-default.adoc
+// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc
 // * installing/installing_openstack/installing-openstack-installer-custom.adoc
 // * installing/installing_openstack/installing-openstack-installer-kuryr.adoc
 // * installing/installing_openstack/installing-openstack-installer.adoc
@@ -35,22 +36,13 @@ The file is specific to a cluster and is created during {product-title} installation.
 +
 ----
 $ export KUBECONFIG=<installation_directory>/auth/kubeconfig <1>
+
+$ oc whoami
+system:admin
 ----
 <1> For `<installation_directory>`, specify the path to the directory that you stored
 the installation files in.
 
 ////
-. Log in to the `oc` CLI:
 . Verify you can run `oc` commands successfully using the exported configuration:
-+
-----
-$ oc login
-$ oc whoami
-system:admin
-----
-+
-Specify `kubeadmin` as the user and the password that displayed when the
-installation process completed. If you no longer have the password for the `kubeadmin`
-user, it is also listed in the `.openshift_install.log` file in your
-installation directory.
 ////
@@ -1,6 +1,7 @@
 // Module included in the following assemblies:
 //
 // * installing/installing_aws_user_infra/installing-aws-user-infra.adoc
+// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc
 // * installing/installing_bare_metal/installing-bare-metal.adoc
 // * installing/installing_restricted_networks/installing-restricted-networks-aws.adoc
 // * installing/installing_restricted_networks/installing-restricted-networks-bare-metal.adoc
@@ -1,8 +1,8 @@
 // Module included in the following assemblies:
 //
 // * installing/installing_aws_user_infra/installing-aws-user-infra.adoc
-// * installing/installing_bare_metal/installing-bare-metal.adoc
+// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc
 // * installing/installing_bare_metal/installing-bare-metal.adoc
 // * installing/installing_restricted_networks/installing-restricted-networks-aws.adoc
 // * installing/installing_restricted_networks/installing-restricted-networks-bare-metal.adoc
 // * installing/installing_restricted_networks/installing-restricted-networks-vsphere.adoc

modules/installation-creating-gcp-bootstrap.adoc (new file, 106 lines)
@@ -0,0 +1,106 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc
+
+[id="installation-creating-gcp-bootstrap_{context}"]
+= Creating the bootstrap machine in GCP
+
+You must create the bootstrap machine in Google Cloud Platform (GCP) to use during
+{product-title} cluster initialization. One way to create this machine is
+to modify the provided Deployment Manager template.
+
+[NOTE]
+====
+If you do not use the provided Deployment Manager template to create your bootstrap
+machine, you must review the provided information and manually create
+the infrastructure. If your cluster does not initialize correctly, you might
+have to contact Red Hat support with your installation logs.
+====
+
+.Prerequisites
+
+* Configure a GCP account.
+* Generate the Ignition config files for your cluster.
+* Create and configure a VPC and associated subnets in GCP.
+* Create and configure networking and load balancers in GCP.
+* Create control plane and compute roles.
+
+.Procedure
+
+. Copy the template from the *Deployment Manager template for the bootstrap machine*
+section of this topic and save it as `04_bootstrap.py` on your computer. This
+template describes the bootstrap machine that your cluster requires.
+
+. Export the following variables required by the resource definition:
++
+----
+$ export CONTROL_SUBNET=`gcloud compute networks subnets describe ${INFRA_ID}-master-subnet --region=${REGION} --format json | jq -r .selfLink`
+$ export CLUSTER_IMAGE=`gcloud compute images describe ${INFRA_ID}-rhcos-image --format json | jq -r .selfLink`
+$ export ZONE_0=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`
+$ export ZONE_1=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`
+$ export ZONE_2=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`
+----
+
+. Create a bucket and upload the `bootstrap.ign` file:
++
+----
+$ gsutil mb gs://${INFRA_ID}-bootstrap-ignition
+$ gsutil cp bootstrap.ign gs://${INFRA_ID}-bootstrap-ignition/
+----
+
+. Create a signed URL for the bootstrap instance to use to access the Ignition
+config. Export the URL from the output as a variable:
++
+----
+$ export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json \
+gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print $5}'`
+----
+
+. Create a `04_bootstrap.yaml` resource definition file:
++
+----
+$ cat <<EOF >04_bootstrap.yaml
+imports:
+- path: 04_bootstrap.py
+
+resources:
+- name: cluster-bootstrap
+  type: 04_bootstrap.py
+  properties:
+    infra_id: '${INFRA_ID}' <1>
+    region: '${REGION}' <2>
+    zone: '${ZONE_0}' <3>
+
+    cluster_network: '${CLUSTER_NETWORK}' <4>
+    control_subnet: '${CONTROL_SUBNET}' <5>
+    image: '${CLUSTER_IMAGE}' <6>
+    machine_type: 'n1-standard-4' <7>
+    root_volume_size: '128'
+
+    bootstrap_ign: '${BOOTSTRAP_IGN}' <8>
+EOF
+----
+<1> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
+<2> `region` is the region to deploy the cluster into, for example `us-east1`.
+<3> `zone` is the zone to deploy the bootstrap instance into, for example `us-east1-b`.
+<4> `cluster_network` is the `selfLink` URL to the cluster network.
+<5> `control_subnet` is the `selfLink` URL to the control subnet.
+<6> `image` is the `selfLink` URL to the {op-system} image.
+<7> `machine_type` is the machine type of the instance, for example `n1-standard-4`.
+<8> `bootstrap_ign` is the URL output when creating a signed URL above.
+
+. Create the deployment by using the `gcloud` CLI:
++
+----
+$ gcloud deployment-manager deployments create ${INFRA_ID}-bootstrap --config 04_bootstrap.yaml
+----
+
+. The templates do not manage load balancer membership due to limitations of Deployment
+Manager, so you must add the bootstrap machine manually:
++
+----
+$ gcloud compute target-pools add-instances \
+${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-bootstrap
+$ gcloud compute target-pools add-instances \
+${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-bootstrap
+----
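
A wrong bucket name or an expired signed URL only surfaces when the bootstrap machine boots, so a quick check after these steps can save a debugging round trip. A possible spot check, assuming `curl` is available in the same shell session:

----
$ curl -s -o /dev/null -w '%{http_code}\n' "${BOOTSTRAP_IGN}"    # should print 200
$ gcloud deployment-manager deployments describe ${INFRA_ID}-bootstrap
----
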

modules/installation-creating-gcp-control-plane.adoc (new file, 114 lines)
@@ -0,0 +1,114 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc
+
+[id="installation-creating-gcp-control-plane_{context}"]
+= Creating the control plane machines in GCP
+
+You must create the control plane machines in Google Cloud Platform (GCP) for
+your cluster to use. One way to create these machines is to modify the
+provided Deployment Manager template.
+
+[NOTE]
+====
+If you do not use the provided Deployment Manager template to create your
+control plane machines, you must review the provided information and manually
+create the infrastructure. If your cluster does not initialize correctly, you
+might have to contact Red Hat support with your installation logs.
+====
+
+.Prerequisites
+
+* Configure a GCP account.
+* Generate the Ignition config files for your cluster.
+* Create and configure a VPC and associated subnets in GCP.
+* Create and configure networking and load balancers in GCP.
+* Create control plane and compute roles.
+* Create the bootstrap machine.
+
+.Procedure
+
+. Copy the template from the *Deployment Manager template for control plane machines*
+section of this topic and save it as `05_control_plane.py` on your computer.
+This template describes the control plane machines that your cluster requires.
+
+. Export the following variables needed by the resource definition:
++
+----
+$ export MASTER_SERVICE_ACCOUNT_EMAIL=`gcloud iam service-accounts list | grep "^${INFRA_ID}-master-node " | awk '{print $2}'`
+$ export MASTER_IGNITION=`cat master.ign`
+----
+
+. Create a `05_control_plane.yaml` resource definition file:
++
+----
+$ cat <<EOF >05_control_plane.yaml
+imports:
+- path: 05_control_plane.py
+
+resources:
+- name: cluster-control-plane
+  type: 05_control_plane.py
+  properties:
+    infra_id: '${INFRA_ID}' <1>
+    region: '${REGION}' <2>
+    zones: <3>
+    - '${ZONE_0}'
+    - '${ZONE_1}'
+    - '${ZONE_2}'
+
+    control_subnet: '${CONTROL_SUBNET}' <4>
+    image: '${CLUSTER_IMAGE}' <5>
+    machine_type: 'n1-standard-4' <6>
+    root_volume_size: '128'
+    service_account_email: '${MASTER_SERVICE_ACCOUNT_EMAIL}' <7>
+
+    ignition: '${MASTER_IGNITION}' <8>
+EOF
+----
+<1> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
+<2> `region` is the region to deploy the cluster into, for example `us-east1`.
+<3> `zones` are the zones to deploy the control plane machines into, for example `us-east1-b`, `us-east1-c`, and `us-east1-d`.
+<4> `control_subnet` is the `selfLink` URL to the control subnet.
+<5> `image` is the `selfLink` URL to the {op-system} image.
+<6> `machine_type` is the machine type of the instance, for example `n1-standard-4`.
+<7> `service_account_email` is the email address for the master service account created above.
+<8> `ignition` is the contents of the `master.ign` file.
+
+. Create the deployment by using the `gcloud` CLI:
++
+----
+$ gcloud deployment-manager deployments create ${INFRA_ID}-control-plane --config 05_control_plane.yaml
+----
+
+. The templates do not manage DNS entries due to limitations of Deployment
+Manager, so you must add the etcd entries manually:
++
+----
+$ export MASTER0_IP=`gcloud compute instances describe ${INFRA_ID}-m-0 --zone ${ZONE_0} --format json | jq -r .networkInterfaces[0].networkIP`
+$ export MASTER1_IP=`gcloud compute instances describe ${INFRA_ID}-m-1 --zone ${ZONE_1} --format json | jq -r .networkInterfaces[0].networkIP`
+$ export MASTER2_IP=`gcloud compute instances describe ${INFRA_ID}-m-2 --zone ${ZONE_2} --format json | jq -r .networkInterfaces[0].networkIP`
+$ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
+$ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone
+$ gcloud dns record-sets transaction add ${MASTER0_IP} --name etcd-0.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone
+$ gcloud dns record-sets transaction add ${MASTER1_IP} --name etcd-1.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone
+$ gcloud dns record-sets transaction add ${MASTER2_IP} --name etcd-2.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone
+$ gcloud dns record-sets transaction add \
+"0 10 2380 etcd-0.${CLUSTER_NAME}.${BASE_DOMAIN}." \
+"0 10 2380 etcd-1.${CLUSTER_NAME}.${BASE_DOMAIN}." \
+"0 10 2380 etcd-2.${CLUSTER_NAME}.${BASE_DOMAIN}." \
+--name _etcd-server-ssl._tcp.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type SRV --zone ${INFRA_ID}-private-zone
+$ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone
+----
+
+. The templates do not manage load balancer membership due to limitations of Deployment
+Manager, so you must add the control plane machines manually:
++
+----
+$ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-m-0
+$ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_1}" --instances=${INFRA_ID}-m-1
+$ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_2}" --instances=${INFRA_ID}-m-2
+$ gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-m-0
+$ gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_1}" --instances=${INFRA_ID}-m-1
+$ gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_2}" --instances=${INFRA_ID}-m-2
----
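
The manual etcd records and target pool memberships are easy to get subtly wrong (a missing trailing dot, a zone mismatch), so listing them back is a cheap sanity check. One way to do that, assuming the variables from the previous steps are still set:

----
$ gcloud dns record-sets list --zone ${INFRA_ID}-private-zone --filter "name~etcd"
$ gcloud compute target-pools describe ${INFRA_ID}-api-target-pool --region=${REGION} --format="value(instances)"
----
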

modules/installation-creating-gcp-dns.adoc (new file, 94 lines)
@@ -0,0 +1,94 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc
+
+[id="installation-creating-gcp-dns_{context}"]
+= Creating networking and load balancing components in GCP
+
+You must configure networking and load balancing in Google Cloud Platform (GCP) for your
+{product-title} cluster to use. One way to create these components is
+to modify the provided Deployment Manager template.
+
+[NOTE]
+====
+If you do not use the provided Deployment Manager template to create your GCP
+infrastructure, you must review the provided information and manually create
+the infrastructure. If your cluster does not initialize correctly, you might
+have to contact Red Hat support with your installation logs.
+====
+
+.Prerequisites
+
+* Configure a GCP account.
+* Generate the Ignition config files for your cluster.
+* Create and configure a VPC and associated subnets in GCP.
+
+.Procedure
+
+. Copy the template from the *Deployment Manager template for the network and load balancers*
+section of this topic and save it as `02_infra.py` on your computer. This
+template describes the networking and load balancing objects that your cluster
+requires.
+
+. Export the following variable required by the resource definition:
++
+----
+$ export CLUSTER_NETWORK=`gcloud compute networks describe ${INFRA_ID}-network --format json | jq -r .selfLink`
+----
+
+. Create a `02_infra.yaml` resource definition file:
++
+----
+$ cat <<EOF >02_infra.yaml
+imports:
+- path: 02_infra.py
+
+resources:
+- name: cluster-infra
+  type: 02_infra.py
+  properties:
+    infra_id: '${INFRA_ID}' <1>
+    region: '${REGION}' <2>
+
+    cluster_domain: '${CLUSTER_NAME}.${BASE_DOMAIN}' <3>
+    cluster_network: '${CLUSTER_NETWORK}' <4>
+EOF
+----
+<1> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
+<2> `region` is the region to deploy the cluster into, for example `us-east1`.
+<3> `cluster_domain` is the domain for the cluster, for example `openshift.example.com`.
+<4> `cluster_network` is the `selfLink` URL to the cluster network.
+
+. Create the deployment by using the `gcloud` CLI:
++
+----
+$ gcloud deployment-manager deployments create ${INFRA_ID}-infra --config 02_infra.yaml
+----
+
+. The templates do not create DNS entries due to limitations of Deployment
+Manager, so you must create them manually:
+
+.. Export the following variable:
++
+----
+$ export CLUSTER_IP=`gcloud compute addresses describe ${INFRA_ID}-cluster-public-ip --region=${REGION} --format json | jq -r .address`
+----
+
+.. Add external DNS entries:
++
+----
+$ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
+$ gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME}
+$ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${BASE_DOMAIN_ZONE_NAME}
+$ gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}
+----
+
+.. Add internal DNS entries:
++
+----
+$ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
+$ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone
+$ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone
+$ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api-int.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone
+$ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone
+----
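
Once the transactions execute, the `api` name should resolve to the public cluster address and the two forwarding rules should expose ports 6443 and 22623. A possible verification, assuming `dig` is installed locally:

----
$ gcloud compute forwarding-rules list --filter="name~${INFRA_ID}" --format="table(name,IPAddress,portRange)"
$ dig +short api.${CLUSTER_NAME}.${BASE_DOMAIN}
----
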

modules/installation-creating-gcp-security.adoc (new file, 92 lines)
@@ -0,0 +1,92 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc
+
+[id="installation-creating-gcp-security_{context}"]
+= Creating firewall rules and IAM roles in GCP
+
+You must create firewall rules and IAM roles in Google Cloud Platform (GCP) for your
+{product-title} cluster to use. One way to create these components is
+to modify the provided Deployment Manager template.
+
+[NOTE]
+====
+If you do not use the provided Deployment Manager template to create your GCP
+infrastructure, you must review the provided information and manually create
+the infrastructure. If your cluster does not initialize correctly, you might
+have to contact Red Hat support with your installation logs.
+====
+
+.Prerequisites
+
+* Configure a GCP account.
+* Generate the Ignition config files for your cluster.
+* Create and configure a VPC and associated subnets in GCP.
+
+.Procedure
+
+. Copy the template from the *Deployment Manager template for firewall rules and IAM roles*
+section of this topic and save it as `03_security.py` on your computer. This
+template describes the firewall rules and IAM roles that your cluster requires.
+
+. Export the following variables required by the resource definition:
++
+----
+$ export MASTER_NAT_IP=`gcloud compute addresses describe ${INFRA_ID}-master-nat-ip --region ${REGION} --format json | jq -r .address`
+$ export WORKER_NAT_IP=`gcloud compute addresses describe ${INFRA_ID}-worker-nat-ip --region ${REGION} --format json | jq -r .address`
+----
+
+. Create a `03_security.yaml` resource definition file:
++
+----
+$ cat <<EOF >03_security.yaml
+imports:
+- path: 03_security.py
+
+resources:
+- name: cluster-security
+  type: 03_security.py
+  properties:
+    infra_id: '${INFRA_ID}' <1>
+    region: '${REGION}' <2>
+
+    cluster_network: '${CLUSTER_NETWORK}' <3>
+    network_cidr: '${NETWORK_CIDR}' <4>
+    master_nat_ip: '${MASTER_NAT_IP}' <5>
+    worker_nat_ip: '${WORKER_NAT_IP}' <6>
+EOF
+----
+<1> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
+<2> `region` is the region to deploy the cluster into, for example `us-east1`.
+<3> `cluster_network` is the `selfLink` URL to the cluster network.
+<4> `network_cidr` is the CIDR of the VPC network, for example `10.0.0.0/16`.
+<5> `master_nat_ip` is the IP address of the master NAT, for example `34.94.100.1`.
+<6> `worker_nat_ip` is the IP address of the worker NAT, for example `34.94.200.1`.
+
+. Create the deployment by using the `gcloud` CLI:
++
+----
+$ gcloud deployment-manager deployments create ${INFRA_ID}-security --config 03_security.yaml
+----
+
+. The templates do not create the policy bindings due to limitations of Deployment
+Manager, so you must create them manually:
++
+----
+$ export MASTER_SA=${INFRA_ID}-m@${PROJECT_NAME}.iam.gserviceaccount.com
+$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/compute.instanceAdmin"
+$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/compute.networkAdmin"
+$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/compute.securityAdmin"
+$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/iam.serviceAccountUser"
+$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/storage.admin"
+
+$ export WORKER_SA=${INFRA_ID}-w@${PROJECT_NAME}.iam.gserviceaccount.com
+$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SA}" --role "roles/compute.viewer"
+$ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SA}" --role "roles/storage.admin"
+----
+
+. Create a service account key and store it locally for later use:
++
+----
+$ gcloud iam service-accounts keys create service-account-key.json --iam-account=${MASTER_SA}
+----
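
Because the policy bindings are added outside the template, it is easy to miss one; listing the roles bound to the master service account confirms all five are in place. A sketch using the documented `--flatten`/`--filter` pattern:

----
$ gcloud projects get-iam-policy ${PROJECT_NAME} \
    --flatten="bindings[].members" \
    --filter="bindings.members:${MASTER_SA}" \
    --format="table(bindings.role)"
----
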

modules/installation-creating-gcp-vpc.adoc (new file, 58 lines)
@@ -0,0 +1,58 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc
+
+[id="installation-creating-gcp-vpc_{context}"]
+= Creating a VPC in GCP
+
+You must create a VPC in Google Cloud Platform (GCP) for your {product-title}
+cluster to use. You can customize the VPC to meet your requirements. One way to
+create the VPC is to modify the provided Deployment Manager template.
+
+[NOTE]
+====
+If you do not use the provided Deployment Manager template to create your GCP
+infrastructure, you must review the provided information and manually create
+the infrastructure. If your cluster does not initialize correctly, you might
+have to contact Red Hat support with your installation logs.
+====
+
+.Prerequisites
+
+* Configure a GCP account.
+* Generate the Ignition config files for your cluster.
+
+.Procedure
+
+. Copy the template from the *Deployment Manager template for the VPC*
+section of this topic and save it as `01_vpc.py` on your computer. This template
+describes the VPC that your cluster requires.
+
+. Create a `01_vpc.yaml` resource definition file:
++
+----
+$ cat <<EOF >01_vpc.yaml
+imports:
+- path: 01_vpc.py
+
+resources:
+- name: cluster-vpc
+  type: 01_vpc.py
+  properties:
+    infra_id: '${INFRA_ID}' <1>
+    region: '${REGION}' <2>
+
+    master_subnet_cidr: '${MASTER_SUBNET_CIDR}' <3>
+    worker_subnet_cidr: '${WORKER_SUBNET_CIDR}' <4>
+EOF
+----
+<1> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
+<2> `region` is the region to deploy the cluster into, for example `us-east1`.
+<3> `master_subnet_cidr` is the CIDR for the master subnet, for example `10.0.0.0/19`.
+<4> `worker_subnet_cidr` is the CIDR for the worker subnet, for example `10.0.32.0/19`.
+
+. Create the deployment by using the `gcloud` CLI:
++
+----
+$ gcloud deployment-manager deployments create ${INFRA_ID}-vpc --config 01_vpc.yaml
+----
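
After the deployment completes, the network should contain exactly the two subnets from the template, with the CIDRs that were exported earlier. One way to confirm:

----
$ gcloud compute networks subnets list --network=${INFRA_ID}-network --format="table(name,region,ipCidrRange)"
----
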

modules/installation-creating-gcp-worker.adoc (new file, 91 lines)
@@ -0,0 +1,91 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc
+
+[id="installation-creating-gcp-worker_{context}"]
+= Creating additional worker machines in GCP
+
+You can create worker machines in Google Cloud Platform (GCP) for your cluster
+to use by launching individual instances discretely or by automated processes
+outside the cluster, such as managed instance groups. You can also take advantage of
+the built-in cluster scaling mechanisms and the machine API in {product-title}.
+
+In this example, you manually launch one instance by using the Deployment
+Manager template. Additional instances can be launched by including additional
+resources of type `06_worker.py` in the file.
+
+[NOTE]
+====
+If you do not use the provided Deployment Manager template to create your worker
+machines, you must review the provided information and manually create
+the infrastructure. If your cluster does not initialize correctly, you might
+have to contact Red Hat support with your installation logs.
+====
+
+.Prerequisites
+
+* Configure a GCP account.
+* Generate the Ignition config files for your cluster.
+* Create and configure a VPC and associated subnets in GCP.
+* Create and configure networking and load balancers in GCP.
+* Create control plane and compute roles.
+* Create the bootstrap machine.
+* Create the control plane machines.
+
+.Procedure
+
+. Copy the template from the *Deployment Manager template for worker machines*
+section of this topic and save it as `06_worker.py` on your computer. This
+template describes the worker machines that your cluster requires.
+
+. Export the following variables needed by the resource definition:
++
+----
+$ export COMPUTE_SUBNET=`gcloud compute networks subnets describe ${INFRA_ID}-worker-subnet --region=${REGION} --format json | jq -r .selfLink`
+$ export WORKER_SERVICE_ACCOUNT_EMAIL=`gcloud iam service-accounts list | grep "^${INFRA_ID}-worker-node " | awk '{print $2}'`
+$ export WORKER_IGNITION=`cat worker.ign`
+----
+
+. Create a `06_worker.yaml` resource definition file:
++
+----
+$ cat <<EOF >06_worker.yaml
+imports:
+- path: 06_worker.py
+
+resources:
+- name: 'w-a-0' <1>
+  type: 06_worker.py
+  properties:
+    infra_id: '${INFRA_ID}' <2>
+    region: '${REGION}' <3>
+    zone: '${ZONE_0}' <4>
+
+    compute_subnet: '${COMPUTE_SUBNET}' <5>
+    image: '${CLUSTER_IMAGE}' <6>
+    machine_type: 'n1-standard-4' <7>
+    root_volume_size: '128'
+    service_account_email: '${WORKER_SERVICE_ACCOUNT_EMAIL}' <8>
+
+    ignition: '${WORKER_IGNITION}' <9>
+EOF
+----
+<1> `name` is the name of the worker machine, for example `w-a-0`.
+<2> `infra_id` is the `INFRA_ID` infrastructure name from the extraction step.
+<3> `region` is the region to deploy the cluster into, for example `us-east1`.
+<4> `zone` is the zone to deploy the worker machine into, for example `us-east1-b`.
+<5> `compute_subnet` is the `selfLink` URL to the compute subnet.
+<6> `image` is the `selfLink` URL to the {op-system} image.
+<7> `machine_type` is the machine type of the instance, for example `n1-standard-4`.
+<8> `service_account_email` is the email address for the worker service account created above.
+<9> `ignition` is the contents of the `worker.ign` file.
+
+. Optional: If you want to launch additional instances, include additional
+resources of type `06_worker.py` in your `06_worker.yaml` resource definition
+file, as shown in the sketch after this procedure.
+
+. Create the deployment by using the `gcloud` CLI:
++
+----
+$ gcloud deployment-manager deployments create ${INFRA_ID}-worker --config 06_worker.yaml
+----
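
To make the optional scale-out step concrete: because `resources` in `06_worker.yaml` is a plain YAML list, another worker is just another list item appended to the file. A sketch with a hypothetical second machine named `w-b-1` placed in the second zone, reusing the variables exported above:

----
$ cat <<EOF >>06_worker.yaml
- name: 'w-b-1'
  type: 06_worker.py
  properties:
    infra_id: '${INFRA_ID}'
    region: '${REGION}'
    zone: '${ZONE_1}'

    compute_subnet: '${COMPUTE_SUBNET}'
    image: '${CLUSTER_IMAGE}'
    machine_type: 'n1-standard-4'
    root_volume_size: '128'
    service_account_email: '${WORKER_SERVICE_ACCOUNT_EMAIL}'

    ignition: '${WORKER_IGNITION}'
EOF
----

Running the same `gcloud deployment-manager deployments create` command with the updated file would then create both machines; for a deployment that already exists, `gcloud deployment-manager deployments update` applies the added resource instead.
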

modules/installation-deployment-manager-bootstrap.adoc (new file, 70 lines)
@@ -0,0 +1,70 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc
+
+[id="installation-deployment-manager-bootstrap_{context}"]
+= Deployment Manager template for the bootstrap machine
+
+You can use the following Deployment Manager template to deploy the bootstrap
+machine that you need for your {product-title} cluster:
+
+.`04_bootstrap.py` Deployment Manager template
+[source,python]
+----
+def GenerateConfig(context):
+
+    resources = [{
+        'name': context.properties['infra_id'] + '-bootstrap-public-ip',
+        'type': 'compute.v1.address',
+        'properties': {
+            'region': context.properties['region']
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-bootstrap-in-ssh',
+        'type': 'compute.v1.firewall',
+        'properties': {
+            'network': context.properties['cluster_network'],
+            'allowed': [{
+                'IPProtocol': 'tcp',
+                'ports': ['22']
+            }],
+            'sourceRanges': ['0.0.0.0/0'],
+            'targetTags': [context.properties['infra_id'] + '-bootstrap']
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-bootstrap',
+        'type': 'compute.v1.instance',
+        'properties': {
+            'disks': [{
+                'autoDelete': True,
+                'boot': True,
+                'initializeParams': {
+                    'diskSizeGb': context.properties['root_volume_size'],
+                    'sourceImage': context.properties['image']
+                }
+            }],
+            'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'],
+            'metadata': {
+                'items': [{
+                    'key': 'user-data',
+                    'value': '{"ignition":{"config":{"replace":{"source":"' + context.properties['bootstrap_ign'] + '","verification":{}}},"timeouts":{},"version":"2.1.0"},"networkd":{},"passwd":{},"storage":{},"systemd":{}}',
+                }]
+            },
+            'networkInterfaces': [{
+                'subnetwork': context.properties['control_subnet'],
+                'accessConfigs': [{
+                    'natIP': '$(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)'
+                }]
+            }],
+            'tags': {
+                'items': [
+                    context.properties['infra_id'] + '-master',
+                    context.properties['infra_id'] + '-bootstrap'
+                ]
+            },
+            'zone': context.properties['zone']
+        }
+    }]
+
+    return {'resources': resources}
+----
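
Note that the `user-data` value in this template is not the full Ignition config: it is a small pointer config whose `ignition.config.replace.source` redirects the machine to the signed bucket URL for the real `bootstrap.ign`. One way to inspect what actually landed on the instance (a sketch; the `--format` path assumes `user-data` is the first metadata item):

----
$ gcloud compute instances describe ${INFRA_ID}-bootstrap --zone ${ZONE_0} \
    --format="value(metadata.items[0].value)" | jq -r .ignition.config.replace.source
----
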

modules/installation-deployment-manager-control-plane.adoc (new file, 121 lines)
@@ -0,0 +1,121 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc
+
+[id="installation-deployment-manager-control-plane_{context}"]
+= Deployment Manager template for control plane machines
+
+You can use the following Deployment Manager template to deploy the control
+plane machines that you need for your {product-title} cluster:
+
+.`05_control_plane.py` Deployment Manager template
+[source,python]
+----
+def GenerateConfig(context):
+
+    resources = [{
+        'name': context.properties['infra_id'] + '-m-0',
+        'type': 'compute.v1.instance',
+        'properties': {
+            'disks': [{
+                'autoDelete': True,
+                'boot': True,
+                'initializeParams': {
+                    'diskSizeGb': context.properties['root_volume_size'],
+                    'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd',
+                    'sourceImage': context.properties['image']
+                }
+            }],
+            'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'],
+            'metadata': {
+                'items': [{
+                    'key': 'user-data',
+                    'value': context.properties['ignition']
+                }]
+            },
+            'networkInterfaces': [{
+                'subnetwork': context.properties['control_subnet']
+            }],
+            'serviceAccounts': [{
+                'email': context.properties['service_account_email'],
+                'scopes': ['https://www.googleapis.com/auth/cloud-platform']
+            }],
+            'tags': {
+                'items': [
+                    context.properties['infra_id'] + '-master',
+                ]
+            },
+            'zone': context.properties['zones'][0]
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-m-1',
+        'type': 'compute.v1.instance',
+        'properties': {
+            'disks': [{
+                'autoDelete': True,
+                'boot': True,
+                'initializeParams': {
+                    'diskSizeGb': context.properties['root_volume_size'],
+                    'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd',
+                    'sourceImage': context.properties['image']
+                }
+            }],
+            'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'],
+            'metadata': {
+                'items': [{
+                    'key': 'user-data',
+                    'value': context.properties['ignition']
+                }]
+            },
+            'networkInterfaces': [{
+                'subnetwork': context.properties['control_subnet']
+            }],
+            'serviceAccounts': [{
+                'email': context.properties['service_account_email'],
+                'scopes': ['https://www.googleapis.com/auth/cloud-platform']
+            }],
+            'tags': {
+                'items': [
+                    context.properties['infra_id'] + '-master',
+                ]
+            },
+            'zone': context.properties['zones'][1]
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-m-2',
+        'type': 'compute.v1.instance',
+        'properties': {
+            'disks': [{
+                'autoDelete': True,
+                'boot': True,
+                'initializeParams': {
+                    'diskSizeGb': context.properties['root_volume_size'],
+                    'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd',
+                    'sourceImage': context.properties['image']
+                }
+            }],
+            'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'],
+            'metadata': {
+                'items': [{
+                    'key': 'user-data',
+                    'value': context.properties['ignition']
+                }]
+            },
+            'networkInterfaces': [{
+                'subnetwork': context.properties['control_subnet']
+            }],
+            'serviceAccounts': [{
+                'email': context.properties['service_account_email'],
+                'scopes': ['https://www.googleapis.com/auth/cloud-platform']
+            }],
+            'tags': {
+                'items': [
+                    context.properties['infra_id'] + '-master',
+                ]
+            },
+            'zone': context.properties['zones'][2]
+        }
+    }]
+
+    return {'resources': resources}
+----

modules/installation-deployment-manager-dns.adoc (new file, 86 lines)
@@ -0,0 +1,86 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc
+
+[id="installation-deployment-manager-dns_{context}"]
+= Deployment Manager template for the network and load balancers
+
+You can use the following Deployment Manager template to deploy the networking
+objects and load balancers that you need for your {product-title} cluster:
+
+.`02_infra.py` Deployment Manager template
+[source,python]
+----
+def GenerateConfig(context):
+
+    resources = [{
+        'name': context.properties['infra_id'] + '-cluster-public-ip',
+        'type': 'compute.v1.address',
+        'properties': {
+            'region': context.properties['region']
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-api-http-health-check',
+        'type': 'compute.v1.httpHealthCheck',
+        'properties': {
+            'port': 6080,
+            'requestPath': '/readyz'
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-api-target-pool',
+        'type': 'compute.v1.targetPool',
+        'properties': {
+            'region': context.properties['region'],
+            'healthChecks': ['$(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'],
+            'instances': []
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-api-forwarding-rule',
+        'type': 'compute.v1.forwardingRule',
+        'properties': {
+            'region': context.properties['region'],
+            'IPAddress': '$(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)',
+            'target': '$(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)',
+            'portRange': '6443'
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-ign-http-health-check',
+        'type': 'compute.v1.httpHealthCheck',
+        'properties': {
+            'port': 22624,
+            'requestPath': '/healthz'
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-ign-target-pool',
+        'type': 'compute.v1.targetPool',
+        'properties': {
+            'region': context.properties['region'],
+            'healthChecks': ['$(ref.' + context.properties['infra_id'] + '-ign-http-health-check.selfLink)'],
+            'instances': []
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-ign-forwarding-rule',
+        'type': 'compute.v1.forwardingRule',
+        'properties': {
+            'region': context.properties['region'],
+            'IPAddress': '$(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)',
+            'target': '$(ref.' + context.properties['infra_id'] + '-ign-target-pool.selfLink)',
+            'portRange': '22623'
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-private-zone',
+        'type': 'dns.v1.managedZone',
+        'properties': {
+            'description': '',
+            'dnsName': context.properties['cluster_domain'] + '.',
+            'visibility': 'private',
+            'privateVisibilityConfig': {
+                'networks': [{
+                    'networkUrl': context.properties['cluster_network']
+                }]
+            }
+        }
+    }]
+
+    return {'resources': resources}
+----

modules/installation-deployment-manager-security.adoc (new file, 153 lines)
@@ -0,0 +1,153 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc
+
+[id="installation-deployment-manager-security_{context}"]
+= Deployment Manager template for firewall rules and IAM roles
+
+You can use the following Deployment Manager template to deploy the security
+objects that you need for your {product-title} cluster:
+
+.`03_security.py` Deployment Manager template
+[source,python]
+----
+def GenerateConfig(context):
+
+    resources = [{
+        'name': context.properties['infra_id'] + '-api',
+        'type': 'compute.v1.firewall',
+        'properties': {
+            'network': context.properties['cluster_network'],
+            'allowed': [{
+                'IPProtocol': 'tcp',
+                'ports': ['6443']
+            }],
+            'sourceRanges': ['0.0.0.0/0'],
+            'targetTags': [context.properties['infra_id'] + '-master']
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-mcs',
+        'type': 'compute.v1.firewall',
+        'properties': {
+            'network': context.properties['cluster_network'],
+            'allowed': [{
+                'IPProtocol': 'tcp',
+                'ports': ['22623']
+            }],
+            'sourceRanges': [
+                context.properties['network_cidr'],
+                context.properties['master_nat_ip'],
+                context.properties['worker_nat_ip']
+            ],
+            'targetTags': [context.properties['infra_id'] + '-master']
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-health-checks',
+        'type': 'compute.v1.firewall',
+        'properties': {
+            'network': context.properties['cluster_network'],
+            'allowed': [{
+                'IPProtocol': 'tcp',
+                'ports': ['6080', '22624']
+            }],
+            'sourceRanges': ['35.191.0.0/16', '209.85.152.0/22', '209.85.204.0/22'],
+            'targetTags': [context.properties['infra_id'] + '-master']
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-etcd',
+        'type': 'compute.v1.firewall',
+        'properties': {
+            'network': context.properties['cluster_network'],
+            'allowed': [{
+                'IPProtocol': 'tcp',
+                'ports': ['2379-2380']
+            }],
+            'sourceTags': [context.properties['infra_id'] + '-master'],
+            'targetTags': [context.properties['infra_id'] + '-master']
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-control-plane',
+        'type': 'compute.v1.firewall',
+        'properties': {
+            'network': context.properties['cluster_network'],
+            'allowed': [{
+                'IPProtocol': 'tcp',
+                'ports': ['10257']
+            },{
+                'IPProtocol': 'tcp',
+                'ports': ['10259']
+            }],
+            'sourceTags': [
+                context.properties['infra_id'] + '-master',
+                context.properties['infra_id'] + '-worker'
+            ],
+            'targetTags': [context.properties['infra_id'] + '-master']
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-internal-network',
+        'type': 'compute.v1.firewall',
+        'properties': {
+            'network': context.properties['cluster_network'],
+            'allowed': [{
+                'IPProtocol': 'icmp'
+            },{
+                'IPProtocol': 'tcp',
+                'ports': ['22']
+            }],
+            'sourceRanges': [context.properties['network_cidr']],
+            'targetTags': [
+                context.properties['infra_id'] + '-master',
+                context.properties['infra_id'] + '-worker'
+            ]
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-internal-cluster',
+        'type': 'compute.v1.firewall',
+        'properties': {
+            'network': context.properties['cluster_network'],
+            'allowed': [{
+                'IPProtocol': 'udp',
+                'ports': ['4789', '6081']
+            },{
+                'IPProtocol': 'tcp',
+                'ports': ['9000-9999']
+            },{
+                'IPProtocol': 'udp',
+                'ports': ['9000-9999']
+            },{
+                'IPProtocol': 'tcp',
+                'ports': ['10250']
+            },{
+                'IPProtocol': 'tcp',
+                'ports': ['30000-32767']
+            },{
+                'IPProtocol': 'udp',
+                'ports': ['30000-32767']
+            }],
+            'sourceTags': [
+                context.properties['infra_id'] + '-master',
+                context.properties['infra_id'] + '-worker'
+            ],
+            'targetTags': [
+                context.properties['infra_id'] + '-master',
+                context.properties['infra_id'] + '-worker'
+            ]
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-master-node-sa',
+        'type': 'iam.v1.serviceAccount',
+        'properties': {
+            'accountId': context.properties['infra_id'] + '-m',
+            'displayName': context.properties['infra_id'] + '-master-node'
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-worker-node-sa',
+        'type': 'iam.v1.serviceAccount',
+        'properties': {
+            'accountId': context.properties['infra_id'] + '-w',
+            'displayName': context.properties['infra_id'] + '-worker-node'
+        }
+    }]
+
+    return {'resources': resources}
+----
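
The template above creates seven firewall rules plus the two service accounts. Listing the rules by the infrastructure name prefix is a quick way to confirm nothing was dropped or renamed:

----
$ gcloud compute firewall-rules list --filter="name~${INFRA_ID}" --format="table(name,sourceRanges.list(),targetTags.list())"
----
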

modules/installation-deployment-manager-vpc.adoc (new file, 82 lines)
@@ -0,0 +1,82 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc
+
+[id="installation-deployment-manager-vpc_{context}"]
+= Deployment Manager template for the VPC
+
+You can use the following Deployment Manager template to deploy the VPC that
+you need for your {product-title} cluster:
+
+.`01_vpc.py` Deployment Manager template
+[source,python]
+----
+def GenerateConfig(context):
+
+    resources = [{
+        'name': context.properties['infra_id'] + '-network',
+        'type': 'compute.v1.network',
+        'properties': {
+            'region': context.properties['region'],
+            'autoCreateSubnetworks': False
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-master-subnet',
+        'type': 'compute.v1.subnetwork',
+        'properties': {
+            'region': context.properties['region'],
+            'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)',
+            'ipCidrRange': context.properties['master_subnet_cidr']
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-worker-subnet',
+        'type': 'compute.v1.subnetwork',
+        'properties': {
+            'region': context.properties['region'],
+            'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)',
+            'ipCidrRange': context.properties['worker_subnet_cidr']
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-master-nat-ip',
+        'type': 'compute.v1.address',
+        'properties': {
+            'region': context.properties['region']
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-worker-nat-ip',
+        'type': 'compute.v1.address',
+        'properties': {
+            'region': context.properties['region']
+        }
+    }, {
+        'name': context.properties['infra_id'] + '-router',
+        'type': 'compute.v1.router',
+        'properties': {
+            'region': context.properties['region'],
+            'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)',
+            'nats': [{
+                'name': context.properties['infra_id'] + '-nat-master',
+                'natIpAllocateOption': 'MANUAL_ONLY',
+                'natIps': ['$(ref.' + context.properties['infra_id'] + '-master-nat-ip.selfLink)'],
+                'minPortsPerVm': 7168,
+                'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS',
+                'subnetworks': [{
+                    'name': '$(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)',
+                    'sourceIpRangesToNat': ['ALL_IP_RANGES']
+                }]
+            }, {
+                'name': context.properties['infra_id'] + '-nat-worker',
+                'natIpAllocateOption': 'MANUAL_ONLY',
+                'natIps': ['$(ref.' + context.properties['infra_id'] + '-worker-nat-ip.selfLink)'],
+                'minPortsPerVm': 128,
+                'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS',
+                'subnetworks': [{
+                    'name': '$(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)',
+                    'sourceIpRangesToNat': ['ALL_IP_RANGES']
+                }]
+            }]
+        }
+    }]
+
+    return {'resources': resources}
+----
52
modules/installation-deployment-manager-worker.adoc
Normal file
52
modules/installation-deployment-manager-worker.adoc
Normal file
@@ -0,0 +1,52 @@
// Module included in the following assemblies:
//
// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc

[id="installation-deployment-manager-worker_{context}"]
= Deployment Manager template for worker machines

You can use the following Deployment Manager template to deploy the worker machines
that you need for your {product-title} cluster:

.`06_worker.py` Deployment Manager template
[source,python]
----
def GenerateConfig(context):

    resources = [{
        'name': context.properties['infra_id'] + '-' + context.env['name'],
        'type': 'compute.v1.instance',
        'properties': {
            'disks': [{
                'autoDelete': True,
                'boot': True,
                'initializeParams': {
                    'diskSizeGb': context.properties['root_volume_size'],
                    'sourceImage': context.properties['image']
                }
            }],
            'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'],
            'metadata': {
                'items': [{
                    'key': 'user-data',
                    'value': context.properties['ignition']
                }]
            },
            'networkInterfaces': [{
                'subnetwork': context.properties['compute_subnet']
            }],
            'serviceAccounts': [{
                'email': context.properties['service_account_email'],
                'scopes': ['https://www.googleapis.com/auth/cloud-platform']
            }],
            'tags': {
                'items': [
                    context.properties['infra_id'] + '-worker',
                ]
            },
            'zone': context.properties['zone']
        }
    }]

    return {'resources': resources}
----
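
As with the other templates, `06_worker.py` is instantiated through a Deployment
Manager configuration. The following is a sketch only; the variable names
(`${ZONE_0}`, `${COMPUTE_SUBNET}`, `${CLUSTER_IMAGE}`,
`${WORKER_SERVICE_ACCOUNT_EMAIL}`, `${WORKER_IGNITION}`) and the machine type
are assumptions, and the resource `name` in the configuration becomes
`context.env['name']` inside the template:

----
$ cat <<EOF >06_worker.yaml
imports:
- path: 06_worker.py

resources:
- name: 'w-0'
  type: 06_worker.py
  properties:
    infra_id: '${INFRA_ID}'
    zone: '${ZONE_0}'
    compute_subnet: '${COMPUTE_SUBNET}'
    image: '${CLUSTER_IMAGE}'
    machine_type: 'n1-standard-4'
    root_volume_size: '128'
    service_account_email: '${WORKER_SERVICE_ACCOUNT_EMAIL}'
    ignition: '${WORKER_IGNITION}'
EOF

$ gcloud deployment-manager deployments create ${INFRA_ID}-worker --config 06_worker.yaml
----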

@@ -2,13 +2,25 @@
//
// * installing/installing_aws_user_infra/installing-aws-user-infra.adoc
// * installing/installing_restricted_networks/installing-restricted-networks-aws.adoc
// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc

ifeval::["{context}" == "installing-aws-user-infra"]
:cp-first: Amazon Web Services
:cp: AWS
:cp-template: CloudFormation
endif::[]
ifeval::["{context}" == "installing-gcp-user-infra"]
:cp-first: Google Cloud Platform
:cp: GCP
:cp-template: Deployment Manager
endif::[]

[id="installation-extracting-infraid_{context}"]
= Extracting the infrastructure name

The Ignition configs contain a unique cluster identifier that you can use to
uniquely identify your cluster in Amazon Web Services (AWS) tags. The provided
CloudFormation templates contain references to this tag, so you must extract
uniquely identify your cluster in {cp-first} ({cp}). The provided {cp-template}
templates contain references to this infrastructure name, so you must extract
it.

.Prerequisites
@@ -19,8 +31,8 @@ it.

.Procedure

* To extract the infrastructure name from the Ignition config file metadata, run
the following command:
* To extract and view the infrastructure name from the Ignition config file
metadata, run the following command:
+
----
$ jq -r .infraID <installation_directory>/metadata.json <1>
@@ -29,6 +41,3 @@ openshift-vw9j6 <2>
<1> For `<installation_directory>`, specify the path to the directory that you stored the
installation files in.
<2> The output of this command is your cluster name and a random string.
+
You need the output of this command to configure the provided CloudFormation
templates and can use it in other AWS tags.

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * installing/installing_gcp/installing-gcp-account.adoc
// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc

[id="installation-gcp-dns_{context}"]
= Configuring DNS for GCP

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * installing/installing_gcp/installing-gcp-account.adoc
// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc

[id="installation-gcp-enabling-api-services_{context}"]
= Enabling API services in GCP
@@ -23,6 +24,12 @@ in the GCP documentation.
[cols="2a,3a",options="header"]
|===
|API service |Console service name

ifeval::["{context}" == "installing-gcp-user-infra"]
|Cloud Deployment Manager V2 API
|`deploymentmanager.googleapis.com`
endif::[]

|Compute Engine API
|`compute.googleapis.com`

@@ -38,7 +45,7 @@ in the GCP documentation.
|IAM Service Account Credentials API
|`iamcredentials.googleapis.com`

|Identity and Access Management (IAM API)
|Identity and Access Management (IAM) API
|`iam.googleapis.com`

|Service Management API
@@ -52,4 +59,5 @@ in the GCP documentation.

|Cloud Storage
|`storage-component.googleapis.com`

|===

modules/installation-gcp-install-cli.adoc (new file, 28 lines)
@@ -0,0 +1,28 @@
// Module included in the following assemblies:
//
// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc

[id="installation-gcp-install-cli_{context}"]
= Installing and configuring CLI tools for GCP

To install {product-title} on Google Cloud Platform (GCP) by using user-provisioned
infrastructure, you must install and configure the CLI tools for GCP.

.Prerequisites

* You created a project to host your cluster.
* You created a service account and granted it the required permissions.

.Procedure

. Install the following binaries in `$PATH`:
+
--
* `gcloud`
* `gsutil`
--
+
See link:https://cloud.google.com/sdk/docs/#install_the_latest_cloud_tools_version_cloudsdk_current_version[Install the latest Cloud SDK version]
in the GCP documentation.

. Authenticate using the `gcloud` tool with your configured service account.
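+
For example, one way to authenticate with a service account key is shown in the
following sketch; the key file path and project placeholder are assumptions:
+
----
$ gcloud auth activate-service-account --key-file=<service_account_key_file>
$ gcloud config set project <project_name>
----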

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * installing/installing_gcp/installing-gcp-account.adoc
// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc

[id="installation-gcp-limits_{context}"]
= GCP account limits
@@ -24,6 +25,7 @@ the bootstrap process and are removed after the cluster deploys.
|Total resources required
|Resources removed after bootstrap

ifeval::["{context}" == "installing-gcp-account"]
|Service account |IAM |Global |5 |0
|Firewall Rules |Compute |Global |35 |1
|Forwarding Rules |Compute |Global |3 |0
@@ -38,5 +40,22 @@ the bootstrap process and are removed after the cluster deploys.
|Target Pools |Compute |Global |3 |0
|CPUs |Compute |Region |28 |4
|Persistent Disk SSD (GB) |Compute |Region |896 |128
endif::[]

ifeval::["{context}" == "installing-gcp-user-infra"]
|Service account |IAM |Global |5 |0
|Firewall Rules |Networking |Global |35 |1
|Forwarding Rules |Compute |Global |2 |0
// |In-use IP addresses global |Networking |Global |4 |1
|Health checks |Compute |Global |2 |0
|Images |Compute |Global |1 |0
|Networks |Networking |Global |1 |0
// |Static IP addresses |Compute |Region |4 |1
|Routers |Networking |Global |1 |0
|Routes |Networking |Global |3 |0
|Subnetworks |Compute |Global |2 |0
|Target Pools |Networking |Global |2 |0
// |CPUs |Compute |Region |28 |4
// |Persistent Disk SSD (GB) |Compute |Region |896 |128
endif::[]
|===

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * installing/installing_gcp/installing-gcp-account.adoc
// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc

[id="installation-gcp-permissions_{context}"]
= Required GCP permissions
@@ -18,6 +19,12 @@ account requires the following permissions:
* Service Account User
* Storage Admin

ifeval::["{context}" == "installing-gcp-user-infra"]
.Required roles for user-provisioned GCP infrastructure
* Deployment Manager Editor
* Service Account Key Admin
endif::[]

.Optional roles
For the cluster to create new limited credentials for its Operators, add
the following role:
@@ -36,7 +43,7 @@ machines use:

.5+|Control Plane
|`roles/compute.instanceAdmin`
|`roles/network.admin`
|`roles/compute.networkAdmin`
|`roles/compute.securityAdmin`
|`roles/storage.admin`
|`roles/iam.serviceAccountUser`

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * installing/installing_gcp/installing-gcp-account.adoc
// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc

[id="installation-gcp-regions_{context}"]
= Supported GCP regions

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * installing/installing_gcp/installing-gcp-account.adoc
// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc

[id="installation-gcp-service-account_{context}"]
= Creating a service account in GCP
@@ -20,7 +21,7 @@ in the GCP documentation.

. Grant the service account the appropriate permissions. You can either
grant the individual permissions that follow or assign the `Owner` role to it.
See link:https://cloud.google.com/iam/docs/granting-roles-to-service-accounts#granting_access_to_a_service_account_for_a_resource[Granting roles to a service account for specific resources]
See link:https://cloud.google.com/iam/docs/granting-roles-to-service-accounts#granting_access_to_a_service_account_for_a_resource[Granting roles to a service account for specific resources].

. Create the service account key.
See link:https://cloud.google.com/iam/docs/creating-managing-service-account-keys#creating_service_account_keys[Creating service account keys]

modules/installation-gcp-user-infra-adding-ingress.adoc (new file, 63 lines)
@@ -0,0 +1,63 @@
// Module included in the following assemblies:
//
// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc

[id="installation-gcp-user-infra-adding-ingress_{context}"]
= Optional: Adding the ingress DNS records

If you removed the DNS Zone configuration when creating Kubernetes manifests and
generating Ignition configs, you must manually create DNS records that point at
the ingress load balancer. You can create either a wildcard
`*.apps.{baseDomain}.` or specific records. You can use A, CNAME, and other
records per your requirements.

.Prerequisites

* Configure a GCP account.
* Remove the DNS Zone configuration when creating Kubernetes manifests and
generating Ignition configs.
* Create and configure a VPC and associated subnets in GCP.
* Create and configure networking and load balancers in GCP.
* Create control plane and compute roles.
* Create the bootstrap machine.
* Create the control plane machines.
* Create the worker machines.

.Procedure

. Wait for the Ingress router to create a load balancer and populate the `EXTERNAL-IP` field:
+
----
$ oc -n openshift-ingress get service router-default
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
router-default   LoadBalancer   172.30.18.154   35.233.157.184   80:32288/TCP,443:31215/TCP   98
----

. Add the A record to your public and private zones:
+
----
$ export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`

$ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
$ gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME}
$ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME}
$ gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}

$ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
$ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone
$ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${INFRA_ID}-private-zone
$ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone
----
+
If you prefer to add explicit domains instead of using a wildcard, you can
create entries for each of the cluster's current routes:
+
----
$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes
oauth-openshift.apps.your.cluster.domain.example.com
console-openshift-console.apps.your.cluster.domain.example.com
downloads-openshift-console.apps.your.cluster.domain.example.com
alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com
grafana-openshift-monitoring.apps.your.cluster.domain.example.com
prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com
----
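+
For example, to publish just the first of those routes explicitly, the same
transaction flow applies; this is a sketch using the `oauth-openshift` host from
the output above:
+
----
$ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
$ gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME}
$ gcloud dns record-sets transaction add ${ROUTER_IP} --name oauth-openshift.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME}
$ gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}
----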

modules/installation-gcp-user-infra-completing.adoc (new file, 97 lines)
@@ -0,0 +1,97 @@
// Module included in the following assemblies:
//
// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc

[id="installation-gcp-user-infra-installation_{context}"]
= Completing a GCP installation on user-provisioned infrastructure

After you start the {product-title} installation on Google Cloud Platform (GCP)
user-provisioned infrastructure, you can monitor the cluster events until the
cluster is ready.

.Prerequisites

* Deploy the bootstrap machine for an {product-title} cluster on user-provisioned GCP infrastructure.
* Install the `oc` CLI and log in.

.Procedure

. Complete the cluster installation:
+
----
$ ./openshift-install --dir=<installation_directory> wait-for install-complete <1>

INFO Waiting up to 30m0s for the cluster to initialize...
----
<1> For `<installation_directory>`, specify the path to the directory that you
stored the installation files in.

. Observe the running state of your cluster.
+
--
.. Run the following command to view the current cluster version and status:
+
----
$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version             False       True          24m     Working towards 4.2.0-0: 99% complete
----

.. Run the following command to view the Operators managed on the control plane by
the Cluster Version Operator (CVO):
+
----
$ oc get clusteroperators
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.2.0-0   True        False         False      6m18s
cloud-credential                           4.2.0-0   True        False         False      17m
cluster-autoscaler                         4.2.0-0   True        False         False      80s
console                                    4.2.0-0   True        False         False      3m57s
dns                                        4.2.0-0   True        False         False      22m
image-registry                             4.2.0-0   True        False         False      5m4s
ingress                                    4.2.0-0   True        False         False      4m38s
insights                                   4.2.0-0   True        False         False      21m
kube-apiserver                             4.2.0-0   True        False         False      12m
kube-controller-manager                    4.2.0-0   True        False         False      12m
kube-scheduler                             4.2.0-0   True        False         False      11m
machine-api                                4.2.0-0   True        False         False      18m
machine-config                             4.2.0-0   True        False         False      22m
marketplace                                4.2.0-0   True        False         False      5m38s
monitoring                                 4.2.0-0   True        False         False      86s
network                                    4.2.0-0   True        False         False      14m
node-tuning                                4.2.0-0   True        False         False      6m8s
openshift-apiserver                        4.2.0-0   True        False         False      6m48s
openshift-controller-manager               4.2.0-0   True        False         False      12m
openshift-samples                          4.2.0-0   True        False         False      67s
operator-lifecycle-manager                 4.2.0-0   True        False         False      15m
operator-lifecycle-manager-catalog         4.2.0-0   True        False         False      15m
operator-lifecycle-manager-packageserver   4.2.0-0   True        False         False      6m48s
service-ca                                 4.2.0-0   True        False         False      17m
service-catalog-apiserver                  4.2.0-0   True        False         False      6m18s
service-catalog-controller-manager         4.2.0-0   True        False         False      6m19s
storage                                    4.2.0-0   True        False         False      6m20s
----

.. Run the following command to view your cluster Pods:
+
----
$ oc get pods --all-namespaces
NAMESPACE                                               NAME                                                              READY   STATUS    RESTARTS   AGE
kube-system                                             etcd-member-ip-10-0-3-111.us-east-2.compute.internal              1/1     Running   0          35m
kube-system                                             etcd-member-ip-10-0-3-239.us-east-2.compute.internal              1/1     Running   0          37m
kube-system                                             etcd-member-ip-10-0-3-24.us-east-2.compute.internal               1/1     Running   0          35m
openshift-apiserver-operator                            openshift-apiserver-operator-6d6674f4f4-h7t2t                     1/1     Running   1          37m
openshift-apiserver                                     apiserver-fm48r                                                   1/1     Running   0          30m
openshift-apiserver                                     apiserver-fxkvv                                                   1/1     Running   0          29m
openshift-apiserver                                     apiserver-q85nm                                                   1/1     Running   0          29m
...
openshift-service-ca-operator                           openshift-service-ca-operator-66ff6dc6cd-9r257                    1/1     Running   0          37m
openshift-service-ca                                    apiservice-cabundle-injector-695b6bcbc-cl5hm                      1/1     Running   0          35m
openshift-service-ca                                    configmap-cabundle-injector-8498544d7-25qn6                       1/1     Running   0          35m
openshift-service-ca                                    service-serving-cert-signer-6445fc9c6-wqdqn                       1/1     Running   0          35m
openshift-service-catalog-apiserver-operator            openshift-service-catalog-apiserver-operator-549f44668b-b5q2w     1/1     Running   0          32m
openshift-service-catalog-controller-manager-operator   openshift-service-catalog-controller-manager-operator-b78cr2lnm  1/1     Running   0          31m
----
--
+
When the current cluster version is `AVAILABLE`, the installation is complete.

modules/installation-gcp-user-infra-rhcos.adoc (new file, 40 lines)
@@ -0,0 +1,40 @@
// Module included in the following assemblies:
//
// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc

[id="installation-gcp-user-infra-rhcos_{context}"]
= Creating the {op-system} cluster image for the GCP infrastructure

You must use a valid {op-system-first} image for Google Cloud Platform (GCP) for
your {product-title} nodes.

.Procedure

. Obtain the {op-system} image from the
link:https://access.redhat.com/downloads/content/290[Product Downloads] page on the Red
Hat customer portal or the
link:https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.2/[{op-system} image mirror]
page.
+
[IMPORTANT]
====
The {op-system} images might not change with every release of {product-title}.
You must download an image with the highest version that is
less than or equal to the {product-title} version that you install. Use the image version
that matches your {product-title} version if it is available.
====
+
The file name contains the {product-title} version number in the format
`rhcos-<version>-gcp.tar`.

. Export the following variable:
+
----
$ export IMAGE_SOURCE=<downloaded_image_file_path>
----
. Create the cluster image:
+
----
$ gcloud compute images create "${INFRA_ID}-rhcos-image" \
    --source-uri="${IMAGE_SOURCE}"
----
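+
Note that `--source-uri` expects a Cloud Storage URI. If the image tar exists
only on your local machine, one approach is to upload it first and point
`IMAGE_SOURCE` at the resulting object; this is a sketch, and the bucket name is
an assumption:
+
----
$ gsutil mb gs://${INFRA_ID}-rhcos-image-src
$ gsutil cp <downloaded_image_file_path> gs://${INFRA_ID}-rhcos-image-src/rhcos-gcp.tar
$ export IMAGE_SOURCE="gs://${INFRA_ID}-rhcos-image-src/rhcos-gcp.tar"
----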

modules/installation-gcp-user-infra-wait-for-bootstrap.adoc (new file, 48 lines)
@@ -0,0 +1,48 @@
// Module included in the following assemblies:
//
// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc

[id="installation-gcp-user-infra-wait-for-bootstrap_{context}"]
= Wait for bootstrap completion and remove bootstrap resources in GCP

After you create all of the required infrastructure in Google Cloud Platform
(GCP), wait for the bootstrap process to complete on the machines that you
provisioned by using the Ignition config files that you generated with the
installation program.

.Prerequisites

* Configure a GCP account.
* Generate the Ignition config files for your cluster.
* Create and configure a VPC and associated subnets in GCP.
* Create and configure networking and load balancers in GCP.
* Create control plane and compute roles.
* Create the bootstrap machine.
* Create the control plane machines.

.Procedure

. Change to the directory that contains the installation program and run the
following command:
+
----
$ ./openshift-install wait-for bootstrap-complete --dir=<installation_directory> \ <1>
    --log-level info <2>
----
<1> For `<installation_directory>`, specify the path to the directory that you
stored the installation files in.
<2> To view different installation details, specify `warn`, `debug`, or
`error` instead of `info`.
+
If the command exits without a `FATAL` warning, your production control plane
has initialized.

. Delete the bootstrap resources:
+
----
$ gcloud compute target-pools remove-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-bootstrap
$ gcloud compute target-pools remove-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-bootstrap
$ gsutil rm gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign
$ gsutil rb gs://${INFRA_ID}-bootstrap-ignition
$ gcloud deployment-manager deployments delete ${INFRA_ID}-bootstrap
----
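+
To confirm that the bootstrap deployment is gone, you can list the remaining
deployments; this is a sketch, and the filter expression is an assumption:
+
----
$ gcloud deployment-manager deployments list --filter="name:${INFRA_ID}-bootstrap"
----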

@@ -1,17 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_aws_user_infra/installing-aws-user-infra.adoc
// * installing/installing_restricted_networks/installing-restricted-networks-aws.adoc

ifeval::["{context}" == "installing-restricted-networks-aws"]
:restricted:
endif::[]

[id="installation-generate-aws-user-infra_{context}"]
= Creating the installation files for AWS

To install {product-title} on Amazon Web Services (AWS) using user-provisioned
infrastructure, you must generate the files that the installation
program needs to deploy your cluster and modify them so that the cluster creates
only the machines that it will use. You generate and customize the
`install_config.yaml` file, Kubernetes manifests, and Ignition config files.

@@ -4,6 +4,7 @@
// * installing/installing_aws/installing-aws-network-customizations.adoc
// * installing/installing_azure/installing-azure-customizations.adoc
// * installing/installing_gcp/installing-gcp-customizations.adoc
// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc
// * installing/installing_openstack/installing-openstack-installer-custom.adoc
// * installing/installing_openstack/installing-openstack-installer-kuryr.adoc
// Consider also adding the installation-configuration-parameters.adoc module.
@@ -21,6 +22,9 @@ endif::[]
ifeval::["{context}" == "installing-gcp-customizations"]
:gcp:
endif::[]
ifeval::["{context}" == "installing-gcp-user-infra"]
:gcp:
endif::[]
ifeval::["{context}" == "installing-openstack-installer-custom"]
:osp:
endif::[]
@@ -141,6 +145,22 @@ endif::gcp[]
... Paste the pull secret that you obtained from the
link:https://cloud.redhat.com/openshift/install[OpenShift Infrastructure Providers] page.

ifeval::["{context}" == "installing-gcp-user-infra"]
.. Optional: If you do not want the cluster to provision compute machines, empty
the compute pool by editing the resulting `install-config.yaml` file to set
`replicas` to `0` for the `compute` pool:
+
[source,yaml]
----
compute:
- hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 0 <1>
----
<1> Set to `0`.
endif::[]

. Modify the `install-config.yaml` file. You can find more information about
the available parameters in the *Installation configuration parameters* section.

@@ -9,7 +9,7 @@
= Creating the cluster

To create the {product-title} cluster, you wait for the bootstrap process to
complete on the machines that you provisoned by using the
complete on the machines that you provisioned by using the
Ignition config files that you generated with the installation program.

.Prerequisites

@@ -0,0 +1,48 @@
// Module included in the following assemblies:
//
// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc

ifeval::["{context}" == "installing-gcp-user-infra"]
:cp-first: Google Cloud Platform
:cp: GCP
:cp-template: Deployment Manager
endif::[]

[id="installation-user-infra-exporting-common-variables_{context}"]
= Exporting common variables for {cp-template} templates

You must export a common set of variables that are used with the provided
{cp-template} templates, which assist in completing a user-provided
infrastructure install on {cp-first} ({cp}).

[NOTE]
====
Specific {cp-template} templates can also require additional exported
variables, which are detailed in their related procedures.
====

.Prerequisites

* Obtain the {product-title} installation program and the pull secret for your cluster.
* Generate the Ignition config files for your cluster.
* Install the `jq` package.

.Procedure

* Export the following common variables to be used by the provided {cp-template}
templates:
+
----
$ export BASE_DOMAIN='<base_domain>'
$ export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>'
$ export NETWORK_CIDR='10.0.0.0/16'
$ export MASTER_SUBNET_CIDR='10.0.0.0/19'
$ export WORKER_SUBNET_CIDR='10.0.32.0/19'

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig <1>
$ export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json`
$ export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json`
$ export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json`
$ export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`
----
<1> For `<installation_directory>`, specify the path to the directory that you stored the installation files in.
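+
As a quick sanity check, you can confirm that the values derived from
`metadata.json` are populated before you continue; this is a sketch:
+
----
$ echo "${CLUSTER_NAME} ${INFRA_ID} ${PROJECT_NAME} ${REGION}"
----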

@@ -2,12 +2,19 @@
//
// * installing/installing_aws_user_infra/installing-aws-user-infra.adoc
// * installing/installing_restricted_networks/installing-restricted-networks-aws.adoc
// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc

ifeval::["{context}" == "installing-restricted-networks-aws"]
:restricted:
endif::[]
ifeval::["{context}" == "installing-aws-user-infra"]
:aws:
endif::[]
ifeval::["{context}" == "installing-gcp-user-infra"]
:gcp:
endif::[]

[id="installation-generate-aws-user-infra-ignition_{context}"]
[id="installation-user-infra-generate-k8s-manifest-ignition_{context}"]
= Creating the Kubernetes manifest and Ignition config files

Because you must manually start the cluster machines, you must generate the
@@ -54,6 +61,7 @@ $ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml
+
By removing these files, you prevent the cluster from automatically generating control plane machines.

ifdef::aws[]
. Remove the Kubernetes manifest files that define the worker machines:
+
----
@@ -62,6 +70,54 @@ $ rm -f openshift/99_openshift-cluster-api_worker-machineset-*
+
Because you create and manage the worker machines yourself, you do not need
to initialize these machines.
endif::[]

ifdef::gcp[]
. Optional: If you do not want the cluster to provision compute machines, remove
the Kubernetes manifest files that define the worker machines:
+
----
$ rm -f openshift/99_openshift-cluster-api_worker-machineset-*.yaml
----
+
Because you create and manage the worker machines yourself, you do not need
to initialize these machines.

. Optional: Currently, emptying the compute pools makes control plane machines
schedulable. However, due to a
link:https://github.com/kubernetes/kubernetes/issues/65618[Kubernetes limitation],
router Pods running on control plane machines will not be reachable by the
ingress load balancer.
+
If you emptied the compute pool in an earlier step, ensure that the
`mastersSchedulable` parameter is set to `false` in the
`manifests/cluster-scheduler-02-config.yml` scheduler configuration file to keep
router Pods and other workloads off the control plane machines.
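+
For reference, this is a minimal sketch of what
`manifests/cluster-scheduler-02-config.yml` can look like after that change;
treat the fields other than `mastersSchedulable` as illustrative:
+
[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: null
  name: cluster
spec:
  mastersSchedulable: false
  policy:
    name: ""
status: {}
----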

. Optional: If you do not want
link:https://github.com/openshift/cluster-ingress-operator[the Ingress Operator]
to create DNS records on your behalf, remove the `privateZone` and `publicZone`
sections from the `manifests/cluster-dns-02-config.yml` DNS configuration file:
+
[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: null
  name: cluster
spec:
  baseDomain: example.openshift.com
  privateZone: <1>
    id: mycluster-100419-private-zone
  publicZone: <1>
    id: example.openshift.com
status: {}
----
<1> Remove these sections completely.
+
If you do so, you must add ingress DNS records manually in a later step.
endif::[]

. Obtain the Ignition config files:
+

modules/installation-user-infra-generate.adoc (new file, 28 lines)
@@ -0,0 +1,28 @@
// Module included in the following assemblies:
//
// * installing/installing_aws_user_infra/installing-aws-user-infra.adoc
// * installing/installing_gcp_user_infra/installing-gcp-user-infra.adoc
// * installing/installing_restricted_networks/installing-restricted-networks-aws.adoc

ifeval::["{context}" == "installing-restricted-networks-aws"]
:restricted:
:cp-first: Amazon Web Services
:cp: AWS
endif::[]
ifeval::["{context}" == "installing-aws-user-infra"]
:cp-first: Amazon Web Services
:cp: AWS
endif::[]
ifeval::["{context}" == "installing-gcp-user-infra"]
:cp-first: Google Cloud Platform
:cp: GCP
endif::[]

[id="installation-user-infra-generate_{context}"]
= Creating the installation files for {cp}

To install {product-title} on {cp-first} ({cp}) by using user-provisioned
infrastructure, you must generate the files that the installation
program needs to deploy your cluster and modify them so that the cluster creates
only the machines that it will use. You generate and customize the
`install-config.yaml` file, Kubernetes manifests, and Ignition config files.