
gcp upi: enable internal load balancers

This change adds the 02_lb_int.py template to the workflow to enable
internal load balancers. The cluster will begin communicating with the api
and mcs through the internal load balancers. The external load balancer
can optionally be disabled for private clusters.

This change also updates the documentation to use the $(command) syntax
to be in line with the other platforms.

In addition, the variable definitions were all moved to immediately
after the associated resources are created, which makes their origins clear.
This commit is contained in:
Jeremiah Stuever
2020-03-10 18:54:30 -07:00
parent dd1763199d
commit 45ccd3fe2c
5 changed files with 236 additions and 194 deletions

View File

@@ -1,5 +1,4 @@
# Install: User-Provided Infrastructure
The steps for performing a user-provided infrastructure install are outlined here. Several
[Deployment Manager][deploymentmanager] templates are provided to assist in
completing these steps or to help model your own. You are also free to create
@@ -7,7 +6,6 @@ the required resources through other methods; the templates are just an
example.
## Prerequisites
* all prerequisites from [README](README.md)
* the following binaries installed and in $PATH:
* gcloud
@@ -19,13 +17,11 @@ example.
* Cloud Deployment Manager V2 API (deploymentmanager.googleapis.com)
## Create Ignition configs
The machines will be started manually. Therefore, you must generate
the bootstrap and machine Ignition configs and store them for later steps.
Use a [staged install](../overview.md#multiple-invocations) to enable desired customizations.
### Create an install config
Create an install configuration as for [the usual approach](install.md#create-configuration).
```console
@@ -40,7 +36,6 @@ $ openshift-install create install-config
```
### Empty the compute pool (optional)
If you do not want the cluster to provision compute machines, edit the resulting `install-config.yaml` to set `replicas` to 0 for the `compute` pool.
```sh
@@ -53,7 +48,6 @@ open(path, "w").write(yaml.dump(data, default_flow_style=False))'
```
### Create manifests
Create manifests to enable customizations that are not exposed via the install configuration.
```console
@@ -62,7 +56,6 @@ INFO Consuming "Install Config" from target directory
```
### Remove control plane machines
Remove the control plane machines from the manifests.
We'll be providing those ourselves and don't want to involve [the machine-API operator][machine-api-operator].
@@ -71,7 +64,6 @@ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml
```
### Remove compute machinesets (Optional)
If you do not want the cluster to provision compute machines, remove the compute machinesets from the manifests as well.
```sh
@@ -79,7 +71,6 @@ rm -f openshift/99_openshift-cluster-api_worker-machineset-*.yaml
```
### Make control-plane nodes unschedulable
Currently [emptying the compute pool](#empty-the-compute-pool-optional) makes control-plane nodes schedulable.
But due to a [Kubernetes limitation][kubernetes-service-load-balancers-exclude-masters], router pods running on control-plane nodes will not be reachable by the ingress load balancer.
Update the scheduler configuration to keep router pods and other workloads off the control-plane nodes:
@@ -94,9 +85,8 @@ open(path, "w").write(yaml.dump(data, default_flow_style=False))'
```
### Remove DNS Zones (Optional)
If you don't want [the ingress operator][ingress-operator] to create DNS records on your behalf, remove the `privateZone` and `publicZone` sections from the DNS configuration.
If you do so, you'll need to [add ingress DNS records manually](#add-the-ingress-dns-records-optional) later on.
If you do so, you'll need to [add ingress DNS records manually](#add-the-ingress-dns-records) later on.
```sh
python -c '
@@ -109,7 +99,6 @@ open(path, "w").write(yaml.dump(data, default_flow_style=False))'
```
### Create Ignition configs
Now we can create the bootstrap Ignition configs.
```console
@@ -129,8 +118,7 @@ $ tree
└── worker.ign
```
## Extract infrastructure name from Ignition metadata
### Extract infrastructure name from Ignition metadata
By default, Ignition generates a unique cluster identifier comprised of the
cluster name specified during the invocation of the installer and a short
string known internally as the infrastructure name. These values are seeded
@@ -145,7 +133,7 @@ $ jq -r .infraID metadata.json
openshift-vw9j6
```
Export variables to be used in examples below.
## Export variables to be used in examples below.
```sh
export BASE_DOMAIN='example.com'
@@ -155,11 +143,18 @@ export MASTER_SUBNET_CIDR='10.0.0.0/19'
export WORKER_SUBNET_CIDR='10.0.32.0/19'
export KUBECONFIG=auth/kubeconfig
export CLUSTER_NAME=`jq -r .clusterName metadata.json`
export INFRA_ID=`jq -r .infraID metadata.json`
export PROJECT_NAME=`jq -r .gcp.projectID metadata.json`
export REGION=`jq -r .gcp.region metadata.json`
export CLUSTER_NAME=$(jq -r .clusterName metadata.json)
export INFRA_ID=$(jq -r .infraID metadata.json)
export PROJECT_NAME=$(jq -r .gcp.projectID metadata.json)
export REGION=$(jq -r .gcp.region metadata.json)
export ZONE_0=$(gcloud compute regions describe ${REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9)
export ZONE_1=$(gcloud compute regions describe ${REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9)
export ZONE_2=$(gcloud compute regions describe ${REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9)
export MASTER_IGNITION=$(cat master.ign)
export WORKER_IGNITION=$(cat worker.ign)
```
## Create the VPC
Copy [`01_vpc.py`](../../../upi/gcp/01_vpc.py) locally.
@@ -191,16 +186,19 @@ Create the deployment using gcloud.
gcloud deployment-manager deployments create ${INFRA_ID}-vpc --config 01_vpc.yaml
```
## Configure VPC variables
```sh
export CLUSTER_NETWORK=$(gcloud compute networks describe ${INFRA_ID}-network --format json | jq -r .selfLink)
export CONTROL_SUBNET=$(gcloud compute networks subnets describe ${INFRA_ID}-master-subnet --region=${REGION} --format json | jq -r .selfLink)
export COMPUTE_SUBNET=$(gcloud compute networks subnets describe ${INFRA_ID}-worker-subnet --region=${REGION} --format json | jq -r .selfLink)
```
## Create DNS entries and load balancers
Copy [`02_dns.py`](../../../upi/gcp/02_dns.py) locally.
Copy [`02_lb_ext.py`](../../../upi/gcp/02_lb_ext.py) locally.
Export variables needed by the resource definition.
```sh
export CLUSTER_NETWORK=`gcloud compute networks describe ${INFRA_ID}-network --format json | jq -r .selfLink`
```
Copy [`02_lb_int.py`](../../../upi/gcp/02_lb_int.py) locally.
Create a resource definition file: `02_infra.yaml`
@@ -209,6 +207,7 @@ $ cat <<EOF >02_infra.yaml
imports:
- path: 02_dns.py
- path: 02_lb_ext.py
- path: 02_lb_int.py
resources:
- name: cluster-dns
type: 02_dns.py
@@ -221,12 +220,27 @@ resources:
properties:
infra_id: '${INFRA_ID}'
region: '${REGION}'
- name: cluster-lb-int
type: 02_lb_int.py
properties:
cluster_network: '${CLUSTER_NETWORK}'
control_subnet: '${CONTROL_SUBNET}'
infra_id: '${INFRA_ID}'
region: '${REGION}'
zones:
- '${ZONE_0}'
- '${ZONE_1}'
- '${ZONE_2}'
EOF
```
- `infra_id`: the infrastructure name (INFRA_ID above)
- `region`: the region to deploy the cluster into (for example us-east1)
- `cluster_domain`: the domain for the cluster (for example openshift.example.com)
- `cluster_network`: the URI to the cluster network
- `control_subnet`: the URI to the control subnet
- `zones`: the zones to deploy the control plane instances into (for example us-east1-b, us-east1-c, us-east1-d)
If you do not want external load balancers for the api, remove the `cluster-lb-ext` section from the file.
Create the deployment using gcloud.
@@ -234,19 +248,21 @@ Create the deployment using gcloud.
gcloud deployment-manager deployments create ${INFRA_ID}-infra --config 02_infra.yaml
```
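As an optional sanity check (not part of the original flow), you can confirm which resources a deployment actually created, for example to verify whether the `cluster-lb-ext` resources exist. A minimal sketch using standard gcloud commands; the same pattern works for any deployment in this guide:
```sh
# Describe the deployment and list the resources it owns (assumes ${INFRA_ID} is exported as above)
gcloud deployment-manager deployments describe ${INFRA_ID}-infra
gcloud deployment-manager resources list --deployment ${INFRA_ID}-infra
```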
## Configure infra variables
If you excluded the `cluster-lb-ext` section above, then skip `CLUSTER_PUBLIC_IP`.
```sh
export CLUSTER_IP=$(gcloud compute addresses describe ${INFRA_ID}-cluster-ip --region=${REGION} --format json | jq -r .address)
export CLUSTER_PUBLIC_IP=$(gcloud compute addresses describe ${INFRA_ID}-cluster-public-ip --region=${REGION} --format json | jq -r .address)
```
## Add DNS entries
The templates do not create DNS entries due to limitations of Deployment
Manager, so we must create them manually.
### Add internal DNS entries
```sh
export CLUSTER_IP=`gcloud compute addresses describe ${INFRA_ID}-cluster-public-ip --region=${REGION} --format json | jq -r .address`
# Add external DNS entries
if [ -f transaction.yaml ]; then rm transaction.yaml; fi
gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME}
gcloud dns record-sets transaction add ${CLUSTER_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${BASE_DOMAIN_ZONE_NAME}
gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}
# Add internal DNS entries
if [ -f transaction.yaml ]; then rm transaction.yaml; fi
gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone
gcloud dns record-sets transaction add ${CLUSTER_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone
@@ -254,18 +270,21 @@ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api-int.${CLUSTER_NA
gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone
```
### Add external DNS entries (optional)
If you deployed external load balancers with `02_infra.yaml`, you can deploy external DNS entries.
```sh
if [ -f transaction.yaml ]; then rm transaction.yaml; fi
gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME}
gcloud dns record-sets transaction add ${CLUSTER_PUBLIC_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${BASE_DOMAIN_ZONE_NAME}
gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}
```
## Create firewall rules and IAM roles
Copy [`03_firewall.py`](../../../upi/gcp/03_firewall.py) locally.
Copy [`03_iam.py`](../../../upi/gcp/03_iam.py) locally.
Export variables needed by the resource definition.
```sh
export MASTER_NAT_IP=`gcloud compute addresses describe ${INFRA_ID}-master-nat-ip --region ${REGION} --format json | jq -r .address`
export WORKER_NAT_IP=`gcloud compute addresses describe ${INFRA_ID}-worker-nat-ip --region ${REGION} --format json | jq -r .address`
```
Create a resource definition file: `03_security.yaml`
```console
@@ -277,23 +296,21 @@ resources:
- name: cluster-firewall
type: 03_firewall.py
properties:
allowed_external_cidr: '0.0.0.0/0'
infra_id: '${INFRA_ID}'
cluster_network: '${CLUSTER_NETWORK}'
network_cidr: '${NETWORK_CIDR}'
master_nat_ip: '${MASTER_NAT_IP}'
worker_nat_ip: '${WORKER_NAT_IP}'
- name: cluster-iam
type: 03_iam.py
properties:
infra_id: '${INFRA_ID}'
EOF
```
- `allowed_external_cidr`: limits access to the cluster API and ssh to the bootstrap host (for example, external: 0.0.0.0/0, internal: 10.0.0.0/16)
- `infra_id`: the infrastructure name (INFRA_ID above)
- `region`: the region to deploy the cluster into (for example us-east1)
- `cluster_network`: the URI to the cluster network
- `network_cidr`: the CIDR of the vpc network (for example 10.0.0.0/16)
- `master_nat_ip`: the ip address of the master nat (for example 34.94.100.1)
- `worker_nat_ip`: the ip address of the worker nat (for example 34.94.200.1)
Create the deployment using gcloud.
@@ -301,51 +318,44 @@ Create the deployment using gcloud.
gcloud deployment-manager deployments create ${INFRA_ID}-security --config 03_security.yaml
```
## Configure security variables
```sh
export MASTER_SERVICE_ACCOUNT=$(gcloud iam service-accounts list --filter "email~^${INFRA_ID}-m@${PROJECT_NAME}." --format json | jq -r '.[0].email')
export WORKER_SERVICE_ACCOUNT=$(gcloud iam service-accounts list --filter "email~^${INFRA_ID}-w@${PROJECT_NAME}." --format json | jq -r '.[0].email')
```
## Add required roles to IAM service accounts
The templates do not create the policy bindings due to limitations of Deployment
Manager, so we must create them manually.
```sh
export MASTER_SA=${INFRA_ID}-m@${PROJECT_NAME}.iam.gserviceaccount.com
gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/compute.instanceAdmin"
gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/compute.networkAdmin"
gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/compute.securityAdmin"
gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/iam.serviceAccountUser"
gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SA}" --role "roles/storage.admin"
gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.instanceAdmin"
gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkAdmin"
gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.securityAdmin"
gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/iam.serviceAccountUser"
gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/storage.admin"
export WORKER_SA=${INFRA_ID}-w@${PROJECT_NAME}.iam.gserviceaccount.com
gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SA}" --role "roles/compute.viewer"
gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SA}" --role "roles/storage.admin"
gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SERVICE_ACCOUNT}" --role "roles/compute.viewer"
gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SERVICE_ACCOUNT}" --role "roles/storage.admin"
```
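To double-check that the bindings took effect, you can query the project IAM policy for the roles granted to a service account. This is an optional check, not an original step of the guide; it assumes the variables exported above:
```sh
# Show the roles bound to the master service account
gcloud projects get-iam-policy ${PROJECT_NAME} \
  --flatten="bindings[].members" \
  --filter="bindings.members:${MASTER_SERVICE_ACCOUNT}" \
  --format="value(bindings.role)"
```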
Create a service account key and store it locally for later use.
## Generate a service-account-key for signing the bootstrap.ign url
```sh
gcloud iam service-accounts keys create service-account-key.json --iam-account=${MASTER_SA}
gcloud iam service-accounts keys create service-account-key.json --iam-account=${MASTER_SERVICE_ACCOUNT}
```
## Create the cluster image.
Locate the RHCOS image source and create a cluster image.
```sh
export IMAGE_SOURCE=`curl https://raw.githubusercontent.com/openshift/installer/master/data/data/rhcos.json | jq -r .gcp.url`
export IMAGE_SOURCE=$(curl https://raw.githubusercontent.com/openshift/installer/master/data/data/rhcos.json | jq -r .gcp.url)
gcloud compute images create "${INFRA_ID}-rhcos-image" --source-uri="${IMAGE_SOURCE}"
export CLUSTER_IMAGE=$(gcloud compute images describe ${INFRA_ID}-rhcos-image --format json | jq -r .selfLink)
```
## Launch temporary bootstrap resources
Copy [`04_bootstrap.py`](../../../upi/gcp/04_bootstrap.py) locally.
Export variables needed by the resource definition.
```sh
export CONTROL_SUBNET=`gcloud compute networks subnets describe ${INFRA_ID}-master-subnet --region=${REGION} --format json | jq -r .selfLink`
export CLUSTER_IMAGE=`gcloud compute images describe ${INFRA_ID}-rhcos-image --format json | jq -r .selfLink`
export ZONE_0=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`
export ZONE_1=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`
export ZONE_2=`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`
```
## Upload the bootstrap.ign to a new bucket
Create a bucket and upload the bootstrap.ign file.
```sh
@@ -357,9 +367,13 @@ Create a signed URL for the bootstrap instance to use to access the Ignition
config. Export the URL from the output as a variable.
```sh
export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print $5}'`
export BOOTSTRAP_IGN=$(gsutil signurl -d 1h service-account-key.json gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print $5}')
```
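As a quick optional check before launching the bootstrap instance, you can confirm the signed URL is fetchable. This is an assumption of this guide's flow, not an original step:
```sh
# Expect HTTP 200; the signed URL is only valid for GET requests within the 1h window above
curl -s -o /dev/null -w '%{http_code}\n' "${BOOTSTRAP_IGN}"
```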
## Launch temporary bootstrap resources
Copy [`04_bootstrap.py`](../../../upi/gcp/04_bootstrap.py) locally.
Create a resource definition file: `04_bootstrap.yaml`
```console
@@ -395,25 +409,27 @@ Create the deployment using gcloud.
gcloud deployment-manager deployments create ${INFRA_ID}-bootstrap --config 04_bootstrap.yaml
```
## Add the bootstrap instance to the load balancers
The templates do not manage load balancer membership due to limitations of Deployment
Manager, so we must add the bootstrap node manually.
### Add bootstrap instance to internal load balancer instance group
```sh
gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-bootstrap
gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-bootstrap
gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_0}-instance-group --zone=${ZONE_0} --instances=${INFRA_ID}-bootstrap
```
### Add bootstrap instance to external load balancer target pool (optional)
If you deployed external load balancers with `02_infra.yaml`, add the bootstrap node to the target pool.
```sh
gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone=${ZONE_0} --instances=${INFRA_ID}-bootstrap
```
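You can verify the membership changes with the commands below. This is an optional check, assuming the resource names created by the templates above:
```sh
# Confirm the bootstrap instance joined the internal load balancer instance group
gcloud compute instance-groups unmanaged list-instances ${INFRA_ID}-master-${ZONE_0}-instance-group --zone=${ZONE_0}
# If external load balancers were deployed, confirm the target pool membership as well
gcloud compute target-pools describe ${INFRA_ID}-api-target-pool --region=${REGION}
```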
## Launch permanent control plane
Copy [`05_control_plane.py`](../../../upi/gcp/05_control_plane.py) locally.
Export variables needed by the resource definition.
```sh
export MASTER_SERVICE_ACCOUNT_EMAIL=`gcloud iam service-accounts list | grep "^${INFRA_ID}-master-node " | awk '{print $2}'`
export MASTER_IGNITION=`cat master.ign`
```
Create a resource definition file: `05_control_plane.yaml`
```console
@@ -433,7 +449,7 @@ resources:
image: '${CLUSTER_IMAGE}'
machine_type: 'n1-standard-4'
root_volume_size: '128'
service_account_email: '${MASTER_SERVICE_ACCOUNT_EMAIL}'
service_account_email: '${MASTER_SERVICE_ACCOUNT}'
ignition: '${MASTER_IGNITION}'
EOF
```
@@ -452,13 +468,19 @@ Create the deployment using gcloud.
gcloud deployment-manager deployments create ${INFRA_ID}-control-plane --config 05_control_plane.yaml
```
## Configure control plane variables
```sh
export MASTER0_IP=$(gcloud compute instances describe ${INFRA_ID}-m-0 --zone ${ZONE_0} --format json | jq -r .networkInterfaces[0].networkIP)
export MASTER1_IP=$(gcloud compute instances describe ${INFRA_ID}-m-1 --zone ${ZONE_1} --format json | jq -r .networkInterfaces[0].networkIP)
export MASTER2_IP=$(gcloud compute instances describe ${INFRA_ID}-m-2 --zone ${ZONE_2} --format json | jq -r .networkInterfaces[0].networkIP)
```
## Add DNS entries for control plane etcd
The templates do not manage DNS entries due to limitations of Deployment
Manager, so we must add the etcd entries manually.
```sh
export MASTER0_IP=`gcloud compute instances describe ${INFRA_ID}-m-0 --zone ${ZONE_0} --format json | jq -r .networkInterfaces[0].networkIP`
export MASTER1_IP=`gcloud compute instances describe ${INFRA_ID}-m-1 --zone ${ZONE_1} --format json | jq -r .networkInterfaces[0].networkIP`
export MASTER2_IP=`gcloud compute instances describe ${INFRA_ID}-m-2 --zone ${ZONE_2} --format json | jq -r .networkInterfaces[0].networkIP`
if [ -f transaction.yaml ]; then rm transaction.yaml; fi
gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone
gcloud dns record-sets transaction add ${MASTER0_IP} --name etcd-0.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone
@@ -472,41 +494,28 @@ gcloud dns record-sets transaction add \
gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone
```
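To verify the records landed in the private zone, you can list them (optional check):
```sh
# List the records in the cluster's private zone
gcloud dns record-sets list --zone ${INFRA_ID}-private-zone
```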
## Add control plane instances to load balancers
The templates do not manage load balancer membership due to limitations of Deployment
Manager, so we must add the control plane nodes manually.
### Add control plane instances to internal load balancer instance groups
```sh
gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_0}-instance-group --zone=${ZONE_0} --instances=${INFRA_ID}-m-0
gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_1}-instance-group --zone=${ZONE_1} --instances=${INFRA_ID}-m-1
gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_2}-instance-group --zone=${ZONE_2} --instances=${INFRA_ID}-m-2
```
### Add control plane instances to external load balancer target pools (optional)
If you deployed external load balancers with `02_infra.yaml`, add the control plane instances to the target pool.
```sh
gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-m-0
gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_1}" --instances=${INFRA_ID}-m-1
gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_2}" --instances=${INFRA_ID}-m-2
gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-m-0
gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_1}" --instances=${INFRA_ID}-m-1
gcloud compute target-pools add-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_2}" --instances=${INFRA_ID}-m-2
```
## Monitor for `bootstrap-complete`
```console
$ openshift-install wait-for bootstrap-complete
INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443...
INFO API v1.12.4+c53f462 up
INFO Waiting up to 30m0s for the bootstrap-complete event...
```
## Destroy bootstrap resources
At this point, you should delete the bootstrap resources.
```sh
gcloud compute target-pools remove-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-bootstrap
gcloud compute target-pools remove-instances ${INFRA_ID}-ign-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-bootstrap
gsutil rm gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign
gsutil rb gs://${INFRA_ID}-bootstrap-ignition
gcloud deployment-manager deployments delete ${INFRA_ID}-bootstrap
```
## Launch additional compute nodes
You may create compute nodes by launching individual instances discretely
or by automated processes outside the cluster (e.g. Auto Scaling Groups). You
can also take advantage of the built-in cluster scaling mechanisms and the
@@ -517,14 +526,6 @@ resources of type 06_worker.py in the file.
Copy [`06_worker.py`](../../../upi/gcp/06_worker.py) locally.
Export variables needed by the resource definition.
```sh
export COMPUTE_SUBNET=`gcloud compute networks subnets describe ${INFRA_ID}-worker-subnet --region=${REGION} --format json | jq -r .selfLink`
export WORKER_SERVICE_ACCOUNT_EMAIL=`gcloud iam service-accounts list | grep "^${INFRA_ID}-worker-node " | awk '{print $2}'`
export WORKER_IGNITION=`cat worker.ign`
```
Create a resource definition file: `06_worker.yaml`
```console
$ cat <<EOF >06_worker.yaml
@@ -540,7 +541,7 @@ resources:
image: '${CLUSTER_IMAGE}'
machine_type: 'n1-standard-4'
root_volume_size: '128'
service_account_email: '${WORKER_SERVICE_ACCOUNT_EMAIL}'
service_account_email: '${WORKER_SERVICE_ACCOUNT}'
ignition: '${WORKER_IGNITION}'
- name: 'w-1'
type: 06_worker.py
@@ -551,7 +552,7 @@ resources:
image: '${CLUSTER_IMAGE}'
machine_type: 'n1-standard-4'
root_volume_size: '128'
service_account_email: '${WORKER_SERVICE_ACCOUNT_EMAIL}'
service_account_email: '${WORKER_SERVICE_ACCOUNT}'
ignition: '${WORKER_IGNITION}'
EOF
```
@@ -571,8 +572,27 @@ Create the deployment using gcloud.
gcloud deployment-manager deployments create ${INFRA_ID}-worker --config 06_worker.yaml
```
### Approving the CSR requests for nodes
## Monitor for `bootstrap-complete`
```console
$ openshift-install wait-for bootstrap-complete
INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443...
INFO API v1.12.4+c53f462 up
INFO Waiting up to 30m0s for the bootstrap-complete event...
```
## Destroy bootstrap resources
At this point, you should delete the bootstrap resources.
```sh
gcloud compute instance-groups unmanaged remove-instances ${INFRA_ID}-master-${ZONE_0}-instance-group --zone=${ZONE_0} --instances=${INFRA_ID}-bootstrap
gcloud compute target-pools remove-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-bootstrap
gsutil rm gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign
gsutil rb gs://${INFRA_ID}-bootstrap-ignition
gcloud deployment-manager deployments delete ${INFRA_ID}-bootstrap
```
## Approving the CSR requests for nodes
The CSR requests for client and server certificates for nodes joining the cluster will need to be approved by the administrator.
Nodes that have not been provisioned by the cluster need their associated `system:serviceaccount` certificate approved to join the cluster.
You can view them with:
@@ -595,36 +615,39 @@ CSRs can be approved by name, for example:
oc adm certificate approve csr-bfd72
```
## Add the Ingress DNS Records (Optional)
## Add the Ingress DNS Records
If you removed the DNS Zone configuration [earlier](#remove-dns-zones), you'll need to manually create some DNS records pointing at the ingress load balancer.
You can create either a wildcard `*.apps.{baseDomain}.` or specific records (more on the specific records below).
You can use A, CNAME, etc. records, as you see fit.
You must wait for the ingress-router to create a load balancer and populate the `EXTERNAL-IP`
### Wait for the ingress-router to create a load balancer and populate the `EXTERNAL-IP`
```console
$ oc -n openshift-ingress get service router-default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98
export ROUTER_IP=$(oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}')
```
Then add the A record to your public and private zones.
### Add the internal *.apps DNS record
```sh
export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`
if [ -f transaction.yaml ]; then rm transaction.yaml; fi
gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME}
gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME}
gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}
if [ -f transaction.yaml ]; then rm transaction.yaml; fi
gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone
gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${INFRA_ID}-private-zone
gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone
```
### Add the external *.apps DNS record (optional)
```sh
if [ -f transaction.yaml ]; then rm transaction.yaml; fi
gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME}
gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME}
gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}
```
If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes:
```console

View File

@@ -23,18 +23,6 @@ def GenerateConfig(context):
'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)',
'ipCidrRange': context.properties['worker_subnet_cidr']
}
}, {
'name': context.properties['infra_id'] + '-master-nat-ip',
'type': 'compute.v1.address',
'properties': {
'region': context.properties['region']
}
}, {
'name': context.properties['infra_id'] + '-worker-nat-ip',
'type': 'compute.v1.address',
'properties': {
'region': context.properties['region']
}
}, {
'name': context.properties['infra_id'] + '-router',
'type': 'compute.v1.router',
@@ -43,8 +31,7 @@ def GenerateConfig(context):
'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)',
'nats': [{
'name': context.properties['infra_id'] + '-nat-master',
'natIpAllocateOption': 'MANUAL_ONLY',
'natIps': ['$(ref.' + context.properties['infra_id'] + '-master-nat-ip.selfLink)'],
'natIpAllocateOption': 'AUTO_ONLY',
'minPortsPerVm': 7168,
'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS',
'subnetworks': [{
@@ -53,9 +40,8 @@ def GenerateConfig(context):
}]
}, {
'name': context.properties['infra_id'] + '-nat-worker',
'natIpAllocateOption': 'MANUAL_ONLY',
'natIps': ['$(ref.' + context.properties['infra_id'] + '-worker-nat-ip.selfLink)'],
'minPortsPerVm': 128,
'natIpAllocateOption': 'AUTO_ONLY',
'minPortsPerVm': 512,
'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS',
'subnetworks': [{
'name': '$(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)',

View File

@@ -30,30 +30,6 @@ def GenerateConfig(context):
'target': '$(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)',
'portRange': '6443'
}
}, {
'name': context.properties['infra_id'] + '-ign-http-health-check',
'type': 'compute.v1.httpHealthCheck',
'properties': {
'port': 22624,
'requestPath': '/healthz'
}
}, {
'name': context.properties['infra_id'] + '-ign-target-pool',
'type': 'compute.v1.targetPool',
'properties': {
'region': context.properties['region'],
'healthChecks': ['$(ref.' + context.properties['infra_id'] + '-ign-http-health-check.selfLink)'],
'instances': []
}
}, {
'name': context.properties['infra_id'] + '-ign-forwarding-rule',
'type': 'compute.v1.forwardingRule',
'properties': {
'region': context.properties['region'],
'IPAddress': '$(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)',
'target': '$(ref.' + context.properties['infra_id'] + '-ign-target-pool.selfLink)',
'portRange': '22623'
}
}]
return {'resources': resources}

upi/gcp/02_lb_int.py (new file, 70 lines)
View File

@@ -0,0 +1,70 @@
def GenerateConfig(context):
backends = []
for zone in context.properties['zones']:
backends.append({
'group': '$(ref.' + context.properties['infra_id'] + '-master-' + zone + '-instance-group' + '.selfLink)'
})
resources = [{
'name': context.properties['infra_id'] + '-cluster-ip',
'type': 'compute.v1.address',
'properties': {
'addressType': 'INTERNAL',
'region': context.properties['region'],
'subnetwork': context.properties['control_subnet']
}
}, {
'name': context.properties['infra_id'] + '-api-internal-health-check',
'type': 'compute.v1.healthCheck',
'properties': {
'httpsHealthCheck': {
'port': 6443,
'requestPath': '/readyz'
},
'type': "HTTPS"
}
}, {
'name': context.properties['infra_id'] + '-api-internal-backend-service',
'type': 'compute.v1.regionBackendService',
'properties': {
'backends': backends,
'healthChecks': ['$(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'],
'loadBalancingScheme': 'INTERNAL',
'region': context.properties['region'],
'protocol': 'TCP',
'timeoutSec': 120
}
}, {
'name': context.properties['infra_id'] + '-api-internal-forwarding-rule',
'type': 'compute.v1.forwardingRule',
'properties': {
'backendService': '$(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)',
'IPAddress': '$(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)',
'loadBalancingScheme': 'INTERNAL',
'ports': ['6443','22623'],
'region': context.properties['region'],
'subnetwork': context.properties['control_subnet']
}
}]
for zone in context.properties['zones']:
resources.append({
'name': context.properties['infra_id'] + '-master-' + zone + '-instance-group',
'type': 'compute.v1.instanceGroup',
'properties': {
'namedPorts': [
{
'name': 'ignition',
'port': 22623
}, {
'name': 'https',
'port': 6443
}
],
'network': context.properties['cluster_network'],
'zone': zone
}
})
return {'resources': resources}
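
If you want to see exactly which resources this template expands to before wiring it into `02_infra.yaml`, one option is a preview-only deployment. This is a hypothetical standalone sketch (the `lb-int-preview` deployment name and config file are illustrative, not part of the commit); it assumes the variables exported in the install document above:
```sh
cat <<EOF >02_lb_int_preview.yaml
imports:
- path: 02_lb_int.py
resources:
- name: cluster-lb-int
  type: 02_lb_int.py
  properties:
    cluster_network: '${CLUSTER_NETWORK}'
    control_subnet: '${CONTROL_SUBNET}'
    infra_id: '${INFRA_ID}'
    region: '${REGION}'
    zones:
    - '${ZONE_0}'
    - '${ZONE_1}'
    - '${ZONE_2}'
EOF
# --preview expands the template without creating any resources
gcloud deployment-manager deployments create ${INFRA_ID}-lb-int-preview --config 02_lb_int_preview.yaml --preview
gcloud deployment-manager resources list --deployment ${INFRA_ID}-lb-int-preview
# Clean up the previewed (never applied) deployment
gcloud deployment-manager deployments delete ${INFRA_ID}-lb-int-preview
```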

View File

@@ -9,7 +9,7 @@ def GenerateConfig(context):
'IPProtocol': 'tcp',
'ports': ['22']
}],
'sourceRanges': ['0.0.0.0/0'],
'sourceRanges': [context.properties['allowed_external_cidr']],
'targetTags': [context.properties['infra_id'] + '-bootstrap']
}
}, {
@@ -21,23 +21,7 @@ def GenerateConfig(context):
'IPProtocol': 'tcp',
'ports': ['6443']
}],
'sourceRanges': ['0.0.0.0/0'],
'targetTags': [context.properties['infra_id'] + '-master']
}
}, {
'name': context.properties['infra_id'] + '-mcs',
'type': 'compute.v1.firewall',
'properties': {
'network': context.properties['cluster_network'],
'allowed': [{
'IPProtocol': 'tcp',
'ports': ['22623']
}],
'sourceRanges': [
context.properties['network_cidr'],
context.properties['master_nat_ip'],
context.properties['worker_nat_ip']
],
'sourceRanges': [context.properties['allowed_external_cidr']],
'targetTags': [context.properties['infra_id'] + '-master']
}
}, {
@@ -47,7 +31,7 @@ def GenerateConfig(context):
'network': context.properties['cluster_network'],
'allowed': [{
'IPProtocol': 'tcp',
'ports': ['6080', '22624']
'ports': ['6080', '6443', '22624']
}],
'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'],
'targetTags': [context.properties['infra_id'] + '-master']
@@ -75,6 +59,9 @@ def GenerateConfig(context):
},{
'IPProtocol': 'tcp',
'ports': ['10259']
},{
'IPProtocol': 'tcp',
'ports': ['22623']
}],
'sourceTags': [
context.properties['infra_id'] + '-master',
@@ -93,7 +80,7 @@ def GenerateConfig(context):
'IPProtocol': 'tcp',
'ports': ['22']
}],
'sourceRanges': [context.properties['network_cidr']],
'targetTags': [
context.properties['infra_id'] + '-master',
context.properties['infra_id'] + '-worker'