This mirrors changes to GCP IPI in #3544.

The infra ID of clusters on GCP was reduced to 12 characters in #2088 because we couldn't handle hostnames longer than 64 characters as seen by the RHCOS machines. More details are available in https://bugzilla.redhat.com/show_bug.cgi?id=1809345. Now that BZ 1809345 has been fixed by openshift/machine-config-operator#1711 and openshift/cluster-api-provider-gcp#88, the installer can relax the restriction on the infra ID to match the other platforms.

Why is it important?

On GCP all resources are prefixed with the infra ID, which is currently 12 characters, with 6 of them consumed by the random suffix, leaving only 6 characters from the cluster name. This makes it hard to associate a cluster with its job in CI, because most of the identifiable characters are dropped from the resource names under this restriction. Also, because of the previous restriction, only one character is kept from the pool's name, making collisions highly likely when there is more than one pool.
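The naming arithmetic above can be sketched as follows. This is a hedged illustration only: the cluster name, suffix value, suffix length, and the 27-character relaxed limit are all assumptions for the example, not the installer's actual implementation.

```shell
# Hypothetical sketch: an infra ID is the cluster name, truncated to fit,
# plus a hyphen and a random suffix. All values below are placeholders.
CLUSTER_NAME="gcppool1periodic"
SUFFIX="x7k2p"           # stand-in for the random suffix
MAX_LEN=27               # assumed relaxed infra-id limit

# Keep as many name characters as fit before "-$SUFFIX".
KEEP=$((MAX_LEN - ${#SUFFIX} - 1))
INFRA_ID="$(printf '%s' "$CLUSTER_NAME" | cut -c1-"$KEEP")-${SUFFIX}"
echo "$INFRA_ID"         # prints gcppool1periodic-x7k2p

# Under the old 12-character limit, only 6 name characters survive,
# which is why CI jobs and pools became hard to tell apart:
OLD_KEEP=$((12 - ${#SUFFIX} - 1))
OLD_ID="$(printf '%s' "$CLUSTER_NAME" | cut -c1-"$OLD_KEEP")-${SUFFIX}"
echo "$OLD_ID"           # prints gcppoo-x7k2p
```

With the relaxed limit the full cluster name survives in every resource name, while the old limit reduced it to an ambiguous 6-character stub.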
OpenShift Installer
Supported Platforms
- AWS
- AWS (UPI)
- Azure
- Bare Metal (UPI)
- Bare Metal (IPI) (Experimental)
- GCP
- GCP (UPI)
- Libvirt with KVM (development only)
- OpenStack
- OpenStack (UPI) (Experimental)
- oVirt
- vSphere
- vSphere (UPI)
Quick Start
First, install all build dependencies.
Clone this repository. Then build the openshift-install binary with:
hack/build.sh
This will create bin/openshift-install. This binary can then be invoked to create an OpenShift cluster, like so:
bin/openshift-install create cluster
The installer will show a series of prompts for user-specific information and use reasonable defaults for everything else.
In non-interactive contexts, prompts can be bypassed by providing an install-config.yaml.
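A minimal install-config.yaml might look like the following sketch. All field values here are placeholders, and the exact set of required fields depends on the target platform and installer version:

```yaml
apiVersion: v1
baseDomain: example.com          # placeholder base domain
metadata:
  name: mycluster                # placeholder cluster name
platform:
  gcp:
    projectID: my-project        # placeholder GCP project
    region: us-central1          # placeholder region
pullSecret: '...'                # your pull secret
sshKey: '...'                    # your public SSH key
```

You can also generate this file interactively with bin/openshift-install create install-config, then reuse it for non-interactive runs.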
If you have trouble, refer to the troubleshooting guide.
Connect to the cluster
Details for connecting to your new cluster are printed by the openshift-install binary upon completion, and are also available in the .openshift_install.log file.
Example output:
INFO Waiting 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/path/to/installer/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.${CLUSTER_NAME}.${BASE_DOMAIN}
INFO Login to the console with user: kubeadmin, password: 5char-5char-5char-5char
Cleanup
Destroy the cluster and release associated resources with:
openshift-install destroy cluster
Note that you almost certainly also want to clean up the installer state files, including auth/, terraform.tfstate, etc.
The best thing to do is to always pass the --dir argument to install and destroy.
If you want to reinstall from scratch, rm -rf the asset directory beforehand.