That's the latest RHCOS release:

$ curl -s https://releases-rhcos.svc.ci.openshift.org/storage/releases/maipo/builds.json | jq '{latest: .builds[0], timestamp}'
{
  "latest": "47.198",
  "timestamp": "2018-12-08T23:13:22Z"
}

And Clayton just pushed 4.0.0-alpha.0-2018-12-07-090414 to quay.io/openshift-release-dev/ocp-release:4.0.0-4. That's not the most recent release, but it is the most recent stable release ;). Renaming OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE gets us CI testing of the pinned release despite openshift/release@60007df2 (Use RELEASE_IMAGE_LATEST for CVO payload, 2018-10-03, openshift/release#1793).
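As a rough illustration only (the exact variable name depends on the rename described above, so treat this as a sketch rather than the final interface), overriding the release image for a local run might look like:

export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=quay.io/openshift-release-dev/ocp-release:4.0.0-4
bin/openshift-install create cluster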
OpenShift Installer
Supported Platforms
- AWS
- Libvirt with KVM
- OpenStack (Experimental)
Quick Start
First, install all build dependencies.
Clone this repository to src/github.com/openshift/installer in your GOPATH. Then build the openshift-install binary with:
hack/build.sh
This will create bin/openshift-install. This binary can then be invoked to create an OpenShift cluster, like so:
bin/openshift-install create cluster
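If it helps, a minimal clone-and-build sequence might look like the following sketch; the clone URL is assumed from the repository path above, and the asset directory is whatever you choose:

mkdir -p "${GOPATH}/src/github.com/openshift"
cd "${GOPATH}/src/github.com/openshift"
git clone https://github.com/openshift/installer.git
cd installer
hack/build.sh
bin/openshift-install create cluster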
The installer will show a series of prompts for user-specific information and use reasonable defaults for everything else. In non-interactive contexts, prompts can be bypassed by providing appropriately-named environment variables. Refer to the user documentation for more information.
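For example, a non-interactive run might pre-seed some of the prompts with environment variables before invoking the installer. The two variable names below appear elsewhere in this document; the values are purely illustrative, and other prompts have analogously named OPENSHIFT_INSTALL_* variables:

export OPENSHIFT_INSTALL_CLUSTER_NAME=mycluster
export OPENSHIFT_INSTALL_BASE_DOMAIN=example.com
bin/openshift-install create cluster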
Connect to the cluster
Console
Shortly after the create cluster command completes, the OpenShift console will come up at https://${OPENSHIFT_INSTALL_CLUSTER_NAME}-api.${OPENSHIFT_INSTALL_BASE_DOMAIN}:6443/console/.
You may need to ignore a certificate warning if you did not configure a certificate authority known to your browser.
Log in using the admin credentials you configured when creating the cluster.
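If you prefer a terminal check, you can hit the same URL with curl as a simple reachability probe; -k skips certificate verification, mirroring the browser warning mentioned above:

curl -k "https://${OPENSHIFT_INSTALL_CLUSTER_NAME}-api.${OPENSHIFT_INSTALL_BASE_DOMAIN}:6443/console/"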
Kubeconfig
You can also use the admin kubeconfig, which openshift-install create cluster writes to auth/kubeconfig under the asset directory given by --dir (which defaults to ., the current directory).
If you launched the cluster with openshift-install --dir "${DIR}" create cluster, you can use:
export KUBECONFIG="${DIR}/auth/kubeconfig"
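You can then verify access with the Kubernetes client of your choice, for example (assuming kubectl is installed; the oc CLI works the same way):

kubectl get nodes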
Cleanup
Destroy the cluster and release associated resources with:
openshift-install destroy cluster
Note that you will almost certainly also want to clean up the installer state files, including auth/, terraform.tfstate, and so on. The best approach is to always pass the --dir argument to both create cluster and destroy cluster. If you want to reinstall from scratch, rm -rf the asset directory beforehand.
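Putting that together, a teardown for a cluster created with --dir might look like the following; the directory is whatever you passed at create time:

openshift-install --dir "${DIR}" destroy cluster
rm -rf "${DIR}"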