
OpenShift Installer

Supported Platforms

Quick Start

First, install all build dependencies.

Clone this repository to src/github.com/openshift/installer in your GOPATH. Then build the openshift-install binary with:
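The clone-and-build step above can be sketched as follows; this assumes Go and the build dependencies are already installed, and the paths are illustrative.

```shell
# Clone into the expected location under GOPATH (illustrative paths).
mkdir -p "${GOPATH}/src/github.com/openshift"
cd "${GOPATH}/src/github.com/openshift"
git clone https://github.com/openshift/installer.git
cd installer
hack/build.sh   # produces bin/openshift-install
```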

hack/build.sh

This will create bin/openshift-install. This binary can then be invoked to create an OpenShift cluster, like so:

bin/openshift-install create cluster

The installer requires the terraform binary either alongside openshift-install or in $PATH. If you don't have terraform, run the following to create bin/terraform:

hack/get-terraform.sh

The installer will show a series of prompts for user-specific information and use reasonable defaults for everything else. In non-interactive contexts, prompts can be bypassed by providing appropriately-named environment variables. Refer to the user documentation for more information.
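As a sketch of a non-interactive run: the cluster-name and base-domain variable names appear elsewhere in these docs, but the full set of prompt-bypassing variables may vary by installer version, so check the user documentation for your release.

```shell
# Pre-answer two of the prompts via environment variables (values are examples).
export OPENSHIFT_INSTALL_CLUSTER_NAME="demo"
export OPENSHIFT_INSTALL_BASE_DOMAIN="example.com"
bin/openshift-install create cluster
```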

Connect to the cluster

Console

Shortly after the create cluster command completes, the OpenShift console will come up at https://${OPENSHIFT_INSTALL_CLUSTER_NAME}-api.${OPENSHIFT_INSTALL_BASE_DOMAIN}:6443/console/. You may need to bypass a certificate warning if you did not configure a certificate authority known to your browser. Log in using the admin credentials you configured when creating the cluster.
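A quick way to check that the console is up, assuming the same cluster-name and base-domain values used at install time (the curl probe itself is an illustrative sketch, not part of the installer):

```shell
# Build the console URL from the install-time values and probe it.
# --insecure skips certificate verification, matching the warning noted above.
CONSOLE="https://${OPENSHIFT_INSTALL_CLUSTER_NAME}-api.${OPENSHIFT_INSTALL_BASE_DOMAIN}:6443/console/"
curl --insecure --silent --output /dev/null --write-out '%{http_code}\n' "$CONSOLE"
```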

Kubeconfig

You can also use the admin kubeconfig which openshift-install create cluster placed under --dir (which defaults to .) in auth/kubeconfig. If you launched the cluster with openshift-install --dir "${DIR}" create cluster, you can use:

export KUBECONFIG="${DIR}/auth/kubeconfig"

Cleanup

Destroy the cluster and release associated resources with:

openshift-install destroy cluster

Note that you almost certainly also want to clean up the installer state files, including auth/, terraform.tfstate, etc. The safest approach is to always pass the --dir argument to both create and destroy. And if you want to reinstall from scratch, rm -rf the asset directory beforehand.
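A full teardown along the lines described above might look like this, assuming the cluster was created with --dir "${DIR}":

```shell
# Destroy the cluster, then remove the asset directory holding the state files.
openshift-install --dir "${DIR}" destroy cluster
rm -rf "${DIR}"   # removes auth/, terraform.tfstate, and other installer state
```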
