OpenShift Installer
Supported Platforms
- AWS
- AWS (UPI)
- Azure
- Bare Metal (UPI)
- Bare Metal (IPI)
- GCP
- GCP (UPI)
- Libvirt with KVM (development only)
- OpenStack
- OpenStack (UPI)
- Power
- oVirt
- oVirt (UPI)
- vSphere
- vSphere (UPI)
- z/VM
Quick Start
First, install all build dependencies.
Clone this repository. Then build the openshift-install binary with:
hack/build.sh
This will create bin/openshift-install. This binary can then be invoked to create an OpenShift cluster, like so:
bin/openshift-install create cluster
The installer will show a series of prompts for user-specific information and use reasonable defaults for everything else.
In non-interactive contexts, prompts can be bypassed by providing an install-config.yaml.
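For reference, a minimal install-config.yaml for the AWS platform might look like the following sketch; all values shown (domain, cluster name, region, keys) are placeholders you must replace with your own, and the pull secret is obtained from your Red Hat account:

```yaml
apiVersion: v1
baseDomain: example.com        # placeholder: your DNS base domain
metadata:
  name: mycluster              # placeholder: cluster name
platform:
  aws:
    region: us-east-1          # placeholder: target AWS region
pullSecret: '...'              # paste your pull secret here
sshKey: |
  ssh-rsa AAAA... user@host    # placeholder: your public SSH key
```

You can also generate a starting point interactively with `openshift-install create install-config` and then edit the resulting file before running `create cluster`.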
If you have trouble, refer to the troubleshooting guide.
Connect to the cluster
Details for connecting to your new cluster are printed by the openshift-install binary upon completion, and are also available in the .openshift_install.log file.
Example output:
INFO Waiting 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run
export KUBECONFIG=/path/to/installer/auth/kubeconfig
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.${CLUSTER_NAME}.${BASE_DOMAIN}
INFO Login to the console with user: kubeadmin, password: 5char-5char-5char-5char
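Following the printed instructions, you can verify access from a terminal with `oc` (the kubeconfig path below is illustrative; use the one printed for your install):

```sh
export KUBECONFIG=/path/to/installer/auth/kubeconfig
oc whoami      # confirms which user you are authenticated as
oc get nodes   # lists the nodes of the new cluster
```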
Cleanup
Destroy the cluster and release associated resources with:
openshift-install destroy cluster
Note that you almost certainly want to clean up the installer state files as well, including auth/, terraform.tfstate, etc. The best practice is to always pass the --dir argument to create and destroy. If you want to reinstall from scratch, rm -rf the asset directory beforehand.
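The advice above can be sketched as the following workflow (the directory name `mycluster` is an example):

```sh
# Create the cluster with an explicit asset directory...
bin/openshift-install create cluster --dir=mycluster

# ...and destroy it later using the same directory, so the installer
# finds its state files (auth/, terraform.tfstate, ...)
bin/openshift-install destroy cluster --dir=mycluster

# To reinstall from scratch, remove the asset directory first
rm -rf mycluster
```

Keeping one asset directory per cluster ensures create and destroy operate on the same state.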