The changes here update the RHCOS 4.20 bootimage metadata and
address the following issues:
- OCPBUGS-64611: [4.20] coreos-boot-disk link not working with multipath on early boot
- OCPBUGS-67201: [4.20] Cannot use auto-forward kargs (like ip=) with coreos-installer (iso|pxe) customize
- OCPBUGS-68356: [4.20] Using multipath on the sysroot will fail to boot if less than 2 paths are present
- OCPBUGS-69837: [4.20] Ignition fails with crypto/ecdh: invalid random source in FIPS 140-only mode
This change was generated using:
plume cosa2stream \
--target data/data/coreos/rhcos.json \
--distro rhcos \
--no-signatures \
--name rhel-9.6 \
--url https://rhcos.mirror.openshift.com/art/storage/prod/streams \
x86_64=9.6.20260112-0 \
aarch64=9.6.20260112-0 \
s390x=9.6.20260112-0 \
ppc64le=9.6.20260112-0
Signed-off-by: Tiago Bueno <tiago.bueno@gmail.com>
OpenShift Installer
Supported Platforms
- AWS (Official Docs)
- Azure (Official Docs)
- Bare Metal (Official Docs)
- GCP (Official Docs)
- IBM Cloud (Official Docs)
- Nutanix (Official Docs)
- OpenStack (Official Docs)
- Power (Official Docs)
- Power VS (Official Docs)
- vSphere (Official Docs)
- z/VM (Official Docs)
Quick Start
First, install all build dependencies.
Clone this repository. Then build the openshift-install binary with:
hack/build.sh
This will create bin/openshift-install, which can then be invoked to create an OpenShift cluster, like so:
bin/openshift-install create cluster
The installer will show a series of prompts for user-specific information and use reasonable defaults for everything else.
In non-interactive contexts, prompts can be bypassed by providing an install-config.yaml.
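For example, a minimal sketch of a non-interactive flow (the directory name mycluster is hypothetical): generate the config once interactively, then reuse it for the actual install.

bin/openshift-install create install-config --dir=mycluster   # answer the prompts once; writes mycluster/install-config.yaml
bin/openshift-install create cluster --dir=mycluster          # consumes the install-config.yaml with no further prompts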
If you have trouble, refer to the troubleshooting guide.
Connect to the cluster
Details for connecting to your new cluster are printed by the openshift-install binary upon completion, and are also available in the .openshift_install.log file.
Example output:
INFO Waiting 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run
export KUBECONFIG=/path/to/installer/auth/kubeconfig
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.${CLUSTER_NAME}.${BASE_DOMAIN}
INFO Login to the console with user: kubeadmin, password: 5char-5char-5char-5char
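As a quick sanity check (the kubeconfig path below is the one printed by the installer, shown here as a placeholder), you can point oc at the generated credentials:

export KUBECONFIG=/path/to/installer/auth/kubeconfig   # path printed by the installer on completion
oc whoami                                              # should report system:admin
oc get nodes                                           # confirms the cluster is reachable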
Cleanup
Destroy the cluster and release associated resources with:
openshift-install destroy cluster
Note that you will almost certainly also want to clean up the installer state files, including auth/, terraform.tfstate, etc.
The best practice is to always pass the --dir argument to both create and destroy.
If you want to reinstall from scratch, rm -rf the asset directory beforehand, as in the sketch below.
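For example, a full teardown for a cluster created with --dir=mycluster (directory name hypothetical) might look like:

bin/openshift-install destroy cluster --dir=mycluster   # releases the cloud resources
rm -rf mycluster                                        # removes leftover state (auth/, terraform.tfstate, ...) before a fresh install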