Starting with 4.19, the PATH environment variable is actively used
in CI to manage idempotent repositories that work both in CI and locally.
Because machine-os-images overrides the PATH environment variable, it
ends up with unfortunate repositories. I suspect this PATH override can
simply be removed.
Because of the way in which we rebase and build our kube fork, the
binary does not report its version correctly. For example, version
1.29.5 appears as `v1.29.0-rc.1.3970+87992f48b0ead9-dirty`, which
breaks the version detection in our scripts.
Since we already have pre-built binaries in CI/release, there is no
reason to download them again; the download is only needed for local
dev. So we introduce an env var `SKIP_ENVTEST` to skip the download
when building images.
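A minimal sketch of what this looks like in an image build, assuming the
build script honors the variable (the script name is illustrative):

    # pre-built binaries are provided by CI/release images, so skip the envtest download
    ENV SKIP_ENVTEST=true
    RUN hack/build.sh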
We copy pre-built binaries from the `-artifacts` images when we want
statically linked binaries and from the regular etcd/hyperkube images
for dynamically linked binaries.
This is necessary to have capi and capi providers included in the
installer images both for CI and for the release payload.
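Roughly, in Dockerfile terms (the stage names and binary paths below are
assumptions, not the exact references used):

    # statically linked variant: take the binary from the -artifacts image
    COPY --from=etcd-artifacts /usr/bin/etcd /usr/bin/etcd
    # dynamically linked variant: copy from the regular image instead
    # COPY --from=etcd /usr/bin/etcd /usr/bin/etcd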
Multiarch CI jobs in 4.16 and onward are moving to a UPI deployment strategy. `virt-install` is needed in the CI image
to allow us to stand up the libvirt VMs for the cluster nodes.
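For illustration (any packages beyond virt-install itself are assumptions):

    # virt-install is needed to stand up the libvirt VMs for UPI cluster nodes
    RUN dnf install -y virt-install && dnf clean all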
When we linked to libvirt directly, having the libvirt platform in the
build was synonymous with a dynamically linked build. Now that we use a
pure-Go libvirt library, the build type can be independent of whether
libvirt is included.
Even once the baremetal installer build no longer includes the libvirt
platform, we still want it to be dynamically linked.
The baremetal-installer/libvirt-installer images continue to be dynamically
built for FIPS support. For that reason they cannot reuse the existing
`terraform-providers` image. All other images will be statically built.
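A hypothetical sketch of the distinction; the TAGS variable and how the
build script consumes it are assumptions:

    # statically linked installer, which can now still include the libvirt platform
    RUN CGO_ENABLED=0 TAGS="libvirt" hack/build.sh
    # dynamically linked baremetal/libvirt installer, kept for FIPS support
    # RUN CGO_ENABLED=1 TAGS="baremetal libvirt" hack/build.sh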
For multi-architecture compute deployments for s390x on libvirt, we need
oc to extract the cluster state and the correct assets for spinning up
nodes on a second architecture; the nodes themselves are then spun up
with the libvirt client.
This is analogous to the UPI installer image (images/installer/Dockerfile.upi.ci#L14,L20)
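A sketch of how oc might be pulled into the image (the source stage is an
assumption):

    # make oc available for extracting cluster state and per-arch assets
    COPY --from=cli /usr/bin/oc /usr/bin/oc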
Signed-off-by: Dominik Werle <dwerle@redhat.com>
Since https://github.com/openshift/release/pull/39563 there should now be
an `installer-terraform-providers` image in the CI namespace with
pre-built terraform provider binaries. If no changes are detected since
the last time the providers were built, we skip building them, which can
save us around 1h in the CI tests.
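The installer image build can then consume those pre-built binaries instead
of rebuilding them, along the lines of (stage name and path are illustrative):

    # reuse the provider binaries built in the installer-terraform-providers image
    COPY --from=installer-terraform-providers /go/src/github.com/openshift/installer/terraform/bin/ terraform/bin/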
Currently, google-cloud-sdk is broken and issues the following error when launching and running jobs:
err
Providing a specific version should allow jobs to no longer fail.
All installer binaries extracted from a payload, regardless of their
runtime OS or architecture, are built on the payload architecture.
Therefore, GOHOSTARCH can be used to infer the cluster architecture for
which the payload was built. This is set through the Dockerfiles so that
manual builds of the installer will continue to default to amd64.
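A minimal sketch of setting it in a Dockerfile; the DEFAULT_ARCH name and
its use by the build script are assumptions:

    # default the target cluster architecture to the architecture the payload was built on
    RUN DEFAULT_ARCH="$(go env GOHOSTARCH)" hack/build.sh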
Update references to registry.ci.openshift.org/ocp/builder:golang-1.14
with registry.ci.openshift.org/ocp/builder:rhel-8-golang-1.15-openshift-4.8.
The exception is for images targeting rhel7, where the replacement is
registry.ci.openshift.org/ocp/builder:rhel-7-golang-1.15-openshift-4.8 instead.
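In Dockerfile terms the change looks like this (the builder stage name is
illustrative):

    # before
    # FROM registry.ci.openshift.org/ocp/builder:golang-1.14 AS builder
    # after (images targeting rhel7 use the rhel-7 builder instead)
    FROM registry.ci.openshift.org/ocp/builder:rhel-8-golang-1.15-openshift-4.8 AS builder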
With Go 1.14, module handling has improved: the subcommands `go {test, generate}` now use the vendor directory by default when it is available. This makes it easier for us to run generate with vendored tools like controller-tools, since they now come from the checked-in vendor directory.
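For example (the generate target is illustrative):

    # with Go >= 1.14 and a vendor/ directory present, -mod=vendor is the default,
    # so controller-gen and similar tools run from the checked-in vendor tree
    RUN go generate ./...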
For s390x/ppc64le we do some amount of yaml manipulation, for example: https://github.com/openshift/release/pull/9362,
and we plan to add a volume size that provides increased disk space for certain tests. yq will be especially useful,
and easier to use than sed, for these manipulations.
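For illustration (install method, yq flavor, and the field being edited are
all assumptions):

    # make yq available for yaml edits in CI steps
    RUN pip3 install yq
    # e.g. bumping a volume size is then a one-liner instead of a fragile sed expression:
    #   yq -y -i '.compute[0].platform.libvirt.volume.size = "120Gi"' install-config.yaml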
The current nested-libvirt CI image is capable of provisioning libvirt clusters by creating a GCE VM instance that has all of the libvirt dependencies and using it as a hypervisor. This new image supports that workflow as well as providing the dependencies for running a libvirt installation against a remote libvirt service hosted on external hardware. This requires access to the libvirt client locally, which the nested libvirt image did not provide.
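A sketch of the extra dependency this implies for the new image (the remote
connection URI is illustrative):

    # provide a local libvirt client so installs can target a remote libvirt service
    RUN dnf install -y libvirt-client && dnf clean all
    # at runtime the client connects to the external hypervisor, e.g.:
    #   virsh -c qemu+tcp://hypervisor.example.com/system list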