Removes all dependencies on the Terraform providers image from the
Dockerfiles. We do this separately from removing the build artifacts
themselves, so that ART can stop building the image without breaking
anything; the build artifacts can then be removed afterwards.
Because of the way in which we rebase and build our kube fork, the
binary doesn't have its version stamped correctly. For example, version
1.29.5 appears as `v1.29.0-rc.1.3970+87992f48b0ead9-dirty`, which
breaks the version detection in our scripts.
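As a hedged illustration (the actual scripts are not shown here), a naive parse of that string extracts the wrong patch-level component:

```shell
# Illustrative only: extract the patch component the way a naive
# version-detection script might; the real scripts are not shown here.
version='v1.29.0-rc.1.3970+87992f48b0ead9-dirty'
patch="$(printf '%s' "$version" | sed -E 's/^v[0-9]+\.[0-9]+\.([0-9]+).*$/\1/')"
echo "$patch"   # prints 0, even though the real release is 1.29.5
```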
Since we already have pre-built binaries in CI/release, there is no
reason to download them; the only case where the download is needed is
local dev. So we introduce an env var `SKIP_ENVTEST` to skip the
download when building images.
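A minimal sketch of the guard; only the `SKIP_ENVTEST` name comes from this change, the surrounding script is illustrative:

```shell
# Sketch: skip the envtest binary download when SKIP_ENVTEST is set.
# Everything except the SKIP_ENVTEST variable name is illustrative.
if [ -n "${SKIP_ENVTEST:-}" ]; then
  action="skip"
  echo "SKIP_ENVTEST set; skipping envtest binary download"
else
  action="download"
  echo "downloading envtest binaries (needed for local dev only)"
fi
```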
We copy pre-built binaries from the `-artifacts` images when we want
statically linked binaries, and from the regular etcd/hyperkube images
when we want dynamically linked ones.
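A minimal sketch of the copy pattern, with illustrative stage names (the real image references live in the Dockerfiles and ocp-build-data):

```dockerfile
# Illustrative stage names only; the real references are elsewhere.
# Statically linked: take the binary from the -artifacts image.
COPY --from=etcd-artifacts /usr/bin/etcd /usr/bin/etcd
# Dynamically linked: take the binary from the regular image.
COPY --from=hyperkube /usr/bin/kubelet /usr/bin/kubelet
```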
This is necessary to have capi and capi providers included in the
installer images both for CI and for the release payload.
The baremetal-installer/libvirt-installer images continue to be built
dynamically for FIPS support; for that reason they cannot reuse the
existing `terraform-providers` image. All other images will be
statically built.
Since https://github.com/openshift/release/pull/39563 there should
now be an `installer-terraform-providers` image in the CI namespace
with pre-built terraform provider binaries. If no changes are detected
since the last time the providers were built, we skip building them,
which can save around 1h in the CI tests.
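The skip decision can be sketched as a content-hash comparison; none of these names come from the actual CI tooling:

```shell
# Hypothetical helper: skip the providers build when the current source
# hash matches the hash recorded when the prebuilt image was produced.
providers_unchanged() {
  current="$1"   # hash of the providers sources now
  recorded="$2"  # hash stored alongside the last prebuilt image
  [ -n "$current" ] && [ "$current" = "$recorded" ]
}

if providers_unchanged "abc123" "abc123"; then
  echo "reusing prebuilt installer-terraform-providers image"
fi
```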
The installer-artifacts image depends on the installer image,
which means they cannot be built in parallel. The effect of this
dependency is that the installer-artifacts image contains the linux
amd64 binary inherited from the installer image in addition to its
own mac/linux amd64/arm64 builds.
Breaking the dependency results in a more efficient build process:
the two images can be built in parallel, and the linux amd64 binary
is no longer duplicated in the release image (before this change the
binary was present in both images within the release image).
The risk, though, is that some clients may be grabbing the linux
amd64 binary from either image rather than only from the installer
image, in which case we would break any client trying to get that
binary from the installer-artifacts image.
go generate always needs to be run natively, even when cross-compiling.
Handling that in build.sh simplifies cross-compiling both manually and
in installer-artifacts.
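A sketch of the pattern, assuming a `go env`-based build script; the commented commands show the shape of the real steps without running them here:

```shell
# Determine the native platform: go generate must run on it even when
# the final binaries are cross-compiled. Falls back to uname when the
# go toolchain is absent.
native_goos="$(go env GOHOSTOS 2>/dev/null || uname | tr '[:upper:]' '[:lower:]')"
native_goarch="$(go env GOHOSTARCH 2>/dev/null || uname -m)"
echo "go generate runs natively on ${native_goos}/${native_goarch}"
# GOOS="$native_goos" GOARCH="$native_goarch" go generate ./...
# GOOS=darwin GOARCH=arm64 go build -o bin/openshift-install ./cmd/openshift-install
```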
All installer binaries extracted from a payload, regardless of their
runtime OS or architecture, are built on the payload architecture.
Therefore, GOHOSTARCH can be used to infer the cluster architecture
for which the payload was built. It is set through the Dockerfiles so
that manual builds of the installer continue to default to amd64.
This will be used by oc in the next release to allow extracting an
x86_64 installer from another architecture's payload.
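One way the Dockerfiles might wire this up, sketched under the assumption that the build script consumes GOHOSTARCH as an environment variable (TARGETARCH is the standard BuildKit build argument; the real mechanism may differ):

```dockerfile
# Hypothetical sketch: record the payload architecture at image build
# time so extracted binaries can report it. Manual (non-Docker) builds
# leave GOHOSTARCH unset and default to amd64.
ARG TARGETARCH
ENV GOHOSTARCH=${TARGETARCH:-amd64}
```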
Due to the size of the installer binary, building each binary in a
separate stage minimizes resource requirements when using
imagebuilder. This does require a matching change to ocp-build-data
to add a second builder stage.
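The per-binary stages can be sketched like this; the stage names, paths, and builder image are assumptions, not the actual Dockerfile contents:

```dockerfile
# Each large cross-compiled binary gets its own stage, so imagebuilder
# can reclaim resources between stages instead of holding everything
# in a single build environment.
FROM builder AS build-linux
RUN GOOS=linux GOARCH=amd64 ./hack/build.sh

FROM builder AS build-darwin
RUN GOOS=darwin GOARCH=amd64 ./hack/build.sh

FROM base
COPY --from=build-linux  /build/bin/openshift-install /usr/bin/openshift-install
COPY --from=build-darwin /build/bin/openshift-install /usr/share/openshift/mac/openshift-install
```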
This PR is autogenerated by the [ocp-build-data-enforcer][1].
It updates the base images in the Dockerfile used for promotion in
order to ensure they match the configuration in the [ocp-build-data
repository][2] used for producing release artifacts.
Instead of merging this PR you can also create an alternate PR that includes the changes found here.
If you believe the content of this PR is incorrect, please contact the dptp team in
#aos-art.
[1]: https://github.com/openshift/ci-tools/tree/master/cmd/ocp-build-data-enforcer
[2]: https://github.com/openshift/ocp-build-data/tree/openshift-4.6/images
With Go 1.14, module handling has improved: all subcommands (`go build`, `go test`, `go generate`, and so on) now use the vendor directory by default when it is present. This makes it easier for us to run generate with vendored tools like controller-tools, as it now uses the checked-in vendor directory.
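The default can be demonstrated with a throwaway module; everything below is a fixture, not the installer repo:

```shell
# Create a minimal module with a vendor/ directory; on Go >= 1.14 the
# presence of vendor/modules.txt makes -mod=vendor the default.
command -v go >/dev/null 2>&1 || { echo "go toolchain not installed"; exit 0; }
dir="$(mktemp -d)"
cd "$dir"
printf 'module example.com/demo\n\ngo 1.14\n' > go.mod
mkdir vendor && : > vendor/modules.txt
printf 'package main\n\nfunc main() {}\n' > main.go
go build ./...          # behaves as if -mod=vendor were passed
[ -f demo ] && echo "built using the vendor directory"
```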
The images in use should be consistent across all components, and
point to the right versions. Doing this will allow the installer
CI repo to benefit from binary build reuse in ci-operator.
Assets are required for the build, but hack/build-go.sh cannot handle
cross-architecture asset generation. Explicitly generate the assets
before invoking the script.
Failed when CI tried to build:

    + go build ... ./cmd/openshift-install
    data/unpack.go:12:15: undefined: Assets
To ensure the payload is reproducible, we will include images with
both cli and installer commands for non-Linux platforms, allowing
users to get the correct tools to install the release image. Since
the installer binary is not small, and we don't want to pay the cost
of pulling it onto each node when hive uses it, create a new image,
installer-artifacts, that layers on top of installer (inheriting the
default installer Linux binary) and places the darwin binary at
/usr/share/openshift/mac/openshift-install. This directory structure
is kept deliberately simple for end users; if in the future we need
to deal with multi-arch concerns we'll do that at a higher level,
and in practice neither 32-bit nor ARM will be "supported" as part
of the core distro yet.
The Dockerfile matches the desired final form from the release team
(Dockerfile.rhel) and will be used in both CI and publishing.
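A minimal sketch of the layering described above; the stage providing the darwin binary is hypothetical, while the target path is the one from this change:

```dockerfile
# Layer on top of the installer image so the default Linux binary is
# inherited, and add the darwin binary under the documented mac/ path.
FROM installer
COPY --from=cross-builder /output/openshift-install-darwin \
     /usr/share/openshift/mac/openshift-install
```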