Currently, the installer relies on a generated Go file to determine
the AWS regions in which the RHCOS image is published. The `go generate`
directive was inadvertently removed in https://github.com/openshift/installer/pull/4582.
Rather than resurrecting the directive, this commit removes the generated
code in favor of gathering the regions directly from the RHCOS stream
data when needed.
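The stream-data lookup can be sketched roughly as below. This is a minimal Python sketch against an abridged, hand-written stream snippet; the field layout follows the published stream-metadata format, but the values are placeholders, not real releases or AMI IDs:

```python
import json

# Abridged, hand-written data in the stream-metadata shape;
# values are placeholders, not real AMI IDs.
stream = json.loads("""
{
  "architectures": {
    "x86_64": {
      "images": {
        "aws": {
          "regions": {
            "us-east-1": {"release": "47.83.x", "image": "ami-000"},
            "eu-west-1": {"release": "47.83.x", "image": "ami-111"}
          }
        }
      }
    }
  }
}
""")

def aws_regions(stream, arch="x86_64"):
    """Return the regions in which the RHCOS AWS image is published."""
    return sorted(stream["architectures"][arch]["images"]["aws"]["regions"])

print(aws_regions(stream))  # ['eu-west-1', 'us-east-1']
```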
https://issues.redhat.com/browse/CORS-1838
If the Go codegen commands failed, the hack/verify-codegen.sh script
continued and the CI check passed; see #5406 for an example. The script
was not set to fail on a non-zero return code. This PR adds the Bash `-e`
option to the `set` builtin to catch errors and exit on failure.
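The errexit behavior the fix relies on can be illustrated with a small shell sketch (illustration only, not hack/verify-codegen.sh itself):

```shell
#!/bin/sh
# In a `set -e` shell, the first command that returns non-zero aborts
# the script, so a failing codegen step can no longer fall through to a
# misleading success message (and a passing CI check).
output=$(sh -c 'set -e; false; echo codegen-ok' || true)
if [ -z "$output" ]; then
    echo "errexit aborted before the success message"
fi
```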
Signed-off-by: Christy Norman <christy@linux.vnet.ibm.com>
It's more logically owned by the CoreOS team and this will
allow us to have a separate `OWNERS` file.
The `OWNERS` file is copied from the current one in openshift/os.
In https://github.com/openshift/enhancements/pull/679
we landed support for a stream metadata format already used
by FCOS and now by RHCOS/OCP consumers to find the bootimages.
We kept the old metadata because the UPI CI jobs used it.
In https://github.com/openshift/release/pull/17482 I tried
to port the UPI CI jobs, but ran into huge levels of technical debt.
As best I can tell, the UPI CI jobs are not running on this repo
now and are likely broken for other reasons.
Let's remove the old data and avoid the confusing duplication.
Anyone who goes to work on the UPI CI jobs and sanitizes things
should be able to pick up the work to port to stream metadata
relatively easily.
A recent commit passed the CI checks but introduced a bug on master
that surfaces when running `go mod tidy`. By including `go mod tidy`
in the checks, we can catch these problems earlier.
All installer binaries extracted from a payload, regardless of their
runtime OS or architecture, are built on the payload architecture.
Therefore, GOHOSTARCH can be used to infer the cluster architecture for
which the payload was built. This is set through the Dockerfiles so that
manual builds of the installer will continue to default to amd64.
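The defaulting can be sketched as follows (a hypothetical shell fragment, not the actual build script: it honors GOHOSTARCH when the Dockerfile set it and assumes amd64 for manual builds):

```shell
#!/bin/sh
# Hypothetical sketch: use GOHOSTARCH when set (payload builds via the
# Dockerfiles), otherwise default to amd64 for manual builds.
ARCH="${GOHOSTARCH:-amd64}"
echo "assuming cluster architecture: ${ARCH}"
```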
This unblocked the build on other architectures, but the resulting
ppc64le binary had issues. The golang bug which affected static linking
has been fixed with a one-line patch, which we are carrying downstream
until the upstream backports land.
This reverts commit 24ac0a15e7.
Marshal the coreos-bootimages ConfigMap included in the installer
manifests as yaml instead of json. This allows `oc` to replace
the `0.0.1-snapshot` value of the `releaseVersion`. The quotes
around the value when marshalled as json were preventing the
value from matching the regular expression that `oc` uses.
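The quoting difference can be seen in a small sketch. The regex here is a hypothetical stand-in for the pattern `oc` matches (its real pattern differs, but likewise expects an unquoted scalar):

```python
import json
import re

# ConfigMap data containing the placeholder that `oc` rewrites.
cm = {"data": {"releaseVersion": "0.0.1-snapshot"}}

# Hypothetical stand-in for the substitution pattern.
pattern = re.compile(r"releaseVersion:\s*0\.0\.1-snapshot")

as_json = json.dumps(cm, indent=2)
# YAML renders the same value as a plain (unquoted) scalar:
as_yaml = "data:\n  releaseVersion: 0.0.1-snapshot\n"

print(pattern.search(as_json) is not None)  # False: JSON quotes the value
print(pattern.search(as_yaml) is not None)  # True
```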
/fixes https://github.com/openshift/installer/issues/4797
Split out from https://github.com/openshift/installer/pull/4582
This copies the bits from https://github.com/cgwalters/rhel-coreos-bootimages
which builds a ConfigMap out of the stream metadata and injects
it into the cluster.
We have an `installer` image in the release image today; this adds
the "is an operator" label, even though it's not really an
operator. We just want the CVO to inject the manifest.
Among other important semantics, this will ensure that in-place
cluster upgrades that have new pinned CoreOS stream data will
have this configmap updated.
Briefly describe the history and future of the pinned {RHEL, Fedora} CoreOS
metadata in the installer.
Co-authored-by: Matthew Staebler <staebler@redhat.com>
This implements part of the plan from:
https://github.com/openshift/os/issues/477
When we originally added the pinned RHCOS metadata `rhcos.json`
to the installer, we also changed the coreos-assembler `meta.json`
format into an arbitrary new format in the name of some cleanups.
In retrospect, this was a big mistake because we now have two
formats.
Then Fedora CoreOS appeared and added streams JSON as a public API.
We decided to unify on streams metadata; there's now a published
Go library for it: https://github.com/coreos/stream-metadata-go
Among other benefits, it is a single file that supports multiple
architectures.
UPI installs should now use stream metadata, particularly
to find public cloud images. This is exposed via a new
`openshift-install coreos print-stream-json` command.
This is an important preparatory step for exposing this via
`oc` as well as having something in the cluster update to
it.
HOWEVER as a (really hopefully temporary) hack, we *duplicate*
the metadata so that IPI installs use the new stream format,
and UPI CI jobs can still use the old format (with different RHCOS versions).
We will port the UPI docs and CI jobs after this merges.
Co-authored-by: Matthew Staebler <staebler@redhat.com>
Rather than building an image to run mockgen, and having that image
always get the latest version of mockgen, use the mockgen from the
vendor directory. This has a couple of benefits.
1) We do not get changes to the client mocks that pop up in random
PRs when a new version of mockgen is released.
2) Contributors that do not have access to the CI registry can
generate the mock clients.
The example URL is missing the /art/ path prefix, which makes it unclear
how to derive the external URL from the meta.json link in the ART build
browser.
With Go 1.14, module handling has improved: subcommands such as `go test` and `go generate` now use the vendor directory by default when it is available. This makes it easier for us to run `go generate` with vendored tools like controller-tools, since they now use the checked-in vendor directory.
F-strings are new in 3.6 [1], and my old RHEL 7.5 CSB has the old
Python 3.4.9 by default. I'm probably just way behind the times, but
it doesn't cost much to use the older .format() to make this
compatible with all of Python 3 ;).
[1]: https://www.python.org/dev/peps/pep-0498/
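For reference, the two spellings side by side (placeholder strings, chosen only for illustration):

```python
name = "rhcos"
build = "example-build"

# Python 3.6+ only:
#   f"{name}-{build}"

# Works on any Python 3, including 3.4:
print("{}-{}".format(name, build))  # rhcos-example-build
```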
This splits the RHCOS build metadata into architecture-specific files.
This will allow the metadata to contain information about bootimages of
multiple architectures. In order to preserve backward compatibility
(there are a few users, including certain CI jobs, that pull rhcos.json
from GitHub directly), I've opted to use separate files for each
architecture. Normally, we could have just symlinked the legacy metadata
file, but when hosted on raw.githubusercontent.com, the symlinks aren't
followed.
When updating the RHCOS bootimages, this script will need to be run once
for each architecture that is being updated.
Also removes hack/run_bdd_suite.sh.
This is in preparation for flattening all vendor
directories in this repository into one:
These tests had their own vendor directories
with outdated dependencies conflicting with
deps from the main vendor/ dir.
Because this is unused/dead code, removal should be safe.
gosec does static code analysis and checks for common security issues in
golang codebases.
This PR introduces a script that will run gosec similarly to other check
tools.
The location of the defaultReleaseImage variable has moved to
pkg/asset/releaseimage, and this doesn't currently work. It also appears
nothing is relying on this feature, so we can remove it. If you want to
override the release image, you can do so at runtime with
OPENSHIFT_RELEASE_IMAGE_OVERRIDE, which functions the same other than
not being embedded in the produced openshift-install binary.
Like 3313c08266 (hack/build: Use SOURCE_GIT_COMMIT if set, 2019-05-13, #1828).
This should fix [1]:
$ ./openshift-install version
./openshift-install v4.1.0-201905212232-dirty
...
which is from Git looking at the build directory (and seeing -dirty
because doozer is adjusting our Dockerfile?). Since the tags aren't
in the source repository (github.com/openshift/install), we need to
use the BUILD_* variables which were added to Doozer in [2]. In a
recent build [3], these looked like:
ENV SOURCE_GIT_COMMIT=8aa5b10aa82d201deba0befbfac9fabf76f719f4 SOURCE_DATE_EPOCH=1561485623 BUILD_VERSION=v4.1.9 SOURCE_GIT_URL=https://github.com/openshift/installer SOURCE_GIT_TAG=8aa5b10a BUILD_RELEASE=201907311355
With this commit, we prefer BUILD_VERSION, falling back to 'git
describe' in the local repository if BUILD_VERSION is unset or empty.
Doozer also sets BUILD_RELEASE, but Luke says we always set
BUILD_VERSION in OpenShift 4 so there's no need to include it in the
fallback chain [4] (otherwise I'd have preferred BUILD_VERSION,
falling back to BUILD_RELEASE, falling back to Git inspection).
I don't see a need to use:
GIT_TAG="${BUILD_VERSION}-${BUILD_RELEASE}"
with both, because if we know the version is v4.1.8, who cares about
the timestamp? We only ever get a single 4.1.8 far enough along to
show up in front of end-users, even if there's a build hiccup or some
such that causes us to build multiple rounds with the same version
internally. There's a bunch of v4.2.0, etc., in the run-up to a new
minor release, but we have commit hashes to distinguish between those.
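The preference order can be sketched as below (a simplified sketch, not the exact hack/build.sh code; the `git describe` flags are illustrative):

```shell
#!/bin/sh
# Simplified sketch: prefer BUILD_VERSION when Doozer set it, otherwise
# fall back to `git describe` in the local repository.
version() {
    if [ -n "${BUILD_VERSION:-}" ]; then
        echo "${BUILD_VERSION}"
    else
        git describe --always --dirty
    fi
}

BUILD_VERSION=v4.1.9  # as a Doozer build would set it
echo "version: $(version)"
```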
[1]: https://github.com/openshift/installer/issues/1828
[2]: b02114a0bb (distgit.py: ART-165 add build ENV vars, 2019-07-07)
[3]: http://pkgs.devel.redhat.com/cgit/containers/ose-installer/tree/Dockerfile?h=rhaos-4.1-rhel-7
[4]: https://github.com/openshift/installer/pull/1829#discussion_r310290990