Bumping the CAPI provider version is currently a manual step with little
documentation to follow. This commit gathers all knowledge of those steps
for other contributors.
* docs: remove ovirt from supported platforms
* docs: add ibmcloud, nutanix, and powervs as supported platforms
* docs: reorganize and add powervs UPI
* docs: remove libvirt from supported list
* docs: include official docs for supported platforms and deduplicate
Catching up with c1dbb138f8 (pkg/types: ensure kubebuilder can build
correct documentation, 2020-04-27, #3515), to make it easier for
first-time or occasional contributors to figure out how to update that
file.
1. Replace yum with dnf.
2. Add zip to the installation dependencies, as it is actually needed and
   not installed by default.
3. Update the required golang version in the docs based on what is
   currently in go.mod.
The file data/data/rhcos-stream.json was deleted in
d773ee5573
and the corresponding data now lives in data/data/coreos/rhcos.json. Let's update
the documentation to reflect this change.
In https://github.com/openshift/enhancements/pull/679
we landed support for a stream metadata format already used
by FCOS and now by RHCOS/OCP consumers to find the bootimages.
We kept the old metadata because the UPI CI jobs used it.
In https://github.com/openshift/release/pull/17482 I tried
to port the UPI CI jobs, but ran into huge levels of technical debt.
As best I can tell, the UPI CI jobs are not running on this repo
now and are likely broken for other reasons.
Let's remove the old data and avoid the confusing duplication.
Anyone who goes to work on the UPI CI jobs and sanitizes things
should be able to pick up the work to port to stream metadata
relatively easily.
The installer does not launch a load balancer by default for the development libvirt target.
A basic HAProxy configuration is given here as a guideline for developers.
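As a minimal sketch of what such a configuration might look like (the ports follow OpenShift conventions, but the backend addresses and server names are illustrative assumptions, not values from this repo):

```
# Illustrative haproxy.cfg sketch: forward API (6443) and ingress (443)
# traffic to hypothetical cluster node addresses on the libvirt network.
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend api
    bind *:6443
    default_backend api-servers

backend api-servers
    # hypothetical control-plane address
    server master0 192.168.126.11:6443 check

frontend ingress-https
    bind *:443
    default_backend ingress-servers

backend ingress-servers
    # hypothetical worker address
    server worker0 192.168.126.51:443 check
```

Developers would adapt the backend `server` lines to the addresses libvirt actually assigns to their nodes.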
Signed-off-by: Tim Hansen <tihansen@redhat.com>
Briefly describe the history and future of the pinned {RHEL, Fedora} CoreOS
metadata in the installer.
Co-authored-by: Matthew Staebler <staebler@redhat.com>
* add "pre-command start", "pre-command end", "post-command start" and "post-command end" phases
* fix an issue where the kubelet was not notifying systemd that it had started, since it had been moved to a script
Each OpenShift service running on the bootstrap machine will now
create a JSON file in /var/log/openshift/ that contains an array
of entries detailing the progress that the service has made.
The entries included in the JSON file are the following:
* Service start
* Service end, with result and error details
* Service stage start
* Service stage end, with result and error details
The JSON files in /var/log/openshift will be collected by the
bootstrap gather into /bootstrap/services/ for evaluation by the
installer, enabling improved failure reporting to the user. That
evaluation is left for follow-on work.
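For illustration, one of these files might look like the following (the field names and values here are assumptions for the sake of the example, not the exact schema):

```
[
  {"phase": "service start", "timestamp": "..."},
  {"phase": "stage start", "stage": "...", "timestamp": "..."},
  {"phase": "stage end", "stage": "...", "result": "success", "timestamp": "..."},
  {"phase": "service end", "result": "failure", "errorMessage": "...", "timestamp": "..."}
]
```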
https://issues.redhat.com/browse/CORS-1542
https://issues.redhat.com/browse/CORS-1543
Since libvirt 5.6.0, there is an option to pass in dnsmasq options through the libvirt network [1]. This addresses the following problems:
- eliminate the need for hacking routes in the cluster (the workaround mentioned in [3]) so that libvirt's dnsmasq does not manage the domain (and so the requests from inside the cluster will go up the chain to the host itself).
- eliminate the hacky workaround used in the multi-arch CI automation to inject `*.apps` entries in the libvirt network that point to a single worker node [2]. Instead of waiting for the libvirt networks to come up and update entries, we can set this before the installation itself through the install config.
- fix another issue with the above-mentioned workaround: having multiple worker nodes becomes problematic when running upgrade tests, since routing to just one worker node fails the upgrade when that node is down. With this change, we can now point to the .1 address and have a load balancer forward traffic to any worker node.
With this change, the option can be specified through the install config YAML in the network section as option name/value pairs. An example:
```
platform:
  libvirt:
    network:
      dnsmasqOptions:
        - name: "address"
          value: "/.apps.tt.testing/192.168.126.51"
          if: tt0
```
The terraform provider supports rendering these options through a datasource and injecting them into the network xml.
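For reference, libvirt exposes these through its dnsmasq XML namespace [1], so the example above would render roughly like the following in the network XML (a sketch; the rest of the network definition is elided):

```
<network xmlns:dnsmasq='http://libvirt.org/schemas/network/dnsmasq/1.0'>
  ...
  <dnsmasq:options>
    <dnsmasq:option value='address=/.apps.tt.testing/192.168.126.51'/>
  </dnsmasq:options>
</network>
```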
Since this config is optional, not specifying it will continue to work as before without issues.
[1] https://libvirt.org/formatnetwork.html#elementsNamespaces
[2] https://github.com/openshift/release/blob/master/ci-operator/templates/openshift/installer/cluster-launch-installer-remote-libvirt-e2e.yaml#L532-L554
[3] https://github.com/openshift/installer/issues/1007
With Go 1.14, module handling has improved in the sense that subcommands such as `go test` and `go generate` now use the vendor directory by default when it is available. This makes it easier for us to run generate using vendored tools like controller-tools, as it now uses the checked-in vendor directory.
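One common pattern that benefits from this is a build-tagged tools file that blank-imports the generators, so `go mod vendor` keeps them in vendor/ and `go generate` resolves them there (a sketch; the exact import paths this repo pins may differ):

```
//go:build tools
// +build tools

// tools.go: blank-import code generators so `go mod vendor` retains them,
// letting `go generate` run them from the checked-in vendor directory.
package tools

import (
	_ "sigs.k8s.io/controller-tools/cmd/controller-gen"
)
```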
In newer libvirtd, which ships the "libvirt-tcp.socket" unit file for
socket activation, the --listen argument to libvirtd should not be
used. Enabling both socket activation and the --listen argument will
cause libvirtd to exit with an error about mutually exclusive
configuration options.
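Concretely, on hosts using socket activation the TCP listener comes from the socket unit rather than from libvirtd's own arguments, so the daemon arguments must not request listening themselves. A sketch of the relevant sysconfig setting (illustrative):

```
# /etc/sysconfig/libvirtd (illustrative sketch)
# With libvirt-tcp.socket enabled, leave --listen out of LIBVIRTD_ARGS;
# passing both makes libvirtd exit with a mutually-exclusive-options error.
#LIBVIRTD_ARGS="--listen"
```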
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>