The file data/data/rhcos-stream.json was deleted in
d773ee5573
and the corresponding data now lives in data/data/coreos/rhcos.json. Let's update
the documentation to reflect this change.
In https://github.com/openshift/enhancements/pull/679
we landed support for a stream metadata format already used
by FCOS and now by RHCOS/OCP consumers to find the bootimages.
We kept the old metadata because the UPI CI jobs used it.
In https://github.com/openshift/release/pull/17482 I tried
to port the UPI CI jobs, but ran into huge levels of technical debt.
As best I can tell, the UPI CI jobs are not running on this repo
now and are likely broken for other reasons.
Let's remove the old data and avoid the confusing duplication.
Anyone who goes to work on the UPI CI jobs and sanitizes things
should be able to pick up the work to port to stream metadata
relatively easily.
The installer for the development libvirt target does not launch a load balancer by default.
A basic HAProxy configuration is given here as a guideline for developers.
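A minimal sketch of such a configuration is shown below; the addresses, ports, and backend names are illustrative and assume the libvirt default subnet (192.168.126.0/24) and the standard OpenShift API and ingress ports:
```
# haproxy.cfg (illustrative sketch, not the shipped default)
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend api
    bind *:6443
    default_backend api

backend api
    server bootstrap 192.168.126.10:6443 check
    server master0   192.168.126.11:6443 check

frontend ingress-https
    bind *:443
    default_backend ingress-https

backend ingress-https
    server worker0 192.168.126.51:443 check
```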
Signed-off-by: Tim Hansen <tihansen@redhat.com>
Briefly describe the history and future of the pinned {RHEL, Fedora} CoreOS
metadata in the installer.
Co-authored-by: Matthew Staebler <staebler@redhat.com>
* add "pre-command start", "pre-command end", "post-command start" and "post-command end" phases
* fix an issue where the kubelet was not notifying systemd that it had started after it had been moved into a script
Each OpenShift service running on the bootstrap machine will now
create a json file in /var/log/openshift/ that contains an array
of entries detailing the progress that the service has made.
The entries included in the json file are the following:
* Service start
* Service end, with result and error details
* Service stage start
* Service stage end, with result and error details
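As an illustration, a hypothetical entry file for one service might look like the following (field names and values are illustrative only, not the exact schema):
```
[
  { "phase": "service start", "timestamp": "2021-04-01T12:00:00Z" },
  { "phase": "stage start", "stage": "pre-command", "timestamp": "2021-04-01T12:00:01Z" },
  { "phase": "stage end", "stage": "pre-command", "result": "success", "timestamp": "2021-04-01T12:00:05Z" },
  { "phase": "service end", "result": "failure", "errorMessage": "exit status 1", "timestamp": "2021-04-01T12:01:00Z" }
]
```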
The json files in /var/log/openshift will be collected by the
bootstrap gather in /bootstrap/services/ for evaluation by the
installer for improved failure reporting to the user. The evaluation
is left for follow-on work.
https://issues.redhat.com/browse/CORS-1542
https://issues.redhat.com/browse/CORS-1543
Since libvirt 5.6.0, there is an option to pass in dnsmasq options through the libvirt network [1]. This addresses the following problems:
- eliminate the need for hacking routes in the cluster (the workaround mentioned in [3]) so that libvirt's dnsmasq does not manage the domain (and so the requests from inside the cluster will go up the chain to the host itself).
- eliminate the hacky workaround used in the multi-arch CI automation to inject `*.apps` entries in the libvirt network that point to a single worker node [2]. Instead of waiting for the libvirt networks to come up and update entries, we can set this before the installation itself through the install config.
- eliminate an upgrade-test problem: with the above-mentioned workaround, having multiple worker nodes is problematic because routing to just one worker node fails the upgrade when that node is down. With this change, we can instead point to the .1 address and have a load balancer forward traffic to any worker node.
With this change, the option can be specified through the install config YAML in the network section as pairs of option names and values. An example:
```
platform:
  libvirt:
    network:
      dnsmasqOptions:
      - name: "address"
        value: "/.apps.tt.testing/192.168.126.51"
      if: tt0
```
The terraform provider supports rendering these options through a data source and injecting them into the network XML.
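For reference, libvirt renders such passthrough options into the network XML roughly as follows (element layout per [1]; the network name is illustrative):
```
<network xmlns:dnsmasq="http://libvirt.org/schemas/network/dnsmasq/1.0">
  <name>test1</name>
  <!-- ...bridge, ip, and dns elements elided... -->
  <dnsmasq:options>
    <dnsmasq:option value="address=/.apps.tt.testing/192.168.126.51"/>
  </dnsmasq:options>
</network>
```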
Since this config is optional, not specifying it will continue to work as before without issues.
[1] https://libvirt.org/formatnetwork.html#elementsNamespaces
[2] https://github.com/openshift/release/blob/master/ci-operator/templates/openshift/installer/cluster-launch-installer-remote-libvirt-e2e.yaml#L532-L554
[3] https://github.com/openshift/installer/issues/1007
With Go 1.14 the handling of modules has improved: the subcommands `go {test, generate}` now use the vendor directory by default when it is available. This makes it easier for us to run generate with vendored tools like controller-tools, since it now uses the checked-in vendor directory.
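For example (a sketch of the effect, not a change to our Makefile), with a `vendor/` directory present these commands now default to `-mod=vendor`:
```
go generate ./...
go test ./...
```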
In newer libvirtd that ships the "libvirtd-tcp.socket" unit file for
socket activation, the --listen argument to libvirtd should not be
used. Enabling both socket activation and the --listen argument will
cause libvirtd to exit with an error about mutually exclusive
configuration options.
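As a sketch (assuming a libvirt new enough to ship the socket units), the TCP socket is enabled instead of adding `--listen` to the daemon arguments:
```
# Do not add --listen to LIBVIRTD_ARGS; enable the socket unit instead:
sudo systemctl enable --now libvirtd-tcp.socket
```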
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
A connection to libvirtd gives the client application privileges that
are equivalent to those of a root shell. IOW, disabling authentication
and encryption in libvirtd is akin to running a telnet server with no
root password. This implication is not obvious to users following the
guide, so should be spelt out explicitly, so they understand it is
critical to correctly apply the firewall rules listed later in the
install guide.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The "libvirt" RPM is a meta package which depends on every single other
libvirt RPM. It is undesirable to install this because it pulls in a
huge chain of dependencies, which are irrelevant for accomplishing the
steps described in this document. The main thing it was likely
needed for is the "virsh" client, so it can be replaced by
the "libvirt-client" RPM.
The "libvirt-daemon-kvm" RPM pulls in everything needed for a typical
libvirt installation that will be used for running KVM guests, and is
the recommended option for scenarios that don't need to go to
extremes to minimize the installed feature set.
The "qemu-kvm" RPM does not need to be listed explicitly, since it is
already a dependency of "libvirt-daemon-kvm".
Further information to help understand the libvirt RPM choices is
present at https://libvirt.org/kbase/rpm-deployment.html
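As a concrete illustration (command assumes a dnf-based host), the reduced package set described above can be installed with:
```
sudo dnf install libvirt-client libvirt-daemon-kvm
```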
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Issue https://github.com/code-ready/snc/issues/112 has been raised. It
reports permission-denied errors caused by SELinux. SELinux
isn't available on Debian/Ubuntu, so it should be disabled in `qemu.conf`.
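A minimal sketch of the relevant setting, assuming the default config path:
```
# /etc/libvirt/qemu.conf on a Debian/Ubuntu host without SELinux:
security_driver = "none"
```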
This is a bit more accessible than pointing folks at Godocs, since it
allows us to focus on the YAML property names (while Godocs
understandably focus on Go property names) and YAML renderings. Also
break up our old "one big example" install-config.yaml into a minimal
per-platform example and a series of small extensions exercising
groups of properties.
The vSphere docs are based heavily on [1].
Also drop proxy.md. It was added in e7edbf71fd (Add proxy
configuration to bootstrap node, 2019-06-24, #1832), but:
* Proxy testing and Squid configuration information belongs in
openshift/release, not in the installer repository.
* docs/user/customization.md now contains a more complete proxy-config
fragment.
OpenStack computeFlavor precedence is based on [2].
[1]: https://github.com/openshift/openshift-docs/blob/enterprise-4.2/modules/installation-vsphere-config-yaml.adoc
Last touched by commit openshift/openshift-docs@25afc7626d, 2019-08-19.
[2]: https://github.com/openshift/installer/pull/2162#discussion_r322410878
This document outlines the proposal and details for using alternate sources/repositories for the release image.
The proposal is driven by the fact that only flows using `oc adm release mirror` to create the alternate sources for the release image will be supported.
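For context, a mirror created with `oc adm release mirror` typically surfaces in the install config as `imageContentSources` entries like the following (the mirror registry name is a placeholder):
```
imageContentSources:
- mirrors:
  - mirror.example.com/ocp4/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - mirror.example.com/ocp4/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
```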