The installer for the development libvirt target does not launch a load balancer by default.
A basic HAProxy configuration is given here as a guideline for developers.
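A minimal sketch of such a config, assuming two worker nodes on the default 192.168.126.0/24 cluster subnet (addresses, ports, and node count are illustrative, not installer defaults):

```
# Illustrative haproxy.cfg fragment; adjust worker addresses to your cluster.
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend ingress-https
    bind *:443
    default_backend workers-https

backend workers-https
    server worker-0 192.168.126.51:443 check
    server worker-1 192.168.126.52:443 check
```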
Signed-off-by: Tim Hansen <tihansen@redhat.com>
Since libvirt 5.6.0, dnsmasq options can be passed through the libvirt network definition [1]. This addresses the following problems:
- eliminate the need for hacking routes in the cluster (the workaround mentioned in [3]) so that libvirt's dnsmasq does not manage the domain (and so requests from inside the cluster go up the chain to the host itself).
- eliminate the hacky workaround used in the multi-arch CI automation to inject `*.apps` entries into the libvirt network that point to a single worker node [2]. Instead of waiting for the libvirt networks to come up and then updating entries, we can set this before the installation itself through the install config.
- with the above-mentioned workaround, having multiple worker nodes also becomes problematic when running upgrade tests: routing to just one worker node fails the upgrade when that node is down. With this change, we can instead point to the .1 address and have a load balancer forward traffic to any worker node.
With this change, the options can be specified through the install-config YAML in the network section as pairs of option name and value. An example:
```
platform:
  libvirt:
    network:
      dnsmasqOptions:
      - name: "address"
        value: "/.apps.tt.testing/192.168.126.51"
      if: tt0
```
The Terraform provider supports rendering these options through a data source and injecting them into the network XML.
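For reference, the injected options end up in the network XML under the dnsmasq namespace documented in [1]; a sketch using the example values above (other network elements omitted):

```
<network xmlns:dnsmasq='http://libvirt.org/schemas/network/dnsmasq/1.0'>
  <!-- usual name/bridge/ip elements omitted -->
  <dnsmasq:options>
    <dnsmasq:option value='address=/.apps.tt.testing/192.168.126.51'/>
  </dnsmasq:options>
</network>
```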
Since this config is optional, not specifying it will continue to work as before without issues.
[1] https://libvirt.org/formatnetwork.html#elementsNamespaces
[2] https://github.com/openshift/release/blob/master/ci-operator/templates/openshift/installer/cluster-launch-installer-remote-libvirt-e2e.yaml#L532-L554
[3] https://github.com/openshift/installer/issues/1007
In newer libvirt releases that ship the "libvirtd-tcp.socket" unit file
for socket activation, the --listen argument to libvirtd should not be
used. Enabling both socket activation and the --listen argument will
cause libvirtd to exit with an error about mutually exclusive
configuration options.
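A minimal sketch of the socket-activation path on a systemd host (the sysconfig path is an assumption and varies by distribution):

```
# Rely on socket activation for TCP instead of passing --listen; make sure
# LIBVIRTD_ARGS in /etc/sysconfig/libvirtd does not contain --listen.
sudo systemctl enable --now libvirtd-tcp.socket
sudo systemctl restart libvirtd
```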
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
A connection to libvirtd gives the client application privileges that
are equivalent to those of a root shell. IOW, disabling authentication
and encryption in libvirtd is akin to running a telnet server with no
root password. This implication is not obvious to users following the
guide, so it should be spelt out explicitly so they understand it is
critical to correctly apply the firewall rules listed later in the
install guide.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The "libvirt" RPM is a meta package which depends on every single other
libvirt RPM. It is undesirable to install this because it pulls in a
huge chain of dependencies, which are irrelevant for accomplishing the
steps described in this document. The main thing it was likely needed
for is the "virsh" client, so it can be replaced by the "libvirt-client"
RPM.
The "libvirt-daemon-kvm" RPM pulls in everything needed for a typical
libvirt installation that will be used for running KVM guests, and is
the recommended option for scenarios that don't need to go to extremes
to minimize the features installed.
The "qemu-kvm" RPM does not need to be listed explicitly, since it is
already a dependency of "libvirt-daemon-kvm".
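Taken together, the install step reduces to something like this on a dnf-based host:

```
# virsh comes from libvirt-client; libvirt-daemon-kvm pulls in qemu-kvm
# and the rest of the KVM driver stack as dependencies.
sudo dnf install libvirt-client libvirt-daemon-kvm
```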
Further information to help understand the libvirt RPM choices is
present at https://libvirt.org/kbase/rpm-deployment.html
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Issue https://github.com/code-ready/snc/issues/112 has been raised for
permission-denied errors caused by SELinux. SELinux isn't available on
Debian/Ubuntu, so the SELinux security driver should be disabled in
`qemu.conf`.
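A sketch of the corresponding `qemu.conf` change:

```
# /etc/libvirt/qemu.conf
# On Debian/Ubuntu, where SELinux is not available, disable the SELinux
# security driver (this turns off libvirt's security-driver confinement).
security_driver = "none"
```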
There are significant firewalld zone differences between Fedora
Workstation and RHEL 8. This commit takes them into account and adjusts
the Fedora instructions so that the libvirt port does not get exposed
externally.
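An illustrative sketch of the intent on Fedora (the zone, the cluster subnet, and the firewalld `libvirt` service name are assumptions to check against your setup):

```
# Accept libvirt traffic only from the cluster subnet via the "libvirt" zone,
# rather than opening the port in the externally-facing default zone.
sudo firewall-cmd --zone=libvirt --add-source=192.168.126.0/24
sudo firewall-cmd --zone=libvirt --add-service=libvirt
```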
Since by default the installer uses qemu+tcp://192.168.122.1 and we
document disabling auth on TCP connections, the PolicyKit step is not
required for the installer.
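For context, the documented TCP setup in `/etc/libvirt/libvirtd.conf` looks roughly like:

```
# Listen for unencrypted TCP connections and disable authentication on them,
# relying on the firewall rules to restrict who can connect.
listen_tcp = 1
auth_tcp = "none"
```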
Signed-off-by: Christophe Fergeau <cfergeau@redhat.com>
Commit 30b1ae8e4 changed the subnet the cluster will use from
192.168.124.0 to 192.168.126.0. However, it also changed mentions of the
default libvirt network from 192.168.122.0 to 192.168.124.0.
This commit reverts that part of the change, as 192.168.122.0 is more
likely to be used since it's the upstream libvirt default.
Signed-off-by: Christophe Fergeau <cfergeau@redhat.com>
Currently a cluster created by libvirt is not able to resolve the auth
route, and because of that the console doesn't come up. This
troubleshooting doc entry directs users to make some modifications
before running the cluster so that the auth route can be resolved by the
cluster. Fixes #1007
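A sketch of the kind of modification the entry describes (the file path and the worker address are assumptions for illustration):

```
# Hypothetical example: let the host's NetworkManager dnsmasq answer
# *.apps queries so the cluster can resolve the auth route.
echo 'address=/.apps.tt.testing/192.168.126.51' | \
  sudo tee /etc/NetworkManager/dnsmasq.d/openshift.conf
sudo systemctl reload NetworkManager
```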
This is to give ownership of the libvirt backend of the Installer to
the CRC team. For now I've only added two members from the CRC team
(myself and Praveen). I also added two members of the Installer team who
seem to have been the most active devs on the relevant code.
We don't add `libvirt-approvers` for `pkg/types/libvirt`, for the
reasons given here:
https://github.com/openshift/installer/pull/1662#issuecomment-485895942