Since libvirt 5.6.0, dnsmasq options can be passed through the libvirt network definition via an XML namespace [1]. This addresses the following problems:
- eliminate the need to hack routes in the cluster (the workaround mentioned in [3]) so that libvirt's dnsmasq does not manage the domain, and requests from inside the cluster go up the chain to the host itself.
- eliminate the hacky workaround used in the multi-arch CI automation to inject `*.apps` entries into the libvirt network that point to a single worker node [2]. Instead of waiting for the libvirt networks to come up and then updating entries, we can set this before the installation itself through the install config.
- make multiple worker nodes workable for upgrade tests. With the above-mentioned workaround, routing everything to a single worker node fails the upgrade whenever that node goes down. With this change, we can point to the .1 address instead and have a load balancer forward traffic to any worker node.
With this change, the options can be specified in the network section of the install-config YAML as pairs of option names and values. An example:
```
platform:
  libvirt:
    network:
      dnsmasqOptions:
        - name: "address"
          value: "/.apps.tt.testing/192.168.126.51"
      if: tt0
```
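For reference, this is roughly the shape of the Go types that such YAML would map to in the installer's libvirt platform package; the field names, tags, and package layout below are an illustrative assumption rather than the authoritative definitions:
```
// Sketch of the install-config types behind platform.libvirt.network above.
// Field names and tags are assumptions for illustration; see pkg/types/libvirt
// in the installer repository for the real definitions.
package libvirt

// DnsmasqOption is one name/value pair handed to the dnsmasq instance that
// backs the libvirt network, e.g. name "address",
// value "/.apps.tt.testing/192.168.126.51".
type DnsmasqOption struct {
	Name  string `json:"name"`
	Value string `json:"value"`
}

// Network describes the libvirt network used by the cluster.
type Network struct {
	// IfName is the bridge interface name (the `if` key above), e.g. "tt0".
	IfName string `json:"if,omitempty"`
	// DnsmasqOptions are copied into the generated network definition.
	DnsmasqOptions []DnsmasqOption `json:"dnsmasqOptions,omitempty"`
}
```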
The Terraform provider supports rendering these options through a data source and injecting them into the network XML.
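The end result is a network definition that carries the options under libvirt's dnsmasq XML namespace, roughly along the lines of the following sketch (per [1]; unrelated elements are omitted, and the option value mirrors the install-config example above):
```
<network xmlns:dnsmasq='http://libvirt.org/schemas/network/dnsmasq/1.0'>
  <!-- name, bridge, ip and dns elements omitted for brevity -->
  <dnsmasq:options>
    <!-- passed straight through to the dnsmasq instance backing this network -->
    <dnsmasq:option value='address=/.apps.tt.testing/192.168.126.51'/>
  </dnsmasq:options>
</network>
```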
Since this config is optional, omitting it keeps the existing behavior with no changes required.
[1] https://libvirt.org/formatnetwork.html#elementsNamespaces
[2] https://github.com/openshift/release/blob/master/ci-operator/templates/openshift/installer/cluster-launch-installer-remote-libvirt-e2e.yaml#L532-L554
[3] https://github.com/openshift/installer/issues/1007