- lb-default-stable: A default load balancer is now assigned when one is not provided, so this test needs to be updated to reflect that change.
- lb-unmanaged: I made changes to how the defaults are set. If the load
balancer is user-managed, VIPs will no longer be assigned
automatically. This change needs to be reflected in this test by adding
`apiVIPs` and `ingressVIPs` values to the install-config, as sketched below.
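For illustration, a minimal install-config fragment carrying these values might look like the sketch below; the cloud name and VIP addresses are placeholders, and the `loadBalancer` stanza is an assumption about how the user-managed case is expressed:

```yaml
platform:
  openstack:
    cloud: mycloud            # placeholder cloud name
    loadBalancer:
      type: UserManaged       # assumption: selector for a user-managed load balancer
    apiVIPs:
      - 192.0.2.10            # placeholder API VIP
    ingressVIPs:
      - 192.0.2.11            # placeholder Ingress VIP
```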
`FailureDomains` was updated to become a pointer; consequently,
the field may be absent in the `OpenShiftMachineV1Beta1MachineTemplate`.
This commit updates the manifests test to take that into account.
On clusters configured with a dual-stack network, the
IPv4 and IPv6 addresses can be added to the main interface
at different times, which can result in the OpenShift node addresses
not containing the IPv6 address. This commit fixes the issue
by adding `ip=dhcp,dhcp6` to the kernel args of masters and workers,
which sets `required-timeout` so that IP configuration for both address
families is attempted before it is considered successful. This
configuration applies to day-1 dual-stack clusters only.
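As a rough sketch of the effect (not necessarily the asset the installer generates), the same kernel argument could be expressed through a MachineConfig; the name and role label here are illustrative:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-master-dual-stack-dhcp                  # illustrative name
  labels:
    machineconfiguration.openshift.io/role: master
spec:
  kernelArguments:
    - ip=dhcp,dhcp6                                # request both IPv4 and IPv6 configuration at boot
```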
We introduced a TechPreview of OpenStack network failure domains in 4.13
that is now incompatible with the new control-plane-machine-set.
With this change, we remove the experimental implementation of network
failure domains to prepare for the control-plane-machine-set
implementation.
Co-Authored-By: Emilien Macchi <emilien@redhat.com>
Co-Authored-By: Pierre Prinetti <pierreprinetti@redhat.com>
This API has not moved and there is no plan to make any change that
would be backward incompatible in the future.
The feature was well tested (and automated) by our QE on this platform,
as well as documented for OCP 4.13.
We think this API is ready to be GA'ed.
Distribute control-plane machines across user-defined failure domains.
This feature is being released under a TechPreviewNoUpgrade FeatureSet.
Failure domains can be defined in the `controlPlane` machine-pool of
`install-config.yaml` as follows:
```yaml
controlPlane:
  name: master
  platform:
    openstack:
      type: ${CONTROL_PLANE_FLAVOR}
      failureDomains:
      - computeAvailabilityZone: 'nova-1'
        storageAvailabilityZone: 'cinder-1'
        portTargets:
        - id: storage
          network:
            id: 8db6a48e-375b-4caa-b20b-5b9a7218bfe6
      - computeAvailabilityZone: 'nova-2'
        storageAvailabilityZone: 'cinder-2'
        portTargets:
        - id: storage
          network:
            id: 39a7b82a-a8a4-45a4-ba5a-288569a6edd1
      - computeAvailabilityZone: 'nova-3'
        storageAvailabilityZone: 'cinder-3'
        portTargets:
        - id: storage
          network:
            id: 8e4b4e0d-3865-4a9b-a769-559270271242
```
Each `failureDomains` entry can take an optional
`computeAvailabilityZone` string, an optional `storageAvailabilityZone`
string, and an optional `portTargets` array.
Each `portTargets` entry requires an arbitrary `id`, which must be unique per
`failureDomain`. If `id` is exactly `control-plane`, then that
`portTarget` is used as the first machine network instead of the default
primary subnet (or instead of `machinesSubnet` if defined).
Each `portTargets` entry takes an optional `network` object and an
optional `fixedIPs` array (not represented in the example).
The `network` object takes an optional `name` string and an optional `id`
string. `name` is ignored if `id` is passed.
Each `fixedIPs` entry takes a `subnet` object whose syntax is [defined
in the `machinev1alpha1` spec as
`SubnetFilter`](d170fcdc0f/machine/v1alpha1/types_openstack.go (L230-L281)).
Note that unless an external load balancer is used, `portTargets` with
id `control-plane` must all have a single subnet and must all refer to
the same OpenStack subnet. As a consequence, the result is similar
to setting a `machinesSubnet`, except that compute nodes will not
follow.
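For illustration, a failure domain that overrides the first machine network could look like the sketch below; the network and subnet names are hypothetical, and only the fields described above are shown:

```yaml
controlPlane:
  platform:
    openstack:
      failureDomains:
      - computeAvailabilityZone: 'nova-1'
        portTargets:
        - id: control-plane        # used as the first machine network
          network:
            name: ctrl-net         # hypothetical network name; ignored if an id is also set
          fixedIPs:
          - subnet:
              name: ctrl-subnet    # hypothetical subnet, matched via the SubnetFilter syntax
```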
In 4.12, the default CNI will be OVNKubernetes.
This change deploys OVN-Kubernetes by default and
adjusts tests, docs, and comments to reflect
that.
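For reference, the network type can still be set explicitly in install-config; a minimal sketch of opting in or out (field values only, no other assumptions):

```yaml
networking:
  networkType: OVNKubernetes   # now the default; set OpenShiftSDN to keep the previous CNI
```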
Signed-off-by: Jamo Luhrsen <jluhrsen@gmail.com>
The OpenStack manifests tests use an install config with the
`clusterID` field, which is no longer supported by the installer.
A change to the installer to enforce strict unmarshalling of the
install config is in place but is being hindered by the OpenStack
manifests tests; see this PR:
https://github.com/openshift/installer/pull/5307
With this change, compute nodes within each MachineSet are automatically
created in a server group, with a default policy of
"soft-anti-affinity".
In addition, a `serverGroupPolicy` can now be set in install-config, on
the worker MachinePool and/or in the platform defaults, as sketched below.
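For illustration, the policy can be expressed in either place; a minimal install-config sketch, assuming the OpenStack platform section (the cloud name is a placeholder):

```yaml
compute:
- name: worker
  platform:
    openstack:
      serverGroupPolicy: anti-affinity        # per-MachinePool setting
platform:
  openstack:
    cloud: mycloud                            # placeholder
    defaultMachinePlatform:
      serverGroupPolicy: soft-anti-affinity   # platform-wide default
```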
Implements OSASINFRA-2570
Co-Authored-By: Matthew Booth <mbooth@redhat.com>