PowerVC is an OpenStack-based cloud provider with some significant
differences. Since we can use the OpenStack provider for most of the
work, we will create a thin provider that only handles the
differences.
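As a rough illustration of the "thin provider" idea, here is a minimal
Go sketch using struct embedding; every type and method name below is a
hypothetical placeholder, not the installer's real API:

```golang
package main

import "fmt"

// OpenStackProvider stands in for the existing OpenStack provider; the
// type and its methods are placeholders for illustration only.
type OpenStackProvider struct{}

func (OpenStackProvider) Name() string        { return "openstack" }
func (OpenStackProvider) PreProvision() error { return nil }

// PowerVCProvider is the "thin" provider: it embeds the OpenStack
// provider so everything is inherited, and only the differing
// behaviour is overridden.
type PowerVCProvider struct {
	OpenStackProvider
}

// Name is one of the few methods PowerVC overrides; anything not
// overridden falls through to the embedded OpenStack implementation.
func (PowerVCProvider) Name() string { return "powervc" }

func main() {
	var p PowerVCProvider
	fmt.Println(p.Name(), p.PreProvision())
}
```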
When a network that has an IPv6 subnet is used as an additional
network in the OCP cluster, that subnet must be added to the router
so that router advertisements are sent; otherwise the interface won't
get an address configured. This happens because NetworkManager is
configured with method=auto.
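As an illustration only (the guide itself may do this via the CLI or
playbooks), attaching the IPv6 subnet to the router could look like the
gophercloud sketch below; the router and subnet IDs are placeholders:

```golang
package main

import (
	"fmt"
	"log"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/routers"
)

func main() {
	// Authenticate from the usual OS_* environment variables.
	authOpts, err := openstack.AuthOptionsFromEnv()
	if err != nil {
		log.Fatal(err)
	}
	provider, err := openstack.AuthenticatedClient(authOpts)
	if err != nil {
		log.Fatal(err)
	}
	networkClient, err := openstack.NewNetworkV2(provider, gophercloud.EndpointOpts{})
	if err != nil {
		log.Fatal(err)
	}

	// Placeholders: the router of the additional network and its IPv6 subnet.
	routerID := "ROUTER_ID"
	ipv6SubnetID := "IPV6_SUBNET_ID"

	// With the subnet attached as a router interface, the router sends
	// Router Advertisements on it and interfaces using method=auto get
	// an address.
	if _, err := routers.AddInterface(networkClient, routerID, routers.AddInterfaceOpts{
		SubnetID: ipv6SubnetID,
	}).Extract(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("IPv6 subnet attached to router", routerID)
}
```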
Add instructions on how to query OpenShift and OpenStack metrics together.
Co-authored-by: Pierre Prinetti <pierreprinetti@redhat.com>
Co-authored-by: Martin André <m.andre@redhat.com>
We experienced issues caused by network resources created with the same
name, which makes Ansible playbooks behave differently.
Because the OpenShift infraID is not yet accessible at the stage where
network resources are created, a deployment-unique identifier has to be
generated in some other way. This patch implements generating such an
identifier independently of the OpenShift deployment ID.
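The idea, sketched here in Go rather than as the actual playbook change,
is simply to derive a short random suffix for resource names; the length
and encoding below are illustrative choices:

```golang
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// uniqueID returns a short random suffix that can be appended to network
// resource names so two deployments never clash, even though the
// OpenShift infraID is not known yet at this point.
func uniqueID() (string, error) {
	b := make([]byte, 4)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}

func main() {
	id, err := uniqueID()
	if err != nil {
		panic(err)
	}
	// e.g. "ocp-a1b2c3d4-network"
	fmt.Printf("ocp-%s-network\n", id)
}
```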
Co-authored-by: Maysa De Macedo Souza <maysa.macedo95@gmail.com>
When following this guide on RHEL or an older Fedora, there is an issue
with the latest version (2.0.0) of the openstack.cloud collection, as it
requires openstacksdk version 1.0 or higher, which is unavailable in
those Linux distributions.
In 4.15 Kuryr is no longer a supported NetworkType, following its
deprecation in 4.12. This commit removes mentions of Kuryr from the
documentation and code, and also adds validation to prevent
installations from being executed when `networkType` is set to `Kuryr`.
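A minimal sketch of such a check, assuming a plain string `networkType`;
the real validation lives in the installer's install-config validation
and may be shaped differently:

```golang
package main

import (
	"errors"
	"fmt"
)

// validateNetworkType rejects the removed Kuryr NetworkType.
func validateNetworkType(networkType string) error {
	if networkType == "Kuryr" {
		return errors.New("networkType Kuryr is no longer supported as of 4.15; use OVNKubernetes instead")
	}
	return nil
}

func main() {
	if err := validateNetworkType("Kuryr"); err != nil {
		fmt.Println("validation failed:", err)
	}
}
```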
Compact clusters have been supported for a while in IPI.
To also allow compact clusters on UPI, the security group rules
for UPI should be adapted to enable the same ingress traffic
that is enabled for workers.
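For example, the control-plane security group would additionally need
worker-style ingress rules such as the kubelet port and the NodePort
range. The sketch below only builds example rule definitions with
gophercloud types; it is not the UPI playbook itself, and the full rule
list lives in the UPI documentation:

```golang
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/security/rules"
)

func main() {
	// Placeholder for the control-plane security group ID.
	controlPlaneSecGroupID := "CONTROL_PLANE_SG_ID"

	// Two example worker-style ingress rules: kubelet (10250/tcp) and
	// the NodePort range (30000-32767/tcp).
	ingress := []rules.CreateOpts{
		{
			Direction:    rules.DirIngress,
			EtherType:    rules.EtherType4,
			Protocol:     rules.ProtocolTCP,
			PortRangeMin: 10250,
			PortRangeMax: 10250,
			SecGroupID:   controlPlaneSecGroupID,
		},
		{
			Direction:    rules.DirIngress,
			EtherType:    rules.EtherType4,
			Protocol:     rules.ProtocolTCP,
			PortRangeMin: 30000,
			PortRangeMax: 32767,
			SecGroupID:   controlPlaneSecGroupID,
		},
	}

	// Each entry would be passed to rules.Create() with an authenticated
	// Neutron client; printing them keeps this sketch self-contained.
	for _, r := range ingress {
		fmt.Printf("%s %s %d-%d\n", r.Direction, r.Protocol, r.PortRangeMin, r.PortRangeMax)
	}
}
```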
The `MachinesSubnet` field has been reshaped as `controlPlanePort`;
this commit updates the docs to ensure `controlPlanePort` is used.
It also adds dual-stack documentation.
When a machine is created with a compute availability zone (defined via `mpool.zones`) and a storage root volume (defined as `mpool.rootVolume`) and that `rootVolume` has no specified `zones`, CAPO will use the compute AZ for the volume AZ.
This can be problematic if the AZ doesn't exist in Cinder.
Source:
9d183bd479/pkg/cloud/services/compute/instance.go (L439-L442)
```golang
func (s *Service) getOrCreateRootVolume(eventObject runtime.Object, instanceSpec *InstanceSpec, imageID string) (*volumes.Volume, error) {
	// (...)
	availabilityZone := instanceSpec.FailureDomain
	if rootVolume.AvailabilityZone != "" {
		availabilityZone = rootVolume.AvailabilityZone
	}
	// (...)
```
If a compute AZ is provided alongside a root volume, we now require
the root volume to have an AZ, so we force the user to choose which
AZ the root volume is deployed in.
We are also enforcing this via CEL validation in the OpenShift API.
Alternatives considered:
* Do nothing - at the risk of hitting this situation: a failure domain with a compute AZ and a root volume with no AZ, CAPO using the compute AZ to create the volume even though that AZ doesn't exist in Cinder, leading to Machine creation errors.
* Only do a validation in the CPMS - which would require manual CPMS
  edits from the user.
* Change the logic in CAPO for how the root volume AZ is picked -
  unlikely to happen.
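A rough Go sketch of the chosen validation; the type below is
illustrative only, and the installer check plus the CEL rule in the
OpenShift API may be shaped differently:

```golang
package main

import (
	"errors"
	"fmt"
)

// machinePool mirrors just the fields relevant to this check; it is not
// the installer's real type.
type machinePool struct {
	Zones           []string
	HasRootVolume   bool
	RootVolumeZones []string
}

// validateRootVolumeAZ requires the root volume to carry its own AZs
// whenever compute AZs are set, so CAPO never silently reuses a compute
// AZ that may not exist in Cinder.
func validateRootVolumeAZ(mp machinePool) error {
	if len(mp.Zones) > 0 && mp.HasRootVolume && len(mp.RootVolumeZones) == 0 {
		return errors.New("rootVolume.zones must be set when zones is set")
	}
	return nil
}

func main() {
	mp := machinePool{Zones: []string{"az-1"}, HasRootVolume: true}
	fmt.Println(validateRootVolumeAZ(mp))
}
```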
When attaching a manila network by editing a machineset, you probably
want to disable allowed address pairs. Document this.
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2.5 years ago we allowed configuring `serverGroupPolicy` in the
install-config so a user could choose which Nova scheduling policy
to adopt for the machines.
However, if the masters were configured with AZs, Terraform would
create one ServerGroup in OpenStack (the one from master-0) but
configure the Machine providerSpec with different ServerGroups, one
per AZ. This was unwanted, and we now want to use a single ServerGroup
for masters.
With compute AZ support, users already have a way to ensure that
masters aren't placed in the same failure domain as each other.
Also, even if there are fewer than 3 AZs (e.g. 2), the default
`soft-anti-affinity` server group policy makes Nova schedule the
machines on different hosts within the same AZ on a best-effort basis.
Therefore, there is no need to configure a `serverGroup` per
availability zone in the master Machines.
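For illustration, creating that single shared server group with
gophercloud could look like the sketch below; the group name is a
placeholder, and the installer does this through its own machinery,
not this exact code:

```golang
package main

import (
	"fmt"
	"log"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
	"github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/servergroups"
)

func main() {
	// Authenticate from the usual OS_* environment variables.
	authOpts, err := openstack.AuthOptionsFromEnv()
	if err != nil {
		log.Fatal(err)
	}
	provider, err := openstack.AuthenticatedClient(authOpts)
	if err != nil {
		log.Fatal(err)
	}
	computeClient, err := openstack.NewComputeV2(provider, gophercloud.EndpointOpts{})
	if err != nil {
		log.Fatal(err)
	}
	// soft-anti-affinity requires compute microversion 2.15 or later.
	computeClient.Microversion = "2.15"

	// A single group shared by all masters; the name is a placeholder,
	// not the exact name the installer generates.
	sg, err := servergroups.Create(computeClient, servergroups.CreateOpts{
		Name:     "clusterID-master",
		Policies: []string{"soft-anti-affinity"},
	}).Extract()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("server group:", sg.ID)
}
```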
Also, note that in OCP 4.14, CPMS will be enabled by default.
If a user has set multiple AZs for the controlPlane and upgrades from
4.13 to 4.14, CPMS will adopt the control plane and create a CPMS in
Inactive mode, with a single `serverGroup`. The `serverGroup` will
likely be the one from master-0, and it will be shared across all
control plane machines.
It'll be up to the user to set the CPMS to Active,
and then the masters will be redeployed into the single group shared by
all masters. They will never have a ServerGroup named "clusterID + role"
because in previous releases we added the AZ name to it.