In 4.15 Kuryr is no longer a supported NetworkType, following its
deprecation in 4.12. This commit removes mentions of Kuryr from the
documentation and code, and adds validation to prevent installations
from proceeding when `networkType` is set to `Kuryr`.
The `MachinesSubnet` field has been reshaped as `controlPlanePort`;
this commit updates the docs to ensure `controlPlanePort` is used.
Also, this commit adds dual-stack documentation.
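As a rough illustration of what the updated docs describe, an
install-config excerpt using `controlPlanePort` for a dual-stack
deployment might look like the sketch below (network and subnet names
are hypothetical; check the 4.15 docs for the exact schema):
```yaml
platform:
  openstack:
    controlPlanePort:
      network:
        name: dualstack-network   # hypothetical pre-created network
      fixedIPs:
      - subnet:
          name: subnet-v4         # IPv4 subnet on that network
      - subnet:
          name: subnet-v6         # IPv6 subnet on that network
```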
2.5 years ago we made `serverGroupPolicy` configurable in
install-config so a user could choose which Nova scheduling policy
to adopt for the machines.
However, if the masters were configured with AZs, Terraform would
create one ServerGroup in OpenStack (the one from master-0) but
configure the Machine providerSpecs with different ServerGroups, one
per AZ. This was unwanted, and now we want to use a single ServerGroup
for the masters.
With compute AZ support, users already have the ability to ensure
that masters aren't on the same failure domain as each other.
Also, even if there are fewer than 3 AZs (e.g. 2), the default
`soft-anti-affinity` server group policy makes Nova schedule the
machines on different hosts within the same AZ on a best-effort basis.
Therefore, there is no need to configure the master Machines with a
`serverGroup` per availability zone.
Also, note that in OCP 4.14, CPMS will be enabled by default.
If a user has set multiple AZs for the controlPlane and upgrades from
4.13 to 4.14, CPMS will adopt the control plane and create a CPMS in
Inactive mode, with a single `serverGroup`. The `serverGroup` will
likely be the one from master-0, and this will be shared across all
control plane machines.
It'll be up to the user to set the CPMS to Active; the masters will
then be redeployed into the single group shared by all masters. They
will never have a ServerGroup named "clusterID + role", because in
previous releases we included the AZ name in it.
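For reference, activating the generated CPMS amounts to flipping its
`state` field. A minimal sketch of the relevant part of the resource
(created as `cluster` in `openshift-machine-api`) might look like this:
```yaml
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
  state: Active   # switch from Inactive so CPMS starts reconciling the masters
```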
The supported version we indicated was incorrect. Rather than having to
remember to update it for each new OSP version, simply remove this
snippet.
The LB FIP is now called the API FIP.
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Other platforms require at least 100 GB of disk and we've updated
openshift-docs to reflect that for OpenStack too. It seems we forgot to
update the flavor validation code and docs in the installer. This
commit fixes that.
In 4.12, the default CNI will be OVNKubernetes.
This change will deploy ovnk by default as well
as adjust tests, docs and comments to reflect
the same.
Signed-off-by: Jamo Luhrsen <jluhrsen@gmail.com>
Before this patch, the migration would sometimes fail with the following
error:
```
cannot delete Pods not managed by ReplicationController, ReplicaSet,
Job, DaemonSet or StatefulSet (use --force to override)
```
Documentation on how to manually set the Server group in the MachineSet
manifests at install-time is no longer necessary, since the introduction
of the `serverGroupPolicy` property in the OpenStack platform section of
install-config's machine-pools.
Co-authored-by: Max Bridges <50179998+maxwelldb@users.noreply.github.com>
With this change, Compute nodes within each MachineSet are automatically
created in a Server group, with a default policy of
"soft-anti-affinity".
With this change, a "serverGroupPolicy" can be set in install-config, on
the worker MachinePool and/or in the platform default.
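As a hedged sketch of how this might look in install-config (field
placement per the machine-pool docs; the policy values shown are
illustrative), either on the worker MachinePool or as a platform-wide
default:
```yaml
compute:
- name: worker
  platform:
    openstack:
      serverGroupPolicy: anti-affinity        # overrides the default for this pool
platform:
  openstack:
    defaultMachinePlatform:
      serverGroupPolicy: soft-anti-affinity   # platform-wide default
```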
Implements OSASINFRA-2570
Co-Authored-By: Matthew Booth <mbooth@redhat.com>
In 4.9 we introduced support for LoadBalancer services. This means that
users might need to tweak the cloud provider options to match their
OpenStack cluster configuration. This commit adds documentation on how
to do it before and after the installation.
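As a rough, hedged example of the kind of tweak the docs cover (the
exact option names depend on the cloud provider version in use), the
load balancer options live under the `cloud-provider-config` ConfigMap
in `openshift-config`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloud-provider-config
  namespace: openshift-config
data:
  config: |
    [LoadBalancer]
    lb-method = ROUND_ROBIN        # example option; consult the cloud provider docs
    floating-network-id = <uuid>   # external network used for load balancer FIPs
```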
Add a note that if we deploy a cluster in an OpenStack AZ, it's
suggested to use the Cinder backend if the service is available in that
AZ, since Cinder is topology aware.
Swift usually isn't deployed per AZ, so traffic would have to go
through the link between sites, which isn't optimal in the real world.
Signed-off-by: Emilien Macchi <emilien@redhat.com>
This is a first iteration of documenting how to deploy OCP clusters on
provider networks and all the gotchas.
Signed-off-by: Emilien Macchi <emilien@redhat.com>
With 9314e6dc5823690a08109acd26583c517912f55d, the Installer reads the
`clouds.yaml` `cacert` file to connect to the OpenStack API. It is
therefore no longer necessary to add the certificate to the system
trust.
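For context, the `cacert` key in `clouds.yaml` points at the CA bundle
on disk; a minimal sketch (cloud name, credentials and paths are
illustrative):
```yaml
clouds:
  openstack:
    auth:
      auth_url: https://openstack.example.com:13000/v3
      username: user
      password: secret
      project_name: openshift
      user_domain_name: Default
      project_domain_name: Default
    cacert: /etc/pki/ca-trust/source/anchors/openstack-ca.crt  # read directly by the installer
```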
Due to the addition of the PrimarySubnets variable in the providerSpec
we wanted to ensure that users knew it existed and were aware of its pitfalls.
Fixes: OSASINFRA-2088
Note: openshift-installer already checks quotas for AWS and GCP.
1) Calculate the quota constraints based on the InstallConfig.
For both ControlPlane and workers, get the flavors and
create a list of quota resources that will be needed to
successfully deploy the cluster.
Note: for now, only instances, CPUs and RAM are supported.
In the future, we'll add networking and storage resources.
2) Fetch project quotas by using OpenStack API (via gophercloud)
and create the constraints (for instances, CPUs and RAM only
for now).
The quota constraints will be stored in the CloudInfo struct
for caching, so we avoid multiple calls to the OpenStack APIs
to get quotas (a worked example follows below).
3) The logging is improved when there is no region name for
a cloud provider.
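For instance (flavor sizes purely illustrative): with 3 control plane
machines on a flavor providing 8 vCPUs and 16 GB of RAM, and 3 workers
on a flavor providing 4 vCPUs and 8 GB of RAM, the generated
constraints would be 6 instances, 3*8 + 3*4 = 36 cores and
3*16 + 3*8 = 72 GB of RAM, which are then checked against the project
quotas returned by the OpenStack API.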
bz#1690924
jira https://issues.redhat.com/browse/OSASINFRA-1141
Co-Authored-By: Matthew Booth <mbooth@redhat.com>
Signed-off-by: Emilien Macchi <emilien@redhat.com>
The underlying network architecture has changed a lot since these docs
were initially written. We want to make sure that these docs are accurate
and up to date so that users with complex networking use cases like workers
on a custom subnet and baremetal workers are able to manage their ingress/egress
traffic as needed. More up-to-date examples and reference information have been added.
We have chosen to omit sections outlining how to replace the internal LB and DNS
services since they were inaccurate. We are targeting an upcoming release to handle
these features better, given the complexity of our current networking architecture.
Since https://github.com/openshift/installer/pull/3855, the installer is
able to attach the `ingressFloatingIP` specified in the
`install-config.yaml` to the ingress-port. Update the documentation to
reflect the change.
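A hedged install-config sketch of the fields involved (addresses and
network name are placeholders):
```yaml
platform:
  openstack:
    externalNetwork: external          # network the floating IPs come from
    apiFloatingIP: 203.0.113.10        # formerly lbFloatingIP
    ingressFloatingIP: 203.0.113.19    # attached to the ingress port by the installer
```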