Other platforms require at least 100 GB of disk, and we've updated
openshift-docs to reflect that for OpenStack too. It seems we forgot to
update the flavor validation code and docs in the installer. This commit
fixes that.
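For reference, a minimal sketch of the kind of disk check involved (illustrative only; the names and error message are assumptions, not the installer's actual code):
```go
package validation

import (
	"fmt"

	"github.com/gophercloud/gophercloud/openstack/compute/v2/flavors"
)

// minDiskGiB matches the documented 100 GB minimum used on other platforms.
const minDiskGiB = 100

// validateFlavorDisk rejects flavors whose root disk is below the minimum.
func validateFlavorDisk(flavor flavors.Flavor) error {
	if flavor.Disk < minDiskGiB {
		return fmt.Errorf("flavor %q has %d GB of disk, but at least %d GB is required",
			flavor.Name, flavor.Disk, minDiskGiB)
	}
	return nil
}
```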
In 4.12, the default CNI will be OVNKubernetes.
This change deploys OVN-Kubernetes by default and
adjusts tests, docs and comments to reflect
the same.
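For example, this corresponds to the following `networking` stanza in `install-config.yaml` (`networkType` is the relevant field):
```yaml
networking:
  networkType: OVNKubernetes
```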
Signed-off-by: Jamo Luhrsen <jluhrsen@gmail.com>
Before this patch, the migration would sometimes fail with the following
error:
```
cannot delete Pods not managed by ReplicationController, ReplicaSet,
Job, DaemonSet or StatefulSet (use --force to override)
```
Documentation on how to manually set the Server group in the MachineSet
manifests at install time is no longer necessary, since the introduction
of the `serverGroupPolicy` property in the OpenStack platform section of
install-config's machine pools.
Co-authored-by: Max Bridges <50179998+maxwelldb@users.noreply.github.com>
With this change, Compute nodes within each MachineSet are automatically
created in a Server group, with a default policy of
"soft-anti-affinity".
A "serverGroupPolicy" can now be set in install-config, on
the worker MachinePool and/or in the platform default.
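A sketch of both placements in `install-config.yaml` (the values are illustrative):
```yaml
compute:
- name: worker
  platform:
    openstack:
      # Per-MachinePool setting
      serverGroupPolicy: anti-affinity
platform:
  openstack:
    defaultMachinePlatform:
      # Platform-wide default, used when the pool doesn't set one
      serverGroupPolicy: soft-anti-affinity
```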
Implements OSASINFRA-2570
Co-Authored-By: Matthew Booth <mbooth@redhat.com>
In 4.9 we introduced support for LoadBalancer services. This means that
users might need to tweak the cloud-provider options to match their
OpenStack cluster configuration. This commit adds documentation on how
to do that both before and after the installation.
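As an illustration, tweaks of this sort live in the cloud-provider config's `[LoadBalancer]` section (a sketch; the keys follow cloud-provider-openstack, and the values here are assumptions about one possible setup, not recommendations):
```ini
[LoadBalancer]
# Use Octavia with round-robin balancing and health monitors enabled.
lb-provider = octavia
lb-method = ROUND_ROBIN
create-monitor = true
```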
Add a note that when deploying a cluster in an OpenStack AZ, it's
suggested to use the Cinder backend if the service is available in that AZ,
since Cinder is topology-aware.
Swift usually isn't deployed per AZ, so traffic would have to go
through the link between sites, which isn't optimal in the real world.
Signed-off-by: Emilien Macchi <emilien@redhat.com>
This is a first iteration of documenting how to deploy OCP clusters on
provider networks and all the gotchas.
Signed-off-by: Emilien Macchi <emilien@redhat.com>
With 9314e6dc5823690a08109acd26583c517912f55d, the Installer reads the
`clouds.yaml` `cacert` file to connect to the OpenStack API. It is
therefore no longer necessary to add the certificate to the system
trust.
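For illustration, the relevant key in `clouds.yaml` (the cloud name, URL and path here are made up):
```yaml
clouds:
  mycloud:
    auth:
      auth_url: https://openstack.example.com:13000
    # The installer now reads this CA bundle directly.
    cacert: /path/to/openstack-ca.pem
```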
Due to the addition of the PrimarySubnets variable in the providerSpec,
we wanted to ensure that users knew it existed and understood its pitfalls.
Fixes: OSASINFRA-2088
Note: openshift-installer already checks quotas for AWS and GCP.
1) Calculate the quota constraints based on the InstallConfig.
For both the ControlPlane and the workers, get the flavors and
create a list of quota resources that will be needed to
successfully deploy the cluster (see the sketch after this list).
Note: for now, only instances, CPUs and RAM are supported.
In the future, we'll add networking and storage resources.
2) Fetch the project quotas using the OpenStack API (via gophercloud)
and create the constraints (for instances, CPUs and RAM only
for now).
The quota constraints will be stored in the CloudInfo struct
for caching, so we avoid multiple calls to the OpenStack APIs
to get quotas.
3) The logging is improved when there is no region name for
a cloud provider.
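A minimal sketch of step 1, turning flavor data into quota constraints (the type and function names are illustrative, not the installer's actual API):
```go
package quota

import (
	"github.com/gophercloud/gophercloud/openstack/compute/v2/flavors"
)

// Constraint pairs a quota resource name with the amount required.
type Constraint struct {
	Name  string
	Count int64
}

// machineConstraints computes the instances, cores and RAM needed to
// create `replicas` machines of the given flavor.
func machineConstraints(flavor flavors.Flavor, replicas int64) []Constraint {
	return []Constraint{
		{Name: "instances", Count: replicas},
		{Name: "cores", Count: replicas * int64(flavor.VCPUs)},
		{Name: "ram", Count: replicas * int64(flavor.RAM)}, // RAM is in MiB
	}
}
```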
bz#1690924
jira https://issues.redhat.com/browse/OSASINFRA-1141
Co-Authored-By: Matthew Booth <mbooth@redhat.com>
Signed-off-by: Emilien Macchi <emilien@redhat.com>
The underlying network architecture has changed a lot since these docs
were initially written. We want to make sure that these docs are accurate
and up to date, so that users with complex networking use cases, like workers
on a custom subnet and baremetal workers, are able to manage their ingress/egress
traffic as needed. More up-to-date examples and reference information have been added.
We have chosen to omit the sections outlining how to replace the internal LB and DNS
services, since they were inaccurate. We are targeting an upcoming release to handle
these features better, given the complexity of our current networking architecture.
Since https://github.com/openshift/installer/pull/3855, the installer is
able to attach the `ingressFloatingIP` specified in the
`install-config.yaml` to the ingress-port. Update the documentation to
reflect the change.
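For example, with a snippet like this in `install-config.yaml` (the address is a placeholder):
```yaml
platform:
  openstack:
    ingressFloatingIP: 203.0.113.19
```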
Respecting the 301 redirects currently in place:
```
$ curl -sLi https://try.openshift.com | grep -i '^http\|^location'
HTTP/1.1 301 Moved Permanently
location: https://www.openshift.com/try/
HTTP/1.1 301 Moved Permanently
Location: https://www.openshift.com/try
HTTP/1.1 200 OK
```
From the spec for 301 redirects [1]:
The 301 (Moved Permanently) status code indicates that the target
resource has been assigned a new permanent URI and any future
references to this resource ought to use one of the enclosed URIs.
So we should no longer be using the outdated URI.
[1]: https://tools.ietf.org/html/rfc7231#section-6.4.2
Prior to this change, the documentation referred to working clusters with
two workers. As of 4.5, two workers with 2 vCPUs each are not enough for a
healthy cluster.
In order for the user to account for the needed resources, the
documentation should mention that while two floating IPs are needed to
run an OpenShift cluster, a third one will be used during
installation.
This places the Control Plane servers in a Server Group that enforces
the "soft-anti-affinity" policy.
"Soft anti-affinity" causes Nova to create the VMs on separate hosts,
if that is possible.
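For reference, this is roughly equivalent to creating the group by hand with the OpenStack CLI (a sketch; the group name is arbitrary):
```
$ openstack server group create --policy soft-anti-affinity master
```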
Implements OSASINFRA-1300