GCP restricts instance group names to 63 characters, much of which is
taken up by the -instance-group suffix that is appended to keep
resource names unique. Shortening the suffix from -instance-group to
-ig keeps the generated names within the limit while still keeping
them unique.
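To put numbers on it, here is a quick sketch of the character budget
(the 63-char cap is GCP's documented limit on resource names; the rest
is just arithmetic on the two suffixes):

```python
GCP_NAME_LIMIT = 63  # GCP resource names are capped at 63 characters

for suffix in ("-instance-group", "-ig"):
    budget = GCP_NAME_LIMIT - len(suffix)
    print(f"suffix {suffix!r} leaves {budget} characters for the rest of the name")

# suffix '-instance-group' leaves 48 characters for the rest of the name
# suffix '-ig' leaves 60 characters for the rest of the name
```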
The installer now generates spec v3.1 ignition config,
instead of v2.2 (and v2.4 for openstack) as before.
The v3.1 ignition config specification can be found at [1].
A detailed overview of the differences between specs v2 and v3 can be found at [2].
Notable differences are:
- the `Filesystem` identifier on ignition file configs no longer exists
- `Overwrite` now defaults to `false` (was `true` in spec v2), which is why
it is now set explicitly to keep the same behaviour.
- duplicate file configs are now prohibited, i.e. all contents and
all appendices must be defined in a single config.
- duplicate systemd unit configs are now prohibited, i.e. the content
and all dropins must be defined in a single config.
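As a rough illustration of these differences, a minimal spec v3.1
config might look like the following (rendered as a Python dict; the
paths, unit name, and contents are made up for the example):

```python
import json

config = {
    "ignition": {"version": "3.1.0"},
    "storage": {
        "files": [
            {
                # no "filesystem" identifier any more; the path is absolute
                "path": "/etc/motd",
                # spec v3 defaults overwrite to false, so set it explicitly
                # to keep the spec v2 behaviour
                "overwrite": True,
                "contents": {"source": "data:,hello%20world%0A"},
                "mode": 420,
            },
            # a second entry for /etc/motd would now be rejected as a
            # duplicate; all of its contents must live in the entry above
        ]
    },
    "systemd": {
        "units": [
            {
                # likewise, a unit's contents and all of its dropins must
                # be defined in this single entry
                "name": "example.service",
                "contents": "[Service]\nExecStart=/usr/bin/true\n",
                "dropins": [
                    {
                        "name": "10-restart.conf",
                        "contents": "[Service]\nRestart=on-failure\n",
                    }
                ],
            }
        ]
    },
}

print(json.dumps(config, indent=2))
```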
This commit:
- Bumps ignition to v2.3.0 with support for config spec v3.1.
- Bumps terraform-provider-ignition to v2.1.0.
Also adds a step to `images/installer/Dockerfile.upi.ci` that downloads
the provider binary, which is necessary because the ignition v2/spec3
version from the `community-terraform-providers/terraform-ignition-provider`
fork is not present in the provider registry maintained by Hashicorp and
therefore cannot be pulled in automatically by terraform.
- Bumps machine-config-operator to b3b074ee9156
(latest commit at the time of this writing).
- Adds "github.com/clarketm/json" dependency for marshaling Ignition configs.
This is a dropin replacement for "encoding/json" that supports zero values of
structs with omittempty annotations when marshaling.
In effect, this will exclude empty pointer struct fields from the
marshaled data instead of inserting nil values into them, which do not
pass openAPI validation on fields that are supposed to contain e.g. strings.
The same library is used by machine-config-operator and ignition itself.
- Updates the vendor dir to make commit idempotent.
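Since clarketm/json is a Go library, the following is only a rough
Python analogue of the effect it has on the marshaled output (the
`strip_empty` helper and the sample data are hypothetical):

```python
import json

def strip_empty(obj):
    """Recursively drop None values and empty dicts/lists, approximating
    what omitempty on zero values achieves in clarketm/json."""
    if isinstance(obj, dict):
        cleaned = {k: strip_empty(v) for k, v in obj.items()}
        return {k: v for k, v in cleaned.items() if v not in (None, {}, [])}
    if isinstance(obj, list):
        return [strip_empty(v) for v in obj]
    return obj

# A zero-valued nested field would otherwise be emitted as "verification": {}
# (or null), which fails validation on fields expected to hold strings.
contents = {"source": "data:,hello", "verification": {"hash": None}}
print(json.dumps(strip_empty(contents)))  # {"source": "data:,hello"}
```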
[1] https://github.com/coreos/ignition/blob/master/doc/configuration-v3_1.md
[2] https://github.com/coreos/ignition/blob/master/doc/migrating-configs.md#from-version-230-to-300
Co-authored-by: Vadim Rutkovsky <vrutkovs@redhat.com>
This mirrors changes to GCP IPI in #3544
The infra id of clusters on GCP was reduced to 12 characters in #2088
because hostnames longer than 64 characters were not handled on rhcos
machines.
More details on this are available in
https://bugzilla.redhat.com/show_bug.cgi?id=1809345
Now that BZ 1809345 is fixed by openshift/machine-config-operator#1711
and openshift/cluster-api-provider-gcp#88, the installer can relax the
restriction on the infra-id to match the other platforms.
Why is it important?
On GCP all resources are prefixed with the infra-id, which is currently
12 chars, 6 of which are consumed by the random suffix, leaving only 6
chars of the cluster name. This makes it hard to associate a cluster
with its jobs in CI, because most of the identifiable characters are
dropped from the resource names due to this restriction.
Also because of the previous restriction, only one char of the pool's
name is used, making collisions highly likely when there is more than
one pool.
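A hypothetical sketch of the truncation scheme shows why this hurts
(the function and the longer limit used here are illustrative, not the
installer's actual generator):

```python
import random
import string

def infra_id(cluster_name: str, max_len: int) -> str:
    # a 5-char random suffix plus the joining dash consume 6 of the
    # available characters
    suffix = "-" + "".join(random.choices(string.ascii_lowercase + string.digits, k=5))
    return cluster_name[: max_len - len(suffix)] + suffix

print(infra_id("ci-op-pr3544-gcp", 12))  # e.g. 'ci-op--k3x9p': the CI job identity is gone
print(infra_id("ci-op-pr3544-gcp", 27))  # e.g. 'ci-op-pr3544-gcp-k3x9p' (illustrative limit)
```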
Previously, the bootstrap host was being added to the first master
instance group. This causes an issue when the gcp cloud provider
attempts to create internal load balancers for the cluster, because it
ignores the first master's existing instance group and tries to put the
master into a new instance group. If there are workers in a different
subnet, the cloud provider throws an error and never creates the
ingress lbs.
This change creates an instance group for the bootstrap host, and
updates the doc to utilize it. It also removes the steps of adding and
removing the bootstrap host from the external target pools, as that is
not what we are doing with ipi.
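A minimal sketch of such a template, assuming hypothetical property
names for the context values:

```python
def GenerateConfig(context):
    # A dedicated instance group for the bootstrap host, so the cloud
    # provider never has to touch the master instance groups.
    resources = [{
        'name': context.properties['infra_id'] + '-bootstrap-ig',
        'type': 'compute.v1.instanceGroup',
        'properties': {
            'namedPorts': [
                {'name': 'ignition', 'port': 22623},
                {'name': 'https', 'port': 6443},
            ],
            'network': context.properties['cluster_network'],
            'zone': context.properties['zone'],
        },
    }]
    return {'resources': resources}
```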
This change adds the 02_lb_int.py template to the workflow to enable
internal load balancers. The cluster will communicate with the api and
mcs through the internal load balancers. The external load balancer can
optionally be disabled for private clusters.
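A rough sketch of the kind of resources 02_lb_int.py adds (names and
properties here are illustrative, not the template verbatim):

```python
def GenerateConfig(context):
    prefix = context.properties['infra_id']
    resources = [{
        'name': prefix + '-api-internal-health-check',
        'type': 'compute.v1.healthCheck',
        'properties': {
            'type': 'HTTPS',
            'httpsHealthCheck': {'port': 6443, 'requestPath': '/readyz'},
        },
    }, {
        'name': prefix + '-api-internal-backend-service',
        'type': 'compute.v1.regionBackendService',
        'properties': {
            'region': context.properties['region'],
            'protocol': 'TCP',
            'loadBalancingScheme': 'INTERNAL',
            'healthChecks': ['$(ref.' + prefix + '-api-internal-health-check.selfLink)'],
            # backends (one per zone's control-plane instance group) are
            # elided from this sketch
        },
    }, {
        'name': prefix + '-api-internal-forwarding-rule',
        'type': 'compute.v1.forwardingRule',
        'properties': {
            'region': context.properties['region'],
            'loadBalancingScheme': 'INTERNAL',
            'backendService': '$(ref.' + prefix + '-api-internal-backend-service.selfLink)',
            'ports': ['6443', '22623'],  # api and mcs
            'subnetwork': context.properties['control_subnet'],
        },
    }]
    return {'resources': resources}
```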
This change also updates the documentation to use the $(command) syntax
to be in line with the other platforms.
In addition, the variable definitions were all moved to immediately
after the creation of the associated resources, which makes their
origins clear.
Prior to this change, users needed to edit the gcp upi python templates
in order to provision a cluster using a shared VPC. This was prone to
user error.
This change breaks up the templates so that only the yaml files need to
be modified, thus greatly simplifying the process. All of the resources
that would be provisioned in the host project are now in their own
python templates (01_vpc.py, 02_dns.py, and 03_firewall.py). These
resources can be removed from the yaml files to be run against the
service project and placed into yaml files to be run against the host
project instead.
This is the UPI equivalent of 4c346afcde. The initial implementation of
both UPI & IPI did not allow the complete port range for network load
balancers. This change includes the fix for UPI and also leaves the
restricted ranges in place for internal load balancers.
This change increases the minimum ports per control-plane instance to
allow much higher resiliency. It is based on #2376, which did the same
for GCP IPI.
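For reference, the knob in question looks roughly like this on a Cloud
NAT config in a Deployment Manager router resource (a sketch; the
resource layout, property names, and the value shown are illustrative):

```python
def GenerateConfig(context):
    resources = [{
        'name': context.properties['infra_id'] + '-master-router',
        'type': 'compute.v1.router',
        'properties': {
            'region': context.properties['region'],
            'network': context.properties['cluster_network'],
            'nats': [{
                'name': context.properties['infra_id'] + '-master-nat',
                'natIpAllocateOption': 'AUTO_ONLY',
                'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS',
                'subnetworks': [{
                    'name': context.properties['control_subnet'],
                    'sourceIpRangesToNat': ['ALL_IP_RANGES'],
                }],
                # more SNAT ports per control-plane instance means more
                # concurrent outbound connections; the value is illustrative
                'minPortsPerVm': 7168,
            }],
        },
    }]
    return {'resources': resources}
```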
Before this change, the gcp upi firewall rules limited access to the
nodePort ports to worker-to-worker and master-to-master traffic; access
between workers and masters was denied. However, because gcp upi
produces masters that have the 'worker' role, nodePort services can run
on masters and need to be accessed from pods on other workers, and pods
on masters might need access to nodePort services on workers.
This change modifies the gcp upi firewall rules to allow the nodePort
services across the entire cluster.
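A sketch of what such a cluster-wide nodePort rule could look like
(tags and property names are assumptions, not the template verbatim):

```python
def GenerateConfig(context):
    prefix = context.properties['infra_id']
    # One rule that allows nodePort traffic between all cluster nodes,
    # whatever their role.
    resources = [{
        'name': prefix + '-nodeport-internal',
        'type': 'compute.v1.firewall',
        'properties': {
            'network': context.properties['cluster_network'],
            'allowed': [
                {'IPProtocol': 'tcp', 'ports': ['30000-32767']},
                {'IPProtocol': 'udp', 'ports': ['30000-32767']},
            ],
            'sourceTags': [prefix + '-master', prefix + '-worker'],
            'targetTags': [prefix + '-master', prefix + '-worker'],
        },
    }]
    return {'resources': resources}
```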
Before this change, gcp used individual firewall rules for each
service/port used. This caused quota issues when multiple clusters were
provisioned in the same project.
This change collapses the firewall rules where appropriate to reduce
the number of firewall rules used.
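For example, several per-service rules can fold into one rule with
multiple `allowed` entries (a sketch; the placeholders and the exact
port list are illustrative):

```python
# One rule covering several control-plane services that previously each
# had their own firewall rule; INFRA_ID and the network are placeholders.
collapsed_control_plane_rule = {
    'name': 'INFRA_ID-control-plane',
    'type': 'compute.v1.firewall',
    'properties': {
        'network': 'CLUSTER_NETWORK',
        'allowed': [
            {'IPProtocol': 'tcp', 'ports': ['2379-2380']},    # etcd
            {'IPProtocol': 'tcp', 'ports': ['6443']},         # kube-apiserver
            {'IPProtocol': 'tcp', 'ports': ['22623']},        # machine config server
            {'IPProtocol': 'tcp', 'ports': ['10257-10259']},  # controller-manager/scheduler
        ],
        'sourceTags': ['INFRA_ID-master'],
        'targetTags': ['INFRA_ID-master'],
    },
}
```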
Before this change, the GCP UPI workflow hard-coded the zones in the
bootstrap and control-plane templates. It assumed every region had the
zones $REGION-{a,b,c}. However, in some regions this is not the case.
This change adds the zone(s) as parameters to the templates and updates
the docs accordingly. The list of zones is now fetched from gcp, and
then used to populate the templates.
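A sketch of how a template can consume the zone list as a parameter
(property names and the machine type are made up; disks and network
interfaces are elided):

```python
def GenerateConfig(context):
    # Zones now arrive as a template property instead of being hard-coded
    # as $REGION-{a,b,c}; the docs fetch the list from gcp beforehand.
    zones = context.properties['zones']
    resources = []
    for index in range(3):
        zone = zones[index % len(zones)]  # round-robin across available zones
        resources.append({
            'name': context.properties['infra_id'] + '-master-' + str(index),
            'type': 'compute.v1.instance',
            'properties': {
                'zone': zone,
                'machineType': 'zones/' + zone + '/machineTypes/n1-standard-4',
                # disks, networkInterfaces, etc. elided from this sketch
            },
        })
    return {'resources': resources}
```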