GP3 volumes support configurable throughput from 125 MiB/s to
2000 MiB/s. This change allows the throughput to be set at install time
in the install-config.
https://issues.redhat.com/browse/CORS-4212
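As a sketch only, the shape of the change might look like the following;
the struct, field name, and validation bounds here are assumptions for
illustration, not the actual installer code.

package aws

import "fmt"

// EBSVolume sketch: only the fields relevant to this change are shown,
// and Throughput is an assumed field name.
type EBSVolume struct {
	Type string `json:"type"`
	Size int64  `json:"size"`
	IOPS int64  `json:"iops"`
	// Throughput is the requested gp3 throughput in MiB/s.
	Throughput *int64 `json:"throughput,omitempty"`
}

// validateThroughput enforces the gp3 range described above.
func validateThroughput(throughput *int64) error {
	if throughput == nil {
		return nil
	}
	if *throughput < 125 || *throughput > 2000 {
		return fmt.Errorf("throughput must be between 125 and 2000 MiB/s, got %d", *throughput)
	}
	return nil
}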
Installations using ABI/assisted with 16GiB of RAM on the bootstrap node
were failing with "no space left on device" during bootstrapping. The
live ISO environment uses a tmpfs mounted at /var that is sized at 50%
of available RAM. On systems with 16GiB of RAM, this provides only 8GiB
of tmpfs space.
At the beginning of the bootstrap process, node-image-pull.sh creates an
ostree checkout underneath /var/ostree-container. Combined with the
regular disk usage of the later stages of the bootstrap, this pushes
peak tmpfs usage to around 9.4GiB.
This fix creates a separate 4GiB tmpfs for /var/ostree-container, so
that it is not subject to the limits on the size of /var.
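The shape of the fix, as a sketch (the exact mount invocation and
options used in node-image-pull.sh are assumptions):

mkdir -p /var/ostree-container
# Dedicated 4GiB tmpfs so the ostree checkout no longer counts against
# the 50%-of-RAM tmpfs backing /var.
mount -t tmpfs -o size=4G tmpfs /var/ostree-container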
pkg/types/gcp/platform.go:
Add FirewallManagementPolicy. The policy indicates whether the cluster or
the user will manage the firewall rules.
Add validation to ensure that a network is provided when FirewallManagement
is set to Unmanaged in the install config.
pkg/types/gcp/metadata.go:
Add the management policy to the metadata so that the bootstrap destroy process
knows whether to delete the bootstrap firewall rules or not.
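A rough sketch of the shape of these changes; everything beyond the
FirewallManagementPolicy name (constants, fields, validation wording) is
assumed for illustration.

package gcp

import "errors"

// FirewallManagementPolicy indicates whether the cluster or the user
// manages the firewall rules.
type FirewallManagementPolicy string

const (
	// ManagedFirewall: the installer creates and destroys the firewall
	// rules (assumed constant name).
	ManagedFirewall FirewallManagementPolicy = "Managed"
	// UnmanagedFirewall: the user manages the firewall rules (assumed
	// constant name).
	UnmanagedFirewall FirewallManagementPolicy = "Unmanaged"
)

// Platform sketch: only the fields relevant here are shown.
type Platform struct {
	Network            string                   `json:"network,omitempty"`
	FirewallManagement FirewallManagementPolicy `json:"firewallManagement,omitempty"`
}

// An unmanaged policy only makes sense on a user-supplied network.
func validateFirewallManagement(p *Platform) error {
	if p.FirewallManagement == UnmanagedFirewall && p.Network == "" {
		return errors.New("a network must be provided when firewallManagement is Unmanaged")
	}
	return nil
}

// Metadata sketch: carried through so the bootstrap destroy process
// knows whether to delete the bootstrap firewall rules.
type Metadata struct {
	FirewallManagement FirewallManagementPolicy `json:"firewallManagement,omitempty"`
}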
Achieved by bumping the library itself:
pushd cluster-api/providers/openstack
go get -u sigs.k8s.io/cluster-api-provider-openstack@latest
go mod tidy
go mod vendor
popd
Followed by the assets:
pushd <path-to-upstream-capo-repo>
git checkout v0.13.0
make release-manifests
popd
cp <path-to-upstream-capo-repo>/out/infrastructure-components.yaml \
data/data/cluster-api/openstack-infrastructure-components.yaml
This has the side effect of bumping the Go version to 1.24.
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
This is split out from CAPO starting with CAPO v0.12.0. Start deploying it manually
in preparation for a CAPO bump.
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Add support for installing multiple NAT gateways, one per subnet, in the
specific zones they need to be in.
Also allow users to bring their own subnets.
(NAT gateways on BYO subnets are not supported by CAPZ; it just creates a
dummy NAT gateway and doesn't attach it to the subnet.)
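As illustration only, the install-config shape for this might resemble
the following sketch; every field name here is an assumption.

package azure

// Subnet sketch (hypothetical field names): each subnet gets its own
// NAT gateway, pinned to the zone it needs to live in.
type Subnet struct {
	// Name of a pre-existing subnet when the user brings their own.
	Name string `json:"name,omitempty"`
	// Zone in which the subnet's NAT gateway should be created.
	Zone string `json:"zone,omitempty"`
	// NATGatewayName optionally names the NAT gateway for this subnet.
	NATGatewayName string `json:"natGatewayName,omitempty"`
}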
Pull in the most recent version, which includes the v1beta2 API required
by CAPO v0.13.x (and likely others in the future).
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Wire in the internalDNSRecords API field to install-config so we can
disable the internal DNS records for deployments using a
user-managed load balancer.
Co-Authored-By: Claude <noreply@anthropic.com>
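A sketch of how the field might be typed; only the internalDNSRecords
field name comes from this change, the policy type and its values are
assumptions.

package types

// DNSRecordsPolicy is an assumed type name for illustration.
type DNSRecordsPolicy string

const (
	EnabledDNSRecords  DNSRecordsPolicy = "Enabled"
	DisabledDNSRecords DNSRecordsPolicy = "Disabled"
)

// Platform sketch: InternalDNSRecords lets deployments that use a
// user-managed load balancer opt out of the internal DNS records.
type Platform struct {
	InternalDNSRecords DNSRecordsPolicy `json:"internalDNSRecords,omitempty"`
}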
PowerVC is an OpenStack-based cloud provider with some significant
differences. Since we can use the OpenStack provider for most of the
work, we will create a thin provider that handles only the
differences.
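The "thin provider" idea, as a sketch in Go; the type and method names
are illustrative, not the installer's actual provider interface.

package powervc

// openstackProvider stands in for the real OpenStack provider.
type openstackProvider struct{}

func (openstackProvider) Name() string { return "openstack" }

// PreProvision represents the shared OpenStack logic reused as-is.
func (openstackProvider) PreProvision() error { return nil }

// Provider embeds the OpenStack provider and overrides only the
// PowerVC-specific differences.
type Provider struct {
	openstackProvider
}

// Name is one of the few overridden behaviours.
func (Provider) Name() string { return "powervc" }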
If the intent of adding kubebuilder/DeepCopy code generation is to
enable these types to be used in CRD definitions, it stands to reason
that those CRDs should be usable in a Kubernetes cluster. UniqueItems=true
is not permitted in CRD schemas by Kubernetes.
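For illustration, this is the kind of marker that has to go (sketch; the
type and field names are made up):

package v1alpha1

// Kubernetes structural schemas require uniqueItems to be false, so
// markers like this are rejected and must be dropped:
type Example struct {
	// +kubebuilder:validation:UniqueItems=true
	Tags []string `json:"tags"`
}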
This might be controversial because it relaxes validation requirements