The changes here update the RHCOS 4.20 bootimage metadata and
address the following issues:
OCPBUGS-64611: [4.20] coreos-boot-disk link not working with multipath on early boot
OCPBUGS-67201: [4.20] Cannot use auto-forward kargs (like ip=) with coreos-installer (iso|pxe) customize
OCPBUGS-68356: [4.20] Using multipath on the sysroot will fail to boot if less than 2 paths are present
OCPBUGS-69837: [4.20] Ignition fails with crypto/ecdh: invalid random source in FIPS 140-only mode
This change was generated using:
```
plume cosa2stream \
  --target data/data/coreos/rhcos.json \
  --distro rhcos \
  --no-signatures \
  --name rhel-9.6 \
  --url https://rhcos.mirror.openshift.com/art/storage/prod/streams \
  x86_64=9.6.20260112-0 \
  aarch64=9.6.20260112-0 \
  s390x=9.6.20260112-0 \
  ppc64le=9.6.20260112-0
```
Signed-off-by: Tiago Bueno <tiago.bueno@gmail.com>
We have a report that a bare-metal installation with VLAN interfaces
can take longer than the current timeout of the
pre-network-manager-config service allows. Increase it to 300
seconds.
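For reference, raising a unit start timeout is normally done with a systemd drop-in of this shape (the unit name comes from the text; the drop-in path is illustrative):

```ini
# /etc/systemd/system/pre-network-manager-config.service.d/10-timeout.conf (illustrative path)
[Service]
TimeoutStartSec=300
```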
Installations using ABI/assisted with 16GiB of RAM on the bootstrap node
were failing with "no space left on device" during bootstrapping. The
live ISO environment uses a tmpfs mounted at /var that is sized at 50%
of available RAM. On systems with 16GiB of RAM, this provides only 8GiB
of tmpfs space.
At the beginning of the bootstrap process, node-image-pull.sh creates an
ostree checkout underneath /var/ostree-container. When this is added to
the regular disk space usage of the later parts of the bootstrap, the
peak tmpfs usage hits around 9.4GiB.
This fix creates a separate 4GiB tmpfs for /var/ostree-container, so
that it is not subject to the limits on the size of /var.
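A dedicated tmpfs like this can be expressed as a systemd mount unit; the size and mount point match the text, while the unit itself is an illustrative sketch (systemd escapes the dash in the path as `\x2d`):

```ini
# var-ostree\x2dcontainer.mount (illustrative sketch)
[Mount]
What=tmpfs
Where=/var/ostree-container
Type=tmpfs
Options=size=4G

[Install]
WantedBy=local-fs.target
```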
The changes here update the RHCOS 4.20 bootimage metadata and
address the following issue:
OCPBUGS-64611: [4.20] [OCP 4.18] coreos-boot-disk link not working with
multipath on early boot
This change was generated using:
```
plume cosa2stream \
  --target data/data/coreos/rhcos.json \
  --distro rhcos \
  --no-signatures \
  --name rhel-9.6 \
  --url https://rhcos.mirror.openshift.com/art/storage/prod/streams \
  x86_64=9.6.20251113-0 \
  aarch64=9.6.20251113-0 \
  s390x=9.6.20251113-0 \
  ppc64le=9.6.20251113-0
```
Signed-off-by: Tiago Bueno <tiago.bueno@gmail.com>
The changes here update the RHCOS 4.20 bootimage metadata and
address the following issue:
OCPBUGS-62699: Revert inclusion of AWS ECR credential provider in RHEL layer
This change was generated using:
```
plume cosa2stream --target data/data/coreos/rhcos.json \
--distro rhcos --no-signatures --name rhel-9.6 \
--url https://rhcos.mirror.openshift.com/art/storage/prod/streams \
x86_64=9.6.20251023-0 \
aarch64=9.6.20251023-0 \
s390x=9.6.20251023-0 \
ppc64le=9.6.20251023-0
```
The changes here update the RHCOS 4.20 bootimage metadata and
address the following issue:
COS-3042: GA ROSA-HCP support Windows LI for CNV
This change was generated using:
```
plume cosa2stream --target data/data/coreos/rhcos.json \
--distro rhcos --no-signatures --name rhel-9.6 \
--url https://rhcos.mirror.openshift.com/art/storage/prod/streams \
x86_64=9.6.20251015-1 \
aarch64=9.6.20251015-1 \
s390x=9.6.20251015-1 \
ppc64le=9.6.20251015-1
```
No timeout is currently set. If there is an underlying issue, such as
an incorrectly configured registry.conf, the service runs continuously.
SSH and login wait for agent-extract-tui to complete and are
blocked, leaving the host inaccessible.
The copy operation using the * wildcard under /var/lib/containers/storage/
is unreliable and often fails.
Changed it to use find -exec cp.
Thank you Andrea Fasano for providing the command.
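The wildcard-free copy can be sketched like this; the paths below are temporary stand-ins for /var/lib/containers/storage/, not the real layout:

```shell
# Create stand-in source/destination trees (illustrative paths)
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/overlay-images" && touch "$src/overlay-images/images.json"

# Copy every top-level entry without relying on shell glob expansion
find "$src" -mindepth 1 -maxdepth 1 -exec cp -r {} "$dst/" \;
```

Unlike `cp "$src"/* "$dst"/`, this does not fail when the glob matches nothing or expands past the argument-list limit.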
Disable shellcheck on agent-image.env. The file is generated at
runtime after get-container-images.sh is executed.
Added missing double quotes and switched echo to printf in
install-status.sh.
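The motivation for the quoting and printf change can be seen with a value that echo would misinterpret (the variable and value here are illustrative, not taken from install-status.sh):

```shell
# A status string that happens to begin with an echo option flag
status="-n waiting for install"

# printf with a quoted expansion prints the value verbatim;
# an unquoted `echo $status` would consume -n as an option and word-split the rest
printf '%s\n' "$status"
```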
install.openshift.io_installconfigs.yaml:
** Updated fields from types/installconfig/gcp.
CORS-4047: Add Private Zone Validation
pkg/types/gcp/platform.go:
** Add the user-specified private DNS zone.
** Add static validation.
pkg/asset/installconfig/gcp/validation.go:
** When private DNS zone information is provided, ensure that the project and zone
are used for validation.
CORS-4045: Update Cluster Metadata
** Add the GCP private zone information to the cluster metadata
CORS-4048: Update TFVars to include private zone info
CORS-4049: Find the correct project for the dns zones
** Update the DNS Manifest to take the correct private zone project when specified.
** Note: Need to update DNS Spec to take in a project.
CORS-4046: Delete Private Zones
pkg/destroy/gcp:
** Use the cluster metadata to update the gcp cluster uninstaller.
** Find DNS zones in the correct project. Delete the zones that can and should be
deleted.
** Delete the DNS records in the private and public zones.
pkg/destroy/gcp:
** Destroy DNS zones if they have the "owned" label.
installconfig/gcp:
** Generate a new Client function to find private DNS zones where the base domain
and zone name are both provided.
manifests/dns:
** Use the new client function to ensure that we find the correct private zone
when private zone information is provided in the install config file.
clusterapi/dns:
** Use the new client function to ensure that we find the correct private zone
when private zone information is provided in the install config file.
Add the "shared" label when the installer does not create the private managed zone.
** On destroy, search the private DNS zone for labels. If a shared
label with a key matching the cluster ID exists, remove the label.
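The destroy-time decision described above can be sketched in shell; the label key format and values here are assumptions for illustration, not the installer's actual code:

```shell
cluster_id="mycluster-abc12"
labels="kubernetes-io-cluster-${cluster_id}=shared,env=prod"  # hypothetical label set on the zone

# On a shared zone we did not create, only remove our marker label; never delete the zone
case ",$labels," in
  *",kubernetes-io-cluster-${cluster_id}=shared,"*) action="remove-shared-label" ;;
  *) action="keep" ;;
esac
```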
When custom DNS is enabled, the resolv.conf file on the bootstrap node
needs to be kept pointed at localhost (127.0.0.1), where the
local static CoreDNS pod provides DNS for API and API-Int.
After the initial creation of the resolv.conf file, it needs to be kept
updated in case it gets overwritten by NetworkManager.
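The reconciliation described above amounts to rewriting resolv.conf whenever it stops pointing at the local resolver; a minimal sketch against a temp file standing in for /etc/resolv.conf:

```shell
resolv=$(mktemp)                            # stand-in for /etc/resolv.conf
echo "nameserver 192.168.0.2" > "$resolv"   # simulate NetworkManager overwriting the file

# Restore the localhost resolver if it is missing
grep -q '^nameserver 127\.0\.0\.1' "$resolv" || printf 'nameserver 127.0.0.1\n' > "$resolv"
```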
Added platform-agnostic multi-disk support using Ignition configuration embedded in MachineConfigs
Created new disk types: etcd, swap, and user-defined disks
Implemented disk setup validation and feature gates
Added machine config generation for disk provisioning
Review and unit tests were assisted-by: cursor
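An Ignition fragment for one such disk might look like the sketch below; the device path, label, and mount point are illustrative assumptions, not the generated output:

```json
{
  "ignition": { "version": "3.4.0" },
  "storage": {
    "disks": [
      { "device": "/dev/disk/by-id/example-etcd-disk", "wipeTable": true }
    ],
    "filesystems": [
      {
        "device": "/dev/disk/by-id/example-etcd-disk",
        "format": "xfs",
        "label": "etcd",
        "path": "/var/lib/etcd",
        "wipeFilesystem": true
      }
    ]
  }
}
```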
** Added a common file in which all GCP API clients are created.
** Transferred all client creation to the common file.
** Moved the resource manager call in clusterapi from v1 to v3, ensuring that all calls
use the same version of the API.
** Note: the monitoring service still needs to be added to the API.
CORS-3916: Update Installconfig to format and accept service endpoints
** Accept service endpoints through the install config
** Service Endpoints should be entered in a format such as
https://compute-exampleendpoint.p.googleapis.com
and the path will be added by the installer to be something like
https://compute-exampleendpoint.p.googleapis.com/compute/v1/.
** The endpoints are formatted to ensure that the version is correct. If the
user provided a version such as v2 when v1 is required, it would be difficult
for the installer to produce useful errors.
** Send the formatted endpoints to CAPG.
** Format the endpoints to be sent to the GCP Cloud provider (cloud provider config).
** Format the endpoints to be sent to the GCP PD CSI Driver (Infrastructure). This is how most
other packages receive this information, as it is passed through the API Infrastructure.
Note: the GCP PD CSI Driver will ignore the path of the endpoint.
** Cleaned up the formatting for the endpoints. This includes providing options to format
the endpoints with or without paths. The paths should not be included in the infrastructure
config, because the other packages do not want them (and infrastructure validation fails if they are present).
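The path-appending step can be sketched as a small shell helper; the helper name is hypothetical, and the URL matches the example given above:

```shell
# Append the service name and API version to a bare endpoint,
# normalizing away any trailing slash first
format_endpoint() {
  printf '%s/%s/%s/' "${1%/}" "$2" "$3"
}

format_endpoint "https://compute-exampleendpoint.p.googleapis.com" compute v1
```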
The changes here update the RHCOS 4.20 bootimage metadata.
A notable change is adding the kubevirt artifact for s390x.
This change was generated using:
```
plume cosa2stream --target data/data/coreos/rhcos.json \
--distro rhcos --no-signatures --name rhel-9.6 \
--url https://rhcos.mirror.openshift.com/art/storage/prod/streams \
x86_64=9.6.20250701-0 \
aarch64=9.6.20250701-0 \
s390x=9.6.20250701-0 \
ppc64le=9.6.20250701-0
```
Currently you can only specify a name for an existing Transit Gateway
or Virtual Private Cloud. Since names are not guaranteed to be unique,
this can lead to issues, so allow a UUID instead of a name.
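Distinguishing a UUID from a name comes down to a simple pattern check; this is a sketch, and the real installer validation may differ:

```shell
# True when the argument looks like an RFC 4122 UUID (8-4-4-4-12 hex digits)
is_uuid() {
  printf '%s' "$1" | grep -Eq '^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$'
}

# Route the reference to an ID lookup or a name lookup (mode values are illustrative)
if is_uuid "3fa85f64-5717-4562-b3fc-2c963f66afa6"; then mode="lookup-by-id"; else mode="lookup-by-name"; fi
```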
Created agent-extract-tui.service for the interactive-disconnected
workflow to extract the agent-tui and nmstate libraries during boot.
The files are extracted from the agent-install-utils image. In the
interactive-disconnected workflow, the image is available on the
local container storage. They need to be extracted before the
agent-interactive-console.service starts.
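The ordering requirement can be expressed with standard systemd dependencies; the unit names come from the text, and the rest of the sketch is illustrative:

```ini
# agent-extract-tui.service (illustrative sketch)
[Unit]
Description=Extract agent-tui and nmstate libraries from agent-install-utils
Before=agent-interactive-console.service

[Service]
Type=oneshot
RemainAfterExit=yes
```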
Add the option for users to create a NAT gateway for the
compute nodes as a replacement for the traditional load balancer
setup. This covers only a single NAT gateway in the compute
subnet, as CAPZ expects an outbound LB for the control planes.
Added support for arbiter installs to the ABI flow. We currently do not
support installing a TechPreview featureSet with agent-based installs,
so this also adds the capability to override the featureSet passed
to the assisted service.
Signed-off-by: ehila <ehila@redhat.com>
* pkg/asset/manifests: add MCO operator manifest
Adds manifest generation for MCO configuration.
Currently the manifest is only generated when
custom boot images are specified, in order
to disable MCO management of those boot images.
The manifest generation uses a Go template because testing
revealed that API server validation would not permit
manifests generated by serializing the Go structs, which
would be more consistent with how we generate manifests
for other OpenShift operators. Since Go populates the
zero value for any non-pointer struct, validation was
triggered: the API server expected certain required fields
for these zero-value structs. Using a template allows us
to bypass this problem.
Fixes OCPBUGS-57348