The changes done here will update the RHCOS 4.21 bootimage metadata and
address the following issues:
OCPBUGS-62699: Revert inclusion of AWS ECR credential provider in RHEL layer
This change was generated using:
```
plume cosa2stream --target data/data/coreos/rhcos.json \
--distro rhcos --no-signatures --name rhel-9.6 \
--url https://rhcos.mirror.openshift.com/art/storage/prod/streams \
x86_64=9.6.20251023-0 \
aarch64=9.6.20251023-0 \
s390x=9.6.20251023-0 \
ppc64le=9.6.20251023-0
```
The changes done here will update the RHCOS 4.21 bootimage metadata and
address the following issues:
COS-3042: GA ROSA-HCP support Windows LI for CNV
This change was generated using:
```
plume cosa2stream --target data/data/coreos/rhcos.json \
--distro rhcos --no-signatures --name rhel-9.6 \
--url https://rhcos.mirror.openshift.com/art/storage/prod/streams \
x86_64=9.6.20251015-1 \
aarch64=9.6.20251015-1 \
s390x=9.6.20251015-1 \
ppc64le=9.6.20251015-1
```
Code to wait for iptables to start was added (to startironic.sh) in the
RHEL 7 era. We kept it because we were not sure it was still needed. We
now know that restarting a oneshot systemd service prevents any of the
services that depend on it from starting. It follows that the
wait-iptables-init service has no effect today in RHEL 9, so it can be
removed.
Since the image customization service now runs as a Kubernetes
controller, it is no longer directly required by ironic (as it was when
it ran on static data and Terraform was doing the provisioning). Ironic
needs only the kernel to be set up by extract-machine-os, which was
previously a transitive dependency.
And although it is wanted by BMO, it should not block BMO startup, as BMO
may be able to provide at least some useful debugging information in the
CRDs if the image-customization service is not running.
In turn, the image-customization controller can run and potentially
provide useful error messages if the CoreOS ISO file does not exist, so
use Wants instead of Requires to depend on extract-machine-os.
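A minimal sketch of the resulting dependency direction; the unit and drop-in paths are assumptions, and only the Wants=/After= directives mirror the change described above:
```
# Sketch only: Wants= lets image-customization start and report a useful
# error if the CoreOS ISO is missing, whereas Requires= would block it
# from starting at all.
mkdir -p /etc/systemd/system/image-customization.service.d
cat <<'EOF' > /etc/systemd/system/image-customization.service.d/10-deps.conf
[Unit]
Wants=extract-machine-os.service
After=extract-machine-os.service
EOF
systemctl daemon-reload
```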
Some of the configuration in this file has been inherited from the
ironic service. We don't mount ironic.volume, so there is no need to
depend on it. We don't include the /etc/ironic.env environment file, so
we do not need to wait for it to be built and should not attempt to pass
on the HTTP_PORT and IRONIC_KERNEL_PARAMS environment variables from it.
Finally, BMO requires the ironic.service, so set that dependency instead of
using ironic.service's own dependencies. Note that this must be a
Requires dependency because we need the BMO to stop when we stop
ironic.service in master-bmh-update.
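A hedged sketch of the BMO side of that dependency; the baremetal-operator unit name and drop-in path are placeholders:
```
# Sketch only: Requires=/After= on ironic.service mean BMO stops when
# ironic.service is stopped in master-bmh-update.
mkdir -p /etc/systemd/system/baremetal-operator.service.d
cat <<'EOF' > /etc/systemd/system/baremetal-operator.service.d/10-ironic.conf
[Unit]
Requires=ironic.service
After=ironic.service
EOF
systemctl daemon-reload
```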
** Provide the user with the ability to specify the name and location of the
private service connect endpoint. When the region is empty, the location is
assumed to be global.
No timeout is currently set. If there is an underlying issue, such as
an incorrectly configured registry.conf, the service runs continuously.
SSH and login wait for agent-extract-tui to complete and are
blocked, leaving the host inaccessible.
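One possible way to bound the service, shown only as an illustration (the unit name is taken from the text; the drop-in and timeout value are assumptions, not the actual fix):
```
# Illustration only: cap how long the oneshot service may run so the units
# waiting on it (ssh, login) are eventually unblocked instead of hanging.
mkdir -p /etc/systemd/system/agent-extract-tui.service.d
cat <<'EOF' > /etc/systemd/system/agent-extract-tui.service.d/10-timeout.conf
[Service]
TimeoutStartSec=300
EOF
systemctl daemon-reload
```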
Since we do not detach from the container process in "podman run", log
messages get written both to stderr (where they are captured by systemd
and written to the journal) and to the journal directly by podman. This
results in duplicate log messages in the journal.
We cannot detach from the container in a oneshot service, so use the
passthrough log driver to ensure we get only one copy of the logs.
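For illustration, the shape of the invocation this describes (the image name is a placeholder):
```
# Stay attached to the container (no -d, as a oneshot service requires) and
# hand its output straight through to the invoking unit, so the journal gets
# only one copy instead of systemd-captured stderr plus podman's own
# journald logging.
podman run --log-driver=passthrough --rm quay.io/example/image:latest
```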
When bootkube.sh runs a podman container, use the k8s-file log driver to
restore the behaviour from RHEL 8. The default log driver changed to
journald in RHEL 9, with the result that a separate log file was not
created, and that all log messages from inside the container appear
twice in the journal since they are also captured from stderr by systemd.
Since we want to run these pods synchronously, we do not want to detach
from them (which would be the other way to ensure we do not get two
copies of the logs).
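A sketch of the corresponding bootkube.sh invocation; the image name and log path are placeholders:
```
# Keep the RHEL 8 behaviour: write a separate per-container log file instead
# of logging to journald, while still running the container attached.
podman run --log-driver=k8s-file \
  --log-opt path=/var/log/bootkube/example.log \
  --rm quay.io/example/image:latest
```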
The copy operation using the * wildcard under /var/lib/containers/storage/
does not work reliably and often fails.
Changed to using find with -exec cp.
Thank you Andrea Fasano for providing the command.
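A hypothetical equivalent of the replacement command; the destination path is an assumption:
```
# Copy each top-level entry under the storage directory explicitly instead of
# relying on shell glob expansion.
find /var/lib/containers/storage/ -mindepth 1 -maxdepth 1 \
  -exec cp -a {} /mnt/destination/containers/storage/ \;
```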
Disable shellcheck on agent-image.env. The file is generated at
runtime after get-container-images.sh is executed.
Added missing double quotes and switched echo to printf in
install-status.sh
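Illustrative only, in the spirit of those fixes (the variable name is a placeholder):
```
# Quote expansions and use printf, which does not reinterpret its argument
# the way echo can.
status="${1:-unknown}"
printf '%s\n' "Install status: ${status}"
```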
install.openshift.io_installconfigs.yaml:
** Updated fields from the types/installconfig/gcp
CORS-4047: Add private Zone Validation
pkg/types/gcp/platform.go:
** Add the user-specified private DNS zone
** Add static validation
pkg/asset/installconfig/gcp/validation.go:
** When private DNS zone information is provided, ensure that the project and zone
are used for validation.
CORS-4045: Update Cluster Metadata
** Add the GCP private zone information to the cluster metadata
CORS-4048: Update TFVars to include private zone info
CORS-4049: Find the correct project for the dns zones
** Update the DNS Manifest to take the correct private zone project when specified.
** Note: Need to update DNS Spec to take in a project.
CORS-4046: Delete Private Zones
pkg/destroy/gcp:
** Use the cluster metadata to update the gcp cluster uninstaller.
** Find DNS zones in the correct project. Delete the zones that can and should be
deleted.
** Delete the DNS records in the private and public zones.
pkg/destroy/gcp:
** Destroy DNS zones if they have the "owned" label.
installconfig/gcp:
** Generate a new Client function to find private DNS zones where the base domain
and zone name are both provided.
manifests/dns:
** Use the new client function to ensure that we find the correct private zone
when private zone information is provided in the install config file.
clusterapi/dns:
** Use the new client function to ensure that we find the correct private zone
when private zone information is provided in the install config file.
Adding the "shared" label when the installer does not create the private managed zone.
** On Destroy, search the private DNS zone for the labels. If the
shared label with a key matching the cluster ID exists, remove the label.
When custom-dns is enabled, the resolv.conf file on the bootstrap node
needs to be kept updated to point to localhost (127.0.0.1), where the
local static CoreDNS pod provides DNS for API and API-Int.
After the initial creation of the resolv.conf file, it needs to be kept
updated in case it gets overwritten by NetworkManager.
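A rough sketch of the idea, assuming a simple periodic check; the loop and paths are illustrative, not the actual implementation:
```
# Re-assert the local static CoreDNS resolver whenever NetworkManager
# overwrites resolv.conf.
while true; do
    if ! grep -q '^nameserver 127\.0\.0\.1$' /etc/resolv.conf; then
        printf 'nameserver 127.0.0.1\n' > /etc/resolv.conf
    fi
    sleep 10
done
```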
Added platform-agnostic multi-disk support using Ignition configuration embedded in MachineConfigs
Created new disk types: etcd, swap, and user-defined disks
Implemented disk setup validation and feature gates
Added machine config generation for disk provisioning
Review and unit tests were assisted-by: cursor
** Added a common file where all GCP API clients are created.
** Transferred all client creation over to the common file.
** Switched the resource manager call from v1 to v3 in clusterapi. This ensured that all calls use the same
version of the API.
** Note: the monitoring service needs to be added to the API.
CORS-3916: Update Installconfig to format and accept service endpoints
** Accept service endpoints through the install config
** Service Endpoints should be entered in a format such as
https://compute-exampleendpoint.p.googleapis.com
and the path will be added by the installer to be something like
https://compute-exampleendpoint.p.googleapis.com/compute/v1/.
** The endpoints are formatted to ensure that the version is correct. If the
user provided a version such as v2 when v1 is required, it would be difficult
for the installer to return useful errors (see the illustrative snippet after this list).
** Send the formatted endpoints to CAPG.
** Format the endpoints to be sent to the GCP Cloud provider (cloud provider config).
** Format the endpoints to be sent to the GCP PD CSI Driver (Infrastructure). This is how most of the
other packages receive this information, as it is passed through the API Infrastructure.
Note: The GCP PD CSI Driver will ignore the path of the endpoint.
** Cleaned up the formatting for the endpoints. This includes providing options to format
the endpoints with or without paths. The paths should not be included in the infrastructure
config, because the other packages do not want them (and the infrastructure validation fails otherwise).
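An illustrative snippet of the normalization described above, using the example endpoint from this list (the real formatting lives in the installer's Go code):
```
# Append the expected service path and API version to a user-supplied
# endpoint; drop any trailing slash first.
endpoint="https://compute-exampleendpoint.p.googleapis.com"
formatted="${endpoint%/}/compute/v1/"
echo "${formatted}"  # https://compute-exampleendpoint.p.googleapis.com/compute/v1/
```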
The changes done here will update the RHCOS 4.20 bootimage metadata.
A notable change is adding the kubevirt artifact for s390x.
This change was generated using:
```
plume cosa2stream --target data/data/coreos/rhcos.json \
--distro rhcos --no-signatures --name rhel-9.6 \
--url https://rhcos.mirror.openshift.com/art/storage/prod/streams \
x86_64=9.6.20250701-0 \
aarch64=9.6.20250701-0 \
s390x=9.6.20250701-0 \
ppc64le=9.6.20250701-0
```
Currently you can only specify a name for an existing Transit Gateway
or Virtual Private Cloud. This can lead to issues since names are not
guaranteed to be unique, so allow a UUID instead of a name.
Created agent-extract-tui.service for the interactive-disconnected
workflow to extract the agent-tui and nmstate libraries during boot.
The files are extracted from the agent-install-utils image. In the
interactive-disconnected workflow, the image is available on the
local container storage. They need to be extracted before the
agent-interactive-console.service starts.
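A minimal sketch of one way to pull files out of an image that is already present in local container storage; the image reference and file paths are assumptions:
```
# Create (but never run) a container from the locally stored image, then copy
# the needed binaries and libraries out of it.
ctr=$(podman create --pull=never quay.io/example/agent-install-utils:latest /bin/true)
podman cp "${ctr}:/usr/bin/agent-tui" /usr/local/bin/agent-tui
podman cp "${ctr}:/usr/lib64/libnmstate.so" /usr/local/lib/
podman rm "${ctr}"
```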