diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index d7d90dadd9..5f8d6b712e 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -51,8 +51,8 @@ Name: Release notes Dir: release_notes Distros: openshift-enterprise Topics: -- Name: OpenShift Container Platform 4.14 release notes - File: ocp-4-14-release-notes +- Name: OpenShift Container Platform 4.15 release notes + File: ocp-4-15-release-notes --- Name: Getting started Dir: getting_started diff --git a/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc b/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc index 25b141cfaa..bfe207f454 100644 --- a/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc +++ b/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc @@ -9,7 +9,7 @@ toc::[] {product-title} {product-version} introduces architectural changes and enhancements/ The procedures that you used to manage your {product-title} 3 cluster might not apply to {product-title} 4. ifndef::openshift-origin[] -For information on configuring your {product-title} 4 cluster, review the appropriate sections of the {product-title} documentation. For information on new features and other notable technical changes, review the xref:../release_notes/ocp-4-14-release-notes.adoc#ocp-4-14-release-notes[OpenShift Container Platform 4.14 release notes]. +For information on configuring your {product-title} 4 cluster, review the appropriate sections of the {product-title} documentation. For information on new features and other notable technical changes, review the xref:../release_notes/ocp-4-15-release-notes.adoc#ocp-4-15-release-notes[OpenShift Container Platform 4.15 release notes]. endif::[] It is not possible to upgrade your existing {product-title} 3 cluster to {product-title} 4. You must start with a new {product-title} 4 installation. Tools are available to assist in migrating your control plane settings and application workloads. diff --git a/release_notes/ocp-4-14-release-notes.adoc b/release_notes/ocp-4-14-release-notes.adoc deleted file mode 100644 index 4e578a0713..0000000000 --- a/release_notes/ocp-4-14-release-notes.adoc +++ /dev/null @@ -1,3026 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="ocp-4-14-release-notes"] -= {product-title} {product-version} release notes -include::_attributes/common-attributes.adoc[] -:context: release-notes - -toc::[] - -Red Hat {product-title} provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management overhead. {product-title} supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP. - -Built on {op-system-base-full} and Kubernetes, {product-title} provides a more secure and scalable multitenant operating system for today's enterprise-class applications, while delivering integrated application runtimes and libraries. {product-title} enables organizations to meet security, privacy, compliance, and governance requirements. - -[id="ocp-4-14-about-this-release"] -== About this release - -// TODO: Update with the relevant information closer to release. -{product-title} (link:https://access.redhat.com/errata/RHSA-2023:1326[RHSA-2023:1326]) is now available. This release uses link:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md[Kubernetes 1.27] with CRI-O runtime. 
New features, changes, and known issues that pertain to {product-title} {product-version} are included in this topic. - -{product-title} {product-version} clusters are available at https://console.redhat.com/openshift. With the {cluster-manager-first} application for {product-title}, you can deploy {product-title} clusters to either on-premises or cloud environments. - -// Double check OP system versions -{product-title} {product-version} is supported on {op-system-base-full} 8.6, 8.7, and 8.8 as well as on {op-system-first} 4.13. - -You must use {op-system} machines for the control plane, and you can use either {op-system} or {op-system-base} for compute machines. -//Removed the note per https://issues.redhat.com/browse/GRPA-3517 - -//TODO: Add this for 4.14 -Starting with {product-title} 4.12, an additional six months is added to the Extended Update Support (EUS) phase on even numbered releases from 18 months to two years. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]. - -Starting with {product-title} {product-version}, Extended Update Support (EUS) is extended to 64-bit ARM, {ibmpowerProductName} (ppc64le), and {ibmzProductName} (s390x) platforms. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift-eus[OpenShift EUS Overview]. - -//TODO: Add the line below for EUS releases. -{product-title} {product-version} is an Extended Update Support (EUS) release. More information on Red Hat OpenShift EUS is available in link:https://access.redhat.com/support/policy/updates/openshift#ocp4_phases[OpenShift Life Cycle] and link:https://access.redhat.com/support/policy/updates/openshift-eus[OpenShift EUS Overview]. - -//TODO: The line below should be used when it is next appropriate. Revisit in August 2023 time frame. -Maintenance support ends for version 4.12 on 25 January 2025 and goes to extended life phase. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]. - -Commencing with the {product-version} release, Red Hat is simplifying the administration and management of Red Hat shipped cluster Operators with the introduction of three new life cycle classifications; Platform Aligned, Platform Agnostic, and Rolling Stream. These life cycle classifications provide additional ease and transparency for cluster administrators to understand the life cycle policies of each Operator and form cluster maintenance and upgrade plans with predictable support boundaries. For more information, see link:https://access.redhat.com/webassets/avalon/j/includes/session/scribe/?redirectTo=https%3A%2F%2Faccess.redhat.com%2Fsupport%2Fpolicy%2Fupdates%2Fopenshift_operators[OpenShift Operator Life Cycles]. - -// Added in 4.14. Language came directly from Kirsten Newcomer. -{product-title} is designed for FIPS. When running {op-system-base-full} or {op-system-first} booted in FIPS mode, {product-title} core components use the {op-system-base} cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the `x86_64`, `ppc64le`, and `s390x` architectures. - -For more information about the NIST validation program, see link:https://csrc.nist.gov/Projects/cryptographic-module-validation-program/validated-modules[Cryptographic Module Validation Program]. 
For the latest NIST status for the individual versions of {op-system-base} cryptographic libraries that have been submitted for validation, see link:https://access.redhat.com/articles/2918071#fips-140-2-and-fips-140-3-2[Compliance Activities and Government Standards]. - -[id="ocp-4-14-add-on-support-status"] -== {product-title} layered and dependent component support and compatibility - -The scope of support for layered and dependent components of {product-title} changes independently of the {product-title} version. To determine the current support status and compatibility for an add-on, refer to its release notes. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]. - -[id="ocp-4-14-new-features-and-enhancements"] -== New features and enhancements - -This release adds improvements related to the following components and concepts. - -[id="ocp-4-14-rhcos"] -=== {op-system-first} - -[id="ocp-4-14-rhcos-rhel-9-2-packages"] -==== {op-system} now uses {op-system-base} 9.2 - -{op-system} now uses {op-system-base-full} 9.2 packages in {product-title} {product-version}. These packages ensure that your {product-title} instance receives the latest fixes, features, enhancements, hardware support, and driver updates. Excluded from this change, {product-title} 4.12 is an Extended Update Support (EUS) release that will continue to use {op-system-base} 8.6 EUS packages for the entirety of its lifecycle. - -[id="ocp-4-14-rhel-9-considerations"] -===== Considerations for upgrading to {product-title} with {op-system-base} 9.2 - -Because {product-title} {product-version} now uses a {op-system-base} 9.2 based {op-system}, consider the following before upgrading: - -* Some component configuration options and services might have changed between {op-system-base} 8.6 and {op-system-base} 9.2, which means existing machine configuration files might no longer be valid. - -* If you customized the default OpenSSH `/etc/ssh/sshd_config` server configuration file, you must update it according to this link:https://access.redhat.com/solutions/7030537[Red Hat Knowledgebase article]. - -* {op-system-base} 6 base image containers are not supported on {op-system} container hosts but are supported on {op-system-base} 8 worker nodes. For more information, see the link:https://access.redhat.com/support/policy/rhel-container-compatibility[Red Hat Container Compatibility] matrix. - -* Some device drivers have been deprecated, see the link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/considerations_in_adopting_rhel_9/assembly_hardware-enablement_considerations-in-adopting-rhel-9#unmaintained-hardware-support[{op-system-base} documentation] for more information. - -[id="ocp-4-14-installation-and-update"] -=== Installation and update - -[id="ocp-4-14-aws-shared-vpc"] -==== Installing a cluster on Amazon Web Services (AWS) by using a shared VPC - -In {product-title} {product-version}, you can install a cluster on AWS that uses a shared Virtual Private Cloud (VPC), with a private hosted zone in a different account than the cluster. For more information, see xref:../installing/installing_aws/installing-aws-vpc.adoc#installing-aws-vpc[Installing a cluster on AWS into an existing VPC]. - -[id="ocp-4-14-aws-s3-deletion"] -==== Enabling S3 bucket to be retained during cluster bootstrap on AWS - -With this update, you can opt out of the automatic deletion of the S3 bucket during the cluster bootstrap on AWS. 
This option is useful when you have security policies that prevent the deletion of S3 buckets. - -[id="ocp-4-14-nat-gateway-azure"] -==== Installing a cluster on Microsoft Azure using a NAT gateway (Technology Preview) -In {product-title} {product-version}, you can install a cluster that uses a NAT gateway for outbound networking. This is available as a Technology Preview (TP). For more information, see xref:../installing/installing_azure/installation-config-parameters-azure.adoc#installation-configuration-parameters-additional-azure_installation-config-parameters-azure[Additional Azure configuration parameters]. - -[id="ocp-4-14-pd-balanced-disktype-gcp"] -==== Installing a cluster on Google Cloud Platform (GCP) using pd-balanced disk type -In {product-title} {product-version}, you can install a cluster on GCP using the `pd-balanced` disk type. This disk type is only available for compute nodes and cannot be used for control plane nodes. For more information, see xref:../installing/installing_gcp/installation-config-parameters-gcp.adoc#installation-configuration-parameters-additional-gcp_installation-config-parameters-gcp[Additional GCP configuration parameters]. - -[id="ocp-4-14-optional-capabilities"] -==== Optional capabilities in {product-title} {product-version} -In {product-title} {product-version}, you can disable the `Build`, `DeploymentConfig`, `ImageRegistry` and `MachineAPI` capabilities during installation. You can only disable the `MachineAPI` capability if you install a cluster with user-provisioned infrastructure. For more information, see xref:../installing/cluster-capabilities.adoc#cluster-capabilities[Cluster capabilities]. - -[id="ocp-4-14-azure-ad-workload-id"] -==== Installing a cluster with Azure AD Workload Identity - -During installation, you can now configure a Microsoft Azure cluster to use Azure AD Workload Identity. With Azure AD Workload Identity, cluster components use short-term security credentials that are managed outside the cluster. - -For more information about the short-term credentials implementation for {product-title} clusters on Azure, see xref:../authentication/managing_cloud_provider_credentials/cco-short-term-creds.adoc#cco-short-term-creds-azure_cco-short-term-creds[Azure AD Workload Identity]. - -To learn how to configure this credentials management strategy during installation, see xref:../installing/installing_azure/installing-azure-customizations.adoc#installing-azure-with-short-term-creds_installing-azure-customizations[Configuring an Azure cluster to use short-term credentials]. - -[id="ocp-4-14-user-defined-tags-azure"] -==== User-defined tags for Microsoft Azure now generally available - -The user-defined tags feature for Microsoft Azure was previously introduced as Technology Preview in {product-title} 4.13 and is now generally available in {product-title} {product-version}. For more information, see xref:../installing/installing_azure/installing-azure-customizations.adoc#installing-azure-user-defined-tags_installing-azure-customizations[Configuring the user-defined tags for Azure]. - -[id="ocp-4-14-confidential-vms-azure"] -==== Confidential VMs for Azure (Technology Preview) - -You can enable confidential VMs when you install your cluster on Azure. You can use confidential computing to encrypt the virtual machine guest state storage during installation. This feature is in Technology Preview due to known issues which are listed in the Known Issues section of this page. 
For more information, see xref:../installing/installing_azure/installing-azure-customizations.adoc#installation-azure-confidential-vms_installing-azure-customizations[Enabling confidential VMs]. - -[id="ocp-4-14-trusted-launch-azure"] -==== Trusted launch for Azure (Technology Preview) - -You can enable trusted launch features when you install your cluster on Azure as a Technology Preview. These features include secure boot and virtualized Trusted Platform Modules. For more information, see xref:../installing/installing_azure/installing-azure-customizations.adoc#installation-azure-trusted-launch_installing-azure-customizations[Enabling trusted launch for Azure VMs]. - -[id="ocp-4-14-user-defined-tags-gcp"] -==== User-defined labels and tags for Google Cloud Platform (Technology Preview) - -You can now configure user-defined labels and tags in Google Cloud Platform (GCP) for grouping resources and for managing resource access and cost. User-defined labels can be applied only to resources created with the {product-title} installation program and its core components. User-defined tags can be applied only to resources created with the {product-title} Image Registry Operator. For more information, see xref:../installing/installing_gcp/installing-gcp-customizations.adoc#installing-gcp-user-defined-labels-and-tags_installing-gcp-customizations[Managing the user-defined labels and tags for GCP]. - -[id="ocp-4-14-installing-ocp-azure-restricted-networks"] -==== Installing an {product-title} cluster on Microsoft Azure in a restricted network - -In {product-title} {product-version}, you can install a cluster on Microsoft Azure in a restricted network for installer-provisioned infrastructure (IPI) and user-provisioned infrastructure (UPI). For IPI, you can create an internal mirror of the installation release content on an existing Azure Virtual Network (VNet). For UPI, you can install a cluster on Microsoft Azure by using infrastructure that you provide. For more information, see xref:../installing/installing_azure/installing-restricted-networks-azure-installer-provisioned.adoc#installing-restricted-networks-azure-installer-provisioned[Installing a cluster on Azure in a restricted network] and xref:../installing/installing_azure/installing-restricted-networks-azure-user-provisioned.adoc#installing-restricted-networks-azure-user-provisioned[Installing a cluster on Azure in a restricted network with user-provisioned infrastructure]. - -[id="ocp-4-14-bare-metal-installation-disk-id"] -==== Specifying installation disks using a by-path device alias - -You can now specify the installation disk using a by-path device alias, such as `deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:0:0"`, when you install a cluster on bare metal with installer-provisioned infrastructure. You can also specify this parameter during Agent-based installations. This type of disk alias persists across reboots. For more information, see xref:../installing/installing_bare_metal_ipi/ipi-install-installation-workflow.html#configuring-the-install-config-file_ipi-install-installation-workflow[Configuring the install-config.yaml file for bare metal] or xref:../installing/installing_with_agent_based_installer/preparing-to-install-with-agent-based-installer.html#root-device-hints_preparing-to-install-with-agent-based-installer[About root device hints for Agent-based installations]. 
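For orientation, a by-path alias of this kind would typically appear under a host's root device hints in the `install-config.yaml` file for a bare-metal installer-provisioned cluster. The following is a minimal sketch only; the host name is a placeholder and the other required host fields (BMC details, MAC address, and so on) are omitted:

[source,yaml]
----
platform:
  baremetal:
    hosts:
    - name: openshift-worker-0           # placeholder host entry; required BMC and MAC fields omitted
      role: worker
      rootDeviceHints:
        deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:0:0"   # by-path alias persists across reboots
----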
- -[id="ocp-4-14-aws-security-groups"] -==== Applying existing AWS security groups to a cluster - -By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. - -With {product-title} {product-version}, if you deploy a cluster to an existing Amazon Virtual Private Cloud (VPC), you can apply additional existing AWS security groups to control plane and compute machines. These security groups must be associated with the VPC that you are deploying the cluster to. Applying custom security groups can help you meet the security needs of your organization, in such cases where you must control the incoming or outgoing traffic of these machines. For more information, see xref:../installing/installing_aws/installing-aws-vpc.adoc#installation-aws-vpc-security-groups_installing-aws-vpc[Applying existing AWS security groups to the cluster]. - -[id="ocp-4-14-admin-ack-updating"] -==== Required administrator acknowledgment when updating from {product-title} 4.13 to {product-version} - -{product-title} {product-version} uses Kubernetes 1.27, which removed a xref:../release_notes/ocp-4-14-release-notes.adoc#ocp-4-14-removed-kube-1-27-apis[deprecated API]. - -A cluster administrator must provide a manual acknowledgment before the cluster can be updated from {product-title} 4.13 to {product-version}. This is to help prevent issues after updating to {product-title} {product-version}, where APIs that have been removed are still in use by workloads, tools, or other components running on or interacting with the cluster. Administrators must evaluate their cluster for any APIs in use that will be removed and migrate the affected components to use the appropriate new API version. After this is done, the administrator can provide the administrator acknowledgment. - -All {product-title} 4.13 clusters require this administrator acknowledgment before they can be updated to {product-title} {product-version}. - -For more information, see xref:../updating/preparing_for_updates/updating-cluster-prepare.adoc#updating-cluster-prepare[Preparing to update to {product-title} {product-version}]. - -[id="ocp-4-14-nutanix-three-node"] -==== Three-node cluster support for Nutanix -Deploying a three-node cluster is supported on Nutanix as of {product-title} {product-version}. This type of {product-title} cluster is a more resource efficient cluster. It consists of only three control plane machines, which also act as compute machines. For more information, see xref:../installing/installing_nutanix/installing-nutanix-three-node.adoc#installing-nutanix-three-node[Installing a three-node cluster on Nutanix]. - -[id="ocp-4-14-installation-gcp-confidential-vms"] -==== Installing a cluster on GCP using Confidential VMs is generally available -In {product-title} {product-version}, using Confidential VMs when installing your cluster is generally available. Confidential VMs are currently not supported on 64-bit ARM architectures. For more information, see xref:../installing/installing_gcp/installing-gcp-customizations.adoc#installation-gcp-enabling-confidential-vms_installing-gcp-customizations[Enabling Confidential VMs]. - -[id="ocp-4-14-rootvolume-types-openstack-available"] -==== Root volume types parameter for {rh-openstack} is now available -You can now specify one or more root volume types in {rh-openstack}, by using the `rootVolume.types` parameter. 
This parameter is available for both control plane and compute machines. - -[id="ocp-4-14-static-ip-addresses-vsphere-nodes"] -==== Static IP addresses for vSphere nodes -You can provision bootstrap, control plane, and compute nodes with static IP addresses in environments where Dynamic Host Configuration Protocol (DHCP) does not exist. - -:FeatureName: Static IP addresses for vSphere nodes -include::snippets/technology-preview.adoc[] - -After you have deployed your cluster to run nodes with static IP addresses, you can scale a machine to use one of these static IP addresses. Additionally, you can use a machine set to configure a machine to use one of the configured static IP addresses. - -For more information, see the "Static IP addresses for vSphere nodes" section in the xref:../installing/installing_vsphere/installing-vsphere-installer-provisioned.adoc[Installing a cluster on vSphere] document. - -[id="ocp-4-14-bmo-validations"] -==== Additional validation for the Bare Metal Host CR - -The Bare Metal Host Custom Resource (CR) now contains the `ValidatingWebhooks` parameter. With this parameter, the Bare Metal Operator now catches any configuration errors before accepting the CR, and returns a message with the configuration errors to the user. - -[id="ocp-4-14-quickly-install-cluster-aws-local-zones"] -==== Install a cluster quickly in AWS Local Zones -For {product-title} {product-version}, you can quickly install a cluster on Amazon Web Services (AWS) to extend compute nodes to Local Zone locations. After you add zone names to the installation configuration file, the installation program fully automates the creation of required resources, network and compute, on each Local Zone. For more information, see xref:../installing/installing_aws/installing-aws-localzone.adoc#installation-cluster-quickly-extend-workers_installing-aws-localzone[Intall a cluster quickly in AWS Local Zones]. - -[id="ocp-4-14-install-update-cco-manual-enhancements"] -==== Simplified installation and update experience for clusters with manually maintained cloud credentials - -This release includes changes that improve the experience of installing and updating clusters that use the Cloud Credential Operator (CCO) in manual mode for cloud provider authentication. The following parameters for the `oc adm release extract` command simplify the manual configuration of cloud credentials: - -`--included`:: Use this parameter to extract only the manifests that your specific cluster configuration needs. -+ -If you use cluster capabilities to disable one or more optional components, you are no longer required to delete the `CredentialsRequest` CRs for any disabled components before installing or updating a cluster. -+ -In a future release, this parameter might make the CCO utility (`ccoctl`) `--enable-tech-preview` parameter unnecessary. - -`--install-config`:: Use this parameter to specify the location of the `install-config.yaml` file when installing a cluster. -+ -By referencing the `install-config.yaml` file, the extract command can determine aspects of the cluster configuration for the cluster that you are about to create. This parameter is not needed during a cluster update because `oc` can connect to the cluster to determine its configuration. -+ -With this change, you are no longer required to specify the cloud platform you are installing on with the `--cloud` parameter. As a result, the `--cloud` parameter is deprecated starting in {product-title} {product-version}. 
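As a rough sketch of how these flags fit together when extracting `CredentialsRequest` manifests before an installation (the release image pull spec and output directory are placeholders, and the exact set of flags you need depends on your configuration):

[source,terminal]
----
$ oc adm release extract \
    --credentials-requests \
    --included \
    --install-config=install-config.yaml \
    --to=./credreqs \
    quay.io/openshift-release-dev/ocp-release:4.14.0-x86_64
----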
- -To understand how to use these parameters, see the installation procedure for your configuration and the procedures in xref:../updating/preparing_for_updates/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials]. - -[id="ocp-4-14-vsphere-pre-existing-template"] -==== Quickly install {op-system} on vSphere hosts by using a pre-existing {op-system} image template -{product-title} {product-version} includes a new VMware vSphere configuration parameter for use on installer-provisioned infrastructure: `template`. By using this parameter, you can now specify the absolute path to a pre-existing {op-system-first} image template or virtual machine in the installation configuration file. The installation program can then use the image template or virtual machine to quickly install {op-system} on vSphere hosts. - -This installation method is an alternative to uploading an {op-system} image on vSphere hosts. - -[IMPORTANT] -==== -Before you set a path value for the `template` parameter, ensure that the default {op-system} boot image in the {product-title} release matches the {op-system} image template or virtual machine version; otherwise, cluster installation might fail. -==== - -[id="ocp-4-14-OCP-on-ARM"] -==== {product-title} on 64-bit ARM - -{product-title} {product-version} is now supported on 64-bit ARM architecture-based Google Cloud Platform installer-provisioned and user-provisioned infrastructures. You can also now use the `oc mirror` CLI plug-in in disconnected environments on 64-bit ARM clusters. For more information about instance availability and installation documentation, see xref:../installing/installing-preparing.adoc#supported-installation-methods-for-different-platforms[Supported installation methods for different platforms]. - -[id="ocp-4-14-azure-custom-rhcos"] -==== Using a custom {op-system} image for a Microsoft Azure cluster - -By default, the installation program downloads and installs the {op-system-first} image that is used to boot control plane and compute machines. With this enhancement, you can now override the default behavior by modifying the installation configuration file (`install-config.yaml`) to specify a custom {op-system} image. Before you deploy the cluster, you can modify the following installation parameters: - -* `compute.platform.azure.osImage.publisher` -* `compute.platform.azure.osImage.offer` -* `compute.platform.azure.osImage.sku` -* `compute.platform.azure.osImage.version` -* `controlPlane.platform.azure.osImage.publisher` -* `controlPlane.platform.azure.osImage.offer` -* `controlPlane.platform.azure.osImage.sku` -* `controlPlane.platform.azure.osImage.version` -* `platform.azure.defaultMachinePlatform.osImage.publisher` -* `platform.azure.defaultMachinePlatform.osImage.offer` -* `platform.azure.defaultMachinePlatform.osImage.sku` -* `platform.azure.defaultMachinePlatform.osImage.version` - -For more information about these parameters, see xref:../installing/installing_azure/installation-config-parameters-azure.adoc#installation-configuration-parameters-additional-azure_installation-config-parameters-azure[Additional Azure configuration parameters]. - -[id="ocp-4-14-install-sno-on-cloud-providers"] -==== Installing {sno} on cloud providers - -{product-title} {product-version} expands support for installing {sno} on cloud providers. Installation options for {sno} include Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.
For more information about the supported platforms, see xref:../installing/installing_sno/install-sno-installing-sno.adoc#supported-cloud-providers-for-single-node-openshift_install-sno-installing-sno-with-the-assisted-installer[Supported cloud providers for single-node OpenShift]. - -[id="ocp-4-14-post-installation"] -=== Post-installation configuration - -[id="ocp-4-14-OCP-on-multi-arch-clusters"] -==== {product-title} cluster with multi-architecture compute machines -{product-title} {product-version} clusters with multi-architecture compute machines are now supported on Google Cloud Platform (GCP) as a Day 2 operation. {product-title} clusters with multi-architecture compute machines on bare metal installations are now generally available. For more information on clusters with multi-architecture compute machines and supported platforms, see xref:../post_installation_configuration/configuring-multi-arch-compute-machines/multi-architecture-configuration.adoc#multi-architecture-configuration[About clusters with multi-architecture compute machines]. - -[id="ocp-4-14-web-console"] -=== Web console - -[id="ocp-4-14-administrator-perspective"] -==== Administrator Perspective - -With this release, there are several updates to the *Administrator* perspective of the web console. You can now perform the following actions: - -* Narrow down the list of resources in a list view or search page with exact search capabilities. This action is useful when you have similarly named resources and the standard search functionality does not narrow down your search. -* Provide direct feedback about features and report a bug by clicking the *Help* button on the toolbar and clicking *Share Feedback* from the drop-down list. -* Display and hide tooltips in the YAML editor. Because the tooltips persist, you do not need to change a tooltip every time you navigate to a page. -* Configure the web terminal image for all users. For more information, see xref:../web_console/web_terminal/configuring-web-terminal.adoc#configuring-web-terminal[Configuring the web terminal]. - -[id="web-console-dynamic-plugin-enhancements"] -===== Dynamic plugin enhancements - -With this update, you can add custom metric dashboards and extend the cluster's *Overview* page with the `QueryBrowser` extension. This {product-title} release adds additional extension points, so you can add different types of modals, set the active namespace, provide custom error pages, and set proxy timeouts for your dynamic plugin. - -For more information, see xref:../web_console/dynamic-plugin/dynamic-plugins-reference.adoc#dynamic-plugin-reference[Dynamic plugin reference] and `QueryBrowser` in the xref:../web_console/dynamic-plugin/dynamic-plugins-reference.html#dynamic-plugin-api_dynamic-plugins-reference[{product-title} console API]. - -[id="supported-os-types-cluster"] -===== Operating system-based filtering in OperatorHub - -With this update, Operators in OperatorHub are now filtered based on the operating systems of the nodes, because clusters can contain heterogeneous nodes. - -[id="console-supports-installing-specific-operator-versions"] -===== Support for installing specific Operator versions in the web console - -With this update, you can now choose from a list of available versions for an Operator based on the selected channel on the *OperatorHub* page in the console. Additionally, you can view the metadata for that channel and version when available.
When selecting an older version, a manual approval update strategy is required, otherwise the Operator immediately updates back to the latest version on the channel. - -For more information, see xref:../operators/admin/olm-adding-operators-to-cluster.adoc#olm-installing-specific-version-web-console_olm-adding-operators-to-a-cluster[Installing a specific version of an Operator in the web console]. - -[id="console-supports-aws-sts-detection"] -===== OperatorHub support for AWS STS - -With this release, OperatorHub detects when an Amazon Web Services (AWS) cluster is using the Security Token Service (STS). When detected, a "Cluster in STS Mode" notification displays with additional instructions before installing an Operator to ensure it runs correctly. The *Operator Installation* page is also modified to add the required *role ARN* field. For more information, see xref:../operators/operator_sdk/osdk-token-auth.adoc#osdk-token-auth[Token authentication for Operators on cloud providers]. - -[id="ocp-4-14-developer-perspective"] -==== Developer Perspective - -With this release, there are several updates to the *Developer* perspective of the web console. You can now perform the following actions: - -* Change the default timeout period for the web terminal for your current session. For more information, see xref:../web_console/web_terminal/configuring-web-terminal.adoc#odc-configure-web-terminal-timeout-session_configuring-web-terminal[Configuring the web terminal timeout for a session]. -* Test {serverlessproductshortname} functions in the web console from the *Topology* view and the Serverless Service *List* and *Detail* pages, so that you can use a {serverlessproductshortname} function with a CloudEvent or HTTP request. -* View status, start time, and duration of the latest build for `BuildConfigs` and Shipwright builds. You can also view this information on the *Details* page. - -[id="developer-console-quick-starts"] -===== New quick starts - -With this release, new quick starts exist where you can discover developer tools, such as installing the Cryostat Operator and getting started with JBoss EAP by using a helm chart. - -[id="developer-perspective-pipeline-page-improvements"] -===== {pipelines-shortname} page improvements - -In {product-title} {product-version}, you can see the following navigation improvements on the *Pipelines* page: - -* Autodetection of Pipelines as Code (PAC) in Git import flow. -* {serverlessproductshortname} functions in the samples catalog. - -[id="ocp-4-14-openshift-cli"] -=== OpenShift CLI (oc) - -[id="oc-mirror-multi-arch-oci-local-images"] -==== Supporting multi-arch OCI local images for catalogs with oc-mirror - -With {product-title} {product-version}, oc-mirror supports multi-arch OCI local images for catalogs. - -OCI layouts consist of an `index.json` file that identifies the images held within them on disk. This `index.json` file can reference any number of single or multi-arch images. However, oc-mirror only references a single image at a time in a given OCI layout. The image stored in the OCI layout can be a single-arch image, that is, an image manifest or a multi-arch image, that is, a manifest list. - -The `ImageSetConfiguration` stores the OCI images. After processing the catalog, the catalog content adds new layers representing the content of all images in the layout. The ImageBuilder is modified to handle image updates for both single-arch and multi-arch images. 
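As an illustration, an `ImageSetConfiguration` can point at such an OCI layout on disk with an `oci://` catalog reference; the local path and package name below are placeholders, not a complete mirroring configuration:

[source,yaml]
----
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
mirror:
  operators:
  - catalog: oci:///home/user/catalogs/redhat-operator-index   # local OCI layout; its index.json can reference single- or multi-arch images
    packages:
    - name: example-operator                                   # placeholder package name
----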
- -[id="oc-logging-in-browser"] -==== Logging in to the CLI using a web browser - -With {product-title} {product-version}, a new `oc` command-line interface (CLI) flag, `--web`, is now available for the `oc login` command. - -With this enhancement, you can log in by using a web browser, so that you do not need to insert your access token into the command line. - -For more information, see xref:../cli_reference/openshift_cli/getting-started-cli.adoc#cli-logging-in-web_cli-developer-commands[Logging in to the OpenShift CLI using a web browser]. - -[id="oc-new-build-enhancement"] -==== Enhancement to oc new-build - -A new `oc` CLI flag, `--import-mode`, has been added to the `oc new-build` command. With this enhancement, you can set the `--import-mode` flag to `Legacy` or `PreserveOriginal`, so that you can trigger builds by using a single sub-manifest or all manifests. - -[id="oc-new-app-enhancement"] -==== Enhancement to oc new-app - -A new `oc` CLI flag, `--import-mode`, has been added to the `oc new-app` command. With this enhancement, you can set the `--import-mode` flag to `Legacy` or `PreserveOriginal`, and then create new applications by using a single sub-manifest or all manifests. - -For more information, see xref:../applications/creating_applications/creating-applications-using-cli.adoc#setting-the-import-mode[Setting the import mode]. - -[id="ocp-4-14-ibm-z"] -=== {ibmzProductName} and {linuxoneProductName} - -With this release, {ibmzRegProductName} and {linuxoneProductName} are now compatible with {product-title} {product-version}. The installation can be performed with z/VM or {op-system-base-full} Kernel-based Virtual Machine (KVM). For installation instructions, see the following documentation: - -* xref:../installing/installing_ibm_z/installing-ibm-z.adoc#installing-ibm-z[Installing a cluster with z/VM on {ibmzRegProductName} and {linuxoneProductName}] -* xref:../installing/installing_ibm_z/installing-restricted-networks-ibm-z.adoc#installing-restricted-networks-ibm-z[Installing a cluster with z/VM on {ibmzRegProductName} and {linuxoneProductName} in a restricted network] -* xref:../installing/installing_ibm_z/installing-ibm-z-kvm.adoc#installing-ibm-z-kvm[Installing a cluster with {op-system-base} KVM on {ibmzRegProductName} and {linuxoneProductName}] -* xref:../installing/installing_ibm_z/installing-restricted-networks-ibm-z-kvm.adoc#installing-restricted-networks-ibm-z-kvm[Installing a cluster with {op-system-base} KVM on {ibmzRegProductName} and {linuxoneProductName} in a restricted network] - -[IMPORTANT] -==== -Compute nodes must run {op-system-first}. -==== - -[discrete] -==== {ibmzRegProductName} and {linuxoneProductName} notable enhancements - -Starting in {product-title} {product-version}, Extended Update Support (EUS) is extended to the {ibmzRegProductName} platform. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift-eus[OpenShift EUS Overview]. - -The {ibmzRegProductName} and {linuxoneProductName} release on {product-title} {product-version} adds improvements and new capabilities to {product-title} components and concepts.
- -This release introduces support for the following features on {ibmzRegProductName} and {linuxoneProductName}: - -* Assisted Installer with z/VM -* Installing on a single node -//* Hosted control planes (Technology Preview) -* Multi-architecture compute nodes -* oc-mirror plugin - -[discrete] -==== IBM Secure Execution - -{product-title} now supports configuring {op-system-first} nodes for IBM Secure Execution on {ibmzRegProductName} and {linuxoneProductName} (s390x architecture). - -For installation instructions, see the following documentation: - -* xref:../installing/installing_ibm_z/installing-ibm-z-kvm.html#installing-rhcos-using-ibm-secure-execution_installing-ibm-z-kvm[Installing {op-system} using IBM Secure Execution] - -[id="ocp-4-14-ibm-power"] -=== {ibmpowerRegProductName} - -{ibmpowerRegProductName} is now compatible with {product-title} {product-version}. For installation instructions, see the following documentation: - -* xref:../installing/installing_ibm_power/installing-ibm-power.adoc#installing-ibm-power_installing-ibm-power[Installing a cluster on {ibmpowerProductName}] -* xref:../installing/installing_ibm_power/installing-restricted-networks-ibm-power.adoc#installing-restricted-networks-ibm-power_installing-restricted-networks-ibm-power[Installing a cluster on {ibmpowerProductName} in a restricted network] - -[IMPORTANT] -==== -Compute nodes must run {op-system-first}. -==== - -[discrete] -==== {ibmpowerRegProductName} notable enhancements - -Starting in {product-title} {product-version}, Extended Update Support (EUS) is extended to the {ibmpowerRegProductName} platform. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift-eus[OpenShift EUS Overview]. - -The {ibmpowerRegProductName} release on {product-title} {product-version} adds improvements and new capabilities to {product-title} components. 
- -This release introduces support for the following features on {ibmpowerProductName}: - -* {ibmpowerProductName} Virtual Server Block CSI Driver Operator (Technology Preview) -* Installing on a single node -//* Hosted control planes (Technology Preview) -* Multi-architecture compute nodes -* oc-mirror plugin - -[discrete] -=== {ibmpowerRegProductName}, {ibmzRegProductName}, and {linuxoneProductName} support matrix - -.{product-title} features -[cols="3,1,1",options="header"] -|==== -|Feature |{ibmpowerRegProductName} |{ibmzRegProductName} and {linuxoneProductName} - -|Alternate authentication providers -|Supported -|Supported - -|Automatic Device Discovery with Local Storage Operator -|Unsupported -|Supported - -|Automatic repair of damaged machines with machine health checking -|Unsupported -|Unsupported - -|Cloud controller manager for IBM Cloud -|Supported -|Unsupported - -|Controlling overcommit and managing container density on nodes -|Unsupported -|Unsupported - -|Cron jobs -|Supported -|Supported - -|Descheduler -|Supported -|Supported - -|Egress IP -|Supported -|Supported - -|Encrypting data stored in etcd -|Supported -|Supported - -|FIPS cryptography -|Supported -|Supported - -|Helm -|Supported -|Supported - -|Horizontal pod autoscaling -|Supported -|Supported - -|IBM Secure Execution -|Unsupported -|Supported - -|{ibmpowerProductName} Virtual Server Block CSI Driver Operator (Technology Preview) -|Supported -|Unsupported - -|Installer-provisioned Infrastructure Enablement for {ibmpowerProductName} Virtual Server (Technology Preview) -|Supported -|Unsupported - -|Installing on a single node -|Supported -|Supported - -|IPv6 -|Supported -|Supported - -|Monitoring for user-defined projects -|Supported -|Supported - -|Multi-architecture compute nodes -|Supported -|Supported - -|Multipathing -|Supported -|Supported - -|Network-Bound Disk Encryption - External Tang Server -|Supported -|Supported - -|Non--volatile memory express drives (NVMe) -|Supported -|Unsupported - -|oc-mirror plugin -|Supported -|Supported - -|OpenShift CLI (`oc`) plugins -|Supported -|Supported - -|Operator API -|Supported -|Supported - -|OpenShift Virtualization -|Unsupported -|Unsupported - -|OVN-Kubernetes, including IPsec encryption -|Supported -|Supported - -|PodDisruptionBudget -|Supported -|Supported - -|Precision Time Protocol (PTP) hardware -|Unsupported -|Unsupported - -|{openshift-local-productname} -|Unsupported -|Unsupported - -|Scheduler profiles -|Supported -|Supported - -|Stream Control Transmission Protocol (SCTP) -|Supported -|Supported - -|Support for multiple network interfaces -|Supported -|Supported - -|Three-node cluster support -|Supported -|Supported - -|Topology Manager -|Supported -|Unsupported - -|z/VM Emulated FBA devices on SCSI disks -|Unsupported -|Supported - -|4K FCP block device -|Supported -|Supported -|==== - -.Persistent storage options -[cols="2,1,1",options="header"] -|==== -|Feature |{ibmpowerProductName} |{ibmzProductName} and {linuxoneProductName} -|Persistent storage using iSCSI -|Supported ^[1]^ -|Supported ^[1]^,^[2]^ - -|Persistent storage using local volumes (LSO) -|Supported ^[1]^ -|Supported ^[1]^,^[2]^ - -|Persistent storage using hostPath -|Supported ^[1]^ -|Supported ^[1]^,^[2]^ - -|Persistent storage using Fibre Channel -|Supported ^[1]^ -|Supported ^[1]^,^[2]^ - -|Persistent storage using Raw Block -|Supported ^[1]^ -|Supported ^[1]^,^[2]^ - -|Persistent storage using EDEV/FBA -|Supported ^[1]^ -|Supported ^[1]^,^[2]^ -|==== -[.small] --- -1. 
Persistent shared storage must be provisioned by using either {rh-storage-first} or other supported storage protocols. -2. Persistent non-shared storage must be provisioned by using local storage, such as iSCSI, FC, or by using LSO with DASD, FCP, or EDEV/FBA. --- - -.Operators -[cols="2,1,1",options="header"] -|==== -|Feature |{ibmpowerRegProductName} |{ibmzRegProductName} and {linuxoneProductName} - -|Cluster Logging Operator -|Supported -|Supported - -|Cluster Resource Override Operator -|Supported -|Supported - -|Compliance Operator -|Supported -|Supported - -|File Integrity Operator -|Supported -|Supported - -//|HyperShift Operator -//|Technology Preview -//|Technology Preview - -|Local Storage Operator -|Supported -|Supported - -|MetalLB Operator -|Supported -|Supported - -|NFD Operator -|Supported -|Supported - -|NMState Operator -|Supported -|Supported - -|OpenShift Elasticsearch Operator -|Supported -|Supported - -|Service Binding Operator -|Supported -|Supported - -|Vertical Pod Autoscaler Operator -|Supported -|Supported -|==== - -.Multus CNI plugins -[cols="2,1,1",options="header"] -|==== -|Feature |{ibmpowerRegProductName} |{ibmzRegProductName} and {linuxoneProductName} - -|Bridge -|Supported -|Supported - -|Host-device -|Supported -|Supported - -|IPAM -|Supported -|Supported - -|IPVLAN -|Supported -|Supported -|==== - -.CSI Volumes -[cols="2,1,1",options="header"] -|==== -|Feature |{ibmpowerRegProductName} |{ibmzRegProductName} and {linuxoneProductName} - -|Cloning -|Supported -|Supported - -|Expansion -|Supported -|Supported - -|Snapshot -|Supported -|Supported -|==== - -[id="ocp-4-14-auth"] -=== Authentication and authorization - -[id="ocp-4-14-auth-required-scc"] -==== SCC preemption prevention - -With this release, you can now require your workloads to use a specific security context constraint (SCC). By setting a specific SCC, you can prevent the SCC that you want from being preempted by another SCC in the cluster. For more information, see xref:../authentication/managing-security-context-constraints.adoc#security-context-constraints-requiring_configuring-internal-oauth[Configuring a workload to require a specific SCC]. - -[id="ocp-4-14-auth-psa-privileged-namespaces"] -==== Pod security admission privileged namespaces - -With this release, the following system namespaces are always set to the `privileged` pod security admission profile: - -* `default` -* `kube-public` -* `kube-system` - -For more information, see xref:../authentication/understanding-and-managing-pod-security-admission.adoc#psa-privileged-namespaces_understanding-and-managing-pod-security-admission[Privileged namespaces]. - -[id="ocp-4-14-auth-psa-disable-sync-modified"] -==== Pod security admission synchronization disabled on modified namespaces - -With this release, if a user manually modifies a pod security admission label from the automatically labeled value on a label-synchronized namespace, synchronization is disabled for that label. Users can enable synchronization again, if necessary. For more information, see xref:../authentication/understanding-and-managing-pod-security-admission.adoc#security-context-constraints-psa-sync-exclusions_understanding-and-managing-pod-security-admission[Pod security admission synchronization namespace exclusions]. 
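For example, one way to turn synchronization back on for a namespace is to set the sync label explicitly. This is a sketch that assumes the `security.openshift.io/scc.podSecurityLabelSync` label described in the pod security admission documentation; the namespace name is a placeholder:

[source,terminal]
----
$ oc label namespace my-app security.openshift.io/scc.podSecurityLabelSync=true --overwrite
----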
- -[id="ocp-4-14-auth-cco-sts"] -==== OLM-based Operator support for AWS STS - -With this release, some Operators managed by Operator Lifecycle Manager (OLM) on Amazon Web Services (AWS) clusters can use the Cloud Credential Operator (CCO) in manual mode with the Security Token Service (STS). These Operators authenticate with limited-privilege, short-term credentials that are managed outside the cluster. For more information, see xref:../operators/operator_sdk/osdk-token-auth.adoc#osdk-token-auth[Token authentication for Operators on cloud providers]. - -[id="ocp-4-14-auth-op-noproxy"] -==== Authentication Operator honors `noProxy` during connection checks - -With this release, if the `noProxy` field is set and the route is reachable without the cluster-wide proxy, the Authentication Operator will bypass the proxy and perform connection checks directly through the configured ingress route. Previously, the Authentication Operator always performed connection checks through the cluster-wide proxy, regardless of the `noProxy` setting. For more information, see xref:../networking/enable-cluster-wide-proxy.adoc#configure-the-cluster-wide-proxy[Configuring the cluster-wide proxy]. - -[id="ocp-4-14-networking"] -=== Networking - -[id="ocp-4-14-multiple-external-gateway-support-ovn-kubernetes-network-plugin"] -==== Multiple external gateway support for the OVN-Kubernetes network plugin - -The OVN-Kubernetes network plugin supports defining additional default gateways for specific workloads. Both IPv4 and IPv6 address families are supported. You define each default gateway by using the `AdminPolicyBasedExternalRoute` object, in which you can specify two types of next hops, static and dynamic: - -- Static next hop: One or more IP addresses of external gateways -- Dynamic next hop: A combination of pod and namespace selectors for pod selection, and a network attachment definition name previously associated with the selected pods. - -The next hops that you define are scoped by a namespace selector that you specify. You can then use the external gateway for specific workloads that match the namespace selector. - -For more information, refer to xref:../networking/ovn_kubernetes_network_provider/configuring-secondary-external-gateway.adoc#configuring-secondary-external-gateway[Configure an external gateway through a secondary network interface]. - -[id="ocp-4-14-ingress-node-firewall-operator-ga"] -==== Ingress Node Firewall Operator is generally available -Ingress Node Firewall Operator was designated a Technology Preview feature in {product-title} 4.12. With this release, Ingress Node Firewall Operator is generally available. You can now configure firewall rules at the node level. For more information, see xref:../networking/networking-operators-overview.adoc#ingress-node-firewall-operator-1_networking-operators-overview[Ingress Node Firewall Operator]. - -[id="ocp-4-14-networking-kernal-network-pinning"] -==== Dynamic use of non-reserved CPUs for OVS - -With this release, the Open vSwitch (OVS) networking stack can dynamically use non-reserved CPUs. -This dynamic use of non-reserved CPUs occurs by default in nodes in a machine config pool that has a performance profile applied to it. -The dynamic use of available, non-reserved CPUs maximizes compute resources for OVS and minimizes network latency for workloads during periods of high demand. -OVS remains unable to dynamically use isolated CPUs assigned to containers in `Guaranteed` QoS pods. 
This separation avoids disruption to critical application workloads. - -[NOTE] -==== -When the Node Tuning Operator recognizes the performance conditions to activate the use of non-reserved CPUs, there is a several second delay while OVN-Kubernetes configures the CPU affinity alignment of OVS daemons running on the CPUs. During this window, if a `Guaranteed` QoS pod starts, it can experience a latency spike. -==== - -[id="ocp-4-14-dual-stack-configuration"] -==== Dual-stack configuration for multiple IP addresses - -In previous releases of the Whereabouts IPAM CNI plugin, only one IP address could be assigned per network interface. - -Now, Whereabouts supports the assignment of an arbitrary number of IP addresses to support dual-stack IPv4/IPv6 functionality. See xref:../networking/hardware_networks/configuring-sriov-net-attach.adoc#nw-multus-configure-dualstack-ip-address_configuring-sriov-net-attach[Creating a configuration for assignment of dual-stack IP addresses dynamically]. - -[id="ocp-4-14-networking-sriov-exclude-topology"] -==== Exclude SR-IOV network topology for NUMA-aware scheduling - -With this release, you can exclude advertising the Non-Uniform Memory Access (NUMA) node for the SR-IOV network to the Topology Manager. By not advertising the NUMA node for the SR-IOV network, you can permit more flexible SR-IOV network deployments during NUMA-aware pod scheduling. - -For example, in some scenarios, it is a priority to maximize CPU and memory resources for a pod on a single NUMA node. By not providing a hint to the Topology Manager about the NUMA node for the pod’s SR-IOV network resource, the Topology Manager can deploy the SR-IOV network resource and the pod CPU and memory resources to different NUMA nodes. In earlier {product-title} releases, the Topology Manager attempted to place all resources on the same NUMA node only. - -For more information about this more flexible SR-IOV network deployment during NUMA-aware pod scheduling, see xref:../networking/hardware_networks/configuring-sriov-device.adoc#nw-sriov-exclude-topology-manager_configuring-sriov-device[Exclude the SR-IOV network topology for NUMA-aware scheduling]. - -[id="ocp-4-14-networking-haproxy-update"] -==== Update to HAProxy 2.6 -With this release, {product-title} is updated to HAProxy 2.6. - -[id="ocp-4-14-max-log-length-sidecar"] -==== Support for configuring the maximum length with sidecar logging in the Ingress Controller -Previously, the maximum length of the syslog message in the Ingress Controller was 1024 bytes. Now, the maximum value can be increased. -For more information, see xref:../networking/ingress-operator.adoc#nw-configure-ingress-access-logging_configuring-ingress[Allow the Ingress Controller to modify the HAProxy log length when using a sidecar]. - -[id="ocp-4-14-nmstate-ui-console-update"] -==== NMstate Operator updated in console - -With this release, you can access the NMstate Operator and resources such as the `NodeNetworkState` (NNS), `NodeNetworkConfigurationPolicy` (NNCP), and `NodeNetworkConfigurationEnhancement` (NNCE) from the web console. In the *Administrator* perspective of the console from the *Networking* page you can access NNCP, NNCE from the *NodeNetworkConfigurationPolicy* page, and NNS on the *NodeNetworkState* page. For more information about NMState resources and how to update them in the console, see xref:../networking/k8s_nmstate/k8s-nmstate-updating-node-network-config.adoc#k8s-nmstate-updating-node-network-config[Updating node network configuration]. 
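For orientation, the policies that these console pages surface are ordinary `NodeNetworkConfigurationPolicy` resources. A minimal sketch follows; the policy, bridge, and port names are placeholders:

[source,yaml]
----
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy                  # placeholder policy name
spec:
  desiredState:
    interfaces:
    - name: br1                          # placeholder bridge name
      type: linux-bridge
      state: up
      bridge:
        port:
        - name: eth1                     # placeholder port name
----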
- -[id="ocp-4-14-networking-ovn-kubernetes-ipsec-ibm-cloud"] -==== OVN-Kubernetes network plugin support for IPsec on IBM Cloud - -IPsec is now supported on the IBM Cloud platform for clusters that use the OVN-Kubernetes network plugin, which is the default in {product-title} {product-version}. For more information, see xref:../networking/ovn_kubernetes_network_provider/configuring-ipsec-ovn.adoc#configuring-ipsec-ovn[Configuring IPsec encryption]. - -[id="ocp-4-14-networking-ovn-kubernetes-ipsec-support-for-external-traffic"] -==== OVN-Kubernetes network plugin support for IPsec encryption of external traffic (Technology Preview) - -{product-title} now supports encryption of external traffic, also known as _north-south traffic_. IPsec already supports encryption of network traffic between pods, known as _east-west traffic_. You can use both features in conjunction to provide full in-transit encryption for {product-title} clusters. This is available as a Technology Preview feature. - -To use this feature, you need to define an IPsec configuration tuned for your network infrastructure. For more information, refer to xref:../networking/ovn_kubernetes_network_provider/configuring-ipsec-ovn.adoc#nw-ovn-ipsec-north-south-enable_configuring-ipsec-ovn[Enabling IPsec encryption for external IPsec endpoints]. - -[id="ocp-4-14-single-stack-support-nmstate"] -==== Single-stack IPv6 support for Kubernetes NMstate -With this release, you can use Kubernetes NMState Operator in single-stack IPv6 clusters. - -[id="ocp-4-14-networking-egress-service"] -==== Egress service resource to manage egress traffic for pods behind a load balancer (Technology Preview) - -With this update, you can use an `EgressService` custom resource (CR) to manage egress traffic for pods behind a load balancer service. This is available as a Technology Preview feature. - -You can use the `EgressService` CR to manage egress traffic in the following ways: - -* Assign the load balancer service's IP address as the source IP address of egress traffic for pods behind the load balancer service. - -* Assign the egress traffic for pods behind a load balancer to a different network than the default node network. - -For more information, see xref:../networking/ovn_kubernetes_network_provider/configuring-egress-traffic-for-vrf-loadbalancer-services.adoc#configuring-egress-traffic-loadbalancer-services[Configuring an egress service]. - -[id="ocp-4-14-networking-metallb-vrf"] -==== Support for VRF specification in MetalLB's BGPPeer resource (Technology Preview) - -With this update, you can specify a Virtual Routing and Forwarding (VRF) instance in a `BGPPeer` custom resource. MetalLB can advertise services through the interfaces belonging to the VRF. This is available as a Technology Preview feature. For more information, see xref:../networking/metallb/metallb-configure-bgp-peers.adoc#nw-metallb-bgp-peer-vrf_configure-metallb-bgp-peers[Exposing a service through a network VRF]. - -[id="ocp-4-14-networking-nmstate-vrf"] -==== Support for VRF specification in NMState's NodeNetworkConfigurationPolicy resource (Technology Preview) - -With this update, you can associate a Virtual Routing and Forwarding (VRF) instance with a network interface by using a `NodeNetworkConfigurationPolicy` custom resource. By associating a VRF instance with a network interface, you can support traffic isolation, independent routing decisions, and the logical separation of network resources. This feature is available as a Technology Preview feature. 
For more information, see xref:../networking/k8s_nmstate/k8s-nmstate-updating-node-network-config.adoc#virt-example-host-vrf_k8s_nmstate-updating-node-network-config[Example: Network interface with a VRF instance node network configuration policy].

[id="ocp-414-broadcom-bcm57504-support"]
==== Support for Broadcom BCM57504 is now generally available

Support for the Broadcom BCM57504 network interface controller is now available for the SR-IOV Network Operator. For more information, see xref:../networking/hardware_networks/about-sriov.adoc#supported-devices_about-sriov[Supported devices].

[id="ocp-4-14-networking-ovn-kubernetes-secondary-network"]
==== OVN-Kubernetes is available as a secondary network

With this release, the {openshift-networking} OVN-Kubernetes network plugin allows the configuration of secondary network interfaces for pods. As a secondary network, OVN-Kubernetes supports both layer 2 switched and localnet switched topology networks. For more information about OVN-Kubernetes as a secondary network, see xref:../networking/multiple_networks/configuring-additional-network.adoc#configuration-ovnk-additional-networks_configuring-additional-network[Configuration for an OVN-Kubernetes additional network].

[id="ocp-4-14-admin-network-policy"]
==== Admin Network Policy (Technology Preview)

Admin Network Policy is available as a Technology Preview feature. You can enable `AdminNetworkPolicy` and `BaselineAdminNetworkPolicy` resources, which are part of the Network Policy V2 API, in clusters running the OVN-Kubernetes CNI plugin. Cluster administrators can apply cluster-scoped policies and safeguards for an entire cluster before namespaces are created. Network administrators can secure clusters by enforcing network traffic controls that cannot be overridden by users. Network administrators can enforce optional baseline network traffic controls that can be overridden by users in the cluster, if necessary. Currently, these APIs support only expressing policies for intra-cluster traffic.

[id="ocp-4-14-creating-subinterface"]
==== MAC-VLAN, IP-VLAN, and VLAN subinterface creation for pods

With this release, the ability to create a MAC-VLAN, IP-VLAN, and VLAN subinterface based on a master interface in a container namespace is generally available. You can use this feature to create the master interfaces as part of the pod network configuration in a separate network attachment definition. You can then base the VLAN, MAC-VLAN, or IP-VLAN subinterface on this interface without knowing the network configuration of the node. For more information, see xref:../networking/multiple_networks/configuring-additional-network.adoc#nw-about-configuring-master-interface-container_configuring-additional-network[About configuring the master interface in the container network namespace].

[id="ocp-4-14-tap-device-plugin"]
==== Enhance network flexibility by using the TAP device plugin

This release introduces a new Container Network Interface (CNI) network plugin type: the TAP device plugin. You can use this plugin to create TAP devices within containers, which enables user-space programs to handle network frames and act as an interface that receives frames from and sends frames to user-space applications instead of through traditional network interfaces. For more information, see xref:../networking/multiple_networks/configuring-additional-network.adoc#nw-multus-tap-object_configuring-additional-network[Configuration for a TAP additional network].
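
One way to define such a network, sketched here under the assumption that the additional network is declared through the Cluster Network Operator configuration, is a raw CNI configuration of type `tap`. The network name, namespace, and SELinux context below are illustrative placeholders; see the linked configuration documentation for the authoritative format:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: tap-net              # placeholder network name
    namespace: example-project # placeholder namespace
    type: Raw
    rawCNIConfig: |-
      {
        "cniVersion": "0.4.0",
        "name": "tap-net",
        "type": "tap",
        "selinuxcontext": "system_u:system_r:container_t:s0"
      }
----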
- -[id="ocp-4-14-non-root-dpdk"] -==== Support for running rootless DPDK workloads with kernel access by using the TAP CNI plugin - -In {product-title} version {product-version} and later, DPDK applications that need to inject traffic to the kernel can run in non-privileged pods with the help of the TAP CNI plugin. For more information, see xref:../networking/hardware_networks/using-dpdk-and-rdma.html#nw-running-dpdk-rootless-tap_using-dpdk-and-rdma[Using the TAP CNI to run a rootless DPDK workload with kernel access]. - -[id="ocp-4-14-networking-http-headers"] -==== Set or delete specific HTTP headers using an Ingress Controller or a Route object - -Certain HTTP request and response headers can now be set or deleted either globally by using an Ingress Controller or for specific routes. You can set or delete the following headers: - -* X-Frame-Options -* X-Cache-Info -* X-XSS-Protection -* X-Source -* X-SSL-Client-Cert -* X-Target -* Content-Location -* Content-Language - -For more information, see xref:../networking/ingress-operator.adoc#nw-ingress-set-or-delete-http-headers_configuring-ingress[Setting or deleting HTTP request and response headers in an Ingress Controller] and xref:../networking/routes/route-configuration.adoc#nw-route-set-or-delete-http-headers_route-configuration[Setting or deleting HTTP request and response headers in a route]. - -[id="ocp-4-14-egress-ips-additional-networks"] -==== Support for egress IPs on additional network interfaces - -Support for egress IPs addresses on additional network interfaces is now available as a Technology Preview feature. This feature provides {product-title} administrators with a greater level of control over networking aspects such as routing, addressing, segmentation, and security policies. You can also route workload traffic over specific network interfaces for purposes such as traffic segmentation or meeting specialized requirements. - -For more information, see xref:../networking/ovn_kubernetes_network_provider/configuring-egress-ips-ovn.adoc#nw-egress-ips-multi-nic-considerations_configuring-egress-ips-ovn[Considerations for using an egress IP on additional network interfaces]. - -[id="ocp-4-14-registry"] -=== Registry - -[id="ocp-4-14-optional-image-registry-operator"] -==== Optional Image Registry Operator -With this release, the Image Registry Operator is now an optional component. This feature helps reduce the overall resources footprint of {product-title} in Telco environments when the Image Registry Operator is not needed. For more information about disabling the Image Registry Operator, see xref:../installing/cluster-capabilities.adoc#selecting-cluster-capabilities_cluster-capabilities[Selecting cluster capabilities]. - -[id="ocp-4-14-storage"] -=== Storage - -[id="ocp-4-14-storage-device-selector"] -==== Support for OR logic in LVMS -With this release, the logical volume manager (LVM) cluster custom resource (CR) provides `OR` logic in the `deviceSelector` setting. In previous releases, specifying the `paths` setting for device paths used `AND` logic only. With this release, you can also specify the `optionalPaths` setting, which supports `OR` logic. For more information, see the CR examples in xref:../storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc#persistent-storage-using-lvms[Persistent storage using logical volume manager storage]. 
- -[id="ocp-4-14-storage-lvms-ext4-support"] -==== Support for ext4 in LVMS -With this release, the logical volume manager (LVM) cluster custom resource (CR) provides support for the `ext4` filesystem with the `fstype` setting under `deviceClasses`. The default filesystem is `xfs`. For more information, see the CR examples in xref:../storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc#persistent-storage-using-lvms[Persistent storage using logical volume manager storage]. - -[id="ocp-4-14-storage-standardized-sts-config"] -==== Standardized STS configuration workflow -{product-title} {product-version} provides a streamlined and standardized procedure to configure Security Token Service (STS) with the AWS Elastic File Storage (EFS) Container Storage Interface (CSI) Driver Operator. - -For more information, see xref:../storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc#efs-sts_persistent-storage-csi-aws-efs[Obtaining a role Amazon Resource Name for Security Token Service]. - -[id="ocp-4-14-storage-rwop-access-mode"] -==== Read Write Once Pod access mode (Technology Preview) -{product-title} {product-version} introduces a new access mode for persistent volumes (PVs) and persistent volume claims (PVCs) called ReadWriteOncePod (RWOP), which can be used only in a single pod on a single node. This is compared to the existing ReadWriteOnce access mode where a PV or PVC can be used on a single node by many pods. This feature is supported with Technology Preview status. - -For more information, see xref:../storage/understanding-persistent-storage.adoc#pv-access-modes_understanding-persistent-storage[Access modes]. - -[id="ocp-4-14-storage-gcp-filestore-csi-driver"] -==== GCP Filestore storage CSI Driver Operator is generally available -{product-title} is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Google Compute Platform (GCP) Filestore Storage. The GCP Filestore CSI Driver Operator was introduced in {product-title} 4.12 with Technology Preview support. The GCP Filestore CSI Driver Operator is now generally available. For more information, see xref:../storage/container_storage_interface/persistent-storage-csi-google-cloud-file.adoc#persistent-storage-csi-google-cloud-file[Google Compute Platform Filestore CSI Driver Operator]. - -[id="ocp-4-14-storage-automigration-vsphere"] -==== Automatic CSI migration for VMware vSphere -The Automatic CSI migration for VMware vSphere feature automatically translates in-tree objects to their counterpart CSI representations and, ideally, must be completely transparent to users. Although storage class referencing to the in-tree storage plug-in continues to work, consider switching the default storage class to the CSI storage class. - -In {product-title} {product-version}, CSI migration for vSphere is enabled by default under all circumstances and requires no action by an administrator. - -If you are using vSphere in-tree persistent volumes (PVs) and want to upgrade from {product-title} 4.12 or 4.13 to {product-version}, update vSphere vCenter and ESXI host to 7.0 Update 3L or 8.0 Update 2, otherwise the {product-title} upgrade is blocked. If you do not want to update vSphere, you can proceed with an {product-title} upgrade by performing an administrator acknowledgment. However, with using the administrator acknowledgment, known issues can occur. 
Before proceeding with the administrator acknowledgment, carefully read the link:https://access.redhat.com/node/7011683[Knowledge Base article].

For more information, see xref:../storage/container_storage_interface/persistent-storage-csi-migration.adoc[CSI automatic migration].

[id="ocp-4-14-storage-secrets-store-csi-driver-operator"]
==== Secrets Store CSI Driver Operator (Technology Preview)
The Secrets Store Container Storage Interface (CSI) Driver Operator, `secrets-store.csi.k8s.io`, allows {product-title} to mount multiple secrets, keys, and certificates stored in enterprise-grade external secrets stores into pods as an inline ephemeral volume. The Secrets Store CSI Driver Operator communicates with the provider using gRPC to fetch the mount contents from the specified external secrets store. After the volume is attached, the data in it is mounted into the container’s file system. This feature is supported with Technology Preview status. For more information about the {secrets-store-driver}, see xref:../storage/container_storage_interface/persistent-storage-csi-secrets-store.adoc#persistent-storage-csi-secrets-store[{secrets-store-driver}].

For information about using the {secrets-store-operator} to mount secrets from an external secrets store to a CSI volume, see xref:../nodes/pods/nodes-pods-secrets-store.adoc#nodes-pods-secrets-store[Providing sensitive data to pods by using an external secrets store].

[id="ocp-4-14-storage-azure-file-nfs"]
==== Azure File supporting NFS is generally available
The Azure File Container Storage Interface (CSI) Driver Operator now supports Network File System (NFS) and is generally available in {product-title} {product-version}.

For more information, see xref:../storage/container_storage_interface/persistent-storage-csi-azure-file.adoc#persistent-storage-csi-azure-file-nfs_persistent-storage-csi-azure-file[NFS support].

[id="ocp-4-14-oci"]
=== Oracle® Cloud Infrastructure

You can now install an {product-title} cluster on {oci-first} by using the Assisted Installer or the Agent-based Installer. For {product-title} {product-version}, {product-title} on {oci} is available as a Developer Preview feature.

To install an {product-title} cluster on {oci}, choose one of the following installation options:

* link:https://access.redhat.com/articles/7039183[Using the Assisted Installer to install a cluster on {oci-first}]
* link:https://access.redhat.com/node/7038262/draft[Using the Agent-based Installer to install a cluster on {oci-first}]

For more information about a Developer Preview feature, see link:https://access.redhat.com/support/offerings/devpreview[Developer Preview Support Scope] on the Red Hat Customer Portal.

[id="ocp-4-14-olm"]
=== Operator lifecycle

[id="ocp-4-14-olmv1"]
==== {olmv1-first} (Technology Preview)

Operator Lifecycle Manager (OLM) has been included with {product-title} 4 since its initial release. {product-title} {product-version} introduces components for a next-generation iteration of OLM as a Technology Preview feature, known during this phase as _{olmv1}_. This updated framework evolves many of the concepts that have been part of previous versions of OLM and adds new capabilities.
- -During this Technology Preview phase of {olmv1} in {product-title} {product-version}, administrators can explore the following features: - -Fully declarative model that supports GitOps workflows:: -{olmv1} simplifies Operator management through two key APIs: -+ --- -* A new `Operator` API, provided as `operators.operators.operatorframework.io` by the new Operator Controller component, streamlines management of installed Operators by consolidating user-facing APIs into a single object. This empowers administrators and SREs to automate processes and define desired states by using GitOps principles. -* The `Catalog` API, provided by the new catalogd component, serves as the foundation for {olmv1}, unpacking catalogs for on-cluster clients so that users can discover installable content, such as Operators and Kubernetes extensions. This provides increased visibility into all available Operator bundle versions, including their details, channels, and update edges. --- -+ -For more information, see xref:../operators/olm_v1/arch/olmv1-operator-controller.adoc#olmv1-operator-controller[Operator Controller] and xref:../operators/olm_v1/arch/olmv1-catalogd.adoc#olmv1-catalogd[Catalogd]. - -Improved control over Operator updates:: -With improved insight into catalog content, administrators can specify target versions for installation and updates. This grants administrators more control over the target version of Operator updates. For more information, see xref:../operators/olm_v1/olmv1-installing-an-operator-from-a-catalog.adoc#olmv1-installing-an-operator-from-a-catalog[Installing an Operator from a catalog]. - -Flexible Operator packaging format:: -Administrators can use file-based catalogs to install and manage the following types of content: -+ --- -* OLM-based Operators, similar to the existing OLM experience -* _Plain bundles_, which are static collections of arbitrary Kubernetes manifests --- -+ -In addition, bundle size is no longer constrained by the etcd value size limit. For more information, see xref:../operators/olm_v1/olmv1-managing-plain-bundles.adoc#olmv1-managing-plain-bundles[Managing plain bundles in {olmv1}]. - -include::snippets/olmv1-cli-only.adoc[] - -For more information, see xref:../operators/olm_v1/index.adoc#olmv1-about[About Operator Lifecycle Manager 1.0]. - -[id="ocp-4-14-osdk"] -=== Operator development - -[id="ocp-4-14-osdk-cco-sts"] -==== Token authentication for Operators on cloud providers: AWS STS - -With this release, Operators managed by Operator Lifecycle Manager (OLM) can support token authentication when running on Amazon Web Services (AWS) clusters that use the Security Token Service (STS). The Cloud Credential Operator (CCO) is updated to semi-automate provisioning certain limited-privilege, short-term credentials, provided that the Operator author has enabled their Operator to support AWS STS. For more information about enabling OLM-based Operators to support CCO-based workflows with AWS STS, see xref:../operators/operator_sdk/osdk-token-auth.adoc#osdk-token-auth[Token authentication for Operators on cloud providers]. - -[id="ocp-4-14-osdk-multi-platform"] -==== Configuring Operator projects with support for multiple platforms - -With this release, Operator authors can configure their Operator projects with support for multiple architectures and operating systems, or _platforms_. 
Operator authors can configure support for multiple platforms by performing the following actions: - -* Building a manifest list that specifies the platforms that the Operator supports. -* Setting the Operator’s node affinity to support multi-architecture compute machines. - -For more information, see xref:../operators/operator_sdk/osdk-multi-arch-support.adoc#osdk-multi-arch[Configuring Operator projects for multi-platform support]. - -[id="ocp-4.14-builds"] -=== Builds - -* With this update, the Source-to-Image (S2I) tool is now generally available in {product-title} {product-version}. You can use the S2I tool to build container images from source code, and transform application code into ready-to-deploy container images. This feature enhances the platform's ability to support reproducible containerized application development. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_software_collections/2/html/using_red_hat_software_collections_container_images/sti#doc-wrapper[Using Source-to-Image (S2I) tool]. - -* With this update, the Build CSI Volumes feature is now generally available in {product-title} {product-version}. - -[id="ocp-4-14-machine-config-operator"] -=== Machine Config Operator - -[id="ocp-4-14-mco-ca-distribution"] -==== Handling of registry certificate authorities - -The Machine Config Operator now handles distributing certificate authorities for image registries. This change does not affect end users. - -[id="ocp-4-14-additional-prometheus-metrics"] -==== Additional metrics available in Prometheus - -With this release, you can query additional metrics to more closely monitor the state of your machines and machine config pools. - -For more information on how to use Prometheus, see xref:../monitoring/managing-metrics.adoc#viewing-a-list-of-available-metrics_managing-metrics[Viewing a list of available metrics]. - -[id="ocp-4-14-offline-tang"] -==== Support for offline Tang provisioning - -With this release, you can now provision an {product-title} cluster with Tang-enforced, network-bound disk encryption (NBDE) by using Tang servers that are unreachable during first boot. - -For more information, see xref:../installing/install_config/installing-customizing.adoc#installation-special-config-encryption-threshold_installing-customizing[Configuring an encryption threshold] and xref:../installing/install_config/installing-customizing.adoc#installation-special-config-storage-procedure_installing-customizing[Configuring disk encryption and mirroring]. - -[id="ocp-4-14-new-cert-process"] -==== Certificates are now handled by the Machine Config Daemon - -In previous {product-title} versions, the MCO read and handled certificates directly from machine configuration files. This led to rotation issues and created unwanted situations, such as certificates getting stuck behind a paused machine config pool. - -With this release, certificates are no longer templated from bootstrap into machine configuration files. Instead, they are put directly into the Ignition object, written onto a disk using the controller config, and handled by the Machine Config Daemon (MCD) during regular cluster operation. The certs are then visible by using the `ControllerConfig` resource. 
- -The Machine Config Controller (MCC) holds the following certificate data: - -* `/etc/kubernetes/kubelet-ca.crt` -* `/etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem` -* `/etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt` - -The MCC also handles the image registry certificates and its associated user bundle certificate. This means that certificates are not bound by the machine config pool status and are more timely in their rotation. The previously listed CAs stored in machine configuration files are removed, and the templated files found during cluster installation no longer exist. For more information on how to access these certificates, see xref:../post_installation_configuration/machine-configuration-tasks.adoc#checking-mco-status-certs_post-install-machine-configuration-tasks[Viewing and interacting with certificates]. - -[id="ocp-4-14-machine-api"] -=== Machine API - -[id="ocp-4-14-mapi-cpms-platform-support"] -==== Support for control plane machine sets on Nutanix clusters - -With this release, control plane machine sets are supported for Nutanix clusters. For more information, see xref:../machine_management/control_plane_machine_management/cpmso-getting-started.adoc#cpmso-getting-started[Getting started with the Control Plane Machine Set Operator]. - -[id="ocp-4-14-mapi-cpms-shiftstack-support"] -==== Support for control plane machine sets on {rh-openstack} clusters - -With this release, control plane machine sets are supported for clusters that run on {rh-openstack}. - -For more information, see xref:../machine_management/control_plane_machine_management/cpmso-getting-started.adoc#cpmso-getting-started[Getting started with the Control Plane Machine Set Operator]. - -[id="ocp-4-14-mapi-aws-placement-groups"] -==== Support for assigning AWS machines to placement groups - -With this release, you can configure a machine set to deploy machines within an existing AWS placement group. You can use this feature with Elastic Fabric Adapter (EFA) instances to improve network performance for machines within the specified placement group. You can use this feature with xref:../machine_management/creating_machinesets/creating-machineset-aws.adoc#machineset-aws-existing-placement-group_creating-machineset-aws[compute] and xref:../machine_management/control_plane_machine_management/cpmso-using.adoc#machineset-aws-existing-placement-group_cpmso-using[control plane] machine sets. - -[id="ocp-4-14-mapi-azure-confidential-compute"] -==== Support for Azure confidential VMs and trusted launch (Technology Preview) - -With this release, you can configure a machine set to deploy machines that use Azure confidential VMs, trusted launch, or both. These machines can use Unified Extensible Firmware Interface (UEFI) security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. - -You can use this feature with xref:../machine_management/creating_machinesets/creating-machineset-azure.adoc#creating-machineset-azure[compute] and xref:../machine_management/control_plane_machine_management/cpmso-using.adoc#cpmso-supported-features-azure_cpmso-using[control plane] machine sets. - -[id="ocp-4-14-nodes"] -=== Nodes - -[id="ocp-4-14-descheduler-resource-limits"] -==== Descheduler resource limits for large clusters - -With this release, the resource limits for the descheduler operand are removed. This enables the descheduler to be used for large clusters with many nodes and pods without failing due to out-of-memory errors. 
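
For the AWS placement group support described under _Machine API_ above, a compute machine set references an existing placement group through its provider spec. The following is an illustrative sketch only: most fields are omitted, the group name is a placeholder, and the exact field name should be confirmed against the linked machine management documentation.

[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
# metadata, selector, and most provider spec fields omitted for brevity
spec:
  template:
    spec:
      providerSpec:
        value:
          # name of an existing AWS placement group (placeholder);
          # machines from this machine set are deployed into that group
          placementGroupName: example-efa-placement-group
----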
- -[id="ocp-4-14-nodes-pod-topology-constraints-matchlabelkeys"] -==== Pod topology spread constraints matchLabelKeys parameter is now generally available - -The `matchLabelKeys` parameter for configuring pod topology spread constraints is now generally available in {product-title} {product-version}. Previously, the parameter was available as a Technology Preview feature by enabling the `TechPreviewNoUpgrade` feature set. The `matchLabelKeys` parameter takes a list of pod label keys to select the pods to calculate spreading over. - -For more information, see xref:../nodes/scheduling/nodes-scheduler-pod-topology-spread-constraints.adoc#nodes-scheduler-pod-topology-spread-constraints[Controlling pod placement by using pod topology spread constraints]. - -[id="ocp-4-14-MaxUnavailableStatefulSet"] -==== MaxUnavailableStatefulSet enabled - -With this release, the `MaxUnavailableStatefulSet` featureSet configuration parameter is enabled by default. You can now define the maximum number of `StatefulSet` pods that can be unavailable during updates; thereby, reducing application downtime when upgrading. - -[id="ocp-4-14-pdb-unhealthy-pod-eviction-policy"] -==== Pod disruption budget (PDB) unhealthy pod eviction policy - -With this release, specifying an unhealthy pod eviction policy for pod disruption budgets (PDBs) is Generally Available in {product-title} and has been removed from the `TechPreviewNoUpgrade` featureSet. This helps evict malfunctioning applications during a node drain. - -For more information, see xref:../nodes/pods/nodes-pods-configuring.adoc#pod-disruption-eviction-policy_nodes-pods-configuring[Specifying the eviction policy for unhealthy pods]. - -[id="ocp-4-14-nodes-cgroupv2-default"] -==== Linux Control Groups version 2 is now default - -Beginning with {product-title} 4.14, new installs use Control Groups version 2 by default, also known as cgroup v2, cgroup2, or cgroupsv2. This enhancement includes many bug fixes, performance improvements, and the ability to integrate with new features. cgroup v1 is still used in upgraded clusters that have initial installation dates prior to {product-title} 4.14. cgroup v1 can still be used by changing the `cgroupMode` field in the `node.config` object to `v1`. - -For more information, see xref:../nodes/clusters/nodes-cluster-cgroups-2.adoc#nodes-clusters-cgroups-2[Configuring the Linux cgroup version on your nodes]. - -[id="ocp-4-14-monitoring"] -=== Monitoring - -The monitoring stack for this release includes the following new and modified features: - -[id="ocp-4-14-monitoring-updates-to-monitoring-stack-components-and-dependencies"] -==== Updates to monitoring stack components and dependencies -This release includes the following version updates for monitoring stack components and dependencies: - -* `kube-state-metrics` to 2.9.2 -* `node-exporter` to 1.6.1 -* `prom-label-proxy` to 0.7.0 -* Prometheus to 2.46.0 -* `prometheus-operator` to 0.67.1 - -[id="ocp-4-14-monitoring-changes-to-alerting-rules"] -==== Changes to alerting rules - -[NOTE] -==== -Red{nbsp}Hat does not guarantee backward compatibility for recording rules or alerting rules. -==== - -* *New* -** Added the `KubeDeploymentRolloutStuck` alert to monitor if the rollout of a deployment has not progressed for 15 minutes. -** Added the `NodeSystemSaturation` alert to monitor resource saturation on a node. -** Added the `NodeSystemdServiceFailed` alert to monitor the systemd service on a node. -** Added the `NodeMemoryMajorPagesFaults` alert to monitor major page faults on a node. 
-** Added the `PrometheusSDRefreshFailure` alert to monitor failed Prometheus service discoveries. - -* *Changed* -** Modified the `KubeAggregatedAPIDown` alert and the `KubeAggregatedAPIErrors` alert to evaluate only metrics from the `apiserver` job. -** Modified the `KubeCPUOvercommit` alert to evaluate only metrics from the `kube-state-metrics` job. -** Modified the `NodeHighNumberConntrackEntriesUsed`, `NodeNetworkReceiveErrs` and `NodeNetworkTransmitErrs` alerts to evaluate only metrics from the `node-exporter` job. - -* *Removed* -** Removed the `MultipleContainersOOMKilled` alert for not being actionable. Nodes under memory pressure are covered by other alerts. - -[id="ocp-4-14-monitoring-new-option-to-create-alerts-based-on-core-platform-metrics"] -==== New option to create alerts based on core platform metrics - -With this release, administrators can create new alerting rules based on core platform metrics. -You can now modify settings for existing platform alerting rules by adjusting thresholds and by changing labels. -You can also define and add new custom alerting rules by constructing a query expression based on core platform metrics in the `openshift-monitoring` namespace. -This feature was included as a Technology Preview feature in the {product-title} 4.12 release, and the feature is now generally available in {product-title} {product-version}. -For more information, see xref:../monitoring/managing-alerts.adoc#managing-core-platform-alerting-rules_managing-alerts[Managing alerting rules for core platform monitoring]. - -[id="ocp-4-14-monitoring-new-option-to-specify-resource-limits-for-all-monitoring-components"] -==== New option to specify resource limits for all monitoring components - -With this release, you can now specify resource requests and limits for all monitoring components, including the following: - -* Alertmanager -* `kube-state-metrics` -* `monitoring-plugin` -* `node-exporter` -* `openshift-state-metrics` -* Prometheus -* Prometheus Adapter -* Prometheus Operator and its admission webhook service -* Telemeter Client -* Thanos Querier -* Thanos Ruler - -In previous versions of {product-title}, you could only set options for Prometheus, Alertmanager, Thanos Querier, and Thanos Ruler. - -[id="ocp-4-14-monitoring-new-options-to-configure-node-exporter-collectors"] -==== New options to configure node-exporter collectors - -With this release, you can customize Cluster Monitoring Operator (CMO) config map settings for additional `node-exporter` collectors. The following `node-exporter` collectors are now optional, and you can enable or disable each one individually in the config map settings: - -* `ksmd` collector -* `mountstats` collector -* `processes` collector -* `systemd` collector - -In addition, you can now exclude network devices from the relevant collector configuration for the `netdev` and `netclass` collectors. You can also now use the `maxProcs` option to set the maximum number of processes that can run node-exporter. - -[id="ocp-4-14-monitoring-new-options-to-deploy-monitoring-web-console-plugin-resources"] -==== New option to deploy monitoring web console plugin resources - -With this release, the monitoring pages in the *Observe* section of the {product-title} web console are deployed as a xref:../web_console/dynamic-plugin/overview-dynamic-plugin.adoc[dynamic plugin]. -With this change, the Cluster Monitoring Operator (CMO) is now the component that deploys the {product-title} web console monitoring plugin resources. 
You can now use CMO settings to configure the following features of the console monitoring plugin resource:

* Node selectors
* Tolerations
* Topology spread constraints
* Resource requests
* Resource limits

[id="ocp-4-14-network-observability-1-4"]
=== Network Observability Operator
The Network Observability Operator releases updates independently from the {product-title} minor version release stream. Updates are available through a single, rolling stream which is supported on all currently supported versions of {product-title} 4. Information regarding new features, enhancements, and bug fixes for the Network Observability Operator can be found in the xref:../network_observability/network-observability-operator-release-notes.adoc[Network Observability release notes].

[id="ocp-4-14-scalability-and-performance"]
=== Scalability and performance

[id="ocp-4-14-PAO-image-must-gather"]
==== PAO must-gather image added to default must-gather image

With this release, the Performance Addon Operator (PAO) must-gather image is no longer required as an argument for the `must-gather` command to capture debugging data related to low-latency tuning. The functions of the PAO must-gather image are now included in the default plugin image that the `must-gather` command uses when it is run without any image arguments.
For further information about gathering debugging information relating to low-latency tuning, see xref:../scalability_and_performance/cnf-low-latency-tuning.adoc#cnf-collecting-low-latency-tuning-debugging-data-for-red-hat-support_cnf-master[Collecting low latency tuning debugging data for Red Hat Support].

[id="ocp-4-14-NRO-image-must-gather"]
==== Collecting data for the NUMA Resources Operator with the must-gather image of the Operator

With this release, the `must-gather` tool is updated to collect NUMA Resources Operator data by using the Operator's `must-gather` image. For further information about gathering debugging information for the NUMA Resources Operator, see xref:../scalability_and_performance/cnf-numa-aware-scheduling.adoc#cnf-about-collecting-nro-data_numa-aware[Collecting NUMA Resources Operator data].

[id="ocp-4-14-additional-power-savings-control"]
==== Enabling more control over the C-states for each pod

With this release, you have more control over the C-states for your pods. Now, instead of disabling C-states completely, you can specify a maximum latency in microseconds for C-states. You can configure this option in the `cpu-c-states.crio.io` annotation. This helps to optimize power savings in high-priority applications by enabling some of the shallower C-states instead of disabling them completely. For further information about controlling pod C-states, see xref:../scalability_and_performance/cnf-low-latency-tuning.adoc#node-tuning-operator-pod-power-saving-config_cnf-master[Optional: Power saving configurations].

[id="ocp-4-14-nw-ipv6-spoke-cluster-support"]
==== Support for provisioning IPv6 spoke clusters from dual-stack hub clusters
With this update, you can provision IPv6 spoke clusters from dual-stack hub clusters. In a zero touch provisioning (ZTP) environment, the HTTP server on the hub cluster that hosts the boot ISO now listens on both IPv4 and IPv6 networks. The provisioning service also checks the baseboard management controller (BMC) address scheme on the target spoke cluster and provides a matching URL for the installation media.
These updates offer the ability to provision single-stack, IPv6 spoke clusters from a dual-stack hub cluster. - -[id="ocp-4-14-nw-shiftstack-dual-stack"] -==== Support for dual-stack networking for {rh-openstack} clusters (Technology Preview) - -Dual-stack network configuration is now available for clusters that run on {rh-openstack}. You can configure dual-stack networking during the deployment of a cluster on installer-provisioned infrastructure. - -For more information, see xref:../installing/installing_openstack/installing-openstack-installer-custom.adoc#install-osp-dualstack_installing-openstack-installer-custom[Configuring a cluster with dual-stack networking]. - -[id="ocp-4-14-nw-shiftstack-manage-security-groups"] -==== Security group management for {rh-openstack} clusters - -In {product-title} 4.14, security for clusters that run on {rh-openstack} is enhanced. By default, the OpenStack cloud provider now sets the `manage-security-groups` option for load balancers to `true`, ensuring that only node ports that are required for cluster operation are open. Previously, security groups for both compute and control plane machines were configured to open a wide range of node ports for all incoming traffic. - -You can opt to use the previous configuration by setting the `manage-security-groups` option to `false` in the configuration of a load balancer and ensuring that the security group rules permit traffic from `0.0.0.0/0` on the node ports range 30000 through 32767. - -For clusters that are upgraded to 4.14, you must manually remove permissive security group rules that open the deployment to all traffic. For example, you must remove a rule that permits traffic from `0.0.0.0/0` on the node ports range 30000 through 32767. - -[id="ocp-custom-crs-with-pgt-ztp"] -==== Using custom CRs with PolicyGenTemplate CRs in the {ztp-first} pipeline - -You can now use {ztp} to include custom CRs in addition to the base source CRs provided by the {ztp} plugin in the `ztp-site-generate` container. -For more information, see xref:../scalability_and_performance/ztp_far_edge/ztp-advanced-policy-config.adoc#ztp-adding-new-content-to-gitops-ztp_ztp-advanced-policy-config[Adding custom content to the {ztp} pipeline]. - -[id="ocp-managed-clusters-version-independence-ztp"] -==== {ztp} independence from managed cluster version - -You can now use {ztp} to provision managed clusters that are running different versions of {product-title}. -This means that the hub cluster and the {ztp} plugin version can be independent of the version of {product-title} running on the managed clusters. -For more information, see xref:../scalability_and_performance/ztp_far_edge/ztp-preparing-the-hub-cluster.adoc#ztp-preparing-the-ztp-git-repository-ver-ind_ztp-preparing-the-hub-cluster[Preparing the {ztp} site configuration repository for version independence]. - -[id="ocp-4-14-precaching-user-spec-images"] -==== Pre-caching user-specified images with {cgu-operator-full} - -With this release, you can precache your application workload images before upgrading your applications on {sno} clusters with {cgu-operator-full}. For more information, see xref:../scalability_and_performance/ztp_far_edge/ztp-talm-updating-managed-policies.adoc#talm-prechache-user-specified-images-concept_ztp-talm[Pre-caching user-specified images with {cgu-operator} on {sno} clusters]. 
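
The shape of that pre-caching specification is sketched below, assuming the `PreCachingConfig` custom resource that {cgu-operator} introduces for this feature. The API group, version, and image references shown here are assumptions and placeholders; verify the exact schema against the linked documentation.

[source,yaml]
----
apiVersion: ran.openshift.io/v1alpha1  # assumed API group and version
kind: PreCachingConfig
metadata:
  name: example-precaching-config      # placeholder name
  namespace: example-namespace         # placeholder namespace
spec:
  additionalImages:                    # user-specified workload images to pre-cache before the upgrade
  - quay.io/example/application1:v1    # placeholder image reference
  - quay.io/example/application2:v1    # placeholder image reference
----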
- -[id="ocp-4-14-ztp-siteconfig-disk-cleaning"] -==== Disk cleaning option through SiteConfig and {ztp} - -With this release, you can remove the partitioning table before installation by using the `automatedCleaningMode` field in the `SiteConfig` CR. For more information, see xref:../scalability_and_performance/ztp_far_edge/ztp-deploying-far-edge-sites.adoc#ztp-deploying-a-site_ztp-deploying-far-edge-sites[Deploying a managed cluster with SiteConfig and GitOps ZTP]. - -[id="ocp-4-14-ztp-support-custom-node-labels"] -==== Support for adding custom node labels in the SiteConfig CR through {ztp} - -With this update, you can add the `nodeLabels` field in the `SiteConfig` CR to create custom roles for nodes in managed clusters. For more information about how to add custom labels, see xref:../scalability_and_performance/ztp_far_edge/ztp-deploying-far-edge-sites.adoc#ztp-deploying-a-site_ztp-deploying-far-edge-sites[Deploying a managed cluster with SiteConfig and {ztp}], xref:../scalability_and_performance/ztp_far_edge/ztp-manual-install.adoc#ztp-generating-install-and-config-crs-manually_ztp-manual-install[Generating {ztp} installation and configuration CRs manually], and xref:../scalability_and_performance/ztp_far_edge/ztp-deploying-far-edge-sites.adoc#ztp-sno-siteconfig-config-reference_ztp-deploying-far-edge-sites[{sno} SiteConfig CR installation reference]. - -//// -[id="ocp-4-14-hcp"] -=== Hosted control planes (Technology Preview) -//// - -[id="ocp-4-14-insights-operator"] -=== Insights Operator - -[id="ocp-4-14-insights-operator-on-demand-data-gathering"] -==== On demand data gathering (Technology Preview) -In {product-title} {product-version}, Insights Operator can now run gather operations on demand. For more information about running gather operations on demand, see xref:../support/remote_health_monitoring/using-insights-operator.adoc#running-insights-operator-gather_using-insights-operator[Running an Insights Operator gather operation]. - -[id="ocp-4-14-insights-operator-individual-pods"] -==== Running gather operations as individual pods (Technology Preview) -In {product-title} {product-version} Technology Preview clusters, Insights Operator runs gather operations in individual pods. This supports the new on demand data gathering feature. - -[id="ocp-4-14-notable-technical-changes"] -== Notable technical changes - -{product-title} {product-version} introduces the following notable technical changes. - -// Note: use [discrete] for these sub-headings. - -[discrete] -[id="ocp-4-14-cluster-cloud-controller-manager-operator"] -=== Cloud controller managers for additional cloud providers - -The Kubernetes community plans to deprecate the use of the Kubernetes controller manager to interact with underlying cloud platforms in favor of using cloud controller managers. As a result, there is no plan to add Kubernetes controller manager support for any new cloud platforms. - -This release introduces the General Availability of using cloud controller managers for Amazon Web Services and Microsoft Azure. - -To learn more about the cloud controller manager, see the link:https://kubernetes.io/docs/concepts/architecture/cloud-controller/[Kubernetes Cloud Controller Manager documentation]. - -To manage the cloud controller manager and cloud node manager deployments and lifecycles, use the Cluster Cloud Controller Manager Operator. 
For more information, see the xref:../operators/operator-reference.adoc#cluster-cloud-controller-manager-operator_cluster-operators-ref[Cluster Cloud Controller Manager Operator] entry in the _Platform Operators reference_.

[discrete]
[id="ocp-4-14-planned-psa-restricted-enforcement"]
=== Future restricted enforcement for pod security admission

Currently, pod security violations are shown as warnings and logged in the audit logs, but do not cause the pod to be rejected.

Global restricted enforcement for pod security admission is currently planned for the next minor release of {product-title}. When this restricted enforcement is enabled, pods with pod security violations will be rejected.

To prepare for this upcoming change, ensure that your workloads match the pod security admission profile that applies to them. Workloads that are not configured according to the enforced security standards defined globally or at the namespace level will be rejected. The `restricted-v2` SCC admits workloads according to the link:https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted[Restricted] Kubernetes definition.

If you are receiving pod security violations, see the following resources:

* See xref:../authentication/understanding-and-managing-pod-security-admission.adoc#security-context-constraints-psa-alert-eval_understanding-and-managing-pod-security-admission[Identifying pod security violations] for information about how to find which workloads are causing pod security violations.

* See xref:../authentication/understanding-and-managing-pod-security-admission.adoc#security-context-constraints-psa-synchronization_understanding-and-managing-pod-security-admission[Security context constraint synchronization with pod security standards] to understand when pod security admission label synchronization is performed. Pod security admission labels are not synchronized in certain situations, such as the following situations:
** The workload is running in a system-created namespace that is prefixed with `openshift-`.
** The workload is running on a pod that was created directly without a pod controller.

* If necessary, you can set a custom admission profile on the namespace or pod by setting the `pod-security.kubernetes.io/enforce` label.

[discrete]
[id="ocp-4-14-rhcos-ssh-key-location"]
=== Change in SSH key location
{product-title} {product-version} introduces a {op-system-base} 9.2 based {op-system}. Before this update, SSH keys were located in `/home/core/.ssh/authorized_keys` on {op-system}. With this update, on {op-system-base} 9.2 based {op-system}, SSH keys are located in `/home/core/.ssh/authorized_keys.d/ignition`.

If you customized the default OpenSSH `/etc/ssh/sshd_config` server configuration file, you must update it according to this link:https://access.redhat.com/solutions/7030537[Red Hat Knowledgebase article].

[discrete]
[id="ocp-4-14-cert-manager-operator-1-11"]
=== cert-manager Operator general availability

The {cert-manager-operator} 1.11 is now generally available in {product-title} {product-version}, 4.13, and 4.12.
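
For the pod security admission preparation described above under _Future restricted enforcement for pod security admission_, an explicit admission profile can be set on a namespace with labels. The following is a minimal sketch; the namespace name and the chosen profile are placeholders:

[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace                            # placeholder namespace name
  labels:
    pod-security.kubernetes.io/enforce: restricted   # enforce the chosen profile
    pod-security.kubernetes.io/enforce-version: latest
----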
- -[discrete] -[id="ocp-4-14-ovn-k-interconnect"] -=== Improved scaling and stability with Open Virtual Network (OVN) Optimizations -{product-title} {product-version} introduces an optimization of Open Virtual Network Kubernetes (OVN-K) in which its internal architecture was modified to reduce operational latency to remove barriers to scale and performance of the networking control plane. Network flow data is now localized to cluster nodes instead of centralizing information on the control plane. This reduces operational latency and reduces cluster-wide traffic between worker and control nodes. As a result, cluster networking scales linearly with node count, because additional networking capacity is added with each additional node, which optimizes larger clusters. Because network flow is localized on every node, RAFT leader election of control plane nodes is no longer needed, and a primary source of instability is removed. An additional benefit to localized network flow data is that the effect of node loss on networking is limited to the failed node and has no bearing on the rest of the cluster’s networking, thereby making the cluster more resilient to failure scenarios. For more information, see xref:../networking/ovn_kubernetes_network_provider/ovn-kubernetes-architecture-assembly.adoc#ovn-kubernetes-architecture-assembly[OVN-Kubernetes architecture]. - -[discrete] -[id="ocp-4-14-operator-sdk-1-31"] -=== Operator SDK {osdk_ver} - -{product-title} {product-version} supports Operator SDK {osdk_ver}. See xref:../cli_reference/osdk/cli-osdk-install.adoc#cli-osdk-install[Installing the Operator SDK CLI] to install or update to this latest version. - -[NOTE] -==== -Operator SDK {osdk_ver} supports Kubernetes 1.26. -==== - -If you have Operator projects that were previously created or maintained with Operator SDK {osdk_ver_n1}, update your projects to keep compatibility with Operator SDK {osdk_ver}. - -* xref:../operators/operator_sdk/golang/osdk-golang-updating-projects.adoc#osdk-upgrading-projects_osdk-golang-updating-projects[Updating Go-based Operator projects] - -* xref:../operators/operator_sdk/ansible/osdk-ansible-updating-projects.adoc#osdk-upgrading-projects_osdk-ansible-updating-projects[Updating Ansible-based Operator projects] - -* xref:../operators/operator_sdk/helm/osdk-helm-updating-projects.adoc#osdk-upgrading-projects_osdk-helm-updating-projects[Updating Helm-based Operator projects] - -* xref:../operators/operator_sdk/helm/osdk-hybrid-helm-updating-projects.adoc#osdk-upgrading-projects_osdk-hybrid-helm-updating-projects[Updating Hybrid Helm-based Operator projects] - -* xref:../operators/operator_sdk/java/osdk-java-updating-projects.adoc#osdk-upgrading-projects_osdk-java-updating-projects[Updating Java-based Operator projects] - -[discrete] -[id="ocp-4-14-credentials-podman-default"] -=== oc commands now default to storing and obtaining credentials from Podman configuration locations - -Previously, OpenShift CLI (`oc`) commands that used the registry configuration, for example `oc adm release` or `oc image` commands, obtained credentials from Docker configuration file locations, such as `~/.docker/config.json`, first. If a registry entry could not be found in the Docker configuration locations, `oc` commands obtained the credentials from Podman configuration file locations, such as `${XDG_RUNTIME_DIR}/containers/auth.json`. - -With this release, `oc` commands now default to obtaining the credentials from Podman configuration locations first. 
If a registry entry cannot be found in the Podman configuration locations, `oc` commands obtain the credentials from Docker configuration locations. - -Additionally, the `oc registry login` command now stores credentials in the Podman configuration locations instead of the Docker configuration file locations. - -[id="ocp-4-14-deprecated-removed-features"] -== Deprecated and removed features - -Some features available in previous releases have been deprecated or removed. - -Deprecated functionality is still included in {product-title} and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality deprecated and removed within {product-title} {product-version}, refer to the table below. Additional details for more functionality that has been deprecated and removed are listed after the table. - -In the following tables, features are marked with the following statuses: - -* _General Availability_ -* _Deprecated_ -* _Removed_ - -[discrete] -[id="ocp-4-14-operators-dep-rem"] -=== Operator lifecycle and development deprecated and removed features - -.Operator lifecycle and development deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -|SQLite database format for Operator catalogs -|Deprecated -|Deprecated -|Deprecated - -|==== - -[discrete] -=== Images deprecated and removed features - -.Images deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -|`ImageChangesInProgress` condition for Cluster Samples Operator -|Deprecated -|Deprecated -|Deprecated - -|`MigrationInProgress` condition for Cluster Samples Operator -|Deprecated -|Deprecated -|Deprecated - -|==== - -//[discrete] -//=== Monitoring deprecated and removed features - -//.Monitoring deprecated and removed tracker -//[cols="4,1,1,1",options="header"] -//|==== -//|Feature |4.12 |4.13 |4.14 - -//|==== - -[discrete] -=== Installation deprecated and removed features - -.Installation deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -|`--cloud` parameter for `oc adm release extract` -|General Availability -|General Availability -|Deprecated - -|CoreDNS wildcard queries for the `cluster.local` domain -|Deprecated -|Deprecated -|Deprecated - -|`compute.platform.openstack.rootVolume.type` for {rh-openstack} -|General Availability -|General Availability -|Deprecated - -|`controlPlane.platform.openstack.rootVolume.type` for {rh-openstack} -|General Availability -|General Availability -|Deprecated - -|`ingressVIP` and `apiVIP` settings in the `install-config.yaml` file for installer-provisioned infrastructure clusters -|Deprecated -|Deprecated -|Deprecated - -|`platform.gcp.licenses` for Google Cloud Provider -|Deprecated -|Deprecated -|Removed - -|VMware ESXi 7.0 Update 1 or earlier -|General Availability -|Removed ^[1]^ -|Removed - -|vSphere 7.0 Update 1 or earlier -|Deprecated -|Removed ^[1]^ -|Removed - -|==== -[.small] --- -1. For {product-title} {product-version}, you must install the {product-title} cluster on a VMware vSphere version 7.0 Update 2 or later instance, including VMware vSphere version 8.0, that meets the requirements for the components that you use. 
--- -//[discrete] -//=== Updating clusters deprecated and removed features - -//.Updating clusters deprecated and removed tracker -//[cols="4,1,1,1",options="header"] -//|==== -//|Feature |4.12 |4.13 |4.14 - -//|==== - -[discrete] -=== Storage deprecated and removed features - -.Storage deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -|Persistent storage using FlexVolume -|Deprecated -|Deprecated -|Deprecated - -|==== - -//[discrete] -//=== Authentication and authorization deprecated and removed features - -//.Authentication and authorization deprecated and removed tracker -//[cols="4,1,1,1",options="header"] -//|==== -//|Feature |4.12 |4.13 |4.14 - -//|==== -//[discrete] -//=== Specialized hardware and driver enablement deprecated and removed features - -//.Specialized hardware and driver enablement deprecated and removed tracker -//[cols="4,1,1,1",options="header"] -//|==== -//|Feature |4.12 |4.13 |4.14 - -//|==== - -[discrete] -=== Multi-architecture deprecated and removed features - -.Multi-architecture deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -|IBM Power8 all models (`ppc64le`) -|Deprecated -|Removed -|Removed - -|{ibmpowerProductName} AC922 (`ppc64le`) -|Deprecated -|Removed -|Removed - -|{ibmpowerProductName} IC922 (`ppc64le`) -|Deprecated -|Removed -|Removed - -|{ibmpowerProductName} LC922 (`ppc64le`) -|Deprecated -|Removed -|Removed - -|IBM z13 all models (`s390x`) -|Deprecated -|Removed -|Removed - -|{linuxoneProductName} Emperor (`s390x`) -|Deprecated -|Removed -|Removed - -|{linuxoneProductName} Rockhopper (`s390x`) -|Deprecated -|Removed -|Removed - -|AMD64 (x86_64) v1 CPU -|Deprecated -|Removed -|Removed - -|==== - -[discrete] -=== Networking deprecated and removed features - -.Networking deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -|Kuryr on {rh-openstack} -|Deprecated -|Deprecated -|Deprecated - -|==== - -//[discrete] -//=== Web console deprecated and removed features - -//.Web console deprecated and removed tracker -//[cols="4,1,1,1",options="header"] -//|==== -//|Feature |4.12 |4.13 |4.14 - -//|==== - -[discrete] -=== Node deprecated and removed features - -.Node deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -|`ImageContentSourcePolicy` (ICSP) objects -|General Availability -|Deprecated -|Deprecated - -|Kubernetes topology label `failure-domain.beta.kubernetes.io/zone` -|General Availability -|Deprecated -|Deprecated - -|Kubernetes topology label `failure-domain.beta.kubernetes.io/region` -|General Availability -|Deprecated -|Deprecated - -|==== - -[discrete] -=== OpenShift CLI (oc) deprecated and removed features -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -|`--include-local-oci-catalogs` parameter for `oc-mirror` -|Not Available -|General Availability -|Removed - -|`--use-oci-feature` parameter for `oc-mirror` -|General Availability -|Deprecated -|Removed - -|==== - -[discrete] -=== Workloads deprecated and removed features - -.Workloads deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -|`DeploymentConfig` objects -|General Availability -|General Availability -|Deprecated - -|==== - -[id="ocp-4-14-deprecated-features"] -=== Deprecated features - -[id="ocp-4-14-deployment-config-deprecated"] -==== DeploymentConfig resources are now deprecated - -As of 
{product-title} {product-version}, `DeploymentConfig` objects are deprecated. `DeploymentConfig` objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed.

Instead, use `Deployment` objects or another alternative to provide declarative updates for pods.

[id="ocp-4-14-ztp-talm-defaultcatsrc-update"]
==== Operator-specific CatalogSource CRs used in {ztp} are deprecated

From {product-title} {product-version}, you must use only the `DefaultCatSrc.yaml` `CatalogSource` CR when updating Operators with {cgu-operator-first}. All other `CatalogSource` CRs are deprecated and are planned to be removed in a future release. Red{nbsp}Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. For more information about the `DefaultCatSrc` CR, see xref:../scalability_and_performance/ztp_far_edge/ztp-talm-updating-managed-policies.adoc#talo-operator-update_ztp-talm[Performing an Operator update].

[id="ocp-4-14-dep-cloud-release-extract-param"]
==== The `--cloud` parameter for the `oc adm release extract` command

As of {product-title} {product-version}, the `--cloud` parameter for the `oc adm release extract` command is deprecated. The introduction of the `--included` and `--install-config` parameters makes the `--cloud` parameter unnecessary.

For more information, see xref:../release_notes/ocp-4-14-release-notes.adoc#ocp-4-14-install-update-cco-manual-enhancements[Simplified installation and update experience for clusters with manually maintained cloud credentials].

[id="ocp-4-14-rhv-deprecations"]
==== Red Hat Virtualization (RHV) as a host platform for {product-title} is deprecated

Red Hat Virtualization (RHV) as a host platform for {product-title} was deprecated and is no longer supported. This platform will be removed from {product-title} in a future {product-title} release.

[id="ocp-4-14-dep-registry-auth-preference"]
==== Using the REGISTRY_AUTH_PREFERENCE environment variable is now deprecated

Using the `REGISTRY_AUTH_PREFERENCE` environment variable to specify your preferred location to obtain registry credentials for OpenShift CLI (`oc`) commands is now deprecated.

OpenShift CLI (`oc`) commands now default to obtaining the credentials from Podman configuration locations first, but will fall back to checking the deprecated Docker configuration file locations.

[id="ocp-4-14-removed-features"]
=== Removed features

[id="ocp-4-14-removed-kube-1-27-apis"]
==== Beta APIs removed from Kubernetes 1.27

Kubernetes 1.27 removed the following deprecated API, so you must migrate manifests and API clients to use the appropriate API version. For more information about migrating removed APIs, see the link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-27[Kubernetes documentation].

.APIs removed from Kubernetes 1.27
[cols="2,2,2",options="header",]
|===
|Resource |Removed API |Migrate to

|`CSIStorageCapacity`
|`storage.k8s.io/v1beta1`
|`storage.k8s.io/v1`

|===

[id="ocp-4-14-removed-latencysensitive-feature-set"]
==== Support for the LatencySensitive feature set is removed

As of {product-title} {product-version}, support for the `LatencySensitive` feature set is removed.
- -[id="ocp-4-14-removed-store-registry-credentials"] -==== oc registry login no longer stores credentials in Docker configuration file locations - -As of {product-title} {product-version}, the `oc registry login` command no longer stores registry credentials in the Docker file locations, such as `~/.docker/config.json`. The `oc registry login` command now stores credentials in the Podman configuration file locations, such as `${XDG_RUNTIME_DIR}/containers/auth.json`. - -[id="ocp-4-14-future-deprecation"] -=== Notice of future deprecation - -[id="ocp-4-14-future-deprecation-sdn"] -==== Future deprecation of the OpenShift SDN network plugin - -Targeting {product-title} 4.15, the OpenShift SDN CNI network plugin will not be an option for new installations, but will continue to be supported in clusters upgrading to 4.15 and 4.16 from clusters previously installed with the OpenShift SDN network plugin. In a future release, targeting no earlier than 4.17, the OpenShift SDN network plugin will be removed and will no longer be supported. - -[id="ocp-4-14-bug-fixes"] -== Bug fixes -//Bug fix work for TELCODOCS-750 -//Bare Metal Hardware Provisioning / OS Image Provider -//Bare Metal Hardware Provisioning / baremetal-operator -//Bare Metal Hardware Provisioning / cluster-baremetal-operator -//Bare Metal Hardware Provisioning / ironic -//CNF Platform Validation -//Cloud Native Events / Cloud Event Proxy -//Cloud Native Events / Cloud Native Events -//Cloud Native Events / Hardware Event Proxy -//Cloud Native Events -//Driver Toolkit -//Installer / Assisted installer -//Installer / OpenShift on Bare Metal IPI -//Networking / ptp -//Node Feature Discovery Operator -//Performance Addon Operator -//Telco Edge / HW Event Operator -//Telco Edge / RAN -//Telco Edge / TALO -//Telco Edge / ZTP -[discrete] -[id="ocp-4-14-api-auth-bug-fixes"] -==== API Server and Authentication - -* Previously, when creating a pod controller with a pod spec that would be mutated by security context constraints, users might get a warning that the pod did not meet the given namespace's pod security level. With this release, you no longer get a warning about pod security violations if the pod controller will create pods that do not violate pod security in that namespace. (link:https://issues.redhat.com/browse/OCPBUGS-7267[*OCPBUGS-7267*]) - -* A `user:check-access` scoped token grants sufficient permissions to send a SelfSubjectAccessReview request. Previously, the cluster did not grant sufficient permissions to perform the access review unless the token also had the `user:full` scope or a role scope. With this release, the cluster authorizes a SelfSubjectAccessReview request as if it has either the full user's permissions or the permissions of the user's role set on the request in order to be able to perform the access review. (link:https://issues.redhat.com/browse/OCPBUGS-7415[*OCPBUGS-7415*]) - -* Previously, the pod security admission controller required the `RoleBinding` object's `.subject[].namespace` field to be set when `.subjects[].kind` is set to `ServiceAccount` in order to successfully bind the service account to a role. With this release, the pod security admission controller uses the namespace of the `RoleBinding` object if the `.subject[].namespace` is not specified. 
(link:https://issues.redhat.com/browse/OCPBUGS-160[*OCPBUGS-160*]) - -* Previously, the `clientConfig` of all the webhooks of `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` objects did not get a properly injected `caBundle` with the `service-ca` trust bundle. With this release, the `clientConfig` of all the webhooks of `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` objects now get a properly injected `caBundle` with the `service-ca` trust bundle. (link:https://issues.redhat.com/browse/OCPBUGS-19318[*OCPBUGS-19318*]) - -* Previously, kube-apiserver did not change to `Degraded=True` when an invalid secret name was specified for `servingCertificate` in `namedCertificates`. With this release, kube-apiserver now switches to `Degraded=True` and shows why the certificate was not accepted to allow for easier troubleshooting. (link:https://issues.redhat.com/browse/OCPBUGS-8404[*OCPBUGS-8404*]) - -* Previously, observability dashboards used large queries to show data which caused frequent timeouts on clusters with a large number of nodes. With this release, observability dashboards use recording rules that are precalculated to ensure reliability on clusters with a large number of nodes. (link:https://issues.redhat.com/browse/OCPBUGS-3986[*OCPBUGS-3986*]) - -[discrete] -[id="ocp-4-14-bare-metal-hardware-bug-fixes"] -==== Bare Metal Hardware Provisioning - -* Previously, if the hostname of a bare-metal machine was not provided by either reverse DNS or DHCP, it would default to `localhost` during bare-metal cluster provisioning on installer-provisioned infrastructure. This issue caused Kubernetes node name conflicts and prevented the cluster from being deployed. Now, if the hostname is detected to be `localhost`, the provisioning agent sets the persistent hostname to the name of the `BareMetalHost` object. (link:https://issues.redhat.com/browse/OCPBUGS-9072[*OCPBUGS-9072*]) - -//[discrete] -//[id="ocp-4-14-builds-bug-fixes"] -//==== Builds - -[discrete] -[id="ocp-4-14-cloud-compute-bug-fixes"] -==== Cloud Compute - -* Previously, the Machine API controller could not determine the zone of machines in vSphere clusters that use multiple zones. With this release, the zone lookup logic is based on the host of a VM and, as a result, machine objects indicate proper zones. (link:https://issues.redhat.com/browse/OCPBUGS-7249[*OCPBUGS-7249*]) - -* Previously, after the rotation of cloud credentials in the `clouds.yaml` file, the OpenStack machine API provider would need to be restarted in order to pick up the new cloud credentials. As a result, the ability of a machine set to scale to zero could be affected. With this change, cloud credentials are no longer cached, and the provider reads the corresponding secret freshly as needed. (link:https://issues.redhat.com/browse/OCPBUGS-8687[*OCPBUGS-8687*]) - -* Previously, some conditions during the startup process of the Cluster Autoscaler Operator caused a lock that prevented the Operator from successfully starting and marking itself as available. As a result, the cluster became degraded. The issue is resolved with this release. (link:https://issues.redhat.com/browse/OCPBUGS-20038[*OCPBUGS-20038*]) - -* Previously, the bootstrap credentials used to request client credentials for control plane nodes did not include the generic, all service accounts group. As a result, the cluster machine approver ignored certificate signing requests (CSRs) created during this phase. 
In certain conditions, this prevented approval of CSRs during bootstrap and caused the installation to fail. With this release, the bootstrap credential includes the groups that the cluster machine approver expects for a service account. This change allows the machine approver to take over from the bootstrap CSR approver earlier in the cluster lifecycle and should reduce bootstrap failures related to CSR approval. (link:https://issues.redhat.com/browse/OCPBUGS-8349[*OCPBUGS-8349*]) - -* Previously, if scaling the machines on a Nutanix cluster exceeded the available memory to complete the operation, machines would get stuck in the `Provisioning` state and could not be scaled up or down. The issue is resolved in this release. (link:https://issues.redhat.com/browse/OCPBUGS-19731[*OCPBUGS-19731*]) - -* Previously, for clusters on which the Control Plane Machine Set Operator is configured to use the `OnDelete` update strategy, cached information about machines caused the Operator to balance machines incorrectly and place them in an unexpected failure domain during reconciliation. With this release, the Operator refreshes this information immediately before creating new machines so that it correctly identifies the failure domains to place machines in. (link:https://issues.redhat.com/browse/OCPBUGS-15338[*OCPBUGS-15338*]) - -* Previously, the Control Plane Machine Set Operator used the `Infrastructure` object specification to determine the platform type for the cluster. For clusters upgraded from {product-title} version 4.5 and earlier, this practice meant that the Operator could not correctly determine that a cluster was running on AWS, and therefore did not generate the `ControlPlaneMachineSet` custom resource (CR) as expected. With this release, the Operator uses the status platform type, which is populated on all clusters independent of when they were created, and is now able to generate the `ControlPlaneMachineSet` CR for all clusters. (link:https://issues.redhat.com/browse/OCPBUGS-11389[*OCPBUGS-11389*]) - -* Previously, machines created by a control plane machine set were considered ready once the underlying Machine API machine was running. With this release, the machine is not considered ready until the node linked to that machine is also ready. (link:https://issues.redhat.com/browse/OCPBUGS-7989[*OCPBUGS-7989*]) - -* Previously, the Control Plane Machine Set Operator prioritized failure domains alphabetically and moved machines from alphabetically later failure domains to alphabetically earlier failure domains, even if doing so did not improve the availability of the machines across the failure domains. With this release, the Operator is updated to prioritize failure domains that are present in the existing machines and to respect existing failure domains that provide better availability. (link:https://issues.redhat.com/browse/OCPBUGS-7921[*OCPBUGS-7921*]) - -* Previously, when a control plane machine on a vSphere cluster that uses a control plane machine set was deleted, sometimes two replacement machines were created. With this release, the control plane machine set no longer causes an extra machine to be created. (link:https://issues.redhat.com/browse/OCPBUGS-7516[*OCPBUGS-7516*]) - -* Previously, when the availability zone and subnet ID in a machine set were mismatched, a machine was created successfully by using the machine set specification with no indication to the user of the mismatch. 
Because the mismatched values can cause problems with some configurations, this occurrence should be visible as a warning message. With this release, a warning about the mismatch is logged. (link:https://issues.redhat.com/browse/OCPBUGS-6882[*OCPBUGS-6882*]) - -* Previously, when creating an {product-title} cluster on Nutanix that uses Dynamic Host Configuration Protocol (DHCP) instead of an IP address management (IPAM) network configuration, the hostname of the VM was not set by DHCP. With this release, the VM hostname is set with values from the ignition configuration files. As a result, the issue is resolved for DHCP as well as other network configuration types. (link:https://issues.redhat.com/browse/OCPBUGS-6727[*OCPBUGS-6727*]) - -* Previously, multiple clusters could be created in the `openshift-cluster-api` namespace. This namespace must contain only one cluster. With this release, additional clusters cannot be created in this namespace. (link:https://issues.redhat.com/browse/OCPBUGS-4147[*OCPBUGS-4147*]) - -* Previously, clearing some parameters from the `providerSpec` field of a control plane machine set custom resource caused a loop of control plane machine deletion and creation. With this release, these parameters receive a default value if they are cleared or left empty, which resolves the issue. (link:https://issues.redhat.com/browse/OCPBUGS-2960[*OCPBUGS-2960*]) - -[discrete] -[id="ocp-4-14-cloud-cred-operator-bug-fixes"] -==== Cloud Credential Operator - -* Previously, the Cloud Credential Operator utility (`ccoctl`) used an incorrect Amazon Resource Name (ARN) prefix for AWS GovCloud (US) and AWS China regions. The incorrect ARN prefix caused the `ccoctl aws create-all` command that is used to create AWS resources during installation to fail. This release updates the ARN prefixes to the correct values. (link:https://issues.redhat.com/browse/OCPBUGS-13549[*OCPBUGS-13549*]) - -* Previously, security changes to Amazon S3 buckets caused the Cloud Credential Operator utility (`ccoctl`) command that is used to create AWS resources during installation (`ccoctl aws create-all`) to fail. With this release, the `ccoctl` utility is updated to reflect the Amazon S3 security changes. (link:https://issues.redhat.com/browse/OCPBUGS-11671[*OCPBUGS-11671*]) - -[discrete] -[id="ocp-4-14-cluster-version-operator-bug-fixes"] -==== Cluster Version Operator - -* Previously, the Cluster Version Operator (CVO) did not reconcile `SecurityContextConstraints` resources as expected. The CVO now properly reconciles `SecurityContextConstraints` resources towards the state defined in the release image, reverting any unsupported modifications to them. -+ -Users who want to upgrade from earlier {product-title} versions and who operate workloads depending on modified system `SecurityContextConstraints` resources must follow the procedure in the link:https://access.redhat.com/solutions/7033949[Knowledge Base article] to make sure their workloads are able to run without modified system `SecurityContextConstraint` resources. (link:https://issues.redhat.com/browse/OCPBUGS-19465[*OCPBUGS-19465*]) - -* Previously, the Cluster Version Operator did not prioritize likely targets when determining which conditional update risks to evaluate first. Now, conditional updates to which the risks do not apply become available faster after the Cluster Version Operator detects them. 
(link:https://issues.redhat.com/browse/OCPBUGS-5469[*OCPBUGS-5469*]) - -[discrete] -[id="ocp-4-14-dev-console-bug-fixes"] -==== Developer Console - -* Previously, if you tried to edit a Helm chart repository in the *Developer* console by navigating to *Helm*, clicking the *Repositories* tab, then selecting *Edit HelmChartRepository* through the {kebab} menu for your Helm chart repository, an *Error* page displayed a `404: Page Not Found` error. This was caused by a component path that was not up to date. This issue is now fixed. (link:https://issues.redhat.com/browse/OCPBUGS-14660[*OCPBUGS-14660*]) - -* Previously, distinguishing between the types of samples listed in the *Samples* page was difficult. With this fix, you can easily identify the sample type from the badges displayed on the *Samples* page. (link:https://issues.redhat.com/browse/OCPBUGS-7446[*OCPBUGS-7446*]) - -* Previously on the Pipeline *Metrics* page, only four legends were visible for `TaskRun` duration charts. With this update, you can see all the legends present for the `TaskRun` duration charts. (link:https://issues.redhat.com/browse/OCPBUGS-19878[*OCPBUGS-19878*]) - -* Previously, an issue occurred when creating an application by using the `Import JAR` form in a disconnected cluster with the Cluster Samples Operator not installed. With this update, the `Import JAR` form from the *Add* page and the *Topology* page is hidden when the Java Builder Image is absent. (link:https://issues.redhat.com/browse/OCPBUGS-15011[*OCPBUGS-15011*]) - -* Previously, the Operator backed catalog did not show any catalog items if cluster service version (CSV) copies were disabled. With this fix, Operator backed catalogs are shown in every namespace even if CSV copies are disabled. (link:https://issues.redhat.com/browse/OCPBUGS-14907[*OCPBUGS-14907*]) - -* Previously, in the *Import from Git* and *Deploy Image* flows, the *Resource Type* section was moved to *Advanced* section. As a result, it was difficult to identify the type of resource created. With this fix, the *Resource Type* section is moved to the *General* section. (link:https://issues.redhat.com/browse/OCPBUGS-7395[*OCPBUGS-7395*]) - -[discrete] -[id="ocp-4-14-cloud-etcd-operator-bug-fixes"] -==== etcd Cluster Operator - -* Previously, the `etcdctl` binary was cached on the local machine indefinitely, making updates to the binary impossible. The binary is now properly updated on every invocation of the `cluster-backup.sh` script. (link:https://issues.redhat.com/browse/OCPBUGS-19499[*OCPBUGS-19499*]) - -//// -[discrete] -[id="ocp-4-14-hosted-control-plane-bug-fixes"] -==== Hosted Control Plane - -[discrete] -[id="ocp-4-14-image-registry-bug-fixes"] -==== Image Registry -//// - -[discrete] -[id="ocp-4-14-installer-bug-fixes"] -==== Installer - -* Previously, if you did not specify a custom {op-system-first} Amazon Machine Image (AMI) when installing an AWS cluster to a supported secret partition, the installation failed. With this update, the installation program validates that you have specified the ID of an {op-system} AMI in the installation configuration file before deploying the cluster. (link:https://issues.redhat.com/browse/OCPBUGS-13636[*OCPBUGS-13636*]) - -* Previously, the {product-title} installation program did not find private hosted zones in the host project during installations on Google Cloud Platform (GCP) by using a shared VPC. 
With this update, the installation program checks for an existing private hosted zone in the host project and uses the private hosted zone if it exists. (link:https://issues.redhat.com/browse/OCPBUGS-11736[*OCPBUGS-11736*]) - -* Previously, if you configured user-defined outbound routing when installing a private Azure cluster, the cluster was incorrectly deployed with the default public load balancer. This behavior occurred when using the installer-provisioned infrastructure to install the cluster. With this update, the installation program no longer creates the public load balancer when user-defined routing is configured. (link:https://issues.redhat.com/browse/OCPBUGS-9404[*OCPBUGS-9404*]) - -* Previously, for clusters that run on {rh-openstack}, in the deprovisioning phase of installation, the installer deleted object storage containers sequentially. This behavior caused slow and inefficient deletion of objects, especially with large containers. This problem occurred in part because image streams that use Swift containers accumulated objects over time. Now, bulk object deletion occurs concurrently with up to 3 calls to the {rh-openstack} API, improving efficiency by handling a higher object count per call. This optimization speeds up resource cleanup during deprovisioning. (link:https://issues.redhat.com/browse/OCPBUGS-9081[*OCPBUGS-9081*]) - -* Previously, SSH access to bootstrap and cluster nodes failed when the bastion host ran in the same VPC network as the cluster nodes. Additionally, this configuration caused SSH access from the temporary bootstrap node to the cluster nodes to fail. These issues are now fixed by updating the IBM Cloud `SecurityGroupRules` to support SSH traffic between the temporary bootstrap node and cluster nodes, and to support SSH traffic from a bastion host to cluster nodes on the same VPC network. Log and debug information can be accurately collected for analysis during installer-provisioned infrastructure failure.(link:https://issues.redhat.com/browse/OCPBUGS-8035[*OCPBUGS-8035*]) - -* Previously, DNS records that the installation program created were not removed when uninstalling a private cluster. With this update, the installation program now correctly removes these DNS records. (link:https://issues.redhat.com/browse/OCPBUGS-7973[*OCPBUGS-7973*]) - -* Previously, a script provided in the documentation for checking invalid HTTPS certificates in the {rh-openstack} API assumed a recent version of the {rh-openstack} client. For users who did not have a recent version of the client, this script failed. Now, manual instructions are added to the documentation that users can follow to perform the check with any version of the client. (link:https://issues.redhat.com/browse/OCPBUGS-7954[*OCPBUGS-7954*]) - -* Previously, when defining static IP addresses in the `agent-config.yaml` or `nmstateconfig.yaml` files for the configuration of an Agent-based install, the configured static IP addresses might not have been configured during bootstrap. As a result, the host interfaces would choose an address through DHCP. With this update, timing issues are fixed to ensure that the configured static IP address is correctly applied to the host interface. (link:https://issues.redhat.com/browse/OCPBUGS-16219[*OCPBUGS-16219*]) - -* Previously, during an Agent-based installation, the certificates in the `AdditionalTrustBundle` field of the `install-config.yaml` file were only propagated to the final image when the `ImageContentSources` field was also set for mirroring. 
If mirroring was not set, the additional certificates were on the bootstrap but not the final image. This situation can cause issues when you have set up a proxy and want to add additional certificates as described in link:https://docs.openshift.com/container-platform/4.12/networking/configuring-a-custom-pki.html#installation-configure-proxy_configuring-a-custom-pki[Configuring the cluster-wide proxy during installation]. With this update, these additional certificates are propagated to the final image whether or not the `ImageContentSources` field is also set. (link:https://issues.redhat.com/browse/OCPBUGS-13535[*OCPBUGS-13535*]) - -* Previously, the `openshift-install agent create` command did not return the help output when running an invalid command. With this update, the help output is now shown when you run an invalid `openshift-install agent create` command. (link:https://issues.redhat.com/browse/OCPBUGS-10638[*OCPBUGS-10638*]) - -* Previously, primary networks were not correctly set for generated machines that used Technology Preview failure domains. As a consequence, port targets with the ID `control-plane` were not set as the primary network on machines, which could cause installations that use Kuryr to function improperly. The field is now set to use the proper port target, if set. The primary network for generated machines is now set correctly, allowing installations that use Kuryr to complete. (link:https://issues.redhat.com/browse/OCPBUGS-10570[*OCPBUGS-10570*]) - -* Previously, when running the `openshift-install agent create image` command while using a `releaseImage` that contained a digest, the command produced the following warning message: `WARNING The ImageContentSources configuration in install-config.yaml should have at-least one source field matching the releaseImage`. This message was produced every time, regardless of how `ImageContentSources` was configured, and could cause confusion. With this update, the warning message is only produced when `ImageContentSources` is legitimately not set to have at least one source field matching the release image. (link:https://issues.redhat.com/browse/OCPBUGS-10207[*OCPBUGS-10207*]) - -* Previously, when running the `openshift-install agent create image` command to generate a bootable ISO image, the command output provided a message indicating a successful generated image. This output message existed even if the Agent-based installer could not extract a base ISO image from the release image. With this update, the command output now produces an error message if the Agent-based Installer cannot locate the base ISO image, which might be indicative of an issue with `releaseImage`. (link:https://issues.redhat.com/browse/OCPBUGS-9949[*OCPBUGS-9949*]) - -* Previously, shared VPC installations on GCP that used passthrough credentials mode could fail because the installation program used credentials from the default service account. With this update, you can specify another service account to use for node creation instead of the default. (link:https://issues.redhat.com/browse/OCPBUGS-15421[*OCPBUGS-15421*]) - -* Previously, if you defined more control plane nodes than compute nodes in either the `agent-config.yaml` or the `nmstateconfig.yaml` configuration file, you received a warning message. Now, if you specify this configuration in either file, you receive an error message, which indicates that compute nodes cannot exceed control plane nodes in either file. 
(link:https://issues.redhat.com/browse/OCPBUGS-14877[*OCPBUGS-14877*]) - -* Previously, an Agent-based installation would fail if a non-canonical IPv6 address was used for the `RendezvousIP` field in the `agent-config.yaml` file. Non-canonical IPv6 addresses contain leading zeros, for example, `2001:0db8:0000:0000:0000:0000:0000:0000`. With this update, these valid addresses can now be used for the `RendezvousIP`. (link:https://issues.redhat.com/browse/OCPBUGS-14121[*OCPBUGS-14121*]) - -* Previously, the Operator cached the cloud credentials, which resulted in authentication issues when these credentials were rotated. Now, the Operator always uses the latest credentials. The Manila CSI Driver Operator now automatically creates an OpenShift storage class for each available Manila share type. As part of this operation, the Operator queries the Manila API. (link:https://issues.redhat.com/browse/OCPBUGS-14049[*OCPBUGS-14049*]) - -* Previously, when configuring the `install-config.yaml` file for use during an Agent-based installation, changing the `cpuPartitioning` field to a non-default value did not produce a warning to alert users that the field is ignored for Agent-based installations. With this update, changing the `cpuPartitioning` field causes a warning to users that the configuration does not impact the install. (link:https://issues.redhat.com/browse/OCPBUGS-13662[*OCPBUGS-13662*]) - -* Previously, installing an Azure cluster into an existing Azure Virtual Network (VNet) could fail because the installation program created a default network security group, which allowed traffic from `0.0.0.0`. The failure occurred when the existing VNet had the following rule enabled in the tenant: `Rule: Network Security Groups shall not allow rule with 0.0.0.0/Any Source/Destination IP Addresses - Custom Deny`. With this fix, the installation program no longer creates the default network security group when installing a cluster into an existing VNet, and the installation succeeds. (link:https://issues.redhat.com/browse/OCPBUGS-11796[*OCPBUGS-11796*]) - -* During an installation, when the cluster status is `installing-pending-user-action`, the installation does not complete until the status is resolved. Previously, if you ran the `openshift-install agent wait-for bootstrap-complete` command, no indication existed of how to resolve the problem that caused this status. With this update, the command output provides a message indicating which actions must be taken to resolve the issue. (link:https://issues.redhat.com/browse/OCPBUGS-4998[*OCPBUGS-4998*]) -+ -For example, the `wait-for` output when an invalid boot disk is used is now as follows: -+ -[source,terminal] ----- -"level=info msg=Cluster has hosts requiring user input -level=debug msg=Host master-1 Expected the host to boot from disk, but it booted the installation image - please reboot and fix boot order to boot from disk QEMU_HARDDISK drive-scsi0-0-0-0 (sda, /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) -level=debug msg=Host master-2 Expected the host to boot from disk, but it booted the installation image - please reboot and fix boot order to boot from disk QEMU_HARDDISK drive-scsi0-0-0-0 (sda, /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0) -level=info msg=cluster has stopped installing... working to recover installation" ----- - -* Previously, the `assisted-installer-controller` on the installed cluster would run continuously even after the cluster had completed installation. 
Because `assisted-service` runs on the bootstrap node and not in the cloud, and because `assisted-service` goes offline after the bootstrap node reboots to join the cluster, the `assisted-installer-controller` was unable to communicate with `assisted-service` to post updates and upload logs, so it kept retrying in a loop. Now, the `assisted-installer-controller` checks the cluster installation without using `assisted-service`, and exits when the cluster installation is complete. (link:https://issues.redhat.com/browse/OCPBUGS-4240[*OCPBUGS-4240*]) - -* Previously, installing a cluster to the AWS Commercial Cloud Services (C2S) `us-iso-east-1` region failed with an error message stating an `UnsupportedOperation`. With this fix, installing to this region now succeeds. (link:https://issues.redhat.com/browse/OCPBUGS-2324[*OCPBUGS-2324*]) - -* Previously, installations on AWS could fail because the installation program did not create the `cloud.conf` file with the necessary service endpoints in it. This led to the machine config operator creating an empty `cloud.conf` file that lacked the service endpoints, leading to an error. With this update, the installation program always creates the `cloud.conf` file so that the installation succeeds. (link:https://issues.redhat.com/browse/OCPBUGS-20401[*OCPBUGS-20401*]) - -* Previously, if you installed a cluster using the Agent-based installer and your pull secret had a null `auth` or `email` field, the installation would fail without providing a useful error. With this update, the `openshift-install agent wait-for install-complete` command validates your pull secret and notifies you if there are null fields. (link:https://issues.redhat.com/browse/OCPBUGS-14405[*OCPBUGS-14405*]) - -* Previously, the `create agent-config-template` command printed a line with `INFO` only, but no details about whether the command was successful and where the template file was written to. Now, if the command is successful, the command will print `INFO Created Agent Config Template in directory`. (link:https://issues.redhat.com/browse/OCPBUGS-13408[*OCPBUGS-13408*]) - -* Previously, when a user specified the `vendor` hint in the `agent-config.yaml` file, the value was checked against the wrong field so that the hint would not match. With this update, the use of the `vendor` hint correctly selects a disk. (link:https://issues.redhat.com/browse/OCPBUGS-13356[*OCPBUGS-13356*]) - -* Previously, setting the `metadataService.authentication` field to `Required` when installing a cluster on AWS did not configure the bootstrap VM to use IMDSv2 authentication. This could result in installations failing if you configured your AWS account to block IMDSv1 authentication. With this update, the `metadataService.authentication` field correctly configures the bootstrap VM to use IMDSv2 authentication when set to `Required`. (link:https://issues.redhat.com/browse/OCPBUGS-12964[*OCPBUGS-12964*]) - -* Previously, the vSphere Terraform `vsphere_virtual_machine` resource did not include the `firmware` parameter. 
This issue caused the firmware of the VM to be set to `bios` by default instead of `efi`. Now, the resource includes the `firmware` parameter and sets `efi` as the default value for the parameter, so that the VM runs the Extensible Firmware Interface (EFI) instead of the basic input/output system (BIOS) interface. (link:https://issues.redhat.com/browse/OCPBUGS-9378[*OCPBUGS-9378*]) - -* Previously, the installation program did not exit with an error if you installed a cluster on Azure by using disk encryption without providing a subscription ID. This caused the installation to begin and then fail later on. With this update, the installation program requires you to specify a subscription ID for encrypted Azure installations and exits with an error if you do not provide one. (link:https://issues.redhat.com/browse/OCPBUGS-8449[*OCPBUGS-8449*]) - -* Previously, the Agent-based installer showed the results of secondary checks such as `ping` and `nslookup`, which can harmlessly fail even when the installation succeeds. This could result in errors being displayed despite the cluster installing successfully. With this update, secondary checks only display results if the primary installation checks fail, so that you can use the secondary checks to troubleshoot the failed installation. (link:https://issues.redhat.com/browse/OCPBUGS-8390[*OCPBUGS-8390*]) - -* Using an IPI `install-config` with the Agent-based Installer results in warning log messages showing the contents of any unused fields. Previously, these warnings printed sensitive information such as passwords. With this update, the warning messages for the credentials fields in the `vsphere` and `baremetal` platform sections have been changed to avoid logging any sensitive information. (link:https://issues.redhat.com/browse/OCPBUGS-8203[*OCPBUGS-8203*]) - -* Previously, clusters on Azure Stack Hub could not create new control plane nodes unless the nodes had custom disk sizes, because the default disk size could not be validated. With this update, the default disk size has been set to 128 GB and the installation program enforces user-specified disk size values between 128 and 1023 GB. (link:https://issues.redhat.com/browse/OCPBUGS-6759[*OCPBUGS-6759*]) - -* Previously, the installation program used port 80 to provide images to the Baseboard Management Controller (BMC) and the deployment agent when installing on bare metal with installer-provisioned infrastructure. This could present security concerns because many types of public traffic use port 80. With this update, the installation program uses port 6180 for this purpose. 
(link:https://issues.redhat.com/browse/OCPBUGS-8509[*OCPBUGS-8509*]) - -//// -[discrete] -[id="ocp-4-14-kube-controller-bug-fixes"] -==== Kubernetes Controller Manager - -[discrete] -[id="ocp-4-14-kube-scheduler-bug-fixes"] -==== Kubernetes Scheduler -//// - -[discrete] -[id="ocp-4-14-machine-config-operator-bug-fixes"] -==== Machine Config Operator - -* Previously, {product-title} clusters that were installed on AWS used 4.1 boot images that were not able to scale up. This issue occurred because two systemd units, configured from Ignition then rendered and launched by the MCO during the initial boot of a new machine, have a dependency on the application Afterburn. Because {product-title} 4.1 boot images do not contain Afterburn, this issue prevented new nodes from being able to join the cluster. Now, `systemd` units contain an additional check for Afterburn along with fallback code that does not rely on the presence of Afterburn. (link:https://issues.redhat.com/browse/OCPBUGS-7559[*OCPBUGS-7559*]) - -[discrete] -[id="ocp-4-14-management-console-bug-fixes"] -==== Management Console - -* Previously, alerts loaded from non-Prometheus datasources such as logs. This caused the source of all alerts to be displayed always as *Prometheus*. With this update, alert sources are displayed correctly. (link:https://issues.redhat.com/browse/OCPBUGS-9907[*OCPBUGS-9907*]) - -* Previously, there was an issue with Patternfly 4 where you could not select or change the log component under the logs section of master node once a selection was already made. With this update, when you change to the log component from the log section of the master node, refresh the page to reload the default options. (link:https://issues.redhat.com/browse/OCPBUGS-18727[*OCPBUGS-18727*]) - -* Previously, an empty page was displayed when viewing route details on the *Metrics* tab of the *`alertmanager-main`* page. With this update, user privileges were updated so you can view the route details on the *Metrics* tab. (link:https://issues.redhat.com/browse/OCPBUGS-15021[*OCPBUGS-15021*]) - -* Previously, {product-rosa} used custom branding and the favicon would disappear so no specific branding appeared when custom branding was being used. With this update, {product-rosa} branding is now part of the branding API. (link:https://issues.redhat.com/browse/OCPBUGS-14716[*OCPBUGS-14716*]) - -* Previously, the {product-title} web console did not render the monitoring *Dashboard* page when a proxy was expected. As a result, the websocket connection failed. With this update, the web console also detects proxy settings from environment variables. (link:https://issues.redhat.com/browse/OCPBUGS-14550[*OCPBUGS-14550*]) - -* Previously, if the `console.openshift.io/disable-operand delete: "true"` and `operator.openshift.io/uninstall-message: "some message"` annotations were used on an operator CSV, the uninstall instructions did not show up in the web console. With this update, the instructions to opt out of the installment are available. (link:https://issues.redhat.com/browse/OCPBUGS-13782[*OCPBUGS-13782*]) - -* Previously, the size on the *PersistentVolumeClaims* namespace *Details* page was incorrect. With this update, the Prometheus query on *PersistentVolumeClaims* namespace *Details* page includes the namespace label and the size is now correct. 
(link:https://issues.redhat.com//browse/OCPBUGS-13208[*OCPBUGS-13208*]) - -* Previously, after customizing the routes for console and downloads, the downloads route did not update in the `ConsoleCLIDownloads` link and pointed to the default downloads route. With this update, the `ConsoleCLIDownloads` link updates when the custom downloads route is set. (link:https://issues.redhat.com/browse/OCPBUGS-12990[*OCPBUGS-12990*]) - -* Previously, the print preview displayed incomplete topology information from the list view. With this update, a full list of resources is printed when they are longer than one page. (link:https://issues.redhat.com/browse/OCPBUGS-11219[*OCPBUGS-11219*]) - -* Previously, dynamic plugins that proxy to services with longer response times timed out at 30 seconds with a `504` error message. With this update, a 5-minute HAProxy timeout annotation was added to the console route to match the maximum timeout of most browsers. (link:https://issues.redhat.com/browse/OCPBUGS-9917[*OCPBUGS-9917*]) - -* Previously, the provided API page used the `displayName` of the provided API, but this value was not always set. As a result, the list was empty but you could still click all instances to get to the YAML of a new instance. With this update, if the `displayName` is not set, the list displays text. (link:https://issues.redhat.com/browse/OCPBUGS-8682[*OCPBUGS-8682*]) - -* Previously, the `CronJobs` table and details view did not have a `suspend` indication. With this update, `spec.suspend` was added to the list and details view for `CronJobs`. (link:https://issues.redhat.com/browse/OCPBUGS-8299[*OCPBUGS-8299*]) - -* Previously, when enabling a single plugin in the configuration of the console operator, the redeployed console fails. With this update, the list of plugins is now unique and pods run as expected. (link:https://issues.redhat.com/browse/OCPBUGS-5059[*OCPBUGS-5059*]) - -* Previously, after upgrading a plugin image, old plugin files were still requested. With this update, the `?cacheBuster=${getRandomChars()}` query string was added when `plugin-entry.js` resources are requested. (link:https://issues.redhat.com/browse/OCPBUGS-3495[*OCPBUGS-3495*]) - -[discrete] -[id="ocp-4-14-monitoring-bug-fixes"] -==== Monitoring - -* Before this update, large amounts of CPU resources might be consumed during metrics scraping as a result of the way the `node-exporter` collected network interface information. This release fixes this issue by improving the performance of `node-exporter` when collecting network interface information, thereby resolving the issue with excessive CPU usage during metrics scraping. (link:https://issues.redhat.com/browse/OCPBUGS-12714[*OCPBUGS-12714*]) - -* Before this update, Thanos Querier failed to de-duplicate metrics by node roles. This update fixes the issue so that Thanos Querier now properly de-duplicates metrics by node roles. (link:https://issues.redhat.com/browse/OCPBUGS-12525[*OCPBUGS-12525*]) - -* Before this update, the `btrfs` collector of `node-exporter` was always enabled, which caused increased CPU usage because {op-system-base-full} does not support the `btrfs` storage format. With this update, the `btrfs` collector is now disabled, thereby resolving the issue. (link:https://issues.redhat.com/browse/OCPBUGS-11434[*OCPBUGS-11434*]) - -* Before this update, for the `cluster:capacity_cpu_cores:sum` metric, nodes with the `infra` role but not `master` role were not assigned a value of `infra` for the `label_node_role_kubernetes_io` label. 
With this update, nodes with the `infra` role, but not `master` role, are now correctly labeled as `infra` for this metric. (link:https://issues.redhat.com/browse/OCPBUGS-10387[*OCPBUGS-10387*]) - -* Before this update, the lack of a startup probe prevented the Prometheus Adapter pods from starting when the Kubernetes API had many custom resource definitions installed because the program initialization would take longer than what was allowed by the liveness probe. With this update, the Prometheus Adapter pods are now configured with a startup probe that waits five minutes before failing, thereby resolving the issue. (link:https://issues.redhat.com/browse/OCPBUGS-7694[*OCPBUGS-7694*]) - -* The `node_exporter` collector is meant to collect network interface metrics for physical interfaces only, but before this update, the `node-exporter` collector did not exclude Calico Virtual network interface controllers (NICs) when collecting these metrics. This update adds the `++cali[a-f0-9]*++` value to the `collector.netclass.ignored-devices` list to ensure that metrics are not collected for Calico Virtual NICs. (link:https://issues.redhat.com/browse/OCPBUGS-7282[*OCPBUGS-7282*]) - -* With this release, as a security measure, Cross Origin Resource Sharing (CORS) headers are now disabled by default for Thanos Querier. If you still need to use CORS headers, you can enable them by setting the value of the `enableCORS` parameter to `true` for the `ThanosQuerierConfig` resource. (link:https://issues.redhat.com/browse/OCPBUGS-11889[*OCPBUGS-11889*]) - -[discrete] -[id="ocp-4-14-networking-bug-fixes"] -==== Networking - -* Previously, when a client mutual TLS (mTLS) was configured on an ingress controller, and the certificate authority (CA) certificates in the CA bundle required more than 1 MB of certificate revocation lists (CRL) to be downloaded, the CRL config map could not be updated due to size limitations. Because of the missing CRLs, connections with valid client certificates might have been rejected with the following error: `unknown ca`. -+ -With this update, CRLs are no longer placed in a config map, and the router now directly downloads CRLs. As a result, the CRL config map for each ingress controller no longer exists. CRLs are now downloaded directly and connections with valid client certificates are no longer rejected. (link:https://issues.redhat.com/browse/OCPBUGS-6661[*OCPBUGS-6661*]) - -* Previously, a non-compliant upstream DNS server that provided a UDP response larger than {product-title}'s specified buffer size of 512 bytes caused CoreDNS to throw an overflow error. Consequently, it would not provide a response to a DNS query. -+ -With this update, users can now configure the `protocolStrategy` field on the `dnses.operator.openshift.io` custom resource (CR) to be `TCP`. With this field set to `TCP`, CoreDNS uses the TCP protocol for upstream requests and works around UDP overflow issues with non-compliant upstream DNS servers. (link:https://issues.redhat.com/browse/OCPBUGS-6829[*OCPBUGS-6829*]) - -* Previously, if cluster administrators configured an infra node using a taint with the `NoExecute` effect, the Ingress Operator's canary pods would not be scheduled on these infra nodes. After some time, the DaemonSet configuration would get overridden, and the pods would be terminated on the infra nodes. -+ -With this release, the Ingress Operator now configures the canary DaemonSet to tolerate a `node-role.kubernetes.io/infra` node taint that specifies the `NoExecute` effect. 
As a result, canary pods are scheduled on infra nodes regardless of what effect has been specified. (link:https://issues.redhat.com/browse/OCPBUGS-9274[*OCPBUGS-9274*]) - -* Previously, when a client mutual TLS (mTLS) was configured on an ingress controller, if any of the client certificate authority (CA) certificates included a certificate revocation list (CRL) distribution point for a CRL issued by a different CA and that CRL expired, the mismatch between the distributing CA and the issuing CA caused the incorrect CRL to be downloaded. Consequently, the CRL bundle would be updated to contain an extra copy of the erroneously downloaded CRL, and the CRL that needed to be updated would be missing. Because of the missing CRL, connections with valid client certificates might have been rejected with the following error: `unknown ca`. -+ -With this update, downloaded CRLs are now tracked by the CA that distributes them. When a CRL expires, the distributing CA's CRL distribution point is used to download an updated CRL. As a result, valid client certificates are no longer rejected. (link:https://issues.redhat.com/browse/OCPBUGS-9464[*OCPBUGS-9464*]) - -* Previously, when the Gateway API was enabled for {SMProductName}, the Ingress Operator would fail to configure and would return the following error: `the spec.techPreview.controlPlaneMode field is not supported in version 2.4+; use spec.mode`. With this release, the {SMProductShortName} `spec.techPreview.controlPlaneMode` API field in the `ServiceMeshControlPlane` custom resource (CR) has been replaced with `spec.mode`. As a result, the Ingress Operator is able to create a `ServiceMeshControlPlane` custom resource, and the Gateway API works properly. (link:https://issues.redhat.com/browse/OCPBUGS-10714[*OCPBUGS-10714*]) - -* Previously, when configuring DNS for Gateway API gateways, the Ingress Operator would attempt to create a DNS record for a gateway listener, even if the listener specified a hostname with a domain that was outside of the cluster's base domain. Consequently, the Ingress Operator attempted, and failed, to publish DNS records, and would return the following error: `failed to publish DNS record to zone`. -+ -With this update, when creating a `DNSRecord` custom resource (CR) for a gateway listener, the Ingress Operator now sets the `DNSRecord's` DNS management policy to `Unmanaged` if its domain is outside of the cluster's base domain. As a result, the Ingress Operator no longer attempts to publish records, and no longer logs the `failed to publish DNS record to zone` error. (link:https://issues.redhat.com/browse/OCPBUGS-10875[*OCPBUGS-10875*]) - -* Previously, `oc explain route.spec.tls.insecureEdgeTerminationPolicy` documented incorrect possible options, which could confuse some users. With this release, the API documentation has been updated so that it shows the correct possible options for `insecureEdgeTerminationPolicy`. This is an API documentation fix only. (link:https://issues.redhat.com/browse/OCPBUGS-11393[*OCPBUGS-11393*]) - -* Previously, a Cluster Network Operator controller monitored a broader set of resources than necessary, which resulted in its reconciler being triggered too often. Consequently, this increased the loads on both the Cluster Network Operator and the `kube-apiserver`. -+ -With this update, the Cluster Network Operator `allowlist` controller monitors its `cni-sysctl-allowlist` config map for changes. 
Rather than being triggered when any config map is changed, the `allowlist` controller reconciler is now triggered only when changes are made to the `cni-sysctl-allowlist` config map or the `default-cni-sysctl-allowlist` config map. As a result, Cluster Network Operator API requests and config map requests are reduced. (link:https://issues.redhat.com/browse/OCPBUGS-11565[*OCPBUGS-11565*]) - -* `segfault` failures that were related to HAProxy have been resolved. Users should no longer receive these errors. (link:https://issues.redhat.com/browse/OCPBUGS-11595[*OCPBUGS-11595*]) - -* Previously, CoreDNS terminated unexpectedly if a user created an `EndpointSlice` port without a port number. With this update, validation was added to CoreDNS to prevent it from terminating unexpectedly. (link:https://issues.redhat.com/browse/OCPBUGS-19805[*OCPBUGS-19805*]) - -* Previously, the OpenShift router directed traffic to a route with a weight of `0` when it had only one back-end service. With this update, the router no longer sends traffic to routes with a single backend with weight `0`. (link:https://issues.redhat.com/browse/OCPBUGS-16623[*OCPBUGS-16623*]) - -* Previously, the Ingress Operator created its canary route without specifying the `spec.subdomain` or the `spec.host` parameter on the route. Usually, this caused the API server to use the cluster's Ingress domain, which matches the domain of the default Ingress Controller, to set a default value for the `spec.host` parameter. However, if you configured the cluster by using the `appsDomain` option to set an alternative Ingress domain, the route host would have the alternative domain. Further, if you deleted the canary route, the route would be recreated with a domain that did not match the default Ingress Controller's domain, which would cause canary checks to fail. Now, the Ingress Controller specifies the `spec.subdomain` parameter when it creates the canary route. If you use the `appsDomain` option to configure your cluster and then delete the canary route, the canary checks do not fail. (link:https://issues.redhat.com/browse/OCPBUGS-16089[*OCPBUGS-16089*]) - -* Previously, the Ingress Operator did not check the status of DNS records in public hosted zones when updating the Operator status. This caused the Ingress Operator to report the DNS status as `Ready` when there could be errors in DNS records in public hosted zones. Now, the Ingress Operator checks the status of both public and private hosted zones, which fixes the issue. (link:https://issues.redhat.com/browse/OCPBUGS-15978[*OCPBUGS-15978*]) - -* Previously, the CoreDNS `bufsize` setting was configured as 512 bytes. Now, the maximum size of the buffer for {product-title} CoreDNS is 1232 bytes. This modification enhances DNS performance by reducing the occurrence of DNS truncations and retries. (link:https://issues.redhat.com/browse/OCPBUGS-15605[*OCPBUGS-15605*]) - -* Previously, the Ingress Operator would specify the `spec.template.spec.hostNetwork: true` parameter on a router deployment without specifying the `spec.template.spec.containers[*].ports[*].hostPort`. This caused the API server to set a default value for each port's `hostPort` field, which the Ingress Operator would then detect as an external update and attempt to revert. Now, the Ingress Operator no longer incorrectly performs these updates. 
(link:https://issues.redhat.com/browse/OCPBUGS-14995[*OCPBUGS-14995*]) - -* Previously, the DNS Operator logged the `cluster-dns-operator startup has an error message: [controller-runtime] log.SetLogger(...) was never called, logs will not be displayed:` error message on startup, which could mislead users. Now, the error message is not displayed on startup. (link:https://issues.redhat.com/browse/OCPBUGS-14395[*OCPBUGS-14395*]) - -* Previously, the Ingress Operator was leaving the `spec.internalTrafficPolicy`, `spec.ipFamilies`, and `spec.ipFamilyPolicy` fields unspecified for `NodePort` and `ClusterIP` type services. The API would then set default values for these fields, which the Ingress Operator would try to revert. With this update, the Ingress Operator specifies an initial value and fixes the error caused by API default values. (link:https://issues.redhat.com/browse/OCPBUGS-13190[*OCPBUGS-13190*]) - -* Previously, transmission control protocol (TCP) connections were load balanced for all DNS. With this update, TCP connections are enabled to prefer local DNS endpoints. (link:https://issues.redhat.com/browse/OCPBUGS-9985[*OCPBUGS-9985*]) - -* Previously, for Intel E810 NICs, resetting a MAC address on an SR-IOV with a virtual function (VF) when a pod was deleted caused a failure. This resulted in a long delay when creating a pod with SR-IOV VF. With this update, the container network interface (CNI) does not fail fixing this issue. (link:https://issues.redhat.com/browse/OCPBUGS-5892[*OCPBUGS-5892*]) - -//[discrete] -//[id="ocp-4-14-node-bug-fixes"] -//==== Node -//node does not have any bugs as of 10.23.23 - -//[discrete] -//[id="ocp-4-14-node-tuning-operator-bug-fixes"] -//==== Node Tuning Operator (NTO) -// only had a known issue for 4.14.0 - -[discrete] -[id="ocp-4-14-openshift-cli-bug-fixes"] -==== OpenShift CLI (oc) - -* Previously, container image references that have both tag and digest were not correctly interpreted by the oc-mirror plug-in and resulted in the following error: -+ -[source,text] ----- -"localhost:6000/cp/cpd/postgresql:13.7@sha256" is not a valid image reference: invalid reference format ----- -+ -This behavior has been fixed, and the references are now accepted and correctly mirrored. (link:https://issues.redhat.com/browse/OCPBUGS-11840[*OCPBUGS-11840*]) - -* Previously, you were receiving `401 - Unauthorized` error for registries where the number of path components exceeded the expected maximum path components. This issue is fixed by ensuring that the oc-mirror fails when the number of path components exceeds maximum path components. You can now set the maximum path components by using the flag `--max-nested-paths`, which accepts an integer value. By default, there is no limit to the maximum path components and is set to `0`. The generated `ImageContentSourcePolicy` will contain source and mirror references up to the repository level. -(link:https://issues.redhat.com/browse/OCPBUGS-8111[*OCPBUGS-8111*], link:https://issues.redhat.com/browse/OCPBUGS-11910[*OCPBUGS-11910*], link:https://issues.redhat.com/browse/OCPBUGS-11922[*OCPBUGS-11922*]) - -* Previously, the oc-mirror flags `--short`, `-v`, and `--verbose` provided incorrect version information. You can now use the oc mirror `version` flag to know the correct version of oc-mirror. The oc-mirror flags `--short`, `-v`, and `--verbose` have been deprecated and will no longer be supported. 
(link:https://issues.redhat.com/browse/OCPBUGS-7845[*OCPBUGS-7845*]) - -* Previously, mirroring from registry to disk would fail when several digests of an image were specified in the `imageSetConfig` without tags. The oc-mirror would add the default tag `latest` to the images. The issue is now fixed by using a truncated digest as the tag. (link:https://issues.redhat.com/browse/OCPBUGS-2633[*OCPBUGS-2633*]) - -* Previously, oc-mirror would incorrectly add the Operator catalog to `ImageContentSourcePolicy` specification. This is an unexpected behavior because the Operator catalog is directly used from the destination registry through `CatalogSource` resource. This bug is fixed by ensuring that the oc-mirror does not add the Operator catalog as an entry to `ImageContentSourcePolicy`. (link:https://issues.redhat.com/browse/OCPBUGS-10051[*OCPBUGS-10051*]) - -* Previously, mirroring images for Operators would fail when the registry domain name was not a part of the image reference. With this fix, the images are downloaded from `docker.io` if the registry domain name is not specified.(link:https://issues.redhat.com/browse/OCPBUGS-10348[*OCPBUGS-10348*]) - -* Previously, when both tag and digest were included in container image references, oc-mirror would incorrectly interpret it resulting in an `invalid reference format` error. This issue has been fixed and the images are successfully mirrored. (link:https://issues.redhat.com/browse/OCPBUGS-11840[*OCPBUGS-11840*]) - -* Previously, you could not create a `CatalogSource` resource if the name started with a number. With this fix, by default, the `CatalogSource` resource name is generated with the `cs-` prefix and is compliant with RFC 1035. (link:https://issues.redhat.com/browse/OCPBUGS-13332[*OCPBUGS-13332*]) - -* Previously, when using the `registries.conf` file, some images were not included in the mapping. With this bug fix, you can now see the images included in the mapping without any errors. (link:https://issues.redhat.com/browse/OCPBUGS-13962[*OCPBUGS-13962*]) - -* Previously, while using the insecure mirrors in the `registries.conf` file that is referenced in `--oci-registries-config` flag, oc-mirror tried to establish an HTTPS connection with the mirror registry. With this fix, you can configure oc-mirror to not use an HTTPS connection by specifying either `--source-skip-tls` or `--source-use-http` in the command line. (link:https://issues.redhat.com/browse/OCPBUGS-14402[*OCPBUGS-14402*]) - -* Previously, image mirroring would fail when you attempted to mirror OCI indexes by using oc-mirror plugins. With this fix, you can mirror OCI indexes by using oc-mirror plugins. (link:https://issues.redhat.com/browse/OCPBUGS-15329[*OCPBUGS-15329*]) - -* Previously, when mirroring several large catalogs on a low-bandwidth network, mirroring would be interrupted due to an expired authentication token resulting in an `HTTP 401 unauthorized` error. This issue is now fixed by refreshing the authentication tokens before starting the mirroring process of each catalog. (link:https://issues.redhat.com/browse/OCPBUGS-20137[*OCPBUGS-20137*]) - -[discrete] -[id="ocp-4-14-olm-bug-fixes"] -==== Operator Lifecycle Manager (OLM) - -* Before this update, Operator Lifecycle Manager (OLM) could cause failed installations due to initialization errors when the API server was busy. This update fixes the issue by adding a one-minute-retry interval for initialization errors. 
-
-[discrete]
-[id="ocp-4-14-olm-bug-fixes"]
-==== Operator Lifecycle Manager (OLM)
-
-* Before this update, Operator Lifecycle Manager (OLM) could cause failed installations due to initialization errors when the API server was busy. This update fixes the issue by adding a one-minute retry interval for initialization errors.
-(link:https://issues.redhat.com/browse/OCPBUGS-13128[*OCPBUGS-13128*])
-
-* Before this update, a race condition occurred if custom catalogs used the same names as the default Red Hat catalogs in a disconnected environment. If the default Red Hat catalogs were disabled, the catalogs were created at startup and deleted after the OperatorHub custom resource (CR) was reconciled. As a result, the custom catalogs were deleted along with the default Red Hat catalogs. With this update, the OperatorHub CR is reconciled before any catalogs are deleted, preventing the race condition.
-(link:https://issues.redhat.com/browse/OCPBUGS-9357[*OCPBUGS-9357*])
-
-* Before this update, the channels of some Operators were displayed on OperatorHub in a random order. With this update, Operator channels are displayed in lexicographical order.
-(link:https://issues.redhat.com/browse/OCPBUGS-7910[*OCPBUGS-7910*])
-
-* Before this update, registry pods were not drained gracefully by the autoscaler if the `controller` flag was not set to `true` in the owner references field. With this update, the `controller` flag is set to `true`, and draining nodes no longer requires a forceful shutdown.
-(link:https://issues.redhat.com/browse/OCPBUGS-7431[*OCPBUGS-7431*])
-
-* Before this update, `collect-profiles` pods caused regular spikes of CPU usage due to the way certificates were generated. With this update, certificates are generated daily, the loading of the certificate is optimized, and CPU usage is lower.
-(link:https://issues.redhat.com/browse/OCPBUGS-1684[*OCPBUGS-1684*])
-
-[discrete]
-[id="ocp-4-14-openshift-api-server-bug-fixes"]
-==== OpenShift API server
-
-* Previously, the `metadata.namespace` field was automatically populated in update and patch requests to the `projects` resource. As a result, the affected requests generated spurious validation errors. With this release, the `metadata.namespace` field is no longer automatically populated. (link:https://issues.redhat.com/browse/OCPBUGS-8232[*OCPBUGS-8232*])
-
-[discrete]
-[id="ocp-4-14-rhcos-bug-fixes"]
-==== {op-system-first}
-
-* Previously, pods in {product-title} that accessed block persistent volume claim (PVC) storage with logical volume manager (LVM) metadata could get stuck when terminating. This occurred because the same LVM devices were active both inside the container and on the host, for example when running a virtual machine inside a pod by using {VirtProductName}, which in turn used LVM for the virtual machine. With this update, {op-system} by default attempts to set up and access only the devices that are listed in the `/etc/lvm/devices/system.devices` file. This prevents contention for the LVM devices that are used inside virtual machine guests. (link:https://issues.redhat.com/browse/OCPBUGS-5223[*OCPBUGS-5223*])
-
-* Previously, pods on Google Cloud Platform (GCP) Confidential Computing instances were stuck in the `ContainerCreating` state because of a volume mount failure. This fix adds support for the Persistent Disk storage type for Confidential Computing instances in GCP, which can be used as persistent volumes in {product-title}. As a result, pods can enter the `Running` state and volumes can be mounted.
-(link:https://issues.redhat.com/browse/OCPBUGS-7582[*OCPBUGS-7582*])
-
-//[discrete]
-//[id="ocp-4-14-scalability-and-performance-bug-fixes"]
-//==== Scalability and performance
-
-[discrete]
-[id="ocp-4-14-storage-bug-fixes"]
-==== Storage
-
-* Previously, when the cluster-wide proxy was enabled on {ibmcloudVPCRegProductName} clusters, volume provisioning failed. This issue has been fixed. (link:https://issues.redhat.com/browse/OCPBUGS-18142[*OCPBUGS-18142*])
-
-* The `vsphereStorageDriver` field of the Storage Operator object has been deprecated. This field was used to opt in to CSI migration on {product-title} 4.13 vSphere clusters, but it has no effect on {product-title} {product-version} and newer clusters. (link:https://issues.redhat.com/browse/OCPBUGS-13914[*OCPBUGS-13914*])
-
-//[discrete]
-//[id="ocp-4-14-windows-containers-bug-fixes"]
-//==== Windows containers
-// Added after OCP GA
-//* Previously, Windows nodes could not be deconfigured because of `containerd` log files that were not removed. Now, the `containerd` runtime is stopped before the log files are removed, fixing the issue. (link:https://issues.redhat.com/browse/OCPBUGS-14700[*OCPBUGS-14700*])
-
-[id="ocp-4-14-technology-preview"]
-== Technology Preview features
-
-Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features:
-
-link:https://access.redhat.com/support/offerings/techpreview[Technology Preview Features Support Scope]
-
-In the following tables, features are marked with the following statuses:
-
-* _Technology Preview_
-* _General Availability_
-* _Not Available_
-* _Deprecated_
-
-[discrete]
-=== Networking Technology Preview features
-
-.Networking Technology Preview tracker
-[cols="4,1,1,1",options="header"]
-|====
-|Feature |4.12 |4.13 |4.14
-
-|PTP dual NIC hardware configured as boundary clock
-|Technology Preview
-|General Availability
-|General Availability
-
-|Ingress Node Firewall Operator
-|Technology Preview
-|Technology Preview
-|General Availability
-
-|Advertise using L2 mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses
-|Technology Preview
-|Technology Preview
-|Technology Preview
-
-|Multi-network policies for SR-IOV networks
-|Technology Preview
-|Technology Preview
-|Technology Preview
-
-|OVN-Kubernetes network plugin as secondary network
-|Not Available
-|Technology Preview
-|General Availability
-
-|Updating the interface-specific safe sysctls list
-|Technology Preview
-|Technology Preview
-|Technology Preview
-
-|MT2892 Family [ConnectX-6 Dx] SR-IOV support
-|Technology Preview
-|General Availability
-|General Availability
-
-|MT2894 Family [ConnectX-6 Lx] SR-IOV support
-|Technology Preview
-|General Availability
-|General Availability
-
-|MT42822 BlueField-2 in ConnectX-6 NIC mode SR-IOV support
-|Technology Preview
-|General Availability
-|General Availability
-
-|Silicom STS Family SR-IOV support
-|Technology Preview
-|General Availability
-|General Availability
-
-|MT2892 Family [ConnectX-6 Dx] OvS Hardware Offload support
-|Technology Preview
-|General Availability
-|General Availability
-
-|MT2894 Family [ConnectX-6 Lx] OvS Hardware Offload support
-|Technology Preview
-|General Availability
-|General Availability
-
-|MT42822 BlueField-2 in ConnectX-6 NIC mode OvS Hardware Offload support
-|Technology Preview
-|General Availability
-|General Availability
-
-|Switching BlueField-2 from DPU to NIC
-|Technology Preview
-|General Availability
-|General Availability
-
-|Intel E810-XXVDA4T
-|Not Available
-|General Availability
-|General Availability
-
-|Egress service custom resource
-|Not Available
-|Not Available
-|Technology Preview
-
-|VRF specification in `BGPPeer` custom resource
-|Not Available
-|Not Available
-|Technology Preview
-
-|VRF specification in `NodeNetworkConfigurationPolicy` custom resource
-|Not Available
-|Not Available
-|Technology Preview
-
-|Admin Network Policy (`AdminNetworkPolicy`)
-|Not Available
-|Not Available
-|Technology Preview
-
-|IPsec external traffic (north-south)
-|Not Available
-|Not Available
-|Technology Preview
-
-|====
-
-[discrete]
-=== Storage Technology Preview features
-
-.Storage Technology Preview tracker
-[cols="4,1,1,1",options="header"]
-|====
-|Feature |4.12 |4.13 |4.14
-
-|Automatic device discovery and provisioning with Local Storage Operator
-|Technology Preview
-|Technology Preview
-|Technology Preview
-
-|Google Filestore CSI Driver Operator
-|Technology Preview
-|Technology Preview
-|General Availability
-
-|CSI automatic migration (Azure file, VMware vSphere)
-|Technology Preview
-|General Availability
-|General Availability
-
-|CSI inline ephemeral volumes
-|Technology Preview
-|General Availability
-|General Availability
-
-|{ibmpowerProductName} Virtual Server Block CSI Driver Operator
-|Not Available
-|Technology Preview
-|Technology Preview
-
-|NFS support for Azure File CSI Driver Operator
-|General Availability
-|General Availability
-|General Availability
-
-|Read Write Once Pod access mode
-|Not Available
-|Not Available
-|Technology Preview
-
-|Build CSI Volumes in OpenShift Builds
-|Technology Preview
-|Technology Preview
-|General Availability
-
-|Shared Resources CSI Driver in OpenShift Builds
-|Technology Preview
-|Technology Preview
-|Technology Preview
-
-|{secrets-store-operator}
-|Not Available
-|Not Available
-|Technology Preview
-
-|====
-
-[discrete]
-=== Installation Technology Preview features
-
-.Installation Technology Preview tracker
-[cols="4,1,1,1",options="header"]
-|====
-|Feature |4.12 |4.13 |4.14
-
-|Adding kernel modules to nodes with kvc
-|Technology Preview
-|Technology Preview
-|Technology Preview
-
-|Azure Tagging
-|Not Available
-|Technology Preview
-|General Availability
-
-|Enabling NIC partitioning for SR-IOV devices
-|Not Available
-|Technology Preview
-|Technology Preview
-
-|GCP Confidential VMs
-|Not Available
-|Technology Preview
-|General Availability
-
-|User-defined labels and tags for Google Cloud Platform (GCP)
-|Not Available
-|Not Available
-|Technology Preview
-
-|Installing a cluster on Alibaba Cloud by using installer-provisioned infrastructure
-|Technology Preview
-|Technology Preview
-|Technology Preview
-
-|Mount shared entitlements in BuildConfigs in RHEL
-|Technology Preview
-|Technology Preview
-|Technology Preview
-
-|Multi-architecture compute machines
-|Technology Preview
-|General Availability
-|General Availability
-
-|{product-title} on Oracle Cloud Infrastructure (OCI)
-|Not Available
-|Not Available
-|Developer Preview
-
-|Selectable Cluster Inventory
-|Technology Preview
-|Technology Preview
-|Technology Preview
-
-|Static IP addresses with vSphere (IPI only)
-|Not Available
-|Not Available
-|Technology Preview
-
-|====
-
-[discrete]
-=== Node Technology Preview features
-
-.Nodes Technology Preview tracker
-[cols="4,1,1,1",options="header"]
-|====
-|Feature |4.12 |4.13 |4.14
-
-|Linux Control Group version 2 (cgroup v2)
-|Technology Preview
-|General
Availability -|General Availability - -|crun container runtime -|Technology Preview -|General Availability -|General Availability - -|Cron job time zones -|Technology Preview -|Technology Preview -|Technology Preview - -|`MaxUnavailableStatefulSet` featureset -|Not Available -|Not Available -|Technology Preview - -|==== - -[discrete] -=== Multi-Architecture Technology Preview features - -.Multi-Architecture Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -|IBM Secure Execution on {ibmzProductName} and {linuxoneProductName} -|Technology Preview -|General Availability -|General Availability - -|{ibmpowerProductName} Virtual Server using installer-provisioned infrastructure -|Not Available -|Technology Preview -|Technology Preview - -|`kdump` on `arm64` architecture -|Technology Preview -|Technology Preview -|Technology Preview - -|`kdump` on `s390x` architecture -|Technology Preview -|Technology Preview -|Technology Preview - -|`kdump` on `ppc64le` architecture -|Technology Preview -|Technology Preview -|Technology Preview - -|==== - -[discrete] -=== Specialized hardware and driver enablement Technology Preview features - -.Specialized hardware and driver enablement Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -|Driver Toolkit -|General Availability -|General Availability -|General Availability - -|Hub and spoke cluster support -|Technology Preview -|General Availability -|General Availability - -|==== - -//// -[discrete] -=== Web console Technology Preview features - -.Web console Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -//|Multicluster console -//|Technology Preview -//|Technology Preview -//|Technology Preview - -|==== -//// - -[discrete] -[id="ocp-413-scalability-tech-preview"] -=== Scalability and performance Technology Preview features - -.Scalability and performance Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -|Hyperthreading-aware CPU manager policy -|Technology Preview -|Technology Preview -|Technology Preview - -|Node Observability Operator -|Technology Preview -|Technology Preview -|Technology Preview - -|{factory-prestaging-tool} -|Not Available -|Technology Preview -|Technology Preview - -|{sno-caps} cluster expansion with worker nodes -|Technology Preview -|General Availability -|General Availability - -|{cgu-operator-first} -|Technology Preview -|General Availability -|General Availability - -|Mount namespace encapsulation -|Not Available -|Technology Preview -|Technology Preview - -|NUMA-aware scheduling with NUMA Resources Operator -|Technology Preview -|General Availability -|General Availability - -|HTTP transport replaces AMQP for PTP and bare-metal events -|Not Available -|Technology Preview -|Technology Preview - -|Intel E810 Westport Channel NIC as PTP grandmaster clock -|Not Available -|Technology Preview -|Technology Preview - -|Workload partitioning for three-node clusters and standard clusters -|Not Available -|Technology Preview -|Technology Preview - -|==== - -[discrete] -[id="ocp-4-14-operators-tech-preview"] -=== Operator lifecycle and development Technology Preview features - -.Operator lifecycle and development Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -|Operator Lifecycle Manager (OLM) v1 -|Not Available -|Not Available -|Technology Preview - -|RukPak -|Technology Preview -|Technology 
Preview -|Technology Preview - -|Platform Operators -|Technology Preview -|Technology Preview -|Technology Preview - -|Hybrid Helm Operator -|Technology Preview -|Technology Preview -|Technology Preview - -|Java-based Operator -|Technology Preview -|Technology Preview -|Technology Preview - -|==== - -[discrete] -=== Monitoring Technology Preview features - -.Monitoring Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - - -|Alerting rules based on platform monitoring metrics -|Technology Preview -|Technology Preview -|General Availability - -|Metrics Collection Profiles -|Not Available -|Technology Preview -|Technology Preview - -|==== - -//// -[discrete] -=== {rh-openstack-first} Technology Preview features - -.{rh-openstack} Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -|External load balancers with installer-provisioned infrastructure -|Not Available -|Technology Preview -|General Availability - -|Dual-stack networking with installer-provisioned infrastructure -|Not Available -|Not Available -|Technology Preview - -|==== -//// - - -[discrete] -=== Architecture Technology Preview features - -.Architecture Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -|Hosted control planes for {product-title} on Amazon Web Services (AWS) -|Technology Preview -|Technology Preview -|Technology Preview -// Needs to move to GA after Nov 15, 2023 -|Hosted control planes for {product-title} on bare metal -|Technology Preview -|Technology Preview -|Technology Preview -// Needs to move to GA after Nov 15, 2023 -|Hosted control planes for {product-title} on {VirtProductName} -|Not Available -|Technology Preview -|Technology Preview - -|==== - -[discrete] -=== Machine management Technology Preview features - -.Machine management Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -|Managing machines with the Cluster API -|Technology Preview -|Technology Preview -|Technology Preview - -|Cloud controller manager for Alibaba Cloud -|Technology Preview -|Technology Preview -|Technology Preview - -|Cloud controller manager for Amazon Web Services -|Technology Preview -|Technology Preview -|General Availability - -|Cloud controller manager for Google Cloud Platform -|Technology Preview -|Technology Preview -|Technology Preview - -|Cloud controller manager for IBM Cloud Power VS -|Not Available -|Technology Preview -|Technology Preview - -|Cloud controller manager for Microsoft Azure -|Technology Preview -|Technology Preview -|General Availability - -|Cloud controller manager for Nutanix -|Technology Preview -|General Availability -|General Availability - -|Cloud controller manager for VMware vSphere -|Technology Preview -|General Availability -|General Availability - -|==== - -[discrete] -=== Authentication and authorization Technology Preview features - -.Authentication and authorization Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -|Pod security admission restricted enforcement -|Technology Preview -|Technology Preview -|Technology Preview - -|==== - -[discrete] -=== Machine Config Operator Technology Preview features - -.Machine Config Operator Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.12 |4.13 |4.14 - -|{op-system-first} image layering -|Technology Preview -|General Availability -|General Availability - -|==== - 
-[id="ocp-4-14-known-issues"] -== Known issues - -// TODO: This known issue should carry forward to 4.8 and beyond! This needs some SME/QE review before being updated for 4.11. Need to check if KI should be removed or should stay. -* In {product-title} 4.1, anonymous users could access discovery endpoints. Later releases revoked this access to reduce the possible attack surface for security exploits because some discovery endpoints are forwarded to aggregated API servers. However, unauthenticated access is preserved in upgraded clusters so that existing use cases are not broken. -+ -If you are a cluster administrator for a cluster that has been upgraded from {product-title} 4.1 to {product-version}, you can either revoke or continue to allow unauthenticated access. Unless there is a specific need for unauthenticated access, you should revoke it. If you do continue to allow unauthenticated access, be aware of the increased risks. -+ -[WARNING] -==== -If you have applications that rely on unauthenticated access, they might receive HTTP `403` errors if you revoke unauthenticated access. -==== -+ -Use the following script to revoke unauthenticated access to discovery endpoints: -+ -[source,bash] ----- -## Snippet to remove unauthenticated group from all the cluster role bindings -$ for clusterrolebinding in cluster-status-binding discovery system:basic-user system:discovery system:openshift:discovery ; -do -### Find the index of unauthenticated group in list of subjects -index=$(oc get clusterrolebinding ${clusterrolebinding} -o json | jq 'select(.subjects!=null) | .subjects | map(.name=="system:unauthenticated") | index(true)'); -### Remove the element at index from subjects array -oc patch clusterrolebinding ${clusterrolebinding} --type=json --patch "[{'op': 'remove','path': '/subjects/$index'}]"; -done ----- -+ -This script removes unauthenticated subjects from the following cluster role bindings: -+ --- -** `cluster-status-binding` -** `discovery` -** `system:basic-user` -** `system:discovery` -** `system:openshift:discovery` --- -+ -(link:https://bugzilla.redhat.com/show_bug.cgi?id=1821771[*BZ#1821771*]) - -// TODO: This known issue should carry forward to 4.9 and beyond! -* The `oc annotate` command does not work for LDAP group names that contain an equal sign (`=`), because the command uses the equal sign as a delimiter between the annotation name and value. As a workaround, use `oc patch` or `oc edit` to add the annotation. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1917280[*BZ#1917280*]) - -* If the installation program cannot get all of the projects that are associated with the Google Cloud Platform (GCP) service account, the installation fails with a `context deadline exceeded` error message. -+ -This behavior occurs when the following conditions are met: - -** The service account has access to an excessive number of projects. 
-** The installation program is run with one of the following commands:
-*** `openshift-install create install-config`
-+
-.Error message
-[source,text]
----
-FATAL failed to fetch Install Config: failed to fetch dependency of "Install Config": failed to fetch dependency of "Base Domain": failed to generate asset "Platform": failed to get projects: context deadline exceeded
----
-*** `openshift-install create cluster` without an existing installation configuration file (`install-config.yaml`)
-+
-.Error message
-[source,text]
----
-FATAL failed to fetch Metadata: failed to fetch dependency of "Metadata": failed to fetch dependency of "Cluster ID": failed to fetch dependency of "Install Config": failed to fetch dependency of "Base Domain": failed to generate asset "Platform": failed to get projects: context deadline exceeded
----
-*** `openshift-install create manifests` with or without an existing installation configuration file
-+
-.Error message
-[source,text]
----
-ERROR failed to fetch Master Machines: failed to load asset "Install Config": failed to create install config: platform.gcp.project: Internal error: context deadline exceeded
----
-+
-As a workaround, if you have an installation configuration file, update it with a specific project ID to use (`platform.gcp.projectID`). Otherwise, manually create an installation configuration file and enter a specific project ID. Run the installation program again, specifying the file. For an example of this setting, see the `install-config.yaml` snippet that follows the safe sysctl known issue later in this list. (link:https://issues.redhat.com/browse/OCPBUGS-15238[*OCPBUGS-15238*])
-
-* Booting fails on a large compute node. (link:https://issues.redhat.com/browse/OCPBUGS-20075[*OCPBUGS-20075*])
-
-* When you deploy a cluster with a Network Type of `OVNKubernetes` on {ibmpowerRegProductName}, compute nodes might reboot because of a kernel stack overflow. As a workaround, you can deploy the cluster with a Network Type of `OpenShiftSDN`. (link:https://issues.redhat.com/browse/RHEL-3901[*RHEL-3901*])
-
-* The following known issue applies to users who updated their {product-title} deployment to an early access version of {product-version} by using release candidate 3 or 4:
-+
-After the introduction of the node identity feature, some pods that were running as root were updated to run unprivileged. For users who updated to an early access version of {product-title} {product-version}, attempting to upgrade to the official version of {product-version} might not progress. In this scenario, the Network Operator reports the following state, indicating an issue with the update: `DaemonSet "/openshift-network-node-identity/network-node-identity" update is rolling`.
-+
-As a workaround, you can delete all pods in the `openshift-network-node-identity` namespace by running the following command: `oc delete --force=true -n openshift-network-node-identity --all pods`. After running this command, the update continues.
-+
-For more information about early access, see the xref:../updating/understanding_updates/understanding-update-channels-release.adoc#candidate-version-channel_understanding-update-channels-releases[candidate-4.14 channel].
-
-* Currently, users cannot modify the `interface-specific` safe sysctl list by updating the `cni-sysctl-allowlist` config map in the `openshift-multus` namespace. As a workaround, you can modify the `/etc/cni/tuning/allowlist.conf` file on the node or nodes, either manually or with a DaemonSet. (link:https://issues.redhat.com/browse/OCPBUGS-11046[*OCPBUGS-11046*])
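-
-The following `install-config.yaml` snippet is a minimal sketch of the `platform.gcp.projectID` workaround that is described in the Google Cloud Platform known issue earlier in this list; only the relevant `platform` stanza is shown, and the project ID and region values are placeholders:
-
-[source,yaml]
----
-# The project ID and region below are placeholders.
-platform:
-  gcp:
-    projectID: example-project-id
-    region: us-central1
----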
-
-* In {product-title} {product-version}, all nodes use Linux control group version 2 (cgroup v2) for internal resource management in alignment with the default {op-system-base} 9 configuration. However, if you apply a performance profile in your cluster, the low-latency tuning features associated with the performance profile do not support cgroup v2.
-+
-As a result, if you apply a performance profile, all nodes in the cluster reboot to switch back to the cgroup v1 configuration. This reboot includes control plane nodes and worker nodes that were not targeted by the performance profile.
-+
-To revert all nodes in the cluster to the cgroup v2 configuration, you must edit the `Node` resource. For more information, see xref:../nodes/clusters/nodes-cluster-cgroups-2.adoc#nodes-clusters-cgroups-2_nodes-cluster-cgroups-2[Configuring Linux cgroup v2]. You cannot revert the cluster to the cgroup v2 configuration by removing the last performance profile. (link:https://issues.redhat.com/browse/OCPBUGS-16976[*OCPBUGS-16976*])
-
-* AWS `M4` and `C4` instances might fail to boot properly in clusters installed by using {product-title} {product-version}. There is no current workaround. (link:https://issues.redhat.com/browse/OCPBUGS-17154[*OCPBUGS-17154*])
-
-* There is a known issue in this release that prevents installing a cluster on Alibaba Cloud by using installer-provisioned infrastructure. Installing a cluster on Alibaba Cloud is a Technology Preview feature in this release. (link:https://issues.redhat.com/browse/OCPBUGS-20552[*OCPBUGS-20552*])
-
-* From {product-title} {product-version} onwards, global IP address forwarding is disabled on OVN-Kubernetes-based cluster deployments to prevent undesirable effects for cluster administrators with nodes acting as routers. OVN-Kubernetes now enables and restricts forwarding on a per-managed-interface basis.
-+
-You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the `gatewayConfig.ipForwarding` specification in the `Network` resource. Specify `Restricted` to forward only traffic related to OVN-Kubernetes. Specify `Global` to allow forwarding of all IP traffic. For new installations, the default is `Restricted`. For upgrades to {product-version}, the default is `Global`. For an example, see the patch command that follows the {rh-openstack} known issues later in this list. (link:https://issues.redhat.com/browse/OCPBUGS-3176[*OCPBUGS-3176*]) (link:https://issues.redhat.com/browse/OCPBUGS-16051[*OCPBUGS-16051*])
-
-* For clusters that run on {rh-openstack} and have root volume availability zones, when you upgrade to {product-version} you must converge the control plane machines onto one server group before you can enable control plane machine sets. To make the required change, follow the instructions in the link:https://access.redhat.com/solutions/7013893[Knowledgebase article]. (link:https://issues.redhat.com/browse/OCPBUGS-13300[*OCPBUGS-13300*])
-
-* For clusters that run on {rh-openstack} and have compute zones configured with at least one zone, when you upgrade to {product-version}, root volumes must now also be configured with at least one zone. If this configuration change is not made, a control plane machine set cannot be generated for your cluster. To make the required change, follow the instructions in the link:https://access.redhat.com/solutions/7024383[Knowledgebase article]. (link:https://issues.redhat.com/browse/OCPBUGS-15997[*OCPBUGS-15997*])
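-
-The following command is a minimal sketch of changing the `gatewayConfig.ipForwarding` setting that is described in the IP address forwarding known issue earlier in this list. It assumes that the field is set on the OVN-Kubernetes configuration of the `Network` custom resource named `cluster` in the `operator.openshift.io` API group:
-
-[source,terminal]
----
-# Use "Restricted" instead of "Global" to limit forwarding to OVN-Kubernetes traffic only.
-$ oc patch network.operator.openshift.io cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding":"Global"}}}}}'
----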
-
-* Currently, an error might occur when deleting a pod that uses an SR-IOV network device. This error is caused by a change in {op-system-base} 9 where the previous name of a network interface is added to its alternative names list when it is renamed. As a consequence, when a pod attached to an SR-IOV virtual function (VF) is deleted, the VF returns to the pool with a new unexpected name, such as `dev69`, instead of its original name, such as `ensf0v2`. Although this error is not severe, the Multus and SR-IOV logs might show the error while the system recovers on its own. Deleting the pod might take a few seconds longer due to this error. (link:https://issues.redhat.com/browse/OCPBUGS-11281[*OCPBUGS-11281*], link:https://issues.redhat.com/browse/OCPBUGS-18822[*OCPBUGS-18822*], link:https://issues.redhat.com/browse/RHEL-5988[*RHEL-5988*])
-
-* Starting from {op-system-base} kernel version `5.14.0-284.28.1.el9_2`, if you configure an SR-IOV virtual function with a specific MAC address, configuration errors might occur in the i40e driver. Consequently, Intel 7xx Series NICs might have connectivity issues. As a workaround, avoid specifying MAC addresses in the `metadata.annotations` field in the `Pod` resource. Instead, use the default address that the driver assigns to the virtual function. (link:https://issues.redhat.com/browse/RHEL-7168[*RHEL-7168*], link:https://issues.redhat.com/browse/OCPBUGS-19536[*OCPBUGS-19536*], link:https://issues.redhat.com/browse/OCPBUGS-19407[*OCPBUGS-19407*], link:https://issues.redhat.com/browse/OCPBUGS-18873[*OCPBUGS-18873*])
-
-* Currently, defining a `sysctl` value for a setting with a slash in its name, such as for bond devices, in the `profile` field of a `Tuned` resource might not work. Values with a slash in the `sysctl` option name are not mapped correctly to the `/proc` filesystem. As a workaround, create a `MachineConfig` resource that places a configuration file with the required values in the `/etc/sysctl.d` node directory. For an example, see the `MachineConfig` sketch that follows the Machine Config Operator known issue later in this list. (link:https://issues.redhat.com/browse/RHEL-3707[*RHEL-3707*])
-
-* Currently, due to an issue with Kubernetes, the CPU Manager is unable to return CPU resources from the last pod admitted to a node to the pool of available CPU resources. These resources can be allocated if a subsequent pod is admitted to the node. However, this pod in turn becomes the last admitted pod, and again, the CPU Manager cannot return the resources of this pod to the available pool.
-+
-This issue affects the CPU load balancing features because these features depend on the CPU Manager releasing CPUs to the available pool. Consequently, non-guaranteed pods might run with a reduced number of CPUs. As a workaround, schedule a pod with a `best-effort` CPU Manager policy on the affected node. This pod will be the last admitted pod, ensuring that resources are properly released to the available pool. (link:https://issues.redhat.com/browse/OCPBUGS-17792[*OCPBUGS-17792*])
-
-* Currently, the Machine Config Operator (MCO) might apply an incorrect cgroup version argument for custom pools because of how the MCO handles machine configurations for worker pools and custom pools. As a consequence, nodes in the custom pool might have an incorrect cgroup kernel argument, resulting in unpredictable behavior. As a workaround, specify the cgroup version kernel arguments for worker and control plane pools only. (link:https://issues.redhat.com/browse/OCPBUGS-19352[*OCPBUGS-19352*])
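-
-The following `MachineConfig` resource is a minimal sketch of the `/etc/sysctl.d` workaround that is described in the `Tuned` known issue earlier in this list. The pool role, resource name, file name, Ignition version, and the URL-encoded sysctl setting are illustrative placeholders:
-
-[source,yaml]
----
-apiVersion: machineconfiguration.openshift.io/v1
-kind: MachineConfig
-metadata:
-  labels:
-    machineconfiguration.openshift.io/role: worker
-  name: 99-worker-bond-sysctl
-spec:
-  config:
-    ignition:
-      version: 3.2.0
-    storage:
-      files:
-      # Illustrative file; the source decodes to "net.ipv4.conf.bond0/100.rp_filter=2".
-      - path: /etc/sysctl.d/99-bond-sysctl.conf
-        mode: 0644
-        overwrite: true
-        contents:
-          source: data:,net.ipv4.conf.bond0%2F100.rp_filter%3D2%0A
----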
-
-* Currently, due to a race condition between the application of a `udev` rule on physical network devices and the application of the default requests per second (RPS) mask to all network devices, some physical network devices might have the wrong RPS mask configuration. As a consequence, a performance degradation might affect the physical network devices with the wrong RPS mask configuration. It is anticipated that an upcoming z-stream release will include a fix for this issue.
-(link:https://issues.redhat.com/browse/OCPBUGS-21845[*OCPBUGS-21845*])
-
-* Broadcom network interface controllers in legacy Single Root I/O Virtualization (SR-IOV) do not support quality of service (QoS) and tag protocol identifier (TPID) settings for the SR-IOV VLAN. This affects Broadcom BCM57414, Broadcom BCM57508, and Broadcom BCM57504. (link:https://issues.redhat.com/browse/RHEL-9881[*RHEL-9881*])
-
-* In hosted control planes for {product-title}, installing the File Integrity Operator on a hosted cluster fails. (link:https://issues.redhat.com/browse/OCPBUGS-3410[*OCPBUGS-3410*])
-
-* In hosted control planes for {product-title}, the Vertical Pod Autoscaler Operator fails to install on a hosted cluster. (link:https://issues.redhat.com/browse/PODAUTO-65[*PODAUTO-65*])
-
-* In hosted control planes for {product-title}, on the bare metal and {VirtProductName} platforms, the auto-repair function is disabled. (link:https://issues.redhat.com/browse/OCPBUGS-20028[*OCPBUGS-20028*])
-
-* In hosted control planes for {product-title}, using the {secrets-store-operator} with AWS Secrets Manager or AWS Systems Manager Parameter Store is not supported. (link:https://issues.redhat.com/browse/OCPBUGS-18711[*OCPBUGS-18711*])
-
-* In hosted control planes for {product-title}, the `default`, `kube-system`, and `kube-public` namespaces are not properly excluded from pod security admission. (link:https://issues.redhat.com/browse/OCPBUGS-22379[*OCPBUGS-22379*])
-
-* Agent-based installations on vSphere fail because a node taint is not removed, which causes the installation to become stuck in a pending state.
-{sno-caps} clusters are not impacted.
-You can work around this issue by running the following command to manually remove the node taint:
-+
-[source,terminal]
----
-$ oc adm taint nodes node.cloudprovider.kubernetes.io/uninitialized:NoSchedule-
----
-+
-(link:https://issues.redhat.com/browse/OCPBUGS-20049[*OCPBUGS-20049*])
-
-* There is a known issue with using Azure confidential virtual machines, which is a Technology Preview feature in this release. Configuring a cluster to encrypt the managed disk and the Azure VM Guest State (VMGS) blob with a platform-managed key (PMK) or a customer-managed key (CMK) is unsupported. To avoid this issue, enable encryption of only the VMGS blob by setting the value of the `securityEncryptionType` parameter to `VMGuestStateOnly`. (link:https://issues.redhat.com/browse/OCPBUGS-18379[*OCPBUGS-18379*])
-
-* There is a known issue with using Azure confidential virtual machines, which is a Technology Preview feature in this release. Installing a cluster configured to use this feature fails because the control plane provisioning process times out after 30 minutes.
-+
-If this occurs, you can run the `openshift-install create cluster` command a second time to complete the installation.
-+
-To avoid this issue, you can enable confidential VMs on an existing cluster by using machine sets.
-(link:https://issues.redhat.com/browse/OCPBUGS-18488[*OCPBUGS-18488*])
-
-* When you run hosted control planes for {product-title} on a bare-metal platform, if a worker node fails, another node is not automatically added to the hosted cluster, even when other agents are available. As a workaround, manually delete the machine that is associated with the failed worker node. (link:https://issues.redhat.com/browse/MGMT-15939[*MGMT-15939*])
-
-* Because the source catalog bundles an architecture-specific `opm` binary, you must run the mirroring from that architecture. For instance, if you are mirroring a `ppc64le` catalog, you must run oc-mirror from a system that runs on the `ppc64le` architecture. (link:https://issues.redhat.com/browse/OCPBUGS-22264[*OCPBUGS-22264*])
-
-* If more than one {product-title} group points to the same LDAP group, only one {product-title} group is synced. The `oc adm groups sync` command prints a warning when multiple groups point to the same LDAP group, indicating that only a single group is eligible for mapping. (link:https://issues.redhat.com/browse/OCPBUGS-11123[*OCPBUGS-11123*])
-
-* Installation fails when you install {product-title} with `bootMode` set to `UEFISecureBoot` on a node where Secure Boot is disabled. Subsequent attempts to install {product-title} with Secure Boot enabled proceed normally. (link:https://issues.redhat.com/browse/OCPBUGS-19884[*OCPBUGS-19884*])
-
-* In {product-title} {product-version}, a `MachineConfig` object with Ignition version 3.4 might cause scans to fail because the `api-collector` pods enter a `CrashLoopBackOff` state, which prevents the Compliance Operator from working as expected. (link:https://issues.redhat.com/browse/OCPBUGS-18025[*OCPBUGS-18025*])
-
-* In {product-title} {product-version}, assigning an IPv6 egress IP to a network interface that is not the primary network interface is unsupported. This is a known issue and will be fixed in a future version of {product-title}. (link:https://issues.redhat.com/browse/OCPBUGS-17637[*OCPBUGS-17637*])
-
-[id="ocp-telco-ran-4-14-known-issues"]
-* When you run CNF latency tests on an {product-title} cluster, the `oslat` test can sometimes return results greater than 20 microseconds. This results in an `oslat` test failure.
-(link:https://issues.redhat.com/browse/RHEL-9279[*RHEL-9279*])
-
-* When you use `preempt-rt` patches with the realtime kernel and you update the SMP affinity of a network interrupt, the corresponding IRQ thread does not immediately receive the update.
-Instead, the update takes effect when the next interrupt is received, and the thread is subsequently migrated to the correct core.
-(link:https://issues.redhat.com/browse/RHEL-9148[*RHEL-9148*])
-
-* Low-latency applications that rely on high-resolution timers to wake up their threads might experience higher wake up latencies than expected. Although the expected wake up latency is under 20μs, latencies exceeding this can occasionally be seen when running the `cyclictest` tool for long durations (24 hours or more). Testing has shown that wake up latencies are under 20μs for over 99.999999% of the samples.
-(link:https://issues.redhat.com/browse/RHELPLAN-138733[*RHELPLAN-138733*])
-
-* The global navigation satellite system (GNSS) module in an Intel Westport Channel e810 NIC that is configured as a grandmaster clock (T-GM) can report the GPS `FIX` state and the GNSS offset between the GNSS module and the GNSS constellation satellites.
-+
-The current T-GM implementation does not use the `ubxtool` CLI to probe the `ublox` module for reading the GNSS offset and GPS `FIX` values.
-Instead, it uses the `gpsd` service to read the GPS `FIX` information.
-This is because the current implementation of the `ubxtool` CLI takes 2 seconds to receive a response, and with every call, it increases CPU usage threefold.
-(link:https://issues.redhat.com/browse/OCPBUGS-17422[*OCPBUGS-17422*])
-
-* In a PTP grandmaster clock that is sourced from GNSS, when the GNSS signal is lost, the Digital Phase Locked Loop (DPLL) clock state can change in two ways: it can transition to unlocked, or it can enter a holdover state. Currently, the driver transitions the DPLL state to unlocked by default.
-An upstream change is currently being developed to handle the holdover state functionality and to configure which state machine handling is used.
-(link:https://issues.redhat.com/browse/RHELPLAN-164754[*RHELPLAN-164754*])
-
-* The DPLL subsystem and DPLL support are not currently enabled in the Intel Westport Channel e810 NIC ice driver.
-(link:https://issues.redhat.com/browse/RHELPLAN-165955[*RHELPLAN-165955*])
-
-* The current grandmaster clock (T-GM) implementation has a single NMEA sentence generator sourced from the GNSS without a backup NMEA sentence generator.
-If NMEA sentences are lost on their way to the e810 NIC, the T-GM cannot synchronize the devices in the network synchronization chain and the PTP Operator reports an error.
-A proposed fix is to report a `FREERUN` event when the NMEA string is lost.
-(link:https://issues.redhat.com/browse/OCPBUGS-19838[*OCPBUGS-19838*])
-
-* Currently, due to differences in setting up a container's cgroup hierarchy, containers that use the `crun` OCI runtime along with a `PerformanceProfile` configuration encounter performance degradation.
-As a workaround, use the `runc` OCI container runtime.
-Although the `runc` container runtime has lower performance during container startup, shutdown operations, and `exec` probes, the `crun` and `runc` container runtimes are functionally identical.
-It is anticipated that an upcoming z-stream release will include a fix for this issue.
-(link:https://issues.redhat.com/browse/OCPBUGS-20492[*OCPBUGS-20492*])
-
-* There is a known issue where enabling and then disabling IPsec at runtime causes the cluster to be in an unhealthy state with the error message `an unknown error has occurred: MultipleErrors`. (link:https://issues.redhat.com/browse/OCPBUGS-19408[*OCPBUGS-19408*])
-
-* Creating pods with Microsoft Azure File NFS volumes that are scheduled to a control plane node causes the mount to be denied.
-+
-To work around this issue, if your control plane nodes are schedulable and the pods can run on worker nodes, use `nodeSelector` or affinity to schedule the pods on worker nodes. For an example, see the `nodeSelector` sketch that follows the Kuryr known issue later in this list. (link:https://issues.redhat.com/browse/OCPBUGS-18581[*OCPBUGS-18581*])
-
-* For clusters that run on {rh-openstack} 17.1 and use network function virtualization (NFV), a known issue in {rh-openstack} prevents successful cluster deployment. There is no workaround for this issue. Contact Red Hat Support to request a hotfix. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2228643[*BZ2228643*])
-
-* There is no support for Kuryr installations on {rh-openstack} 17.1.
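-
-The following pod definition is a minimal sketch of the `nodeSelector` workaround that is described in the Microsoft Azure File NFS known issue earlier in this list; the pod name and image are placeholders, and the volume definitions are omitted:
-
-[source,yaml]
----
-apiVersion: v1
-kind: Pod
-metadata:
-  name: azure-file-nfs-app   # placeholder name
-spec:
-  nodeSelector:
-    node-role.kubernetes.io/worker: ""
-  containers:
-  - name: app
-    image: registry.example.com/app:latest   # placeholder image
----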
-
-* Currently, the update to HAProxy version 2.6.13 in {product-title} {product-version} causes an increase in P99 latency for re-encrypt traffic. This is observed when the volume of ingress traffic puts the HAProxy component of the `IngressController` custom resource (CR) under a considerable load. The latency increase does not affect overall throughput, which remains consistent.
-+
-The default `IngressController` CR is configured with 4 HAProxy threads. If you experience elevated P99 latencies during high ingress traffic conditions, specifically with re-encrypt traffic, it is recommended that you increase the number of HAProxy threads to reduce latency. (link:https://issues.redhat.com/browse/OCPBUGS-18936[*OCPBUGS-18936*])
-
-* For {sno-caps} on {product-version} and Google Cloud Platform (GCP), there is a known issue with the Cloud Network Config Controller (CNCC) entering a `CrashLoopBackOff` state. This occurs at initialization time, when the CNCC tries to reach the GCP internal load balancer address and the resulting hairpin traffic is not correctly prevented in OVN-Kubernetes shared gateway mode on GCP, which causes the traffic to be dropped. The Cluster Network Operator shows a `Progressing=true` status in this case. Currently, there is no workaround for this issue. (link:https://issues.redhat.com/browse/OCPBUGS-20554[*OCPBUGS-20554*])
-
-* There is a known issue that prevents installing a cluster on or updating a cluster to this version of {product-title} on Microsoft Azure Stack Hub. For more details and a workaround, see the information in this link:https://access.redhat.com/solutions/7040264[Red Hat Knowledgebase article]. (link:https://issues.redhat.com/browse/OCPBUGS-20548[*OCPBUGS-20548*])
-
-[id="ocp-4-14-asynchronous-errata-updates"]
-== Asynchronous errata updates
-
-Security, bug fix, and enhancement updates for {product-title} {product-version} are released as asynchronous errata through the Red Hat Network. All {product-title} {product-version} errata are https://access.redhat.com/downloads/content/290/[available on the Red Hat Customer Portal]. See the https://access.redhat.com/support/policy/updates/openshift[{product-title} Life Cycle] for more information about asynchronous errata.
-
-Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified through email whenever new errata relevant to their registered systems are released.
-
-[NOTE]
-====
-Red Hat Customer Portal user accounts must have systems registered and consuming {product-title} entitlements for {product-title} errata notification emails to be generated.
-====
-
-This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of {product-title} {product-version}. Versioned asynchronous releases, for example with the form {product-title} {product-version}.z, will be detailed in subsections. In addition, releases in which the errata text cannot fit in the space provided by the advisory will be detailed in subsections that follow.
-
-[IMPORTANT]
-====
-For any {product-title} release, always review the instructions on xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[updating your cluster] properly.
-====
-
-//Update with relevant advisory information
-[id="ocp-4-14-0-ga"]
-=== RHSA-2023:5006 - {product-title} {product-version}.0 image release, bug fix, and security update advisory
-
-Issued: 2023-10-31
-
-{product-title} release {product-version}.0, which includes security updates, is now available.
The list of bug fixes that are included in the update is documented in the link:https://access.redhat.com/errata/RHSA-2023:5006[RHSA-2023:5006] advisory. The RPM packages that are included in the update are provided by the link:https://access.redhat.com/errata/RHSA-2023:5009[RHSA-2023:5009] advisory. - -Space precluded documenting all of the container images for this release in the advisory. - -You can view the container images in this release by running the following command: - -[source,terminal] ----- -$ oc adm release info 4.14.0 --pullspecs ----- -//replace 4.y.z for the correct values for the release. You do not need to update oc to run this command. diff --git a/release_notes/ocp-4-15-release-notes.adoc b/release_notes/ocp-4-15-release-notes.adoc new file mode 100644 index 0000000000..49c0001748 --- /dev/null +++ b/release_notes/ocp-4-15-release-notes.adoc @@ -0,0 +1,1324 @@ +:_mod-docs-content-type: ASSEMBLY +[id="ocp-4-15-release-notes"] += {product-title} {product-version} release notes +include::_attributes/common-attributes.adoc[] +:context: release-notes + +toc::[] + +Red Hat {product-title} provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management overhead. {product-title} supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP. + +Built on {op-system-base-full} and Kubernetes, {product-title} provides a more secure and scalable multitenant operating system for today's enterprise-class applications, while delivering integrated application runtimes and libraries. {product-title} enables organizations to meet security, privacy, compliance, and governance requirements. + +[id="ocp-4-15-about-this-release"] +== About this release + +// TODO: Update with the relevant information closer to release. +{product-title} (link:https://access.redhat.com/errata/RHSA-2024:XXXX[RHSA-2024:XXXX]) is now available. This release uses link:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md[Kubernetes 1.27] with CRI-O runtime. New features, changes, and known issues that pertain to {product-title} {product-version} are included in this topic. + +{product-title} {product-version} clusters are available at https://console.redhat.com/openshift. With the {cluster-manager-first} application for {product-title}, you can deploy {product-title} clusters to either on-premises or cloud environments. + +// Double check OP system versions +{product-title} {product-version} is supported on {op-system-base-full} 8.6, 8.7, and 8.8 as well as on {op-system-first} 4.13. + +You must use {op-system} machines for the control plane, and you can use either {op-system} or {op-system-base} for compute machines. +//Removed the note per https://issues.redhat.com/browse/GRPA-3517 + +//TODO: Add this for 4.14 +Starting with {product-title} 4.12, an additional six months is added to the Extended Update Support (EUS) phase on even numbered releases from 18 months to two years. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]. + +Starting with {product-title} {product-version}, Extended Update Support (EUS) is extended to 64-bit ARM, {ibmpowerProductName} (ppc64le), and {ibmzProductName} (s390x) platforms. 
For more information, see the link:https://access.redhat.com/support/policy/updates/openshift-eus[OpenShift EUS Overview]. + +//TODO: Add the line below for EUS releases. +{product-title} {product-version} is an Extended Update Support (EUS) release. More information on Red Hat OpenShift EUS is available in link:https://access.redhat.com/support/policy/updates/openshift#ocp4_phases[OpenShift Life Cycle] and link:https://access.redhat.com/support/policy/updates/openshift-eus[OpenShift EUS Overview]. + +//TODO: The line below should be used when it is next appropriate. Revisit in August 2023 time frame. +Maintenance support ends for version 4.12 on 25 January 2025 and goes to extended life phase. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]. + +Commencing with the {product-version} release, Red Hat is simplifying the administration and management of Red Hat shipped cluster Operators with the introduction of three new life cycle classifications; Platform Aligned, Platform Agnostic, and Rolling Stream. These life cycle classifications provide additional ease and transparency for cluster administrators to understand the life cycle policies of each Operator and form cluster maintenance and upgrade plans with predictable support boundaries. For more information, see link:https://access.redhat.com/webassets/avalon/j/includes/session/scribe/?redirectTo=https%3A%2F%2Faccess.redhat.com%2Fsupport%2Fpolicy%2Fupdates%2Fopenshift_operators[OpenShift Operator Life Cycles]. + +// Added in 4.14. Language came directly from Kirsten Newcomer. +{product-title} is designed for FIPS. When running {op-system-base-full} or {op-system-first} booted in FIPS mode, {product-title} core components use the {op-system-base} cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the `x86_64`, `ppc64le`, and `s390x` architectures. + +For more information about the NIST validation program, see link:https://csrc.nist.gov/Projects/cryptographic-module-validation-program/validated-modules[Cryptographic Module Validation Program]. For the latest NIST status for the individual versions of {op-system-base} cryptographic libraries that have been submitted for validation, see link:https://access.redhat.com/articles/2918071#fips-140-2-and-fips-140-3-2[Compliance Activities and Government Standards]. + +[id="ocp-4-15-add-on-support-status"] +== {product-title} layered and dependent component support and compatibility + +The scope of support for layered and dependent components of {product-title} changes independently of the {product-title} version. To determine the current support status and compatibility for an add-on, refer to its release notes. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]. + +[id="ocp-4-15-new-features-and-enhancements"] +== New features and enhancements + +This release adds improvements related to the following components and concepts. 
+ +[id="ocp-4-15-rhcos"] +=== {op-system-first} + +[id="ocp-4-15-installation-and-update"] +=== Installation and update + +[id="ocp-4-15-web-console"] +=== Web console + +[id="ocp-4-15-openshift-cli"] +=== OpenShift CLI (oc) + +[id="ocp-4-15-ibm-z"] +=== {ibmzProductName}(R) and {linuxoneProductName} + +[id="ocp-4-15-ibm-power"] +=== {ibmpowerRegProductName} + +[discrete] +==== {ibmpowerRegProductName} notable enhancements + +[discrete] +=== {ibmpowerRegProductName}, {ibmzRegProductName}, and {linuxoneProductName} support matrix + +.{product-title} features +[cols="3,1,1",options="header"] +|==== +|Feature |{ibmpowerRegProductName} |{ibmzRegProductName} and {linuxoneProductName} + +|Alternate authentication providers +|Supported +|Supported + +|Automatic Device Discovery with Local Storage Operator +|Unsupported +|Supported + +|Automatic repair of damaged machines with machine health checking +|Unsupported +|Unsupported + +|Cloud controller manager for IBM Cloud +|Supported +|Unsupported + +|Controlling overcommit and managing container density on nodes +|Unsupported +|Unsupported + +|Cron jobs +|Supported +|Supported + +|Descheduler +|Supported +|Supported + +|Egress IP +|Supported +|Supported + +|Encrypting data stored in etcd +|Supported +|Supported + +|FIPS cryptography +|Supported +|Supported + +|Helm +|Supported +|Supported + +|Horizontal pod autoscaling +|Supported +|Supported + +|IBM Secure Execution +|Unsupported +|Supported + +|{ibmpowerProductName} Virtual Server Block CSI Driver Operator (Technology Preview) +|Supported +|Unsupported + +|Installer-provisioned Infrastructure Enablement for {ibmpowerProductName} Virtual Server (Technology Preview) +|Supported +|Unsupported + +|Installing on a single node +|Supported +|Supported + +|IPv6 +|Supported +|Supported + +|Monitoring for user-defined projects +|Supported +|Supported + +|Multi-architecture compute nodes +|Supported +|Supported + +|Multipathing +|Supported +|Supported + +|Network-Bound Disk Encryption - External Tang Server +|Supported +|Supported + +|Non--volatile memory express drives (NVMe) +|Supported +|Unsupported + +|oc-mirror plugin +|Supported +|Supported + +|OpenShift CLI (`oc`) plugins +|Supported +|Supported + +|Operator API +|Supported +|Supported + +|OpenShift Virtualization +|Unsupported +|Unsupported + +|OVN-Kubernetes, including IPsec encryption +|Supported +|Supported + +|PodDisruptionBudget +|Supported +|Supported + +|Precision Time Protocol (PTP) hardware +|Unsupported +|Unsupported + +|{openshift-local-productname} +|Unsupported +|Unsupported + +|Scheduler profiles +|Supported +|Supported + +|Stream Control Transmission Protocol (SCTP) +|Supported +|Supported + +|Support for multiple network interfaces +|Supported +|Supported + +|Three-node cluster support +|Supported +|Supported + +|Topology Manager +|Supported +|Unsupported + +|z/VM Emulated FBA devices on SCSI disks +|Unsupported +|Supported + +|4K FCP block device +|Supported +|Supported +|==== + +.Persistent storage options +[cols="2,1,1",options="header"] +|==== +|Feature |{ibmpowerProductName} |{ibmzProductName} and {linuxoneProductName} +|Persistent storage using iSCSI +|Supported ^[1]^ +|Supported ^[1]^,^[2]^ + +|Persistent storage using local volumes (LSO) +|Supported ^[1]^ +|Supported ^[1]^,^[2]^ + +|Persistent storage using hostPath +|Supported ^[1]^ +|Supported ^[1]^,^[2]^ + +|Persistent storage using Fibre Channel +|Supported ^[1]^ +|Supported ^[1]^,^[2]^ + +|Persistent storage using Raw Block +|Supported ^[1]^ +|Supported ^[1]^,^[2]^ + 
+|Persistent storage using EDEV/FBA +|Supported ^[1]^ +|Supported ^[1]^,^[2]^ +|==== +[.small] +-- +1. Persistent shared storage must be provisioned by using either {rh-storage-first} or other supported storage protocols. +2. Persistent non-shared storage must be provisioned by using local storage, such as iSCSI, FC, or by using LSO with DASD, FCP, or EDEV/FBA. +-- + +.Operators +[cols="2,1,1",options="header"] +|==== +|Feature |{ibmpowerRegProductName} |{ibmzRegProductName} and {linuxoneProductName} + +|Cluster Logging Operator +|Supported +|Supported + +|Cluster Resource Override Operator +|Supported +|Supported + +|Compliance Operator +|Supported +|Supported + +|File Integrity Operator +|Supported +|Supported + +|HyperShift Operator +|Technology Preview +|Technology Preview + +|Local Storage Operator +|Supported +|Supported + +|MetalLB Operator +|Supported +|Supported + +|NFD Operator +|Supported +|Supported + +|NMState Operator +|Supported +|Supported + +|OpenShift Elasticsearch Operator +|Supported +|Supported + +|Service Binding Operator +|Supported +|Supported + +|Vertical Pod Autoscaler Operator +|Supported +|Supported +|==== + +.Multus CNI plugins +[cols="2,1,1",options="header"] +|==== +|Feature |{ibmpowerRegProductName} |{ibmzRegProductName} and {linuxoneProductName} + +|Bridge +|Supported +|Supported + +|Host-device +|Supported +|Supported + +|IPAM +|Supported +|Supported + +|IPVLAN +|Supported +|Supported +|==== + +.CSI Volumes +[cols="2,1,1",options="header"] +|==== +|Feature |{ibmpowerRegProductName} |{ibmzRegProductName} and {linuxoneProductName} + +|Cloning +|Supported +|Supported + +|Expansion +|Supported +|Supported + +|Snapshot +|Supported +|Supported +|==== + +[id="ocp-4-15-auth"] +=== Authentication and authorization + +[id="ocp-4-15-networking"] +=== Networking + +[id="ocp-4-15-registry"] +=== Registry + +[id="ocp-4-15-storage"] +=== Storage + +[id="ocp-4-15-oci"] +=== Oracle(R) Cloud Infrastructure + +[id="ocp-4-15-olm"] +=== Operator lifecycle + +[id="ocp-4-15-osdk"] +=== Operator development + +[id="ocp-4-15-builds"] +=== Builds + +[id="ocp-4-15-machine-config-operator"] +=== Machine Config Operator + +[id="ocp-4-15-machine-api"] +=== Machine API + +[id="ocp-4-15-nodes"] +=== Nodes + +[id="ocp-4-15-monitoring"] +=== Monitoring + +[id="ocp-4-15-network-observability-1-5"] +=== Network Observability Operator +The Network Observability Operator releases updates independently from the {product-title} minor version release stream. Updates are available through a single, rolling stream which is supported on all currently supported versions of {product-title} 4. Information regarding new features, enhancements, and bug fixes for the Network Observability Operator is found in the xref:../network_observability/network-observability-operator-release-notes.adoc[Network Observability release notes]. + +[id="ocp-4-15-scalability-and-performance"] +=== Scalability and performance + +[id="ocp-4-15-hcp"] +=== Hosted control planes + + +[id="ocp-4-15-insights-operator"] +=== Insights Operator + +[id="ocp-4-15-notable-technical-changes"] +== Notable technical changes + +{product-title} {product-version} introduces the following notable technical changes. + +// Note: use [discrete] for these sub-headings. 
+// example sub-heading below: +//[discrete] +//[id="ocp-4-15-cluster-cloud-controller-manager-operator"] +//=== Cloud controller managers for additional cloud providers + +[id="ocp-4-15-deprecated-removed-features"] +== Deprecated and removed features + +Some features available in previous releases have been deprecated or removed. + +Deprecated functionality is still included in {product-title} and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality deprecated and removed within {product-title} {product-version}, refer to the table below. Additional details for more functionality that has been deprecated and removed are listed after the table. + +In the following tables, features are marked with the following statuses: + +* _General Availability_ +* _Deprecated_ +* _Removed_ + +[discrete] +[id="ocp-4-15-operators-dep-rem"] +=== Operator lifecycle and development deprecated and removed features + +.Operator lifecycle and development deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|SQLite database format for Operator catalogs +|Deprecated +|Deprecated +|Deprecated + +|==== + +[discrete] +=== Images deprecated and removed features + +.Images deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|`ImageChangesInProgress` condition for Cluster Samples Operator +|Deprecated +|Deprecated +|Deprecated + +|`MigrationInProgress` condition for Cluster Samples Operator +|Deprecated +|Deprecated +|Deprecated + +|==== + +[discrete] +=== Monitoring deprecated and removed features + +.Monitoring deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|==== + +[discrete] +=== Installation deprecated and removed features + +.Installation deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|`--cloud` parameter for `oc adm release extract` +|General Availability +|Deprecated +|Deprecated + +|CoreDNS wildcard queries for the `cluster.local` domain +|Deprecated +|Deprecated +|Deprecated + +|`compute.platform.openstack.rootVolume.type` for {rh-openstack} +|General Availability +|Deprecated +|Deprecated + +|`controlPlane.platform.openstack.rootVolume.type` for {rh-openstack} +|General Availability +|Deprecated +|Deprecated + +|`ingressVIP` and `apiVIP` settings in the `install-config.yaml` file for installer-provisioned infrastructure clusters +|Deprecated +|Deprecated +|Deprecated + +|`platform.gcp.licenses` for Google Cloud Provider +|Deprecated +|Removed +|Removed + +|==== +[.small] +-- +1. For {product-title} {product-version}, you must install the {product-title} cluster on a VMware vSphere version 7.0 Update 2 or later instance, including VMware vSphere version 8.0, that meets the requirements for the components that you use. 
+-- +[discrete] +=== Updating clusters deprecated and removed features + +.Updating clusters deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|==== + +[discrete] +=== Storage deprecated and removed features + +.Storage deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|Persistent storage using FlexVolume +|Deprecated +|Deprecated +|Deprecated + +|==== + +[discrete] +=== Authentication and authorization deprecated and removed features + +.Authentication and authorization deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|==== +[discrete] +=== Specialized hardware and driver enablement deprecated and removed features + +.Specialized hardware and driver enablement deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|==== + +[discrete] +=== Multi-architecture deprecated and removed features + +.Multi-architecture deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|==== + +[discrete] +=== Networking deprecated and removed features + +.Networking deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|Kuryr on {rh-openstack} +|Deprecated +|Deprecated +|Deprecated + +|==== + +[discrete] +=== Web console deprecated and removed features + +.Web console deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|==== + +[discrete] +=== Node deprecated and removed features + +.Node deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|`ImageContentSourcePolicy` (ICSP) objects +|Deprecated +|Deprecated +|Deprecated + +|Kubernetes topology label `failure-domain.beta.kubernetes.io/zone` +|Deprecated +|Deprecated +|Deprecated + +|Kubernetes topology label `failure-domain.beta.kubernetes.io/region` +|Deprecated +|Deprecated +|Deprecated + +|==== + +[discrete] +=== OpenShift CLI (oc) deprecated and removed features +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|`--include-local-oci-catalogs` parameter for `oc-mirror` +|General Availability +|Removed +|Removed + +|`--use-oci-feature` parameter for `oc-mirror` +|Deprecated +|Removed +|Removed + +|==== + +[discrete] +=== Workloads deprecated and removed features + +.Workloads deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|`DeploymentConfig` objects +|General Availability +|Deprecated +|Deprecated + +|==== + +[id="ocp-4-15-deprecated-features"] +=== Deprecated features + +[id="ocp-4-15-removed-features"] +=== Removed features + +.APIs removed from Kubernetes 1.27 +[cols="2,2,2",options="header",] +|=== +|Resource |Removed API |Migrate to + +|`CSIStorageCapacity` +|`storage.k8s.io/v1beta1` +|`storage.k8s.io/v1` + +|=== + +[id="ocp-4-15-future-deprecation"] +=== Notice of future deprecation + +[id="ocp-4-15-bug-fixes"] +== Bug fixes +//Bug fix work for TELCODOCS-750 +//Bare Metal Hardware Provisioning / OS Image Provider +//Bare Metal Hardware Provisioning / baremetal-operator +//Bare Metal Hardware Provisioning / cluster-baremetal-operator +//Bare Metal Hardware Provisioning / ironic" +//CNF Platform Validation +//Cloud Native Events / Cloud Event Proxy +//Cloud Native Events / Cloud Native Events +//Cloud Native Events / Hardware Event Proxy +//Cloud Native Events 
+//Driver Toolkit +//Installer / Assisted installer +//Installer / OpenShift on Bare Metal IPI +//Networking / ptp +//Node Feature Discovery Operator +//Performance Addon Operator +//Telco Edge / HW Event Operator +//Telco Edge / RAN +//Telco Edge / TALO +//Telco Edge / ZTP +[discrete] +[id="ocp-4-15-api-auth-bug-fixes"] +==== API Server and Authentication + +[discrete] +[id="ocp-4-15-bare-metal-hardware-bug-fixes"] +==== Bare Metal Hardware Provisioning + +[discrete] +[id="ocp-4-15-builds-bug-fixes"] +==== Builds + +[discrete] +[id="ocp-4-15-cloud-compute-bug-fixes"] +==== Cloud Compute + +[discrete] +[id="ocp-4-15-cloud-cred-operator-bug-fixes"] +==== Cloud Credential Operator + +[discrete] +[id="ocp-4-15-cluster-version-operator-bug-fixes"] +==== Cluster Version Operator + +[discrete] +[id="ocp-4-15-dev-console-bug-fixes"] +==== Developer Console + +[discrete] +[id="ocp-4-15-cloud-etcd-operator-bug-fixes"] +==== etcd Cluster Operator + + +[discrete] +[id="ocp-4-15-hosted-control-plane-bug-fixes"] +==== Hosted Control Plane + +[discrete] +[id="ocp-4-15-image-registry-bug-fixes"] +==== Image Registry + + +[discrete] +[id="ocp-4-15-installer-bug-fixes"] +==== Installer + +[discrete] +[id="ocp-4-15-kube-controller-bug-fixes"] +==== Kubernetes Controller Manager + +[discrete] +[id="ocp-4-15-kube-scheduler-bug-fixes"] +==== Kubernetes Scheduler + + +[discrete] +[id="ocp-4-15-machine-config-operator-bug-fixes"] +==== Machine Config Operator + +[discrete] +[id="ocp-4-15-management-console-bug-fixes"] +==== Management Console + +[discrete] +[id="ocp-4-15-monitoring-bug-fixes"] +==== Monitoring + +[discrete] +[id="ocp-4-15-networking-bug-fixes"] +==== Networking + +[discrete] +[id="ocp-4-15-node-bug-fixes"] +==== Node + +[discrete] +[id="ocp-4-15-node-tuning-operator-bug-fixes"] +==== Node Tuning Operator (NTO) + +[discrete] +[id="ocp-4-15-openshift-cli-bug-fixes"] +==== OpenShift CLI (oc) + +[discrete] +[id="ocp-4-15-olm-bug-fixes"] +==== Operator Lifecycle Manager (OLM) + +[discrete] +[id="ocp-4-15-openshift-api-server-bug-fixes"] +==== OpenShift API server + +[discrete] +[id="ocp-4-15-rhcos-bug-fixes"] +==== {op-system-first} + +[discrete] +[id="ocp-4-15-scalability-and-performance-bug-fixes"] +==== Scalability and performance + +[discrete] +[id="ocp-4-15-storage-bug-fixes"] +==== Storage + +[discrete] +[id="ocp-4-15-windows-containers-bug-fixes"] +==== Windows containers +// Added after OCP GA + +[id="ocp-4-15-technology-preview"] +== Technology Preview features + +Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. 
Note the following scope of support on the Red Hat Customer Portal for these features: + +link:https://access.redhat.com/support/offerings/techpreview[Technology Preview Features Support Scope] + +In the following tables, features are marked with the following statuses: + +* _Technology Preview_ +* _General Availability_ +* _Not Available_ +* _Deprecated_ + +[discrete] +=== Networking Technology Preview features + +.Networking Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|Ingress Node Firewall Operator +|Technology Preview +|General Availability +|General Availability + +|Advertise using L2 mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses +|Technology Preview +|Technology Preview +|Technology Preview + +|Multi-network policies for SR-IOV networks +|Technology Preview +|Technology Preview +|Technology Preview + +|OVN-Kubernetes network plugin as secondary network +|Technology Preview +|General Availability +|General Availability + +|Updating the interface-specific safe sysctls list +|Technology Preview +|Technology Preview +|Technology Preview + +|Egress service custom resource +|Not Available +|Technology Preview +|Technology Preview + +|VRF specification in `BGPPeer` custom resource +|Not Available +|Technology Preview +|Technology Preview + +|VRF specification in `NodeNetworkConfigurationPolicy` custom resource +|Not Available +|Technology Preview +|Technology Preview + +|Admin Network Policy (`AdminNetworkPolicy`) +|Not Available +|Technology Preview +|Technology Preview + +|IPsec external traffic (north-south) +|Not Available +|Technology Preview +|Technology Preview + +|==== + +[discrete] +=== Storage Technology Preview features + +.Storage Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|Automatic device discovery and provisioning with Local Storage Operator +|Technology Preview +|Technology Preview +|Technology Preview + +|Google Filestore CSI Driver Operator +|Technology Preview +|General Availability +|General Availability + +|{ibmpowerProductName} Virtual Server Block CSI Driver Operator +|Technology Preview +|Technology Preview +|Technology Preview + +|Read Write Once Pod access mod +|Not available +|Technology Preview +|Technology Preview + +|Build CSI Volumes in OpenShift Builds +|Technology Preview +|General Availability +|General Availability + +|Shared Resources CSI Driver in OpenShift Builds +|Technology Preview +|Technology Preview +|Technology Preview + +|{secrets-store-operator} +|Not available +|Technology Preview +|Technology Preview + +|==== + +[discrete] +=== Installation Technology Preview features + +.Installation Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|Adding kernel modules to nodes with kvc +|Technology Preview +|Technology Preview +|Technology Preview + +|Azure Tagging +|Technology Preview +|General Availability +|General Availability + +|Enabling NIC partitioning for SR-IOV devices +|Technology Preview +|Technology Preview +|Technology Preview + +|GCP Confidential VMs +|Technology Preview +|General Availability +|General Availability + +|User-defined labels and tags for Google Cloud Platform (GCP) +|Not Available +|Technology Preview +|Technology Preview + +|Installing a cluster on Alibaba Cloud by using installer-provisioned infrastructure +|Technology Preview +|Technology Preview +|Technology Preview + +|Mount shared entitlements in BuildConfigs in RHEL 
+|Technology Preview +|Technology Preview +|Technology Preview + +|{product-title} on Oracle Cloud Infrastructure (OCI) +|Not Available +|Developer Preview +|Developer Preview + +|Selectable Cluster Inventory +|Technology Preview +|Technology Preview +|Technology Preview + +|Static IP addresses with vSphere (IPI only) +|Not Available +|Technology Preview +|Technology Preview + +|==== + +[discrete] +=== Node Technology Preview features + +.Nodes Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|Cron job time zones +|Technology Preview +|Technology Preview +|Technology Preview + +|`MaxUnavailableStatefulSet` featureset +|Not Available +|Technology Preview +|Technology Preview + +|==== + +[discrete] +=== Multi-Architecture Technology Preview features + +.Multi-Architecture Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|{ibmpowerProductName} Virtual Server using installer-provisioned infrastructure +|Technology Preview +|Technology Preview +|Technology Preview + +|`kdump` on `arm64` architecture +|Technology Preview +|Technology Preview +|Technology Preview + +|`kdump` on `s390x` architecture +|Technology Preview +|Technology Preview +|Technology Preview + +|`kdump` on `ppc64le` architecture +|Technology Preview +|Technology Preview +|Technology Preview + +|==== + +[discrete] +=== Specialized hardware and driver enablement Technology Preview features + +.Specialized hardware and driver enablement Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|==== + + +[discrete] +=== Web console Technology Preview features + +.Web console Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|Multicluster console +|Technology Preview +|Technology Preview +|Technology Preview + +|==== + + +[discrete] +[id="ocp-413-scalability-tech-preview"] +=== Scalability and performance Technology Preview features + +.Scalability and performance Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|Hyperthreading-aware CPU manager policy +|Technology Preview +|Technology Preview +|Technology Preview + +|Node Observability Operator +|Technology Preview +|Technology Preview +|Technology Preview + +|{factory-prestaging-tool} +|Technology Preview +|Technology Preview +|Technology Preview + +|Mount namespace encapsulation +|Technology Preview +|Technology Preview +|Technology Preview + +|HTTP transport replaces AMQP for PTP and bare-metal events +|Technology Preview +|Technology Preview +|Technology Preview + +|Intel E810 Westport Channel NIC as PTP grandmaster clock +|Technology Preview +|Technology Preview +|Technology Preview + +|Workload partitioning for three-node clusters and standard clusters +|Technology Preview +|Technology Preview +|Technology Preview + +|==== + +[discrete] +[id="ocp-4-15-operators-tech-preview"] +=== Operator lifecycle and development Technology Preview features + +.Operator lifecycle and development Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|Operator Lifecycle Manager (OLM) v1 +|Not Available +|Technology Preview +|Technology Preview + +|RukPak +|Technology Preview +|Technology Preview +|Technology Preview + +|Platform Operators +|Technology Preview +|Technology Preview +|Technology Preview + +|Hybrid Helm Operator +|Technology Preview +|Technology Preview +|Technology Preview + +|Java-based 
Operator +|Technology Preview +|Technology Preview +|Technology Preview + +|==== + +[discrete] +=== Monitoring Technology Preview features + +.Monitoring Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + + +|Alerting rules based on platform monitoring metrics +|Technology Preview +|General Availability +|General Availability + +|Metrics Collection Profiles +|Technology Preview +|Technology Preview +|Technology Preview + +|==== + + +[discrete] +=== {rh-openstack-first} Technology Preview features + +.{rh-openstack} Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|External load balancers with installer-provisioned infrastructure +|Technology Preview +|General Availability +|General Availability + +|Dual-stack networking with installer-provisioned infrastructure +|Not Available +|Technology Preview +|Technology Preview + +|==== + +[discrete] +=== Architecture Technology Preview features + +.Architecture Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|Hosted control planes for {product-title} on Amazon Web Services (AWS) +|Technology Preview +|Technology Preview +|Technology Preview +// Needs to move to GA after Nov 15, 2023 +|Hosted control planes for {product-title} on bare metal +|Technology Preview +|Technology Preview +|General Availability +// Needs to move to GA after Nov 15, 2023 +|Hosted control planes for {product-title} on {VirtProductName} +|Not Available +|Technology Preview +|General Availability + +|==== + +[discrete] +=== Machine management Technology Preview features + +.Machine management Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|Managing machines with the Cluster API +|Technology Preview +|Technology Preview +|Technology Preview + +|Cloud controller manager for Alibaba Cloud +|Technology Preview +|Technology Preview +|Technology Preview + +|Cloud controller manager for Amazon Web Services +|Technology Preview +|General Availability +|General Availability + +|Cloud controller manager for Google Cloud Platform +|Technology Preview +|Technology Preview +|Technology Preview + +|Cloud controller manager for IBM Cloud Power VS +|Technology Preview +|Technology Preview +|Technology Preview + +|Cloud controller manager for Microsoft Azure +|Technology Preview +|General Availability +|General Availability + +|==== + +[discrete] +=== Authentication and authorization Technology Preview features + +.Authentication and authorization Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|Pod security admission restricted enforcement +|Technology Preview +|Technology Preview +|Technology Preview + +|==== + +[discrete] +=== Machine Config Operator Technology Preview features + +.Machine Config Operator Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.13 |4.14 |4.15 + +|==== + +[id="ocp-4-15-known-issues"] +== Known issues + +// TODO: This known issue should carry forward to 4.9 and beyond! +* The `oc annotate` command does not work for LDAP group names that contain an equal sign (`=`), because the command uses the equal sign as a delimiter between the annotation name and value. As a workaround, use `oc patch` or `oc edit` to add the annotation. 
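(link:https://bugzilla.redhat.com/show_bug.cgi?id=1917280[*BZ#1917280*])
+
+The following is a minimal sketch of the `oc patch` workaround, assuming a group whose name contains an equal sign, such as `cn=developers`; the annotation key and value shown are illustrative only:
+
+[source,terminal]
+----
+# Example only: the group name and the annotation key and value are placeholders
+$ oc patch group 'cn=developers' --type=merge -p '{"metadata":{"annotations":{"example.com/team":"platform"}}}'
+----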
+
+[id="ocp-telco-ran-4-15-known-issues"]
+
+[id="ocp-4-15-asynchronous-errata-updates"]
+== Asynchronous errata updates
+
+Security, bug fix, and enhancement updates for {product-title} {product-version} are released as asynchronous errata through the Red Hat Network. All {product-title} {product-version} errata are https://access.redhat.com/downloads/content/290/[available on the Red Hat Customer Portal]. See the https://access.redhat.com/support/policy/updates/openshift[{product-title} Life Cycle] for more information about asynchronous errata.
+
+Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified by email whenever new errata relevant to their registered systems are released.
+
+[NOTE]
+====
+Red Hat Customer Portal user accounts must have systems registered and consuming {product-title} entitlements for {product-title} errata notification emails to be generated.
+====
+
+This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of {product-title} {product-version}. Versioned asynchronous releases, for example with the form {product-title} {product-version}.z, will be detailed in subsections. In addition, releases in which the errata text cannot fit in the space provided by the advisory will be detailed in subsections that follow.
+
+[IMPORTANT]
+====
+For any {product-title} release, always review the instructions on how to xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[update your cluster] properly.
+====
+
+//Update with relevant advisory information
+[id="ocp-4-15-0-ga"]
+=== RHSA-2024:XXXX - {product-title} {product-version}.0 image release, bug fix, and security update advisory
+
+Issued: TBD
+
+{product-title} release {product-version}.0, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the link:https://access.redhat.com/errata/RHSA-2024:XXXX[RHSA-2024:XXXX] advisory. The RPM packages that are included in the update are provided by the link:https://access.redhat.com/errata/RHSA-2024:XXXX[RHSA-2024:XXXX] advisory.
+
+Space precluded documenting all of the container images for this release in the advisory.
+
+You can view the container images in this release by running the following command:
+
+[source,terminal]
+----
+$ oc adm release info 4.15.0 --pullspecs
+----
+//Replace 4.y.z with the correct values for the release. You do not need to update oc to run this command.
diff --git a/welcome/index.adoc b/welcome/index.adoc
index f6d5efd959..9c17d8c05e 100644
--- a/welcome/index.adoc
+++ b/welcome/index.adoc
@@ -58,7 +58,7 @@ Start with xref:../architecture/architecture.adoc#architecture-overview-architec
 xref:../security/container_security/security-understanding.adoc#understanding-security[Security and compliance].
 ifdef::openshift-enterprise,openshift-webscale[]
 Next, view the
-xref:../release_notes/ocp-4-14-release-notes.adoc#ocp-4-14-release-notes[release notes].
+xref:../release_notes/ocp-4-15-release-notes.adoc#ocp-4-15-release-notes[release notes].
 endif::[]
 ifdef::openshift-online,openshift-aro[]