diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index a7cda2d659..d010ab72e2 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -51,8 +51,8 @@ Name: Release notes Dir: release_notes Distros: openshift-enterprise Topics: -- Name: OpenShift Container Platform 4.12 release notes - File: ocp-4-12-release-notes +- Name: OpenShift Container Platform 4.13 release notes + File: ocp-4-13-release-notes --- Name: Getting started Dir: getting_started diff --git a/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc b/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc index f0599dfa6e..1f334af954 100644 --- a/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc +++ b/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc @@ -9,7 +9,7 @@ toc::[] {product-title} {product-version} introduces architectural changes and enhancements/ The procedures that you used to manage your {product-title} 3 cluster might not apply to {product-title} 4. ifndef::openshift-origin[] -For information on configuring your {product-title} 4 cluster, review the appropriate sections of the {product-title} documentation. For information on new features and other notable technical changes, review the xref:../release_notes/ocp-4-12-release-notes.adoc#ocp-4-12-release-notes[OpenShift Container Platform 4.12 release notes]. +For information on configuring your {product-title} 4 cluster, review the appropriate sections of the {product-title} documentation. For information on new features and other notable technical changes, review the xref:../release_notes/ocp-4-13-release-notes.adoc#ocp-4-13-release-notes[OpenShift Container Platform 4.13 release notes]. endif::[] It is not possible to upgrade your existing {product-title} 3 cluster to {product-title} 4. You must start with a new {product-title} 4 installation. Tools are available to assist in migrating your control plane settings and application workloads. diff --git a/release_notes/ocp-4-12-release-notes.adoc b/release_notes/ocp-4-12-release-notes.adoc deleted file mode 100644 index 431e4ebc7c..0000000000 --- a/release_notes/ocp-4-12-release-notes.adoc +++ /dev/null @@ -1,3098 +0,0 @@ -:_content-type: ASSEMBLY -[id="ocp-4-12-release-notes"] -= {product-title} {product-version} release notes -include::_attributes/common-attributes.adoc[] -:context: release-notes - -toc::[] - -Red Hat {product-title} provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management overhead. {product-title} supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP. - -Built on {op-system-base-full} and Kubernetes, {product-title} provides a more secure and scalable multitenant operating system for today's enterprise-class applications, while delivering integrated application runtimes and libraries. {product-title} enables organizations to meet security, privacy, compliance, and governance requirements. - -[id="ocp-4-12-about-this-release"] -== About this release - -// TODO: Update with the relevant information closer to release. -{product-title} (link:https://access.redhat.com/errata/RHSA-2022:7399[RHSA-2022:7399]) is now available. This release uses link:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md[Kubernetes 1.25] with CRI-O runtime. 
New features, changes, and known issues that pertain to {product-title} {product-version} are included in this topic. - -{product-title} {product-version} clusters are available at https://console.redhat.com/openshift. With the {cluster-manager-first} application for {product-title}, you can deploy OpenShift clusters to either on-premises or cloud environments. - -// Double check OP system versions -{product-title} {product-version} is supported on {op-system-base-full} 8.4 and 8.5, as well as on {op-system-first} 4.12. - -You must use {op-system} machines for the control plane, and you can use either {op-system} or {op-system-base} for compute machines. -//Removed the note per https://issues.redhat.com/browse/GRPA-3517 - -//TODO: Remove this for 4.13 -Starting with {product-title} {product-version}, the Extended Update Support (EUS) phase for even-numbered releases is extended by an additional six months, from 18 months to two years. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]. - -//TODO: Add the line below for EUS releases. -{product-title} 4.8 is an Extended Update Support (EUS) release. More information on Red Hat OpenShift EUS is available in link:https://access.redhat.com/support/policy/updates/openshift#ocp4_phases[OpenShift Life Cycle] and link:https://access.redhat.com/support/policy/updates/openshift-eus[OpenShift EUS Overview]. - -//TODO: The line below should be used when it is next appropriate. Revisit in April 2023 timeframe. -Maintenance support ends for version 4.8 in January 2023 and goes to extended life phase. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]. - - -[id="ocp-4-12-add-on-support-status"] -== {product-title} layered and dependent component support and compatibility - -The scope of support for layered and dependent components of {product-title} changes independently of the {product-title} version. To determine the current support status and compatibility for an add-on, refer to its release notes. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]. - -[id="ocp-4-12-new-features-and-enhancements"] -== New features and enhancements - -This release adds improvements related to the following components and concepts. - -[id="ocp-4-12-rhcos"] -=== {op-system-first} - -[id="ocp-4-12-default-graphical-console"] -==== Default consoles for new clusters are now determined by the installation platform -{op-system-first} nodes installed from an {product-title} {product-version} boot image now use a platform-specific default console. The default consoles on cloud platforms correspond to the specific system consoles expected by that cloud provider. VMware and OpenStack images now use a primary graphical console and a secondary serial console. Other bare metal installations now use only the graphical console by default, and do not enable a serial console. Installations performed with `coreos-installer` can override existing defaults and enable the serial console. - -Existing nodes are not affected. New nodes on existing clusters are not likely to be affected because they are typically installed from the boot image that was originally used to install the cluster.
- -For information about how to enable the serial console, see the following documentation: - -* xref:../installing/installing_bare_metal/installing-bare-metal.adoc#installation-user-infra-machines-advanced-console-configuration_installing-bare-metal[Default console configuration]. - -* xref:../installing/installing_bare_metal/installing-bare-metal.adoc#installation-user-infra-machines-advanced-customizing-live-iso-serial-console_installing-bare-metal[Modifying a live install ISO image to enable the serial console]. - -* xref:../installing/installing_bare_metal/installing-bare-metal.adoc#installation-user-infra-machines-advanced-customizing-live-pxe-serial-console_installing-bare-metal[Modifying a live install PXE environment to enable the serial console]. - -[id="ocp-4-12-secure-execution-z-linux-one"] -==== IBM Secure Execution on {ibmzProductName} and LinuxONE (Technology Preview) -{product-title} now supports configuring {op-system-first} nodes for IBM Secure Execution on {ibmzProductName} and LinuxONE (s390x architecture) as a Technology Preview feature. IBM Secure Execution is a hardware enhancement that protects memory boundaries for KVM guests. IBM Secure Execution provides the highest level of isolation and security for cluster workloads, and you can enable it by using an IBM Secure Execution-ready QCOW2 boot image. - -To use IBM Secure Execution, you must have host keys for your host machine(s) and they must be specified in your Ignition configuration file. IBM Secure Execution automatically encrypts your boot volumes using LUKS encryption. - -For more information, see xref:../installing/installing_ibm_z/installing-ibm-z-kvm.adoc#installing-rhcos-using-ibm-secure-execution_installing-ibm-z-kvm[Installing {op-system} using IBM Secure Execution]. - -[id="ocp-4-12-rhcos-rhel-8-6-packages"] -==== {op-system} now uses {op-system-base} 8.6 - -{op-system} now uses {op-system-base-full} 8.6 packages in {product-title} {product-version}. This enables you to have the latest fixes, features, and enhancements, as well as the latest hardware support and driver updates. {product-title} 4.10 is an Extended Update Support (EUS) release that will continue to use RHEL 8.4 EUS packages for the entirety of its lifecycle. - -[id="ocp-4-12-installation-and-upgrade"] -=== Installation and upgrade - -[id="ocp-4-12-aws-load-balancer-customization"] -==== Specify the load balancer type in AWS during installation -Beginning with {product-title} {product-version}, you can specify either Network Load Balancer (NLB) or Classic as a persistent load balancer type in AWS during installation. Afterwards, if an Ingress Controller is deleted, the load balancer type persists with the lbType configured during installation. - -For more information, see xref:../installing/installing_aws/installing-aws-network-customizations.adoc[Installing a cluster on AWS with network customizations]. - -[id="ocp-4-12-aws-local-zones-customization"] -==== Extend worker nodes to the edge of AWS when installing into an existing Virtual Private Cloud (VPC) with Local Zone subnets. - -With this update you can install {product-title} to an existing VPC with installer-provisioned infrastructure, extending the worker nodes to Local Zones subnets. The installation program will provision worker nodes on the edge of the AWS network that are specifically designated for user applications by using NoSchedule taints. Applications deployed on the Local Zones locations deliver low latency for end users. 
- -For more information, see xref:../installing/installing_aws/installing-aws-localzone.adoc[Installing a cluster using AWS Local Zones]. - - -[id="ocp-4-12-installation-and-upgrade-gcp-marketplace"] -==== Google Cloud Platform Marketplace offering -{product-title} is now available on the GCP Marketplace. Installing an {product-title} with a GCP Marketplace image lets you create self-managed cluster deployments that are billed on pay-per-use basis (hourly, per core) through GCP, while still being supported directly by Red Hat. - -For more information about installing using installer-provisioned infrastructure, see xref:../installing/installing_gcp/installing-gcp-customizations.adoc#installation-gcp-marketplace_installing-gcp-customizations[Using a GCP Marketplace image]. For more information about installing a using user-provisioned infrastructure, see xref:../installing/installing_gcp/installing-gcp-user-infra.adoc#installation-creating-gcp-worker_installing-gcp-user-infra[Creating additional worker machines in GCP]. - -[id="ocp-4-12-gcp-azure-serial-console-logs"] -==== Troubleshooting bootstrap failures during installation on GCP and Azure -The installer now gathers serial console logs from the bootstrap and control plane hosts on GCP and Azure. This log data is added to the standard bootstrap log bundle. - -For more information, see xref:../installing/installing-troubleshooting.adoc#installation-bootstrap-gather_installing-troubleshooting[Troubleshooting installation issues]. - -[id="ocp-4-12-ibm-cloud-vpc"] -==== IBM Cloud VPC general availability -IBM Cloud VPC is now generally available in {product-title} {product-version}. - -For more information about installing a cluster, see xref:../installing/installing_ibm_cloud_public/preparing-to-install-on-ibm-cloud.adoc#preparing-to-install-on-ibm-cloud[Preparing to install on IBM Cloud VPC]. - -[id="ocp-4-12-admin-ack-upgrading"] -==== Required administrator acknowledgment when upgrading from {product-title} 4.11 to 4.12 - -{product-title} 4.12 uses Kubernetes 1.25, which removed xref:../release_notes/ocp-4-12-release-notes.adoc#ocp-4-12-removed-kube-1-25-apis[several deprecated APIs]. - -A cluster administrator must provide a manual acknowledgment before the cluster can be upgraded from {product-title} 4.11 to 4.12. This is to help prevent issues after upgrading to {product-title} 4.12, where APIs that have been removed are still in use by workloads, tools, or other components running on or interacting with the cluster. Administrators must evaluate their cluster for any APIs in use that will be removed and migrate the affected components to use the appropriate new API version. After this is done, the administrator can provide the administrator acknowledgment. - -All {product-title} 4.11 clusters require this administrator acknowledgment before they can be upgraded to {product-title} 4.12. - -For more information, see xref:../updating/updating-cluster-prepare.adoc#updating-cluster-prepare[Preparing to update to {product-title} 4.12]. - -[id="ocp-4-12-feature-set"] -==== Enabling a feature set when installing a cluster -Beginning with {product-title} {product-version}, you can enable a feature set as part of the installation process. A feature set is a collection of {product-title} features that are not enabled by default. - -For more information about enabling a feature set during installation, see xref:../nodes/clusters/nodes-cluster-enabling-features.adoc#nodes-cluster-enabling[Enabling {product-title} features using feature gates]. 
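The following `install-config.yaml` excerpt is an illustrative sketch of enabling a feature set at installation time; all values are placeholders and most required fields are omitted:

[source,yaml]
----
# install-config.yaml excerpt (illustrative only)
apiVersion: v1
baseDomain: example.com
metadata:
  name: example-cluster
featureSet: TechPreviewNoUpgrade # enables the Technology Preview feature set for the cluster
----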
- -[id="ocp-4-12-arm-on-azure"] -==== {product-title} on ARM -{product-title} {product-version} is now supported on ARM architecture-based Azure installer-provisioned infrastructure. AWS Graviton 3 processors are now available for cluster deployments and are also supported on {product-title} 4.11. For more information about instance availability and installation documentation, see xref:../installing/installing-preparing.adoc#supported-installation-methods-for-different-platforms[Supported installation methods for different platforms] - -[id="ocp-4-12-oc-mirror-oci"] -==== Mirroring file-based catalog Operator images in OCI format with the oc-mirror CLI plug-in (Technology Preview) - -Using the oc-mirror CLI plug-in to mirror file-based catalog Operator images in OCI format instead of Docker v2 format is now available as a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview]. - -For more information, see xref:../installing/disconnected_install/installing-mirroring-disconnected.adoc#oc-mirror-oci-format_installing-mirroring-disconnected[Mirroring file-based catalog Operator images in OCI format]. - -[id="ocp-4-12-gcp-shared-vpc"] -==== Installing an {product-title} cluster on GCP into a shared VPC (Technology Preview) -In {product-title} {product-version}, you can install a cluster on GCP into a shared VPC as a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview]. In this installation method, the cluster is configured to use a VPC from a different GCP project. A shared VPC enables an organization to connect resources from multiple projects to a common VPC network. You can communicate within the organization securely and efficiently by using internal IP addresses from that network. - -For more information, see xref:../installing/installing_gcp/installing-gcp-shared-vpc.adoc#installing-gcp-shared-vpc[Installing a cluster on GCP into a shared VPC]. - -[id="ocp-4-12-consistent-address-for-ironicapi"] -==== Consistent IP address for Ironic API in bare-metal installations without a provisioning network -With this update, in bare-metal installations without a provisioning network, the Ironic API service is accessible through a proxy server. This proxy server provides a consistent IP address for the Ironic API service. If the Metal3 pod that contains `metal3-ironic` relocates to another pod, the consistent proxy address ensures constant communication with the Ironic API service. - -[id="ocp-4-12-gcp-service-account-installation"] -==== Installing {product-title} on GCP using service account authentication -In {product-title} {product-version}, you can install a cluster on GCP using a virtual machine with a service account attached to it. This allows you to perform an installation without needing to use a service account JSON file. - -For more information, see xref:../installing/installing_gcp/installing-gcp-account.adoc#installation-gcp-service-account_installing-gcp-account[Creating a GCP service account]. - -[id="ocp-4-12-aws-user-tag"] -==== `propagateUserTags` parameter for AWS resources provisioned by the {product-title} cluster -In {product-title} {product-version}, the `propagateUserTags` parameter is a flag that directs in-cluster Operators to include the specified user tags in the tags of the AWS resources that the Operators create. 
- -For more information, see xref:../installing/installing_aws/installing-aws-network-customizations.html#installation-configuration-parameters_installing-aws-network-customizations[Optional configuration parameters]. - -[id="ocp-4-12-ironic-base-image-rhel-9"] -==== Ironic container images use RHEL 9 base image -In earlier versions of {product-title}, Ironic container images used {op-system-base-full} 8 as the base image. From {product-title} {product-version}, Ironic container images use {op-system-base} 9 as the base image. The {op-system-base} 9 base image adds support for CentOS Stream 9, Python 3.8, and Python 3.9 in Ironic components. - -For more information about the Ironic provisioning service, see xref:../installing/installing_bare_metal_ipi/ipi-install-overview.adoc[Deploying installer-provisioned clusters on bare metal]. - -[id="ocp-4-12-openstack-external-cloud-control-upgrade"] -==== Cloud provider configuration updates for clusters that run on {rh-openstack} - -In {product-title} 4.12, clusters that run on {rh-openstack-first} are switched from the legacy OpenStack cloud provider to the external Cloud Controller Manager (CCM). This change follows the move in Kubernetes from in-tree, legacy cloud providers to external cloud providers that are implemented by using the link:https://kubernetes.io/docs/concepts/architecture/cloud-controller/[Cloud Controller Manager]. - -For more information, see xref:../installing/installing_openstack/installing-openstack-cloud-config-reference.adoc#nw-openstack-external-ccm_installing-openstack-cloud-config-reference[The OpenStack Cloud Controller Manager]. - -[id="ocp-4-12-install-shiftstack-deployments-on-dcn"] -==== Support for workloads on {rh-openstack} distributed compute nodes - -In {product-title} 4.12, cluster deployments to {rh-openstack-first} clouds that have distributed compute node (DCN) architecture were validated. A reference architecture for these deployments is forthcoming. - -For a brief overview of this type of deployment, see the blog post link:https://cloud.redhat.com/blog/deploying-your-cluster-at-the-edge-with-openstack[Deploying Your Cluster at the Edge With OpenStack]. - -[id="ocp-4-12-aws-outposts"] -==== {product-title} on AWS Outposts -{product-title} {product-version} is now supported on the AWS Outposts platform. With AWS Outposts you can deploy edge-based worker nodes, while using AWS Regions for the control plane nodes. The documentation for this feature is currently unavailable and is targeted for release at a later date. - -==== Agent-based installation supports two input modes -The Agent-based installation supports two input modes: - -* `install-config.yaml` file -* `agent-config.yaml` file - -.Optional - -* Zero Touch Provisioning (ZTP) manifests - -With the preferred mode, you can configure the `install-config.yaml` file and specify Agent-based specific settings in the `agent-config.yaml` file. -For more information, see xref:../installing/installing_with_agent_based_installer/preparing-to-install-with-agent-based-installer.adoc#about-the-agent-based-installer[About the Agent-based {product-title} Installer]. - -[id="ocp-4-12-agent-based-install-fips"] -==== Agent-based installation supports installing {product-title} clusters in FIPS compliant mode -Agent-based {product-title} Installer supports {product-title} clusters in Federal Information Processing Standards (FIPS) compliant mode. You must set the value of the `fips` field to `True` in the `install-config.yaml` file. 
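For example, a minimal `install-config.yaml` excerpt with FIPS mode enabled might look like the following; other required fields are omitted:

[source,yaml]
----
apiVersion: v1
baseDomain: example.com
metadata:
  name: example-cluster
fips: true # installs the cluster in FIPS compliant mode
----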
-For more information, see xref:../installing/installing_with_agent_based_installer/preparing-to-install-with-agent-based-installer.adoc#agent-installer-configuring-fips-compliance_preparing-to-install-with-agent-based-installer[About FIPS compliance]. - -[id="ocp-4-12-agent-based-install-disconnected"] -==== Deploy an Agent-based {product-title} cluster in a disconnected environment -You can perform an Agent-based installation in a disconnected environment. To create an image that is used in a disconnected environment, the `imageContentSources` section in the `install-config.yaml` file must contain the mirror information or `registries.conf` file if you are using ZTP manifests. The actual configuration settings to use in these files are supplied by either the `oc adm release mirror` or `oc mirror` command. -For more information, see xref:../installing/installing_with_agent_based_installer/understanding-disconnected-installation-mirroring.adoc#understanding-disconnected-installation-mirroring[Understanding disconnected installation mirroring]. - -[id="ocp-4-12-agent-based-install-dual-stack"] -==== Agent-based installation supports single and dual stack networking -You can create the agent ISO image with the following IP address configurations: - -* IPv4 -* IPv6 -* IPv4 and IPv6 in parallel (dual-stack) - -For more information, see xref:../installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc#installing-ocp-agent_installing-with-agent-based-installer[Dual and single IP stack clusters]. - -[id="ocp-4-12-agent-based-install-mce"] -==== Agent deployed {product-title} cluster can be used as a hub cluster -You can install the multicluster engine for Kubernetes Operator and deploy a hub cluster with the Agent-based {product-title} Installer. -For more information, see xref:../installing/installing_with_agent_based_installer/preparing-an-agent-based-installed-cluster-for-mce.adoc#preparing-an-agent-based-installed-cluster-for-mce[Preparing an Agent-based installed cluster for the multicluster engine for Kubernetes Operator]. - -[id="ocp-4-12-agent-based-install-validations"] -==== Agent-based installation performs installation validations - -The Agent-based {product-title} Installer performs validations on: - -* Installation image generation: The user-provided manifests are checked for validity and compatibility. -* Installation: The installation service checks the hardware available for installation and emits validation events that can be retrieved with the `openshift-install agent wait-for` subcommands. - -For more information, see xref:../installing/installing_with_agent_based_installer/preparing-to-install-with-agent-based-installer.adoc#validations-before-agent-iso-creation_preparing-to-install-with-agent-based-installer[Installation validations]. - -[id="ocp-4-12-agent-based-install-static-networking"] -==== Configure static networking in a Agent-based installation -With the Agent-based {product-title} Installer, you can configure static IP addresses for IPv4, IPv6, or dual-stack (both IPv4 and IPv6) for all the hosts prior to creating the agent ISO image. You can add the static addresses to the `hosts` section of the `agent-config.yaml` file or in the `NMStateConfig.yaml` file if you are using the ZTP manifests. -Note that the configuration of the addresses must follow the syntax rules for NMState as described in link:https://nmstate.io/examples.html[NMState state examples]. 
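The following condensed `agent-config.yaml` sketch shows one statically addressed host; host names, MAC addresses, and IP addresses are placeholders, and the full schema is described in the linked documentation:

[source,yaml]
----
apiVersion: v1alpha1
kind: AgentConfig
metadata:
  name: example-cluster
rendezvousIP: 192.168.111.80 # node that coordinates the installation
hosts:
- hostname: master-0
  interfaces:
  - name: eno1
    macAddress: 00:ef:44:21:e6:a5
  networkConfig: # static configuration in NMState format
    interfaces:
    - name: eno1
      type: ethernet
      state: up
      ipv4:
        enabled: true
        dhcp: false
        address:
        - ip: 192.168.111.80
          prefix-length: 23
----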
- -For more information, see xref:../installing/installing_with_agent_based_installer/preparing-to-install-with-agent-based-installer.adoc#agent-install-networking_preparing-to-install-with-agent-based-installer[About networking]. - -[id="ocp-4-12-agent-based-install-automated-deployment"] -==== CLI-based automated deployment in an Agent-based installation -With the Agent-based {product-title} Installer, you can define your installation configurations, generate an ISO for all the nodes, and then have an unattended installation by booting the target systems with the generated ISO. -For more information, see xref:../installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc#installing-with-agent-based-installer[Installing a {product-title} cluster with the Agent-based {product-title} Installer]. - -[id="ocp-4-12-agent-based-install-host-specific"] -==== Agent-based installation supports host-specific configuration at installation time -You can configure the hostname, network configuration in NMState format, root device hints, and role in an Agent-based installation. - -For more information, see xref:../installing/installing_with_agent_based_installer/preparing-to-install-with-agent-based-installer.adoc#root-device-hints_preparing-to-install-with-agent-based-installer[About root device hints]. - -[id="ocp-4-12-agent-based-install-dhcp"] -==== Agent-based installation supports DHCP -With the Agent-based {product-title} Installer, you can deploy to environments where you rely on DHCP to configure networking for all the nodes, as long as you know the IP address that at least one of the systems will receive. This IP address is required so that all nodes use it as the rendezvous point. -For more information, see xref:../installing/installing_with_agent_based_installer/preparing-to-install-with-agent-based-installer.adoc#dhcp[DHCP]. - -[id="ocp-4-12-post-installation"] -=== Post-installation configuration - -[id="ocp-4-12-csi-driver-install-vsphere"] -==== CSI driver installation on vSphere clusters -To install a CSI driver on a cluster running on vSphere, the following requirements must be met: - -* Virtual machines of hardware version 15 or later - -* VMware vSphere version 7.0.2 or later - -* vCenter 7.0.2 or later - -* No third-party CSI driver already installed in the cluster -+ -If a third-party CSI driver is present in the cluster, {product-title} does not overwrite it. - -Versions earlier than those listed above are still supported, but are deprecated. Although these versions remain fully supported, version 4.12 of {product-title} requires vSphere virtual hardware version 15 or later. For more information, see xref:../release_notes/ocp-4-12-release-notes#ocp-4-12-deprecated-removed-features[Deprecated and removed features]. - -Failing to meet the above requirements prevents {product-title} from upgrading to {product-title} 4.13 or later. -[id="ocp-4-12-cluster-capabilities"] -==== Cluster capabilities - -The following new cluster capabilities have been added: - -* Console -* Insights -* Storage -* CSISnapshot - -A new predefined set of cluster capabilities, `v4.12`, has been added. This includes all capabilities from `v4.11`, and the new capabilities added with the current release. - -For more information, see xref:../post_installation_configuration/enabling-cluster-capabilities.adoc[Enabling cluster capabilities].
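As an illustrative sketch, capabilities are selected in the `install-config.yaml` file; the values shown here are examples only:

[source,yaml]
----
# install-config.yaml excerpt (illustrative only)
capabilities:
  baselineCapabilitySet: v4.11 # start from the v4.11 predefined set
  additionalEnabledCapabilities:
  - CSISnapshot # opt in to one of the capabilities added in this release
----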
- -[id="ocp-4-12-ocp-with-multi-arch-compute-machines"] -==== {product-title} with multi-architecture compute machines (Technology Preview) - -{product-title} {product-version} with multi-architecture compute machines now supports manifest listed images on image streams. For more information about manifest list images, see xref:../post_installation_configuration/multi-architecture-configuration.adoc[Configuring multi-architecture compute machines on an {product-title} cluster]. - -On a cluster with multi-architecture compute machines, you can now override the node affinity in the Operator's `Subscription` object to schedule pods on nodes with architectures that the Operator supports. For more information, see xref:../nodes/scheduling/nodes-scheduler-node-affinity.adoc#olm-overriding-operator-pod-affinity_nodes-scheduler-node-affinity[Using node affinity to control where an Operator is installed]. - -[id="ocp-4-12-web-console"] -=== Web console - -[id="ocp-4-12-Administrator-perspective"] -==== Administrator Perspective - -With this release, there are several updates to the *Administrator* perspective of the web console. - -* The {product-title} web console displays a `ConsoleNotification` if the cluster is upgrading. Once the upgrade is done, the notification is removed. -* A *_restart rollout_* option for the `Deployment` resource and a *_retry rollouts_* option for the `DeploymentConfig` resource are available on the *Action* and *Kebab* menus. -* You can view a list of supported clusters on the *All Clusters* dropdown list. The supported clusters include {product-title}, {product-title} Service on AWS (ROSA), Azure Red Hat OpenShift (ARO), ROKS, and {product-dedicated}. - -[id="ocp-4-12-multi-arch-console"] -===== Multi-architecture compute machines on the {product-title} web console - -The `console-operator` now scans all nodes and builds a set of all architecture types that cluster nodes run on and pass it to the `console-config.yaml`. The `console-operator` can be installed on nodes with architectures of the values `amd64`, `arm64`, `ppc64le`, or `s390x`. - -For more information about multi-architechture compute machines, see xref:../post_installation_configuration/multi-architecture-configuration.adoc[Configuring a multi-architecture compute machine on an OpenShift cluster]. - -[id="ocp-4-12-dynamic-plug-in-updates"] -===== Dynamic plug-in generally available - -This feature was previously introduced as a Technology Preview in {product-title} 4.10 and is now generally available in {product-title} 4.12. With the dynamic plug-in, you can build high quality and unique user experiences natively in the web console. You can: - -* Add custom pages. -* Add perspectives beyond administrator and developer. -* Add navigation items. -* Add tabs and actions to resource pages. -* Extend existing pages. - -For more information, see xref:../web_console/dynamic-plug-in/dynamic-plug-in.adoc#overview-of-dynamic-plug-ins[Overview of dynamic-plug-ins]. - -[id="ocp-4-12-developer-perspective"] -==== Developer Perspective - -With this release, there are several updates to the *Developer* perspective of the web console. You can perform the following actions: - -* Export your application in the ZIP file format to another project or cluster by using the *Export application* option on the *+Add* page. -* Create a Kafka event sink to receive events from a particular source and send them to a Kafka topic. -* Set the default resource preference in the *User Preferences* -> *Applications* page. 
In addition, you can select another resource type to be the default. -** Optionally, set another resource type from the *Add* page by clicking *Import from Git* -> *Advanced options* -> *Resource type* and selecting the resource from the drop-down list. -* Make the `status.HostIP` node IP address for pods visible in the *Details* tab of the *Pods* page. -* See the resource quota alert label on the *Topology* and *Add* pages whenever any resource reaches the quota. The alert label link takes you to the *ResourceQuotas* list page. If the alert label link is for a single resource quota, it takes you to the *ResourceQuota details* page. -** For deployments, an alert is displayed in the topology node side panel if any errors are associated with resource quotas. Also, a yellow border is displayed around the deployment nodes when the resource quota is exceeded. -* Customize the following UI items using the form or YAML view: -** Perspectives visible to users -** Quick starts visible to users -** Cluster roles accessible to a project -** Actions visible on the *+Add* page -** Item types in the *Developer Catalog* -* See the common updates to the *Pipeline details* and *PipelineRun details* page visualization by performing the following actions: -** Use the mouse wheel to change the zoom factor. -** Hover over the tasks to see the task details. -** Use the standard icons to zoom in, zoom out, fit to screen, and reset the view. -** *PipelineRun details* page only: At specific zoom factors, the background color of the tasks changes to indicate the error or warning status. You can hover over the tasks badge to see the total number of tasks and the completed tasks. - -[id="ocp-4-12-helm-page-improvements"] -===== Helm page improvements - -In {product-title} 4.12, you can do the following from the *Helm* page: - -* Create Helm releases and repositories using the *Create* button. -* Create, update, or delete a cluster-scoped or a namespace-scoped Helm chart repository. -* View the list of the existing Helm chart repositories with their scope in the *Repositories* page. -* View the newly created Helm release in the *Helm Releases* page. - -[id="ocp-4-12-negativematcher-alert"] -===== Negative matchers in Alertmanager -With this update, Alertmanager now supports a `Negative matcher` option. Using `Negative matcher`, you can update the *Label value* to a Not Equals matcher. The negative matcher checkbox changes `=` (value equals) into `!=` (value does not equal) and changes `=~` (value matches regular expression) into `!~`(value does not match regular expression). Also, the *Use RegEx* checkbox label is renamed to *RegEx*. - -[id="ocp-4-12-oc"] -=== OpenShift CLI (oc) - -[id="ocp-4-12-cli-krew"] -==== Managing plug-ins for the OpenShift CLI with Krew (Technology Preview) - -Using Krew to install and manage plug-ins for the OpenShift CLI (`oc`) is now available as a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview]. - -For more information, see xref:../cli_reference/openshift_cli/managing-cli-plugins-krew.adoc#managing-cli-plugin-krew[Managing CLI plug-ins with Krew]. - -[id="ocp-4-12-ibm-z"] -=== IBM Z and LinuxONE - -With this release, IBM Z and LinuxONE are now compatible with {product-title} {product-version}. The installation can be performed with z/VM or {op-system-base} KVM. 
For installation instructions, see the following documentation: - -* xref:../installing/installing_ibm_z/installing-ibm-z.adoc#installing-ibm-z[Installing a cluster with z/VM on IBM Z and LinuxONE] -* xref:../installing/installing_ibm_z/installing-restricted-networks-ibm-z.adoc#installing-restricted-networks-ibm-z[Installing a cluster with z/VM on IBM Z and LinuxONE in a restricted network] -* xref:../installing/installing_ibm_z/installing-ibm-z-kvm.adoc#installing-ibm-z-kvm[Installing a cluster with {op-system-base} KVM on IBM Z and LinuxONE] -* xref:../installing/installing_ibm_z/installing-restricted-networks-ibm-z-kvm.adoc#installing-restricted-networks-ibm-z-kvm[Installing a cluster with RHEL KVM on IBM Z and LinuxONE in a restricted network] - -[discrete] -==== Notable enhancements - -The following new features are supported on IBM Z and LinuxONE with {product-title} {product-version}: - -* Cron jobs -* Descheduler -//* FIPS cryptography -* IPv6 -* PodDisruptionBudget -* Scheduler profiles -* Stream Control Transmission Protocol (SCTP) - -[discrete] -==== IBM Secure Execution (Technology Preview) - -{product-title} now supports configuring {op-system-first} nodes for IBM Secure Execution on {ibmzProductName} and LinuxONE (s390x architecture) as a Technology Preview feature. - -For installation instructions, see the following documentation: - -* xref:../installing/installing_ibm_z/installing-ibm-z-kvm.html#installing-rhcos-using-ibm-secure-execution_installing-ibm-z-kvm[Installing {op-system} using IBM Secure Execution] - -[discrete] -==== Supported features - -The following features are also supported on IBM Z and LinuxONE: - -* Currently, the following Operators are supported: -** Cluster Logging Operator -** Compliance Operator -** File Integrity Operator -** Local Storage Operator -** NFD Operator -** NMState Operator -** OpenShift Elasticsearch Operator -** Service Binding Operator -** Vertical Pod Autoscaler Operator -* The following Multus CNI plugins are supported: -** Bridge -** Host-device -** IPAM -** IPVLAN -* Alternate authentication providers -* Automatic Device Discovery with Local Storage Operator -* CSI Volumes -** Cloning -** Expansion -** Snapshot -* Encrypting data stored in etcd -* Helm -* Horizontal pod autoscaling -* Monitoring for user-defined projects -* Multipathing -* Operator API -* OC CLI plugins -* Persistent storage using iSCSI -* Persistent storage using local volumes (Local Storage Operator) -* Persistent storage using hostPath -* Persistent storage using Fibre Channel -* Persistent storage using Raw Block -* OVN-Kubernetes, including IPsec encryption -* Support for multiple network interfaces -* Three-node cluster support -* z/VM Emulated FBA devices on SCSI disks -* 4K FCP block device - -These features are available only for {product-title} on IBM Z and LinuxONE for {product-version}: - -* HyperPAV enabled on IBM Z and LinuxONE for the virtual machines for FICON attached ECKD storage - -[discrete] -==== Restrictions - -The following restrictions impact {product-title} on IBM Z and LinuxONE: - -* Automatic repair of damaged machines with machine health checking -* {openshift-local-productname} -* Controlling overcommit and managing container density on nodes -* NVMe -* OpenShift Metering -* OpenShift Virtualization -* Precision Time Protocol (PTP) hardware -* Tang mode disk encryption during {product-title} deployment - -* Compute nodes must run {op-system-first} -* Persistent shared storage must be provisioned by using either {rh-storage-first} or 
other supported storage protocols -* Persistent non-shared storage must be provisioned using local storage, like iSCSI, FC, or using LSO with DASD, FCP, or EDEV/FBA - -[id="ocp-4-12-ibm-power"] -=== IBM Power - -With this release, IBM Power is now compatible with {product-title} {product-version}. For installation instructions, see the following documentation: - -* xref:../installing/installing_ibm_power/installing-ibm-power.adoc#installing-ibm-power_installing-ibm-power[Installing a cluster on IBM Power] -* xref:../installing/installing_ibm_power/installing-restricted-networks-ibm-power.adoc#installing-restricted-networks-ibm-power_installing-restricted-networks-ibm-power[Installing a cluster on IBM Power in a restricted network] - -[discrete] -==== Notable enhancements - -The following new features are supported on IBM Power with {product-title} {product-version}: - -* Cloud controller manager for IBM Cloud -* Cron jobs -* Descheduler -//* FIPS cryptography -* PodDisruptionBudget -* Scheduler profiles -* Stream Control Transmission Protocol (SCTP) -* Topology Manager - -[discrete] -==== Supported features - -The following features are also supported on IBM Power: - -* Currently, the following Operators are supported: -** Cluster Logging Operator -** Compliance Operator -** File Integrity Operator -** Local Storage Operator -** NFD Operator -** NMState Operator -** OpenShift Elasticsearch Operator -** SR-IOV Network Operator -** Service Binding Operator -** Vertical Pod Autoscaler Operator -* The following Multus CNI plugins are supported: -** Bridge -** Host-device -** IPAM -** IPVLAN -* Alternate authentication providers -* CSI Volumes -** Cloning -** Expansion -** Snapshot -* Encrypting data stored in etcd -* Helm -* Horizontal pod autoscaling -* IPv6 -* Monitoring for user-defined projects -* Multipathing -* Multus SR-IOV -* Operator API -* OC CLI plugins -* OVN-Kubernetes, including IPsec encryption -* Persistent storage using iSCSI -* Persistent storage using local volumes (Local Storage Operator) -* Persistent storage using hostPath -* Persistent storage using Fibre Channel -* Persistent storage using Raw Block -* Support for multiple network interfaces -* Support for Power10 -* Three-node cluster support -* 4K Disk Support - -[discrete] -==== Restrictions - -The following restrictions impact {product-title} on IBM Power: - -* Automatic repair of damaged machines with machine health checking -* {openshift-local-productname} -* Controlling overcommit and managing container density on nodes -* OpenShift Metering -* OpenShift Virtualization -* Precision Time Protocol (PTP) hardware -* Tang mode disk encryption during {product-title} deployment - -* Compute nodes must run {op-system-first} -* Persistent storage must be of the Filesystem type that uses local volumes, {rh-storage-first}, Network File System (NFS), or Container Storage Interface (CSI) - -[id="ocp-4-12-images"] -=== Images - -A new import value, `importMode`, has been added to the `importPolicy` parameter of image streams. The following fields are available for this value: - -* `Legacy`: `Legacy` is the default value for `importMode`. When active, the manifest list is discarded, and a single sub-manifest is imported. The platform is chosen in the following order of priority: -+ -. Tag annotations -. Control plane architecture -. Linux/AMD64 -. The first manifest in the list - -* `PreserveOriginal`: When active, the original manifest is preserved. 
For manifest lists, the manifest list and all of its sub-manifests are imported. - -//[id="ocp-4-12-security"] -//=== Security and compliance -// -// This content will be added post-GA, as it is asynchronous content. - -[id="ocp-4-12-networking"] -=== Networking - -[id="ocp-4-12-redhat-openshift-networking"] -==== Red Hat OpenShift Networking - -{openshift-networking} is an ecosystem of features, plug-ins, and advanced networking capabilities that extend Kubernetes networking beyond the Kubernetes CNI plug-in, providing the advanced networking-related features that your cluster needs to manage its network traffic for one or multiple hybrid clusters. This ecosystem of networking capabilities integrates ingress, egress, load balancing, high-performance throughput, security, and inter- and intra-cluster traffic management, and provides role-based observability tooling to reduce its natural complexity. - -For more information, see xref:../networking/about-networking.adoc#about-networking[About networking]. - -[id="ocp-4-12-ovn-kubernetes-default-network-plug-in"] -==== OVN-Kubernetes is now the default networking plug-in - -When installing a new cluster, the OVN-Kubernetes network plug-in is the default networking plug-in. For all prior versions of {product-title}, OpenShift SDN remains the default networking plug-in. - -The OVN-Kubernetes network plug-in includes a wider array of features than OpenShift SDN, including: - -- Support for all existing OpenShift SDN features -- Support for xref:../networking/ovn_kubernetes_network_provider/converting-to-dual-stack.adoc#converting-to-dual-stack[IPv6 networks] -- Support for xref:../networking/ovn_kubernetes_network_provider/configuring-ipsec-ovn.adoc#configuring-ipsec-ovn[Configuring IPsec encryption] -- Complete support for the xref:../networking/network_policy/about-network-policy.adoc#about-network-policy[`NetworkPolicy` API] -- Support for xref:../networking/ovn_kubernetes_network_provider/logging-network-policy.adoc#logging-network-policy[audit logging of network policy events] -- Support for xref:../networking/ovn_kubernetes_network_provider/tracking-network-flows.adoc#tracking-network-flows[network flow tracking] in NetFlow, sFlow, and IPFIX formats -- Support for xref:../networking/ovn_kubernetes_network_provider/configuring-hybrid-networking.adoc#configuring-hybrid-networking[hybrid networks] for Windows containers -- Support for xref:../networking/hardware_networks/configuring-hardware-offloading.adoc#configuring-hardware-offloading[hardware offloading] to compatible NICs - -There are also significant scale, performance, and stability improvements in {product-title} {product-version} compared to prior versions. - -If you are using the OpenShift SDN network plug-in, note that: - -* Existing and future deployments using OpenShift SDN continue to be supported. -* OpenShift SDN remains the default on {product-title} versions earlier than {product-version}. -* As of {product-title} {product-version}, OpenShift SDN is a supported installation-time option. -* OpenShift SDN remains feature frozen. - -For more information about OVN-Kubernetes, including a feature comparison matrix with OpenShift SDN, see xref:../networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc#about-ovn-kubernetes[About the OVN-Kubernetes network plug-in].
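For context, the installation-time choice described above is made in the `networking` stanza of the `install-config.yaml` file; the CIDR values below are placeholders:

[source,yaml]
----
networking:
  networkType: OVNKubernetes # default in this release; set to OpenShiftSDN to keep the previous plug-in
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
----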
- -For information on migrating to OVN-Kubernetes from OpenShift SDN, see xref:../networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc#migrate-from-openshift-sdn[Migrating from the OpenShift SDN network plug-in]. - -[id="ocp-4-12-ingress-node-firewall-operator"] -==== Ingress Node Firewall Operator - -This update introduces a new stateless Ingress Node Firewall Operator. You can now configure firewall rules at the node level. For more information, see xref:../networking/networking-operators-overview.adoc#networking-operators-overview[Ingress Node Firewall Operator]. - -[id="ocp-4-12-networking-metrics"] -==== Enhancements to networking metrics - -The following metrics are now available for the OVN-Kubernetes network plugin: - -* `ovn_controller_southbound_database_connected` -* `ovnkube_master_libovsdb_monitors` -* `ovnkube_master_network_programming_duration_seconds` -* `ovnkube_master_network_programming_ovn_duration_seconds` -* `ovnkube_master_egress_routing_via_host` -* `ovs_vswitchd_interface_resets_total` -* `ovs_vswitchd_interface_rx_dropped_total` -* `ovs_vswitchd_interface_tx_dropped_total` -* `ovs_vswitchd_interface_rx_errors_total` -* `ovs_vswitchd_interface_tx_errors_total` -* `ovs_vswitchd_interface_collisions_total` - -The following metric has been removed: - -* `ovnkube_master_skipped_nbctl_daemon_total` - -[id="ocp-4-12-multi-zone-ipi-vsphere-installation"] -==== Multi-zone Installer Provisioned Infrastructure VMware vSphere installation (Technology Preview) -Beginning with {product-title} {product-version}, -the ability to configure multiple vCenter datacenters and multiple vCenter clusters in a single vCenter installation using installer-provisioned infrastructure is now available as a Technology Preview feature. Using vCenter tags, you can use this feature to associate vCenter datacenters and compute clusters with openshift-regions and openshift-zones. These associations define failure domains to enable application workloads to be associated with specific locations and failure domains. - -// The link to content will be added post-GA - -[id="ocp-4-12-k8s-nmstate-support-for-vsphere"] -==== Kubernetes NMState in VMware vSphere now supported -Beginning with {product-title} {product-version}, you can configure the networking settings such as DNS servers or search domains, VLANs, bridges, and interface bonding using the Kubernetes NMState Operator on your VMware vSphere instance. - -For more information, see xref:../networking/k8s_nmstate/k8s-nmstate-about-the-k8s-nmstate-operator.adoc[About the Kubernetes NMState Operator]. - -[id="ocp-4-12-k8s-nmstate-support-for-openstack"] -==== Kubernetes NMState in OpenStack now supported -Beginning with {product-title} {product-version}, you can configure the networking settings such as DNS servers or search domains, VLANs, bridges, and interface bonding using the Kubernetes NMState Operator on your OpenStack instance. - -For more information, see xref:../networking/k8s_nmstate/k8s-nmstate-about-the-k8s-nmstate-operator.adoc[About the Kubernetes NMState Operator]. - -[id="ocp-4-12-nw-external-dns-operator"] -==== External DNS Operator - -In {product-title} 4.12, the External DNS Operator modifies the format of the ExternalDNS wildcard TXT records on AzureDNS. The External DNS Operator replaces the asterisk with `any` in ExternalDNS wildcard TXT records. You must avoid the ExternalDNS wildcard A and CNAME records having `any` leftmost subdomain because this might cause a conflict. 
- -The upstream version of `ExternalDNS` for {product-title} 4.12 is v0.13.1. - -[id="ocp-4-12-nw-metrics-telemetry"] -==== Capturing metrics and telemetry associated with the use of routes and shards - -In {product-title} 4.12, the Cluster Ingress Operator exports a new metric named `route_metrics_controller_routes_per_shard`. The `shard_name` label of the metric specifies the name of the shards. This metric gives the total number of routes that are admitted by each shard. - -The following metrics are sent through telemetry. - -.Metrics sent through telemetry -[cols="1,1,1",options="header"] -|=== -| Name | Recording rule expression | Description - -| `cluster:route_metrics_controller_routes_per_shard:min` -| `min(route_metrics_controller_routes_per_shard)` -| Tracks the minimum number of routes admitted by any of the shards - -| `cluster:route_metrics_controller_routes_per_shard:max` -| `max(route_metrics_controller_routes_per_shard)` -| Tracks the maximum number of routes admitted by any of the shards - -| `cluster:route_metrics_controller_routes_per_shard:avg` -| `avg(route_metrics_controller_routes_per_shard)` -| Tracks the average value of the `route_metrics_controller_routes_per_shard` metric - -| `cluster:route_metrics_controller_routes_per_shard:median` -| `quantile(0.5, route_metrics_controller_routes_per_shard)` -| Tracks the median value of the `route_metrics_controller_routes_per_shard` metric - -| `cluster:openshift_route_info:tls_termination:sum` -| `sum (openshift_route_info) by (tls_termination)` -| Tracks the number of routes for each `tls_termination` value. The possible values for `tls_termination` are `edge`, `passthrough` and `reencrypt` - -|=== - -[id="ocp-4-12-nw-aws-load-balancer-operator"] -==== AWS Load Balancer Operator - -In {product-title} 4.12, the AWS Load Balancer controller now implements the Kubernetes Ingress specification for multiple matches. If multiple paths within an Ingress match a request, the longest matching path takes the precedence. If two paths still match, paths with an exact path type take precedence over a prefix path type. - -The AWS Load Balancer Operator sets the `EnableIPTargetType` feature gate to `false`. The AWS Load Balancer controller disables the support for services and ingress resources for `target-type` `ip`. - -The upstream version of `aws-load-balancer-controller` for an {product-title} 4.12 is v2.4.4. - -[id="ocp-4-12-nw-ingress-autoscaling"] -==== Ingress Controller Autoscaling (Technology Preview) - -You can now use the {product-title} Custom Metrics Autoscaler Operator to dynamically scale the default Ingress Controller based on metrics in your deployed cluster, such as the number of worker nodes available. The Custom Metrics Autoscaler is available as a Technology Preview feature. - -For more information, see xref:../networking/ingress-operator.adoc#nw-autoscaling-ingress-controller_configuring-ingress[Autoscaling an Ingress Controller]. - -[id="ocp-4-12-nw-ingress-haproxy-maxconn-default"] -==== HAProxy maxConnections default is now 50,000 - -In {product-title} 4.12, the default value for the `maxConnections` setting is now 50000. Previously starting with {product-title} 4.11, the default value for the `maxConnections` setting was 20000. - -For more information, see xref:../networking/ingress-operator.adoc#nw-ingress-controller-configuration-parameters_configuring-ingress[Ingress Controller configuration parameters]. 
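A sketch of overriding this default on the default Ingress Controller follows; the value is an example only, and the `tuningOptions.maxConnections` field should be verified against the linked parameter reference:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  tuningOptions:
    maxConnections: 100000 # example override of the 50000 default
----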
- -[id="ocp-4-12-nw-configure-dns-management"] -==== Configuration of an Ingress Controller for manual DNS management - -You can now configure an Ingress Controller to stop automatic DNS management and start manual DNS management. Set the `dnsManagementPolicy` parameter to specify automatic or manual DNS management. - -For more information, see xref:../networking/ingress-controller-dnsmgt.adoc#ingress-controller-dnsmgt[Configuring an Ingress Controller to manually manage DNS]. - -[id="ocp-4-12-networking-supported-hardware-for-sr-iov"] -==== Supported hardware for SR-IOV (Single Root I/O Virtualization) - -{product-title} 4.12 adds support for the following SR-IOV devices: - -* MT2892 Family [ConnectX‑6{nbsp}Dx] -* MT2894 Family [ConnectX-6 Lx] -* MT42822 BlueField‑2 in ConnectX‑6 NIC mode -* Silicom STS Family - -For more information, see xref:../networking/hardware_networks/about-sriov.adoc#supported-devices_about-sriov[Supported devices]. - -[id="ocp-4-12-networking-supported-hardware-for-ovs"] -==== Supported hardware for OvS (Open vSwitch) Hardware Offload - -{product-title} 4.12 adds OvS Hardware Offload support for the following devices: - -* MT2892 Family [ConnectX-6 Dx] -* MT2894 Family [ConnectX-6 Lx] -* MT42822 BlueField‑2 in ConnectX‑6 NIC mode - -For more information, see xref:../networking/hardware_networks/configuring-hardware-offloading.adoc#supported_devices_configuring-hardware-offloading[Supported devices]. - -[id="ocp-4-12-multi-network-policy-supported-for-sr-iov"] -==== Multi-network-policy supported for SR-IOV (Technology Preview) - -{product-title} 4.12 adds support for configuring multi-network policy for SR-IOV devices. - -You can now configure multi-network for SR-IOV additional networks. Configuring SR-IOV additional networks is a Technology Preview feature and is only supported with kernel network interface cards (NICs). - -For more information, see xref:../networking/multiple_networks/configuring-multi-network-policy.adoc#configuring-multi-network-policy[Configuring multi-network policy]. - -[id="ocp-4-12-switch-aws-load-balancer"] -==== Switch between AWS load balancer types without deleting the Ingress Controller - -You can update the Ingress Controller to switch between an AWS Classic Load Balancer (CLB) and an AWS Network Load Balancer (NLB) without deleting the Ingress Controller. - -For more information, see xref:../networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-aws.adoc[Configuring ingress cluster traffic on AWS]. - -[id="ocp-4-12-ipv6-gratuitous-ARPs-default-SR-IOV"] -==== IPv6 unsolicited neighbor advertisements and IPv4 gratuitous address resolution protocol now default on the SR-IOV CNI plugin - -Pods created with the Single Root I/O Virtualization (SR-IOV) CNI plugin, where the IP address management CNI plugin has assigned IPs, now send IPv6 unsolicited neighbor advertisements and/or IPv4 gratuitous address resolution protocol by default onto the network. This enhancement notifies hosts of the new pod's MAC address for a particular IP to refresh ARP/NDP caches with the correct information. - -For more information, see xref:../networking/hardware_networks/about-sriov.adoc#supported-devices_about-sriov[Supported devices]. - -[id="ocp-4-12-coredns-cache-tuning"] -==== Support for CoreDNS cache tuning - -You can now configure the time-to-live (TTL) duration of both successful and unsuccessful DNS queries cached by CoreDNS. 
- -For more information, see xref:../networking/dns-operator.adoc#nw-dns-cache-tuning_dns-operator[Tuning the CoreDNS cache]. - -[id="ocp-4-12-ovn-kubernetes-supports-configuration-internal-subnet"] -==== OVN-Kubernetes supports configuration of internal subnet - -Previously, the subnet that OVN-Kubernetes uses internally was `100.64.0.0/16` for IPv4 and `fd98::/48` for IPv6 and could not be modified. To support instances when these subnets overlap with existing subnets in your infrastructure, you can now change these internal subnets to avoid any overlap. - -For more information, see xref:../networking/cluster-network-operator.adoc#nw-operator-cr-cno-object_cluster-network-operator[Cluster Network Operator configuration object] - -[id=ocp-4-12-egress-ip-support] -==== Egress IP support on {rh-openstack-first} - -{rh-openstack}, paired with {product-title}, now supports automatic attachment and detachment of Egress IP addresses. The traffic from one or more pods in any number of namespaces has a consistent source IP address for services outside of the cluster. This support applies to OpenShift SDN and OVN-Kubernetes as default network providers. - -[id="ocp-4-12-openshift-sdn-ovn-kubernetes-feature-migration-support"] -==== OpenShift SDN to OVN-Kubernetes feature migration support - -If you plan to migrate from the OpenShift SDN network plugin to the OVN-Kubernetes network plugin, your configurations for the following capabilities are automatically converted to work with OVN-Kubernetes: - -* Egress IP addresses -* Egress firewalls -* Multicast - -For more information about how the migration to OVN-Kubernetes works, see xref:../networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc#migrate-from-openshift-sdn[Migrating from the OpenShift SDN cluster network provider]. - -[id="ocp-4-12-egress-firewall-audit-logging"] -==== Egress firewall audit logging - -For the OVN-Kubernetes network plugin, egress firewalls support audit logging using the same mechanism that network policy audit logging uses. For more information, see xref:../networking/ovn_kubernetes_network_provider/logging-network-policy.adoc#logging-network-policy[Logging for egress firewall and network policy rules]. - -[id="ocp-4-12-announce-IP-from-node-subset-metallb"] -==== Advertise MetalLB from a given address pool from a subset of nodes -With this update, in BGP mode, you can use the node selector to advertise the MetalLB service from a subset of nodes, using a specific pool of IP addresses. This feature was introduced as a Technology Preview feature in {product-title} 4.11 and is now generally available in {product-title} 4.12 for BGP mode only. L2 mode remains a Technology Preview feature. - -For more information, see xref:../networking/metallb/about-advertising-ipaddresspool.adoc#nw-metallb-advertise-ip-pools-to-node-subset_about-advertising-ip-address-pool[Advertising an IP address pool from a subset of nodes]. - -[id="ocp-4-12-ne-metallb-deployment-specs"] -==== Additional deployment specifications for MetalLB -This update provides additional deployment specifications for MetalLB. When you use a custom resource to deploy MetalLB, you can use these additional deployment specifications to manage how MetalLB `speaker` and `controller` pods deploy and run in your cluster. For example, you can use MetalLB deployment specifications to manage where MetalLB pods are deployed, define CPU limits for MetalLB pods, and assign runtime classes to MetalLB pods. 
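As a rough sketch only, a `MetalLB` resource using such deployment specifications might look like the following; the `controllerConfig` field name and all values are assumptions to be verified against the reference linked below:

[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  nodeSelector: # control which nodes run MetalLB speaker pods
    node-role.kubernetes.io/worker: ""
  controllerConfig:
    runtimeClassName: myclass # assumed field: assign a runtime class to the controller pod
    resources:
      limits:
        cpu: "200m" # assumed field: CPU limit for the controller pod
----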
-
-For more information about deployment specifications for MetalLB, see xref:../networking/metallb/metallb-operator-install.adoc#nw-metallb-operator-deployment-specifications-for-metallb_metallb-operator-install[Deployment specifications for MetalLB].

[id="overriding-default-node-ip-selection-logic"]
==== Node IP selection improvements

Previously, the `nodeip-configuration` service on a cluster host selected the IP address from the interface that the default route used. If multiple routes were present, the service would select the route with the lowest metric value. As a result, network traffic could be distributed from the incorrect interface.

With {product-title} 4.12, a new interface has been added to the `nodeip-configuration` service, which allows users to create a hint file. The hint file contains a variable, `NODEIP_HINT`, that overrides the default IP selection logic and selects a node IP address from the subnet that is specified in the `NODEIP_HINT` variable. Using the `NODEIP_HINT` variable allows users to specify which IP address is used, ensuring that network traffic is distributed from the correct interface.

For more information, see xref:../support/troubleshooting/troubleshooting-network-issues.adoc#overriding-default-node-ip-selection-logic_troubleshooting-network-issues[Optional: Overriding the default node IP selection logic].

[id="ocp-4-12-ne-coredns-bump"]
==== CoreDNS update to version 1.10.0

In {product-title} {product-version}, CoreDNS uses version 1.10.0, which includes the following changes:

* CoreDNS does not expand the query UDP buffer size if it was previously set to a smaller value.
* CoreDNS now always prefixes each log line in Kubernetes client logs with the associated log level.
* CoreDNS now reloads more quickly at an approximate speed of 20ms.

[id="ocp-4-12-ne-reload-interval"]
==== Support for a configurable reload interval in HAProxy

With this update, a cluster administrator can configure the reload interval to force HAProxy to reload its configuration less frequently in response to route and endpoint updates. The default minimum HAProxy reload interval is 5 seconds.

For more information, see xref:../post_installation_configuration/network-configuration.adoc#configuring-haproxy-interval_post-install-network-configuration[Configuring HAProxy reload interval].

[id="new-network-observability"]
==== New Network Observability Operator to observe network traffic flow
As an administrator, you can now install the Network Observability Operator to observe the network traffic for your {product-title} cluster in the console. You can view and monitor the network traffic data in different graphical representations. The Network Observability Operator uses eBPF technology to create the network flows. The network flows are enriched with {product-title} information and stored in Loki. You can use the network traffic information for detailed troubleshooting and analysis.

For more information, see xref:../networking/network_observability/network-observability-overview.adoc#network-observability-overview[Network Observability].

[id="ocp-4-12-nw-shiftstack-ipv6-secondary"]
==== IPv6 for secondary network interfaces on {rh-openstack}

IPv6 for secondary network interfaces is now supported in clusters that run on {rh-openstack}.

For more information, see xref:../post_installation_configuration/network-configuration.adoc#nw-osp-pod-connections-ipv6_post-install-network-configuration[Enabling IPv6 connectivity to pods on {rh-openstack}].
-
-[id="ocp-4-12-nw-shiftstack-udp-load-balancing"]
==== UDP support for load balancers on {rh-openstack}

As a result of the switch to an external OpenStack cloud provider, UDP is now supported for `LoadBalancer` services in clusters that run on that platform.

[id="ocp-4-12-hcp-sr-iov-operator"]
==== Deploy the SR-IOV Operator for hosted control planes (Technology Preview)

If you configured and deployed your hosting service cluster, you can now deploy the SR-IOV Operator for a hosted cluster. For more information, see xref:../networking/hardware_networks/configuring-sriov-operator.adoc#sriov-operator-hosted-control-planes_configuring-sriov-operator[Deploying the SR-IOV Operator for hosted control planes].

[id="ocp-4-12-nw-api-ingress-ipv6-support"]
==== Support for IPv6 virtual IP (VIP) addresses for the Ingress VIP and API VIP services

With this update, in installer-provisioned infrastructure clusters, the `ingressVIP` and `apiVIP` configuration settings in the `install-config.yaml` file are deprecated. Instead, use the `ingressVIPs` and `apiVIPs` configuration settings. These settings support dual-stack networking for applications that require IPv4 and IPv6 access to the cluster by using the Ingress VIP and API VIP services. The `ingressVIPs` and `apiVIPs` configuration settings use a list format to specify an IPv4 address, an IPv6 address, or both IP address formats. The order of the list indicates the primary and secondary VIP address for each service. When using dual-stack networking, the primary IP address must be from the IPv4 network.

[id="ocp-4-12-storage"]
=== Storage

[id="ocp-4-12-storage-google-cloud-file-csi-driver"]
==== Persistent storage using the GCP Filestore Driver Operator (Technology Preview)
{product-title} can provision persistent volumes (PVs) by using the Container Storage Interface (CSI) driver for Google Cloud Platform (GCP) Filestore. The GCP Filestore CSI Driver Operator that manages this driver is in Technology Preview.

For more information, see xref:../storage/container_storage_interface/persistent-storage-csi-google-cloud-file.adoc[GCP Filestore CSI Driver Operator].

[id="ocp-4-12-storage-aws-ebs-auto-migration-ga"]
==== Automatic CSI migration for AWS Elastic Block Storage is generally available
Starting with {product-title} 4.8, automatic migration for in-tree volume plug-ins to their equivalent Container Storage Interface (CSI) drivers became available as a Technology Preview feature. Support for Amazon Web Services (AWS) Elastic Block Storage (EBS) was provided in this feature in {product-title} 4.8, and {product-title} 4.12 now supports automatic migration for AWS EBS as generally available. CSI migration for AWS EBS is now enabled by default and requires no action by an administrator.

This feature automatically translates in-tree objects to their counterpart CSI representations and should be completely transparent to users. Translated objects are not stored on disk, and user data is not migrated.

Although storage classes that reference the in-tree storage plug-in continue to work, it is recommended that you switch the default storage class to the CSI storage class.

For more information, see xref:../storage/container_storage_interface/persistent-storage-csi-migration.adoc[CSI Automatic Migration].
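
As a hedged example of switching the default storage class, you can toggle the standard `storageclass.kubernetes.io/is-default-class` annotation. The storage class names `gp2` and `gp2-csi` are placeholders; substitute the in-tree and CSI storage class names that exist in your cluster.

[source,terminal]
----
$ oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
----

[source,terminal]
----
$ oc patch storageclass gp2-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
----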
-
-[id="ocp-4-12-storage-gce-auto-migration-ga"]
==== Automatic CSI migration for GCP PD is generally available
Starting with {product-title} 4.8, automatic migration for in-tree volume plug-ins to their equivalent Container Storage Interface (CSI) drivers became available as a Technology Preview feature. Support for Google Compute Engine Persistent Disk (GCP PD) was provided in this feature in {product-title} 4.9, and {product-title} 4.12 now supports automatic migration for GCP PD as generally available. CSI migration for GCP PD is now enabled by default and requires no action by an administrator.

This feature automatically translates in-tree objects to their counterpart CSI representations and should be completely transparent to users. Translated objects are not stored on disk, and user data is not migrated.

Although storage classes that reference the in-tree storage plug-in continue to work, it is recommended that you switch the default storage class to the CSI storage class.

For more information, see xref:../storage/container_storage_interface/persistent-storage-csi-migration.adoc[CSI Automatic Migration].

[id="ocp-4-12-storage-capacity-tracking"]
==== Storage capacity tracking for pod scheduling is generally available
This new feature exposes the currently available storage capacity by using `CSIStorageCapacity` objects and enhances the scheduling of pods that use Container Storage Interface (CSI) volumes with late binding. Currently, the only {product-title} storage type that supports this feature is OpenShift Data Foundation.

[id="ocp-4-12-vsphere-topology-awareness"]
==== VMware vSphere CSI topology is generally available
{product-title} provides the ability to deploy {product-title} for vSphere in different zones and regions, which allows you to deploy over multiple compute clusters and thus helps to avoid a single point of failure.

For more information, see xref:../storage/container_storage_interface/persistent-storage-csi-vsphere.adoc#persistent-storage-csi-vsphere-top-aware_persistent-storage-csi-vsphere[vSphere CSI topology].

[id="ocp-4-12-local-ephemeral-storage-mgmt"]
==== Local ephemeral storage resource management is generally available
The local ephemeral storage resource management feature is now generally available. With this feature, you can manage local ephemeral storage by specifying requests and limits.

For more information, see xref:../storage/understanding-ephemeral-storage.adoc#storage-ephemeral-storage-manage_understanding-ephemeral-storage[Ephemeral storage management].

[id="ocp-4-12-vol-populators"]
==== Volume populators (Technology Preview)
Volume populators use `datasource` to enable the creation of pre-populated volumes.

Volume population is currently enabled and supported as a Technology Preview feature. However, {product-title} does not ship with any volume populators.

For more information, see xref:../storage/container_storage_interface/persistent-storage-csi.adoc#persistent-storage-csi-vol-populator_persistent-storage-csi[Volume populators].
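
Because {product-title} does not ship a volume populator, the following is only a hedged, upstream-style sketch: a persistent volume claim whose `dataSourceRef` points at a populator-specific custom resource. The `hello.example.com` API group and `Hello` kind are placeholders that a third-party populator would provide; they are not part of {product-title}.

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prepopulated-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  dataSourceRef:                 # object that a volume populator uses to pre-populate the volume
    apiGroup: hello.example.com  # placeholder API group of a third-party populator
    kind: Hello                  # placeholder custom resource kind
    name: example-hello          # placeholder custom resource name
----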
- -[id="ocp-4-12-vsphere-min-requirements"] -==== VMware vSphere CSI Driver Operator requirements -For {product-title} 4.12, VMWare vSphere Container Storage Interface (CSI) Driver Operator requires the following minimum components installed: - -* VMware vSphere version 7.0.2 or later -* vCenter 7.0.2 or later -* Virtual machines of hardware version 15 or later -* No third-party CSI driver already installed in the cluster - -If a third-party CSI driver is present in the cluster, {product-title} does not overwrite it. The presence of a third-party CSI driver prevents {product-title} from upgrading to {product-title} 4.13 or later. - -For more information, see xref:../storage/container_storage_interface/persistent-storage-csi-vsphere.adoc#vsphere-csi-driver-reqs_persistent-storage-csi-vsphere[VMware vSphere CSI Driver Operator requirements]. - -[id="ocp-4-12-olm"] -=== Operator lifecycle - -[id="ocp-4-12-platform-operators"] -==== Platform Operators (Technology Preview) - -Starting in {product-title} 4.12, Operator Lifecycle Manager (OLM) introduces the _platform Operator_ type as a Technology Preview feature. The platform Operator mechanism relies on resources from the RukPak component, also introduced in {product-title} 4.12, to source and manage content. - -A platform Operator is an OLM-based Operator that can be installed during or after an {product-title} cluster's Day 0 operations and participates in the cluster's lifecycle. As a cluster administrator, you can use platform Operators to further customize your {product-title} installation to meet your requirements and use cases. - -For more information about platform Operators, see xref:../operators/admin/olm-managing-po.adoc#olm-managing-po[Managing platform Operators]. For more information about RukPak and its resources, see xref:../operators/understanding/olm-packaging-format.adoc#olm-rukpak-about_olm-packaging-format[Operator Framework packaging format]. - -[id="ocp-4-12-olm-operator-node-affinity"] -==== Controlling where an Operator is installed - -By default, when you install an Operator, {product-title} randomly installs the Operator pod to one of your worker nodes. - -In {product-title} {product-version}, you can control where an Operator pod is installed by adding affinity constraints to the Operator’s `Subscription` object. - -For more information, see xref:../operators/admin/olm-adding-operators-to-cluster.adoc#olm-overriding-operator-pod-affinity_olm-adding-operators-to-a-cluster[Controlling where an Operator is installed]. - -[id="ocp-4-12-olm-pod-security-label-synchronization"] -==== Pod security admission synchronization for user-created openshift-* namespaces - -In {product-title} {product-version}, pod security admission synchronization is enabled by default if an Operator is installed in user-created namespaces that have an `openshift-` prefix. Synchronization is enabled after a cluster service version (CSV) is created in the namespace. The synchronized label inherits the permissions of the service accounts in the namespace. - -For more information, see xref:../authentication/understanding-and-managing-pod-security-admission.adoc#security-context-constraints-psa-synchronization_understanding-and-managing-pod-security-admission[Security context constraint synchronization with pod security standards]. 
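
As a brief, hedged example, you can explicitly turn synchronization on or off for a user-created namespace by setting the `security.openshift.io/scc.podSecurityLabelSync` label; the namespace name below is a placeholder.

[source,terminal]
----
$ oc label namespace openshift-custom-operators security.openshift.io/scc.podSecurityLabelSync=true
----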
- -[id="ocp-4-12-osdk"] -=== Operator development - -[id="ocp-4-12-osdk-security-context-config"] -==== Configuring the security context of a catalog pod - -You can configure the security context of a catalog pod by using the `--security-context-config` flag on the `run bundle` and `bundle-upgrade` subcommands. The flag enables seccomp profiles to comply with pod security admission. The flag accepts the values of `restricted` and `legacy`. If you do not specify a value, the seccomp profile defaults to `restricted`. If your catalog pod cannot run with restricted permissions, set the flag to `legacy`, as shown in the following example: - -[source,terminal] ----- -$ operator-sdk run bundle \ - --security-context-config=legacy ----- - -[id="ocp-4-12-machine-api"] -=== Machine API - -[id="ocp-4-12-mapi-control-plane-machine-sets"] -==== Control plane machine sets - -{product-title} 4.12 introduces control plane machine sets. Control plane machine sets provide management capabilities for control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see xref:../machine_management/control_plane_machine_management/cpmso-about.adoc#cpmso-about[Managing control plane machines]. - -[id="ocp-4-12-mapi-autoscaler-verbosity"] -==== Specifying cluster autoscaler log level verbosity - -{product-title} now supports setting the log level verbosity of the cluster autoscaler by setting the `logVerbosity` parameter in the `ClusterAutoscaler` custom resource. For more information, see the xref:../machine_management/applying-autoscaling.adoc#cluster-autoscaler-cr_applying-autoscaling[`ClusterAutoscaler` resource definition]. - -[id="ocp-4-12-azure-boot-diagnostics"] -==== Enabling Azure boot diagnostics - -{product-title} now supports enabling boot diagnostics on Azure machines that your machine set creates. For more information, see "Enabling Azure boot diagnostics" for xref:../machine_management/creating_machinesets/creating-machineset-azure.adoc#machineset-azure-boot-diagnostics_creating-machineset-azure[compute machines] or xref:../machine_management/control_plane_machine_management/cpmso-using.adoc#machineset-azure-boot-diagnostics_cpmso-using[control plane machines]. - -[id="ocp-4-12-machine-config-operator"] -=== Machine Config Operator - -[id="ocp-4-12-machine-config-operator-layering"] -==== {op-system} image layering - -{op-system-first} image layering allows you to add new images on top of the base {op-system} image. This layering does not modify the base {op-system} image. Instead, it creates a _custom layered image_ that includes all {op-system} functionality and adds additional functionality to specific nodes in the cluster. - -Currently, {op-system} image layering allows you to work with Customer Experience and Engagement (CEE) to obtain and apply Hotfix packages on top of your {op-system} image, based on the link:https://access.redhat.com/solutions/2996001[Red Hat Hotfix policy]. It is planned for future releases that you can use {op-system} image layering to incorporate third-party software packages such as Libreswan or numactl. - -For more information, see xref:../post_installation_configuration/coreos-layering.adoc#coreos-layering[{op-system} image layering]. - -[id="ocp-4-12-nodes"] -=== Nodes - -[id="ocp-4-12-updating-interface-specific-list"] -==== Updating the interface-specific safe list (Technology Preview) - -{product-title} now supports updating the default interface-specific safe `sysctls`. 
- -You can add or remove `sysctls` from the predefined list. -When you add `sysctls`, they can be set across all nodes. -Updating the interface-specific safe `sysctls` list is a Technology Preview feature only. - -For more information, see xref:../nodes/containers/nodes-containers-sysctls.adoc#updating-interface-specific-safe-sysctls-list_nodes-containers-using[Updating the interface-specific safe sysctls list]. - -[id="ocp-4-12-node-cron-job-time-zone"] -==== Cron job time zones (Technology Preview) - -Setting a time zone for a cron job schedule is now offered as a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview]. If a time zone is not specified, the Kubernetes controller manager interprets the schedule relative to its local time zone. - -For more information, see xref:../nodes/jobs/nodes-nodes-jobs.adoc#nodes-nodes-jobs-creating-cron_nodes-nodes-jobs[Creating cron jobs]. - -[id="ocp-4-12-node-cgroups-v2"] -==== Linux Control Group version 2 promoted to Technology Preview - -{product-title} support for link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html[Linux Control Group version 2] (cgroup v2) has been promoted to Technology Preview. cgroup v2 is the next version of the kernel link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/ch01[control groups]. cgroups v2 offers multiple improvements, including a unified hierarchy, safer sub-tree delegation, new features such as link:https://www.kernel.org/doc/html/latest/accounting/psi.html[Pressure Stall Information], and enhanced resource management and isolation. For more information, see xref:../nodes/clusters/nodes-cluster-cgroups-2.adoc#nodes-cluster-cgroups-2[Enabling Linux Control Group version 2 (cgroup v2)]. - -[id="ocp-4-12-node-cgroups-crun"] -==== crun container runtime (Technology Preview) - -{product-title} now supports the crun container runtime in Technology Preview. You can switch between the crun container runtime and the default container runtime as needed by using a `ContainerRuntimeConfig` custom resource (CR). For more information, see xref:../nodes/containers/nodes-containers-using.adoc#nodes-containers-runtimes[About the container engine and container runtime]. - -[id="ocp-4-12-self-node-remediation-operator"] -==== Self Node Remediation Operator enhancements - -{product-title} now supports control plane fencing by the Self Node Remediation Operator. In the event of node failure, you can follow remediation strategies on both worker nodes and control plane nodes. For more information, see xref:../nodes/nodes/ecosystems/eco-self-node-remediation-operator.adoc#control-plane-fencing-self-node-remediation-operator_self-node-remediation-operator-remediate-nodes[Control Plane Fencing]. - -[id="ocp-4-12-node-health-check-operator"] -==== Node Health Check Operator enhancements - -{product-title} now supports control plane fencing on the Node Health Check Operator. In the event of node failure, you can follow remediation strategies on both worker nodes and control plane nodes. For more information, see xref:../nodes/nodes/ecosystems/eco-node-health-check-operator.adoc#control-plane-fencing-node-health-check-operator_node-health-check-operator[Control Plane Fencing]. - -The Node Health Check Operator now also includes a web console plug-in for managing Node Health Checks. 
For more information, see xref:../nodes/nodes/ecosystems/eco-node-health-check-operator.adoc#eco-node-health-check-operator-creating-node-health-check_node-health-check-operator[Creating a node health check]. - -For installing or updating to the latest version of the Node Health Check Operator, use the `stable` subscription channel. For more information, see xref:../nodes/nodes/ecosystems/eco-node-health-check-operator.adoc#installing-node-health-check-operator-using-cli_node-health-check-operator[Installing the Node Health Check Operator by using the CLI]. - -[id="ocp-4-12-monitoring"] -=== Monitoring - -The monitoring stack for this release includes the following new and modified features. - -[id="ocp-4-12-monitoring-updates-to-monitoring-stack-compnents-and-dependencies"] -==== Updates to monitoring stack components and dependencies -This release includes the following version updates for monitoring stack components and dependencies: - -* kube-state-metrics to 2.6.0 -* node-exporter to 1.4.0 -* prom-label-proxy to 0.5.0 -* Prometheus to 2.39.1 -* prometheus-adapter to 0.10.0 -* prometheus-operator to 0.60.1 -* Thanos to 0.28.1 - -[id="ocp-4-12-monitoring-changes-to-alerting-rules"] -==== Changes to alerting rules - -[NOTE] -==== -Red Hat does not guarantee backward compatibility for recording rules or alerting rules. -==== - -* *New* -** Added the `TelemeterClientFailures` alert, which triggers when a cluster tries and fails to submit Telemetry data at a certain rate over a period of time. The alert fires when the rate of failed requests reaches 20% of the total rate of requests within a 15-minute window. -* *Changed* -** The `KubeAggregatedAPIDown` alert now waits 900 seconds rather than 300 seconds before sending a notification. -** The `NodeClockNotSynchronising` and `NodeClockSkewDetected` alerts now only evaluate metrics from the `node-exporter` job. -** The `NodeRAIDDegraded` and `NodeRAIDDiskFailure` alerts now include a device label filter to match only the value returned by `mmcblk.p.+|nvme.+|sd.+|vd.+|xvd.+|dm-.+|dasd.+`. -** The `PrometheusHighQueryLoad` and `ThanosQueryOverload` alerts now also trigger when a high querying load exists on the query layer. - -[id="ocp-4-12-monitoring-new-option-to-specify-pod-topology-spread-constraints-for-monitoring-components"] -==== New option to specify pod topology spread constraints for monitoring components -You can now use pod topology spread constraints to control how Prometheus, Thanos Ruler, and Alertmanager pods are spread across a network topology when {product-title} pods are deployed in multiple availability zones. - -[id="ocp-4-12-monitoring-new-option-to-improve-data-consistency-for-prometheus-adapter"] -==== New option to improve data consistency for Prometheus Adapter -You can now configure an optional kubelet service monitor for Prometheus Adapter (PA) that improves data consistency across multiple autoscaling requests. -Enabling this service monitor eliminates the possibility that two queries sent at the same time to PA might yield different results because the underlying PromQL queries executed by PA might be on different Prometheus servers. 
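
A minimal sketch of enabling this option follows, assuming that it is exposed as a `dedicatedServiceMonitors` setting under `k8sPrometheusAdapter` in the `cluster-monitoring-config` config map; verify the key names in the monitoring configuration documentation before applying the change.

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    k8sPrometheusAdapter:
      dedicatedServiceMonitors:
        enabled: true # deploy a dedicated kubelet service monitor for Prometheus Adapter
----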
- -[id="ocp-4-12-monitoring-update-to-alertmanager-configuration-for-additional-secret-keys"] -==== Update to Alertmanager configuration for additional secret keys -With this release, if you configure an Alertmanager secret to hold additional keys and if the Alertmanager configuration references these keys as files (such as templates, TLS certificates, or tokens), your configuration settings must point to these keys by using an absolute path rather than a relative path. -These keys are available under the `/etc/alertmanager/config` directory. -In earlier releases of {product-title}, you could use relative paths in your configuration to point to these keys because the Alertmanager configuration file was located in the same directory as the keys. - -[IMPORTANT] -==== -If you are upgrading to {product-title} {product-version} and have specified relative paths for additional Alertmanager secret keys that are referenced as files, you must change these relative paths to absolute paths in your Alertmanager configuration. -Otherwise, alert receivers that use the files will fail to deliver notifications. -==== - -[id="ocp-4-12-scalability-and-performance"] -=== Scalability and performance - -[id="ocp-4-12-rsp-workload-hints"] -==== Disabling realtime using workload hints removes Receive Packet Steering from the cluster -At the cluster level by default, a systemd service sets a Receive Packet Steering (RPS) mask for virtual network interfaces. The RPS mask routes interrupt requests from virtual network interfaces according to the list of reserved CPUs defined in the performance profile. At the container level, a `CRI-O` hook script also sets an RPS mask for all virtual network devices. - -With this update, if you set `spec.workloadHints.realTime` in the performance profile to `False`, the system also disables both the systemd service and the `CRI-O` hook script which set the RPS mask. The system disables these RPS functions because RPS is typically relevant to use cases requiring low-latency, realtime workloads only. - -To retain RPS functions even when you set `spec.workloadHints.realTime` to `False`, see the _RPS Settings_ section of the Red Hat Knowledgebase solution link:https://access.redhat.com/solutions/5532341[Performance addons operator advanced configuration]. - -For more information about configuring workload hints, see xref:../scalability_and_performance/cnf-low-latency-tuning.adoc#cnf-understanding-workload-hints_cnf-master[Understanding workload hints]. - -[id="ocp-4-12-tuned"] -==== Tuned profile - -The `tuned` profile now defines the `fs.aio-max-nr` `sysctl` value by default, improving asynchronous I/O performance for default node profiles. - -[id="ocp-4-12-low-latency-tuning-features-options"] -==== Support for new kernel features and options -The low latency tuning has been updated to use the latest kernel features and options. The fix for link:https://bugzilla.redhat.com/show_bug.cgi?id=2117780[2117780] introduced a new per-CPU `kthread`, `ktimers`. This thread must be pinned to the proper CPU cores. With this update, there is no functional change; the isolation of the workload is the same. For more information, see link:https://bugzilla.redhat.com/show_bug.cgi?id=2102450[2102450]. - -[id="tuning-of-power-states"] -==== Power-saving configurations - -In {product-title} {product-version}, by enabling C-states and OS-controlled P-states, you can use different power-saving configurations for critical and non-critical workloads. 
You can apply the configurations through the new `perPodPowerManagement` workload hint and the `cpu-c-states.crio.io` and `cpu-freq-governor.crio.io` CRI-O annotations. For more information about the feature, see xref:../scalability_and_performance/cnf-low-latency-tuning.adoc#node-tuning-operator-pod-power-saving-config_cnf-master[Power-saving configurations].

[id="scalability-and-performance-sno-additional-worker-node"]
==== {sno-caps} cluster expansion with worker nodes (Technology Preview)

In {product-title} {product-version}, you can expand your existing {sno} clusters with additional worker nodes to increase available resources. For more information about {sno} expansion, see xref:../scalability_and_performance/ztp_far_edge/ztp-sno-additional-worker-node.adoc#ztp-additional-worker-sno_sno-additional-worker[Single-node OpenShift cluster expansion with worker nodes].

[id="ocp-4-12-iPXE-ZTP"]
==== Support for iPXE network booting with ZTP
Zero touch provisioning (ZTP) uses the Metal3 service to boot RHCOS on the target host as part of the deployment of spoke clusters. With this update, ZTP leverages the capabilities of Metal3 by adding the option of Preboot Execution Environment (iPXE) network booting for these RHCOS installations.

For more information, see xref:../scalability_and_performance/ztp_far_edge/ztp-deploying-far-edge-clusters-at-scale.adoc#about-ztp_ztp-deploying-far-edge-clusters-at-scale[Using ZTP to provision clusters at the network far edge].

[id="scalability-and-performance-factory-precaching-tool"]
==== {factory-prestaging-tool-caps} to reduce {product-title} and Operator deployment times (Technology Preview)

In {product-title} {product-version}, you can use the {factory-prestaging-tool} to pre-cache {product-title} and Operator images on a server at the factory, and then ship the pre-cached server to the site for deployment. For more information about the {factory-prestaging-tool}, see xref:../scalability_and_performance/ztp_far_edge/ztp-precaching-tool.adoc[Pre-caching images for single-node OpenShift deployments].

[id="scalability-and-performance-factory-precaching-tool-in-ztp"]
==== Zero touch provisioning (ZTP) integration of the {factory-prestaging-tool} (Technology Preview)

In {product-title} {product-version}, you can use the {factory-prestaging-tool} in the GitOps ZTP workflow. For more information, see xref:../scalability_and_performance/ztp_far_edge/ztp-precaching-tool.adoc[Pre-caching images for single-node OpenShift deployments].

[id="ocp-4-12-hcp-node-tuning-operator"]
==== Node tuning in a hosted cluster (Technology Preview)

You can now configure OS-level tuning for nodes in a hosted cluster by using the Node Tuning Operator. To configure node tuning, you can create config maps in the management cluster that contain `Tuned` objects, and reference those config maps in your node pools. The tuning configuration that is defined in the `Tuned` objects is applied to the nodes in the node pool. For more information, see xref:../scalability_and_performance/using-node-tuning-operator.adoc#node-tuning-hosted-cluster_node-tuning-operator[Configuring node tuning in a hosted cluster].

[id="ocp-4-12-kmmo"]
==== Kernel module management Operator

The kernel module management (KMM) Operator replaces the Special Resource Operator (SRO).
-KMM includes the following features for connected environments only: - -* Hub and spoke support for edge deployments -* Pre-flight checks for upgrade support -* Secure boot kernel module signing -* Must gather logs to assist with troubleshooting -* Binary firmware deployment - -[id="ocp-4-12-hub-and-spoke"] -==== Hub and spoke cluster support (Technology Preview) - -For hub and spoke deployments in an environment that can access the internet, you can use the kernel module management (KMM) Operator deployed in the hub cluster to manage the deployment of the required kernel modules to one or more managed clusters. - -[id="scalability-and-performance-talm-updates"] -==== {cgu-operator-first} - -{cgu-operator-first} now provides more detailed status information and messages, and redesigned conditions. -You can use the `ClusterLabelSelector` field for greater flexibility in selecting clusters for update. -You can use timeout settings to determine what happens if an update fails for a cluster, for example, skipping the failing cluster and continuing to upgrade other clusters, or stopping policy remediation for all clusters. -For more information see xref:../scalability_and_performance/cnf-talm-for-cluster-upgrades.adoc[Topology Aware Lifecycle Manager for cluster updates]. - -[id="scalability-and-performance-mount-namespace"] -==== Mount namespace encapsulation (Technology Preview) - -Encapsulation is the process of moving all Kubernetes-specific mount points to an alternative namespace to reduce the visibility and performance impact of a large number of mount points in the default namespace. Previously, mount namespace encapsulation has been deployed transparently in {product-title} specifically for Distributed Units (DUs) installed using GitOps ZTP. In {product-title} v4.12, this functionality is now available as a configurable option. - -A standard host operating system uses systemd to constantly scan all mount namespaces: both the standard Linux mounts and the numerous mounts that Kubernetes uses to operate. The current implementation of Kubelet and CRI-O both use the top-level namespace for all container and Kubelet mount points. Encapsulating these container-specific mount points in a private namespace reduces systemd overhead and enhances CPU performance. Encapsulation can also improve security, by storing Kubernetes-specific mount points in a location safe from inspection by unprivileged users. - -For more information, see xref:../scalability_and_performance/optimizing-cpu-usage.adoc#optimizing-cpu-usage[Optimizing CPU usage with mount namespace encapsulation]. - -[id="ocp-4-12-ztp-sno-workload-partitioning"] -==== Changing the workload partitioning CPU set in {sno} clusters that are deployed with GitOps ZTP - -You can configure the workload partitioning CPU set in {sno} clusters that you deploy with GitOps ZTP. -To do this, you specify cluster management CPU resources with the `cpuset` field of the `SiteConfig` custom resource (CR) and the `reserved` field of the group `PolicyGenTemplate` CR. -The value that you set for `cpuset` should match the value set in the cluster `PerformanceProfile` CR `.spec.cpu.reserved` field for workload partitioning. - -For more information, see xref:../scalability_and_performance/ztp_far_edge/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-sno-du-enabling-workload-partitioning_sno-configure-for-vdu[Workload partitioning]. 
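
As a hedged illustration of how the values relate (the CPU ranges are placeholders), the `cpuset` value in the `SiteConfig` CR and the `reserved` value in the group `PolicyGenTemplate` CR should mirror the reserved CPUs that are defined in the cluster `PerformanceProfile` CR:

[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: openshift-node-performance-profile
spec:
  cpu:
    reserved: "0-1"   # management CPUs; the SiteConfig cpuset value should match, for example "0-1"
    isolated: "2-31"  # placeholder range for workload CPUs
----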
-
-[id="ocp-4-12-ztp-hub-templates"]
==== RHACM hub template functions now available for use with GitOps ZTP

Hub template functions are now available for use with GitOps ZTP through {rh-rhacm-first} and {cgu-operator-first}.
Hub-side cluster templates reduce the need to create separate policies for many clusters with similar configurations but with different values.
For more information, see xref:../scalability_and_performance/ztp_far_edge/ztp-advanced-policy-config.adoc#ztp-using-hub-cluster-templates_ztp-advanced-policy-config[Using hub templates in PolicyGenTemplate CRs].

[id="ocp-4-12-ztp-argocd-managed-cluster-limits"]
==== ArgoCD managed cluster limits

{rh-rhacm} uses `SiteConfig` CRs to generate the Day 1 managed cluster installation CRs for ArgoCD. Each ArgoCD application can manage a maximum of 300 `SiteConfig` CRs.
For more information, see xref:../scalability_and_performance/ztp_far_edge/ztp-preparing-the-hub-cluster.adoc#ztp-configuring-hub-cluster-with-argocd_ztp-preparing-the-hub-cluster[Configuring the hub cluster with ArgoCD].

[id="ocp-4-12-ztp-config-policy-eval-intervals"]
==== GitOps ZTP support for configuring policy compliance evaluation timeouts in PolicyGenTemplate CRs

In GitOps ZTP v4.11+, a default policy compliance evaluation timeout value is available for use in `PolicyGenTemplate` custom resources (CRs).
This value specifies how long the related `ConfigurationPolicy` CR can be in a state of policy compliance or non-compliance before {rh-rhacm} re-evaluates the applied cluster policies.

Optionally, you can now override the default evaluation intervals for all policies in `PolicyGenTemplate` CRs.

For more information, see xref:../scalability_and_performance/ztp_far_edge/ztp-advanced-policy-config.adoc#ztp-configuring-pgt-compliance-eval-timeouts_ztp-advanced-policy-config[Configuring policy compliance evaluation timeouts for PolicyGenTemplate CRs].

[id="ocp-4-12-insights-operator"]
=== Insights Operator

[id="ocp-4-12-insights-alerts"]
==== Insights alerts
In {product-title} {product-version}, active Insights recommendations are now presented to the user as alerts. You can view and configure these alerts with Alertmanager.

[id="ocp-4-12-insights-data-collection"]
==== Insights Operator data collection enhancements
In {product-title} {product-version}, the Insights Operator now collects the following metrics:

* `console_helm_uninstalls_total`
* `console_helm_upgrades_total`

[id="ocp-4-12-auth"]
=== Authentication and authorization

[id="ocp-4-12-rhosp-application-credentials"]
==== Application credentials on {rh-openstack}

You can now specify link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/users_and_identity_management_guide/application_credentials[application credentials] in the `clouds.yaml` files of clusters that run on {rh-openstack-first}. Application credentials are an alternative to embedding user account details in configuration files.
As an example, see the following section of a `clouds.yaml` file that includes user account details: - -[source,yaml] ----- -clouds: - openstack: - auth: - auth_url: https://127.0.0.1:13000 - password: thepassword - project_domain_name: Default - project_name: theprojectname - user_domain_name: Default - username: theusername - region_name: regionOne ----- - -Compare that section to one that uses application credentials: - -[source,yaml] ----- -clouds: - openstack: - auth: - auth_url: https://127.0.0.1:13000 - application_credential_id: '5dc185489adc4b0f854532e1af81ffe0' - application_credential_secret: 'PDCTKans2bPBbaEqBLiT_IajG8e5J_nJB4kvQHjaAy6ufhod0Zl0NkNoBzjn_bWSYzk587ieIGSlT11c4pVehA' - auth_type: "v3applicationcredential" - region_name: regionOne ----- - -To use application credentials with your cluster as a {rh-openstack} administrator, create the credentials. Then, use them in a `clouds.yaml` file when you install a cluster. Alternatively, you can create the `clouds.yaml` file and rotate it into an existing cluster. - -[id="ocp-4-12-hcp"] -=== Hosted control planes (Technology Preview) - -[id="ocp-4-12-hcp-api"] -==== HyperShift API beta release now available - -The default version for the `hypershift.openshift.io` API, which is the API for hosted control planes on {product-title}, is now v1beta1. Currently, for an existing cluster, the move from alpha to beta is not supported. - -[id="ocp-4-12-hcp-versioning"] -==== Versioning for hosted control planes - -With each major, minor, or patch version release of {product-title}, the HyperShift Operator is released. The HyperShift command-line interface (CLI) is released as part of each HyperShift Operator release. - -The `HostedCluster` and `NodePool` API resources are available in the beta version of the API and follow a similar policy to xref:../rest_api/understanding-api-support-tiers.adoc[{product-title}] and link:https://kubernetes.io/docs/concepts/overview/kubernetes-api/[Kubernetes]. - -[id="ocp-4-12-hcp-etcd-backup"] -==== Backing up and restoring etcd on a hosted cluster - -If you use hosted control planes on {product-title}, you can back up and restore etcd by taking a snapshot of etcd and uploading it to a location where you can retrieve it later, such as an S3 bucket. Later, if needed, you can restore the snapshot. For more information, see xref:../backup_and_restore/control_plane_backup_and_restore/etcd-backup-restore-hosted-cluster.adoc[Backing up and restoring etcd on a hosted cluster]. - -[id="ocp-4-12-hcp-dr-in-aws-region"] -==== Disaster recovery for a hosted cluster within an AWS region - -In a situation where you need disaster recovery for a hosted cluster, you can recover the hosted cluster to the same region within AWS. For more information, see xref:../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/dr-hosted-cluster-within-aws-region.adoc[Disaster recovery for a hosted cluster within an AWS region]. - -[id="ocp-4-12-rhv"] -=== Red Hat Virtualization (RHV) - -This release provides several updates to Red Hat Virtualization (RHV). With this release: - -* The oVirt CSI driver logging was revised with new error messages to improve the clarity and readability of the logs. -* The cluster API provider automatically updates oVirt and Red Hat Virtualization (RHV) credentials when they are changed in {product-title}. - -[id="ocp-4-12-notable-technical-changes"] -== Notable technical changes - -{product-title} {product-version} introduces the following notable technical changes. 
- -// Note: use [discrete] for these sub-headings. - -[discrete] -[id="ocp-4-12-auth-aws-sts-endpoints"] -=== AWS Security Token Service regional endpoints - -The Cloud Credential Operator utility (`ccoctl`) now creates secrets that use regional endpoints for the xref:../authentication/managing_cloud_provider_credentials/cco-mode-sts.adoc[AWS Security Token Service (AWS STS)]. This approach aligns with AWS recommended best practices. - -[discrete] -[id="ocp-4-12-auth-ccoctl-gcp-del-dir"] -=== Credentials requests directory parameter for deleting GCP resources with the Cloud Credential Operator utility - -With this release, when you xref:../installing/installing_gcp/uninstalling-cluster-gcp.adoc#cco-ccoctl-deleting-sts-resources_uninstalling-cluster-gcp[delete GCP resources with the Cloud Credential Operator utility], you must specify the directory containing the files for the component `CredentialsRequest` objects. - -[discrete] -[id="ocp-4-12-psa-restricted-enforcement"] -=== Future restricted enforcement for pod security admission - -Currently, pod security violations are shown as warnings and logged in the audit logs, but do not cause the pod to be rejected. - -Global restricted enforcement for pod security admission is currently planned for the next minor release of {product-title}. When this restricted enforcement is enabled, pods with pod security violations will be rejected. - -To prepare for this upcoming change, ensure that your workloads match the pod security admission profile that applies to them. Workloads that are not configured according to the enforced security standards defined globally or at the namespace level will be rejected. The `restricted-v2` SCC admits workloads according to the link:https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted[Restricted] Kubernetes definition. - -If you are receiving pod security violations, see the following resources: - -* See xref:../authentication/understanding-and-managing-pod-security-admission.adoc#security-context-constraints-psa-alert-eval_understanding-and-managing-pod-security-admission[Identifying pod security violations] for information about how to find which workloads are causing pod security violations. - -* See xref:../authentication/understanding-and-managing-pod-security-admission.adoc#security-context-constraints-psa-synchronization_understanding-and-managing-pod-security-admission[Security context constraint synchronization with pod security standards] to understand when pod security admission label synchronization is performed. Pod security admission labels are not synchronized in certain situations, such as the following situations: -** The workload is running in a system-created namespace that is prefixed with `openshift-`. -** The workload is running on a pod that was created directly without a pod controller. - -* If necessary, you can set a custom admission profile on the namespace or pod by setting the `pod-security.kubernetes.io/enforce` label. - -[discrete] -[id="ocp-4-12-olm-catalogs-and-psa-restricted-enforcement"] -=== Catalog sources and restricted pod security admission enforcement - -Catalog sources built using the SQLite-based catalog format and a version of the `opm` CLI tool released before {product-title} 4.11 cannot run under restricted pod security enforcement. - -In {product-title} {product-version}, namespaces do not have restricted pod security enforcement by default and the default catalog source security mode is set to `legacy`. 
- -If you do not want to run your SQLite-based catalog source pods under restricted pod security enforcement, you do not need to update your catalog source in {product-title} {product-version}. However, to ensure your catalog sources run in future {product-title} releases, you must update your catalog sources to run under restricted pod security enforcement. - -As a catalog author, you can enable compatibility with restricted pod security enforcement by completing either of the following actions: - -* Migrate your catalog to the file-based catalog format. -* Update your catalog image with a version of the `opm` CLI tool released with {product-title} 4.11 or later. - -If you do not want to update your SQLite database catalog image or migrate your catalog to the file-based catalog format, you can configure your catalog to run with elevated permissions. - -For more information, see xref:../operators/admin/olm-managing-custom-catalogs.adoc#olm-catalog-sources-and-psa_olm-managing-custom-catalogs[Catalog sources and pod security admission]. - -[discrete] -[id="ocp-4-12-operator-sdk-1-25-1"] -=== Operator SDK 1.25.0 - -{product-title} 4.12 supports Operator SDK 1.25.0. See xref:../cli_reference/osdk/cli-osdk-install.adoc#cli-osdk-install[Installing the Operator SDK CLI] to install or update to this latest version. - -[NOTE] -==== -Operator SDK 1.25.0 supports Kubernetes 1.25. - -For more information, see xref:../release_notes/ocp-4-12-release-notes.adoc#ocp-4-12-removed-kube-1-25-apis[Beta APIs removed from Kubernetes 1.25]. -==== - -If you have Operator projects that were previously created or maintained with Operator SDK 1.22.0, update your projects to keep compatibility with Operator SDK 1.25.0. - -* xref:../operators/operator_sdk/golang/osdk-golang-updating-projects.adoc#osdk-upgrading-projects_osdk-golang-updating-projects[Updating Go-based Operator projects] - -* xref:../operators/operator_sdk/ansible/osdk-ansible-updating-projects.adoc#osdk-upgrading-projects_osdk-ansible-updating-projects[Updating Ansible-based Operator projects] - -* xref:../operators/operator_sdk/helm/osdk-helm-updating-projects.adoc#osdk-upgrading-projects_osdk-helm-updating-projects[Updating Helm-based Operator projects] - -* xref:../operators/operator_sdk/helm/osdk-hybrid-helm-updating-projects.adoc#osdk-upgrading-projects_osdk-hybrid-helm-updating-projects[Updating Hybrid Helm-based Operator projects] - -* xref:../operators/operator_sdk/java/osdk-java-updating-projects.adoc#osdk-upgrading-projects_osdk-java-updating-projects[Updating Java-based Operator projects] - -[discrete] -[id="ocp-4-12-logical-volume-manager-storage"] -=== LVM Operator is now called Logical Volume Manager Storage - -The LVM Operator that was previously delivered with {rh-storage-first} requires installation through the {rh-storage}. -In {product-title} v4.12, the LVM Operator has been renamed _Logical Volume Manager Storage_. -Now, you install it as a standalone Operator from the OpenShift Operator catalog. -Logical Volume Manager Storage provides dynamic provisioning of block storage on a single, limited resources {sno} cluster. - -[id="ocp-4-12-deprecated-removed-features"] -== Deprecated and removed features - -Some features available in previous releases have been deprecated or removed. - -Deprecated functionality is still included in {product-title} and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. 
For the most recent list of major functionality deprecated and removed within {product-title} {product-version}, refer to the table below. Additional details for more functionality that has been deprecated and removed are listed after the table. - -In the following tables, features are marked with the following statuses: - -* _General Availability_ -* _Deprecated_ -* _Removed_ - -[discrete] -=== Operator deprecated and removed features - -.Operator deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|SQLite database format for Operator catalogs -|Deprecated -|Deprecated -|Deprecated - -|==== - -[discrete] -=== Images deprecated and removed features - -.Images deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|`ImageChangesInProgress` condition for Cluster Samples Operator -|Deprecated -|Deprecated -|Deprecated - -|`MigrationInProgress` condition for Cluster Samples Operator -|Deprecated -|Deprecated -|Deprecated - -|Removal of Jenkins images from install payload -|General Availability -|Removed -|Removed - -|==== - -[discrete] -=== Monitoring deprecated and removed features - -.Monitoring deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|Grafana component in monitoring stack -|Deprecated -|Removed -|Removed - -|Access to Prometheus and Grafana UIs in monitoring stack -|Deprecated -|Removed -|Removed - -|==== - -[discrete] -=== Installation deprecated and removed features - -.Installation deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|vSphere 6.7 Update 2 or earlier -|Deprecated -|Removed -|Removed - -|vSphere 7.0 Update 1 or earlier -|General Availability -|Deprecated -|Deprecated - -|VMware ESXi 6.7 Update 2 or earlier -|Deprecated -|Removed -|Removed - -|VMware ESXi 7.0 Update 1 or earlier -|General Availability -|Deprecated -|Deprecated - -|CoreDNS wildcard queries for the `cluster.local` domain -|General Availability -|General Availability -|Deprecated - -|`ingressVIP` and `apiVIP` settings in the `install-config.yaml` file for installer-provisioned infrastructure clusters -|General Availability -|General Availability -|Deprecated - -|==== - -[discrete] -=== Updating clusters deprecated and removed features - -.Updating clusters deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|Virtual hardware version 13 -|Deprecated -|Removed -|Removed - -|==== - -[discrete] -=== Storage deprecated and removed features - -.Storage deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|`Snapshot.storage.k8s.io/v1beta1` API endpoint -|Deprecated -|Removed -|Removed - -|Persistent storage using FlexVolume -|Deprecated -|Deprecated -|Deprecated - -|==== - -[discrete] -=== Authentication and authorization deprecated and removed features - -.Authentication and authorization deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|Automatic generation of service account token secrets -|General Availability -|Removed -|Removed - -|==== - -[discrete] -=== Specialized hardware and driver enablement deprecated and removed features - -.Specialized hardware and driver enablement deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|Special Resource Operator (SRO) -|Technology Preview 
-|Technology Preview
-|Removed
-
-|====
-
-[discrete]
=== Multi-architecture deprecated and removed features

.Multi-architecture deprecated and removed tracker
[cols="4,1,1,1",options="header"]
|====
|Feature |4.10 |4.11 | 4.12

|IBM POWER8 all models (`ppc64le`)
|General Availability
|General Availability
|Deprecated

|IBM POWER9 AC922 (`ppc64le`)
|General Availability
|General Availability
|Deprecated

|IBM POWER9 IC922 (`ppc64le`)
|General Availability
|General Availability
|Deprecated

|IBM POWER9 LC922 (`ppc64le`)
|General Availability
|General Availability
|Deprecated

|IBM z13 all models (`s390x`)
|General Availability
|General Availability
|Deprecated

|IBM LinuxONE Emperor (`s390x`)
|General Availability
|General Availability
|Deprecated

|IBM LinuxONE Rockhopper (`s390x`)
|General Availability
|General Availability
|Deprecated

|AMD64 (x86_64) v1 CPU
|General Availability
|General Availability
|Deprecated

|====

[discrete]
=== Networking deprecated and removed features

.Networking deprecated and removed tracker
[cols="4,1,1,1",options="header"]
|====
|Feature |4.10 |4.11 | 4.12

|Kuryr on {rh-openstack}
|General Availability
|General Availability
|Deprecated

|====

[id="ocp-4-12-deprecated-features"]
=== Deprecated features

[id="ocp-4-12-rhv-deprecations"]
==== Red Hat Virtualization (RHV) as a host platform for {product-title} will be deprecated

Red Hat Virtualization (RHV) will be deprecated in an upcoming release of {product-title}. Support for {product-title} on RHV will be removed from a future {product-title} release, currently planned as {product-title} 4.14.

[id="ocp-4-12-ne-deprecations"]
==== Wildcard DNS queries for the `cluster.local` domain are deprecated

CoreDNS will stop supporting wildcard DNS queries for names under the `cluster.local` domain. These queries will resolve in {product-title} {product-version} as they do in earlier versions, but support will be removed from a future {product-title} release.

[id="ocp-4-12-z13-power8-x86v1-deprecations"]
==== Specific hardware models on `ppc64le`, `s390x`, and `x86_64` v1 CPU architectures are deprecated

In {product-title} 4.12, support for {op-system} functionality is deprecated for:

* IBM POWER8 all models (ppc64le)
* IBM POWER9 AC922 (ppc64le)
* IBM POWER9 IC922 (ppc64le)
* IBM POWER9 LC922 (ppc64le)
* IBM z13 all models (s390x)
* LinuxONE Emperor (s390x)
* LinuxONE Rockhopper (s390x)
* AMD64 (x86_64) v1 CPU

While these hardware models remain fully supported in {product-title} 4.12, Red Hat recommends that you use later hardware models.

[id="ocp-4-12-ne-kuryr"]
==== Kuryr support for clusters that run on {rh-openstack}

In {product-title} 4.12, support for Kuryr on clusters that run on {rh-openstack} is deprecated. Support will be removed no earlier than {product-title} 4.14.

[id="ocp-4-12-removed-features"]
=== Removed features

[id="ocp-4-12-removed-kube-1-25-apis"]
==== Beta APIs removed from Kubernetes 1.25

Kubernetes 1.25 removed the following deprecated APIs, so you must migrate manifests and API clients to use the appropriate API version. For more information about migrating removed APIs, see the link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-25[Kubernetes documentation].
- -.APIs removed from Kubernetes 1.25 -[cols="2,2,2,1",options="header",] -|=== -|Resource |Removed API |Migrate to |Notable changes - -|`CronJob` -|`batch/v1beta1` -|`batch/v1` -|No - -|`EndpointSlice` -|`discovery.k8s.io/v1beta1` -|`discovery.k8s.io/v1` -|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#endpointslice-v125[Yes] - -|`Event` -|`events.k8s.io/v1beta1` -|`events.k8s.io/v1` -|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#event-v125[Yes] - -|`HorizontalPodAutoscaler` -|`autoscaling/v2beta1` -|`autoscaling/v2` -|No - -|`PodDisruptionBudget` -|`policy/v1beta1` -|`policy/v1` -|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#poddisruptionbudget-v125[Yes] - -|`PodSecurityPolicy` -|`policy/v1beta1` -|link:https://kubernetes.io/docs/concepts/security/pod-security-admission/[Pod Security Admission] ^[1]^ -|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#psp-v125[Yes] - -|`RuntimeClass` -|`node.k8s.io/v1beta1` -|`node.k8s.io/v1` -|No - -|=== -[.small] -1. For more information about pod security admission in {product-title}, see xref:../authentication/understanding-and-managing-pod-security-admission.adoc#understanding-and-managing-pod-security-admission[Understanding and managing pod security admission]. - -[id="ocp-4-12-empty-file-support-removed"] -==== Empty file and stdout support for the oc registry login command -The `--registry-config` and `--to option` options for the `oc registry login` command now stop accepting empty files. -These options continue to work with files that do not exist. The ability to write output to `-` (stdout) is also removed. - -[id="ocp-4-12-oc-rhel-7"] -==== {op-system-base} 7 support for the OpenShift CLI (oc) has been removed - -Support for using {op-system-base-full} 7 with the OpenShift CLI (`oc`) has been removed. If you use the OpenShift CLI (`oc`) with {op-system-base}, you must use {op-system-base} 8 or later. - -[id="ocp-4-12-oc-commands-removed"] -==== OpenShift CLI (oc) commands have been removed - -The following OpenShift CLI (`oc`) commands were removed with this release: - -* `oc adm migrate etcd-ttl` -* `oc adm migrate image-references` -* `oc adm migrate legacy-hpa` -* `oc adm migrate storage` - -[id="ocp-4-12-monitoring-grafana-component-removed"] -==== Grafana component removed from monitoring stack -The Grafana component is no longer a part of the {product-title} {product-version} monitoring stack. -As an alternative, go to *Observe* -> *Dashboards* in the {product-title} web console to view monitoring dashboards. - -[id="ocp-4-12-monitoring-prometheus-and-grafana-uis-removed"] -==== Prometheus and Grafana user interface access removed from monitoring stack -Access to the third-party Prometheus and Grafana user interfaces have been removed from the {product-title} {product-version} monitoring stack. -As an alternative, click *Observe* in the {product-title} web console to view alerting, metrics, dashboards, and metrics targets for monitoring components. - -[id="ocp-4-12-removed-features-virt-hw-v13"] -==== Support for virtual hardware version 13 is removed -In {product-title} 4.11, support for virtual hardware version 13 is removed. Support for virtual hardware version 13 was deprecated in {product-title} 4.9. Red Hat recommends that you use virtual hardware version 15 or later. 
- -[id="ocp-4-12-removed-features-snapshot-v1beta1-api"] -==== Support for snapshot v1beta1 API endpoint is removed -In {product-title} 4.11, support for `snapshot.storage.k8s.io/v1beta1` API endpoint is removed. Support for `snapshot.storage.k8s.io/v1beta1` API endpoint was deprecated in {product-title} 4.7. Red Hat recommends that you use `snapshot.storage.k8s.io/v1`. All objects created as `v1beta1` are available through the v1 endpoint. - -[id="ocp-4-12-custom-scheduler"] -==== Support for manually deploying a custom scheduler has been removed - -Support for deploying custom schedulers manually has been removed with this release. Use the xref:../nodes/scheduling/secondary_scheduler/index.adoc#nodes-secondary-scheduler-about_nodes-secondary-scheduler-about[{secondary-scheduler-operator-full}] instead to deploy a custom secondary scheduler in {product-title}. - -[id="ocp-4-12-openshiftsdn-sno-not-supported"] -==== Support for deploying {sno} with OpenShiftSDN has been removed - -Support for deploying {sno} clusters with OpenShiftSDN has been removed with this release. OVN-Kubernetes is the default networking solution for {sno} deployments. - -==== Removal of Jenkins images from install payload - -* {product-title} 4.11 moves the "OpenShift Jenkins" and "OpenShift Agent Base" images to the `ocp-tools-4` repository at `registry.redhat.io` so that Red Hat can produce and update the images outside the {product-title} lifecycle. Previously, these images were in the {product-title} install payload and the `openshift4` repository at `registry.redhat.io`. For more information, see xref:../cicd/jenkins/important-changes-to-openshift-jenkins-images.adoc#openshift-jenkins[OpenShift Jenkins]. - -* {product-title} 4.11 removes "OpenShift Jenkins Maven" and "NodeJS Agent" images from its payload. Previously, {product-title} 4.10 deprecated these images. Red Hat no longer produces these images, and they are not available from the `ocp-tools-4` repository at `registry.redhat.io`. -+ -However, upgrading to {product-title} 4.11 does not remove "OpenShift Jenkins Maven" and "NodeJS Agent" images from 4.10 and earlier releases. And Red Hat provides bug fixes and support for these images through the end of the 4.10 release lifecycle, in accordance with the link:https://access.redhat.com/support/policy/updates/openshift[{product-title} lifecycle policy]. -+ -For more information, see xref:../cicd/jenkins/important-changes-to-openshift-jenkins-images.adoc#openshift-jenkins[OpenShift Jenkins]. - -[id="ocp-4-12-future-removals"] -=== Future Kubernetes API removals - -The next minor release of {product-title} is expected to use Kubernetes 1.26. Currently, Kubernetes 1.26 is scheduled to remove several deprecated APIs. - -See the link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-26[Deprecated API Migration Guide] in the upstream Kubernetes documentation for the list of planned Kubernetes API removals. - -See link:https://access.redhat.com/articles/6955985[Navigating Kubernetes API deprecations and removals] for information about how to check your cluster for Kubernetes APIs that are planned for removal. - -[id="ocp-4-12-bug-fixes"] -== Bug fixes - -[discrete] -[id="ocp-4-12-api-auth-bug-fixes"] -==== API Server and Authentication - -* Previously, the Cluster Authentication Operator state was set to `progressing = false` after receiving a `workloadIsBeingUpdatedTooLong` error. At the same time, `degraded = false` was kept for the time of the `inertia` defined. 
Consequently, the shortened progressing time and the increased degraded time created a situation where `progressing = false` and `degraded = false` were set prematurely. This caused inconsistent OpenShift CI tests because a healthy state was assumed, which was incorrect. This issue has been fixed by removing the `progressing = false` setting after the `workloadIsBeingUpdatedTooLong` error is returned. Now, because there is no `progressing = false` state, OpenShift CI tests are more consistent. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2111842[*BZ#2111842*])
-
-//Bug fix work for TELCODOCS-750
-[discrete]
-[id="ocp-4-12-bare-metal-hardware-bug-fixes"]
-==== Bare Metal Hardware Provisioning
-
-* In recent versions of server firmware, the time between server operations has increased. This causes timeouts during installer-provisioned infrastructure installations when the {product-title} installation program waits for a response from the Baseboard Management Controller (BMC). The new `python3-sushy` release increases the number of server-side attempts to contact the BMC. This update accounts for the extended waiting time and avoids timeouts during installation. (link:https://issues.redhat.com/browse/OCPBUGS-4097[*OCPBUGS-4097*])
-
-* Before this update, the Ironic provisioning service did not support Baseboard Management Controllers (BMCs) that use weak eTags combined with strict eTag validation. By design, if the BMC provides a weak eTag, Ironic returns two eTags: the original eTag and the original eTag converted to the strong format for compatibility with BMCs that do not support weak eTags. Although Ironic can send two eTags, BMCs that use strict eTag validation reject such requests due to the presence of the second eTag. As a result, on some older server hardware, bare-metal provisioning failed with the following error: `HTTP 412 Precondition Failed`. In {product-title} 4.12 and later, this behavior changes and Ironic no longer attempts to send two eTags in cases where a weak eTag is provided. Instead, if a Redfish request dependent on an eTag fails with an eTag validation error, Ironic retries the request with known workarounds. This minimizes the risk of bare-metal provisioning failures on machines with strict eTag validation. (link:https://issues.redhat.com/browse/OCPBUGS-3479[*OCPBUGS-3479*])
-
-* Before this update, when a Redfish system featured a Settings URI, the Ironic provisioning service always attempted to use this URI to make changes to boot-related BIOS settings. However, bare-metal provisioning failed if the Baseboard Management Controller (BMC) featured a Settings URI but did not support changing a particular BIOS setting by using this Settings URI. In {product-title} 4.12 and later, if a system features a Settings URI, Ironic verifies that it can change a particular BIOS setting by using the Settings URI before proceeding. Otherwise, Ironic implements the change by using the System URI. This additional logic ensures that Ironic can apply boot-related BIOS setting changes and bare-metal provisioning can succeed. (link:https://issues.redhat.com/browse/OCPBUGS-2052[*OCPBUGS-2052*])
-
-[discrete]
-[id="ocp-4-12-builds-bug-fixes"]
-==== Builds
-
-* By default, Buildah prints steps to the log file, including the contents of environment variables, which might include xref:../cicd/builds/creating-build-inputs.adoc#builds-input-secrets-configmaps_creating-build-inputs[build input secrets]. 
Although you can use the `--quiet` build argument to suppress printing of those environment variables, this argument is not available if you use the source-to-image (S2I) build strategy. The current release fixes this issue. To suppress printing of environment variables, set the `BUILDAH_QUIET` environment variable in your build configuration:
-+
-[source,yaml]
----
-sourceStrategy:
-...
-  env:
-    - name: "BUILDAH_QUIET"
-      value: "true"
----
-+
-(link:https://bugzilla.redhat.com/show_bug.cgi?id=2099991[*BZ#2099991*])
-
-[discrete]
-[id="ocp-4-12-cloud-compute-bug-fixes"]
-==== Cloud Compute
-
-* Previously, instances were not set to respect the GCP infrastructure default option for automated restarts. As a result, instances could be created without using the infrastructure default for automatic restarts. This sometimes meant that instances were terminated in GCP but their associated machines were still listed in the `Running` state because they did not automatically restart. With this release, the code for passing the automatic restart option has been improved to better detect and pass on the default option selection from users. Instances now use the infrastructure default properly and are automatically restarted when the user requests the default functionality. (link:https://issues.redhat.com/browse/OCPBUGS-4504[*OCPBUGS-4504*])
-
-* The `v1beta1` version of the `PodDisruptionBudget` object is now deprecated in Kubernetes. With this release, internal references to `v1beta1` are replaced with `v1`. This change is internal to the cluster autoscaler and does not require user action beyond the advice in the link:https://access.redhat.com/articles/6955381[Preparing to upgrade to OpenShift Container Platform 4.12] Red Hat Knowledgebase Article. (link:https://issues.redhat.com/browse/OCPBUGS-1484[*OCPBUGS-1484*])
-
-* Previously, the GCP machine controller reconciled the state of machines every 10 hours. Other providers set this value to 10 minutes so that changes that happen outside of the Machine API system are detected within a short period. The longer reconciliation period for GCP could cause unexpected issues such as missing certificate signing requests (CSR) approvals due to an external IP address being added but not detected for an extended period. With this release, the GCP machine controller is updated to reconcile every 10 minutes to be consistent with other platforms and so that external changes are picked up sooner. (link:https://issues.redhat.com/browse/OCPBUGS-4499[*OCPBUGS-4499*])
-
-* Previously, due to a deployment misconfiguration for the Cluster Machine Approver Operator, enabling the `TechPreviewNoUpgrade` feature set caused errors and sporadic Operator degradation. Because clusters with the `TechPreviewNoUpgrade` feature set enabled use two instances of the Cluster Machine Approver Operator and both deployments used the same set of ports, there was a conflict that led to errors for single-node topology. With this release, the Cluster Machine Approver Operator deployment is updated to use a different set of ports for different deployments. (link:https://issues.redhat.com/browse/OCPBUGS-2621[*OCPBUGS-2621*])
-
-* Previously, the scale from zero functionality in Azure relied on a statically compiled list of instance types mapping the name of the instance type to the number of CPUs and the amount of memory allocated to the instance type. This list grew stale over time. 
With this release, information about instance type sizes is dynamically gathered from the Azure API directly to prevent the list from becoming stale. (link:https://issues.redhat.com/browse/OCPBUGS-2558[*OCPBUGS-2558*])
-
-* Previously, Machine API termination handler pods did not start on spot instances. As a result, pods that were running on tainted spot instances did not receive a termination signal if the instance was terminated. This could result in loss of data in workload applications. With this release, the Machine API termination handler deployment is modified to tolerate the taints, and pods that run on tainted spot instances now receive termination signals.
-(link:https://issues.redhat.com/browse/OCPBUGS-1274[*OCPBUGS-1274*])
-
-* Previously, error messages for Azure clusters did not explain that it is not possible to create new machines with public IP addresses for a disconnected install that uses only the internal publish strategy. With this release, the error message is updated for improved clarity. (link:https://issues.redhat.com/browse/OCPBUGS-519[*OCPBUGS-519*])
-
-* Previously, the Cloud Controller Manager Operator did not check the `cloud-config` configuration file for AWS clusters. As a result, it was not possible to pass additional settings to the AWS cloud controller manager component by using the configuration file. With this release, the Cloud Controller Manager Operator checks the infrastructure resource and parses references to the `cloud-config` configuration file so that users can configure additional settings. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2104373[*BZ#2104373*])
-
-* Previously, when Azure added new instance types and enabled accelerated networking support on instance types that previously did not have it, the list of Azure instances in the machine controller became outdated. As a result, the machine controller could not create machines with instance types that did not previously support accelerated networking, even if they support this feature on Azure. With this release, the required instance type information is retrieved from the Azure API before the machine is created to keep it up to date, so the machine controller is able to create machines with new and updated instance types. This fix also applies to any instance types that are added in the future. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2108647[*BZ#2108647*])
-
-* Previously, the cluster autoscaler did not respect the AWS, IBM Cloud, and Alibaba Cloud topology labels for the CSI drivers when using the Cluster API provider. As a result, nodes with the topology label were not processed properly by the autoscaler when attempting to balance nodes during a scale-out event. With this release, the autoscaler's custom processors are updated so that it respects this label. The autoscaler can now balance similar node groups that are labeled by the AWS, IBM Cloud, or Alibaba CSI labels. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2001027[*BZ#2001027*])
-
-* Previously, the Power VS cloud provider was not capable of fetching the machine IP address from a DHCP server. Changing the IP address did not update the node, which caused some inconsistencies, such as pending certificate signing requests. With this release, the Power VS cloud provider is updated to fetch the machine IP address from the DHCP server so that the IP addresses for the nodes are consistent with the machine IP address. 
(link:https://bugzilla.redhat.com/show_bug.cgi?id=2111474[*BZ#2111474*])
-
-* Previously, machines created in early versions of {product-title} with invalid configurations could not be deleted. With this release, the webhooks that prevent the creation of machines with invalid configurations no longer prevent the deletion of existing invalid machines. Users can now successfully remove these machines from their cluster by manually removing the finalizers on these machines. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2101736[*BZ#2101736*])
-
-* Previously, short DHCP lease times, caused by `NetworkManager` not being run as a daemon or in continuous mode, caused machines to become stuck during initial provisioning and never become nodes in the cluster. With this release, extra checks are added so that if a machine becomes stuck in this state, it is deleted and recreated automatically. Machines that are affected by this network condition can become nodes after a reboot initiated by the Machine API controller. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2115090[*BZ#2115090*])
-
-* Previously, when creating a new `Machine` resource using a machine profile that does not exist in IBM Cloud, the machine became stuck in the `Provisioning` phase. With this release, validation is added to the IBM Cloud Machine API provider to ensure that a machine profile exists, and machines with an invalid machine profile are rejected by the Machine API. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2062579[*BZ#2062579*])
-
-* Previously, the Machine API provider for AWS did not verify that the security group defined in the machine specification exists. Instead of returning an error in this case, it used a default security group, which should not be used for {product-title} machines, and successfully created a machine without informing the user that the default group was used. With this release, the Machine API returns an error when users set either incorrect or empty security group names in the machine specification. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2060068[*BZ#2060068*])
-
-* Previously, the Machine API provider for Azure treated user-provided values for instance types as case sensitive. This led to false-positive errors when instance types were correct but did not match the expected case. With this release, instance types are converted to lowercase characters so that users get correct results without false-positive errors for mismatched case. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2085390[*BZ#2085390*])
-
-* Previously, there was no check for nil values in the annotations of a machine object before attempting to access the object. This situation was rare, but caused the machine controller to panic when reconciling the machine. With this release, nil values are checked and the machine controller is able to reconcile machines without annotations. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2106733[*BZ#2106733*])
-
-* Previously, the cluster autoscaler metrics for cluster CPU and memory usage would never reach, or exceed, the limits set by the `ClusterAutoscaler` resource. As a result, no alerts were fired when the cluster autoscaler could not scale due to resource limitations. With this release, a new metric called `cluster_autoscaler_skipped_scale_events_count` is added to the cluster autoscaler to more accurately detect when resource limits are reached or exceeded. 
Alerts will now fire when the cluster autoscaler is unable to scale the cluster up because it has reached the cluster resource limits. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1997396[*BZ#1997396*])
-
-* Previously, when the Machine API provider failed to fetch the machine IP address, it would not set the internal DNS name, and the machine certificate signing requests were not automatically approved. With this release, the Power VS machine provider is updated to set the server name as the internal DNS name even when it fails to fetch the IP address. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2111467[*BZ#2111467*])
-
-* Previously, the Machine API vSphere machine controller set the `PowerOn` flag when cloning a VM. This created a `PowerOn` task that the machine controller was not aware of. If that `PowerOn` task failed, machines were stuck in the `Provisioned` phase but never powered on. With this release, the cloning sequence is altered to avoid the issue. Additionally, the machine controller now retries powering on the VM in case of failure and reports failures properly. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2087981[*BZ#2087981*], link:https://issues.redhat.com/browse/OCPBUGS-954[*OCPBUGS-954*])
-
-* With this release, AWS security groups are tagged immediately when they are created instead of after creation. This means that fewer requests are sent to AWS and the required user privileges are lowered. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2098054[*BZ#2098054*], link:https://issues.redhat.com/browse/OCPBUGS-3094[*OCPBUGS-3094*])
-
-* Previously, a bug in the {rh-openstack} legacy cloud provider resulted in a crash if certain {rh-openstack} operations were attempted after authentication had failed. For example, shutting down a server caused the Kubernetes controller manager to fetch server information from {rh-openstack}, which triggered this bug. As a result, if initial cloud authentication failed or was configured incorrectly, shutting down a server caused the Kubernetes controller manager to crash. With this release, the {rh-openstack} legacy cloud provider is updated to not attempt any {rh-openstack} API calls if it has not previously authenticated successfully. Now, shutting down a server with invalid cloud credentials no longer causes the Kubernetes controller manager to crash. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2102383[*BZ#2102383*])
-
-[discrete]
-[id="ocp-4-12-dev-console-bug-fixes"]
-==== Developer Console
-
-* Previously, the `openshift-config` namespace was hard-coded for the `HelmChartRepository` custom resource, which was the same namespace used for the `ProjectHelmChartRepository` custom resource. This prevented users from adding private `ProjectHelmChartRepository` custom resources in their desired namespace. Consequently, users were unable to access secrets and config maps in the `openshift-config` namespace. This update fixes the `ProjectHelmChartRepository` custom resource definition by adding a `namespace` field that allows a user with the correct permissions to read secrets and config maps from a namespace of their choice. Additionally, the user can add secrets and config maps to the accessible namespace, and they can add private Helm chart repositories in the namespace used to create the resources. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2071792[*BZ#2071792*])
-
-[discrete]
-[id="ocp-4-12-image-registry-bug-fixes"]
-==== Image Registry
-
-* Previously, the image trigger controller did not have permissions to change objects. 
Consequently, image trigger annotations did not work on some resources. This update creates a cluster role binding that provides the controller with the required permissions to update objects according to annotations. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2055620[*BZ#2055620*])
-
-* Previously, the Image Registry Operator did not have a `progressing` condition for the `node-ca` daemon set and used `generation` from an incorrect object. Consequently, the `node-ca` daemon set could be marked as `degraded` while the Operator was still running. This update adds the `progressing` condition, which indicates that the installation is not complete. As a result, the Image Registry Operator successfully installs the `node-ca` daemon set, and the installer waits until it is fully deployed. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2093440[*BZ#2093440*])
-
-[discrete]
-[id="ocp-4-12-installer-bug-fixes"]
-==== Installer
-
-* Previously, the number of supported user-defined tags was 8, and reserved {product-title} tags were 2 for AWS resources. With this release, the number of supported user-defined tags is now 25 and reserved {product-title} tags are 25 for AWS resources. You can now add up to 25 user tags during installation. (link:https://issues.redhat.com/browse/CFE-592[*CFE-592*])
-
-* Previously, installing a cluster on Amazon Web Services started and then failed when the IAM administrative user was not assigned the `s3:GetBucketPolicy` permission. This update adds this permission to the checklist that the installation program uses to ensure that all of the required permissions are assigned. As a result, the installation program now stops the installation with a warning that the IAM administrative user is missing the `s3:GetBucketPolicy` permission. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2109388[*BZ#2109388*])
-
-* Previously, installing a cluster on Microsoft Azure failed when the Azure DCasv5-series or DCadsv5-series of confidential VMs were specified as control plane nodes. With this update, the installation program now stops the installation with an error, which states that confidential VMs are not yet supported. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2055247[*BZ#2055247*])
-
-* Previously, gathering bootstrap logs was not possible until the control plane machines were running. With this update, gathering bootstrap logs now only requires that the bootstrap machine be available. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2105341[*BZ#2105341*])
-
-* Previously, if a cluster failed to install on Google Cloud Platform because the service account had insufficient permissions, the resulting error message did not mention this as the cause of the failure. This update improves the error message, which now instructs users to check the permissions that are assigned to the service account. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2103236[*BZ#2103236*])
-
-* Previously, when an installation on Google Cloud Platform (GCP) failed because an invalid GCP region was specified, the resulting error message did not mention this as the cause of the failure. This update improves the error message, which now states that the region is not valid. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2102324[*BZ#2102324*])
-
-* Previously, cluster installations using Hive could fail if Hive used an older version of the `install-config.yaml` file. This update allows the installation program to accept older versions of the `install-config.yaml` file provided by Hive. 
(link:https://bugzilla.redhat.com/show_bug.cgi?id=2098299[*BZ#2098299*]) - -* Previously, the installation program would incorrectly allow the `apiVIP` and `ingressVIP` parameters to use the same IPv6 address if they represented the address differently, such as listing the address in an abbreviated format. In this update, the installer correctly validates these two parameters regardless of their formatting, requiring separate IP addresses for each parameter. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2103144[*BZ#2103144*]) - -* Previously, uninstalling a cluster using the installation program failed to delete all resources in clusters installed on GCP if the cluster name was more than 22 characters long. In this update, uninstalling a cluster using the installation program correctly locates and deletes all GCP cluster resources in cases of long cluster names. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2076646[*BZ#2076646*]) - -* Previously, when installing a cluster on {rh-openstack-first} with multiple networks defined in the `machineNetwork` parameter, the installation program only created security group rules for the first network. With this update, the installation program creates security group rules for all networks defined in the `machineNetwork` so that users no longer need to manually edit security group rules after installation. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2095323[*BZ#2095323*]) - -* Previously, users could manually set the API and Ingress virtual IP addresses to values that conflicted with the allocation pool of the DHCP server when installing a cluster on OpenStack. This could cause the DHCP server to assign one of the VIP addresses to a new machine, which would fail to start. In this update, the installation program validates the user-provided VIP addresses to ensure that they do not conflict with any DHCP pools. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1944365[*BZ#1944365*]) - -* Previously, when installing a cluster on vSphere using a datacenter that is embedded inside a folder, the installation program could not locate the datacenter object, causing the installation to fail. In this update, the installation program can traverse the directory that contains the datacenter object, allowing the installation to succeed. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2097691[*BZ#2097691*]) - -* Previously, when installing a cluster on Azure using arm64 architecture with installer-provisioned infrastructure, the image definition resource for `hyperVGeneration` V1 incorrectly had an architecture value of `x64`. With this update, the image definition resource for `hyperVGeneration` V1 has the correct architecture value of `Arm64`. (link:https://issues.redhat.com/browse/OCPBUGS-3639[*OCPBUGS-3639*]) - -* Previously, when installing a cluster on VMware vSphere, the installation could fail if the user specified a user-defined folder in the `failureDomain` section of the `install-config.yaml` file. With this update, the installation program correctly validates user-defined folders in the `failureDomain` section of the `install-config.yaml` file. (link:https://issues.redhat.com/browse/OCPBUGS-3343[*OCPBUGS-3343*]) - -* Previously, when destroying a partially deployed cluster after an installation failed on VMware vSphere, some virtual machine folders were not destroyed. This error could occur in clusters configured with multiple vSphere datacenters or multiple vSphere clusters. 
With this update, all installer-provisioned infrastructure is correctly deleted when destroying a partially deployed cluster after an installation failure. (link:https://issues.redhat.com/browse/OCPBUGS-1489[*OCPBUGS-1489*])
-
-* Previously, when installing a cluster on VMware vSphere, the installation failed if the user specified the `platform.vsphere.vcenters` parameter but did not specify the `platform.vsphere.failureDomains.topology.networks` parameter in the `install-config.yaml` file. With this update, the installation program alerts the user that the `platform.vsphere.failureDomains.topology.networks` field is required when specifying `platform.vsphere.vcenters`. (link:https://issues.redhat.com/browse/OCPBUGS-1698[*OCPBUGS-1698*])
-
-* Previously, when installing a cluster on VMware vSphere, the installation failed if the user defined the `platform.vsphere.vcenters` and `platform.vsphere.failureDomains` parameters but did not define `platform.vsphere.defaultMachinePlatform.zones`, or `compute.platform.vsphere.zones` and `controlPlane.platform.vsphere.zones`. With this update, the installation program validates that the user has defined the `zones` parameter in multi-region or multi-zone deployments prior to installation. (link:https://issues.redhat.com/browse/OCPBUGS-1490[*OCPBUGS-1490*])
-
-[discrete]
-[id="ocp-4-12-kube-controller-bug-fixes"]
-==== Kubernetes Controller Manager
-
-* Previously, the Kubernetes Controller Manager Operator reported `degraded` in environments without a monitoring stack. With this update, the Kubernetes Controller Manager Operator skips checking monitoring for degradation cues when the monitoring stack is not present. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2118286[*BZ#2118286*])
-
-* With this update, Kubernetes Controller Manager alerts (`KubeControllerManagerDown`, `PodDisruptionBudgetAtLimit`, `PodDisruptionBudgetLimit`, and `GarbageCollectorSyncFailed`) have links to GitHub runbooks. The runbooks help users to understand and debug these alerts. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2001409[*BZ#2001409*])
-
-[discrete]
-[id="ocp-4-12-kube-scheduler-bug-fixes"]
-==== Kubernetes Scheduler
-
-* Previously, the secondary scheduler deployment was not deleted after a secondary scheduler custom resource was deleted. Consequently, the Secondary Scheduler Operator and Operand were not fully uninstalled. With this update, the correct owner reference is set in the secondary scheduler custom resource so that it points to the secondary scheduler deployment. As a result, secondary scheduler deployments are deleted when the secondary scheduler custom resource is deleted. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2100923[*BZ#2100923*])
-
-* For the {product-title} {product-version} release, the descheduler can now publish events to an API group because the release adds additional role-based access controls (RBAC) rules to the descheduler's profile. (link:https://issues.redhat.com/browse/OCPBUGS-2330[*OCPBUGS-2330*])
-
-[discrete]
-[id="ocp-4-12-machine-config-operator-bug-fixes"]
-==== Machine Config Operator
-
-* Previously, the Machine Config Operator (MCO) `ControllerConfig` resource, which contains important certificates, was only synced if the Operator's daemon sync succeeded. By design, unready nodes during a daemon sync prevent that daemon sync from succeeding, so unready nodes were indirectly preventing the `ControllerConfig` resource, and therefore those certificates, from syncing. 
This resulted in eventual cluster degradation when there were unready nodes due to the inability to rotate the certificates contained in the `ControllerConfig` resource. With this release, the sync of the `ControllerConfig` resource is no longer dependent on the daemon sync succeeding, so the `ControllerConfig` resource now continues to sync if the daemon sync fails. This means that unready nodes no longer prevent the `ControllerConfig` resource from syncing, so certificates continue to be updated even when there are unready nodes. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2034883[*BZ#2034883*])
-
-[discrete]
-[id="ocp-4-12-management-console-bug-fixes"]
-==== Management Console
-
-* Previously, the *Operator details* page attempted to display multiple error messages, but the error message component could only display a single error message at a time. As a result, relevant error messages were not displayed. With this update, the *Operator details* page displays only the first error message so that the user sees a relevant error. (link:https://issues.redhat.com/browse/OCPBUGS-3927[*OCPBUGS-3927*])
-
-* Previously, the product name for Azure Red Hat OpenShift was incorrect in Customer Case Management (CCM). As a result, the console had to use the same incorrect product name to correctly populate the fields in CCM. Once the product name in CCM was updated, the console needed to be updated as well. With this update, the console uses the same product name as CCM, so the correct Azure product name is populated when following the link from the console. (link:https://issues.redhat.com/browse/OCPBUGS-869[*OCPBUGS-869*])
-
-* Previously, when a plug-in page resulted in an error, the error did not reset when navigating away from the error page, and the error persisted after navigating to a page that was not the cause of the error. With this update, the error state is reset to its default when a user navigates to a new page, and the error no longer persists after navigating to a new page. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2117738[*BZ#2117738*], link:https://issues.redhat.com/browse/OCPBUGS-523[*OCPBUGS-523*])
-
-* Previously, the *_View it here_* link in the *Operator details* pane for installed Operators was incorrectly built when *_All Namespaces_* was selected. As a result, the link attempted to navigate to the *Operator details* page for a cluster service version (CSV) in *All Projects*, which is an invalid route. With this update, the *_View it here_* link is now built by using the namespace where the CSV is installed, and the link works as expected. (link:https://issues.redhat.com/browse/OCPBUGS-184[*OCPBUGS-184*])
-
-* Previously, line numbers with more than five digits resulted in a cosmetic issue where the line number overlaid the vertical divider between the line number and the line contents, making it harder to read. With this update, the amount of space available for line numbers was increased to account for longer line numbers, and the line number no longer overlays the vertical divider. (link:https://issues.redhat.com/browse/OCPBUGS-183[*OCPBUGS-183*])
-
-* Previously, in the administrator perspective of the web console, the link to *_Learn more about the OpenShift local update services_* on the *Default update server* pop-up window in the *Cluster Settings* page produced a 404 error. With this update, the link works as expected. 
(link:https://bugzilla.redhat.com/show_bug.cgi?id=2098234[*BZ#2098234*])
-
-* Previously, the `MatchExpression` component did not account for array-type values. As a result, only single values could be entered through forms using this component. With this update, the `MatchExpression` component accepts comma-separated values as an array. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2079690[*BZ#2079690*])
-
-* Previously, redundant checks for the model caused tab reloading, which occasionally resulted in flickering of the tab contents as they rerendered. With this update, the redundant model check was removed, and the model is only checked once. As a result, the tab contents do not flicker and no longer rerender. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2037329[*BZ#2037329*])
-
-* Previously, when selecting the `edit` label from the action list on the OpenShift Dedicated node page, no response was elicited and a webhook error was returned. This issue has been fixed so that the error message is only returned when editing fails. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2102098[*BZ#2102098*])
-
-* Previously, if issues were pending, clicking the *Insights* link would crash the page. As a workaround, you can wait for the variable to become `initialized` before clicking the *Insights* link. As a result, the Insights page opens as expected. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2052662[*BZ#2052662*])
-
-* Previously, when the `MachineConfigPool` resource was paused, the option to unpause said *Resume rollouts*. The wording has been updated so that it now says *Resume updates*. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2094240[*BZ#2094240*])
-
-* Previously, the wrong calculation method was used when counting master and worker nodes. With this update, worker nodes are counted correctly when nodes have both the `master` and `worker` roles. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1951901[*BZ#1951901*])
-
-* Previously, conflicting `react-router` routes for `ImageManifestVuln` resulted in attempts to render a details page for `ImageManifestVuln` with a `~new` name. Now, the container security plug-in has been updated to remove conflicting routes and to ensure that dynamic list and details page extensions are used on the Operator details page. As a result, the console renders the correct create, list, and details pages for `ImageManifestVuln`. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2080260[*BZ#2080260*])
-
-* Previously, incomplete YAML that had not yet been synced was occasionally displayed to users. With this update, synced YAML always displays. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2084453[*BZ#2084453*])
-
-* Previously, when installing an Operator that required a custom resource (CR) to be created for use, the *Create resource* button could fail to install the CR because it was pointing to the incorrect namespace. With this update, the *Create resource* button works as expected. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2094502[*BZ#2094502*])
-
-* Previously, the *Cluster update* modal was not displaying errors properly. As a result, the *Cluster update* modal did not display or explain errors when they occurred. With this update, the *Cluster update* modal correctly displays errors. 
(link:https://bugzilla.redhat.com/show_bug.cgi?id=2096350[*BZ#2096350*]) - -[discrete] -[id="ocp-4-12-monitoring-bug-fixes"] -==== Monitoring - -* Before this update, cluster administrators could not distinguish between a pod being not ready because of a scheduling issue and a pod being not ready because it could not be started by the kubelet. In both cases, the `KubePodNotReady` alert would fire. With this update, the `KubePodNotScheduled` alert now fires when a pod is not ready because of a scheduling issue, and the `KubePodNotReady` alert fires when a pod is not ready because it could not be started by the kubelet. (link:https://issues.redhat.com/browse/OCPBUGS-4431[*OCPBUGS-4431*]) - -* Before this update, `node_exporter` would report metrics about virtual network interfaces such as `tun` interfaces, `br` interfaces, and `ovn-k8s-mp` interfaces. With this update, metrics for these virtual interfaces are no longer collected, which decreases monitoring resource consumption. (link:https://issues.redhat.com/browse/OCPBUGS-1321[*OCPBUGS-1321*]) - -* Before this update, Alertmanager pod startup might time out because of slow DNS resolution, and the Alertmanager pods would not start. With this release, the timeout value has been increased to seven minutes, which prevents pod startup from timing out. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2083226[*BZ#2083226*]) - -* Before this update, if Prometheus Operator failed to run or schedule Prometheus pods, the system provided no underlying reason for the failure. With this update, if Prometheus pods are not run or scheduled, the Cluster Monitoring Operator updates the `clusterOperator` monitoring status with a reason for the failure, which can be used to troubleshoot the underlying issue. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2043518[*BZ#2043518*]) - -* Before this update, if you created an alert silence from the *Developer* perspective in the {product-title} web console, external labels were included that did not match the alert. Therefore, the alert would not be silenced. With this update, external labels are now excluded when you create a silence in the *Developer* perspective so that newly created silences function as expected. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2084504[*BZ#2084504*]) - -* Previously, if you enabled an instance of Alertmanager dedicated to user-defined projects, a misconfiguration could occur in certain circumstances, and you would not be informed that the user-defined project Alertmanager config map settings did not load for either the main instance of Alertmanager or the instance dedicated to user-defined projects. With this release, if this misconfiguration occurs, the Cluster Monitoring Operator now displays a message that informs you of the issue and provides resolution steps. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2099939[*BZ#2099939*]) - -* Before this update, if the Cluster Monitoring Operator (CMO) failed to update Prometheus, the CMO did not verify whether a previous deployment was running and would report that cluster monitoring was unavailable even if one of the Prometheus pods was still running. With this update, the CMO now checks for running Prometheus pods in this situation and reports that cluster monitoring is unavailable only if no Prometheus pods are running. 
(link:https://bugzilla.redhat.com/show_bug.cgi?id=2039411[*BZ#2039411*])
-
-* Before this update, if you configured OpsGenie as an alert receiver, a warning would appear in the log that `api_key` and `api_key_file` are mutually exclusive and that `api_key` takes precedence. This warning appeared even if you had not defined `api_key_file`. With this update, this warning only appears in the log if you have defined both `api_key` and `api_key_file`. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2093892[*BZ#2093892*])
-
-* Before this update, the Telemeter Client (TC) only loaded new pull secrets when it was manually restarted. Therefore, if a pull secret had been changed or updated and the TC had not been restarted, the TC would fail to authenticate with the server. This update addresses the issue so that when the secret is rotated, the deployment is automatically restarted and uses the updated token to authenticate. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2114721[*BZ#2114721*])
-
-[discrete]
-[id="ocp-4-12-networking-bug-fixes"]
-==== Networking
-
-* Previously, routers that were in the terminating state delayed the `oc cp` command, which would delay the `oc adm must-gather` command until the pod was terminated. With this update, a timeout for each issued `oc cp` command is set to prevent delaying the `must-gather` command from running. As a result, terminating pods no longer delay `must-gather` commands. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2103283[*BZ#2103283*])
-
-* Previously, an Ingress Controller could not be configured with both the `Private` endpoint publishing strategy type and PROXY protocol. With this update, users can now configure an Ingress Controller with both the `Private` endpoint publishing strategy type and PROXY protocol. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2104481[*BZ#2104481*])
-
-* Previously, the `routeSelector` parameter cleared the route status of the Ingress Controller prior to the router deployment. Because of this, the route status repopulated incorrectly. To avoid using stale data, route status detection has been updated to no longer rely on the Kubernetes object cache. Additionally, this update includes a fix to check the generation ID on route deployment to determine the route status. As a result, the route status is consistently cleared with a `routeSelector` update. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2101878[*BZ#2101878*])
-
-* Previously, a cluster that was upgraded from a version of {product-title} earlier than 4.8 could have orphaned `Route` objects. This was caused by earlier versions of {product-title} translating `Ingress` objects into `Route` objects irrespective of a given `Ingress` object's indicated `IngressClass`. With this update, an alert is sent to the cluster administrator about any orphaned `Route` objects still present in the cluster after Ingress-to-Route translation. This update also adds another alert that notifies the cluster administrator about any `Ingress` objects that do not specify an `IngressClass`. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1962502[*BZ#1962502*])
-
-* Previously, if a `configmap` that the router deployment depends on was not created, the router deployment did not progress. With this update, the cluster Operator reports `ingress progressing=true` if the default ingress controller deployment is progressing. This helps users debug issues with the ingress controller by using the `oc get co` command. 
(link:https://bugzilla.redhat.com/show_bug.cgi?id=2066560[*BZ#2066560*])
-
-* Previously, when an incorrectly created network policy was added to the OVN-Kubernetes cache, it would cause the OVN-Kubernetes leader to enter `crashloopbackoff` status. With this update, OVN-Kubernetes skips deleting nil policies, and the OVN-Kubernetes leader no longer enters `crashloopbackoff` status. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2091238[*BZ#2091238*])
-
-* Previously, recreating an EgressIP pod with the same namespace or name within 60 seconds of deleting an older one with the same namespace or name caused the wrong SNAT to be configured. As a result, packets could go out with nodeIP instead of EgressIP SNAT. With this update, traffic leaves the pod with EgressIP instead of nodeIP. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2097243[*BZ#2097243*]).
-
-* Previously, older Access Control Lists (ACLs) with `arp` produced `unexpectedly found multiple equivalent ACLs (arp v/s arp||nd)` errors due to a change in the ACL match from `arp` to `arp || nd`. This prevented network policies from being created properly. With this update, older ACLs with just the `arp` match have been removed so that only ACLs with the new `arp || nd` match exist, network policies can be created correctly, and no errors are observed on `ovnkube-master`. NOTE: This affects customers upgrading to 4.8.14, 4.9.32, 4.10.13, or higher from older versions. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2095852[*BZ#2095852*]).
-
-* With this update, CoreDNS has been updated to version 1.10.0, which is based on Kubernetes 1.25. This keeps both the CoreDNS version and {product-title} {product-version}, which is also based on Kubernetes 1.25, in alignment with one another. (link:https://issues.redhat.com/browse/OCPBUGS-1731[*OCPBUGS-1731*])
-
-* With this update, the {product-title} router now uses `k8s.io/client-go` version 1.25.2, which supports Kubernetes 1.25. This keeps both the `openshift-router` and {product-title} {product-version}, which is also based on Kubernetes 1.25, in alignment with one another. (link:https://issues.redhat.com/browse/OCPBUGS-1730[*OCPBUGS-1730*])
-
-* With this update, the Ingress Operator now uses `k8s.io/client-go` version 1.25.2, which supports Kubernetes 1.25. This keeps both the Ingress Operator and {product-title} {product-version}, which is also based on Kubernetes 1.25, in alignment with one another. (link:https://issues.redhat.com/browse/OCPBUGS-1554[*OCPBUGS-1554*])
-
-* Previously, the DNS Operator did not reconcile the `openshift-dns` namespace. Because {product-title} {product-version} requires the `openshift-dns` namespace to have pod-security labels, this caused the namespace to be missing those labels upon cluster update. Without the pod-security labels, the pods failed to start. With this update, the DNS Operator now reconciles the `openshift-dns` namespace, and the pod-security labels are now present. As a result, pods start as expected. (link:https://issues.redhat.com/browse/OCPBUGS-1549[*OCPBUGS-1549*])
-
-* Previously, the `ingresscontroller.spec.tuningOptions.reloadInterval` parameter did not support decimal numerals as valid parameter values because the Ingress Operator internally converts the specified value into milliseconds, which was not a supported time unit. This prevented an Ingress Controller from being deleted. 
With this update, `ingresscontroller.spec.tuningOptions.reloadInterval` now supports decimal numerals and users can delete Ingress Controllers with `reloadInterval` parameter values which were previously unsupported. (link:https://issues.redhat.com/browse/OCPBUGS-236[*OCPBUGS-236*])
-
-* Previously, the Cluster DNS Operator used Go Kubernetes libraries that were based on Kubernetes 1.24, while {product-title} 4.12 is based on Kubernetes 1.25. With this update, the Go Kubernetes API is v1.25.2, which aligns the Cluster DNS Operator with {product-title} 4.12, which uses Kubernetes 1.25 APIs. (link:https://issues.redhat.com/browse/OCPBUGS-1558[*OCPBUGS-1558*])
-
-* Previously, setting the `disableNetworkDiagnostics` configuration to `true` did not persist when the `network-operator` pod was re-created. With this update, the `disableNetworkDiagnostics` configuration property of `network.operator.openshift.io/cluster` no longer resets to its default value after the `network-operator` pod restarts. (link:https://issues.redhat.com/browse/OCPBUGS-392[*OCPBUGS-392*])
-
-* Previously, `ovn-kubernetes` did not configure the correct MAC address of bonded interfaces in the `br-ex` bridge. As a result, a node that uses bonding for the primary Kubernetes interface failed to join the cluster. With this update, `ovn-kubernetes` configures the correct MAC address of bonded interfaces in the `br-ex` bridge, and nodes that use bonding for the primary Kubernetes interface successfully join the cluster. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2096413[*BZ#2096413*])
-
-* Previously, when the Ingress Operator was configured to enable the use of mTLS, the Operator would not check if CRLs needed updating until some other event caused it to reconcile. As a result, CRLs used for mTLS could become out of date. With this update, the Ingress Operator now automatically reconciles when any CRL expires, and CRLs will be updated at the time specified by their `nextUpdate` field. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2117524[*BZ#2117524*])
-
-[discrete]
-[id="ocp-4-12-node-bug-fixes"]
-==== Node
-
-* Previously, a symlink error message was printed out as raw data instead of being formatted as an error, making it difficult to understand. This fix formats the error message properly, so that it is easily understood. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1977660[*BZ#1977660*])
-
-* Previously, kubelet hard eviction thresholds were different from Kubernetes defaults when a performance profile was applied to a node. With this release, the defaults have been updated to match the expected Kubernetes defaults. (link:https://issues.redhat.com/browse/OCPBUGS-4362[*OCPBUGS-4362*]).
-
-[discrete]
-[id="ocp-4-12-openshift-cli-bug-fixes"]
-==== OpenShift CLI (oc)
-
-* The {product-title} {product-version} release fixes an issue with entering a debug session on a target node when the target namespace lacks the appropriate security level. This caused the `oc` CLI to prompt you with a pod security error message. If the existing namespace does not contain the appropriate security levels, {product-title} now creates a temporary namespace when you enter `oc` debug mode on a target node. (link:https://issues.redhat.com/browse/OCPBUGS-852[OCPBUGS-852])
-
-* Previously, on macOS arm64 architecture, the `oc` binary needed to be signed manually. As a result, the `oc` binary did not work as expected. This update implements a self-signed `oc` binary. As a result, the `oc` binary on macOS arm64 architectures works properly. 
(link:https://bugzilla.redhat.com/show_bug.cgi?id=2059125[*BZ#2059125*])
-
-* Previously, `must-gather` was trying to collect resources that were not present on the server. Consequently, `must-gather` would print error messages. Now, before collecting resources, `must-gather` checks whether the resource exists. As a result, `must-gather` no longer prints an error when it fails to collect non-existing resources on the server. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2095708[*BZ#2095708*])
-
-* The {product-title} {product-version} release updates the `oc-mirror` library, so that the library supports multi-arch platform images. This means that you can choose from a wider selection of architectures, such as `arm64`, when mirroring a platform release payload. (link:https://issues.redhat.com/browse/OCPBUGS-617[*OCPBUGS-617*])
-
-[discrete]
-[id="ocp-4-12-olm-bug-fixes"]
-==== Operator Lifecycle Manager (OLM)
-
-* Before the {product-title} {product-version} release, the `package-server-manager` controller would not revert any changes made to a `package-server` cluster service version (CSV), because of an issue with the `on-cluster` function. These persistent changes might impact how an Operator starts in a cluster. For {product-title} {product-version}, the `package-server-manager` controller always rebuilds a `package-server` CSV to its original state, so that no modifications to the CSV persist after a cluster upgrade operation. The `on-cluster` function no longer controls the state of a `package-server` CSV. (link:https://issues.redhat.com/browse/OCPBUGS-867[*OCPBUGS-867*])
-
-* Previously, Operator Lifecycle Manager (OLM) would attempt to update namespaces to apply a label, even if the label was present on the namespace. Consequently, the update requests increased the workload in API and etcd services. With this update, OLM compares existing labels against the expected labels on a namespace before issuing an update. As a result, OLM no longer attempts to make unnecessary update requests on namespaces. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2105045[*BZ#2105045*])
-
-* Previously, Operator Lifecycle Manager (OLM) would prevent minor cluster upgrades that should not be blocked based on a miscalculation of the `ClusterVersion` custom resource's `spec.DesiredVersion` field. With this update, OLM no longer prevents cluster upgrades when the upgrade should be supported. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2097557[*BZ#2097557*])
-
-* Previously, the reconciler would update a resource's annotation without making a copy of the resource. This caused an error that would terminate the reconciler process. With this update, the reconciler no longer stops due to the error. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2105045[*BZ#2105045*])
-
-* The `package-server-manifest` (PSM) is a controller that ensures that the correct `package-server` Cluster Service Version (CSV) is installed on a cluster. Previously, changes to the `package-server` CSV were not being reverted because of a logical error in the reconcile function in which an on-cluster object could influence the expected object. Users could modify the `package-server` CSV and the changes would not be reverted. Additionally, cluster upgrades would not update the YAML for the `package-server` CSV. With this update, the expected version of the CSV is now always built from scratch, which removes the ability for an on-cluster object to influence the expected values. 
As a result, the PSM now reverts any attempts to modify the `package-server` CSV, and cluster upgrades now deploy the expected `package-server` CSV. (link:https://issues.redhat.com/browse/OCPBUGS-858[*OCPBUGS-858*]) - -* Previously, OLM would upgrade an Operator according to the Operator's CRD status. A CRD lists component references in an order defined by the group/version/kind (GVK) identifier. Operators that share the same components might cause the GVK to change the component listings for an Operator, and this can cause the OLM to require more system resources to continuously update the status of a CRD. With this update, the Operator Lifecycle Manager (OLM) now upgrades an Operator according to the Operator's component references. A change to the custom resource definition (CRD) status of an Operator does not impact the OLM Operator upgrade process.(link:https://issues.redhat.com/browse/OCPBUGS-3795[*OCPBUGS-3795*]) - -[discrete] -[id="ocp-4-12-openshift-operator-sdk-bug-fixes"] -==== Operator SDK - -* With this update, you can now set the security context for the registry pod by including the `securityContext` configuration field in the pod specification. This will apply the security context for all containers in the pod. The `securityContext` field also defines the pod's privileges. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2091864[*BZ#2091864*]) - -[discrete] -[id="ocp-4-12-file-integrity-operator-bug-fixes"] -==== File Integrity Operator - -* Previously, the File Integrity Operator deployed templates using the `openshift-file-integrity` namespace in the permissions for the Operator. When the Operator attempted to create objects in the namespace, it would fail due to permission issues. With this release, the deployment resources used by OLM are updated to use the correct namespace, fixing the permission issues so that users can install and use the operator in non-default namespaces. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2104897[*BZ#2104897*]) - -* Previously, underlying dependencies of the File Integrity Operator changed how alerts and notifications were handled, and the Operator didn't send metrics as a result. With this release the Operator ensures that the metrics endpoint is correct and reachable on startup. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2115821[*BZ#2115821*]) - -* Previously, alerts issued by the File Integrity Operator did not set a namespace. This made it difficult to understand where the alert was coming from, or what component was responsible for issuing it. With this release, the Operator includes the namespace it was installed into in the alert, making it easier to narrow down what component needs attention. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2101393[*BZ#2101393*]) - -* Previously, the File Integrity Operator did not properly handle modifying alerts during an upgrade. As a result, alerts did not include the namespace in which the Operator was installed. With this release, the Operator includes the namespace it was installed into in the alert, making it easier to narrow down what component needs attention. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2112394[*BZ#2112394*]) - -* Previously, service account ownership for the File Integrity Operator regressed due to underlying OLM updates, and updates from 0.1.24 to 0.1.29 were broken. With this update, the Operator defaults to upgrading to 0.1.30. 
(link:https://bugzilla.redhat.com/show_bug.cgi?id=2109153[*BZ#2109153*])
-
-* Previously, the File Integrity Operator daemon used the `ClusterRoles` parameter instead of the `Roles` parameter for a recent permission change. As a result, OLM could not update the Operator. With this release, the Operator daemon reverts to using the `Roles` parameter and updates from older versions to version 0.1.29 are successful. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2108475[*BZ#2108475*])
-
-[discrete]
-[id="ocp-4-12-compliance-operator-bug-fixes"]
-==== Compliance Operator
-
-* Previously, the Compliance Operator used an old version of the Operator SDK, which is a dependency for building Operators. This caused alerts about deprecated Kubernetes functionality used by the Operator SDK. With this release, the Compliance Operator is updated to version 0.1.55, which includes an updated version of the Operator SDK. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2098581[*BZ#2098581*])
-
-* Previously, applying automatic remediation for the `rhcos4-high-master-sysctl-kernel-yama-ptrace-scope` and `rhcos4-sysctl-kernel-core-pattern` rules resulted in subsequent failures of those rules in scan results, even though they were remediated. The issue is fixed in this release. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2094382[*BZ#2094382*])
-
-* Previously, the Compliance Operator hard coded notifications to the default namespace. As a result, notifications from the Operator would not appear if the Operator was installed in a different namespace. This issue is fixed in this release. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2060726[*BZ#2060726*])
-
-* Previously, the Compliance Operator failed to fetch API resources when parsing machine configurations without Ignition specifications. This caused the `api-check-pods` check to crash loop. With this release, the Compliance Operator is updated to gracefully handle machine configuration pools without Ignition specifications. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2117268[*BZ#2117268*])
-
-* Previously, the Compliance Operator held machine configurations in a stuck state because it could not determine the relationship between machine configurations and kubelet configurations. This was due to incorrect assumptions about machine configuration names. With this release, the Compliance Operator is able to determine if a kubelet configuration is a subset of a machine configuration. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2102511[*BZ#2102511*])
-
-[discrete]
-[id="ocp-4-12-openshift-api-server-bug-fixes"]
-==== OpenShift API server
-
-* Previously, adding a member could remove previous members from a group. As a result, the user lost group privileges. With this release, the dependencies were updated and users no longer lose group privileges. (link:https://issues.redhat.com/browse/OCPBUGS-533[*OCPBUGS-533*])
-
-[discrete]
-[id="ocp-4-12-rhcos-bug-fixes"]
-==== {op-system-first}
-
-* Previously, updating to Podman 4.0 prevented users from using custom images with toolbox containers on {op-system}. This fix updates the toolbox library code to account for the new Podman behavior, so users can now use custom images with toolbox on {op-system} as expected. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2048789[*BZ#2048789*])
-
-* Previously, the `podman exec` command did not work well with nested containers.
Users encountered this issue when accessing a node using the `oc debug` command and then running a container with the `toolbox` command. Because of this, users were unable to reuse toolboxes on {op-system}. This fix updates the toolbox library code to account for this behavior, so users can now reuse toolboxes on {op-system}. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1915537[*BZ#1915537*]) - -* With this update, running the `toolbox` command now checks for updates to the default image before launching the container. This improves security and provides users with the latest bug fixes. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2049591[*BZ#2049591*]) - -* Previously, updating to Podman 4.0 prevented users from running the `toolbox` command on {op-system}. This fix updates the toolbox library code to account for the new Podman behavior, so users can now run `toolbox` on {op-system} as expected. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2093040[*BZ#2093040*]) - -* Previously, custom SELinux policy modules were not properly supported by `rpm-ostree`, so they were not updated along with the rest of the system upon update. This would surface as failures in unrelated components. Pending SELinux userspace improvements landing in a future {product-title} release, this update provides a workaround to {op-system} that will rebuild and reload the SELinux policy during boot as needed. (link:https://issues.redhat.com/browse/OCPBUGS-595[*OCPBUGS-595*]) - -[discrete] -[id="ocp-4-12-scalability-and-performance-bug-fixes"] -==== Scalability and performance - -* The tuned profile has been modified to assign the same priority as `ksoftirqd` and `rcuc` to the newly introduced per-CPU kthreads (`ktimers`) added in a recent Red Hat Enterprise Linux (RHEL) kernel patch. For more information, see link:https://issues.redhat.com/browse/OCPBUGS-3475[*OCPBUGS-3475*], link:https://bugzilla.redhat.com/show_bug.cgi?id=2117780[*BZ#2117780*] and link:https://bugzilla.redhat.com/show_bug.cgi?id=2122220[*BZ#2122220*]. - -* Previously, restarts of the `tuned` service caused improper reset of the `irqbalance` configuration, leading to IRQ operation being served again on the isolated CPUs, therefore violating the isolation guarantees. With this fix, the `irqbalance` service configuration is properly preserved across `tuned` service restarts (explicit or caused by bugs), therefore preserving the CPU isolation guarantees with respect to IRQ serving. (link:https://issues.redhat.com/browse/OCPBUGS-585[*OCPBUGS-585*]) - -* Previously, when the tuned daemon was restarted out of order as part of the cluster Node Tuning Operator, the CPU affinity of interrupt handlers was reset and the tuning was compromised. With this fix, the `irqbalance` plug-in in tuned is disabled, and {product-title} now relies on the logic and interaction between `CRI-O` and `irqbalance`.(link:https://bugzilla.redhat.com/show_bug.cgi?id=2105123[*BZ#2105123*]) - -* Previously, a low latency hook script executing for every new `veth` device took too long when the node was under load. The resultant accumulated delays during pod start events caused the rollout time for `kube-apiserver` to be slow and sometimes exceed the 5-minute rollout timeout. With this fix, the container start time should be shorter and within the 5-minute threshold. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2109965[*BZ#2109965*]). 
- -* Previously, the `oslat` control thread was collocated with one of the test threads, which caused latency spikes in the measurements. With this fix, the `oslat` runner now reserves one CPU for the control thread, meaning the test uses one less CPU for running the busy threads. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2051443[*BZ#2051443*]) - -* Latency measurement tools, also known as `oslat`, `cyclictest`, and `hwlatdetect`, now run on completely isolated CPUs without the helper process running in the background that might cause latency spikes, therefore providing more accurate latency measurements. (link:https://issues.redhat.com/browse/OCPBUGS-2618[*OCPBUGS-2618*]) - -* Previously, although the reference `PolicyGenTemplate` for `group-du-sno-ranGen.yaml` includes two `StorageClass` entries, the generated policy included only one. With this update, the generated policy now includes both policies. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2049306[*BZ#2049306*]). - -[discrete] -[id="ocp-4-12-storage-bug-fixes"] -==== Storage - -* Previously, checks for generic ephemeral volumes failed. With this update, checks for expandable volumes now include generic ephemeral volumes. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2082773[*BZ#2082773*]) - -* Previously, if more than one secret was present for vSphere, the vSphere CSI Operator randomly picked a secret and sometimes caused the Operator to restart. With this update, a warning appears when there is more than one secret on the vCenter CSI Operator. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2108473[*BZ#2108473*]) - -* Previously, {product-title} detached a volume when a Container Storage Interface (CSI) driver was not able to unmount the volume from a node. Detaching a volume without unmount is not allowed by CSI specifications and drivers could enter an `undocumented` state. With this update, CSI drivers are detached before unmounting only on unhealthy nodes preventing the `undocumented` state. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2049306[*BZ#2049306*]) - -* Previously, there were missing annotations on the Manila CSI Driver Operator's VolumeSnapshotClass. Consequently, the Manila CSI snapshotter could not locate secrets, and could not create snapshots with the default VolumeSnapshotClass. This update fixes the issue so that secret names and namespaces are included in the default VolumeSnapshotClass. As a result, users can now create snapshots in the Manila CSI Driver Operator using the default VolumeSnapshotClass. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2057637[*BZ#2057637*]) - -* Users can now opt into using the experimental VHD feature on Azure File. To opt in, users must specify the `fstype` parameter in a storage class and enable it with `--enable-vhd=true`. If `fstype` is used and the feature is not set to `true`, the volumes will fail to provision. -+ -To opt out of using the VHD feature, remove the `fstype` parameter from your storage class. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2080449[*BZ#2080449*]) - -* Previously, if more than one secret was present for vSphere, the vSphere CSI Operator randomly picked a secret and sometimes caused the Operator to restart. With this update, a warning appears when there is more than one secret on the vCenter CSI Operator. 
(link:https://bugzilla.redhat.com/show_bug.cgi?id=2108473[*BZ#2108473*])
-
-[discrete]
-[id="ocp-4-12-web-console-developer-perspective-bug-fixes"]
-==== Web console (Developer perspective)
-
-* Previously, users could not deselect a Git secret in the add and edit forms. As a result, the resources had to be re-created. This fix resolves the issue by adding a `No Secret` option to the secret selection list. As a result, users can easily select, deselect, or detach any attached secrets. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2089221[*BZ#2089221*])
-
-* In {product-title} 4.9, when there is minimal or no data in the *Developer* perspective, most of the monitoring charts or graphs (CPU consumption, memory usage, and bandwidth) show a range of -1 to 1. However, none of these values can ever go below zero. This will be resolved in a future release. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1904106[*BZ#1904106*])
-
-* Before this update, users could not silence alerts in the *Developer* perspective in the {product-title} web console when a user-defined Alertmanager service was deployed because the web console would forward the request to the platform Alertmanager service in the `openshift-monitoring` namespace. With this update, when you view the *Developer* perspective in the web console and try to silence an alert, the request is forwarded to the correct Alertmanager service. (link:https://issues.redhat.com/browse/OCPBUGS-1789[*OCPBUGS-1789*])
-
-* Previously, there was a known issue in the *Add Helm Chart Repositories* form for extending the Developer Catalog of a project. The *Quick Start* guide indicated that you can add the `ProjectHelmChartRepository` CR in the required namespace, but it did not mention that you need kubeadmin permissions to do so. This issue is resolved: the *Quick Start* guide now describes the correct steps to create the `ProjectHelmChartRepository` CR. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2057306[*BZ#2057306*])
-
-[id="ocp-4-12-technology-preview"]
-== Technology Preview features
-
-Some features in this release are currently in Technology Preview. These experimental features are not intended for production use.
Note the following scope of support on the Red Hat Customer Portal for these features: - -link:https://access.redhat.com/support/offerings/techpreview[Technology Preview Features Support Scope] - -In the following tables, features are marked with the following statuses: - -* _Technology Preview_ -* _General Availability_ -* _Not Available_ -* _Deprecated_ - -[discrete] -=== Networking Technology Preview features - -.Networking Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|PTP single NIC hardware configured as boundary clock -|Technology Preview -|General Availability -|General Availability - -|PTP dual NIC hardware configured as boundary clock -|Not Available -|Technology Preview -|Technology Preview - -|PTP events with boundary clock -|Technology Preview -|General Availability -|General Availability - -|Pod-level bonding for secondary networks -|Technology Preview -|General Availability -|General Availability - -|External DNS Operator -|Technology Preview -|General Availability -|General Availability - -|AWS Load Balancer Operator -|Not Available -|Technology Preview -|Technology Preview - -|Cloud controller manager for VMware vSphere -|Technology Preview -|Technology Preview -|Technology Preview - -|Ingress Node Firewall Operator -|Not Available -|Not Available -|Technology Preview - -|Advertise using BGP mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses -|Not Available -|Technology Preview -|General Availability - -|Advertise using L2 mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses -|Not Available -|Technology Preview -|Technology Preview - -|Multi-network policies for SR-IOV networks -|Not Available -|Not Available -|Technology Preview - -|Updating the interface-specific safe sysctls list -|Not Available -|Not Available -|Technology Preview - -|MT2892 Family [ConnectX-6 Dx] SR-IOV support -|Not Available -|Not Available -|Technology Preview - -|MT2894 Family [ConnectX-6 Lx] SR-IOV support -|Not Available -|Not Available -|Technology Preview - -|MT42822 BlueField-2 in ConnectX-6 NIC mode SR-IOV support -|Not Available -|Not Available -|Technology Preview - -|Silicom STS Family SR-IOV support -|Not Available -|Not Available -|Technology Preview - -|MT2892 Family [ConnectX-6 Dx] OvS Hardware Offload support -|Not Available -|Not Available -|Technology Preview - -|MT2894 Family [ConnectX-6 Lx] OvS Hardware Offload support -|Not Available -|Not Available -|Technology Preview - -|MT42822 BlueField-2 in ConnectX-6 NIC mode OvS Hardware Offload support -|Not Available -|Not Available -|Technology Preview - -|==== - -[discrete] -=== Storage Technology Preview features - -.Storage Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|Shared Resources CSI Driver and Build CSI Volumes in OpenShift Builds -|Technology Preview -|Technology Preview -|Technology Preview - -|CSI volume expansion -|Technology Preview -|General Availability -|General Availability - -|CSI Azure File Driver Operator -|Technology Preview -|General Availability -|General Availability - -|CSI Google Filestore Driver Operator -|Not Available -|Not Available -|Technology Preview - -|CSI automatic migration -(Azure file, VMware vSphere) -|Technology Preview -|Technology Preview -|Technology Preview - -|CSI automatic migration -(Azure Disk, OpenStack Cinder) -|Technology Preview -|General Availability -|General Availability - -|CSI automatic migration 
-(AWS EBS, GCP disk) -|Technology Preview -|Technology Preview -|General Availability - -|CSI inline ephemeral volumes -|Technology Preview -|Technology Preview -|Technology Preview - -|CSI generic ephemeral volumes -|Not Available -|General Availability -|General Availability - -|Shared Resource CSI Driver -|Technology Preview -|Technology Preview -|Technology Preview - -|CSI Google Filestore Driver Operator -|Not Available -|Not Available -|Technology Preview - -|Automatic device discovery and provisioning with Local Storage Operator -|Technology Preview -|Technology Preview -|Technology Preview - -|==== - -[discrete] -=== Installation Technology Preview features - -.Installation Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|Adding kernel modules to nodes with kvc -|Technology Preview -|Technology Preview -|Technology Preview - -|IBM Cloud VPC clusters -|Technology Preview -|Technology Preview -|General Availability - -|Selectable Cluster Inventory -|Technology Preview -|Technology Preview -|Technology Preview - -|Multi-architecture compute machines -|Not Available -|Technology Preview -|Technology Preview - -|Disconnected mirroring with the oc-mirror CLI plug-in -|Technology Preview -|General Availability -|General Availability - -|Mount shared entitlements in BuildConfigs in RHEL -|Technology Preview -|Technology Preview -|Technology Preview - -|Agent-based {product-title} Installer -|Not Available -|Not Available -|General Availability - -|==== - -[discrete] -=== Node Technology Preview features - -.Nodes Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|Non-preempting priority classes -|Technology Preview -|Technology Preview -|Technology Preview - -|Node Health Check Operator -|Technology Preview -|General Availability -|General Availability - -|Linux Control Group version 2 (cgroup v2) -|Not Available -|Not Available -|Technology Preview - -|crun container runtime -|Not Available -|Not Available -|Technology Preview - -|==== - -[discrete] -=== Multi-Architecture Technology Preview features - -.Multi-Architecture Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|`kdump` on `x86_64` architecture -|Technology Preview -|General Availability -|General Availability - -|`kdump` on `arm64` architecture -|Not Available -|Technology Preview -|Technology Preview - -|`kdump` on `s390x` architecture -|Technology Preview -|Technology Preview -|Technology Preview - -|`kdump` on `ppc64le` architecture -|Technology Preview -|Technology Preview -|Technology Preview - -|IBM Secure Execution on {ibmzProductName} and LinuxONE -|Not Available -|Not Available -|Technology Preview - -|==== - -[discrete] -=== Serverless Technology Preview features - -.Serverless Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|Serverless functions -|Technology Preview -|Technology Preview -|Technology Preview - -|==== - -[discrete] -=== Specialized hardware and driver enablement Technology Preview features - -.Specialized hardware and driver enablement Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|Driver Toolkit -|Technology Preview -|Technology Preview -|General Availability - -|Special Resource Operator (SRO) -|Technology Preview -|Technology Preview -|Not Available - -|Hub and spoke cluster support -|Not Available -|Not Available -|Technology Preview - 
-|==== - -[discrete] -=== Web console Technology Preview features - -.Web console Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|Multicluster console -|Technology Preview -|Technology Preview -|Technology Preview - -|Dynamic Plug-ins -|Technology Preview -|Technology Preview -|General Availability - -|==== - -[discrete] -=== Scalability and performance Technology Preview features - -.Scalability and performance Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|Hyperthreading-aware CPU manager policy -|Technology Preview -|Technology Preview -|Technology Preview - -|Node Observability Operator -|Not Available -|Technology Preview -|Technology Preview - -|{factory-prestaging-tool} -|Not Available -|Not Available -|Technology Preview - -|Single-node OpenShift cluster expansion with worker nodes -|Not Available -|Not Available -|Technology Preview - -|{cgu-operator-first} -|Technology Preview -|Technology Preview -|General Availability - -|Mount namespace encapsulation -|Not Available -|Not Available -|Technology Preview - -|==== - -[discrete] -=== Operator Technology Preview features - -.Operator Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|Hybrid Helm Operator -|Technology Preview -|Technology Preview -|Technology Preview - -|Java-based Operator -|Not Available -|Technology Preview -|Technology Preview - -|Multi-cluster Engine Operator -|Technology Preview -|Technology Preview -|Technology Preview - -|Node Observability Operator -|Not Available -|Not Available -|Technology Preview - -|Network Observability Operator -|Not Available -|Not Available -|General Availability - -|Platform Operators -|Not Available -|Not Available -|Technology Preview - -|RukPak -|Not Available -|Not Available -|Technology Preview - -|Cert-manager Operator -|Technology Preview -|Technology Preview -|Technology Preview - -|==== - -[discrete] -=== Monitoring Technology Preview features - -.Monitoring Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|Alert routing for user-defined projects monitoring -|Technology Preview -|General Availability -|General Availability - -|Alerting rules based on platform monitoring metrics -|Not Available -|Technology Preview -|Technology Preview - -|==== - -[discrete] -=== {rh-openstack-first} Technology Preview features - -.{rh-openstack} Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|Support for {rh-openstack} DCN -|Technology Preview -|Technology Preview -|Technology Preview - -|Support for external cloud providers for clusters on {rh-openstack} -|Technology Preview -|Technology Preview -|General Availability - -|OVS hardware offloading for clusters on {rh-openstack} -|Technology Preview -|General Availability -|General Availability - -|==== - -[discrete] -=== Architecture Technology Preview features - -.Architecture Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.10 |4.11 | 4.12 - -|Hosted control planes for {product-title} on bare metal -|Not Available -|Technology Preview -|Technology Preview - -|Hosted control planes for {product-title} on Amazon Web Services (AWS) -||Not Available -|Technology Preview -|Technology Preview - -|==== - -[discrete] -=== Machine management Technology Preview features - -.Machine management Technology Preview tracker -[cols="4,1,1,1",options="header"] 
-|==== -|Feature |4.10 |4.11 | 4.12 - -|Managing machines with the Cluster API -|Not Available -|Technology Preview -|Technology Preview - -|Cron job time zones -|Not Available -|Not Available -|Technology Preview - -|Cloud controller manager for Alibaba Cloud -|Technology Preview -|Technology Preview -|Technology Preview - -|Cloud controller manager for Amazon Web Services -|Technology Preview -|Technology Preview -|Technology Preview - -|Cloud controller manager for Google Cloud Platform -|Technology Preview -|Technology Preview -|Technology Preview - -|Cloud controller manager for Microsoft Azure -|Technology Preview -|Technology Preview -|Technology Preview - -|Cloud controller manager for {rh-openstack-first} -|Technology Preview -|Technology Preview -|General Availability - -|Custom Metrics Autoscaler Operator -|Not Available -|Technology Preview -|Technology Preview - -|==== - -[id="ocp-4-12-known-issues"] -== Known issues - -// TODO: This known issue should carry forward to 4.8 and beyond! This needs some SME/QE review before being updated for 4.11. Need to check if KI should be removed or should stay. -* In {product-title} 4.1, anonymous users could access discovery endpoints. Later releases revoked this access to reduce the possible attack surface for security exploits because some discovery endpoints are forwarded to aggregated API servers. However, unauthenticated access is preserved in upgraded clusters so that existing use cases are not broken. -+ -If you are a cluster administrator for a cluster that has been upgraded from {product-title} 4.1 to {product-version}, you can either revoke or continue to allow unauthenticated access. Unless there is a specific need for unauthenticated access, you should revoke it. If you do continue to allow unauthenticated access, be aware of the increased risks. -+ -[WARNING] -==== -If you have applications that rely on unauthenticated access, they might receive HTTP `403` errors if you revoke unauthenticated access. -==== -+ -Use the following script to revoke unauthenticated access to discovery endpoints: -+ -[source,bash] ----- -## Snippet to remove unauthenticated group from all the cluster role bindings -$ for clusterrolebinding in cluster-status-binding discovery system:basic-user system:discovery system:openshift:discovery ; -do -### Find the index of unauthenticated group in list of subjects -index=$(oc get clusterrolebinding ${clusterrolebinding} -o json | jq 'select(.subjects!=null) | .subjects | map(.name=="system:unauthenticated") | index(true)'); -### Remove the element at index from subjects array -oc patch clusterrolebinding ${clusterrolebinding} --type=json --patch "[{'op': 'remove','path': '/subjects/$index'}]"; -done ----- -+ -This script removes unauthenticated subjects from the following cluster role bindings: -+ --- -** `cluster-status-binding` -** `discovery` -** `system:basic-user` -** `system:discovery` -** `system:openshift:discovery` --- -+ -(link:https://bugzilla.redhat.com/show_bug.cgi?id=1821771[*BZ#1821771*]) - -// TODO: This known issue should be removed when RHCOS images begin using RHEL 9. RHEL 9 prevents this issue from occurring. -* Intermittently, an IBM Cloud VPC cluster might fail to install because some worker machines do not start. Rather, these worker machines remain in the `Provisioned` phase. -+ -There is a workaround for this issue. From the host where you performed the initial installation, delete the failed machines and run the installation program again. -+ -. 
Verify that the status of the internal application load balancer (ALB) for the master API server is `active`.
-.. Identify the cluster's infrastructure ID by running the following command:
-+
-[source,terminal]
-----
-$ oc get infrastructure/cluster -ojson | jq -r '.status.infrastructureName'
-----
-.. Log in to the IBM Cloud account for your cluster and target the correct region for your cluster.
-.. Verify that the internal ALB status is `active` by running the following command:
-+
-[source,terminal]
-----
-$ ibmcloud is lb <infrastructure_id>-kubernetes-api-private --output json | jq -r '.provisioning_status'
-----
-. Identify the machines that are in the `Provisioned` phase by running the following command:
-+
-[source,terminal]
-----
-$ oc get machine -n openshift-machine-api
-----
-+
-.Example output
-[source,terminal]
-----
-NAME PHASE TYPE REGION ZONE AGE
-example-public-1-x4gpn-master-0 Running bx2-4x16 us-east us-east-1 23h
-example-public-1-x4gpn-master-1 Running bx2-4x16 us-east us-east-2 23h
-example-public-1-x4gpn-master-2 Running bx2-4x16 us-east us-east-3 23h
-example-public-1-x4gpn-worker-1-xqzzm Running bx2-4x16 us-east us-east-1 22h
-example-public-1-x4gpn-worker-2-vg9w6 Provisioned bx2-4x16 us-east us-east-2 22h
-example-public-1-x4gpn-worker-3-2f7zd Provisioned bx2-4x16 us-east us-east-3 22h
-----
-. Delete each failed machine by running the following command:
-+
-[source,terminal]
-----
-$ oc delete machine <machine_name> -n openshift-machine-api
-----
-. Wait for the deleted worker machines to be replaced, which can take up to 10 minutes.
-. Verify that the new worker machines are in the `Running` phase by running the following command:
-+
-[source,terminal]
-----
-$ oc get machine -n openshift-machine-api
-----
-+
-.Example output
-[source,terminal]
-----
-NAME PHASE TYPE REGION ZONE AGE
-example-public-1-x4gpn-master-0 Running bx2-4x16 us-east us-east-1 23h
-example-public-1-x4gpn-master-1 Running bx2-4x16 us-east us-east-2 23h
-example-public-1-x4gpn-master-2 Running bx2-4x16 us-east us-east-3 23h
-example-public-1-x4gpn-worker-1-xqzzm Running bx2-4x16 us-east us-east-1 23h
-example-public-1-x4gpn-worker-2-mnlsz Running bx2-4x16 us-east us-east-2 8m2s
-example-public-1-x4gpn-worker-3-7nz4q Running bx2-4x16 us-east us-east-3 7m24s
-----
-. Complete the installation by running the following command. Running the installation program again ensures that the cluster's `kubeconfig` is initialized properly:
-+
-[source,terminal]
-----
-$ ./openshift-install wait-for install-complete
-----
-+
-(link:https://issues.redhat.com/browse/OCPBUGS-1327[*OCPBUGS-1327*])
-
-// TODO: This known issue should carry forward to 4.9 and beyond!
-* The `oc annotate` command does not work for LDAP group names that contain an equal sign (`=`), because the command uses the equal sign as a delimiter between the annotation name and value. As a workaround, use `oc patch` or `oc edit` to add the annotation. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1917280[*BZ#1917280*])
-
-* Due to the inclusion of old images in some image indexes, running `oc adm catalog mirror` and `oc image mirror` might result in the following error: `error: unable to retrieve source image`. As a temporary workaround, you can use the `--skip-missing` option to bypass the error and continue downloading the image index. For more information, see link:https://access.redhat.com/solutions/6975305[Service Mesh Operator mirroring failed].
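-+
-A minimal sketch of this workaround follows. The index image and the mirror registry host name are placeholders for illustration only; substitute the image index that you are mirroring and your own registry:
-+
-[source,terminal]
-----
-$ oc adm catalog mirror \
-    registry.redhat.io/redhat/redhat-operator-index:v4.12 \
-    <mirror_registry_host>:<port> \
-    --skip-missing
-----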
- -* When using the egress IP address feature in {product-title} on {rh-openstack}, you can assign a floating IP address to a reservation port to have a predictable SNAT address for egress traffic. The floating IP address association must be created by the same user that installed the {product-title} cluster. Otherwise any delete or move operation for the egress IP address hangs indefinitely because of insufficient privileges. When this issue occurs, a user with sufficient privileges must manually unset the floating IP address association to resolve the issue. (link:https://issues.redhat.com/browse/OCPBUGS-4902[*OCPBUGS-4902*]) - -* There is a known issue with Nutanix installation where the installation fails if you use 4096-bit certificates with Prism Central 2022.x. Instead, use 2048-bit certificates. (link:https://access.redhat.com/solutions/6976743[*KCS*]) - -* Deleting the bidirectional forwarding detection (BFD) profile and removing the `bfdProfile` added to the border gateway protocol (BGP) peer resource does not disable the BFD. Instead, the BGP peer starts using the default BFD profile. To disable BFD from a BGP peer resource, delete the BGP peer configuration and recreate it without a BFD profile. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2050824[*BZ#2050824*]) - -* Due to an unresolved metadata API issue, you cannot install clusters that use bare-metal workers on {rh-openstack} 16.1. Clusters on {rh-openstack} 16.2 are not impacted by this issue. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2033953[*BZ#2033953*]) - -* The `loadBalancerSourceRanges` attribute is not supported, and is therefore ignored, in load-balancer type services in clusters that run on {rh-openstack} and use the OVN Octavia provider. There is no workaround for this issue. (link:https://issues.redhat.com/browse/OCPBUGS-2789[*OCPBUGS-2789*]) - -* After a catalog source update, it takes time for OLM to update the subscription status. -This can mean that the status of the subscription policy may continue to show as compliant when {cgu-operator-first} decides whether remediation is needed. -As a result the operator specified in the subscription policy does not get upgraded. -As a workaround, include a `status` field in the `spec` section of the catalog source policy as follows: -+ -[source,yaml] ----- -metadata: - name: redhat-operators-disconnected -spec: - displayName: disconnected-redhat-operators - image: registry.example.com:5000/disconnected-redhat-operators/disconnected-redhat-operator-index:v4.11 -status: - connectionState: - lastObservedState: READY ----- -+ -This mitigates the delay for OLM to pull the new index image and get the pod ready, reducing the time between completion of catalog source policy remediation and the update of the subscription status. -If the issue persists and the subscription policy status update is still late you can apply another `ClusterGroupUpdate` CR with the same subscription policy, or an identical `ClusterGroupUpdate` CR with a different name. -(link:https://issues.redhat.com/browse/OCPBUGS-2813[*OCPBUGS-2813*]) - -* {cgu-operator} skips remediating a policy if all selected clusters are compliant when the `ClusterGroupUpdate` CR is started. -The update of operators with a modified catalog source policy and a subscription policy in the same `ClusterGroupUpdate` CR does not complete. -The subscription policy is skipped as it is still compliant until the catalog source change is enforced. 
-As a workaround, add the following change to one CR in the `common-subscription` policy, for example:
-+
-[source,yaml]
-----
-metadata.annotations.upgrade: "1"
-----
-+
-This makes the policy non-compliant prior to the start of the `ClusterGroupUpdate` CR.
-(link:https://issues.redhat.com/browse/OCPBUGS-2812[*OCPBUGS-2812*])
-
-* On a {sno} instance, rebooting without draining the node to remove all the running pods can cause issues with workload container recovery.
-After the reboot, the workload restarts before all the device plugins are ready, resulting in resources not being available or the workload running on the wrong NUMA node.
-The workaround is to restart the workload pods when all the device plugins have re-registered themselves during the reboot recovery procedure.
-(link:https://issues.redhat.com/browse/OCPBUGS-2180[*OCPBUGS-2180*])
-
-* The default `dataset_comparison` is currently `ieee1588`. The recommended `dataset_comparison` is `G.8275.x`. It is planned to be fixed in a future version of {product-title}. In the short term, you can manually update the PTP configuration to include the recommended `dataset_comparison`.
-(link:https://issues.redhat.com/browse/OCPBUGS-2336[*OCPBUGS-2336*])
-
-* The default `step_threshold` is 0.0. The recommended `step_threshold` is 2.0. It is planned to be fixed in a future version of {product-title}. In the short term, you can manually update the PTP configuration to include the recommended `step_threshold`.
-(link:https://issues.redhat.com/browse/OCPBUGS-3005[*OCPBUGS-3005*])
-
-* The `BMCEventSubscription` CR fails to create a Redfish subscription for a spoke cluster in an ACM-deployed multi-cluster environment, where the metal3 service is only running on a hub cluster.
-The workaround is to create the subscription by calling the Redfish API directly, for example, by running the following command:
-+
-[source,terminal]
-----
-curl -X POST -i --insecure -u "<bmc_username>:<password>" https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions \
- -H 'Content-Type: application/json' \
- --data-raw '{
- "Protocol": "Redfish",
- "Context": "any string is valid",
- "Destination": "https://hw-event-proxy-openshift-bare-metal-events.apps.example.com/webhook",
- "EventTypes": ["Alert"]
- }'
-----
-+
-You should receive a `201 Created` response and a header with `Location: /redfish/v1/EventService/Subscriptions/<subscription_id>` that indicates that the Redfish events subscription is successfully created.
-(link:https://issues.redhat.com/browse/OCPBUGSM-43707[*OCPBUGSM-43707*])
-
-* When using the GitOps ZTP pipeline to install a {sno} cluster in a disconnected environment, there should be two `CatalogSource` CRs applied in the cluster. One of the `CatalogSource` CRs gets deleted following multiple node reboots. As a workaround, you can change the default names, such as `certified-operators` and `redhat-operators`, of the catalog sources. (link:https://issues.redhat.com/browse/OCPBUGSM-46245[*OCPBUGSM-46245*])
-
-* If an invalid subscription channel is specified in the subscription policy that is used to perform a cluster upgrade, the {cgu-operator-full} indicates a successful upgrade right after the policy is enforced because the `Subscription` state remains `AtLatestKnown`. (link:https://issues.redhat.com/browse/OCPBUGSM-43618[*OCPBUGSM-43618*])
-
-* The `SiteConfig` disk partition definition fails when applied to multiple nodes in a cluster. When a `SiteConfig` CR is used to provision a compact cluster, creating a valid `diskPartition` config on multiple nodes fails with a Kustomize plug-in error.
(link:https://issues.redhat.com/browse/OCPBUGSM-44403[*OCPBUGSM-44403*]) - -* If secure boot is currently disabled and you try to enable it using ZTP, the cluster installation does not start. When secure boot is enabled through ZTP, the boot options are configured before the virtual CD is attached. Therefore, the first boot from the existing hard disk has the secure boot turned on. The cluster installation gets stuck because the system never boots from the CD. (link:https://issues.redhat.com/browse/OCPBUGSM-45085[*OCPBUGSM-45085*]) - -* Using {rh-rhacm-first}, spoke cluster deployments on Dell PowerEdge R640 servers are blocked when the virtual media does not disconnect the ISO in the iDRAC console after writing the image to the disk. As a workaround, disconnect the ISO manually through the Virtual Media tab in the iDRAC console. -(link:https://issues.redhat.com/browse/OCPBUGSM-45884[*OCPBUGSM-45884*]) - -* Low-latency applications that rely on high-resolution timers to wake up their threads might experience higher wake up latencies than expected. -Although the expected wake up latency is under 20us, latencies exceeding this can occasionally be seen when running the cyclictest tool for long durations (24 hours or more). -Testing has shown that wake up latencies are under 20us for over 99.999999% of the samples. -(link:https://issues.redhat.com/browse/RHELPLAN-138733[*RHELPLAN-138733*]) - -* A Chapman Beach NIC from Intel must be installed in a bifurcated PCIe slot to ensure that both ports are visible. A limitation also exists in the current devlink tooling in RHEL 8.6 which prevents the configuration of 2 ports in the bifurcated PCIe slot. -(link:https://issues.redhat.com/browse/RHELPLAN-142458[*RHELPLAN-142458*]) - -* Disabling an SR-IOV VF when a port goes down can cause a 3-4 second delay with Intel NICs. -(link:https://issues.redhat.com/browse/RHELPLAN-126931[*RHELPLAN-126931*]) - -* When using Intel NICs, IPV6 traffic stops when an SR-IOV VF is assigned an IPV6 address. - (link:https://issues.redhat.com/browse/RHELPLAN-137741[*RHELPLAN-137741*]) - -* When using VLAN strip offloading, the offload flag (`ol_flag`) is not consistently set correctly with the iavf driver. -(link:https://issues.redhat.com/browse/RHELPLAN-141240[*RHELPLAN-141240*]) - -* A deadlock can occur if an allocation fails during a configuration change with the ice driver. -(link:https://issues.redhat.com/browse/RHELPLAN-130855[*RHELPLAN-130855*]) - -* SR-IOV VFs send GARP packets with the wrong MAC address when using Intel NICs. -(link:https://issues.redhat.com/browse/RHELPLAN-140971[*RHELPLAN-140971*]) - -* When using the GitOps ZTP method of managing clusters and deleting a cluster which has not completed installation, the cleanup of the cluster namespace on the hub cluster might hang indefinitely. -To complete the namespace deletion, remove the `baremetalhost.metal3.io` finalizer from two CRs in the cluster namespace: -. Remove the finalizer from the secret that is pointed to by the BareMetalHost CR `.spec.bmc.credentialsName`. -. Remove the finalizer from the `BareMetalHost` CR. -When these finalizers are removed the namespace termination completes within a few seconds. -(link:https://issues.redhat.com/browse/OCPBUGS-3029[*OCPBUGS-3029*]) - -* The addition of a new feature in OCP 4.12 that enables UDP GRO also causes all veth devices to have one RX queue per available CPU (previously each veth had one queue). 
-Those queues are dynamically configured by OVN and there is no synchronization between latency tuning and this queue creation. -The latency tuning logic monitors the veth NIC creation events and starts configuring the RPS queue cpu masks before all the queues are properly created. -This means that some of the RPS queue masks are not configured. Since not all NIC queues are configured properly there is a chance of latency spikes in a real-time application that uses timing-sensitive cpus for communicating with services in other containers. -Applications that do not use kernel networking stack are not affected. -(link:https://issues.redhat.com/browse/OCPBUGS-4194[*OCPBUGS-4194*]) - -* Platform Operator and RukPak known issues: - -** Deleting a platform Operator results in a cascading deletion of the underlying resources. This cascading deletion logic can only delete resources that are defined in the Operator Lifecycle Manager-based (OLM) Operator's bundle format. In the case that a platform Operator creates resources that are defined outside of that bundle format, then the platform Operator is responsible for handling this cleanup interaction. This behavior can be observed when installing the cert-manager Operator as a platform Operator, and then removing it. The expected behavior is that a namespace is left behind that the cert-manager Operator created. - -** The platform Operators manager does not have any logic that compares the current and desired state of the cluster-scoped `BundleDeployment` resource it is managing. This leaves the possibility for a user who has sufficient role-based access control (RBAC) to manually modify that underlying `BundleDeployment` resource and can lead to situations where users can escalate their permissions to the `cluster-admin` role. By default, you should limit access to this resource to a small number of users that explicitly require access. The only supported client for the `BundleDeployment` resource during this Technology Preview release is the platform Operators manager component. - -** OLM's Marketplace component is an optional cluster capability that can be disabled. This has implications during the Technology Preview release because platform Operators are currently only sourced from the `redhat-operators` catalog source that is managed by the Marketplace component. As a workaround, a cluster administrator can create this catalog source manually. - -** The RukPak provisioner implementations do not have the ability to inspect the health or state of the resources that they are managing. This has implications for surfacing the generated `BundleDeployment` resource state to the `PlatformOperator` resource that owns it. If a `registry+v1` bundle contains manifests that can be successfully applied to the cluster, but will fail at runtime, such as a `Deployment` object referencing a non-existent image, the result is a successful status being reflected in individual `PlatformOperator` and `BundleDeployment` resources. - -** Cluster administrators configuring `PlatformOperator` resources before cluster creation cannot easily determine the desired package name without leveraging an existing cluster or relying on documented examples. There is currently no validation logic that ensures an individually configured `PlatformOperator` resource will be able to successfully roll out to the cluster. 
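-+
-For reference, the `redhat-operators` catalog source mentioned in the Marketplace workaround above can be created manually with a manifest similar to the following sketch. The index image tag and display name shown here are illustrative assumptions only; use the index image that matches your cluster version:
-+
-[source,yaml]
-----
-apiVersion: operators.coreos.com/v1alpha1
-kind: CatalogSource
-metadata:
-  name: redhat-operators            # name expected by platform Operators
-  namespace: openshift-marketplace
-spec:
-  sourceType: grpc
-  image: registry.redhat.io/redhat/redhat-operator-index:v4.12  # example index image
-  displayName: Red Hat Operators
-  publisher: Red Hat
-----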
-
-* When using the Technology Preview OCI feature with the oc-mirror CLI plug-in, the mirrored catalog embeds all of the Operator bundles, instead of filtering only on those specified in the image set configuration file. (link:https://issues.redhat.com/browse/OCPBUGS-5085[*OCPBUGS-5085*])
-
-* There is currently a known issue when you run the Agent-based {product-title} Installer to generate an ISO image from a directory where the previous release was used for ISO image generation. An error message is displayed that states the release versions do not match. As a workaround, create and use a new directory. (link:https://issues.redhat.com/browse/OCPBUGS-5159[*OCPBUGS-5159*])
-
-* The defined capabilities in the `install-config.yaml` file are not applied in the Agent-based {product-title} installation. Currently, there is no workaround. (link:https://issues.redhat.com/browse/OCPBUGS-5129[*OCPBUGS-5129*])
-
-* Fully populated load balancers on {rh-openstack} that are created with the OVN driver can contain pools that are stuck in a pending creation status. This issue can cause problems for clusters that are deployed on {rh-openstack}. To resolve the issue, update your {rh-openstack} packages. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2042976[*BZ#2042976*])
-
-* Bulk load-balancer member updates on {rh-openstack} can return a 500 code in response to `PUT` requests. This issue can cause problems for clusters that are deployed on {rh-openstack}. To resolve the issue, update your {rh-openstack} packages. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2100135[*BZ#2100135*])
-
-* Clusters that use external cloud providers can fail to retrieve updated credentials after rotation. The following platforms are affected:
-+
---
-** {alibaba}
-** IBM Cloud VPC
-** IBM Power
-** {VirtProductName}
-** {rh-openstack}
---
-+
-As a workaround, restart `openshift-cloud-controller-manager` pods by running the following command:
-+
-[source,terminal]
-----
-$ oc delete pods --all -n openshift-cloud-controller-manager
-----
-+
-(link:https://issues.redhat.com/browse/OCPBUGS-5036[*OCPBUGS-5036*])
-
-* There is a known issue when `cloud-provider-openstack` tries to create health monitors on OVN load balancers by using the API to create fully populated load balancers. These health monitors become stuck in a `PENDING_CREATE` status. After their deletion, associated load balancers are stuck in a `PENDING_UPDATE` status. There is no workaround. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2143732[*BZ#2143732*])
-
-* Due to a known issue, to use stateful IPv6 networks with clusters that run on {rh-openstack}, you must include `ip=dhcp,dhcpv6` in the kernel arguments of link:../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-kernel-arguments_nodes-nodes-working[worker nodes]. (link:https://issues.redhat.com/browse/OCPBUGS-2104[*OCPBUGS-2104*])
-
-* It is not possible to create a macvlan on the physical function (PF) when a virtual function (VF) already exists. This issue affects the Intel E810 NIC. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2120585[*BZ#2120585*])
-
-* There is currently a known issue when manually configuring IPv6 addresses and routes on an IPv4 {product-title} cluster. When converting to a dual-stack cluster, newly created pods remain in the `ContainerCreating` status. Currently, there is no workaround. This issue is planned to be addressed in a future {product-title} release.
(link:https://issues.redhat.com/browse/OCPBUGS-4411[*OCPBUGS-4411*]) - -* When an OVN cluster installed on IBM Public Cloud has more than 60 worker nodes, simultaneously creating 2000 or more services and route objects can cause pods created at the same time to remain in the `ContainerCreating` status. If this problem occurs, entering the `oc describe pod ` command shows events with the following warning: `FailedCreatePodSandBox...failed to configure pod interface: timed out waiting for OVS port binding (ovn-installed)`. There is currently no workaround for this issue. (link:https://issues.redhat.com/browse/OCPBUGS-3470[*OCPBUGS-3470*]) - -* When a control plane machine is replaced on a cluster that uses the OVN-Kubernetes network provider, the pods related to OVN-Kubernetes might not start on the replacement machine. When this occurs, the lack of networking on the new machine prevents etcd from allowing it to replace the old machine. As a result, the cluster is stuck in this state and might become degraded. This behavior can occur when the control plane is replaced manually or by the control plane machine set. -+ -There is currently no workaround to resolve this issue if encountered. To avoid this issue, xref:../machine_management/control_plane_machine_management/cpmso-disabling.adoc[disable the control plane machine set] and do not replace control plane machines manually if your cluster uses the OVN-Kubernetes network provider. (link:https://issues.redhat.com/browse/OCPBUGS-5306[*OCPBUGS-5306*]) - -[id="ocp-4-12-asynchronous-errata-updates"] -== Asynchronous errata updates - -Security, bug fix, and enhancement updates for {product-title} {product-version} are released as asynchronous errata through the Red Hat Network. All {product-title} {product-version} errata is https://access.redhat.com/downloads/content/290/[available on the Red Hat Customer Portal]. See the https://access.redhat.com/support/policy/updates/openshift[{product-title} Life Cycle] for more information about asynchronous errata. - -Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified through email whenever new errata relevant to their registered systems are released. - -[NOTE] -==== -Red Hat Customer Portal user accounts must have systems registered and consuming {product-title} entitlements for {product-title} errata notification emails to generate. -==== - -This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of {product-title} {product-version}. Versioned asynchronous releases, for example with the form {product-title} {product-version}.z, will be detailed in subsections. In addition, releases in which the errata text cannot fit in the space provided by the advisory will be detailed in subsections that follow. - -[IMPORTANT] -==== -For any {product-title} release, always review the instructions on xref:../updating/updating-cluster-within-minor.adoc#updating-cluster-within-minor[updating your cluster] properly. -==== - -//Update with relevant advisory information -[id="ocp-4-12-0-ga"] -=== RHSA-2022:7399 - {product-title} 4.12.0 image release, bug fix, and security update advisory - -Issued: 2023-01-17 - -{product-title} release 4.12.0, which includes security updates, is now available. 
The list of bug fixes that are included in the update is documented in the link:https://access.redhat.com/errata/RHSA-2022:7399[RHSA-2022:7399] advisory. The RPM packages that are included in the update are provided by the link:https://access.redhat.com/errata/RHSA-2022:7398[RHSA-2022:7398] advisory. - -Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release: - -You can view the container images in this release by running the following command: - -[source,terminal] ----- -$ oc adm release info 4.12.0 --pullspecs ----- -//replace 4.y.z for the correct values for the release. You do not need to update oc to run this command. diff --git a/release_notes/ocp-4-13-release-notes.adoc b/release_notes/ocp-4-13-release-notes.adoc new file mode 100644 index 0000000000..e7f6e2c3b0 --- /dev/null +++ b/release_notes/ocp-4-13-release-notes.adoc @@ -0,0 +1,1149 @@ +:_content-type: ASSEMBLY +[id="ocp-4-13-release-notes"] += {product-title} {product-version} release notes +include::_attributes/common-attributes.adoc[] +:context: release-notes + +toc::[] + +Red Hat {product-title} provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management overhead. {product-title} supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP. + +Built on {op-system-base-full} and Kubernetes, {product-title} provides a more secure and scalable multitenant operating system for today's enterprise-class applications, while delivering integrated application runtimes and libraries. {product-title} enables organizations to meet security, privacy, compliance, and governance requirements. + +[id="ocp-4-13-about-this-release"] +== About this release + +// TODO: Update with the relevant information closer to release. +{product-title} (link:https://access.redhat.com/errata/RHSA-2022:7399[RHSA-2022:7399]) is now available. This release uses link:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md[Kubernetes 1.25] with CRI-O runtime. New features, changes, and known issues that pertain to {product-title} {product-version} are included in this topic. + +{product-title} {product-version} clusters are available at https://console.redhat.com/openshift. With the {cluster-manager-first} application for {product-title}, you can deploy OpenShift clusters to either on-premises or cloud environments. + +// Double check OP system versions +{product-title} {product-version} is supported on {op-system-base-full} 8.4 and 8.5, as well as on {op-system-first} 4.13. + +You must use {op-system} machines for the control plane, and you can use either {op-system} or {op-system-base} for compute machines. +//Removed the note per https://issues.redhat.com/browse/GRPA-3517 + +//TODO: Remove this for 4.13 +Starting with {product-title} {product-version} an additional six months of Extended Update Support (EUS) phase on even numbered releases from 18 months to two years. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]. + +//TODO: Add the line below for EUS releases. +{product-title} 4.8 is an Extended Update Support (EUS) release. 
More information on Red Hat OpenShift EUS is available in link:https://access.redhat.com/support/policy/updates/openshift#ocp4_phases[OpenShift Life Cycle] and link:https://access.redhat.com/support/policy/updates/openshift-eus[OpenShift EUS Overview]. + +//TODO: The line below should be used when it is next appropriate. Revisit in April 2023 timeframe. +Maintenance support ends for version 4.8 in January 2023 and goes to extended life phase. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]. + + +[id="ocp-4-13-add-on-support-status"] +== {product-title} layered and dependent component support and compatibility + +The scope of support for layered and dependent components of {product-title} changes independently of the {product-title} version. To determine the current support status and compatibility for an add-on, refer to its release notes. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]. + +[id="ocp-4-13-new-features-and-enhancements"] +== New features and enhancements + +This release adds improvements related to the following components and concepts. + +[id="ocp-4-13-rhcos"] +=== {op-system-first} + +[id="ocp-4-13-installation-and-upgrade"] +=== Installation and upgrade + +[id="ocp-4-13-post-installation"] +=== Post-installation configuration + +[id="ocp-4-13-web-console"] +=== Web console + +[id="ocp-4-13-oc"] +=== OpenShift CLI (oc) + +[id="ocp-4-13-ibm-z"] +=== IBM Z and LinuxONE + +[id="ocp-4-13-images"] +=== Images + +[id="ocp-4-13-networking"] +=== Networking + +[id="ocp-4-13-storage"] +=== Storage + +[id="ocp-4-13-olm"] +=== Operator lifecycle + +[id="ocp-4-13-osdk"] +=== Operator development + +[id="ocp-4-13-machine-api"] +=== Machine API + +[id="ocp-4-13-machine-config-operator"] +=== Machine Config Operator + +[id="ocp-4-13-nodes"] +=== Nodes + +[id="ocp-4-13-monitoring"] +=== Monitoring + +[id="ocp-4-13-scalability-and-performance"] +=== Scalability and performance + +[id="ocp-4-13-insights-operator"] +=== Insights Operator + +[id="ocp-4-13-auth"] +=== Authentication and authorization + +[id="ocp-4-13-hcp"] +=== Hosted control planes (Technology Preview) + +[id="ocp-4-13-rhv"] +=== Red Hat Virtualization (RHV) + +[id="ocp-4-13-notable-technical-changes"] +== Notable technical changes + +{product-title} {product-version} introduces the following notable technical changes. + +// Note: use [discrete] for these sub-headings. + +[id="ocp-4-13-deprecated-removed-features"] +== Deprecated and removed features + +Some features available in previous releases have been deprecated or removed. + +Deprecated functionality is still included in {product-title} and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality deprecated and removed within {product-title} {product-version}, refer to the table below. Additional details for more functionality that has been deprecated and removed are listed after the table. 
+
+In the following tables, features are marked with the following statuses:
+
+* _General Availability_
+* _Deprecated_
+* _Removed_
+
+[discrete]
+=== Operator deprecated and removed features
+
+.Operator deprecated and removed tracker
+[cols="4,1,1,1",options="header"]
+|====
+|Feature |4.11 |4.12 | 4.13
+
+|SQLite database format for Operator catalogs
+|Deprecated
+|Deprecated
+|Deprecated
+
+|====
+
+[discrete]
+=== Images deprecated and removed features
+
+.Images deprecated and removed tracker
+[cols="4,1,1,1",options="header"]
+|====
+|Feature |4.11 |4.12 | 4.13
+
+|`ImageChangesInProgress` condition for Cluster Samples Operator
+|Deprecated
+|Deprecated
+|Deprecated
+
+|`MigrationInProgress` condition for Cluster Samples Operator
+|Deprecated
+|Deprecated
+|Deprecated
+
+|====
+
+[discrete]
+=== Monitoring deprecated and removed features
+
+.Monitoring deprecated and removed tracker
+[cols="4,1,1,1",options="header"]
+|====
+|Feature |4.11 |4.12 | 4.13
+
+|====
+
+[discrete]
+=== Installation deprecated and removed features
+
+.Installation deprecated and removed tracker
+[cols="4,1,1,1",options="header"]
+|====
+|Feature |4.11 |4.12 | 4.13
+
+|vSphere 7.0 Update 1 or earlier
+|General Availability
+|Deprecated
+|Deprecated
+
+|VMware ESXi 7.0 Update 1 or earlier
+|General Availability
+|Deprecated
+|Deprecated
+
+|CoreDNS wildcard queries for the `cluster.local` domain
+|General Availability
+|General Availability
+|Deprecated
+
+|`ingressVIP` and `apiVIP` settings in the `install-config.yaml` file for installer-provisioned infrastructure clusters
+|General Availability
+|General Availability
+|Deprecated
+
+|====
+
+[discrete]
+=== Updating clusters deprecated and removed features
+
+.Updating clusters deprecated and removed tracker
+[cols="4,1,1,1",options="header"]
+|====
+|Feature |4.11 |4.12 | 4.13
+
+|====
+
+[discrete]
+=== Storage deprecated and removed features
+
+.Storage deprecated and removed tracker
+[cols="4,1,1,1",options="header"]
+|====
+|Feature |4.11 |4.12 | 4.13
+
+|Persistent storage using FlexVolume
+|Deprecated
+|Deprecated
+|Deprecated
+
+|====
+
+[discrete]
+=== Authentication and authorization deprecated and removed features
+
+.Authentication and authorization deprecated and removed tracker
+[cols="4,1,1,1",options="header"]
+|====
+|Feature |4.11 |4.12 | 4.13
+
+|====
+
+[discrete]
+=== Specialized hardware and driver enablement deprecated and removed features
+
+.Specialized hardware and driver enablement deprecated and removed tracker
+[cols="4,1,1,1",options="header"]
+|====
+|Feature |4.11 |4.12 | 4.13
+
+|Special Resource Operator (SRO)
+|Technology Preview
+|Technology Preview
+|Removed
+
+|====
+
+[discrete]
+=== Multi-architecture deprecated and removed features
+
+.Multi-architecture deprecated and removed tracker
+[cols="4,1,1,1",options="header"]
+|====
+|Feature |4.11 |4.12 | 4.13
+
+|IBM POWER8 all models (`ppc64le`)
+|General Availability
+|General Availability
+|Deprecated
+
+|IBM POWER9 AC922 (`ppc64le`)
+|General Availability
+|General Availability
+|Deprecated
+
+|IBM POWER9 IC922 (`ppc64le`)
+|General Availability
+|General Availability
+|Deprecated
+
+|IBM POWER9 LC922 (`ppc64le`)
+|General Availability
+|General Availability
+|Deprecated
+
+|IBM z13 all models (`s390x`)
+|General Availability
+|General Availability
+|Deprecated
+
+|IBM LinuxONE Emperor (`s390x`)
+|General Availability
+|General Availability
+|Deprecated
+
+|IBM LinuxONE Rockhopper (`s390x`)
+|General Availability
+|General Availability
+|Deprecated
+
+|AMD64 (x86_64) v1 CPU +|General Availability +|General Availability +|Deprecated + +|==== + +[discrete] +=== Networking deprecated and removed features + +.Networking deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.11 |4.12 | 4.13 + +|Kuryr on {rh-openstack} +|General Availability +|General Availability +|Deprecated + +|==== + +[id="ocp-4-13-deprecated-features"] +=== Deprecated features + +[id="ocp-4-13-rhv-deprecations"] +==== Red Hat Virtualization (RHV) as a host platform for {product-title} will be deprecated + +Red Hat Virtualization (RHV) will be deprecated in an upcoming release of {product-title}. Support for {product-title} on RHV will be removed from a future {product-title} release, currently planned as {product-title} 4.14. + +[id="ocp-4-13-ne-deprecations"] +==== Wildcard DNS queries for the `cluster.local` domain are deprecated + +CoreDNS will stop supporting wildcard DNS queries for names under the `cluster.local` domain. These queries will resolve in {product-title} {product-version} as they do in earlier versions, but support will be removed from a future {product-title} release. + +[id="ocp-4-13-ne-kuryr"] +==== Kuryr support for clusters that run on {rh-openstack} + +In {product-title} 4.12, support for Kuryr on clusters that run on {rh-openstack} is deprecated. Support will be removed no earlier than {product-title} 4.14. + +[id="ocp-4-13-removed-features"] +=== Removed features + +[id="ocp-4-13-bug-fixes"] +== Bug fixes + +[discrete] +[id="ocp-4-13-api-auth-bug-fixes"] +==== API Server and Authentication + +//Bug fix work for TELCODOCS-750 +//Bare Metal Hardware Provisioning / OS Image Provider +//Bare Metal Hardware Provisioning / baremetal-operator +//Bare Metal Hardware Provisioning / cluster-baremetal-operator +//Bare Metal Hardware Provisioning / ironic" +//CNF Platform Validation +//Cloud Native Events / Cloud Event Proxy +//Cloud Native Events / Cloud Native Events +//Cloud Native Events / Hardware Event Proxy +//Cloud Native Events +//Driver Toolkit +//Installer / Assisted installer +//Installer / OpenShift on Bare Metal IPI +//Networking / ptp +//Node Feature Discovery Operator +//Performance Addon Operator +//Telco Edge / HW Event Operator +//Telco Edge / RAN +//Telco Edge / TALO +//Telco Edge / ZTP + +[discrete] +[id="ocp-4-13-bare-metal-hardware-bug-fixes"] +==== Bare Metal Hardware Provisioning + +[discrete] +[id="ocp-4-13-builds-bug-fixes"] +==== Builds + +[discrete] +[id="ocp-4-13-cloud-compute-bug-fixes"] +==== Cloud Compute + +[discrete] +[id="ocp-4-13-dev-console-bug-fixes"] +==== Developer Console + +[discrete] +[id="ocp-4-13-image-registry-bug-fixes"] +==== Image Registry + +[discrete] +[id="ocp-4-13-installer-bug-fixes"] +==== Installer + +[discrete] +[id="ocp-4-13-kube-controller-bug-fixes"] +==== Kubernetes Controller Manager + +[discrete] +[id="ocp-4-13-kube-scheduler-bug-fixes"] +==== Kubernetes Scheduler + +[discrete] +[id="ocp-4-13-machine-config-operator-bug-fixes"] +==== Machine Config Operator + +[discrete] +[id="ocp-4-13-management-console-bug-fixes"] +==== Management Console + +[discrete] +[id="ocp-4-13-monitoring-bug-fixes"] +==== Monitoring + +[discrete] +[id="ocp-4-13-networking-bug-fixes"] +==== Networking + +[discrete] +[id="ocp-4-13-node-bug-fixes"] +==== Node + +[discrete] +[id="ocp-4-13-openshift-cli-bug-fixes"] +==== OpenShift CLI (oc) + +[discrete] +[id="ocp-4-13-olm-bug-fixes"] +==== Operator Lifecycle Manager (OLM) + +[discrete] +[id="ocp-4-13-openshift-operator-sdk-bug-fixes"] +==== 
Operator SDK + +[discrete] +[id="ocp-4-13-file-integrity-operator-bug-fixes"] +==== File Integrity Operator + +[discrete] +[id="ocp-4-13-compliance-operator-bug-fixes"] +==== Compliance Operator + +[discrete] +[id="ocp-4-13-openshift-api-server-bug-fixes"] +==== OpenShift API server + +[discrete] +[id="ocp-4-13-rhcos-bug-fixes"] +==== {op-system-first} + +[discrete] +[id="ocp-4-13-scalability-and-performance-bug-fixes"] +==== Scalability and performance + +[discrete] +[id="ocp-4-13-storage-bug-fixes"] +==== Storage + +[id="ocp-4-13-technology-preview"] +== Technology Preview features + +Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features: + +link:https://access.redhat.com/support/offerings/techpreview[Technology Preview Features Support Scope] + +In the following tables, features are marked with the following statuses: + +* _Technology Preview_ +* _General Availability_ +* _Not Available_ +* _Deprecated_ + +[discrete] +=== Networking Technology Preview features + +.Networking Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.11 |4.12 | 4.13 + +|PTP single NIC hardware configured as boundary clock +|Technology Preview +|General Availability +|General Availability + +|PTP dual NIC hardware configured as boundary clock +|Not Available +|Technology Preview +|Technology Preview + +|PTP events with boundary clock +|Technology Preview +|General Availability +|General Availability + +|Pod-level bonding for secondary networks +|Technology Preview +|General Availability +|General Availability + +|External DNS Operator +|Technology Preview +|General Availability +|General Availability + +|AWS Load Balancer Operator +|Not Available +|Technology Preview +|Technology Preview + +|Cloud controller manager for VMware vSphere +|Technology Preview +|Technology Preview +|Technology Preview + +|Ingress Node Firewall Operator +|Not Available +|Not Available +|Technology Preview + +|Advertise using BGP mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses +|Not Available +|Technology Preview +|General Availability + +|Advertise using L2 mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses +|Not Available +|Technology Preview +|Technology Preview + +|Multi-network policies for SR-IOV networks +|Not Available +|Not Available +|Technology Preview + +|Updating the interface-specific safe sysctls list +|Not Available +|Not Available +|Technology Preview + +|MT2892 Family [ConnectX-6 Dx] SR-IOV support +|Not Available +|Not Available +|Technology Preview + +|MT2894 Family [ConnectX-6 Lx] SR-IOV support +|Not Available +|Not Available +|Technology Preview + +|MT42822 BlueField-2 in ConnectX-6 NIC mode SR-IOV support +|Not Available +|Not Available +|Technology Preview + +|Silicom STS Family SR-IOV support +|Not Available +|Not Available +|Technology Preview + +|MT2892 Family [ConnectX-6 Dx] OvS Hardware Offload support +|Not Available +|Not Available +|Technology Preview + +|MT2894 Family [ConnectX-6 Lx] OvS Hardware Offload support +|Not Available +|Not Available +|Technology Preview + +|MT42822 BlueField-2 in ConnectX-6 NIC mode OvS Hardware Offload support +|Not Available +|Not Available +|Technology Preview + +|==== + +[discrete] +=== Storage Technology Preview features + +.Storage Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== 
+|Feature |4.11 |4.12 | 4.13
+
+|Shared Resources CSI Driver and Build CSI Volumes in OpenShift Builds
+|Technology Preview
+|Technology Preview
+|Technology Preview
+
+|CSI volume expansion
+|Technology Preview
+|General Availability
+|General Availability
+
+|CSI Azure File Driver Operator
+|Technology Preview
+|General Availability
+|General Availability
+
+|CSI Google Filestore Driver Operator
+|Not Available
+|Not Available
+|Technology Preview
+
+|CSI automatic migration
+(Azure file, VMware vSphere)
+|Technology Preview
+|Technology Preview
+|Technology Preview
+
+|CSI automatic migration
+(Azure Disk, OpenStack Cinder)
+|Technology Preview
+|General Availability
+|General Availability
+
+|CSI automatic migration
+(AWS EBS, GCP disk)
+|Technology Preview
+|Technology Preview
+|General Availability
+
+|CSI inline ephemeral volumes
+|Technology Preview
+|Technology Preview
+|Technology Preview
+
+|CSI generic ephemeral volumes
+|Not Available
+|General Availability
+|General Availability
+
+|Shared Resource CSI Driver
+|Technology Preview
+|Technology Preview
+|Technology Preview
+
+|Automatic device discovery and provisioning with Local Storage Operator
+|Technology Preview
+|Technology Preview
+|Technology Preview
+
+|====
+
+[discrete]
+=== Installation Technology Preview features
+
+.Installation Technology Preview tracker
+[cols="4,1,1,1",options="header"]
+|====
+|Feature |4.11 |4.12 | 4.13
+
+|Adding kernel modules to nodes with kvc
+|Technology Preview
+|Technology Preview
+|Technology Preview
+
+|IBM Cloud VPC clusters
+|Technology Preview
+|Technology Preview
+|General Availability
+
+|Selectable Cluster Inventory
+|Technology Preview
+|Technology Preview
+|Technology Preview
+
+|Multi-architecture compute machines
+|Not Available
+|Technology Preview
+|Technology Preview
+
+|Mount shared entitlements in BuildConfigs in RHEL
+|Technology Preview
+|Technology Preview
+|Technology Preview
+
+|Agent-based {product-title} Installer
+|Not Available
+|Not Available
+|General Availability
+
+|====
+
+[discrete]
+=== Node Technology Preview features
+
+.Nodes Technology Preview tracker
+[cols="4,1,1,1",options="header"]
+|====
+|Feature |4.11 |4.12 | 4.13
+
+|Non-preempting priority classes
+|Technology Preview
+|Technology Preview
+|Technology Preview
+
+|Linux Control Group version 2 (cgroup v2)
+|Not Available
+|Not Available
+|Technology Preview
+
+|crun container runtime
+|Not Available
+|Not Available
+|Technology Preview
+
+|====
+
+[discrete]
+=== Multi-Architecture Technology Preview features
+
+.Multi-Architecture Technology Preview tracker
+[cols="4,1,1,1",options="header"]
+|====
+|Feature |4.11 |4.12 | 4.13
+
+|`kdump` on `arm64` architecture
+|Not Available
+|Technology Preview
+|Technology Preview
+
+|`kdump` on `s390x` architecture
+|Technology Preview
+|Technology Preview
+|Technology Preview
+
+|`kdump` on `ppc64le` architecture
+|Technology Preview
+|Technology Preview
+|Technology Preview
+
+|IBM Secure Execution on {ibmzProductName} and LinuxONE
+|Not Available
+|Not Available
+|Technology Preview
+
+|====
+
+[discrete]
+=== Serverless Technology Preview features
+
+.Serverless Technology Preview tracker
+[cols="4,1,1,1",options="header"]
+|====
+|Feature |4.11 |4.12 | 4.13
+
+|Serverless functions
+|Technology Preview
+|Technology Preview
+|Technology Preview
+
+|====
+
+[discrete]
+=== Specialized hardware and driver enablement Technology Preview features
+
+.Specialized hardware and driver enablement Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.11 |4.12 | 4.13 + +|Driver Toolkit +|Technology Preview +|Technology Preview +|General Availability + +|Special Resource Operator (SRO) +|Technology Preview +|Technology Preview +|Not Available + +|Hub and spoke cluster support +|Not Available +|Not Available +|Technology Preview + +|==== + +[discrete] +=== Web console Technology Preview features + +.Web console Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.11 |4.12 | 4.13 + +|Multicluster console +|Technology Preview +|Technology Preview +|Technology Preview + +|Dynamic Plug-ins +|Technology Preview +|Technology Preview +|General Availability + +|==== + +[discrete] +=== Scalability and performance Technology Preview features + +.Scalability and performance Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.11 |4.12 | 4.13 + +|Hyperthreading-aware CPU manager policy +|Technology Preview +|Technology Preview +|Technology Preview + +|Node Observability Operator +|Not Available +|Technology Preview +|Technology Preview + +|{factory-prestaging-tool} +|Not Available +|Not Available +|Technology Preview + +|Single-node OpenShift cluster expansion with worker nodes +|Not Available +|Not Available +|Technology Preview + +|{cgu-operator-first} +|Technology Preview +|Technology Preview +|General Availability + +|Mount namespace encapsulation +|Not Available +|Not Available +|Technology Preview + +|==== + +[discrete] +=== Operator Technology Preview features + +.Operator Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.11 |4.12 | 4.13 + +|Hybrid Helm Operator +|Technology Preview +|Technology Preview +|Technology Preview + +|Java-based Operator +|Not Available +|Technology Preview +|Technology Preview + +|Multi-cluster Engine Operator +|Technology Preview +|Technology Preview +|Technology Preview + +|Node Observability Operator +|Not Available +|Not Available +|Technology Preview + +|Network Observability Operator +|Not Available +|Not Available +|General Availability + +|Platform Operators +|Not Available +|Not Available +|Technology Preview + +|RukPak +|Not Available +|Not Available +|Technology Preview + +|Cert-manager Operator +|Technology Preview +|Technology Preview +|Technology Preview + +|==== + +[discrete] +=== Monitoring Technology Preview features + +.Monitoring Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.11 |4.12 | 4.13 + +|Alert routing for user-defined projects monitoring +|Technology Preview +|General Availability +|General Availability + +|Alerting rules based on platform monitoring metrics +|Not Available +|Technology Preview +|Technology Preview + +|==== + +[discrete] +=== {rh-openstack-first} Technology Preview features + +.{rh-openstack} Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.11 |4.12 | 4.13 + +|Support for {rh-openstack} DCN +|Technology Preview +|Technology Preview +|Technology Preview + +|Support for external cloud providers for clusters on {rh-openstack} +|Technology Preview +|Technology Preview +|General Availability + +|==== + +[discrete] +=== Architecture Technology Preview features + +.Architecture Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.11 |4.12 | 4.13 + +|Hosted control planes for {product-title} on bare metal +|Not Available +|Technology Preview +|Technology Preview + +|Hosted 
control planes for {product-title} on Amazon Web Services (AWS) +|Not Available +|Technology Preview +|Technology Preview + +|==== + +[discrete] +=== Machine management Technology Preview features + +.Machine management Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.11 |4.12 | 4.13 + +|Managing machines with the Cluster API +|Not Available +|Technology Preview +|Technology Preview + +|Cron job time zones +|Not Available +|Not Available +|Technology Preview + +|Cloud controller manager for Alibaba Cloud +|Technology Preview +|Technology Preview +|Technology Preview + +|Cloud controller manager for Amazon Web Services +|Technology Preview +|Technology Preview +|Technology Preview + +|Cloud controller manager for Google Cloud Platform +|Technology Preview +|Technology Preview +|Technology Preview + +|Cloud controller manager for Microsoft Azure +|Technology Preview +|Technology Preview +|Technology Preview + +|Cloud controller manager for {rh-openstack-first} +|Technology Preview +|Technology Preview +|General Availability + +|Custom Metrics Autoscaler Operator +|Not Available +|Technology Preview +|Technology Preview + +|==== + +[id="ocp-4-13-known-issues"] +== Known issues + +// TODO: This known issue should carry forward to 4.8 and beyond! This needs some SME/QE review before being updated for 4.11. Need to check if KI should be removed or should stay. +* In {product-title} 4.1, anonymous users could access discovery endpoints. Later releases revoked this access to reduce the possible attack surface for security exploits because some discovery endpoints are forwarded to aggregated API servers. However, unauthenticated access is preserved in upgraded clusters so that existing use cases are not broken. ++ +If you are a cluster administrator for a cluster that has been upgraded from {product-title} 4.1 to {product-version}, you can either revoke or continue to allow unauthenticated access. Unless there is a specific need for unauthenticated access, you should revoke it. If you do continue to allow unauthenticated access, be aware of the increased risks. ++ +[WARNING] +==== +If you have applications that rely on unauthenticated access, they might receive HTTP `403` errors if you revoke unauthenticated access. +==== ++ +Use the following script to revoke unauthenticated access to discovery endpoints: ++ +[source,bash] +---- +## Snippet to remove unauthenticated group from all the cluster role bindings +$ for clusterrolebinding in cluster-status-binding discovery system:basic-user system:discovery system:openshift:discovery ; +do +### Find the index of unauthenticated group in list of subjects +index=$(oc get clusterrolebinding ${clusterrolebinding} -o json | jq 'select(.subjects!=null) | .subjects | map(.name=="system:unauthenticated") | index(true)'); +### Remove the element at index from subjects array +oc patch clusterrolebinding ${clusterrolebinding} --type=json --patch "[{'op': 'remove','path': '/subjects/$index'}]"; +done +---- ++ +This script removes unauthenticated subjects from the following cluster role bindings: ++ +-- +** `cluster-status-binding` +** `discovery` +** `system:basic-user` +** `system:discovery` +** `system:openshift:discovery` +-- ++ +(link:https://bugzilla.redhat.com/show_bug.cgi?id=1821771[*BZ#1821771*]) + +// TODO: This known issue should be removed when RHCOS images begin using RHEL 9. RHEL 9 prevents this issue from occurring. 
+* Intermittently, an IBM Cloud VPC cluster might fail to install because some worker machines do not start. Instead, these worker machines remain in the `Provisioned` phase.
++
+There is a workaround for this issue. From the host where you performed the initial installation, delete the failed machines and run the installation program again.
++
+. Verify that the status of the internal application load balancer (ALB) for the master API server is `active`.
+.. Identify the cluster's infrastructure ID by running the following command:
++
+[source,terminal]
+----
+$ oc get infrastructure/cluster -ojson | jq -r '.status.infrastructureName'
+----
+.. Log in to the IBM Cloud account for your cluster and target the correct region for your cluster.
+.. Verify that the internal ALB status is `active` by running the following command:
++
+[source,terminal]
+----
+$ ibmcloud is lb <infrastructure_ID>-kubernetes-api-private --output json | jq -r '.provisioning_status'
+----
+. Identify the machines that are in the `Provisioned` phase by running the following command:
++
+[source,terminal]
+----
+$ oc get machine -n openshift-machine-api
+----
++
+.Example output
+[source,terminal]
+----
+NAME                                    PHASE         TYPE       REGION    ZONE        AGE
+example-public-1-x4gpn-master-0         Running       bx2-4x16   us-east   us-east-1   23h
+example-public-1-x4gpn-master-1         Running       bx2-4x16   us-east   us-east-2   23h
+example-public-1-x4gpn-master-2         Running       bx2-4x16   us-east   us-east-3   23h
+example-public-1-x4gpn-worker-1-xqzzm   Running       bx2-4x16   us-east   us-east-1   22h
+example-public-1-x4gpn-worker-2-vg9w6   Provisioned   bx2-4x16   us-east   us-east-2   22h
+example-public-1-x4gpn-worker-3-2f7zd   Provisioned   bx2-4x16   us-east   us-east-3   22h
+----
+. Delete each failed machine by running the following command:
++
+[source,terminal]
+----
+$ oc delete machine <machine_name> -n openshift-machine-api
+----
+. Wait for the deleted worker machines to be replaced, which can take up to 10 minutes.
+. Verify that the new worker machines are in the `Running` phase by running the following command:
++
+[source,terminal]
+----
+$ oc get machine -n openshift-machine-api
+----
++
+.Example output
+[source,terminal]
+----
+NAME                                    PHASE         TYPE       REGION    ZONE        AGE
+example-public-1-x4gpn-master-0         Running       bx2-4x16   us-east   us-east-1   23h
+example-public-1-x4gpn-master-1         Running       bx2-4x16   us-east   us-east-2   23h
+example-public-1-x4gpn-master-2         Running       bx2-4x16   us-east   us-east-3   23h
+example-public-1-x4gpn-worker-1-xqzzm   Running       bx2-4x16   us-east   us-east-1   23h
+example-public-1-x4gpn-worker-2-mnlsz   Running       bx2-4x16   us-east   us-east-2   8m2s
+example-public-1-x4gpn-worker-3-7nz4q   Running       bx2-4x16   us-east   us-east-3   7m24s
+----
+. Complete the installation by running the following command. Running the installation program again ensures that the cluster's `kubeconfig` is initialized properly:
++
+[source,terminal]
+----
+$ ./openshift-install wait-for install-complete
+----
++
+(link:https://issues.redhat.com/browse/OCPBUGS-1327[*OCPBUGS#1327*])
+
+// TODO: This known issue should carry forward to 4.9 and beyond!
+* The `oc annotate` command does not work for LDAP group names that contain an equal sign (`=`), because the command uses the equal sign as a delimiter between the annotation name and value. As a workaround, use `oc patch` or `oc edit` to add the annotation.
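++
+For example, a command similar to the following uses a JSON merge patch to set an annotation on a group whose name contains equal signs. The group name and annotation key shown here are placeholder values only:
++
+[source,terminal]
+----
+# The group name and annotation key are example values; replace them with your own.
+$ oc patch group 'cn=developers,ou=groups,dc=example,dc=com' --type merge -p '{"metadata":{"annotations":{"example.com/team":"dev"}}}'
+----
++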
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1917280[*BZ#1917280*]) + +[id="ocp-4-13-asynchronous-errata-updates"] +== Asynchronous errata updates + +Security, bug fix, and enhancement updates for {product-title} {product-version} are released as asynchronous errata through the Red Hat Network. All {product-title} {product-version} errata is https://access.redhat.com/downloads/content/290/[available on the Red Hat Customer Portal]. See the https://access.redhat.com/support/policy/updates/openshift[{product-title} Life Cycle] for more information about asynchronous errata. + +Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified through email whenever new errata relevant to their registered systems are released. + +[NOTE] +==== +Red Hat Customer Portal user accounts must have systems registered and consuming {product-title} entitlements for {product-title} errata notification emails to generate. +==== + +This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of {product-title} {product-version}. Versioned asynchronous releases, for example with the form {product-title} {product-version}.z, will be detailed in subsections. In addition, releases in which the errata text cannot fit in the space provided by the advisory will be detailed in subsections that follow. + +[IMPORTANT] +==== +For any {product-title} release, always review the instructions on xref:../updating/updating-cluster-within-minor.adoc#updating-cluster-within-minor[updating your cluster] properly. +==== + +//Update with relevant advisory information +[id="ocp-4-13-0-ga"] +=== RHSA-2022:xxxx - {product-title} 4.13.0 image release, bug fix, and security update advisory + +Issued: 2023-TBD + +{product-title} release 4.13.0, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the link:https://access.redhat.com/errata/RHSA-2022:7399[RHSA-2022:7399] advisory. The RPM packages that are included in the update are provided by the link:https://access.redhat.com/errata/RHSA-2022:7398[RHSA-2022:7398] advisory. + +Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release: + +You can view the container images in this release by running the following command: + +[source,terminal] +---- +$ oc adm release info 4.13.0 --pullspecs +---- +//replace 4.y.z for the correct values for the release. You do not need to update oc to run this command. diff --git a/welcome/index.adoc b/welcome/index.adoc index 8d634cbded..8d9af76d52 100644 --- a/welcome/index.adoc +++ b/welcome/index.adoc @@ -52,7 +52,7 @@ Start with xref:../architecture/architecture.adoc#architecture-overview-architec xref:../security/container_security/security-understanding.adoc#understanding-security[Security and compliance]. ifdef::openshift-enterprise,openshift-webscale[] Then, see the -xref:../release_notes/ocp-4-12-release-notes.adoc#ocp-4-12-release-notes[release notes]. +xref:../release_notes/ocp-4-13-release-notes.adoc#ocp-4-13-release-notes[release notes]. endif::[] endif::[] @@ -161,7 +161,7 @@ on Azure, you can install a cluster. 
- **Install a private cluster**: If your cluster does not require external internet access, you can install a private cluster on xref:../installing/installing_aws/installing-aws-private.adoc#installing-aws-private[AWS], -xref:../installing/installing_azure/installing-azure-private.adoc#installing-aws-private[Azure], +xref:../installing/installing_azure/installing-azure-private.adoc#installing-aws-private[Azure], xref:../installing/installing_gcp/installing-gcp-private.adoc#installing-gcp-private[GCP], or xref:../installing/installing_ibm_cloud_public/preparing-to-install-on-ibm-cloud.adoc#preparing-to-install-on-ibm-cloud[IBM Cloud VPC] Internet access is still required to access the cloud APIs and installation media. @@ -262,7 +262,7 @@ As a cluster administrator for {product-title}, this documentation helps you: - **xref:../architecture/architecture.adoc#architecture-overview-architecture[Understand {product-title} management]**: Learn about components of the {product-title} {product-version} control plane. See how {product-title} control plane and worker nodes are managed and updated through the xref:../machine_management/creating_machinesets/creating-machineset-aws.adoc#machine-api-overview_creating-machineset-aws[Machine API] and xref:../architecture/control-plane.adoc#operators-overview_control-plane[Operators]. -- **Enable cluster capabilities that were disabled prior to installation** Cluster administrators can enable cluster capabilities that were disabled prior to installation. For more information, see xref:../post_installation_configuration/enabling-cluster-capabilities.adoc#enabling-cluster-capabilities[Enabling cluster capabilities]. +- **Enable cluster capabilities that were disabled prior to installation** Cluster administrators can enable cluster capabilities that were disabled prior to installation. For more information, see xref:../post_installation_configuration/enabling-cluster-capabilities.adoc#enabling-cluster-capabilities[Enabling cluster capabilities]. === Manage cluster components