
Merge pull request #58179 from aireilly/ran-4.13-release-notes

TELCODOCS-1154 - release notes for RAN 4.13 new features and bugs
Authored by Gabriel McGoldrick, committed by GitHub, 2023-05-15 10:16:05 +01:00

@@ -1066,7 +1066,7 @@ This update adds the following features:
For more information, see xref:../scalability_and_performance/cnf-numa-aware-scheduling.adoc#cnf-about-numa-aware-scheduling_numa-aware[Scheduling NUMA-aware workloads].
[id="ocp-4-13-ran-workload-partitioning-three-node-standard-node"]
==== Support for workload partitioning for three-node clusters and standard clusters (Technology Preview)
Before this update, workload partitioning was supported for {sno} clusters only.
Now, you can also configure workload partitioning for three-node compact clusters and standard clusters.
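The following is a minimal sketch of how workload partitioning might be enabled at installation time through the `cpuPartitioningMode` field in `install-config.yaml`; the cluster name, domain, and replica counts are illustrative:

.Example `install-config.yaml` excerpt
[source,yaml]
----
apiVersion: v1
baseDomain: example.com           # illustrative base domain
metadata:
  name: example-compact-cluster   # illustrative cluster name
controlPlane:
  name: master
  replicas: 3                     # three-node compact cluster
compute:
- name: worker
  replicas: 0
cpuPartitioningMode: AllNodes     # partition CPUs on all nodes at install time
----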
@@ -1075,20 +1075,17 @@ Use workload partitioning to isolate {product-title} services, cluster managemen
For more information, see xref:../scalability_and_performance/enabling-workload-partitioning.adoc#enabling-workload-partitioning[Workload partitioning].
[id="ocp-4-13-configuring-power-states-using-ztp"]
==== Configuring power states using {ztp}
{product-title} 4.12 introduced the ability to set power states for critical and non-critical workloads.
In {product-title} 4.13, you can now configure power states with {ztp}.
For more information about the feature, see xref:../scalability_and_performance/ztp_far_edge/ztp-advanced-policy-config.adoc#ztp-using-pgt-to-configure-power-saving-states_ztp-advanced-policy-config[Configuring power states using PolicyGenTemplates CRs].
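The following is a minimal sketch of a `PolicyGenTemplate` CR that sets power-saving workload hints through a `PerformanceProfile` source file; the names, namespace, and binding rules are illustrative:

.Example `PolicyGenTemplate` excerpt
[source,yaml]
----
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: group-du-sno              # illustrative name
  namespace: ztp-group
spec:
  bindingRules:
    group-du-sno: ""              # illustrative cluster selector label
  mcp: master
  sourceFiles:
  - fileName: PerformanceProfile.yaml
    policyName: config-policy
    spec:
      workloadHints:
        realTime: true
        highPowerConsumption: false   # favor power saving over maximum performance
        perPodPowerManagement: true   # allow per-pod power management
----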
[id="ocp-4-13-etcd-overview"]
==== Documentation enhancement: Overview of etcd is now available
An overview of etcd, including the benefits it provides and how it works, is now available in the {product-title} documentation. As the primary data store for Kubernetes, etcd provides a reliable approach to cluster configuration and management on {product-title} through the etcd Operator. For more information, see xref:../architecture/control-plane.adoc#etcd-overview_control-plane[Overview of etcd].
[id="ocp-4-13-scalability-and-performance-talm-updates"]
==== Pre-caching container images for managed cluster updates with {cgu-operator} and {ztp}
This release adds two new {cgu-operator-first} features for use with {ztp}:
* A new check ensures that there is sufficient available disk space on the managed cluster host before cluster updates.
Now, during container image pre-caching, {cgu-operator} compares the available host disk space with the estimated {product-title} image size to ensure that there is enough disk space on the host.
@@ -1097,6 +1094,36 @@ Now, during container image pre-caching, {cgu-operator} compares the available h
For more information see xref:../scalability_and_performance/cnf-talm-for-cluster-upgrades.adoc#talo-precache-feature-image-filter_cnf-topology-aware-lifecycle-manager[Using the container image pre-cache filter].
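The following is a minimal sketch of a `ClusterGroupUpgrade` CR with pre-caching enabled; the cluster and policy names are illustrative:

.Example `ClusterGroupUpgrade` excerpt
[source,yaml]
----
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: du-upgrade                # illustrative name
  namespace: ztp-group-du
spec:
  clusters:
  - spoke1                        # illustrative managed cluster
  enable: false                   # keep the update itself disabled while images pre-cache
  preCaching: true                # run the pre-cache job, which includes the new disk space check
  managedPolicies:
  - du-upgrade-platform-upgrade   # illustrative policy name
  remediationStrategy:
    maxConcurrency: 1
----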
[id="ocp-4-13-ran-http-transport"]
==== HTTP transport replaces AMQP for PTP and bare-metal events (Technology Preview)
HTTP is now the default transport in the PTP and bare-metal events infrastructure.
AMQ Interconnect is end of life (EOL) from 30 June 2024.
When you use HTTP transport for PTP and bare-metal events, you must persist the events subscription in the cluster using a `PersistentVolume` resource.
For more information, see xref:../networking/using-ptp.adoc#cnf-about-ptp-fast-event-notifications-framework_using-ptp[About the PTP fast event notifications framework].
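The following is a minimal sketch of a `PtpOperatorConfig` CR that enables PTP fast events over HTTP transport, assuming the publisher service address format with `NODE_NAME` as a placeholder:

.Example `PtpOperatorConfig` excerpt
[source,yaml]
----
apiVersion: ptp.openshift.io/v1
kind: PtpOperatorConfig
metadata:
  name: default
  namespace: openshift-ptp
spec:
  daemonNodeSelector:
    node-role.kubernetes.io/worker: ""
  ptpEventConfig:
    enableEventPublisher: true    # turn on the fast event notifications framework
    transportHost: "http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043"
----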
[id="ocp-4-13-ran-westport-channel-grandmaster"]
==== Support for Intel E810 Westport Channel NIC as PTP grandmaster clock (Technology Preview)
You can now configure the Intel E810 Westport Channel NIC as a PTP grandmaster clock by using the PTP Operator.
PTP grandmaster clocks use `ts2phc` (time stamp 2 physical clock) for system clock and network time synchronization.
For more information, see xref:../networking/using-ptp.adoc#configuring-linuxptp-services-as-grandmaster-clock_using-ptp[Configuring linuxptp services as a grandmaster clock].
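The following is an abbreviated, illustrative sketch of a grandmaster `PtpConfig` profile; the interface name and `ts2phc` settings are placeholders, and a production profile needs additional plugin and `phc2sys` configuration:

.Example `PtpConfig` excerpt
[source,yaml]
----
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: grandmaster
  namespace: openshift-ptp
spec:
  profile:
  - name: grandmaster
    interface: ens2f0             # illustrative E810 Westport Channel port
    ts2phcOpts: " "
    ts2phcConf: |
      [nmea]
      ts2phc.master 1             # treat the GNSS NMEA source as the time master
      [ens2f0]
      ts2phc.extts_polarity rising
    ptp4lOpts: "-2 --summary_interval -4"
  recommend:
  - profile: grandmaster
    priority: 4
    match:
    - nodeLabel: node-role.kubernetes.io/master
----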
[id="ocp-413-ran-ztp-crun-container-runtime"]
==== Configuring crun as the default container runtime for managed clusters in {ztp}
A `ContainerRuntimeConfig` CR that configures crun as the default container runtime has been added to the GitOps ZTP `ztp-site-generate` container.
For optimal performance in clusters that you install with {ztp}, enable crun for control plane and worker nodes in {sno}, {3no}, and standard clusters by applying the CR alongside the other Day 0 installation manifest CRs.
For more information, see xref:../scalability_and_performance/ztp_far_edge/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-sno-du-configuring-crun-container-runtime_sno-configure-for-vdu[Configuring crun as the default container runtime].
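The following sketch shows the general shape of such a `ContainerRuntimeConfig` CR for the worker pool; the exact CR shipped in the `ztp-site-generate` container may differ:

.Example `ContainerRuntimeConfig`
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: enable-crun-worker
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""   # target the worker MCP
  containerRuntimeConfig:
    defaultRuntime: crun          # use crun instead of the default runc
----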
[id="ocp-4-13-etcd-overview"]
==== Documentation enhancement: Overview of etcd is now available
An overview of etcd, including the benefits it provides and how it works, is now available in the {product-title} documentation. As the primary data store for Kubernetes, etcd provides a reliable approach to cluster configuration and management on {product-title} through the etcd Operator. For more information, see xref:../architecture/control-plane.adoc#etcd-overview_control-plane[Overview of etcd].
[id="ocp-4-13-insights-operator"]
=== Insights Operator
// https://issues.redhat.com/browse/OCPBUGS-6832
@@ -1888,7 +1915,7 @@ In the following tables, features are marked with the following statuses:
|PTP dual NIC hardware configured as boundary clock
|Not Available
|Technology Preview
|General Availability
|PTP events with boundary clock
|Technology Preview
@@ -2210,6 +2237,7 @@ In the following tables, features are marked with the following statuses:
|====
[discrete]
[id="ocp-413-scalability-tech-preview"]
=== Scalability and performance Technology Preview features
.Scalability and performance Technology Preview tracker
@@ -2232,10 +2260,10 @@ In the following tables, features are marked with the following statuses:
|Not Available
|Technology Preview
|{sno-caps} cluster expansion with worker nodes
|Not Available
|Technology Preview
|General Availability
|{cgu-operator-first}
|Technology Preview
@@ -2252,6 +2280,21 @@ In the following tables, features are marked with the following statuses:
|Technology Preview
|General Availability
|HTTP transport replaces AMQP for PTP and bare-metal events
|Not Available
|Not Available
|Technology Preview
|Intel E810 Westport Channel NIC as PTP grandmaster clock
|Not Available
|Not Available
|Technology Preview
|Workload partitioning for three-node clusters and standard clusters
|Not Available
|Not Available
|Technology Preview
|====
[discrete]
@@ -2527,6 +2570,48 @@ To work around this issue, before you update to {product-title} {product-version
* There is a disk discovery delay when attaching storage to workloads. (link:https://issues.redhat.com/browse/OCPBUGS-11149[*OCPBUGS-11149*])
[id="ocp-4-13-ran-known-issues"]
* If you specify an invalid subscription channel in the subscription policy that you use to perform a cluster upgrade, the {cgu-operator-first} indicates that the upgrade is successful immediately after {cgu-operator} enforces the policy because the `Subscription` resource remains in the `AtLatestKnown` state.
(link:https://issues.redhat.com/browse/OCPBUGS-9239[*OCPBUGS-9239*])
* After a system crash, `kdump` fails to generate the `vmcore` crash dump file on HPE Edgeline e920t and HPE ProLiant DL110 Gen10 servers with Intel E810 NIC and ice driver installed.
(link:https://issues.redhat.com/browse/RHELPLAN-138236[*RHELPLAN-138236*])
* In {ztp}, when you provision a managed cluster that contains more than a single node by using a `SiteConfig` CR, disk partitioning fails when one or more nodes have a `diskPartition` resource configured in the `SiteConfig` CR.
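+
For illustration, the issue can be triggered by a `diskPartition` stanza like the following hypothetical `SiteConfig` excerpt; the host name, device path, and partition values are placeholders:
+
.Example `SiteConfig` excerpt
[source,yaml]
----
nodes:
- hostName: node1.example.com     # hypothetical host
  diskPartition:
  - device: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:0:0:0   # placeholder stable device path
    partitions:
    - mount_point: /var/imageregistry
      size: 102500                # partition size in MiB (illustrative)
      start: 344844               # start offset in MiB (illustrative)
----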
(link:https://issues.redhat.com/browse/OCPBUGS-9272[*OCPBUGS-9272*])
* In clusters configured with PTP boundary clocks (T-BC) and deployed DU applications, messages are intermittently not sent from the follower interface of the T-BC on the vDU host for periods of up to 40 seconds.
The rate of errors in the logs can vary.
An example error log is below:
+
.Example output
[source,terminal]
----
2023-01-15T19:26:33.017221334+00:00 stdout F phc2sys[359186.957]: [ptp4l.0.config] nothing to synchronize
----
(link:https://issues.redhat.com/browse/RHELPLAN-145492[*RHELPLAN-145492*])
* When you install a {sno} cluster using {ztp} and configure PTP and bare-metal events with HTTP transport, the `linuxptp-daemon` daemon pod intermittently fails to deploy.
The required `PersistentVolumeClaim` (`PVC`) resource is created but is not mounted in the pod.
The following volume mount error is reported:
+
.Example output
[source,terminal]
----
mount: /var/lib/kubelet/plugins/kubernetes.io/local-volume/mounts/local-pv-bc42d358: mount(2) system call failed: Structure needs cleaning.
----
To work around the issue, delete the `cloud-event-proxy-store-storage-class-http-events` PVC and redeploy the PTP Operator.
(link:https://issues.redhat.com/browse/OCPBUGS-12358[*OCPBUGS-12358*])
* RFC2544 performance tests show that the `Max delay` value for a packet to traverse the network is over the minimum threshold. This regression is found in {product-title} 4.13 clusters running the Telco RAN DU profile.
(link:https://issues.redhat.com/browse/OCPBUGS-13224[*OCPBUGS-13224*])
* Performance tests run on a {sno} cluster with {product-title} 4.13 installed show an `oslat` maximum latency result greater than 20 microseconds.
(link:https://issues.redhat.com/browse/RHELPLAN-155443[*RHELPLAN-155443*])
* Performance tests run on a {sno} cluster with {product-title} 4.13 installed show a `cyclictest` maximum latency result greater than 20 microseconds.
(link:https://issues.redhat.com/browse/RHELPLAN-155460[*RHELPLAN-155460*])
[id="ocp-4-13-asynchronous-errata-updates"]
== Asynchronous errata updates