mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

Merge pull request #88561 from brendan-daly-red-hat/OSDOCS-11442_RN

OSDOCS-11442_RN#adding Nutanix subnets
This commit is contained in:
Jeana Routh
2025-02-14 10:21:32 -05:00
committed by GitHub


@@ -511,6 +511,13 @@ With this release, you can install a cluster on Nutanix by using the named, prel
For more information, see xref:../installing/installing_nutanix/installation-config-parameters-nutanix.adoc#installation-configuration-parameters-additional-nutanix_installation-config-parameters-nutanix[Additional Nutanix configuration parameters].
[id="ocp-4-18-installation-and-update-nutanix-multiple-nics_{context}"]
==== Installing a cluster on Nutanix with up to 32 subnets
With this release, Nutanix supports more than one subnet for the Prism Element where you deploy an {product-title} cluster. A maximum of 32 subnets for each Prism Element is supported.
For more information, see xref:../installing/installing_nutanix/installing-nutanix-installer-provisioned.adoc#installation-configuring-nutanix-failure-domains_installing-nutanix-installer-provisioned[Configuring failure domains] and xref:../installing/installing_nutanix/installation-config-parameters-nutanix.adoc#installation-configuration-parameters-additional-nutanix_installation-config-parameters-nutanix[Additional Nutanix configuration parameters].
For an existing Nutanix cluster, you can add multiple subnets by using machine sets. For more information, see xref:../installing/installing_nutanix/nutanix-failure-domains.adoc#post-installation-configuring-nutanix-failure-domains_nutanix-failure-domains[Adding failure domains to the Infrastructure CR].
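As an illustration, multiple subnets can be listed for a failure domain in the `install-config.yaml` file. The following sketch is an assumption based on the Nutanix configuration parameters linked above; verify the exact field names and values against that documentation:

[source,yaml]
----
platform:
  nutanix:
    failureDomains:
    - name: failure-domain-1
      prismElement:
        name: pe1
        uuid: <prism_element_uuid>
      subnetUUIDs: # up to 32 subnets for each Prism Element
      - <subnet_uuid_1>
      - <subnet_uuid_2>
----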
[id="ocp-release-notes-olm_{context}"]
=== Operator lifecycle
@@ -546,7 +553,7 @@ The image config nodes custom resource, which allows you to monitor the progress
* If each node was drained
* If each node was rebooted
* If a node had a CRI-O reload
* If a node had the operating system and node files updated
[id="ocp-release-notes-machine-config-operator-ocl_{context}"]
==== On-cluster layering changes (Technology Preview)
@@ -556,7 +563,7 @@ There are several important changes to the on-cluster layering feature:
* You can now install extensions onto an on-cluster customer layered image by using a `MachineConfig` object.
* Updating the Containerfile in a `MachineOSConfig` object now triggers a build to be performed.
* You can now revert an on-cluster custom layered image back to the base image by removing a label from the `MachineOSConfig` object.
* The `must-gather` for the Machine Config Operator now includes data on the `MachineOSConfig` and `MachineOSBuild` objects.
For more information about on-cluster layering, see xref:../machine_configuration/mco-coreos-layering.html#coreos-layering-configuring-on_mco-coreos-layering[Using on-cluster layering to apply a custom layered image].
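A `MachineOSConfig` object with an embedded Containerfile might look similar to the following sketch. The field names reflect the Technology Preview `v1alpha1` schema and are illustrative assumptions; confirm them against the on-cluster layering documentation:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1alpha1
kind: MachineOSConfig
metadata:
  name: layered-worker
spec:
  machineConfigPool:
    name: worker # pool that receives the custom layered image
  buildInputs:
    imageBuilder:
      imageBuilderType: PodImageBuilder
    containerFile:
    - containerfileArch: noarch
      content: |-
        # Editing this Containerfile triggers a new build
        FROM configs AS final
        RUN dnf install -y vim && dnf clean all
----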
@@ -566,8 +573,8 @@ For more information about on-cluster layering, see xref:../machine_configuratio
[id="ocp-4-18-capi-tp-azure_{context}"]
==== Managing machines with the Cluster API for {azure-full} (Technology Preview)
This release introduces the ability to manage machines by using the upstream Cluster API, integrated into {product-title}, as a Technology Preview for {azure-full} clusters.
This capability is in addition to, or an alternative to, managing machines with the Machine API.
For more information, see xref:../machine_management/cluster_api_machine_management/cluster-api-about.adoc#cluster-api-about[About the Cluster API].
[id="ocp-release-notes-monitoring_{context}"]
@@ -640,7 +647,7 @@ When you combine RDMA with SR-IOV, you provide a mechanism to expose hardware co
==== crun is now the default container runtime
crun is now the default container runtime for new containers created in {product-title}. The runC runtime is still supported and you can change the default runtime to runC, if needed. For more information on crun, see xref:../nodes/containers/nodes-containers-using.adoc#nodes-containers-runtimes[About the container engine and container runtime]. For information on changing the default to runC, see xref:../machine_configuration/machine-configs-custom.adoc#create-a-containerruntimeconfig_machine-configs-custom[Creating a ContainerRuntimeConfig CR to edit CRI-O parameters].
After updating from {product-title} 4.17.z to {product-title} {product-version}, the container runtime configured as the default is respected in {product-version}.
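For example, the default runtime can be switched back to runC with a `ContainerRuntimeConfig` object similar to the following sketch; the pool selector label shown is an assumption for a worker pool, so verify it against the linked procedure:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: set-runc-default
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  containerRuntimeConfig:
    defaultRuntime: runc # switch the default runtime from crun back to runC
----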
[id="ocp-release-notes-nodes-crun-sigstore_{context}"]
==== sigstore support (Technology Preview)
@@ -1560,7 +1567,7 @@ In the following tables, features are marked with the following statuses:
|Technology Preview
|Technology Preview
|sigstore support
|Not Available
|Technology Preview
|Technology Preview