diff --git a/modules/customize-certificates-manually-rotate-service-ca.adoc b/modules/customize-certificates-manually-rotate-service-ca.adoc
index f86baf82c2..feec524e20 100644
--- a/modules/customize-certificates-manually-rotate-service-ca.adoc
+++ b/modules/customize-certificates-manually-rotate-service-ca.adoc
@@ -5,8 +5,14 @@
 [id="manually-rotate-service-ca_{context}"]
 = Manually rotate the service CA certificate
 
-The service CA is valid for one year after {product-title} is
-installed. Follow these steps to manually refresh the service CA before the expiration date.
+The service CA is valid for 26 months and is automatically refreshed when there is less than six months of validity left.
+
+If necessary, you can manually refresh the service CA by using the following procedure.
+
+[WARNING]
+====
+A manually rotated service CA does not maintain trust with the previous service CA. You might experience a temporary service disruption until the Pods in the cluster are restarted, which ensures that Pods are using service serving certificates issued by the new service CA.
+====
 
 .Prerequisites
diff --git a/modules/customize-certificates-understanding-service-serving.adoc b/modules/customize-certificates-understanding-service-serving.adoc
index d1eedae4de..b1f2af2242 100644
--- a/modules/customize-certificates-understanding-service-serving.adoc
+++ b/modules/customize-certificates-understanding-service-serving.adoc
@@ -15,5 +15,18 @@ algorithm to generate service certificates. The generated
 certificate and key are in PEM format, stored in `tls.crt` and
 `tls.key` respectively, within a created secret.
 The certificate and key are automatically replaced when they get close to
-expiration. The service CA certificate, which signs the service
-certificates, is only valid for one year after {product-title} is installed.
+expiration.
+
+The service CA certificate, which issues the service certificates, is valid for 26 months and is automatically rotated when there is less than six months of validity left. After rotation, the previous service CA configuration is still trusted until its expiration. This allows a grace period for all affected services to refresh their key material before the expiration. If you do not upgrade your cluster during this grace period, which restarts services and refreshes their key material, you might need to manually restart services to avoid failures after the previous service CA expires.
+
+[NOTE]
+====
+You can use the following command to manually restart all Pods in the cluster. Be aware that running this command causes a service interruption, because it deletes every running Pod in every namespace. These Pods will automatically restart after they are deleted.
+
+----
+$ for I in $(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \
+ do oc delete pods --all -n $I; \
+ sleep 1; \
+ done
+----
+====
diff --git a/modules/update-upgrading-cli.adoc b/modules/update-upgrading-cli.adoc
index 71e1c008c1..97ee086717 100644
--- a/modules/update-upgrading-cli.adoc
+++ b/modules/update-upgrading-cli.adoc
@@ -145,3 +145,19 @@ $ oc get clusterversion
 NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
 version   4.1.2     True        False         2m      Cluster version is 4.1.2
 ----
+
+. If you are upgrading to this release from {product-title} 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. You can do this by using the following command:
++
+----
+$ for I in $(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \
+ do oc delete pods --all -n $I; \
+ sleep 1; \
+ done
+----
++
+[NOTE]
+====
+Restarting all Pods is required because the service CA is automatically rotated as of {product-title} 4.3.5.
The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires.
+
+After this one-time manual restart, subsequent upgrades and rotations ensure that services are restarted before the service CA expires, without requiring manual intervention.
+====
diff --git a/modules/update-upgrading-web.adoc b/modules/update-upgrading-web.adoc
index 9cdf929d8d..902f8ac803 100644
--- a/modules/update-upgrading-web.adoc
+++ b/modules/update-upgrading-web.adoc
@@ -63,6 +63,38 @@ and click *Update*.
 The *UPDATE STATUS* changes to `Updating`, and you can review the
 progress of the Operator upgrades on the *Cluster Operators* tab.
 
+. If you are upgrading to this release from {product-title}
+ifdef::within[]
+4.3.3 or earlier,
+endif::within[]
+ifdef::between[]
+// 4.2.16 or earlier,
+4.2,
+endif::between[]
+you must restart all Pods after the upgrade is complete. You can do this by using the following command, which requires the OpenShift CLI (`oc`):
++
+----
+$ for I in $(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \
+ do oc delete pods --all -n $I; \
+ sleep 1; \
+ done
+----
++
+[NOTE]
+====
+Restarting all Pods is required because the service CA is automatically rotated as of {product-title}
+ifdef::within[]
+4.3.5.
+endif::within[]
+ifdef::between[]
+// 4.2.18 and 4.3.5.
+4.3.5.
+endif::between[]
+The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires.
+
+After this one-time manual restart, subsequent upgrades and rotations ensure that services are restarted before the service CA expires, without requiring manual intervention.
+====
+
 ifdef::between[]
 . After the update completes and the Cluster Version Operator refreshes the available updates, check if more updates are available in your current channel.
+
diff --git a/updating/updating-cluster-between-minor.adoc b/updating/updating-cluster-between-minor.adoc
index 3d03fe8051..952d37f4e0 100644
--- a/updating/updating-cluster-between-minor.adoc
+++ b/updating/updating-cluster-between-minor.adoc
@@ -18,6 +18,21 @@ Because of the difficulty of changing update channels by using `oc`, use the web
 See xref:../authentication/using-rbac.adoc[Using RBAC to define and apply permissions].
 * Have a recent xref:../backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your upgrade fails and you must xref:../backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore your cluster to a previous state].
+// The following note will need to stay until (and be updated for) the 4.4 docs
+
+// Assuming next z-streams:
+// from {product-title} 4.1.31 or earlier; as of {product-title} 4.1.34
+// from {product-title} 4.2.16 or earlier; as of {product-title} 4.2.18 and 4.3.5
+
+[IMPORTANT]
+====
+If you are upgrading to this release from {product-title} 4.2, you must restart all Pods after the upgrade is complete.
+
+This is because the service CA is automatically rotated as of {product-title} 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires.
+
+After this one-time manual restart, subsequent upgrades and rotations ensure that services are restarted before the service CA expires, without requiring manual intervention.
+====
+
 include::modules/update-service-overview.adoc[leveloffset=+1]
 
 include::modules/understanding-upgrade-channels.adoc[leveloffset=+1]
diff --git a/updating/updating-cluster-cli.adoc b/updating/updating-cluster-cli.adoc
index 8a320bc024..de603a9abc 100644
--- a/updating/updating-cluster-cli.adoc
+++ b/updating/updating-cluster-cli.adoc
@@ -13,6 +13,21 @@ You can update, or upgrade, an {product-title} cluster within a minor version by
 See xref:../authentication/using-rbac.adoc[Using RBAC to define and apply permissions].
 * Have a recent xref:../backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your upgrade fails and you must xref:../backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore your cluster to a previous state].
+// The following note should not be needed come 4.4 docs
+
+// Assuming next z-streams:
+// from {product-title} 4.1.31 or earlier; as of {product-title} 4.1.34
+// from {product-title} 4.2.16 or earlier; as of {product-title} 4.2.18
+
+[IMPORTANT]
+====
+If you are upgrading to this release from {product-title} 4.3.3 or earlier, you must restart all Pods after the upgrade is complete.
+
+This is because the service CA is automatically rotated as of {product-title} 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires.
+
+After this one-time manual restart, subsequent upgrades and rotations ensure that services are restarted before the service CA expires, without requiring manual intervention.
+====
+
 include::modules/update-service-overview.adoc[leveloffset=+1]
 
 include::modules/understanding-upgrade-channels.adoc[leveloffset=+1]
diff --git a/updating/updating-cluster-rhel-compute.adoc b/updating/updating-cluster-rhel-compute.adoc
index 5d4bcf135c..7e044230ef 100644
--- a/updating/updating-cluster-rhel-compute.adoc
+++ b/updating/updating-cluster-rhel-compute.adoc
@@ -15,6 +15,21 @@ those machines.
 See xref:../authentication/using-rbac.adoc[Using RBAC to define and apply permissions].
 * Have a recent xref:../backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your upgrade fails and you must xref:../backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore your cluster to a previous state].
+// The following note will need to stay until (and be updated for) the 4.4 docs
+
+// Assuming next z-streams:
+// from {product-title} 4.1.31 or earlier; as of {product-title} 4.1.34
+// from {product-title} 4.2.16 or earlier; as of {product-title} 4.2.18 and 4.3.5
+
+[IMPORTANT]
+====
+If you are upgrading to this release from {product-title} 4.2, you must restart all Pods after the upgrade is complete.
+
+This is because the service CA is automatically rotated as of {product-title} 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires.
+
+After this one-time manual restart, subsequent upgrades and rotations ensure that services are restarted before the service CA expires, without requiring manual intervention.
+====
+
 include::modules/update-service-overview.adoc[leveloffset=+1]
 
 include::modules/understanding-upgrade-channels.adoc[leveloffset=+1]
diff --git a/updating/updating-cluster.adoc b/updating/updating-cluster.adoc
index db81a74a19..44e7249d72 100644
--- a/updating/updating-cluster.adoc
+++ b/updating/updating-cluster.adoc
@@ -13,6 +13,21 @@ You can update, or upgrade, an {product-title} cluster by using the web console.
 See xref:../authentication/using-rbac.adoc[Using RBAC to define and apply permissions].
 * Have a recent xref:../backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your upgrade fails and you must xref:../backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore your cluster to a previous state].
+// The following note should not be needed come 4.4 docs
+
+// Assuming next z-streams:
+// from {product-title} 4.1.31 or earlier; as of {product-title} 4.1.34
+// from {product-title} 4.2.16 or earlier; as of {product-title} 4.2.18
+
+[IMPORTANT]
+====
+If you are upgrading to this release from {product-title} 4.3.3 or earlier, you must restart all Pods after the upgrade is complete.
+
+This is because the service CA is automatically rotated as of {product-title} 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires.
+
+After this one-time manual restart, subsequent upgrades and rotations ensure that services are restarted before the service CA expires, without requiring manual intervention.
+====
+
 include::modules/update-service-overview.adoc[leveloffset=+1]
 
 include::modules/understanding-upgrade-channels.adoc[leveloffset=+1]
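
// Reviewer note: the Pod-restart command that recurs throughout this patch is a plain shell loop over namespace names. The following is a minimal runnable sketch of the same loop shape, with a hard-coded namespace list standing in for `oc get ns` output and `echo` standing in for `oc delete pods`, since no cluster is assumed here; the namespace names are hypothetical.

```shell
# Stand-in for the output of
# `oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'`:
# one namespace name per line (hypothetical names).
namespaces='default
openshift-service-ca
my-app'

# Same loop shape as the documented command; `oc delete pods` is replaced
# by echo so this sketch runs without a cluster.
for I in $namespaces; do
  echo "would run: oc delete pods --all -n $I"
done
```

// In the real command, `oc delete pods --all -n $I` deletes every Pod in namespace `$I`; the controllers that own those Pods (Deployments, DaemonSets, and so on) recreate them, and the recreated Pods pick up service serving certificates issued by the new service CA.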