// Module included in the following assemblies:
//
// * applications/deployments/deployment-strategies.adoc

[id="deployments-rolling-strategy_{context}"]
= Rolling strategy

A rolling deployment slowly replaces instances of the previous version of an
application with instances of the new version of the application. The Rolling
strategy is the default deployment strategy used if no strategy is specified on
a `DeploymentConfig`.

A rolling deployment typically waits for new pods to become `ready` via a
readiness check before scaling down the old components. If a significant issue
occurs, the rolling deployment can be aborted.

*When to use a Rolling deployment:*

- When you want to take no downtime during an application update.
- When your application supports having old code and new code running at the same time.

A Rolling deployment means you have both old and new versions of your code
running at the same time. This typically requires that your application handle
N-1 compatibility.

.Example Rolling strategy definition
[source,yaml]
----
strategy:
  type: Rolling
  rollingParams:
    updatePeriodSeconds: 1 <1>
    intervalSeconds: 1 <2>
    timeoutSeconds: 120 <3>
    maxSurge: "20%" <4>
    maxUnavailable: "10%" <5>
    pre: {} <6>
    post: {}
----
<1> The time to wait between individual Pod updates. If unspecified, this value defaults to `1`.
<2> The time to wait between polling the deployment status after update. If unspecified, this value defaults to `1`.
<3> The time to wait for a scaling event before giving up. Optional; the default is `600`. Here, _giving up_ means
automatically rolling back to the previous complete deployment.
<4> `maxSurge` is optional and defaults to `25%` if not specified. See the information below the following procedure.
<5> `maxUnavailable` is optional and defaults to `25%` if not specified. See the information below the following procedure.
<6> `pre` and `post` are both lifecycle hooks.
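
The `pre` and `post` fields accept the same pod-based lifecycle hook definition used by the other deployment strategies. The following is a minimal sketch only; the `failurePolicy` value, container name, and command are illustrative placeholders rather than part of this example:

[source,yaml]
----
strategy:
  type: Rolling
  rollingParams:
    pre:
      failurePolicy: Abort          # What to do if the hook fails: Abort, Retry, or Ignore
      execNewPod:
        containerName: example-app  # Placeholder; must name a container from the Pod template
        command: [ "/bin/true" ]    # Placeholder command run in a new pod before the rollout
----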

The Rolling strategy:

. Executes any `pre` lifecycle hook.
. Scales up the new ReplicationController based on the surge count.
. Scales down the old ReplicationController based on the max unavailable count.
. Repeats this scaling until the new ReplicationController has reached the desired
replica count and the old ReplicationController has been scaled to zero.
. Executes any `post` lifecycle hook.

[IMPORTANT]
====
When scaling down, the Rolling strategy waits for pods to become ready so it can
decide whether further scaling would affect availability. If scaled-up pods
never become ready, the deployment process eventually times out and results in a
deployment failure.
====

The `maxUnavailable` parameter is the maximum number of pods that can be
unavailable during the update. The `maxSurge` parameter is the maximum number
of pods that can be scheduled above the original number of pods. Both parameters
can be set to either a percentage (for example, `10%`) or an absolute value (for
example, `2`). The default value for both is `25%`.
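
For illustration only (the replica count here is an assumption, not part of this module): with 10 replicas, `maxSurge: "20%"` allows up to 2 extra pods to be scheduled during the rollout (12 pods in total), and `maxUnavailable: "10%"` allows at most 1 of those 10 pods to be unavailable at any point.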

These parameters allow the deployment to be tuned for availability and speed. For
example:

- `maxUnavailable=0` and `maxSurge=20%` ensures full capacity is maintained
during the update and rapid scale up.
- `maxUnavailable=10%` and `maxSurge=0` performs an update using no extra
capacity (an in-place update).
- `maxUnavailable=10%` and `maxSurge=10%` scales up and down quickly with
some potential for capacity loss.

Generally, if you want fast rollouts, use `maxSurge`. If you have to take into
account resource quota and can accept partial unavailability, use
`maxUnavailable`.
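
For example, the first combination in the list above could be expressed in `rollingParams` as in the following sketch; only the two fields are shown, with all other parameters left at their defaults:

[source,yaml]
----
strategy:
  type: Rolling
  rollingParams:
    maxSurge: "20%"       # Scale up quickly by allowing 20% extra pods
    maxUnavailable: "0%"  # Never drop below the original replica count
----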