Mirror of https://github.com/openshift/openshift-docs.git

Commit e03410550f (parent 1f807d58ba), committed by openshift-cherrypick-robot

Updated for product names and versions
@@ -45,8 +45,8 @@ Manage various aspects of the {product-title} release process, such as viewing i
 .Example: Generate a changelog between two releases and save to `changelog.md`
 ----
 $ oc adm release info --changelog=/tmp/git \
-    quay.io/openshift-release-dev/ocp-release:4.1.0-rc.7 \
-    quay.io/openshift-release-dev/ocp-release:4.1.0 \
+    quay.io/openshift-release-dev/ocp-release:4.2.0-rc.7 \
+    quay.io/openshift-release-dev/ocp-release:4.2.0 \
     > changelog.md
 ----
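The generated file is plain Markdown, as the `.md` name suggests; a quick sanity check of the output (illustrative, not part of the commit):

----
$ head changelog.md
----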
@@ -5,8 +5,8 @@
 [id="cluster-logging-collector-collector_{context}"]
 = Selecting the logging collector
 
 {product-title} cluster logging uses Fluentd by default.
 Log collectors are deployed as a DaemonSet to each node in the cluster.
 
 Currently, Fluentd is the only supported log collector, so you cannot change the log collector type.
@@ -34,7 +34,7 @@ endif::[]
 
 .Procedure
 
 . Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
 +
 ----
 $ oc edit ClusterLogging instance
@@ -47,12 +47,12 @@ kind: "ClusterLogging"
 metadata:
   name: "instance"
 nodeSpec:
-  image: registry.redhat.io/openshift4/ose-logging-fluentd:v4.1
+  image: registry.redhat.io/openshift4/ose-logging-fluentd:v4.2
   resources: {}
 
 ....
 
 collection:
   logs:
     type: "fluentd" <1>
 ----
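After saving the CR, the collector pods can be verified; a sketch, where the `fluentd` DaemonSet name is an assumption based on the default cluster logging deployment:

----
$ oc get daemonset fluentd -n openshift-logging
----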
@@ -26,7 +26,7 @@ OAUTH_PROXY_IMAGE=registry.redhat.io/openshift4/ose-oauth-proxy:v4.2 <5>
 <2> *FLUENTD_IMAGE* deploys Fluentd.
 <3> *KIBANA_IMAGE* deploys Kibana.
 <4> *CURATOR_IMAGE* deploys Curator.
-<5> *OAUTH_PROXY_IMAGE* defines OAUTH for OpenShift Container Platform.
+<5> *OAUTH_PROXY_IMAGE* defines OAUTH for {product-title}.
 
 ////
 RSYSLOG_IMAGE=registry.redhat.io/openshift4/ose-logging-rsyslog:v4.2 <6>
@@ -29,7 +29,7 @@ Defaulting container name to elasticsearch.
 Use 'oc describe pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw -n openshift-logging' to see all of the containers in this pod.
 Wed Apr 10 05:42:12 UTC 2019
 health status index                                            uuid                     pri rep docs.count docs.deleted store.size pri.store.size
 red    open   .kibana.647a750f1787408bf50088234ec0edd5a6a9b2ac N7iCbRjSSc2bGhn8Cpc7Jg     2   1
 green  open   .operations.2019.04.10                           GTewEJEzQjaus9QjvBBnGg     3   1    2176114            0       3929           1956
 green  open   .operations.2019.04.11                           ausZHoKxTNOoBvv9RlXfrw     3   1    1494624            0       2947           1475
 green  open   .kibana                                          9Fltn1D0QHSnFMXpphZ--Q     1   1          1            0          0              0
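A sketch of how an index listing like this is typically gathered, reusing the pod name from the output above; the `indices` helper script inside the Elasticsearch container is an assumption:

----
$ oc exec elasticsearch-cdm-1godmszn-1-6f8495-vp4lw -n openshift-logging -- indices
----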
@@ -86,10 +86,10 @@ Containers:
 
 Conditions:
   Type              Status
   Initialized       True
   Ready             True
   ContainersReady   True
   PodScheduled      True
 
 ....
@@ -121,7 +121,7 @@ The output includes the following status information:
 ....
 Containers:
   elasticsearch:
-    Image:       registry.redhat.io/openshift4/ose-logging-elasticsearch5:v4.1
+    Image:       registry.redhat.io/openshift4/ose-logging-elasticsearch5:v4.2
     Readiness:   exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3
 ....
@@ -162,12 +162,10 @@ The output includes the following status information:
 ....
 Containers:
   elasticsearch:
-    Image:       registry.redhat.io/openshift4/ose-logging-elasticsearch5:v4.1
+    Image:       registry.redhat.io/openshift4/ose-logging-elasticsearch5:v4.2
     Readiness:   exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3
 ....
 
 Events: <none>
 ----
@@ -5,7 +5,7 @@
 [id="cluster-logging-exported-fields-container_{context}"]
 = Container exported fields
 
-These are the Docker fields exported by the OpenShift Container Platform cluster logging available for searching from Elasticsearch and Kibana.
+These are the Docker fields exported by the {product-title} cluster logging available for searching from Elasticsearch and Kibana.
 Namespace for docker container-specific metadata. The docker.container_id is the Docker container ID.
@@ -30,7 +30,7 @@ spec:
   spec:
     containers:
     - name: fluentd
-      image: 'registry.redhat.io/openshift4/ose-logging-fluentd:v4.1'
+      image: 'registry.redhat.io/openshift4/ose-logging-fluentd:v4.2'
       env:
       - name: REMOTE_SYSLOG_HOST <1>
         value: host1
@@ -88,4 +88,3 @@ the same messages on port `5555`.
 This implementation is insecure, and should only be used in environments
 where you can guarantee no snooping on the connection.
 ====
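The same values can be applied without editing the DaemonSet YAML by hand; a sketch, where the DaemonSet name and the `REMOTE_SYSLOG_PORT` variable are assumptions based on the example above:

----
$ oc set env ds/fluentd REMOTE_SYSLOG_HOST=host1 REMOTE_SYSLOG_PORT=514 -n openshift-logging
----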
@@ -20,7 +20,7 @@ Follow this procedure to recover from a situation where your control plane certi
 ----
 # RELEASE_IMAGE=<release_image> <1>
 ----
-<1> An example value for `<release_image>` is `quay.io/openshift-release-dev/ocp-release:4.1.0`.
+<1> An example value for `<release_image>` is `quay.io/openshift-release-dev/ocp-release:4.2.0`.
 +
 ----
 # KAO_IMAGE=$( oc adm release info --registry-config='/var/lib/kubelet/config.json' "${RELEASE_IMAGE}" --image-for=cluster-kube-apiserver-operator )
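The `--image-for` pattern extends to other control plane operators; a sketch, where the `cluster-kube-controller-manager-operator` component name is an assumption:

----
# KCM_IMAGE=$( oc adm release info --registry-config='/var/lib/kubelet/config.json' "${RELEASE_IMAGE}" --image-for=cluster-kube-controller-manager-operator )
----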
@@ -11,7 +11,7 @@ To see the current version that your cluster is on, type:
 $ oc get clusterversion
 
 NAME      VERSION     AVAILABLE   PROGRESSING   SINCE   STATUS
-version   4.1.0-0.7   True        False         10h     Cluster version is 4.1.0-0.7
+version   4.2.0-0.0   True        False         10h     Cluster version is 4.2.0-0.0
 ----
 
 Each release version is represented by a set of images. To see basic release information and a list of those images, type:
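A sketch of that invocation (the release image tag is illustrative):

----
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.2.0
----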
@@ -67,4 +67,4 @@ openshift-samples True False
 operator-lifecycle-manager   True    False   False   10h
 ----
 
 While most of the Cluster Operators listed provide services to the {product-title} cluster, the machine-config Operator in particular is tasked with managing the {op-system} operating systems in the nodes.
@@ -11,10 +11,10 @@ The {product-title} Jenkins agent images are available on `quay.io` or
 Jenkins images are available through the Red Hat Registry:
 
 ----
-$ docker pull registry.redhat.io/openshift4/ose-jenkins:<v4.1.4>
-$ docker pull registry.redhat.io/openshift4/ose-jenkins-agent-nodejs:<v4.1.4>
-$ docker pull registry.redhat.io/openshift4/ose-jenkins-agent-maven:<v4.1.4>
-$ docker pull registry.redhat.io/openshift4/ose-jenkins-agent-base:<v4.1.4>
+$ docker pull registry.redhat.io/openshift4/ose-jenkins:<v4.2.0>
+$ docker pull registry.redhat.io/openshift4/ose-jenkins-agent-nodejs:<v4.2.0>
+$ docker pull registry.redhat.io/openshift4/ose-jenkins-agent-maven:<v4.2.0>
+$ docker pull registry.redhat.io/openshift4/ose-jenkins-agent-base:<v4.2.0>
 ----
 
 To use these images, you can either access them directly from `quay.io` or
@@ -11,5 +11,5 @@ Cluster Loader is included in the `origin-tests` container image.
 . To pull the `origin-tests` container image, run:
 +
 ----
-$ sudo podman pull quay.io/openshift/origin-tests:4.1
+$ sudo podman pull quay.io/openshift/origin-tests:4.2
 ----
@@ -53,7 +53,7 @@ feature
 
 In {product-title} version 3.11, you could not roll out a multi-zone
 architecture easily because the cluster did not manage machine provisioning.
-Beginning with 4.1 this process is easier. Each MachineSet is scoped to a
+Beginning with {product-title} version 4.1, this process is easier. Each MachineSet is scoped to a
 single zone, so the installation program sends out MachineSets across
 availability zones on your behalf. And then because your compute is dynamic, and
 in the face of a zone failure, you always have a zone for when you must
@@ -181,7 +181,7 @@ spec:
 +
 ----
 $ kubectl get -n openshift-monitoring deploy/prometheus-adapter -o jsonpath="{..image}"
-quay.io/openshift-release-dev/ocp-v4.1-art-dev@sha256:76db3c86554ad7f581ba33844d6a6ebc891236f7db64f2d290c3135ba81c264c
+quay.io/openshift-release-dev/ocp-v4.2-art-dev@sha256:76db3c86554ad7f581ba33844d6a6ebc891236f7db64f2d290c3135ba81c264c
 ----
 
 . Add configuration for deploying `prometheus-adapter`:
@@ -209,7 +209,7 @@ spec:
       serviceAccountName: custom-metrics-apiserver
       containers:
       - name: prometheus-adapter
-        image: openshift-release-dev/ocp-v4.1-art-dev <1>
+        image: openshift-release-dev/ocp-v4.2-art-dev <1>
        args:
         - --secure-port=6443
         - --tls-cert-file=/var/run/serving-cert/tls.crt
@@ -240,7 +240,7 @@ spec:
       - name: tmp-vol
         emptyDir: {}
 ----
-<1> `image: openshift-release-dev/ocp-v4.1-art-dev` specifies the Prometheus Adapter image found in the previous step.
+<1> `image: openshift-release-dev/ocp-v4.2-art-dev` specifies the Prometheus Adapter image found in the previous step.
 
 . Apply the configuration file to the cluster:
 +
@@ -249,4 +249,3 @@ $ oc apply -f deploy.yaml
 ----
 
 . Now the application's metrics are exposed and can be used to configure horizontal pod autoscaling.
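A sketch of a HorizontalPodAutoscaler consuming such a custom metric; every name here (Deployment, metric, target value) is illustrative rather than taken from the commit:

----
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: example-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests        # metric assumed to be exposed through prometheus-adapter
      targetAverageValue: 500m
----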
@@ -62,7 +62,7 @@ spec:
           readOnly: true
           mountPath: /var/run/secrets/kubernetes.io/serviceaccount
         terminationMessagePolicy: File
-        image: registry.redhat.io/openshift4/ose-ogging-eventrouter:v4.1 <6>
+        image: registry.redhat.io/openshift4/ose-ogging-eventrouter:v4.2 <6>
       serviceAccount: default <7>
       volumes: <8>
       - name: default-token-wbqsl
@@ -3,19 +3,19 @@
 // * nodes/nodes-scheduler-node-selector.adoc
 
 [id="nodes-scheduler-node-selectors-pod_{context}"]
 = Using node selectors to control pod placement
 
 You can use node selector labels on pods to control where the pod is scheduled.
 
 With node selectors, {product-title} schedules the pods on nodes that contain matching labels.
 
 You can add labels to a node or MachineConfig, but the labels will not persist if the node or machine goes down.
 Adding the label to the MachineSet ensures that new nodes or machines will have the label.
 
 To add node selectors to an existing pod, add a node selector to the controlling object for that pod, such as
 a ReplicaSet, DaemonSet, or StatefulSet. Any existing pods under that controlling object are recreated on a node
 with a matching label. If you are creating a new pod, you can add the node selector directly
 to the pod spec.
 
 [NOTE]
 ====
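A minimal sketch of the pattern described above, with all names illustrative: label a node with `oc label nodes <node_name> type=user-node`, then reference the label from the pod spec:

----
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  nodeSelector:
    type: user-node          # matches the node label applied above
  containers:
  - name: example
    image: registry.redhat.io/ubi8/ubi   # illustrative image
----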
@@ -85,7 +85,7 @@ metadata:
   labels:
     beta.kubernetes.io/os: linux
     failure-domain.beta.kubernetes.io/zone: us-east-1a
-    node.openshift.io/os_version: '4.1'
+    node.openshift.io/os_version: '4.2'
     node-role.kubernetes.io/worker: ''
     failure-domain.beta.kubernetes.io/region: us-east-1
     node.openshift.io/os_id: rhcos
@@ -159,7 +159,7 @@ spec:
     type: user-node
 ----
 
 [NOTE]
 ====
 If you are using node selectors and node affinity in the same pod configuration, note the following:
@@ -169,4 +169,3 @@ If you are using node selectors and node affinity in the same pod configuration,
 
 * If you specify multiple `matchExpressions` associated with `nodeSelectorTerms`, then the pod can be scheduled onto a node only if all `matchExpressions` are satisfied.
 ====
@@ -29,7 +29,7 @@ network-operator 1/1 1 1 56m
 $ oc get clusteroperator/network
 
 NAME      VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
-network   4.1.0     True        False         False      50m
+network   4.2.0     True        False         False      50m
 ----
 The following fields provide information about the status of the operator:
 `AVAILABLE`, `PROGRESSING`, and `DEGRADED`. The `AVAILABLE` field is `True` when
@@ -23,7 +23,7 @@ You cannot modify these parameters after installation.
 |`networking.networkType`
 |The network plug-in to deploy. The `OpenShiftSDN` plug-in is the only plug-in
 supported in {product-title} {product-version}. The `OVNKubernetes` plug-in is
-available as a Tech Preview in {product-title} 4.2.
+available as Technology Preview in {product-title} 4.2.
 |Either `OpenShiftSDN` or `OVNKubernetes`. The default value is `OpenShiftSDN`.
 
 |`networking.clusterNetwork.cidr`
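For context, a sketch of how these parameters sit in `install-config.yaml`; the CIDR and host prefix values are illustrative defaults, an assumption:

----
networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
----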
@@ -12,7 +12,7 @@ template builds and waits for them to complete:
 +
 ----
 $ sudo podman run -v ${LOCAL_KUBECONFIG}:/root/.kube/config -i \
-quay.io/openshift/origin-tests:4.1 /bin/bash -c 'export KUBECONFIG=/root/.kube/config && \
+quay.io/openshift/origin-tests:4.2 /bin/bash -c 'export KUBECONFIG=/root/.kube/config && \
 openshift-tests run-test "[Feature:Performance][Serial][Slow] Load cluster should load the \
 cluster [Suite:openshift]"'
 ----
@@ -22,7 +22,7 @@ setting the environment variable for `VIPERCONFIG`:
 +
 ----
 $ sudo podman run -v ${LOCAL_KUBECONFIG}:/root/.kube/config -i \
-quay.io/openshift/origin-tests:4.1 /bin/bash -c 'export KUBECONFIG=/root/.kube/config && \
+quay.io/openshift/origin-tests:4.2 /bin/bash -c 'export KUBECONFIG=/root/.kube/config && \
 export VIPERCONFIG=config/test && \
 openshift-tests run-test "[Feature:Performance][Serial][Slow] Load cluster should load the \
 cluster [Suite:openshift]"'
@@ -28,17 +28,17 @@ known as `oc`, that matches the version for your updated version.
 $ oc get clusterversion
 
 NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
-version   4.1.0     True        False         158m    Cluster version is 4.1.0
+version   4.2.0     True        False         158m    Cluster version is 4.2.0
 ----
 
 . Review the current update channel information and confirm that your channel
-is set to `stable-4.1`:
+is set to `stable-4.2`:
 +
 ----
 $ oc get clusterversion -o json|jq ".items[0].spec"
 
 {
-  "channel": "stable-4.1",
+  "channel": "stable-4.2",
   "clusterID": "990f7ab8-109b-4c95-8480-2bd1deec55ff",
   "upstream": "https://api.openshift.com/api/upgrades_info/v1/graph"
 }
@@ -46,7 +46,7 @@ $ oc get clusterversion -o json|jq ".items[0].spec"
 +
 [IMPORTANT]
 ====
-For production clusters, you must subscribe to the `stable-4.1` channel.
+For production clusters, you must subscribe to the `stable-4.2` channel.
 ====
 
 . View the available updates and note the version number of the update that
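That step is typically performed with the following command, shown here as a sketch; treating it as the step's intended command is an assumption:

----
$ oc adm upgrade
----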
@@ -84,12 +84,12 @@ previous command.
 $ oc get clusterversion -o json|jq ".items[0].spec"
 
 {
-  "channel": "stable-4.1",
+  "channel": "stable-4.2",
   "clusterID": "990f7ab8-109b-4c95-8480-2bd1deec55ff",
   "desiredUpdate": {
     "force": false,
     "image": "quay.io/openshift-release-dev/ocp-release@sha256:9c5f0df8b192a0d7b46cd5f6a4da2289c155fd5302dec7954f8f06c878160b8b",
-    "version": "4.1.2" <1>
+    "version": "4.2.1" <1>
   },
   "upstream": "https://api.openshift.com/api/upgrades_info/v1/graph"
 }
@@ -21,7 +21,7 @@ metadata:
   selfLink: /apis/config.openshift.io/v1/clusterversions/version
   uid: 82f9f2c4-4cae-11e9-90b7-06dc0f62ad38
 spec:
-  channel: stable-4.1 <1>
+  channel: stable-4.2 <1>
   overrides: "" <2>
   clusterID: 0b1cf91f-c3fb-4f9e-aa02-e0d70c71f6e6
   upstream: https://api.openshift.com/api/upgrades_info/v1/graph
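A sketch of switching the channel non-interactively; `oc patch` with a merge patch is standard `oc` usage, but its use here as the documented method is an assumption:

----
$ oc patch clusterversion version --type merge -p '{"spec":{"channel":"stable-4.2"}}'
----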
@@ -20,7 +20,7 @@ within {product-title}:
 For example, when:
 
 * <command> = `oc`
-* <version> = The client version. For example, `v4.1.0`. Requests made against the Kubernetes
+* <version> = The client version. For example, `v4.2.0`. Requests made against the Kubernetes
 API at `/api` receive the Kubernetes version, while requests made against the
 {product-title} API at `/oapi` receive the {product-title} version (as specified
 by `oc version`)
@@ -9,10 +9,10 @@ toc::[]
 
-The _Downward API_ is a mechanism that allows containers to consume information
-about API objects without coupling to OpenShift Container Platform.
-Such information includes the pod’s name, namespace, and resource values.
-Containers can consume information from the downward API using environment
+The _Downward API_ is a mechanism that allows containers to consume information
+about API objects without coupling to {product-title}.
+Such information includes the pod’s name, namespace, and resource values.
+Containers can consume information from the downward API using environment
 variables or a volume plug-in.
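A minimal sketch of the environment-variable form of the downward API (pod and container names illustrative):

----
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: example
    image: registry.redhat.io/ubi8/ubi   # illustrative image
    command: ["printenv", "MY_POD_NAME", "MY_POD_NAMESPACE"]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name       # injects the pod's own name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace  # injects the pod's namespace
----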
@@ -21,10 +21,10 @@ creates bindings (pod to node bindings) for the pods using the master API.
 // assemblies.
 
 Default pod scheduling::
-OpenShift Container Platform comes with a xref:../../nodes/scheduling/nodes-scheduler-default.adoc#nodes-scheduler-default[default scheduler] that serves the needs of most users. The default scheduler uses both inherent and customization tools to determine the best fit for a pod.
+{product-title} comes with a xref:../../nodes/scheduling/nodes-scheduler-default.adoc#nodes-scheduler-default[default scheduler] that serves the needs of most users. The default scheduler uses both inherent and customization tools to determine the best fit for a pod.
 
 Advanced pod scheduling::
-In situations where you might want more control over where new pods are placed, the OpenShift Container Platform advanced scheduling features allow you to configure a pod so that the pod is required or has a preference to run on a particular node or alongside a specific pod by.
+In situations where you might want more control over where new pods are placed, the {product-title} advanced scheduling features allow you to configure a pod so that the pod is required or has a preference to run on a particular node or alongside a specific pod by:
 
 * Using xref:../../nodes/scheduling/nodes-scheduler-pod-affinity.adoc#nodes-scheduler-pod-affinity[pod affinity and anti-affinity rules].
 * Controlling pod placement with xref:../../nodes/scheduling/nodes-scheduler-pod-affinity.adoc#nodes-scheduler-pod-affinity-about_nodes-scheduler-pod-affinity[pod affinity].
@@ -21,7 +21,7 @@ The {product-title} Jenkins images are available on `quay.io` or
 For example:
 
 ----
-$ docker pull registry.redhat.io/openshift4/ose-jenkins:<v4.1.4>
+$ docker pull registry.redhat.io/openshift4/ose-jenkins:<v4.2.0>
 ----
 
 To use these images, you can either access them directly from these registries
@@ -1,5 +1,5 @@
 [id="ocp-4-2-release-notes"]
-= {product-title} 4.2 release notes
+= {product-title} {product-version} release notes
 include::modules/common-attributes.adoc[]
 :context: release-notes
@@ -145,11 +145,11 @@ Bare Metal, and VMware vSphere.
 ==== Scoped Operator installations
 
 Previously, only users carrying `cluster-admin` roles were allowed to install
-Operators. In {product-title} 4.2, the `cluster-admin` can select namespaces in
-which namespace administrators can install Operators self-sufficiently. The
-`cluster-admin` defines the ServiceAccount in this namespace; all installed
-Operators in this namespace get equal or lower permissions of this
-ServiceAccount.
+Operators. In {product-title} {product-version}, the `cluster-admin` can select
+namespaces in which namespace administrators can install Operators
+self-sufficiently. The `cluster-admin` defines the ServiceAccount in this
+namespace; all installed Operators in this namespace get equal or lower
+permissions of this ServiceAccount.
 
 See
 xref:../operators/olm-creating-policy.adoc#olm-creating-policy[Creating policy for Operator installations and upgrades]
@@ -219,14 +219,14 @@ Persistent volumes using the Local Storage Operator is now available in
 {product-title} {product-version}.
 
 [id="ocp-4-2-OCS"]
-==== OpenShift Container Storage 4.2
+==== OpenShift Container Storage {product-version}
 
-With {product-title} {product-version}, OpenShift Container Storage (OCS) 4.2 is
-available, providing persistent storage for applications and {product-title}
-infrastructure services. OCS 4.2 will provide complete data services for
-{product-title}, including dynamic provisioning of object buckets. This S3
-capability is new in OCS 4.2. A consistent S3 interface is now added across
-public and on-premise infrastructure.
+With {product-title} {product-version}, OpenShift Container Storage (OCS)
+{product-version} is available, providing persistent storage for applications
+and {product-title} infrastructure services. OCS {product-version} will provide
+complete data services for {product-title}, including dynamic provisioning of
+object buckets. This S3 capability is new in OCS {product-version}. A consistent
+S3 interface is now added across public and on-premise infrastructure.
 
 [id="ocp-4-2-CSI"]
 ==== OpenShift Container Storage Interface (CSI)
@@ -546,7 +546,7 @@ New and upgraded ingress controllers will no longer support these TLS versions.
 
 OperatorHub has been updated to reduce the
 number of API resources a cluster administrator must interact with and
-streamline the installation of new Operators on {product-title} 4.2.
+streamline the installation of new Operators on {product-title} {product-version}.
 
 To work with OperatorHub in {product-title} 4.1, cluster administrators
 primarily interacted with OperatorSource and CatalogSourceConfig API resources.
@@ -558,8 +558,8 @@ OperatorSource of your cluster. Behind the scenes, it configured an Operator
 Lifecycle Manager (OLM) CatalogSource so that the Operator could then be managed
 by OLM.
 
-To reduce complexity, OperatorHub in {product-title} 4.2 no longer uses
-CatalogSourceConfigs in the workflow of installing Operators. Instead,
+To reduce complexity, OperatorHub in {product-title} {product-version} no longer
+uses CatalogSourceConfigs in the workflow of installing Operators. Instead,
 CatalogSources are still created as a result of adding OperatorSources to the
 cluster, however Subscription resources are now created directly using the
 CatalogSource.
@@ -576,17 +576,17 @@ supported in {product-title}.
 
 In {product-title} 4.1, the default global catalog namespace, where
 CatalogSources are installed by default, is
-`openshift-operator-lifecycle-manager`. Starting with {product-title} 4.2, this
-has changed to the `openshift-marketplace` namespace.
+`openshift-operator-lifecycle-manager`. Starting with {product-title}
+{product-version}, this has changed to the `openshift-marketplace` namespace.
 
 If you have installed an Operator from OperatorHub on a {product-title} 4.1
 cluster, the CatalogSource is in the same namespace as the Subscription. These
 Subscriptions are not affected by this change and should continue to behave
 normally after a cluster upgrade.
 
-In {product-title} 4.2, if you install an Operator from OperatorHub, the
-Subscription created refers to a CatalogSource located in the new global
-catalog namespace `openshift-marketplace`.
+In {product-title} {product-version}, if you install an Operator from
+OperatorHub, the Subscription created refers to a CatalogSource located in the
+new global catalog namespace `openshift-marketplace`.
 
 [discrete]
 [id="ocp-4-2-global-catalog-namespace-workaround"]
@@ -7,7 +7,7 @@ toc::[]
 
 [IMPORTANT]
 ====
-Volume snapshot is deprecated in {product-title} 4.2.
+Volume snapshot is deprecated in {product-title} {product-version}.
 ====
 
 This document describes how to use VolumeSnapshots to protect against data loss in {product-title}. Familiarity with xref:../../storage/understanding-persistent-storage.adoc#persistent-volumes_understanding-persistent-storage[persistent volumes] is suggested.