mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

OSDOCS-2869 - Removing some of the residual OSD conditionals

This commit is contained in:
Paul Needle
2021-11-19 19:22:55 +00:00
parent a21a73f80c
commit 42d204ffec
25 changed files with 4 additions and 403 deletions

View File

@@ -14,13 +14,6 @@ As
a cluster administrator, you can allow and configure how developers and service
accounts can create, or _self-provision_, their own projects.
////
ifdef::openshift-dedicated[]
A dedicated administrator is by default an administrator for all projects on the
cluster that are not managed by Red Hat Operations.
endif::[]
////
include::modules/about-project-creation.adoc[leveloffset=+1]
include::modules/modifying-template-for-new-projects.adoc[leveloffset=+1]
include::modules/disabling-project-self-provisioning.adoc[leveloffset=+1]

View File

@@ -13,4 +13,4 @@ include::modules/bound-sa-tokens-about.adoc[leveloffset=+1]
// Configuring bound service account tokens using volume projection
include::modules/bound-sa-tokens-configuring.adoc[leveloffset=+1]
// TODO: Verify distros: openshift-enterprise,openshift-webscale,openshift-origin,openshift-dedicated
// TODO: Verify distros: openshift-enterprise,openshift-webscale,openshift-origin

View File

@@ -8,9 +8,6 @@ toc::[]
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
As an administrator,
endif::[]
ifdef::openshift-dedicated[]
As a xref:../authentication/understanding-and-creating-service-accounts.html#dedicated-admin-role-overview_{context}[dedicated administrator],
endif::[]
you can use groups to manage users, change
their permissions, and enhance collaboration. Your organization may have already
created user groups and stored them in an LDAP server. {product-title} can sync
@@ -29,13 +26,6 @@ You must have `cluster-admin` privileges to sync groups.
====
endif::[]
ifdef::openshift-dedicated[]
[NOTE]
====
You must have `dedicated-admins` privileges to sync groups.
====
endif::[]
include::modules/ldap-syncing-about.adoc[leveloffset=+1]
include::modules/ldap-syncing-config-rfc2307.adoc[leveloffset=+2]
include::modules/ldap-syncing-config-activedir.adoc[leveloffset=+2]
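The RFC 2307 configuration module included above relies on a sync configuration file. A minimal sketch of such a file, assuming hypothetical hostnames and DNs (`ldap.example.com`, `dc=example,dc=com`):

```yaml
kind: LDAPSyncConfig
apiVersion: v1
url: ldap://ldap.example.com:389   # hypothetical LDAP server
rfc2307:
  groupsQuery:
    baseDN: "ou=groups,dc=example,dc=com"
    scope: sub
    derefAliases: never
    filter: (objectClass=groupOfNames)
  groupUIDAttribute: dn
  groupNameAttributes: [ cn ]
  groupMembershipAttributes: [ member ]
  usersQuery:
    baseDN: "ou=users,dc=example,dc=com"
    scope: sub
    derefAliases: never
  userUIDAttribute: dn
  userNameAttributes: [ mail ]
  tolerateMemberNotFoundErrors: false
  tolerateMemberOutOfScopeErrors: false
```

With a file like this saved as `sync-config.yaml`, groups are synced by running `oc adm groups sync --sync-config=sync-config.yaml --confirm`.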

View File

@@ -7,10 +7,6 @@ toc::[]
include::modules/service-accounts-overview.adoc[leveloffset=+1]
include::modules/service-accounts-dedicated-admin-role.adoc[leveloffset=+1]
include::modules/dedicated-admin-role-overview.adoc[leveloffset=+1]
// include::modules/service-accounts-enabling-authentication.adoc[leveloffset=+1]
include::modules/service-accounts-creating.adoc[leveloffset=+1]

View File

@@ -24,20 +24,6 @@ Because the internal {product-title} Elasticsearch log store does not provide se
====
endif::[]
ifdef::openshift-dedicated[]
As an administrator, you can deploy OpenShift Logging to
aggregate logs for a range of {product-title} services.
OpenShift Logging runs on worker nodes. As an
administrator, you can monitor resource consumption in the
console and through Prometheus and Grafana. Due to the high workload required for
logging, more worker nodes might be required for your environment.
Logs in {product-title} are retained for seven days before rotation. Logging
storage is capped at 600GiB. This is independent of a cluster's allocated base
storage.
endif::[]
// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
// modules required to cover the user story. You can also include other

View File

@@ -13,59 +13,6 @@ The following example shows a typical custom resource for OpenShift Logging.
[id="efk-logging-configuring-about-sample_{context}"]
.Sample `ClusterLogging` custom resource (CR)
ifdef::openshift-dedicated[]
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy:
      application:
        maxAge: 1d
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: "gp2"
        size: "200Gi"
      redundancyPolicy: "SingleRedundancy"
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      resources:
        limits:
          memory: "16Gi"
        requests:
          memory: "16Gi"
      proxy:
        resources:
          limits:
            memory: 256Mi
          requests:
            memory: 256Mi
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
      nodeSelector:
        node-role.kubernetes.io/worker: ""
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
      nodeSelector:
        node-role.kubernetes.io/worker: ""
----
endif::[]
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
[source,yaml]
----

View File

@@ -20,14 +20,6 @@ other resources necessary to support OpenShift Logging. The operators are
responsible for deploying, upgrading, and maintaining OpenShift Logging.
endif::openshift-enterprise,openshift-webscale,openshift-origin[]
ifdef::openshift-dedicated[]
{product-title} administrators can deploy the Red Hat OpenShift Logging Operator and the
OpenShift Elasticsearch Operator by using the {product-title} web console and can configure logging in the
`openshift-logging` namespace. Configuring logging will deploy Elasticsearch,
Fluentd, and Kibana in the `openshift-logging` namespace. The operators are
responsible for deploying, upgrading, and maintaining OpenShift Logging.
endif::openshift-dedicated[]
The `ClusterLogging` CR defines a complete OpenShift Logging environment that includes all the components
of the logging stack to collect, store and visualize logs. The Red Hat OpenShift Logging Operator watches the OpenShift Logging
CR and adjusts the logging deployment accordingly.

View File

@@ -284,52 +284,6 @@ This default OpenShift Logging configuration should support a wide array of envi
configuring OpenShift Logging components for information on modifications you can make to your OpenShift Logging cluster.
====
+
ifdef::openshift-dedicated[]
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy:
      application:
        maxAge: 1d
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: gp2
        size: "200Gi"
      redundancyPolicy: "SingleRedundancy"
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      resources:
        limits:
          memory: "16Gi"
        requests:
          memory: "16Gi"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
      nodeSelector:
        node-role.kubernetes.io/worker: ""
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
      nodeSelector:
        node-role.kubernetes.io/worker: ""
----
endif::[]
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
[source,yaml]
----

View File

@@ -126,53 +126,6 @@ This default OpenShift Logging configuration should support a wide array of envi
configuring OpenShift Logging components for information on modifications you can make to your OpenShift Logging cluster.
====
+
ifdef::openshift-dedicated[]
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy:
      application:
        maxAge: 1d
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: gp2
        size: "200Gi"
      redundancyPolicy: "SingleRedundancy"
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      resources:
        limits:
          memory: "16Gi"
        requests:
          memory: "16Gi"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
      nodeSelector:
        node-role.kubernetes.io/worker: ""
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
      nodeSelector:
        node-role.kubernetes.io/worker: ""
----
endif::[]
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
[source,yaml]
----

View File

@@ -31,31 +31,3 @@ $ oc create configmap registry-cas -n openshift-config \
$ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}' --type=merge
----
endif::[]
ifdef::openshift-dedicated[]
You can add certificate authorities (CAs) to the cluster for use when pushing and pulling images with the following procedure.

.Prerequisites

* You must have Dedicated administrator privileges.
* You must have access to the public certificates of the registry, usually a
`hostname/ca.crt` file located in the `/etc/docker/certs.d/` directory.

.Procedure

. Create a `ConfigMap` in the `openshift-config` namespace containing the trusted certificates for the registries that use self-signed certificates. For each CA file, ensure the key in the `ConfigMap` is the hostname of the registry in the `hostname[..port]` format:
+
[source,terminal]
----
$ oc create configmap registry-cas -n openshift-config \
--from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt \
--from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt
----
. Update the cluster image configuration:
+
[source,terminal]
----
$ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}' --type=merge
----
endif::[]
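The `oc patch` command in both variants of this procedure results in a cluster image configuration along these lines (a sketch of the merged resource, not literal cluster output):

```yaml
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  additionalTrustedCA:
    name: registry-cas   # references the ConfigMap created in the openshift-config namespace
```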

View File

@@ -17,9 +17,6 @@ You must explicitly assign permissions to each of these roles. The roles with mo
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
- Create a CRD.
endif::[]
ifdef::openshift-dedicated[]
- A cluster-scoped CRD has been created in your cluster.
endif::[]
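For reference, the cluster-scoped CRD prerequisite above refers to an object of the following shape. This is a minimal sketch using a hypothetical `crontabs.stable.example.com` CRD:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # hypothetical: must match <plural>.<group>
spec:
  group: stable.example.com
  scope: Cluster                      # cluster-scoped, per the prerequisite
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
```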
.Procedure

View File

@@ -15,9 +15,6 @@ After a custom resource definition (CRD) has been added to the cluster, custom
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
- CRD added to the cluster by a cluster administrator.
endif::[]
ifdef::openshift-dedicated[]
- A CRD has been created in your cluster.
endif::[]
.Procedure

View File

@@ -27,6 +27,3 @@ Cluster administrators can also add CRDs manually to the cluster outside of the
While only cluster administrators can create CRDs, developers can create the CR from an existing CRD if they have read and write permission to it.
====
endif::[]
ifdef::openshift-dedicated[]
{product-title} cluster administrators can manage RBAC to CRDs that have been added to customer project namespaces, making them available to authorized users.
endif::[]
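Granting RBAC on a CRD as described above amounts to creating a role scoped to the custom resource in a project namespace. A minimal sketch, assuming a hypothetical `crontabs` resource in the `stable.example.com` group:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: crontabs-editor    # hypothetical role name
  namespace: my-project    # hypothetical customer project
rules:
- apiGroups: ["stable.example.com"]   # hypothetical CRD API group
  resources: ["crontabs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```

A `RoleBinding` in the same namespace then makes the custom resource available to the authorized users.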

View File

@@ -39,7 +39,7 @@ $ oc new-project hello-openshift \
[NOTE]
====
The number of projects you are allowed to create
ifdef::openshift-enterprise,openshift-webscale,openshift-origin,openshift-dedicated[]
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
might be limited by the system administrator.
endif::[]
ifdef::openshift-online[]

View File

@@ -1,53 +0,0 @@
// Module included in the following assemblies:
//
// * authentication/using-service-accounts.adoc
[id="dedicated-admin-role-overview_{context}"]
ifdef::openshift-dedicated[]
= About the dedicated administrator role
The accounts for dedicated administrators of an {product-title} cluster have
increased permissions and access to all user-created projects.
////
[NOTE]
====
Some configuration changes or procedures discussed in this guide can be
performed by only the {product-title} Operations Team. They are included in this
guide for informational purposes to help you as an {product-title} cluster
administrator better understand what configuration options are possible. If you
want to request a change to your cluster that you cannot perform using the
administrator CLI, open a support case on the
https://access.redhat.com/support/[Red Hat Customer Portal].
====
////
When your account has the `dedicated-admins-cluster` authorization role
bound to it, you
are automatically bound to the `dedicated-admins-project` role for any new projects
that users in the cluster create.
You can perform actions that are associated with a set of verbs, such as `create`,
to operate on a set of resource names, such as `templates`. To view the details
of these roles and their sets of
verbs and resources, run the following commands:
[source,terminal]
----
$ oc describe clusterrole/dedicated-admins-cluster
$ oc describe clusterrole/dedicated-admins-project
----
The verb names do not necessarily all map directly to `oc` commands but rather
equate more generally to the types of CLI operations you can perform. For
example, having the `list` verb means that you can display a list of all objects
of a given resource name by running the `oc get` command, while the `get` verb
means that you can display the details of a specific object, if you know its
name, by running the `oc describe` command.
{product-title} administrators can grant users a `dedicated-reader` role, which
provides view-only access at the cluster level and view access for all
user projects.
endif::[]

View File

@@ -262,12 +262,3 @@ spec:
<1> The name of the `LimitRange` object.
<2> The minimum amount of storage that can be requested in a persistent volume claim.
<3> The maximum amount of storage that can be requested in a persistent volume claim.
ifdef::openshift-dedicated[]
[id="nodes-cluster-limit-project-limits"]
== Project limits
You can enforce limits on the number of projects that your users can create,
and manage limits and quotas on project resources.
endif::openshift-dedicated[]

View File

@@ -26,13 +26,6 @@ ifdef::openshift-enterprise,openshift-webscale[]
For the maximum number of pods per {product-title} node host, see the Cluster Limits.
====
endif::[]
ifdef::openshift-dedicated[]
[IMPORTANT]
====
The recommended maximum number of pods per {product-title} node host is 35. You
can have no more than 40 pods per node.
====
endif::[]
[WARNING]
====

View File

@@ -10,10 +10,6 @@ You can configure the Pod Node Constraints admission controller to ensure that p
.Prerequisites
Ensure you have the desired labels
ifdef::openshift-dedicated[]
(request changes by opening a support case on the
https://access.redhat.com/support/[Red Hat Customer Portal])
endif::openshift-dedicated[]
and node selector set up in your environment.
For example, make sure that your pod configuration features the `nodeName`

View File

@@ -13,10 +13,6 @@ You can configure the Pod Node Constraints admission controller to ensure that p
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
labels on your nodes.
endif::openshift-enterprise,openshift-webscale,openshift-origin[]
ifdef::openshift-dedicated[]
labels on your nodes (request changes by opening a support case on the
https://access.redhat.com/support/[Red Hat Customer Portal]).
endif::openshift-dedicated[]
and node selector set up in your environment.
+
For example, make sure that your pod configuration features the `nodeSelector`

View File

@@ -47,14 +47,6 @@ sh-4.2# podman login -u kubeadmin -p $(oc whoami -t) image-registry.openshift-im
----
endif::[]
+
ifdef::openshift-dedicated[]
[source,terminal]
----
$ podman login -u $(oc whoami) -p $(oc whoami -t) $(oc -n openshift-image-registry get route default-route -o jsonpath='{.spec.host}')
----
endif::[]
+
You should see a message confirming login, such as:
+
@@ -94,11 +86,6 @@ ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
|`5000`
endif::[]
ifdef::openshift-dedicated[]
|*<registry_url>*
|`oc -n openshift-image-registry get route default-route -o jsonpath='{.spec.host}'`
endif::[]
|*<project>*
|`openshift`
@@ -126,16 +113,6 @@ correctly place and later access the image in the registry:
$ podman tag name.io/image image-registry.openshift-image-registry.svc:5000/openshift/image
----
endif::[]
ifdef::openshift-dedicated[]
.. Tag the new image with the form `<registry_url>:<project>/<image>`.
The project name must appear in this pull specification for {product-title} to
correctly place and later access the image in the registry:
+
[source,terminal]
----
$ podman tag name.io/image $(oc -n openshift-image-registry get route default-route -o jsonpath='{.spec.host}')/openshift/image
----
endif::[]
+
[NOTE]
====
@@ -153,9 +130,3 @@ ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
$ podman push image-registry.openshift-image-registry.svc:5000/openshift/image
----
endif::[]
ifdef::openshift-dedicated[]
[source,terminal]
----
$ podman push $(oc -n openshift-image-registry get route default-route -o jsonpath='{.spec.host}')/openshift/image
----
endif::[]

View File

@@ -27,23 +27,3 @@ image metadata, which is exposed by the standard cluster APIs and is used to
perform access control, is stored as standard API resources, specifically images
and imagestreams.
endif::[]
ifdef::openshift-dedicated[]
{product-title} provides a built-in container image registry that runs as a
standard workload on the cluster. The registry is configured and managed by an
infrastructure Operator. It provides an out-of-the-box solution for users to
manage the images that run their workloads, and runs on top of the existing
cluster infrastructure. In addition, it is integrated into the cluster user
authentication and authorization system, which means that access to create and
retrieve images is controlled by defining user permissions on the image resources.
The registry is typically used as a publication target for images built on the
cluster, as well as being a source of images for workloads running on the cluster.
When a new image is pushed to the registry, the cluster is notified of the
new image and other components can react to and consume the updated image.
The actual image data is stored in a Red Hat-managed Amazon S3 bucket. The
image metadata, which is exposed by the standard cluster APIs and is used to
perform access control, is stored as standard API resources, specifically images
and imagestreams.
endif::[]

View File

@@ -41,7 +41,7 @@ All communication channels with the REST API, as well as between master
components such as etcd and the API server, are secured with TLS. TLS provides
strong encryption, data integrity, and authentication of servers with X.509
server certificates and public key infrastructure.
ifdef::openshift-origin,openshift-enterprise,openshift-dedicated[]
ifdef::openshift-origin,openshift-enterprise[]
By default, a new internal PKI is created for each deployment of
{product-title}. The internal PKI uses 2048 bit RSA keys and SHA-256 signatures.
endif::[]

View File

@@ -1,33 +0,0 @@
// Module included in the following assemblies:
//
// * authentication/using-service-accounts.adoc
[id="service-accounts-dedicated-admin-role_{context}"]
ifdef::openshift-dedicated[]
= Service accounts in {product-title}
As an administrator, you can use service accounts to perform any
actions that require {product-title} `admin` roles.
The `dedicated-admin` service creates the `dedicated-admins` group. This group
is granted the roles at the cluster or individual project
level. Users can be assigned to this group, and group membership defines who has
{product-title} administrator access. However, by design, service accounts
cannot be added to regular groups.
Instead, the `dedicated-admin` service creates a special project for this
purpose named `dedicated-admin`. The service account group for this project is
granted {product-title} `admin` roles, which grants {product-title} administrator
access to all service accounts within the `dedicated-admin` project. You can
then use these service accounts to perform any actions that require
{product-title} administrator access.
Users who are members of the `dedicated-admins` group are granted the
`dedicated-admin` role permissions and have `edit` access to the `dedicated-admin`
project. These users can manage the service accounts in this project
and create new ones as needed.
Users with a `dedicated-reader` role are granted edit and view access to the
`dedicated-reader` project and view-only access to the other projects.
endif::[]
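The group membership described above is represented by an OpenShift `Group` object. A sketch with a hypothetical user name:

```yaml
apiVersion: user.openshift.io/v1
kind: Group
metadata:
  name: dedicated-admins
users:
- alice   # hypothetical user; listing a user here grants the group's bound roles
```

Membership can also be managed imperatively with `oc adm groups add-users dedicated-admins <user>`.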

View File

@@ -165,23 +165,13 @@ endif::[]
// GCE Persistent Disks, or Openstack Cinder PVs.
--
ifdef::openshift-dedicated,openshift-online[]
ifdef::openshift-online[]
[id="pv-restrictions_{context}"]
== Restrictions
The following restrictions apply when using PVs with {product-title}:
endif::[]
ifdef::openshift-dedicated[]
* PVs are provisioned with either EBS volumes (AWS) or GCP storage (GCP),
depending on where the cluster is provisioned.
* Only RWO access mode is applicable, as EBS volumes and GCE Persistent
Disks cannot be mounted to multiple nodes.
* *emptyDir* has the same lifecycle as the pod:
** *emptyDir* volumes survive container crashes/restarts.
** *emptyDir* volumes are deleted when the pod is deleted.
endif::[]
ifdef::openshift-online[]
* PVs are provisioned with EBS volumes (AWS).
* Only RWO access mode is applicable, as EBS volumes and GCE Persistent

View File

@@ -20,7 +20,3 @@ include::modules/storage-expanding-filesystem-pvc.adoc[leveloffset=+1]
include::modules/storage-expanding-recovering-failure.adoc[leveloffset=+1]
endif::openshift-enterprise,openshift-webscale,openshift-origin[]
ifdef::openshift-dedicated[]
include::modules/dedicated-storage-expanding-filesystem-pvc.adoc[leveloffset=+1]
endif::openshift-dedicated[]
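Filesystem PVC expansion, covered by the module included above, comes down to raising `spec.resources.requests.storage` on an existing claim whose storage class allows volume expansion. A sketch with hypothetical names and sizes:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim            # hypothetical existing claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi          # raised from a smaller original size to trigger expansion
```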