
remove unused assemblies

This commit is contained in:
Max Bridges
2025-11-07 16:58:31 -05:00
parent dfd9136f38
commit 6179aeb74c
33 changed files with 0 additions and 2108 deletions

View File

@@ -1,31 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="troubleshooting"]
= Troubleshooting
:context: troubleshooting
toc::[]
// commented out - note for TS docs
// Troubleshooting installations
// Verifying node health
// Troubleshooting network issues
// Troubleshooting Operator issues
// Investigating pod issues
// Diagnosing CLI issues
////
Troubleshooting Jobs
Troubleshooting the status of a Job
Troubleshooting Queues
Troubleshooting the status of a LocalQueue or ClusterQueue
Troubleshooting Provisioning Request in Kueue
Troubleshooting the status of a Provisioning Request in Kueue
Troubleshooting Pods
Troubleshooting the status of a Pod or group of Pods
Troubleshooting delete ClusterQueue
////

View File

@@ -1,25 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="argocd"]
= Using ArgoCD with {product-title}
include::_attributes/common-attributes.adoc[]
:context: argocd
toc::[]
[id="argocd-what"]
== What does ArgoCD do?
ArgoCD is a declarative continuous delivery tool that leverages GitOps to maintain cluster resources. ArgoCD is implemented as a controller that continuously monitors application definitions and configurations defined in a Git repository and compares the specified state of those configurations with their live state on the cluster. Configurations that deviate from their specified state in the Git repository are classified as OutOfSync. ArgoCD reports these differences and allows administrators to automatically or manually resync configurations to the defined state.
ArgoCD enables you to deliver global custom resources, like the resources that are used to configure {product-title} clusters.
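For example, after reviewing the drift that ArgoCD reports, you can trigger a manual resync from the ArgoCD CLI. The following is a minimal sketch, assuming the `argocd` CLI is installed and logged in to your ArgoCD instance, and that `<application_name>` is a placeholder for one of your applications:

[source,terminal]
----
$ argocd app list
$ argocd app sync <application_name>
----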
[id="argocd-support"]
== Statement of support
Red Hat does not provide support for this tool. To obtain support for ArgoCD, see link:https://argoproj.github.io/argo-cd/SUPPORT/[Support] in the ArgoCD documentation.
[id="argocd-documentation"]
== ArgoCD documentation
For more information about using ArgoCD, see the link:https://argoproj.github.io/argo-cd/[ArgoCD documentation].

View File

@@ -1,29 +0,0 @@
:_mod-docs-content-type: CONCEPT
[id="oadp-data-mover-intro"]
= OADP Data Mover Introduction
include::_attributes/common-attributes.adoc[]
:context: data-mover
toc::[]
OADP Data Mover enables you to restore stateful applications from the object store if a failure, accidental deletion, or corruption of the cluster occurs.
:FeatureName: The OADP 1.2 Data Mover
include::snippets/technology-preview.adoc[leveloffset=+1]
* You can use OADP Data Mover to back up Container Storage Interface (CSI) volume snapshots to a remote object store. See xref:../../../backup_and_restore/application_backup_and_restore/installing/oadp-using-data-mover-for-csi-snapshots-doc.adoc#oadp-using-data-mover-for-csi-snapshots-doc[Using Data Mover for CSI snapshots].
* You can use OADP 1.2 Data Mover to back up and restore application data for clusters that use CephFS, CephRBD, or both. See xref:../../../backup_and_restore/application_backup_and_restore/installing/oadp-using-data-mover-for-csi-snapshots-doc.adoc#oadp-using-data-mover-for-csi-snapshots-doc[Using OADP 1.2 Data Mover with Ceph storage].
include::snippets/snip-post-mig-hook[]
[id="oadp-data-mover-prerequisites"]
== OADP Data Mover prerequisites
* You have a stateful application running in a separate namespace.
* You have installed the OADP Operator by using Operator Lifecycle Manager (OLM).
* You have created an appropriate `VolumeSnapshotClass` and `StorageClass`.
* You have installed the VolSync Operator by using OLM.

View File

@@ -1,276 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-using-data-mover-for-csi-snapshots-doc"]
= Using Data Mover for CSI snapshots
include::_attributes/common-attributes.adoc[]
:context: backing-up-applications
toc::[]
:FeatureName: Data Mover for CSI snapshots
The OADP Data Mover enables customers to back up Container Storage Interface (CSI) volume snapshots to a remote object store. When Data Mover is enabled, you can restore stateful applications, using CSI volume snapshots pulled from the object store if a failure, accidental deletion, or corruption of the cluster occurs.
The Data Mover solution uses the Restic option of VolSync.
Data Mover supports backup and restore of CSI volume snapshots only.
In OADP 1.2 Data Mover, `VolumeSnapshotBackups` (VSBs) and `VolumeSnapshotRestores` (VSRs) are queued by the VolumeSnapshotMover (VSM). You can improve the performance of the VSM by specifying the number of VSBs and VSRs that can be `InProgress` simultaneously. After all async plugin operations are complete, the backup is marked as complete.
:FeatureName: The OADP 1.2 Data Mover
include::snippets/technology-preview.adoc[leveloffset=+1]
[NOTE]
====
Red Hat recommends that customers who use OADP 1.2 Data Mover to back up and restore ODF CephFS volumes upgrade to or install {product-title} version 4.12 or later for improved performance. OADP Data Mover can leverage CephFS shallow volumes in {product-title} version 4.12 or later, which, based on our testing, can improve backup times.
* https://issues.redhat.com/browse/RHSTOR-4287[CephFS ROX details]
//* https://github.com/ceph/ceph-csi/blob/devel/docs/cephfs-snapshot-backed-volumes.md[Provisioning and mounting CephFS snapshot-backed volumes]
//For more information about OADP 1.2 with CephS [name of topic], see ___.
====
.Prerequisites
* You have verified that the `StorageClass` and `VolumeSnapshotClass` custom resources (CRs) support CSI.
* You have verified that only one `VolumeSnapshotClass` CR has the annotation `snapshot.storage.kubernetes.io/is-default-class: "true"`.
+
[NOTE]
====
In {product-title} version 4.12 or later, verify that this is the only default `VolumeSnapshotClass`.
====
* You have verified that `deletionPolicy` of the `VolumeSnapshotClass` CR is set to `Retain`.
* You have verified that only one `StorageClass` CR has the annotation `storageclass.kubernetes.io/is-default-class: "true"`.
* You have included the label `{velero-domain}/csi-volumesnapshot-class: "true"` in your `VolumeSnapshotClass` CR.
* You have verified that the OADP namespace has the `volsync.backube/privileged-movers="true"` annotation, which you can apply by entering the `oc annotate --overwrite namespace/openshift-adp volsync.backube/privileged-movers="true"` command (see the example commands that follow this list).
+
[NOTE]
====
In OADP 1.2, the `privileged-movers` setting is not required in most scenarios. The restoring container permissions should be adequate for the VolSync copy. In some user scenarios, there might be permission errors that setting `privileged-movers="true"` resolves.
====
* You have installed the VolSync Operator by using the Operator Lifecycle Manager (OLM).
+
[NOTE]
====
The VolSync Operator is required for using OADP Data Mover.
====
* You have installed the OADP operator by using OLM.
+
--
include::snippets/xfs-filesystem-snippet.adoc[]
--
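The following commands are one way to spot-check several of these prerequisites. This is a hedged sketch that assumes the default `openshift-adp` namespace; replace `<volume_snapshot_class_name>` with your own class name, and apply the namespace annotation only if it is missing:

[source,terminal]
----
$ oc get storageclass
$ oc get volumesnapshotclass <volume_snapshot_class_name> -o yaml
$ oc annotate --overwrite namespace/openshift-adp volsync.backube/privileged-movers="true"
----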
.Procedure
. Configure a Restic secret by creating a `.yaml` file similar to the following example:
+
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: <secret_name>
  namespace: openshift-adp
type: Opaque
stringData:
  RESTIC_PASSWORD: <secure_restic_password>
----
+
[NOTE]
====
By default, the Operator looks for a secret named `dm-credential`. If you are using a different name, you need to specify the name through a Data Protection Application (DPA) CR using `dpa.spec.features.dataMover.credentialName`.
====
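+
As an alternative to authoring the secret YAML manually, you can create the secret directly from the CLI. This is a minimal sketch that assumes the default `dm-credential` secret name and the `openshift-adp` namespace:
+
[source,terminal]
----
$ oc create secret generic dm-credential \
  -n openshift-adp \
  --from-literal=RESTIC_PASSWORD=<secure_restic_password>
----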
. Create a DPA CR similar to the following example. The default plugins include CSI.
+
.Example Data Protection Application (DPA) CR
[source,yaml]
----
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: velero-sample
  namespace: openshift-adp
spec:
  backupLocations:
    - velero:
        config:
          profile: default
          region: us-east-1
        credential:
          key: cloud
          name: cloud-credentials
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: <bucket-prefix>
        provider: aws
  configuration:
    restic:
      enable: <true_or_false>
    velero:
      itemOperationSyncFrequency: "10s"
      defaultPlugins:
        - openshift
        - aws
        - csi
        - vsm
  features:
    dataMover:
      credentialName: restic-secret
      enable: true
      maxConcurrentBackupVolumes: "3" <1>
      maxConcurrentRestoreVolumes: "3" <2>
      pruneInterval: "14" <3>
      volumeOptions: <4>
        sourceVolumeOptions:
          accessMode: ReadOnlyMany
          cacheAccessMode: ReadWriteOnce
          cacheCapacity: 2Gi
        destinationVolumeOptions:
          storageClass: other-storageclass-name
          cacheAccessMode: ReadWriteMany
  snapshotLocations:
    - velero:
        config:
          profile: default
          region: us-west-2
        provider: aws
----
<1> Optional: Specify the upper limit of the number of snapshots allowed to be queued for backup. The default value is `10`.
<2> Optional: Specify the upper limit of the number of snapshots allowed to be queued for restore. The default value is `10`.
<3> Optional: Specify the number of days between running Restic pruning on the repository. The prune operation repacks the data to free space, but it can also generate significant I/O traffic as a part of the process. Setting this option allows a trade-off between storage consumption from no-longer-referenced data and access costs.
<4> Optional: Specify VolSync volume options for backup and restore.
+
The OADP Operator installs two custom resource definitions (CRDs), `VolumeSnapshotBackup` and `VolumeSnapshotRestore`.
+
.Example `VolumeSnapshotBackup` CRD
[source,yaml]
----
apiVersion: datamover.oadp.openshift.io/v1alpha1
kind: VolumeSnapshotBackup
metadata:
  name: <vsb_name>
  namespace: <namespace_name> <1>
spec:
  volumeSnapshotContent:
    name: <snapcontent_name>
  protectedNamespace: <adp_namespace> <2>
  resticSecretRef:
    name: <restic_secret_name>
----
<1> Specify the namespace where the volume snapshot exists.
<2> Specify the namespace where the OADP Operator is installed. The default is `openshift-adp`.
+
.Example `VolumeSnapshotRestore` CRD
[source,yaml]
----
apiVersion: datamover.oadp.openshift.io/v1alpha1
kind: VolumeSnapshotRestore
metadata:
  name: <vsr_name>
  namespace: <namespace_name> <1>
spec:
  protectedNamespace: <protected_ns> <2>
  resticSecretRef:
    name: <restic_secret_name>
  volumeSnapshotMoverBackupRef:
    sourcePVCData:
      name: <source_pvc_name>
      size: <source_pvc_size>
    resticrepository: <your_restic_repo>
    volumeSnapshotClassName: <vsclass_name>
----
<1> Specify the namespace where the volume snapshot exists.
<2> Specify the namespace where the OADP Operator is installed. The default is `openshift-adp`.
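+
Before you run a backup, you can confirm that the Data Mover CRDs are registered in the cluster. This is a hedged check that assumes the CRDs use the `datamover.oadp.openshift.io` API group shown in the preceding examples:
+
[source,terminal]
----
$ oc get crds | grep datamover.oadp.openshift.io
----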
. You can back up a volume snapshot by performing the following steps:
.. Create a backup CR:
+
[source,yaml]
----
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: <backup_name>
  namespace: <protected_ns> <1>
spec:
  includedNamespaces:
  - <app_ns> <2>
  storageLocation: velero-sample-1
----
<1> Specify the namespace where the Operator is installed. The default namespace is `openshift-adp`.
<2> Specify the application namespace or namespaces to be backed up.
.. Wait up to 10 minutes and check whether the `VolumeSnapshotBackup` CR status is `Completed` by entering the following commands:
+
[source,terminal]
----
$ oc get vsb -n <app_ns>
----
+
[source,terminal]
----
$ oc get vsb <vsb_name> -n <app_ns> -o jsonpath="{.status.phase}"
----
+
A snapshot is created in the object store that was configured in the DPA.
+
[NOTE]
====
If the status of the `VolumeSnapshotBackup` CR becomes `Failed`, refer to the Velero logs for troubleshooting.
====
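+
For example, you can inspect the Velero logs with a command similar to the following. This sketch assumes the default `velero` deployment name in the `openshift-adp` namespace:
+
[source,terminal]
----
$ oc logs deployment/velero -n openshift-adp
----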
. You can restore a volume snapshot by performing the following steps:
.. Delete the application namespace and the `VolumeSnapshotContent` that was created by the Velero CSI plugin.
.. Create a `Restore` CR and set `restorePVs` to `true`.
+
.Example `Restore` CR
[source,yaml]
----
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: <restore_name>
  namespace: <protected_ns>
spec:
  backupName: <previous_backup_name>
  restorePVs: true
----
.. Wait up to 10 minutes and check whether the `VolumeSnapshotRestore` CR status is `Completed` by entering the following commands:
+
[source,terminal]
----
$ oc get vsr -n <app_ns>
----
+
[source,terminal]
----
$ oc get vsr <vsr_name> -n <app_ns> -o jsonpath="{.status.phase}"
----
.. Check whether your application data and resources have been restored.
+
[NOTE]
====
If the status of the `VolumeSnapshotRestore` CR becomes `Failed`, refer to the Velero logs for troubleshooting.
====

View File

@@ -1,136 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="installing-aws-network-customizations"]
= Installing a cluster on AWS with network customizations
include::_attributes/common-attributes.adoc[]
:context: installing-aws-network-customizations
toc::[]
In {product-title} version {product-version}, you can install a cluster on
Amazon Web Services (AWS) with customized network configuration options. By
customizing your network configuration, your cluster can coexist with existing
IP address allocations in your environment and integrate with existing MTU and
VXLAN configurations.
You must set most of the network configuration parameters during installation,
and you can modify only `kubeProxy` configuration parameters in a running
cluster.
== Prerequisites
* You reviewed details about the xref:../../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update] processes.
* You read the documentation on xref:../../../installing/overview/installing-preparing.adoc#installing-preparing[selecting a cluster installation method and preparing it for users].
* You xref:../../../installing/installing_aws/installing-aws-account.adoc#installing-aws-account[configured an AWS account] to host the cluster.
+
[IMPORTANT]
====
If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see link:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html[Managing Access Keys for IAM Users] in the AWS documentation. You can supply the keys when you run the installation program.
====
* If you use a firewall, you xref:../../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configured it to allow the sites] that your cluster requires access to.
include::modules/nw-network-config.adoc[leveloffset=+1]
include::modules/installation-initializing.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../../installing/installing_aws/installation-config-parameters-aws.adoc#installation-config-parameters-aws[Installation configuration parameters for AWS]
include::modules/installation-minimum-resource-requirements.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../../scalability_and_performance/optimization/optimizing-storage.adoc#optimizing-storage[Optimizing storage]
include::modules/installation-aws-tested-machine-types.adoc[leveloffset=+2]
include::modules/installation-aws-arm-tested-machine-types.adoc[leveloffset=+2]
include::modules/installation-aws-config-yaml-customizations.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../../installing/installing_aws/installation-config-parameters-aws.adoc#installation-config-parameters-aws[Installation configuration parameters for AWS]
include::modules/installation-configure-proxy.adoc[leveloffset=+2]
[id="installing-aws-manual-modes_{context}"]
== Alternatives to storing administrator-level secrets in the kube-system project
By default, administrator secrets are stored in the `kube-system` project. If you configured the `credentialsMode` parameter in the `install-config.yaml` file to `Manual`, you must use one of the following alternatives:
* To manage long-term cloud credentials manually, follow the procedure in xref:../../../installing/installing_aws/ipi/installing-aws-network-customizations.adoc#manually-create-iam_installing-aws-network-customizations[Manually creating long-term credentials].
* To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in xref:../../../installing/installing_aws/ipi/installing-aws-network-customizations.adoc#installing-aws-with-short-term-creds_installing-aws-network-customizations[Configuring an AWS cluster to use short-term credentials].
//Manually creating long-term credentials
include::modules/manually-create-identity-access-management.adoc[leveloffset=+2]
//Supertask: Configuring an AWS cluster to use short-term credentials
[id="installing-aws-with-short-term-creds_{context}"]
=== Configuring an AWS cluster to use short-term credentials
To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster.
//Task part 1: Configuring the Cloud Credential Operator utility
include::modules/cco-ccoctl-configuring.adoc[leveloffset=+3]
//Task part 2: Creating the required AWS resources
[id="sts-mode-create-aws-resources-ccoctl_{context}"]
==== Creating AWS resources with the Cloud Credential Operator utility
You have the following options when creating AWS resources:
* You can use the `ccoctl aws create-all` command to create the AWS resources automatically. This is the quickest way to create the resources. See xref:../../../installing/installing_aws/ipi/installing-aws-network-customizations.adoc#cco-ccoctl-creating-at-once_installing-aws-network-customizations[Creating AWS resources with a single command].
* If you need to review the JSON files that the `ccoctl` tool creates before modifying AWS resources, or if the process the `ccoctl` tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See xref:../../../installing/installing_aws/ipi/installing-aws-network-customizations.adoc#cco-ccoctl-creating-individually_installing-aws-network-customizations[Creating AWS resources individually].
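For reference, creating all of the AWS resources at once is typically a single `ccoctl` invocation. The following is a hedged sketch; the exact flags can vary by release, and the placeholder values are assumptions that you must replace for your environment:

[source,terminal]
----
$ ccoctl aws create-all \
  --name=<name> \
  --region=<aws_region> \
  --credentials-requests-dir=<path_to_credentials_requests_directory> \
  --output-dir=<path_to_ccoctl_output_dir>
----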
//Task part 2a: Creating the required AWS resources all at once
include::modules/cco-ccoctl-creating-at-once.adoc[leveloffset=+4]
//Task part 2b: Creating the required AWS resources individually
include::modules/cco-ccoctl-creating-individually.adoc[leveloffset=+4]
//Task part 3: Incorporating the Cloud Credential Operator utility manifests
include::modules/cco-ccoctl-install-creating-manifests.adoc[leveloffset=+3]
// Network Operator specific configuration
include::modules/nw-operator-cr.adoc[leveloffset=+1]
include::modules/nw-modifying-operator-install-config.adoc[leveloffset=+1]
[NOTE]
====
For more information on using a Network Load Balancer (NLB) on AWS, see xref:../../../networking/ingress_load_balancing/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-aws.adoc#nw-configuring-ingress-cluster-traffic-aws-network-load-balancer_configuring-ingress-cluster-traffic-aws[Configuring Ingress cluster traffic on AWS using a Network Load Balancer].
====
include::modules/nw-aws-nlb-new-cluster.adoc[leveloffset=+1]
include::modules/configuring-hybrid-ovnkubernetes.adoc[leveloffset=+1]
////
Hiding until WMCO 10.19.0 GAs
[NOTE]
====
For more information about using Linux and Windows nodes in the same cluster, see ../../../windows_containers/understanding-windows-container-workloads.adoc#understanding-windows-container-workloads[Understanding Windows container workloads].
====
////
include::modules/installation-launching-installer.adoc[leveloffset=+1]
include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]
include::modules/logging-in-by-using-the-web-console.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* See xref:../../../web_console/web-console.adoc#web-console[Accessing the web console] for more details about accessing and understanding the {product-title} web console.
== Next steps
* xref:../../../installing/validation_and_troubleshooting/validating-an-installation.adoc#validating-an-installation[Validating an installation].
* xref:../../../post_installation_configuration/cluster-tasks.adoc#available_cluster_customizations[Customize your cluster].
* If necessary, see xref:../../../support/remote_health_monitoring/remote-health-reporting.adoc#remote-health-reporting[Remote health reporting].
* If necessary, you can xref:../../../post_installation_configuration/changing-cloud-credentials-configuration.adoc#manually-removing-cloud-creds_changing-cloud-credentials-configuration[remove cloud provider credentials].

View File

@@ -1,104 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="installing-azure-network-customizations"]
= Installing a cluster on Azure with network customizations
include::_attributes/common-attributes.adoc[]
:context: installing-azure-network-customizations
toc::[]
In {product-title} version {product-version}, you can install a cluster with a
customized network configuration on infrastructure that the installation program
provisions on Microsoft Azure. By customizing your network configuration, your
cluster can coexist with existing IP address allocations in your environment and
integrate with existing MTU and VXLAN configurations.
You must set most of the network configuration parameters during installation,
and you can modify only `kubeProxy` configuration parameters in a running
cluster.
include::modules/installation-initializing.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../../installing/installing_azure/installation-config-parameters-azure.adoc#installation-config-parameters-azure[Installation configuration parameters for Azure]
include::modules/installation-minimum-resource-requirements.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../../scalability_and_performance/optimization/optimizing-storage.adoc#optimizing-storage[Optimizing storage]
include::modules/installation-azure-tested-machine-types.adoc[leveloffset=+2]
include::modules/installation-azure-arm-tested-machine-types.adoc[leveloffset=+2]
include::modules/installation-azure-trusted-launch.adoc[leveloffset=+2]
include::modules/installation-azure-confidential-vms.adoc[leveloffset=+2]
include::modules/installation-azure-dedicated-disks.adoc[leveloffset=+2]
include::modules/installation-azure-config-yaml.adoc[leveloffset=+2]
include::modules/installation-configure-proxy.adoc[leveloffset=+2]
// Network Operator specific configuration
include::modules/nw-network-config.adoc[leveloffset=+1]
include::modules/nw-modifying-operator-install-config.adoc[leveloffset=+1]
include::modules/nw-operator-cr.adoc[leveloffset=+1]
include::modules/configuring-hybrid-ovnkubernetes.adoc[leveloffset=+1]
////
Hiding until WMCO 10.19.0 GAs
[NOTE]
====
For more information about using Linux and Windows nodes in the same cluster, see ../../../windows_containers/understanding-windows-container-workloads.adoc#understanding-windows-container-workloads[Understanding Windows container workloads].
====
////
[role="_additional-resources"]
.Additional resources
* For more details about Accelerated Networking, see xref:../../../machine_management/creating_machinesets/creating-machineset-azure.adoc#machineset-azure-accelerated-networking_creating-machineset-azure[Accelerated Networking for Microsoft Azure VMs].
[id="installing-azure-manual-modes_{context}"]
== Alternatives to storing administrator-level secrets in the kube-system project
By default, administrator secrets are stored in the `kube-system` project. If you configured the `credentialsMode` parameter in the `install-config.yaml` file to `Manual`, you must use one of the following alternatives:
* To manage long-term cloud credentials manually, follow the procedure in xref:../../../installing/installing_azure/ipi/installing-azure-network-customizations.adoc#manually-create-iam_installing-azure-network-customizations[Manually creating long-term credentials].
* To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in xref:../../../installing/installing_azure/ipi/installing-azure-network-customizations.adoc#installing-azure-with-short-term-creds_installing-azure-network-customizations[Configuring an Azure cluster to use short-term credentials].
//Manually creating long-term credentials
include::modules/manually-create-identity-access-management.adoc[leveloffset=+2]
//Supertask: Configuring an Azure cluster to use short-term credentials
[id="installing-azure-with-short-term-creds_{context}"]
=== Configuring an Azure cluster to use short-term credentials
To install a cluster that uses {entra-first}, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster.
//Task part 1: Configuring the Cloud Credential Operator utility
include::modules/cco-ccoctl-configuring.adoc[leveloffset=+3]
//Task part 2: Creating the required Azure resources
include::modules/cco-ccoctl-creating-at-once.adoc[leveloffset=+3]
// Additional steps for the Cloud Credential Operator utility (`ccoctl`)
include::modules/cco-ccoctl-install-creating-manifests.adoc[leveloffset=+3]
include::modules/installation-launching-installer.adoc[leveloffset=+1]
include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* See xref:../../../web_console/web-console.adoc#web-console[Accessing the web console] for more details about accessing and understanding the {product-title} web console.
== Next steps
* xref:../../../post_installation_configuration/cluster-tasks.adoc#available_cluster_customizations[Customize your cluster].
* If necessary, see xref:../../../support/remote_health_monitoring/remote-health-reporting.adoc#remote-health-reporting[Remote health reporting].

View File

@@ -1,137 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="installing-gcp-network-customizations"]
= Installing a cluster on {gcp-short} with network customizations
:context: installing-gcp-network-customizations
toc::[]
In {product-title} version {product-version}, you can install a cluster with a
customized network configuration on infrastructure that the installation program
provisions on {gcp-first}. By customizing your network
configuration, your cluster can coexist with existing IP address allocations in
your environment and integrate with existing MTU and VXLAN configurations. To
customize the installation, you modify parameters in the `install-config.yaml`
file before you install the cluster.
You must set most of the network configuration parameters during installation,
and you can modify only `kubeProxy` configuration parameters in a running
cluster.
== Prerequisites
* You reviewed details about the xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update] processes.
* You read the documentation on xref:../../installing/overview/installing-preparing.adoc#installing-preparing[selecting a cluster installation method and preparing it for users].
* You xref:../../installing/installing_gcp/installing-gcp-account.adoc#installing-gcp-account[configured a {gcp-short} project] to host the cluster.
* If you use a firewall, you xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configured it to allow the sites] that your cluster requires access to.
include::modules/cluster-entitlements.adoc[leveloffset=+1]
include::modules/ssh-agent-using.adoc[leveloffset=+1]
include::modules/installation-obtaining-installer.adoc[leveloffset=+1]
include::modules/installation-initializing.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../installing/installing_gcp/installation-config-parameters-gcp.adoc#installation-config-parameters-gcp[Installation configuration parameters for {gcp-short}]
include::modules/installation-minimum-resource-requirements.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../scalability_and_performance/optimization/optimizing-storage.adoc#optimizing-storage[Optimizing storage]
include::modules/installation-gcp-tested-machine-types.adoc[leveloffset=+2]
include::modules/installation-gcp-tested-machine-types-arm.adoc[leveloffset=+2]
include::modules/installation-using-gcp-custom-machine-types.adoc[leveloffset=+2]
include::modules/installation-gcp-enabling-shielded-vms.adoc[leveloffset=+2]
include::modules/installation-gcp-enabling-confidential-vms.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../installing/installing_gcp/installation-config-parameters-gcp.adoc#installation-configuration-parameters-additional-gcp_installation-config-parameters-gcp[Additional {gcp-first} configuration parameters]
include::modules/installation-gcp-managing-dns-solution.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../installing/installing_gcp/installation-config-parameters-gcp.adoc#installation-config-parameters-gcp[Installation configuration parameters for {gcp-first}]
include::modules/installation-gcp-config-yaml.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../machine_management/creating_machinesets/creating-machineset-gcp.adoc#machineset-enabling-customer-managed-encryption_creating-machineset-gcp[Enabling customer-managed encryption keys for a compute machine set]
include::modules/installation-configure-proxy.adoc[leveloffset=+2]
//Installing the OpenShift CLI by downloading the binary: Moved up to precede `ccoctl` steps, which require the use of `oc`
include::modules/cli-installing-cli.adoc[leveloffset=+1]
[id="installing-gcp-manual-modes_{context}"]
== Alternatives to storing administrator-level secrets in the kube-system project
By default, administrator secrets are stored in the `kube-system` project. If you configured the `credentialsMode` parameter in the `install-config.yaml` file to `Manual`, you must use one of the following alternatives:
* To manage long-term cloud credentials manually, follow the procedure in xref:../../installing/installing_gcp/installing-gcp-network-customizations.adoc#manually-create-iam_installing-gcp-network-customizations[Manually creating long-term credentials].
* To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in xref:../../installing/installing_gcp/installing-gcp-network-customizations.adoc#installing-gcp-with-short-term-creds_installing-gcp-network-customizations[Configuring a {gcp-short} cluster to use short-term credentials].
//Manually creating long-term credentials
include::modules/manually-create-identity-access-management.adoc[leveloffset=+2]
//Supertask: Configuring a GCP cluster to use short-term credentials
[id="installing-gcp-with-short-term-creds_{context}"]
=== Configuring a {gcp-short} cluster to use short-term credentials
To install a cluster that is configured to use {gcp-short} Workload Identity, you must configure the CCO utility and create the required {gcp-short} resources for your cluster.
//Task part 1: Configuring the Cloud Credential Operator utility
include::modules/cco-ccoctl-configuring.adoc[leveloffset=+3]
//Task part 2: Creating the required GCP resources
include::modules/cco-ccoctl-creating-at-once.adoc[leveloffset=+3]
//Task part 3: Incorporating the Cloud Credential Operator utility manifests
include::modules/cco-ccoctl-install-creating-manifests.adoc[leveloffset=+3]
// Network Operator specific configuration
include::modules/nw-network-config.adoc[leveloffset=+1]
include::modules/nw-modifying-operator-install-config.adoc[leveloffset=+1]
include::modules/nw-operator-cr.adoc[leveloffset=+1]
include::modules/installation-launching-installer.adoc[leveloffset=+1]
include::modules/installation-gcp-provisioning-dns-records.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../installing/installing_gcp/installation-config-parameters-gcp.adoc#installation-configuration-parameters-additional-gcp_installation-config-parameters-gcp[Additional {gcp-first} configuration parameters]
include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* See xref:../../web_console/web-console.adoc#web-console[Accessing the web console] for more details about accessing and understanding the {product-title} web console.
include::modules/cluster-telemetry.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* See xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service
== Next steps
* xref:../../post_installation_configuration/cluster-tasks.adoc#available_cluster_customizations[Customize your cluster].
* If necessary, see xref:../../support/remote_health_monitoring/remote-health-reporting.adoc#remote-health-reporting[Remote health reporting].

View File

@@ -1,19 +0,0 @@
:context: cluster-logging-curator
[id="cluster-logging-curator"]
= Configuring the log curator
include::_attributes/common-attributes.adoc[]
toc::[]
In {product-title} {product-version}, log curation is performed by Elasticsearch Curator, based on a xref:../../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-retention_cluster-logging-store[retention policy] that you define.
The Elasticsearch Curator tool removes Elasticsearch indices that use the data model from versions prior to {product-title} 4.5. You can modify the Curator index retention policy for that older data.
// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
// modules required to cover the user story. You can also include other
// assemblies.
include::modules/cluster-logging-curator-schedule.adoc[leveloffset=+1]
include::modules/cluster-logging-curator-delete-index.adoc[leveloffset=+1]

View File

@@ -1,9 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="dedicated-efk-deploying"]
= Installing the Cluster Logging and Elasticsearch Operators
include::_attributes/common-attributes.adoc[]
:context: dedicated-efk-deploying
toc::[]
include::modules/efk-logging-deploy-subscription.adoc[leveloffset=+1]

View File

@@ -1,52 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="adding-rhel-compute"]
= Adding RHEL compute machines to an {product-title} cluster
include::_attributes/common-attributes.adoc[]
:context: adding-rhel-compute
toc::[]
In {product-title}, you can add {op-system-base-full} compute machines to a user-provisioned infrastructure cluster or an installer-provisioned infrastructure cluster on the `x86_64` architecture. You can use {op-system-base} as the operating system only on compute machines.
include::modules/rhel-compute-overview.adoc[leveloffset=+1]
include::modules/rhel-compute-requirements.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-working-deleting_nodes-nodes-working[Deleting nodes]
* xref:../machine_management/creating_machinesets/creating-machineset-azure.adoc#machineset-azure-accelerated-networking_creating-machineset-azure[Accelerated Networking for Microsoft Azure VMs]
include::modules/csr-management.adoc[leveloffset=+2]
[id="adding-rhel-compute-preparing-image-cloud"]
== Preparing an image for your cloud
Amazon Machine Images (AMIs) are required because AWS cannot use other image formats directly. You can use the AMIs that Red Hat provides, or you can manually import your own images. The AMI must exist before you can provision the EC2 instance. You need a valid AMI ID so that the correct {op-system-base} version is selected for the compute machines.
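For example, you can list candidate AMIs with the AWS CLI. This is a hedged sketch; the owner ID and name filter are placeholders that you must replace with the values for the {op-system-base} images you intend to use:

[source,terminal]
----
$ aws ec2 describe-images \
  --owners <image_owner_id> \
  --filters "Name=name,Values=<rhel_image_name_pattern>" \
  --query 'Images[*].[ImageId,Name,CreationDate]' \
  --output table
----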
include::modules/rhel-images-aws.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* You may also manually link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/image_builder_guide/sect-documentation-image_builder-chapter5-section_2[import {op-system-base} images to AWS].
include::modules/rhel-preparing-playbook-machine.adoc[leveloffset=+1]
include::modules/rhel-preparing-node.adoc[leveloffset=+1]
include::modules/rhel-attaching-instance-aws.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* See xref:../installing/installing_aws/installing-aws-account.adoc#installation-aws-permissions-iam-roles_installing-aws-account[Required AWS permissions for IAM roles].
include::modules/rhel-worker-tag.adoc[leveloffset=+1]
include::modules/rhel-adding-node.adoc[leveloffset=+1]
include::modules/installation-approve-csrs.adoc[leveloffset=+1]
include::modules/rhel-ansible-parameters.adoc[leveloffset=+1]
include::modules/rhel-removing-rhcos.adoc[leveloffset=+2]

View File

@@ -1,48 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="more-rhel-compute"]
= Adding more RHEL compute machines to an {product-title} cluster
include::_attributes/common-attributes.adoc[]
:context: more-rhel-compute
toc::[]
If your {product-title} cluster already includes Red Hat Enterprise Linux (RHEL) compute machines, which are also known as worker machines, you can add more RHEL compute machines to it.
include::modules/rhel-compute-overview.adoc[leveloffset=+1]
include::modules/rhel-compute-requirements.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-working-deleting_nodes-nodes-working[Deleting nodes]
* xref:../machine_management/creating_machinesets/creating-machineset-azure.adoc#machineset-azure-accelerated-networking_creating-machineset-azure[Accelerated Networking for Microsoft Azure VMs]
include::modules/csr-management.adoc[leveloffset=+2]
[id="more-rhel-compute-preparing-image-cloud"]
== Preparing an image for your cloud
Amazon Machine Images (AMIs) are required because AWS cannot use other image formats directly. You can use the AMIs that Red Hat provides, or you can manually import your own images. The AMI must exist before you can provision the EC2 instance. You must list the AMI IDs so that the correct {op-system-base} version is selected for the compute machines.
include::modules/rhel-images-aws.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* You may also manually link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/image_builder_guide/sect-documentation-image_builder-chapter5-section_2[import {op-system-base} images to AWS].
include::modules/rhel-preparing-node.adoc[leveloffset=+1]
include::modules/rhel-attaching-instance-aws.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* See xref:../installing/installing_aws/installing-aws-account.adoc#installation-aws-permissions-iam-roles_installing-aws-account[Required AWS permissions for IAM roles].
include::modules/rhel-worker-tag.adoc[leveloffset=+1]
include::modules/rhel-adding-more-nodes.adoc[leveloffset=+1]
include::modules/installation-approve-csrs.adoc[leveloffset=+1]
include::modules/rhel-ansible-parameters.adoc[leveloffset=+1]

View File

@@ -1,20 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-about-configuring"]
= About configuring metering
include::_attributes/common-attributes.adoc[]
:context: metering-about-configuring
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
The `MeteringConfig` custom resource specifies all the configuration details for your metering installation. When you first install the metering stack, a default `MeteringConfig` custom resource is generated. Use the examples in the documentation to modify this default file. Keep in mind the following key points:
* At a minimum, you need to xref:../../metering/configuring_metering/metering-configure-persistent-storage.adoc#metering-configure-persistent-storage[configure persistent storage] and xref:../../metering/configuring_metering/metering-configure-hive-metastore.adoc#metering-configure-hive-metastore[configure the Hive metastore].
* Most default configuration settings work, but larger deployments or highly customized deployments should review all configuration options carefully.
* Some configuration options cannot be modified after installation.
For configuration options that can be modified after installation, make the changes in your `MeteringConfig` custom resource and reapply the file.
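For example, one way to update the configuration after installation is to edit the default `MeteringConfig` resource in place. This sketch assumes the default `operator-metering` name in the `openshift-metering` namespace:

[source,terminal]
----
$ oc -n openshift-metering edit meteringconfig operator-metering
----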

View File

@@ -1,175 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-common-config-options"]
= Common configuration options
include::_attributes/common-attributes.adoc[]
:context: metering-common-config-options
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
== Resource requests and limits
You can adjust the CPU, memory, and storage resource requests and limits for pods and volumes. The `default-resource-limits.yaml` file below provides an example of setting resource requests and limits for each component.
[source,yaml]
----
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  reporting-operator:
    spec:
      resources:
        limits:
          cpu: 1
          memory: 500Mi
        requests:
          cpu: 500m
          memory: 100Mi
  presto:
    spec:
      coordinator:
        resources:
          limits:
            cpu: 4
            memory: 4Gi
          requests:
            cpu: 2
            memory: 2Gi
      worker:
        replicas: 0
        resources:
          limits:
            cpu: 8
            memory: 8Gi
          requests:
            cpu: 4
            memory: 2Gi
  hive:
    spec:
      metastore:
        resources:
          limits:
            cpu: 4
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 650Mi
        storage:
          class: null
          create: true
          size: 5Gi
      server:
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 500Mi
----
== Node selectors
You can run the metering components on specific sets of nodes. Set the `nodeSelector` on a metering component to control where the component is scheduled. The `node-selectors.yaml` file below provides an example of setting node selectors for each component.
[NOTE]
====
Add the `openshift.io/node-selector: ""` namespace annotation to the metering namespace YAML file before configuring specific node selectors for the operand pods. Specify `""` as the annotation value.
====
[source,yaml]
----
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  reporting-operator:
    spec:
      nodeSelector:
        "node-role.kubernetes.io/infra": "" <1>
  presto:
    spec:
      coordinator:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" <1>
      worker:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" <1>
  hive:
    spec:
      metastore:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" <1>
      server:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" <1>
----
<1> Add a `nodeSelector` parameter with the appropriate value to the component you want to move. You can use a `nodeSelector` in the format shown or use key-value pairs, based on the value specified for the node.
[NOTE]
====
Add the `openshift.io/node-selector: ""` namespace annotation to the metering namespace YAML file before configuring specific node selectors for the operand pods. When the `openshift.io/node-selector` annotation is set on the project, the value is used in preference to the value of the `spec.defaultNodeSelector` field in the cluster-wide `Scheduler` object.
====
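For example, you can apply the namespace annotation with a command similar to the following. This is a sketch that assumes the `openshift-metering` namespace:

[source,terminal]
----
$ oc annotate --overwrite namespace/openshift-metering openshift.io/node-selector=""
----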
.Verification
You can verify the metering node selectors by performing any of the following checks:
* Verify that all metering pods are correctly scheduled on the nodes that are configured in the `MeteringConfig` custom resource:
+
--
. Check all pods in the `openshift-metering` namespace:
+
[source,terminal]
----
$ oc --namespace openshift-metering get pods -o wide
----
+
The output shows the `NODE` and corresponding `IP` for each pod running in the `openshift-metering` namespace.
+
.Example output
[source,terminal]
----
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hive-metastore-0 1/2 Running 0 4m33s 10.129.2.26 ip-10-0-210-167.us-east-2.compute.internal <none> <none>
hive-server-0 2/3 Running 0 4m21s 10.128.2.26 ip-10-0-150-175.us-east-2.compute.internal <none> <none>
metering-operator-964b4fb55-4p699 2/2 Running 0 7h30m 10.131.0.33 ip-10-0-189-6.us-east-2.compute.internal <none> <none>
nfs-server 1/1 Running 0 7h30m 10.129.2.24 ip-10-0-210-167.us-east-2.compute.internal <none> <none>
presto-coordinator-0 2/2 Running 0 4m8s 10.131.0.35 ip-10-0-189-6.us-east-2.compute.internal <none> <none>
reporting-operator-869b854c78-8g2x5 1/2 Running 0 7h27m 10.128.2.25 ip-10-0-150-175.us-east-2.compute.internal <none> <none>
----
+
. Compare the nodes in the `openshift-metering` namespace to each node `NAME` in your cluster:
+
[source,terminal]
----
$ oc get nodes
----
+
.Example output
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
ip-10-0-147-106.us-east-2.compute.internal Ready master 14h v1.33.4
ip-10-0-150-175.us-east-2.compute.internal Ready worker 14h v1.33.4
ip-10-0-175-23.us-east-2.compute.internal Ready master 14h v1.33.4
ip-10-0-189-6.us-east-2.compute.internal Ready worker 14h v1.33.4
ip-10-0-205-158.us-east-2.compute.internal Ready master 14h v1.33.4
ip-10-0-210-167.us-east-2.compute.internal Ready worker 14h v1.33.4
----
--
* Verify that the node selector configuration in the `MeteringConfig` custom resource does not conflict with the cluster-wide node selector configuration in a way that prevents metering operand pods from being scheduled.
** Check the cluster-wide `Scheduler` object for the `spec.defaultNodeSelector` field, which shows where pods are scheduled by default:
+
[source,terminal]
----
$ oc get schedulers.config.openshift.io cluster -o yaml
----

View File

@@ -1,116 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-configure-aws-billing-correlation"]
= Configure AWS billing correlation
include::_attributes/common-attributes.adoc[]
:context: metering-configure-aws-billing-correlation
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
Metering can correlate cluster usage information with https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-reports-costusage.html[AWS detailed billing information], attaching a dollar amount to resource usage. For clusters running in EC2, you can enable this by modifying the example `aws-billing.yaml` file below.
[source,yaml]
----
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  openshift-reporting:
    spec:
      awsBillingReportDataSource:
        enabled: true
        # Replace these with where your AWS billing reports are
        # stored in S3.
        bucket: "<your-aws-cost-report-bucket>" <1>
        prefix: "<path/to/report>"
        region: "<your-buckets-region>"
  reporting-operator:
    spec:
      config:
        aws:
          secretName: "<your-aws-secret>" <2>
  presto:
    spec:
      config:
        aws:
          secretName: "<your-aws-secret>" <2>
  hive:
    spec:
      config:
        aws:
          secretName: "<your-aws-secret>" <2>
----
<1> Update the bucket, prefix, and region to the location of your AWS Detailed billing report.
<2> All `secretName` fields should be set to the name of a secret in the metering namespace containing AWS credentials in the `data.aws-access-key-id` and `data.aws-secret-access-key` fields. See the example secret file below for more details.
To enable AWS billing correlation, first ensure the AWS Cost and Usage Reports are enabled. For more information, see https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-reports-gettingstarted-turnonreports.html[Turning on the AWS Cost and Usage Report] in the AWS documentation.
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: <your-aws-secret>
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="
----
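Rather than base64-encoding the credentials yourself, you can create the secret from literal values. This is a minimal sketch that assumes the `openshift-metering` namespace:

[source,terminal]
----
$ oc create secret generic <your-aws-secret> \
  -n openshift-metering \
  --from-literal=aws-access-key-id=<your_access_key_id> \
  --from-literal=aws-secret-access-key=<your_secret_access_key>
----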
To store data in S3, the `aws-access-key-id` and `aws-secret-access-key` credentials must have read and write access to the bucket. For an example of an IAM policy granting the required permissions, see the `aws/read-write.json` file below.
[source,json]
----
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:HeadBucket",
        "s3:ListBucket",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::operator-metering-data/*", <1>
        "arn:aws:s3:::operator-metering-data" <1>
      ]
    }
  ]
}
----
<1> Replace `operator-metering-data` with the name of your bucket.
You can enable AWS billing correlation either before or after installation. Disabling it after installation can cause errors in the Reporting Operator.

View File

@@ -1,18 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-configure-hive-metastore"]
= Configuring the Hive metastore
include::_attributes/common-attributes.adoc[]
:context: metering-configure-hive-metastore
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
Hive metastore is responsible for storing all the metadata about the database tables created in Presto and Hive. By default, the metastore stores this information in a local embedded Derby database in a persistent volume attached to the pod.
Generally, the default configuration of the Hive metastore works for small clusters, but you might want to improve performance or move storage requirements out of the cluster by using a dedicated SQL database for storing the Hive metastore data.
include::modules/metering-configure-persistentvolumes.adoc[leveloffset=+1]
include::modules/metering-use-mysql-or-postgresql-for-hive.adoc[leveloffset=+1]

View File

@@ -1,22 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-configure-persistent-storage"]
= Configuring persistent storage
include::_attributes/common-attributes.adoc[]
:context: metering-configure-persistent-storage
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
Metering requires persistent storage to persist data collected by the Metering Operator and to store the results of reports. A number of different storage providers and storage formats are supported. Select your storage provider and modify the example configuration files to configure persistent storage for your metering installation.
include::modules/metering-store-data-in-s3.adoc[leveloffset=+1]
include::modules/metering-store-data-in-s3-compatible.adoc[leveloffset=+1]
include::modules/metering-store-data-in-azure.adoc[leveloffset=+1]
include::modules/metering-store-data-in-gcp.adoc[leveloffset=+1]
include::modules/metering-store-data-in-shared-volumes.adoc[leveloffset=+1]

View File

@@ -1,16 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-configure-reporting-operator"]
= Configuring the Reporting Operator
include::_attributes/common-attributes.adoc[]
:context: metering-configure-reporting-operator
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
The Reporting Operator is responsible for collecting data from Prometheus, storing the metrics in Presto, running report queries against Presto, and exposing their results via an HTTP API. Configuring the Reporting Operator is primarily done in your `MeteringConfig` custom resource.
include::modules/metering-prometheus-connection.adoc[leveloffset=+1]
include::modules/metering-exposing-the-reporting-api.adoc[leveloffset=+1]

View File

@@ -1,12 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="about-metering"]
= About Metering
include::_attributes/common-attributes.adoc[]
:context: about-metering
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
include::modules/metering-overview.adoc[leveloffset=+1]

View File

@@ -1,62 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="installing-metering"]
= Installing metering
include::_attributes/common-attributes.adoc[]
:context: installing-metering
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
Review the following sections before installing metering into your cluster.
To get started installing metering, first install the Metering Operator from the software catalog. Next, configure your instance of metering by creating a `MeteringConfig` custom resource (CR). Installing the Metering Operator creates a default `MeteringConfig` resource that you can modify using the examples in the documentation. After creating your `MeteringConfig` resource, install the metering stack. Finally, verify your installation.
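As a quick check after the stack is installed, you can confirm that the metering pods are running. This is a minimal sketch, assuming the `openshift-metering` namespace:

[source,terminal]
----
$ oc -n openshift-metering get pods
----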
include::modules/metering-install-prerequisites.adoc[leveloffset=+1]
include::modules/metering-install-operator.adoc[leveloffset=+1]
// Including this content directly in the assembly because the workflow requires linking off to the config docs, and we don't currently link
// inside of modules - klamenzo 2019-09-23
[id="metering-install-metering-stack_{context}"]
== Installing the metering stack
After adding the Metering Operator to your cluster you can install the components of metering by installing the metering stack.
== Prerequisites
* Review the xref:../metering/configuring_metering/metering-about-configuring.adoc#metering-about-configuring[configuration options]
* Create a `MeteringConfig` resource. You can begin the following process to generate a default `MeteringConfig` resource, then use the examples in the documentation to modify this default file for your specific installation. Review the following topics to create your `MeteringConfig` resource:
** For configuration options, review xref:../metering/configuring_metering/metering-about-configuring.adoc#metering-about-configuring[About configuring metering].
** At a minimum, you need to xref:../metering/configuring_metering/metering-configure-persistent-storage.adoc#metering-configure-persistent-storage[configure persistent storage] and xref:../metering/configuring_metering/metering-configure-hive-metastore.adoc#metering-configure-hive-metastore[configure the Hive metastore].
[IMPORTANT]
====
There can only be one `MeteringConfig` resource in the `openshift-metering` namespace. Any other configuration is not supported.
====
.Procedure
. From the web console, ensure you are on the *Operator Details* page for the Metering Operator in the `openshift-metering` project. You can navigate to this page by clicking *Ecosystem* -> *Installed Operators*, then selecting the Metering Operator.
. Under *Provided APIs*, click *Create Instance* on the Metering Configuration card. This opens a YAML editor with the default `MeteringConfig` resource file where you can define your configuration.
+
[NOTE]
====
For example configuration files and all supported configuration options, review the xref:../metering/configuring_metering/metering-about-configuring.adoc#metering-about-configuring[configuring metering documentation].
====
. Enter your `MeteringConfig` resource into the YAML editor and click *Create*.
The `MeteringConfig` resource begins to create the necessary resources for your metering stack. You can now move on to verifying your installation.
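If you prefer the CLI to the web console for this step, you can apply a saved `MeteringConfig` file and confirm that it was created. This is a hedged sketch, assuming the file is named `metering-config.yaml`:

[source,terminal]
----
$ oc apply -f metering-config.yaml -n openshift-metering
$ oc -n openshift-metering get meteringconfig
----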
include::modules/metering-install-verify.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="metering-install-additional-resources_{context}"]
== Additional resources
* For more information on configuration steps and available storage platforms, see xref:../metering/configuring_metering/metering-configure-persistent-storage.adoc#metering-configure-persistent-storage[Configuring persistent storage].
* For the steps to configure Hive, see xref:../metering/configuring_metering/metering-configure-hive-metastore.adoc#metering-configure-hive-metastore[Configuring the Hive metastore].

View File

@@ -1,21 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-troubleshooting-debugging"]
= Troubleshooting and debugging metering
include::_attributes/common-attributes.adoc[]
:context: metering-troubleshooting-debugging
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
Use the following sections to help troubleshoot and debug specific issues with metering.
In addition to the information in this section, be sure to review the following topics:
* xref:../metering/metering-installing-metering.adoc#metering-install-prerequisites_installing-metering[Prerequisites for installing metering].
* xref:../metering/configuring_metering/metering-about-configuring.adoc#metering-about-configuring[About configuring metering]
include::modules/metering-troubleshooting.adoc[leveloffset=+1]
include::modules/metering-debugging.adoc[leveloffset=+1]

View File

@@ -1,31 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
:context: metering-uninstall
[id="metering-uninstall"]
= Uninstalling metering
include::_attributes/common-attributes.adoc[]
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
You can remove metering from your {product-title} cluster.
[NOTE]
====
Metering does not manage or delete Amazon S3 bucket data. After uninstalling metering, you must manually clean up S3 buckets that were used to store metering data.
====
[id="metering-remove"]
== Removing the Metering Operator from your cluster
Remove the Metering Operator from your cluster by following the documentation on xref:../operators/admin/olm-deleting-operators-from-cluster.adoc#olm-deleting-operators-from-a-cluster[deleting Operators from a cluster].
[NOTE]
====
Removing the Metering Operator from your cluster does not remove its custom resource definitions or managed resources. See the following sections on xref:../metering/metering-uninstall.adoc#metering-uninstall_metering-uninstall[Uninstalling a metering namespace] and xref:../metering/metering-uninstall.adoc#metering-uninstall-crds_metering-uninstall[Uninstalling metering custom resource definitions] for steps to remove any remaining metering components.
====
include::modules/metering-uninstall.adoc[leveloffset=+1]
include::modules/metering-uninstall-crds.adoc[leveloffset=+1]

View File

@@ -1,22 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-usage-examples"]
= Examples of using metering
include::_attributes/common-attributes.adoc[]
:context: metering-usage-examples
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
Use the following example reports to get started measuring capacity, usage, and utilization in your cluster. These examples showcase the various types of reports metering offers, along with a selection of the predefined queries.
== Prerequisites
* xref:../metering/metering-installing-metering.adoc#metering-install-operator_installing-metering[Install metering]
* Review the details about xref:../metering/metering-using-metering.adoc#using-metering[writing and viewing reports].
include::modules/metering-cluster-capacity-examples.adoc[leveloffset=+1]
include::modules/metering-cluster-usage-examples.adoc[leveloffset=+1]
include::modules/metering-cluster-utilization-examples.adoc[leveloffset=+1]

View File

@@ -1,19 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="using-metering"]
= Using Metering
include::_attributes/common-attributes.adoc[]
:context: using-metering
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
== Prerequisites
* xref:../metering/metering-installing-metering.adoc#metering-install-operator_installing-metering[Install Metering]
* Review the details about the available options that can be configured for a xref:../metering/reports/metering-about-reports.adoc#metering-about-reports[report] and how they function.
include::modules/metering-writing-reports.adoc[leveloffset=+1]
include::modules/metering-viewing-report-results.adoc[leveloffset=+1]

View File

@@ -1,16 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-about-reports"]
= About Reports
include::_attributes/common-attributes.adoc[]
:context: metering-about-reports
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
A `Report` custom resource provides a method to manage periodic Extract, Transform, and Load (ETL) jobs using SQL queries. Reports are composed from other metering resources, such as `ReportQuery` resources that provide the actual SQL query to run, and `ReportDataSource` resources that define the data available to the `ReportQuery` and `Report` resources.
Many use cases are addressed by the predefined `ReportQuery` and `ReportDataSource` resources that come installed with metering. Therefore, you do not need to define your own unless you have a use case that is not covered by these predefined resources.
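For illustration, the following is a minimal `Report` sketch that uses one of the predefined queries; the report name, the `cluster-cpu-capacity` query, and the schedule shown here are illustrative assumptions based on the predefined resources.

.Example `Report` resource (sketch)
[source,yaml]
----
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: cluster-cpu-capacity-daily # illustrative name
  namespace: openshift-metering
spec:
  query: cluster-cpu-capacity # a predefined ReportQuery
  schedule:
    period: daily # generate the report once per day
----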
include::modules/metering-reports.adoc[leveloffset=+1]

View File

@@ -1,83 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-storage-locations"]
= Storage locations
include::_attributes/common-attributes.adoc[]
:context: metering-storage-locations
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
A `StorageLocation` custom resource configures where the Reporting Operator stores data. This includes the data collected from Prometheus and the results produced by generating a `Report` custom resource.
You only need to configure a `StorageLocation` custom resource if you want to store data in multiple locations, like multiple S3 buckets or both S3 and HDFS, or if you want to access a database in Hive and Presto that was not created by metering. For most users this is not a requirement, and the xref:../../metering/configuring_metering/metering-about-configuring.adoc#metering-about-configuring[documentation on configuring metering] is sufficient to configure all necessary storage components.
== Storage location examples
The following example shows the built-in local storage option, and is configured to use Hive. By default, data is stored wherever Hive is configured to use storage, such as HDFS, S3, or a `ReadWriteMany` persistent volume claim (PVC).
.Local storage example
[source,yaml]
----
apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
name: hive
labels:
operator-metering: "true"
spec:
hive: <1>
databaseName: metering <2>
unmanagedDatabase: false <3>
----
<1> If the `hive` section is present, then the `StorageLocation` resource will be configured to store data in Presto by creating the table using the Hive server. Only `databaseName` and `unmanagedDatabase` are required fields.
<2> The name of the database within Hive.
<3> If `true`, the `StorageLocation` resource will not be actively managed, and the `databaseName` is expected to already exist in Hive. If `false`, the Reporting Operator will create the database in Hive.
The following example uses an AWS S3 bucket for storage. The prefix is appended to the bucket name when constructing the path to use.
.Remote storage example
[source,yaml]
----
apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
name: example-s3-storage
labels:
operator-metering: "true"
spec:
hive:
databaseName: example_s3_storage
unmanagedDatabase: false
location: "s3a://bucket-name/path/within/bucket" <1>
----
<1> Optional: The filesystem URL for Presto and Hive to use for the database. This can be an `hdfs://` or `s3a://` filesystem URL.
There are additional optional fields that can be specified in the `hive` section:
* `defaultTableProperties`: Contains configuration options for creating tables using Hive.
* `fileFormat`: The file format used for storing files in the filesystem. See the link:https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-StorageFormatsStorageFormatsRowFormat,StorageFormat,andSerDe[Hive Documentation on File Storage Format] for a list of options and more details.
* `rowFormat`: Controls the link:https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RowFormats&SerDe[Hive row format]. This controls how Hive serializes and deserializes rows. See the link:https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RowFormats&SerDe[Hive Documentation on Row Formats and SerDe] for more details.
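The following sketch shows where these optional fields can sit within the `hive` section. The `fileFormat` and `rowFormat` values are illustrative placeholders only; see the linked Hive documentation for the values that are valid in your environment.

.Optional Hive fields example (sketch)
[source,yaml]
----
apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
  name: example-hive-options # illustrative name
  labels:
    operator-metering: "true"
spec:
  hive:
    databaseName: example_hive_options
    unmanagedDatabase: false
    location: "s3a://bucket-name/path/within/bucket"
    fileFormat: "orc" # placeholder: a Hive storage format
    rowFormat: "SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'" # placeholder: a Hive row format clause
----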
== Default storage location
If an annotation `storagelocation.metering.openshift.io/is-default` exists and is set to `true` on a `StorageLocation` resource, then that resource becomes the default storage resource. Any components with a storage configuration option where the storage location is not specified will use the default storage resource. There can be only one default storage resource. If more than one resource with the annotation exists, an error is logged because the Reporting Operator cannot determine the default.
.Default storage example
[source,yaml]
----
apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
name: example-s3-storage
labels:
operator-metering: "true"
annotations:
storagelocation.metering.openshift.io/is-default: "true"
spec:
hive:
databaseName: example_s3_storage
unmanagedDatabase: false
location: "s3a://bucket-name/path/within/bucket"
----

View File

@@ -1,2 +0,0 @@
:_mod-docs-content-type: ASSEMBLY

View File

@@ -1,13 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="ocp-certificates"]
= Certificate types and descriptions
include::_attributes/common-attributes.adoc[]
:context: ocp-certificates
toc::[]
== Certificate validation
{product-title} monitors the validity of the cluster certificates that it issues and manages. The {product-title} alerting framework has rules to help identify when a certificate issue is about to occur. These rules consist of the following checks:
* API server client certificate expiration is less than five minutes.
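The following is a minimal sketch of a custom alerting rule that expresses this check, assuming the standard `apiserver_client_certificate_expiration_seconds` histogram metrics are available; the rule, group, and alert names are illustrative and this is not the platform's built-in rule definition.

.Example alerting rule (sketch)
[source,yaml]
----
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-client-certificate-expiry # illustrative name
  namespace: openshift-monitoring
spec:
  groups:
  - name: example-certificate-alerts
    rules:
    - alert: ClientCertificateExpiringSoon
      expr: |
        apiserver_client_certificate_expiration_seconds_count{job="apiserver"} > 0
        and on(job)
        histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job="apiserver"}[5m]))) < 300
      labels:
        severity: warning
      annotations:
        summary: An API server client certificate expires in less than five minutes.
----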

View File

@@ -1,39 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="rh-required-whitelisted-IP-addresses-for-sre-access_{context}"]
include::_attributes/attributes-openshift-dedicated.adoc[]
include::_attributes/common-attributes.adoc[]
= Required allowlist IP addresses for SRE cluster access
:context: rh-required-whitelisted-IP-addresses-for-sre-access
toc::[]
[id="required-whitelisted-overview_{context}"]
== Overview
For Red Hat SREs to troubleshoot issues within {product-title} clusters, they must have ingress access to the API server from the allowlist IP addresses.
[id="required-whitelisted-access_{context}"]
== Obtaining allowlisted IP addresses
{product-title} users can use an {cluster-manager} CLI command to obtain the most up-to-date allowlist IP addresses for the Red Hat machines that are necessary for SRE access to {product-title} clusters.
[NOTE]
====
These allowlist IP addresses are not permanent and are subject to change. You must continuously review the API output for the most current allowlist IP addresses.
====
.Prerequisites
* You installed the link:https://console.redhat.com/openshift/downloads[OpenShift Cluster Manager API command-line interface (`ocm`)].
* You are able to configure your firewall to include the allowlist IP addresses.
.Procedure
. To get the current allowlist IP addresses needed for SRE access to your {product-title} cluster, run the following command:
+
[source,terminal]
----
$ ocm get /api/clusters_mgmt/v1/trusted_ip_addresses | jq -r '.items[].id'
----
. Configure your firewall to grant access to the allowlist IP addresses.

View File

@@ -1,18 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="zero-trust-manager-features"]
= Zero Trust Workload Identity Manager components and features
include::_attributes/common-attributes.adoc[]
:context: zero-trust-manager-features
// SPIFFE SPIRE components
include::modules/zero-trust-manager-about-components.adoc[leveloffset=+1]
//SPIRE features
include::modules/zero-trust-manager-about-features.adoc[leveloffset=+1]

View File

@@ -1,90 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="osd-persistent-storage-aws-efs-csi"]
= Setting up AWS Elastic File Service CSI Driver Operator
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: osd-persistent-storage-aws-efs-csi
toc::[]
// Content similar to persistent-storage-csi-aws-efs.adoc. Modules are reused.
[IMPORTANT]
====
This procedure is specific to the link:https://github.com/openshift/aws-efs-csi-driver-operator[AWS EFS CSI Driver Operator] (a Red Hat operator), which is only applicable for {product-title} 4.10 and later versions.
====
== Overview
{product-title} is capable of provisioning persistent volumes (PVs) using the link:https://github.com/openshift/aws-efs-csi-driver[AWS EFS CSI driver].
Familiarity with link:https://access.redhat.com/documentation/en-us/openshift_container_platform/latest/html-single/storage/index#persistent-storage-overview_understanding-persistent-storage[persistent storage] and link:https://access.redhat.com/documentation/en-us/openshift_container_platform/latest/html-single/storage/index#persistent-storage-csi[configuring CSI volumes] is recommended when working with a CSI Operator and driver.
After installing the AWS EFS CSI Driver Operator, {product-title} installs the AWS EFS CSI Operator and the AWS EFS CSI driver by default in the `openshift-cluster-csi-drivers` namespace. This allows the AWS EFS CSI Driver Operator to create CSI-provisioned PVs that mount to AWS EFS assets.
* After installation, the _AWS EFS CSI Driver Operator_ does not create a storage class by default for creating persistent volume claims (PVCs). However, you can manually create the AWS EFS `StorageClass`; a minimal example follows the note below.
The AWS EFS CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand.
This eliminates the need for cluster administrators to pre-provision storage.
* The _AWS EFS CSI driver_ enables you to create and mount AWS EFS PVs.
[NOTE]
====
Amazon Elastic File System (Amazon EFS) only supports regional volumes, not zonal volumes.
====
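As noted above, you can manually create the AWS EFS `StorageClass`. The following is a minimal sketch that assumes dynamic provisioning through EFS access points; the storage class name, file system ID, and directory permissions are placeholders.

.Example AWS EFS `StorageClass` (sketch)
[source,yaml]
----
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc # illustrative name
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap # create an EFS access point for each provisioned volume
  fileSystemId: fs-0123456789abcdef0 # placeholder: your EFS file system ID
  directoryPerms: "700" # permissions for the root directory of each provisioned volume
----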
include::modules/persistent-storage-csi-about.adoc[leveloffset=+1]
:FeatureName: AWS EFS
include::modules/persistent-storage-efs-csi-driver-operator-setup.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-olm-operator-install.adoc[leveloffset=+2]
.Next steps
ifdef::openshift-rosa[]
* If you are using Amazon EFS with AWS Secure Token Service (STS), you must configure the {FeatureName} CSI driver with STS. For more information, see xref:../../storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc#efs-sts_osd-persistent-storage-aws-efs-csi[Configuring {FeatureName} CSI Driver with STS].
endif::openshift-rosa[]
ifdef::openshift-dedicated[]
* xref:../../storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc#persistent-storage-csi-efs-driver-install_osd-persistent-storage-aws-efs-csi[Installing the {FeatureName} CSI Driver]
endif::openshift-dedicated[]
// Separate procedure for OSD and ROSA.
ifdef::openshift-rosa[]
include::modules/osd-persistent-storage-csi-efs-sts.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc#persistent-storage-csi-olm-operator-install_osd-persistent-storage-aws-efs-csi[Installing the {FeatureName} CSI Driver Operator]
* xref:../../storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc#persistent-storage-csi-efs-driver-install_osd-persistent-storage-aws-efs-csi[Installing the {FeatureName} CSI Driver]
endif::openshift-rosa[]
include::modules/persistent-storage-csi-efs-driver-install.adoc[leveloffset=+2]
:StorageClass: AWS EFS
:Provisioner: efs.csi.aws.com
include::modules/storage-create-storage-class.adoc[leveloffset=+1]
include::modules/storage-create-storage-class-console.adoc[leveloffset=+2]
include::modules/storage-create-storage-class-cli.adoc[leveloffset=+2]
include::modules/persistent-storage-csi-efs-create-volume.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-dynamic-provisioning-aws-efs.adoc[leveloffset=+1]
If you have problems setting up dynamic provisioning, see xref:../../storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc#efs-troubleshooting_osd-persistent-storage-aws-efs-csi[Amazon Elastic File Storage troubleshooting].
[role="_additional-resources"]
.Additional resources
* xref:../../storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc#efs-create-volume_osd-persistent-storage-aws-efs-csi[Creating and configuring access to Amazon EFS volume(s)]
* xref:../../storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc#storage-create-storage-class_osd-persistent-storage-aws-efs-csi[Creating the {FeatureName} storage class]
// Undefine {StorageClass} attribute, so that any mistakes are easily spotted
:!StorageClass:
include::modules/persistent-storage-csi-efs-static-pv.adoc[leveloffset=+1]
If you have problems setting up static PVs, see xref:../../storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc#efs-troubleshooting_osd-persistent-storage-aws-efs-csi[Amazon Elastic File Storage troubleshooting].
include::modules/persistent-storage-csi-efs-security.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-efs-troubleshooting.adoc[leveloffset=+1]
:FeatureName: AWS EFS
include::modules/persistent-storage-csi-olm-operator-uninstall.adoc[leveloffset=+1]
[role="_additional-resources"]
== Additional resources
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/latest/html-single/storage/index#persistent-storage-csi[Configuring CSI volumes]

View File

@@ -1,216 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="updating-restricted-network-cluster"]
= Updating a restricted network cluster
include::_attributes/common-attributes.adoc[]
:context: updating-restricted-network-cluster
toc::[]
You can update a restricted network {product-title} cluster by using the `oc` command-line interface (CLI) or using the OpenShift Update Service.
== Updating a restricted network cluster using the CLI
You can update a restricted network {product-title} cluster by using the `oc` command-line interface (CLI).
A restricted network environment is one in which your cluster nodes cannot access the internet. For this reason, you must populate a registry with the installation images. If your registry host cannot access both the internet and the cluster, you can mirror the images to a file system that is disconnected from that environment and then bring that host or removable media across that gap. If the local container registry and the cluster are connected to the mirror registry's host, you can push the release images directly to the local registry.
If multiple clusters are present within the restricted network, mirror the required release images to a single container image registry and use that registry to update all the clusters.
=== Prerequisites
* Have access to the internet to obtain the necessary container images.
* Have write access to a container registry in the restricted-network environment to push and pull images. The container registry must be compatible with Docker registry API v2.
* You must have the `oc` command-line interface (CLI) tool installed.
* Have access to the cluster as a user with `admin` privileges.
See xref:../authentication/using-rbac.adoc[Using RBAC to define and apply permissions].
* Have a recent xref:../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must xref:../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore your cluster to a previous state].
* Ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy.
* If your cluster uses manually maintained credentials, ensure that the Cloud Credential Operator (CCO) is in an upgradeable state. For more information, see _Upgrading clusters with manually maintained credentials_ for xref:../installing/installing_aws/manually-creating-iam.adoc#manually-maintained-credentials-upgrade_manually-creating-iam-aws[AWS], xref:../installing/installing_azure/manually-creating-iam-azure.adoc#manually-maintained-credentials-upgrade_manually-creating-iam-azure[Azure], or xref:../installing/installing_gcp/manually-creating-iam-gcp.adoc#manually-maintained-credentials-upgrade_manually-creating-iam-gcp[GCP].
//STS is not currently supported in a restricted network environment, but the following bullet can be uncommented when that changes.
//* If your cluster uses manually maintained credentials with the AWS Secure Token Service (STS), obtain a copy of the `ccoctl` utility from the release image being upgraded to and use it to process any updated credentials. For more information, see xref:../authentication/managing_cloud_provider_credentials/cco-mode-sts.adoc#sts-mode-upgrading[_Upgrading an OpenShift Container Platform cluster configured for manual mode with STS_].
* If you run an Operator or you have configured any application with a pod disruption budget, you might experience an interruption during the upgrade process. If `minAvailable` is set to `1` in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs, which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain. A minimal example of such a pod disruption budget follows this list.
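The following is a minimal sketch of a pod disruption budget with `minAvailable` set to `1`; the resource names and the pod selector are illustrative.

.Example pod disruption budget (sketch)
[source,yaml]
----
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb # illustrative name
  namespace: example-namespace
spec:
  minAvailable: 1 # at least one matching pod must remain available, which can block a node drain
  selector:
    matchLabels:
      app: example-app
----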
[id="updating-restricted-network-mirror-host"]
=== Preparing your mirror host
Before you perform the mirror procedure, you must prepare the host to retrieve content
and push it to the remote location.
include::modules/cli-installing-cli.adoc[leveloffset=+3]
// this file doesn't exist, so I'm including the one that should pick up more changes from Clayton's PR - modules/installation-adding-mirror-registry-pull-secret.adoc[leveloffset=+1]
include::modules/installation-adding-registry-pull-secret.adoc[leveloffset=+3]
[id="update-mirror-repository_updating-restricted-network-cluster"]
=== Mirroring the {product-title} image repository
You must mirror container images onto a mirror registry before you can update a cluster in a restricted network environment. You can also use this procedure in unrestricted networks to ensure your clusters only use container images that have satisfied your organizational controls on external content.
There are two supported methods for mirroring images onto a mirror registry:
* Using the oc-mirror OpenShift CLI (`oc`) plugin
* Using the `oc adm release mirror` command
Choose one of the following supported options.
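For illustration, the oc-mirror plugin is driven by an `ImageSetConfiguration` file similar to the following sketch; the registry URL and update channel shown are placeholders, and the module that follows describes the full procedure.

.Example `ImageSetConfiguration` (sketch)
[source,yaml]
----
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  registry:
    imageURL: mirror.example.com/metadata/oc-mirror # placeholder: where oc-mirror stores its metadata
    skipTLS: false
mirror:
  platform:
    channels:
    - name: stable-4.14 # placeholder: the update channel to mirror
      type: ocp
----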
include::modules/update-mirror-repository-oc-mirror.adoc[leveloffset=+3]
[role="_additional-resources"]
.Additional resources
* xref:../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plugin]
include::modules/update-mirror-repository.adoc[leveloffset=+3]
include::modules/machine-health-checks-pausing.adoc[leveloffset=+2]
include::modules/update-restricted.adoc[leveloffset=+2]
include::modules/images-configuration-registry-mirror.adoc[leveloffset=+2]
include::modules/generating-icsp-object-scoped-to-a-registry.adoc[leveloffset=+2]
[id="additional-resources_security-container-signature"]
[role="_additional-resources"]
== Additional resources
* xref:../operators/admin/olm-restricted-networks.adoc#olm-restricted-networks[Using Operator Lifecycle Manager on restricted networks]
* xref:../post_installation_configuration/machine-configuration-tasks.adoc#machine-config-overview-post-install-machine-configuration-tasks[Machine Config Overview]
[id="update-restricted-network-cluster-update-service"]
== Updating a restricted network cluster using the OpenShift Update Service
include::modules/update-service-overview.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../updating/understanding-upgrade-channels-release.adoc#understanding-upgrade-channels_understanding-upgrade-channels-releases[Understanding upgrade channels and releases]
For clusters with internet accessibility, Red Hat provides over-the-air updates through an {product-title} update service as a hosted service located behind public APIs. However, clusters in a restricted network have no way to access public APIs for update information.
To provide a similar update experience in a restricted network, you can install and configure the OpenShift Update Service locally so that it is available within a disconnected environment.
The following sections describe how to provide over-the-air updates for your disconnected cluster and its underlying operating system.
[id="update-service-prereqs"]
=== Prerequisites
* For more information on installing Operators, see xref:../operators/user/olm-installing-operators-in-namespace.adoc#olm-installing-operators-in-namespace[Installing Operators in your namespace].
[id="registry-configuration-for-update-service"]
=== Configuring access to a secured registry for the OpenShift update service
If the release images are contained in a secure registry, complete the steps in xref:../registry/configuring-registry-operator.adoc#images-configuration-cas_configuring-registry-operator[Configuring additional trust stores for image registry access] along with the following changes for the update service.
The OpenShift Update Service Operator needs the config map key name `updateservice-registry` in the registry CA cert.
.Image registry CA config map example for the update service
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
name: my-registry-ca
data:
updateservice-registry: | <1>
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
registry-with-port.example.com..5000: | <2>
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
----
<1> The OpenShift Update Service Operator requires the config map key name `updateservice-registry` in the registry CA cert.
<2> If the registry has the port, such as `registry-with-port.example.com:5000`, `:` should be replaced with `..`.
include::modules/images-update-global-pull-secret.adoc[leveloffset=+2]
[id="update-service-install"]
=== Installing the OpenShift Update Service Operator
To install the OpenShift Update Service, you must first install the OpenShift Update Service Operator by using the {product-title} web console or CLI.
[NOTE]
====
For clusters that are installed on restricted networks, also known as disconnected clusters, Operator Lifecycle Manager by default cannot access the Red Hat-provided OperatorHub sources hosted on remote registries because those remote sources require full internet connectivity. For more information, see xref:../operators/admin/olm-restricted-networks.adoc#olm-restricted-networks[Using Operator Lifecycle Manager on restricted networks].
====
include::modules/update-service-install-web-console.adoc[leveloffset=+2]
include::modules/update-service-install-cli.adoc[leveloffset=+2]
include::modules/update-service-graph-data.adoc[leveloffset=+2]
[id="update-service-mirror-release_updating-restricted-network-cluster"]
=== Mirroring the {product-title} image repository
You must mirror container images onto a mirror registry before you can update a cluster in a restricted network environment. You can also use this procedure in unrestricted networks to ensure your clusters only use container images that have satisfied your organizational controls on external content.
There are two supported methods for mirroring images onto a mirror registry:
* Using the oc-mirror OpenShift CLI (`oc`) plugin
* Using the `oc adm release mirror` command
Choose one of the following supported options.
//The module below is being used twice in this assembly, so this instance needs to have a unique context set in order for its ID to be unique. In the future, if and when the two main sections of this webpage are split into their own assemblies/pages, the context attributes below can be removed.
:!context:
:context: osus-restricted-network-cluster
include::modules/update-mirror-repository-oc-mirror.adoc[leveloffset=+3]
[role="_additional-resources"]
.Additional resources
* xref:../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plugin]
:!context:
:context: updating-restricted-network-cluster
include::modules/update-service-mirror-release.adoc[leveloffset=+3]
[id="update-service-create-service"]
=== Creating an OpenShift Update Service application
You can create an OpenShift Update Service application by using the {product-title} web console or CLI.
include::modules/update-service-create-service-web-console.adoc[leveloffset=+3]
include::modules/update-service-create-service-cli.adoc[leveloffset=+3]
[NOTE]
====
The policy engine route name must not be more than 63 characters based on RFC-1123. If you see `ReconcileCompleted` status as `false` with the reason `CreateRouteFailed` caused by `host must conform to DNS 1123 naming convention and must be no more than 63 characters`, try creating the Update Service with a shorter name.
====
include::modules/update-service-configure-cvo.adoc[leveloffset=+3]
[NOTE]
====
See xref:../networking/enable-cluster-wide-proxy.adoc#nw-proxy-configure-object[Enabling the cluster-wide proxy] to configure the CA to trust the update server.
====
[id="update-service-delete-service"]
=== Deleting an OpenShift Update Service application
You can delete an OpenShift Update Service application by using the {product-title} web console or CLI.
include::modules/update-service-delete-service-web-console.adoc[leveloffset=+3]
include::modules/update-service-delete-service-cli.adoc[leveloffset=+3]
[id="update-service-uninstall"]
=== Uninstalling the OpenShift Update Service Operator
To uninstall the OpenShift Update Service, you must first delete all OpenShift Update Service applications by using the {product-title} web console or CLI.
include::modules/update-service-uninstall-web-console.adoc[leveloffset=+3]
include::modules/update-service-uninstall-cli.adoc[leveloffset=+3]

View File

@@ -1,102 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="about-hcp"]
= Learn more about ROSA with HCP
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: about-hcp
toc::[]
{hcp-title-first} offers a reduced-cost solution to create a managed ROSA cluster with a focus on efficiency. You can quickly create a new cluster and deploy applications in minutes.
== Key features of {hcp-title}
* {hcp-title} requires a minimum of only two nodes, making it ideal for smaller projects while still being able to scale to support larger projects and enterprises.
* The underlying control plane infrastructure is fully managed. Control plane components, such as the API server and etcd database, are hosted in a Red{nbsp}Hat-owned AWS account.
* Provisioning time is approximately 10 minutes.
* Customers can upgrade the control plane and machine pools separately, which means they do not have to shut down the entire cluster during upgrades.
== Getting started with {hcp-title}
Use the following sections to find content to help you learn about and use {hcp-title}.
[id="architect"]
=== Architect
[options="header",cols="3*"]
|===
| Learn about {hcp-title} |Plan {hcp-title} deployment |Additional resources
| xref:../architecture/index.adoc#architecture-overview[Architecture overview]
| xref:../backup_and_restore/application_backup_and_restore/oadp-intro.adoc#oadp-api[Back up and restore]
ifdef::openshift-rosa-hcp[]
| xref:../rosa_architecture/rosa_policy_service_definition/rosa-hcp-life-cycle.adoc#rosa-hcp-life-cycle[{hcp-title} life cycle]
endif::openshift-rosa-hcp[]
| xref:../architecture/rosa-architecture-models.adoc#rosa-architecture-models[{hcp-title} architecture]
ifdef::openshift-rosa-hcp[]
| xref:../rosa_architecture/rosa_policy_service_definition/rosa-hcp-service-definition.adoc#rosa-hcp-service-definition[{hcp-title} service definition]
endif::openshift-rosa-hcp[]
|
|
| xref:../support/index.adoc#support-overview[Getting support]
|===
[id="cluster-administrator"]
=== Cluster Administrator
[options="header",cols="4*"]
|===
|Learn about {hcp-title} |Deploy {hcp-title} |Manage {hcp-title} |Additional resources
| xref:../architecture/rosa-architecture-models.adoc#rosa-architecture-models[{hcp-title} architecture]
| xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-quickly[Installing {hcp-title}]
// | xref :../observability/logging/cluster-logging.adoc#cluster-logging[Logging]
| xref:../support/index.adoc#support-overview[Getting support]
| link:https://learn.openshift.com/?extIdCarryOver=true&sc_cid=701f2000001Css5AAC[OpenShift Interactive Learning Portal]
| xref:../storage/index.adoc#storage-overview[Storage]
| xref:../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring]
ifdef::openshift-rosa-hcp[]
| xref:../rosa_architecture/rosa_policy_service_definition/rosa-hcp-life-cycle.adoc#rosa-hcp-life-cycle[{hcp-title} life cycle]
endif::openshift-rosa-hcp[]
|
| xref:../backup_and_restore/application_backup_and_restore/oadp-intro.adoc#oadp-api[Back up and restore]
|
|
//adding condition to get hcp upgrading PR built
ifdef::openshift-rosa-hcp[]
xref:../upgrading/rosa-hcp-upgrading.adoc#rosa-hcp-upgrading[Upgrading]
endif::openshift-rosa-hcp[]
|
|===
[id="Developer"]
=== Developer
[options="header",cols="3*"]
|===
|Learn about application development in {hcp-title} |Deploy applications |Additional resources
| link:https://developers.redhat.com/[Red{nbsp}Hat Developers site]
| xref:../applications/index.adoc#building-applications-overview[Building applications overview]
| xref:../support/index.adoc#support-overview[Getting support]
| link:https://developers.redhat.com/products/openshift-dev-spaces/overview[{openshift-dev-spaces-productname} (formerly Red{nbsp}Hat CodeReady Workspaces)]
| xref:../operators/index.adoc#operators-overview[Operators overview]
|
|
| xref:../openshift_images/index.adoc#overview-of-images[Images]
|
|
| xref:../cli_reference/odo-important-update.adoc#odo-important_update[Developer-focused CLI]
|
|===

View File

@@ -1,129 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-rosa-hcp-sts-explained"]
= AWS STS and ROSA with HCP explained
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-rosa-hcp-sts-explained
toc::[]
//rosaworkshop.io content metadata
//Brought into ROSA product docs 2023-10-26
//Modified for HCP 2024-4-16
{hcp-title-first} uses the Amazon Web Services (AWS) Security Token Service (STS) for AWS Identity and Access Management (IAM) to obtain the necessary credentials to interact with resources in your AWS account.
[id="credential-methods-rosa-hcp"]
== AWS STS credential method
As part of {hcp-title}, Red{nbsp}Hat must be granted the necessary permissions to manage infrastructure resources in your AWS account.
{hcp-title} grants the cluster's automation software limited, short-term access to resources in your AWS account.
The STS method uses predefined roles and policies to grant temporary, least-privilege permissions to IAM roles. The credentials typically expire an hour after being requested. After the credentials expire, AWS no longer recognizes them, and they no longer provide account access for API requests made with them. For more information, see the link:https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html[AWS documentation].
AWS IAM STS roles must be created for each {hcp-title} cluster. The ROSA command-line interface (CLI) (`rosa`) manages the STS roles and helps you attach the ROSA-specific, AWS-managed policies to each role. The CLI provides the commands and files to create the roles and attach the AWS-managed policies, as well as an option to have the CLI automatically create the roles and attach the policies for you.
//See [insert new xref when we have one for HCP] for more information about the different `--mode` options.
[id="hcp-sts-security"]
== AWS STS security
Security features for AWS STS include:
* An explicit and limited set of policies that the user creates ahead of time.
** The user can review every requested permission needed by the platform.
* The service cannot do anything outside of those permissions.
* There is no need to rotate or revoke credentials. Whenever the service needs to perform an action, it obtains credentials that expire in one hour or less.
* Credential expiration reduces the risks of credentials leaking and being reused.
{hcp-title} grants cluster software components least-privilege permissions with short-term security credentials to specific and segregated IAM roles. The credentials are associated with IAM roles specific to each component and cluster that makes AWS API calls. This method aligns with principles of least-privilege and secure practices in cloud service resource management.
[id="components-specific-to-rosa-hcp-with-sts"]
== Components of {hcp-title}
* *AWS infrastructure* - The infrastructure required for the cluster including the Amazon EC2 instances, Amazon EBS storage, and networking components. See xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-aws-compute-types_rosa-service-definition[AWS compute types] to see the supported instance types for compute nodes and xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-ec2-instances_rosa-sts-aws-prereqs[provisioned AWS infrastructure] for more information on cloud resource configuration.
// This section needs to remain hidden until the HCP migration is completed.
// * *AWS infrastructure* - The infrastructure required for the cluster including the Amazon EC2 instances, Amazon EBS storage, and networking components. See xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-aws-compute-types_rosa-service-definition[AWS compute types] to see the supported instance types for compute nodes and xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-ec2-instances_rosa-sts-aws-prereqs[provisioned AWS infrastructure] for more information on cloud resource configuration.
* *AWS STS* - A method for granting short-term, dynamic tokens to provide users the necessary permissions to temporarily interact with your AWS account resources.
* *OpenID Connect (OIDC)* - A mechanism for cluster Operators to authenticate with AWS, assume the cluster roles through a trust policy, and obtain temporary credentials from AWS IAM STS to make the required API calls.
* *Roles and policies* - The roles and policies used by {hcp-title} can be divided into account-wide roles and policies and Operator roles and policies.
+
The policies determine the allowed actions for each of the roles.
ifdef::openshift-rosa[]
See xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources] for more details about the individual roles and policies. See xref:../rosa_planning/rosa-sts-ocm-role.adoc#rosa-sts-ocm-role[ROSA IAM role resource] for more details about trust policies.
endif::openshift-rosa[]
ifdef::openshift-rosa-hcp[]
See xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources] for more details about the individual roles and policies. See xref:../rosa_planning/rosa-hcp-prepare-iam-roles-resources.adoc#rosa-hcp-prepare-iam-roles-resources[Required IAM roles and resources] for more details on preparing these resources in your cluster.
endif::openshift-rosa-hcp[]
+
--
** The account-wide roles are:
*** `<prefix>-HCP-ROSA-Worker-Role`
*** `<prefix>-HCP-ROSA-Support-Role`
*** `<prefix>-HCP-ROSA-Installer-Role`
** The account-wide AWS-managed policies are:
*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAInstallerPolicy.html[ROSAInstallerPolicy]
*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAWorkerInstancePolicy.html[ROSAWorkerInstancePolicy]
*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSASRESupportPolicy.html[ROSASRESupportPolicy]
*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAIngressOperatorPolicy.html[ROSAIngressOperatorPolicy]
*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAAmazonEBSCSIDriverOperatorPolicy.html[ROSAAmazonEBSCSIDriverOperatorPolicy]
*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSACloudNetworkConfigOperatorPolicy.html[ROSACloudNetworkConfigOperatorPolicy]
*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAControlPlaneOperatorPolicy.html[ROSAControlPlaneOperatorPolicy]
*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAImageRegistryOperatorPolicy.html[ROSAImageRegistryOperatorPolicy]
*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAKMSProviderPolicy.html[ROSAKMSProviderPolicy]
*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAKubeControllerPolicy.html[ROSAKubeControllerPolicy]
*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAManageSubscription.html[ROSAManageSubscription]
*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSANodePoolManagementPolicy.html[ROSANodePoolManagementPolicy]
--
+
[NOTE]
====
Certain policies are used by the cluster Operator roles, listed below. The Operator roles are created in a second step because they are dependent on an existing cluster name and cannot be created at the same time as the account-wide roles.
====
+
** The Operator roles are:
*** `<operator_role_prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials`
*** `<operator_role_prefix>-openshift-cloud-network-config-controller-cloud-credentials`
*** `<operator_role_prefix>-openshift-machine-api-aws-cloud-credentials`
*** `<operator_role_prefix>-openshift-cloud-credential-operator-cloud-credentials`
*** `<operator_role_prefix>-openshift-image-registry-installer-cloud-credentials`
*** `<operator_role_prefix>-openshift-ingress-operator-cloud-credentials`
+
** Trust policies are created for each account-wide role and each Operator role.
[id="deploying-rosa-hcp-with-sts-cluster"]
== Deploying a {hcp-title} cluster
Deploying a {hcp-title} cluster involves the following steps:
. You create the account-wide roles.
. You create the Operator roles.
. Red{nbsp}Hat uses AWS STS to send the required permissions to AWS that allow AWS to create and attach the corresponding AWS-managed Operator policies.
. You create the OIDC provider.
. You create the cluster.
During the cluster creation process, the ROSA CLI creates the required JSON files for you and outputs the commands you need. If desired, the ROSA CLI can also run the commands for you.
The ROSA CLI can create the roles automatically when you use the `--mode auto` flag, or you can create them manually by using the `--mode manual` flag. For further details about deployment, see xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-cluster-using-customizations_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster with customizations].
[id="hcp-sts-process"]
== {hcp-title} workflow
The user creates the required account-wide roles. During role creation, a trust policy, known as a cross-account trust policy, is created, which allows a Red{nbsp}Hat-owned role to assume the roles. Trust policies are also created for the EC2 service, which allows workloads on EC2 instances to assume roles and obtain credentials. AWS assigns a corresponding permissions policy to each role.
After the account-wide roles and policies are created, the user can create a cluster. After cluster creation is initiated, the user creates the Operator roles so that cluster Operators can make AWS API calls. These roles are then assigned to the corresponding permission policies that were created earlier and a trust policy with an OIDC provider. The Operator roles differ from the account-wide roles in that they ultimately represent the pods that need access to AWS resources. Because a user cannot attach IAM roles to pods, they must create a trust policy with an OIDC provider so that the Operator, and therefore the pods, can access the roles they need.
After the user assigns the roles to the corresponding policy permissions, the final step is creating the OIDC provider.
image::cloud-experts-sts-explained_creation_flow_hcp.png[]
When a new role is needed, the workload currently using the Red{nbsp}Hat role will assume the role in the AWS account, obtain temporary credentials from AWS STS, and begin performing the actions using API calls within the user's AWS account as permitted by the assumed role's permissions policy. The credentials are temporary and have a maximum duration of one hour.
image::cloud-experts-sts-explained_highlevel.png[]
//The entire workflow is depicted in the following graphic:
//image::cloud-experts-sts-explained_entire_flow_hcp.png[]
Operators use the following process to obtain the requisite credentials to perform their tasks. Each Operator is assigned an Operator role, a permissions policy, and a trust policy with an OIDC provider. The Operator assumes the role by passing a JSON web token that contains the role and a token file (`web_identity_token_file`) to the OIDC provider, which then authenticates the signed key with a public key. The public key is created during cluster creation and stored in an S3 bucket. The Operator then confirms that the subject in the signed token file matches the role in the role trust policy, which ensures that the OIDC provider can only obtain the allowed role. The OIDC provider then returns the temporary credentials to the Operator so that the Operator can make AWS API calls. For a visual representation, see the following diagram:
image::cloud-experts-sts-explained_oidc_op_roles_hcp.png[]