// mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00
// openshift-docs/rosa_architecture/index.adoc, last updated 2024-09-24 15:14:15 +05:30
:_mod-docs-content-type: ASSEMBLY
[id="welcome-index"]
= {product-title} {product-version} Documentation
include::_attributes/common-attributes.adoc[]
:context: welcome-index
toc::[]
[.lead]
ifndef::openshift-rosa,openshift-telco[]
Welcome to the official {product-title} {product-version} documentation, where you can learn about {product-title} and start exploring its features.
endif::openshift-rosa,openshift-telco[]
ifdef::openshift-rosa[]
Welcome to the official {product-title} (ROSA) documentation, where you can learn about ROSA and start exploring its features.
To learn about ROSA, including how to interact with it by using {cluster-manager-first} and command-line interface (CLI) tools, the consumption experience, and the integration with Amazon Web Services (AWS) services, start with xref:../rosa_architecture/rosa-understanding.adoc#rosa-understanding[the Introduction to ROSA documentation].
image::291_OpenShift_on_AWS_Intro_1122_docs.png[{product-title}]
endif::openshift-rosa[]
ifdef::openshift-rosa[]
To navigate the ROSA documentation, use the left navigation bar.
endif::[]
ifndef::openshift-rosa[]
ifndef::openshift-rosa,openshift-dedicated,openshift-dpu,openshift-telco[]
To navigate the {product-title} {product-version} documentation, you can use one of the following methods:

* Use the left navigation bar to browse the documentation.
* Select the task that interests you from the contents of this Welcome page.
endif::openshift-rosa,openshift-dedicated,openshift-dpu,openshift-telco[]
ifdef::openshift-dpu[]
To navigate the {product-title} data processing unit (DPU) documentation, use the left navigation bar.
For documentation that is not DPU-specific, see the link:https://docs.openshift.com/container-platform/latest/welcome/index.html[{product-title} documentation].
endif::[]
ifdef::openshift-telco[]
[.lead]
[IMPORTANT]
====
The telco core and telco RAN DU reference design specifications (RDS) are no longer published at this location.
For the latest version of the telco RDS, see link:https://docs.openshift.com/container-platform/{product-version}/scalability_and_performance/telco_ref_design_specs/telco-ref-design-specs-overview.html[Telco core and RAN DU reference design specifications].
====
endif::[]
ifdef::openshift-dedicated[]
To navigate the {product-title} documentation, use the left navigation bar.
endif::[]
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
Start with xref:../architecture/architecture.adoc#architecture-overview-architecture[Architecture] and
xref:../security/container_security/security-understanding.adoc#understanding-security[Security and compliance].
ifdef::openshift-enterprise,openshift-webscale[]
Next, view the
xref:../release_notes/ocp-4-15-release-notes.adoc#ocp-4-15-release-notes[release notes].
endif::[]
ifdef::openshift-online,openshift-aro[]
Start with **xref:../architecture/architecture.adoc#architecture-overview-architecture[Architecture]**.
endif::[]
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
== Cluster installer activities
Explore the following {product-title} installation tasks:

- **xref:../installing/overview/index.adoc#ocp-installation-overview[{product-title} installation overview]**: Depending on the platform, you can install {product-title} on installer-provisioned or user-provisioned infrastructure. The {product-title} installation program provides the flexibility to deploy {product-title} on a range of different platforms.
// PR open https://github.com/openshift/openshift-docs/pull/77474
//- **xref:../installing/installing_alibaba/installing-alibaba-assisted-installer[Installing a cluster on {alibaba} by using the Assisted Installer]**: On {alibaba}, you can install {product-title} by using the Assisted Installer. This is currently a Technology Preview feature only.
- **xref:../installing/installing_aws/preparing-to-install-on-aws.adoc#preparing-to-install-on-aws[Install a cluster on {aws-short}]**: On AWS, you can install {product-title} on installer-provisioned infrastructure or user-provisioned infrastructure.
- **xref:../installing/installing_azure/preparing-to-install-on-azure.adoc#preparing-to-install-on-azure[Install a cluster on {azure-full}]**: On Microsoft Azure, you can install {product-title} on installer-provisioned infrastructure or user-provisioned infrastructure.
- **xref:../installing/installing_azure_stack_hub/preparing-to-install-on-azure-stack-hub.adoc#preparing-to-install-on-azure-stack-hub[Install a cluster on {azure-full} Stack Hub]**: On Microsoft Azure Stack Hub, you can install {product-title} on installer-provisioned infrastructure or user-provisioned infrastructure.
- **xref:../installing/installing_on_prem_assisted/installing-on-prem-assisted.adoc#using-the-assisted-installer_installing-on-prem-assisted[Installing {product-title} with the Assisted Installer]**: The Assisted Installer is an installation solution that is provided on the Red Hat {hybrid-console}. The Assisted Installer supports installing an {product-title} cluster on multiple platforms.
- **xref:../installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc#installing-ocp-agent_installing-with-agent-based-installer[Installing {product-title} with the Agent-based Installer]**: You can use the Agent-based Installer to generate a bootable ISO image that contains the Assisted discovery agent, the Assisted Service, and all the other information required to deploy an {product-title} cluster. The Agent-based Installer leverages the advantages of the Assisted Installer in a disconnected environment.
- **xref:../installing/installing_bare_metal/preparing-to-install-on-bare-metal.adoc#preparing-to-install-on-bare-metal[Install a cluster on bare metal]**: On bare metal, you can install {product-title} on installer-provisioned infrastructure or user-provisioned infrastructure. If none of the available platform and cloud provider deployment options meet your needs, consider using bare metal user-provisioned infrastructure.
- **xref:../installing/installing_gcp/preparing-to-install-on-gcp.adoc#preparing-to-install-on-gcp[Install a cluster on {gcp-short}]**: On {gcp-first} you can install {product-title} on installer-provisioned infrastructure or user-provisioned infrastructure.
ifndef::openshift-origin[]
- **xref:../installing/installing_ibm_cloud_public/preparing-to-install-on-ibm-cloud.adoc#preparing-to-install-on-ibm-cloud[Install a cluster on {ibm-cloud-name}]**: On {ibm-cloud-name}, you can install {product-title} on installer-provisioned infrastructure.
- **xref:../installing/installing_ibm_powervs/preparing-to-install-on-ibm-power-vs.adoc#preparing-to-install-on-ibm-power-vs[Install a cluster on {ibm-power-name} Virtual Server]**: On {ibm-power-name} Virtual Server, you can install {product-title} on installer-provisioned infrastructure.
- **xref:../installing/installing_ibm_power/installing-ibm-power.adoc#installing-ibm-power[Install a cluster on {ibm-power-name}]**: On {ibm-power-name}, you can install {product-title} on user-provisioned infrastructure.
- **xref:../installing/installing_ibm_z/preparing-to-install-on-ibm-z.adoc#preparing-to-install-on-ibm-z[Install a cluster on {ibm-z-name} and {ibm-linuxone-name}]**: On {ibm-z-name} and {ibm-linuxone-name}, you can install {product-title} on user-provisioned infrastructure.
endif::openshift-origin[]
- **Install a cluster on {oci-first}**: You can use the {ai-full} or the Agent-based Installer to install a cluster on {oci}. This means that you can run cluster workloads on infrastructure that supports dedicated, hybrid, public, and multiple cloud environments. See xref:../installing/installing_oci/installing-oci-assisted-installer.adoc#installing-oci-assisted-installer[Installing a cluster on {oci-first-no-rt} by using the {ai-full}] and xref:../installing/installing_oci/installing-oci-agent-based-installer.adoc#installing-oci-agent-based-installer[Installing a cluster on {oci-first-no-rt} by using the Agent-based Installer].
- **xref:../installing/installing_nutanix/preparing-to-install-on-nutanix.adoc#preparing-to-install-nutanix[Install a cluster on Nutanix]**: On Nutanix, you can install {product-title} on installer-provisioned infrastructure.
- **xref:../installing/installing_openstack/preparing-to-install-on-openstack.adoc#preparing-to-install-on-openstack[Install a cluster on {rh-openstack-first}]**: On {rh-openstack}, you can install {product-title} on installer-provisioned infrastructure or user-provisioned infrastructure.
- **xref:../installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned.adoc#installing-vsphere-installer-provisioned[Install a cluster on {vmw-full}]**: You can install {product-title} on supported versions of {vmw-short}.
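
As a quick orientation to installer-provisioned installations, the installation program is driven by an `install-config.yaml` file. The following is a minimal sketch targeting {aws-short}; the base domain, cluster name, region, and credential placeholders are hypothetical:

[source,yaml]
----
# Minimal install-config.yaml sketch for an installer-provisioned AWS cluster.
# baseDomain, metadata.name, region, and the credential values are placeholders.
apiVersion: v1
baseDomain: example.com
metadata:
  name: demo-cluster
platform:
  aws:
    region: us-east-1
pullSecret: '<pull-secret>'
sshKey: '<ssh-public-key>'
----

See the platform-specific installation documentation for the full set of supported fields.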
== Other cluster installer activities
ifndef::openshift-origin[]
- **Install a cluster in a restricted network**: If your cluster uses
user-provisioned infrastructure on
xref:../installing/installing_aws/upi/installing-restricted-networks-aws.adoc#installing-restricted-networks-aws[{aws-first}],
xref:../installing/installing_gcp/installing-restricted-networks-gcp.adoc#installing-restricted-networks-gcp[{gcp-short}],
xref:../installing/installing_vsphere/upi/installing-restricted-networks-vsphere.adoc#installing-restricted-networks-vsphere[{vmw-short}], xref:../installing/installing_ibm_cloud_public/installing-ibm-cloud-restricted.adoc#installing-ibm-cloud-restricted[{ibm-cloud-name}], xref:../installing/installing_ibm_z/preparing-to-install-on-ibm-z.adoc#preparing-to-install-on-ibm-z[{ibm-z-name} and {ibm-linuxone-name}], xref:../installing/installing_ibm_power/installing-restricted-networks-ibm-power.adoc#installing-restricted-networks-ibm-power[{ibm-power-name}],
or
xref:../installing/installing_bare_metal/installing-restricted-networks-bare-metal.adoc#installing-restricted-networks-bare-metal[bare metal] and the cluster
does not have full access to the internet, you must mirror the {product-title} installation images. Use one of the following methods to mirror the images so that you can install a cluster in a restricted network:
*** xref:../disconnected/mirroring/installing-mirroring-installation-images.adoc#installing-mirroring-installation-images[Mirroring images for a disconnected installation]
*** xref:../disconnected/mirroring/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation by using the oc-mirror plug-in]
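
As an illustration of the oc-mirror workflow, the plug-in reads an `ImageSetConfiguration` resource that declares which release images to mirror. The following is a minimal sketch; the storage registry URL and channel name are hypothetical:

[source,yaml]
----
# Illustrative ImageSetConfiguration for the oc-mirror plug-in.
# The imageURL and channel name are hypothetical placeholders.
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
storageConfig:
  registry:
    imageURL: mirror.example.com/oc-mirror-metadata
mirror:
  platform:
    channels:
    - name: stable-4.15
      type: ocp
----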
endif::openshift-origin[]
ifdef::openshift-origin[]
- **Install a cluster in a restricted network**: If your cluster that uses
user-provisioned infrastructure on
xref:../installing/installing_aws/upi/installing-restricted-networks-aws.adoc#installing-restricted-networks-aws[{aws-first}],
xref:../installing/installing_gcp/installing-restricted-networks-gcp.adoc#installing-restricted-networks-gcp[{gcp-short}],
or
xref:../installing/installing_bare_metal/installing-restricted-networks-bare-metal.adoc#installing-restricted-networks-bare-metal[bare metal]
does not have full access to the internet, then
xref:../disconnected/mirroring/installing-mirroring-installation-images.adoc#installing-mirroring-installation-images[mirror the {product-title} installation images] and install a cluster in a restricted network.
endif::openshift-origin[]
- **Install a cluster in an existing network**: If you use an existing Virtual Private Cloud (VPC) in
xref:../installing/installing_aws/ipi/installing-aws-vpc.adoc#installing-aws-vpc[{aws-first}] or
xref:../installing/installing_gcp/installing-gcp-vpc.adoc#installing-gcp-vpc[{gcp-short}] or an existing
xref:../installing/installing_azure/ipi/installing-azure-vnet.adoc#installing-azure-vnet[VNet]
on Microsoft Azure, you can install a cluster. Also consider xref:../installing/installing_gcp/installing-gcp-shared-vpc.adoc#installation-gcp-shared-vpc-prerequisites_installing-gcp-shared-vpc[Installing a cluster on {gcp-short} into a shared VPC].
- **Install a private cluster**: If your cluster does not require external
internet access, you can install a private cluster on
xref:../installing/installing_aws/ipi/installing-aws-private.adoc#installing-aws-private[{aws-first}],
xref:../installing/installing_azure/ipi/installing-azure-private.adoc#installing-azure-private[{azure-full}],
xref:../installing/installing_gcp/installing-gcp-private.adoc#installing-gcp-private[{gcp-short}], or
xref:../installing/installing_ibm_cloud_public/preparing-to-install-on-ibm-cloud.adoc#preparing-to-install-on-ibm-cloud[{ibm-cloud-name}]. Internet access is still required to access the cloud APIs and installation media.
- **xref:../installing/installing_bare_metal/installing-bare-metal.adoc#rhcos-install-iscsi-manual_installing-bare-metal[Installing RHCOS manually on an iSCSI boot device] and xref:../installing/installing_bare_metal/installing-bare-metal.adoc#rhcos-install-iscsi-ibft_installing-bare-metal[Installing RHCOS on an iSCSI boot device using iBFT]**: You can target iSCSI devices as the root disk for installation of {op-system}. Multipathing is also supported.
- **xref:../installing/installing-troubleshooting.adoc#installing-troubleshooting[Check installation logs]**: Access installation logs to evaluate issues that occur during {product-title} installation.
- **xref:../web_console/web-console.adoc#web-console[Access {product-title}]**: Use credentials output at the end of the installation process to log in to the {product-title} cluster from the command line or web console.
- **xref:../storage/persistent_storage/persistent-storage-ocs.adoc#red-hat-openshift-data-foundation[Install Red Hat OpenShift Data Foundation]**: You can install {rh-storage-first} as an Operator to provide highly integrated and simplified persistent storage management for containers.
- **xref:../machine_configuration/mco-coreos-layering.adoc#mco-coreos-layering[{op-system-first} image layering]**: As a post-installation task, you can add new images on top of the base {op-system} image. This layering does not modify the base {op-system} image. Instead, the layering creates a custom layered image that includes all {op-system} functions and adds additional functions to specific nodes in the cluster.
endif::[]
ifndef::openshift-rosa,openshift-dedicated,openshift-dpu,microshift[]
== Developer activities
{product-title} is a platform for developing and deploying containerized applications. Read the following {product-title} documentation so that you can better understand {product-title} functions:

- **xref:../architecture/understanding-development.adoc#understanding-development[Understand {product-title} development]**: Learn the different types of containerized applications, from simple containers to advanced Kubernetes deployments and Operators.
- **xref:../applications/projects/working-with-projects.adoc#working-with-projects[Work with projects]**: Create projects from the {product-title} web console or OpenShift CLI (`oc`) to organize and share the software you develop.
- **xref:../applications/creating_applications/odc-creating-applications-using-developer-perspective.adoc#odc-creating-applications-using-developer-perspective[Creating applications using the Developer perspective]**: Use the *Developer* perspective in the {product-title} web console to easily create and deploy applications.
- **xref:../applications/odc-viewing-application-composition-using-topology-view.adoc#odc-viewing-application-topology_viewing-application-composition-using-topology-view[Viewing application composition using the Topology view]**: Use the *Topology* view to visually interact with your applications, monitor status, connect and group components, and modify your code base.
- **link:https://docs.openshift.com/pipelines/latest/create/creating-applications-with-cicd-pipelines.html#creating-applications-with-cicd-pipelines[Create CI/CD Pipelines]**: Pipelines are serverless, cloud-native, continuous integration and continuous deployment systems that run in isolated containers.
Pipelines use standard Tekton custom resources to automate deployments and are designed for decentralized teams that work on microservice-based architecture.
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
- **link:https://docs.openshift.com/gitops/latest/understanding_openshift_gitops/about-redhat-openshift-gitops.html#about-redhat-openshift-gitops[Manage your infrastructure and application configurations]**: GitOps is a declarative way to implement continuous deployment for cloud native applications. GitOps defines infrastructure and application definitions as code. GitOps uses this code to manage multiple workspaces and clusters to simplify the creation of infrastructure and application configurations. GitOps also handles and automates complex deployments at a fast pace, which saves time during deployment and release cycles.
- **xref:../applications/working_with_helm_charts/configuring-custom-helm-chart-repositories.adoc#installing-a-helm-chart-on-an-openshift-cluster_configuring-custom-helm-chart-repositories[Deploy Helm charts]**:
xref:../applications/working_with_helm_charts/understanding-helm.adoc#understanding-helm[Helm] is a software package manager that simplifies deployment of applications and services to {product-title} clusters. Helm uses a packaging format called _charts_. A Helm chart is a collection of files that describes the {product-title} resources.
- **xref:../cicd/builds/understanding-image-builds.adoc#understanding-image-builds[Understand image builds]**: Choose from different build strategies (Docker, S2I, custom, and pipeline) that can include different kinds of source materials, such as Git repositories, local binary inputs, and external artifacts. You can follow examples of build types from basic builds to advanced builds.
- **xref:../openshift_images/index.adoc#overview-of-images[Create container images]**: A container image is the most basic building block in {product-title} and Kubernetes applications. By defining image streams, you can gather multiple versions of an image in one place as you continue to develop the image stream. With S2I containers, you can insert your source code into a base container. The base container is configured to run code of a particular type, such as Ruby, Node.js, or Python.
- **xref:../applications/deployments/what-deployments-are.adoc#what-deployments-are[Create deployments]**: Use `Deployment` objects to exert fine-grained management over applications. Deployments create replica sets according to the rollout strategy, which orchestrates pod lifecycles.
- **xref:../openshift_images/using-templates.adoc#using-templates[Create templates]**: Use existing templates or create your own templates that describe how an application is built or deployed. A template can combine images with descriptions, parameters, replicas, exposed ports and other content that defines how an application can be run or built.
- **xref:../operators/understanding/olm-what-operators-are.adoc#olm-what-operators-are[Understand Operators]**: Operators are the preferred method for creating on-cluster applications for {product-title} {product-version}. Learn about the Operator Framework and how to deploy applications by using installed Operators into your projects.
- **xref:../operators/operator_sdk/osdk-about.adoc#osdk-about[Develop Operators]**: Operators are the preferred method for creating on-cluster applications for {product-title} {product-version}. Learn the workflow for building, testing, and deploying Operators. You can then create your own Operators based on xref:../operators/operator_sdk/ansible/osdk-ansible-support.adoc#osdk-ansible-support[Ansible] or
xref:../operators/operator_sdk/helm/osdk-helm-support.adoc#osdk-helm-support[Helm], or configure xref:../operators/operator_sdk/osdk-monitoring-prometheus.adoc#osdk-monitoring-prometheus[built-in Prometheus monitoring] by using the Operator SDK.
- **Reference the xref:../rest_api/overview/index.adoc#api-index[REST API index]**: Learn about {product-title} application programming interface endpoints.
// Need to provide a link closer to 4.15 GA
- **Software Supply Chain Security enhancements**: The PipelineRun *details* page in the *Developer* or *Administrator* perspective of the web console provides a visual representation of identified vulnerabilities, which are categorized by severity. Additionally, these enhancements provide an option to download or view Software Bills of Materials (SBOMs) for enhanced transparency and control within your supply chain. Learn about link:https://docs.openshift.com/pipelines/1.13/secure/setting-up-openshift-pipelines-to-view-software-supply-chain-security-elements.html[setting up OpenShift Pipelines in the web console to view Software Supply Chain Security elements].
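
To make the deployment concepts above concrete, a minimal `Deployment` manifest might look like the following sketch; the name, labels, and image are hypothetical:

[source,yaml]
----
# Hypothetical Deployment: three replicas rolled out with the
# RollingUpdate strategy, which orchestrates pod lifecycles.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-app
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: quay.io/example/hello-app:latest
----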
endif::openshift-enterprise,openshift-webscale,openshift-origin[]
endif::openshift-rosa,openshift-dedicated,openshift-dpu,microshift[]
ifdef::openshift-dedicated[]
== Developer activities
{product-title} is a platform for developing and deploying containerized applications. Read the following {product-title} documentation so that you can better understand {product-title} functions:

- *Understand {product-title} development*: Learn the different types of containerized applications, from simple containers to advanced Kubernetes deployments and Operators.
- *Work with projects*: Create projects from the web console or CLI to organize and share the software you develop.
- *Work with applications*: Use the *Developer* perspective in the {product-title} web console to easily create and deploy applications. Use the *Topology* view to visually interact with your applications, monitor status, connect and group components, and modify your code base.
- *Create CI/CD Pipelines*: Pipelines are serverless, cloud-native, continuous integration and continuous deployment systems that run in isolated containers. Pipelines use standard Tekton custom resources to automate deployments and are designed for decentralized teams that work on microservices-based architecture.
- *Understand Operators*: Operators are the preferred method for creating on-cluster applications for {product-title} {product-version}. Learn about the Operator Framework and how to deploy applications by using installed Operators into your projects.
- *Understand image builds*: Choose from different build strategies (Docker, S2I, custom, and pipeline) that can include different kinds of source materials, such as Git repositories, local binary inputs, and external artifacts. You can follow examples of build types from basic builds to advanced builds.
- *Create container images*: A container image is the most basic building block in {product-title} (and Kubernetes) applications. By defining image streams, you can gather multiple versions of an image in one place as you continue its development. With S2I containers, you can insert your source code into a base container that is set up to run code of a particular type (such as Ruby, Node.js, or Python).
- *Create deployments*: Use `Deployment` objects to exert fine-grained management over applications. Deployments create replica sets according to the rollout strategy, which orchestrates pod lifecycles.
- *Create templates*: Use existing templates or create your own templates that describe how an application is built or deployed. A template can combine images with descriptions, parameters, replicas, exposed ports and other content that defines how an application can be run or built.
endif::openshift-dedicated[]
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
== Cluster administrator activities
Manage machines, provide services to users, and follow monitoring and logging reports. Read the following {product-title} documentation, so that you can better understand {product-title} functions:

- **xref:../architecture/architecture.adoc#architecture-overview-architecture[Understand {product-title} management]**: Learn about components of the {product-title} {product-version} control plane. See how {product-title} control plane and compute nodes are managed and updated through the xref:../machine_management/index.adoc#machine-api-overview_overview-of-machine-management[Machine API] and xref:../architecture/control-plane.adoc#operators-overview_control-plane[Operators].
- **xref:../post_installation_configuration/enabling-cluster-capabilities.adoc#enabling-cluster-capabilities[Enable cluster capabilities]**: As a cluster administrator, you can enable cluster capabilities that were disabled prior to installation.
=== Manage cluster components
- **Manage machines**: Manage xref:../machine_management/index.adoc#machine-mgmt-intro-managing-compute_overview-of-machine-management[compute] and xref:../machine_management/index.adoc#machine-mgmt-intro-managing-control-plane_overview-of-machine-management[control plane] machines in your cluster with machine sets, by xref:../machine_management/deploying-machine-health-checks.adoc#deploying-machine-health-checks[deploying health checks], and xref:../machine_management/applying-autoscaling.adoc#applying-autoscaling[applying autoscaling].
- **xref:../registry/index.adoc#registry-overview[Manage container registries]**: Each {product-title} cluster includes a built-in container registry for storing its images. You can also configure a separate link:https://access.redhat.com/documentation/en-us/red_hat_quay/[Red Hat Quay] registry to use with {product-title}. The link:https://quay.io[Quay.io] website provides a public container registry that stores {product-title} containers and Operators.
- **xref:../authentication/understanding-authentication.adoc#understanding-authentication[Manage users and groups]**: Add users and groups with different levels of permissions to use or modify clusters.
- **xref:../authentication/understanding-authentication.adoc#understanding-authentication[Manage authentication]**: Learn how user, group, and API authentication works in {product-title}. {product-title} supports xref:../authentication/understanding-identity-provider.adoc#supported-identity-providers[multiple identity providers].
- **Manage xref:../security/certificates/replacing-default-ingress-certificate.adoc#replacing-default-ingress[ingress], xref:../security/certificates/api-server.adoc#api-server-certificates[API server], and xref:../security/certificates/service-serving-certificate.adoc#add-service-serving[service] certificates**: {product-title} creates certificates by default for the Ingress Operator, the API server, and for services needed by complex middleware applications that require encryption. You might need to change, add, or rotate these certificates.
- **xref:../networking/understanding-networking.adoc#understanding-networking[Manage networking]**: The cluster network in {product-title} is managed by the xref:../networking/cluster-network-operator.adoc#cluster-network-operator[Cluster Network Operator] (CNO). The Multus Container Network Interface adds the capability to attach xref:../networking/multiple_networks/understanding-multiple-networks.adoc#understanding-multiple-networks[multiple network interfaces] to a pod. By using
xref:../networking/network_security/network_policy/about-network-policy.adoc#about-network-policy[network policy] features, you can isolate your pods or permit selected traffic.
- **xref:../operators/understanding/olm-understanding-operatorhub.adoc#olm-understanding-operatorhub[Manage Operators]**: Lists of Red Hat, ISV, and community Operators can be reviewed by cluster administrators and xref:../operators/admin/olm-adding-operators-to-cluster.adoc#olm-adding-operators-to-a-cluster[installed on their clusters]. After you install them, you can xref:../operators/user/olm-creating-apps-from-installed-operators.adoc#olm-creating-apps-from-installed-operators[run], xref:../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[upgrade], back up, or otherwise manage the Operator on your cluster.
- **xref:../windows_containers/understanding-windows-container-workloads.adoc#understanding-windows-container-workloads_understanding-windows-container-workloads[Understanding Windows container workloads]**: You can use the {productwinc} feature to run Windows compute nodes in an {product-title} cluster. This is possible by using the Red Hat Windows Machine Config Operator (WMCO) to install and manage Windows nodes.
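
As one example of the network policy capability mentioned above, the following sketch permits ingress only from pods in the same namespace; the policy name is hypothetical:

[source,yaml]
----
# Hypothetical NetworkPolicy: isolate pods from other namespaces.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {} # applies to every pod in the namespace
  ingress:
  - from:
    - podSelector: {} # permit traffic only from pods in this namespace
----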
=== Change cluster components
- **xref:../operators/understanding/crds/crd-extending-api-with-crds.adoc#crd-extending-api-with-crds[Use custom resource definitions (CRDs) to modify the cluster]**: Cluster features implemented with Operators can be modified with CRDs. Learn to xref:../operators/understanding/crds/crd-extending-api-with-crds.adoc#crd-creating-custom-resources-definition_crd-extending-api-with-crds[create a CRD] and xref:../operators/understanding/crds/crd-managing-resources-from-crds.adoc#crd-managing-resources-from-crds[manage resources from CRDs].
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
- **xref:../applications/quotas/quotas-setting-per-project.adoc#quotas-setting-per-project[Set resource quotas]**: Choose from CPU, memory, and other system resources to xref:../applications/quotas/quotas-setting-per-project.adoc#quotas-setting-per-project[set quotas].
endif::openshift-enterprise,openshift-webscale,openshift-origin[]
- **xref:../applications/pruning-objects.adoc#pruning-objects[Prune and reclaim resources]**: Reclaim space by pruning unneeded Operators, groups, deployments, builds, images, registries, and cron jobs.
- **xref:../scalability_and_performance/recommended-performance-scale-practices/recommended-infrastructure-practices.adoc#scaling-cluster-monitoring-operator[Scale] and xref:../scalability_and_performance/using-node-tuning-operator.adoc#using-node-tuning-operator[tune] clusters**: Set cluster limits, tune nodes, scale cluster monitoring, and optimize networking, storage, and routes for your environment.
// Added context here.
- **xref:../updating/understanding_updates/intro-to-updates.adoc#understanding-openshift-updates[Update a cluster]**:
Use the Cluster Version Operator (CVO) to upgrade your {product-title} cluster. If an update is available from the OpenShift Update Service (OSUS), you apply that cluster update from the {product-title} xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[web console] or the xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#updating-cluster-cli[OpenShift CLI] (`oc`).
- **xref:../disconnected/updating/index.adoc#about-disconnected-updates[Using the OpenShift Update Service in a disconnected environment]**: You can use the OpenShift Update Service for recommending {product-title} updates in disconnected environments.
- **xref:../nodes/clusters/nodes-cluster-worker-latency-profiles.adoc#nodes-cluster-worker-latency-profiles[Improving cluster stability in high latency environments by using worker latency profiles]**: If your network has latency issues, you can use one of three worker latency profiles to help ensure that your control plane does not accidentally evict pods in case it cannot reach a worker node. You can configure or modify the profile at any time during the life of the cluster.
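
For instance, a per-project quota on compute resources can be expressed as a `ResourceQuota` object; the name and limit values in this sketch are hypothetical:

[source,yaml]
----
# Hypothetical per-project quota for CPU, memory, and pod count.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "10"
----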
=== Observe a cluster
- **xref:../observability/logging/cluster-logging.adoc#cluster-logging[OpenShift Logging]**: Learn about logging and configure different logging components, such as log storage, log collectors, and the logging web console plugin.
- **xref:../observability/distr_tracing/distr_tracing_arch/distr-tracing-architecture.adoc#distr-tracing-architecture[Red Hat OpenShift distributed tracing platform]**: Store and visualize large volumes of requests passing through distributed systems, across the whole stack of microservices, and under heavy loads. Use the distributed tracing platform for monitoring distributed transactions, gathering insights into your instrumented services, network profiling, performance and latency optimization, root cause analysis, and troubleshooting the interaction between components in modern cloud-native microservices-based applications.
// xreffing to the installation page until further notice because OTEL content is currently planned for internal restructuring across pages that is likely to result in renamed page files
- **xref:../observability/otel/otel-installing.adoc#install-otel[Red Hat build of OpenTelemetry]**: Instrument, generate, collect, and export telemetry traces, metrics, and logs to analyze and understand your software's performance and behavior. Use open source backends like Tempo or Prometheus, or use commercial offerings. Learn a single set of APIs and conventions, and own the data that you generate.
- **xref:../observability/network_observability/network-observability-overview.adoc#network-observability-overview[Network Observability]**: Observe network traffic for {product-title} clusters by using eBPF technology to create and enrich network flows. You can xref:../observability/network_observability/metrics-alerts-dashboards.adoc#metrics-alerts-dashboards_metrics-alerts-dashboards[view dashboards, customize alerts], and xref:../observability/network_observability/observing-network-traffic.adoc#network-observability-trafficflow_nw-observe-network-traffic[analyze network flow] information for further insight and troubleshooting.
- **xref:../observability/monitoring/monitoring-overview.adoc#monitoring-overview[In-cluster monitoring]**:
Learn to xref:../observability/monitoring/configuring-the-monitoring-stack.adoc#configuring-the-monitoring-stack[configure the monitoring stack].
After configuring monitoring, use the web console to access xref:../observability/monitoring/reviewing-monitoring-dashboards.adoc#reviewing-monitoring-dashboards[monitoring dashboards]. In addition to infrastructure metrics, you can also scrape and view metrics for your own services.
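+
As one hedged example, you customize the default platform monitoring stack through a `cluster-monitoring-config` config map in the `openshift-monitoring` namespace; the retention value shown here is illustrative:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 24h
----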
- **xref:../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring_about-remote-health-monitoring[Remote health monitoring]**: {product-title} collects anonymized aggregated information about your cluster. Red Hat receives this data through Telemetry and the Insights Operator and uses it to improve {product-title}. You can view the xref:../support/remote_health_monitoring/showing-data-collected-by-remote-health-monitoring.adoc#showing-data-collected-by-remote-health-monitoring_showing-data-collected-by-remote-health-monitoring[data collected by remote health monitoring].
- **xref:../observability/power_monitoring/power-monitoring-overview.adoc#power-monitoring-overview[{PM-title-c} (Technology Preview)]**: You can use {PM-title} to monitor the power usage and identify power-consuming containers running in an {product-title} cluster. {PM-shortname-c} collects and exports energy-related system statistics from various components, such as CPU and DRAM. {PM-shortname-c} provides granular power consumption data for Kubernetes pods, namespaces, and nodes.
== Storage activities
- **xref:../storage/understanding-persistent-storage.adoc#understanding-persistent-storage[Manage storage]**: With {product-title}, a cluster administrator can configure persistent storage by using
xref:../storage/persistent_storage/persistent-storage-ocs.adoc#red-hat-openshift-data-foundation[Red Hat OpenShift Data Foundation],
xref:../storage/persistent_storage/persistent-storage-aws.adoc#persistent-storage-using-aws-ebs[{aws-short} Elastic Block Store],
xref:../storage/persistent_storage/persistent-storage-nfs.adoc#persistent-storage-using-nfs[NFS],
xref:../storage/persistent_storage/persistent-storage-iscsi.adoc#persistent-storage-using-iscsi[iSCSI],
xref:../storage/container_storage_interface/persistent-storage-csi.adoc#persistent-storage-using-csi[Container Storage Interface (CSI)],
and more.
You can xref:../storage/expanding-persistent-volumes.adoc#expanding-persistent-volumes[expand persistent volumes], configure xref:../storage/dynamic-provisioning.adoc#dynamic-provisioning[dynamic provisioning], and use CSI to xref:../storage/container_storage_interface/persistent-storage-csi.adoc#persistent-storage-using-csi[configure], xref:../storage/container_storage_interface/persistent-storage-csi-cloning.adoc#persistent-storage-csi-cloning[clone], and use xref:../storage/container_storage_interface/persistent-storage-csi-snapshots.adoc#persistent-storage-csi-snapshots[snapshots] of persistent storage.
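+
Dynamic provisioning and volume expansion are both driven by the storage class. A minimal sketch for an {aws-short} EBS-backed class, where the class name and the `gp3` volume type are illustrative choices:
+
[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
allowVolumeExpansion: true
----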
- **xref:../storage/container_storage_interface/persistent-storage-csi-smb-cifs.adoc#persistent-storage-csi-smb-cifs[Persistent storage using CIFS/SMB CSI Driver Operator (Technology Preview)]**: {product-title} is capable of provisioning persistent volumes (PVs) with a Container Storage Interface (CSI) driver for the Common Internet File System (CIFS) dialect/Server Message Block (SMB) protocol. The CIFS/SMB CSI Driver Operator that manages this driver is in Technology Preview status.
- **xref:../storage/container_storage_interface/persistent-storage-csi-snapshots.adoc#persistent-storage-csi-snapshots-overview_persistent-storage-csi-snapshots[Changing vSphere CSI maximum number of snapshots]**: The default maximum number of snapshots per volume in {vmw-first} Container Storage Interface (CSI) is 3. In {product-title} {product-version}, you can increase this limit to a maximum of 32 snapshots per volume. You also have granular control of the maximum number of snapshots for vSAN and Virtual Volume datastores.
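+
You change the limit on the `ClusterCSIDriver` object for the vSphere CSI driver. A sketch, assuming the `globalMaxSnapshotsPerBlockVolume` field name from the ClusterCSIDriver API (verify the field against your release):
+
[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
metadata:
  name: csi.vsphere.vmware.com
spec:
  driverConfig:
    driverType: vSphere
    vSphere:
      globalMaxSnapshotsPerBlockVolume: 10
----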
- **xref:../storage/container_storage_interface/persistent-storage-csi.adoc#persistent-storage-csi[Volume cloning supported for Azure File (Technology Preview)]**: {product-title} {product-version} introduces volume cloning for the Microsoft Azure File Container Storage Interface (CSI) Driver Operator as a Technology Preview feature.
- **xref:../storage/understanding-persistent-storage.adoc#pv-access-modes_understanding-persistent-storage[RWOP with SELinux context mount]**: In {product-title} {product-version}, the `ReadWriteOncePod` (RWOP) access mode moves from Technology Preview to General Availability. RWOP can be used only in a single pod on a single node. If the driver enables it, RWOP uses the SELinux context mount set in the PodSpec or container, which allows the driver to mount the volume directly with the correct SELinux labels.
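+
Requesting RWOP is a standard persistent volume claim that specifies the `ReadWriteOncePod` access mode; the claim name and size below are illustrative:
+
[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwop-claim
spec:
  accessModes:
  - ReadWriteOncePod
  resources:
    requests:
      storage: 1Gi
----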
endif::openshift-enterprise,openshift-webscale,openshift-origin[]
ifdef::openshift-dedicated[]
== Cluster administrator activities
While cluster maintenance and host configuration are performed by the Red Hat Site Reliability Engineering (SRE) team, {product-title} cluster administrators can perform other ongoing tasks on your {product-title} {product-version} cluster. As an {product-title} cluster administrator, the documentation helps you:
- *Manage Dedicated Administrators*: Grant or revoke permissions for `dedicated-admin` users.
- *Work with Logging*: Learn about OpenShift Logging and configure the Cluster Logging Operator.
- *Monitor clusters*: Learn to use the web console to access monitoring dashboards.
- *Manage nodes*: Learn to manage nodes, including configuring machine pools and autoscaling.
endif::openshift-dedicated[]
endif::openshift-enterprise,openshift-webscale,openshift-origin[]
endif::openshift-rosa[]
ifdef::openshift-enterprise[]
== Hosted control plane activities
* **Support for bare metal and {VirtProductName}**: {hcp-capital} for {product-title} is now Generally Available on bare metal and {VirtProductName} platforms. For more information, see the following documentation:
** link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.10/html/clusters/cluster_mce_overview#configuring-hosting-service-cluster-configure-bm[Configuring hosted control plane clusters on bare metal]
** link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.10/html/clusters/cluster_mce_overview#hosted-control-planes-manage-kubevirt[Managing hosted control plane clusters on OpenShift Virtualization]
* **Technology Preview features**: {hcp-capital} remains available as a Technology Preview feature on the {aws-first}, {ibm-power-name}, and {ibm-z-name} platforms. You can now provision a hosted control plane cluster by using non-bare-metal agent machines. For more information, see the following documentation:
** link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.10/html/clusters/cluster_mce_overview#hosting-service-cluster-configure-aws[Configuring the hosting cluster on {aws-short} (Technology Preview)]
** link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.10/html/clusters/cluster_mce_overview#config-hosted-service-ibmpower[Configuring the hosting cluster on a 64-bit x86 {product-title} cluster to create {hcp} for {ibm-power-name} compute nodes (Technology Preview)]
** link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.10/html/clusters/cluster_mce_overview#configuring-hosting-service-cluster-ibmz[Configuring the hosted cluster on 64-bit x86 bare metal for {ibm-z-name} compute nodes (Technology Preview)]
** link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.10/html/clusters/cluster_mce_overview#configuring-hosting-service-cluster-configure-agent-non-bm[Configuring hosted control plane clusters using non bare metal agent machines (Technology Preview)]
endif::openshift-enterprise[]