Mirror of https://github.com/openshift/openshift-docs.git (synced 2026-02-05 03:47:04 +01:00)

OSDOCS#15507: Removing discrete headings in core OCP docs
@@ -19,7 +19,7 @@ include::modules/installation-process.adoc[leveloffset=+2]
* xref:../scalability_and_performance/recommended-performance-scale-practices/recommended-control-plane-practices.adoc#master-node-sizing_recommended-control-plane-practices[Control plane node sizing]

[discrete]
=== Installation scope

The scope of the {product-title} installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes.
@@ -30,8 +30,8 @@ include::modules/identity-provider-add.adoc[leveloffset=+1]
This example configures an Apache authentication proxy for the {product-title}
using the request header identity provider.

[discrete]
include::modules/identity-provider-apache-custom-proxy-configuration.adoc[leveloffset=+2]

[discrete]
include::modules/identity-provider-configuring-apache-request-header.adoc[leveloffset=+2]
@@ -62,7 +62,7 @@ include::modules/hcp-bm-add-nodes-to-inventory.adoc[leveloffset=+2]
include::modules/hcp-bm-create-infra-console.adoc[leveloffset=+2]

[discrete]
[role="_additional-resources"]
[id="addl-res-hcp-bm-infra-console_{context}"]
=== Additional resources
@@ -188,7 +188,7 @@ include::modules/ipi-install-configuring-storage-on-nodes.adoc[leveloffset=+2]
// Creating a disconnected registry
include::modules/ipi-install-creating-a-disconnected-registry.adoc[leveloffset=+1]

[discrete]
[id="prerequisites_ipi-disconnected-registry"]
=== Prerequisites
@@ -154,7 +154,7 @@ Installing a single-node cluster on {ibm-z-name} and {ibm-linuxone-name} require
Installing a single-node cluster on {ibm-z-name} simplifies installation for development and test environments and requires fewer resources at entry level.
====

[discrete]
=== Hardware requirements

* The equivalent of two Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster.
@@ -181,7 +181,7 @@ Installing a single-node cluster on {ibm-power-name} requires user-provisioned i
Installing a single-node cluster on {ibm-power-name} simplifies installation for development and test environments and requires fewer resources at entry level.
====

[discrete]
=== Hardware requirements

* The equivalent of two Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster.
@@ -55,7 +55,7 @@ include::modules/agent-installer-fips-compliance.adoc[leveloffset=+1]
//Configuring FIPS through the Agent-based Installer
include::modules/agent-installer-configuring-fips-compliance.adoc[leveloffset=+1]

[discrete]
[role="_additional-resources"]
.Additional resources
@@ -36,7 +36,7 @@ include::modules/ipi-verifying-nodes-after-installation.adoc[leveloffset=+2]
* link:https://access.redhat.com/documentation/en-us/assisted_installer_for_openshift_container_platform[Assisted Installer for OpenShift Container Platform]

[discrete]
=== Installation scope

The scope of the {product-title} installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes.
@@ -20,15 +20,15 @@ include::modules/machineset-vsphere-required-permissions.adoc[leveloffset=+1]
include::modules/compute-machineset-upi-reqs.adoc[leveloffset=+1]

//Obtaining the infrastructure ID
[discrete]
include::modules/machineset-upi-reqs-infra-id.adoc[leveloffset=+2]

//Satisfying vSphere credentials requirements
[discrete]
include::modules/machineset-upi-reqs-vsphere-creds.adoc[leveloffset=+2]

//Satisfying ignition configuration requirements
[discrete]
include::modules/machineset-upi-reqs-ignition-config.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources
File diff suppressed because it is too large
@@ -32,7 +32,7 @@ include::modules/migration-state-migration-cli.adoc[leveloffset=+2]
[role="_additional-resources"]
[id="additional-resources-for-state-migration_{context}"]
[discrete]
=== Additional resources

* See xref:../migrating_from_ocp_3_to_4/advanced-migration-options-3-4.adoc#migration-excluding-pvcs_advanced-migration-options-3-4[Excluding PVCs from migration] to select PVCs for state migration.
@@ -55,7 +55,7 @@ include::modules/migration-editing-pvs-in-migplan.adoc[leveloffset=+2]
[role="_additional-resources"]
[id="additional-resources-for-editing-pv-attributes_{context}"]
[discrete]
==== Additional resources

* For details about the `move` and `copy` actions, see xref:../migrating_from_ocp_3_to_4/about-mtc-3-4.adoc#migration-mtc-workflow_about-mtc-3-4[MTC workflow].
@@ -32,7 +32,7 @@ These annotations preserve the UID range, ensuring that the containers retain th
include::modules/migration-prerequisites.adoc[leveloffset=+1]

[role="_additional-resources"]
[discrete]
[id="additional-resources-for-migration-prerequisites_{context}"]
=== Additional resources for migration prerequisites
@@ -50,7 +50,7 @@ include::modules/migration-adding-replication-repository-to-cam.adoc[leveloffset
include::modules/migration-creating-migration-plan-cam.adoc[leveloffset=+2]

[role="_additional-resources"]
[discrete]
[id="additional-resources-for-persistent-volume-copy-methods_{context}"]
=== Additional resources
@@ -25,7 +25,7 @@ Beginning with {product-title} 4.13, {op-system} now uses {op-system-base-full}
For more information, see xref:../architecture/architecture.adoc#architecture[OpenShift Container Platform architecture].

[discrete]
=== Immutable infrastructure

{product-title} 4 uses {op-system-first}, which is designed to run containerized applications, and provides efficient installation, Operator-based management, and simplified upgrades. {op-system} is an immutable container host, rather than a customizable operating system like {op-system-base}. {op-system} enables {product-title} 4 to manage and automate the deployment of the underlying container host. {op-system} is a part of {product-title}, which means that everything runs inside a container and is deployed using {product-title}.
@@ -34,7 +34,7 @@ In {product-title} 4, control plane nodes must run {op-system}, ensuring that fu
For more information, see xref:../architecture/architecture-rhcos.adoc#architecture-rhcos[{op-system-first}].

[discrete]
=== Operators

Operators are a method of packaging, deploying, and managing a Kubernetes application. Operators ease the operational complexity of running another piece of software. They watch over your environment and use the current state to make decisions in real time. Advanced Operators are designed to upgrade and react to failures automatically.
@@ -44,7 +44,7 @@ For more information, see xref:../operators/understanding/olm-what-operators-are
[id="migration-differences-install"]
== Installation and upgrade

[discrete]
=== Installation process

To install {product-title} 3.11, you prepared your {op-system-base-full} hosts, set all of the configuration values your cluster needed, and then ran an Ansible playbook to install and set up your cluster.
@@ -53,14 +53,14 @@ In {product-title} {product-version}, you use the OpenShift installation program
For more information, see xref:../architecture/architecture-installation.adoc#installation-process_architecture-installation[Installation process].

[discrete]
=== Infrastructure options

In {product-title} 3.11, you installed your cluster on infrastructure that you prepared and maintained. In addition to providing your own infrastructure, {product-title} 4 offers an option to deploy a cluster on infrastructure that the {product-title} installation program provisions and the cluster maintains.

For more information, see xref:../architecture/architecture-installation.adoc#installation-overview_architecture-installation[OpenShift Container Platform installation overview].

[discrete]
=== Upgrading your cluster

In {product-title} 3.11, you upgraded your cluster by running Ansible playbooks. In {product-title} {product-version}, the cluster manages its own updates, including updates to {op-system-first} on cluster nodes. You can easily upgrade your cluster by using the web console or by using the `oc adm upgrade` command from the OpenShift CLI, and the Operators automatically upgrade themselves. If your {product-title} {product-version} cluster has {op-system-base} worker machines, then you still need to run an Ansible playbook to upgrade those worker machines.
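
A minimal sketch of that CLI flow (the `--to-latest` flag is one option among several, and is not part of this commit):

----
# Review the current version and the available updates
$ oc adm upgrade
# Start an update to the latest available version
$ oc adm upgrade --to-latest=true
----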
@@ -77,28 +77,28 @@ Review the changes and other considerations that might affect your transition fr
Review the following storage changes to consider when transitioning from {product-title} 3.11 to {product-title} {product-version}.

[discrete]
==== Local volume persistent storage

Local storage is only supported by using the Local Storage Operator in {product-title} {product-version}. The local provisioner method from {product-title} 3.11 is not supported.

For more information, see xref:../storage/persistent_storage_local/persistent-storage-local.adoc#persistent-storage-using-local-volume[Persistent storage using local volumes].

[discrete]
==== FlexVolume persistent storage

The FlexVolume plugin location changed from {product-title} 3.11. The new location in {product-title} {product-version} is `/etc/kubernetes/kubelet-plugins/volume/exec`. Attachable FlexVolume plugins are no longer supported.
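
For illustration only (the vendor and driver names here are hypothetical), a FlexVolume driver named `cifs` from vendor `example.com` would be deployed as an executable under the new path following the usual `<vendor>~<driver>` layout:

----
/etc/kubernetes/kubelet-plugins/volume/exec/example.com~cifs/cifs
----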
For more information, see xref:../storage/persistent_storage/persistent-storage-flexvolume.adoc#persistent-storage-using-flexvolume[Persistent storage using FlexVolume].

[discrete]
==== Container Storage Interface (CSI) persistent storage

Persistent storage using the Container Storage Interface (CSI) was link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] in {product-title} 3.11. {product-title} {product-version} ships with xref:../storage/container_storage_interface/persistent-storage-csi.adoc#csi-drivers-supported_persistent-storage-csi[several CSI drivers]. You can also install your own driver.

[discrete]
==== Red Hat OpenShift Data Foundation

OpenShift Container Storage 3, which is available for use with {product-title} 3.11, uses Red Hat Gluster Storage as the backing storage.
@@ -107,7 +107,7 @@ OpenShift Container Storage 3, which is available for use with {product-title} 3
For more information, see xref:../storage/persistent_storage/persistent-storage-ocs.adoc#red-hat-openshift-data-foundation[Persistent storage using Red Hat OpenShift Data Foundation] and the link:https://access.redhat.com/articles/4731161[interoperability matrix] article.

[discrete]
==== Unsupported persistent storage options

Support for the following persistent storage options from {product-title} 3.11 has changed in {product-title} {product-version}:
@@ -120,7 +120,7 @@ If you used one of these in {product-title} 3.11, you must choose a different pe
For more information, see xref:../storage/understanding-persistent-storage.adoc#understanding-persistent-storage[Understanding persistent storage].

[discrete]
==== Migration of in-tree volumes to CSI drivers

{product-title} 4 is migrating in-tree volume plugins to their Container Storage Interface (CSI) counterparts. In {product-title} {product-version}, CSI drivers are the new default for the following in-tree volume types:
@@ -146,7 +146,7 @@ For more information, see xref:../storage/container_storage_interface/persistent
Review the following networking changes to consider when transitioning from {product-title} 3.11 to {product-title} {product-version}.

[discrete]
==== Network isolation mode

The default network isolation mode for {product-title} 3.11 was `ovs-subnet`, though users frequently switched to use `ovs-multitenant`. The default network isolation mode for {product-title} {product-version} is controlled by a network policy.
@@ -155,7 +155,7 @@ If your {product-title} 3.11 cluster used the `ovs-subnet` or `ovs-multitenant`
For more information, see xref:../networking/network_security/network_policy/about-network-policy.adoc#about-network-policy[About network policy].

[discrete]
==== OVN-Kubernetes as the default networking plugin in Red Hat OpenShift Networking

In {product-title} 3.11, OpenShift SDN was the default networking plugin in Red Hat OpenShift Networking. In {product-title} {product-version}, OVN-Kubernetes is now the default networking plugin.
@@ -184,17 +184,17 @@ You should install {product-title} 4 with the OVN-Kubernetes network plugin beca
Review the following logging changes to consider when transitioning from {product-title} 3.11 to {product-title} {product-version}.

[discrete]
==== Deploying OpenShift Logging

{product-title} 4 provides a simple deployment mechanism for OpenShift Logging by using a Cluster Logging custom resource.

[discrete]
==== Aggregated logging data

You cannot transition your aggregated logging data from {product-title} 3.11 into your new {product-title} 4 cluster.

[discrete]
==== Unsupported logging configurations

Some logging configurations that were available in {product-title} 3.11 are no longer supported in {product-title} {product-version}.
@@ -204,14 +204,14 @@ Some logging configurations that were available in {product-title} 3.11 are no l
Review the following security changes to consider when transitioning from {product-title} 3.11 to {product-title} {product-version}.

[discrete]
==== Unauthenticated access to discovery endpoints

In {product-title} 3.11, an unauthenticated user could access the discovery endpoints (for example, [x-]`/api/*` and [x-]`/apis/*`). For security reasons, unauthenticated access to the discovery endpoints is no longer allowed in {product-title} {product-version}. If you do need to allow unauthenticated access, you can configure the RBAC settings as necessary; however, be sure to consider the security implications as this can expose internal cluster components to the external network.
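
As a sketch of what such an RBAC change could look like (this binding is an illustration, not a recommendation), unauthenticated discovery access can be restored by binding the `system:discovery` cluster role to the `system:unauthenticated` group:

----
$ oc create clusterrolebinding discovery-unauthenticated \
    --clusterrole=system:discovery \
    --group=system:unauthenticated
----

Weigh this against the exposure it creates before applying it to any cluster that is reachable from an untrusted network.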
// TODO: Anything to xref to, or additional details?

[discrete]
==== Identity providers

Configuration for identity providers has changed for {product-title} 4, including the following notable changes:
@@ -221,12 +221,12 @@ Configuration for identity providers has changed for {product-title} 4, includin
For more information, see xref:../authentication/understanding-identity-provider.adoc#understanding-identity-provider[Understanding identity provider configuration].

[discrete]
==== OAuth token storage format

Newly created OAuth HTTP bearer tokens no longer match the names of their OAuth access token objects. The object names are now a hash of the bearer token and are no longer sensitive. This reduces the risk of leaking sensitive information.

[discrete]
==== Default security context constraints

The `restricted` security context constraints (SCC) in {product-title} 4 can no longer be accessed by any authenticated user, as the `restricted` SCC could be in {product-title} 3.11. Broad authenticated access is now granted to the `restricted-v2` SCC, which is more restrictive than the old `restricted` SCC. The `restricted` SCC still exists; users who want to use it must be specifically given permission to do so.
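
Granting that permission might look like the following sketch, where the service account and namespace are placeholders:

----
$ oc adm policy add-scc-to-user restricted -z <service_account> -n <namespace>
----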
@@ -238,7 +238,7 @@ For more information, see xref:../authentication/managing-security-context-const
Review the following monitoring changes when transitioning from {product-title} 3.11 to {product-title} {product-version}. You cannot migrate Hawkular configurations and metrics to Prometheus.

[discrete]
==== Alert for monitoring infrastructure availability

The default alert that triggers to ensure the availability of the monitoring structure was called `DeadMansSwitch` in {product-title} 3.11. This was renamed to `Watchdog` in {product-title} 4. If you had PagerDuty integration set up with this alert in {product-title} 3.11, you must set up the PagerDuty integration for the `Watchdog` alert in {product-title} 4.
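
As a quick post-migration sanity check (a sketch, assuming the default in-cluster monitoring stack), you can confirm that the renamed alert exists:

----
$ oc -n openshift-monitoring get prometheusrules -o yaml | grep -A2 'alert: Watchdog'
----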
@@ -16,7 +16,7 @@ For known issues, see the xref:../migration_toolkit_for_containers/release_notes
include::modules/migration-mtc-workflow.adoc[leveloffset=+1]

[discrete]
include::modules/migration-about-mtc-custom-resources.adoc[leveloffset=+2]

include::modules/migration-mtc-cr-manifests.adoc[leveloffset=+1]
@@ -59,7 +59,7 @@ include::modules/migration-rolling-back-migration-manually.adoc[leveloffset=+2]
[role="_additional-resources"]
[id="additional-resources-uninstalling_{context}"]
[discrete]
=== Additional resources

* xref:../operators/admin/olm-deleting-operators-from-cluster.adoc#olm-deleting-operators-from-a-cluster-using-web-console_olm-deleting-operators-from-cluster[Deleting Operators from a cluster using the web console]
@@ -10,32 +10,32 @@
The following are exceptions to compatibility in {product-title}:

ifndef::microshift[]
[discrete]
[id="OS-file-system-modifications-not-made_{context}"]
== RHEL CoreOS file system modifications not made with a supported Operator

No assurances are made at this time that a modification made to the host operating file system is preserved across minor releases except for where that modification is made through the public interface exposed via a supported Operator, such as the Machine Config Operator or Node Tuning Operator.

[discrete]
[id="modifications-to-cluster-infrastructure-in-cloud_{context}"]
== Modifications to cluster infrastructure in cloud or virtualized environments

No assurances are made at this time that a modification to the cloud hosting environment that supports the cluster is preserved except for where that modification is made through a public interface exposed in the product or is documented as a supported configuration. Cluster infrastructure providers are responsible for preserving their cloud or virtualized infrastructure except for where they delegate that authority to the product through an API.
endif::microshift[]

[discrete]
[id="Functional-defaults-between-upgraded-cluster-new-installation_{context}"]
== Functional defaults between an upgraded cluster and a new installation

No assurances are made at this time that a new installation of a product minor release will have the same functional defaults as a version of the product that was installed with a prior minor release and upgraded to the equivalent version. For example, future versions of the product may provision cloud infrastructure with different defaults than prior minor versions. In addition, different default security choices may be made in future versions of the product than those made in past versions of the product. Past versions of the product will forward upgrade, but preserve legacy choices where appropriate specifically to maintain backwards compatibility.

[discrete]
[id="API-fields-that-have-the-prefix-unsupported-annotations_{context}"]
== Usage of API fields that have the prefix "unsupported" or undocumented annotations

Select APIs in the product expose fields with the prefix `unsupported<FieldName>`. No assurances are made at this time that usage of this field is supported across releases or within a release. Product support can request a customer to specify a value in this field when debugging specific problems, but its usage is not supported outside of that interaction. Usage of annotations on objects that are not explicitly documented is not assured across minor releases.

[discrete]
[id="API-availability-per-product-installation-topology_{context}"]
== API availability per product installation topology

The OpenShift distribution will continue to evolve its supported installation topology, and not all APIs in one install topology will necessarily be included in another. For example, certain topologies may restrict read/write access to particular APIs if they are in conflict with the product installation topology or not include a particular API at all if not pertinent to that topology. APIs that exist in a given topology will be supported in accordance with the compatibility tiers defined above.
@@ -8,24 +8,24 @@
All commercially supported APIs, components, and features are associated under one of the following support levels:

[discrete]
[id="api-tier-1_{context}"]
== API tier 1

APIs and application operating environments (AOEs) are stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release.

[discrete]
[id="api-tier-2_{context}"]
== API tier 2

APIs and AOEs are stable within a major release for a minimum of 9 months or 3 minor releases from the announcement of deprecation, whichever is longer.

[discrete]
[id="api-tier-3_{context}"]
== API tier 3

This level applies to languages, tools, applications, and optional Operators included with {product-title} through Operator Hub. Each component will specify a lifetime during which the API and AOE will be supported. Newer versions of language runtime specific components will attempt to be as API and AOE compatible from minor version to minor version as possible. Minor version to minor version compatibility is not guaranteed, however.

Components and developer tools that receive continuous updates through the Operator Hub, referred to as Operators and operands, should be considered API tier 3. Developers should use caution and understand how these components may change with each minor release. Users are encouraged to consult the compatibility guidelines documented by the component.

[discrete]
[id="api-tier-4_{context}"]
== API tier 4

No compatibility is provided. API and AOE can change at any point. These capabilities should not be used by applications needing long-term support.
@@ -60,7 +60,7 @@ You can configure several fields to control the behavior of a probe:
** for a readiness probe, the pod is marked `Unready`
** for a startup probe, the container is killed and is subject to the pod's `restartPolicy`

[discrete]
[id="application-health-examples"]
== Example probes
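
As one way to wire these fields up from the CLI (a sketch; the deployment name and endpoint are placeholders), `oc set probe` can attach a readiness probe to an existing workload:

----
$ oc set probe deployment/<name> --readiness \
    --get-url=http://:8080/healthz \
    --initial-delay-seconds=10 \
    --failure-threshold=3
----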
@@ -6,7 +6,7 @@
[id="build-config-capability_{context}"]
= Build capability

[discrete]
== Purpose

The `Build` capability enables the `Build` API. The `Build` API manages the lifecycle of `Build` and `BuildConfig` objects.
@@ -68,7 +68,7 @@ If you are updating a cluster in a disconnected environment, install the `oc` ve
endif::restricted[]
====

[discrete]
== Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (`oc`) binary on Linux by using the following procedure.
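
In outline, the Linux installation is an unpack-and-move (a sketch, assuming the client archive has already been downloaded as `openshift-client-linux.tar.gz`):

----
$ tar xvf openshift-client-linux.tar.gz
$ sudo mv oc /usr/local/bin/
$ oc version
----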
@@ -122,7 +122,7 @@ $ echo $PATH
$ oc <command>
----

[discrete]
== Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (`oc`) binary on Windows by using the following procedure.
@@ -168,7 +168,7 @@ C:\> path
C:\> oc <command>
----

[discrete]
== Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (`oc`) binary on macOS by using the following procedure.
@@ -29,12 +29,12 @@ The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubern
By setting different values for the `credentialsMode` parameter in the `install-config.yaml` file, the CCO can be configured to operate in several different modes. If no mode is specified, or the `credentialsMode` parameter is set to an empty string (`""`), the CCO operates in its default mode.
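
To check which mode a running cluster is using (a sketch; this reads the `cloudcredential` cluster resource that the CCO maintains):

----
$ oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}
----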
ifdef::operators[]
[discrete]
== Project

link:https://github.com/openshift/cloud-credential-operator[openshift-cloud-credential-operator]

[discrete]
== CRDs

* `credentialsrequests.cloudcredential.openshift.io`
@@ -42,7 +42,7 @@ link:https://github.com/openshift/cloud-credential-operator[openshift-cloud-cred
** CR: `CredentialsRequest`
** Validation: Yes

[discrete]
== Configuration objects

No configuration required.
@@ -12,7 +12,7 @@ The Cluster Authentication Operator installs and maintains the `Authentication`
$ oc get clusteroperator authentication -o yaml
----

[discrete]
== Project

link:https://github.com/openshift/cluster-authentication-operator[cluster-authentication-operator]
@@ -42,7 +42,7 @@ Ensure that the `maxNodesTotal` value in the `ClusterAutoscaler` resource defini
====
endif::openshift-rosa-hcp[]

[discrete]
[id="cluster-autoscaler-scale-down_{context}"]
== Automatic node removal
@@ -64,7 +64,7 @@ If the following types of pods are present on a node, the cluster autoscaler wil
For example, you set the maximum CPU limit to 64 cores and configure the cluster autoscaler to only create machines that have 8 cores each. If your cluster starts with 30 cores, the cluster autoscaler can add up to 4 more nodes with 32 cores, for a total of 62.

[discrete]
[id="cluster-autoscaler-limitations_{context}"]
== Limitations
@@ -82,7 +82,7 @@ The cluster autoscaler only adds nodes in autoscaled node groups if doing so wou
If the available node types cannot meet the requirements for a pod request, or if the node groups that could meet these requirements are at their maximum size, the cluster autoscaler cannot scale up.
====

[discrete]
[id="cluster-autoscaler-interaction_{context}"]
== Interaction with other scheduling features
@@ -7,12 +7,12 @@
The Cluster Autoscaler Operator manages deployments of the OpenShift Cluster Autoscaler using the `cluster-api` provider.

[discrete]
== Project

link:https://github.com/openshift/cluster-autoscaler-operator[cluster-autoscaler-operator]

[discrete]
== CRDs

* `ClusterAutoscaler`: This is a singleton resource, which controls the configuration of the autoscaler instance for the cluster. The Operator only responds to the `ClusterAutoscaler` resource named `default` in the managed namespace, the value of the `WATCH_NAMESPACE` environment variable.
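
Because only the `default` object is honored, a quick way to inspect the active autoscaler configuration (a sketch) is:

----
$ oc get clusterautoscaler default -o yaml
----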
@@ -49,7 +49,7 @@ endif::cluster-caps[]
ifdef::operator-ref[]

[discrete]
== Project

link:https://github.com/openshift/cluster-baremetal-operator[cluster-baremetal-operator]
@@ -12,12 +12,12 @@ The {cluster-capi-operator} maintains the lifecycle of Cluster API resources. Th
This Operator is available as a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] for {aws-first}, {gcp-first}, {azure-first}, {rh-openstack-first}, and {vmw-first} clusters.
====

[discrete]
== Project

link:https://github.com/openshift/cluster-capi-operator[cluster-capi-operator]

[discrete]
== CRDs

* `awsmachines.infrastructure.cluster.x-k8s.io`
@@ -53,7 +53,7 @@ The Cloud Controller Manager Operator includes the following components:
By default, the Operator exposes Prometheus metrics through the `metrics` service.

ifdef::operators[]
[discrete]
== Project

link:https://github.com/openshift/cluster-cloud-controller-manager-operator[cluster-cloud-controller-manager-operator]
@@ -12,7 +12,7 @@ The Cluster Config Operator performs the following tasks related to `config.open
* Handles migrations.

[discrete]
== Project

link:https://github.com/openshift/cluster-config-operator[cluster-config-operator]
@@ -35,7 +35,7 @@ The Cluster CSI Snapshot Controller Operator installs and maintains the CSI Snap
ifdef::operator-ref[]

[discrete]
== Project

link:https://github.com/openshift/cluster-csi-snapshot-controller-operator[cluster-csi-snapshot-controller-operator]
@@ -14,7 +14,7 @@ The Operator creates a working default deployment based on the cluster's configu
The DNS Operator manages CoreDNS as a Kubernetes daemon set exposed as a service with a static IP. CoreDNS runs on all nodes in the cluster.

[discrete]
== Project

link:https://github.com/openshift/cluster-dns-operator[cluster-dns-operator]
@@ -41,7 +41,7 @@ If you disable the `ImageRegistry` capability or if you disable the integrated {
If you disable the `ImageRegistry` capability, you can reduce the overall resource footprint of {product-title} in Telco environments. Depending on your deployment, you can disable this component if you do not need it.
endif::[]

[discrete]
== Project

link:https://github.com/openshift/cluster-image-registry-operator[cluster-image-registry-operator]
@@ -16,12 +16,12 @@ The Kubernetes Scheduler Operator contains the following components:
By default, the Operator exposes Prometheus metrics through the metrics service.

[discrete]
== Project

link:https://github.com/openshift/cluster-kube-scheduler-operator[cluster-kube-scheduler-operator]

[discrete]
== Configuration

The configuration for the Kubernetes Scheduler is the result of merging:
@@ -7,7 +7,7 @@
The Kubernetes Storage Version Migrator Operator detects changes of the default storage version, creates migration requests for resource types when the storage version changes, and processes migration requests.

[discrete]
== Project

link:https://github.com/openshift/cluster-kube-storage-version-migrator-operator[cluster-kube-storage-version-migrator-operator]
@@ -12,7 +12,7 @@ The Cluster Machine Approver Operator automatically approves the CSRs requested
For the control plane node, the `approve-csr` service on the bootstrap node automatically approves all CSRs during the cluster bootstrapping phase.
====
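
After the bootstrapping phase, pending CSRs that are not approved automatically can be inspected and approved manually; as a sketch (the CSR name is a placeholder):

----
$ oc get csr
$ oc adm certificate approve <csr_name>
----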
[discrete]
== Project

link:https://github.com/openshift/cluster-machine-approver[cluster-machine-approver-operator]
@@ -19,7 +19,7 @@ The custom resource definition (CRD) `openshiftcontrollermanagers.operator.opens
$ oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml
----

[discrete]
== Project

link:https://github.com/openshift/cluster-openshift-controller-manager-operator[cluster-openshift-controller-manager-operator]
@@ -57,7 +57,7 @@ The samples resource includes a finalizer, which cleans up the following upon it
Upon deletion of the samples resource, the Cluster Samples Operator recreates the resource using the default configuration.

[discrete]
== Project

link:https://github.com/openshift/cluster-samples-operator[cluster-samples-operator]
@@ -41,19 +41,19 @@ endif::cluster-caps[]
ifdef::operator-ref[]

[discrete]
== Project

link:https://github.com/openshift/cluster-storage-operator[cluster-storage-operator]

[discrete]
== Configuration

No configuration is required.

endif::operator-ref[]

[discrete]
== Notes

* The storage class that the Operator creates can be made non-default by editing its annotation, but this storage class cannot be deleted as long as the Operator runs.
@@ -11,7 +11,7 @@ The CVO also checks with the OpenShift Update Service to see the valid updates a
For more information regarding cluster version condition types, see "Understanding cluster version condition types".

[discrete]
== Project

link:https://github.com/openshift/cluster-version-operator[cluster-version-operator]
@@ -8,7 +8,7 @@
To configure a cluster-wide proxy, you must meet the following requirements. These requirements are valid when you configure a proxy during installation or postinstallation.

[discrete]
[id="cluster-wide-proxy-general-prereqs_{context}"]
== General requirements
@@ -34,7 +34,7 @@ These endpoints are required to complete requests from the nodes to the AWS EC2
When using a cluster-wide proxy, you must configure the `s3.<aws_region>.amazonaws.com` endpoint as type `Gateway`.
====

[discrete]
[id="cluster-wide-proxy-network-prereqs_{context}"]
== Network requirements
@@ -36,7 +36,7 @@ The Console Operator installs and maintains the {product-title} web console on a
ifdef::operator-ref[]

[discrete]
== Project

link:https://github.com/openshift/console-operator[console-operator]
@@ -12,12 +12,12 @@ The Control Plane Machine Set Operator automates the management of control plane
This Operator is available for Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Nutanix, and VMware vSphere.
====

[discrete]
== Project

link:https://github.com/openshift/cluster-control-plane-machine-set-operator[cluster-control-plane-machine-set-operator]

[discrete]
== CRDs

* `controlplanemachineset.machine.openshift.io`
@@ -73,7 +73,7 @@ If you used a custom machine config pool to apply an on-cluster layered image to
You can modify an on-cluster layered image as needed, to install additional packages, remove existing packages, change repositories, update secrets, or make other similar changes, by editing the MachineOSConfig object. For more information, see "Modifying a custom layered image".
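
That edit is an ordinary resource update; as a sketch (the object name is a placeholder):

----
$ oc edit MachineOSConfig/<machine_os_config_name>
----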
[discrete]
[id="coreos-layering-configuring-on-limitations_{context}"]
== {image-mode-os-on-caps} known limitations
@@ -8,7 +8,7 @@
When you create a control plane machine set for an existing cluster, the provider specification must match the `providerSpec` configuration in the control plane machine custom resource (CR) that the installation program creates. You can omit any field that is set in the failure domain section of the CR.

[discrete]
[id="cpmso-yaml-provider-spec-gcp-oc_{context}"]
== Values obtained by using the OpenShift CLI
@@ -8,7 +8,7 @@
When you create a control plane machine set for an existing cluster, the provider specification must match the `providerSpec` configuration in the control plane machine custom resource (CR) that the installation program creates.

[discrete]
[id="cpmso-yaml-provider-spec-nutanix-oc_{context}"]
== Values obtained by using the OpenShift CLI
@@ -11,7 +11,7 @@ A Content Security Policy (CSP) is delivered to the browser in the `Content-Secu
[id="content-security-policy-key-features_{context}"]
== Key features of `contentSecurityPolicy`

[discrete]
=== Directive Types

The supported directive types include `DefaultSrc`, `ScriptSrc`, `StyleSrc`, `ImgSrc`, and `FontSrc`. These directives allow you to specify valid sources for loading different types of content for your plugin. Each directive type serves a different purpose. For example, `ScriptSrc` defines valid JavaScript sources, while `ImgSrc` controls where images can be loaded from.
@@ -19,17 +19,17 @@ The supported directive types include `DefaultSrc`, `ScriptSrc`, `StyleSrc`, `Im
//backporting the ConnectSrc directive, but that is tbd - openshift/console#14701 and https://github.com/openshift/api/pull/2164

[discrete]
=== Values

Each directive can have a list of values representing allowed sources. For example, `ScriptSrc` can specify multiple external scripts. These values are restricted to 1024 characters and cannot include whitespace, commas, or semicolons. Additionally, single-quoted strings and wildcard characters (`*`) are disallowed.

[discrete]
=== Unified Policy

The {product-title} web console aggregates the CSP directives across all enabled `ConsolePlugin` custom resources (CRs) and merges them with its own default policy. The combined policy is then applied with the `Content-Security-Policy-Report-Only` HTTP response header.

[discrete]
=== Validation Rules

* Each directive can have up to 16 unique values.
* The total size of all values across directives must not exceed 8192 bytes (8KB).
@@ -6,7 +6,7 @@
[id="deployment-config-capability_{context}"]
= DeploymentConfig capability

[discrete]
== Purpose

The `DeploymentConfig` capability enables and manages the `DeploymentConfig` API.
@@ -9,7 +9,7 @@
[id="dynamic-plugin-api_{context}"]
= Dynamic plugin API

[discrete]
== `useActivePerspective`

Hook that provides the currently active perspective and a callback for setting the active perspective. It returns a tuple containing the current active perspective and setter callback.
@@ -30,7 +30,7 @@ const Component: React.FC = (props) => {
}
----

[discrete]
== `GreenCheckCircleIcon`

Component for displaying a green check mark circle icon.
@@ -49,7 +49,7 @@ Component for displaying a green check mark circle icon.
|`size` |(optional) icon size: (`sm`, `md`, `lg`, `xl`)
|===

[discrete]
== `RedExclamationCircleIcon`

Component for displaying a red exclamation mark circle icon.
@@ -68,7 +68,7 @@ Component for displaying a red exclamation mark circle icon.
|`size` |(optional) icon size: (`sm`, `md`, `lg`, `xl`)
|===

[discrete]
== `YellowExclamationTriangleIcon`

Component for displaying a yellow triangle exclamation icon.
@@ -87,7 +87,7 @@ Component for displaying a yellow triangle exclamation icon.
|`size` |(optional) icon size: (`sm`, `md`, `lg`, `xl`)
|===

[discrete]
== `BlueInfoCircleIcon`

Component for displaying a blue info circle icon.
@@ -106,7 +106,7 @@ Component for displaying a blue info circle icon.
|`size` |(optional) icon size: (`sm`, `md`, `lg`, `xl`)
|===

[discrete]
== `ErrorStatus`

Component for displaying an error status popover.
@@ -127,7 +127,7 @@ Component for displaying an error status popover.
|`popoverTitle` |(optional) title for popover
|===

[discrete]
== `InfoStatus`

Component for displaying an information status popover.
@@ -148,7 +148,7 @@ Component for displaying an information status popover.
|`popoverTitle` |(optional) title for popover
|===

[discrete]
== `ProgressStatus`

Component for displaying a progressing status popover.
@@ -169,7 +169,7 @@ Component for displaying a progressing status popover.
|`popoverTitle` |(optional) title for popover
|===

[discrete]
== `SuccessStatus`

Component for displaying a success status popover.
@@ -190,7 +190,7 @@ Component for displaying a success status popover.
|`popoverTitle` |(optional) title for popover
|===

[discrete]
== `checkAccess`

Provides information about user access to a given resource. It returns an object with resource access information.
@@ -202,7 +202,7 @@ Provides information about user access to a given resource. It returns an object
|`impersonate` |impersonation details
|===

[discrete]
== `useAccessReview`

Hook that provides information about user access to a given resource. It returns an array with `isAllowed` and `loading` values.
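
A minimal usage sketch (the import path assumes the console dynamic plugin SDK package; the resource attributes are illustrative):

----
import * as React from 'react';
import { useAccessReview } from '@openshift-console/dynamic-plugin-sdk';

const DeletePodButton: React.FC<{ namespace: string }> = ({ namespace }) => {
  // Returns [isAllowed, loading] for the requested access review.
  const [isAllowed, loading] = useAccessReview({
    group: '',
    resource: 'pods',
    verb: 'delete',
    namespace,
  });
  if (loading || !isAllowed) {
    return null;
  }
  return <button>Delete pod</button>;
};
----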
@@ -214,7 +214,7 @@ Hook that provides information about user access to a given resource. It returns
|`impersonate` |impersonation details
|===

[discrete]
== `useResolvedExtensions`

React hook for consuming Console extensions with resolved `CodeRef` properties. This hook accepts the same argument(s) as the `useExtensions` hook and returns an adapted list of extension instances, resolving all code references within each extension's properties.
@@ -238,7 +238,7 @@ extension as an argument and return a boolean flag indicating whether or
not the extension meets desired type constraints
|===

[discrete]
== `HorizontalNav`

A component that creates a Navigation bar for a page. Routing is handled as part of the component. `console.tab/horizontalNav` can be used to add additional content to any horizontal navigation.
@@ -268,7 +268,7 @@ K8sResourceCommon type
|`match` |match object provided by React Router
|===

[discrete]
== `TableData`

Component for displaying table data within a table row.
@@ -299,7 +299,7 @@ const PodRow: React.FC<RowProps<K8sResourceCommon>> = ({ obj, activeColumnIDs })
|`className` |(optional) class name for styling
|===

[discrete]
== `useActiveColumns`

A hook that provides a list of user-selected active TableColumns.
@@ -335,7 +335,7 @@ user settings. Usually a group/version/kind (GVK) string for a resource.
A tuple containing the current user selected active columns (a subset of options.columns), and a boolean flag indicating whether user settings have been loaded.

[discrete]
== `ListPageHeader`

Component for generating a page header.
@@ -360,7 +360,7 @@ const exampleList: React.FC = () => {
|`badge` |(optional) badge icon as react node
|===

[discrete]
== `ListPageCreate`

Component for adding a create button for a specific resource kind that automatically generates a link to the create YAML for this resource.
@@ -385,7 +385,7 @@ const exampleList: React.FC<MyProps> = () => {
|`groupVersionKind` |the resource group/version/kind to represent
|===

[discrete]
== `ListPageCreateLink`

Component for creating a stylized link.
@@ -415,7 +415,7 @@ determine access
|`children` |(optional) children for the component
|===

[discrete]
== `ListPageCreateButton`

Component for creating a button.
@@ -443,7 +443,7 @@ determine access
|`pfButtonProps` |(optional) Patternfly Button props
|===

[discrete]
== `ListPageCreateDropdown`

Component for creating a dropdown wrapped with permissions check.
@@ -479,7 +479,7 @@ determine access
|`children` |(optional) children for the dropdown toggle
|===

[discrete]
== `ResourceLink`

Component that creates a link to a specific resource type with an icon badge.
@@ -527,7 +527,7 @@ link to
|`truncate` |(optional) flag to truncate the link if too long
|===

[discrete]
== `ResourceIcon`

Component that creates an icon badge for a specific resource type.
@@ -546,7 +546,7 @@ Component that creates an icon badge for a specific resource type.
|`className` |(optional) class style for component
|===

[discrete]
== `useK8sModel`

Hook that retrieves the k8s model for provided K8sGroupVersionKind from redux. It returns an array with the first item as k8s model and second item as `inFlight` status.
@@ -568,7 +568,7 @@ K8sGroupVersionKind is preferred alternatively can pass reference for
group, version, kind which is deprecated, i.e., group/version/kind (GVK) K8sResourceKindReference.
|===

[discrete]
== `useK8sModels`

Hook that retrieves all current k8s models from redux. It returns an array with the first item as the list of k8s models and second item as `inFlight` status.
@@ -582,7 +582,7 @@ const Component: React.FC = () => {
}
----

[discrete]
== `useK8sWatchResource`

Hook that retrieves the k8s resource along with status for loaded and error. It returns an array with the first item as the resource(s), the second item as the loaded status, and the third item as the error state, if any.
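
A minimal usage sketch (the import path assumes the console dynamic plugin SDK package; watching pods in the `default` namespace is illustrative):

----
import * as React from 'react';
import { useK8sWatchResource, K8sResourceCommon } from '@openshift-console/dynamic-plugin-sdk';

const PodCount: React.FC = () => {
  // Returns [data, loaded, error] for the watched resource(s).
  const [pods, loaded, error] = useK8sWatchResource<K8sResourceCommon[]>({
    groupVersionKind: { version: 'v1', kind: 'Pod' },
    isList: true,
    namespaced: true,
    namespace: 'default',
  });
  if (!loaded || error) {
    return null;
  }
  return <span>{pods.length} pods</span>;
};
----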
@@ -605,7 +605,7 @@ const Component: React.FC = () => {
|`initResource` |options needed to watch for resource.
|===

[discrete]
== `useK8sWatchResources`

Hook that retrieves the k8s resources along with their respective status for loaded and error. It returns a map where keys are as provided in initResources and each value has three properties: data, loaded, and error.
@@ -632,7 +632,7 @@ wherein key is unique to resource and value is options needed
to watch for the respective resource.
|===

[discrete]
== `consoleFetch`

A custom wrapper around `fetch` that adds console-specific headers and allows for retries and timeouts. It also validates the response status code and throws an appropriate error, or logs out the user if required. It returns a promise that resolves to the response.
@@ -645,7 +645,7 @@ A custom wrapper around `fetch` that adds console specific headers and allows fo
|`timeout` |The timeout in milliseconds
|===

[discrete]
== `consoleFetchJSON`

A custom wrapper around `fetch` that adds console-specific headers and allows for retries and timeouts. It also validates the response status code and throws an appropriate error, or logs out the user if required. It returns the response as a JSON object. Uses `consoleFetch` internally. It returns a promise that resolves to the response as a JSON object.
@@ -665,7 +665,7 @@ A custom wrapper around `fetch` that adds console specific headers and allows fo
the active cluster the user has selected
|===

[discrete]
== `consoleFetchText`

A custom wrapper around `fetch` that adds console-specific headers and allows for retries and timeouts. It also validates the response status code and throws an appropriate error, or logs out the user if required. It returns the response as text. Uses `consoleFetch` internally. It returns a promise that resolves to the response as text.
@@ -683,7 +683,7 @@ A custom wrapper around `fetch` that adds console specific headers and allows fo
the active cluster the user has selected
|===

[discrete]
== `getConsoleRequestHeaders`

A function that creates impersonation and multicluster-related headers for API requests using the current redux state. It returns an object containing the appropriate impersonation and cluster request headers, based on redux state.
@@ -695,7 +695,7 @@ A function that creates impersonation and multicluster related headers for API r
targetCluster
|===

[discrete]
== `k8sGetResource`

It fetches a resource from the cluster, based on the provided options. If the name is provided, it returns one resource, else it returns all the resources matching the model. It returns a promise that resolves to the response as a JSON object with a resource if the name is provided, else it returns all the resources matching the
@@ -723,7 +723,7 @@ URL.
request headers, method, redirect, etc. See link:{power-bi-url}[Interface RequestInit] for more.
|===

[discrete]
== `k8sCreateResource`

It creates a resource in the cluster, based on the provided options. It returns a promise that resolves to the response of the resource created. In case of failure, the promise gets rejected with an HTTP error response.
@@ -743,7 +743,7 @@ It creates a resource in the cluster, based on the provided options. It returns
URL.
|===

[discrete]
== `k8sUpdateResource`

It updates the entire resource in the cluster, based on the provided options. When a client needs to replace an existing resource entirely, they can use `k8sUpdate`. Alternatively, they can use `k8sPatch` to perform a partial update. It returns a promise that resolves to the response of the resource updated. In case of failure, the promise gets rejected with an HTTP error response.
@@ -768,7 +768,7 @@ cluster-scoped resources.
URL.
|===

[discrete]
== `k8sPatchResource`

It patches any resource in the cluster, based on provided options. When a client needs to perform the partial update, they can use
@@ -792,7 +792,7 @@ with the operation, path, and value.
URL.
|===

[discrete]
== `k8sDeleteResource`

It deletes resources from the cluster, based on the provided model and resource. The garbage collection works based on `Foreground` or `Background`, which can be configured with the `propagationPolicy` property in the provided model or passed in JSON. It returns a promise that resolves to a response of kind Status. In case of failure, the promise gets rejected with an HTTP error response.
@@ -823,7 +823,7 @@ request headers, method, redirect, etc. See link:{power-bi-url}[Interface Reques
explicitly if provided or else it defaults to the model's "propagationPolicy".
|===

[discrete]
== `k8sListResource`

Lists the resources as an array in the cluster, based on provided options. It returns a promise that resolves to the response.
@@ -842,12 +842,12 @@ URL and can pass label selector's as well with key "labelSelector".
request headers, method, redirect, etc. See link:{power-bi-url}[Interface RequestInit] for more.
|===

[discrete]
== `k8sListResourceItems`

Same interface as `k8sListResource` but returns the sub items.

[discrete]
== `getAPIVersionForModel`

Provides the apiVersion for a k8s model, i.e., `group/version`.
@@ -858,7 +858,7 @@ Provides apiVersion for a k8s model.
|`model` |k8s model
|===

[discrete]
== `getGroupVersionKindForResource`

Provides a group, version, and kind for a resource. It returns the group, version, kind for the provided resource. If the resource does not have an API group, group "core" is returned. If the resource has an invalid apiVersion, then it throws an Error.
@@ -869,7 +869,7 @@ Provides a group, version, and kind for a resource. It returns the group, versio
|`resource` |k8s resource
|===

[discrete]
== `getGroupVersionKindForModel`

Provides a group, version, and kind for a k8s model. This returns the group, version, kind for the provided model. If the model does not have an apiGroup, group "core" is returned.
@@ -880,7 +880,7 @@ Provides a group, version, and kind for a k8s model. This returns the group, ver
|`model` |k8s model
|===

[discrete]
== `StatusPopupSection`

Component that shows the status in a popup window. Helpful component for building `console.dashboards/overview/health/resource` extensions.
@@ -909,7 +909,7 @@ Component that shows the status in a popup window. Helpful component for buildin
|`children` |(optional) children for the popup
|===

[discrete]
== `StatusPopupItem`

Status element used in status popup; used in `StatusPopupSection`.
@@ -938,7 +938,7 @@ Status element used in status popup; used in `StatusPopupSection`.
|`children` |child elements
|===

[discrete]
== `Overview`

Creates a wrapper component for a dashboard.
@@ -958,7 +958,7 @@ Creates a wrapper component for a dashboard.
|`children` |(optional) elements of the dashboard
|===

[discrete]
== `OverviewGrid`

Creates a grid of card elements for a dashboard; used within `Overview`.
@@ -979,7 +979,7 @@ Creates a grid of card elements for a dashboard; used within `Overview`.
|`rightCards` |(optional) cards for right side of grid
|===

[discrete]
== `InventoryItem`

Creates an inventory card item.
@@ -1003,7 +1003,7 @@ Creates an inventory card item.
|`children` |elements to render inside the item
|===

[discrete]
== `InventoryItemTitle`

Creates a title for an inventory card item; used within `InventoryItem`.
@@ -1027,7 +1027,7 @@ Creates a title for an inventory card item; used within `InventoryItem`.
|
||||
|`children` |elements to render inside the title
|
||||
|===
|
||||
|
||||
[discrete]
|
||||
|
||||
== `InventoryItemBody`
|
||||
|
||||
Creates the body of an inventory card; used within `InventoryCard` and can be used with `InventoryTitle`.
|
||||
@@ -1052,7 +1052,7 @@ Creates the body of an inventory card; used within `InventoryCard` and can be us
|
||||
|`error` |elements of the div
|
||||
|===
|
||||
|
||||
[discrete]
|
||||
|
||||
== `InventoryItemStatus`
|
||||
|
||||
Creates a count and icon for an inventory card with optional link address; used within `InventoryItemBody`
|
||||
@@ -1078,7 +1078,7 @@ Creates a count and icon for an inventory card with optional link address; used
|
||||
|`linkTo` |(optional) link address
|
||||
|===
|
||||
|
||||
[discrete]
|
||||
|
||||
== `InventoryItemLoading`
|
||||
|
||||
Creates a skeleton container for when an inventory card is loading; used with `InventoryItem` and related components
|
||||
@@ -1098,7 +1098,7 @@ return (
|
||||
)
|
||||
----
|
||||
|
||||
[discrete]
|
||||
|
||||
== `useFlag`
|
||||
|
||||
Hook that returns the given feature flag from FLAGS redux state. It returns the boolean value of the requested feature flag or undefined.
|
||||
@@ -1109,7 +1109,7 @@ Hook that returns the given feature flag from FLAGS redux state. It returns the
|
||||
|`flag` |The feature flag to return
|
||||
|===
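
A minimal sketch of guarding a component behind a feature flag; the flag name is a placeholder:

[source,typescript]
----
import * as React from 'react';
import { useFlag } from '@openshift-console/dynamic-plugin-sdk';

// 'EXAMPLE_FLAG' is a placeholder for a flag contributed through a console.flag extension.
const ExampleFeature: React.FC = () => {
  const enabled = useFlag('EXAMPLE_FLAG'); // true, false, or undefined while flags load
  return enabled ? <div>The example feature is enabled.</div> : null;
};
----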

[discrete]

== `CodeEditor`

A basic lazy loaded Code editor with hover help and completion.
@@ -1140,7 +1140,7 @@ A basic lazy loaded Code editor with hover help and completion.
|===


[discrete]

== `ResourceYAMLEditor`

A lazy loaded YAML editor for Kubernetes resources with hover help and completion. The component uses the YAMLEditor and adds more functionality on top of it, such as resource update handling, alerts, save, cancel, and reload buttons, and accessibility. Unless an `onSave` callback is provided, the resource update is handled automatically. It should be wrapped in a `React.Suspense` component.
@@ -1169,7 +1169,7 @@ the editor. This prop is used only during the initial render
default update performed on the resource by the editor
|===

[discrete]

== `ResourceEventStream`

A component to show events related to a particular resource.
@@ -1187,7 +1187,7 @@ return <ResourceEventStream resource={resource} />
|`resource` |An object whose related events should be shown.
|===

[discrete]

== `usePrometheusPoll`

Sets up a poll to Prometheus for a single query. It returns a tuple containing the query response, a boolean flag indicating whether the response has completed, and any errors encountered during the request or post-processing of the request.
@@ -1215,7 +1215,7 @@ of the query range
|`\{string} [options.timeout]` | (optional) a search param to append
|===
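
A minimal sketch of polling a single query, assuming the SDK's `PrometheusEndpoint` enum; the PromQL string is illustrative:

[source,typescript]
----
import * as React from 'react';
import { usePrometheusPoll, PrometheusEndpoint } from '@openshift-console/dynamic-plugin-sdk';

const CpuUsage: React.FC = () => {
  // Tuple order follows the description above: response, completed flag, error.
  const [response, loaded, error] = usePrometheusPoll({
    endpoint: PrometheusEndpoint.QUERY,
    query: 'sum(rate(container_cpu_usage_seconds_total[5m]))', // illustrative PromQL
  });
  if (error) {
    return <span>Query failed</span>;
  }
  return loaded ? <pre>{JSON.stringify(response)}</pre> : <span>Loading...</span>;
};
----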

[discrete]

== `Timestamp`

A component to render a timestamp. The timestamps are synchronized between individual instances of the Timestamp component. The provided timestamp is formatted according to user locale.
@@ -1234,7 +1234,7 @@ tooltip.
|`className` |additional class name for the component.
|===
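
A minimal usage sketch; the `timestamp` prop name follows the parameter table above:

[source,typescript]
----
import * as React from 'react';
import { Timestamp } from '@openshift-console/dynamic-plugin-sdk';

// Renders a resource's creation time, formatted for the user's locale.
const Created: React.FC<{ creationTimestamp: string }> = ({ creationTimestamp }) => (
  <Timestamp timestamp={creationTimestamp} />
);
----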

[discrete]

== `useOverlay`

The `useOverlay` hook inserts a component directly to the DOM outside the web console's page structure. This allows the component to be freely styled and positioned with CSS. For example, to float the overlay in the top right corner of the UI: `style={{ position: 'absolute', right: '2rem', top: '2rem', zIndex: 999 }}`. It is possible to add multiple overlays by calling `useOverlay` multiple times. A `closeOverlay` function is passed to the overlay component. Calling it removes the component from the DOM without affecting any other overlays that might have been added with `useOverlay`. Additional props can be passed to `useOverlay` and they will be passed through to the overlay component.
@@ -1273,7 +1273,7 @@ const AppPage: React.FC = () => {
}
----

[discrete]

== `ActionServiceProvider`

Component that allows receiving contributions from other plugins for the `console.action/provider` extension type.
@@ -1300,7 +1300,7 @@ Component that allows to receive contributions from other plugins for the `conso
|`context` |Object with contextId and optional plugin data
|===

[discrete]

== `NamespaceBar`

A component that renders a horizontal toolbar with a namespace dropdown menu in the leftmost position. Additional components can be passed in as children and are rendered to the right of the namespace dropdown. This component is designed to be used at the top of the page. It should be used on pages where the user needs to be able to change the active namespace, such as on pages with k8s resources.
@@ -1339,7 +1339,7 @@ dropdown and has no effect on child components.
toolbar to the right of the namespace dropdown.
|===

[discrete]

== `ErrorBoundaryFallbackPage`

Creates a full page ErrorBoundaryFallbackPage component to display the "Oh no! Something went wrong." message along with the stack trace and other helpful debugging information. This is to be used in conjunction with an error boundary component.
@@ -1367,7 +1367,7 @@ Creates full page ErrorBoundaryFallbackPage component to display the "Oh no! Som
|`title` |title to render as the header of the error boundary page
|===

[discrete]

== `QueryBrowser`

A component that renders a graph of the results from a Prometheus PromQL query along with controls for interacting with the graph.
@@ -1410,7 +1410,7 @@ A component that renders a graph of the results from a Prometheus PromQL query a
|`units` |(optional) Units to display on the Y-axis and in the tooltip.
|===

[discrete]

== `useAnnotationsModal`

A hook that provides a callback to launch a modal for editing Kubernetes resource annotations.
@@ -1434,7 +1434,7 @@ const PodAnnotationsButton = ({ pod }) => {
.Returns
A function which launches a modal for editing a resource's annotations.

[discrete]

== `useDeleteModal`

A hook that provides a callback to launch a modal for deleting a resource.
@@ -1462,7 +1462,7 @@ const DeletePodButton = ({ pod }) => {
.Returns
A function which launches a modal for deleting a resource.

[discrete]

== `useLabelsModel`

A hook that provides a callback to launch a modal for editing Kubernetes resource labels.
@@ -1486,7 +1486,7 @@ const PodLabelsButton = ({ pod }) => {
.Returns
A function which launches a modal for editing a resource's labels.

[discrete]

== `useActiveNamespace`

Hook that provides the currently active namespace and a callback for setting the active namespace.
@@ -1510,7 +1510,7 @@ const Component: React.FC = (props) => {
.Returns
A tuple containing the current active namespace and setter callback.

[discrete]

== `useUserSettings`

Hook that provides a user setting value and a callback for setting the user setting value.
@@ -1533,7 +1533,7 @@ const Component: React.FC = (props) ++=>++ {
.Returns
A tuple containing the user setting value, a setter callback, and a loaded boolean.

[discrete]

== `useQuickStartContext`

Hook that provides the current quick start context values. This allows plugins to interoperate with console quick start functionality.
@@ -1553,7 +1553,7 @@ const OpenQuickStartButton ++= ({ quickStartId }) => {++
.Returns
Quick start context values object.

[discrete]

== `PerspectiveContext`

Deprecated: Use the provided `usePerspectiveContext` instead. Creates the perspective context.
@@ -1564,7 +1564,7 @@ Deprecated: Use the provided `usePerspectiveContext` instead. Creates the perspe
|`PerspectiveContextType` |object with active perspective and setter
|===

[discrete]

== `useAccessReviewAllowed`

Deprecated: Use `useAccessReview` from `@console/dynamic-plugin-sdk` instead. Hook that provides allowed status about user access to a given resource. It returns the `isAllowed` boolean value.
@@ -1576,7 +1576,7 @@ Deprecated: Use `useAccessReview` from `@console/dynamic-plugin-sdk` instead. Ho
|`impersonate` |impersonation details
|===

[discrete]

== `useSafetyFirst`

Deprecated: This hook is not related to console functionality. Hook that ensures a safe asynchronous setting of React state in case a given component could be unmounted. It returns an array with a pair of state value and its set function.
@@ -1589,7 +1589,7 @@ Deprecated: This hook is not related to console functionality. Hook that ensures

:!power-bi-url:

[discrete]

== `VirtualizedTable`

Deprecated: Use PatternFly's link:https://www.patternfly.org/extensions/data-view/overview/[Data view] instead. A component for making virtualized tables.
@@ -1628,7 +1628,7 @@ const MachineList: React.FC<MachineListProps> = (props) => {
|`rowData` |(optional) data specific to row
|===

[discrete]

== `ListPageFilter`

Deprecated: Use PatternFly's link:https://www.patternfly.org/extensions/data-view/overview/[Data view] instead. Component that generates a filter for a list page.
@@ -1680,7 +1680,7 @@ both name and label filter
|`hideColumnManagement` |(optional) flag to hide the column management
|===

[discrete]

== `useListPageFilter`

Deprecated: Use PatternFly's link:https://www.patternfly.org/extensions/data-view/overview/[Data view] instead. A hook that manages filter state for the ListPageFilter component. It returns a tuple containing the data filtered by all static filters, the data filtered by all static and row filters, and a callback that updates rowFilters.
@@ -1718,7 +1718,7 @@ available filter options
statically applied to the data
|===

[discrete]

== `YAMLEditor`

Deprecated: Use `CodeEditor` instead. A basic lazy loaded YAML editor with hover help and completion.
@@ -1756,7 +1756,7 @@ the `editor` property, you are able to access to all methods to control
the editor.
|===

[discrete]

== `useModal`

Deprecated: Use `useOverlay` from `@console/dynamic-plugin-sdk` instead. A hook to launch Modals.

@@ -6,7 +6,7 @@
[id="dynamic-plugin-sdk-extensions_{context}"]
= Dynamic plugin extension types

[discrete]

== `console.action/filter`

`ActionFilter` can be used to filter an action.
@@ -26,7 +26,7 @@ remove the `ModifyCount` action from a deployment with a horizontal pod
autoscaler (HPA).
|===

[discrete]

== `console.action/group`

`ActionGroup` contributes an action group that can also be a submenu.
@@ -50,7 +50,7 @@ item referenced here. For arrays, the first one found in order is
used. The `insertBefore` value takes precedence.
|===

[discrete]

== `console.action/provider`

`ActionProvider` contributes a hook that returns a list of actions for a specific context.
@@ -66,7 +66,7 @@ that returns actions for the given scope. If `contextId` = `resource`, then
the scope will always be a Kubernetes resource object.
|===

[discrete]

== `console.action/resource-provider`

`ResourceActionProvider` contributes a hook that returns a list of actions for a specific resource model.
@@ -81,7 +81,7 @@ provider provides actions for.
which returns actions for the given resource model
|===

[discrete]

== `console.alert-action`

This extension can be used to trigger a specific action when a specific Prometheus alert is observed by the Console based on its `rule.name` value.
@@ -96,7 +96,7 @@ This extension can be used to trigger a specific action when a specific Promethe
|`action` |`CodeRef<(alert: any) => void>` |no | Function to perform side effect |
|===

[discrete]

== `console.catalog/item-filter`

This extension can be used for plugins to contribute a handler that can filter specific catalog items. For example, the plugin can contribute a filter that filters Helm charts from a specific provider.
@@ -114,7 +114,7 @@ of a specific type. Value is a function that takes `CatalogItem[]` and
returns a subset based on the filter criteria.
|===

[discrete]

== `console.catalog/item-metadata`

This extension can be used to contribute a provider that adds extra metadata to specific catalog items.
@@ -132,7 +132,7 @@ catalog this provider contributes to.
|no |A hook which returns a function that will be used to provide metadata to catalog items of a specific type.
|===

[discrete]

== `console.catalog/item-provider`

This extension allows plugins to contribute a provider for a catalog item type. For example, a Helm Plugin can add a provider that fetches all the Helm Charts. This extension can also be used by other plugins to add more items to a specific catalog item type.
@@ -157,7 +157,7 @@ Higher priority providers may override catalog items provided by other
providers.
|===

[discrete]

== `console.catalog/item-type`

This extension allows plugins to contribute a new type of catalog item. For example, a Helm plugin can define a new catalog item type as HelmCharts that it wants to contribute to the Developer Catalog.
@@ -182,7 +182,7 @@ the catalog item.
to the catalog item.
|===

[discrete]

== `console.catalog/item-type-metadata`

This extension allows plugins to contribute extra metadata like custom filters or groupings for any catalog item type. For example, a plugin can attach a custom filter for HelmCharts that can filter based on chart provider.
@@ -199,7 +199,7 @@ the catalog item.
to the catalog item.
|===

[discrete]

== `console.cluster-overview/inventory-item`

Adds a new inventory item into the cluster overview page.
@@ -211,7 +211,7 @@ Adds a new inventory item into cluster overview page.
be rendered.
|===

[discrete]

== `console.cluster-overview/multiline-utilization-item`

Adds a new cluster overview multi-line utilization item.
@@ -231,7 +231,7 @@ utilization query.
Top consumer popover instead of plain value.
|===

[discrete]

== `console.cluster-overview/utilization-item`

Adds a new cluster overview utilization item.
@@ -257,7 +257,7 @@ query.
consumer popover instead of plain value.
|===

[discrete]

== `console.context-provider`

Adds a new React context provider to the web console application root.
@@ -269,7 +269,7 @@ Adds a new React context provider to the web console application root.
|`useValueHook` |`CodeRef<() => T>` |no |Hook for the Context value.
|===

[discrete]

== `console.create-project-modal`

This extension can be used to pass a component that will be rendered in place of the standard create project modal.
@@ -280,7 +280,7 @@ This extension can be used to pass a component that will be rendered in place of
|`component` |`CodeRef<ModalComponent<CreateProjectModalProps>>` |no |A component to render in place of the create project modal.
|===

[discrete]

== `console.dashboards/card`

Adds a new dashboard card.
@@ -301,7 +301,7 @@ component.
Ignored for small screens; defaults to `12`.
|===

[discrete]

== `console.dashboards/custom/overview/detail/item`

Adds an item to the Details card of the Overview dashboard.
@@ -320,7 +320,7 @@ Adds an item to the Details card of Overview Dashboard.
| `error` | `CodeRef<() => string>` | yes | Function returning errors to be displayed by the component
|===

[discrete]

== `console.dashboards/overview/activity/resource`

Adds an activity to the Activity card of the Overview dashboard, where the triggering of activity is based on watching a Kubernetes resource.
@@ -342,7 +342,7 @@ every resource represents activity.
the given action, which will be used for ordering.
|===

[discrete]

== `console.dashboards/overview/health/operator`

Adds a health subsystem to the status card of the *Overview* dashboard, where the source of status is a Kubernetes REST API.
@@ -367,7 +367,7 @@ provided, then a list page of the first resource from resources prop is
used.
|===

[discrete]

== `console.dashboards/overview/health/prometheus`

Adds a health subsystem to the status card of the Overview dashboard, where the source of status is Prometheus.
@@ -396,7 +396,7 @@ link, which opens a pop-up menu with the given content.
topology for which the subsystem should be hidden.
|===

[discrete]

== `console.dashboards/overview/health/resource`

Adds a health subsystem to the status card of the Overview dashboard, where the source of status is a Kubernetes resource.
@@ -418,7 +418,7 @@ opens a pop-up menu with the given content.
|`popupTitle` |`string` |yes |The title of the popover.
|===

[discrete]

== `console.dashboards/overview/health/url`

Adds a health subsystem to the status card of the Overview dashboard, where the source of status is a Kubernetes REST API.
@@ -442,7 +442,7 @@ represented as a link which opens popup with given content.
|`popupTitle` |`string` |yes |The title of the popover.
|===

[discrete]

== `console.dashboards/overview/inventory/item`

Adds a resource tile to the overview inventory card.
@@ -460,7 +460,7 @@ various statuses to groups.
resources which will be fetched and passed to the `mapper` function.
|===

[discrete]

== `console.dashboards/overview/inventory/item/group`

Adds an inventory status group.
@@ -475,7 +475,7 @@ Adds an inventory status group.
|no |React component representing the status group icon.
|===

[discrete]

== `console.dashboards/overview/inventory/item/replacement`

Replaces an overview inventory card.
@@ -492,7 +492,7 @@ various statuses to groups.
resources which will be fetched and passed to the `mapper` function.
|===

[discrete]

== `console.dashboards/overview/prometheus/activity/resource`

Adds an activity to the Activity card of the Prometheus Overview dashboard, where the triggering of activity is based on watching a Kubernetes resource.
@@ -510,7 +510,7 @@ Adds an activity to the Activity Card of Prometheus Overview Dashboard where the
action. If not defined, every resource represents activity.
|===

[discrete]

== `console.dashboards/project/overview/item`

Adds a resource tile to the project overview inventory card.
@@ -528,7 +528,7 @@ various statuses to groups.
resources which will be fetched and passed to the `mapper` function.
|===

[discrete]

== `console.dashboards/tab`

Adds a new dashboard tab, placed after the *Overview* tab.
@@ -544,7 +544,7 @@ and when adding cards to this tab.
|`title` |`string` |no |The title of the tab.
|===

[discrete]

== `console.file-upload`

This extension can be used to provide a handler for the file drop action on specific file extensions.
@@ -558,7 +558,7 @@ This extension can be used to provide a handler for the file drop action on spec
file drop action.
|===

[discrete]

== `console.flag`

Gives full control over the web console feature flags.
@@ -569,7 +569,7 @@ Gives full control over the web console feature flags.
|`handler` |`CodeRef<FeatureFlagHandler>` |no |Used to set or unset arbitrary feature flags.
|===

[discrete]

== `console.flag/hookProvider`

Gives full control over the web console feature flags with hook handlers.
@@ -580,7 +580,7 @@ Gives full control over the web console feature flags with hook handlers.
|`handler` |`CodeRef<FeatureFlagHandler>` |no |Used to set or unset arbitrary feature flags.
|===

[discrete]

== `console.flag/model`

Adds a new web console feature flag driven by the presence of a `CustomResourceDefinition` (CRD) object on the cluster.
@@ -594,7 +594,7 @@ Adds a new web console feature flag driven by the presence of a `CustomResourceD
CRD.
|===
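
For orientation, a sketch of how such an entry might be declared. The declaration normally lives in the plugin's `console-extensions.json`; it is shown here as a TypeScript literal, and the flag name and CRD coordinates are placeholders:

[source,typescript]
----
// Sets EXAMPLE_CRD_PRESENT when the named CRD exists on the cluster.
const exampleFlagModelExtension = {
  type: 'console.flag/model',
  properties: {
    flag: 'EXAMPLE_CRD_PRESENT',
    model: { group: 'example.com', version: 'v1', kind: 'ExampleResource' },
  },
};
----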

[discrete]

== `console.global-config`

This extension identifies a resource used to manage the configuration of the cluster. A link to the resource will be added to the *Administration* -> *Cluster Settings* -> *Configuration* page.
@@ -614,7 +614,7 @@ config resource.
instance.
|===

[discrete]

== `console.model-metadata`

Customize the display of models by overriding values retrieved and generated through API discovery.
@@ -640,7 +640,7 @@ provided.
uppercase characters in `kind`, up to 4 characters long. Requires that `kind` is provided.
|===

[discrete]

== `console.navigation/href`

This extension can be used to contribute a navigation item that points to a specific link in the UI.
@@ -678,7 +678,7 @@ item referenced here. For arrays, the first one found in order is used.
|`prefixNamespaced` |`boolean` |yes |If `true`, adds `/k8s/ns/active-namespace` to the beginning.
|===
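
A sketch of a possible declaration, shown as a TypeScript literal; the `id`, `name`, and `href` values are placeholders:

[source,typescript]
----
// Contributes a navigation item that links to /example.
const exampleHrefNavExtension = {
  type: 'console.navigation/href',
  properties: {
    id: 'example-nav-item',
    name: 'Example Page',
    href: '/example',
    perspective: 'admin', // optional
  },
};
----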

[discrete]

== `console.navigation/resource-cluster`

This extension can be used to contribute a navigation item that points to a cluster resource details page. The K8s model of that resource can be used to define the navigation item.
@@ -714,7 +714,7 @@ item referenced here. For arrays, the first one found in order is used.
name of the link will equal the plural value of the model.
|===

[discrete]

== `console.navigation/resource-ns`

This extension can be used to contribute a navigation item that points to a namespaced resource details page. The K8s model of that resource can be used to define the navigation item.
@@ -750,7 +750,7 @@ item referenced here. For arrays, the first one found in order is used.
name of the link will equal the plural value of the model.
|===

[discrete]

== `console.navigation/section`

This extension can be used to define a new section of navigation items in the navigation tab.
@@ -777,7 +777,7 @@ item referenced here. For arrays, the first one found in order is used.
separator will be shown above the section.
|===

[discrete]

== `console.navigation/separator`

This extension can be used to add a separator between navigation items in the navigation.
@@ -804,7 +804,7 @@ item referenced here. For arrays, the first one found in order is used.
`insertBefore` takes precedence.
|===

[discrete]

== `console.page/resource/details`

[cols=",,,",options="header",]
@@ -818,7 +818,7 @@ resource page links to.
|no |The component to be rendered when the route matches.
|===

[discrete]

== `console.page/resource/list`

Adds a new resource list page to the Console router.
@@ -834,7 +834,7 @@ resource page links to.
|no |The component to be rendered when the route matches.
|===

[discrete]

== `console.page/route`

Adds a new page to the web console router. See link:https://v5.reactrouter.com/[React Router].
@@ -856,7 +856,7 @@ belongs to. If not specified, contributes to all perspectives.
the `location.pathname` exactly.
|===
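
A sketch of a possible declaration, shown as a TypeScript literal; the path and code reference are placeholders:

[source,typescript]
----
// Routes /example to a component exposed by the plugin under the name 'ExamplePage'.
const examplePageRouteExtension = {
  type: 'console.page/route',
  properties: {
    path: '/example',
    exact: true,
    component: { $codeRef: 'ExamplePage' },
  },
};
----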

[discrete]

== `console.page/route/standalone`

Adds a new standalone page, rendered outside the common page layout, to the web console router. See link:https://v5.reactrouter.com/[React Router].
@@ -875,7 +875,7 @@ Adds a new standalone page, rendered outside the common page layout, to the web
the `location.pathname` exactly.
|===

[discrete]

== `console.perspective`

This extension contributes a new perspective to the console, which enables customization of the navigation menu.
@@ -906,7 +906,7 @@ the nav
|The hook to detect default perspective
|===

[discrete]

== `console.project-overview/inventory-item`

Adds a new inventory item into the *Project Overview* page.
@@ -918,7 +918,7 @@ Adds a new inventory item into the *Project Overview* page.
|no |The component to be rendered.
|===

[discrete]

== `console.project-overview/utilization-item`

Adds a new project overview utilization item.
@@ -946,7 +946,7 @@ query.
|`CodeRef<React.ComponentType<TopConsumerPopoverProps>>` |yes |Shows the top consumer popover instead of plain value.
|===

[discrete]

== `console.pvc/alert`

This extension can be used to contribute custom alerts on the PVC details page.
@@ -958,7 +958,7 @@ This extension can be used to contribute custom alerts on the PVC details page.
|no |The alert component.
|===

[discrete]

== `console.pvc/create-prop`

This extension can be used to specify additional properties that will be used when creating PVC resources on the PVC list page.
@@ -970,7 +970,7 @@ This extension can be used to specify additional properties that will be used wh
|`path` |`string` |no |Path for the create prop action.
|===

[discrete]

== `console.pvc/delete`

This extension allows hooking into deleting PVC resources. It can provide an alert with additional information and custom PVC delete logic.
@@ -988,7 +988,7 @@ This extension allows hooking into deleting PVC resources. It can provide an ale
|no |Alert component to show additional information.
|===

[discrete]

== `console.pvc/status`

[cols=",,,",options="header",]
@@ -1003,7 +1003,7 @@ This extension allows hooking into deleting PVC resources. It can provide an ale
|Predicate that tells whether to render the status component or not.
|===

[discrete]

== `console.redux-reducer`

Adds a new reducer to the Console Redux store which operates on the `plugins.<scope>` substate.
@@ -1018,7 +1018,7 @@ substate within the Redux state object.
function, operating on the reducer-managed substate.
|===

[discrete]

== `console.resource/create`

This extension allows plugins to provide a custom component (that is, a wizard or form) for specific resources, which will be rendered when users try to create a new resource instance.
@@ -1034,7 +1034,7 @@ resource page will be rendered
component to be rendered when the model matches
|===

[discrete]

== `console.resource/details-item`

Adds a new details item to the default resource summary on the details page.
@@ -1065,7 +1065,7 @@ ComponentProps<K8sResourceCommon, any>>>` |yes |An optional React component that

|===

[discrete]

== `console.storage-class/provisioner`

Adds a new storage class provisioner as an option during storage class creation.
@@ -1081,7 +1081,7 @@ Adds a new storage class provisioner as an option during storage class creation.
|Other provisioner type
|===

[discrete]

== `console.storage-provider`

This extension can be used to contribute a new storage provider to select from when attaching storage, and a provider-specific component.
@@ -1096,7 +1096,7 @@ This extension can be used to contribute a new storage provider to select, when
|no | Provider specific component to render. |
|===

[discrete]

== `console.tab`

Adds a tab to a horizontal nav matching the `contextId`.
@@ -1116,7 +1116,7 @@ Adds a tab to a horizontal nav matching the `contextId`.
|no |Tab content component.
|===

[discrete]

== `console.tab/horizontalNav`

This extension can be used to add a tab on the resource details page.
@@ -1135,7 +1135,7 @@ horizontal tab. It takes tab name as name and href of the tab
|no |The component to be rendered when the route matches.
|===
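
A sketch of a possible declaration, shown as a TypeScript literal; the model, tab name, and code reference are placeholders:

[source,typescript]
----
// Adds an "Example" tab to the Deployment details page.
const exampleHorizontalNavTabExtension = {
  type: 'console.tab/horizontalNav',
  properties: {
    model: { group: 'apps', version: 'v1', kind: 'Deployment' },
    page: { name: 'Example', href: 'example' },
    component: { $codeRef: 'ExampleTab' },
  },
};
----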

[discrete]

== `console.telemetry/listener`

This extension can be used to register a listener function that receives telemetry events. These events include user identification, page navigation, and other application specific events. The listener may use this data for reporting and analytics purposes.
@@ -1147,7 +1147,7 @@ This component can be used to register a listener function receiving telemetry e
events
|===
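
A sketch of a listener implementation; the `(eventType, properties)` signature is an assumption for illustration:

[source,typescript]
----
// Forwards console telemetry events (user identification, page navigation, and so on)
// to an analytics backend of your choice.
export const exampleTelemetryListener = (eventType: string, properties?: object) => {
  console.debug('telemetry event:', eventType, properties);
};
----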

[discrete]

== `console.topology/adapter/build`

`BuildAdapter` contributes an adapter to adapt an element to data that can be used by the `Build` component.
@@ -1160,7 +1160,7 @@ events
|no |Adapter to adapt element to data that can be used by Build component.
|===

[discrete]

== `console.topology/adapter/network`

`NetworkAdapter` contributes an adapter to adapt an element to data that can be used by the `Networking` component.
@@ -1173,7 +1173,7 @@ events
|no |Adapter to adapt element to data that can be used by Networking component.
|===

[discrete]

== `console.topology/adapter/pod`

`PodAdapter` contributes an adapter to adapt an element to data that can be used by the `Pod` component.
@@ -1186,7 +1186,7 @@ events
|no |Adapter to adapt element to data that can be used by Pod component. |
|===

[discrete]

== `console.topology/component/factory`

Getter for a `ViewComponentFactory`.
@@ -1197,7 +1197,7 @@ Getter for a `ViewComponentFactory`.
|`getFactory` |`CodeRef<ViewComponentFactory>` |no |Getter for a `ViewComponentFactory`.
|===

[discrete]

== `console.topology/create/connector`

Getter for the create connector function.
@@ -1209,7 +1209,7 @@ Getter for the create connector function.
the create connector function.
|===

[discrete]

== `console.topology/data/factory`

Topology Data Model Factory Extension
@@ -1237,7 +1237,7 @@ for function to determine if a resource is depicted by this model factory.
|Getter for function to reconcile data model after all extensions' models have loaded.
|===

[discrete]

== `console.topology/decorator/provider`

Topology Decorator Provider Extension
@@ -1251,7 +1251,7 @@ Topology Decorator Provider Extension
|`decorator` |`CodeRef<TopologyDecoratorGetter>` |no |Decorator specific to the extension |
|===

[discrete]

== `console.topology/details/resource-alert`

`DetailsResourceAlert` contributes an alert for a specific topology context or graph element.
@@ -1267,7 +1267,7 @@ alert should not be shown after dismissed.
|no |Hook to return the contents of the alert.
|===

[discrete]

== `console.topology/details/resource-link`

`DetailsResourceLink` contributes a link for a specific topology context or graph element.
@@ -1282,7 +1282,7 @@ alert should not be shown after dismissed.
chance to create the link.
|===

[discrete]

== `console.topology/details/tab`

`DetailsTab` contributes a tab for the topology details panel.
@@ -1302,7 +1302,7 @@ item referenced here. For arrays, the first one found in order is
used. The `insertBefore` value takes precedence.
|===

[discrete]

== `console.topology/details/tab-section`

`DetailsTabSection` contributes a section for a specific tab in the topology details panel.
@@ -1331,7 +1331,7 @@ item referenced here. For arrays, the first one found in order is
used. The `insertBefore` value takes precedence.
|===

[discrete]

== `console.topology/display/filters`

Topology Display Filters Extension
@@ -1343,7 +1343,7 @@ Topology Display Filters Extension
|`applyDisplayOptions` |`CodeRef<TopologyApplyDisplayOptions>` |no | Function to apply filters to the model
|===

[discrete]

== `console.topology/relationship/provider`

Topology relationship provider connector extension
@@ -1357,7 +1357,7 @@ Topology relationship provider connector extension
|`priority` |`number` |no |Priority for relationship, higher will be preferred in case of multiple
|===

[discrete]

== `console.user-preference/group`

This extension can be used to add a group on the console user-preferences page. It will appear as a vertical tab option on the console user-preferences page.
@@ -1376,7 +1376,7 @@ this group should be placed
this group should be placed
|===

[discrete]

== `console.user-preference/item`

This extension can be used to add an item to the user preferences group on the console user preferences page.
@@ -1404,7 +1404,7 @@ this item should be placed
this item should be placed
|===

[discrete]

== `console.yaml-template`

YAML templates for editing resources via the YAML editor.
@@ -1420,7 +1420,7 @@ YAML templates for editing resources via the yaml editor.
to mark this as the default template.
|===
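
A sketch of a possible declaration, shown as a TypeScript literal; the model and template reference are placeholders:

[source,typescript]
----
// Registers a default YAML template for Deployments; the template itself is a
// YAML string exposed by the plugin under the name 'exampleDeploymentTemplate'.
const exampleYamlTemplateExtension = {
  type: 'console.yaml-template',
  properties: {
    model: { group: 'apps', version: 'v1', kind: 'Deployment' },
    name: 'default', // 'default' marks this as the default template
    template: { $codeRef: 'exampleDeploymentTemplate' },
  },
};
----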
|
||||
|
||||
[discrete]
|
||||
|
||||
== `dev-console.add/action`
|
||||
|
||||
This extension allows plugins to contribute an add action item to the add page of developer perspective. For example, a Serverless plugin can add a new action item for adding serverless functions to the add page of developer console.
|
||||
@@ -1445,7 +1445,7 @@ action would belong to.
|
||||
access review to control the visibility or enablement of the action.
|
||||
|===
|
||||
|
||||
[discrete]
|
||||
|
||||
== `dev-console.add/action-group`
|
||||
|
||||
This extension allows plugins to contibute a group in the add page of developer console. Groups can be referenced by actions, which will be grouped together in the add action page based on their extension definition. For example, a Serverless plugin can contribute a Serverless group and together with multiple add actions.
|
||||
@@ -1464,7 +1464,7 @@ group should be placed
|
||||
should be placed
|
||||
|===
|
||||
|
||||
[discrete]
|
||||
|
||||
== `dev-console.import/environment`
|
||||
|
||||
This extension can be used to specify extra build environment variable fields under the builder image selector in the developer console git import form. When set, the fields will override environment variables of the same name in the build section.
|
||||
@@ -1480,7 +1480,7 @@ custom environment variables for
|
||||
|`environments` |`ImageEnvironment[]` |no |List of environment variables
|
||||
|===
|
||||
|
||||
[discrete]
|
||||
|
||||
== `console.dashboards/overview/detail/item`
|
||||
|
||||
Deprecated: use `CustomOverviewDetailItem` type instead.
|
||||
@@ -1492,7 +1492,7 @@ Deprecated: use `CustomOverviewDetailItem` type instead.
|
||||
on the `DetailItem` component
|
||||
|===
|
||||
|
||||
[discrete]
|
||||
|
||||
== `console.page/resource/tab`
|
||||
|
||||
Deprecated: Use `console.tab/horizontalNav` instead. Adds a new resource tab page to Console router.
|
||||
|
||||
@@ -6,12 +6,12 @@
|
||||
= etcd cluster Operator
|
||||
|
||||
The etcd cluster Operator automates etcd cluster scaling, enables etcd monitoring and metrics, and simplifies disaster recovery procedures.
|
||||
[discrete]
|
||||
|
||||
== Project
|
||||
|
||||
link:https://github.com/openshift/cluster-etcd-operator/[cluster-etcd-operator]
|
||||
|
||||
[discrete]
|
||||
|
||||
== CRDs
|
||||
|
||||
* `etcds.operator.openshift.io`
|
||||
@@ -19,7 +19,7 @@ link:https://github.com/openshift/cluster-etcd-operator/[cluster-etcd-operator]
|
||||
** CR: `etcd`
|
||||
** Validation: Yes
|
||||
|
||||
[discrete]
|
||||
|
||||
== Configuration objects
|
||||
|
||||
[source,terminal]
|
||||
|
||||
@@ -8,7 +8,7 @@
|
||||
|
||||
Delete the Argo CD instances added to the namespace of the GitOps Operator.
|
||||
|
||||
[discrete]
|
||||
|
||||
.Procedure
|
||||
. In the *Terminal* type the following command:
|
||||
|
||||
|
||||
@@ -9,7 +9,7 @@
|
||||
|
||||
The default Argo CD instance and the accompanying controllers, installed by the {gitops-title} Operator, can now run on the infrastructure nodes of the cluster by setting a simple configuration toggle.
|
||||
|
||||
[discrete]
|
||||
|
||||
.Procedure
|
||||
. Label the existing nodes:
|
||||
+
|
||||
|
||||
@@ -8,7 +8,7 @@
|
||||
|
||||
This section provides reference settings for environment labels and annotations required to display an environment application in the *Environments* page, in the *Developer* perspective of the {product-title} web console.
|
||||
|
||||
[discrete]
|
||||
|
||||
== Environment labels
|
||||
|
||||
The environment application manifest must contain `labels.openshift.gitops/environment` and `destination.namespace` fields. You must set identical values for the `<environment_name>` variable and the name of the environment application manifest.
|
||||
@@ -41,7 +41,7 @@ spec:
|
||||
----
|
||||
<1> The name of the environment application manifest. The value set is the same as the value of the `<environment_name>` variable.
|
||||
|
||||
[discrete]
|
||||
|
||||
== Environment annotations
|
||||
The environment namespace manifest must contain the `annotations.app.openshift.io/vcs-uri` and `annotations.app.openshift.io/vcs-ref` fields to specify the version controller code source of the application. You must set identical values for the `<environment_name>` variable and the name of the environment namespace manifest.
|
||||
|
||||
|
||||
@@ -6,7 +6,7 @@
|
||||
[id='go-uninstalling-gitops-operator_{context}']
|
||||
= Uninstalling the GitOps Operator
|
||||
|
||||
[discrete]
|
||||
|
||||
.Procedure
|
||||
. From the *Operators* -> *OperatorHub* page, use the *Filter by keyword* box to search for `{gitops-title} Operator` tile.
|
||||
|
||||
|
||||
@@ -18,7 +18,7 @@ cluster-specific Operator AWS Identity and Access Management (IAM) roles are cre
|
||||
|
||||
Cluster Operators use service accounts to assume IAM roles. When a service account assumes an IAM role, temporary AWS STS credentials are provided for the service account to use in the cluster Operator's pod. If the assumed role has the necessary AWS privileges, the service account can run AWS SDK operations in the pod.
|
||||
|
||||
[discrete]
|
||||
|
||||
[id="workflow-for-assuming-aws-iam-roles-in-sre-owned-projects_{context}"]
|
||||
== Workflow for assuming AWS IAM roles in Red{nbsp}Hat SRE owned projects
|
||||
|
||||
|
||||
@@ -6,28 +6,28 @@
|
||||
|
||||
The following guidelines apply when creating a container image in general, and are independent of whether the images are used on {product-title}.
|
||||
|
||||
[discrete]
|
||||
|
||||
== Reuse images
|
||||
|
||||
Wherever possible, base your image on an appropriate upstream image using the `FROM` statement. This ensures your image can easily pick up security fixes from an upstream image when it is updated, rather than you having to update your dependencies directly.
|
||||
|
||||
In addition, use tags in the `FROM` instruction, for example, `rhel:rhel7`, to make it clear to users exactly which version of an image your image is based on. Using a tag other than `latest` ensures your image is not subjected to breaking changes that might go into the `latest` version of an upstream image.
|
||||
|
||||
[discrete]
|
||||
|
||||
== Maintain compatibility within tags
|
||||
|
||||
When tagging your own images, try to maintain backwards compatibility within a tag. For example, if you provide an image named `image` and it currently includes version `1.0`, you might provide a tag of `image:v1`. When you update the image, as long as it continues to be compatible with the original image, you can continue to tag the new image `image:v1`, and downstream consumers of this tag are able to get updates without being broken.
|
||||
|
||||
If you later release an incompatible update, then switch to a new tag, for example `image:v2`. This allows downstream consumers to move up to the new version at will, but not be inadvertently broken by the new incompatible image. Any downstream consumer using `image:latest` takes on the risk of any incompatible changes being introduced.
|
||||
|
||||
[discrete]
|
||||
|
||||
== Avoid multiple processes
|
||||
|
||||
Do not start multiple services, such as a database and `SSHD`, inside one container. This is not necessary because containers are lightweight and can be easily linked together for orchestrating multiple processes. {product-title} allows you to easily colocate and co-manage related images by grouping them into a single pod.
|
||||
|
||||
This colocation ensures the containers share a network namespace and storage for communication. Updates are also less disruptive as each image can be updated less frequently and independently. Signal handling flows are also clearer with a single process as you do not have to manage routing signals to spawned processes.
|
||||
|
||||
[discrete]
|
||||
|
||||
== Use `exec` in wrapper scripts
|
||||
|
||||
Many images use wrapper scripts to do some setup before starting a process for the software being run. If your image uses such a script, that script uses `exec` so that the script's process is replaced by your software. If you do not use `exec`, then signals sent by your container runtime go to your wrapper script instead of your software's process. This is not what you want.
|
||||
@@ -42,7 +42,7 @@ Also see the https://felipec.wordpress.com/2013/11/04/init/["Demystifying the in
|
||||
systems.
|
||||
////
|
||||
|
||||
[discrete]
|
||||
|
||||
== Clean temporary files
|
||||
|
||||
Remove all temporary files you create during the build process. This also includes any files added with the `ADD` command. For example, run the `yum clean` command after performing `yum install` operations.
|
||||
@@ -68,7 +68,7 @@ The current container build process does not allow a command run in a later laye
|
||||
|
||||
In addition, performing multiple commands in a single `RUN` statement reduces the number of layers in your image, which improves download and extraction time.
|
||||
|
||||
[discrete]
|
||||
|
||||
== Place instructions in the proper order
|
||||
|
||||
The container builder reads the `Dockerfile` and runs the instructions from top to bottom. Every instruction that is successfully executed creates a layer which can be reused the next time this or another image is built. It is very important to place instructions that rarely change at the top of your `Dockerfile`. Doing so ensures the next builds of the same image are very fast because the cache is not invalidated by upper layer changes.
|
||||
@@ -95,7 +95,7 @@ RUN yum -y install mypackage && yum clean all -y
|
||||
|
||||
Then each time you changed `myfile` and reran `podman build` or `docker build`, the `ADD` operation would invalidate the `RUN` layer cache, so the `yum` operation must be rerun as well.
|
||||
|
||||
[discrete]
|
||||
|
||||
== Mark important ports
|
||||
|
||||
The EXPOSE instruction makes a port in the container available to the host system and other containers. While it is possible to specify that a port should be exposed with a `podman run` invocation, using the EXPOSE instruction in a `Dockerfile` makes it easier for both humans and software to use your image by explicitly declaring the ports your software needs to run:
|
||||
@@ -104,24 +104,24 @@ The EXPOSE instruction makes a port in the container available to the host syste
|
||||
* Exposed ports are present in the metadata for your image returned by `podman inspect`.
|
||||
* Exposed ports are linked when you link one container to another.
|
||||
|
||||
[discrete]
|
||||
|
||||
== Set environment variables
|
||||
|
||||
It is good practice to set environment variables with the `ENV` instruction. One example is to set the version of your project. This makes it easy for people to find the version without looking at the `Dockerfile`. Another example is advertising a path on the system that could be used by another process, such as `JAVA_HOME`.
|
||||
|
||||
[discrete]
|
||||
|
||||
== Avoid default passwords
|
||||
|
||||
Avoid setting default passwords. Many people extend the image and forget to remove or change the default password. This can lead to security issues if a user in production is assigned a well-known password. Passwords are configurable using an environment variable instead.
|
||||
|
||||
If you do choose to set a default password, ensure that an appropriate warning message is displayed when the container is started. The message should inform the user of the value of the default password and explain how to change it, such as what environment variable to set.
|
||||
|
||||
[discrete]
|
||||
|
||||
== Avoid sshd
|
||||
|
||||
It is best to avoid running `sshd` in your image. You can use the `podman exec` or `docker exec` command to access containers that are running on the local host. Alternatively, you can use the `oc exec` command or the `oc rsh` command to access containers that are running on the {product-title} cluster. Installing and running `sshd` in your image opens up additional vectors for attack and requirements for security patching.
|
||||
|
||||
[discrete]
|
||||
|
||||
== Use volumes for persistent data
|
||||
|
||||
Images use a link:https://docs.docker.com/reference/builder/#volume[volume] for persistent data. This way {product-title} mounts the network storage to the node running the container, and if the container moves to a new node the storage is reattached to that node. By using the volume for all persistent storage needs, the content is preserved even if the container is restarted or moved. If your image writes data to arbitrary locations within the container, that content could not be preserved.
|
||||
|
||||
@@ -44,7 +44,7 @@ $ oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>
|
||||
The `--import-mode=` default value is `Legacy`. Excluding this value, or failing to specify either `Legacy` or `PreserveOriginal`, imports a single sub-manifest. An invalid import mode returns the following error: `error: valid ImportMode values are Legacy or PreserveOriginal`.
|
||||
====
|
||||
|
||||
[discrete]
|
||||
|
||||
[id="images-imagestream-import-import-mode-limitations"]
|
||||
== Limitations
|
||||
|
||||
|
||||
@@ -24,12 +24,12 @@ endif::cluster-caps[]
|
||||
|
||||
The Ingress Operator configures and manages the {product-title} router.
|
||||
|
||||
[discrete]
|
||||
|
||||
== Project
|
||||
|
||||
link:https://github.com/openshift/cluster-ingress-operator[openshift-ingress-operator]
|
||||
|
||||
[discrete]
|
||||
|
||||
== CRDs
|
||||
|
||||
* `clusteringresses.ingress.openshift.io`
|
||||
@@ -37,7 +37,7 @@ link:https://github.com/openshift/cluster-ingress-operator[openshift-ingress-ope
|
||||
** CR: `clusteringresses`
|
||||
** Validation: No
|
||||
|
||||
[discrete]
|
||||
|
||||
== Configuration objects
|
||||
|
||||
* Cluster config
|
||||
@@ -50,7 +50,7 @@ link:https://github.com/openshift/cluster-ingress-operator[openshift-ingress-ope
|
||||
$ oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml
|
||||
----
|
||||
|
||||
[discrete]
|
||||
|
||||
== Notes
|
||||
|
||||
The Ingress Operator sets up the router in the `openshift-ingress` project and creates the deployment for the router:
|
||||
|
||||
@@ -35,19 +35,19 @@ The Insights Operator gathers {product-title} configuration data and sends it to
|
||||
|
||||
ifdef::operator-ref[]
|
||||
|
||||
[discrete]
|
||||
|
||||
== Project
|
||||
|
||||
link:https://github.com/openshift/insights-operator[insights-operator]
|
||||
|
||||
[discrete]
|
||||
|
||||
== Configuration
|
||||
|
||||
No configuration is required.
|
||||
|
||||
endif::operator-ref[]
|
||||
|
||||
[discrete]
|
||||
|
||||
== Notes
|
||||
|
||||
Insights Operator complements {product-title} Telemetry.
|
||||
|
||||
@@ -14,17 +14,17 @@ Red Hat supports IPMI and PXE on the `provisioning` network only. Red Hat has no
|
||||
|
||||
You can customize {ibm-cloud-name} nodes using the {ibm-cloud-name} API. When creating {ibm-cloud-name} nodes, you must consider the following requirements.
|
||||
|
||||
[discrete]
|
||||
|
||||
== Use one data center per cluster
|
||||
|
||||
All nodes in the {product-title} cluster must run in the same {ibm-cloud-name} data center.
|
||||
|
||||
[discrete]
|
||||
|
||||
== Create public and private VLANs
|
||||
|
||||
Create all nodes with a single public VLAN and a single private VLAN.
|
||||
|
||||
[discrete]
|
||||
|
||||
== Ensure subnets have sufficient IP addresses
|
||||
|
||||
{ibm-cloud-name} public VLAN subnets use a `/28` prefix by default, which provides 16 IP addresses. That is sufficient for a cluster consisting of three control plane nodes, four worker nodes, and two IP addresses for the API VIP and Ingress VIP on the `baremetal` network. For larger clusters, you might need a smaller prefix.
|
||||
@@ -41,7 +41,7 @@ Create all nodes with a single public VLAN and a single private VLAN.
|
||||
|256| `/24`
|
||||
|====
|
||||
|
||||
[discrete]
|
||||
|
||||
== Configuring NICs
|
||||
|
||||
{product-title} deploys with two networks:
|
||||
@@ -73,7 +73,7 @@ In the previous example, NIC1 on all control plane and worker nodes connects to
|
||||
Ensure PXE is enabled on the NIC used for the `provisioning` network and is disabled on all other NICs.
|
||||
====
|
||||
|
||||
[discrete]
|
||||
|
||||
== Configuring canonical names
|
||||
|
||||
Clients access the {product-title} cluster nodes over the `baremetal` network. Configure {ibm-cloud-name} subdomains or subzones where the canonical name extension is the cluster name.
|
||||
@@ -88,7 +88,7 @@ For example:
|
||||
test-cluster.example.com
|
||||
----
|
||||
|
||||
[discrete]
|
||||
|
||||
== Creating DNS entries
|
||||
|
||||
You must create DNS `A` record entries resolving to unused IP addresses on the public subnet for the following:
|
||||
@@ -125,7 +125,7 @@ The following table provides an example of fully qualified domain names. The API
|
||||
After provisioning the {ibm-cloud-name} nodes, you must create a DNS entry for the `api.<cluster_name>.<domain>` domain name on the external DNS because removing CoreDNS causes the local entry to disappear. Failure to create a DNS record for the `api.<cluster_name>.<domain>` domain name in the external DNS server prevents worker nodes from joining the cluster.
|
||||
====
|
||||
|
||||
[discrete]
|
||||
|
||||
== Network Time Protocol (NTP)
|
||||
|
||||
Each {product-title} node in the cluster must have access to an NTP server. {product-title} nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync.
|
||||
@@ -135,7 +135,7 @@ Each {product-title} node in the cluster must have access to an NTP server. {pro
|
||||
Define a consistent clock date and time format in each cluster node's BIOS settings, or installation might fail.
|
||||
====
|
||||
|
||||
[discrete]
|
||||
|
||||
== Configure a DHCP server
|
||||
|
||||
{ibm-cloud-bm} does not run DHCP on the public or private VLANs. After provisioning {ibm-cloud-name} nodes, you must set up a DHCP server for the public VLAN, which corresponds to {product-title}'s `baremetal` network.
|
||||
@@ -147,7 +147,7 @@ The IP addresses allocated to each node do not need to match the IP addresses al
|
||||
|
||||
See the "Configuring the public subnet" section for details.
|
||||
|
||||
[discrete]
|
||||
|
||||
== Ensure BMC access privileges
|
||||
|
||||
The "Remote management" page for each node on the dashboard contains the node's intelligent platform management interface (IPMI) credentials. The default IPMI privileges prevent the user from making certain boot target changes. You must change the privilege level to `OPERATOR` so that Ironic can make those changes.
|
||||
@@ -161,7 +161,7 @@ ipmi://<IP>:<port>?privilegelevel=OPERATOR
|
||||
|
||||
Alternatively, contact {ibm-cloud-name} support and request that they increase the IPMI privileges to `ADMINISTRATOR` for each node.

[discrete]

== Create bare metal servers

Create bare metal servers in the link:https://cloud.ibm.com[{ibm-cloud-name} dashboard] by navigating to *Create resource* -> *Bare Metal Servers for Classic*.

@@ -34,7 +34,7 @@ Alternatively, you can manually create the components or you can reuse existing

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available:

[discrete]

[id="create-vpc-endpoints_{context}"]
=== Option 1: Create VPC endpoints

@@ -46,12 +46,12 @@ Create a VPC endpoint and attach it to the subnets that the clusters are using.

With this option, network traffic remains private between your VPC and the required AWS services.

[discrete]

[id="create-proxy-without-vpc-endpoints_{context}"]
=== Option 2: Create a proxy without VPC endpoints

As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services.

[discrete]

[id="create-proxy-with-vpc-endpoints_{context}"]
=== Option 3: Create a proxy with VPC endpoints

As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows:

@@ -11,7 +11,7 @@

The installation program dynamically generates the list of available {azure-full} regions based on your subscription.

[discrete]

== Supported {azure-short} public regions

* `australiacentral` (Australia Central)

@@ -63,7 +63,7 @@ The installation program dynamically generates the list of available {azure-full

* `westus2` (West US 2)
* `westus3` (West US 3)

[discrete]

== Supported {azure-short} Government regions

Support for the following Microsoft Azure Government (MAG) regions was added in {product-title} version 4.6:

@@ -35,7 +35,7 @@ There are several pre-existing networking setups that are supported for internet

ifdef::restricted[]

[discrete]

== Restricted cluster with Azure Firewall

You can use Azure Firewall to restrict the outbound routing for the Virtual Network (VNet) that is used to install the {product-title} cluster. For more information, see link:https://learn.microsoft.com/en-us/azure/aks/egress-outboundtype#deploy-a-cluster-with-outbound-type-of-udr-and-azure-firewall[providing user-defined routing with Azure Firewall]. You can create a {product-title} cluster in a restricted network by using VNet with Azure Firewall and configuring the user-defined routing.

@@ -47,28 +47,28 @@ If you are using Azure Firewall for restricting internet access, you must set th

endif::restricted[]

ifdef::private[]
[discrete]

== Private cluster with network address translation

You can use link:https://docs.microsoft.com/en-us/azure/virtual-network/nat-overview[Azure VNET network address translation (NAT)] to provide outbound internet access for the subnets in your cluster. You can reference link:https://docs.microsoft.com/en-us/azure/virtual-network/quickstart-create-nat-gateway-cli[Create a NAT gateway using Azure CLI] in the Azure documentation for configuration instructions.

When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints.

[discrete]

== Private cluster with Azure Firewall

You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about link:https://docs.microsoft.com/en-us/azure/aks/egress-outboundtype#deploy-a-cluster-with-outbound-type-of-udr-and-azure-firewall[providing user-defined routing with Azure Firewall] in the Azure documentation.

When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints.

[discrete]

== Private cluster with a proxy configuration

You can use a proxy with user-defined routing to allow egress to the internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy.

When using the default route table for subnets, with `0.0.0.0/0` populated automatically by Azure, all Azure API requests are routed over Azure's internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints.

[discrete]

== Private cluster with no internet access

You can install a private network that restricts all access to the internet, except the Azure API. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following:

@@ -117,7 +117,7 @@ ifdef::aws-secret[]

A cluster in an SC2S or C2S Region is unable to reach the public IP addresses for the EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available:
endif::aws-secret[]

[discrete]

[id="create-vpc-endpoints_{context}"]
=== Option 1: Create VPC endpoints

@@ -149,12 +149,12 @@ endif::aws-secret[]

With this option, network traffic remains private between your VPC and the required AWS services.

[discrete]

[id="create-proxy-without-vpc-endpoints_{context}"]
=== Option 2: Create a proxy without VPC endpoints

As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services.

[discrete]

[id="create-proxy-with-vpc-endpoints_{context}"]
=== Option 3: Create a proxy with VPC endpoints

As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows:

@@ -30,7 +30,7 @@ Kubernetes supports only two file system partitions. If you add more than one pa

====
* Retain existing partitions: For a brownfield installation where you are reinstalling {product-title} on an existing node and want to retain data partitions installed from your previous operating system, there are both boot arguments and options to `coreos-installer` that allow you to retain existing data partitions.

[discrete]

= Creating a separate `/var` partition

In general, disk partitioning for {product-title} should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.

@@ -217,7 +217,7 @@ the Cluster Version Operator on port `9099`.

|===

ifndef::azure,gcp[]
[discrete]

== NTP configuration for user-provisioned infrastructure

{product-title} clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for _Configuring chrony time service_.
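
As a sketch, a minimal `chrony.conf` that points nodes at a local enterprise time server might contain the following directives; the server name is a placeholder:

[source,text]
----
server clock.corp.example.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
----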

@@ -45,7 +45,7 @@ The installation configuration files are all pruned when you run the installatio

You cannot modify the parameters that you set during installation, but you can modify many cluster attributes after installation.
====

[discrete]

== The installation process with the {ai-full}

Installation with the link:https://access.redhat.com/documentation/en-us/assisted_installer_for_openshift_container_platform[{ai-full}] involves creating a cluster configuration interactively by using the web-based user interface or the RESTful API. The {ai-full} user interface prompts you for required values and provides reasonable default values for the remaining parameters, unless you change them in the user interface or with the API. The {ai-full} generates a discovery image, which you download and use to boot the cluster machines. The image installs {op-system} and an agent, and the agent handles the provisioning for you. You can install {product-title} with the {ai-full} and full integration on Nutanix, vSphere, and bare metal. Additionally, you can install {product-title} with the {ai-full} on other platforms without integration.

@@ -54,14 +54,14 @@ Installation with the link:https://access.redhat.com/documentation/en-us/assiste

If possible, use the {ai-full} feature to avoid having to download and configure the Agent-based Installer.

[discrete]

== The installation process with Agent-based infrastructure

Agent-based installation is similar to using the {ai-full}, except that you must initially download and install the link:https://console.redhat.com/openshift/install/metal/agent-based[Agent-based Installer]. An Agent-based installation is useful when you want the convenience of the {ai-full}, but you need to install a cluster in a disconnected environment.

If possible, use the Agent-based installation feature to avoid having to create a provisioner machine with a bootstrap VM, and then provision and maintain the cluster infrastructure.

[discrete]

== The installation process with installer-provisioned infrastructure

The default installation type uses installer-provisioned infrastructure. By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters. You can also customize the installation process to support advanced infrastructure scenarios. The installation program provisions the underlying infrastructure for the cluster.

@@ -72,7 +72,7 @@ If possible, use this feature to avoid having to provision and maintain the clus

With installer-provisioned infrastructure clusters, {product-title} manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied.

[discrete]

== The installation process with user-provisioned infrastructure

You can also install {product-title} on infrastructure that you provide. You use the installation program to generate the assets that you require to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided.

@@ -86,7 +86,7 @@ If you do not use infrastructure that the installation program provisioned, you

If your cluster uses user-provisioned infrastructure, you have the option of adding {op-system-base} compute machines to your cluster.

[discrete]

== Installation process details

When a cluster is provisioned, each machine in the cluster requires information about the cluster. {product-title} uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. The temporary bootstrap machine boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process:

@@ -76,7 +76,7 @@ The following {ibm-name} hardware is supported with {product-title} version {pro

For detailed system requirements, see link:https://www.ibm.com/support/pages/linux-ibm-zibm-linuxone-tested-platforms[Linux on {ibm-z-name}/{ibm-linuxone-name} tested platforms] (IBM Support).
====

[discrete]

[id="ibm-z-hardware-requirements_{context}"]
== Hardware requirements

@@ -90,7 +90,7 @@ For detailed system requirements, see link:https://www.ibm.com/support/pages/lin

* Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the {product-title} clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. For more information, see "Recommended host practices for {ibm-z-title} & {ibm-linuxone-title} environments".
====

[discrete]

[id="ibm-z-operating-system-requirements_{context}"]
== {ibm-z-title} operating system requirements

@@ -126,7 +126,7 @@ For detailed system requirements, see link:https://www.ibm.com/support/pages/lin

|===

[discrete]

[id="ibm-z-network-connectivity_{context}"]
== {ibm-z-title} network connectivity

@@ -162,7 +162,7 @@ See link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/

[discrete]

[id="ibm-z-disk-storage_{context}"]
=== Disk storage

@@ -71,7 +71,7 @@ You can install {product-title} version {product-version} on the following {ibm-

[id="minimum-ibm-z-system-requirements_{context}"]
== Minimum {ibm-z-title} system environment

[discrete]

=== Hardware requirements

* The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster.

@@ -87,7 +87,7 @@ You can use dedicated or shared IFLs to assign sufficient compute resources. Res

Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the {product-title} clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role.
====

[discrete]

=== Operating system requirements

* One LPAR running on {op-system-base} 8.6 or later with KVM, which is managed by libvirt

@@ -142,13 +142,13 @@ Each cluster virtual machine must meet the following minimum requirements:

[id="preferred-ibm-z-system-requirements_{context}"]
== Preferred {ibm-z-title} system environment

[discrete]

=== Hardware requirements

* Three LPARs that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster.
* Two network connections to both connect to the `LoadBalancer` service and to serve data for traffic outside the cluster.

[discrete]

=== Operating system requirements

* For high availability, two or three LPARs running on {op-system-base} 8.6 or later with KVM, which are managed by libvirt.

@@ -85,7 +85,7 @@ endif::ibm-z-kvm[]

The following examples are the networking options for ISO installation.

[discrete]

[id="configuring-dhcp-or-static-ip-addresses_{context}"]
=== Configuring DHCP or static IP addresses

@@ -109,7 +109,7 @@ nameserver=4.4.4.41

When you use DHCP to configure IP addressing for the {op-system} machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the {op-system} nodes through your DHCP server configuration.
====

[discrete]

=== Configuring an IP address without a static hostname

You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it is picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname, refer to the following example:

@@ -126,7 +126,7 @@ ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none

nameserver=4.4.4.41
----

[discrete]

=== Specifying multiple network interfaces

You can specify multiple network interfaces by setting multiple `ip=` entries.

@@ -137,7 +137,7 @@ ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none

ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none
----

[discrete]

=== Configuring default gateway and route

Optional: You can configure routes to additional networks by setting an `rd.route=` value.

@@ -161,7 +161,7 @@ ip=::10.10.10.254::::

rd.route=20.20.20.0/24:20.20.20.254:enp2s0
----

[discrete]

=== Disabling DHCP on a single interface

You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the `enp1s0` interface has a static networking configuration and DHCP is disabled for `enp2s0`, which is not used:

@@ -172,7 +172,7 @@ ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none

ip=::::core0.example.com:enp2s0:none
----

[discrete]

=== Combining DHCP and static IP configurations

You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example:

@@ -183,7 +183,7 @@ ip=enp1s0:dhcp

ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none
----

[discrete]

=== Configuring VLANs on individual interfaces

Optional: You can configure VLANs on individual interfaces by using the `vlan=` parameter.

@@ -204,7 +204,7 @@ ip=enp2s0.100:dhcp

vlan=enp2s0.100:enp2s0
----

[discrete]

=== Providing multiple DNS servers

You can provide multiple DNS servers by adding a `nameserver=` entry for each server, for example:

@@ -217,7 +217,7 @@ nameserver=8.8.8.8

ifndef::ibm-z-kvm[]

[discrete]

=== Bonding multiple network interfaces to a single interface

Optional: You can bond multiple network interfaces to a single interface by using the `bond=` option. Refer to the following examples:
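
A minimal sketch of such a configuration, assuming two physical interfaces named `em1` and `em2` and the same `ip=` syntax as the previous examples:

[source,terminal]
----
bond=bond0:em1,em2:mode=active-backup
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none
----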

@@ -259,7 +259,7 @@ Always set the `fail_over_mac=1` option in active-backup mode, to avoid problems

endif::ibm-z[]

ifdef::ibm-z[]
[discrete]

=== Bonding multiple network interfaces to a single interface

Optional: You can configure VLANs on bonded interfaces by using the `vlan=` parameter and use DHCP, for example:
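
A minimal sketch, reusing the `bond0` and `em1`/`em2` names from the previous example:

[source,terminal]
----
ip=bond0.100:dhcp
bond=bond0:em1,em2:mode=active-backup
vlan=bond0.100:bond0
----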

@@ -284,7 +284,7 @@ endif::ibm-z[]

ifndef::ibm-z[]

[id="bonding-multiple-sriov-network-interfaces-to-dual-port_{context}"]
[discrete]

=== Bonding multiple SR-IOV network interfaces to a dual port NIC interface

Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the `bond=` option.

@@ -326,7 +326,7 @@ ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none

endif::ibm-z[]

ifndef::ibm-power[]
[discrete]

=== Using network teaming

Optional: You can use network teaming as an alternative to bonding by using the `team=` parameter:
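
A minimal sketch, assuming the same interface names as in the bonding examples:

[source,terminal]
----
team=team0:em1,em2
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:team0:none
----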

@@ -23,7 +23,7 @@ ifdef::upi[]

Before you install an {product-title} cluster on your vCenter that uses infrastructure that you provided, you must prepare your environment.
endif::upi[]

[discrete]

[id="installation-vsphere-installer-infra-requirements-account_{context}"]
== Required vCenter account privileges

@@ -384,7 +384,7 @@ endif::ipi[]

For more information about creating an account with only the required privileges, see link:https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-5372F580-5C23-4E9C-8A4E-EF1B4DD9033E.html[vSphere Permissions and User Management Tasks] in the vSphere documentation.

[discrete]

[id="installation-vsphere-installer-infra-minimum-requirements_{context}"]
== Minimum required vCenter account privileges

@@ -785,7 +785,7 @@ endif::upi[]

|===
====

[discrete]

[id="installation-vsphere-installer-infra-requirements-vmotion_{context}"]
== Using {product-title} with vMotion

@@ -808,7 +808,7 @@ You can specify the path of any datastore that exists in a datastore cluster. By

If you must specify VMs across many datastores, use a `datastore` object to specify a failure domain in your cluster's `install-config.yaml` configuration file. For more information, see "VMware vSphere region and zone enablement".
====

[discrete]

[id="installation-vsphere-installer-infra-requirements-resources_{context}"]
== Cluster resources

@@ -835,13 +835,13 @@ Although these resources use 856 GB of storage, the bootstrap node gets deleted

If you deploy more compute machines, the {product-title} cluster will use more storage.

[discrete]

[id="installation-vsphere-installer-infra-requirements-limits_{context}"]
== Cluster limits

Available resources vary between clusters. The number of possible clusters within vCenter is limited primarily by available storage space and any limits on the number of required resources. Be sure to consider both the limits on the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks.

[discrete]

[id="installation-vsphere-installer-infra-requirements-networking_{context}"]
== Networking requirements

@@ -866,7 +866,7 @@ Ensure that each {product-title} node in the cluster has access to a Network Tim

Additionally, you must create the following networking resources before you install the {product-title} cluster:

ifndef::upi[]
[discrete]

[id="installation-vsphere-installer-infra-requirements-_{context}"]
=== Required IP addresses

For a network that uses DHCP, an installer-provisioned vSphere installation requires two static IP addresses:

@@ -877,7 +877,7 @@ For a network that uses DHCP, an installer-provisioned vSphere installation requ

You must give these IP addresses to the installation program when you install the {product-title} cluster.
endif::upi[]

[discrete]

[id="installation-vsphere-installer-infra-requirements-dns-records_{context}"]
=== DNS records

You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your {product-title} cluster. In each record, `<cluster_name>` is the cluster name and `<base_domain>` is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: `<component>.<cluster_name>.<base_domain>.`.
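
As an illustration, the two records sketched as BIND-style zone entries, assuming the standard `api` and `*.apps` components and placeholder addresses:

[source,text]
----
api.<cluster_name>.<base_domain>.    IN  A  <api_vip>
*.apps.<cluster_name>.<base_domain>. IN  A  <ingress_vip>
----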

@@ -3,7 +3,7 @@

// * installing/installing_vsphere/ipi/ipi-vsphere-installation-reqs.adoc

:_mod-docs-content-type: CONCEPT
[discrete]

[id="installation-vsphere-installer-infra-static-ip-nodes_{context}"]
== Static IP addresses for vSphere nodes

@@ -8,7 +8,7 @@

If you experience issues with using the {ai-full} to install an {product-title} cluster on {oci-first}, read the following sections to troubleshoot common problems.

[discrete]

== The Ingress Load Balancer in {oci} is not at a healthy status

This issue is classed as a `Warning` because creating a stack with {oci} provisions a pool of compute nodes, three by default, that are automatically added as backend listeners for the Ingress Load Balancer. By default, {product-title} deploys two router pods, based on the default values in the {product-title} manifest files. The `Warning` is expected because only two router pods are available to serve as backends for the three compute nodes.

@@ -18,7 +18,7 @@ image::ingress_load_balancer_warning_message.png[Example of an warning message t

You do not need to modify the Ingress Load Balancer configuration. Instead, you can point the Ingress Load Balancer to specific compute nodes that operate in your cluster on {product-title}. To do this, use placement mechanisms, such as annotations, on {product-title} to ensure router pods only run on the compute nodes that you originally configured on the Ingress Load Balancer as backend listeners.

[discrete]

== {oci} create stack operation fails with an Error: 400-InvalidParameter message

On attempting to create a stack on {oci}, you identified that the *Logs* section of the job outputs an error message. For example:

@@ -236,7 +236,7 @@ a|`provisioningNetworkCIDR`

|===

[discrete]

== Hosts

The `hosts` parameter is a list of separate bare metal assets used to build the cluster.

@@ -25,7 +25,7 @@ platform:

For Dell hardware, Red Hat supports integrated Dell Remote Access Controller (iDRAC) virtual media, Redfish network boot, and IPMI.

[discrete]

== BMC address formats for Dell iDRAC

[width="100%", cols="1,3", options="header"]
|====

@@ -42,7 +42,7 @@ Use `idrac-virtualmedia` as the protocol for Redfish virtual media. `redfish-vir

See the following sections for additional details.

[discrete]

== Redfish virtual media for Dell iDRAC

For Redfish virtual media on Dell servers, use `idrac-virtualmedia://` in the `address` setting. Using `redfish-virtualmedia://` will not work.
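
A sketch of the corresponding host entry, assuming the common iDRAC system path `System.Embedded.1` and placeholder credentials:

[source,yaml]
----
platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        bmc:
          address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
          username: <user>
          password: <password>
----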

@@ -90,7 +90,7 @@ platform:

disableCertificateVerification: True
----

[discrete]

== Redfish network boot for iDRAC

To enable Redfish, use `redfish://` or `redfish+http://` to disable transport layer security (TLS). The installation program requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the `install-config.yaml` file.
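
A minimal sketch of such a configuration, again assuming the `System.Embedded.1` system path and placeholder credentials:

[source,yaml]
----
platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
          username: <user>
          password: <password>
----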

@@ -35,7 +35,7 @@ For HPE integrated Lights Out (iLO), Red Hat supports Redfish virtual media, Red

See the following sections for additional details.

[discrete]

== Redfish virtual media for HPE iLO

To enable Redfish virtual media for HPE servers, use `redfish-virtualmedia://` in the `address` setting. The following example demonstrates using Redfish virtual media within the `install-config.yaml` file.
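
A minimal sketch, assuming the iLO system path `1` and placeholder credentials:

[source,yaml]
----
platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        bmc:
          address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>
----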

@@ -75,7 +75,7 @@ Redfish virtual media is not supported on 9th generation systems running iLO4, b

====

[discrete]

== Redfish network boot for HPE iLO

To enable Redfish, use `redfish://` or `redfish+http://` to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the `install-config.yaml` file.
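
A minimal sketch of the `bmc` stanza, assuming the same iLO system path:

[source,yaml]
----
bmc:
  address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
  username: <user>
  password: <password>
----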

@@ -10,7 +10,7 @@ Most vendors support Baseboard Management Controller (BMC) addressing with the I

You can modify the BMC address during installation while the node is in the `Registering` state. If you need to modify the BMC address after the node leaves the `Registering` state, you must disconnect the node from Ironic, edit the `BareMetalHost` resource, and reconnect the node to Ironic. See the _Editing a BareMetalHost resource_ section for details.

[discrete]

== IPMI

Hosts using IPMI use the `ipmi://<out-of-band-ip>:<port>` address format, which defaults to port `623` if not specified. The following example demonstrates an IPMI configuration within the `install-config.yaml` file.
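
A minimal sketch of a host entry that uses this format, with placeholder credentials:

[source,yaml]
----
platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        bmc:
          address: ipmi://<out-of-band-ip>
          username: <user>
          password: <password>
----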

@@ -33,7 +33,7 @@ platform:

The `provisioning` network is required when PXE booting using IPMI for BMC addressing. It is not possible to PXE boot hosts without a `provisioning` network. If you deploy without a `provisioning` network, you must use a virtual media BMC addressing option such as `redfish-virtualmedia` or `idrac-virtualmedia`. See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details.
====

[discrete]

== Redfish network boot

To enable Redfish, use `redfish://` or `redfish+http://` to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the `install-config.yaml` file.
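
A minimal sketch of the `bmc` stanza, assuming a generic `/redfish/v1/Systems/1` system path:

[source,yaml]
----
bmc:
  address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
  username: <user>
  password: <password>
----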

@@ -6,7 +6,7 @@

[id="configuring-nodes_{context}"]
= Configuring nodes

[discrete]

== Configuring nodes when using the `provisioning` network

Each node in the cluster requires the following configuration for proper installation.

@@ -48,7 +48,7 @@ Configure the control plane and worker nodes as follows:

| NIC1 PXE-enabled (provisioning network) | 1
|===

[discrete]

== Configuring nodes without the `provisioning` network

The installation process requires one NIC:

@@ -67,7 +67,7 @@ The `provisioning` network is optional, but it is required for PXE booting. If y

====

[id="configuring-nodes-for-secure-boot_{context}"]
[discrete]

== Configuring nodes for Secure Boot manually

Secure Boot prevents a node from booting unless it verifies the node is using only trusted software, such as UEFI firmware drivers, EFI applications, and the operating system.

@@ -7,12 +7,12 @@

The Kubernetes API Server Operator manages and updates the Kubernetes API server deployed on top of {product-title}. The Operator is based on the {product-title} `library-go` framework and it is installed using the Cluster Version Operator (CVO).

[discrete]

== Project

link:https://github.com/openshift/cluster-kube-apiserver-operator[openshift-kube-apiserver-operator]

[discrete]

== CRDs

* `kubeapiservers.operator.openshift.io`

@@ -20,7 +20,7 @@ link:https://github.com/openshift/cluster-kube-apiserver-operator[openshift-kube

** CR: `kubeapiserver`
** Validation: Yes

[discrete]

== Configuration objects

[source,terminal]

@@ -16,7 +16,7 @@ It contains the following components:

By default, the Operator exposes Prometheus metrics through the `metrics` service.

[discrete]

== Project

link:https://github.com/openshift/cluster-kube-controller-manager-operator[cluster-kube-controller-manager-operator]

@@ -17,7 +17,7 @@ After you have installed {lvms}, you must create an `LVMCluster` custom resource

include::snippets/lvms-creating-lvmcluster.adoc[]

[discrete]

== Explanation of fields in the LVMCluster CR

The `LVMCluster` CR fields are described in the following table:

@@ -7,12 +7,12 @@

The Machine API Operator manages the lifecycle of specific purpose custom resource definitions (CRD), controllers, and RBAC objects that extend the Kubernetes API. This declares the desired state of machines in a cluster.

[discrete]

== Project

link:https://github.com/openshift/machine-api-operator[machine-api-operator]

[discrete]

== CRDs

* `MachineSet`

@@ -16,7 +16,7 @@ There are four components:

include::snippets/mcs-endpoint-limitation.adoc[]

[discrete]

== Project

link:https://github.com/openshift/machine-config-operator[openshift-machine-config-operator]

@@ -42,7 +42,7 @@ spec:

<1> The name of the `preTerminate` lifecycle hook.
<2> The hook-implementing controller that manages the `preTerminate` lifecycle hook.

[discrete]

[id="machine-lifecycle-hook-deletion-example_{context}"]
== Example lifecycle hook configuration

@@ -8,7 +8,7 @@

Operators can use lifecycle hooks for the machine deletion phase to modify the machine deletion process. The following examples demonstrate possible ways that an Operator can use this functionality.

[discrete]

[id="machine-lifecycle-hook-deletion-uses-predrain_{context}"]
== Example use cases for `preDrain` lifecycle hooks

@@ -18,7 +18,7 @@ Implementing custom draining logic:: An Operator can use a `preDrain` lifecycle

+
For example, the machine controller drain libraries do not support ordering, but a custom drain provider could provide this functionality. By using a custom drain provider, an Operator could prioritize moving mission-critical applications before draining the node to ensure that service interruptions are minimized in cases where cluster capacity is limited.

[discrete]

[id="machine-lifecycle-hook-deletion-uses-preterminate_{context}"]
== Example use cases for `preTerminate` lifecycle hooks

@@ -19,7 +19,7 @@ ifndef::infra[`<role>`]

ifdef::infra[`infra`]
is the node label to add.

[discrete]

[id="cpmso-yaml-provider-spec-gcp-oc_{context}"]
== Values obtained by using the OpenShift CLI

@@ -20,7 +20,7 @@ ifndef::infra[`<role>`]

ifdef::infra[`<infra>`]
is the node label to add.

[discrete]

[id="machineset-yaml-nutanix-oc_{context}"]
== Values obtained by using the OpenShift CLI

@@ -110,7 +110,7 @@ Node: ip-10-xx-xx-xx.ap-southeast-1.compute.internal/10.xx.xx.xx

----
<1> The Reporting Operator pod was terminated due to OOM kill.

[discrete]

[id="metering-check-and-increase-memory-limits_{context}"]
=== Increasing the reporting-operator pod memory limit

@@ -11,12 +11,12 @@ You can install {product-title} version {product-version} on the following {ibm-

* {ibm-power-name}9, {ibm-power-name}10 or {ibm-power-name}11 processor-based systems

[discrete]

== Hardware requirements

* Six logical partitions (LPARs) across multiple PowerVM servers

[discrete]

== Operating system requirements

* One instance of an {ibm-power-name}9, Power10 or Power11 processor-based system

@@ -27,19 +27,19 @@ On your {ibm-power-name} instance, set up:

* Two LPARs for {product-title} compute machines
* One LPAR for the temporary {product-title} bootstrap machine

[discrete]

== Disk storage for the {ibm-power-title} guest virtual machines

* Local storage, or storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization), Fibre Channel, Multipathing, or SSP (shared storage pools)

[discrete]

== Network for the PowerVM guest virtual machines

* Dedicated physical adapter, or SR-IOV virtual function
* Available by the Virtual I/O Server using Shared Ethernet Adapter
* Virtualized by the Virtual I/O Server using {ibm-name} vNIC

[discrete]

== Storage / main memory

* 500 GB / 16 GB for {product-title} control plane machines

@@ -35,7 +35,7 @@ When running {product-title} on {ibm-z-name} without a hypervisor use the Dynami

====
endif::ibm-z-lpar[]

[discrete]

== Hardware requirements

* The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster.

@@ -51,7 +51,7 @@ You can use dedicated or shared IFLs to assign sufficient compute resources. Res

Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the {product-title} clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role.
====

[discrete]

== Operating system requirements

ifdef::ibm-z[]

@@ -70,7 +70,7 @@ ifdef::ibm-z-lpar[]

* One machine for the temporary {product-title} bootstrap machine
endif::ibm-z-lpar[]

[discrete]

== {ibm-z-title} network connectivity requirements

ifdef::ibm-z[]

@@ -86,7 +86,7 @@ To install on {ibm-z-name} in an LPAR, you need:

* For a preferred setup, use OSA link aggregation.
endif::ibm-z-lpar[]

[discrete]

=== Disk storage

ifdef::ibm-z[]

@@ -99,7 +99,7 @@ ifdef::ibm-z-lpar[]

* NVMe disk storage
endif::ibm-z-lpar[]

[discrete]

=== Storage / Main Memory

* 16 GB for {product-title} control plane machines

@@ -58,7 +58,7 @@ In earlier versions of {product-title}, the Performance Addon Operator was used

endif::cluster-caps[]

ifdef::operators[]
[discrete]

== Project

link:https://github.com/openshift/cluster-node-tuning-operator[cluster-node-tuning-operator]

@@ -49,7 +49,7 @@ Users and system components can store configuration data in a config map.

A config map is similar to a secret, but designed to more conveniently support working with strings that do not contain sensitive information.

[discrete]

== Config map restrictions

*A config map must be created before its contents can be consumed in pods.*

@@ -191,7 +191,7 @@ You can determine which other network interfaces might support egress IPs by ins

OVN-Kubernetes provides a mechanism to control and direct outbound network traffic from specific namespaces and pods. This ensures that it exits the cluster through a particular network interface and with a specific egress IP address.
====

[discrete]

[id="nw-egress-ips-multi-nic-requirements_{context}"]
=== Requirements for assigning an egress IP to a network interface that is not the primary network interface

@@ -47,7 +47,7 @@ policy:

<1> A list of allowed IP address ranges in CIDR format.
<2> A list of rejected IP address ranges in CIDR format.

[discrete]

== Example external IP configurations

Several possible configurations for external IP address pools are displayed in the following examples:

@@ -81,4 +81,4 @@ EOF

[NOTE]
====
The Operator consumes the CR object and creates an ingress node firewall daemon set on all the nodes that match the `nodeSelector`.
====
====

@@ -136,7 +136,7 @@ If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy confi

For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the `clusterNetwork.hostPrefix` parameter for each network type that is defined in the `install-config.yaml` file. Setting a different value for each `clusterNetwork.hostPrefix` parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes.
====

[discrete]

[id="nw-operator-cr-defaultnetwork_{context}"]
=== defaultNetwork object configuration

@@ -161,7 +161,7 @@ The values for the `defaultNetwork` object are defined in the following table:

|====

[discrete]

[id="nw-operator-configuration-parameters-for-ovn-sdn_{context}"]
==== Configuration for the OVN-Kubernetes network plugin

@@ -7,7 +7,7 @@

= AdminNetworkPolicy actions for rules

As an administrator, you can set `Allow`, `Deny`, or `Pass` as the `action` field for your `AdminNetworkPolicy` rules. Because OVN-Kubernetes uses tiered ACLs to evaluate network traffic rules, ANPs allow you to set very strong policy rules that can be changed only by an administrator modifying them, deleting the rule, or overriding them by setting a higher priority rule.

[discrete]

[id="adminnetworkpolicy-allow-example_{context}"]
== AdminNetworkPolicy Allow example

The following ANP that is defined at priority 9 ensures all ingress traffic is allowed from the `monitoring` namespace towards any tenant (all other namespaces) in the cluster.

@@ -38,7 +38,7 @@ spec:

====
This is an example of a strong `Allow` ANP because it is non-overridable by all the parties involved. No tenants can block themselves from being monitored using `NetworkPolicy` objects and the monitoring tenant also has no say in what it can or cannot monitor.

[discrete]

[id="adminnetworkpolicy-deny-example_{context}"]
== AdminNetworkPolicy Deny example

The following ANP that is defined at priority 5 ensures all ingress traffic from the `monitoring` namespace is blocked towards restricted tenants (namespaces that have labels `security: restricted`).

@@ -72,7 +72,7 @@ This is a strong `Deny` ANP that is non-overridable by all the parties involved.

When combined with the strong `Allow` example, the `block-monitoring` ANP has a lower priority value giving it higher precedence, which ensures restricted tenants are never monitored.

[discrete]

[id="adminnetworkpolicy-pass-example_{context}"]
== AdminNetworkPolicy Pass example

The following ANP that is defined at priority 7 ensures all ingress traffic from the `monitoring` namespace towards internal infrastructure tenants (namespaces that have labels `security: internal`) are passed on to tier 2 of the ACLs and evaluated by the namespaces' `NetworkPolicy` objects.

@@ -20,7 +20,7 @@ An ANP allows administrators to specify the following:

* A list of egress rules to be applied for all egress traffic from the `subject`.

[discrete]

[id="adminnetworkpolicy-example_{context}"]
== AdminNetworkPolicy example

@@ -18,7 +18,7 @@ A BANP allows administrators to specify:

* A list of egress rules to be applied for all egress traffic from the `subject`.

[discrete]

[id="baselineddminnetworkpolicy-example_{context}"]
== BaselineAdminNetworkPolicy example

@@ -69,7 +69,7 @@ spec:

====

[discrete]
[id="BaselineAdminNetworkPolicy-default-deny-example"_{context}]
== BaselineAdminNetworkPolicy Deny example

The following BANP singleton ensures that the administrator has set up a default deny policy for all ingress monitoring traffic coming into the tenants at `internal` security level. When combined with the "AdminNetworkPolicy Pass example", this deny policy acts as a guardrail policy for all ingress traffic that is passed by the ANP `pass-monitoring` policy.

@@ -8,7 +8,7 @@

Session affinity is a feature that applies to Kubernetes `Service` objects. You can use _session affinity_ if you want to ensure that each time you connect to a <service_VIP>:<Port>, the traffic is always load balanced to the same back end. For more information, including how to set session affinity based on a client's IP address, see link:https://kubernetes.io/docs/reference/networking/virtual-ips/#session-affinity[Session affinity].
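
A minimal `Service` sketch with client-IP session affinity and an explicit stickiness timeout; the service name, selector, and ports are illustrative:

[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 80
      targetPort: 8080
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
----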

[discrete]

[id="nw-ovn-kubernetes-session-affinity-stickyness-timeout_{context}"]
== Stickiness timeout for session affinity

@@ -22,7 +22,7 @@ Boundary clock:: The boundary clock has ports in two or more communication paths

Ordinary clock:: The ordinary clock has a single port connection that can play the role of source or destination clock, depending on its position in the network. The ordinary clock can read and write timestamps.

[discrete]

[id="ptp-advantages-over-ntp_{context}"]
== Advantages of PTP over NTP

@@ -72,7 +72,7 @@ a|Returns values for `HoldOverTimeout`, `MaxOffsetThreshold`, and `MinOffsetThre

|====

[discrete]

== PTP fast event metrics only when T-GM is enabled

The following table describes the PTP fast event metrics that are available only when PTP grandmaster clock (T-GM) is enabled.

@@ -33,7 +33,7 @@ spec:

----

[discrete]

== Example use cases

*As a cluster administrator, you can:*
Some files were not shown because too many files have changed in this diff.