mirror of
https://github.com/openshift/openshift-docs.git
synced 2026-02-05 12:46:18 +01:00
remove unused snippets - automated
@@ -1,4 +0,0 @@
[IMPORTANT]
====
link:https://docs.aws.amazon.com/vpc/latest/userguide/vpc-sharing.html[Sharing VPCs across multiple AWS accounts] is not currently supported for {hcp-title}. Do not install a {hcp-title} cluster into subnets shared from another AWS account. See link:https://access.redhat.com/solutions/6980058["Are multiple ROSA clusters in a single VPC supported?"] for more information.
====
@@ -1,9 +0,0 @@
// Text snippet included in the following assemblies:
//
//
// Text snippet included in the following modules:
//
//
:_mod-docs-content-type: SNIPPET

If the approval strategy in the subscription is set to *Automatic*, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to *Manual*, you must manually approve pending updates.
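With a *Manual* approval strategy, updates wait on an unapproved `InstallPlan` resource. As an illustrative sketch (the namespace and install plan name are placeholders), a pending install plan can be approved from the CLI:

[source,terminal]
----
$ oc get installplan -n openshift-logging
$ oc patch installplan <install_plan_name> -n openshift-logging \
  --type merge --patch '{"spec":{"approved":true}}'
----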
@@ -1,24 +0,0 @@
// Text snippet included in the following assemblies:
//
//
// Text snippet included in the following modules:
//
//
:_mod-docs-content-type: SNIPPET

You can create a secret in the directory that contains your certificate and key files by using the following command:

[subs="+quotes"]
[source,terminal]
----
$ oc create secret generic -n openshift-logging <my-secret> \
  --from-file=tls.key=<your_key_file> \
  --from-file=tls.crt=<your_crt_file> \
  --from-file=ca-bundle.crt=<your_bundle_file> \
  --from-literal=username=<your_username> \
  --from-literal=password=<your_password>
----

[NOTE]
====
Use generic or opaque secrets for best results.
====
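The secret can then be referenced by name from a forwarder output. The following is a minimal sketch assuming the legacy `logging.openshift.io/v1` `ClusterLogForwarder` API; the output name and URL are illustrative:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: external-es # illustrative output name
    type: elasticsearch
    url: https://elasticsearch.example.com:9200
    secret:
      name: my-secret # the secret created above
----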
@@ -1,12 +0,0 @@
// Text snippet included in the following modules and assemblies:
//

:_mod-docs-content-type: SNIPPET

Logs from any source contain a field `openshift.cluster_id`, the unique identifier of the cluster in which the Operator is deployed.

.ClusterID query
[source,terminal]
----
$ oc get clusterversion/version -o jsonpath='{.spec.clusterID}{"\n"}'
----
@@ -1,15 +0,0 @@
// Text snippet included in the following assemblies:
//
//
// Text snippet included in the following modules:
//
//
:_mod-docs-content-type: SNIPPET

{logging-uc} collects container logs and node logs. These are categorized into types:

* `application` - Container logs generated by non-infrastructure containers.

* `infrastructure` - Container logs from namespaces `kube-\*` and `openshift-\*`, and node logs from `journald`.

* `audit` - Logs from `auditd`, `kube-apiserver`, `openshift-apiserver`, and `ovn` if enabled.
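These type names are used as input references when forwarding logs. A minimal sketch of a `ClusterLogForwarder` pipeline fragment (the pipeline name and the `default` output are illustrative):

[source,yaml]
----
spec:
  pipelines:
  - name: all-logs # illustrative pipeline name
    inputRefs: # the three log types described above
    - application
    - infrastructure
    - audit
    outputRefs:
    - default
----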
@@ -1,9 +0,0 @@
// Text snippet included in the following assemblies:
//
//
// Text snippet included in the following modules:
//
//
:_mod-docs-content-type: SNIPPET

In logging documentation, LokiStack refers to the supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack's proxy uses OpenShift Container Platform authentication to enforce multi-tenancy. Loki refers to the log store as either the individual component or an external store.
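As a minimal sketch, a `LokiStack` CR deploys the combined log store and proxy; the size, storage secret, and storage class values shown here are illustrative:

[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.small # illustrative deployment size
  storage:
    secret:
      name: logging-loki-s3 # illustrative object storage secret
      type: s3
  storageClassName: gp3-csi # illustrative storage class
  tenants:
    mode: openshift-logging
----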
@@ -1,27 +0,0 @@
// Text snippet included in the following assemblies:
//
//
// Text snippet included in the following modules:
//
//
:_mod-docs-content-type: SNIPPET

.Output Destinations
[options="header"]
|======
|Feature|Protocol|Tested with|Fluentd|Vector
|Cloudwatch|REST over HTTPS||✓|✓
|Elasticsearch||||
| * v6||v6.8.1|✓|✓
| * v7||v7.12.2|✓|✓
| * v8||||✓
|Google Cloud Logging||||✓

|Kafka|kafka 0.11|kafka 2.4.1 kafka 2.7.0 kafka 3|✓|✓

|Fluent Forward|fluentd forward v1|fluentd 1.14.6
logstash 7.10.1|✓|

|Loki|REST over HTTP(S)|Loki 2.3.0 Loki 2.6.0|✓|✓
|Syslog|RFC3164, RFC5424|rsyslog 8.39.0|✓|
|======
@@ -1,23 +0,0 @@
// Text snippet included in the following assemblies:
//
//
// Text snippet included in the following modules:
//
//
:_mod-docs-content-type: SNIPPET

[options="header"]
|====================================================================================================
| Output | Protocol | Tested with | Fluentd | Vector
| Cloudwatch | REST over HTTP(S) | | ✓ | ✓
| Elasticsearch v6 | | v6.8.1 | ✓ | ✓
| Elasticsearch v7 | | v7.12.2, 7.17.7 | ✓ | ✓
| Elasticsearch v8 | | v8.4.3 | | ✓
| Fluent Forward | Fluentd forward v1 | Fluentd 1.14.6, Logstash 7.10.1 | ✓ |
| Google Cloud Logging | | | | ✓
| HTTP | HTTP 1.1 | Fluentd 1.14.6, Vector 0.21 | |
| Kafka | Kafka 0.11 | Kafka 2.4.1, 2.7.0, 3.3.1 | ✓ | ✓
| Loki | REST over HTTP(S) | Loki 2.3.0, 2.7 | ✓ | ✓
| Splunk | HEC | v8.2.9, 9.0.0 | | ✓
| Syslog | RFC3164, RFC5424 | Rsyslog 8.37.0-9.el7 | ✓ |
|====================================================================================================
@@ -1,13 +0,0 @@
// Text snippet included in the following assemblies:
//
//
// Text snippet included in the following modules:
//
//
:_mod-docs-content-type: SNIPPET


[NOTE]
====
For Logging 5.5 and later, documentation is organized by version.
====
@@ -1,23 +0,0 @@
// Text snippet included in the following assemblies:
//
//
// Text snippet included in the following modules:
//
//
:_mod-docs-content-type: SNIPPET

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: "stable" <1>
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
----
<1> Specify `stable` or `stable-5.<y>` as the channel.
@@ -1,27 +0,0 @@
// Text snippet included in the following assemblies:
//
//
//
// Text snippet included in the following modules:
//
// * modules/network-observability-auth-multi-tenancy.adoc

:_mod-docs-content-type: SNIPPET
.Example ClusterRole reader yaml
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: netobserv-reader <1>
rules:
- apiGroups:
  - 'loki.grafana.com'
  resources:
  - network
  resourceNames:
  - logs
  verbs:
  - 'get'
----
<1> This role can be used for multi-tenancy.
@@ -1,26 +0,0 @@
// Text snippet included in the following assemblies:
//
//
//
// Text snippet included in the following modules:
//
// * modules/network-observability-auth-multi-tenancy.adoc

:_mod-docs-content-type: SNIPPET
.Example ClusterRole writer yaml
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: netobserv-writer
rules:
- apiGroups:
  - 'loki.grafana.com'
  resources:
  - network
  resourceNames:
  - logs
  verbs:
  - 'create'
----
@@ -1,30 +0,0 @@
// Text snippet included in the following assemblies:
//
//
//
// Text snippet included in the following modules:
//
// * modules/network-observability-auth-multi-tenancy.adoc

:_mod-docs-content-type: SNIPPET

.Example ClusterRoleBinding yaml
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: netobserv-writer-flp
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: netobserv-writer
subjects:
- kind: ServiceAccount
  name: flowlogs-pipeline <1>
  namespace: netobserv
- kind: ServiceAccount
  name: flowlogs-pipeline-transformer
  namespace: netobserv
----
<1> The `flowlogs-pipeline` writes to Loki. If you are using Kafka, this value is `flowlogs-pipeline-transformer`.
@@ -1,13 +0,0 @@
// Text snippet included in the following modules:
//
// * modules/telco-ran-cluster-tuning.adoc
// * modules/telco-core-cpu-partitioning-and-performance-tuning.adoc
// * modules/telco-core-application-workloads.adoc


:_mod-docs-content-type: SNIPPET

[NOTE]
====
As of {product-title} 4.19, cgroup v1 is no longer supported and has been removed. All workloads must now be compatible with cgroup v2. For more information, see link:https://www.redhat.com/en/blog/rhel-9-changes-context-red-hat-openshift-workloads[Red Hat Enterprise Linux 9 changes in the context of Red Hat OpenShift workloads].
====
@@ -1,11 +0,0 @@
// Text snippet included in the following modules:
//
// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc

:_mod-docs-content-type: SNIPPET

.Prerequisites

* A stateful application is running in a separate namespace with persistent volume claims (PVCs) that use CephFS as the provisioner.
* The `StorageClass` and `VolumeSnapshotClass` custom resources (CRs) are defined for CephFS and OADP 1.2 Data Mover.
* A secret named `cloud-credentials` exists in the `openshift-adp` namespace.
@@ -1,15 +0,0 @@
// Text snippet included in the following modules:
//
// * release_notes/ocp-4-17-release-notes.adoc (enterprise-4.17 branch only)
// * extensions/arch/catalogd.adoc
// * extensions/catalogs/creating-catalogs.adoc
// * extensions/catalogs/fbc.adoc
// * extensions/catalogs/managing-catalogs.adoc
// * extensions/catalogs/rh-catalogs.adoc
// * extensions/ce/managing-ce.adoc
// * extensions/ce/update-paths.adoc
// * extensions/index.adoc

:_mod-docs-content-type: SNIPPET

Currently, {olmv1-first} cannot authenticate with private registries, such as the Red{nbsp}Hat-provided Operator catalogs. This is a known issue. As a result, the {olmv1} procedures that rely on having the Red{nbsp}Hat Operators catalog installed do not work. (link:https://issues.redhat.com/browse/OCPBUGS-36364[*OCPBUGS-36364*])
@@ -1,14 +0,0 @@
// Text snippet included in the following modules:
//
// * modules/olmv1-installing-an-operator.adoc
// * release_notes/ocp-4-17-release-notes.adoc (enterprise-4.17 branch only)

:_mod-docs-content-type: SNIPPET

There is a known issue in {olmv1}. If you do not assign the correct role-based access controls (RBAC) to an extension's service account, {olmv1} gets stuck and reconciliation stops.

Currently, {olmv1} does not have tools to help extension administrators find the correct RBAC for a service account.

Because {olmv1} is a Technology Preview feature and must not be used on production clusters, you can avoid this issue by using the more permissive RBAC included in the documentation.

This RBAC is intended for testing purposes only. Do not use it on production clusters.
@@ -1,14 +0,0 @@
// Text snippet included in the following assemblies:
//
// *
//
// Text snippet included in the following modules:
//
// *

:_mod-docs-content-type: SNIPPET

[WARNING]
====
RukPak, a Technology Preview component, does not support FIPS. In {product-title} {product-version}, {olmv1-first} depends on RukPak. As a result, RukPak and {olmv1} do not run on clusters with FIPS mode enabled.
====
@@ -1,15 +0,0 @@
// Text snippet included in the following modules:
//
// * modules/olmv1-adding-a-catalog.adoc
// * modules/olmv1-creating-a-pull-secret-for-catalogd.adoc
// * modules/olmv1-red-hat-catalogs.adoc

:_mod-docs-content-type: SNIPPET

If you want to use a catalog that is hosted on a private registry, such as Red{nbsp}Hat-provided Operator catalogs from `registry.redhat.io`, you must have a pull secret scoped to the `openshift-catalogd` namespace.

ifndef::olmv1-pullsecret-proc[For more information, see "Creating a pull secret for catalogs hosted on a secure registry".]

ifdef::olmv1-pullsecret-proc[]
Catalogd cannot read global pull secrets from {product-title} clusters. Catalogd can read references to secrets only in the namespace where it is deployed.
endif::[]
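As an illustrative sketch, a pull secret scoped to that namespace can be created from an existing registry login file; the secret name and file path are placeholders:

[source,terminal]
----
$ oc create secret generic <pull_secret_name> \
  --from-file=.dockerconfigjson=<path_to_registry_config.json> \
  --type=kubernetes.io/dockerconfigjson \
  -n openshift-catalogd
----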
@@ -1,13 +0,0 @@
// Text snippet included in the following modules:
//
// * modules/olmv1-installing-an-operator.adoc
// * modules/olmv1-updating-an-operator.adoc

:_mod-docs-content-type: SNIPPET

[NOTE]
====
If you specify a channel or define a version range in your Operator or extension's CR, {olmv1} does not display the resolved version installed on the cluster. Only the version and channel information specified in the CR is displayed.

If you want to find the specific version that is installed, you must compare the SHA of the image of the `spec.source.image.ref` field to the image reference in the catalog.
====
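As an illustrative sketch of that comparison, an image reference can be resolved to its digest with `skopeo` and then matched against the image reference listed in the catalog (the image reference is a placeholder):

[source,terminal]
----
$ skopeo inspect docker://<image_ref> | jq -r '.Digest'
----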
@@ -1,15 +0,0 @@
// Text snippet included in the following assemblies:
//

:_mod-docs-content-type: SNIPPET

[WARNING]
====
Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group:

* `<operatorgroup_name>-admin`
* `<operatorgroup_name>-edit`
* `<operatorgroup_name>-view`

When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster.
====
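As a minimal sketch, an Operator group with a unique name that targets its own namespace might look like the following; the name and namespace are illustrative:

[source,yaml]
----
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-operator-group # must not collide with existing cluster roles or Operator groups
  namespace: my-namespace
spec:
  targetNamespaces:
  - my-namespace
----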
@@ -1,11 +0,0 @@
:_mod-docs-content-type: SNIPPET
[source,yaml]
----
- path: source-crs/HardwareEvent.yaml <1>
  patches:
  - spec:
      logLevel: debug
      nodeSelector: {}
      transportHost: http://hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043
----
<1> Each baseboard management controller (BMC) requires a single `HardwareEvent` CR only.
@@ -1,11 +0,0 @@
:_mod-docs-content-type: SNIPPET
[source,yaml]
----
- fileName: HardwareEvent.yaml <1>
  policyName: "config-policy"
  spec:
    nodeSelector: {}
    transportHost: "http://hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043"
    logLevel: "info"
----
<1> Each baseboard management controller (BMC) requires a single `HardwareEvent` CR only.
@@ -1,10 +0,0 @@
:_mod-docs-content-type: SNIPPET
[NOTE]
====
During holdover, the T-GM or T-BC uses the internal system clock to continue generating time synchronization signals as accurately as possible based on the last known good reference.

You can set the holdover specification threshold, which controls the time spent advertising `ClockClass` value `7` or `135`, to `0` so that the T-GM or T-BC advertises a degraded `ClockClass` value directly after losing traceability to a PRTC.
In this case, after initially advertising `ClockClass` values between `140–165`, a clock can still be within the holdover specification.
====

For more information, see link:https://www.itu.int/rec/T-REC-G.8275.1-202211-I/en["Phase/time traceability information", ITU-T G.8275.1/Y.1369.1 Recommendations].
@@ -1,6 +0,0 @@
[IMPORTANT]
====
{product-title} ROSA 4.12 cluster creation can take a long time or fail. The default version of ROSA is set to 4.11, which means that only 4.11 resources are created when you create account roles or ROSA clusters using the default settings. Account roles from 4.12 are backward compatible, as are `account-role` policy versions. You can use the `--version` flag to create 4.12 resources.

For more information, see the link:https://access.redhat.com/solutions/6996508[ROSA 4.12 cluster creation failure solution].
====
@@ -1,6 +0,0 @@
:_mod-docs-content-type: SNIPPET

[IMPORTANT]
====
This section has not yet been fully tested against {hcp-title-first} clusters. Until this section is validated, avoid using this documentation to configure {hcp-title} clusters for production use.
====
@@ -1,6 +0,0 @@
// Text snippet included in the following modules:
//
// * rosa_release_notes/rosa-release-notes.adoc

:_mod-docs-content-type: SNIPPET
* **Hosted control planes.** {product-title} clusters that use {hcp} are now available as a link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] feature. This new architecture provides a lower-cost, more resilient ROSA architecture. For more information, see xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-quickly[Creating {hcp-title} clusters using the default options].
@@ -1,18 +0,0 @@
// Text snippet included in the following assemblies:
//
// * rosa_cluster_admin/rosa-configuring-pid-limits.adoc
//
// Text snippet included in the following modules:
//
// * modules/setting-higher-pid-limit-on-existing-cluster.adoc

:_mod-docs-content-type: SNIPPET

// Snippet that notifies user that Shielded VM is not supported for clusters created using bare metal instance types.

[IMPORTANT]
====
[subs="attributes+"]
Shielded VM is not supported for {product-title} on {GCP} clusters using bare-metal instance types. For more information, see link:https://cloud.google.com/compute/shielded-vm/docs/shielded-vm#limitations[Limitations] in the Google Cloud documentation.
====
// Undefine {FeatureName} attribute, so that any mistakes are easily spotted
@@ -1,14 +0,0 @@
// Text snippet included in the following modules:
//
// * installing/installing_vsphere/installing-restricted-networks-installer-provisioned-vsphere.adoc
// * installing/installing_vsphere/installing-restricted-networks-vsphere.adoc
// * installing/installing_vsphere/installing-vsphere-installer-provisioned-customizations.adoc
// * installing/installing_vsphere/installing-vsphere-installer-provisioned.adoc
// * installing/installing_vsphere/installing-vsphere.adoc

:_mod-docs-content-type: SNIPPET

[NOTE]
====
{product-title} supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported.
====
@@ -1,92 +0,0 @@
:_mod-docs-content-type: SNIPPET
.Example {sno} cluster SiteConfig CR
[source,yaml,subs="attributes+"]
----
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "<site_name>"
  namespace: "<site_name>"
spec:
  baseDomain: "example.com"
  pullSecretRef:
    name: "assisted-deployment-pull-secret" <1>
  clusterImageSetNameRef: "openshift-{product-version}" <2>
  sshPublicKey: "ssh-rsa AAAA..." <3>
  clusters:
  - clusterName: "<site_name>"
    networkType: "OVNKubernetes"
    clusterLabels: <4>
      common: true
      group-du-sno: ""
      sites: "<site_name>"
    clusterNetwork:
    - cidr: 1001:1::/48
      hostPrefix: 64
    machineNetwork:
    - cidr: 1111:2222:3333:4444::/64
    serviceNetwork:
    - 1001:2::/112
    additionalNTPSources:
    - 1111:2222:3333:4444::2
    #crTemplates:
    #  KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml" <5>
    nodes:
    - hostName: "example-node.example.com" <6>
      role: "master"
      nodeLabels: <7>
        node-role.kubernetes.io/example-label: ""
        custom-label/parameter1: "true"
      # automatedCleaningMode: "disabled" <8>
      bmcAddress: idrac-virtualmedia://<out_of_band_ip>/<system_id>/ <9>
      bmcCredentialsName:
        name: "bmh-secret" <10>
      bootMACAddress: "AA:BB:CC:DD:EE:11"
      bootMode: "UEFI" <11>
      rootDeviceHints: <12>
        wwn: "0x11111000000asd123"
      cpuset: "0-1,52-53" <13>
      nodeNetwork: <14>
        interfaces:
        - name: eno1
          macAddress: "AA:BB:CC:DD:EE:11"
        config:
          interfaces:
          - name: eno1
            type: ethernet
            state: up
            ipv4:
              enabled: false
            ipv6: <15>
              enabled: true
              address:
              - ip: 1111:2222:3333:4444::aaaa:1
                prefix-length: 64
          dns-resolver:
            config:
              search:
              - example.com
              server:
              - 1111:2222:3333:4444::2
          routes:
            config:
            - destination: ::/0
              next-hop-interface: eno1
              next-hop-address: 1111:2222:3333:4444::1
              table-id: 254
----
<1> Create the `assisted-deployment-pull-secret` CR with the same namespace as the `SiteConfig` CR.
<2> `clusterImageSetNameRef` defines an image set available on the hub cluster. To see the list of supported versions on your hub cluster, run `oc get clusterimagesets`.
<3> Configure the SSH public key used to access the cluster.
<4> Cluster labels must correspond to the `bindingRules` field in the `PolicyGenTemplate` CRs that you define. For example, `policygentemplates/common-ranGen.yaml` applies to all clusters with `common: true` set, and `policygentemplates/group-du-sno-ranGen.yaml` applies to all clusters with `group-du-sno: ""` set.
<5> Optional. The CR specified under `KlusterletAddonConfig` is used to override the default `KlusterletAddonConfig` that is created for the cluster.
<6> For single-node deployments, define a single host. For three-node deployments, define three hosts. For standard deployments, define three hosts with `role: master` and two or more hosts defined with `role: worker`.
<7> Specify custom roles for your nodes in your managed clusters. These are additional roles that are not used by any {product-title} components, only by the user. When you add a custom role, it can be associated with a custom machine config pool that references a specific configuration for that role. Adding your custom labels or roles during installation makes the deployment process more effective and prevents the need for additional reboots after the installation is complete.
<8> Optional. If the value is set to `metadata`, the partitioning table of the disk is removed, but the disk is not fully wiped. By default, the `automatedCleaningMode` field is disabled. To enable removing the partitioning table, uncomment this line and set the value to `metadata`.
<9> BMC address that you use to access the host. Applies to all cluster types. {ztp} supports iPXE and virtual media booting by using Redfish or IPMI protocols. To use iPXE booting, you must use {rh-rhacm} 2.8 or later. For more information about BMC addressing, see the _Additional resources_ section.
<10> Name of the `bmh-secret` CR that you separately create with the host BMC credentials. When creating the `bmh-secret` CR, use the same namespace as the `SiteConfig` CR that provisions the host.
<11> Configures the boot mode for the host. The default value is `UEFI`. Use `UEFISecureBoot` to enable secure boot on the host.
<12> Specifies the device for deployment. Identifiers that are stable across reboots are recommended, for example `wwn: <disk_wwn>` or `deviceName: /dev/disk/by-path/<device_path>`. For a detailed list of stable identifiers, see the _About root device hints_ section.
<13> `cpuset` must match the value set in the cluster `PerformanceProfile` CR `spec.cpu.reserved` field for workload partitioning.
<14> Specifies the network settings for the node.
<15> Configures the IPv6 address for the host. For {sno} clusters with static IP addresses, the node-specific API and Ingress IP addresses must be the same.
@@ -1,56 +0,0 @@
:_mod-docs-content-type: SNIPPET
[IMPORTANT]
====
The following guidelines are based on internal lab benchmark testing only and do not represent a complete real-world host specification.
====

.Representative three-node hub cluster machine specifications
[cols=2*, width="90%", options="header"]
|====
|Requirement
|Description

|{product-title}
|version 4.13

|{rh-rhacm}
|version 2.7

|{cgu-operator-first}
|version 4.13

|Server hardware
|3 x Dell PowerEdge R650 rack servers

|NVMe hard disks
a|* 50 GB disk for `/var/lib/etcd`
* 2.9 TB disk for `/var/lib/containers`

|SSD hard disks
a|* 1 SSD split into 15 thin-provisioned logical volumes of 200 GB each, provisioned as `PV` CRs
* 1 SSD serving as an extra large `PV` resource

|Number of applied DU profile policies
|5
|====

[IMPORTANT]
====
The following network specifications are representative of a typical real-world RAN network and were applied to the scale lab environment during testing.
====

.Simulated lab environment network specifications
[cols=2*, width="90%", options="header"]
|====
|Specification
|Description

|Round-trip time (RTT) latency
|50 ms

|Packet loss
|0.02% packet loss

|Network bandwidth limit
|20 Mbps
|====