mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 21:46:22 +01:00

TELCODOCS-347-SNO-DU-INSTALL

This commit is contained in:
Stephen Smith
2022-04-06 08:45:21 -04:00
committed by openshift-cherrypick-robot
parent f634ab23e8
commit d7add07bed
17 changed files with 865 additions and 30 deletions

View File

@@ -2208,6 +2208,9 @@ Topics:
- Name: Creating a performance profile
File: cnf-create-performance-profiles
Distros: openshift-origin,openshift-enterprise
- Name: Deploying distributed units manually on single node OpenShift
File: ztp-configuring-single-node-cluster-deployment-during-installation
Distros: openshift-origin,openshift-enterprise
- Name: Provisioning and deploying a distributed unit (DU)
File: cnf-provisioning-and-deploying-a-distributed-unit
Distros: openshift-webscale

View File

@@ -0,0 +1,13 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/sno-du-deploying-clusters-on-single-nodes.adoc
:_content-type: CONCEPT
[id="sno-du-applying-the-distributed-unit-configuration-to-sno_{context}"]
= Applying the distributed unit (DU) configuration to a single node cluster
Perform the following tasks to configure a single node cluster for a DU:
* Apply the required extra installation manifests at installation time.
* Apply the post-install configuration custom resources (CRs).

View File

@@ -0,0 +1,13 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/sno-du-deploying-clusters-on-single-nodes.adoc
:_content-type: PROCEDURE
[id="sno-du-applying-the-extra-installation-manifests_{context}"]
= Applying the extra installation manifests
To apply the distributed unit (DU) configuration to the single node cluster, include the following
extra installation manifests during installation, as shown in the example after this list:
* The `MachineConfig` manifest that enables workload partitioning.
* Additional `MachineConfig` objects. A set of `MachineConfig` custom resources (CRs) is included by default. You can optionally include further `MachineConfig` CRs that are unique to your environment. Applying these CRs during installation is recommended, but not required, because it minimizes the number of reboots that can occur during post-install configuration.
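A minimal sketch of one way to include these extra manifests, assuming that you generate installation manifests with `openshift-install`; the exact mechanism depends on your installation method, and `<installation_directory>` and `<extra_manifests_directory>` are placeholders:
[source,terminal]
----
$ openshift-install create manifests --dir <installation_directory>
$ cp <extra_manifests_directory>/*.yaml <installation_directory>/openshift/
$ openshift-install create ignition-configs --dir <installation_directory>
----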

View File

@@ -0,0 +1,15 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/sno-du-deploying-clusters-on-single-nodes.adoc
:_content-type: PROCEDURE
[id="sno-du-applying-the-post-install-configuration-custom-resources_{context}"]
= Applying the post-install configuration custom resources (CRs)
.Procedure
* After {product-title} is installed on the cluster, apply the CRs that you configured for the distributed unit (DU) by running the following command:
[source,terminal]
----
$ oc apply -f <file_name>.yaml
----
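If you keep all of the DU configuration CRs in a single directory, you can apply them in one step instead of file by file; a minimal sketch, where `<du_configuration_directory>` is a hypothetical placeholder:
[source,terminal]
----
$ oc apply -f <du_configuration_directory>/
----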

View File

@@ -0,0 +1,60 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/sno-du-deploying-clusters-on-single-nodes.adoc
:_content-type: PROCEDURE
[id="sno-du-configuring-logging-locally-and-forwarding_{context}"]
= Configuring logging locally and forwarding
To debug a single node distributed unit (DU), logs need to be stored and forwarded for further analysis.
.Procedure
* Edit the `ClusterLogging` custom resource (CR) in the `openshift-logging` project:
+
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging <1>
metadata:
name: instance
namespace: openshift-logging
spec:
collection:
logs:
fluentd: {}
type: fluentd
curation:
type: "curator"
curator:
schedule: "30 3 * * *"
managementState: Managed
---
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder <2>
metadata:
name: instance
namespace: openshift-logging
spec:
inputs:
- infrastructure: {}
outputs:
- name: kafka-open
type: kafka
url: tcp://10.46.55.190:9092/test <3>
pipelines:
- inputRefs:
- audit
name: audit-logs
outputRefs:
- kafka-open
- inputRefs:
- infrastructure
name: infrastructure-logs
outputRefs:
- kafka-open
----
<1> Updates the existing `ClusterLogging` instance, or creates the instance if it does not exist.
<2> Updates the existing `ClusterLogForwarder` instance, or creates the instance if it does not exist.
<3> Specifies the URL of the destination Kafka server.
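A minimal verification sketch, assuming that the logging Operator and the CRs above have been applied; it confirms that the collector pods are running and that the forwarder configuration was accepted:
[source,terminal]
----
$ oc get pods -n openshift-logging
$ oc get clusterlogforwarder instance -n openshift-logging -o yaml
----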

View File

@@ -0,0 +1,50 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/sno-du-deploying-clusters-on-single-nodes.adoc
:_content-type: PROCEDURE
[id="sno-du-configuring-performance-addons_{context}"]
= Configuring the Performance Addon Operator
This is a key configuration for the single node distributed unit (DU). Many of the real-time capabilities and service assurance features are configured here.
.Procedure
* Configure the Performance Addon Operator by applying the following example `PerformanceProfile` custom resource (CR):
+
[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
name: perfprofile-policy
spec:
additionalKernelArgs:
- idle=poll
- rcupdate.rcu_normal_after_boot=0
cpu:
isolated: 2-19,22-39 <1>
reserved: 0-1,20-21 <2>
hugepages:
defaultHugepagesSize: 1G
pages:
- count: 32 <3>
size: 1G <4>
machineConfigPoolSelector:
pools.operator.machineconfiguration.openshift.io/master: ""
net:
userLevelNetworking: true <5>
nodeSelector:
node-role.kubernetes.io/master: ""
numa:
topologyPolicy: restricted
realTimeKernel:
enabled: true <6>
----
<1> Set the isolated CPUs. Ensure that all of the Hyper-Threading sibling pairs match.
<2> Set the reserved CPUs. In this case, a hyper-threaded pair is allocated on NUMA node 0 and a pair on NUMA node 1.
<3> Set the number of huge pages.
<4> Set the huge page size.
<5> Set to `true` to isolate the CPUs from networking interrupts.
<6> Set to `true` to install the real-time Linux kernel.
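A minimal verification sketch, assuming the profile name from the example above and a placeholder node name; it confirms that the profile exists and that the expected kernel arguments are applied to the node:
[source,terminal]
----
$ oc get performanceprofile perfprofile-policy -o yaml
$ oc debug node/<node_name> -- chroot /host cat /proc/cmdline
----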

View File

@@ -0,0 +1,136 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/sno-du-deploying-clusters-on-single-nodes.adoc
:_content-type: PROCEDURE
[id="sno-du-configuring-ptp_{context}"]
= Configuring Precision Time Protocol (PTP)
At the far edge, the RAN uses PTP to synchronize the systems.
.Procedure
* Configure PTP using the following example:
+
[source,yaml]
----
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
name: du-ptp-slave
namespace: openshift-ptp
spec:
profile:
- interface: ens5f0 <1>
name: slave
phc2sysOpts: -a -r -n 24
ptp4lConf: |
[global]
#
# Default Data Set
#
twoStepFlag 1
slaveOnly 0
priority1 128
priority2 128
domainNumber 24
#utc_offset 37
clockClass 248
clockAccuracy 0xFE
offsetScaledLogVariance 0xFFFF
free_running 0
freq_est_interval 1
dscp_event 0
dscp_general 0
dataset_comparison ieee1588
G.8275.defaultDS.localPriority 128
#
# Port Data Set
#
logAnnounceInterval -3
logSyncInterval -4
logMinDelayReqInterval -4
logMinPdelayReqInterval -4
announceReceiptTimeout 3
syncReceiptTimeout 0
delayAsymmetry 0
fault_reset_interval 4
neighborPropDelayThresh 20000000
masterOnly 0
G.8275.portDS.localPriority 128
#
# Run time options
#
assume_two_step 0
logging_level 6
path_trace_enabled 0
follow_up_info 0
hybrid_e2e 0
inhibit_multicast_service 0
net_sync_monitor 0
tc_spanning_tree 0
tx_timestamp_timeout 50
unicast_listen 0
unicast_master_table 0
unicast_req_duration 3600
use_syslog 1
verbose 0
summary_interval 0
kernel_leap 1
check_fup_sync 0
#
# Servo Options
#
pi_proportional_const 0.0
pi_integral_const 0.0
pi_proportional_scale 0.0
pi_proportional_exponent -0.3
pi_proportional_norm_max 0.7
pi_integral_scale 0.0
pi_integral_exponent 0.4
pi_integral_norm_max 0.3
step_threshold 0.0
first_step_threshold 0.00002
max_frequency 900000000
clock_servo pi
sanity_freq_limit 200000000
ntpshm_segment 0
#
# Transport options
#
transportSpecific 0x0
ptp_dst_mac 01:1B:19:00:00:00
p2p_dst_mac 01:80:C2:00:00:0E
udp_ttl 1
udp6_scope 0x0E
uds_address /var/run/ptp4l
#
# Default interface options
#
clock_type OC
network_transport UDPv4
delay_mechanism E2E
time_stamping hardware
tsproc_mode filter
delay_filter moving_median
delay_filter_length 10
egressLatency 0
ingressLatency 0
boundary_clock_jbod 0
#
# Clock description
#
productDescription ;;
revisionData ;;
manufacturerIdentity 00:00:00
userDescription ;
timeSource 0xA0
ptp4lOpts: -2 -s --summary_interval -4
recommend:
- match:
- nodeLabel: node-role.kubernetes.io/master
priority: 4
profile: slave
----
<1> Sets the interface used for PTP.
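A minimal verification sketch; `<linuxptp_daemon_pod>` is a placeholder taken from the output of the first command, and the container name assumes the default daemon container deployed by the PTP Operator:
[source,terminal]
----
$ oc get pods -n openshift-ptp
$ oc logs <linuxptp_daemon_pod> -n openshift-ptp -c linuxptp-daemon-container
----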

View File

@@ -0,0 +1,91 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/sno-du-deploying-clusters-on-single-nodes.adoc
:_content-type: PROCEDURE
[id="sno-du-configuring-sriov_{context}"]
= Configuring single root I/O virtualization (SR-IOV)
SR-IOV is commonly used to enable the fronthaul and the midhaul networks.
.Procedure
* Use the following example to configure SR-IOV on a single node distributed unit (DU). The first custom resource (CR) is required; the CRs that follow it are examples.
+
[source,yaml]
----
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovOperatorConfig
metadata:
name: default
namespace: openshift-sriov-network-operator
spec:
configDaemonNodeSelector:
node-role.kubernetes.io/master: ""
disableDrain: true
enableInjector: true
enableOperatorWebhook: true
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
name: sriov-nw-du-mh
namespace: openshift-sriov-network-operator
spec:
networkNamespace: openshift-sriov-network-operator
resourceName: du_mh
vlan: 150 <1>
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
name: sriov-nnp-du-mh
namespace: openshift-sriov-network-operator
spec:
deviceType: vfio-pci <2>
isRdma: false
nicSelector:
pfNames:
- ens7f0 <3>
nodeSelector:
node-role.kubernetes.io/master: ""
numVfs: 8 <4>
priority: 10
resourceName: du_mh
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
name: sriov-nw-du-fh
namespace: openshift-sriov-network-operator
spec:
networkNamespace: openshift-sriov-network-operator
resourceName: du_fh
vlan: 140 <5>
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
name: sriov-nnp-du-fh
namespace: openshift-sriov-network-operator
spec:
deviceType: netdevice <6>
isRdma: true
nicSelector:
pfNames:
- ens5f0 <7>
nodeSelector:
node-role.kubernetes.io/master: ""
numVfs: 8 <8>
priority: 10
resourceName: du_fh
----
<1> Specifies the VLAN for the midhaul network.
<2> Select either `vfio-pci` or `netdevice`, as needed.
<3> Specifies the interface connected to the midhaul network.
<4> Specifies the number of VFs for the midhaul network.
<5> Specifies the VLAN for the fronthaul network.
<6> Select either `vfio-pci` or `netdevice`, as needed.
<7> Specifies the interface connected to the fronthaul network.
<8> Specifies the number of VFs for the fronthaul network.
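A minimal verification sketch; it lists the SR-IOV device state that the Operator reports for the node and the network and policy objects created by the example above:
[source,terminal]
----
$ oc get sriovnetworknodestates -n openshift-sriov-network-operator
$ oc get sriovnetwork,sriovnetworknodepolicy -n openshift-sriov-network-operator
----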

View File

@@ -0,0 +1,84 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/sno-du-deploying-clusters-on-single-nodes.adoc
:_content-type: CONCEPT
[id="sno-du-configuring-the-container-mountspace_{context}"]
= Configuring the container mount namespace
To reduce the overall management footprint of the platform, a `MachineConfig` object is provided that contains the container-specific mount points in a namespace shared by kubelet and CRI-O. No configuration changes are needed. Use the provided settings:
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
labels:
machineconfiguration.openshift.io/role: master
name: container-mount-namespace-and-kubelet-conf-master
spec:
config:
ignition:
version: 3.2.0
storage:
files:
- contents:
source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo=
mode: 493
path: /usr/local/bin/extractExecStart
- contents:
source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo=
mode: 493
path: /usr/local/bin/nsenterCmns
systemd:
units:
- contents: |
[Unit]
Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts
[Service]
Type=oneshot
RemainAfterExit=yes
RuntimeDirectory=container-mount-namespace
Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace
Environment=BIND_POINT=%t/container-mount-namespace/mnt
ExecStartPre=bash -c "findmnt ${RUNTIME_DIRECTORY} || mount --make-unbindable --bind ${RUNTIME_DIRECTORY} ${RUNTIME_DIRECTORY}"
ExecStartPre=touch ${BIND_POINT}
ExecStart=unshare --mount=${BIND_POINT} --propagation slave mount --make-rshared /
ExecStop=umount -R ${RUNTIME_DIRECTORY}
enabled: true
name: container-mount-namespace.service
- dropins:
- contents: |
[Unit]
Wants=container-mount-namespace.service
After=container-mount-namespace.service
[Service]
ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART
EnvironmentFile=-/%t/%N-execstart.env
ExecStart=
ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \
${ORIG_EXECSTART}"
name: 90-container-mount-namespace.conf
name: crio.service
- dropins:
- contents: |
[Unit]
Wants=container-mount-namespace.service
After=container-mount-namespace.service
[Service]
ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART
EnvironmentFile=-/%t/%N-execstart.env
ExecStart=
ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \
${ORIG_EXECSTART} --housekeeping-interval=30s"
name: 90-container-mount-namespace.conf
- contents: |
[Service]
Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s"
Environment="OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s"
name: 30-kubelet-interval-tuning.conf
name: kubelet.service
----
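A minimal inspection sketch, assuming a placeholder node name; the first command decodes one of the embedded scripts for review before you apply the manifest, and the second checks the mount namespace service on a running node:
[source,terminal]
----
$ echo "<base64_encoded_content>" | base64 -d
$ oc debug node/<node_name> -- chroot /host systemctl status container-mount-namespace.service
----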

View File

@@ -0,0 +1,13 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/sno-du-deploying-clusters-on-single-nodes.adoc
:_content-type: CONCEPT
[id="sno-configuring-the-distributed-units_{context}"]
= Configuring the distributed units (DUs)
This section describes a set of configurations for an {product-title} cluster so that it meets the feature and performance requirements necessary for running a distributed unit (DU) application. Some of this content must be applied during installation, and other configurations can be applied post-install.
After you have installed {product-title} on the single node, further configuration is needed to enable the platform to carry a DU workload.

View File

@@ -0,0 +1,120 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/sno-du-deploying-clusters-on-single-nodes.adoc
:_content-type: PROCEDURE
[id="sno-du-configuring-the-operators_{context}"]
= Creating OperatorGroups for Operators
This configuration enables you to add the Operators needed to configure the platform post-installation. It adds the `Namespace` and `OperatorGroup` objects for the Local Storage Operator, the Logging Operator, the Performance Addon Operator, the PTP Operator, and the SR-IOV Network Operator.
.Procedure
* No configuration changes are needed. Use the provided settings:
+
.Local Storage Operator
+
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
annotations:
workload.openshift.io/allowed: management
name: openshift-local-storage
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: openshift-local-storage
namespace: openshift-local-storage
spec:
targetNamespaces:
- openshift-local-storage
----
+
.Logging Operator
+
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
annotations:
workload.openshift.io/allowed: management
name: openshift-logging
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: cluster-logging
namespace: openshift-logging
spec:
targetNamespaces:
- openshift-logging
----
+
.Performance Addon Operator
+
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
annotations:
workload.openshift.io/allowed: management
labels:
openshift.io/cluster-monitoring: "true"
name: openshift-performance-addon-operator
spec: {}
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: performance-addon-operator
namespace: openshift-performance-addon-operator
----
+
.PTP Operator
+
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
annotations:
workload.openshift.io/allowed: management
labels:
openshift.io/cluster-monitoring: "true"
name: openshift-ptp
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: ptp-operators
namespace: openshift-ptp
spec:
targetNamespaces:
- openshift-ptp
----
+
.SR-IOV Network Operator
+
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
annotations:
workload.openshift.io/allowed: management
name: openshift-sriov-network-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: sriov-network-operators
namespace: openshift-sriov-network-operator
spec:
targetNamespaces:
- openshift-sriov-network-operator
----
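A minimal verification sketch, assuming that the namespaces and Operator groups above have been applied; it confirms that the namespaces exist and lists the `OperatorGroup` objects across all namespaces:
[source,terminal]
----
$ oc get namespace openshift-local-storage openshift-logging openshift-performance-addon-operator openshift-ptp openshift-sriov-network-operator
$ oc get operatorgroup -A
----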

View File

@@ -0,0 +1,49 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/sno-du-deploying-clusters-on-single-nodes.adoc
:_content-type: PROCEDURE
[id="sno-du-disabling-ntp_{context}"]
= Disabling Network Time Protocol (NTP)
After the system is configured for Precision Time Protocol (PTP), you need to disable NTP to prevent it from impacting the system clock.
.Procedure
* No configuration changes are needed. Use the provided settings:
+
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
labels:
machineconfiguration.openshift.io/role: master
name: disable-chronyd
spec:
config:
systemd:
units:
- contents: |
[Unit]
Description=NTP client/server
Documentation=man:chronyd(8) man:chrony.conf(5)
After=ntpdate.service sntp.service ntpd.service
Conflicts=ntpd.service systemd-timesyncd.service
ConditionCapability=CAP_SYS_TIME
[Service]
Type=forking
PIDFile=/run/chrony/chronyd.pid
EnvironmentFile=-/etc/sysconfig/chronyd
ExecStart=/usr/sbin/chronyd $OPTIONS
ExecStartPost=/usr/libexec/chrony-helper update-daemon
PrivateTmp=yes
ProtectHome=yes
ProtectSystem=full
[Install]
WantedBy=multi-user.target
enabled: false
name: chronyd.service
ignition:
version: 2.2.0
----
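A minimal verification sketch, assuming a placeholder node name; it confirms that the `chronyd` service is disabled on the node after the `MachineConfig` is applied:
[source,terminal]
----
$ oc debug node/<node_name> -- chroot /host systemctl status chronyd
----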

View File

@@ -0,0 +1,40 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/sno-du-deploying-clusters-on-single-nodes.adoc
:_content-type: PROCEDURE
[id="sno-du-enabling-sctp_{context}"]
= Enabling Stream Control Transmission Protocol (SCTP)
SCTP is a key protocol used in RAN applications. This `MachineConfig` object adds the SCTP kernel module to the node to enable this protocol.
.Procedure
* No configuration changes are needed. Use the provided settings:
+
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
labels:
machineconfiguration.openshift.io/role: master
name: load-sctp-module
spec:
config:
ignition:
version: 2.2.0
storage:
files:
- contents:
source: data:,
verification: {}
filesystem: root
mode: 420
path: /etc/modprobe.d/sctp-blacklist.conf
- contents:
source: data:text/plain;charset=utf-8,sctp
filesystem: root
mode: 420
path: /etc/modules-load.d/sctp-load.conf
----
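A minimal verification sketch, assuming a placeholder node name; it checks that the SCTP kernel module is loaded after the node reboots with the new `MachineConfig`:
[source,terminal]
----
$ oc debug node/<node_name> -- chroot /host bash -c "lsmod | grep sctp"
----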

View File

@@ -1,16 +1,23 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/sno-du-enabling-workload-partitioning-on-single-node-openshift.adoc
// * scalability_and_performance/sno-du-deploying-clusters-on-single-nodes.adoc
:_content-type: PROCEDURE
[id="sno-du-enabling-workload-partitioning_{context}"]
= Enabling workload partitioning
Use the following procedure to enable workload partitioning for your single node deployments.
A key feature to enable as part of a single node installation is workload partitioning. This limits the cores allowed to run platform services, maximizing the CPU cores available for application payloads. You must configure workload partitioning at cluster installation time.
[NOTE]
====
Workload partitioning must be applied during installation.
====
.Procedure
. To enable workload partitioning, you must provide a `MachineConfig` manifest during installation to configure CRI-O and kubelet to know about the workload types. The following example shows a manifest without the encoded file content:
* The base64-encoded content below contains the CPU set that the management workloads are constrained to.
This content must be adjusted to match the set specified in the `performanceprofile` and must be accurate for
the number of cores on the cluster.
+
[source,yaml]
----
@@ -27,42 +34,17 @@ spec:
storage:
files:
- contents:
source: data:text/plain;charset=utf-8;base64,encoded-content-here
source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKcmVzb3VyY2VzID0geyAiY3B1c2hhcmVzIiA9IDAsICJjcHVzZXQiID0gIjAtMSw1Mi01MyIgfQo=
mode: 420
overwrite: true
path: /etc/crio/crio.conf.d/01-workload-partitioning
user:
name: root
- contents:
source: data:text/plain;charset=utf-8;base64,encoded-content-here
source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTEsNTItNTMiCiAgfQp9Cg==
mode: 420
overwrite: true
path: /etc/kubernetes/openshift-workload-pinning
user:
name: root
----
. Provide the contents of `/etc/crio/crio.conf.d/01-workload-partitioning` as the workload partitioning encoded content. The `cpuset` value varies based on the deployment:
+
[source,yaml]
----
cat <<EOF | base64 -w0
[crio.runtime.workloads.management]
activation_annotation = "target.workload.openshift.io/management"
annotation_prefix = "resources.workload.openshift.io"
resources = { "cpushares" = 0, "cpuset" = "0-1,52-53" }
EOF
----
. Provide the contents of `/etc/kubernetes/openshift-workload-pinning` as the workload pinning encoded content. The `cpuset` value varies based on the deployment:
+
[source,yaml]
----
cat <<EOF | base64 -w0
{
"management": {
"cpuset": "0-1,52-53"
}
}
EOF
----

View File

@@ -0,0 +1,31 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/sno-du-deploying-clusters-on-single-nodes.adoc
:_content-type: PROCEDURE
[id="sno-du-removing-the-console-operator_{context}"]
= Disabling the console Operator
The console Operator installs and maintains the web console on a cluster. When the node is centrally managed, the Operator is not needed, and disabling it frees resources for application workloads.
.Procedure
* Disable the console Operator by using the following configuration file. No configuration changes are needed. Use the provided settings:
+
[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
annotations:
include.release.openshift.io/ibm-cloud-managed: "false"
include.release.openshift.io/self-managed-high-availability: "false"
include.release.openshift.io/single-node-developer: "false"
release.openshift.io/create-only: "true"
name: cluster
spec:
logLevel: Normal
managementState: Removed
operatorLogLevel: Normal
----
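A minimal verification sketch; it checks the management state of the console Operator and confirms that no console pods remain in the `openshift-console` project:
[source,terminal]
----
$ oc get console.operator.openshift.io cluster -o jsonpath='{.spec.managementState}'
$ oc get pods -n openshift-console
----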

View File

@@ -0,0 +1,83 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/sno-du-deploying-clusters-on-single-nodes.adoc
:_content-type: PROCEDURE
[id="sno-du-subscribing-to-the-operators-needed-for-platform-configuration_{context}"]
= Subscribing to the Operators
The subscriptions provide the location to download the Operators needed for platform configuration.
.Procedure
* Use the following example to configure the subscriptions:
+
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: cluster-logging
namespace: openshift-logging
spec:
channel: "stable" <1>
name: cluster-logging
source: redhat-operators
sourceNamespace: openshift-marketplace
installPlanApproval: Manual <2>
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: local-storage-operator
namespace: openshift-local-storage
spec:
channel: "stable" <3>
name: local-storage-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
installPlanApproval: Manual
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: performance-addon-operator
namespace: openshift-performance-addon-operator
spec:
channel: "4.10" <4>
name: performance-addon-operator
source: performance-addon-operator
sourceNamespace: openshift-marketplace
installPlanApproval: Manual
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: ptp-operator-subscription
namespace: openshift-ptp
spec:
channel: "stable" <5>
name: ptp-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
installPlanApproval: Manual
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: sriov-network-operator-subscription
namespace: openshift-sriov-network-operator
spec:
channel: "stable" <6>
name: sriov-network-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
installPlanApproval: Manual
----
<1> Specify the channel to get the `cluster-logging` Operator.
<2> Specify `Manual` or `Automatic`. In `Automatic` mode, the Operator automatically updates to the latest versions in the channel as they become available in the registry. In `Manual` mode, new Operator versions are installed only after they are explicitly approved.
<3> Specify the channel to get the `local-storage-operator` Operator.
<4> Specify the channel to get the `performance-addon-operator` Operator.
<5> Specify the channel to get the `ptp-operator` Operator.
<6> Specify the channel to get the `sriov-network-operator` Operator.
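Because the subscriptions above set `installPlanApproval: Manual`, each Operator installation must be approved before it proceeds; a minimal sketch, using the `openshift-logging` namespace as an example and a placeholder install plan name:
[source,terminal]
----
$ oc get installplan -n openshift-logging
$ oc patch installplan <install_plan_name> -n openshift-logging --type merge --patch '{"spec":{"approved":true}}'
----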

View File

@@ -0,0 +1,52 @@
:_content-type: ASSEMBLY
[id="sno-du-deploying-distributed-units-manually-on-single-node-openshift"]
= Deploying distributed units manually on single node OpenShift
include::_attributes/common-attributes.adoc[]
:context: sno-du-deploying-distributed-units-manually-on-single-node-openshift
toc::[]
The procedures in this topic describe how to manually deploy a cluster on a small number of single nodes as a distributed unit (DU) during installation.
The procedures do not describe how to install single node OpenShift (SNO), which can be accomplished through many mechanisms. Rather, they capture the elements that must be configured as part of the installation process:
* Networking that enables connectivity to the SNO DU when the installation is complete.
* Workload partitioning, which can only be configured during installation.
* Additional items that help minimize potential reboots after installation.
// Configuring the DUs
include::modules/sno-du-configuring-the-distributed-units.adoc[leveloffset=+1]
include::modules/sno-du-enabling-workload-partitioning.adoc[leveloffset=+2]
include::modules/sno-du-configuring-the-container-mountspace.adoc[leveloffset=+2]
include::modules/sno-du-enabling-sctp.adoc[leveloffset=+2]
include::modules/sno-du-configuring-the-operators.adoc[leveloffset=+2]
include::modules/sno-du-subscribing-to-the-operators-needed-for-platform-configuration.adoc[leveloffset=+2]
include::modules/sno-du-configuring-logging-locally-and-forwarding.adoc[leveloffset=+2]
include::modules/sno-du-configuring-performance-addons.adoc[leveloffset=+2]
include::modules/sno-du-configuring-ptp.adoc[leveloffset=+2]
include::modules/sno-du-disabling-ntp.adoc[leveloffset=+2]
include::modules/sno-du-configuring-sriov.adoc[leveloffset=+2]
include::modules/sno-du-removing-the-console-operator.adoc[leveloffset=+2]
// Applying the distributed unit (DU) configuration to SNO
include::modules/sno-du-applying-the-distributed-unit-configuration-to-sno.adoc[leveloffset=+1]
include::modules/sno-du-applying-the-extra-installation-manifests.adoc[leveloffset=+2]
include::modules/sno-du-applying-the-post-install-configuration-custom-resources.adoc[leveloffset=+2]