mirror of
https://github.com/openshift/openshift-docs.git
synced 2026-02-05 03:47:04 +01:00
CNV-70051: Removing unused modules and fixing broken table
This commit is contained in:
committed by
openshift-cherrypick-robot
parent
13b17a931a
commit
ebd9bdf090
@@ -1,63 +0,0 @@
// Module included in the following assemblies:
//
// * virt/vm_templates/virt-deploying-vm-template-to-custom-namespace.adoc

:_mod-docs-content-type: PROCEDURE
[id="virt-adding-templates-to-custom-namespace_{context}"]
= Adding templates to a custom namespace

[role="_abstract"]
The Scheduling, Scale, and Performance (SSP) Operator deploys virtual machine templates to the `openshift` namespace by default. To also publish these templates in a custom namespace, set the `commonTemplatesNamespace` field in the `HyperConverged` custom resource (CR). After the templates sync to the custom namespace, you can modify or delete them there.

[NOTE]
====
Do not edit templates in the `openshift` namespace. The SSP Operator reconciles that namespace and overwrites any changes.
====

.Prerequisites
* You have installed the {oc-first}.
* You have logged in as a user with `cluster-admin` privileges.

.Procedure

. Optional: Create the custom namespace if it does not already exist:
+
[source,terminal]
----
$ oc create namespace <custom_namespace>
----

. Optional: View the list of templates in the `openshift` namespace:
+
[source,terminal]
----
$ oc get templates -n openshift
----

. Open the `HyperConverged` CR in your default editor by running the following command:
+
[source,terminal,subs="attributes+"]
----
$ oc edit hco -n {CNVNamespace} kubevirt-hyperconverged
----

. Add the `commonTemplatesNamespace` field and set your target namespace. For example:
+
[source,yaml]
----
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  commonTemplatesNamespace: <custom_namespace>
----

. Save your changes and exit the editor. The SSP Operator creates or updates the templates in the custom namespace.

. Verify that the templates appear in the custom namespace:
+
[source,terminal]
----
$ oc get templates -n <custom_namespace>
----
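As a non-interactive alternative to `oc edit`, you can set the field with a single `oc patch` command. This is a sketch only; it assumes the same CR name and namespace as the procedure above:

[source,terminal,subs="attributes+"]
----
$ oc patch hco kubevirt-hyperconverged -n {CNVNamespace} \
  --type merge \
  -p '{"spec":{"commonTemplatesNamespace":"<custom_namespace>"}}'
----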
@@ -1,169 +0,0 @@
// Module included in the following assemblies:
//
// * virt/monitoring/virt-running-cluster-checkups.adoc

:_mod-docs-content-type: PROCEDURE
[id="virt-building-vm-containerdisk-image_{context}"]
= Building a container disk image for {op-system-base} virtual machines

[role="_abstract"]
You can build a custom {op-system-base-full} 9 OS image in `qcow2` format and use it to create a container disk image.

You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the `spec.param.vmUnderTestContainerDiskImage` attribute of the DPDK checkup config map.

To build a container disk image, you must create an image builder virtual machine (VM). The _image builder VM_ is a {op-system-base} 9 VM that can be used to build custom {op-system-base} images.

.Prerequisites
* The image builder VM must run {op-system-base} 9.4 and must have a minimum of 2 CPU cores, 4 GiB RAM, and 20 GB of free space in the `/var` directory.
* You have installed the image builder tool and its CLI (`composer-cli`) on the VM. For more information, see "Additional resources".
* You have installed the `virt-customize` tool:
+
[source,terminal]
----
# dnf install guestfs-tools
----
* You have installed the Podman CLI tool (`podman`).

.Procedure

. Verify that you can build a {op-system-base} 9.4 image:
+
[source,terminal]
----
# composer-cli distros list
----
+
[NOTE]
====
To run the `composer-cli` commands as non-root, add your user to the `weldr` or `root` groups:

[source,terminal]
----
# usermod -a -G weldr <user>
----
[source,terminal]
----
$ newgrp weldr
----
====

. Enter the following command to create an image blueprint file in TOML format that contains the packages to be installed, kernel customizations, and the services to be disabled during boot time:
+
[source,terminal]
----
$ cat << EOF > dpdk-vm.toml
name = "dpdk_image"
description = "Image to use with the DPDK checkup"
version = "0.0.1"
distro = "rhel-9.4"

[[customizations.user]]
name = "root"
password = "redhat"

[[packages]]
name = "dpdk"

[[packages]]
name = "dpdk-tools"

[[packages]]
name = "driverctl"

[[packages]]
name = "tuned-profiles-cpu-partitioning"

[customizations.kernel]
append = "default_hugepagesz=1GB hugepagesz=1G hugepages=1"

[customizations.services]
disabled = ["NetworkManager-wait-online", "sshd"]
EOF
----

. Push the blueprint file to the image builder tool by running the following command:
+
[source,terminal]
----
# composer-cli blueprints push dpdk-vm.toml
----

. Generate the system image by specifying the blueprint name and output file format. The Universally Unique Identifier (UUID) of the image is displayed when you start the compose process.
+
[source,terminal]
----
# composer-cli compose start dpdk_image qcow2
----

. Wait for the compose process to complete. The compose status must show `FINISHED` before you can continue to the next step.
+
[source,terminal]
----
# composer-cli compose status
----
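Image composition can take several minutes. Rather than rerunning the status command by hand, you can poll it with the standard `watch` utility; a minimal sketch:

[source,terminal]
----
# watch -n 30 composer-cli compose status
----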

. Enter the following command to download the `qcow2` image file by specifying its UUID:
+
[source,terminal]
----
# composer-cli compose image <UUID>
----

. Create the customization script by running the following command:
+
[source,terminal]
----
$ cat <<EOF >customize-vm
#!/bin/bash

# Setup hugepages mount
mkdir -p /mnt/huge
echo "hugetlbfs /mnt/huge hugetlbfs defaults,pagesize=1GB 0 0" >> /etc/fstab

# Create vfio-noiommu.conf
echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf

# Enable guest-exec,guest-exec-status on the qemu-guest-agent configuration
sed -i 's/\(--allow-rpcs=[^"]*\)/\1,guest-exec-status,guest-exec/' /etc/sysconfig/qemu-ga

# Disable Bracketed-paste mode
echo "set enable-bracketed-paste off" >> /root/.inputrc
EOF
----

. Use the `virt-customize` tool to customize the image generated by the image builder tool:
+
[source,terminal]
----
$ virt-customize -a <UUID>-disk.qcow2 --run=customize-vm --selinux-relabel
----

. To create a Dockerfile that contains all the commands to build the container disk image, enter the following command:
+
[source,terminal]
----
$ cat << EOF > Dockerfile
FROM scratch
COPY --chown=107:107 <UUID>-disk.qcow2 /disk/
EOF
----
+
where:

<UUID>-disk.qcow2:: Specifies the name of the custom image in `qcow2` format.

. Build and tag the container by running the following command:
+
[source,terminal]
----
$ podman build . -t dpdk-rhel:latest
----

. Push the container disk image to a registry that is accessible from your cluster by running the following command:
+
[source,terminal]
----
$ podman push dpdk-rhel:latest
----
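The two `podman` commands above use a bare local tag. In practice the tag must include your registry host so that the push lands somewhere the cluster can pull from. As an illustration only, with `registry.example.com` standing in for your actual registry:

[source,terminal]
----
$ podman tag dpdk-rhel:latest registry.example.com/dpdk-checkup/dpdk-rhel:latest
$ podman push registry.example.com/dpdk-checkup/dpdk-rhel:latest
----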

. Provide a link to the container disk image in the `spec.param.vmUnderTestContainerDiskImage` attribute in the DPDK checkup config map.
@@ -1,245 +0,0 @@
// Module included in the following assemblies:
//
// * virt/monitoring/virt-running-cluster-checkups.adoc

:_mod-docs-content-type: PROCEDURE
[id="virt-checking-cluster-dpdk-readiness_{context}"]
= Running a DPDK checkup by using the CLI

[role="_abstract"]
Use a predefined checkup to verify that your {product-title} cluster node can run a virtual machine (VM) with a Data Plane Development Kit (DPDK) workload with zero packet loss. The DPDK checkup runs traffic between a traffic generator and a VM running a test DPDK application.

You run a DPDK checkup by performing the following steps:

. Create a service account, role, and role bindings for the DPDK checkup.
. Create a config map to provide the input to run the checkup and to store the results.
. Create a job to run the checkup.
. Review the results in the config map.
. Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.
. When you are finished, delete the DPDK checkup resources.

.Prerequisites
* You have installed the OpenShift CLI (`oc`).
* The cluster is configured to run DPDK applications.
* The project is configured to run DPDK applications.

.Procedure

. Create a `ServiceAccount`, `Role`, and `RoleBinding` manifest for the DPDK checkup.
+
Example service account, role, and role binding manifest file:
+
[source,yaml]
----
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dpdk-checkup-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kiagnose-configmap-access
rules:
  - apiGroups: [ "" ]
    resources: [ "configmaps" ]
    verbs: [ "get", "update" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kiagnose-configmap-access
subjects:
  - kind: ServiceAccount
    name: dpdk-checkup-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kiagnose-configmap-access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubevirt-dpdk-checker
rules:
  - apiGroups: [ "kubevirt.io" ]
    resources: [ "virtualmachineinstances" ]
    verbs: [ "create", "get", "delete" ]
  - apiGroups: [ "subresources.kubevirt.io" ]
    resources: [ "virtualmachineinstances/console" ]
    verbs: [ "get" ]
  - apiGroups: [ "" ]
    resources: [ "configmaps" ]
    verbs: [ "create", "delete" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubevirt-dpdk-checker
subjects:
  - kind: ServiceAccount
    name: dpdk-checkup-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubevirt-dpdk-checker
----

. Apply the `ServiceAccount`, `Role`, and `RoleBinding` manifest:
+
[source,terminal]
----
$ oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml
----

. Create a `ConfigMap` manifest that contains the input parameters for the checkup.
+
Example input config map:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: dpdk-checkup-config
  labels:
    kiagnose/checkup-type: kubevirt-dpdk
data:
  spec.timeout: 10m
  spec.param.networkAttachmentDefinitionName: <network_name> <1>
  spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0" <2>
  spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0" <3>
----
<1> The name of the `NetworkAttachmentDefinition` object.
<2> The container disk image for the traffic generator. In this example, the image is pulled from the upstream Project Quay Container Registry.
<3> The container disk image for the VM under test. In this example, the image is pulled from the upstream Project Quay Container Registry.

. Apply the `ConfigMap` manifest in the target namespace:
+
[source,terminal]
----
$ oc apply -n <target_namespace> -f <dpdk_config_map>.yaml
----

. Create a `Job` manifest to run the checkup.
+
Example job manifest:
+
[source,yaml,subs="attributes+"]
----
apiVersion: batch/v1
kind: Job
metadata:
  name: dpdk-checkup
  labels:
    kiagnose/checkup-type: kubevirt-dpdk
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: dpdk-checkup-sa
      restartPolicy: Never
      containers:
        - name: dpdk-checkup
          image: registry.redhat.io/container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v{product-version}.0
          imagePullPolicy: Always
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
            runAsNonRoot: true
            seccompProfile:
              type: "RuntimeDefault"
          env:
            - name: CONFIGMAP_NAMESPACE
              value: <target_namespace>
            - name: CONFIGMAP_NAME
              value: dpdk-checkup-config
            - name: POD_UID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.uid
----

. Apply the `Job` manifest:
+
[source,terminal]
----
$ oc apply -n <target_namespace> -f <dpdk_job>.yaml
----

. Wait for the job to complete:
+
[source,terminal]
----
$ oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m
----
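While the job runs, you can stream the checkup log to watch progress. This is a convenience sketch using the job name from the manifest above:

[source,terminal]
----
$ oc logs job/dpdk-checkup -n <target_namespace> -f
----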

. Review the results of the checkup by running the following command:
+
[source,terminal]
----
$ oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml
----
+
Example output config map (success):
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: dpdk-checkup-config
  labels:
    kiagnose/checkup-type: kubevirt-dpdk
data:
  spec.timeout: 10m
  spec.param.networkAttachmentDefinitionName: "dpdk-network-1"
  spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0"
  spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0"
  status.succeeded: "true" <1>
  status.failureReason: "" <2>
  status.startTimestamp: "2023-07-31T13:14:38Z" <3>
  status.completionTimestamp: "2023-07-31T13:19:41Z" <4>
  status.result.trafficGenSentPackets: "480000000" <5>
  status.result.trafficGenOutputErrorPackets: "0" <6>
  status.result.trafficGenInputErrorPackets: "0" <7>
  status.result.trafficGenActualNodeName: worker-dpdk1 <8>
  status.result.vmUnderTestActualNodeName: worker-dpdk2 <9>
  status.result.vmUnderTestReceivedPackets: "480000000" <10>
  status.result.vmUnderTestRxDroppedPackets: "0" <11>
  status.result.vmUnderTestTxDroppedPackets: "0" <12>
----
<1> Specifies whether the checkup succeeded (`true`) or failed (`false`).
<2> The reason for failure if the checkup fails.
<3> The time when the checkup started, in RFC 3339 time format.
<4> The time when the checkup completed, in RFC 3339 time format.
<5> The number of packets sent from the traffic generator.
<6> The number of error packets sent from the traffic generator.
<7> The number of error packets received by the traffic generator.
<8> The node on which the traffic generator VM was scheduled.
<9> The node on which the VM under test was scheduled.
<10> The number of packets received on the VM under test.
<11> The ingress traffic packets that were dropped by the DPDK application.
<12> The egress traffic packets that were dropped by the DPDK application.
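If you only need the pass/fail outcome rather than the full YAML, a JSONPath query is a handy sketch; the escaped dots are required because the key itself contains periods:

[source,terminal]
----
$ oc get configmap dpdk-checkup-config -n <target_namespace> \
  -o jsonpath='{.data.status\.succeeded}'
----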

. Delete the job and config map that you previously created by running the following commands:
+
[source,terminal]
----
$ oc delete job -n <target_namespace> dpdk-checkup
----
+
[source,terminal]
----
$ oc delete configmap -n <target_namespace> dpdk-checkup-config
----

. Optional: If you do not plan to run another checkup, delete the `ServiceAccount`, `Role`, and `RoleBinding` manifest:
+
[source,terminal]
----
$ oc delete -f <dpdk_sa_roles_rolebinding>.yaml
----
@@ -1,81 +0,0 @@
// Module included in the following assemblies:
//
// * virt/vm_networking/virt-connecting-vm-to-ovn-secondary-network.adoc

:_mod-docs-content-type: PROCEDURE
[id="virt-creating-localnet-nad-cli_{context}"]
= Creating a NAD for localnet topology using the CLI

[role="_abstract"]
You can create a network attachment definition (NAD) that describes how to attach a pod to the underlying physical network.

.Prerequisites
* You have access to the cluster as a user with `cluster-admin` privileges.
* You have installed the OpenShift CLI (`oc`).
* You have installed the Kubernetes NMState Operator.

.Procedure

. Create a `NodeNetworkConfigurationPolicy` object to map the OVN-Kubernetes secondary network to an Open vSwitch (OVS) bridge:
+
[source,yaml]
----
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: mapping # <1>
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: '' # <2>
  desiredState:
    ovn:
      bridge-mappings:
      - localnet: localnet-network # <3>
        bridge: br-ex # <4>
        state: present # <5>
----
<1> The name of the configuration object.
<2> Specifies the nodes to which the node network configuration policy is to be applied. The recommended node selector value is `node-role.kubernetes.io/worker: ''`.
<3> The name of the additional network from which traffic is forwarded to the OVS bridge. This attribute must match the value of the `spec.config.name` field of the `NetworkAttachmentDefinition` object that defines the OVN-Kubernetes additional network.
<4> The name of the OVS bridge on the node. This value is required if the `state` attribute is `present`.
<5> The state of the mapping. Must be either `present` to add the mapping or `absent` to remove the mapping. The default value is `present`.
+
[NOTE]
====
{VirtProductName} does not support Linux bridge bonding modes 0, 5, and 6. For more information, see link:https://access.redhat.com/solutions/67546[Which bonding modes work when used with a bridge that virtual machine guests or containers connect to?].
====

. Create a `NetworkAttachmentDefinition` object:
+
[source,yaml]
----
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: localnet-network
  namespace: default
spec:
  config: |-
    {
      "cniVersion": "0.3.1", <1>
      "name": "localnet-network", <2>
      "type": "ovn-k8s-cni-overlay", <3>
      "topology": "localnet", <4>
      "mtu": 1500, <5>
      "netAttachDefName": "default/localnet-network" <6>
    }
----
<1> The CNI specification version. The required value is `0.3.1`.
<2> The name of the network. This attribute must match the value of the `spec.desiredState.ovn.bridge-mappings.localnet` field of the `NodeNetworkConfigurationPolicy` object that defines the OVS bridge mapping.
<3> The name of the CNI plugin to be configured. The required value is `ovn-k8s-cni-overlay`.
<4> The topological configuration for the network. The required value is `localnet`.
<5> Optional: The maximum transmission unit (MTU) value. If you do not set a value, the Cluster Network Operator (CNO) sets a default MTU value by calculating the difference among the underlay MTU of the primary network interface, the overlay MTU of the pod network, and the byte capacity of any enabled features, such as IPsec.
<6> The value of the `namespace` and `name` fields in the `metadata` stanza of the `NetworkAttachmentDefinition` object.

. Apply the manifest:
+
[source,terminal]
----
$ oc apply -f <filename>.yaml
----
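To confirm that both objects exist after applying the manifests, you can list them by name; a quick sketch using the example names from this procedure:

[source,terminal]
----
$ oc get nncp mapping
$ oc get network-attachment-definitions -n default
----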
@@ -1,32 +0,0 @@
// Module included in the following assemblies:
//
// * virt/vm_networking/virt-connecting-vm-to-ovn-secondary-network.adoc

:_mod-docs-content-type: PROCEDURE
[id="virt-creating-nad-localnet-console_{context}"]
= Creating a NAD for localnet topology using the web console

[role="_abstract"]
You can create a network attachment definition (NAD) to connect workloads to a physical network by using the {product-title} web console.

.Prerequisites
* You have access to the cluster as a user with `cluster-admin` privileges.
* You have configured the localnet to OVS bridge mappings by using `nmstate`.

.Procedure

. Navigate to *Networking* -> *NetworkAttachmentDefinitions* in the web console.

. Click *Create Network Attachment Definition*. The network attachment definition must be in the same namespace as the pod or virtual machine that uses it.

. Enter a unique *Name* and optional *Description*.

. Select *OVN Kubernetes secondary localnet network* from the *Network Type* list.

. Enter the name of your preconfigured localnet identifier in the *Bridge mapping* field.

. Optional: Set an explicit MTU value. If you do not set a value, the kernel chooses a default.

. Optional: Encapsulate the traffic in a VLAN. The default value is none.

. Click *Create*.
@@ -1,47 +0,0 @@
// Module included in the following assemblies:
//
// * virt/vm_templates/virt-deploying-vm-template-to-custom-namespace.adoc

:_mod-docs-content-type: PROCEDURE

[id="virt-deleting-templates-from-custom-namespace_{context}"]
= Deleting templates from a custom namespace

To delete virtual machine templates from a custom namespace, remove the `commonTemplatesNamespace` attribute from the `HyperConverged` custom resource (CR) and delete each template from that custom namespace.

.Procedure

. Edit the `HyperConverged` CR in your default editor by running the following command:
+
[source,terminal,subs="attributes+"]
----
$ oc edit hco -n {CNVNamespace} kubevirt-hyperconverged
----

. Remove the `commonTemplatesNamespace` attribute:
+
[source,yaml]
----
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  commonTemplatesNamespace: <custom_namespace> <1>
----
<1> Delete this attribute from the `spec` stanza.

. Delete a specific template from the custom namespace:
+
[source,terminal]
----
$ oc delete templates -n <custom_namespace> <template_name>
----
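If you need to clear out every synced template rather than a specific one, a label selector can save repetition. This sketch assumes the common templates carry the `template.kubevirt.io/type: base` label; confirm with `oc get templates --show-labels` before relying on it:

[source,terminal]
----
$ oc delete templates -n <custom_namespace> -l template.kubevirt.io/type=base
----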

.Verification

* Verify that the template was deleted from the custom namespace:
+
[source,terminal]
----
$ oc get templates -n <custom_namespace>
----
@@ -1,58 +0,0 @@
// Module included in the following assemblies:
//
// * virt/monitoring/virt-running-cluster-checkups.adoc

:_mod-docs-content-type: REFERENCE
[id="virt-dpdk-config-map-parameters_{context}"]
= DPDK checkup config map parameters

[role="_abstract"]
The following table shows the mandatory and optional parameters that you can set in the `data` stanza of the input `ConfigMap` manifest when you run a cluster DPDK readiness checkup.

.DPDK checkup config map input parameters
[cols="1,1,1", options="header"]
|====
|Parameter
|Description
|Is Mandatory

|`spec.timeout`
|The time, in minutes, before the checkup fails.
|True

|`spec.param.networkAttachmentDefinitionName`
|The name of the `NetworkAttachmentDefinition` object of the connected SR-IOV NICs.
|True

|`spec.param.trafficGenContainerDiskImage`
|The container disk image for the traffic generator.
|True

|`spec.param.trafficGenTargetNodeName`
|The node on which the traffic generator VM is to be scheduled. The node should be configured to allow DPDK traffic.
|False

|`spec.param.trafficGenPacketsPerSecond`
|The number of packets per second, in kilo (k) or million (m). The default value is 8m.
|False

|`spec.param.vmUnderTestContainerDiskImage`
|The container disk image for the VM under test.
|True

|`spec.param.vmUnderTestTargetNodeName`
|The node on which the VM under test is to be scheduled. The node should be configured to allow DPDK traffic.
|False

|`spec.param.testDuration`
|The duration, in minutes, for which the traffic generator runs. The default value is 5 minutes.
|False

|`spec.param.portBandwidthGbps`
|The maximum bandwidth of the SR-IOV NIC. The default value is 10 Gbps.
|False

|`spec.param.verbose`
|When set to `true`, it increases the verbosity of the checkup log. The default value is `false`.
|False
|====
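For orientation only, the following sketch shows an input config map that combines the mandatory parameters with several optional ones from the table. The node names, network name, and the chosen optional values are illustrative placeholders, not recommendations:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: dpdk-checkup-config
  labels:
    kiagnose/checkup-type: kubevirt-dpdk
data:
  spec.timeout: 10m
  spec.param.networkAttachmentDefinitionName: <network_name>
  spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0"
  spec.param.trafficGenTargetNodeName: <node_name_1>
  spec.param.trafficGenPacketsPerSecond: 6m
  spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0"
  spec.param.vmUnderTestTargetNodeName: <node_name_2>
  spec.param.testDuration: 3m
  spec.param.portBandwidthGbps: "25"
  spec.param.verbose: "true"
----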
@@ -48,7 +48,6 @@ endif::openshift-rosa,openshift-dedicated[]
|===

[id="virt-storage-wizard-fields-advanced-web_{context}"]
[discrete]
== Advanced storage settings

The following advanced storage settings are optional and available for *Blank*, *Import via URL*, and *Clone existing PVC* disks.
@@ -66,7 +65,8 @@ If you do not specify these parameters, the system uses the default storage prof
|Block
|Stores the virtual disk directly on the block volume. Only use `Block` if the underlying storage supports it.

.2+|Access Mode
.3+|Access Mode

|ReadWriteOnce (RWO)
|Volume can be mounted as read-write by a single node.
|ReadWriteMany (RWX)
@@ -75,7 +75,6 @@ If you do not specify these parameters, the system uses the default storage prof
====
This mode is required for live migration.
====

|ReadOnlyMany (ROX)
|Volume can be mounted as read-only by many nodes.
|===
@@ -1,54 +0,0 @@
// Module included in the following assemblies:
//
// * virt/post_installation_configuration/virt-configuring-higher-vm-workload-density.adoc

:_mod-docs-content-type: CONCEPT
[id="virt-wasp-agent-pod-eviction_{context}"]
= Pod eviction conditions used by wasp-agent

[role="_abstract"]
The wasp agent manages pod eviction when the system is heavily loaded and nodes are at risk. Eviction triggers if one of the following conditions occurs:

High swap I/O traffic::

This condition occurs when swap-related I/O traffic is excessively high.
+
Condition:
+
[source,text]
----
averageSwapInPerSecond > maxAverageSwapInPagesPerSecond
&&
averageSwapOutPerSecond > maxAverageSwapOutPagesPerSecond
----
+
By default, the `maxAverageSwapInPagesPerSecond` and `maxAverageSwapOutPagesPerSecond` values are 1000 pages. The default time interval for calculating the average is 30 seconds.

High swap utilization::

This condition occurs when swap utilization is excessively high, causing the current virtual memory usage to exceed the factored threshold. The `NODE_SWAP_SPACE` setting in your `MachineConfig` object can impact this condition.
+
Condition:
+
[source,text]
----
nodeWorkingSet + nodeSwapUsage > (totalNodeMemory + totalSwapMemory) × thresholdFactor
----

[id="environment-variables_{context}"]
== Environment variables

You can use the following environment variables to adjust the values used to calculate eviction conditions:

[cols="1,1"]
|===
|*Environment variable* |*Function*
|`MAX_AVERAGE_SWAP_IN_PAGES_PER_SECOND`
|Sets the value of `maxAverageSwapInPagesPerSecond`.
|`MAX_AVERAGE_SWAP_OUT_PAGES_PER_SECOND`
|Sets the value of `maxAverageSwapOutPagesPerSecond`.
|`SWAP_UTILIZATION_THRESHOLD_FACTOR`
|Sets the `thresholdFactor` value used to calculate high swap utilization.
|`AVERAGE_WINDOW_SIZE_SECONDS`
|Sets the time interval for calculating the average swap usage.
|===
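These variables are set on the wasp-agent workload itself. As a sketch only, assuming the agent runs as a DaemonSet named `wasp-agent` in the `wasp` namespace (verify the actual names in your deployment), you could widen the averaging window like this:

[source,terminal]
----
$ oc set env daemonset/wasp-agent -n wasp AVERAGE_WINDOW_SIZE_SECONDS=60
----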
@@ -13,7 +13,7 @@ A `layer2` topology connects workloads by a cluster-wide logical switch. The OVN
endif::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[]

ifndef::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[]
You can connect a virtual machine (VM) to an OVN-Kubernetes `layer2` secondary network by using the CLI.

A `layer2` topology connects workloads by a cluster-wide logical switch. The OVN-Kubernetes Container Network Interface (CNI) plugin uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure.
@@ -41,16 +41,8 @@ Configuring IP address management (IPAM) by specifying the `spec.config.ipam.sub

include::modules/virt-creating-layer2-nad-cli.adoc[leveloffset=+2]

//ifndef::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[]
//include::modules/virt-creating-localnet-nad-cli.adoc[leveloffset=+2]
//endif::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[]

include::modules/virt-creating-nad-l2-overlay-console.adoc[leveloffset=+2]

//ifndef::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[]
//include::modules/virt-creating-nad-localnet-console.adoc[leveloffset=+2]
//endif::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[]

[id="attaching-vm-to-ovn-secondary-nw"]
== Attaching a virtual machine to the OVN-Kubernetes layer 2 secondary network
@@ -58,7 +50,6 @@ You can attach a virtual machine (VM) to the OVN-Kubernetes layer 2 secondary ne

include::modules/virt-attaching-vm-to-ovn-secondary-nw-cli.adoc[leveloffset=+2]


ifndef::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[]
[role="_additional-resources"]
[id="additional-resources_virt-connecting-vm-to-ovn-secondary-network"]