// Module included in the following assemblies:
//
// * installing/installing_aws/ipi/installing-aws-outposts.adoc
//To-do: reintegrate installation-extend-edge-nodes-aws-local-zones.adoc with create-user-workloads-aws-edge.adoc. Requires global repo update of any xrefs/includes.
:_mod-docs-content-type: PROCEDURE
[id="create-user-workloads-aws-edge_{context}"]
= Creating user workloads in an Outpost

After you extend an {product-title} cluster in an AWS VPC into an Outpost, you can use edge compute nodes with the label `node-role.kubernetes.io/outposts` to create user workloads in the Outpost.

.Prerequisites

* You have extended an AWS VPC cluster into an Outpost.
* You have access to the cluster using an account with `cluster-admin` permissions.
* You have installed the {oc-first}.
* You have created a compute machine set that deploys edge compute machines compatible with the Outpost environment.

.Procedure
. Configure a `Deployment` resource file for an application that you want to deploy to the edge compute node in the edge subnet.
+
.Example `Deployment` manifest
[source,yaml]
----
kind: Namespace
apiVersion: v1
metadata:
  name: <application_name> # <1>
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: <application_name>
  namespace: <application_namespace> # <2>
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp2-csi # <3>
  volumeMode: Filesystem
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <application_name>
  namespace: <application_namespace>
spec:
  selector:
    matchLabels:
      app: <application_name>
  replicas: 1
  template:
    metadata:
      labels:
        app: <application_name>
        location: outposts # <4>
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      nodeSelector: # <5>
        node-role.kubernetes.io/outposts: ''
      tolerations: # <6>
        - key: "node-role.kubernetes.io/outposts"
          operator: "Equal"
          value: ""
          effect: "NoSchedule"
      containers:
        - image: openshift/origin-node
          command:
            - "/bin/socat"
          args:
            - TCP4-LISTEN:8080,reuseaddr,fork
            - EXEC:'/bin/bash -c \"printf \\\"HTTP/1.0 200 OK\r\n\r\n\\\"; sed -e \\\"/^\r/q\\\"\"'
          imagePullPolicy: Always
          name: <application_name>
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: "/mnt/storage"
              name: data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: <application_name>
----
<1> Specify a name for your application.
<2> Specify a namespace for your application. The application namespace can be the same as the application name.
<3> Specify the storage class name. For an edge compute configuration, you must use the `gp2-csi` storage class.
<4> Specify a label to identify workloads deployed in the Outpost.
<5> Specify the node selector label that targets edge compute nodes.
<6> Specify tolerations that match the `key` and `effect` of the taint in the compute machine set for your edge compute machines, as shown in the snippet that follows this list. Set the `value` and `operator` fields of the toleration as shown.
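+
For reference, the tolerations in this example are written to match a taint on the edge compute machine set that is similar to the following snippet. Verify the exact `key` and `effect` values in the compute machine set that you created for the Outpost:
+
[source,yaml]
----
taints:
  - key: node-role.kubernetes.io/outposts
    effect: NoSchedule
----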
. Create the `Deployment` resource by running the following command:
+
[source,terminal]
----
$ oc create -f <application_deployment>.yaml
----
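+
Optional: You can confirm that the application pod is scheduled on an edge compute node by listing the pods in the application namespace with wide output, which includes the node name. The command assumes the `<application_namespace>` value from the example manifest:
+
[source,terminal]
----
$ oc get pods -n <application_namespace> -o wide
----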
. Configure a `Service` object that exposes a pod from a targeted edge compute node to services that run inside your edge network.
+
.Example `Service` manifest
[source,yaml]
----
apiVersion: v1
kind: Service # <1>
metadata:
  name: <application_name>
  namespace: <application_namespace>
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: NodePort
  selector: # <2>
    app: <application_name>
----
<1> Defines the `Service` resource.
<2> Specify the label that the service uses to select the pods that it manages.

. Create the `Service` object by running the following command:
+
[source,terminal]
----
$ oc create -f <application_service>.yaml
----
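+
Optional: You can confirm that the service exists and view the node port that it exposes by listing the service in the application namespace. The command assumes the `<application_name>` and `<application_namespace>` values from the example manifest:
+
[source,terminal]
----
$ oc get service <application_name> -n <application_namespace>
----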