
OSDOCS-10770

This commit is contained in:
Janelle Neczypor
2024-10-25 11:35:42 -07:00
committed by openshift-cherrypick-robot
parent e0552707ef
commit 2d76ebcb58
20 changed files with 1211 additions and 11 deletions


@@ -88,6 +88,22 @@ Topics:
Topics:
- Name: Workshop overview
File: learning-lab-overview
- Name: Deployment
File: cloud-experts-deploying-application-deployment
- Name: Health Check
File: cloud-experts-deploying-application-health-check
- Name: Storage
File: cloud-experts-deploying-application-storage
- Name: ConfigMap, secrets, and environment variables
File: cloud-experts-deploying-configmaps-secrets-env-var
- Name: Networking
File: cloud-experts-deploying-application-networking
- Name: Scaling an application
File: cloud-experts-deploying-application-scaling
- Name: S2I deployments
File: cloud-experts-deploying-application-s2i-deployments
- Name: Using Source-to-Image (S2I) webhooks for automated deployment
File: cloud-experts-deploying-s2i-webhook-cicd
# ---
# Name: Architecture
# Dir: architecture


@@ -1,6 +1,6 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-getting-started-accessing"]
= Tutorial: Accessing your cluster
= Accessing your cluster
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-getting-started-accessing


@@ -1,6 +1,6 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-getting-started-admin-rights"]
= Tutorial: Granting admin privileges
= Granting admin privileges
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-getting-started-admin-rights


@@ -1,6 +1,6 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-getting-started-admin"]
= Tutorial: Creating an admin user
= Creating an admin user
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-getting-started-admin


@@ -1,6 +1,6 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-getting-started-autoscaling"]
= Tutorial: Autoscaling
= Autoscaling
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-getting-started-autoscaling


@@ -1,6 +1,6 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-getting-started-deleting"]
= Tutorial: Deleting your cluster
= Deleting your cluster
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-getting-started-deleting


@@ -1,6 +1,6 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-getting-started-hcp-guide"]
= Workshop: Creating a cluster
= Creating a cluster
include::_attributes/attributes-openshift-dedicated.adoc[]
include::_attributes/common-attributes.adoc[]
:context: cloud-experts-getting-started-hcp


@@ -1,6 +1,6 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-getting-started-idp"]
= Tutorial: Setting up an identity provider
= Setting up an identity provider
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-getting-started-idp


@@ -1,6 +1,6 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-getting-started-managing-worker-nodes"]
= Tutorial: Managing worker nodes
= Managing worker nodes
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-getting-started-managing-worker-nodes


@@ -1,6 +1,6 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-getting-started-support"]
= Tutorial: Obtaining support
= Obtaining support
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-getting-started-support


@@ -1,6 +1,6 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-getting-started-upgrading"]
= Tutorial: Upgrading your cluster
= Upgrading your cluster
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-getting-started-upgrading


@@ -0,0 +1,184 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-deploying-application-deployment"]
= Deploying the OSToy application with Kubernetes
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-deploying-application-deployment
toc::[]
//rosaworkshop.io content metadata
//Brought into ROSA product docs 23-JAN-2024
Deploying an application involves creating a container image, storing it in an image repository, and defining a `Deployment` object that uses that image.
This process includes the following steps:
* Create the images for the front-end and back-end microservice containers
* Store the container images in an image repository
* Create the Kubernetes Deployment object for the application
* Deploy the application
[NOTE]
====
Because this workshop focuses on application deployment, you apply a remotely hosted manifest file that uses an existing image instead of building your own container images.
====
[id="retrieving-login_deploying-application-deployment"]
== Retrieving the login command
.Procedure
. If you are not logged in to the command-line interface (CLI), access your cluster with the web console.
. Click the dropdown arrow next to your login name in the upper right corner, and select *Copy Login Command*.
+
image::4-cli-login.png[CLI login screen]
+
A new tab opens.
. Select your authentication method.
. Click *Display Token*.
. Copy the command under *Log in with this token*.
. From your terminal, paste and run the copied command. If the login is successful, you will see the following confirmation message:
+
[source,terminal]
----
$ oc login --token=<your_token> --server=https://api.osd4-demo.abc1.p1.openshiftapps.com:6443
Logged into "https://api.myrosacluster.abcd.p1.openshiftapps.com:6443" as "rosa-user" using the token provided.
You don't have any projects. You can try to create a new project, by running
oc new-project <project name>
----
[id="creating-new-project_deploying-application-deployment"]
== Creating a new project
Use your preferred interface to create a new project.
[id="new-project-cli_deploying-application-deployment"]
=== Creating a new project using the CLI
.Procedure
. Create a new project named `ostoy` in your cluster by running the following command:
+
[source,terminal]
----
$ oc new-project ostoy
----
+
.Example output
[source,terminal]
----
Now using project "ostoy" on server "https://api.myrosacluster.abcd.p1.openshiftapps.com:6443".
----
** *Optional*: Create a unique project name by running the following command:
+
[source,terminal]
----
$ oc new-project ostoy-$(uuidgen | cut -d - -f 2 | tr '[:upper:]' '[:lower:]')
----
[id="new-project-ui_deploying-application-deployment"]
=== Creating a new project using the web console
.Procedure
. From the web console, click *Home -> Projects*.
. On the *Projects* page, click *Create Project*.
+
image::4-createnewproj.png[The project creation screen]
. In the *Create Project* box, enter a project name in the *Name* field.
. Click *Create*.
[id="backend-microservice_deploying-application-deployment"]
== Deploying the back-end microservice
The microservice serves internal web requests and returns a JSON object containing the current hostname and a randomly generated color string.
.Procedure
* Deploy the microservice by running the following command:
+
[source,terminal]
----
$ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-microservice-deployment.yaml
----
+
.Example output
[source,terminal]
----
$ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-microservice-deployment.yaml
deployment.apps/ostoy-microservice created
service/ostoy-microservice-svc created
----
[id="frontend-microservice_deploying-application-deployment"]
== Deploying the front-end microservice
The front-end deployment uses the Node.js front end for the application and additional Kubernetes objects.
The front-end deployment defines the following objects:
* Persistent volume claim
* Deployment object
* Service
* Route
* ConfigMaps
* Secrets
.Procedure
* Deploy the application front-end and create the objects by running the following command:
+
[source,terminal]
----
$ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-frontend-deployment.yaml
----
+
.Example output
[source,terminal]
----
persistentvolumeclaim/ostoy-pvc created
deployment.apps/ostoy-frontend created
service/ostoy-frontend-svc created
route.route.openshift.io/ostoy-route created
configmap/ostoy-configmap-env created
secret/ostoy-secret-env created
configmap/ostoy-configmap-files created
secret/ostoy-secret created
----
+
All objects should be created successfully.
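Optionally, you can confirm from the CLI that the front-end and back-end pods reach a `Running` state. The following check is a minimal sketch that uses standard commands and the project created earlier:
[source,terminal]
----
$ oc get pods -n ostoy
----
Both the `ostoy-frontend` and `ostoy-microservice` pods should report a `Running` status.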
[id="obtain-route_deploying-application-deployment"]
== Obtaining the route to your application
Obtain the route to access the application.
.Procedure
* Get the route to your application by running the following command:
+
[source,terminal]
----
$ oc get route
----
+
.Example output
[source,terminal]
----
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
ostoy-route ostoy-route-ostoy.apps.<your-rosa-cluster>.abcd.p1.openshiftapps.com ostoy-frontend-svc <all> None
----
[id="viewing-application_deploying-application-deployment"]
== Viewing the application
.Procedure
. Copy the `ostoy-route-ostoy.apps.<your-rosa-cluster>.abcd.p1.openshiftapps.com` URL output from the previous step.
. Paste the copied URL into your web browser and press Enter. You should see the homepage of your application. If the page does not load, make sure you use `http` and not `https`.
+
image::4-ostoy-homepage.png[OStoy application homepage]
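If you prefer the CLI, the following is one possible way to fetch the homepage without a browser. It assumes the `ostoy-route` route created in the previous steps:
[source,terminal]
----
$ curl -s "http://$(oc get route ostoy-route -o jsonpath='{.spec.host}')"
----
The command prints the HTML of the OSToy homepage if the route is reachable.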


@@ -0,0 +1,78 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-deploying-application-health-check"]
= Health check
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-deploying-application-health-check
toc::[]
//rosaworkshop.io content metadata
//Brought into ROSA product docs 2024-01-26
See how Kubernetes responds to pod failure by intentionally crashing your pod and making it unresponsive to the Kubernetes liveness probes.
[id="prepare_deploying-application-health-check"]
== Preparing your desktop
.Procedure
* From the OpenShift web console, select *Workloads > Deployments > ostoy-frontend* to view the OSToy deployment.
+
image::5-ostoy-deployview.png[The web console deployments page]
[id="crash-pod_deploying-application-health-check"]
== Crashing the pod
.Procedure
. From the OSToy application web console, click *Home* in the left menu, and enter a message in the *Crash Pod* box, for example, `This is goodbye!`.
. Click *Crash Pod*.
+
image::5-ostoy-crashpod.png[OSToy crash pod selection]
+
The pod crashes and Kubernetes restarts the pod.
+
image::5-ostoy-crashmsg.png[OSToy pod crash message]
[id="view-pod_deploying-application-health-check"]
== Viewing the revived pod
.Procedure
* From the OpenShift web console, quickly switch to the *Deployments* screen. You will see that the pod turns yellow, which means it is down. It should quickly revive and turn blue.
+
image::5-ostoy-podcrash.gif[Deployment details page]
.Verification
. From the web console, click *Pods > ostoy-frontend-xxxxxxx-xxxx* to change to the pods screen.
+
image::5-ostoy-events.png[Pod overview page]
. Click the *Events* subtab, and verify that the container crashed and restarted.
+
image::5-ostoy-podevents.png[Pod events list]
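You can verify the same information from the CLI. The following commands are a sketch; substitute the name of your own front-end pod:
[source,terminal]
----
$ oc get pods
$ oc describe pod <ostoy_frontend_pod_name>
----
The `RESTARTS` column in the `oc get pods` output increments after the crash, and the `Events` section of the `oc describe pod` output lists the container restart.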
[id="forced-malfunction_deploying-application-health-check"]
== Making the application malfunction
.Procedure
. Keep the pod events page open.
. From the OSToy application, click *Toggle Health* in the *Toggle Health Status* tile. Watch *Current Health* switch to *I'm not feeling all that well*.
+
image::5-ostoy-togglehealth.png[OSToy toggle health tile]
.Verification
After you make the application malfunction, the application stops responding with a `200` HTTP code. After three consecutive failures, Kubernetes stops the pod and restarts it.
* From the web console, switch back to the pod events page to see that the liveness probe failed and the pod restarted.
The following image shows an example of what you will see on your pod events page.
image::5-ostoy-podevents2.png[Pod events list]
*A.* The pod has three consecutive failures.
*B.* Kubernetes stops the pod.
*C.* Kubernetes restarts the pod.
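The three-failure behavior comes from the liveness probe defined on the front-end deployment. The following snippet is an illustrative sketch of such a probe; the exact paths and values in `ostoy-frontend-deployment.yaml` might differ:
[source,yaml]
----
livenessProbe:
  httpGet:
    path: /health   # illustrative health endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 3   # after three consecutive failed checks, the kubelet restarts the container
----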


@@ -0,0 +1,63 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-deploying-application-networking"]
= Networking
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-deploying-application-networking
toc::[]
//rosaworkshop.io content metadata
//Brought into ROSA product docs 2023-12-14
The OSToy application uses intra-cluster networking to separate functions by using microservices.
image::deploying-networking-arch.png[OSToy Diagram]
In this workshop, there are at least two separate pods, each with its own service. One pod functions as the front-end web application with a service and a publicly accessible route. The other pod functions as the back-end microservice with a service object so that the front-end pod can communicate with the microservice.
Communication occurs across the pods if there is more than one pod. The microservice is not accessible from outside the cluster or from other namespaces or projects. The purpose of the microservice is to serve internal web requests and return a JSON object containing the current hostname (the pod's name) and a randomly generated color string. This color string displays a box with that color on the OSToy application web console.
For more information about the networking limitations, see link:https://docs.openshift.com/rosa/networking/network_security/network_policy/about-network-policy.html[About network policy].
[id="intraculter-networking_deploying-application-networking"]
== Intra-cluster networking
You can view your networking configurations in your OSToy application.
.Procedure
. In the OSToy application web console, click *Networking* in the left menu.
. Review the networking configuration. The *Hostname Lookup* tile illustrates how the service name created for a pod translates into an internal ClusterIP address.
+
image::deploying-networking-example.png[OSToy Networking page]
. Enter the name of the microservice in the *Hostname Lookup* tile by using the format `<service_name>.<namespace>.svc.cluster.local`. You can find the microservice name in the service definition of `ostoy-microservice.yaml` by running the following command:
+
[source,terminal]
----
$ oc get service <name_of_service> -o yaml
----
+
.Example output
[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: ostoy-microservice-svc
  labels:
    app: ostoy-microservice
spec:
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: ostoy-microservice
----
+
In this example, the full hostname is `ostoy-microservice-svc.ostoy.svc.cluster.local`.
. An IP address is returned. In this example, it is `172.30.165.246`. This is the intra-cluster IP address, which is only accessible from within the cluster.
+
image::deploying-networking-dns.png[OSToy DNS]
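You can also confirm the ClusterIP address from the CLI. The following command is a minimal example that uses the service name from the example above:
[source,terminal]
----
$ oc get service ostoy-microservice-svc -o jsonpath='{.spec.clusterIP}{"\n"}'
----
The returned address matches the IP address shown on the OSToy *Networking* page and is reachable only from within the cluster.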


@@ -0,0 +1,241 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-deploying-application-s2i-deployments"]
= S2I deployments
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-deploying-application-s2i-deployments
toc::[]
The integrated Source-to-Image (S2I) builder is one method to deploy applications in OpenShift. S2I is a tool for building reproducible, Docker-formatted container images. For more information, see link:https://file.rdu.redhat.com/eponvell/OSDOCS-8400_Migrate-OpenShift-Concepts/cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-getting-started-openshift-concepts.html#source-to-image-s2i[OpenShift concepts].
[id="prereqs_deploying-application-s2i-deployments"]
.Prerequisites
* A ROSA cluster
[id="retrieving-login_deploying-application-s2i-deployments"]
== Retrieving your login command
.Procedure
. If you are not logged in to the command line interface (CLI), in {cluster-manager-url}, click the dropdown arrow next to your name in the upper-right and select *Copy Login Command*.
+
image::ostoy-cli-login.png[CLI Login]
. A new tab opens. Enter your username and password, and select the authentication method.
. Click *Display Token*.
. Copy the command under "Log in with this token".
. Log in to the CLI by running the copied command in your terminal.
+
.Example input
[source,terminal]
----
$ oc login --token=RYhFlXXXXXXXXXXXX --server=https://api.osd4-demo.abc1.p1.openshiftapps.com:6443
----
+
.Example output
[source,terminal]
----
Logged into "https://api.myrosacluster.abcd.p1.openshiftapps.com:6443" as "rosa-user" using the token provided.
You don't have any projects. You can try to create a new project, by running
oc new-project <project name>
----
. Create a new project from the CLI by running the following command:
+
[source,terminal]
----
$ oc new-project ostoy-s2i
----
[id="fork-repo_deploying-application-s2i-deployments"]
== Forking the OSToy repository
To trigger automated builds when you change the source code, you must set up a GitHub webhook. The webhook triggers S2I builds when you push code into your GitHub repository. To set up the webhook, you must first fork link:https://github.com/openshift-cs/ostoy/fork[the repository].
[IMPORTANT]
====
Replace `<UserName>` with your own GitHub username for the following URLs in this guide.
====
[id="deploy-to-cluster_deploying-application-s2i-deployments"]
== Using S2I to deploy OSToy on your cluster
.Procedure
. Add a secret to OpenShift.
+
This example emulates a `.env` file. Files are easily moved directly into an OpenShift environment and can even be renamed in the secret.
** Run the following command, replacing `<UserName>` with your GitHub username:
+
[source,terminal]
----
$ oc create -f https://raw.githubusercontent.com/<UserName>/ostoy/master/deployment/yaml/secret.yaml
----
. Add a ConfigMap to OpenShift.
+
This example emulates an HAProxy config file, which is typically used for overriding default configurations in an OpenShift application. Files can be renamed in the ConfigMap.
** Run the following command, replacing `<UserName>` with your GitHub username:
+
[source,terminal]
----
$ oc create -f https://raw.githubusercontent.com/<UserName>/ostoy/master/deployment/yaml/configmap.yaml
----
. Deploy the microservice.
+
You must deploy the microservice to ensure that the service environment variables are available from the UI application.
The `--context-dir` flag builds the application defined in the `microservice` directory in the Git repository. The `app` label ensures that the user interface (UI) application and the microservice are both grouped in the OpenShift UI.
** Run the following command to create the microservice, replacing `<UserName>` with your GitHub username:
+
[source,terminal]
----
$ oc new-app https://github.com/<UserName>/ostoy \
--context-dir=microservice \
--name=ostoy-microservice \
--labels=app=ostoy
----
+
.Example output
[source,terminal]
----
--> Creating resources with label app=ostoy ...
imagestream.image.openshift.io "ostoy-microservice" created
buildconfig.build.openshift.io "ostoy-microservice" created
deployment.apps "ostoy-microservice" created
service "ostoy-microservice" created
--> Success
Build scheduled, use 'oc logs -f buildconfig/ostoy-microservice' to track its progress.
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose service/ostoy-microservice'
Run 'oc status' to view your app.
----
. Check the status of the microservice.
.. Check that the microservice was created and is running correctly by running the following command:
+
[source,terminal]
----
$ oc status
----
+
.Example output
[source,terminal]
----
In project ostoy-s2i on server https://api.myrosacluster.g14t.p1.openshiftapps.com:6443
svc/ostoy-microservice - 172.30.47.74:8080
dc/ostoy-microservice deploys istag/ostoy-microservice:latest <-
bc/ostoy-microservice source builds https://github.com/UserName/ostoy on openshift/nodejs:14-ubi8
deployment #1 deployed 34 seconds ago - 1 pod
----
+
Wait until you see that the microservice was successfully deployed. You can also check this through the web UI.
. Deploy the front-end UI.
+
The application relies on several environment variables to define external settings.
.. Create the front-end application, setting the name of the microservice through an environment variable, by running the following command:
+
[source,terminal]
----
$ oc new-app https://github.com/<UserName>/ostoy \
--env=MICROSERVICE_NAME=OSTOY_MICROSERVICE
----
+
.Example output
+
[source,terminal]
----
--> Creating resources ...
imagestream.image.openshift.io "ostoy" created
buildconfig.build.openshift.io "ostoy" created
deployment.apps "ostoy" created
service "ostoy" created
--> Success
Build scheduled, use 'oc logs -f buildconfig/ostoy' to track its progress.
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose service/ostoy'
Run 'oc status' to view your app.
----
. Update the deployment by running the following command:
+
[source,terminal]
----
$ oc patch deployment ostoy --type=json -p \
'[{"op": "replace", "path": "/spec/strategy/type", "value": "Recreate"}, {"op": "remove", "path": "/spec/strategy/rollingUpdate"}]'
----
. Set a liveness probe.
+
Create a liveness probe to ensure the pod restarts if something is wrong in the application.
.. Run the following command:
+
[source,terminal]
----
$ oc set probe deployment ostoy --liveness --get-url=http://:8080/health
----
. Attach the secret, ConfigMap, and persistent volume to the deployment.
+
.. Run the following command to attach your secret:
+
[source,terminal]
----
$ oc set volume deployment ostoy --add \
--secret-name=ostoy-secret \
--mount-path=/var/secret
----
+
.. Run the following command to attach your ConfigMap:
+
[source,terminal]
----
$ oc set volume deployment ostoy --add \
--configmap-name=ostoy-config \
-m /var/config
----
.. Run the following command to create and attach your persistent volume:
+
[source,terminal]
----
$ oc set volume deployment ostoy --add \
--type=pvc \
--claim-size=1G \
-m /var/demo_files
----
. Expose the UI application as an OpenShift Route.
.. Run the following command to deploy the application as an HTTPS application that uses the included TLS wildcard certificates:
+
[source,terminal]
----
$ oc create route edge --service=ostoy --insecure-policy=Redirect
----
. Browse to your application by using one of the following methods:
** Run the following command to open a web browser with your OSToy application:
+
[source,terminal]
----
$ python -m webbrowser "$(oc get route ostoy -o template --template='https://{{.spec.host}}')"
----
** Run the following command to get the route for the application and copy and paste the route into your browser:
+
[source,terminal]
----
$ oc get route
----


@@ -0,0 +1,305 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-deploying-application-scaling"]
= Scaling an application
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-deploying-application-scaling
:source-highlighter: pygments
:pygments-style: emacs
:icons: font
toc::[]
//rosaworkshop.io content metadata
//Brought into ROSA product docs 2024-04-10
You can scale your pods manually, or automatically by using the Horizontal Pod Autoscaler (HPA). You can also scale your cluster nodes.
[id="prereqs_deploying-application-scaling"]
.Prerequisites
* An active ROSA cluster
* A deployed OSToy application
[id="manual-pod_deploying-application-scaling"]
== Manual pod scaling
You can manually scale your application's pods by using one of the following methods:
* Changing your ReplicaSet or deployment definition
* Using the command line
* Using the web console
This workshop starts by using only one pod for the microservice. By defining `replicas: 1` in your deployment definition, Kubernetes keeps one microservice pod alive. You then learn how to define pod autoscaling by using the link:https://docs.openshift.com/container-platform/latest/nodes/pods/nodes-pods-autoscaling.html[Horizontal Pod Autoscaler] (HPA), which scales out more pods when necessary based on load.
.Procedure
. In the OSToy app, click the *Networking* tab in the navigational menu.
. In the "Intra-cluster Communication" section, locate the box that randomly changes colors. Inside the box, you see the microservice's pod name. There is only one box in this example because there is only one microservice pod.
+
image::deploy-scale-network.png[HPA Menu]
+
. Confirm that there is only one pod running for the microservice by running the following command:
+
[source,terminal,highlight='4']
----
$ oc get pods
----
+
.Example output
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
ostoy-frontend-679cb85695-5cn7x 1/1 Running 0 1h
ostoy-microservice-86b4c6f559-p594d 1/1 Running 0 1h
----
. Download the link:https://www.rosaworkshop.io/ostoy/yaml/ostoy-microservice-deployment.yaml[ostoy-microservice-deployment.yaml] file and save it to your local machine.
. Change the deployment definition to three pods instead of one by using the following example:
+
[source,yaml]
----
spec:
  selector:
    matchLabels:
      app: ostoy-microservice
  replicas: 3
----
. Apply the replica changes by running the following command:
+
[source,terminal]
----
$ oc apply -f ostoy-microservice-deployment.yaml
----
+
[NOTE]
====
You can also edit the `ostoy-microservice-deployment.yaml` file in the OpenShift Web Console by going to the *Workloads > Deployments > ostoy-microservice > YAML* tab.
====
. Confirm that there are now 3 pods by running the following command:
+
[source,terminal]
----
$ oc get pods
----
+
The output shows that there are now 3 pods for the microservice instead of only one.
+
.Example output
+
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
ostoy-frontend-5fbcc7d9-rzlgz 1/1 Running 0 26m
ostoy-microservice-6666dcf455-2lcv4 1/1 Running 0 81s
ostoy-microservice-6666dcf455-5z56w 1/1 Running 0 81s
ostoy-microservice-6666dcf455-tqzmn 1/1 Running 0 26m
----
. Scale the application by using the command line interface (CLI) or by using the web user interface (UI):
+
** In the CLI, decrease the number of pods from `3` to `2` by running the following command:
+
[source,terminal]
----
$ oc scale deployment ostoy-microservice --replicas=2
----
+
** From the navigational menu of the OpenShift web console UI, click *Workloads > Deployments > ostoy-microservice*.
** Locate the blue circle with a "3 Pod" label in the middle.
** Select the arrows next to the circle to scale the number of pods. Select the down arrow to scale down to `2`.
+
image::deploy-scale-uiscale.png[UI Scale]
.Verification
Check your pod counts by using the CLI, the web UI, or the OSToy application:
* From the CLI, confirm that you are using two pods for the microservice by running the following command:
+
[source,terminal]
----
$ oc get pods
----
+
.Example output
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
ostoy-frontend-5fbcc7d9-rzlgz 1/1 Running 0 75m
ostoy-microservice-6666dcf455-2lcv4 1/1 Running 0 50m
ostoy-microservice-6666dcf455-tqzmn 1/1 Running 0 75m
----
* In the web UI, select *Workloads > Deployments > ostoy-microservice*.
+
image::deploy-scale-verify-workload.png[Verify the workload pods]
* You can also confirm that there are two pods by selecting **Networking** in the navigation menu of the OSToy application. There should be two colored boxes for the two pods.
+
image::deploy-scale-colorspods.png[UI Scale]
[id="pod-autoscaling_deploying-application-scaling"]
== Pod autoscaling
{product-title} offers a link:https://docs.openshift.com/container-platform/latest/nodes/pods/nodes-pods-autoscaling.html[Horizontal Pod Autoscaler] (HPA). The HPA uses metrics to increase or decrease the number of pods when necessary.
.Procedure
. From the navigational menu of the web UI, select *Pod Auto Scaling*.
+
image::deploy-scale-hpa-menu.png[HPA Menu]
. Create the HPA by running the following command:
+
[source,terminal]
----
$ oc autoscale deployment/ostoy-microservice --cpu-percent=80 --min=1 --max=10
----
+
This command creates an HPA that maintains between 1 and 10 replicas of the pods controlled by the `ostoy-microservice` deployment. The HPA increases and decreases the number of replicas to keep the average CPU use across all pods at 80% (40 millicores).
. On the *Pod Auto Scaling > Horizontal Pod Autoscaling* page, select *Increase the load*.
+
[IMPORTANT]
====
Because increasing the load generates CPU-intensive calculations, the page can become unresponsive. This is expected. Click *Increase the load* only once. For more information about the process, see the link:https://github.com/openshift-cs/ostoy/blob/master/microservice/app.js#L32[microservice's GitHub repository].
====
+
After a few minutes, the new pods display on the page represented by colored boxes.
+
[NOTE]
====
The page can experience lag.
====
.Verification
Check your pod counts with one of the following methods:
* In the OSToy application's web UI, see the remote pods box:
+
image::deploy-scale-hpa-mainpage.png[HPA Main]
+
Because there is only one pod, increasing the workload should trigger an increase of pods.
+
* In the CLI, run the following command:
+
[source,terminal]
----
$ oc get pods --field-selector=status.phase=Running | grep microservice
----
+
.Example output
+
[source,terminal]
----
ostoy-microservice-79894f6945-cdmbd 1/1 Running 0 3m14s
ostoy-microservice-79894f6945-mgwk7 1/1 Running 0 4h24m
ostoy-microservice-79894f6945-q925d 1/1 Running 0 3m14s
----
* You can also verify autoscaling from the OpenShift web console dashboards:
+
. In the OpenShift web console navigational menu, click *Observe > Dashboards*.
. In the dashboard, select *Kubernetes / Compute Resources / Namespace (Pods)* and your namespace *ostoy*.
+
image::deploy-scale-hpa-metrics.png[Select metrics]
+
. A graph appears showing your resource usage for CPU and memory. The top graph shows recent CPU consumption per pod, and the lower graph indicates memory usage. The callouts in the graph indicate the following:
.. The load increased (A).
.. Two new pods were created (B and C).
.. The thickness of each graph represents the CPU consumption and indicates which pods handled more load.
.. The load decreased (D), and the pods were deleted.
+
image::deploy-scale-metrics.png[Select metrics]
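You can also inspect the autoscaler object itself from the CLI. The following command is a minimal check of the HPA created earlier:
[source,terminal]
----
$ oc get hpa ostoy-microservice
----
The `TARGETS` column compares the current CPU use against the 80% target, and the `REPLICAS` column shows how many pods the HPA is currently maintaining.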
[id="node-autoscaling_deploying-application-scaling"]
== Node autoscaling
{product-title} allows you to use link:https://docs.openshift.com/rosa/rosa_cluster_admin/rosa_nodes/rosa-nodes-about-autoscaling-nodes.html[node autoscaling]. In this scenario, you will create a new project with a job that has a large workload that the cluster cannot handle. With autoscaling enabled, when the load is larger than your current capacity, the cluster will automatically create new nodes to handle the load.
.Prerequisites
* Autoscaling is enabled on your machine pools.
.Procedure
. Create a new project called `autoscale-ex` by running the following command:
+
[source,terminal]
----
$ oc new-project autoscale-ex
----
. Create the job by running the following command:
+
[source,terminal]
----
$ oc create -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/job-work-queue.yaml
----
+
. After a few minutes, run the following command to see the pods:
+
[source,terminal]
----
$ oc get pods
----
+
.Example output
+
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
work-queue-5x2nq-24xxn 0/1 Pending 0 10s
work-queue-5x2nq-57zpt 0/1 Pending 0 10s
work-queue-5x2nq-58bvs 0/1 Pending 0 10s
work-queue-5x2nq-6c5tl 1/1 Running 0 10s
work-queue-5x2nq-7b84p 0/1 Pending 0 10s
work-queue-5x2nq-7hktm 0/1 Pending 0 10s
work-queue-5x2nq-7md52 0/1 Pending 0 10s
work-queue-5x2nq-7qgmp 0/1 Pending 0 10s
work-queue-5x2nq-8279r 0/1 Pending 0 10s
work-queue-5x2nq-8rkj2 0/1 Pending 0 10s
work-queue-5x2nq-96cdl 0/1 Pending 0 10s
work-queue-5x2nq-96tfr 0/1 Pending 0 10s
----
. Because many pods are in a `Pending` state, this status triggers the autoscaler to create more nodes in your machine pool. Allow time for these worker nodes to be created.
. After a few minutes, use the following command to see how many worker nodes you now have:
+
[source,terminal]
----
$ oc get nodes
----
+
.Example output
+
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
ip-10-0-138-106.us-west-2.compute.internal Ready infra,worker 22h v1.23.5+3afdacb
ip-10-0-153-68.us-west-2.compute.internal Ready worker 2m12s v1.23.5+3afdacb
ip-10-0-165-183.us-west-2.compute.internal Ready worker 2m8s v1.23.5+3afdacb
ip-10-0-176-123.us-west-2.compute.internal Ready infra,worker 22h v1.23.5+3afdacb
ip-10-0-195-210.us-west-2.compute.internal Ready master 23h v1.23.5+3afdacb
ip-10-0-196-84.us-west-2.compute.internal Ready master 23h v1.23.5+3afdacb
ip-10-0-203-104.us-west-2.compute.internal Ready worker 2m6s v1.23.5+3afdacb
ip-10-0-217-202.us-west-2.compute.internal Ready master 23h v1.23.5+3afdacb
ip-10-0-225-141.us-west-2.compute.internal Ready worker 23h v1.23.5+3afdacb
ip-10-0-231-245.us-west-2.compute.internal Ready worker 2m11s v1.23.5+3afdacb
ip-10-0-245-27.us-west-2.compute.internal Ready worker 2m8s v1.23.5+3afdacb
ip-10-0-245-7.us-west-2.compute.internal Ready worker 23h v1.23.5+3afdacb
----
+
You can see that the worker nodes were automatically created to handle the workload.
. Return to the OSToy application by entering the following command:
+
[source,terminal]
----
$ oc project ostoy
----


@@ -0,0 +1,133 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-deploying-application-storage"]
= Persistent volumes for cluster storage
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-deploying-application-storage
toc::[]
//rosaworkshop.io content metadata
//Brought into ROSA product docs 2024-04-30
{rosa-classic-first} and Red Hat OpenShift Service on AWS (ROSA) support storing persistent volumes with either link:https://aws.amazon.com/ebs/[Amazon Web Services (AWS) Elastic Block Store (EBS)] or link:https://aws.amazon.com/efs/[AWS Elastic File System (EFS)].
[id="using-persistent-volumes_deploying-application-storage"]
== Using persistent volumes
Use the following procedures to create a file, store it on a persistent volume in your cluster, and confirm that it still exists after pod failure and re-creation.
[id="viewing_deploying-application-storage"]
=== Viewing a persistent volume claim
.Procedure
. Navigate to the cluster's OpenShift web console.
. Click *Storage* in the left menu, then click *PersistentVolumeClaims* to see a list of all the persistent volume claims.
. Click a persistent volume claim to see the size, access mode, storage class, and other additional claim details.
+
[NOTE]
====
The access mode is `ReadWriteOnce` (RWO). This means that the volume can be mounted to only one node, and the pod or pods on that node can read and write to the volume.
====
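You can view the same details from the CLI. The following command is a minimal example, assuming the `ostoy` project used in this workshop:
[source,terminal]
----
$ oc get pvc -n ostoy
----
The `ACCESS MODES` column should show `RWO` for the `ostoy-pvc` claim.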
[id="storing_deploying-application-storage"]
=== Storing your file
.Procedure
. In the OSToy app console, click *Persistent Storage* in the left menu.
. In the *Filename* box, enter a file name with a `.txt` extension, for example `test-pv.txt`.
. In the *File contents* box, enter a sentence of text, for example `OpenShift is the greatest thing since sliced bread!`.
. Click *Create file*.
+
image::cloud-experts-storage-ostoy-createfile.png[]
+
.Verification
. Scroll to *Existing files* on the OSToy app console.
. Click the file you created to see the file name and contents.
+
image::cloud-experts-storage-ostoy-viewfile.png[]
[id="crash-pod_deploying-application-storage"]
=== Crashing the pod
.Procedure
. On the OSToy app console, click *Home* in the left menu.
. Click *Crash pod*.
[id="confirm_deploying-application-storage"]
=== Confirming persistent storage
.Procedure
. Wait for the pod to re-create.
. On the OSToy app console, click *Persistent Storage* in the left menu.
. Find the file you created, and open it to view and confirm the contents.
+
image::cloud-experts-storage-ostoy-existingfile.png[]
.Verification
The deployment YAML file shows that the link:https://github.com/openshift-cs/rosaworkshop/blob/master/rosa-workshop/ostoy/yaml/ostoy-frontend-deployment.yaml#L61[`/var/demo_files` directory] is mounted to the persistent volume claim.
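For reference, the relevant part of the front-end deployment looks roughly like the following sketch. The volume name shown here is illustrative; the linked YAML file is the authoritative definition:
[source,yaml]
----
containers:
- name: ostoy-frontend
  volumeMounts:
  - name: ostoy-pvc-mount          # illustrative volume name
    mountPath: /var/demo_files
volumes:
- name: ostoy-pvc-mount
  persistentVolumeClaim:
    claimName: ostoy-pvc           # the claim created with the front-end deployment
----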
. Retrieve the name of your front-end pod by running the following command:
+
[source,terminal]
----
$ oc get pods
----
+
. Start a remote shell session in your container by running the following command:
+
[source,terminal]
----
$ oc rsh <pod_name>
----
+
. Go to the directory by running the following command:
+
[source,terminal]
----
$ cd /var/demo_files
----
+
. *Optional:* See all the files you created by running the following command:
+
[source,terminal]
----
$ ls
----
+
. Open the file to view the contents by running the following command:
+
[source,terminal]
----
$ cat test-pv.txt
----
+
. Verify that the output is the text you entered in the OSToy app console.
+
.Example terminal
[source,terminal]
----
$ oc get pods
NAME READY STATUS RESTARTS AGE
ostoy-frontend-5fc8d486dc-wsw24 1/1 Running 0 18m
ostoy-microservice-6cf764974f-hx4qm 1/1 Running 0 18m
$ oc rsh ostoy-frontend-5fc8d486dc-wsw24
$ cd /var/demo_files/
$ ls
lost+found test-pv.txt
$ cat test-pv.txt
OpenShift is the greatest thing since sliced bread!
----
[id="end-session_deploying-application-storage"]
=== Ending the session
.Procedure
* Type `exit` in your terminal to quit the session and return to the CLI.
[role="_additional-resources"]
== Additional resources
* For more information about persistent volume storage, see xref:../../storage/understanding-persistent-storage.adoc#persistent-volumes_understanding-persistent-storage[Understanding persistent storage].
* For more information about ROSA storage options, see xref:../../storage/index.adoc#storage-overview[Storage overview].


@@ -0,0 +1,77 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-deploying-configmaps-secrets-envvar"]
= ConfigMaps, secrets, and environment variables
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-deploying-configmaps-secrets-envvar
toc::[]
//rosaworkshop.io content metadata
//Brought into ROSA product docs 05-07-2024
This tutorial shows how to configure the OSToy application by using link:https://docs.openshift.com/rosa/nodes/pods/nodes-pods-configmaps.html[config maps], link:https://docs.openshift.com/container-platform/latest/cicd/builds/creating-build-inputs.html#builds-input-secrets-configmaps_creating-build-inputs[secrets], and link:https://docs.openshift.com/container-platform/3.11/dev_guide/environment_variables.html[environment variables].
[id="configmaps_deploying-configmaps-secrets-envvar"]
== Configuration using config maps
Config maps allow you to decouple configuration artifacts from container image content to keep containerized applications portable.
.Procedure
* In the OSToy app, in the left menu, click *Config Maps* to display the contents of the config map available to the OSToy application. The following code snippet shows an example of a config map configuration:
+
.Example output
[source,yaml]
----
kind: ConfigMap
apiVersion: v1
metadata:
  name: ostoy-configmap-files
data:
  config.json: '{ "default": "123" }'
----
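You can retrieve the same config map from the CLI. The following command is a minimal example that uses the config map name shown above:
[source,terminal]
----
$ oc get configmap ostoy-configmap-files -o yaml
----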
[id="secrets_deploying-configmaps-secrets-envvar"]
== Configuration using secrets
Kubernetes `Secret` objects allow you to store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. Putting this information in a secret is safer and more flexible than putting it in plain text into a pod definition or a container image.
.Procedure
* In the OSToy app, in the left menu, click *Secrets* to display the contents of the secrets available to the OSToy application. The following code snippet shows an example of a secret configuration:
+
.Example output
[source,text]
----
USERNAME=my_user
PASSWORD=VVNFUk5BTUU9bXlfdXNlcgpQQVNTV09SRD1AT3RCbCVYQXAhIzYzMlk1RndDQE1UUWsKU01UUD1sb2NhbGhvc3QKU01UUF9QT1JUPTI1
SMTP=localhost
SMTP_PORT=25
----
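As an illustration, a secret like this one can be created from a local environment file. The following command is a sketch; the local file name `ostoy.env` is hypothetical:
[source,terminal]
----
$ oc create secret generic ostoy-secret-env --from-env-file=ostoy.env
----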
[id="environment-variables_deploying-configmaps-secrets-envvar"]
== Configuration using environment variables
Using environment variables is an easy way to change application behavior without requiring code changes. It allows different deployments of the same application to behave differently based on their environment variables. {product-title} makes it simple to set, view, and update environment variables for pods and deployments.
.Procedure
* In the OSToy app, in the left menu, click *ENV Variables* to display the environment variables available to the OSToy application. The following code snippet shows an example of an environment variable configuration:
+
.Example output
[source,json]
----
{
  "npm_config_local_prefix": "/opt/app-root/src",
  "STI_SCRIPTS_PATH": "/usr/libexec/s2i",
  "npm_package_version": "1.7.0",
  "APP_ROOT": "/opt/app-root",
  "NPM_CONFIG_PREFIX": "/opt/app-root/src/.npm-global",
  "OSTOY_MICROSERVICE_PORT_8080_TCP_PORT": "8080",
  "NODE": "/usr/bin/node",
  "LD_PRELOAD": "libnss_wrapper.so",
  "KUBERNETES_SERVICE_HOST": "172.30.0.1",
  "OSTOY_MICROSERVICE_PORT": "tcp://172.30.60.255:8080",
  "OSTOY_PORT": "tcp://172.30.152.25:8080",
  "npm_package_name": "ostoy",
  "OSTOY_SERVICE_PORT_8080_TCP": "8080",
  "_": "/usr/bin/node",
  "ENV_TOY_CONFIGMAP": "ostoy-configmap-env"
}
----
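You can list the same variables from the CLI. The following command is a minimal sketch, assuming the `ostoy-frontend` deployment created earlier in this workshop:
[source,terminal]
----
$ oc set env deployment/ostoy-frontend --list
----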


@@ -0,0 +1,98 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-deploying-s2i-webhook-cicd"]
= Using Source-to-Image (S2I) webhooks for automated deployment
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-deploying-s2i-webhook-cicd
:source-highlighter: coderay
toc::[]
//rosaworkshop.io content metadata
//Brought into ROSA product docs 05-07-2024
Automatically trigger a build and deployment any time you change the source code by using a webhook. For more information about this process, see link:https://docs.openshift.com/container-platform/latest/cicd/builds/triggering-builds-build-hooks.html[Triggering builds].
.Procedure
. Obtain the GitHub webhook trigger secret by running the following command:
+
[source,terminal]
----
$ oc get bc/ostoy-microservice -o=jsonpath='{.spec.triggers..github.secret}'
----
+
.Example output
[source,terminal]
----
`o_3x9M1qoI2Wj_cz1WiK`
----
+
[IMPORTANT]
====
You need to use this secret in a later step in this process.
====
. Obtain the GitHub webhook trigger URL from the OSToy build configuration by running the following command:
+
[source,terminal]
----
$ oc describe bc/ostoy-microservice
----
+
.Example output
[source,terminal]
----
[...]
Webhook GitHub:
URL: https://api.demo1234.openshift.com:443/apis/build.openshift.io/v1/namespaces/ostoy-s2i/buildconfigs/ostoy/webhooks/<secret>/github
[...]
----
. In the GitHub webhook URL, replace the `<secret>` text with the secret you retrieved. Your URL will resemble the following example output:
+
.Example output
[source,text]
----
https://api.demo1234.openshift.com:443/apis/build.openshift.io/v1/namespaces/ostoy-s2i/buildconfigs/ostoy-microservice/webhooks/o_3x9M1qoI2Wj_czR1WiK/github
----
. Set up the webhook URL in your GitHub repository.
.. In your repository, click *Settings > Webhooks > Add webhook*.
+
image::ostoy-webhook.png[Add Webhook]
+
.. Paste the GitHub webhook URL, with the secret included, into the *Payload URL* field.
.. Change the *Content type* to `application/json`.
.. Click the *Add webhook* button.
+
image::ostoy-webhookfinish.png[Finish Add Webhook]
+
You should see a message from GitHub stating that your webhook was successfully configured. Now, whenever you push a change to your GitHub repository, a new build automatically starts, and upon a successful build, a new deployment starts.
. Make a change in the source code. Any changes automatically trigger a build and deployment. In this example, the colors displayed by the OSToy microservice are randomly selected. To test the configuration, change the application to display only grayscale colors.
+
.. Go to the `microservice/app.js` source code file in your repository: `https://github.com/<username>/ostoy/blob/master/microservice/app.js`.
.. Edit the file.
.. Comment out line 8 (containing `let randomColor = getRandomColor();`).
.. Uncomment line 9 (containing `let randomColor = getRandomGrayScaleColor();`).
+
[source,javascript,highlight='2-3']
----
7 app.get('/', function(request, response) {
8 //let randomColor = getRandomColor(); // <-- comment this
9 let randomColor = getRandomGrayScaleColor(); // <-- uncomment this
10
11 response.writeHead(200, {'Content-Type': 'application/json'});
----
+
.. Enter a message for the update, such as "changed box to grayscale colors".
.. Click *Commit* at the bottom to commit the changes to the main branch.
. In your cluster's web UI, click *Builds > Builds* to determine the status of the build. After this build is completed, the deployment begins. You can also check the status by running `oc status` in your terminal.
+
image::ostoy-builddone.png[Build Run]
. After the deployment has finished, return to the OSToy application in your browser. Access the *Networking* menu item on the left. The boxes now display only grayscale colors.
+
image::ostoy-gray.png[Gray]


@@ -9,6 +9,7 @@ toc::[]
//Brought into ROSA product docs 22-JAN-2024
//Modified for HCP 15 October 2024
[id="introduction_learning-lab-overview"]
== Introduction
After successfully provisioning your cluster, follow this workshop to deploy an application on it to understand the concepts of deploying and operating container-based applications.
@@ -31,6 +32,7 @@ After successfully provisioning your cluster, follow this workshop to deploy an
* The link:https://docs.openshift.com/rosa/cli_reference/openshift_cli/getting-started-cli.html[OpenShift command line interface (CLI)]
* A link:https://github.com/signup[GitHub account]
[id="about-ostoy_learning-lab-overview"]
== About the OSToy application
OSToy is a Node.js application that deploys to a ROSA cluster to help explore the functionality of Kubernetes.
@@ -46,10 +48,12 @@ This application has a user interface where you can:
* Increase the load to view automatic scaling of the pods by using the HPA
//* Connect to an AWS S3 bucket to read and write objects
=== OSToy Application Diagram
[id="diagram_learning-lab-overview"]
=== OSToy application diagram
image::ostoy-arch.png[OSToy architecture diagram]
[id="ui_learning-lab-overview"]
=== Understanding the OSToy UI
image::ostoy-homepage.png[Preview of the OSToy homepage]
@@ -66,6 +70,7 @@ image::ostoy-homepage.png[Preview of the OSToy homepage]
+
. *About:* Application information
[id="lab-resources_learning-lab-overview"]
=== Lab resources
* link:https://github.com/openshift-cs/ostoy[OSToy application source code]