
OSDOCS-15381 updated modules

William Gabor
2025-08-11 13:24:08 -04:00
committed by openshift-cherrypick-robot
parent 22fe83867f
commit 06ea1bc83f
12 changed files with 1365 additions and 3 deletions

@@ -1230,6 +1230,8 @@ Topics:
File: zero-trust-manager-install
- Name: Deploying Zero Trust Workload Identity Manager operands
File: zero-trust-manager-configuration
- Name: Configuring Zero Trust Workload Identity Manager OIDC Federation
File: zero-trust-manager-oidc-federation
- Name: Monitoring Zero Trust Workload Identity Manager
File: zero-trust-manager-monitoring
- Name: Enabling create-only mode for the Zero Trust Workload Identity Manager

@@ -0,0 +1,19 @@
// Module included in the following assemblies:
//
// * security/zero_trust_workload_identity_manageer/zero-trust-manager-oidc-federation.adoc
:_mod-docs-content-type: CONCEPT
[id="zero-trust-manager-config-vault-oidc_{context}"]
= How to configure Vault OpenID Connect
Vault OpenID Connect (OIDC) allows a SPIRE-identified workload to authenticate against a federated Vault server. The SPIRE Server issues JSON Web Token SPIFFE Verifiable Identity Documents (JWT-SVIDs) to workloads, and the workloads then present the JWT-SVID to Vault to authenticate and retrieve the secrets that they are authorized to access, as illustrated in the sketch after the following list.
The steps to configure Vault OIDC are:
* Install Vault
* Initialize Vault
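
After both procedures are complete, the end-to-end authentication flow looks similar to the following minimal sketch. The role name `ztwim-role` and the audience value `client` are assumptions that match the examples used later in this section.

[source,terminal]
----
# Fetch a JWT-SVID for the workload from the SPIRE Agent socket
$ /opt/spire/bin/spire-agent api fetch jwt \
    -socketPath /run/spire/sockets/spire-agent.sock \
    -audience client

# Exchange the JWT-SVID for a short-lived Vault client token
$ curl -s --request POST \
    --data '{"jwt": "<jwt_svid>", "role": "ztwim-role"}' \
    "${VAULT_ADDR}/v1/auth/jwt/login" | jq -r '.auth.client_token'
----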

@@ -0,0 +1,331 @@
// Module included in the following assemblies:
//
// * security/zero_trust_workload_identity_manageer/zero-trust-manager-oidc-federation.adoc
:_mod-docs-content-type: PROCEDURE
[id="zero-trust-manager-configure-aws_{context}"]
= Using Entra ID with {azure-first}
After the Entra ID configuration is complete, you can set up your {azure-short} resources to work with Entra ID and your SPIRE-issued identities.
.Prerequisites
* You have configured the SPIRE OIDC Discovery Provider Route to serve the TLS certificates from a publicly trusted CA.
== Configuring an {azure-short} account
.Procedure
. Log in to Azure by running the following command:
+
[source,terminal]
----
$ az login
----
. Configure variables for your Azure subscription and tenant:
+
[source,terminal]
----
$ export SUBSCRIPTION_ID=$(az account list --query "[?isDefault].id" -o tsv) <1>
$ export TENANT_ID=$(az account list --query "[?isDefault].tenantId" -o tsv) <2>
$ export LOCATION=centralus <3>
----
+
<1> Your unique subscription identifier.
<2> The ID of your Microsoft Entra ID (formerly Azure Active Directory) tenant.
<3> The Azure region where your resources are created.
. Define resource variable names.
+
[source,terminal]
----
$ export NAME=ztwim <1>
$ export RESOURCE_GROUP="${NAME}-rg" <2>
$ export STORAGE_ACCOUNT="${NAME}storage" <3>
$ export STORAGE_CONTAINER="${NAME}storagecontainer" <4>
$ export USER_ASSIGNED_IDENTITY_NAME="${NAME}-identity" <5>
----
+
<1> A base name for all resources.
<2> The name of the resource group.
<3> The name for the storage account.
<4> The name for the storage container.
<5> The name for a managed identity.
. Create the resource group.
+
[source,terminal]
----
$ az group create \
--name "${RESOURCE_GROUP}" \
--location "${LOCATION}"
----
== Configuring Azure blob storage
.Procedure
. Create a new storage account that is used to store content.
+
[source,terminal]
----
$ az storage account create \
--name ${STORAGE_ACCOUNT} \
--resource-group ${RESOURCE_GROUP} \
--location ${LOCATION} \
--encryption-services blob
----
. Obtain the storage ID for the newly created storage account.
+
[source,terminal]
----
$ export STORAGE_ACCOUNT_ID=$(az storage account show -n ${STORAGE_ACCOUNT} -g ${RESOURCE_GROUP} --query id --out tsv)
----
. Create a storage container inside the newly created storage account to provide a location for storing blobs.
+
[source,terminal]
----
$ az storage container create \
--account-name ${STORAGE_ACCOUNT} \
--name ${STORAGE_CONTAINER} \
--auth-mode login
----
== Configuring an Azure user managed identity
.Procedure
. Create a new user-assigned managed identity and then obtain the client ID of the service principal associated with it.
+
[source,terminal]
----
$ az identity create \
--name ${USER_ASSIGNED_IDENTITY_NAME} \
--resource-group ${RESOURCE_GROUP}
$ export IDENTITY_CLIENT_ID=$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -o tsv)
----
. Assign a role to the service principal that is associated with the user-assigned managed identity.
+
[source,terminal]
----
$ az role assignment create \
--role "Storage Blob Data Contributor" \
--assignee "${IDENTITY_CLIENT_ID}" \
--scope ${STORAGE_ACCOUNT_ID}
----
== Creating the demonstration application
.Procedure
. Set the application name and namespace.
+
[source,terminal]
----
$ export APP_NAME=workload-app
$ export APP_NAMESPACE=demo
----
. Create the namespace.
+
[source,terminal]
----
$ oc create namespace $APP_NAMESPACE
----
. Create the application Secret.
+
[source,terminal]
----
$ oc apply -f - << EOF
apiVersion: v1
kind: Secret
metadata:
  name: $APP_NAME
  namespace: $APP_NAMESPACE
stringData:
  AAD_AUTHORITY: https://login.microsoftonline.com/
  AZURE_AUDIENCE: "api://AzureADTokenExchange"
  AZURE_TENANT_ID: "${TENANT_ID}"
  AZURE_CLIENT_ID: "${IDENTITY_CLIENT_ID}"
  BLOB_STORE_ACCOUNT: "${STORAGE_ACCOUNT}"
  BLOB_STORE_CONTAINER: "${STORAGE_CONTAINER}"
EOF
----
== Deploying the workload application
.Procedure
. To deploy the application, copy the entire command block provided and paste it directly into your terminal. Press *Enter*.
+
[source,terminal]
----
$ oc apply -f - << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: $APP_NAME
  namespace: $APP_NAMESPACE
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: $APP_NAME
  namespace: $APP_NAMESPACE
spec:
  selector:
    matchLabels:
      app: $APP_NAME
  template:
    metadata:
      labels:
        app: $APP_NAME
        deployment: $APP_NAME
    spec:
      serviceAccountName: $APP_NAME
      containers:
      - name: $APP_NAME
        image: "registry.redhat.io/ubi9/python-311:latest"
        command:
        - /bin/bash
        - "-c"
        - |
          #!/bin/bash
          pip install spiffe azure-cli
          cat << EOF > /opt/app-root/src/get-spiffe-token.py
          #!/opt/app-root/bin/python
          from spiffe import JwtSource
          import argparse
          parser = argparse.ArgumentParser(description='Retrieve SPIFFE Token.')
          parser.add_argument("-a", "--audience", help="The audience to include in the token", required=True)
          args = parser.parse_args()
          with JwtSource() as source:
              jwt_svid = source.fetch_svid(audience={args.audience})
              print(jwt_svid.token)
          EOF
          chmod +x /opt/app-root/src/get-spiffe-token.py
          while true; do sleep 10; done
        envFrom:
        - secretRef:
            name: $APP_NAME
        env:
        - name: SPIFFE_ENDPOINT_SOCKET
          value: unix:///run/spire/sockets/spire-agent.sock
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: false
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        ports:
        - containerPort: 8080
          protocol: TCP
        volumeMounts:
        - name: spiffe-workload-api
          mountPath: /run/spire/sockets
          readOnly: true
      volumes:
      - name: spiffe-workload-api
        csi:
          driver: csi.spiffe.io
          readOnly: true
EOF
----
.Verification
. Ensure that the `workload-app` pod is running successfully.
+
[source,terminal]
----
$ oc get pods -n $APP_NAMESPACE
----
+
.Example output
[source, terminal]
----
NAME READY STATUS RESTARTS AGE
workload-app-5f8b9d685b-abcde 1/1 Running 0 60s
----
. Retrieve the SPIFFE JWT token (JWT-SVID) by running the following commands:
+
[source,terminal]
----
# Get the pod name dynamically
$ POD_NAME=$(oc get pods -n $APP_NAMESPACE -l app=$APP_NAME -o jsonpath='{.items[0].metadata.name}')
# Execute the script inside the pod
$ oc exec -it $POD_NAME -n $APP_NAMESPACE -- \
/opt/app-root/src/get-spiffe-token.py -a "api://AzureADTokenExchange"
----
== Configuring Azure with the SPIFFE identity federation
.Procedure
* Federate the identities between the User Managed Identity and the SPIFFE identity associated with the workload application.
+
[source,terminal]
----
$ az identity federated-credential create \
--name ${NAME} \
--identity-name ${USER_ASSIGNED_IDENTITY_NAME} \
--resource-group ${RESOURCE_GROUP} \
--issuer https://$JWT_ISSUER_ENDPOINT \
--subject spiffe://$APP_DOMAIN/ns/$APP_NAMESPACE/sa/$APP_NAME \
--audience api://AzureADTokenExchange
----
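+
[NOTE]
====
The `$JWT_ISSUER_ENDPOINT` and `$APP_DOMAIN` variables are assumed to match the OIDC issuer endpoint and trust domain that you configured for the SPIRE operands. If they are not set in your current shell, you can derive them from the cluster DNS configuration, for example:

[source,terminal]
----
$ export APP_DOMAIN=apps.$(oc get dns cluster -o jsonpath='{ .spec.baseDomain }')
$ export JWT_ISSUER_ENDPOINT=oidc-discovery.${APP_DOMAIN}
----
====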
== Verifying that the application workload can access content in Azure Blob Storage
.Procedure
. Retrieve a JWT token from the SPIFFE Workload API and log in to Azure by using the Azure CLI included within the pod.
+
[source,terminal]
----
$ oc rsh -n $APP_NAMESPACE deployment/$APP_NAME
$ export TOKEN=$(/opt/app-root/src/get-spiffe-token.py --audience=$AZURE_AUDIENCE)
$ az login --service-principal \
-t ${AZURE_TENANT_ID} \
-u ${AZURE_CLIENT_ID} \
--federated-token ${TOKEN}
----
. Create a new file within the application workload pod and upload the file to the blob storage.
+
[source,terminal]
----
$ echo "Hello from OpenShift" > openshift-spire-federated-identities.txt
$ az storage blob upload \
--account-name ${BLOB_STORE_ACCOUNT} \
--container-name ${BLOB_STORE_CONTAINER} \
--name openshift-spire-federated-identities.txt \
--file openshift-spire-federated-identities.txt \
--auth-mode login
----
.Verification
* Confirm that the file uploaded successfully by listing the files in the container.
+
[source,terminal]
----
$ az storage blob list \
--account-name ${BLOB_STORE_ACCOUNT} \
--container-name ${BLOB_STORE_CONTAINER} \
--auth-mode login \
-o table
----

@@ -0,0 +1,98 @@
// Module included in the following assemblies:
//
// * security/zero_trust_workload_identity_manageer/zero-trust-manager-oidc-federation.adoc
:_mod-docs-content-type: PROCEDURE
[id="zero-trust-manager-create-route-oidc_{context}"]
= Configuring the external certificate for the managed OIDC discovery provider route
The managed route uses the External Route Certificate feature to set the `tls.externalCertificate` field to the name of an externally managed Transport Layer Security (TLS) secret.
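For reference, a route that uses this feature contains a `tls` stanza similar to the following minimal sketch. The secret name shown in the `externalCertificate` field is an assumption that corresponds to the `$TLS_SECRET_NAME` value used later in this procedure.

[source,yaml]
----
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: spire-spiffe-oidc-discovery-provider
  namespace: zero-trust-workload-identity-manager
spec:
  host: <jwt_issuer_endpoint>
  to:
    kind: Service
    name: spire-spiffe-oidc-discovery-provider
  tls:
    termination: reencrypt
    externalCertificate:
      name: spire-spiffe-oidc-discovery-provider-tls <1>
----
<1> The name of the externally managed TLS secret that contains the serving certificate.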
.Prerequisites
* You have installed {zero-trust-full} 0.2.0 or later.
* You have deployed the SPIRE Server, SPIRE Agent, SPIFFE CSI Driver, and the SPIRE OIDC Discovery Provider operands in the cluster.
* You have installed the {cert-manager-operator}. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.19/html-single/security_and_compliance/index#cert-manager-operator-install[Installing the cert-manager Operator for Red{nbsp}Hat OpenShift].
* You have created a `ClusterIssuer` or `Issuer` configured with a publicly trusted CA service. For example, an Automated Certificate Management Environment (ACME) type `Issuer` with the "Let's Encrypt ACME" service. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.19/html-single/security_and_compliance/index#cert-manager-operator-issuer-acme[Configuring an ACME issuer].
.Procedure
. Create a `Role` to provide the router service account permissions to read the referenced secret by running the following command:
+
[source,terminal]
----
$ oc create role secret-reader \
--verb=get,list,watch \
--resource=secrets \
--resource-name=$TLS_SECRET_NAME \
-n zero-trust-workload-identity-manager
----
. Create a `RoleBinding` resource to bind the router service account with the newly created Role resource by running the following command:
+
[source,terminal]
----
$ oc create rolebinding secret-reader-binding \
--role=secret-reader \
--serviceaccount=openshift-ingress:router \
-n zero-trust-workload-identity-manager
----
. Configure the `SpireOIDCDiscoveryProvider` custom resource (CR) to reference the secret generated in the earlier step by running the following command:
+
[source,terminal]
----
$ oc patch SpireOIDCDiscoveryProvider cluster --type=merge -p="
spec:
  externalSecretRef: ${TLS_SECRET_NAME}
"
----
.Verification
. In the `SpireOIDCDiscoveryProvider` CR, check that the `ManagedRouteReady` condition is set to `True` by running the following command:
+
[source,terminal]
----
$ oc wait --for=jsonpath='{.status.conditions[?(@.type=="ManagedRouteReady")].status}'=True SpireOIDCDiscoveryProvider/cluster --timeout=120s
----
. Verify that the OIDC endpoint can be accessed securely through HTTPS.
+
[source,terminal]
----
$ curl https://$JWT_ISSUER_ENDPOINT/.well-known/openid-configuration
{
  "issuer": "https://$JWT_ISSUER_ENDPOINT",
  "jwks_uri": "https://$JWT_ISSUER_ENDPOINT/keys",
  "authorization_endpoint": "",
  "response_types_supported": [
    "id_token"
  ],
  "subject_types_supported": [],
  "id_token_signing_alg_values_supported": [
    "RS256",
    "ES256",
    "ES384"
  ]
}
----
== Disabling a managed route
If you want to fully control how the OIDC Discovery Provider service is exposed, you can disable the managed route.
.Procedure
* To manually configure the OIDC Discovery Provider route, set `managedRoute` to `false` by running the following command:
+
[source,terminal]
----
$ oc patch SpireOIDCDiscoveryProvider cluster --type=merge -p='
spec:
  managedRoute: "false"
'
----

@@ -0,0 +1,14 @@
// Module included in the following assemblies:
//
// * security/zero_trust_workload_identity_manageer/zero-trust-manager-oidc-federation.adoc
:_mod-docs-content-type: CONCEPT
[id="zero-trust-manager-entraid-oidc-about_{context}"]
= About Entra ID OpenID Connect
Entra ID is a cloud-based identity and access management service that centralizes user management and access control. Entra ID serves as the identity provider, verifying user identities and issuing an ID token to the application. This token contains essential user information, allowing the application to confirm who the user is without managing their credentials.
Integrating Entra ID OpenID Connect (OIDC) with SPIRE provides workloads with automatic, short-lived cryptographic identities. The SPIRE-issued identities are presented to Entra ID to securely authenticate the service without any static secrets.
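
In practice, the exchange looks similar to the following minimal sketch, which is shown in full in the procedures later in this section. The audience value `api://AzureADTokenExchange` matches the federated credential configuration used in those procedures.

[source,terminal]
----
# Fetch a JWT-SVID scoped to the Entra ID token exchange audience
$ TOKEN=$(/opt/app-root/src/get-spiffe-token.py --audience=api://AzureADTokenExchange)

# Authenticate with the SPIRE-issued token instead of a static client secret
$ az login --service-principal \
    -t ${AZURE_TENANT_ID} \
    -u ${AZURE_CLIENT_ID} \
    --federated-token ${TOKEN}
----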

@@ -0,0 +1,280 @@
// Module included in the following assemblies:
//
// * security/zero_trust_workload_identity_manageer/zero-trust-manager-oidc-federation.adoc
:_mod-docs-content-type: PROCEDURE
[id="zero-trust-manager-initialize-vault-oidc_{context}"]
= Initializing Vault
A newly installed Vault is sealed. This means that the primary encryption key, which protects all other encryption keys, is not loaded into the server memory upon startup. You need to initialize Vault and then unseal it before you can use it.
The steps to initialize a Vault server are:
. Initialize and unseal Vault
. Enable the key-value (KV) secrets engine and store a test secret
. Configure JSON Web Token (JWT) authentication with SPIRE
. Deploy a demonstration application
. Authenticate and retrieve the secret
.Prerequisites
* Ensure that Vault is running.
* Ensure that Vault is not initialized. You can only initialize a Vault server once.
== Initializing and unsealing Vault
.Procedure
. Open a remote shell into the `vault` pod:
+
[source,terminal]
----
$ oc rsh -n vault statefulset/vault
----
. Initialize Vault to get your unseal key and root token:
+
[source,terminal]
----
$ vault operator init -key-shares=1 -key-threshold=1 -format=json
----
. Export the unseal key and root token you received from the earlier command:
+
[source,terminal]
----
$ export UNSEAL_KEY=<Your-Unseal-Key>
$ export ROOT_TOKEN=<Your-Root-Token>
----
. Unseal Vault using your unseal key:
+
[source,terminal]
----
$ vault operator unseal -format=json $UNSEAL_KEY
----
. Exit the pod by entering `exit`.
.Verification
* To verify that the Vault pod is ready, run the following command:
+
[source,terminal]
----
$ oc get pod -n vault
----
+
.Example output
[source, terminal]
----
NAME READY STATUS RESTARTS AGE
vault-0 1/1 Running 0 65d
----
== Enabling the key-value secrets engine and storing a test secret
.Procedure
. Open another shell session in the `vault` pod by running the following command:
+
[source,terminal]
----
$ oc rsh -n vault statefulset/vault
----
. Export your root token again within this new session and log in:
+
[source,terminal]
----
$ export ROOT_TOKEN=<Your-Root-Token>
$ vault login "${ROOT_TOKEN}"
----
. Enable the KV secrets engine at the `secret/` path and create a test secret:
+
[source,terminal]
----
$ export NAME=ztwim
$ vault secrets enable -path=secret kv
$ vault kv put secret/$NAME version=v0.1.0
----
.Verification
* To verify that the secret is stored correctly, run the following command:
+
[source,terminal]
----
$ vault kv get secret/$NAME
----
== Configuring JSON Web Token authentication with SPIRE
You need to set up JSON Web Token (JWT) authentication so your applications can securely log in to Vault by using SPIFFE identities.
.Procedure
. On your local machine, retrieve the SPIRE Certificate Authority (CA) bundle and save it to a file:
+
[source,terminal]
----
$ oc get cm -n zero-trust-workload-identity-manager spire-bundle -o jsonpath='{ .data.bundle\.crt }' > oidc_provider_ca.pem
----
. Back in the Vault pod shell, create a temporary file and paste the contents of `oidc_provider_ca.pem` into it:
+
[source,terminal]
----
$ cat << EOF > /tmp/oidc_provider_ca.pem
-----BEGIN CERTIFICATE-----
<Paste-Your-Certificate-Content-Here>
-----END CERTIFICATE-----
EOF
----
. Set up the necessary environment variables for the JWT configuration:
+
[source,terminal]
----
$ export APP_DOMAIN=<Your-App-Domain>
$ export JWT_ISSUER_ENDPOINT="oidc-discovery.$APP_DOMAIN"
$ export OIDC_URL="https://$JWT_ISSUER_ENDPOINT"
$ export OIDC_CA_PEM="$(cat /tmp/oidc_provider_ca.pem)"
----
. Enable the JWT authentication method and configure it with your OIDC provider details:
+
[source,terminal]
----
$ export ROLE="${NAME}-role"
$ vault auth enable jwt
$ vault write auth/jwt/config \
oidc_discovery_url=$OIDC_URL \
oidc_discovery_ca_pem="$OIDC_CA_PEM" \
default_role=$ROLE
----
. Create a policy named `ztwim-policy` that grants read access to the secret you created earlier:
+
[source,terminal]
----
$ export POLICY="${NAME}-policy"
$ vault policy write $POLICY -<<EOF
path "secret/$NAME" {
capabilities = ["read"]
}
EOF
----
. Create a JWT role that binds the policy to workloads with a specific SPIFFE ID:
+
[source,terminal]
----
$ export APP_NAME=client
$ export APP_NAMESPACE=demo
$ export AUDIENCE=$APP_NAME
$ vault write auth/jwt/role/$ROLE -<<EOF
{
"role_type": "jwt",
"user_claim": "sub",
"bound_audiences": "$AUDIENCE",
"bound_claims_type": "glob",
"bound_claims": {
"sub": "spiffe://$APP_DOMAIN/ns/$APP_NAMESPACE/sa/$APP_NAME"
},
"token_ttl": "24h",
"token_policies": "$POLICY"
}
EOF
----
== Deploying a demo application
This procedure creates a simple client application that uses its SPIFFE identity to authenticate with Vault.
.Procedure
. On your local machine, set the environment variables for your application:
+
[source,terminal]
----
$ export APP_NAME=client
$ export APP_NAMESPACE=demo
$ export AUDIENCE=$APP_NAME
----
. Apply the Kubernetes manifest to create the namespace, service account, and deployment for the demo app. This deployment mounts the SPIFFE CSI driver socket.
+
[source,terminal]
----
$ oc apply -f - <<EOF
# Paste the full manifest for the namespace, service account, and client deployment here.
# A minimal sketch of this manifest is shown after this block.
EOF
----
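+
The full manifest is not reproduced here. The following is a minimal sketch of what it might contain; the container image placeholder and the exact resource layout are assumptions. The later steps only require that the pod runs under the `$APP_NAME` service account, mounts the SPIFFE Workload API socket through the `csi.spiffe.io` CSI driver, and contains the `/opt/spire/bin/spire-agent` binary used to fetch a JWT-SVID.
+
[source,terminal]
----
$ oc apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: $APP_NAMESPACE
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: $APP_NAME
  namespace: $APP_NAMESPACE
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $APP_NAME
  namespace: $APP_NAMESPACE
spec:
  selector:
    matchLabels:
      app: $APP_NAME
  template:
    metadata:
      labels:
        app: $APP_NAME
    spec:
      serviceAccountName: $APP_NAME
      containers:
      - name: $APP_NAME
        image: <spire_agent_image> <1>
        command: ["sleep", "infinity"]
        volumeMounts:
        - name: spiffe-workload-api
          mountPath: /run/spire/sockets
          readOnly: true
      volumes:
      - name: spiffe-workload-api
        csi:
          driver: csi.spiffe.io
          readOnly: true
EOF
----
<1> Replace with an image that includes the `spire-agent` binary at `/opt/spire/bin/spire-agent`.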
.Verification
* Verify that the client deployment is ready by running the following command:
+
[source,terminal]
----
$ oc get deploy -n $APP_NAMESPACE
----
+
.Example output
[source, terminal]
----
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
client   1/1     1            1           60s
----
== Authenticating and retrieving the secret
You use the demo app to fetch a JWT token from the SPIFFE Workload API and use it to log in to Vault and retrieve the secret.
.Procedure
. Run a command inside the running client pod to fetch a JWT-SVID:
+
[source,terminal]
----
$ oc -n $APP_NAMESPACE exec -it $(oc get pod -o=jsonpath='{.items[*].metadata.name}' -l app=$APP_NAME -n $APP_NAMESPACE) \
-- /opt/spire/bin/spire-agent api fetch jwt \
-socketPath /run/spire/sockets/spire-agent.sock \
-audience $AUDIENCE
----
. Copy the token from the output and export it as an environment variable on your local machine:
+
[source,terminal]
----
$ export IDENTITY_TOKEN=<Your-JWT-Token>
----
. Use `curl` to send the JWT token to the Vault login endpoint to get a Vault client token:
+
[source,terminal]
----
$ export ROLE="${NAME}-role"
$ VAULT_TOKEN=$(curl -s --request POST --data '{ "jwt": "'"${IDENTITY_TOKEN}"'", "role": "'"${ROLE}"'"}' "${VAULT_ADDR}"/v1/auth/jwt/login | jq -r '.auth.client_token')
----
.Verification
* Use the newly acquired Vault token to read the secret from the KV store:
+
[source,terminal]
----
$ curl -s -H "X-Vault-Token: $VAULT_TOKEN" $VAULT_ADDR/v1/secret/$NAME | jq
----
+
You should see the contents of the secret (`"version": "v0.1.0"`) in the output, confirming that the entire workflow is successful.
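+
To extract just the stored value, you can pipe the response through `jq`, for example:
+
[source,terminal]
----
$ curl -s -H "X-Vault-Token: $VAULT_TOKEN" $VAULT_ADDR/v1/secret/$NAME | jq -r '.data.version'
v0.1.0
----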

@@ -0,0 +1,446 @@
// Module included in the following assemblies:
//
// * security/zero_trust_workload_identity_manageer/zero-trust-manager-oidc-federation.adoc
:_mod-docs-content-type: PROCEDURE
[id="zero-trust-manager-install-entraid_{context}"]
= Configuring Entra ID
You need to configure Entra ID federation so that the SPIRE server can automatically provide software workloads with short-lived, verifiable identities within your infrastructure. The steps to do this include:
* Installing an Operator
* Deploying the operands
* Exposing the SPIFFE OIDC Discovery Provider service
* Verifying that the OIDC endpoint can be accessed securely through HTTPS
== Installing the Operator
.Prerequisites
* Access to a Kubernetes cluster where the SPIRE server runs.
* cert-manager is installed and running within the Kubernetes cluster. For more information about installing cert-manager, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.19/html-single/security_and_compliance/index#cert-manager-operator-install[Installing the cert-manager Operator for Red{nbsp}Hat OpenShift].
* A pre-configured cert-manager `Issuer` capable of signing intermediate Certificate Authority (CA) certificates.
.Procedure
. Log in to your OpenShift Cluster by running the following command:
+
[source,terminal]
----
$ oc login --token=<your_token> --server=<your_server_url>
----
. Apply the Operator manifest. Copy the entire command block provided and paste it directly into your terminal. Press *Enter*.
+
[source,yaml]
----
oc apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: zero-trust-workload-identity-manager
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup <1>
metadata:
  name: zero-trust-workload-identity-manager-og
  namespace: zero-trust-workload-identity-manager <2>
spec:
  upgradeStrategy: Default
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-zero-trust-workload-identity-manager
  namespace: zero-trust-workload-identity-manager
spec:
  source: redhat-operators <3>
  sourceNamespace: openshift-marketplace
  name: openshift-zero-trust-workload-identity-manager
  channel: tech-preview-v0.1
EOF
----
+
<1> Used to manage operator updates.
<2> Used to isolate the Operator.
<3> Used by the Operator Lifecycle Manager (OLM) to find the Operator in the `redhat-operators` catalog and install it.
.Verification
* Verify that the subscription is created and is progressing by running the following command:
+
[source,terminal]
----
$ oc get subscription -n zero-trust-workload-identity-manager
----
+
.Example output
[source, terminal]
----
NAME PACKAGE SOURCE CHANNEL
openshift-zero-trust-workload-identity-manager openshift-zero-trust-workload-identity-manager redhat-operators tech-preview-v0.2
----
== Deploying SPIRE operands
The SPIRE Server, Agent, Container Storage Interface (CSI) Driver, and OIDC Discovery Provider operands need to be deployed so that {zero-trust-full} can use SPIFFE IDs.
.Procedure
. Get the application domain by running the following command:
+
[source,terminal]
----
$ export APP_DOMAIN=apps.$(oc get dns cluster -o jsonpath='{ .spec.baseDomain }')
----
. Define the JWT issuer endpoint for the OIDC provider, which is used for issuing JWT-SVIDs:
+
[source,terminal]
----
$ export JWT_ISSUER_ENDPOINT=oidc-discovery.${APP_DOMAIN}
----
. Define a unique name for your cluster configuration:
+
[source,terminal]
----
$ export CLUSTER_NAME=test01
----
. Apply the configuration manifests for the SPIRE components. Copy the entire command block provided and paste it directly into your terminal, and then press *Enter*.
+
[source,yaml]
----
oc apply -f - <<EOF
apiVersion: operator.openshift.io/v1alpha1
kind: SpireServer
metadata:
  name: cluster
spec:
  trustDomain: $APP_DOMAIN
  clusterName: $CLUSTER_NAME
  caSubject:
    commonName: $APP_DOMAIN
    country: "US"
    organization: "RH"
  persistence:
    type: pvc
    size: "2Gi"
    accessMode: ReadWriteOncePod
  datastore:
    databaseType: sqlite3
    connectionString: "/run/spire/data/datastore.sqlite3"
    maxOpenConns: 100
    maxIdleConns: 2
    connMaxLifetime: 3600
  jwtIssuer: https://$JWT_ISSUER_ENDPOINT
---
apiVersion: operator.openshift.io/v1alpha1
kind: SpireAgent
metadata:
  name: cluster
spec:
  trustDomain: $APP_DOMAIN
  clusterName: $CLUSTER_NAME
  nodeAttestor:
    k8sPSATEnabled: "true"
  workloadAttestors:
    k8sEnabled: "true"
    workloadAttestorsVerification:
      type: "auto"
---
apiVersion: operator.openshift.io/v1alpha1
kind: SpiffeCSIDriver
metadata:
  name: cluster
spec: {}
---
apiVersion: operator.openshift.io/v1alpha1
kind: SpireOIDCDiscoveryProvider
metadata:
  name: cluster
spec:
  trustDomain: $APP_DOMAIN
  jwtIssuer: $JWT_ISSUER_ENDPOINT
EOF
----
.Verification
. Check the SPIRE server status by running the following command:
+
[source,terminal]
----
$ oc rollout status statefulset/spire-server -n zero-trust-workload-identity-manager --timeout=2m
----
. Check the SPIRE agent status by running the following command:
+
[source,terminal]
----
$ oc rollout status statefulset/spire-agent -n zero-trust-workload-identity-manager --timeout=2m
----
. Check the SPIFFE CSI driver status by running the following command:
+
[source,terminal]
----
$ oc rollout status daemonset/spire-spiffe-csi-driver -n zero-trust-workload-identity-manager --timeout=2m
----
. Check the OIDC Discovery Provider status by running the following command:
+
[source,terminal]
----
$ oc wait --for=condition=Available deployment/spire-spiffe-oidc-discovery-provider -n zero-trust-workload-identity-manager --timeout=2m
----
== Exposing the SPIFFE OIDC Discovery Provider service
.Procedure
. Retrieve the SPIRE Certificate Authority (CA) bundle from the `spire-bundle` ConfigMap and save it to a local file named `spire-ca-bundle.crt` by running the following command:
+
[source,terminal]
----
$ oc get configmap spire-bundle \
-n zero-trust-workload-identity-manager \
-o jsonpath='{.data.bundle\.crt}' > ./spire-ca-bundle.crt
----
. Create a Secret from the CA bundle:
+
[source,terminal]
----
$ oc create secret generic \
-n zero-trust-workload-identity-manager \
spire-bundle --from-file=tls.crt=spire-ca-bundle.crt
----
. Set the TLS Secret name:
+
[source,terminal]
----
$ export TLS_SECRET_NAME=spire-spiffe-oidc-discovery-provider-tls
----
. Configure one of the following options to expose the SPIFFE OIDC Discovery Provider:
.. Create an Ingress with cert-manager annotations
... Apply the Ingress manifest. Copy the entire command block provided and paste it directly into your terminal, and then press *Enter*.
+
[source,yaml]
----
oc apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oidc-discovery-provider-certmanager-anno
  namespace: zero-trust-workload-identity-manager
  annotations:
    route.openshift.io/termination: reencrypt
    route.openshift.io/destination-ca-certificate-secret: spire-bundle
    cert-manager.io/issuer: letsencrypt-http01
    cert-manager.io/common-name: $JWT_ISSUER_ENDPOINT
spec:
  rules:
  - host: $JWT_ISSUER_ENDPOINT
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: spire-spiffe-oidc-discovery-provider
            port:
              number: 443
  tls:
  - hosts:
    - $JWT_ISSUER_ENDPOINT
    secretName: $TLS_SECRET_NAME
  ingressClassName: openshift-default
EOF
----
... Confirm that `cert-manager` has successfully issued the certificate and that the Secret is ready by running the following command:
+
[source,terminal]
----
$ oc wait --for=condition=Ready certificate/$TLS_SECRET_NAME -n zero-trust-workload-identity-manager --timeout=5m
----
.. Manually create a Certificate and an Ingress
... Define and apply the `cert-manager` certificate resource. Copy the entire command block provided and paste it directly into your terminal, and then press *Enter*.
+
[source,yaml]
----
oc apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: $TLS_SECRET_NAME
  namespace: zero-trust-workload-identity-manager
spec:
  secretName: $TLS_SECRET_NAME
  commonName: $JWT_ISSUER_ENDPOINT
  dnsNames:
  - $JWT_ISSUER_ENDPOINT
  usages:
  - server auth
  issuerRef:
    kind: Issuer
    name: letsencrypt-http01
EOF
----
... Run the following command to check if the certificate is provisioned and its status is `Ready`:
+
[source,terminal]
----
$ oc wait --for=condition=Ready certificate/$TLS_SECRET_NAME -n zero-trust-workload-identity-manager --timeout=5m
----
... Apply the Ingress manifest. Copy the entire command block provided and paste it directly into your terminal, and then press *Enter*.
+
[source,yaml]
----
oc apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oidc-discovery-provider-bring-your-own-tls
  namespace: zero-trust-workload-identity-manager
  annotations:
    route.openshift.io/destination-ca-certificate-secret: spire-bundle
    route.openshift.io/termination: reencrypt
spec:
  rules:
  - host: $JWT_ISSUER_ENDPOINT
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: spire-spiffe-oidc-discovery-provider
            port:
              number: 443
  tls:
  - hosts:
    - $JWT_ISSUER_ENDPOINT
    secretName: $TLS_SECRET_NAME
EOF
----
.. Directly create a certificate and a route
... Define and apply the `cert-manager` certificate resource. Copy the entire command block provided and paste it directly into your terminal, and then press *Enter*.
+
[source,yaml]
----
oc apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: $TLS_SECRET_NAME
  namespace: zero-trust-workload-identity-manager
spec:
  secretName: $TLS_SECRET_NAME
  commonName: $JWT_ISSUER_ENDPOINT
  dnsNames:
  - $JWT_ISSUER_ENDPOINT
  usages:
  - server auth
  issuerRef:
    kind: Issuer
    name: letsencrypt-http01
EOF
----
... Wait for the certificate to reach the `Ready` state by running the following command:
+
[source,terminal]
----
$ oc wait --for=condition=Ready certificate/$TLS_SECRET_NAME -n zero-trust-workload-identity-manager --timeout=5m
----
... Create a `Role` and `RoleBinding` to grant the OpenShift router `ServiceAccount` permission to read the TLS secret created by `cert-manager`.
+
[source,terminal]
----
# Create the Role
$ oc create role secret-reader \
--verb=get,list,watch \
--resource=secrets \
--resource-name=$TLS_SECRET_NAME \
-n zero-trust-workload-identity-manager
# Bind the Role to the router ServiceAccount
$ oc create rolebinding secret-reader-binding \
--role=secret-reader \
--serviceaccount=openshift-ingress:router \
-n zero-trust-workload-identity-manager
----
... Create the route.
+
[source,terminal]
----
$ oc create route reencrypt spiffe-oidc-discovery \
-n zero-trust-workload-identity-manager \
--hostname=$JWT_ISSUER_ENDPOINT \
--dest-ca-cert=./spire-ca-bundle.crt \
--service=spire-spiffe-oidc-discovery-provider \
--port https
----
... Update the route to reference the externally managed TLS secret by using the `externalCertificate` field.
+
[source,terminal]
----
$ oc patch route spiffe-oidc-discovery \
-p '{"spec":{"tls":{"externalCertificate":{"name":"'"$TLS_SECRET_NAME"'"}}}}' \
-n zero-trust-workload-identity-manager \
--type=merge
----
.Verification
Verify that the OIDC Discovery endpoint is publicly accessible, the TLS certificate is valid, and the OIDC provider is serving its configuration correctly.
. Run the following command and ensure that the `$JWT_ISSUER_ENDPOINT` environment variable is set to the hostname that you configured in the earlier steps.
+
[source,terminal]
----
$ curl https://$JWT_ISSUER_ENDPOINT/.well-known/openid-configuration
----
+
.Example output
[source,json]
----
{
  "issuer": "https://$JWT_ISSUER_ENDPOINT",
  "jwks_uri": "https://$JWT_ISSUER_ENDPOINT/keys",
  "authorization_endpoint": "",
  "response_types_supported": [
    "id_token"
  ],
  "subject_types_supported": [],
  "id_token_signing_alg_values_supported": [
    "RS256",
    "ES256",
    "ES384"
  ]
}
----

@@ -0,0 +1,120 @@
// Module included in the following assemblies:
//
// * security/zero_trust_workload_identity_manageer/zero-trust-manager-oidc-federation.adoc
:_mod-docs-content-type: PROCEDURE
[id="zero-trust-manager-install-vault-oidc_{context}"]
= Installing Vault
Before you can configure Vault OpenID Connect (OIDC) authentication, you need to install Vault.
.Prerequisites
* You have configured a route. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.19/html/ingress_and_load_balancing/configuring-routes#route-configuration[Route configuration].
* You have installed Helm.
* You have installed a command-line JSON processor, such as `jq`, to read the output from the Vault API.
* You have added the HashiCorp Helm repository.
.Procedure
. Create the `vault-helm-value.yaml` file with the following content:
+
[source,yaml]
----
global:
  enabled: true
  openshift: true <1>
  tlsDisable: true <2>
injector:
  enabled: false
server:
  ui:
    enabled: true
  image:
    repository: docker.io/hashicorp/vault
    tag: "1.19.0"
  dataStorage:
    enabled: true <3>
    size: 1Gi
  standalone:
    enabled: true <4>
    config: |
      listener "tcp" {
        tls_disable = 1 <5>
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "file" {
        path = "/vault/data"
      }
  extraEnvironmentVars: {}
----
+
<1> Optimizes the deployment for OpenShift-specific security contexts.
<2> Disables TLS for Kubernetes objects created by the chart.
<3> Creates a 1Gi persistent volume to store Vault data.
<4> Deploys a single Vault pod.
<5> Tells the Vault server to not use TLS.
. Run the `helm install` command:
+
[source,terminal]
----
$ helm install vault hashicorp/vault \
--create-namespace -n vault \
--values ./vault-helm-value.yaml
----
. Expose the Vault service.
+
[source,terminal]
----
$ oc expose service vault -n vault
----
. Set and export the `VAULT_ADDR` environment variable by retrieving the hostname from the new route.
+
[source,terminal]
----
$ export VAULT_ADDR="http://$(oc get route vault -n vault -o jsonpath='{.spec.host}')"
----
+
[NOTE]
====
`http://` is prepended because TLS is disabled.
====
.Verification
* To ensure your Vault instance is running, run the following command:
+
[source,terminal]
----
$ curl -s $VAULT_ADDR/v1/sys/health | jq
----
+
.Example output
[source,JSON]
----
{
  "initialized": true,
  "sealed": true,
  "standby": true,
  "performance_standby": false,
  "replication_performance_mode": "disabled",
  "replication_dr_mode": "disabled",
  "server_time_utc": 1663786574,
  "version": "1.19.0",
  "cluster_name": "vault-cluster-a1b2c3d4",
  "cluster_id": "5e6f7a8b-9c0d-1e2f-3a4b-5c6d7e8f9a0b"
}
----

@@ -4,6 +4,7 @@
:_mod-docs-content-type: PROCEDURE
[id="zero-trust-manager-oidc-config_{context}"]
= Deploying the SPIRE OpenID Connect Discovery Provider
You can configure the `SpireOIDCDiscoveryProvider` custom resource (CR) to deploy and configure the SPIRE OpenID Connect (OIDC) Discovery Provider.
@@ -34,8 +35,8 @@ spec:
jwtIssuer: <jwt_issuer_domain> #<3>
----
<1> The trust domain to be used for the SPIFFE identifiers.
<2> The name of the SPIRE Agent unix socket.
<3> The JSON Web Token (JWT) issuer domain. The default value is set to the value specified in `oidc-discovery.$trustDomain`.
<2> The name of the SPIRE Agent UNIX socket.
<3> The JSON Web Token (JWT) issuer domain. The value must be a valid URL.
.. Apply the configuration by running the following command:
+

@@ -58,7 +58,7 @@ spec:
<9> The maximum number of open database connections.
<10> The maximum number of idle connections in the pool.
<11> The maximum amount of time a connection can be reused. To specify an unlimited time, you can set the value to `0`.
<12> The JSON Web Token (JWT) issuer domain. The default value is set to the value specified in `oidc-discovery.$trustDomain`.
<12> The JSON Web Token (JWT) issuer domain. The value must be a valid URL.
.. Apply the configuration by running the following command:
+

@@ -0,0 +1,10 @@
// Module included in the following assemblies:
//
// * security/zero_trust_workload_identity_manageer/zero-trust-manager-oidc-federation.adoc
:_mod-docs-content-type: CONCEPT
[id="zero-trust-manager-vault-oidc-about_{context}"]
= About Vault OpenID Connect
Vault OpenID Connect (OIDC) with SPIRE creates a secure authentication method where Vault uses SPIRE as a trusted OIDC provider. A workload requests a JWT-SVID from its local SPIRE Agent, which has a unique SPIFFE ID. The workload then presents this token to Vault, and Vault validates it against the public keys on the SPIRE Server. If all conditions are met, Vault issues a short-lived Vault token to the workload, which the workload can then use to access secrets and perform actions within Vault.

@@ -0,0 +1,41 @@
:_mod-docs-content-type: ASSEMBLY
[id="zero-trust-manager-oidc-federation"]
= Zero Trust Workload Identity Manager OIDC federation
include::_attributes/common-attributes.adoc[]
:context: zero-trust-manager-oidc-federation
toc::[]
{zero-trust-full} integrates with OpenID Connect (OIDC) by allowing a SPIRE server to act as an OIDC provider. This enables workloads to request and receive verifiable JSON Web Token SPIFFE Verifiable Identity Documents (JWT-SVIDs) from the local SPIRE agent. External systems, such as cloud providers, can then use the OIDC discovery endpoint exposed by the SPIRE server to retrieve the public keys needed to validate those tokens.
:FeatureName: Zero Trust Workload Identity Manager for Red{nbsp}Hat OpenShift
include::snippets/technology-preview.adoc[]
The following providers are verified to work with SPIRE OIDC federation:
* Vault
* Azure Entra ID
// About the Entra ID OIDC
include::modules/zero-trust-manager-entraid-oidc-about.adoc[leveloffset=+1]
// configure OIDC route
include::modules/zero-trust-manager-create-route-oidc.adoc[leveloffset=+1]
// configure Azure
include::modules/zero-trust-manager-configure-azure.adoc[leveloffset=+1]
// About the Vault OIDC
include::modules/zero-trust-manager-vault-oidc-about.adoc[leveloffset=+1]
// Install the Vault OIDC
include::modules/zero-trust-manager-install-vault-oidc.adoc[leveloffset=+1]
// Initialize the Vault OIDC
include::modules/zero-trust-manager-initialize-vault-oidc.adoc[leveloffset=+1]