// Module included in the following assemblies:
//
// * migrating_from_ocp_3_to_4/installing-3-4.adoc
// * migration_toolkit_for_containers/installing-mtc.adoc
// * backup_and_restore/application_backup_and_restore/installing/installing-oadp-aws.adoc
:_mod-docs-content-type: PROCEDURE
[id="migration-configuring-aws-s3_{context}"]
= Configuring Amazon Web Services
ifdef::installing-3-4,installing-mtc[]
You configure Amazon Web Services (AWS) S3 object storage as a replication repository for the {mtc-first}.
endif::[]
ifdef::installing-oadp-aws[]
You configure Amazon Web Services (AWS) for the OpenShift API for Data Protection (OADP).
endif::[]
.Prerequisites
* You must have the link:https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html[AWS CLI] installed.
ifdef::installing-3-4,installing-mtc[]
* The AWS S3 storage bucket must be accessible to the source and target clusters.
* If you are using the snapshot copy method:
** You must have access to Amazon Elastic Block Store (EBS).
** The source and target clusters must be in the same region.
** The source and target clusters must have the same storage class.
** The storage class must be compatible with snapshots.
endif::[]
.Procedure
. Set the `BUCKET` variable:
+
[source,terminal]
----
$ BUCKET=<your_bucket>
----
. Set the `REGION` variable:
+
[source,terminal]
----
$ REGION=<your_region>
----
. Create an AWS S3 bucket:
+
[source,terminal]
----
$ aws s3api create-bucket \
--bucket $BUCKET \
--region $REGION \
--create-bucket-configuration LocationConstraint=$REGION <1>
----
<1> `us-east-1` does not support a `LocationConstraint`. If your region is `us-east-1`, omit `--create-bucket-configuration LocationConstraint=$REGION`.
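+
For example, if your region is `us-east-1`, the command might look like the following sketch, which omits the location constraint:
+
[source,terminal]
----
$ aws s3api create-bucket \
    --bucket $BUCKET \
    --region us-east-1
----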
. Create an IAM user:
+
[source,terminal]
----
$ aws iam create-user --user-name velero <1>
----
<1> If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster.
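+
For example, if you back up two clusters to two different buckets, you might create one user for each cluster. The user names in the following sketch are illustrative only:
+
[source,terminal]
----
$ aws iam create-user --user-name velero-cluster1
$ aws iam create-user --user-name velero-cluster2
----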
. Create a `velero-policy.json` file:
+
[source,terminal]
----
$ cat > velero-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeVolumes",
                "ec2:DescribeSnapshots",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:CreateSnapshot",
                "ec2:DeleteSnapshot"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObject",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::${BUCKET}/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": [
                "arn:aws:s3:::${BUCKET}"
            ]
        }
    ]
}
EOF
----
. Attach the policy to give the `velero` user the minimum necessary permissions:
+
[source,terminal]
----
$ aws iam put-user-policy \
--user-name velero \
--policy-name velero \
--policy-document file://velero-policy.json
----
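+
Optionally, you can check that the inline policy is attached to the `velero` user, for example:
+
[source,terminal]
----
$ aws iam get-user-policy \
  --user-name velero \
  --policy-name velero
----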
. Create an access key for the `velero` user:
+
[source,terminal]
----
$ aws iam create-access-key --user-name velero
----
+
.Example output
+
[source,terminal]
----
{
    "AccessKey": {
        "UserName": "velero",
        "Status": "Active",
        "CreateDate": "2017-07-31T22:24:41.576Z",
        "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>,
        "AccessKeyId": <AWS_ACCESS_KEY_ID>
    }
}
----
ifdef::installing-3-4,installing-mtc[]
+
Record the `AWS_SECRET_ACCESS_KEY` and the `AWS_ACCESS_KEY_ID`. You use the credentials to add AWS as a replication repository.
endif::[]
ifdef::installing-oadp-aws[]
. Create a `credentials-velero` file:
+
[source,terminal,subs="attributes+"]
----
$ cat << EOF > ./credentials-velero
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
EOF
----
+
You use the `credentials-velero` file to create a `Secret` object for AWS before you install the Data Protection Application.
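+
For example, assuming the default `openshift-adp` namespace and the default `cloud-credentials` secret name, the command to create the `Secret` object might look like the following sketch:
+
[source,terminal]
----
$ oc create secret generic cloud-credentials \
    -n openshift-adp \
    --from-file cloud=credentials-velero
----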
endif::[]