* pkg/destroy/aws/ec2helpers.go
** The bulk of the changes are in the ec2helpers file. All of the SDK v1 imports
are removed except for session, as that one is currently ingrained in too many files.
* pkg/destroy/aws/aws.go
** Add clients for ELB, ELBv2, and IAM to the cluster removal struct. Even though
these changes are mainly to ec2helpers, the other clients were required for
certain operations (see the sketch below).
** The rest of the file updates alter the ARN import to come from AWS SDK v2.
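A minimal sketch of the struct change, assuming illustrative field names (the installer's actual declarations may differ):

    package aws

    import (
        "github.com/aws/aws-sdk-go-v2/service/elasticloadbalancing"
        "github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2"
        "github.com/aws/aws-sdk-go-v2/service/iam"
    )

    // ClusterUninstaller carries one SDK v2 client per service used during
    // teardown; the ELB, ELBv2, and IAM clients join the existing EC2 client.
    type ClusterUninstaller struct {
        // ... existing fields ...
        elbClient   *elasticloadbalancing.Client
        elbv2Client *elasticloadbalancingv2.Client
        iamClient   *iam.Client
    }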
* pkg/destroy/aws/iamhelpers.go
** Remove or change all imports from AWS SDK v1 to v2 (illustrated below).
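The typical shape of the migration, sketched with hypothetical helpers (newIAMClient and tagValue are illustrative, not the file's actual contents):

    package aws

    import (
        // v1 (removed): "github.com/aws/aws-sdk-go/service/iam"
        // v2 (added): the per-service module plus its nested types package.
        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/service/iam"
        iamtypes "github.com/aws/aws-sdk-go-v2/service/iam/types"
    )

    // newIAMClient shows the v2 construction style: clients are built from
    // an aws.Config value rather than a v1 session.
    func newIAMClient(cfg aws.Config) *iam.Client {
        return iam.NewFromConfig(cfg)
    }

    // tagValue reads a tag using the v2 types and pointer helpers.
    func tagValue(tags []iamtypes.Tag, key string) string {
        for _, t := range tags {
            if aws.ToString(t.Key) == key {
                return aws.ToString(t.Value)
            }
        }
        return ""
    }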
* pkg/destroy/aws/errors.go
* pkg/destroy/aws/ec2helpers.go
** Move the error checking/formatting function out of ec2helpers and into
the errors.go file (a hedged sketch follows).
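A sketch of that helper, assuming the v2 idiom of reading codes through smithy-go (the installer's actual body and signature may differ):

    package aws

    import (
        "errors"

        "github.com/aws/smithy-go"
    )

    // HandleErrorCode extracts the service error code from an SDK v2 error;
    // smithy.APIError replaces v1's awserr.Error. (A later commit in this
    // log renames the function to handleErrorCode.)
    func HandleErrorCode(err error) string {
        var apiErr smithy.APIError
        if errors.As(err, &apiErr) {
            return apiErr.ErrorCode()
        }
        return ""
    }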
* pkg/destroy/aws/elbhelpers.go
** Remove all SDK v1 imports from the ELB helpers.
** Add a reference to the correct HandleErrorCode function.
* pkg/destroy/aws/aws.go
** Update the Route53, S3, and EFS services to SDK v2. This is slowly removing the
requirement for the AWS session.
** Vendor updates for the S3 and EFS services.
** This caused updates to other packages such as aws/config, credentials, stscreds, and
a number of AWS internal packages.
* Clean up references and use the exported config creator to create new clients in the
destroyer (see the sketch after this item).
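A minimal sketch of the pattern, with a hypothetical newDestroyClients standing in for the installer's exported config creator:

    package aws

    import (
        "context"

        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/efs"
        "github.com/aws/aws-sdk-go-v2/service/route53"
        "github.com/aws/aws-sdk-go-v2/service/s3"
    )

    // newDestroyClients replaces per-client v1 session plumbing with a
    // single v2 config load shared by every service client.
    func newDestroyClients(ctx context.Context, region string) (*route53.Client, *s3.Client, *efs.Client, error) {
        cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion(region))
        if err != nil {
            return nil, nil, nil, err
        }
        return route53.NewFromConfig(cfg), s3.NewFromConfig(cfg), efs.NewFromConfig(cfg), nil
    }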
* Migrate the use of the resource tagging API to SDK v2 (illustrated below).
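A sketch of the v2 tagging call with a paginator; listTaggedARNs and the single tag filter are illustrative, not the installer's exact code:

    package aws

    import (
        "context"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi"
        tagtypes "github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi/types"
    )

    // listTaggedARNs pages through GetResources for one tag filter and
    // collects the matching resource ARNs.
    func listTaggedARNs(ctx context.Context, client *resourcegroupstaggingapi.Client, key string, values []string) ([]string, error) {
        var arns []string
        paginator := resourcegroupstaggingapi.NewGetResourcesPaginator(client, &resourcegroupstaggingapi.GetResourcesInput{
            TagFilters: []tagtypes.TagFilter{{Key: aws.String(key), Values: values}},
        })
        for paginator.HasMorePages() {
            page, err := paginator.NextPage(ctx)
            if err != nil {
                return nil, err
            }
            for _, mapping := range page.ResourceTagMappingList {
                arns = append(arns, aws.ToString(mapping.ResourceARN))
            }
        }
        return arns, nil
    }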
* pkg/destroy/aws:
** Rename HandleErrorCode to handleErrorCode. The initial thought was that this
function could be used in other areas of the code, but it will remain in destroy for now.
* pkg/destroy/aws/shared.go:
** Remove the session import and its uses in the file.
* Fix references to the renamed handleErrorCode.
* pkg/destroy/aws/aws.go:
** Remove session from the imports. Add the agent handler to the configurations.
* Fix package updates for vendoring.
* Use the correct private and public zone clients.
* Set a destroy user agent (a sketch follows).
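A sketch of the idea, assuming illustrative user-agent strings (the installer's actual key/value may differ):

    package aws

    import (
        "context"

        "github.com/aws/aws-sdk-go-v2/aws"
        awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/smithy-go/middleware"
    )

    // loadDestroyConfig loads a config whose clients all advertise a
    // destroy-specific user agent on their requests.
    func loadDestroyConfig(ctx context.Context, region string) (aws.Config, error) {
        return config.LoadDefaultConfig(ctx,
            config.WithRegion(region),
            config.WithAPIOptions([]func(*middleware.Stack) error{
                awsmiddleware.AddUserAgentKeyValue("openshift-installer", "destroy"),
            }),
        )
    }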
* Clean up pointer references to use the AWS SDK helpers (example below).
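For example, v1's aws.String/aws.StringValue pairs map onto v2's aws.String and aws.ToString:

    package aws

    import "github.com/aws/aws-sdk-go-v2/aws"

    // pointerCleanupExample demonstrates the v2 pointer helpers.
    func pointerCleanupExample() {
        name := aws.String("example-cluster") // *string from a value
        _ = aws.ToString(name)                // value from a *string
        _ = aws.ToString(nil)                 // safe: returns "" for nil
    }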
* The ListUsers API call does not return tags for IAM users in the
response. A separate call, ListUserTags, is needed to fetch the tags
checked by the installer code.
* rebase: fix other imports after rebase
* revert: use GetRole/GetUser to fetch tags
An older commit uses ListRoleTags/ListUserTags in order to save
bandwidth by fetching only tags. However, the minimal permission policy
required for the installer does not include iam:ListUserTags or
iam:ListRoleTags, causing the deprovisioning to skip users and
roles. This is part of the reason for previous CI leaks.
This commit reverts that optimisation and simply uses GetRole/GetUser,
which are covered by the minimal permission policy (sketched below).
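A sketch of the reverted approach, with a hypothetical userTags helper (the role-side lookup via GetRole is analogous):

    package aws

    import (
        "context"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/service/iam"
        iamtypes "github.com/aws/aws-sdk-go-v2/service/iam/types"
    )

    // userTags fetches an IAM user with GetUser, whose response includes
    // the user's tags, avoiding the separate iam:ListUserTags permission.
    func userTags(ctx context.Context, client *iam.Client, name string) ([]iamtypes.Tag, error) {
        out, err := client.GetUser(ctx, &iam.GetUserInput{UserName: aws.String(name)})
        if err != nil {
            return nil, err
        }
        return out.User.Tags, nil
    }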
---------
Co-authored-by: barbacbd <barbacbd@gmail.com>
Update the GCP provider reference so that N4A instances can be validated.
Note: govmomi was pinned to v0.51.0 because the MAPI updates were causing an
automatic update to v0.52.0, resulting in build issues that currently have no
solution.
This bumps openshift/api to latest and bumps cluster-api to v1.11.
CAPI v1.11 contains breaking changes, and not all providers have
completed migrating to v1.11 yet. Unfortunately this breaks us and
leaves us unable to update openshift/api (due to conflicts with the
latest apimachinery package, which is incompatible with older
versions of CAPI).
Therefore, I am vendoring in the work-in-progress changes in CAPA,
CAPZ, & CAPX. The installer only uses this code to generate the
manifests (the controllers are vendored separately), so the risk
here is relatively low. This is meant as a temporary measure to
unblock us from making progress. Once these providers have
completed the migration to v1.11, we should be fine to remove
these replaces (sketched below) and vendor as usual.
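For illustration, the replaces take roughly this go.mod shape; the module paths are the providers' real ones, but WIP-REVISION is a placeholder for the actual pinned commits:

    replace (
        // Temporary pins to work-in-progress v1.11 branches; drop once the
        // providers complete the migration.
        sigs.k8s.io/cluster-api-provider-aws/v2 => sigs.k8s.io/cluster-api-provider-aws/v2 WIP-REVISION
        sigs.k8s.io/cluster-api-provider-azure => sigs.k8s.io/cluster-api-provider-azure WIP-REVISION
        github.com/nutanix-cloud-native/cluster-api-provider-nutanix => github.com/nutanix-cloud-native/cluster-api-provider-nutanix WIP-REVISION
    )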
Update to the latest sub-modules of the assisted-service repo. This
removes a dependency loop and restores the intended state in which the
top-level module is not (indirectly) imported.
The following services are added:
- github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2
- github.com/aws/aws-sdk-go-v2/service/elasticloadbalancing
- github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi
Bumped the OCP API to pull in DualReplica being promoted to TechPreview.
Needed to pin apimachinery to kube 1.32, since bumping to kube 1.33 caused a
dependency on cluster-api v1.11, which is currently in beta.
Signed-off-by: ehila <ehila@redhat.com>