Unable to delete Recovery Services Vault

Vault deletion error:
Recovery Services Vault cannot be deleted as there are existing resources within the vault. : DESKTOP-LHTVUDO. Please ensure all containers have been unregistered from the vault and all private endpoints associated with the vault have been deleted, and retry operation. For more details, see https://aka.ms/AB-AA4ecq5
I have already performed the steps documented here:
https://learn.microsoft.com/en-us/azure/backup/backup-azure-delete-vault?tabs=portal

Navigate to the Recovery Services vault, then perform the following:
Open the Backup items blade under the Recovery Services vault and delete all items, then delete their associated policies from the Backup policies blade. Ensure no item is stuck in the soft-deleted state; if soft delete is enabled on the vault, it needs to be disabled first.
For details on Soft Delete, refer to https://learn.microsoft.com/en-us/azure/backup/backup-azure-security-feature-cloud
Under the Backup infrastructure blade in the vault, you need to unregister any servers/agents, storage accounts/file shares, etc. (it is most likely this, in your case, that is blocking you from deleting the vault). See the CLI sketch below.
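If you prefer to do the cleanup from the command line, here is a rough Azure CLI sketch (the vault, resource group, container, and item names are placeholders, and the exact --backup-management-type depends on what is registered; the MARS agent server named in your error can also be removed from the Backup Infrastructure > Protected Servers blade):
# Disable soft delete on the vault
az backup vault backup-properties set --name MyVault --resource-group MyResourceGroup --soft-delete-feature-state Disable
# List backup items, then stop protection and delete the backup data for each
az backup item list --vault-name MyVault --resource-group MyResourceGroup --output table
az backup protection disable --vault-name MyVault --resource-group MyResourceGroup --container-name MyContainer --item-name MyItem --delete-backup-data true --yes
# List and unregister storage-account containers (file shares)
az backup container list --vault-name MyVault --resource-group MyResourceGroup --backup-management-type AzureStorage --output table
az backup container unregister --vault-name MyVault --resource-group MyResourceGroup --container-name MyContainer --backup-management-type AzureStorage
# Once everything is gone, delete the vault itself
az backup vault delete --name MyVault --resource-group MyResourceGroup --yes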

Related

How to create an AKS StatefulSet with Azure Storage Key

I know it is possible to use Azure Files from AKS if you give permissions over the storage account to your service principal or managed identity.
But is it possible to create a StatefulSet if you are not allowed to grant such access and only have the storage key?
With normal Deployments it is possible, since you only need the secret and to use secretName, or even include shareName so that the share name is always the same.
But when it comes to a StatefulSet, which uses volumeClaimTemplates, it seems impossible unless you have permissions over the storage account, as mentioned before.
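For context, the "normal deploy" approach mentioned above only needs a Kubernetes Secret holding the storage account name and key, which the azureFile volume then references via secretName (and shareName for a fixed share). A minimal sketch, with placeholder account, key, and secret names:
# Secret consumed by the azureFile volume via secretName
kubectl create secret generic azure-files-secret \
  --from-literal=azurestorageaccountname=mystorageaccount \
  --from-literal=azurestorageaccountkey='<storage-account-key>'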

Rename the EKS creator's IAM user name via aws cli

If we have a role change in the team, I read that the EKS creator can NOT be transferred. Can we instead rename the creator's IAM user name via the aws cli? Will that break EKS?
I can only find ways to add a new user using the configmap, but this configmap doesn't contain the root user.
$ kubectl edit configmap aws-auth --namespace kube-system
There is no way to transfer the root user of an EKS cluster to another IAM user. The only way to do this would be to delete the cluster and recreate it with the new IAM user as the root user.
Can we instead rename the creator's IAM user name via aws cli? Will that break EKS?
The creator record is immutable and managed internally by EKS. This record is simply not accessible using the CLI and cannot be amended (or deleted).
How do we know whether a cluster was created by an IAM role or an IAM user?
If you cannot find the identity (userIdentity.arn) in CloudTrail that invoked CreateCluster (eventName) for the cluster (responseElements.clusterName) in the last 90 days, you need to raise a case with AWS Support to obtain the identity.
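For example, assuming the AWS CLI is configured against the right account and region, you can search the last 90 days of CloudTrail for CreateCluster calls and look for your cluster name in the output:
# Each returned event is a JSON document containing userIdentity.arn and responseElements.clusterName
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=CreateCluster \
  --query 'Events[].CloudTrailEvent' \
  --output text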
Is it safe to delete the creator IAM user?
Typically, you start by deactivating the creator IAM user account if you are not sure about side effects. You can proceed to delete the account later once you are confident it is safe to do so.
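Deactivating usually means disabling the user's access keys and removing console access rather than deleting anything. A rough sketch with the standard IAM CLI commands (the user name and key ID are placeholders):
# List the creator's access keys, then mark them inactive
aws iam list-access-keys --user-name cluster-creator
aws iam update-access-key --user-name cluster-creator --access-key-id AKIAEXAMPLEKEYID --status Inactive
# Remove console access by deleting the console password
aws iam delete-login-profile --user-name cluster-creator
Both steps are reversible (reactivate the key, recreate the login profile) if something does break.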
As already mentioned in the answer by Muhammad, it is not possible to transfer the root/creator role to another IAM user.
To avoid getting into the situation that you describe, or any other situation where the creator of the cluster should not stay root, it is recommended to not create clusters with IAM users but with assumed IAM roles instead.
This leads to the IAM role becoming the "creator", meaning that you can use IAM access management to control who can actually assume the given role and thus act as root.
You can either have dedicated roles for each cluster or one role for multiple clusters, depending on how you plan to do access management. The same limits will however apply later, meaning that you cannot switch the creator role afterwards, so this must be properly planned in advance.
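As a minimal sketch (the profile name, role ARN, cluster name, and region are placeholders, and eksctl is just one way to create the cluster), you would create the cluster while operating under the shared role so that the role, not a personal IAM user, is recorded as the creator:
# ~/.aws/config (hypothetical profile that assumes the shared admin role)
# [profile eks-admin]
# role_arn       = arn:aws:iam::123456789012:role/eks-cluster-admin
# source_profile = default

# Create the cluster under that profile; the assumed role becomes the creator
AWS_PROFILE=eks-admin eksctl create cluster --name my-cluster --region eu-west-1
Anyone who needs "creator" access later is then granted permission to assume eks-cluster-admin instead of relying on a specific user account.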

Is it possible to undo kubernetes cluster delete command?

Is it possible to undo "gcloud container clusters delete" command?
Unfortunately not: Deleting a Cluster
All the source volumes and data (that are not persistent) are removed, and unless you made a conscious choice to take a backup of the cluster, it would be a permanent operation.
If a backup does exist, it would be a restore from backup rather than a revert on the delete command.
I suggest reading a bit more into the Administration of a cluster on Gcloud for more info: Administration of Clusters Overview
Unfortunately, if you delete a cluster, it is impossible to undo it.
In the GCP documentation you can check what will be deleted after gcloud container clusters delete and what will remain after this command.
One of the things that will remain is persistent disk volumes. This means that if your ReclaimPolicy was set to Retain and your PV status is Released, you will be able to get the data from the PersistentVolume. To do that you will have to create a PersistentVolumeClaim. More info about ReclaimPolicy here.
Run $ kubectl get pv to check whether it is still bound and what its ReclaimPolicy is. A similar case can be found in this GitHub thread.
In this documentation you can find step-by-step instructions on how to connect a pod to a specific PV.
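As a rough sketch (the PV and claim names are placeholders, and the storage class and size must match your released PV), the recovery usually looks like this: clear the stale claimRef so the PV returns to Available, then create a PVC that binds to it by name:
# Check status and reclaim policy of the surviving volume
kubectl get pv
# Drop the old claim reference; the PV moves from Released back to Available
kubectl patch pv my-released-pv -p '{"spec":{"claimRef":null}}'
# Create a claim that binds explicitly to that PV
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: recovered-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard   # match the PV's storageClassName
  resources:
    requests:
      storage: 10Gi            # must not exceed the PV's capacity
  volumeName: my-released-pv
EOF
A pod can then mount recovered-claim like any other PVC to read the old data.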
In addition, please note that you can back up your cluster. To do this you can use, for example, Ark.

Azure DevOps > Helm > Azure Kubernetes Deployment - Deletes Azure File share when deployment is deleted

TL;DR
My pods' mounted Azure file shares are (inconsistently) being deleted by either Kubernetes or Helm when a deployment is deleted.
Explanation
I've recently transitioned to using Helm for deploying Kubernetes objects on my Azure Kubernetes Cluster via the DevOps release pipeline.
I've started to see some unexpected behaviour in relation to the Azure File Shares that I mount to my Pods (as Persistent Volumes with associated Persistent Volume Claims and a Storage Class) as part of the deployment.
Whilst I've been finalising my deployment, I've been pushing it out via the Azure DevOps release pipeline using the built-in Helm tasks, which have been working fine. When I've wanted to fix or improve the process, I've then either manually deleted the objects on the Kubernetes Dashboard (UI), or used PowerShell (command line) to delete the deployment.
For example:
helm delete myapp-prod-73
helm del --purge myapp-prod-73
Not every time, but fairly frequently, I'm seeing the underlying Azure File Shares also being deleted as I work through this process. There's very little around the web on this, but I've also seen an article outlining similar issues at: https://winterdom.com/2018/07/26/kubernetes-azureFile-dynamic-volumes-deleting.
Has anyone in the community come across this issue?
Credit goes to https://twitter.com/tomasrestrepo (the author of the article I mentioned above) for pointing me in the right direction.
The behaviour here was a consequence of having the Reclaim Policy on the Storage Class & Persistent Volume set to "Delete". When switching over to Helm, I began following their commands to delete/purge the releases as I was testing. What I didn't realise was that deleting the release also meant that Helm / K8s would reach out and delete the underlying volume (in this case an Azure File Share). This is documented over at: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#delete
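If you want to guard against this (the PV name below is a placeholder), you can check the reclaim policy and flip it to Retain before deleting a release:
# The RECLAIM POLICY column shows Delete or Retain for each PV
kubectl get pv
# Switch an existing PV to Retain so deleting the claim/release keeps the share
kubectl patch pv my-azurefile-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'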
I'll leave this Q & A here for anyone else who misses this subtlety in the way Storage Classes, Persistent Volumes (PVs) & the underlying storage operate under K8s / Helm.
Note: I think this issue was made slightly more obscure by the fact I was manually creating the Azure Fileshare (through the Azure Portal) and trying to mount that as a static volume (as per https://learn.microsoft.com/en-us/azure/aks/azure-files-volume) within my Helm Chart, but that the underlying volume wasn't immediately being deleted when the release was deleted (sometimes an hour later?).

Is it necessary to remount a backend when the vault seals?

I would like to use Hashicorp's Vault with the AWS backend. I've automated the process for unsealing the vault. If the vault were to ever seal, do I have to mount the AWS backend again?
Basically, do mounts get unmounted when the vault seals?
I'm just trying to figure out if I need to add the mount command to my unseal automation.
Found the answer myself. I'll post here for anyone else looking.
Backends are not automatically unmounted. In fact, the documentation states that unmounting a backend destroys all data:
When a secret backend is unmounted, all of its secrets are revoked (if they support it), and all of the data stored for that backend in the physical storage layer is deleted.
It would be pretty bad if sealing the vault also destroyed all your data. Heh heh.
I was able to test this for myself:
[vagrant@localhost ~]$ vault mount aws
Successfully mounted 'aws' at 'aws'!
[vagrant@localhost ~]$ vault mounts
Path     Type     Default TTL  Max TTL  Force No Cache  Replication Behavior  Description
aws/     aws      system       system   false           replicated
secret/  generic  system       system   false           replicated            generic secret storage
sys/     system   n/a          n/a      false           replicated            system endpoints used for control, policy and debugging
[vagrant@localhost ~]$ vault seal
Vault is now sealed.
[vagrant@localhost ~]$ vault unseal
Key (will be hidden):
[vagrant@localhost ~]$ vault mounts
Path     Type     Default TTL  Max TTL  Force No Cache  Replication Behavior  Description
aws/     aws      system       system   false           replicated
secret/  generic  system       system   false           replicated            generic secret storage
sys/     system   n/a          n/a      false           replicated            system endpoints used for control, policy and debugging
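So the unseal automation doesn't need a mount step at all. A minimal sketch of the relevant part, assuming the old seal/unseal CLI shown above and unseal keys supplied by your own secret store ($UNSEAL_KEY_1 etc. are placeholders):
# Unseal only if needed; mounts reappear on their own once unsealed
if vault status | grep -q 'Sealed: true'; then
  vault unseal "$UNSEAL_KEY_1"
  vault unseal "$UNSEAL_KEY_2"
  vault unseal "$UNSEAL_KEY_3"
fi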