Failed to obtain the location of the Google Cloud Storage bucket - google-cloud-storage

I am trying to transfer data from S3 to GCS using a Java client, but I get this error:
Failed to obtain the location of the Google Cloud Storage (GCS) bucket
___ due to insufficient permissions. Please verify that the necessary permissions have been granted.
I am using a service account with the Project Owner role, which should grant unlimited access to all project resources.

The Storage Transfer Service uses an internal, Google-managed service account to move the data back and forth. This account is created automatically and should not be confused with the service accounts you create yourself.
You need to grant this account the "Legacy Bucket Writer" role (roles/storage.legacyBucketWriter) on the destination bucket.
This is written in the documentation, but it's VERY easy to miss:
https://cloud.google.com/storage-transfer/docs/configure-access

Thanks to @thnee's comment, I was able to piece together a Terraform script that grants the permissions to the hidden Storage Transfer service account:
data "google_project" "project" {}
locals {
// the project number is also available from the Project Info section on the Dashboard
transfer_service_id = "project-${data.google_project.project.number}#storage-transfer-service.iam.gserviceaccount.com"
}
resource "google_storage_bucket" "backups" {
location = "us-west1"
name = "backups"
storage_class = "REGIONAL"
}
data "google_iam_policy" "transfer_job" {
binding {
role = "roles/storage.legacyBucketReader"
members = [
"serviceAccount:${local.transfer_service_id}",
]
}
binding {
role = "roles/storage.objectAdmin"
members = [
"serviceAccount:${local.transfer_service_id}",
]
}
binding {
role = "roles/storage.admin"
members = [
"user:<GCP console user>",
"serviceAccount:<terraform user doing updates>",
]
}
}
resource "google_storage_bucket_iam_policy" "policy" {
bucket = "${google_storage_bucket.backups.name}"
policy_data = "${data.google_iam_policy.transfer_job.policy_data}"
}
Note that this replaces the default ACLs (OWNER and READER) on the bucket, which would prevent you from accessing the bucket in the console. We therefore add roles/storage.admin back for the console users and for the Terraform service account making the change.
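If you would rather not replace the bucket's entire IAM policy (and risk dropping bindings you still need), additive bindings are an alternative. A minimal sketch, assuming the same local value as above; google_storage_bucket_iam_member adds each binding without touching the others:

# Additive alternative: grant only the transfer service its roles,
# leaving the bucket's other bindings and default ACLs untouched.
resource "google_storage_bucket_iam_member" "transfer_reader" {
  bucket = google_storage_bucket.backups.name
  role   = "roles/storage.legacyBucketReader"
  member = "serviceAccount:${local.transfer_service_id}"
}

resource "google_storage_bucket_iam_member" "transfer_object_admin" {
  bucket = google_storage_bucket.backups.name
  role   = "roles/storage.objectAdmin"
  member = "serviceAccount:${local.transfer_service_id}"
}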

I was logged into my work account in the gcloud CLI. Re-authenticating with gcloud auth login solved my seemingly random issues.

Related

How do I automatically create service principals or MSIs with Terraform for use in Azure Pipelines to manage AKS resources?

I'm following the official docs to create Azure Kubernetes clusters. The docs state that I need to create a service principal first, manually, and provide the client_id and client_secret.
Doing it manually is not an option.
Here is the code for my service principal. It's decorated with links to the most recent Terraform docs for reference.
data "azurerm_subscription" "current" {}
data "azuread_client_config" "current" {}
resource "random_id" "current" {
byte_length = 8
prefix = "ExternalDnsTf"
}
# Create Azure AD App.
# https://registry.terraform.io/providers/hashicorp/azuread/latest/docs/resources/application
resource "azuread_application" "current" {
display_name = random_id.current.hex
owners = [data.azuread_client_config.current.object_id]
}
# Create Service Principal associated with the Azure AD App
# https://registry.terraform.io/providers/hashicorp/azuread/latest/docs/resources/service_principal
resource "azuread_service_principal" "current" {
application_id = azuread_application.current.application_id
app_role_assignment_required = false
owners = [data.azuread_client_config.current.object_id]
}
# Create Service Principal password
# https://registry.terraform.io/providers/hashicorp/azuread/latest/docs/resources/application_password
resource "azuread_application_password" "current" {
application_object_id = azuread_application.current.object_id
}
# Create role assignment for service principal
# https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/role_assignment
resource "azurerm_role_assignment" "current" {
scope = data.azurerm_subscription.current.id
role_definition_name = "Contributor"
# When assigning to a SP, use the object_id, not the appId
# see: https://learn.microsoft.com/en-us/azure/role-based-access-control/role-assignments-cli
principal_id = azuread_service_principal.current.object_id
}
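For reference, the plan is to feed the resulting credentials into the cluster, roughly like this (a sketch; the cluster's other attributes are omitted):

resource "azurerm_kubernetes_cluster" "current" {
  # ...name, location, resource group, default node pool, etc...

  # Wire the SP created above into the cluster.
  service_principal {
    client_id     = azuread_application.current.application_id
    client_secret = azuread_application_password.current.value
  }
}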
I keep getting the following error in my pipeline (note: I am the owner of my subscription):
ApplicationsClient.BaseClient.Post(): unexpected status 403 with OData
│ error: Authorization_RequestDenied: Insufficient privileges to complete the
│ operation.
What I'm trying to do is eliminate the manual steps for setting up supporting services. Take ExternalDNS, for example. The Azure docs state that I need to run:
az ad sp create-for-rbac -n ExternalDnsServicePrincipal
az role assignment create --role "Reader" --assignee <appId GUID> --scope <resource group resource id>
az role assignment create --role "Contributor" --assignee <appId GUID> --scope <dns zone resource id>
Ultimately, I'm trying to create the Terraform version of those Azure CLI commands.
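A direct Terraform translation of the two role assignments might look something like this sketch (the resource group and DNS zone references are placeholders for resources defined elsewhere):

# Hypothetical equivalents of the az role assignment commands above.
resource "azurerm_role_assignment" "external_dns_reader" {
  # Placeholder scope: the resource group containing the DNS zone.
  scope                = azurerm_resource_group.dns.id
  role_definition_name = "Reader"
  principal_id         = azuread_service_principal.current.object_id
}

resource "azurerm_role_assignment" "external_dns_contributor" {
  # Placeholder scope: the DNS zone itself.
  scope                = azurerm_dns_zone.zone.id
  role_definition_name = "Contributor"
  principal_id         = azuread_service_principal.current.object_id
}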
Support for create-for-rbac was a feature request on GitHub. That used to work great, but so much has changed that it's not applicable to current API versions. Also, with AAD Graph being deprecated in favor of the Microsoft Graph API, I wonder if I'm getting snagged on that.
The ExternalDNS docs also suggest Managed Service Identities (MSI). Service principals, MSI, Microsoft Graph API integration: honestly, I don't care which one is used. Whatever the current best practice is, that's fine, so long as I do not have to log into the portal to manually create resources or grant permissions, or manually run az cli commands.
EDIT: Permissions clarification
I'm using Terraform, of course, to provision resources. If I do all of this without Terraform (manually or with a bash script), I use the Azure CLI and start setting permissions as follows:
az login
az account set -s <my-subscription-id>
I am the owner of my subscription. I can run all commands, create SPs, MSIs, assign roles, etc, with no problem.
In the pipelines, I am using the charleszipp az pipelines terraform plugin. In the logs, I see:
az login --service-principal -t <my-tenant-id> -u *** -p ***
az account set -s <my-subscription-id>
I'm not sure if that makes a difference. I interpret it as: ultimately, commands are executed after signing in and setting the account subscription, just as I do manually.
Technically, I'm not using a service connection in several of these tasks. However, where one is required, I have created a service connection of type Azure Resource Manager, scoped to the subscription level.
Yet if I click "Manage Service Principal", it takes me to the portal, where there are no permissions defined.
While I am the owner of my subscription, I do not own the root management group; my subscription was provisioned by a parent organization, and ultimately they control Active Directory. I cannot add or edit permissions. If I try to add any under "API permissions" and select Microsoft Graph, it says that authorization is required, and "Grant admin consent for <parent organization>" is greyed out.
But why would that be important if I'm the owner of my subscription? If I can do whatever I want via the az CLI, what's preventing me from doing the same in the pipeline?
I am using a user-assigned managed identity for this; it seemed the most straightforward option and worked fine for me.
resource "azurerm_user_managed_identity", "mi" {
resource_group_name = "rg"
name = "mi"
location = "eastus"
}
resource "azurerm_role_assignment" "ra" {
scope = azurerm_subnet.sn.id // subnet I created earlier
role_definition_name = "Network Contributor" // required with kubenet
principal_id = azurerm_user_managed_identity.mi.principal_id
}
resource "azurerm_kubernetes_cluster" "aks" {
name = "aks"
identity {
type = "UserAssigned"
user_assigned_identity_id = azurerm_user_managed_identity.mi.id
}
<...remaining attributes...>
depends_on = [azurerm_role_assignment.ra] // just to be safe
}

Azure Terraform Unable to Set secret in KeyVault during deployment

I am facing a blocker for which I can't seem to find a practical solution.
I am using Azure Terraform to create a storage account, and during the release pipeline I would like to set the connection string of this storage account as a secret in an existing Key Vault.
So far I am able to retrieve secrets from this Key Vault, as I am using a managed identity which has the following permissions on the Key Vault:
keys: get, list
secrets: get, list, set
certificates: get, list
The workflow in my Terraform is as follows:
Retrieve the Key Vault data:
data "azurerm_key_vault" "test" {
name = "test"
resource_group_name = "KeyVault-test"
}
Retrieve the user assigned identity data:
data "azurerm_user_assigned_identity" "example" {
name = "mng-identity-example"
resource_group_name = "managed-identity-example"
}
With those two data sources in place, I tried to create the secret as follows:
resource "azurerm_key_vault_secret" "secretTest" {
key_vault_id = data.azurerm_key_vault.test.id
name = "secretTest"
value = azurerm_storage_account.storageaccount.primary_connection_string
}
When the release pipeline runs this Terraform, it fails with an Access Denied error.
That is understandable, as this Terraform run does not have permission to set or retrieve the secret.
This is the part I am blocked on.
Can anyone help me understand how I can use my managed identity to set this secret?
I looked into the Terraform documentation but couldn't find any steps or explanation.
Thank you so much for your help and time; if you need more info, just ask.
Please make sure that the service principal you are using to log into Azure from Terraform has the same permissions you assigned to the managed identity:
provider "azurerm" {
features {}
subscription_id = "00000000-0000-0000-0000-000000000000"
client_id = "00000000-0000-0000-0000-000000000000" ## This Client ID needs to have the permissions in Keyvault access policy which you have provided to the managed identity.
client_secret = var.client_secret
tenant_id = "00000000-0000-0000-0000-000000000000"
}
OR
If you are using a service connection to connect the DevOps pipeline to Azure and use it in Terraform, then you need to give that DevOps service connection (service principal) the permissions in the access policy.
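For reference, a minimal sketch of granting that identity the access policy in Terraform; it assumes the azurerm_client_config data source resolves to the identity Terraform signs in with, and permission-name casing varies across provider versions:

data "azurerm_client_config" "current" {}

# Grant the identity that runs Terraform the secret permissions it needs on the existing vault.
resource "azurerm_key_vault_access_policy" "pipeline" {
  key_vault_id = data.azurerm_key_vault.test.id
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = data.azurerm_client_config.current.object_id

  secret_permissions = ["Get", "List", "Set"]
}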

Azure terraform Use existing managed identity to authenticate web app

I hope somebody can help me solve this issue.
Using Terraform I scripted some resource groups and web apps. Those web apps have configuration settings that need to access a key vault to retrieve secrets.
But to do so, I need to activate the Azure identity on the web app.
So far everything is working just fine. But as I am still learning how to use Terraform with Azure, I keep destroying and spinning up the web apps, which means that every time I need to activate the identity and add the access policy to the key vault.
So what I did is create an Azure managed identity resource in the same resource group as the key vault. Now I would like to use this managed identity to authenticate my web app every time I spin it up.
My web app code looks like this:
resource "azurerm_app_service" "app-hri-stg-eur-configurations-api" {
name = "app-hri-${var.env}-${var.reg-name}-webapp-testing"
app_service_plan_id = azurerm_app_service_plan.ASP-hri-stg-eur-webapp.id
location = var.location
resource_group_name = azurerm_resource_group.rg-hri-stg-eur-webapp.name
app_settings = {
"secret" = "#Microsoft.KeyVault(SecretUri=https://mykeyvault.vault.azure.net/secrets/test)"
...... <My configuration>
}
identity {
type = "UserAssigned"
}
}
And here is where I am getting confused: how can I reference the managed identity I have already created, so that my web app is granted access to read the secrets?
I hope I made my question clear enough; if not, just ask for more info.
Thank you so much for any help you can provide.
An identity block supports the following:
type - (Required) Specifies the identity type of the App Service. Possible values are SystemAssigned (where Azure will generate a Service Principal for you), UserAssigned where you can specify the Service Principal IDs in the identity_ids field, and SystemAssigned, UserAssigned which assigns both a system managed identity as well as the specified user assigned identities.
identity_ids - (Optional) Specifies a list of user managed identity ids to be assigned. Required if type is UserAssigned.
So you should have something like this:
data "azurerm_user_assigned_identity" "example" {
name = "name_of_user_assigned_identity"
resource_group_name = "name_of_resource_group"
}
If your identity is in another resource group. It allows you to reference to already created azure resource.
identity {
type = "UserAssigned",
identity_ids = [data.azurerm_user_assigned_identity.example.id]
}
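For the web app to actually read secrets, the identity also needs an access policy on the vault. A minimal sketch, assuming an azurerm_client_config data source and a placeholder vault reference (azurerm_key_vault.example is hypothetical here):

data "azurerm_client_config" "current" {}

# Hypothetical: allow the user-assigned identity to read secrets from the vault.
resource "azurerm_key_vault_access_policy" "webapp" {
  key_vault_id = azurerm_key_vault.example.id # placeholder reference
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = data.azurerm_user_assigned_identity.example.principal_id

  secret_permissions = ["Get", "List"]
}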

Deployed static website to Azure via terraform - but the blade is inaccessible with permission error?

I've got a very basic Terraform deployment going through Azure DevOps, which defines a storage account and static website. However, when I go into the Azure Portal, the static website blade gives me "Access Denied. You do not have access". All other aspects of the storage account are accessible, though, so it doesn't appear to be a general permissions issue.
Terraform doesn't support this configuration in the AzureRM provider, so I'm using the local-exec pattern to configure the static website.
Running in DevOps, my Terraform uses a service connection and runs as a service user. However, I've also tried destroying the storage account and re-running the Terraform as my own user; this doesn't make any difference.
I've also tried adding myself to the IAM on the storage account; that doesn't make a difference either.
The definition for the storage account is:
resource "azurerm_storage_account" "website-storage" {
  name                     = "website"
  resource_group_name      = "${azurerm_resource_group.prod.name}"
  location                 = var.region
  account_kind             = "StorageV2"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  provisioner "local-exec" {
    # https://github.com/terraform-providers/terraform-provider-azurerm/issues/1903
    # self.name avoids a self-referential cycle inside the provisioner
    command = "az storage blob service-properties update --account-name ${self.name} --static-website --index-document index.html --404-document 404.html"
  }
}
I'm expecting to be able to get to the static website blade within the Portal. Is there some reason why this wouldn't work?
I don't yet have an explanation for why this happened. I had earlier tried removing the storage account and re-creating it, but I removed it via the portal. This evening I tried renaming the resource in Terraform, which forced a destroy and recreate, and that works.
I had previously messed about with a StorageV1 resource / container / blob definition of the same name; potentially there was something "invisible" left over in Azure which was causing this oddness.
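As an aside, newer versions of the azurerm provider can configure the static website natively, which removes the need for the local-exec workaround. A sketch, with the other attributes as in the question:

resource "azurerm_storage_account" "website-storage" {
  # ...name, resource group, location, tier, replication as above...
  account_kind = "StorageV2" # static websites require StorageV2

  static_website {
    index_document     = "index.html"
    error_404_document = "404.html"
  }
}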

Performing the association of an existing service account to a newly created subscription in gcloud

I have a service account in gcloud that I am using to create a new topic and then a subscription to that topic, in that order.
However, I need to be able to assign the newly created subscription to the service account explicitly. In the UI this is done by going to
Pub/Sub > Subscriptions > selecting the subscription > clicking "Search member" > adding the service account.
I want to automate this using the gcloud command.
So far I have been able to:
1) Activate a service account serviceAccountA
2) Create Topic
3) Create subscription to the Topic
I then tried the following command to set an IAM policy on the service account, so as to give it the pubsub.editor role:
gcloud iam service-accounts set-iam-policy serviceAccountA <json file path>
The JSON file content is as follows:
{
  "bindings": [
    {
      "role": "roles/pubsub.editor",
      "members": ["serviceAccountA"]
    }
  ]
}
The above gcloud command results in the error:
ERROR: (serviceAccountA) PERMISSION_DENIED: Not allowed to get project settings for project <id>
I am missing something. Is there an easy way to associate the subscription with a specific service account?
I suspect the problem is that the service account you've activated doesn't have permission to give itself permissions. Try setting this with a gcloud account that has edit permissions on the project. You can switch the active account with gcloud auth login or gcloud config configurations activate <your_config>.
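Note also that the binding usually belongs on the subscription (or the project), not on the service account resource itself; gcloud pubsub subscriptions add-iam-policy-binding does this from the command line, and member strings need a serviceAccount: prefix. A minimal Terraform sketch, with placeholder names:

# Grant the service account the editor role on the subscription itself.
resource "google_pubsub_subscription_iam_member" "editor" {
  subscription = "my-subscription" # placeholder
  role         = "roles/pubsub.editor"
  member       = "serviceAccount:serviceAccountA@my-project.iam.gserviceaccount.com" # placeholder address
}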