I am facing a blocker for which I can't seem to find a practical solution.
I am using Terraform with Azure to create a storage account, and during the release pipeline I would like to set the connection string of this storage account as a secret in an existing Key Vault.
So far I am able to retrieve secrets from this Key Vault, as I am using a managed identity with the following permissions on the Key Vault:
keys: get, list
secrets: get, list, set
certificates: get, list
The workflow in my Terraform is as follows:
Retrieve the KeyVault data:
data "azurerm_key_vault" "test" {
name = "test"
resource_group_name = "KeyVault-test"
}
Retrieve the user assigned identity data:
data "azurerm_user_assigned_identity" "example" {
name = "mng-identity-example"
resource_group_name = "managed-identity-example"
}
With those two data sources in place, I tried to create the secret as follows:
resource "azurerm_key_vault_secret" "secretTest" {
key_vault_id = data.azurerm_key_vault.test.id
name = "secretTest"
value = azurerm_storage_account.storageaccount.primary_connection_string
}
When the release pipeline runs this Terraform, it fails with the error Access Denied.
This is fully understandable, as the principal running Terraform does not have permission to set or retrieve the secret.
And this is the part I am blocked on.
Can anyone help me understand how I can use my managed identity to set this secret?
I looked into the Terraform documentation but couldn't find any steps or explanation.
Thank you so much for your help and time, and please if you need more info just ask me.
Please make sure that the service principal you use to log in to Azure from Terraform has the same permissions you assigned to the managed identity.
provider "azurerm" {
features {}
subscription_id = "00000000-0000-0000-0000-000000000000"
client_id = "00000000-0000-0000-0000-000000000000" ## This Client ID needs to have the permissions in Keyvault access policy which you have provided to the managed identity.
client_secret = var.client_secret
tenant_id = "00000000-0000-0000-0000-000000000000"
}
OR
If you are using a service connection to connect the DevOps pipeline to Azure and use it in Terraform, then you need to grant that DevOps service connection (service principal) the permissions in the access policy.
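If you would rather manage that permission in Terraform itself, a minimal sketch could look like the following, reusing the Key Vault data source from the question; var.pipeline_sp_object_id is a hypothetical variable holding the object ID of the principal that runs Terraform:

# Grant the principal running Terraform the same secret permissions as
# the managed identity. "pipeline_sp_object_id" is a hypothetical
# variable for the service connection principal's object ID.
resource "azurerm_key_vault_access_policy" "pipeline" {
  key_vault_id = data.azurerm_key_vault.test.id
  tenant_id    = data.azurerm_key_vault.test.tenant_id
  object_id    = var.pipeline_sp_object_id

  secret_permissions = ["Get", "List", "Set"]
}

Note that whichever identity applies this resource must itself already be allowed to modify the vault's access policies, so the very first grant typically has to be made outside Terraform.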
When I try to deploy my Bicep template through a DevOps release pipeline I get the following error:
Deployment failed with multiple errors: 'Authorization failed for template resource '1525ed81-ad25-486e-99a3-124abd455499' of type 'Microsoft.Authorization/roleDefinitions'. The client '378da07a-d663-4d11-93d0-9c383eadcf45' with object id '378da07a-d663-4d11-93d0-9c383eadcf45' does not have permission to perform action 'Microsoft.Authorization/roleDefinitions/write' at scope '/subscriptions/8449f684-37c6-482b-8b1a-576b999c77ef/resourceGroups/rgabpddt/providers/Microsoft.Authorization/roleDefinitions/1525ed81-ad25-486e-99a3-124abd455499'.:Authorization failed for template resource '31c1daec-7d4a-4255-8528-169fc45fc14d' of type 'Microsoft.Authorization/roleAssignments'.
I understand through this post that I have to grant "something" the Owner or User Access Administrator role.
But I don't understand what user has the ObjectId 378da07a-d663-4d11-93d0-9c383eadcf45.
I tried to look it up with the following az CLI command, but it says that it cannot find a resource with that Id:
az ad user show --id 378da07a-d663-4d11-93d0-9c383eadcf45
The response it returns:
Resource '378da07a-d663-4d11-93d0-9c383eadcf45' does not exist or one of its queried reference-property objects are not present.
I'm a bit clueless here. What exactly do I have to grant permission to?
When you use a service connection in a DevOps pipeline, for example an Azure Resource Manager service connection, it creates a service principal (app registration) in the Azure portal -> Active Directory. You can find the service principal by clicking the link on the service connection. This is also why az ad user show cannot find the object ID from the error: it belongs to a service principal, not a user, so az ad sp show --id <object-id> would locate it.
When you deploy with the service connection, make sure you have given this service principal the correct permissions on the target resource. An action like Microsoft.Authorization/roleDefinitions/write is not covered by the Contributor role; it requires Owner or User Access Administrator at that scope. Otherwise the pipeline log will report the error above.
When you add the role assignment, use the service principal's object ID; note that it is different from its application ID, and also from the app registration's object ID.
It's an Azure role, not an Azure AD role. You can find the difference in the doc.
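If you manage that grant with Terraform, a minimal sketch could look like the following; the resource group name is taken from the scope in the error message, and the object ID is a placeholder you must replace with your service connection principal's object ID:

data "azurerm_resource_group" "target" {
  name = "rgabpddt"
}

# Allow the DevOps service principal to write role definitions and
# assignments at this scope. "<service-connection-object-id>" is a
# placeholder, not a real value.
resource "azurerm_role_assignment" "pipeline" {
  scope                = data.azurerm_resource_group.target.id
  role_definition_name = "User Access Administrator"
  principal_id         = "<service-connection-object-id>"
}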
I'm following the official docs to create Azure Kubernetes clusters. The docs state that I need to create a service principal first, manually, and provide the client_id and client_secret.
Doing it manually is not an option.
Here is the code for my service principal. It's decorated with links to the most recent Terraform docs for reference.
data "azurerm_subscription" "current" {}
data "azuread_client_config" "current" {}
resource "random_id" "current" {
byte_length = 8
prefix = "ExternalDnsTf"
}
# Create Azure AD App.
# https://registry.terraform.io/providers/hashicorp/azuread/latest/docs/resources/application
resource "azuread_application" "current" {
display_name = random_id.current.hex
owners = [data.azuread_client_config.current.object_id]
}
# Create Service Principal associated with the Azure AD App
# https://registry.terraform.io/providers/hashicorp/azuread/latest/docs/resources/service_principal
resource "azuread_service_principal" "current" {
application_id = azuread_application.current.application_id
app_role_assignment_required = false
owners = [data.azuread_client_config.current.object_id]
}
# Create Service Principal password
# https://registry.terraform.io/providers/hashicorp/azuread/latest/docs/resources/application_password
resource "azuread_application_password" "current" {
application_object_id = azuread_application.current.object_id
}
# Create role assignment for service principal
# https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/role_assignment
resource "azurerm_role_assignment" "current" {
scope = data.azurerm_subscription.current.id
role_definition_name = "Contributor"
# When assigning to a SP, use the object_id, not the appId
# see: https://learn.microsoft.com/en-us/azure/role-based-access-control/role-assignments-cli
principal_id = azuread_service_principal.current.object_id
}
I keep getting the following error in my pipeline: (note, I am the owner of my subscription)
ApplicationsClient.BaseClient.Post(): unexpected status 403 with OData error: Authorization_RequestDenied: Insufficient privileges to complete the operation.
What I'm trying to do is eliminate the manual steps to set up supporting services. Take ExternalDNS, for example. The Azure docs state that I need to run:
az ad sp create-for-rbac -n ExternalDnsServicePrincipal
az role assignment create --role "Reader" --assignee <appId GUID> --scope <resource group resource id>
az role assignment create --role "Contributor" --assignee <appId GUID> --scope <dns zone resource id>
Ultimately, I'm trying to create the Terraform version of those Azure CLI commands.
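For reference, the Terraform counterpart of the two role assignments might look roughly like this, reusing the service principal defined above; azurerm_resource_group.dns and azurerm_dns_zone.zone are hypothetical references, not resources defined in this post:

# Hypothetical sketch of the two "az role assignment create" commands.
# "azurerm_resource_group.dns" and "azurerm_dns_zone.zone" are
# placeholder references to the ExternalDNS resource group and zone.
resource "azurerm_role_assignment" "reader" {
  scope                = azurerm_resource_group.dns.id
  role_definition_name = "Reader"
  principal_id         = azuread_service_principal.current.object_id
}

resource "azurerm_role_assignment" "dns_contributor" {
  scope                = azurerm_dns_zone.zone.id
  role_definition_name = "Contributor"
  principal_id         = azuread_service_principal.current.object_id
}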
Support for create-for-rbac was a feature request on GitHub. That used to work great, but so much has changed that it's no longer applicable to current API versions. Also, with AAD Graph being deprecated in favor of the Microsoft Graph API, I wonder if I'm getting snagged on that.
The ExternalDNS docs also suggest Managed Service Identities (MSI). Service principals, MSI, MS Graph API integration: honestly, I don't care which one is used. Whatever the current best practice is will be fine, so long as I do not have to log into the portal to manually create resources or grant permissions, or manually run az CLI commands.
EDIT: Permissions clarification
I'm using Terraform, of course, to provision resources. If I do all of this without Terraform (manually or with a bash script), I use the Azure CLI and start setting permissions by doing the following:
az login
az account set -s <my-subscription-id>
I am the owner of my subscription. I can run all commands, create SPs, MSIs, assign roles, etc, with no problem.
In the pipelines, I am using the charleszipp az pipelines terraform plugin. In the logs, I see:
az login --service-principal -t <my-tenant-id> -u *** -p ***
az account set -s <my-subscription-id>
I'm not sure if that makes a difference. I interpret it as: ultimately, the commands are executed after signing in and setting the account subscription, just like I do manually.
Technically, I'm not using a service connection in several of these tasks. However, where one is required, I have created a service connection of type Azure Resource Manager and scoped it to the subscription level.
But if I click "manage service principal", it takes me to the portal, where no permissions are defined.
While I am the owner of my subscription, I am not the root management group; I'm owned/provisioned by someone else. Ultimately, they control Active Directory, and I cannot add or edit permissions. If I try to add any under API permissions and select Microsoft Graph, it says that authorization is required, and "Grant admin consent for <parent organization>" is greyed out.
But why would that be important if I'm the owner of my subscription? If I can do whatever I want via the az CLI, what's preventing me from doing the same in the pipeline?
I am using a user-assigned managed identity for this; it seemed the most straightforward approach and worked fine for me.
resource "azurerm_user_managed_identity", "mi" {
resource_group_name = "rg"
name = "mi"
location = "eastus"
}
resource "azurerm_role_assignment" "ra" {
scope = azurerm_subnet.sn.id // subnet I created earlier
role_definition_name = "Network Contributor" // required with kubenet
principal_id = azurerm_user_managed_identity.mi.principal_id
}
resource "azurerm_kubernetes_cluster" "aks" {
name = "aks"
identity {
type = "UserAssigned"
user_assigned_identity_id = azurerm_user_managed_identity.mi.id
}
<...remaining attributes...>
depends_on = [azurerm_role_assignment.ra] // just to be safe
}
I hope somebody can help me to solve this issue.
Using Terraform I scripted some resource groups and web apps. Those web apps have some configuration that needs to access a key vault to retrieve some secrets.
But to do so, I need to activate the Azure identity on the web app.
So far everything is working just fine. But as I am still learning how to use Terraform with Azure, I keep destroying and spinning up the web apps, which means that every time I have to activate the identity and add the access policy to the key vault again.
So what I did is create an Azure managed identity resource in the same resource group as the key vault. Now I would like to use this managed identity to authenticate my web app every time I spin it up.
My web app code looks like this:
resource "azurerm_app_service" "app-hri-stg-eur-configurations-api" {
name = "app-hri-${var.env}-${var.reg-name}-webapp-testing"
app_service_plan_id = azurerm_app_service_plan.ASP-hri-stg-eur-webapp.id
location = var.location
resource_group_name = azurerm_resource_group.rg-hri-stg-eur-webapp.name
app_settings = {
"secret" = "#Microsoft.KeyVault(SecretUri=https://mykeyvault.vault.azure.net/secrets/test)"
...... <My configuration>
}
identity {
type = "UserAssigned"
}
}
And here is where I am getting confused: how can I reference the Azure managed identity that I have already created, so that my web app is granted access to read the secrets?
I hope I made my question clear enough; if not, please just ask for more info.
Thank you so much for any help you can provide
An identity block supports the following:
type - (Required) Specifies the identity type of the App Service. Possible values are SystemAssigned (where Azure will generate a Service Principal for you), UserAssigned where you can specify the Service Principal IDs in the identity_ids field, and SystemAssigned, UserAssigned which assigns both a system managed identity as well as the specified user assigned identities.
identity_ids - (Optional) Specifies a list of user managed identity ids to be assigned. Required if type is UserAssigned.
So you should have something like this:
data "azurerm_user_assigned_identity" "example" {
name = "name_of_user_assigned_identity"
resource_group_name = "name_of_resource_group"
}
This works even if your identity is in another resource group; it lets you reference an already created Azure resource.
identity {
  type         = "UserAssigned"
  identity_ids = [data.azurerm_user_assigned_identity.example.id]
}
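To actually let the web app read secrets through that identity, you also need to grant the identity's principal an access policy on the vault. A minimal sketch, assuming a hypothetical data.azurerm_key_vault.example data source pointing at your vault:

data "azurerm_client_config" "current" {}

# Grant the user-assigned identity read access to the vault's secrets.
# "data.azurerm_key_vault.example" is a hypothetical reference to the
# Key Vault holding the secrets.
resource "azurerm_key_vault_access_policy" "webapp" {
  key_vault_id = data.azurerm_key_vault.example.id
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = data.azurerm_user_assigned_identity.example.principal_id

  secret_permissions = ["Get", "List"]
}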
I am trying to use Google's preferred "Workload Identity" method to enable my GKE app to securely access secrets from Google Secrets.
I've completed the setup and even checked all steps in the Troubleshooting section (https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity?hl=sr-ba#troubleshooting) but I'm still getting the following error in my logs:
Unhandled exception. Grpc.Core.RpcException: Status(StatusCode=PermissionDenied, Detail="Permission 'secretmanager.secrets.list' denied for resource 'projects/my-project' (or it may not exist).")
I figured the problem was due to the node pool not using the correct service account, so I recreated it, this time specifying the correct service account.
The service account has the following roles added:
Cloud Build Service Account
Kubernetes Engine Developer
Container Registry Service Agent
Secret Manager Secret Accessor
Secret Manager Viewer
The relevant source code for the package I am using to authenticate is as follows:
var data = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
var request = new ListSecretsRequest
{
    ParentAsProjectName = ProjectName.FromProject(projectName),
};

var secrets = secretManagerServiceClient.ListSecrets(request);

foreach (var secret in secrets)
{
    var value = secretManagerServiceClient.AccessSecretVersion($"{secret.Name}/versions/latest");
    string secretVal = this.manager.Load(value.Payload);
    string configKey = this.manager.GetKey(secret.SecretName);
    data.Add(configKey, secretVal);
}

Data = data;
Ref. https://github.com/jsukhabut/googledotnet
Am I missing a step in the process?
Any idea why Google is still saying "Permission 'secretmanager.secrets.list' denied for resource 'projects/my-project' (or it may not exist)?"
Like @sethvargo mentioned in the comments, you need to map the service account to your pod, because Workload Identity doesn't use the underlying node identity; instead it maps a Kubernetes service account to a GCP service account. Everything happens at the per-pod level with Workload Identity.
Assign a Kubernetes service account to the application and configure it to act as a Google service account (a Terraform sketch of these steps follows the list):
1. Create a GCP service account with the required permissions.
2. Create a Kubernetes service account.
3. Grant the Kubernetes service account permission to impersonate the GCP service account.
4. Run your workload as the Kubernetes service account.
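A hedged Terraform sketch of those four steps, with the project ID, namespace, and account names as placeholders, might look like this:

# 1. GCP service account with access to Secret Manager.
resource "google_service_account" "app" {
  account_id   = "my-app" # placeholder name
  display_name = "Workload Identity account for my-app"
}

resource "google_project_iam_member" "secrets" {
  project = "my-project" # placeholder project ID
  role    = "roles/secretmanager.secretAccessor"
  member  = "serviceAccount:${google_service_account.app.email}"
}

# 2. Kubernetes service account annotated with the GCP service account.
resource "kubernetes_service_account" "app" {
  metadata {
    name      = "my-app"
    namespace = "default"
    annotations = {
      "iam.gke.io/gcp-service-account" = google_service_account.app.email
    }
  }
}

# 3. Allow the Kubernetes service account to impersonate the GCP one.
resource "google_service_account_iam_member" "workload_identity" {
  service_account_id = google_service_account.app.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:my-project.svc.id.goog[default/my-app]"
}

# 4. Reference the Kubernetes service account from the pod spec via
# serviceAccountName: my-app.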
Also, make sure you are using the project ID, not the project name, when referencing the project or secret.
Note that you cannot update the service account of an already created pod. Refer to the link for adding a service account to your pods.
I am trying to transfer data from S3 to GCS by using a Java client but I got this error.
Failed to obtain the location of the Google Cloud Storage (GCS) bucket ___ due to insufficient permissions. Please verify that the necessary permissions have been granted.
I am using a service account with the Project Owner role, which should grant unlimited access to all project resources.
Google Transfer Service uses an internal service account to move the data back and forth. This account is created automatically and should not be confused with the service accounts you create yourself.
You need to grant this service account a role called "Legacy Bucket Writer".
This is written in the documentation, but it's VERY easy to miss:
https://cloud.google.com/storage-transfer/docs/configure-access
Thanks to @thnee's comment I was able to piece together a Terraform script that adds the permissions to the hidden Storage Transfer service account:
data "google_project" "project" {}
locals {
// the project number is also available from the Project Info section on the Dashboard
transfer_service_id = "project-${data.google_project.project.number}#storage-transfer-service.iam.gserviceaccount.com"
}
resource "google_storage_bucket" "backups" {
location = "us-west1"
name = "backups"
storage_class = "REGIONAL"
}
data "google_iam_policy" "transfer_job" {
binding {
role = "roles/storage.legacyBucketReader"
members = [
"serviceAccount:${local.transfer_service_id}",
]
}
binding {
role = "roles/storage.objectAdmin"
members = [
"serviceAccount:${local.transfer_service_id}",
]
}
binding {
role = "roles/storage.admin"
members = [
"user:<GCP console user>",
"serviceAccount:<terraform user doing updates>",
]
}
}
resource "google_storage_bucket_iam_policy" "policy" {
bucket = "${google_storage_bucket.backups.name}"
policy_data = "${data.google_iam_policy.transfer_job.policy_data}"
}
Note that this removes the default OWNER and READER ACLs present on the bucket, which would prevent you from accessing the bucket in the console. We therefore add roles/storage.admin back for the console user and for the Terraform service account that is making the change.
I was logged into my work account in the gcloud CLI. Re-authenticating with gcloud auth login solved my seemingly random issues.