I want to connect using a service principal with a certificate to authorize using PySpark. I can see the Scala code for this at this link - https://github.com/Azure/azure-event-hubs-spark/blob/master/docs/use-aad-authentication-to-connect-eventhubs.md
I have the client_id, tenant_id, and certificate details. Could someone please share the equivalent PySpark code?
You add the Azure AD service principal to the Azure Databricks workspace using the SCIM API 2.0; authentication directly from PySpark isn't available.
To authenticate using a service principal, follow the steps below.
Since you already have the client ID and tenant ID, the service principal has already been created.
Create the Azure Databricks personal access token
You’ll use an Azure Databricks personal access token (PAT) to authenticate against the Databricks REST API. To create a PAT that can be used to make API requests:
Go to your Azure Databricks workspace.
Click the user icon in the top-right corner of the screen and click User Settings.
Click Access Tokens > Generate New Token.
Copy and save the token value.
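For reference, here is a minimal Python sketch of how this PAT is then used as a bearer token against the Databricks REST API in the steps that follow (the workspace URL and token are placeholders; the clusters list call is just a sanity check):
import requests

DATABRICKS_HOST = "https://<per-workspace-url>"   # placeholder, e.g. https://adb-1234567890123456.7.azuredatabricks.net
PAT = "<personal-access-token>"                   # the token generated above

session = requests.Session()
session.headers.update({"Authorization": f"Bearer {PAT}"})

# Simple sanity check: list clusters in the workspace
resp = session.get(f"{DATABRICKS_HOST}/api/2.0/clusters/list")
resp.raise_for_status()
print(resp.json())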
Add the service principal to the Azure Databricks workspace
You add the Azure AD service principal to a workspace using the SCIM API 2.0. You must also give the service principal permission to launch automated job clusters. You can grant this through the allow-cluster-create permission. Open a terminal and run the following command to add the service principal and grant the required permissions:
curl -X POST 'https://<per-workspace-url>/api/2.0/preview/scim/v2/ServicePrincipals' \
--header 'Content-Type: application/scim+json' \
--header 'Authorization: Bearer <personal-access-token>' \
--data-raw '{
"schemas":[
"urn:ietf:params:scim:schemas:core:2.0:ServicePrincipal"
],
"applicationId":"<application-id>",
"displayName": "test-sp",
"entitlements":[
{
"value":"allow-cluster-create"
}
]
}'
Replace <per-workspace-url> with the unique per-workspace URL for your Azure Databricks workspace.
Replace <personal-access-token> with the Azure Databricks personal access token.
Replace <application-id> with the Application (client) ID for the Azure AD application registration.
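If you'd rather make this SCIM call from Python than curl, a minimal sketch with the same endpoint, payload, and placeholders:
import requests

workspace_url = "https://<per-workspace-url>"
pat = "<personal-access-token>"

payload = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:ServicePrincipal"],
    "applicationId": "<application-id>",
    "displayName": "test-sp",
    "entitlements": [{"value": "allow-cluster-create"}],
}

resp = requests.post(
    f"{workspace_url}/api/2.0/preview/scim/v2/ServicePrincipals",
    headers={
        "Authorization": f"Bearer {pat}",
        "Content-Type": "application/scim+json",
    },
    json=payload,
)
resp.raise_for_status()
print(resp.json())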
Create an Azure Key Vault-backed secret scope in Azure Databricks
Secret scopes provide secure storage and management of secrets. You’ll store the secret associated with the service principal in a secret scope. You can store secrets in an Azure Databricks-backed secret scope or an Azure Key Vault-backed secret scope. These instructions describe the Azure Key Vault-backed option:
Create an Azure Key Vault instance in the Azure portal.
Create the Azure Databricks secret scope backed by the Azure Key Vault instance.
Step 1: Create an Azure Key Vault instance
In the Azure portal, select Key Vaults > + Add and give the key vault a name.
Click Review + create.
After validation completes, click Create.
After creating the key vault, go to the Properties page for the new key vault.
Copy and save the Vault URI and Resource ID.
Step 2: Create an Azure Key Vault-backed secret scope
Azure Databricks resources can reference secrets stored in an Azure key vault by creating a Key Vault-backed secret scope. To create the Azure Databricks secret scope:
Go to the Azure Databricks Create Secret Scope page at https://<per-workspace-url>/#secrets/createScope. Replace <per-workspace-url> with the unique per-workspace URL for your Azure Databricks workspace.
Enter a Scope Name.
Enter the Vault URI and Resource ID values for the Azure key vault you created in Step 1: Create an Azure Key Vault instance.
Click Create.
Save the client secret in Azure Key Vault
In the Azure portal, go to the Key vaults service.
Select the key vault created in Step 1: Create an Azure Key Vault instance.
Under Settings > Secrets, click Generate/Import.
Select the Manual upload option and enter the client secret in the Value field.
Click Create.
Grant the service principal read access to the secret scope
You’ve created a secret scope and stored the service principal’s client secret in that scope. Now you’ll give the service principal access to read the secret from the secret scope.
Open a terminal and run the following command:
curl -X POST 'https://<per-workspace-url>/api/2.0/secrets/acls/put' \
--header 'Authorization: Bearer <personal-access-token>' \
--header 'Content-Type: application/json' \
--data-raw '{
"scope": "<scope-name>",
"principal": "<application-id>",
"permission": "READ"
}'
Replace <per-workspace-url> with the unique per-workspace URL for your Azure Databricks workspace.
Replace <personal-access-token> with the Azure Databricks personal access token.
Replace <scope-name> with the name of the Azure Databricks secret scope that contains the client secret.
Replace <application-id> with the Application (client) ID for the Azure AD application registration.
Create a job in Azure Databricks and configure the cluster to read secrets from the secret scope
You’re now ready to create a job that can run as the new service principal. You’ll use a notebook created in the Azure Databricks UI and add the configuration to allow the job cluster to retrieve the service principal’s secret.
Go to your Azure Databricks landing page and select Create Blank Notebook. Give your notebook a name and select SQL as the default language.
Enter SELECT 1 in the first cell of the notebook. This is a simple command that just displays 1 if it succeeds. If you have granted your service principal access to particular files or paths in Azure Data Lake Storage Gen 2, you can read from those paths instead.
Go to Workflows and click the + Create Job button. Give the job a name, click Select Notebook, and select the notebook you just created.
Click Edit next to the Cluster information.
On the Configure Cluster page, click Advanced Options.
On the Spark tab, enter the following Spark Config:
fs.azure.account.auth.type.acmeadls.dfs.core.windows.net OAuth
fs.azure.account.oauth.provider.type.acmeadls.dfs.core.windows.net org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
fs.azure.account.oauth2.client.id.acmeadls.dfs.core.windows.net <application-id>
fs.azure.account.oauth2.client.secret.acmeadls.dfs.core.windows.net {{secrets/<secret-scope-name>/<secret-name>}}
fs.azure.account.oauth2.client.endpoint.acmeadls.dfs.core.windows.net https://login.microsoftonline.com/<directory-id>/oauth2/token
Replace <secret-scope-name> with the name of the Azure Databricks secret scope that contains the client secret.
Replace <application-id> with the Application (client) ID for the Azure AD application registration.
Replace <secret-name> with the name associated with the client secret value in the secret scope.
Replace <directory-id> with the Directory (tenant) ID for the Azure AD application registration.
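If you prefer to set this configuration from a PySpark notebook instead of the cluster Spark config (closer to what the original question asks for), a minimal sketch is below. Here acmeadls is the example ADLS Gen2 storage account name used in the config above; replace it and the other placeholders with your own values.
# Read the service principal's client secret from the Key Vault-backed secret scope
service_credential = dbutils.secrets.get(scope="<secret-scope-name>", key="<secret-name>")

storage_account = "acmeadls"  # example storage account name - use your own

spark.conf.set(f"fs.azure.account.auth.type.{storage_account}.dfs.core.windows.net", "OAuth")
spark.conf.set(f"fs.azure.account.oauth.provider.type.{storage_account}.dfs.core.windows.net",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set(f"fs.azure.account.oauth2.client.id.{storage_account}.dfs.core.windows.net", "<application-id>")
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{storage_account}.dfs.core.windows.net", service_credential)
spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{storage_account}.dfs.core.windows.net",
               "https://login.microsoftonline.com/<directory-id>/oauth2/token")

# After this, the session can read from the storage account, for example:
# df = spark.read.csv(f"abfss://<container>@{storage_account}.dfs.core.windows.net/<path>")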
Transfer ownership of the job to the service principal
A job can have exactly one owner, so you’ll need to transfer ownership of the job from yourself to the service principal. To ensure that other users can manage the job, you can also grant Can Manage permissions to a group. In this example, we use the Permissions API to set these permissions.
Open a terminal and run the following command:
curl -X PUT 'https://<per-workspace-url>/api/2.0/permissions/jobs/<job-id>' \
--header 'Authorization: Bearer <personal-access-token>' \
--header 'Content-Type: application/json' \
--data-raw '{
"access_control_list": [
{
"service_principal_name": "<application-id>",
"permission_level": "IS_OWNER"
},
{
"group_name": "admins",
"permission_level": "CAN_MANAGE"
}
]
}'
Replace <per-workspace-url> with the unique per-workspace URL for your Azure Databricks workspace.
Replace <personal-access-token> with the Azure Databricks personal access token.
Replace <application-id> with the Application (client) ID for the Azure AD application registration.
The job will also need read permissions to the notebook. Run the following command to grant the required permissions:
curl -X PUT 'https://<per-workspace-url>/api/2.0/permissions/notebooks/<notebook-id>' \
--header 'Authorization: Bearer <personal-access-token>' \
--header 'Content-Type: application/json' \
--data-raw '{
"access_control_list": [
{
"service_principal_name": "<application-id>",
"permission_level": "CAN_READ"
}
]
}'
Replace <per-workspace-url> with the unique per-workspace URL for your Azure Databricks workspace.
Replace <notebook-id> with the ID of the notebook associated with the job. To find the ID, go to the notebook in the Azure Databricks workspace and look for the numeric ID that follows notebook/ in the notebook’s URL.
Replace <personal-access-token> with the Azure Databricks personal access token.
Replace <application-id> with the Application (client) ID for the Azure AD application registration.
You can now test the job. You run jobs with a service principal the same way you run jobs as a user, either through the UI, API, or CLI.
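For example, a minimal Python sketch of triggering the job through the Jobs API (the job ID and other placeholders are values you'd substitute):
import requests

workspace_url = "https://<per-workspace-url>"
pat = "<personal-access-token>"
job_id = 12345  # the ID of the job created above

resp = requests.post(
    f"{workspace_url}/api/2.0/jobs/run-now",
    headers={"Authorization": f"Bearer {pat}"},
    json={"job_id": job_id},
)
resp.raise_for_status()
print(resp.json())  # contains the run_id of the triggered run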
Related
I need to add Azure DevOps repos to Azure Databricks Repos by using the Databricks API at this link. I am using service principal credentials for this. The service principal is already added as an admin user to Databricks. With my service principal I can get the list of repos and even delete them. But when I want to add a repo to a folder, it raises the following error:
{
"error_code": "PERMISSION_DENIED",
"message": "Missing Git provider credentials. Go to User Settings > Git Integration to add your personal access token."
}
I am not using my own credentials with a PAT token; instead I am getting a bearer token by sending a request to https://login.microsoftonline.com/directory-id/oauth2/token and using it to authenticate. This works for get repos, delete repos, and get repos/repo-id. Only creating a repo (adding a repo by using the POST method on /repos) fails.
If I use a PAT instead of the bearer token, I get the following error:
{
"error_code": "PERMISSION_DENIED",
"message": "Azure Active Directory credentials missing. Ensure you are either logged in with your Azure
Active Directory account or have setup an Azure DevOps personal access token (PAT) in User Settings > Git Integration.
If you are not using a PAT and are using Azure DevOps with the Repos API, you must use an AAD access token. See https://learn.microsoft.com/en-us/azure/databricks/dev-tools/api/latest/aad/app-aad-token for steps to acquire an AAD access token."
}
I am using Postman to construct the requests. To reproduce the error I am using the following:
method: post
url-endpoint: https://adb-databricksid.azuredatabricks.net/api/2.0/repos
body:
url: azure-devops-repo
provider: azureDevOpsServices
path: /Repos/folder-name/testrepo
header:
Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbG... (constructed by prefixing the bearer token with the keyword "Bearer")
X-Databricks-Azure-SP-Management-Token: management token (obtained the same way as the bearer token, but using the resource https://management.core.windows.net/)
X-Databricks-Azure-Workspace-Resource-Id: /subscriptions/azure-subscription-id/resourceGroups/resourcegroup-name/providers/Microsoft.Databricks/workspaces/workspace-name
Please note that I have used exactly the same method of authentication for creating clusters and jobs and for deleting repos. Only adding and updating repos fails. I'd like to know how I can resolve the PERMISSION_DENIED error mentioned above.
To make a service principal work with Databricks Repos you need the following:
Create an Azure DevOps personal access token (PAT) for it - Azure DevOps Git repositories don't support service principal authentication via AAD tokens (see documentation). (The service connection for the SP that you configured is used for connecting to other Azure services, not to DevOps itself.)
That PAT needs to be put into the Databricks workspace using the Git Credentials API - this is done when configuring it the first time or when the token expires. When calling this API you need to use the AAD token of the service principal; a sketch of this call is shown below. (It could also be done via Terraform.)
After that's done, you can use the Databricks Repos APIs or databricks-cli to perform operations on Repos - create/update/delete them. (See the previous answer on updating the repo.)
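For illustration, a minimal Python sketch of that Git Credentials API call (POST /api/2.0/git-credentials), authenticated with the service principal's AAD token; the token, DevOps PAT, and username values are placeholders:
import requests

workspace_url = "https://adb-databricksid.azuredatabricks.net"
aad_token = "<service-principal-aad-token>"        # the same AAD bearer token used for the other Databricks API calls
devops_pat = "<azure-devops-personal-access-token>"

resp = requests.post(
    f"{workspace_url}/api/2.0/git-credentials",
    headers={"Authorization": f"Bearer {aad_token}"},
    json={
        "personal_access_token": devops_pat,
        "git_provider": "azureDevOpsServices",
        "git_username": "<devops-username>",   # may be required depending on the provider
    },
)
resp.raise_for_status()
print(resp.json())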
Have you set up the git credentials using this endpoint before creating the repo through the API?
https://docs.databricks.com/dev-tools/api/latest/gitcredentials.html#section/Authentication
If you do not set this up first, you can get that error when trying to create a repo.
Listing and deleting a repo only require valid authentication to Databricks (bearer token or PAT) and don't require valid git credentials.
When creating a repo, you need authorization on the target repository, which is on Azure DevOps in your case.
So you need to call the git-credentials endpoint (the syntax is the same on AWS and Azure) to create it.
Once your git credentials are up to date, creation of the repo should work as intended.
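Once that git credential exists for the service principal, the repo-creation call from the question (POST /api/2.0/repos) should succeed with the same AAD bearer token. A minimal Python sketch with placeholder values:
import requests

workspace_url = "https://adb-databricksid.azuredatabricks.net"
aad_token = "<service-principal-aad-token>"

resp = requests.post(
    f"{workspace_url}/api/2.0/repos",
    headers={"Authorization": f"Bearer {aad_token}"},
    json={
        "url": "https://dev.azure.com/<org>/<project>/_git/<repo>",   # placeholder Azure DevOps repo URL
        "provider": "azureDevOpsServices",
        "path": "/Repos/folder-name/testrepo",
    },
)
resp.raise_for_status()
print(resp.json())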
HashiCorp Vault is the standard product in our organization and the widely used and recommended approach for storing key-value pairs and other secrets. Applications deployed on Azure must also store/retrieve secrets from HashiCorp Vault and not from Azure Key Vault. I'm providing this just to add a bit of background to the requirement.
Now coming to the actual problem: I deployed the dotnet application on Azure App Service, enabled the system-assigned managed identity, and was able to successfully retrieve the JWT token.
As per the flow I understood from the documentation: first retrieve the token from the application deployed on Azure (with system-assigned managed identity enabled). Then pass this token for validation to Vault, which validates it against AAD using OIDC. On successful validation, I get back a Vault token which can be used to fetch secrets from Vault. A rough sketch of this flow is shown below.
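For illustration, here is a minimal Python sketch of that flow under the assumptions of this setup (Vault at http://127.0.0.1:8200 with a webapp-role JWT role, as configured below): fetch the managed-identity JWT from the Azure instance metadata endpoint, then exchange it for a Vault token. The secret path at the end is hypothetical.
import requests

# 1. Get an AAD token for the app's system-assigned managed identity
#    (IMDS endpoint, only reachable from inside the Azure-hosted app/VM)
imds_url = "http://169.254.169.254/metadata/identity/oauth2/token"
params = {"api-version": "2018-02-01", "resource": "https://management.azure.com/"}
jwt = requests.get(imds_url, params=params, headers={"Metadata": "true"}).json()["access_token"]

# 2. Exchange the JWT for a Vault token via the JWT auth method
vault_addr = "http://127.0.0.1:8200"
login = requests.post(
    f"{vault_addr}/v1/auth/jwt/login",
    json={"jwt": jwt, "role": "webapp-role"},
)
login.raise_for_status()
vault_token = login.json()["auth"]["client_token"]

# 3. Use the Vault token to read a secret (hypothetical path)
secret = requests.get(
    f"{vault_addr}/v1/secret/data/sqlconnection",
    headers={"X-Vault-Token": vault_token},
)
print(secret.json())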
To perform these steps, configuration is required on the Vault side, for which I performed all of the steps below on the Vault server installed on my local Windows machine:
Command line operations
Start the Vault server.
Open another command prompt and set the environment variables:
set VAULT_ADDR=http://127.0.0.1:8200
set VAULT_TOKEN=s.iDdVbLKPCzmqF2z0RiXPMxLk
vault auth enable jwt
vault write auth/jwt/config oidc_discovery_url=https://sts.windows.net/4a95f16f-35ba-4a52-9cb3-7f300cdc0c60/ bound_issuer=https://sts.windows.net/4a95f16f-35ba-4a52-9cb3-7f300cdc0c60/
vault read auth/jwt/config
Policy associated with the role: sqlconnection
Create a role (webapp-role) by using the following command:
curl --header "X-Vault-Token: %VAULT_TOKEN%" --insecure --request POST --data @C:\Users\48013\source\repos\HashVaultAzure\Vault-files\payload.json %VAULT_ADDR%/v1/auth/jwt/role/webapp-role
payload.json:
{
  "bound_audiences": "https://management.azure.com/",
  "bound_claims": {
    "idp": "https://sts.windows.net/4a95f16f-35ba-4a52-9cb3-7f300cdc0c60/",
    "oid": "8d2b99fb-f4f4-4afb-9ee3-276891f40a65",
    "tid": "4a95f16f-35ba-4a52-9cb3-7f300cdc0c60/"
  },
  "bound_subject": "8d2b99fb-f4f4-4afb-9ee3-276891f40a65",
  "claim_mappings": {
    "appid": "application_id",
    "xms_mirid": "resource_id"
  },
  "policies": ["sqlconnection"],
  "role_type": "jwt",
  "token_bound_cidrs": ["10.0.0.0/16"],
  "token_max_ttl": "24h",
  "user_claim": "sub"
}
vault read auth/jwt/role/webapp-role
Run the command below with the JWT token retrieved from the application deployed on Azure (with the managed identity enabled), passing it as "your_jwt". This command should return the Vault token, as shown at https://www.vaultproject.io/docs/auth/jwt:
curl --request POST --data '{"jwt": "your_jwt", "role": "webapp-role"}' http://127.0.0.1:8200/v1/auth/jwt/login
At this point I receive an error, "Missing Role", and I am stuck here, unable to find any solution.
The expected response should be a Vault token/client_token.
JWT Token decoded information
{
"aud": "https://management.azure.com",
"iss": "https://sts.windows.net/4a95f16f-35ba-4a52-9cb3-7f300cdc0c60/",
"iat": 1631172032,
"nbf": 1631172032,
"exp": 1631258732,
"aio": "E2ZgYNBN4JVfle92Tsl1b8m8pc9jAA==",
"appid": "cf5c734c-a4fd-4d85-8049-53de46db4ec0",
"appidacr": "2",
"idp": "https://sts.windows.net/4a95f16f-35ba-4a52-9cb3-7f300cdc0c60/",
"oid": "8d2b99fb-f4f4-4afb-9ee3-276891f40a65",
"rh": "0.AVMAb_GVSro1Ukqcs38wDNwMYExzXM_9pIVNgElT3kbbTsBTAAA.",
"sub": "8d2b99fb-f4f4-4afb-9ee3-276891f40a65",
"tid": "4a95f16f-35ba-4a52-9cb3-7f300cdc0c60",
"uti": "LDjkUZdlKUS4paEleUUFAA",
"ver": "1.0",
"xms_mirid": "/subscriptions/0edeaa4a-d371-4fa8-acbd-3675861b0ac8/resourcegroups/AzureAADResource/providers/Microsoft.Web/sites/hashvault-test",
"xms_tcdt": "1600006540"
}
The issue was missing configuration on both the Azure side and the Vault side.
These were the additional steps needed to make it work.
Create an Azure SPN (which is the same as creating an app registration with a client secret):
az ad sp create-for-rbac --name "Hashicorp Vault Prod AzureSPN" --skip-assignment
Assign it the Reader role on the subscription.
Create the Vault config:
vault auth enable azure
vault write auth/jwt/config tenant_id=lg240e12-76g1-748b-cd9c-je6f29562476 resource=https://management.azure.com/ client_id=34906a49-9a8f-462b-9d68-33ae40hgf8ug client_secret=123456ABCDEF
I am attempting to submit a job for training in ML Engine using gcloud but am running into an error with service account permissions that I can't figure out. The model code exists on a Compute Engine instance from which I am running gcloud ml-engine jobs submit as part of a bash script. I have created a service account (ai-platform-developer@....iam.gserviceaccount.com) for gcloud authentication on the VM instance and have created a bucket for the job and model data. The service account has been granted the Storage Object Viewer and Storage Object Creator roles for the bucket, and the VM and bucket all belong to the same project.
When I try to submit a job per this tutorial, the following are executed:
time_stamp=`date +"%Y%m%d_%H%M"`
job_name='ObjectDetection_'${time_stamp}
gsutil cp object_detection/samples/configs/faster_rcnn_resnet50.config \
gs://[bucket-name]/training_configs/faster-rcnn-resnet50.${job_name}.config
gcloud ml-engine jobs submit training ${job_name} \
--project [project-name] \
--runtime-version 1.12 \
--job-dir=gs://[bucket-name]/jobs/${job_name} \
--packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz,/tmp/pycocotools/pycocotools-2.0.tar.gz \
--module-name object_detection.model_main \
--region us-central1 \
--config object_detection/training-config.yml \
-- \
--model_dir=gs://[bucket-name]/output/${job_name} \
--pipeline_config_path=gs://[bucket-name]/training_configs/faster-rcnn-resnet50.${job_name}.config
where [bucket-name] and [project-name] are placeholders for the bucket created above and the project it and the VM are contained in.
The config file is successfully uploaded to the bucket, I can confirm it exists in the cloud console. However, the job fails to submit with the following error:
ERROR: (gcloud.ml-engine.jobs.submit.training) User [ai-platform-developer@....iam.gserviceaccount.com] does not have permission to access project [project-name] (or it may not exist): Field: job_dir Error: You don't have the permission to access the provided directory 'gs://[bucket-name]/jobs/ObjectDetection_20190709_2001'
- '#type': type.googleapis.com/google.rpc.BadRequest
fieldViolations:
- description: You don't have the permission to access the provided directory 'gs://[bucket-name]/jobs/ObjectDetection_20190709_2001'
field: job_dir
If I look in the cloud console, the files specified by --packages exist in that location, and I've ensured the service account ai-platform-developer@....iam.gserviceaccount.com has been given the Storage Object Viewer and Storage Object Creator roles for the bucket, which has bucket-level permissions set. After ensuring the service account is activated and set as the default, I can also run
gsutil ls gs://[bucket-name]/jobs/ObjectDetection_20190709_2001
which successfully returns the contents of the folder without a permission error. In the project, there also exists a managed service account service-[project-number]@cloud-ml.google.com.iam.gserviceaccount.com, and I have granted this account the Storage Object Viewer and Storage Object Creator roles on the bucket.
To confirm this VM is able to submit a job, I can switch the gcloud user to my personal account, and the script then runs and submits a job without any error. However, since this is a shared VM, I would like to rely on service account authorization instead of my own user account.
I had a similar problem with exactly the same error.
I found that the easiest way to troubleshoot those errors is to go to "Logging" and search for "PERMISSION DENIED" text.
In my case the service account was missing the "storage.buckets.get" permission. You then need to find a role that has this permission. You can do that from IAM -> Roles; in that view you can filter roles by permission name. It turned out that only the following roles have the needed permission:
Storage Admin
Storage Legacy Bucket Owner
Storage Legacy Bucket Reader
Storage Legacy Bucket Writer
I added "Storage Legacy Bucket Writer" role to the service account in the bucket and then was able to submit a job.
Have you tried looking at the Compute Engine access scopes?
Shut down the instance, click Edit, and change Cloud API access scopes to:
Allow full access to all Cloud APIs
I have a working Azure AD/Azure daemon application using adal4j that uses user/password authentication. Due to issues with ADFS, I also want to be able to authenticate using a service principal (client ID/secret). This seems to work fine for the Azure (non-AD) portion of the app, since the SP roles can be defined for the subscriptions in question; however, for the Azure AD part, I get:
response code 403, error: Authorization_RequestDenied: Insufficient privileges to complete the operation.
...this occurs on the first call to the Graph API - I get valid tokens from AuthenticationContext.acquireToken() using the https://graph.windows.net scope.
My account is an Owner in the directory. I've tried using the "Grant Permissions" button on the app, and have also tried fabricating a consent URL (which works) and using that to consent to the app having the necessary privileges in the directory. Neither seems to affect this.
The app is a Native app, as it is a daemon/service app, so can't participate in OAuth2 consent.
How does one access the Azure AD Graph API using a SP to authenticate? As a reminder, unchanged, the app works with (non-ADFS) user/password, and the SP works with the Azure API, just not Azure AD Graph API.
Thanks...
P.S. I've also tried this with the Microsoft Graph API, which Microsoft now recommends instead of the Azure AD Graph API. Same result, and it similarly works with user/password creds.
Amending this to kind of take adal4j out of the picture - this seems to be more of a generic Azure AD problem. Here's an example of the problem, using curl:
Client credentials token request:
curl --request POST "https://login.windows.net/367cxxxx41e5/oauth2/token" --data-urlencode "resource=https://graph.windows.net" --data-urlencode "client_id=9d83yyyy08cd" --data-urlencode "grant_type=client_credentials" --data-urlencode "client_secret=secret"
Client credentials token response:
{"token_type":"Bearer","expires_in":"3599","ext_expires_in":"0","expires_on":"1491486990","not_before":"1491483090","resource":"https://graph.windows.net","access_token":"eyJ0zzzz2zVA"}
Azure AD REST query using client credentials token:
curl "https://graph.windows.net/367cxxxx41e5/tenantDetails" --header "Authorization: eyJ0xxxx2zVA" --header "Accept: application/json;odata=minimalmetadata" --header "api-version: 1.6"
Azure AD REST response using client credentials token:
{"odata.error":{"code":"Authorization_RequestDenied","message":{"lang":"en","value":"Insufficient privileges to complete the operation."}}}
Now, contrast that with (note the same tenant ID/app ID):
Password credentials token request:
curl --request POST "https://login.windows.net/367cxxxx41e5/oauth2/token" --data-urlencode "resource=https://graph.windows.net" --data-urlencode "client_id=9d83yyyy08cd" --data-urlencode "grant_type=password" --data-urlencode "username=bozo#clown.com" --data-urlencode "password=password"
Password credentials token response:
{"token_type":"Bearer","scope":"Directory.AccessAsUser.All Directory.Read.All Group.Read.All Member.Read.Hidden User.Read User.Read.All User.ReadBasic.All","expires_in":"3599","ext_expires_in":"0","expires_on":"1491489157","not_before":"1491485257","resource":"https://graph.windows.net","access_token":"eyJ0zzzz0EXQ","refresh_token":"AQABzzzzAgAA"}
Azure AD REST query using password credentials token:
curl "https://graph.windows.net/367cxxxx41e5/tenantDetails" --header "Authorization: eyJ0xxxx0EXQ" --header "Accept: application/json;odata=minimalmetadata" --header "api-version: 1.6"
Azure AD REST response using password credentials token:
{"odata.metadata":"https://graph.windows.net/367cxxxx41e5/$metadata#directoryObjects/Microsoft.DirectoryServices.TenantDetail","value":[{"odata.type":"Microsoft.DirectoryServices.TenantDetail","objectType":"Company","objectId":"367cxxxx41e5","deletionTimestamp":null,"assignedPlans":[{"assignedTimestamp":"2017-02-24T03:25:33Z","capabilityStatus":"Enabled","service":"SharePoint","servicePlanId":"e95byyyyc014"},...
My suspicion at this point is that the SP, created by default by Azure AD when the app was created, doesn't include the necessary permissions. I'll try creating a new SP with the rights specified. Examples of this abound, but all are focused on the target app being some mythical LOB app, not Azure AD. And, as seems usual with the half-baked portal, the CLI must be used to do this.
</snark>
Also, I've verified that anyone with the Reader role should be able to execute that query. I've added both Contributor and Owner to the SP, with no effect.
Also also, FWIW, I've verified that the SP has, in theory, the Azure AD (and other) permissions I entered in the portal. I think.
PS C:\> Get-AzureADServicePrincipalOAuth2PermissionGrant -objectid 'a9f9xxxx5377'|format-list -property consenttype,resourceid,scope
ConsentType : AllPrincipals
ResourceId : c569xxxxe7f0
Scope : Member.Read.Hidden User.Read User.ReadBasic.All User.Read.All Group.Read.All Directory.Read.All
Directory.AccessAsUser.All
ConsentType : AllPrincipals
ResourceId : 3318xxxx66a5
Scope : user_impersonation
ConsentType : AllPrincipals
ResourceId : 8c0fxxxx4198
Scope : User.Read.All User.ReadBasic.All Group.Read.All Directory.Read.All Directory.AccessAsUser.All User.Read
PS C:\> get-azureadobjectbyobjectid -objectids 'c569xxxxe7f0','3318xxxx66a5','8c0fxxxx4198'
ObjectId AppId DisplayName
-------- ----- -----------
8c0fxxxx4198 00000003-0000-0000-c000-000000000000 Microsoft Graph
3318xxxx66a5 797f4846-ba00-4fd7-ba43-dac1f8f63013 Windows Azure Service Management API
c569xxxxe7f0 00000002-0000-0000-c000-000000000000 Microsoft.Azure.ActiveDirectory
Based on your description and my experience, I think the issue could be caused by one of two things.
You did not add API access for Windows Azure Active Directory (Microsoft.Azure.ActiveDirectory) under the Required permissions tab of your registered application in Azure AD on the Azure portal and select the related permissions.
For reference, you can see the other SO thread Trouble with authorization using client_credentials Azure AD Graph API, the official documentation, and a helpful blog.
You did not assign the Contributor role to this service principal. You need to run the PowerShell command below to do this:
New-AzureRmRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName 'applicationID'
Or you can refer to my answer to another SO thread, Cannot list image publishers from Azure java SDK, to do this via the Azure CLI or on the Azure portal.
Hope it helps.
I want to create a service account on GCP using a python script calling the REST API and then give it specific roles - ideally some of these, such as roles/logging.logWriter.
First I make a request to create the account which works fine and I can see the account in Console/IAM.
Second, I want to give it the role, and this seems like the right method. However, it is not accepting roles/logging.logWriter, returning HttpError 400, "Role roles/logging.logWriter is not supported for this resource."
Conversely, if I set the desired policy in the console and then try the getIamPolicy method (using the gcloud tool), all I get back is the response etag: ACAB, with no mention of the actual role I set. Hence I think these roles refer to different things.
Any idea how to go about scripting a role/scope for a service account using the API?
You can grant permissions to a GCP service account in a GCP project without having to rewrite the entire project policy!
Use the gcloud projects add-iam-policy-binding ... command for that (docs).
For example, given the environment variables GCP_PROJECT_ID and GCP_SVC_ACC the following command grants all privileges in the container.admin role to the chosen service account:
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member=serviceAccount:${GCP_SVC_ACC} \
--role=roles/container.admin
To review what you've done:
$ gcloud projects get-iam-policy $GCP_PROJECT_ID \
--flatten="bindings[].members" \
--format='table(bindings.role)' \
--filter="bindings.members:${GCP_SVC_ACC}"
Output:
ROLE
roles/container.admin
(or more roles, if those were granted before)
Notes:
The environment variable GCP_SVC_ACC is expected to contain the email notation for the service account.
Kudos to this answer for the nicely formatted readout.
You appear to be trying to set a role on the service account (as a resource). That's for setting who can use the service account.
If you want to give the service account (as an identity) a particular role on the project and its resources, see this method: https://cloud.google.com/resource-manager/reference/rest/v1/projects/setIamPolicy
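For example, a minimal Python sketch of that setIamPolicy call (a read-modify-write of the project policy) using the Google API client library; the project ID and service account email are placeholders:
from googleapiclient import discovery

project_id = "my-project"                                       # placeholder
sa_email = "my-sa@my-project.iam.gserviceaccount.com"           # placeholder

crm = discovery.build("cloudresourcemanager", "v1")

# Read the current project-level policy
policy = crm.projects().getIamPolicy(resource=project_id, body={}).execute()

# Add a binding granting the role to the service account (as an identity)
policy.setdefault("bindings", []).append({
    "role": "roles/logging.logWriter",
    "members": [f"serviceAccount:{sa_email}"],
})

# Write the modified policy back
crm.projects().setIamPolicy(resource=project_id, body={"policy": policy}).execute()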