Attempting to add custom roles to Azure AD application - single-sign-on

I am using Microsoft Graph Explorer to add application roles to a SAML SSO-enabled application in Azure AD. I copied out the existing appRoles stanza using a GET and edited it to include two new roles.
https://graph.microsoft.com/beta/servicePrincipals/<objectID>
where <objectID> in this case is the object ID of my application.
However, when I run a PATCH call to update the servicePrincipal data, it throws a very generic error: "One or more properties contains invalid values".
I have validated the JSON and am unable to determine what is causing the error.
My JSON is as follows:
{
    "appRoles": [
        {
            "allowedMemberTypes": ["User"],
            "description": "msiam_access",
            "displayName": "msiam_access",
            "id": "b9632174-c057-4f7e-951b-be3adc52bfe6",
            "isEnabled": true,
            "origin": "Application",
            "value": null
        },
        {
            "allowedMemberTypes": ["User"],
            "description": "Administrator",
            "displayName": "Administrator",
            "id": "b45591dd-c1f4-404e-9554-18fea972c3e4",
            "isEnabled": true,
            "origin": "ServicePrincipal",
            "value": "SAML_Admin"
        },
        {
            "allowedMemberTypes": ["User"],
            "description": "ReadOnlyUsers",
            "displayName": "ReadOnlyUsers",
            "id": "e3c19ea4-e86a-4897-9bb5-3d2d115fed80",
            "isEnabled": true,
            "origin": "ServicePrincipal",
            "value": "SAML_RO"
        }
    ]
}
I also used a GUID generator to generate the GUIDs. If and when they are not unique, I get an error to that effect, so I am ruling that out for now.

You need to update the application, not the service principal.
Custom permissions are defined on the Application object, and are only reflected in the Service principal.
So you'll need to do a PATCH on:
https://graph.microsoft.com/beta/applications/<objectID>
where <objectID> is the object ID of the Application object (note this is different from the service principal's object ID).
You may then have to re-create the service principal.
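A minimal sketch of the call, reusing the appRoles payload from the question (with <objectID> here being the Application object's ID):

PATCH https://graph.microsoft.com/beta/applications/<objectID>
Content-Type: application/json

{
    "appRoles": [ /* the three role objects shown above */ ]
}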

Thanks to @juunas for the helpful feedback.
The only solution that worked for me was to edit the Enterprise Application manifest directly with the new roles. I used a GUID-creator web application to create the GUIDs, and everything is working as expected.

Related

Deploying Azure storage fileServices/shares - error: The value for one of the HTTP headers is not in the correct format

As part of a durable function app deployment, I am deploying Azure Storage.
On deploying the fileServices/shares, I am getting the following error:
error": {
"code": "InvalidHeaderValue",
"message": "The value for one of the HTTP headers is not in the correct format.\nRequestId:6c0b3fb0-701a-0058-0509-a8af5d000000\nTime:2022-08-04T13:49:24.6378224Z"
}
I would appreciate any advice as this is eating up a lot of time and I am no closer to resolving it.
The section of the ARM template for the share deployment is below:
{
    "type": "Microsoft.Storage/storageAccounts/fileServices/shares",
    "apiVersion": "2021-09-01",
    "name": "[concat(parameters('storageAccount1_name'), '/default/FuncAppName')]",
    "dependsOn": [
        "[resourceId('Microsoft.Storage/storageAccounts/fileServices', parameters('storageAccount1_name'), 'default')]",
        "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount1_name'))]"
    ],
    "properties": {
        "accessTier": "TransactionOptimized",
        "shareQuota": 5120,
        "enabledProtocols": "SMB"
    }
}
Answer to this: removing the property "accessTier": "TransactionOptimized" resolves the issue. The default value for this property is TransactionOptimized anyway, so nothing is lost.
Although the template exported from the Azure portal includes this property, deployment fails when it is present.
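For reference, a sketch of the share resource with the offending property removed (otherwise identical to the template above):

{
    "type": "Microsoft.Storage/storageAccounts/fileServices/shares",
    "apiVersion": "2021-09-01",
    "name": "[concat(parameters('storageAccount1_name'), '/default/FuncAppName')]",
    "dependsOn": [
        "[resourceId('Microsoft.Storage/storageAccounts/fileServices', parameters('storageAccount1_name'), 'default')]",
        "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount1_name'))]"
    ],
    "properties": {
        "shareQuota": 5120,
        "enabledProtocols": "SMB"
    }
}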

Deployed Keycloak Script Mapper does not show up in the GUI

I'm using the docker image of Keycloak 10.0.2. I want Keycloak to supply access_tokens that can be used by Hasura. Hasura requires custom claims like this:
{
    "sub": "1234567890",
    "name": "John Doe",
    "admin": true,
    "iat": 1516239022,
    "https://hasura.io/jwt/claims": {
        "x-hasura-allowed-roles": ["editor", "user", "mod"],
        "x-hasura-default-role": "user",
        "x-hasura-user-id": "1234567890",
        "x-hasura-org-id": "123",
        "x-hasura-custom": "custom-value"
    }
}
Following the documentation, and using a script I found online (see this gist), I created a Script Mapper jar containing this script (copied verbatim from the gist) in hasura-mapper.js:
var roles = [];
for each (var role in user.getRoleMappings()) roles.push(role.getName());
token.setOtherClaims("https://hasura.io/jwt/claims", {
    "x-hasura-user-id": user.getId(),
    "x-hasura-allowed-roles": Java.to(roles, "java.lang.String[]"),
    "x-hasura-default-role": "user"
});
and the following keycloak-scripts.json in META-INF/:
{
    "mappers": [
        {
            "name": "Hasura",
            "fileName": "hasura-mapper.js",
            "description": "Create Hasura Namespaces and roles"
        }
    ]
}
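For context, packaging and deploying the jar might look like this (a sketch: the script sits at the jar root with the descriptor under META-INF/, and the deployments path assumes the standard WildFly-based Keycloak image):

# Package the script and its descriptor into a jar
jar cf hasura-mapper.jar META-INF/keycloak-scripts.json hasura-mapper.js
# Deploy by copying it into Keycloak's hot-deployment directory (path inside the container)
cp hasura-mapper.jar /opt/jboss/keycloak/standalone/deployments/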
The Keycloak debug log indicates it found the jar and deployed it successfully.
But what's the next step? I can't find the deployed mapper anywhere in the GUI, so how do I activate it? I tried creating a protocol mapper, but the option 'Script Mapper' is not available, and Scopes -> Evaluate generates a standard access token.
How do I activate my deployed protocol mapper?
Of course, after you put up a question on SO you keep searching, and I finally found the answer in this JIRA issue: the scripts feature has been a preview feature since (I think) version 8.
So when starting Keycloak you need to provide:
-Dkeycloak.profile.feature.scripts=enabled
and after that your Script Mapper will show up in the Mapper Type dropdown on the Create Mapper screen, and everything works.
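With the Docker image, the flag can be passed on the command line, since the jboss/keycloak image forwards extra arguments to standalone.sh (a sketch, assuming that image):

docker run -p 8080:8080 jboss/keycloak:10.0.2 -Dkeycloak.profile.feature.scripts=enabled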

How to create Data Factory's integration runtime in an ARM template

I am trying to deploy a Data Factory using an ARM template. It is easy to use the exported template to create a deployment pipeline.
However, as the Data Factory needs to access an on-premises database server, I need to have an integration runtime. The problem is: how can I include the runtime in the ARM template?
The template looks like this, and we can see that it is trying to include the runtime:
{
    "name": "[concat(parameters('factoryName'), '/OnPremisesSqlServer')]",
    "type": "Microsoft.DataFactory/factories/linkedServices",
    "apiVersion": "2018-06-01",
    "properties": {
        "annotations": [],
        "type": "SqlServer",
        "typeProperties": {
            "connectionString": "[parameters('OnPremisesSqlServer_connectionString')]"
        },
        "connectVia": {
            "referenceName": "OnPremisesSqlServer",
            "type": "IntegrationRuntimeReference"
        }
    },
    "dependsOn": [
        "[concat(variables('factoryId'), '/integrationRuntimes/OnPremisesSqlServer')]"
    ]
},
{
    "name": "[concat(parameters('factoryName'), '/OnPremisesSqlServer')]",
    "type": "Microsoft.DataFactory/factories/integrationRuntimes",
    "apiVersion": "2018-06-01",
    "properties": {
        "type": "SelfHosted",
        "typeProperties": {}
    },
    "dependsOn": []
}
Running this template gives me this error:
\"connectVia\": {\r\n \"referenceName\": \"OnPremisesSqlServer\",\r\n \"type\": \"IntegrationRuntimeReference\"\r\n }\r\n }\r\n} and error is: Failed to encrypted linked service credentials on self-hosted IR 'OnPremisesSqlServer', reason is: NotFound, error message is: No online instance..
The problem is that I will need to type a key into the integration runtime's UI so it can be registered in Azure, but I can only get that key from my Data Factory instance's UI. So the above ARM template deployment will always fail at least once. I am wondering if there is a way to create the runtime independently?
It seems that you already know how to create a Self-Hosted IR in the ADF ARM template:
{
    "name": "[concat(parameters('dataFactoryName'), '/integrationRuntime1')]",
    "type": "Microsoft.DataFactory/factories/integrationRuntimes",
    "apiVersion": "2018-06-01",
    "properties": {
        "additionalProperties": {},
        "description": "jaygongIR1",
        "type": "SelfHosted"
    }
}
Your only concern is that the Windows IR tool needs to be configured with an AUTHENTICATION KEY to access the ADF self-hosted IR node, so the IR will show as Unavailable right after it is created. That flow makes sense: the authentication key has to be created first, and only then can you use it to configure the on-premises tool. You can't implement everything in one step, because these actions are performed on both the Azure side and the on-premises side.
Based on the self-hosted IR tool documentation, the Register step can't be implemented with PowerShell code. So the steps that can be automated in the flow are creating the IR and getting the auth key, not registering it in the tool.
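For example, once the IR resource exists, the key can be fetched with the Az.DataFactory module (a sketch; the resource names are placeholders):

# Retrieve the authentication keys for the self-hosted IR (placeholder names)
$keys = Get-AzDataFactoryV2IntegrationRuntimeKey `
    -ResourceGroupName "myResourceGroup" `
    -DataFactoryName "myDataFactory" `
    -Name "integrationRuntime1"
# Paste AuthKey1 (or AuthKey2) into the self-hosted IR tool on the on-premises machine
$keys.AuthKey1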

A CNAME record pointing from mytmp.trafficmanager.net to mywebapp.azurewebsites.net was not found

I am trying to create all my Azure resources from a PowerShell script. All the resources get created, but it also throws this exception.
A CNAME record pointing from mytmp.trafficmanager.net to mywebapp.azurewebsites.net was not found
But I can see that a Traffic Manager endpoint has been configured properly. What am I missing here, any idea?
PS Code:
{
    "comments": "Generalized from resource: '/subscriptions/<subid>/resourceGroups/<rgid>/providers/Microsoft.Web/sites/<web_app_name>/hostNameBindings/<traffic_manager_dns>'.",
    "type": "Microsoft.Web/sites/hostNameBindings",
    "name": "[concat(parameters('<web_app_name>'), '/', parameters('hostNameBindings_<traffic_manager_dns>_name'))]",
    "apiVersion": "2016-08-01",
    "location": "South Central US",
    "scale": null,
    "properties": {
        "siteName": "<web_app_name>",
        "domainId": null,
        "hostNameType": "Verified"
    },
    "dependsOn": [
        "[resourceId('Microsoft.Web/sites', parameters('sites_<web_app_name>_name'))]"
    ]
}
The above code block is what actually throws the exception; when I comment it out, everything is fine. But I want to understand the reason for the error.
A CNAME record pointing from mytmp.trafficmanager.net to mywebapp.azurewebsites.net was not found
This indicates the DNS record was not created before the template was deployed; you need to prove that you are the owner of the hostname. You could also test it manually from the Azure portal.
Before deploying the template, you need to create a CNAME record with your DNS provider. For more information, refer to Map a CNAME record.
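If the zone happens to be hosted in Azure DNS, creating such a CNAME could look like this (a sketch; the zone, record set, and resource group names are placeholders):

# Create a CNAME record set pointing at the web app (placeholder names)
New-AzDnsRecordSet -Name "mytmp" `
    -RecordType CNAME `
    -ZoneName "example.com" `
    -ResourceGroupName "dns-rg" `
    -Ttl 3600 `
    -DnsRecords (New-AzDnsRecordConfig -Cname "mywebapp.azurewebsites.net")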

Import VSTS task group with dataSourceBindings

I am trying to define dataSourceBindings in the JSON of a task group I intend to reuse in release definitions. My task group basically deploys a web app with some custom actions. For the parameters I expose on the group, I want to populate the picklists with the proper values from the AzureRM endpoint, such as the available subscriptions, resource groups, web apps, and slots.
To achieve this, I created a first template from the VSTS UI, which I then exported to edit in JSON. Based on multiple posts and the examples available in the VSTS tasks repo, I defined my dataSourceBindings as:
"dataSourceBindings": [
{
"target": "ResourceGroupName",
"endpointId": "$(ConnectedServiceName)",
"dataSourceName": "AzureResourceGroups"
},
{
"target": "WebAppName",
"endpointId": "$(ConnectedServiceName)",
"dataSourceName": "AzureRMWebAppNamesByType",
"parameters": {
"WebAppKind": "app"
}
},
{
"target": "SlotName",
"endpointId": "$(ConnectedServiceName)",
"dataSourceName": "AzureRMWebAppSlotsId",
"parameters": {
"WebAppName": "$(WebAppName)",
"ResourceGroupName": "$(ResourceGroupName)"
},
"resultTemplate": "{\"Value\":\"{{{ #extractResource slots}}}\",\"DisplayValue\":\"{{{ #extractResource slots}}}\"}"
}
]
I was able to import my task group into VSTS, but once it was added to a release definition, the picklists were still empty.
I then tried to export the previously imported group and noticed the dataSourceBindings field was empty.
Is importing a task group with dataSourceBindings supported?
If yes, what could possibly be wrong with mine?