Is there an elegant way to clone a Keycloak realm with all its configurations (clients and roles) for a multitenant application? - keycloak

I'm building a multitenant application and I'm using Keycloak for authentication and authorization.
For each tenant, the idea is to have a dedicated Keycloak realm. Each tenant will have exactly the same roles and clients.
I have tried to export one existing realm, use it as a template, and import it for a new tenant. Problem: I'm facing database constraint violations due to internal ids.
Question: Is there an elegant way to achieve this, having a template to create a new realm?

Be sure that the script upload feature is enabled. For a docker-compose deployment, just add this:
command: -Dkeycloak.profile.feature.upload_scripts=enabled
Export your realm (the one to be used as a model).
Remove all lines containing "id:" and "_id:".
Search and replace the template realm name with the new realm name.
In the Keycloak admin console, choose Add realm, provide the file, and that is all.
You can reuse the cleaned export file as a template.
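For illustration, here is a minimal Python sketch of that cleanup step. It is not part of the original answer, and the file names and new realm name are placeholders:

import json

def make_tenant_realm(template_path, output_path, new_realm_name):
    with open(template_path) as f:
        realm = json.load(f)

    # Drop the internal "id" / "_id" keys everywhere so the import generates
    # fresh ids instead of colliding with the template realm's ids.
    def strip_ids(node):
        if isinstance(node, dict):
            return {k: strip_ids(v) for k, v in node.items() if k not in ("id", "_id")}
        if isinstance(node, list):
            return [strip_ids(item) for item in node]
        return node

    realm = strip_ids(realm)
    realm["realm"] = new_realm_name  # replace the template realm name

    with open(output_path, "w") as f:
        json.dump(realm, f, indent=2)

make_tenant_realm("template-realm.json", "tenant-b-realm.json", "tenant-b")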

Can't comment due to rep, but I'd like to add to @Youssouf Maiga's answer that you should also modify any fields that contain values under "authenticationFlowBindingOverrides".
Replace any entries that have values assigned under "direct_grant" or "browser", i.e.
"authenticationFlowBindingOverrides": {
"direct_grant": "f5d1wb45e-27eb-4466-937439-9cc8a615ad65e",
"browser": "5b23141a1c-7af8d-410e-a9b451f-0eec12039c72e9"
},
replaced with
"authenticationFlowBindingOverrides": {},
I tried cloning my realm based on this and got an error saying:
"Unable to resolve auth flow binding override for: direct_grant" when importing the modified realm export.
Keycloak version 16.1.1
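If you script the cleanup (as in the sketch above), this extra step could look roughly as follows; again, the file name is a placeholder rather than anything from the original answer:

import json

# Clear every client's authenticationFlowBindingOverrides so the import does
# not reference authentication flow ids that only exist in the template realm.
with open("tenant-b-realm.json") as f:
    realm = json.load(f)

for client in realm.get("clients", []):
    client["authenticationFlowBindingOverrides"] = {}

with open("tenant-b-realm.json", "w") as f:
    json.dump(realm, f, indent=2)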

What you could do is configure everything using the Keycloak Terraform provider. That way you only have to define the configuration once, in code, and then apply it using Terraform. See the documentation: https://registry.terraform.io/providers/mrparkers/keycloak/latest/docs
An advantage of this is that you can put your code in an SCM tool (e.g. git), so you can track your changes, and go back to a previous version if necessary.

Related

Managed Identity Sql Auth with EF Core - Login failed for user '<token-identified principal>'

I have a .NET 5 (isolated) Azure Function app that needs to access an Azure SQL Server database via EF Core 5. I would like to use the managed identity of the function app when making the SQL Server requests.
What I tried
I followed the instructions here.
I created a new AD account called "smsrouterdb" and made it the Azure SQL admin.
The name of my function app is "func-smsrouter-msdn-01". So after logging into the DB via SSMS as "smsrouterdb", I created a contained user as below:
CREATE USER [func-smsrouter-msdn-01] FROM EXTERNAL PROVIDER
ALTER ROLE db_datawriter ADD MEMBER [func-smsrouter-msdn-01]
ALTER ROLE db_datareader ADD MEMBER [func-smsrouter-msdn-01]
I then triggered my function app via an http request.
What happened
I got the following error from the function app:
One or more errors occurred. (Invalid value for key 'authentication'.) ---> System.ArgumentException: Invalid value for key 'authentication'. at Microsoft.Data.Common.DbConnectionStringBuilderUtil.ConvertToAuthenticationType(String keyword, Object value)
I realised that this was because an old version of the NuGet package Microsoft.Data.SqlClient was being referenced. So, I explicitly added a reference to v3.0.0.
I then got the following error:
Login failed for user '<token-identified principal>'
However, if I change the connection string's authentication property to "Active Directory Interactive" and promote the object id of the managed identity for the function app to be SQL admin using the following command:
az sql server ad-admin create --resource-group <tg name> --server-name <server name> --display-name MSIAzureAdmin --object-id "id of managed identity here"
then the rows are written correctly. My concern is that the managed identity should not need to be a SQL admin.
Config
The NuGet packages of the project containing the DbContext are:
"Microsoft.Azure.Services.AppAuthentication" Version="1.6.1"
"Microsoft.EntityFrameworkCore" Version="5.0.6"
"Microsoft.EntityFrameworkCore.Design" Version="5.0.6"
"Microsoft.EntityFrameworkCore.SqlServer" Version="5.0.6"
"WindowsAzure.Storage" Version="9.3.3" />
From the main Azure Function project, I have references to the following NuGet packages:
Microsoft.Data.SqlClient 3.0.0
Serilog.Sinks.MSSqlServer 5.6.0
The only code in my DbContext is:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    SqlConnection connection = new SqlConnection();
    logger.LogInformation($"SqlSvrConString=[{sharedConfig[ConfigConstants.SqlSvrConnString]}]");
    connection.ConnectionString = sharedConfig[ConfigConstants.SqlSvrConnString];
    optionsBuilder.UseSqlServer(connection);
}
My connection string is:
Server=servernamehere.database.windows.net;Initial Catalog=dbnamehere;Authentication=Active Directory Managed Identity;
Can anyone explain why this fails unless the managed identity is made sql admin?
I think the root cause of the problem was that when I'd issued the command: CREATE USER [function name here] FROM EXTERNAL PROVIDER, although the function name was spelled correctly, the case was incorrect.
One easy way to work around these kinds of errors is to use the GUID of the function's identity instead of the name. This works as well.
CREATE USER [abcdef-1234-5678-ghijkl] FROM EXTERNAL PROVIDER

How can I import existing password into Pulumi?

I'm trying to import an existing system into Pulumi; in particular, I wish to support generating passwords for any new stacks, but use the existing password for the existing stack. Is this possible?
I've tried the following as per https://www.pulumi.com/docs/reference/pkg/random/randompassword/#import
pulumi new azure-python
pulumi plugin install resource random 3.1.1
pulumi import random:index/randomPassword:RandomPassword password Password123!
This gives the error "random:index/randomPassword:RandomPassword resource 'password' has a problem: Required attribute is not set. Examine values at 'RandomPassword.Length'". This makes sense, but it's not clear from the docs I've read that it is possible to set the value of an attribute when importing a resource.
This is using the latest version of Pulumi (2.32.1), which I'm using with local login.
This is actually for a training exercise I'm writing, so if the answer is too unpleasant (e.g. exporting the state and re-importing it with the real password) it's probably not worth doing.

Keycloak Admin CLI - Updating a realm with JSON file

Objective:
Our objective is to update the entire realm, given a JSON file.
Problem:
The issue at hand is that we cannot seem to update the realm in its entirety so that the client changes are included as well.
Actions taken:
Option 1:
Based on the Keycloak Admin CLI documentation, a Keycloak realm can be updated from a JSON file using the following command:
kcadm.sh update realms/demorealm -f demorealm.json
However, when making an update to a property within the clients section of the JSON file (e.g. a client's description), the change is not reflected within the Keycloak realm.
We also took a look at kcadm.sh help update. We tried to utilize the merge flag ("Merge new values with existing configuration on the server. Merge is automatically enabled unless --file is specified"). We do have a file specified and therefore tried to enable merging explicitly using the flag, but to no success: the clients did not change as expected.
Option 2:
We have tried the partial import command found in Keycloak documentation
$ kcadm.sh create partialImport -r demorealm -s ifResourceExists=OVERWRITE -o -f demorealm.json
With ifResourceExists set to OVERWRITE, it accurately changes clients. However, it alters other realm configuration such as assigned user roles. For example: after manually creating a new user via the Keycloak UI and setting roles for the user, the roles are lost after running the command with the OVERWRITE flag set. Setting ifResourceExists to SKIP does not properly update values for a client, as it is skipped altogether.
Question:
Is it possible, either with a different command or different flags, to update a Keycloak realm in its entirety with a single Keycloak admin command? Neither Option 1 nor Option 2 listed above worked for us. We want to avoid making individual update client calls when updating the realm.
Notes:
We have properly authenticated and confirmed that changes made at the realm level are reflected in Keycloak.
After further research, the approach we decided to go with is to update realm-level settings with:
kcadm.sh update realms/demorealm -f demorealm.json
We then iterate over the clients (see the sketch after these commands) and add/update them with:
kcadm.sh update clients/{clients-uuid} -f clientfile.json
Since the previous command does not update client roles, we must then use the following command to add the roles:
kcadm.sh update clients/{clients-uuid}/roles/{role-name} -f rolefile.json
Finally, to add in composite roles, we use this command:
kcadm.sh add-roles --cclientid {clientID} --rid {id of client role} --rolename {name of role to add}
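For illustration, here is a rough Python sketch of that per-client loop, driving kcadm.sh through subprocess. It is not from the original post: it assumes kcadm.sh is on the PATH, that you have already authenticated with kcadm.sh config credentials, and the realm and file names are placeholders.

import json
import subprocess
import tempfile

REALM = "demorealm"  # placeholder target realm

def kcadm(*args):
    # Run kcadm.sh and return its stdout.
    return subprocess.run(["kcadm.sh", *args], check=True,
                          capture_output=True, text=True).stdout

with open("demorealm.json") as f:
    template = json.load(f)

# 1. Update realm-level settings from the template file.
kcadm("update", f"realms/{REALM}", "-f", "demorealm.json")

# 2. Update each client defined in the template.
for client in template.get("clients", []):
    # Resolve the client's uuid in the target realm from its clientId.
    matches = json.loads(kcadm("get", "clients", "-r", REALM,
                               "-q", f"clientId={client['clientId']}"))
    if not matches:
        continue  # client does not exist yet; it would need a create instead
    uuid = matches[0]["id"]
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
        json.dump(client, tmp)
    kcadm("update", f"clients/{uuid}", "-r", REALM, "-f", tmp.name)

# Client roles and composite roles can then be handled the same way with the
# update .../roles/... and add-roles commands shown above.

Driving the loop from a script keeps the single template file as the source of truth while avoiding the partialImport OVERWRITE side effects described in Option 2.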

Updating a CloudFormation stack with a Cognito pool claims that we're adding attributes when we're not

Starting on Nov 7, 2018, we began getting the following error when updating our CloudFormation stacks:
Updating user pool schema is not allowed from cloudformation. Use the
AddCustomAttributes API or the AWS Cognito Console to update user pool
schema.
Our CF stacks don't have any changes to the custom attributes of the Cognito pool. They only have changes to the PostConfirmation and CustomMessage triggers, as well the addition of API Gateway responses.
Does anybody know why we might be seeing this? How can we avoid this error message?
We had the same problem with deployment. For now we are deploying it without the CustomMessage trigger and setting the CustomMessage trigger manually after deployment.
We removed the CustomMessage changes from our template and that seemed to do the trick.
Mostly by luck, I've found an answer that allows me to get around this in an automated manner.
How our scripts used to work
First, let me explain how this used to work. I used to have the following set of cloudFormation scripts:
cognitoSetup.template --> <Serverless Framework> --> <cognitoSetup.template updated with triggers>
So we'd setup the Cognito pool, run the Serverless Framework to add the Cognito Lambda functions, and then update the cognitoSetup.template file with the ARNs for the lambdas exported when the Serverless Framework ran.
The Fix
Now, we include the ARNs for the Lambdas in the cognitoSetup.template. So now cognitoSetup.template looks like this:
"CognitoUserPool": {
"Type": "AWS::Cognito::UserPool"
...
"Properties": {
...
"LambdaConfig": {
"CustomMessage": "arn:aws:lambda:<our aws region>:<our account#>:function:main-<our stage>-onCognitoCustomMessage"
}
}
Note: we're setting this trigger before the Lambda even exists. The trigger just needs an ARN, and it doesn't seem to care that it's not there yet. Then we run sls deploy, which creates the actual Lambda function, and everything works fine.
Now our scripts look like this:
cognitoSetup.template --> <Serverless Framework>
Why does this fix this error? I don't actually know. CloudFormation seems to be fine with this modification but not okay with modifying the same file later in our process. But it works.

How to share information across notebooks in a DSX project

Is it possible to share information (such as credentials) across multiple notebooks in a DSX project, e.g. with environment variables?
For example, a Cloud Foundry application in Bluemix has a control setting where environment variables can be defined. Is there a similar concept for a DSX project? (I couldn't see anything in the various project-level settings.)
Separate notebooks have separate runtimes in the background, and at the moment it is not possible to share credentials among notebooks by defining environment variables. But there are helper methods for the most obvious credential requirements in a project. This is called the "Insert to code" method.
For example, if you have an object store associated with your project:
Select the "Data" tab in the top bar.
Add some file to the object store by browsing or a simple drag-and-drop.
Insert the credentials of that object store container in your notebook by selecting the "Insert credentials" option, right beside your file in the right-hand panel.
You can then directly insert those credentials (Step 3) in any other notebook in that project.
Besides "Insert to code" there are other helper functions like "Insert SparkR dataframe", "Pandas dataframe", etc. to speed up the analytics process of data scientists. Hope that was a bit helpful.
FYI - I've added a feature request on UserVoice to allow Bluemix services to be bound to a project and then have the credentials be accessed in the same way a Bluemix application accesses credentials. Please vote if you think this would be useful.
Currently, one pattern I use quite a lot is to create a notebook in my project that is used to save credentials to a file on DSX:
! echo '{ "username": "xxxx", "password": "xxxx", ... }' > cloudant_creds.json
That file is now available to all of your notebooks in the project. NOTE: the file is saved on the Spark service file system. If you use the same Spark service in other DSX projects, they will also be able to access the file.
The credentials for Cloudant normally include other fields such as host; I haven't shown these fields here to keep the example simple, and I have indicated that there are more fields with the "...". I normally copy this JSON from the Bluemix service credentials field.
In your other notebooks, you would read the credentials something like this:
import json

with open('cloudant_creds.json') as data_file:
    sourceDB = json.load(data_file)
You can then refer the credentials like this:
# json.load returns a dict, so the fields are read with key access
dfReader = sqlContext.read.format("com.cloudant.spark")
dfReader.option("cloudant.host", sourceDB["host"])
if sourceDB.get("username"):
    dfReader.option("cloudant.username", sourceDB["username"])
if sourceDB.get("password"):
    dfReader.option("cloudant.password", sourceDB["password"])
df = dfReader.load(sourceDB["database"]).cache()