Keycloak: create realm by sharding

Is it possible to create two separate Keycloak clusters and shard realm creation by name?
E.g.:
Cluster A - Keycloak instance1, ..., instance3
Cluster B - Keycloak instance1, ..., instance3
if hashCode(realm1) % 2 == 0:
    create realm in cluster A
else:
    create realm in cluster B
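
Keycloak itself has no built-in notion of sharding realms across clusters, so the routing has to live in your own provisioning layer. As a minimal sketch of the idea in Python, calling the Keycloak Admin REST API (the cluster URLs and credentials are hypothetical, and the admin base path may differ between Keycloak distributions):

import hashlib
import requests

# Hypothetical base URLs of the two Keycloak clusters (e.g. their load balancers).
CLUSTERS = ["https://keycloak-a.example.com", "https://keycloak-b.example.com"]

def pick_cluster(realm_name: str) -> str:
    # Use a stable hash: Python's built-in hash() is randomized per process,
    # so it would route the same realm differently across runs.
    digest = hashlib.sha256(realm_name.encode("utf-8")).digest()
    return CLUSTERS[digest[0] % len(CLUSTERS)]

def create_realm(base_url: str, admin_token: str, realm_name: str) -> None:
    # POST /admin/realms creates a realm via the Keycloak Admin REST API.
    resp = requests.post(
        f"{base_url}/admin/realms",
        json={"realm": realm_name, "enabled": True},
        headers={"Authorization": f"Bearer {admin_token}"},
        timeout=10,
    )
    resp.raise_for_status()

create_realm(pick_cluster("realm1"), "<admin-token>", "realm1")

Note that every later operation on the realm (logins, token requests, admin calls) must go through the same pick_cluster routing, because the realm exists in only one of the two clusters.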

Related

mongosync fails while synchronising data between source and destination sharded clusters

I am trying to synchronize data from the source cluster to the destination cluster using mongosync, but I am hitting the error below:
{"level":"fatal","serverID":"891fbc43","mongosyncID":"myShard_0","error":"(InvalidOptions) The 'clusteredIndex' option is not supported for namespace mongosync_reserved_for_internal_use.lastWriteStates.footballdb","time":"2022-12-16T02:21:15.209841065-08:00","message":"Error during replication"}
Source cluster details:
3 shards, where each shard is a single-node replica set
Dataset: one database with one collection sharded across the shards
Destination cluster details:
3 shards, where each shard is a single-node replica set
No user data
All the mongosync prerequisites, including RBAC, were verified successfully.
I am unable to diagnose the error here - The 'clusteredIndex' option is not supported for namespace mongosync_reserved_for_internal_use.lastWriteStates.footballdb
I tried the same use case with the same dataset for an N-node replica set source (without sharding) and destination cluster, and the synchronisation worked fine.
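
One hedged diagnostic, not from the original post: the failing namespace suggests mongosync is trying to create a clustered collection in its internal database, and clustered collections require MongoDB 5.3+, so it is worth confirming the server version and featureCompatibilityVersion on every shard of both clusters. A quick pymongo sketch (the connection strings are placeholders):

from pymongo import MongoClient

# Placeholders - point these at each shard's replica set on both clusters.
for uri in ["mongodb://source-shard:27017", "mongodb://dest-shard:27017"]:
    client = MongoClient(uri)
    version = client.admin.command("buildInfo")["version"]
    fcv = client.admin.command(
        {"getParameter": 1, "featureCompatibilityVersion": 1}
    )["featureCompatibilityVersion"]
    print(uri, version, fcv)

If any node reports a version or FCV below what mongosync supports, that mismatch is the first thing to fix.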

Issue while deploying Keycloak on Kubernetes

I am trying to deploy Keycloak on a Kubernetes cluster, using a Postgres database that is already present. This is the exception I am finding in the Keycloak logs:
"message":"Failed to create a XA-enabled DataSource from PostgreSQL JDBC Driver 42.3.3 for keycloak at jdbc:postgresql://postgres-pgpool.kube-tools.svc.cluster.local:5432/keycloak?"
Here are my env variables (attached).
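
Since the attachment with the env variables is not available here, one hedged first step (not from the original post) is to confirm that the keycloak database at that JDBC host is reachable and accepts logins from inside the cluster, before debugging the XA datasource itself. A minimal psycopg2 sketch, run from a pod in the same cluster (the credentials are placeholders):

import psycopg2

# Placeholders - use the same credentials Keycloak is configured with.
conn = psycopg2.connect(
    host="postgres-pgpool.kube-tools.svc.cluster.local",  # host from the JDBC URL above
    port=5432,
    dbname="keycloak",
    user="keycloak",
    password="changeme",
    connect_timeout=5,
)
print(conn.server_version)  # e.g. 140005 for PostgreSQL 14.5
conn.close()

If this fails, the problem is connectivity or credentials rather than the XA configuration.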

Postgres 2-node cluster setup

I want to set up a 2-node cluster for Postgres. How do I configure the primary and the backup in a 2-node cluster? I need to know the best way to do it, along with the necessary configuration.
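
The usual 2-node layout is streaming replication: one primary and one hot standby. As a sketch only (hostnames and credentials are placeholders), this psycopg2 snippet checks which role each node is currently playing and whether the standby is attached to the primary:

import psycopg2

for host in ["pg-node1", "pg-node2"]:  # placeholder hostnames
    conn = psycopg2.connect(host=host, dbname="postgres",
                            user="postgres", password="changeme")
    with conn.cursor() as cur:
        # pg_is_in_recovery() is true on a standby, false on the primary.
        cur.execute("SELECT pg_is_in_recovery()")
        role = "standby" if cur.fetchone()[0] else "primary"
        print(host, role)
        if role == "primary":
            # On the primary, pg_stat_replication lists the attached standbys.
            cur.execute("SELECT application_name, state, sync_state "
                        "FROM pg_stat_replication")
            print(host, cur.fetchall())
    conn.close()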

Authenticate to K8s as specific user

I recently built an EKS cluster with Terraform. In doing that, I created 3 different roles on the cluster, all mapped to the same AWS IAM role (the one used to create the cluster).
Now, when I try to manage the cluster, RBAC seems to use the least privileged of these (which I made a view role), so I only have read-only access.
Is there any way to tell the config to use the admin role instead of view?
I'm afraid I've hosed this cluster and may need to rebuild it.
Some intro
You don't need to create a mapping in K8s for the IAM entity that created an EKS cluster, because it is automatically mapped to the "system:masters" K8s group. So, if you want to grant additional permissions in a K8s cluster, just map other IAM roles/users.
In EKS, IAM entities are used for authentication, and K8s RBAC is used for authorization. The mapping between them is set in the aws-auth ConfigMap in the kube-system namespace.
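
To inspect what the mapping currently looks like, here is a sketch using the official Kubernetes Python client (assuming the current kubeconfig context can still read ConfigMaps in kube-system):

from kubernetes import client, config

config.load_kube_config()  # uses the current kubeconfig context
cm = client.CoreV1Api().read_namespaced_config_map("aws-auth", "kube-system")
# Duplicate rolearn entries are the suspects here: a later record
# can shadow an earlier one for the same IAM role.
print(cm.data["mapRoles"])
print(cm.data.get("mapUsers", ""))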
Back to the question
I'm not sure why K8s mapped that IAM user to the least-privileged K8s user - it may be the default behaviour (a bug?) or due to the mapping record (for the view perms) appearing later in the ConfigMap, so it simply overwrote the previous mapping.
Either way, there is no way to specify which K8s user to use with such a mapping.
Also, if you used eksctl to spin up the cluster, you may try creating a new mapping as per the docs, but I am not sure whether that will work.
Some reading reference: #1, #2

Adding New Node to an Existing Mongo Replica with Authentication

I have an existing replica set configured with simple authentication, and I want to add a new node to it.
I currently use the following flow to do so:
Start the new node as a standalone with authentication enabled and the same keyfile as all the other nodes
Create an admin user locally (with the exact same credentials as on the existing replica members)
Stop the node
Start it again with the replica set settings
Add it to the replica set
But this feels awkward - having to recreate the user(s) for each node. Is there no other way, i.e. to just specify the keyfile and connect the new node to the replica set?
Thanks
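
For reference, the final "add it to the replica set" step can also be scripted. A hedged pymongo sketch (hostnames and credentials are placeholders) that fetches the current config, appends the new member, and reconfigures:

from pymongo import MongoClient

# Connect to the current primary with an existing admin user.
client = MongoClient("mongodb://admin:secret@primary-host:27017/?authSource=admin")

cfg = client.admin.command("replSetGetConfig")["config"]
cfg["version"] += 1  # replSetReconfig requires a bumped config version
cfg["members"].append({
    "_id": max(m["_id"] for m in cfg["members"]) + 1,
    "host": "new-node:27017",  # placeholder for the new member
})
client.admin.command({"replSetReconfig": cfg})

As for the user-recreation step: users live in the admin database, which replicates to new members during initial sync, so in general only the shared keyfile should be needed on the new node - though treat that as a general MongoDB note rather than a verified answer here.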