How can I reimport a realm in Keycloak if it already exists?

I am not able to import a realm JSON file using Keycloak import on Keycloak version 15.0.2. I am running Keycloak in a Docker container.
Below is the Docker volume in my compose file:
volumes:
  - ./keycloak-realm.json:/tmp/keycloak-realm.json
Environment variables for Keycloak:
KEYCLOAK_IMPORT=/tmp/keycloak-realm.json -Dkeycloak.profile.feature.upload_scripts=enabled -Dkeycloak.migration.strategy=OVERWRITE_EXISTING
The import fails with an "already exists" error, and with the migration strategy set I get the same error.
How can I import the realm even if it already exists?
Error from Keycloak:
04:47:48,047 INFO [org.keycloak.services] (ServerService Thread Pool -- 70) KC-SERVICES0003: Not importing realm sso from file /tmp/keycloak-realm.json. It already exists.
04:47:48,068 INFO [org.keycloak.services] (ServerService Thread Pool -- 70) KC-SERVICES0003: Not importing realm sso from file /tmp/keycloak-realm.json. It already exists.

If you are using the official Docker image, you need to set the strategy via the JAVA_OPTS_APPEND environment variable. In your case:
KEYCLOAK_IMPORT=/tmp/keycloak-realm.json
JAVA_OPTS_APPEND=-Dkeycloak.profile.feature.upload_scripts=enabled -Dkeycloak.migration.strategy=OVERWRITE_EXISTING
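Putting that together, a minimal compose sketch (the service name and the jboss/keycloak image tag are my assumptions; the variables and values are from your question):

version: "3"
services:
  keycloak:
    image: jboss/keycloak:15.0.2
    environment:
      # file to import at startup (path matches the mounted volume)
      KEYCLOAK_IMPORT: /tmp/keycloak-realm.json
      # JVM system properties must go through JAVA_OPTS_APPEND, not into KEYCLOAK_IMPORT
      JAVA_OPTS_APPEND: "-Dkeycloak.profile.feature.upload_scripts=enabled -Dkeycloak.migration.strategy=OVERWRITE_EXISTING"
    volumes:
      - ./keycloak-realm.json:/tmp/keycloak-realm.json

This way OVERWRITE_EXISTING actually reaches the server, so the existing realm is replaced instead of being skipped with the "already exists" message.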

Related

Import realm in Keycloak 18.x

I cannot import any realms into Keycloak 18.0.0. That is the Quarkus distribution, not the Wildfly one anymore. The documentation says it should be pretty simple, and by mounting my exported realm.json file into /opt/keycloak/data/import/...json it actually TRIES to import it, but it ends with:
"[org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Script upload is disabled".
The upload-scripts feature is known to have been removed, and the old -Dkeycloak.profile.feature.upload_scripts=enabled won't work anymore. OK.
But then what is the way to import realms on startup? That would be used to distribute a ready-made local stack without any handcrafting needed to launch it. I could do it by running SQL commands, but that is way too hacky for my taste.
Compose file:
cp-keycloak:
  image: quay.io/keycloak/keycloak:18.0.0
  environment:
    KC_DB: mysql
    KC_DB_URL: jdbc:mysql://cp-keycloak-database:3306/keycloak
    KC_DB_USERNAME: root
    KC_DB_PASSWORD: root
    KC_HOSTNAME: localhost
    KEYCLOAK_ADMIN: admin
    KEYCLOAK_ADMIN_PASSWORD: admin
  ports:
    - 8082:8080
  volumes:
    - ./data/local_stack/init.keycloak.json:/opt/keycloak/data/import/main-realm.json:ro
  entrypoint: "/opt/keycloak/bin/kc.sh start-dev --import-realm"
The output:
cp-keycloak_1 | 2022-05-05 14:07:26,801 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
cp-keycloak_1 | 2022-05-05 14:07:26,802 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to import realm: Main-Realm
cp-keycloak_1 | 2022-05-05 14:07:26,803 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Script upload is disabled
Thanks
This might be caused by references inside your realm .json to configuration that uses the deprecated upload-scripts feature.
Try removing it, export the JSON, and then try importing it again (this time without the upload-scripts feature).
From the comments (credits to jfrantzius):
See here for what you either need to remove or replace in your realm-export.json: https://github.com/keycloak/keycloak/issues/11664#issuecomment-1111062102. We had to replace the entries; see also https://github.com/keycloak/keycloak/discussions/12041#discussioncomment-2768768
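For illustration, the offending entries are typically script-based authorization policies (or script protocol mappers). A hypothetical fragment of a realm-export.json that would trip the "Script upload is disabled" check might look like this (all names invented):

{
  "name": "my-js-policy",
  "type": "js",
  "logic": "POSITIVE",
  "decisionStrategy": "UNANIMOUS",
  "config": {
    "code": "$evaluation.grant();"
  }
}

Removing such entries, or replacing them with non-script policies (e.g. role-based ones) as described in the linked discussion, lets --import-realm succeed.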

"SchemaRegistryException: Failed to get Kafka cluster ID" for LOCAL setup

I downloaded the .tar.gz (I am on Mac) for Confluent version 7.0.0 from the official Confluent site and was following the setup for LOCAL (1 node). Kafka/ZooKeeper start fine, but the Schema Registry keeps failing. (Note: I am behind a corporate VPN.)
The exception message in the SchemaRegistry logs is:
[2021-11-04 00:34:22,492] INFO Logging initialized @1403ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
[2021-11-04 00:34:22,543] INFO Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer)
[2021-11-04 00:34:22,614] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.ApplicationServer)
[2021-11-04 00:35:23,007] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryException: Failed to get Kafka cluster ID
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1488)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:166)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:71)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.configureBaseApplication(SchemaRegistryRestApplication.java:90)
at io.confluent.rest.Application.configureHandler(Application.java:271)
at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:245)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)
Caused by: java.util.concurrent.TimeoutException
at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:180)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1486)
... 7 more
My schema-registry.properties file has the bootstrap URL set to
kafkastore.bootstrap.servers=PLAINTEXT://localhost:9092
I saw some posts saying it is the Schema Registry being unable to connect to the Kafka cluster URL, potentially because of the localhost address. I am fairly new to Kafka and basically just need this local setup to run a git repo that uses some topics/Kafka, so my questions:
How can I fix this? (I am behind a corporate VPN, but I figured that shouldn't affect this.)
Do I even need the Schema Registry?
I ended up just going with the local Docker setup instead, and the only change I had to make to the docker-compose YAML was the schema-registry port (I changed it to 8082 or 8084, I don't remember exactly, just an unused port not taken by another Confluent service listed in the docker-compose.yaml), and my local setup is working fine now.
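For reference, the schema-registry fragment of that compose file with only the host port remapped; a sketch, where the image tag and the service/broker names are assumptions based on Confluent's standard cp-all-in-one compose file:

schema-registry:
  image: confluentinc/cp-schema-registry:7.0.0
  depends_on:
    - broker
  ports:
    # host port moved off 8081 to avoid the local conflict; container port stays 8081
    - "8084:8081"
  environment:
    SCHEMA_REGISTRY_HOST_NAME: schema-registry
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:29092
    SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081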

ISPN000580: Failed to migrate persisted data - upgrading to jboss/keycloak 13.0.1

I am trying to upgrade jboss/keycloak 6.0.1 to 13.0.1, which is running as a StatefulSet in k8s. I have converted my standalone-ha.xml and I am getting the following error:
13:05:34,632 DEBUG [org.infinispan.persistence.manager.PersistenceManagerImpl] (ServerService Thread Pool -- 68) PersistenceManagerImpl encountered an exception during startup of stores: java.util.concurrent.CompletionException: org.infinispan.persistence.spi.PersistenceException: ISPN000580: Failed to migrate persisted data.
at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314)
at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319)
at java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1739)
at org.jboss.threads@2.4.0.Final//org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at org.jboss.threads@2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1990)
at org.jboss.threads@2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
at org.jboss.threads@2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1348)
at org.jboss.as.clustering.common@23.0.2.Final//org.jboss.as.clustering.context.ContextReferenceExecutor.execute(ContextReferenceExecutor.java:49)
at org.jboss.as.clustering.common@23.0.2.Final//org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.infinispan.persistence.spi.PersistenceException: ISPN000580: Failed to migrate persisted data.
at org.infinispan@11.0.9.Final//org.infinispan.persistence.file.SingleFileStore.migrateFromV1(SingleFileStore.java:373)
at org.infinispan@11.0.9.Final//org.infinispan.persistence.file.SingleFileStore.start(SingleFileStore.java:160)
at org.infinispan@11.0.9.Final//org.infinispan.persistence.support.NonBlockingStoreAdapter.lambda$start$0(NonBlockingStoreAdapter.java:108)
at java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1736)
... 7 more
Caused by: protostream.com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).
at org.infinispan.protostream@4.3.5.Final//protostream.com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:102)
at org.infinispan.protostream@4.3.5.Final//protostream.com.google.protobuf.CodedInputStream$ArrayDecoder.readTag(CodedInputStream.java:627)
at org.infinispan.protostream@4.3.5.Final//org.infinispan.protostream.impl.RawProtoStreamReaderImpl.readTag(RawProtoStreamReaderImpl.java:45)
at org.infinispan.protostream@4.3.5.Final//org.infinispan.protostream.WrappedMessage.readMessage(WrappedMessage.java:275)
at org.infinispan.protostream@4.3.5.Final//org.infinispan.protostream.ProtobufUtil.fromWrappedByteArray(ProtobufUtil.java:162)
at org.infinispan@11.0.9.Final//org.infinispan.marshall.persistence.impl.PersistenceMarshallerImpl.objectFromByteBuffer(PersistenceMarshallerImpl.java:155)
at org.infinispan@11.0.9.Final//org.infinispan.persistence.file.SingleFileStore.migrateFromV1(SingleFileStore.java:333)
... 10 more
Any idea how to tackle this error?
Are you trying to update your Keycloak system while instances are still running? That is not supported by Keycloak. It is recommended to shut down all instances and perform the migration with a single instance when upgrading major versions!
Between Keycloak 6 and 13 the underlying Infinispan versions have changed, and with them, at some point, the de-/serialization mechanism. Most probably that is the cause of your errors.
The error was due to the cache. To make it work, the cache needs to be migrated from the syntax Keycloak 6 uses to that of Keycloak 13. In our case, we preferred to start with a new cache.
Official solution from Red Hat:
Delete the saved web session data by removing all of the contents of the directory
$JBOSS_HOME/standalone/data/infinispan/web
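Since Keycloak runs as a StatefulSet here, one way to apply that is an initContainer that clears the directory before Keycloak starts; a sketch, assuming the jboss/keycloak image layout ($JBOSS_HOME = /opt/jboss/keycloak) and a hypothetical volume name:

initContainers:
  - name: clear-web-session-data
    image: busybox
    # wipe persisted web session data so the upgraded Infinispan starts clean
    command: ["sh", "-c", "rm -rf /opt/jboss/keycloak/standalone/data/infinispan/web/*"]
    volumeMounts:
      - name: keycloak-data   # hypothetical volume holding standalone/data
        mountPath: /opt/jboss/keycloak/standalone/data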

Unable to import sample data into Apache Atlas

I have installed Apache Atlas using Docker with the help of the below URL:
https://github.com/michalmiklas/atlas-docker
Now, while importing sample data into Apache Atlas using the below command,
bash-4.4# ./apache-atlas/bin/quick_start.py http://localhost:21000/
it throws the below error:
Exception in thread "main" org.apache.atlas.AtlasServiceException: Metadata service API org.apache.atlas.AtlasClientV2$API_V2@30f842ca failed with status 403 (Forbidden) Response Body ({"errorCode":"ATLAS-403-00-001","errorMessage":"bird is not authorized to perform create classification-def Dimension"})
at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:395)
at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:323)
at org.apache.atlas.AtlasBaseClient.callAPI(AtlasBaseClient.java:211)
at org.apache.atlas.AtlasClientV2.createAtlasTypeDefs(AtlasClientV2.java:227)
at org.apache.atlas.examples.QuickStartV2.createTypes(QuickStartV2.java:185)
at org.apache.atlas.examples.QuickStartV2.runQuickstart(QuickStartV2.java:141)
at org.apache.atlas.examples.QuickStartV2.main(QuickStartV2.java:126)
No sample data added to Apache Atlas Server.
Below is the full log for your reference:
./bin/apache-atlas/bin/quick_start.py http://localhost:21000/
log4j:WARN No such property [maxFileSize] in org.apache.log4j.PatternLayout.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.PatternLayout.
log4j:WARN No such property [maxFileSize] in org.apache.log4j.PatternLayout.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.PatternLayout.
log4j:WARN No such property [maxFileSize] in org.apache.log4j.PatternLayout.
Enter username for atlas :- bird
Enter password for atlas :-
Creating sample types:
Exception in thread "main" org.apache.atlas.AtlasServiceException: Metadata service API org.apache.atlas.AtlasClientV2$API_V2@30f842ca failed with status 403 (Forbidden) Response Body ({"errorCode":"ATLAS-403-00-001","errorMessage":"bird is not authorized to perform create classification-def Dimension"})
at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:395)
at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:323)
at org.apache.atlas.AtlasBaseClient.callAPI(AtlasBaseClient.java:211)
at org.apache.atlas.AtlasClientV2.createAtlasTypeDefs(AtlasClientV2.java:227)
at org.apache.atlas.examples.QuickStartV2.createTypes(QuickStartV2.java:185)
at org.apache.atlas.examples.QuickStartV2.runQuickstart(QuickStartV2.java:141)
at org.apache.atlas.examples.QuickStartV2.main(QuickStartV2.java:126)
No sample data added to Apache Atlas Server.
FYI, bird is a user in the admin user group, and I have also tried with the DATA_STEWARD and DATA_SCIENTIST user groups, but the result is the same.
You have to use an existing username and password to import data into Apache Atlas.
Default username: admin (case sensitive)
Password: admin
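For example, rerunning the quick-start command from the question with the default credentials:

bash-4.4# ./apache-atlas/bin/quick_start.py http://localhost:21000/
Enter username for atlas :- admin
Enter password for atlas :- admin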
Once you install Apache Atlas, first check the ZooKeeper server status and do not change any user configurations.
Thanks for your help

Keycloak 4.8.0 Error when choosing standalone-ha.xml as --server-config parameter

We have Keycloak 3.2.0 working on Docker.
When we run it, we add the args --server-config standalone-ha.xml,
e.g.
docker run foo bar jboss/keycloak:4.5.0.Final --server-config standalone-ha.xml
purely because we're running a few nodes against the same DB.
Upgrading to 4.5, the documentation here:
https://www.keycloak.org/docs/latest/server_installation/index.html#_standalone-ha-mode
says to also add
--server-config standalone-ha.xml
However, when I do that (from version 4.0 onwards), I get:
21:12:03,574 INFO [org.jboss.modules] (main) JBoss Modules version 1.8.6.Final
java.lang.IllegalArgumentException: WFLYSRV0191: Can't use both --server-config and --initial-server-config
at org.jboss.as.server.Main.assertSingleConfig(Main.java:395)
at org.jboss.as.server.Main.determineEnvironment(Main.java:169)
at org.jboss.as.server.Main.main(Main.java:96)
at org.jboss.modules.Module.run(Module.java:352)
at org.jboss.modules.Module.run(Module.java:320)
at org.jboss.modules.Main.main(Main.java:593)
21:12:03,973 FATAL [org.jboss.as.server] (main) WFLYSRV0239: Aborting with exit code 1
Now, if I run Keycloak WITHOUT --server-config and enter the container, ps aux shows it is running with standalone-ha.xml as the config.
But that is because we are migrating from a DB which previously had 3.2.0 installed.
How do I make sure that standalone-ha.xml is always selected by passing the --server-config parameter to choose the *-ha.xml configuration?
Thanks
It is a problem in Keycloak. Using -c instead of --server-config helps.
See https://issues.jboss.org/browse/KEYCLOAK-9393 for more details.
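Applied to the docker run invocation from the question (placeholders kept as-is):

docker run foo bar jboss/keycloak:4.5.0.Final -c standalone-ha.xml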