Keycloak file import using admin CLI

I am working on Keycloak 4.1.0 using the admin CLI (kcadm).
According to the documentation,
bin/standalone.sh -Dkeycloak.migration.action=import
is the command used to import, but running it only gives
syntax not found
Is there any other way this could be done?
Thank you.

You can check the Keycloak documentation for importing and exporting.
The import file should be in JSON format.
bin/standalone.sh -Dkeycloak.migration.action=import \
  -Dkeycloak.migration.provider=singleFile \
  -Dkeycloak.migration.file=import_data.json \
  -Dkeycloak.migration.strategy=OVERWRITE_EXISTING

Related

How to get the performance of a MongoDB cluster from logs using MongoDB keyhole?

I have installed MongoDB keyhole on my Ubuntu server. I am trying to analyze the performance of a MongoDB cluster from the log file using the command below.
keyhole --loginfo log_file[.gz] [--collscan] [-v]
But the problem is I am getting the error below, even though the log file is in the same directory where I am running the command. Can anyone please help me with this?
2022/10/12 11:20:45 open logfilename_mongodb.log.gz.[gz]: no such file or directory
I have fixed the issue with the below command format.
./keyhole -loginfo -v ~/Downloads/logfilepath.log
Glancing at the Logs Analytics readme for the project, it looks like you've got a simple syntax issue here. The [] characters are intended to indicate optional arguments/settings to use when running keyhole.
Have you tried a syntax similar to this?
keyhole --loginfo log_file --collscan -v

Connecting to AWS PostgreSQL from Neomodel Django

I am trying to use the neomodel package in my Django code, which is designed as a back-end service. The problem I am facing is that I have already established a PostgreSQL connection from Django, but I am having difficulty using the same database for neomodel. From the official neomodel website, I can see:
from neomodel import db
db.set_connection('bolt://neo4j:neo4j@localhost:7687')
Is there any feasible solution for me to connect an external database to neomodel for graph analytics? Any help would be appreciated.
Thanks,
Winston
Try to use
from neomodel import db as neodb
neodb.set_connection('bolt://neo4j:neo4j@localhost:7687')
This code imports the object called db from the neomodel namespace into the current namespace and gives it the alias neodb.
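For completeness, here is a minimal sketch of using that connection; the Person model and its properties are purely illustrative and not part of the original question:
from neomodel import StructuredNode, StringProperty, IntegerProperty, db as neodb

# Point neomodel at the Neo4j bolt endpoint (illustrative credentials/host)
neodb.set_connection('bolt://neo4j:neo4j@localhost:7687')

# Hypothetical node model, only to demonstrate the API
class Person(StructuredNode):
    name = StringProperty(unique_index=True)
    age = IntegerProperty(default=0)

# Create a node and run a raw Cypher query over the same connection
Person(name='Alice', age=30).save()
results, meta = neodb.cypher_query('MATCH (p:Person) RETURN p.name')
Note that neomodel speaks the bolt protocol to a Neo4j server, so it runs alongside your PostgreSQL connection from Django rather than reusing it.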

Using psycopg2 directly on Google AppEngine

When using Google App Engine Flexible + Python, how can I use psycopg2 directly (without SQLAlchemy) to access a Cloud SQL PostgreSQL database?
Hello myselfhimself,
Here is a solution:
In your app.yaml, add an environment variable, imitating the SQLAlchemy URI from the Google App Engine Flexible Python Cloud SQL documentation but without the psycopg2+ prefix:
env_variables:
  PSYCOPG2_POSTGRESQL_URI: postgresql://user:password@/databasename?host=/cloudsql/project-name:region:database-instance-name
In any Python file to be deployed and run, pass that environment variable to psycopg2's connect call directly (a fuller sketch follows after these steps). This leverages psycopg2.connect's ability to hand the URI straight to the underlying libpq client library (this might not work with older PostgreSQL versions).
import os
import psycopg2
conn = psycopg2.connect(os.environ['PSYCOPG2_POSTGRESQL_URI'])
When working locally with the Cloud SQL Proxy tool, make sure you set the URI environment variable first if your local server is not aware of app.yaml:
export PSYCOPG2_POSTGRESQL_URI="postgresql://user:password@/databasename?host=/cloudsql/project-name:region:database-instance-name"
./cloud_sql_proxy -instances=project-name:region:database-instance-name=tcp:5432
#somewhat later:
python myserver.py
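Once the connection is established, queries go through the usual psycopg2 cursor API. A minimal sketch, assuming the environment variable above is set (the SELECT is purely illustrative):
import os
import psycopg2

# The URI comes from app.yaml in production, or from the exported variable locally
conn = psycopg2.connect(os.environ['PSYCOPG2_POSTGRESQL_URI'])
with conn.cursor() as cur:
    cur.execute('SELECT version()')  # any query; version() just proves the connection works
    print(cur.fetchone())
conn.close()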
I hope it will work for you too :)

How to use read_gbq or other bq in IPython to access datasets hosted in BigQuery

I am using the IPython notebook to read the Google BigQuery public dataset for natality.
I have installed the Google API client with
easy_install --upgrade google-api-python-client
However, it still does not detect the installed API.
Does anyone have an IPython notebook to share on accessing the public dataset and loading it into a dataframe in IPython?
import pandas as pd
projectid = "xxxx"
data_frame = pd.read_gbq('SELECT * FROM xxxx', project_id = projectid)
303 if not _GOOGLE_API_CLIENT_INSTALLED:
--> 304 raise ImportError('Could not import Google API Client.')
305
306 if not _GOOGLE_FLAGS_INSTALLED:
ImportError: Could not import Google API Client
I have shared the IPython notebook used at
http://nbviewer.ipython.org/urls/dl.dropbox.com/s/d77u2xarscagw0b/BigQuery_Trial8.ipynb?dl=0
Additional info:
I am running on a server with a Docker instance used for the IPython server.
I have run the curl https://sdk.cloud.google.com | bash installation on the Linux server.
I have tried to run some of the shared notebooks:
nbviewer.ipython.org/gist/fhoffa/6459195
or nbviewer.ipython.org/gist/fhoffa/6472099
However I also get
ImportError: No module named bq
I suspect it is a simple case of missing dependencies.
Anyone who has clues, help is welcome.
As I said here: https://stackoverflow.com/a/31708375/2533394
I solved the problem with this:
pip install --force-reinstall uritemplate.py
Make sure your Pandas is version 0.17 or higher:
pip install -U pandas
You can check with:
import pandas as pd
pd.__version__
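With the dependencies sorted out, a minimal read_gbq call against the public natality sample might look like the sketch below; the project id is a placeholder and the bracketed legacy-SQL table reference is an assumption about the dataset name:
import pandas as pd

projectid = "your-project-id"  # placeholder: your own Google Cloud project id
# Legacy SQL reference to the public natality sample (assumed table name)
df = pd.read_gbq('SELECT * FROM [publicdata:samples.natality] LIMIT 10',
                 project_id=projectid)
print(df.head())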

MongoLab syntax error on import command

I'm trying to import data into a MongoLab-hosted Mongo database. I keep getting syntax errors. I've used the Tools -> Pre-filled commands, but it's not working.
Actually, when I copy/paste from the pre-filled example, the % doesn't show in the command line tool. Should it be a '%' as it shows in the 'helper', or is it an error in HTML encoding?
I've got no experience with MongoLab, nor with MongoDB for that matter. Can anyone advise me about MongoLab as a hosting service for my Mongo DB?
Thanks