Does anyone have any thoughts on how one might import a very large number of users into Keycloak?
We are in the process of upgrading from 2.5.5 to 4.0.0 and have had to switch from MongoDB to MySQL. We have been able to export our user base, but we now have 280k+ users to import back into Keycloak. The import process takes 25 minutes for one file of 500 users, which doesn't really seem practical: at that rate it would take approximately 9-10 days to import the user base, even running 24/7.
Any thoughts or ideas would be appreciated.
I realize I'm a little late to the party here...
Keycloak 8 (and newer) has a mechanism for bulk importing users via a .json file: https://www.keycloak.org/docs/8.0/server_admin/index.html#_export_import
If you have some sort of mechanism for dumping your existing users to a .json file, it makes the import reasonably easy.
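If you are on the legacy WildFly-based distribution, the import from that file can be triggered at server startup via the keycloak.migration.* system properties described in those docs. A minimal sketch (the file path is a placeholder, and OVERWRITE_EXISTING is just one of the available strategies):
bin/standalone.sh \
  -Dkeycloak.migration.action=import \
  -Dkeycloak.migration.provider=singleFile \
  -Dkeycloak.migration.file=/tmp/realm-with-users.json \
  -Dkeycloak.migration.strategy=OVERWRITE_EXISTING
The singleFile provider expects one JSON file containing the realm and its users; the dir provider instead reads a directory where users are split across multiple files, which is friendlier for very large user bases.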
You can use the Keycloak Admin REST API's partialImport endpoint.
First, you need to get an access token. You can use your admin user or a client with the manage-realm role assigned:
access_token=`curl http://localhost:8080/auth/realms/my-realm/protocol/openid-connect/token -XPOST -d 'grant_type=client_credentials' -u 'admin-client:admin-secret' | jq -r .access_token`
Then you can import an array of users
curl -X POST -H "Authorization: Bearer $access_token" -H 'Accept: application/json' -H 'Content-Type: application/json' -d '{"ifResourceExists":"SKIP","users":[{"username":"jose.perez","email":"jose.perez@gmail.com","firstName":"Jose","lastName":"Perez","emailVerified":true,"enabled":true}]}' http://localhost:8080/auth/admin/realms/my-realm/partialImport
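With 280k+ users, you will want to batch the payloads rather than send one giant request. A rough bash sketch, assuming a users.json file shaped like {"users":[...]} and the same realm and base URL as above (the batch size of 500 is arbitrary, and the token may expire during a long run, so re-fetch it periodically):
total=$(jq '.users | length' users.json)
for ((i=0; i<total; i+=500)); do
  j=$((i+500))
  # take a 500-user slice and wrap it in a partialImport payload
  jq -c --argjson i "$i" --argjson j "$j" \
     '{ifResourceExists: "SKIP", users: .users[$i:$j]}' users.json |
    curl -X POST -H "Authorization: Bearer $access_token" \
         -H 'Content-Type: application/json' \
         -d @- \
         http://localhost:8080/auth/admin/realms/my-realm/partialImport
done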
Related
Geoserver version 2.20.1
I am attempting to register a PostGIS table as a layer in Geoserver.
Here is my curl command in bash:
curl -v -u $GEOSERVER_ADMIN_USER:$GEOSERVER_ADMIN_PASSWORD \
-XPOST -H "Content-type: text/xml" \
-d "\
<featureType>\
<name>$dataset</name>\
<title>$dataset</title>\
<nativeCRS class='projected'>EPSG:4326</nativeCRS><srs>EPSG:4326</srs>\
<nativeBoundingBox>\
<minx>-94.0301461140306003</minx>\
<maxx>-91.0935619356926054</maxx>\
<miny>46.5128696410899991</miny>\
<maxy>47.7878144308049002</maxy>\
<crs class='projected'>EPSG:4326</crs>\
</nativeBoundingBox>\
</featureType>" \
http://geoserver:8080/geoserver/rest/workspaces/foropt/datastores/postgis/featuretypes
where $dataset is the name of the table.
Here is the error I am getting:
The request has not been applied because it lacks valid
authentication credentials for the target resource.
I have never seen this error before.
And I can't see how it's an issue with my credentials, since I am successfully performing other tasks (such as importing GeoTIFFs) within the same bash script using the same credentials. What is going on here?
In this situation, GeoServer is set up alongside PostGIS in a docker-compose arrangement.
Interestingly enough, when I first posted, I was using Postgres version 14, PostGIS version 3.1.
When I revert back to using Postgres version 13, the error goes away (well, a new one crops up, but it seems to be a separate issue; you know how it goes). ¯\_(ツ)_/¯
I'm not familiar enough with Postgres versions to say what difference reverting back to version 13 made (maybe there are security-related changes in version 14?), but it worked for me.
I am trying to push a schema to a Schema Registry with an explicit curl command.
curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" --data '{"schema" : {"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}' http://localhost:8081/subjects/avro-test/versions/
I am getting the error below:
{"error_code":500,"message":"Internal Server Error"}
Note: I am able to read data from the existing subject, but I get the same error when pushing to it.
The schema needs to be string-escaped; the value of "schema" must be a JSON string, not a nested JSON object.
For example, starting out:
POST -d '{"schema": "{\"type\":\"record\"
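A full, runnable version of the command from the question (same subject and local Schema Registry assumed) would then look like this, with every inner quote escaped so the whole schema is passed as one JSON string:
curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"schema": "{\"type\":\"record\",\"name\":\"myrecord\",\"fields\":[{\"name\":\"f1\",\"type\":\"string\"}]}"}' \
  http://localhost:8081/subjects/avro-test/versions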
If you can, install the jq tool and create an AVSC file instead; that would help - see my comment here
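A sketch of that approach, assuming a reasonably recent jq and a hypothetical myrecord.avsc file holding the raw schema:
# wrap the raw .avsc contents in {"schema": "..."} and POST it
jq -Rsc '{schema: .}' myrecord.avsc | \
  curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
       --data @- http://localhost:8081/subjects/avro-test/versions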
I want to store results from Coverity® in InfluxDB, and I was wondering: does Coverity have a REST API?
If you're only trying to dump data into InfluxDB, you can curl data from the REST API and insert the resulting JSON into the database. I do something similar, but in CSV format.
Create a view in Coverity ('Issues: By Snapshot') that contains all your defects.
Curl the data from the Coverity view.
JSON format:
curl --user <userid>:<password> \
"http://<coverity_url>/api/viewContents/issues/v1/<View Name>?projectId=<project ID>&rowCount=-1"
CSV format:
curl --header "Accept: text/csv" --user <userid>:<password> \
"http://<coverity_url>/api/viewContents/issues/v1/<View Name>?projectId=<project ID>&rowCount=-1"
Example:
If you created a view 'My Defects' in project 'My Project' the command would be
curl --user <userid>:<password> "http://<coverity_url>/api/viewContents/issues/v1/My%20Defects?projectId=My%20Project&rowCount=-1"
In the above URL:
%20 -- URL-encoded space
rowCount=-1 -- download all rows in the view; you can set it to a desired limit.
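On the InfluxDB side, one way to load the dump (assuming InfluxDB 1.x running locally with a database named coverity; the measurement, tag, and field names here are made up) is to convert each row into line protocol and POST it to the /write endpoint:
# write one point; in practice you would batch many lines per request
curl -XPOST 'http://localhost:8086/write?db=coverity' \
  --data-binary 'defects,project=MyProject,view=MyDefects total=42i'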
Not really, no.
There is a very limited REST API, but it only covers a few very specific things. I'd recommend you use cov-manage-im where you can and only use the SOAP API if you need something more.
cov-manage-im can help; it can be used to retrieve defects for specific projects and streams. cov-manage-im --help can give you more info:
cov-manage-im --host youcovhostname --user yourusername --password yourpassword --mode defects --show --project yourprojectname
I created an instance of the IBM Graph service on Bluemix and created some vertices. When I try to issue a Gremlin query to find one of the vertices I created, I get an "Internal Error".
Here's the query I'm using
Create the Vertex
curl -u username-password -H 'Content-Type: application/json' -d '{ "label":"movie","properties":{"Name": "Million Dollar Baby","Type": "Movie"} }' -X POST "http://../g/vertices"
Response
{"requestId":"07f29cea-25b3-4305-b74b-540466206872","status":{"message":"","code":200,"attributes":{}},"result":{"data":[{"id":8336,"label":"movie","type":"vertex","properties":{"Type":[{"id":"36a-6fk-1l1","value":"Movie"}],"Name":[{"id":"2s2-6fk-sl","value":"Million Dollar Baby"}]}}],"meta":{}}}
Query whether the vertex has a Type property 'movie'
curl -u username-password -H 'Content-Type: application/json' -d "{\"gremlin\": \"def g = graph.traversal(); g.V().has('Type','movie')\"}" -X POST "http://../g/gremlin"
Response (Error)
{"code":"InternalError","message":""}
IBM Graph requires users to create indexes for any property that they are going to issue queries against. In this case 'Type' is a property and is included in a gremlin query.
You need to create an index using the /schema endpoint, which is provided by the IBM Graph service in Bluemix.
An example of this is provided in the Service Getting Started guide
http://ibm-graph-docs.ng.bluemix.net/gettingstarted.html
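As a rough illustration only (check the exact field names against the getting-started guide above), the /schema payload declares the property key and a composite index over it:
curl -u username-password -H 'Content-Type: application/json' -X POST "http://../schema" -d '{
  "propertyKeys": [ { "name": "Type", "dataType": "String", "cardinality": "SINGLE" } ],
  "vertexIndexes": [ { "name": "vByType", "propertyKeys": ["Type"], "composite": true, "unique": false } ]
}'
With the index in place, the has('Type', ...) traversal from the question should return results instead of an internal error.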
I've been using Parse from an iOS app and wanted to fetch some of the data from a web app using the REST API. Following the docs, I tried this on the command line:
curl -X GET -H "X-Parse-Application-Id: MYAPPID" \
-H "X-Parse-REST-API-Key: MYAPIKEY" \
-G --data-urlencode 'limit=1' \
https://api.parse.com/1/classes/MyClass
However, it isn't returning properties for all of the columns in my parse app.
What could be happening? It's possible there is some configuration in Parse, but I can't find it. There are no special security settings for that table.
I figured this out myself. The default query returns the oldest records first. I believe some columns were added later, and there must be no data in the early records. When I request the data ordered by most recent first, then I see all of the columns:
curl -X GET -H "X-Parse-Application-Id: MYAPPID" \
-H "X-Parse-REST-API-Key: MYAPIKEY" \
-G --data-urlencode 'limit=1' \
--data-urlencode 'order=-createdAt' \
https://api.parse.com/1/classes/MyClass