Keycloak - Export realm and user data

We are currently on Keycloak v8.0.2 installed as a Windows service using the Oracle database provider, and we are going to upgrade to Keycloak v16 using PostgreSQL. Currently I am trying to export the realm and user data using the command below.
sh standalone.sh \
-Dkeycloak.migration.action=export \
-Dkeycloak.migration.provider=singleFile \
-Dkeycloak.migration.file=keycloak-export.json \
-Djboss.http.port=8888 -Djboss.https.port=9999 \
-Djboss.management.http.port=7777
We have three realms, but the exported file contains only the master realm data (and not even the master realm's users). What is happening? How can we export and import the data while migrating to another version?
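For reference, the legacy WildFly-based distribution accepts a few more migration system properties that control which realms and users end up in the export; the sketch below relies on those documented properties, with the target directory chosen purely for illustration:
sh standalone.sh \
-Dkeycloak.migration.action=export \
-Dkeycloak.migration.provider=dir \
-Dkeycloak.migration.dir=/tmp/keycloak-export \
-Dkeycloak.migration.usersExportStrategy=SAME_FILE \
-Djboss.http.port=8888 -Djboss.https.port=9999 \
-Djboss.management.http.port=7777
Adding -Dkeycloak.migration.realmName=REALM_NAME would restrict the export to a single realm; leaving it out should export all realms.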

Related

Can't upload feature type to Geoserver REST API using Curl

GeoServer version 2.20.1
I am attempting to register a PostGIS table as a layer in GeoServer.
Here is my curl command in bash:
curl -v -u $GEOSERVER_ADMIN_USER:$GEOSERVER_ADMIN_PASSWORD \
-XPOST -H "Content-type: text/xml" \
-d "\
<featureType>\
<name>$dataset</name>\
<title>$dataset</title>\
<nativeCRS class='projected'>EPSG:4326</nativeCRS><srs>EPSG:4326</srs>\
<nativeBoundingBox>\
<minx>-94.0301461140306003</minx>\
<maxx>-91.0935619356926054</maxx>\
<miny>46.5128696410899991</miny>\
<maxy>47.7878144308049002</maxy>\
<crs class='projected'>EPSG:4326</crs>\
</nativeBoundingBox>\
</featureType>" \
http://geoserver:8080/geoserver/rest/workspaces/foropt/datastores/postgis/featuretypes
where $dataset is the name of the table.
Here is the error I am getting:
The request has not been applied because it lacks valid
authentication credentials for the target resource.
I have never seen this error before.
And I can't see how it's an issue with my credentials, since I am successfully performing other tasks (such as importing GeoTIFFs) within the same bash script using the same credentials. What is going on here?
In this situation, GeoServer is set up alongside PostGIS in a docker-compose arrangement.
Interestingly enough, when I first posted, I was using Postgres version 14 and PostGIS version 3.1.
When I revert to Postgres version 13, the error goes away (well, a new one crops up, but it seems to be a separate issue -- you know how it goes). ¯\_(ツ)_/¯
I'm not familiar enough with Postgres versions to say why reverting to version 13 made a difference (perhaps an authentication change in version 14, such as the new scram-sha-256 default for password_encryption?), but it worked for me.
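If the fix is just pinning the database to the version that worked, a minimal sketch of running the database container on Postgres 13 could look like this (the postgis/postgis image tag, container name, and credentials here are illustrative assumptions, not taken from the original compose file):
docker run -d --name postgis \
  -e POSTGRES_USER=geoserver \
  -e POSTGRES_PASSWORD=change-me \
  -p 5432:5432 \
  postgis/postgis:13-3.1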

How to migrate from internal postgres db to external postgres db in gitlab

Currently we have a single all-in-one Docker container for our production GitLab, where we are using the bundled Postgres and Redis, so everything is in the same container. We want to use an external Postgres database and a separate container for Redis as well, to follow production standards.
How can I migrate from the internal Postgres database to an external Postgres database? If anyone can provide the process and steps, that would be really helpful; we are new to this process.
Thank you everyone for your inputs,
PRS
You can follow the article "Migrating GitLab from internal to external PostgreSQL", which involves:
a database dump/reload, using pg_dumpall
sudo -u gitlab-psql /opt/gitlab/embedded/bin/pg_dumpall \
--username=gitlab-psql --host=/var/opt/gitlab/postgresql > /var/lib/pgsql/database.sql
sudo -u postgres psql -f /var/lib/pgsql/database.sql
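If the external PostgreSQL runs on a separate host rather than on the GitLab machine itself, the reload step would simply be pointed at that host; a sketch with an illustrative hostname:
psql --host=external-db.example.com --username=postgres -f /var/lib/pgsql/database.sql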
Note: you can also use a backup of the database, but only if the external PostgreSQL version matches the embedded one exactly.
setting the gitlab user's password
sudo -u postgres psql -c "ALTER USER gitlab ENCRYPTED PASSWORD '***' VALID UNTIL 'infinity';"
and modifying the GitLab configuration (/etc/gitlab/gitlab.rb):
# Disable the built-in Postgres
postgresql['enable'] = false
# Fill in the connection details
gitlab_rails['db_adapter'] = 'postgresql'
gitlab_rails['db_encoding'] = 'utf8'
gitlab_rails['db_host'] = '127.0.0.1'
gitlab_rails['db_port'] = 5432
gitlab_rails['db_database'] = "gitlabhq_production"
gitlab_rails['db_username'] = 'gitlab'
gitlab_rails['db_password'] = '***'
apply your changes:
gitlab-ctl reconfigure && gitlab-ctl restart
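To sanity-check that GitLab is now talking to the external database, something along these lines is a reasonable follow-up (output details vary by version):
sudo gitlab-rake gitlab:env:info
sudo gitlab-rake gitlab:check SANITIZE=true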
@VonC
Hi, let me describe the process I followed.
We currently have a single all-in-one Docker GitLab container using the bundled Postgres and Redis. To follow production standards we want to run separate Postgres and Redis instances for our production GitLab. We already had data in the bundled database, so we took a backup of the current GitLab (with bundled Postgres), which generated a .tar file. Next we changed gitlab.rb to point to the external Postgres database (same version); we could then connect to GitLab, but saw no data because the external database was fresh. Then we restored the backup while attached to the external Postgres database, and now we can see all the data. Can we do it this way? Our GitLab is now attached to the external Postgres and all the restored data is visible. Will this process work? Are there any downsides?
How is this process different from pg_dump and import?
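For comparison, a sketch of the backup/restore route described above, assuming the Omnibus Docker image and a container named gitlab (the container name and backup timestamp are placeholders; older releases use gitlab-rake gitlab:backup:create/restore instead of gitlab-backup):
docker exec -t gitlab gitlab-backup create
# edit gitlab.rb to point at the external PostgreSQL, then:
docker exec -t gitlab gitlab-ctl reconfigure
docker exec -t gitlab gitlab-backup restore BACKUP=TIMESTAMP_OF_BACKUP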

Scripting theme configuration for a realm

I have made a custom theme for Keycloak and I'd like to set a specific realm to use that theme for the login page and enable internationalization without using the admin console, mainly because I want to make it as automatic as the realm creation based on the JSON import.
It seems the JSON file is unable to handle theme configuration; is there any way to make this configuration without any human action?
You can use the Keycloak CLI to specify a theme for a realm. The Keycloak CLI executable (kcadm.bat or kcadm.sh) is located in the bin directory.
First, you need to login with admin credentials:
kcadm config credentials --server http://localhost:8080/auth --realm master --user admin --password ADMIN_PASSWORD
Then you need to update corresponding realm, setting its loginTheme attribute:
kcadm update realms/REALM_NAME -s "loginTheme=REALM_LOGIN_THEME_NAME"
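For the internationalization part of the question, the same update call can set the corresponding realm attributes; a sketch in which the attribute names come from the realm representation and the locale values are only examples:
kcadm update realms/REALM_NAME \
  -s "internationalizationEnabled=true" \
  -s 'supportedLocales=["en","de"]' \
  -s "defaultLocale=en"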

Creating a mysqldump file

Goal: migrate Google Cloud SQL First Generation to Second Generation.
Exporting Data from Cloud SQL is working fine.
https://cloud.google.com/sql/docs/backup-recovery/backing-up
But:
Note: If you are exporting your data for use in a Cloud SQL instance, you must use the instructions provided in Exporting data for Import into Cloud SQL. You cannot use these instructions.
So I get to this page:
Exporting Data for Import into Cloud SQL
https://cloud.google.com/sql/docs/import-export/creating-mysqldump-csv#mysqldump
This page describes how to create a mysqldump or CSV file from a MySQL database that does not reside in Cloud SQL.
The instructions are not working:
mysqldump --databases [DATABASE_NAME] -h [INSTANCE_IP] -u [USERNAME] -p \
--hex-blob --skip-triggers --set-gtid-purged=OFF --default-character-set=utf8 > [DATABASE_FILE].sql
mysqldump: unknown variable 'set-gtid-purged=OFF'
How do I create a mysqldump for import into Cloud SQL Second Generation?
thanks in advance,
Sander
edit:
Using Google Cloud SQL first generation via the Google Cloud console.
I removed set-gtid-purged=OFF.
Result:
Enter password:
mysqldump: Got error: 2013: Lost connection to MySQL server at 'reading initial communication packet', system error: 0 when trying to connect
Regarding set-gtid-purged: please verify which mysql client version you have installed. Many operating systems ship the MariaDB version, which does not support this flag (since their implementation of GTID is different).
The official Oracle mysql client has supported this flag since 5.6.9.
To verify your package run:
mysqldump --version
If you get this, you don't have the official client:
mysqldump Ver 10.16 Distrib 10.1.41-MariaDB, for debian-linux-gnu (x86_64)
The official client would be something like this:
mysqldump Ver 10.13 Distrib 5.7.27, for Linux (x86_64)
If you want to change the version, you can use their official repository.
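On a Debian/Ubuntu host, switching to the official client would look roughly like this; the mysql-apt-config package version changes over time, so download the current .deb from https://dev.mysql.com/downloads/repo/apt/ first (the filename below is a placeholder):
sudo dpkg -i mysql-apt-config_VERSION_all.deb
sudo apt-get update
sudo apt-get install mysql-client
mysqldump --version   # should now report the Oracle distribution, not MariaDB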

How to update table structures and stored procedures from Development to Production server in PostgreSQL?

I have three PostgreSQL servers:
Development,
Practice
Production
The Production server has all the data of interest and is in heavy use. However, after many months of development, the Development server has some new tables and different table structures as well as new and changed stored procedures.
Is there a way of automating detection of which tables and stored procedures are different between the Development and Production servers so as to automate changing the Production server to match the new/changed tables and procedures from the Development server? Or am I stuck going through this table by table and stored procedure by stored procedure manually?
Of course, the goal is to retain the table data from the Production server but with the tables and procedures of the Development server.
Here is a short step-by-step tutorial for using Liquibase with PostgreSQL to generate a database diff in SQL format.
I assume you have Java installed.
Download Liquibase itself.
Download the latest version of the PostgreSQL JDBC driver from http://jdbc.postgresql.org and put it into the lib directory of the Liquibase home.
Run:
./liquibase --driver=org.postgresql.Driver \
--url=jdbc:postgresql://host1:port1/dbname1 \
--username=user1 \
--password=pass1 \
diffChangelog \
--referenceUrl=jdbc:postgresql://host2:port2/dbname2 \
--referenceUsername=user2 \
--referencePassword=pass2 > db-changelog.xml
Here the reference database is the one where your changes were made (the Development server).
Finally, the last step to get the SQL script is to run (against the db at host1:port1):
./liquibase --driver=org.postgresql.Driver \
--url=jdbc:postgresql://host1:port1/dbname1 \
--username=user1 \
--password=pass1 \
--changeLogFile=db-changelog.xml \
updateSql > changes.sql
Before you apply the generated SQL script to the production database you should clean it up - Liquibase introduces some metadata tables (databasechangelog and databasechangeloglock) - but that's a simple search and delete.
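To locate the statements that need to be removed before applying the script, a quick (purely illustrative) check:
grep -ni 'databasechangelog' changes.sql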
That's it.
A final note, as @a_horse_with_no_name said: spend some time learning how to keep your db changes in version control, and avoid executing them manually (in favor of Liquibase or Flyway).