I am upgrading the Keycloak version from 14.0.0 to 17.0.1 (keycloak-legacy) in our auth application. Our database is PostgreSQL. As part of the upgrade, I am testing the automatic database migration.
According to the Keycloak documentation:
Creating an index on huge tables with millions of records can easily take a huge amount of time and potentially cause major service disruption on upgrades. For those cases, we added a threshold (the number of records) for automated index creation. By default, this threshold is 300000 records. When the number of records is higher than the threshold, the index is not created automatically, and there will be a warning message in server logs including SQL commands which can be applied later manually.
To change the threshold, set the index-creation-threshold property for the default connections-liquibase provider:
kc.[sh|bat] start --spi-connections-liquibase-default-index-creation-threshold=300000
Following the documentation's suggestion, I set the indexCreationThreshold value to 1 and expected the warning message and the SQL commands to appear in the server logs during the automatic database migration. However, I see neither the warning message nor the SQL that needs to be executed manually.
I would really appreciate it if anyone could provide a pointer on this.
I am migrating an Oracle database to PostgreSQL.
During the migration I came across the following query on the Oracle side.
Oracle Query:
SELECT
TRIM(value) AS val
FROM v$parameter
WHERE name = 'job_queue_processes';
I just want to know how we can get the maximum number of job slaves per instance that can be created for execution on the PostgreSQL side.
I have created the pg_cron extension and set up the required jobs so far. But one of the functions uses the above query on the Oracle side, so I want to convert it to PostgreSQL.
The documentation is usually a good source of information.
Important: By default, pg_cron uses libpq to open a new connection to the local database.
In this case, there is no specific limit. It would be limited in the same way other user connections are limited: mostly by max_connections, but possibly lowered from that for particular users or particular databases via ALTER ROLE ... CONNECTION LIMIT or ALTER DATABASE ... CONNECTION LIMIT. You could create a user specifically for cron if you wanted to limit its connections separately, then grant that user the privileges of the other roles it will operate on behalf of. I don't know what pg_cron does if the limit is reached, whether it deals with it gracefully or not.
Alternatively, pg_cron can be configured to use background workers. In that case, the number of concurrent jobs is limited by the max_worker_processes setting, so you may need to raise that.
Note that the max number of workers may have to be shared with parallel execution workers and maybe with other extensions.
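As a rough PostgreSQL-side counterpart to the v$parameter lookup, you could read the relevant limits from pg_settings; the cron_runner role below is purely a hypothetical illustration of the per-user limit mentioned above:
-- the limits that cap pg_cron jobs (connection mode and background-worker mode)
SELECT name, setting
FROM pg_settings
WHERE name IN ('max_connections', 'max_worker_processes');

-- hypothetical: a dedicated role for pg_cron jobs with its own connection limit
CREATE ROLE cron_runner LOGIN CONNECTION LIMIT 5;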
I am architecting a database where I expect to have thousands of tenants, with some data shared between tenants. I am currently planning on using Postgres with row level security for tenant isolation. I am also using knex and Objection.js to model the database in Node.js.
Most of the tutorials I have seen look like this, where you create a separate knex connection per tenant. However, I've run into a problem on my development machine: after I create ~100 connections, I receive this error: "remaining connection slots are reserved for non-replication superuser connections".
I'm investigating a few possible solutions/work-arounds, but I was wondering if anyone has been able to make this setup work the way I'm intending. Thanks!
Perhaps one solution might be to cache a limited number of connections, and destroy the oldest cached connection when the limit is reached. See this code as an example.
That code should probably be improved, however, to use a Map as the knexCache instead of an object, since a Map remembers the insertion order.
We are facing a stale read issue for some percentage of users of our MongoDB + Spring Framework based app. It's a very low volume app with fewer than 10K hits a day and a record count of less than 100K. Following is our app tech stack.
MongoDB version: v3.2.8
compile group: 'org.springframework.data', name: 'spring-data-mongodb', version:'1.5.5.RELEASE'
compile group: 'org.mongodb', name: 'mongo-java-driver', version:'2.13.2'.
Users reported that after a new record insert or update, the new value is not available to read for a certain duration, say half an hour, after which the latest values are reflected and available for reading across all users. However, when connecting with the mongo shell, we are able to see the latest values in the DB.
We confirmed that there is no application-level cache involved in the reported flows. For the JSPs we also added a timestamp on the reported pages and tried private browsing mode to rule out any browser caching issue.
We also tried changing the write concern in MongoClient and MongoTemplate, but there was no change in behavior:
MongoClientOptions.builder().writeConcern(WriteConcern.FSYNCED).build(); //Mongo Client
mongoTemplate.setWriteConcern(WriteConcern.FSYNCED); // Spring Mongo template
mongoTemplate.setWriteResultChecking(WriteResultChecking.LOG);
Also, the DB logs look clean; no exceptions or errors seem to be generated in the MongoDB logs.
We also didn't introduce any new library or DB changes, and this setup was working perfectly for the past 2 years. Any pointers would be helpful.
NOTE: It's a single MongoDB instance with no slaves or secondaries configured.
Write concern does not affect reads.
Most likely you have some cache in your application or on your user's system (like their web browser) that you are overlooking.
The second most likely reason is that you are reading from secondaries (i.e. using anything other than the primary read preference).
I’m using Firebird 2.5 with FlameRobin and ran into a strange issue yesterday when creating a simple sequence / generator with the following SQL:
CREATE GENERATOR MY_GEN_NAME_HERE;
This gave the following error message:
Error: *** IBPP::SQLException ***
Context: Statement::Execute( CREATE GENERATOR MY_GEN_NAME_HERE)
Message: isc_dsql_execute2 failed
SQL Message : -607
This operation is not defined for system tables.
Engine Code : 335544351
Engine Message :
unsuccessful metadata update
DEFINE GENERATOR failed
arithmetic exception, numeric overflow, or string truncation
numeric value is out of range
At trigger 'RDB$TRIGGER_6'
According to the Firebird FAQ, this means that the maximum number of generators in the database has been reached. However, the database only contains ~250 actual generators, and according to the manual there should be 32767 available.
The FAQ suggests that a backup and restore will fix the issue, and this did indeed work, but ideally I’d like to understand why it happened so I can prevent it next time.
I'm aware that even failed generator creations can increment the counter, so I believe this must be the problem. It's highly unlikely to be 'manual' failed generator creation statements, as the database is not in production use yet and there are only two of us working with it for development. I therefore think it must be something attempting to create generators programmatically, although nothing we've written should be doing this as far as I can see. I can't rule out the industry ERP system we're using with the database, and we have raised it with the supplier, but I'd be highly surprised if it's that either.
Has anyone run into this issue before, is there anything else which can affect the generator counter?
A sequence (generator) has a 'slot' on the generator data page(s) that stores its current value. This slot number (RDB$GENERATOR_ID) is assigned when the generator is created (using an internal sequence).
When you drop a sequence, its slot is not reused; the slot numbers only increase, until the maximum number of slots has been assigned (and possibly dropped).
In Firebird 2.1 and earlier, this would be the end: having created (and dropped) 32767 sequences would mean you could no longer create sequences. So, if your application is creating (and dropping) a lot of sequences, you will eventually run out of slots, even if you only have 250 'live' sequences.
The only way to reclaim those slots, is by backing up and restoring the database. During the restore, the sequences will be created anew (with the start value from the backup) and get a new slot assigned. These slots will be assigned contiguously, so previously existing gaps disappear, and you will then have unassigned slots available.
However, this was changed in Firebird 2.5 with CORE-1544: Firebird will now automatically recycle unused slots. This change only works with ODS 11.2 or higher databases (ODS = On-Disk Structure). ODS 11.2 is the on-disk structure for databases created with Firebird 2.5.
If you get this error, then probably your database is (was) still ODS 11.1 (the Firebird 2.1 on-disk structure) or earlier. Firebird 2.5 can read earlier on-disk structures. Upgrading the ODS of a database is a matter of backing up and restoring the database. Given you already did this, I assume your database is now ODS 11.2, and the error should no longer occur (unless you actually have 32767 sequences in your database).
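As a rough sanity check (assuming the standard RDB$GENERATORS system table, which also includes a handful of system generators, so the numbers are approximate), you can compare the number of existing sequences with the highest assigned slot number:
SELECT COUNT(*) AS live_generators,
       MAX(RDB$GENERATOR_ID) AS highest_assigned_slot
FROM RDB$GENERATORS;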
My PostgreSQL database is updated each night.
At the end of each nightly update, I need to know what data changed.
The update process is complex, taking a couple of hours and requires dozens of scripts, so I don't know if that influences how I could see what data has changed.
The database is around 1 TB in size, so any method that requires starting a temporary database may be very slow.
The database is an AWS RDS instance. I have automated backups enabled (these are different from RDS snapshots, which are user initiated). Is it possible to see the difference between two RDS automated backups?
I do not know if it is possible to see the difference between RDS snapshots, but in the past we tested several solutions for a similar problem. Maybe you can take some inspiration from them.
The obvious solution is of course an auditing system. This way you can see in a relatively simple way what was changed, down to column values depending on the granularity of your auditing system. Of course there is an impact on your application due to the auditing triggers and the queries against the audit tables.
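A minimal sketch of the trigger-based variant could look like the following; the table and column names are hypothetical, and it assumes PostgreSQL 9.5+ for to_jsonb:
CREATE TABLE audit_log (
    id         bigserial PRIMARY KEY,
    table_name text        NOT NULL,
    operation  text        NOT NULL,
    changed_at timestamptz NOT NULL DEFAULT now(),
    old_row    jsonb,
    new_row    jsonb
);

CREATE OR REPLACE FUNCTION audit_trigger_fn() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO audit_log (table_name, operation, new_row)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(NEW));
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO audit_log (table_name, operation, old_row, new_row)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD), to_jsonb(NEW));
    ELSE  -- DELETE
        INSERT INTO audit_log (table_name, operation, old_row)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD));
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

-- attach it to each table you want to track ("my_table" is a placeholder)
CREATE TRIGGER my_table_audit
AFTER INSERT OR UPDATE OR DELETE ON my_table
FOR EACH ROW EXECUTE PROCEDURE audit_trigger_fn();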
Another possibility: for tables with primary keys you can store the values of the primary key and the xmin and ctid hidden system columns (https://www.postgresql.org/docs/current/static/ddl-system-columns.html) for each row before the update and compare them with the values after the update. But this way you can only identify changed / inserted / deleted rows, not changes to individual columns.
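Roughly like this, assuming a table my_table with primary key id (both names are placeholders):
-- before the nightly update: snapshot the current row versions
CREATE TABLE my_table_snapshot AS
SELECT id, xmin::text AS xmin_before, ctid AS ctid_before
FROM my_table;

-- after the update: rows that are new or were modified
SELECT t.id
FROM my_table t
LEFT JOIN my_table_snapshot s ON s.id = t.id
WHERE s.id IS NULL
   OR s.xmin_before <> t.xmin::text
   OR s.ctid_before <> t.ctid;

-- after the update: rows that were deleted
SELECT s.id
FROM my_table_snapshot s
LEFT JOIN my_table t ON t.id = s.id
WHERE t.id IS NULL;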
You can also set up a streaming replica with replication slots (and, to be on the safe side, WAL archiving as well). Then stop replication on the replica before the updates and compare the data after the updates using dblink selects. But these queries can be very heavy.
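For example, something along these lines, where the connection string, table and column are again placeholders; it lists ids that exist on the updated primary but are missing on the paused replica:
CREATE EXTENSION IF NOT EXISTS dblink;

SELECT id
FROM my_table
EXCEPT
SELECT id
FROM dblink('host=replica-host dbname=mydb user=readonly',
            'SELECT id FROM my_table') AS r(id bigint);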