Firebird 2.5 ‘DEFINE GENERATOR failed’, seemingly due to reaching the database generator limit, but the actual count is nowhere near that limit - firebird

I’m using Firebird 2.5 with FlameRobin and ran into a strange issue yesterday when creating a simple sequence / generator with the following SQL:
CREATE GENERATOR MY_GEN_NAME_HERE;
This gave the following error message:
Error: *** IBPP::SQLException ***
Context: Statement::Execute( CREATE GENERATOR MY_GEN_NAME_HERE)
Message: isc_dsql_execute2 failed
SQL Message : -607
This operation is not defined for system tables.
Engine Code : 335544351
Engine Message :
unsuccessful metadata update
DEFINE GENERATOR failed
arithmetic exception, numeric overflow, or string truncation
numeric value is out of range
At trigger 'RDB$TRIGGER_6'
According to the Firebird FAQ, this means that the maximum number of generators in the database has been reached. However, the database only contains ~250 actual generators, and according to the manual there should be 32767 available.
The FAQ suggests that a backup and restore will fix the issue, and this did indeed work, but ideally I’d like to understand why it happened so I can prevent it next time.
I’m aware that even failed generator creations can increment the counter, so I believe this must be the problem. It’s highly unlikely to be ‘manual’ failed generator creation statements, as the database is not in production use yet and there are only two of us working with it for development. I therefore think something must be attempting to create generators programmatically, although nothing we've written should be doing this as far as I can see. I can’t rule out the industry ERP system we’re using with the database, and we have raised it with the supplier, but I’d be highly surprised if it were that either.
Has anyone run into this issue before? Is there anything else that can affect the generator counter?

A sequence (generator) has a 'slot' on the generator data page(s) that stores its current value. This slot number (RDB$GENERATOR_ID) is assigned when the generator is created (using an internal sequence).
When you drop a sequence, its slot is not freed for reuse; the internal counter that assigns slot numbers only ever increases, until the maximum number of slots has been assigned (and possibly dropped).
In Firebird 2.1 and earlier, this would be the end of it: having created (and dropped) 32767 sequences would mean you could no longer create sequences. So, if your application is creating (and dropping) a lot of sequences, you will eventually run out of slots, even if you only have 250 'live' sequences.
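A rough way to see how close the slot counter is to the limit is to compare the number of rows in RDB$GENERATORS with the highest assigned RDB$GENERATOR_ID; a large gap between the two means slots have been consumed by dropped generators. A minimal sketch, run against your own database:
-- Counts live generators (including the handful of system ones) and shows
-- the highest slot number handed out so far.
SELECT COUNT(*) AS live_generators,
       MAX(RDB$GENERATOR_ID) AS highest_slot_assigned
FROM RDB$GENERATORS;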
The only way to reclaim those slots is by backing up and restoring the database. During the restore, the sequences are created anew (with the start values from the backup) and get new slots assigned. These slots are assigned contiguously, so previously existing gaps disappear and unassigned slots become available again.
However, this was changed in Firebird 2.5 with CORE-1544: Firebird now automatically recycles unused slots. This change only works with ODS 11.2 or higher databases (ODS = On-Disk Structure). ODS 11.2 is the on-disk structure of databases created with Firebird 2.5.
If you get this error, then your database probably is (or was) still ODS 11.1 (the Firebird 2.1 on-disk structure) or earlier; Firebird 2.5 can read earlier on-disk structures. Upgrading the ODS of a database is a matter of backing up and restoring it. Given that you already did this, I assume your database is now ODS 11.2, and the error should no longer occur (unless you actually have 32767 sequences in your database).
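If you want to confirm the on-disk structure version from SQL rather than with gstat -h, the monitoring tables expose it (available from Firebird 2.1 / ODS 11.1 onward); a minimal check:
-- Reports the ODS major/minor version of the database you are connected to.
SELECT MON$ODS_MAJOR, MON$ODS_MINOR FROM MON$DATABASE;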

Related

Keycloak automatic database migration threshold setting is not working as expected

I am upgrading the Keycloak version from 14.0.0 to 17.0.1 (keycloak-legacy) in our auth application. Our database is Postgres. As part of the upgrade, I am testing the automatic database migration.
According to keycloak documentation:
Creating an index on huge tables with millions of records can easily take a huge amount of time and potentially cause major service disruption on upgrades. For those cases, we added a threshold (the number of records) for automated index creation. By default, this threshold is 300000 records. When the number of records is higher than the threshold, the index is not created automatically, and there will be a warning message in server logs including SQL commands which can be applied later manually.
To change the threshold, set the index-creation-threshold property, value for the default connections-liquibase provider:
kc.[sh|bat] start --spi-connections-liquibase-default-index-creation-threshold=300000
"
Following that suggestion, I set the indexCreationThreshold value to 1, expecting the warning message and the manual SQL to show up in the logs during the automatic database migration. But I see neither the warning message nor the SQL that we need to execute manually.
I would really appreciate it if anyone could provide a pointer on this.
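For illustration only, the statements Keycloak logs in that situation are ordinary index-creation DDL that you apply by hand later; a hypothetical Postgres example (the table, column, and index names below are made up, not Keycloak's actual schema), using CONCURRENTLY so the build does not lock the table:
-- Hypothetical manual index creation; the real statement would be copied
-- from the Keycloak server log. Cannot be run inside a transaction block.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_big_table_lookup
    ON big_table (tenant_id, created_at);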

AWS - DMS migration missing sequences, views, routines, etc.

I am trying to migrate our Postgres database to Aurora Postgres.
First I created a normal task; it migrated all the tables, but not their constraints.
My attempts to clone our database:
I downloaded AWS SCT (Schema Conversion Tool), then set up my configuration to generate a migration report. Here is the report:
We completed the analysis of your PostgreSQL source database and
estimate that 100% of the database storage objects and 99.1% of
database code objects can be converted automatically or with minimal
changes if you select Amazon Aurora (PostgreSQL compatible) as your
migration target. Database storage objects include schemas, tables,
table constraints, indexes, types, sequences and foreign tables.
Database code objects include triggers, views, materialized views,
functions, domains, rules, operators, collations, fts configurations,
fts dictionaries and aggregates. Based on the source code syntax
analysis, we estimate 99.9% (based on # lines of code) of your code
can be converted to Amazon Aurora (PostgreSQL compatible)
automatically. To complete the migration, we recommend 133 conversion
action(s) ranging from simple tasks to medium-complexity actions to
complex conversion actions.
My questions:
1- Is there a way to automatically include everything from my source database?
2- The report mentions "we recommend 133 conversion action(s)"; where can I find these conversion actions?
3- Is ongoing migration safe? In my case we need to run the migration every day.
Sequences, indexes, and constraints are not migrated; this is mentioned in the official AWS docs.
You can use this source.
This will help you to migrate Sequence, Index, and Constraint at once.
p.s: this doesn't include View and Routine.
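As a rough illustration of the kind of script such a source provides, on PostgreSQL 10+ you can generate CREATE SEQUENCE statements from the pg_sequences view on the source and replay them on the target (the filter and the handling of increments/caching here are simplified and illustrative):
-- Emit one CREATE SEQUENCE per user sequence, starting just past the
-- source's current value so new rows on the target don't collide.
SELECT format('CREATE SEQUENCE IF NOT EXISTS %I.%I START WITH %s;',
              schemaname,
              sequencename,
              COALESCE(last_value, start_value) + 1)
FROM pg_sequences
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');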
AFAIK there's no way in AWS to automate everything; if there were, it would already have been added to SCT. However, if similar errors occur across code/DDL/functions, such as certain data type conversions, you can write a script that takes a schema dump and converts all of those data types to the desired ones.
Choose the SQL Conversion Actions tab in the SCT tool.
The SQL Conversion Actions tab contains a list of SQL code items that can't be converted automatically. There are also recommendations for how to manually convert the SQL code. You can look into the errors and make changes accordingly.
If you are migrating to the same version of PG in Aurora, you can take a schema-only dump, restore it into the target Aurora cluster, and later set up full-load/ongoing replication with DMS; then you don't have to take SCT into consideration at all (this has worked for me most of the time). Just make sure you adhere to the Aurora limitations specific to your PG version.
We have been using ongoing migration in our project and it's working great. There are some best practices we have developed, though they will differ from project to project:
DDL changes must be made on the target first; stop replication while doing so and resume once done
Separate tables with high transaction volume into their own DMS task; it helps with troubleshooting, and the rest of your tables can keep working
Always keep in mind that DMS replicates data, not views/functions/procedures
Actively monitor tasks and replication instances
And I would suggest that if you are performing a homogeneous migration (PG -> PG), you should consider pg_dump & pg_restore; it is easy and reliable between matching versions, and AWS Aurora supports it.
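One simple sanity check after a homogeneous, schema-only restore is to compare object counts between source and target, since DMS itself only replicates data; for example (the schema filter is illustrative):
-- Run on both source and target and compare the results.
SELECT count(*) AS views
FROM information_schema.views
WHERE table_schema NOT IN ('pg_catalog', 'information_schema');

SELECT count(*) AS routines
FROM information_schema.routines
WHERE routine_schema NOT IN ('pg_catalog', 'information_schema');

SELECT count(*) AS sequences
FROM information_schema.sequences
WHERE sequence_schema NOT IN ('pg_catalog', 'information_schema');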

PostgreSQL behaving oddly after the disk ran out of space

Background of the issue (could be irrelevant, but the connection to these issues is the only explanation that makes sense to me):
In our production environment, disk space had run out. (We do have monitoring and notifications for this, but no one read them - the classic.)
Anyway, after fixing the issue, PostgreSQL (PostgreSQL 9.4.17 on x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit) has shown a couple of weird behaviors.
1. Unique indexes
I have a couple of (multi-column) unique indexes specified for the database, but they do not appear to be functioning: I can find duplicate rows in the database.
2. Sorting based on date
We have one table which basically just logs some JSON data. It has three columns: id, json and insertedAt DEFAULT NOW(). If I run a simple query that sorts on the insertedAt column, the sorting doesn't work around the time of the disk overflow. All of the data is valid and readable, but the order is wrong.
3. DB dumps / backups have some corruption
Again, when I was browsing this logging data and tried to restore a backup to my local machine for closer inspection, it failed with an error around some random row. When I examined the SQL file with a text editor, I found that the data was otherwise valid except that some rows were missing semicolons. I'll shortly try a newer backup to see whether it has the same error or whether it was a random issue with the one backup I was playing with.
I've tried the basic steps: restarting the machine and the PG process.
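For what it's worth, a sketch of how the unique-index damage can be confirmed and then repaired, assuming a two-column unique index on my_table (col_a, col_b); the names are placeholders, and the duplicate rows have to be removed by hand before the rebuild will succeed:
-- List key values that occur more than once despite the unique index.
SELECT col_a, col_b, count(*) AS copies
FROM my_table
GROUP BY col_a, col_b
HAVING count(*) > 1;

-- After deleting the extra copies, rebuild the suspect index so it is
-- reconstructed from the (now clean) table data.
REINDEX INDEX my_unique_idx;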

Does the "increment" feature in Sequelize scale well?

I'm investigating the scalability of Sequelize in a production app, specifically the increment function, to see how well it handles a row that could theoretically be updated several times simultaneously (say, the totals row of something). My question is: can the Sequelize increment operator be trusted for these small addition operations that could run concurrently?
We're using Postgres on the backend, but I'm not familiar with the internals of Postgres and how it would handle this type of scenario (Heroku Postgres will be the production host, if it matters).
The Docs / The Code
The SQL run by Sequelize, according to the code comments:
SET column = column + X
It's hard to say without complete SQL examples, but I'd say this will likely serialize all transactions that call it on the same object.
If you update an object, the db takes a row update lock that's only released at commit/rollback time. Other updates/deletes block on this lock until the first tx commits or rolls back.
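To make the locking behaviour concrete, here is a sketch of two concurrent sessions issuing the kind of UPDATE that an increment translates to (the table and column names are illustrative); under Postgres's default READ COMMITTED isolation, the blocked UPDATE re-reads the committed value once the lock is released, so neither increment is lost:
-- Session 1:
BEGIN;
UPDATE totals SET counter = counter + 5 WHERE id = 42;  -- takes the row lock

-- Session 2 (running at the same time):
BEGIN;
UPDATE totals SET counter = counter + 3 WHERE id = 42;  -- blocks on the row lock

-- Session 1:
COMMIT;
-- Session 2's UPDATE now proceeds against the newly committed value and
-- applies its +3 on top, so both increments are preserved.
COMMIT;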

What happens to my DataSet in case of unexpected failure?

I know this has been asked here, but my question is slightly different. Since the DataSet was designed with the disconnected principle in mind, what feature was provided to handle unexpected termination of the application, say a power failure, a Windows hang, or a system exception leading to a restart? Say the user has entered some 100 rows and the changes exist only in the DataSet; usually the DataSet is written back to the database at application close or at regular intervals.
In the old days, when programming with VB 6.0, all interaction took place directly with the database, so each successful transaction committed itself automatically. How can that be done using DataSets?
DataSets are not meant for direct access to the database; they are a disconnected model only. There is no intent for them to be able to recover from machine failures.
If you want to work live against the database, you need to use DataReaders and issue DbCommands against the database for each change. This will, of course, increase the load on your database server.
You have to balance the two for most applications. If you know a user just entered vital data as a new row, execute an insert command to the database, and put a copy in your local cached DataSet. Then your local queries can run against the disconnected data, and inserts are stored immediately.
A DataSet can be serialized very easily, so you could implement your own regular backup to disk by using serialization of the DataSet to the filesystem. This will give you some protection, but you will have to write your own code to check for any data that your application may have saved to disk previously and so on...
You could also ignore DataSets and use SqlDataReaders and SqlCommands for the same sort of 'direct access to the database' you are describing.