Firebird performance depends on the connected user?

I have a Firebird 2.5 database that performs intensive read/write operations. There are about 15 defined users who have read/write permissions on the data in the database.
Even with a single connection to the database, I notice that these operations run several times faster when I connect as SYSDBA than as any other user.
Has anyone noticed this behavior? I do not want regular users to connect to the database as SYSDBA just to get better performance.

Related

Get the maximum number of job slaves per instance that can be created for execution in PostgreSQL

I am migrating an Oracle database to PostgreSQL.
During the migration I came across the following query on the Oracle side.
Oracle query:
SELECT TRIM(value) AS val
FROM v$parameter
WHERE name = 'job_queue_processes';
I just want to know how we can get the maximum number of job slaves per instance that can be created for execution on the PostgreSQL side.
I have created the pg_cron extension and set up the required jobs so far. But one of the functions uses the Oracle query above, so I need to convert it to PostgreSQL.
The documentation is usually a good source of information.
Important: By default, pg_cron uses libpq to open a new connection to the local database.
In this case, there is no specific limit. It would be limited in the same way other user connections are limited: mostly by max_connections, but possibly lowered from that for particular users or databases via ALTER ROLE ... CONNECTION LIMIT or ALTER DATABASE ... CONNECTION LIMIT. You could create a user specifically for cron if you wanted to limit its connections separately, then grant that user the privileges of the other roles it will operate on behalf of. I don't know whether pg_cron deals gracefully with hitting the limit or not.
Alternatively, pg_cron can be configured to use background workers. In that case, the number of concurrent jobs is limited by the max_worker_processes setting, so you may need to raise that.
Note that the pool of worker processes is shared with parallel query workers and possibly with other extensions.
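As a sketch of both approaches (the role name cron_user, the granted role app_rw, and the limits below are illustrative assumptions, not anything pg_cron prescribes):

-- Dedicated login role for pg_cron with its own connection cap
CREATE ROLE cron_user LOGIN CONNECTION LIMIT 5;
-- Let it act on behalf of the application role whose jobs it runs
GRANT app_rw TO cron_user;

-- Rough PostgreSQL counterparts of Oracle's job_queue_processes check:
SHOW max_connections;       -- cap for the default libpq connection mode
SHOW max_worker_processes;  -- cap for the background-worker mode

-- Raising the worker pool takes effect only after a server restart
ALTER SYSTEM SET max_worker_processes = 16;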

How to set up multi-tenancy using row level security on Postgres with knex

I am architecting a database where I expect to have thousands of tenants, where some data will be shared between tenants. I am currently planning on using Postgres with row level security for tenant isolation. I am also using knex and Objection.js to model the database in Node.js.
Most of the tutorials I have seen look like this, where you create a separate knex connection per tenant. However, I've run into a problem on my development machine: after I create ~100 connections, I receive this error: "remaining connection slots are reserved for non-replication superuser connections".
I'm investigating a few possible solutions/work-arounds, but I was wondering if anyone has been able to make this setup work the way I'm intending. Thanks!
One solution might be to cache a limited number of connections and destroy the oldest cached connection when the limit is reached. See this code as an example.
That code should probably be improved, however, to use a Map as the knexCache instead of an object, since a Map remembers the insertion order.
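If the sheer number of per-tenant connections is the underlying problem, another pattern worth considering is a single pooled connection combined with a row level security policy keyed to a per-request session setting. A minimal sketch, assuming a hypothetical documents table with a tenant_id column and a custom setting named app.tenant_id:

-- Enable RLS and restrict visibility to the current tenant
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON documents
USING (tenant_id = current_setting('app.tenant_id')::int);

-- Per request, on the shared connection:
SET app.tenant_id = '42';
SELECT * FROM documents;  -- sees only tenant 42's rows

Note that policies are not applied to superusers or, by default, to the table owner, so the pooled connection should log in as an ordinary role.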

Static lookup data stored on localhost for 1000+ users (connections)

Sometimes you have static data that is used by all customers. I am looking for a solution that fetches this from localhost (127.0.0.1) using some sort of database.
I have done some tests using Golang fetching from a local PostgreSQL database, and it works perfectly. But how does this scale to 1000+ users?
I noticed that only one session was started on the local server regardless of which computer connected (since I used 127.0.0.1 in Golang to call Postgres). At some point this may or may not become a bottleneck when 1000 users share a single session?
My questions are:
How many concurrent users can PostgreSQL handle per session before it becomes a bottleneck? Or is this handled by the calling language (Golang)?
Is it even possible to handle many queries per session from different users?
Are there better ways to manage static lookup data for all customers than a local PostgreSQL database (Redis?)
I hope this question fits this forum. Otherwise, please point me in the right direction.
Every session creates a new postgres process, which gets forked from the "main" postgres process listening to the port (default 5432).
The default is that 100 sessions can be open in parallel (max_connections = 100), but this can easily be changed in postgresql.conf.
Within a single session, queries are executed one at a time; there is no way to run several queries concurrently over one session.
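For reference, the limit can also be inspected and changed from SQL rather than by editing postgresql.conf by hand; the value 500 below is only an example:

SHOW max_connections;                    -- default: 100
ALTER SYSTEM SET max_connections = 500;  -- written to postgresql.auto.conf
-- a server restart is required before the new value takes effect

On the Golang side, the standard database/sql package already maintains a connection pool, so many goroutines can share a bounded number of sessions instead of everyone funneling through one session.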

FireDAC: Shared Lock on Table with Firebird

I'm using Delphi 10.1 with FireDAC to connect to Firebird.
I would like to open a table in exclusive mode in Firebird with FireDAC.
How would I do that?
Firebird does not expose explicit table or row locks, so there's no way you're going to get this to work with FireDAC: no connection parameter can do this magic.
What you can do with Firebird is to use the entire database in single user mode. To do this, you must shut it down, run GFIX to flag it as a single user database, and then reconnect to the database. You can find more details on the Firebird How-To FAQ. But I doubt this is what you are looking for.
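The GFIX sequence looks roughly like this (a sketch; employee.fdb and the SYSDBA credentials are placeholders for your own database and login):

gfix -shut single -force 0 employee.fdb -user SYSDBA -password masterkey
(... do the single-user work, then bring the database back online ...)
gfix -online normal employee.fdb -user SYSDBA -password masterkey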
You should explain in more detail what you are trying to do. With a real SQL server you should not feel the need to lock tables or rows; transactions and transaction isolation should be enough to handle most situations. If not, then you should probably start thinking about application-level locks, that is, if you have just one application that uses the database.

How can I obtain the creation date of a DB2 database without connecting to it?

How can I obtain the creation date or time of an IBM DB2 database without connecting to the specified database first? Solutions like:
select min(create_time) from syscat.tables
and:
db2 list tables for schema SYSIBM
require me to connect to the database first, like:
db2 connect to dbname user userName using password
Is there another way of doing this through a DB2 command instead, so I wouldn't need to connect to the database?
Can the db2look command be used for that?
Edit 01: Background Story
Since more than one person asked why I need to do this, here is the background story.
I have a server running DB2 that many people and automated scripts use to create databases for temporary tasks and tests. It is never meant to keep data for a long time. However, for one reason or another (e.g. a developer not cleaning up after himself, or tests stopping forcefully before they can clean up), some databases never get dropped, and they accumulate until the hard disk eventually fills up. So the idea of the app is to look up the age of each database and drop it if it's older than six months, for example.