Get the maximum number of job slaves per instance that can be created for execution in PostgreSQL

I am migrating an Oracle database to PostgreSQL.
During the migration, I came across the following query on the Oracle side.
Oracle Query:
SELECT
TRIM(value) AS val
FROM v$parameter
WHERE name = 'job_queue_processes';
I just want to know how we can get the maximum number of job slaves per instance that can be created for execution on the PostgreSQL side.
I have created the pg_cron extension and created the required jobs so far. But one of the functions uses the above query on the Oracle side, so I want to convert it to PostgreSQL.

The documentation is usually a good source of information.
Important: By default, pg_cron uses libpq to open a new connection to the local database.
In this case, there is no specific limit. The connections are limited in the same way other user connections are limited: mostly by max_connections, but possibly lowered from that for particular users or databases via ALTER ROLE ... CONNECTION LIMIT or ALTER DATABASE ... CONNECTION LIMIT. You could create a user specifically for cron if you wanted to limit its connections separately, then grant that user the privileges of the other roles it will operate on behalf of. I don't know whether pg_cron handles reaching the limit gracefully or not.
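Since max_connections is the effective ceiling in this mode, the Oracle lookup translates roughly as follows. This is only a sketch, assuming the default libpq mode; the cron_user role name is illustrative:
SELECT setting AS val
FROM pg_settings
WHERE name = 'max_connections';
-- Optionally give pg_cron its own, lower cap via a dedicated role:
CREATE ROLE cron_user LOGIN CONNECTION LIMIT 8;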
Alternatively, pg_cron can be configured to use background workers. In that case, the number of concurrent jobs is limited by the max_worker_processes setting, so you may need to raise that.
Note that the max number of workers may have to be shared with parallel execution workers and maybe with other extensions.
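For reference, here is a sketch of the relevant postgresql.conf settings for background-worker mode; the values are illustrative and a server restart is required:
shared_preload_libraries = 'pg_cron'
cron.use_background_workers = on
max_worker_processes = 20    # shared with parallel query workers and other extensions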

Related

How to set up multi-tenancy using row level security on Postgres with knex

I am architecting a database where I expect to have thousands of tenants, with some data shared between tenants. I am currently planning to use Postgres with row level security for tenant isolation. I am also using knex and Objection.js to model the database in Node.js.
Most of the tutorials I have seen look like this, where you create a separate knex connection per tenant. However, I've run into a problem on my development machine: after I create ~100 connections, I receive this error: "remaining connection slots are reserved for non-replication superuser connections".
I'm investigating a few possible solutions/work-arounds, but I was wondering if anyone has been able to make this setup work the way I'm intending. Thanks!
Perhaps one solution might be to cache a limited number of connections, and destroy the oldest cached connection when the limit is reached. See this code as an example.
That code should probably be improved, however, to use a Map as the knexCache instead of an object, since a Map remembers the insertion order.
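As an aside, the row level security half of the setup described in the question can be quite small. A minimal sketch, with illustrative table, column, and setting names:
CREATE TABLE documents (
    id bigint PRIMARY KEY,
    tenant_id integer NOT NULL,
    body text
);
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
-- Each tenant connection identifies itself before querying:
-- SET app.tenant_id = '42';
CREATE POLICY tenant_isolation ON documents
    USING (tenant_id = current_setting('app.tenant_id')::integer);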

How to retrieve all database SQL executed from a particular server?

I am running an application on a particular server which updates a Postgres database table. Is there any way that I can retrieve all the queries executed against that database (maybe against my table specifically) over a period of time, if I have admin privileges?
You can install the extension pg_stat_statements which will give you a summary of the queries executed.
Note that the number of queries stored in the pg_stat_statements view is limited (the limit can be configured via pg_stat_statements.max). So you probably want to store a snapshot of that view on a regular basis; how often depends on your workload. Increasing pg_stat_statements.max means you can reduce the frequency of taking snapshots.
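A sketch of the basic workflow; the extension must be listed in shared_preload_libraries before it can be created:
CREATE EXTENSION pg_stat_statements;
-- Top queries by cumulative execution time
-- (the column is named total_time before PostgreSQL 13):
SELECT query, calls, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;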

How can I obtain the creation date of a DB2 database without connecting to it?

How can I obtain the creation date or time of an IBM DB2 database without connecting to the specified database first? Solutions like:
select min(create_time) from syscat.tables
and:
db2 list tables for schema SYSIBM
require me to connect to the database first, like:
db2 connect to dbname user userName using password
Is there another way of doing this through a DB2 command instead, so I wouldn't need to connect to the database?
Can the db2look command be used for that?
Edit 01: Background Story
Since more than one person asked why I need to do this and for what reasons, here is the background story.
I have a server with the DB2 DBMS that many people and automated scripts use to create databases for temporary tasks and tests. The data is never meant to be kept for long. However, for one reason or another (e.g., a developer not cleaning up after himself, or tests being stopped forcefully before they can do the cleanup), some databases never get dropped, and they accumulate until the hard disk eventually fills up. So the idea of the app is to look up the age of each database and drop it if it's older than six months (for example).

Does AWS Redshift support a replica?

We are working with an AWS Redshift DB and would like to create an online replica (one that is continuously updated with changes as well).
The reason is that we wish to have a separate environment for one of our departments to run their own queries. As they might "go crazy" and run some super-complex queries (they need free style, and I can't control what they will do), I don't want them to overload the main Redshift cluster and take up all the resources from my main users. A replica would solve this, as it would give them an environment of their own.
No, you cannot have replication between two Redshift clusters the way you can in MySQL.
You can, however, use WLM (workload management) in Redshift and create query queues.
Create user groups and query groups; assign more CPU, more memory, and higher concurrency to production users, and less concurrency and less memory to your department's users, so that your production users will not be affected.
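A sketch of the SQL side of that setup; the group, user, and queue names are illustrative, and the queues themselves are defined in the cluster's WLM JSON configuration:
-- Put the department's users in their own group, matched by a WLM queue:
CREATE GROUP dept_users;
ALTER GROUP dept_users ADD USER analyst1;
-- A session can also route itself to a specific queue by query group:
SET query_group TO 'dept_queue';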

Multiple threads in db2luw

I am very new to DB2. I have a question: I developed a few procedures which perform some operations on a DB2 database. My question is how to create multiple threads on the DB2 server concurrently. I have a database with 70,000 tables, each having more than 1,000 records. I have a procedure which will update all of these 70,000 tables, so time consumption is the main factor here. I want to divide my update statement into 10 threads, where each thread will update 7,000 tables, and run all 10 threads simultaneously.
Can someone kindly let me know how to achieve this?
DB2 Express-C on Windows.
There's nothing in DB2 for creating multiple threads.
The enterprise level version of DB2 will automatically process a single statement across multiple cores when and where needed. But that's not what you're asking for.
I don't believe any SQL-based RDBMS allows a stored procedure to create its own threads. The whole point of SQL is that it's a higher level of abstraction; you don't have access to those kinds of details.
You'll need to write an external app in a language that supports threads, one that opens 10 connections to the DB simultaneously. Depending on the specifics of the update you're doing and the hardware you have, you might find that 10 connections is too many.
To elaborate on Charles's correct answer, it is up to the client application to parallelize its DML workload by opening multiple connections to the database. You could write such a program on your own, but many ETL utilities provide components that enable parallel workflows similar to what you've described. Aside from reduced programming, another advantage of using an ETL tool to define and manage a multi-threaded database update is built-in exception handling, making it easier to roll back all of the involved connections if any of them encounter an error.