Environment:
Cluster with 3 DB servers managed with Patroni.
PostgreSQL 14.
I have a procedure without parameters that archives some data and runs for a long time.
When I run it, after exactly 30 minutes the procedure starts again. I know this because at the beginning I insert a record into a control table with the start date.
Do you know what process could be responsible for this?
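For context, a minimal psycopg2 sketch of the setup described (the procedure name, control table, and column are hypothetical, since the question does not show the actual code):

```python
import psycopg2

# Hypothetical names throughout: the real procedure and control table are not shown above.
conn = psycopg2.connect("host=cluster-leader dbname=mydb user=myuser")
conn.autocommit = True  # run the long CALL outside an explicit transaction

with conn.cursor() as cur:
    # The procedure inserts a row into the control table as its first step,
    # which is how the second start after exactly 30 minutes was noticed.
    cur.execute("CALL archive_data()")

with conn.cursor() as cur:
    cur.execute("SELECT started_at FROM archive_control ORDER BY started_at DESC LIMIT 5")
    for (started_at,) in cur.fetchall():
        print(started_at)

conn.close()
```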
I have a basic Postgres instance in GCP with the following configuration:
https://i.stack.imgur.com/5FFZ6.png
I have a simple Python script using psycopg2 that does inserts in a loop to the database, connected through the SQL Auth Proxy.
All the inserts are done in the same transaction.
The problem is that it is taking a couple of hours to insert around 200,000 records.
When I run the script on a local database it takes a couple of seconds.
What could be causing this huge difference?
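For reference, the pattern described amounts to roughly the following (a sketch only; the table, columns, and connection details are made up, not taken from the actual script):

```python
import psycopg2
from psycopg2.extras import execute_values

# Hypothetical DSN: the Cloud SQL Auth Proxy listens locally and forwards to the instance.
conn = psycopg2.connect("host=127.0.0.1 port=5432 dbname=mydb user=myuser")

rows = [(i, f"value-{i}") for i in range(200_000)]  # stand-in for the real data

with conn, conn.cursor() as cur:
    # The pattern from the question: one INSERT per row, all inside a single transaction.
    # Each execute() is a separate round trip through the proxy to the remote instance,
    # which is typically where the time goes compared to a local database.
    for row in rows:
        cur.execute("INSERT INTO items (id, payload) VALUES (%s, %s)", row)

    # Batched alternative: execute_values() sends many rows per statement,
    # cutting the number of round trips by orders of magnitude.
    # execute_values(cur, "INSERT INTO items (id, payload) VALUES %s", rows, page_size=1000)
```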
I am about to move my production PostgreSQL DB to a managed DB from DigitalOcean. So there is a possibility of downtime during maintenance/backup procedures, which may last up to 10 seconds. How can I handle this DB downtime effectively without the Rails application crashing? I read about 'reconnect: true' in database.yml, but I'm not sure it will work for PostgreSQL. Any suggestions?
First of all, I do not know if what I want to achieve is possible; I will describe it below:
I have access to a remote PostgreSQL database that holds the data I need (let's say Remote PostgreSQL 1)
I have only read credentials for that database
What I want to achieve is to create a local PostgreSQL on my machine (let's say Local PostgreSQL 2)
I want to copy data, and check for missing data, from Remote PostgreSQL 1 to Local PostgreSQL 2 in real time, or at least copy the data at the end of the day
The scenario would work perfectly with replication, but the issue is that Remote PostgreSQL 1 is not owned by me and cannot be used as a real-time replication source, so I am trying to find a solution to get all the data from Remote PostgreSQL 1 into Local PostgreSQL 2.
The scenarios could be the following:
a first-time setup that downloads the whole database from Remote PostgreSQL 1 to Local PostgreSQL 2
after the first-time setup, a check for what new data has arrived, adding it to Local PostgreSQL 2
It would be great if this could be done at the OS level on Ubuntu. My application is written in Python 3, and I could write scripts to do all this, but I am talking about 100 million rows per table, a huge amount of data. I think there will be problems if I fetch everything from the database and then start checking what is missing and what is not.
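A rough sketch of what such a script could look like for the first-time copy, streaming one table with COPY via psycopg2 (connection details and the table name are placeholders, and the local table is assumed to already exist with the same schema):

```python
import tempfile
import psycopg2

remote = psycopg2.connect("host=remote.example.com dbname=source user=readonly")  # placeholder DSN
local = psycopg2.connect("host=localhost dbname=mirror user=postgres")            # placeholder DSN

table = "big_table"  # placeholder; repeat for every table to be mirrored

with tempfile.TemporaryFile() as spool, remote.cursor() as rcur, local.cursor() as lcur:
    # COPY keeps the heavy lifting on the servers instead of handling
    # 100 million rows one by one in Python.
    rcur.copy_expert(f"COPY {table} TO STDOUT", spool)   # dump the remote table to a spool file
    spool.seek(0)
    lcur.copy_expert(f"COPY {table} FROM STDIN", spool)  # load it into the local table

local.commit()
remote.close()
local.close()
```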
Any ideas would be great.
If the owner of Remote Database 1 won't cooperate with you other than to give you read-only access to the tables, then you don't have any efficient options. If the remote owner does keep, or can be convinced to keep, insertion/modification timestamp columns in all the tables (although deletions would then be a problem), or an in-database "audit" log for all the tables, you could use those. I think you have an organizational/political problem rather than a programming problem.
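If such timestamp columns do exist, the daily catch-up could look roughly like this (a sketch assuming a hypothetical `updated_at` column; it only picks up new rows, and as noted above it does nothing about deletions):

```python
import psycopg2

remote = psycopg2.connect("host=remote.example.com dbname=source user=readonly")  # placeholder DSN
local = psycopg2.connect("host=localhost dbname=mirror user=postgres")            # placeholder DSN

table = "big_table"  # placeholder; assumes a hypothetical updated_at timestamp column

with local.cursor() as lcur:
    # Watermark: the newest timestamp already copied into the local mirror.
    lcur.execute(f"SELECT coalesce(max(updated_at), 'epoch') FROM {table}")
    watermark = lcur.fetchone()[0]

with remote.cursor(name="catchup") as rcur, local.cursor() as lcur:
    # Named (server-side) cursor so the remote rows are streamed, not loaded all at once.
    rcur.execute(f"SELECT * FROM {table} WHERE updated_at > %s", (watermark,))
    for row in rcur:
        placeholders = ", ".join(["%s"] * len(row))
        # A plain INSERT handles new rows only; modified rows would need an
        # ON CONFLICT ... DO UPDATE upsert, and deletions are not detected at all.
        lcur.execute(f"INSERT INTO {table} VALUES ({placeholders})", row)

local.commit()
remote.close()
local.close()
```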
I have a plpgsql function. This function deletes and inserts some rows in a foreign table that is connected to Oracle XE via the oracle_fdw plugin. Every minute, cron starts 10 instances of this function for 10 different tables using psql. Sometimes (less than once per week) one instance gets stuck. It is in the active state, but it can't be canceled or terminated.
I can kill it with SIGKILL, but that will cancel all other Postgres processes and start recovery mode.
Is it possible to stop this process without using SIGKILL?
(Postgres 9.3 on CentOS 7)
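For reference, the usual escalation path, sketched with psycopg2, before anyone reaches for kill -9 (these are the standard tools; in this case the question says they have no effect, presumably because the backend is blocked inside the oracle_fdw call and never reaches an interrupt check):

```python
import psycopg2

conn = psycopg2.connect("dbname=postgres user=postgres")  # needs superuser or same-role privileges
conn.autocommit = True

with conn.cursor() as cur:
    # Find backends that have been active suspiciously long (threshold is arbitrary here).
    cur.execute("""
        SELECT pid, now() - query_start AS runtime, query
        FROM pg_stat_activity
        WHERE state = 'active' AND now() - query_start > interval '10 minutes'
    """)
    for pid, runtime, query in cur.fetchall():
        print(pid, runtime, query[:80])

    # Normal escalation, gentlest first:
    # cur.execute("SELECT pg_cancel_backend(%s)", (stuck_pid,))     # SIGINT: cancel the query
    # cur.execute("SELECT pg_terminate_backend(%s)", (stuck_pid,))  # SIGTERM: end the session
    # Both return true as soon as the signal is sent, even if the backend never acts
    # on it, which is what happens when it is stuck inside an external library call.
```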
I've been working on a backup / restore for a Postgres server for quite a while now. It's an Azure Windows virtual machine (Windows Server 2012).
The database isn't that big (nearly 5 GB), but the restore takes (literally) days. I've tried several times with different settings to restore the database, but every time it took days to "finish" (it didn't actually finish; I killed the process because I didn't see anything happening, which is why I'm running the job verbose this time).
I've now been running the (verbose) job for 5 days straight and it still isn't finished. It's inserting rows (or at least displaying them), but it's still running.
Currently I'm using this command:
pg_restore -Fc -v --jobs=2 --host=localhost [filename]
Jobs is set to 2 because it's a dual-core server. Like I said: different settings, still very, very slow.
What is wrong - should I be "tuning" the database before the restore or what?
This is a test-server setup. When we're done with the test, the current data needs to be restored (again) to the new production server: we can't afford to wait days on end before the production environment comes online.
It's not pushing errors into the logs or something - it just keeps running and running and running...
So what am I doing wrong?
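One thing that helps before changing any settings is to watch the server side while pg_restore runs, to see which statement it is actually spending its time on (a sketch using psycopg2; it only observes and changes nothing):

```python
import time
import psycopg2

# Connect to the same server the restore is targeting (placeholder DSN).
conn = psycopg2.connect("host=localhost dbname=postgres user=postgres")
conn.autocommit = True

with conn.cursor() as cur:
    while True:
        # Long-running CREATE INDEX or ALTER TABLE ... ADD CONSTRAINT statements are the
        # usual suspects when a restore seems to run forever without reporting errors.
        cur.execute("""
            SELECT pid, now() - query_start AS runtime, left(query, 80) AS query
            FROM pg_stat_activity
            WHERE state = 'active' AND query NOT ILIKE '%pg_stat_activity%'
            ORDER BY query_start
        """)
        for pid, runtime, query in cur.fetchall():
            print(pid, runtime, query)
        print("---")
        time.sleep(30)
```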