Postgres on RDS: after upgrade to 14, ANALYZE VERBOSE is taking hours

Is there a way to speed up ANALYZE VERBOSE?
A 600 GB database takes about 3 hours to run ANALYZE VERBOSE after upgrading to v14 on RDS.
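One common approach after a major-version upgrade is to rebuild statistics in stages and in parallel with vacuumdb rather than with a single ANALYZE VERBOSE; a minimal sketch, where the endpoint, user, database, and job count are placeholders rather than values from the question:

# endpoint, user, database and --jobs value below are placeholders
vacuumdb --host=mydb.xxxx.us-east-1.rds.amazonaws.com --username=postgres --dbname=mydb --analyze-in-stages --jobs=4

--analyze-in-stages produces rough statistics quickly and then refines them in later passes, and --jobs analyzes several tables concurrently, so the planner has something to work with long before the full run completes.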

Related

PostgreSQL 9.6 vacuum using 100% CPU

I am running PostgreSQL 9.6 on CentOS 7.
The database was migrated from a PostgreSQL 9.4 server that did not have the issue.
With autovacuum on, Postgres uses 100% of one core constantly (10% of total CPU). With autovacuum off, it uses no CPU other than when executing queries.
Is this expected or normal, or is something bad going on? Note that it is a very big database, with many schemas/tables.
I tried:
vacuumdb --all -w
and:
ANALYZE VERBOSE;
Running ANALYZE VERBOSE; made the database a lot faster, but it did not change the CPU usage.

Upgraded RDS Postgres from 11.16 to 14.4, CPU has been stuck at 100% for hours while running huge aggregate queries

We have upgraded our AWS RDS Postgres from 11.16 to 14.4. Since then, the CPU stays around 100% for hours while running the aggregate queries.
I'm working with a large database; most of the main tables have 1 million records. On the previous version (11.16), the same aggregate queries returned results without driving CPU usage up.
Can we use the ANALYZE command to optimize the upgraded database?
Is there any impact when running this command on the database?
Any suggestions for resolving this?
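As a rough check on whether stale planner statistics are the culprit, you can look at when each table was last analyzed and then analyze the whole database; a minimal sketch:

-- per-table analyze timestamps; NULL usually means not analyzed since the last statistics reset
select relname, last_analyze, last_autoanalyze
from pg_stat_user_tables
order by last_analyze nulls first;

analyze verbose;

ANALYZE only samples each table and takes a lock that does not block normal reads and writes, so it is safe to run on a live database, though it does add some I/O while it runs.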

Postgres ANALYZE takes hours after upgrade from 11.12 to 13.4

Issue: ANALYZE running for hours.
I set the maintenance worker memory to 1 GB and the number of maintenance workers to 4. The database size is 40 GB.
We upgraded Postgres from 11.12 to 13.4.
Post upgrade, I am running ANALYZE with the statement below, and the job has been running for hours (4 hours and still going).
Any input on these unusually long run times?
Note:
Command I used: **VACUUM (VERBOSE, ANALYZE, PARALLEL 4)**
Tracking via the statement below:
select * from pg_stat_progress_analyze;
From this view I can see about 250 blocks scanned per second.
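For a per-table view of progress, the same view can be joined to pg_class to get the table name and a rough percentage; a minimal sketch against Postgres 13's pg_stat_progress_analyze:

select p.pid,
       c.relname,
       p.phase,
       p.sample_blks_scanned,
       p.sample_blks_total,
       -- share of the sample blocks already scanned for the table currently being analyzed
       round(100.0 * p.sample_blks_scanned / nullif(p.sample_blks_total, 0), 1) as pct_done
from pg_stat_progress_analyze p
join pg_class c on c.oid = p.relid;

Also note that the PARALLEL option of VACUUM only parallelizes index vacuuming; the ANALYZE part still samples tables one at a time in that session, so running several vacuumdb --analyze-only --jobs workers is often faster after an upgrade.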

Postgres 10 Upgrade Stuck Queries

I followed these instructions:
Upgrade PostgreSQL from 9.6 to 10.0 on Ubuntu 16.10
And got the upgrade done without issue. I kept the old cluster; however, with identical queries on the new cluster I'm getting:
wait_event_type: IO
wait_event: DataFileRead
The queries that hang are largish in that they join tens of millions of rows. I have double-checked postgresql.conf, and the parameters present in both the 9.6 and 10 versions are identical except for:
"bytea_output"
"client_encoding"
"hot_standby"
"max_connections"
"max_replication_slots"
"password_encryption"
"port"
"server_version"
"server_version_num"
"superuser_reserved_connections"
"wal_level"
I have both clusters up on the same machine, the tables I'm querying sit on the same tablespaces, the queries are identical, and the config is more or less identical. Is there something I'm missing with Postgres 10?
Many Thanks
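One way to confirm what the new cluster is actually waiting on is to list the active backends with their wait events; a minimal diagnostic sketch using pg_stat_activity columns that exist as of Postgres 10:

-- active queries with their current wait event and runtime
select pid, wait_event_type, wait_event, state, now() - query_start as runtime, query
from pg_stat_activity
where state = 'active'
order by runtime desc;

DataFileRead just means the backend is waiting on reads from the table or index files, which right after an upgrade is often a cold buffer cache or missing planner statistics on the new cluster (ANALYZE not yet run) rather than a configuration difference.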

Backup/Restore of Firebird SQL Server 3.0.2 is slow on Windows Server 2016

I have installed the Firebird 3.0.2 SQL database server on my Windows Server 2016. No other software has been installed yet.
I'm using SuperServer mode and an SSD drive.
When I simply copy my 6 GB database file, it is done in 20-30 seconds (same disk).
But when I execute a backup, it takes 20-30 minutes. Restore takes about the same amount of time, so 40-60 minutes together.
And here is the strange thing: the backup/restore process (gbak.exe) does not use the full power of the CPUs and disk. It uses only ~20%, and I don't understand why.
I think it must be something in the configuration, right? But I kept everything at the default values.
One very important thing: I am new to Windows Server 2016, so I have no idea what I am doing.
Any ideas?
I found out that it comes down to the configuration of the Power Options.
After installation, Windows Server 2016 is set to the Balanced power plan.
I changed it to High performance and the results are much better (the backup drops from 30 minutes to 6 minutes).
You can find more details here: https://serverfault.com/a/797473
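If you prefer to switch the plan from the command line, the built-in powercfg utility can do it; this assumes the standard plan aliases shipped with Windows:

rem SCHEME_MIN is the built-in alias for the High performance plan
powercfg /setactive SCHEME_MIN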
To find the restore bottleneck in Firebird 3, you should add the detailed protocol options:
-v -stat TDRW Filename
-v (verbose output of what GBAK is doing)
-stat (runtime statistics in the verbose output)
T (total time)
D (total delta)
R (page reads)
W (page writes)
Also have a look at the GBAK option
-service localhost:service_mgr
it is a speed demon :-)
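Putting the two suggestions together, a backup run through the service manager with verbose statistics might look like the line below; the paths and credentials are placeholders:

rem paths, user and password below are placeholders
gbak -b -v -stat TDRW -se localhost:service_mgr C:\data\mydb.fdb C:\backup\mydb.fbk -user SYSDBA -pas masterkey

With -se the work is done by the server process through the Services API instead of pulling every record across the client connection, which is why it is so much faster, and the -stat columns show line by line in the verbose output where the time goes (total time, delta, page reads, page writes).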