I need to get the database CPU usage using a query.
For example, I get the database size using this function: pg_database_size('databaseName')
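For reference, that size check as a complete statement; pg_size_pretty() is only added here to make the byte count readable, and 'databaseName' is a placeholder:
SELECT pg_size_pretty(pg_database_size('databaseName'));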
For SQL Server, I use this query:
SELECT TOP 1 avg_cpu_percent AS AverageCPUUsed, end_time
FROM sys.resource_stats
WHERE database_name = 'qa'
Any idea how to do this in Postgres?
We have a connection in Exasol (v7.0.18) to PostgreSQL (v14) created like this
create or replace connection POSTGRES_DB to
'jdbc:postgresql://hostname:5432/my_db?useCursorFetch=true&defaultFetchSize=2147483648'
user 'abc'
identified by <>;
I am running an export statement using this connection like this:
EXPORT MY_SCHEMA.TEST_TABLE
INTO JDBC AT POSTGRES_DB
TABLE pg_schema.test_table
truncate;
This works without any error.
The issue is that it runs only one INSERT statement in PostgreSQL at a time. I am expecting multiple inserts to run in PostgreSQL at the same time.
This documentation page says: "Importing from Exasol databases is always parallelized. For Exasol, loading tables directly is significantly faster than using the STATEMENT option."
How can I make the export statement do parallel insert into PostgreSQL?
I am trying to use pg_cron to schedule calls on stored procedure on several DBs in a Postgres Cloud SQL instance.
Unfortunately it looks like the pg_cron extension can only be created in the postgres DB.
When I try to use pg_cron on a DB other than postgres, I get this message:
CREATE EXTENSION pg_cron;
ERROR: can only create extension in database postgres
Detail: Jobs must be scheduled from the database configured in
cron.database_name, since the pg_cron background worker reads job
descriptions from this database. Hint: Add cron.database_name =
'mydb' in postgresql.conf to use the current database.
Where: PL/pgSQL function inline_code_block line 4 at RAISE
Query = CREATE EXTENSION pg_cron;
... I don't think I have access to postgresql.conf in Cloud SQL ... is there another way?
Maybe I could use postgres_fdw to achieve my goal?
Thank you,
There's no need to edit any files. All you have to do is set the cloudsql.enable_pg_cron flag (see guide) and then create the extension in the postgres database.
You need to log onto the postgres database rather than the one you're using for your app. For me, that's just replacing the name of my app database with 'postgres', e.g.:
psql -U<username> -h<host ip> -p<port> postgres
Then simply run the create extension command and the cron.job table appears. Here's one I did a few minutes ago in our cloudsql database. I'm using the cloudsql proxy to access the remote db:
127.0.0.1:2345 admin#postgres=> create extension pg_cron;
CREATE EXTENSION
Time: 268.376 ms
127.0.0.1:2345 admin#postgres=> select * from cron.job;
jobid | schedule | command | nodename | nodeport | database | username | active | jobname
-------+----------+---------+----------+----------+----------+----------+--------+---------
(0 rows)
Time: 157.447 ms
Be careful to specify the correct target database when setting the schedule; otherwise pg_cron will assume you want the job to run in the postgres database.
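A minimal sketch of scheduling a job that targets another database, assuming your pg_cron version provides cron.schedule_in_database() (1.4 or later); the job name, schedule, command and 'mydb' below are placeholders:
SELECT cron.schedule_in_database(
    'nightly-vacuum',   -- job name (placeholder)
    '0 3 * * *',        -- run every day at 03:00
    'VACUUM ANALYZE',   -- command executed in the target database
    'mydb'              -- target database instead of postgres
);
On older pg_cron versions the same effect can be had by calling cron.schedule() and then updating the database column of the corresponding cron.job row.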
.. I don't think I have access to postgresql.conf in Cloud SQL ...
Actually there is: you can use the gcloud patch command.
According to the pg_cron docs, you need to change two things in the conf file:
shared_preload_libraries = 'pg_cron'
cron.database_name = 'another_db' # optionally, to change the database where the pg_cron background worker expects its metadata tables to be created
Now, according to the gcloud documentation, you need to set two flags on your instance:
gcloud sql instances patch [my_instance] --database-flags=cloudsql.enable_pg_cron=on,cron.database_name=[my_name]
CAREFUL: don't run the patch command twice, as the second call would erase your first setting. Put all your flag changes in one command.
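If you are unsure what is already set, you can list the existing flags first and merge them into the single patch call; the field path below is my reading of the Cloud SQL API output, so verify it against your gcloud version:
gcloud sql instances describe [my_instance] --format="value(settings.databaseFlags)"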
You also might want to set cron.database_name in postgresql.conf (or as a flag in Cloud SQL):
cron.database_name = mydatabase
A puzzle... I have two Postgres 9.3.9 databases. Let's call them 'production' and 'staging'.
I am doing a query like:
SELECT * FROM mytable WHERE id > start_id OFFSET 99 LIMIT 1
The problem lies in what happens when there are fewer than 99 rows in the result.
On 'staging' the query returns zero rows.
On 'production' the query hangs and doesn't return.
I have experienced the same problem on the production DB both via Django ORM code in a Python console (via heroku run) and also using the Postico GUI client remotely, so it seems likely the problem is something in the DB config.
Any ideas anyone?
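One thing worth checking on 'production' while the query hangs (just a first diagnostic, not a known cause) is whether the backend is waiting on a lock; waiting is the column name in 9.3, newer releases use wait_event instead:
SELECT pid, state, waiting, query
FROM pg_stat_activity
WHERE state <> 'idle';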
I have a query which returns about 14M rows (I was not aware of this). When I used psql to run the query, my Fedora machine froze. Even after the query was done, I could not use Fedora anymore and had to restart my machine. Redirecting standard output to a file did not help; Fedora froze as well.
So how should I handle large result sets with psql?
By default, psql accumulates the complete result set in client memory; this is the usual behavior of all libpq-based Postgres applications and drivers. The solution is a cursor, so that only N rows at a time are fetched from the server. psql can use cursors too: set the FETCH_COUNT variable and results will be retrieved in batches of FETCH_COUNT rows.
postgres=# \set FETCH_COUNT 1000
postgres=# select * from generate_series(1,100000); -- big query
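If you don't want to type \set every time, the same variable can be set from the shell or in ~/.psqlrc; the batch size of 1000 and the file names below are just examples:
# one-off from the shell
psql -v FETCH_COUNT=1000 -f big_query.sql mydb
# or permanently, by putting this line into ~/.psqlrc
\set FETCH_COUNT 1000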
I'm currently reviewing the usage of indexes in a complex PostgreSQL database. One of the queries that looks useful is this one:
SELECT idstat.schemaname AS schema_name,
       idstat.relname AS table_name,
       indexrelname AS index_name,
       idstat.idx_scan AS times_used,
       pg_size_pretty(pg_relation_size(idstat.relid)) AS table_size,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
       n_tup_upd + n_tup_ins + n_tup_del AS num_writes,
       indexdef AS definition
FROM pg_stat_user_indexes AS idstat
JOIN pg_indexes
  ON indexrelname = indexname
 AND idstat.schemaname = pg_indexes.schemaname
JOIN pg_stat_user_tables AS tabstat
  ON idstat.relid = tabstat.relid
WHERE idstat.idx_scan < 200
  AND indexdef !~* 'unique'
ORDER BY idstat.relname, indexrelname;
It tells me how often each index was used, how much space it uses, etc.
However:
I get the database backup from the client site. When I restore the database, the query returns zeros for times_used. I suspect that all indexes are rebuilt on restore.
Finally, question:
What is the easiest way to capture (backup) the data from pg_catalog so that actual client data on index use can be restored and analyzed?
Provide an SQL script that the client can run with psql -f report_on_live_db.sql live_db_name. Have it SELECT the desired data.
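A minimal sketch of what report_on_live_db.sql could contain; it simply dumps the per-index usage counters to a CSV file (the file name and column list are only an example):
\copy (SELECT schemaname, relname, indexrelname, idx_scan, idx_tup_read, idx_tup_fetch FROM pg_stat_user_indexes ORDER BY schemaname, relname) TO 'index_usage.csv' WITH CSV HEADER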
Alternately, you might be able to get a raw file-system level copy of the client database from the client and then fire it up on your local machine. This will only work if you can run PostgreSQL on the same operating system and architecture; you can't use a 64-bit PostgreSQL to access a 32-bit database, nor can you use a Linux PostgreSQL to access a Windows or BSD database. The same PostgreSQL major version must be used; you can't use 9.1 to access a 9.0 database or vice versa.
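If you go the raw-copy route, a very rough sketch of what the client would run; the data directory path and the decision to stop the cluster for a consistent copy are assumptions about their setup:
# stop the cluster so the copy is consistent, archive the data directory, restart
pg_ctl -D /var/lib/pgsql/9.3/data stop
tar czf client_datadir.tar.gz /var/lib/pgsql/9.3/data
pg_ctl -D /var/lib/pgsql/9.3/data start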