I have a PostGIS database deployed on a Kubernetes cluster, using the postgis/postgis:13-3.1 image.
I was trying to solve this error:
2021-06-29 03:20:50.958 UTC [2852] ERROR: relation "pg_stat_statements" does not exist at character 536
2021-06-29 03:20:50.958 UTC [2852] STATEMENT: SELECT t2.rolname, t3.datname, queryid, calls, total_time / 1000 as total_time_seconds, min_time / 1000 as min_time_seconds, max_time / 1000 as max_time_seconds, mean_time / 1000 as mean_time_seconds, stddev_time / 1000 as stddev_time_seconds, rows, shared_blks_hit, shared_blks_read, shared_blks_dirtied, shared_blks_written, local_blks_hit, local_blks_read, local_blks_dirtied, local_blks_written, temp_blks_read, temp_blks_written, blk_read_time / 1000 as blk_read_time_seconds, blk_write_time / 1000 as blk_write_time_seconds FROM pg_stat_statements t1 JOIN pg_roles t2 ON (t1.userid=t2.oid) JOIN pg_database t3 ON (t1.dbid=t3.oid) WHERE t2.rolname != 'rdsadmin'
So I installed the missing extension in all my dbs, like so:
psql -U $PG_USER \
-d $DATABASE \
-c "CREATE EXTENSION pg_stat_statements SCHEMA public"
psql -U $PG_USER \
-d $DATABASE \
-c "CREATE EXTENSION pg_stat_statements SCHEMA pg_catalog"
I'm now getting this error:
2021-06-29 20:04:46.870 UTC [331] ERROR: column "total_time" does not exist at character 48
2021-06-29 20:04:46.870 UTC [331] STATEMENT: SELECT t2.rolname, t3.datname, queryid, calls, total_time / 1000 as total_time_seconds, min_time / 1000 as min_time_seconds, max_time / 1000 as max_time_seconds, mean_time / 1000 as mean_time_seconds, stddev_time / 1000 as stddev_time_seconds, rows, shared_blks_hit, shared_blks_read, shared_blks_dirtied, shared_blks_written, local_blks_hit, local_blks_read, local_blks_dirtied, local_blks_written, temp_blks_read, temp_blks_written, blk_read_time / 1000 as blk_read_time_seconds, blk_write_time / 1000 as blk_write_time_seconds FROM pg_stat_statements t1 JOIN pg_roles t2 ON (t1.userid=t2.oid) JOIN pg_database t3 ON (t1.dbid=t3.oid) WHERE t2.rolname != 'rdsadmin'
I've tried to find the reason why, but haven't found anything helpful anywhere. Any ideas on how to solve this issue?
Postgres version: psql (PostgreSQL) 13.2 (Debian 13.2-1.pgdg100+1)
Config file:
# ...
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 1000
pg_stat_statements.track = all
# ...
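(After a restart, it can also be verified that the module is preloaded and its settings are active; a minimal check:)
-- Confirm the library is loaded and the pg_stat_statements GUCs are set
SHOW shared_preload_libraries;
SELECT name, setting FROM pg_settings WHERE name LIKE 'pg_stat_statements%';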
That query is coming from some monitoring tool, which as far as I can tell has nothing to do with PostGIS. The monitoring tool is out of date: that column is now called "total_exec_time". (It was renamed when columns were added for planning times as well as for execution times.)
Following jjanes' comment, I found out that the Prometheus node exporter was making an incorrect query (here), because it still used the outdated column name.
Changing the column name fixed this issue.
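For reference, this is roughly what the corrected query looks like with the PostgreSQL 13 column names (only the timing columns changed: total_time, min_time, max_time, mean_time and stddev_time became total_exec_time, min_exec_time, max_exec_time, mean_exec_time and stddev_exec_time, with separate *_plan_time columns added):
-- Same monitoring query, adjusted for pg_stat_statements on PostgreSQL 13
SELECT t2.rolname, t3.datname, queryid, calls,
       total_exec_time / 1000 AS total_time_seconds,
       min_exec_time / 1000 AS min_time_seconds,
       max_exec_time / 1000 AS max_time_seconds,
       mean_exec_time / 1000 AS mean_time_seconds,
       stddev_exec_time / 1000 AS stddev_time_seconds,
       rows, shared_blks_hit, shared_blks_read, shared_blks_dirtied, shared_blks_written,
       local_blks_hit, local_blks_read, local_blks_dirtied, local_blks_written,
       temp_blks_read, temp_blks_written,
       blk_read_time / 1000 AS blk_read_time_seconds,
       blk_write_time / 1000 AS blk_write_time_seconds
FROM pg_stat_statements t1
JOIN pg_roles t2 ON (t1.userid = t2.oid)
JOIN pg_database t3 ON (t1.dbid = t3.oid)
WHERE t2.rolname != 'rdsadmin';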
I'm practicing SQL in a Jupyter Notebook. I use the sqlalchemy and psycopg2 libraries for practicing PostgreSQL syntax (using ElephantSQL) and the sqlite3 library for a local db.
I have a request:
sql = '''
with medium_credits as (
select t.credit_amount from german_credit t
where t.credit_amount > 1000 and t.credit_amount < 3000
group by t.credit_amount),
medium_credits_info as (
select * from german_credit t
where t.credit_amount in medium_credits)
select t.purpose, t.housing, count(1) as count
from medium_credits_info t
group by t.purpose, t.housing
'''
Running this request using pd.read_sql(sql, connection), where connection is an object created with sqlite3, works fine.
When I try it against the PostgreSQL database using pd.read_sql(sql, engine), where engine is an object created with sqlalchemy, it throws a ProgrammingError:
ProgrammingError: (psycopg2.errors.SyntaxError) syntax error at or near "medium_credits"
LINE 14: where t.credit_amount in medium_credits)
I guess Postgres doesn't let you reference a CTE by name directly on the right-hand side of IN this way (SQLite apparently does).
Is there any way to run this query without an error?
PS: local and elephantsql databases are identical
try this:
with medium_credits as (
    select t.credit_amount from german_credit t
    where t.credit_amount > 1000 and t.credit_amount < 3000
    group by t.credit_amount),
medium_credits_info as (
    SELECT t.*
    FROM german_credit t
    INNER JOIN medium_credits m
           ON m.credit_amount = t.credit_amount)
select t.purpose, t.housing, count(1) as count
from medium_credits_info t
group by t.purpose, t.housing;
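Alternatively, PostgreSQL is fine with the IN form as long as the right-hand side is a subquery rather than a bare CTE name; a sketch that keeps the original structure:
with medium_credits as (
    select t.credit_amount from german_credit t
    where t.credit_amount > 1000 and t.credit_amount < 3000
    group by t.credit_amount),
medium_credits_info as (
    select * from german_credit t
    where t.credit_amount in (select credit_amount from medium_credits))
select t.purpose, t.housing, count(1) as count
from medium_credits_info t
group by t.purpose, t.housing;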
I created a pg_dump with the following command -
pg_dump -U postgres -d db -n public \
--exclude-table-data 'exclude_table_*' \
--exclude-table-data 'another_set_of_tables_to_exclude*' > dump.sql
This excluded the tables I needed it to exclude, but it didn't dump any functions that were in the public schema. Why did it not dump the functions and how do I get it to dump them?
UPDATE
This is the definition of a materialized view -
CREATE MATERIALIZED VIEW public.attending AS
SELECT (split_part((ct.id)::text, '-'::text, 1))::bigint AS
attending_physician,
split_part((ct.id)::text, '-'::text, 2) AS business,
(split_part((ct.id)::text, '-'::text, 3))::bigint AS organization,
split_part((ct.id)::text, '-'::text, 4) AS county,
ct.id,
ct."qtr-0",
ct."qtr-1",
ct."qtr-2",
ct."qtr-3",
ct."qtr-4",
ct."qtr-5",
ct."qtr-6",
ct."qtr-7",
ct."qtr-8"
FROM crosstab('SELECT attending_practitioner || ''-'' || business || ''-'' || organization || ''-'' || county AS id, period, COALESCE(admits, 0)
FROM calc ORDER BY 1, 2 DESC'::text, 'SELECT year || ''q'' || quarter FROM calc_trend ORDER BY 1 DESC limit 9'::text) ct(id character varying(32), "qtr-0" integer, "qtr-1" integer, "qtr-2" integer, "qtr-3" integer, "qtr-4" integer, "qtr-5" integer, "qtr-6" integer, "qtr-7" integer, "qtr-8" integer);
It should dump functions (and all other objects) in the public schema.
The functions that are not dumped are those that are part of an extension, like crosstab in your case (it comes from the tablefunc extension). Such objects are not dumped individually; they are included in the CREATE EXTENSION.
Unfortunately extensions are not dumped with a schema dump (they belong to the database).
You should create the extensions manually on the destination database before restoring the dump:
CREATE EXTENSION tablefunc;
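If you are unsure which extension provides a given function (and therefore what to create on the target database), a dependency query along these lines can tell you; a sketch:
-- Find the extension that owns a function named 'crosstab'
SELECT e.extname, p.proname
FROM pg_depend d
JOIN pg_extension e ON e.oid = d.refobjid
JOIN pg_proc p ON p.oid = d.objid
WHERE d.classid = 'pg_proc'::regclass
  AND d.refclassid = 'pg_extension'::regclass
  AND d.deptype = 'e'
  AND p.proname = 'crosstab';
In psql, \dx on the source database also lists all installed extensions, so you know which ones to recreate before restoring.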
We have a Heroku Postgres database running version 9.6.1.
When upgrading our pghero installation to the newest version (2.0.2), we're getting failures because pghero isn't able to find the queryid column in the pg_stat_statements view.
The pg_stat_statements extension is installed.
2017-08-10T09:54:44.002042+00:00 app[web.1]: Completed 500 Internal Server Error in 274ms (ActiveRecord: 186.1ms)
2017-08-10T09:54:44.004012+00:00 app[web.1]:
2017-08-10T09:54:44.004048+00:00 app[web.1]: ActiveRecord::StatementInvalid (PG::UndefinedColumn: ERROR: column "queryid" does not exist
2017-08-10T09:54:44.004050+00:00 app[web.1]: LINE 1: ...ry_stats AS ( SELECT LEFT(query, 10000) AS query, queryid AS...
2017-08-10T09:54:44.004051+00:00 app[web.1]: ^
2017-08-10T09:54:44.004052+00:00 app[web.1]: HINT: Perhaps you meant to reference the column "pg_stat_statements.query".
2017-08-10T09:54:44.004055+00:00 app[web.1]: : WITH query_stats AS ( SELECT LEFT(query, 10000) AS query, queryid AS query_hash, rolname AS user, (total_time / 1000 / 60) AS total_minutes, (total_time / calls) AS average_time, calls FROM pg_stat_statements INNER JOIN pg_database ON pg_database.oid = pg_stat_statements.dbid INNER JOIN pg_roles ON pg_roles.oid = pg_stat_statements.userid WHERE pg_database.datname = current_database() ) SELECT query, query_hash, query_stats.user, total_minutes, average_time, calls, total_minutes * 100.0 / (SELECT SUM(total_minutes) FROM query_stats) AS total_percent, (SELECT SUM(total_minutes) FROM query_stats) AS all_queries_total_minutes FROM query_stats ORDER BY "total_minutes" DESC LIMIT 100):
It turns out that upgrading the Postgres version on Heroku does not necessarily update extensions to their most recent version.
Updating the extension by running
ALTER EXTENSION pg_stat_statements UPDATE;
fixed the problem.
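To see whether an installed extension lags behind the version the server ships with (and would therefore benefit from ALTER EXTENSION ... UPDATE), a check like this works:
-- Compare the installed extension version with the server's default version
SELECT name, installed_version, default_version
FROM pg_available_extensions
WHERE name = 'pg_stat_statements';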
I generated a (UTF-8) file with an external program, to be imported into PostgreSQL 9.6.1. The problem is the bytea field (PWHASH).
Snippet from this file (using TAB as delimiter)
COPY USERS (ID,CODE,PWHASH,EMAIL) FROM stdin;
7 test1 E'\\\\x657B954D27B4AC56FA997D24A5FF2563' test#amce.org
\.
When importing with
psql mydb myrole -f test.sql
Everything goes well.
However, if I query the result, the byte array is not 16 bytes but 37 bytes:
select passwordhash,length(passwordhash) from users;
passwordhash | length
------------------------------------------------------------------------------+--------
\x45275c78363537423935344432374234414335364641393937443234413546463235363327 | 37
What is the correct syntax for this?
The format of the input file is wrong. It should be like this:
7 test1 \\x657B954D27B4AC56FA997D24A5FF2563 test#amce.org
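Once the file is fixed like this, the reimported value should come out as 16 bytes; a quick check (using the column names from the COPY statement above):
-- Verify the hash is now stored as raw bytes
SELECT id, octet_length(pwhash) AS hash_bytes FROM users;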
I believe I will have to "prepare" the data. Something like this:
t=# insert into u select 'x657B954D27B4AC56FA997D24A5FF2563';
INSERT 0 1
Time: 5990.809 ms
t=# select b from u;
b
----------------------------------------------------------------------
\x783635374239353444323742344143353646413939374432344135464632353633
(1 row)
Time: 0.234 ms
t=# insert into u select decode('657B954D27B4AC56FA997D24A5FF2563','hex');
INSERT 0 1
Time: 62.767 ms
t=# select b from u;
b
----------------------------------------------------------------------
\x783635374239353444323742344143353646413939374432344135464632353633
\x657b954d27b4ac56fa997d24a5ff2563
(2 rows)
Time: 0.208 ms
So in your case you can stage the data in a text column and convert it:
create table t as select ID,CODE,PWHASH::text,EMAIL from users where false;
COPY t (ID,CODE,PWHASH,EMAIL) FROM stdin;
insert into users
select ID, CODE, decode(substring(PWHASH from '[0-9A-Fa-f]{32}'), 'hex'), EMAIL from t;
With PostgreSQL 9.5, I would like to track the total amount of bytes written (since DB cluster start) to:
WAL
temp files
temp tables
For 1.:
select
pg_size_pretty(archived_count * 16*1024*1024) wal_bytes,
(now() - stats_reset)::text uptime
from pg_stat_archiver;
For 2.:
select
(now() - stats_reset)::text uptime,
pg_size_pretty(temp_bytes) temp_bytes
from pg_stat_database where datname = 'mydb';
How do I get 3.?
In response to a comment below, I did some tests to check where temp tables are actually written.
First, the DB parameter temp_buffers is at 8GB on this cluster:
select pg_size_pretty(setting::bigint*8192) from pg_settings
where name = 'temp_buffers';
-- "8192 MB"
Let's create a temp table:
drop table if exists foo;
create temp table foo as
select random() from generate_series(1, 1000000000);
-- Query returned successfully: 1000000000 rows affected, 10:22 minutes execution time.
Check the PostgreSQL backend pid and OID of the created temp table:
select pg_backend_pid(), 'pg_temp.foo'::regclass::oid;
-- 46573;398695055
Check the RSS size of the backend process
~$ grep VmRSS /proc/46573/status
VmRSS: 9246276 kB
As can be seen, this is only slightly above the 8 GB set with temp_buffers.
The data inserted into the temp table is, however, written out immediately, and it goes to the normal tablespace directories, not to temp files:
select * from pg_relation_filepath('pg_temp.foo')
-- "base/16416/t3_398695055"
Here is the number of files and amount written:
with temp_table_files as
(
select * from pg_ls_dir('base/16416/') fn
where fn like 't3_398695055%'
)
select
count(*) as cnt,
pg_size_pretty(sum((pg_stat_file('base/16416/' || fn)).size)) as size
from temp_table_files;
-- 34;"34 GB"
And finally verify that the set of temp files owned by this backend PID is indeed empty:
with temp_files_per_pid as
(
with temp_files as
(
select
temp_file,
(regexp_replace(temp_file, $r$^pgsql_tmp(\d+)\..*$$r$, $rr$\1$rr$, 'g'))::int as pid,
(pg_stat_file('base/pgsql_tmp/' || temp_file)).size as size
from pg_ls_dir('base/pgsql_tmp') temp_file
)
select pid, pg_size_pretty(sum(size)) from temp_files group by pid order by pid
)
select * from temp_files_per_pid where pid = 46573;
Returns nothing.
What is also "interesting", after dropping the temp table
DROP TABLE foo;
the RSS of the backend process does not decrease:
~$ grep VmRSS /proc/46573/status
VmRSS: 9254544 kB
Doing the following will also not free the RSS again:
RESET ALL;
DEALLOCATE ALL;
DISCARD TEMP;
As far as I know, there is no special metric for temp tables. Temp tables use session (process) memory up to temp_buffers in size (8MB by default). When these temp buffers are full, the remaining data is written out to the temp table's files on disk (as your test shows, under the normal database directory rather than pgsql_tmp).
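There is no cumulative "bytes written by temp tables" counter, but a point-in-time approximation is possible by summing the on-disk size of the temporary relations currently visible in the connected database; a sketch (this is a snapshot, not a total since cluster start):
-- Current on-disk footprint of temporary tables in this database
SELECT pg_size_pretty(sum(pg_total_relation_size(c.oid))) AS temp_tables_size
FROM pg_class c
WHERE c.relpersistence = 't'
  AND c.relkind = 'r';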