queryid column missing in pg_stat_statements table - postgresql

We have a Heroku Postgres database running on version 9.6.1.
When upgrading our PgHero installation to the newest version, 2.0.2, we're getting failures because PgHero can't find the queryid column in the pg_stat_statements table.
The pg_stat_statements extension is installed.
2017-08-10T09:54:44.002042+00:00 app[web.1]: Completed 500 Internal Server Error in 274ms (ActiveRecord: 186.1ms)
2017-08-10T09:54:44.004012+00:00 app[web.1]:
2017-08-10T09:54:44.004048+00:00 app[web.1]: ActiveRecord::StatementInvalid (PG::UndefinedColumn: ERROR: column "queryid" does not exist
2017-08-10T09:54:44.004050+00:00 app[web.1]: LINE 1: ...ry_stats AS ( SELECT LEFT(query, 10000) AS query, queryid AS...
2017-08-10T09:54:44.004051+00:00 app[web.1]: ^
2017-08-10T09:54:44.004052+00:00 app[web.1]: HINT: Perhaps you meant to reference the column "pg_stat_statements.query".
2017-08-10T09:54:44.004055+00:00 app[web.1]: : WITH query_stats AS ( SELECT LEFT(query, 10000) AS query, queryid AS query_hash, rolname AS user, (total_time / 1000 / 60) AS total_minutes, (total_time / calls) AS average_time, calls FROM pg_stat_statements INNER JOIN pg_database ON pg_database.oid = pg_stat_statements.dbid INNER JOIN pg_roles ON pg_roles.oid = pg_stat_statements.userid WHERE pg_database.datname = current_database() ) SELECT query, query_hash, query_stats.user, total_minutes, average_time, calls, total_minutes * 100.0 / (SELECT SUM(total_minutes) FROM query_stats) AS total_percent, (SELECT SUM(total_minutes) FROM query_stats) AS all_queries_total_minutes FROM query_stats ORDER BY "total_minutes" DESC LIMIT 100):

It turns out that upgrading the Postgres version on Heroku does not necessarily update installed extensions to their most current version.
Updating the extension by running
ALTER EXTENSION pg_stat_statements UPDATE;
fixed the problem.
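To confirm the mismatch first, a minimal sketch (plain catalog queries, nothing PgHero-specific) comparing the extension version installed in the database with the one shipped with the server binaries:
-- version of the extension installed in the current database
SELECT extversion FROM pg_extension WHERE extname = 'pg_stat_statements';
-- version available with the running server binaries
SELECT default_version FROM pg_available_extensions WHERE name = 'pg_stat_statements';
If the two differ, ALTER EXTENSION pg_stat_statements UPDATE; brings the installed definition, including the queryid column, up to date.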

Related

Difference in CTE using SQLite3 and PostgreSQL

I'm practicing SQL in Jupyter Notebook. I use the sqlalchemy and psycopg2 libs for practicing PostgreSQL syntax (using ElephantSQL) and the sqlite3 lib for a local db.
I have a query:
sql = '''
with medium_credits as (
select t.credit_amount from german_credit t
where t.credit_amount > 1000 and t.credit_amount < 3000
group by t.credit_amount),
medium_credits_info as (
select * from german_credit t
where t.credit_amount in medium_credits)
select t.purpose, t.housing, count(1) as count
from medium_credits_info t
group by t.purpose, t.housing
'''
Running this query using pd.read_sql(sql, connection), where connection is an object created with sqlite3, works fine.
When I try to run it against PostgreSQL using pd.read_sql(sql, engine), where engine is an object created with sqlalchemy, it throws a ProgrammingError:
ProgrammingError: (psycopg2.errors.SyntaxError) syntax error at or near "medium_credits"
LINE 14: where t.credit_amount in medium_credits)
I guess Postgres doesn't let you reference a CTE directly in an IN clause this way.
Is there any way I could run this query without the error?
PS: the local and ElephantSQL databases are identical.
Try this, rewriting the IN as a join and keeping your final SELECT:
with medium_credits as (
select t.credit_amount from german_credit t
where t.credit_amount > 1000 and t.credit_amount < 3000
group by t.credit_amount),
medium_credits_info as (
select *
from german_credit t
inner join medium_credits m
on m.credit_amount = t.credit_amount)
select t.purpose, t.housing, count(1) as count
from medium_credits_info t
group by t.purpose, t.housing
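Alternatively, if you want to keep the IN form, Postgres accepts it when the CTE is wrapped in a subquery (the bare in medium_credits shorthand is SQLite-specific), for example:
medium_credits_info as (
select * from german_credit t
where t.credit_amount in (select credit_amount from medium_credits))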

How to create an Oracle procedure that selects from Postgres over a dblink

I can select (in Oracle) from Postgres over a dblink, and it works fine.
But if I create a procedure with this select, I get an error.
Procedure:
CREATE OR REPLACE PROCEDURE test_merge as
begin
MERGE INTO CARDS C
USING (SELECT c."card_id", 1, n."channel"
FROM "table_1"#DBLINK_NAME n
JOIN
"table_2"#DBLINK_NAME c
ON n."card_id" = c."id"
WHERE n."type" = 'param1') B
ON (C.CARDID = B."card_id")
WHEN MATCHED
THEN
UPDATE SET C.SENDR = 1, C.PHONE = '+' || B."channel";
end;
ORA-04052: error occurred when looking up remote object postgres.table_name#DBLINK_NAME ORA-00604: error occurred at recursive SQL level 1 ORA-28500: connection from ORACLE to a non-Oracle system return this message: ERROR: relation "postgres.card" does
Any ideas?
Oracle 11g.
Thanks!
It was necessary to specify the schema on the Postgres side. In my case, it was enough to prefix the table names with "public" in the procedure before accessing the tables:
"public"."table_1"#DBLINK_NAME

Postgis pg_stat_statements errors

I have a Postgis database deployed on a Kubernetes cluster, using this image: docker pull postgis/postgis:13-3.1.
I was trying to solve this error:
2021-06-29 03:20:50.958 UTC [2852] ERROR: relation "pg_stat_statements" does not exist at character 536
2021-06-29 03:20:50.958 UTC [2852] STATEMENT: SELECT t2.rolname, t3.datname, queryid, calls, total_time / 1000 as total_time_seconds, min_time / 1000 as min_time_seconds, max_time / 1000 as max_time_seconds, mean_time / 1000 as mean_time_seconds, stddev_time / 1000 as stddev_time_seconds, rows, shared_blks_hit, shared_blks_read, shared_blks_dirtied, shared_blks_written, local_blks_hit, local_blks_read, local_blks_dirtied, local_blks_written, temp_blks_read, temp_blks_written, blk_read_time / 1000 as blk_read_time_seconds, blk_write_time / 1000 as blk_write_time_seconds FROM pg_stat_statements t1 JOIN pg_roles t2 ON (t1.userid=t2.oid) JOIN pg_database t3 ON (t1.dbid=t3.oid) WHERE t2.rolname != 'rdsadmin'
So I installed the missing extension in all my dbs, like so:
psql -U $PG_USER \
-d $DATABASE \
-c "CREATE EXTENSION pg_stat_statements SCHEMA public"
psql -U $PG_USER \
-d $DATABASE \
-c "CREATE EXTENSION pg_stat_statements SCHEMA pg_catalog"
I'm now getting this error:
2021-06-29 20:04:46.870 UTC [331] ERROR: column "total_time" does not exist at character 48
2021-06-29 20:04:46.870 UTC [331] STATEMENT: SELECT t2.rolname, t3.datname, queryid, calls, total_time / 1000 as total_time_seconds, min_time / 1000 as min_time_seconds, max_time / 1000 as max_time_seconds, mean_time / 1000 as mean_time_seconds, stddev_time / 1000 as stddev_time_seconds, rows, shared_blks_hit, shared_blks_read, shared_blks_dirtied, shared_blks_written, local_blks_hit, local_blks_read, local_blks_dirtied, local_blks_written, temp_blks_read, temp_blks_written, blk_read_time / 1000 as blk_read_time_seconds, blk_write_time / 1000 as blk_write_time_seconds FROM pg_stat_statements t1 JOIN pg_roles t2 ON (t1.userid=t2.oid) JOIN pg_database t3 ON (t1.dbid=t3.oid) WHERE t2.rolname != 'rdsadmin'
I've tried to find the reason why, but haven't found anything helpful anywhere. Any ideas on how to solve this issue?
Postgres version: psql (PostgreSQL) 13.2 (Debian 13.2-1.pgdg100+1)
Config file:
# ...
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 1000
pg_stat_statements.track = all
# ...
That query is coming from some monitoring tool, which as far as I can tell has nothing to do with PostGIS. The monitoring tool is out of date, as that column is now called "total_exec_time". (It was renamed when separate columns were added for planning times as well as for execution times.)
Following jjanes' comment, I found out that the Prometheus node exporter was making an incorrect query (here): the column name was indeed out of date.
Changing the column name fixed this issue.
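For reference, a rough sketch of the same monitoring query adjusted to the PostgreSQL 13 column names, showing only the renamed time columns (total_time, min_time, max_time, mean_time and stddev_time became the *_exec_time variants; the remaining columns are unchanged):
SELECT t2.rolname, t3.datname, queryid, calls,
total_exec_time / 1000 AS total_time_seconds,
min_exec_time / 1000 AS min_time_seconds,
max_exec_time / 1000 AS max_time_seconds,
mean_exec_time / 1000 AS mean_time_seconds,
stddev_exec_time / 1000 AS stddev_time_seconds,
rows, shared_blks_hit, shared_blks_read
FROM pg_stat_statements t1
JOIN pg_roles t2 ON (t1.userid = t2.oid)
JOIN pg_database t3 ON (t1.dbid = t3.oid)
WHERE t2.rolname != 'rdsadmin';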

pg_locks table has lot of simple select statements

We are connecting to our PostgreSQL (RDS) server from our Django backend as well as from Lambda. Sometimes the Django backend queries time out, and I run the following query to see the locks:
SELECT
    pg_stat_activity.client_addr,
    pg_stat_activity.query
FROM pg_class
JOIN pg_locks ON pg_locks.relation = pg_class.oid
JOIN pg_stat_activity ON pg_locks.pid = pg_stat_activity.pid
WHERE
    pg_locks.granted = 't' AND
    pg_class.relname = 'accounts_user'
This gives me 30 rows of simple select queries executed from lambda like this:
SELECT first_name, picture, username FROM accounts_user WHERE id = $1
Why does this query hold a lock? Should I be worried?
I'm using the pg8000 library to connect from Lambda:
with pgsql.cursor() as cursor:
    cursor.execute(
        """
        SELECT first_name, picture, username
        FROM accounts_user
        WHERE id = %s
        """,
        (author_user_id,),
    )
    row = cursor.fetchone()
    # use the row ..
I opened an issue on GitHub; maybe I'm using the library wrong: https://github.com/tlocke/pg8000/issues/16
You can also try to reuse the database connection, see https://docs.djangoproject.com/en/2.2/ref/settings/#conn-max-age
DATABASES = {
'default': {
...
'CONN_MAX_AGE': 600, # reuse database connection
}
}
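One thing worth checking: a plain SELECT takes an ACCESS SHARE lock that is released only when its transaction ends, so sessions left "idle in transaction" keep showing up in pg_locks even after the query has finished. A query along these lines should show whether the Lambda connections are in that state:
SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY xact_start;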

MySQL Workbench generates this error

Error Code: 1064. You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'AS a LIMIT 0, 1000' at line 3
Query:
SELECT * FROM (
subject s INNER JOIN prerequisite p ON s.subject_code = p.subject_id)
AS a
I think what you want is a derived table:
https://dev.mysql.com/doc/refman/5.6/en/derived-tables.html
SELECT * FROM (
select * from subject s INNER JOIN prerequisite p ON s.subject_code = p.subject_id)
AS a
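If subject and prerequisite happen to share column names, SELECT * inside the derived table can fail with a duplicate-column error, so it may be safer to list only the columns you need; a sketch using just the columns mentioned in the question:
SELECT a.subject_code, a.subject_id
FROM (
SELECT s.subject_code, p.subject_id
FROM subject s
INNER JOIN prerequisite p ON s.subject_code = p.subject_id
) AS a;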