Please help: I'm setting up PostgreSQL monitoring via Zabbix 4.2, using the standard built-in PostgreSQL template. All data is displayed correctly except for the metrics from pgsql.query.time.sql, which are not displayed.
When I try to run that query manually, I get an error:
psql -qtAX -h "$1" -p "$2" -U "$3" -d "$4" -v tmax=$5 -f "/var/lib/zabbix/postgresql/pgsql.query.time.sql"
psql:/var/lib/zabbix/postgresql/pgsql.query.time.sql:31: ERROR: syntax error at or near ")"
LINE 22: ...'epoch' FROM (clock_timestamp() - query_start)) > )::integer...
Here is the query itself from /var/lib/zabbix/postgresql/pgsql.query.time.sql
WITH T AS
(SELECT db.datname,
coalesce(T.query_time_max, 0) query_time_max,
coalesce(T.tx_time_max, 0) tx_time_max,
coalesce(T.mro_time_max, 0) mro_time_max,
coalesce(T.query_time_sum, 0) query_time_sum,
coalesce(T.tx_time_sum, 0) tx_time_sum,
coalesce(T.mro_time_sum, 0) mro_time_sum,
coalesce(T.query_slow_count, 0) query_slow_count,
coalesce(T.tx_slow_count, 0) tx_slow_count,
coalesce(T.mro_slow_count, 0) mro_slow_count
FROM pg_database db NATURAL
LEFT JOIN (
SELECT datname,
extract(epoch FROM now())::integer ts,
coalesce(max(extract('epoch' FROM (clock_timestamp() - query_start))::integer * (state NOT IN ('idle', 'idle in transaction', 'idle in transaction (aborted)') AND query !~* E'^(\\s*(--[^\\n]*\\n|/\\*.*\\*/|\\n))*(autovacuum|VACUUM|ANALYZE|REINDEX|CLUSTER|CREATE|ALTER|TRUNCATE|DROP)')::integer), 0) query_time_max,
coalesce(max(extract('epoch' FROM (clock_timestamp() - query_start))::integer * (state NOT IN ('idle') AND query !~* E'^(\\s*(--[^\\n]*\\n|/\\*.*\\*/|\\n))*(autovacuum|VACUUM|ANALYZE|REINDEX|CLUSTER|CREATE|ALTER|TRUNCATE|DROP)')::integer), 0) tx_time_max,
coalesce(max(extract('epoch' FROM (clock_timestamp() - query_start))::integer * (state NOT IN ('idle') AND query ~* E'^(\\s*(--[^\\n]*\\n|/\\*.*\\*/|\\n))*(autovacuum|VACUUM|ANALYZE|REINDEX|CLUSTER|CREATE|ALTER|TRUNCATE|DROP)')::integer), 0) mro_time_max,
coalesce(sum(extract('epoch' FROM (clock_timestamp() - query_start))::integer * (state NOT IN ('idle', 'idle in transaction', 'idle in transaction (aborted)') AND query !~* E'^(\\s*(--[^\\n]*\\n|/\\*.*\\*/|\\n))*(autovacuum|VACUUM|ANALYZE|REINDEX|CLUSTER|CREATE|ALTER|TRUNCATE|DROP)')::integer), 0) query_time_sum,
coalesce(sum(extract('epoch' FROM (clock_timestamp() - query_start))::integer * (state NOT IN ('idle') AND query !~* E'^(\\s*(--[^\\n]*\\n|/\\*.*\\*/|\\n))*(autovacuum|VACUUM|ANALYZE|REINDEX|CLUSTER|CREATE|ALTER|TRUNCATE|DROP)')::integer), 0) tx_time_sum,
coalesce(sum(extract('epoch' FROM (clock_timestamp() - query_start))::integer * (state NOT IN ('idle') AND query ~* E'^(\\s*(--[^\\n]*\\n|/\\*.*\\*/|\\n))*(autovacuum|VACUUM|ANALYZE|REINDEX|CLUSTER|CREATE|ALTER|TRUNCATE|DROP)')::integer), 0) mro_time_sum,
coalesce(sum((extract('epoch' FROM (clock_timestamp() - query_start)) > :tmax)::integer * (state NOT IN ('idle', 'idle in transaction', 'idle in transaction (aborted)') AND query !~* E'^(\\s*(--[^\\n]*\\n|/\\*.*\\*/|\\n))*(autovacuum|VACUUM|ANALYZE|REINDEX|CLUSTER|CREATE|ALTER|TRUNCATE|DROP)')::integer), 0) query_slow_count,
coalesce(sum((extract('epoch' FROM (clock_timestamp() - query_start)) > :tmax)::integer * (state NOT IN ('idle') AND query !~* E'^(\\s*(--[^\\n]*\\n|/\\*.*\\*/|\\n))*(autovacuum|VACUUM|ANALYZE|REINDEX|CLUSTER|CREATE|ALTER|TRUNCATE|DROP)')::integer), 0) tx_slow_count,
coalesce(sum((extract('epoch' FROM (clock_timestamp() - query_start)) > :tmax)::integer * (state NOT IN ('idle') AND query ~* E'^(\\s*(--[^\\n]*\\n|/\\*.*\\*/|\\n))*(autovacuum|VACUUM|ANALYZE|REINDEX|CLUSTER|CREATE|ALTER|TRUNCATE|DROP)')::integer), 0) mro_slow_count
FROM pg_stat_activity
WHERE pid <> pg_backend_pid()
GROUP BY 1) T
WHERE NOT db.datistemplate )
SELECT json_object_agg(datname, row_to_json(T))
FROM T
The documentation says:
-c command
[...]
command must be either a command string that is completely parsable by the server (i.e., it contains no psql-specific features), or a single backslash command.
So you cannot use variables that way.
You could use a "here document":
psql <<EOF
\set x $5
SELECT :x
EOF
Or you use the shell variable directly in the statement:
psql -c "SELECT $5"
The issue here is that you are passing an empty value to tmax=, which is indicated by the error:
psql:/var/lib/zabbix/postgresql/pgsql.query.time.sql:31: ERROR: syntax error at or near ")"
LINE 22: ...'epoch' FROM (clock_timestamp() - query_start)) > )::integer...
^-- missing value here
I believe the {$PG.SLOW_QUERIES.MAX.WARN} macro is the culprit here. Make sure that:
it exists in the template macros;
it is not being overwritten with an empty value by a host macro.
Now, this may not be the actual reason why monitoring doesn't work. To be sure what exactly goes wrong, we need to see the Zabbix agent logs; you can find those in /var/log/zabbix/zabbix_agentd.log. You may need to change DebugLevel in the agent config to 3 or even 4 (be wary: level 4 produces a lot of output).
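For reference, a minimal sketch of the relevant agent settings (assuming the default config file location; your paths may differ):
# /etc/zabbix/zabbix_agentd.conf
DebugLevel=4
LogFile=/var/log/zabbix/zabbix_agentd.log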
I enabled DebugLevel=4, and in the Zabbix agent log I can now see the correct command:
psql -qtAX -h "127.0.0.1" -p "5432" -U "zbx_monitor" -d "test2" -v tmax=30 -f "/var/lib/zabbix/postgresql/pgsql.query.time.sql"
But in the output of the command I get 0 values for all metrics for all databases, even though I ran a parallel query against the test2 database that takes 14 seconds (I have run it many times). Why is that?
Here is the result of executing the command:
psql -qtAX -h "127.0.0.1" -p "5432" -U "zbx_monitor" -d "test2" -v tmax=30 -f "/var/lib/zabbix/postgresql/pgsql.query.time.sql"
{ "postgres" : {"datname":"postgres","query_time_max":0,"tx_time_max":0,"mro_time_max":0,"query_time_sum":0,"tx_time_sum":0,"mro_time_sum":0,"query_slow_count":0,"tx_slow_count":0,"mro_slow_count":0}, "test" : {"datname":"test","query_time_max":0,"tx_time_max":0,"mro_time_max":0,"query_time_sum":0,"tx_time_sum":0,"mro_time_sum":0,"query_slow_count":0,"tx_slow_count":0,"mro_slow_count":0}, "test2" : {"datname":"test2","query_time_max":0,"tx_time_max":0,"mro_time_max":0,"query_time_sum":0,"tx_time_sum":0,"mro_time_sum":0,"query_slow_count":0,"tx_slow_count":0,"mro_slow_count":0} }
Related
cd C:\Program Files\PostgreSQL\12\bin
psql.exe -v v1="test" -h localhost -d postgres -U postgres -p 5432 -a -q -f /elan/Validate_files.sql
....
Select case when current_date-date(a.timestamp_1) >1 then 'No' Else 'Yes' End as Check
from elan.temp_file_names a
where a.filename not in (select b.filename from elan.temp_previous_names b)
and position('ELAN_CLAIMS' in a.filename)>1 order by timestamp_1 desc
\gset
If Check is 'No', I want to end the psql process. How do I do that?
Change the CASE expression to return a boolean:
CASE WHEN ... THEN TRUE ELSE FALSE
END AS check ... \gset
Then use \if:
\if :check
\q
\endif
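Putting the pieces together, a minimal sketch of the whole pattern (connection details are placeholders, and LIMIT 1 is added because \gset expects exactly one row):
psql -d postgres <<'EOF'
SELECT CASE WHEN current_date - date(a.timestamp_1) > 1 THEN TRUE ELSE FALSE END AS check
FROM elan.temp_file_names a
WHERE a.filename NOT IN (SELECT b.filename FROM elan.temp_previous_names b)
  AND position('ELAN_CLAIMS' IN a.filename) > 1
ORDER BY timestamp_1 DESC
LIMIT 1
\gset
\if :check
    \q
\endif
\echo 'check was false, continuing'
EOF
Note that \if/\endif requires psql 10 or later.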
I have a PostGIS database deployed on a Kubernetes cluster, using this image: docker pull postgis/postgis:13-3.1.
I was trying to solve this error:
2021-06-29 03:20:50.958 UTC [2852] ERROR: relation "pg_stat_statements" does not exist at character 536
2021-06-29 03:20:50.958 UTC [2852] STATEMENT: SELECT t2.rolname, t3.datname, queryid, calls, total_time / 1000 as total_time_seconds, min_time / 1000 as min_time_seconds, max_time / 1000 as max_time_seconds, mean_time / 1000 as mean_time_seconds, stddev_time / 1000 as stddev_time_seconds, rows, shared_blks_hit, shared_blks_read, shared_blks_dirtied, shared_blks_written, local_blks_hit, local_blks_read, local_blks_dirtied, local_blks_written, temp_blks_read, temp_blks_written, blk_read_time / 1000 as blk_read_time_seconds, blk_write_time / 1000 as blk_write_time_seconds FROM pg_stat_statements t1 JOIN pg_roles t2 ON (t1.userid=t2.oid) JOIN pg_database t3 ON (t1.dbid=t3.oid) WHERE t2.rolname != 'rdsadmin'
So I installed the missing extension in all my dbs, like so:
psql -U $PG_USER \
-d $DATABASE \
-c "CREATE EXTENSION pg_stat_statements SCHEMA public"
psql -U $PG_USER \
-d $DATABASE \
-c "CREATE EXTENSION pg_stat_statements SCHEMA pg_catalog"
I'm now getting this error:
2021-06-29 20:04:46.870 UTC [331] ERROR: column "total_time" does not exist at character 48
2021-06-29 20:04:46.870 UTC [331] STATEMENT: SELECT t2.rolname, t3.datname, queryid, calls, total_time / 1000 as total_time_seconds, min_time / 1000 as min_time_seconds, max_time / 1000 as max_time_seconds, mean_time / 1000 as mean_time_seconds, stddev_time / 1000 as stddev_time_seconds, rows, shared_blks_hit, shared_blks_read, shared_blks_dirtied, shared_blks_written, local_blks_hit, local_blks_read, local_blks_dirtied, local_blks_written, temp_blks_read, temp_blks_written, blk_read_time / 1000 as blk_read_time_seconds, blk_write_time / 1000 as blk_write_time_seconds FROM pg_stat_statements t1 JOIN pg_roles t2 ON (t1.userid=t2.oid) JOIN pg_database t3 ON (t1.dbid=t3.oid) WHERE t2.rolname != 'rdsadmin'
I've tried to find the reason why, but haven't found anything helpful anywhere. Any ideas on how to solve this issue?
Postgres version: psql (PostgreSQL) 13.2 (Debian 13.2-1.pgdg100+1)
Config file:
# ...
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 1000
pg_stat_statements.track = all
# ...
That query is coming from some monitoring tool, which as far as I can tell has nothing to do with PostGIS. The monitoring tool is out of date: that column is now called "total_exec_time". (It was renamed when columns were added for planning times as well as for execution times.)
Following jjanes' comment, I found out that the Prometheus node exporter was making an incorrect query (here): the column name it used was indeed wrong for this PostgreSQL version.
Changing the column name fixed the issue.
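For illustration, a hedged sketch of the same timing columns under the PostgreSQL 13 names (a trimmed-down version of the exporter's query, just to show the renamed columns; the real fix is updating the monitoring tool itself):
psql -U postgres -c "SELECT t2.rolname, t3.datname, queryid, calls,
       total_exec_time / 1000 AS total_time_seconds,
       min_exec_time / 1000 AS min_time_seconds,
       max_exec_time / 1000 AS max_time_seconds,
       mean_exec_time / 1000 AS mean_time_seconds
FROM pg_stat_statements t1
JOIN pg_roles t2 ON (t1.userid = t2.oid)
JOIN pg_database t3 ON (t1.dbid = t3.oid)
WHERE t2.rolname != 'rdsadmin';"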
I have a requirement to dump the contents of a definable selection of tables as CSVs for an initial load into systems that are not able to connect to PostgreSQL for various reasons.
I have written a script to do this which runs through a list of tables, using psql with the -c flag to run psql's \COPY command and dump the corresponding table to a file like this:
COPY table_name TO table_name.csv WITH (FORMAT 'csv', HEADER, QUOTE '\"', DELIMITER '|');
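For concreteness, a rough sketch of the kind of loop described above (the table list, database name, and connection details are placeholders):
# tables.txt holds one table name per line
while read -r t; do
    psql -d mydb -c "\copy $t TO '${t}.csv' WITH (FORMAT 'csv', HEADER, QUOTE '\"', DELIMITER '|')"
done < tables.txt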
The script works fine. But I am sure you have already spotted the problem: since the process takes ~57 minutes for ~60-odd tables, the likelihood of consistency is quite close to absolute zero.
I had a think about it and suspected I could make a few lightweight changes to pg_dump to do what I want, i.e. create multiple CSVs from pg_dump whilst having a hope of integrity between the tables, and being able to specify parallel dumps too.
I have added a few flags to allow me to apply a file postfix (the date), set the format options and pass in a path for the relevant output file.
However my modified pg_dump was failing when writing to a file, like:
COPY table_name (pkey_id, field1, field2 ... fieldn) TO table_name.csv WITH (FORMAT 'csv', HEADER, QUOTE '"', DELIMITER '|')
Note: Within pg_dump, the column list is expanded
So I cast around for further information and found these COPY Tips.
It looks like writing to a file is a no-no over the network; however I am on the same machine (for now). I felt writing to /tmp would be OK as it is writable by anyone.
So I tried cheating with:
seingramp#seluonkeydb01:~$ ./tp_dump -a -t table_name -D /tmp/ -k "FORMAT 'csv', HEADER, QUOTE '\"', DELIMITER '|'" -K "_$DATE_POSTFIX"
tp_dump: warning: there are circular foreign-key constraints on this table:
tp_dump: table_name
tp_dump: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
tp_dump: Consider using a full dump instead of a --data-only dump to avoid this problem.
--
-- PostgreSQL database dump
--
-- Dumped from database version 12.3
-- Dumped by pg_dump version 14devel
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;
--
-- Data for Name: material_master; Type: TABLE DATA; Schema: mm; Owner: postgres
--
COPY table_name (pkey_id, field1, field2 ... fieldn) FROM stdin;
tp_dump: error: query failed:
tp_dump: error: query was: COPY table_name (pkey_id, field1, field2 ... fieldn) TO PROGRAM 'gzip > /tmp/table_name_20200814.csv.gz' WITH (FORMAT 'csv', HEADER, QUOTE '"', DELIMITER '|')
I have neutered the data as it is customer specific.
I didn't find pg_dump's error message very helpful, do you have any ideas as to what I am doing wrong?
The changes really are quite small (excuse the code!) starting ~line 1900, ignoring the flags added around getopt().
/*
 * Use COPY (SELECT ...) TO when dumping a foreign table's data, and when
 * a filter condition was specified. For other cases a simple COPY
 * suffices.
 */
if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)
{
    /* Note: this syntax is only supported in 8.2 and up */
    appendPQExpBufferStr(q, "COPY (SELECT ");
    /* klugery to get rid of parens in column list */
    if (strlen(column_list) > 2)
    {
        appendPQExpBufferStr(q, column_list + 1);
        q->data[q->len - 1] = ' ';
    }
    else
        appendPQExpBufferStr(q, "* ");

    if (copy_from_spec)
    {
        if (copy_from_postfix)
        {
            appendPQExpBuffer(q, "FROM %s %s) TO PROGRAM 'gzip > %s%s%s.csv.gz' WITH (%s)",
                              fmtQualifiedDumpable(tbinfo),
                              tdinfo->filtercond ? tdinfo->filtercond : "",
                              copy_from_dest ? copy_from_dest : "",
                              fmtQualifiedDumpable(tbinfo),
                              copy_from_postfix,
                              copy_from_spec);
        }
        else
        {
            appendPQExpBuffer(q, "FROM %s %s) TO PROGRAM 'gzip > %s%s.csv.gz' WITH (%s)",
                              fmtQualifiedDumpable(tbinfo),
                              tdinfo->filtercond ? tdinfo->filtercond : "",
                              copy_from_dest ? copy_from_dest : "",
                              fmtQualifiedDumpable(tbinfo),
                              copy_from_spec);
        }
    }
    else
    {
        appendPQExpBuffer(q, "FROM %s %s) TO stdout;",
                          fmtQualifiedDumpable(tbinfo),
                          tdinfo->filtercond ? tdinfo->filtercond : "");
    }
}
else
{
    if (copy_from_spec)
    {
        if (copy_from_postfix)
        {
            appendPQExpBuffer(q, "COPY %s %s TO PROGRAM 'gzip > %s%s%s.csv.gz' WITH (%s)",
                              fmtQualifiedDumpable(tbinfo),
                              column_list,
                              copy_from_dest ? copy_from_dest : "",
                              fmtQualifiedDumpable(tbinfo),
                              copy_from_postfix,
                              copy_from_spec);
        }
        else
        {
            appendPQExpBuffer(q, "COPY %s %s TO PROGRAM 'gzip > %s%s.csv.gz' WITH (%s)",
                              fmtQualifiedDumpable(tbinfo),
                              column_list,
                              copy_from_dest ? copy_from_dest : "",
                              fmtQualifiedDumpable(tbinfo),
                              copy_from_spec);
        }
    }
    else
    {
        appendPQExpBuffer(q, "COPY %s %s TO stdout;",
                          fmtQualifiedDumpable(tbinfo),
                          column_list);
I tried a couple of other cheats too, like specifying a directory owned by postgres. I know it's a quick hack but I hope you can help, and thanks for looking.
This is a use case for pg_restore -f.
So:
-- Create custom format dump file
pg_dump -d some_db -U some_user -Fc -f dump.out
-- Move that file to where you need it
-- Dump data only from named table to a file from the dump file.
pg_restore -a -t table_1 -f table_1_data.sql dump.out
The pg_dump will create a consistent snapshot of the tables, so you have the database in a 'frozen' state in dump.out. Then you can use pg_restore to 'thaw out' those parts you need on your schedule. By using -a you will get the COPY you want.
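For several tables, the same idea might look like this in a shell loop (database, user, and table names are placeholders):
# dump once, so all tables come from the same consistent snapshot
pg_dump -d some_db -U some_user -Fc -f dump.out
# then extract the data of each table on whatever schedule suits you
for t in table_1 table_2 table_3; do
    pg_restore -a -t "$t" -f "${t}_data.sql" dump.out
done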
I have a PostgreSQL query, shown below, which runs fine. I am calling it from a shell script like this:
Result=$(psql -U username -d database -t -c
$'SELECT round(sum(i.total), 2) AS "ROUND(sum(i.total), 2)"
FROM invoice i
WHERE i.create_datetime = '2019-03-01 00:00:00-06'
AND i.is_review = '1' AND i.user_id != 60;')
Now I want to replace the value I have hard-coded as i.create_datetime = '2019-03-01 00:00:00-06' with a variable date value.
I have tried two ways.
way 1:
Result=$(psql -U username -d database -t -c
$'WITH var(reviewMonth) as (values(\'$reviewMonth\'))
SELECT round(sum(i.total),2) AS "ROUND(sum(i.total),2)"
FROM var,invoice i
WHERE i.create_datetime = var.reviewMonth::timestamp
AND i.is_review = \'1\' AND i.user_id != 60;')
and
way 2:
Result=$(psql -U username -d database -t -c
$'SELECT round(sum(i.total),2) AS "ROUND(sum(i.total),2)"
FROM invoice i
WHERE i.create_datetime = \'$reviewMonth\'
AND i.is_review = \'1\' AND i.user_id != 60;')
But both ways throw an error.
way 1 throws:
ERROR: operator does not exist: timestamp with time zone = text
way 2 throws:
ERROR: invalid input syntax for type timestamp with time zone: "$reviewMonth"
Please suggest what my approach should be.
You should try using psql variables. Here's an example:
# Put the query in a file, with the variable TSTAMP:
> echo "SELECT :'TSTAMP'::timestamp with time zone;" > query.sql
> export TSTAMP='2019-03-01 00:00:00-06'
> RESULT=$(psql -U postgres -t --variable=TSTAMP="$TSTAMP" -f query.sql )
> echo $RESULT
2019-03-01 06:00:00+00
Note how we format the string literal substitution in the query: :'TSTAMP'
You could also do the substitution yourself. Here's an example using a heredoc:
> export TSTAMP='2019-03-01 00:00:01-06'
> RESULT=$(psql -U postgres -t << EOF
SELECT '$TSTAMP'::timestamp with time zone;
EOF
)
> echo $RESULT
2019-03-01 06:00:01+00
In this case, we aren't using psql's variable substitution, so we have to quote the variable like '$TSTAMP' . Using a heredoc makes the quoting much simpler than using -c because you aren't trying to quote the whole command.
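Applied to the original invoice query, a hedged sketch of the first approach (assuming reviewMonth is already set in the shell):
Result=$(psql -U username -d database -t --variable=reviewMonth="$reviewMonth" <<'EOF'
SELECT round(sum(i.total), 2) AS "ROUND(sum(i.total), 2)"
FROM invoice i
WHERE i.create_datetime = :'reviewMonth'::timestamp with time zone
  AND i.is_review = '1'
  AND i.user_id != 60;
EOF
)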
EDIT: more examples, because it appears this wasn't clear enough. TSTAMP does not have to be hard-coded; it's just a bash variable that can be set like any other bash variable.
> TSTAMP=$(date -d 'now' +'%Y-%m-01 00:00:00')
> RESULT=$(psql -U postgres -t << EOF
SELECT '$TSTAMP'::timestamp with time zone;
EOF
)
> echo $RESULT
2019-06-01 00:00:00+00
However, if you're really just looking for the start of the month, there's no need for shell variables at all
> RESULT=$(psql -U postgres -t << EOF
SELECT date_trunc('month', now());
EOF
)
> echo $RESULT
2019-06-01 00:00:00+00
Hello, I am trying to migrate from MySQL to PostgreSQL.
I have an SQL file which queries some records, and I want to put the results into Redis with a mass insert.
In MySQL this worked with the sample command below:
sudo mysql -h $DB_HOST -u $DB_USERNAME -p$DB_PASSWORD $DB_DATABASE --skip-column-names --raw < test.sql | redis-cli --pipe
I rewrote the test.sql file in PostgreSQL syntax:
SELECT
'*3\r\n' ||
'$' || length(redis_cmd::text) || '\r\n' || redis_cmd::text || '\r\n' ||
'$' || length(redis_key::text) || '\r\n' || redis_key::text || '\r\n' ||
'$' || length(sval::text) || '\r\n' || sval::text || '\r\n'
FROM (
SELECT
'SET' as redis_cmd,
'ddi_lemmas:' || id::text AS redis_key,
lemma AS sval
FROM ddi_lemmas
) AS t
One line of its output looks like this:
"*3\r\n$3\r\nSET\r\n$11\r\nddi_lemmas:1\r\n$22\r\nabil+abil+neg+aor+pagr\r\n"
But I couldn't find any example of piping the output directly, like the MySQL command above. There are some examples that work in two stages instead of piping directly (first writing to a text file and then loading it into Redis):
sudo PGPASSWORD=$PASSWORD psql -U $USERNAME -h $HOSTNAME -d $DATABASE -f test.sql > data.txt
The above command works, but the output includes column names, which I don't want.
I am trying to send the output of the PostgreSQL query directly to Redis.
Could you help me please?
Solution: to insert RESP commands from an SQL file (with the help of @teppic):
echo -e "$(psql -U $USERNAME -h $HOSTNAME -d $DATABASE -AEt -f test.sql)" | redis-cli --pipe
From the psql man page, -t will "Turn off printing of column names and result row count footers, etc."
-A turns off alignment, and -q sets "quiet" mode.
It looks like you're outputting RESP commands, in which case you'll have to use the escaped string format to get the newline/carriage return pairs, e.g. E'*3\r\n' (note the E).
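If you go that route, a hedged end-to-end sketch (assumption: with E'...' producing real CR/LF bytes in the output, the echo -e step from the accepted command should no longer be needed):
psql -U $USERNAME -h $HOSTNAME -d $DATABASE -qAt <<'EOF' | redis-cli --pipe
SELECT E'*3\r\n' ||
       '$' || length(redis_cmd::text) || E'\r\n' || redis_cmd::text || E'\r\n' ||
       '$' || length(redis_key::text) || E'\r\n' || redis_key::text || E'\r\n' ||
       '$' || length(sval::text) || E'\r\n' || sval::text || E'\r\n'
FROM (
    SELECT 'SET' AS redis_cmd,
           'ddi_lemmas:' || id::text AS redis_key,
           lemma AS sval
    FROM ddi_lemmas
) AS t;
EOF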
It might be simpler to pipe SET commands to redis-cli:
psql -At -c "SELECT 'SET ddi_lemmas:' || id :: TEXT || ' ' || lemma FROM ddi_lemmas" | redis-cli