In the PostgreSQL shell I can type:
SHOW log_directory
and I can see where the log file is saved.
The question is - is there a SELECT statement which will give me the same information?
TIA!
Yes, these two are equivalent:
knayak=# show log_directory;
log_directory
---------------
log
(1 row)
knayak=# select setting FROM pg_settings where name = 'log_directory';
setting
---------
log
(1 row)
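If you prefer a plain function call to querying pg_settings, the built-in current_setting() function returns the same value:
select current_setting('log_directory');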
How can I get the PostgreSQL to_char date function to output day names in French? For example, the output of:
select to_char(current_date, 'Day') ;
should be a French day name:
Mardi
instead of an English one (e.g. Monday).
You need to set the date/time locale (LC_TIME) to French, and to query not Day but the localizable TMDay, using the TM prefix.
show LC_TIME;
SET LC_TIME = 'French';
select to_char(current_date, 'TMDay') ;
to_char
---------
Mardi
(1 row)
The following works on Ubuntu 16.04 Server with an English-language setup.
First, add system support for the French locale:
sudo locale-gen fr_FR.utf8
then restart the postgresql service:
sudo systemctl restart postgresql
then log in with psql and run:
SET LC_TIME = 'fr_FR.utf8';
select to_char(current_date, 'TMDay') ;
to_char
---------
Mardi
(1 row)
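Note that SET lc_time changes the setting for the rest of the session. As a minimal sketch (assuming the fr_FR.utf8 locale is installed as above), you can scope the change to a single transaction with SET LOCAL, so it is reverted automatically at commit or rollback:
BEGIN;
SET LOCAL lc_time = 'fr_FR.utf8';  -- only in effect until COMMIT/ROLLBACK
select to_char(current_date, 'TMDay');
COMMIT;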
I was trying to trace slow queries. I'm new to Pg 9.6.
I could not find the /pg_log/ folder in the new version. It was available in /data/pg_log/ in older versions (I was using 9.2).
If this is a duplicate question, please tag it as such.
Connect to your Postgres instance and run:
t=# show log_directory;
log_directory
---------------
pg_log
(1 row)
t=# show logging_collector ;
logging_collector
-------------------
on
(1 row)
https://www.postgresql.org/docs/9.6/static/runtime-config-logging.html
log_directory (string)
When logging_collector is enabled, this parameter determines the
directory in which log files will be created. It can be specified as
an absolute path, or relative to the cluster data directory. This
parameter can only be set in the postgresql.conf file or on the server
command line. The default is pg_log.
You may also want to check all non-default logging values with:
select name, setting from pg_settings where source <> 'default' and name like 'log%';
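Since log_directory may be relative to the cluster data directory, a quick sketch to resolve the full path (run as a superuser, since data_directory is restricted; assumes log_directory is relative, as in the default):
select current_setting('data_directory') || '/' || current_setting('log_directory') as log_path;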
I have postgresql-9.4 up and running, and I've recently enabled the pg_stat_statements module with the help of the official documentation.
But I'm getting the following error when I use it:
postgres=# SELECT * FROM pg_stat_statements;
ERROR: relation "pg_stat_statements" does not exist
LINE 1: SELECT * FROM pg_stat_statements;
postgres=# SELECT pg_stat_statements_reset();
ERROR: function pg_stat_statements_reset() does not exist
LINE 1: SELECT pg_stat_statements_reset();
I'm logged in to psql with the postgres user.
I've also checked the list of available extensions:
postgres=# SELECT * FROM pg_available_extensions WHERE name = 'pg_stat_statements';
name | default_version | installed_version | comment
--------------------+-----------------+-------------------+-----------------------------------------------------------
pg_stat_statements | 1.2 | | track execution statistics of all SQL statements executed
(1 row)
And here's the results of the extension versions query:
postgres=# SELECT * FROM pg_available_extension_versions WHERE name = 'pg_stat_statements';
name | version | installed | superuser | relocatable | schema | requires | comment
--------------------+---------+-----------+-----------+-------------+--------+----------+-----------------------------------------------------------
pg_stat_statements | 1.2 | f | t | t | | | track execution statistics of all SQL statements executed
(1 row)
Any help will be appreciated.
The extension isn't installed. Check with:
SELECT *
FROM pg_available_extensions
WHERE
name = 'pg_stat_statements' and
installed_version is not null;
If the query returns no rows, create the extension:
CREATE EXTENSION pg_stat_statements;
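You can confirm it worked by re-running the check above; installed_version should now be populated:
SELECT name, installed_version
FROM pg_available_extensions
WHERE name = 'pg_stat_statements';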
I faced this issue while configuring Percona Monitoring and Management (PMM). For some reason PMM connects to the database named postgres, so the pg_stat_statements extension has to be created in that database:
yourdb# \c postgres
postgres# CREATE EXTENSION pg_stat_statements SCHEMA public;
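To confirm the extension now exists in the postgres database, check the pg_extension catalog (or use \dx in psql):
SELECT extname FROM pg_extension WHERE extname = 'pg_stat_statements';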
Follow the steps below:
Create the extension:
CREATE EXTENSION pg_stat_statements;
Change the config:
alter system set shared_preload_libraries='pg_stat_statements';
Restart PostgreSQL:
$ systemctl restart postgresql
Verify whether the change was applied:
select * from pg_file_settings where name = 'shared_preload_libraries';
The applied attribute should be true.
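Once the server is restarted with the library preloaded, statistics start accumulating and can be queried. A minimal sketch (total_time is the column name up to PostgreSQL 12; it became total_exec_time in version 13):
-- top 5 statements by cumulative execution time
SELECT query, calls, total_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 5;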
I had the same issue when deploying an environment using Liquibase for the first time.
I understand that my reply may not be related to your problem, but this was the first Google result, so others may land here with the same Liquibase issue.
These are PostgreSQL metadata views that Liquibase picks up when you generate your first XML file.
In my case it was just useless autogenerated code, so I solved it by deleting these lines:
<changeSet author="martinlarizzate (generated)" id="1588181532394-7">
<createView fullDefinition="false" viewName="pg_stat_statements"> SELECT pg_stat_statements.userid,
pg_stat_statements.dbid,
pg_stat_statements.queryid,
pg_stat_statements.query,
pg_stat_statements.calls,
pg_stat_statements.total_time,
pg_stat_statements.min_time,
pg_stat_statements.max_time,
pg_stat_statements.mean_time,
pg_stat_statements.stddev_time,
pg_stat_statements.rows,
pg_stat_statements.shared_blks_hit,
pg_stat_statements.shared_blks_read,
pg_stat_statements.shared_blks_dirtied,
pg_stat_statements.shared_blks_written,
pg_stat_statements.local_blks_hit,
pg_stat_statements.local_blks_read,
pg_stat_statements.local_blks_dirtied,
pg_stat_statements.local_blks_written,
pg_stat_statements.temp_blks_read,
pg_stat_statements.temp_blks_written,
pg_stat_statements.blk_read_time,
pg_stat_statements.blk_write_time
FROM pg_stat_statements(true) pg_stat_statements(userid, dbid, queryid, query, calls, total_time, min_time, max_time, mean_time, stddev_time, rows, shared_blks_hit, shared_blks_read, shared_blks_dirtied, shared_blks_written, local_blks_hit, local_blks_read, local_blks_dirtied, local_blks_written, temp_blks_read, temp_blks_written, blk_read_time, blk_write_time);</createView>
</changeSet>
In pg_stat_activity I can see that a client is working its way through some query results using a cursor. But how can I see what the original query is?
pipeline=> select pid, query from pg_stat_activity where state = 'active' order by query_start;
pid | query
-------+--------------------------------------------------------------------------------------
6734 | FETCH FORWARD 1000 FROM "c_109886590_1"
26731 | select pid, query from pg_stat_activity where state = 'active' order by query_start;
(2 rows)
I see there is pg_cursors, but it is empty:
pipeline=> select * from pg_cursors;
name | statement | is_holdable | is_binary | is_scrollable | creation_time
------+-----------+-------------+-----------+---------------+---------------
(0 rows)
The server is on AWS RDS.
pipeline=> select version();
version
--------------------------------------------------------------------------------------------------------------
PostgreSQL 9.3.3 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.6.3 20120306 (Red Hat 4.6.3-2), 64-bit
(1 row)
You can't.
pg_cursors is backend-local. It doesn't show cursors that aren't part of the current connection.
PostgreSQL has no way to find out what query underlies a cursor from another session.
The only way I can think of to do this is using log analysis, with log_statement = all and a suitable log_line_prefix.
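For reference, the relevant postgresql.conf settings for that kind of log analysis might look like the sketch below (the placeholders are standard: %m timestamp, %p process ID, %u user, %d database; on RDS you would set these through the parameter group):
log_statement = 'all'                -- log every statement, including DECLARE CURSOR
log_line_prefix = '%m [%p] %u@%d '   -- timestamp, PID, user and database
With the PID in the prefix you can match the FETCH seen in pg_stat_activity back to the DECLARE CURSOR logged earlier by the same backend.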
When running command-line queries in MySQL you can optionally use '\G' as a statement terminator; instead of the result-set columns being listed horizontally across the screen, each column is listed vertically, with the corresponding data to the right. Is there a way to do the same or a similar thing with the DB2 command line utility?
Example regular MySQL result
mysql> select * from tagmap limit 2;
+----+---------+--------+
| id | blog_id | tag_id |
+----+---------+--------+
| 16 | 8 | 1 |
| 17 | 8 | 4 |
+----+---------+--------+
Example Alternate MySQL result:
mysql> select * from tagmap limit 2\G
*************************** 1. row ***************************
id: 16
blog_id: 8
tag_id: 1
*************************** 2. row ***************************
id: 17
blog_id: 8
tag_id: 4
2 rows in set (0.00 sec)
Obviously, this is much more useful when the columns are large strings, or when there are many columns in a result set, but this demonstrates the formatting better than I can probably explain it.
I don't think such an option is available with the DB2 command line client. See http://www.dbforums.com/showthread.php?t=708079 for some suggestions. For a more general set of information about the DB2 command line client you might check out the IBM DeveloperWorks article DB2's Command Line Processor and Scripting.
A little late, but I found this post when searching for an option to retrieve only the selected data.
db2 -x <query> returns just the result. More options can be found here: https://www.ibm.com/docs/en/db2/11.1?topic=clp-options
Example:
[db2inst1@a21c-db2 db2]$ db2 -n select postschemaver from files.product
POSTSCHEMAVER
--------------------------------
147.3
1 record(s) selected.
[db2inst1@a21c-db2 db2]$ db2 -x select postschemaver from files.product
147.3
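Building on that, -x is handy for capturing a single value in a shell script; a sketch using the table from the example above:
# capture a single value in a shell variable
VER=$(db2 -x "select postschemaver from files.product")
echo "schema version: $VER"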
The DB2 command line utility always displays data in tabular format, i.e. rows horizontally and columns vertically. It does not support any other output format the way the \G statement terminator does for MySQL. But yes, you can store column-organized data in DB2 tables when DB2_WORKLOAD=ANALYTICS is set.
db2 => connect to coldb
Database Connection Information
Database server = DB2/LINUXX8664 10.5.5
SQL authorization ID = BIMALJHA
Local database alias = COLDB
db2 => create table testtable (c1 int, c2 varchar(10)) organize by column
DB20000I The SQL command completed successfully.
db2 => insert into testtable values (2, 'bimal'),(3, 'kumar')
DB20000I The SQL command completed successfully.
db2 => select * from testtable
C1 C2
----------- ----------
2 bimal
3 kumar
2 record(s) selected.
db2 => terminate
DB20000I The TERMINATE command completed successfully.