What data was entered into Postgres - postgresql

This is a web application that uses Postgres to store data pushed from various modules of the application.
From the Postgres side, how can I know what data was entered or modified? In other words, does Postgres offer any data-change logging?
The database has 4-5 schemas, each schema has 2-3 tables, and each table has 10-20 records.
EDIT (8 Sept 2021):
1: log_destination=csvlog - I tried different destinations to see whether I would get the required logs, but all log destinations capture the same entries.
2: reload - I restarted the server every time I made changes to postgresql.conf.
3: The logs are being generated in the /var/lib/postgresql/data/pg_log directory with the format postgresql-%Y-%m-%d_%H%M%S.log. The latest logs appear there, but as mentioned in the comment, I'm not getting all of the queries the web application executes against Postgres.
4: I installed Postgres 9.6 as a Docker container.
5: I'm making changes in /var/lib/postgresql/data/postgresql.conf, and they are reflected in the query SELECT name, setting FROM pg_settings WHERE name LIKE '%log%'; after a restart.
EDIT (7 Sept 2021):
name |setting |
---------------------------+------------------------------+
log_autovacuum_min_duration|-1 |
log_checkpoints |off |
log_connections |off |
log_destination |csvlog |
log_directory |pg_log |
log_disconnections |off |
log_duration |off |
log_error_verbosity |default |
log_executor_stats |off |
log_file_mode |0600 |
log_filename |postgresql-%Y-%m-%d_%H%M%S.log|
log_hostname |off |
log_line_prefix | |
log_lock_waits |off |
log_min_duration_statement |-1 |
log_min_error_statement |error |
log_min_messages |warning |
log_parser_stats |off |
log_planner_stats |off |
log_replication_commands |off |
log_rotation_age |1440 |
log_rotation_size |10240 |
log_statement |all |
log_statement_stats |off |
log_temp_files |-1 |
log_timezone |Etc/UTC |
log_truncate_on_rotation |off |
logging_collector |on |
syslog_facility |local0 |
syslog_ident |postgres |
syslog_sequence_numbers |on |
syslog_split_messages |on |
wal_log_hints |off |

Check the logging configuration in postgresql.conf. You want to set the "What to Log" parameters, in particular log_statement. For your use case the 'mod' setting would be appropriate. Then you can look at the Postgres log to see what was changed and when. You might also want to turn on log_connections and log_disconnections to see which user each connection is running as.
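As a sketch, the relevant postgresql.conf fragment might look like this (the log_line_prefix value is just one reasonable choice, not taken from the question):

```
# What to log: statements that modify data, plus session boundaries
log_statement = 'mod'        # logs INSERT, UPDATE, DELETE, TRUNCATE and DDL
log_connections = on         # log each successful connection
log_disconnections = on      # log session end (with duration)
log_line_prefix = '%m [%p] %u@%d '   # timestamp, PID, user@database
```

After editing, reload the configuration (pg_ctl reload or SELECT pg_reload_conf();) — all of these parameters take effect without a full restart.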

Related

PostgreSQL streaming replication - table not created on the slave

I am new to PostgreSQL replication.
I tried to set up streaming replication, and at the end I created a database on the master, which I could afterwards see on the slave.
However, when I created a table on the master, it is not replicated to the slave.
Checking the view pg_stat_replication on the master, it looks OK as far as I can understand:
select usename,application_name,client_addr,backend_start,state,sync_state from pg_stat_replication ;
usename | application_name | client_addr | backend_start | state | sync_state
------------+------------------+-------------+-------------------------------+-----------+------------
replicator | walreceiver | 10.97.7.150 | 2020-06-28 20:48:15.463922+03 | streaming | async
select client_addr, state, sent_lsn, write_lsn, flush_lsn, replay_lsn from pg_stat_replication;
client_addr | state | sent_lsn | write_lsn | flush_lsn | replay_lsn
-------------+-----------+------------+------------+------------+------------
10.97.7.150 | streaming | 0/2701AFB8 | 0/2701AFB8 | 0/2701AFB8 | 0/2701AFB8
On the slave side I see this:
SELECT pg_last_xact_replay_timestamp();
pg_last_xact_replay_timestamp
-------------------------------
2020-06-28 20:52:22.915897+03
select pg_is_in_recovery();
pg_is_in_recovery
-------------------
t
Still, when I create a table, I cannot find it on the slave side.
What should I check further?

Can't get new postgres config file settings to take effect

I have a somewhat large table in my database and I am inserting new records into it. As the number of records grew, I started having issues and now can't insert.
My postgresql log files suggest I increase WAL size:
[700] LOG: checkpoints are occurring too frequently (6 seconds apart)
[700] HINT: Consider increasing the configuration parameter "max_wal_size".
I got the path to my config file with =# show config_file; and made some modifications with vim:
max_wal_senders = 0
wal_level = minimal
max_wal_size = 4GB
When I check the file I see the changes I made.
I then tried reloading and restarting the database:
(I get the data directory with =# show data_directory ;)
I tried reload:
pg_ctl reload -D path
server signaled
I tried restart
pg_ctl restart -D path
waiting for server to shut down.... done
server stopped
waiting for server to start....
2020-01-17 13:08:19.063 EST [16913] LOG: listening on IPv4 address
2020-01-17 13:08:19.063 EST [16913] LOG: listening on IPv6 address
2020-01-17 13:08:19.079 EST [16913] LOG: listening on Unix socket
2020-01-17 13:08:19.117 EST [16914] LOG: database system was shut down at 2020-01-17 13:08:18 EST
2020-01-17 13:08:19.126 EST [16913] LOG: database system is ready to accept connections
done
server started
But when I connect to the database and check for my settings:
name | setting | unit | category | short_desc | extra_desc | context | vartype | source | min_val | max_val | enumvals | boot_val | reset_val | sourcefile | sourceline | pending_restart
-----------------+---------+------+-------------------------------+-------------------------------------------------------------------------+------------+------------+---------+---------+---------+------------+---------------------------+----------+-----------+------------+------------+-----------------
max_wal_senders | 10 | | Replication / Sending Servers | Sets the maximum number of simultaneously running WAL sender processes. | | postmaster | integer | default | 0 | 262143 | | 10 | 10 | | | f
max_wal_size | 1024 | MB | Write-Ahead Log / Checkpoints | Sets the WAL size that triggers a checkpoint. | | sighup | integer | default | 2 | 2147483647 | | 1024 | 1024 | | | f
wal_level | replica | | Write-Ahead Log / Settings | Set the level of information written to the WAL. | | postmaster | enum | default | | | {minimal,replica,logical} | replica | replica | | | f
(3 rows)
I still see the old default settings.
What am I missing here? How can I get these settings to take effect?
Configuration settings can come from several sources:
postgresql.conf
postgresql.auto.conf (set with ALTER SYSTEM)
command line arguments at server start
set with ALTER DATABASE or ALTER USER
Moreover, if a parameter occurs twice in a configuration file, the second entry wins.
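The second-entry-wins rule can be illustrated with a small sketch (a toy parser for demonstration only, not how the server actually reads its files):

```python
# Sketch: emulate PostgreSQL's "last entry wins" rule when a
# parameter appears more than once in postgresql.conf.
def effective_settings(conf_text):
    settings = {}
    for raw in conf_text.splitlines():
        line = raw.split('#', 1)[0].strip()   # strip comments and whitespace
        if not line or '=' not in line:
            continue
        name, value = line.split('=', 1)
        settings[name.strip()] = value.strip().strip("'")
    return settings

conf = """
max_wal_size = 1GB
# later in the same file:
max_wal_size = 4GB
"""
print(effective_settings(conf)['max_wal_size'])  # the later entry wins: 4GB
```

The same precedence logic explains why an edit near the top of a long postgresql.conf can be silently overridden by a duplicate entry further down.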
To figure out from where in this mess your setting originates, run
SELECT name, source, sourcefile, sourceline, pending_restart
FROM pg_settings
WHERE name IN ('wal_level', 'max_wal_size', 'max_wal_senders');
If the source is database or user, you can use the psql command \drds to figure out the details.
The result of your query shows that in your PostgreSQL installation these values are simply the built-in defaults — nothing from a configuration file is overriding them.
You'd have to override these defaults with any of the methods shown above.
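For example, assuming superuser access, ALTER SYSTEM writes an override into postgresql.auto.conf, which takes precedence over postgresql.conf (a sketch; the restart comments follow the contexts shown in your pg_settings output):

```sql
ALTER SYSTEM SET max_wal_size = '4GB';   -- sighup context: a reload suffices
ALTER SYSTEM SET wal_level = 'minimal';  -- postmaster context: needs a restart
ALTER SYSTEM SET max_wal_senders = 0;    -- postmaster context: needs a restart
SELECT pg_reload_conf();                 -- applies the reloadable change
-- then restart the server to apply the postmaster-context parameters
```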
Locations of the config files, ordered by priority:
/var/lib/postgresql/12/main/postgresql.auto.conf
/etc/postgresql/12/main/postgresql.conf

PostgreSQL - data replication stopped

Data replication has stopped from one of my three nodes. The replication slot on the errant node has disappeared. Does anyone have insight as to what happened or how to fix it?
DETAILS:
Nodes SS1, SS2, and SS3 have publications to which SSK subscribes. Replication from SS2 is now failing. Using PostgreSQL 10.1.
SSK psql log:
2019-02-07 10:21:13.953 CST [26274] LOG: logical replication apply worker for subscription "SS2" has started
2019-02-07 10:21:14.309 CST [26274] ERROR: could not start WAL streaming: ERROR: replication slot "SS2" does not exist
2019-02-07 10:21:14.311 CST [1641] LOG: worker process: logical replication worker for subscription 17237 (PID 26274) exited with exit code 1
SS2 replication slots table:
slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn
-----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------
(0 rows)
For comparison, SS1 replication slots table:
slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn
-----------+----------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------
SS1 | pgoutput | logical | 33280 | DBAdd | f | t | 2113 | | 56655301 | 3/114FB460 | 3/114FB498
(1 row)
Replication slots don't just disappear.
Somebody or something must have deleted it.
Perhaps the PostgreSQL database log of the primary server has valuable information.
Did you promote a standby recently? Since replication slots are not replicated, that would make them disappear.
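If the slot really was dropped, one recovery sketch (slot and subscription names taken from the question) is to recreate it on the publisher and let the subscriber reattach. Note that any changes made while the slot was missing are gone; REFRESH PUBLICATION only copies newly added tables, so a full re-sync may require dropping and recreating the subscription.

```sql
-- On SS2 (publisher): recreate the missing logical slot with the pgoutput plugin
SELECT pg_create_logical_replication_slot('SS2', 'pgoutput');

-- On SSK (subscriber): re-enable the subscription so the apply worker reconnects
ALTER SUBSCRIPTION "SS2" ENABLE;
```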

Users And Grant Execute Permission Gets Automatically Removed Only In Cloud Sql

CLOUD SQL VERSION & DB ENGINE: Currently our Cloud SQL MySQL version is 5.6.21 and the DB engine is InnoDB.
1. Create a user in MySQL:
CREATE USER 'USERNAME'@'HOSTNAME' IDENTIFIED BY 'PASSWORD';
But this user is not permanently stored in the mysql.user table. The user gets removed from the table if any issue comes up on the script side or the server restarts; sometimes the created user's password also ends up empty.
2. Likewise, granting EXECUTE permission on a procedure is not working properly:
GRANT EXECUTE ON PROCEDURE schemaname.spname TO 'USERNAME'@'%';
The EXECUTE permission works for some time, but the privilege soon disappears for the granted user.
Other solutions we tried:
1. FLUSH TABLES after creating the user
2. FLUSH PRIVILEGES after any GRANT or REVOKE
But these two workarounds also don't help in Google Cloud SQL; the issue remains the same.
We don't have this issue with a local MySQL installation; it is reproducible only on Google Cloud SQL.
We are stuck with this issue in our front-end app.
Does anyone know how to resolve this issue in Google Cloud SQL?
I'm not able to reproduce the claim that a created user doesn't survive a Cloud SQL instance restart.
Here is how I tested (I replaced some sensitive information with (edited)).
First I connect to an existing instance and create a user called xxx and checked that it shows up in the mysql.user table.
$ mysql -uroot -proot -h (edited)
mysql> SELECT host,user,password FROM mysql.user;
+-----------+------+-------------------------------------------+
| host | user | password |
+-----------+------+-------------------------------------------+
| localhost | root | |
| 127.0.0.1 | root | |
| ::1 | root | |
| localhost | | |
| % | root | *81F5E21E35407D884A6CD4A731AEBFB6AF209E1B |
+-----------+------+-------------------------------------------+
5 rows in set (0.07 sec)
mysql> CREATE USER xxx@'%' IDENTIFIED BY 'xxx';
Query OK, 0 rows affected (0.61 sec)
mysql> SELECT host,user,password FROM mysql.user;
+-----------+------+-------------------------------------------+
| host | user | password |
+-----------+------+-------------------------------------------+
| localhost | root | |
| 127.0.0.1 | root | |
| ::1 | root | |
| localhost | | |
| % | root | *81F5E21E35407D884A6CD4A731AEBFB6AF209E1B |
| % | xxx | *3D56A309CD04FA2EEF181462E59011F075C89548 |
+-----------+------+-------------------------------------------+
6 rows in set (0.06 sec)
mysql> Bye
Then I restarted the Cloud SQL instance.
$ gcloud sql instances restart (edited) --project (edited)
Restarting Cloud SQL instance...done.
Restarted [https://www.googleapis.com/sql/v1beta3/projects/(edited)/instances/(edited)].
$
Then I connected again and checked the mysql.user table.
$ mysql -uroot -proot -h (edited)
mysql> SELECT host,user,password FROM mysql.user;
+-----------+------+-------------------------------------------+
| host | user | password |
+-----------+------+-------------------------------------------+
| localhost | root | |
| 127.0.0.1 | root | |
| ::1 | root | |
| localhost | | |
| % | root | *81F5E21E35407D884A6CD4A731AEBFB6AF209E1B |
| % | xxx | *3D56A309CD04FA2EEF181462E59011F075C89548 |
+-----------+------+-------------------------------------------+
6 rows in set (0.07 sec)
mysql> Bye
$

Capturing changes occurring on tables?

I want to see what tables and fields in PostgreSQL that a module in OpenERP is changing/updating when it runs.
Any suggestions?
name | current_setting | source
---------------------------+----------------------------+--------------------
application_name | pgAdmin III - Query Tool | client
bytea_output | escape | session
client_encoding | UNICODE | session
client_min_messages | notice | session
DateStyle | ISO,MDY | session
default_text_search_config | pg_catalog.english | configuration file
lc_messages | English_United States.1252 | configuration file
lc_monetary | English_United States.1252 | configuration file
lc_numeric | English_United States.1252 | configuration file
lc_time | English_United States.1252 | configuration file
listen_addresses | * | configuration file
log_destination | csvlog | configuration file
log_line_prefix | %t | configuration file
log_timezone | US/Pacific | configuration file
logging_collector | on | configuration file
max_connections | 100 | configuration file
max_stack_depth | 2MB | environment variable
port | 5432 | configuration file
shared_buffers | 32MB | configuration file
TimeZone | US/Pacific | configuration file
If OpenERP is using a specific ROLE when it connects ("openerp" is used in the example), you can log the statements a couple of different ways:
1). ALTER ROLE openerp SET log_min_duration_statement TO 0;
2). ALTER ROLE openerp SET log_statement TO 'mod';
My preference is option #1, but you might want to try both.
To revert the settings to defaults:
1). ALTER ROLE openerp SET log_min_duration_statement TO DEFAULT;
2). ALTER ROLE openerp SET log_statement TO DEFAULT; -- or 'none'
To see what the current settings are (when not set via ROLE), paste the results of the following query:
SELECT name,
current_setting(name) AS current_setting,
source
FROM pg_settings
WHERE source <> ALL (ARRAY['default'::text, 'override'::text]);