Determine PostgreSQL mode

I have inherited a PostgreSQL 13 cluster which was set up using log shipping. I had to make a few changes and would like to confirm everything is working well.
Is there a command to make PostgreSQL report whether it is in standby mode or active mode?
Is there a command to make PostgreSQL report up to which WAL file it has applied changes?
I assume that the only way to test that WAL shipping is working is to manually modify the active DB server and watch for the change on the standby?
I know there are better ways to setup a cluster, but for now I just need to ensure the system remains operational as setup.

From the doc:
select pg_is_in_recovery();
select pg_walfile_name(pg_last_wal_replay_lsn());
Either that, or run select pg_current_wal_lsn(); on the primary, then select pg_wal_lsn_diff('<lsn taken from the primary>', pg_last_wal_replay_lsn()); on the standby, as suggested in a comment by @Laurenz Albe.
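For example, an end-to-end check might look like this (the LSN literal is illustrative, copied from the primary's output):

-- on the primary
select pg_current_wal_lsn();            -- suppose it returns '0/3000148'

-- on the standby: bytes of WAL the standby has not yet replayed
select pg_wal_lsn_diff('0/3000148', pg_last_wal_replay_lsn());

A result at or near zero means the standby has caught up to the point you measured on the primary.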

Why are there always at least 10 sessions for a PostgreSQL database? Why can't they be terminated?

Original aim: rename a database using ALTER DATABASE via psql.
Problem: the rename fails because other sessions are accessing the target database.
- All terminals/applications I am aware of have been closed.
- Querying pg_stat_activity shows that there are 10 processes (= sessions?) accessing the db.
- The username for each session is the same user I have been using for psql and for some local Phoenix and Django apps. The client_addr is also localhost for all of them.
- When I use pg_terminate_backend on any of the pids, another process gets spawned immediately.
- After restarting my PC, 10 processes are spawned again.
Concern: as I can't account for these 10 processes that I can't get rid of, I think I'm misunderstanding how Postgres works somewhere.
Question: why are 10 sessions/processes connected to one particular database of mine, and why can't I terminate them using pg_terminate_backend?
Note: in the Phoenix project I set up recently, I set the pool_size of the Repo config to 10, which makes me think it's related... but I'm pretty sure that project isn't running in any way.
Update - Solved
As a_horse_with_no_name suggested, by doing the following I was able to put a stop to the 10 mystery sessions:
(1) prevent login of the user responsible for the sessions (identifiable by querying `pg_stat_activity`), by doing `alter user .... with nologin`;
(2) running pg_terminate_backend on each of the sessions' pids.
After those steps I was able to rename the database.
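In SQL terms the fix looked roughly like this (the user and database names here are illustrative):

-- stop new connections from the pooled user
alter user app_user with nologin;

-- terminate every remaining session for that user on the target database
select pg_terminate_backend(pid)
from pg_stat_activity
where usename = 'app_user' and datname = 'mydb';

-- now the rename succeeds
alter database mydb rename to mydb_renamed;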
The remaining puzzle is how those sessions got into that state in the first place... from the contents of pg_stat_activity, the wait_event value for each was ClientRead.
From this post, it seems that the application may have been forcibly stopped halfway through a transaction or something, leaving postgres hanging.

Monitor Postgres 9 queries

There is a Postgres 9 server with a database AAA that is used by my application. I use pgAdmin 4 to manage it manually.
I would like to check what queries are executed by this application on database AAA in real time.
I researched the monitoring options in pgAdmin, in vain.
Is it possible to do that using just pgAdmin 4? Or is it necessary to use another tool (if yes, what is the name of this tool)?
When I point pgAdmin 4 at a 9.6 server, I see a dashboard by default which shows every session (done by querying pg_stat_activity). You can then drill down into a session to see the query. If a query lasts for less time than the monitoring interval, you might not see it if the sample is taken at the wrong time.
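You can also run the dashboard's underlying query yourself from any SQL window; this is roughly what it polls (AAA is the asker's database name):

-- current sessions on the database and their most recent statement
select pid, usename, state, query
from pg_stat_activity
where datname = 'AAA';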
If that isn't acceptable, then you should probably use a logging solution (like log_statement = 'all') or maybe the pg_stat_statements extension, rather than sample-based monitoring. pg_stat_statements doesn't integrate with the dashboard in pgAdmin 4, but you can select from the view in an SQL window just like you can run any other SQL. I don't believe pgAdmin 4 offers a built-in way to monitor the database server's log files, the way pgAdmin 3 did.
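For comparison, a minimal pg_stat_statements session (this assumes the extension is already listed in shared_preload_libraries; on 9.6 the timing column is total_time, renamed total_exec_time in PostgreSQL 13):

-- once per database
create extension if not exists pg_stat_statements;

-- top 10 statements by cumulative execution time
select query, calls, total_time
from pg_stat_statements
order by total_time desc
limit 10;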

PostgreSQL 9.2.1 normal user mode vs standalone backend mode

I have a remote machine that's running PostgreSQL 9.2.1. Suddenly, I couldn't start my PostgreSQL server (the pg_isready command reports that connections are being rejected). My question is: is it possible to start my database in standalone backend mode while it won't start in normal user mode?
And what is the difference between starting the PostgreSQL server in those two modes?
Thanks in advance.
Rather than using single user mode, look into the PostgreSQL server log file. That should tell you what the problem is.
In single-user mode, there will be just a single process accessing the database; none of the background processes are started. You'll be superuser, and the database process will last only for the duration of your session. This is something for emergency recovery, like when system tables are corrupted, you forgot your superuser password and so on.
In your case, single-user mode will probably only help if the database shut down because of an impending transaction ID wraparound. You can then issue the saving VACUUM (FREEZE) in single-user mode.
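For reference, a standalone backend is started roughly like this (the data directory path is illustrative; run it as the postgres OS user):

postgres --single -D /var/lib/postgresql/9.2/main postgres

backend> VACUUM FREEZE;

An end-of-file (Ctrl-D) ends the session and shuts the backend down.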
As soon as you have fixed your problem, upgrade to a supported release of PostgreSQL.

PostgreSQL turn off durability

I want to make a script that will run postgres in-memory without durability.
I read this page: http://www.postgresql.org/docs/9.1/static/non-durability.html
But I didn't understand how I can set these parameters in a script. Could you please help me?
Thanks for the help!
Most of those parameters, like fsync, can only be set in postgresql.conf. Changes are applied by re-starting PostgreSQL. They apply to the whole database cluster - all the databases in that PostgreSQL install. That's because the databases all share a single postmaster, write-ahead log, and set of shared system tables.
The only parameter listed there that you can set at the SQL level in a script is synchronous_commit. By setting synchronous_commit = 'off' you can say "it's OK to lose this transaction if the database crashes in the next few seconds, just make sure it still applies atomically".
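For example, it can be switched off per session or per transaction:

-- per session: commits made after this may be lost in a crash, but stay atomic
set synchronous_commit = off;

-- or per transaction only
begin;
set local synchronous_commit = off;
-- ... non-critical work ...
commit;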
I wrote more on this topic in a previous answer, Optimise PostgreSQL for fast testing.
If you want to set the other params with a script you can do so but you have to do it by opening and modifying postgresql.conf using the script, then re-starting PostgreSQL. Text-processing tools like sed make this kind of job easier.
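A sketch of that approach (the data directory location is an assumption):

# flip fsync off in postgresql.conf, then restart
sed -i "s/^#\?fsync.*/fsync = off/" "$PGDATA/postgresql.conf"
pg_ctl -D "$PGDATA" restart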
If you're running a Debian-based Linux distro, you can just do something like:
pg_createcluster -d /dev/shm/mypgcluster 8.4 ramcluster
to create a ram based cluster. Note that you'll have to do:
pg_dropcluster 8.4 ramcluster
and recreate it on reboot etc.

PostgreSQL - using log shipping to incrementally update a remote read-only slave

My company's website uses a PostgreSQL database. In our data center we have a master DB and a few read-only slave DBs, and we use Londiste for continuous replication between them.
I would like to setup another read-only slave DB for reporting purposes, and I'd like this slave to be in a remote location (outside the data center). This slave doesn't need to be 100% up-to-date. If it's up to 24 hours old, that's fine. Also, I'd like to minimize the load I'm putting on the master DB. Since our master DB is busy during the day and idle at night, I figure a good idea (if possible) is to get the reporting slave caught up once each night.
I'm thinking about using log shipping for this, as described on
http://www.postgresql.org/docs/8.4/static/continuous-archiving.html
My plan is:
Setup WAL archiving on the master DB
Produce a full DB snapshot and copy it to the remote location
Restore the DB and get it caught up
Go into steady state where:
DAYTIME -- the DB falls behind but people can query it
NIGHT -- I copy over the day's worth of WAL files and get the DB caught up
Note: the key here is that I only need to copy a full DB snapshot one time. Thereafter I should only have to copy a day's worth of WAL files in order to get the remote slave caught up again.
Since I haven't done log shipping before, I'd like some feedback/advice.
Will this work? Does PostgreSQL support this kind of repeated recovery?
Do you have other suggestions for how to set up a remote semi-fresh read-only slave?
thanks!
--S
Your plan should work.
As Charles says, warm standby is another possible solution. It's supported since 8.2 and has relatively low performance impact on the primary server.
Warm Standby is documented in the Manual: PostgreSQL 8.4 Warm Standby
The short procedure for configuring a standby server is as follows. For full details of each step, refer to previous sections as noted.
1. Set up primary and standby systems as near identically as possible, including two identical copies of PostgreSQL at the same release level.
2. Set up continuous archiving from the primary to a WAL archive located in a directory on the standby server. Ensure that archive_mode, archive_command and archive_timeout are set appropriately on the primary (see Section 24.3.1).
3. Make a base backup of the primary server (see Section 24.3.2), and load this data onto the standby.
4. Begin recovery on the standby server from the local WAL archive, using a recovery.conf that specifies a restore_command that waits as described previously (see Section 24.3.3).
To achieve only nightly syncs, your archive_command should exit with a non-zero exit status during daytime.
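A minimal sketch of such a time-gated archive_command wrapper (the hours, host, and paths are assumptions; PostgreSQL invokes it with %p and %f as the two arguments):

#!/bin/sh
# nightly_archive.sh "%p" "%f" -- only ship WAL between 01:00 and 05:00
hour=$(date +%H)
if [ "$hour" -ge 1 ] && [ "$hour" -lt 5 ]; then
    rsync -a "$1" standby:/var/lib/postgresql/wal_archive/"$2"
else
    exit 1   # non-zero exit: PostgreSQL keeps the segment and retries later
fi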
Additional information:
Postgres Wiki about Warm Standby
Blog Post Warm Standby Setup
9.0's built-in WAL streaming replication is designed to accomplish something that should meet your goals -- a warm or hot standby that can accept read-only queries. Have you considered using it, or are you stuck on 8.4 for now?
(Also, the upcoming 9.1 release is expected to include an updated/rewritten version of pg_basebackup, a tool for creating the initial backup point for a fresh slave.)
Update: PostgreSQL 9.1 will include the ability to pause and resume streaming replication with a simple function call on the slave.
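On the slave, those calls look like this in 9.1 (the functions were renamed to pg_wal_replay_* in PostgreSQL 10):

select pg_xlog_replay_pause();
select pg_is_xlog_replay_paused();
select pg_xlog_replay_resume();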