Get PostgreSQL replica list

I have a Postgres database that uses physical replication, and I want to get a list of its replicas.
I tried select * from pg_stat_replication and got two rows for the replicas, but the client_hostname field is empty. The documentation says this "indicates that the client is connected via a Unix socket on the server machine". So how can I get the replicas' connection strings, hostnames, or IPs - or any other way to connect to a replica and send it a query?

You cannot get that by querying the primary server. All the primary sees is the client IP address from which the standby connects, and that address may be on a different network or even (as in your case) use a different connection method than a regular client connection to the standby server would.
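For what it is worth, the information the primary does have is in pg_stat_replication itself; a sketch (seeing client_addr and client_port requires superuser or membership in pg_monitor / pg_read_all_stats, otherwise those columns come back NULL):

-- What the primary knows about each attached standby:
SELECT application_name, client_addr, client_hostname, client_port, state, sync_state
FROM pg_stat_replication;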

In my case, it turned out to be a lack of permissions for the user. I granted the corresponding role and everything worked.
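For reference, the role that exposes those columns is pg_monitor (which includes pg_read_all_stats); a minimal sketch, assuming the affected login role is called monitoring_user:

-- Assumption: the login role that queries pg_stat_replication is monitoring_user.
GRANT pg_monitor TO monitoring_user;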

Related

How to monitor/list outgoing connections to foreign servers in postgres?

I currently use the native foreign data wrapper extension (postgres_fdw) to connect to some other Postgres instances (sharding architecture). Now I would like to understand how those underlying outgoing connections are managed by the FDW. I read in the documentation that libpq is used to handle the connections. There is no connection pooling in place, but those connections are cached:
This connection is kept and re-used for subsequent queries in the same session.
I would like to list those connections so that I can monitor them. We can list incoming connections via SELECT * FROM pg_stat_activity; - can we do something similar for outgoing connections?
The connections to the foreign server are kept open until the database session ends. To monitor them, set log_connections and log_disconnections to on on the target PostgreSQL server.
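A sketch of doing that from SQL rather than by editing postgresql.conf by hand (both settings apply to new sessions after a reload; no restart is needed):

-- Run on the foreign (target) server that postgres_fdw connects to:
ALTER SYSTEM SET log_connections = 'on';
ALTER SYSTEM SET log_disconnections = 'on';
SELECT pg_reload_conf();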

Incremental data from Postgresql

I have a number of identical local PostgreSQL databases (identical in structure, not data) on several laptops that have intermittent access to the internet. Records are added to each database daily. So branches A, B, and C each have a local PostgreSQL database, and I would like all records from A, B, and C, for each table, in a cloud database. The data for A, B, and C is also separate - there is no overlap: A doesn't change B or C, and so on. There are no duplicated unique keys.
NEED: I would like to collect all this data in a single cloud-based database by adding the daily incremental data to it, so I can query the whole consolidated data set using SQL and pull reports as needed.
Please can anyone point me in the right direction?
Thanks
It sounds like you want logical replication from each laptop to the cloud server. The problem there is that the replica must initiate contact with each of the masters, so when your laptops are online they would need to have predictable IP addresses so that they can be reached.
Maybe the best way around this is with a reverse SSH tunnel. On the central replica, you would tell it to subscribe to a publication hosted on some non-standard port on localhost, with a different port reserved for each laptop - for example 9997, 9998, and 9999.
Then when each laptop has connectivity, it could run something like:
ssh rajb@1centralserver.example.com -R9999:localhost:5432 -f -N -T
This establishes an ssh connection to the central server (requiring a password, or private key, or however you have ssh set up) and tells the central server that whenever someone connects to port 9999 on the central server, it should really send that connection back over the ssh tunnel and hook it up to port 5432 (the default Postgres server port) on the laptop.
For initially setting things up and debugging, you might want to omit the -f -N -T. That way, in addition to setting up the tunnel, you also get an interactive ssh session you can use for monitoring things.
Once the central server notices the connection is available, it will start downloading the changes accumulated since the last time it could connect. When there is no connection, you will get a lot of nuisance messages in the log file as it checks each server every ~5 seconds to see if it is available.
From each laptop's perspective, the replication connection comes from within, so it will use whatever authentication is set up for 127.0.0.1 or ::1, not the authentication set up for the actual remote IP.
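A sketch of the logical-replication side of this setup (the publication and subscription names, database name, and credentials are made up for illustration; CREATE SUBSCRIPTION connects immediately, so run it while the tunnel for that laptop is up):

-- On each laptop (the publisher):
CREATE PUBLICATION branch_pub FOR ALL TABLES;

-- On the central cloud server (the subscriber), one subscription per laptop,
-- pointed at the reverse-tunnel port reserved for that laptop (9999 here):
CREATE SUBSCRIPTION branch_a_sub
  CONNECTION 'host=localhost port=9999 dbname=branchdb user=replicator password=secret'
  PUBLICATION branch_pub;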

PostgreSQL - Limit user to connect only to replica (and not to the master node)

Is it possible to limit a user (in this case a read only one) to be able to connect just to a read replica instance and not the main one?
(I know a firewall is an option; I am just wondering if there is something I can do with PostgreSQL instead.)
You should configure pg_hba.conf on the servers so that the user is rejected on the server where you don't want a connection, and allowed on the other one.
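A sketch of what the pg_hba.conf entries could look like (the role name readonly_user and the address range are assumptions; entries are matched top to bottom, so the reject rule must appear before any broader allow rule):

# On the primary: explicitly reject the read-only role
host    all    readonly_user    0.0.0.0/0    reject

# On the replica: allow the same role with password authentication
host    all    readonly_user    0.0.0.0/0    scram-sha-256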

Postgres terminology: client vs connection

In Postgres, is there a one-to-one relationship between a client and a connection? In other words, is a client always exactly one connection, and can no client open more than one connection?
For example, when Postgres says:
org.postgresql.util.PSQLException: FATAL: sorry, too many clients already.
is that equivalent to "too many connections already"?
Also, as far as I understand, Postgres uses one process for each client. So does this mean that each process is used for one connection only?
Refer to https://www.postgresql.org/docs/9.6/static/connect-estab.html:
PostgreSQL is implemented using a simple "process per user" client/server model. In this model there is one client process connected to exactly one server process. As we do not know ahead of time how many connections will be made, we have to use a master process that spawns a new server process every time a connection is requested.
So yes, one server process serves one connection.
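You can see this directly (a sketch): every connection is served by its own backend process, so each open connection reports a different PID.

-- Run in each open connection; each one returns the PID of its own server process.
SELECT pg_backend_pid();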
You can have as many connections from a single client (machine, application) as the server can manage. The server supports a given number of connections; whether or not these come from different clients (machines, applications) is irrelevant to the server.
The connection is made to the postmaster process that is listening on the port that PG is configured to listen to (5432 by default). When a connection is established (after authentication), the server spawns a process which is used exclusively by a single client. That client can make multiple connections to the same server, for instance to connect to different databases, or the same database using different credentials, etc.
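A sketch for checking the server-side limit and seeing where the current connections come from (both max_connections and pg_stat_activity are standard):

SHOW max_connections;

-- One row per server process; group by the originating client address/application:
SELECT client_addr, application_name, count(*) AS connections
FROM pg_stat_activity
GROUP BY client_addr, application_name;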

How to get the local port of a jdbc connection?

As far as I know, when one establishes multiple Connection objects via JDBC to one database, each connection occupies a separate (ephemeral) port on the machine where the Connection is established (they all connect to the single port on the server where the DBMS is running).
I tried to extract the port that corresponds to the Connection objects. Unfortunately I did not find any way to do so.
Background: I'm doing a performance analysis in which I set up multiple clients that issue queries against the database. I log the execution time of queries on the database server. In the resulting log I have - among other things - information about the connection that initiated each query, e.g. localhost.localdomain:44760. I hope it is possible to use this information to map each query to the client, or more precisely to the Connection object, that initiated it (which is my ultimate goal and serves analysis purposes).
Just run this select through the JDBC connection:
select inet_client_port()
More functions like that are in the manual:
http://www.postgresql.org/docs/current/static/functions-info.html
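A sketch of a query you could run once per Connection object to record the mapping yourself (inet_client_addr(), inet_client_port(), and pg_backend_pid() are all standard system information functions):

-- The client-side address and port of this connection as the server sees them,
-- plus the PID of the backend serving it; log entries such as
-- localhost.localdomain:44760 can then be matched against these values.
SELECT inet_client_addr(), inet_client_port(), pg_backend_pid();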