Is it possible to restrict the number of connections in Postgres using the IP address of the machine? I'm not able to find which IP address is tagged to which user/role.
Implicitly, you can do it with pgbouncer - version 1.7 and later supports an hba file, so you can tie db+user+ip together and set a limit for db+user in the ini file. That way you limit connections to an IP or network.
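A rough sketch of that pgbouncer setup, with made-up database/user names and limits (auth_type = hba needs pgbouncer 1.7 or later):

    ; pgbouncer.ini (fragment)
    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    auth_type = hba
    auth_hba_file = /etc/pgbouncer/pg_hba.conf
    ; connection caps per database and per user
    max_db_connections = 20
    max_user_connections = 10

    # hba file used by pgbouncer: only this network may connect to mydb as app_user
    host    mydb    app_user    10.0.0.0/24    md5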
Explicitly, you can try using HAProxy or just iptables (the preferred way, I think).
Lastly, you can write some monkey job that checks the number of connections per pg_stat_activity.client_addr and runs select pg_terminate_backend(pid) from pg_stat_activity where client_addr = 'x.x.x.x' order by backend_start offset 10 (to keep the first 10 connections), but that way is an awful reinvented wheel for such a task, I'd say.
I have a Postgres database that uses physical replication, and I want to get the list of replicas.
I tried select * from pg_stat_replication; I get two rows for the replicas, but the client_hostname field is empty. The documentation says that this "indicates that the client is connected via a Unix socket on the server machine". So how can I get the replicas' connection strings/hostnames/IPs, or any other way to connect to a replica and send it a query?
You cannot get that by querying the primary server. All the primary server sees is the client IP address from which the standby server connects, and that could be on a different network or even (as in your case) using a different connection method from a client connection to the standby server.
In my case, it turned out to be a lack of permissions for the user. I granted the corresponding role to that user and everything was fixed.
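If it is the same problem, on PostgreSQL 10 and later granting the built-in monitoring role is usually what is missing (the user name here is only a placeholder):

    -- without pg_monitor / pg_read_all_stats, a non-superuser sees most
    -- pg_stat_replication columns as null
    GRANT pg_monitor TO monitoring_user;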
I have a number of identical local PostgreSQL databases (identical in structure, not data) on several laptops that have intermittent access to the internet. Records are added to each DB daily. So branches A, B, and C each have a local PostgreSQL database. I would like all records from A, B, and C to end up in each table of a cloud database. The data from A, B, and C is also separate - there is no overlap - A doesn't change B or C, etc. There are no duplicated unique keys.
NEED: I would like to collect all this data in a cloud-based database by adding daily incremental data to a single cloud database, so I can query the whole consolidated data set using SQL and pull reports as needed.
Please can anyone point me in the right direction?
Thanks
It sounds like you want logical replication from each laptop to the cloud server. The problem there might be that contact must be made by the replica to each of the masters, so when your laptops are online, they would need to have predictable IP addresses so that they can be reached.
Maybe the best way around this is with a reverse SSH tunnel. On the central replica, you would tell it to subscribe to a publication hosted on some non-standard port on localhost, with a different port reserved for each laptop - for example, 9997, 9998, and 9999.
Then when each laptop has connectivity, it could run something like:
ssh rajb@1centralserver.example.com -R9999:localhost:5432 -f -N -T
This establishes an ssh connection to the central server (requiring a password, or a private key, or however you have ssh set up) and instructs the central server that whenever someone connects to port 9999 on it, it should really send that connection back over the ssh tunnel and hook it up to port 5432 (the default postgres server port) on the laptop.
For initially setting things up and debugging, you might want to omit the -f -N -T. That way, in addition to setting up the tunnel, you also get an interactive ssh session you can use for monitoring things.
Once the central service notices the connection is available, it will start downloading changes since the last time it could connect. When there is no connection, you will get a lot of nuisance messages to the log file as it checks each server every ~5 seconds to see if it is available.
From each laptop's perspective, the replication connection is coming from within, so it will use whatever authentication is set up for 127.0.0.1 or ::1, not the authentication set up for the actual remote IP.
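Putting the pieces together, a rough sketch with placeholder publication, subscription, database and user names (each laptop also needs wal_level = logical in postgresql.conf):

    -- on each laptop (publisher)
    CREATE PUBLICATION branch_pub FOR ALL TABLES;

    -- on the central server (subscriber), one subscription per laptop,
    -- connecting through that laptop's reverse-tunnelled port
    CREATE SUBSCRIPTION branch_c_sub
        CONNECTION 'host=localhost port=9999 dbname=branchdb user=repuser password=secret'
        PUBLICATION branch_pub;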
I've installed PostgreSQL 9.4 on a Windows Server 2008 machine. I am writing an application that will access this server from our Windows 7 machines. I also installed pgAdmin III on one workstation where I am developing.
I am not able to connect from the workstations. I get a "Server doesn't listen" message. I've looked online for some solutions but none seemed to help me.
On the server where the service is running, I've tried to change the values through pgAdmin III in the files pg_hba.conf and postgresql.conf.
It looks like pg_hba.conf was set up to allow the loopback address and then a range of IP addresses on the same computer. When I change the "host" entry's IP address range from 127.0.0.1/32 to 192.168.2.1/128 (and keep the other values the same -> all, all, md5), the service starts and then stops immediately.
If I leave it as 127.0.0.1/32 then it starts fine, but I cannot connect from the workstation.
I left the listen_addresses on the postgresql.conf file as the default "*" which is trying to listen to all addresses.
I am trying to develop a client/server app before moving it to the cloud and this is step 0.
I did install an add-on for Visual Studio on my Windows 7 machine to help me build a connection string down the line, but I am only using the PostgreSQL tools at this time.
I did some search to see if this question was asked before in this client/server scenario and did not find one. If it has already been answered I'd appreciate some pointers directing me to the correct way to configure server access, if not, then an answer on how to do it would be great.
I can ping the server with no problems from the workstation(s).
The IP address/CIDR mask specification 192.168.2.1/128 is wrong. The value after the slash is the number of bits in the network mask, not an IP address range. If you want (most of) the range 192.168.2.1 - 192.168.2.128, the entry in pg_hba.conf should be 192.168.2.0/25, meaning: take the three highest bytes 192.168.2 (24 bits) plus the highest-order bit 0 of the last byte, and let the 7 remaining bits vary (values 0 to 127).
Note that this includes 192.168.2.0 and excludes 192.168.2.128, but that is just how bit masking of IP addresses works. You could add 192.168.2.128 with a separate entry in pg_hba.conf, but you cannot get 192.168.2.0 out.
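So, assuming md5 authentication for all users and databases as in your current entries, the pg_hba.conf lines would look roughly like:

    # covers 192.168.2.0 - 192.168.2.127, plus .128 as its own entry
    host    all    all    192.168.2.0/25      md5
    host    all    all    192.168.2.128/32    md5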
MongoDB has bind_ip, but it is not very practical: when a new server is added, you need to shut down the db, add the new server's IP to the bind_ip list, and restart the db. This is unacceptable because all the other servers would need to relaunch as well.
In almost all deployments, the server machines and the db machine are on the same LAN. So can MongoDB be configured to accept only the IP ranges [172.16.0.0 - 172.31.255.255], [192.168.0.0 - 192.168.255.255], and [10.0.0.0 - 10.255.255.255]?
These three ranges are private LAN IP ranges.
The bind_ip configuration value only determines which IP address(es) your MongoDB server is listening to. It does not control access from remote IPs -- that is the job of a firewall.
The address ranges you have listed as requiring remote access are all private IP address space which means these networks are not directly reachable/routable outside your LAN. Assuming you can route traffic between your private networks you should not need to bind to multiple IP addresses.
Given you are allowing access from a broad range of IP addresses, you should also read the Security section of the MongoDB manual (in particular, the Security Checklist and tutorial on enabling Access Control).
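For illustration, a mongod.conf fragment along those lines; the second bindIp value stands in for the server's own LAN interface address, and bindIp still only controls which interfaces mongod listens on, not which clients may connect:

    # mongod.conf (YAML)
    net:
      bindIp: 127.0.0.1,172.16.0.10
    security:
      authorization: enabled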
bindIp can accept multiple comma-separated values. See the "Security considerations" section of the MongoDB configuration documentation.
Other than that, you might want to consider configuring your firewall, maybe iptables if it runs on a Linux machine.
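A rough iptables sketch in that direction, allowing only the private ranges from the question to reach the default MongoDB port (adapt to your own ranges and firewall tooling):

    # allow the three RFC 1918 ranges to reach mongod on 27017, drop everything else
    iptables -A INPUT -p tcp --dport 27017 -s 10.0.0.0/8     -j ACCEPT
    iptables -A INPUT -p tcp --dport 27017 -s 172.16.0.0/12  -j ACCEPT
    iptables -A INPUT -p tcp --dport 27017 -s 192.168.0.0/16 -j ACCEPT
    iptables -A INPUT -p tcp --dport 27017 -j DROP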
Hope this helps
Since port numbers are limited to 65536, is there a limit on the number of connections?
How does each connection differ from the others?
If it's by port, then there can never be more than 65536 connections at the same time?
There are many different pieces in play. Since a connection is defined by the (src IP, src port, dest IP, dest port) tuple, you're allowed 65536^2 connections between two given peers at any given time: from port 1 to port 1, from 1 to 2, ..., from 1 to 65535, and so on. And that's just between two peers - you can of course have many connections open to many peers simultaneously.
BUT, most operating systems limit the number of open file descriptors / handles per process. This limit was historically low (20), but is now often higher (1024 on my system; ulimit -a will show per-process limits in bash(1)).
In addition to the setrlimit(3) limits on Unix systems, there are also system-wide limits; /proc/sys/fs/file-max on a Linux system will report the maximum number of open files allowed on the entire system. (This is 596118 on my system.) Other systems will have different limits.
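On Linux you can check both quickly (the exact numbers will differ from system to system):

    ulimit -n                    # per-process open file descriptor limit (bash builtin)
    cat /proc/sys/fs/file-max    # system-wide maximum number of open files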
And there may be a limit on the number of open connections enforced by a stateful firewall in the middle. Since each connection's state requires memory in the firewall's tables, most firewalls will enforce some arbitrary limit to avoid running short on memory.
A TCP connection is actually identified by peer IP address + peer port + local IP address + local port, so you could actually have way more than 64k, but I don't know if OSs do the work to allow more than 64k per local IP address. Windows doesn't.
One thing of interest is that ports can remain reserved for a short while after they are closed. (This is done to avoid accidental or intentional crosstalk between old and new connections.) By simply creating and closing connections in a tight loop, you can actually make your machine run out of ports. See http://www.perlmonks.org/?node_id=897591 for Perl code that will hang socket connection calls (on some machines) by using up all the sockets.
UDP also has ports, but UDP doesn't have connections. A UDP socket is therefore identified only by its local IP address + local port, so one can have a maximum of 64k UDP ports in use per local IP address.
Update: Added paragraph on UDP.