I have an ecosystem in which there is a C++ socket-based (TCP/IP) server. Multiple clients can connect to this server, and there are multiple instances of the server running on different physical IP addresses.
Each client has a configuration file containing its client_identification_id and the server address to connect to (there is a fallback mechanism by which the client can connect to another server if the primary is unreachable). The servers also have a list of registered clients (saved in a DB). If a client session is already in progress, no other server should accept a connection from that client while the session is active.
I am faced with a problem in the scenario below.
3 clients: C1, C2, C3
3 servers: IPS1, IPS2, IPS3
A scenario can arise in which C1 connects simultaneously to IPS1 and IPS2. Since the instances are running on different physical machines, how should I synchronize client requests across the servers?
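One common approach is a shared session registry that every server checks atomically before accepting a connection. The sketch below uses an in-memory SQLite database standing in for a shared store reachable by all server instances; the table and function names are illustrative, not part of the original design. A UNIQUE/PRIMARY KEY constraint on the client id makes the "claim" atomic: whichever server inserts first wins, and the others must reject the client.

```python
import sqlite3

# Shared session registry; in production this would be a central DB
# (or a coordination service) reachable by every server instance.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE active_sessions (
        client_id TEXT PRIMARY KEY,   -- at most one active session per client
        server_id TEXT NOT NULL
    )
""")

def try_claim_session(client_id, server_id):
    """Atomically claim this client's session for one server.

    Returns True if the claim succeeded; False if another server
    already holds an active session for this client.
    """
    try:
        with db:
            db.execute(
                "INSERT INTO active_sessions (client_id, server_id) VALUES (?, ?)",
                (client_id, server_id),
            )
        return True
    except sqlite3.IntegrityError:
        return False

def release_session(client_id):
    """Free the client's slot when its session ends."""
    with db:
        db.execute("DELETE FROM active_sessions WHERE client_id = ?", (client_id,))

print(try_claim_session("C1", "IPS1"))  # True: C1 claimed by IPS1
print(try_claim_session("C1", "IPS2"))  # False: IPS2 must reject C1
release_session("C1")
print(try_claim_session("C1", "IPS2"))  # True: session freed, claim succeeds
```

In practice the claim also needs a lease/heartbeat (or TTL) so that a crashed server does not leave the client locked out forever.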
Related
Let's say I am creating a chat (client-server) application intended for use on my local network. I'm thinking of having a server which communicates with clients, and multiple clients which communicate only with the server.
My initial thought was that the server is going to have a TCP socket listener, as well as each client. The problem arises when I have both the server-side application and the client-side application on the same machine listening on the same port. This is not allowed. The same problem arises with two client-side applications running on my computer, which cannot both listen on the same TCP port.
How can I solve this problem? What is the common strategy?
The problem only exists if the client apps bind to a specific port that the server app and/or other client apps also bind to, instead of using ephemeral ports (which is the usual thing for clients to do).
To bind to an ephemeral port, either don't bind at all (connect() does an implicit bind), or else bind to port 0 and let the OS pick an available port. In most cases, servers should bind to specific ports and clients should bind to ephemeral ports, when possible.
Your client apps do not need their own listening sockets to communicate with the server. They would need that only if they are performing things like peer-to-peer data transfers. And even then, they should be using ephemeral ports, or pre-configured ports (in cases of firewalls, NATs, etc.), and can use the server-based communications to share those ports with each other while negotiating the transfers.
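The ephemeral-port behavior is easy to see with plain sockets (Python here for illustration): two clients on the same machine connect to the same server port without ever binding explicitly, and the OS gives each one its own local port.

```python
import socket

# A throwaway "server" socket; port 0 lets the OS pick a free port
# (used here only so the demo does not collide with anything).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(2)
server_port = server.getsockname()[1]

# Client 1: no explicit bind -- connect() does an implicit bind
# to an ephemeral port chosen by the OS.
c1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c1.connect(("127.0.0.1", server_port))

# Client 2 on the same machine: gets its own ephemeral port,
# so there is no port conflict between the two clients.
c2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c2.connect(("127.0.0.1", server_port))

p1, p2 = c1.getsockname()[1], c2.getsockname()[1]
print(p1 != p2)  # True: the OS gave each client a distinct port

for s in (c1, c2, server):
    s.close()
```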
In Postgres, is there a one-to-one relationship between a client and a connection? In other words, is a client always exactly one connection, with no client able to open more than one?
For example, when Postgres says:
org.postgresql.util.PSQLException: FATAL: sorry, too many clients already.
is that equivalent to "too many connections already"?
Also, as far as I understand, Postgres uses one process for each client. So does this mean that each process serves only one connection?
Refer to https://www.postgresql.org/docs/9.6/static/connect-estab.html:
PostgreSQL is implemented using a simple "process per user" client/server model. In this model there is one client process connected to exactly one server process. As we do not know ahead of time how many connections will be made, we have to use a master process that spawns a new server process every time a connection is requested.
So yes, one server process serves one connection.
You can have as many connections from a single client (machine, application) as the server can manage. The server can support a given number of connections, whether or not these come from different clients (machine, application) is irrelevant to the server.
The connection is made to the postmaster process that is listening on the port that PG is configured to listen to (5432 by default). When a connection is established (after authentication), the server spawns a process which is used exclusively by a single client. That client can make multiple connections to the same server, for instance to connect to different databases, or the same database using different credentials, etc.
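The same point can be seen with plain TCP sockets (Python here for illustration, rather than an actual Postgres client): one client machine can hold several simultaneous connections to a single server port, and the server distinguishes them by their (client IP, client port) pairs, just as PostgreSQL spawns one backend process per accepted connection.

```python
import socket
import threading

# A minimal server accepting on one port, like the postmaster on 5432.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(2)
port = server.getsockname()[1]

peers = []

def accept_two():
    # Each accepted connection is a distinct (client IP, client port) pair,
    # analogous to PostgreSQL spawning one backend process per connection.
    for _ in range(2):
        conn, addr = server.accept()
        peers.append(addr)
        conn.close()

t = threading.Thread(target=accept_two)
t.start()

# The same client opens two simultaneous connections to one server port.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.connect(("127.0.0.1", port))
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
b.connect(("127.0.0.1", port))
t.join()
a.close(); b.close(); server.close()

print(len(peers), peers[0][1] != peers[1][1])  # 2 True
```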
I want to set up a proxy server to connect many different clients with servers. Here is what I'm looking to do:
The server software on a user's computer would connect to a proxy server running on a VPS. It would pass in some kind of key or authentication info to identify itself and then maintain a persistent TCP connection to the proxy server.
A client application running on a mobile device or another computer would connect to the proxy server and pass in some kind of key or authentication info. The proxy server would match the client's connection to the server's connection based on their authentication info, and then forward all data back and forth between the two connections.
The proxy server would need to be able to handle multiple clients and servers connecting to it at once and use the authentication info to pair them up. There could be multiple clients connecting to the same server at the same time too. The connection from the client and server would both be outbound so that they are not blocked by firewalls. I wrote the client and server software, so I can make them work with any specific proxy.
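A sketch of the pairing logic such a relay might use (Python, with all names illustrative and strings standing in for real sockets): connections register under a key, and the proxy matches each incoming client to the server connection holding the same key. Real relaying would then pump bytes between the two paired sockets.

```python
class Relay:
    """Pairs client connections with server connections by a shared key.

    A minimal sketch of the matching logic only -- authentication,
    socket I/O, and disconnect handling are omitted.
    """
    def __init__(self):
        self.servers = {}   # key -> server connection (one server per key)
        self.pairs = []     # (server_conn, client_conn) matched pairs

    def register_server(self, key, conn):
        self.servers[key] = conn

    def register_client(self, key, conn):
        server = self.servers.get(key)
        if server is None:
            return None                 # no matching server for this key
        self.pairs.append((server, conn))
        return server                   # proxy now forwards between the two

relay = Relay()
relay.register_server("secret-key-1", "server-A")         # strings stand in for sockets
print(relay.register_client("secret-key-1", "client-1"))  # server-A
print(relay.register_client("wrong-key", "client-2"))     # None
print(relay.register_client("secret-key-1", "client-3"))  # server-A: many clients, one server
```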
What is the name of this kind of proxy server? And can anyone recommend any?
Thanks!
I have simple node.js client and server programs running on one machine, and when I try to connect to the server with a second instance of the client program simultaneously, I get an EADDRINUSE (address already in use) error. Is it possible to have two or more TCP socket client connections (created with createConnection) to one server (created with createServer) on the same machine, or can only one client program be connected to the server at a time?
Yes, it's possible. In fact, it's very common: many applications open dozens, or hundreds, of connections to the same server. It sounds like your client program is binding to a port. Only the server should bind to a port; you should verify that.
The client will usually use a random ephemeral port (typically somewhere in the 1024-65535 range), as assigned by your OS; you don't need to worry about it. Since the client initiates the connection to the server, only the server's port must be unique to one program, which is why this error usually means you are trying to start the server twice. Please see http://www.tcpipguide.com/free/t_TCPIPClientEphemeralPortsandClientServerApplicatio.htm
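The error is easy to reproduce (Python here for illustration rather than node.js): binding two sockets to the same listening port fails, which is exactly what happens when a "client" mistakenly binds to a fixed port or the server is started twice, while a plain connect() never conflicts.

```python
import errno
import socket

# First socket takes a listening port (port 0: OS picks a free one for the demo).
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))
first.listen(1)
port = first.getsockname()[1]

# Binding a second socket to the same port fails with EADDRINUSE --
# the same error seen when a server is started twice.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
    reused = True
except OSError as e:
    reused = False
    print(e.errno == errno.EADDRINUSE)  # True

# A plain client connection to that port is fine: connect() implicitly
# binds to an ephemeral port, so there is no conflict.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
print(reused)  # False

client.close(); second.close(); first.close()
```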
I am trying to understand how a solution will behave if deployed in a server farm. We have a Java web application which will talk to an FTP server for file uploads and downloads.
It is also desirable to protect the FTP server with a firewall, such that it will allow incoming traffic only from the web server.
At the moment, since we do not have a server farm, all requests to the FTP server come from the same IP (the web server's IP), making it possible to add a simple rule to the firewall. However, if the application is moved to a server farm, I do not know which machine in the farm will make a request to the FTP server.
Just as the farm is hidden behind a facade for its clients, is it hidden behind a facade for the services it invokes, so that regardless of which machine in the farm makes the request to the FTP server, the FTP server always sees the same IP?
Are all server farms implemented the same way, or would this behavior depend on the type of server farm? I am thinking of using Amazon Elastic Cloud.
It depends very much on how your web cluster is configured. If your cluster is behind a NAT firewall, then yes, all outgoing connections will appear to come from the same address. Otherwise, the IP addresses will be different, but they'll almost certainly all be in a fairly small range of addresses, and you should be able to add that range to the firewall's exclude list, or even just list the IP address of each machine individually.
Usually you can enter CNAMEs or subnets when setting up firewall rules, which simplifies their maintenance. You can also send all traffic through a load balancer or proxy; that's essentially how any cloud/cluster/farm service works:
many client ips <-> load balancer <-> many servers