Multiple TCP/IP servers and sharing the same "well known port" ... somehow? - sockets

I apologize for the weird question wording... here's the design problem:
I am developing a server (on Linux using C++, FWIW) that provides a service to many instances of a client application running on consumer PCs.
I want the following:
1) All clients first identify themselves to a "gatekeeper" server application. Consider this a login procedure, with credentials like a user name and password being passed in. Call the gatekeeper program "gserver" (for "gatekeeper").
2) Once each client has been validated, it is then placed into a long term connection with one of several instances of a different server application running on the same physical server box bound to the same server address. Call any of these instances "wserver" (for "working" server.)
So, what the client sees is that a "gatekeeper" application gives it passworded access to one of several "working" servers running on the same box.
Here is the "real" challenge: we want to exclusively use a "well known" port number for the inbound server connections (like port 80 or 443, say.) Or, our own "well known" port.
We would prefer not to have to make the client talk to a second port on the server for the long term connection phase with wserver(n). The problem with this, of course, is that only one server process at a time can be bound to the same port and server address.
This implies that a connection made by the client with gserver must also fill the role of the long term connection. The only way I see to accomplish this is that gserver must, after login, act like a proxy and copy traffic between itself and the client to the particular wserver(n) that the client is bound to logically.
It would be ideal if a TCP/IP connection first made between client(n) and gserver could be somehow "transported" to another application on the same server, intact, and could then be sustained by one of the wserver(n) instances for the long term connection.
I know that web servers do something like this for spreading out server loads. "Load balancing". The main difference here is that the "balancing" is the allocation of a particular user to a particular wserver(n) instance. But I also have the impression that load balancing is a kind of proxying - which I am trying to avoid (since it complicates the architecture and adds overhead as well as a single point of failure.)
This is a conceptual and design question. Don't worry about source code examples, unless they are absolutely essential to get the ideas across. If we pin down an approach, I can code it up.
Thanks!

What you are looking for is file descriptor passing. See Unix Network Programming (UNP), section 15.7. One well-known heavy user of this facility is Postfix.

I developed such an application a long time ago. Since multiple servers can't listen on the same port, what you need is to have gserver listen on the well-known port. Once a connection is established, pass it to the other servers via a Unix socket. Once the connection has been passed to the other server, gserver is out of the picture; it can even die and the other server will still be serving the connection.
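In case a concrete sketch helps, here is a minimal illustration of the SCM_RIGHTS mechanism the two answers above refer to. It assumes gserver and each wserver already share a connected AF_UNIX socket (e.g. from socketpair() before forking, or a named Unix-domain socket); the helper names send_fd()/recv_fd() are mine and error handling is stripped:

    #include <sys/socket.h>
    #include <cstring>

    // gserver side: hand the accepted TCP socket 'client_fd' to a wserver over
    // the already-connected Unix-domain socket 'unix_fd'.
    int send_fd(int unix_fd, int client_fd) {
        msghdr msg{};
        char payload = 'F';                       // at least one byte of ordinary data must accompany the fd
        iovec iov{};
        iov.iov_base = &payload;
        iov.iov_len  = 1;
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;

        char ctrl[CMSG_SPACE(sizeof(int))] = {};
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof(ctrl);

        cmsghdr* cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type  = SCM_RIGHTS;            // tells the kernel to duplicate the descriptor
        cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
        std::memcpy(CMSG_DATA(cmsg), &client_fd, sizeof(int));

        return sendmsg(unix_fd, &msg, 0);
    }

    // wserver side: receive its own copy of the descriptor; gserver can then close its copy.
    int recv_fd(int unix_fd) {
        msghdr msg{};
        char payload;
        iovec iov{};
        iov.iov_base = &payload;
        iov.iov_len  = 1;
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;

        char ctrl[CMSG_SPACE(sizeof(int))] = {};
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof(ctrl);

        if (recvmsg(unix_fd, &msg, 0) <= 0)
            return -1;

        int fd = -1;
        cmsghdr* cmsg = CMSG_FIRSTHDR(&msg);
        if (cmsg && cmsg->cmsg_type == SCM_RIGHTS)
            std::memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
        return fd;                                // a fully usable TCP socket to the client
    }

From the client's point of view nothing changes: it still holds the one TCP connection it made to the well-known port, but the process servicing that connection is now the chosen wserver.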

I don't know if this applies to your design, but the usual solution (as implemented by the xinetd daemon) is to fork() and then exec() the process. For example, xinetd may serve services like rlogin, rsh, tftp, telnet, etc., which are actually served by different programs. This will not be useful to you if your wservers are processes already running in the system.
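For completeness, a compressed sketch of that inetd-style pattern (the "./wserver" path is a placeholder and error handling is omitted): the listener accepts, forks, points the child's stdin/stdout at the connection and execs the worker.

    #include <sys/socket.h>
    #include <unistd.h>

    void serve_one(int listen_fd) {
        int conn = accept(listen_fd, nullptr, nullptr);
        if (conn < 0) return;
        if (fork() == 0) {                          // child becomes the worker
            dup2(conn, STDIN_FILENO);               // worker simply reads stdin / writes stdout
            dup2(conn, STDOUT_FILENO);
            close(conn);
            close(listen_fd);
            execl("./wserver", "wserver", static_cast<char*>(nullptr));
            _exit(1);                               // only reached if exec fails
        }
        close(conn);                                // parent goes back to listening
    }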

Related

Transferring files over a network: Send from client or from server?

I am presently working on a client-server solution to transfer files to another machine via a socket network connection. I am fairly new to the whole client-server thing and therefore have the following - admittedly very basic - question:
For the file transfer, does it make any difference if I am sending the file from a client to a server or from a server to a client?
Any qualified insight into this will be much appreciated!
For the file transfer, does it make any difference if I am sending the file from a client to a server or from a server to a client?
Basically, no, it does not matter. Once the connection is made you are free to send data in both directions, although you have to consider that the receiving side won't see data sent to it unless it explicitly reads from the socket.
To be a bit more general, server and client are completely arbitrary for a home brewed implementation of data transfer. If you boil this down to the simplest concept then you are just opening a socket and writing data to it on one side, and on the other side you are reading from a different socket.
You might choose to implement a single client program capable of connecting to other clients (P2P) and sending files back and forth. In that case you could call the program that is currently sending the file the "server", and the program currently receiving the "client".
Alternatively, you could implement two programs, one for client and one for server. Your server will listen for connections and the client will decide when it wants to connect to the server.
Remember that there are network limitations for connecting. If the program that is listening for connections is behind a firewall then you have to be sure you are forwarding the correct ports. If you are connecting machines within a LAN then you probably have nothing to worry about.
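To make the symmetry concrete, here is a stripped-down sketch (blocking I/O, no error handling, partial writes ignored; the file paths and 4 KB buffer are arbitrary). Which function runs on the "client" and which on the "server" is entirely your choice:

    #include <fcntl.h>
    #include <unistd.h>

    void send_file(int sock, const char* path) {       // run on whichever side has the file
        int in = open(path, O_RDONLY);
        char buf[4096];
        ssize_t n;
        while ((n = read(in, buf, sizeof(buf))) > 0)
            write(sock, buf, n);                        // assumes full writes, for brevity
        close(in);
    }

    void recv_file(int sock, const char* path) {        // run on the other side
        int out = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        char buf[4096];
        ssize_t n;
        while ((n = read(sock, buf, sizeof(buf))) > 0)
            write(out, buf, n);                         // stops when the sender closes the socket
        close(out);
    }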

Multiple service connections vs internal routing in MMO

The server consists of several services with which a user interacts: profiles, game logic, physics.
I heard that it's a bad practice to have multiple client connections to the same server.
I'm not sure whether I will use UDP or TCP.
The services are realtime; they should reply as fast as possible, so I don't want to add any extra rerouting unless there are really important reasons for it. So, are there any reasons to reroute traffic through one external endpoint service to specific internal services in my case?
This seems to be multiple questions in one package. I will try to answer the ones I can identify as separate...
UDP vs TCP: You're saying "real-time", and this usually means UDP is the right choice. However, that means having to deal with lost packets and possible re-ordering of packets. On the other hand, using UDP leaves a couple of possible delay-decreasing tricks open.
Multiple connections from a single client to a single server: This consumes resources (end-points, as it were) on both the client (probably ignorable) and on the server (possibly a problem, possibly ignorable). The advantage of using separate connections for separate concerns (profiles, physics, ...) is that when you need to separate these onto separate servers (or server farms), you don't need to update the clients, they just need to connect to other end-points, using code that's already tested.
"Re-router" (or "load balancer") needed: Probably not going to be an issue initially. However, it will probably become an issue later. Depending on your overall design and server OS, using UDP may actually become an asset here. UDP packet arrives at the load balancer, dispatched to the right backend and that could then in theory send back a reply with the source IP of the load balancer.
An alternative would be to have a "session broker". The client makes an initial connection to a well-known endpoint and says "I am a client, tell me where my profile, physics, what-have-you servers are"; the broker considers the current load, possibly the location of the client, and other things that may make sense; and the client then connects to the relevant backends on its own. The downside of this is that it's harder (not impossible, but harder) to silently migrate an ongoing session to a new backend, whereas with a load balancer in the way this can be done essentially transparently.
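If it helps to picture the broker handshake, here is one way the exchange could look at the application level. The text format, service names and addresses are invented for illustration, and the actual socket calls are left out:

    #include <map>
    #include <sstream>
    #include <string>

    struct Endpoint { std::string host; int port; };

    // Broker side: answer "where are my servers?" with one "service host port" line
    // per backend (here a static table; in reality chosen based on load, location, ...).
    std::string broker_reply(const std::map<std::string, Endpoint>& table) {
        std::ostringstream out;
        for (const auto& [service, ep] : table)
            out << service << ' ' << ep.host << ' ' << ep.port << '\n';
        return out.str();     // e.g. "physics 10.0.0.6 7002\nprofile 10.0.0.5 7001\n"
    }

    // Client side: parse the reply, then open one connection per service on its own.
    std::map<std::string, Endpoint> parse_reply(const std::string& reply) {
        std::map<std::string, Endpoint> result;
        std::istringstream in(reply);
        std::string service, host;
        int port;
        while (in >> service >> host >> port)
            result[service] = Endpoint{host, port};
        return result;
    }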

How do I design a peer-to-peer app that avoids using listening sockets?

I've noticed that if you want to write an application that utilizes listening sockets, you need to create port forwarding rules on your router. If I want to connect two computers without either one of the computers messing about with router settings, is there a way that I can get the two clients to connect to each other without either of them using listening sockets? There would need to be another server somewhere else telling them to connect, but is it possible?
Some clarifications, and an answer:
Routers don't care about, or handle, ports; that is the role of a firewall, which does the port forwarding. The combined router/firewall device most of us have at home adds to the common misunderstanding.
Can you connect two computers without a ServerSocket? No. You can use UDP (a stateless, connectionless communication protocol), but the role of a ServerSocket is to "listen" for incoming connection requests and generate a Socket from those requests, which creates a communications channel between two endpoints. A Socket has both an InputStream and an OutputStream, so it can both read and write at either end. At that point (once the connection is made), the distinction between client/server is arbitrary, since a Socket is a two-way connection object, which allows both sides to send/receive.
What about proxying? Doesn't that allow connections between two computers without a ServerSocket? Well, no, because the server that's doing the proxying still has to be using a ServerSocket. Depending on what application you're trying to implement, this might be the way to go, or it might just add overhead. Even if there were "another server somewhere else telling them to connect", somebody has to listen for a connection request, which is the job of the ServerSocket.
If connections are happening over already open ports (most publicly accessible servers have ports <1024 not blocked by firewalls, but exceptions exist), then you shouldn't need to change firewall settings to get the connection to work.
So, to reiterate, the ONLY role of a ServerSocket (as far as your question is concerned) is to listen for incoming connection requests, and from those requests, create a Socket, which is a two-way communications channel between the two end points.
To answer the question, "How do I design a peer-to-peer app that avoids using listening sockets?", you don't. In the case of something like Vuze, the software acts as both client and server simultaneously, hence the term "peer", vs. "client" or "server" alone. In Vuze every client is a server, and every server (except for the tracker) is a client.
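For reference, here is the same listener/connector split in plain BSD-socket terms (C/C++ rather than Java). Port 5000 is just an example value and error handling is omitted; the point is that both accept() and connect() yield the same kind of two-way channel:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    // "ServerSocket" side: bind, listen, accept.
    int listen_side() {
        int ls = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(5000);
        bind(ls, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        listen(ls, 16);
        return accept(ls, nullptr, nullptr);      // read() and write() both work on this fd
    }

    // "Socket" side: connect out to the listener.
    int connect_side(const char* host_ip) {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5000);
        inet_pton(AF_INET, host_ip, &addr.sin_addr);
        connect(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        return s;                                 // the same kind of two-way channel
    }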
If you need a TCP connection between the 2 computers and both of them are behind routers (and you don't want to set up port forwarding), I think the only other possibility you have is a third server somewhere that isn't behind a firewall, running a ServerSocket, accepting connections from your 2 other computers and proxying communications between the 2. You can't establish a TCP connection between the 2 without one listening on a socket and the other connecting to it.
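A bare-bones sketch of such a relay, purely to illustrate the idea (a single pair of already-accepted sockets, select()-based, no error handling):

    #include <sys/select.h>
    #include <unistd.h>
    #include <algorithm>

    void relay(int a, int b) {                     // a and b: sockets accepted from the two peers
        char buf[4096];
        for (;;) {
            fd_set rd;
            FD_ZERO(&rd);
            FD_SET(a, &rd);
            FD_SET(b, &rd);
            if (select(std::max(a, b) + 1, &rd, nullptr, nullptr, nullptr) <= 0)
                return;
            if (FD_ISSET(a, &rd)) {                // bytes from a are forwarded to b
                ssize_t n = read(a, buf, sizeof(buf));
                if (n <= 0) return;
                write(b, buf, n);
            }
            if (FD_ISSET(b, &rd)) {                // and vice versa
                ssize_t n = read(b, buf, sizeof(buf));
                if (n <= 0) return;
                write(a, buf, n);
            }
        }
    }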
Q: If I want to connect two computers without either one of the computers messing about with router settings, is there a way that I can get the two clients to connect to each other
Yes: have the server listen on an open port :)

Server to server communication over NAT/router

I have two servers that need to be able to send requests to each other, and I need them to be able to communicate over a NAT or router. One server has a registered domain, and it is always waiting for connections. The other server sends the first request (the login request) to the first server when it starts. What is the best way to allow the two servers to continue to communicate?
Huh? This shouldn't be a problem, as long as the first server, the one with the registered name, is always the one being contacted by the other.
As long as the second machine has general Internet connectivity, all it needs to do is e.g. open a TCP/IP connection to firstserver.example.com or whatever, and they should be able to continue to communicate over that connection indefinitely.
This is no different from general surfing using NAT.
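A minimal sketch of that pattern: the server behind NAT dials out and then keeps that one TCP connection open for requests in both directions. The hostname firstserver.example.com comes from the answer above; the port and the keepalive option are assumptions of mine:

    #include <netdb.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int dial_out(const char* host, const char* port) {
        addrinfo hints{}, *res = nullptr;
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, port, &hints, &res) != 0)
            return -1;

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
            close(fd);
            freeaddrinfo(res);
            return -1;
        }
        freeaddrinfo(res);

        // TCP keepalive makes it less likely that an idle NAT mapping quietly expires.
        int on = 1;
        setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
        return fd;    // both servers can now exchange requests over this one connection
    }

    // e.g. int fd = dial_out("firstserver.example.com", "443");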

Detecting Port Utilized by Webbrowser

When the webbrowser control issues an HTTP request to a URL, it is assigned a local port, which is used for the length of that connection.
Is there a way to find out which port is being utilized for each connection the webbrowser control establishes/issues?
Every request is potentially using a different port. Since most requests are resolved in a couple of seconds and then closed, having the port information on the client isn't going to be very helpful.
If you're interested from a historical perspective, you can add the port number to the logs that many web servers generate.
To view this information live, you can use a tool such as TCPView.
Now for the real question. What are you trying to do? There may be an easier way.
You can run in the background:
netstat -bn
and parse the output to get information about your application (ports, IPs, etc.).
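A rough sketch of that "run netstat and parse it" idea (Windows, since the webbrowser control implies it; note that the -b switch needs an elevated prompt, and the filtering here is deliberately naive and just for illustration):

    #include <cstdio>
    #include <string>
    #include <vector>

    std::vector<std::string> netstat_lines(const char* cmd = "netstat -bn") {
        std::vector<std::string> lines;
        FILE* pipe = _popen(cmd, "r");     // _popen/_pclose are the MSVC counterparts of popen/pclose
        if (!pipe) return lines;
        char buf[512];
        while (fgets(buf, sizeof(buf), pipe))
            lines.emplace_back(buf);
        _pclose(pipe);
        return lines;
    }

    // Example: keep only the lines that mention a given substring
    // (a remote address, a port such as ":443", or an executable name).
    std::vector<std::string> filter(const std::vector<std::string>& lines, const std::string& needle) {
        std::vector<std::string> hits;
        for (const auto& line : lines)
            if (line.find(needle) != std::string::npos)
                hits.push_back(line);
        return hits;
    }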