Custom Service Discovery - sockets

I am trying to write a service discovery protocol for my communication framework. I am planning to do the following steps:
Server Side
Create a multicast socket
Join a group
Broadcast (send a multicast message as a heartbeat over UDP multicast). The broadcast data contains the server IP and a port which will be used to process client registration requests
Open a thread to accept TCP connections from clients on that port and handle registrations
Client Side
Create a multicast socket
Join the same multicast group as the server to receive messages
Check for a relevant broadcast from the service
Get the data from the broadcast packet (the data is the server IP and port)
Make a TCP call to the server (the IP and port received) in the form of a registration request.
Regarding security vulnerabilities, I can use certificates in the client registration step to protect it, but my main concern is the service broadcast.
If I broadcast the server IP and port in a heartbeat, that is very exposed to attacks like port flooding (I can add some security measures, but heavyweight encryption and decryption would make service discovery lag). Is there any alternative design?
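For reference, here is a rough sketch of the announce/listen flow I have in mind; the multicast group, ports and message format below are only placeholders:

```python
import socket
import struct
import time

# Placeholders: multicast group, discovery port, and the TCP port used for registrations
MCAST_GRP = "224.1.1.1"
MCAST_PORT = 5007
REGISTRATION_PORT = 6000

def announce_loop():
    """Server side: periodically multicast a heartbeat carrying our IP and registration port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    my_ip = socket.gethostbyname(socket.gethostname())  # simplification; may need the real NIC address
    while True:
        sock.sendto(f"{my_ip}:{REGISTRATION_PORT}".encode(), (MCAST_GRP, MCAST_PORT))
        time.sleep(5)  # heartbeat interval

def listen_for_announcement():
    """Client side: join the multicast group, wait for a heartbeat, extract the IP and port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, _ = sock.recvfrom(1024)
    server_ip, server_port = data.decode().split(":")
    return server_ip, int(server_port)  # next step: open a TCP connection here and register
```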

An alternative design would be to send a broadcast from the client to discover the server, just like DHCP and similar services work: the client broadcasts a "discovery" message, and every server may respond with a service "offer". More info here:
https://en.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol
Regarding the "discovery lag" you mention. Having client send the first "discovery" packet should reduce the lag to a minimum, so that is another plus to "reverse" your model.

Related

Peer to Peer Networking - with shared public IP and DHCP

I am trying to set up peer-to-peer networking and am trying to understand how this works.
Normally, in a client-to-server connection, I connect to the server IP and port. Behind the scenes, a client socket is created and bound to a local port at the local IP, and the packet is sent to the router. The router then NATs the local IP and local port to the client's public IP and a different public client port, with the server IP and port as the destination.
When the server responds, the router reverses the translation from the public client IP and public client port back to the local IP and local port, and the packet arrives at the computer.
In peer-to-peer networking, I may have the peer's public IP, but it is shared by many machines and the router hasn't allowed a connection yet, so there isn't an open port I can send the data to.
One option is that both peers contact a server, which opens a port on each router; then the peers send packets to each other's client port.
However, usually the router will only accept packets from the same IP the request was made to, so the two peers cannot reuse the server's connection.
How do the two peers talk to each other in this scenario?
Peer-to-peer networking works exactly the same way as client/server networking; it's just that one of the peers becomes the server and the other the client.
Normally, in a peer-to-peer app like bittorrent, all peers are also servers, but of course for any individual connection one machine must take the role of the client. However, a single peer may have multiple connections, so for any single peer some of the connections to it will be server sockets and some will be client sockets.
How this works with NAT is exactly the same as in a client/server architecture. You must configure your router to NAT back to your peer application in order for others to connect to it. If not, then your peer can only connect to other peers, but other peers cannot connect to you. For example, if your bittorrent client is generally acting slow, not managing to get a lot of connections, and not managing to finish downloading some torrents, this often signifies that you have not configured your router's port forwarding back to your PC for your bittorrent client.
For the use-case of non-expert users (consumers) there are several ways to get around NAT automatically without requiring your users to configure their routers. The most widely used method is UPnP (Universal Plug and Play). However, a lot of more expert users who can configure their own routers often disable UPnP because it is a fairly well-known DDoS target. So if you do decide to use UPnP, you should make it optional so that more advanced users can disable it if they don't want to use it.
For cases where you need a guaranteed connection regardless of router configuration, your app cannot be 100% peer-to-peer. You'd need a relay server that acts as a server to both peers and forwards packets from the sending peer to the receiving peer. Of course, the disadvantage of this is that you now have a fixed cost of maintaining a server to support your app, just like traditional client/server systems; in this case you're using peer-to-peer to reduce server costs, not to eliminate the server.
One example of this "hybrid" approach is cryptocurrencies like Bitcoin and Ethereum. They need a core group of servers to exist in order to work. However, for these protocols the servers run the same software as the clients - they're all just nodes. The only difference is that you don't shut down the servers, whereas most people quit their bitcoin wallet once they're done using it (unless they're mining). Another, similar example is the TOR network: there is a set of core TOR nodes that act as the "server" part of the network, ensuring that the network always exists.
You said it yourself: "peers send packets to each other's client port". Therefore, the router will "accept packets from the same IP the request was made to".
Say, Alice is behind router A and Bob is behind router B.
Having learned their public endpoints from a server, Alice will send UDP packets to Bob's public IP, and Bob will send UDP packets to Alice's.
Having seen Alice talk to Bob's IP, router A will accept UDP packets from Bob.
Having seen Bob talk to Alice, router B will accept UDP packets from her as well.
That is, some initial packets might be rejected as coming out of the blue, but after both parties have initiated communication on their side, the routers will have no reason to block what follows.
In terms of "Symmetric NAT Traversal using STUN" (2003), by sending a packet to Bob, Alice is creating a door for Bob in A. On the other side, by sending a packet to Alice, Bob is creating a door for Alice in B.
The trick in UDP hole punching seems to be for the routers to reuse the same NAT tunnel for different IPs - so that the port discovered by a server is the same as the port reused for direct communication.
We can talk with different IPs from a normal UDP socket (by skipping connect and using sendto), so it's kind of logical that a tunneled socket would be able to do the same.
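A bare-bones sketch of the punching step, assuming both sides have already learned each other's public endpoint from the rendezvous server (the address and ports below are placeholders):

```python
import socket
import threading
import time

LOCAL_PORT = 40000                     # placeholder: the local port used when talking to the server
PEER_PUBLIC = ("203.0.113.7", 41234)   # placeholder: the peer's public endpoint, learned from the server

# Reuse the same local port (or, better, the same socket) that contacted the server,
# so the NAT mapping discovered by the server is the one being punched.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", LOCAL_PORT))

def receiver():
    while True:
        data, addr = sock.recvfrom(1024)
        print("got", data, "from", addr)

threading.Thread(target=receiver, daemon=True).start()

# Keep sending: the peer's first packets may still be dropped as coming out of the blue,
# but once this router has seen outbound traffic to the peer, replies are let through.
for _ in range(10):
    sock.sendto(b"punch", PEER_PUBLIC)
    time.sleep(1)
```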

ZeroMQ broadcast to specific PULL client across firewall

I'm building a message broker which communicates with clients over ZeroMQ PUSH/PULL sockets and has the ability to exclude clients from messages they're not subscribed to from the server side (unlike ZeroMQ pub/sub which excludes messages on the client side).
Currently, I implement it in the following way:
Server: Binds ZeroMQ PULL socket on a fixed port
Client: Binds a ZeroMQ PULL socket on a random or fixed port
Client: Connects to the server's PULL socket and sends a handshake message containing the new client's address and port.
Server: Receives the handshake from the client and connects a PUSH socket to the client's PULL server. Sends a handshake response to the client's socket.
Client: Receives the handshake. Connected!
Now the client and server can communicate bidirectionally and the server can send messages to only a certain subset of clients. It works great!
However, this model doesn't work if the clients binding PULL sockets are unable to open a port in their firewall so the server can connect to them. How can I resolve this with minimal re-architecting (as the current model works very well when the firewall can be configured correctly)?
I've considered the following:
Router/dealer pattern? I'm fairly ignorant on this, and the documentation I found was sparse.
Some sort of transport bridging? The linked example provides an example for PUB/SUB.
I was hoping to get some advice from someone who knows more about ZeroMQ than me.
tl;dr: I implemented a message broker that communicates with clients via bidirectional push/pull sockets. Each client binds a PULL socket and the server keeps a map of PUSH sockets so that it can address specific subscribers. How do I deal with a firewall blocking the client ports?
You can use ROUTER/DEALER to do this, like you say. By default the ROUTER socket tracks every connection it has. The way it does this is by sticking the connection identity information in front of each message it receives. This makes things like pub/sub fairly trivial, as all you need to do is handle a few messages server side that the DEALER sockets send it. In the past I have done something like:
1.) The server side is a ROUTER socket. The ROUTER handles two messages from DEALER sockets: SUB and UNSUB. This, alongside the identity info sent as the first frame, allows the router to know which messages a client is interested in.
2.) The server checks the mapping to see which clients should be sent a particular type of data and then forwards the message to the correct client by prepending the identity to the start of the message.
This is nice in that it allows a single port to be exposed on the server. Client side, we do not need to expose any ports; we simply connect to the server's ROUTER socket.
See https://zguide.zeromq.org/docs/chapter3/ for more info.
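A minimal pyzmq sketch of that layout; the SUB/UNSUB/PUB framing, the port and the topic name are placeholders of my own, not part of ZeroMQ itself:

```python
import zmq

def broker(bind_addr="tcp://*:5555"):
    ctx = zmq.Context.instance()
    router = ctx.socket(zmq.ROUTER)
    router.bind(bind_addr)                  # the only port that needs to be reachable
    subscriptions = {}                      # topic -> set of client identities

    while True:
        frames = router.recv_multipart()    # DEALER traffic arrives as [identity, ...payload]
        identity, command = frames[0], frames[1]
        if command == b"SUB":
            subscriptions.setdefault(frames[2], set()).add(identity)
        elif command == b"UNSUB":
            subscriptions.get(frames[2], set()).discard(identity)
        elif command == b"PUB":             # whatever produces data for a topic
            topic, payload = frames[2], frames[3]
            for client_id in subscriptions.get(topic, set()):
                # prepend the identity so the ROUTER delivers to that specific client
                router.send_multipart([client_id, topic, payload])

def client(connect_addr="tcp://localhost:5555"):
    ctx = zmq.Context.instance()
    dealer = ctx.socket(zmq.DEALER)
    dealer.connect(connect_addr)            # outbound connection only; no port opened client side
    dealer.send_multipart([b"SUB", b"weather"])
    topic, payload = dealer.recv_multipart()
    print(topic, payload)
```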

udp connection support without recvfrom() system call in server

I have to implement a server-client program using a UDP connection.
The UDP server writes data packets to port XXXX continuously, and the client, at any moment, just reads the data.
Note: there should be no handshaking, i.e. the server will not know the client address at any point in time.
I have implemented a UDP server-client program:
the server uses the client address and broadcasts the packets, i.e. there is a handshake before broadcasting data to the client (but this was rejected, as the server knew the client address and handshaking was involved).
I am looking for how to make the UDP server independent of the client address: the server just writes data to the port, then after some time a client connects to the port and reads the data, with no server intervention in the reading.
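Roughly, this is the behaviour I am after; the port number and broadcast address below are only placeholders:

```python
import socket
import time

PORT = 5005  # placeholder for "port XXXX"

def server():
    """Writes packets to the port continuously; never learns any client address."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    seq = 0
    while True:
        sock.sendto(f"packet {seq}".encode(), ("255.255.255.255", PORT))
        seq += 1
        time.sleep(1)

def client():
    """Can join at any time: binds the port and reads whatever is currently being sent."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    while True:
        data = sock.recv(1024)  # no handshake, no address exchange
        print(data.decode())
```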

Understanding of WebSockets

My understanding is that a socket corresponds to a network identifier, port and TCP identifier. [1]
Operating systems enable a process to be associated with a port (which IIUC is a way of making the process addressable on the network for inbound data).
So a WebSocket server will typically be associated with a port well-known for accepting and understanding HTTP for the upgrade request (like 443) and then use TCP identifiers to enable multiple network sockets to be open concurrently for a single server process and a single port.
Please can someone confirm or correct my understanding?
[1] "To provide for unique names at
each TCP, we concatenate a NETWORK identifier, and a TCP identifier
with a port name to create a SOCKET name which will be unique
throughout all networks connected together." https://www.rfc-editor.org/rfc/rfc675
When a client connects to your server on a given port, the client connection is coming from an IP address and a client-side port number. The client-side port number is automatically generated by the client and will be unique for that client. So, you end up with four items that make a connection.
Server IP address (well known to all clients)
Server port (well known to all clients)
Client IP address (unique for that client)
Client port (dynamically unique for that client and that socket)
So, it is the combination of these four items that make a unique TCP connection. If the same client makes a second connection to the same server and port, then that second connection will have a different client port number (each connection a client makes will be given a different client port number) and thus the combination of those four items above will be different for that second client connection, allowing its traffic to be completely separate from the first connection that client made.
So, a TCP socket is a unique combination of the four items above. To see how that is used, let's look at how some traffic flows.
After a client connects to the server and a TCP socket is created to represent that connection, then the client sends a packet. The packet is sent from the client IP address and from the unique client port number that that particular socket is using. When the server receives that packet on its own port number, it can see that the packet is coming from the client IP address and from that particular client port number. It can use these items to look up in its table and see which TCP socket this traffic is associated with and trigger an event for that particular socket. This separates that client's traffic from all the other currently connected sockets (whether they are other connections from that same client or connections from other clients).
Now, the server wants to send a response to that client. The packet is sent to the client's IP address and client port number. The client TCP stack does the same thing. It receives the packet from the server IP/port and addressed to the specific client port number and can then associate that packet with the appropriate TCP socket on the client so it can trigger an event on the right socket.
All traffic can uniquely be associated with the appropriate client or server TCP socket in this way, even though many clients may connect to the same server IP and port. The uniqueness of the client IP/port allows both ends to tell which socket a given packet belongs to.
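As a small illustration, here is a sketch of a server that prints those four items for every connection it accepts (the port is arbitrary):

```python
import socket

# Each accepted connection is identified by (server IP, server port, client IP, client port).
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("0.0.0.0", 8080))    # arbitrary port for the example
listener.listen()

while True:
    conn, (client_ip, client_port) = listener.accept()
    server_ip, server_port = conn.getsockname()
    print(f"connection: ({server_ip}, {server_port}, {client_ip}, {client_port})")
    conn.close()
```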
webSocket connections start out with an HTTP connection (which is a TCP socket running the HTTP protocol). That initial HTTP request contains an "upgrade" header requesting the server to upgrade the protocol from HTTP to webSocket. If the server agrees to the upgrade, then it returns a response that indicates that the protocol will be changed to the webSocket protocol. The TCP socket remains the same, but both sides agree that they will now speak the webSocket protocol instead of the HTTP protocol. So, once connected, you then have a TCP socket where both sides are speaking the webSocket protocol. This TCP connection uses the same logic described above to remain unique from other TCP connections to the same server.
In this manner, you can have a single server on a single port that works for both HTTP connections and webSocket connections. All connections to that server start out as HTTP connections, but some are converted to webSocket connections after both sides agree to change the protocol. The HTTP connections that remain HTTP connections will be typical request/response and then the socket will be closed. The HTTP connections that are "upgraded" to the webSocket protocol will remain open for the duration of the webSocket session (which can be long lived). You can have many concurrent open webSocket connections that are all distinct from one another while new HTTP connections are regularly serviced all by the same server. The TCP logic above is used to keep track of which packets to/from the same server/port belong to which connection.
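For illustration, the core of the server's side of the upgrade handshake is just a hash of a header the client sent; the GUID below is the fixed constant defined by the WebSocket specification (RFC 6455):

```python
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed GUID defined by RFC 6455

def accept_key(sec_websocket_key: str) -> str:
    """Value of the Sec-WebSocket-Accept header returned in the 101 Switching Protocols response."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# The sample key/answer pair from RFC 6455:
print(accept_key("dGhlIHNhbXBsZSBub25jZQ=="))  # -> s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```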
FYI, you may have heard about NAT (Network Address Translation). This is commonly used to allow private networks (like a home or corporate network) to interface to a public network (like the internet). With NAT, a server may see multiple clients as having the same client IP address even though they are physically different computers on a private network. With NAT, multiple computers are routed through a common IP address, but NAT still guarantees that the client IP address and client port number are a unique combination, so the above scheme still works. When using NAT, an incoming packet destined for a particular client arrives at the shared IP address. The IP/port is then translated to the actual client IP address and port number on the private network, and the packet is then forwarded to that device. The server is generally unaware of this translation and packet forwarding. Because the NAT router still maintains the uniqueness of the client IP/client port combination, the server's logic still works just fine even though it appears that many clients are sharing a common IP address. Note, home network routers are usually configured to use NAT since all computers on the home network will "share" the one public IP address that your router has when accessing the internet.
You will not enable multiple sockets; there is no need for it. You will have multiple connections. It's a little different, but you understand it well. For UDP there's nothing to do, because there are no connections.
In TCP, if two different machines connect to the same port on a third machine, there are two distinct connections because the source IPs differ. If the same machine (or two behind NAT or otherwise sharing the same IP address) connects twice to a single remote end, the connections are differentiated by source port, since the same machine cannot open two connections from the same source port.

how listening to a socket works

If a client listens on a socket, at http://socketplaceonnet.com for example, how does it know that there is new content? I assume the server cannot send data directly to the client, as the client could be behind a router with no port forwarding, so a direct connection is not possible. The client could be a mobile phone which changes its IP address. I understand that for the client to be a listener, the server doesn't need to know the client's IP.
Thank you
A client socket does not listen for incoming connections, it initiates an outgoing connection to the server. The server socket listens for incoming connections.
A server creates a socket, binds the socket to an IP address and port number (for TCP and UDP), and then listens for incoming connections. When a client connects to the server, a new socket is created for communication with the client (TCP only). A polling mechanism is used to determine if any activity has occurred on any of the open sockets.
A client creates a socket and connects to a remote IP address and port number (for TCP and UDP). A polling mechanism can be used (select(), poll(), epoll(), etc) to monitor the socket for information from the server without blocking the thread.
In the case that the client is behind a router which provides NAT (network address translation), the router re-writes the address of the client to match the router's public IP address. When the server responds, the router changes its public IP address back into the client's IP address. The router keeps a table of the active connections that it is translating so that it can map the server's responses to the correct client.
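A rough sketch of the client side described above, polling the socket with select() so the thread does not block; the host, port and request below are placeholders:

```python
import select
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("example.com", 80))                              # placeholder server; the client initiates
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")   # placeholder request
sock.setblocking(False)

while True:
    readable, _, _ = select.select([sock], [], [], 1.0)   # poll for up to 1 second
    if readable:
        data = sock.recv(4096)
        if not data:
            break                                         # server closed the connection
        print("received", len(data), "bytes")
    else:
        pass                                              # timeout: do other work, then poll again
```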
The TCP iterative server accepts a client's connection, processes it, completes all requests from the client, and disconnects. The TCP iterative server can only process one client's requests at a time; only when all the requests of that client are satisfied can the server continue with subsequent requests. If one client occupies the server, other clients can't be served, so TCP servers seldom use the iterative server model.
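A bare-bones sketch of such an iterative server (an echo server on an arbitrary port), which makes the limitation concrete:

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 9000))      # arbitrary port for the example
server.listen()

while True:
    conn, addr = server.accept()    # the next client must wait until this one is finished
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:            # client is done with all of its requests
                break
            conn.sendall(data)      # "process" the request (echo it back)
    # only now can the server accept the next client
```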