I need to implement a master election library based on PaxosLease, and I am running into a problem with the network layer design.
The core network requirements are as follows:
Each node is both a server and a client
The system must keep working when some nodes are offline
According to the requirements above, I designed the network layer as follows:
Use TCP to transport messages.
Each node listens on a well-known port to accept connections (each called a ReadSession); these connections only read messages.
Each node connects to the other nodes from an ephemeral port (each such connection is a WriteSession); these connections only write messages.
The design looks OK at first glance. But because some nodes can be offline at the beginning, a problem occurs: a ReadSession is created when a newly online node connects to the current node, but how can I create a WriteSession back to that new node? We can't know when an offline node comes online, and when we accept its connection we only learn its ephemeral port, not its listen port.
It's really a design problem for me. I have these two ideas at the moment:
Try using UDP to transport messages.
Re-register async_connect in the asio::ip::tcp::socket::async_connect handler whenever it reports an error (i.e., the connect failed); a sketch follows this list. But this idea still has problems:
asio::io_context spends too much I/O resource running async_connect handlers.
the WriteSession can't be ready immediately when we need to write a message, because the operation is asynchronous.
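In code, the retry idea would look roughly like this (a minimal sketch assuming standalone Asio; the WriteSession class, the one-second back-off, and the ready_ flag are my own illustrative choices, not library types):

```cpp
// Sketch of idea 2: retry async_connect until the peer comes online.
// A steady_timer spaces out the retries so the io_context is not flooded
// with failing connect handlers (problem 1 above).
#include <asio.hpp>
#include <chrono>
#include <memory>

class WriteSession : public std::enable_shared_from_this<WriteSession> {
public:
    WriteSession(asio::io_context& io, asio::ip::tcp::endpoint peer)
        : socket_(io), timer_(io), peer_(std::move(peer)) {}

    void start() { connect(); }

private:
    void connect() {
        auto self = shared_from_this();
        socket_.async_connect(peer_, [self](const asio::error_code& ec) {
            if (ec) self->retry();        // peer still offline: try again later
            else    self->ready_ = true;  // WriteSession usable from here on
        });
    }

    void retry() {
        auto self = shared_from_this();
        timer_.expires_after(std::chrono::seconds(1)); // back off, don't spin
        timer_.async_wait([self](const asio::error_code&) { self->connect(); });
    }

    asio::ip::tcp::socket socket_;   // write-only once connected
    asio::steady_timer timer_;       // spaces the reconnect attempts
    asio::ip::tcp::endpoint peer_;   // the peer's listen port, if we know it
    bool ready_ = false;
};
```

Even with the back-off, problem 2 remains: callers still have to queue messages until ready_ becomes true.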
So I want a design in which the WriteSession is ready immediately when a newly online node connects to the current node.
Thank you very much!
Actually, the network topology you are creating is a distributed system, and the problem you face has been solved by many well-known applications such as ZooKeeper and Chubby.
If you don't want that much complexity, you could try a broadcast protocol such as LSD (Local Service Discovery) on a LAN, or a DHT on a wide-area network.
Related
I'm developing a fully decentralized poker game. My current design uses pub/sub and push/pull sockets from nanomsg to establish the communication.
Players push data using the NN_PUSH socket type and the dealer receives it using NN_PULL. Once the dealer has processed the data, it publishes it via NN_PUB, and all players in the game receive it via NN_SUB.
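The dealer side of this design looks roughly like the following sketch (nanomsg C API; the endpoint strings and ports are placeholders of my own, and error handling is elided):

```cpp
// Dealer loop: pull moves from players, process, publish the result to all.
#include <nanomsg/nn.h>
#include <nanomsg/pipeline.h>   // NN_PULL / NN_PUSH
#include <nanomsg/pubsub.h>     // NN_PUB / NN_SUB

int main() {
    // Players nn_connect() an NN_PUSH socket here and push their moves.
    int pull = nn_socket(AF_SP, NN_PULL);
    nn_bind(pull, "tcp://*:5555");

    // Players nn_connect() an NN_SUB socket here to receive game state.
    int pub = nn_socket(AF_SP, NN_PUB);
    nn_bind(pub, "tcp://*:5556");

    for (;;) {
        char* msg = nullptr;
        int n = nn_recv(pull, &msg, NN_MSG, 0);  // zero-copy receive
        if (n < 0) break;
        // ... process the move, then fan the new state out to everyone
        nn_send(pub, msg, n, 0);
        nn_freemsg(msg);
    }
    nn_close(pull);
    nn_close(pub);
}
```

Note that both bind calls are exactly where the static-IP dependency creeps in: every player has to know the dealer's address in advance.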
Everything works fine so far. The only constraint I see is that my player nodes must know the dealer's IP, and it must be static for this socket communication to work.
Also, each player's backend is connected from the GUI using libwebsockets, so I need static IPs for my backend nodes too.
To summarize, I have ended up in a situation where my dealer and player nodes need static IPs. I'm reading about DHT protocols, but I'm not sure whether those P2P protocols are really applicable in the context of a pub/sub model.
Any input on how to avoid depending on static IPs would be greatly appreciated.
Thank you.
I suggest you use both LSD and DHT. LSD is really good for discovering local neighbor nodes, and a DHT can do what you describe above. What's more, if you are willing to run a tracker, that may be much easier than a DHT, since a DHT forces you to think harder about NAT traversal.
I need to make a TCP-based decentralized chat app for a local network. By decentralized I mean there is no central server; each entity on the network has both a server and a client role. When the app starts, it should check which users are online (i.e., already running the app). My question is: how can I check that? Can I do it by trying to connect via the connect() function from the socket library? I'm new to programming, especially socket programming, so if it's a dumb question, sorry in advance.
You should definitely study how other decentralized applications do this. There are lots of techniques.
Each instance of the application should, as part of its server functionality, track the addresses of other instances of the application. Each instance should, as part of its client functionality, keep track of a few instances it can connect to. Prefer instances that have been around for a long time.
The software should include a list of servers that have been running for a long time and are expected to typically be available. You may wish to include a fallback method such as DNS, maintained by anyone willing to keep a list of well-known servers offering access through a well-known port. The fallback method can also be IRC or HTTP.
If you want to stay decentralized, you might want to try multicasting or broadcasting a request packet to all hosts on the network to discover other instances of your chat application.
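As a rough sketch of the broadcast idea (POSIX sockets; the port 45454 and the probe text are arbitrary placeholders of my own, and error handling is elided):

```cpp
// Shout "who is running the chat app?" at the whole subnet and wait for
// one reply. Running instances would bind UDP port 45454, answer probes
// with their TCP listen address, and normal connect() takes over from there.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    int yes = 1;
    setsockopt(s, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));

    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(45454);
    dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);  // 255.255.255.255

    const char probe[] = "CHAT_DISCOVER";
    sendto(s, probe, sizeof(probe), 0, (sockaddr*)&dst, sizeof(dst));

    char buf[64];
    sockaddr_in from{};
    socklen_t len = sizeof(from);
    ssize_t n = recvfrom(s, buf, sizeof(buf) - 1, 0, (sockaddr*)&from, &len);
    if (n > 0) {
        buf[n] = '\0';
        printf("peer %s is online: %s\n", inet_ntoa(from.sin_addr), buf);
    }
    close(s);
}
```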
Something similar is implemented in Pidgin under the name Bonjour. It works quite nicely and provides chat on a local network. More specifically, it is the Serverless Messaging part of XMPP.
If you are looking for code examples, have a look at one of my projects where I use multicast to discover hosts on the local network that provide a specific service: Headers and implementation.
I have built a WebSockets server that acts as a chat message router (i.e. receiving messages from clients and pushing them to other clients according to a client ID).
It is a requirement that the service be able to scale to handle many millions of concurrent open socket connections, and I wish to be able to horizontally scale the server.
The architecture I have had in mind is to put the websocket server nodes behind a load balancer, which creates a problem: clients connected to different nodes won't know about each other. While both clients A and B enter via the load balancer, client A might have an open connection with node 1 while client B is connected to node 2, and each node holds its own dictionary of open socket connections.
To solve this problem, I was thinking of using an MQ system like ZeroMQ or RabbitMQ. All of the websocket server nodes would be subscribers of the MQ server, and when a node gets a request to route a message to a client that is not in its local connections dictionary, it would publish a message to the MQ server, which would tell all the subscriber nodes to look for this client and deliver the message if it's connected to that node.
Q1: Does this architecture make sense?
Q2: Is the pub-sub pattern described here really what I am looking for?
ZeroMQ would be my option - both architecture-wise & performance-wise
-- fast & low latency ( you can measure your implementation's performance & overheads, down to the sub-[usec] scale )
-- broker-less ( does not introduce another point-of-failure, while itself can have { N+1 | N+M } self-healing architecture )
-- smart Formal Communication Pattern primitives ready to be used ( PUB / SUB is the least cardinal one )
-- fair-queue & load balancing architectures built-in ( invisible for external observer )
-- many transport Classes for server-side internal multi-process / multi-threading distributed / parallel processing
-- ready for almost linear scalability
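For a feel of the PUB/SUB primitive, here is a minimal sketch with the libzmq C API (the endpoint, port, and "client42" routing key are placeholders of my own; each websocket node would run the SUB side and check its local connections dictionary on every message):

```cpp
// One publisher fans a routed message out to every subscribed node.
#include <zmq.h>
#include <unistd.h>
#include <cstdio>

int main() {
    void* ctx = zmq_ctx_new();

    // Router side: broadcast "clientID|payload" to all websocket nodes.
    void* pub = zmq_socket(ctx, ZMQ_PUB);
    zmq_bind(pub, "tcp://*:7000");

    // Node side: subscribe with an empty filter, i.e. receive everything.
    void* sub = zmq_socket(ctx, ZMQ_SUB);
    zmq_connect(sub, "tcp://localhost:7000");
    zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);

    sleep(1);  // let the SUB finish connecting (ZeroMQ "slow joiner")
    zmq_send(pub, "client42|hello", 14, 0);

    char buf[64];
    int n = zmq_recv(sub, buf, sizeof(buf) - 1, 0);
    if (n >= 0) {
        buf[n] = '\0';
        printf("node received: %s\n", buf);  // look up client42 locally
    }

    zmq_close(sub);
    zmq_close(pub);
    zmq_ctx_destroy(ctx);
}
```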
Adaptive node re-discovery
This is a more complex subject. To arrive at a feasible architecture, you will have to drill down into more detail on questions such as:
Node authentication vs. peer-to-peer messaging
Node (re)-discovery vs. legal & privacy issues
Node based autonomous self-organising Agents vs. needs for central policy enforcement
To update this for 2021: we just solved this problem, where we needed to design a system that could handle millions of simultaneous WS connections from IoT devices. The WS server just relays messages to our serverless API backend, which handles the actual logic. We chose to use Docker and the Node.js ws package on an auto-scaling AWS ECS Fargate cluster with an ALB in front of it.
This solved the main problem of routing incoming messages, but then we had the same issue in reverse: how do we route response messages from the server? We initially thought of just keeping a central DB of connections, but routing messages to a specific Fargate instance behind an ALB didn't seem feasible.
Instead, we set up a simple pub/sub pattern using AWS SNS (https://aws.amazon.com/pub-sub-messaging/). Every WS server receives the response and then searches its own WS connections. Since each Fargate instance handles just routing (no logic), each can handle a lot of connections when we scale it vertically.
Update: to make this even more performant, you can use a persistent connection like Redis Pub/Sub so that the response message goes to only a single server instead of every server.
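A sketch of that per-server channel idea with the hiredis C client (the host, port, and "ws-node-17" channel name are placeholders of my own): each WS node subscribes only to its own channel, and whoever holds the connection mapping publishes each response to exactly one channel.

```cpp
// Subscriber side: a response published to "ws-node-17" wakes exactly one
// server, which then delivers it over the matching local WS connection.
#include <hiredis/hiredis.h>
#include <cstdio>

int main() {
    redisContext* c = redisConnect("127.0.0.1", 6379);
    if (!c || c->err) return 1;

    // Subscribe to this node's private channel.
    redisReply* r = (redisReply*)redisCommand(c, "SUBSCRIBE ws-node-17");
    freeReplyObject(r);

    // Block until a routed response arrives for one of our clients.
    void* reply = nullptr;
    while (redisGetReply(c, &reply) == REDIS_OK) {
        redisReply* msg = (redisReply*)reply;
        // Pub/Sub messages arrive as ["message", channel, payload].
        if (msg->type == REDIS_REPLY_ARRAY && msg->elements == 3)
            printf("payload: %s\n", msg->element[2]->str);
        freeReplyObject(msg);
    }
    redisFree(c);
}
```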
I am trying to make an application for iPhone that can listen for traffic on a specific network port.
A server on my network is sending out messages (different status messages for devices the server handles) on a specific port.
My problem is that when I make a thread and makePairWithSocket, I block the port for others who want to send messages to the server. I only want to listen to the traffic on a specified port, check for specific headers, and then use those messages.
I know how to make the connection and talk to the server using write and read streams, but then I makePairWithSocket and block the port for all other devices on the network.
Does anyone have a suggestion on how to listen on a port in Objective-C without pairing with the server?
Thanks in advance
Daniel
Check out CocoaAsyncSocket. It gives you a nice and structured way (with delegates) to send and receive data... also with multiple clients. The documentation is quite good. project link
edit: Have a look at the AsyncUdpSocket class for connectionless UDP communication.
I think this requires network support well below the socket API level, perhaps at the hardware driver level, assuming the packets are even being routed to your device.
Is it possible to multiplex a socket connection?
I need to establish multiple connections to Yahoo Messenger, and I am looking for a way to do this efficiently without having to hold a socket open for each client connection.
So far I have to use one socket for each client, and this does not scale well above 50,000 connections.
My solution is for a telco, so I need to hit at least 250,000 to 500,000 connections.
I'm planning to bind multiple IP addresses to a single NIC to get around the ~65k port restriction per IP address.
I would appreciate any help or insight I can get.
**Most of my other questions on this site have gone unanswered :)**
Thanks
This is an interesting question about scaling in a serious situation.
You are essentially asking, "How do I establish N connections to an internet service, where N is >= 250,000".
The only way to do this effectively and efficiently is to cluster. You cannot do this on a single host, so you will need to be able to fragment and partition your client base into a number of different servers, so that each is only handling a subset.
The idea would be for each server to hold open as few connections as possible (spreading the connectivity out evenly) while still holding enough connections to keep the hosted service viable and inter-server communication to a minimum. This means that any two connections that are related (such as two accounts that talk to each other a lot) will have to be on the same host.
You will need servers and network infrastructure that can handle this. You will need a subnet of IP addresses, and each server will have to have stateless communication with the internet (i.e., your router must not do any NAT, so it does not have to track 250,000+ connections).
You will have to talk to Yahoo. There is no way Yahoo will handle this level of connectivity without considering cutting off your connection. Any service of this scale would have to be negotiated with Yahoo so that both you and they can handle the connectivity.
There are i/o multiplexing technologies that you should investigate. Kqueue and epoll come to mind.
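As a taste of what those look like, here is a bare-bones epoll read loop (Linux-only sketch; it assumes the sockets in fds are already connected and ignores partial reads and write-side readiness):

```cpp
// One thread watching many already-connected sockets for readability.
#include <sys/epoll.h>
#include <unistd.h>

void event_loop(const int* fds, int nfds) {
    int ep = epoll_create1(0);
    for (int i = 0; i < nfds; ++i) {
        epoll_event ev{};
        ev.events = EPOLLIN;        // wake when this socket is readable
        ev.data.fd = fds[i];
        epoll_ctl(ep, EPOLL_CTL_ADD, fds[i], &ev);
    }
    epoll_event ready[64];
    for (;;) {
        int n = epoll_wait(ep, ready, 64, -1);   // block until activity
        for (int i = 0; i < n; ++i) {
            char buf[4096];
            ssize_t got = read(ready[i].data.fd, buf, sizeof(buf));
            if (got <= 0) close(ready[i].data.fd);  // peer gone
            // ... otherwise parse the protocol message in buf
        }
    }
}
```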
In order to write this massively concurrent, telco-grade solution, I would recommend investigating Erlang. Erlang is designed for situations such as these (multi-server, massively-multi-client, massively-multithreaded, telecommunications-grade software). It is currently used to run Ericsson telephone exchanges.
While you can listen on a socket for multiple incoming connection requests, when the connection is established, it connects a unique port on the server to a unique port on the client. In order to multiplex a connection, you need to control both ends of the pipe and have a protocol that allows you to switch contexts from one virtual connection to another or use a stateless protocol that doesn't care about the client's identity. In the former case you'd need to implement it in the application layer so that you could reuse existing connections. In the latter case you could get by using a proxy that keeps track of which server response goes to which client. Since you're connecting to Yahoo Messenger, I don't think you'll be able to do this since it requires an authenticated connection and it assumes that each connection corresponds to a single user.
You can only multiplex multiple connections over a single socket if the other end supports such an operation.
In other words, it's a function of the protocol; sockets don't have any native support for it.
I doubt the Yahoo Messenger protocol has any support for it.
An alternative (to multiple IPs on a single NIC) is to design your own multiplexing protocol and have satellite servers that convert from the multiplex protocol to the Yahoo protocol.
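A sketch of what a frame in such a home-grown multiplex protocol could look like (the header layout is purely illustrative; a real protocol would also pin down byte order and versioning):

```cpp
// Every message on the shared pipe carries a virtual-connection id and a
// length, so the satellite server can demultiplex back to per-user Yahoo
// connections on the far side.
#include <cstdint>
#include <cstring>
#include <vector>

struct FrameHeader {
    uint32_t conn_id;   // which virtual client this payload belongs to
    uint32_t length;    // number of payload bytes that follow
};

std::vector<uint8_t> wrap(uint32_t conn_id, const uint8_t* data, uint32_t len) {
    FrameHeader h{conn_id, len};
    std::vector<uint8_t> frame(sizeof(h) + len);
    std::memcpy(frame.data(), &h, sizeof(h));          // header first
    std::memcpy(frame.data() + sizeof(h), data, len);  // then payload
    return frame;
}
```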
I'll throw in another approach you could consider (depending on how desperate you are).
Note that operating system TCP/IP implementations need to be general purpose, but you are only interested in a very specific use-case. So it might make sense to implement a cut-down version of TCP/IP (which only handles your use-case, but does that very well) in your application code.
For example, if you are using Linux, you could route a couple of IP addresses to a tun interface and have your application handle the IP packets for that tun interface. That way you can implement TCP/IP (optimised for your use-case) entirely in your application and avoid any operating system restriction on the number of open connections.
Of course, it's quite a bit of work doing TCP/IP yourself, but it really depends on how desperate you are, i.e., how much hardware you can afford to throw at the problem.
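Opening the tun device itself is the easy part; the hard part is the TCP state machine behind it. A minimal sketch (Linux; the interface still has to be created and routed separately, e.g. with ip tuntap and ip route, and the name tun0 is just an example):

```cpp
// After TUNSETIFF, read() hands you raw IP packets and write() injects
// them, so your process owns the entire TCP state for those addresses.
#include <fcntl.h>
#include <linux/if.h>
#include <linux/if_tun.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstring>

int open_tun(const char* name) {
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0) return -1;

    ifreq ifr{};
    ifr.ifr_flags = IFF_TUN | IFF_NO_PI;   // raw IP packets, no extra header
    std::strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
    if (ioctl(fd, TUNSETIFF, &ifr) < 0) { close(fd); return -1; }
    return fd;
}

int main() {
    int fd = open_tun("tun0");
    if (fd < 0) return 1;
    uint8_t pkt[2048];
    ssize_t n = read(fd, pkt, sizeof(pkt));  // one IP packet per read()
    printf("got %zd-byte IP packet\n", n);   // hand it to your TCP engine
    close(fd);
}
```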
500,000 arbitrary yahoo messenger connections - is your telco doing this on behalf of Yahoo? It seems like whatever solution has been in place for many years now should be scalable with the help of Moore's Law - and as far as I know all the IM clients have been pretty effective for a long time, and there's no pressing increase in demand that I can think of.
Why isn't this a reasonable problem to address with hardware plus traditional solutions?