We have a traditional client-server application. The server side is a plain old Windows service. To achieve higher availability fairly quickly, it would be acceptable to start with the service installed on two different servers. However, the services should not both be active at the same time: if the service fails for some reason on the first server, or the first server goes down, the second one should take over.
This can be done with a Windows cluster, but that is a fairly expensive solution because it has hardware and OS requirements (Windows Server Enterprise, for example).
Is there any good third-party software that can handle this task of monitoring services and switching over between them?
Thanks in advance,
Kind regards,
Wim
A Game of Static Poker: "decentralized fail-over"
Overview
On entering the room, you pick a random card.
If you find someone holding the same card, you leave and come back.
Whenever you leave and come back, you pick a new card.
At any given time, the one holding the highest card in the room speaks.
Basic Implementation
you have x services running on x servers
when a service starts, it picks a random number between, say, int.MinValue and int.MaxValue
in the service configuration you list every host running the service, e.g. "http://internal.server1.com:8282, http://internal.server2.com:8282, http://internal.server3.com:8282"
each service starts an embedded HTTP server (e.g. using http://nancyfx.org) listening on the host specified in the application configuration and serves its chosen number at /
while running, the services endlessly poll these hosts for the maximum value
if the service's own number is the highest value found, it switches to active; otherwise it switches to passive
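A minimal sketch of the decision logic, in Python rather than a .NET service (the HTTP serving/polling described above is omitted; `decide_role` is a hypothetical name, not from the original post):

```python
import random

# Range mirroring the post's int.MinValue..int.MaxValue
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def draw_number():
    """Pick a card: done once when the service starts."""
    return random.randint(INT_MIN, INT_MAX)

def decide_role(own_number, peer_numbers):
    """peer_numbers are the values polled from the other hosts' / endpoints.
    A tie means 'leave and come back': redraw a number and poll again."""
    if own_number in peer_numbers:
        return "redraw"
    if all(own_number > n for n in peer_numbers):
        return "active"
    return "passive"
```

Because every node applies the same rule to the same set of numbers, exactly one node ends up active once all numbers are distinct, with no central coordinator.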
Related
I'm creating a video chat web app and I need to deploy a coturn TURN server somewhere. What I'm asking is: do I need to purchase two droplets on DigitalOcean, one for the TURN server and one for my web server, or can I place them on the same droplet and save money?
Is there any good reason to buy two hosting spots, e.g. performance issues?
Technically, you can deploy it on the same server. But you shouldn't.
There will come a time when you need to upgrade the TURN server, e.g. because the number of users is increasing. If you split these applications, the Node server will not be affected, since it does not need to be upgraded.
Another example: you may want to use more than one TURN server, for load balancing for instance. Also, in case of a problem, one application will not be affected by the other, and it is easier to integrate a replacement/backup server.
My advice would be:
yourdomain.com -> Main application
turn1.yourdomain -> TURN server
turnX.yourdomain -> possibly additional TURN servers in the future
With that split you can also make sure that the TURN server is not using all the bandwidth of your main application, so yes, that is the possible performance issue you mentioned.
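For illustration, a minimal coturn turnserver.conf for a dedicated TURN droplet might look like the following sketch (every value here is a placeholder, not taken from the question):

```
# /etc/turnserver.conf -- minimal coturn setup (placeholder values)
listening-port=3478
realm=turn1.example.com
fingerprint
lt-cred-mech
user=webrtcuser:changeme
# advertise the droplet's public address to clients behind NAT
external-ip=203.0.113.10
```

Keeping this on its own host means TURN relay traffic, which can be heavy, never competes with the web server for bandwidth.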
Assume I'm working on a multiplayer online game. Each group of players may start an instance of the game to play. Take League Of Legends as an example.
At any moment of time, there are many game matches being served at the same time. My question is about the architecture of this case. Here are my suggestions:
Assume we have a cloud with a gateway. Any game instance requires a game server behind this gateway to serve the game. For different clients outside the cloud to reach different game servers inside the cloud, the gateway may differentiate between connections according to ports. It is as if we had one machine with many processes, each listening on a different port.
Is this the best we can get?
Is there another way for the gateway to differentiate connections and forward them to different game instances?
Notice that these are socket connections NOT HTTP requests to an API gateway.
EDIT 1: This question is not about Load Balancing
The keyword is ports. Will each match be served on a different port, or is there another way to serve multiple services on the same host (host = IP)?
Elaboration: I'm using a client-server model for each match instance, so multiple clients may connect to the same match server to participate in the same match. Each match needs to be served by a match server.
The limitation I have in mind: for one host (= IP) to serve multiple services, it needs to provide them on different ports. Say match 1 is on port 1234; then clients participating in match 1 connect to and communicate with the match server on port 1234.
EDIT 2: Scalability is the target
My match server does not calculate and maintain the worlds of many matches; it maintains the world of one match. This is why each match needs its own instance of the match server. It is not scalable to have all clients, across different matches, connect to and be processed by a single process.
My idea is to serve the world of each match from a separate process, which requires each process to listen on a different port.
Example: any client starts a TCP connection with a server listening on port A. Is there a way to serve multiple MatchServers on the same port A (so that more simultaneous MatchServers do not require more ports)?
Is there a better scalable way to serve the different worlds of multiple matches?
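To make the single-port idea concrete: one process can accept every connection on port A and demultiplex it to a per-match handler in software. A rough sketch, assuming a made-up framing where the client's first line names the match (this protocol is invented for illustration, not from the question):

```python
import socket
import threading

matches = {}  # match_id -> list of connections belonging to that match

def route(match_id, conn):
    """Register a connection with its match; all matches share one port."""
    matches.setdefault(match_id, []).append(conn)
    return matches[match_id]

def handle(match_id, conn):
    route(match_id, conn)
    # ... the per-match game loop would process conn here ...

def serve(port):
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        # hypothetical framing: the connection's first line names the match
        match_id = conn.makefile().readline().strip()
        threading.Thread(target=handle, args=(match_id, conn), daemon=True).start()
```

With this shape, the number of simultaneous matches no longer dictates the number of open ports; it only dictates how many handler threads (or processes behind the acceptor) exist.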
Short answer: you probably shouldn't use a proxy gateway to handle user connections unless you are absolutely sure there is no other way; it severely limits your ability to scale.
Long answer:
What you've described is just a load-balancing problem, and you can find plenty of solutions for the given constraints via Google.
For League of Legends it can be quite simple: using some health check, find the server with the lowest load and stick (much like sticky sessions) the current game to that server; until the game is finished, any computation for that particular game happens there. You could use any kind of caching mechanism on the gateway side to store the game-server relation for subsequent requests.
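A gateway-side sketch of that sticky mapping (the server names and load numbers are invented for the example):

```python
game_to_server = {}                          # cache: game_id -> server
server_load = {"s1": 3, "s2": 1, "s3": 2}    # refreshed by health checks

def server_for(game_id):
    """First request for a game picks the least-loaded server;
    every later request sticks to the cached choice."""
    if game_id not in game_to_server:
        chosen = min(server_load, key=server_load.get)
        game_to_server[game_id] = chosen
        server_load[chosen] += 1
    return game_to_server[game_id]
```

The cache is what makes it "sticky": balancing happens once per game, not once per request.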
Another, slightly more complicated example is data storage for the statistics of a particular game. This is usually solved via sharding, a common consequence of distributed computing, and could work this way: use some kind of hashing function (for example, modulo) with the game ID as the parameter to calculate the server number. For example, 18283 mod 15 = 13, so with 15 available shards the data for game ID 18283 is stored and served by the 13th server.
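The modulo example as code (trivial, but it shows the mapping):

```python
def shard_for(game_id, shard_count):
    """Modulo hashing: game ID 18283 with 15 shards lands on shard 13."""
    return game_id % shard_count
```

Note that changing shard_count remaps almost every game ID, which is exactly why rebalancing is the weak spot of this scheme.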
The main problem here would be rebalancing, i.e. adding or removing a shard from the cluster: with modulo hashing, changing the shard count remaps almost every ID.
Those are just two examples; you can find more with the appropriate keywords. Just keep in mind that all of this is a subset of the problems of distributed computing.
I'm new to Eureka and I see this information on the home page of my Eureka server (localhost:8761/). I didn't find any explanation in the official docs about 'Renews' and 'Renews threshold'. Could anyone please explain these terms? Thanks!
Hope it helps:
Renews: the total number of heartbeats (renewals) the server received from clients.
Renews threshold: the expected number of heartbeats, which acts as the switch for Eureka's "self-preservation mode": if "Renews" drops below "Renews threshold", self-preservation mode turns on.
self-preservation mode:
When the Eureka server comes up, it tries to get all of the instance registry information from a neighboring node. If there is a problem getting the information from a node, the server tries all of the peers before it gives up. If the server is able to successfully get all of the instances, it sets the renewal threshold it should be receiving based on that information. If at any time the renewals fall below the configured percentage (below 85% within 15 minutes), the server stops expiring instances in order to protect the current instance registry information.
At Netflix, the above safeguard is called self-preservation mode and is primarily used as protection in scenarios where there is a network partition between a group of clients and the Eureka server. In these scenarios, the server tries to protect the information it already has. In the case of a mass outage, this may cause clients to be given instances that no longer exist, so clients must be resilient to the Eureka server returning an instance that is non-existent or unresponsive. The best protection in these scenarios is to time out quickly and try other servers.
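A back-of-the-envelope sketch of how the dashboard numbers relate. The 30-second heartbeat interval and the 0.85 factor are Eureka's defaults, but this function is an illustration of the arithmetic, not Eureka's actual code:

```python
import math

def renews_threshold(instances, heartbeats_per_min=2, factor=0.85):
    """Expected renews per minute across all registered instances,
    scaled by the self-preservation factor (renewal-percent-threshold)."""
    return math.floor(instances * heartbeats_per_min * factor)

# e.g. 10 instances heartbeating every 30 s -> expect 20 renews/min;
# self-preservation kicks in if the server sees fewer than 17
```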
For more details, please refer to the Eureka wiki.
I'm studying microservices architecture and I'm wondering about something.
I'm quite okay with the idea of using (backend) service discovery to make requests to REST-based microservices: I need to know where the service (or at least the front of the server cluster) is before I can make requests, so it makes sense to discover an ip:port in that case.
But I was wondering: what is the point of using a service registry/discovery when dealing with AMQP only (with no possible HTTP calls)?
I mean, using AMQP is like saying "I need that, and I expect somebody to answer me"; I don't have to know which server sent me back the response.
So what is the aim of using a service registry/discovery with AMQP-based microservices?
Thanks for your help
AMQP (any MOM, actually) provides a way for processes to communicate without having to worry about actual IP addresses, communication security, routing, and other such concerns. That does not necessarily mean that a process can trust, or even has any information about, the processes it communicates with.
Message queues solve half of the problem: how to reach the remote service. But they do not solve the other half: which service is the right one for me. In other words, which service:
has the resources I need
can be trusted (is hosted on a reliable server, has a satisfactory service implementation, is located in a country where the local laws are compatible with your requirements, etc)
charges what you are willing to pay (although people rarely discuss cost when it comes to microservices)
will be there during the whole time window needed to process your request; keep in mind that servers are becoming more and more volatile, and some servers are actually containers that last only a couple of minutes
Those two problems are largely independent. The second kind of problem is what resource brokers solve in Grid computing; there is also resource allocation to make sure that the last item above is correctly managed.
There are alternative strategies, such as multicasting the intention to use a service and waiting for replies with offers; you may have a reverse auction in such a case, for instance.
In short, the rule of thumb is: if you do not have a priori knowledge of which service you are going to use (hardcoded or in some configuration file), your agent will have to negotiate, and negotiation includes dynamic service discovery.
I set up an Active/Passive cluster with Pacemaker/Corosync/DRBD to make an Asterisk server HA. The solution works perfectly, but when the service fails on one server and starts on another, all SIP clients registered with the previously active server are lost, and the passive server shows nothing in the output of:
sip show peers
until clients make a call or register again. One workaround is to set the registration interval on the clients to one minute or so. Are there other options? For example, would integrating Asterisk with a DBMS help to save this kind of state in a database?
First of all, building clusters without expertise is a bad idea.
You can use the realtime SIP architecture, which saves peer state in a database. Complexity: average. Note that "sip show peers" also shows nothing for realtime peers.
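For reference, wiring SIP peers to a database with Asterisk realtime is mostly configuration; a sketch along these lines (the ODBC connection name "asterisk" and the table name are placeholders for your own setup):

```
; extconfig.conf -- map the sippeers family to a database table
sippeers => odbc,asterisk,sippeers

; sip.conf -- keep realtime peers cached and persist registration state
rtcachefriends=yes
rtupdate=yes
```

With registration state in the database, the standby node can read the same peer rows after failover instead of waiting for every client to re-register.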
You can use a memory-duplicating cluster (solutions exist for Xen, for example) that copies memory state from one server to the other. Complexity: very high.