Distributed Recovery - can this be done without a timeout?

We have a mail sender application that receives a batch of mails in one blob and then inserts all of them into the database. This can take up to ten minutes. During this process the state of the mailing is BUILDING.
When it is finished the state gets changed to READY.
When the server crashes (which shouldn't happen, of course) and restarts, it looks for all mailings with status BUILDING and marks them as ERROR, because we never want to send incomplete mailings.
Now we'd like to scale up using a second server. The recovery strategy above doesn't work there.
For example, server 1 is BUILDING a mailing when server 2 crashes and restarts. Server 2 will see the BUILDING mailing but can't tell whether it was aborted or is still being built on another server.
So what's the best recovery strategy for distributed services?
(We thought about a timeout mechanism where the BUILDING server updates a timestamp every few seconds; when a server reboots, it checks for BUILDING mailings that haven't been updated for x minutes. Such a mailing has very likely been aborted.)
EDIT:
What I'd like to achieve: if some server restarts (after a crash, or just because we added a new mailing server to the cluster), it should not mark a mailing as ERROR while that mailing is actually being built by another server.
Nice to have: a solution that works without storing server IDs, because that makes it easy to add and remove servers. Otherwise a server could never be completely removed: there might still be a BUILDING mailing tagged with that server's ID, and since the removed server will never start again, the only server that could set the mailing to ERROR is gone.

Add two things to your state tracking: a timestamp and the server working on it.
If a server starts up and sees anything in a BUILDING state under its own ID, it knows that build failed. Conversely, if it starts up and sees something in a BUILDING state under another server's ID, it now has information it will need to revisit later to decide whether there's a problem to address. You also need to worry about multiple servers restarting at the same time, so a server can't simply grab all stale work for all servers at startup.
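To make that concrete, here is a minimal sketch of the timestamp-plus-owner (heartbeat/lease) idea, combining this answer with the timeout the question already proposed. It assumes a relational store; the mailings table, the owner/heartbeat columns, and the db handle are all invented for illustration:

    # Sketch only: table/column names (mailings, owner, heartbeat) are
    # invented; `db` stands for any DB-API style handle.
    import time
    import uuid

    SERVER_ID = str(uuid.uuid4())   # new ID on every start, so no fixed server IDs to manage
    LEASE_SECONDS = 30              # a build whose heartbeat is older than this is considered dead

    def heartbeat(db, mailing_id):
        # Called every few seconds by the server that is BUILDING.
        db.execute(
            "UPDATE mailings SET heartbeat = ? WHERE id = ? AND owner = ?",
            (time.time(), mailing_id, SERVER_ID),
        )

    def recover_stale(db):
        # Run on startup and periodically. Because the check is "heartbeat
        # too old", any server may run it, and removed servers leave nothing
        # behind: their mailings simply stop being heartbeated and are
        # reaped here.
        cutoff = time.time() - LEASE_SECONDS
        db.execute(
            "UPDATE mailings SET status = 'ERROR' "
            "WHERE status = 'BUILDING' AND heartbeat < ?",
            (cutoff,),
        )

Note that this also satisfies the "nice to have": the server ID is regenerated on every start and is never needed to recover someone else's work.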
Or you can just use a clustering service for your OS.

Related

Sync two offline masters when network available

I have a use case where I need to set up two physical stations at a venue. Each station will be running a couple of app servers and a MongoDB server.
I can't rely on the venue's internet access so I need my app to be able to work offline and "sync" the dbs every once in a while.
I initially thought about having two masters that would somehow sync with a remote one, but I have since learned that master-master replication is not possible with MongoDB.
I've read about the active-active approach; however, that won't let me write to a different shard while offline.
I'm running out of ideas, any recommendation would be greatly appreciated.
Update on what I'm trying to achieve:
I'm working with a venue that has two entrances. The idea is to capture some information from people attending the events (name, email, etc.). After they register, we will print a name tag with some of the info.
Everything sounds pretty easy; however, if possible, I would like not to rely on the venue's network (internet). That's where I started struggling to figure out what the best approach is. I guess what I want is a remote MongoDB, but if the network goes down, to somehow keep saving records locally and send them to the remote instance when the network is available again.
Extra considerations:
- Events last a couple of days, some people lose their name tag overnight, they should be able to go to either of the entrances and get it reprinted. So we should be able to find their info even if they registered in entrance A but they are asking for a reprint in entrance B.
More questions:
- Am I overthinking it? Maybe the venue's network plus a 4G/LTE modem as a backup would be enough? I would prefer not to rely on it, though.
I believe you're overthinking things. Here's what I would do if faced with a similar situation:
From the description, it doesn't sound like the two sites need to be connected in real time at all. I would create a server on Entry A, another in Entry B, and consolidate their data each day after the day ended if required. This is because:
It's unlikely that one person will register at both sites within a single day. If they lost their tag that same day, I'd just tell them to go back to where they registered and get it reprinted there. Worst case, you'll create a duplicate entry (it should be obvious which one is the duplicate, since no one loses their tag within seconds), but I would not anticipate hundreds of people all losing their tags within a day.
If the attendee lost their tag overnight, both servers will have synced data and should be able to reprint.
If you're concerned about the venue's Wi-Fi, just run cables from the server to the printing stations.
Personally, I would argue that the overnight sync is not really needed at all (see the likelihood of people registering twice). I would just collect the data from both servers after the event ended. That is, unless you have specific needs for the combined data from both entries during the 2nd day.
Note: please make sure you're running a minimum of a 3-node replica set. Running a standalone instance in a production environment is not recommended; hardware/disk corruption is a common event.
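If you do go with the per-entrance servers, the end-of-day consolidation can be a very small script. A hedged sketch using PyMongo, where the host names, database/collection names, and the use of email as the natural key are all assumptions:

    # Hypothetical hosts and collection; assumes email is unique per attendee.
    import pymongo

    src = pymongo.MongoClient("mongodb://entrance-b.local:27017").venue.registrations
    dst = pymongo.MongoClient("mongodb://entrance-a.local:27017").venue.registrations

    for doc in src.find():
        doc.pop("_id", None)  # let each server keep its own _id values
        # Upsert by email: re-running the sync is harmless, and someone who
        # registered at both entrances collapses into a single record.
        dst.replace_one({"email": doc["email"]}, doc, upsert=True)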

Zero downtime deployment of Slack bot

We're developing a bot with Botkit and are now trying to minimize deployment downtime.
There is a server with a Docker container running on it; inside the container runs a bot-app instance connected to Slack's RTM server.
When I deploy a new version (v2) of the bot-app, I want zero downtime; users should not see "bot is offline".
The deploy script starts a second Docker container with the new version of the bot-app, which also connects to the RTM server. So there are a few seconds when both apps are running, connected to the RTM server, and responding to user commands (and a user will see two answers to his command).
What is the best option if, on the one hand, we want zero downtime and, on the other hand, we want to prevent the user from interacting with two instances at the same time?
Decision 1:
Accept a small chance of a collision, where both instances respond to the same user command.
Decision 2:
Abandon zero-downtime deployment. In this case, the deploy script first stops the current Docker container, then starts the new one. The app will not respond to user commands sent between stopping the current version and the new version being fully started.
Decision 3:
Coordinate a handover between the current and new versions of the app (a mutex of sorts). General scheme (see the sketch below):
1) The current version of the app is running.
2) The deploy script starts the new version of the app.
3) When the new version is up and ready to connect to the RTM server, it sends the current version a command to close its RTM connection.
4) The current version closes its RTM connection.
5) The new version opens its RTM connection.
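As a sketch of that handover, using a Redis pub/sub channel as the signaling mechanism (the channel name and the connect_to_rtm/close_rtm_connection hooks are placeholders, and this is Python although the bot itself is Node/Botkit):

    import time
    import redis

    r = redis.Redis()

    def new_version_startup():
        # New instance: ask whoever holds the RTM connection to release it.
        r.publish("bot-handover", "release")
        # A real implementation would wait for an acknowledgement here
        # rather than sleeping.
        time.sleep(1)
        connect_to_rtm()  # placeholder for the actual connect call

    def current_version_listener():
        # Old instance: listen for the handover signal, then close the
        # RTM connection and exit.
        sub = r.pubsub()
        sub.subscribe("bot-handover")
        for msg in sub.listen():
            if msg["type"] == "message":
                close_rtm_connection()  # placeholder
                break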
I suspect there are other good solutions.
How would you solve this problem in your application?
(Sorry for the second reply; had another idea.)
The approach I described earlier would be pretty disruptive to your existing code, since you'd probably need to stop using Botkit (or at least not use it for the RTM API communication). A less disruptive approach may be to use some external way to signal that a given message has already been processed.
For example, using Redis, have the bot do the following command when a message comes in:
SET message:<message timestamp> 1 NX PX 30000
The NX option means this command will only succeed if the key doesn't already exist. So the first instance of the bot that manages to execute this will succeed, and the other instance will fail. The bot should only process the message and respond if this command succeeded.
(The PX 30000 sets a 30-second expiration so Redis doesn't get full of these keys.)
This should let you do your zero-downtime upgrades via overlapping the running bot instances without having to worry about a message being processed twice.
Note that it's still possible in this scheme for a message to be dropped altogether if a bot is shut down in a non-graceful way. (It could die just after calling the SET command but before it's actually dealt with the message.) A real queue with a two-phase "get/delete" would be better, but then you're back to my other answer. :-)
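For illustration, here is that check in Python using the redis-py client (the same SET options exist in every Redis client, including the Node ones Botkit apps typically use); the key scheme is the one from the command above:

    import redis

    r = redis.Redis(host="localhost", port=6379)

    def should_handle(message_ts):
        # SET message:<ts> 1 NX PX 30000 -- only the first instance to run
        # this for a given message succeeds; the key expires after 30 s.
        return bool(r.set("message:%s" % message_ts, 1, nx=True, px=30000))

The bot processes and responds to a message only when should_handle returns True.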
One idea I would consider is separating into two components:
A component that keeps a WebSocket connected to the Slack RTM API. This component simply reads messages from the API and puts them on to a queue. (Let's call this the "queuer.")
The actual "bot," which reads messages from the queue and responds as needed.
Depending on how your bot behaves, it can use the Web API directly or perhaps put its own messages on an outbound queue which the "queuer" can send via the RTM API.
This architecture probably solves your problem... you can now either take the bot down briefly while upgrading—responses will just be delayed until the new version is running—or you can run two versions of the bot at the same time and rely on the semantics of the queue to prevent both versions from responding to the same message.
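As a sketch of that split, using a Redis list as the queue (the queue technology, the key name, and the handle function are all placeholders):

    import json
    import redis

    r = redis.Redis()

    def queuer_on_rtm_message(msg):
        # Queuer side: runs in the process that holds the WebSocket and
        # simply enqueues every inbound RTM message.
        r.rpush("slack_inbound", json.dumps(msg))

    def bot_loop():
        # Bot side: safe to stop and upgrade at any time; messages just
        # wait in the queue until the new version starts consuming.
        while True:
            _, raw = r.blpop("slack_inbound")
            handle(json.loads(raw))  # handle() is your actual bot logic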

What are the limitations of the Flask built-in web server?

I'm a newbie in web server administration. I've read multiple times that Flask's built-in web server is not designed for "production" and must be used only for tests and debugging...
But what if my app serves only a thousand users who occasionally send data to the server?
If it works, when will I have to bother with configuring a more sophisticated web server? (I am looking for approximate metrics.)
In a nutshell, I would love to know what the built-in web server can do (with approximate thresholds) and what it cannot.
Thanks a lot!
There isn't one right answer to this question, but here are some things to keep in mind:
With the right amount of horizontal scaling, it is quite possible you could keep scaling out the debug server forever. When exactly you would need to start scaling (or switch to a "real" web server) also depends on the environment you are hosting in, the expectations of your users, etc.
The main issue you would probably run into is that the server is single-threaded by default: it handles each request one at a time, serially. That means if you are serving more than one request (including favicons and static items like images, CSS, and JavaScript files), requests will queue up and take longer. If any given request happens to take a long time (say, 20 seconds), your entire application is unresponsive for that time. This is only the default, of course: you can bump the thread count (or have requests handled in other processes), which may alleviate some of this. But it can still be slow under a "high" load, and what counts as "high" depends on your application and the maximum response time you consider acceptable.
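For example, turning on the built-in server's threaded mode is a one-line change. This relieves the serial bottleneck a little, but none of the other limitations below:

    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "hello"

    if __name__ == "__main__":
        app.run(threaded=True)  # one thread per request instead of serial handling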
Another issue is security: if you are concerned at ALL about security (and not just the security of the data in the application itself, but the security of the box that will be running it as well) then you should not use the development server. It is not ready to withstand any sort of attack.
Finally, the development server could just fail outright. It is not designed to be used as a long-running process (days, weeks, months), and so it has not been well tested to work in this capacity.
So, yes, it has limitations. Yes, you could still conceivably use it in production. And yes, I would still recommend using a "real" web server. If you don't like the idea of installing something like Apache or Nginx, you can go with a solution that is just as easy as "run a Python script" by using one of the standalone WSGI servers, which run a server designed for production with something as simple as python run_app.py on the command line. You typically just need a 4-5 line Python script that imports and creates the server object, points it at your Flask app, and runs it.
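For instance, that 4-5 line script might look like this with the waitress WSGI server (waitress is just one example of such a server; the myproject package name matches the gunicorn example below):

    # run_app.py
    from waitress import serve
    from myproject import app  # "myproject" exposes the Flask object `app`

    serve(app, host="0.0.0.0", port=8000)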
gunicorn could be run with only the following on the command line, no extra script needed:
gunicorn myproject:app
...where "myproject" is the Python package that contains the app Flask object. Keep in mind that one of developers of gunicorn would probably recommend against this approach. See https://serverfault.com/questions/331256/why-do-i-need-nginx-and-something-like-gunicorn.
The OP has long since moved on, but for those who encounter this question in the future I would just add that setting up an Apache server, even on a laptop, is free and pretty easy. It can readily be configured for as few or as many features as you want just by uncommenting or commenting out lines in the config file. There might be an even easier GUI method for doing that nowadays, but just editing the configs is simple.

REST: how to tell server to do some background process

I am building a client-side product with REST. All user interaction will be done with a browser (the config stuff will be on a server running on localhost). I want everything to be REST compliant, even though the application will be running on a client's machine on localhost and will never be accessible from the outside.
The commands are pretty simple:
update
restart
sync
Here's what I've come up with:
POST to / with 'action' parameter (JSON) detailing specifics
PUT a new resource
subsequent GET requests will return the status
when the command is complete, the resource is deleted
What would be the most RESTful way to implement this?
Note:
I'm not asking for scrutiny of my software architecture. I have reasons for choosing a REST interface instead of a Unix domain socket, CLI interface, or even a regular GUI interface. The justification would overcomplicate the question and make it too localized.
I have had the same need on a couple of different projects (both client only and server) and I am looking for community input on best practices.
I would POST to a /process resource with the parameters necessary to start the process, and have it return a Location header pointing to the resource that represents the process's status (/process/123). You can then GET that resource to see the latest information about the process.
I would not automatically delete the process resource: if you do that, the client can't know whether the process finished properly, just that it stopped running.
That said, the client can certainly DELETE the resource once it's done with it, or you can clean it up later, after enough time has passed that whoever was interested in it likely no longer is.
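A small sketch of this pattern in Flask (which fits the localhost setup described; the in-memory store, route names, and the run_update stub are all illustrative, and a real app would persist process state):

    import threading
    import uuid
    from flask import Flask, jsonify, url_for

    app = Flask(__name__)
    processes = {}  # id -> {"status": ...}; in-memory for illustration only

    def run_update(pid):
        # ... the actual long-running work (update/restart/sync) goes here ...
        processes[pid]["status"] = "done"

    @app.route("/process", methods=["POST"])
    def start_process():
        pid = str(uuid.uuid4())
        processes[pid] = {"status": "running"}
        threading.Thread(target=run_update, args=(pid,)).start()
        # 202 Accepted plus a Location header pointing at the status resource.
        return "", 202, {"Location": url_for("get_process", pid=pid)}

    @app.route("/process/<pid>", methods=["GET"])
    def get_process(pid):
        return jsonify(processes[pid])

    @app.route("/process/<pid>", methods=["DELETE"])
    def delete_process(pid):
        # The client, not the server, decides when the record is no longer needed.
        processes.pop(pid, None)
        return "", 204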

Is it possible to have an "internal" cron in MySQL 5?

The other day a friend suggested playing a web browser game called OGame. If you don't know it, I'll tell you what it is: an RTS game where you build things like mining factories, barracks, and so on. The interesting thing is that every building has a build time, and you can log off while it's building because construction keeps going.
Something like this, I believe, is managed by the DBMS. I have records that store the end time of a construction. How do I check when to update a building? Do I need an external application that checks every second which records need to be updated? Is it possible in MySQL 5 to have an internal scheduler that launches a procedure on this table? And if so, is it a best practice?
I built a similar game and stored the construction end times (and other events to be fired) in an events table. I wrote a PHP daemon that regularly checks the events table for expired records and acts on them accordingly.
I couldn't find a way to do it in the database itself (and if I later wanted to migrate to another DB, it would have needed rewriting anyway). A cron'd script may overlap with itself; a daemon can keep track of everything all the time and output debug information if events are queuing faster than they're being processed. I also added a cron job to periodically check that my daemon is still running and restart it if not.
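The core loop of such a daemon is small. A sketch in Python rather than PHP (the events table layout -- fires_at, fired, payload -- and the process_event hook are assumptions):

    import sqlite3  # stand-in; any DB-API driver for MySQL works the same way
    import time

    db = sqlite3.connect("game.db")

    def daemon_loop():
        while True:
            rows = db.execute(
                "SELECT id, payload FROM events "
                "WHERE fires_at <= ? AND fired = 0",
                (time.time(),),
            ).fetchall()
            for event_id, payload in rows:
                process_event(payload)  # your game logic (finish the building, etc.)
                db.execute("UPDATE events SET fired = 1 WHERE id = ?", (event_id,))
            db.commit()
            time.sleep(1)  # poll once per second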
See also: Creating a daemon in PHP (if you're using PHP)
Hope that helps.