Is there a way other than eval to prevent my Perl scripts from terminating on errors?

I am coding a web API that uses a MongoDB database, interacts with node.js, and starts all kinds of processes. Anything can go wrong, and if it does I want the API to return an "unknown error" message to the caller.
The problem is that sometimes the modules I'm using crash and the whole application dies without giving the API the opportunity to return that "unknown error" message. I want to control this without having to put an eval block around every database insert, process call, etc.
Is there something like an "autoeval"?

If your process is crashing, something is very wrong, and you should look into why that is and fix it.
But failing that, do all your work in a child process, and have the parent monitor it and return an error response.
Though even easier than that is running your service behind a proxy server (which you may very well be doing anyway) and ensuring that the proxy server returns an appropriate API response on proxy errors.
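A rough sketch of the child-process approach, where handle_request() and send_response() are hypothetical stand-ins for your actual request handling and response code:

# handle_request() does the risky work (DB inserts, process calls, ...);
# the parent only watches the child's exit status.
my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    handle_request();    # any crash here kills only the child
    exit 0;
}

waitpid($pid, 0);
if ($? != 0) {
    # Child died or exited non-zero: report instead of crashing the API
    send_response(500, "Unknown error");
}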

Related

Design and error handling in a Windows service

I have to design a Windows service and I have some questions:
Proper error handling: if there is an error, what happens to the service? Does it stay up and keep logging? Are errors recorded in the Event Viewer, or does the service crash?
What happens over a long run? How do you know for sure that everything is running as required and the service is not stuck?
How do I handle high memory consumption, out-of-memory conditions, or other errors that were never written to the log?
Handling users: what happens if the log was created under user A and the service is switched to user B? Is the log rewritten, or does it continue from the same point?
How do I handle start times? Is the service brought up automatically?
Thank you.
For error handling, the best I can recommend is taking advantage of try / catch blocks. This way you ensure that you handle the cases where something unexpected happens, and you can either try to correct it or bring the service down cleanly. Keep in mind that exceptions are not propagated outside the scope of a thread, so you need to handle them in each thread.
To be able to tell whether the service is doing fine, you can periodically log what the service is doing to the Event Log. If you do proper try / catch handling for each thread, it should go smoothly. In C# you can use log4net with the EventLogAppender to log crucial / error info in the Event Log.
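The guard-each-thread pattern is language-independent; here is a minimal sketch in Perl (to match the rest of this page), where do_work() and log_error() are hypothetical stand-ins for the thread's real job and your logging call:

use strict;
use warnings;
use threads;

# Exceptions do not cross thread boundaries, so each worker guards
# its own body and logs any failure itself.
sub worker {
    eval {
        do_work();    # hypothetical: the thread's real job
        1;
    } or log_error("worker died: $@");    # hypothetical logging call
}

my @threads = map { threads->create(\&worker) } 1 .. 4;
$_->join for @threads;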
If your service causes high memory usage for no apparent reason, it is likely a memory leak. Microsoft has a free tool called CLR Profiler that allows you to profile your service and see exactly what is causing the leak.
Unless you are dealing with user-protected files (in which case you need to consult the Log On tab of your service to give it the appropriate credentials), your service shouldn't depend on any logged-in user. Services run independently of the users on the computer.
A service can be set to start automatically, to start only on-demand, or to simply be disabled completely.

GWT: Client procedure and RPC request are always called several times with multiple thread IDs

For some client-side procedures, I implement remote logging to log the calling of the procedure. The log is printed several times with different thread IDs, even though the procedure is only called once. Some RPC requests are sent to the server a few times, which causes some database session problems. Is this normal? Is there any way to avoid it?
Thanks
This is not normal, and suggests there is a bug on your client causing it to send the same call more than once. Try adding logging on the client where you invoke the RPC call, and possibly add breakpoints to confirm why it is being called twice.
My best guess with no other information would be that you have more than one event handler wired up to the same button, or something like that.
--
More specifically, your servlet container starts multiple threads to handle incoming requests - if two requests come in close succession, they might be handled by different threads.
As you noted, this can cause problems with a database, where two simultaneous calls could be made to change the same data, especially if you have some checks to ensure that a servlet call cannot accidentally overwrite some newer data. This is almost certainly a bug in your client code, and debugging it should start there.

What should be returned from the API for CQRS commands?

As far as I understand, in a CQRS-oriented API exposed through a RESTful HTTP API, commands and queries are expressed through the HTTP verbs, with commands being asynchronous and usually returning 202 Accepted, while queries get the information you need. Someone asked me the following: supposing they want to change some information, they would have to send a command and then a query to get the resulting state. Why force the client to make two HTTP requests when you can simply return what they want in the HTTP response of the command, in a single HTTP request?
We had a long conversation on the DDD/CQRS mailing list a couple of months ago (link). One part of the discussion was the "one-way command", and this is what I think you are assuming. You will find that Greg Young is opposed to this pattern. A command changes the state and is therefore prone to failure, meaning it can fail and you should support this. A REST API with POST/PUT requests provides perfect support for this, but you should not just return 202 Accepted; really give some meaningful result back. Some people return 200 success along with an object that contains a URL to retrieve the newly created or updated object. If the command handler fails, it should return 500 and an error message.
Having fire-and-forget commands is dangerous since it can give a consumer the wrong idea about the system state.
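A minimal sketch of that shape, written as a hypothetical Mojolicious::Lite handler in Perl to match the rest of this page (create_widget() stands in for your real command handler):

use Mojolicious::Lite;

post '/widgets' => sub {
    my $c = shift;

    # create_widget() is a hypothetical command handler that dies on failure
    my $widget = eval { create_widget($c->req->json) };
    if (my $err = $@) {
        # The command failed: say so instead of pretending it was accepted
        return $c->render(json => {error => "$err"}, status => 500);
    }

    # Success: hand back a URL the client can use to query the new state
    $c->render(json => {url => "/widgets/$widget->{id}"}, status => 200);
};

app->start;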
My team also recently had a very heated discussion about this very thing. Thanks for posting the question. I have usually been the defender of the "fire and forget" style of commands. My position has always been that, if you want to be able to move to an async command dispatcher some day, then you cannot allow commands to return anything. Doing so would kill your chances, since an async command doesn't have much of a way to return a value to the original HTTP call. Some of my teammates really challenged this thinking, so I had to start considering whether my position was really worth defending.
Then I realized that async or not async is JUST an implementation detail. This led me to realize that, using our frameworks, we can build in middleware to accomplish the same thing our async dispatchers are doing. So we can build our command handlers the way we want to, returning whatever makes sense, and then let the framework around the handlers deal with the "when".
Example: my team is currently building an HTTP API in node.js. Instead of requiring a POST command to return only a blank 202, we return details of the newly created resource. This helps the front-end move on. The front-end POSTs a widget and opens a channel to the server's web socket, using the same command as the channel name. The request comes to the server and is intercepted by middleware, which passes it to the service bus. When the command is eventually processed synchronously by the handler, it "returns" via the web socket and the front-end is happy. The middleware can be disabled easily, making the API synchronous again.
There is nothing stopping you from doing that. If you execute your commands synchronously and create your projections synchronously, then it is easy to make a query directly after executing the command and return that result. If you do this asynchronously via the REST API, then you have no query result to send back. If you do it asynchronously within your system, then you can wait for the projection to be created and then send the response to the client.
The important thing is that you separate your write and read models in classic CQRS style. That does not mean that you cannot do a read in the same request as you do the command. Sure, you can send a command to the server and then, with SignalR (or something), wait for a notification that your projection has been created/updated. I do not see a problem with waiting for the projection to be created on the server side instead of on the client.
How you do this will affect your infrastructure and error handling. Also, you will hold the HTTP request open for longer if you return the result at once.

How do I avoid 502 responses with Plack and Mojolicious?

I have set up a small Mojolicious app to run behind Plack, acting as a proxy, like this:
use Plack::Builder;
use Plack::App::Proxy;

builder {
    mount "/q" => builder {
        # Forward everything under /q to the hypnotoad instance
        Plack::App::Proxy->new(remote => "http://127.0.0.1:3010")->to_app;
    };
};
I need to run it this way (rather than mounting the application directly) as I need to reload the app a few times a day, for reasons I can't go into here.
The app runs on hypnotoad, and when I hit it directly, everything's fine. However, when hit via the Plack proxy, I often get a 502 response - Gateway error: Connection timed out.
The funny thing is, when I reload once or twice, everything seems fine, and I get the proper response.
Can anybody help figure this out?
It's more than possible that the default timeout values in Mojolicious aren't high enough for your app, which may lead to the worker process being stopped by the manager, resulting in an invalid response to the Plack app and thus the 502. So check the config settings for the timeouts and modify them if necessary. You may also need to increase the number of workers if your app is under heavy load, although I suspect that's not the problem here.
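For reference, hypnotoad reads its settings from the app's "hypnotoad" config value; a sketch with placeholder numbers (tune them to your actual workload):

use Mojolicious::Lite;

# Placeholder values: adjust to your app's real load and latency
app->config(hypnotoad => {
    listen             => ['http://127.0.0.1:3010'],
    workers            => 8,      # more workers for heavier load
    heartbeat_timeout  => 60,     # seconds before a stalled worker is stopped
    inactivity_timeout => 120,    # seconds an idle connection may stay open
});

app->start;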
More useful information would be found in the Mojolicious app's log file - if you run hypnotoad with MOJO_LOG_LEVEL=debug, you will see the connection hit the app, and then a timeout if this is indeed the problem.
The response being fine on a reload is indicative of something being slow to load in your app, or perhaps a cache being populated, etc. It's hard to say without log entries from the hypnotoad server.

How can I prevent Windows from catching my Perl exceptions?

I have this Perl software that is supposed to run 24/7. It keeps open a connection to an IMAP server, checks for new mail and then classifies new messages.
Now I have a user that is hibernating his XP laptop every once in a while. When this happens, the connection to the server fails and an exception is triggered. The calling code usually catches that exception and tries to reconnect. But in this case, it seems that Windows (or Perl?) is catching the exception and delivering it to the user via a message box.
Anyone know how I can prevent that kind of wtf? Could my code catch a "system-is-about-to-hibernate" signal?
To clear up some points you already raised:
I have no problem with users hibernating their machines. I just need to find a way to deal with that.
The Perl module in question does throw an exception. It does something like die 'foo bar'. Although the application is completely browser based and doesn't use anything like Wx or Tk, the user gets a message box titled "poll_timer". The content of that message box is exactly the contents of $@ ('foo bar' in this example).
The application is compiled into an executable using perlapp. The documentation doesn't mention anything about exception handling, though.
I think that you're dealing with an OS-level exception, not something thrown from Perl. The relevant Perl module is making a call to something in a DLL (I presume), and the exception is getting thrown. Your best bet would be to boil this down to a simple, replicable test case that triggers the exception (you might have to do a lot of hibernating and waking the machines involved for this process). Then, send this information to the module developer and ask them if they can come up with a means of catching this exception in a way that is more useful for you.
If the module developer can't or won't help, then you'll probably wind up needing to use the Perl debugger to debug into the module's code and see exactly what is going on, and see if there is a way you can change the module yourself to catch and deal with the exception.
It's difficult to offer intelligent suggestions without seeing relevant bits of code. If you're getting a dialog box with an exception message, the program is most likely using either the Tk or wxPerl GUI library, which may complicate things a bit. With that said, my guess would be that it would be pretty easy to modify the exception handling in the program by wrapping the failure point in an eval block and testing $@ after the call. If $@ contains an error message indicating a connection failure, then re-establish the connection and go on your way.
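A minimal sketch of that approach, where poll_imap() and reconnect() are hypothetical stand-ins for the real calls:

# poll_imap() stands in for the call that dies when the connection
# drops; reconnect() re-establishes the IMAP session.
my $result = eval { poll_imap($imap) };
if (my $err = $@) {
    if ($err =~ /connect/i) {
        # Connection dropped (e.g. after hibernation): reconnect and retry
        $imap = reconnect();
    }
    else {
        die $err;    # unrelated failure: rethrow
    }
}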
Your user is not the exception but rather the rule. My laptop is hibernated between work and home. At work, it is on one DHCP network; at home, it is on another altogether. Most programs continue to work despite a confusing multiplicity of IP addresses (VMWare, VPN, plain old connection via NAT router). Those that don't (AT&T Net Client, for the VPN - unused in the office, necessary at home or on the road) recognize the disconnect at hibernate time (AT&T Net Client holds up the StandBy/Hibernate process until it has disconnected), and I re-establish the connection if appropriate when the machine wakes up. At airports, I use the local WiFi (more DHCP) but turn off the wireless altogether (one physical switch) before boarding the plane.
So, you need to find out how to learn that the machine is going into StandBy or Hibernation mode for your software to be usable. What I don't have, I'm sorry to say, is a recipe for what you need to do.
Some work with Google suggests that ACPI (Advanced Configuration and Power Interface) is part of the solution (Microsoft). APM (Advanced Power Management) may also be relevant.
I've found a hack to avoid modal system dialog boxes for hard errors (e.g. "encountered an exception and needs to close"). I don't know if the same trick will work for the kind of error you're describing, but you could give it a try.
See: Avoiding the “encountered a problem and needs to close” dialog on Windows
In short, set the
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Windows\ErrorMode
registry key to the value "2".