PowerShell: log all unhandled exceptions

I am looking to log unhandled exceptions, since my script runs as an automated tool with no one monitoring progress at the console. My thinking is to create my own exception class with a constructor that accepts [InvocationInfo], so I can log the error with file and line number info, and possibly the stack trace as well. The goal is to catch as many exception types as possible and handle them, but have a generic final catch for exceptions that are literally my code failing. It occurs to me that the entire script would be one big try/catch, so that anything I didn't expect (so long as it's terminating) would get caught and logged.
To that end I have this mocked up:
class UnhandledException : Exception {
    UnhandledException () : base () {
    }

    # Accepts the InvocationInfo from the ErrorRecord, so the message can
    # include the script file and line number where the failure occurred
    UnhandledException ([System.Management.Automation.InvocationInfo] $invocationInfo) {
        Write-Host "Unhandled Exception in $([System.IO.Path]::GetFileName($invocationInfo.ScriptName)) at line $($invocationInfo.ScriptLineNumber)"
    }
}
CLS
try {
    1/0
} catch [System.IO.IOException] {
    # Handled exception types go here
} catch {
    # Anything that reaches this block is, by definition, unhandled
    $unhandledException = [UnhandledException]::new($PSItem.InvocationInfo)
    #throw $unhandledException
}
This seems to be working well, and I am debating whether that final throw is needed, or if I can just as well terminate the script from within the exception class, since by definition, once I have logged that info and maybe thrown up a toast message about the failure, I will be exiting the script anyway.
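To illustrate, the final catch could simply log and exit rather than rethrow. This is just a sketch; the log path and exit code are placeholders, not settled design:

try {
    1/0
} catch {
    $unhandledException = [UnhandledException]::new($PSItem.InvocationInfo)
    # Persist whatever detail is useful, then end the run instead of rethrowing
    Add-Content -Path "$PSScriptRoot\unhandled.log" -Value $PSItem.ScriptStackTrace
    exit 1   # non-zero exit code so a calling process can still detect the failure
}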
My question is, is this an appropriate way to handle exceptions when the script is functioning as a silent command line utility? I really don't want a situation where, if the console is showing, raw PowerShell exceptions are visible. I want to handle everything, to the extent I can, quietly and with log files. Those logs could then be sent to me so I could troubleshoot.
That said, I have not found any info on wrapping an entire script in a try/catch. This suggests that "catch everything" is a code smell, but it's also talking more about methods that are consumed by other users, not utility scripts.
The bit about std::uncaught_exception() sounds like it might be an option too, if I could have my regular log for recording actual progress of the script, errors in data, inability to access network resources, etc., all of which would be exceptions that I do catch and handle. If I could define a log file that is ONLY for otherwise uncaught exceptions, that might be even better, but I also haven't found anything like that for PowerShell. So my approach is my backup plan. Unless I am missing something and this is a horrible idea?
And, to be clear, I am not thinking this would be the ONLY exception handling. This would be the handler of last resort, logging anything that I hadn't otherwise expected, planned for and handled.
EDIT: So, a limitation I have found with trap is that it still only traps terminating errors, and ideally I would like to also get a log of continued (non-terminating) errors as well.
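For reference, the trap approach I tested looked roughly like this minimal sketch (the log path is a placeholder):

trap {
    # Runs for any terminating error not handled by an enclosing try/catch
    Add-Content -Path 'C:\temp\unhandled.log' -Value "$($_.InvocationInfo.ScriptName):$($_.InvocationInfo.ScriptLineNumber) $($_.Exception.Message)"
    continue   # resume at the next statement; use 'break' to stop instead
}
1/0   # a terminating error that lands in the trap above

To that end, I have been exploring redirects. I have tried this: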
function LocalFunction {
    1/0
}

& {
    CLS
    LocalFunction
    Remove-Item 'Z:\no.txt' -ErrorAction SilentlyContinue
    Test-Path 'C:\'
    Write-Host 'Continued'
} 2>> c:\errors.txt
This will successfully log the divide by 0 error in the function and the error at Remove-Item when -errorAction is Continue (the default), but when I specially set it to SilentlyContinue it isn't logged. This seems to get me where I want to be, with ANY error that would be seen in the console instead going to the text file. I could then, at the end of processing, test the size of that file, and delete if 0 or provide a toast message if something got logged.
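The end-of-processing check I have in mind would be something like this sketch (the toast is just a Write-Host placeholder here):

$errFile = 'c:\errors.txt'
if ((Get-Item $errFile -ErrorAction SilentlyContinue).Length) {
    Write-Host "Errors were logged to $errFile"   # placeholder for a real toast notification
} else {
    Remove-Item $errFile -ErrorAction SilentlyContinue   # nothing logged; clean up the empty file
}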
The real question becomes: is the & { } construct around basically the entire script a viable option once it's a 10,000 line script, rather than a little example? And is it a good idea in general? Or is this perhaps something useful during development, but I just need to put on my big boy pants and actually HANDLE every possible error?
EDIT 2: Well, after doing some tests on a branch of my utility, this redirect approach is actually looking REALLY promising. Apparently there is no impact on performance, and I can even add the contents of my error log to my regular log to make things easier for users. Curious whether anyone has any counterindications?
Also, a little digging suggests that Invoke-Expression might be better, because the & operator creates a child scope, and that might cause problems, while Invoke-Expression doesn't. But on the other hand, Invoke-Expression is right up there with regular expressions in the "Don't do that" hierarchy. Things that make you go hmmmmmm?
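One middle ground I am considering, if the child scope of & turns out to matter: dot-sourcing the script block runs it in the current scope and still allows the redirection. A sketch, with the same placeholder log path as above:

. {
    CLS
    LocalFunction   # defined earlier; still visible since this runs in the current scope
    Write-Host 'Continued'
} 2>> c:\errors.txt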

Related

PowerShell - How to log FULL exception including all useful details?

It has been a while since the last time I wrote a PowerShell script. The first thing I did was implement a logging function in my script, but now I notice that no matter whether I log $_ or $_.Exception in a catch block, there is always some information missing when the log is written to a file. How can I log the exception with all details, like line number, stack trace and so on, to a file without writing five lines of code just for formatting the exception manually? Is there really no standard way of doing this in PowerShell?
Technically, it depends on the exception.
Every exception is an instance of the Exception class or one of its derivatives.
This means that all exceptions are guaranteed to have the basic properties and methods listed in the MSDN Exception doc, but an exception can additionally have its own properties or methods. For example, ArgumentException derives from Exception, but it has an optional ParamName property.
You can catch predicted exception types with separate catch blocks, or you can enumerate all the properties using, for example, Get-Member, and log them. But a catch block should not be complex.
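A minimal sketch of that Get-Member approach; the failing command and log path are only examples:

try {
    Get-Item 'C:\does-not-exist.txt' -ErrorAction Stop
} catch {
    $record = $_
    $lines = @(
        "Type      : $($record.Exception.GetType().FullName)"
        "Message   : $($record.Exception.Message)"
        "Script    : $($record.InvocationInfo.ScriptName)"
        "Line      : $($record.InvocationInfo.ScriptLineNumber)"
        "StackTrace: $($record.ScriptStackTrace)"
    )
    # Append any type-specific properties (e.g. ParamName on an ArgumentException)
    $record.Exception | Get-Member -MemberType Property | ForEach-Object {
        $lines += "$($_.Name) : $($record.Exception.($_.Name))"
    }
    Add-Content -Path 'C:\temp\errors.log' -Value $lines
}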
Additionally, try using Start-Transcript instead of custom logging; maybe this will be enough for your needs. Don't forget to set the compression flag on the folder the transcripts will be written to.
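For example (the path is arbitrary):

Start-Transcript -Path 'C:\logs\run.log' -Append
# ... script body; whatever would appear on the console is captured in the transcript ...
Stop-Transcript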

Can a MATLAB function called within a script cause the script to break?

I am running a script which calls a function, and if a certain condition is met inside the function, I want the whole thing just to terminate (and by that I do not mean I want to close MATLAB using exit). Is that possible? I know I can use return or break to return to the script; however, I want the script to stop as well if this condition is met.
The only function I know of that does this is error. It throws an exception and, if no exception handlers with try and catch are installed in the calling script, will terminate and return to the command prompt, which is what you want, as far as I understand. It prints an error message, though. This could be suppressed if you guard all code in the top-level script with a try/catch handler. However, this would have to be specific to the one error, and it makes debugging ("stop on error") much more difficult.
The thing is, the only use case I see for this behavior (termination of the whole program on a certain event) is when a non-recoverable error occurs, and in that case printing an error message is indeed appropriate.
If the script is successful, terminating the whole program is not really the right way to end. All functions should return, to give the upper layers of the code a chance to perform clean-up actions, like saving the output data.

PowerShell Error Catching With Static Member

For reasons I'll not get into here, I'm being forced to use PowerShell to call and then populate a separate application, and to integrate it into a batch file, so powershell -command "& { }", which is already painful. I've got a while loop set up to call, check for the process ID to come up, then wait and call again if it hasn't come up yet.
The problem comes afterwards, when I use a static member out of Visual Basic to switch focus to that application.
Namely, [Microsoft.VisualBasic.Interaction]::AppActivate($hwnd), where $hwnd is the process ID of the application in question.
I hate to put anything like an artificial timer on there to wait for the application to finish loading, and I'd love to just put a while loop in there instead. But static member calls don't appear to support -ErrorAction or -ErrorVariable, and try {} catch {} appears to just ignore the error. I was hoping to use it to set a flag that makes the while loop cycle again after a sleep of one second.
What other ways are there to catch errors out of the static member operator ::?
Disregard. The Try {} catch {} works great if I don't replace the final bracket with a close parenthesis.
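For anyone landing here, a sketch of the working pattern described above, with an attempt cap added for safety. $hwnd is assumed to hold the process ID located by the earlier while loop:

Add-Type -AssemblyName Microsoft.VisualBasic
$attempts = 0
$activated = $false
while (-not $activated -and $attempts -lt 30) {
    try {
        [Microsoft.VisualBasic.Interaction]::AppActivate($hwnd)   # throws if the window isn't ready yet
        $activated = $true
    } catch {
        $attempts++
        Start-Sleep -Seconds 1   # wait and try again
    }
}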

How to "throw" a %Status to %ETN?

Many of the Caché API methods return a %Status object which indicates if this is an error. The thing is, when it's an unknown error I don't know how to handle (like a network failure), what I really want to do is "throw" the error so my code stops what it's doing and the error gets caught by some higher-level error handler (and/or the built-in %ETN error log).
I could use ztrap like:
s status = someObject.someMethod()
ztrap:$$$ISERR(status)
But that doesn't report much detail (unlike, say, .NET, where I can throw an exception all the way to the top of the stack), and I'm wondering if there are any better ways to do this.
Take a look at the Class Reference for %Exception.StatusException. You can create an exception from your status and throw it to whatever error trap is active at the time (so the flow of control would be the same as in your ZTRAP example), like this:
set sc = someobj.MethodReturningStatus()
if $$$ISERR(sc) {
    set exception = ##class(%Exception.StatusException).CreateFromStatus(sc)
    throw exception
}
However, in order to recover the exception information inside the error trap code that catches this exception, the error trap must have been established with try/catch. The older error handlers, $ztrap and $etrap, do not provide you with the exception object, and you will only see that you have a <NOCATCH> error as the $ZERROR value. Even in that case, the flow of control will work as you want it to, but without try/catch you would be no better off than you are with ZTRAP.
These are two different error mechanisms and can't be combined in this way. ztrap and %ETN are for Caché-level errors (the angle-bracket errors like <UNDEFINED>). %Status objects are for application-level errors (including errors that occurred through the use of the Caché Class Library), and you can choose how you want to handle them yourself. It's not really meaningful to handle a bad %Status through the Caché error mechanism, because no Caché error has occurred.
Generally what most people do is something akin to:
d:$$$ISERR(status) $$$SomeMacroRelevantToMyAppThatWillHandleThisStatus(status)
It is possible to create your own domain with a whole host of %Status codes, with attendant %msg values, for your application. Your app might have tried to connect to an FTP server and had a bad password, but that doesn't throw a <DISCONNECT>, and there is no reason to investigate the stack; it's just an application-level error that needs to be handled, possibly by asking the user to enter a new password.
It might seem odd that there are these two parallel error mechanisms, but they describe two different types of errors: think of one of them as "platform-level" errors and the other as "application-level" errors.
Edit: One thing I forgot: try DecomposeStatus^%apiOBJ(status) or ##class(%Status).LogicalToOdbc(status) to convert the status object to a human-readable string. Also, if you're doing command-line debugging or just want to print the readable form to the principal device, you can use $system.OBJ.DisplayError(status).

What's the best practice in case something goes wrong in Perl code? [duplicate]

Possible Duplicates:
How can I cleanly handle error checking in Perl?
What’s broken about exceptions in Perl?
I saw code which works like this:
do_something($param) || warn "something went wrong\n";
and I also saw code like this:
eval {
    do_something_else($param);
};
if ($@) {
    warn "something went wrong\n";
}
Should I use eval/die in all my subroutines? Should I write all my code based on stuff returned from subroutines? Isn't eval'ing the code (over and over) gonna slow me down?
Block eval isn't string eval, so no, it's not slow. Using it is definitely recommended.
There are a few annoying subtleties to the way it works, though (mostly annoying side effects of the fact that $@ is a global variable), so consider using Try::Tiny instead of memorizing all of the little tricks that you need to use eval defensively.
do_something($param) || warn "something went wrong\n";
In this case, do_something is expected to return an error code if something goes wrong. Either it can't die, or if it does, that is a really unusual situation.
eval {
    do_something_else($param);
};
if ($@) {
    warn "something went wrong\n";
}
Here, the assumption is that the only mechanism by which do_something_else communicates something going wrong is by throwing exceptions.
If do_something_else throws exceptions in truly exceptional situations and returns an error value in some others, you should also check its return value.
Using the block form of eval does not cause extra compilation at run time so there are no serious performance drawbacks:
In the second form, the code within the BLOCK is parsed only once--at the same time the code surrounding the eval itself was parsed--and executed within the context of the current Perl program. This form is typically used to trap exceptions more efficiently than the first (see below), while also providing the benefit of checking the code within BLOCK at compile time.
Modules that warn are very annoying. Either succeed or fail. Don't print something to the terminal and then keep running; my program can't take action based on some message you print. If the program can keep running, only print a message if you have been explicitly told that it's ok. If the program can't keep running, die. That's what it's for.
Always throw an exception when something is wrong. If you can fix the problem, fix it. If you can't fix the problem, don't try; just throw the exception and let the caller deal with it. (And if you can't handle an exception from something you call, don't.)
Basically, the reason many programs are buggy is because they try to fix errors that they can't. A program that dies cleanly at the first sign of a problem is easy to debug and fix. A program that keeps running when it's confused just corrupts data and annoys everyone. So don't do that. Die as soon as possible.
Your two examples do entirely different things. The first checks for a false return value, and takes some action in response. The second checks for an actual death of the called code.
You'll have to decide for yourself which action is appropriate in each case. I would suggest simply returning false in most circumstances. You should only be explicitly dieing if you have encountered errors so severe that you cannot continue (or there is no point in continuing, but even then you could still return false).
Wrapping a block in eval {} is not the same thing as wrapping arbitrary code in eval "". In the former case, the code is still parsed at compile time, and you do not incur any extra overhead. You will simply catch any death of that code (but you won't have any indication as to what went wrong or how far you got in your code, except for the value that is left for you in $@). In the latter case, the code is treated as a simple string by the Perl interpreter until it is actually evaluated, so there is a definite cost here as the interpreter is invoked (and you lose all compile-time checking of your code).
Incidentally, the way you called eval and checked the value of $@ is not the recommended form; for an extensive discussion of exception gotchas and techniques in Perl, see this discussion.
The first version is very "perlish" and pretty straightforward to understand. The only drawback of this idiom is that it is readable only for short cases. If error handling needs more logic, use the second version.
Nobody's really addressed the "best practice" part of this yet, so I'll jump in.
Yes, you should definitely throw an exception in your code when something goes wrong, and you should do it as early as possible (so you limit the code that needs to be debugged to work out what's causing it).
Code that does stuff like return undef to signify failure isn't particularly reliable, simply because people will tend to use it without checking for the undef return value, meaning that a variable they assume has something meaningful in it actually may not. This leads to complicated, hard-to-debug problems, and even unexpected problems cropping up later in previously working code.
A more solid approach is to write your code so that it dies if something goes wrong, and then, only if you need to recover from that failure, wrap any calls to it in eval { .. } (or, better, try { .. } catch { .. } from Try::Tiny, as has been mentioned). In most cases there won't be anything meaningful the calling code can do to recover, so the calling code remains simple in the common case, and you can just assume you'll get a useful value back. If something does go wrong, you'll get an error message from the actual part of the code that failed, rather than silently getting an undef. If your calling code can do something to recover from failures, it can arrange to catch exceptions and do whatever it needs to.
Something that's worth reading about is Exception classes, which are a structured way to send extra information to calling code, as well as allow it to pick which exceptions it wants to catch and which it can't handle. You probably won't want to use them everywhere in your code, but they're a useful technique when you have something complicated that can fail in equally complicated ways, and you want to arrange for failures to be recoverable.