Swallowing AssertionExceptions in NUnit

Running the code below in both VS Code and Visual Studio reports the test as failed, even though the exception is swallowed :(
Why does it work this way? How can I make NUnit forget about the thrown exception?
[Test]
public void TestExceptionReporting() {
    try {
        Assert.False(true);
    } catch (AssertionException e) {
        Log.Debug($">>> {e.ToString()}");
    }
}

Why does it work that way...
Because NUnit processes and records the error internally before you can catch the exception. The exception is propagated after processing solely as a way to terminate the test. For that reason, it is no longer a good idea to catch NUnit's own exceptions in a test.
How can you make NUnit forget about the test failure?
This is an XY problem. Please explain why you want NUnit to notice a failure and then forget it. There are lots of ways to make NUnit take note of a condition without failing, but to answer, folks need to know what you are actually trying to do.
You could either edit this question (and I can edit my answer) or ask a new question about what you really want to do.
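For example, if the goal is simply to record a condition on the test result without failing the test, a minimal sketch along these lines might do (my own illustration, assuming NUnit 3.6 or later, where Warn.If and Assert.Warn are available; SomethingToCheck is a hypothetical helper standing in for the real check):
using NUnit.Framework;

[TestFixture]
public class ConditionWithoutFailureExamples {
    [Test]
    public void RecordsConditionWithoutFailing() {
        bool conditionHolds = SomethingToCheck(); // hypothetical helper

        // Attach diagnostic output to the test result without affecting the outcome.
        TestContext.WriteLine($">>> conditionHolds = {conditionHolds}");

        // Record a warning (not a failure) when the condition does not hold.
        Warn.If(!conditionHolds, "Condition did not hold, but the test is not failed.");
    }

    private static bool SomethingToCheck() => true; // placeholder
}
Warnings show up in the test results without marking the test as failed, which may or may not be what you actually need here; that is exactly why the underlying goal matters.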

Related

PowerShell log all unhandled exceptions

I am looking to log unhandled exceptions, since my script runs as an automated tool with no one monitoring progress at the console. My thinking is to create my own exception class, which has a constructor that accepts [InvocationInfo] so I can then log the error with file and line number info, and possibly the stack trace as well. The goal is to catch as many exception types as possible and handle them, but have a generic final catch for exceptions that are literally my code failing. It occurs to me that the entire script would be one big try/catch so that literally anything I didn't expect (so long as it's terminating) would get caught and logged.
To that end I have this mocked up:
class UnhandledException : Exception {
    UnhandledException () : base () {
    }
    UnhandledException ([System.Management.Automation.InvocationInfo] $InvokationInfo) {
        Write-Host "Unhandled Exception in $([System.IO.Path]::GetFileName($InvokationInfo.ScriptName)) at line $($InvokationInfo.ScriptLineNumber)"
    }
}
CLS
try {
    1/0
} catch [System.IO.IOException] {
    # Handled exception types go here
} catch {
    $unhandledException = [UnhandledException]::new($PSItem.InvocationInfo)
    #throw $unhandledException
}
This seems to be working well, and I am debating whether that final throw is needed, or if I can just as well terminate the script from within the exception class, since by definition once I have logged that info, and maybe thrown up a toast message about the failure, I will be exiting the script anyway.
My question is: is this an appropriate way to handle exceptions when the script is functioning as a silent command-line utility? I really don't want a situation where, if the console is showing, PowerShell exceptions are visible. I want to handle everything, to the extent I can, quietly and with log files. Those logs could then be sent to me so I could troubleshoot.
That said, I have not found any info on wrapping the entire script in a try/catch. This suggests that "catch everything" is a code smell, but it's also talking more about methods that are consumed by other users, not utility scripts.
The bit about std::uncaught_exception() sounds like it might be an option too, if I could have my regular log for recording actual progress of the script and errors in data, inability to access network resources, etc., all of which would be exceptions that I do catch and handle. If I could define a log file that is ONLY for otherwise uncaught exceptions, that might be even better, but I also haven't found anything like that for PowerShell. So my approach is my backup plan. Unless I am missing something and this is a horrible idea?
And, to be clear, I am not thinking this would be the ONLY exception handling. This would be the handler of last resort, logging anything that I hadn't otherwise expected, planned for, and handled.
EDIT: So, a limitation I have found with trap is that it still only traps terminating errors, and ideally I would like to also get a log of continued errors as well. To that end, I have been exploring redirects. I have tried this:
function LocalFunction {
    1/0
}
& {
    CLS
    LocalFunction
    Remove-Item 'Z:\no.txt' -ErrorAction SilentlyContinue
    Test-Path 'C:\'
    Write-Host 'Continued'
} 2>> c:\errors.txt
This will successfully log the divide-by-zero error in the function, and the error at Remove-Item when -ErrorAction is Continue (the default), but when I specifically set it to SilentlyContinue it isn't logged. This seems to get me where I want to be, with ANY error that would be seen in the console instead going to the text file. I could then, at the end of processing, test the size of that file, and delete it if it's 0 or provide a toast message if something got logged.
The real question becomes: is the &{} construct around basically the entire script a viable option once it's a 10,000-line script, rather than a little example? And is it a good idea in general? Or is this perhaps something useful during development, but I just need to put on my big boy pants and actually HANDLE every possible error?
EDIT 2: Well, after doing some tests on a branch of my utility, this redirect approach is actually looking REALLY promising. Apparently no impact on performance, and I can even add the contents of my error log to my regular log to make things easier for users. Curious if anyone has some counter-indications?
Also, a little digging suggests that Invoke-Expression might be better, because the & operator creates a child scope, and that might cause problems, while Invoke-Expression doesn't. But on the other hand, Invoke-Expression is right up there with regular expressions in the "Don't do that" hierarchy. Things that make you go hmmmmmm?

Why does Dart have so many silent runtime exceptions/errors?

I have been getting very frustrated with Dart, as runtime exceptions seem to fail silently. Program execution will continue after some type of failure and leave me totally stumped about what is not working and why. For example, in using Aqueduct for a Dart server (this is not an issue unique to Aqueduct, but Dart in general), I have this code:
jsonData.forEach((jsonObject) async {
    ManagedObject record = Document();
    record.read(jsonObject);
    print(Util.jsonFormatToString(record.asMap()));
    // ...more code
});
In this case, the application fails silently on record.read(), falls out of the forEach loop, and then continues code execution. During debugging, the application returns completely strange results; presumably there is a problem with the jsonObject being read into the managed object, but Dart gives no notice that there is a problem or what that problem might be.
This is one example of many I have been running into.
Why does Dart fail so silently, or am I missing some setting somewhere that is hiding critical info? I am using the IntelliJ IDE.

UnitTestOutcome.Timeout equivalent in NUnit

I am migrating our project from MSTest to NUnit.
I have a scenario where I need to evaluate the condition below:
testContext.CurrentTestOutcome.Equals(UnitTestOutcome.Timeout)
Can you please suggest the NUnit equivalent to MSTest's UnitTestOutcome.Timeout?
The question is not entirely clear. @Francesco B. already interpreted it as meaning "How can I specify a timeout?" and answered accordingly.
I understand you to be asking "How can I detect that my test has timed out?" Short answer: you can't detect it in the test itself. It can only be detected by a runner that is executing your test.
Longer answer...
You can examine the test context in your teardown to see what the outcome of the test was, using TestContext.CurrentContext.Result.Outcome. This is useful if your teardown needs to know whether the test has failed.
However, you will never see an outcome of "timed out", because:
1. Your teardown is included in what gets timed by the Timeout attribute.
2. Your teardown won't be called if the test method triggers the timeout.
3. Even if the first two points were not true, there is no "timed out" outcome. The test is marked as a failure, and only the message indicates that it timed out.
Of course, if I misunderstood the question and you just wanted to know how to specify a timeout, the other answer is what you want. :-)
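To make the teardown approach concrete, here is a minimal sketch (my own illustration, assuming the NUnit 3 TestContext API; the test itself is just a placeholder) of inspecting the recorded outcome after each test:
using NUnit.Framework;
using NUnit.Framework.Interfaces;

[TestFixture]
public class OutcomeInTearDownExample {
    [TearDown]
    public void LogOutcome() {
        // Runs after each test; inspects the outcome NUnit has recorded so far.
        ResultState outcome = TestContext.CurrentContext.Result.Outcome;
        if (outcome.Status == TestStatus.Failed) {
            TestContext.WriteLine($"Test failed: {TestContext.CurrentContext.Result.Message}");
        }
        // As noted above, a timed-out test never reaches this point with a
        // distinct "timed out" status; it is simply reported as a failure.
    }

    [Test]
    public void SomeTest() {
        Assert.Pass(); // placeholder test
    }
}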
As per the official documentation, you can use the Timeout attribute:
[Test, Timeout(2000)]
public void PotentiallyLongRunningTest()
{
    ...
}
Of course you will have to provide the timeout value in milliseconds; past that limit, your test will be listed as failed.
There is a known "rare" case where NUnit doesn't respect the timeout, which has already been discussed.

ScalaTest: Auto-skip tests that exceed timeout

As I have a collection of Scala tests that connect to remote services (some of which may not be available at the time of test execution), I would like to have a way of indicating tests that should be ignored if they exceed a desired timeout threshold.
Indeed, I could enclose the body of a test in a future and have it auto-pass if the timeout is exceeded, but having slow tests silently pass strikes me as risky. It would be better if the test were explicitly skipped during the test run. So, what I would really like is something like the following:
ignorePast(10 seconds) should "execute a service that is sometimes unavailable" in {
    invokeServiceThatIsSometimesUnavailable()
    ....
}
Looking at the ScalaTest documentation, I don't see this feature supported directly, but suspect that there might be a way to add this capability? Indeed, I could just add a "tag" to "slow" tests and tell the runner not to execute them, but I would rather the tests be automatically skipped when the timeout is exceeded.
I believe that's not something your test framework should be responsible for.
Wrap your invokeServiceThatIsSometimesUnavailable() in an exception handling block and you'll be fine.
try {
    invokeServiceThatIsSometimesUnavailable()
} catch {
    case e: YourServiceTimeoutException => reportTheIgnoredTest()
}
I agree with Maciej that exceptions are probably the best way to go, since the timeout happens within your test itself.
There's also assume (see here), which allows you to cancel a test if some prerequisite fails. You could also use it within a single test, I think.

Strange behavior for PESSIMISTIC_WRITE?

I am new to JPA 2.0 locking, so it might be that I am missing something.
Using NetBeans, I tried to debug a stateless session bean. I tried switching between two threads to examine the concept:
em.lock(entity, LockModeType.PESSIMISTIC_WRITE);
em.persist(entity);
try {
    em.flush();
} catch (Exception e) {
    System.out.println("Already Locked!");
}
I let the first process finish
em.flush();
(no exceptions). Then I switched to the second process. Surprisingly, it paused after the first line and continued only after the first process exited the function.
Note: It was all working as expected with LockModeType.OPTIMISTIC.
Is this normal behavior? Am I missing something? Here it seems to behave in a different way.
Thanks,
Danny
It is perfectly normal behavior. The lock is released at transaction commit/rollback, and that does not happen as a consequence of calling em.flush().