jasmine-node fails silently when unhandled exceptions happen in tests

I'm having a problem where jasmine-node silently fails if unhandled exceptions happen in a test.
If I run a single file, everything is OK and I get the expected jasmine output:
./node_modules/jasmine-node/bin/jasmine-node spec/unit/accessControlSpec.js
Finished in 0.011 seconds
4 tests, 6 assertions, 0 failures, 0 skipped
But if I run all the specs in a folder, it fails silently:
./node_modules/jasmine-node/bin/jasmine-node spec/unit
I tried --verbose and --captureExceptions, but no luck.
In this specific case, some code inside a test was calling a method that didn't exist.

So, it turns out the problem is that I wasn't calling the correct command, because I didn't install jasmine-node globally.
The correct way is:
node ./node_modules/jasmine-node/lib/jasmine-node/cli.js ./spec/unit
This is further described here: Command Line Usage
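A convenient way to avoid typing that path every time is to wrap the working command in an npm script; npm runs scripts from the project root, so no global install is needed. A minimal sketch of the package.json entry (the spec path is the one from my project):
{
  "scripts": {
    "test": "node ./node_modules/jasmine-node/lib/jasmine-node/cli.js spec/unit"
  }
}
After that, npm test runs the suite.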

Related

Issues with multiple GitHub self-hosted runners on the same server

Are there any reasons why this is not a good idea? I ask because I constantly experience very inconsistent results. For example, while setting up my GitHub Actions over the last few days, I must have run at least 200 workflows, yet for the first time ever I am now seeing this error:
Run ruby/setup-ruby@v1
with:
ruby-version: 3.0.2
bundler-cache: true
bundler: default
working-directory: .
cache-version: 0
env:
BUNDLE_GEMS__CONTRIBSYS__COM: ***
ImageOS: ubuntu20
Modifying PATH
Entries added to PATH to use selected Ruby:
/opt/hostedtoolcache/Ruby/3.0.2/x64/bin
Downloading Ruby
https://github.com/ruby/ruby-builder/releases/download/toolcache/ruby-3.0.2-ubuntu-20.04.tar.gz
Took 0.71 seconds
Extracting Ruby
/usr/bin/tar -xz -C /opt/hostedtoolcache/Ruby/3.0.2 -f /home/ubuntu/actions-runner-2/_work/_temp/7d0937cf-69b1-4c73-b1bd-7386fca820a2
/usr/bin/tar: x64/lib: Cannot utime: No such file or directory
/usr/bin/tar: Exiting with failure status due to previous errors
Took 0.52 seconds
Error: The process '/usr/bin/tar' failed with exit code 2
I have no idea why this is happening. If I re-run the same workflow, the error goes away. I'm not sure whether one runner is conflicting with another while trying to access the /opt/hostedtoolcache/ directory, or whether something else is going on.
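If it is a collision on /opt/hostedtoolcache, one mitigation I'm considering (a sketch, not a confirmed fix) is giving each runner its own tool cache. Setup actions built on actions/tool-cache read the RUNNER_TOOL_CACHE environment variable, which a self-hosted runner picks up from its .env file; the paths below are hypothetical:
# in actions-runner-1/.env
RUNNER_TOOL_CACHE=/opt/hostedtoolcache-runner1
# in actions-runner-2/.env
RUNNER_TOOL_CACHE=/opt/hostedtoolcache-runner2
With separate caches, two concurrent jobs could no longer race while extracting the same Ruby into one directory.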

How to fix missing Simulink simulation artifacts when running tests in parallel mode?

I have 29 Simulink/MATLAB tests that use many different reference models. Before running a 20-second simulation, each test has to load all of its reference models and create a lot of simulation artifacts in a work folder. Many reference models are shared between tests.
When running one test at a time, I have no issue: all simulation artifacts are created and used to run the various simulations, and everything passes.
When running them all via parallel processing, I have an issue: some simulation artifacts are not built or go missing, so simulations fail before they even run. Surprisingly, though, not all 29 of them fail. It's actually random: last time it was 17, another time 22, and once it even ran with 0 failures.
Another note: I only have this issue when running on a self-hosted machine on Azure Pipelines for CI purposes.
I would like to fix this and reproduce the stable pass/fail results of a one-at-a-time run, but in a parallel run. How would I do that?
Error:
2020-11-03T03:16:27.1083996Z Making simulation target "Foo_src_sfun", ...
2020-11-03T03:16:27.1084227Z
2020-11-03T03:16:27.1084361Z
2020-11-03T03:16:27.1084502Z
2020-11-03T03:16:27.1084789Z Microsoft (R) Program Maintenance Utility Version 14.00.24210.0
2020-11-03T03:16:27.1085188Z Copyright (C) Microsoft Corporation. All rights reserved.
2020-11-03T03:16:27.1085441Z
2020-11-03T03:16:27.1085815Z NMAKE : fatal error U1052: file 'Foo_src_sfun.mak' not found
2020-11-03T03:16:27.1086175Z Stop.
2020-11-03T03:16:27.1089399Z ================================================================================
2020-11-03T03:16:27.1089936Z Error occurred in TestSim/testSim(File=test_FooTest1_slx) and it did not run to completion.
2020-11-03T03:16:27.1090308Z
2020-11-03T03:16:27.1090497Z ---------
2020-11-03T03:16:27.1090720Z Error ID:
2020-11-03T03:16:27.1090946Z ---------
2020-11-03T03:16:27.1091254Z 'Slvnv:simcoverage:SimulationFailed'
2020-11-03T03:16:27.1091481Z
2020-11-03T03:16:27.1091669Z --------------
2020-11-03T03:16:27.1091919Z Error Details:
2020-11-03T03:16:27.1092186Z --------------
2020-11-03T03:16:27.1092419Z Error using cvsim
2020-11-03T03:16:27.1092659Z Simulation failed
2020-11-03T03:16:27.1092864Z
2020-11-03T03:16:27.1093112Z Error in testRunner (line 145)
2020-11-03T03:16:27.1093477Z [cvdo, simOutRes] = cvsim(testObj,paramStruct) ;
2020-11-03T03:16:27.1093765Z
2020-11-03T03:16:27.1094034Z Error in TestSim/testSim (line 30)
2020-11-03T03:16:27.1094373Z [cvdo, simOutRes, ErrLog] = testRunner(File,20);
2020-11-03T03:16:27.1094638Z
2020-11-03T03:16:27.1094830Z Caused by:
2020-11-03T03:16:27.1095168Z Error using autobuild_kernel>autobuild_local (line 219)
2020-11-03T03:16:27.1095612Z Unable to create mex function 'Foo_src_sfun.mexw64'
2020-11-03T03:16:27.1096006Z required for simulation.
2020-11-03T03:16:27.1096427Z ================================================================================
Update:
I found that I also get another kind of error, which leads to pretty much the same result.
2020-11-03T03:18:36.1668328Z Making simulation target "Foo2_src_sfun", ...
2020-11-03T03:18:36.1668601Z
2020-11-03T03:18:36.1668735Z
2020-11-03T03:18:36.1669087Z 'Foo2_src_sfun.bat' is not recognized as an internal or external command,
2020-11-03T03:18:36.1669483Z operable program or batch file.
2020-11-03T03:18:36.1669685Z
2020-11-03T03:18:36.1669892Z >>Removing MiL paths...
2020-11-03T03:18:36.1670104Z >>Done
I made a runSingleTest() that I run before my parallel run. It creates all the required model-reference mexw64 files in the **/work/sim_artifact folder.
Hence, when the tests run in parallel, they don't need to create any new files; they either use what's already there or update those files.
I have had no issues since that change, just a longer run time because of that repetitive extra test.
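For reference, a minimal sketch of that arrangement (runSingleTest is the helper described above; the test folder name and the text-output runner are assumptions, and runInParallel requires the Parallel Computing Toolbox):
% Serially build all shared model-reference artifacts once,
% so the parallel workers find them already in the work folder.
runSingleTest();

% Then run the whole suite in parallel; workers reuse or update
% the mexw64 files instead of racing to create them.
suite = testsuite('tests', 'IncludeSubfolders', true);
runner = matlab.unittest.TestRunner.withTextOutput;
results = runner.runInParallel(suite);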

Why is Google Test EXPECT_EXIT yielding the wrong exit code?

I have a function that returns nothing, but based on various conditions will exit the process by calling exit(1). So I duly added an EXPECT_EXIT to my test program,
EXPECT_EXIT(checkAndExit(), ::testing::ExitedWithCode(1), ".*");
and when I run it I get this:
Result: died but not with expected exit code:
Exited with exit status 23
Which is interesting, because I most definitely call exit(1).
This is on Ubuntu 16.04. When I run the same code on macOS, it works correctly: the process exits with 1 and the test passes.
I sifted through the Google Test source code a bit, and nothing popped out at me. Any ideas?
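For reference, here is a minimal self-contained version of the pattern (checkAndExit is a hypothetical stand-in for the real function). One documented knob worth trying on Linux is the "threadsafe" death test style, which re-executes the test binary instead of forking the current process; I can't say for certain it explains the exit status 23:
#include <cstdlib>
#include <gtest/gtest.h>

// Hypothetical stand-in for the real function under test.
void checkAndExit() { std::exit(1); }

TEST(CheckAndExitDeathTest, ExitsWithCode1) {
    // The default "fast" style forks the test process; "threadsafe"
    // re-runs the binary from main(), which can behave differently.
    ::testing::FLAGS_gtest_death_test_style = "threadsafe";
    EXPECT_EXIT(checkAndExit(), ::testing::ExitedWithCode(1), ".*");
}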

Gradle swallowing exception traces from ScalaTest tests

I have this in my build.gradle:
test {
    testLogging {
        exceptionFormat 'full'
        showExceptions true
        showStackTraces true
    }
}
This works with Java ("plain" JUnit) tests, but when I run ScalaTest tests, even with -i on the command line, all I get in case of a failure is something like this:
com.mypackage.mytests.ScalatestSpec > .apply should fail miserably FAILED
org.scalatest.exceptions.TestFailedException: 2 was not equal to 1
No stack trace or even a line number is printed, and I have to rerun the test manually to see where it actually failed.
Is there another special flag I have to set to get it to stop swallowing my output?
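For reference, the fullest form of that block using standard Gradle Test options (events, showCauses, and the rest are all documented testLogging settings; whether ScalaTest's runner reports full traces through them is exactly the open question):
test {
    testLogging {
        events 'failed'
        exceptionFormat 'full'
        showExceptions true
        showCauses true
        showStackTraces true
    }
}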

PowerShell Try/Catch

I have a simple PowerShell script below which executes abc.exe (a console application) with a few arguments.
& abc.exe ar1 ar2
abc.exe is a .NET executable, so it has its own exception handling.
Whenever abc.exe throws an exception, I would like the PowerShell script to catch it and log/echo it.
Could someone help me achieve this?
No.
The exception in abc.exe will bubble up to its main method, but no further; it cannot cross the process boundary into your script. You can, however, check the exit code (ERRORLEVEL) of abc.exe by looking at $LASTEXITCODE.
What you should do:
Your abc.exe, like any exe, should return exit code 0 if everything was OK and another number in case of error.
abc.exe's main method could write an error message when a problem occurs. That way, when you invoke it from a PowerShell script, you will see the error message in the console, and your script can then check the exit code (see the sketch after the list below).
Additionally, you can use different exit codes in abc.exe to convey more information to the PowerShell script. For example, these are 7-Zip's exit codes:
0 --> No error
1 --> Warning
2 --> Fatal error
7 --> Command line error
8 --> Not enough memory for operation
255 --> User stopped the process
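A minimal sketch of the PowerShell side, assuming abc.exe follows that convention (the exit-code check is the real mechanism; the logging lines are just illustrative):
$output = & abc.exe ar1 ar2 2>&1    # merge stderr into the captured output
if ($LASTEXITCODE -ne 0) {
    Write-Host "abc.exe failed with exit code ${LASTEXITCODE}:"
    Write-Host $output
}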