Redirect standard output and standard error when executing a method - fantom

I have a program that tests each method in a Test# subclass and outputs XML in JUnit's XML format.
For instance:
class ExampleTest : Test
{
  Void testOne()
  {
    ...
  }
}
I want to execute the testOne method and capture the standard output and standard error produced in it. This out and err output will be included in the XML report.
My first idea was to look at sys::Env. The environment class sys::Env has err and out streams, but they are read-only.
My second idea is to launch a sys::Process for each test method and redirect sys::Process#err and sys::Process#out inside it, but I'm afraid it will be very slow.
Is there another way to do it?

You won't be able to redirect output from your current process (and really should not).
If the output absolutely has to be stdout/err - you'll need to go the Process route. You'll take the fork/jvm/stream setup hit, but that may be negligible compared to your test runtime.
A better option would be to log using the Logging API - which will give more control over what gets logged, and where things go.
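For example, a rough sketch of the logging route, assuming the tests write through sys::Log instead of printing to Env.cur.out: the runner registers a handler around each test method and collects the records for the XML report.

// sketch only: collect log records emitted while one test method runs
recs := LogRec[,]
handler := |LogRec rec| { recs.add(rec) }
Log.addHandler(handler)
try
{
  // invoke the test method here, e.g. via reflection
}
finally
{
  Log.removeHandler(handler)
}
// recs now holds everything the test logged; write it into the XML report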

Go Unit Test irrelevant error "hostname resolving error"

I am trying to write unit tests for this project.
It appears that I need to refactor a lot, and I'm currently working on that. While trying to write unit tests for the functions in project/api/handlers.go, I always get an error related to DB initialization. The DB comes from a PostgreSQL Docker container. The error says the given hostname is not valid, yet outside of testing everything works with no problem. Also, for Dockerized PostgreSQL the container name is used as the hostname, so this shouldn't be a problem.
The error is:
DB connection error: failed to connect to host=postgresdbT user=postgres database=worth2watchdb: hostname resolving error (lookup postgresdbT: no such host)
Process finished with the exit code 1
Anyway, I did a couple of refactors and managed to abstract the handler functions away from the DB query functions, but this error still occurs and I cannot run the test. So finally I decided to run a totally blank test within the same package that simply does a check with the bcrypt package:
func TestCheckPasswordHash(t *testing.T) {
    ret, err := HashPassword("password")
    assert.Nil(t, err)
    ok := CheckPasswordHash("password", ret)
    if !ok {
        t.Fail()
    }
}

// HashPassword hashes the password with the bcrypt algorithm at the given cost
// and returns the hashed string value along with an error.
func HashPassword(password string) (string, error) {
    bytes, err := bcrypt.GenerateFromPassword([]byte(password), 4)
    return string(bytes), err
}

// CheckPasswordHash compares the two inputs and returns true if they match.
func CheckPasswordHash(password, hash string) bool {
    err := bcrypt.CompareHashAndPassword([]byte(hash), []byte(password))
    return err == nil
}
However, when I try to run only the TestCheckPasswordHash function with the command go test -run TestCheckPasswordHash ./api, it still gives the same error. By the way, the file is handlers_test.go, the functions are in the handlers.go file, and the package name is api for both.
There is no contact with any kind of DB-related function, yet I am getting the same error again and again. When I run this TestCheckPasswordHash code in another project, or in project/util/util_test.go, it checks and passes as expected.
I don't know what to do; it seems that I cannot run any test in this package unless I figure this out.
Thanks in advance. Best wishes.
I was checking your repo; nice implementation, neat and simple, good job!
I think your issue is in the init function; please try commenting it out and see if it works for that single test.
It's a bit complex to explain how the init function works without a graph of files as an example, but you can check the official documentation:
https://go.dev/doc/effective_go#init
PS: if this doesn't work, please write me back.
I've found the reason why this error occurred, and now it's solved.
This is partially related to Docker's working principle of how the DB hostname must be declared, but we can do nothing about that, so we need a better approach to work around it.
When the go test command is executed, even for a single function, the Go test binary initializes the whole package (not the whole project) in the normal execution order: package-level variable declarations first, then the init() function. If an item from an external package is referenced, the runtime initializes that package for the item as well.
Even if you are testing only an unrelated function, you should keep in mind that, before running the test, the Go runtime will initialize the whole package.
To prevent this, I wrapped the package-level variables that directly call the DB and the cache (even though they live in another package). At the initial stage these can simply be allocated as new variables; their connection is only established later by main or main.init().
Now, prior to testing, all relevant variables (in the same package or referenced externally) are still checked. Say a DB agent (a redis.Client or pgxpool.Pool) is needed: we create a new, unconnected one, the compiler doesn't complain, and testing begins. The agent becomes operational only through a function call from main or wherever else we want.
This is a better (maybe not the best) practice for more maintainable code: initialization of dependencies can be handled cautiously and in line with the functional needs. At the very least, a simple refactoring should make the problem solvable.
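A minimal sketch of that refactoring, with hypothetical names (db and ConnectDB are placeholders for the repo's real identifiers): the package-level handle is declared but left unconnected, so go test can initialize the api package without a database, and main wires the connection up explicitly.

package api

import (
    "context"

    "github.com/jackc/pgx/v4/pgxpool"
)

// Declared but not connected: go test can initialize the package
// without ever reaching a real database.
var db *pgxpool.Pool

// ConnectDB is called explicitly from main (not from init), so tests
// that never touch the DB never trigger a connection attempt.
func ConnectDB(ctx context.Context, url string) error {
    pool, err := pgxpool.Connect(ctx, url)
    if err != nil {
        return err
    }
    db = pool
    return nil
}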

NUnit - Is it possible to count total test, total time, total passed test?

Can we count total tests, total time, and total passed tests in NUnit?
Currently I have the following, in which a test is identified as Pass/Fail. I am looking for a method that gives the counts:
if (TestContext.CurrentContext.Result.Outcome.Status.ToString() == "Failed")
{
    Log.Info("TestContext.Message = " + TestContext.CurrentContext.Result.Message);
    Log.Info("TestContext.StackTrace = " + TestContext.CurrentContext.Result.StackTrace);
}
else if (TestContext.CurrentContext.Result.Outcome.Status.ToString() == "Passed")
{
    Log.Info("TestContext.Status = " + TestContext.CurrentContext.Result.Outcome.Status.ToString());
}
else
{
    Log.Info("Undefined TestContext.Status = " + TestContext.CurrentContext.Result.Outcome.Status.ToString());
}
As you may have guessed, TestContext is really only useful while tests are running. Trying to get the final results while tests are still running is kind of like trying to get your final hotel bill while you are still using the room. The answer you get is tentative and subject to change, for example, if you eat breakfast, take something from the minibar, watch a movie, etc.
For that reason, it's best to wait till after the tests are over to look at the results. For an individual test case, that would be in the [TearDown] method. For a fixture or SetUpFixture, in the [OneTimeTearDown] method. Even so, if those methods happen to throw an exception, all bets are off!
For the total run, I would use an engine extension rather than putting the code in my tests. You would write a TestListener extension. In it, you would only take action when the entire test run is complete. Then the entire outcome of the run, including all the counts, would be available. This is the "correct" approach, but it's also a bit more work than what you are doing. See the docs for details.
Another approach is to write a program that processes the test result XML file and gets the info there. This has the advantage of being a separate, straightforward program and not requiring you to know how to write extensions.
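For instance, a rough sketch of such a program, assuming an NUnit 3 TestResult.xml file whose <test-run> root element carries the summary attributes:

using System;
using System.Xml.Linq;

class ResultSummary
{
    static void Main(string[] args)
    {
        // args[0] is the path to the result file, e.g. TestResult.xml
        var root = XDocument.Load(args[0]).Root;
        Console.WriteLine("Total:    " + root.Attribute("total")?.Value);
        Console.WriteLine("Passed:   " + root.Attribute("passed")?.Value);
        Console.WriteLine("Failed:   " + root.Attribute("failed")?.Value);
        Console.WriteLine("Duration: " + root.Attribute("duration")?.Value + "s");
    }
}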
Finally, as a workaround, you can use code similar to what you have. However, it may not work in all future releases, because it uses knowledge of internal structures...
Create an assembly-level SetUpFixture with a OneTimeTearDown method. To be assembly-level, it must be outside of any of your namespaces.
In the OneTimeTearDown, access NUnit.Framework.Internal.TestExecutionContext.CurrentContext.CurrentResult. This field is a TestResult and contains everything there is to know about the result of the assembly, including counts of tests that passed, failed etc.
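A minimal sketch of that workaround for NUnit 3 (the class name is just a placeholder; it relies on internal structures, so treat it as fragile):

using NUnit.Framework;
using NUnit.Framework.Internal;

// Must live outside any namespace to be assembly-level.
[SetUpFixture]
public class RunSummary
{
    [OneTimeTearDown]
    public void ReportCounts()
    {
        var result = TestExecutionContext.CurrentContext.CurrentResult;
        TestContext.Progress.WriteLine(
            "Passed: " + result.PassCount +
            ", Failed: " + result.FailCount +
            ", Skipped: " + result.SkipCount +
            ", Duration: " + result.Duration + "s");
    }
}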
Whatever else you do, do not try to do anything that changes the TestResult. Odds are you'll break something if you do that. :-)
Good luck!

Is it possible to print to console during evalc in Matlab?

I have a testing framework where each test is an M file (for instance, test_featureX.m) that makes assertions to an instance of a special (custom) AssertionCollection class. Users will run tests individually when developing their features and may want to print useful information to the console during the test that will help them debug their problems. I also have a routine testAll that runs all tests for the entire repository and prints results in a standardized way. During this latter usage, I don't want any extraneous information printed to the console, so the test executions are wrapped in evalc (evalc('test_featureX(ac);')) which hides any console writes test_featureX makes.
Now, I would like testAll to print to screen in real time every time an assertion is made. I want to do this by adding a callback function to the AssertionClass instance (ac) before passing it to test_featureX, and having that callback function print an update on each assertion that passes. The problem is that the callback function is executed from within the call stack that originates in the evalc command, so its output is routed to the evalc string rather than the console.
Is there any way to force output to the console, even during an evalc evaluation, so that my callback can print statuses to the console while testAll rejects most standard writing to the console?
I'm hoping a result might look like:
s = evalc('testFunction();')
function testFunction()
    disp('line 1');
    fprintf('line 2\n');
    fprintf(TO_THE_CONSOLE, 'line 3\n');
end
...and the resulting output would be
line 3
s =
line 1
line 2
I do not think it is possible. evalc captures everything except errors. Your best bet would be to add a return argument to testFunction and display that if needed.
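A minimal sketch of that suggestion: have testFunction return its status text instead of printing it, then display it outside the capture (evalc passes through the expression's output arguments).

% sketch: return the status text instead of printing it
function status = testFunction()
    disp('line 1');               % captured by evalc
    fprintf('line 2\n');          % captured by evalc
    status = sprintf('line 3');   % handed back to the caller instead
end

% in testAll:
[s, status] = evalc('testFunction()');
disp(status);                     % reaches the real console, outside the capture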

What Event Handler to Subscribe to When Assert.AreEqual Fails

I want to invoke a method when my integration test fails (i.e., when Assert.AreEqual fails). Is there any event delegate that I can subscribe to in the NUnit framework? Of course, the event delegate must be fired when the tests fail.
This is because my tests contain a lot of Assert statements and I can't tell which one failed; I need to log the Assert, along with the assertion information that caused the problem, in my bug tracker. The best way to do this would be for a delegate method to be invoked when the test fails.
I'm curious: what problem are you trying to solve by doing this? I don't know of a way to do this with NUnit, but there is a messy way to do it. You can supply a Func and invoke it as the fail message. The body of the Func can provide the delegation you're looking for.
var callback = new Func<string>(() => {
    // do something
    return "Reason this test failed";
});
Assert.AreEqual("", " ", callback.Invoke());
It seems that there is no event I can subscribe to in case of an assert failure.
This sounds like you want to create your own test runner.
There is a distinction between the tests (which define the actions) and the running of the tests (actually executing them). In this case, if you want to detect that a test failed, you have to do it when the test is run.
If all you are looking to do is send an email on failure, it may be easiest to create a script (PowerShell/batch) that runs the command-line runner and sends an email on failure (to your bug tracking system?).
If you want more complex interactivity, you may need to consider creating a custom runner. The runner can run the tests and then take action based on the results. In this case, you should look in the NUnit.Core namespace for the test runner interface.
For example (untested):
TestPackage package = new TestPackage(@"c:\some\path\to.dll");
SimpleTestRunner runner = new SimpleTestRunner();
if (runner.Load(package))
{
    var results = runner.Run(new NullListener(), TestFilter.Empty, false, LoggingThreshold.Off);
    if (results.ResultState != ResultState.Success)
    {
        // ... do something interesting ...
    }
}
EDIT: better snippet of code https://stackoverflow.com/a/5241900/1961413

Rhino Mocks Calling instead of Recording in NUnit

I am trying to write unit tests for a bit of code involving events. Since I need to raise an event at will, I've decided to rely on Rhino Mocks to do so for me, and then make sure that the results of the event being raised are as expected (when the user clicks a button, values should change in a predictable manner; in this example, the height of the object should decrease).
So, I did a bit of research and realized I need an event raiser for the event in question. Then it's as simple as calling eventraiser.Raise(); and we're good.
The code I've written for obtaining an event raiser is as follows (written in C#, more or less copied straight off the net):
using (mocks.Record())
{
    MyControl testing = mocks.DynamicMock<MyControl>();
    testing.Controls.Find("MainLabel", false)[0].Click += null;
    LastCall.IgnoreArguments();
    LastCall.Constraints(Rhino.Mocks.Constraints.Is.NotNull());
    Raiser1 = LastCall.GetEventRaiser();
}
I then test it in playback mode:
using (mocks.Playback())
{
    MyControl thingy = new MyControl();
    int temp = thingy.Size.Height;
    Raiser1.Raise();
    Assert.Greater(temp, thingy.Size.Height);
}
The problem is that when I run these tests through NUnit, they fail. An exception is thrown at the line testing.Controls.Find("MainLabel",false)[0].Click += null;, which complains about trying to add null to the event listener. Specifically: "System.NullReferenceException: Object reference not set to an instance of an object".
Now, I was under the impression that any code under the mocks.Record heading wouldn't actually be called; it would instead create expectations for those calls in the playback. However, this is the second instance where I've had a problem like this (the first involved classes/cases that were a lot more complicated), where it appears in NUnit that the code is actually being called normally instead of creating expectations. I am curious whether anyone can point out what I am doing wrong, or suggest an alternative way to solve the core issue.
I'm not sure, but you might get that behaviour if you haven't made the event virtual in MyControl. If methods, events, or properties aren't virtual, then I don't think DynamicMock can replace their behaviour with recording and playback versions.
Personally, I like to define interfaces for the classes I'm going to mock out and then mock the interface. That way, I'm sure to avoid this kind of problem.
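A rough sketch of that interface-based approach (IMyControl and its Click event are hypothetical stand-ins for the real control; interface members can always be intercepted, so nothing needs to be virtual):

using System;
using NUnit.Framework;
using Rhino.Mocks;
using Rhino.Mocks.Interfaces;

public interface IMyControl
{
    event EventHandler Click;
}

[TestFixture]
public class EventRaisingTests
{
    [Test]
    public void RaisingClickThroughTheInterface()
    {
        MockRepository mocks = new MockRepository();
        IMyControl control = mocks.DynamicMock<IMyControl>();
        IEventRaiser raiser;

        using (mocks.Record())
        {
            control.Click += null;              // register interest in the event
            LastCall.IgnoreArguments();
            raiser = LastCall.GetEventRaiser(); // grab the raiser for later
        }

        using (mocks.Playback())
        {
            raiser.Raise(control, EventArgs.Empty); // fires Click on all subscribers
        }
    }
}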