Get test execution logs during a test run with the NUnit Test Engine - nunit

We are using the NUnit Test Engine to run tests programmatically.
It looks like after we add FrameworkPackageSettings.NumberOfTestWorkers to the runner code, the test run for our UI tests hangs during execution. I'm not able to see at what time or on which event the execution hangs, because the test runner returns the test result log (in XML) only when the entire execution ends.
Is there a way to get test execution logs for each test?
I've added InternalTraceLevel and InternalTraceWriter, but those logs are something different (BTW, it looks like ParallelWorker#9 hangs even when writing to the console :) )
// Ask the framework for Debug-level internal trace output
_package.AddSetting(FrameworkPackageSettings.InternalTraceLevel, "Debug");

// Write the trace to a file next to the executing assembly
var nunitInternalLogsPath = Path.Combine(
    Path.GetDirectoryName(Uri.UnescapeDataString(new Uri(Assembly.GetExecutingAssembly().CodeBase).AbsolutePath)),
    "NunitInternalLogs.txt");
Console.WriteLine("nunitInternalLogsPath: " + nunitInternalLogsPath);
StreamWriter writer = File.CreateText(nunitInternalLogsPath);
_package.AddSetting(FrameworkPackageSettings.InternalTraceWriter, writer);

The result file, with the default name TestResult.xml, is not a log. That is, it is not a file produced, line by line, as execution proceeds. Rather, it is a picture of the result of your entire run and therefore is only created at the end of the run.
InternalTrace logs are actual logs in that sense. They were created to allow us to debug the internal workings of NUnit. We often ask users to create them when an NUnit bug is being tracked. Up to four of them may be produced when running a test of a single assembly under nunit3-console...
A log of the console runner itself
A log of the engine.
A log of the agent used to run tests (if an agent is used)
A log received from the test framework running the tests
In your case, #1 is not produced, of course. Based on the content of the trace log, we are seeing #4, triggered by the package setting passed to the framework. I have seen situations in the past where the log was incomplete, but not recently. The logs normally use auto-flush to ensure that all output is actually written.
If you want to see a complete engine log (#2), set the WorkDirectory and InternalTrace properties of the engine when you create it.
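For illustration, a minimal sketch of that engine setup (assuming the ITestEngine API exposes WorkDirectory and an InternalTraceLevel property; the path shown is purely illustrative):

using NUnit.Engine;

// Create the engine and direct its own internal trace (log #2) to a directory you control
ITestEngine engine = TestEngineActivator.CreateInstance();
engine.WorkDirectory = @"C:\Temp\NUnitEngineLogs";      // illustrative path
engine.InternalTraceLevel = InternalTraceLevel.Debug;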
However, as stated, these logs are all intended for debugging NUnit, not for debugging your tests. The console runner produces another "log" even though it isn't given that name. It's the output written to the console as the tests run, especially that produced when using the --labels option.
If you want some similar information from your own runner, I suggest producing it yourself. Create either console output or a log file of some kind, by processing the various events received from the tests as they execute. To get an idea of how to do this, I suggest examining the code of the NUnit3 console runner. In particular, take a look at the TestEventHandler class, found at https://github.com/nunit/nunit-console/blob/version3/src/NUnitConsole/nunit3-console/TestEventHandler.cs
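As a rough sketch of that approach (the listener class name is mine, and the event element names are what the engine typically reports; treat them as assumptions to verify against the event XML you actually receive):

using System;
using System.Xml;
using NUnit.Engine;

// Receives the engine's progress events as XML fragments while the tests run
public class ProgressListener : ITestEventListener
{
    public void OnTestEvent(string report)
    {
        var doc = new XmlDocument();
        doc.LoadXml(report);
        var node = doc.DocumentElement;

        if (node.Name == "start-test")
            Console.WriteLine("Starting: " + node.GetAttribute("fullname"));
        else if (node.Name == "test-case")
            Console.WriteLine("Finished: " + node.GetAttribute("fullname") +
                              " -> " + node.GetAttribute("result"));
    }
}

// Usage: pass the listener to the runner so the events arrive during execution, e.g.
// ITestRunner runner = engine.GetRunner(_package);
// runner.Run(new ProgressListener(), TestFilter.Empty);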

Related

Can you add log files to NUnit

I am fairly new to NUnit and I'm trying to see if NUnit supports multiple logs. By that I mean that I want to capture the logs from an external device as well as the network traces. Since I don't want to pollute my result logs with all of these, I would like to have them in different files, so that I end up with something like this:
Test results logs file
Telnet logs file 1
Telnet logs file 2
Network trace file
Does NUnit support the addition of other logs, or do I have to create my own logging system?
Strictly speaking, NUnit's result file is not a "log file." Generally, a log file is created incrementally as execution proceeds. The TestResult file is an XML representation of the entire test run and is only written at the end of the run.
NUnit does have a set of log files, called Internal Trace logs, which are produced by the console runner when you use the --trace option. As their name suggests, they trace the internal workings of NUnit rather than your tests.
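For example, something along these lines (the exact trace level names accepted are worth checking against nunit3-console --help):

nunit3-console MyTests.dll --trace=Debug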
Any other logging you perform is entirely up to you and is not captured by NUnit at all.
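For example, one way to keep such logs separate (a minimal sketch, not an NUnit feature; the test, file naming and content are illustrative, and attachments require an NUnit version that supports them) is to write them from within the test and attach them to the result via TestContext:

using System.IO;
using NUnit.Framework;

[TestFixture]
public class DeviceTests
{
    [Test]
    public void CapturesTelnetLogSeparately()
    {
        // Write the device/telnet output to its own file, named after the current test
        string logPath = Path.Combine(TestContext.CurrentContext.WorkDirectory,
                                      TestContext.CurrentContext.Test.Name + "_telnet.log");
        File.WriteAllText(logPath, "telnet session output...");  // illustrative content

        // Attach it so runners that understand attachments can link it to the test result
        TestContext.AddTestAttachment(logPath, "Telnet log");
    }
}

This keeps TestResult.xml clean while still producing one file per log source.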

Robot Framework: Is there a way of checking the report.html even though the run paused?

Situation: Visual Studio Code (Browser library) runs a couple of .robot files (manually started).
Then it pauses because of an error...
At that point the process breaks and there is no final report.html.
If you stop the run, it doesn't generate a report.html, and that's not what you want. You actually want the results up to that point (or, better described: you still want the linked output.xml, log.html and report.html).
You should be able to generate log.html and report.html using the rebot command. However, you need output.xml for this; output.xml is created while you run the tests, so if you break the run you will probably not have all the resources you need.
I would suggest assigning a test timeout to the test that causes the pause. When the timeout is reached, the test is stopped automatically and you should have all the reports. You can also set it globally for all tests, e.g.:
*** Settings ***
Test Timeout    2 minutes
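For the rebot route, the basic invocation is just the path to the output file (assuming output.xml is in the current directory):

rebot output.xml

This regenerates log.html and report.html from whatever results made it into output.xml.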

OPA5: How to make sure that every test starts in a fresh environment?

I have to refactor a module of OPA5 tests, because most of the test cases currently fail.
While trying to find the reason for the failures I found out that most of the tests aren't actually erroneous.
When you run them in isolation they work just fine. The problem occurs when you run them as a module, that is, as a group, one test after the other.
The problem occurs when one test fails. Normally you execute iTeardownMyAppFrame() as the very last step of a test, to remove the used iFrame, so that the following test finds an untouched environment in which it can run.
But when a test fails at some line, the test stops and the following invocations aren't executed. iTeardownMyAppFrame is never called, and the following test starts in the environment of the previous (failed) test, so it might fail too because the environment isn't as expected.
Is there a way to make sure that every test starts in a fresh iFrame?
Something like "try-finally" with iTeardownMyAppFrame in the finally block, so that it is executed in any case, no matter whether the test passed or failed.

Why do Selenium tests behave differently on different machines?

I couldn't find much information on Google regarding this topic. Below, I have provided three results from the same Selenium tests. Why am I getting different results when running the tests from different places?
INFO:
So our architecture: Bitbucket, Bamboo Stage 1 (Build, Deploy to QA), Bamboo Stage 2 (start Amazon EC2 instance "Test", run tests from Test against recently deployed QA)
Using Chrome Webdriver.
For all three of the variations I am using the same QA URL that our application is deployed on.
I am running all tests as Parallelizable per fixture
The EC2 instance is running Windows Server 2012 R2 with the Chrome browser installed
I have made sure that the test solution has been properly deployed to the EC2 "test" instance. It is indeed the exact same solution and builds correctly.
First, local:
Second, from EC2 via an SSM script that invokes the tests:
Note that the PowerShell script calls nunit3-console.exe just as it would be used in my third example from the command line.
Lastly, RDP in to EC2 and run the tests from the command line:
This has me perplexed... Any reasons why Selenium runs differently on different machines?
This really should be a comment, but I can't comment yet so...
I don't know enough about the application you are testing to say for sure, but this seems like something I've seen testing the application I'm working on.
I have seen two issues. First, Selenium checks for the element before it's created. Sometimes it works and sometimes it fails; it just depends on how quickly the page loads when the test runs. There's no rhyme or reason to it. Second, the app I'm testing is pretty dumb: when you touch a field, enter data and move on to the next, it effectively posts all editable fields back to the database and refreshes all the fields. So Selenium enters the value, moves to the next field and pops either a stale element error or a can't-find-element error, depending on where in the post/refresh cycle it attempts to interact with the element.
The solution I have found is moderately ugly. I tried the wait-until approach, but because it's the same element name, it's already visible and is grabbed immediately, which returns a stale element. As a result, the only thing I have found is that by using hard-coded waits between calls I can get it to run correctly and consistently. Below is an example of what I have to do with the app I'm testing. (I am aware that I can condense the code; I am working within the style manual for my company.)
// Hard-coded pause to let the page's post/refresh cycle finish before touching the field
Thread.Sleep(2000);
By nBaseLocator = By.XPath("//*[@id='attr_seq_1240']");
IWebElement baseRate = driver.FindElement(nBaseLocator);
baseRate.SendKeys(Keys.Home + xBaseRate + Keys.Tab);
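If you want to avoid the fixed sleep, one alternative worth trying (a sketch only; it reuses driver, xBaseRate and the locator from the snippet above, and assumes the Selenium support package's WebDriverWait) is to re-locate and retry until the element stops going stale:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));

// Re-find the element inside the wait so a stale reference simply triggers another attempt
wait.Until(d =>
{
    try
    {
        d.FindElement(By.XPath("//*[@id='attr_seq_1240']"))
         .SendKeys(Keys.Home + xBaseRate + Keys.Tab);
        return true;
    }
    catch (StaleElementReferenceException)
    {
        return false;
    }
});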
If this doesn't help, please tell us more about the app and how it's functioning so we can help you find a solution.
@Florent B. Thank you!
EDIT: This ended up not working...
The tests still run differently when called remotely with a PowerShell script, but they run correctly locally both on the EC2 instance and on my machine.
So the headless command switch allowed me to replicate my failed tests locally.
Next I found out that a headless Chrome browser is used during the tests when running via script on an EC2 instance... That is automatic, so the tests were indeed running and the errors were valid.
Finally, I figured out that the screen size was indeed the culprit, as it was stuck at a size of 600x400.
So after many tries, the only usable screen-size option for Windows, C# and ChromeDriver 2.32 is to set your WebDriver options when you initiate your driver:
// Run headless Chrome at a full desktop resolution so the layout matches a real browser window
ChromeOptions chromeOpt = new ChromeOptions();
chromeOpt.AddArguments("--headless");
chromeOpt.AddArguments("--window-size=1920,1080");
chromeOpt.AddArguments("--disable-gpu");
webDriver = new ChromeDriver(chromeOpt);
FINISH EDIT:
Just to update:
Screen size is large enough.
Still attempting to solve the issue. Has anyone else run into this?
AWS SSM Command -> PowerShell -> run Selenium tests with Start-Process -> any test that requires an element fails with ElementNotFound or ElementNotVisible exceptions.
Using POM for the tests. The FindsBy attributes in C# are not finding elements.
Running the tests locally on the EC2 instance works fine from cmd, PowerShell and PowerShell ISE.
The tests do not work correctly when executed with the AWS SSM command. I cannot find any resources to fix the problem.

Devel::Cover not collecting any data after startup with mod_perl2

I want to check Selenium's coverage of my web app, which runs on mod_perl2 on CentOS 6.5.
So I installed Devel::Cover, put use Devel::Cover; in my httpd.conf's <Perl> section, and restarted Apache. It immediately writes some coverage data from my custom ErrorLogging.pm module, but then if I hit any of the app's pages via a browser, nothing further happens.
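For reference, the relevant piece of the <Perl> section looks roughly like this (a sketch of the setup described above; the options in the comment are illustrative Devel::Cover options, not part of my actual config):

<Perl>
  # Loaded at server startup so coverage is collected in the mod_perl interpreter
  use Devel::Cover;    # options such as -db => '/path/to/cover_db' can be passed here
</Perl>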
I also tried changing this in httpd.conf:
StartServers 1
MinSpareServers 1
MaxSpareServers 1
...just to make sure it'd be collecting all data from the same process. However, after restarting Apache and trying again, the result was the same.
UPDATE: I also tried launching httpd with -D ONE_PROCESS as mentioned in this thread, but the result was more or less the same, except that I had to Ctrl+C the service when done testing, because it takes over the terminal, and at that point it segfaulted. The coverage database at the end, however, was virtually identical.
The docs don't mention anything different that I can see. How can I get Devel::Cover to record coverage data for code execution that happens in response to actual browser requests via mod_perl2?