Can NUnit TimeoutAttribute override the NUnit Console Runner timeout?

I currently use the NUnit Console runner to run my NUnit 3 tests and pass a Timeout command-line option to it. The timeout is about 30 minutes, but I have one test that takes longer than 30 minutes, so I put a TimeoutAttribute on that one test. I assumed the TimeoutAttribute would take precedence over the global Timeout passed to the console runner, but that does not seem to be the case. It seems like the TimeoutAttribute should take precedence. Is there some other way I can do this besides changing the Timeout value passed on the command line?

Assuming you have properly applied the TimeoutAttribute, it takes precedence over the nunit-console command-line option. The option could more properly be called "defaultTimeout".
I'm "assuming" because you haven't included any code that shows where you actually placed the attribute or shown the command-line you are running. If you add those to the question, I'll correct this answer as needed.

Related

Get test execution logs during a test run with the NUnit Test Engine

We are using the NUnit Test Engine to run tests programmatically.
It looks like after we add FrameworkPackageSettings.NumberOfTestWorkers to the runner code, the test run for our UI tests hangs during execution. I'm not able to see at what time or on which event the execution hangs, because the test runner returns the test result log (in XML) only when the entire execution ends.
Is there a way to get test execution logs for each test?
I've added InternalTraceLevel and InternalTraceWriter, but these logs are something different (by the way, it looks like ParallelWorker#9 even hangs while writing to the console :) )
// Enable the framework's internal trace at Debug level
_package.AddSetting(FrameworkPackageSettings.InternalTraceLevel, "Debug");
// Write the trace to a file next to the executing assembly
var nunitInternalLogsPath = Path.GetDirectoryName(Uri.UnescapeDataString(new Uri(Assembly.GetExecutingAssembly().CodeBase).AbsolutePath)) + "\\NunitInternalLogs.txt";
Console.WriteLine("nunitInternalLogsPath: " + nunitInternalLogsPath);
StreamWriter writer = File.CreateText(nunitInternalLogsPath);
_package.AddSetting(FrameworkPackageSettings.InternalTraceWriter, writer);
The result file, with the default name TestResult.xml, is not a log. That is, it is not a file produced line by line as execution proceeds. Rather, it is a picture of the result of your entire run and is therefore only created at the end of the run.
InternalTrace logs are actual logs in that sense. They were created to allow us to debug the internal workings of NUnit, and we often ask users to create them when an NUnit bug is being tracked. Up to four of them may be produced when running the tests of a single assembly under nunit3-console:
1. A log of the console runner itself
2. A log of the engine
3. A log of the agent used to run the tests (if an agent is used)
4. A log received from the test framework running the tests
In your case, #1 is not produced, of course. Based on the content of the trace log, we are seeing #4, triggered by the package setting passed to the framework. I have seen situations in the past where the log was incomplete, but not recently. The logs normally use auto-flush to ensure that all output is actually written.
If you also want to see a complete log #2, set the WorkDirectory and InternalTrace properties of the engine when you create it.
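A minimal sketch of that, assuming the engine is created through TestEngineActivator (the work directory path is just an illustration; on ITestEngine the properties are WorkDirectory and InternalTraceLevel):
using NUnit.Engine;

ITestEngine engine = TestEngineActivator.CreateInstance();
engine.WorkDirectory = @"C:\Temp\NUnitLogs";          // hypothetical path for the engine log
engine.InternalTraceLevel = InternalTraceLevel.Debug; // engine-level trace
engine.Initialize();
ITestRunner runner = engine.GetRunner(_package);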
However, as stated, these logs are all intended for debugging NUnit, not for debugging your tests. The console runner produces another "log", even though it isn't given that name: the output written to the console as the tests run, especially that produced when using the --labels option.
If you want similar information from your own runner, I suggest producing it yourself. Create either console output or a log file of some kind by processing the various events received from the tests as they execute. To get an idea of how to do this, examine the code of the NUnit 3 console runner. In particular, take a look at the TestEventHandler class, found at https://github.com/nunit/nunit-console/blob/version3/src/NUnitConsole/nunit3-console/TestEventHandler.cs
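If you just want the general shape without reading the whole runner, the sketch below assumes the engine's ITestEventListener interface; the class name and the attributes pulled out of the XML fragment are only illustrative:
using System;
using System.Xml;
using NUnit.Engine;

class LiveEventListener : ITestEventListener
{
    // The engine calls this with an XML fragment for each progress event
    // (start-suite, start-test, test-case, ...) as the run proceeds
    public void OnTestEvent(string report)
    {
        var doc = new XmlDocument();
        doc.LoadXml(report);
        if (doc.DocumentElement.Name == "test-case")
            Console.WriteLine("{0} -> {1}",
                doc.DocumentElement.GetAttribute("fullname"),
                doc.DocumentElement.GetAttribute("result"));
    }
}

// Then pass it to the runner, e.g.:
// var resultXml = runner.Run(new LiveEventListener(), TestFilter.Empty);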

Robot Framework: Is there a way of checking the report.html even though the run paused?

Situation: Visual Studio Code (Browser library) runs a couple of .robot files (manually started).
Then it pauses because of an error...
At that point the process breaks and there is no final report.html.
If you stop the run, it doesn't generate a report.html, which is not what you want. You actually want the results up to that point (or, better described: you still want the output.xml, log.html and report.html links).
You should be able to generate log.html and report.html using the rebot command; however, you need output.xml for this. output.xml is created when you run the tests, so when you break the run you will probably not have all the resources you need.
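For example, assuming a (possibly partial) output.xml sits in the current directory, regenerating the other two files is just:
rebot output.xml
which writes log.html and report.html by default.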
I would suggest assigning a test timeout to the test that causes the pause (a per-test example follows the snippet below). When the timeout is reached, the test will be stopped automatically and you should have all the reports. You can also set it globally for all tests, e.g.:
*** Settings ***
Test Timeout    2 minutes
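A per-test timeout looks like this (the test name and keyword are illustrative):
*** Test Cases ***
Slow Scenario
    [Timeout]    2 minutes
    Open The Application Under Test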

OPA5: How to make sure that every test starts in a fresh environment?

I have to refactor a module of OPA5 tests, because most of the test cases currently fail.
While trying to find the reason for the failures, I found out that most of the tests aren't actually broken.
When you run them in isolation they work just fine. The problem occurs when you run them as a module, that is, as a group, one test after the other.
The problem occurs when one test fails. Normally you execute iTeardownMyAppFrame() as the very last method of the test, to remove the used iFrame, so that the following test finds an untouched environment in which it can run.
Now when a test fails at some line, the test stops and the following invocations aren't executed.
iTeardownMyAppFrame is never executed, and the following test starts in the environment of the previous (failed) test. So it might fail too, because the environment isn't as expected.
Is there a way to make sure that every test starts in a new iFrame?
Something like "try-finally" with iTeardownMyAppFrame in the finally block, so that it is executed in any case, no matter whether the test passed or failed.

Why do Selenium tests behave differently on different machines?

I couldn't find much information on Google regarding this topic. Below, I have provided three results from the same Selenium tests. Why am I getting different results when running the tests from different places?
INFO:
So our architecture: Bitbucket, Bamboo Stage 1 (Build, Deploy to QA), Bamboo Stage 2 (start Amazon EC2 instance "Test", run tests from Test against recently deployed QA)
Using Chrome Webdriver.
For all three of the variations I am using the same QA URL that our application is deployed on.
I am running all tests Parallelizable per fixture
The EC2 instance is running Windows Server 2012 R2 with the Chrome browser installed
I have made sure that the test solution has been properly deployed to the EC2 "test" instance. It is indeed the exact same solution and builds correctly.
First, Local:
Second, from EC2 via an SSM script that invokes the tests:
Note that the PowerShell script calls nunit3-console.exe just as it is used in my third example from the command line.
Lastly, RDP into the EC2 instance and run the tests from the command line:
This has me perplexed... Any reason why Selenium runs differently on different machines?
This really should be a comment, but I can't comment yet so...
I don't know enough about the application you are testing to say for sure, but this seems like something I've seen testing the application I'm working on.
I have seen two issues. First, Selenium checks for the element before it's created. Sometimes it works and sometimes it fails; it just depends on how quickly the page loads when the test runs. There's no rhyme or reason to it. Second, the app I'm testing is pretty dumb: when you touch a field, enter data and move on to the next, it effectively posts all editable fields back to the database and refreshes all the fields. So Selenium enters the value, moves to the next field, and pops either a stale element error or a can't-find-element error, depending on where in the post/refresh cycle it attempts to interact with the element.
The solution I have found is moderately ugly. I tried the wait-until approach, but because it's the same element name, it's already visible and is grabbed immediately, which returns a stale element. As a result, the only thing I have found is that by using explicit waits between calls, I can get it to run correctly and consistently. Below is an example of what I have to do with the app I'm testing. (I am aware that I can condense the code; I am working within the style manual for my company.)
// Hard wait to let the app's post/refresh cycle settle before touching the field
Thread.Sleep(2000);
// Locate the field by its id, then type the value and tab out
By nBaseLocator = By.XPath("//*[@id='attr_seq_1240']");
IWebElement baseRate = driver.FindElement(nBaseLocator);
baseRate.SendKeys(Keys.Home + xBaseRate + Keys.Tab);
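For completeness, the wait-until attempt mentioned above would look roughly like this (a sketch; the locator and timeout are placeholders, and it needs WebDriverWait from the Selenium support package). In my case it did not help, because the element is already present, just stale:
using OpenQA.Selenium.Support.UI;

var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
// Block until at least one matching element exists, then hand it back
IWebElement baseRate = wait.Until(drv =>
{
    var matches = drv.FindElements(By.XPath("//*[@id='attr_seq_1240']"));
    return matches.Count > 0 ? matches[0] : null;
});
baseRate.SendKeys(Keys.Home + xBaseRate + Keys.Tab);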
If this doesn't help, please tell us more about the app and how it's functioning so we can help you find a solution.
@Florent B. Thank you!
EDIT: This ended up not working...
The tests still run differently when called remotely with a PowerShell script, but they run correctly locally on both the EC2 instance and my machine.
So the headless command switch allowed me to replicate my failed tests locally.
Next, I found out that a headless Chrome browser is used during the tests when running via script on an EC2 instance... That is automatic, so the tests were indeed running and the errors were valid.
Finally, I figured out that the screen size is indeed the culprit, as it was stuck at a small default size (600/400?).
So after many tries, the only usable screen-size option for Windows, C# and ChromeDriver 2.32 is to set your WebDriver options when you initiate your driver:
// Run Chrome headless, but with an explicit window size so the layout
// matches what a full desktop session would see
ChromeOptions chromeOpt = new ChromeOptions();
chromeOpt.AddArgument("--headless");
chromeOpt.AddArgument("--window-size=1920,1080");
chromeOpt.AddArgument("--disable-gpu");
webDriver = new ChromeDriver(chromeOpt);
FINISH EDIT:
Just to update:
Screen size is large enough.
Still attempting to solve the issue. Has anyone else run into this?
AWS SSM command -> PowerShell -> run Selenium tests with Start-Process -> any test that requires an element fails with ElementNotFound or ElementNotVisible exceptions.
Using the POM pattern for the tests. The FindsBy attribute in C# is not finding elements.
Tests run locally on the EC2 instance work fine from cmd, PowerShell and PowerShell ISE.
The tests do not work correctly when executed via the AWS SSM command. I cannot find any resources to fix the problem.

How can I confirm how many tests are running in parallel?

I'm running my unit tests on a 16-core machine. I can't see any difference in elapsed time between using no parallelization parameter, --workers=1, and --workers=32. I'd like to confirm that NUnit really is running the expected number of simultaneous tests, so that I don't spend time hunting down a non-existent problem in my test code.
I have [Parallelizable] (default scope, ParallelScope.Self) on the common base class; it is not defined on any other class or method. I'm using nunit3-console, both via Jenkins and on my local command line.
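Concretely, the setup looks roughly like this (simplified; class and test names are placeholders):
using NUnit.Framework;

[Parallelizable] // ParallelScope.Self is the default scope
public abstract class CommonTestBase
{
}

[TestFixture]
public class SomeFixture : CommonTestBase
{
    [Test]
    public void SomeTest() { /* ... */ }
}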
Is there a way to tell that tests are running in parallel? NUnit is reporting the correct number of worker threads, but there's no report saying (for example) how many tests were run in each thread.
Run Settings
ProcessModel: Multiple
RuntimeFramework: net-4.5
WorkDirectory: C:\Jenkins\workspace\myproject
NumberOfTestWorkers: 16
I can log the start and finish times of each test and then manually check that there's a reasonable number of overlaps; is there any simpler and more repeatable way of getting what I want?
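(For reference, the logging I have in mind is just something like the following on the common base class; it assumes NUnit's TestContext.Progress writer, and the member names are illustrative.)
using System;
using System.Threading;
using NUnit.Framework;

public abstract class TimestampLoggingBase
{
    [SetUp]
    public void LogStart() => TestContext.Progress.WriteLine(
        $"{DateTime.Now:HH:mm:ss.fff} START  {TestContext.CurrentContext.Test.FullName} " +
        $"(thread {Thread.CurrentThread.ManagedThreadId})");

    [TearDown]
    public void LogFinish() => TestContext.Progress.WriteLine(
        $"{DateTime.Now:HH:mm:ss.fff} FINISH {TestContext.CurrentContext.Test.FullName} " +
        $"(thread {Thread.CurrentThread.ManagedThreadId})");
}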
I turned on --trace=Verbose, which produces a few files. InternalTrace.<pid1>.log and InternalTrace.<pid2>.<dll_name>.log together contained a thorough description of what was happening during the tests. The per-agent log (the one with the DLL name in the log file name) was pretty clear about the state of parallelization.
16:13:40.701 Debug [10] WorkItemDispatcher: Directly executing test1
16:13:47.506 Debug [10] WorkItemDispatcher: Directly executing test2
16:13:52.847 Debug [10] WorkItemDispatcher: Directly executing test3
16:13:58.922 Debug [10] WorkItemDispatcher: Directly executing test4
16:14:04.492 Debug [10] WorkItemDispatcher: Directly executing test5("param1")
16:14:09.720 Debug [10] WorkItemDispatcher: Directly executing test5("param2")
16:14:14.618 Debug [10] WorkItemDispatcher: Directly executing test5("param3")
That third field looks like a thread ID to me. So I believe that the agent is running only one test at a time, even though (I think..) I've made all the tests parallelizable and they're all in different test fixtures.
Now I've just got to figure out what I've done wrong and why they're not running in parallel...
I could be mistaken, but it seems to me that setting the [Parallelizable] attribute is not enough to make tests run in parallel. You also need to create nodes that will run the tests. So if you use Jenkins, you have to create Jenkins slaves, and only then will your tests run in parallel.
So what I want to say is that, as I understand it, there is no way to run tests in parallel on one PC.
If there is such a possibility (running tests in parallel on the same machine), that would be really great, and I would be eager to hear about it!