Robot Framework: Is there a way of checking the report.html although the run paused?

Situation: Visual Studio Code (Browser library) runs a couple of .robot files (started manually).
Then it pauses because of an error...
At that point the process breaks and there is no final report.html.
If you stop the run, it doesn't generate a report.html, which is not what you want. You actually want the results up to that point (or, better described: you still want output.xml, log.html and report.html).

You should be able to generate log.html and report.html using the rebot command. However, you need output.xml for this. output.xml is created when you run the tests; when you break the run, you will probably not have all the resources you need.
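If you do end up with a usable output.xml, the other two files can be regenerated with a single rebot call along these lines (the file names shown are the defaults):

rebot --log log.html --report report.html output.xml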
I would suggest assigning a test timeout to the test that causes the pause. When the timeout is reached, the test is stopped automatically and you should have all reports. You can also set it globally for all tests, e.g.:
*** Settings ***
Test Timeout    2 minutes
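A per-test timeout for just the problematic test would look like this (the test name and body are placeholders):

*** Test Cases ***
Problematic Test
    [Timeout]    2 minutes
    # ... the keywords of the test ...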

Related

Get test execution logs during test run by NUnit Test Engine

We are using the NUnit Test Engine to run tests programmatically.
It looks like after we add FrameworkPackageSettings.NumberOfTestWorkers to the runner code, the test run for our UI tests hangs during execution. I'm not able to see at what time or on which event the execution hangs, because the test runner returns test result logs (in XML) only when the entire execution ends.
Is there a way to get test execution logs for each test?
I've added InternalTraceLevel and InternalTraceWriter, but these logs are something different (BTW, it looks like ParallelWorker#9 hangs even while writing to the console :) )
_package.AddSetting(FrameworkPackageSettings.InternalTraceLevel, "Debug");

// Resolve a log path next to the executing assembly
var assemblyDir = Path.GetDirectoryName(Uri.UnescapeDataString(new Uri(Assembly.GetExecutingAssembly().CodeBase).AbsolutePath));
var nunitInternalLogsPath = Path.Combine(assemblyDir, "NunitInternalLogs.txt");
Console.WriteLine("nunitInternalLogsPath: " + nunitInternalLogsPath);

StreamWriter writer = File.CreateText(nunitInternalLogsPath);
_package.AddSetting(FrameworkPackageSettings.InternalTraceWriter, writer);
The result file, with the default name TestResult.xml, is not a log. That is, it is not a file produced line by line as execution proceeds. Rather, it is a picture of the result of your entire run and is therefore only created at the end of the run.
InternalTrace logs are actual logs in that sense. They were created to allow us to debug the internal workings of NUnit. We often ask users to create them when an NUnit bug is being tracked. Up to four of them may be produced when running a test of a single assembly under nunit3-console:
1. A log of the console runner itself
2. A log of the engine
3. A log of the agent used to run tests (if an agent is used)
4. A log received from the test framework running the tests
In your case, #1 is not produced, of course. Based on the content of the trace log, we are seeing #4, triggered by the package setting passed to the framework. I have seen the situation where the log is incomplete in the past but not recently. The logs normally use auto-flush to ensure that all output is actually written.
If you want to see a complete engine log (#2 above), then set the WorkDirectory and InternalTrace properties of the engine when you create it.
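A minimal sketch of that, assuming the NUnit.Engine API (the directory path is an example):

using NUnit.Engine;

// Create the engine with its internal trace routed to a known directory
ITestEngine engine = TestEngineActivator.CreateInstance();
engine.WorkDirectory = @"C:\temp\nunit-logs";          // the engine log is written here
engine.InternalTraceLevel = InternalTraceLevel.Debug;  // enable engine-side tracing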
However, as stated, these logs are all intended for debugging NUnit, not for debugging your tests. The console runner produces another "log" even though it isn't given that name. It's the output written to the console as the tests run, especially that produced when using the --labels option.
If you want some similar information from your own runner, I suggest producing it yourself. Create either console output or a log file of some kind, by processing the various events received from the tests as they execute. To get an idea of how to do this, I suggest examining the code of the NUnit3 console runner. In particular, take a look at the TestEventHandler class, found at https://github.com/nunit/nunit-console/blob/version3/src/NUnitConsole/nunit3-console/TestEventHandler.cs
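The key piece there is an ITestEventListener. A minimal sketch (the class name and console format are my own; the XML attributes are those of the NUnit3 event fragments):

using System;
using System.Xml;
using NUnit.Engine;

// Receives an XML fragment from the engine as each test event occurs
public class ProgressListener : ITestEventListener
{
    public void OnTestEvent(string report)
    {
        var doc = new XmlDocument();
        doc.LoadXml(report);
        if (doc.DocumentElement.Name == "test-case")
        {
            // Log each finished test immediately instead of waiting for TestResult.xml
            Console.WriteLine(doc.DocumentElement.GetAttribute("result") + ": " +
                              doc.DocumentElement.GetAttribute("fullname"));
        }
    }
}

An instance is passed to the runner, e.g. runner.Run(new ProgressListener(), TestFilter.Empty).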

How to restart an exe when it exits in Windows 10?

I have a process in Windows which I run at startup. Now I need to make it so that if that process gets killed or stopped, it is restarted again on Windows 10.
Is there any way? The process is an HTTP server which I need to restart if it is somehow stopped in Windows. I have tried writing a PowerShell script that checks the tasklist status of the process and restarts it when it is not found, but that is not a good way. Please suggest a better way to do it.
It is a Go exe; under a particular scenario the process gets killed or stopped and I need to start it up again automatically, immediately after the exe is killed. What is the best way to achieve this?
I will give you a brief rundown. You can enable Audit Process Termination in the machine's Local Group Policy; in your case, success audits would be enough. Note that the exact location of the setting may change with the OS version.
Now every time a process is terminated, a success event (ID 4689, "A process has exited") is generated and written to the Security event log.
This allows you to create a Task Scheduler task that triggers on the generation of this event and calls a script that runs the process again. Simple, right?
Well, you might have some trouble setting that task up, especially when you want to pass details about the generating event to the script.
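As a sketch, such an event-triggered task can be created from an elevated prompt roughly like this (the task name and path are placeholders; restricting the trigger to one specific executable means refining the XPath query against the event's data):

rem 4689 = "A process has exited" in the Security log
schtasks /Create /TN "RestartHttpServer" /SC ONEVENT /EC Security ^
    /MO "*[System[(EventID=4689)]]" /TR "C:\path\to\server.exe" /RU SYSTEM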
You can use Task Scheduler for this purpose. There is an option to restart the task on failure which can be selected, so that whenever your process fails it will be restarted again.
Reference: https://social.technet.microsoft.com/Forums/windowsserver/en-US/4545361c-cc1f-4505-a0a1-c2dcc094109a/restarting-scheduled-task-that-has-failed?forum=winserverManagement

OPA5: How to make sure that every test starts in a fresh environment?

I have to refactor a module of OPA5 tests, because most of the test cases currently fail.
While trying to find the reason for the failures, I found out that most of the tests aren't erroneous: when you run them in isolation, they work just fine. The problem occurs when you run them as a module, that is, as a group, one test after the other.
More precisely, the problem occurs when one test fails. Normally you execute iTeardownMyAppFrame() as the very last method of a test, to remove the used iFrame, so that the following test finds an untouched environment in which it can run.
But when a test fails at some line, the test stops and the following invocations aren't executed. iTeardownMyAppFrame is never called, and the next test starts in the environment of the previous (failed) test, so it might fail too because the environment isn't as expected.
Is there a way to make sure that every test starts in a new iFrame? Something like try-finally, with iTeardownMyAppFrame in the finally block, so that it is executed in any case, no matter whether the test passed or failed.
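One approach that matches the try-finally idea: OPA5 runs on top of QUnit, so the teardown can be moved into a QUnit module hook, which executes even when a test in the module fails. A sketch (the queue flushing via Opa5.emptyQueue is an assumption worth verifying against your UI5 version):

var oOpa = new Opa5();

QUnit.module("My journey", {
    afterEach: function () {
        // Schedule the teardown, then flush OPA's queue; returning the
        // promise makes QUnit wait for it, and afterEach runs no matter
        // whether the test passed or failed
        oOpa.iTeardownMyAppFrame();
        return Opa5.emptyQueue();
    }
});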

Devel::Cover not collecting any data after startup with mod_perl2

I want to check Selenium's coverage of my web app, which runs on mod_perl2 on CentOS 6.5.
So I installed Devel::Cover, put use Devel::Cover; in my httpd.conf's <Perl> section, and restarted Apache. It immediately writes some coverage data from my custom ErrorLogging.pm module, but then if I hit any of the app's pages via a browser, nothing further happens.
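For reference, the <Perl> section looks roughly like this (a sketch; the database path and extra import options are assumptions based on the Devel::Cover docs, not my actual config):

<Perl>
    # -db sets the coverage database location, -silent reduces startup noise
    use Devel::Cover (-db => '/tmp/cover_db', -silent => 1);
</Perl>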
I also tried changing this in httpd.conf:
StartServers 1
MinSpareServers 1
MaxSpareServers 1
...just to make sure it'd be collecting all data from the same process. However, after restarting Apache and trying again, the result was the same.
UPDATE: I also tried launching httpd with -D ONE_PROCESS as mentioned in this thread, but the result was more or less the same, except that I had to Ctrl+C the service when done testing, because it takes over the terminal, and at that point it segfaulted. But the coverage database in the end was virtually identical.
The docs don't mention anything different that I can see. How can I get Devel::Cover to record coverage data for code execution that happens in response to actual browser requests via mod_perl2?

How to have Capistrano NOT rollback if a task fails

We're using Capistrano/Webistrano (with Lee Hambley's railsless-deploy gem) to push our PHP application to production servers. I have some custom tasks that run during various parts of the deploy process.
As an example, I have tasks that attempt to stop and restart a Jetty Solr instance. However, sometimes this bit fails during the deploy, so Capistrano rolls back the entire deploy and reverts to the previous revision. This is a pain. :-)
I'd like to tell Capistrano to ignore the return result of these tasks, so that if they fail, Capistrano continues on its way and finishes the deploy anyway. It's very easy for me to ssh to the server after the fact and properly kill and restart the Solr instance, rather than having to do a complete deploy again.
Here are the relevant parts of the deploy script:
before "deploy:symlink", :solr_kill
after "deploy:symlink", :solr_start, :solr_index
task :solr_kill do
run "cd #{current_path}/Base ; #{sudo} phing solr-kill"
end
task :solr_start do
run "cd #{current_path}/Base ; #{sudo} phing solr-start"
run "sleep 10"
end
task :solr_index do
run "#{sudo} #{current_path}/Base/Bin/app.php cron run solr_index_cron"
end
From the Capistrano task docs, there is an option you can add to a task so that it continues if there is an error:
task :solr_start, :on_error => :continue do
# your code here
end
Just add that to each task where you want to ignore errors and continue. That said, the best possible thing is to figure out what is causing the failure and make the restart command robust enough to really restart it. I only say this because, when you hand the script off to someone else, they might not know how to tell whether it restarted correctly.
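Applied to the tasks from the question, that looks like this (same bodies as above, with only the option added):

task :solr_start, :on_error => :continue do
  run "cd #{current_path}/Base ; #{sudo} phing solr-start"
  run "sleep 10"
end

task :solr_index, :on_error => :continue do
  run "#{sudo} #{current_path}/Base/Bin/app.php cron run solr_index_cron"
end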