Can you add log files to NUnit?

I am fairly new to NUnit and I'm trying to see if NUnit supports multiple logs. By that I mean that I want to capture the logs from an external device as well as the network traces. Since I don't want to pollute my result logs with all of this output, I would like to have them in separate files, so that I end up with something like this:
Test results logs file
Telnet logs file 1
Telnet logs file 2
Network trace file
Does NUnit support the addition of other logs, or do I have to create my own logging system?

Strictly speaking, NUnit's result file is not a "log file." Generally, a log file is created incrementally as execution proceeds. The TestResult file is an XML representation of the entire test run and is only written at the end of the run.
NUnit does have a set of log files, called Internal Trace logs, which are produced by the console runner when you use the --trace option (e.g., nunit3-console your-tests.dll --trace=Debug). As their name suggests, they trace the internal workings of NUnit rather than your tests.
Any other logging you perform is entirely up to you and is not captured by NUnit at all.
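As one way to get the layout described in the question, you can open a separate writer per concern in a [SetUpFixture]. This is a minimal sketch; the file names and the TelnetLog/NetworkLog writers are illustrative, not NUnit features:

using System.IO;
using NUnit.Framework;

// Opens one writer per log so device and network output
// stay out of the NUnit result file. Illustrative only.
[SetUpFixture]
public class LogFiles
{
    public static StreamWriter TelnetLog;   // hypothetical telnet capture
    public static StreamWriter NetworkLog;  // hypothetical network trace

    [OneTimeSetUp]
    public void OpenLogs()
    {
        TelnetLog = new StreamWriter("TelnetLog1.txt") { AutoFlush = true };
        NetworkLog = new StreamWriter("NetworkTrace.txt") { AutoFlush = true };
    }

    [OneTimeTearDown]
    public void CloseLogs()
    {
        TelnetLog.Dispose();
        NetworkLog.Dispose();
    }
}

Tests can then write device output to LogFiles.TelnetLog and traffic to LogFiles.NetworkLog, while NUnit's own TestResult file stays clean.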

Get test execution logs during a test run with the NUnit Test Engine

We are using the NUnit Test Engine to run tests programmatically.
It looks like after we add FrameworkPackageSettings.NumberOfTestWorkers to the runner code, the test run for our UI tests hangs during execution. I'm not able to see at what point or event the execution hangs, because the test runner returns the test result logs (in XML) only when the entire execution ends.
Is there a way to get test execution logs for each test?
I've added InternalTraceLevel and InternalTraceWriter, but those logs contain something different. (Incidentally, it looks like ParallelWorker#9 even hangs while writing to the console.)
// Turn on the framework's internal tracing at Debug level
_package.AddSetting(FrameworkPackageSettings.InternalTraceLevel, "Debug");
// Write the internal trace file next to the executing assembly
var assemblyDir = Path.GetDirectoryName(Uri.UnescapeDataString(new Uri(Assembly.GetExecutingAssembly().CodeBase).AbsolutePath));
var nunitInternalLogsPath = Path.Combine(assemblyDir, "NunitInternalLogs.txt");
Console.WriteLine("nunitInternalLogsPath: " + nunitInternalLogsPath);
StreamWriter writer = File.CreateText(nunitInternalLogsPath);
_package.AddSetting(FrameworkPackageSettings.InternalTraceWriter, writer);
The result file, with the default name TestResult.xml, is not a log. That is, it is not a file produced line by line as execution proceeds. Rather, it is a picture of the result of your entire run, and it is therefore only created at the end of the run.
InternalTrace logs are actual logs in that sense. They were created to allow us to debug the internal workings of NUnit, and we often ask users to create them when an NUnit bug is being tracked. Up to four of them may be produced when running the tests of a single assembly under nunit3-console:
1. A log of the console runner itself
2. A log of the engine
3. A log of the agent used to run the tests (if an agent is used)
4. A log received from the test framework running the tests
In your case, #1 is not produced, of course. Based on the content of the trace log, we are seeing #4, triggered by the package setting passed to the framework. I have seen situations in the past where the log was incomplete, but not recently. The logs normally use auto-flush to ensure that all output is actually written.
If you want to see a complete #2 (engine) log, set the WorkDirectory and InternalTraceLevel properties of the engine when you create it.
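A minimal sketch using the NUnit.Engine API (the directory path is illustrative, not prescribed):

using NUnit.Engine;

// Configure the engine before getting a runner, so the engine log (#2)
// is written, at Debug level, into a known working directory.
ITestEngine engine = TestEngineActivator.CreateInstance();
engine.WorkDirectory = @"C:\temp\nunit-logs";          // illustrative path
engine.InternalTraceLevel = InternalTraceLevel.Debug;  // engine-level tracing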
However, as stated, these logs are all intended for debugging NUnit, not for debugging your tests. The console runner produces another "log", even though it isn't given that name: the output written to the console as the tests run, especially the output produced when using the --labels option.
If you want similar information from your own runner, I suggest producing it yourself: create either console output or a log file of some kind by processing the various events received from the tests as they execute. To get an idea of how to do this, examine the code of the NUnit3 console runner; in particular, take a look at the TestEventHandler class, found at https://github.com/nunit/nunit-console/blob/version3/src/NUnitConsole/nunit3-console/TestEventHandler.cs
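As a minimal sketch of that approach (assuming the NUnit.Engine API; the class name and log format are illustrative), an ITestEventListener can write each event as it arrives, giving an incremental log instead of waiting for TestResult.xml:

using System.IO;
using System.Xml;
using NUnit.Engine;

// Receives an XML fragment for every test event and logs
// completed test cases, one line each, as the run proceeds.
public class FileLogEventListener : ITestEventListener
{
    private readonly StreamWriter _writer;

    public FileLogEventListener(string path)
    {
        _writer = new StreamWriter(path) { AutoFlush = true };
    }

    public void OnTestEvent(string report)
    {
        var doc = new XmlDocument();
        doc.LoadXml(report);
        if (doc.DocumentElement.Name == "test-case")
            _writer.WriteLine("{0}: {1}",
                doc.DocumentElement.GetAttribute("fullname"),
                doc.DocumentElement.GetAttribute("result"));
    }
}

Pass an instance to the runner, e.g. runner.Run(listener, TestFilter.Empty), and the file fills in as each test finishes, even if a later test hangs.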

Where are the standard output commands for scheduled jobs logged in Rundeck?

I am trying to analyse the logs of scheduled jobs in a Rundeck project. When I check the successful logs of a job in the Rundeck GUI, I can see some lines in the Log Output tab; however, I wish to see where these logs are stored on the machine.
Here's what I have already tried:
I have checked /var/log/rundeck after reading some documentation here
I have also gone through the script to see if the logs are being logged elsewhere.
The logs I am looking for are standard print statements. Where can I find these logs?
Rundeck has two kinds of logs: "general logs" (located at /var/log/rundeck) and execution logs (the ones you are asking about), located at /var/lib/rundeck/logs/rundeck/your-project-name/job/your-job-id/logs.
Those paths apply to a DEB/RPM-based installation. If you are using a WAR-based installation, the "general logs" are located in $RDECK_BASE/server/logs and the execution logs at $RDECK_BASE/var/logs/rundeck/your-project-name/job/your-job-id/logs.

Jenkins job log monitoring, parsing with error pattern in master

I am working on a Perl script which will do the following:
Trigger a script in a post-build action when a job fails.
Read the log file and try to match the errors against a consolidated error/solution file.
If an error matches the pattern file, concatenate the error message with the solution at the end of the log file.
I am facing following challenges:
All jobs run on slave nodes, but the error log files are stored on the master. How can I run the script in a post-build action? The script path will be taken from the slave, but my script is located on the master. Is there any workaround for this?
The path of the error log is /home/jenkins/data/jobs//builds/BUILD_NUMBER/log
We have many jobs with folders created by the Jenkins Folders plugin. How do we set the common folder for these?
/home/jenkins/data/jobs/FOLDERX//builds/BUILD_NUMBER/log
Other questions:
Do you think that publishing the Jenkins error log and displaying the solution is the right approach?
There is no information on how complex the pattern matching is, but if it is a simple line-based regex match, there is a plugin for that called Build Failure Analyzer.
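If you do roll your own, the core idea is small. The asker's script is in Perl; this C# sketch (the pattern table and log path are made up for illustration) just shows the line-based matching and appending step:

using System;
using System.IO;
using System.Text.RegularExpressions;

// Illustrative only: scan a build log line by line against a
// pattern -> solution table and append any matches to the log.
class LogMatcher
{
    static void Main(string[] args)
    {
        var patterns = new (string Pattern, string Solution)[]
        {
            ("OutOfMemoryError", "Increase the JVM heap size (-Xmx)."),
            ("Connection refused", "Check that the target service is up."),
        };

        string logPath = args[0];                // e.g. .../builds/42/log
        var lines = File.ReadAllLines(logPath);  // read fully before appending

        using var appender = File.AppendText(logPath);
        foreach (var line in lines)
            foreach (var (pattern, solution) in patterns)
                if (Regex.IsMatch(line, pattern))
                    appender.WriteLine($"MATCH [{pattern}]: {solution}");
    }
}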

Devel::Cover not collecting any data after startup with mod_perl2

I want to check Selenium's coverage of my web app, which runs under mod_perl2 on CentOS 6.5.
So I installed Devel::Cover, put use Devel::Cover; in the <Perl> section of my httpd.conf, and restarted Apache. It immediately writes some coverage data from my custom ErrorLogging.pm module, but if I then hit any of the app's pages via a browser, nothing further happens.
I also tried changing this in httpd.conf:
StartServers 1
MinSpareServers 1
MaxSpareServers 1
...just to make sure it would be collecting all data from the same process. However, after restarting Apache and trying again, the result was the same.
UPDATE: I also tried launching httpd with -D ONE_PROCESS as mentioned in this thread, but the result was more or less the same, except that I had to Ctrl+C the service when done testing, because it takes over the terminal, and at that point it segfaulted. The coverage database in the end, however, was virtually identical.
The docs don't mention anything different that I can see. How can I get Devel::Cover to record coverage data for code that executes in response to actual browser requests via mod_perl2?

Application Deployment with PowerShell

I've developed a PowerShell script to deploy updates to a suite of applications, including SQL Server database updates.
Next I need a way to execute these scripts on 100+ servers without manually connecting to each one. "PowerShell v2 with remoting" is not an option, as it is still in CTP.
PowerShell v1 with WinRM looks the most promising, but I can't get feedback from my scripts. The scripts execute, but I need to know about exceptions. The scripts create a log file; is there a way to send the contents of the log file back to the "client" (the local computer making the remote calls)?
The quick answer is no. The long version: it is possible, but it will involve lots of hacks. I developed a very similar deployment script/system using PowerShell 2 last year. The remoting feature is the primary reason we put up with the CTP status. PowerShell 1 with WinRM is flaky at best and, as you said, gives no real feedback apart from OK or failed.
One alternative I considered was PsExec, which is very much non-standard and may be blocked by firewalls. The other approach involves system management tools such as Microsoft's System Center, but that's a big hammer for a tiny nail. So you have to pick your poison...
Just a comment on this: the easiest way to capture PowerShell output is to use the Start-Transcript cmdlet to pipe console output to a file. We have a small snippet at the start of all our scripts that sends a log file with the console output from each script to a central file share, and names the log file with the script name and the date executed, so that we have an idea of what happened. It's not too hard to pipe all those log files into a database for further processing either. It probably won't solve all your problems, but it should definitely help with the "getting data back" part.