Why does this dotCover rake task fail using these relative file paths?

I'm having some issues getting dotCover to work in an Albacore exec task using relative paths.
@xUnitRunnerPath = Pathname.new('../../Tools/xUnit/xunitcontrib-dotcover.2.0/xunit.runner.utility.dll').realpath
@myTestDll = 'C:\PathToProj\My.Project.Tests\bin\Release\My.project.Tests.dll'
@outputDir = 'C:\PathToTestResults\'
exec :testCoverage do |cmd|
  cmd.command = "C:/BuildAgent/tools/dotCover/dotCover.exe"
  cmd.parameters = [
    "cover",
    "/targetexecutable=\"#{@xUnitRunnerPath}\"",
    "/targetarguments=\"#{@myTestDll}\"",
    "/output=#{@outputDir}/My.Project.Tests.dll.dcvr"
  ]
end
The dotCover error is unhelpful, just telling me the paths are wrong:
Failed to convert relative paths to absolute in parameters for the 'cover'
command. The given path's format is not supported.
This doesn't provide much help. I've also tried dotcover help cover for the help on that command, but it doesn't give many clues as to what's going wrong.
I've followed this post about rake and dotCover and also this question. Maybe I'm missing the relevant documentation here, but it would be really helpful to be able to get this working.
EDIT: I just found this relating to relative and absolute paths; perhaps because I'm using absolute paths I need the following. We'll find out tomorrow:
/AnalyseTargetArguments=false

I'm going to remix the rakefile/tasks from your own answer. There are some Ruby/Rake conventions that you should follow to appeal to a broader audience, and I have some opinions on how to write awesome rakefiles. In particular...
1. Don't invoke/execute Rake tasks directly
Rake::Task[:unitTestWithCoverage].execute( testAssembly )
There are all sorts of reasons why you don't want to mess with direct Rake invoke or execute: execute doesn't invoke dependent tasks, invoke only runs them once... it gets goofy. There should always be a way to construct properly defined and dependent tasks instead.
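For example, rather than invoking a task from another task's body, the same intent reads as an ordinary dependency (a minimal sketch; :ci and the prerequisite names are illustrative):
# Rake resolves the dependency graph and runs each prerequisite exactly once.
task :ci => [ :unitTestWithCoverage, :publishCoverageResults ]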
2. Don't parameterize "internal" tasks
exec :unitTestWithCoverage, [:testAssembly] do |cmd, testAssembly|
You probably have a static list or wildcard-matching list of test assemblies. You should be able to construct concrete tasks without using parameters. I only use parameterized tasks when the user can invoke them with custom input from the command line.
3. No need to create paths inside each task
testAssemblyRealPath = Pathname.new(testAssembly).realpath
testAssemblyName = File.basename(testAssemblyRealPath)
We're gonna explore the Rake FileList to figure out how to create custom, lazy, mapped lists of filenames/paths/arbitrary-strings!
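As a quick taste of what that looks like (illustrative paths; pathmap is part of Rake's FileList and String extensions):
test_dlls = FileList['out/**/*.test.dll'] # lazy: the glob is not resolved yet
dll_names = test_dlls.pathmap('%n')       # mapping enumerates, resolving the glob now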
The Remix (updated)
I made a critical mistake in my first answer (which I maintain at the bottom, for reference). I'll explain what went wrong in that section for your/my education!
What follows is my new recommendation. This should have been obvious to me because I made the same mistake with an mspec test runner task in my own builds.
dotcover_path = 'path/to/dotcover.exe'
xunit_runner_path = 'path/to/xunitrunner.exe'
test_assemblies = FileList['path/to/output/**/*.test.dll']
coverage_results = "#{test_results_path}/coverage_results.dcvr"
task :cover_all => [ :tests_with_coverage, :publish_coverage_results ]
exec :tests_with_coverage do |cmd|
  cmd.command = dotcover_path
  cmd.parameters = [
    "cover",
    "/AnalyseTargetArguments=False",
    "/TargetExecutable=\"#{xunit_runner_path}\"",
    "/TargetArguments=\"#{test_assemblies.join ','}\"",
    "/Output=\"#{coverage_results}\""
  ]
end
task :publish_coverage_results => [ :tests_with_coverage ] do
  import_data 'dotNetCoverage', 'dotCover', coverage_results
end
def import_data(type, tool, file)
  puts "##teamcity[importData type='#{type}' tool='#{tool}' path='#{file}']"
end
The Explanation
I default to absolute paths (usually by using File.expand_path and the __FILE__ constant). There are tools/tasks that require relative paths, but you can always play with methods like File.basename.
dotcover_path = 'path/to/dotcover.exe'
xunit_runner_path = 'path/to/xunitrunner.exe'
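If you want these anchored to the rakefile itself rather than to the current working directory, something like this works (a sketch; the tool locations are hypothetical):
# __FILE__ is this rakefile; paths resolve relative to its directory.
dotcover_path = File.expand_path('../tools/dotCover/dotCover.exe', __FILE__)
xunit_runner_path = File.expand_path('../tools/xunit/xunit.console.exe', __FILE__)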
We can still use a FileList of built assemblies to define the target assemblies. We won't evaluate it until the body of the test task executes.
test_assemblies = FileList['path/to/output/**/*.test.dll']
The coverage runner supports multiple assemblies with a single results file. This way we don't have another complicated pathmap.
coverage_results = "#{test_results_path}/coverage_results.dcvr"
Call this from your CI server to run the tests and publish the coverage results.
task :cover_all => [ :tests_with_coverage, :publish_coverage_results ]
This task is now plain and simple. Some notes:
1. Use join to turn the list of targets into a string of the correct format.
2. I tend to quote exec task parameters that have file paths (which requires escaping, \").
exec :tests_with_coverage do |cmd|
  cmd.command = dotcover_path
  cmd.parameters = [
    "cover",
    "/AnalyseTargetArguments=False",
    "/TargetExecutable=\"#{xunit_runner_path}\"",
    "/TargetArguments=\"#{test_assemblies.join ','}\"",
    "/Output=\"#{coverage_results}\""
  ]
end
Same old publish task/method.
task :publish_coverage_results => [ :tests_with_coverage ] do
  import_data 'dotNetCoverage', 'dotCover', coverage_results
end
def import_data(type, tool, file)
  puts "##teamcity[importData type='#{type}' tool='#{tool}' path='#{file}']"
end
The Old Remix
Snipped to show the problem area, assume the rest was uninteresting or exists in the new solution, too.
The test assemblies won't exist until after the build task. That's normally not a problem, since FileList is lazy. It won't evaluate until you enumerate over it (for example, by using each, map, or zip).
However, we immediately each over it to generate the test tasks... so this won't work. It'll have nothing in the list and generate no tasks. Or, even worse, it'll pick up the previous build's output and possibly do bad things (if you didn't completely clean the output directory).
test_assemblies = FileList['path/to/output/**/*.test.dll']
coverage_results = test_assemblies.pathmap "#{test_results_path}/%n.dcvr"
cover_task_names = test_assemblies.pathmap "cover_%n"
test_assemblies.zip(coverage_results, cover_task_names) do |assembly, results, task_name|
  exec task_name do |cmd|
    cmd.command = dotcover_path
    cmd.parameters = [
      "cover",
      "/AnalyseTargetArguments=False",
      "/TargetExecutable=#{xunit_runner_path}",
      "/TargetArguments=#{assembly}",
      "/Output=#{results}"
    ]
  end
end
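To see that laziness boundary in isolation (an illustrative snippet, not from the build above):
list = FileList['out/**/*.test.dll'] # defining the list touches nothing on disk
list.each { |dll| puts dll }         # the glob is resolved here, at enumeration time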

For anyone who's interested, here are my final working rake tasks:
task :unitTestsWithCoverageReport => [ :unitTestsWithCoverage, :coverageServiceMessage ]

exec :unitTestsWithCoverage do |cmd|
  fullPathAssemblies = []
  @unitTestAssemblies.each do |testAssembly|
    testAssemblyRealPath = Pathname.new(testAssembly).realpath
    fullPathAssemblies << testAssemblyRealPath
  end
  cmd.command = @dotCoverRealPath
  cmd.parameters = [
    "cover",
    "/AnalyseTargetArguments=False",
    "/TargetExecutable=#{@xUnitRunnerRealPath}",
    "/TargetArguments=\"#{fullPathAssemblies.join ';'}\"",
    "/Output=#{@testResultsRealPath}/coverage.dcvr"
  ]
end

task :coverageServiceMessage do |t|
  puts "##teamcity[importData type='dotNetCoverage' tool='dotcover' path='#{@testResultsRealPath}/coverage.dcvr']"
end
Many thanks to @AnthonyMastrean, as he showed me some really nice little Ruby tricks and how I should structure my rake file properly.


preserve existing code for arbitrary scalafmt settings

I'm trying to gently introduce scalafmt to a large existing codebase and I want it to make virtually no changes except for a handful of noncontroversial settings the whole team can agree on.
With some settings like maxColumn I can override the default of 80 to something absurd like 5000 so that nothing changes. But with other settings I have to make choices that will modify the existing code, like continuationIndent.callSite. That setting requires a number, which would aggressively introduce changes on the first run over our codebase.
Is there anything I can do in my scalafmt config to preserve all my code except for a few specific settings?
EDIT: I will also accept suggestions of other tools that solve the same issue.
Consider project.includeFilters:
Configure which source files should be formatted in this project.
# manually include files to format.
project.includeFilters = [
  regex1
  regex2
]
For example, say we have a project structure with foo, bar, baz, etc. packages like so:
someProject/src/main/scala/com/example/foo/*.scala
someProject/src/main/scala/com/example/bar/*.scala
someProject/src/main/scala/com/example/baz/qux/*.scala
...
Then the following .scalafmt.conf
project.includeFilters = [
  "foo/.*"
]
continuationIndent.callSite = 2
...
will format only files in the foo package. Now we can proceed to gradually introduce formatting to the codebase package-by-package:
project.includeFilters = [
  "foo/.*"
  "bar/.*"
]
continuationIndent.callSite = 2
...
or even file-by-file
project.includeFilters = [
  "foo/FooA\.scala"
  "foo/FooB\.scala"
]
continuationIndent.callSite = 2
...

Debugging test cases when they are combination of Robot framework and python selenium

Currently I'm using Eclipse with the Nokia/RED plugin, which allows me to write Robot Framework test suites. It supports Python 3.6 and Selenium.
My project is called "Automation" and Test suites are in .robot files.
Test suites contain test cases, which are composed of "Keywords".
Test Cases
Create New Vehicle
    Create new vehicle with next ${registrationno} and ${description}
    Navigate to data section
Those "Keywords" are imported from python library and look like:
#keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self,registrationno, description):
headerPage = HeaderPage(TestCaseKeywords.driver)
sideBarPage = headerPage.selectDaten()
basicVehicleCreation = sideBarPage.createNewVehicle()
basicVehicleCreation.setKennzeichen(registrationno)
basicVehicleCreation.setBeschreibung(description)
TestCaseKeywords.carnumber = basicVehicleCreation.save()
The problem is that when I run test cases, in the log I only get the result of this whole Python function: pass or fail. I can't see at which step it failed, whether at the first or second step of the function.
Is there any plugin or other solution that would let me see exactly which Python function passed or failed? (Of course, a workaround is to use a keyword in the test case for every function, but that is not what I prefer.)
If you need to "step into" a python defined keyword you need to use python debugger together with RED.
This can be done with any python debugger,if you like to have everything in one application, PyDev can be used with RED.
Follow below help document, if you will face any problems leave a comment here.
RED Debug with PyDev
If you want to know which statement in the Python-based keyword failed, you simply need to have it throw an appropriate error. Robot won't do this for you, however. From a reporting standpoint, a Python-based keyword is a black box. You will have to explicitly add logging messages and return useful errors.
For example, the call to sideBarPage.createNewVehicle() should throw an exception such as "unable to create new vehicle". Likewise, the call to basicVehicleCreation.setKennzeichen(registrationno) should raise an error like "failed to register the vehicle".
If you don't have control over those methods, you can do the error handling from within your keyword:
#keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self,registrationno, description):
headerPage = HeaderPage(TestCaseKeywords.driver)
sideBarPage = headerPage.selectDaten()
try:
basicVehicleCreation = sideBarPage.createNewVehicle()
except:
raise Exception("unable to create new vehicle")
try:
basicVehicleCreation.setKennzeichen(registrationno)
except:
raise exception("unable to register new vehicle")
...

Any way to ensure frisby.js test API calls go in sequential order?

I'm trying a simple sequence of tests on an API:
Create a user resource with a POST
Request the user resource with a GET
Delete the user resource with a DELETE
I've a single frisby test spec file mytest_spec.js. I've broken the test into 3 discrete steps, each with its own toss(), like:
f1 = frisby.create("Create");
f1.post(post_url, {user_id: 1});
f1.expectStatus(201);
f1.toss();
// stuff...
f2 = frisby.create("Get");
f2.get(get_url);
f2.expectStatus(200);
f2.toss();
//Stuff...
f3 = frisby.create("delete");
f3.delete(delete_url);
f3.expectStatus(200);
f3.toss();
Pretty basic stuff, right? However, as far as I can tell there is no guarantee they'll execute in order, since they're asynchronous, so I might get a 404 on test 2 or 3 if the user doesn't exist by the time they run.
Does anyone know the correct way to create sequential tests in Frisby?
As you correctly pointed out, Frisby.js is asynchronous. There are several approaches to force it to run more synchronously. The easiest, though not the cleanest, is to use .after(); you can find more about after() in the Frisby.js docs.
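With the classic frisby 0.x callback style, the sequencing could look roughly like this (a sketch; it assumes the after(err, res, body) signature and the URL variables from the question):
frisby.create("Create")
  .post(post_url, {user_id: 1})
  .expectStatus(201)
  .after(function (err, res, body) {
    // only runs once the POST has finished
    frisby.create("Get")
      .get(get_url)
      .expectStatus(200)
      .after(function () {
        frisby.create("Delete")
          .delete(delete_url)
          .expectStatus(200)
          .toss();
      })
      .toss();
  })
  .toss();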

Check if a command exists using qmake

I am working on a project which incorporates C code, as well as (MASM-like) assembly. I want to be able to compile it on Linux, as well as Windows, thus I am using a third-party assembler (jwasm) as follows:
QMAKE_PRE_LINK += jwasm -coff -Fo$$assembly_obj $$PWD/assembly.asm
(here, assembly_obj holds the directory where I want jwasm to save the output. Oh, by the way: when using jwasm it is critical to specify all the parameters first and the input files only at the end, otherwise it will ignore the parameters)
To make it easier for other people to compile the project, I would like to be able to check if jwasm is in their path, and if not, emit an error() telling them how to fix this. However, I am not sure if this is even possible using qmake. I have tried:
exists("jwasm") { # Always false
message("jwasm found!")
}
as well as:
packagesExist(jwasm) { # Always true
  message("jwasm found!")
}
I have looked around in the qmake docs, but couldn't find any other alternatives...

Why is it a bad idea to write configuration data in code?

Real-life case (from caff) to exemplify the short question subject:
$CONFIG{'owner'} = q{Peter Palfrader};
$CONFIG{'email'} = q{peter@palfrader.org};
$CONFIG{'keyid'} = [ qw{DE7AAF6E94C09C7F 62AF4031C82E0039} ];
$CONFIG{'keyserver'} = 'wwwkeys.de.pgp.net';
$CONFIG{'mailer-send'} = [ 'testfile' ];
Then in the code: eval `cat $config`, access %CONFIG
Provide answers that lay out the general problems, not only ones specific to the example.
There are many reasons to avoid configuration in code, and I go through some of them in the configuration chapter in Mastering Perl.
No configuration change should carry the risk of breaking the program. It certainly shouldn't carry the risk of breaking the compilation stage.
People shouldn't have to edit the source to get a different configuration.
People should be able to share the same application without being forced into a common group of settings, and without re-installing the application just to change the configuration.
People should be allowed to create several different configurations and run them in batches without having to edit the source.
You should be able to test your application under different settings without changing the code.
People shouldn't have to learn how to program to be able to use your tool.
You should only loosely tie your configuration data structures to the source of the information to make later architectural changes easier.
You really want an interface instead of direct access at the application level.
I sum this up in my Mastering Perl class by telling people that the first rule of programming is to create a situation where you do less work and people leave you alone. When you put configuration in code, you spend more time dealing with installation issues and responding to breakages. Unless you like that sort of thing, give people a way to change the settings without causing you more work.
$CONFIG{'unhappy_employee'} = `rm -rf /`
One major issue with this approach is that your config is not very portable. If a functionally identical tool were built in Java, loading configuration would have to be redone. If both the Perl and the Java variation used a simple key=value layout such as:
owner = "Peter Palfrader"
email = "peter#peter#palfrader.org"
...
they could share the config.
Also, calling eval on the config file seems to open this system up to attack. What could a malicious person add to this config file if they wanted to wreak some havoc? Do you realize that ANY arbitrary code in your config file will be executed?
Another issue is that it's highly counter-intuitive (at least to me). I would expect a config file to be read by some config loader, not executed as a runnable piece of code. This isn't so serious but could confuse new developers who aren't used to it.
Finally, while it's highly unlikely that the implementation of constructs like q{...} will ever change, if it did, this might stop working.
It's a bad idea to put configuration data in compiled code because it can't easily be changed by the user. For scripts, just make sure the configuration is separated entirely from the rest and documented nicely.
A reason I'm surprised no one has mentioned yet is testing. When config is in the code, you have to write crazy, contorted tests to be able to test safely. You can end up writing tests that duplicate the code they test, which makes the tests nearly useless: mostly just testing themselves, likely to drift, and difficult to maintain.
Hand in hand with testing is deployment which was mentioned. When something is easy to test, it is going to be easy (well, easier) to deploy.
The main issue here is reusability in an environment where multiple languages are possible. If your config file is in language A, then you want to share this configuration with language B, you will have to do some rewriting.
This is even more complicated if you have more complex configurations (example the apache config files) and are trying to figure out how to handle potential differences in data structures. If you use something like JSON, YAML, etc., parsers in the language will be aware of how to map things with regards to the data structures of the language.
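For instance, a Perl consumer could read the same YAML file that a tool in another language reads, without any eval (a sketch, assuming the YAML::XS module; config.yml is a placeholder name):
use YAML::XS qw(LoadFile);
my $config = LoadFile('config.yml'); # plain data structure, no code execution
print $config->{owner};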
The one major drawback of not having them in a language, is that you lose the potential of utilizing setting config values to dynamic data.
I agree with Tim Anderson. Somebody here is confusing configuration in code with configuration that isn't configurable. That holds for compiled code, but not for scripts.
Both a Perl and a Ruby file are read and interpreted, just as a YAML or XML file with configuration data is. I choose YAML because it is easier on the eye than code, with grouping by test, development, staging, and production environments, which in code would involve more... code.
As a side note, XML contradicts "easy on the eye" completely. I find it interesting that XML config is used extensively with compiled languages.
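The environment grouping mentioned above might look like this in YAML (hypothetical values):
development:
  root_dir: /home/dev/app
  server_base: http://localhost:3000
production:
  root_dir: /var/app/current
  server_base: http://api.example.com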
Reason 1. Aesthetics. While no one gets harmed by bad smell, people tend to put effort into getting rid of it.
Reason 2. Operational cost. For a team of 5 this is probably ok, but once you have developer/sysadmin separation, you must hire sysadmins who understand Perl (which is $$$), or give developers access to production system (big $$$).
And to make matters worse you won't have time (also $$$) to introduce a configuration engine when you suddenly need it.
My main problem with configuration in the many small scripts I write is that they often contain login data (username and password or an auth token) for a service I use. Then later, when the script gets bigger, I start versioning it and want to upload it to GitHub.
So before every commit I need to replace my configuration with some dummy values:
$CONFIG{'user'} = 'username';
$CONFIG{'password'} = '123456';
You also have to be careful that those values never slip into your commit history at some point. This can get very annoying. Once you have been through this once or twice, you will never again try to put configuration into code.
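One common escape hatch is to pull secrets from the environment so they never appear in the file at all (a sketch; the variable names are made up):
$CONFIG{'user'} = $ENV{'MYAPP_USER'};
$CONFIG{'password'} = $ENV{'MYAPP_PASSWORD'} // die "MYAPP_PASSWORD not set\n";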
Excuse the long code listing. Below is a handy Conf.pm module that I have used in many systems; it allows you to specify different variables for production, staging, and dev environments. I build my programs to either accept the environment parameter on the command line, or I store this file outside of the source control tree so that it never gets overwritten.
The AUTOLOAD provides automatic methods for variable retrieval.
# Instructions:
# use Conf;
# my $c = Conf->new("production");
# print $c->root_dir;
# print $c->log_dir;
package Conf;
use strict;
our $AUTOLOAD;
my $default_environment = "production";
my @valid_environments = qw(
  development
  production
);
#######################################################################################
# You might need to change this.
sub set_vars {
  my ($self) = @_;
  $self->{"access_token"} = 'asdafsifhefh';
  if ( $self->env eq "development" ) {
    $self->{"root_dir"} = "/Users/patrickcollins/Documents/workspace/SysG_perl";
    $self->{"server_base"} = "http://localhost:3000";
  }
  elsif ($self->env eq "production" ) {
    $self->{"root_dir"} = "/mnt/SysG-production/current/lib";
    $self->{"server_base"} = "http://api.SysG.com";
    $self->{"log_dir"} = "/mnt/SysG-production/current/log";
  } else {
    die "No environment defined\n";
  }

  #######################################################################################
  # You shouldn't need to configure this.
  # More dirs. Move these into the dev/prod sections if they're different per env.
  my $r = $self->{'root_dir'};
  my $b = $self->{'server_base'};
  $self->{"working_dir"} ||= "$r/working";
  $self->{"bin_dir"} ||= "$r/bin";
  $self->{"log_dir"} ||= "$r/log";

  # Other URLs. Move these into the dev/prod sections if they're different per env.
  $self->{"new_contract_url"} = "$b/SysG-training-center/v1/contract/new";
  $self->{"new_documents_url"} = "$b/SysG-training-center/v1/documents/new";
}
#######################################################################################
# Code, don't change below here.
sub new {
  my ($class, $env) = @_;
  my $self = {};
  bless($self, $class);
  if ($env) {
    $self->env($env);
  } else {
    $self->env($default_environment);
  }
  $self->set_vars;
  return $self;
}
sub AUTOLOAD {
  my ($self, $val) = @_;
  my $type = ref($self) || die "$self is not an object";
  my $field = $AUTOLOAD;
  $field =~ s/.*://;
  #print "field: $field\n";
  unless (exists $self->{$field} || $field =~ /DESTROY/) {
    die "ERROR: {$field} does not exist in object/class $type\n";
  }
  $self->{$field} = $val if ($val);
  return $self->{$field};
}
sub env {
  my ($self, $in) = @_;
  if ($in) {
    die("Invalid environment $in") unless (grep { $_ eq $in } @valid_environments);
    $self->{"_env"} = $in;
  }
  return $self->{"_env"};
}
1;