My NUnit tests fail unless the NUnit runner is launched with the /noshadow parameter.
But in CC.NET, it seems to be impossible to supply this parameter in the <nunit> block.
I know I can always fall back to a generic <exec> block, but is there really no way to configure this in the <nunit> block?
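(For reference, the generic <exec> fallback I mean looks something like this; the paths and file names are illustrative:)
<exec>
  <executable>C:\Program Files\NUnit\bin\nunit-console.exe</executable>
  <buildArgs>MyTests.dll /noshadow /xml=nunit-results.xml</buildArgs>
  <baseDirectory>C:\Builds\MyProject</baseDirectory>
</exec>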
I would surmise that if this switch/flag isn't documented, then it isn't available in the <nunit> block that you mention.
The thing to keep in mind with these custom tasks is that they are usually just friendly wrappers around what eventually becomes a command-line call.
The task author is just making things simpler for you: they take on the onus of building the correct command line and passing it to the original .exe.
Now, it looks like somebody did address the command-line switch you're interested in here:
https://github.com/loresoft/msbuildtasks/blob/master/Source/MSBuild.Community.Tasks/NUnit.cs
Note the code:
if (DisableShadowCopy)
{
    builder.AppendSwitch(c + "noshadow");  // 'c' is the switch prefix defined elsewhere in the task source
}
So I would see if you can get this task working.
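For example, wiring that task into a .proj file might look roughly like this (the import path and file names are illustrative, so check them against your copy of MSBuild.Community.Tasks):
<Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />
<Target Name="Test">
  <!-- DisableShadowCopy="true" becomes /noshadow on the nunit-console command line, per the source above -->
  <NUnit Assemblies="MyTests.dll" DisableShadowCopy="true" />
</Target>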
In fact, I barely use any of the built-in CC.NET tasks, except for source-code download, starting up msbuild.exe, and then the publishing. I leave the hard stuff to msbuild.
That is, I pull source code, which includes a MyBuild.proj file.
Then I have cc.net execute "msbuild.exe MyBuild.proj"
Then I have cc.net do some of the publishing.
Why?
If most of my logic is in an msbuild .proj file, then if I ever switch to another CI tool, the transition is much less traumatic. In fact, I recently learned that an old job of mine went to TFS, and because I wrote most of the build logic in msbuild (and not a lot of cc.net tasks), the transition to TFS was fairly painless. If I had used cc.net tasks instead, every single one of those would have had to be translated to a corresponding TFS task... :<
Anyways. Back to your question. Keep in mind that a task author is basically just writing up a nice way to wire things up, handling the command-line arguments/syntax sugar for you. So they sometimes miss a flag, or a flag gets added later but the original task is not updated.
So you'll either need to modify the source code yourself... :< or pick a library that keeps more up to date.
Good luck.
Update: TL;DR there seems to be no built-in way to achieve this, so a custom task is an easy solution.
Capistrano provides facilities to share files and directories across all releases. This is convenient and even provides some safety for files that should not be easily changed (or must remain the same across releases), e.g. a database configuration file.
But when it comes to replacing or just updating one of these shared files, I end up doing it manually, directly on the target machine. I would like to improve on that, for instance by asking Capistrano to overwrite some or all shared files when deploying. A kind of --force flag with some granularity.
I am not aware of any such facility, and I have failed in my search so far. Any pointer?
Thinking about it
One of the reasons why this facility does not exist (assuming I did not simply miss it!) is that it may be harder than it looks. For example, let's assume we have a shared database configuration file, and we exclude it from version control for security reasons (common practice). The current release relies on version 1 of the DB configuration. The next release requires version 2 of the DB configuration. If the deployment goes well, everything's good. It gets harder when rolling back after some error with the new release (e.g. a regression), as version 1 must then be available.
Such automation would be cool and convenient, but dangerous as well. Yet I have practical use cases at hand.
I created a template method to do this. For example, I could have a task like this:
task :create_database_yml do
  on roles(:app, :db) do
    within(shared_path) do
      template "local/path/to/database.yml.erb",
               "config/database.yml",
               :mode => "600"
    end
  end
end
And then I have a database.yml.erb template that uses things like fetch(:database_password) to fill in appropriate values. You can use the ask method in Capistrano to prompt for these values so they are never committed.
The implementation of template can be very simple: you just need to read the file, pass it through ERB, and then use Capistrano's upload! to place the results on the server.
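Here's a bare-bones sketch of that idea (not my exact code; it assumes the helper is called inside an on block, so the SSHKit helpers upload! and execute resolve):

require "erb"
require "stringio"

def template(from, to, options = {})
  # Render the local ERB file; fetch(:database_password) and friends
  # resolve through the surrounding Capistrano context at render time.
  rendered = ERB.new(File.read(from)).result(binding)
  # Push the rendered content to the remote path...
  upload! StringIO.new(rendered), to
  # ...and restrict its permissions (default 600 here).
  execute :chmod, options.fetch(:mode, "600"), to
end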
My version is a little more complicated than yours probably needs to be, but in case you are curious:
https://github.com/mattbrictson/capistrano-mb/blob/7600440ecd3331945d03e059368b75849857f1fb/lib/capistrano/mb/dsl.rb#L104
One approach is to use a system configuration tool like Chef or Puppet to deploy the configuration files distinctly from Capistrano.
Another approach is to create a custom task to do this: https://coderwall.com/p/wgs6gw/copy-local-files-to-remote-server-using-capistrano-3
I personally don't change on-server configs often enough or on enough servers yet to have tried to automate it. Crafting an scp command which copies the desired config file to all of the required servers has sufficed in the past.
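For example, something along these lines (host and paths invented for illustration):

scp config/database.yml deploy@app1.example.com:/var/www/myapp/shared/config/database.yml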
From what I have seen, Psake domain-specific PowerShell scripts do not evaluate whether dependent objects really need to be built; instead, the dependencies are always executed in order.
Is there a way to implement dependencies so that the script to build a make target, such as a file, is only executed if any of the dependent files are newer than the target file?
I experimented with precondition and postcondition, with limited success, but this seems like a standard requirement and is in every UNIX-style make I've used in the past. It feels like I am missing something obvious. Help!
As far as I know, Psake does not have such tools. The similar PowerShell build tool Invoke-Build does. You may try it if "incremental" tasks are important for your build scripts. See its wiki pages:
Incremental Tasks
Partial Incremental Tasks
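For a flavour of the syntax, an incremental Invoke-Build task declares -Inputs and -Outputs, and its body runs only when an input is newer than the output. A minimal sketch (the file names and the csc.exe call are illustrative):

# Rebuild bin\app.dll only when a source file under src is newer than it.
task Build -Inputs { Get-ChildItem src\*.cs } -Outputs bin\app.dll {
    # Invoke-Build provides $Inputs (resolved input paths) and $Outputs here.
    csc.exe /out:$Outputs $Inputs
}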
I'm trying to introduce PowerShell workflow into some existing scripts to take advantage of the parallel running capability.
Currently in the workflow I'm having to use:
InlineScript
{
    Import-Module My.Modules
    Execute-MyModulesCustomFunctionFromImportedModules -SomeVariable $Using:SomeVariableValue
}
Otherwise I get the error stating it can't find the custom function. There must be a better way to do this?
The article at http://www.powershellmagazine.com/2012/11/14/powershell-workflows/ confirms that having to import modules inside each InlineScript and then use them there is just how it works; MS gets around this by creating WF activities for all its common PowerShell commands:
General workflow design strategy
It’s important to understand that the entire contents of the workflow get translated into WF’s own language, which only understands activities. With the exception of a few commands, Microsoft has provided WF activities that correspond to most of the core PowerShell cmdlets. That means most of PowerShell’s built-in commands—the ones available before any modules have been imported—work fine.
That isn’t the case with add-in modules, though. Further, because each workflow activity executes in a self-contained space, you can’t even use Import-Module by itself in a workflow. You’d basically import a module, but it would then go away by the time you tried to run any of the module’s commands.
The solution is to think of a workflow as a high-level task coordination mechanism. You’re likely to have a number of InlineScript{} blocks within a workflow because the contents of those blocks execute as a single unit, in a single PowerShell session. Within an InlineScript{}, you can import a module and then run its commands. Each InlineScript{} block that you include runs independently, so think of each one as a standalone script file of sorts: Each should perform whatever setup tasks are necessary for it to run successfully.
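In practice that advice turns the original example into something like this sketch (the surrounding workflow and parallel loop are illustrative; the module and function names are from the question):

workflow Invoke-MyCustomWork
{
    param([string[]]$SomeVariableValues)

    foreach -parallel ($value in $SomeVariableValues)
    {
        InlineScript
        {
            # Each InlineScript is its own PowerShell session, so the
            # module must be imported here, not at the workflow level.
            Import-Module My.Modules
            Execute-MyModulesCustomFunctionFromImportedModules -SomeVariable $Using:value
        }
    }
}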
There's support for forkMode in Ant and Maven, and occasionally we use it with the value perTest. However, the JUnit tests in Eclipse still fail when we run the tests on a class or on a project (Run As -> JUnit Test). Obviously JUnit uses default settings or behaviour and executes the tests in parallel, causing some red crosses in the JUnit view.
Is there a way to code something into the test-class that lets JUnit behave like the forkMode setting? We don't mind if there's an Eclipse-only solution for this.
Or can this be done with a Run Configuration in Eclipse?
EDIT:
I understand that the problems stem from data remaining after tests, causing further tests to fail. While this makes sense, please understand that this doesn't answer my question. Think of my situation as being part of some sort of Tiger Team: we have a bunch of issues, and fixing that part of the existing tests is just one of them. Trust me, we will try to cover everything... (I haven't heard that in a while)
Eclipse runs the JUnit tests serially, in a single thread, in the same JVM. If you have tests that normally operate in parallel, this should not affect the test behavior. However, if you assume that you can change settings in the VM, like system properties or class static variables, and that the next test will not see those changes, that will break your tests.
The rule of thumb is that each test should leave the system (vm, database, filesystem) exactly as it found it so that each test can be run at any time, in any order.
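To illustrate that rule of thumb, here is a small JUnit 4 sketch (names invented) of a test that mutates global state, a system property, and restores it afterwards, so it behaves the same whether run serially in Eclipse or forked per test:

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ConfigDependentTest {

    private String savedMode;

    @Before
    public void saveAndSetMode() {
        // Save the global state this test mutates...
        savedMode = System.getProperty("app.mode");
        System.setProperty("app.mode", "test");
    }

    @After
    public void restoreMode() {
        // ...and put it back exactly as it was found.
        if (savedMode == null) {
            System.clearProperty("app.mode");
        } else {
            System.setProperty("app.mode", savedMode);
        }
    }

    @Test
    public void readsModeFromSystemProperty() {
        assertEquals("test", System.getProperty("app.mode"));
    }
}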
I have a list of several thousand NUnit tests that I want to run (generated automatically by another tool). (This is a subset of all of the tests, and changes frequently)
I'd like to be able to run these via NUnit-Console.exe. Unfortunately the /run option only takes a direct list of names, which in my case would not fit on a single command line. I'd like it to pick up the list from a file instead.
I appreciate that I could use categories, but the list I want to run changes frequently and so I'd prefer not to have to start changing source code.
Does anyone know if there is a clean way to get NUnit to run my specified tests?
(I could break it down into a series of smaller calls to NUnit-console with a full command line, but that's not very elegant)
(If it's not possible, maybe I should add it as an NUnit feature request.)
Had a reply from Charlie Poole (from the NUnit development team) that this is not currently possible, but it has been added as a feature request for NUnit 2.6.
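Update for later readers: the console runner did eventually grow this ability, though the exact option name depends on your version, so verify it against nunit-console /help. From memory, the invocations look like this (file and assembly names illustrative):

nunit-console /runlist=tests-to-run.txt MyTests.dll
nunit3-console --testlist=tests-to-run.txt MyTests.dll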
I see what you're saying, but like you say you can run a single fixture from the command line.
nunit-console /fixture:namespace.fixture tests.dll
How about generating all the tests in the same fixture? Or place them all in the same assembly?
nunit-console tests.dll
As mentioned in the NUnit link, we need to specify the scenario/test case name. It's simple, but there is a bit of a trick to it. Directly mentioning the test case name will not serve the purpose; you will end up with 0 test cases executed. We need to write the exact full path for it.
I don't know how it works for other languages, but using C# I have found a solution. Whenever we create a feature file, a corresponding feature.cs file gets created in Visual Studio. Click on featureFileName.feature.cs and look for the namespace; note it and keep it aside (part 1):
namespace MMBank.Test.Features
Scroll down a bit and you will get the class name. Note that as well and keep it aside (part 2):
public partial class HistoricalTransactionFeature
Keep scrolling down and you will see the code which NUnit understands for execution, basically:
[NUnit.Framework.TestAttribute()]
[NUnit.Framework.DescriptionAttribute("TC_1_A B C D")]
[NUnit.Framework.CategoryAttribute("MM_Bank")]
Below that code you can see the function/method name, which will most likely be TC_1_ABCD(certain parameters):
public virtual void TC_1_ABCD(string username, string password, string visit)
You will have multiple such methods, based on the number of scenarios in your feature file. Note the method (test case) which you want to execute and keep it aside (part 3).
Now collate all the parts with dots. Finally you will end up with something like this:
MMBank.Test.Features.HistoricalTransactionFeature.TC_1_ABCD
That's it. Similarly, you can build the test case names from multiple feature files and stack them up in a text file, with every test case name on a different line. For the command, you can browse the NUnit link above for execution using the command prompt.
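For example, a single collated name can be passed straight to /run (the assembly name here is invented for illustration):

nunit-console /run=MMBank.Test.Features.HistoricalTransactionFeature.TC_1_ABCD MMBank.Test.dll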