Generic argument list for an exec task? - nant

At the moment I've got several tasks that contain an exec task (SqlCmd). These all use the same argument list (username, options, etc) with only the executed script changing.
Is there a way to create a generic list of arguments, like you can with a filter chain? E.g.:
<filterchain refid="AllProperties" />

Create a target that contains the <exec> with all of its arguments -- either literal values or NAnt properties -- then <call target="TheExecTask" /> in the places you want to use it. Note that <call> resets the depends list, so make this target depend on nothing and carefully document that it is a utility target, not a regular one.
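A rough sketch of that pattern, assuming a sqlcmd-style executable; the property, script, and target names here are purely illustrative:

<!-- shared connection arguments, defined once (illustrative values) -->
<property name="sql.server" value="localhost" />
<property name="sql.user" value="builduser" />

<!-- utility target: depends on nothing; the caller sets 'sql.script' first -->
<target name="TheExecTask">
  <exec program="sqlcmd">
    <arg value="-S" />
    <arg value="${sql.server}" />
    <arg value="-U" />
    <arg value="${sql.user}" />
    <arg value="-i" />
    <arg value="${sql.script}" />
  </exec>
</target>

<!-- regular targets reuse it by setting the script and calling -->
<target name="DeploySchema">
  <property name="sql.script" value="schema.sql" />
  <call target="TheExecTask" />
</target>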

Related

Secrets as environment variables in vstest

In my tests I'm using an environment variable. For security reasons, when setting it in the Azure DevOps pipeline I need to mark it as secret. Is it possible to pass it as an argument within the VSTest task?
Apparently lacking enough reputation to comment on the answer by @daniel-mann, I need to follow up with an answer myself.
Regarding the use of a runsettings file;
Yes, I'm sure you can do it like this, but there's a much simpler way. In the task definition, you have the option to override test run parameters.
For a non-YAML pipeline, you do this in the "Override test run parameters" option (the tooltip says "Override parameters defined in the TestRunParameters section of runsettings file or Properties section of testsettings file. For example: -key1 value1 -key2 value2."), so like this then:
-SomeSecret $(SomeSecret)
For a YAML-pipeline, you can do this:
overrideTestrunParameters: '-SomeSecret $(SomeSecret)'
The nice thing is that the test run parameter that you'd like to override DOES NOT need to exist in your runsettings file. In fact, there's no need for a runsettings file at all!
The bad thing about this approach in general is that you'll have to access your secrets through the "TestContext.Properties" collection (for NUnit), which really sucks if you want a transparent solution that works equally well both for local development (using "user secrets") and in an Azure pipeline.
Another potentially bad thing about this is that these "overridden" test run parameters are just that - parameters for THAT test run only. If you happen to have a mixture of .NET Framework and .NET Core tests, you would want to execute those in two different test runs (for it to work at all...), and then you'd need to duplicate those overrides (given that some of the same secrets are needed, that is).
Regarding adding an additional task in your pipeline to set the appropriate environment variables;
Absolutely. The "Batch script" task is well suited for this: you just pass your secret-based variables as parameters to the script and pick them up inside the script file.
For this to work as expected, though, you will need to allow the task to modify the environment.
For a non-YAML pipeline, this is done by ticking off the "Modify Environment" checkbox.
For a YAML pipeline, you could do it like this:
- task: BatchScript@1
  displayName: 'Export key vault vars as env. vars (for tests)'
  inputs:
    filename: 'ExportKeyVaultEnvironmentVariables.cmd'
    arguments: '$(SomeSecret)'
    modifyEnvironment: true
Where in the "ExportKeyVaultEnvironmentVariables.cmd" script, you just do this:
set SomeSecret=%1
Note: If your secret by chance has some funny characters, especially having a trailing "=" character, you might experience that what you get when collecting the parameter inside the script is NOT what you sent in.
You can avoid this problem by enclosing the parameters in double quotes, like this:
arguments: '"$(SomeSecret)"'
And then collect the parameter by removing those surrounding quotes by using the "~" parameter modifier, like this:
set SomeSecret=%~1
A nice bonus effect of this approach is that your shiny new full-blown environment variables persist for the remainder of your pipeline. Referring back to the "bad thing" above about potentially having to duplicate the test run parameter overrides: that would not be needed here.
Regarding the additional option mentioned by the OP;
Absolutely, in which case you wouldn't need a "Batch script" task (that needs to call a script file), but just a "Command line" task.
BUT, be aware of a possible gotcha! Yes, this will create an environment variable for your test code to pick up, if you access it through "Environment.GetEnvironmentVariable".
In my case, I was building an "IConfigurationRoot" instance, like this:
/// <summary>
/// Gets a configuration instance, based on user secrets (for local test execution) and environment variables (for Azure pipeline execution).
/// </summary>
/// <typeparam name="T">The type of the class that represents the "runtime" (and thus is able to get hold of any configuration).</typeparam>
/// <returns>An <see cref="IConfigurationRoot"/> instance.</returns>
public static IConfigurationRoot GetConfigurationRoot<T>()
    where T : class =>
    new ConfigurationBuilder()
        //// Note: The "AddUserSecrets" method requires the "Microsoft.Extensions.Configuration.UserSecrets" package
        .AddUserSecrets<T>()
        //// Note: The "AddEnvironmentVariables" method requires the "Microsoft.Extensions.Configuration.EnvironmentVariables" package
        .AddEnvironmentVariables()
        .Build();
This works by adding various "configuration providers", which eventually allows you to access them all seamlessly through "configuration["SomeSecret"]".
What I found, though, unless I'm seriously mistaken, is that "SomeSecret" was still not available in the "EnvironmentVariablesConfigurationProvider" that gets added, even though I could access it perfectly well directly with the above-mentioned method. Go figure (but I might be mistaken...).
Possible alternative approach (but for YAML pipelines only?);
It seems that for a YAML pipeline, you can explicitly set environment variables for a task, like this:
- task: VSTest@2
  env:
    SomeSecret: $(SomeSecret)
  [...]
I haven't tested this myself, but I've seen a colleague do it (I currently don't have a YAML pipeline myself). At least that variable can be picked up with "Environment.GetEnvironmentVariable", but I don't know whether it can be picked up through an "IConfigurationRoot" instance.
I haven't seen any option to achieve the same in a non-YAML pipeline.
But again, this also suffers from the same possible "having to duplicate the environment variables across several test runs" problem.
See also my solution to my own question over at the Azure DevOps guys
I've been through this. There's no way to pass variables to VSTest on the command line, which means you have to jump through a few hoops.
You have a few options:
Use a runsettings file with a TestRunParameters section, then access it via TestContext.Properties["variableName"] within the tests themselves (see the sketch after this list). You can use standard token replacement patterns to transform the XML file.
Use an app.config or appsettings.json (depending on your platform). This works pretty much the same as above, except, of course, you use the standard configuration classes to retrieve the values.
Add a step to your pipeline that sets the appropriate environment variables. Secrets don't get automatically mapped to environment variables for security purposes, but there's nothing that's stopping you from doing it yourself.
Move the secret values into a keyvault or some other sort of external secret storage and configure the test to pull the secrets at runtime.
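For the first option, a minimal runsettings sketch; the file name, parameter name, and placeholder value here are illustrative, and the real value would come from token replacement or the "Override test run parameters" setting described above:

<!-- tests.runsettings -->
<RunSettings>
  <TestRunParameters>
    <!-- placeholder value, replaced at pipeline time -->
    <Parameter name="SomeSecret" value="placeholder-overridden-at-runtime" />
  </TestRunParameters>
</RunSettings>

The tests then read the value via TestContext.Properties["SomeSecret"], as mentioned above for NUnit.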

luigi: command-line parameters not becoming part of a task's signature?

In luigi, I know how to use its parameter mechanism to pass command-line parameters into a task. However, if I do so, the parameter becomes part of the task's signature.
But there are some cases -- for example, if I want to optionally pass a --debug or --verbose flag on the command line -- where I don't want the command-line parameter to become part of the task's signature.
I know I can do this outside of the luigi world, such as by running my tasks via a wrapper script which can optionally set environment variables to be read within my luigi code. However, is there a way I can accomplish this via luigi, directly?
Just declare them as insignificant parameters, i.e. instantiate the parameter class passing significant=False as a keyword argument.
Example:
class MyTask(DateTask):
    other = luigi.Parameter(significant=False)

Rundeck: Customize option parameter on job workflow step

I'm using Rundeck 2.6.4 and I'm trying to reuse jobs for common tasks, together with option parameters to control the variables.
Task B has some option params defined, and I've added a workflow step in Task A that references Task B. However, I don't see any way to set the option parameters from Task A as input to Task B.
Any inputs on this?
I think I found the answer:
http://rundeck.org/1.6.2/manual/job-workflows.html#job-reference-step
Finally, if the Job defines Options, you can specify them in the commandline arguments text field and can include variable expansion to pass any input options for the current job. Format:
-optname <value> -optname <value> ...
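For example, if the referenced job defines options named targetenv and release (illustrative names), the arguments field of the job reference step could look like:

-targetenv production -release ${option.release}

where ${option.release} expands to the value of the current job's own release option.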

How to Edit/Update Nant script

I need to update a NAnt script automatically by fetching some data from a database. The solution I can think of is a service which fetches the data from the DB and updates the NAnt script.
Can this be done? If yes, how?
In theory, if you need to change how the script works then you could create a program to generate the NAnt build file, run it with the exec task, include that file and then call a target.
That seems a bit over-complicated though. I suppose it depends on how much the script will change based on the data.
If the data is simply configuration, then you can use the data to set properties in your build script (either by the same mechanism above, or by creating a custom task to create a property value based on the result of a SQL statement). Then use those properties to determine control flow in the build script using standard things like if statements and foreach loops.
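A rough sketch of driving control flow from configuration data; the property name and component names here are purely illustrative, and the property value would come from your DB query (via a custom task or an earlier <exec> step):

<!-- e.g. 'web,reports', loaded from the database in an earlier step -->
<property name="db.components" value="web,reports" overwrite="false" />

<target name="deploy">
  <if test="${db.components != ''}">
    <foreach item="String" in="${db.components}" delim="," property="component">
      <echo message="Deploying ${component}" />
      <call target="deploy-${component}" />
    </foreach>
  </if>
</target>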
I don't think that there's anything built-in that will do this for you, but custom tasks are very easy to create if you can program.
Updating/editing a NAnt script does not change the current execution. Instead you can generate .build files and execute them via the <nant> task, for example from a <foreach> loop or a <style> XSL transformation. An alternative would be to write a small <script>, particularly if you can program it comfortably in C#. If you'd like more specific answers, more information would be helpful (which database, and what tools you can use to extract the data).
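A rough sketch of the generate-and-run idea; the directory, file pattern, and target name are illustrative, and it assumes an earlier step has already written the generated .build files:

<target name="run-generated">
  <foreach item="File" property="buildfile">
    <in>
      <items>
        <include name="generated/*.build" />
      </items>
    </in>
    <do>
      <!-- run each generated build file's default target -->
      <nant buildfile="${buildfile}" />
    </do>
  </foreach>
</target>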

How can I get the list of properties that MSBuild was invoked with?

Given this command:
MSBuild.exe build.xml /p:Configuration=Live /p:UseMerge=true /p:EnableUpdateable=false
how can I form a string like this in my build script:
UseMerge=true;EnableUpdateable=true
where I might not know which properties were used at the command line.
What are you going to do with the list?
There's no built-in "properties that came via the command line" mechanism, à la splatting in PowerShell 2.0.
Remember properties can come from environment variables and/or other scripts.
Also, you stripped one of the params out in your example.
In general, if one is trying to chain to another command, one uses defaulting (Conditions on elements in PropertyGroups) and validation (Messages/Errors conditional on the presence of options), and then either creates a new property or embeds the params you want to pass into a string.
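A minimal sketch of that defaulting/validation/composition pattern, using the property names from the question; the ChainedProperties name and target name are just illustrative:

<PropertyGroup>
  <!-- defaults used when the property was not supplied on the command line -->
  <UseMerge Condition="'$(UseMerge)' == ''">false</UseMerge>
  <EnableUpdateable Condition="'$(EnableUpdateable)' == ''">true</EnableUpdateable>

  <!-- the string to hand on to the next command -->
  <ChainedProperties>UseMerge=$(UseMerge);EnableUpdateable=$(EnableUpdateable)</ChainedProperties>
</PropertyGroup>

<Target Name="Validate">
  <Error Condition="'$(Configuration)' == ''" Text="Configuration must be supplied, e.g. /p:Configuration=Live" />
  <Message Text="Chained properties: $(ChainedProperties)" />
</Target>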
Here's hoping someone has a nice neat example of a more general way to do this but I doubt it.
As covered in http://www.simple-talk.com/dotnet/.net-tools/extending-msbuild/ one can dump out the parameters passed by doing /v:diag on the commandline (but that's obviously not what you're after).
Have a look in the Common.targets files - you'll find lots of cases of chaining involving manually building up lists to pass on to subservient tasks.