How can I get the list of properties that MSBuild was invoked with?

Given this command:
MSBuild.exe build.xml /p:Configuration=Live /p:UseMerge=true /p:EnableUpdateable=false
how can I form a string like this in my build script:
UseMerge=true;EnableUpdateable=true
where I might not know which properties were used at the command line.

What are you going to do with the list?
There's no built-in "properties that came via the command line" list, à la splatting in PowerShell 2.0.
Remember properties can come from environment variables and/or other scripts.
Also, you stripped one of the params out in your example.
In general, if one is trying to chain to another command, one uses defaulting (Conditions on elements in PropertyGroups) and validation (Messages conditional on the presence of options), and then either creates a new property or embeds the params you want to pass into a string.
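For illustration, here is a minimal sketch of that defaulting-and-validation pattern (the property names follow the question; the target name is made up):

<PropertyGroup>
  <!-- Defaults apply only when the property wasn't supplied on the command line or elsewhere -->
  <UseMerge Condition="'$(UseMerge)' == ''">false</UseMerge>
  <EnableUpdateable Condition="'$(EnableUpdateable)' == ''">true</EnableUpdateable>
</PropertyGroup>
<Target Name="ValidateParameters">
  <!-- Fail fast when a required property is missing -->
  <Error Condition="'$(Configuration)' == ''" Text="Pass /p:Configuration=... on the command line." />
</Target>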
Here's hoping someone has a nice neat example of a more general way to do this but I doubt it.
As covered in http://www.simple-talk.com/dotnet/.net-tools/extending-msbuild/ one can dump out the parameters passed by doing /v:diag on the command line (but that's obviously not what you're after).
Have a look in the Common.targets files - you'll find lots of cases of chaining involving manually building up lists to pass on to subservient tasks.
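One such chaining pattern, sketched here (the child project name is hypothetical), embeds the chosen properties into the Properties attribute of the MSBuild task, which yields exactly the kind of UseMerge=...;EnableUpdateable=... string the question asks about:

<Target Name="ChainBuild">
  <!-- Forward selected properties to the child project as a name=value;name=value string -->
  <MSBuild Projects="child.proj"
           Targets="Build"
           Properties="UseMerge=$(UseMerge);EnableUpdateable=$(EnableUpdateable)" />
</Target>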

Related

ILE RPG Bind by reference using CRTSQLRPGI

I've been trying to find a solution for this, but I cannot find it.
What I'm trying to do is work with the "bind by reference" ability, but in ILE RPG written with embedded SQL.
I can use the BNDDIR ctl-opt in my source, and everything works correctly.
But that means a "bind by copy" method: I checked by deleting the SRVPGM and even the BNDDIR, and the caller program still works.
So, is there any way to use "bind by reference" in an ILE RPG SQL program?
After my question, an example:
Program SNILOG is a module that contains several procedures, some of them exported.
In QSRVSRC I set the exported procedures, in a source member with the same name: SNILOG. Something like this:
STRPGMEXP PGMLVL(*CURRENT)
/********************************************************************/
/* *MODULE SNILOG INIGREDI 04/10/21 15:25:30 */
/********************************************************************/
EXPORT SYMBOL("GETDIAG_TOSTRING")
EXPORT SYMBOL("GETDIAGNOSTICS")
EXPORT SYMBOL("GRABAR_LOG")
EXPORT SYMBOL("SNILOG")
ENDPGMEXP
As some of the procedures are programmed with embedded SQL, the compilation must be done with CRTSQLRPGI, using the parameter OBJTYPE(*SRVPGM).
So, I finally get a SRVPGM called SNILOG, with those 4 procedures exported.
Once I've got the SRVPGM, I add it to a BNDDIR called SNI_BNDDIR.
Ok, let's go to the caller program: SNI600V.
Defined with dftactgrp(*no), of course!
And compiled with CRTSQLRPGI and parameter OBJTYPE(*PGM).
Here, if I use the control spec bnddir('SNI_BNDDIR'), it works fine.
But not fine enough, as this is a "bind by copy" method (I can delete the SRVPGM or the BNDDIR, and it is still working fine).
When I'm not working with SQL, I can use the CRTPGM command and set the BNDSRVPGM parameter, to specify the SRVPGMs the program is going to call. Well, just their procedures...
But I cannot find any similar option in CRTSQLRPGI command.
Nor in the keywords of the ctl-opt statement (we have BNDDIR, but no BNDSRVPGM option).
Any idea?
I'm running V7R3M0 with TR level: 6
Thanks in advance!
The use of bnddir('SNI_BNDDIR') is the way to do either bind by reference OR bind by copy.
The key is what your BNDDIR looks like.
If you want to bind by reference, then it should include *SRVPGM objects.
If you want to bind by copy, then it should include *MODULE objects.
Generally, you want a *BNDDIR for every *SRVPGM that includes the modules (and maybe a utility *SRVPGM or two) needed for building a specific *SRVPGM.
Then you want one or more *BNDDIR that include just the *SRVPGM objects used to build the programs that use those *SRVPGMs.
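As a sketch of the bind-by-reference setup in CL (the library and source file names are placeholders):

CRTBNDDIR  BNDDIR(MYLIB/SNI_BNDDIR)
/* Register the *SRVPGM, not the *MODULE, so callers bind by reference */
ADDBNDDIRE BNDDIR(MYLIB/SNI_BNDDIR) OBJ((MYLIB/SNILOG *SRVPGM))
/* The caller still compiles with CRTSQLRPGI; the BNDDIR now resolves to the *SRVPGM */
CRTSQLRPGI OBJ(MYLIB/SNI600V) SRCFILE(MYLIB/QRPGLESRC) OBJTYPE(*PGM)

After this, deleting the *SRVPGM should break the caller, which is a quick way to confirm the bind really is by reference.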

Secrets as environment variables in vstest

In my tests I'm using an environment variable. For security reasons, when setting it in an Azure DevOps pipeline I need to mark it as secret. Is it possible to pass it as an argument within the VSTest task?
Apparently lacking enough reputation to comment on the answer by @daniel-mann, I need to follow up through an answer myself.
Regarding the use of a runsettings file;
Yes, I'm sure you can do it like this, but there's a much simpler way. In the task definition, you have the option to override test run parameters.
For a non-YAML pipeline, you do this in the "Override test run parameters" option (the tooltip says "Override parameters defined in the TestRunParameters section of runsettings file or Properties section of testsettings file. For example: -key1 value1 -key2 value2."), so like this then:
-SomeSecret $(SomeSecret)
For a YAML-pipeline, you can do this:
overrideTestrunParameters: '-SomeSecret $(SomeSecret)'
The nice thing is that the test run parameter that you'd like to override DOES NOT need to exist in your runsettings file. In fact, there's no need for a runsettings file at all!
The bad thing about this approach in general is that you'll have to access your secrets through the "TestContext.Properties" collection (for NUnit), which really sucks if you want a transparent solution that works equally well both for local development (using "user secrets") and in an Azure pipeline.
Another potentially bad thing about this is that these "overridden" test run parameters are just that - parameters for THAT test run only. If you happen to have a mixture of .NET Framework and .NET Core tests, you would want to execute those in two different test runs (for it to work at all...), and then you'd need to duplicate those overrides (given that some of the same secrets are needed, that is).
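For illustration, a minimal sketch of reading such an overridden parameter from test code (in NUnit 3 the run parameters surface through TestContext.Parameters; the fixture and parameter names here are placeholders):

using NUnit.Framework;

[TestFixture]
public class SecretDependentTests
{
    [Test]
    public void CanReadOverriddenRunParameter()
    {
        // "SomeSecret" is the name overridden via -SomeSecret $(SomeSecret) in the task
        var secret = TestContext.Parameters["SomeSecret"];
        Assert.That(secret, Is.Not.Null.And.Not.Empty);
    }
}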
Regarding adding an additional task in your pipeline to set the appropriate environment variables;
Absolutely. The "Batch script" task is well suited for this: you just pass your secret-based variables as parameters to the script and pick them up inside the script file.
For this to work as expected, though, you will need to allow the task to modify the environment.
For a non-YAML pipeline, this is done by ticking off the "Modify Environment" checkbox.
For a YAML pipeline, you could do it like this:
- task: BatchScript@1
  displayName: 'Export key vault vars as env. vars (for tests)'
  inputs:
    filename: 'ExportKeyVaultEnvironmentVariables.cmd'
    arguments: '$(SomeSecret)'
    modifyEnvironment: true
Where in the "ExportKeyVaultEnvironmentVariables.cmd" script, you just do this:
set SomeSecret=%1
Note: If your secret happens to contain funny characters, especially a trailing "=" character, you might find that what you get when collecting the parameter inside the script is NOT what you sent in.
You can avoid this problem by enclosing the parameters in double quotes, like this:
arguments: '"$(SomeSecret)"'
And then collect the parameter by removing those surrounding quotes by using the "~" parameter modifier, like this:
set SomeSecret=%~1
A nice bonus effect of this approach is that your shiny new full-blown environment variables persist for the remainder of your pipeline. Referencing back to my "bad thing" about having to potentially duplicate the test run parameter overrides, that would not be needed here.
Regarding the additional option mentioned by the OP;
Absolutely, in which case you wouldn't need a "Batch script" task (that needs to call a script file), but just a "Command line" task.
BUT, be aware of a possible gotcha! Yes, this will create an environment variable for your test code to pick up, if you access it through "Environment.GetEnvironmentVariable".
In my case, I was building an "IConfigurationRoot" instance, like this:
/// <summary>
/// Gets a configuration instance, based on user secrets (for local test execution) and environment variables (for Azure pipeline execution).
/// </summary>
/// <typeparam name="T">The type of the class that represents the "runtime" (and thus is able to get hold of any configuration).</typeparam>
/// <returns>An <see cref="IConfigurationRoot"/> instance.</returns>
public static IConfigurationRoot GetConfigurationRoot<T>()
    where T : class =>
    new ConfigurationBuilder()
        //// Note: The "AddUserSecrets" method requires the "Microsoft.Extensions.Configuration.UserSecrets" package
        .AddUserSecrets<T>()
        //// Note: The "AddEnvironmentVariables" method requires the "Microsoft.Extensions.Configuration.EnvironmentVariables" package
        .AddEnvironmentVariables()
        .Build();
This works by adding various "configuration providers", which eventually allows you to access them all seamlessly through "configuration["SomeSecret"]".
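For example (the anchor type is a placeholder; any class in the test assembly that carries the user-secrets ID works):

var configuration = GetConfigurationRoot<SomeTestAnchor>();
// Reads from user secrets locally and from environment variables in the pipeline
var secret = configuration["SomeSecret"];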
What I found, though, if I'm not seriously mistaken, is that "SomeSecret" was still not available in the "EnvironmentVariablesConfigurationProvider" that's added, even though I could access it perfectly well directly with the above-mentioned method. Go figure (but I might be mistaken...).
Possible alternative approach (but for YAML pipelines only?);
It seems that for a YAML pipeline, you can explicitly set environment variables for a task, like this:
- task: VSTest@2
  env:
    SomeSecret: $(SomeSecret)
  [...]
I haven't tested this myself, but seen a colleague do it (myself, I currently don't have a YAML pipeline). At least this variable can be picked up with "Environment.GetEnvironmentVariable", but I don't know if this can be picked up through an "IConfigurationRoot" instance.
I haven't seen any option to achieve the same in a non-YAML pipeline.
But again, this also suffers from the same possible "having to duplicate the environment variables across several test runs" problem.
See also my solution to my own question over at the Azure DevOps guys
I've been through this. There's no way to pass variables to VSTest on the command line, which means you have to jump through a few hoops.
You have a few options:
Use a runsettings file with a TestRunParameters section, then access it via TestContext.Properties["variableName"] within the tests themselves. You can use standard token replacement patterns to transform the XML file (see the sketch after this list).
Use an app.config or appsettings.json (depending on your platform). This works pretty much the same as above, except, of course, you use the standard configuration classes to retrieve the values.
Add a step to your pipeline that sets the appropriate environment variables. Secrets don't get automatically mapped to environment variables for security purposes, but there's nothing that's stopping you from doing it yourself.
Move the secret values into a keyvault or some other sort of external secret storage and configure the test to pull the secrets at runtime.
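A minimal runsettings sketch for the first option (the parameter name follows the discussion above; the token value is a placeholder for whatever your replacement step substitutes):

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <TestRunParameters>
    <!-- Replaced at release time, e.g. by a token-replacement task -->
    <Parameter name="SomeSecret" value="__SomeSecret__" />
  </TestRunParameters>
</RunSettings>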

Ansible: Playback with same defaults/magic as a role's main.yml

I often have a need for "custom playbooks" that do specific tasks, but still within one role, e.g. for a database backup task, I'd want it to be in roles/databases/backup.yml. A custom task like this would enjoy the same "magic" that main.yml enjoys (automatically reading role variables and so on).
The only workaround for this is relying on tags inside main.yml, but that's cumbersome: it requires creating an "obstacle course" of tags just to ensure certain tasks run, and specifying the tag on the command line (since a play cannot run a tag-filtered list of other plays).
I end up having to do everything manually and explicitly in a custom file.
Thinking further, the confusion is because I'm trying to work around two limitations of Ansible: (a) as mentioned, there's no way for a task to run a list of tagged tasks; (b) there's no such thing (yet) as "explicit" tags, i.e. tags that disable a task unless the tag is explicitly invoked. This means there's no simple way to run a particular subset of special/exceptional tasks within a playbook.
I was trying to work around this restriction by making a separate playbook. However, that would end up copying a bunch of logic from the main role playbook anyway.
The best approach for now is the workaround others have mentioned: rely on variables to "whitelist" certain tasks, then make a wrapper script which declares those variables and may also use --skip-tags to eliminate unnecessary/slow tasks. A sketch of that pattern follows.
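A minimal sketch, with the variable, file, and script names all illustrative:

# roles/databases/tasks/main.yml
- name: Back up the database (skipped unless explicitly requested)
  ansible.builtin.command: /usr/local/bin/db-backup.sh
  when: run_db_backup | default(false) | bool

invoked explicitly with something like:

ansible-playbook site.yml -e run_db_backup=true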

Install MSI with XML Properties File

I would be grateful for any assistance on how I can install an MSI and modify its default property values through an XML file that holds the new values to be inserted at install time. The new property values will then be passed into MSIEXEC as a parameter by referencing the XML file, and will therefore look like this:
msiexec /I MyMSIFle.msi PROPERTIESFILE=ProdProperties.xml
The need for this is because we have a number of environments. For sake of argument let's say DEV, TEST and PROD. The MSI property values differ for each environment and will be held in discrete XML properties files, e.g. DEV-Properties.xml, TEST-Properties.xml and PROD-Properties.xml.
The MSI is a single, generic MSI that we intend to install across all three environments successfully, simply by passing in the correct property values, all embedded within the individual XML files.
I'll be particularly happy to accept solutions using PowerShell, Windows batch scripts, VBScript, etc., but no third-party software, as we have strict restrictions on using any such products within my company.
Thank you
I suggest you create a custom action for your MSI. Here is an example of how:
http://blogs.technet.com/b/alexshev/archive/2009/05/15/from-msi-to-wix-part-22-dll-custom-actions-introduction.aspx
You could pass the .xml file name as a property and deserialize the file from the given path into an object. From the object you could then override some more properties.
Anyhow, I think it is not sensible to put such logic in an installer. I think a better way would be to write a simple flag to the registry telling which environment is in question, and then let the installed program deduce the rest.
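If a custom action is off the table, a wrapper script can do the translation outside the MSI. A PowerShell sketch, assuming a simple (hypothetical) XML layout of <Properties><Property Name="..." Value="..."/></Properties>:

# Read the environment-specific properties file (layout assumed above)
[xml]$props = Get-Content -Raw -Path 'ProdProperties.xml'
$msiArgs = @('/i', 'MyMSIFle.msi', '/qn')
foreach ($p in $props.Properties.Property) {
    # Each entry becomes a NAME=value pair on the msiexec command line
    $msiArgs += ('{0}={1}' -f $p.Name, $p.Value)
}
Start-Process -FilePath 'msiexec.exe' -ArgumentList $msiArgs -Wait

Note that only public MSI properties (all-uppercase names) can be set from the command line this way.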

Is it possible to run NUnit against a specific (long) list of tests?

I have a list of several thousand NUnit tests that I want to run (generated automatically by another tool). (This is a subset of all of the tests, and changes frequently)
I'd like to be able to run these via NUnit-Console.exe. Unfortunately the /run option only takes a direct list of tests, which in my case would not fit on a single command line. I'd like it to pick up the list from a file.
I appreciate that I could use categories, but the list I want to run changes frequently and so I'd prefer not to have to start changing source code.
Does anyone know if there is a clean way to get NUnit to run my specified tests?
(I could break it down into a series of smaller calls to NUnit-console with a full command line, but that's not very elegant)
(If it's not possible, maybe I should add it as an NUnit feature request.)
Had a reply from Charlie Poole (from NUnit development team), that this is not currently possible but has been added as a feature request for NUnit 2.6
I see what you're saying, but like you say you can run a single fixture from the command line.
nunit-console /fixture:namespace.fixture tests.dll
How about generating all the tests in the same fixture? Or place them all in the same assembly?
nunit-console tests.dll
As mentioned in the NUnit link, we need to mention the scenario/test case name. It's simple, but it has a bit of a trick in it. Directly mentioning the test case name will not serve the purpose, and you will end up with 0 test cases executed. We need to write the exact path for the same.
I don't know how it works for other languages, but using C# I have found a solution. Whenever we create a feature file, a corresponding feature.cs file gets created in Visual Studio. Click on featureFileName.feature.cs and look for the namespace; keep it aside (Part 1).
namespace MMBank.Test.Features
Scroll down a bit and you will get the class name. Note that as well and keep it aside (Part 2).
public partial class HistoricalTransactionFeature
Keep scrolling down and you will see the code which NUnit understands for execution, basically.
[NUnit.Framework.TestAttribute()]
[NUnit.Framework.DescriptionAttribute("TC_1_A B C D")]
[NUnit.Framework.CategoryAttribute("MM_Bank")]
Below the code you can see the function/method name which will most likely be TC_1_ABCD(certain parameters)
public virtual void TC_1_ABCD(string username, string password, string visit)
You will have multiple such methods, based on the number of scenarios in your feature file. Note the method (test case) which you want to execute and keep it aside (Part 3).
Now collate all the parts with dots. Finally, you will end up with something like this:
MMBank.Test.Features.HistoricalTransactionFeature.TC_1_ABCD
This is it. Similarly, you can create the test case names from multiple feature files and stack them up in a text file, with every test case name on a separate line. For the command, you can browse through the above NUnit link for execution using the command prompt.
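For what it's worth, later runners added exactly the capability the original question requested: the NUnit 3 console can read fully-qualified test names from a file, so a stacked-up text file like the one described above can be passed directly (assuming you can move to nunit3-console):

nunit3-console tests.dll --testlist=testcases.txt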