Right way to evaluate sbt task from a simple method - scala

I have a task which, depending on other settings, should determine whether or not to deploy my project to the production server; basically, I call publish if everything is OK. But as I understand it, if I pass the publish task as a dependency or call .value on it, it will be evaluated before the deploy task, which is wrong. So I have to somehow run publish later, from inside my method. I have the following structure:
val deploy: Initialize[...] = (...) map { (...) =>
  def innerMethod() = { ... } // <- here I need to run publish
}
The only way i know of is:
EvaluateTask(struct, publish in Deploy, state, projRef)
It works, but it makes me depend on the buildStructure, state, and thisProjectRef settings, which I don't like. There is also an .evaluate method on tasks, but it expects a Settings[Scope], and I don't know where to get one. Are there any other ways to achieve similar logic?

Have you considered making it a command instead of a task? http://www.scala-sbt.org/release/docs/Extending/Commands.html
Settings may only depend on other settings; tasks may only depend on settings and other tasks; commands, however, can do whatever they want, basically. They're top-level constructs. A setting or task can't depend on a command, so you can't just use commands for everything, but it sounds like what you're trying to do is a top-level kind of thing.
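For illustration, here is a minimal sketch of such a command (hedged: "deploy-if-ok" and checksPass are made-up names, and it reuses the same Project.evaluateTask API that appears elsewhere on this page):

def deployIfOk = Command.command("deploy-if-ok") { state =>
  // Stand-in for your real checks against settings, environment, etc.
  def checksPass: Boolean = true

  // publish is only evaluated here, after the checks have run, instead of
  // being forced up front as a task dependency.
  if (checksPass) Project.evaluateTask(publish, state)
  state
}

Registered via commands += deployIfOk, this runs publish only when the checks succeed, avoiding the premature evaluation that .value or a task dependency would cause.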

Related

Secrets as environmental variables in vstest

In my tests I'm using an environment variable. For security reasons, when setting it in an Azure DevOps pipeline, I need to mark it as a secret. Is it possible to pass it as an argument within the VSTest task?
Apparently lacking enough reputation to comment on the answer by @daniel-mann, I need to follow up through an answer myself.
Regarding the use of a runsettings file:
Yes, I'm sure you can do it like this, but there's a much simpler way. In the task definition, you have the option to override test run parameters.
For a non-YAML pipeline, you do this in the "Override test run parameters" option (the tooltip says "Override parameters defined in the TestRunParameters section of runsettings file or Properties section of testsettings file. For example: -key1 value1 -key2 value2."), so like this then:
-SomeSecret $(SomeSecret)
For a YAML-pipeline, you can do this:
overrideTestrunParameters: '-SomeSecret $(SomeSecret)'
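In context, that option belongs to the VSTest task's inputs; a minimal sketch (the assembly pattern is an assumption, the input names are the task's own):

- task: VSTest@2
  inputs:
    testAssemblyVer2: '**/*Tests.dll'   # assumed pattern for your test assemblies
    overrideTestrunParameters: '-SomeSecret $(SomeSecret)'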
The nice thing is that the test run parameter you'd like to override DOES NOT need to exist in your runsettings file. In fact, there's no need for a runsettings file at all!
The bad thing about this approach in general is that you'll have to access your secrets through the test context (NUnit 3 exposes these run parameters via "TestContext.Parameters"), which really sucks if you want a transparent solution that works equally well both for local development (using "user secrets") and in an Azure pipeline.
Another potentially bad thing about this is that these "overridden" test run parameters are just that - parameters for THAT test run only. If you happen to have a mixture of .NET Framework and .NET Core tests, you would want to execute those in two different test runs (for it to work at all...), and then you'd need to duplicate those overrides (given that some of the same secrets are needed, that is).
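For illustration, a minimal NUnit sketch of reading such an overridden parameter (the class, test, and parameter names are assumptions):

using NUnit.Framework;

[TestFixture]
public class SecretDependentTests
{
    [Test]
    public void CanReadOverriddenRunParameter()
    {
        // Populated at pipeline run time via overrideTestrunParameters.
        var secret = TestContext.Parameters["SomeSecret"];
        Assert.That(secret, Is.Not.Null.And.Not.Empty);
    }
}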
Regarding adding an additional task in your pipeline to set the appropriate environment variables:
Absolutely. Using the "Batch script" task is well suited for this, where you just pass your secret-based variables as parameters to the script and pick them up inside the script file.
For this to work as expected, though, you will need to allow the task to modify the environment.
For a non-YAML pipeline, this is done by ticking off the "Modify Environment" checkbox.
For a YAML pipeline, you could do it like this:
- task: BatchScript@1
  displayName: 'Export key vault vars as env. vars (for tests)'
  inputs:
    filename: 'ExportKeyVaultEnvironmentVariables.cmd'
    arguments: '$(SomeSecret)'
    modifyEnvironment: true
Where in the "ExportKeyVaultEnvironmentVariables.cmd" script, you just do this:
set SomeSecret=%1
Note: If your secret by chance has some funny characters, especially having a trailing "=" character, you might experience that what you get when collecting the parameter inside the script is NOT what you sent in.
You can avoid this problem by enclosing the parameters in double quotes, like this:
arguments: '"$(SomeSecret)"'
And then collect the parameter by removing those surrounding quotes by using the "~" parameter modifier, like this:
set SomeSecret=%~1
A nice bonus effect of this approach is that your shiny new full-blown environment variables persist for the remainder of your pipeline. Referencing back to my "bad thing" about having to potentially duplicate the test run parameter overrides, that would not be needed here.
Regarding the additional option mentioned by the OP:
Absolutely, in which case you wouldn't need a "Batch script" task (that needs to call a script file), but just a "Command line" task.
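One hedged sketch of that variant, assuming the idea is to use the setvariable logging command so the value survives into subsequent tasks (the display name is made up):

- script: echo "##vso[task.setvariable variable=SomeSecret]$(SomeSecret)"
  displayName: 'Expose secret to later pipeline steps'

Subsequent steps then see SomeSecret as an ordinary (non-secret) environment variable.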
BUT, be aware of a possible gotcha! Yes, this will create an environment variable for your test code to pick up, if you access it through "Environment.GetEnvironmentVariable".
In my case, I was building an "IConfigurationRoot" instance, like this:
/// <summary>
/// Gets a configuration instance, based on user secrets (for local test execution) and environment variables (for Azure pipeline execution).
/// </summary>
/// <typeparam name="T">The type of the class that represents the "runtime" (and thus is able to get hold of any configuration).</typeparam>
/// <returns>An <see cref="IConfigurationRoot"/> instance.</returns>
public static IConfigurationRoot GetConfigurationRoot<T>()
    where T : class =>
    new ConfigurationBuilder()
        //// Note: The "AddUserSecrets" method requires the "Microsoft.Extensions.Configuration.UserSecrets" package
        .AddUserSecrets<T>()
        //// Note: The "AddEnvironmentVariables" method requires the "Microsoft.Extensions.Configuration.EnvironmentVariables" package
        .AddEnvironmentVariables()
        .Build();
This works by adding various "configuration providers", which eventually allows you to access them all seamlessly through "configuration["SomeSecret"]".
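Usage would then look something like this (the type argument and key are assumptions):

// e.g. at the start of a test:
var configuration = GetConfigurationRoot<SecretDependentTests>();
var secret = configuration["SomeSecret"];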
What I found, though, if I'm not seriously mistaken, is that "SomeSecret" was still not available in the "EnvironmentVariablesConfigurationProvider" that gets added, even though I could access it perfectly fine directly with the above-mentioned method. Go figure (but I might be mistaken...).
Possible alternative approach (but for YAML pipelines only?):
It seems that for a YAML pipeline, you can explicitly set environment variables for a task, like this:
- task: VSTest@2
  env:
    SomeSecret: $(SomeSecret)
  [...]
I haven't tested this myself, but I've seen a colleague do it (I currently don't have a YAML pipeline myself). At least this variable can be picked up with "Environment.GetEnvironmentVariable", but I don't know if it can be picked up through an "IConfigurationRoot" instance.
I haven't seen any option to achieve the same in a non-YAML pipeline.
But again, this also suffers from the same possible "having to duplicate the environment variables across several test runs" problem.
See also my solution to my own question over at the Azure DevOps guys.
I've been through this. There's no way to pass variables to VSTest on the command line, which means you have to jump through a few hoops.
You have a few options:
Use a runsettings file with a TestRunParameters section, then access it via TestContext.Properties["variableName"] within the tests themselves. You can use standard token replacement patterns to transform the XML file.
Use an app.config or appsettings.json (depending on your platform). This works pretty much the same as above, except, of course, you use the standard configuration classes to retrieve the values.
Add a step to your pipeline that sets the appropriate environment variables. Secrets don't get automatically mapped to environment variables for security purposes, but there's nothing that's stopping you from doing it yourself.
Move the secret values into a key vault or some other sort of external secret storage and configure the test to pull the secrets at runtime (a sketch of the key vault route follows below).
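One pipeline-side way to realize that last option is to fetch the secrets before the test step runs; a hedged sketch (the service connection and vault names are assumptions, and alternatively the test code itself could call the Key Vault SDK):

- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'my-service-connection'   # assumed service connection name
    KeyVaultName: 'my-vault'                     # assumed vault name
    SecretsFilter: 'SomeSecret'

The fetched secret is then available as $(SomeSecret) to subsequent steps in the job.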

Assign build clean option at queue time

I have a Bitbucket classic build pipeline, and I need the ability to set the build clean option at queue time. It seems like this used to be possible via the Build.Clean variable, but that has since been deprecated.
When editing a build pipeline, the Clean option uses an editable drop-down, but any time you try to type something, it erases what you just wrote. I would like to set this option to a variable like $(CleanBuild).
Assign build clean option at queue time
Indeed, the variable Build.Clean is already deprecated. But the document Use predefined variables provides another variable, Build.Repository.Clean, which will help us clean the Sources: add Build.Repository.Clean as a pipeline variable, mark it as settable at queue time, and set it to true when queueing the build.
As for the other clean option fields, like All build directories, I do not believe there is a way to assign those at queue time. Even if we use the deprecated Build.Clean variable, we can still only clean Sources.
You could check this similar thread for some more details.
Hope this helps.

Ansible: Playback with same defaults/magic as a role's main.yml

I often have a need for "custom playbooks" that do specific tasks, but still within one role; e.g., for a database backup task, I'd want it to live in roles/databases/backup.yml. A custom task file like this would enjoy the same "magic" that main.yml enjoys (automatically reading role variables and so on).
The only workaround for this is relying on tags inside main.yml, but that's cumbersome: it requires creating an "obstacle course" of tags just to ensure certain tasks are run, and specifying the tag on the command line (since a play cannot run a tag-filtered list of other plays).
I end up having to do everything manually and explicitly in a custom file.
Thinking further, the confusion is because I'm trying to work around two limitations of Ansible: (a) as mentioned, there's no way for a task to run a list of tagged tasks; (b) there's no such thing (yet) as "explicit" tags, i.e. tags that disable a task unless the tag is explicitly invoked. This means there's no simple way to run a particular subset of special/exceptional tasks within a playbook.
I was trying to work around this restriction by making a separate playbook. However, that would end up copying a bunch of logic from the main role playbook anyway.
The best approach for now is the workaround others have mentioned: rely on variables as a way of "whitelisting" certain tasks, then make a wrapper script which declares those variables and may also use --skip-tags to eliminate unnecessary/slow tasks.
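A minimal sketch of that workaround (the variable name and file layout are assumptions):

# roles/databases/tasks/main.yml (excerpt): the special task file is gated
# on a variable that defaults to off, so normal runs never touch it.
- name: Run database backup tasks
  include_tasks: backup.yml
  when: run_db_backup | default(false) | bool

The wrapper script then just declares the variable, and can skip unrelated slow tasks too: ansible-playbook site.yml -e run_db_backup=true --skip-tags slow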

How to run some but not all tests in a Perl test suite in parallel?

I've got a Perl-based test suite with 10,000+ tests that I would like to make run faster. I've tested using the -j flag to prove, and I have found that most, but not all, of my tests are ready to run in parallel.
While I can work on making the remaining tests "parallel friendly", I expect there will always be some tests which are not. What's a good way to manage this? I would like it to be easy to run the whole set of tests efficiently, and easy to mark tests as "not-parallel-ready" when I need to.
Here are some options I see:
prove could be patched to support marking some tests as not-parallel-ready
Jenkins is being used to manage the test suite runs. I could split off the non-parallel tests into their own run. In other words, give up and use two test runs.
Perhaps there is a way to merge two TAP result streams together that I have yet to discover.
I'm not too concerned with how I will manage the list of exceptions. Either I can keep a list in a file as part of the test harness infrastructure, or I could put something in each test header that would mark it as such, and our test harness could determine the list of exceptions dynamically.
( The test suite is partially based on Test::Class, and I'll also be looking at Test::Class::Load to speed it up as well. )
I found a solution. It's in the documentation for TAP::Harness's aggregate_tests(), which includes a code sample showing how I could write my own harness for this purpose:
...This is useful, for example, in the case where some tests should
run in parallel but others are unsuitable for parallel execution.
my $formatter   = TAP::Formatter::Console->new;
my $ser_harness = TAP::Harness->new( { formatter => $formatter } );
my $par_harness = TAP::Harness->new(
    {   formatter => $formatter,
        jobs      => 9,
    }
);
my $aggregator = TAP::Parser::Aggregator->new;

$aggregator->start();
$ser_harness->aggregate_tests( $aggregator, @ser_tests );
$par_harness->aggregate_tests( $aggregator, @par_tests );
$aggregator->stop();
$formatter->summary($aggregator);
From there it looks like I could:
Sub-class App::Prove and override _runtests(), which is where the new functionality above could be merged in.
Fork prove so that it calls My::App::Prove instead of App::Prove.
Now that I better understand how the pieces fit together I can see how I might create a patch for prove that would add an option like --exclude-from-parallel FILE, which would allow you to specify a file, which contains a list of test files to be excluded from parallel testing.
UPDATE 2012-08-16: I have a patch for prove now and have submitted it for review. You can view and comment on the Pull Request. No summary is produced after the run output; it's not clear why.
I've now found the best solution so far to this problem. It turns out that prove has had undocumented support for marking some tests to be run in sequence instead of in parallel since 2008. It's backed by a rather fancy "rules" system in TAP::Parser::Scheduler that allows for complex specifications of ordering arrangements for parallel and sequential test runs.
Here's the basic current recipe for prove:
# All tests are allowed to run in parallel, except those starting with "p"
--rules='seq=t/p*.t' --rules='par=**'
I have a new pull request that adds documentation for this feature, and have started a discussion about possibly offering a simpler syntax for basic exceptions as well. See the pull request for details.
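For completeness, a full invocation using that recipe might look like this (the job count and recursion flag are assumptions):

prove -j9 -r --rules='seq=t/p*.t' --rules='par=**' t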
I found another solution which advertises this feature, but I could only get trivial cases to work: Test::Steering. It allows me to do this:
include_tests( { jobs => 4 }, @parallel_tests );
include_tests( @serial_tests );
With this solution, be aware:
Before it actually works, I currently have to patch the code to fix a basic bug that has remained unpatched for multiple years.
Additional code is needed to handle generating the lists of parallel and serial tests to run.
I didn't actually get a combined summary for my real-world test run... both sections emitted their own summary reports, so it didn't really work. Maybe I missed something, or maybe it's broken.
Test::Parallel also provides an easier way to run some tests in parallel.
Have a look at the sample from https://metacpan.org/pod/Test::Parallel
Another option: use a rules file for TAP::Harness.
You can build custom rules in a YAML file (called testrules.yml by default). I needed something similar to what you describe, which I was able to do with a testrules.yml file that looked like this:
---
seq:
  # tests that are not parallel-ready (will run in isolation)
  - seq:
      - t/test1.t
      - t/test2.t
  # tests that can run in parallel
  - par:
      # wildcard for everything else
      - **
In my case, I was using this with code that called App::Prove directly, rather than command-line prove. But I think it would work with prove too?
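For reference, driving it through App::Prove directly looks roughly like this (a sketch; the job count and test directory are assumptions, and testrules.yml is picked up from the current directory by default):

use App::Prove;

my $app = App::Prove->new;
$app->process_args( '-j4', 't' );
exit( $app->run ? 0 : 1 );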

Using SBT, how do you execute a task with a different Setting[T] value at runtime?

In my project's build definition, the SettingKey useProguard in the Android scope is set to true. This is what I want by default. When I execute one particular task, however, I want useProguard to be false. Everything in the Android scope comes from sbt-android-plugin.
I'm not sure how best to solve this problem. From what I've read, it seems like a command can get the job done, since it can execute a task with a different state than what your current session sees. I tried to create such a command like so:
def buildWithoutProguard = Command.command("build-without-proguard") { state =>
  val extracted = Project.extract(state)
  import extracted._

  val transformed = session.mergeSettings :+ (useProguard in Android := false)
  val newStructure = Load.reapply(transformed, structure)
  val newState = Project.setProject(session, newStructure, state)
  Project.evaluateTask(buildAndRun, newState)
  state
}
I'm appending the command to my project settings, and running the build-without-proguard command executes the buildAndRun task as desired. However, useProguard is still true instead of false, as I would expect.
First, this whole approach feels heavy-handed to me. Assuming changing sbt-android-plugin isn't an option here, how else would I solve this problem?
Second, why doesn't this approach work as is?
From what I understand of your question, you want the setting to be different for a dependency depending on what is depending on it. This doesn't make sense: a dependency either is satisfied or it isn't, and what depends on it doesn't come into the equation.
Your solution seems satisfactory to me. An alternative would be making two projects pointing to the same source, but with different proguard settings and different target directories, so one would build with proguard and the other without, and both would keep their own state. You'd then do whatever you want by just switching between the projects.
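A rough sketch of that alternative (hedged: androidSettings and the key names are assumptions about sbt-android-plugin, and the sbt 0.12-style syntax follows the question):

// Two projects over the same sources; only the proguard setting (and each
// project's own target directory) differ, so each keeps its own state.
lazy val app = Project("app", file("app"), settings =
  androidSettings ++ Seq(useProguard in Android := true))

lazy val appNoProguard = Project("app-no-proguard", file("app-no-proguard"), settings =
  androidSettings ++ Seq(
    useProguard in Android := false,
    // reuse the main project's source directory
    sourceDirectory <<= sourceDirectory in app))

Switching is then just project app-no-proguard followed by the usual build task.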