The Liquibase documentation says that it is possible to define a context for a changeset as an expression using AND, OR, ! and parentheses. But I have not found a way to pass a contexts="V1.0 AND V2.0" parameter to Liquibase via the command line, since every time I do that Liquibase generates an empty SQL file. This is how I try it:
.\liquibase --url=offline:mssql? `
--changeLogFile="C:\Users\Ferid\Documents\Box Sync\PRIVATE_Ferid\liquibase-3.5.5-bin\cl.xml" `
--outputFile="C:\Users\Ferid\Documents\Box Sync\PRIVATE_Ferid\liquibase-3.5.5-bin\output.sql" `
--contexts="V1.0 AND V2.0" `
updatesql
It works fine when I pass only one context, or when I use V1.0, V2.0 so that it generates all changesets that have one of the two versions, but I need to generate only the changesets which have both of these contexts (V1.0 AND V2.0).
Here is an example of how my changelog file's context attributes look:
<changeSet author="Ferid" id="1536838228609-1" context="V1.0"> ... </changeSet>
<changeSet author="Ferid" id="1536838228609-2" context="V2.0"> ...</changeSet>
<changeSet author="Ferid" id="1536838228609-3" context="V1.0 AND V2.0"> ...</changeSet>
I have tried different syntaxes but none of them worked for me. I am using liquibase 3.5.5.
Contexts are best used for things like environments (think DEV, STAGING, PRODUCTION). For what you are doing it is better to use labels.
Both labels and contexts can be used to control where and when changesets are applied to different environments. They are frequently used in conjunction with each other.
One key difference is expressed in this table:
              labels        contexts
in commands   expression    list
in changelog  list          expression
So, when you are defining your changelog, each changeset can have a 'labels' attribute that can contain a comma-separated list of labels, and a 'context' attribute that can contain a complex expression of contexts. Complex expressions are things like "qa or (acme_inc and dev)".
Conversely, when running a command (i.e. deploy/update), you can specify a complex expression for the labels, but only a list of contexts.
The blog post linked below goes into more depth, but in general labels are useful when you can simply enumerate/describe what a changeset is for while the deployment-time environment is complex to describe. Contexts are useful when the 'context' in which a changeset should be deployed is a complex decision that is best left to the changeset author rather than the deployer.
One place to learn more about how Liquibase handles these is in a blog post Nathan wrote: http://www.liquibase.org/2014/11/contexts-vs-labels.html
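As a sketch of what the label-based approach could look like for the changesets in the question (paths shortened; the labels attribute and the --labels command-line option are standard Liquibase 3.x, but verify against your version):

<changeSet author="Ferid" id="1536838228609-1" labels="V1.0"> ... </changeSet>
<changeSet author="Ferid" id="1536838228609-3" labels="V1.0,V2.0"> ... </changeSet>

.\liquibase --url=offline:mssql? `
--changeLogFile="cl.xml" `
--outputFile="output.sql" `
--labels="V1.0 and V2.0" `
updatesql

With the label expression "V1.0 and V2.0", only changesets carrying both labels end up in the generated SQL.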
In the Snakemake documentation, the include directive can incorporate all of the rules of another workflow into the main workflow, and apparently they can show up in snakemake --dag -n | dot -Tsvg > dag.svg. Sub-workflows, on the other hand, can be executed prior to the main workflow should you develop rules which depend on their output.
My question is: how are these two really different? Right now, I am working on a workflow, and it seems like I can get by on just using include and putting the name of the output in rule all of the main workflow. I could probably even place the output in the input of a main-workflow rule, making the included rules execute prior to that rule. Additionally, I can't visualize a DAG which includes the sub-workflow, for whatever reason. What do sub-workflows offer that the include directive can't do?
The include doesn't "incorporate another workflow". It just adds the rules from another file, as if you had added them by copy/paste (with the minor difference that include doesn't affect your target rule). A subworkflow, by contrast, has an isolated set of rules that work together to produce the final target file of that subworkflow. So it is well structured and isolated from both the main workflow and other subworkflows.
Anyway, my personal experience shows that there are some bugs in Snakemake that make using subworkflows quite difficult. Including the file is pretty straightforward and easy.
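To illustrate that file-level behaviour, a minimal sketch (the file and target names here are made up):

include: "rules/preprocess.smk"
include: "rules/plots.smk"

rule all:
    input:
        # targets produced by rules defined in the included files
        "results/clean_data.tsv",
        "results/summary_plot.pdf",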
I've never used subworkflows, but here's a case where it may be more convenient to use them rather than the include directive. (In theory, I think you don't need include or subworkflow at all, as you could write everything in a massive Snakefile; the point is more about convenience.)
Imagine you are writing a workflow that depends on result files from a published work (or from a previous project of yours). The authors did not make public the files you need, but they provide a Snakemake workflow to produce them. Their workflow may be quite complex and the files you need may be just intermediate steps. So instead of making sense of the whole workflow and parsing it into your own include directives, you use subworkflow to generate the required file(s). E.g.:
subworkflow jones_etal:
    workdir:
        "./jones_etal"
    snakefile:
        "./jones_etal/Snakefile"

rule all:
    input:
        'my_results.txt',

rule one:
    input:
        jones_etal('from_jones.txt'),
    output:
        'my_results.txt',
    ...
I'd like to create a branch restriction for a ClearCase preop merge trigger. However, it should fire not based on the exact branch type, but rather on whether the branch type follows a specific naming convention, like
.../my_special_branchname_prefix*
Can I do this, or do I have to list every branch separately?
I read in "cleartool man mktrtype" that a "branch-type-selector" can be used, but unfortunately I was not able to find comprehensive information on what it entails, i.e. whether it can be a version-selector pattern as used in a config spec (using e.g. the three-dot ellipsis), or even a globbing pattern, or whether it can only be an exact branch type name.
One way to check what you can do is to write a dummy preop trigger script which simply outputs the trigger environment variables (EVs).
That way, you can check whether one of those CLEARCASE_xxx environment variables holds the name of the branch you care about (do you mean the source or the destination branch of that merge?).
Once you see the right variable, you can enforce your policy by making sure the preop trigger script exits with a non-zero status (e.g. -1) when the name of that branch does not start with the expected prefix.
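As a sketch of such a dump-only trigger body (Python purely for illustration; the log path, the prefix, and the CLEARCASE_BRTYPE guess are assumptions to verify against the dump):

# Dummy preop trigger body: log every CLEARCASE_* environment variable so you
# can see which one carries the branch name during a merge.
import os
import sys

LOG = r"C:\temp\trigger_env.log"            # placeholder path
PREFIX = "my_special_branchname_prefix"     # naming convention to enforce

with open(LOG, "a") as log:
    for name in sorted(os.environ):
        if name.startswith("CLEARCASE_"):
            log.write("%s=%s\n" % (name, os.environ[name]))
    log.write("----\n")

# Step 2: once the dump shows which EV holds the branch name (possibly
# CLEARCASE_BRTYPE -- verify first), enforce the policy:
# branch = os.environ.get("CLEARCASE_BRTYPE", "")
# if not branch.startswith(PREFIX):
#     sys.exit(1)   # a non-zero exit from a preop trigger cancels the operation

sys.exit(0)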
I'm defining my own Puppet class, and I was wondering if it is possible to have an array variable which contains a list of all files in a specific directory. I was hoping for a syntax similar to the one below, but didn't find a way to make it work.
$dirs = Dir.entries('C:\\Program Files\\Java\\')
Does anyone know how to do this in a Puppet manifest?
Thanks!
I was wondering if it is possible to have an array variable which contains a list of all files in a specific directory.
Information about the current state of the machine to be configured is conveyed to the catalog compiler via facts. These are available to your classes as top-scope variables, and Puppet (or Facter, actually) provides ways to define your own custom facts. The Facter 3 manual documents this, and similar applies to earlier versions; do not overlook the rest of the Facter documentation, which has more relevant information on this topic.
On the other hand, information about the machine providing catalog-building services -- the master in a master / agent setup -- can be obtained by writing and calling a custom function. This is rarely what you actually want, but it's worth mentioning because you might one day want a custom function for some other purpose.
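As a sketch of the custom-fact route (the fact name java_dirs, the module layout, and the path are made up for illustration; custom facts are Ruby files shipped in a module's lib/facter directory):

# <module>/lib/facter/java_dirs.rb -- hypothetical custom fact
Facter.add(:java_dirs) do
  confine kernel: 'windows'
  setcode do
    dir = 'C:\Program Files\Java'
    # return directory entries, minus the '.' and '..' pseudo-entries
    File.directory?(dir) ? Dir.entries(dir).reject { |e| e.start_with?('.') } : []
  end
end

The class can then read it like any other fact:

$dirs = $facts['java_dirs']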
I'm looking through the Octopus PowerShell library and trying to identify a way to output all the variable names and their values used in a deployment - not for the project overall, but only for a specific deployment.
So say I have three variables like the ones below:
VariableOne Value1
VariableTwo Value2
VariableThree Value3
And I only use the first and third and want those printed with their names (VariableOne, VariableThree) and their values (Value1, Value3).
There is an option for outputting all the variables into the deployment log for debugging purposes.
Set one (or both) of the following in your project variables list:
OctopusPrintVariables True
OctopusPrintEvaluatedVariables True
I find that the latter of the two is generally sufficient.
This feature is written up at https://octopus.com/docs/how-to/debug-problems-with-octopus-variables
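If you only want a handful of specific variables rather than the full dump, a script step can also print them explicitly; a minimal PowerShell sketch using the variable names from the question (assumes a normal script step, where Octopus exposes the $OctopusParameters dictionary):

Write-Host "VariableOne   = $($OctopusParameters['VariableOne'])"
Write-Host "VariableThree = $($OctopusParameters['VariableThree'])"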
TL;DR: No. It can't.
It's something we tried as well, but Octopus Deploy has so many ways in which variables can be used: XPath into .config files, JSONPath into JSON files, direct references and inline scripts in the workflows, as well as direct references via the #{var} syntax.
None of these options track which variables were actually transformed or referenced; plus, some optional expansion may short-circuit.
I've asked Octopus whether they could extend the object model to detect requests for the values of a variable, so we could see which values were actually read, but that is currently not in place.
And they came back with the problem that the step scripts may actually change or override the values of variables between steps, so the value may actually change during the workflow, making tracking them even harder.
I have a Fortran project with some name conflicts (from Doxygen's point of view). Sometimes a local variable in a procedure may have the same name as a subroutine or function. For compilation/linking there are no problems, as the different definitions live separate lives; for instance:
progA/main.f defines and uses the variable delta.
libB/delta.f defines a function named delta.
progB/main.f uses the function delta defined in libB.
progB is linked with libB, progA is not linked with libB.
In this case, when generating call/caller graphs or linked source code, the variable delta in progA/main.f will be identified as the function delta. Is there some combination of Doxygen settings I can use to inform it that progA is not supposed to use definitions in libB, or something similar?
Another issue is that I may have functions/subroutines with the same name in different subdirectories. Again, as long as they are not linked together this is not a problem for compilation, but Doxygen cannot identify which of them is meant in links, calls, etc. Is there a workaround for this (without renaming procedures, that is)?