How to round trip ruamel.yaml strings like "on"

When using ruamel.yaml to round-trip some YAML I see the following issue. Given this input:
root:
  matchers:
    - select: "response.body#state"
      test: all
      expected: "on"
I see this output:
root:
  matchers:
    - select: response.body#state
      test: all
      expected: on
Note that in YAML, on parses as a boolean true value while off parses as false.
The following code is used to read/write:
# Use the default (round-trip) settings.
yaml = YAML()
if args.source == '-':
    src = sys.stdin
else:
    src = open(args.source)
doc = yaml.load(src)
process(args.tag, set(args.keep.split(',')), doc)
if args.destination == '-':
    dest = sys.stdout
else:
    dest = open(args.destination, 'w')
yaml.dump(doc, dest)
The process function is not modifying values. It only removes things with a special tag in the input after crawling the structure.
How can I get the output to be a string rather than a boolean?

You write that:
Note that in YAML, on parses as a boolean true value while off parses as false.
That statement is not true (or better: has not been true for ten years). If you have an unquoted on in your YAML, as in your output, you can easily see that this is not the case when using ruamel.yaml:
import sys
import ruamel.yaml
yaml_str = """\
root:
  matchers:
    - select: response.body#state
      test: all
      expected: on
"""
yaml = ruamel.yaml.YAML()
data = yaml.load(yaml_str)
expected = data['root']['matchers'][0]['expected']
print(type(expected), repr(expected))
which gives:
<class 'str'> 'on'
This is because in the YAML 1.2 spec, on/off/yes/no are no
longer mentioned as having the same meaning as true
and false respectively. They are mentioned in the YAML 1.1 spec, but that was
superseded in 2009. Unfortunately, there are YAML libraries in the
wild that have not been updated since then.
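PyYAML, for example, still resolves scalars according to YAML 1.1; a minimal demonstration (assuming the pyyaml package is installed):
import yaml  # PyYAML, which still resolves scalars per YAML 1.1

print(yaml.safe_load("expected: on"))   # {'expected': True}
print(yaml.safe_load("expected: off"))  # {'expected': False}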
What is actually happening is that the superfluous quotes in your
input are automatically discarded by the round-trip process. You can
also see that happen for the value "response.body#state". Although
that value includes the character that starts comments (#), to
actually start a comment that character has to be preceded by
white-space, and since it isn't, the quotes are not necessary.
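You can check that rule directly (a small sketch using ruamel.yaml's safe loader):
import ruamel.yaml

yaml = ruamel.yaml.YAML(typ='safe')
print(yaml.load("select: response.body#state"))   # {'select': 'response.body#state'}
print(yaml.load("select: response.body #state"))  # {'select': 'response.body'} -- ' #' starts a comment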
So your output is fine, but if you are in the unfortunate
situation where you have to deal with other programs relying on
outdated YAML 1.1, then you can e.g. specify that you want to preserve
your quotes on round-trip:
yaml_str = """\
root:
  matchers:
    - select: "response.body#state"
      test: all
      expected: "on"
"""
yaml = ruamel.yaml.YAML()
yaml.indent(sequence=4, offset=2)
yaml.preserve_quotes = True
data = yaml.load(yaml_str)
yaml.dump(data, sys.stdout)
as this gives your exact input:
root:
  matchers:
    - select: "response.body#state"
      test: all
      expected: "on"
However, maybe the better option would be to specify that your
YAML is, and has to be, conforming to the YAML 1.1 specification, making
your intentions, and the output document, explicit:
yaml_str = """\
root:
  matchers:
    - select: response.body#state
      test: all
      expected: on
"""
yaml_in = ruamel.yaml.YAML()
yaml_out = ruamel.yaml.YAML()
yaml_out.indent(sequence=4, offset=2)
yaml_out.version = (1, 1)
data = yaml_in.load(yaml_str)
yaml_out.dump(data, sys.stdout)
Notice that the "unquoted" YAML 1.2 input gives output where on is quoted:
%YAML 1.1
---
root:
  matchers:
    - select: response.body#state
      test: all
      expected: 'on'
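If only a few values are affected, another option (a sketch, not part of the original answer) is to wrap just those values in ruamel.yaml's SingleQuotedScalarString, so they are always emitted with quotes:
import sys
import ruamel.yaml
from ruamel.yaml.scalarstring import SingleQuotedScalarString

yaml = ruamel.yaml.YAML()
data = yaml.load("expected: on\n")
# force single quotes on this one value when dumping
data['expected'] = SingleQuotedScalarString(data['expected'])
yaml.dump(data, sys.stdout)  # -> expected: 'on'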

Related

Referencing json list value created in Rundeck Data Workflow step

Rundeck job: when I create data in a data workflow step as a JSON list
{
  "repo": ["repo1","repo2","repo3"],
  "myrepo": "repo4"
}
how can I access the elements in the list from an inline script in the next step?
#stub.repo[1]#
doesn't work
#stub.myrepo#
works fine
Data Workflow step executed
Script:
echo "value: #stub.repo[1]]#"
echo "value2: #stub.myrepo#"
Result:
value:
value2: repo4
The easiest way to catch that array is to use the jq-JSON mapper log filter plugin in any step, like the command step or script step (here are the releases, here is how to install the plugin, and here is how Log filters work).
Using this plugin you can use the array positions directly, e.g. ${data.data.0}, ${data.data.1}, etc.
Job definition example with your JSON output for testing.
- defaultTab: summary
  description: ''
  executionEnabled: true
  group: JSON
  id: f0d2843f-8de3-4984-a9ae-2fd7ab3963ae
  loglevel: INFO
  name: test-json-array
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - plugins:
        LogFilter:
        - config:
            filter: .[]
            logData: 'true'
            prefix: data
          type: json-mapper
      script: |-
        cat <<-END
        {
        "repo": ["repo1","repo2","repo3"],
        "myrepo": "repo4"
        }
        END
    - exec: echo ${data.data.0}
    keepgoing: false
    strategy: node-first
  uuid: f0d2843f-8de3-4984-a9ae-2fd7ab3963ae
More info about the plugin here.

Rundeck stop running steps based on global variable

I have a Rundeck job that executes multiple steps, each of which are Job References to other small jobs. The first step selects a server to upgrade, and sets a global variable with the server name. The remaining steps perform upgrade tasks. It is possible though for the first step to return NONE as the server name, and if that's the case I would like to halt execution right there without running the remaining steps, and I'd like the whole job to be marked as Successful.
I could just make that first job exit with an error code, but then the whole job looks failed, and it looks like there is something wrong with it, even though it successfully ran and found there was nothing to upgrade.
Any ideas? I'm finding "use a flow control step" everywhere, but I can't see how to make that work for my use case.
The best way to create complex workflows that depend on some output value is to use the Ruleset Strategy (Rundeck Enterprise). Take a look at this.
On the community version you can save the result of the first step in a key-value variable and do some "script-fu" in the following steps:
Step 1: print the status and save it in a data variable using the key-value data log filter.
Steps 2, 3, 4: capture the key-value data, and then each step can continue or not.
I made an example easy to import to your instance for testing:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 27de501a-8bb2-4c6e-a5f9-0676e80ca75a
  loglevel: INFO
  name: HelloWorld
  nodeFilterEditable: false
  options:
  - enforced: true
    name: opt1
    required: true
    value: 'true'
    values:
    - 'true'
    - 'false'
    valuesListDelimiter: ','
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo "url=${option.opt1}"
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'true'
            name: result
            regex: .*=\s*(.+)$
          type: key-value-data
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "#data.result#" = "true" ]; then
          echo "step two"
        fi
      scriptInterpreter: /bin/bash
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "#data.result#" = "true" ]; then
          echo "step three"
        fi
      scriptInterpreter: /bin/bash
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "#data.result#" = "true" ]; then
          echo "step four"
        fi
      scriptInterpreter: /bin/bash
    keepgoing: false
    strategy: node-first
  uuid: 27de501a-8bb2-4c6e-a5f9-0676e80ca75a
MegaDrive68k's answer is the best you can do with the basic open source version, or if you have the Enterprise version.
But you can also create your own plugin, or fork an existing one.
Which I did with the official flow control plugin, adding conditions.
You can fork this plugin and add two new @PluginProperty fields in the Java code (these add two new input fields to the plugin's parameters in the Rundeck interface) and compare their values.
Example:
@PluginProperty(title = "First Value", description = "Compare this", required = true)
String value1;
@PluginProperty(title = "Second Value", description = "To this", required = true)
String value2;
Comparison of String values (which is your case):
if (value1.equals(value2)) {...}
Comparison of numeric values (the properties arrive as Strings, so parse them first):
if (Integer.parseInt(value1) == Integer.parseInt(value2)) {...}
If you want to stop the job with a successful status (it does not stop the parent job, just the current one):
context.getFlowControl().Halt(true);
If you want to stop the job with a failed status:
context.getFlowControl().Halt(false);
If you want to stop the job with a customized status:
context.getFlowControl().Halt("MY CUSTOM STATUS");
And finally, if you want to continue and not stop:
context.getFlowControl().Continue();
So a complete example (add this to your public class):
@PluginProperty(title = "First Value", description = "Compare this", required = true)
String value1;

@PluginProperty(title = "Second Value", description = "To this", required = true)
String value2;

@Override
public void executeStep(final PluginStepContext context, final Map<String, Object> configuration)
        throws StepException {
    if (value1.equals(value2)) {
        // halt the current job without marking it failed
        context.getFlowControl().Halt(true);
    } else {
        // continue
        context.getFlowControl().Continue();
    }
}
Then create your jar file and place it in the libext folder.
Now you can add your custom step. Put your global var in the first field and "NONE" in the second field.
If the global var contains "NONE", the job stops with a successful status at this step. If you call a job containing this step from another (parent) job, the parent job continues.
If you want, you can use this fork of the plugin, which already includes these modifications.

How to properly use expression results as booleans on Azure Devops pipelines?

I'm trying to use an or expression to define a boolean on a template as follows:
parameters:
- name: A
  default: true
- name: B
  default: false

stages:
- template: bacon.yml#template
  parameters:
    booleanParameter: or(eq(${{ parameters.A }}, true), eq(${{ parameters.B }}, true))
In my head, it should work just fine, yet I keep getting this same error:
The 'booleanParameter' parameter value 'or(eq(True, true), eq(False, true))' is not a valid Boolean.
I've tried some small variations of syntax, all of them resulting in the same error.
What am I missing here?
You should use a template expression to wrap the whole expression:
booleanParameter: ${{ or(eq(parameters.A, true), eq(parameters.B, true)) }}
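That way the or(...) is evaluated at template-expansion time, and the template receives an actual boolean. With A: true and B: false, the call conceptually expands to something like:
stages:
- template: bacon.yml#template
  parameters:
    booleanParameter: true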

How to prevent pulumi from stripping newlines from output?

I am using the Typescript API of pulumi. I noticed that when I invoke console.log("\n\n"), pulumi strips out the newlines. I want to keep these newlines to improve the readability of the deployment log.
Is there a way to instruct pulumi to keep newlines in the output log?
The current behavior of the Pulumi CLI is to break your messages into lines (split by \n), trim every line, drop empty lines, and display the result.
Although ugly, you could force your line breaks with an extra "zero-width space" character:
console.log("Top line");
console.log("\u200B\n\u200B\n\u200B");
console.log("There will be three empty lines before this line");
You could use something simpler, like _, instead of the zero-width space; obviously, underscores will be visible.
Track this issue for further progress.
Pulumi should not be stripping newlines or otherwise manipulating your console.log() output. I just tested this and my string with newlines was printed as expected with newlines.
Code
import * as aws from "@pulumi/aws";

const bucket = new aws.s3.Bucket("main", {
    acl: "private",
});

bucket.onObjectCreated("logger", new aws.lambda.CallbackFunction<aws.s3.BucketEvent, void>("loggerFn", {
    memorySize: 128,
    callback: (e) => {
        for (const rec of e.Records || []) {
            const [buck, key] = [rec.s3.bucket.name, rec.s3.object.key];
            console.log(`Object created: ${buck}/${key}`);
        }
    },
}));

console.log(`My
multi-line
string`);

export const bucketName = bucket.bucket;
Output
$ pulumi up -y
Previewing update (dev):

     Type                      Name                        Plan       Info
 +   pulumi:pulumi:Stack       demo-aws-ts-serverless-dev  create     3 ...
 +   └─ aws:lambda:Function    loggerFn                    create

Diagnostics:
  pulumi:pulumi:Stack (demo-aws-ts-serverless-dev):
    My
    multi-line
    string

Resources:
    + 8 to create

Updating (dev):

     Type                      Name                        Status     Info
 +   pulumi:pulumi:Stack       demo-aws-ts-serverless-dev  created    ...
 +   └─ aws:lambda:Function    loggerFn                    created

Diagnostics:
  pulumi:pulumi:Stack (demo-aws-ts-serverless-dev):
    My
    multi-line
    string

Outputs:
    bucketName: "main-b568df3"

Resources:
    + 8 created
...

YAML Error: could not determine a constructor for the tag

This is very similar to questions/44786412 but mine appears to be triggered by YAML safe_load(). I'm using Ruamel's library and YamlReader to glue a bunch of CloudFormation pieces together into a single, merged template. Is bang-notation just not proper YAML?
Outputs:
  Vpc:
    Value: !Ref vpc
    Export:
      Name: !Sub "${AWS::StackName}-Vpc"
No problem with these:
Outputs:
  Vpc:
    Value:
      Ref: vpc
    Export:
      Name:
        Fn::Sub: "${AWS::StackName}-Vpc"
Resources:
  vpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock:
        Fn::FindInMap: [ CidrBlock, !Ref "AWS::Region", Vpc ]
Part 2: how do I get load() to leave what's to the right of the Fn::Select: alone?
FromPort:
  Fn::Select: [ 0, Fn::FindInMap: [ Service, https, Ports ] ]
gets converted to this, which CloudFormation now doesn't like:
FromPort:
  Fn::Select: [0, {Fn::FindInMap: [Service, https, Ports]}]
If I unroll the statement fully then there are no problems. I guess the shorthand is just problematic.
FromPort:
  Fn::Select:
  - 0
  - Fn::FindInMap: [Service, ssh, Ports]
Your "bang notation" is proper YAML, normally this is called a tag. If you want to use the safe_load() with those you'll have to provide constructors for the !Ref and !Sub tags, e.g. using:
ruamel.yaml.add_constructor(u'!Ref', your_ref_constructor, constructor=ruamel.yaml.SafeConstructor)
where for both tags you should expect to handle scalars a value. and not the more common mapping.
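A minimal sketch of such a constructor (your_ref_constructor is the placeholder name from the call above, not part of the ruamel.yaml API):
import ruamel.yaml

def your_ref_constructor(constructor, node):
    # short-form CloudFormation tags like !Ref carry a scalar value
    return constructor.construct_scalar(node)

ruamel.yaml.add_constructor(u'!Ref', your_ref_constructor,
                            constructor=ruamel.yaml.SafeConstructor)

print(ruamel.yaml.safe_load('Value: !Ref vpc'))  # {'Value': 'vpc'}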
I recommend you use the RoundTripLoader instead of the SafeLoader, that will preserve order, comments, etc. as well. The RoundTripLoader is a subclass of the SafeLoader.
If you are using ruamel.yaml>=0.15.33, which supports round-tripping scalars, you can do (using the new ruamel.yaml API):
import sys
from ruamel.yaml import YAML

yaml = YAML()
yaml.preserve_quotes = True
data = yaml.load("""\
Outputs:
  Vpc:
    Value: !Ref: vpc       # first tag
    Export:
      Name: !Sub "${AWS::StackName}-Vpc"   # second tag
""")
yaml.dump(data, sys.stdout)
to get:
Outputs:
  Vpc:
    Value: !Ref: vpc       # first tag
    Export:
      Name: !Sub "${AWS::StackName}-Vpc"   # second tag
In older 0.15.X versions, you'll have to specify the classes for the scalar objects yourself. This is cumbersome if you have many objects, but allows for additional functionality:
import sys
from ruamel.yaml import YAML


class Ref:
    yaml_tag = u'!Ref:'

    def __init__(self, value, style=None):
        self.value = value
        self.style = style

    @classmethod
    def to_yaml(cls, representer, node):
        return representer.represent_scalar(cls.yaml_tag,
                                            u'{.value}'.format(node), node.style)

    @classmethod
    def from_yaml(cls, constructor, node):
        return cls(node.value, node.style)

    def __iadd__(self, v):
        self.value += str(v)
        return self


class Sub:
    yaml_tag = u'!Sub'

    def __init__(self, value, style=None):
        self.value = value
        self.style = style

    @classmethod
    def to_yaml(cls, representer, node):
        return representer.represent_scalar(cls.yaml_tag,
                                            u'{.value}'.format(node), node.style)

    @classmethod
    def from_yaml(cls, constructor, node):
        return cls(node.value, node.style)


yaml = YAML(typ='rt')
yaml.register_class(Ref)
yaml.register_class(Sub)

data = yaml.load("""\
Outputs:
  Vpc:
    Value: !Ref: vpc       # first tag
    Export:
      Name: !Sub "${AWS::StackName}-Vpc"   # second tag
""")
data['Outputs']['Vpc']['Value'] += '123'
yaml.dump(data, sys.stdout)
which gives:
Outputs:
  Vpc:
    Value: !Ref: vpc123    # first tag
    Export:
      Name: !Sub "${AWS::StackName}-Vpc"   # second tag