Is there a way to transform this example:
myhash:
- name: name1
  value: value1
myhash:
- name: name2
  value: value2
into:
myhash:
- name: name1
  value: value1
- name: name2
  value: value2
I noticed that by default YAML transforms it into:
myhash:
- name: name2
  value: value2
In the YAML 1.2 specification it is stated that "mapping - an unordered association of unique keys to values" (emphasis mine). Your keys are not unique and what happens because of that depends on the library implementation (throw an error, ignore one of the keys).
What your parser evidently does is throw away the first key/value pair. What you want cannot be achieved by loading the first example with a YAML parser. You can of course write a utility that splits up the text without using a YAML parser, as sketched below.
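A minimal sketch of such a utility (assuming, on my part, that only top-level keys are duplicated and that each duplicate's entries should be appended to the first occurrence): it drops repeated top-level key lines before handing the text to the parser, which then merges the entries.

import yaml

text = """\
myhash:
- name: name1
  value: value1
myhash:
- name: name2
  value: value2
"""

seen = set()
lines = []
for line in text.splitlines():
    # a top-level key line: no leading space, not a list item, ends with ':'
    if not line.startswith((' ', '-')) and line.endswith(':'):
        if line in seen:
            continue  # drop the repeated key line, keep its indented entries
        seen.add(line)
    lines.append(line)

print(yaml.safe_load('\n'.join(lines)))
# {'myhash': [{'name': 'name1', 'value': 'value1'},
#             {'name': 'name2', 'value': 'value2'}]}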
Please note that in YAML 1.1:
It is an error for two equal keys to appear in the same mapping node. In such a case the YAML processor may continue, ignoring the second key: value pair and issuing an appropriate warning.
This is, for example, not how the YAML 1.1 parser PyYAML works: it doesn't ignore the value for the second (or any following) key, nor does it issue a warning.
I use kubectl to list Kubernetes custom resources of a kind mykind with an additional table column LABEL that contains the value of a label a.b.c.com/key if present:
kubectl get mykind -o=custom-columns=LABEL:.metadata.labels.'a\.b\.c\.com/key'
This works, i.e., the label value is properly displayed.
Subsequently, I wanted to add a corresponding additional printer column to the custom resource definition of mykind:
- description: Label value
  jsonPath: .metadata.labels.'a\.b\.c\.com/key'
  name: LABEL
  type: string
Although the additional column is added to kubectl get mykind, it is empty and no label value is shown (in contrast to the kubectl command above). My only suspicion was a problem with the escaping of the special characters, but no variation helped.
Are you aware of any difference between the JSON path handling in kubectl and in additional printer columns? I strongly expected them to be exactly the same.
mdaniel's comment works!
- description: Label value
  jsonPath: '.metadata.labels.a\.b\.c\.com/key'
  name: LABEL
  type: string
You need to use \. instead of . and use single quotes ' '. It doesn't work with double quotes, for reasons I don't fully understand (presumably because YAML processes backslash escape sequences inside double-quoted strings, where \. is not a valid escape, while single-quoted strings keep the backslash literally).
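A quick way to see the quoting difference (a sketch using PyYAML purely to illustrate the YAML quoting rules):

import yaml

# single quotes: the backslash survives, so the path keeps a\.b\.c\.com
print(yaml.safe_load(r"jsonPath: '.metadata.labels.a\.b\.c\.com/key'"))
# {'jsonPath': '.metadata.labels.a\\.b\\.c\\.com/key'}

# double quotes: \. is treated as an escape sequence and rejected
try:
    yaml.safe_load(r'jsonPath: ".metadata.labels.a\.b\.c\.com/key"')
except yaml.YAMLError as exc:
    print(exc)  # unknown escape character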
I need to comment a key/value pair entirely using ruamel.yaml. Something like:
import sys
from ruamel.yaml import YAML
inp = """\
# example
foo: bar
"""
yaml = YAML()
code = yaml.load(inp)
code['foo'].comment() # or whatever, can't seem to find a way to do this with existing api
yaml.dump(code, sys.stdout)
Output:
# foo: bar
Of course, for multiline YAML key/value pairs it would need to comment out the entire value:
foo:
- item1
- item2
to
# foo:
# - item1
# - item2
You won't be able to do your example using the existing routines for adding comments. In ruamel.yaml these comments are attached to either a dict (i.e. CommentedMap) or a list (CommentedSeq) and your end-result has neither.
You would need to dump the loaded code and, using the transform parameter of .dump(), add the start-of-line # sequence.
(Although you don't need it here, it is possible to do this for some subpart of a data structure loaded from a YAML document: you would dump the key/value pair as a new dict (again with the transform parameter) and insert the result as a comment on the preceding key, prepending a newline.)
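For the whole-document case, a minimal sketch of that transform approach (this comments out everything that gets dumped):

import sys
from ruamel.yaml import YAML

inp = """\
foo:
- item1
- item2
"""

def comment_out(s):
    # prepend '# ' to every line of the dumped document
    return ''.join('# ' + line for line in s.splitlines(True))

yaml = YAML()
code = yaml.load(inp)
yaml.dump(code, sys.stdout, transform=comment_out)
# # foo:
# # - item1
# # - item2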
In this learning exercise I want to use a PyPlate script to provision the BucketA, BucketB and BucketC buckets in addition to the TestBucket.
Imagine that the BucketNames parameter could be set by a user of this template who would specify a hundred bucket names using UUIDs for example.
AWSTemplateFormatVersion: "2010-09-09"
Transform: [PyPlate]
Description: A stack that provisions a bunch of s3 buckets based on param names
Parameters:
  BucketNames:
    Type: CommaDelimitedList
    Description: All bucket names that should be created
    Default: BucketA,BucketB,BucketC
Resources:
  TestBucket:
    Type: "AWS::S3::Bucket"
  #!PyPlate
  output = []
  bucket_names = params['BucketNames']
  for name in bucket_names:
    output.append('"' + name + '": {"Type": "AWS::S3::Bucket"}')
When deployed, the above fails with a Template format error: YAML not well-formed. (line 15, column 3)
Although the accepted answer is functionally correct, there is a better way to approach this.
Essentially, PyPlate code recursively reads through all the key/value pairs of the stack and replaces the values that match the #!PyPlate regex with their Python-computed values. So we need a corresponding key for the PyPlate code, as the sketch below illustrates.
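A rough sketch of that recursive replacement (not PyPlate's actual source; the context names mirror the variables PyPlate exposes):

def walk(node, context):
    # visit mappings and lists recursively; execute any string value
    # starting with the #!PyPlate marker and replace it with whatever
    # the executed code assigns to `output`
    if isinstance(node, dict):
        return {k: walk(v, context) for k, v in node.items()}
    if isinstance(node, list):
        return [walk(v, context) for v in node]
    if isinstance(node, str) and node.startswith("#!PyPlate"):
        scope = dict(context)  # params, template, account_id, region
        exec(node, scope)
        return scope["output"]
    return node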
Here's how a combination of PyPlate and Explode would solve the above problem.
AWSTemplateFormatVersion: "2010-09-09"
Transform: [PyPlate, Explode]
Description: A stack that provisions a bunch of s3 buckets based on param names
Parameters:
  BucketNames:
    Type: CommaDelimitedList
    Description: All bucket names that should be created
    Default: BucketA,BucketB,BucketC
Mappings: {}
Resources:
  MacroWrapper:
    ExplodeMap: |
      #!PyPlate
      param = "BucketNames"
      mapNamespace = param + "Map"
      template["Mappings"][mapNamespace] = {}
      for bucket in params[param]:
        template["Mappings"][mapNamespace][bucket] = {
          "ResourceName": bucket
        }
      output = mapNamespace
    Type: "AWS::S3::Bucket"
  TestBucket:
    Type: "AWS::S3::Bucket"
This approach is powerful because:
You can append resources to an existing template, because you don't tamper with the whole Resources block.
You don't need to rely on hardcoded Mappings, as Explode alone would require; you can drive dynamic logic in CloudFormation.
Most of the CloudFormation properties/key-value pairs remain in the YAML part, with a minimal Python part that augments the CloudFormation functionality.
Please be aware of the macro order, though: PyPlate needs to be executed before Explode, which is why the order is [PyPlate, Explode]. The execution is sequential.
If we walk through the source code of PyPlate, it gives us control of more template-related variables to work with, namely:
params (stack parameters)
template (the entire template)
account_id
region
I utilised the template variable in this case.
Hope this helps
This works for me:
AWSTemplateFormatVersion: "2010-09-09"
Transform: [PyPlate]
Description: A stack that provisions a bunch of s3 buckets based on param names
Parameters:
  BucketNames:
    Type: CommaDelimitedList
    Description: All bucket names that should be created
    Default: BucketA,BucketB,BucketC
Resources:
  |
    #!PyPlate
    output = {}
    bucket_names = params['BucketNames']
    for name in bucket_names:
      output[name] = {"Type": "AWS::S3::Bucket"}
Explanation:
The python code outputs a dict object where the key is the bucket name and the value is its configuration:
{'BucketA': {'Type': 'AWS::S3::Bucket'}, 'BucketB': {'Type': 'AWS::S3::Bucket'}}
Prior to macro execution, the YAML template is transformed into JSON format, and because the above is valid JSON data, I can plug it in as the value of Resources.
(Note that having the hardcoded TestBucket won't work with this and I had to remove it)
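To see why that dict can be spliced in directly once the template is JSON (a small sketch with the default bucket names hardcoded for illustration):

import json

# the dict the #!PyPlate snippet assigns to `output`
output = {name: {"Type": "AWS::S3::Bucket"}
          for name in "BucketA,BucketB,BucketC".split(",")}

# serializes to valid JSON, so it can stand in for the Resources value
print(json.dumps(output, indent=2))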
I'm updating my API spec (OAS 3.0.0), and am having trouble understanding how to properly model a "complex" default value.
In general, default values for parameters are scalar values (e.g. the offset field has a default value of 0). But in the API I'm spec'ing, the default value is actually calculated based on other provided parameters.
For example, what if we take the Pet model from the example documentation, and decide that all animals need to be tagged. If the user of the API wants to supply a tag, great. If not, it will be equal to the name.
One possibility:
Pet:
  required:
    - id
    - name
  properties:
    id:
      type: integer
      format: int64
    name:
      type: string
    tag:
      type: string
      default: '#/components/schemas/Pet/name'
This stores the path value as the default, but I'd like it to convey that the default value will be calculated.
Bonus points if I can encode information from a parent schema.
Is the alternative to just describe the behavior in a description field?
OpenAPI Specification does not support dynamic/conditional defaults. You can only document the behavior verbally in the description.
That said, you can use specification extensions (x-...) to add custom information to your definitions, like so:
tag:
  type: string
  x-default: name
or
tag:
  type: string
  x-default:
    propertyName: name
    # or similar
and extend the tooling to support your custom extensions.
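For instance, a post-processing step in your own tooling could interpret such an extension like this (a hypothetical sketch; apply_defaults and the inline schema dict are illustrative, not part of any OpenAPI library):

def apply_defaults(obj, schema):
    # copy the value of the property named by x-default whenever the
    # target property is missing (x-default is our custom extension)
    for prop, spec in schema.get("properties", {}).items():
        if prop not in obj and "x-default" in spec:
            obj[prop] = obj.get(spec["x-default"])
    return obj

pet_schema = {"properties": {"tag": {"type": "string", "x-default": "name"}}}
print(apply_defaults({"id": 1, "name": "Rex"}, pet_schema))
# {'id': 1, 'name': 'Rex', 'tag': 'Rex'}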
I have a huge YAML file with tag definitions like in this snippet
- !!python/object:manufacturer.Manufacturer
  name: aaaa
  address: !!python/object:address.BusinessAddress {street: bbbb, number: 123, city: cccc}
And I needed to load this, first to make sure that the file is correct YAML, and second to extract information at a certain tree depth given a certain context. If I had this all as nested dicts, lists and primitives, that would be straightforward to do. But I cannot load the file as I don't have the original Python sources and class definitions, so yaml.load() is out.
I have tried yaml.safe_load(), but that throws an exception.
The BaseLoader loads the file, so it is correct YAML. But that jumbles all primitive information (numbers, datetimes) together as strings.
Then I found How to deserialize an object with PyYAML using safe_load?, but since the file has over 100 different tags defined, the solution presented there is impractical.
Do I have to use some other tool to strip the !!tag definitions (there is at least one occurrence where !! appears inside a normal string) so I can use safe_load? Is there a simpler way to solve this that I am not aware of?
If not, I will have to do some string parsing to get the types back, but I thought I'd ask here first.
There is no need to go the cumbersome route of adding any of the classes if you want to use safe_load() on such a file.
You should have gotten a ConstructorError thrown in SafeConstructor.construct_undefined() in constructor.py. That method gets registered for the fall-through case None in the constructor.py file.
If you combine that info with the fact that all such tagged "classes" are mappings (and not lists or scalars), you can just copy the code for mappings into a new function and register that as the fall-through case.
import yaml
from yaml.constructor import SafeConstructor

def my_construct_undefined(self, node):
    # construct any unknown-tagged node as a plain mapping; the
    # two-step create-then-update supports self-referential structures
    data = {}
    yield data
    value = self.construct_mapping(node)
    data.update(value)

# register as the fall-through constructor for all otherwise
# unhandled tags (the None key)
SafeConstructor.add_constructor(
    None, my_construct_undefined)
yaml_str = """\
- !!python/object:manufacturer.Manufacturer
  name: aaaa
  address: !!python/object:address.BusinessAddress {street: bbbb, number: 123, city: cccc}
"""
data = yaml.safe_load(yaml_str)
print(data)
should get you:
[{'name': 'aaaa', 'address': {'city': 'cccc', 'street': 'bbbb', 'number': 123}}]
without an exception being thrown, and with "number" as an integer, not a string.