Merging two data structures with Ansible - merge

Ansible v2.9.25
I'm trying to merge two data structures with Ansible. I'm almost there, but I'm not able to merge all the data. Let me explain:
I want to merge main_roles together with default_roles:
main_roles:
  - name: admin
    role_ref: admin
    subjects:
      - name: test
        kind: ServiceAccount
      - name: test2
        kind: ServiceAccount
  - name: edit
    role_ref: edit
    subjects:
      - name: test
        kind: ServiceAccount

default_roles:
  - name: edit
    role_ref: edit
    subjects:
      - name: merge_me
        kind: ServiceAccount
I'm successfully combining with the following task:
- name: "Setting var roles_managed"
  set_fact:
    roles_managed: "{{ roles_managed | default([]) + [item | combine(default_roles | selectattr('name', 'match', item.name) | list)] }}"
  loop: "{{ main_roles }}"
Printing the variable in a loop:
- name: "print all roles"
  ansible.builtin.debug:
    msg: "{{ item }}"
  loop: "{{ roles_managed }}"
ok: [] => (item={u'subjects': [{u'kind': u'ServiceAccount', u'name': u'test'}, {u'kind': u'ServiceAccount', u'name': u'test2'}], u'name': u'admin', u'role_ref': u'admin'}) => {
    "msg": {
        "name": "admin",
        "role_ref": "admin",
        "subjects": [
            {
                "kind": "ServiceAccount",
                "name": "test"
            },
            {
                "kind": "ServiceAccount",
                "name": "test2"
            }
        ]
    }
}
ok: [] => (item={u'subjects': [{u'kind': u'ServiceAccount', u'name': u'merge_me'}], u'name': u'edit', u'role_ref': u'edit'}) => {
    "msg": {
        "name": "edit",
        "role_ref": "edit",
        "subjects": [
            {
                "kind": "ServiceAccount",
                "name": "merge_me"
            }
        ]
    }
}
This combines on item.name, but I also want the subjects lists to be merged. So I would need an end result containing both merge_me and test (the subjects under name: edit):
- name: edit
  role_ref: edit
  subjects:
    - name: merge_me
      kind: ServiceAccount
    - name: test
      kind: ServiceAccount
As I understand it, Ansible does not merge recursively by default, so I would need to set recursive=true in the combine filter. See: Combining hashes/dictionaries.
But I'm not able to set this successfully in my context.
When I try {{ roles_managed | default([]) + [ item | combine(default_role_bindings, recursive=true|selectattr('name', 'match', item.name) |list)] }}, for example, I get a "'bool' object is not iterable" error...
I've tried many variations and searched many other posts, but I'm still unsuccessful after probably too many hours ;). Hoping someone has a solution!

Both main_roles and default_roles are lists. It's not possible to combine lists directly. Instead, it's possible to add them and group by name, then combine the dictionaries that share the same name, e.g.
- set_fact:
    my_roles: "{{ my_roles|d([]) + [item.1|combine(list_merge='append')] }}"
  loop: "{{ (main_roles + default_roles)|groupby('name') }}"
gives
my_roles:
  - name: admin
    role_ref: admin
    subjects:
      - kind: ServiceAccount
        name: test
      - kind: ServiceAccount
        name: test2
  - name: edit
    role_ref: edit
    subjects:
      - kind: ServiceAccount
        name: test
      - kind: ServiceAccount
        name: merge_me
Use list_merge='append', available since 2.10, to append the items of the lists.
If that option is not available in your version, append the subjects yourself; the task below gives the same result:
- set_fact:
    my_roles: "{{ my_roles|d([]) + [item.1.0|combine({'subjects': _subj})] }}"
  loop: "{{ (main_roles + default_roles)|groupby('name') }}"
  vars:
    _subj: "{{ item.1|map(attribute='subjects')|flatten }}"
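Outside Ansible, the group-by-name-and-append idea behind both tasks can be sketched in plain Python (a rough sketch of the logic, not the Jinja2 implementation):

```python
from itertools import groupby

main_roles = [
    {"name": "admin", "role_ref": "admin",
     "subjects": [{"name": "test", "kind": "ServiceAccount"},
                  {"name": "test2", "kind": "ServiceAccount"}]},
    {"name": "edit", "role_ref": "edit",
     "subjects": [{"name": "test", "kind": "ServiceAccount"}]},
]
default_roles = [
    {"name": "edit", "role_ref": "edit",
     "subjects": [{"name": "merge_me", "kind": "ServiceAccount"}]},
]

merged = []
# add the lists, then group entries that share the same name
by_name = sorted(main_roles + default_roles, key=lambda r: r["name"])
for name, group in groupby(by_name, key=lambda r: r["name"]):
    group = list(group)
    role = dict(group[0])
    # append (rather than overwrite) the subjects of every entry
    role["subjects"] = [s for r in group for s in r["subjects"]]
    merged.append(role)

print([r["name"] for r in merged])
```

This mirrors what groupby('name') plus combine(list_merge='append') does: scalar keys are taken from one entry, while the subjects lists are concatenated.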


Tekton YAML TriggerTemplate - string substitution

I have this kind of YAML file to define a trigger:
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: app-template-pr-deploy
spec:
  params:
    - name: target-branch
    - name: commit
    - name: actor
    - name: pull-request-number
    - name: namespace
  resourcetemplates:
    - apiVersion: tekton.dev/v1alpha1
      kind: PipelineRun
      metadata:
        generateName: app-pr-$(tt.params.actor)-
        labels:
          actor: $(tt.params.actor)
      spec:
        serviceAccountName: myaccount
        pipelineRef:
          name: app-pr-deploy
        podTemplate:
          nodeSelector:
            location: somelocation
        params:
          - name: branch
            value: $(tt.params.target-branch)
          - name: namespace
            value: $(tt.params.target-branch)
          - name: commit
            value: $(tt.params.commit)
          - name: pull-request-number
            value: $(tt.params.pull-request-number)
        resources:
          - name: app-cluster
            resourceRef:
              name: app-location-cluster
The issue is that sometimes target-branch is something like "integration/feature", and then the namespace is not valid.
I would like to check whether the value contains an invalid character and replace it if it does.
Is there any way to do this?
I didn't find any workable approach besides creating a task that does this via a shell script later in the pipeline.
This is something you could do from your EventListener, using something such as:
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: xx
spec:
  triggers:
    - name: demo
      interceptors:
        - name: addvar
          ref:
            name: cel
          params:
            - name: overlays
              value:
                - key: branch_name
                  expression: "body.ref.split('/')[2]"
      bindings:
        - ref: your-triggerbinding
      template:
        ref: your-triggertemplate
Then, from your TriggerTemplate, you would add the "branch_name" param, parsed from your EventListener.
Note: the payload of a git notification may vary; the sample above is valid for GitHub. It translates remote/origin/master into master, or abc/def/ghi/jkl into ghi.
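The CEL overlay's split('/')[2] behaves like the equivalent Python expression, which makes it easy to check against sample refs:

```python
def branch_name(ref: str) -> str:
    # same indexing as the CEL expression body.ref.split('/')[2]
    return ref.split("/")[2]

print(branch_name("remote/origin/master"))  # master
print(branch_name("abc/def/ghi/jkl"))       # ghi
```

Note that a ref with fewer than three segments would raise an IndexError here (and fail similarly in CEL), so this only suits payloads with a fixed ref layout.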
I've created a separate task that does all the magic I needed and outputs a valid namespace name into a different variable.
Then, instead of using the namespace variable, I use valid-namespace all the way through the pipeline.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: validate-namespace-task-v1
spec:
  description: >-
    This task will validate namespaces
  params:
    - name: namespace
      type: string
      default: undefined
  results:
    - name: valid-namespace
      description: this should be a valid namespace
  steps:
    - name: triage-validate-namespace
      image: some-image:0.0.1
      script: |
        #!/bin/bash
        echo -n "$(params.namespace)" | sed "s/[^[:alnum:]-]/-/g" | tr '[:upper:]' '[:lower:]' | tee $(results.valid-namespace.path)
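The sed/tr pipeline in that step can be mirrored in Python, which is handy for testing the sanitization rule outside the cluster (a sketch of the same transformation, not part of the Task):

```python
import re

def valid_namespace(name: str) -> str:
    # replace anything that is not alphanumeric or '-' with '-',
    # then lowercase: the same effect as
    # sed "s/[^[:alnum:]-]/-/g" | tr '[:upper:]' '[:lower:]'
    return re.sub(r"[^0-9a-zA-Z-]", "-", name).lower()

print(valid_namespace("integration/feature"))  # integration-feature
```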
Thanks

In a kubernetes operator, what's the correct way to trigger reconciliation on another resource type?

I have written an Ansible operator using the operator-sdk that assists with our cluster onboarding: from a single ProjectDefinition resource, the operator provisions a namespace, groups, rolebindings, resourcequotas, and limitranges.
The resourcequotas and limitranges are defined in a ProjectQuota resource and are referenced by name in the ProjectDefinition.
When the content of a ProjectQuota resource is modified, I want to ensure that those changes propagate to any managed namespaces that use the named quota. Right now, I'm doing it this way:
1. Calculate a content hash for the ProjectQuota.
2. Look up namespaces that reference the quota using a label selector.
3. Annotate the discovered namespaces with the content hash.
The namespace update triggers a reconciliation of the associated ProjectDefinition because of the ownerReference on the namespace, and this in turn propagates the changes in the ProjectQuota.
In other words, the projectquota role does this:
---
# tasks file for ProjectQuota
- name: Calculate content_hash
  set_fact:
    content_hash: "{{ quotadef|to_json|hash('sha256') }}"
  vars:
    quotadef:
      q: "{{ resource_quota|default({}) }}"
      l: "{{ limit_range|default({}) }}"

- name: Look up affected namespaces
  kubernetes.core.k8s_info:
    api_version: v1
    kind: Namespace
    label_selectors:
      - "example.com/named-quota = {{ ansible_operator_meta.name }}"
  register: namespaces

- name: Update namespaces
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: "{{ namespace.metadata.name }}"
        annotations:
          example.com/named-quota-hash: "{{ content_hash }}"
  loop: "{{ namespaces.resources }}"
  loop_control:
    loop_var: namespace
    label: "{{ namespace.metadata.name }}"
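The content-hash idea in the first task can be sketched in plain Python (not byte-for-byte identical to Ansible's to_json|hash('sha256'), since JSON serialization details differ, but the same principle: identical quota content yields an identical annotation value):

```python
import hashlib
import json

def content_hash(resource_quota=None, limit_range=None):
    # bundle both specs, serialize to JSON, and hash the result
    quotadef = {"q": resource_quota or {}, "l": limit_range or {}}
    return hashlib.sha256(json.dumps(quotadef).encode()).hexdigest()

# the same content always produces the same hash, so re-annotating
# namespaces only changes them when the ProjectQuota actually changed
h1 = content_hash({"hard": {"pods": "10"}})
h2 = content_hash({"hard": {"pods": "10"}})
```

Because the Kubernetes API ignores no-op updates, writing an unchanged hash annotation does not trigger spurious reconciliations.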
And the projectdefinition role does (among other things) this:
- name: "{{ ansible_operator_meta.name }} : handle named quota"
  when: >-
    quota.quota_name|default(false)
  block:
    - name: "{{ ansible_operator_meta.name }} : look up named quota"
      kubernetes.core.k8s_info:
        api_version: "{{ apiVersion }}"
        kind: ProjectQuota
        name: "{{ quota.quota_name }}"
      register: named_quota

    - name: "{{ ansible_operator_meta.name }} : label namespace"
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: "{{ ansible_operator_meta.name }}"
            namespace: "{{ ansible_operator_meta.name }}"
            labels:
              example.com/named-quota: "{{ quota.quota_name }}"

    - name: "{{ ansible_operator_meta.name }} : apply resourcequota"
      when: >-
        "resourceQuota" in named_quota.resources[0].spec
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: ResourceQuota
          metadata:
            name: "default"
            namespace: "{{ ansible_operator_meta.name }}"
            labels:
              example.com/project: "{{ ansible_operator_meta.name }}"
          spec: "{{ named_quota.resources[0].spec.resourceQuota }}"

    - name: "{{ ansible_operator_meta.name }} : apply limitrange"
      when: >-
        "limitRange" in named_quota.resources[0].spec
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: LimitRange
          metadata:
            name: "default"
            namespace: "{{ ansible_operator_meta.name }}"
            labels:
              example.com/project: "{{ ansible_operator_meta.name }}"
          spec: "{{ named_quota.resources[0].spec.limitRange }}"
This all works, but it seems like the sort of thing for which there is probably a canonical solution. Is this it? I initially tried using the generation on the ProjectQuota instead of a content hash, but this value isn't exposed to the role by the Ansible operator.

Unable to pass output parameters from one workflowTemplate to a workflow via another workflowTemplate

I have two WorkflowTemplates, generate-output and lib-read-outputs, and one Workflow, output-paramter, as follows:
generate-output.yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: generate-output
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        tasks:
          # Generate JSON for outputs
          - name: read-outputs
            arguments:
              parameters:
                - name: outdata
                  value: |
                    {
                      "version": 4,
                      "terraform_version": "0.14.11",
                      "serial": 0,
                      "lineage": "732322df-5bd43-6e92-8f46-56c0dddwe83cb4",
                      "outputs": {
                        "key_alias_arn": {
                          "value": "arn:aws:kms:us-west-2:123456789:alias/tetsing-key",
                          "type": "string",
                          "sensitive": true
                        },
                        "key_arn": {
                          "value": "arn:aws:kms:us-west-2:123456789:alias/tetsing-key",
                          "type": "string",
                          "sensitive": true
                        }
                      }
                    }
            template: retrieve-outputs

    # Create JSON
    - name: retrieve-outputs
      inputs:
        parameters:
          - name: outdata
      script:
        image: python
        command: [python]
        env:
          - name: OUTDATA
            value: "{{inputs.parameters.outdata}}"
        source: |
          import json
          import os
          OUTDATA = json.loads(os.environ["OUTDATA"])
          with open('/tmp/templates_lst.json', 'w') as outfile:
              outfile.write(str(json.dumps(OUTDATA['outputs'])))
        volumeMounts:
          - name: out
            mountPath: /tmp
      volumes:
        - name: out
          emptyDir: { }
      outputs:
        parameters:
          - name: message
            valueFrom:
              path: /tmp/templates_lst.json
lib-read-outputs.yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: lib-read-outputs
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        tasks:
          # Read outputs
          - name: lib-wft
            templateRef:
              name: generate-output
              template: main
output-paramter.yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: output-paramter-
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        tasks:
          # JSON output data, task 1
          - name: wf
            templateRef:
              name: lib-read-outputs
              template: main
          - name: lib-wf2
            dependencies: [wf]
            arguments:
              parameters:
                - name: outputResult
                  value: "{{tasks.wf.outputs.parameters.message}}"
            template: whalesay

    - name: whalesay
      inputs:
        parameters:
          - name: outputResult
      container:
        image: docker/whalesay:latest
        command: [cowsay]
        args: ["{{inputs.parameters.outputResult}}"]
I am trying to pass the output parameters generated in the generate-output WorkflowTemplate to the output-paramter Workflow via lib-read-outputs.
When I execute them, I get the following error: Failed: invalid spec: templates.main.tasks.lib-wf2 failed to resolve {{tasks.wf.outputs.parameters.message}}
DAG and steps templates don't produce outputs by default
DAG and steps templates do not automatically produce their child templates' outputs, even if there is only one child template.
For example, the no-parameters template here does not produce an output, even though it invokes a template which does have an output.
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
spec:
  templates:
    - name: no-parameters
      dag:
        tasks:
          - name: get-a-parameter
            template: get-a-parameter
This lack of outputs makes sense if you consider a DAG template with multiple tasks:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
spec:
  templates:
    - name: no-parameters
      dag:
        tasks:
          - name: get-a-parameter
            template: get-a-parameter
          - name: get-another-parameter
            depends: get-a-parameter
            template: get-another-parameter
Which task's outputs should no-parameters produce? Since it's unclear, DAG and steps templates simply do not produce outputs by default.
You can think of templates as being like functions. You wouldn't expect a function to implicitly return the output of a function it calls.
def get_a_string():
    return "Hello, world!"

def call_get_a_string():
    get_a_string()

print(call_get_a_string())  # This prints "None" - nothing is returned.
But a DAG or steps template can forward outputs
You can make a DAG or a steps template forward an output by setting its outputs field.
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: get-parameters-wftmpl
spec:
  templates:
    - name: get-parameters
      dag:
        tasks:
          - name: get-a-parameter
            template: get-a-parameter
          - name: get-another-parameter
            depends: get-a-parameter
            template: get-another-parameter
      # This is the critical part!
      outputs:
        parameters:
          - name: parameter-1
            valueFrom:
              expression: "tasks['get-a-parameter'].outputs.parameters['parameter-name']"
          - name: parameter-2
            valueFrom:
              expression: "tasks['get-another-parameter'].outputs.parameters['parameter-name']"
---
apiVersion: argoproj.io/v1alpha1
kind: Workflow
spec:
  templates:
    - name: print-parameter
      dag:
        tasks:
          - name: get-parameters
            templateRef:
              name: get-parameters-wftmpl
              template: get-parameters
          - name: print-parameter
            depends: get-parameters
            template: print-parameter
            arguments:
              parameters:
                - name: parameter
                  value: "{{tasks.get-parameters.outputs.parameters.parameter-1}}"
To continue the Python analogy:
def get_a_string():
    return "Hello, world!"

def call_get_a_string():
    return get_a_string()  # Add 'return'.

print(call_get_a_string())  # This prints "Hello, world!".
So, in your specific case...
Add an outputs section to the main template in the generate-parameter WorkflowTemplate to forward the output parameter from the retrieve-parameters template.
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: generate-parameter
spec:
  entrypoint: main
  templates:
    - name: main
      outputs:
        parameters:
          - name: message
            valueFrom:
              expression: "tasks['read-parameters'].outputs.parameters.message"
      dag:
        tasks:
          # ... the rest of the file ...
Add an outputs section to the main template in the lib-read-parameters WorkflowTemplate to forward generate-parameter's parameter.
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: lib-read-parameters
spec:
  entrypoint: main
  templates:
    - name: main
      outputs:
        parameters:
          - name: message
            valueFrom:
              expression: "tasks['lib-wft'].outputs.parameters.message"
      dag:
        tasks:
          # ... the rest of the file ...

Kubernetes ConfigMap to write Node details to file

How can I use a ConfigMap to write cluster node information to a JSON file?
The command below gives me the node information:
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}'
How can I use a ConfigMap to write the above output to a text file?
You can save the output of the command in a file, then use that file (or the data inside it) to create a ConfigMap.
After creating the ConfigMap, you can mount it as a file in your deployment/pod.
For example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: appname
  name: appname
  namespace: development
spec:
  selector:
    matchLabels:
      app: appname
      tier: sometier
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: appname
        tier: sometier
    spec:
      containers:
        - env:
            - name: NODE_ENV
              value: development
            - name: PORT
              value: "3000"
            - name: SOME_VAR
              value: xxx
          image: someimage
          imagePullPolicy: Always
          name: appname
          volumeMounts:
            - name: your-volume-name
              mountPath: "your/path/to/store/the/file"
              readOnly: true
      volumes:
        - name: your-volume-name
          configMap:
            name: your-configmap-name
            items:
              - key: your-filename-inside-pod
                path: your-filename-inside-pod
I added the following configuration in deployment:
volumeMounts:
  - name: your-volume-name
    mountPath: "your/path/to/store/the/file"
    readOnly: true
volumes:
  - name: your-volume-name
    configMap:
      name: your-configmap-name
      items:
        - key: your-filename-inside-pod
          path: your-filename-inside-pod
To create a ConfigMap from a file:
kubectl create configmap your-configmap-name --from-file=your-file-path
Or just create the ConfigMap with the output of your command:
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-configmap-name
  namespace: your-namespace
data:
  your-filename-inside-pod: |
    output of command
First, save the output of the kubectl get nodes command into a JSON file:
$ exampleCommand > node-info.json
Then create a proper ConfigMap.
Here is an example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  node-info.json: |
    {
      "array": [
        1,
        2
      ],
      "boolean": true,
      "number": 123,
      "object": {
        "a": "egg",
        "b": "egg1"
      },
      "string": "Welcome"
    }
Then remember to add the following lines below the specification section in the pod configuration file:
env:
  - name: NODE_CONFIG_JSON
    valueFrom:
      configMapKeyRef:
        name: example-config
        key: node-info.json
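Inside the container, the injected environment variable holds the raw JSON and can be parsed directly. A minimal sketch (setting the variable by hand here to stand in for what the kubelet injects):

```python
import json
import os

# stand-in for the value kubelet injects from the ConfigMap key
os.environ["NODE_CONFIG_JSON"] = '{"string": "Welcome", "number": 123}'

# application code: read and parse the env var
config = json.loads(os.environ["NODE_CONFIG_JSON"])
print(config["string"])  # Welcome
```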
You can also use PodPreset.
PodPreset is an object that enables injecting information (e.g. environment variables) into pods at creation time.
Look at the example below:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: your-pod
  env:
    - name: DB_PORT
      value: "6379"
  envFrom:
    - configMapRef:
        name: etcd-env-config
        key: node-info.json
but remember that you have to also add:
env:
  - name: NODE_CONFIG_JSON
    valueFrom:
      configMapKeyRef:
        name: example-config
        key: node-info.json
section to your pod definition, matching your PodPreset and ConfigMap configuration.
More information can be found here: podpreset, pod-preset-configuration.

Gcloud : Getting output from a template

I have created a template to deploy a compute instance; the content of the template is given below:
resources:
  - name: {{ properties["name"] }}
    type: compute.v1.instance
    properties:
      zone: {{ properties["zone"] }}
      machineType: https://www.googleapis.com/compute/v1/projects/{{ properties["project"] }}/zones/{{ properties["zone"] }}/machineTypes/{{ properties["machinetype"] }}
      disks:
        - deviceName: boot
          type: PERSISTENT
          boot: true
          autoDelete: true
          initializeParams:
            sourceImage: {{ properties["sourceimage"] }}
      networkInterfaces:
        - network: https://www.googleapis.com/compute/v1/projects/{{ properties["project"] }}/global/networks/default
          accessConfigs:
            - name: External NAT
              type: ONE_TO_ONE_NAT

outputs:
  - name: var1
    value: 'testing'
  - name: var2
    value: 88
Deploying the template with gcloud, I expect values in the outputs field. But after a successful deployment of the template, the outputs field comes back empty, as shown below:
{
  "outputs": [],
  "resources": [
    {
      "finalProperties": .....
    }
  ]
}
Please suggest if I am missing out something.
It looks weird/impossible to use {{ properties["name"] }} with properties["name"] as a variable.
I think you should create a parameter, as shown here.