How to pass multiple lists in a single loop in argo workflows - argo-workflows

I want to pass elements of multiple lists in a single loop using Argo templates.
I have 3 lists like the ones below:
effect = ["Allow", "Allow", "Allow"]
resource = ["*", "*", "*"]
action = ["ec2:CreateTags", "ec2:DescribeInstances", "ec2:AuthorizeSecurityGroupIngress"]
I have an IAM policy which I need to construct with Argo templates. The IAM policy below takes one element from each of the 3 lists on every loop iteration. How can we pass elements of these three lists in a single loop?
I referred to the Argo documentation; only the withItems/withParams loops are available, and each takes only one list at a time.
I tried the method below, but it is not working:
- name: policy
  value: |-
    <<-EOP
    {
      "Effect": "{{item.1}}",
      "Action": "{{item.2}}",
      "Resource": "{{item.3}}"
    }
    EOP
withTogether:
  - "{{inputs.parameters.effect}}"
  - "{{inputs.parameters.actions}}"
  - "{{inputs.parameters.resources}}"
If it is not supported in Argo, is there an alternate way to achieve this?

Don't use withItems/withParams for simple JSON manipulation. Argo Workflows represents each iteration of these loops with at least one Pod. That's slow.
I'd recommend using a familiar programming language and a script template to perform the work.
Argo has a simple, built-in "expressions tag" templating tool which you could use to perform the same mutation.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: so-69180308-
spec:
  arguments:
    parameters:
      - name: effects
        value: '["Allow", "Allow", "Allow"]'
      - name: resources
        value: '["*", "*", "*"]'
      - name: actions
        value: '["ec2:CreateTags", "ec2:DescribeInstances", "ec2:AuthorizeSecurityGroupIngress"]'
  entrypoint: main
  templates:
    - name: main
      script:
        env:
          - name: POLICIES
            value: >-
              {{=
                toJson(
                  map(
                    0..(len(jsonpath(workflow.parameters['effects'], '$'))-1),
                    {
                      {
                        effect: jsonpath(workflow.parameters['effects'], '$')[#],
                        resource: jsonpath(workflow.parameters['resources'], '$')[#],
                        action: jsonpath(workflow.parameters['actions'], '$')[#]
                      }
                    }
                  )
                )
              }}
        image: debian:9.4
        command: [bash]
        source: |
          echo "$POLICIES"
Output:
[{"action":"ec2:CreateTags","effect":"Allow","resource":"*"},{"action":"ec2:DescribeInstances","effect":"Allow","resource":"*"},{"action":"ec2:AuthorizeSecurityGroupIngress","effect":"Allow","resource":"*"}]
I'd recommend against going the expression tag route. It's a less well-known language and will be more difficult for others to maintain.
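For completeness, the script-template approach mentioned above could look roughly like the sketch below. This is a minimal, untested sketch: the template name build-policies, the python:3.9 image, and the zip-based construction are illustrative assumptions rather than part of the original answer.

- name: build-policies
  inputs:
    parameters:
      - name: effects
      - name: actions
      - name: resources
  script:
    image: python:3.9
    command: [python]
    source: |
      import json
      # Each input parameter arrives as a JSON-encoded string, so decode it first.
      effects = json.loads('{{inputs.parameters.effects}}')
      actions = json.loads('{{inputs.parameters.actions}}')
      resources = json.loads('{{inputs.parameters.resources}}')
      # zip() walks the three lists together, one element from each per iteration.
      policies = [
          {"Effect": e, "Action": a, "Resource": r}
          for e, a, r in zip(effects, actions, resources)
      ]
      print(json.dumps(policies))

Whatever the script prints to stdout is then available to downstream steps as the script template's result output.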

Related

How do I pass the job name into a github action's input?

jobs:
  my-name:
    name: "My Name"
    ...
    steps:
      - name: Slack Notification
        uses: my-action
        with:
          slack-msg: ${{ jobs.${{ env.GITHUB_JOB }}.name }}
I want that slack-msg to evaluate to "My Name". I'm using my-action in multiple jobs, and I always want to pass in the job name, but I don't know how to do that. When I tried the above, the job simply didn't run, and I don't know how to troubleshoot why: the GitHub workflow log for my-name doesn't exist at all.
How do I pass the job name into an input parameter?
As nested expressions are not supported, you can use a trick like the one below to obtain the matrix job name.
jobs:
  test:
    env:
      # to expose matrix job name to steps, which is not possible with expansions
      JOB_NAME: ${{ matrix.name || format('{0} ({1})', matrix.tox-target, matrix.os) }}
    name: ${{ matrix.name || format('{0} ({1})', matrix.tox-target, matrix.os) }}
Note that you cannot really access the job name directly, but you can make sure you save the same value into an environment variable and use that.
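A step can then read that variable from the env context. The snippet below reuses my-action and slack-msg from the question; for a non-matrix job the variable is simply set to the same literal string as the job name (an illustrative sketch, not taken from the original answer).

jobs:
  my-name:
    name: "My Name"
    env:
      JOB_NAME: "My Name"   # keep in sync with the job's name
    steps:
      - name: Slack Notification
        uses: my-action
        with:
          slack-msg: ${{ env.JOB_NAME }}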

Azure DevOps Pipeline - Use previously defined variables in variable definition

I don't know if there's a way to use previously defined variables in variable definition. Basically I want to do something like this:
variables:
  - name: basePath
    value: \\somepath
  - name: servicePath
    value: $(basePath)\servicePath
  - name: backupPath
    value: $(basePath)\backups
The later variables don't recognize basePath. Is there a different syntax I can use?
We do something similar, here's what we have in our yaml:
- name: cdn-base
  value: 'https://cdn-name.azureedge.net'
- name: 'CDN_URL'
  value: '$(cdn-base)/$(site-name)-$(environment)/'
- name: NODE_MODULES_CACHE_FOLDER
  value: $(System.DefaultWorkingDirectory)/node_modules
You might just need to wrap your strings in quotes. Also check your agent type: you might be using a Windows path separator on a Linux agent.
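Applied to the variables from the question, the quoting suggestion would look something like this (an untested sketch):

variables:
  - name: basePath
    value: '\\somepath'
  - name: servicePath
    value: '$(basePath)\servicePath'
  - name: backupPath
    value: '$(basePath)\backups'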

Azure Yaml Pipelines - Dynamic object parameter to template

I would like to trigger a job template with an object as a parameter.
Unfortunately, even based on the examples, I couldn't find a way to do that.
I would appreciate it if someone could guide me on how to achieve this.
What I want to achieve is to replace the ["DEPLOY", "CONFIG"] part with a dynamic variable:
- template: job-template.yaml
  parameters:
    jobs: ["DEPLOY", "CONFIG"]
This is not possible. YAML is very limited here, and you may read more about this here:
Yaml variables have always been string: string mappings.
So, for instance, you can define parameters as a complex type.
Template file
parameters:
  - name: 'instances'
    type: object
    default: {}
  - name: 'server'
    type: string
    default: ''

steps:
  - ${{ each instance in parameters.instances }}:
      - script: echo ${{ parameters.server }}:${{ instance }}
Main file
steps:
  - template: template.yaml
    parameters:
      instances:
        - test1
        - test2
      server: someServer
But you are not able to do it dynamically/programmatically, as every output you create will end up as a simple string.
What you can do is pass the value as a string and then split that string with PowerShell (see the sketch below). But it all depends on what you want to run further, because you won't be able to simply iterate over a YAML structure that way. All you can do is loop over the values in PowerShell and do something with them, which may not be enough for your case.
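As a rough illustration of the split-a-string idea, assuming a string parameter named jobsList and a comma-separated value (both the parameter name and the step contents are made up for this sketch):

# template.yaml
parameters:
  - name: jobsList
    type: string
    default: ''

steps:
  - powershell: |
      # Split the comma-separated string back into individual values at runtime.
      $jobs = '${{ parameters.jobsList }}'.Split(',')
      foreach ($job in $jobs) {
        Write-Host "Handling $job"
      }

# azure-pipelines.yaml
steps:
  - template: template.yaml
    parameters:
      jobsList: 'DEPLOY,CONFIG'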
It's possible with some logic; see below:
- template: job-template.yaml
  parameters:
    param: ["DEPLOY", "CONFIG"]
and in the job-template.yaml file you can define the following, so every job name will be different:
parameters:
  param: []

jobs:
  - ${{each jobName in parameters.param}}:
      - job: ${{jobName}}
        steps:
          - task: Downl......
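A fuller, untested version of that job-template.yaml could look like the following; the echo step merely stands in for the truncated task above, and the displayName is an illustrative addition:

parameters:
  - name: param
    type: object
    default: []

jobs:
  - ${{ each jobName in parameters.param }}:
      - job: ${{ jobName }}
        displayName: 'Job ${{ jobName }}'
        steps:
          - script: echo "Running ${{ jobName }}"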

Update nested array value in yaml with yq

Given a yaml file (helmfile) like the following
releases:
  - chart: ../charts/foo
    name: foo
    namespace: '{{ .Values.stack }}'
    values:
      - ../config/templates/foo-values.yaml.gotmpl
    set:
      - name: image.tag
        value: 22
      - name: replicas
        value: 1
  - chart: ../charts/bar
    name: bar
    namespace: '{{ .Values.stack }}'
    values:
      - ../config/templates/bar-values.yaml.gotmpl
    set:
      - name: image.bar_proxy.tag
        value: 46
      - name: image.bar.tag
        value: 29
      - name: replicas
        value: 1
I'm trying to figure out a clean way to update a specific image tag. For example, I'd like to update image.bar_proxy.tag from 46 to 51.
I have the following, which does the job, but it requires that you know the exact index of the array item:
yq -y '.releases[] |= if .name=="bar" then .set[0].value |= 51 else . end' helmfile-example.yaml
So if the array order were to change at some point this would break.
A preferred solution would be: "update image.bar_proxy.tag value from 46 to 51 where set[].name==image.bar_proxy.tag". Any ideas on how to achieve a more specific conditional selection like this?
FYI our yq version:
$ yq --version
yq 2.10.0
You can use the following filter to make it work. It works by selecting the object whose name matches the tag you want, instead of relying on its index; on the selected object, .value=51 updates the value as you wanted. You can also use the -i flag to do an in-place modification of the original file.
yq -y '.releases[].set |= map(select(.name == "image.bar_proxy.tag").value=51)' yaml
See the underlying jq filter acting on the JSON object at jq-playground
Given the context of using Helmfile, there are a couple of ways you can approach this without necessarily editing the helmfile.yaml. Helmfile allows using the Go text/template language in many places, similarly to the underlying Helm tool, and has some other features that can help.
One of the easiest things you can do is take advantage of values: being a list, and of unknown values generally being ignored. You (or your CI/CD system) can write a separate YAML file that contains just the tags (JSON may be easier to write, and it is valid YAML):
# tags.yaml
image:
  tag: 22
  bar: {tag: 29}
  bar_proxy: {tag: 46}
and then include this file as an additional file in the helmfile.yaml. (This would be equivalent to using helm install -f with multiple local values files, rather than helm install --set individual values.)
releases:
  - name: foo
    values:
      - ../config/templates/foo-values.yaml.gotmpl
      - tags.yaml
    # no `set:`
  - name: bar
    values:
      - ../config/templates/bar-values.yaml.gotmpl
      - tags.yaml
      - replicas: 1
    # no `set:`
Helmfile's templating extensions also include env and requiredEnv to read ordinary environment variables from the host system. Helm proper does not have these to try to minimize the number of implicit inputs to a chart, but for Helmfile it's a possible way to provide values at deploy time.
releases:
  - name: bar
    set:
      - name: image.bar_proxy.tag
        value: {{ env "BAR_PROXY_TAG" | default "46" }}
      - name: image.bar.tag
        value: {{ requiredEnv "BAR_TAG" }}

helm override list values with --set in Azure DevOps

How do you override values in a Helm list with --set param in Azure DevOps?
Simple use case in values.yaml:
environment:
  - name: foo
    value: override_me
  - name: bar
    value: override_me
  - name: baz
    value: override_me
In the deployment.yaml file I use it like so:
env:
{{ toYaml .Values.environment | indent 10}}
One thing that kind of works, but not really, is:
environment[0].name=foo,environment[0].value=hello,{...}
The problem with this override is that it overrides the entire list, even if I only want to replace value [0], not [1] and [2].
Also, I get parsing errors when I pass URLs or ints (not on localhost, only in AZ DevOps). To overcome that parsing error you can escape it with \", but then the chart is messed up, even though it passes validation.
So, is it possible to override the env list in my case in an Azure DevOps Helm deployment? Or do I need to restructure the list into individual key=value pairs?
I've had a weird experience when doing this: in two similar cases, one replaced only the values I set, and the other overrode the whole array. So in the second case what I had to do is this:
environment:
  - name: v1
    value: keep_me
  - name: v2
    value: keep_me
  - name: v3
    value: keep_me
  - name: foo
    value: override_me
  - name: bar
    value: override_me
and I was doing this in Azure DevOps:
--set environment[3].name=foo,environment[3].value=xxx
For the other one I didn't have to do that; it would gladly overwrite only the values I put in. No idea why it behaved differently.
Get yourself some variables defined in the task, then use a standard --set.
I was using a bash task in a release pipeline pointed at a deploy.sh file which existed as a published artifact. You need to chmod +x the file for this to work properly.
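For reference, a YAML equivalent of that setup might look roughly like this; the artifact path and script name are assumptions, not details from the original answer:

steps:
  - bash: |
      # The script arrives as a published artifact, so make it executable first.
      chmod +x "$(Pipeline.Workspace)/drop/deploy.sh"
      "$(Pipeline.Workspace)/drop/deploy.sh"
    displayName: 'Run deploy.sh'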