ArgoCD Application Set merge generator - kubernetes

I have an Argo CD ApplicationSet created, with the following merge keys set up:
generators:
  - merge:
      mergeKeys:
        - path
      generators:
        - matrix:
            generators:
              - git:
                  directories:
                    - path: aws-ebs-csi-driver
                    - path: cluster-autoscaler
                  repoURL: >-
                    ...
                  revision: master
              - clusters:
                  selector:
                    matchLabels:
                      argocd.argoproj.io/secret-type: cluster
        - list:
            elements:
              - path: aws-ebs-csi-driver
                namespace: system
              - path: cluster-autoscaler
                namespace: system
Syncing the ApplicationSet, however, generates:
- lastTransitionTime: "2022-08-08T21:54:05Z"
  message: the parameters from a generator were not unique by the given mergeKeys,
    Merge requires all param sets to be unique. Duplicate key was {"path":"aws-ebs-csi-driver"}
  reason: ApplicationGenerationFromParamsError
  status: "True"
Any help is appreciated.

The matrix generator is producing one set of parameters for each combination of directory and cluster.
If there is more than one cluster, then there will be one parameter set with path: aws-ebs-csi-driver for each cluster.
The merge generator requires that the parameter sets be unique with respect to the given merge keys. That was the original design of the merge generator, but more modes may be supported in the future.
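To illustrate, with two registered clusters (the cluster names here are made up), the matrix generator would emit parameter sets roughly like:

{"path": "aws-ebs-csi-driver", "name": "cluster-a", ...}
{"path": "aws-ebs-csi-driver", "name": "cluster-b", ...}
{"path": "cluster-autoscaler", "name": "cluster-a", ...}
{"path": "cluster-autoscaler", "name": "cluster-b", ...}

Merging on path alone therefore sees {"path":"aws-ebs-csi-driver"} more than once, which is exactly the duplicate-key error above.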
Argo CD v2.5 will support go templated ApplicationSets, which might provide an easier way to solve your problem.

Related

How to create dependency between releases in helmfile

I have the following helmfile and I want nexus, teamcity-server, and hub to depend on the certificates chart:
releases:
  - name: certificates
    createNamespace: true
    chart: ./charts/additional-dep
    namespace: system
    values:
      - ./environments/default/system-values.yaml
      - ./environments/{{ .Environment.Name }}/system-values.yaml
  - name: hub
    chart: ./charts/hub
    namespace: system
    values:
      - ./environments/default/system-values.yaml
  - name: nexus
    chart: ./charts/nexus
    namespace: system
    values:
      - ./environments/default/system-values.yaml
      - ./environments/{{ .Environment.Name }}/system-values.yaml
    dependsOn:
      - certificates
  - name: teamcity-server
    chart: ./charts/teamcity-server
    namespace: system
    values:
      - ./environments/default/system-values.yaml
      - ./environments/{{ .Environment.Name }}/system-values.yaml
    dependsOn:
      - certificates
I have tried to use dependsOn in helmfile.yaml, but it resulted in errors.
Helmfile calls this functionality needs:, so:
releases:
  - name: certificates
    ...
  - name: nexus
    needs:
      - certificates
    ...
This means the certificates release needs to be successfully installed before Helmfile will move on to nexus or teamcity-server. This is specific to Helmfile, so you're still allowed to helm uninstall certificates and Helm itself won't know about the dependency. It also doesn't establish any sort of runtime dependency between the two charts: if something happens later that causes certificates to fail, nexus and the other dependents won't be automatically stopped.
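For completeness, a trimmed sketch of the releases block from the question with needs added to all three dependent releases (values lists omitted for brevity; depending on your Helmfile version you may need the namespace-qualified form, e.g. system/certificates):

releases:
  - name: certificates
    createNamespace: true
    chart: ./charts/additional-dep
    namespace: system
  - name: hub
    chart: ./charts/hub
    namespace: system
    needs:
      - certificates
  - name: nexus
    chart: ./charts/nexus
    namespace: system
    needs:
      - certificates
  - name: teamcity-server
    chart: ./charts/teamcity-server
    namespace: system
    needs:
      - certificates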

How to get values in helmfile

bases:
  - common.yaml
releases:
  - name: controller
    values:
      - values/controller-values.yaml
    hooks:
      - events: [ "presync" ]
        ....
      - events: [ "postsync" ]
        .....
common.yaml
environments:
  default:
    values:
      - values/common-values.yaml
common-values.yaml
a: b
I want to move the hooks' values into their own file. When I added them to common-values it worked, but I want to put them in a different file rather than the common one, so I tried to add another base:
bases:
  - common.yaml
  - hooks.yaml
releases:
  - name: controller
    values:
      - values/controller-values.yaml
    hooks:
{{ toYaml .Values.hooks | indent 6 }}
hooks.yaml
environments:
  default:
    values:
      - values/hooks-values.yaml
hooks-values.yaml
hooks:
  - events: [ "presync" ]
    ....
  - events: [ "postsync" ]
    .....
but I got an error
parsing: template: stringTemplate:21:21: executing "stringTemplate" at <.Values.hooks>: map has no entry for key "hooks"
I also tried changing it to
hooks:
  - values/hooks-values.yaml
and I got an error
line 22: cannot unmarshal !!str values/... into event.Hook
I think the first issue is that when you specify both common.yaml and hooks.yaml under bases:, they are not merged properly. Since they provide the same keys, most probably the one included later under bases: overrides the other.
To solve that you can use a single entry in bases in helmfile:
bases:
  - common.yaml
and then add your value files to common.yaml:
environments:
  default:
    values:
      - values/common-values.yaml
      - values/hooks-values.yaml
I don't claim this is best practice, but it should work :)
The second issue is that bases is treated specially, i.e. helmfile.yaml is rendered before base layering is processed, so your values (coming from bases) are not available at the point where you want to reference them directly in the helmfile. If you embedded environments directly in the helmfile, it would be fine. But if you want to keep using bases, there seem to be a couple of workarounds, and the simplest seems to be adding --- after bases, as explained in the next comment on the same thread.
So, a working version of your helmfile could be:
bases:
  - common.yaml
---
releases:
  - name: controller
    chart: stable/nginx
    version: 1.24.1
    values:
      - values/controller-values.yaml
    hooks:
{{ toYaml .Values.hooks | nindent 6 }}
PS: chart: stable/nginx is just chosen randomly so that helmfile build works.
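To check that the hooks actually end up in the rendered release, something like this should work (assuming the environment is named default):

helmfile --environment default build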

What is the output of a loop task in Argo?

As per the Argo DAG template documentation:
tasks.<TASKNAME>.outputs.parameters: When the previous task uses
'withItems' or 'withParams', this contains a JSON array of the output
parameter maps of each invocation
When trying with the following simple workflow:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: test-workflow-
spec:
  entrypoint: start
  templates:
    - name: start
      dag:
        tasks:
          - name: with-items
            template: hello-letter
            arguments:
              parameters:
                - name: input-letter
                  value: "{{item}}"
            withItems:
              - A
              - B
              - C
          - name: show-result
            dependencies:
              - with-items
            template: echo-result
            arguments:
              parameters:
                - name: input
                  value: "{{tasks.with-items.outputs.parameters}}"
    - name: hello-letter
      inputs:
        parameters:
          - name: input-letter
      outputs:
        parameters:
          - name: output-letter
            value: "{{inputs.parameters.input-letter}}"
      script:
        image: alpine
        command: ["sh"]
        source: |
          echo "{{inputs.parameters.input-letter}}"
    - name: echo-result
      inputs:
        parameters:
          - name: input
      outputs:
        parameters:
          - name: output
            value: "{{inputs.parameters.input}}"
      script:
        image: alpine
        command: ["sh"]
        source: |
          echo {{inputs.parameters.input}}
I get the following error:
Failed to submit workflow: templates.start.tasks.show-result failed to resolve {{tasks.with-items.outputs.parameters}}
Argo version (running in a minikube cluster):
argo: v2.10.0+195c6d8.dirty
  BuildDate: 2020-08-18T23:06:32Z
  GitCommit: 195c6d8310a70b07043b9df5c988d5a62dafe00d
  GitTreeState: dirty
  GitTag: v2.10.0
  GoVersion: go1.13.4
  Compiler: gc
  Platform: darwin/amd64
I get the same error in Argo 2.8.1, although using .result instead of .parameters in the show-result task worked fine there (the result was [A,B,C]); it doesn't work in 2.10 anymore:
- name: show-result
  dependencies:
    - with-items
  template: echo-result
  arguments:
    parameters:
      - name: input
        value: "{{tasks.with-items.outputs.result}}"
The result:
STEP TEMPLATE PODNAME DURATION MESSAGE
⚠ test-workflow-parallelism-xngg4 start
├-✔ with-items(0:A) hello-letter test-workflow-parallelism-xngg4-3307649634 6s
├-✔ with-items(1:B) hello-letter test-workflow-parallelism-xngg4-768315880 7s
├-✔ with-items(2:C) hello-letter test-workflow-parallelism-xngg4-2631126026 9s
└-⚠ show-result echo-result invalid character 'A' looking for beginning of value
I also tried changing the show-result task to:
- name: show-result
  dependencies:
    - with-items
  template: echo-result
  arguments:
    parameters:
      - name: input
        value: "{{tasks.with-items.outputs.parameters.output-letter}}"
It executes without errors:
STEP TEMPLATE PODNAME DURATION MESSAGE
✔ test-workflow-parallelism-qvp72 start
├-✔ with-items(0:A) hello-letter test-workflow-parallelism-qvp72-4221274474 8s
├-✔ with-items(1:B) hello-letter test-workflow-parallelism-qvp72-112866000 9s
├-✔ with-items(2:C) hello-letter test-workflow-parallelism-qvp72-1975676146 6s
└-✔ show-result echo-result test-workflow-parallelism-qvp72-3460867848 3s
But the parameter is not replaced by the value:
argo logs test-workflow-parallelism-qvp72
test-workflow-parallelism-qvp72-1975676146: 2020-08-25T14:52:50.622496755Z C
test-workflow-parallelism-qvp72-4221274474: 2020-08-25T14:52:52.228602517Z A
test-workflow-parallelism-qvp72-112866000: 2020-08-25T14:52:53.664320195Z B
test-workflow-parallelism-qvp72-3460867848: 2020-08-25T14:52:59.628892135Z {{tasks.with-items.outputs.parameters.output-letter}}
I don't understand what to expect as the output of a loop! What did I miss? Is there a way to find out what's happening?
There was a bug which caused this error before Argo version 3.2.5. Upgrade to latest and try again.
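For reference, once the expression resolves, the documentation quoted in the question implies the aggregated value is a JSON array of the output-parameter maps, so for this workflow it should look something like:

[{"output-letter":"A"},{"output-letter":"B"},{"output-letter":"C"}]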
It looks like the problem was in the CLI only. I submitted the workflow with kubectl apply, and it ran fine. The error only appeared with argo submit.
The argo submit error was resolved when I upgraded to 3.2.6.
This is quite a common problem I've faced. I have not come across it in any bug report or feature documentation so far, so it's yet to be determined whether this is a feature or a bug. However, Argo is clearly not capable of performing a "map-reduce" flow out of the box.
The only "real" workaround I've found is to attach an artifact, write the with-items task output to it, and pass it along to your next step, where you do the "reduce" yourself in code/script by reading values from the artifact.
---- edit -----
As mentioned in another answer, this was indeed a bug that has been resolved in the latest version. That fixes the use of parameters as you mentioned, but outputs.result still causes an error even after the bugfix.
This issue is currently open on the Argo Workflows GitHub: issue #6805
You could use a nested DAG to work around this issue. It helps with the artifact-resolution problem for parallel executions because each task's output artifact is scoped to its inner nested DAG only, so there is only one upstream branch in the dependency tree. The error in issue #6805 happens when artifacts exist in the previous step and there is more than one upstream branch in the dependency tree.
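A rough sketch of what that restructuring could look like for the workflow in the question (only the DAG templates are shown; hello-letter and echo-result stay as they are): the loop and its consumer move into their own inner DAG template, which the outer DAG calls as a single task.

templates:
  - name: start
    dag:
      tasks:
        - name: fan-out-and-collect
          template: letters-inner      # the whole loop lives in this nested DAG
  - name: letters-inner
    dag:
      tasks:
        - name: with-items
          template: hello-letter
          arguments:
            parameters:
              - name: input-letter
                value: "{{item}}"
          withItems: [A, B, C]
        - name: show-result
          dependencies:
            - with-items
          template: echo-result
          arguments:
            parameters:
              - name: input
                value: "{{tasks.with-items.outputs.parameters}}"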

How to skip a step for Argo workflow

I'm trying out Argo Workflows and would like to understand how to freeze a step. Let's say I have a 3-step workflow and the workflow failed at step 2. I'd like to resubmit the workflow from step 2 using the artifact from the successful step 1. How can I achieve this? I couldn't find guidance anywhere in the documentation.
I think you should consider using Conditions and Artifact passing in your steps.
Conditionals provide a way to affect the control flow of a
workflow at runtime, depending on parameters. In this example
the 'print-hello' template may or may not be executed depending
on the input parameter, 'should-print'. When submitted with
$ argo submit examples/conditionals.yaml
the step will be skipped since 'should-print' will evaluate false.
When submitted with:
$ argo submit examples/conditionals.yaml -p should-print=true
the step will be executed since 'should-print' will evaluate true.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: conditional-
spec:
  entrypoint: conditional-example
  arguments:
    parameters:
      - name: should-print
        value: "false"
  templates:
    - name: conditional-example
      inputs:
        parameters:
          - name: should-print
      steps:
        - - name: print-hello
            template: whalesay
            when: "{{inputs.parameters.should-print}} == true"
    - name: whalesay
      container:
        image: docker/whalesay:latest
        command: [sh, -c]
        args: ["cowsay hello"]
If you use conditions on each step, you will be able to start from whichever step you like by setting the appropriate condition.
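As a sketch of how that could look for the 3-step case in the question (step names, the run-step1 parameter, and the container are made up), a flag per step lets a resubmission skip whatever already succeeded:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: resumable-
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: run-step1
        value: "true"     # pass -p run-step1=false on resubmit to skip step 1
  templates:
    - name: main
      steps:
        - - name: step1
            template: work
            when: "{{workflow.parameters.run-step1}} == true"
        - - name: step2
            template: work
        - - name: step3
            template: work
    - name: work
      container:
        image: alpine
        command: [sh, -c]
        args: ["echo doing work"]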
Also have a look at the article Argo: Workflow Engine for Kubernetes, where the author explains the use of conditions with the coinflip example.
You can see many examples on their GitHub page.

Istio 1.1.4 helm setup --set global.defaultNodeSelector sample

According to the current installation options for Istio 1.1.4, it should be possible to define a default node selector that gets added to all Istio deployments.
The documentation does not show a dedicated sample of how the selector has to be defined, only {} as the value.
So far I have not been able to find a working format to pass the values to the Helm charts using --set, e.g.:
--set global.defaultNodeSelector="{cloud.google.com/gke-nodepool:istio-pool}"
I tried several variations, with and without escapes, as a JSON map, ... but everything results in the same Helm error message:
2019/05/06 15:58:10 Warning: Merging destination map for chart 'istio'. Cannot overwrite table item 'defaultNodeSelector', with non table value: map[]
Istio version 1.1.4
Helm 2.13.1
The expectation would be more detailed documentation with some samples on the Istio side.
When specifying overrides with --set, multiple key/value pairs are deeply merged based on their keys. In your case it means that only the last item will be present in the generated template. The same happens even if you override with the -f (YAML file) option.
Here is an example of -f option usage with a custom_values.yaml that uses distinct keys:
#custom_values.yaml
global:
  defaultNodeSelector:
    cloud.google.com/bird: stork
    cloud.google.com/bee: wallace
helm template . -x charts/pilot/templates/deployment.yaml -f custom_values.yaml
Snippet of the rendered Istio Pilot deployment.yaml manifest file:
volumes:
  - name: config-volume
    configMap:
      name: istio
  - name: istio-certs
    secret:
      secretName: istio.istio-pilot-service-account
      optional: true
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: beta.kubernetes.io/arch
              operator: In
              values:
                - amd64
                - ppc64le
                - s390x
            - key: cloud.google.com/bee
              operator: In
              values:
                - wallace
            - key: cloud.google.com/bird
              operator: In
              values:
                - stork
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 2
        preference:
          matchExpressions:
            - key: beta.kubernetes.io/arch
              operator: In
              values:
                - amd64
The same can be achieved with --set:
--set global.defaultNodeSelector."cloud\.google\.com/bird"=stork,global.defaultNodeSelector."cloud\.google\.com/bee"=wallace
After searching for some hours, I found a solution right after posting the question by digging through the Istio commits.
I'll leave my findings here as a reference; maybe someone can save some time that way.
Setting a default node selector works, at least for me, by separating the key path with dots and escaping the dots that are part of the label itself with \:
--set global.defaultNodeSelector.cloud\\.google\\.com/gke-nodepool=istio-pool
To create a defaultNodeSelector for a node pool labeled with
cloud.google.com/gke-nodepool: istio-pool
I was not able to add multiple values that way; the {} notation for adding lists in Helm doesn't seem to be respected.