I'm trying to set up a Concourse pipeline that will trigger a new deployment. The goal is to only let the pipeline run when new values have been pushed to the git repository AND when the time is within a defined time window.
Currently, the triggers seem to work in an OR fashion. When a new version is pushed, the pipeline will run. When the time is within the window, the pipeline will run.
It seems like the only exception is when both triggers have not succeeded at least once, for example on the first day when the time has not yet passed. This caused the pipeline to wait for the first success of the time-window trigger before running. After this, however, the unwanted behavior of running with each update to the git repository continued.
Below is a minimal version of my pipeline. The goal is to run the pipeline only between 9:00 PM and 9:10 PM, and preferably only when the git repository has been updated.
resource_types:
- name: helm
type: docker-image
source:
repository: linkyard/concourse-helm-resource
resources:
- name: cicd-helm-values_my-service
type: git
source:
branch: master
username: <redacted>
password: <redacted>
uri: https://bitbucket.org/myorg/cicd-helm-values.git
paths:
- dev-env/my-service/values.yaml
- name: helm-deployment
type: helm
source:
cluster_url: '<redacted>'
cluster_ca: <redacted>
admin_cert: <redacted>
admin_key: <redacted>
repos:
- name: chartmuseum
url: '<redacted>'
username: <redacted>
password: <redacted>
- name: time-window
type: time
source:
start: 9:00 PM
stop: 9:10 PM
jobs:
- name: deploy-my-service
plan:
- get: time-window
trigger: true
- get: cicd-helm-values_my-service
trigger: true
- put: helm-deployment
params:
release: my-service
namespace: dev-env
chart: chartmuseum/application-template
values: ./cicd-helm-values_my-service/dev-env/my-service/values.yaml
Any ideas on how to combine the time-window and cicd-helm-values_my-service triggers would be greatly appreciated. Thanks in advance!
For that kind of precise time scheduling, the time resource is not well suited. What works well is https://github.com/pivotal-cf-experimental/cron-resource. This will solve one part of your problem.
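As a rough sketch, using the image repository and source fields from that resource's README (double-check them there), the time-window resource could be replaced with something like:

resource_types:
- name: cron-resource
  type: docker-image
  source:
    # image published for the cron resource; verify the repository name against the README
    repository: cftoolsmiths/cron-resource

resources:
- name: deploy-window
  type: cron-resource
  source:
    expression: "0 21 * * *"     # fires once at 9:00 PM
    location: "Europe/Amsterdam" # illustrative timezone, adjust to yours

A get of deploy-window with trigger: true will then only see a new version at 9:00 PM rather than anywhere in a time window.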
Regarding triggering with AND semantics: that is not how a fan-in works; the semantics is OR, as you noticed. You might try the gate resource https://github.com/Meshcloud/gate-resource, although I am not sure it would work for your case.
EDIT: Fixed the URL of the gate resource.
Related
I've added a schedule block to my pipeline that backs up my RDS database. This is the main yaml file for the pipeline, and the yaml validator finds no errors. So why is it not running? Nothing shows up in the Scheduled Runs section of the UI, and I actually waited 3 hours for it to run, to no avail. What am I missing?
name: $(Date:yyyyMMdd)$(Rev:.r)
variables:
- template: ../global-vars.yml
resources:
repositories:
- repository: self
type: git
name: Deployment
trigger: none
schedules:
- cron: "0 */3 * * *"
displayName: DB backup every 3 hours
branches:
include:
- master
always: true
stages:
- stage: DBBackup
displayName: DB Backup
jobs:
- template: /templates/db/backup.yml
According to the YAML schema documentation, there is no schedules property that can be placed under resources > repositories > repository:
https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema/resources-repositories-repository?view=azure-pipelines
You can add schedules at the top level of the YAML instead:
https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema/schedules-cron?view=azure-pipelines
Try something like this sample:
name: $(Date:yyyyMMdd)$(Rev:.r)
variables:
- template: ../global-vars.yml
schedules:
- cron: "0 */3 * * *"
displayName: DB backup every 3 hours
branches:
include:
- master
always: true
resources:
repositories:
- repository: self
type: git
name: Deployment
trigger: none
stages: (...)
I've been attempting to use Tekton to deploy some AWS infrastructure via Terraform but not having much success.
The pipeline clones a GitHub repo containing TF code; it then attempts to use the terraform-cli task to provision the AWS infrastructure. For initial testing I just want to perform the initial TF init and provision the AWS VPC.
Expected behaviour
Clone Github Repo
Perform Terraform Init
Create the VPC using targeted TF apply
Actual Result
task terraform-init has failed: failed to create task run pod "my-infra-pipelinerun-terraform-init": Pod "my-infra-pipelinerun-terraform-init-pod" is invalid: spec.initContainers[1].name: Duplicate value: "step-init". Maybe missing or invalid Task default/terraform-cli
pod for taskrun my-infra-pipelinerun-terraform-init not available yet
Tasks Completed: 2 (Failed: 1, Cancelled 0), Skipped: 1
Steps to Reproduce the Problem
Prerequisites: Install the Tekton command line tool, and the git-clone and terraform-cli tasks
Create this pipeline in Minikube:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: my-infra-pipeline
spec:
description: Pipeline for TF deployment
params:
- name: repo-url
type: string
description: Git repository URL
- name: branch-name
type: string
description: The git branch
workspaces:
- name: tf-config
description: The workspace where the tf config code will be stored
tasks:
- name: clone-repo
taskRef:
name: git-clone
workspaces:
- name: output
workspace: tf-config
params:
- name: url
value: $(params.repo-url)
- name: revision
value: $(params.branch-name)
- name: terraform-init
runAfter: ["clone-repo"]
taskRef:
name: terraform-cli
workspaces:
- name: source
workspace: tf-config
params:
- name: terraform-secret
value: "tf-auth"
- name: ARGS
value:
- init
- name: build-vpc
runAfter: ["terraform-init"]
taskRef:
name: terraform-cli
workspaces:
- name: source
workspace: tf-config
params:
- name: terraform-secret
value: "tf-auth"
- name: ARGS
value:
- apply
- "-target=aws_vpc.vpc -auto-approve"
Run the pipeline by creating a PipelineRun resource in k8s (a rough sketch of the run follows after these steps)
Review the logs > tkn pipelinerun logs my-tf-pipeline -a
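A PipelineRun of roughly this shape (the parameter values and the workspace binding shown here are placeholders):

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: my-infra-pipelinerun
spec:
  pipelineRef:
    name: my-infra-pipeline
  params:
    - name: repo-url
      value: https://github.com/example/terraform-config.git  # placeholder repo URL
    - name: branch-name
      value: main                                             # placeholder branch
  workspaces:
    - name: tf-config
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi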
Additional Information
Pipeline version: v0.35.1
There is a known issue regarding "step-init" in some earlier versions; I suggest you upgrade to the latest version (0.36.0) and try again.
As per the Argo DAG template documentation:
tasks.<TASKNAME>.outputs.parameters: When the previous task uses
'withItems' or 'withParams', this contains a JSON array of the output
parameter maps of each invocation
When trying with the following simple workflow:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: test-workflow-
spec:
entrypoint: start
templates:
- name: start
dag:
tasks:
- name: with-items
template: hello-letter
arguments:
parameters:
- name: input-letter
value: "{{item}}"
withItems:
- A
- B
- C
- name: show-result
dependencies:
- with-items
template: echo-result
arguments:
parameters:
- name: input
value: "{{tasks.with-items.outputs.parameters}}"
- name: hello-letter
inputs:
parameters:
- name: input-letter
outputs:
parameters:
- name: output-letter
value: "{{inputs.parameters.input-letter}}"
script:
image: alpine
command: ["sh"]
source: |
echo "{{inputs.parameters.input-letter}}"
- name: echo-result
inputs:
parameters:
- name: input
outputs:
parameters:
- name: output
value: "{{inputs.parameters.input}}"
script:
image: alpine
command: ["sh"]
source: |
echo {{inputs.parameters.input}}
I get the following error:
Failed to submit workflow: templates.start.tasks.show-result failed to resolve {{tasks.with-items.outputs.parameters}}
Argo version (running in a minikube cluster)
argo: v2.10.0+195c6d8.dirty
BuildDate: 2020-08-18T23:06:32Z
GitCommit: 195c6d8310a70b07043b9df5c988d5a62dafe00d
GitTreeState: dirty
GitTag: v2.10.0
GoVersion: go1.13.4
Compiler: gc
Platform: darwin/amd64
I get the same error in Argo 2.8.1. Using .result instead of .parameters in the show-result task worked fine there (the result was [A,B,C]), but it doesn't work in 2.10 anymore:
- name: show-result
dependencies:
- with-items
template: echo-result
arguments:
parameters:
- name: input
value: "{{tasks.with-items.outputs.result}}"
The result:
STEP TEMPLATE PODNAME DURATION MESSAGE
⚠ test-workflow-parallelism-xngg4 start
├-✔ with-items(0:A) hello-letter test-workflow-parallelism-xngg4-3307649634 6s
├-✔ with-items(1:B) hello-letter test-workflow-parallelism-xngg4-768315880 7s
├-✔ with-items(2:C) hello-letter test-workflow-parallelism-xngg4-2631126026 9s
└-⚠ show-result echo-result invalid character 'A' looking for beginning of value
I also tried changing the show-result task to:
- name: show-result
dependencies:
- with-items
template: echo-result
arguments:
parameters:
- name: input
value: "{{tasks.with-items.outputs.parameters.output-letter}}"
It executes without errors:
STEP TEMPLATE PODNAME DURATION MESSAGE
✔ test-workflow-parallelism-qvp72 start
├-✔ with-items(0:A) hello-letter test-workflow-parallelism-qvp72-4221274474 8s
├-✔ with-items(1:B) hello-letter test-workflow-parallelism-qvp72-112866000 9s
├-✔ with-items(2:C) hello-letter test-workflow-parallelism-qvp72-1975676146 6s
└-✔ show-result echo-result test-workflow-parallelism-qvp72-3460867848 3s
But the parameter is not replaced by the value:
argo logs test-workflow-parallelism-qvp72
test-workflow-parallelism-qvp72-1975676146: 2020-08-25T14:52:50.622496755Z C
test-workflow-parallelism-qvp72-4221274474: 2020-08-25T14:52:52.228602517Z A
test-workflow-parallelism-qvp72-112866000: 2020-08-25T14:52:53.664320195Z B
test-workflow-parallelism-qvp72-3460867848: 2020-08-25T14:52:59.628892135Z {{tasks.with-items.outputs.parameters.output-letter}}
I don't understand what to expect as the output of a loop! What did I miss? Is there a way to find out what's happening?
There was a bug that caused this error in Argo versions before 3.2.5. Upgrade to the latest version and try again.
It looks like the problem was in the CLI only. I submitted the workflow with kubectl apply, and it ran fine. The error only appeared with argo submit.
The argo submit error was resolved when I upgraded to 3.2.6.
This is quite a common problem I've faced. I have not come across it in any bug report or feature documentation so far, so it's yet to be determined whether this is a feature or a bug. However, Argo is clearly not capable of performing a "map-reduce" flow out of the box.
The only "real" workaround I've found is to attach an artifact, write the with-items task output to it, and pass it along to the next step, where you do the "reduce" yourself in code or a script by reading the values from the artifact.
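A rough sketch of that idea, rewriting the templates from the question; it assumes a recent Argo version with a default artifact repository configured so that key-only artifact references work, and the key layout and file paths are illustrative:

- name: hello-letter
  inputs:
    parameters:
      - name: input-letter
  outputs:
    artifacts:
      - name: letter
        path: /tmp/letter.txt
        s3:
          key: "{{workflow.name}}/letters/{{pod.name}}.txt"  # one object per loop invocation
  script:
    image: alpine
    command: ["sh"]
    source: |
      echo "{{inputs.parameters.input-letter}}" > /tmp/letter.txt

- name: show-result
  inputs:
    artifacts:
      - name: letters
        path: /tmp/letters
        s3:
          key: "{{workflow.name}}/letters"  # the whole prefix, i.e. all loop outputs
  script:
    image: alpine
    command: ["sh"]
    source: |
      cat /tmp/letters/*

The "reduce" then happens inside show-result's script rather than through {{tasks.with-items.outputs.parameters}}.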
EDIT: As mentioned in another answer, this was indeed a bug that has been resolved in the latest version. That fixes the use of parameters that you mentioned as an option, but outputs.result still causes an error after the bugfix.
This issue is currently open on the Argo Workflows GitHub: issue #6805
You could use a nested DAG to work around this issue. This helps with the artifact resolution problem for parallel executions because each task's output artifact is scoped to its inner nested DAG only, so there is only one upstream branch in the dependency tree. The error in issue #6805 happens when artifacts exist in the previous step and there is more than one upstream branch in the dependency tree.
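A structural sketch of that shape, reusing the template names from the question; how the loop's aggregated output is best surfaced from the nested DAG (output parameter vs. artifact) depends on your Argo version, so the wiring shown here is illustrative and the point is the nesting:

templates:
  - name: start
    dag:
      tasks:
        - name: fan-out
          template: letters-dag        # the withItems loop lives only inside this nested DAG
        - name: show-result
          dependencies: [fan-out]      # a single upstream branch from the outer DAG's point of view
          template: echo-result
          arguments:
            parameters:
              - name: input
                value: "{{tasks.fan-out.outputs.parameters.letters}}"
  - name: letters-dag
    outputs:
      parameters:
        - name: letters
          valueFrom:
            parameter: "{{tasks.with-items.outputs.parameters}}"  # surface the loop's aggregated output
    dag:
      tasks:
        - name: with-items
          template: hello-letter
          arguments:
            parameters:
              - name: input-letter
                value: "{{item}}"
          withItems: [A, B, C]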
I'm trying out Argo Workflows and would like to understand how to freeze a step. Let's say that I have a 3-step workflow and the workflow failed at step 2. I'd like to resubmit the workflow from step 2 using the successful step 1's artifact. How can I achieve this? I couldn't find guidance anywhere in the documentation.
I think you should consider using Conditions and Artifact passing in your steps.
Conditionals provide a way to affect the control flow of a
workflow at runtime, depending on parameters. In this example
the 'print-hello' template may or may not be executed depending
on the input parameter, 'should-print'. When submitted with
$ argo submit examples/conditionals.yaml
the step will be skipped since 'should-print' will evaluate false.
When submitted with:
$ argo submit examples/conditionals.yaml -p should-print=true
the step will be executed since 'should-print' will evaluate true.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: conditional-
spec:
entrypoint: conditional-example
arguments:
parameters:
- name: should-print
value: "false"
templates:
- name: conditional-example
inputs:
parameters:
- name: should-print
steps:
- - name: print-hello
template: whalesay
when: "{{inputs.parameters.should-print}} == true"
- name: whalesay
container:
image: docker/whalesay:latest
command: [sh, -c]
args: ["cowsay hello"]
If you use conditions in each step, you will be able to start from the step you like with an appropriate condition.
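For the artifact passing part mentioned at the start of this answer, here is a minimal sketch along the lines of the upstream artifact-passing example (the image and file paths are illustrative):

templates:
  - name: artifact-example
    steps:
      - - name: generate
          template: generate-artifact
      - - name: consume
          template: consume-artifact
          arguments:
            artifacts:
              - name: message
                from: "{{steps.generate.outputs.artifacts.hello-art}}"
  - name: generate-artifact
    container:
      image: alpine
      command: [sh, -c]
      args: ["echo hello world > /tmp/hello_world.txt"]
    outputs:
      artifacts:
        - name: hello-art
          path: /tmp/hello_world.txt
  - name: consume-artifact
    inputs:
      artifacts:
        - name: message
          path: /tmp/message
    container:
      image: alpine
      command: [sh, -c]
      args: ["cat /tmp/message"]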
Also have a look at the article Argo: Workflow Engine for Kubernetes, where the author explains the use of conditions with a coin-flip example.
You can see many examples on their GitHub page.
I have two jobs, viz. build and publish. I want publish to trigger after build is done. So, I am using an external resource, gcs-resource: https://github.com/frodenas/gcs-resource
Following is my pipeline.yml:
---
resource_types:
- name: gcs-resource
type: docker-image
source:
repository: frodenas/gcs-resource
resources:
- name: proj-repo
type: git
source:
uri: <my uri>
branch: develop
username: <username>
password: <password>
- name: proj-gcr
type: docker-image
source:
repository: asia.gcr.io/myproject/proj
tag: develop
username: _json_key
password: <my password>
- name: proj-build-output
type: gcs-resource
source:
bucket: proj-build-deploy
json_key: <my key>
regexp: Dockerfile
jobs:
- name: build
serial_groups: [proj-build-deploy]
plan:
- get: proj
resource: proj-repo
- task: build
config:
platform: linux
image_resource:
type: docker-image
source: {repository: node, tag: 10.13.0}
inputs:
- name: proj
run:
path: sh
args:
- -exc
- |
<do something>
- put: proj-build-output
params:
file: proj/Dockerfile
content_type: application/octet-stream
- name: publish
serial_groups: [proj-build-deploy]
plan:
- get: proj-build-output
trigger: true
passed: [build]
- put: proj-gcr
params:
build: proj-build-output
I am using the external resource proj-build-output to trigger the next job. I can run the individual jobs without any problem; however, the publish job doesn't automatically get triggered after the build job completes.
Am I missing something?
The regexp of the gcs-resource is misconfigured:
...
regexp: Dockerfile
...
whereas regexp, as in the original S3 resource from which it derives, expects:
regexp: the pattern to match filenames against within GCS. The first grouped match is used to extract the version, or if a group is explicitly named version, that group is used.
The example at https://github.com/frodenas/gcs-resource#example-configuration shows its correct usage:
regexp: directory_on_gcs/release-(.*).tgz
This is not specific to the GCS or S3 resource; Concourse needs a "version" to move artifacts from jobs to storage and back. It is one of the fundamental concepts of Concourse. See https://web.archive.org/web/20171205105324/http://concourse.ci:80/versioned-s3-artifacts.html for an example.
As Marco mentioned, the problem was with versioning.
I solved my issue using these two steps:
Enabled versioning on my GCS bucket: https://cloud.google.com/storage/docs/object-versioning#_Enabling
Replaced regexp with versioned_file as mentioned in the docs (https://github.com/frodenas/gcs-resource#file-names); the resulting resource is sketched below.
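With those two changes, the proj-build-output resource from the pipeline above ends up looking roughly like this (versioned_file as documented in the gcs-resource README):

- name: proj-build-output
  type: gcs-resource
  source:
    bucket: proj-build-deploy
    json_key: <my key>
    versioned_file: Dockerfile   # relies on object versioning being enabled on the bucket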