Is there a way to make an Argo Events sensor run triggers sequentially after completion - argo-workflows

Based on my testing, sensor triggers are invoked one after another without waiting for a response. Is there a way to make the sensor wait for a trigger to complete before invoking the next trigger?
Example sensor:
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: create-resource
spec:
  template:
    serviceAccountName: argo-events-controller-manager
  dependencies:
    - name: event
      eventSourceName: kafka
      eventName: create-resource
  triggers:
    - template:
        name: create-user-if-not-exists
        http:
          url: "https://reqres.in/api/users?delay=20"
          method: POST
      retryStrategy:
        steps: 3
        duration: 3s
      policy:
        status:
          allow:
            - 200
    - template:
        name: delete-resource-if-exists
        http:
          url: "https://reqres.in/api/users?delay=10"
          method: DELETE
      retryStrategy:
        steps: 3
        duration: 3s
      policy:
        status:
          allow:
            - 204
    - template:
        name: create-resource
        http:
          url: "https://reqres.in/api/users?delay=3"
          method: POST
          payload:
            - src:
                dependencyName: event
                dataKey: body
                useRawData: true
              dest: event
      retryStrategy:
        steps: 3
        duration: 3s
      policy:
        status:
          allow:
            - 201
            - 202
The above example uses a public test REST API that mimics a delayed response, so this scenario should be easy to replicate.
We trigger 3 REST API calls.
create-user-if-not-exists - this takes 20 seconds
delete-resource-if-exists - this takes 10 seconds
create-resource - this takes 3 seconds
Expectation:
after create-user-if-not-exists succeeds, run delete-resource-if-exists.
after delete-resource-if-exists succeeds, run create-resource.
If the first call fails, the sensor should stop and not run the remaining triggers.
Current behaviour:
All triggers are fired one after the other without waiting for a response.
If the first trigger fails, the other triggers are still fired. There is no way to stop the remaining triggers or to make them wait for the previous trigger to complete.
Is there any way to enforce the order in which triggers are executed and make a trigger wait for (depend on) another trigger?
Calling Argo Workflows or another workflow system from argo-events just to satisfy this need feels heavy.
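One possible workaround, if standing up a workflow after all is acceptable, is to let the sensor fire a single trigger that creates an Argo Workflow whose steps call the three endpoints in order; steps run one at a time and a failed step stops the rest. Below is a minimal sketch of such a Workflow, assuming Argo Workflows v3.2 or newer (which adds HTTP templates); the template and parameter names are illustrative, not from the original question.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: create-resource-sequential-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        # Each step starts only after the previous one succeeded;
        # a failure stops the workflow and skips the remaining steps.
        - - name: create-user-if-not-exists
            template: call
            arguments:
              parameters:
                - name: url
                  value: "https://reqres.in/api/users?delay=20"
                - name: method
                  value: "POST"
        - - name: delete-resource-if-exists
            template: call
            arguments:
              parameters:
                - name: url
                  value: "https://reqres.in/api/users?delay=10"
                - name: method
                  value: "DELETE"
        - - name: create-resource
            template: call
            arguments:
              parameters:
                - name: url
                  value: "https://reqres.in/api/users?delay=3"
                - name: method
                  value: "POST"
    - name: call
      inputs:
        parameters:
          - name: url
          - name: method
      http:
        url: "{{inputs.parameters.url}}"
        method: "{{inputs.parameters.method}}"
        # Treat any 2xx response as success.
        successCondition: "response.statusCode >= 200 && response.statusCode < 300"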

Related

Override Argo Workflows generateName from Sensor

I'm looking into customizing workflow names. I see that argo submit --generate-name can override the .metadata.generateName property, but does anyone know if this is possible with a Sensor that triggers a Workflow?
I'm using a GitHub event to trigger these workflows, but it would be nice to pull the repository name out of the event and set it as the generateName on the Workflow.
Here's an example of what I was hoping would work, but doesn't seem to as far as I can tell. Maybe I've got the syntax wrong? Does anyone know if something like this is possible?
(Note, I've removed a large part of this sensor so that just the important parts are shown. Basically, I want to parse a GitHub event payload for the repository name. Set it on the workflow arguments. Then use those to override the workflow's generateName property.)
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: github-sensor
spec:
  dependencies:
    - name: github-webhook-sensor
      eventSourceName: github-events
      eventName: github
  triggers:
    - template:
        name: github
        k8s:
          group: argoproj.io
          version: v1alpha1
          resource: workflows
          operation: create
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: {{ workflow.parameters.name }}
              spec:
                arguments:
                  parameters:
                    - name: "git-repository-name"
          parameters:
            # Parameter: git-repository-name
            - src:
                dependencyName: github-webhook-sensor
                dataKey: body.repository.name
              dest: spec.arguments.parameters.0.value
I think you can do this to prepend the repo name (and a hyphen) to the generated name:
...
source:
  resource:
    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      generateName: "-"
    spec:
      arguments:
        parameters:
          - name: "git-repository-name"
parameters:
  - src:
      dependencyName: github-webhook-sensor
      dataKey: body.repository.name
    dest: metadata.generateName
    operation: prepend
  # Parameter: git-repository-name
  - src:
      dependencyName: github-webhook-sensor
      dataKey: body.repository.name
    dest: spec.arguments.parameters.0.value
You can also use name instead of generateName, but I'm not sure how that will behave if multiple events arrive.
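For illustration (assuming a push event from a repository named my-repo), the prepend above should produce a created Workflow roughly like this hypothetical rendering:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  # "my-repo" prepended to the literal "-" gives "my-repo-", which
  # Kubernetes then extends with a random suffix, e.g. my-repo-x7k2d
  generateName: my-repo-
spec:
  arguments:
    parameters:
      - name: "git-repository-name"
        value: my-repo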

How to skip a step for Argo workflow

I'm trying out Argo Workflows and would like to understand how to freeze a step. Let's say I have a 3-step workflow and it failed at step 2. I'd like to resubmit the workflow from step 2, using the artifact from the successful step 1. How can I achieve this? I couldn't find guidance anywhere in the documentation.
I think you should consider using Conditions and Artifact passing in your steps.
Conditionals provide a way to affect the control flow of a workflow at runtime, depending on parameters. In this example the 'print-hello' template may or may not be executed depending on the input parameter, 'should-print'.
When submitted with
$ argo submit examples/conditionals.yaml
the step will be skipped since 'should-print' will evaluate false. When submitted with:
$ argo submit examples/conditionals.yaml -p should-print=true
the step will be executed since 'should-print' will evaluate true.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: conditional-
spec:
  entrypoint: conditional-example
  arguments:
    parameters:
      - name: should-print
        value: "false"
  templates:
    - name: conditional-example
      inputs:
        parameters:
          - name: should-print
      steps:
        - - name: print-hello
            template: whalesay
            when: "{{inputs.parameters.should-print}} == true"
    - name: whalesay
      container:
        image: docker/whalesay:latest
        command: [sh, -c]
        args: ["cowsay hello"]
If you use a condition on each step, you will be able to start from whichever step you like by passing the appropriate parameter, as sketched below.
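Here is a minimal sketch (not from the official examples) of a three-step workflow with a start-from-step parameter, so that resubmitting with -p start-from-step=2 skips step 1 and runs from step 2 onwards:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: resumable-
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: start-from-step
        value: "1"
  templates:
    - name: main
      steps:
        # A step runs only if the requested starting step is not past it.
        - - name: step1
            template: whalesay
            when: "{{workflow.parameters.start-from-step}} <= 1"
        - - name: step2
            template: whalesay
            when: "{{workflow.parameters.start-from-step}} <= 2"
        - - name: step3
            template: whalesay
            when: "{{workflow.parameters.start-from-step}} <= 3"
    - name: whalesay
      container:
        image: docker/whalesay:latest
        command: [sh, -c]
        args: ["cowsay hello"]
Note that re-using step 1's artifact on resubmission would additionally require passing it back in (for example as an input artifact), which this sketch does not show.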
Also have a look at the article Argo: Workflow Engine for Kubernetes, where the author explains the use of conditions with the coin-flip example.
You can see many examples on their GitHub page.

How to combine triggers in Concourse pipeline: git and time resource?

I'm trying to set up a Concourse pipeline that will trigger a new deployment. The goal is to only let the pipeline run when new values have been pushed to the git repository AND when the time is within a defined time window.
Currently, the triggers seem to work in an OR fashion. When a new version is pushed, the pipeline will run. When the time is within the window, the pipeline will run.
The only exception seems to be when one of the triggers has not yet succeeded at least once, for example on the first day, before the time window has passed. In that case the pipeline waited for the first success of the time-window trigger before running. After that, however, the unwanted behaviour of running on every update to the git repository continued.
Below is a minimal version of my pipeline. The goal is to run the pipeline only between 9:00 and 9:10 PM, and preferably only when the git repository has been updated.
resource_types:
  - name: helm
    type: docker-image
    source:
      repository: linkyard/concourse-helm-resource
resources:
  - name: cicd-helm-values_my-service
    type: git
    source:
      branch: master
      username: <redacted>
      password: <redacted>
      uri: https://bitbucket.org/myorg/cicd-helm-values.git
      paths:
        - dev-env/my-service/values.yaml
  - name: helm-deployment
    type: helm
    source:
      cluster_url: '<redacted>'
      cluster_ca: <redacted>
      admin_cert: <redacted>
      admin_key: <redacted>
      repos:
        - name: chartmuseum
          url: '<redacted>'
          username: <redacted>
          password: <redacted>
  - name: time-window
    type: time
    source:
      start: 9:00 PM
      stop: 9:10 PM
jobs:
  - name: deploy-my-service
    plan:
      - get: time-window
        trigger: true
      - get: cicd-helm-values_my-service
        trigger: true
      - put: helm-deployment
        params:
          release: my-service
          namespace: dev-env
          chart: chartmuseum/application-template
          values: ./cicd-helm-values_my-service/dev-env/my-service/values.yaml
Any ideas on how to combine the time-window and the cicd-helm-values_my-service would be greatly appreciated. Thanks in advance!
For that kind of precise time scheduling, the time resource is not well suited. What works well is https://github.com/pivotal-cf-experimental/cron-resource. This will solve one part of your problem.
Regarding triggering with AND semantics: that is not how a fan-in works; the semantics is OR, as you noticed. You might try the gate resource (https://github.com/Meshcloud/gate-resource), although I am not sure it would work for your case.
EDIT: Fixed the URL of the gate resource.
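For reference, here is a minimal sketch of swapping the time resource for the cron resource. The Docker image, cron expression, and location below are assumptions you would need to adapt, and note that on its own this still runs nightly even when the repo has not changed; full AND semantics would still need something like the gate resource.
resource_types:
  - name: cron-resource
    type: docker-image
    source:
      repository: cftoolsmiths/cron-resource   # assumed published image for the cron resource
resources:
  - name: nightly-window
    type: cron-resource
    source:
      expression: "0 21 * * *"        # fire once at 9:00 PM
      location: "Europe/Amsterdam"    # adjust to your timezone
      fire_and_forget: true
jobs:
  - name: deploy-my-service
    plan:
      - get: nightly-window
        trigger: true                 # the cron tick is the only trigger
      - get: cicd-helm-values_my-service
        trigger: false                # fetch the latest values without triggering on pushes
      # ...put: helm-deployment as in the original pipeline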

How to use concurrencyPolicy for GKE cron job correctly?

I set concurrencyPolicy to Allow, here is my cronjob.yaml:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: gke-cron-job
spec:
  schedule: '*/1 * * * *'
  startingDeadlineSeconds: 10
  concurrencyPolicy: Allow
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            run: gke-cron-job
        spec:
          restartPolicy: Never
          containers:
            - name: gke-cron-job-solution-2
              image: docker.io/novaline/gke-cron-job-solution-2:1.3
              env:
                - name: NODE_ENV
                  value: 'production'
                - name: EMAIL_TO
                  value: 'novaline.dulin#gmail.com'
                - name: K8S_POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
              ports:
                - containerPort: 8080
                  protocol: TCP
After reading docs: https://cloud.google.com/kubernetes-engine/docs/how-to/cronjobs
I still don't understand how to use concurrencyPolicy.
How can I run my cron jobs concurrently?
Here is the logs of cron job:
☁ nodejs-gcp [master] ⚡ kubectl logs -l run=gke-cron-job
> gke-cron-job-solution-2#1.0.2 start /app
> node ./src/index.js
config: { ENV: 'production',
EMAIL_TO: 'novaline.dulin#gmail.com',
K8S_POD_NAME: 'gke-cron-job-1548660540-gmwvc',
VERSION: '1.0.2' }
[2019-01-28T07:29:10.593Z] Start daily report
send email: { to: 'novaline.dulin#gmail.com', text: { test: 'test data' } }
> gke-cron-job-solution-2#1.0.2 start /app
> node ./src/index.js
config: { ENV: 'production',
EMAIL_TO: 'novaline.dulin#gmail.com',
K8S_POD_NAME: 'gke-cron-job-1548660600-wbl5g',
VERSION: '1.0.2' }
[2019-01-28T07:30:11.405Z] Start daily report
send email: { to: 'novaline.dulin#gmail.com', text: { test: 'test data' } }
> gke-cron-job-solution-2#1.0.2 start /app
> node ./src/index.js
config: { ENV: 'production',
EMAIL_TO: 'novaline.dulin#gmail.com',
K8S_POD_NAME: 'gke-cron-job-1548660660-8mn4r',
VERSION: '1.0.2' }
[2019-01-28T07:31:11.099Z] Start daily report
send email: { to: 'novaline.dulin#gmail.com', text: { test: 'test data' } }
As you can see, the timestamps indicate that the jobs are not running concurrently.
It's because you're reading the wrong documentation. CronJobs aren't a GKE-specific feature. For the full documentation on CronJob API, refer to the Kubernetes documentation: https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy (quoted below).
Concurrency policy decides whether a new job can be started while the previous job run from the same CronJob is still in progress. If you have a CronJob that runs every 5 minutes, and sometimes the job takes 8 minutes, then you may run into a case where multiple jobs are running at a time. This policy decides what to do in that case.
Concurrency Policy
The .spec.concurrencyPolicy field is also optional. It specifies how to treat concurrent executions of a job that is created by this cron job. The spec may specify only one of the following concurrency policies:
Allow (default): The cron job allows concurrently running jobs
Forbid: The cron job does not allow concurrent runs; if it is time for a new job run and the previous job run hasn’t finished yet, the cron job skips the new job run
Replace: If it is time for a new job run and the previous job run hasn’t finished yet, the cron job replaces the currently running job run with a new job run
Note that concurrency policy only applies to the jobs created by the same cron job. If there are multiple cron jobs, their respective jobs are always allowed to run concurrently.
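Put differently, concurrent jobs only appear when a run outlives the schedule interval; your runs finish within a minute, so nothing overlaps. A minimal sketch (hypothetical name, with a busybox sleep standing in for the real workload) that would produce overlapping runs with concurrencyPolicy: Allow:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: overlap-demo
spec:
  schedule: '*/1 * * * *'
  concurrencyPolicy: Allow      # new jobs may start while older ones are still running
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: sleeper
              image: busybox
              # Each run takes ~3 minutes, so up to three jobs overlap.
              command: ["sh", "-c", "echo started; sleep 180; echo done"]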

Unable to transform request to binary

To implement binary support in API Gateway for file uploads, I have used the serverless-apigw-binary plugin and added the necessary content types that API Gateway should convert.
This is my serverless.yml file:
service: aws-java-gradle
provider:
  name: aws
  runtime: java8
  stage: dev
  region: us-east-1
custom:
  apigwBinary:
    types:
      - 'application/pdf'
plugins:
  - serverless-apigw-binary
package:
  artifact: build/distributions/hello.zip
functions:
  uploadLoadFiles:
    handler: com.serverless.UploadFileHandler
    role: UploadFileRole
    timeout: 180
    events:
      - http:
          integration: lambda
          path: upload
          method: put
          cors:
            origin: '*'
            headers:
              - Content-Type
              - X-Amz-Date
              - Authorization
              - X-Api-Key
              - X-Amz-Security-Token
              - X-Amz-User-Agent
              - X-Requested-With
            allowCredentials: false
          request:
            passThrough: WHEN_NO_TEMPLATES
            template:
              application/pdf: '{ "operation":"dev-aws-java-gradle", "base64Image": "$input.body", "query": "$input.params("fileName")" }'
          response:
            template: $input.body
            statusCodes:
              400:
                pattern: '.*"httpStatus":400,.*'
                template: ${file(response-template.txt)}
              401:
                pattern: '.*"httpStatus":401,.*'
                template: ${file(response-template.txt)}
              403:
                pattern: '.*"httpStatus":403,.*'
                template: ${file(response-template.txt)}
              500:
                pattern: '.*"httpStatus":500,.*'
                template: ${file(response-template.txt)}
              501:
                pattern: '.*[a-zA-Z].*'
                template: ${file(unknown-error-response-template.txt)}
    environment:
      S3BUCKET: ${env:S3_BUCKET}
      APS_ENV: ${env:APS_ENV}
# you can add CloudFormation resource templates here
resources:
  Resources:
    UploadFileRole:
      Type: AWS::IAM::Role
      Properties:
        RoleName: "UploadFileRole"
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
              Action: sts:AssumeRole
        Policies:
          - PolicyName: loggingPolicy
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                - Effect: Allow
                  Action:
                    - logs:CreateLogGroup
                    - logs:CreateLogStream
                    - logs:PutLogEvents
                  Resource:
                    - 'Fn::Join':
                        - ':'
                        - - 'arn:aws:logs'
                          - Ref: 'AWS::Region'
                          - Ref: 'AWS::AccountId'
                          - 'log-group:/aws/lambda/*:*:*'
All the necessary settings were applied in API Gateway after running sls deploy (checked based on this article: https://aws.amazon.com/blogs/compute/binary-support-for-api-integrations-with-amazon-api-gateway/), but when I hit my endpoint, API Gateway gives an error like this:
Verifying Usage Plan for request: 45bac722-f039-11e7-bcc6-f9a1aa509052. API Key: API Stage: fd4ue8lpia/int
API Key authorized because method 'PUT /upload' does not require API Key. Request will not contribute to throttle or quota limits
Usage Plan check succeeded for API Key and API Stage fd4ue8lpia/int
Starting execution for request: 45bac722-f039-11e7-bcc6-f9a1aa509052
HTTP Method: PUT, Resource Path: /upload
Method request path:
{}
Method request query string:
{}
Method request headers: {Accept=*/*, CloudFront-Viewer-Country=IN, postman-token=6f1a23ba-36c2-104d-2e12-dc2c76a985ee, CloudFront-Forwarded-Proto=https, CloudFront-Is-Tablet-Viewer=false, origin=chrome-extension://aicmkgpgakddgnaphhhpliifpcfhicfo, CloudFront-Is-Mobile-Viewer=false, User-Agent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36, X-Forwarded-Proto=https, CloudFront-Is-SmartTV-Viewer=false, Host=fd4ue8lpia.execute-api.us-east-1.amazonaws.com, Accept-Encoding=gzip, deflate, br, X-Forwarded-Port=443, X-Amzn-Trace-Id=Root=1-5a4c530e-2c1927f85440b71b66cf0c15, Via=2.0 fc18daf68173838631b562fe2efaf8f8.cloudfront.net (CloudFront), x-postman-interceptor-id=8fa8440f-0541-fdac-c60c-6019c2269a66, X-Amz-Cf-Id=P4PYoi5Wwb-NCYRSdTQ_5TdtpbQLBaEATXoyJGC7cS8g6LsoCRPbkg==, X-Forwarded-For=157.50.20.12, 52.46.37.156, content-type=application/pdf, Accept-Language=en-US,en;q=0.9, cache-control=no-cache, CloudFront-Is-Desktop-Viewer=true}
Method request body before transformations: [Binary Data]
Execution failed due to configuration error: Unable to transform request
Method completed with status: 500
and the request never crosses API Gateway to reach the Lambda function.
But I followed a solution mentioned here: edit the Lambda function name in the Integration Request section of the resource in API Gateway (re-entering the same name) and click OK. Then the errors were gone and everything worked fine.
I checked for changes in the roles after doing that, but found none.
Can anyone suggest what happened there, and whether there is a better solution to the problem described above?
Thanks in advance.
But I followed a solution mentioned here: edit the Lambda function name in the Integration Request section of the resource in API Gateway (re-entering the same name) and click OK. Then the errors were gone and everything worked fine.
When you edit the Lambda function in the console, API Gateway sets up the permission to call the Lambda function automatically. If you want to do that manually, you can do it using the CLI.
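For completeness, one way to make that permission explicit in your deployment instead of relying on the console is an AWS::Lambda::Permission resource in the serverless.yml resources block. This is only a sketch: the logical IDs UploadLoadFilesLambdaFunction and ApiGatewayRestApi are assumptions based on the Serverless Framework's usual naming, so check them against your generated CloudFormation template.
resources:
  Resources:
    UploadFilesApiGatewayPermission:
      Type: AWS::Lambda::Permission
      Properties:
        Action: lambda:InvokeFunction
        Principal: apigateway.amazonaws.com
        # assumed logical ID the framework generates for the uploadLoadFiles function
        FunctionName:
          Fn::GetAtt: [UploadLoadFilesLambdaFunction, Arn]
        # limit the permission to PUT /upload on this API
        SourceArn:
          Fn::Join:
            - ''
            - - 'arn:aws:execute-api:'
              - Ref: AWS::Region
              - ':'
              - Ref: AWS::AccountId
              - ':'
              - Ref: ApiGatewayRestApi
              - '/*/PUT/upload'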