Concourse CI - S3 trigger not firing. How often does it check?

I've got a Concourse job that uses the appearance of a file in an Amazon S3 bucket as the trigger for a suite of tests, using this resource: https://github.com/concourse/s3-resource . The problem is that the job does not fire when the file appears. When I trigger the job manually, it does see the file and starts the test suite.
Yaml config looks like this:
resources:
- name: s3-trigger-file
  type: s3
  source:
    bucket: my-bucket-name
    regexp: qabot_request_(.*).json
    access_key_id: {{s3-access-key-id}}
    secret_access_key: {{s3-secret-access-key}}

jobs:
- name: my-job
  public: true
  plan:
  - get: s3-trigger-file
    trigger: true
When I click on the trigger itself in the Concourse UI, I see what looks like a running monitor. As I said, the job isn't firing when the file appears, but a manual trigger does verify that the S3 input is found.
How can I debug why the automatic trigger isn't firing? Also, how much latency is expected for the s3 resource to detect a new file has appeared?
Concourse 3.4. Thanks!

The capturing group in your regexp must match a semver-compliant version.
See the documentation:
The version extracted from this pattern is used to version the resource. Semantic versions, or just numbers, are supported. Accordingly, full regular expressions are supported, to specify the capture groups.
Your capturing group currently captures "quote2" as the version, which is neither a semantic version nor a plain number, so the resource never emits a new version and the automatic trigger never fires. You should probably delete the pipeline and re-create it with a modified regexp (e.g. qabot_request_quote(\d+).json) so that the capture group matches only the numeric part.
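As for the latency question: Concourse checks each resource on an interval, one minute by default, and the interval can be tuned per resource with the check_every field. A minimal sketch with the corrected regexp (the 2m value is just an illustration):

resources:
- name: s3-trigger-file
  type: s3
  check_every: 2m   # optional; defaults to 1m
  source:
    bucket: my-bucket-name
    regexp: qabot_request_quote(\d+).json
    access_key_id: {{s3-access-key-id}}
    secret_access_key: {{s3-secret-access-key}}

With a valid version capture and the default settings, a new file should be detected within roughly a minute of appearing.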

Related

Azure DevOps: How to eliminate the warning "Tags set for the trigger didn't match the pipeline" in Azure DevOps?

I have two Azure DevOps pipelines set up so that the completion of Pipeline One triggers Pipeline Two. It works fine, but it generates an unnecessary error message.
All recent Pipeline Two builds are listed on this page (not really a link, don't bother clicking on it) : https://dev.azure.com/mycompany/myproject/_build?definitionId=29
Any trigger issues are listed on this page (not really a link, don't bother clicking on it) : https://dev.azure.com/mycompany/myproject/_build?definitionId=29&view=triggerIssues
It appears that every run of Pipeline One -> Pipeline Two adds this warning to the Trigger Issues page: "Tags set for the trigger didn't match the pipeline". It's only a warning, not an error, and Pipeline Two executes successfully. But how can I eliminate this warning message?
The pipeline resource is specified in Pipeline Two as follows:
resources:
  pipelines:
  - pipeline: pipeline-one
    source: mycompany.pipeline-one
    # project: myproject # optional - only required if first pipeline is in a different project
    trigger:
      enabled: true
      branches:
        include:
        - master
        - develop
        - release_*
No tags are specified, because tags are not used.
I have reviewed the following documentation without finding an answer. I may have missed something in the docs.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/pipeline-triggers?view=azure-devops
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema#define-a-pipelines-resource
https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema/resources-pipelines-pipeline?view=azure-pipelines

How can I pass specific parameters from github webhook to tekton pipeline?

I am working on a Tekton pipeline. I would like to retrieve specific fields from the source code, such as the image version and image repo configured in the Helm manifests, and pass them to a Tekton task.
Chart.yaml:
appVersion: 1.1.37
values.yaml in the source code:
image: images/gsample
tekton-task.yaml:
params:
  - name: IMAGE_REPO
    description: The image registry
  - name: IMAGE_TAG
    description: The image tag
Any ideas on how to retrieve the value of the image repo from values.yaml and the image tag from Chart.yaml and pass them to the Tekton pipeline?
Short answer: you can't grab values out of the repository itself.
When setting up a Tekton Trigger (and GitHub/GitLab/... webhooks in general), you have to work from the payload that is sent, which usually includes: a branch, a commit ref, the repository clone URL (ssh and/or http), the author of the last commit, ...
A good starting point, using GitHub, would be to go through their "Webhooks and Events Payloads" doc and see what could be relevant to your use case:
https://docs.github.com/en/developers/webhooks-and-events/webhooks/webhook-events-and-payloads
Now, in theory, you could set up a first trigger that receives notifications from GitHub and starts a Task to clone your repository, then another Task to grab the relevant values out of your values.yaml (or whatever else), and eventually notify another trigger with an arbitrary payload. A sketch of that extraction step follows.
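For illustration, a minimal sketch of the middle step, assuming a git-clone Task has already populated a workspace named source (the Task name, workspace name, and yq image are assumptions, not part of the original answer):

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: extract-chart-values   # hypothetical name
spec:
  workspaces:
    - name: source             # assumed to be filled by a prior git-clone Task
  results:
    - name: image-tag
      description: appVersion read from Chart.yaml
    - name: image-repo
      description: image read from values.yaml
  steps:
    - name: read-values
      image: mikefarah/yq:4    # any image with a YAML parser would do
      script: |
        # Write the extracted values as Task results so later Tasks
        # can consume them via $(tasks.<task-name>.results.<result-name>).
        yq eval '.appVersion' $(workspaces.source.path)/Chart.yaml | tr -d '\n' > $(results.image-tag.path)
        yq eval '.image' $(workspaces.source.path)/values.yaml | tr -d '\n' > $(results.image-repo.path)

A later Task in the same Pipeline could then feed $(tasks.extract-chart-values.results.image-repo) and $(tasks.extract-chart-values.results.image-tag) into the IMAGE_REPO and IMAGE_TAG params.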

GitHub Actions: How to dynamically set environment url based on deployment step output?

I found out about a really nice GitHub Actions feature called Environments. Using the appropriate syntax, an Environment can also be created inside a GitHub Actions workflow.yml like this:
environment:
  name: test_environment
  url: https://your-apps-url-here.com
As the docs state, that's a valid way to create GitHub Actions Environments:
Running a workflow that references an environment that does not exist
will create an environment with the referenced name.
But inside my current GitHub Actions workflow, is there a way to set the url dynamically, based on a deployment step's output? I have a dynamic URL resulting from the deployment to AWS which I can't define up front.
The job workflow docs tell us that there's also a way of using expressions inside the url field:
environment:
  name: test_environment
  url: ${{ steps.step_name.outputs.url_output }}
Now imagine a ci.yml workflow file that uses the AWS CLI to deploy a static website to S3, where we used a tool like Pulumi to dynamically create an S3 Bucket inside our AWS account. We can read the dynamically created bucket name with the command pulumi stack output bucketName. The deploy step inside the ci.yml could then look like this:
- name: Deploy Nuxt.js generated static site to S3 Bucket via AWS CLI
  id: aws-sync
  run: |
    aws s3 sync ../dist/ s3://$(pulumi stack output bucketName) --acl public-read
    echo "::set-output name=s3_url::http://$(pulumi stack output bucketUrl)"
  working-directory: ./deployment
There are two crucial points here. First, we give the deployment step an id so we can reference it via steps.<step_name> inside environment.url. Second, we define a step output using echo "::set-output name=s3_url::http://$(pulumi stack output bucketUrl)". In this example I create a variable called s3_url; you could replace pulumi stack output bucketUrl with any other command or tool that returns your dynamic environment URL.
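Note that GitHub has since deprecated the ::set-output workflow command in favor of the GITHUB_OUTPUT environment file. On current runners, the equivalent line inside the run block would be:

echo "s3_url=http://$(pulumi stack output bucketUrl)" >> "$GITHUB_OUTPUT"

Everything else about the approach stays the same.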
Also be sure to add an http:// or https:// prefix, to prevent an error message like this:
Environment URL 'microservice-ui-nuxt-js-hosting-bucket-bc75fce.s3-website.eu-central-1.amazonaws.com' is not a valid http(s) URL, so it will not be shown as a link in the workflow graph.
Now the environment definition at the top of our ci.yml can access the s3_url output variable from our deployment step like this:
jobs:
  ci:
    runs-on: ubuntu-latest
    environment:
      name: microservice-ui-nuxt-js-deployment
      url: ${{ steps.aws-sync.outputs.s3_url }}
    steps:
      - name: Checkout
      ...
Using steps.aws-sync we reference the deployment step directly, since we defined it with that id. The appended .outputs.s3_url then references the variable containing our S3 URL. If you defined everything correctly, the GitHub Actions UI will render the environment URL directly below the finished job.
Here's also a fully working workflow embedded inside an example project.

How do I load values from a .json file into a Devops Yaml Pipeline Parameter

The Microsoft documentation explains the use of parameters in YAML pipeline jobs as follows:
# File: azure-pipelines.yml
trigger:
- master

extends:
  template: simple-param.yml
  parameters:
    yesNo: false # set to a non-boolean value to have the build fail
But instead of statically specifying the value of yesNo, I'd prefer to load it from a completely separate JSON config file; preferably one that both my build job and my application could share, so that parameters specified for the application could also be used in the build job.
Thus the question:
How do I load values from a .json file into a Devops Yaml Pipeline Parameter?
I've been using this marketplace task:
https://marketplace.visualstudio.com/items?itemName=OneLuckiDev.json2variable
It's been working great so far. I haven't tried it in that kind of setup, but I can't see why it wouldn't work with separate build pipelines or multi-stage builds. There are a few things to be aware of or stumble over, such as having to double-escape slashes in directory paths; and you'll have to fetch secrets from somewhere else, such as traditional variable groups.
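One caveat worth knowing: template parameters are expanded at compile time, before any task runs, so values read from a file at runtime can only become pipeline variables, not true parameters. If you'd rather avoid the marketplace task, a minimal sketch of the same idea with a plain script step might look like this (the config.json file name and yesNo key are assumptions for illustration):

steps:
- task: PowerShell@2
  inputs:
    targetType: inline
    script: |
      # Read the shared JSON config and expose a field as a pipeline variable.
      $config = Get-Content config.json -Raw | ConvertFrom-Json
      Write-Host "##vso[task.setvariable variable=yesNo]$($config.yesNo)"

Later steps in the same job can then read $(yesNo) like any other pipeline variable.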

Why does Concourse `get` a resource after `put`ing it?

When I configure the following pipeline:
resources:
- name: my-image-src
  type: git
  source:
    uri: https://github.com/concourse/static-golang
- name: my-image
  type: docker-image
  source:
    repository: concourse/static-golang
    username: {{username}}
    password: {{password}}

jobs:
- name: "my-job"
  plan:
  - get: my-image-src
  - put: my-image
After building and pushing the image to the Docker registry, it subsequently fetches the image. This can take some time and ultimately doesn't really add anything to the build. Is there a way to disable it?
Every put implies a get of the version that was created. There are a few reasons for this:
The primary reason for this is so that the newly created resource can be used by later steps in the build plan. Without the get there is no way to introduce "new" resources during a build's execution, as they're all resolved to a particular version to fetch when the build starts.
There are some side-benefits to doing this as well. For one, it immediately warms the cache on one worker. So it's at least not totally worthless; later jobs won't have to fetch it. It also acts as validation that the put actually had the desired effect.
In this particular case, since the put is the last step in the build plan, the primary reason doesn't really apply. But we didn't bother optimizing it away, since in most cases the side benefits make it worth avoiding the follow-up question ("why do only SOME put steps imply a get?").
It also cannot be disabled; we resist adding knobs like that, because they tend to be switches you turn one day and then have to remember to turn back once you need the default behavior again.
Docs: https://concourse-ci.org/put-step.html
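For what it's worth, newer Concourse releases have since added a no_get option on the put step; if your version supports it, the implicit fetch can be skipped. Treat this as a version-dependent sketch and check the linked put-step docs for your release:

jobs:
- name: "my-job"
  plan:
  - get: my-image-src
  - put: my-image
    no_get: true # skips the implicit get; only available on newer Concourse versions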