How can I pass specific parameters from github webhook to tekton pipeline?

I am working on a Tekton pipeline. I would like to retrieve specific fields from the source code, like the image version and image repo configured in helm manifests, and pass them to a Tekton task.
appVersion: 1.1.37
values.yaml in the source code
image: images/gsample
- name: IMAGE_REPO
  description: The image registry
- name: IMAGE_TAG
  description: The image registry
Any ideas on how to retrieve the values of image repo from values.yaml and image tag from chart.yaml and pass them to the Tekton pipeline?

Short answer: you can't grab values out of the repository itself.
Setting up a Tekton Trigger - and GitHub/GitLab/... webhooks in general: you have to work from the payload that would be sent, which usually includes: a branch, a commit ref, your repository clone URL (ssh and/or http), author of the last commit, ...
A good starting point, using GitHub, would be to go through their "Webhooks and Events Payloads" doc. See what could be relevant to your use case:
Now, in theory, ... you could try setting up a first trigger, receiving notifications from GitHub, starting a Task that would clone your repository, then another task to grab relevant values out of your values.yaml file or whatever else, ... and eventually notify another trigger, with an arbitrary payload.


Ensure that a workflow in Github Actions is only ever triggered once

I have a workflow that is triggered when a specific file is changed. Is it possible to ensure that this workflow is only triggered the first time that file is changed?
The use case is:
a repository is created from a template repository
to initialize the README and other things in the repo, some variables can be set in a JSON config file
once that file is committed, the workflow runs, creates the README from a template etc.
I have tried to let the workflow delete itself before it commits and pushes the changed files. That doesn't work: fatal: not in a git directory and fatal: unsafe repository ('/github/workspace' is owned by someone else).
What might work is adding a topic like initialized to the repo at the end of the workflow and checking for the presence of this topic at the beginning of the workflow. However, this feels like a hack in that I'm abusing topics for something they're probably not meant to do.
So my question remains: is there a good way to only run the workflow once?
With the help of the comment by frennky I managed to solve this by using the GitHub CLI to disable the workflow.
Here is the step at the end of the workflow that does this:
- name: Disable this workflow
  shell: bash
  run: |
    gh workflow disable -R $GITHUB_REPOSITORY "${{ github.workflow }}"

GitHub Actions: How to dynamically set environment url based on deployment step output?

I found out about a really nice GitHub Actions feature called Environments. Using the appropriate syntax, an Environment can also be created inside a GitHub Actions workflow.yml like this:
environment:
  name: test_environment
As the docs state, that's a valid way to create GitHub Actions Environments:
Running a workflow that references an environment that does not exist
will create an environment with the referenced name.
But inside my current GitHub Actions workflow, is there a way I can dynamically set the url based on a deployment step output? I have a dynamic URL resulting from the deployment process to AWS which I can't define up-front.
The job workflow docs tell us that there's also a way of using expressions inside the url field:
environment:
  name: test_environment
  url: ${{ steps.step_name.outputs.url_output }}
Now imagine a ci.yml workflow file that uses AWS CLI to deploy a static website to S3, where we used a tool like Pulumi to dynamically create an S3 Bucket inside our AWS account. We can read the dynamically created S3 url using the following command: pulumi stack output bucketName. The deploy step inside the ci.yml could then look like this:
- name: Deploy Nuxt.js generated static site to S3 Bucket via AWS CLI
  id: aws-sync
  run: |
    aws s3 sync ../dist/ s3://$(pulumi stack output bucketName) --acl public-read
    echo "::set-output name=s3_url::http://$(pulumi stack output bucketUrl)"
  working-directory: ./deployment
There are 2 crucial points here: First we should use id inside the deployment step to define a step name we could easily access via step_name inside our environment:url. Second we need to define a step output using echo "::set-output name=s3_url::http://$(pulumi stack output bucketUrl)". In this example I create a variable s3_url. You could replace the pulumi stack output bucketUrl with any other command you'd like or tool you use, which responds with your dynamic environment url.
Also be sure to add http:// or https:// in order to prevent an error message like this:
Environment URL '' is not a valid http(s) URL, so it will not be shown as a link in the workflow graph.
Now the environment definition at the top of our ci.yml can access the s3_url output variable from our deployment step like this:
runs-on: ubuntu-latest
environment:
  name: microservice-ui-nuxt-js-deployment
  url: ${{ steps.aws-sync.outputs.s3_url }}
steps:
  - name: Checkout
Using steps.aws-sync we reference the deployment step directly, since we defined it with the id. The appended .outputs.s3_url then directly references the variable containing our S3 url. If you defined everything correctly, the GitHub Actions UI will render the environment URL directly below the finished job:
Here's also a fully working workflow embedded inside an example project.

Is there a way in Terraform Enterprise to read the payload from VCS?

I have configured a webhook between GitHub and Terraform Enterprise correctly, so each time I push a commit, the Terraform module gets executed. What I want to achieve is to use part of the name of the branch where the push was made and pass it as a variable to the Terraform module.
I have read that the value of a variable can be HCL code, but I am unable to find the correct object to access the payload (or at least, the branch name), so at this moment I think it is not possible to get that value directly from the workspace configuration.
If you have a workaround for this, it may also work for me.
At this point the only idea I have is to call the Terraform webhook using an API call.
Thanks in advance
OK, after several rounds of trial and error I found out that it is not possible to get any information in the Terraform module if you are using the VCS mode. So, in order to be able to get the branch, I had these options:
Use several workspaces
You can configure a workspace for each branch, so you may create a variable and select that branch in each workspace. The problem is that you will be repeating yourself with this option.
Use Terraform CLI and a GitHub action
I used this fine tutorial from HashiCorp for creating a GitHub action that uses Terraform Cloud. It gets you 99% of the job done. For passing a variable, you must be aware that there are two methods: using a file or using an environment variable (check that information on the HashiCorp site here). So using a:
terraform apply -var="branch=value"
won't work. In my case I used the tfvars approach, so in my GitHub Action I put this snippet:
- name: Setup Terraform variables
  id: vars
  run: |-
    cat > <<EOF
    branch = "${GITHUB_REF#refs/*/}"
I defined a variable within Terraform called branch, and I was able to get and work with this value.

GitHub Actions: Are there security concerns using an external action in a workflow job?

I have a workflow that FTPs files by using an external action from someuser:
- name: ftp deploy
  uses: someuser/ftp-action@master
  with:
    config: ${{ secrets.FTP_CONFIG }}
Is this a security concern? For example could someuser change ftp-action@master to access my secrets.FTP_CONFIG? Should I copy/paste their action into my workflow instead?
If you use ftp-action@master then every time your workflow runs it will fetch the master branch of the action and build it. So yes, I believe it would be possible for the owner to change the code to capture secrets and send them to an external server under their control.
What you can do to avoid this is use a specific version of the action and review their code. You can use a commit hash to refer to the exact version you want, such as ftp-action@efa82c9e876708f2fedf821563680e2058330de3. You could use a tag if it has release tags. e.g. ftp-action@v1.0.0
Although, this is maybe not as secure because tags can be changed.
Alternatively, and probably the most secure, is to fork the action repository and reference your own copy of it. my-fork/ftp-action@master.
The GitHub help page does mention:
Anyone with write access to a repository can read and use secrets.
If someuser does not have write access to the repository, there should be no security issue.
As commented below, you should specify the exact commit of the workflow you are using, in order to make sure it does not change its behavior without your knowledge.
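Added example (not part of the original question or answers): a small Python check for the pinning advice above. It scans workflow text for `uses: owner/repo@ref` references that are not pinned to a full 40-character commit SHA and marks those whose step also references `secrets.`. The sample workflow is constructed, and splitting steps line by line is a simplifying assumption rather than a full YAML parse.

```python
import re

# Constructed sample workflow; only the ftp-action references come from the thread.
SAMPLE_WORKFLOW = """\
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: ftp deploy
        uses: someuser/ftp-action@master
        with:
          config: ${{ secrets.FTP_CONFIG }}
      - name: pinned deploy
        uses: someuser/ftp-action@efa82c9e876708f2fedf821563680e2058330de3
        with:
          config: ${{ secrets.FTP_CONFIG }}
      - uses: ./.github/actions/local-step
"""

USES_RE = re.compile(r"^\s*(?:-\s*)?uses:\s*['\"]?([^\s'\"#]+)")
FULL_SHA_RE = re.compile(r"^[0-9a-f]{40}$")  # only a full SHA is immutable
LIST_ITEM_RE = re.compile(r"^\s*-\s")

def split_steps(text):
    """Group lines into chunks that each start at a '- ' list item (approximation)."""
    chunks, current = [], []
    for line in text.splitlines():
        if LIST_ITEM_RE.match(line) and current:
            chunks.append(current)
            current = []
        current.append(line)
    if current:
        chunks.append(current)
    return chunks

def audit(text):
    """Return third-party action references that use a mutable ref (branch or tag)."""
    findings = []
    for chunk in split_steps(text):
        for line in chunk:
            m = USES_RE.match(line)
            if not m or m.group(1).startswith(("./", "docker://")):
                continue  # not an action reference, or a local/container step
            action, _, ref = m.group(1).partition("@")
            if FULL_SHA_RE.match(ref):
                continue  # pinned to an exact commit
            findings.append({"action": action, "ref": ref or None,
                             "receives_secrets": any("secrets." in l for l in chunk)})
    return findings

if __name__ == "__main__":
    for finding in audit(SAMPLE_WORKFLOW):
        print(finding)
```

Findings with `receives_secrets` set are the ones to pin or fork first, since a changed branch or tag would run with those secrets in scope.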

Concourse CI - S3 trigger not firing. How often does it check?

I've got a Concourse job that uses the appearance of a file in an Amazon S3 bucket as a trigger to a suite of tests. Using this resource --> . Problem is, the job is not firing when the file appears. When I trigger the job manually, it does see the file and start the test suite.
Yaml config looks like this:
resources:
- name: s3-trigger-file
  type: s3
  source:
    bucket: my-bucket-name
    regexp: qabot_request_(.*).json
    access_key_id: {{s3-access-key-id}}
    secret_access_key: {{s3-secret-access-key}}
jobs:
- name: my-job
  public: true
  plan:
  - get: s3-trigger-file
    trigger: true
When I click on the trigger itself in the Concourse UI, I see what looks like a running monitor:
As I said, the job isn't firing when the file appears, but a manual trigger does verify the S3 input is found.
How can I debug why the automatic trigger isn't firing? Also, how much latency is expected for the s3 resource to detect a new file has appeared?
Concourse 3.4. Thanks ~~
The capturing group in your regexp must refer to a semver compliant version.
See the documentation:
The version extracted from this pattern is used to version the resource. Semantic versions, or just numbers, are supported. Accordingly, full regular expressions are supported, to specify the capture groups.
Your capturing group is currently making the captured "version" quote2. You should probably delete the pipeline and regenerate it with a modified regex (e.g. qabot_request_quote(\d+).json)