GitHub Actions: How to dynamically set environment url based on deployment step output? - github

I found out about a really nice GitHub Actions feature called Environments. Using the appropriate syntax, an Environment can also be created inside a GitHub Actions workflow.yml like this:
environment:
  name: test_environment
  url: https://your-apps-url-here.com
As the docs state, that's a valid way to create GitHub Actions Environments:
Running a workflow that references an environment that does not exist
will create an environment with the referenced name.
But inside my current GitHub Actions workflow, is there a way to dynamically set the url based on a deployment step output? I have a dynamic URL resulting from the deployment process to AWS which I can't define up front.

The job workflow docs tell us that there's also a way of using expressions inside the url field:
environment:
  name: test_environment
  url: ${{ steps.step_name.outputs.url_output }}
Now imagine a ci.yml workflow file that uses the AWS CLI to deploy a static website to S3, where we used a tool like Pulumi to dynamically create an S3 Bucket inside our AWS account. We can read the dynamically created S3 bucket name using the command pulumi stack output bucketName. The deploy step inside the ci.yml could then look like this:
- name: Deploy Nuxt.js generated static site to S3 Bucket via AWS CLI
  id: aws-sync
  run: |
    aws s3 sync ../dist/ s3://$(pulumi stack output bucketName) --acl public-read
    echo "::set-output name=s3_url::http://$(pulumi stack output bucketUrl)"
  working-directory: ./deployment
There are two crucial points here: First, we use id inside the deployment step to define a step name that we can easily reference via steps.step_name inside our environment's url. Second, we define a step output using echo "::set-output name=s3_url::http://$(pulumi stack output bucketUrl)". In this example I create an output variable called s3_url. You can replace pulumi stack output bucketUrl with any other command or tool that returns your dynamic environment url.
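Note that GitHub has since deprecated the ::set-output workflow command in favor of the $GITHUB_OUTPUT environment file; the same step rewritten against the newer syntax would look roughly like this (everything else stays unchanged):
- name: Deploy Nuxt.js generated static site to S3 Bucket via AWS CLI
  id: aws-sync
  run: |
    aws s3 sync ../dist/ s3://$(pulumi stack output bucketName) --acl public-read
    # write the output to the GITHUB_OUTPUT environment file instead of using ::set-output
    echo "s3_url=http://$(pulumi stack output bucketUrl)" >> "$GITHUB_OUTPUT"
  working-directory: ./deployment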
Also be sure to add an http:// or https:// prefix in order to prevent an error message like this:
Environment URL 'microservice-ui-nuxt-js-hosting-bucket-bc75fce.s3-website.eu-central-1.amazonaws.com' is not a valid http(s) URL, so it will not be shown as a link in the workflow graph.
Now the environment definition at the top of our ci.yml can access the s3_url output variable from our deployment step like this:
jobs:
  ci:
    runs-on: ubuntu-latest
    environment:
      name: microservice-ui-nuxt-js-deployment
      url: ${{ steps.aws-sync.outputs.s3_url }}
    steps:
      - name: Checkout
        ...
Using steps.aws-sync we reference the deployment step directly, since we defined it with that id. The appended .outputs.s3_url then references the variable containing our S3 url. If you defined everything correctly, the GitHub Actions UI will render the environment URL directly below the finished job.
Here's also a fully working workflow embedded inside an example project.

Related

Github actions incorrectly thinks variable is a secret and so does not set outputs

A step in my workflow file returns some IDs of EC2 instances in my AWS account, and I then set these IDs as a GitHub output to be used in other jobs in my workflow file.
I have done this in many workflows, and the step returns something like this:
["i-0d945b001544f2614","i-0b90ba69d37aad78c"]
However, in one workflow file GitHub is masking the IDs because it thinks they are a secret for some reason, so the step returns:
["i-***2d571abc6d7d***4ef","i-***186ce12c5cd8e744"]
Therefore I get this error message on the workflow job summary:
Skip output 'instanceIDs' since it may contain secret.
And so the other jobs in my workflow file that rely on this output fail, as GitHub won't set the output.
I have tried using base64 as suggested in this post, but I haven't been able to get that to work.
Is there any other workaround?
Recently, GitHub released a new feature - configuration variables in workflows.
Configuration variables allow you to store non-sensitive data as plain text variables that can be reused across the workflows in your repository or organization.
You can define variables at the Organization, Repository, or Environment level, based on your requirements.
These variables are accessible from the workflow via the vars context.
Example:
jobs:
  display-variables:
    runs-on: ${{ vars.RUNNER }}
    steps:
      - name: Use variables
        run: |
          echo "Repository variable : ${{ vars.REPOSITORY_VAR }}"
          echo "Organization variable : ${{ vars.ORGANIZATION_VAR }}"
In this example, we have the following configuration variables: RUNNER, REPOSITORY_VAR, ORGANIZATION_VAR. As opposed to repository secrets, the values of these variables won't be masked.
For more details, see Defining configuration variables for multiple workflows.
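As for the base64 workaround mentioned in the question, a minimal sketch of how it is usually wired up (the job names, step id, and the reuse of the instance IDs from above are purely illustrative) could look like this:
jobs:
  get-instances:
    runs-on: ubuntu-latest
    outputs:
      instance_ids_b64: ${{ steps.fetch.outputs.ids_b64 }}
    steps:
      - id: fetch
        run: |
          # placeholder for the real AWS CLI call that returns the instance IDs
          IDS='["i-0d945b001544f2614","i-0b90ba69d37aad78c"]'
          # base64-encode the value before setting it as an output
          echo "ids_b64=$(echo -n "$IDS" | base64 -w0)" >> "$GITHUB_OUTPUT"
  use-instances:
    needs: get-instances
    runs-on: ubuntu-latest
    steps:
      # decode the value again in the consuming job
      - run: echo "${{ needs.get-instances.outputs.instance_ids_b64 }}" | base64 -d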

How can I pass specific parameters from github webhook to tekton pipeline?

I am working on a Tekton pipeline. I would like to retrieve specific fields from the source code, like the image version and image repo configured in the Helm manifests, and pass them to a Tekton task.
Chart.yaml
appVersion: 1.1.37
values.yaml in the source code
image: images/gsample
tekton-task.yaml
params:
  - name: IMAGE_REPO
    description: The image registry
  - name: IMAGE_TAG
    description: The image registry
Any ideas on how to retrieve the value of the image repo from values.yaml and the image tag from Chart.yaml and pass them to the Tekton pipeline?
Short answer: you can't grab values out of the repository itself.
Setting up a Tekton Trigger - and GitHub/GitLab/... webhooks in general - you have to work from the payload that is sent, which usually includes: a branch, a commit ref, your repository clone URL (ssh and/or http), the author of the last commit, ...
A good starting point, using GitHub, would be to go through their "Webhooks and Events Payloads" doc. See what could be relevant to your use case:
https://docs.github.com/en/developers/webhooks-and-events/webhooks/webhook-events-and-payloads
Now, in theory, ... you could try setting up a first trigger receiving notifications from GitHub, starting a Task that clones your repository, then another task that grabs the relevant values out of your values.yaml file or whatever else, ... and eventually notifies another trigger with an arbitrary payload.
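Such a "grab values" task could look roughly like the sketch below. This is only an illustration of the idea, not a drop-in solution: the task name, the workspace layout, and the mikefarah/yq image are assumptions. The extracted values are exposed as Tekton results, which a later task or the Pipeline can consume via $(tasks.read-chart-values.results.image-repo) and $(tasks.read-chart-values.results.image-tag):
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: read-chart-values
spec:
  workspaces:
    - name: source              # the workspace the repository was cloned into
  results:
    - name: image-repo
      description: The image repo read from values.yaml
    - name: image-tag
      description: The appVersion read from Chart.yaml
  steps:
    - name: extract
      image: mikefarah/yq:4     # assumed image providing yq
      script: |
        #!/bin/sh
        set -e
        # write the values as Tekton results so later tasks can consume them
        yq '.image' "$(workspaces.source.path)/values.yaml" | tr -d '\n' > "$(results.image-repo.path)"
        yq '.appVersion' "$(workspaces.source.path)/Chart.yaml" | tr -d '\n' > "$(results.image-tag.path)"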

What does Lock services in Azure Pipelines' task DockerCompose actually do?

I am learning to use Azure Pipelines for CI/CD. I read the official documentation and found the Docker Compose task has an action called Lock services. I have no idea what this action actually does and what it means by locking the images.
Can anyone explain it to me, or provide me some examples on when and how to use it?
The source code of this task is public, so you can check this page to analyze what exactly this command does.
An image has two different identifiers: a tag and a digest. Now, let's assume one scenario:
Most of the time, a tagged image in a container registry is mutable, so with appropriate permissions you or anyone else can update/push/delete an image with the same tag in that registry. However, when you deploy an image to a production environment, you cannot be sure that the image behind a specific tag has not been overwritten and is still the one you want to deploy.
In that case, a digest is the better choice.
A digest is a SHA256 hash calculated from the image that identifies it uniquely. Once anything in your image changes, the corresponding SHA256 value changes as well.
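For example, pulling by tag versus by digest (the digest below is a placeholder, not a real value):
docker pull redis:7                                # a tag may point to different image content over time
docker pull redis@sha256:<digest-of-the-image>     # a digest always resolves to exactly one image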
Explanation of this action:
Check this code line (defined here). Its logic is to read the image(s) used in the docker-compose.yml file, pull the image(s), and obtain a digest for each of them. Then a new docker-compose.yml file is automatically generated, in which each image is specified by its digest.
Sample:
The task definition I used:
- task: DockerCompose@0
  displayName: 'Lock services'
  inputs:
    containerregistrytype: 'Container Registry'
    dockerRegistryEndpoint: {service connection name}
    dockerComposeFile: 'Docker/docker-compose.yml'
    action: 'Lock services'
    removeBuildOptions: true
The docker-compose.yml:
version: '3'
services:
  web:
    image: xxxx/testwebapp
    ports:
      - "1983:80"
  newsfeed:
    image: xxx/merlin
  redis:
    image: redis
In the build log of this task you can see the contents of the new docker-compose.yml that was generated (listed using a cat command).
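It typically looks something like this (the digests below are placeholders, not the real values produced by the build):
version: '3'
services:
  web:
    image: xxxx/testwebapp@sha256:<digest>
    ports:
      - "1983:80"
  newsfeed:
    image: xxx/merlin@sha256:<digest>
  redis:
    image: redis@sha256:<digest>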
Now, when you deploy the images to production, just use the new docker-compose.yml file the task generated automatically. This can guarantee the deployed image is the version you built at the beginning, even if someone overwrites this image later.

How do I load values from a .json file into a Devops Yaml Pipeline Parameter

The Microsoft documentation explains the use of parameters in YAML pipeline jobs as:
# File: azure-pipelines.yml
trigger:
- master

extends:
  template: simple-param.yml
  parameters:
    yesNo: false # set to a non-boolean value to have the build fail
But instead of statically specifying the value of yesNo, I'd prefer to load it from a completely separate JSON config file - preferably a JSON file that both my build job and my application could share, so that parameters specified for the application could also be used in the build job.
Thus the question:
How do I load values from a .json file into a Devops Yaml Pipeline Parameter?
I've been using this marketplace task:
https://marketplace.visualstudio.com/items?itemName=OneLuckiDev.json2variable
And it's been working great so far. I haven't tried it with separate build pipelines/multi-stage builds, but I can't see why it wouldn't work. There are a few things you have to be aware of / stumble upon, like double-escaping slashes in directory paths, and you'll have to fetch secrets from someplace else, like traditional variable groups.
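If you'd rather avoid a marketplace task, a rough script-based alternative is to read the JSON with jq and set pipeline variables via the task.setvariable logging command. Here is a minimal sketch (the config.json file name and the yesNo key are assumptions, and note that this produces runtime variables rather than compile-time template parameters):
steps:
  - script: |
      # read the value from the shared JSON config and expose it as a pipeline variable
      yesNo=$(jq -r '.yesNo' config.json)
      echo "##vso[task.setvariable variable=yesNo]$yesNo"
    displayName: Load values from config.json
  - script: echo "yesNo is $(yesNo)"
    displayName: Use the loaded value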

Is it possible to use variables in a codeship-steps.yml file?

We currently use Codeship Pro to push Docker images to a private registry on AWS, as well as to deploy those images to an ECS cluster.
However, the codeship-steps.yml file includes a hard-coded name for the AWS region I'm pushing to. For example:
- name: push_production
  service: app
  type: push
  image_name: 123456789012.dkr.ecr.us-east-1.amazonaws.com/project/app-name
  image_tag: "{{.Timestamp}}"
  tag: master
  registry: https://123456789012.dkr.ecr.us-east-1.amazonaws.com
  dockercfg_service: aws_generator
I would like to be able to fairly easily switch this to deploy to a different AWS region. Thus the question:
Is it possible to use variables in a codeship-steps.yml file?
I know some of the properties can use a handful of built-in variables provided by Codeship (such as the {{.Timestamp}} value used for the image_tag property), but I don't know if, for example, values from an env_file can be used in the image_name, registry, and/or command properties of a step.
I'm imagining something like this...
codeship-steps.yml:
- name: push_production
  service: app
  type: push
  image_name: "123456789012.dkr.ecr.{{.AWS_REGION}}.amazonaws.com/project/app-name"
  image_tag: "{{.Timestamp}}"
  tag: master
  registry: "https://123456789012.dkr.ecr.{{.AWS_REGION}}.amazonaws.com"
  dockercfg_service: aws_generator
... but that results in an "error parsing image name during push step: invalid reference format" on the push step.
I've tried simply not specifying the registry in the image_name...
image_name: project/app-name
... but I get a "Build Error: no basic auth credentials" on the push step. At this point, I'm running out of ideas.
Is it possible to use [environment] variables in a codeship-steps.yml file?
While the image_tag can take advantage of Go templates, the same is not the case for image_name, registry, or anything else. This is a separate set of templating variables that are accessible only to the image_tag generation.
As for environment variables in general (CI environment variables or those defined in the service configs), these values can be used in codeship-steps.yml on the command step when passed through a shell command. For example:
- service: app
  command: echo The branch name is: $CI_BRANCH

Results in:

The branch name is: $CI_BRANCH

- service: app
  command: /bin/sh -c 'echo The branch name is: $CI_BRANCH'

Results in:

The branch name is: master
As for your 'no basic auth credentials' error message, it's possible that there's an issue with how you are retrieving the basic auth credentials for access to your image registry. If you are on a MacOS device, I would recommend that you review our documentation on how to generate Docker credentials.