Why is the sed command not working in a GitHub Action?

I have this YAML file for a Cloud Run config with the following placeholders:
REGION
SERVICE_ACCOUNT
IMAGE
PROJECT_ID
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: tickets
  labels:
    cloud.googleapis.com/location: REGION
spec:
  template:
    spec:
      serviceAccountName: SERVICE_ACCOUNT
      containers:
        - image: IMAGE
          args:
            - -firebase-project-id=PROJECT_ID
            - -env=development
I also have a job defined for GitHub Actions:
# This workflow will deploy the built container image from Artifact Registry to Cloud Run.
name: Deploy to Cloud Run
on:
  workflow_dispatch:
    inputs:
      version:
        description: "The version to deploy"
        required: true
jobs:
  deploy:
    permissions:
      contents: "read"
      id-token: "write"
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v3
      - name: Google Auth
        uses: "google-github-actions/auth@v1"
        with:
          token_format: "access_token"
          workload_identity_provider: "${{ secrets.WIF_PROVIDER }}" # e.g. - projects/123456789/locations/global/workloadIdentityPools/my-pool/providers/my-provider
          service_account: "${{ secrets.WIF_SERVICE_ACCOUNT }}" # e.g. - my-service-account@my-project.iam.gserviceaccount.com
      - name: Replace values in the YAML file
        env:
          REGION: ${{ secrets.SERVICE_REGION }}
          SERVICE_ACCOUNT: ${{ secrets.CLOUD_RUN_SERVICE_ACCOUNT }}
          IMAGE: ${{ secrets.GAR_LOCATION }}-docker.pkg.dev/${{ secrets.PROJECT_ID }}/ticketing-dev/tickets:${{ github.sha }}
          PROJECT_ID: ${{ secrets.PROJECT_ID }}
        run: |
          sed -i.bak "s/REGION/$REGION/g" cloud-run.yml
          sed -i.bak "s/SERVICE_ACCOUNT/$SERVICE_ACCOUNT/g" cloud-run.yml
          sed -i.bak "s/IMAGE/$IMAGE/g" cloud-run.yml
          sed -i.bak "s/PROJECT_ID/$PROJECT_ID/g" cloud-run.yml
      - name: Deploy to Cloud Run
        uses: google-github-actions/deploy-cloudrun@v1
        with:
          metadata: ./cloud-run.yml
      - name: Show Cloud Run URL
        run: echo ${{ steps.deploy.outputs.url }}
At the Replace values in the YAML file step I try to use sed to replace those placeholders in the Cloud Run config file.
When I run the job I get this error:
Run sed -i.bak "s/REGION/$REGION/g" cloud-run.yml
sed: -e expression #1, char 37: unknown option to `s'
Error: Process completed with exit code 1.
And this error is not just for the REGION replacement; it is simply the first one sed tries to replace...
I've also tried hard-coding the actual value of $REGION in the sed command instead of using the env variable, and it still didn't work...
It only works if I do something like this:
sed -i 's/REGION/<value>/g' cloud-run.yml
but in my case I need double quotes so that the value of the $REGION variable gets expanded...

One or more of the replacement values contain / characters (the image path certainly does), and sed uses the character right after s as the delimiter of the s command, so an unescaped / inside the replacement breaks the expression with "unknown option to `s'". Two fixes:
Escape the / characters in the IMAGE variable.
Run one sed command with multiple expressions, using -i so the file is modified in place:
- name: Replace values in YAML file
  env:
    REGION: ${{ secrets.SERVICE_REGION }}
    SERVICE_ACCOUNT: ${{ secrets.CLOUD_RUN_SERVICE_ACCOUNT }}
    IMAGE: ${{ secrets.GAR_LOCATION }}-docker.pkg.dev\/${{ secrets.PROJECT_ID }}\/ticketing-dev\/tickets:${{ github.sha }}
    PROJECT_ID: ${{ secrets.PROJECT_ID }}
  run: sed -i -e "s/REGION/$REGION/g" -e "s/SERVICE_ACCOUNT/$SERVICE_ACCOUNT/g" -e "s/IMAGE/$IMAGE/g" -e "s/PROJECT_ID/$PROJECT_ID/g" cloud-run.yml
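A further option, not from the original answer but standard sed behaviour: the s command accepts any delimiter character, so switching from / to something like | means the slashes inside $IMAGE never need escaping. A minimal sketch of the original replace step with that change, assuming none of the values contain a | character:
- name: Replace values in the YAML file
  env:
    REGION: ${{ secrets.SERVICE_REGION }}
    SERVICE_ACCOUNT: ${{ secrets.CLOUD_RUN_SERVICE_ACCOUNT }}
    IMAGE: ${{ secrets.GAR_LOCATION }}-docker.pkg.dev/${{ secrets.PROJECT_ID }}/ticketing-dev/tickets:${{ github.sha }}
    PROJECT_ID: ${{ secrets.PROJECT_ID }}
  run: |
    # '|' is the delimiter here, so '/' in the image path is just literal text
    sed -i.bak "s|REGION|$REGION|g" cloud-run.yml
    sed -i.bak "s|SERVICE_ACCOUNT|$SERVICE_ACCOUNT|g" cloud-run.yml
    sed -i.bak "s|IMAGE|$IMAGE|g" cloud-run.yml
    sed -i.bak "s|PROJECT_ID|$PROJECT_ID|g" cloud-run.yml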

Related

GitHub Actions Rollback Strategy

I am trying to create a rollback strategy for an ECS task that is managed with GitHub Actions. What I am trying to do is:
If the previous task definition's image is not found on ECR, decrement the revision number by 1 and check the next previous task definition's image, until a valid image is found (an image tag actually, but it doesn't matter).
If the previous task definition revision number is not found, check the one before it (revision number - 1, as above) until a valid one is found.
Towards that goal: when the id: tag-checker step hits its else block, I need to repeat all the steps from id: previous-revision-image-tag onwards until my if/else checks pass.
So how can I achieve this with GitHub Actions?
Basically I want to repeat a chosen step and all the steps below it.
name: AWS Rollback
on:
  workflow_dispatch:
env:
  AWS_REGION: "region"
  ECR_REPOSITORY: "nodejs-1"
  ECS_SERVICE: "nodejs-service"
  ECS_CLUSTER: "test-1"
  ECS_TASK_DEFINITION: ".aws/staging.paris.json"
  CONTAINER_NAME: "nodejs-test"
jobs:
  Rollback:
    name: "Rollback"
    runs-on: ubuntu-latest
    environment: production
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Set Current Task Revision
        id: current-revision
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ steps.date.outputs.date }}-${{ steps.vars.outputs.sha_short }}
        run: |
          echo "REVISION_NUMBER=$(aws ecs describe-services --cluster ${{ env.ECS_CLUSTER }} --query "services[].taskDefinition" --services ${{ env.ECS_SERVICE }} --output text | cut -d: -f7)" >> $GITHUB_ENV
          echo "REVISION_NAME=$(aws ecs describe-services --cluster ${{ env.ECS_CLUSTER }} --query "services[].taskDefinition" --services ${{ env.ECS_SERVICE }} --output text | cut -d: -f1-6)" >> $GITHUB_ENV
      - name: Set Previous Task Revision Number
        id: previous-revision-number
        run: |
          echo "PREVIOUS_REVISION_NUMBER=$((${{ env.REVISION_NUMBER }}-1))" >> $GITHUB_ENV
      - name: Set Previous Task Revision Image Tag
        id: previous-revision-image-tag
        env:
          PREVIOUS_REVISION_NUMBER: ${{ env.PREVIOUS_REVISION_NUMBER }}
        run: |
          echo "IMAGE_TAG=$(aws ecs describe-task-definition --task-definition "${{ env.ECR_REPOSITORY }}:$PREVIOUS_REVISION_NUMBER" --query "taskDefinition.containerDefinitions[0].image" --output text | cut -d: -f2)" >> $GITHUB_ENV
      - name: Check if previous revision image exists or not
        id: tag-checker
        env:
          IMAGE_TAG: ${{ env.IMAGE_TAG }}
        run: |
          if (aws ecr describe-images --repository-name=${{ env.ECR_REPOSITORY }} --image-ids=imageTag=$IMAGE_TAG &> /dev/null); then
            echo "Image Found"
          else
            echo "Image Not Found"
          fi
      - name: Check if previous task revision exists or not
        id: revision-checker
        env:
          PREVIOUS_REVISION_NUMBER: ${{ env.PREVIOUS_REVISION_NUMBER }}
        run: |
          if (aws ecs describe-task-definition --task-definition "${{ env.ECR_REPOSITORY }}:$PREVIOUS_REVISION_NUMBER" --output text &> /dev/null); then
            echo "Task definition Found"
          else
            echo "Task definition Not Found"
          fi
      # - name: Rollback to previous version
      #   id: rollback
      #   run: |
      #     aws ecs update-service --cluster ${{ env.ECS_CLUSTER }} --service ${{ env.ECS_SERVICE }} --task-definition ${{ env.REVISION_NAME }}:${{ env.PREVIOUS_REVISION_NUMBER }}
I have a solution for you that doesn't require updating the revision and task definition.
Let's say you have an ECR repo with the tags:
latest
v1.0.2
v1.0.1
v1.0.0
latest points to your latest version (v1.0.2).
You need to update your ECS task definition so that it always uses the latest tag.
When you want to roll back, you can do a small hack on ECR: point latest to v1.0.1, then just force ECS to redeploy the service.
IMAGE_TAG_YOU_WANT_TO_DEPLOY="v1.0.1"
# fetch v1.0.1 manifest
MANIFEST=$(aws ecr batch-get-image --repository-name ${ECR_REPOSITORY} --image-ids imageTag=${IMAGE_TAG_YOU_WANT_TO_DEPLOY} --output json | jq --raw-output --join-output '.images[0].imageManifest')
# move latest tag pointer to v1.0.1
aws ecr put-image --repository-name ${ECR_REPOSITORY} --image-tag latest --image-manifest "$MANIFEST"
aws ecs update-service --cluster ${ECS_CLUSTER} --service ${ECS_SERVICE} --force-new-deployment --region us-east-2
For a new deployment you create the image tags (v1.0.3 and latest) together and push both to ECR,
then just invoke update-service (the new latest is v1.0.3):
aws ecs update-service --cluster ${ECS_CLUSTER} --service ${ECS_SERVICE} --force-new-deployment --region us-east-2
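To run this from GitHub Actions, here is a minimal sketch of a manually triggered rollback workflow wrapping the same commands. The workflow name, the image_tag input and the placement of the credentials step are assumptions, not part of the original answer; the env values mirror the question's workflow:
name: Rollback via latest tag
on:
  workflow_dispatch:
    inputs:
      image_tag:
        description: "Existing ECR tag to roll back to, e.g. v1.0.1"
        required: true
env:
  AWS_REGION: "region"
  ECR_REPOSITORY: "nodejs-1"
  ECS_SERVICE: "nodejs-service"
  ECS_CLUSTER: "test-1"
jobs:
  rollback:
    runs-on: ubuntu-latest
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Re-point latest and force a new deployment
        run: |
          # fetch the manifest of the tag we want to roll back to
          MANIFEST=$(aws ecr batch-get-image --repository-name "$ECR_REPOSITORY" --image-ids imageTag="${{ github.event.inputs.image_tag }}" --output json | jq -r '.images[0].imageManifest')
          # move the latest tag pointer to that image
          aws ecr put-image --repository-name "$ECR_REPOSITORY" --image-tag latest --image-manifest "$MANIFEST"
          # redeploy the service so it picks up the re-pointed latest tag
          aws ecs update-service --cluster "$ECS_CLUSTER" --service "$ECS_SERVICE" --force-new-deployment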

Azure DevOps error when trying to execute a task using an array

I have an Azure DevOps deploy template, shown below. I am trying to execute a task (Kubernetes@1) multiple times, looping over an array that is defined in parameters.
parameters:
  - name: env
  - name: serviceConnection
  - name: 'serviceNames'
    type: object
    default:
      - audit
      - export
      - admin
jobs:
  - deployment: Deployment
    displayName: Deploy to ${{ parameters.env }}
    environment: ${{ parameters.env }}
    pool: on-prem-pool
    variables:
      - template: azure-deploy-vars.yaml
        parameters:
          env: ${{ parameters.env }}
    timeoutInMinutes: 10
    strategy:
      runOnce:
        deploy:
          steps:
            - script: |
                echo "Prepare to deploy config for ${{ parameters.serviceNames}}. clean workspace"
                ls -la
                cd ..
                ls -la
                rm -rf config
                rm -rf devops
                rm -rf TestResults
                rm -rf helm
                rm -f config.sh
                rm -f *.properties
              displayName: 'Clean Workspace'
            - checkout: config
              path: config
          - ${{ each service in parameters.serviceNames }}:
            - task: Kubernetes@1
              displayName: Deploy Config
              inputs:
                connectionType: Kubernetes Service Connection
                kubernetesServiceEndpoint: '${{ parameters.serviceConnection }}'
                namespace: '$(PROJECT_NAMESPACE)'
                configMapName: '${{ service }}'
                forceUpdateConfigMap: true
                useConfigMapFile: true
                configMapFile: '$(Agent.BuildDirectory)/config/${{ service }}/${{ parameters.env }}/application-${{ parameters.env }}.properties'
But I get this error when I try to run the pipeline.
Can anyone point out whether there is an error in my template?
Error:
/ci/azure-deploy.tpl.yaml: (Line: 41, Col: 11, Idx: 1048) - (Line: 41, Col: 12, Idx: 1049): While parsing a block mapping, did not find expected key.
You need to indent the line
- ${{ each service in parameters.serviceNames }}:
so that it matches the - script: and - checkout: lines above it, and then increase the indent of the following lines as well.
Corrected template:
parameters:
  - name: env
  - name: serviceConnection
  - name: 'serviceNames'
    type: object
    default:
      - audit
      - export
      - admin
jobs:
  - deployment: Deployment
    displayName: Deploy to ${{ parameters.env }}
    environment: ${{ parameters.env }}
    pool: on-prem-pool
    variables:
      - template: azure-deploy-vars.yaml
        parameters:
          env: ${{ parameters.env }}
    timeoutInMinutes: 10
    strategy:
      runOnce:
        deploy:
          steps:
            - script: |
                echo "Prepare to deploy config for ${{ parameters.serviceNames}}. clean workspace"
                ls -la
                cd ..
                ls -la
                rm -rf config
                rm -rf devops
                rm -rf TestResults
                rm -rf helm
                rm -f config.sh
                rm -f *.properties
              displayName: 'Clean Workspace'
            - checkout: config
              path: config
            - ${{ each service in parameters.serviceNames }}:
              - task: Kubernetes@1
                displayName: Deploy Config
                inputs:
                  connectionType: Kubernetes Service Connection
                  kubernetesServiceEndpoint: '${{ parameters.serviceConnection }}'
                  namespace: '$(PROJECT_NAMESPACE)'
                  configMapName: '${{ service }}'
                  forceUpdateConfigMap: true
                  useConfigMapFile: true
                  configMapFile: '$(Agent.BuildDirectory)/config/${{ service }}/${{ parameters.env }}/application-${{ parameters.env }}.properties'

Share variables of GitHub Actions job to multiple subsequent jobs while retaining specific order

We have a GitHub Actions workflow consisting of 3 jobs:
provision-eks-with-pulumi: provisions the AWS EKS cluster (using Pulumi here)
install-and-run-argocd-on-eks: installs & configures ArgoCD using the kubeconfig from job 1
install-and-run-tekton-on-eks: installs & runs Tekton using the kubeconfig from job 1, but depending on job 2
We are already aware of this answer and the docs and use jobs.<job_id>.outputs to define the variable in job 1 and the needs context to use the variable in the subsequent jobs. BUT it only works for our job 2; it fails for job 3. Here's our workflow.yml:
name: provision
on: [push]
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: 'eu-central-1'
jobs:
  provision-eks-with-pulumi:
    runs-on: ubuntu-latest
    env:
      PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
    outputs:
      kubeconfig: ${{ steps.pulumi-up.outputs.kubeconfig }}
    steps:
      ...
      - name: Provision AWS EKS cluster with Pulumi
        id: pulumi-up
        run: |
          pulumi stack select dev
          pulumi up --yes
          echo "Create ~/.kube dir only, if not already existent (see https://stackoverflow.com/a/793867/4964553)"
          mkdir -p ~/.kube
          echo "Create kubeconfig and supply it for depending Action jobs"
          pulumi stack output kubeconfig > ~/.kube/config
          echo "::set-output name=kubeconfig::$(pulumi stack output kubeconfig)"
      - name: Try to connect to our EKS cluster using kubectl
        run: kubectl get nodes
  install-and-run-argocd-on-eks:
    runs-on: ubuntu-latest
    needs: provision-eks-with-pulumi
    environment:
      name: argocd-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install ArgoCD
        run: ...
  install-and-run-tekton-on-eks:
    runs-on: ubuntu-latest
    needs: install-and-run-argocd-on-eks
    environment:
      name: tekton-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install Tekton Pipelines, Dashboard, Triggers
        run: ...
Job 2 gets the kubeconfig correctly using needs.provision-eks-with-pulumi.outputs.kubeconfig, but job 3 does not (see this GitHub Actions log). We also don't want job 3 to depend only on job 1, because then jobs 2 and 3 would run in parallel.
How can job 3 run after job 2, but use the kubeconfig output from job 1?
That's easy, because a GitHub Actions job can depend on multiple jobs using the needs keyword. All you have to do in job 3 is use the array notation needs: [job1, job2].
So for your workflow it will look like this:
name: provision
on: [push]
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: 'eu-central-1'
jobs:
  provision-eks-with-pulumi:
    runs-on: ubuntu-latest
    env:
      PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
    outputs:
      kubeconfig: ${{ steps.pulumi-up.outputs.kubeconfig }}
    steps:
      ...
      - name: Provision AWS EKS cluster with Pulumi
        id: pulumi-up
        run: |
          pulumi stack select dev
          pulumi up --yes
          echo "Create ~/.kube dir only, if not already existent (see https://stackoverflow.com/a/793867/4964553)"
          mkdir -p ~/.kube
          echo "Create kubeconfig and supply it for depending Action jobs"
          pulumi stack output kubeconfig > ~/.kube/config
          echo "::set-output name=kubeconfig::$(pulumi stack output kubeconfig)"
      - name: Try to connect to our EKS cluster using kubectl
        run: kubectl get nodes
  install-and-run-argocd-on-eks:
    runs-on: ubuntu-latest
    needs: provision-eks-with-pulumi
    environment:
      name: argocd-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install ArgoCD
        run: ...
  install-and-run-tekton-on-eks:
    runs-on: ubuntu-latest
    needs: [provision-eks-with-pulumi, install-and-run-argocd-on-eks]
    environment:
      name: tekton-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install Tekton Pipelines, Dashboard, Triggers
        run: ...

Echo GitHub Actions environment variables

I'm trying to dive into GitHub Actions and the .yml workflow files, and to understand the process I would like to echo some environment variables, such as ${{ github.repository }}, ${{ github.repository_owner }}, or even secrets like ${{ secrets.GITHUB_TOKEN }}, but in the output I'm getting ***.
Is there any way to force the output to show the actual values instead of the asterisks?
dev.yml
name: Dev
on:
  workflow_dispatch:
  push:
    branches:
      - dev
env:
  BUILD_TYPE: core
  DEFAULT_PYTHON: 3.8
jobs:
  any_name:
    runs-on: ubuntu-latest
    steps:
      - name: Any Name Bash Test Step
        shell: bash
        run: |
          echo "GH_REPO: $GH_REPO"
          echo "GH_REPO_O: $GH_REPO_O"
          echo "GH_T: $GH_T"
        env:
          GH_REPO: ${{ github.repository }}
          GH_REPO_O: ${{ github.repository_owner }}
          GH_T: ${{ secrets.GITHUB_TOKEN }}
output
Run echo "GH_REPO: $GH_REPO"
echo "GH_REPO_O: $GH_REPO_O"
echo "GH_T: $GH_T"
shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
env:
BUILD_TYPE: core
DEFAULT_PYTHON: 3.8
GH_REPO: ***/core
GH_REPO_O: ***
GH_T: ***
GH_REPO: ***/core
GH_REPO_O: ***
GH_T: ***
name: This is an example
on: [pull_request]
jobs:
  one:
    runs-on: ubuntu-latest
    steps:
      - name: Dump GitHub context
        env:
          GITHUB_CONTEXT: ${{ toJson(github) }}
        run: echo "$GITHUB_CONTEXT"
      - name: Dump job context
        env:
          JOB_CONTEXT: ${{ toJson(job) }}
        run: echo "$JOB_CONTEXT"
      - name: Dump steps context
        env:
          STEPS_CONTEXT: ${{ toJson(steps) }}
        run: echo "$STEPS_CONTEXT"
      - name: Dump runner context
        env:
          RUNNER_CONTEXT: ${{ toJson(runner) }}
        run: echo "$RUNNER_CONTEXT"
      - name: Dump strategy context
        env:
          STRATEGY_CONTEXT: ${{ toJson(strategy) }}
        run: echo "$STRATEGY_CONTEXT"
      - name: Dump matrix context
        env:
          MATRIX_CONTEXT: ${{ toJson(matrix) }}
        run: echo "$MATRIX_CONTEXT"
      - name: Show default environment variables
        run: |
          echo "The job_id is: $GITHUB_JOB" # reference the default environment variables
          echo "The id of this action is: $GITHUB_ACTION" # reference the default environment variables
          echo "The run id is: $GITHUB_RUN_ID"
          echo "The GitHub Actor's username is: $GITHUB_ACTOR"
          echo "GitHub SHA: $GITHUB_SHA"
You can't show secrets through echo, otherwise there would be a huge security problem (even when using env variables as an intermediary).
However, this does work for the other variables you used; the problem in your case seems to be related to the syntax. You can echo them directly in a run step with echo "${{ github.repository }}" and echo "${{ github.repository_owner }}" to see them in your workflow output.
Tip: You can identify most of the variables that can be shown with echo through the GitHub context, using run: echo "$GITHUB_CONTEXT" as in the example workflow above.
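For the non-secret values, a minimal sketch of a step that echoes them directly (the step name here is made up for illustration):
- name: Show repository info
  run: |
    echo "Repository: ${{ github.repository }}"
    echo "Owner: ${{ github.repository_owner }}"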
If variables are printed as *** (mostly secret variables), you can use a script that puts the result in a file and uploads the file as an artifact, like this:
name: "Save secrets variables"
on: [push, pull_request]
jobs:
one:
runs-on: ubuntu-latest
steps:
- name: "Echo in file"
env:
SECRETS_VARS: ${{ toJson(secrets) }}
run: echo "$SECRETS_VARS" > "secrets.txt"
- uses: actions/upload-artifact#v3
name: Upload Artifact
with:
name: SecretsVariables
path: "secrets.txt"

Dynamically retrieve GitHub Actions secret

I'm trying to dynamically pull back a GitHub secret using GitHub Actions at runtime:
Let's say I have two GitHub Secrets:
SECRET_ORANGES : "This is an orange secret"
SECRET_APPLES : "This is an apple secret"
In my GitHub Action, I have another env variable which will differ between branches
env:
  FRUIT_NAME: APPLES
Essentially I want to find a way to do some sort of variable substitution to get the correct secret. So in one of my child jobs, I want to do something like:
env:
  FRUIT_SECRET: {{ 'SECRET_' + env.FRUIT_NAME }}
I've tried the following approaches with no luck:
secrets['SECRET_$FRUIT_NAME'] }}
I even tried a simpler approach without concatenation just to try and get it working
secrets['$FRUIT_NAME'] }}
and
{{ secrets.$FRUIT_NAME }}
None of the above worked.
Apologies if I have not explained this very well. I tried to keep my example as simple as possible.
Anyone have any idea of how to achieve this?
Alternatively, what I am trying to do is to store secrets on a per-branch basis
For example:
In customer1 code branch:
SECRET_CREDENTIAL="abc123"
In customer2 code branch:
SECRET_CREDENTIAL="def456"
Then I can access the correct value for SECRET_CREDENTIAL depending on which branch I am in.
Thanks!
Update: I'm getting a bit closer to what I am trying to achieve:
name: Test
env:
  CUSTOMER: CUSTOMER1
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ env.CUSTOMER }}_AWS_ACCESS_KEY_ID
    steps:
      - uses: actions/checkout@v2
      - run: |
          AWS_ACCESS_KEY_ID=${{ secrets[env.AWS_ACCESS_KEY_ID] }}
          echo "AWS_ACCESS_KEY_ID = $AWS_ACCESS_KEY_ID"
There is a much cleaner option to achieve this, using the format function.
Given that the secrets DEV_A and TEST_A are set, the following two jobs will use those two secrets:
name: Secrets
on: [push]
jobs:
  dev:
    name: dev
    runs-on: ubuntu-18.04
    env:
      ENVIRONMENT: DEV
    steps:
      - run: echo ${{ secrets[format('{0}_A', env.ENVIRONMENT)] }}
  test:
    name: test
    runs-on: ubuntu-18.04
    env:
      ENVIRONMENT: TEST
    steps:
      - run: echo ${{ secrets[format('{0}_A', env.ENVIRONMENT)] }}
This also works with input provided through manual workflows (the workflow_dispatch event):
name: Secrets
on:
  workflow_dispatch:
    inputs:
      env:
        description: "Environment to deploy to"
        required: true
jobs:
  secrets:
    name: secrets
    runs-on: ubuntu-18.04
    steps:
      - run: echo ${{ secrets[format('{0}_A', github.event.inputs.env)] }}
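As a usage note (an addition, not part of the original answer), a run of that workflow_dispatch workflow can be triggered from the GitHub CLI, assuming the file is saved as .github/workflows/secrets.yml:
gh workflow run secrets.yml -f env=DEV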
Update - July 2021
I found a better way to prepare dynamic secrets in a job, and then consume those secrets as environment variables in other jobs.
Here's how it looks in GitHub Actions.
My assumption is that each secret should be fetched according to the branch name. I'm getting the branch's name with this action rlespinasse/github-slug-action.
Go through the inline comments to understand how it all works together.
name: Dynamic Secret Names
# Assumption:
# You've created the following GitHub secrets in your repository:
#   AWS_ACCESS_KEY_ID_master
#   AWS_SECRET_ACCESS_KEY_master
on:
  push:
env:
  AWS_REGION: "eu-west-1"
jobs:
  prepare:
    name: Prepare
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v2
      - name: Inject slug/short variables
        uses: rlespinasse/github-slug-action@v3.x
      - name: Prepare Outputs
        id: prepare-step
        # Sets this step's outputs, which later on will be exported as the job's outputs
        run: |
          echo "::set-output name=aws_access_key_id_name::AWS_ACCESS_KEY_ID_${GITHUB_REF_SLUG}";
          echo "::set-output name=aws_secret_access_key_name::AWS_SECRET_ACCESS_KEY_${GITHUB_REF_SLUG}";
    # Sets this job's outputs, which will be consumed by other jobs
    # https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idoutputs
    outputs:
      aws_access_key_id_name: ${{ steps.prepare-step.outputs.aws_access_key_id_name }}
      aws_secret_access_key_name: ${{ steps.prepare-step.outputs.aws_secret_access_key_name }}
  test:
    name: Test
    # Must wait for `prepare` to complete so it can use `${{ needs.prepare.outputs.{output_name} }}`
    # https://docs.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions#needs-context
    needs:
      - prepare
    runs-on: ubuntu-20.04
    env:
      # Get secret names
      AWS_ACCESS_KEY_ID_NAME: ${{ needs.prepare.outputs.aws_access_key_id_name }}
      AWS_SECRET_ACCESS_KEY_NAME: ${{ needs.prepare.outputs.aws_secret_access_key_name }}
    steps:
      - uses: actions/checkout@v2
      - name: Test Application
        env:
          # Inject secret values to environment variables
          AWS_ACCESS_KEY_ID: ${{ secrets[env.AWS_ACCESS_KEY_ID_NAME] }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets[env.AWS_SECRET_ACCESS_KEY_NAME] }}
        run: |
          printenv | grep AWS_
          aws s3 ls
Update - August 2020
Following some hands-on experience with the terraform-monorepo project, here's an example of how I managed to use secret names dynamically.
Secret names are aligned with environment names and branch names: development, staging and production.
$GITHUB_REF_SLUG comes from the Slug GitHub Action, which fetches the name of the branch.
The commands which perform the parsing are:
- name: set-aws-credentials
  run: |
    echo "::set-env name=AWS_ACCESS_KEY_ID_SECRET_NAME::AWS_ACCESS_KEY_ID_${GITHUB_REF_SLUG}"
    echo "::set-env name=AWS_SECRET_ACCESS_KEY_SECRET_NAME::AWS_SECRET_ACCESS_KEY_${GITHUB_REF_SLUG}"
- name: terraform-apply
  run: |
    export AWS_ACCESS_KEY_ID=${{ secrets[env.AWS_ACCESS_KEY_ID_SECRET_NAME] }}
    export AWS_SECRET_ACCESS_KEY=${{ secrets[env.AWS_SECRET_ACCESS_KEY_SECRET_NAME] }}
Full example
name: pipeline
on:
  push:
    branches: [development, staging, production]
    paths-ignore:
      - "README.md"
jobs:
  terraform:
    runs-on: ubuntu-latest
    env:
      ### -----------------------
      ### Available in all steps, change app_name to your app_name
      TF_VAR_app_name: tfmonorepo
      ### -----------------------
    steps:
      - uses: actions/checkout@v2
      - name: Inject slug/short variables
        uses: rlespinasse/github-slug-action@v2.x
      - name: prepare-files-folders
        run: |
          mkdir -p ${GITHUB_REF_SLUG}/
          cp live/*.${GITHUB_REF_SLUG} ${GITHUB_REF_SLUG}/
          cp live/*.tf ${GITHUB_REF_SLUG}/
          cp live/*.tpl ${GITHUB_REF_SLUG}/ 2>/dev/null || true
          mv ${GITHUB_REF_SLUG}/backend.tf.${GITHUB_REF_SLUG} ${GITHUB_REF_SLUG}/backend.tf
      - name: install-terraform
        uses: little-core-labs/install-terraform@v1
        with:
          version: 0.12.28
      - name: set-aws-credentials
        run: |
          echo "::set-env name=AWS_ACCESS_KEY_ID_SECRET_NAME::AWS_ACCESS_KEY_ID_${GITHUB_REF_SLUG}"
          echo "::set-env name=AWS_SECRET_ACCESS_KEY_SECRET_NAME::AWS_SECRET_ACCESS_KEY_${GITHUB_REF_SLUG}"
      - name: terraform-apply
        run: |
          export AWS_ACCESS_KEY_ID=${{ secrets[env.AWS_ACCESS_KEY_ID_SECRET_NAME] }}
          export AWS_SECRET_ACCESS_KEY=${{ secrets[env.AWS_SECRET_ACCESS_KEY_SECRET_NAME] }}
          cd ${GITHUB_REF_SLUG}/
          terraform version
          rm -rf .terraform
          terraform init -input=false
          terraform get
          terraform validate
          terraform plan -out=plan.tfout -var environment=${GITHUB_REF_SLUG}
          terraform apply -auto-approve plan.tfout
          rm -rf .terraform
After reading Context and expression syntax for GitHub Actions, focusing on the env object, I found out that:
As part of an expression, you may access context information using one of two syntaxes.
Index syntax: github['sha']
Property dereference syntax: github.sha
So the same behavior applies to secrets: you can do secrets[secret_name], which means the following works:
- name: Run a multi-line script
  env:
    SECRET_NAME: A_FRUIT_NAME
  run: |
    echo "SECRET_NAME = $SECRET_NAME"
    echo "SECRET_NAME = ${{ env.SECRET_NAME }}"
    SECRET_VALUE=${{ secrets[env.SECRET_NAME] }}
    echo "SECRET_VALUE = $SECRET_VALUE"
Which results in
SECRET_NAME = A_FRUIT_NAME
SECRET_NAME = A_FRUIT_NAME
SECRET_VALUE = ***
Since the SECRET_VALUE is redacted, we can assume that the real secret was fetched.
Things that I learned:
You can't reference env from another env, so this won't work (a workaround is sketched right after this answer):
env:
  SECRET_PREFIX: A
  SECRET_NAME: ${{ env.SECRET_PREFIX }}_FRUIT_NAME
The result of SECRET_NAME is _FRUIT_NAME, not good.
You can use context expressions in your code, not only in env; you can see that in SECRET_VALUE=${{ secrets[env.SECRET_NAME] }}, which is cool.
And of course, here's the workflow that I tested: https://github.com/unfor19/gha-play/runs/595345435?check_suite_focus=true - check the Run a multi-line script step.
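A minimal sketch of that workaround (an addition, not part of the original answer): compose the name in a shell step, append it to $GITHUB_ENV, and read it back through the env context in a later step:
- name: Compose secret name
  env:
    SECRET_PREFIX: A
  run: echo "SECRET_NAME=${SECRET_PREFIX}_FRUIT_NAME" >> $GITHUB_ENV
- name: Use the composed name
  run: echo "SECRET_VALUE = ${{ secrets[env.SECRET_NAME] }}"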
In case this can help: after reading the above answers, which truly helped, the strategy I decided to use consists of storing my secrets as follows:
DB_USER_MASTER
DB_PASSWORD_MASTER
DB_USER_TEST
DB_PASSWORD_TEST
where MASTER is the master branch for the prod environment and TEST is the test branch for the test environment.
Then, using the solutions suggested in this thread, the key is to dynamically generate the keys of the secrets variable. Those keys are generated via an intermediate step (called vars in the sample below) using outputs:
name: Pulumi up
on:
  push:
    branches:
      - master
      - test
jobs:
  up:
    name: Update
    runs-on: ubuntu-latest
    steps:
      - name: Create variables
        id: vars
        run: |
          branch=${GITHUB_REF##*/}
          echo "::set-output name=DB_USER::DB_USER_${branch^^}"
          echo "::set-output name=DB_PASSWORD::DB_PASSWORD_${branch^^}"
      - uses: actions/checkout@v2
        with:
          fetch-depth: 1
      - uses: docker://pulumi/actions
        with:
          args: up -s ${GITHUB_REF##*/} -y
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          GOOGLE_CREDENTIALS: ${{ secrets.GOOGLE_CREDENTIALS }}
          PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
          DB_USER: ${{ secrets[steps.vars.outputs.DB_USER] }}
          DB_PASSWORD: ${{ secrets[steps.vars.outputs.DB_PASSWORD] }}
Notice the hack to get the branch name in uppercase: ${branch^^}. This is required because GitHub forces secret names to uppercase.
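For illustration (an addition, not part of the original answer), ${branch^^} is plain bash parameter expansion that uppercases the whole value:
branch=test
echo "DB_USER_${branch^^}"   # prints DB_USER_TEST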
I was able to achieve this using the workflow name as the branch specific variable.
For each branch I create, I simply update this single value at the top of the YML file, then add GitHub Secrets to match the workflow name:
name: CUSTOMER1
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ github.workflow }}_AWS_ACCESS_KEY_ID
    steps:
      - uses: actions/checkout@v2
      - run: echo "::set-env name=AWS_ACCESS_KEY_ID::${{ secrets[env.AWS_ACCESS_KEY_ID] }}"
      - run: echo $AWS_ACCESS_KEY_ID
Don't use ::set-env, it is deprecated. Use this instead:
echo "env_key=env_value" >> $GITHUB_ENV
You can set the env variable on a per-branch basis by setting env as in this example.
Suppose you have at least two secrets with different prefixes in your repository, like this: DEV_SERVER_IP, OTHER_SERVER_IP.
I use 'format' and '$GITHUB_ENV', which are a function and a workflow command provided by GitHub.
- name: Set develop env
  if: ${{ github.ref == 'refs/heads/develop' }}
  run: echo "branch_name=DEVELOP" >> $GITHUB_ENV
- name: Set other env
  if: ${{ github.ref == 'refs/heads/other' }}
  run: echo "branch_name=OTHER" >> $GITHUB_ENV
- name: SSH Test
  env:
    SERVER_IP: ${{ secrets[format('{0}_SERVER_IP', env.branch_name)] }}
  run: ssh -T user@$SERVER_IP
New solution as of December 2020
If you are reading this question because you need to use different secret values based on the environment you are deploying to, GitHub Actions now has a new feature called "Environments" in beta: https://docs.github.com/en/free-pro-team@latest/actions/reference/environments
This allows us to define environment secrets and allow only jobs that are assigned to the environment to access them. This not only leads to a better developer experience, but also to better security and isolation of different deployment jobs.
Below is an example for how to dynamically determine the environment that should be used, based on the branch name:
jobs:
  get-environment-name:
    name: "Extract environment name"
    runs-on: ubuntu-latest
    outputs:
      environment: ${{ steps.extract.outputs.environment }}
    steps:
      - id: extract
        # You can run any logic you want here to map refs to environment names.
        # The GITHUB_REF will look like this: refs/heads/my-branchname
        # The example logic here simply removes "refs/heads/deploy-" from the beginning,
        # so a branch name deploy-prod would be mapped to the environment "prod"
        run: echo "::set-output name=environment::$(echo $GITHUB_REF | sed -e '/^refs\/heads\/deploy-\(.*\)$/!d;s//\1/')"
      - env:
          EXTRACTED: ${{ steps.extract.outputs.environment }}
        run: 'echo "Extracted environment name: $EXTRACTED"'
  deploy:
    name: "Deploy"
    if: ${{ github.event_name == 'push' && needs.get-environment-name.outputs.environment }}
    needs:
      - get-environment-name
      # - unit-tests
      # - frontend-tests
      # ... add your unit test jobs here, so they are executed before deploying anything
    environment: ${{ needs.get-environment-name.outputs.environment }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # ... Run your deployment actions here, with full access to the environment's secrets
Note that in the if: clause of the deployment job it's not possible to use environment variables or bash scripts, so using a previous job that extracts the environment name from the branch name is the simplest approach I could come up with at the current time.
I came across this question when trying to implement environment-based secret selection for a GitHub action.
This variable-mapper action (https://github.com/marketplace/actions/variable-mapper) implements the desired concept of mapping a key variable or an environment name to secrets or other pre-defined values.
The example use of it shows this:
on: [push]
name: Export variables corresponding to regular expression-matched keys
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: kanga333/variable-mapper@v1
        with:
          key: ${{GITHUB_REF#refs/heads/}}
          map: |
            {
              "master": {
                "environment": "production",
                "AWS_ACCESS_KEY_ID": ${{ secrets.PROD_AWS_ACCESS_KEY_ID }},
                "AWS_SECRET_ACCESS_KEY": ${{ secrets.PROD_AWS_SECRET_ACCESS_KEY }}
              },
              "staging": {
                "environment": "staging",
                "AWS_ACCESS_KEY_ID": ${{ secrets.STG_AWS_ACCESS_KEY_ID }},
                "AWS_SECRET_ACCESS_KEY": ${{ secrets.STG_AWS_SECRET_ACCESS_KEY }}
              },
              ".*": {
                "environment": "development",
                "AWS_ACCESS_KEY_ID": ${{ secrets.DEV_AWS_ACCESS_KEY_ID }},
                "AWS_SECRET_ACCESS_KEY": ${{ secrets.DEV_AWS_SECRET_ACCESS_KEY }}
              }
            }
      - name: Echo environment
        run: echo ${{ env.environment }}