GitHub Env Choice

Maybe someone is able to help. I have an example below where you can choose the env you want to use. But what if I have, for example, more than 10 envs, or I want to change them from time to time and don't want to edit .ci.yml every time? Is there an option to not write the env names one by one, but instead just list the env files in a folder?
name: CI
on:
  workflow_dispatch:
    inputs:
      environment:
        type: environment
        description: Select the environment
      boolean:
        type: boolean
        description: True or False
      choice:
        type: choice
        description: Make a choice
        options:
          - foo
          - bar
So, as you can see, I have the foo and bar envs, but I don't want to list each env name here.

Related

How to create a dropdown in a GitHub action job

My job needs to receive 2 input parameters from the user.
Valid values for those parameters are defined by a separate script. These values can change (so I can't hard-code them), and I want to populate the list dynamically when the job is about to be run. The picker should update according to the result of the script, allowing the user to select, and only then should the actual job be run with those values as its input.
The question is how to provide a dropdown like that.
Since Nov. 2021, the input type can actually be a choice list.
GitHub Actions: Input types for manual workflows
You can now specify input types for manually triggered workflows allowing you to provide a better experience to users of your workflow.
In addition to the default string type, we now support choice, boolean, and environment.
name: Mixed inputs
on:
  workflow_dispatch:
    inputs:
      name:
        type: choice
        description: Who to greet
        options:
          - monalisa
          - cschleiden
      message:
        required: true
      use-emoji:
        type: boolean
        description: Include 🎉🤣 emojis
      environment:
        type: environment

jobs:
  greet:
    runs-on: ubuntu-latest
    steps:
      - name: Send greeting
        run: echo "${{ github.event.inputs.message }} ${{ fromJSON('["", "🥳"]')[github.event.inputs.use-emoji == 'true'] }} ${{ github.event.inputs.name }}"
That would provide a dropdown.
The question remains: can you pass a list of choices as a variable to your input choice field?
You should be able to, if you can populate the inputs through another job (one that computes your list) and then call your choice job as a reusable workflow, passing the list through jobs.<job_id>.with / jobs.<job_id>.with.<input_id>.
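As a rough illustration of that idea (a sketch only: the file names, job names, and outputs below are assumed, not taken from the question), a caller workflow could compute a value in one job and hand it to a reusable workflow via with::

# caller.yml, a minimal sketch; deploy.yml is a hypothetical reusable workflow
name: Caller
on: workflow_dispatch
jobs:
  compute:
    runs-on: ubuntu-latest
    outputs:
      env-name: ${{ steps.pick.outputs.env-name }}
    steps:
      - id: pick
        # hypothetical script that decides which environment to use
        run: echo "env-name=dev" >> "$GITHUB_OUTPUT"
  deploy:
    needs: compute
    uses: ./.github/workflows/deploy.yml   # must declare `on: workflow_call` with an `environment` input
    with:
      environment: ${{ needs.compute.outputs.env-name }}

Note that this passes a computed value to a called workflow; it does not make the workflow_dispatch dropdown itself dynamic.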

How to have workflow specific environment in a GitHub workflow yml file

I am creating a .github/workflows/<workflow>.yml and am struggling with the environment.
From https://help.github.com/en/actions/automating-your-workflow-with-github-actions/workflow-syntax-for-github-actions#env
A map of environment variables that are available to all jobs and steps in the workflow. You can also set environment variables that are only available to a job or step. For more information, see jobs.<job_id>.env and jobs.<job_id>.steps.env.
When more than one environment variable is defined with the same name, GitHub uses the most specific environment variable. For example, an environment variable defined in a step will override job and workflow variables with the same name, while the step executes. A variable defined for a job will override a workflow variable with the same name, while the job executes.
From https://help.github.com/en/actions/automating-your-workflow-with-github-actions/using-environment-variables
To set custom environment variables, you need to specify the variables in the workflow file. You can define environment variables for a step, job, or entire workflow using the jobs.<job_id>.steps.env, jobs.<job_id>.env, and env keywords. For more information, see "Workflow syntax for GitHub."
How do I set up environment variables for the entire workflow (multiple jobs)?
on: push

env:
  MY_ENV: value

jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - run: echo "MY_ENV_1 = $MY_ENV"
  job2:
    runs-on: ubuntu-latest
    steps:
      - run: echo "MY_ENV_2 = $MY_ENV"
I'm 100% sure this one works, and 95% sure that you're getting an "invalid workflow" error due to a mistake in another part of the workflow or some minor syntax error (a missing space, = instead of : when declaring a variable, a value that starts with a non-alphanumeric character and isn't inside '', etc.).
The error report page is currently broken (at least for me): the whole message is on a single line, with no way to see more than just the beginning. If that's the case for you as well, use 'inspect element' in your browser so you can see it in all its glory.
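To illustrate the precedence rules quoted in the question (a step-level variable overrides job- and workflow-level ones of the same name while the step runs), here is a minimal sketch; the variable names are made up:

on: push

env:
  GREETING: from-workflow

jobs:
  show:
    runs-on: ubuntu-latest
    env:
      GREETING: from-job          # overrides the workflow-level value for this job
    steps:
      - run: echo "$GREETING"     # prints "from-job"
      - run: echo "$GREETING"     # prints "from-step"
        env:
          GREETING: from-step     # overrides the job-level value for this step only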
How do I set up environment variables for the entire workflow (multiple jobs)?
Well, you can't with the keyword env if you want to set up the environment once. Otherwise, you set the env in each job, like this:
jobs:
  build:
    runs-on: ubuntu-18.04
    env:
      FOO: foo
      BAR: bar
    steps:
      - uses: actions/checkout@v1
      - name: Do something
        run: echo "do something"
  test:
    runs-on: ubuntu-18.04
    env:
      FOO: foo
      BAR: bar
    steps:
      - uses: actions/checkout@v1
      - name: Do something else
        run: echo "do something else"
But it depends on whether that's what you really want to do. An MRE would be appreciated, if it's not what you need, as jonrsharpe said.

Using variables expansion to load a template variables file per environment

I'm attempting to create multiple pipelines in Azure DevOps but I would like to reuse the same pipeline YAML file with the differences per environment being loaded from a separate template variables file.
For that purpose I've created two variable files, which are located in the same folder as the pipeline definition:
# vars.dev.yml
variables:
  - name: EnvironmentName
    value: Development

# vars.prd.yml
variables:
  - name: EnvironmentName
    value: Production
And the definition of the pipeline is the following:
trigger: none
pr: none

variables:
  - name: EnvironmentCode
    value: dev
  - name: EnvironmentFileName
    value: vars.$('EnvironmentCode').yml

stages:
  - stage: LoadVariablesPerEnvironment
    displayName: Load Variables Per Environment
    variables:
      - template: $(EnvironmentFileName)
    jobs:
      - job: ShowcaseLoadedVariables
        steps:
          - pwsh: Write-Host "Variables have been loaded for the '$ENV:ENVIRONMENTNAME' environment"
            displayName: Output Environment Variables
After importing the pipelines using the Azure DevOps UI, I can go to the settings of each one and set the Environment Code variable to whatever environment code I want:
However, I always get the same error when I try to run the pipeline, regardless of the code I fill in as the variable value:
So the question here is: Is this kind of variable expansion not supported or is there a different way that I should use to accomplish this?
Thanks!
EDIT
I was able to expand the variables using another method. The new version of the pipeline is as such:
variables:
  - name: EnvironmentCode
    value: dev
  - name: EnvironmentFileName
    value: vars.${{ variables.EnvironmentCode }}.yml

stages:
  - stage: LoadVariablesPerEnvironment
    displayName: Load Variables Per Environment
    variables:
      - template: ${{ variables.EnvironmentFileName }}
    jobs:
      - job: ShowcaseLoadedVariables
        steps:
          - pwsh: Write-Host "Variables have been loaded for the '$ENV:ENVIRONMENTNAME' environment"
            displayName: Output Environment Variables
However, there is still the issue of loading different files. I made several attempts and verified the following:
- If you give a different environment code using the UI when running the pipeline, the value it assumes is still the one in the pipeline definition;
- If you remove the default value, or the variable entirely, from the pipeline definition, the expression ${{ variables.EnvironmentCode }} returns an empty string, so the filename is assumed to be vars..yml, which doesn't exist.
Is this kind of variable expansion not supported, or is there a different way I should use to accomplish this?
If I am not misunderstanding: at first you wanted to use $() to get the variable you defined in the UI, but that failed. Later, ${{ }} could give you the value of the variable EnvironmentCode.
In fact, when you switched to ${{ }}, it was just accessing the variable you predefined in the YAML file instead of the one you defined in the UI. See this doc: Variable templates.
A variable defined in the UI can be retrieved and used with the $() format (note: ${{ }} is the format for getting variables defined in the YAML file). One thing to pay attention to is that variables defined in the UI can only be accessed after the build begins to run, because a UI-defined variable only exists in the environment after compilation, once the build has started. In short, they are agent-scoped variables. That's why the value used is still the one from the pipeline definition rather than the one from the UI.
If you remove from the pipeline definition the default value or the variable entirely, the expression ${{ variables.EnvironmentCode }} will return an empty string, assuming the filename to be vars..yml, which doesn't exist.
As the documentation states and I mentioned before, ${{ }} is the format used to get the value of a variable defined in the YAML file, rather than one defined in the UI.
In a job's steps, variables defined in the UI or in the YAML file can all be accessed with the $() format. A variable defined in the YAML file can also be accessed with ${{ variables.xxxx }}. But in that case, if a variable name defined in the YAML file is the same as one defined in the UI, the server can only get the one defined in the YAML file.
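To make the compile-time vs. queue-time distinction concrete, one pattern worth sketching (not part of the answer above, and with file names assumed from the question) is a runtime parameter: it is chosen at queue time yet is still available when ${{ }} template expressions are expanded, so it can drive the template path.

trigger: none
pr: none

parameters:
  - name: environmentCode
    displayName: Environment Code
    type: string
    default: dev
    values:
      - dev
      - prd

stages:
  - stage: LoadVariablesPerEnvironment
    displayName: Load Variables Per Environment
    variables:
      - template: vars.${{ parameters.environmentCode }}.yml   # resolves to vars.dev.yml or vars.prd.yml
    jobs:
      - job: ShowcaseLoadedVariables
        steps:
          - pwsh: Write-Host "Variables have been loaded for the '$ENV:ENVIRONMENTNAME' environment"
            displayName: Output Environment Variables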

Concourse CI, git tag with constant value

I would like to tag my git commits with the name of the environment as they are deployed to the various environments in my Concourse pipeline. For example, in my UAT deployment job, I would like to do something like:
- put: master-resource        # a git resource
  params:
    repository: master        # the resource's local directory
    tag: 'uat'
    force: true               # replace the tag, if it already exists
    tag_only: true
This would seem like a common, or at least simple, thing to do; however, the value of the 'tag' parameter can only be the path to a file. There is no option to pass a constant/literal value.
I see two possible solutions, but neither of them seems 'simple' enough:
Create a file myself, but to do that (ideally?) I wish there were some kind of file resource that I could use to create the file.
The last alternative would be to create a custom task, and even there I was struggling to find a way to pass the name of the tag as a parameter.
Any suggestions on what would be the best way to accomplish my goal in the simplest way, or alternatively how to implement options 1 or 2?
Thanks!
The reason that tag takes a file is so that you can dynamically set the tag of the commit based on information you derive during the course of the pipeline.
So, the best way I can see to do something like this would be workflow #2 that you described above.
So you would want something like this:
- task: generate-git-tag
  params:
    TAG: {{some-passed-in-tag}}
  config:
    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: ruby
    outputs:
      - name: tag-file
    params:
      TAG:
    run:
      path: /bin/bash
      args:
        - -c
        - |
          echo "${TAG}" >> tag-file/tag.txt

- put: master-resource        # a git resource
  params:
    repository: master        # the resource's local directory
    tag: tag-file/tag.txt
    force: true               # replace the tag, if it already exists
    tag_only: true
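Wired into a job, that could look roughly like this (a sketch only: the job name is made up, the resource names are taken from the question, and the tag value is supplied however your pipeline is configured):

jobs:
  - name: deploy-uat                     # hypothetical job name
    plan:
      - get: master-resource             # the git resource to tag
      - task: generate-git-tag
        params:
          TAG: uat                       # or a value passed in when setting the pipeline
        config:
          platform: linux
          image_resource:
            type: docker-image
            source:
              repository: ruby
          params:
            TAG:                         # declared here, value supplied by the step above
          outputs:
            - name: tag-file
          run:
            path: /bin/bash
            args:
              - -c
              - echo "${TAG}" > tag-file/tag.txt
      - put: master-resource
        params:
          repository: master-resource    # the directory created by the get step above
          tag: tag-file/tag.txt
          force: true
          tag_only: true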

Concourse call job from another job with parameters

I have a job with many tasks like this:
- name: main-job
  serial: true
  plan:
    - aggregate:
        - get: <git-resource>
          passed: [previous-job]
          trigger: true
        - get: <git-resource-3>
    - task: <task-1>
      file: <git-resource>/<path>/<task-1-no-db>.yml
    - task: <task-2>
      tags: ['<specific-tag>']
      file: <git-resource>/<path>/<task-1>.yml
      params:
        DATABASE_HOST: <file>
        DATABASE: <my-db-1>
    - task: <task-2>
      tags: ['<specific-tag>']
      file: <git-resource>/<path>/<task-1>.yml
      params:
        DATABASE_HOST: <file>
        DATABASE: <my-db-1>
The problem for me is that I have to call literally the same job, but instead of the DATABASE param being my-db-1, I want it to be my-db-2.
The only way I am able to do this is by adding a new job and passing the params, literally copying the entire set of lines. My job is too fat, as in it has too many tasks in it, so although copying it is the obvious solution, I am wondering if there's a way to reuse it: either by having multiple pipelines and one main pipeline that essentially calls them with the DATABASE param passed in, or by having two small jobs that call this main job with different params, something like this:
- name: <call-main-job-with-db-1>
  serial: true
  plan:
    - aggregate:
        - get: <git-resource>
          passed: [previous-job]
          trigger: true
    - task: <call-main-job-task>
      params:
        DATABASE_HOST: <file>
        DATABASE: <my-db-1>

- name: <call-main-job-with-db-2>
  serial: true
  plan:
    - aggregate:
        - get: <git-resource>
          passed: [previous-job]
          trigger: true
    - task: <call-main-job-task>
      params:
        DATABASE: <my-db-2>
I am not sure if this is even possible since I didn't find any example of this.
Remember you are using YAML, so you can use YAML features like "Anchors".
You will find some additional information about "Anchors" in this link. Look for "EXTRA YAML FEATURES".
YAML also has a handy feature called 'anchors', which let you easily duplicate content across your document. Both of these keys will have the same value:
anchored_content: &anchor_name This string will appear as the value of two keys.
other_anchor: *anchor_name
# Anchors can be used to duplicate/inherit properties
base: &base
  name: Everyone has same name

foo: &foo
  <<: *base
  age: 10

bar: &bar
  <<: *base
  age: 20
Try this for your Concourse Pipeline:
common:
  db_common: &db_common
    serial: true
    plan:
      - aggregate:
          - get: <git-resource>
            passed: [previous-job]
            trigger: true
      - task: <call-main-job-task>
        params:

jobs:
  - name: <call-main-job-with-db-1>
    <<: *db_common
    DATABASE_HOST: <file>
    DATABASE: <my-db-1>
  - name: <call-main-job-with-db-2>
    <<: *db_common
    DATABASE: <my-db-2>
NOTE: Remember that you can have as many Anchors as you want, you can define two or more anchors for the same Job/Task/Resource, etc.
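As a variant on that idea, since you can define as many anchors as you like, one arrangement keeps the shared job keys in one anchor and the database params in another, so the params stay inside the task where Concourse expects them (a sketch only, reusing the placeholder names from the question):

common:
  job_defaults: &job_defaults
    serial: true
  db_params: &db_params
    DATABASE_HOST: <file>
    DATABASE: <my-db-1>

jobs:
  - name: <call-main-job-with-db-1>
    <<: *job_defaults
    plan:
      - aggregate:
          - get: <git-resource>
            passed: [previous-job]
            trigger: true
      - task: <call-main-job-task>
        params: *db_params               # reuse the anchored params as-is
  - name: <call-main-job-with-db-2>
    <<: *job_defaults
    plan:
      - aggregate:
          - get: <git-resource>
            passed: [previous-job]
            trigger: true
      - task: <call-main-job-task>
        params:
          <<: *db_params
          DATABASE: <my-db-2>            # override only the database name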
You just need to copy and paste the task as you do in the question description. Concourse expects expressive YAML; there is no branching or logic allowed. If you don't want to copy and paste so much YAML, you can do some YAML-generation magic to simplify what you look at and work with, but Concourse will want the full YAML with each job defined separately.
Concourse has a fan-in/fan-out paradigm, where you want to keep the jobs simple and short. Use a scripting language, e.g. Python or Ruby, to make your pipeline creation more flexible.
Personally, I use one pipeline.yml.erb file where I render different job templates. I try to keep my job.yml.erb files as generic as possible so I can reuse them for different pipelines.
To take it to the next level, you could specify a meta config.yml and use that config inside your templates to generate your pipeline depending on what you specified in the config.