Azure Pipelines parameter value from variable template - azure-devops

We would like to deploy components of our application to developer's local machines and want it to be easy enough for our co-workers to use and easy enough for us to maintain. These are virtual machines with a certain naming convention, for instance: VM001, VM002, and so on.
I can define these machines, and use the value later on in the pipeline, in a parameter in YAML like this:
parameters:
- name: stage
  displayName: Stage
  type: string
  values:
  - VM001
  - VM002
  # and so on...
I then only have to maintain one stage, because the only thing that really differs is the stage name:
stages:
- stage: ${{ parameters.stage }}
  displayName: Deploy on ${{ parameters.stage }}
  jobs:
  ...
The idea behind defining the machines in the parameters like this is that developers can choose their virtual machine from the 'Stage' dropdown when they want to deploy to their own virtual machine. By setting the value of the parameter to the virtual machine, the stage is named and the correct library groups will also be linked up to the deployment (each developer has their own library groups where we store variables such as accounts and secrets).
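For illustration, here's a minimal sketch of how a stage could link the developer's own library group by deriving the group name from the parameter (the -Secrets naming convention here is an assumption, not from the original setup):
stages:
- stage: ${{ parameters.stage }}
  displayName: Deploy on ${{ parameters.stage }}
  variables:
  # Hypothetical convention: each developer owns a variable group named <VM>-Secrets
  - group: ${{ parameters.stage }}-Secrets
  jobs:
  - job: Deploy
    steps:
    # Variables from the linked group are available to the steps as usual
    - script: echo "Deploying to ${{ parameters.stage }}"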
However, we have multiple components that we deploy through multiple pipelines. So each component gets its own YAML pipeline and for each pipeline we will have to enter and maintain the same list of virtual machines.
We already use variable and job templates for reusability. I want to find a way to create a template with the list of machines and pass it to the parameter value. This way, we only need to maintain one template so whenever someone new joins the team or someone leaves, we only need to update one file instead of updating all the pipelines.
I've tried to pass the template to the parameter value using an expression like this:
variables:
- name: VirtualMachinesList
  value: VirtualMachinesList.yml

parameters:
- name: stage
  displayName: Stage
  type: string
  values:
  - ${{ variables.VirtualMachinesList }}
The VirtualMachinesList.yml looks like this:
variables:
- name: VM001
  value: VM001
- name: VM002
  value: VM002
# and so on...
This gives the following error when I try to run the pipeline:
A template expression is not allowed in this context
I've also tried changing the parameter type to object. This results in a text field pre-filled with the full list of virtual machines, from which you delete the ones you don't want to deploy to. This isn't very user-friendly and is error-prone, so it's not a desirable solution.
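For reference, the object-typed attempt presumably looked something along these lines (reconstructed here, since the original snippet isn't shown):
parameters:
- name: stages
  displayName: Stages
  type: object
  default:
  - VM001
  - VM002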
Is there a way to pass the list of virtual machines to the parameter value from a single location, so that developers can choose their own virtual machine to deploy to?

I know you want to maintain the list of virtual machines in one place and also keep the ability for developers to choose their VM from the dropdown when deploying. But I am afraid this cannot be done currently: runtime parameters do not support templates yet. You can submit a user voice request regarding this issue.
Currently you can keep only one of the two: either maintain the VMs in one place, or let developers choose their VM from the dropdown.
1. To maintain the virtual machines in one place, you can define a variable template to hold them and have the developer type in the VM to deploy to. See below.
Define an empty runtime parameter for the developer to type into:
parameters:
- name: vm
  type: string
  default: ""
Define the variable template that holds the VMs:
# variables.yml template
variables:
  vm1: vm1
  vm2: vm2
  # ...
Then, in the pipeline, define a variable that refers to the corresponding variable in the template:
variables:
- template: variables.yml
- name: vmname
  value: $[variables.${{ parameters.vm }}]

steps:
- powershell: echo $(vmname)
2. To give developers the convenience of choosing their VM from a dropdown, you have to define the machine parameters in every pipeline.

You're really close. You'll want to update how you're consuming your variable template to:
variables:
- template: variable-template.yml
Here's a working example (assuming both the variable template and consuming pipeline are within the same directory of a repository):
variable-template.yml:
variables:
- name: VM001
  value: VM001
- name: VM002
  value: VM002
example-pipeline.yml:
name: Stackoverflow-Example-Variables

trigger:
- none

variables:
- template: variable-template.yml

stages:
- stage: StageA
  displayName: "Stage A"
  jobs:
  - job: output_message_job
    displayName: "Output Message Job"
    pool:
      vmImage: "ubuntu-latest"
    steps:
    - powershell: |
        Write-Host "Root Variable: $(VM001), $(VM002)"
For reference, here's the MS documentation on variable template usage:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops#variable-reuse

Related

Github actions incorrectly thinks variable is a secret and so does not set outputs

A step in my workflow file will return some IDs of EC2 instances in my AWS account, and then I set these IDs as a GitHub output to be used in other jobs in my workflow file.
I have done this in many workflows, and the step will return something like this:
["i-0d945b001544f2614","i-0b90ba69d37aad78c"]
However, in one workflow file GitHub is masking the IDs because it thinks they are secrets for some reason, so it returns:
["i-***2d571abc6d7d***4ef","i-***186ce12c5cd8e744"]
Therefore I get this error message on the workflow job summary:
Skip output 'instanceIDs' since it may contain secret.
And so the other jobs in my workflow file that rely on this output will fail, as GitHub won't set the output.
I have tried to use base64 as suggested in this post, but I haven't been able to get that to work.
Is there any other workaround?
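For what it's worth, the base64 route normally works by encoding the value in the producing job and decoding it in the consumer, so the masked substrings never appear verbatim in the output value. A sketch, with made-up job and step names and the IDs hard-coded for brevity:
jobs:
  produce:
    runs-on: ubuntu-latest
    outputs:
      ids_b64: ${{ steps.get-ids.outputs.ids_b64 }}
    steps:
      - id: get-ids
        run: |
          IDS='["i-0d945b001544f2614","i-0b90ba69d37aad78c"]'
          # Encode so GitHub's secret masking never sees the raw IDs in the output value
          echo "ids_b64=$(echo "$IDS" | base64 -w0)" >> "$GITHUB_OUTPUT"
  consume:
    needs: produce
    runs-on: ubuntu-latest
    steps:
      - run: |
          # Decode back to the original JSON array
          IDS=$(echo "${{ needs.produce.outputs.ids_b64 }}" | base64 -d)
          echo "Instance IDs: $IDS"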
Recently, GitHub released a new feature - configuration variables in workflows.
Configuration variables allow you to store non-sensitive data as plain-text variables that can be reused across the workflows in your repository or organization.
You can define variables at the Organization, Repository, or Environment level based on your requirements.
These variables are accessible from the workflow via the vars context.
Example:
jobs:
  display-variables:
    runs-on: ${{ vars.RUNNER }}
    steps:
    - name: Use variables
      run: |
        echo "Repository variable : ${{ vars.REPOSITORY_VAR }}"
        echo "Organization variable : ${{ vars.ORGANIZATION_VAR }}"
In this example, we have the following configuration variables: RUNNER, REPOSITORY_VAR, ORGANIZATION_VAR. Unlike repository secrets, the values of these variables won't be masked.
For more details, see Defining configuration variables for multiple workflows.
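These variables can be defined in the repository Settings UI or, if you prefer the command line, with recent versions of the GitHub CLI (a sketch; the variable names match the example above):
# Define repository variables (stored as plain text, not masked)
gh variable set RUNNER --body "ubuntu-latest"
gh variable set REPOSITORY_VAR --body "example-value"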

GitHub Actions: How to dynamically set environment url based on deployment step output?

I found out about a really nice GitHub Actions feature called Environments. Using the appropriate syntax, an environment can also be created inside a GitHub Actions workflow.yml like this:
environment:
  name: test_environment
  url: https://your-apps-url-here.com
As the docs state, that's a valid way to create GitHub Actions environments:
Running a workflow that references an environment that does not exist
will create an environment with the referenced name.
But inside my current GitHub Actions workflow, is there a way to dynamically set the url based on a deployment step's output? I have a dynamic URL resulting from the deployment process to AWS which I can't define up front.
The job workflow docs tell us that there's also a way of using expressions inside the url field:
environment:
  name: test_environment
  url: ${{ steps.step_name.outputs.url_output }}
Now imagine a ci.yml workflow file that uses the AWS CLI to deploy a static website to S3, where we used a tool like Pulumi to dynamically create an S3 Bucket inside our AWS account. We can read the dynamically created S3 url using the command pulumi stack output bucketName. The deploy step inside the ci.yml could then look like this:
- name: Deploy Nuxt.js generated static site to S3 Bucket via AWS CLI
  id: aws-sync
  run: |
    aws s3 sync ../dist/ s3://$(pulumi stack output bucketName) --acl public-read
    echo "::set-output name=s3_url::http://$(pulumi stack output bucketUrl)"
  working-directory: ./deployment
There are 2 crucial points here: first, we use id on the deployment step to define a step name we can reference via step_name inside our environment url. Second, we define a step output using echo "::set-output name=s3_url::http://$(pulumi stack output bucketUrl)". In this example I create a variable s3_url. You could replace pulumi stack output bucketUrl with any other command or tool that returns your dynamic environment url.
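Note that GitHub has since deprecated the ::set-output workflow command; on current runners the same step would write the output to the GITHUB_OUTPUT environment file instead (same step as above, only the echo changes):
- name: Deploy Nuxt.js generated static site to S3 Bucket via AWS CLI
  id: aws-sync
  run: |
    aws s3 sync ../dist/ s3://$(pulumi stack output bucketName) --acl public-read
    # Replaces the deprecated ::set-output command
    echo "s3_url=http://$(pulumi stack output bucketUrl)" >> "$GITHUB_OUTPUT"
  working-directory: ./deployment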
Also be sure to add http:// or https:// in order to prevent an error message like this:
Environment URL 'microservice-ui-nuxt-js-hosting-bucket-bc75fce.s3-website.eu-central-1.amazonaws.com' is not a valid http(s) URL, so it will not be shown as a link in the workflow graph.
Now the environment definition at the top of our ci.yml can access the s3_url output variable from our deployment step like this:
jobs:
  ci:
    runs-on: ubuntu-latest
    environment:
      name: microservice-ui-nuxt-js-deployment
      url: ${{ steps.aws-sync.outputs.s3_url }}
    steps:
    - name: Checkout
    ...
Using steps.aws-sync we reference the deployment step directly, since we defined it with that id. The appended .outputs.s3_url then directly references the variable containing our S3 url. If you defined everything correctly, the GitHub Actions UI will render the environment URL directly below the finished job.
Here's also a fully working workflow embedded inside an example project.

Azure DevOps Build with Parameters

Is it possible in azure-pipelines.yml to define multi-value runtime parameters, so that when you run the build you have to input some values?
parameters:
- name: image
  displayName: Pool Image
  type: string
  default: ubuntu-latest
  values:
  - windows-latest
  - vs2017-win2016
  - ubuntu-latest
Upon clicking Run in Azure DevOps, would you be presented with a dropdown to select which option you require?
And upon your selection, would the build only run certain steps or tasks based on your choice?
I am not sure when it was added, but dropdown parameters are now available:
parameters:
- name: env
  displayName: Environment
  type: string
  values:
  - dev
  - prod
  - test
  - train
  default: train
will provide me with a dropdown of dev, prod, etc., prepopulated with the value train.
Moreover, it renders as a dropdown with 4 or more values and as radio buttons with 3 or fewer. For instance,
- name: department
  displayName: Business Department
  type: string
  values:
  - AI
  - BI
  - Marketing
  default: AI
will create radio buttons with AI selected by default. Note that the YAML is identical between the two, except the first has 4 values and the second has 3.
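To address the second part of the question, running certain steps based on the selection: the chosen value can be checked with a compile-time if expression. A minimal sketch, reusing the env parameter from above:
steps:
# Only include this step in the compiled pipeline when env is prod
- ${{ if eq(parameters.env, 'prod') }}:
  - script: echo "This step is only included when env is prod"
- script: echo "This step always runs"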
Dropdown parameters are not yet supported in Azure DevOps pipelines.
There is a workaround: create a variable with all the possible values and enable Settable at queue time. The detailed steps are below:
Edit your YAML pipeline, click the 3 dots in the top right corner and choose Triggers.
Go to the Variables tab, create a variable and check Settable at queue time.
Then, when you queue your pipeline, you will be allowed to set the value of this variable.
After you set up the above steps, you also need to add conditions to your tasks.
In the example below, the script task only runs when the Environment variable equals prod and all previous steps succeeded.
steps:
- script: echo "run this step when Environment is prod"
  condition: and(succeeded(), eq(variables['Environment'], 'prod'))
Please check here for more information about Conditions and Expressions
You can also submit a feature request (click Suggest a feature and choose Azure DevOps) to Microsoft; hopefully they will consider implementing this feature in the future.

Specify runtime parameter in a pipeline task

We have a requirement to somehow pass a dynamic runtime parameter to a pipeline task.
For example, below, the parameter APPROVAL would be different for each run of the task.
This APPROVAL parameter carries the change and release number so that the task can tag the Terraform resources it creates, for audit purposes.
I've been searching the web for a while with no luck in finding a solution. Is this possible in a Concourse pipeline, and is it best practice?
- task: plan-terraform
  file: ci/concourse-jobs/pipelines/tasks/terraform/plan-terraform.yaml
  params:
    ENV: dev
    APPROVAL: test
    CHANNEL: Development
    GITLAB_KEY: ((gitlab_key))
    REGION: eu-west-2
    TF_FOLDER: terraform/squid
  input_mapping:
    ci: ci
    tf: squid
  output_mapping:
    plan: plan
  tags:
  - dev
From https://concourse-ci.org/tasks.html:
ideally tasks are pure functions: given the same set of inputs, it should either always succeed with the same outputs or always fail.
A dynamic parameter would break that contract and produce different outputs from the same set of inputs. Could you possibly make APPROVAL an input? Then you'd maintain your build traceability. If it's a (file) input, you could then load it into a variable:
APPROVAL=$(cat <filename>)
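A minimal sketch of what that could look like, assuming a hypothetical approval input whose file approval.txt carries the change/release number (the input name, file name, and image are made up for illustration):
# plan-terraform.yaml (task config, abbreviated)
platform: linux
image_resource:
  type: docker-image
  source:
    repository: hashicorp/terraform  # placeholder image for this sketch
inputs:
- name: ci
- name: tf
- name: approval   # hypothetical input providing the approval number
run:
  path: sh
  args:
  - -c
  - |
    # Load the approval number from the input, then tag resources as before
    APPROVAL=$(cat approval/approval.txt)
    echo "Tagging Terraform resources with approval ${APPROVAL}"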

How to parameterise concourse task files

I'm pretty impressed by the power and simplicity of Concourse. Since my pipelines keep growing, I decided to move the tasks to separate files. One of the tasks uses a custom Docker image from our own private registry. So, in that task file I have:
image_resource:
  type: docker-image
  source:
    repository: docker.mycomp.com:443/app-builder
    tag: latest
    username: {{dckr-user}}
    password: {{dckr-pass}}
When I do a set-pipeline, I pass the --load-vars-from argument to load credentials etc. from a separate file.
Now here's my problem: I notice that the vars in my pipeline files are replaced with the actual correct values, but once the task runs, the aforementioned {{dckr-user}} and {{dckr-pass}} are not replaced.
How do I achieve this?
In addition to what was provided in this answer
If you are specifically looking to use private images in a task, you can do the following in your pipeline.yml:
resources:
- name: some-private-image
  type: docker-image
  source:
    repository: ...
    username: {{my-username}}
    password: {{my-password}}

jobs:
- name: foo
  plan:
  - get: some-private-image
  - task: some-task
    image: some-private-image
Because this is your pipeline, you can use --load-vars-from, which will first get your image as a resource and then use it for the subsequent task.
You can also see this article on pre-fetching ruby gems in test containers on Concourse
The only downside to this is you cannot use this technique when running a fly execute.
As of Concourse v3.3.0, you can set up Credential Management in order to use variables from one of the supported credential managers, which are currently Vault, Credhub, Amazon SSM, and Amazon Secrets Manager. This means you no longer have to inline parts of your task files in the pipeline.yml. The values you set in Vault will also be accessible from the task.yml files.
And since v3.2.0 {{foo}} is deprecated in favor of ((foo)).
Using the Credential Manager you can parameterize:
source under resources in a pipeline
source under resource_types in a pipeline
webhook_token under resources in a pipeline
image_resource.source under image_resource in a task config
params in a pipeline
params in a task config
For setting up Vault with Concourse you can refer to:
https://concourse-ci.org/creds.html
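For illustration, with a credential manager in place the task config itself can reference the secrets directly; a sketch, assuming dckr-user and dckr-pass exist in the credential store:
# task.yml
platform: linux
image_resource:
  type: docker-image
  source:
    repository: docker.mycomp.com:443/app-builder
    tag: latest
    # Resolved from the credential manager at runtime, using the ((var)) syntax
    username: ((dckr-user))
    password: ((dckr-pass))
run:
  path: sh
  args: ["-c", "echo task runs inside the private image"]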
You can always define tasks in a pipeline.yml...
For example:
jobs:
- name: dotpersecond
  plan:
  - task: dotpersecond
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: docker.mycomp.com:443/app-builder
          tag: latest
          username: {{dckr-user}}
          password: {{dckr-pass}}
      run:
        path: sh
        args:
        - "-c"
        - |
          for i in `seq 1000`; do echo hi; sleep 2; done