Using multiple Terraform .tfvars files in Azure Pipelines

We have a Terraform module source repository that covers three environments - dev, test and prod. Each environment has a dedicated folder, which also contains its own terraform.tfvars file as depicted below.
In conjunction with the above, I also have an Azure Release Pipeline with three deployment Stages - Dev, Test and Prod, as also depicted below.
Not surprisingly, what I am now seeking to achieve is to set up the respective pipelines for all three Stages and ensure each consumes only its dedicated *.tfvars file. How can I achieve this in the pipeline Tasks?

You can define a variable limited to the scope of a specific stage:
And then just reference $(TerraformVarsFile).
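For a YAML pipeline, a minimal sketch of the same idea scopes the variable per stage; the folder paths and job layout below are assumptions for illustration, not from the original question:

stages:
- stage: Dev
  variables:
    TerraformVarsFile: 'dev/terraform.tfvars'  # hypothetical per-environment path
  jobs:
  - job: Apply
    steps:
    - script: terraform apply -var-file="$(TerraformVarsFile)" -auto-approve
      displayName: 'Terraform apply with stage-scoped tfvars'
- stage: Test
  variables:
    TerraformVarsFile: 'test/terraform.tfvars'
  jobs:
  - job: Apply
    steps:
    - script: terraform apply -var-file="$(TerraformVarsFile)" -auto-approve
      displayName: 'Terraform apply with stage-scoped tfvars'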

Can I use one AzureDevOps pipeline to run other pipelines?

I would like to have a master pipeline capable of running the pipelines of our system's individual components. I'd also like to be able to run any of those components' pipelines individually. Additionally, some of the component pipelines are configured using yaml, while others are using the classic approach. (I'm not sure if that figures into any possible solutions to this problem.) Those that are configured using yaml typically contain multiple jobs, and I'd need all of the jobs to run in those cases.
Using approach #2 recommended here, I tried the following:
jobs:
- job: build_and_deploy
  displayName: Build and Deploy
  cancelTimeoutInMinutes: 1
  pool:
    name: some-pool
  steps:
  - checkout: self
  - template: component_one_pipeline.yml
  - template: component_two_pipeline.yml
I receive an error for the following "unexpected values": trigger, resources, name, variables, and jobs. I'm guessing these aren't allowed in any yaml file referenced in the template step of another pipeline yaml file. As I mentioned above, though, I need these values in their files because we need to run the pipelines individually.
If possible, could someone point me in the direction of how to get this done?
EDIT: I have also tried the approach given here. I was thinking I'd have a master pipeline that essentially did nothing except serve as a trigger for all of the child pipelines that are supposed to run sequentially. Essentially, the child pipelines should subscribe to the master pipeline and run when it's done. I ended up with the following 2 files:
# master-pipeline.yml
trigger: none

pool:
  name: some agent pool

steps:
- script: echo Running MASTER PIPELINE
  displayName: 'Run master pipeline'
# child-pipeline.yml
trigger: none
#- testing-branch (tried these combinations trying to pick up master run)
#- main

pool:
  name: some agent pool

resources:
  pipelines:
  - pipeline: testing_master_pipeline
    source: TestingMasterPipeline
    trigger: true

steps:
- script: echo Running CHILD PIPELINE 1
  displayName: 'Run Child Pipeline 1'
Unfortunately, it's not working. I don't get any exceptions, but the child pipeline isn't running when I manually run the master pipeline. Any thoughts?
Thanks in advance.
The way those approaches you linked work, and the way Azure DevOps build triggering works in general, is that a build completion can trigger another build, and the trigger has to live in the build that is to be triggered. So:
Yaml templates can't have things like triggers, so they won't really help you here (though you can of course split any of the individual pipelines into templates). Triggers live in the main yaml pipeline file, which references the template files. So you can't have the individual component pipelines as templates.
Yaml pipelines can be chained with the resources declaration mentioned in the first link. The way this works is that the resource declaration goes in the pipeline to be triggered, and you configure the conditions (like branch filters: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/pipeline-triggers?view=azure-devops#branch-filters) on the pipeline to be triggered. For example, in your component pipeline you declare the master pipeline as a resource, and set the conditions under which the component pipeline will be triggered, like when the master pipeline is run against a /release/* branch. Or just set the trigger to true in order to trigger the component pipeline from any successful run of the master pipeline. The component pipeline can still have its own pipeline triggers at the start of the pipeline declaration.
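For example, a sketch of a component pipeline's resource declaration under that setup; the alias master_pipeline is a hypothetical name, and the source refers to the master pipeline definition from the question:

resources:
  pipelines:
  - pipeline: master_pipeline        # hypothetical alias for the resource
    source: TestingMasterPipeline    # the master pipeline definition to listen to
    trigger:
      branches:
        include:
        - release/*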
The classic build definitions can also be chained via edit build definition -> triggers -> build completion (see, for example, here: https://jpearson.blog/2019/03/27/chaining-builds-in-azure-devops/). This works the same way as with yaml pipelines; you configure the conditions for the classic pipeline to trigger on, so add the master pipeline as a trigger to the component pipelines. Again, you can also set pipeline triggers for the component pipeline.
The limitation here is that a classic pipeline can be triggered by a yaml pipeline, but not vice versa. A similar limitation applies to the yaml resources declaration; those pipelines can't be triggered by a classic pipeline. If you need such triggering, or otherwise find the "native" triggers insufficient, you can of course make an Azure DevOps API call in either type of pipeline to trigger any pipeline. See: https://blog.geralexgr.com/cloud/trigger-azure-devops-build-pipelines-using-rest-api, or just search for the Azure DevOps REST API and the associated blog posts that call the API with PowerShell, the REST API task, or by some other means.
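For illustration, a hedged sketch of such a REST call from a YAML step; the definition id 42 is a placeholder, and this assumes the build service identity is allowed to queue builds:

steps:
- powershell: |
    # Queue another pipeline via the Build REST API (definition id 42 is hypothetical).
    $body = '{"definition": {"id": 42}}'
    $uri = "$(System.CollectionUri)$(System.TeamProject)/_apis/build/builds?api-version=6.0"
    Invoke-RestMethod -Uri $uri -Method Post -Body $body -ContentType 'application/json' `
      -Headers @{ Authorization = "Bearer $(System.AccessToken)" }
  displayName: 'Trigger another pipeline via the REST API'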
As it turns out, I needed to set the pipelines' default branch to the one I was testing on for things to work correctly. The code in my original post works fine.
If you run into this problem and you're using a testing branch to test your pipelines, check out the information here on how to configure your pipeline to listen for triggers on your branch. Apparently a pipeline only listens for them on its default branch. Note: The example in the link uses the "classic" approach to pipeline configuration as an example, but you can reach the same page from a yaml configuration's edit screen by clicking the 3 dots on the right and selecting "Triggers."
Hope this helps someone.

How to run scheduled Azure DevOps pipeline with two different agents pools

I have an Azure DevOps pipeline and I want to run it nightly with two different agent pools, one dev and one prod.
This is the pipeline with the default dev agent pool:
In the schedule settings there is no option to set a different agent pool for the runs:
I saw this answer (a solution with YAML settings), but I didn't find a way to use it in my pipeline (my pipeline is defined in the Azure DevOps UI settings).
As you use GUI classic pipelines, you could define two different jobs that will run on different agent pools. This way you could have a single pipeline that you run depending on your schedule.
When using YAML syntax, you could define different stages to accomplish the same result.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&tabs=yaml
Create a new Stage. The first stage's job will use one pool and the second stage will use a different pool. They can then be scheduled or triggered independently. You can also clone the first stage to save you the time of duplicating the tasks.
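A hedged YAML sketch of that layout; the pool names and the cron schedule are assumptions:

schedules:
- cron: '0 2 * * *'           # hypothetical nightly schedule (02:00 UTC)
  displayName: Nightly run
  branches:
    include:
    - main
  always: true

stages:
- stage: Dev
  pool: dev-agent-pool        # hypothetical pool name
  jobs:
  - job: NightlyDev
    steps:
    - script: echo Running the nightly job on the dev pool
- stage: Prod
  pool: prod-agent-pool       # hypothetical pool name
  jobs:
  - job: NightlyProd
    steps:
    - script: echo Running the nightly job on the prod pool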

Azure DevOps - Pipelines checkout

I am trying to solve a problem for which I can't find the right documentation.
At the moment, in my project, I am using Azure DevOps pipelines to build and deploy a simple piece of code in a function. What I am trying to achieve is to have multiple stages, each doing something concrete.
Example of the pipeline:
Stage 1 - Validation of code (checkstyle, guidelines, ...)
Stage 2 - Tests
  Job 1: Unit tests
  Job 2: Integration tests
Stage 3 - Deployment on the cloud
Stage 4 - Function tests against the deployment done in stage 3.
Problem
As you may know, when you use different stages, the pipeline runs on different agents, which means each of them performs its own git checkout. What I am trying to do is avoid these checkouts and make one single checkout in the first stage, then reuse it for the rest (the code is the same..).
Do you have any clue what I am missing here? I know that I can do this process in a single stage with all the steps/jobs inside, but I want to split it into different stages to make sure that each stage has its own responsibility.
Thanks in advance for your time.
It depends on where your agents run. If the agents are self-hosted, you can of course use a common location and avoid checking out the self repo. With hosted agents, I don't think you can do this using the stage concept in Azure Pipelines; stages have specific semantics which do not map to your desired outcome, AFAIK. There are other ways to split the responsibilities without insisting on using Azure Pipelines stages; it depends on what you want to achieve with this splitting of responsibilities.
If you simply want to logically partition the pipeline, there are alternatives, e.g. templates, which allow you to separate the partitions into files that can be maintained separately, if that satisfies your requirement for separation of responsibilities. They can even live in a different repository, as in the example below; of course they can also reside in the same repository.
An example I use for caching and restoring dependencies for C++ projects using a common repository.
- checkout: DevOpsScripts
- template: up-restore.yml#DevOpsScripts
  parameters:
    CachePath: $(updepsCache)
    CacheKeyPrefix: 'updeps | "$(Agent.OS)"'
    DependenciesManifest: $(updepsPrefix)$(osSuffix).json
As stated, in order to accomplish that you should have a custom agent on which you can have a folder to store the code, for example C:\code. Then you can check out the repository on this code path and disable checkout in the next stages.
You can disable checkout on the job inside your stage.
- job: DeployCode
  displayName: Deploy code
  steps:
  - checkout: none
  - script: echo deploying code
    displayName: deploy code
In order to check out to a specific directory on your self-hosted agent you should:
- checkout: self
  clean: true
  path: C:\code

Azure YAML Pipelines: Is it possible to find out which pipeline triggered a build?

I have two repos in my Azure DevOps project: one for the Cloud Infrastructure deployment and another that contains my application code.
I have a YAML pipeline that is triggered after either of those repos' build pipelines finishes. The pipeline looks a bit like this:
resources:
  pipelines:
  - pipeline: MyProject-Code
  - pipeline: MyProject-Infrastructure

jobs:
- job: DeployInfrastructure
  steps:
  # Here are the tasks that deploy the project infrastructure
- job: DeployCode
  steps:
  # Here are the tasks that deploy the code
I would like to put a condition on the DeployInfrastructure job so it is only executed if the triggering pipeline is the infrastructure one as I do not need to redeploy it if the change only affects the application code.
However, when reading the documentation from Microsoft there does not seem to be a very straightforward way of doing this.
Have a look at Pipeline resource variables:
In each run, the metadata for a pipeline resource is available to all jobs in the form of predefined variables. The <Alias> is the identifier that you gave for your pipeline resource. Pipeline resources variables are only available at runtime.
There are also a number of predefined variables called Build.TriggeredBy.*, amongst them Build.TriggeredBy.DefinitionName. However, the documentation suggests that for YAML pipelines with pipeline triggers, the resource variables should be used instead:
If the build was triggered by another build, then this variable is set to the name of the triggering build pipeline. In Classic pipelines, this variable is triggered by a build completion trigger.
This variable is agent-scoped, and can be used as an environment variable in a script and as a parameter in a build task, but not as part of the build number or as a version control tag.
If you are triggering a YAML pipeline using resources, you should use the resources variables instead.
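As a minimal sketch, assuming the predefined Resources.TriggeringAlias variable (which reports the alias of the resource that triggered the run) and the aliases from the question:

jobs:
- job: DeployInfrastructure
  # Run only when the infrastructure pipeline resource triggered this run
  # (assumption: Resources.TriggeringAlias is populated for resource-triggered runs).
  condition: eq(variables['Resources.TriggeringAlias'], 'MyProject-Infrastructure')
  steps:
  - script: echo Deploying infrastructure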

Move variable groups to the code repository and reference it from YAML pipelines

We are looking for a solution for moving the non-secret variables from the variable groups into the code repositories.
We would like to have the possibility:
to track the changes of all the settings in the code repository
to version the variable values together with the source code and the pipeline code
Problem:
We have over 100 variable groups defined, which are referenced by over 100 YAML pipelines.
They are injected at different pipeline/stage/job levels depending on the environment/component/stage they operate on.
Example problems:
a variable can be renamed or removed and still be referenced in the pipeline which targets the PROD environment, while in the pipeline which deploys to DEV it is already gone
a particular pipeline run used the variable values as of some date in the past; it is good to know with which set of settings it was deployed
Possible solutions:
It should be possible to use simple YAML template variable files to mimic the variable groups and just include those templates in the main YAML files using this approach: Variable reuse.
# File: variable-group-component.yml
variables:
  myComponentVariable: 'SomeVal'

# File: variable-group-environment.yml
variables:
  myEnvVariable: 'DEV'

# File: azure-pipelines.yml
variables:
- template: variable-group-component.yml  # Template reference
- template: variable-group-environment.yml  # Template reference

# some stages/jobs/steps
In theory, it should be easy to transform the variable groups to the YAML template files and reference them from YAML instead of using a reference to the variable group.
# Current reference we use
variables:
- group: "Current classical variable group"
However, even without implementing this approach, we hit the following limit in our pipelines: "No more than 100 separate YAML files may be included (directly or indirectly)" (see YAML templates limits).
Taking into consideration the requirement that we would like to have the variable groups logically granular and separated, and not stored in one big YAML file (in order not to hit another limit, on the number of variables in a job agent), we cannot go this way.
The second approach would be to add a simple script (PowerShell?) which consumes some key/value metadata file with variable records (variableName/variableValue) and executes a job step with a command like:
##vso[task.setvariable variable=one]secondValue
But it could only be done at the initial job level, as a first step, and it amounts to re-engineering the variable groups mechanism provided natively in Azure DevOps.
We are not sure that this approach would work everywhere in the YAML pipelines where the variables are currently used; in some places they are passed as arguments to tasks, etc.
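For illustration, a hedged sketch of that second approach; variables.json is a hypothetical flat key/value file in the repository:

steps:
- powershell: |
    # Read a flat key/value JSON file and expose each pair as a pipeline variable.
    $vars = Get-Content variables.json -Raw | ConvertFrom-Json
    foreach ($prop in $vars.PSObject.Properties) {
      Write-Host "##vso[task.setvariable variable=$($prop.Name)]$($prop.Value)"
    }
  displayName: 'Load variables from repository file'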
Move all the variables into key vault secrets? We abandoned this option at the start, as a key vault is a place to store sensitive data, not settings that may be visible to anyone. Moreover, storing them as secrets causes the pipeline logs to print *** instead of the real configuration values, obfuscating the pipeline run log information.
Questions:
Q1. Do you have any other proposals/alternatives for how variable versioning/change tracking could be achieved in Azure DevOps YAML pipelines?
Q2. Do you see any problems with the second possible solution, or do you have better ideas?
You can consider this as an alternative:
Store your non-secret variables in a JSON file in a repository.
Create a pipeline to push the variables to App Configuration (instead of a Vault).
Then, if your app needs these settings, make sure it reads them from App Configuration instead of relying on a replacement task in Azure DevOps. If your pipelines need the settings directly, pull them from App Configuration.
Drawbacks:
the same one you mentioned for the PowerShell case: you need to do it at the job level
What you get:
tracking in the repo
tracking in App Configuration, plus all the other benefits of App Configuration
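A hedged sketch of pulling a single setting from App Configuration inside a pipeline, assuming a hypothetical store name (my-app-config) and Azure service connection (my-azure-connection):

steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-azure-connection'  # hypothetical service connection
    scriptType: pscore
    scriptLocation: inlineScript
    inlineScript: |
      # Fetch one key and expose it as a pipeline variable.
      $value = az appconfig kv show --name my-app-config --key MySetting --query value -o tsv
      Write-Host "##vso[task.setvariable variable=MySetting]$value"
  displayName: 'Pull a setting from App Configuration'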