Use different environment variables per deployment in GitLab

I'm trying to migrate a BitBucket pipeline to GitLab. In BitBucket we use a different set of environment variables for each deployment (staging/production etc).
I don't know how to specify this in GitLab.
I've set up group variables and variables specific to the repository, but I haven't found how to override, for example, the DB name for different deployments.
Thank you in advance for your help.

You can define variables and limit their scope
By default, all CI/CD variables are available to any job in a pipeline. Therefore, if a project uses a compromised tool in a test job, it could expose all CI/CD variables that a deployment job used. This is a common scenario in supply chain attacks.
GitLab helps mitigate supply chain attacks by letting you limit the environment scope of a variable: you define which environments, and therefore which jobs, the variable is available to.
See "Scoping environments with specs" and "CI/CD variable expressions":
deploy:
  script: cap staging deploy
  environment: staging
  only:
    variables:
      - $RELEASE == "staging"
      - $STAGING
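If the goal is just a different DB name per deployment, job-level variables are another option. A minimal sketch, assuming DB_NAME is the variable in question and deploy.sh is your deploy script (both names are hypothetical):

# One job per environment, each with its own variable values.
# DB_NAME and deploy.sh are illustrative assumptions.
deploy_staging:
  script: ./deploy.sh
  environment: staging
  variables:
    DB_NAME: "app_staging"

deploy_production:
  script: ./deploy.sh
  environment: production
  variables:
    DB_NAME: "app_production"

Note that project/group variables set in the UI take precedence over values defined in .gitlab-ci.yml.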

Related

migrate gitlab ci to azure

Been working on migrating the .gitlab-ci.yml to azure-pipelines.yml. I am not able to find equivalents of some specific gitlab keywords in azure.
For example:
(1)
rules:
  - if: $CI_MERGE_REQUEST_ID
    when: manual
timeout: 5 minutes
interruptible: false
allow_failure: true
(2)
paths:
  - $ARTIFACTS_DIR/
expire_in: 1 week
timeout: 15 minutes
How can I make a particular job run only when a specific rule matches? What are the equivalents of the predefined variable $CI_MERGE_REQUEST_ID, and of the keys rules, if, when, timeout, interruptible, allow_failure, artifacts, paths, and expire_in, in an azure-pipelines.yml file?
Any insights would be great.
GitLab CI and Azure DevOps are two different systems, so keep in mind that not every feature of GitLab CI has a one-to-one match in ADO, and there are likely to be significant differences in how they are used.
For the features you mentioned, here are the analogs in Azure DevOps:
GitLab keyword   ADO equivalent
rules            jobs.job.condition or steps.step.condition
allow_failure    jobs.job.continueOnError (also available on steps/tasks)
timeout          jobs.job.timeoutInMinutes
when: manual     see the Manual Intervention task (set it as the first task and use condition: on it for the equivalent of rules:if:when:manual)
artifacts        see steps.publish, steps.download, pipeline artifacts, and build artifacts
expire_in        see retention policies
interruptible    no analog: all jobs can be cancelled in ADO and this cannot be prevented; the closest solution would be to set a high cancelTimeoutInMinutes value
Predefined variables like CI_MERGE_REQUEST_ID only exist for GitLab CI, not Azure DevOps. Azure DevOps pipelines do have their own predefined variables -- System.PullRequest.PullRequestId would be the equivalent of CI_MERGE_REQUEST_ID, for example... but this may depend on exactly how you are using ADO with your repository.
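To make the mapping concrete, here is a rough sketch (not a verified pipeline; the job name and script body are illustrative assumptions) of how the GitLab snippet above might translate to azure-pipelines.yml:

# Sketch of an azure-pipelines.yml job approximating the GitLab snippet above.
# The job name and script body are illustrative assumptions.
jobs:
- job: MergeRequestChecks
  timeoutInMinutes: 5            # analog of GitLab's timeout
  cancelTimeoutInMinutes: 60     # closest workaround for interruptible: false
  continueOnError: true          # analog of allow_failure: true
  condition: eq(variables['Build.Reason'], 'PullRequest')  # approximates rules: if: $CI_MERGE_REQUEST_ID
  steps:
  - script: echo "runs only for pull request builds"
  - publish: '$(Build.ArtifactStagingDirectory)'  # analog of artifacts:paths
    artifact: build-output

There is no direct analog of when: manual here; as noted in the table, the Manual Intervention task is the closest equivalent, and retention policies replace expire_in.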

GitHub Actions Are there Per-Target (Environment) Variable Values or Octopus Scoped Variable Equivalent?

Looking to deploy with Actions to multiple environments. Environments are our logical environments, like Dev, Uat, Pro, etc. The deploy mechanism is the same for all environments.
Given that the deploy step is "deploy x, y, z", where x, y, and z are variables with differing values per environment, what is the best practice for handling this in Actions?
Octopus has scoped variables. Defined once, and per-environment values.
Here are some options I considered. Define Actions Environments and:
- use Environment Secrets; the con is maintainability for non-secrets (you can't just see plain-text values)
- use a SWITCH-type statement on the Actions Environment to set env vars (seems best so far; see the sketch after this question)
The Actions Environments feature seems to be the way to go for per-environment differentiation, but keeping the variable values in GitHub smells; it feels like they should be externalized completely, as there's no mechanism for "config".
Please let me know if you have approached such case. Thanks!
Edit: The 12-factor app says "store config in the environment", and I think that means, in AWS, that I have per-env SSM params and use the GH Actions Environments to pull the respective params' values into env vars. This feels most correct.
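For reference, a minimal sketch of the SWITCH-type option mentioned above, assuming a workflow_dispatch input named target and an illustrative DB_NAME variable (all names are hypothetical):

# Sketch: pick per-environment values with a case statement.
# The "target" input and the DB_NAME values are illustrative assumptions.
on:
  workflow_dispatch:
    inputs:
      target:
        required: true

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: ${{ github.event.inputs.target }}
    steps:
      - name: Set per-environment variables
        run: |
          case "${{ github.event.inputs.target }}" in
            dev) echo "DB_NAME=app_dev" >> "$GITHUB_ENV" ;;
            uat) echo "DB_NAME=app_uat" >> "$GITHUB_ENV" ;;
            pro) echo "DB_NAME=app_pro" >> "$GITHUB_ENV" ;;
          esac
      - name: Deploy
        run: echo "deploying with DB_NAME=$DB_NAME"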

Why do you specify a pool in a deployment job in Azure Pipelines?

I find it hard to grasp the concept of deployment jobs and environments in Azure Pipelines. From what I understand, a deployment job is a sequence of steps to deploy your application to a specific environment, hence the environment field.
If so, why is there also a pool definition for agent pool for that job definition?
EDIT
What bothers me is that, from what I understand, an Environment is a collection of resources that you can run your application on. So you'll define some for dev, some for stage, prod, etc. So you want to run the job on these targets. So why do we need to specify an agent pool to run the deployment job on? Shouldn't it run on the resources that belong to the specified environment?
EDIT
Take this pipeline definition for example:
jobs:
# Track deployments on the environment.
- deployment: DeployWeb
  displayName: deploy Web App
  pool:
    vmImage: 'Ubuntu-16.04'
  # Creates an environment if it doesn't exist.
  environment:
    name: 'Stage'
    resourceType: VirtualMachine
  strategy:
    # Default deployment strategy, more coming...
    runOnce:
      preDeploy:
        steps:
        - script: echo "Hello"
      deploy:
        steps:
        - checkout: self
        - script: echo my first deployment
I have an environment called "Stage" with one virtual machine in it.
When I run the pipeline, I can see both jobs run on my VM; the agent pool specified is NOT USED at all.
However, if I target another environment with no machines in it, it runs on an Azure Pipelines VM.
Why do you specify a pool in a deployment job in Azure Pipelines?
That is because an environment is a collection of resources that you can target with deployments from a pipeline.
In other words, it is like the machine that hosts our private agent, except it can now be a virtual environment, like K8s, a VM, and so on.
When we specify an environment, it only provides us with a target environment (you can think of it as a machine). If there is no agent installed on that environment to run the pipeline, we need to specify an agent pool to run it.
For example, if we execute our pipeline on a local machine, we still need to create a private agent; otherwise we only have the target environment, but no agent to host the pipeline run.
The environment field denotes the target environment to which your artifact is deployed. There are commonly multiple environments through which the artifacts flow, for example development -> test -> production. Azure DevOps uses this field to keep track of what versions are deployed to what environment, etc. From the docs:
An environment is a collection of resources that you can target with
deployments from a pipeline. Typical examples of environment names are
Dev, Test, QA, Staging, and Production.
The pool is a reference to the agent pool. The agent is the machine executing the logic inside the job. For example, a deployment job might have several logical steps, such as scripts, file copying etc. All this logic is executed on the agent that comes from the agent pool. From the docs:
To build your code or deploy your software using Azure Pipelines, you
need at least one agent. As you add more code and people, you'll
eventually need more.
When your pipeline runs, the system begins one or more jobs. An agent
is computing infrastructure with installed agent software that runs
one job at a time.
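Putting the two answers together, here is a sketch of the behavior the question observed (environment names are assumptions): the pool runs the steps only when the environment has no matching resources; with resourceType: VirtualMachine, the steps run on the environment's registered VMs instead.

# Sketch: the same deployment job against two environment flavors.
# Environment names are illustrative assumptions.
jobs:
- deployment: DeployTrackedOnly
  pool:
    vmImage: 'ubuntu-latest'     # steps run here: the environment has no resources
  environment: 'Dev'             # used only to record the deployment
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "runs on the hosted agent from the pool"

- deployment: DeployToVMs
  pool:
    vmImage: 'ubuntu-latest'     # not used when steps run on environment VMs
  environment:
    name: 'Stage'
    resourceType: VirtualMachine # steps run on the VMs registered in the environment
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "runs on the environment's VM agent"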

Using multiple Terraform .tfvars in Azure Pipelines

We have a Terraform module source repository that covers three environments - dev, test and prod. Each environment has a dedicated folder, which also contains its own terraform.tfvars file as depicted below.
In conjunction with the above, I also have an Azure Release Pipeline with three deployment Stages - Dev, Test and Prod, as also depicted below.
Not surprisingly, what I am now seeking to achieve is to set up the respective pipelines for all three Stages and ensure each consumes only its dedicated *.tfvars file. How can I get round this in the pipeline Tasks?
You can define a variable limited to the scope of a specific stage:
And then just call $(TerraformVarsFile).
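In a YAML pipeline, stage-level variables achieve the same effect. A minimal sketch, assuming the per-environment folder layout described above and a variable named TerraformVarsFile (stage names and paths are illustrative):

# Sketch: each stage points TerraformVarsFile at its own tfvars file.
# Stage names and paths are illustrative assumptions; Terraform is
# assumed to be available on the agent.
stages:
- stage: Dev
  variables:
    TerraformVarsFile: 'dev/terraform.tfvars'
  jobs:
  - job: Plan
    steps:
    - script: terraform plan -var-file="$(TerraformVarsFile)"

- stage: Prod
  variables:
    TerraformVarsFile: 'prod/terraform.tfvars'
  jobs:
  - job: Plan
    steps:
    - script: terraform plan -var-file="$(TerraformVarsFile)"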

Move variable groups to the code repository and reference it from YAML pipelines

We are looking for a solution for moving the non-secret variables from the Variable groups into the code repositories.
We would like to be able:
- to track the changes of all the settings in the code repository
- to version the variable values together with the source code and the pipeline code
Problem:
We have over 100 variable groups defined which are referenced by over 100 YAML pipelines.
They are injected at different pipeline/stage/job levels depends on the environment/component/stage they are operating on.
Example problems:
- a variable can change its name or be removed, yet still be referenced in the pipeline which targets the PROD environment while already gone from the pipeline which deploys to DEV
- a particular pipeline run used the variable values as of some date in the past; it is good to know with what set of settings it had been deployed
Possible solutions:
It should be possible to use the simple yaml template variables file to mimic the variable groups and just include the yaml templates with variable groups into the main yamls using this approach: Variable reuse.
# File: variable-group-component.yml
variables:
  myComponentVariable: 'SomeVal'

# File: variable-group-environment.yml
variables:
  myEnvVariable: 'DEV'

# File: azure-pipelines.yml
variables:
- template: variable-group-component.yml # Template reference
- template: variable-group-environment.yml # Template reference
# some stages/jobs/steps:
In theory, it should be easy to transform the variable groups to the YAML template files and reference them from YAML instead of using a reference to the variable group.
# Current reference we use
variables:
- group: "Current classical variable group"
However, even without implementing this approach, we hit the following limit in our pipelines: "No more than 100 separate YAML files may be included (directly or indirectly)" (see YAML templates limits).
Taking into consideration the requirement that we would like to have the variable groups logically granular and separated, not stored in one big yml file (in order not to hit another limit, on the number of variables in a job agent), we cannot go this way.
The second approach would be to add a simple script (PowerShell?) which consumes some key/value metadata file with variable records (variableName/variableValue) and executes a job step with a command like
##vso[task.setvariable variable=one]secondValue
(see the sketch below). But it could only be done at the job level, as a first step, and it looks like re-engineering the variable groups mechanism provided natively by Azure DevOps.
We are also not sure that this approach will work everywhere the variables are currently used in the YAML pipelines; in some places they are passed as arguments to tasks, etc.
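A minimal sketch of that first step, assuming a hypothetical variables/dev.json file holding flat name/value pairs:

# Sketch: the first step of a job reads a key/value JSON file from the
# repository and registers each entry as a pipeline variable.
# The file name and its flat format are illustrative assumptions.
steps:
- pwsh: |
    $vars = Get-Content -Raw 'variables/dev.json' | ConvertFrom-Json
    foreach ($entry in $vars.PSObject.Properties) {
      Write-Host "##vso[task.setvariable variable=$($entry.Name)]$($entry.Value)"
    }
  displayName: 'Load variables from repository file'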
Move all the variables into Key Vault secrets? We abandoned this option at the beginning, as a key vault is a place to store sensitive data, not settings which could be visible to anyone. Moreover, storing them as secrets causes the pipeline logs to print *** instead of the real configuration values, which obfuscates the pipeline run log information.
Questions:
Q1. Do you have any other propositions/alternatives for how variable versioning/change tracking could be achieved in Azure DevOps YAML pipelines?
Q2. Do you see any problems with the second possible solution, or do you have better ideas?
You can consider this as an alternative:
- Store your non-secret variables in a JSON file in a repository
- Create a pipeline to push the variables to App Configuration (instead of a Vault)
Then, if you need these settings in your app, make sure you reference App Configuration from the app instead of running a replacement task in Azure DevOps. Or, if you need these settings directly in pipelines, pull them from App Configuration (see the sketch below).
Drawbacks:
- the same one you mentioned in the PowerShell case: you need to do it at the job level
What you get:
- tracking in the repo
- tracking in App Configuration, and all the other benefits of App Configuration
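A rough sketch of the pull-from-pipeline case, assuming a hypothetical store name, key, and service connection:

# Sketch: read one setting from Azure App Configuration in a pipeline job
# and expose it as a pipeline variable. The store name, key, and service
# connection are illustrative assumptions.
steps:
- task: AzureCLI@2
  displayName: 'Pull DbName from App Configuration'
  inputs:
    azureSubscription: 'my-service-connection'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      value=$(az appconfig kv show --name my-config-store --key DbName --query value -o tsv)
      echo "##vso[task.setvariable variable=DbName]$value"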