Storing and retrieving secrets in Azure Key Vault with variable groups in an Azure DevOps pipeline - azure-devops

I am using Azure DevOps pipelines and want to store my secrets in Azure Key Vault. How can I use a variable group with Key Vault integration to retrieve my secret values and use them within my DevOps pipeline?
Below is the YAML script, which has hard-coded values for the service connection and for the storage account name and key used to store the Terraform tfstate files. My question is how to pass these as secure values so that no one can see my data. I have created an Azure Key Vault and it is linked to the provided service connection.
```yaml
trigger: none

#########################
# Declare Build Agents:-
#########################
pool:
  vmImage: ubuntu-latest

######################
# DECLARE PARAMETERS:-
######################
parameters:
  - name: ResourceGroup
    displayName: Please Provide the Resource Group Name:-
    type: object
    default: <Please provide the required Name>
  - name: Region
    displayName: Please Provide the Region Name:-
    type: object
    default: <Please provide the required Name>
  - name: sqlserver
    displayName: Please Provide the sqlserver Name:-
    type: object
    default: <Please provide the required Name>

######################
# DECLARE VARIABLES:-
######################
variables:
  TF_VAR_ResourceGroup: ${{ parameters.ResourceGroup }}
  TF_VAR_REGION: ${{ parameters.Region }}
  TF_VAR_SQLSERVER_NAME: ${{ parameters.sqlserver }}

###################
# Declare Stages:-
###################
stages:
  - stage: tfvalidate
    jobs:
      - job: validate
        continueOnError: false
        steps:
          - task: TerraformInstaller@0
            inputs:
              terraformVersion: 'latest'
          - task: TerraformTaskV3@3
            displayName: init
            inputs:
              provider: 'azurerm'
              command: 'init'
              backendServiceArm: $(serviceconnection)            # it should pick from my Azure Key Vault
              backendAzureRmResourceGroupName: 'AzureDevops'     # it should pick from my Azure Key Vault
              backendAzureRmStorageAccountName: 'azuredevopsdev' # it should pick from my Azure Key Vault
              backendAzureRmContainerName: 'tfstatedev'          # it should pick from my Azure Key Vault
              backendAzureRmKey: 'terraform.tfstate'             # it should pick from my Azure Key Vault
          - task: TerraformTaskV3@3
            displayName: validate
            inputs:
              provider: 'azurerm'
              command: 'validate'
  - stage: tfdeploy
    condition: succeeded('tfvalidate')
    dependsOn: tfvalidate
    jobs:
      - job: apply
        steps:
          - task: TerraformInstaller@0
            inputs:
              terraformVersion: 'latest'
          - task: TerraformTaskV3@3
            displayName: init
            inputs:
              provider: 'azurerm'
              command: 'init'
              backendServiceArm: 'dev-Automationaccount_OIDC'
              backendAzureRmResourceGroupName: 'AzureDevops'
              backendAzureRmStorageAccountName: 'azuredevopsdev'
              backendAzureRmContainerName: 'tfstatedev'
              backendAzureRmKey: 'terraform.tfstate'
          - task: TerraformTaskV3@3
            displayName: validate
            inputs:
              provider: 'azurerm'
              command: 'validate'
          - task: TerraformTaskV3@3
            displayName: plan
            inputs:
              provider: 'azurerm'
              command: 'plan'
              environmentServiceNameAzureRM: 'dev-Automationaccount_OIDC'
          - task: TerraformTaskV3@3
            displayName: apply
            inputs:
              provider: 'azurerm'
              command: 'apply'
              commandOptions: '-auto-approve'
              environmentServiceNameAzureRM: 'dev-Automationaccount_OIDC'
              backendAzureRmContainerName: 'tfstatedev'
              backendAzureRmKey: 'terraform.tfstate'
```

You have to assign the correct role to the service principal (SP) on the Key Vault:

1. Find the linked service connection: Project settings > Service connections > Manage Service Principal (opens the Azure portal) > copy the display name of the SP.
2. Go to your Key Vault and assign permissions to this SP: Access Control (IAM) > + Add > Add role assignment > Contributor > User, group, or service principal > Select members: paste the copied name of your SP > Review + assign.
3. Go to Azure DevOps, connect your Key Vault to a variable group, and add the secrets you want to use.
4. Test inside YAML and check the pipeline logs: the secret is not exposed in the pipeline.

Note: make sure that the access policies for the SP on the specific Key Vault are set correctly; see https://learn.microsoft.com/bs-latn-ba/azure/key-vault/general/assign-access-policy?tabs=azure-portal
Note: when you run the pipeline you may have to grant the pipeline explicit permission to access the library group; this only needs to be done once.
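A minimal test sketch, assuming the Key Vault-linked variable group is named `keyvault-group` and contains a secret named `mySecret` (both placeholder names):

```yaml
variables:
  - group: keyvault-group    # placeholder: variable group linked to the Key Vault

steps:
  - script: echo "$(mySecret)"   # the value is printed as *** in the logs; it stays masked
    displayName: Test Key Vault secret
```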
Alternative method:
Since you have already set up a service connection, you can also use an AzureCLI task to get credentials and persist them through your pipeline:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/reference/azure-cli-v2?view=azure-pipelines
Set addSpnToEnvironment to true, and use something like this to set the value(s): echo "##vso[task.setvariable variable=SECRET;issecret=true;isoutput=true]$Key"
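The logging command on its own can be exercised in a plain shell to see exactly what the agent's log parser receives; `Key` here is a placeholder for whatever value your CLI script actually retrieved:

```shell
#!/bin/sh
# Placeholder secret value; in the pipeline this would come from the Azure CLI call
Key="dummy-value"

# Azure DevOps parses stdout lines starting with ##vso[...] as logging commands;
# issecret=true masks the value in subsequent logs, isoutput=true makes the
# variable addressable from later jobs/stages via dependencies.
echo "##vso[task.setvariable variable=SECRET;issecret=true;isoutput=true]$Key"
```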
Hope this helps

Here is another approach I have tried to reproduce in my environment, with positive results.
Step 1: Create an Azure Key Vault and add secrets.
Step 2: Create a service principal and note down the app ID, tenant ID, and password for later use.
Step 3: Add the service principal to the Key Vault access policies as shown below.
• Navigate to Access policies in the Key Vault and click Create.
• Select the required permissions and click Next.
• Enter the object ID of the service principal created in step 2 and click Review + create.
Step 4: Create a service connection in Azure DevOps with the details from step 2. Fill in these details and click Create.
Step 5: Navigate to Pipelines > Library and click Variable group. Add the required keys, secrets, and certificates under Variables.
Step 6: Reference the variable group in the Azure pipeline as below.
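Step 6 might look like the following sketch; the group name `myVariableGroup` and secret name `mySecret` are placeholders for whatever was created in step 5:

```yaml
trigger: none

pool:
  vmImage: ubuntu-latest

variables:
  - group: myVariableGroup   # placeholder: the variable group created in step 5

steps:
  - script: echo "$(mySecret)"   # secret values are masked as *** in the logs
    displayName: Read secret from variable group
```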

Related

Terraform on Azure DevOps

I am getting the below error while running a pipeline from Azure DevOps (using Terraform). I have defined a service connection which is used as a variable in the pipeline.
Error building ARM Config: obtain subscription() from Azure CLI: parsing json result from the Azure CLI: waiting for the Azure CLI: exit status 1: ERROR: Please run 'az login' to setup account.
Below is my YAML file:
```yaml
parameters:
  environment: ''
  environmentPath: ''
  terraformStateFilename: ''
  artifacts: ''

steps:
  - task: TerraformInstaller@0
    inputs:
      terraformVersion: $(terraformVersion)
  - task: TerraformCLI@0
    displayName: Terraform Init
    inputs:
      provider: 'azurerm'
      command: 'init'
      workingDirectory: $(System.DefaultWorkingDirectory)/${{ parameters.environmentPath }}
      backendServiceArm: $(subscription)
      backendAzureRmResourceGroupName: $(terraformGroup)
      backendAzureRmStorageAccountName: $(terraformStorageName)
      backendAzureRmContainerName: $(terraformContainerName)
      backendAzureRmKey: ${{ parameters.terraformStateFilename }}
  - task: TerraformCLI@0
    displayName: Terraform Plan
    inputs:
      provider: 'azurerm'
      command: 'plan'
      workingDirectory: $(System.DefaultWorkingDirectory)/${{ parameters.environmentPath }}
      environmentServiceNameAzureRM: $(subscription)
      commandOptions: '-out plan.tfplan'
  - task: CopyFiles@2
    inputs:
      SourceFolder: '${{ parameters.environmentPath }}'
      Contents: |
        terraform.lock.hcl
        versions.tf
        providers.tf
        plan.tfplan
        terraform.tfvars
      TargetFolder: '$(Build.ArtifactStagingDirectory)'
    displayName: 'Copy Artifacts'
  - publish: '$(Build.ArtifactStagingDirectory)'
    artifact: ${{ parameters.artifacts }}
```
You need to log in to Azure using this step:
```yaml
steps:
  - task: AzureCLI@1
    displayName: Set Azure vars
    inputs:
      azureSubscription: ${{ parameters.azureSubscription }}
      scriptLocation: inlineScript
      inlineScript: |
        Write-Host "##vso[task.setvariable variable=AZURE_CLIENT_ID]$env:servicePrincipalId"
        Write-Host "##vso[task.setvariable variable=AZURE_CLIENT_SECRET]$env:servicePrincipalKey"
        Write-Host "##vso[task.setvariable variable=AZURE_TENANT_ID]$env:tenantId"
      addSpnToEnvironment: true
```
Then, in the steps where Terraform is required, add an env section to reference the previous variables:
```yaml
- task: TerraformCLI@0
  displayName: Terraform Plan
  env:
    ARM_CLIENT_ID: $(AZURE_CLIENT_ID)
    ARM_CLIENT_SECRET: $(AZURE_CLIENT_SECRET)
    ARM_TENANT_ID: $(AZURE_TENANT_ID)
  inputs:
    provider: 'azurerm'
    command: 'plan'
    workingDirectory: $(System.DefaultWorkingDirectory)/${{ parameters.environmentPath }}
    environmentServiceNameAzureRM: $(subscription)
    commandOptions: '-out plan.tfplan'
```

Assistance with First Yaml pipeline in Azure DevOps

I'm writing my first Terraform YAML pipeline in Azure DevOps. I define four repositories as resources, check them out, and run Terraform Plan. The pipeline succeeds, but only the first repository's Terraform runs, judging by what is written to the screen from that repository's output.tf. The others don't generate any output even when I define an output variable with a string value, something like this:
output "rg_module_debug" { value = "rg module ran" }
Here is the pipeline code; any feedback on why the other code isn't running would be appreciated.
```yaml
name: 'Naming Test'

trigger: none

pool:
  vmImage: ubuntu-latest

resources:
  repositories:
    - repository: VariablesRepo # identifier (A-Z, a-z, 0-9, and underscore)
      type: git # git refers to Azure Repos Git repos
      name: AzureTutorial/terraform-azurerm-variables-environment
      ref: main
    - repository: NameRepo # identifier (A-Z, a-z, 0-9, and underscore)
      type: git # git refers to Azure Repos Git repos
      name: AzureTutorial/terraform-azurerm-module-name
      ref: main
    - repository: NamingRepo # identifier (A-Z, a-z, 0-9, and underscore)
      type: git # git refers to Azure Repos Git repos
      name: AzureTutorial/terraform-azurerm-module-naming
      ref: main
    - repository: ResourceGrpRepo # identifier (A-Z, a-z, 0-9, and underscore)
      type: git # git refers to Azure Repos Git repos
      name: AzureTutorial/terraform-azurerm-module-resource_group
      ref: main

stages:
  - stage: Install
    jobs:
      - job:
        timeoutInMinutes: 60 # how long to run the job before automatically cancelling
        cancelTimeoutInMinutes: 2 # how much time to give 'run always even if cancelled tasks' before stopping them
        steps:
          - checkout: self
          - checkout: VariablesRepo
          - checkout: NameRepo
          - checkout: NamingRepo
          - checkout: ResourceGrpRepo
          - task: TerraformInstaller@0
            displayName: 'install'
            inputs:
              terraformVersion: 'latest'
          - task: TerraformCLI@0
            displayName: 'terraform init'
            inputs:
              provider: 'azurerm'
              command: 'init'
              workingDirectory: '$(System.DefaultWorkingDirectory)/terraform-azurerm-module-name'
              # environmentServiceName: TutorialSvcCon
          - task: TerraformCLI@0
            displayName: 'terraform plan'
            inputs:
              provider: 'azurerm'
              command: 'plan'
              workingDirectory: '$(System.DefaultWorkingDirectory)/terraform-azurerm-module-name'
              # environmentServiceName: TutorialSvcCon
```
You are checking out multiple repositories, so, according to the documentation:

Multiple repositories: If you have multiple checkout steps in your job, your source code is checked out into directories named after the repositories as a subfolder of s in (Agent.BuildDirectory). If (Agent.BuildDirectory) is C:\agent_work\1 and your repositories are named tools and code, your code is checked out to C:\agent_work\1\s\tools and C:\agent_work\1\s\code.

Thus, to achieve what you want, you have to create one TerraformCLI task for each checked-out repository. Your stages code would therefore be:
```yaml
...
stages:
  - stage: Install
    jobs:
      - job:
        timeoutInMinutes: 60 # how long to run the job before automatically cancelling
        cancelTimeoutInMinutes: 2 # how much time to give 'run always even if cancelled tasks' before stopping them
        steps:
          - checkout: self
          - checkout: VariablesRepo
          - checkout: NameRepo
          - checkout: NamingRepo
          - checkout: ResourceGrpRepo
          - task: TerraformInstaller@0
            displayName: 'install'
            inputs:
              terraformVersion: 'latest'
          - task: TerraformCLI@0
            displayName: 'terraform init'
            inputs:
              provider: 'azurerm'
              command: 'init'
              workingDirectory: '$(System.DefaultWorkingDirectory)/NameRepo'
              # environmentServiceName: TutorialSvcCon
          - task: TerraformCLI@0
            displayName: 'terraform plan'
            inputs:
              provider: 'azurerm'
              command: 'plan'
              workingDirectory: '$(System.DefaultWorkingDirectory)/NameRepo'
              # environmentServiceName: TutorialSvcCon
```
If, for instance, you want to do the same operation for each repository, you add more tasks pointing to the repository directory, as you correctly set up at the beginning:
```yaml
- task: TerraformCLI@0
  displayName: 'terraform init'
  inputs:
    provider: 'azurerm'
    command: 'init'
    workingDirectory: '$(System.DefaultWorkingDirectory)/VariablesRepo'
    # environmentServiceName: TutorialSvcCon
- task: TerraformCLI@0
  displayName: 'terraform plan'
  inputs:
    provider: 'azurerm'
    command: 'plan'
    workingDirectory: '$(System.DefaultWorkingDirectory)/VariablesRepo'
    # environmentServiceName: TutorialSvcCon
```
And so forth.
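Rather than copy-pasting the task pair four times, the repetition can be factored out with a compile-time `${{ each }}` loop over the checkout directory names. The directory list below is an assumption matching the repositories above:

```yaml
parameters:
  - name: terraformDirs
    type: object
    default:   # assumed to match the checkout directory names
      - VariablesRepo
      - NameRepo
      - NamingRepo
      - ResourceGrpRepo

steps:
  # Expanded at template-compile time into one init/plan pair per directory
  - ${{ each dir in parameters.terraformDirs }}:
      - task: TerraformCLI@0
        displayName: 'terraform init (${{ dir }})'
        inputs:
          provider: 'azurerm'
          command: 'init'
          workingDirectory: '$(System.DefaultWorkingDirectory)/${{ dir }}'
      - task: TerraformCLI@0
        displayName: 'terraform plan (${{ dir }})'
        inputs:
          provider: 'azurerm'
          command: 'plan'
          workingDirectory: '$(System.DefaultWorkingDirectory)/${{ dir }}'
```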

Configure approval for Azure Pipelines deployment stage against Azure App Services

I'm defining an Azure Pipeline as code in which there will be several deployment stages (staging and production). I need the production stage to be executed only on approval from certain users.
Currently there is a way to define approvals for "Environments". However, this only includes resources such as VMs and K8s, whereas the application will be deployed to Azure App Services.
Pipeline excerpt:
```yaml
- stage: Deploy_Production
  pool:
    vmImage: ubuntu-latest
  jobs:
    - job: deploy
      steps:
        - script: find ./
        - task: DownloadBuildArtifacts@0
          inputs:
            buildType: 'current'
            downloadType: 'single'
            artifactName: 'drop'
            downloadPath: '$(System.ArtifactsDirectory)'
        - script: 'find $(System.ArtifactsDirectory)'
        - task: AzureRmWebAppDeployment@4
          inputs:
            ConnectionType: 'AzureRM'
            azureSubscription: 'Free Trial(xxx)'
            appType: 'webAppLinux'
            WebAppName: 'app'
            packageForLinux: '$(System.ArtifactsDirectory)/**/*.jar'
            RuntimeStack: 'JAVA|11-java11'
            StartupCommand: 'java -jar $(System.ArtifactsDirectory)/drop/build/libs/app.jar'
```
How can I configure approvals in this scenario?
UPDATE:
Following MorrowSolutions' answer, I updated my pipeline.
If I leave it as shown in the answer, the steps entry is highlighted as invalid syntax. If I indent it, it seems to be correct: the deployment stage executes and downloads the artifact, but nothing else seems to be executed (scripts, deploy task, ...).
So, the resources you tie to an environment do not restrict which pipelines can be associated with that environment. Also, they are not required, and at the moment Microsoft only supports Kubernetes and VMs, so you won't be able to associate an Azure App Service.
In your case, don't associate any resources with your environment. You'll want to update your YAML to use a deployment job specifically and specify the environment within your parameters. This tells your pipeline to associate releases with the environment you've configured. It should look something like this in your case:
```yaml
stages:
  - stage: Deploy_Production
    pool:
      vmImage: ubuntu-latest
    jobs:
      - deployment: DeployWeb
        displayName: Deploy Web App
        environment: YourApp-QA
        pool:
          vmImage: 'ubuntu-latest'
        strategy:
          runOnce:
            deploy:
              steps:
                - script: find ./
                - task: DownloadBuildArtifacts@0
                  inputs:
                    buildType: 'current'
                    downloadType: 'single'
                    artifactName: 'drop'
                    downloadPath: '$(System.ArtifactsDirectory)'
                - script: 'find $(System.ArtifactsDirectory)'
                - task: AzureRmWebAppDeployment@4
                  inputs:
                    ConnectionType: 'AzureRM'
                    azureSubscription: 'Free Trial(xxx)'
                    appType: 'webAppLinux'
                    WebAppName: 'app'
                    packageForLinux: '$(System.ArtifactsDirectory)/**/*.jar'
                    RuntimeStack: 'JAVA|11-java11'
                    StartupCommand: 'java -jar $(System.ArtifactsDirectory)/drop/build/libs/app.jar'
```
Here is Microsoft's documentation on the deployment job schema, with more information on how to use the environment parameter:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops#schema
I actually just had a conversation with someone about this. You're not alone in thinking that the resources you tie to an environment have to be associated with the resources you're deploying within your YAML pipeline :)

There was a resource authorization issue: "The pipeline is not valid. Job validate: Step TerraformTaskV1"

I get this error in an Azure DevOps pipeline when I split a YAML file to make templates:

There was a resource authorization issue: "The pipeline is not valid. Job validate: Step TerraformTaskV1 input backendServiceArm references service connection azurerm which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz."

Here a solution is given to remove the task and add it again, but it did not work for me. When I had Terraform in one YAML file, it worked:
```yaml
stages:
  - stage: validate
    jobs:
      - job: validate
        continueOnError: false
        steps:
          - task: TerraformInstaller@0
            displayName: 'install'
            inputs:
              terraformVersion: '0.12.26'
          - task: TerraformTaskV1@0
            displayName: init
            inputs:
              provider: 'azurerm'
              command: 'init'
              backendServiceArm: 'azure-spn'
              backendAzureRmResourceGroupName: 'terraform-rg'
              backendAzureRmStorageAccountName: 'adsstatetr'
              backendAzureRmContainerName: 'sktfcontainer'
              backendAzureRmKey: 'terraform.tfstate'
          - task: TerraformTaskV1@0
            displayName: validate
            inputs:
              provider: 'azurerm'
              command: 'validate'
```
When I split it into two files (templates):
```yaml
stages:
  - stage: validate
    jobs:
      - template: terraform-validate.yml
        parameters:
          version: '0.12.26'
          sp: 'azurerm'
          rg: 'terraform-rg'
          sg: 'adsstatetr'
          sgContainer: 'sktfcontainer'
          skey: 'terraform.tfstate'
```
it failed and gave the error written above. Here is terraform-validate.yml:
```yaml
parameters:
  version: ''
  sp: ''
  rg: ''
  sg: ''
  sgContainer: ''
  skey: ''

jobs:
  - job: validate
    continueOnError: false
    steps:
      - task: TerraformInstaller@0
        displayName: 'install'
        inputs:
          terraformVersion: '0.12.26'
      - task: TerraformTaskV1@0
        inputs:
          provider: 'azurerm'
          command: 'init'
          backendServiceArm: '${{ parameters.sp }}'
          backendAzureRmResourceGroupName: '${{ parameters.rg }}'
          backendAzureRmStorageAccountName: '${{ parameters.sg }}'
          backendAzureRmContainerName: '${{ parameters.sgContainer }}'
          backendAzureRmKey: '${{ parameters.skey }}'
```
It also shows a strange "Authorize resources" prompt, and clicking Approve does not fix it either. Again, why? If there were an issue with the service connection, why would my single-file YAML work? There is no approval issue there.
In the working example you pass backendServiceArm: 'azure-spn' as the ARM connection, but in the template call it is sp: 'azurerm'. If you change it to sp: 'azure-spn', you should be fine.
Your pipeline is not recognising the variables or parameters you pass. If it does recognise them and there is still an issue, check the syntax, e.g. whether an if statement is misplaced below a stage, etc.
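In other words, the template call should pass the actual service connection name from the working single-file example:

```yaml
stages:
  - stage: validate
    jobs:
      - template: terraform-validate.yml
        parameters:
          version: '0.12.26'
          sp: 'azure-spn'   # must match the real service connection name, not the provider name
          rg: 'terraform-rg'
          sg: 'adsstatetr'
          sgContainer: 'sktfcontainer'
          skey: 'terraform.tfstate'
```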

Multi Stage YAML Azure Pipeline Variable Scope

I am using multi-stage Azure pipelines. Using the classic editor I am able to set the scope for a variable, but using a YAML pipeline I cannot. How is this possible using multi-stage YAML pipelines?
Here is the classic UI where I can set the scope.
You can't. Use a variable group and link the variable group to the desired scope or store secrets in an Azure keyvault or some other secure secret store.
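A sketch of the variable-group approach, with one group linked per stage (the group names `dev-variables`/`test-variables` and the variable `someSetting` are placeholders):

```yaml
stages:
  - stage: Dev
    variables:
      - group: dev-variables    # placeholder: group holding Dev-scoped values
    jobs:
      - job: show
        steps:
          - script: echo "$(someSetting)"
  - stage: Test
    variables:
      - group: test-variables   # placeholder: group holding Test-scoped values
    jobs:
      - job: show
        steps:
          - script: echo "$(someSetting)"
```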
Actually you can, but you have to copy-paste the variables in each '- stage' block.
The method below will help. In this case I am changing an HTML file; you can use config or JSON, etc. In the HTML file, declare the variables as #{name}# and #{name2}#, like so:
```yaml
- stage: Dev
  displayName: Deploy to Dev
  dependsOn: Build
  variables:
    name: valueDev
    name2: valueDev2
  jobs:
    - deployment: Dev
      displayName: Deploy to Dev
      environment: Dev
      strategy:
        runOnce:
          deploy:
            steps:
              - checkout: self
                clean: true
              - task: qetza.replacetokens.replacetokens-task.replacetokens@3
                displayName: 'Replace tokens in **/*.html'
                inputs:
                  targetFiles: '**/*.html'
                  tokenPrefix: '#{'
                  tokenSuffix: '}#'
- stage: Test
  displayName: Deploy to Test
  dependsOn: Dev
  variables:
    name: valueTest
    name2: valueTest2
  jobs:
    - deployment: Test
      displayName: Deploy to Test
      environment: Test
      strategy:
        runOnce:
          deploy:
            steps:
              - checkout: self
                clean: true
              - task: qetza.replacetokens.replacetokens-task.replacetokens@3
                displayName: 'Replace tokens in **/*.html'
                inputs:
                  targetFiles: '**/*.html'
                  tokenPrefix: '#{'
                  tokenSuffix: '}#'
```