How to apply terraform.tfvars in an Azure DevOps pipeline

Trying to use an Azure DevOps pipeline to build AKS using Terraform, I want to pass variable values from a prod_terraform.tfvars file, i.e. to run "terraform plan -var-file=prod_terraform.tfvars".
Here is the YAML code:
- task: TerraformCLI@0
  displayName: Terraform Plan
  inputs:
    command: 'plan'
    workingDirectory: '$(System.DefaultWorkingDirectory)/terraform-manifests'
    commandOptions: '-out aks_cluster.tfplan'
    allowTelemetryCollection: false
Below is the error:
/opt/hostedtoolcache/terraform/1.0.8/x64/terraform plan -out aks_cluster.tfplan
Acquiring state lock. This may take a few moments...
var.acr_demo
Enter a value:
##[error]The operation was canceled.
Finishing: Terraform Plan

Terraform automatically loads variable values from a variable definitions file if it is named terraform.tfvars or ends in .auto.tfvars and is placed in the same directory as the other configuration files, like below:
development
└── server
    ├── main.tf
    ├── variables.tf
    └── terraform.tfvars
I renamed the file to terraform.tfvars and it worked.
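Alternatively, if you want to keep the prod_terraform.tfvars name, the task's commandOptions shown above should be able to pass the flag straight through to the CLI. A sketch (note there must be no spaces around the =):
- task: TerraformCLI@0
  displayName: Terraform Plan
  inputs:
    command: 'plan'
    workingDirectory: '$(System.DefaultWorkingDirectory)/terraform-manifests'
    # pass the non-default .tfvars file explicitly, then write the plan out
    commandOptions: '-var-file=prod_terraform.tfvars -out aks_cluster.tfplan'
    allowTelemetryCollection: false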

Related

Checkov scan particular folder or PR custom branch files

Trying to run Checkov (for IaC validation) via Azure DevOps YAML pipelines, for ARM template files stored in Azure DevOps version control. Here is the code:
trigger: none
pool:
  vmImage: ubuntu-latest
stages:
- stage: 'runCheckov'
  displayName: 'Checkov - Scan ARM files'
  jobs:
  - job: 'RunCheckov'
    displayName: 'Checkov solution'
    steps:
    - bash: |
        docker pull bridgecrew/checkov
      workingDirectory: $(System.DefaultWorkingDirectory)
      displayName: 'Pull bridgecrew/checkov image'
    - bash: |
        docker run \
          --volume $(pwd):/scripts bridgecrew/checkov \
          --directory /scripts \
          --output junitxml \
          --soft-fail > $(pwd)/CheckovReport.xml
      workingDirectory: $(System.DefaultWorkingDirectory)
      displayName: 'Run checkov'
    - task: PublishTestResults@2
      inputs:
        testRunTitle: 'Checkov run results'
        failTaskOnFailedTests: false
        testResultsFormat: 'JUnit'
        testResultsFiles: 'CheckovReport.xml'
        searchFolder: '$(System.DefaultWorkingDirectory)'
        mergeTestResults: false
        publishRunAttachments: true
      displayName: 'Publish Test results'
The problem: how do I change the path/folder of the ARM templates to scan? Right now it scans every ARM template found under my whole repo1, regardless of what directory value I set.
Also, how do I scan the files committed to a custom branch during PR review, so that the triggered build scans only the files in that branch? I know how to set the build trigger via the DevOps repository settings, but again, how do I ensure the pipeline scans only the files of the particular PR commit, not the whole repo1 (and master branch)?
I recommend using the Docker image bridgecrew/checkov to set up a container job to run the Checkov scan. A container job runs all the tasks of the job inside a Docker container started from this image.
In the container job, you can check out the source repository into the container, then use a script task (such as a Bash task) to run the Checkov CLI to scan the files. On the script task, you can use the workingDirectory option to specify the path/folder the commands run in. Normally, the commands will only act on files in the specified directory and its subdirectories.
If you want the job to scan only the files in a specific branch, clone/check out that branch into the working directory of the job in the container, then, as above, run the Checkov CLI against the specified directory. A rough sketch follows.
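For illustration only, a minimal sketch of such a container job; the arm-templates folder name is an assumption, and the checkov image must satisfy the agent's container-job requirements:
jobs:
- job: RunCheckov
  pool:
    vmImage: ubuntu-latest
  container: bridgecrew/checkov:latest
  steps:
  - checkout: self
  - bash: |
      # scan only the folder holding the ARM templates (path is an assumed example)
      checkov --directory ./arm-templates --output junitxml > CheckovReport.xml
    workingDirectory: $(System.DefaultWorkingDirectory)
    displayName: 'Run checkov against a specific folder'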
[UPDATE]
In the pipeline job, you can try calling the Azure DevOps REST API "Commits - Get Changes" to get all the changed files and folders for a particular commit.
Then use the Checkov CLI with the parameter --directory (-d) or --file (-f) to scan the specified folder or file.
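A minimal sketch of that approach; the jq filtering and the use of System.AccessToken for the REST call are assumptions, not part of the original answer:
- bash: |
    # ask "Commits - Get Changes" for the files touched by the triggering commit
    changed=$(curl -s -u ":${SYSTEM_ACCESSTOKEN}" \
      "$(System.CollectionUri)$(System.TeamProject)/_apis/git/repositories/$(Build.Repository.Name)/commits/$(Build.SourceVersion)/changes?api-version=6.0" \
      | jq -r '.changes[].item.path')
    # scan each changed file individually with --file
    for f in $changed; do
      docker run --volume $(pwd):/scripts bridgecrew/checkov --file "/scripts${f}" --soft-fail
    done
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
  displayName: 'Scan only files changed in this commit'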

Terraform apply second time on Azure yaml pipeline fails | not storing state files in blob storage

I'm deploying Azure blob storage using Terraform in an Azure YAML pipeline.
The deployment from scratch works and deploys a number of resources, including resource groups and blob storage. However, the second deployment fails, saying the resource group already exists. terraform plan (on the second try) also shows the old resources as still to be created, so it is bound to fail at terraform apply. Upon checking, it's not storing my Terraform state files (after the first deploy) in the Azure blob/container.
Pipeline Configuration:
steps:
- bash: |
    cd ./terraform
    terraform -version
    terraform init \
      -backend-config="storage_account_name=$(tfcicd-blob-account-name-kv)" \
      -backend-config="access_key=$(tfcicd-blob-key-kv)" \
      -backend-config="container_name=$(terraformStateContainer)" \
      -backend-config="key=$(terraformStateFile)"
  displayName: Terraform Init
- bash: |
    cd ./terraform
    terraform plan \
      -var-file=$(terraformVarFile) \
      -out $(terraformPlanFile)
  displayName: Terraform Plan
  env:
    ARM_SUBSCRIPTION_ID: $(tfcicd-subscription-id-kv)
    ARM_CLIENT_ID: $(tfcicd-sp-clientid-kv)
    ARM_CLIENT_SECRET: $(tfcicd-client-secret-kv)
    ARM_TENANT_ID: $(tfcicd-sp-tenantid-kv)
- bash: |
    cd ./terraform
    terraform apply -auto-approve $(terraformPlanFile)
  displayName: Terraform Apply
  env:
    ARM_SUBSCRIPTION_ID: $(tfcicd-subscription-id-kv)
    ARM_CLIENT_ID: $(tfcicd-sp-clientid-kv)
    ARM_CLIENT_SECRET: $(tfcicd-client-secret-kv)
    ARM_TENANT_ID: $(tfcicd-sp-tenantid-kv)
Terraform configs
I have a backend.tf file in the terraform/backend directory.
terraform {
  backend "azurerm" {
  }
}
I'm not sure why it's not storing the state files in the blob storage and not sure what I am doing wrong here. Any lead would be much appreciated.
Thank you.
The backend.tf file must contain all the values for your tfstate backend. (Note also that Terraform only loads .tf files from the directory where terraform init runs, so a backend.tf kept under terraform/backend is never picked up when init runs in ./terraform.)
terraform {
  backend "azurerm" {
    resource_group_name  = "storageaccountrg"
    storage_account_name = "storageaccountname"
    container_name       = "containername"
    key                  = "statefilename.tfstate"
  }
}
And this is my task for Init:
- task: Bash@3
  displayName: 'Terraform Init'
  env:
    ARM_CLIENT_ID: $(AZURE_CLIENT_ID)
    ARM_CLIENT_SECRET: $(AZURE_CLIENT_SECRET)
    ARM_SUBSCRIPTION_ID: $(AZURE_SUBSCRIPTION_ID)
    ARM_TENANT_ID: $(AZURE_TENANT_ID)
  inputs:
    targetType: inline
    script: |
      terraform init \
        -backend-config="access_key=$(stgaccount-key1)"
    workingDirectory: "$(System.DefaultWorkingDirectory)/$(FILE_PATH)"

Azure Pipeline TerraformCLI task trying to recreate already existing resource

I am trying to create an Azure pipeline with Terraform. When I ran it for the first time, it created half of the resources and failed at the apply step. When I corrected the steps, it failed with the error below.
Error: A resource with the ID "/subscriptions/2c13ad21-ae92-4e09-b64f-2e24445dc076/resourceGroups/apim-resource-gp" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group" for more information.
│
│ with module.resource_gp.azurerm_resource_group.apim_rg,
│ on resourcegroup/resource-group.tf line 1, in resource "azurerm_resource_group" "apim_rg":
│ 1: resource "azurerm_resource_group" "apim_rg" {
Here I observed the problem: the plan step is again producing a plan file that marks all resources as 'to be created', rather than skipping the resources that already exist.
Another observation is that my tfstate file, which was supposed to be created in the storage account, never got created. I am unable to figure out what has gone wrong here.
Pasting my azure-pipelines.yaml
azure-pipelines.yaml
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
variables:
  tf_version: "latest"
  tf_state_rg: "blogpost-tfstate-rg"
  tz_state_location: "centralus"
  tf_state_sa_name: "apimstrgaccount"
  tf_state_container_name: "tfstate"
  tf_state_tags: ("env=blogpost-terraform-devops-pipeline" "deployedBy=devops")
  tf_environment: "dev"
  tf_state_sku: "Standard_LRS"
  SUBSCRIPTION_NAME: "pipeline-terraform"
trigger:
- main
pool:
  vmImage: ubuntu-latest
steps:
- task: TerraformInstaller@0
  displayName: "Install Terraform $(tf_version)"
  inputs:
    terraformVersion: "$(tf_version)"
- task: TerraformCLI@0
  inputs:
    command: "init"
    backendType: "azurerm"
    backendServiceArm: "$(SUBSCRIPTION_NAME)"
    ensureBackend: true
    backendAzureRmResourceGroupName: "$(tf_environment)-$(tf_state_rg)"
    backendAzureRmResourceGroupLocation: "$(tz_state_location)"
    backendAzureRmStorageAccountName: "$(tf_state_sa_name)"
    backendAzureRmStorageAccountSku: "$(tf_state_sku)"
    backendAzureRmContainerName: $(tf_state_container_name)
    backendAzureRmKey: "$(tf_environment).terraform.tstate"
  displayName: "Run > terraform init"
- task: TerraformCLI@0
  inputs:
    command: "validate"
    environmentServiceName: "$(SUBSCRIPTION_NAME)"
  displayName: "Run > terraform validate"
- task: TerraformCLI@0
  displayName: 'terraform plan'
  inputs:
    command: plan
    publishPlanResults: "$(SUBSCRIPTION_NAME)"
    environmentServiceName: "$(SUBSCRIPTION_NAME)"
    commandOptions: '-out=$(System.DefaultWorkingDirectory)/terraform.tfplan -detailed-exitcode'
- task: TerraformCLI@0
  displayName: 'terraform apply'
  condition: and(succeeded(), eq(variables['TERRAFORM_PLAN_HAS_CHANGES'], 'true'))
  inputs:
    command: apply
    environmentServiceName: "$(SUBSCRIPTION_NAME)"
    commandOptions: '$(System.DefaultWorkingDirectory)/terraform.tfplan'
I came across a similar error, "resource with the ID "/subscriptions/xxxx/resourceGroups/<rg>" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group" for more information", when I tried a Terraform pipeline in Azure DevOps.
The DevOps pipeline was not able to find the state in the Azure UI, and I even had the azurerm provider set:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.2"
    }
  }
}
The error happens when the Terraform state does not match the real state.
I had a terraform.tfstate file where the current state was stored locally, but the error still occurred.
When I added the terraform backend block to the main.tf file, it executed without that error.
Try it with no values, like below:
terraform {
  backend "azurerm" {
    resource_group_name  = ""
    storage_account_name = ""
    container_name       = ""
    key                  = ""
  }
}
Or give the values:
terraform {
  backend "azurerm" {
    resource_group_name  = "<rg>"
    storage_account_name = "<give acct>"
    container_name       = "terraform"
    key                  = "terraform.tfstate"
  }
}
Also enable state locking and store the Terraform state in the Azure storage account.
You can also try importing the existing resource into state with terraform import <terraform_resource_address> <azure_resource_id>, as in the sketch below.
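For example, using the resource address and ID from the error message above, the import could be run as a one-off step; a sketch, assuming the same ARM_* credentials used by the other steps are available to it:
- bash: |
    # one-off: bring the already-created resource group under Terraform management
    terraform import module.resource_gp.azurerm_resource_group.apim_rg \
      "/subscriptions/2c13ad21-ae92-4e09-b64f-2e24445dc076/resourceGroups/apim-resource-gp"
  displayName: 'terraform import existing resource group'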

Use Azure Devops variable in azure-pipelines.yml powershell script

I think I may be approaching this the wrong way. I have an azure-pipelines.yml file where I deploy infrastructure using Terraform. So far, the pipeline installs Terraform in the environment with no issue. I am trying to run terraform init using a PowerShell script and am running into an error. Within the PowerShell command I am referencing a pipeline variable for access_key and secret_key. When executing the pipeline, I get the error "no valid credential sources for S3 Backend found", most likely because I am referencing the variables incorrectly. I have also set the variables in my Terraform variables file, but that may not be necessary since I am trying to read them from the pipeline variables. Below are the azure-pipelines.yml and the error from the pipeline output. Any advice would be appreciated.
azure-pipelines.yml
trigger:
- master
pool:
  vmImage: ubuntu-latest
stages:
- stage: TerraformInstall
  displayName: Terraform
  jobs:
  - job: InstallTerraform
    displayName: Install Terraform
    steps:
    - task: charleszipp.azure-pipelines-tasks-terraform.azure-pipelines-tasks-terraform-installer.TerraformInstaller@0
- stage: Init
  displayName: Init
  jobs:
  - job: init
    displayName: Terraform init
    steps:
    - task: PowerShell@2
      inputs:
        targetType: 'inline'
        script: 'terraform init -var access_key=${env:ACCESS_KEY} -var secret_key=${env:SECRET_KEY}'
Error
==============================================================================
Task : PowerShell
Description : Run a PowerShell script on Linux, macOS, or Windows
Version : 2.200.0
Author : Microsoft Corporation
Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/utility/powershell
==============================================================================
Generating script.
========================== Starting Command Output ===========================
/usr/bin/pwsh -NoLogo -NoProfile -NonInteractive -Command . '/home/vsts/work/_temp/6e333d67-4373-4ae7-bc4b-96cc38572961.ps1'
Initializing the backend...
╷
│ Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
│
│ Please see https://www.terraform.io/docs/language/settings/backends/s3.html
│ for more information about providing credentials.
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
│
│
│
╵
##[error]PowerShell exited with code '1'.
Finishing: PowerShell
Without knowing more about how your variables are set, it's hard to give a complete solution.
First, I don't think the Terraform CLI command init accepts input variables (correct me if I'm wrong). Just making a guess here: you're passing ACCESS_KEY and SECRET_KEY to be used with your Terraform backend provider. If that's the case, see this Stack Overflow answer on how to do that. To summarize what that answer says:
Create a separate .tfvars file that stores the variables that will be used by your backend provider
Use that .tfvars file with your terraform init like so (see also the sketch after this list):
terraform init -backend-config=backend.tfvars
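Equivalently, the two values can be passed inline as -backend-config key/value pairs instead of a file; a sketch reusing the PowerShell task from the question, with the variable names assumed to match yours:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    # backend settings go to -backend-config, not -var
    script: 'terraform init -backend-config="access_key=$(ACCESS_KEY)" -backend-config="secret_key=$(SECRET_KEY)"'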
Terraform aside, if you're using Azure DevOps Library variable groups, you can pass the variables to your script using the example below. Again, I don't think this will help you initialize your Terraform.
Note: You may need to play with the quotes depending on the OS of your agent.
This example assumes you have created an ADO library variable group named YourVariableGroupNameHere with two variables in that group named ACCESS_KEY and SECRET_KEY.
- stage: Init
  displayName: Init
  jobs:
  - job: init
    variables:
    - group: YourVariableGroupNameHere
    displayName: Terraform init
    steps:
    - task: PowerShell@2
      inputs:
        targetType: 'inline'
        script: 'terraform init -var access_key=$(ACCESS_KEY) -var secret_key=$(SECRET_KEY)'

Azure DevOps pipeline for deploying only changed arm templates

We have a project with a repo on Azure DevOps where we store the ARM templates of our infrastructure. What we want to achieve is to deploy templates on every commit to the master branch.
The question is: is it possible to define one pipeline that triggers a deployment of only the ARM templates changed by that commit? Let's go with an example. We have 3 templates in the repo:
t1.json
t2.json
t3.json
The latest commit changed only t2.json. In this case we want the pipeline to deploy only t2.json, as t1.json and t3.json were not changed in this commit.
Is it possible to create one universal pipeline, or should we rather create a separate pipeline for every template, each triggered by commits to its specific file?
It is possible to define only one pipeline to deploy the changed template. You need to add a script task to your pipeline that gets the changed template's file name.
It is easy to get the changed files using the git command git diff-tree --no-commit-id --name-only -r <commitId>. Once you have the changed file's name, assign it to a variable using the logging command ##vso[task.setvariable variable=VariableName]value. Then you can set the csmFile parameter like this: csmFile: '**\$(fileName)' in the AzureResourceGroupDeployment task.
You can check the yaml pipeline below for an example:
- powershell: |
    # get the changed template
    $a = git diff-tree --no-commit-id --name-only -r $(Build.SourceVersion)
    # assign the filename to a variable
    echo "##vso[task.setvariable variable=fileName]$a"
- task: AzureResourceGroupDeployment@2
  inputs:
    ....
    templateLocation: 'Linked artifact'
    csmFile: '**\$(fileName)'
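Note that git diff-tree returns several names when a commit touches more than one template, in which case a loop over the changed .json files is safer than a single csmFile pattern. A sketch using the Azure CLI instead of the deployment task; the service connection and resource group names are placeholders:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-arm-service-connection'   # placeholder name
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # deploy every ARM template changed by the triggering commit
      for f in $(git diff-tree --no-commit-id --name-only -r $(Build.SourceVersion) | grep '\.json$'); do
        az deployment group create --resource-group my-rg --template-file "$f"   # my-rg is a placeholder
      done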
It is also easy to define multiple pipelines so that only the changed template is deployed. You only need to add a paths trigger for the specific template file in each pipeline, so that a changed template file triggers only its corresponding pipeline.
trigger:
  paths:
    include:
    - pathTo/template1.json
...
- task: AzureResourceGroupDeployment@2
  inputs:
    ....
    templateLocation: 'Linked artifact'
    csmFile: '**\template1.json'
Hope the above helps!
What you ask is not supported out of the box. From what I understood, you want triggers (based on file changes) per step or per job (depending on how you organize your pipeline). However, I'm not sure you need this: deploying an ARM template that has not changed will not affect your Azure resources if you use Create Or Update Resource Group (doc here).
You can also try to detect manually which file was changed (using PowerShell and git commands, for instance), then set a flag and later use this flag to decide whether certain steps fire; a sketch follows. But it looks like overkill for what you want to achieve.
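A sketch of that flag approach; the variable, service connection, and path names are illustrative:
- powershell: |
    # set a flag when the commit touched the template this deployment cares about
    $changed = git diff-tree --no-commit-id --name-only -r $(Build.SourceVersion)
    $flag = if ($changed -contains 'pathTo/template1.json') { 'true' } else { 'false' }
    echo "##vso[task.setvariable variable=template1Changed]$flag"
- task: AzureResourceGroupDeployment@2
  condition: eq(variables['template1Changed'], 'true')
  inputs:
    azureSubscription: 'my-arm-service-connection'   # placeholder
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'my-rg'                       # placeholder
    location: 'West Europe'                          # placeholder
    templateLocation: 'Linked artifact'
    csmFile: '**\template1.json'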