I am trying to create an Azure pipeline with Terraform. When I ran it for the first time, it created half of the resources and then failed at the apply step. After I corrected the steps, it failed with the error below.
Error: A resource with the ID "/subscriptions/2c13ad21-ae92-4e09-b64f-2e24445dc076/resourceGroups/apim-resource-gp" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group" for more information.
│
│ with module.resource_gp.azurerm_resource_group.apim_rg,
│ on resourcegroup/resource-group.tf line 1, in resource "azurerm_resource_group" "apim_rg":
│ 1: resource "azurerm_resource_group" "apim_rg" {
Here I observed the problem: the plan step again creates a plan file that marks all resources as 'to be created' rather than skipping the already-created resources.
Another observation is that my tfstate file, which was supposed to be created in the storage account, never got created. I am unable to figure out what has gone wrong here.
Pasting my azure-pipelines.yaml:
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
variables:
  tf_version: "latest"
  tf_state_rg: "blogpost-tfstate-rg"
  tz_state_location: "centralus"
  tf_state_sa_name: "apimstrgaccount"
  tf_state_container_name: "tfstate"
  tf_state_tags: ("env=blogpost-terraform-devops-pipeline" "deployedBy=devops")
  tf_environment: "dev"
  tf_state_sku: "Standard_LRS"
  SUBSCRIPTION_NAME: "pipeline-terraform"

trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: TerraformInstaller@0
    displayName: "Install Terraform $(tf_version)"
    inputs:
      terraformVersion: "$(tf_version)"

  - task: TerraformCLI@0
    inputs:
      command: "init"
      backendType: "azurerm"
      backendServiceArm: "$(SUBSCRIPTION_NAME)"
      ensureBackend: true
      backendAzureRmResourceGroupName: "$(tf_environment)-$(tf_state_rg)"
      backendAzureRmResourceGroupLocation: "$(tz_state_location)"
      backendAzureRmStorageAccountName: "$(tf_state_sa_name)"
      backendAzureRmStorageAccountSku: "$(tf_state_sku)"
      backendAzureRmContainerName: "$(tf_state_container_name)"
      backendAzureRmKey: "$(tf_environment).terraform.tstate"
    displayName: "Run > terraform init"

  - task: TerraformCLI@0
    inputs:
      command: "validate"
      environmentServiceName: "$(SUBSCRIPTION_NAME)"
    displayName: "Run > terraform validate"

  - task: TerraformCLI@0
    displayName: 'terraform plan'
    inputs:
      command: plan
      publishPlanResults: "$(SUBSCRIPTION_NAME)"
      environmentServiceName: "$(SUBSCRIPTION_NAME)"
      commandOptions: '-out=$(System.DefaultWorkingDirectory)/terraform.tfplan -detailed-exitcode'

  - task: TerraformCLI@0
    displayName: 'terraform apply'
    condition: and(succeeded(), eq(variables['TERRAFORM_PLAN_HAS_CHANGES'], 'true'))
    inputs:
      command: apply
      environmentServiceName: "$(SUBSCRIPTION_NAME)"
      commandOptions: '$(System.DefaultWorkingDirectory)/terraform.tfplan'
I came across a similar error: resource with the ID "/subscriptions/xxxx/resourceGroups/<rg>" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group" for more information
when I tried to run a Terraform pipeline in Azure DevOps.
The DevOps pipeline was not able to find the state in the Azure UI, even though I had the azurerm provider set.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.2"
    }
  }
}
This error happens when the Terraform state does not match the real state of the resources.
I had a terraform.tfstate file where the current state was stored locally, but the error still occurred.
When I added a terraform backend block to the main.tf file, the pipeline executed without that error.
Try it with empty values like below:
terraform {
  backend "azurerm" {
    resource_group_name  = ""
    storage_account_name = ""
    container_name       = ""
    key                  = ""
  }
}
Or give the values:
terraform {
  backend "azurerm" {
    resource_group_name  = "<rg>"
    storage_account_name = "<give acct >"
    container_name       = "terraform"
    key                  = "terraform.tfstate"
  }
}
This stores the Terraform state in the Azure storage account and uses it for state locking.
Also try importing the existing resource into state using terraform import <terraform_id> <azure_resource_id>.
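For example, for the resource group from the error above, the one-off import would look like this (a sketch; the resource address is taken verbatim from the error output, and it assumes the backend has already been initialized):
terraform import module.resource_gp.azurerm_resource_group.apim_rg /subscriptions/2c13ad21-ae92-4e09-b64f-2e24445dc076/resourceGroups/apim-resource-gp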
Related
I'm deploying Azure blob storage using Terraform in an Azure YAML pipeline.
The deployment from scratch works and deploys a number of resources, including resource groups and blob storage. However, the second deployment fails, saying the resource group already exists. The terraform plan (on the second run) also shows the old resources as to be created, so it is bound to fail at terraform apply. Upon checking, it's not storing my Terraform state files (after the first deploy) in the Azure blob container.
Pipeline Configuration:
steps:
  - bash: |
      cd ./terraform
      terraform -version
      terraform init \
        -backend-config="storage_account_name=$(tfcicd-blob-account-name-kv)" \
        -backend-config="access_key=$(tfcicd-blob-key-kv)" \
        -backend-config="container_name=$(terraformStateContainer)" \
        -backend-config="key=$(terraformStateFile)"
    displayName: Terraform Init

  - bash: |
      cd ./terraform
      terraform plan \
        -var-file=$(terraformVarFile) \
        -out $(terraformPlanFile)
    displayName: Terraform Plan
    env:
      ARM_SUBSCRIPTION_ID: $(tfcicd-subscription-id-kv)
      ARM_CLIENT_ID: $(tfcicd-sp-clientid-kv)
      ARM_CLIENT_SECRET: $(tfcicd-client-secret-kv)
      ARM_TENANT_ID: $(tfcicd-sp-tenantid-kv)

  - bash: |
      cd ./terraform
      terraform apply -auto-approve $(terraformPlanFile)
    displayName: Terraform Apply
    env:
      ARM_SUBSCRIPTION_ID: $(tfcicd-subscription-id-kv)
      ARM_CLIENT_ID: $(tfcicd-sp-clientid-kv)
      ARM_CLIENT_SECRET: $(tfcicd-client-secret-kv)
      ARM_TENANT_ID: $(tfcicd-sp-tenantid-kv)
Terraform configs
I have a backend.tf file in the terraform/backend directory.
terraform {
backend "azurerm" {
}
}
I'm not sure why it's not storing the state files in blob storage, or what I am doing wrong here. Any lead would be much appreciated.
Thank you.
The backend.tf file must contain all the values for your tfstate backend:
terraform {
  backend "azurerm" {
    resource_group_name  = "storageaccountrg"
    storage_account_name = "storageaccountname"
    container_name       = "containername"
    key                  = "statefilename.tfstate"
  }
}
And this is my task for init:
- task: Bash@3
  displayName: 'Terraform Init'
  env:
    ARM_CLIENT_ID: $(AZURE_CLIENT_ID)
    ARM_CLIENT_SECRET: $(AZURE_CLIENT_SECRET)
    ARM_SUBSCRIPTION_ID: $(AZURE_SUBSCRIPTION_ID)
    ARM_TENANT_ID: $(AZURE_TENANT_ID)
  inputs:
    targetType: inline
    script: |
      terraform init \
        -backend-config="access_key=$(stgaccount-key1)"
    workingDirectory: "$(System.DefaultWorkingDirectory)/$(FILE_PATH)"
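After init runs, it may be worth confirming that the state blob actually landed in the container. A quick check with the Azure CLI (a sketch; the account and container names are the placeholder values from the backend.tf above, and the key is the same pipeline variable used in the init task):
az storage blob list --account-name storageaccountname --container-name containername --account-key $(stgaccount-key1) --output table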
I am trying to deploy a function app via an Azure DevOps pipeline; however, I am receiving the following error:
##[error]Failed to deploy web package to App Service.
##[error]To debug further please check Kudu stack trace URL : $URL_REMOVED
##[error]Error: Error: Failed to deploy web package to App Service. Ip Forbidden (CODE: 403)
From some googling, a suggested solution seems to be to whitelist the agent IP before the deployment and then remove it afterwards. I have added this to my pipeline, and I can see the agent IP get added to the access restrictions; however, the deployment still fails.
Here is my pipeline file:
# Node.js Function App to Linux on Azure
# Build a Node.js function app and deploy it to Azure as a Linux function app.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/javascript
trigger:
  - main

variables:
  # Azure Resource Manager connection created during pipeline creation
  azureSubscription: 'xxx'
  # Function app name
  functionAppName: 'xxx'
  # Environment name
  environmentName: 'xxx'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
  - stage: Build
    displayName: Build stage
    jobs:
      - job: Build
        displayName: Build
        pool:
          vmImage: $(vmImageName)
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: '10.x'
            displayName: 'Install Node.js'

          - script: |
              if [ -f extensions.csproj ]
              then
                dotnet build extensions.csproj --runtime ubuntu.16.04-x64 --output ./bin
              fi
            displayName: 'Build extensions'

          - script: |
              npm install
              npm run build --if-present
              npm run test --if-present
            displayName: 'Prepare binaries'

          - task: ArchiveFiles@2
            displayName: 'Archive files'
            inputs:
              rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
              includeRootFolder: false
              archiveType: zip
              archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
              replaceExistingArchive: true

          - upload: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
            artifact: drop

  - stage: Deploy
    displayName: Deploy stage
    dependsOn: Build
    condition: succeeded()
    jobs:
      - deployment: Deploy
        displayName: Deploy
        environment: $(environmentName)
        pool:
          vmImage: $(vmImageName)
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureCLI@2
                  inputs:
                    azureSubscription: '$(azureSubscription)'
                    scriptType: 'bash'
                    scriptLocation: 'inlineScript'
                    inlineScript: |
                      agentIP=$(curl -s https://api.ipify.org/)
                      az functionapp config access-restriction add -g xxx -n xxx --action Allow --ip-address $agentIP --priority 200

                - task: AzureFunctionApp@1
                  displayName: 'Azure Functions App Deploy: xxx'
                  inputs:
                    azureSubscription: '$(azureSubscription)'
                    appType: functionAppLinux
                    appName: $(functionAppName)
                    package: '$(Pipeline.Workspace)/drop/$(Build.BuildId).zip'
Is anyone able to advise where I am going wrong?
I've had a similar issue while adding the agent IP to the network restrictions of a storage account (using PowerShell, but you'll get the idea); we added a 60-second sleep to be sure that the settings are taken into account by Azure.
$sa_name = "sapricer$env_prefix"

if ($null -ne (Get-AzStorageAccount -ResourceGroupName $sa_rg -AccountName $sa_name -ErrorAction Ignore)) {
    Write-Output "Storage account '$sa_name' exists"

    if ($enable) {
        Write-Output "Add ip rule for $current_ip on $sa_name..."
        Add-AzStorageAccountNetworkRule -ResourceGroupName $sa_rg -AccountName $sa_name -IPAddressOrRange $current_ip
    }
    else {
        Write-Output "Remove ip rule for $current_ip on $sa_name..."
        Remove-AzStorageAccountNetworkRule -ResourceGroupName $sa_rg -AccountName $sa_name -IPAddressOrRange $current_ip
    }
}

# Give Azure time to apply the network rule before continuing
Start-Sleep -Seconds 60
I found the solution to this.
Function Apps have two IP Restriction sections, one for the App and one for the SCM site. The SCM site is the one that requires the IP to be whitelisted in order for the deployment to work:
az functionapp config access-restriction add --scm-site true -g xxx -n xxx --action Allow --ip-address $agentIP --priority 200
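To avoid leaving the agent IP whitelisted permanently, a matching cleanup step can run after the deployment (a sketch; same placeholder resource group and app name as above):
az functionapp config access-restriction remove --scm-site true -g xxx -n xxx --ip-address $agentIP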
You can deploy an Azure function app from an Azure DevOps pipeline using the Azure Function App task from the built-in Azure DevOps pipeline tasks.
Here is a sample snippet for deploying an Azure function app:
variables:
  azureSubscription: Contoso
  # To ignore SSL error, uncomment the below variable
  # VSTS_ARM_REST_IGNORE_SSL_ERRORS: true

steps:
  - task: AzureFunctionApp@1
    displayName: Azure Function App Deploy
    inputs:
      azureSubscription: $(azureSubscription)
      appName: samplefunctionapp
      package: $(System.DefaultWorkingDirectory)/**/*.zip
Here is the Microsoft documentation for deploying an Azure function app.
I have a DevOps pipeline that gives me this error:
There was a resource authorization issue: "The pipeline is not valid. Job ExecutionTerraform: Step AzureCLI input connectedServiceNameARM references service connection Azure: $(subscriptionName) which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz."
The configuration I am using is looking up the Subscription name dynamically.
The step I use for that is:
- bash: |
    # pull the subscription data
    # ... read data into local variables
    # set the shared variables
    echo "##vso[task.setvariable variable=subscriptionId]${SUBSCRIPTION_ID}"
    echo "##vso[task.setvariable variable=subscriptionName]${SUBSCRIPTION_NAME}"
From there I attempt to call the Azure CLI via a template:
- template: execution-cli.yml
  parameters:
    azureSubscriptionId: $(subscriptionId)
    azureSubscriptionName: $(subscriptionName)
Inside the template my CLI step uses:
steps:
  - task: AzureCLI@2
    displayName: Test CLI
    inputs:
      azureSubscription: "ARMTest ${{ parameters.azureSubscriptionName }}"
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        az --version
      addSpnToEnvironment: true
      useGlobalConfig: true
It looks like Pipelines is trying to preemptively check authorization without noticing that there's a variable in there. What am I doing wrong here that is causing Azure to attempt to resolve that at the wrong time?
I do this in other pipelines without issues and I am not sure what is different in this particular instance.
Update 1: Working Template I have Elsewhere
Full template:
parameters:
  - name: environment
    type: string

jobs:
  - job: AKSCredentials
    displayName: "AKS Credentials Pull"
    steps:
      - task: AzureCLI@2
        displayName: AKS Credentials
        inputs:
          azureSubscription: "Azure: testbed-${{ parameters.environment }}"
          scriptType: bash
          scriptLocation: inlineScript
          inlineScript: az aks get-credentials -g testbed-${{ parameters.environment }} -n testbed-${{ parameters.environment }}-aks
          addSpnToEnvironment: true
          useGlobalConfig: true
This is not possible, because the Azure subscription needs to be known at compile time, while you set your variable at runtime.
Here is an issue with a similar case where it is explained:
run time variables aren't supported for service connection OR azure subscription. In your code sample, you are referring to AzureSubscription variable which will get initialized at the run time (but not at save time). Your syntax is correct but you need to set AzureSubscription variable as part of variables.
If you define your variables like:
variables:
  subscriptionId: someValue
  subscriptionName: someValue
and then use them:
- template: execution-cli.yml
  parameters:
    azureSubscriptionId: $(subscriptionId)
    azureSubscriptionName: $(subscriptionName)
it should work. But since you set your variables at runtime, it causes your issue.
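If the value genuinely has to vary per run, one workaround is a runtime parameter instead of a variable, because template expressions like ${{ parameters.x }} are resolved when the run is queued, early enough for service connection authorization. A sketch (the parameter name and default here are hypothetical):
parameters:
  - name: subscriptionName
    type: string
    default: testbed-dev  # hypothetical default; use your real service connection suffix

steps:
  - task: AzureCLI@2
    inputs:
      azureSubscription: "ARMTest ${{ parameters.subscriptionName }}"
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: az account show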
The following YAML snippet is a part of my Azure DevOps build pipeline:
- task: TerraformTaskV2@2
  displayName: 'Terraform plan'
  inputs:
    provider: 'azurerm'
    command: 'plan'
    workingDirectory: '$(System.DefaultWorkingDirectory)/infrastructure'
    commandOptions: '-out $(Build.BuildNumber)'
    environmentServiceNameAzureRM: 'MASKED'

- task: TerraformTaskV2@2
  displayName: 'Terraform approve and apply'
  name: terraformApply
  inputs:
    provider: 'azurerm'
    command: 'apply'
    workingDirectory: '$(System.DefaultWorkingDirectory)/infrastructure'
    commandOptions: '$(Build.BuildNumber)'
    environmentServiceNameAzureRM: 'MASKED'
Also, "terraform plan" stage creates an output whose name is the same as the build number but terraform apply wouldn't pick that name to simulate a graceful skip if the resource group already exists. Terraform apply task always appends "auto-approve" per the following example where 1.0.0 is the build number:
terraform apply -auto-approve 1.0.0
This piece of YAML runs well and creates the resource group if it does not exist. There are a few other steps after this step that have to run too. terraformApply stage fails if the resource group already exists and hence the following steps won't run. I would like to have a graceful pipeline to skip terraform apply stage if the resource group already exists and execute the following steps in the pipeline after "apply". How can I achieve this goal?
The error details read as below:
2021-06-01T13:30:26.3472705Z ##[section]Starting: Terraform approve and apply
2021-06-01T13:30:26.3481129Z ==============================================================================
2021-06-01T13:30:26.3481656Z Task : Terraform
2021-06-01T13:30:26.3482325Z Description : Execute terraform commands to manage resources on AzureRM, Amazon Web Services(AWS) and Google Cloud Platform(GCP)
2021-06-01T13:30:26.3482826Z Version : 2.188.1
2021-06-01T13:30:26.3483165Z Author : Microsoft Corporation
2021-06-01T13:30:26.3483587Z Help : [Learn more about this task](https://aka.ms/AA5j5pf)
2021-06-01T13:30:26.3484057Z ==============================================================================
2021-06-01T13:30:26.4679906Z [command]/opt/hostedtoolcache/terraform/0.15.4/x64/terraform providers
2021-06-01T13:30:27.0463781Z
2021-06-01T13:30:27.0465060Z Providers required by configuration:
2021-06-01T13:30:27.0465541Z .
2021-06-01T13:30:27.0466639Z ├── provider[registry.terraform.io/hashicorp/azurerm] >= 2.26.0
2021-06-01T13:30:27.0467592Z └── provider[registry.terraform.io/hashicorp/random]
2021-06-01T13:30:27.0467902Z
2021-06-01T13:30:27.0478744Z [command]/opt/hostedtoolcache/terraform/0.15.4/x64/terraform validate
2021-06-01T13:30:28.5103220Z Success! The configuration is valid.
2021-06-01T13:30:28.5192981Z [command]/opt/hostedtoolcache/terraform/0.15.4/x64/terraform apply -auto-approve 1.0.0
2021-06-01T13:30:34.9083339Z azurerm_resource_group.gf: Creating...
2021-06-01T13:30:34.9699616Z Error: A resource with the ID "/subscriptions/MASKED/resourceGroups/FooResourceZGroup" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group" for more information.
2021-06-01T13:30:34.9702369Z   with azurerm_resource_group.gf,
2021-06-01T13:30:34.9703361Z   on main.tf line 16, in resource "azurerm_resource_group" "gf":
2021-06-01T13:30:34.9704127Z   16: resource "azurerm_resource_group" "gf" {
2021-06-01T13:30:34.9828118Z ##[error]Error: The process '/opt/hostedtoolcache/terraform/0.15.4/x64/terraform' failed with exit code 1
2021-06-01T13:30:34.9843807Z ##[section]Finishing: Terraform approve and apply
UPDATE
The terraform YAML looks like the following code snippet in the pipeline:
- task: TerraformTaskV2@2
  displayName: 'Terraform init'
  inputs:
    provider: 'azurerm'
    command: 'init'
    workingDirectory: '$(System.DefaultWorkingDirectory)/infrastructure'
    backendServiceArm: 'MASKED'
    backendAzureRmResourceGroupName: 'masked'
    backendAzureRmStorageAccountName: 'masked'
    backendAzureRmContainerName: 'multitstate'
    backendAzureRmKey: 'terraform.state'

- task: TerraformTaskV2@2
  displayName: 'Terraform plan'
  inputs:
    provider: 'azurerm'
    command: 'plan'
    workingDirectory: '$(System.DefaultWorkingDirectory)/infrastructure'
    commandOptions: '-out $(Build.BuildNumber)'
    environmentServiceNameAzureRM: 'MASKED'

- task: TerraformTaskV2@2
  displayName: 'Terraform approve and apply'
  name: terraformApply
  inputs:
    provider: 'azurerm'
    command: 'apply'
    workingDirectory: '$(System.DefaultWorkingDirectory)/infrastructure'
    commandOptions: '$(Build.BuildNumber)'
    environmentServiceNameAzureRM: 'MASKED'
Since your resource was created outside of Terraform, you need to import it into Terraform; this is not done without your explicit instruction. For that you need to use the terraform import command. You only need to run it once, so you can create a separate pipeline, run it once and then remove that pipeline (or do this as a step in the current pipeline).
Unfortunately, this task doesn't support the import command, so you need to run a custom script like:
terraform init
terraform state pull
terraform import azurerm_resource_group.example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example
Alternatively, remove the resource and configure the backend for your tasks:
backendServiceArm: 'full-subscription'
backendAzureRmResourceGroupName: 'azure-functions-minimal-downtime'
backendAzureRmStorageAccountName: 'afmdst'
backendAzureRmContainerName: 'activityitems'
backendAzureRmKey: 'test'
and run your pipeline again.
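For this question's resource group, with the resource address and ID taken from the error output above (keeping the MASKED subscription placeholder), the one-off import would be:
terraform import azurerm_resource_group.gf /subscriptions/MASKED/resourceGroups/FooResourceZGroup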
I am trying to run Terraform in my Azure DevOps pipeline. I am using the Terraform extension version 0.1.8 from the marketplace by Microsoft DevLabs.
My task looks as below:
- task: TerraformTaskV1@0
  displayName: 'Terraform - Init'
  inputs:
    provider: 'azurerm'
    command: 'init'
    commandOptions: '-input=false'
    backendServiceArm: 'service-connection'
    backendAzureRmResourceGroupName: 'Project-RG'
    backendAzureRmStorageAccountName: 'projectsa'
    backendAzureRmContainerName: 'tfstate'
    backendAzureRmKey: 'terraform.tfstate'
    workingDirectory: terraform
The command it tries to execute is:
/opt/hostedtoolcache/terraform/0.13.5/x64/terraform init -backend-config=storage_account_name=projectsa -backend-config=container_name=tfstate -backend-config=key=terraform.tfstate -backend-config=resource_group_name=Project-RG -backend-config=arm_subscription_id=xxxx-xxxx-xxxx -backend-config=arm_tenant_id=*** -backend-config=arm_client_id=*** -backend-config=arm_client_secret=***
And the error message is:
Initializing the backend...
Error: Invalid backend configuration argument
The backend configuration argument "storage_account_name" given on the command line is not expected for the selected backend type.
Error: Invalid backend configuration argument
The backend configuration argument "container_name" given on the command line is not expected for the selected backend type.
Error: Invalid backend configuration argument
The backend configuration argument "key" given on the command line is not expected for the selected backend type.
I ran into a similar error and found that using the task TerraformTaskV2@2 in my pipeline YAML, as opposed to the older TerraformTaskV1@0, resolved the issue. The newer task also works with a very recent Terraform version like 1.1.4.
# Install Terraform on Agent
- task: TerraformInstaller@0
  displayName: 'install'
  inputs:
    terraformVersion: '1.1.4'

# Initialize Terraform
- task: TerraformTaskV2@2
  displayName: 'init'
  inputs:
    provider: 'azurerm'
    command: 'init'
    backendAzureRmResourceGroupName: 'prodbackendstf'
    backendAzureRmStorageAccountName: 'productiontfstate'
    backendAzureRmContainerName: 'tfstate'
    backendAzureRmKey: 'tf.state'
    backendServiceArm: 'IaC SPn'
    workingDirectory: '$(System.DefaultWorkingDirectory)/terraform'
Fixed it. The solution is slightly embarrassing: the .tf files' backend was declared as local. Which now makes sense, as the local backend does not support these parameters. Changing the backend to azurerm fixed it. Make sure you have the correct backend defined, as the error does say the parameters are not expected for the selected backend.
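For illustration, this is the difference in question (a minimal sketch; the storage values would still be supplied via -backend-config at init time):
# Broken: the local backend does not accept the azurerm storage arguments
terraform {
  backend "local" {}
}

# Fixed: the azurerm backend accepts them
terraform {
  backend "azurerm" {}
}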
I'm using Terraform v1.1.3 and the TerraformTaskV1@0 Azure pipeline task. I was getting the same issue, which was strange, as the same azurerm block used to work when I was on 0.14.x. To fix it, I edited the backend block to use an access_key (blob storage key) instead, then removed TerraformTaskV1@0 and initialised Terraform using a cmd task as shown below:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.92"
    }
  }

  backend "azurerm" {
    storage_account_name = "**terraformStorageAccount**"
    container_name       = "**terraformStateFileContainer**"
    key                  = "**terraformStateFile**"
    access_key           = "**storageKey**" # this is sensitive, so it should be retrieved from a safe place via Key Vault or DevOps pipeline variables
  }

  required_version = ">= 1.1.0"
}
I use this task to replace the tokens in the .tf files with the pipeline variables. You'll have to install this extension, by the way:
- task: replacetokens@3
  displayName: Replace Variable Tokens
  inputs:
    rootDirectory: '$(Pipeline.Workspace)'
    targetFiles: '**/*.tf'
    encoding: 'auto'
    writeBOM: true
    actionOnMissing: 'warn'
    keepToken: false
    tokenPrefix: '**'
    tokenSuffix: '**'
    useLegacyPattern: false
    enableTelemetry: false
Once the variables are replaced, use a cmd task to run terraform init:
- task: CmdLine@2
  inputs:
    script: 'terraform init'
I ran this:
- task: TerraformInstaller@0
  inputs:
    terraformVersion: '0.13.5'

- task: TerraformTaskV1@0
  inputs:
    provider: 'azurerm'
    command: 'init'
    workingDirectory: '$(System.DefaultWorkingDirectory)/stackoverflow/74-terraform'
    backendServiceArm: 'rg-the-code-manual'
    backendAzureRmResourceGroupName: 'TheCodeManual'
    backendAzureRmStorageAccountName: 'thecodemanual'
    backendAzureRmContainerName: 'infra'
    backendAzureRmKey: 'tfstate-so-74'
    commandOptions: '-input=false'
and got it working:
2020-12-04T10:06:25.4318809Z [command]/opt/hostedtoolcache/terraform/0.13.5/x64/terraform init -backend-config=storage_account_name=thecodemanual -backend-config=container_name=infra -backend-config=key=tfstate-so-74 -backend-config=resource_group_name=TheCodeManual -backend-config=arm_subscription_id=<subscriptionId> -backend-config=arm_tenant_id=*** -backend-config=arm_client_id=*** -backend-config=arm_client_secret=***
2020-12-04T10:06:25.4670082Z
2020-12-04T10:06:25.4675423Z Initializing the backend...
2020-12-04T10:06:25.4740133Z Successfully configured the backend "azurerm"! Terraform will automatically
2020-12-04T10:06:25.4742265Z use this backend unless the backend configuration changes.
2020-12-04T10:06:25.9244849Z Warning: "arm_client_id": [DEPRECATED] `arm_client_id` has been replaced by `client_id`
2020-12-04T10:06:25.9251909Z Warning: "arm_client_secret": [DEPRECATED] `arm_client_secret` has been replaced by `client_secret`
2020-12-04T10:06:25.9256913Z Warning: "arm_tenant_id": [DEPRECATED] `arm_tenant_id` has been replaced by `tenant_id`
2020-12-04T10:06:25.9264816Z Warning: "arm_subscription_id": [DEPRECATED] `arm_subscription_id` has been replaced by `subscription_id`
There are warnings about the deprecated settings, but at the moment this doesn't lead to a failure. For this there is already an issue and a PR on GitHub.
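For reference, using the non-deprecated equivalents from those warnings, a manual init would look like this (a sketch built from the same values shown in the log above; the masked values stay masked):
terraform init \
  -backend-config=storage_account_name=thecodemanual \
  -backend-config=container_name=infra \
  -backend-config=key=tfstate-so-74 \
  -backend-config=resource_group_name=TheCodeManual \
  -backend-config=subscription_id=<subscriptionId> \
  -backend-config=tenant_id=*** \
  -backend-config=client_id=*** \
  -backend-config=client_secret=***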
Did you run TerraformInstaller before TerraformTaskV1?