I think I may be approaching this the wrong way. I have an azure-pipelines.yml file that deploys infrastructure through Terraform. So far, the pipeline installs Terraform in the environment with no issue. I am trying to run terraform init from a PowerShell script, and I am running into an error. Within the PowerShell script command I reference pipeline variables for access_key and secret_key. When the pipeline executes, I get the error no valid credential sources for S3 Backend found. Most likely this is happening because I am referencing the variables incorrectly. I have also set the variables in my Terraform variables file, but that may not be necessary, since I am trying to read them from the pipeline variables. Below is the code for azure-pipelines.yml and the error from the pipeline output. Any advice would be appreciated.
azure-pipelines.yml
trigger:
- master

pool:
  vmImage: ubuntu-latest

stages:
- stage: TerraformInstall
  displayName: Terraform
  jobs:
  - job: InstallTerraform
    displayName: Install Terraform
    steps:
    - task: charleszipp.azure-pipelines-tasks-terraform.azure-pipelines-tasks-terraform-installer.TerraformInstaller@0
- stage: Init
  displayName: Init
  jobs:
  - job: init
    displayName: Terraform init
    steps:
    - task: PowerShell@2
      inputs:
        targetType: 'inline'
        script: 'terraform init -var access_key=${env:ACCESS_KEY} -var secret_key=${env:SECRET_KEY}'
Error
==============================================================================
Task : PowerShell
Description : Run a PowerShell script on Linux, macOS, or Windows
Version : 2.200.0
Author : Microsoft Corporation
Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/utility/powershell
==============================================================================
Generating script.
========================== Starting Command Output ===========================
/usr/bin/pwsh -NoLogo -NoProfile -NonInteractive -Command . '/home/vsts/work/_temp/6e333d67-4373-4ae7-bc4b-96cc38572961.ps1'
Initializing the backend...
╷
│ Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
│
│ Please see https://www.terraform.io/docs/language/settings/backends/s3.html
│ for more information about providing credentials.
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
│
│
│
╵
##[error]PowerShell exited with code '1'.
Finishing: PowerShell
Without knowing more about how your variables are set, it's hard to give a complete solution.
First, I don't think terraform init accepts input variables (correct me if I'm wrong). Just guessing here: you're passing ACCESS_KEY and SECRET_KEY to be used with your Terraform backend. If that's the case, see this Stack Overflow answer on how to do that. To summarize what that answer says:
Create a separate .tfvars file that stores the variables that will be used for your backend configuration
Use that .tfvars file with your terraform init like so:
terraform init -backend-config=backend.tfvars
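For example (a sketch, with placeholder values rather than anything from your pipeline), a backend.tfvars for an S3 backend could hold the credentials the error message is asking for:
# backend.tfvars -- keep this file out of source control, it holds credentials
access_key = "<your-aws-access-key>"
secret_key = "<your-aws-secret-key>"
Passing the file via -backend-config at init time merges these values into the backend "s3" block of your configuration, without declaring them as input variables.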
Terraform aside, if you're using Azure DevOps Library variable groups, you can pass the variables to your script using the example below. Again, I don't think this will help you initialize your Terraform backend.
Note: You may need to play with the quotes depending on the OS of your agent.
This example assumes you have created an ADO library variable group named YourVariableGroupNameHere and have created two variables in that group named ACCESS_KEY and SECRET_KEY.
- stage: Init
  displayName: Init
  jobs:
  - job: init
    variables:
    - group: YourVariableGroupNameHere
    displayName: Terraform init
    steps:
    - task: PowerShell@2
      inputs:
        targetType: 'inline'
        script: 'terraform init -var access_key=$(ACCESS_KEY) -var secret_key=$(SECRET_KEY)'
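If you'd rather keep the ${env:ACCESS_KEY} syntax from your original script, a variant that should work is mapping the variables into the task's environment explicitly (a sketch, assuming the same variable group; note that secret variables are never mapped into the environment automatically, so the env: block is required for them):
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    # ${env:...} resolves here because of the env: mapping below
    script: 'terraform init -var access_key=${env:ACCESS_KEY} -var secret_key=${env:SECRET_KEY}'
  env:
    ACCESS_KEY: $(ACCESS_KEY)
    SECRET_KEY: $(SECRET_KEY)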
Related
I am trying to create an Azure pipeline with Terraform. When I ran it for the first time, it created half of the resources and failed at the apply step. When I corrected the steps, it failed with the error below.
Error: A resource with the ID "/subscriptions/2c13ad21-ae92-4e09-b64f-2e24445dc076/resourceGroups/apim-resource-gp" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group" for more information.
│
│ with module.resource_gp.azurerm_resource_group.apim_rg,
│ on resourcegroup/resource-group.tf line 1, in resource "azurerm_resource_group" "apim_rg":
│ 1: resource "azurerm_resource_group" "apim_rg" {
Here I observed the problem: the plan step is again creating a plan file that marks all resources as 'to be created', rather than skipping the resources that already exist.
Another observation is that my tfstate file, which was supposed to be created in the storage account, never got created. I am unable to figure out what has gone wrong here.
Pasting my azure-pipelines.yaml
azure-pipelines.yaml
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
variables:
  tf_version: "latest"
  tf_state_rg: "blogpost-tfstate-rg"
  tz_state_location: "centralus"
  tf_state_sa_name: "apimstrgaccount"
  tf_state_container_name: "tfstate"
  tf_state_tags: ("env=blogpost-terraform-devops-pipeline" "deployedBy=devops")
  tf_environment: "dev"
  tf_state_sku: "Standard_LRS"
  SUBSCRIPTION_NAME: "pipeline-terraform"

trigger:
- main

pool:
  vmImage: ubuntu-latest

steps:
- task: terraformInstaller@0
  displayName: "Install Terraform $(tf_version)"
  inputs:
    terraformVersion: "$(tf_version)"
- task: TerraformCLI@0
  inputs:
    command: "init"
    backendType: "azurerm"
    backendServiceArm: "$(SUBSCRIPTION_NAME)"
    ensureBackend: true
    backendAzureRmResourceGroupName: "$(tf_environment)-$(tf_state_rg)"
    backendAzureRmResourceGroupLocation: "$(tz_state_location)"
    backendAzureRmStorageAccountName: "$(tf_state_sa_name)"
    backendAzureRmStorageAccountSku: "$(tf_state_sku)"
    backendAzureRmContainerName: $(tf_state_container_name)
    backendAzureRmKey: "$(tf_environment).terraform.tstate"
  displayName: "Run > terraform init"
- task: TerraformCLI@0
  inputs:
    command: "validate"
    environmentServiceName: "$(SUBSCRIPTION_NAME)"
  displayName: "Run > terraform validate"
- task: TerraformCLI@0
  displayName: 'terraform plan'
  inputs:
    command: plan
    publishPlanResults: "$(SUBSCRIPTION_NAME)"
    environmentServiceName: "$(SUBSCRIPTION_NAME)"
    commandOptions: '-out=$(System.DefaultWorkingDirectory)/terraform.tfplan -detailed-exitcode'
- task: TerraformCLI@0
  displayName: 'terraform apply'
  condition: and(succeeded(), eq(variables['TERRAFORM_PLAN_HAS_CHANGES'], 'true'))
  inputs:
    command: apply
    environmentServiceName: "$(SUBSCRIPTION_NAME)"
    commandOptions: '$(System.DefaultWorkingDirectory)/terraform.tfplan'
I came across a similar error, resource with the ID "/subscriptions/xxxx/resourceGroups/<rg>" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group" for more information, when I tried to run a Terraform pipeline in Azure DevOps.
The DevOps pipeline was not able to find the state in the Azure UI, even though I had the azurerm provider set:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.2"
    }
  }
}
The error happens when the Terraform state does not match the real state.
I had a terraform.tfstate file where the current state was stored locally, but the error still occurred.
It went away when I added a terraform backend block to the main.tf file; after that, the configuration executed without the error.
Try it with no values, like below; with empty values the backend is only partially configured, and the actual values can be supplied at init time (for example from the TerraformCLI task's backendAzureRm* inputs):
terraform {
  backend "azurerm" {
    resource_group_name  = ""
    storage_account_name = ""
    container_name       = ""
    key                  = ""
  }
}
Or give the values:
terraform {
  backend "azurerm" {
    resource_group_name  = "<rg>"
    storage_account_name = "<storage-account>"
    container_name       = "terraform"
    key                  = "terraform.tfstate"
  }
}
This stores the Terraform state, with state locking, in the Azure storage account.
Also try importing the existing resource into state using terraform import <terraform_id> <azure_resource_id>.
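Applied to the resource address and ID from your error message, the import would look something like this (run it with the same working directory and backend configuration as the pipeline):
terraform import module.resource_gp.azurerm_resource_group.apim_rg \
  /subscriptions/2c13ad21-ae92-4e09-b64f-2e24445dc076/resourceGroups/apim-resource-gp
After the import, terraform plan should no longer try to recreate the resource group.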
I am trying to use an Azure DevOps deployment job to create a ServiceNow Standard Change Request, register its sys_id with the pipeline, and then use it in subsequent phases of the deployment job. I have a Python utility that creates a Change Request, filters out the sys_id, and registers it as a variable. I can access that variable in the same "phase" of the deployment job, but the next phase is not working as expected; the docs don't really cover a use like this other than contrived examples. I think I was following Set variables in scripts. See my pipeline below.
- task: Bash@3
  name: snow
  displayName: Create Standard RFC from Template
  # This task registers the SYS_ID of the RFC being created. I want to use this
  # throughout the rest of the Deployment Job.
  inputs:
    targetType: inline
    script: |
      export sys_id=$(servicenow standard create template $(std_tmpl_sys_id) --query="result.sys_id.value")
      echo "##vso[task.setvariable variable=rfc_sys_id;isoutput=true]$sys_id"
  env:
    SNOW_USER: '$(SNOW_USER)'
    SNOW_PASS: '$(SNOW_PASS)'
- task: Bash@3
  displayName: Progress RFC to Scheduled
  # This works, for this one task.
  inputs:
    targetType: inline
    script: |
      servicenow standard update $(snow.rfc_sys_id) state=Scheduled
  env:
    SNOW_USER: '$(SNOW_USER)'
    SNOW_PASS: '$(SNOW_PASS)'
deploy:
  steps:
  - task: Bash@3
    displayName: Install ServiceNow
    inputs:
      targetType: inline
      script: |
        pip install snow --index-url=https://azure:$(System.AccessToken)@pkgs.dev.azure.com/$(ADO_ORG)/$(ADO_PROJ)/_packaging/python-azure-artifacts/pypi/simple/
  - task: Bash@3
    displayName: Progress RFC to Implement
    # Here, I attempt to get the registered variable from the preDeploy "phase" and
    # use it as a shell variable, because otherwise Azure DevOps would try to just
    # execute it as a shell command.
    inputs:
      targetType: inline
      script: |
        servicenow standard update ${RFC_SYS_ID} state=Implement
    env:
      SNOW_USER: '$(SNOW_USER)'
      SNOW_PASS: '$(SNOW_PASS)'
      RFC_SYS_ID: $[ dependencies.BuildPythonApp.outputs['preDeploy.rfc_sys_id'] ]
Also found here:
https://gist.github.com/FilBot3/d8184b3c0b1c887e7e99884b051bd73c#file-azure-pipelines-yaml-L89-L131
Is it even possible to do this in Azure DevOps YAML Pipelines using a Deployment Job?
Can you try these two? I think only the step name is missing.
RFC_SYS_ID: $[ dependencies.BuildPythonApp.outputs['preDeploy.snow.rfc_sys_id'] ]
or
RFC_SYS_ID: $[ dependencies.BuildPythonApp.outputs['BuildPythonApp.snow.rfc_sys_id'] ]
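For context, here is the general pattern for ordinary jobs, which is well documented (deployment jobs add the lifecycle-hook segment to the path, which is what the two variants above are probing). A minimal sketch with placeholder job names and a hard-coded value:
jobs:
- job: A
  steps:
  # 'name: snow' is what makes the output addressable as snow.rfc_sys_id
  - bash: echo "##vso[task.setvariable variable=rfc_sys_id;isoutput=true]abc123"
    name: snow
- job: B
  dependsOn: A
  variables:
    RFC_SYS_ID: $[ dependencies.A.outputs['snow.rfc_sys_id'] ]
  steps:
  - bash: echo "$RFC_SYS_ID"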
I am trying to use an Azure DevOps pipeline to build AKS with Terraform, and I want to pass variable values from a prod_terraform.tfvars file.
I want to run terraform plan -var-file=prod_terraform.tfvars.
Here is the YAML code:
- task: TerraformCLI@0
  displayName: Terraform Plan
  inputs:
    command: 'plan'
    workingDirectory: '$(System.DefaultWorkingDirectory)/terraform-manifests'
    commandOptions: '-out aks_cluster.tfplan'
    allowTelemetryCollection: false
Below is the error:
/opt/hostedtoolcache/terraform/1.0.8/x64/terraform plan -out aks_cluster.tfplan
Acquiring state lock. This may take a few moments...
var.acr_demo
Enter a value:
##[error]The operation was canceled.
Finishing: Terraform Plan
Terraform automatically loads variable values from a variable definition file if it is named terraform.tfvars or ends in .auto.tfvars and is placed in the same directory as the other configuration files, like below:
development
└── server
    ├── main.tf
    ├── variables.tf
    └── terraform.tfvars
I renamed the file and it worked.
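Alternatively, if you want to keep the prod_terraform.tfvars name, you should be able to pass it explicitly through commandOptions (a sketch based on the task from the question; note there are no spaces around =):
- task: TerraformCLI@0
  displayName: Terraform Plan
  inputs:
    command: 'plan'
    workingDirectory: '$(System.DefaultWorkingDirectory)/terraform-manifests'
    # explicit -var-file instead of relying on the auto-load naming rules
    commandOptions: '-var-file=prod_terraform.tfvars -out aks_cluster.tfplan'
    allowTelemetryCollection: false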
I have an Azure DevOps build pipeline that runs a Cypress test. In that Cypress test we have a test user log in with an e-mail and password. On my local system I keep the password in a cypress.env.json file.
On the Azure build pipeline I get the message that the password is undefined, which makes sense, since we put cypress.env.json in .gitignore so as not to expose it in the repo.
I've created an Azure variable to represent the password: $(ACCOUNT_PASSWORD)
So I think I need to create the cypress.env.json file in the build pipeline and fill it from Azure variables, but I can't figure out how to create a file during the build step.
I have this task:
- task: CmdLine@2
  displayName: 'run Cypress'
  inputs:
    script: |
      npm run ci
So I need to add a task before this one that creates the cypress.env.json file with the variable that represents the password:
{
  "ACCOUNT_PASSWORD": "$(ACCOUNT_PASSWORD)"
}
You can add a simple PowerShell script that creates the file:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      # $(ACCOUNT_PASSWORD) is expanded by the pipeline before the script runs;
      # quote it so the resulting file is valid JSON.
      $json = '{
        "ACCOUNT_PASSWORD": "$(ACCOUNT_PASSWORD)"
      }'
      $json | Out-File cypress.env.json
    workingDirectory: '$(Build.SourcesDirectory)'
    pwsh: true # For Linux
In workingDirectory, set the path where you want the file to be created.
If you want to create a JSON file using Azure Pipelines' Bash@3 task for Linux environments, you can do:
steps:
- task: Bash@3
  inputs:
    targetType: "inline"
    script: |
      echo '{"ACCOUNT_PASSWORD": "$(ACCOUNT_PASSWORD)"}' > server/cypress.env.json
      cd server
      echo $(ls)
      cat cypress.env.json
- task: Docker@2
  inputs:
    command: buildAndPush
    ...
The cd, echo, and cat commands are not necessary; they only log to the console so you can see where the file is and what it contains. This task may seem trivial to many, but for someone like me with little Bash experience, even something this simple took quite a while to debug and get right.
Azure Pipelines runs this Bash script in Build.SourcesDirectory, which is the root of your project. In the example above I have a /server folder, which is where I want to put the .json file.
Later on, I run the Docker@2 task, which builds my server. The Docker@2 task uses my Dockerfile, which has the instruction COPY . ./. I could be wrong here, being new to Docker, but my assumption is that the Azure VM running the pipeline executes the COPY command and copies the file from Build.SourcesDirectory to a destination inside the Docker container.
I am creating a YAML pipeline in Azure DevOps that consists of two stages.
The first stage (Prerequisites) is responsible for reading the git commit and creating a comma-separated variable containing the list of services affected by the commit.
The second stage (Build) is responsible for building and unit-testing the project. This stage consists of many templates, one for each service. In the template script, the job checks whether the relevant service is in the variable created in the previous stage. If the job finds the service, it continues to build and test it; otherwise it skips that job.
Run.yml:
stages:
- stage: Prerequisites
  jobs:
  - job: SetBuildQueue
    steps:
    - task: PowerShell@2
      name: SetBuildQueue
      displayName: 'Set.Build.Queue'
      inputs:
        targetType: inline
        script: |
          ## ... PowerShell script to get changes - working as expected
          Write-Host "Build Queue Auto: $global:buildQueueVariable"
          Write-Host "##vso[task.setvariable variable=buildQueue;isOutput=true]$global:buildQueueVariable"
- stage: Build
  jobs:
  - job: StageInitialization
  - template: Build.yml
    parameters:
      projectName: Service001
      projectLocation: src/Service001
  - template: Build.yml
    parameters:
      projectName: Service002
      projectLocation: src/Service002
Build.yml:
parameters:
  projectName: ''
  projectLocation: ''

jobs:
- job:
  displayName: '${{ parameters.projectName }} - Build'
  dependsOn: SetBuildQueue
  continueOnError: true
  condition: and(succeeded(), contains(dependencies.SetBuildQueue.outputs['SetBuildQueue.buildQueue'], '${{ parameters.projectName }}'))
  steps:
  - task: NuGetToolInstaller@1
    displayName: 'Install Nuget'
Issue:
When the first stage runs, it creates a variable called buildQueue, which is populated as seen in the console output of the PowerShell script task:
Service001 Changed
Build Queue Auto: Service001;
However, when it gets to stage two and tries to run the build template, the condition check returns the following output:
Started: Today at 12:05 PM
Duration: 16m 7s
Evaluating: and(succeeded(), contains(dependencies['SetBuildQueue']['outputs']['SetBuildQueue.buildQueue'], 'STARS.API.Customer.Assessment'))
Expanded: and(True, contains(Null, 'service001'))
Result: False
So my question is: how do I set dependsOn and the condition to get the information from the previous stage?
That's because you want to access the variable in a different stage from the one where you defined it. Currently that isn't possible this way; each stage is a new instance on a fresh agent.
This blog describes a workaround that involves writing the variable to disk and then passing it between stages as a pipeline artifact.
To pass the variable FOO from a job to another one in a different stage:
Create a folder that will contain all variables you want to pass; any folder could work, but something like mkdir -p $(Pipeline.Workspace)/variables might be a good idea.
Write the contents of the variable to a file, for example echo "$FOO" > $(Pipeline.Workspace)/variables/FOO. Even though the name could be anything you’d like, giving the file the same name as the variable might be a good idea.
Publish the $(Pipeline.Workspace)/variables folder as a pipeline artifact named variables
In the second stage, download the variables pipeline artifact
Read each file into a variable, for example FOO=$(cat $(Pipeline.Workspace)/variables/FOO)
Expose the variable in the current job, just like we did in the first example: echo "##vso[task.setvariable variable=FOO]$FOO"
You can then access the variable by expanding it within Azure Pipelines ($(FOO)) or use it as an environment variable inside a bash script ($FOO).
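Here is a minimal sketch of that workaround, using the publish/download artifact shortcuts; stage names, job names, and the value are placeholders:
stages:
- stage: One
  jobs:
  - job: ProduceVar
    steps:
    - bash: |
        # Write the variable to a file inside a dedicated folder
        mkdir -p $(Pipeline.Workspace)/variables
        echo "some value" > $(Pipeline.Workspace)/variables/FOO
    # Publish the folder as a pipeline artifact named 'variables'
    - publish: $(Pipeline.Workspace)/variables
      artifact: variables
- stage: Two
  jobs:
  - job: ConsumeVar
    steps:
    # Download the artifact back into $(Pipeline.Workspace)/variables
    - download: current
      artifact: variables
    - bash: |
        # Read the file and re-expose its content as a job-scoped variable
        FOO=$(cat $(Pipeline.Workspace)/variables/FOO)
        echo "##vso[task.setvariable variable=FOO]$FOO"
    - bash: |
        echo "FOO is $(FOO)"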