I have some Azure resources (Function App, Cosmos DB, etc.) that I have successfully deployed in a resource group using terraform init-plan-apply in an Azure DevOps pipeline. From my local CLI I can change the resources in main.tf and redeploy, presumably because I have the Terraform state locally. However, when I try to redeploy using the pipeline I get the usual error:
Error: A resource with the ID "/subscriptions/xxxxxx-xxxx-xxxx-xxxx/resourceGroups/my-rg" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group" for more information.
When I try to import using the config described here, I get the unhelpful error:
##[error]Error: There was an error when attempting to execute the process '/usr/local/bin/terraform'. This may indicate the process failed to start. Error: spawn /usr/local/bin/terraform ENOENT
Am I thinking about pipelines with Terraform in the correct way? Should I be trying to import the resource group, or is there a better way to redeploy resources using Terraform?
You're right, the Terraform state is not saved on the Azure DevOps agents.
The common way is to use an Azure Storage account to store the Terraform state.
You can find the official Microsoft tutorial about it here.
You can find more guides here, here and here.
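For illustration, a minimal sketch of what the backend block could look like, assuming the storage account, container and resource group already exist (all names below are placeholders you would replace with your own):
terraform {
  backend "azurerm" {
    resource_group_name  = "my-tfstate-rg"           # placeholder: resource group holding the storage account
    storage_account_name = "mytfstatestorage"        # placeholder: storage account used for state
    container_name       = "tfstate"                 # placeholder: blob container
    key                  = "prod.terraform.tfstate"  # name of the state blob
  }
}
With a remote backend like this, every pipeline run (and your local CLI) shares the same state, so plan sees the resources created in previous runs instead of trying to create them again.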
I hope somebody can help me solve this issue and understand how to implement the best approach.
I have a production environment running tons of Azure services (SQL Server, databases, web apps, etc.).
All of that infrastructure has been created with Terraform. For as powerful as it is, I am terrified of using it in a pipeline for one reason.
Some of my friends often make changes to the infrastructure manually, and since those changes are not in my Terraform state, if I automate this process it might destroy resources ungracefully, which is something that I don't want to face.
So I was wondering if anyone can shed some light on the following question:
Is it possible to have Terraform automatically check the infrastructure state at every push to GitHub, and to quit if the output of the plan reports any change?
Edit: to make my example clear.
Let's say I have a Terraform state containing 2 web apps, and somebody manually creates a 3rd web app in that resource group, develops some code and pushes it to GitHub. My pipeline triggers, and as a first step Terraform runs a terraform plan and/or terraform apply. If this command reports any change, I want it to quit the pipeline (fail) so I will know there is something new there; if the terraform plan and/or terraform apply reports no changes and the infra is up to date, it should continue with the code deployment.
Thank you in advance for any help and clarification.
Yes, you can just run
terraform plan -detailed-exitcode
An exit code of 2 means there are changes (0 means no changes, 1 means the plan itself failed). See here for details.
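As a rough sketch, a pipeline script step could turn that exit code into a pass/fail decision like this (step names and paths are up to you; this assumes terraform init has already run on the agent):
#!/usr/bin/env bash
set -uo pipefail   # deliberately no -e: we want to inspect the exit code ourselves
terraform plan -detailed-exitcode -input=false
case $? in
  0) echo "No changes detected, continuing with the code deployment." ;;
  2) echo "Drift detected: the plan reports changes. Failing the pipeline." ; exit 1 ;;
  *) echo "terraform plan itself failed." ; exit 1 ;;
esac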
Let me point out that I would highly advise you to lock down your prod environment so that nobody can make manual changes! Your CI/CD pipeline should be the only way to make changes there.
Adding to the above answer, you can also make use of the terraform import command to import the remote changes into your state file. The terraform import command is used to import existing resources into Terraform. Afterwards, run plan to check whether the changes are in sync.
Refer: https://www.terraform.io/docs/cli/commands/import.html
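As a hedged sketch, importing a manually created web app into the state could look like the following, assuming you have first added a matching resource block to your configuration (the resource address extra_app and the Azure resource ID are placeholders):
terraform import azurerm_app_service.extra_app \
  "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Web/sites/<app-name>"
# Afterwards, verify that configuration and real infrastructure are in sync:
terraform plan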
I have inherited some responsibilities, and by that I mean managing Terraform in Azure DevOps release pipeline deployments.
I am using the Terraform Task with the following steps:
init
validate
plan
apply
But in the plan output I can see a number of resources being destroyed that I don't want to be removed.
azurerm_key_vault_secret.kv_secret_az_backup_storage_account_name will be destroyed
I was looking for a way to disable any resource destruction during the creation of the tfstate file, but there doesn't appear to be a way in Azure DevOps. So my best option, I suppose, would be to amend the underlying main.tf script, but I don't know how.
This is one of the resources being removed. I have renamed it to keep anonymity. Can anyone suggest a solution to my dilemma?
resource "azurerm_key_vault_secret" "kv_secret_az_storage_account_name" {
name = "storage-account-name"
value = azurerm_storage_account.storage_account.name
key_vault_id = azurerm_key_vault.keyvault.id
depends_on = [azurerm_storage_account.storage_account]
}
The plan phase doesn't destroy your resources, nor does it create new ones. It just informs you of what will happen when you run apply.
So
azurerm_key_vault_secret.kv_secret_az_backup_storage_account_name will be destroyed
this just says that the Key Vault secret will be destroyed if you run apply.
But since Terraform is trying to remove it, that means Terraform keeps information about this resource in the state. So it was created by Terraform, and if you now want to put it out of scope here - I mean you no longer want it maintained by your Terraform script - you can use the state rm command.
Items removed from the Terraform state are not physically destroyed. Items removed from the Terraform state are only no longer managed by Terraform. For example, if you remove an AWS instance from the state, the AWS instance will continue running, but terraform plan will no longer see that instance.
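As a sketch, removing the secret from the state (so Terraform forgets about it without deleting it in Azure) would use the resource address shown in the plan output:
terraform state rm azurerm_key_vault_secret.kv_secret_az_backup_storage_account_name
# The secret itself stays in the Key Vault; Terraform simply stops tracking it.
If you instead want Terraform to keep managing the secret, its resource block has to stay in (or be restored to) main.tf so that apply no longer plans its destruction.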
We have a requirement that, while provisioning the Databricks service through a CI/CD pipeline in Azure DevOps, we should be able to mount a blob storage to DBFS without connecting to a cluster. Is it possible to mount object storage to DBFS by using a bash script from Azure DevOps?
I looked through various forums, but they all mention doing this using dbutils.fs.mount, and the problem is that we cannot run this command in an Azure DevOps CI/CD pipeline.
Will appreciate any help on this.
Thanks
What you're asking is possible, but it requires a bit of extra work. In our organisation we've tried various approaches, and I've been working with Databricks for a while. The solution that works best for us is to write a bash script that makes use of the databricks-cli in your Azure DevOps pipeline. The approach we have is as follows:
Retrieve a Databricks token using the token API
Configure the Databricks CLI in the CI/CD pipeline
Use Databricks CLI to upload a mount script
Create a Databricks job using the Jobs API and set the mount script as file to execute
The steps above are all contained in a bash script that is part of our Azure Devops pipeline.
Setting up the CLI
Setting up the Databricks CLI without any manual steps is now possible, since you can generate a temporary access token using the Token API. We use a service principal for authentication.
https://learn.microsoft.com/en-US/azure/databricks/dev-tools/api/latest/tokens
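A hedged sketch of that step, assuming the service principal has already been granted access to the Databricks workspace and that CLIENT_ID, CLIENT_SECRET, TENANT_ID and DATABRICKS_HOST are provided as pipeline variables (2ff814a6-3304-4ab8-85cb-cd0e6f879c1d is the well-known Azure AD resource ID for Azure Databricks):
#!/usr/bin/env bash
set -euo pipefail
# Log in as the service principal and fetch an AAD token scoped to Azure Databricks.
az login --service-principal -u "$CLIENT_ID" -p "$CLIENT_SECRET" --tenant "$TENANT_ID" > /dev/null
AAD_TOKEN=$(az account get-access-token --resource 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d --query accessToken -o tsv)
# Exchange it for a short-lived Databricks PAT via the Token API.
PAT=$(curl -sf -X POST "$DATABRICKS_HOST/api/2.0/token/create" \
  -H "Authorization: Bearer $AAD_TOKEN" \
  -d '{"lifetime_seconds": 3600, "comment": "ci-cd mount"}' | jq -r .token_value)
# The databricks CLI picks these up from the environment.
export DATABRICKS_HOST
export DATABRICKS_TOKEN="$PAT"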
Create a mount script
We have a Scala script that follows the mount instructions. This can be Python as well. See the following link for more information:
https://docs.databricks.com/data/data-sources/azure/azure-datalake-gen2.html#mount-azure-data-lake-storage-gen2-filesystem.
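As a rough Python sketch of such a mount script, following the linked instructions (storage account, container, secret scope and mount point are all placeholders):
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type": "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": "<application-id>",
    "fs.azure.account.oauth2.client.secret": dbutils.secrets.get(scope="my-scope", key="sp-secret"),
    "fs.azure.account.oauth2.client.endpoint": "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}
# Mount the container only if it is not already mounted.
if not any(m.mountPoint == "/mnt/mydata" for m in dbutils.fs.mounts()):
    dbutils.fs.mount(
        source="abfss://<container>@<storage-account>.dfs.core.windows.net/",
        mount_point="/mnt/mydata",
        extra_configs=configs,
    )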
Upload the mount script
In the Azure DevOps pipeline the databricks-cli is configured by creating a temporary token using the Token API. Once this step is done, we're free to use the CLI to upload our mount script to DBFS or import it as a notebook using the Workspace API.
https://learn.microsoft.com/en-US/azure/databricks/dev-tools/api/latest/workspace#--import
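With the CLI configured, the upload itself is a one-liner; both variants below are sketches with placeholder paths, using the classic databricks-cli commands:
# Option A: copy the script to DBFS
databricks fs cp ./mount_storage.py dbfs:/scripts/mount_storage.py --overwrite
# Option B: import it into the workspace as a notebook
databricks workspace import ./mount_storage.py /Shared/mount_storage -l PYTHON -f SOURCE -o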
Configure the job that actually mounts your storage
We have a JSON file that defines the job that executes the "mount storage" script. You can define a job to use the script/notebook that you've uploaded in the previous step. You can easily define a job using JSON, check out how it's done in the Jobs API documentation:
https://learn.microsoft.com/en-US/azure/databricks/dev-tools/api/latest/jobs#--
At this point, triggering the job should create a temporary cluster that mounts the storage for you. You should not need to use the web interface, or perform any manual steps.
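For illustration, a hedged sketch of such a job definition and of creating and triggering it with the CLI (the cluster size, Spark version and notebook path are placeholders; the JSON follows the Jobs API 2.0 shape described in the link above):
cat > mount_job.json <<'EOF'
{
  "name": "mount-storage",
  "new_cluster": {
    "spark_version": "9.1.x-scala2.12",
    "node_type_id": "Standard_DS3_v2",
    "num_workers": 1
  },
  "notebook_task": {
    "notebook_path": "/Shared/mount_storage"
  }
}
EOF
JOB_ID=$(databricks jobs create --json-file mount_job.json | jq -r .job_id)
databricks jobs run-now --job-id "$JOB_ID"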
You can apply this approach to different environments and resource groups, as we do. For this we make use of Jinja templating to fill out variables that are environment- or project-specific.
I hope this helps you out. Let me know if you have any questions!
We have Azure DevOps Server 2019 on-prem. That means no unified pipelines, no YAML for release pipeline.
The scenario is this:
A stage runs Terraform code to provision some resources in Azure. It is necessary to insert a manual approval between terraform plan and terraform apply; however, the plan file produced by the terraform plan stage must be shared with the terraform apply stage.
I can see these options:
Save the plan file on a shared file system
Save the plan file in a dedicated storage on Azure
Save the plan file somewhere within the Azure DevOps so that stages can access it without defining a dedicated file share or Azure storage
Pass the contents of the plan file as an output variable
Personally, I like the last option the most, but I wonder what the limitations on the length of an output variable value are. What is the maximum length of a variable in Azure Pipelines? suggests it is around 32KB, which may not be good enough. Given that, is there an option to pass files between stages?
There is no built-in task you can use for this in a classic release pipeline. Given the variable length limitation, you would need to publish the plan file to a dedicated file share or to Azure storage.
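A hedged sketch of the Azure Storage variant, assuming a storage account and container already exist and the agent is logged in with az (account and container names are placeholders; $(Release.ReleaseId) is expanded by the release pipeline before the script runs and keys the blob to the release):
# Plan stage: write the plan to a file and upload it.
terraform plan -out=tfplan -input=false
az storage blob upload --account-name mytfplans --container-name plans \
  --name "$(Release.ReleaseId)-tfplan" --file tfplan --auth-mode login
# Apply stage (after the manual approval): download the same blob and apply it.
az storage blob download --account-name mytfplans --container-name plans \
  --name "$(Release.ReleaseId)-tfplan" --file tfplan --auth-mode login
terraform apply -input=false tfplan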
I have created separate ARM templates for each of DocumentDB, Azure SQL Server, Storage Account, Azure Key Vault, Azure Batch and HDInsight cluster.
When I deploy the above resources in a loop within the same resource group using the New-AzureRmResourceGroupDeployment PowerShell command, I see a strange behaviour: while deploying the DocumentDB, all my previously deployed resources in the resource group vanish (probably deleted automatically). The same happens when I deploy the Azure SQL Server.
Has anybody encountered the same issue? Is there a fix?
New-AzureRmResourceGroupDeployment has a -Mode parameter which can be set to either Complete or Incremental.
Complete mode will make the resource group exactly match the template, deleting any resource that is not explicitly defined in the template.
Incremental mode will add or modify resources to achieve what is specified by the template, ignoring any additional resources that are present within the resource group. Incremental mode will still modify any pre-existing resources defined in the template to match it.
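For illustration, a minimal sketch of an explicit incremental deployment with the (older) AzureRM module, using placeholder names:
New-AzureRmResourceGroupDeployment `
  -ResourceGroupName "my-rg" `
  -TemplateFile ".\documentdb.json" `
  -TemplateParameterFile ".\documentdb.parameters.json" `
  -Mode Incremental
Incremental is also the default when -Mode is not specified, so seeing previously deployed resources disappear usually means the deployments were being run with -Mode Complete.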