Azure DevOps: Connecting Terraform output to Azure web service JSON config? - azure-devops

I'm trying to figure out how to update the JSON config files in my .NET Core web service, based on the deployed resources using Terraform.
I have an existing Azure DevOps pipeline, which builds/deploys a .NET Core web service to an Azure App Service.
In moving to Terraform, I'll be creating a CosmosDb database, Azure Search service, Event Grid, etc. for dev/test/prod environments.
I have a handle on creating these in Terraform, but I'm not clear how to take the outputs from these resources (like the CosmosDb location, key, and database id) and inject these into my JSON config files in my deployed web service.
Has anyone done this sort of thing, and can show a Terraform example? Thanks!

You don't actually inject those into your config file; you set them as app settings on your App Service, and those settings override the matching keys in your config file.
So if you have:
{
  "CosmosDb": {
    "Key": ""
  }
}
In your Terraform you would do the following:
resource "azurerm_app_service" "test" {
name = "example-app-service"
location = "${azurerm_resource_group.test.location}"
resource_group_name = "${azurerm_resource_group.test.name}"
app_service_plan_id = "${azurerm_app_service_plan.test.id}"
app_settings = {
"CosmosDb:Key" = "${azurerm_cosmosdb_account.db.primary_master_key}"
}
}
So you reference your other Terraform resources to pull out the values you need, and put those in the app_settings section of your App Service in Terraform.
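The same pattern covers the other outputs mentioned in the question. A minimal sketch, assuming an azurerm_cosmosdb_sql_database resource named "db" exists alongside the account (the setting names are only examples):

  # inside the azurerm_app_service resource shown above
  app_settings = {
    "CosmosDb:Key"          = "${azurerm_cosmosdb_account.db.primary_master_key}"
    "CosmosDb:Endpoint"     = "${azurerm_cosmosdb_account.db.endpoint}"
    "CosmosDb:DatabaseName" = "${azurerm_cosmosdb_sql_database.db.name}"
  }

Your .NET Core configuration then sees these as CosmosDb:Key, CosmosDb:Endpoint and CosmosDb:DatabaseName, exactly as if they were set in the JSON file.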

Related

Azure terraform Use existing managed identity to authenticate web app

I hope somebody can help me to solve this issue.
Using Terraform I scripted some resource groups and web apps. Those web apps have some configuration settings that need to access a key vault to retrieve some secrets.
But to do so, I need to activate the Azure identity on the web app.
So far everything is working just fine without any problem. But as I am still learning how to use Terraform with Azure, I keep destroying and spinning up the web apps, which means every time I need to activate the identity and add the access policy in the key vault.
So what I did is create an Azure managed identity resource in the same resource group where I have the key vault. Now I would like to use this managed identity to authenticate my web app every time I spin it up.
My web app code looks like this:
resource "azurerm_app_service" "app-hri-stg-eur-configurations-api" {
name = "app-hri-${var.env}-${var.reg-name}-webapp-testing"
app_service_plan_id = azurerm_app_service_plan.ASP-hri-stg-eur-webapp.id
location = var.location
resource_group_name = azurerm_resource_group.rg-hri-stg-eur-webapp.name
app_settings = {
"secret" = "#Microsoft.KeyVault(SecretUri=https://mykeyvault.vault.azure.net/secrets/test)"
...... <My configuration>
}
identity {
type = "UserAssigned"
}
}
And here is where I am getting confused: how can I reference the Azure managed identity that I have already created, so that my web app is granted access to read the secrets?
I hope I made my question clear enough; if not, please just ask for more info.
Thank you so much for any help you can provide.
An identity block supports the following:
type - (Required) Specifies the identity type of the App Service. Possible values are SystemAssigned (where Azure will generate a Service Principal for you), UserAssigned where you can specify the Service Principal IDs in the identity_ids field, and SystemAssigned, UserAssigned which assigns both a system managed identity as well as the specified user assigned identities.
identity_ids - (Optional) Specifies a list of user managed identity ids to be assigned. Required if type is UserAssigned.
So you should have something like this:
data "azurerm_user_assigned_identity" "example" {
name = "name_of_user_assigned_identity"
resource_group_name = "name_of_resource_group"
}
This also works if your identity is in another resource group; the data source lets you reference an already created Azure resource.
identity {
  type         = "UserAssigned"
  identity_ids = [data.azurerm_user_assigned_identity.example.id]
}
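Assigning the identity alone does not let the app read the secrets; the key vault also needs an access policy for that identity. A minimal sketch, assuming an azurerm_key_vault resource named "example" and azurerm 2.x+ permission casing:

data "azurerm_client_config" "current" {}

resource "azurerm_key_vault_access_policy" "webapp" {
  key_vault_id = azurerm_key_vault.example.id # assumed key vault resource name
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = data.azurerm_user_assigned_identity.example.principal_id

  secret_permissions = ["Get", "List"]
}

Because the policy is attached to the user-assigned identity rather than to the web app, it survives destroying and recreating the app, which is exactly what the question is after.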

CI/CD ADF Synapse - Modify URL in Key Vault Linked service

We use Synapse git integration to deploy artifacts such as linked services generated by a Data Warehouse automation tool (JSON files).
This is different than deploying an ARM template in ADF.
We created one Azure Key Vault (AKV) per environment, so we have an Azure Key Vault linked service in each environment with the same name. But each AKV has its own URL, so we need to change the URL in the deployed linked services during the CI/CD process.
I read this https://learn.microsoft.com/en-us/azure/synapse-analytics/cicd/continuous-integration-deployment#use-custom-parameters-of-the-workspace-template
I think I need to create a template to change "Microsoft.Synapse/workspaces/linkedServices".
But I didn't find any example of how to modify the KV URL parameters.
Here is the linked service I want to modify; https://myKeyVaultDev.vault.azure.net has to be changed when deploying:
{
  "name": "myKeyVault",
  "properties": {
    "type": "AzureKeyVault",
    "typeProperties": {
      "baseUrl": "https://myKeyVaultDev.vault.azure.net"
    }
  }
}
I'm not very familiar with CI/CD and Azure DevOps yet, but I still need to do it...
I have done this using Azure DevOps. When you create the release pipeline within Azure DevOps, one of the options is to "override parameters". At this point you can specify the name of the key vault and the corresponding value. The corresponding value is configured in a pipeline variable group - which itself can come from the same key vault.
You don't need to create the template. Synapse already does that and stores it in the publish branch (“workspace_publish”). If you look in that branch you will see the template along with the available parameters that you can override.
More info is available here:
https://www.drware.com/how-to-use-ci-cd-integration-to-automate-the-deploy-of-a-synapse-workspace-to-multiple-environments/
https://techcommunity.microsoft.com/t5/data-architecture-blog/ci-cd-in-azure-synapse-analytics-part-1/ba-p/1964172
From the Azure Key Vault side of things, I believe you're right - you have to change the Linked Services section within the template to point to the correct Key Vault base URL.
Azure Key Vault linked service
I don't know if you are still looking for the solution.
In order to parametrize a linked service property, and especially an AKV reference, I think you should modify template-parameters-definition.json and add the following section:
"Microsoft.Synapse/workspaces/linkedServices":
{ "*":
{ "properties":
{ "typeProperties":
{ "baseUrl": "|:-connectionString:secureString" }
}
}
}
This will create a parameter for each linked service. The next step is to set overrideParameters on the SynapseWorkspaceDeployment task in Azure DevOps.

How to deploy/filter the respective Server base endpoint in Swagger

I have YAML/JSON files and we have the base server endpoint defined as seen in the screenshot below.
How do we filter only the respective base URL for a specific environment?
For instance:
Server: dev files should be deployed to the DEV environment, stage files should be deployed to the Stage environment, and so on.
Note: I'm using Azure Pipelines for deployment.
In your current situation, in the DevOps pipeline, we do not have a function/option to do this. We recommend you try creating a new Generic service connection and using it in your different deploy steps.

Deployed static website to Azure via terraform - but the blade is inaccessible with permission error?

I've got a very basic terraform deployment going through Azure Devops, which defines a storage bucket and static website. However, when I go into the Azure Portal, the static website blade gives me "Access Denied. You do not have access". All other aspects of the storage bucket are available, though, so it doesn't appear to be a general permissions issue.
Terraform doesn't support this configuration in the AzureRM provider, so I'm using the local-exec pattern to configure the static website.
Running in DevOps, my terraform has a system connection and runs as a service user. However, I've also tried destroying the bucket and re-running the terraform as my user - this doesn't make any difference.
I've tried adding myself onto the IAM on the bucket, that also doesn't make any difference.
The definition for the storage bucket is:
name = "website"
resource_group_name = "${azurerm_resource_group.prod.name}"
location = var.region
account_kind = "StorageV2"
account_tier = "Standard"
account_replication_type = "LRS"
provisioner "local-exec" {
# https://github.com/terraform-providers/terraform-provider-azurerm/issues/1903
command = "az storage blob service-properties update --account-name ${azurerm_storage_account.website-storage.name} --static-website --index-document index.html --404-document 404.html"
}
}
I'm expecting to be able to get to the static website blade within the Portal - is there some reason why this wouldn't work?
I don't yet have a reason why this happened. I had earlier tried removing the storage and re-creating, but I removed the storage via the portal. This evening I tried renaming the resource in Terraform which forced it to destroy and recreate, and that works.
I had previously messed about with StorageV1 resource / container / blob definition of the same name; potentially there was something "invisible" in Azure which was causing this oddness...
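For what it's worth, newer versions of the azurerm provider expose the static website settings natively on the storage account, which avoids the local-exec workaround entirely. A minimal sketch using the same names as above:

resource "azurerm_storage_account" "website-storage" {
  name                     = "website"
  resource_group_name      = "${azurerm_resource_group.prod.name}"
  location                 = var.region
  account_kind             = "StorageV2"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  # native replacement for the "az storage blob service-properties update" call
  static_website {
    index_document     = "index.html"
    error_404_document = "404.html"
  }
}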

Configure terraform to connect to IBM Cloud

I am trying to connect Terraform to IBM Cloud and I got mixed up with the Softlayer and IBM Cloud credentials.
I followed the instructions on the IBM sites to connect my Terraform to IBM Cloud and I am confused, because I may need to use both SL and IBM Cloud connection information like API keys etc.
I cannot run terraform init and/or plan, because some information is missing. Now I am asked for the organization (var.org).
Sometimes I get asked about the SL credentials. Our account started in January 2019, I am sure I have not worked with SL at all, and I have only heard about the API key from IBM Cloud.
Does someone have an example of what terraform.tfvars should look like to work properly with IBM Cloud Kubernetes Service, VPC and classic infrastructure?
Thank you very much.
Jan
I recommend taking a look at these two tutorials, dealing with a LAMP stack on classic virtual servers and with Kubernetes and other services. Both provide step-by-step instructions and guide you through the process of setting up Terraform-based deployments.
They provide the necessary code in GitHub repos. For the Kubernetes sample credentials.tfvars you only need the API key:
ibmcloud_api_key = "your api key"
For public_key, provide a string containing the public key itself, not the path to a file that contains the key.
$ cat ~/.ssh/id_rsa.pub
ssh-rsa CCCde...
Then in terraform:
resource "ibm_compute_ssh_key" "test_ssh_key" {
public_key = "ssh-rsa CCCde..."
}
Alternatively you can use a key that you created earlier:
data "ibm_compute_ssh_key" "ssh_key" {
label = "yourexistingkey"
}
resource "ibm_compute_vm_instance" "onprem_vsi" {
ssh_key_ids = ["${data.ibm_compute_ssh_key.ssh_key.id}"]
}
Here is what you will need to run an init or plan for IBM Cloud Kubernetes Service clusters with terraform...
In your .tf file
terraform {
  required_providers {
    ibm = {
      source = "IBM-Cloud/ibm"
    }
  }
}

provider "ibm" {
  ibmcloud_api_key      = var.ibmcloud_api_key
  iaas_classic_username = var.classic_username
  iaas_classic_api_key  = var.classic_api_key
}
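The provider block above references three input variables, so the configuration also needs matching declarations. A minimal sketch (names as used above):

variable "ibmcloud_api_key" {
  description = "IBM Cloud API key"
  sensitive   = true # requires Terraform 0.14+
}

variable "classic_username" {
  description = "IBM Cloud classic (Softlayer) username"
}

variable "classic_api_key" {
  description = "IBM Cloud classic (Softlayer) API key"
  sensitive   = true
}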
In your shell, set the following environment variables:
export IBMCLOUD_API_KEY=<value of your IBM Cloud api key>
export CLASSIC_API_KEY=<value of your IBM Cloud classic (i.e. SL) api key>
export CLASSIC_USERNAME=<value of your IBM Cloud classic username>
Run your init as follows:
terraform init
Run your plan as follows:
terraform plan \
-var ibmcloud_api_key="${IBMCLOUD_API_KEY}" \
-var classic_api_key="${CLASSIC_API_KEY}" \
-var classic_username="${CLASSIC_USERNAME}"