Allow Cloud Service ARM template deployment to a different subscription than the Key Vault

Our Key Vault is in subscription 1, and we have multiple Cloud Services for multiple areas that we need to deploy to different subscriptions. While working in Azure DevOps I found out that I am unable to deploy Cloud Services (extended support) to a subscription that is different from the Key Vault's, since the ARM template used for deployment is trying to access secrets from that Key Vault.
Then, when I read this document https://learn.microsoft.com/en-us/azure/cloud-services-extended-support/deploy-prerequisite, I saw that it states: "The key vault must be created in the same region and subscription as the cloud service".
Does anyone know of a way around this? It's imperative that we are able to deploy multiple Cloud Services (for different areas) to different subscriptions, and we only have one Key Vault that stores all the values used by the cloud services.

As mentioned in the Microsoft documentation you shared, it's not possible; it is a prerequisite that the Key Vault is created in the same subscription as the Cloud Service.
As discussed in this GitHub issue, it is possible to use secrets from one subscription in another subscription, but referencing certificates across subscriptions is a limitation of ARM templates.
Azure recommends using a different Key Vault per environment when certificates are involved.
Secrets can be referenced as parameters in the ARM template and used by Azure services, but a certificate can't be referenced from another subscription; otherwise you will get the error below (a parameters-file sketch follows the error):
{
  "status": "Failed",
  "error": {
    "code": "InvalidParameter",
    "target": "sourceVault.id",
    "message": "The SubscriptionId:\"<id>\" of the request must match the SubscriptionId \"<sharedId>\" contained in the Key Vault Id."
  }
}
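For the secrets case, here is a minimal sketch of a deployment parameters file that pulls a secret by its static resource ID, which can point at a vault in another subscription in the same tenant (the subscription ID, resource group, vault name and secret name below are placeholders; the vault must have enabledForTemplateDeployment set and the deploying principal needs access to it):

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminPassword": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/<shared-subscription-id>/resourceGroups/<kv-resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>"
        },
        "secretName": "adminPassword"
      }
    }
  }
}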

Related

How to use Azure Data Factory, Key Vaults and ADF Private Endpoints together

I've created a new ADF instance on Azure with Managed Virtual Network integration enabled.
I planned to connect to Azure Key Vault to retrieve credentials for my pipeline’s source and sink systems using Key Vault Private Endpoint. I was able to successfully create it using Azure Data Factory Studio. I have also created Azure Key Vault linked service.
However, when I try to configure other linked services for the source and destination systems, the only option available for retrieving credentials from Key Vault is the AKV linked service. I'm not able to select the related private endpoint anywhere (please see the screen below).
Am I missing something?
Are there any additional configuration steps required? Is the scenario I've described possible at all?
Any help will be appreciated!
UPDATE: Screen comparing two linked services (one with managed network and private endpoint selected, and another one where I'm not able to set these options up):
Since Managed Virtual Network integration is enabled, make sure to check which region you are using; unfortunately, the ADF managed virtual network is not supported in East Asia.
I have tried this in my environment, and even there that option is not available.
From what I have gathered, even if you create a private endpoint for Key Vault, this column is always shown as blank; it only validates the URL format and doesn't perform any network operation.
As per the official documentation, if you want to use a new linked service, create linked services for other data stores such as Azure SQL or Azure Synapse instead of Key Vault, and have them retrieve their credentials from the Key Vault linked service (a sketch follows the reference links below).
For your Reference:
Store credentials in Azure Key Vault - Azure Data Factory | Microsoft Docs
Azure Data Factory and Key Vault - Tech Talk Corner
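As a minimal sketch of what such a linked service looks like, following the "Store credentials in Azure Key Vault" document referenced above (the linked service name AzureSqlSource, the AKV linked service reference name myKeyVault, and the secret name SqlConnectionString are illustrative), an Azure SQL linked service can pull its connection string from the AKV linked service:

{
  "name": "AzureSqlSource",
  "properties": {
    "type": "AzureSqlDatabase",
    "typeProperties": {
      "connectionString": {
        "type": "AzureKeyVaultSecret",
        "store": {
          "referenceName": "myKeyVault",
          "type": "LinkedServiceReference"
        },
        "secretName": "SqlConnectionString"
      }
    }
  }
}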

CI/CD ADF Synapse - Modify URL in Key Vault Linked service

We use Synapse Git integration to deploy artifacts such as linked services generated by a Data Warehouse automation tool (JSON files).
This is different from deploying an ARM template in ADF.
We created one Azure Key Vault (AKV) per environment, so we have an Azure Key Vault linked service in each environment with the same name. But each AKV has its own URL, so we need to change the URL in the deployed linked services during the CI/CD process.
I read this https://learn.microsoft.com/en-us/azure/synapse-analytics/cicd/continuous-integration-deployment#use-custom-parameters-of-the-workspace-template
I think I need to create a template to change "Microsoft.Synapse/workspaces/linkedServices"
But I didn't find any example of how to modify the KV URL parameters.
Here is the linked service I want to modify; https://myKeyVaultDev.vault.azure.net has to be changed when deploying:
{
  "name": "myKeyVault",
  "properties": {
    "type": "AzureKeyVault",
    "typeProperties": {
      "baseUrl": "https://myKeyVaultDev.vault.azure.net"
    }
  }
}
I'm not much familiar with CI/CD and Azure DevOps yet, but I still need to do it...
I have done this using Azure DevOps. When you create the release pipeline within Azure DevOps, one of the options is to "override parameters". At this point you can specify the name of the key vault and the corresponding value. The corresponding value is configured in a set of pipeline variables, which can themselves come from the same key vault.
You don't need to create the template. Synapse already does that and stores it in the publish branch (“workspace_publish”). If you look in that branch you will see the template along with the available parameters that you can override.
More info is available here:
https://www.drware.com/how-to-use-ci-cd-integration-to-automate-the-deploy-of-a-synapse-workspace-to-multiple-environments/
https://techcommunity.microsoft.com/t5/data-architecture-blog/ci-cd-in-azure-synapse-analytics-part-1/ba-p/1964172
From the Azure Key Vault side of things, I believe you're right: you have to change the Linked Services section within the template to point to the correct Key Vault base URL.
Azure Key Vault linked service
I don't know if you are still looking for a solution.
In order to parametrize a linked service property, and especially the AKV reference, I think you should modify template-parameters-definition.json and add the following section:
"Microsoft.Synapse/workspaces/linkedServices":
{ "*":
{ "properties":
{ "typeProperties":
{ "baseUrl": "|:-connectionString:secureString" }
}
}
}
This will create a parameter for each linked service. The next step is to use overrideParameters on the Synapse workspace deployment task in Azure DevOps, as sketched below.
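As a minimal sketch of what the override boils down to, the target environment supplies its own Key Vault base URL for the generated parameter, whether through a parameters file or the task's override string. The parameter name and the Test vault URL below are illustrative; check TemplateForWorkspace.json and TemplateParametersForWorkspace.json in the workspace_publish branch for the exact generated name:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "myKeyVault_connectionString": {
      "value": "https://myKeyVaultTest.vault.azure.net"
    }
  }
}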

How to implement just in time access for a deployment server?

BACKGROUND
We are about to set up a deployment server that will be used to manage Azure resources. The deployment server will run pre-defined PowerShell scripts and deploy ARM templates.
This article describes how to use service principals and key vaults so that the application that runs inside the deployment server can execute deployment scripts securely.
PROBLEM
Frequently, the deployment server will be updated with scripts, new pipelines, different types of configuration, code snippets, templates, etc. When changes are made on the deployment server, we do not want the secrets to be exposed in any way.
A JUST IN TIME APPROACH – CUSTOM ACCESS KEY API
The functionality we are looking for can possibly be implemented with a custom access key API with the following workflow:
1. In a service request portal, a deployment ticket is signed by an approver
2. The deployment server receives the signed deployment ticket
3. The deployment server sends the signed ticket to a custom access key API and receives a temporary service principal and access key
4. The deployment server executes scripts (with the temporary service principal)
5. The temporary service principal and access key are automatically removed
WHY A CUSTOM ACCESS KEY API?
The custom access key API adds the following capabilities:
By comparison to a deployment server, the API has a smaller footprint and we believe that updates to the service will be rare and can be done in a very controlled manner.
The API can give access to the deployment server based on the exact need (subscription, resource group, etc.), as in the role-definition sketch after this list.
The API can use digital signatures to verify the original approver of the ticket
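For illustration only, that "exact need" could be expressed as a custom Azure role definition that the access key API assigns to the temporary service principal for the duration of the deployment; the role name, actions and scope below are hypothetical placeholders:

{
  "Name": "Temporary Deployment Operator",
  "Description": "Hypothetical least-privilege role handed out together with the temporary service principal",
  "Actions": [
    "Microsoft.Resources/deployments/*",
    "Microsoft.Resources/subscriptions/resourceGroups/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>/resourceGroups/<target-resource-group>"
  ]
}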
RECOMMENDED APPROACH?
What is the recommended approach to implement just in time access for a deployment server?

VSTS ARM Service Endpoint for national cloud environments

I am trying to create a release definition from my VSTS instance in West Europe to deploy an app to the Azure German Cloud. I tried creating an ARM service endpoint, but it seems that by default it uses the service management endpoint management.core.windows.net rather than the German endpoints.
I've also tried using an Azure Classic endpoint, but I keep getting an authentication error: "There was an error with the Azure credentials used for the deployment...". The user I am using for that has the Owner role in my subscription. Besides, Azure Classic service endpoints are deprecated.
Is there any way to change the environment endpoints for the service endpoint?
Are there any restrictions when trying to deploy to the German cloud from a VSTS instance in West Europe?

Scale set using keyvault in another region

I'm working with an ARM template that creates a VM scale set for a Service Fabric cluster and associates some secrets from a key vault with the VMs. I discovered this morning that the VMs and the key vault apparently must exist in the same region, or I get an error like this:
New-AzureRmResourceGroupDeployment : 9:24:55 AM - Resource Microsoft.Compute/virtualMachineScaleSets 'StdNode' failed with message '{
  "status": "Failed",
  "error": {
    "code": "ResourceDeploymentFailure",
    "message": "The resource operation completed with terminal provisioning state 'Failed'.",
    "details": [
      {
        "code": "KeyVaultAndVMInDifferentRegions",
        "message": "The Key Vault https://obscured.vault.azure.net/secrets/secretname/1112222aa31c4dcca4363bb0013e9999 is located in location West US, which is different from the location of the VM, northcentralus. "
      }
    ]
  }
}'
This feels like an artificial limitation and is a major issue for me. I want to have a centralized keyvault where I deploy all of my secrets and utilize them from all my deployments. Having to duplicate my secrets in regions around the world seems ridiculous and VERY error prone. There should be no significant perf issue here in obtaining secrets across regions. So what is the reason behind this, and will it change?
Anyone from the Azure Scale Sets team want to offer some color to this?
The reason we enforce region boundaries is to prevent users from creating architectures that have cross-region dependencies.
For an application designed like this, an outage of the JapanEast datacenter would leave your VMSSes in JapanWest unable to scale out successfully.
Regional isolation is a key design principle of cloud-based applications, and we want to prevent users from making bad choices if we can.
The reason we do not allow cross-subscription references is that it is an important final step to prevent malicious users from using CRP as a privilege escalation mechanism to access other users' secrets.
There are other mechanisms in ARM which also prevent this, but they are configuration-based.
To overcome the problem, you may want to apply a simple fix:
Get-AzVM -ResourceGroupName "rg1" -Name "vm1" | Remove-AzVMSecret | Update-AzVM
This will remove the earlier secret and reissue a new one so that your VM is back in a good provisioning state.
You can use an architecture with a central key vault that you access for template parameters, and store those secrets in a regional key vault; then link to the regional key vault for your scale set. If the secrets are certificates, you can use an ARM template function to format the certificate (as a secret) properly so it can be imported as part of the osProfile property on the VM/VMSS (see the sketch below).
A more in-depth look can be found here: https://devblogs.microsoft.com/premier-developer/centralized-vm-certificate-deployment-across-multiple-regions-with-arm-templates/
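As a minimal sketch of how the regional vault and certificate end up on the scale set (the parameter names are placeholders; the property layout follows the standard Microsoft.Compute/virtualMachineScaleSets schema for Windows nodes), the virtualMachineProfile of the VMSS resource would contain something like:

"virtualMachineProfile": {
  "osProfile": {
    "secrets": [
      {
        "sourceVault": {
          "id": "[resourceId(parameters('regionalVaultResourceGroup'), 'Microsoft.KeyVault/vaults', parameters('regionalVaultName'))]"
        },
        "vaultCertificates": [
          {
            "certificateUrl": "[parameters('certificateUrl')]",
            "certificateStore": "My"
          }
        ]
      }
    ]
  }
}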