Adding Secrets and access policy to existing shared keyvault using ARM - azure-devops

I was searching the web for information on my question: how to add secrets and access policies to an existing key vault in Azure, shared by other applications, using ARM.
I read this documentation.
What I'm worried about is whether anything existing will be overwritten or deleted, as I'm creating a new template and parameter file in my service's "solution", so to speak.
And I know that my CI/CD pipelines in DevOps are set to "incremental" with regard to what they should be updating and creating.
Anyone have a crystal clear understanding regarding this?
Thanks in advance!
UPDATE:
So I think I managed to get it right after all.
I created a new key vault resource and added a couple of secrets and some access policies, to emulate the situation of an already created resource that I want to add new secrets to.
Then I created this template:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "keyVault": {
      "type": "string"
    },
    "Credentials1": {
      "type": "secureString"
    },
    "SecretName1": {
      "type": "string"
    },
    "Credentials2": {
      "type": "secureString"
    },
    "SecretName2": {
      "type": "string"
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "Microsoft.KeyVault/vaults/secrets",
      "name": "[concat(parameters('keyVault'), '/', parameters('SecretName1'))]",
      "apiVersion": "2015-06-01",
      "properties": {
        "contentType": "text/plain",
        "value": "[parameters('Credentials1')]"
      }
    },
    {
      "type": "Microsoft.KeyVault/vaults/secrets",
      "name": "[concat(parameters('keyVault'), '/', parameters('SecretName2'))]",
      "apiVersion": "2015-06-01",
      "properties": {
        "contentType": "text/plain",
        "value": "[parameters('Credentials2')]"
      }
    }
  ],
  "outputs": {}
}
What I've learned is that if there is an existing shared key vault I want to add some secrets to, I only have to define the sub-resources, in this case the secrets to be added to the existing key vault.
So this worked, and it resulted in nothing else in the existing key vault being modified except the new secrets being added.
Even though this is not a fully automated way of adding a whole new key vault setup for a new service, since it doesn't connect the new resources by adding their principal IDs (identities), it's good for now, as I don't have to add each secret manually. I do still have to add the principal IDs by hand.
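For reference, a matching parameter file for the template above could look like the sketch below. The vault name and secret names are placeholder values I made up; in a real pipeline the secureString values would come from pipeline variables rather than being checked in:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "keyVault": { "value": "my-shared-keyvault" },
    "SecretName1": { "value": "ServiceApiKey" },
    "Credentials1": { "value": "" },
    "SecretName2": { "value": "ServiceConnectionString" },
    "Credentials2": { "value": "" }
  }
}
```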

When using incremental mode to deploy the template, it should not overwrite the things in the key vault.
But to be foolproof, I recommend you first back up your key vault keys, secrets and certificates. For the access policies, you can also export the key vault's template first and save the accessPolicies section, so you can restore it just in case.

If you redeploy the existing KeyVault in incremental mode any child properties, such as access policies, will be configured as they’re defined in the template. That could result in the loss of some access policies if you haven’t been careful to define them all in your template. The documentation linked to above will give you a full list of the properties that would be affected. As per the docs this can affect properties even if they’re not explicitly defined.
KeyVault Secrets aren’t a child property of the KeyVault resource so won’t get overwritten. They can be defined in ARM either as a separate resource in the same template or in a different template file. You can define some, all or none of the existing secrets in ARM. Any that aren’t defined in the ARM template will be left as is.
If you’re using CI/CD to manage your deployments it’s worth considering setting up a test environment to apply the changes to first so you can validate that the result is as expected before applying them to your production environment.
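On the access-policy point: instead of redeploying the vault itself, Key Vault also exposes an accessPolicies sub-resource whose name segment "add" merges new policies into the existing ones rather than replacing them. A hedged sketch (the servicePrincipalId parameter and the permissions list are illustrative, and the apiVersion may need adjusting to one available in your subscription):

```json
{
  "type": "Microsoft.KeyVault/vaults/accessPolicies",
  "name": "[concat(parameters('keyVault'), '/add')]",
  "apiVersion": "2019-09-01",
  "properties": {
    "accessPolicies": [
      {
        "tenantId": "[subscription().tenantId]",
        "objectId": "[parameters('servicePrincipalId')]",
        "permissions": {
          "secrets": [ "get", "list" ]
        }
      }
    ]
  }
}
```

Deployed incrementally, this adds (or updates) only the listed principals and leaves the vault's other policies alone.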

Related

Updating Pulumi-Stacks after rotating Azure-Secrets

We use Pulumi to create our infrastructure in Azure. This infrastructure includes different resources that contain secrets (Blob Storage, Cosmos, etc.).
Now we need to rotate the primary keys of those resources.
I noticed that the stack configuration contains this information (encrypted):
"primaryAccessKey": {
  "...": "...",
  "ciphertext": "..."
},
"primaryBlobConnectionString": {
  "...": "...",
  "ciphertext": "..."
},
Will this be any issue for Pulumi if we change the secrets in Azure Portal?

How to programmatically assign user managed identity to Azure Data Factory

I have an Azure Data Factory from which I want to connect to Azure Synapse using the user-assigned managed identity authentication type.
Three steps need to be done, but unfortunately I haven't found a way to set up step 1 programmatically:
1. In Data Factory (Settings -> Managed identities), assign the user-assigned managed identity
2. Create credentials
3. Create linked services
If I deploy the ARM template with only the second and third steps implemented, I get the following exception:
"The referenced user assigned managed identity in the credential is not associated with the factory".
Do you know how I can assign User-Managed Identity to Data Factory?
Had the same error message: "The referenced user assigned managed identity in the credential is not associated with the factory".
When doing CI/CD for the ARM template, I noticed that the auto-generated ARM template from the Data Factory only had identity type SystemAssigned, even when I had manually added a user-assigned identity in the ADF GUI.
My Solution was to modify arm-template-parameters-definition.json with this.
"Microsoft.DataFactory/factories": {
  "identity": "=:-identity"
}
Then in the parameter file you can pass in this:
"dataFactory_identity": {
  "value": {
    "type": "SystemAssigned,UserAssigned",
    "userAssignedIdentities": {
      "/subscriptions/<Insert_subscriptionId>/resourceGroups/<Insert_resourceGroupsName>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<Insert_userAssignedIdentitiesName>": {}
    }
  }
}
Unfortunately, documentation for Azure Data Factory ARM templates is limited. I found a solution based on the documentation for the Azure Storage ARM templates; below is the solution:
{
  "name": "[variables('dataFactoryName')]",
  "type": "Microsoft.DataFactory/factories",
  "apiVersion": "2018-06-01",
  "location": "[resourceGroup().location]",
  "identity": {
    "type": "SystemAssigned,UserAssigned",
    "userAssignedIdentities": {
      "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', variables('uamiName'))]": {}
    }
  }
}
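For completeness, step 2 (creating the credential) can then reference that same identity via the factory's credentials sub-resource. A sketch under assumptions: the credentialName variable is made up here, and the apiVersion may differ in your environment:

```json
{
  "name": "[concat(variables('dataFactoryName'), '/', variables('credentialName'))]",
  "type": "Microsoft.DataFactory/factories/credentials",
  "apiVersion": "2018-06-01",
  "dependsOn": [
    "[resourceId('Microsoft.DataFactory/factories', variables('dataFactoryName'))]"
  ],
  "properties": {
    "type": "ManagedIdentity",
    "typeProperties": {
      "resourceId": "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', variables('uamiName'))]"
    }
  }
}
```

The dependsOn ensures the factory (with its identity block) is deployed before the credential references it.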

CI/CD ADF Synapse - Modify URL in Key Vault Linked service

We use Synapse git Integration to deploy artifacts such as linked services generated by a Data Warehouse automation tool (JSON files)
It is different from deploying ARM templates in ADF.
We created one Azure Key Vault (AKV) per environment, so we have an Azure Key Vault linked service in each environment, and the linked services have the same name. But each AKV has its own URL, so we need to change the URL in the deployed linked services during the CI/CD process.
I read this https://learn.microsoft.com/en-us/azure/synapse-analytics/cicd/continuous-integration-deployment#use-custom-parameters-of-the-workspace-template
I think I need to create a template to change "Microsoft.Synapse/workspaces/linkedServices",
but I didn't find any example of how to modify the KV URL parameters.
Here is the linked service I want to modify; https://myKeyVaultDev.vault.azure.net has to be changed when deploying:
{
  "name": "myKeyVault",
  "properties": {
    "type": "AzureKeyVault",
    "typeProperties": {
      "baseUrl": "https://myKeyVaultDev.vault.azure.net"
    }
  }
}
I'm not very familiar with CI/CD and Azure DevOps yet, but I still need to do it...
I have done this using Azure DevOps. When you create the release pipeline within Azure DevOps, one of the options is to "override parameters". At this point you can specify the name of the key vault and the corresponding value. The corresponding value is configured in a pipeline variable set, which itself can come from the same key vault.
You don't need to create the template. Synapse already does that and stores it in the publish branch ("workspace_publish"). If you look in that branch you will see the template along with the available parameters that you can override.
More info is available here:
https://www.drware.com/how-to-use-ci-cd-integration-to-automate-the-deploy-of-a-synapse-workspace-to-multiple-environments/
https://techcommunity.microsoft.com/t5/data-architecture-blog/ci-cd-in-azure-synapse-analytics-part-1/ba-p/1964172
From the Azure Key Vault side of things, I believe you're right: you have to change the Linked Services section within the template to point to the correct Key Vault base URL.
Azure Key Vault linked service
I don't know if you still are looking for the solution.
In order to parametrize a linked service property, and specifically the AKV reference, I think you should modify template-parameters-definition.json and add the following section:
"Microsoft.Synapse/workspaces/linkedServices": {
  "*": {
    "properties": {
      "typeProperties": {
        "baseUrl": "|:-connectionString:secureString"
      }
    }
  }
}
This will create a parameter for each linked service. The next step is to use overrideParameters on the SynapseWorkspaceDeployment task in Azure DevOps.
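As a sketch, the override in an azure-pipelines.yml could look something like the fragment below. The task version, input names, and the generated parameter name myKeyVault_connectionString are assumptions based on the parametrization above; check the generated template in the workspace_publish branch for the exact parameter name:

```yaml
- task: Synapse workspace deployment@2
  inputs:
    TemplateFile: '$(System.DefaultWorkingDirectory)/TemplateForWorkspace.json'
    ParametersFile: '$(System.DefaultWorkingDirectory)/TemplateParametersForWorkspace.json'
    azureSubscription: 'my-service-connection'
    ResourceGroupName: 'my-rg'
    TargetWorkspaceName: 'my-synapse-ws'
    # Point the linked service at the target environment's vault:
    OverrideArmParameters: '-myKeyVault_connectionString https://myKeyVaultTest.vault.azure.net'
```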

ARM template read certificate from keyvault certificates instead of secret

Previously, we stored our certificates in a key vault secret. But as this is deprecated, we now store the certificates under Key Vault -> Certificates.
When deploying an App Service to Azure, we use this part of the ARM template to get the certificate. It is still getting the certificate from the secret instead of from Certificates.
"resources": [
  {
    "type": "Microsoft.Web/certificates",
    "name": "[variables('certName1')]",
    "apiVersion": "2019-08-01",
    "location": "[variables('location')]",
    "properties": {
      "keyVaultId": "[resourceId(variables('vaultSubscriptionId'), variables('vaultResourcegroupName'), 'Microsoft.KeyVault/vaults', variables('vaultName'))]",
      "keyVaultSecretName": "[variables('vaultSecretName1')]"
    }
  },
  {
    "type": "Microsoft.Web/certificates",
    "name": "[variables('certName2')]",
    "dependsOn": [
      "[resourceId('Microsoft.Web/certificates', variables('certName1'))]"
    ],
    "apiVersion": "2019-08-01",
    "location": "[variables('location')]",
    "properties": {
      "keyVaultId": "[resourceId(variables('vaultSubscriptionId'), variables('vaultResourcegroupName'), 'Microsoft.KeyVault/vaults', variables('vaultName'))]",
      "keyVaultSecretName": "[variables('vaultSecretName2')]"
    }
  }
]
We are currently getting the certificate via keyVaultSecretName, but we don't want to use the key vault secret anymore; we want to get it directly from Certificates. I can't find how to do this: I get errors when removing the keyVaultSecretName property, and when I leave it in, it can't find the certificate.
In your pipelines on Azure DevOps, if you want to use the certificates stored in Key Vault on the Azure portal, you would normally access them via a variable group on Azure DevOps:
1. Set up the variable group.
2. Link the variable group into the pipeline where you need to use the certificates.
[UPDATE]
It seems that you should keep using "keyVaultSecretName" to get the certificate; it is the predefined certificate property. See here. This works because a certificate created under Key Vault -> Certificates is backed by an addressable secret with the same name, so you can point "keyVaultSecretName" at the certificate's name.
I also found some related articles, and all of them use "keyVaultSecretName":
Using an ARM template to deploy your SSL certificate stored in KeyVault on an Web App
How to access SSL in KeyVault from ARM Template
ARM Template with Key Vault certificate

Kubernetes about secrets and how to consume them in pods

I am using GCP Container Engine in my project, and now I am facing an issue that I don't know whether it can be solved via secrets.
One of my deployments is node-js app server, there I use some npm modules which require my GCP service account key (.json file) as an input.
The input is the path where this JSON file is located. Currently I manage to make it work by shipping this file as part of my Docker image, and then in the code I put the path to this file, and it works as expected. The problem is that I don't think this is a good solution, because I want to decouple my Node.js image from the service account key: the key may change (e.g. dev, test, prod) and I would not be able to reuse my existing image (unless I built and pushed it to a different registry).
So how could I upload this service account JSON file as a secret and then consume it inside my pod? I saw that it is possible to create secrets out of files, but I don't know whether it is possible to specify the path to the place where this JSON file is stored. If it is not possible with secrets (because maybe secrets are not saved in files...), how (and if) can it be done?
You can make your JSON file a secret and consume it in your pod. See the following link for secrets (http://kubernetes.io/docs/user-guide/secrets/), but I'll summarize next:
First create a secret from your json file:
kubectl create secret generic nodejs-key --from-file=./key.json
Now that you've created the secret, you can consume in your pod (in this example as a volume):
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "nodejs"
  },
  "spec": {
    "containers": [{
      "name": "nodejs",
      "image": "node",
      "volumeMounts": [{
        "name": "foo",
        "mountPath": "/etc/foo",
        "readOnly": true
      }]
    }],
    "volumes": [{
      "name": "foo",
      "secret": {
        "secretName": "nodejs-key"
      }
    }]
  }
}
So when your pod spins up, the file will be dropped into the pod's file system at /etc/foo/key.json.
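In the Node.js app you can then point the npm module at that mounted path. A minimal sketch, assuming a hypothetical helper (the KEY_FILE_PATH env var and the helper name are made up for illustration; the fallback path matches the volumeMounts above):

```javascript
// Hypothetical helper: resolve the service-account key file path.
// Prefers an env var so the same image can run in dev/test/prod;
// falls back to the secret mount path from the pod spec above.
function keyFilePath() {
  return process.env.KEY_FILE_PATH || '/etc/foo/key.json';
}

// Example: hand the resolved path to an npm module expecting a key file,
// e.g. someGcpModule({ keyFilename: keyFilePath() });
```

This keeps the image itself free of any key material; swapping environments only means mounting a different secret or setting the env var.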
If you deploy on GKE/GCE, I think you don't need the key at all, and it's going to work fine.
I've only tested this with Google Cloud Logging, but it might be the same for other services as well.
E.g. I only need the below when deploying the app on GKE/GCE:
var logging = require('@google-cloud/logging'); // import added; assumes the older factory-style API of this module
var loggingClient = logging({
  projectId: 'grape-spaceship-123'
});