Using Function app connector in ADF - How to override parameters in CI-CD? - azure-data-factory

I need a workaround pretty quickly - this was a late surprise in the dev process when we added an Azure Function to our development ADF pipeline.
When you use a Function App in ADF V2 and generate the ARM template, the key references are not parameterized, unlike other linked services. Ugh!
So for CI/CD scenarios, when we deploy we now have a fixed Function App reference. What we'd like is the same behaviour as other linked services - override the key parameters to point to the correct Dev/UAT/Production environment versions of the functions.
I can think of dirty hacks using PowerShell to overwrite the values (does PowerShell support ADF function linked services yet? I don't know - in January it didn't).
Any other ideas on how to override Function App linked service settings?
The key parameters are under typeProperties (assuming the function key is in Key Vault):
{ "functionAppUrl": "https://xxx.azurewebsites.net" }
{ "functionKey": { "store": { "referenceName": "xxxKeyVaultLS" } } }
{ "functionKey": { "secretName": "xxxKeyName" } }
Right now these are hard-coded from the UI settings - no parameter and no default.

OK, I eventually got back to this.
The solution looks like a lot, but it is pretty simple.
In my DevOps release, I create a PowerShell task that runs after both the Data Factory ARM template has been deployed and the PowerShell task for deployment.ps1 with the "predeployment=$false" setting has run (see ADF CI/CD here).
I have a JSON file for each environment (dev/uat/prod) in my Git repo (I actually use a separate "common" repo to store scripts apart from the ADF Git repo; its alias in DevOps is "_Common" - you'll see this below in the -DefinitionFile parameter of the script).
The JSON file used to replace the deployed function linked service is a copy of the function linked service JSON from ADF and looks like this for DEV:
(scripts/Powershell/dev.json)
{
    "name": "FuncLinkedServiceName",
    "type": "Microsoft.DataFactory/factories/linkedservices",
    "properties": {
        "annotations": [],
        "type": "AzureFunction",
        "typeProperties": {
            "functionAppUrl": "https://myDEVfunction.azurewebsites.net",
            "functionKey": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "MyKeyvault_LS",
                    "type": "LinkedServiceReference"
                },
                "secretName": "MyFunctionKeyInKeyvault"
            }
        },
        "connectVia": {
            "referenceName": "MyintegrationRuntime",
            "type": "IntegrationRuntimeReference"
        }
    }
}
...and the PROD file would be like this:
(scripts/Powershell/prod.json)
{
    "name": "FuncLinkedServiceName",
    "type": "Microsoft.DataFactory/factories/linkedservices",
    "properties": {
        "annotations": [],
        "type": "AzureFunction",
        "typeProperties": {
            "functionAppUrl": "https://myPRODfunction.azurewebsites.net",
            "functionKey": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "MyKeyvault_LS",
                    "type": "LinkedServiceReference"
                },
                "secretName": "MyFunctionKeyInKeyvault"
            }
        },
        "connectVia": {
            "referenceName": "MyintegrationRuntime",
            "type": "IntegrationRuntimeReference"
        }
    }
}
Then in the DevOps pipeline, I use a PowerShell script block that looks like this:
Set-AzureRmDataFactoryV2LinkedService -ResourceGroupName "$(varRGName)" -DataFactoryName "$(varAdfName)" -Name "$(varFuncLinkedServiceName)" -DefinitionFile "$(System.DefaultWorkingDirectory)/_Common/Scripts/Powershell/$(varEnvironment).json" -Force
or, for the Az module:
Set-AzDataFactoryV2LinkedService -ResourceGroupName "$(varRGName)" -DataFactoryName "$(varAdfName)" -Name "$(varFuncLinkedServiceName)" -DefinitionFile "$(System.DefaultWorkingDirectory)/_Common/Scripts/Powershell/Converter/$(varEnvironment).json" -Force
Note:
The $(varXxx) values are defined in my pipeline variables, e.g.
varFuncLinkedServiceName = FuncLinkedServiceName
varEnvironment = "DEV", "UAT" or "PROD" depending on the target release
-Force is used because the linked service will already exist from the Data Factory ARM deployment, and we need to force the overwrite of just the function linked service.
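For reference, here is a minimal sketch of the inline script for the Az-based task, using the pipeline variables above; the Get-AzDataFactoryV2LinkedService existence check is my own addition and not part of the original release step:
# Sketch of the inline script for the Az PowerShell task. The $(varXxx) macros
# are the pipeline variables listed above; the existence check is optional.
$definitionFile = "$(System.DefaultWorkingDirectory)/_Common/Scripts/Powershell/$(varEnvironment).json"

$existing = Get-AzDataFactoryV2LinkedService -ResourceGroupName "$(varRGName)" -DataFactoryName "$(varAdfName)" |
    Where-Object { $_.Name -eq "$(varFuncLinkedServiceName)" }

if (-not $existing) {
    Write-Warning "Linked service $(varFuncLinkedServiceName) was not found - it should have been created by the ARM template deployment."
}

# Overwrite just the function linked service with the environment-specific definition.
Set-AzDataFactoryV2LinkedService -ResourceGroupName "$(varRGName)" `
    -DataFactoryName "$(varAdfName)" `
    -Name "$(varFuncLinkedServiceName)" `
    -DefinitionFile $definitionFile `
    -Force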
Hopefully Microsoft will release a Function App linked service that supports parameters, but until then this has got us moving with the release pipeline.
HTH. Mark.
Update: added the Az cmdlet version of the AzureRM command and changed to Set- ("New-Az..." worked, but in the new Az module there is only Set- for V2 linked services).

Related

Scaling of Cloud Service (Classic) via Powershell or Terraform

It's not possible to deploy a Cloud Service (classic) via Terraform, so I have used an ARM template to deploy the Cloud Service (classic). Following is the code.
resource "azurerm_resource_group_template_deployment" "classicCloudService" {
name = "testSyedClassic"
resource_group_name = var.resourceGroup
deployment_mode = "Incremental"
template_content = file("arm_template.json")
tags = local.resourceTags
}
Following is the ARM Template which I have used.
{
    "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "dnsName": {
            "type": "string",
            "defaultValue": "testClassicSyed"
        },
        "location": {
            "type": "string",
            "defaultValue": "westeurope"
        }
    },
    "resources": [
        {
            "name": "[parameters('dnsName')]",
            "apiVersion": "2015-06-01",
            "location": "[parameters('location')]",
            "type": "Microsoft.ClassicCompute/domainNames",
            "properties": {}
        }
    ],
    "outputs": {}
}
Since scaling options are missing from the ARM/Terraform template, I want to scale as shown below. As far as I know this is only possible via PowerShell, but the cmdlet is in the old Azure module.
CloudService
  TaskWorkerRole: 1
  WorkerRole1: 1
  LongRunningTaskworkerRole: 1
DialogCloudService
  TaskWorkerRole: 1
  Webhook: 1
Set-AzureRole -ServiceName '<your_service_name>' -RoleName '<your_role_name>' -Slot <target_slot> -Count <desired_instances>
Can we use Set-AzRole or any other command to scale this? Or can we do this via Terraform, or is there an ARM template for this?
Can we use Set-AzRole or any other command to scale this? Or can we do this via Terraform, or is there an ARM template for this?
No, we cannot use Set-AzRole, because Azure Resource Manager and the classic deployment model are two different ways of deploying and managing your Azure solutions. To achieve this we can use the below PowerShell cmdlet to scale out the roles.
Set-AzureRole -ServiceName '<your_service_name>' -RoleName '<your_role_name>' -Slot <target_slot> -Count <desired_instances>
For more information, please refer to the Microsoft documentation: How to scale an Azure Cloud Service (classic) in PowerShell.
Alternatively, if you want to use the portal, please refer to the Microsoft documentation: How to configure auto scaling for a Cloud Service (classic) in the portal.
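If you need to scale several roles across both cloud services listed in the question, a minimal sketch using the same classic cmdlet could look like the following; the service and role names/counts are taken from the question, and it assumes you have already authenticated with the classic Azure module (e.g. Add-AzureAccount):
# Sketch: scale multiple classic cloud service roles with the ASM cmdlet shown above.
# Assumes the classic Azure module is installed and you are already authenticated.
$scaleTargets = @{
    'CloudService'       = @{ 'TaskWorkerRole' = 1; 'WorkerRole1' = 1; 'LongRunningTaskworkerRole' = 1 }
    'DialogCloudService' = @{ 'TaskWorkerRole' = 1; 'Webhook' = 1 }
}

foreach ($service in $scaleTargets.Keys) {
    foreach ($role in $scaleTargets[$service].Keys) {
        # Set the instance count for each role in the Production slot.
        Set-AzureRole -ServiceName $service -RoleName $role -Slot Production -Count $scaleTargets[$service][$role]
    }
}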

Azure Data Factory - Export CI CD Export Trigger with float param fails

I'm following the new CI/CD process for ADF as described here.
My DevOps build pipeline runs v0.1.5 of this npm package to export the ARM template and parameter file.
This has been working great until a new ADF trigger was added that accesses a parameter of type float.
In the source code of the pipeline this is shown correctly as a float:
"parameters": {
"threshold": {
"type": "float",
"defaultValue": 0.8
}
},
However, in the ARM template generated by the npm package, this is shown as an int:
"trgExportSpotlight_properties_MonitorSpotlightExtract_v2_parameters_threshold": {
"type": "int",
"defaultValue": 0.8
},
Has anyone else come across this?
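One thing that may help until the export issue is fixed (this is my own sketch, not from the question) is a small check in the build pipeline that scans the exported ARM template for parameters declared as int whose default value is not a whole number; the template path below is an assumption:
# Sketch: warn when the exported ARM template declares an int parameter with a
# fractional default value (the symptom described above). The path is an assumption.
$templatePath = "$(Build.ArtifactStagingDirectory)/ARMTemplateForFactory.json"
$template = Get-Content $templatePath -Raw | ConvertFrom-Json

foreach ($param in $template.parameters.PSObject.Properties) {
    $def = $param.Value
    if ($def.type -eq 'int' -and $null -ne $def.defaultValue -and ($def.defaultValue % 1) -ne 0) {
        Write-Warning "Parameter '$($param.Name)' is declared as int but has a non-integer default value ($($def.defaultValue))."
    }
}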

From within a Build/Release pipeline, can we discover its path?

In Azure DevOps, we can organize our Build/Release definitions into high-level folders:
Example: for every pipeline that resides in the Framework folder, I want to conditionally execute a certain task. The pre-defined Build and Release variables provide a plethora of ways to discover information about the underlying file system, but seemingly nothing for this internal path information.
During a pipeline run, is it possible to determine the folder/path that it resides in?
You can check it with the REST API Builds - Get:
GET https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}?api-version=6.0
In the response you get the definition details including the path:
"definition": {
"drafts": [
],
"id": 13,
"name": "TestBuild",
"url": "https://dev.azure.com/xxxxx/7fcdafd5-b891-4fe5-b2fe-xxxxxxx/_apis/build/Definitions/13?revision=1075",
"uri": "vstfs:///Build/Definition/13",
"path": "\\Test Folder",
"type": "build",
"queueStatus": "enabled",
"revision": 1075,
"project": {
"id": "7fcdafd5-b891-4fe5-b2fe-9b9axxxxx",
"name": "Sample",
"url": "https://dev.azure.com/xxxx/_apis/projects/7fcdafd5-b891-4fe5-b2fe-9xxxxxx",
"state": "wellFormed",
"revision": 97,
"visibility": "private",
"lastUpdateTime": "2021-03-22T10:25:39.33Z"
}
},
So:
Add a simple PowerShell script that invokes the REST API (with the $(Build.BuildId) pre-defined variable); a sketch is shown below
Check the value of the path property
If it contains the Framework folder, set a new variable with this command:
Write-Host "##vso[task.setvariable variable=isFramework;]true"
Now, in the task add a custom condition:
and(succeeded(), eq(variables['isFramework'], 'true'))
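A minimal sketch of those steps as an inline PowerShell task might look like this (it assumes 'Allow scripts to access the OAuth token' is enabled so $(System.AccessToken) is available):
# Sketch: look up the current build's definition path and flag Framework pipelines.
$url = "$(System.CollectionUri)$(System.TeamProject)/_apis/build/builds/$(Build.BuildId)?api-version=6.0"
$build = Invoke-RestMethod -Uri $url -Headers @{ Authorization = "Bearer $(System.AccessToken)" }

Write-Host "Definition path: $($build.definition.path)"

if ($build.definition.path -like '*Framework*') {
    # Make the flag available to later tasks in the job.
    Write-Host "##vso[task.setvariable variable=isFramework;]true"
}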

How do you add a checkbox input to an Azure DevOps Task Group?

In Azure DevOps, I have created a Task Group that runs Postman tests using the newman CLI. As inputs, users can pass in the paths to the Postman collection and environment files.
As the newman CLI is a requirement, the first task in the Task Group is to install it. However, in scenarios where several collections are run, there is no need to keep installing the CLI over and over, so I would like to offer a checkbox and then conditionally run the install task depending on the value of that checkbox.
As the UI for Task Groups is pretty lacking in useful options, I started exploring the API. I'm able to add additional inputs, but setting the obvious type option to checkbox yields only an additional text (string) input.
POST https://dev.azure.com/{org}/{project}/_apis/distributedtask/taskgroups?api-version=5.1-preview.1
{
    ...
    "inputs": [
        {
            "aliases": [],
            "options": {},
            "properties": {},
            "name": "Rbt.Cli.Install",
            "label": "Install 'newman' CLI?",
            "defaultValue": true,
            "required": false,
            "type": "checkbox",
            "helpMarkDown": "Choose whether or not to install the 'newman' CLI. You only need to install it if it hasn't already been installed by a previous task running on this job.",
            "groupName": ""
        },
        ...
    ],
    ...
}
Looking more closely at the documentation, there is a definition for inputs - TaskInputDefinition. However, it looks as though whoever was tasked with writing that documentation left early one day and never got around to it. There are no descriptions at all, making it impossible to know valid values for properties in the definition.
How can I add a checkbox to my Task Group?
I have now found that Task Groups offer picklist as an input type. This has allowed me to present a yes/no option to the user, and based on their answer I am able to conditionally run a task.
I would still prefer to have a checkbox though, should anyone know how to do that.
{
    "aliases": [],
    "options": {
        "yes": "Yes - install CLI",
        "no": "No - the CLI has already been installed"
    },
    "properties": {},
    "name": "Postman.Cli.Install",
    "label": "Install 'newman' CLI?",
    "defaultValue": "yes",
    "required": true,
    "type": "picklist",
    "helpMarkDown": "Choose whether or not to install the 'newman' CLI. You only need to install it if it hasn't already been installed by a previous task running on this job.",
    "groupName": ""
}
You can add a checkbox to a pipeline task easily by setting the type to boolean:
{
    "name": "Rbt.Cli.Install",
    "type": "boolean",
    "label": "Install 'newman' CLI?"
}
You can also control the visibility of other controls based on the checkbox state, as follows:
{
    "name": "someOtherField",
    "type": "string",
    "label": "Some other field",
    "visibleRule": "Rbt.Cli.Install = true"
},
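If you are creating the Task Group through the REST API as in the question, a rough sketch of posting a definition with such a boolean input could look like this ($org, $project and $pat are placeholders, and only the inputs fragment of the body is shown):
# Sketch: POST a task group definition containing a boolean input, using the
# same endpoint shown in the question. $org, $project and $pat are placeholders.
$uri  = "https://dev.azure.com/$org/$project/_apis/distributedtask/taskgroups?api-version=5.1-preview.1"
$auth = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

$body = @{
    # ...other required task group properties (name, category, tasks, etc.) elided...
    inputs = @(
        @{
            name         = 'Rbt.Cli.Install'
            type         = 'boolean'
            label        = "Install 'newman' CLI?"
            defaultValue = 'true'
            required     = $false
        }
    )
} | ConvertTo-Json -Depth 10

Invoke-RestMethod -Method Post -Uri $uri -Headers $auth -ContentType 'application/json' -Body $body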

Apply multiple DSCs through Azure Resource Manager

Is it possible to apply multiple DSC configurations to one vm through Azure Resource Manager?
Currently I am using something like this:
{
    "apiVersion": "2015-06-15",
    "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vm_name'))]"
    ],
    "location": "[resourceGroup().location]",
    "name": "DSCSetup",
    "properties": {
        "publisher": "Microsoft.Powershell",
        "type": "DSC",
        "typeHandlerVersion": "2.20",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "modulesUrl": "[concat('https://', variables('sa_name'), '.blob.core.windows.net/.../dsc.ps1.zip')]",
            "configurationFunction": "dsc.ps1\\Main",
            "properties": {
                "MachineName": "[variables('vm_name')]",
                "UserName": "[parameters('vm_user')]"
            }
        },
        "protectedSettings": {}
    },
    "type": "extensions"
}
If not, can you merge multiple DSCs automatically?
The scenario is:
Have multiple DSCs
One DSC for IIS + ASP.NET
One DSC to create Site1
Another DSC to create Site2
In Dev, deploy Site1 and Site2 to one machine
In Production, deploy to separate machines, maybe even in Availability Sets...
(Be prepared to use separate containers in the future)
There are a few approaches to this. One simple and useful approach that I use is nested configurations, to merge all DSC configurations into a single one.
You create configurations without any specific node, then create a configuration with a node that groups the needed configurations.
This simple example may serve as a guide to what I'm talking about. See the Microsoft docs on nested DSC configurations for more details.
Configuration WindowsUpdate
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Service ModulesInstaller {
        Name        = "TrustedInstaller"
        DisplayName = "Windows Modules Installer"
        StartupType = "Disabled"
        State       = "Stopped"
    }
}

Configuration ServerManager
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Registry DoNotOpenServerManagerAtLogon {
        Ensure    = "Present"
        Key       = "HKLM:\SOFTWARE\Microsoft\ServerManager"
        ValueName = "DoNotOpenServerManagerAtLogon"
        ValueData = 1
    }
}

Configuration VMConfig
{
    Node localhost
    {
        WindowsUpdate NestedConfig1 {}
        ServerManager NestedConfig2 {}
    }
}
With this approach, in each DSC extension I simply call the machine's entry configuration, which is just a composition of the configurations I want to apply.
"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.20",
"configuration": {
"url": "[concat(parameters('_artifactsLocation'), '/Configuration.zip')]",
"script": "Configuration.ps1",
"function": "VMConfig"
}
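As a side note, the Configuration.zip referenced above can be produced from the configuration script with the Az.Compute helper cmdlet; a minimal sketch (paths are placeholders):
# Sketch: package Configuration.ps1 (containing the configurations above) into
# the Configuration.zip archive referenced by the DSC extension settings.
Publish-AzVMDscConfiguration -ConfigurationPath .\Configuration.ps1 -OutputArchivePath .\Configuration.zip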
Another approach would be to apply multiple ARM DSC extension deployments to the same machine. The trick here is to always use the same extension name, since only one DSC extension can exist on a VM.
The caveat with this approach is that the previous configuration on the machine is overwritten. From a functional perspective the result may be the same, but if you want the DSC Local Configuration Manager to correct configuration drift, that will only be possible for the latest configuration.
DSC only allows for a single configuration at the moment, so if you deployed 2 DSC extensions to the same VM (I'm not sure it will actually work) the second config would overwrite the first.
You could probably stack DSC and CustomScript but since DSC can run script, I'm not sure why you'd ever need to do that...
What's your scenario?