Azure ARM Template for runbook with Powershell file from Azure Git Repo - azure-devops

We are trying to deploy a runbook with a PowerShell script that queries Log Analytics periodically. We have it working in the portal, and now we are trying to build an ARM template for future deployments to other environments. We have our ARM (JSON) template and the PS1 file in the same Azure DevOps Git repo, and we even tried hard-coding the path of the PS1 file in the template, but it doesn't work. Can someone please help us understand what we are doing wrong? Following is the ARM template:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"automationAccountName": {
"type": "string",
"defaultValue": "Automation-XXX-INFRA-MONITORING",
"metadata": {
"description": "Automation Account"
}
},
"automationRegion": {
"defaultValue": "eastus2",
"type": "string",
"allowedValues": [
"westeurope",
"southeastasia",
"eastus2",
"southcentralus",
"japaneast",
"northeurope",
"canadacentral",
"australiasoutheast",
"centralindia",
"westcentralus"
],
"metadata": {
"description": "Specify the region for your automation account"
}
},
"_artifactsLocation": {
"type": "string",
"defaultValue": "https://ABCD.visualstudio.com/3Pager/_git/Infrastructure?path=%2FAzure.Infra%2FAppInsights%2FMonthOverMonthTrendAnalysisReport.ps1&version=GBmaster",
"metadata": {
"description": "URI to artifacts location"
}
},
"_artifactsLocation1": {
"type": "string",
"defaultValue": "https://ABCD.visualstudio.com/3Pager/_git/Infrastructure?path=%2FAzure.Infra%2FAppInsights%2FMonthOverMonthTrendAnalysisReport.ps1&version=GBmaster",
"metadata": {
"description": "URI to artifacts location"
}
}
},
"variables": {
"asrScripts": {
"runbooks": [
{
"name": "Test_Runbook",
"url": "[parameters('_artifactsLocation')]",
"version": "1.0.0.0",
"type": "PowerShell",
"description": "Runbook for month over month trend analysis report"
}
]
}
},
"resources": [
{
"apiVersion": "2015-10-31",
"type": "Microsoft.Automation/automationAccounts/runbooks",
"name": "[concat(parameters('automationAccountName'), '/', variables('asrScripts').runbooks[copyIndex()].Name)]",
"location": "[parameters('automationRegion')]",
"copy": {
"name": "runbooksLoop",
"count": "[length(variables('asrScripts').runbooks)]"
},
"properties": {
"description": "[variables('asrScripts').runbooks[copyIndex()].description]",
"runbookType": "[variables('asrScripts').runbooks[copyIndex()].type]",
"logProgress": false,
"logVerbose": true,
"publishContentLink": {
"uri":"[parameters('_artifactsLocation1')]",
"version": "[variables('asrScripts').runbooks[copyIndex()].version]" }
}
}
],
"outputs": {}
}

You are sending ARM a URI telling it where to find the runbook content. When the automationAccounts/runbooks resource is created, Azure makes a GET call to publishContentLink.uri in order to fetch the content from that URI. If Azure can't access that URI (presumably your visualstudio.com URI is not publicly accessible), then it can't retrieve the runbook content and the deployment will fail.
The solution is to make sure the publishContentLink URI is accessible to the Azure Automation service. You can do this a couple of ways:
Put the content in a publicly accessible URI such as Github or a public Blob Storage container.
Create a SAS token to the content. https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-tutorial-secure-artifacts shows an example of how to do this with Azure Storage; see https://xebia.com/blog/setting-up-vsts-with-arm-templates/ for doing it with VSTS.
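For example, staging the PS1 in a blob container and handing the deployment a short-lived SAS URI could look roughly like this (a sketch using the Az PowerShell module; the resource group, storage account, container, and file names are placeholders, and the template is assumed to accept the URI as _artifactsLocation):
$key = (Get-AzStorageAccountKey -ResourceGroupName 'rg-automation' -Name 'stagingstore123')[0].Value
$ctx = New-AzStorageContext -StorageAccountName 'stagingstore123' -StorageAccountKey $key

# Upload the runbook script and create a read-only SAS URI valid for a few hours
Set-AzStorageBlobContent -File '.\MonthOverMonthTrendAnalysisReport.ps1' -Container 'artifacts' -Blob 'MonthOverMonthTrendAnalysisReport.ps1' -Context $ctx -Force
$sasUri = New-AzStorageBlobSASToken -Container 'artifacts' -Blob 'MonthOverMonthTrendAnalysisReport.ps1' -Permission r -ExpiryTime (Get-Date).AddHours(4) -Context $ctx -FullUri

# Pass the SAS URI to the template, where it ends up as publishContentLink.uri
New-AzResourceGroupDeployment -ResourceGroupName 'rg-automation' -TemplateFile '.\runbook-template.json' -TemplateParameterObject @{ _artifactsLocation = $sasUri }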

Leaving this here in case someone has a similar issue. For a private Azure DevOps repo it can be solved with an Azure Function with an HTTP trigger and the following PowerShell script:
using namespace System.Net

# Input bindings are passed in via param block.
param($Request, $TriggerMetadata)

# Interact with query parameters or the body of the request.
$adouri = $Request.Query.adouri
$pat = $Request.Query.pat

if (!$adouri) {
    $response = "Please pass uri"
    Write-Warning "URI not passed"
} elseif (!$pat) {
    $response = "Please pass pat"
    Write-Warning "PAT not passed"
} else {
    $base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f "", $pat)))
    $headers = @{
        "Authorization" = ("Basic {0}" -f $base64AuthInfo)
    }
    $response = Invoke-RestMethod -Uri $adouri -Method Get -ContentType "application/text" -Headers $headers
}

# Associate values to output bindings by calling 'Push-OutputBinding'.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body = $response
})
Once the function is ready, it can then be called directly from the ARM template, e.g.:
"variables": {
"runbookUri": "https://[function name].azurewebsites.net/api/[trigger name]?adouri=https://dev.azure.com/[ADO Organization]/[ADO project]/_apis/git/repositories/[Repo]/items?path=%2F[folder]%2F[subfolder]%2Fscript.ps1&pat=[token]"
},
"resources": [
{
"type": "Microsoft.Automation/automationAccounts/runbooks",
"apiVersion": "2018-06-30",
"name": "[name]",
"location": "[location]"
"properties": {
"runbookType": "PowerShell",
"logVerbose": false,
"logProgress": false,
"logActivityTrace": 0,
"publishContentLink": {
"uri": "[variables('runbookUri')]"
}
}
}]
The token is just a Personal Access Token from Azure DevOps. To keep it secure, it can be passed to the ARM template from a secret ADO variable or Azure Key Vault.
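For instance, a pipeline step could pull the PAT out of Key Vault and feed it to the deployment as a secure string (a sketch assuming the template is reworked to take the PAT as a securestring parameter named patToken; the vault and secret names are placeholders):
$pat = (Get-AzKeyVaultSecret -VaultName 'my-keyvault' -Name 'ado-pat').SecretValue   # SecureString, never written to logs
New-AzResourceGroupDeployment -ResourceGroupName 'rg-automation' `
    -TemplateFile '.\runbook-template.json' `
    -TemplateParameterObject @{ patToken = $pat }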

I overcame this limitation by using Terraform templates. They offer the possibility to publish custom content into a runbook using local paths, so you can do this either directly from the Git repository file system (your PC) or by linking an Azure Pipeline to your repository and using the file as an artifact.
If interested, check https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/automation_runbook under "Example Usage - custom content".

Arm template deployment fail with 409 error for one specific storage account

I use an ARM template to deploy a storage account. However, I got an error saying: StorageAccountAlreadyExists: The storage account named xxx already exists.
My release pipeline is set to incremental, so it shouldn't really show this error.
I changed the storage account name to a new one, and not only did it work the first time, I can keep deploying the same pipeline and no error is ever thrown.
It looks like something specific to this account; however, I can't see anything special about it. The ARM template we use is also quite normal (something we got from official examples a while back).
{
"$schema": "http://schema.management.azure.com/schemas/2019-06-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"StorageDescriptor": {
"type": "string",
"defaultValue": "StorageAccount",
"metadata": {}
},
"StorageAccountName": {
"type": "string",
"defaultValue": "[toLower(concat(parameters('StorageDescriptor'), resourceGroup().name))]",
"metadata": { "Description": "Override name for the storage account" }
},
"StorageType": {
"type": "string",
"defaultValue": "Standard_LRS",
"allowedValues": [
"Standard_LRS",
"Standard_ZRS",
"Standard_GRS",
"Standard_RAGRS",
"Premium_LRS"
]
},
"Environment": {
"type": "string",
"defaultValue": "PreProd",
"metadata": { "description": "PreProd or Prod" }
}
},
"variables": {
},
"resources": [
{
"name": "[parameters('StorageAccountName')]",
"type": "Microsoft.Storage/storageAccounts",
"location": "[resourceGroup().location]",
"apiVersion": "2019-06-01",
"dependsOn": [],
"tags": {
"displayName": "Web Job Storage Account"
},
"properties": {
"accountType": "[parameters('StorageType')]"
}
}
],
"outputs": {
}
}
Even though your release pipeline is set to incremental, the storage account name must be globally unique across Azure; a 409 StorageAccountAlreadyExists usually means the name is already taken, possibly in another resource group or subscription. Refer to: here.
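A quick way to confirm whether the generated name is actually free is to ask ARM before deploying (a sketch with the Az.Storage module; the account name is a placeholder):
$check = Get-AzStorageAccountNameAvailability -Name 'storageaccountmyrg'
if (-not $check.NameAvailable) {
    # Reason is typically 'AlreadyExists' or 'AccountNameInvalid'
    Write-Warning "$($check.Reason): $($check.Message)"
}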
Arm template deployment fail with 409 error for one specific storage account
You need to check whether the storage account's attributes have been changed through the Azure portal or PowerShell by somebody else and now differ from the ones specified in the ARM template.
To resolve this issue, try exporting the template and updating it in the Azure DevOps repo.
Then update the newly exported template file as needed and deploy with it.
As a test, I could keep deploying the same pipeline and no error was ever thrown.
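If it helps, the template currently backing the existing account can be exported and diffed against the one in the repo (a sketch with Az PowerShell; names are placeholders):
# Export the deployed resource group to a local template file for comparison
Export-AzResourceGroup -ResourceGroupName 'my-rg' -Path '.\exported-template.json'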

Deploying an Azure VM and Users with an ARM template and DSC

I am taking my first look at creating a DSC (Desired State Configuration) configuration to go with an ARM (Azure Resource Manager) template that deploys a Windows Server 2016 VM and additional local user accounts. So far the ARM template works fine, and for the DSC file I am using a simple example to test functionality. The deployment works fine until I try to pass a username/password so I can create a local Windows user account. I can't seem to make this work at all (see the error messages below).
My question is, how do I use the ARM template to pass the credentials (password) to the DSC (mof) file so that the user can be created without having to explicitly allow plain text passwords (which is not a good practice)?
This is what I have tried:
DSC file
Configuration xUser_CreateUserConfig {
[CmdletBinding()]
Param (
[Parameter(Mandatory = $true)]
[string]
$nodeName,
[Parameter(Mandatory = $true)]
[System.Management.Automation.PSCredential]
[System.Management.Automation.Credential()]
$Credential
)
Import-DscResource -ModuleName xPSDesiredStateConfiguration
Node $nodeName {
xUser 'CreateUserAccount' {
Ensure = 'Present'
UserName = Split-Path -Path $Credential.UserName -Leaf
Password = $Credential
}
}
}
Azure ARM Template Snippet 1st Method
"resources": [
{
"apiVersion": "2016-03-30",
"type": "extensions",
"name": "Microsoft.Powershell.DSC",
"location": "[parameters('location')]",
"tags": {
"DisplayName": "DSC",
"Dept": "[resourceGroup().tags['Dept']]",
"Created By": "[parameters('createdBy')]"
},
"dependsOn": [
"[resourceId('Microsoft.Compute/virtualMachines', concat(variables('vmNamePrefix'), copyIndex(1)))]"
],
"properties": {
"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.19",
"autoUpgradeMinorVersion": true,
"settings": {
"wmfVersion": "latest",
"modulesUrl": "[concat(variables('_artifactslocation'), '/', variables('dscArchiveFolder'), '/', variables('dscArchiveFileName'))]",
"configurationFunction": "xCreateUserDsc.ps1\\xUser_CreateUserConfig",
"properties": {
"nodeName": "[concat(variables('vmNamePrefix'), copyIndex(1))]",
"Credential": {
"UserName": "[parameters('noneAdminUsername')]",
"Password": "PrivateSettingsRef:UserPassword"
}
}
},
"protectedSettings": {
"Items": {
"UserPassword": "[parameters('noneAdminUserPassword')]"
}
}
}
}
]
Error message
The resource operation completed with terminal provisioning state 'Failed'. VM has reported a failure when processing extension 'Microsoft.Powershell.DSC'. Error message: "The DSC Extension received an incorrect input: Compilation errors occurred while processing configuration 'xUser_CreateUserConfig'. Please review the errors reported in error stream and modify your configuration code appropriately. System.InvalidOperationException error processing property 'Password' OF TYPE 'xUser': Converting and storing encrypted passwords as plain text is not recommended. For more information on securing credentials in MOF file, please refer to MSDN blog: http://go.microsoft.com/fwlink/?LinkId=393729
This error message does not help
Azure ARM Template snippet 2nd Method
"resources": [
{
"apiVersion": "2018-10-01",
"type": "extensions",
"name": "Microsoft.Powershell.DSC",
"location": "[parameters('location')]",
"tags": {
"DisplayName": "DSC",
"Dept": "[resourceGroup().tags['Dept']]",
"Created By": "[parameters('createdBy')]"
},
"dependsOn": [
"[resourceId('Microsoft.Compute/virtualMachines', concat(variables('vmNamePrefix'), copyIndex(1)))]"
],
"properties": {
"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.9",
"autoUpgradeMinorVersion": true,
"settings": {
"wmfVersion": "latest",
"configuration": {
"url": "[concat(variables('_artifactslocation'), '/', variables('dscArchiveFolder'), '/', variables('dscArchiveFileName'))]",
"script": "xCreateUserDsc.ps1",
"function": "xUser_CreateUserConfig"
},
"configurationArguments": {
"nodeName": "[concat(variables('vmNamePrefix'), copyIndex(1))]"
},
"privacy": {
"dataCollection": "Disable"
}
},
"protectedSettings": {
"configurationArguments": {
"Credential": {
"UserName": "[parameters('noneAdminUsername')]",
"Password": "[parameters('noneAdminUserPassword')]"
}
}
}
}
}
]
Error Message
VM has reported a failure when processing extension 'Microsoft.Powershell.DSC'. Error message: "The DSC Extension received an incorrect input: A parameter cannot be found that matches parameter name '$credential.Password'. Another common error is to specify parameters of type PSCredential without an explicit type. Please be sure to use a typed parameter in DSC Configuration, for example: configuration Example param([PSCredential] $UserAccount). Please correct the input and retry executing the extension. More information on troubleshooting is available at https://aka.ms/VMExtensionDSCWindowsTroubleshoot
This does not help!
I have been trying to solve this error for a couple of days. I have Googled for other examples but can only find examples of people deploying web servers, and Microsoft's documentation is no help because it tells you to use both of the above methods, when method 1 is the old way (according to Microsoft). So, any help will be much appreciated.
This is how I was setting up the parameter in the configuration:
# Credentials
[Parameter(Mandatory)]
[System.Management.Automation.PSCredential]$Admincreds,
and then in the template:
"properties": {
"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.19",
"autoUpgradeMinorVersion": true,
"settings": {
"configuration": xxx // doesn't matter for this question
"configurationArguments": yyy // doesn't matter for this question
},
"protectedSettings": {
"configurationArguments": {
"adminCreds": {
"userName": "someValue",
"password": "someOtherValue"
}
}
}
}
Links to working stuff:
https://github.com/Cloudneeti/PCI_Reference_Architecture/blob/master/templates/resources/AD/azuredeploy.json#L261
https://github.com/Cloudneeti/PCI_Reference_Architecture/blob/master/artifacts/configurationscripts/ad-domain.ps1#L11
PS: you might also need to do this. Honestly, I don't remember ;)
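For completeness, the matching DSC configuration declares the credential as an explicitly typed PSCredential whose parameter name lines up with the key used under protectedSettings.configurationArguments. A sketch combining the question's xUser example with the parameter shape above (adminCreds/$Admincreds is the name assumed in the snippet):
Configuration xUser_CreateUserConfig {
    param (
        [Parameter(Mandatory = $true)]
        [string]
        $nodeName,

        # Name and type must match protectedSettings.configurationArguments.adminCreds
        [Parameter(Mandatory = $true)]
        [System.Management.Automation.PSCredential]
        $Admincreds
    )

    Import-DscResource -ModuleName xPSDesiredStateConfiguration

    Node $nodeName {
        xUser 'CreateUserAccount' {
            Ensure   = 'Present'
            UserName = Split-Path -Path $Admincreds.UserName -Leaf
            Password = $Admincreds
        }
    }
}
Because the DSC extension passes protectedSettings encrypted and encrypts credentials into the MOF itself, the configuration does not need PSDscAllowPlainTextPassword.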

How to run a PowerShell script during Azure VM deployment with ARM template?

I want to deploy a VM in azure using Azure Resource Manager (ARM), and then run a PowerShell script inside the VM post deployment to configure it.
I can do this fine with something like this: https://github.com/Azure/azure-quickstart-templates/tree/master/201-vm-vsts-agent
However, that template grabs the PowerShell script from GitHub. As part of my deployment I want to upload the script to Azure Storage, and then have the VM get the script from Azure Storage and run it. How can I handle the dependency on the PowerShell script, since it has to exist in Azure Storage somewhere before being executed?
I currently have this to install a VSTS agent as part of a deployment, but the script is downloaded from GitHub. I don't want to do that; I want the installation script for the VSTS agent to be part of my ARM project.
{
"name": "vsts-build-agents",
"type": "extensions",
"location": "[parameters('location')]",
"apiVersion": "2017-12-01",
"dependsOn": [
"vsts-build-vm"
],
"tags": {
"displayName": "VstsInstallScript"
},
"properties": {
"publisher": "Microsoft.Compute",
"type": "CustomScriptExtension",
"typeHandlerVersion": "1.9",
"settings": {
"fileUris": [
"[concat(parameters('_artifactsLocation'), '/', variables('powerShell').folder, '/', variables('powerShell').script, parameters('_artifactsLocationSasToken'))]"
]
},
"protectedSettings": {
"commandToExecute": "[concat('powershell.exe -ExecutionPolicy Unrestricted -Command \"& {', './', variables('powerShell').script, ' ', variables('powerShell').buildParameters, '}\"')]"
}
}
}
I guess my question is really about how to set _azurestoragelocation to an Azure Storage location where the script has just been uploaded as part of the deployment.
Chicken-and-egg problem: you cannot upload to Azure Storage with an ARM template; you need a script to upload to Azure Storage, but if you already have that script on the VM to do the uploading, you don't really need to upload it in the first place.
That being said, why don't you use the VSTS agent extension?
{
"name": "xxx",
"apiVersion": "2015-01-01",
"type": "Microsoft.Resources/deployments",
"properties": {
"mode": "Incremental",
"templateLink": {
"uri": "https://gallery.azure.com/artifact/20161101/microsoft.vsts-agent-windows-arm.1.0.0/Artifacts/MainTemplate.json"
},
"parameters": {
"vmName": {
"value": "xxx"
},
"location": {
"value": "xxx"
},
"VSTSAccountName": {
"value": "xxx"
},
"TeamProject": {
"value": "xxx"
},
"DeploymentGroup": {
"value": "Default"
},
"AgentName": {
"value": "xxx"
},
"PATToken": {
"value": "xxx"
}
}
}
},
Do you mean how to set _artifactsLocation as in the quickstart sample? If so, you have 2 options (or 3, depending):
1) Use the script in the QS repo; the defaultValue for the _artifactsLocation param will set that for you.
2) If you want to customize, from your local copy of the sample just use the Deploy-AzureResourceGroup.ps1 in the repo and it will stage and set the value for you accordingly (when you use the -UploadArtifacts switch).
3) Stage the PS1 somewhere yourself and manually set the values of _artifactsLocation and _artifactsLocationSasToken (see the sketch below).
You can also deploy from gallery.azure.com, but that will force you to use the script that is stored in the gallery (same as using the defaults in GitHub).
That help?
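A rough sketch of option 3 with the Az modules (the resource group, storage account, container, and script paths are placeholders):
$key = (Get-AzStorageAccountKey -ResourceGroupName 'rg-build' -Name 'stageartifacts01')[0].Value
$ctx = New-AzStorageContext -StorageAccountName 'stageartifacts01' -StorageAccountKey $key

# Stage the installation script in a private container
Set-AzStorageBlobContent -File '.\scripts\InstallVstsAgent.ps1' -Container 'artifacts' -Blob 'scripts/InstallVstsAgent.ps1' -Context $ctx -Force

# Read-only SAS for the whole container (depending on module version the token may or may not start with '?')
$sas = New-AzStorageContainerSASToken -Name 'artifacts' -Permission r -ExpiryTime (Get-Date).AddHours(4) -Context $ctx

New-AzResourceGroupDeployment -ResourceGroupName 'rg-build' -TemplateFile '.\azuredeploy.json' -TemplateParameterObject @{
    _artifactsLocation         = 'https://stageartifacts01.blob.core.windows.net/artifacts'
    _artifactsLocationSasToken = (ConvertTo-SecureString $sas -AsPlainText -Force)
}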

Azure Automation Registration Endpoint is corrupted when used to pull DSC configuration

For some reason, I keep getting these weird issues...
In this case, I have a key and an endpoint URL for the Automation Account stored as secrets in a Key Vault (I don't know of a way to extract them natively from the Automation Account using ARM).
I can extract these values perfectly and they are published to the template that runs a PowerShell extension to pull a DSC configuration.
For example as seen as an Input deploying the Template:
"RegistrationUrl":"https://ase-agentservice-prod-1.azure-automation.net/accounts/e0799801-a8da-8934-b0f3-9a43191dd7e6"
However, I receive the following error (note the URL in the error with 3 forward slashes):
"code": "VMExtensionProvisioningError",
"message": "VM has reported a failure when processing extension 'dscLcm'.
Error message: "DSC Configuration 'ConfigureLCMforAAPull' completed with error(s). Following are the first few: The attempt to 'get an action' for AgentId 11A5A267-6D00-11E7-B07F-000D3AE0FB1B from server URL https://ase-agentservice-prod-1.azure-automation.net///accounts/e0799801-a8da-8934-b0f3-9a43191dd7e6/Nodes(AgentId='11A5A267-6D00-11E7-B07F-000D3AE0FB1B')/GetDscAction failed with server error 'ResourceNotFound(404)'.
For further details see the server error message below or the DSC debug event log with ID 4339.
ServerErrorMessage:- 'No NodeConfiguration was found for the agent.'".
The endpoint URL is passed as a secure string. I tried passing it as a normal string - same problem.
The key and endpoint are fed into the template as parameters:
"dscKeySecret": {
"type": "securestring",
"metadata": {
"description": "Key for PowerShell DSC Configuration."
}
},
"dscUrlSecret": {
"type": "securestring",
"metadata": {
"description": "Url for PowerShell DSC Configuration."
}
},
These values are used to create a parameter to be passed to the next template that runs the VM Extension.
"extn-settings": {
"value": {
"configuration": {
"url": "[concat(variables('urls').dscScripts, '/', 'lcm-aa-pull', '/', 'lcm-aa-pull', '.zip')]",
"script": "[concat('lcm-aa-pull', '.ps1')]",
"function": "ConfigureLCMforAAPull"
},
"configurationArguments": {
"registrationKey": {
"username": "dsckeySecret",
"password": "[parameters('dscKeySecret')]"
},
"registrationUrl": "[parameters('dscUrlSecret')]",
"configurationMode": "ApplyAndMonitor",
"configurationModeFrequencyMins": 15,
"domain": "[variables('names').domain]",
"name": "dscLcm",
"nodeConfigurationName": "[variables('names').config.ad]",
"rebootNodeIfNeeded": true,
"refreshFrequencyMins": 30
},
"protectedSettings": null,
}
}
The next template receives the parameters and uses them in the properties of the VM's resources section:
"properties": {
"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.22",
"autoUpgradeMinorVersion": true,
"settings": {
"configuration": "[parameters('extn-settings').configuration]",
"configurationArguments": "[parameters('extn-settings').configurationArguments]"
},
"protectedSettings": "[parameters('extn-settings').protectedSettings]"
}
So why is the URL being corrupted, with the first '/' being changed to '///'?
I don't know why the endpoint URL has 3 x '/', but that wasn't the issue... I wish I had found the real issue before I posted this question...
I found the node configuration name was wrong because of a spelling mistake (hangs head in shame).
Thanks anyway!
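(For reference: the registration endpoint and key can also be read with Azure PowerShell and pushed into Key Vault rather than copying them by hand - a rough sketch assuming the Az.Automation and Az.KeyVault modules, with placeholder resource names:)
$reg = Get-AzAutomationRegistrationInfo -ResourceGroupName 'rg-automation' -AutomationAccountName 'aa-prod'

# Store the values as the secrets that the template later reads as dscUrlSecret / dscKeySecret
Set-AzKeyVaultSecret -VaultName 'kv-prod' -Name 'dscUrlSecret' -SecretValue (ConvertTo-SecureString $reg.Endpoint -AsPlainText -Force)
Set-AzKeyVaultSecret -VaultName 'kv-prod' -Name 'dscKeySecret' -SecretValue (ConvertTo-SecureString $reg.PrimaryKey -AsPlainText -Force)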

Azure powershell cmdlet for creating Azure Service bus queue and topics

Are there any Azure PowerShell cmdlets for creating and managing Azure Service Bus queues and topics?
The following link does not provide this:
http://msdn.microsoft.com/library/windowsazure/jj983731.aspx
Is Microsoft planning to release this soon?
There currently are no PowerShell cmdlets for Service Bus queues and topics. There are cmdlets for creating the namespaces and some of the ACS entities, but nothing for brokered messaging yet. The Azure cross-platform command line tool has the same capabilities.
You can monitor what's in the PowerShell cmdlets, or even see pre-release bits, in the Windows Azure SDK-Tools repo on GitHub. This is a public repo of the code that makes up the PowerShell cmdlets.
I've seen no public announcements about if/when this functionality will be added to the Cmdlets or the CLI tools.
Some documentation about this scenario was recently added:
http://azure.microsoft.com/en-us/documentation/articles/service-bus-powershell-how-to-provision/
The summary is that there are only a limited number of PowerShell cmdlets that relate to Service Bus. However, you can reference the NuGet packages and use the types there to do anything that is available in the client libraries.
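For example, creating a queue and a topic from PowerShell by loading the Service Bus client library might look roughly like this (a sketch; the DLL path, namespace, and entity names are placeholders, assuming the WindowsAzure.ServiceBus NuGet package has been restored locally):
# Load the brokered-messaging client assembly from the NuGet package (path is an assumption)
Add-Type -Path '.\packages\WindowsAzure.ServiceBus\lib\net45-full\Microsoft.ServiceBus.dll'

$connectionString = 'Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<key>'
$nsManager = [Microsoft.ServiceBus.NamespaceManager]::CreateFromConnectionString($connectionString)

if (-not $nsManager.QueueExists('orders')) { $nsManager.CreateQueue('orders') | Out-Null }
if (-not $nsManager.TopicExists('events')) { $nsManager.CreateTopic('events') | Out-Null }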
Save the template given below in a JSON file and execute the PowerShell script given below:
----------------------------------------------------------------
param(
    [Parameter(Mandatory=$True)]
    [string]
    $resourceGroupName,

    [string]
    $resourceGroupLocation,

    [Parameter(Mandatory=$True)]
    [string]
    $templateFilePath = "C:\ARM-ServiceBus\sb_template.json",

    [string]
    $parametersFilePath = "C:\ARM-ServiceBus\sb_parameters.json"
)

#***********************************************************************
# Script body
# Execution begins here
#***********************************************************************
$ErrorActionPreference = "Stop"

$subscriptionId = '1234-545-474f-4544-5454454545'

# sign in
Write-Host "Logging in...";
Login-AzureRmAccount;

# select subscription
Write-Host "Selecting subscription '$subscriptionId'";
Select-AzureRmSubscription -SubscriptionID $subscriptionId;

# Create or check for existing resource group
$resourceGroup = Get-AzureRmResourceGroup -Name $resourceGroupName -ErrorAction SilentlyContinue
if (!$resourceGroup)
{
    Write-Host "Resource group '$resourceGroupName' does not exist. To create a new resource group, please enter a location.";
    if (!$resourceGroupLocation) {
        $resourceGroupLocation = Read-Host "resourceGroupLocation";
    }
    Write-Host "Creating resource group '$resourceGroupName' in location '$resourceGroupLocation'";
    New-AzureRmResourceGroup -Name $resourceGroupName -Location $resourceGroupLocation
}
else {
    Write-Host "Using existing resource group '$resourceGroupName'";
}

# Start the deployment
Write-Host "Starting deployment...";
if (Test-Path $parametersFilePath) {
    New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile $templateFilePath -TemplateParameterFile $parametersFilePath;
} else {
    New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile $templateFilePath;
}
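For example, after saving the script above as sb_deploy.ps1 (any name works), it could be invoked like this:
.\sb_deploy.ps1 -resourceGroupName 'rg-servicebus' -resourceGroupLocation 'eastus' -templateFilePath '.\sb_template.json' -parametersFilePath '.\sb_parameters.json'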
--------------Template(sb_template.json)--------------------------
{
"$schema": "http://schema.management.azure.com/schemas/2014-04-01-
preview/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"serviceBusNamespaceName": {
"type": "string",
"metadata": {
"description": "Name of the Service Bus namespace"
}
},
"serviceBusQueueName": {
"type": "string",
"metadata": {
"description": "Name of the Queue"
}
},
"serviceBusApiVersion": {
"type": "string",
"defaultValue": "2015-08-01",
"metadata": {
"description": "Service Bus ApiVersion used by the template"
}
}
},
"variables": {
"location": "[resourceGroup().location]",
"sbVersion": "[parameters('serviceBusApiVersion')]",
"defaultSASKeyName": "RootManageSharedAccessKey",
"authRuleResourceId": "
[resourceId('Microsoft.ServiceBus/namespaces/authorizationRules',
parameters('serviceBusNamespaceName'),
variables('defaultSASKeyName'))]"
},
"resources": [{
"apiVersion": "[variables('sbVersion')]",
"name": "[parameters('serviceBusNamespaceName')]",
"type": "Microsoft.ServiceBus/Namespaces",
"location": "[variables('location')]",
"kind": "Messaging",
"sku": {
"name": "StandardSku",
"tier": "Standard",
"capacity": 1
},
"resources": [{
"apiVersion": "[variables('sbVersion')]",
"name": "[parameters('serviceBusQueueName')]",
"type": "Queues",
"dependsOn": [
"[concat('Microsoft.ServiceBus/namespaces/', parameters('serviceBusNamespaceName'))]"
],
"properties": {
"path": "[parameters('serviceBusQueueName')]"
}
}]
}],
"outputs": {
"NamespaceConnectionString": {
"type": "string",
"value": "[listkeys(variables('authRuleResourceId'), variables('sbVersion')).primaryConnectionString]"
},
"SharedAccessPolicyPrimaryKey": {
"type": "string",
"value": "[listkeys(variables('authRuleResourceId'), variables('sbVersion')).primaryKey]"
}
}
}