Problems deploying DSC Extension for Azure Resource Manager template - powershell

I'm trying to deploy an Azure Resource Manager template that provisions Windows virtual machines.
Currently, I'm bootstrapping an IIS PowerShell script onto the VM through the DSC extension to set up IIS on a Windows virtual machine provisioned through ARM.
I keep getting this error related to WinRM:
New-AzureRmResourceGroupDeployment : 5:04:53 PM - Resource Microsoft.Compute/virtualMachines/extensions 'vmSVX-TESTAU-SQL1/dscExtension' failed with message '{
  "status": "Failed",
  "error": {
    "code": "ResourceDeploymentFailure",
    "message": "The resource operation completed with terminal provisioning state 'Failed'.",
    "details": [
      {
        "code": "VMExtensionProvisioningError",
        "message": "VM has reported a failure when processing extension 'dscExtension'. Error message: \"DSC Configuration 'vmDSC' completed with error(s). Following are the first few: The WinRM client cannot process the request. If the authentication scheme is different from Kerberos, or if the client computer is not joined to a domain, then HTTPS transport must be used or the destination machine must be added to the TrustedHosts configuration setting. Use winrm.cmd to configure TrustedHosts. Note that computers in the TrustedHosts list might not be authenticated. You can get more information about that by running the following command: winrm help config.\"."
      }
    ]
  }
}'
The ARM Template related to the provisioning of this VM:
{
  "apiVersion": "2016-03-30",
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(variables('vmNameSQL'), '/', 'dscExtension')]",
  "location": "[variables('location')]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmNameSQL'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "2.9",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "configuration": {
        "url": "[variables('dscModulesUrl')]",
        "script": "[concat(variables('dscFunction'),'.ps1')]",
        "function": "[variables('dscFunction')]"
      },
      "configurationArguments": {
        "nodeName": "[variables('vmNameSQL')]"
      }
    },
    "protectedSettings": {
      "configurationUrlSasToken": "[parameters('_artifactsLocationSasToken')]"
    }
  }
}
And here is the IIS PowerShell DSC configuration that is bootstrapped to the VM:
Configuration WindowsFeatures
{
    param ([string[]]$NodeName = 'localhost')
    Node $NodeName
    {
        # Install the IIS role
        WindowsFeature IIS
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }
    }
}

After a chat with various parties, we ended up removing the
    "configurationArguments": {
        "nodeName": "[variables('vmNameSQL')]"
    }
block from the ARM template and removing
    param ([string[]]$NodeName = 'localhost')
from the DSC configuration. We also set the Node to "localhost".
#iteong was able to test this new configuration and it worked.
Another point to add is that the full error message was different from what was shown above:
[ERROR] A parameter cannot be found that matches parameter name 'nodeName'.\n\nAnother common error is to specify parameters of type PSCredential without an explicit type.

When using the VM extension to apply a DSC configuration via ARM templates, the Node parameter must always be localhost.
When pulling a DSC configuration from Azure Automation, that is when you can use variables and do some fancy work to determine which node receives which configuration.
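As a sketch only, the corrected configuration from this question would then look roughly like this (the WindowsFeatures name and the IIS feature come from the original script; everything else is assumed):

Configuration WindowsFeatures
{
    # No $NodeName parameter: the DSC extension always applies the
    # configuration to the local machine, so the node is hard-coded.
    Node 'localhost'
    {
        # Install the IIS role
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}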

Related

Deploying azure storage fileServices/shares - error: The value for one of the HTTP headers is not in the correct format

As part of a durable function app deployment, I am deploying Azure storage.
On deploying the fileServices/shares, I am getting the following error:
"error": {
  "code": "InvalidHeaderValue",
  "message": "The value for one of the HTTP headers is not in the correct format.\nRequestId:6c0b3fb0-701a-0058-0509-a8af5d000000\nTime:2022-08-04T13:49:24.6378224Z"
}
I would appreciate any advice, as this is eating up a lot of time and I am no closer to resolving it.
The section of the ARM template for the share deployment is below:
{
  "type": "Microsoft.Storage/storageAccounts/fileServices/shares",
  "apiVersion": "2021-09-01",
  "name": "[concat(parameters('storageAccount1_name'), '/default/FuncAppName')]",
  "dependsOn": [
    "[resourceId('Microsoft.Storage/storageAccounts/fileServices', parameters('storageAccount1_name'), 'default')]",
    "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount1_name'))]"
  ],
  "properties": {
    "accessTier": "TransactionOptimized",
    "shareQuota": 5120,
    "enabledProtocols": "SMB"
  }
}
The answer to this: removing the property "accessTier": "TransactionOptimized" resolves the issue. The default value for this property is TransactionOptimized anyway.
Although the template exported from the Azure portal includes this property, deployment fails if it is present.
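As a side note, if it helps to verify the behaviour outside the template, a minimal PowerShell sketch of creating the share without specifying an access tier could look like this (it assumes the Az.Storage module; the resource group, account and share names are placeholders, not from the question):

# Create the file share without an explicit access tier, so the service
# applies its default (TransactionOptimized).
New-AzRmStorageShare `
    -ResourceGroupName 'my-rg' `
    -StorageAccountName 'mystorageaccount' `
    -Name 'funcappshare' `
    -QuotaGiB 5120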

##[error]ResourceNotFound: The Resource 'Microsoft.Web/sites/xx' under resource group 'yy' was not found in deploying ARM template

I am getting a "resource not found in resource group" error while deploying an ARM template. Could someone please help? Below is the sample template used:
{
  "name": "[variables('AppName')]",
  "type": "Microsoft.Web/sites",
  "apiVersion": "2016-08-01",
  "kind": "app",
  "location": "xx",
  "identity": {
    "type": "SystemAssigned"
  },
  "properties": {
    "httpsOnly": true,
    "clientAffinityEnabled": false,
    "serverFarmId": "xx"
  },
  "resources": [
    {
      "name": "appsettings",
      "type": "config",
      "apiVersion": "2016-08-01",
      "properties": {
        xx:xx
      },
      "dependsOn": [
        "[resourceId('Microsoft.Web/sites', variables('AppName'))]",
        "[resourceId('Microsoft.KeyVault/vaults/secrets', variables('keyVaultName'),'xx')]",
        "[resourceId('Microsoft.KeyVault/vaults/secrets', variables('keyVaultName'),'xx')]",
        "[resourceId('Microsoft.KeyVault/vaults/secrets', variables('keyVaultName'),'xx')]"
      ]
    }
  ]
},
{
  "type": "Microsoft.Web/sites/config",
  "apiVersion": "2016-08-01",
  "name": "[concat(variables('AppName'), '/web')]",
  "location": "xx",
  "dependsOn": [
    "[resourceId('Microsoft.Web/sites', variables('AppName'))]"
  ]
}
Let me know if this is the right way to do it.
It's hard to tell without having the exact template and all the variables/parameters, but generally it means one of the following:
the wrong name is used somewhere for the resources that depend on the web app
the wrong location is used somewhere for the resources that depend on the web app
dependsOn isn't set up properly, so deployment doesn't wait for the web app and attempts to create a resource in parallel with it
Have you ever deployed successfully with the same ARM template before?
Also, kindly check whether you can deploy successfully with a script run locally, without using Azure DevOps. This will help narrow down the issue.
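As a rough local test, assuming the Az PowerShell module (the template and parameter file names are placeholders, not from the question; 'yy' is the resource group from the error message):

# Validate the template first, then run the deployment locally to rule out
# pipeline-specific problems.
Test-AzResourceGroupDeployment `
    -ResourceGroupName 'yy' `
    -TemplateFile '.\template.json' `
    -TemplateParameterFile '.\parameters.json'

New-AzResourceGroupDeployment `
    -ResourceGroupName 'yy' `
    -TemplateFile '.\template.json' `
    -TemplateParameterFile '.\parameters.json'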
##[error]ResourceNotFound: The Resource 'Microsoft.Web/sites/xx' under resource group 'yy' was not found in deploying ARM template
This error indicates that Resource Manager needs to retrieve the properties of a resource but can't find that resource in your subscription.
You could give the solutions below a try:
Solution 1 - check resource properties
Solution 2 - set dependencies
Solution 3 - get external resource
Solution 4 - get managed identity from resource
Solution 5 - check functions
For more details, please take a look at the official doc: Resolve resource not found errors.
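For Solution 1, a quick existence check can confirm whether the resource really exists under the expected name. A minimal sketch, assuming the Az PowerShell module ('xx' and 'yy' are the placeholders from the error message):

# Returns the resource if it exists; an empty result means Resource Manager
# genuinely cannot find a site with that name in that resource group.
Get-AzResource `
    -ResourceGroupName 'yy' `
    -ResourceType 'Microsoft.Web/sites' `
    -Name 'xx'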

How to create a data factory's integration runtime in an ARM template

I am trying to deploy a data factory using an ARM template. It is easy to use the exported template to create a deployment pipeline.
However, as the data factory needs to access an on-premises database server, I need a self-hosted integration runtime. The problem is how to include the runtime in the ARM template.
The template looks like this, and we can see that it is trying to reference the runtime:
{
  "name": "[concat(parameters('factoryName'), '/OnPremisesSqlServer')]",
  "type": "Microsoft.DataFactory/factories/linkedServices",
  "apiVersion": "2018-06-01",
  "properties": {
    "annotations": [],
    "type": "SqlServer",
    "typeProperties": {
      "connectionString": "[parameters('OnPremisesSqlServer_connectionString')]"
    },
    "connectVia": {
      "referenceName": "OnPremisesSqlServer",
      "type": "IntegrationRuntimeReference"
    }
  },
  "dependsOn": [
    "[concat(variables('factoryId'), '/integrationRuntimes/OnPremisesSqlServer')]"
  ]
},
{
  "name": "[concat(parameters('factoryName'), '/OnPremisesSqlServer')]",
  "type": "Microsoft.DataFactory/factories/integrationRuntimes",
  "apiVersion": "2018-06-01",
  "properties": {
    "type": "SelfHosted",
    "typeProperties": {}
  },
  "dependsOn": []
}
Running this template gives me this error:
\"connectVia\": {\r\n \"referenceName\": \"OnPremisesSqlServer\",\r\n \"type\": \"IntegrationRuntimeReference\"\r\n }\r\n }\r\n} and error is: Failed to encrypted linked service credentials on self-hosted IR 'OnPremisesSqlServer', reason is: NotFound, error message is: No online instance..
The problem is that I will need to type a key into the integration runtime's UI so it can be registered in Azure, but I can only get that key from my data factory instance's UI. So the ARM template deployment above will always fail at least once. I am wondering if there is a way to create the runtime independently?
The problem is that I will need to type a key into the integration runtime's UI so it can be registered in Azure, but I can only get that key from my data factory instance's UI. So the ARM template deployment above will always fail at least once. I am wondering if there is a way to create the runtime independently?
It seems that you already know how to create a self-hosted IR in the ADF ARM template:
{
  "name": "[concat(parameters('dataFactoryName'), '/integrationRuntime1')]",
  "type": "Microsoft.DataFactory/factories/integrationRuntimes",
  "apiVersion": "2018-06-01",
  "properties": {
    "additionalProperties": {},
    "description": "jaygongIR1",
    "type": "SelfHosted"
  }
}
Your only concern is that the Windows IR tool needs to be configured with an AUTHENTICATION KEY to access the ADF self-hosted IR node, so the node will show an Unavailable status right after it is created. That flow makes sense, I think: the authentication key has to be created first, and only then can you use it to configure the on-premises tool. You can't implement everything in one step, because these operations happen on both the Azure side and the on-premises side.
Based on the self-hosted IR tool documentation, the Register step can't be implemented with PowerShell code. So the steps that can be handled in the flow are creating the IR and getting the auth key, not registering the key in the tool.
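For the part that can be automated, a minimal sketch of fetching the authentication key once the IR resource exists, assuming the Az.DataFactory module (the resource group and factory names are placeholders; the IR name comes from the question's template):

# Retrieve the authentication keys for the self-hosted IR; these are what the
# on-premises tool needs during registration.
Get-AzDataFactoryV2IntegrationRuntimeKey `
    -ResourceGroupName 'my-rg' `
    -DataFactoryName 'myFactory' `
    -Name 'OnPremisesSqlServer'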

Get Azure VM status: "running, stopped" using Resource Manager deployment and REST API

I've deployed a VM using the Resource Manager deployment model.
Using the REST API as described here: https://msdn.microsoft.com/en-us/library/azure/mt163682.aspx
I'm able to get information about my VM, but I cannot see whether the VM is running or not. I want that information in order to start/stop the VM automatically via code.
Has anyone tried that and managed to get the VM power state?
Best regards...
I make a GET request using this URI:
string.Format("https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Compute/virtualMachines/{2}?api-version={3}", subscriptionID, resssourcegroup, vmname,apiversion);
apiversion is 2016-03-30.
The API call for this information is:
https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Compute/virtualMachines/{vm-name}/InstanceView?api-version={api-version}
I needed to use the second request URI, "Get information about the instance view of a virtual machine", from https://msdn.microsoft.com/en-us/library/azure/mt163682.aspx to get the instance power state.
Thank you.
This is the link to the documentation where you can see the status of the VM:
https://learn.microsoft.com/en-us/rest/api/compute/virtual-machines/instance-view?tabs=HTTP
This is an example of the output:
"statuses": [
{
"code": "ProvisioningState/succeeded",
"level": "Info",
"displayStatus": "Provisioning succeeded",
"time": "2022-07-25T02:12:52.7726725+00:00"
},
{
"code": "PowerState/running",
"level": "Info",
"displayStatus": "VM running"
}
]
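To consume this from code, here is a minimal PowerShell sketch of calling the instance view endpoint and pulling out the power state. It assumes you already have a bearer token in $token, and $subscriptionId, $resourceGroup and $vmName are placeholders for your own values:

# Query the instance view and extract the PowerState/* status entry.
$uri = "https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Compute/virtualMachines/$vmName/instanceView?api-version=2016-03-30"

$instanceView = Invoke-RestMethod -Method Get -Uri $uri `
    -Headers @{ Authorization = "Bearer $token" }

# statuses holds both ProvisioningState/* and PowerState/* entries.
$instanceView.statuses |
    Where-Object { $_.code -like 'PowerState/*' } |
    Select-Object -ExpandProperty displayStatus   # e.g. "VM running"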

Automatically schedule future deployment in Octopus

Update: I found that executing a script on the Octopus server is now available in version 3.3. I haven't updated my Octopus yet, but I will assume that works as designed. I'm still wondering if there is a better way to do this without octo.exe.
The task I'm trying to accomplish is: after each successful production deployment, automatically schedule a DR deployment to happen within the next 24 hours.
My desired approach is to have Octopus do it.
I added a new Octopus step at the end of the deployment that only runs upon success of the previous step. In the newly created step, I attempted to use octo deploy-release --deployAt (documentation can be found here).
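For illustration only, the kind of call the step script makes looks roughly like this; all of the values are placeholders rather than anything from my setup, and the --deployat timestamp format would need checking against the octo.exe documentation:

# Schedule the DR deployment for 24 hours from now using octo.exe.
$octoExe  = 'C:\Tools\octo.exe'                        # placeholder path
$deployAt = (Get-Date).AddHours(24).ToString('yyyy-MM-dd HH:mm')

& $octoExe deploy-release `
    --server   'https://octopus.example.com' `
    --apiKey   'API-XXXXXXXX' `
    --project  'MyProject' `
    --deployto 'DR' `
    --version  latest `
    --deployat "$deployAt"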
My challenge is that a script step requires me to pick a target role, which means it will be executed on a Tentacle. The presence of octo.exe is also required.
I tried to create my own Octopus step template, but a deployment target role is still required in my customized step:
{
  "Id": "ActionTemplates-2",
  "Name": "Octopus - Schedule Deployment",
  "Description": "Schedule a future octopus deployment",
  "ActionType": "Octopus.Script",
  "Version": 3,
  "Properties": {
    "Octopus.Action.Script.Syntax": "PowerShell",
    "Octopus.Action.Script.ScriptBody": "--hide--"
  },
  "SensitiveProperties": {},
  "Parameters": [
    {
      "Name": "OctoPath",
      "Label": "Path for Octo.exe",
      "HelpText": "Location for octo.exe",
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    },
    {
      "Name": "projName",
      "Label": "Project Name",
      "HelpText": "The name of the project should be deployed",
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    },
    {
      "Name": "days",
      "Label": "Days",
      "HelpText": "The days in future this deployment would happen",
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    },
    {
      "Name": "hours",
      "Label": "Hours",
      "HelpText": "The hours in future this deployment would happen",
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    },
    {
      "Name": "env",
      "Label": "Environment to deploy",
      "HelpText": "The environment next deployment should happen",
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    }
  ],
  "$Meta": {
    "ExportedAt": "2016-04-20T13:58:54.263Z",
    "OctopusVersion": "3.2.0",
    "Type": "ActionTemplate"
  }
}
Is there a way to alter the template to get rid of the role selection and have the Octopus server execute it directly, as it does for an Azure script step?
Is there any other way to have the Octopus server automatically schedule the deployment without external help? I guess this goes back to the first problem: I may still need Octopus to run something on the server side.
Note: we kick off production deployments manually, so I don't have another tool waiting for the response of the deployment. I think it is possible to have a process regularly poll for the last deployment, do some analysis, and then schedule a new deployment accordingly, but that is not as clean as having Octopus do it directly. Injecting octo.exe onto a random production machine is not desired at all.
You could create a new Web API project in C#, pull in the Octopus.Deploy NuGet package,
write code that accepts HTTP requests, and deal with the scheduling logic there.
Host that project on the same server as the Octopus server itself. It should be a 20-30 minute job to set the website up in IIS.
In your deployment process, add a step that makes the HTTP request, and you're done. You could go even one step further and have the site/service listen for every successful deployment and make decisions based on that, so that other projects don't have to add extra steps to their Octopus deployment process.
As you said, polling is also a viable option.
Alternatively, if you're on Octopus Deploy 3.0, it already exposes a REST API. I am not sure whether it is powerful enough to let you create a scheduled deployment, but you could explore that: https://github.com/OctopusDeploy/OctopusDeploy-Api/wiki/Releases
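If the API does support it, a rough sketch of what scheduling through it could look like (the QueueTime property and the IDs below are assumptions based on what octo.exe's --deployAt appears to do, not something I've verified against your Octopus version):

# Ask the Octopus REST API to create a deployment queued 24 hours from now.
$octopusUrl = 'https://octopus.example.com'              # placeholder
$headers    = @{ 'X-Octopus-ApiKey' = 'API-XXXXXXXX' }   # placeholder key

$body = @{
    ReleaseId     = 'Releases-123'      # placeholder release
    EnvironmentId = 'Environments-42'   # placeholder DR environment
    QueueTime     = (Get-Date).AddHours(24).ToUniversalTime().ToString('o')
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri "$octopusUrl/api/deployments" `
    -Headers $headers -Body $body -ContentType 'application/json'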
I agree that floating octo.exe around production servers is a bad idea. It might get out of sync, and your production servers shouldn't have to deal with this.