Azure virtual machine diagnostics event log level - event-log

I'm trying to configure an Azure VM (Windows Server 2012 R2) to log event logs to Azure Storage.
I've used the following .wadcfg file
<WadCfg>
  <DiagnosticMonitorConfiguration>
    <WindowsEventLog scheduledTransferLogLevelFilter="Verbose" scheduledTransferPeriod="PT1M">
      <DataSource name="Application!*" />
    </WindowsEventLog>
    <PerformanceCounters scheduledTransferPeriod="PT5M">
      <PerformanceCounterConfiguration counterSpecifier="\Processor(_Total)\% Processor Time" sampleRate="PT1M" />
    </PerformanceCounters>
  </DiagnosticMonitorConfiguration>
</WadCfg>
and I've used the following PowerShell script to install it:
$serviceName = "x"
$vmname = "x"
$diagnosticsFilePath = "x\diagnostics.wadcfg"
$storageName = "x"
$storageKey = (Get-AzureStorageKey $storageName).Primary
$storageAccount = New-AzureStorageContext -StorageAccountName $storageName -StorageAccountKey $storageKey
(Get-AzureVM -ServiceName $serviceName -Name $vmname) | Set-AzureVMDiagnosticsExtension -DiagnosticsConfigurationPath $diagnosticsFilePath -StorageContext $storageAccount -Version 1.3 | Update-AzureVM
When I run the script without the 'scheduledTransferLogLevelFilter="Verbose"' attribute it works fine. The script finishes and the event logs show up in storage. But when I try to run the script with the .wadcfg as posted above, I get the following error in the Application event log on the VM, source AzureDiagnostics.
Microsoft.Azure.Plugins.Plugin.WadConfigValidatorException: Exception when parsing configuration: Error: The 'scheduledTransferLogLevelFilter' attribute is not declared. line: 2 column: 18
at Microsoft.Azure.Plugins.Plugin.WadConfigValidator`1.Initialize(String configString, String schemaPath)
at Microsoft.Azure.Plugins.Plugin.WadParser.Parse()
followed by
Failed to parse the WAD config file
and lastly
DiagnosticsPlugin launch failed with exit code -106
The same thing happens if I try to include 'bufferQuotaInMB="0"' as well. I can't figure out what the problem is.
I used the following sources, among others (I could only post two links):
http://blogs.msdn.com/b/cloud_solution_architect/archive/2015/01/26/azure-diagnostics-for-azure-virtual-machines.aspx
https://msdn.microsoft.com/en-us/library/hh411551.aspx
I might have just overlooked something simple, but for now I'm stuck.

Related

Create Vm in WAP using PowerShell, error 400

I'm trying to create a VM using PowerShell in Windows Azure Pack.
I've downloaded the subscription, and Get-WAPackVM returns the VMs that were already created.
I've tried running these two scripts:
$OSDisk = Get-WAPackVMOSDisk -Name "W2012R2 Template_disk_1"
$SizeProfile = Get-WAPackVMSizeProfile -Name "Template"
New-WAPackVM -Name "ContosoV073" -OSDisk $OSDisk -VMSizeProfile $SizeProfile
and
$Credentials = Get-Credential
$Template = Get-WAPackVMTemplate -Name "Template 1"
New-WAPackVM -Name "VirShits7" -Template $Template -VMCredential $Credentials -Windows
Both return the same error:
New-WAPackVM : The remote server returned an error: (400) Bad Request.
All the Get cmdlets return values, and seem to be correct.
Anyone know how I get this to work?
You may reference this page:
https://msdn.microsoft.com/en-us/library/jj643289.aspx
The page says:
The key properties that you must set on the virtual machine object that is used with the Service Provider Foundation service are as follows: CloudId, StampId, VMTemplateId, Name.
You may need to assign CloudId and StampId.
I did it via the REST API and it works.

How do I reconfigure the Azure diagnostics extension when recreating an Azure VM

I need to make changes to an Azure Resource Manager virtual machine that are not allowed on an existing machine, such as changing the availability set. So I have to delete and recreate the machine, attaching the existing disks, network adapters, etc. to the new VM. I have a PowerShell script to do this, but I'm running into a problem with virtual machine extensions.
Here's my code:
$NewVMConfig = New-AzureRmVMConfig -VMName $VM.Name -VMSize $VM.HardwareProfile.VmSize
$NewVMConfig = Set-AzureRmVMOSDisk -VM $NewVMConfig -Name $VM.StorageProfile.OSDisk.Name -VhdUri $VM.StorageProfile.OSDisk.VHD.Uri -CreateOption attach -Windows
foreach ($disk in $vm.StorageProfile.DataDisks) {
$NewVMConfig = Add-AzureRmVMDataDisk -VM $NewVMConfig -Name $disk.Name -VhdUri $disk.Vhd.Uri -Caching $disk.Caching -DiskSizeInGB $disk.DiskSizeGB -CreateOption attach -Lun $disk.Lun
}
$NewVMConfig.AvailabilitySetReference = $VM.AvailabilitySetReference
$NewVMConfig.DiagnosticsProfile = $VM.DiagnosticsProfile
$NewVMConfig.Extensions = $VM.Extensions
$NewVMConfig.NetworkProfile = $VM.NetworkProfile
$location = $VM.Location
$resourceGroupName = $VM.ResourceGroupName
# Delete machine.
Remove-AzureRmVM -ResourceGroupName $VM.ResourceGroupName -Name $VM.Name
# Recreate machine
New-AzureRmVM -ResourceGroupName $resourceGroupName -Location $location -VM $NewVMConfig
Notice the line:
$NewVMConfig.Extensions = $VM.Extensions
The script runs without any error, but the new VM doesn't have the same extensions as the original. The diagnostics extension is gone and it now has the BGInfo extension which wasn't on the original machine.
I can use the Remove-AzureRmVMExtension command to remove the BGInfo extension, but I have been unsuccessful at recreating the diagnostics extensions. I've tried both Set-AzureRmVMExtension and Set-AzureRmVMDiagnosticsExtension to no avail.
Those VM extension commands do not support ARM yet, so I suggest you use an ARM template instead. There is a quick-start template specifically for a Windows VM with the diagnostics extension on GitHub. You can download it and modify it to meet your needs, such as specifying a VHD for your VM, and then use New-AzureRmResourceGroupDeployment to deploy the VM.
For your case, combining the above template with 201-specialized-vm-in-existing-vnet template would meet your needs.
Note: the 201-vm-diagnostics-extension-windows template deploys a Windows VM with the diagnostics extension, while the 201-specialized-vm-in-existing-vnet template deploys a VM from an existing VNet and VHD.
For more information about this, see Create a Windows Virtual machine with monitoring and diagnostics using Azure Resource Manager Template.
For more information about authoring ARM template, see Authoring Azure Resource Manager templates.
For more information about deploying ARM template, see Deploy a Resource Group with Azure Resource Manager template.
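As a rough sketch of the deployment step (the resource group name, template path, and parameter values below are placeholders, not taken from the question), deploying a downloaded quick-start template from PowerShell looks something like this:

```powershell
# Deploy a downloaded quick-start template with the AzureRM module.
# Resource group, paths, and parameter values are hypothetical placeholders.
New-AzureRmResourceGroupDeployment `
    -ResourceGroupName "MyResourceGroup" `
    -TemplateFile "C:\Templates\azuredeploy.json" `
    -TemplateParameterObject @{
        adminUsername      = "azureuser"
        adminPassword      = "<secure value>"
        dnsNameForPublicIP = "myvm-dns"
    }
```

You can also pass a parameters JSON file with -TemplateParameterFile instead of an inline hashtable.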
Jack Zeng's answer with the virtual machine template showed me what was missing in my attempts to reconfigure the Azure diagnostics extension.
The key is that when you get a VM and look at the Extensions property (or the ExtensionsText property), it doesn't include the protected settings of the extension. (That's one way in which they are protected.) Thus you don't have all the information you need to recreate the extension. You have to rebuild the protected settings, which vary from extension to extension, so you need to know what the specific extension requires. The virtual machine template to which Jack provided a link shows what information is needed for the protected settings of the Azure diagnostics extension, namely the storage account name, key, and endpoint.
Running the following code after recreating the virtual machine successfully reconfigured the diagnostics. In this code $VM is the original virtual machine object we got from calling Get-AzureRmVM before recreating the machine.
$diagnosticsExtension = $VM.Extensions | Where { $_.Name -eq 'Microsoft.Insights.VMDiagnosticsSettings' }
# The $VM.Extensions.Settings property does not correctly return the values of the different settings.
# Instead, use the $VM.ExtensionsText property to get the old settings.
$oldSettings = $VM.ExtensionsText | ConvertFrom-Json | Where { $_.Name -eq 'Microsoft.Insights.VMDiagnosticsSettings' } | foreach {$_.'properties.settings'}
# Need settings in a hash table.
$settings = @{
xmlCfg = $oldSettings.xmlCfg;
StorageAccount = $oldSettings.StorageAccount
}
$storageAccounts = Get-AzureRmStorageAccount
$storageAccount = $storageAccounts | Where { $_.StorageAccountName -eq $settings.StorageAccount }
$storageAccountKeys = $storageAccount | Get-AzureRmStorageAccountKey
$protectedSettings = @{
storageAccountName = $settings.StorageAccount;
storageAccountKey = $storageAccountKeys.Key1;
storageAccountEndPoint = "https://core.windows.net/"
}
Write-Host "Reconfiguring Azure diagnostics extension on $Name..."
$result = Set-AzureRmVMExtension -ResourceGroupName $newVM.ResourceGroupName -VMName $newVM.Name -Name $diagnosticsExtension.name -Publisher $diagnosticsExtension.Publisher -ExtensionType $diagnosticsExtension.VirtualMachineExtensionType -TypeHandlerVersion $diagnosticsExtension.TypeHandlerVersion -Settings $settings -ProtectedSettings $protectedSettings -Location $diagnosticsExtension.Location
Note that I am running version 1.2.1 of the Azure PowerShell extensions. In this release, Set-AzureRmVMDiagnosticsExtension appears to be broken, so I did not use it.

Azure Powershell script fails when run through task scheduler

I have a PowerShell script that I wrote to back up a local SQL Server to an Azure blob. It's based on one I took from MSDN, but I added an extra feature to delete any backups that are over 30 days old. When I run it as a user, it works fine. When I added it to Task Scheduler, set to run as me, and manually asked for it to run, it also worked fine. (All output is captured in a log file, so I can see that it's all working.) But when it runs from Task Scheduler at night while I'm not logged in (the task is still set to run the script as me), it fails. Specifically, it claims my Azure subscription name is not known when I call Set-AzureSubscription. Then it fails when trying to delete the blob with:
Get-AzureStorageBlob : Can not find your azure storage credential. Please set current storage account using "Set-AzureSubscription" or set the "AZURE_STORAGE_CONNECTION_STRING" environment variable.
The script in question:
import-module sqlps
import-module azure
$storageAccount = "storageaccount"
$subscriptionName = "SubName"
$blobContainer = "backup"
$backupUrlContainer = "https://$storageAccount.blob.core.windows.net/$blobContainer/"
$credentialName = "creds"
Set-AzureSubscription -CurrentStorageAccountName $storageAccount -SubscriptionName $subscriptionName
$path = "sqlserver:\sql\servername\SQLEXPRESS\databases"
$alldatabases = get-childitem -Force -path $path | Where-object {$_.name -eq "DB0" -or $_.name -eq "DB1"}
foreach ($db in $alldatabases)
{
Backup-SqlDatabase -BackupContainer $backupUrlContainer -SqlCredential $credentialName $db
}
$oldblobs = Get-AzureStorageBlob -container backup | Where-object { $_.name.Contains("DB") -and (-((($_.LastModified) - $([DateTime]::Now)).TotalDays)) -gt $(New-TimeSpan -Days 30).TotalDays }
foreach($blob in $oldblobs)
{
Write-Output $blob.Name
Remove-AzureStorageBlob -Container "backup" -Blob $blob.Name
}
The backup part of the script works, just not the blob deletion. It would appear that something in my login environment allows the Azure PowerShell cmdlets to work, and that something isn't present when the task runs at night while I'm not logged in.
Anyone have any idea what that might be?
Task Scheduler is set to run the command as:
Powershell -Command "C:\Scripts\BackupDatabases.ps1" 2>&1 >> "C:\Logs\backup.log"
The Azure PowerShell environment just needs to understand what Azure subscription to work with by default. You probably did this for your own environment, but the task scheduler is running in a different environment.
You just need to add an additional command to the beginning of your script to set the Azure subscription. Something like this:
Set-AzureSubscription -SubscriptionName
The documentation for this command is here. You can also set by SubscriptionID etc. instead of SubscriptionName.
In addition, this article walks through how to connect your Azure subscription to the PowerShell environment.
UPDATE: I messed around and got it working. Try adding a "Select-AzureSubscription" before your Set-AzureSubscription command.
Select-AzureSubscription $subscriptionName
Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccountName $storageAccount
The documentation for Select-AzureSubscription is here. If you aren't relying on that storage account being set, you may be able to remove the Set-AzureSubscription command.
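One way to make the subscription available in a non-interactive session (a sketch, assuming the classic Azure module and a .publishsettings file you've previously downloaded with Get-AzurePublishSettingsFile; the file path is a placeholder) is to import the publish settings at the top of the script, so the scheduled task doesn't depend on state from your interactive logins:

```powershell
Import-Module Azure

# Import subscription credentials saved earlier via Get-AzurePublishSettingsFile.
# The path is a placeholder.
Import-AzurePublishSettingsFile "C:\Scripts\MySubscription.publishsettings"

Select-AzureSubscription "SubName"
Set-AzureSubscription -SubscriptionName "SubName" -CurrentStorageAccountName "storageaccount"
```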
I was never able to make the PowerShell script work. I assume I could have made it work by setting the credentials in the environment variable, as the error said, but I instead wrote a little program to do the work for me.
Visit https://github.com/sillyotter/BackupDBToAzure if you need a tool to back up things to Azure blobs and delete old leftover backups.
Thanks for the help!

SharePoint search service setup with desired state configuration error

I'm configuring a SharePoint 2013 deployment using Desired State Configuration (DSC). I have configured several services to be provisioned using DSC, but I am having trouble getting search config to work. The following command fails in the context of DSC, but works fine when running with the exact same parameters in a normal PowerShell window:
function Set-TargetResource
{
...
$searchApp = New-SPEnterpriseSearchServiceApplication -Name $searchAppName `
-DatabaseServer $dbServer `
-DatabaseName $searchDB `
-ApplicationPool $pool `
-AdminApplicationPool $adminPool `
-Partitioned:([bool]::Parse($partitioned))
If (!$?) {
Throw " - An error occurred creating the $searchAppName application."
}
...
Other SharePoint cmdlets are working fine from within DSC. I know DSC runs in the context of "NT AUTHORITY\SYSTEM" - is this causing the problem for some SharePoint PowerShell cmdlets? - if so, how could search configuration still be achieved within the context of DSC?
You need to use a different set of credentials that has SP farm access. See the xWindowsProcess DSC resource for an example; it is available as part of the DSC Resource Kit.
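As a rough sketch of that approach (the module name is real, but the node name, script path, and configuration name here are hypothetical), a DSC resource can run under the farm account instead of NT AUTHORITY\SYSTEM by passing it a credential:

```powershell
Configuration ProvisionSearch
{
    param
    (
        # Farm account with SP farm access; supplied at compile time.
        [Parameter(Mandatory)]
        [PSCredential]$FarmCredential
    )

    Import-DscResource -ModuleName xPSDesiredStateConfiguration

    Node "SPServer01"
    {
        # Run the search provisioning script as the farm account
        # rather than as NT AUTHORITY\SYSTEM.
        xWindowsProcess SearchServiceApp
        {
            Path       = "powershell.exe"
            Arguments  = "-File C:\Scripts\New-SearchServiceApp.ps1"
            Credential = $FarmCredential
        }
    }
}
```

Remember that passing credentials into DSC requires either a certificate to encrypt them in the MOF or explicitly allowing plain-text credentials in the configuration data.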

Do I need to check InstanceStatus before I start or stop an Azure VM?

I'm automating an Azure virtual machine using PowerShell, just starting and stopping one machine on a schedule. I've done this before, but I came across this code snippet which has an extra step, and I want to make sure I'm not missing something important:
# Shutdown VM(s)
$vmList = ('VM1', 'VM2', 'VM3')
$svcName = 'servicename'
For ( $vmCount = 0; $vmCount -lt $vmList.Count; $vmCount++) {
$vm = Get-AzureVM `
-ServiceName $svcName `
-Name $vmList[$vmCount]
if ( $vm.InstanceStatus -eq 'ReadyRole' ) {
Stop-AzureVM `
-ServiceName $vm.ServiceName `
-Name $vm.Name `
-Force
}
}
So I would have just called Stop-AzureVM... What does the check to InstanceStatus do? Does it, say, prevent the VM from shutting down if it's in the middle of installing updates? I'm thinking no, and that this is a check that's more important for other commands. But now I want to know.
I tried searching around and found it used in several unrelated code samples, but I've been unable to find an explanation.
ReadyRole is the steady state for an Azure VM. It essentially means it's not starting, stopping, provisioning, transitioning, stopped etc.
I think the line $vm.InstanceStatus -eq 'ReadyRole' is just a basic check on the machine's status. If you try to shut down, or run any other command against, the VM while it is busy doing something, your command will fail with an error anyway.
I just ran a test trying to stop a VM after I'd started it from the web management console and this is what I received:
stop-azurevm : ConflictError: Windows Azure is currently performing an operation with x-ms-requestid
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx on this deployment that requires exclusive access.
At line:1 char:1
In this case it's because the status was most likely Starting.
However, once the VM is stopped, issuing another stop command (whilst daft) works without any apparent problem.
PS > get-azurevm
ServiceName Name Status
----------- ---- ------
vm cloudservice StoppedDeallocated
PS > stop-azurevm -servicename cloudservice -name vm
OperationDescription OperationId OperationStatus
-------------------- ----------- ---------------
Stop-AzureVM xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Succeeded
So, in conclusion, I'd say it's a tidy bit of scripting diligence to avoid pointless / impossible operations during the script execution.