Azure PowerShell script fails when run through Task Scheduler - powershell

I have a PowerShell script that I wrote to back up a local SQL Server database to an Azure blob. It's based on one I took from MSDN, but I added an extra feature to delete any backups that are over 30 days old. When I run it as a user, it works fine. When I add it to Task Scheduler, set to run as me, and manually trigger it, it also works fine. (All output is captured in a log file, so I can see that it's all working.) But when it runs from Task Scheduler at night while I'm not logged in (the task is still set to run as me), it fails. Specifically, it claims my Azure subscription name is not known when I call Set-AzureSubscription, and then it fails when trying to delete old blobs with:
Get-AzureStorageBlob : Can not find your azure storage credential. Please set current storage account using "Set-AzureSubscription" or set the "AZURE_STORAGE_CONNECTION_STRING" environment variable.
The script in question:
Import-Module sqlps
Import-Module azure
$storageAccount = "storageaccount"
$subscriptionName = "SubName"
$blobContainer = "backup"
$backupUrlContainer = "https://$storageAccount.blob.core.windows.net/$blobContainer/"
$credentialName = "creds"
Set-AzureSubscription -CurrentStorageAccountName $storageAccount -SubscriptionName $subscriptionName
# Back up the selected databases to the blob container.
$path = "sqlserver:\sql\servername\SQLEXPRESS\databases"
$alldatabases = Get-ChildItem -Force -Path $path | Where-Object { $_.Name -eq "DB0" -or $_.Name -eq "DB1" }
foreach ($db in $alldatabases)
{
    Backup-SqlDatabase -BackupContainer $backupUrlContainer -SqlCredential $credentialName $db
}
# Delete backup blobs whose names contain "DB" and that are older than 30 days.
$oldblobs = Get-AzureStorageBlob -Container $blobContainer |
    Where-Object { $_.Name.Contains("DB") -and $_.LastModified -lt [DateTime]::Now.AddDays(-30) }
foreach ($blob in $oldblobs)
{
    Write-Output $blob.Name
    Remove-AzureStorageBlob -Container $blobContainer -Blob $blob.Name
}
The backup part of the script works; it's just the blob-deletion part that doesn't. It would appear that something is done to the environment when I log in that allows the Azure PowerShell cmdlets to work, but that isn't done when the script runs at night while I'm not logged in.
Anyone have any idea what that might be?
Task Scheduler is set to run the script with:
Powershell -Command "C:\Scripts\BackupDatabases.ps1" 2>&1 >> "C:\Logs\backup.log"

The Azure PowerShell environment just needs to know which Azure subscription to work with by default. You probably did this for your own environment, but Task Scheduler runs in a different environment.
You just need to add an additional command to the beginning of your script to set the Azure subscription. Something like this:
Set-AzureSubscription -SubscriptionName $subscriptionName
The documentation for this command is here. You can also set it by SubscriptionId etc. instead of SubscriptionName.
In addition, this article walks through how to connect your Azure subscription to the PowerShell environment.
UPDATE: I messed around and got it working. Try adding a Select-AzureSubscription call before your Set-AzureSubscription command:
Select-AzureSubscription $subscriptionName
Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccountName $storageAccount
The documentation for Select-AzureSubscription is here. If you aren't relying on that storage account being set, you may be able to remove the Set-AzureSubscription command.

I was never able to make the PowerShell script work. I assume I could have made it work if I had set the credentials in the environment variable, as the error message said, but I instead wrote a little program to do the work for me.
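For what it's worth, based on that error message, the environment-variable route would presumably have looked something like this (an untested sketch; the account name and key are placeholders):
# Untested sketch: give the storage cmdlets credentials directly via the
# environment variable named in the error, so no cached profile is needed.
$env:AZURE_STORAGE_CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=storageaccount;AccountKey=<account-key>"
Get-AzureStorageBlob -Container "backup"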
Visit https://github.com/sillyotter/BackupDBToAzure if you need a tool to backup things to azure blobs and delete old leftover backups.
Thanks for the help!

Related

Best way to authenticate an Azure Automation PowerShell script

I'm trying to implement a fairly simple PowerShell query, hosted in Azure Automation, to manage External Identities.
I've set up a system-assigned managed identity and have successfully connected using Connect-AzAccount -Identity.
But when I run it, it says: You must call the Connect-AzureAD cmdlet before calling any other cmdlets.
The next cmdlet is Get-AzureADPolicy, which I think is what triggered the above message.
Following this blog, I tried this:
$AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext -ErrorAction Stop
Connect-AzureAD -TenantId $AzureContext.Tenant.TenantId -AccountId $AzureContext.Account.Id
and I get this: Unable to find an entry point named 'GetPerAdapterInfo' in DLL 'iphlpapi.dll'
I'm not at all sure now what to do; any help appreciated.
PS: I'm aware there are quite a few related questions, but I have not been able to find an answer to this particular query ...
I was having the same issue and I resolved it by using the commands below. I have added comments to explain what each statement is for.
# Ensures you do not inherit an AzContext in your runbook. Out-Null is used to disable any output from this Cmdlet.
Disable-AzContextAutosave -Scope Process | Out-Null
# Connect to Azure with system-assigned managed identity.
$AzureContext = (Connect-AzAccount -Identity).context
# set and store context. Out-Null is used to disable any output from this Cmdlet.
Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext | Out-Null
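With the context set and stored, the Az cmdlets later in the runbook run against that subscription; for example (a minimal sketch, where Get-AzVM stands in for whatever your runbook actually does):
# Sketch: list the VMs visible to the managed identity, using the stored context.
Get-AzVM -DefaultProfile $AzureContext | Select-Object Name, ResourceGroupName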
With help from Microsoft support, I can now clarify the issue. The core point is that it is not possible to authenticate to Azure AD (with Connect-AzureAD) using a managed identity; a Run As account must be used, at least currently.
Further, for our use case, the Run As account had to have the "Global Admin" role; "Owner" was not sufficient.
It is of course possible to use a managed identity for managing other Azure resources (using Connect-AzAccount).
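For anyone needing it, the Run As pattern for Connect-AzureAD looks roughly like this (a sketch; it assumes the Automation account still has the default "AzureRunAsConnection" and the AzureAD module imported):
# Sketch: authenticate Connect-AzureAD with the Run As account's
# service principal certificate.
$runAsConnection = Get-AutomationConnection -Name "AzureRunAsConnection"
Connect-AzureAD -TenantId $runAsConnection.TenantId `
                -ApplicationId $runAsConnection.ApplicationId `
                -CertificateThumbprint $runAsConnection.CertificateThumbprint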

Set-AzDataFactoryV2Trigger fails in Azure PowerShell task in release pipeline but works fine in PowerShell on frontend machine

I want to create all the triggers in ADF after the release pipeline has run successfully. This is because there is a hard limit of 256 parameters for an ARM template.
The idea is that we delete all the triggers in DEV, TEST, QA, and PROD. Our published artifact contains all the trigger JSON files, from which we can re-create the triggers. The release pipeline runs a PowerShell script that creates the triggers using Set-AzDataFactoryV2Trigger.
I am able to run the below script correctly on my frontend machine:
$AllTriggers = Get-ChildItem -Path .
Write-Host $AllTriggers
$AllTriggers | ForEach-Object {
    Set-AzDataFactoryV2Trigger -ResourceGroupName "<MyResourceGroupName>" -DataFactoryName "<MyTargetDataFactoryName>" -Name "$_" -DefinitionFile ".\$_.json"
}
In the Azure PowerShell task, the first line has to be changed a little to read all the JSONs from the published artifact:
$AllTriggers = Get-ChildItem -Name -Path "./_TriggerCreations/drop/" -Include *.json
I receive an error when trying to run this script via the Azure PowerShell task in the release pipeline; the error text itself is gibberish. (In the screenshot, the yellow blurred line is the name of the trigger.)
Stuck on this for some time now. Any help would be highly appreciated.
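For clarity, the shape I'm aiming for is something like this (a sketch; the resource group and factory names are placeholders, and -Force just suppresses the confirmation prompt):
# Sketch: derive the trigger name from each JSON file's base name so that
# -Name and -DefinitionFile cannot drift apart.
$AllTriggers = Get-ChildItem -Path "./_TriggerCreations/drop/" -Filter *.json
foreach ($trigger in $AllTriggers) {
    Set-AzDataFactoryV2Trigger -ResourceGroupName "<MyResourceGroupName>" `
                               -DataFactoryName "<MyTargetDataFactoryName>" `
                               -Name $trigger.BaseName `
                               -DefinitionFile $trigger.FullName `
                               -Force
}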
Regards,
Sree

Still requiring Login-AzureRmAccount even after importing PublishSettings in Azure

I am attempting to log in to an Azure account from a PowerShell script by means of a publish settings file; however, I am finding that it still requires me to log in using Login-AzureRmAccount, regardless of having those credentials.
My step-by step looks something like this:
Clear out all accounts that may be available:
Get-AzureAccount | ForEach-Object { Remove-AzureAccount $_.ID -Force }
Import the PublishSettings file: Import-AzurePublishSettingsFile -PublishSettingsFile $PublishSettingsFileNameWithPath
Select the Azure subscription using the subscription ID:
Select-AzureRmSubscription -SubscriptionId $SubscriptionId
And finally, create a new resource group in the subscription before deploying it: New-AzureRmResourceGroup -Name $ResourceGroupName -Location $ResourceGroupLocation -Verbose -Force 2>> .\errorCIMS_RG.txt | Out-File .\rgDetailsCIMS_RG.txt
However, this is when an error is thrown: Run Login-AzureRmAccount to login.
Assuming I have the PublishSettings file, and it hasn't expired, why would this be giving back an error?
As Mihail said, we should check the Azure PowerShell version first, and install the latest version.
We can run this command to list the Azure PowerShell version:
Get-Module -ListAvailable -Name Azure -Refresh
By the way, Import-AzurePublishSettingsFile works for ASM (classic) cmdlets, while New-AzureRmResourceGroup is an ARM command; so if you want to create a resource group, you should run Login-AzureRmAccount first.
Note: The AzureResourceManager module does not support publish settings files.
For more information about Import-AzurePublishSettingsFile, please refer to this link.
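In other words, for the ARM cmdlets the flow is roughly this (a sketch reusing the variables from the question):
# Sketch: ARM cmdlets need an actual login; a publish settings file is not enough.
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionId $SubscriptionId
New-AzureRmResourceGroup -Name $ResourceGroupName -Location $ResourceGroupLocation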
I solved this problem by updating to the latest version of the Azure PowerShell cmdlets.
You can find the latest release here:
https://github.com/Azure/azure-powershell/releases

Download blob from Azure storage with PowerShell -> LoaderException

I use PowerShell to download a blob from blob storage in an Azure startup task. Today I updated the Microsoft.WindowsAzure.Storage library from 3.0.3.0 to 4.0.1.0 via NuGet.
After the library update, files are still downloaded correctly, but I get this sort of warning in the command window:
'Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.'
function download_from_storage ($container, $blob, $connection, $destination) {
    Add-Type -Path ((Get-Location).Path + '\Microsoft.WindowsAzure.Storage.dll')
    $storageAccount = [Microsoft.WindowsAzure.Storage.CloudStorageAccount]::Parse($connection)
    $blobClient = New-Object Microsoft.WindowsAzure.Storage.Blob.CloudBlobClient($storageAccount.BlobEndpoint, $storageAccount.Credentials)
    $container = $blobClient.GetContainerReference($container)
    $remoteBlob = $container.GetBlockBlobReference($blob)
    $remoteBlob.DownloadToFile($destination + "\" + $blob, [System.IO.FileMode]::OpenOrCreate)
}
$connection_string = 'DefaultEndpointsProtocol=https;AccountName=<AcountName>;AccountKey=<Accountkey>'
# JRE
$jre = 'jre-7u60-windows-x64.exe'
$node = 'node-v0.10.29-x64.msi'
download_from_storage 'java-runtime' $jre $connection_string (Get-Location).Path
download_from_storage 'nodejs' $node $connection_string (Get-Location).Path
Since it is still working, I am just clueless as to why the message occurs in the first place.
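(For diagnosis, one way to see what is actually failing to load is to wrap the Add-Type call and dump the LoaderExceptions the warning refers to; an untested sketch:)
# Untested sketch: surface the individual loader errors behind the warning.
try {
    Add-Type -Path ((Get-Location).Path + '\Microsoft.WindowsAzure.Storage.dll')
}
catch [System.Reflection.ReflectionTypeLoadException] {
    $_.Exception.LoaderExceptions | ForEach-Object { Write-Warning $_.Message }
}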
This is not exactly an answer to your question, but here is a much simpler way of downloading files from blob storage:
$dlPath = "C:\temp\"
$container = "BlobContainer"
Set-AzureSubscription "NameOfYourSubscription" -CurrentStorageAccount "storageAccountName"
Get-AzureStorageContainer $container | Get-AzureStorageBlob |
Get-AzureStorageBlobContent -Destination $container
You can do this by installing Azure PowerShell itself in the startup task and then executing the download cmdlet. Here are, roughly, the steps:
Installing Azure PowerShell automatically:
Create a new service project (New-AzureServiceProject)
Execute Add-AzureWebRole
Change the cscfg to use osFamily=3 (to get a newer PowerShell version that is compatible with Azure PowerShell)
Copy the Azure PowerShell MSI under the WebRole1\bin directory
Edit WebRole1\startup.cmd to include this line: msiexec /i AzurePowerShell.msi /quiet
Authenticating Azure PowerShell so it can execute cmdlets (if you only want to use the storage cmdlets, you can skip this step and pass your storage account name/key when executing the Get-AzureStorageBlobContent cmdlet):
Copy a current publish settings file (myPublishSettings.publishsettings) into the WebRole1\bin folder
Edit WebRole1\startup.cmd to include this line after the one added before: PowerShell.exe -Command "Import-AzurePublishSettingsFile .\myPublishSettings.publishsettings"
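(For the storage-only route mentioned in that step, passing the account name/key directly would look something like this sketch; the account name, key, and container/blob names are placeholders based on the question:)
# Sketch: build a storage context from the account name/key, then download.
$ctx = New-AzureStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<account-key>"
Get-AzureStorageBlobContent -Container "java-runtime" -Blob "jre-7u60-windows-x64.exe" `
    -Destination (Get-Location).Path -Context $ctx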

Remove-AzureDisk throws error, not sure why

I have an Azure VM and I'm trying to delete it using PowerShell. I also want to remove the disk that the VM's OS was on (there are no data disks).
I assume I'm going to need the following two cmdlets:
Remove-AzureVM
Remove-AzureDisk
Here's my code:
$VMs = Get-AzureVM $svcName
foreach ($VM in $VMs)
{
    $OSDisk = ($VM | Get-AzureOSDisk)
    if ($VM.InstanceStatus -eq "ReadyRole") {
        Stop-AzureVM -Name $VM.Name -ServiceName $svcName
    }
    Remove-AzureVM -ServiceName $svcName -Name $VM.Name
    Remove-AzureDisk -DiskName $OSDisk.DiskName
}
When I execute this, the call to Remove-AzureVM returns successfully but the call to Remove-AzureDisk returns an error:
Remove-AzureDisk : BadRequest: A disk with name XXX is currently in use by virtual machine YYY running within hosted service ZZZ, deployment XYZ.
The strange thing is, I can issue the same call to Remove-AzureDisk just a few moments later and it returns successfully.
It's as if the call to Remove-AzureVM is returning too quickly, i.e. it's reporting success before the VM has been fully removed, or at any rate before the link to the disk has been removed.
Can anyone explain why this might be and also how I might work around this problem?
thanks in advance
Jamie
What's happening here is that the disk, stored as a blob in BLOB storage, is locked by a lease while in use by a VM. You are removing the VM, but it takes a few moments for the lease on the blob to be released. That's why you can remove it a few moments later.
A few folks have written PowerShell to break the lease, or you could use PowerShell with the SDK (or make direct REST API calls) to check the lease status; alternatively, just retry the delete until the lease is released, as in the sketch below.
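An untested sketch of that retry approach, using $OSDisk from the question's loop:
# Untested sketch: keep retrying the disk delete until the VHD's lease is released.
$maxAttempts = 20
for ($i = 0; $i -lt $maxAttempts; $i++) {
    try {
        Remove-AzureDisk -DiskName $OSDisk.DiskName -ErrorAction Stop
        break   # success: the lease was gone
    }
    catch {
        Start-Sleep -Seconds 15   # lease still held; wait and try again
    }
}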
I ended up writing a script that creates a VM, clones it, then deletes the clones. As part of that I needed to wait until the lease was released, so if you're experiencing this same problem you might want to check my blog post at http://sqlblog.com/blogs/jamie_thomson/archive/2013/11/04/clone-an-azure-vm-using-powershell.aspx as it has some code that might help you.
Regards
Jamie
I puzzled over this for quite a while. Ultimately, I found a different command to do what I thought I was doing with this one. I would recommend the Remove-AzureDataDisk cmdlet to delete a disk, as it automatically breaks the lease.
Get-AzureVM -ServiceName <servicename> -name <vmname> |Remove-AzureDataDisk -lun <lun#> -deletevhd | Update-AzureVM
It will spin for a couple of minutes, but it will give you a success/failure output at the end.
This command just does it and doesn't give you any feedback about which drive was removed. I would recommend tossing in a Get-AzureDataDisk first just to be sure of what you're deleting.
Get-AzureVM -ServiceName <servicename> -name <vmname> | Get-AzureDataDisk
This is related to Windows Azure: Delete disk attached to non-existent VM. Cross-posting my answer here:
I was unable to use the (2016) web portal to delete orphaned disks in my (classic) storage account. Here is a detailed walk-through for deleting these orphaned disks with PowerShell.
PowerShell
Download and install PowerShell if you haven't already. (Install and configure Azure PowerShell.) Initial steps from this doc link:
Check that the Azure PowerShell module is available after installing:
Get-Module -ListAvailable
If the Azure PowerShell module is not listed, you may need to import it:
Import-Module Azure
Login to Azure Resource Manager:
Login-AzureRmAccount
AzurePublishSettingsFile
Retrieve your PublishSettingsFile.
Get-AzurePublishSettingsFile
Get-AzurePublishSettingsFile launches manage.windowsazure.com and prompts you to download an XML file that can be saved anywhere.
Reference: Get-AzurePublishSettingsFile Documentation
Run Import-AzurePublishSettingsFile and specify the path to the file just saved:
Import-AzurePublishSettingsFile -PublishSettingsFile '<your file path>'
Show and Remove Disks
Show current disks. (Reference: Azure Storage Cmdlets)
Get-AzureDisk
Quickly remove all disks. (Credit to Mike's answer)
Get-AzureDisk | Remove-AzureDisk
Or remove disks by name. (Credit to Remove-AzureDisk Documentation)
Remove-AzureDisk -DiskName disk-name-000000000000000000 -DeleteVHD