Azure Blob Copy stuck in Pending State - PowerShell

I'm attempting to copy a VM from one subscription to another. I've done this for 5 other VMs without issue. All of a sudden, with THIS VM, I'm having issues. The process continually gets hung when the OS disk is being copied.
The script I'm using is below:
foreach ($disk in $allDisks)
{
    $blobName = $disk.MediaLink.Segments[2]
    $blobStorageAccount = $disk.MediaLink.Segments[1].Replace('/', '')
    $targetBlob = Start-AzureStorageBlobCopy -SrcContainer $blobStorageAccount -SrcBlob $blobName `
        -DestContainer vhds -DestBlob $blobName `
        -Context $sourceContext -DestContext $destContext -Force
    Write-Host "Copying blob $blobName"
    $copyState = $targetBlob | Get-AzureStorageBlobCopyState
    while ($copyState.Status -ne "Success")
    {
        $percent = ($copyState.BytesCopied / $copyState.TotalBytes) * 100
        Write-Host "Completed $('{0:N2}' -f $percent)%"
        Start-Sleep -Seconds 20
        $copyState = $targetBlob | Get-AzureStorageBlobCopyState
    }
}
When I check the status of $copyState, it's stuck at Pending. I've gone and used Stop-AzureStorageBlobCopy, deleted the destination blob, and started over, but no matter what it's always just stuck in the Pending state, with BytesCopied at 0.
The source VM has been stopped (deallocated) prior to copy. There are no other pending copy operations that I can see, and I've checked every blob in the destination subscription manually.
I even tried a rename operation on the source blob in Azure Storage Explorer, which ended up creating a copy. That copy completed without issue. I tried copying the COPY of the original file over to the other subscription, and it also got stuck on "Pending".
Any ideas why I can't copy between the subscriptions?
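For reference, a minimal way to dump the full copy state on the destination side is below (a sketch reusing $targetBlob from the script above); the StatusDescription field sometimes carries more detail than the bare Pending status:
# Diagnostic sketch; assumes $targetBlob from the copy script above.
$copyState = $targetBlob | Get-AzureStorageBlobCopyState

# The returned CopyState exposes more than Status alone.
Write-Host ("Status:            " + $copyState.Status)
Write-Host ("StatusDescription: " + $copyState.StatusDescription)
Write-Host ("BytesCopied:       " + $copyState.BytesCopied)
Write-Host ("TotalBytes:        " + $copyState.TotalBytes)
Write-Host ("CopyId:            " + $copyState.CopyId)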

Related

How to stop a Storage Event Trigger of Azure Data Factory using Powershell when there is a Delete lock on the Resource group?

I want to stop a Storage Event Trigger that is on my Data Factory before I make modifications to the factory using ARM deployment/Azure DevOps. There is a Delete lock on my resource group which is causing the below error when I try to stop the trigger using PowerShell (Stop-AzDataFactoryV2Trigger):
Error Code: BadRequest
Error Message: The scope '/subscriptions/XXX/resourceGroups/XXX/providers/Microsoft.Storage/storageAccounts/XXX/providers/Microsoft.EventGrid/eventSubscriptions/XXX'
cannot perform delete operation because following scope(s) are locked: '/subscriptions/XXX/resourceGroups/XXX'. Please remove the lock and try again.
Is there any way to do my ADF deployments without having to remove this Delete lock?
After a bit of research and digging around, I found out that the direct answer to this question is that it's not possible to start/stop a Storage Event Trigger on a Data Factory when there is a Delete lock on the entire Resource Group. This is because whenever a Storage Event Trigger is started or stopped, an Event Subscription (which is a resource) is created and deleted in the Resource Group, and with a Delete lock in place this deletion cannot happen.
However, there are a few workarounds to address this requirement:
- Have a Delete lock at the resource level rather than at the Resource Group level.
- Move the Data Factory and the Storage Account to a different Resource Group which doesn't have a Delete lock.
- Delete the "Delete lock" before the ADF deployment and recreate it after the deployment (a sketch follows this list). For this, the Service Principal used to do the deployments needs permission to update/delete locks.
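A minimal sketch of that last workaround, assuming an Az PowerShell session with rights to manage locks on the Resource Group (the group and lock names below are placeholders):
# Placeholder names; substitute your own Resource Group and lock.
$rgName   = "my-rg"
$lockName = "rg-delete-lock"

# Drop the Delete lock before the ADF deployment...
Remove-AzResourceLock -LockName $lockName -ResourceGroupName $rgName -Force

# ...run the ARM/ADF deployment here...

# ...and recreate the same lock afterwards.
New-AzResourceLock -LockName $lockName -LockLevel CanNotDelete `
    -ResourceGroupName $rgName -Force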
If anyone has a direct solution to this problem, I'm happy to accept that as the answer. Thanks.
The following sample script can be used to stop triggers before deployment.
if ($predeployment -eq $true) {
    # Stop all triggers
    Write-Host "Stopping deployed triggers`n"
    $triggersToStop | ForEach-Object {
        if ($_.TriggerType -eq "BlobEventsTrigger" -or $_.TriggerType -eq "CustomEventsTrigger") {
            Write-Host "Unsubscribing" $_.Name "from events"
            $status = Remove-AzDataFactoryV2TriggerSubscription -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name
            while ($status.Status -ne "Disabled") {
                Start-Sleep -s 15
                $status = Get-AzDataFactoryV2TriggerSubscriptionStatus -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name
            }
        }
        Write-Host "Stopping trigger" $_.Name
        Stop-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name -Force
    }
}
For more information, see the pre- and post-deployment script in the official documentation.
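For completeness, the post-deployment half follows the same pattern in reverse. This is a sketch reconstructed from that same official sample; it assumes a $triggersToStart list gathered the same way as $triggersToStop:
if ($predeployment -eq $false) {
    # Re-enable triggers after deployment
    Write-Host "Starting deployed triggers`n"
    $triggersToStart | ForEach-Object {
        if ($_.TriggerType -eq "BlobEventsTrigger" -or $_.TriggerType -eq "CustomEventsTrigger") {
            Write-Host "Subscribing" $_.Name "to events"
            $status = Add-AzDataFactoryV2TriggerSubscription -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name
            while ($status.Status -ne "Enabled") {
                Start-Sleep -s 15
                $status = Get-AzDataFactoryV2TriggerSubscriptionStatus -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name
            }
        }
        Write-Host "Starting trigger" $_.Name
        Start-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name -Force
    }
}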

Azcopy upload then delete local

I'm working on a PowerShell script to upload files to an Azure blob container from a local directory and then, if the upload succeeds, remove the local files. The blob container is just a staging location and the files are moved out of it afterwards, so I cannot use sync. I think the only way to do this might be to check the status of the copy from the log? I thought the loop below might do what I need, but it just starts uploading all the files again when it cycles back through and never continues on to remove the source. Any help would be greatly appreciated.
while ($true) {
    .\azcopy copy "$Upload\*.iso" (SAS URL) --log-level ERROR
    Start-Sleep -Seconds 60 -Verbose
}
Daniel's link got me going in the right direction.
.\azcopy copy "$Upload\*.iso" "SAS" --log-level ERROR
if ($LASTEXITCODE -eq 0) {
    Write-Host "Transfer Successful"
    Write-Host "Remove files after Azure copy is done"
    rm "Source" -Verbose
    Stop-Transcript
}
else {
    Write-Host "Transfer Failed"
    Stop-Transcript
}
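If you still want the continuous loop from the question, a rough sketch along the same lines (reusing the placeholder $Upload path and SAS URL) is to check $LASTEXITCODE on each pass and only remove the files that were present when the upload started:
# Sketch only: $Upload and the SAS URL are placeholders from the question above.
while ($true) {
    $files = Get-ChildItem -Path $Upload -Filter *.iso
    if ($files) {
        .\azcopy copy "$Upload\*.iso" "<SAS URL>" --log-level ERROR
        if ($LASTEXITCODE -eq 0) {
            Write-Host "Transfer successful, removing local copies"
            $files | Remove-Item -Verbose
        }
        else {
            Write-Host "Transfer failed, keeping local copies"
        }
    }
    Start-Sleep -Seconds 60
}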

Progress bar for `New-MailboxExportRequest`

I'm trying to create a script that can export a user's mailbox to a PST, remotely (Exchange Server 2010 console is installed on the server we're running this from, and the module is loaded correctly). It's being done using a script so our L2 admins do not have to manually perform the task. Here's the MWE.
$UserID = Read-Host "Enter username"
$PstDestination = "\\ExServer\Share\$UserID.pst"
$Date = Get-Date -Format "yyyyMMddhhmmss"
$ExportName = "$UserID" + "$Date"

try {
    New-MailboxExportRequest -Mailbox $UserID -FilePath $PstDestination -Name $ExportName -ErrorAction Stop -WarningAction SilentlyContinue | Out-Null
    # Loop through the process to track its status and write progress
    do {
        $Percentage = (Get-MailboxExportRequest -Name $ExportName | Get-MailboxExportRequestStatistics).PercentComplete
        Write-Progress "Mailbox export is in progress." -Status "Export $Percentage% complete" -PercentComplete "$Percentage"
    }
    while ($Percentage -ne 100)
    Write-Output "$UserID`'s mailbox has been successfully exported. The archive can be found at $PstDestination."
}
catch {
    Write-Output "There was an error exporting the mailbox. The process was aborted."
}
The problem is, as soon as we initiate the export, the task gets queued. Sometimes the export remains queued for a very long time, and the script is currently unable to figure out when the task actually begins; when it does, the script cannot display the progress correctly. The export happens in the background, but the script remains stuck there, so anything after the export does not get executed and the whole thing then has to be done manually.
Can anyone suggest a way to handle this?
I tried adding a wait timer and then a check to see if the export had begun, but it didn't quite work as expected.
Two things. The first is more about performance and not hammering Exchange with unnecessary requests in the do/while loop: a Start-Sleep -Seconds 1 (or any other delay that makes sense depending on the mailbox size(s)) inside the loop is a must.
Second: rather than waiting for the job to start, just resume it yourself:
# $request is the export request object, e.g. from Get-MailboxExportRequest -Name $ExportName
if ($request.Status -eq 'Queued') {
    $request | Resume-MailboxExportRequest
}
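Putting the two suggestions together, a rough sketch of the polling loop (reusing $UserID, $PstDestination and $ExportName from the question; a sketch, not a drop-in replacement):
# Sketch: assumes $UserID, $PstDestination and $ExportName from the question above.
New-MailboxExportRequest -Mailbox $UserID -FilePath $PstDestination -Name $ExportName -ErrorAction Stop | Out-Null

do {
    Start-Sleep -Seconds 5   # don't hammer Exchange on every iteration

    $request = Get-MailboxExportRequest -Name $ExportName
    if ($request.Status -eq 'Queued') {
        # Nudge a queued request rather than waiting for it to start on its own
        $request | Resume-MailboxExportRequest
    }

    $stats = $request | Get-MailboxExportRequestStatistics
    $Percentage = $stats.PercentComplete
    Write-Progress "Mailbox export is in progress." -Status "Export $Percentage% complete" -PercentComplete $Percentage
}
while ($request.Status -ne 'Completed')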

PowerShell check for Azure Storage Container existence

I am creating a PowerShell script to perform several steps and one of them involves an Azure Storage Container removal:
Remove-AzureStorageContainer ....
The next step depends on this removal having completed.
How can I be made aware that the previous remove was successfully performed, so that execution can continue with the next step?
Something like:
while (Test-AzureStorageContainerExist "mycontainer")
{
    Start-Sleep -s 100
}
<step2>
Unfortunately 'Test-AzureStorageContainerExist' doesn't seem available. :)
You can request the list of storage containers and look for a specific one and wait until that isn't returned anymore. This works okay if the account doesn't have a ton of containers in it. If it does have a lot of containers then this won't be efficient at all.
while (Get-AzureStorageContainer | Where-Object { $_.Name -eq "mycontainer" })
{
    Start-Sleep -s 100
    "Still there..."
}
The Get-AzureStorageContainer cmdlet also takes a -Name parameter, and you could do a loop asking for that container to be returned; however, when the container doesn't exist it throws an error (Resource not found) instead of returning an empty result, so you could trap that error and know the container is gone (make sure to explicitly check for Resource not found rather than a timeout or something like that).
Update: Another option would be to call the REST API directly to get the container properties until you get a 404 (Not Found), which would mean the container is gone: http://msdn.microsoft.com/en-us/library/dd179370.aspx
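A rough sketch of that REST approach, assuming a SAS token that permits reading the container (the account, container and SAS values below are placeholders):
# Placeholders: substitute your own account, container name and SAS token.
$uri = "https://mystorageaccount.blob.core.windows.net/mycontainer?restype=container&<SAS token>"

while ($true) {
    try {
        # Get Container Properties; succeeds while the container still exists.
        Invoke-WebRequest -Uri $uri -Method Head -UseBasicParsing | Out-Null
        Start-Sleep -s 100
        "Still there..."
    }
    catch {
        if ($_.Exception.Response -and [int]$_.Exception.Response.StatusCode -eq 404) {
            # 404 Not Found: the container is gone, proceed with step 2.
            break
        }
        throw   # anything else is a real error
    }
}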
A try/catch approach:
try {
    while ($true) {
        Get-AzureStorageContainer -Name "myContainer" -ErrorAction Stop
        Start-Sleep -s 100
    }
}
catch {
    Write-Host "no such container"
    # step 2 action
}
This works
$containerDeleted = $false
while (!$containerDeleted) {
    try {
        Write-Host "Try::New-AzureStorageContainer"
        New-AzureStorageContainer -Name $storageContainerName -Permission Off -Context $context -Verbose -ErrorAction Stop
        $containerDeleted = $true
    } catch [Microsoft.WindowsAzure.Storage.StorageException] {
        Start-Sleep -s 5
    }
}
If you look at the exception being returned, the error code indicates that the container is still being deleted.

Azure blob download is incredibly slow using PowerShell (via Get-AzureStorageBlobContent), but fast via Azure Explorer, etc?

With very basic code that simply loops through my storage account and mirrors all containers and blobs to my local disk, I'm finding the Get-AzureStorageBlobContent cmdlet to be incredibly slow. It seems to take a real-time second or two per blob regardless of the blob size, which adds considerable overhead when we've got thousands of tiny files.
In contrast, on the same machine and network connection (even running simultaneously), Azure Explorer does the same bulk copy 10x to 20x faster, and AzCopy does it literally 100x faster (async), so clearly it's not a network issue.
Is there a more efficient way to use the Azure storage cmdlets, or are they just dog slow by nature? The help for Get-AzureStorageContainer mentions a -ConcurrentTaskCount option, which implies some ability to be async, but there's no documentation on how to achieve that, and given that it only operates on a single item, I'm not sure how it could.
This is the code I'm running:
$localContent = "C:\local_copy"
$storageAccountName = "myblobaccount"
$storageAccountKey = "mykey"

Import-Module Azure
$blob_account = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey -Protocol https

Get-AzureStorageContainer -Context $blob_account | ForEach-Object {
    $container = $_.Name
    Get-AzureStorageBlob -Container $container -Context $blob_account | ForEach-Object {
        $local_path = "$localContent\{0}\{1}" -f $container, $_.Name
        $local_dir = Split-Path $local_path
        if (!(Test-Path $local_dir)) {
            New-Item -Path $local_dir -ItemType directory -Force
        }
        Get-AzureStorageBlobContent -Context $blob_account -Container $container -Blob $_.Name -Destination $local_path -Force | Out-Null
    }
}
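For reference, the AzCopy comparison mentioned above boils down to a recursive container download along these lines. This is a sketch with placeholder account and SAS values, shown in current AzCopy v10 syntax rather than the AzCopy.exe flags of that era:
# Sketch: download one container recursively; account name and SAS are placeholders.
azcopy copy "https://myblobaccount.blob.core.windows.net/mycontainer?<SAS>" "C:\local_copy" --recursive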
I looked at the source code for Get-AzureStorageBlobContent on GitHub and found a few interesting things which may cause the slowness of downloading blobs (especially smaller blobs):
Line 165:
ICloudBlob blob = Channel.GetBlobReferenceFromServer(container, blobName, accessCondition, requestOptions, OperationContext);
What this code does is make a request to the server to fetch the blob type, so you add one extra request to the server for each blob.
Line 252 - 262:
try
{
DownloadBlob(blob, filePath);
Channel.FetchBlobAttributes(blob, accessCondition, requestOptions, OperationContext);
}
catch (Exception e)
{
WriteDebugLog(String.Format(Resources.DownloadBlobFailed, blob.Name, blob.Container.Name, filePath, e.Message));
throw;
}
If you look at the code above, it first downloads the blob (DownloadBlob) and then tries to fetch the blob attributes (Channel.FetchBlobAttributes). I haven't looked at the source code for the Channel.FetchBlobAttributes function, but I suspect it makes one more request to the server.
So to download a single blob, the code is essentially making 3 requests to the server, which could be the reason for the slowness. To be certain, you could trace your requests/responses through Fiddler and see exactly how the cmdlet is interacting with storage.
Check out Blob Transfer Utility. It uses the Azure API, and it's a good bet that is what Azure Explorer is using as well. BTU is open source, so it would be much easier to test whether it's the cmdlet that is the problem.