Attaching a locally uploaded VHD to a Classic Azure VM - PowerShell

I have a VHD that I uploaded to Azure using Set-AzureStorageBlobContent, and I also attempted the upload with Add-AzureVhd. When I run Add-AzureDataDisk in the console, the VHD appears to be attached to LUN 0 on the VM when I check with Get-AzureVM. I used the appropriate URI for the MediaLocation argument, but when I check the Classic Portal (web interface) or go into the VM itself, the VHD is still not attached.
If I do the process manually, the VHD attaches just fine. Under VMs -> Instances -> Disks I can see the uploaded VHD when I attach it manually, but using the cmdlets I cannot get the VHD to appear under "existing disks" on the VM instance.
I have triple-checked everything: my storage account is in the same region as my VM instance, and the locally uploaded VHD is fixed-size and labelled correctly in blob storage as "someVHD.vhd". When I run Add-AzureDataDisk, the console reports that the disk is attached; the odd behavior is that even when I label the existing disk in the attach cmdlet's arguments, the disk still does not attach via the cmdlets.
This is my exact script -
$createVHD = New-VHD -Path $($vhdInstallFullPath) -Fixed -SizeBytes 256MB -ComputerName $hostName
Next I copy files to the VHD and prepare it for upload to Azure using either Set-AzureStorageBlobContent or Add-AzureVhd; in this case I used Set-AzureStorageBlobContent because the VHD is really small.
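Roughly, the copy step looks like this (a simplified sketch; the source path is illustrative and it assumes the Hyper-V and Storage modules are available where the VHD file lives):
# Mount the new VHD, put a formatted partition on it, copy the payload, then detach
$volume = Mount-VHD -Path $vhdInstallFullPath -Passthru |
    Get-Disk |
    Initialize-Disk -PartitionStyle MBR -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -Confirm:$false
Copy-Item -Path "C:\InstallPackage\*" -Destination "$($volume.DriveLetter):\" -Recurse
Dismount-VHD -Path $vhdInstallFullPath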
$migrateVHD = Set-AzureStorageBlobContent -File $vhdInstallFullPath -Blob $VHDInstallName -Container $StorageContainerName -Context $($newAzureContext.Context) -BlobType Page -Confirm:$False
$addAzureDataDisk = Add-AzureDataDisk -VM $azureVMInfo -ImportFrom -MediaLocation $azureInstallBlobURI -DiskLabel "InstallPackage" -LUN $azureDataDiskLUN
Now, I have a lot of variables and I'm doing a number of other things to get the storage context, get the Azure VM object, and copy files to the VHD before I upload, but that script block should give you the gist.
Could my issue be related to using a page blob rather than a block blob for the VHD? From the documentation I understood that a VHD containing multiple files should be a page blob.

Maybe you could try to use the following cmdlet.
Get-AzureVM "stlcs01" -Name "shuitest1" | Add-AzureDataDisk -ImportFrom -MediaLocation "https://t5portalvhdsx2463gvmvrz7.blob.core.windows.net/vhds/shui-shui-2017-02-02.vhd" -DiskLabel "InstallPackage" -LUN 1
I found a good article about your problem; maybe you could check it: Add, Import Data Disk to Azure Virtual Machine using PowerShell.
I tested this in my lab:
Add-AzureVhd -Destination "https://t5portalvhdsx2463gvmvrz7.blob.core.windows.net/vhds/shui.vhd" -LocalFilePath "D:\shui.vhd" -NumberOfUploaderThreads 32
Get-AzureVM -Name shuitest -ServiceName shuitest | Add-AzureDataDisk -ImportFrom -MediaLocation "https://t5portalvhdsx2463gvmvrz7.blob.core.windows.net/vhds/shui.vhd" -DiskLabel "test" -LUN 1
Get-AzureVM -ServiceName shuitest -Name shuitest | Get-AzureDataDisk
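One more thing worth checking: in the classic module, Add-AzureDataDisk only edits the local VM configuration object, and nothing is applied to the service until that object is piped to Update-AzureVM (the same pattern as the Remove-AzureDataDisk example further down this page). A sketch of the full pipeline, reusing the names from the example above:
Get-AzureVM -ServiceName "stlcs01" -Name "shuitest1" |
    Add-AzureDataDisk -ImportFrom -MediaLocation "https://t5portalvhdsx2463gvmvrz7.blob.core.windows.net/vhds/shui-shui-2017-02-02.vhd" -DiskLabel "InstallPackage" -LUN 1 |
    Update-AzureVM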

Related

How to get the command line for AZCopy?

I want to send dump files to a storage container and for the copy to work we need to obtain a SAS key for the container we’re copying to.
When you use Azure Storage Explorer you can copy a file to a container and then copy the command it used to the clipboard, which looks something like this:
$env:AZCOPY_CRED_TYPE = "Anonymous";
./azcopy.exe copy "C:\temp\test.txt" "https://dbbackups.blob.core.windows.net/memorydumps/test.txt?{SAS-TOKEN}" --overwrite=prompt --from-to=LocalBlob --blob-type Detect --follow-symlinks --put-md5 --follow-symlinks --recursive;
$env:AZCOPY_CRED_TYPE = "";
I copied this from AZ Storage Explorer when copying a file called test.txt from c:\temp to the memorydumps container in a Storage Account.
What I would need help with is creating a PowerShell script that generates the above command line so I can run it on azcopy-less nodes and have the dumps end up in the storage container. Any ideas?
You could use the Azure PowerShell equivalent to upload blobs to your container. The Set-AzStorageBlobContent cmdlet uploads a local file to an Azure Storage blob.
# Assumes a storage context and target container have already been set up, e.g.:
# $ctx = New-AzStorageContext -StorageAccountName "dbbackups" -StorageAccountKey $accountKey
# $containerName = "memorydumps"
Set-AzStorageBlobContent -File "C:\Temp\test.txt" `
    -Container $containerName `
    -Blob "test.txt" `
    -Context $ctx
Refer to this blog post for a detailed walkthrough: File Transfers to Azure Blob Storage Using Azure PowerShell
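If you specifically need to generate the azcopy command line with a SAS token, as the question asks, a rough sketch along these lines may work. It assumes the Az.Storage module; the account key variable, expiry, and permissions are illustrative, while the account and container names come from your example:
# Build a container SAS and compose the azcopy command line from it
$ctx = New-AzStorageContext -StorageAccountName "dbbackups" -StorageAccountKey $accountKey
$sas = New-AzStorageContainerSASToken -Name "memorydumps" -Permission "rw" -ExpiryTime (Get-Date).AddHours(4) -Context $ctx
$sas = $sas.TrimStart('?')   # some module versions prepend '?', others do not
$dest = "https://dbbackups.blob.core.windows.net/memorydumps/test.txt?$sas"
$cmd  = ".\azcopy.exe copy `"C:\temp\test.txt`" `"$dest`" --from-to=LocalBlob --blob-type=Detect --put-md5"
$cmd   # the generated command line, ready to run wherever azcopy is installed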

Add existing vhd to Azure VM using PowerShell Azure Resource Manager cmdlets

NOTE: This is an Azure ARM question - not an Azure Service Management question.
I have 2 vhds in a storage account - machine-os.vhd and machine-data.vhd. I can use PowerShell ARM cmdlets to create a VM with the existing OS vhd, but cannot figure out how to attach the EXISTING data vhd. All the samples I have found create EMPTY data disks - but I want to attach the existing vhd as a data disk.
I've tried to use the createOption attach switch with Add-AzureVMDataDisk like so:
$vm = New-AzureVMConfig ...
$vm = Add-AzureVMDataDisk -VM $vm -VhdUri $existingDiskUri -Name "machine-data.vhd" -Caching ReadOnly -CreateOption attach -DiskSizeInGB 200
However, the operation fails with:
Blob https://x.blob.core.windows.net/vhds/machine-data.vhd already exists. Please provide a different blob URI to create a new blank data disk 'machine-data.vhd.'
I have to specify DiskSizeInGB for the Add-AzureVMDataDisk command to work (which seems strange). I've tried specifying SourceImageUri and a different name for the VhdUri, which, according to the documentation, should copy the vhd from the SourceImageUri to the VhdUri and attach it. Trying createOption fromImage fails because "you cannot specify size with source image uri". However, the size parameter for the cmdlet is mandatory, so I don't know how you could specify a SourceImageUri and not a size.
This SO question presents a similar issue (though I don't get the same error), but the link in the answer shows a template with EMPTY data disks, which doesn't help.
Interestingly, I've also tried to add the disk to the VM from the Azure Portal - there you have to specify a name and a URI, but the operation always fails with some strange JSON parsing error. I can't seem to get the URI for the data disk correct.
After playing around a bit more I found a hack:
1. Give a new URI for the existing disk and set this as the VhdUri.
2. Use the URI of the existing disk as the SourceImageUri.
3. Call Add-AzureVMDataDisk using the CreateOption fromImage.
4. Set the size of the data disk to null (this is the hack).
5. When calling New-AzureVM, the existing disk is copied to the new URI.
6. After creating the VM, I delete the original vhd.
Unfortunately you have to supply the DiskSizeInGB parameter to the Add-AzureVMDataDisk command since it's mandatory. However, I set it to null afterwards; otherwise the provisioning fails (the error message says that you can't specify both a size and a SourceImageUri).
Here's the final code:
$vm = New-AzureVMConfig ...
$vm = Add-AzureVMDataDisk -VM $vm `
-SourceImageUri $existingDataDiskUrl -VhdUri $newDataDiskUri `
-Name $newDataDiskName -CreateOption fromImage -Caching ReadOnly `
-DiskSizeInGB 200
# hack
$vm.StorageProfile.DataDisks[0].DiskSizeGB = $null
After that you can call:
New-AzureVM -ResourceGroupName $rgName -Location $location -VM $vm
Once the VM is created, I call Remove-AzureStorageBlob to delete the original disk.
Perhaps there's a cleaner way, but I can't find one. At least this way works in the end.
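The clean-up call at the end looks something like this (a sketch; the storage account name and key variable are placeholders, while the container and blob names are the ones from the error message above):
# Delete the original source vhd blob once the VM has been provisioned from the copy
$storCtx = New-AzureStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey $storageKey
Remove-AzureStorageBlob -Container "vhds" -Blob "machine-data.vhd" -Context $storCtx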

How to use "Set-AzureStorageFileContent" to upload file to HDInsight?

I think I have set up my PowerShell environment to connect to my Azure account. I want to upload a file to my HDInsight blob storage. I did:
Set-AzureStorageFileContent -ShareName "what is a share name?" -Source "c:\local.file" -Path "test/"
But I got
Set-AzureStorageFileContent : Can not find your azure storage credential.
Please set current storage account using
"Set-AzureSubscription" or set the "AZURE_STORAGE_CONNECTION_STRING" environment variable.
The help information for Set-AzureSubscription is so useless, I have no idea what it is talking about...
A few things here:
Set-AzureStorageFileContent uploads a file into a File Service share, not into Blob Storage. To upload a file into blob storage, you would need to use the Set-AzureStorageBlobContent cmdlet.
I believe the error you're getting is because no storage account is specified. The Set-AzureStorageBlobContent cmdlet has a parameter called Context, and you need to specify the context, which you can do by calling the New-AzureStorageContext cmdlet.
Sample code:
$storageContext = New-AzureStorageContext -StorageAccountName "accountname" -StorageAccountKey "accountkey"
Set-AzureStorageBlobContent -File "full file path" -Container "container name" -BlobType "Block" -Context $storageContext -Verbose
Please note that the container must already exist in your storage account; you can create it using the New-AzureStorageContainer cmdlet.
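A short sketch of that step, reusing the placeholder context from the sample above:
# Create the target container if it does not already exist (placeholder names)
New-AzureStorageContainer -Name "containername" -Permission Off -Context $storageContext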

Remove-AzureDisk throws error, not sure why

I have an Azure VM and I'm trying to delete it using PowerShell. I also want to remove the disk that the VM's OS was on (there are no data disks).
I assume I'm going to need the following two cmdlets:
Remove-AzureVM
Remove-AzureDisk
Here's my code:
$VMs = Get-AzureVM $svcName
foreach ($VM in $VMs)
{
    $OSDisk = ($VM | Get-AzureOSDisk)
    if ($VM.InstanceStatus -eq "ReadyRole") {
        Stop-AzureVM -Name $VM.Name -ServiceName $svcName
    }
    Remove-AzureVM -ServiceName $svcName -Name $VM.Name
    Remove-AzureDisk -DiskName $OSDisk.DiskName
}
When I execute this the call to Remove-AzureVM returns successfully but the call to Remove-AzureDisk returns an error:
Remove-AzureDisk : BadRequest: A disk with name
XXX is currently in use
by virtual machine YYY running within hosted service
ZZZ, deployment XYZ.
The strange thing is, I can issue the same call to Remove-AzureDisk just a few moments later and it returns successfully.
It's as if the call to Remove-AzureVM is returning too quickly, i.e. it's reporting success before the VM has been fully removed, or at any rate before the link to the disk has been removed.
Can anyone explain why this might be and also how I might work around this problem?
thanks in advance
Jamie
What's happening here is that the disk stored in blob storage is locked while it is in use by a VM. You are removing the VM, but it takes a few moments for the lease on the blob to be released. That's why you can remove it a few moments later.
There are a few folks who have written PowerShell to break the lease, or you could use the SDK from PowerShell (or make direct REST API calls) to check the lease status.
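A rough sketch of the polling approach, assuming the DiskContext returned by the classic Get-AzureDisk exposes the AttachedTo property (the disk name is the one captured earlier in your loop):
# Wait for the platform to release the disk, then delete it
# (add -DeleteVHD to Remove-AzureDisk if the backing blob should be removed as well)
do {
    Start-Sleep -Seconds 15
    $disk = Get-AzureDisk -DiskName $OSDisk.DiskName
} while ($disk.AttachedTo -ne $null)
Remove-AzureDisk -DiskName $OSDisk.DiskName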
I ended up writing a script that creates a VM, clones it, then deletes the clones. As part of that I needed to wait until the lease was released hence if you're experiencing this same problem you might want to check my blog post at http://sqlblog.com/blogs/jamie_thomson/archive/2013/11/04/clone-an-azure-vm-using-powershell.aspx as it'll have some code that might help you.
Regards
Jamie
I puzzled over this for quite a while. Ultimately, I found a different command to do what I thought I was doing with this one. I would recommend the Remove-AzureDataDisk cmdlet to delete a disk, as it automatically breaks the lease.
Get-AzureVM -ServiceName <servicename> -Name <vmname> | Remove-AzureDataDisk -LUN <lun#> -DeleteVHD | Update-AzureVM
It will spin for a couple of minutes, but it will give you a success/failure output at the end.
This command just does it, and doesn't give you any feedback about which drive was removed. I would recommend tossing in a get-azuredatadisk first just to be sure of what you deleted.
Get-AzureVM -ServiceName <servicename> -name <vmname> | Get-AzureDataDisk
This is related to Windows Azure: Delete disk attached to non-existent VM. Cross-posting my answer here:
I was unable to use the (2016) web portal to delete orphaned disks in my (classic) storage account. Here is a detailed walkthrough for deleting these orphaned disks with PowerShell.
PowerShell
Download and install Azure PowerShell if you haven't already (see Install and configure Azure PowerShell). Initial steps from that documentation:
Check that the Azure PowerShell module is available after installing:
Get-Module -ListAvailable
If the Azure PowerShell module is not listed, you may need to import it:
Import-Module Azure
Login to Azure Resource Manager:
Login-AzureRmAccount
AzurePublishSettingsFile
Retrieve your PublishSettingsFile.
Get-AzurePublishSettingsFile
Get-AzurePublishSettingsFile launches manage.windowsazure.com and prompts you to download an XML file that can be saved anywhere.
Reference: Get-AzurePublishSettingsFile Documentation
Import-AzurePublishSettingsFile and specify the path to the file just saved.
Import-AzurePublishSettingsFile -PublishSettingsFile '<your file path>'
Show and Remove Disks
Show current disks. (Reference: Azure Storage Cmdlets)
Get-AzureDisk
Quickly remove all disks. (Credit to Mike's answer)
get-azuredisk | Remove-AzureDisk
Or remove disks by name. (Credit to Remove-AzureDisk Documentation)
Remove-AzureDisk -DiskName disk-name-000000000000000000 -DeleteVHD

Azure blob download is incredibly slow using PowerShell (via Get-AzureStorageBlobContent), but fast via Azure Explorer, etc?

With very basic code that simply loops through my storage account and mirrors all containers and blobs to my local disk, I'm finding the Get-AzureStorageBlobContent cmdlet to be incredibly slow. It seems to take a second or two of wall-clock time per blob regardless of the blob size, which adds considerable overhead when we've got thousands of tiny files.
In contrast, on the same machine and network connection (even running simultaneously), Azure Explorer does the same bulk copy 10x to 20x faster, and AzCopy does it literally 100x faster (async), so clearly it's not a network issue.
Is there a more efficient way to use the Azure storage cmdlets, or are they just dog slow by nature? The help for Get-AzureStorageContainer mentions a -ConcurrentTaskCount option, which implies some ability to run asynchronously, but there's no documentation on how to achieve that, and since the cmdlet only operates on a single item I'm not sure how it could help.
This is the code I'm running:
$localContent = "C:\local_copy"
$storageAccountName = "myblobaccount"
$storageAccountKey = "mykey"
Import-Module Azure
$blob_account = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey -Protocol https
Get-AzureStorageContainer -Context $blob_account | ForEach-Object {
    $container = $_.Name
    Get-AzureStorageBlob -Container $container -Context $blob_account | ForEach-Object {
        $local_path = "$localContent\{0}\{1}" -f $container, $_.Name
        $local_dir = Split-Path $local_path
        if (!(Test-Path $local_dir)) {
            New-Item -Path $local_dir -ItemType directory -Force
        }
        Get-AzureStorageBlobContent -Context $blob_account -Container $container -Blob $_.Name -Destination $local_path -Force | Out-Null
    }
}
I looked at the source code for Get-AzureStorageBlobContent on GitHub and found a few interesting things that may cause the slowness of downloading blobs (especially smaller ones):
Line 165:
ICloudBlob blob = Channel.GetBlobReferenceFromServer(container, blobName, accessCondition, requestOptions, OperationContext);
What this code does is make a request to the server to fetch the blob type, so you add one extra request to the server for each blob.
Line 252 - 262:
try
{
    DownloadBlob(blob, filePath);
    Channel.FetchBlobAttributes(blob, accessCondition, requestOptions, OperationContext);
}
catch (Exception e)
{
    WriteDebugLog(String.Format(Resources.DownloadBlobFailed, blob.Name, blob.Container.Name, filePath, e.Message));
    throw;
}
If you look at the code above, it first downloads the blob (DownloadBlob) and then tries to fetch the blob attributes (Channel.FetchBlobAttributes). I haven't looked at the source code for the Channel.FetchBlobAttributes function, but I suspect it makes one more request to the server.
So to download a single blob, the code is essentially making 3 requests to the server, which could be the reason for the slowness. To be certain, you could trace your requests/responses through Fiddler and see exactly how the cmdlet interacts with storage.
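One thing you could experiment with, though whether it actually skips the extra round-trips depends on the module version: pipe the output of Get-AzureStorageBlob straight into Get-AzureStorageBlobContent so the blob object it already fetched is reused, and raise -ConcurrentTaskCount. A sketch based on your script (the concurrency value and the flat per-container destination handling are illustrative):
# Reuse the listing output and raise per-transfer concurrency.
# Blobs with "/" in their names still need their local sub-directories created,
# as in the original script.
Get-AzureStorageContainer -Context $blob_account | ForEach-Object {
    $dest = Join-Path $localContent $_.Name
    if (!(Test-Path $dest)) { New-Item -Path $dest -ItemType Directory -Force | Out-Null }
    Get-AzureStorageBlob -Container $_.Name -Context $blob_account |
        Get-AzureStorageBlobContent -Destination $dest -Context $blob_account -ConcurrentTaskCount 20 -Force |
        Out-Null
}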
Check out the Blob Transfer Utility. It uses the Azure API, and it's a good bet that's what Azure Explorer is using as well. BTU is open source, so it would be much easier to test whether it's the cmdlet that is the problem.