Issue Accessing File Storage in Azure WorkerRole using Startup Script - powershell

I have an Azure Cloud Service Worker Role which needs a separate Windows Service installed to redirect application tracing to a centralized server. I've placed the installation binaries for this Windows Service in a Storage Account's file storage, as shown below. I then have my startup task call a batch file, which in turn executes a PowerShell script to retrieve the file and install the service.
When Azure deploys a new instance of the role, the script execution fails with the following error:
Cannot find path
'\\{name}.file.core.windows.net\utilities\slab1-1.zip' because it does
not exist
However, when I run the script after connecting through RDP, all is fine. Does anybody know why this might be happening? Here is the script:
cmdkey /add:$storageAccountName.file.core.windows.net /user:$shareUser /pass:$shareAccessKey
net use * \\$storageAccountName.file.core.windows.net\utilities
mkdir slab
copy \\$storageAccountName.file.core.windows.net\utilities\$package .\slab\$package

I've always had problems here and there when using a script to access a mounted Azure Files drive. I believe this is more or less because the drive is mounted only for the current user, so it may not behave the same when called from a script.
I ended up pulling files from Azure Files the hard way, without a network drive.
$source = $storageAccountName
$sourceKey = $shareAccessKey
$sharename = "utilities"
$package = "slab1-1.zip"
$dest = ".\slab\" + $package
# Define the Azure file share root
$ctx = New-AzureStorageContext $source $sourceKey
$share = Get-AzureStorageShare $sharename -Context $ctx
Get-AzureStorageFileContent -Share $share -Destination $dest -Path $package -Confirm:$false
Code example here will get you a good start:
https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-files/
It would be harder to manage if you have a more complex folder structure, but the objects there are CloudFileDirectory and CloudFile, and their properties and methods work seamlessly for me in PowerShell 4.0.
*The Azure PowerShell module is required for the 'Get-AzureStorageFileContent' cmdlet.
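If you do end up with nested folders, the same cmdlets can enumerate a directory and pipe the CloudFile objects straight to the download cmdlet. A minimal sketch, assuming a hypothetical subfolder named 'tools' on the same share and the $share object from the snippet above:
# A sketch: 'tools' is a hypothetical subfolder; $share comes from Get-AzureStorageShare above.
# Files download into the current folder; CloudFileDirectory entries are skipped.
Get-AzureStorageFile -Share $share -Path "tools" |
    Get-AzureStorageFile |
    Where-Object { $_.GetType().Name -eq "CloudFile" } |
    Get-AzureStorageFileContent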

Related

How to transfer a html file from Azure VM via Azure powershell or Azure CLI to a local machine

I am working on developing an automated QA script for my project at my organisation. My goal is to execute Pester scripts through the Custom Script Extension feature of Azure VMs. I have Pester executing and the result exported as a NUnit XML. I would now like to fetch the XML back from the VM to my local machine. One way of doing that is by uploading the XML to blob storage from the VMs, but since that requires an Azure connection to be established in the VM using an SP account, I'd rather not use that method.
I would like to know the best way to retrieve the Pester results and get them outside the VM.
Any help is much appreciated. Thanks.
I'd use a shared access signature (SAS) token for that (link). That way your script doesn't really need an SP, it just needs the token, and the token can limit permissions to only uploading a file to a specific container (or even a specific blob).
$sascontext = New-AzureStorageContext -StorageAccountName accountname -SasToken '?tokenvalue'
Set-AzureStorageBlobContent -File path -Container name -Context $sascontext -Force
You can create a new token with New-AzureStorageBlobSASToken or New-AzureStorageContainerSASToken.
Your only requirement would be to install the Azure.Storage module beforehand.
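For example, a minimal sketch of generating a write-only container token on your own machine (the 'results' container name and the one-day expiry are assumptions, not from the question):
# Run where the account key is available, not inside the VM
$accountCtx = New-AzureStorageContext -StorageAccountName accountname -StorageAccountKey 'accountkey'
# Write-only token for a hypothetical 'results' container, valid for one day
New-AzureStorageContainerSASToken -Name 'results' -Permission w -ExpiryTime (Get-Date).AddDays(1) -Context $accountCtx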

Drive Mapping with Azure Scale Sets using Desired State Configuration

I am running into an interesting issue. Maybe you fine folks can help me understand what's happening here. If there's a better method, I'm all ears.
I am running a DSC configuration on Azure and would like to map a drive. I've read this really isn't what DSC is for, but I am not aware of any other way of doing this outside of DSC when using Azure Scale Sets. Here's the portion of the script I am running into issues with:
Script MappedDrive
{
    SetScript =
    {
        $pass = "passwordhere" | ConvertTo-SecureString -AsPlainText -Force
        $user = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "username", $pass
        New-PSDrive -Name W -PSProvider FileSystem -Root \\azurestorage.file.core.windows.net\storage -Credential $user -Persist
    }
    TestScript =
    {
        Test-Path -Path "W:"
    }
    GetScript =
    {
        $hashresults = @{}
        $hashresults['Exists'] = Test-Path W:
        return $hashresults
    }
}
I've also attempted this code in the SetScript section:
(New-Object -ComObject WScript.Network).MapNetworkDrive('W:','\\azurestorage.file.core.windows.net\storage',$true,'username','passwordhere')
I've also tried a simple net use command to map the drive instead of the fancier New-Object or New-PSDrive cmdlets. Same behavior.
If I run these commands (New-Object / net use / New-PSDrive) manually, the machine will map the drive, provided I use a separate drive letter. Somehow the drive is attempting to be mapped but never actually gets mapped.
Troubleshooting I've done:
There is no domain in my environment. I am simply attempting to create a scale set and run DSC to configure the machine using the storage account credentials granted upon creation of the storage account.
I am using the username and password that is given to me by the Storage Account user id and access key (randomly generated key, with usually the name of the storage account as the user).
Azure throws no errors on running the DSC module (No errors in Event Log, Information Only - Resource execution sequence properly lists all of my sequences in the DSC file.)
When I log into the machine and check to see if the drive is mapped, I run into a disconnected network drive on the drive letter I want (W:).
If I open Powershell, I receive an error: "Attempting to perform the InitializeDefaultDrives operation on the 'FileSystem' provider failed."
If I run "Get-PSDrive" the W: drive does not appear.
If I run the SetScript code manually inside a Powershell Console, the mapped drive works fine under a different drive letter.
If I try to disconnect the W: drive, I receive "This network connection does not exist."
I thought maybe DSC needed some time before mapping and added a Sleep Timer, but that didn't work. Same behavior.
I had a similar problem before. While it didn't involve DSC, mounting an Azure File share would be fine until the server was restarted; then it would appear as a disconnected drive. This happened if I used New-Object / net use / New-PSDrive with the persist option.
I found the answer to that issue in the updated docs:
Persist your storage account credentials for the virtual machine
Before mounting to the file share, first persist your storage account
credentials on the virtual machine. This step allows Windows to
automatically reconnect to the file share when the virtual machine
reboots. To persist your account credentials, run the cmdkey command
from the PowerShell window on the virtual machine. Replace <storage-account-name>
with the name of your storage account, and <storage-account-key>
with your storage account key.
cmdkey /add:<storage-account-name>.file.core.windows.net /user:<storage-account-name> /pass:<storage-account-key>
Windows will now reconnect to your file share when the virtual machine
reboots. You can verify that the share has been reconnected by running
the net use command from a PowerShell window.
Note that credentials are persisted only in the context in which
cmdkey runs. If you are developing an application that runs as a
service, you will need to persist your credentials in that context as
well.
Mount the file share using the persisted credentials
Once you have a remote connection to the virtual machine, you can run
the net use command to mount the file share, using the following
syntax. Replace <storage-account-name> with the name of your storage
account, and <share-name> with the name of your File storage share.
net use <drive-letter>: \\<storage-account-name>.file.core.windows.net\<share-name>
example:
net use z: \\samples.file.core.windows.net\logs
Since you persisted your storage account credentials in the previous
step, you do not need to provide them with the net use command. If you
have not already persisted your credentials, then include them as a
parameter passed to the net use command, as shown in the following
example.
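The example referenced by the docs didn't carry over into the post; based on standard net use syntax it would look roughly like the following (placeholders as in the quoted text, so treat this as a sketch rather than the exact docs wording):
net use <drive-letter>: \\<storage-account-name>.file.core.windows.net\<share-name> <storage-account-key> /user:<storage-account-name>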
Edit:
I don't have an Azure VM free to test it on, but this works fine on a Server 2016 Hyper-V VM:
Script MapAzureShare
{
    GetScript =
    {
    }
    TestScript =
    {
        Test-Path W:
    }
    SetScript =
    {
        Invoke-Expression -Command "cmdkey /add:somestorage.file.core.windows.net /user:somestorage /pass:somekey"
        Invoke-Expression -Command "net use W: \\somestorage.file.core.windows.net\someshare"
    }
    PsDscRunAsCredential = $credential
}
In my brief testing the drive would only appear after the server was rebooted.
What I imagine is happening here:
DSC runs under the NT AUTHORITY\SYSTEM account, and unless the Credential attribute has been set, the computer account is used when pulling files from a network share. Looking at how Azure Files operates, permissions shouldn't be an issue, but running this whole process under NT AUTHORITY\SYSTEM could be. I suggest you try to run DSC as a user of your VM and see if that works.
PS: You could also try to perform the same operation against a VM with a network share where you are confident that the share/NTFS permissions are correct. You might need to enable anonymous access to your share for that to work.
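For reference, PsDscRunAsCredential needs a PSCredential compiled into the MOF. A minimal sketch of wiring that up, with hypothetical names throughout and plain-text passwords allowed only to keep the demo short (use certificate encryption in production):
Configuration MapShare
{
    param([Parameter(Mandatory)] [PSCredential] $Credential)

    Node 'localhost'
    {
        Script MapAzureShare
        {
            GetScript  = { @{ Result = (Test-Path W:) } }
            TestScript = { Test-Path W: }
            SetScript  = {
                cmdkey /add:somestorage.file.core.windows.net /user:somestorage /pass:somekey
                net use W: \\somestorage.file.core.windows.net\someshare
            }
            PsDscRunAsCredential = $Credential
        }
    }
}

# Plain-text credential in the MOF is for demonstration only
$configData = @{ AllNodes = @( @{ NodeName = 'localhost'; PsDscAllowPlainTextPassword = $true } ) }
MapShare -Credential (Get-Credential) -ConfigurationData $configData -OutputPath .\MapShare
Start-DscConfiguration -Path .\MapShare -Wait -Verbose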

How to download files from S3 to a local folder

I have a requirement to download files from an AWS S3 bucket to a local folder, count the number of files in the local folder, check that count against S3, and send an email with the number of files.
I tried to download the files from S3, but I am getting an error like "Get-S3Object: CommandNotFoundException". How do I resolve this issue?
Here is my code:
# Your account access key - must have read access to your S3 Bucket
$accessKey = "YOUR-ACCESS-KEY"
# Your account secret access key
$secretKey = "YOUR-SECRET-KEY"
# The region associated with your bucket e.g. eu-west-1, us-east-1 etc. (see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-regions)
$region = "eu-west-1"
# The name of your S3 Bucket
$bucket = "my-test-bucket"
# The folder in your bucket to copy, including trailing slash. Leave blank to copy the entire bucket
$keyPrefix = "my-folder/"
# The local file path where files should be copied
$localPath = "C:\s3-downloads\"
$objects = Get-S3Object -BucketName $bucket -KeyPrefix $keyPrefix -AccessKey $accessKey -SecretKey $secretKey -Region $region
foreach ($object in $objects) {
    $localFileName = $object.Key -replace $keyPrefix, ''
    if ($localFileName -ne '') {
        $localFilePath = Join-Path $localPath $localFileName
        Copy-S3Object -BucketName $bucket -Key $object.Key -LocalFile $localFilePath -AccessKey $accessKey -SecretKey $secretKey -Region $region
    }
}
Since this question is one of the top Google results for "powershell download s3 files" I'm going to answer the question in the title (even though the actual question text is different):
Read-S3Object -BucketName "my-s3-bucket" -KeyPrefix "path/to/directory" -Folder .
You might need to call Set-AWSCredentials if it's not a public bucket.
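A minimal sketch of that call, with placeholder keys (add -StoreAs to persist a named profile instead of only the current session):
# Placeholder keys; sets credentials and a default region for the current session
Set-AWSCredentials -AccessKey "YOUR-ACCESS-KEY" -SecretKey "YOUR-SECRET-KEY"
Set-DefaultAWSRegion -Region eu-west-1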
Similar to Will's example, if you want to download the whole content of a "folder" keeping the directory structure try:
Get-S3Object -BucketName "my-bucket" -KeyPrefix "path/to/directory" | Read-S3Object -Folder .
The AWS documentation at https://docs.aws.amazon.com/powershell/latest/reference/items/Read-S3Object.html provides examples with fancier filtering.
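As one illustration, a sketch that downloads only the .log objects under a prefix (the bucket and prefix names are placeholders):
# Filter on the Key property before piping to Read-S3Object
Get-S3Object -BucketName "my-bucket" -KeyPrefix "path/to/directory" |
    Where-Object { $_.Key -like "*.log" } |
    Read-S3Object -Folder .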
If you have installed the AWS PowerShell module but still see this error, the module hasn't been correctly loaded into your current session. We can identify this as the issue because the error you specified means that the given cmdlet can't be found.
First verify that the module is installed, then load it into your session using one of the options below:
Load the module into an existing session (PowerShell v3 and v4):
From the documentation:
In PowerShell 4.0 and later releases, Import-Module also searches the Program Files folder for installed modules, so it is not necessary to provide the full path to the module. You can run the following command to import the AWSPowerShell module. In PowerShell 3.0 and later, running a cmdlet in the module also automatically imports a module into your session.
To verify correct installation, add the following command to the beginning of your script:
PS C:\> Import-Module AWSPowerShell
Load the module into an existing session (PowerShell v2):
To verify correct installation, add the following command to the beginning of your script:
PS C:\> Import-Module "C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell\AWSPowerShell.psd1"
Open a new session using the Windows PowerShell for AWS desktop shortcut:
A shortcut is added to your desktop that starts PowerShell with the correct module loaded into the session. If your installation was successful, this shortcut should be present and should also correctly load the AWS PowerShell module without additional effort from you.
From the documentation:
The installer creates a Start Menu group called, Amazon Web Services,
which contains a shortcut called Windows PowerShell for AWS. For
PowerShell 2.0, this shortcut automatically imports the AWSPowerShell
module and then runs the Initialize-AWSDefaults cmdlet. For PowerShell
3.0, the AWSPowerShell module is loaded automatically whenever you run an AWS cmdlet. So, for PowerShell 3.0, the shortcut created by the
installer only runs the Initialize-AWSDefaults cmdlet. For more
information about Initialize-AWSDefaults, see Using AWS Credentials.
Further Reading:
AWS PowerShell Documentation - Download and Install the AWS Tools for Windows PowerShell
AWS PowerShell Documentation - Setting up the AWS Tools for Windows PowerShell

How do I transfer my build output files to an Azure VM using PowerShell DSC?

I've been toying around with DSC and I think it's an awesome platform. I made a few tests to automate the deployment of our TFS build outputs and to automatically install the web applications and configure an environment.
This was relatively easy, as I could pass my drop folder path to the DSC script using a file share on our internal network, and use relative paths inside the configuration to select each of our modules.
My problem now is in how to expand this to Azure virtual machines. We wanted to create these scripts to automatically deploy to our QA and Production servers, which are hosted on Azure. Since they are not in our domain, I can't use the File resource anymore to transfer the files, but at the same time I want exactly the same functionality: I'd like to somehow point the configuration to our build output folder and copy the files from there to the virtual machines.
Is there some way I can copy the drop folder files easily from inside a configuration that is run on these remote computers, without sharing the same network and domain? I successfully configured the VMs to accept DSC calls over https using certificates, and I just found out that the Azure PowerShell cmdlets enable you to upload a configuration to Azure storage and run it in the VMs automatically (which seems a lot better than what I did) but I still don't know how I'd get access to my build outputs from inside the virtual machine when the configuration script is run.
I ended up using the Publish-AzureVMDscExtension cmdlet to create a local zip file, appending my build outputs to the zip, and then publishing the zip, something along those lines:
function Publish-AzureDscConfiguration
{
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory)]
        [string] $ConfigurationPath
    )
    Begin{}
    Process
    {
        $zippedConfigurationPath = "$ConfigurationPath.zip";
        Publish-AzureVMDscConfiguration -ConfigurationPath:$ConfigurationPath -ConfigurationArchivePath:$zippedConfigurationPath -Force

        $tempFolderName = [System.Guid]::NewGuid().ToString();
        $tempFolderPath = "$env:TEMP\$tempFolderName";
        $dropFolderPath = "$tempFolderPath\BuildDrop";
        try{
            Write-Verbose "Creating temporary folder and symbolic link to build outputs at '$tempFolderPath' ...";
            New-Item -ItemType:Directory -Path:$tempFolderPath;
            # New-Symlink and Remove-ReparsePoint are not built-in cmdlets (e.g. they ship with the PSCX module)
            New-Symlink -LiteralPath:$dropFolderPath -TargetPath:$PWD;
            Invoke-Expression ".\7za a $tempFolderPath\BuildDrop.zip $dropFolderPath -r -x!'7za.exe' -x!'DscDeployment.ps1'";
            Write-Verbose "Adding component files to DSC package in '$zippedConfigurationPath'...";
            Invoke-Expression ".\7za a $zippedConfigurationPath $dropFolderPath.zip";
        }
        finally{
            Write-Verbose "Removing symbolic link and temporary folder at '$tempFolderPath'...";
            Remove-ReparsePoint -Path:$dropFolderPath;
            Remove-Item -Path:$tempFolderPath -Recurse -Force;
        }

        Publish-AzureVMDscConfiguration -ConfigurationPath:$zippedConfigurationPath -Force
    }
    End{}
}
By using a zip inside the zip used by Azure, I can access the inner contents in the working directory of the PowerShell DSC Extension (in the DSCWork folder). I tried just adding the drop folder directly to the zip (without zipping it first), but then the DSC Extension copies the folder to the modules path, thinking it is a module.
I'm not completely happy with this solution yet, and I'm having a few problems already, but it makes sense in my mind and should work fine.
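On the VM side, one way to consume the inner zip from inside the configuration is the built-in Archive resource. A minimal sketch with hypothetical names, assuming BuildDrop.zip ends up next to the configuration script in the DSC extension's working folder so that $PSScriptRoot resolves to it when the configuration is compiled on the VM:
Configuration DeployBuildDrop
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost'
    {
        # Hypothetical destination path; Archive extracts the inner zip onto the VM
        Archive BuildDrop
        {
            Path        = (Join-Path $PSScriptRoot 'BuildDrop.zip')
            Destination = 'C:\BuildDrop'
            Ensure      = 'Present'
        }
    }
}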

(PowerShell) How to give a ZIP permissions to allow user to edit directly after downloading from Azure Blob

Good morning friends,
I've been writing a script in PowerShell to replace our current manual process to deploy our application to Azure Blob Storage in a ZIP folder during the Build Process in VS. I'm about done, but I've run into this issue:
When the ZIP that I upload to Azure is downloaded by anyone, the ZIP cannot be manipulated without having to extract the files first. This is something the current process is able to accomplish and I don't know how (The current process was written in C# and is done through a GUI). It needs to be editable via the ZIP because the current Updater is set to manipulate the ZIP without the extraction first.
So the initial question is: how do I set permissions on a ZIP archive that will follow it to Azure Blob Storage, so that when it's downloaded on a client's machine its contents can be manipulated (the error at this time is that it cannot delete a file in a child folder) without extraction?
Currently, to ZIP my folder up, I use this process:
$src = "$TEMPFOL\$testBuildDrop"
$dst = "$TEMPFOL\LobbyGuard.zip"
[Reflection.Assembly]::LoadWithPartialName( "System.IO.Compression.FileSystem" )
[System.IO.Compression.ZipFile]::CreateFromDirectory($src, $dst)
and then push it to blob with:
Set-AzureStorageBlobContent -Container test -Blob "LobbyGuard.zip" -File "$TEMPFOL\LobbyGuard.zip" -Context $storageCreds -Force
I've tried to set permissions on the folder prior to upload using
$getTEMPFOLACL = Get-Acl $TEMPFOL
$accessRule = New-Object System.Security.AccessControl.FileSystemAccessRule("Everyone", "FullControl", "Allow")
$getTEMPFOLACL.SetAccessRule($accessRule)
Set-Acl -Path $TEMPFOL -AclObject $getTEMPFOLACL   # apply the modified ACL back to the folder
This works on the current local file, but once downloaded, the permissions on the file are set as
Owner: BUILTIN\Administrators  Access: NT AUTHORITY\SYSTEM Allow FullControl
which is exactly the same permissions as the file downloaded by the current process. I don't understand what I'm missing here to make this work.
If necessary I can provide the DL link to our blob to show the current manual processes folder that can be manipulated IN the ZIP vs. My Scripts ZIP that cannot.
Try unblocking the file after downloading, i.e.:
Unblock-File C:\path\yourDownloaded.zip
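To confirm whether the downloaded archive is actually blocked, you can look for the Zone.Identifier alternate data stream that Windows attaches to files from the internet; a quick sketch with a placeholder path:
# Placeholder path; any output means the file carries the "downloaded from the internet" mark that Unblock-File removes
Get-Item -Path C:\path\yourDownloaded.zip -Stream Zone.Identifier -ErrorAction SilentlyContinue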