PowerShell check for Azure Storage Container existence

I am creating a PowerShell script that performs several steps, and one of them involves removing an Azure Storage container:
Remove-AzureStorageContainer ....
The next step depends on this removal having completed.
How can I tell that the previous remove finished successfully, so that execution can continue with the next step?
Something like:
while (Test-AzureStorageContainerExist "mycontainer")
{
    Start-Sleep -s 100
}
<step2>
Unfortunately 'Test-AzureStorageContainerExist' doesn't seem available. :)

You can request the list of storage containers, look for the specific one, and wait until it is no longer returned. This works okay if the account doesn't have many containers in it; if it does have a lot of containers, this won't be efficient at all.
while (Get-AzureStorageContainer | Where-Object { $_.Name -eq "mycontainer" })
{
    Start-Sleep -s 100
    "Still there..."
}
The Get-AzureStorageContainer cmdlet also takes a -Name parameter, so you could loop on asking for that container to be returned. However, when the container doesn't exist it throws an error (Resource not found) instead of returning an empty result, so you could trap that error and know the container is gone (make sure to check explicitly for Resource not found rather than a timeout or something like that).
Update: Another option would be to call the REST API's Get Container Properties operation directly until you get a 404 (Not Found). That would mean the container is gone. http://msdn.microsoft.com/en-us/library/dd179370.aspx
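A minimal polling sketch of that REST approach, assuming a SAS token with read access in $sasToken (without a leading '?') and the account and container names in $accountName / $containerName; these variables and the retry interval are illustrative, not part of the original answer:
# Poll the Get Container Properties operation until it returns 404
$uri = "https://$accountName.blob.core.windows.net/${containerName}?restype=container&$sasToken"
$gone = $false
while (-not $gone) {
    try {
        # Any successful response means the container still exists
        Invoke-WebRequest -Uri $uri -Method Head -UseBasicParsing | Out-Null
        Start-Sleep -Seconds 30
    }
    catch {
        # A 404 means the container is gone; rethrow anything else
        if ($_.Exception.Response.StatusCode -eq 404) { $gone = $true }
        else { throw }
    }
}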

A try/catch approach:
try {
    while ($true) {
        Get-AzureStorageContainer -Name "myContainer" -ErrorAction Stop
        Start-Sleep -s 100
    }
}
catch {
    Write-Host "no such container"
    # step 2 action
}
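A variation on the same idea that only treats a "not found" error as success and rethrows anything else; the message patterns below are an assumption about the old Azure module's error text, so adjust them to what you actually see:
try {
    while ($true) {
        Get-AzureStorageContainer -Name "myContainer" -ErrorAction Stop | Out-Null
        Start-Sleep -Seconds 100
    }
}
catch {
    # Rethrow timeouts and other unexpected failures instead of treating them as "deleted"
    if ($_.Exception.Message -notmatch 'not found|does not exist') { throw }
    Write-Host "no such container"
    # step 2 action
}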

This works:
$containerDeleted = $false
while (!$containerDeleted) {
    Try {
        Write-Host "Try::New-AzureStorageContainer"
        New-AzureStorageContainer -Name $storageContainerName -Permission Off -Context $context -Verbose -ErrorAction Stop
        $containerDeleted = $true
    } catch [Microsoft.WindowsAzure.Storage.StorageException] {
        Start-Sleep -s 5
    }
}
If you look into the error being returned, the error code indicates that the container is still being deleted.
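If you want to be stricter about what the catch block swallows, you can inspect the storage error code before retrying. A sketch, assuming the classic Microsoft.WindowsAzure.Storage SDK surfaces the code via RequestInformation and that the code name is ContainerBeingDeleted:
$containerDeleted = $false
while (!$containerDeleted) {
    Try {
        New-AzureStorageContainer -Name $storageContainerName -Permission Off -Context $context -ErrorAction Stop
        $containerDeleted = $true
    } catch [Microsoft.WindowsAzure.Storage.StorageException] {
        # Property path and code name are assumptions based on the classic storage SDK
        $errorCode = $_.Exception.RequestInformation.ExtendedErrorInformation.ErrorCode
        if ($errorCode -ne 'ContainerBeingDeleted') { throw }   # rethrow anything unexpected
        Start-Sleep -s 5
    }
}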

Related

How to handle (avoid): "Fail to create runspace because you have exceeded your budget to create runspace."

I have an HTTP-triggered Azure Function App on the PowerShell Core stack. The script parses the body of the request and, assuming everything is OK, connects to Exchange Online and then executes 2 cmdlets to create a MailContact type of contact. At the end it disconnects from Exchange Online. I have a console app that executes POST requests, passing JSON data for one contact in the body. The requests are executed in a for-each loop, and after the 5th successful request I get the runspace-exceeded-budget error.
Some code snippets from the script:
...
try {
    Connect-ExchangeOnline -CertificateThumbprint $thumb -AppId $appId -Organization $org -ShowBanner:$false -CommandName Get-Contact,Get-MailContact,New-MailContact,Set-Contact,Set-MailContact,Remove-MailContact
    New-MailContact -ErrorAction Stop #p | Out-Null
    Set-Contact -ErrorAction Stop #parameters | Out-Null
}
catch {
    ...
}
finally {
    Disconnect-ExchangeOnline -Confirm:$false -InformationAction Ignore -ErrorAction SilentlyContinue
    Get-PSSession | Remove-PSSession
}
What I tried (without success):
relaxation for Exchange Online throttling policy (https://www.michev.info/Blog/Post/3205/self-service-powershell-throttling-policy-relaxation-for-exchange-online)
setting different environmental variables (like PSWorkerInProcConcurrencyUpperBound and FUNCTIONS_WORKER_PROCESS_COUNT)
What worked: having an additional Function App and then cycling between the two every 5 requests.
Additional information that might help:
PSWorkerInProcConcurrencyUpperBound = 1000
FUNCTIONS_WORKER_PROCESS_COUNT = 10
Function runtime version = ~4
PowerShell Core Version = 7
Platform = 64Bit
Plan type = Consumption (Serverless)
In addition, it takes around 7-8 seconds from sending the request until I get the response back; connecting to Exchange Online takes a lot of time.
Any help or hint on how to solve the runspace budget error?
A dirty workaround would be this:
try {
    Connect-ExchangeOnline #ConnectExchange
} catch {
    Write-Verbose -Verbose ($_.Exception.Message)
    $Wait = ($_.Exception.Message) | Select-String ('(?<=for )(.*)(?= seconds)') -AllMatches
    $Count = ([int]$Wait.Matches.Value)
    Start-Sleep -Seconds $Count
    Connect-ExchangeOnline #ConnectExchange
}
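If a single retry is not enough, the same idea can be wrapped in a small loop. A sketch, assuming the error message always contains a "for N seconds" hint; the attempt count and fallback delay are arbitrary:
$maxAttempts = 5
for ($i = 1; $i -le $maxAttempts; $i++) {
    try {
        Connect-ExchangeOnline #ConnectExchange
        break                                          # connected, stop retrying
    } catch {
        Write-Verbose -Verbose ($_.Exception.Message)
        $wait = ($_.Exception.Message | Select-String '(?<=for )(\d+)(?= seconds)').Matches.Value
        if (-not $wait) { $wait = 60 }                 # no hint in the message: use a fixed fallback
        Start-Sleep -Seconds ([int]$wait)
    }
}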

Getting specific app pool's worker process in PowerShell returns value but process is already stopped

I have multiple websites - each on a separate app pool.
The app pool I'm referring to has 1 worker process.
After stopping the app pool, I'm trying to wait and verify that the worker process has stopped.
$appPoolName = $appPool.name;
Write-Host "appPoolName: $appPoolName";
$w3wp = Get-ChildItem "IIS:\AppPools\$appPoolName\WorkerProcesses\";
while ($w3wp -and $retrys -gt 0)
{
    Write-Host "w3wp value is: $w3wp";
    Start-Sleep -s 10;
    $retrys--;
    $w3wp = Get-ChildItem "IIS:\AppPools\$appPoolName\WorkerProcesses\";
    Write-Host "w3wp value(2) is: $w3wp";
    if (-not $w3wp)
    {
        break;
    }
}
The printed value in both places is always "Microsoft.IIs.PowerShell.Framework.ConfigurationElement", even when I can see the process has stopped and is no longer in Task Manager.
Also strange: when I open another PowerShell session while the code runs and call
$w3wp = Get-ChildItem "IIS:\AppPools\$appPoolName\WorkerProcesses\";
$w3wp has no value (because the worker process no longer exists).
Any ideas why the value isn't changing?
Or maybe how to do that differently?
Thanks in advance :)
I think the IIS: provider is caching data. I don't know of a fix, but here's a couple of alternatives:
Use WMI from PowerShell:
gwmi -NS 'root\WebAdministration' -class 'WorkerProcess' | select AppPoolName,ProcessId
Run appcmd:
appcmd list wp
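For the original wait loop, you can poll the WMI query instead of the IIS: provider. A sketch, assuming $appPoolName and $retrys are set as in the question:
while ($retrys -gt 0) {
    # Look for a worker process that still belongs to this application pool
    $w3wp = Get-WmiObject -Namespace 'root\WebAdministration' -Class 'WorkerProcess' |
            Where-Object { $_.AppPoolName -eq $appPoolName }
    if (-not $w3wp) { break }      # no worker process left, the pool has fully stopped
    Write-Host "w3wp still running, waiting..."
    Start-Sleep -Seconds 10
    $retrys--
}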

Restrict multiple executions of the same script

I am trying to restrict multiple executions of the same script in PowerShell. I have tried the following code. It works, but a major drawback is that when I close the PowerShell window and run the same script again, it executes once again.
Code:
$history = Get-History
Write-Host "history=" $history.Length
if ($history.Length -gt 0) {
    Write-Host "this script already run using History"
    return
} else {
    Write-Host "First time using history"
}
How can I avoid this drawback?
I presume you want to make sure that the script is not already running from a different PowerShell process, rather than from the same one via some sort of self-call.
In either case there isn't anything built into PowerShell for this, so you need to mimic a semaphore.
For the same process, you can leverage a global variable and wrap your script in a try/finally block:
$variableName = "Something unique"
$semaphoreAcquired = $false
try
{
    if (Get-Variable -Name $variableName -Scope Global -ErrorAction SilentlyContinue)
    {
        Write-Warning "Script is already executing"
        return
    }
    Set-Variable -Name $variableName -Value 1 -Scope Global
    $semaphoreAcquired = $true
    # The rest of the script
}
finally
{
    # Only release the semaphore if this invocation acquired it
    if ($semaphoreAcquired)
    {
        Remove-Variable -Name $variableName -Scope Global -ErrorAction SilentlyContinue
    }
}
If you want to do the same across processes, you need to store something outside of your process. A file works well with a similar mindset, using Test-Path, New-Item and Remove-Item, as in the sketch below.
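A file-based variant (a sketch, assuming a lock file under $env:TEMP is acceptable; pick any path every run can see):
$lockFile = Join-Path $env:TEMP 'MyScript.lock'
$lockAcquired = $false
try
{
    if (Test-Path $lockFile)
    {
        Write-Warning "Script is already executing"
        return
    }
    New-Item -Path $lockFile -ItemType File | Out-Null
    $lockAcquired = $true
    # The rest of the script
}
finally
{
    # Remove the lock only if this run created it
    if ($lockAcquired) { Remove-Item -Path $lockFile -ErrorAction SilentlyContinue }
}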
In either case, please note that this trick only mimics a semaphore; it is not as rigid as a real one and can leak.

Progress bar for `New-MailboxExportRequest`

I'm trying to create a script that can export a user's mailbox to a PST, remotely (Exchange Server 2010 console is installed on the server we're running this from, and the module is loaded correctly). It's being done using a script so our L2 admins do not have to manually perform the task. Here's the MWE.
$UserID = Read-Host "Enter username"
$PstDestination = "\\ExServer\Share\$UserID.pst"
$Date = Get-Date -Format "yyyyMMddhhmmss"
$ExportName = "$UserID" + "$Date"
try {
    New-MailboxExportRequest -Mailbox $UserID -FilePath $PstDestination -Name $ExportName -ErrorAction Stop -WarningAction SilentlyContinue | Out-Null
    # Loop through the process to track its status and write progress
    do {
        $Percentage = (Get-MailboxExportRequest -Name $ExportName | Get-MailboxExportRequestStatistics).PercentComplete
        Write-Progress "Mailbox export is in progress." -Status "Export $Percentage% complete" -PercentComplete "$Percentage"
    }
    while ($Percentage -ne 100)
    Write-Output "$UserID`'s mailbox has been successfully exported. The archive can be found at $PstDestination."
}
catch {
    Write-Output "There was an error exporting the mailbox. The process was aborted."
}
The problem is that as soon as we initiate the export, the task gets queued. Sometimes the export remains queued for a very long time; the script cannot tell when the task actually begins, and when it does, it cannot display the progress correctly. The export happens in the background, but the script remains stuck in the loop, so nothing after the export gets executed and the rest has to be done manually.
Please suggest a way to handle this.
I tried adding a wait timer and then a check to see if the export has begun. It didn't quite work as expected.
Two things. The first is more about performance: the do/while loop hammers Exchange with unnecessary requests, so a Start-Sleep -Seconds 1 (or any other delay that makes sense depending on the mailbox size(s)) inside the loop is a must.
Second: rather than waiting for the request to start, just resume it yourself:
if ($request.Status -eq 'Queued') {
    $request | Resume-MailboxExportRequest
}
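Putting both suggestions together with the question's variables (a sketch; $ExportName comes from the question and the 5-second delay is arbitrary):
$request = Get-MailboxExportRequest -Name $ExportName
if ($request.Status -eq 'Queued') {
    $request | Resume-MailboxExportRequest            # kick the queued request
}
do {
    Start-Sleep -Seconds 5                            # don't hammer Exchange between checks
    $Percentage = ($request | Get-MailboxExportRequestStatistics).PercentComplete
    Write-Progress -Activity "Mailbox export is in progress." -Status "Export $Percentage% complete" -PercentComplete $Percentage
} while ($Percentage -ne 100)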

Execute code if a PowerShell script is terminated

Is it possible to force the execution of some code if a PowerShell script is forcefully terminated? I have tried try..finally and Traps, but neither seems to work, at least when I press Ctrl-C in PowerShell ISE.
Basically, I have a Jenkins build that executes a PowerShell script. If for any reason I want to stop the build from within Jenkins, I don't want any subprocess to keep files locked, leaving my build project in a broken state until an admin manually kills the offending processes (nunit-agent.exe in my case). So I want to be able to force the execution of code that terminates nunit-agent.exe if this happens.
UPDATE: As @Frode suggested below, I tried to use try..finally:
$sleep = {
    try {
        Write-Output "In the try block of the job."
        Start-Sleep -Seconds 10
    }
    finally {
        Write-Output "In the finally block of the job."
    }
}
try {
    $sleepJob = Start-Job -ScriptBlock $sleep
    Start-Sleep -Seconds 5
}
finally {
    Write-Output "In the finally block of the script."
    Stop-Job $sleepJob
    Write-Output "Receiving the output from the job:"
    $content = Receive-Job $sleepJob
    Write-Output $content
}
Then, when I executed this and broke the process using Ctrl-C, I got no output. What I thought I should get is:
In the finally block of the script.
Receiving the output from the job:
In the try block of the job.
In the finally block of the job.
I use try {} finally {} for this. The finally block runs when the try block is done or if you press Ctrl+C, so you need to run commands that are safe to run either way, e.g. it doesn't matter if you kill a process that's already dead.
Or you could add a test to see if the last command was a success using $?, e.g.:
try {
    Write-Host "Working"
    Start-Sleep -Seconds 100
} finally {
    if (-not $?) { Write-Host "Cleanup on aisle 5" }
    Write-Host "Done"
}
Or create your own test (just in case the last command in try failed for some reason):
try {
    $IsDone = $false
    Write-Host "Working"
    Start-Sleep -Seconds 100
    #.....
    $IsDone = $true
} finally {
    if (-not $IsDone) { Write-Host "Cleanup on aisle 5" }
    Write-Host "Done"
}
UPDATE: The finally block will not work for output as the pipeline is stopped on CTRL+C.
Note that pressing CTRL+C stops the pipeline. Objects that are sent to the pipeline will not be displayed as output. Therefore, if you include a statement to be displayed, such as "Finally block has run", it will not be displayed after you press CTRL+C, even if the Finally block ran.
Source: about_Try_Catch_Finally
However, if you save the output from Receive-Job to a global variable, like $global:content = Receive-Job $sleepJob, you can read it after the finally block. Otherwise the variable is created in a local scope and is lost once the finally block finishes.
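A sketch of that change applied to the question's outer try/finally (everything else stays the same; $sleep is the script block from the question):
try {
    $sleepJob = Start-Job -ScriptBlock $sleep
    Start-Sleep -Seconds 5
}
finally {
    Stop-Job $sleepJob
    # Store the job output in the global scope so it survives the stopped pipeline
    $global:content = Receive-Job $sleepJob
}
# After the script stops (including via Ctrl+C), inspect the captured output:
$global:content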