Azure Runbook query running jobs - PowerShell

Azure PowerShell runbook, scheduled to run every hour.
The script runs "Invoke-AzVMRunCommand" to execute a PowerShell script locally on a remote VM.
The problem -
sometimes a run takes longer than an hour and overlaps with the next scheduled run, and the second run fails with an "Invoke-AzVMRunCommand" error:
"Run command extension execution is in progress. Please wait for completion before invoking a run command."
The question - how can I query whether a job for this runbook is currently running?
We cannot change the schedule.
Thank you!

This one does the check:
$jobs = Get-AzAutomationJob -ResourceGroupName $rgName -AutomationAccountName $aaName -RunbookName $runbook
$runningCount = ($jobs | Where-Object { $_.Status -eq "Running" }).Count

# The current job is itself "Running", so more than one running job
# (or a queued "New" job) means another instance is active.
if (($jobs.Status -contains "Running" -and $runningCount -gt 1) -or ($jobs.Status -contains "New"))
{
    Write-Output "`n This runbook [$runbook] execution is stopped - there is another job currently running. Execution will start as per schedule next hour."
    Exit 1
}
else
{
    Write-Output "`n Let's proceed with runbook [$runbook] execution - there are no interfering jobs currently running." | Out-File -FilePath StarStop.txt -Append
} #end of check runbook status

You may use the Az PowerShell cmdlet Get-AzAutomationJob to check the status of the job. Based on that status, you may also decide to remove or update the existing schedule using the schedule-related Az.Automation cmdlets.
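Inside an Azure Automation runbook, the current job's own ID is available via $PSPrivateMetadata, which lets the check exclude itself rather than relying on counting running jobs. A minimal sketch (assumes the script runs inside Azure Automation, where $PSPrivateMetadata.JobId is populated):

# Sketch: skip execution if another job of this runbook is already active.
# $PSPrivateMetadata.JobId identifies the *current* job inside Azure Automation.
$currentJobId = $PSPrivateMetadata.JobId.Guid

$otherActiveJobs = Get-AzAutomationJob -ResourceGroupName $rgName `
    -AutomationAccountName $aaName -RunbookName $runbook |
    Where-Object { $_.Status -in 'Running', 'New', 'Activating' -and $_.JobId -ne $currentJobId }

if ($otherActiveJobs) {
    Write-Output "Another job for [$runbook] is active; exiting."
    exit 1
}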


Powershell - get string information to a console for user from one script to another

I have two scripts: one serves as a Worker, which uses variables to complete commands, and the second serves as a function Library. The Worker loads the Library via a variable path and uses the functions from there.
As a user, when I run the script, I would like to see in the console the output I defined in the Library script.
Example Worker script:
Param(
    [string]$server_01,
    [string]$releaseDefinitionName,
    [string]$pathRelease,
    [string]$buildNumber,
    [string]$command_01,
    [string]$scheduledTask_01
)
$pathScriptLibrary = $pathRelease + "\" + $buildNumber + "\" + "_scripts"
. "$pathScriptLibrary\_library.ps1"
$user = xxx
$password = xxx
$cred = xxx
Invoke-Command -ComputerName $server_01 -Credential $cred -ErrorAction Stop -ScriptBlock {powershell $command_01}
Example Library script:
function Stop-ScheduledTasks {
    Write-Output "[INFO]: Stopping scheduled tasks..." -ForegroundColor White
    Get-ScheduledTask -TaskName "$scheduledTask_01" | ForEach {
        if ($_.State -eq "Ready") {
            Write-Output "[WARNING]: Scheduled task $scheduledTask_01 was already stopped." -ForegroundColor Yellow
        }
        else {
            Stop-ScheduledTask -TaskName "$scheduledTask_01"
            Write-Output "[OK]: Running task $scheduledTask_01 stopped." -ForegroundColor Green
        }
    }
}
function Start-ScheduledTasks {
    Write-Output "[INFO]: Starting scheduled tasks..." -ForegroundColor White
    Get-ScheduledTask -TaskName "$scheduledTask_01" | ForEach {
        if ($_.State -eq "Running") {
            Write-Output "[WARNING]: Scheduled task $scheduledTask_01 already started." -ForegroundColor Yellow
        }
        else {
            Start-ScheduledTask -TaskName "$scheduledTask_01"
            Write-Output "[OK]: Stopped scheduled task $scheduledTask_01 started." -ForegroundColor Green
        }
    }
}
Use case:
1. User starts the deployment by clicking the Deploy button in the Azure DevOps UI.
2. The task using the Worker script takes a function from the Library script (in this case it stops a scheduled task) and performs it.
3. User checks the log on the Azure DevOps side and sees the custom output lines from the Library script (two of them for now: one starting with [INFO], the second starting with either [WARNING] or [OK]).
Could you please advise a solution for how to achieve that? Thank you.
NOTE: These examples are run in Azure DevOps (on-premises) release pipelines, and the desired outcomes are meant for the users running those pipelines.
If you're trying to write to the Azure DevOps pipeline log, then you should avoid using Write-Output. It does something subtly different: it adds to the function's return value.
So, for example, the Write-Output in the function Stop-ScheduledTasks is roughly equivalent to putting this at the end of the function:
return "[WARNING]: Scheduled task $scheduledTask_01 was already stopped."
That might end up being printed to the pipeline log, or it might not; and importantly, it can completely mess up a function which is genuinely trying to return a simple value.
Instead of using Write-Output, I recommend using Write-Host. It immediately writes a line to the pipeline log, without affecting what a library function returns:
Write-Host "[WARNING]: Scheduled task $scheduledTask_01 was already stopped."
You can also use Write-Warning and Write-Error.
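Applied to the question's Library script, Stop-ScheduledTasks might look like this (a sketch; it also keeps the color, since -ForegroundColor is a Write-Host parameter that Write-Output does not support):

function Stop-ScheduledTasks {
    Write-Host "[INFO]: Stopping scheduled tasks..." -ForegroundColor White
    Get-ScheduledTask -TaskName "$scheduledTask_01" | ForEach-Object {
        if ($_.State -eq "Ready") {
            # Write-Host goes straight to the pipeline log; nothing is returned.
            Write-Host "[WARNING]: Scheduled task $scheduledTask_01 was already stopped." -ForegroundColor Yellow
        }
        else {
            Stop-ScheduledTask -TaskName "$scheduledTask_01"
            Write-Host "[OK]: Running task $scheduledTask_01 stopped." -ForegroundColor Green
        }
    }
}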

How to prevent multiple instances of the same PowerShell 7 script?

Context
On a build server, a PowerShell 7 script script.ps1 will be started and will run in the background on the remote computer.
What I want
A safety net to ensure that at most one instance of the script.ps1 script is running at once on the build server or remote computer, at all times.
What I tried:
I tried meddling with PowerShell 7 background jobs (by executing script.ps1 as a job inside a wrapper script wrapper.ps1), but that didn't solve the problem, as jobs do not carry over to (and can't be accessed from) other PowerShell sessions.
What I tried looks like this:
# inside wrapper.ps1
$running_jobs = $(Get-Job -State Running) | Where-Object { $_.Name -eq "ImportantJob" }
if ($running_jobs.Count -eq 0) {
    Start-Job .\script.ps1 -Name "ImportantJob" -ArgumentList @($some_variables)
} else {
    Write-Warning "Could not start new job; Existing job detected must be terminated beforehand."
}
To reiterate, the problem with that is that $running_jobs only returns the jobs running in the current session, so this code only limits one job per session, allowing multiple instances to be run if multiple sessions are mistakenly opened.
What I also tried:
I tried to look into Get-CimInstance:
$processes = Get-CimInstance -ClassName Win32_Process | Where-Object {$_.Name -eq "pwsh.exe"}
While this does return the currently running PowerShell instances, these elements carry no information about the script that is being executed, as shown after I run:
foreach ($p in $processes) {
    $p | Format-List *
}
I'm therefore lost and I feel like I'm missing something.
I appreciate any help or suggestions.
I like to define a config path under $env:ProgramData using a CompanyName\ProjectName scheme so I can put "per system" configuration there.
You could use a similar scheme with a defined location to store a lock file that is created when the script runs and deleted at the end of it (as already suggested in the comments).
Then it is up to you to add additional checks if needed (what happens if the script exits prematurely while the lock file is still present?).
Example
# Define default path (Not user specific)
$ConfigLocation = "$Env:ProgramData\CompanyName\ProjectName"
# Create path if it does not exist
New-Item -ItemType Directory -Path $ConfigLocation -EA 0 | Out-Null
$LockFilePath = "$ConfigLocation\Instance.Lock"
$Locked = $null -ne (Get-Item -Path $LockFilePath -EA 0)
if ($Locked) {Exit}
# Lock
New-Item -Path $LockFilePath
# Do stuff
# Remove lock
Remove-Item -Path $LockFilePath
Alternatively, on Windows, you could also use a scheduled task without a schedule and with the setting "If the task is already running, then the following rule applies: Do not start a new instance". From there, instead of calling the original script, you call a proxy script that just launches the scheduled task.
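If a stale lock file is a concern, another option (my suggestion, not part of the original answer) is a named system-wide mutex, which the OS releases automatically when the owning process exits, even after a crash. Note this only guards a single machine:

# Sketch: single-instance guard using a named mutex.
# The "Global\" prefix makes it visible across all sessions on the machine.
$mutex = [System.Threading.Mutex]::new($false, 'Global\CompanyName.ProjectName.ScriptPs1')
if (-not $mutex.WaitOne(0)) {
    Write-Warning 'Another instance is already running.'
    exit
}
try {
    # Do stuff
}
finally {
    $mutex.ReleaseMutex()
    $mutex.Dispose()
}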

Azure DevOps Azure CLI task with PowerShell script and parallel ForEach-Object execution: no output on failure

In order to scale Function Apps quickly, we want to be able to deploy them via IaC and then deploy a code package onto them. Unfortunately this is not possible dynamically with YAML pipelines in Azure DevOps, so I had to resort to the Azure CLI.
Below you see the PowerShell script I came up with to deploy the code into the pool of Function Apps that I deployed through Terraform beforehand. To speed things up, I turned on parallel processing of the ForEach-Object loop, since there are no dependencies between the single instances. This works fine to a certain extent, but I am having trouble due to the quirkiness of the Azure CLI: writing non-error information to StdErr seems to be by design. Combined with some other strange behavior, this leads to the following scenarios:
Running sequentially usually works flawlessly, and I see any error output if a problem occurs. I also don't need to set powerShellErrorActionPreference: 'continue'. This, of course, slows down the deployment significantly.
Running in parallel always fails without setting powerShellErrorActionPreference: 'continue', and the reason for the failure is not output to the console. This seems to happen even if no real error occurs, as with continue there is no error output to the console either. This wouldn't be an issue if the pipeline failed in the case of a real error (which should be handled by checking the state of the ChildJobs), but it doesn't.
So here I am between a rock and a hard place. Does anyone see the flaw in my implementation? Any suggestions are highly appreciated.
- task: AzureCLI@2
  displayName: 'Functions deployment'
  env:
    AZURE_CORE_ONLY_SHOW_ERRORS: 'True'
    AZURE_DEVOPS_EXT_PAT: $(System.AccessToken)
    ARM_CLIENT_ID: $(AzureApplicationId)
    ARM_CLIENT_SECRET: $(AzureApplicationSecret)
    ARM_SUBSCRIPTION_ID: $(AzureSubscriptionId)
    ARM_TENANT_ID: $(AzureTenantId)
  inputs:
    azureSubscription: 'MySubscription'
    scriptType: 'pscore'
    scriptLocation: 'inlineScript'
    inlineScript: |
      Write-Output -InputObject "INFO: Get Function App names"
      $appNames = terragrunt output -json all_functionapp_names | ConvertFrom-Json
      Write-Output -InputObject "INFO: Loop over Function Apps"
      $jobs = $appNames | ForEach-Object -Parallel {
        $name = $_
        try
        {
          Write-Output -InputObject "INFO: $name`: start slot"
          az functionapp start --resource-group $(ResourceGroup) --name "$name" --slot Stage --verbose
          Write-Output -InputObject "INFO: $name`: deploy into slot"
          az functionapp deploy --resource-group $(ResourceGroup) --name "$name" --slot Stage --src-path "$(System.ArtifactsDirectory)/drop/MyCodePackage.zip" --type zip --verbose
          Write-Output -InputObject "INFO: $name`: deploy app settings"
          az functionapp config appsettings set --resource-group $(ResourceGroup) --name "$name" --slot Stage --settings "@$(Build.ArtifactStagingDirectory)/appsettings.json" --verbose
          Write-Output -InputObject "INFO: $name`: swap slot with production"
          az functionapp deployment slot swap --resource-group $(ResourceGroup) --name "$name" --slot Stage --action swap --verbose
        }
        catch
        {
          Write-Output -InputObject "ERROR: $name`: An error occurred during deployment"
          Write-Output -InputObject ($_.Exception | Format-List -Force)
        }
        finally
        {
          try
          {
            Write-Output -InputObject "INFO: $name`: stop slot"
            az functionapp stop --resource-group $(ResourceGroup) --name "$name" --slot Stage --verbose
          }
          catch
          {
            Write-Output -InputObject "ERROR: $name`: could not stop slot"
          }
        }
      } -AsJob
      [int]$pollingInterval = 10
      [int]$elapsedSeconds = 0
      while ($jobs.State -eq "Running") {
        $jobs.ChildJobs | ForEach-Object {
          Write-Output -InputObject "---------------------------------"
          Write-Output -InputObject "INFO: $($_.Name) output [$($elapsedSeconds)s]"
          Write-Output -InputObject "---------------------------------"
          $_ | Receive-Job
          Write-Output -InputObject "---------------------------------"
          Write-Output -InputObject ""
        }
        $elapsedSeconds += $pollingInterval
        [Threading.Thread]::Sleep($pollingInterval * 1000)
      }
      $jobs.ChildJobs | Where-Object { $_.JobStateInfo.State -eq "Failed" } | ForEach-Object {
        Write-Output -InputObject "ERROR: At least one of the deployments failed with the following reason:"
        Write-Output -InputObject $_.JobStateInfo.Reason
      }
      if ($jobs.State -eq "Failed")
      {
        exit 1
      }
      else
      {
        exit 0
      }
    powerShellErrorActionPreference: 'continue'
    workingDirectory: './infrastructure/environments/$(TerraFormEnvironmentName)'
Edit 1
To get all output from the ChildJobs, I had to alter the code like so:
[int]$pollingInterval = 10
[int]$elapsedSeconds = 0
$lastResultsRead = $false
while ($jobs.State -eq "Running" -or !$lastResultsRead)
{
    # After the job leaves the "Running" state, loop one last time to
    # collect any remaining output before exiting.
    $lastResultsRead = $jobs.State -ne "Running"
    $jobs.ChildJobs | ForEach-Object {
        Write-Output -InputObject "---------------------------------"
        Write-Output -InputObject "INFO: $($_.Name) output [$($elapsedSeconds)s]"
        Write-Output -InputObject "---------------------------------"
        $_ | Receive-Job
        Write-Output -InputObject "---------------------------------"
        Write-Output -InputObject ""
    }
    $elapsedSeconds += $pollingInterval
    if (!$lastResultsRead)
    {
        [Threading.Thread]::Sleep($pollingInterval * 1000)
    }
}
Hope this helps everyone who wants to achieve something similar.
So it seems that the mystery is solved.
TL;DR
If you want proper error handling, remove --verbose from all Azure CLI calls, as the verbose output is always written to StdErr, even when setting the environment variable AZURE_CORE_ONLY_SHOW_ERRORS.
Explanation
I stumbled over the solution while adding an unrelated feature to this script and noticed that in certain situations the last output of the ChildJobs was not being collected. I initially took that for a quirk of the Azure DevOps task, but discovered that it also happens when I debug the output locally in VSCode.
That led me to add another condition to the while loop to ensure I get the final output; I have updated the script in my initial post accordingly (see Edit 1). Finally equipped with the whole picture of what is going on in the ChildJobs, I set up a separate test pipeline where I ran different test cases to find the culprit. Soon enough I noticed that taking away --verbose prevents the task from failing. This happened whether AZURE_CORE_ONLY_SHOW_ERRORS was set or not. So I gave the --only-show-errors option a go, which should have the same effect as the environment variable, though only for a single Azure CLI call. With the full output now at my disposal, I could finally see the message that --verbose and --only-show-errors can't be used in conjunction. That settled it: --verbose had to go. All it adds is the information of how long the command ran anyway; I think we can do without it.
On an additional side note: at the same time I discovered that ForEach-Object -Parallel {} -AsJob makes heavy use of PowerShell runspaces, which means it cannot be debugged from within VSCode in the typical way. I found a video that might help in situations like this: https://www.youtube.com/watch?v=O-dksknPQBw
I hope this answer helps others who stumble over the same strange behavior. Happy coding.
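A further hedge, not part of the original answer: since az is a native executable, the try/catch blocks in the script will not catch its failures by themselves, because native commands report errors through exit codes rather than exceptions. A small wrapper that checks $LASTEXITCODE can bridge that gap (a sketch; Invoke-Az is a hypothetical helper name):

# Sketch: make a native az call participate in try/catch by
# converting a nonzero exit code into a PowerShell exception.
function Invoke-Az {
    param([Parameter(Mandatory)][string[]]$Arguments)
    az @Arguments
    if ($LASTEXITCODE -ne 0) {
        throw "az $($Arguments -join ' ') failed with exit code $LASTEXITCODE"
    }
}

# Hypothetical usage inside the parallel block:
# Invoke-Az -Arguments @('functionapp', 'start', '--resource-group', $rg, '--name', $name, '--slot', 'Stage')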

Do threads still execute using -asjob with wait-job?

Hello all and good afternoon!
I had a quick question regarding -AsJob running with Invoke-Command.
If I run two Invoke-Command calls using -AsJob, do they run simultaneously when I try to receive the output? Does this mean Wait-Job waits till the first job specified is finished running before getting the next results?
Write-Host "Searching for PST and OST files. Please be patient!" -BackgroundColor White -ForegroundColor DarkBlue
$pSTlocation = Invoke-Command -ComputerName localhost -ScriptBlock {Get-Childitem "C:\" -Recurse -Filter "*.pst" -ErrorAction SilentlyContinue | % {Write-Host $_.FullName,$_.lastwritetime}} -AsJob
$OSTlocation = Invoke-Command -ComputerName localhost -ScriptBlock {Get-Childitem "C:\Users\me\APpdata" -Recurse -Filter "*.ost" -ErrorAction SilentlyContinue | % {Write-Host $_.FullName,$_.lastwritetime} } -AsJob
$pSTlocation | Wait-Job | Receive-Job
$OSTlocation | Wait-Job | Receive-Job
Also, another question: can I save the output of the jobs to a variable without it showing in the console? I'm trying to make it check whether there's any return; if there is, output it, but if there's not, do something else.
I tried:
$job1 = $pSTlocation | Wait-Job | Receive-Job
if(!$job1){write-host "PST Found: $job1"} else{ "No PST Found"}
$job2 = $OSTlocation | Wait-Job | Receive-Job
if(!$job2){write-host "OST Found: $job2"} else{ "No OST Found"}
No luck, it outputs the following:
Note: This answer does not directly answer the question - see the other answer for that; instead, it shows a reusable idiom for waiting for multiple jobs to finish in a non-blocking fashion.
The following sample code uses the child-process-based Start-Job cmdlet to create local jobs, but the solution works equally with local thread-based jobs created by Start-ThreadJob, as well as jobs based on remotely executing Invoke-Command -ComputerName ... -AsJob commands, as used in the question.
The idiom allows for other activity while waiting, and collects per-job output in an array.
Here, the output is only collected after each job completes, but note that collecting it piecemeal, as it becomes available, is also an option, using (potentially multiple) Receive-Job calls even before a job finishes.
# Start two jobs, which run in parallel, and store the objects
# representing them in array $jobs.
# Replace the Start-Job calls with your
#   Invoke-Command -ComputerName ... -AsJob
# calls.
$jobs = (Start-Job { Get-Date; sleep 1 }),
        (Start-Job { Get-Date '1970-01-01'; sleep 2 })

# Initialize a helper array to keep track of which jobs haven't finished yet.
$remainingJobs = $jobs

# Wait iteratively *without blocking* until any job finishes, and receive and
# output its output, until all jobs have finished.
# Collect all results in $jobResults.
$jobResults =
  while ($remainingJobs) {
    # Check if at least 1 job has terminated.
    if ($finishedJob = $remainingJobs | Where State -in Completed, Failed, Stopped, Disconnected | Select -First 1) {
      # Output the just-finished job's results as part of a custom object
      # that also contains the original command and the
      # specific termination state.
      [pscustomobject] @{
        Job    = $finishedJob.Command
        State  = $finishedJob.State
        Result = $finishedJob | Receive-Job
      }
      # Remove the just-finished job from the array of remaining ones...
      $remainingJobs = @($remainingJobs) -ne $finishedJob
      # ... and also as a job managed by PowerShell.
      Remove-Job $finishedJob
    } else {
      # Do other things...
      Write-Host . -NoNewline
      Start-Sleep -Milliseconds 500
    }
  }

# Output the jobs' results
$jobResults
Note:
It's tempting to try $remainingJobs | Wait-Job -Any -Timeout 0 to momentarily check for termination of any one job without blocking execution, but as of PowerShell 7.1 this doesn't work as expected: even already completed jobs are never returned - this appears to be a bug, discussed in GitHub issue #14675.
If I run 2 Invoke-Command's using -asjob, does it run simultaneously when I try to receive the output?
Yes, PowerShell jobs always run in parallel, whether they're executing remotely, as in your case (with Invoke-Command -AsJob, assuming that localhost in the question is just a placeholder for the actual name of a different computer), or locally (using Start-Job or Start-ThreadJob).
However, by using (separate) Wait-Job calls, you are synchronously waiting for each job to finish (in a fixed sequence, too). That is, each Wait-Job call blocks further execution until the target job terminates.[1]
Note, however, that both jobs continue to execute while you're waiting for the first one to finish.
If, instead of waiting in a blocking fashion, you want to perform other operations while you wait for both jobs to finish, you need a different approach, detailed in the other answer.
can i save the output of the jobs to a variable without it showing to the console?
Yes, but the problem is that in your remotely executing script block ({ ... }) you're mistakenly using Write-Host in an attempt to output data.
Write-Host is typically the wrong tool to use, unless the intent is to write to the display only, bypassing the success output stream and with it the ability to send output to other commands, capture it in a variable, or redirect it to a file. To output a value, use it by itself; e.g., $value instead of Write-Host $value (or use Write-Output $value, though that is rarely needed); see this answer.
Therefore, your attempt to collect the job's output in a variable failed, because the Write-Host output bypassed the success output stream that variable assignments capture and went straight to the host (console):
# Because the job's script block uses Write-Host, its output goes to the *console*,
# and nothing is captured in $job1
$job1 = $pSTlocation | Wait-Job | Receive-Job
(Incidentally, the command could be simplified to $job1 = $pSTlocation | Receive-Job -Wait.)
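For example, the question's first job could emit objects instead of Write-Host text, so the results become capturable (a sketch of the corrected call):

# Emit objects (not Write-Host text) so Receive-Job output can be captured.
$pSTlocation = Invoke-Command -ComputerName localhost -AsJob -ScriptBlock {
    Get-ChildItem "C:\" -Recurse -Filter "*.pst" -ErrorAction SilentlyContinue |
        Select-Object FullName, LastWriteTime
}

$job1 = $pSTlocation | Receive-Job -Wait
if ($job1) { $job1 } else { "No PST Found" }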
[1] Note that Wait-Job has an optional -Timeout parameter, which allows you to limit waiting to at most a given number of seconds and return without output if the target job hasn't finished yet. However, as of PowerShell 7.1, -Timeout 0 for non-blocking polling for whether jobs have finished does not work - see GitHub issue #14675.

Progress bar for `New-MailboxExportRequest`

I'm trying to create a script that can export a user's mailbox to a PST, remotely (the Exchange Server 2010 console is installed on the server we're running this from, and the module is loaded correctly). It's being done using a script so our L2 admins do not have to perform the task manually. Here's the MWE.
$UserID = Read-Host "Enter username"
$PstDestination = "\\ExServer\Share\$UserID.pst"
$Date = Get-Date -Format "yyyyMMddhhmmss"
$ExportName = "$UserID" + "$Date"
try {
    New-MailboxExportRequest -Mailbox $UserID -FilePath $PstDestination -Name $ExportName -ErrorAction Stop -WarningAction SilentlyContinue | Out-Null
    # Loop through the process to track its status and write progress
    do {
        $Percentage = (Get-MailboxExportRequest -Name $ExportName | Get-MailboxExportRequestStatistics).PercentComplete
        Write-Progress "Mailbox export is in progress." -Status "Export $Percentage% complete" -PercentComplete "$Percentage"
    }
    while ($Percentage -ne 100)
    Write-Output "$UserID`'s mailbox has been successfully exported. The archive can be found at $PstDestination."
}
catch {
    Write-Output "There was an error exporting the mailbox. The process was aborted."
}
The problem is, as soon as we initiate the export, the task gets Queued. Sometimes the export remains queued for a very long time, and the script is currently unable to figure out when the task begins; when it does, it is unable to display the progress correctly. The export happens in the background, but the script remains stuck there, so anything after the export does not get executed, and the whole thing then has to be done manually.
Please suggest a way to handle this.
I tried adding a wait timer and then a check to see if the export had begun. It didn't quite work as expected.
Two things. The first is more about performance: avoid hammering Exchange with unnecessary requests in the do/while loop. A Start-Sleep -Seconds 1 (or any other delay that makes sense depending on the mailbox size(s)) inside the loop is a must.
Second: rather than waiting for the request to start, just resume it yourself:
# $request is assumed to hold the export request, e.g.:
# $request = Get-MailboxExportRequest -Name $ExportName
if ($request.Status -eq 'Queued') {
    $request | Resume-MailboxExportRequest
}
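Putting both suggestions together with the question's variables, the polling loop might look like this (a sketch, not tested against Exchange 2010; it also guards against a $null PercentComplete while the request is still queued):

do {
    Start-Sleep -Seconds 1   # don't hammer Exchange with back-to-back requests

    $request = Get-MailboxExportRequest -Name $ExportName
    if ($request.Status -eq 'Queued') {
        # Nudge a queued request instead of waiting for it to start on its own.
        $request | Resume-MailboxExportRequest
    }

    $Percentage = ($request | Get-MailboxExportRequestStatistics).PercentComplete
    if ($null -ne $Percentage) {
        Write-Progress -Activity "Mailbox export is in progress." `
            -Status "Export $Percentage% complete" -PercentComplete $Percentage
    }
}
while ($request.Status -ne 'Completed' -and $request.Status -ne 'Failed')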