Is it possible to force the execution of some code if a PowerShell script is forcefully terminated? I have tried try..finally and traps, but neither seems to work, at least when I press Ctrl-C in PowerShell ISE.
Basically, I have a Jenkins build that executes a PowerShell script. If for any reason I stop the build from within Jenkins, I don't want any subprocess to keep files locked, leaving my build project in a broken state until an admin manually kills the offending processes (nunit-agent.exe in my case). So I want to be able to force the execution of code that terminates nunit-agent.exe if this happens.
UPDATE: As @Frode suggested below, I tried to use try..finally:
$sleep = {
    try {
        Write-Output "In the try block of the job."
        Start-Sleep -Seconds 10
    }
    finally {
        Write-Output "In the finally block of the job."
    }
}

try {
    $sleepJob = Start-Job -ScriptBlock $sleep
    Start-Sleep -Seconds 5
}
finally {
    Write-Output "In the finally block of the script."
    Stop-Job $sleepJob
    Write-Output "Receiving the output from the job:"
    $content = Receive-Job $sleepJob
    Write-Output $content
}
Then, when I executed this and interrupted it with Ctrl-C, I got no output. I expected the output to be:
In the finally block of the script.
Receiving the output from the job:
In the try block of the job.
In the finally block of the job.
I use try {} finally {} for this. The finally block runs when the try block finishes or when you press Ctrl+C, so you either need to run commands that are safe to run in both cases; for example, it doesn't matter if you kill a process that's already dead.
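For the nunit-agent.exe case from the question, such always-safe cleanup could be a finally block that kills any leftover agent processes. A minimal sketch (illustrative only, not tested against the asker's Jenkins setup):

try {
    # ... build steps that may spawn nunit-agent.exe ...
}
finally {
    # Safe whether or not the script was interrupted: Get-Process returns
    # nothing if no such process exists, so there is nothing to stop.
    Get-Process -Name 'nunit-agent' -ErrorAction SilentlyContinue |
        Stop-Process -Force -ErrorAction SilentlyContinue
}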
Or you could add a test to see whether the last command succeeded, using $?, e.g.:
try {
    Write-Host "Working"
    Start-Sleep -Seconds 100
} finally {
    if (-not $?) { Write-Host "Cleanup on aisle 5" }
    Write-Host "Done"
}
Or create your own test (just in case the last command in try failed for some reason):
try {
    $IsDone = $false
    Write-Host "Working"
    Start-Sleep -Seconds 100
    #.....
    $IsDone = $true
} finally {
    if (-not $IsDone) { Write-Host "Cleanup on aisle 5" }
    Write-Host "Done"
}
UPDATE: The finally block will not produce output, because the pipeline is stopped on Ctrl+C.
Note that pressing CTRL+C stops the pipeline. Objects that are sent to
the pipeline will not be displayed as output. Therefore, if you
include a statement to be displayed, such as "Finally block has run",
it will not be displayed after you press CTRL+C, even if the Finally
block ran.
Source: about_Try_Catch_Finally
However, if you save the output from Receive-Job to a global variable, e.g. $global:content = Receive-Job $sleepJob, you can read it after the finally block has run. Otherwise the variable is created in a local scope and is lost once the finally block finishes.
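Applied to the job script from the question, a sketch of that change (only the assignment differs) could look like this:

try {
    $sleepJob = Start-Job -ScriptBlock $sleep
    Start-Sleep -Seconds 5
}
finally {
    Stop-Job $sleepJob
    # Assign to the global scope so the result survives the stopped pipeline:
    $global:content = Receive-Job $sleepJob
}
# Afterwards, at the interactive prompt, inspect:  $global:content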
Related
I have a Windows console app that processes some files; at the end of the run, if successful, it starts a Windows service and prints the output "xxx service is now running, press control_c to exit".
The console app reads some parameters from a config file, and I need to re-run it multiple times, changing the parameters in the config file each time. To do this manually, I'd do the following:
change the config file
run the app from PowerShell
wait for the message above to appear
press Ctrl+C to terminate
change the config file and run again
I thought it would make sense to automate this in a PS script to which I can just pass the config values for all the runs; the script then loops through the values, edits the config file and runs the exe.
The issue is that the loop gets "stuck" on the first run, because the application is waiting for Ctrl+C and so never progresses through the loop.
What I have at the moment looks like this:
foreach ($dt in $datesarr)
{
    ## edit config values with stuff in $dt
    $output = (<path to app here>)
    while ($output[-1] -notlike "*Control-C*")
    {
        Start-Sleep -Seconds 10
    }
}
The problem is that the script never reaches the while loop, as it's stuck after running the app, waiting for Ctrl+C... What I want it to do is launch the app, wait for it to reach the Ctrl+C prompt, then exit the loop and pick up the next value of the parameter.
Any thoughts would be hugely appreciated!
Try the following approach, which is based on direct use of the following, closely related .NET APIs:
System.Diagnostics.ProcessStartInfo
System.Diagnostics.Process
Instead of trying to programmatically press Ctrl-C, the process associated with the external program is simply killed (terminated).
# Remove the next line if you don't want to see verbose output.
$VerbosePreference = 'Continue'
$psi = [System.Diagnostics.ProcessStartInfo] @{
    UseShellExecute        = $false
    WindowStyle            = 'Hidden'
    FileName               = '<path to app here>'
    Arguments              = '<arguments string here>' # only if args must be passed
    RedirectStandardOutput = $true
    RedirectStandardError  = $true # optional - if you also want to examine stderr
}
Write-Verbose "Launching $($psi.FileName)..."
$ps = [System.Diagnostics.Process]::Start($psi)
Write-Verbose "Waiting for launched process $($ps.Id) to output the line of interest..."
$found = $false
while (
    -not $ps.HasExited -and
    -not ($found = ($line = $ps.StandardOutput.ReadLine()) -match 'Control-C')
) {
    Write-Verbose "Stdout line received: $line"
}
if ($found) {
    Write-Verbose "Line of interest received; terminating process $($ps.Id)."
    # Note: If the process has already terminated, this will be a no-op.
    #       .Kill() kills only the target process itself.
    #       In .NET Core / .NET 5+, you can use .Kill($true) to also
    #       kill descendants of the process, i.e. all processes launched
    #       by it, directly and via its children.
    $ps.Kill()
} else {
    Write-Error "Process $($ps.Id) terminated before producing the expected output."
}
$ps.Dispose()
I am writing a simple TCP/IP server using PowerShell. I notice that Ctrl-C cannot interrupt the AcceptTcpClient() call, although Ctrl-C works fine after the call. I have searched around, and nobody has reported a similar problem so far.
The problem can be reproduced with the following simple code. I am using Windows 10, fully patched, with the native PowerShell console, not PowerShell ISE.
$listener = New-Object System.Net.Sockets.TcpListener([System.Net.IPAddress]::Any, 4444)
$listener.Start()
Write-Host "listener started at port 4444"
$tcpConnection = $listener.AcceptTcpClient()
Write-Host "accepted a client"
This is what happens when I run it
ps1> .\test_ctrl_c.ps1
listener started at port 4444
(Ctrl-C doesn't work here)
After reading @mklement0's answer, I gave up on my original clean code and figured out a workaround. Now Ctrl-C can interrupt my program:
$listener = New-Object System.Net.Sockets.TcpListener([System.Net.IPAddress]::Any, 4444)
$listener.Start()
Write-Host "listener started at port 4444"
while ($true) {
    if ($listener.Pending()) {
        $tcpConnection = $listener.AcceptTcpClient()
        break
    }
    Start-Sleep -Milliseconds 1000
}
Write-Host "accepted a client"
Now Ctrl-C works
ps1> .\test_ctrl_c.ps1
listener started at port 4444
(Ctrl-C works here)
(As of PowerShell 7.0) Ctrl-C only works while PowerShell code is executing, not during execution of a .NET method.
Since most .NET method calls execute quickly, the problem doesn't usually surface.
See this GitHub issue for a discussion and background information.
As for possible workarounds:
The best approach - if possible - is the one shown in your own answer:
Run in a loop that periodically polls for a condition, sleeping between tries, and only invoke the method when the condition being met implies that the method will then execute quickly instead of blocking indefinitely.
If this is not an option (if there is no such condition you can test for), you can run the blocking method in a background job, so that it runs in a child process that can be terminated on demand by the caller; do note the limitations of this approach, however:
Background jobs are slow and resource-intensive, due to needing to run a new PowerShell instance in a hidden child process.
Since cross-process marshaling of inputs to and outputs from the job is necessary:
Inputs and output won't be live objects.
Complex objects (objects other than instances of primitive .NET types and a few well-known types) will be emulations of the original objects; in essence, objects with static copies of the property values, and no methods - see this answer for background information.
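As a quick, hypothetical illustration of that emulation (not part of the workaround itself), compare a process object retrieved directly with the same object returned from a job:

$live    = Get-Process -Id $PID                      # live System.Diagnostics.Process object
$fromJob = Start-Job { Get-Process -Id $using:PID } |
    Receive-Job -Wait -AutoRemoveJob                 # crosses the process boundary via serialization
$live.pstypenames[0]      # System.Diagnostics.Process
$fromJob.pstypenames[0]   # Deserialized.System.Diagnostics.Process
# The deserialized copy has static property values only; methods such as .Kill() are missing.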
Here's a simple demonstration of the background-job workaround:
# Start the long-running, blocking operation in a background job (child process).
$jb = Start-Job -ErrorAction Stop {
    # Simulate a long-running, blocking .NET method call.
    [Threading.Thread]::Sleep(5000)
    'Done.'
}
$completed = $false
try {
    Write-Host -ForegroundColor Yellow "Waiting for background job to finish. Press Ctrl-C to abort."
    # Note: The output collected won't be *live* objects, and with complex
    #       objects will be *emulations* of the original objects that have
    #       static copies of their property values and no methods.
    $output = Receive-Job -Wait -Job $jb
    $completed = $true
}
finally { # This block is called even when Ctrl-C has been pressed.
    if (-not $completed) { Write-Warning 'Aborting due to Ctrl-C.' }
    # Remove the background job.
    # * If it is still running and we got here due to Ctrl-C, -Force is needed
    #   to forcefully terminate it.
    # * Otherwise, normal job cleanup is performed.
    Remove-Job -Force $jb
    # If we got here due to Ctrl-C, execution stops here.
}
# Getting here means: Ctrl-C was *not* pressed.
# Show the output received from the job.
Write-Host -ForegroundColor Yellow "Job output received:"
$output
If you execute the above script and do not press Ctrl-C, you'll see the job's output after the "Job output received:" header. If you do press Ctrl-C, you'll see the warning from the finally block instead, and execution stops there.
I have been writing a script in PowerShell V4 on Windows 8.1 which makes use of background processes and events. I have found something which is a little strange. Rather than post the 2,500 lines or so of my script I have included a much shorter program which exhibits the odd behaviour. I expect it is something I am doing wrong but I cannot see what the problem is. The code is as follows:
# Scriptblocks to simulate the background task and the action to
# take when an event is raised
[scriptblock] $MyScript = {
    for ($i = 0; $i -lt 30; $i++)
    {
        [console]::writeline("This is a test - $i")
        start-sleep -m 500
    }
}

[scriptblock] $StateChanged = {
    [console]::writeline("The state changed")
}

# Create a runspace pool
$RunspacePool = [RunspaceFactory]::CreateRunspacePool(1, [int] $env:NUMBER_OF_PROCESSORS + 1)
$RunspacePool.ApartmentState = "MTA"
$RunspacePool.Open()

# Create and start the background task to run
$PS = [powershell]::Create()
[void] $PS.AddScript($MyScript)
$PS.RunspacePool = $RunspacePool
$Asyncresult = $PS.BeginInvoke()

# Register an interest in the InvocationStateChanged event for
# the background task. Should the event happen (which it will)
# run the $StateChanged scriptblock
Register-ObjectEvent -InputObject $PS -EventName InvocationStateChanged -Action $StateChanged

# The loop that simulates the main purpose of the script
[int] $j = 0
while ($PS.InvocationStateInfo.State -eq [System.Management.Automation.PSInvocationState]::Running)
{
    if ($j -eq 2)
    {
        [void] $PS.BeginStop($NULL, $NULL)
    }
    "Running: $j" | out-host
    sleep -m 400
    $j = $j + 1
}
sleep 10
Essentially, all it does is create a runspace to run a PowerShell scriptblock, and while that is running something else happens in the foreground. I simulate someone pressing a button or, for whatever reason, a BeginStop method being executed to stop the background process. That all works, and the background process duly stops. However, I have registered an event for the background PowerShell script which runs a scriptblock when the background task changes state. The strange thing is that the scriptblock gets invoked twice, and I cannot work out why.
Here is some output from running the script:
E:\Test Programs>powershell -file .\strange.ps1
This is a test - 0
Running: 0
This is a test - 1
Running: 1
The state changed
Running: 2
The state changed
E:\Test Programs>
As you can see it displays "The state changed" twice. They are a fraction of a second apart. I put a sleep 10 at the end to eliminate the possibility that it is the script stopping that is causing the second "The state changed" message.
If anyone can explain what is wrong I would be very grateful.
The InvocationStateChanged event is likely being raised for both the Running and Completed states.
If you change $StateChanged to also report $Sender.InvocationStateInfo.State ($Sender is one of the automatic variables available inside an event action), like this:
[scriptblock] $StateChanged = {
    [console]::writeline("The state changed to: $($Sender.InvocationStateInfo.State)")
}
Your output will probably look like:
This is a test - 0
Running: 0
This is a test - 1
Running: 1
The state changed to: Running
Running: 2
The state changed to: Completed
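If the two notifications turn out to be for different states (e.g. Running and a terminal state), one option, sketched here and not tested against your exact setup, is to filter inside the action and unregister after the first terminal notification:

[scriptblock] $StateChanged = {
    $state = $Sender.InvocationStateInfo.State
    if ($state -in 'Stopped', 'Completed', 'Failed') {
        [console]::writeline("The pipeline finished with state: $state")
        # Drop the subscription so any further notifications are ignored.
        Unregister-Event -SourceIdentifier $EventSubscriber.SourceIdentifier
    }
}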
I am very sorry that I didn't reply a year and a half ago as I should have done. For some reason I never had a notification that there was an answer and, to be honest, I just forgot to check.
Anyway, thanks for the answer. The thing I was actually writing has since been dumped and rewritten, but I still have my little test script, and when I check with it, what I get is:
Running: 0
Running: 1
This is a test - 1
The state changed to: Stopped
The state changed to: Stopped
Running: 2
This is now on Windows 10.
Oh well, thanks for the suggestion anyway. As I said I've rewritten the original script so this is now just for interest.
Best wishes........
Colin
I'm trying to create a script that can export a user's mailbox to a PST, remotely (Exchange Server 2010 console is installed on the server we're running this from, and the module is loaded correctly). It's being done using a script so our L2 admins do not have to manually perform the task. Here's the MWE.
$UserID = Read-Host "Enter username"
$PstDestination = "\\ExServer\Share\$UserID.pst"
$Date = Get-Date -Format "yyyyMMddhhmmss"
$ExportName = "$UserID" + "$Date"
try {
    New-MailboxExportRequest -Mailbox $UserID -FilePath $PstDestination -Name $ExportName -ErrorAction Stop -WarningAction SilentlyContinue | Out-Null
    # Loop through the process to track its status and write progress
    do {
        $Percentage = (Get-MailboxExportRequest -Name $ExportName | Get-MailboxExportRequestStatistics).PercentComplete
        Write-Progress "Mailbox export is in progress." -Status "Export $Percentage% complete" -PercentComplete "$Percentage"
    }
    while ($Percentage -ne 100)
    Write-Output "$UserID`'s mailbox has been successfully exported. The archive can be found at $PstDestination."
}
catch {
    Write-Output "There was an error exporting the mailbox. The process was aborted."
}
The problem is that as soon as we initiate the export, the request gets Queued. Sometimes the export remains queued for a very long time; the script cannot tell when the task actually begins and, once it does, cannot display the progress correctly. The export happens in the background, but the script remains stuck in the loop, so nothing after the export gets executed and the rest has to be done manually.
Can you suggest a way to handle this?
I tried adding a wait timer and then a check to see if the export has begun. It didn't quite work as expected.
Two things. The first is about performance: don't hammer Exchange with unnecessary requests in the do/while loop. A Start-Sleep -Seconds 1 (or any other delay that makes sense given the mailbox sizes) inside the loop is a must.
Second: rather than waiting for the request to start, just resume it yourself:
if ($request.Status -eq 'Queued') {
    $request | Resume-MailboxExportRequest
}
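Putting the two suggestions together, a rough sketch of the revised loop from the question might look like this ($request is re-fetched with Get-MailboxExportRequest on each pass so that its Status is current):

New-MailboxExportRequest -Mailbox $UserID -FilePath $PstDestination -Name $ExportName -ErrorAction Stop | Out-Null
do {
    Start-Sleep -Seconds 5   # don't hammer Exchange with back-to-back status queries

    # Kick a queued request into action rather than waiting for it to start on its own.
    $request = Get-MailboxExportRequest -Name $ExportName
    if ($request.Status -eq 'Queued') {
        $request | Resume-MailboxExportRequest
    }

    $Percentage = ($request | Get-MailboxExportRequestStatistics).PercentComplete
    Write-Progress "Mailbox export is in progress." -Status "Export $Percentage% complete" -PercentComplete $Percentage
} while ($Percentage -ne 100)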
./backup.ps1
# Background job that reports progress every second; the caller polls its progress records.
$job = Start-Job {
    $i = 0
    while (1) { Write-Progress -Activity "Step $i"; $i++; Start-Sleep -Seconds 1 }
}
$c = 0
while ($job.State -eq 'Running' -and $c -lt 5) {
    $c++
    $progress = $job.ChildJobs[0].Progress       # progress records emitted by the job so far
    $progress | ForEach-Object { $_.StatusDescription }
    $progress.Clear()
    Start-Sleep 1
}
I have been trying to put together a script which waits for the backup to complete and then executes the next block of code, but I could not do it with Start-Job and Wait-Job. I found the above code, pasted it into my script and it worked, but as I am new to PowerShell I don't know exactly what this script is doing.
Can't you just pipe the script call to Out-Null, using this syntax?
& ./backup.ps1 | Out-Null
This will not move on to the next step until backup.ps1 is done and has returned.
Comment out everything after the ./backup.ps1 call that relates to waiting for the script call to finish.
You're just calling a PS script inside a PS script. If you want to wait for the first one to be done before you start your next task, add | Out-Null to the end.
Also, it's not going to show all the verbose output you have in your current setup, but it's quick, efficient, and much less code.
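Assuming backup.ps1 runs synchronously (which is what this answer relies on), the whole wait-for-the-script pattern collapses to something like this sketch:

# Run the backup script and block until it returns; Out-Null discards its output.
& ./backup.ps1 | Out-Null

# Anything placed here runs only after backup.ps1 has finished.
Write-Host "Backup finished - continuing with the next task."

Whatever currently follows the backup call (the job-polling loop above) can then be dropped.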