PowerShell Finally block skipped with Ctrl-C

I'm writing a monitoring script in PowerShell that uses Try/Finally to log a message when the script ends. The script is intended to run indefinitely, so I want a way to track unintended exits.
Every other StackOverflow post and Help page I've checked states:
A Finally block runs even if you use CTRL+C to stop the script. A Finally block also runs if an Exit keyword stops the script from within a Catch block.
In practice, I have not found this to be true. I'm using the following contrived example to test this out:
Try {
    While($True) {
        echo "looping"
        Start-Sleep 3
    }
} Finally {
    echo "goodbye!"
    pause
}
The Finally block here is skipped every time after a Ctrl+C (no echo, no pause), both when run as a saved script and when executed in the built-in PowerShell ISE. The only output I ever get is:
looping
looping
...(repeat until Ctrl-C)
I've clearly missed something, but I have no idea what it is, especially in a code snippet as small as this.

The proper answer is that Ctrl+C stops the pipeline, as that link also states, and echo (an alias for Write-Output) uses the pipeline to deliver its output. Once you press Ctrl+C, writing to the pipeline causes the script block to raise an error and stop processing any further commands. Therefore, avoid commands that send output to the pipeline in a Finally block; many commands use the pipeline indirectly. Write-Host, on the other hand, does not use the pipeline, and thus does not throw an error.

Functional Code
This will give you the behaviour I believe you're after:
Try {
    While($True) {
        echo "looping"
        Start-Sleep 3
    }
} Finally {
    Write-Host "goodbye!"
    pause
}
References
Write-Output/echo - Synopsis
Sends the specified objects to the next command in the pipeline. If the command is the last command in the pipeline, the objects are displayed in the console.
Write-Host - Synopsis
Writes customized output to a host.
Try-Catch-Finally - Syntax note
Note that pressing CTRL+C stops the pipeline. Objects that are sent to the pipeline will not be displayed as output. Therefore, if you include a statement to be displayed, such as "Finally block has run", it will not be displayed after you press CTRL+C, even if the Finally block ran.
Explanation
The key, as per TheIncorrigible1's comment and Vesper's answer, is that the pipeline is stopped. But this is not because of an error in Write-Output, and I don't find that a satisfying explanation on its own.
"If the command is the last command in the pipeline, the objects are displayed in the console." - this statement appears to be false within a Finally block after Ctrl+C. However, passing the output to Out-Host explicitly yields the desired result.
On the Try-Catch-Finally note
The quoted section is confusing, as it applies to unhandled objects sent to the pipeline.
Objects sent to the pipeline and handled within the Finally block are fine.
It says "even if the Finally block ran", but the pause does not run if preceded by a Write-Output.
More Code
A few things run in the Finally block to investigate the behaviour, with comments on what happens.
} Finally {
    Write-Output "goodbye!" | Out-Default # works fine
    pause
}
} Finally {
    Write-Output "goodbye!" | Out-Host # works fine
    pause
}
} Finally {
    pause # works fine
    Write-Output "goodbye!" # not executed
}
} Finally {
    try {
        Write-Output "goodbye!" -ErrorAction Stop
    } catch {
        Write-Host "error caught" # this is not executed.
    } # $error[0] after script execution is empty
    pause
}
} Finally {
    try {
        ThisCommandDoesNotExist
    } catch {
        Write-Host "error caught" # this is executed
    } # $error[0] contains CommandNotFoundException
    pause
}

Related

Another PowerShell function return value and Write-Output

This has been beaten to death, but I can't find an exact solution for my problem.
I have a PowerShell script that can be run from the command line or from a scheduled task. I'm using the following line
Write-Output "Updating user $account" | Tee-Object $logfile -Append
to write relevant information to the screen and a log file. I need both because when run from a command line, I can physically see what's going on but when run from a scheduled task, I have no visibility to its output hence the log file.
Thing is, I'm modifying my code to use functions, but as you might already know, Write-Output messes up the return values of functions when used within them.
What could I do that would do something similar to what I stated above without affecting the function's return value?
Thanks.
Just write to a log file. When running from the console, open another console and tail the log file.
Get-Content 'C:\path\to\the\logfile.txt' -Tail 10 -Wait
Assuming PowerShell version 5 or higher, where Write-Host writes to the information output stream (stream number 6), which doesn't interfere with the success output stream (stream number 1) and therefore doesn't pollute your function's data output:
The following is not a single command, but you could easily wrap this in a function:
Write-Host "Updating user $account" -iv msg; $msg >> $logfile
The above uses the common -InformationVariable (-iv) parameter to capture Write-Host's output in variable $msg (note how its name must be passed to -iv, i.e. without the leading $).
The message captured in $msg is then appended to file $logfile with >>, the appending redirection operator.
Note: >> is in effect an alias for Out-File -Append, and uses a fixed character encoding, both on creation and appending.
Use Add-Content and its -Encoding parameter instead, if you want to control the encoding.
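The capture-and-append pattern above could be wrapped in a small helper, for example (a sketch; the Write-Log function name and the $script:LogFile variable are invented for illustration, not part of the original answer):

```powershell
# Hypothetical helper wrapping the Write-Host + -InformationVariable
# pattern described above.
function Write-Log {
    param(
        [Parameter(Mandatory)] [string] $Message,
        [string] $LogFile = $script:LogFile
    )
    # Writes to the information stream (visible on the host, but not
    # part of the caller's success output), capturing the rendered line.
    Write-Host $Message -InformationVariable msg
    # Append with Add-Content so the encoding can be controlled.
    Add-Content -Path $LogFile -Value "$msg" -Encoding UTF8
}

$script:LogFile = Join-Path ([IO.Path]::GetTempPath()) 'write-log-demo.log'
Write-Log "Updating user alice"
```

Calls to Write-Log inside a function then log without polluting that function's return value.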
Instead of explicitly writing each log line to a file, you may want to use a different approach that references the log file only at one location in the code.
Advantages:
Easy to change log path and customize the log output (e. g. prepending a timestamp), without having to modify all code locations that log something.
Captures any kind of messages, e. g. also error, verbose and debug messages (if enabled).
Captures messages of 3rd-party code as well, without having to tell it the name of the log file.
Function SomeFunction {
    Write-Host "Hello from SomeFunction" # a log message
    "SomeFunctionOutput" # Implicit output (return value) of the function.
    # This is short for Write-Output "SomeFunctionOutput".
}
Function Main {
    Write-Host "Hello from Main" # a log message
    # Call SomeFunction and store its result (aka output) in $x
    $x = SomeFunction
    # To demonstrate that "normal" function output is not affected by log messages
    $x -eq "SomeFunctionOutput"
}
# Call Main and redirect all of its output streams, including those of any
# called functions.
Main *>&1 | Tee-Object -FilePath $PSScriptRoot\Log.txt -Append
Output:
Hello from Main
Hello from SomeFunction
True
In this sample all code is wrapped in function Main. This allows us to easily redirect all output streams using the *>&1 syntax, which employs the redirection operator to "merge" the streams. This means that all commands further down the pipeline (in this example Tee-Object) receive any script messages that would normally end up in the console (except those written directly to the console, which circumvents PowerShell's streams).
Possible further improvements
You may want to use try/catch in function Main, so you also capture script-terminating errors:
try {
    SomeFunction # May also cause a script-terminating error, which will be caught.
    # Example code that causes a script-terminating error
    Write-Error "Fatal error" -ErrorAction Stop
}
catch {
    # Make sure script-terminating errors are logged
    Write-Error -ErrorRecord $_ -ErrorAction Continue
}

how to prevent external script from terminating your script with break statement

I am calling an external .ps1 file which contains a break statement in certain error conditions. I would like to somehow catch this scenario, allow any externally printed messages to show as normal, and continue on with subsequent statements in my script. If the external script has a throw, this works fine using try/catch. Even with trap in my file, I cannot stop my script from terminating.
For answering this question, assume that the source code of the external .ps1 file (authored by someone else and pulled in at run time) cannot be changed.
Is what I want possible, or was the author of the script just not thinking about playing nice when called externally?
Edit: providing the following example.
In badscript.ps1:
if((Get-Date).DayOfWeek -ne "Yesterday"){
    Write-Warning "Sorry, you can only run this script yesterday."
    break
}
In myscript.ps1:
.\badscript.ps1
Write-Host "It is today."
The results I would like to achieve is to see the warning from badscript.ps1 and for it to continue on with my further statements in myscript.ps1. I understand why the break statement causes "It is today." to never be printed, however I wanted to find a way around it, as I am not the author of badscript.ps1.
Edit: Updating title from "powershell try/catch does not catch a break statement" to "how to prevent external script from terminating your script with break statement". The mention of try/catch was really more about one failed solution to the actual question which the new title better reflects.
Running a separate PowerShell process from within my script to invoke the external file has ended up being a solution good enough for my needs:
powershell -File .\badscript.ps1 will execute the contents of badscript.ps1 up until the break statement, including any Write-Host or Write-Warning output, and let my own script continue afterwards.
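The child-process pattern can be sketched end to end; here the bad script is generated as a stand-in, and the executable-selection line is illustrative (the original answer simply calls powershell):

```powershell
# Create a stand-in for badscript.ps1 that warns and then breaks.
$bad = Join-Path ([IO.Path]::GetTempPath()) 'badscript-demo.ps1'
Set-Content -Path $bad -Value @'
Write-Warning "Sorry, you can only run this script yesterday."
break
'@

# Use whichever PowerShell executable is on PATH.
$psExe = if (Get-Command powershell -ErrorAction SilentlyContinue) { 'powershell' } else { 'pwsh' }

# The break terminates only the child process, not this script.
& $psExe -NoProfile -File $bad
Write-Host "It is today."   # still runs
```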
I get where you're coming from. Probably the easiest way would be to push the script off as a job, and wait for the results. You can even echo the results out with Receive-Job after it's done if you want.
So considering the bad script you have above, and this script file calling it:
$path = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent
$start = Start-Job -ScriptBlock { . "$using:Path\badScript.ps1" } -Name "BadScript"
$wait = Wait-Job -Name "BadScript" -Timeout 100
Receive-Job -Name "BadScript"
Get-Command -Name "Get-ChildItem"
This will execute the bad script in a job, wait for the results, echo the results, and then continue executing the script it's in.
This could be wrapped in a function for any scripts you might need to call (just to be on the safe side).
Here's the output:
WARNING: Sorry, you can only run this script yesterday.
CommandType     Name             Version    Source
-----------     ----             -------    ------
Cmdlet          Get-ChildItem    3.1.0.0    Microsoft.PowerShell.Management
In the about_Break documentation it says
PowerShell does not limit how far labels can resume execution. The
label can even pass control across script and function call
boundaries.
This got me thinking, "How can I trick this stupid language design choice?". And the answer is to create a little switch block that will trap the break on the way out:
.\NaughtyBreak.ps1
Write-Host "NaughtyBreak about to break"
break
.\OuterScript.ps1
switch ('dummy') { default {.\NaughtyBreak.ps1}}
Write-Host "After switch() {NaughtyBreak}"
.\NaughtyBreak.ps1
Write-Host "After plain NaughtyBreak"
Then when we call OuterScript.ps1 we get
NaughtyBreak about to break
After switch() {NaughtyBreak}
NaughtyBreak about to break
Notice that OuterScript.ps1 correctly resumed after the call to NaughtyBreak.ps1 embedded in the switch, but was unceremoniously killed when calling NaughtyBreak.ps1 directly.
Put break back inside a loop (including switch), where it belongs:
foreach($i in 1) { ./badscript.ps1 }
'done'
Or
switch(1) { 1 { ./badscript.ps1 } }
'done'

Powershell: Write-Output -NoEnumerate not suppressing output to console

I'm writing a function in PowerShell that I want to be called via other PowerShell functions as well as be used as a standalone function.
With that objective in mind, I want to send a message down the pipeline using Write-Output to these other functions.
However, I don't want Write-Output to write to the PowerShell console. The TechNet page for Write-Output states:
Write-Output:
Sends the specified objects to the next command in the pipeline. If the command is the last command in the pipeline, the objects are displayed in the console.
-NoEnumerate:
By default, the Write-Output cmdlet always enumerates its output. The NoEnumerate parameter suppresses the default behavior, and prevents Write-Output from enumerating output. The NoEnumerate parameter has no effect on collections that were created by wrapping commands in parentheses, because the parentheses force enumeration.
For some reason, this -NoEnumerate switch will not work for me in either the PowerShell ISE or the PowerShell CLI. I always get output to my screen.
$data = "Text to suppress"
Write-Output -InputObject $data -NoEnumerate
This will always return 'Text to suppress' (no quotes).
I've seen people suggest to pipe to Out-Null like this:
$data = "Text to suppress"
Write-Output -InputObject $data -NoEnumerate | Out-Null
$_
This suppresses screen output, but when I use $_ I have nothing in my pipeline afterwards which defeats the purpose of me using Write-Output in the first place.
System is Windows 2012 with PowerShell 4.0
Any help is appreciated.
Write-Output doesn't write to the console unless it's the last command in the pipeline. In your first example, Write-Output is the only command in the pipeline, so its output is being dumped to the console. To keep that from happening, you need to send the output somewhere. For example:
Write-Output 5
will send "5" to the console, because Write-Output is the last and only command in the pipeline. However:
Write-Output 5 | Start-Sleep
no longer does that because Start-Sleep is now the next command in the pipeline, and has therefore become the recipient of Write-Output's data.
Try this:
Write your function as you have written it with Write-Output as the last command in the pipeline. This should send the output up the line to the invoker of the function. It's here that the invoker can use the output, and at the same time suppress writing to the console.
MyFunction blah, blah, blah | % {do something with each object in the output}
I haven't tried this, so I don't know if it works. But it seems plausible.
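A minimal sketch of that pattern (Get-Numbers is an invented name for illustration):

```powershell
function Get-Numbers {
    # Write-Output is the last command in the pipeline, so the objects
    # flow to the invoker instead of being rendered on the console.
    Write-Output 1, 2, 3
}

# The invoker consumes the output; each object is handled in the
# pipeline and nothing is dumped to the console by Get-Numbers itself.
$squares = Get-Numbers | ForEach-Object { $_ * $_ }
```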
My question is not the greatest.
First of all Write-Output -NoEnumerate doesn't suppress output on Write-Output.
Secondly, Write-Output is supposed to write its output. Trying to make it stop is a silly goal.
Thirdly, piping Write-Output to Out-Null or Out-File means that the value you gave Write-Output will not continue down the pipeline which was the only reason I was using it.
Fourth, $suppress = Write-Output "String to Suppress" also doesn't pass the value down the pipeline.
So I'm answering my question by realizing if it prints out to the screen that's really not a terrible thing and moving on. Thank you for your help and suggestions.
Explicitly storing the output in a variable would be more prudent than trying to use an implicit automatic variable. As soon as another command is run, that implicit variable will lose the prior output stored in it. No automatic variable exists to do what you're asking.
If you want to type out a set of commands without storing everything in temporary variables along the way, you can write a scriptblock at the command line as well, and make use of the $_ automatic variable you've indicated you're trying to use.
You just need to start a new line using shift + enter and write the code block as you would in a normal scriptblock - in which you could use the $_ automatic variable as part of a pipeline.
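For instance, a multi-line pipeline block along these lines (a sketch) keeps intermediate values in the pipeline rather than in temporary variables:

```powershell
# $_ is the automatic variable holding the current pipeline object.
$lengths = 'one', 'three', 'seven' | ForEach-Object {
    $_.Length   # emitted down the pipeline for each input string
}
```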

How do I trap signals in PowerShell?

Is this possible? I've finally decided to start setting up my personal .NET development environment to more closely mimic how I'd set up a *NIX dev environment, which means learning PowerShell in earnest.
I'm currently writing a function that recurses through the file system, setting the working directory as it goes in order to build things. One little thing that bothers me is that if I Ctrl+C out of the function, it leaves me wherever the script last was. I've tried setting a trap block that changes the dir to the starting point when run, but this seems to only be intended (and fire) on Exception.
If this were in a language that had root in Unix, I'd set up a signal handler for SIGINT, but I can't find anything similar searching in Powershell. Putting on my .NET cap, I'm imagining there's some sort of event that I can attach a handler to, and if I had to guess, it'd be an event of $host, but I can't find any canonical documentation for System.Management.Automation.Internal.Host.InternalHostUserInterface, and nothing anecdotal that I've been able to search for has been helpful.
Perhaps I'm missing something completely obvious?
Do you mean something like this?
try
{
    Push-Location
    Set-Location "blah"
    # Do some stuff here
}
finally
{
    Pop-Location
}
See documentation here. Particularly that paragraph: "The Finally block statements run regardless of whether the Try block encounters a terminating error. Windows PowerShell runs the Finally block before the script terminates or before the current block goes out of scope. A Finally block runs even if you use CTRL+C to stop the script. A Finally block also runs if an Exit keyword stops the script from within a Catch block."
This handles console keyboard input. If Ctrl+C is pressed during the loop, you'll have a chance to handle the event however you want. In the example code a warning is printed and the loop is exited.
[console]::TreatControlCAsInput = $true
dir -Recurse -Path C:\ | % {
    # Process file system object here...
    Write-Host $_.FullName
    # Check if ctrl+C was pressed and quit if so.
    if ([console]::KeyAvailable) {
        $key = [system.console]::readkey($true)
        if (($key.modifiers -band [consolemodifiers]"control") -and ($key.key -eq "C")) {
            Write-Warning "Quitting, user pressed control C..."
            break
        }
    }
}

Gracefully stopping in Powershell

How do I catch and handle Ctrl+C in a PowerShell script? I understand that I can do this from a cmdlet in v2 by including an override for the Powershell.Stop() method, but I can't find an analog for use in scripts.
I'm currently performing cleanup via an end block, but I need to perform additional work when the script is canceled (as opposed to run to completion).
The documentation for try-catch-finally says:
A Finally block runs even if you use CTRL+C to stop the script. A Finally
block also runs if an Exit keyword stops the script from within a Catch
block.
See the following example. Run it and cancel it by pressing ctrl-c.
try
{
    while($true)
    {
        "Working.."
        Start-Sleep -Seconds 1
    }
}
finally
{
    Write-Host "Ended work."
}
You could use the method described here on PoshCode.
Summary:
Set
[console]::TreatControlCAsInput = $true
then poll for user input using
if($Host.UI.RawUI.KeyAvailable -and (3 -eq
[int]$Host.UI.RawUI.ReadKey("AllowCtrlC,IncludeKeyUp,NoEcho").Character))
There is also a Stopping property on $PSCmdlet that can be used for this.
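In an advanced function (one with [CmdletBinding()], which is what makes $PSCmdlet available), that property might be polled like this (a sketch, with an invented function name):

```powershell
function Invoke-LongRunningWork {
    [CmdletBinding()]
    param()
    process {
        foreach ($i in 1..5) {
            # Bail out cleanly if the pipeline is being stopped
            # (e.g. Ctrl+C, or a downstream Select-Object -First).
            if ($PSCmdlet.Stopping) { return }
            Start-Sleep -Milliseconds 10
        }
        "done"   # reached only if the work was not stopped
    }
}
```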
Here is a recent, working solution. I use the if part in a loop where I need to control interruption of execution (closing file handles).
[Console]::TreatControlCAsInput = $true # at beginning of script
if ([Console]::KeyAvailable){
    $readkey = [Console]::ReadKey($true)
    if ($readkey.Modifiers -eq "Control" -and $readkey.Key -eq "C"){
        # tasks before exit here...
        return
    }
}
Also note that there is a bug that leads KeyAvailable to be true at the start of scripts. You can mitigate this by calling ReadKey once at the start. Not needed for this approach, just worth knowing in this context.
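One way to apply that workaround (a sketch; the function name is invented, and the try/catch guards against hosts where console input is redirected and these APIs throw):

```powershell
function Initialize-CtrlCHandling {
    try {
        [Console]::TreatControlCAsInput = $true
        # Drain any spurious key presses that KeyAvailable may report
        # at script start.
        while ([Console]::KeyAvailable) {
            [void][Console]::ReadKey($true)
        }
    } catch {
        # These console APIs throw when input is redirected
        # (e.g. in the ISE or a non-interactive host); ignore.
    }
}

Initialize-CtrlCHandling
```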