I have a PowerShell module that contains a number of scripts for individual functions. There is also a "library" script with a number of helper functions that are called by the module's functions.
Let's call the outer function ReadWeb, and it uses a helper function ParseXML.
I encountered an issue this week with error handling in the inner helper ParseXML function. That function contains a try/catch, and in the catch I interrogate:
$Error[0].Exception.InnerException.Message
...in order to pass the error back to the outer scope as a variable and determine if ParseXML worked.
For a particular case, I was getting an indexing error when I called ReadWeb. The root cause turned out to be that the $Error object in the Catch block in ParseXML was coming back $null.
I changed the error handling to check whether $Error is $null and, if it is, to use $_ in the Catch block to determine the error message.
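Roughly like this (a simplified sketch with illustrative names, not the actual module code):
# Simplified, illustrative sketch - not the actual module code
function ParseXML {
    param([string] $XmlText)
    try {
        [xml] $XmlText   # throws on malformed XML
    }
    catch {
        # Fall back to $_ when $Error comes back empty / $null
        if ($null -eq $Error -or $Error.Count -eq 0) {
            $script:ParseXmlError = $_.Exception.Message
        }
        else {
            $script:ParseXmlError = $Error[0].Exception.InnerException.Message
        }
    }
}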
My question is: what would cause $Error to be $null inside the Catch?
Note: This is written from the perspective of Windows PowerShell 5.1 and PowerShell (Core) 7.2.3 - it is possible that earlier Windows PowerShell versions behaved differently, though I suspect they didn't.
Accessing the error at hand inside a catch block should indeed be done via the automatic $_ variable.
Inside a module, $Error isn't $null, but surprisingly is an always-empty collection (of type System.Collections.ArrayList); therefore, $Error[0] - which in catch blocks outside modules is the same as $_ - is unexpectedly $null in modules:
What technically happens - and this may be a bug - is that modules have an unused, module-local copy of $Error, which shadows (hides) the true $Error variable that is located in the global scope.
When an error is automatically recorded from inside a module, however, it is added to the global $Error collection - just like in code running outside of modules.
Therefore, a workaround is to use $global:Error instead (which is only necessary if you need access to previous errors; as stated, for the current one, use $_).
The following sample code illustrates the problem:
$Error.Clear()
# Define a dynamic module that exports sample function 'foo'.
$null = New-Module {
    Function foo {
        try {
            1 / 0 # Provoke an error that can be caught.
        }
        catch {
            $varNames  = '$Error', '$global:Error', '$_'
            $varValues = $Error, $global:Error, $_
            foreach ($i in 0..($varNames.Count-1)) {
                [pscustomobject] @{
                    Name  = $varNames[$i]
                    Type  = $varValues[$i].GetType().FullName
                    Value = $varValues[$i]
                }
            }
        }
    }
}
foo
Output; note how the value of $Error is {}, indicating an empty collection:
Name          Type                                     Value
----          ----                                     -----
$Error        System.Collections.ArrayList             {}
$global:Error System.Collections.ArrayList             {Attempted to divide by zero.}
$_            System.Management.Automation.ErrorRecord Attempted to divide by zero.
Edit: this answer is based on PowerShell 3.
$error is an automatic variable handled by PowerShell; see the 3rd paragraph of the LONG DESCRIPTION in about_Try_Catch_Finally.
The current error is treated as the context of the Catch block and is therefore available as $_.
Since the Catch block is a different block than the Try block, the $error automatic variable is reset and comes back as $null.
$error and try / catch are different beasts in PowerShell.
try / catch catches terminating errors, but won't catch a Write-Error (because it's non-terminating).
$error is a list of all errors encountered (including ones swallowed up when -ErrorAction silentlycontinue is used).
$_ is the current error in a try/catch block.
I'd guess that your underlying function calls Write-Error, and you want that to go cleanly into a try/catch. To make it a terminating error as well, use -ErrorAction Stop.
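A minimal illustration (hypothetical helper name, not your actual function):
function Get-Something {
    [CmdletBinding()]
    param()
    Write-Error "Something went wrong"   # non-terminating by default
}

try { Get-Something } catch { "Caught: $_" }                    # not caught; the error is just written to the error stream
try { Get-Something -ErrorAction Stop } catch { "Caught: $_" }  # caught: 'Caught: Something went wrong'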
Related
I've got very weird behaviour which I suppose is related to dot-sourcing somehow, but I cannot wrap my head around it. Here's what I have:
A script sourced.ps1 which contains the two functions and a class:
class MyData {
    [string] $Name
}

function withClass() {
    $initialData = @{
        Name1 = "1";
        Name2 = "2";
    }
    $list = New-Object Collections.Generic.List[MyData]
    foreach ($item in $initialData.Keys) {
        $d = [MyData]::new()
        $d.Name = $item
        $list.Add($d)
    }
}

function withString() {
    $initialData = @{
        Name1 = "1";
        Name2 = "2";
    }
    $list = New-Object Collections.Generic.List[string]
    foreach ($item in $initialData.Keys) {
        $list.Add($item)
    }
}
I also have a script caller.ps1 which dot-sources the one above and calls the function:
$ErrorActionPreference = 'Stop'
. ".\sourced.ps1"
withClass
I then call the caller.ps1 by executing .\caller.ps1 in the shell (Win terminal with PS Core).
Here's the behaviour I cannot explain: if I call .\caller.ps1, then .\sourced.ps1 and then caller.ps1 again, I get the error:
Line |
14 | $list.Add($d)
| ~~~~~~~~~~~~~
| Cannot find an overload for "Add" and the argument count: "1".
However, if I change the caller.ps1 to call withString function instead, everything works fine no matter how many times I call caller.ps1 and sourced.ps1.
Furthermore, if I first call caller.ps1 with withString, then change it to withClass, there is no error whatsoever.
I suppose using modules would be more correct, but I'm interested in the reason for such weird behaviour in the first place.
Written as of PowerShell 7.2.1
A given script file that is both dot-sourced and directly executed (in either order, irrespective of how often) creates successive versions of the class definitions in it - these are distinct .NET types, even though their structure is identical. Arguably, there's no good reason to do this, and the behavior may be a bug.
These versions have the same full name (PowerShell class definitions created in the top-level scope of scripts have only a name, no namespace) but are housed in different dynamic (in-memory) assemblies that differ by the last component of their version number. They shadow each other, and which one is in effect depends on the context:
Other scripts that dot-source such a script consistently see the new version.
Inside the script itself, irrespective of whether it is itself executed directly or dot-sourced:
In PowerShell code, the original version stays in effect.
Inside binary cmdlets, notably New-Object, the new version takes effect.
If you mix these two ways to access the class inside the script, type mismatches can occur, which is what happened in your case - see sample code below.
While you can technically avoid such errors by consistently using ::new() or New-Object to reference the class, it is better to avoid performing both direct execution and dot-sourcing of script files that contain class definitions to begin with.
Sample code:
Save the code to a script file, say, demo.ps1
Execute it twice.
First, by direct execution: .\demo.ps1
Then, via dot-sourcing: . .\demo.ps1
The type-mismatch error that you saw will occur during that second execution.
Note: The error message, Cannot find an overload for "Add" and the argument count: "1", is a bit obscure; what it is trying to express is that the .Add() method cannot be called with the argument of the given type, because it expects an instance of the new version of [MyData], whereas ::new() created an instance of the original version.
# demo.ps1
# Define a class
class MyData { }
# Use New-Object to instantiate a generic list based on that class.
# This passes the type name as a *string*, and instantiation of the
# type happens *inside the cmdlet*.
# On the second execution, this will use the *new* [MyData] version.
Write-Verbose -Verbose 'Constructing list via New-Object'
$list = New-Object System.Collections.Generic.List[MyData]
# Use ::new() to create an instance of [MyData]
# Even on the second execution this will use the *original* [MyData] version
$myDataInstance = [MyData]::new()
# Try to add the instance to the list.
# On the second execution this will *fail*, because the [MyData] used
# by the list and the one that $myDataInstance is an instance of differ.
$list.Add($myDataInstance)
Note that if you used $myDataInstance = New-Object MyData, the type mismatch would go away.
Similarly, it would also go away if you stuck with ::new() and also used it to instantiate the list: $list = [Collections.Generic.List[MyData]]::new()
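For example, a variant of the demo script that sticks with ::new() throughout (illustrative only) no longer produces the mismatch:
# demo-consistent.ps1 - hypothetical variant that uses ::new() consistently
class MyData { }

# Both the list's element type and the instance are resolved in PowerShell code,
# so they refer to the same version of [MyData] even after mixing direct
# execution and dot-sourcing.
$list = [System.Collections.Generic.List[MyData]]::new()
$myDataInstance = [MyData]::new()
$list.Add($myDataInstance)   # no type-mismatch error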
I want to include Whatif and Confirm to my functions but I encountered an issue with these parameters.
My functions are structured like this:
function Get-Stuff {
    [CmdletBinding(SupportsShouldProcess, ConfirmImpact='High')]
    param ( {...} )
    process {
        if ($PSCmdlet.ShouldProcess($Name, "Delete user")) {
            $result = Invoke-RestMethod @restBody
        }
    }
    end {
        Write-Output -InputObject $result
        Remove-Variable -Name result
    }
}
I got into the habit of cleaning up my variables in the end block with Remove-Variable. When I now use the -WhatIf or -Confirm parameter (and deny the confirmation), I get an error that the $result variable cannot be found.
ErrorRecord : Cannot find a variable with the name 'result'.
I understand that Invoke-RestMethod is skipped in this case, but I assumed that the rest of the function would not be executed either.
My question: do I need to add an else clause to stop further execution of the function, or am I using these parameters incorrectly?
There's no good reason to remove your variables in the end block, since they go out of scope automatically anyway, given that they're local to your function.
(The only thing that makes sense is to .Dispose() of variables containing objects that implement the System.IDisposable interface; if releasing memory as quickly as possible is paramount - at the expense of blocking execution temporarily - you can additionally call [GC]::Collect(); [GC]::WaitForPendingFinalizers())
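For instance, a sketch of the IDisposable case (the file path is illustrative):
# Illustrative: dispose of IDisposable objects deterministically when done.
$reader = [System.IO.StreamReader]::new("C:\temp\some-file.txt")
try {
    $firstLine = $reader.ReadLine()
    $firstLine
}
finally {
    $reader.Dispose()   # releases the underlying file handle immediately
}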
If you still want to call Remove-Variable, you have two options:
Simply ignore a non-existent variable by adding -ErrorAction Ignore to the Remove-Variable call.
Remove-Variable -Name result -ErrorAction Ignore
Alternatively, protect the call - and the Write-Output object - with an explicit test for the existence of the variable:
if (Get-Variable -Scope Local result -ErrorAction Ignore) {
    $result # short for: Write-Output -InputObject $result
    Remove-Variable -Name result
}
Also note that it's typical for output objects to be emitted directly from the process block - emitting from the end block is only a necessity for commands that for conceptual reasons must collect all input first, such as Sort-Object.
Emitting output objects from the process block - which is invoked for each input object - ensures the streaming output behavior - emitting objects one by one, as soon as they're available - that the pipeline was designed for.
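A minimal sketch of that streaming pattern (hypothetical function):
function Get-Thing {
    [CmdletBinding()]
    param(
        [Parameter(ValueFromPipeline)] [int] $Number
    )
    process {
        # Each result is emitted as soon as it is available, so downstream
        # commands can start consuming it immediately.
        $Number * 2
    }
}

1..3 | Get-Thing | ForEach-Object { "received $_" }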
I'm trying to grasp the Write-Error and $Error automatic variable with the code sample below, which is supposed to generate a single error:
function Test-Error
{
    [CmdletBinding()]
    param()
    Write-Error -Message "Error message" -Category PermissionDenied -ErrorId SampleID
}
$Error.Clear()
$ErrorActionPreference = "Stop"
Test-Error
According to documentation:
$Error Contains an array of error objects that represent the most
recent errors. The most recent error is the first error object in the
array $Error[0].
OK so let's take a look at error contents now:
$Error[0] | select -property *
This will print out the error, (I'm not pasting it here because it's not relevant to the question)
So according to the docs this is the error, but it looks like the docs are wrong:
$Error[1] | select -property *
This will also print out something!
The question is: how does that single error happen to be split into 2 pieces?
Isn't $Error[0] supposed to contain just the last error, as the docs say?
Obviously I did clear the variable, so only a single element is supposed to be in the array, not 2.
Both elements seem to point to a single error, but what's the purpose of 2 array elements for a single error?
And if the $Error variable happens to have 30 or so errors, how do I tell which 2 relate to the same error? That would mean 60 elements for 30 errors!
The other error is due to $ErrorActionPreference = "Stop". The error message that you did not post would have clarified it: its Message property is "The running command stopped because the preference variable "ErrorActionPreference" or common parameter is set to Stop: Error message". If you do not set that preference, you get only one element in $Error.
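One way to see this for yourself, after the script above has stopped (the exact exception type names below are my expectation, so treat them as an assumption):
# Inspect what each entry in $Error actually is:
$Error | ForEach-Object { $_.Exception.GetType().FullName }

# Expected (assumption):
#   System.Management.Automation.ActionPreferenceStopException   <- the 'running command stopped ...' wrapper
#   Microsoft.PowerShell.Commands.WriteErrorException            <- the original record produced by Write-Error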
I came across this weird issue in Powershell (not in other languages). Could anyone please explain to me why this happened?
I tried to return a specified number (number 8), but the function keeps throwing everything at me. Is that a bug or by design?
Function GetNum() {
    Return 10
}

Function Main() {
    $Number10 = GetNum
    $number10 #1 WHY NO OUTPUT HERE ??????? I don't want to use write host
    $result = 8 # I WANT THIS NUMBER ONLY
    PAUSE
    return $result
}

do {
    $again = Main
    Write-Host "RESULT IS "$again # Weird Result, I only want Number 8
} While ($again -eq 10) # As the result is wrong, it loops forever
Is that a bug or by design?
By design. In PowerShell, cmdlets can return a stream of objects, much like using yield return in C# to return an IEnumerable collection.
The return keyword is not required for output values to be returned, it simply exits (or returns from) the current scope.
From Get-Help about_Return (emphasis added):
The Return keyword exits a function, script, or script block. It can be
used to exit a scope at a specific point, to return a value, or to indicate
that the end of the scope has been reached.
Users who are familiar with languages like C or C# might want to use the
Return keyword to make the logic of leaving a scope explicit.
In Windows PowerShell, the results of each statement are returned as
output, even without a statement that contains the Return keyword.
Languages like C or C# return only the value or values that are specified
by the Return keyword.
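A minimal illustration of that behavior:
function Get-Numbers {
    1            # emitted to the output stream
    2            # also emitted - no 'return' needed
    return 3     # 'return' emits its value and then exits the scope
    4            # never reached
}

Get-Numbers      # outputs: 1, 2, 3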
Mathias is spot on as usual.
I want to address this comment in your code:
$number10 #1 WHY NO OUTPUT HERE ??????? I don't want to use write host
Why don't you want to use Write-Host? Is it because you may have come across this very popular post from PowerShell's creator with the provocative title Write-Host Considered Harmful?
If so, I encourage you to read what I think is a great follow-up/companion piece by tby, titled Is Write-Host Really Harmful?
With this information, it should be clear that as Mathias said, you are returning objects to the pipeline, but you should also be armed with the information needed to choose an alternative, whether it's Write-Verbose, Write-Debug, or even Write-Host.
If I were going to be opinionated about it, I would go with Write-Verbose, altering your function definition slightly in order to support it:
function Main {
    [CmdletBinding()]
    param()

    $Number10 = GetNum
    Write-Verbose -Message $number10
    $result = 8 # I WANT THIS NUMBER ONLY
    PAUSE
    $result
}
When you invoke it by just calling $again = Main you'll see nothing on the screen, and $again will have a value of 8. However if you call it this way:
$again = Main -Verbose
then $again will still have the value of 8, but on the screen you'll see:
VERBOSE: 10
likely in differently colored text.
What that gives is not only a way to show the value, but a way for the caller to control whether they see the value or not, without changing the return value of the function.
To drive some of the points in the articles home further, consider that it isn't strictly necessary to invoke your function itself with -Verbose to get that behavior.
For example, let's say you stored that whole script in a file called FeelingNum.ps1.
If, in addition to the changes I made above, you also add the following to the very top of your file:
[CmdletBinding()]
param()
then, even if you still invoke your function "normally" as $again = Main, you can get the verbose output by invoking your script itself with -Verbose:
powershell.exe -File FeelingNum.ps1 -Verbose
What happens there is that using the -Verbose parameter sets a variable called $VerbosePreference, and that gets inherited on each function called down the stack (unless it's overridden). You can also set $VerbosePreference manually.
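For example, setting it at the prompt has the same effect (a quick sketch):
$VerbosePreference = 'Continue'           # show verbose output for everything downstream
$again = Main                             # VERBOSE: 10 is displayed; $again is still 8
$VerbosePreference = 'SilentlyContinue'   # back to the default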
So what you get by using these built-in features is a lot of flexibility, both for you as the author and for anyone who uses your code, which is a good thing even if the only person using it is you.
Using PowerShell, how can I add an entry to the $Error collection (or, perhaps more importantly, WHAT should I add to the collection)?
To my understanding, using Write-Error will both output the error AND add the entry to the $Error collection. But I would just like to have an entry added directly to the collection without outputting anything at that time.
The -ErrorAction common parameter should be able to handle that.
Write-Error "Something bad happened" -ErrorAction SilentlyContinue