Powershell console different to script - powershell

Can anyone tell me why this command works fine in the PowerShell console, returning a single thumbprint, but when run as a script it just returns all the certificates' thumbprints:
$crt = (Get-ChildItem -Path Cert:\LocalMachine\WebHosting\ | Where-Object {$_.Subject.Contains($certcn)}).thumbprint
$certcn is a string containing a domain, e.g. "www.test.com".

I figured it out. $certcn was derived from $args[0]. It turns out $args[0] is not a string, and even though PS would quite happily use it as a string in other commands, it would not do this with Where-Object.
Not sure what type $args[0] actually is, but doing $certcn = $args[0].tostring() fixed it.

The only explanation for your symptom is that the value of variable $certcn:
either: is the empty string ('') because 'someString'.Contains('') returns $true for any input string.
or: is implicitly converted to the empty string, though that wouldn't happen often in practice; here are some examples (see GitHub issue # for the [pscustomobject] stringification bug mentioned below):
# An empty array stringifies to ''
'someString'.Contains(@()) # -> $true
# A single-element array containing $null stringifies to ''
'someString'.Contains(@($null)) # -> $true
# Due to a longstanding bug, [pscustomobject] instances, when
# stringified via .ToString(), convert to the empty string.
# This makes the command equivalent to `.Contains(@(''))`, which is again
# the same as `.Contains('')`
'someString'.Contains(@([pscustomobject] @{ foo=1 })) # -> $true
$args[0] is not a string
The automatic $args variable is an array that contains all positional arguments that weren't bound to declared parameters, if any.
$args can contain elements of any data type, and what that type is is solely determined by the caller.
However, if you formally declare a parameter, you can type it, which means that if the caller passes an argument of a different data type, an attempt is made to convert the argument to the parameter's type (which may fail, but at least the failure will be "loud", and the reason obvious).
A robust solution for your script:
param(
    [Parameter(Mandatory)] # Ensure that the caller passes a value.
    [string] $CertCN       # Type-constrain to a string.
    # , ... declare other parameters as needed
)
# $CertCN is now guaranteed to be a *string* that is *non-empty*.
$crt =
    (Get-ChildItem -Path Cert:\LocalMachine\WebHosting |
        Where-Object { $_.Subject.Contains($CertCN) }).thumbprint
Note:
The use of the [Parameter()] attribute in the parameter declaration block (param(...)) makes your script an advanced one, which means that $args isn't supported, requiring all arguments to bind to explicitly declared parameters; however, you can define a catch-all parameter with [Parameter(ValueFromRemainingArguments)], if needed. (The other thing that makes a script or function an advanced one is use of the [CmdletBinding()] attribute above the param(...) block as a whole.)
[Parameter(Mandatory)], in addition to ensuring that the caller passes a value for the parameter, implicitly also prevents passing the empty string (or $null) - though you could explicitly allow that with [AllowEmptyString()]
Additionally, advanced scripts and functions automatically prevent passing arrays to [string]-typed parameters, which is desirable. (By contrast, simple functions and scripts simply stringify arrays, as would happen in expandable strings (string interpolation); e.g., & { param([string] $foo) $foo } 1, 2 binds '1 2', which is also what you'd get with "$(1, 2)")
Caveat:
When passing a value to a [string]-typed parameter, PowerShell accepts a scalar (non-collection) of any type, and any non-string type is automatically converted to a string, via .ToString(). This is usually desirable and convenient, but can result in useless stringifications; e.g.:
& { param([string] $str) $str } @{} # -> 'System.Collections.Hashtable'
Instances of hashtables (@{ ... }) stringify to their type name, which is unhelpful, and this behavior is the default behavior for any type that doesn't explicitly implement a meaningful string representation by overriding the .ToString() method.
If that is a concern, you can modify your script to ensure that the argument value being passed already is a string, using a [ValidateScript()] attribute.
param(
    [Parameter(Mandatory)]
    # Ensure that the argument value is a [string] to begin with.
    # Note: The `ErrorMessage` property requires PowerShell (Core) 7+
    [ValidateScript({ $_ -is [string] }, ErrorMessage='Please pass a *string* argument.')]
    $CertCN # Do not type-constrain, so that the original type can be inspected.
    # , ... declare other parameters as needed
)
# ...
As stated in the code comments, use of the ErrorMessage property requires PowerShell (Core) 7+, unfortunately. In Windows PowerShell a standard error message is invariably shown, which isn't user-friendly at all.
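If you also need a friendly message in Windows PowerShell, one common workaround - shown here only as a sketch, not as part of the original code - is to throw a custom error from inside the [ValidateScript()] script block:
param(
    [Parameter(Mandatory)]
    # Windows PowerShell-compatible sketch: throw a custom message from inside
    # the validation script block instead of relying on ErrorMessage.
    [ValidateScript({
        if ($_ -is [string]) { $true } else { throw 'Please pass a *string* argument.' }
    })]
    $CertCN # Left untyped so that the original type can be inspected.
)
# ...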


PowerShell 5.1 Can someone please explain hashtable and splatting

Given:
PowerShell 5.1
I'm having a little trouble understanding hashtables and splatting. When splatting, are you using a hash table to do that, or is it something completely different?
I have the following code:
$hashtable1 = @{}
$hashtable1.add('PROD', @{ FirstName = 'John'; LastName = 'Smith' })
function Main() {
    $sel = $hashtable1['PROD']
    Function1 $sel
    Function2 @sel
}
function Function1([hashtable] $x) {
    "Value: $($x.LastName)"
    $x.FirstName
}
function Function2([string] $firstName) {
    "Value: $($firstName)"
}
Main
There's good information in the existing answers, but let me attempt a focused summary:
The answer to your actual question is:
Yes, @{ FirstName = 'John'; LastName = 'Smith' } is a hashtable too, namely in the form of a declarative hashtable literal - just like @{} is an empty hashtable literal (it constructs an instance that initially has no entries).
A hashtable literal consists of zero or more key-value pairs, with = separating each key from its value, and pairs being separated with ; or newlines.
Keys usually do not require quoting (e.g. FirstName), except if they contain special characters such as spaces or if they're provided via an expression, such as a variable reference; see this answer for details.
This contrasts with adding entries to a hashtable later, programmatically, as your $hashtable1.Add('PROD', ...) method call exemplifies (where PROD is the entry key, and ... is a placeholder for the entry value).
Note that a more convenient alternative to using the .Add() method is to use an index expression or even dot notation (property-like access), though note that it situationally either adds an entry or updates an existing one: $hashtable1['PROD'] = ... or $hashtable1.PROD = ...
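A quick sketch of that add-or-update behavior, using the question's $hashtable1:
$hashtable1 = @{}
$hashtable1['PROD'] = @{ FirstName = 'John'; LastName = 'Smith' } # adds the 'PROD' entry
$hashtable1.PROD    = @{ FirstName = 'Jane'; LastName = 'Doe' }   # updates the existing entry
$hashtable1.PROD.FirstName # -> 'Jane'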
The answer to the broader question implied by your question's title:
PowerShell's hashtables are a kind of data structure often called a dictionary or, in other languages, associative array or map. Specifically, they are case-insensitive instances of the .NET [hashtable] (System.Collections.Hashtable) type, which is a collection of unordered key-value pair entries. Hashtables enable efficient lookup of values by their associated keys.
Via syntactic sugar [ordered] @{ ... }, i.e. by placing [ordered] before a hashtable literal, PowerShell offers a case-insensitive ordered dictionary that maintains the entry-definition order and allows access by positional index in addition to the usual key-based access. Such ordered hashtables are case-insensitive instances of the .NET System.Collections.Specialized.OrderedDictionary type.
A quick example:
# Create an ordered hashtable (omit [ordered] for an unordered one).
$dict = [ordered] @{ foo = 1; bar = 'two' }
# All of the following return 'two'
$dict['bar'] # key-based access
$dict.bar # ditto, with dot notation (property-like access)
$dict[1] # index-based access; the 2nd entry's value.
Splatting is an argument-passing technique that enables passing arguments indirectly, via a variable containing a data structure encoding the arguments, which is useful for dynamically constructing arguments and making calls with many arguments more readable.
Typically and robustly - but only when calling PowerShell commands with declared parameters - that data structure is a hashtable, whose entry keys must match the names of the target command's parameters (e.g., key Path targets parameter -Path) and whose entry values specify the value to pass.
In other words: This form of splatting uses a hashtable to implement passing named arguments (parameter values preceded by the name of the target parameter, such as -Path /foo in direct argument passing).
A quick example:
# Define the hashtable of arguments (parameter name-value pairs)
# Note that File = $true is equivalent to the -File switch.
$argsHash = @{ LiteralPath = 'C:\Windows'; File = $true }
# Note the use of "@" instead of "$"; equivalent to:
# Get-ChildItem -LiteralPath 'C:\Windows' -File
Get-ChildItem @argsHash
Alternatively, an array may be used for splatting, comprising parameter values only, which are then passed positionally to the target command.
In other words: This form of splatting uses an array to implement passing positional arguments (parameter values only).
This form is typically only useful:
when calling PowerShell scripts or functions that do not formally declare parameters and access their - invariably positional - arguments via the automatic $args variable
when calling external programs; note that from PowerShell's perspective there's no concept of named arguments when calling external programs, as PowerShell knows nothing about the parameter syntax in that case; all arguments are simply placed on the process command line one by one, and it is up to the target program to interpret them as parameter names vs. values.
A quick example:
# Define an array of arguments (parameter values)
$argsArray = 'foo', 'bar'
# Note the use of "#" instead of "$", though due to calling an
# *external program* here, you may use "$" as well; equivalent to:
# cmd /c echo 'foo' 'bar'
cmd /c echo #argsArray
@postanote has some very good links about hashtables and splatting, which are good reads. Taking your examples, you have two different functions: one that handles a hashtable as a parameter, and a second one that can only handle a single string parameter. The hashtable cannot be used to pass parameters to the second function, e.g.:
PS C:\> Function2 $sel
Value: System.Collections.Hashtable
Conceptually, the real difference between using hashtables and splatting is not about how you are using them to pass information and parameters to functions, but how the functions and their parameters receive the information.
Yes, certain functions can have hashtables and arrays as parameters, but typically, in 98% of cases, functions don't use a hashtable as a named parameter to receive their values.
For example, Copy-Item doesn't use hash tables as a parameter. If it did, would you want to do this every time you want to copy anything:
$hashtable = @{
    Path = "C:\Temp\myfile.txt"
    Destination = "C:\New Folder\"
}
Copy-Item -Parameters $hashtable
No, instead, you want the parameters as strings, so you can make it a much easier one liner:
Copy-Item -Path "C:\Temp\myfile.txt" -Destination "C:\New Folder\"
It makes more sense to most people to deal with individual strings as opposed to passing a generic, large, hashtable "config". Also, by separating the parameters into individual strings, integers, floats, chars, etc., it is easier to do validation, default values, mandatory/not mandatory parameters, etc.
Now despite saying all that, there is a situation where you have certain functions with lots of parameters (e.g. sending an email message), or you want to do something multiple times (e.g. copying/moving lots of files from a CSV file). In that case, using a hashtable, and/or arrays, and/or arrays of hashtables, would be useful.
This is where splatting comes in. It takes a hashtable and instead of treating it like you are passing a single value (i.e. why Function2 $sel returns System.Collections.Hashtable), the @ sign tells PowerShell that it is a collection of values, and to use the collection to try and match to the parameters of the function. That's why passing the hashtable to Function2 doesn't work, but splatting works, e.g.:
PS C:\> Function2 @sel
Value: John
In this case, it takes the hashtable $sel and by using splatting @sel PowerShell now knows not to pass the hashtable as-is, but to open up the collection and match $sel.FirstName to the -FirstName parameter.

Impossible to remove a variable in Powershell

I have come across the strangest behaviour that has been driving me nuts when writing scripts. It is sometimes impossible to remove the value of a variable in PowerShell. I have tried:
Remove-Variable -Force
I also tried setting it to an empty string or to $null, but the variable's value and type remain.
Anyone have an idea how this can happen?
I am using PowerShell version 5 on Windows Server 2016.
Here are some screenshots:
To remove a variable, pass its name without the $ sigil to the Remove-Variable cmdlet's
-Name parameter (which is positionally implied); using the example of a variable $date:
Using an argument:
# Note the required absence of $ in the name; quoting the var. name is
# optional in this case.
Remove-Variable -Force -Name date
Using the pipeline would require you to specify objects whose .Name property contains the name of the variable to delete, because these property values implicitly bind to Remove-Variable's -Name parameter; the simplest way to achieve that is to use the Get-Variable cmdlet, which too requires specifying the name without the $:
# Works, but is inefficient.
Get-Variable -Name date | Remove-Variable -Force
However, this is both more verbose and less efficient than directly passing the name(s) as an argument.
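A quick sanity check (a sketch using a throwaway $date variable):
$date = Get-Date
Remove-Variable -Force -Name date
Get-Variable -Name date -ErrorAction SilentlyContinue # -> no output: the variable is gone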
As for what you tried:
Your variable-removal command is conceptually flawed:
$date | Remove-Variable -Force
Except as the LHS of an assignment ($date = ...), referring to a variable with the $ sigil returns its value, not the variable itself.
That is, since your $date variable contains a [datetime] instance, it is that instance that is sent through the pipeline, and since only strings are supported as input - that is, variable names - the command fails.
In effect, your call is equivalent to the following, which predictably fails:
PS> Get-Date | Remove-Variable -Force
Remove-Variable : The input object cannot be bound to any parameters for the command
either because the command does not take pipeline input
or the input and its properties do not match any of the parameters that take pipeline input.
What the somewhat verbose, general error message is implying in this case is that the input object was of the wrong type (because only objects with a .Name property are accepted, which [datetime] doesn't have).
Contexts in which you need to refer to a variable itself rather than to its value:
What these contexts have in common is that you need to specify the variable name without the $ sigil.
Two notable examples:
All *-Variable cmdlets expect the names of variables to operate on, such as the Get-Variable cmdlet that returns objects representing variables, of type System.Management.Automation.PSVariable; these objects include the name, value, and other attributes of a PowerShell variable.
# Gets an object describing variable $date
$varObject = Get-Variable date # -Name parameter implied
When you pass the name of an output variable to a -*Variable common parameter
# Prints Get-Date's output while also capturing the output
# in variable $date.
Get-Date -OutVariable date
As implied, above, assigning to a variable with = is the only exception: there you do use the $ sigil, e.g. $date = Get-Date.
Note that this differs from POSIX-compatible shells such as bash, where you do not use $ in assignments (and must not have whitespace around =); e.g., date=$(date).

Is it possible to have a scriptblock evaluated inside a string?

I would like for the resulting value in $s to be "now is then for today"
PS H:\> $s = "now is $({if (1 -eq 1){'then'}}) for today"
PS H:\> $s
now is if (1 -eq 1){'then'} for today
It's definitely possible, and pretty easy with subexpressions
You were close, just need to remove the outer set of curly braces
$s = "now is $(if (1 -eq 1){'then'}) for today"
$s
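If you actually have a script block object on hand (rather than just inline statements), you can invoke it inside the subexpression with the call operator &; a quick sketch:
$sb = { if (1 -eq 1) { 'then' } } # a pre-existing script block
$s = "now is $(& $sb) for today"  # & invokes it; its output is interpolated
$s # -> 'now is then for today'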
Delay-bind script-block arguments are an implicit feature that:
only works with parameters that are designed to take pipeline input,
of any type except the following, in which case regular parameter binding happens[1]:
[scriptblock]
[object] ([psobject], however, does work, and therefore [pscustomobject] too)
(no type specified), which is effectively the same as [object]
whether such parameters accept pipeline input by value (ValueFromPipeline) or by property name (ValueFromPipelineByPropertyName) is irrelevant.
enables per-input-object transformations via a script block passed instead of a type-appropriate argument; the script block is evaluated for each pipeline object, which is accessible inside the script block as $_, as usual, and the script block's output - which is assumed to be type-appropriate for the parameter - is used as the argument.
Since such ad-hoc script blocks by definition do not match the type of the parameter you're targeting, you must always use the parameter name explicitly when passing them.
Delay-bind script blocks unconditionally provide access to the pipeline input objects, even if the parameter would ordinarily not be bound by a given pipeline object, if it is defined as ValueFromPipelineByPropertyName and the object lacks a property by that name.
This enables techniques such as the following call to Rename-Item, where the pipeline input from Get-Item is - as usual - bound to the -LiteralPath parameter, but passing a script block to -NewName - which would ordinarily only bind to input objects with a .NewName property - enables access to the same pipeline object and thus deriving the destination filename from the input filename:
Get-Item file | Rename-Item -NewName { $_.Name + '1' } # renames 'file' to 'file1'; input binds to both -LiteralPath (implicitly) and the -NewName script block.
Note: Unlike script blocks passed to ForEach-Object or Where-Object, for example, delay-bind script blocks run in a child variable scope[2], which means that you cannot directly modify the caller's variables, such as incrementing a counter across input objects.
As a workaround, use a [ref]-typed variable declared in the caller's scope and access its .Value property inside the script block - see this answer for an example.
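As a minimal sketch of that workaround (hypothetical *.txt input files; -WhatIf makes it a dry run):
$counter = [ref] 0 # [ref]-wrapped counter declared in the caller's scope
Get-ChildItem *.txt |
    Rename-Item -WhatIf -NewName {
        $counter.Value++                        # modifying .Value works across the scope boundary
        '{0:D3}_{1}' -f $counter.Value, $_.Name # e.g. '001_notes.txt'
    }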

Wrapper function for cmdlet - pass remaining parameters

I'm writing a function that wraps a cmdlet using ValueFromRemainingArguments (as discussed here).
The following simple code demonstrates the problem:
works
function Test-WrapperArgs {
    Set-Location @args
}
Test-WrapperArgs -Path C:\
does not work
function Test-WrapperUnbound {
    Param(
        [Parameter(ValueFromRemainingArguments)] $UnboundArgs
    )
    Set-Location @UnboundArgs
}
Test-WrapperUnbound -Path C:\
Set-Location: F:\cygwin\home\thorsten\.config\powershell\test.ps1:69
Line |
69 | Set-Location @UnboundArgs
| ~~~~~~~~~~~~~~~~~~~~~~~~~
| A positional parameter cannot be found that accepts argument 'C:\'.
I tried getting to the issue with GetType and EchoArgs from the PowerShell Community Extensions to no avail. At the moment I'm almost considering a bug (maybe related to this ticket??).
The best solution for an advanced function (one that uses a [CmdletBinding()] attribute and/or a [Parameter()] attribute) is to scaffold a proxy (wrapper) function via the PowerShell SDK, as shown in this answer.
This involves essentially duplicating the target command's parameter declarations (albeit in an automatic, but static fashion).
If you do not want to use this approach, your only option is to perform your own parsing of the $UnboundArgs array (technically, it is an instance of [System.Collections.Generic.List[object]]), which is cumbersome, however, and not foolproof:
function Test-WrapperUnbound {
    Param(
        [Parameter(ValueFromRemainingArguments)] $UnboundArgs
    )
    # (Incompletely) emulate PowerShell's own argument parsing by
    # building a hashtable of parameter-argument pairs to pass through
    # to Set-Location via splatting.
    $htPassThruArgs = @{}; $key = $null
    switch -regex ($UnboundArgs) {
        '^-(.+)' { if ($key) { $htPassThruArgs[$key] = $true } $key = $Matches[1] }
        default  { $htPassThruArgs[$key] = $_; $key = $null }
    }
    if ($key) { $htPassThruArgs[$key] = $true } # trailing switch param.
    # Pass the resulting hashtable via splatting.
    Set-Location @htPassThruArgs
}
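With that in place, the original call pattern should work again; a quick check (assuming the C:\ drive exists):
Test-WrapperUnbound -Path C:\ # '-Path' and 'C:\' are reassembled into @{ Path = 'C:\' } and splatted
Get-Location # -> C:\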
Note:
This isn't foolproof in that your function won't be able to distinguish between an actual parameter name (e.g., -Path) and a string literal that happens to look like a parameter name (e.g., '-Path')
Also, unlike with the scaffolding-based proxy-function approach mentioned at the top, you won't get tab-completion for any pass-through parameters and the pass-through parameters won't be listed with -? / Get-Help / Get-Command -Syntax.
If you don't mind having neither tab-completion nor syntax help and/or your wrapper function must support pass-through to multiple or not-known-in-advance target commands, using a simple (non-advanced) function with @args (as in your working example; see also below) is the simplest option, assuming your function doesn't itself need to support common parameters (which requires an advanced function).
Using a simple function also implies that common parameters are passed through to the wrapped command only (whereas an advanced function would interpret them as meant for itself, though their effect usually propagates to calls inside the function; with a common parameter such as -OutVariable, however, the distinction matters).
As for what you tried:
While PowerShell does support splatting via arrays (or array-like collections such as [System.Collections.Generic.List[object]]) in principle, this only works as intended if all elements are to be passed as positional arguments and/or if the target command is an external program (about whose parameter structure PowerShell knows nothing, and always passes arguments as a list/array of tokens).
In order to pass arguments with named parameters to other PowerShell commands, you must use hashtable-based splatting, where each entry's key identifies the target parameter and the value the parameter value (argument).
Even though the automatic $args variable is technically also an array ([object[]]), PowerShell has built-in magic that allows splatting with @args to also work with named parameters - this does not work with any custom array or collection.
Note that the automatic $args variable, which collects all arguments for which no parameter was declared - is only available in simple (non-advanced) functions and scripts; advanced functions and scripts - those that use the [CmdletBinding()] attribute and/or [Parameter()] attributes - require that all potential parameters be declared.

powershell function switch param with string array

I'm struggling to understand the outputs of the below function
function testApp
{
    param(
        [string] $appName,
        [switch] $sw = $false,
        [string[]] $test,
        [string[]] $test2
    )
    Write-Host $appName - $sw - $test - $test2
}
testApp -appName "TestApp" -sw $true -test "one", "two" -test2 "three","four"
Output: TestApp - True - one two - three four
testApp -appName "TestApp" -sw $true -test "one", "two"
Output: TestApp - True - one two - True
The first output is as expected. But I cannot understand why the second output has "True" for the test2 array when I did not pass it. Can anyone help me in understanding the reason for the behavior? Thanks.
To summarize and complement the helpful comments on the question by Lee_Dailey, Matthew and mclayton:
[switch] parameters in PowerShell (aka flags in other shells):
switch parameters are meant to imply $true vs. $false by their presence in an invocation: e.g., passing -sw by itself signals $true, whereas omitting -sw signals $false.
It is possible to pass a Boolean value explicitly, for the purpose of passing a programmatically determined value; e.g.: -sw:$var
Note the required : following the switch name, which tells PowerShell that the Boolean value belongs to the switch parameter; without it, PowerShell thinks the value is a positional argument meant for a different parameter (see below).
Caveat: Commands may interpret -sw:$false differently from omitting -sw; a prominent example is the use of common parameter -Confirm:$false to override the effective $ConfirmPreference value.
If you need to make this distinction in your own code, use $PSBoundParameters.ContainsKey('sw') -and -not $sw to detect the -sw:$false case (see the sketch after this list).
Do not assign a default value to a switch parameter variable: while technically possible, the convention is that switches default to $false (which is a [switch] instance's default value anyway); that is, a [switch] parameter should always have opt-in logic.
A [switch] parameter variable effectively behaves like a Boolean value in most contexts:
That is, you can say if ($sw) { ... }, for instance.
If you need to access the wrapped Boolean value explicitly, access the .IsPresent property (note that the property name is somewhat confusing, because in a -sw:$false invocation the switch is still present, but its value, as reflected in .IsPresent, is $false).
An example of where .IsPresent is needed is the use of a Boolean as an implicit array index, notably to emulate a ternary conditional[1]: ('falseValue', 'trueValue')[$sw.IsPresent]; without the .IsPresent, the effective Boolean value wouldn't be recognized as such and wouldn't automatically be mapped to index 0 (from $false) or 1 (from $true).
Ultimately, your problem was that you thought $true was an argument for -sw, whereas it became a positional argument implicitly bound to the -test2 parameter.
[switch] parameters never need a value, so the next argument becomes a separate, positional argument - unless you explicitly indicate that the argument belongs to the switch by following the switch name with :, as shown above.[2]
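A quick sketch tying the points above together (hypothetical function name Test-Switch):
function Test-Switch {
    param([switch] $sw)
    if ($PSBoundParameters.ContainsKey('sw') -and -not $sw) {
        'explicit -sw:$false'
    } else {
        ('omitted or false', 'present ($true)')[$sw.IsPresent] # array-index "ternary"
    }
}
Test-Switch            # -> 'omitted or false'
Test-Switch -sw        # -> 'present ($true)'
Test-Switch -sw:$false # -> 'explicit -sw:$false'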
Positional vs. named argument passing in PowerShell:
Terminology note: For conceptual clarity the term argument is used to refer to a value passed to a declared parameter. This avoids the ambiguity of using parameter situationally to either refer to the language construct that receives a value vs. a given value.
Named argument passing (binding) refers to explicitly placing the target parameter name before the argument (typically separated by a space, but alternatively, or additionally, by :); e.g., -AppName foo.
The order in which named arguments are passed never matters.
Positional (unnamed) argument passing refers to passing an argument without preceding it by the name of its target parameter; e.g., foo.
The passing is positional in the sense that the relative position (order) among other unnamed arguments determines what target parameter is implied.
[switch] parameters are the exception in that they:
are typically passed by name only (-sw), implying value $true, and if a value is passed, require : to separate the name from the value.
never support positional binding.
You may combine named passing with positional passing, in which case the named arguments are bound first, after which the positional ones are then considered (in order) for binding to the not-yet-bound parameters.
PowerShell functions are simple functions by default. In order to exercise control over positional binding, use of the [CmdletBinding()] and / or [Parameter()] attributes is necessary (see below), which invariably turn a simple function into an advanced function.
Making a simple function an advanced one has larger behavioral implications (mostly beneficial ones), which are detailed in this answer.
By default, PowerShell functions accept positional arguments for any parameter (other than those of type [switch]), in the order in which the parameters were declared.
Additionally, simple functions accept arbitrary additional arguments for which no parameters were declared, which are collected in the automatic $args array variable.
To prevent your function from accepting any positional arguments by default, place a [CmdletBinding(PositionalBinding=$false, ...)] attribute above the param(...) block.
Since this makes your function an advanced one, this also disables passing arbitrary additional arguments ($args no longer applies and isn't populated).
As an aside: when you implement a cmdlet (a command implemented as a binary, typically via C#), this behavior is implied.
To selectively support positional arguments, decorate individual parameter declarations with a [Parameter(Position=<n>, ...)] attribute (e.g., [Parameter(Position=0)] [string] $Path)
Note: Whether you start your numbering with 0 or 1 doesn't matter, as long as the numbers used reflect the desired ordering among all positional parameters; 0 is advisable as a self-documenting convention, because it is unambiguous.
Attribute [Parameter(Position=<n>)] is an explicit opt-in that selectively overrides [CmdletBinding(PositionalBinding=$false)]: that is, the latter disables positional binding unless explicitly indicated by individual parameter declarations; in fact, the latter is implied by the former, in that once you use one [Parameter(Position=<n>)] attribute, you must use it on all other parameters you want to bind positionally as well.
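A minimal sketch of how these attributes combine (hypothetical function name Test-Positional):
function Test-Positional {
    [CmdletBinding(PositionalBinding = $false)] # disable positional binding by default
    param(
        [Parameter(Mandatory, Position = 0)]    # explicitly opt this parameter back in
        [string] $Path,
        [string] $Filter                        # named-only: -Filter must be spelled out
    )
    "Path=$Path; Filter=$Filter"
}
Test-Positional C:\ -Filter *.txt # OK: 'C:\' binds positionally to -Path
# Test-Positional C:\ *.txt       # error: a positional parameter cannot be found for '*.txt'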
[1] Note that PowerShell [Core] 7.0+ supports ternary conditionals natively: $sw ? 'trueValue' : 'falseValue'
[2] In effect, [switch] parameters are the only type for which PowerShell supports an optional argument. See this answer for more information.