Find the values in ValidateSet - powershell

I was wondering if there is a way to retrieve the values used for ValidateSet in a Param() clause. Something like this would be great:
Function Foo {
    Param (
        [ValidateSet('Startup', 'Shutdown', 'LogOn', 'LogOff')]
        [String]$Type = 'Startup'
    )
    $Type.ValidateSet
}
But of course there is no such property on the Type object. Is it possible to retrieve the values set in ValidateSet?

function Foo {
    param (
        [ValidateSet('Startup', 'Shutdown', 'LogOn', 'LogOff')]
        [String]$Type = 'Startup'
    )
    $ParameterList = (Get-Command -Name $MyInvocation.MyCommand).Parameters
    $ParameterList["Type"].Attributes.ValidValues
}
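Calling the function should then output the valid values (assuming default, non-strict mode settings):
PS> Foo
Startup
Shutdown
LogOn
LogOff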
After your comment:
param (
    [ValidateSet('Startup', 'Shutdown', 'LogOn', 'LogOff')]
    [String]$Type = 'Startup'
)
(Get-Variable "Type").Attributes.ValidValues
The Get-Variable call also works in a function.
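For illustration, a minimal sketch of the function form:
function Foo {
    param (
        [ValidateSet('Startup', 'Shutdown', 'LogOn', 'LogOff')]
        [String]$Type = 'Startup'
    )
    # Inside the function, Get-Variable finds the local parameter variable.
    (Get-Variable 'Type').Attributes.ValidValues
}
Foo   # -> Startup, Shutdown, LogOn, LogOff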

All solutions below work in both functions and scripts.
Most robust solution that should work in any invocation scenario, PSv2+:
param (
    [ValidateSet('Startup', 'Shutdown', 'LogOn', 'LogOff')]
    [String]$Type = 'Startup'
)
# -> @('Startup', 'Shutdown', ...)
($MyInvocation.MyCommand.Parameters['Type'].Attributes |
    Where-Object { $_ -is [ValidateSet] }).ValidValues
A simpler, but potentially fragile PSv3+ solution, which assumes:
that Set-StrictMode is either set to -Version 1 or not set.
Set-StrictMode may have been set outside of your control, so if you don't fully control the execution environment, it is safer to use the more verbose, PSv2-compatible command above.
(The Set-StrictMode setting behaves like a variable: it is inherited by descendent scopes, but setting it in a descendent scope sets it locally (only affects that scope and its descendants).)
However:
You can explicitly run Set-StrictMode -Off or Set-StrictMode -Version 1 at the start of your script / function, though you may want to restore the desired value afterwards. Whatever mode you set will affect descendant scopes too. Note that there is no way to query the strict mode currently in effect.
If you define a function as part of a module, the outside world's Set-StrictMode setting does not apply.
that running into this bug (still present as of PowerShell 7.3.1) when repeatedly dot-sourcing a script is not a concern.
param (
    [ValidateSet('Startup', 'Shutdown', 'LogOn', 'LogOff')]
    [String]$Type = 'Startup'
)
# Assumes that at most Set-StrictMode -Version 1 is in effect.
# You could explicitly run Set-StrictMode -Off or Set-StrictMode -Version 1
# in here first.
(Get-Variable Type).Attributes.ValidValues
Optional background information
The PSv3+ shorthand syntax (Get-Variable Type).Attributes.ValidValues is essentially the equivalent of:
(Get-Variable Type).Attributes | ForEach-Object { $_.ValidValues }
That is, PowerShell automatically enumerates the collection .Attributes and collects the values of each element's .ValidValues property.
In the case at hand, only one attribute in the .Attributes collection - the one of subtype [System.Management.Automation.ValidateSetAttribute] - has a .ValidValues property, so that single value is returned.
Given that the other attributes have no such property, setting Set-StrictMode to -version 2 or higher causes the attempt to access a nonexistent property to raise an error, and the command fails.
((Get-Variable Type).Attributes |
    Where-Object { $_ -is [System.Management.Automation.ValidateSetAttribute] }).ValidValues
bypasses this problem by explicitly targeting the one attribute of interest (using the -is operator to identify it by type) that is known to have a .ValidValues property.
The more verbose alternative to accessing the attributes of parameter [variable] $Type with (Get-Variable Type).Attributes is to use $MyInvocation.MyCommand.Parameters['Type'].Attributes.
Use of the $MyInvocation.MyCommand.Parameters collection enables enumerating and inspecting all parameters without needing to know their names in advance.
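As a sketch of that technique (the function and parameter names here are invented for illustration), the following enumerates every parameter of the enclosing function and reports any ValidateSet values it finds:
function Get-AllValidateSets {
    param (
        [ValidateSet('Startup', 'Shutdown')] [string]$Type,
        [ValidateSet('High', 'Low')] [string]$Priority
    )
    foreach ($entry in $MyInvocation.MyCommand.Parameters.GetEnumerator()) {
        # Look for a ValidateSet attribute among each parameter's attributes.
        $validateSet = $entry.Value.Attributes |
            Where-Object { $_ -is [System.Management.Automation.ValidateSetAttribute] }
        if ($validateSet) {
            [pscustomobject] @{ Parameter = $entry.Key; ValidValues = $validateSet.ValidValues }
        }
    }
}
Get-AllValidateSets   # reports the sets for both -Type and -Priority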
David Brabant's answer is helpful, but (as of this writing):
It may create the mistaken impression that separate approaches are needed for scripts and functions.
The Get-Command -Name $MyInvocation.MyCommand part is:
unnecessary, because $MyInvocation.MyCommand itself provides the information of interest:
$MyInvocation.MyCommand is an instance of type [System.Management.Automation.ExternalScriptInfo] in scripts, and of type [System.Management.Automation.FunctionInfo] in functions, both of which derive from type [System.Management.Automation.CommandInfo], which is the type that Get-Command returns - so not only do they provide the same information, they also unambiguously refer to the enclosing script/function.
brittle:
$MyInvocation.MyCommand is converted to a string due to being passed to the -Name parameter, which in a script results in the script's mere filename (e.g., script.ps1), and in a function in the function's name (e.g., Foo).
In a script, this will typically cause Get-Command not to find the script at all - unless that script happens to be in the PATH (one of the directories listed in $env:PATH). But that also means that a different script that happens to have the same filename and that happens to be / come first in the PATH may be matched, yielding incorrect results.
In short: Get-Command -Name $MyInvocation.MyCommand in scripts will often break, and when it does return a result, it may be for the wrong script.
In a function, it can identify the wrong command too, although that is much less likely:
Due to PowerShell's command precedence, a given name is first interpreted as an alias, and then as a function, so, in theory, with a Foo alias defined, Get-Command -Name $MyInvocation.MyCommand inside function Foo would mistakenly return information about the alias.
(It's nontrivial to invoke function Foo while alias Foo is defined, but it can be done; e.g.: & (Get-Item Function:Foo))

ValidateScript can provide a more flexible solution and works well if you need additional parameter validation. It also allows you to get the list of valid values outside of the Foo function, via the creation of the get-validTypes function.
Function Foo {
    Param (
        [ValidateScript({ test-validTypes $_ })]
        [String]$Type = 'Startup'
    )
    get-validTypes
}
function get-validTypes {
    $powerOptions = @('Startup', 'Shutdown', 'LogOn', 'LogOff')
    Write-Output $powerOptions
}
function test-validTypes {
    [cmdletbinding()]
    param ($typeInput)
    $validTypes = get-validTypes
    if ($validTypes.Contains($typeInput)) {
        return $true
    } else {
        Write-Error "Invalid Type parameter. Must be one of the following: $validTypes"
    }
}
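For example (hypothetical invocations; results paraphrased):
# The valid values are now retrievable outside of Foo:
get-validTypes          # -> Startup, Shutdown, LogOn, LogOff
# An invalid value fails validation with the custom error:
Foo -Type 'Hibernate'   # -> parameter validation error
# A valid value passes validation; Foo then lists the valid types:
Foo -Type 'Shutdown'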

Why does pipeline parameter cause error when combined with PSDefaultParameterValues?

My PowerShell function should accept a list of valid paths to a mix of files and/or directories, either as a named parameter or via the pipeline, filter for files that match a pattern, and return the list of matching files.
$Paths = 'C:\MyFolder\','C:\MyFile'
This works: Get-Files -Paths $Paths
This doesn't: $Paths | Get-Files
$PSDefaultParameterValues = @{
"Get-Files:Paths" = ( Get-Location ).Path
}
[regex]$DateRegex = '(20\d{2})([0-1]\d)([0-3]\d)'
[regex]$FileNameRegex = '^term2-3\.7_' + $DateRegex + '\.csv$'
Function Get-Files {
    [CmdletBinding()]
    [OutputType([System.IO.FileInfo[]])]
    [OutputType([System.IO.FileInfo])]
    param (
        [Parameter(
            Mandatory = $false, # Should default to $cwd provided by $PSDefaultParameterValues
            ValueFromPipeline,
            HelpMessage = "Enter filesystem paths that point either to files directly or to directories holding them."
        )]
        [String[]]$Paths
    )
    begin {
        [System.IO.FileInfo[]]$FileInfos = @()
        [System.IO.FileInfo[]]$SelectedFileInfos = @()
    }
    process { foreach ($Path in $Paths) {
        Switch ($Path) {
            { Test-Path -Path $Path -PathType leaf } {
                $FileInfos += (Get-Item $Path)
            }
            { Test-Path -Path $Path -PathType container } {
                foreach ($Child in (Get-ChildItem $Path -File)) {
                    $FileInfos += $Child
                }
            }
            Default {
                Write-Warning -Message "Path not found: $Path"
                continue
            }
        }
        $SelectedFileInfos += $FileInfos | Where-Object { $_.Name -match $FileNameRegex }
        $FileInfos.Clear()
    } }
    end {
        Return $SelectedFileInfos | Get-Unique
    }
}
I found that both versions work if I remove the default parameter value. Why?
Why does passing a parameter via the pipeline cause an error when that parameter has a default defined in PSDefaultParameterValues, and is there a way to work around this?
Mathias R. Jessen provided the crucial pointer in a comment:
A parameter that is bound via an entry in the dictionary stored in the $PSDefaultParameterValues preference variable is bound before pipeline binding is even attempted, just as if the value had been passed explicitly, as an argument.
Once a given parameter is bound that way, it cannot be bound again via the pipeline, causing an error:
The input object cannot be bound to any parameters for the command either because the command does not take pipeline input or the input and its properties do not match any of the parameters that take pipeline input.
As you can see, the specific problem at hand - a parameter already being bound - is unfortunately not covered by this message. The unspoken part is that once a given parameter has been bound by argument (possibly via $PSDefaultParameterValues), it is removed from the set of candidate pipeline-binding parameters the input could bind to, and if there are no candidates remaining, the error occurs.
The only way to override a $PSDefaultParameterValues preset value is to use an (explicit) argument.
This comment on a related GitHub issue provides details on the order of parameter binding.
A simplified way to reproduce the problem:
& {
    # Preset the -Path parameter for Get-Item
    # In any later Get-Item calls that do not use -Path explicitly, this
    # is the same as calling Get-Item -Path /
    $PSDefaultParameterValues = @{ 'Get-Item:Path' = '/' }
    # Trying to bind the -Path parameter *via the pipeline* now fails,
    # because it has already been bound via $PSDefaultParameterValues.
    # Even without the $PSDefaultParameterValues entry in the picture,
    # you'd get the same error with: '.' | Get-Item -Path /
    '.' | Get-Item
    # By contrast, using an *argument* allows you to override the preset.
    Get-Item -Path .
}
What's happening here?!
This is a timing issue.
PowerShell attempts to bind and process parameter arguments in roughly* the following order:
1. Explicitly named parameter arguments are bound (e.g. -Param $value)
2. Positional arguments are bound (abc in Write-Host abc)
3. Default parameter values are applied for any parameter that wasn't processed during the previous two steps - note that applicable $PSDefaultParameterValues entries always take precedence over defaults defined in the param block
4. The parameter set is resolved, and all mandatory parameters are validated to have values (this only fails if there are no upstream commands in the pipeline)
5. The begin {} blocks of all commands in the pipeline are invoked
6. For any commands downstream in a pipeline: wait for input, then bind it to the most appropriate parameter that hasn't been handled in the previous steps, and invoke the process {} blocks of all commands in the pipeline
As you can see, the value you assign to $PSDefaultParameterValues takes effect in step 3 - long before PowerShell even has a chance to start binding the piped string values to -Paths, in step 6.
*) this is a gross over-simplification, but the point remains: default parameter values must have been handled before pipeline binding starts.
How to work around it?
Given the procedure described above, we should be able to work around this behavior by explicitly naming the parameter we want to bind the pipeline input to.
But how do you combine -Paths with pipeline input?
By supplying a delay-bind script block (or a "pipeline-bound parameter expression" as they're sometimes called):
$Paths | Get-Files -Paths { $_ }
This will cause PowerShell to recognize -Paths during step 1 above - at which point the default value assignment is skipped.
Once the command starts receiving input, it transforms the input value and binds the resulting value to -Paths, by executing the { $_ } block - since we just output the item as-is, the effect is the exact same as when the pipeline input is bound implicitly.
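Since the script block runs once per input object (with $_ as the current item), it can also transform the input before binding; a hypothetical variation that trims trailing backslashes from each piped path:
# Each piped path is normalized before being bound to -Paths:
$Paths | Get-Files -Paths { $_.TrimEnd('\') }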
Digging deeper
If you want to learn more about what happens "behind the curtain", the best tool available is Trace-Command:
$PSDefaultParameterValues['Get-Files:Paths'] = $PWD
Trace-Command -Expression { $Paths | Get-Files } -Name ParameterBinding -PSHost
I should mention that the ParameterBinding tracer is very verbose - which is great for figuring out what's going on - but the output can be a bit overwhelming, in which case you might want to replace the -PSHost parameter with -FilePath .\path\to\output.txt to write the trace output to a file instead.

What is the proper way to define a dynamic ValidateSet in a PowerShell script?

I have a PowerShell 7.1 helper script that I use to copy projects from subversion to my local device. I'd like to make this script easier for me to use by enabling PowerShell to auto-complete parameters into this script. After some research, it looks like I can implement an interface to provide valid parameters via a ValidateSet.
Based on Microsoft's documentation, I attempted to do this like so:
[CmdletBinding()]
param (
    [Parameter(Mandatory)]
    [ValidateSet([ProjectNames])]
    [String]
    $ProjectName,
    #Other params
)
Class ProjectNames : System.Management.Automation.IValidateSetValuesGenerator {
    [string[]] GetValidValues() {
        # logic to return projects here.
    }
}
When I run this, it does not auto-complete and I get the following error:
❯ Copy-ProjectFromSubversion.ps1 my-project
InvalidOperation: C:\OneDrive\Powershell-Scripts\Copy-ProjectFromSubversion.ps1:4
Line |
4 | [ValidateSet([ProjectNames])]
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| Unable to find type [ProjectNames].
This makes sense since the class isn't defined until after the parameters. So I moved the class above the parameters. Obviously this is a syntax error. So how do I do this? Is it not possible in a simple PowerShell script?
Indeed, you've hit a catch-22: for the parameter declaration to work during the script-parsing phase, class [ProjectNames] must already be defined, yet you're not allowed to place the class definition before the parameter declaration.
The closest approximation of your intent using a stand-alone script file (.ps1) is to use the ValidateScript attribute instead:
[CmdletBinding()]
param (
    [Parameter(Mandatory)]
    [ValidateScript(
        { $_ -in (Get-ChildItem -Directory).Name },
        ErrorMessage = 'Please specify the name of a subdirectory in the current directory.'
    )]
    [String] $ProjectName # ...
)
Limitations:
[ValidateScript] does not and cannot provide tab-completion: the script block, { ... }, providing the validation is only expected to return a Boolean, and there's no guarantee that a discrete set of values is even involved.
Similarly, you can't reference the dynamically generated set of valid values (as generated inside the script block) in the ErrorMessage property value.
The only way around these limitations would be to duplicate that part of the script block that calculates the valid values, but that can become a maintenance headache.
To get tab-completion you'll have to duplicate the relevant part of the code in an [ArgumentCompleter] attribute:
[CmdletBinding()]
param (
    [Parameter(Mandatory)]
    [ValidateScript(
        { $_ -in (Get-ChildItem -Directory).Name },
        ErrorMessage = 'Please specify the name of a subdirectory in the current directory.'
    )]
    [ArgumentCompleter(
        {
            param($cmd, $param, $wordToComplete)
            # This is the duplicated part of the code in the [ValidateScript] attribute.
            [array] $validValues = (Get-ChildItem -Directory).Name
            $validValues -like "$wordToComplete*"
        }
    )]
    [String] $ProjectName # ...
)
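With the completer in place, tab-completion should cycle through the subdirectory names of the current directory (hypothetical session; the directory name is made up):
PS> .\Copy-ProjectFromSubversion.ps1 -ProjectName my<TAB>
PS> .\Copy-ProjectFromSubversion.ps1 -ProjectName my-project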

When my powershell cmdlet parameter accepts ValueFromPipelineByPropertyName and I have an alias, how can I get the original property name?

How can a function tell if a parameter was passed in as an alias, or an object in the pipeline's property was matched as an alias? How can it get the original name?
Suppose my Powershell cmdlet accepts pipeline input and I want to use ValueFromPipelineByPropertyName. I have an alias set up because I might be getting a few different types of objects, and I want to be able to do something slightly different depending on what I receive.
This does not work
function Test-DogOrCitizenOrComputer
{
    [CmdletBinding()]
    Param
    (
        # Way Overloaded Example
        [Parameter(Mandatory=$true,
            ValueFromPipeline=$true,
            ValueFromPipelineByPropertyName=$true,
            Position=0)]
        [Alias("Country", "Manufacturer")]
        [string]$DogBreed,
        [Parameter(Mandatory=$true,
            ValueFromPipelineByPropertyName=$true,
            Position=1)]
        [string]$Name
    )
    # For debugging purposes, since the debugger clobbers stuff
    $foo = $MyInvocation
    $bar = $PSBoundParameters
    # This always matches.
    if ($MyInvocation.BoundParameters.ContainsKey('DogBreed')) {
        "Greetings, $Name, you are a good dog, you cute little $DogBreed"
    }
    # These never do.
    if ($MyInvocation.BoundParameters.ContainsKey('Country')) {
        "Greetings, $Name, proud citizen of $Country"
    }
    if ($MyInvocation.BoundParameters.ContainsKey('Manufacturer')) {
        "Greetings, $Name, future ruler of earth, created by $Manufacturer"
    }
}
Executing it, we see problems
At first, it seems to work:
PS> Test-DogOrCitizenOrComputer -Name Keith -DogBreed Basset
Greetings, Keith, you are a good dog, you cute little Basset
The problem is apparent when we try an Alias:
PS> Test-DogOrCitizenOrComputer -Name Calculon -Manufacturer HP
Greetings, Calculon, you are a good dog, you cute little HP
Bonus fail, doesn't work via pipeline:
PS> New-Object PSObject -Property @{'Name'='Fred'; 'Country'='USA'} | Test-DogOrCitizenOrComputer
Greetings, Fred, you are a good dog, you cute little USA
PS> New-Object PSObject -Property @{'Name'='HAL'; 'Manufacturer'='IBM'} | Test-DogOrCitizenOrComputer
Greetings, HAL, you are a good dog, you cute little IBM
Both $MyInvocation.BoundParameters and $PSBoundParameters contain the defined parameter names, not any aliases that were matched. I don't see a way to get the real names of arguments matched via alias.
It seems PowerShell is not only being 'helpful' to the user by silently massaging arguments to the right parameters via aliases, but it's also being 'helpful' to the programmer by folding all aliased inputs into the main parameter name. That's fine, but I can't figure out how to determine the actual original parameter passed to the Cmdlet (or the object property passed in via pipeline)
How can a function tell if a parameter was passed in as an alias, or an object in the pipeline's property was matched as an alias? How can it get the original name?
I don't think there is any way for a function to know whether an alias was used, but the point is that it shouldn't matter: inside the function, you should always refer to the parameter by its primary name.
If you need the parameter to act differently depending on whether an alias was used, that is not what aliases are for; you should instead use separate parameters, or a second parameter that acts as a switch.
By the way, if you're doing this because you want multiple parameters to accept ValueFromPipelineByPropertyName, you can already do that with individual parameters; you don't need aliases to achieve it.
Accepting pipeline input by value does need to be unique per input type (e.g., only one string parameter can bind by value, only one int, etc.), but accepting pipeline input by property name can be enabled for every parameter (because each parameter name is unique).
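For instance, a minimal sketch (function name invented for illustration) in which two different parameters each bind from pipeline input by property name, with no aliases involved:
function Test-Greeting {
    [CmdletBinding()]
    param (
        [Parameter(ValueFromPipelineByPropertyName)] [string]$Country,
        [Parameter(ValueFromPipelineByPropertyName)] [string]$Manufacturer,
        [Parameter(ValueFromPipelineByPropertyName)] [string]$Name
    )
    process {
        # Each property of the input object binds to the parameter of the same name.
        if ($PSBoundParameters.ContainsKey('Country')) { "Greetings, $Name, proud citizen of $Country" }
        if ($PSBoundParameters.ContainsKey('Manufacturer')) { "Greetings, $Name, created by $Manufacturer" }
    }
}
New-Object PSObject -Property @{'Name'='HAL'; 'Manufacturer'='IBM'} | Test-Greeting
# -> Greetings, HAL, created by IBM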
I banged my head quite hard on this, so I'd like to write down the state of my understanding. The solution is at the bottom (such as it is).
First, quickly: if you alias the command, you can get the alias easily with $MyInvocation.InvocationName. But that doesn't help with parameter aliases.
Works in some cases
You can get some joy by pulling the commandline that invoked you:
function Do-Stuff {
    [CmdletBinding()]param(
        [Alias('AliasedParam')]$Param
    )
    $InvocationLine = $MyInvocation.Line.Substring($MyInvocation.OffsetInLine - 1)
    return $InvocationLine
}
$a = 42; Do-Stuff -AliasedParam $a; $b = 23
# Do-Stuff -AliasedParam $a; $b = 23
This will show the alias names. You could parse them with regex, but I'd suggest using the language parser:
$InvocationAst = [Management.Automation.Language.Parser]::ParseInput($InvocationLine, [ref]$null, [ref]$null)
$InvocationAst.EndBlock.Statements[0].PipelineElements[0].CommandElements.ParameterName
That will get you a list of parameters as they were called. However, it's flimsy:
Doesn't work for splats
Doesn't work for ValueFromPipelineByPropertyName
Abbreviated param names will cause extra headache
Only works in the function body; in a dynamicparam block, the $MyInvocation properties are not yet populated
Doesn't work
I did a deep dive into ParameterBinderController - thanks to Rohn Edwards for some reflection snippets.
This is not going to get you anywhere. Why not? Because the relevant method has no side effects - it just moves seamlessly from canonical param names to aliases. Reflection ain't enough; you would need to attach a debugger, which I do not consider to be a code solution.
This is why Trace-Command never shows the alias resolution. If it did, you might be able to hook the trace provider.
Doesn't work
Register-ArgumentCompleter takes a scriptblock which accepts a CommandAst. This AST holds the aliased param names as tokens. But you won't get far in a script, because argument completers are only invoked when you interactively tab-complete an argument.
There are several completer classes that you could hook into; this limitation applies to them all.
Doesn't work
I messed about with custom parameter attributes, e.g. class HookAttribute : System.Management.Automation.ArgumentTransformationAttribute. These receive an EngineIntrinsics argument. Unfortunately, you get no new context; parameter binding has already been done when attributes are invoked, and the bindings you'll find with reflection are all referring to the canonical parameter name.
The Alias attribute itself is a sealed class.
Works
Where you can get joy is with the PreCommandLookupAction hook. This lets you intercept command resolution. At that point, you have the args as they were written.
This sample returns the string AliasedParam whenever you use the param alias. It works with abbreviated param names, colon syntax, and splatting.
$ExecutionContext.InvokeCommand.PreCommandLookupAction = {
    param ($CommandName, $EventArgs)
    if ($CommandName -eq 'Do-Stuff' -and $EventArgs.CommandOrigin -eq 'Runspace')
    {
        $EventArgs.CommandScriptBlock = {
            # not sure why, but Global seems to be required
            $Global:_args = $args
            & $CommandName @args
            Remove-Variable _args -Scope Global
        }.GetNewClosure()
        $EventArgs.StopSearch = $true
    }
}
function Do-Stuff
{
    [CmdletBinding()]
    param
    (
        [Parameter()]
        [Alias('AliasedParam')]
        $Param
    )
    $CalledParamNames = @($_args) -match '^-' -replace '^-' -replace ':$'
    $CanonParamNames = $MyInvocation.BoundParameters.Keys
    $AliasParamNames = $CanonParamNames | ForEach-Object {$MyInvocation.MyCommand.Parameters[$_].Aliases}
    # Filter out abbreviations that could match canonical param names (they take precedence over aliases)
    $CalledParamNames = $CalledParamNames | Where-Object {
        $CalledParamName = $_
        -not ($CanonParamNames | Where-Object {$_.StartsWith($CalledParamName)} | Select-Object -First 1)
    }
    # Param aliases that would bind, so we infer that they were used
    $BoundAliases = $AliasParamNames | Where-Object {
        $AliasParamName = $_
        $CalledParamNames | Where-Object {$AliasParamName.StartsWith($_)} | Select-Object -First 1
    }
    $BoundAliases
}
# Do-Stuff -AliasP 42
# AliasedParam
If the Global variable offends you, you could use a helper parameter instead:
$EventArgs.CommandScriptBlock = {
    & $CommandName @args -_args $args
}.GetNewClosure()
[Parameter(DontShow)]
$_args
The drawback is that some fool might actually use the helper parameter, even though it's hidden with DontShow.
You could develop this approach further by doing a dry-run call of the parameter binding mechanism in the function body or the CommandScriptBlock.

Can I specify conditional default values for a parameter in PowerShell?

I thought if this was possible it might work using parameter sets so I tried the following:
Function New-TestMultipleDefaultValues {
    [CmdletBinding(DefaultParameterSetName="Default1")]
    param (
        [Parameter(Mandatory,ParameterSetName="Default1")]$SomeOtherThingThatIfSpecifiedShouldResultInTest1HavingValue1,
        [Parameter(ParameterSetName="Default1")]$Test1 = "Value1",
        [Parameter(ParameterSetName="Default2")]$Test1 = "Value2"
    )
    $PSBoundParameters
}
Executing this to create the function results in the error "Duplicate parameter $test1 in parameter list.", so it doesn't look like this approach is an option.
The only thing I can think of at this point is to do something like this:
Function New-TestMultipleDefaultValues {
    param (
        $SomeOtherThingThatIfSpecifiedShouldResultInTest1HavingValue1,
        $Test1
    )
    if (-not $Test1 -and $SomeOtherThingThatIfSpecifiedShouldResultInTest1HavingValue1) {
        $Test1 = "Value1"
    } elseif (-not $Test1 -and -not $SomeOtherThingThatIfSpecifiedShouldResultInTest1HavingValue1) {
        $Test1 = "Value2"
    }
    $Test1
}
Which works but seems ugly:
PS C:\Users\user> New-TestMultipleDefaultValues -SomeOtherThingThatIfSpecifiedShouldResultInTest1HavingValue1 "thing"
Value1
PS C:\Users\user> New-TestMultipleDefaultValues
Value2
PS C:\Users\user> New-TestMultipleDefaultValues -Test1 "test"
test
Any better way to accomplish this?
The following should work:
Since there is then no longer a need for explicit parameter sets, I've omitted them; without specific properties, the [Parameter()] attributes aren't strictly needed anymore either.
Function New-TestMultipleDefaultValues {
    [CmdletBinding()]
    param (
        [Parameter()] $SomeOtherThing,
        [Parameter()] $Test1 =
            ('Value2', 'Value1')[$PSBoundParameters.ContainsKey('SomeOtherThing')]
    )
    # * As expected, if -Test1 <value> is explicitly specified,
    #   parameter variable $Test1 receives that value.
    # * If -Test1 is omitted, the expression assigns 'Value1' to $Test1
    #   if -SomeOtherThing was specified, and 'Value2' otherwise.
    $Test1 # Output the effective value of $Test1
}
It is possible to use expressions as parameter default values.
The above code is an expression and therefore can be used as-is.
To use a single command (a call to a PowerShell cmdlet, function, script or to an external program) as an expression, enclose it in (...), the grouping operator.
In all other cases you need $(...), the subexpression operator (or @(...), the array-subexpression operator) to convert the code to an expression; these cases are:
A Throw statement (and, hypothetically, exit and return statements, but you wouldn't use them in this context)
A compound construct such as foreach, while, ...
Multiple commands, expressions, or compound constructs, separated with ;
However, it is safe to always use $(...) (or @(...)) to enclose the code that calculates the default value, which you may opt to do for simplicity.
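A quick sketch contrasting the two forms (parameter names invented; save as a .ps1 script):
param (
    # A single command needs only (...), the grouping operator:
    $Stamp = (Get-Date -Format 'yyyy-MM-dd'),
    # Multiple statements or compound constructs need $(...):
    $Tag = $(if ($env:USERNAME) { $env:USERNAME } else { 'unknown' })
)
"$Stamp $Tag"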
These expressions are evaluated after the explicitly specified parameters have been bound, which allows an expression to examine what parameters have been bound, via the automatic $PSBoundParameters variable:
('Value2', 'Value1')[$PSBoundParameters.ContainsKey('SomeOtherThing')] is simply a more concise reformulation of
if ($PSBoundParameters.ContainsKey('SomeOtherThing')) { 'Value1' } else { 'Value2' }
that takes advantage of [bool] values mapping onto 0 ($false) and 1 ($true) when used as an array index (integer).
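To see the index trick in isolation:
# A [bool] used as an array index coerces to an integer: $false -> 0, $true -> 1
('Value2', 'Value1')[$false]   # -> 'Value2'
('Value2', 'Value1')[$true]    # -> 'Value1'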
In PowerShell v7+ you could use a ternary conditional instead, which has the added advantage of short-circuiting the evaluation:
$PSBoundParameters.ContainsKey('SomeOtherThing') ? 'Value1' : 'Value2'
You may want to look at dynamic parameters. You declare a section called dynamicparam {} and inside it you can create parameters on the fly.

How to pass a switch parameter to another PowerShell script?

I have two PowerShell scripts, which have switch parameters:
compile-tool1.ps1:
[CmdletBinding()]
param(
    [switch]$VHDL2008
)
Write-Host "VHDL-2008 is enabled: $VHDL2008"
compile.ps1:
[CmdletBinding()]
param(
    [switch]$VHDL2008
)
if (-not $VHDL2008)
{ compile-tool1.ps1 }
else
{ compile-tool1.ps1 -VHDL2008 }
How can I pass a switch parameter to another PowerShell script, without writing big if..then..else or case statements?
I don't want to convert the parameter $VHDL2008 of compile-tool1.ps1 to type bool, because, both scripts are front-end scripts (used by users). The latter one is a high-level wrapper for multiple compile-tool*.ps1 scripts.
You can specify $true or $false on a switch using the colon-syntax:
compile-tool1.ps1 -VHDL2008:$true
compile-tool1.ps1 -VHDL2008:$false
So just pass the actual value:
compile-tool1.ps1 -VHDL2008:$VHDL2008
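Applied to the question's compile.ps1, the if/else then collapses to a single forwarding call (a minimal sketch):
[CmdletBinding()]
param(
    [switch]$VHDL2008
)
# Forward the switch's actual value using the colon syntax:
.\compile-tool1.ps1 -VHDL2008:$VHDL2008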
Try
compile-tool1.ps1 -VHDL2008:$VHDL2008.IsPresent
Assuming you are iterating on development, it is highly likely that at some point you will add other switches and parameters to your main script that need to be passed down to the next called script. Using the previous answers, you would have to find each call and rewrite the line every time you add a parameter. In that case, you can avoid the overhead by doing the following:
.\compile-tool1.ps1 $($PSBoundParameters.GetEnumerator() | ForEach-Object {"-$($_.Key) $($_.Value)"})
The automatic variable $PSBoundParameters is a hashtable containing the parameters explicitly passed to the script.
Please note that script.ps1 -SomeSwitch is equivalent to script.ps1 -SomeSwitch:$true, and script.ps1 is equivalent to script.ps1 -SomeSwitch:$false. Hence, including the switch set to false is equivalent to not including it.
According to a PowerShell team blog post (link below), since v2 there is a technique called splatting. Basically, you forward all the bound parameters by splatting the automatic variable $PSBoundParameters, written as @PSBoundParameters at the call site. Details about splatting and the difference between @ and $ are explained in the Microsoft Docs article (link below).
Example:
parent.ps1
#Begin of parent.ps1
param(
    [Switch] $MySwitch
)
Import-Module .\child.psm1
Call-Child @psBoundParameters
#End of parent.ps1
child.psm1
# Begin of child.psm1
function Call-Child {
    param(
        [switch] $MySwitch
    )
    if ($MySwitch){
        Write-Output "`$MySwitch was specified"
    } else {
        Write-Output "`$MySwitch is missing"
    }
}
#End of child.psm1
Now we can call the parent script with or without the switch
PS V:\sof\splatting> .\parent.ps1
$MySwitch is missing
PS V:\sof\splatting> .\parent.ps1 -MySwitch
$MySwitch was specified
PS V:\sof\splatting>
Update
In my original answer, I dot-sourced the child script instead of importing it as a module. It appears that dot-sourcing another script into the original makes the parent's variables visible to the child, so the following also works:
# Begin of child.ps1
function Call-Child {
    if ($MySwitch){
        Write-Output "`$MySwitch was specified"
    } else {
        Write-Output "`$MySwitch is missing"
    }
}
#End of child.ps1
with
#Begin of parent.ps1
param(
    [Switch] $MySwitch
)
. .\child.ps1
Call-Child # Not even specifying @psBoundParameters
#End of parent.ps1
Maybe this is not the best way to structure a program; nevertheless, this is the way it works.
About Splatting(Microsoft Docs)
How and Why to Use Splatting (passing [switch] parameters)
Another solution. If you declare your parameter with a default value of $false:
[switch] $VHDL2008 = $false
Then the following (the -VHDL2008 option with no value) will set $VHDL2008 to $true:
compile-tool1.ps1 -VHDL2008
If instead you omit the -VHDL2008 option, then this forces $VHDL2008 to use the default $false value:
compile-tool1.ps1
These examples are useful when calling a PowerShell script from a .bat script, as it is tricky to pass a $true/$false bool from a .bat file to PowerShell, because the .bat file will pass the bool as a string, resulting in the error:
Cannot process argument transformation on parameter 'VHDL2008'.
Cannot convert value "System.String" to type "System.Management.Automation.SwitchParameter".
Boolean parameters accept only Boolean values and numbers, such as $True, $False, 1 or 0.