How do I enforce command line argument checking in PowerShell

Here's my script (test.ps1):
[CmdletBinding()]
Param
(
[parameter(Mandatory=$true)][string]$environment,
[switch][bool]$continue=$true
)
Write-Host $environment
Write-Host $continue
Question:
If I invoke this script with an argument name that is only a substring of the declared parameter, like this: PS> .\test.ps1 -envi:blah, PowerShell doesn't seem to check the argument name. I want PowerShell to enforce exact parameter spelling, i.e., it should only accept -environment, which matches the parameter name in the script; for anything else, it should raise an exception. Is that doable? How do I do that?
Thanks.

It's not pretty, but it will keep you from using anything except -environment as a parameter name.
Param
(
[parameter(Mandatory=$true)][string]$environment,
[parameter()]
[ValidateScript({throw "Invalid parameter. 'environment' required."})]
[string]$environmen,
[switch][bool]$continue=$true
)
Write-Host $environment
Write-Host $continue
Edit: As Matt noted in his comment, PowerShell's automatic disambiguation only requires you to specify enough of the parameter name to make a unique prefix match. What I'm doing here is basically declaring a decoy parameter that matches all but the last character of -environment, so that any abbreviation short of the full name is rejected as ambiguous, and the decoy itself throws an error if it is ever matched exactly.
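To illustrate how this plays out at the prompt (a sketch only; the exact error wording varies by version):
.\test.ps1 -environment blah    # exact match on the real parameter - works
.\test.ps1 -envi blah           # prefix matches both $environment and $environmen - ambiguity error
.\test.ps1 -environmen blah     # exact match on the decoy - ValidateScript throws "Invalid parameter. 'environment' required."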
And, FWIW, that could well be the ugliest parameter validation I've ever done but I don't have any better ideas right now.


PowerShell - accessing script parameters, sent from Windows cmd shell

Questions
How can you access the parameters sent to a PowerShell script (.ps1 file)?
Can you access parameters A: by name, B: by position, C: a mix of either?
Context
I recently tried to write a PowerShell script (.ps1) that would be called from a Windows batch file (.bat) (or potentially a cmd shell, or an AutoHotKey script), passing parameters into the .ps1 script for it to use (to display a toast notification). Thanks to the instructions on ss64.com, I have used $args for this kind of thing in the past; however, for some reason I could not access the parameters this way (despite passing parameters, $args[0] = '' (empty string) and $args.Count = 0), so eventually I had to remove all the $args code and replace it with a Param() block instead.
I'm still not quite sure why, but thought this is something I should get to the bottom of before I try to write my next script...
Code Example 1: Args (un-named parameters)
ToastNotificationArgs.ps1
-------------------------
Write-Debug "The script has been passed $($args.Count) parameters"
If (!$args[0]) { # Handle first parameter missing }
If (!$args[1]) { # Handle second parameter missing }
Import-Module -Name BurntToast
New-BurntToastNotification -Text "$args[0], $args[1]"
^ I thought the above code was correct, but like I say, I kept struggling to access the parameters for some reason and could not figure out why. (If anyone can spot what I was doing wrong, please shout!)
Is $args[] a valid approach? I assume so given its use on ss64.com, but maybe there are some pitfalls / limitations I need to be aware of?
Code Example 2: Param (named parameters)
ToastNotificationParams.ps1
---------------------------
Param(
[Parameter(Mandatory=$false, Position=0, ValueFromPipeline=$true)] [string]$Title,
[Parameter(Mandatory=$false, Position=1, ValueFromPipeline=$true)] [string]$Message
)
Import-Module -Name BurntToast
New-BurntToastNotification -Text "$Title, $Message"
^ This was the only way to get my script working in the end. However, when I passed the parameters in, my calling cmd script sent them by position, i.e. (pwsh.exe -File "ToastNotificationParams.ps1" "This is the title" "Message goes here"), rather than as named pairs. (Not sure if this is best practice, but it is how my script was initially intended to be used, so I left it for now.)
While Param() got my script working this time (and I also realise the inherent dangers of position-based parameters), there are times when a position-based approach might be necessary (e.g. the number of parameters is unknown)...
Code Example 3: Hybrid
ToastNotificationMix.ps1
------------------------
Param(
[Parameter(Mandatory=$false, Position=0, ValueFromPipeline=$true)] [string]$Title
)
Import-Module -Name BurntToast
For ( $i = 1; $i -lt $args.count; $i++ ) {
New-BurntToastNotification -Text "$Title, $args[i]"
}
Is something like this valid? If not (or if there is a better solution), any help would be greatly appreciated!
Thanks in advance!
The automatic $args variable is only available in simple (non-advanced) functions / scripts. A script automatically becomes an advanced one by using the [CmdletBinding()] attribute and/or at least one per-parameter [Parameter()] attribute.
Using $args allows a function/script to accept an open-ended number of positional arguments, usually instead of, but also in addition to using explicitly declared parameters.
But it doesn't allow passing named arguments (arguments prefixed by a predeclared target parameter name, e.g., -Title).
For robustness, using an advanced (cmdlet-like) function or script is preferable; such functions / scripts:
They require declaring parameters explicitly.
They accept no arguments other than ones that bind to declared parameters.
However, you can define a single catch-all parameter that collects all positional arguments that don't bind to any of the other predeclared parameters, using [Parameter(ValueFromRemainingArguments)].
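A minimal sketch of that distinction (hypothetical function names; error text abbreviated):
# Simple function: $args collects any unbound positional arguments.
function Show-Args { "Received $($args.Count) argument(s): $args" }
Show-Args one two three            # -> Received 3 argument(s): one two three

# Advanced function: [CmdletBinding()] removes $args; arguments that don't
# bind to a declared parameter now cause a binding error instead.
function Show-ArgsAdvanced { [CmdletBinding()] param([string]$First) $First }
Show-ArgsAdvanced one              # -> one
# Show-ArgsAdvanced one two        # -> error: a positional parameter cannot be found that accepts 'two'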
Explicitly defined parameters are positional by default, in the order in which they are declared inside the param(...) block.
You can turn off this default with [CmdletBinding(PositionalBinding=$false)],
which then allows you to selectively enable positional binding, using the Position property of the individual [Parameter()] attributes.
When you call a PowerShell script via the PowerShell CLI's -File parameter, the invocation syntax is fundamentally the same as when calling the script from inside PowerShell; that is, you can pass named arguments and/or - if supported - positional arguments.
Constraints:
The arguments are treated as literals.
Passing array arguments (,-separated elements) is not supported.
If you do need your arguments to be interpreted as they would be from inside PowerShell, use the -Command / -c CLI parameter instead.
See this answer for guidance on when to use -File vs. -Command.
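For instance, the literal-vs-interpreted difference shows up when passing an expression from cmd.exe (a rough sketch, using the script defined just below):
:: -File passes the token through literally - $Title receives the string "(Get-Date)"
pwsh -File ./ToastNotificationMix.ps1 -Title "(Get-Date)" bar
:: -Command hands the line to PowerShell, so (Get-Date) is evaluated before binding
pwsh -Command "./ToastNotificationMix.ps1 -Title (Get-Date) bar"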
To put it all together:
ToastNotificationMix.ps1:
[CmdletBinding(PositionalBinding=$false)]
Param(
[Parameter(Position=0)]
[string]$Title
,
[Parameter(Mandatory, ValueFromRemainingArguments)]
[string[]] $Rest
)
Import-Module -Name BurntToast
foreach ($restArg in $Rest) {
New-BurntToastNotification -Text "$Title, $restArg"
}
You can then call your script from cmd.exe as follows, for instance (I'm using pwsh.exe, the PowerShell (Core) CLI; for Windows PowerShell, use powershell.exe):
Positional binding only:
:: "foo" binds to $Title, "bar" to $Rest
pwsh -File ./ToastNotificationMix.ps1 foo bar
Mix of named and positional binding:
:: "bar" and "baz" both bind to $Rest
pwsh -File ./ToastNotificationMix.ps1 -Title foo bar baz

"Cannot bind parameter because Param2 is specified more than once"

I am trying to call a PS script via batch file, like so
Powershell.exe -file "C:\Scripts\Blah\Blah\Blah.ps1" -webUID "usernameValue" -webPWD "passwordValue" -Param "param value" -Param2 "param 2 value"
The issue seems to be that the batch file is confusing Param and Param2. It thinks I am setting Param2 twice; however, Param and Param2 are separate parameters altogether. Has anyone experienced this? Is there perhaps a way to explicitly state the param names? Thanks
Param block
# Parameters
Param
(
[string]$WebUID,
[string]$WebPWD,
[string]$Param,
[string]$Param2
)
In an effort to support concise command-line use, PowerShell's "elastic syntax" allows specifying unambiguous prefix substrings of parameter names so that you only need to type as much of a parameter name as is necessary to identify it without ambiguity;
e.g., typing -p to refer to -Path is enough, if no other parameters start with p.
However, an exact match is always recognized, so specifying -Param in your case should unambiguously match the -Param parameter, even though its full name happens to be a prefix substring of the different parameter -Param2.
If the problem were an issue of ambiguity (it isn't), you'd see a different error message. For instance, were you to use the ambiguous -Para, you'd see:
Parameter cannot be processed because the parameter name 'para' is ambiguous. Possible matches include: -Param -Param2.
Instead, the wording of your error message suggests that the exact same parameter name - -Param2 - was indeed specified more than once - even though your sample code doesn't show that.
I've tested the behavior in PSv2 and PSv5.1 / 6.0 alpha 10 - it's conceivable, however, that other versions act differently due to a bug. Do let us know.
Consider an alternative approach:
If you invoked your script from within PowerShell, you could use a single, array-valued parameter - e.g. [string[]] $Params - and then simply pass as many parameters as needed, comma-separated, without needing to specify a distinct parameter name for each value.
Sadly, when invoking a script from outside of PowerShell, this approach won't work, because passing arrays isn't supported from the outside.
There is a workaround, however:
Declare the array-valued parameter decorated with [parameter(ValueFromRemainingArguments=$true)]
Invoke the script with the parameters as a space-separated list at the end of the command.
Applied to your scenario:
If your script defined its parameters as follows:
Param
(
[string]$WebUID,
[string]$WebPWD,
[parameter(ValueFromRemainingArguments=$true)]
[string[]] $Params
)
You could then invoke your script as follows:
Powershell.exe -file "C:\Scripts\Blah\Blah\Blah.ps1" ^
-webUID "usernameValue" ^
-webPWD "passwordValue" ^
"param value" "param 2 value"
and $Params would receive an array of values: $Params[0] would receive param value, and $Params[1] would receive param 2 value.
Note that when calling from outside of PowerShell:
you must not use parameter name -Params in the invocation - just specify the values at the end.
you must not use , to separate the values - use spaces.
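By contrast, from inside PowerShell the same parameter can be bound by name with a comma-separated array (a quick sketch, path shortened):
.\Blah.ps1 -webUID "usernameValue" -webPWD "passwordValue" -Params "param value", "param 2 value"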
I'm no guru, but this looks like it's related to "Partial Parameters" and "Parameter Completion". See this article for more information.
Simply changing Param to Param1 should fix the issue.

Powershell function with Parameters throwing null exception

This script is throwing a null exception and I am not certain why that is the case...
Function StopServices{
Param
(
$ServiceName,
$Remoteserver
)
write-host($Remoteserver)
write-host($ServiceName)
[System.ServiceProcess.ServiceController]$service = Get-Service -Name $ServiceName -ComputerName $Remoteserver
}
The Write-Host lines write the variables. The Get-Service call with -ComputerName throws this exception:
powershell cannot validate argument on parameter 'computername' the argument is null or empty
I am wondering what it is talking about; neither is empty...
StopServices("DUMMY","VALUES")
Neither of those are empty. Why is it throwing that exception?
Unlike most languages, PowerShell does not use parentheses to call a function.
This means three things:
("DUMMY","VALUES") is actually being interpreted as an array. In other words, you are only giving StopServices one argument instead of the two that it requires.
This array is being assigned to $ServiceName.
Due to the lack of a second argument, $Remoteserver is left $null.
To fix the problem, you need to call StopServices like this:
PS > StopServices DUMMY VALUES
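A quick way to see where the values land in each form (a sketch; output shown as comments):
function Show-Binding {
Param($ServiceName, $Remoteserver)
write-host "ServiceName: $ServiceName"
write-host "Remoteserver: $Remoteserver"
}
Show-Binding("DUMMY","VALUES")   # ServiceName: DUMMY VALUES / Remoteserver: (empty)
Show-Binding "DUMMY" "VALUES"    # ServiceName: DUMMY / Remoteserver: VALUES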
Actually, $RemoteServer is null.
StopServices("Dummy", "Values") isn't doing what you think it's doing - PowerShell doesn't take arguments to functions in the same way that other programming languages do. PowerShell is interpreting the syntax you're using as an expression to create an array with two values in it ("DUMMY" and "VALUES), and to store that array in $ServiceName, leaving $RemoteServer as $null.
One of the following examples will give you the behavior you're after:
StopServices "Dummy" "Values"
-or-
StopServices -ServiceName "Dummy" -RemoteServer "Values"
You're calling the function incorrectly. You should be calling it like this:
StopServices "DUMMY" "VALUES"
Or, if you want to be more clear:
StopServices -ServiceName DUMMY -Remoteserver VALUES
The way you are passing the parameters to the function, PowerShell interprets ("DUMMY", "VALUES") as an array, which would get assigned to $ServiceName, leaving $Remoteserver null.

Powershell: Colon in commandlet parameters

What's the deal with Powershell commandlet switch parameters that require a colon?
Consider Exchange 2010 management shell cmdlet Move-ActiveMailboxDatabase. The Confirm switch is a System.Management.Automation.SwitchParameter and must be used like so,
Move-ActiveMailboxDatabase -Confirm:$false
Without the colon, the command fails to recognize the "don't confirm" intent, like so:
Move-ActiveMailboxDatabase -Confirm $false
Why is that? What difference does the colon make there? And why does Exchange 2010 seem to be about the only place I've noticed this behavior?
I've browsed through PowerShell in Action and PowerShell 2.0, but didn't find anything about this syntax. Scope resolution and .NET object access uses of the colon are documented in those books, though.
My Google-fu found an article which claims that it explicitly forwards switch parameter values, but fails to explain what that is about.
When you do:
Move-ActiveMailboxDatabase -Confirm $false
you are not saying that the Confirm parameter accepts $false. You are specifying -Confirm and also passing a separate argument to the cmdlet with the value $false.
Since Confirm is a switch, just the presence of -Confirm means it is true. Absence of -Confirm means it is false.
Let me give you a script example:
param([switch]$test)
write-host Test is $test
If you just run the script without any arguments / parameters, i.e. .\script.ps1, you get the output:
Test is False
If you run it as .\script.ps1 -test, the output is
Test is True
If you run it as .\script.ps1 -test $false, the output is
Test is True
If you run it as .\script.ps1 -test:$false the output is
Test is False
It is in scenarios where the value for a switch variable itself has to be determined from another variable that the : is used.
For example, consider the script:
param ([boolean]$in)
function func([switch] $test){
write-host Test is $test
}
func -test:$in
Here if you run it as .\script.ps1 -in $false, you get
Test is false
If you weren't able to use the :, you would have had to write it as:
if($in){ func -test}
else { func }
The colon can be used when passing a value to any parameter, but it is especially relevant for switch parameters. Switch parameters don't normally take values; they are either present ($true) or absent ($false).
Imagine you have a function like this:
function test-switch ([string]$name,[switch]$force) { ... }
And you call it like so:
test-switch -force $false
Since switch parameters are either present or not, $false would actually bind to the Name parameter. So, how do you bind a value to a switch parameter? With the colon:
test-switch -force:$false
Now the parameter binder knows which parameter the value goes to.
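A quick check of where each form's argument actually lands (a sketch; the function body is just for illustration):
function test-switch ([string]$name,[switch]$force) {
write-host "name=$name force=$force"
}
test-switch -force $false     # name=False force=True   ($false bound positionally to -Name)
test-switch -force:$false     # name= force=False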

Using an answer file with a PowerShell script

I have a PowerShell script with a number of 'params' at the start:
param(
[switch] $whatif,
[string] $importPath = $(Read-Host "Full path to import tool"),
[string] $siteUrl = $(Read-Host "Enter URL to create or update"),
[int] $importCount = $(Read-Host "Import number")
)
Is there any way I can run this against an answer file to avoid entering the parameter values every time?
I am not getting the reason for the question. All you have to do to call your script is something like:
.\script.ps1 -whatif -importPath import_path -siteUrl google.com -importCount 1
The Read-Host calls are there as defaults, to be executed (and their results assigned to the parameters) only if you don't specify the values. As long as you have the above command (saved in a file so that you can copy and paste it into the console, run it from another script, or whatever), you don't have to enter the values again and again.
Start by setting the function or script up to accept pipeline input.
[CmdletBinding(SupportsShouldProcess=$True,ConfirmImpact='Low')]
param(
[Parameter(Mandatory=$True,ValueFromPipelineByPropertyName=$True)]
[string] $importPath,
[Parameter(Mandatory=$True,ValueFromPipelineByPropertyName=$True)]
[string] $siteUrl,
[Parameter(Mandatory=$True,ValueFromPipelineByPropertyName=$True)]
[int] $importCount
)
Notice that I removed your manually-created -whatif. No need for it - I'll get to it in a second. Also note that Mandatory=$True will make PowerShell prompt for a value if it isn't provided, so I removed your Read-Host.
Given the above, you could create an "answer file" that is a CSV file. Make an importPath column, a siteURL column, and an importCount column in the CSV file:
importPath,siteURL,importCount
"data","data",1
"x","y",2
Then do this:
Import-CSV my-csv-file.csv | ./My-Script
Assuming your script is My-Script.ps1, of course.
Now, to -whatif. Within the body of your script, do this:
if ($pscmdlet.shouldprocess($target)) {
# do whatever your action is here
}
This assumes you're doing something to $target, which might be a path, a computer name, a URL, or whatever. It's the thing you're modifying in your script. Put your modification actions/commands inside that if construct. Doing this, along with the SupportsShouldProcess() declaration at the top of the script, will enable -whatif and -confirm support. You don't need to code those parameters yourself.
What you're building is called an "Advanced Function," or if it's just a script then I guess it'd be an "Advanced Script." Utilizing pipeline input parameters in this fashion is the "PowerShell way of doing things."
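Once that's in place, -WhatIf comes along for free on the outer call (a sketch reusing the CSV example above; the exact "What if:" wording depends on your ShouldProcess target):
Import-CSV my-csv-file.csv | ./My-Script.ps1 -WhatIf
# Each row is reported as a "What if: ..." message instead of being acted on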
To my knowledge, PowerShell doesn't have a built-in understanding of answer files. You'll have to pass the values in somehow or read them yourself from the answer file.
Wrapper. You could write another script that calls this script with the same parameters you want to use every time. You could also make a wrapper script that reads the values from the answer file, then passes them in.
Optional Parameters. Or you could change the parameters to use defaults that indicate no parameters were passed, then check for a file of a specific name to read values from. If the file isn't found, then prompt for the values.
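A rough sketch of the wrapper idea (hypothetical file names; JSON is just one convenient format for storing the answers):
# answers.json (hypothetical): { "importPath": "C:\\tools\\import.exe", "siteUrl": "https://example.com", "importCount": 1 }
# RunImport.ps1 - reads the saved values and forwards them to the real script
$answers = Get-Content .\answers.json -Raw | ConvertFrom-Json
.\script.ps1 -importPath $answers.importPath -siteUrl $answers.siteUrl -importCount $answers.importCount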
If the format of the answer file is flexible (i.e., you're only going to be using it with this PowerShell script), you could get much closer to the behavior of an actual answer file by writing it as a PowerShell script itself and dot-sourcing it.
if (Test-Path '.\myAnswerFile.ps1') {
. '.\myAnswerFile.ps1'
# process whatever was sourced from the answer file, if necessary
} else {
# prompt for values
}
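The dot-sourced answer file itself can then be plain variable assignments (hypothetical values):
# myAnswerFile.ps1 - picked up by the dot-source above
$importPath = 'C:\tools\import.exe'
$siteUrl = 'https://example.com/site'
$importCount = 1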
It still requires removing the Read-Host calls from the parameters of the script.
Following on from Joel, you could set up a different parameter set based around a switch, -answerfile.
If that's set, the function will look for an answer file and parse through it - as he said, you'll need to do that yourself. If it's not set and the others are, then the function is used with the parameters given. A minor benefit I see is that you can still have the parameters mandatory when used that way.
Matt