PowerShell module-qualified names and Command Prefix - powershell

I'm trying to make use of module-qualified names[1] and a DefaultCommandPrefix and not have it break if the module is imported with Import-Module -Prefix SomethingElse. Maybe I'm just doing something really wrong here, or those two features aren't meant to be used together.
Inside the main module file, using "ModuleName\Verb-PrefixNoun args..." works as long as "Prefix" matches the DefaultCommandPrefix in the manifest (the module-qualified syntax seems to require the prefix used for the import[2]). But if the module is imported with a different prefix, all module-qualified references inside the module break.
After a bit of searching and trial and error, the least horrible solution I've managed to get working is something like the following hackish approach. But I can't help wondering whether there isn't some better way that automatically handles the prefix (just as Import-Module obviously manages to add the prefix; my first naive thought was that using just ModuleName\Verb-Noun would automatically append any prefix to the noun, but evidently not[2]).
So this is the hack I came up with: it looks up the module's prefix and appends it, then uses "." or "&" to expand/invoke the command:
# (imagine this code in the `ModuleName.psm1`, and a manifest with some `DefaultCommandPrefix`)
Function MQ {
    param (
        [Parameter()][string]
        $Verb,
        [Parameter()][string]
        $Noun,
        [string]
        $Module = 'ModuleName'
    )
    "$Module\$Verb-$((Get-Module $Module).Prefix)$Noun"
}
Function Verb-Noun {
    # This works even when importing with a prefix,
    # but can I be guaranteed that it's not some
    # other module's cmdlet?
    Verb-OtherNoun 1 2 3 '...'
    #ModuleName\Verb-OtherNoun 1 2 3 '...'
    . (MQ 'Verb' 'OtherNoun') 1 2 3 '...'
    # or:
    & (MQ 'Verb' 'OtherNoun') 1 2 3 '...'
}
(MQ could be made more user-friendly by also accepting a single string, MQ "Verb-Noun", and splitting/recombining it automatically, and so on; all the usual disclaimers apply.)
Note: I know it would be possible to hard-code the name instead of using DefaultCommandPrefix, e.g. as PSReadLine does (and a bunch of other modules). But, to be honest that feels like a workaround.
Just calling Verb-OtherNoun seems fragile to me, as the most recently loaded command of that name is used[3]. So I would imagine that, for example, adding an Import-Module statement just before the call, for a module that also exports a Verb-OtherNoun, would cause the wrong (not this module's) cmdlet to be called. (Perhaps a more real-world scenario is a module being loaded after this module has been loaded, but before Verb-Noun is called.)
Is there perhaps some syntax for module-qualification I'm not aware of that would do something akin to what Import-Module does (e.g. Module\\Verb-Noun or Module\Verb+Noun that would resolve and inject Module's prefix)? And now that I think of it, is there some reason why Module\Verb-Noun doesn't handle prefixes, or is it just that no one wrote the code for it? (I can't see how it would break things any more than using DefaultCommandPrefix already breaks v2/v3[2].)
[1] https://www.sapien.com/blog/2015/10/23/using-module-qualified-cmdlet-names/
[2] https://github.com/PoshCode/PowerShellPracticeAndStyle/issues/23#issuecomment-106843619
[3] https://stackoverflow.com/a/22259706/13648152

You can avoid the problem by using neither a module qualifier nor a noun prefix when you call your module's own functions.
That is, call them as Verb-Noun, exactly as named in the target function's implementation.
This is generally safe, because your own module's functions take precedence over any commands of the same name defined outside your module.
The sole exception is if an alias defined in the global scope happens to have the same name as one of your functions - but that shouldn't normally be a concern, because aliases are used for short names that do not follow the verb-noun naming convention.
It also makes sense in that it allows you to call your functions module-internally with an invariable name - a name that doesn't situationally change, depending on what the caller decided to choose as a noun prefix via Import-Module -Prefix ....
Think of the prefix feature as being a caller-only feature that is unrelated to your module's implementation.
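To make that concrete, here is a minimal sketch (hypothetical placeholder names, mirroring the question's ModuleName.psm1) of what module-internal calls then look like:
# ModuleName.psm1 (sketch) - the manifest may still declare a DefaultCommandPrefix
Function Verb-OtherNoun {
    param($a, $b, $c)
    "OtherNoun called with $a $b $c"
}
Function Verb-Noun {
    # Plain, unprefixed name: this resolves to the module's own function,
    # regardless of any -Prefix the importer chose.
    Verb-OtherNoun 1 2 3
}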
As an aside: As of PowerShell 7.0, declaring a default noun prefix via the DefaultCommandPrefix module-manifest property doesn't properly integrate with the module auto-loading and command-discovery features - see this GitHub issue.

Related

In Powershell how to generate an error on invalid flags and switches that are not listed in the param statement?

Trying to get param(...) to have some basic error checking... one thing that puzzles me is how to detect invalid switches and flags that are not in the param list.
function abc {
    param(
        [switch]$one,
        [switch]$two
    )
}
When I use it:
PS> abc -One -Two
# ok... i like this
PS> abc -One -Two -NotAValidSwitch
# No Error here for -NotAValidSwitch? How to make it have an error for invalid switches?
As Santiago Squarzon, Abraham Zinala, and zett42 point out in comments, all you need to do is make your function (or script) an advanced (cmdlet-like) one:
explicitly, by decorating the param(...) block with a [CmdletBinding()] attribute.
and/or implicitly, by decorating at least one parameter variable with a [Parameter()] attribute.
function abc {
    [CmdletBinding()] # Make function an advanced one.
    param(
        [switch]$one,
        [switch]$two
    )
}
An advanced function automatically ensures that only arguments that bind to explicitly declared parameters may be passed.
If unexpected arguments are passed, the invocation fails with a statement-terminating error.
Switching to an advanced script / function has side effects, but mostly beneficial ones:
You gain automatic support for common parameters, such as -OutVariable or -Verbose.
You lose the ability to receive unbound arguments via the automatic $args variable (which is desired here); however, you can declare a catch-all parameter for any remaining positional arguments via [Parameter(ValueFromRemainingArguments)]
To accept pipeline input in an advanced function or script, a parameter must explicitly be declared as pipeline-binding, via the [Parameter(ValueFromPipeline)] (objects as a whole) or [Parameter(ValueFromPipelineByPropertyName)] (value of the input object's property that matches the parameter name) attribute - see the sketch after this list.
For a juxtaposition of simple (non-advanced) and advanced functions, as well as binary cmdlets, see this answer.
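To illustrate the pipeline-binding point above, a minimal sketch (hypothetical function name):
function Measure-TextLength {
    [CmdletBinding()]
    param(
        # Each pipeline input object binds to -Text, one at a time.
        [Parameter(ValueFromPipeline)]
        [string] $Text
    )
    process { $Text.Length }   # the process block runs once per pipeline object
}
'one', 'three' | Measure-TextLength   # -> 3, 5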
If you do not want to make your function an advanced one:
Check if the automatic $args variable - reflecting any unbound arguments (a simpler alternative to $MyInvocation.UnboundArguments) - is empty (an empty array) and, if not, throw an error:
function abc {
    param(
        [switch]$one,
        [switch]$two
    )
    if ($args.Count) { throw "Unexpected arguments passed: $args" }
}
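With that check in place, calls behave roughly like this (sketch):
PS> abc -One -Two
# still fine - both arguments bind to declared switches
PS> abc -One -NotAValidSwitch
# throws: Unexpected arguments passed: -NotAValidSwitch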
Potential reasons for keeping a function a simple (non-advanced) one:
To "cut down on ceremony" in the parameter declarations, e.g. for pipeline-input processing via the automatic $input variable alone.
Generally, for simple helper functions, such as for module- or script-internal use that don't need support for common parameters.
When a function acts as a wrapper for an external program to which arguments are to be passed through and whose parameters (options) conflict with the names and aliases of PowerShell's common parameters, such as -verbose or -ov (-OutVariable).
What isn't a good reason:
When your function is exported from a module and has an irregular name (not adhering to PowerShell's <Verb>-<Noun> naming convention based on approved verbs) and you want to avoid the warning that is emitted when you import that module.
First and foremost, this isn't an issue of simple vs. advanced functions, but relates solely to exporting a function from a module; that is, even an irregularly named simple function will trigger the warning. And the warning exists for a good reason: functions exported from modules are typically "public", i.e. (also) for use by other users, who justifiably expect command names to follow PowerShell's naming conventions, which greatly facilitates command discovery. Similarly, users will expect cmdlet-like behavior from functions exported by a module, so it's best to only export advanced functions.
If you still want to use an irregular name while avoiding a warning, you have two options:
Disregard the naming conventions altogether (not advisable) and choose a name that contains no - character, e.g. doStuff - PowerShell will then not warn. A better option is to choose a regular name and define the irregular names as an alias for it (see below), but note that even aliases have a (less strictly adhered-to) naming convention, based on an official one or two-letter prefix defined for each approved verb, such as g for Get- and sa for Start- (see the approved-verbs doc link above).
If you do want to use the <Verb>-<Noun> convention but use an unapproved verb (token before -), define the function with a regular name (using an approved verb) and also define and export an alias for it that uses the irregular name (aliases aren't subject to the warning). E.g., if you want a command named Ensure-Foo, name the function Set-Foo, for instance, and define Set-Alias Ensure-Foo Set-Foo. Do note that both commands need to be exported and are therefore visible to the importer.
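In module form, that could look roughly like this (hypothetical names):
# FooTools.psm1 (sketch)
Function Set-Foo {
    [CmdletBinding()]
    param()
    "setting up Foo"
}
Set-Alias Ensure-Foo Set-Foo
# Both the regular function name and the irregularly named alias must be exported.
Export-ModuleMember -Function Set-Foo -Alias Ensure-Foo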
Finally, note that the warning can also be suppressed on import, namely via Import-Module -DisableNameChecking. The downside of this approach - aside from placing the burden of silencing the warning on the importer - is that custom classes exported by a module can't be imported this way, because importing such classes requires a using module statement, which has no silencing option (as of PowerShell 7.2.1; see GitHub issue #2449 for background information).

Does RegAsm need a source operator?

I have been asked to review and edit, if needed, some scripts. I have entered the relevant part of this one below. I believe it's correct, but I cannot figure out if the period at the beginning of the second-to-last line is needed or just a typo. It appears to be a source operator, but I don't see why it'd be needed there.
As always, you folks here are the salt of the earth and deserve far more plaudits than you get and than I can give. Thank you so much for continuing to make me look better at this than I am.
$Assembly = 'D:\MgaLin2.dll'
."C:\Windows\Microsoft.NET\Framework\v4.0.30319\RegAsm.exe" -codebase -tlb $Assembly
Copy-Item -Path D:\Mga -Destination "C:\Program Files (x86)\Common Files\COMPANY_NAME\COMPANY_SUBFOLDER\" -Include *.*
The use of a period (.) (the dot-sourcing operator) is unusual in this case, but it works.
More typically, you'll see use of &, the call operator in this situation.
Only for PowerShell scripts (and script blocks, { ... }) does the distinction between . and & matter (. then dot-sources the code, i.e. runs it directly in the caller's scope rather than in a child scope, as & does).
For external programs, such as RegAsm.exe, you can technically use . and & interchangeably, but to avoid conceptual confusion, it is best to stick with &.
Note that the reason that & (or .) is required in this case is that the executable path is quoted; the same would apply if the path contained a variable reference (e.g., $env:ProgramFiles) or a subexpression (e.g., $($name + $ext)).
For more information about this requirement, which is a purely syntactic one, see this answer.
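In other words, the more conventional way to write that line would be the following (same executable; the quoted path is what forces an invocation operator):
& "C:\Windows\Microsoft.NET\Framework\v4.0.30319\RegAsm.exe" -codebase -tlb $Assembly
# Unquoted, this particular path (no spaces, no variables) could even be invoked directly:
C:\Windows\Microsoft.NET\Framework\v4.0.30319\RegAsm.exe -codebase -tlb $Assembly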

What is the meaning of ${} in powershell?

I have a script where function parameters are expressed like this:
param(
${param1},
${param2},
${param3}
)
What does it mean? I have been unable to find documentation on this.
What's the point of writing parameters that way instead of the more usual
param(
$param1,
$param2,
$param3
)
?
@MikeZ's answer is quite correct in explaining the example in the question, but as far as addressing the question title goes, there is actually more to say! The ${} notation actually has two uses; the second one is a hidden gem of PowerShell:
You can use this brace notation to do file I/O operations if you provide a drive-qualified path, as defined in the MSDN page Provider Paths.
(This second use is illustrated in the Complete Guide to PowerShell Punctuation, a one-page wallchart freely available for download, attached to my recent article at Simple-Talk.com.)
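For example (a sketch; any readable/writable file path on a provider drive would do):
${C:\temp\greeting.txt} = 'Hello, world'   # writes the file via the FileSystem provider
${C:\temp\greeting.txt}                    # reads its content back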
They are both just parameter declarations. The two snippets are equivalent. Either syntax can be used here, however the braced form allows characters that would not otherwise be legal in variable names. From the PowerShell 3.0 language specification:
There are two ways of writing a variable name: A braced variable name, which begins with $, followed by a curly bracket-delimited set of one or more almost-arbitrary characters; and an ordinary variable name, which also begins with $, followed by a set of one or more characters from a more restrictive set than a braced variable name allows. Every ordinary variable name can be expressed using a corresponding braced variable name.
From about_Variables
To create or display a variable name that includes spaces or special characters, enclose the variable name in braces. This directs Windows PowerShell to interpret the characters in the variable name literally.
For example, the following command creates and then displays a variable named "save-items".
C:\PS> ${save-items} = "a", "b", "c"
C:\PS> ${save-items}
a
b
c
They are equivalent. It's just an alternative way of declaring a variable.
If you have characters that are illegal in a normal variable name, you'd use the braces (think of it as "escaping" the variable name).
There is one additional usage.
One may have variable names like var1, var2, var11, var12, var101, etc.
Regardless if this is desirable variable naming, it just may be.
Using braces, one can state precisely which variable is meant:
a reference to $var11 may read ambiguously; using ${var1}1 or ${var11} leaves no room for mistakes.
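A quick sketch of the difference, using expandable strings:
$var1  = 'A'
$var11 = 'B'
"${var1}1"   # -> A1  (variable var1, then the literal character 1)
"${var11}"   # -> B   (variable var11)
"$var11"     # -> B   (without braces, the longest valid name, var11, wins)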

Powershell: merging includes into single script

I want to merge script/function having multiple dot sourced scripts into single ps1 script/function. Each script that is included may also have its own includes and so on.
=== EDIT ===
I guess you need to be painfully obvious here on SO, so let me give trivial example:
first.ps1
. $PSScriptRoot\inc\second.ps1
"first"
second.ps1
"second"
Given the existence of function Merge that accepts main script and produces merged script:
Merge first.ps1 first-merged.ps1
the final script will look as:
first-merged.ps1
"second"
"first"
This is far from trivial to do, given that you can dot source in a bunch of different ways, for instance in a loop.
I suppose the "powershell reader" creates something like this internally, so perhaps there is a way to obtain it.
You're looking for something like the C preprocessor? That is, merge the contents without actually executing the script, right? AFAIK PowerShell doesn't delineate between dot sourcing and script execution. Dot sourcing is just another command. So you could either A) do a transitive search via regex of files that are dot sourced or if you're up for a challenge B) use the AST to help find dot sourced files e.g.:
(Get-Command .\first.ps1).ScriptBlock.Ast.EndBlock.Statements.PipelineElements | Where InvocationOperator -eq Dot
Outputs:
CommandElements : {$PSScriptRoot\second.ps1}
InvocationOperator : Dot
DefiningKeyword :
DefinedKeywords :
Redirections : {}
Extent : . $PSScriptRoot\second.ps1
Parent : . $PSScriptRoot\second.ps1
And of course, you'd have to chase down all these dot sourced files to do the same to them (to achieve transitive closure). But as you mention, this can be challenging if say the path contains a variable that you don't know until runtime.
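To sketch what such a Merge function might look like using the AST approach above (hypothetical; it only handles top-level dot-source statements whose paths are literal or $PSScriptRoot-relative - no loops, no runtime-only variables - and comments between statements are dropped):
function Merge-Script {
    param([string]$Path)
    $full = (Resolve-Path $Path).Path
    $root = Split-Path -Parent $full
    $ast  = (Get-Command $full).ScriptBlock.Ast
    foreach ($stmt in $ast.EndBlock.Statements) {
        # Does this top-level statement dot-source something?
        $dot = $null
        if ($stmt -is [System.Management.Automation.Language.PipelineAst]) {
            $dot = $stmt.PipelineElements |
                   Where-Object { $_.InvocationOperator -eq 'Dot' } |
                   Select-Object -First 1
        }
        if ($dot) {
            # Recurse into the dot-sourced file and emit its merged content instead.
            $target = $dot.CommandElements[0].Extent.Text.Replace('$PSScriptRoot', $root)
            Merge-Script -Path $target
        } else {
            $stmt.Extent.Text   # keep the statement verbatim
        }
    }
}
# Usage: Merge-Script .\first.ps1 | Set-Content .\first-merged.ps1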
You haven't specifically asked a question here, but I assume you meant to say "how do I do this". It is a very straightforward process.
For example, from Script1.ps1 (below) we can access functions in Script3.ps1 and Script4.ps1 by simply dot sourcing a single Script2.ps1 which contains these references.
Script1.ps1
. $PSScriptRoot\Script2.ps1
Get-Script3Name
Get-Script4Name
Script2.ps1
# dot source all the scripts you need access to
. $PSScriptRoot\Script3.ps1
. $PSScriptRoot\Script4.ps1
Script3.ps1
Function Get-Script3Name
{
"This is Script3"
}
Script4.ps1
Function Get-Script4Name
{
"This is Script4"
}
Results when running Script1.ps1
This is Script3
This is Script4
For more information, I also recommend reading this older post, which has some very good answers.

What does the period '.' operator do in powershell?

This is a weird one. Normally when I execute an external command from powershell I use the & operator like this:
& somecommand.exe -p somearguments
However, today I came across the . operator used like this:
.$env:systemdrive\chocolatey\chocolateyinstall\chocolatey.cmd install notepadplusplus
What purpose does the period serve in this scenario? I don't get it.
The "." dot sourcing operator will send AND receive variables from other scripts you have called. The "&" call operator will ONLY send variables.
For instance, considering the following:
Script 1 (call-operator.ps1):
clear
$funny = "laughing"
$scriptpath = split-path -parent $MyInvocation.MyCommand.Definition
$filename = "laughing.ps1"
"Example 1:" # Call another script. Variables are passed only forward.
& $scriptpath\$filename
"Example 2:" # Call another script. Variables are passed backwards and forwards.
. $scriptpath\$filename
$variableDefinedInOtherScript
Script 2 (laughing.ps1):
# This is to test the passing of variables from call-operator.ps1
"I am $funny so hard. Passing variables is so hilarious."
$variableDefinedInOtherScript = "Hello World!"
Create both scripts and ONLY run the first one. You'll see that the "." dot sourcing operator sends and receives variables.
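Running call-operator.ps1 should produce output roughly like this (both calls see $funny, but only the dot-sourced call leaves $variableDefinedInOtherScript behind):
Example 1:
I am laughing so hard. Passing variables is so hilarious.
Example 2:
I am laughing so hard. Passing variables is so hilarious.
Hello World!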
Both have their uses, so be creative. For instance, the "&" call operator would be useful if you wanted to modify the value(s) of variables in another script while preserving the original value(s) in your current script. Kind of a safeguard. ;)
The Short:
It is a Special Operator, used to achieve what regular operators cannot. This particular operator, ., actually has two distinctly different Special Operator use cases.
The Long:
As with any other language, scripting or otherwise, PowerShell script also supports many different types of Operators to help manipulate values. These regular operators include:
Arithmetic
Assignment
Comparison
Logical
Redirection
List item
Split and Join
Type
Unary
However, PowerShell also supports what's known as Special Operators, which are used to perform tasks that cannot be performed by the other types of operators.
These Special Operators Include:
@() Array subexpression operator
& Call operator
[ ] Cast operator
, Comma operator
. Dot sourcing operator
-f Format operator
[ ] Index operator
| Pipeline operator
. Property dereference operator
.. Range operator
:: Static member operator
$( ) Subexpression operator
. Dot sourcing operator: used in this context to run a script in the current scope, essentially allowing any functions, aliases, and variables that have been created by the script to be added to the current session.
Example:
. c:\scripts\sample.ps1
Note that this application of the . special operator is followed by a space to distinguish it from the (.) symbol that represents the current directory.
Example:
. .\sample.ps1
. Property dereference operator: allows access to the properties and methods of an object that follows the ., by indicating that the expression on the left side of the . character is an object and the expression on the right side of the . is an object member (a property or method).
Example:
$myProcess.peakWorkingSet
(get-process PowerShell).kill()
Disclaimer & Sources:
I had the same question while looking at a PowerShell script that I was trying to expand on its feature sets and landed here when doing my research for the answer. However I managed to find my answer using this magnificent write up on the Microsoft Development Network supplemented with this further expansion of the same ideas from IT Pro.
Cheers.
The dot is a call operator:
$a = "Get-ChildItem"
. $a # (executes Get-ChildItem in the current scope)
In your case, however, I don't see what it does.
A period or full stop accesses an object's properties, like:
$CompSys.TotalPhysicalMemory
See here: http://www.computerperformance.co.uk/powershell/powershell_syntax.htm#Operators_
This answer is to expand slightly upon those already provided by David Brabant and his commenters. While those remarks are all true and pertinent, there is something that has been missed.
The OP's use of & when invoking external commands is unnecessary. Omitting the & would have no effect (in the example of his usage). The purpose of & is to allow the invocation of commands whose names are the values of a (string) expression. By using the & above, PowerShell then (essentially) treats the subsequent arguments as strings, the first of which is the command name that & duly invokes. If the & were omitted, PowerShell would take the first item on the line as the command to execute.
However, the . in the second example is necessary (although, as noted by others, & would work just as well in this case). Without it, the command line would begin with a variable access ($env:systemdrive) and so powershell would be expecting an expression of some form. However, immediately following the variable reference is a bare file path which is not a valid expression and will generate an error. By using the . (or &) at the beginning of the line, it is now treated as a command (because the beginning doesn't look like a valid expression) and the arguments are processed as expandable strings (" "). Thus, the command line is treated as
. "$env:systemdrive\chocolatey\chocolateyinstall\chocolatey.cmd" "install" "notepadplusplus"
The first argument has $env:systemdrive substituted into it and then . invokes the program thus named.
Note: the full description of how PowerShell processes command-line arguments is far more complicated than what is given here. This version was cut down to just the essential bits needed to answer the question. Take a look at about_Parsing for a comprehensive description. It is not complete, but should cover most normal usage. There are other posts on Stack Overflow and GitHub (where PowerShell now resides) that cover some of the seemingly quirky behaviour not listed in the official documentation. Another useful resource is about_Operators, though again this isn't quite complete. One example is the equivalence of . and & when invoking something other than a PowerShell script/cmdlet, which gets no mention at all.
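A quick illustration of that last point (any external program would do):
& "$env:SystemRoot\System32\cmd.exe" /c echo hello   # call operator
. "$env:SystemRoot\System32\cmd.exe" /c echo hello   # dot sourcing - identical behaviour for an external program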