Using CmdletBinding, is there an easy way to regurgitate the exact parameters that a cmdlet was called with, so I can call another cmdlet with the exact same parameters?
I'm writing PowerShell cmdlets in PowerShell, using advanced functions. I have a cmdlet called Get-Environment with several optional parameters, such as [string]EnvironmentName and [switch]Active. I have another cmdlet, Get-Machine, with all of the same optional parameters; it calls Get-Environment. Originally, before I added the [switch]Active parameter, I simply called Get-Environment with all variables passed explicitly (see below).
I can't do the same thing now, because if I pass -Active explicitly then the switch will always be set. I don't want to have to test in Get-Machine whether Active is true and keep two different versions of the Get-Environment call. I'd also prefer not to trawl through the $PSBoundParameters hashtable and reconstruct the original arguments, but that looks like the only feasible way forward (unless I'm missing something).
Original code inside Get-Machine:
$environments = Get-Environment -EnvironmentName $EnvironmentName
Oh for Pete's sake. I found it. I was missing the big stupid easy thing. I'll leave this up for others, and in case someone has an even better answer.
https://ss64.com/ps/psboundparameters.html
$PSBoundParameters can be used to call a subordinate function or cmdlet passing the same parameters - PowerShell will automatically splat the hash table's values instead of having to type each of the parameters:
Get-OtherThing @PSBoundParameters
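Applied to the example above, a minimal sketch (the function bodies are placeholders):

function Get-Machine
{
    [CmdletBinding()]
    param(
        [string]$EnvironmentName,
        [switch]$Active
    )
    # Splat only the parameters the caller actually supplied; -Active is
    # passed through only when it was bound, so Get-Environment sees the
    # same arguments Get-Machine was called with.
    $environments = Get-Environment @PSBoundParameters
    # ... continue working with $environments ...
}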
I'm going to start by saying I'm still pretty much a rookie at PowerShell and hoping there is a way to do this.
We have a utils.ps1 script that contains only functions, which we dot-source in other scripts. One of the functions returns a default value if a value is not passed in. I know I could check $args and such, but what I wanted was to use the function to supply default values in the param block.
param(
[string]$dbServer=$(Get-DefaultParam "dbServer"),
[string]$appServer=$(Get-DefaultParam "appServer")
)
This doesn't work, since the Utils script hasn't been dot-sourced yet. I can't put the dot-source first, because then param doesn't work, as it's no longer the first statement. Utils isn't a module, so I can't use #Requires.
What I got working was this
param(
[ValidateScript({ return $false; })]
[bool]$loadScript=$(. ./Utils.ps1; $true),
[string]$dbServer=$(Get-DefaultParam "dbServer"),
[string]$appServer=$(Get-DefaultParam "appServer")
)
This creates a parameter that loads the script and prevents a value from being passed into that parameter. It loads the script in the correct scope (if I load it inside the ValidateScript block it's not in the correct scope), and then the rest of the parameters have access to the functions in Utils.ps1. This is probably an unsupported side effect, i.e. a hack: if I move $loadScript below the other parameters, they fail because the script hasn't been loaded yet.
Does PowerShell guarantee that parameters will always be evaluated sequentially?
Should we instead put all the functions in Utils.ps1 into the global scope? That would require running Utils.ps1 before the other scripts, which seems fine when scripting but less than ideal when running the scripts by hand.
Is there a more supported way of doing this besides modules and #Requires?
Is it better not to use parameter default values at all, and instead code all the checks after dot-sourcing, inspecting $args to decide whether to call the function?
It would be better to turn that script into a PowerShell module, despite your desire to avoid one. That way, your functions are always available as long as the module is installed. Also, despite your not wanting to use it, the #Requires statement is how you put execution constraints on your script, such as the PowerShell version or the modules that must be installed for the script to function.
If you really don't want to put this into a module, you can dot-source utils.ps1 from the executing user's $profile. As long as you don't run powershell.exe with the -NoProfile parameter, the profile loads with each session and your functions will be available for use.
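If you do go the module route, a rough sketch of what that could look like (the folder and file names here are illustrative):

# 1. Save the functions as Utils.psm1 inside a folder named Utils on
#    $env:PSModulePath, e.g.
#    $HOME\Documents\WindowsPowerShell\Modules\Utils\Utils.psm1
# 2. With the module discoverable, module auto-loading (PowerShell 3.0+)
#    should resolve Get-DefaultParam even inside the parameter defaults,
#    and #Requires fails fast if the module is missing.

#Requires -Modules Utils

param(
    [string]$dbServer  = $(Get-DefaultParam "dbServer"),
    [string]$appServer = $(Get-DefaultParam "appServer")
)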
In my new project team, they have written a proxy function for each PowerShell cmdlet. When I asked the reason for this practice, they said it is the normal way an automation framework is written. They also said that if a PowerShell cmdlet changes, we don't need to worry; we can just change the one function.
I have never seen a PowerShell cmdlet's functionality or name change.
For example, the SQL PowerShell module previously shipped as a snap-in and then changed to a module, but the cmdlets are still the same. There was no change in cmdlet signatures; maybe extra arguments were added.
Because of these proxy functions, even small tasks take a long time. Is their fear baseless or justified? Is there any incident where a PowerShell cmdlet's name or parameters changed?
I guess they want to be extra safe. PowerShell does have breaking changes here and there sometimes, but I doubt that what your team is doing would be impacted by them (given how rare these events are). For instance, my several-year-old scripts continue to function properly to the present day (and they were mostly developed against PS 2-3).
I would say that this is overengineering, but I can't really blame them for it.
4c74356b41 makes some good points, but I wonder if there's a simpler approach.
Bear with me while I restate the situation, just to ensure I understand it.
My understanding of the issue is that usage of a certain cmdlet may be strewn about the code base of your automation framework.
One day, in a new release of PowerShell or that module, the implementation changes; could be internal only, could be parameters (signature) or even cmdlet name that changes.
The problem then, is you would have to change the implementation all throughout your code.
So proxy functions don't prevent the issue; a breaking change will still break your framework, but the idea is that fixing it is simpler: you fix your own proxy function implementation in one place, and then all of the calling code is fixed.
Other Options
Because of the way command discovery works in PowerShell, you can override existing commands by defining functions or aliases with the same name.
So for example let's say that Get-Service had a breaking change and you used it all over (no proxy functions).
Instead of changing all your code, you can define your own Get-Service function, and the code will use that instead. It's basically the same thing you're doing now, except you don't have to implement hundreds of "empty" proxy functions.
For better naming, you can name your function Get-FrameworkService (or something) and then just define an alias for Get-Service to Get-FrameworkService. It's a bit easier to test that way.
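A small sketch of that naming approach (Get-FrameworkService is just the illustrative name from above, and the body is a placeholder):

function Get-FrameworkService
{
    [CmdletBinding()]
    param(
        [Parameter(Position = 0)]
        [string[]]$Name
    )
    # Adjust or work around the changed behaviour here, then delegate to the
    # real cmdlet via its module-qualified name.
    Microsoft.PowerShell.Management\Get-Service @PSBoundParameters
}

# Point the familiar name at the override; aliases win command resolution,
# so existing calls to Get-Service now go through Get-FrameworkService.
Set-Alias -Name Get-Service -Value Get-FrameworkService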
One disadvantage of this is that reading the code can be unclear: when you see Get-Service somewhere, it's not immediately obvious that it could have been overridden, and it's a bit less straightforward if you really want to call the original implementation.
For that, I recommend importing all of the modules you'll be using with -Prefix and then making all (potentially) overridable calls use the prefix, so there's a clear demarcation.
This even works with a lot of the "built-in" commands, so you could re-import the module with a prefix:
Import-Module Microsoft.PowerShell.Utility -Prefix Overridable -Force
TL;DR
So the short answer:
avoid making lots and lots of pass-thru proxy functions
import all modules with prefix
when needed create a new function to override functionality of another
then add an alias for prefixed_name -> override_function
Import-Module Microsoft.PowerShell.Utility -Prefix Overridable -Force
Compare-OverridableObject $a $b
No need for a proxy here; later when you want to override it:
function Compare-CanonicalObject { <# Stuff #> }
New-Alias Compare-OverridableObject Compare-CanonicalObject
Anywhere in the code that you see a direct call like:
Compare-Object $c $d
Then you know: either this intentionally calls the current implementation of that command (which in other places could be overridden), or this command should never be overridden.
Advantages:
Clarity: looking at the code tells you whether an override could exist.
Testability: writing tests is clearer and easier for overridden commands because they have their own unique name
Discoverability: all overridden commands can be discovered by searching for aliases with the right name pattern i.e. Get-Alias *-Overridable*
Much less code
All overrides and their aliases can be packaged into modules
PowerShell cmdlets inherit a bunch of common parameters. Some cmdlets I write end up with predicates that depend on which parameters are actually bound. This often leads to filtering out the common parameters, which means you need a list of common parameter names.
I also expect the list of common parameters to differ from one version of PowerShell to another.
All of this boils down to this question:
How do you programmatically determine the list of common parameters?
What about these static properties?
[System.Management.Automation.PSCmdlet]::CommonParameters
[System.Management.Automation.PSCmdlet]::OptionalCommonParameters
The full set of common parameters is the combination of both lists:
CommonParameters: Lists the common parameters that are added by the PowerShell engine to any cmdlet that derives from PSCmdlet.
OptionalCommonParameters: Lists the common parameters that are added by the PowerShell engine when a cmdlet defines additional capabilities (SupportsShouldProcess, SupportsTransactions)
That is, all of them can exist, but the optional ones exist only if the cmdlet supports them. For detailed info see the Cmdlet class documentation.
Like this:
function Get-CommonParameterNames
{
    [CmdletBinding()]
    param()

    # Because this function declares no parameters of its own, the only keys
    # returned are the common parameters the engine added to it.
    $MyInvocation.MyCommand.Parameters.Keys
}
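Either way, once you have the list of common parameter names, filtering them out of $PSBoundParameters is straightforward; a minimal sketch (assumes PowerShell 3.0+ for the -notin operator):

# Combine both static lists, then keep only the caller-supplied parameters.
$common = @([System.Management.Automation.PSCmdlet]::CommonParameters) +
          @([System.Management.Automation.PSCmdlet]::OptionalCommonParameters)

$userParams = @{}
foreach ($key in $PSBoundParameters.Keys)
{
    if ($key -notin $common) { $userParams[$key] = $PSBoundParameters[$key] }
}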
Does anyone have an example showing how to override the TabExpansion2 function in Windows PowerShell 3.0? I know how to override the old TabExpansion function, but I want to provide a list of items for the IntelliSense in the PowerShell ISE. I looked at the definition of TabExpansion2 and it wasn't easy to understand how to inject my own code into the tab expansion process.
I think this example should give you a good starting point: Windows Powershell Cookbook: Sample implementation of TabExpansion2. The example code shows that you can add code both before and after the default calls to [CommandCompletion]::CompleteInput.
For instance, you can add an entry to the $options hashtable named CustomArgumentCompleters to get custom completion for command arguments. The entry should be a hashtable where the keys are argument names (e.g. "ComputerName" or "Get-ChildItem:Filter") and the values are arrays of values that could be used to complete that parameter. Powertheshell.com also has an article about this: Dynamic Argument Completion. You can also specify custom completions for native executables, using the NativeArgumentCompleters option (again, keys are command names and values are arrays of possible completions).
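As a rough sketch of that idea (the completion values here are invented), the options hashtable is passed straight into CompleteInput, as in the cookbook sample:

$options = @{
    CustomArgumentCompleters = @{
        'ComputerName'         = @('server01', 'server02', 'server03')
        'Get-ChildItem:Filter' = @('*.ps1', '*.psm1', '*.psd1')
    }
}
$result = [System.Management.Automation.CommandCompletion]::CompleteInput(
    $inputScript, $cursorColumn, $options)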
Once CompleteInput has returned, you can store the result in $result for further analysis. The result is an instance of the CommandCompletion class. If the default completion didn't find any matches, you can add your own CompletionResult entries to the list of matches:
$result.CompletionMatches.Add(
(New-Object Management.Automation.CompletionResult "my completion string") )
Don't forget to return $result from the function so the completion actually happens.
Finally, a note on troubleshooting: the code that calls TabExpansion2 seems to squelch all console-based output (not surprisingly), so if you want to write debugging messages for yourself, you might try writing them to a separate text file. For instance, you could change the End block in TabExpansion2 to look like this:
$result = [System.Management.Automation.CommandCompletion]::CompleteInput(
    $inputScript, $cursorColumn, $options)
$result | Get-Member | Add-Content "c:\TabCompletionLog.txt"
$result
Here is an example of an overridden TabExpansion2:
TabExpansion2.ps1 (disclaimer: I'm the author)
and several used-in-practice profiles with completers for it:
Invoke-Build.ArgumentCompleters.ps1
Argument completers for Invoke-Build
Mdbc.ArgumentCompleters.ps1
Argument completers for Mdbc
ArgumentCompleters.ps1
Argument, input, and result completers
The points of interest:
TabExpansion2.ps1 does minimal work on loading. Potentially expensive initialization is performed once, when completion actually happens.
The overridden TabExpansion2 provides an extension mechanism via one or more *ArgumentCompleters.ps1 profiles in the path. Profiles are invoked once, on the first call of TabExpansion2. Several profiles may come with different independent modules and tools and be used simultaneously.
In addition to the standard custom argument completers and native argument completers, this custom TabExpansion2 supports result processors, which tweak the results of the built-in completion, and input processors, which can intercept and replace the built-in completion.
It works around read-only empty built-in results in some cases.
ArgumentCompleters.ps1
contains an example of an input processor which replaces the built-in completion of types and namespaces with an alternative that is sometimes more useful.
Another completer provides completion in comments: help tags (.Synopsis,
.Description, etc.) and completion of commented out code, why not?
I'm writing a Cmdlet that can be called in the middle of a pipeline. With this Cmdlet, there are parameters that have the ValueFromPipelineByPropertyName attribute defined so that the Cmdlet can use parameters with the same names that are defined earlier in the pipeline.
The paradox I've run into is that in the overridden BeginProcessing() method, I use one of the parameters that can get its value bound from the pipeline. According to the Cmdlet Processing Lifecycle, the binding of pipeline parameters does not occur until after BeginProcessing() is called. Therefore, it seems that I can't rely on pipeline-bound parameters if they're being used in BeginProcessing().
I've thought about moving things to the ProcessRecord() method. Unfortunately, there is a one time, relatively expensive operation that needs to occur. The best place for this to happen seems to be in the BeginProcessing() method to help ensure that it only happens once in the pipeline.
A few questions surrounding this:
Is there a good way around this?
These same parameters also have the Mandatory attribute set on them. How can I even get this far without PowerShell complaining about not having these required parameters?
Thanks in advance for your thoughts.
Update
I took out the second part of the question as I realized I just didn't understand pipeline-bound parameters well enough. I mistakenly thought that pipeline-bound parameters came from the previous cmdlet that executed in the pipeline. They actually come from the object being passed through the pipeline! I referenced a post by Keith Hill to help understand this.
You could set a boolean instance field (Init) to false in BeginProcessing. Then check whether the parameter is set in BeginProcessing; if it is, call a method that does the one-time init (InitMe). In ProcessRecord, check the value of Init, and if it is false, call InitMe. InitMe should set Init to true before returning.
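The question concerns a compiled cmdlet, but the same lazy-initialization idea can be sketched in an advanced function's begin/process blocks (Get-MachineInfo and Initialize-Expensive are made-up names):

function Get-MachineInfo
{
    [CmdletBinding()]
    param(
        [Parameter(Mandatory = $true, ValueFromPipelineByPropertyName = $true)]
        [string]$EnvironmentName
    )
    begin
    {
        # Pipeline-bound values are not available yet; just flag that the
        # one-time initialization is still pending.
        $initialized = $false
    }
    process
    {
        if (-not $initialized)
        {
            # Expensive one-time setup, deferred until the first record,
            # when $EnvironmentName has been bound from the pipeline.
            Initialize-Expensive -EnvironmentName $EnvironmentName  # made-up helper
            $initialized = $true
        }
        # Per-record work goes here.
    }
}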
Regarding your second question, if you've marked the parameter as mandatory then it must be supplied either as a parameter or via the pipeline. Are you using multiple parameter sets? If so, then even if a parameter is marked as mandatory, it is only mandatory if the associated parameter set is the one that is determined by PowerShell to be in use for a particular invocation of the cmdlet.
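To illustrate that last point, a quick sketch (names are invented): calling Get-Thing -Id 5 never prompts for -Name, because -Name is mandatory only within the ByName set.

function Get-Thing
{
    [CmdletBinding(DefaultParameterSetName = 'ById')]
    param(
        # Required only when the ByName parameter set is in use.
        [Parameter(Mandatory = $true, ParameterSetName = 'ByName', ValueFromPipelineByPropertyName = $true)]
        [string]$Name,

        # Required only when the ById parameter set is in use.
        [Parameter(Mandatory = $true, ParameterSetName = 'ById')]
        [int]$Id
    )
    process { $PSCmdlet.ParameterSetName }
}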