Does anyone have an example showing how to override the TabExpansion2 function in Windows PowerShell 3.0? I know how to override the old TabExpansion function, but I want to provide a list of items for the IntelliSense in PowerShell ISE. I looked at the definition of TabExpansion2, and it wasn't easy to understand how to inject my own code into the tab expansion process.
I think this example should give you a good starting point: Windows PowerShell Cookbook: Sample implementation of TabExpansion2. The example code shows that you can add code both before and after the default calls to [CommandCompletion]::CompleteInput.
For instance, you can add an entry to the $options hashtable named CustomArgumentCompleters to get custom completion for command arguments. The entry should be a hashtable where the keys are argument names (e.g. "ComputerName" or "Get-ChildItem:Filter") and the values are arrays of values that could be used to complete that parameter. Powertheshell.com also has an article about this: Dynamic Argument Completion. You can also specify custom completions for native executables, using the NativeArgumentCompleters option (again, keys are command names and values are arrays of possible completions).
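As a minimal sketch of the above (the server names, filters, and ping switches are made-up placeholder values), the $options hashtable might be populated like this inside a custom TabExpansion2, before the call to CompleteInput:

```powershell
# Sketch: populate $options before calling [CommandCompletion]::CompleteInput.
# All completion values below are placeholder examples.
$options['CustomArgumentCompleters'] = @{
    # Completes -ComputerName for any command
    'ComputerName'         = @('server01', 'server02', 'server03')
    # Completes -Filter only for Get-ChildItem
    'Get-ChildItem:Filter' = @('*.ps1', '*.psm1', '*.psd1')
}
$options['NativeArgumentCompleters'] = @{
    # Completes arguments for the native executable ping
    'ping' = @('-n', '-t', '-4', '-6')
}
```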
Once CompleteInput has returned, you can store the result in $result for further analysis. The result is an instance of the CommandCompletion class. If the default completion didn't find any matches, you can add your own CompletionResult entries to the list of matches:
$result.CompletionMatches.Add(
    (New-Object System.Management.Automation.CompletionResult "my completion string"))
Don't forget to return $result from the function so the completion actually happens.
Finally, a note on troubleshooting: the code that calls TabExpansion2 seems to squelch all console-based output (not surprisingly), so if you want to write debugging messages for yourself, you might try writing them to a separate text file. For instance, you could change the End block in TabExpansion2 to look like this:
$result = [System.Management.Automation.CommandCompletion]::CompleteInput(
$inputScript, $cursorColumn, $options)
$result | Get-Member | Add-Content "c:\TabCompletionLog.txt"
$result
Here is an example of an overridden TabExpansion2:
TabExpansion2.ps1 (disclaimer: I'm the author)
and several profiles with completers for it that are used in practice:
Invoke-Build.ArgumentCompleters.ps1
Argument completers for Invoke-Build
Mdbc.ArgumentCompleters.ps1
Argument completers for Mdbc
ArgumentCompleters.ps1
Argument, input, and result completers
The points of interest:
TabExpansion2.ps1 does minimal work on loading. Potentially expensive
initialization is performed once, when completion actually happens.
The overridden TabExpansion2 provides an extension mechanism via one or more
*ArgumentCompleters.ps1 profiles in the path. Profiles are invoked once,
on the first call of TabExpansion2. Several profiles may come with
different independent modules and tools and be used simultaneously.
In addition to the standard custom argument completers and
native argument completers, this custom TabExpansion2 supports
result processors, which tweak the results of the built-in completion,
and input processors, which can intercept and replace the built-in completion.
It also works around read-only empty built-in results in some cases.
ArgumentCompleters.ps1
contains an example of an input processor which replaces the built-in
completion of types and namespaces with an alternative that is sometimes
more useful.
Another completer provides completion in comments: help tags (.Synopsis,
.Description, etc.) and completion of commented-out code; why not?
Related
Following Create a wrapper for functions in PowerShell, I want to add an attribute that changes the behavior of functions (more precisely, cmdlets) by adding a certain Begin and DynamicParam script block.
The attribute could act like a decorator in Python: that is, it replaces the function with a new one that wraps it and calls it internally. Or it could just keep the original cmdlet object (that is, keep the Process code, attributes, and parameters), but it should add a "constant" Begin section to every function (which may depend on the function's name alone).
Is it possible to do this natively in PowerShell?
The question can be divided in two:
Is it possible to dynamically generate a function?
Is it possible to do it in an attribute?
An example
function My
{
    [WrapperFor(Other)]
    [CmdletBinding()]
    Process { Other #newparams }
}
This would cause My to support all parameters of Other and parse them; practically, it adds code sections.
I'm going to start by saying I'm still pretty much a rookie at PowerShell and hoping there is a way to do this.
We have a utils.ps1 script that contains just functions, which we dot-source within other scripts. One of the functions returns a default value if a value is not passed in. I know I could check $args and such, but what I wanted was to use the function for the default values in the parameters.
param(
[string]$dbServer=$(Get-DefaultParam "dbServer"),
[string]$appServer=$(Get-DefaultParam "appServer")
)
This doesn't work, since the utils script hasn't been sourced yet. I can't put the dot-source first, because then param doesn't work, as it's not the top line. Utils.ps1 isn't a module, and I can't use #Requires.
What I got working was this
param(
[ValidateScript({ return $false; })]
[bool]$loadScript=$(. ./Utils.ps1; $true),
[string]$dbServer=$(Get-DefaultParam "dbServer"),
[string]$appServer=$(Get-DefaultParam "appServer")
)
This creates a parameter that loads the script and prevents passing a value into that parameter. It loads the script in the correct scope; if I load it inside the ValidateScript block, it's not in the correct scope. The rest of the parameters then have access to the functions in Utils.ps1. This is probably not a supported side effect (a.k.a. a hack): if I move loadScript below the others, those parameters fail, since the script hasn't been loaded yet.
Does PowerShell guarantee parameters will always be evaluated sequentially?
Or should we instead put all the functions in Utils.ps1 in the global scope? This would require running Utils.ps1 before the other scripts, which seems OK in scripting but less than ideal when running the scripts by hand.
Is there a more supported way of doing this besides modules and #Requires?
Or is it better to not use default values for params and just code all the checks after sourcing, checking $args to see if we need to run the function?
It would be beneficial to instead turn that script into a PowerShell module, despite your statement that you want to avoid one. This way, your functions are always available for use as long as the module is installed. Also, despite your not wanting to use it, the #Requires statement is how you put execution constraints on your script, such as the PowerShell version or the modules that must be installed for the script to function.
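As a sketch of the module route (the folder name Utils and the function name Get-DefaultParam follow the question; the exact layout is an assumption):

```powershell
# Sketch: rename utils.ps1 to Utils.psm1 and place it at
#   <a folder listed in $env:PSModulePath>\Utils\Utils.psm1
# A consuming script can then declare the dependency and import it:

#Requires -Modules Utils
Import-Module Utils

# The module's functions, e.g. Get-DefaultParam, are now available,
# including in the script's param() default values.
```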
If you really don't want to put this into a module, you can dot-source utils.ps1 from the executing user's $profile. As long as you don't run powershell.exe with the -NoProfile parameter, the profile loads with each session and your functions will be available for use.
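A sketch of the profile approach (the path to utils.ps1 is an assumption):

```powershell
# Add this line to the file pointed to by $profile
# (e.g. open it with: notepad $profile).
# It dot-sources utils.ps1 into every interactive session:
. 'C:\Scripts\utils.ps1'
```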
Using CmdletBinding, is there an easy way to regurgitate the exact parameters that a cmdlet was called with, so I can call another cmdlet with the exact same parameters?
I'm writing PowerShell cmdlets in PowerShell, using advanced functions. I have a cmdlet called Get-Environment, with several optional parameters like [string]EnvironmentName and [switch]Active. I have another cmdlet, called Get-Machine, with all of the same optional parameters; it calls Get-Environment. Originally, before I added the [switch]Active parameter, I simply called Get-Environment with all variables passed explicitly (see below).
I can't do the same thing now, because if I add -Active then it will always be set. I don't want to have to make a test in Get-Machine to see if Active is true and maintain two different versions of the Get-Environment call. I'd prefer to not have to trawl through the $PSBoundParameters hashtable and reconstruct the original strings, but that looks like the only feasible way forward (unless I'm missing something).
Original code inside get-machine:
$environments = get-Environment -EnvironmentName $EnvironmentName
Oh for Pete's sake. I found it. I was missing the big stupid easy thing. I'll leave this up for others, and in case someone has an even better answer.
https://ss64.com/ps/psboundparameters.html
$PSBoundParameters can be used to call a subordinate function or cmdlet passing the same parameters - PowerShell will automatically splat the hash table's values instead of having to type each of the parameters:
Get-OtherThing @PSBoundParameters
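Putting it together for the cmdlets in the question (a sketch; the parameter names follow the question):

```powershell
function Get-Machine {
    [CmdletBinding()]
    param(
        [string]$EnvironmentName,
        [switch]$Active
    )
    # @PSBoundParameters splats only the parameters the caller actually bound,
    # so -Active is forwarded when given and omitted when not.
    $environments = Get-Environment @PSBoundParameters
    # ... continue working with $environments ...
}
```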
I am writing a PowerShell module with a list of utilities that I use on a daily basis. However, my question is: how can I avoid repeating so much code?
For example, if I have a function that gets a list of hostnames from a file, I have to create that parameter in every single function. How can I just create it once, and then have each function prompt for it, or grab it?
function CopyFiles {
    param (
        [Parameter(Mandatory = $true, HelpMessage = "Enter the Path to the Machine List File (UNC Path or local).")]
        [ValidateScript({ $_ -ne "" })]
        [string] $MachineListFilename
    )
    # ... sometime later in the script ...
    $MachineList = Get-Content $MachineListFilename
}
function DoSomeOtherTask {
    param (
        [Parameter(Mandatory = $true, HelpMessage = "Enter the Path to the Machine List File (UNC Path or local).")]
        [ValidateScript({ $_ -ne "" })]
        [string] $MachineListFilename
    )
    # ... sometime later in the script ...
    $MachineList = Get-Content $MachineListFilename
}
It just seems really inefficient to cut and paste the same code over and over again, especially for something like domain name, username, password, etc.
Ultimately, I'm trying to get to a point to where I just write wrapper scripts for these functions once I import the module. Then I can just pass parameters via the command line. However, with the current way I'm doing it, the module is going to be littered with a lot of repetitive code, like parameters for username and password, etc.
Is there a better way?
Make your cmdlets/functions as independent and flexible as you can. Sometimes a wrapper function is the way to go, other times consolidating things into one function and calling it differently is more workable.
In the example you've given here, give the caller two options - you can pass in the filename for the list of machines, or pass in the list of machines. That way, you can read the file once in the calling script, and pass the array of machine names into each function. This will be much more efficient as you're only reading from disk one time.
I strongly recommend reading up on advanced functions and parametersets to simplify things (you'll need this for my suggestion above).
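A sketch of the two-option approach using parameter sets (the function and parameter names follow the question):

```powershell
function CopyFiles {
    [CmdletBinding(DefaultParameterSetName = 'ByList')]
    param (
        [Parameter(Mandatory = $true, ParameterSetName = 'ByFile')]
        [ValidateNotNullOrEmpty()]
        [string] $MachineListFilename,

        [Parameter(Mandatory = $true, ParameterSetName = 'ByList')]
        [string[]] $MachineList
    )
    # Read the file only when the caller chose the file-based parameter set;
    # otherwise the machine names were passed in directly.
    if ($PSCmdlet.ParameterSetName -eq 'ByFile') {
        $MachineList = Get-Content $MachineListFilename
    }
    # ... work with $MachineList ...
}
```

The calling script can then do `$machines = Get-Content $file` once and pass `-MachineList $machines` to every function, reading from disk a single time.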
As for "repetitive code" - as soon as you find yourself copying/pasting code, stop. Find a way to make that code generic and move it into its own function, then call that function wherever it's needed. This isn't a PowerShell notion - this is standard programming, the DRY Principle.
Even then, you'll still find yourself with some modicum of copypasta. It's going to happen just because of the nature of the PowerShell environment. Look at Microsoft's own cmdlets - you'll see evidence of it there too. The key is to minimize it.
Having 3 cmdlets that all take username & password (why not take a Credential object instead/as another option, BTW?) will result in copying & pasting those parameters in the function definition. You're not going to avoid that, and it's not necessarily a bad thing. You can create code snippets in most good editors (PowerShell ISE included) to automatically "generate" it for you if that makes it easier/faster.
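For the credential case, a common pattern (a sketch, not tied to the question's code) is a single PSCredential parameter; the [Credential()] attribute lets the caller pass either a credential object or just a user name, in which case PowerShell prompts for the password:

```powershell
param (
    # Accepts a PSCredential, or a user name that triggers a secure prompt.
    [System.Management.Automation.Credential()]
    [System.Management.Automation.PSCredential]
    $Credential = [System.Management.Automation.PSCredential]::Empty
)
# $Credential.UserName and $Credential.GetNetworkCredential().Password
# replace separate -Username/-Password parameters.
```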
I personally like to create intermediary functions that call my functions with specific parameters for things I do a lot of times. I manage these with a switch statement. This way, the backend driver does not change, and I have a nice interface I can give to others who want to use, but not develop on, the code I made.
function Invoke-FrontEnd {
    Invoke-Intermediary -CallType 'TypeA'
}

function Invoke-Intermediary ($CallType) {
    switch ($CallType) {
        'TypeA' { Invoke-BackEnd -Param1 "get dns" -Param2 "domain1" -Param3 $true }
        'TypeB' { Invoke-BackEnd -Param1 "add to dns" -Param2 "domain" -Param3 $false }
        default { Invoke-BackEnd @args }
    }
}
Depending on the functionality you are looking for, this could help you. This is a very crude way of doing it, and I highly suggest making it more robust and stable if you aren't going to be the only one using it.
Given this command:
MSBuild.exe build.xml /p:Configuration=Live /p:UseMerge=true /p:EnableUpdateable=false
how can I form a string like this in my build script:
UseMerge=true;EnableUpdateable=true
where I might not know which properties were used at the command line.
What are you going to do with the list?
There's no built-in "properties that came via the command line" mechanism à la splatting in PowerShell 2.0.
Remember properties can come from environment variables and/or other scripts.
Also, you stripped one of the params out in your example.
In general, if one is trying to chain to another command, one uses defaulting (conditions on elements in PropertyGroups) and validation (messages conditional on the presence of options), and then either creates a new property or embeds the params you want to pass into a string.
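A sketch of that defaulting-and-embedding pattern in MSBuild (the property names follow the question; this is an illustrative fragment, not a complete build file):

```xml
<PropertyGroup>
  <!-- Default each property unless it was set on the command line -->
  <UseMerge Condition="'$(UseMerge)' == ''">false</UseMerge>
  <EnableUpdateable Condition="'$(EnableUpdateable)' == ''">true</EnableUpdateable>
  <!-- Embed the values to pass on into a single string -->
  <ChainedProperties>UseMerge=$(UseMerge);EnableUpdateable=$(EnableUpdateable)</ChainedProperties>
</PropertyGroup>
```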
Here's hoping someone has a nice neat example of a more general way to do this but I doubt it.
As covered in http://www.simple-talk.com/dotnet/.net-tools/extending-msbuild/ one can dump out the parameters passed by doing /v:diag on the commandline (but that's obviously not what you're after).
Have a look in the Common.targets files - you'll find lots of cases of chaining involving manually building up lists to pass on to subservient tasks.