I have a command that I'd like to append a parameter to via a PowerShell profile function. I'm not quite sure of the best way to capture each time this command is run, so any insight would be helpful.
Command: terraform plan
Each time a plan is run I want to check the arguments to see whether -lock=true was passed in, and if not, append -lock=false. Is there a suitable way to capture when this command is run, without creating a whole new function that builds the command? So far the only way I've seen to capture commands is with Start-Transcript, but that doesn't quite get me to where I need to be.
The simplest approach is to create a wrapper function that analyzes its arguments and adds -lock=false as needed before calling the terraform utility.
function terraform {
    $passThruArgs = $args
    if (-not ($passThruArgs -match '^-lock=')) { $passThruArgs += '-lock=false' }
    & (Get-Command -Type Application terraform) $passThruArgs
}
The above uses the same name as the utility, effectively shadowing the latter, as is your intent.
However, I would caution against using the same name for the wrapper function, as it can make it hard to understand what's going on.
Also, if defined globally via $PROFILE or interactively, any unsuspecting code run in the same session will call the wrapper function, unless an explicit path or the shown Get-Command technique is used.
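If you'd rather avoid shadowing, a minimal sketch of the same idea under a distinct, hypothetical name could look like this:

# Hypothetical name; wraps only the plan subcommand.
function Invoke-TerraformPlan {
    $passThruArgs = $args
    # Append -lock=false only if the caller didn't pass their own -lock=... value.
    if (-not ($passThruArgs -match '^-lock=')) { $passThruArgs += '-lock=false' }
    terraform plan $passThruArgs
}

Calling Invoke-TerraformPlan then behaves like terraform plan -lock=false unless you supply your own -lock= value, while plain terraform calls elsewhere in the session remain untouched.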
Not to take away from the other answer posted, but to offer an alternative solution here's my take:
$Global:CMDLETCounter = 0

$ExecutionContext.InvokeCommand.PreCommandLookupAction = {
    Param($CommandName, $CommandLookupEventArgs)
    if ($CommandName -eq 'terraform' -and $Global:CMDLETCounter -eq 0)
    {
        $Global:CMDLETCounter++
        $CommandLookupEventArgs.CommandScriptBlock = {
            if ($Global:CMDLETCounter -eq 1)
            {
                if (-not ($args -match ($newArg = '-lock=')))
                {
                    $args += "${newArg}false"
                }
            }
            & "terraform" @args
            $Global:CMDLETCounter--
        }
    }
}
You can make use of the $ExecutionContext automatic variable to tap into PowerShell's command-lookup process and insert your own logic for a specific command. In your case, the lookup is intercepted for terraform, and the existing arguments are checked for -lock=; if it's not found, -lock=false is appended and the real command is invoked with the adjusted arguments.
The counter you see ($Global:CMDLETCounter) is there to prevent an endless loop: the script block invokes terraform itself, which would otherwise trigger the lookup action again with nothing to halt it.
I've got several Powershell scripts under construction, and one thing I'd like to do in them is spit out a line at the top of the output echoing the command line used.
Use case is: output is being redirected to a file, and a year from now when someone examines that file, I want them to be able to copy/paste the command from the output file to regenerate the same output where the only differences are chronological. [ Okay, that was a little too generic ... first case: I'm examining ACLs and want to be able to repeat the same examination on the newer data at any point in the future by simply copy/pasting the same command. ]
My script begins with the parameter definitions:
[CmdletBinding()]
Param(
    [string] $filter="Name -like '*'",
    [string] $user=$null,
    [switch] $test01=$false,
    [switch] $test02=$false
)
What I'm doing now is a fall-back position: knowing what parameters can be accepted, I'm dumping out the names and values of those parameters:
if ($user.Length -eq 0) { $u = "NULL" } else { $u = "|$user|" }
if ($test01) { $u += ", -TEST01" }
if ($test02) { $u += ", -TEST02" }
"RUN BEGINS at $((get-date).ToString('F')) -- Filter is |$filter|, User is $u"
Ugly, hacky, not even a hint of portability in it, and definitely NOT a copy/paste of the command.
Regardless, this CAN be mangled into a command line; but not generically, and not with 100% surety.
I've tried using $args, but apparently either defining named parameters or the CmdletBinding() breaks that mechanism, because it's always empty. I also tried $PsBoundParameters, Get-History, and even $0 .. $9 bash-like variables. So far, nothing I can find gives the command line that launched the script that's running.
$PsBoundParameters is close: it's got all the right data as key/value pairs that could be built up into a command line, but it still isn't a command line and would require mangling to get it into one.
Get-History came even closer, as it includes a complete command line; the problem is that it gives the command run RIGHT BEFORE the command that launched the script, not the command that launched it.
Running out of options ... but am way open to suggestion.
Found it! And it was as simple as I'd hoped it would be. [ to use ... finding it was a pain ]
$MyInvocation.Line
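For example, a sketch of how it could replace the hand-built banner line (reusing the banner format from above, assuming the Param() block shown earlier):

# First line after the Param() block:
"RUN BEGINS at $((Get-Date).ToString('F')) -- Invoked as: $($MyInvocation.Line)"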
While I agree that $MyInvocation.Line will get the literal command used, which seems to be what you want on the surface, I'd still argue that the data in $PSBoundParameters is more useful long term, simply because you can't guarantee users will call your function in a way that makes the command line actually useful.
Consider the common case where callers have declared variables to hold parameter values:
$myfilter = "Name -like '*Joe*'"
MyFunction -filter $myfilter
Consider the case where callers create a hashtable to splat with:
$myParams = @{
    filter = "Name -like '*Joe*'"
    test01 = $true
}
MyFunction @myParams
If you only record the command line, you'd lose the parameter data in both of these cases. And if you really want a literal command that people can copy/paste from the log, it shouldn't be that hard to generate a synthetic command based on the data in $PSBoundParameters. It doesn't have to be literally the same command as long as the same parameter data gets passed in, right?
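A rough sketch of such a synthetic command (it doesn't handle embedded quotes or array arguments, so treat it as a starting point rather than a finished solution):

# Inside the script/function, after parameter binding.
# Rebuild an approximate, re-runnable command from the bound parameters.
$syntheticArgs = foreach ($kv in $PSBoundParameters.GetEnumerator()) {
    if ($kv.Value -is [switch]) {
        if ($kv.Value) { "-$($kv.Key)" }   # switch: emit only when set
    } else {
        "-$($kv.Key) '$($kv.Value)'"       # other values: single-quote them
    }
}
"RUN BEGINS at $((Get-Date).ToString('F')) -- Rerun with: $($MyInvocation.MyCommand.Name) $($syntheticArgs -join ' ')"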
I am trying to create an alias (named which) of the Get-Command cmdlet in a way that it doesn't run if I'm not sending any arguments (because if it's run without arguments it outputs all the commands that are available).
I know this can be done using a function but I would like to keep the tab completion functionality without having to write a sizeable function that is to be placed into my $PROFILE.
In short, I only want the alias to work if it is being passed arguments.
You can't do it with an alias, because PowerShell aliases can only refer to another command name or path, and can therefore neither include arguments nor custom logic.
Therefore you do need a function, but it can be a short and simple one:
function which { if ($args.count) { Get-Command @args } else { Throw "Missing command name." } }
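For example (hypothetical session):

which Get-Date   # Same as: Get-Command Get-Date
which            # No arguments -> throws "Missing command name."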
Note that while passing -? for showing Get-Command's help does work, tab completion of arguments does not.
In order to get tab completion as well, you'll need to write a wrapper (proxy) function or at least replicate Get-Command's parameter declarations - which then does make the function definition sizable.
If the concern is just the size of the $PROFILE file itself, you can write a proxy script instead - which.ps1 - which you can invoke with just which as well, assuming you place it in one of the directories listed in $env:Path[1]; see next section.
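For instance, a sketch of making such a script discoverable by name (assuming a hypothetical ~\Scripts folder that holds which.ps1):

# Add the folder to the current session's PATH; put this line in $PROFILE to make it persistent.
$env:Path += [IO.Path]::PathSeparator + (Join-Path $HOME 'Scripts')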
Defining a wrapper (proxy) script or function:
Defining a wrapper (proxy) function or script is a nontrivial undertaking, but allows you to implement a robust wrapper that supports tab completion and even forwarding to the original command's help.
Note:
Bug alert: As zett42 points out, as of PowerShell [Core] 7.1, System.Management.Automation.ProxyCommand.Create neglects to include dynamic parameters if the target command is an (advanced) function or script; however, compiled cmdlets are not affected; see GitHub issue #4792 and this answer for a workaround.
For simplicity, the following creates a wrapper script, which.ps1, and saves it in the current directory. As stated, if you place it in one of the directories listed in $env:PATH, you'll be able to invoke it as just which.
The code below can easily be adapted to create a wrapper function instead: simply take the contents of the $wrapperCmdSource variable below and enclose it in function which { ... }.
As of PowerShell Core 7.0.0-preview.5, there are some problems with the auto-generated code, which may or may not affect you; they will be fixed at some point; to learn more and to learn how to manually correct them, see GitHub issue #10863.
# Create the wrapper scaffolding as source code (outputs a single [string])
$wrapperCmdSource = [System.Management.Automation.ProxyCommand]::Create((Get-Command Get-Command))
# Write the auto-generated source code to a script file
$wrapperCmdSource > which.ps1
Note:
Even though System.Management.Automation.ProxyCommand.Create requires a System.Management.Automation.CommandMetadata instance to identify the target command, the System.Management.Automation.CommandInfo instances output by Get-Command can be used as-is.
Re comment-based help: By default, the proxy function simply forwards to the original cmdlet's help; however, you can optionally pass a string to serve as the comment-based help as the 2nd argument.
By using [System.Management.Automation.ProxyCommand]::GetHelpComments() in combination with output from Get-Help, you could start with a copy of the original command's help and modify it:
[System.Management.Automation.ProxyCommand]::GetHelpComments((Get-Help Get-Command))
You now have a fully functional which.ps1 wrapper script that behaves like Get-Command itself.
You can invoke it as follows:
./which # Same as: Get-Command; tab completion of parameters supported.
./which -? # Shows Get-Command's help.
You can now edit the script file to perform the desired customization.
Note: The auto-generated source code contains a lot of boilerplate code; however, typically only one or two places need tweaking to implement the custom functionality.
Specifically, place the following command as the first statement inside the begin { ... } block:
if (-not $MyInvocation.ExpectingInput -and -not ($Name -or $CommandType -or $Module -or $FullyQualifiedModule)) {
Throw "Missing command name or filter."
}
This causes the script to throw an error if the caller didn't provide some way of targeting a specific command or group of commands, either by direct argument or via the pipeline.
If you invoke the modified script without arguments now, you should see the desired error:
PS> ./which.ps1
Missing command name or filter.
...
Other common types of customizations are:
Removing parameters from the wrapper, by simply removing the parameter declaration.
Adding additional parameters to the invocation of the wrapped command, by modifying the following line in the begin block:
# Add parameters, as needed.
$scriptCmd = { & $wrappedCmd @PSBoundParameters }
Preprocessing pipeline input before passing it to the wrapped command, by customizing the process block and replacing $_ with your preprocessed input in the following line:
# Replace $_ with a preprocessed version of it, as needed.
$steppablePipeline.Process($_)
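For instance, a hypothetical preprocessing step that trims whitespace from string input before forwarding it:

# Inside the process block, instead of forwarding $_ unchanged:
$preprocessed = if ($_ -is [string]) { $_.Trim() } else { $_ }
$steppablePipeline.Process($preprocessed)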
For an example of a complete implementation of a proxy function, see this answer.
[1] Caveat for Linux users: since the Linux file-system is case-sensitive, invocation of your script won't work case-insensitively, the way commands normally work in PowerShell. E.g., if your script file name is Get-Foo.ps1, only Get-Foo - using the exact same case - will work, not also get-foo, for instance.
Let's take the PowerShell statement below as an example:
powershell.exe c:\temp\windowsbroker.ps1 IIS
Is it possible to have it scripted within windowsbroker.ps1 to check for that IIS string, and if it's present to do a specific install script? The broker script would be intended to install different applications depending on what string followed it when it was called.
This may seem like an odd question, but I've been using CloudFormation to spin up application environments and I'm specifying an "ApplicationStack" parameter that will be referenced at the time when the powershell script is run so it knows which script to run to install the correct application during bootup.
What you're trying to do is called argument or parameter handling. In its simplest form PowerShell provides all arguments to a script in the automatic variable $args. That would allow you to check for an argument IIS like this:
if ($args -contains 'iis') {
    # do something
}
or like this if you want the check to be case-sensitive (which I wouldn't recommend, since Windows and PowerShell usually aren't):
if ($args -ccontains 'IIS') {
    # do something
}
However, since apparently you want to use the argument as a switch to trigger specific behavior of your script, there are better, more sophisticated ways of doing this. You could add a Param() section at the top of your script and check if the parameter was present in the arguments like this (for a list of things to install):
Param(
    [Parameter()]
    [string[]]$Install
)

$Install | ForEach-Object {
    switch ($_) {
        'IIS' {
            # do something
        }
        ...
    }
}
or like this (for a single option):
Param(
    [switch]$IIS
)

if ($IIS.IsPresent) {
    # do something
}
You'd run the script like this:
powershell "c:\temp\windowsbroker.ps1" -Install "IIS",...
or like this respectively:
powershell "c:\temp\windowsbroker.ps1" -IIS
Usually I'd prefer switches over parameters with array arguments (unless you have a rather extensive list of options), because with the latter you have to worry about the spelling of the array elements, whereas with switches you get a built-in spell check.
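To illustrate the difference (hypothetical calls against the two Param() variants above):

# Array parameter: a misspelled element is silently ignored by the switch statement.
powershell "c:\temp\windowsbroker.ps1" -Install "ISS"

# Switch parameter: a misspelled name fails immediately with a parameter-binding error.
powershell "c:\temp\windowsbroker.ps1" -ISS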
Using a Param() section will also automatically add a short usage description to your script:
PS C:\temp> Get-Help windowsbroker.ps1
windowsbroker.ps1 [-IIS]
You can further enhance this online help to your script via comment-based help.
Using parameters has a lot of other advantages on top of that (even though they probably aren't of that much use in your scenario). You can do parameter validation, make parameters mandatory, define default values, read values from the pipeline, make parameters depend on other parameters via parameter sets, and so on. See here and here for more information.
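For example, a small sketch combining a few of those features (the set values and log path are purely illustrative):

Param(
    [Parameter(Mandatory)]                    # prompts if omitted
    [ValidateSet('IIS', 'SQL', 'Apache')]     # tab-completes and rejects typos
    [string[]]$Install,

    [string]$LogPath = "$env:TEMP\broker.log" # default value
)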
Yes, they are called positional parameters. You provide the parameters at the beginning of your script:
Param(
    [string]$appToInstall
)
You could then write your script as follows:
switch ($appToInstall) {
    "IIS" {"Install IIS here"}
}
In JavaScript it is possible to do this, I think. In PowerShell I'm not sure how to:
Let's say I want to override every call to Write-Host with my custom method, but at some point I want to execute the native Write-Host inside my override. Is it possible to store the native implementation under another name so as to call it later from the new implementation?
Update: it seems to me that the answer https://serverfault.com/a/642299/236470 does not fully answer the second part of my question. How do I store and call the native implementation?
Calls to functions will override cmdlets. You can read more on this from about_Command_Precedence on TechNet ...
If you do not specify a path, Windows PowerShell uses the following precedence order when it runs commands:
1. Alias
2. Function
3. Cmdlet
4. Native Windows commands
So simply making a function of the same name as a native cmdlet will get you what you want.
function Write-Host {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory, ValueFromPipeline)]
        $string
    )
    Process {
        # Executes once for each pipeline object
        if ($string -match "bagels") {
            Microsoft.PowerShell.Utility\Write-Host $string -ForegroundColor Green
        } else {
            Microsoft.PowerShell.Utility\Write-Host $string
        }
    }
}
So now Write-Host works with pipeline input that we can filter on. Calling the "real" cmdlet is as easy as specifying the module in the call; you can see I have done that twice in the above code sample.
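For instance, piping a couple of strings through the override (illustrative input) behaves like this:

"I love bagels", "I love toast" | Write-Host
# "I love bagels" matches the filter and is written in green via the real cmdlet;
# "I love toast" is written with the default color.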
Be careful that you don't forget you have done this if you save it in a profile or something of that nature. Use Get-Command Write-Host whenever in doubt. In my case you can remove the override by calling Remove-Item function:write-host
You can also look into what are called proxy functions but I think that is overkill for what you intend to do.
Yes you can. I have an answer for that here on ServerFault, but since it's a different site I'll copy it since I can't close as duplicate to another site.
Yes, you can override Get-ChildItem or any other cmdlet in Powershell.
Name Your Function The Same
If you make a function with the same name in the same scope, yours will be used.
Example:
Function Get-ChildItem {
    [CmdletBinding()]
    param(
        # Simulate the parameters here
    )
    # ... do stuff
}
Using Aliases
Create your own function, and then create an alias to that function, with the same name as the cmdlet you want to override.
Example:
Function My-GetChildItem {
    [CmdletBinding()]
    param(
        # Simulate the parameters here
    )
    # ... do stuff
}
New-Alias -Name 'Get-ChildItem' -Value 'My-GetChildItem' -Scope Global
This way is nice because it's easier to test your function without stomping on the built-in function, and you can control when the cmdlet is overridden or not within your code.
To remove the alias:
Remove-Item 'Alias:\Get-ChildItem' -Force
How can I create functions inside my $profile file that will be executed only if I am inside some specific path when trying to execute them?
There is nothing built into PowerShell to effectively hide a command based on any sort of context (e.g. your current directory.)
In PowerShell V3 or greater, there are some event handlers around command lookup that you could use. One solution would look something like this:
$ExecutionContext.InvokeCommand.PreCommandLookupAction = {
    param([string]$commandName,
          [System.Management.Automation.CommandLookupEventArgs]$eventArgs)

    if ($commandName -eq 'MyCommand' -and $pwd.Path -eq 'some directory')
    {
        $eventArgs.StopSearch = $true
    }
}
Your profile is evaluated at PowerShell start-up, so the current directory doesn't really come into play; any function inside the profile will be available as soon as you're able to use the PowerShell console. You could re-implement the TabExpansion2 function to not tab-complete certain functions based on the current directory, but that seems a bit over-the-top. Another option would be to override the prompt function and, depending on the current directory, set the function's visibility to either public or private. If they are private, they won't show up in tab expansion, e.g.:
$func = Get-Command MyFunc
$func.Visibility = 'private' # or 'public'
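Putting the two together, a minimal sketch of such a prompt override might look like this (MyFunc and C:\SomeDir are placeholders):

function prompt {
    # Assumes MyFunc is already defined, e.g. in your $PROFILE.
    # Toggle its visibility based on the current directory.
    $func = Get-Command MyFunc -CommandType Function
    if ($PWD.Path -like 'C:\SomeDir*') {
        $func.Visibility = 'Public'
    } else {
        $func.Visibility = 'Private'
    }
    "PS $($PWD.Path)> "
}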