Can I override a PowerShell native cmdlet but call it from my override?

In JavaScript it is possible to do so, I think. In PowerShell I'm not sure how to:
Let's say I want to override every call to Write-Host with my custom method, but at some point I want to execute the native Write-Host inside my override. Is it possible to store the native implementation under another name so as to call it later from the new implementation?
Update: it seems to me that the answer https://serverfault.com/a/642299/236470 does not fully answer the second part of my question. How do I store and call the native implementation?

Calls to functions will override cmdlets. You can read more on this from about_Command_Precedence on TechNet ...
If you do not specify a path, Windows PowerShell uses the following precedence order when it runs commands:
Alias
Function
Cmdlet
Native Windows commands
So simply making a function of the same name as a native cmdlet will get you what you want.
function Write-Host {
    [cmdletbinding()]
    param(
        [Parameter(Mandatory, ValueFromPipeline)]
        $string
    )
    Process {
        # Executes once for each pipeline object
        If ($string -match "bagels") {
            Microsoft.PowerShell.Utility\Write-Host $string -ForegroundColor Green
        } else {
            Microsoft.PowerShell.Utility\Write-Host $string
        }
    }
}
So now Write-Host works with pipeline input that we can filter on. Calling the "real" cmdlet is as easy as module-qualifying it in the call, which you can see I have done twice in the above code sample. Some sample usage would be the following:
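A minimal usage sketch (assuming the function above has been defined; any line containing "bagels" should print in green, everything else in the default color):

"I like bagels", "plain toast" | Write-Host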
Be careful that you don't forget you have done this if you save it in a profile or something of that nature. Use Get-Command Write-Host whenever in doubt. In my case you can remove the override by calling Remove-Item function:write-host
You can also look into what are called proxy functions but I think that is overkill for what you intend to do.
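For reference, PowerShell can generate the boilerplate for a proxy function from a command's metadata; a minimal sketch:

$cmd = Get-Command Microsoft.PowerShell.Utility\Write-Host
$metadata = [System.Management.Automation.CommandMetadata]::new($cmd)
# Emits the source code of a proxy function that forwards to the real cmdlet:
[System.Management.Automation.ProxyCommand]::Create($metadata)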

Yes, you can. I have an answer for that here on ServerFault, but since it's a different site I'll copy it here, as I can't close this question as a duplicate of one on another site.
Yes, you can override Get-ChildItem or any other cmdlet in Powershell.
Name Your Function The Same
If you make a function with the same name in the same scope, yours will be used.
Example:
Function Get-ChildItem {
    [CmdletBinding()]
    param(
        # Simulate the parameters here
    )
    # ... do stuff
}
Using Aliases
Create your own function, and then create an alias to that function, with the same name as the cmdlet you want to override.
Example:
Function My-GetChildItem {
    [CmdletBinding()]
    param(
        # Simulate the parameters here
    )
    # ... do stuff
}

New-Alias -Name 'Get-ChildItem' -Value 'My-GetChildItem' -Scope Global
This way is nice because it's easier to test your function without stomping on the built-in cmdlet, and you can control when the cmdlet is overridden or not within your code.
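While the alias is in effect, a bare Get-ChildItem call resolves to your function, but a module-qualified call still reaches the real cmdlet; for example:

Get-ChildItem                                   # runs My-GetChildItem via the alias
Microsoft.PowerShell.Management\Get-ChildItem   # bypasses the alias, runs the real cmdlet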
To remove the alias:
Remove-Item 'Alias:\Get-ChildItem' -Force

Related

Powershell Profile to append parameters to a certain command

I have a certain command that I want to append a parameter to via a PowerShell profile function, but I'm not quite sure of the best way to capture each time this command is run. Any insight would be helpful.
Command: terraform plan
Each time a plan is run I want to check the parameters and see if -lock=true is passed in, and if not, append -lock=false to it. Is there a suitable way to capture when this command is run, without just creating a whole new function that builds the command? So far the only way I've seen to capture commands is with Start-Transcript, but that doesn't quite get me where I need to be.
The simplest approach is to create a wrapper function that analyzes its arguments and adds -lock=false as needed before calling the terraform utility.
function terraform {
    $passThruArgs = $args
    if (-not ($passThruArgs -match '^-lock=')) { $passThruArgs += '-lock=false' }
    & (Get-Command -Type Application terraform) $passThruArgs
}
The above uses the same name as the utility, effectively shadowing the latter, as is your intent.
However, I would caution against using the same name for the wrapper function, as it can make it hard to understand what's going on.
Also, if defined globally via $PROFILE or interactively, any unsuspecting code run in the same session will call the wrapper function, unless an explicit path or the shown Get-Command technique is used.
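For illustration, with the wrapper in place, calls would behave as follows (a sketch, assuming terraform is on your PATH):

terraform plan              # effectively runs: terraform plan -lock=false
terraform plan -lock=true   # already contains -lock=, so it is passed through unchanged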
Not to take away from the other answer posted, but to offer an alternative solution here's my take:
$Global:CMDLETCounter = 0

$ExecutionContext.InvokeCommand.PreCommandLookupAction = {
    Param($CommandName, $CommandLookupEvents)
    if ($CommandName -eq 'terraform' -and $Global:CMDLETCounter -eq 0)
    {
        $Global:CMDLETCounter++
        $CommandLookupEvents.CommandScriptBlock = {
            if ($Global:CMDLETCounter -eq 1)
            {
                # Append -lock=false unless a -lock= argument was already supplied
                if (-not ($args -match ($newArg = '-lock=')))
                {
                    $args += "${newArg}false"
                }
            }
            & "terraform" @args
            $Global:CMDLETCounter--
        }
    }
}
You can make use of the $ExecutionContext automatic variable to tap into PowerShell's command lookup and insert your own logic for a specific command. In your case, you'd be watching for terraform: the command's arguments are checked for an existing -lock= token and, if none is found, -lock=false is appended before the command is executed again.
The counter you see ($Global:CMDLETCounter) is there to prevent an endless loop: without it, the script block would just call itself recursively with nothing to halt it.
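Note that the PreCommandLookupAction hook stays in effect for the rest of the session; to remove it, reset the property:

$ExecutionContext.InvokeCommand.PreCommandLookupAction = $null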

PowerShell, auto load functions from internet on demand

It was pointed out to me (in PowerShell, replicate bash parallel ping) that I can load a function from the internet as follows:
iex (irm https://raw.githubusercontent.com/proxb/AsyncFunctions/master/Test-ConnectionAsync.ps1)
The url referenced Test-ConnectionAsync.ps1 contains two functions: Ping-Subnet and Test-ConnectionAsync
This made me wonder if I could then define bypass functions in my personal module, i.e. dummy functions that will be permanently overridden as soon as they are invoked, e.g.:
function Ping-Subnet <mimic the switches of the function to be loaded> {
    if <function is not already loaded from internet> {
        iex (irm https://raw.githubusercontent.com/proxb/AsyncFunctions/master/Test-ConnectionAsync.ps1)
    }
    # Now, somehow, permanently overwrite Ping-Subnet to be the function that loaded from the URL
    Ping-Subnet <pass the switches that we mimicked to the required function that we have just loaded>
}
This would very simply allow me to reference a number of useful scripts directly from my module but without having to load them all from the internet upon loading the Module (i.e. the functions are only loaded on demand, when I invoke them, and I will often never invoke the functions unless I need them).
You could use the Parser to find the functions in the remote script and load them into your scope. This will not be a self-updating function, but should be safer than what you're trying to accomplish.
using namespace System.Management.Automation.Language

function Load-Function {
    [cmdletbinding()]
    param(
        [parameter(Mandatory, ValueFromPipeline)]
        [uri] $URI
    )

    process {
        try {
            $funcs = Invoke-RestMethod $URI
            $ast = [Parser]::ParseInput($funcs, [ref] $null, [ref] $null)
            foreach ($func in $ast.FindAll({ $args[0] -is [FunctionDefinitionAst] }, $true)) {
                if ($func.Name -in (Get-Command -CommandType Function).Name) {
                    Write-Warning "$($func.Name) is already loaded! Skipping"
                    continue
                }
                New-Item -Name "script:$($func.Name)" -Path function: -Value $func.Body.GetScriptBlock()
            }
        }
        catch {
            Write-Warning $_.Exception.Message
        }
    }
}
Load-Function https://raw.githubusercontent.com/proxb/AsyncFunctions/master/Test-ConnectionAsync.ps1
Ping-Subnet # => now is available in your current session.
function Ping-Subnet {
    # Download the script, stripping the UTF-8 BOM (0xEF,0xBB,0xBF) so the content parses cleanly
    $toImport = (IRM "https://raw.githubusercontent.com/proxb/AsyncFunctions/master/Test-ConnectionAsync.ps1").
        Replace([Text.Encoding]::UTF8.GetString((239,187,191)),"")
    # NMO is the built-in alias for New-Module; the script's functions are auto-exported session-wide
    NMO([ScriptBlock]::Create($toImport)) | Out-Null
    # Re-run the original command line, which now resolves to the downloaded function
    $MyInvocation.Line | IEX
}

function Test-ConnectionAsync {
    $toImport = (IRM "https://raw.githubusercontent.com/proxb/AsyncFunctions/master/Test-ConnectionAsync.ps1").
        Replace([Text.Encoding]::UTF8.GetString((239,187,191)),"")
    NMO([ScriptBlock]::Create($toImport)) | Out-Null
    $MyInvocation.Line | IEX
}
Ping-Subnet -Result Success
Test-ConnectionAsync -Computername $env:COMPUTERNAME
Result:
Computername Result
------------ ------
192.168.1.1 Success
192.168.1.2 Success
192.168.1.146 Success
Computername IPAddress Result
------------ --------- ------
HOME-PC fe80::123:1234:ABCD:EF12 Success
Yes, it should work. Calling Test-ConnectionAsync.ps1 from within a function will create the functions defined within it in the wrapping function's scope. You will be able to call any wrapped functions until the function's scope ends.
If you name the wrapper and wrapped functions differently, you can check whether the function has already been declared with something like the check sketched below.
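A common pattern for such a check (a sketch; the wrapper name would differ, and Ping-Subnet here is the downloaded function's original name):

if (-not (Get-Command Ping-Subnet -CommandType Function -ErrorAction SilentlyContinue)) {
    iex (irm https://raw.githubusercontent.com/proxb/AsyncFunctions/master/Test-ConnectionAsync.ps1)
}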
Otherwise, you need to get more creative.
This said, PROCEED WITH CAUTION. Remote code execution like this is fraught with security issues, especially in the way we're talking about it, i.e., with no validation of Test-ConnectionAsync.ps1.
Fors1k's answer deserves the credit for coming up with the clever fundamentals of the approach:
Download and execute the remote script's content in a dynamic module created with New-Module (whose built-in alias is nmo), which causes the script's functions to be auto-exported and to become available session-globally.[1]
Note that dynamic modules aren't easy to discover, because they're not shown in Get-Module's output; however, you can discover them indirectly, via the .Source property of the command-info objects output by Get-Command:
Get-Command | Where Source -like __DynamicModule_*
That the downloaded functions become available session-globally may be undesired if you're trying to use the technique inside a script that shouldn't affect the session's global state - see the bottom section for a solution.
Then re-invoke the function, under the assumption that the original stub function has been replaced with the downloaded version of the same name, passing the received arguments through.
While Fors1k's solution will typically work, here is a streamlined, robust alternative that prevents potential, inadvertent re-execution of code:
function Ping-Subnet {
    $uri = 'https://raw.githubusercontent.com/proxb/AsyncFunctions/master/Test-ConnectionAsync.ps1'

    # Define and session-globally import a dynamic module based on the remote
    # script's content.
    # Any functions defined in the script would automatically be exported.
    # However, unlike with persisted modules, *aliases* are *not* exported by
    # default, which the appended Export-ModuleMember call below compensates for.
    # If desired, also add -Variable * in order to export variables too.
    # Conversely, if you only care about functions, remove the Export-ModuleMember call.
    $dynMod = New-Module ([scriptblock]::Create(
        (Invoke-RestMethod $uri) + "`nExport-ModuleMember -Function * -Alias *"
    ))

    # If this stub function shadows the newly defined function in the dynamic
    # module, remove it first, so that re-invocation by name uses the new function.
    # Note: This happens if this stub function is run in a child scope, such as
    # in a (non-dot-sourced) script rather than in the global scope.
    # If run in the global scope, curiously, the stub function seemingly
    # disappears from view right away - not even Get-Command -All shows it later.
    $myName = $MyInvocation.MyCommand.Name
    if ((Get-Command -Type Function $myName).ModuleName -ne $dynMod.Name) {
        Remove-Item -LiteralPath "function:$myName"
    }

    # Now invoke the newly defined function of the same name, passing the arguments
    # through.
    & $myName @args
}
Specifically, this implementation ensures:
That aliases defined in the remote script are exported as well (just remove + "`nExport-ModuleMember -Function * -Alias *" from the code above if that is undesired).
That the re-invocation robustly targets the new, module-defined implementation of the function - even if the stub function runs in a child scope, such as in a (non-dot-sourced) script.
When run in a child scope, $MyInvocation.Line|IEX (iex is a built-in alias of the Invoke-Expression cmdlet) would result in an infinite loop, because the stub function itself is still in effect at that time.
That all received arguments are passed through on re-invocation without re-evaluation.
Using the built-in magic of splatting the automatic $args variable (@args) passes only the received, already expanded arguments through, supporting both named and positional arguments.[2]
$MyInvocation.Line|IEX has two potential problems:
If the invoking command line contained multiple commands, they are all repeated.
You can solve this particular problem by substituting (Get-PSCallStack)[1].Position.Text for $MyInvocation.Line, but that still wouldn't address the next problem.
Both $MyInvocation.Line and (Get-PSCallStack)[1].Position.Text contain the arguments that were passed in unexpanded (unevaluated) form, which causes their re-evaluation by Invoke-Expression. The perils of that are that, at least hypothetically, this re-evaluation could involve lengthy commands whose output served as arguments or, worse, commands that had side effects that cannot or should not be repeated.
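To illustrate the safe pass-through, here is a minimal sketch of splatting $args in a wrapper (the wrapper and wrapped command are made up for the example):

function Invoke-Wrapper {
    # @args forwards the already-evaluated arguments - named and positional alike -
    # without re-parsing the original command line:
    Get-ChildItem @args
}

Invoke-Wrapper -Path $HOME -Force   # arguments are bound once and never re-evaluated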
Scoping the technique to a given local script:
That the downloaded functions become available session-globally may be undesired if you're trying to use the technique inside a script that shouldn't affect the session's global state; that is, you may want the functions exported via the dynamic module to disappear when the script exits.
This requires two extra steps:
Piping the dynamic module to Import-Module, which is the prerequisite for being able to unload it before exiting with Remove-Module
Calling Remove-Module with the dynamic module before exiting in order to unload it.
function Ping-Subnet {
    $uri = 'https://raw.githubusercontent.com/proxb/AsyncFunctions/master/Test-ConnectionAsync.ps1'

    # Save the module in a script-level variable, and pipe it to Import-Module
    # so that it can be removed before the script exits.
    $script:dynMod = New-Module ([scriptblock]::Create(
        (Invoke-RestMethod $uri) + "`nExport-ModuleMember -Function * -Alias *"
    )) | Import-Module -PassThru

    # If this stub function shadows the newly defined function in the dynamic
    # module, remove it first, so that re-invocation by name uses the new function.
    # Note: This happens if this stub function is run in a child scope, such as
    # in a (non-dot-sourced) script rather than in the global scope.
    # If run in the global scope, curiously, the stub function seemingly
    # disappears from view right away - not even Get-Command -All shows it later.
    $myName = $MyInvocation.MyCommand.Name
    if ((Get-Command -Type Function $myName).ModuleName -ne $dynMod.Name) {
        Remove-Item -LiteralPath "function:$myName"
    }

    # Now invoke the newly defined function of the same name, passing the arguments
    # through.
    & $myName @args
}
# Sample commands to perform in the script.
Ping-Subnet -?
Get-Command Ping-Subnet, Test-ConnectionAsync | Format-Table
# Before exiting, remove (unload) the dynamic module.
$dynMod | Remove-Module
[1] This assumes that the New-Module call itself is made outside of a module; if it is made inside a module, at least that module's commands see the auto-exported functions; if that module uses implicit exporting behavior (which is rare and not advisable), the auto-exported functions from the dynamic module would be included in that module's exports and therefore again become available session-globally.
[2] This magic has one limitation, which, however, will only rarely surface: [switch] parameters with a directly attached Boolean argument aren't supported (e.g., -CaseSensitive:$true) - see this answer.

Is it possible to make alias a global via parameter binding?

In PowerShell, an alias can be created in two ways, as below.
Way 1:
function hello() {
    [alias("HelloWorld")]
    param(
        [string] $name
    )
    Write-Host "Hello $name!"
}
Way 2:
Set-Alias HelloWorld hello
With Set-Alias we can pass -Scope and make it global.
Is it possible to make the alias global in the first way?
(The reason for asking is that I have used the first way in my module, but the alias is not visible when calling from another module.)
If you are treating this as a 'real' module (i.e. loading via Import-Module) and not just calling it as an external script, you can export the alias by adding this to the end of your module:
Export-ModuleMember -Alias HelloWorld
Get more info here: Export-ModuleMember
Alternatively, you can add this information to the module manifest, if you are using one.
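For instance, the relevant manifest (.psd1) entry would look something like this:

AliasesToExport = @('HelloWorld')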
If you want to call it as an external script, remove the [Alias()] decoration and add this to the end:
Set-Alias -Name HelloWorld -Value hello
Make sure to dot-source the script. That is, call it like this:
. .\MyScript.ps1
EDIT:
For 'real' modules, aliases, functions, etc. are exported by default. You would use Export-ModuleMember to export only those you want users to see and hide everything else (e.g. to stop them seeing internal helper functions).
boxdog's helpful answer tells you how to define and export aliases the right way.
As for your use of attribute [alias("HelloWorld")] to create a command alias (to allow HelloWorld to invoke command (function) hello):
Even though it happens to work, it isn't documented and shouldn't be relied upon, not least because of potential confusion with parameter aliases (see below):
function hello() {
    [alias("HelloWorld")] # Do NOT define a command alias for 'hello' this way.
    param(
        # ...
To create command aliases, use Set-Alias or New-Alias, as shown in boxdog's answer.
The purpose of the [Alias()] attribute is to create parameter aliases, i.e., to allow a given command's parameters to be referred to by a different name; from the docs, emphasis added:
Declares an alternative name for a parameter
For instance, if you wanted to define -FirstName as an alias for the -Name parameter in your example:
function hello() {
    param(
        [alias('FirstName')] # alias() decorates *parameter* -Name ($Name)
        [string] $Name
    )
    Write-Host "Hello $Name!"
}
For instance, hello -FirstName Mary and hello -Name Mary would then be equivalent (but note that it is the $Name parameter variable that gets bound in both cases).

Referencing text after script is called within PS1 Script

Let's take the PowerShell statement below as an example:
powershell.exe c:\temp\windowsbroker.ps1 IIS
Is it possible to have it scripted within windowsbroker.ps1 to check for that IIS string and, if it's present, run a specific install script? The broker script would be intended to install different applications depending on what string followed it when it was called.
This may seem like an odd question, but I've been using CloudFormation to spin up application environments, and I'm specifying an "ApplicationStack" parameter that will be referenced when the PowerShell script is run so it knows which script to run to install the correct application during boot-up.
What you're trying to do is called argument or parameter handling. In its simplest form PowerShell provides all arguments to a script in the automatic variable $args. That would allow you to check for an argument IIS like this:
if ($args -contains 'iis') {
    # do something
}
or like this if you want the check to be case-sensitive (which I wouldn't recommend, since Windows and PowerShell usually aren't):
if ($args -ccontains 'IIS') {
    # do something
}
However, since apparently you want to use the argument as a switch to trigger specific behavior of your script, there are better, more sophisticated ways of doing this. You could add a Param() section at the top of your script and check if the parameter was present in the arguments like this (for a list of things to install):
Param(
    [Parameter()]
    [string[]]$Install
)

$Install | ForEach-Object {
    switch ($_) {
        'IIS' {
            # do something
        }
        ...
    }
}
or like this (for a single option):
Param(
    [switch]$IIS
)

if ($IIS.IsPresent) {
    # do something
}
You'd run the script like this:
powershell "c:\temp\windowsbroker.ps1" -Install "IIS",...
or like this respectively:
powershell "c:\temp\windowsbroker.ps1" -IIS
Usually I'd prefer switches over parameters with array arguments (unless you have a rather extensive list of options), because with the latter you have to worry about the spelling of the array elements, whereas with switches you get a built-in spell check.
Using a Param() section will also automatically add a short usage description to your script:
PS C:\temp> Get-Help windowsbroker.ps1
windowsbroker.ps1 [-IIS]
You can further enhance this online help to your script via comment-based help.
Using parameters has a lot of other advantages on top of that (even though they probably aren't of that much use in your scenario). You can do parameter validation, make parameters mandatory, define default values, read values from the pipeline, make parameters depend on other parameters via parameter sets, and so on. See here and here for more information.
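For example, a sketch that restricts -Install to a known set of application stacks (the set shown here is made up):

Param(
    [Parameter(Mandatory)]
    [ValidateSet('IIS', 'SQL', 'Apache')]
    [string[]]$Install
)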
Yes, they are called positional parameters. You provide the parameters at the beginning of your script:
Param(
    [string]$appToInstall
)
You could then write your script as follows:
switch ($appToInstall) {
    "IIS" { "Install IIS here" }
}
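With that Param() block, the original call style works as-is, because the first unnamed argument binds positionally to $appToInstall:

powershell.exe c:\temp\windowsbroker.ps1 IIS    # $appToInstall receives 'IIS'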

How can I create a PowerShell "main" function?

I've used many scripting languages in the past, and am now learning PowerShell. In these other languages I typically define my main logic prior to defining my functions, so someone reading the code will focus on the main logic first. Often this takes the form of creating a "main" function or class at the top of the file, and invoking it at the bottom.
In PowerShell this pattern might look like this:
$main = {
    ...
    do-something
    ...
}

function do-something() {
    <function definition here>
}

& $main
This works well, but I now want to leverage PowerShell's ability to run code remotely. Since $main is a PowerShell ScriptBlock object, I [think I] can run that code on a remote machine like this:
Invoke-Command -ScriptBlock $main -ComputerName whatever;
However, the remote machine will know nothing about my function since it is defined outside the scope of $main. I can, of course, move the definition of the function into $main, but then I have to put it above the main logic and I'm right back to my first problem.
Is there a common pattern for writing PowerShell scripts where the main logic is in a function or script block at the top of the file, similar to many traditional languages? How do people write complex scripts -- do they always just write them from top-to-bottom, sprinkling in functions as needed at the top?
Someone has flagged this as a possible duplicate of the question Why do I need to have my functions written first in my Powershell script?. This is not a duplicate of that. I know why functions need to be defined first -- I've been programming since C was the cool new language.
What I'm asking for is the pattern for doing so in PowerShell, particularly in the context of wanting to be able to run the main code remotely.
Is there a common pattern for putting a main block at the top of a PowerShell script? PowerShell syntax and idioms are sufficiently different from more traditional languages, and even many popular modern scripting languages, that the pattern for moving the main logic to the top is not obvious (or maybe it is, but I just haven't found the right examples or documentation).
Scripts are read from top to bottom, so you can't use any references before they are initiated. You could, however, create a script/script block that simulates how programming languages work.
In programming languages like C#, the main part is a function. When the app is done loading the necessary parts (like core functions etc.), an event handler calls the main function to get the party started. With the approach below, you would simulate the same behaviour.
Invoke-Command with a script block
To use this with Invoke-Command, simply wrap the code inside a script block, as below, and use that.
$script = {
    # Main function
    function main {
        # starting helper function
        helper-func
    }

    # Helpers
    function helper-func {
        Write-Host "foo"
    }

    # Entry point
    main
}
Invoke-Command -ComputerName mycomputer -Scriptblock $script
Invoke-Command with a file
Or save it to a file, and call it that way.
Script.ps1
# Main function
function main {
    # starting helper function
    helper-func
}

# Helpers
function helper-func {
    Write-Host "foo"
}

# Entry point
main
Usage:
Invoke-Command -ComputerName mycomputer -FilePath '.\Script.ps1'
If you want to emulate the "traditional" structure in PowerShell, one way to do that is to use the Begin, Process and End keywords.
These can appear in any order in the script, but the Begin block will always run first, so you can put it at the bottom of your script, and put your function definitions there, and have your main logic at the top of the script in a Process or End block.
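A minimal sketch of that layout (the function name is made up; the order of the blocks in the file does not change the execution order, as Begin always runs first):

# script.ps1
end {
    # main logic, kept at the top of the file
    Do-Something
}

begin {
    # definitions, kept at the bottom, yet executed first
    function Do-Something {
        Write-Host 'doing something'
    }
}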
AFAIK, there's no way to define subroutines after the main code in a script file (.ps1), because the code is processed sequentially, and if you call a function before it's defined, you'll get an error.
Subfunctions
If you want to organize it so that it's clearer which parts are the main code and which parts are subroutines, you can declare functions within a function (which also scopes them to the function), but they need to come at the beginning so that they're defined before they're called by the main code:
function main {
    param(
        # [...]
    )

    function sub1 {
        # [...]
    }

    function sub2 {
        # [...]
    }

    # [main code, using sub1 and sub2]
}
Modules
The only way I know of to have the subroutines appear after the main function is to use a PowerShell module. When you import the module, all the functions defined by it are imported before any of the code is executed.
So when you execute the main function all the subroutines are already defined.
Modules Background
In case you're not familiar with modules, it's quite simple. Just put all the code into a .psm1 file (it doesn't have to contain only function definitions, but any other code is executed when the module is imported). Then import it into the session with
Import-Module <path to .psm1 file>
(Note that you need to add the -Force switch to reimport in the same session.)
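A minimal sketch, using a made-up file name:

# MyMain.psm1
function main {
    helper   # defined below, but already available by the time main is called
}

function helper {
    Write-Host 'helper was called'
}

Import it and call the entry point like this:

Import-Module .\MyMain.psm1 -Force
main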
Running modules remotely
If you want to run it remotely, this should work:
Invoke-Command -ScriptBlock {Import-Module \\<unc path to .psm1 file>; main} -ComputerName whatever
I personally prefer passing a ScriptBlock to a remote computer using Invoke-Command. That being said, it's perfectly valid to write an entire "start-to-finish" script file that performs some local task, and then deploy it to the local (and/or remote) computers using the -FilePath parameter of Invoke-Command.
This code example will dynamically generate a PowerShell script file, and then deploy it to the target systems. This way, you don't have to manually manage multiple script files:
A script that performs some task
A "deployment" script that wraps script #1
...
# 1. Declare script path and contents, and array of target systems
$ScriptPath = '{0}\test\script.ps1' -f $env:SystemDrive;
$ScriptContents = 'Get-Process';
$TargetSystems = 'server01', 'server02', 'server03';
# 2. Create / generate script file, and deploy the script to specified computers
mkdir -Path (Split-Path -Path $ScriptPath -Parent) -ErrorAction SilentlyContinue;
Set-Content -Path $ScriptPath -Value $ScriptContents;
Invoke-Command -ComputerName $TargetSystems -FilePath $ScriptPath;
If you're like me, and prefer to use a ScriptBlock, then the entire contents of your script must be contained within the ScriptBlock, before you deploy it with Invoke-Command.
Here is an example:
# 1. Build a ScriptBlock from a PowerShell "Here-String" (multi-line string)
$ScriptBlock = [ScriptBlock]::Create(@'
function Helper1 {
    [CmdletBinding()]
    param (
    )
    begin {
        Write-Host -Object ('{0} was called' -f $PSCmdlet.MyInvocation.InvocationName);
    }
    process { }
    end { }
}

function Helper2 {
    [CmdletBinding()]
    param (
    )
    begin {
        Write-Host -Object ('{0} was called' -f $PSCmdlet.MyInvocation.InvocationName);
    }
    process { }
    end { }
}

function Main {
    [CmdletBinding()]
    param (
    )
    begin {
        Helper1;
        Helper2;
    }
    process { }
    end { }
}

# Call Main
Main;
'@);
# 2. Call the ScriptBlock on localhost
Invoke-Command -ComputerName localhost -ScriptBlock $ScriptBlock;
Output
The output from the above command would look like this:
Helper1 was called
Helper2 was called