PowerShell: How to source a function the way you source a file?

In my main script, I first call an init function to initialize many variables that I expect to be used throughout the script. One way is to use variables with names like $script:var1, which are script-level variables. But that's kind of ugly and I'd like to use normal variable names, so I need a mechanism to source a function just like sourcing a file.
When you source a file, all the variables in that file are available in the calling script.

Use the same dot-operator syntax you use for sourcing files:
. My-Function

You can also put the code in a script block and dot-source that, but the rules are slightly different: you must have a space after the period to dot-source a function, and you don't need one with a script block.
Both of these will produce 42:
# Example 1: dot-source a function (note the space after the dot)
$a = 0
function init { $a = 42 }
. init
$a

# Example 2: dot-source a script block (no space required)
$a = 0
$init = { $a = 42 }
.$init
$a
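Tying this back to the question's init scenario, a minimal sketch might look like this ($var1 and $logDir are made-up names):
# init defines the variables the rest of the script expects
function init {
    $var1 = 'some value'
    $logDir = 'C:\logs'
}

. init   # dot-source the function: $var1 and $logDir land in the caller's scope
"$var1 $logDir"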

Related

Is it possible to dot source a string variable in PowerShell?

I know I can dot source a file:
. .\MyFunctions.ps1
But, I would like to dot source the commands in a string variable:
. $myFunctions
I see that this is possible:
.{$x=2}
And $x equals 2 after the script block is sourced.
But... .{$myFunctions} does not work.
I tried $myFunctions | Invoke-Expression, but it doesn't keep the sourced functions in the current scope. The closest I have been able to come up with is to write the variable to a temporary file, dot-source the file, and then remove the file.
Inevitably, someone will ask: "What are you trying to do?" So here is my use case:
I want to obfuscate some functions I intend to call from another script. I don't want to obfuscate the master script, just my additional functions. I have a user base that will need to adjust the master script to their network, directory structure and other local factors, but I don't want certain functions modified. I would also like to protect the source code. So, an alternate question would be: What are some good ways to protect PowerShell script code?
I started with the idea that PowerShell will execute a Base64-encoded string, but only when passed on the command line with -EncodedCommand.
I first wanted to dot source an encoded command, but I couldn't figure that out. I then decided that it would be "obfuscated" enough for my purposes if I converted my Base64 file into a decoded string and dot-sourced the value of the string variable. However, without writing the decoded source to a file, I cannot figure out how to dot-source it.
It would satisfy my needs if I could Import-Module -EncodedCommand .\MyEncodedFile.dat
Actually, there is a way to achieve that and you were almost there.
First, as you already stated, the source or dot operator works either by providing a path (as string) or a script block. See also: . (source or dot operator).
So, when trying to dot-source a string variable, PowerShell thinks it is a path. But, thanks to the possibility of dot-sourcing script blocks, you could do the following:
# Make sure everything is properly escaped.
$MyFunctions = "function Test-DotSourcing { Write-Host `"Worked`" }"
. { Invoke-Expression $MyFunctions }
Test-DotSourcing
And you successfully dot-sourced your functions from a string variable!
Explanation:
With Invoke-Expression the string is evaluated and run in the child scope (script block).
Then with . the evaluated expressions are added to the current scope.
See also:
Invoke-Expression
About scopes
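To tie this back to the Base64 use case from the question, here is a rough sketch, assuming .\MyEncodedFile.dat holds a Base64 string of UTF-16LE-encoded function definitions (the same encoding -EncodedCommand expects):
$encoded = Get-Content .\MyEncodedFile.dat -Raw
$bytes   = [System.Convert]::FromBase64String($encoded)
$decoded = [System.Text.Encoding]::Unicode.GetString($bytes)
. { Invoke-Expression $decoded }   # the decoded function definitions join the current scope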
While @dwettstein's answer is a viable approach using Invoke-Expression to handle the fact that the function is stored as a string, there are other approaches below that achieve the same result.
One thing I'm not crystal clear on is the scoping itself: Invoke-Expression doesn't create a new scope, so there isn't exactly a need to dot-source at that point...
#Define your function as a string
PS> $MyUselessFunction = "function Test-WriteSomething { 'It works!' }"
#Invoke-Expression would let you use the function
PS> Invoke-Expression $MyUselessFunction
PS> Test-WriteSomething
It works!
#Dot sourcing works fine if you use a script block
PS> $ScriptBlock = [ScriptBlock]::Create($MyUselessFunction)
PS> . $ScriptBlock
PS> Test-WriteSomething
It works!
#Or just create the function as a script block initially
PS> $MyUselessFunction = {function Test-WriteSomething { 'It works!' }}
PS> . $MyUselessFunction
PS> Test-WriteSomething
It works!
In other words, there are probably a myriad of ways to get something similar to what you want - some of them documented, and some of them divined from the existing documentation. If your functions are defined as strings, then Invoke-Expression might be needed, or you can convert them into script blocks and dot source them.
At this time it is not possible to dot source a string variable.
I stand corrected! . { Invoke-Expression $MyFunctions } definitely works!

How can I pass unbound arguments from one script as parameters to another?

I have little experience with PowerShell in particular.
I'm trying to refactor some very commonly re-used code into a single script that can be sourced where it's needed, instead of copying and pasting this same code into n different scripts.
The scenario I'm trying to get looks (I think) like this:
#common.ps1:
param(
# Sure would be great if clients didn't need to know about these
$some_params_here
...
)
function Common-Func-Uses-Params {
...
}
⋮
# foo/bar/bat.ps1:
# sure would love not to have to redefine all the common params() here...
. common.ps1 <pass-the-arguments>
Common-Func-Uses-Params $specific_Foo/Bar/Bat_Data
As the pseudo-comments above indicate, I've only been able to do this so far by capturing the params in the calling script as well.
I want to be in a situation where I can update the common code (say with a -Debug or -DryRun or -Url or whatever parameter) and not have to worry about updating all of the client code to match.
Is this possible?
You're missing two key things:
$args - which captures all of (and only) the unbound arguments to the script
splatting (@) - which is used to pass arrays or hashtables to a command rather than flattening them as you would with the usual $ prefix
When you combine these, you can easily pass all arguments onto another script, like so:
# foo.ps1
. common.ps1 @args
With a sourced file like this:
#common.ps1
param ([string]$foo = "foo")
echo "`$foo is $foo"
You get this output:
> foo.ps1 returns $foo is foo
> foo.ps1 -Foo bar returns $foo is bar
Note that, if you're trying to use the PowerShell ISE, it might take you a while to figure this out or debug any of it. When you're in the debugger, both $args and $MyInvocation.UnboundArguments will do their best to hide that information from you. They'll appear to be completely empty.
You can print the args with >> echo "$(@args)", but that also has the very weird side effect of telling the debugger to continue. I think the splatting is adding an extra newline and that's ending up in the Command Window.
The best workaround I have for that is to add $theargs = $args at the top of your script and remember to use $theargs in the debugger.
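As a concrete sketch of that workaround (the script name is made up):
# debug-args.ps1
$theargs = $args   # snapshot $args before the ISE debugger masks it
"Received $($theargs.Count) unbound argument(s): $theargs"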

What is the difference between dot (.) and ampersand (&) in PowerShell?

In PowerShell, what is the difference between using dot (.) and ampersand (&) when invoking a cmdlet, function, script file, or operable program?
For example:
. foo.ps1
& foo.ps1
There is a very similar question which has been incorrectly closed as a duplicate: Differences between ampersand (&) and dot (.) while invoking a PowerShell scriptblock. The questions are different and have completely different keywords and search rankings. The answer on What is the `.` shorthand for in a PowerShell pipeline? only answers half the question.
The difference between the . and & operators matters only when calling PowerShell scripts or functions (or their aliases) - for cmdlets and external programs, they act the same.
For scripts and functions, . and & differ with respect to scoping of the definition of functions, aliases, and variables:
&, the call operator, executes scripts and functions in a child scope, which is the typical use case: functions and scripts are usually expected to execute without side effects:
The variables, (nested) functions, aliases defined in the script / function invoked are local to the invocation and go out of scope when the script exits / function returns.
Note, however, that even a script run in a child scope can affect the caller's environment, such as by using Set-Location to change the current location, explicitly modifying the parent scope (Set-Variable -Scope 1 ...) or the global scope ($global:...) or defining process-level environment variables.
., the dot-sourcing operator, executes scripts and functions in the current scope and is typically used to modify the caller's scope by adding functions, aliases, and possibly variables for later use. For instance, this mechanism is used to load the $PROFILE file that initializes an interactive session.
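A quick way to observe the difference, assuming a hypothetical helpers.ps1 that contains only function Get-Greeting { 'hello' }:
& .\helpers.ps1
Get-Greeting   # error: the definition died with the child scope
. .\helpers.ps1
Get-Greeting   # hello - the definition was added to the current scope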
The caveat is that for functions (as opposed to scripts) the reference scope for child vs. current is not necessarily the caller's scope: if the function was defined in a module, the reference scope is that module's scope domain:
In other words: trying to use . with a module-originated function is virtually pointless, because the scope getting modified is the module's.
That said, functions defined in modules aren't usually designed with dot-sourcing in mind anyway.
You can also run things inside a module scope with the call operator, from my notes from Windows Powershell in Action.
# get and set a variable in module scope
$m = get-module counter
& $m Get-Variable count
& $m Set-Variable count 33
# see func def
& $m Get-Item function:Get-Count
# redefine func in memory
& $m {
    function script:Get-Count
    {
        return $script:count += $increment * 2
    }
}
# get original func def on disk
Import-Module .\counter.psm1 -Force
A few other things:
# run with commandinfo object
$d = get-command get-date
& $d
# call anonymous function
& {param($x,$y) $x+$y} 2 5
# same with dot operator
. {param($x,$y) $x+$y} 2 5
As Mathias mentioned in a comment in this thread, & invokes whatever expression comes after it, while . invokes it in the current scope and is normally used to dot-source a helper file containing functions, making them available in the caller's scope.
See https://mcpmag.com/articles/2017/02/02/exploring-dot-sourcing-in-powershell.aspx for more on dot sourcing.
To make the distinction clear, let me explain a scenario.
Imagine you have a function named MyFunction in source.ps1 and you wish to use that function in another script (MyCustomScript.ps1).
Put a line like the one below in MyCustomScript.ps1 and you should be able to use it:
. path\of\the\source.ps1
MyFunction
So you are using the function from source.ps1 in your custom script.
&, on the other hand, is the call operator in PowerShell, which helps you call outside executables like psexec and others.
Invoking a command (either directly or with the call operator) creates a child scope that is discarded once the command has executed. If the command assigns to a variable that also exists in the caller's scope, the assignment creates a copy in the child scope, so the change is lost when that scope ends.
To avoid this drawback and keep any changes made to such variables, you can dot the script, which always executes it in your current scope.
Dot sourcing runs the function or script within the current scope; the call operator (&) runs it as usual, but never adds anything to the current scope.
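For instance, here is a sketch with a hypothetical bump.ps1 containing only the line $counter++:
$counter = 1
& .\bump.ps1   # increments a copy in the child scope, which is then discarded
$counter       # still 1
. .\bump.ps1   # runs in the current scope
$counter       # now 2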
Hope this gives an idea on when to use what.

Executing a Powershell function (which takes parameters) from Powershell command shell

I have a PowerShell script which is a function and takes parameters. From within the PowerShell command shell, how do I execute the function? It seems like it works differently for different users.
Is your script simply a script, or does it contain a function? If it is a script and takes parameters, it will look something like this:
-- top of file foo.ps1 --
param($param1, $param2)
<script here>
You invoke that just like a cmdlet, except that if you are running it from the current dir you have to specify the path to the script, like so:
.\foo.ps1 a b
Also note that you pass arguments to scripts (and functions) space separated just like you do with cmdlets.
You mentioned a function, so if your script looks like this, you have a couple of options:
-- top of file foo.ps1 --
function foo ($param1, $param2) {
<script here>
}
If you run foo.ps1 as above, nothing will happen other than defining a function called foo in a temporary scope, and that scope will go away when the script exits. You could add a line to the bottom of the script that actually calls the foo function. But perhaps you intend to use this script more as a reusable function library. In that case you probably want to load the functions into the current scope. You can do that with the dot-source operator . like so:
C:\PS> . .\foo.ps1
C:\PS> foo a b
Now function foo will be defined at the global level. Note that you could do the same thing within another script which will load that function into the script's scope.
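For completeness, the other option mentioned above, a call line at the bottom of the script, would look roughly like this:
-- top of file foo.ps1 --
function foo ($param1, $param2) {
    "param1: $param1, param2: $param2"
}
foo $args[0] $args[1]   # invoke the function whenever the script itself runs
With that, .\foo.ps1 a b runs the function directly.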
Take a look at this post; maybe it is the right one for you:
How to pass command-line arguments to a PowerShell ps1 file
Anyway, the built-in $args variable is an array that holds all the command line arguments.
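For example, in a script with no param() block (show-args.ps1 is a made-up name):
# show-args.ps1
"Count: $($args.Count)"
"First: $($args[0])"
Running .\show-args.ps1 one two prints Count: 2 and First: one.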

PowerShell variable collisions

I have a variable named "emails" that is common to most of my app. I also want to use "emails" as the name of a parameter in one of the scripts, and I need to refer to the value of both variables in the same script. Ideally there would be a way to refer to a variable by module/namespace or something, and perhaps there is, but I don't know it. You can see below how I hack around this, but it is ugly and prone to error. Is there a better way?
# PowerShell v1
# Some variable names are very common.
param ($emails)
# My Hack
# We need to save the current value so we have it after we source in the variables below.
$emails0=$emails
# The line below is going to load a variable called "emails" which will overwrite the param above.
. C:\load_a_bunch_of_global_variables.ps1
That happens because, as the documentation says, the dot sourcing operator "runs a script so that the items in the script are part of the calling scope."
In this case I would convert C:\load_a_bunch_of_global_variables.ps1 to a module and pass $emails as parameter or export a function that sets the $script:emails variable in the module. Then the variable will not be in a conflict with the variable in the parent script.
For more information about modules you can use get-help about_modules.
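A rough sketch of that module approach (the file and function names are made up):
# GlobalVariables.psm1
$script:emails = $null
function Set-Emails { param($Value) $script:emails = $Value }
function Get-Emails { $script:emails }
Export-ModuleMember -Function Set-Emails, Get-Emails
After Import-Module .\GlobalVariables.psm1, the module's $emails lives in the module's own scope and can no longer collide with the script's $emails parameter.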
I would avoid using global variables if possible in my scripts.
Why? Because it is a code smell (as programmers say). With one script there is no problem. If two scripts use the same global variable and only read, it is maybe acceptable. But if any of them changes the value, then there might be unpleasant conflicts.
In some cases Get-Variable -scope 1 -name myvariable would help, but I would use it only in closed pieces of code like modules or in short scripts (the same reason as with global variables).
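For reference, reading the caller's variable one level up the call stack looks like this:
(Get-Variable -Name emails -Scope 1).Value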
While you can use Get-Variable -scope to get access to variables at arbitrary levels of the call stack, it is easier in this case to grab the top level (to the script) variable using the script: modifier e.g.
$script:emails
rerun and stej both helped me out.
I still want to source in the file using ". file.ps1", but changing "$emails=foo@yahoo.com" in my load_a_bunch_of...ps1 file to "$global:emails=foo@yahoo.com" solved the problem. I can now refer to the variable using the global keyword when I have both a local and a global variable, and when there is only one variable to deal with I can leave out the global keyword.
You can always access your global variables from a script using $global:varname. Inside your script you have local scope, so you won't get collisions; but if you dot-source your script, you will override the global var.
For example, if I have a script
$Crap ="test"
$Crap
and you run the following commands, you get what you want. In line 2 we run the script and the var doesn't conflict, but if you run the script with a dot source as in line 4, you get what you are discovering, which is due to the way the . operator works:
1:PS C:\Users\Adam> $crap = "hi"
2:PS C:\Users\Adam> .\test.ps1
test
3:PS C:\Users\Adam> $crap
hi
4:PS C:\Users\Adam> . .\test.ps1
test
5:PS C:\Users\Adam> $crap
test
6:PS C:\Users\Adam>
If you add the following line to the script and run it:
$global:crap;
you will get
PS C:\Users\Adam> .\test.ps1
test
hi