I want to merge a script/function that has multiple dot-sourced scripts into a single .ps1 script/function. Each included script may also have its own includes, and so on.
=== EDIT ===
I guess you need to be painfully obvious here on SO, so let me give a trivial example:
first.ps1
. $PSScriptRoot\inc\second.ps1
"first"
second.ps1
"second"
Given the existence of a function Merge that accepts the main script and produces a merged script:
Merge first.ps1 first-merged.ps1
the final script will look like this:
first-merged.ps1
"second"
"first"
This is far from trivial to do, given that you can dot source in a bunch of different ways, for instance in a loop.
I suppose the "PowerShell reader" creates something like this internally, so perhaps there is a way to obtain it.
You're looking for something like the C preprocessor? That is, merge the contents without actually executing the script, right? AFAIK PowerShell doesn't delineate between dot sourcing and script execution. Dot sourcing is just another command. So you could either A) do a transitive search via regex of files that are dot sourced or, if you're up for a challenge, B) use the AST to help find dot sourced files, e.g.:
(Get-Command .\first.ps1).ScriptBlock.Ast.EndBlock.Statements.PipelineElements | Where InvocationOperator -eq Dot
Outputs:
CommandElements : {$PSScriptRoot\second.ps1}
InvocationOperator : Dot
DefiningKeyword :
DefinedKeywords :
Redirections : {}
Extent : . $PSScriptRoot\second.ps1
Parent : . $PSScriptRoot\second.ps1
And of course, you'd have to chase down all these dot sourced files to do the same to them (to achieve transitive closure). But as you mention, this can be challenging if say the path contains a variable that you don't know until runtime.
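To sketch what option B could grow into, here is a minimal, illustrative Merge function of my own (an assumption-laden sketch, not a tested tool). It only handles top-level, literal dot-source statements of the form ". $PSScriptRoot\relative\path.ps1"; dynamic paths, loops, and dot sourcing inside functions are exactly the hard cases it punts on:
function Merge-Script {
    param (
        [Parameter(Mandatory)][string]$Path,
        [Parameter(Mandatory)][string]$Destination
    )
    function Expand-File ([string]$File) {
        $root = Split-Path -Parent (Resolve-Path $File)
        # Parse the file the same way the engine's "reader" does:
        $ast = [System.Management.Automation.Language.Parser]::ParseFile(
                   $File, [ref]$null, [ref]$null)
        foreach ($stmt in $ast.EndBlock.Statements) {
            $dot = $null
            if ($stmt -is [System.Management.Automation.Language.PipelineAst]) {
                $first = $stmt.PipelineElements[0]
                if ($first -is [System.Management.Automation.Language.CommandAst] -and
                    $first.InvocationOperator -eq 'Dot') {
                    $dot = $first
                }
            }
            if ($dot) {
                # Inline the included file where the dot-source line was,
                # resolving $PSScriptRoot against the including file's folder.
                $target = $dot.CommandElements[0].Extent.Text -replace '^\$PSScriptRoot', $root
                Expand-File $target
            }
            else {
                $stmt.Extent.Text   # emit the statement unchanged
            }
        }
    }
    Expand-File (Resolve-Path $Path) | Set-Content $Destination
}
With the question's two files, Merge-Script .\first.ps1 .\first-merged.ps1 would emit "second" followed by "first", matching the desired output.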
You haven't specifically asked a question here, but I assume you meant to say "how do I do this". It is a very straightforward process.
For example, from Script1.ps1 (below) we can access functions in Script3.ps1 and Script4.ps1 by simply dot sourcing a single Script2.ps1 which contains these references.
Script1.ps1
. $PSScriptRoot\Script2.ps1
Get-Script3Name
Get-Script4Name
Script2.ps1
#dot source all the scripts you need access to
. $PSScriptRoot\Script3.ps1
. $PSScriptRoot\Script4.ps1
Script3.ps1
Function Get-Script3Name
{
"This is Script3"
}
Script4.ps1
Function Get-Script4Name
{
"This is Script4"
}
Results when running Script1.ps1
This is Script3
This is Script4
For more information, I also recommend reading this older post, which has some very good answers.
Related
I know I can dot source a file:
. .\MyFunctions.ps1
But, I would like to dot source the commands in a string variable:
. $myFunctions
I see that this is possible:
.{$x=2}
And $x equals 2 after the script block is sourced.
But... .{$myFunctions} does not work.
I tried $myFunctions | Invoke-Expression, but it doesn't keep the sourced functions in the current scope. The closest I have been able to come up with is to write the variable to a temporary file, dot source the file, and then remove the file.
Inevitably, someone will ask: "What are you trying to do?" So here is my use case:
I want to obfuscate some functions I intend to call from another script. I don't want to obfuscate the master script, just my additional functions. I have a user base that will need to adjust the master script to their network, directory structure and other local factors, but I don't want certain functions modified. I would also like to protect the source code. So, an alternate question would be: What are some good ways to protect PowerShell script code?
I started with the idea that PowerShell will execute a Base64-encoded string, but only when passed on the command line with -EncodedCommand.
I first wanted to dot source an encoded command, but I couldn't figure that out. I then decided that it would be "obfuscated" enough for my purposes if I converted my Base64 file into a decoded string and dot sourced the value of the string variable. However, without writing the decoded source to a file, I cannot figure out how to dot source it.
It would satisfy my needs if I could Import-Module -EncodedCommand .\MyEncodedFile.dat
Actually, there is a way to achieve that and you were almost there.
First, as you already stated, the source or dot operator works either by providing a path (as string) or a script block. See also: . (source or dot operator).
So, when trying to dot-source a string variable, PowerShell thinks it is a path. But, thanks to the possibility of dot-sourcing script blocks, you could do the following:
# Make sure everything is properly escaped.
$MyFunctions = "function Test-DotSourcing { Write-Host `"Worked`" }"
. { Invoke-Expression $MyFunctions }
Test-DotSourcing
And you successfully dot-sourced your functions from a string variable!
Explanation:
With Invoke-Expression the string is evaluated and run in the child scope (script block).
Then with . the evaluated expressions are added to the current scope.
See also:
Invoke-Expression
About scopes
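To tie this back to your Base64 use case, here is a minimal sketch (MyEncodedFile.dat and its UTF-16LE/Base64 encoding are my assumptions, matching what -EncodedCommand expects) that decodes and dot-sources without a temporary file:
# Read the Base64 text, decode it, and dot-source the result as a script block.
$encoded = Get-Content .\MyEncodedFile.dat -Raw
$decoded = [System.Text.Encoding]::Unicode.GetString(
               [System.Convert]::FromBase64String($encoded))
. ([ScriptBlock]::Create($decoded))   # definitions land in the current scope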
While @dwettstein's answer is a viable approach using Invoke-Expression to handle the fact that the function is stored as a string, there are other approaches below that achieve the same result.
One thing I'm not crystal clear on is the scoping itself: Invoke-Expression doesn't create a new scope, so there isn't exactly a need to dot source at that point...
#Define your function as a string
PS> $MyUselessFunction = "function Test-WriteSomething { 'It works!' }"
#Invoke-Expression would let you use the function
PS> Invoke-Expression $MyUselessFunction
PS> Test-WriteSomething
It works!
#Dot sourcing works fine if you use a script block
PS> $ScriptBlock = [ScriptBlock]::Create($MyUselessFunction)
PS> . $ScriptBlock
PS> Test-WriteSomething
It works!
#Or just create the function as a script block initially
PS> $MyUselessFunction = {function Test-WriteSomething { 'It works!' }}
PS> . $MyUselessFunction
PS> Test-WriteSomething
It works!
In other words, there are probably a myriad of ways to get something similar to what you want - some of them documented, and some of them divined from the existing documentation. If your functions are defined as strings, then Invoke-Expression might be needed, or you can convert them into script blocks and dot source them.
At this time it is not possible to dot source a string variable.
I stand corrected! . { Invoke-Expression $MyFunctions } definitely works!
I'm trying to make use of module-qualified names[1] and a DefaultCommandPrefix and not have it break if the module is imported with Import-Module -Prefix SomethingElse. Maybe I'm just doing something really wrong here, or those two features aren't meant to be used together.
Inside the main module file, using "ModuleName\Verb-PrefixNoun args..." works as long as "Prefix" matches the DefaultCommandPrefix in the manifest (the module-qualified syntax seems to require the prefix used for the import[2]). But when importing the module with a different prefix, all module-qualified references inside the module break.
After a bit of searching and trial and error, the least horrible solution I've managed to get working is the hackish approach below. But I can't help wondering if there isn't some better way that automatically handles the prefix. Just as Import-Module obviously manages to add the prefix, my first naive thought was that using plain ModuleName\Verb-Noun would automatically append any prefix to the noun, but evidently not[2].
So this is the hack I came up with: it looks up the module's prefix and appends it, then uses "." or "&" to expand/invoke the command:
# (imagine this code in the `ModuleName.psm1`, and a manifest with some `DefaultCommandPrefix`)
Function MQ {
param (
[Parameter()][string]
$Verb,
[Parameter()][string]
$Noun,
[string]
$Module='ModuleName'
)
"$Module\$Verb-$((Get-Module $Module).Prefix)$Noun"
}
Function Verb-Noun {
# This works even when importing with a prefix,
# but can I be guaranteed that it's not some
# other module's cmdlet?
Verb-OtherNoun 1 2 3 '...'
#ModuleName\Verb-OtherNoun 1 2 3 '...'
. (MQ 'Verb' 'OtherNoun') 1 2 3 '...'
# or:
& (MQ 'Verb' 'OtherNoun') 1 2 3 '...'
}
(MQ could be made more user-friendly by also accepting a single string, MQ "Verb-Noun", and splitting/recombining automatically, and so on, etc., with all the usual disclaimers.)
Note: I know it would be possible to hard-code the name instead of using DefaultCommandPrefix, e.g. as PSReadLine does (and a bunch of other modules). But, to be honest that feels like a workaround.
Just calling Verb-OtherNoun seems fragile to me, as the most recently loaded one is used[3]. So I would imagine that, for example, adding an Import-Module statement just before the call, with a module that exports a Verb-OtherNoun, would cause the wrong (not this module's) cmdlet to be called. (Perhaps a more real-world scenario is a module being loaded after this module has been loaded, but before calling Verb-Noun.)
Is there perhaps some syntax for module-qualification I'm not aware of that would do something akin to Import-Module (e.g. Module\\Verb-Noun or Module\Verb+Noun, which would resolve and inject Module's prefix)? And now that I think of it, is there some reason why Module\Verb-Noun doesn't handle prefixes, or is it just that no one wrote the code for it? (I can't see how it would break things more than how using DefaultCommandPrefix would break v2/v3[2].)
[1] https://www.sapien.com/blog/2015/10/23/using-module-qualified-cmdlet-names/
[2] https://github.com/PoshCode/PowerShellPracticeAndStyle/issues/23#issuecomment-106843619
[3] https://stackoverflow.com/a/22259706/13648152
You can avoid the problem by using neither a module qualifier nor a noun prefix when you call your module's own functions.
That is, call them as Verb-Noun, exactly as named in the target function's implementation.
This is generally safe, because your own module's functions take precedence over any commands of the same name defined outside your module.
The sole exception is if an alias defined in the global scope happens to have the same name as one of your functions - but that shouldn't normally be a concern, because aliases are used for short names that do not follow the verb-noun naming convention.
It also makes sense in that it allows you to call your functions module-internally with an invariable name - a name that doesn't situationally change, depending on what the caller decided to choose as a noun prefix via Import-Module -Prefix ....
Think of the prefix feature as being a caller-only feature that is unrelated to your module's implementation.
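A minimal sketch of that precedence (module and function names are mine, purely illustrative):
# MyModule.psm1 (hypothetical)
function Get-Thing     { "MyModule's Get-Thing" }
function Test-Internal { Get-Thing }   # invariable, prefix-free internal call
Export-ModuleMember -Function Test-Internal
Even if the caller later defines its own Get-Thing function, Test-Internal keeps resolving to the module's copy; a same-named global alias is the one exception noted above.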
As an aside: As of PowerShell 7.0, declaring a default noun prefix via the DefaultCommandPrefix module-manifest property doesn't properly integrate with the module auto-loading and command-discovery features - see this GitHub issue.
I have little experience with PowerShell in particular.
I'm trying to refactor some very commonly re-used code into a single script that can be sourced where it's needed, instead of copying and pasting this same code into n different scripts.
The scenario I'm trying to get looks (I think) like this:
#common.ps1:
param(
# Sure'd be great if clients didn't need to know about these
$some_params_here
...
)
function Common-Func-Uses-Params {
...
}
⋮
# foo/bar/bat.ps1:
# sure would love not to have to redefine all the common params() here...
. common.ps1 <pass-the-arguments>
Common-Func-Uses-Params $specific_Foo/Bar/Bat_Data
As the pseudo-comments above indicate, I've only been able to do this so far by capturing the params in the calling script as well.
I want to be in a situation where I can update the common code (say with a -Debug or -DryRun or -Url or whatever parameter) and not have to worry about updating all of the client code to match.
Is this possible?
You're missing two key things:
$args - which captures all of (and only) the unbound arguments to the script
splatting (@) - which is used to pass arrays or hashtables to a command rather than flattening them like you'd get with $args
When you combine these, you can easily pass all arguments onto another script, like so:
# foo.ps1
. common.ps1 @args
With a sourced file like this:
#common.ps1
param ([string]$foo = "foo")
echo "`$foo is $foo"
You get this output:
> foo.ps1 returns $foo is foo
> foo.ps1 -Foo bar returns $foo is bar
Note that, if you're trying to use the PowerShell ISE, it might take you a while to figure this out or debug any of it. When you're in the debugger, both $args and $MyInvocation.UnboundArguments will do their best to hide that information from you. They'll appear to be completely empty.
You can print the args with >> echo "$(@args)", but that also provides the very weird side effect of telling the Debugger to continue. I think the splatting is adding an extra newline and that's ending up in the Command Window.
The best workaround I have for that is to add $theargs = $args at the top of your script and remember to use $theargs in the debugger.
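That is, something like this at the top of the sourced script (a debugging aid only; the splatting itself doesn't need it):
param ([string]$foo = "foo")
$theargs = $args   # copy: $args appears empty in the ISE debugger, $theargs won't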
This is a weird one. Normally when I execute an external command from powershell I use the & operator like this:
& somecommand.exe -p somearguments
However, today I came across the . operator used like this:
.$env:systemdrive\chocolatey\chocolateyinstall\chocolatey.cmd install notepadplusplus
What purpose does the period serve in this scenario? I don't get it.
The "." dot sourcing operator will send AND receive variables from other scripts you have called. The "&" call operator will ONLY send variables.
For instance, considering the following:
Script 1 (call-operator.ps1):
clear
$funny = "laughing"
$scriptpath = split-path -parent $MyInvocation.MyCommand.Definition
$filename = "laughing.ps1"
"Example 1:" # Call another script. Variables are passed only forward.
& $scriptpath\$filename
"Example 2:" # Call another script. Variables are passed backwards and forwards.
. $scriptpath\$filename
$variableDefinedInOtherScript
Script 2 (laughing.ps1):
# This is to test the passing of variables from call-operator.ps1
"I am $funny so hard. Passing variables is so hilarious."
$variableDefinedInOtherScript = "Hello World!"
Create both scripts and ONLY run the first one. You'll see that the "." dot sourcing operator sends and receives variables.
Both have their uses, so be creative. For instance, the "&" call operator would be useful if you wanted to modify the value(s) of variables in another script while preserving the original value(s) in your current script. Kinda a safeguard. ;)
The Short:
It is a Special Operator used to achieve what regular operators cannot achieve. This particular operator . actually has two distinctly different Special Operator use cases.
The Long:
As with any other language, scripting or otherwise, PowerShell also supports many different types of operators to help manipulate values. These regular operators include:
Arithmetic
Assignment
Comparison
Logical
Redirection
Split and Join
Type
Unary
However, PowerShell also supports what's known as Special Operators, which are used to perform tasks that cannot be performed by the other types of operators.
These Special Operators Include:
@( ) Array subexpression operator
& Call operator
[ ] Cast operator
, Comma operator
. Dot sourcing operator
-f Format operator
[ ] Index operator
| Pipeline operator
. Property dereference operator
.. Range operator
:: Static member operator
$( ) Subexpression operator
. Dot sourcing operator: used in this context to run a script in the current scope, essentially allowing any functions, aliases, and variables that the script creates to be added to the current scope.
Example:
. c:\scripts\sample.ps1
Note that this application of the . Special Operator is followed by a space to distinguish it from the (.) symbol that represents the current directory.
Example:
. .\sample.ps1
. Property dereference operator: allows access to the properties and methods of an object, by indicating that the expression on the left side of the . character is an object and the expression on the right side of the . is an object member (a property or method).
Example:
$myProcess.peakWorkingSet
(get-process PowerShell).kill()
Disclaimer & Sources:
I had the same question while looking at a PowerShell script whose feature set I was trying to expand, and landed here while doing my research for the answer. However, I managed to find my answer using this magnificent write-up on the Microsoft Developer Network, supplemented with this further expansion of the same ideas from IT Pro.
Cheers.
The dot is a call operator:
$a = "Get-ChildItem"
. $a # (executes Get-ChildItem in the current scope)
In your case, however, I don't see what it does.
.Period or .full stop for an object's properties, like:
$CompSys.TotalPhysicalMemory
See here: http://www.computerperformance.co.uk/powershell/powershell_syntax.htm#Operators_
This answer is to expand slightly upon those already provided by David Brabant and his commenters. While those remarks are all true and pertinent, there is something that has been missed.
The OP's use of & when invoking external commands is unnecessary. Omitting the & would have no effect (on the example of his usage). The purpose of & is to allow the invocation of commands whose names are the values of a (string) expression. By using the & above, powershell then (essentially) treats the subsequent arguments as strings, the first of which is the command name that & duly invokes. If the & were omitted, powershell would take the first item on the line as the command to execute.
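A minimal illustration of that point (somecommand.exe is a stand-in name):
$cmd = "somecommand.exe"           # command name held in a (string) expression
& $cmd -p somearguments            # works: & invokes the expression's value
somecommand.exe -p somearguments   # also works: literal name, & is redundant
# $cmd -p somearguments            # error: a line starting with an expression
                                   # is not parsed as a command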
However, the . in the second example is necessary (although, as noted by others, & would work just as well in this case). Without it, the command line would begin with a variable access ($env:systemdrive) and so powershell would be expecting an expression of some form. However, immediately following the variable reference is a bare file path which is not a valid expression and will generate an error. By using the . (or &) at the beginning of the line, it is now treated as a command (because the beginning doesn't look like a valid expression) and the arguments are processed as expandable strings (" "). Thus, the command line is treated as
. "$env:systemdrive\chocolatey\chocolateyinstall\chocolatey.cmd" "install" "notepadplusplus"
The first argument has $env:systemdrive substituted into it and then . invokes the program thus named.
Note: the full description of how powershell processes command line arguments is way more complicated than that given here. This version was cut down to just the essential bits needed to answer the question. Take a look at about_Parsing for a comprehensive description. It is not complete but should cover most normal usage. There are other posts on stackoverflow and github (where powershell now resides) that cover some of the seemingly quirky behaviour not listed in the official documentation. Another useful resource is about_Operators, though again this isn't quite complete. An example is the equivalence of . and & when invoking something other than a powershell script/cmdlet, which gets no mention at all.
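To see that last equivalence for yourself (using cmd.exe via $env:ComSpec as a handy external program):
& $env:ComSpec /c "echo hello"   # call operator
. $env:ComSpec /c "echo hello"   # dot operator: identical result for an .exe,
                                 # since there is no powershell scope to share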
I have a variable that is common to most of my app called "emails". I also want to use "emails" as the name of a parameter in one of the scripts. I need to refer to the value of both variables in the same script. Ideally there would be a way to refer to them by module/namespace or something, and perhaps there is, but I don't know it. You can see how I hack around this below, but it is ugly and prone to error. Is there a better way?
# PowerShell v1
# Some variable names are very common.
param ($emails)
# My Hack
# We need to save current value so we have it after we source in variables below.
$emails0=$emails
# Below is going to load a variable called "emails" which will overwrite the param above.
. C:\load_a_bunch_of_global_variables.ps1
This is because, as the documentation says, the dot sourcing operator "Runs a script so that the items in the script are part of the calling scope."
In this case I would convert C:\load_a_bunch_of_global_variables.ps1 to a module and pass $emails as parameter or export a function that sets the $script:emails variable in the module. Then the variable will not be in a conflict with the variable in the parent script.
For more information about modules you can use get-help about_modules.
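For example, the module version of C:\load_a_bunch_of_global_variables.ps1 might look roughly like this (file and function names are mine, purely illustrative):
# CommonVariables.psm1 (hypothetical): the variable lives in the module's
# own script scope, so it cannot collide with the caller's $emails param.
$script:emails = 'foo@yahoo.com'
function Get-CommonEmails { $script:emails }
function Set-CommonEmails ([string]$Value) { $script:emails = $Value }
Export-ModuleMember -Function Get-CommonEmails, Set-CommonEmails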
I would avoid using global variables if possible in my scripts.
Why? Because it is a code smell (as programmers say). With one script there is no problem. If two scripts use the same global variable and only read, it is maybe acceptable. But if any of them changes the value, then there might be unpleasant conflicts.
In some cases Get-Variable -scope 1 -name myvariable would help, but I would use it only in closed pieces of code like modules or in short scripts (the same reason as with global variables).
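A tiny sketch of that Get-Variable approach:
# From inside a function, -Scope 1 reads the immediate parent scope:
function Get-ParentEmails { (Get-Variable -Name emails -Scope 1).Value }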
While you can use Get-Variable -scope to get access to variables at arbitrary levels of the call stack, it is easier in this case to grab the top level (to the script) variable using the script: modifier e.g.
$script:emails
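A self-contained sketch of the shadowing (the addresses are made up):
$emails = 'global@example.com'      # script-scope variable
function Send-Report {
    param ($emails)                 # parameter shadows the outer variable
    "local : $emails"
    "script: $script:emails"        # script: modifier reaches the outer one
}
Send-Report -emails 'local@example.com'
# local : local@example.com
# script: global@example.com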
rerun and stej both helped me out.
I still want to source in the file using ". file.ps1", but changing "$emails=foo@yahoo.com" in my load_a_bunch_of...ps1 file to "$global:emails=foo@yahoo.com" solved the problem. I can now refer to the variable using the global keyword when I have both a local and a global variable, and when there is only one variable to deal with I can leave out the global keyword.
You can always access your global variables from a script using $global:varname. Inside your script you have local scope, so you won't get collisions. If you dot source your script, you will override the global var.
For example, if I have a script:
$Crap ="test"
$Crap
and you run the following commands, you get what you want. On line 2 we run the script and the variable doesn't conflict, but if you run the script as on line 4 with a . source, you get what you are discovering, which is due to the way the . operator works:
1:PS C:\Users\Adam> $crap = "hi"
2:PS C:\Users\Adam> .\test.ps1
test
3:PS C:\Users\Adam> $crap
hi
4:PS C:\Users\Adam> . .\test.ps1
test
5:PS C:\Users\Adam> $crap
test
6:PS C:\Users\Adam>
If you add the following line to the script and run it,
$global:crap;
you will get
PS C:\Users\Adam> .\test.ps1
test
hi