Run one PowerShell script from another without inheriting variables and scope

Normally in PowerShell this works:
# parent.ps1
$x = 1
&"$PSScriptRoot/child.ps1"
# child.ps1
Write-Host $x
When parent.ps1 runs, it prints 1, since child.ps1 has inherited $x.
Can I prevent this for my script?
I can do $private:x = 1, but parent has many variables, so it's verbose and error-prone.
Is there a way to call child.ps1 without inheriting scope?
Or maybe a way to mark everything in parent private?

No, short of defining all variables in the calling scope (and its ancestral scopes) with the $private: scope, you cannot prevent PowerShell's dynamic scoping.
That is, creating a variable in a given scope (without $private:) makes it visible to all its descendant scopes, such as the child scope in which a script (invoked directly or via &) runs.
Also, certain automatic (built-in) variables are defined with the AllScope option, which invariably makes them visible in all scopes, not just descendant ones.
Workarounds:
In-process:
Call your script via a thread job, using Start-ThreadJob (PowerShell v6+) or with ForEach-Object -Parallel (v7+); e.g.:
1 | ForEach-Object -Parallel { & "$using:PSScriptRoot/child.ps1" }
Thread jobs and the threads created by ForEach-Object -Parallel do not inherit the caller's state (with the exception of the current location in v7+)[1].
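For example, a Start-ThreadJob equivalent might look like the following (a minimal sketch based on the question's parent.ps1 / child.ps1 pair; note that $using: is needed to reference the caller's $PSScriptRoot from inside the thread):
# parent.ps1 (sketch)
$x = 1
Start-ThreadJob { & "$using:PSScriptRoot/child.ps1" } | Receive-Job -Wait -AutoRemoveJob
# child.ps1 prints nothing, because it did not inherit $x.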
At the start of your script, enumerate all variables via Get-Variable and create local copies that you explicitly set to $null (you'll need to ignore errors stemming from built-in variables that you cannot override) - this will effectively shadow the variables from ancestral scopes.
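A rough sketch of that shadowing approach, placed at the top of child.ps1 (assuming that -ErrorAction Ignore suffices to skip the built-in variables that cannot be overridden):
# child.ps1 (sketch): shadow every inherited variable with a local $null copy.
foreach ($v in Get-Variable) {
    Set-Variable -Name $v.Name -Value $null -ErrorAction Ignore
}
Write-Host $x   # now prints an empty line instead of 1 when invoked from parent.ps1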
Out-of-process:
Call your script via a new PowerShell process (powershell -File ... or pwsh -File ...) or via a background job (using Start-Job).
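For instance (a sketch, again using the question's parent.ps1 / child.ps1 pair):
pwsh -File "$PSScriptRoot/child.ps1"    # v6+; use powershell.exe -File ... in Windows PowerShell
# or, via a background job:
Start-Job { & "$using:PSScriptRoot/child.ps1" } | Receive-Job -Wait -AutoRemoveJob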
Caveat: In addition to decreased performance, type fidelity may be lost due to the cross-process XML-based serialization involved - see this answer for details.
[1] Note that providing an opt-in for copying the caller's state to the ForEach-Object -Parallel threads is now being considered; see this GitHub feature request.

Related

Are these alternatives to Invoke-Expression really any safer? And why?

I don't understand whether Invoke-Expression is internally flawed, making it more dangerous, or whether the problem is that it combines converting text to code, executing that code, and possibly executing it in the local scope, all in a single command.
What I want to do is create a class in C# with a public event EventHandler MyEvent; event via Add-Type, and then inherit from that class in PowerShell by writing the PowerShell in a string @'class psMessages : csMessages { <code for class> }'@, converting the string into a script block, and then executing it.
I found these methods for creating the script block will work:
$ScriptBlock = ([System.Management.Automation.Language.Parser]::ParseInput($psMessages, [ref]$null, [ref]$null)).GetScriptBlock()
# or
$ScriptBlock = [scriptblock]::Create($psMessages)
And either of these commands will execute the script block in the current scope:
. $ScriptBlock
# or
Invoke-Command -NoNewScope $ScriptBlock
Additional info: These commands fail, I believe because they execute the script block in a new scope - please correct me if I'm wrong:
& $ScriptBlock
# or
$ScriptBlock.Invoke()
# or
Invoke-Command $ScriptBlock
So, are any of these methods safer to use than Invoke-Expression? Or are they all just as dangerous? And, if any are safer, why?
What makes any command dangerous is the blind execution of source code from an unknown / untrusted source.
As such, the execution mechanism is incidental to the problem.
Conversely, this means that if you fully control or implicitly trust a given piece of source code, use of Invoke-Expression - which is generally to be avoided - is acceptable.
Note that the code executed by Invoke-Expression invariably runs in the current scope; you could wrap the input string in & { ... } in order to execute it in a child scope.
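For instance, a minimal sketch of that wrapping technique (with a hypothetical $code string standing in for the trusted input):
$code = '$foo = 42'               # hypothetical, trusted input
Invoke-Expression "& { $code }"   # & { ... } confines the code to a child scope
$foo                              # outputs nothing: $foo never entered the current scope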
As Santiago Squarzon points out, [scriptblock]::Create() enables a middle ground:
As demonstrated in this answer of his, it is possible to constrain what may be executed in terms of permissible commands, read access to specific variables, and whether read access to environment variables is allowed.
Additionally, a script block instance returned by [scriptblock]::Create() allows potentially reusable invocation on demand, with the choice to either execute it in the current scope, with ., the dot-sourcing operator, or a child scope, with &, the call operator.
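A small sketch of that choice, using a hypothetical script block:
$sb = [scriptblock]::Create('$x = "set by the block"')
& $sb    # child scope: afterwards, $x is still undefined in the caller
. $sb    # current scope: afterwards, $x is defined in the caller
$x       # -> set by the block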
As for the commands listed under "Additional info:"
They should not fail; they should all execute the script block in a child scope.
However:
Using the .Invoke() method on script blocks should be avoided, because it changes the semantics of the call in several respects - see this answer.
Similarly, there is no good reason to use Invoke-Command for (local) invocation of script blocks - see this answer.

Modifying variable in parent script using child script in powershell

I have two PowerShell scripts.
I have to assign the parent's variable using the child script.
The child PowerShell script is called from the parent PowerShell script.
parent.ps1
$count = $Null
child.ps1
$count = 10
How do I make sure that the change in child script gets reflected in parent script?
Manuel Batsching's answer shows you how to use dot-sourcing to solve your problem indirectly: by executing the child script directly in the parent script's scope, all of the child script's (script-level) variables (among other definitions, namely functions and aliases) are created directly in the parent script's scope, which may be undesirable.
PowerShell does offer mechanisms to selectively modify variables in other scopes, but it's best to avoid them, because you're creating a tight coupling between your scripts that makes them difficult to maintain; instead, use other mechanisms to communicate information between your scripts, in the simplest form via output.
If you still want to solve your problem by modifying a variable in the parent scope, you can use the following:
# When run in child.ps1 that was invoked (normally) by parent.ps1,
# sets $count to 10 in the scope of *parent.ps1*.
Set-Variable -Scope 1 -Name count -Value 10
-Scope 1 refers to the parent scope of the calling scope (2 would refer to the grandparent scope, and so on) - see Set-Variable.
Complementarily, scope modifiers $script: and $global: may be used with variable names to target variables in the (same) script scope and the global scope (for instance, $script:foo = 'bar' could be used to set a script-level variable from inside of a function defined in the same script; creating or modifying global variables should be avoided, as they linger even after a script exits) - see about_Scopes.
For the sake of completeness:
Scope modifier $local: allows you to refer to the current (local) scope explicitly; it is rarely used, because the current scope is implied when you assign to a variable by mere name (e.g., $foo = 'bar' creates a local $foo variable).
However, on getting a variable, $local:foo isn't necessarily the same as just $foo: due to PowerShell's dynamic scoping, $foo returns the value of a variable in an ancestral scope, if any, if no such variable exists in the current scope; by contrast, $local:foo strictly returns the value of a variable by that name in the current scope, if defined there.
To suppress dynamic scoping for a given variable, i.e. to prevent descendant scopes from seeing it (by default), create it with the $private: scope modifier.
See this answer for more information about PowerShell's dynamic scoping and the $private: scope.
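A minimal demonstration of $private: (variable names are hypothetical):
$private:secret = 'parent only'
$shared = 'visible to children'
& { "secret=[$secret] shared=[$shared]" }   # -> secret=[] shared=[visible to children]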
Scope modifier $using: is used in the context of remoting and jobs; it doesn't actually reference a variable per se in the caller's context, but its value - see about_Remote_Variables.
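A small sketch of $using: with a background job (variable name is hypothetical):
$threshold = 5
Start-Job { "threshold is $using:threshold" } | Receive-Job -Wait -AutoRemoveJob
# -> threshold is 5; the job receives the *value*, not a reference to the variable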
By using the dot-sourcing operator . you can run your child.ps1 script in the same scope as your parent.ps1 script. That will preserve all changes to variables that your child script makes (see: script scope and dot sourcing).
Let the content of your parent.ps1 be like:
$count = $null
. .\child.ps1
$count
This will return 10.

How can I prevent variable injection in PowerShell?

I was triggered again by a comment from @Ansgar Wiechers on a recent PowerShell question - DO NOT use Invoke-Expression - concerning a security question I have had in the back of my mind for a long time and need to ask.
The strong statement (with a reference to the Invoke-Expression considered harmful article) suggests that an invocation of a script that can overwrite variables is considered harmful.
Also the PSScriptAnalyzer advises against using Invoke-Expression, see the AvoidUsingInvokeExpression rule.
But I once used a technique myself to update a common variable in a recursive script, which can actually overwrite a value in any of its parent scopes and is as simple as:
([Ref]$ParentVariable).Value = $NewValue
As far as I can determine, a potentially malicious script could use this technique too, to inject variables in any case, no matter how it is invoked...
Consider the following "malicious" Inject.ps1 script:
([Ref]$MyValue).Value = 456
([Ref]$MyString).Value = 'Injected string'
([Ref]$MyObject).Value = [PSCustomObject]@{Name = 'Injected'; Value = 'Object'}
My Test.ps1 script:
$MyValue = 123
$MyString = "MyString"
$MyObject = [PSCustomObject]@{Name = 'My'; Value = 'Object'}
.\Inject.ps1
Write-Host $MyValue
Write-Host $MyString
Write-Host $MyObject
Result:
456
Injected string
@{Name=Injected; Value=Object}
As you can see, all three variables in the Test.ps1 scope are overwritten by the Inject.ps1 script. This can also be done using the Invoke-Command cmdlet, and it doesn't even matter whether I set the scope of a variable to Private:
New-Variable -Name MyValue -Value 123 -Scope Private
$MyString = "MyString"
$MyObject = [PSCustomObject]@{Name = 'My'; Value = 'Object'}
Invoke-Command {
([Ref]$MyValue).Value = 456
([Ref]$MyString).Value = 'Injected string'
([Ref]$MyObject).Value = [PSCustomObject]@{Name = 'Injected'; Value = 'Object'}
}
Write-Host $MyValue
Write-Host $MyString
Write-Host $MyObject
Is there a way to completely isolate an invoked script/command from overwriting variables in the current scope?
If not, can this be considered as a security risk for invoking scripts in any way?
The advice against use of Invoke-Expression is primarily about preventing unintended execution of code (code injection).
If you invoke a piece of PowerShell code - whether directly or via Invoke-Expression - it can indeed (possibly maliciously) manipulate parent scopes, including the global scope.
Note that this potential manipulation isn't limited to variables: for instance, functions and aliases can be modified as well.
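For instance, here is a hedged sketch (run at the interactive prompt, with a hypothetical Get-Greeting function) of how evaluated code can redefine a function in the caller's global scope:
function Get-Greeting { 'hello' }
Invoke-Expression 'function global:Get-Greeting { "hijacked" }'   # stands in for untrusted input
Get-Greeting   # -> hijacked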
Caveat: Running unknown code is problematic in two respects:
Primarily for the potential to perform unwanted / destructive actions directly.[1]
Secondarily, for the potential to maliciously modify the caller's state (variables, ...), which is the only aspect the solutions below guard against.
To provide the desired isolation, you have two basic choices:
Run the code in a child process:
By starting another PowerShell instance; e.g. (use powershell instead of pwsh in Windows PowerShell):
pwsh -c { ./someUntrustedScript.ps1 }
By starting a background job; e.g.:
Start-Job { ./someUntrustedScript.ps1 } | Receive-Job -Wait -AutoRemoveJob
Run the code in a separate thread in the same process:
As a thread job, via the Start-ThreadJob cmdlet (ships with PowerShell [Core] 6+; in Windows PowerShell, it can be installed from the PowerShell Gallery with something like Install-Module -Scope CurrentUser ThreadJob); e.g.:
Start-ThreadJob { ./someUntrustedScript.ps1 } | Receive-Job -Wait -AutoRemoveJob
By creating a new runspace via the PowerShell SDK; e.g.:
[powershell]::Create().AddScript('./someUntrustedScript.ps1').Invoke()
Note that you'll have to do extra work to get the output streams other than the success one, notably the error stream's output; also, .Dispose() should be called on the PowerShell instance on completion of the command.
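A minimal sketch of that extra work:
$ps = [powershell]::Create().AddScript('./someUntrustedScript.ps1')
try {
    $output = $ps.Invoke()         # success-stream output only
    $errors = $ps.Streams.Error    # error-stream records must be read separately
} finally {
    $ps.Dispose()                  # release the underlying runspace
}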
A child process-based solution will be slow and limited in terms of data types you can return (due to serialization / deserialization being involved), but it provides isolation against the invoked code crashing the process.
A thread-based job is much faster, can return any data type, but can crash the entire process.
In all cases you will have to pass any values from the caller that the invoked code needs access to as arguments or, with background jobs and thread jobs, alternatively via the $using: scope specifier.
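For example (a sketch with a hypothetical $path value):
$path = 'C:\data'
# Via an argument:
Start-ThreadJob { param($p) Get-ChildItem -LiteralPath $p } -ArgumentList $path |
    Receive-Job -Wait -AutoRemoveJob
# Via the $using: scope specifier:
Start-ThreadJob { Get-ChildItem -LiteralPath $using:path } | Receive-Job -Wait -AutoRemoveJob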
js2010 mentions other, less desirable alternatives:
Start-Process (child process-based, with text-only arguments and output)
PowerShell Workflows, which are obsolescent (they weren't ported to PowerShell Core and won't be).
Using Invoke-Command with "loopback remoting" (-ComputerName localhost) is hypothetically also an option, but then you incur the double overhead of a child process and HTTP-based communication; also, your computer must be set up for remoting, and you must run with elevation (as administrator).
[1] A way to mitigate the problem is to limit which commands, statements, types, ... are permitted to be called when the string is evaluated, which can be achieved via the PowerShell SDK in combination with language modes and/or by explicitly constructing an initial session state. See this answer for an example of SDK use with language modes.

Are Powershell Profile scripts dot-sourced?

The Microsoft.PowerShell_profile.ps1 script I am using creates a lot of variables when it runs. I have set all the variables' scope to "Script", but the variables used in the script never go out-of-scope.
I would like the variables to go out-of-scope once the script is done running and control is handed over to me.
If I compare the number of global, local, and script variables I have, I come up with the same number.
Example:
# Profile script does what it does.
Get-Variable -Scope Global | Measure-Object
Get-Variable -Scope Local | Measure-Object
Get-Variable -Scope Script | Measure-Object
Output:
60
60
60
Currently, I am capturing a snapshot of the variables at the beginning of my profile script, then removing any new variables at the end.
Example:
$snapshotBefore = Get-Variable
$profileVar1 = 'some value'
$profileVar2 = 'some other value'
$snapshotAfter = Get-Variable
# Compare before and after, and create list of new variables.
Remove-Variable $variablesToRemove
Yes, PowerShell profiles are dot-sourced by design, because that's what allows the definitions contained in them (aliases, functions, ...) to be globally available by default - which is, after all, the main purpose of profile files.
Unfortunately, there is no scope modifier that allows you to create a temporary scope for variables you only want to exist while the profile is loading - even scope local is effectively global in a profile script; similarly, using scope private is also not an option, because the profile's script scope - due to being dot-sourced - is the global scope.
Generally speaking, you can use & (the call operator) with a script block to create variables inside that block that are scoped to that block, but that is usually at odds with creating globally available definitions in a profile, at least by default.
Similarly, calling another script without dot-sourcing it, as in your own answer, will not make its definitions globally available by default.
You can, however, create global elements from non-dot-sourced script blocks / script by specifying the global scope explicitly; e.g.: & { $global:foo = 'Going global' }, or & { function global:bar { 'global func' } }.
That said, the rationale behind dot-sourcing profiles is likely that it's easier to make all definitions global by default, making the definition of typical elements of a profile - aliases, functions, drive mappings, loading of modules - simpler (no need to specify an explicit scope).
By contrast, global variables are less typical, and to define the typical elements listed above you don't usually need script-level (and thus global) variables in your profile.
If you still need to create (conceptually) temporary variables in your profile (which is not a requirement for creating globally available aliases, functions, ...):
A simple workaround is to use an exotic variable name prefix such as __ inside the profile script to reduce the risk of their getting referenced by accident (e.g., $__profileVar1 = ...).
In other words: the variables still exist globally, but their exotic names will typically not cause problems.
However, your approach, even though it requires a little extra work, sounds like a robust workaround; here's what it looks like in full (using PSv3+ syntax):
# Save a snapshot of current variables.
# * If there are variables that you DO want to exist globally,
# define them ABOVE this command.
# * Also, load MODULE and dot-source OTHER SCRIPTS ABOVE this command,
# because they may create variables that *should* be available globally.
$varsBefore = (Get-Variable).Name
# ... define and use temporary variables
# Remove all variables that were created since the
# snapshot was taken, including $varsBefore.
Remove-Variable (Compare-Object $varsBefore (Get-Variable).Name).InputObject
Note that I'm relying on Compare-Object's default behavior of only reporting differences between objects and, assuming you haven't tried to remove any variables, only the variables added are reported.
Note that while it can be inferred from the actual behavior of profile files that they are indeed dot-sourced - given that dot-sourcing is the only way to add elements to the current scope (the global scope, in the case of profiles) -
this fact is not explicitly documented as such.
Here are snippets from various help topics (as of PSv5) that provide clues (emphasis mine):
From Get-Help about_Profiles:
A Windows PowerShell profile is a script that runs when Windows PowerShell
starts. You can use the profile as a logon script to customize the
environment. You can add commands, aliases, functions, variables, snap-ins,
modules, and Windows PowerShell drives. You can also add other
session-specific elements to your profile so they are available in every
session without having to import or re-create them.
From Get-Help about_Variables:
By default, variables are available only in the scope in which
they are created.
For example, a variable that you create in a function is
available only within the function. A variable that you
create in a script is available only within the script (unless
you dot-source the script, which adds it to the current scope).
From Get-Help about_Operators:
. Dot sourcing operator
Runs a script in the current scope so that any functions,
aliases, and variables that the script creates are added to the current
scope.
From Get-Help about_Scopes
But, you can add a script or function to the current scope by using dot
source notation. Then, when a script runs in the current scope, any
functions, aliases, and variables that the script creates are available
in the current scope.
To add a function to the current scope, type a dot (.) and a space before
the path and name of the function in the function call.
So it does sound like PowerShell dot-sources the profile. I couldn't find a resource that specifically says that, or other forums that have asked this question.
I have found an answer, and wanted to post it here.
I have changed my profile to only call a script file. The script now has its own scope, and as long as the variables aren't made global, they will go out-of-scope once the profile finishes loading.
So now my profile has one-line:
& (Split-Path $profile -Parent | Join-Path -ChildPath "Microsoft.PowerShell_profile_v2.ps1")
Microsoft.PowerShell_profile_v2.ps1 can now contain proper scope:
$Global:myGlobalVar = "A variable that will be available during the current session"
$Script:myVar = "A variable that will disappear after script finishes."
$myVar2 = "Another variable that will disappear after script finishes."
What this allows, is for the profile script to import modules that contain global variables. These variables will continue to exist during the current session.
I would still be curious why Microsoft decided to call the profile in this way. If anyone knows and would like to share, I would love to see the answer here.

Variable scoping in PowerShell

A sad thing about PowerShell is that functions and script blocks are dynamically scoped.
But another thing that surprised me is that variables behave as copy-on-write within an inner scope.
$array = @("g")
function foo()
{
$array += "h"
Write-Host $array
}
& {
$array +="s"
Write-Host $array
}
foo
Write-Host $array
The output is:
g s
g h
g
Which makes dynamic scoping a little bit less painful. But how do I avoid the copy-on-write?
The PowerShell scopes article (about_Scopes) is nice, but too verbose, so this is a quotation from my article:
In general, PowerShell scopes are like .NET scopes. They are:
Global is public
Script is internal
Private is private
Local is current stack level
Numbered scopes run from 0..N, where each step goes up one stack level (and 0 is Local)
Here is a simple example, which describes the usage and effects of scopes:
$test = 'Global Scope'
Function Foo {
$test = 'Function Scope'
Write-Host $Global:test # Global Scope
Write-Host $Local:test # Function Scope
Write-Host $test # Function Scope
Write-Host (Get-Variable -Name test -ValueOnly -Scope 0) # Function Scope
Write-Host (Get-Variable -Name test -ValueOnly -Scope 1) # Global Scope
}
Foo
As you can see, you can use $Global:test-style syntax only with named scopes; $0:test will always be $null.
You can use scope modifiers or the *-Variable cmdlets.
The scope modifiers are:
global used to access/modify at the outermost scope (e.g. the interactive shell)
script used to access/modify at the scope of the running script (.ps1 file). If not running a script, it operates as global.
(For the -Scope parameter of the *-Variable cmdlets see the help.)
E.g., in your second example, to directly modify the global $array:
& {
$global:array +="s"
Write-Host $array
}
For more details see the help topic about_scopes.
Not just variables. When this says "item" it means variables, functions, aliases, and PSDrives. All of those have scope.
LONG DESCRIPTION
Windows PowerShell protects access to variables, aliases, functions, and
Windows PowerShell drives (PSDrives) by limiting where they can be read and
changed. By enforcing a few simple rules for scope, Windows PowerShell
helps to ensure that you do not inadvertently change an item that should
not be changed.
The following are the basic rules of scope:
- An item you include in a scope is visible in the scope in which it
was created and in any child scope, unless you explicitly make it
private. You can place variables, aliases, functions, or Windows
PowerShell drives in one or more scopes.
- An item that you created within a scope can be changed only in the
scope in which it was created, unless you explicitly specify a
different scope.
The copy-on-write issue you're seeing is because of the way PowerShell handles arrays. Adding to that array actually destroys the original array and creates a new one. Since the new array was created in that scope, it is destroyed when the function or script block exits and the scope is disposed of.
You can explicitly scope variables when you update them, or you can use [ref] objects to do your updates, or write your script so that you're updating a property or a hash-table key of an object or hash table that lives in a parent scope. This does not create a new object in the local scope; it modifies the object in the parent scope.
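For instance, a sketch of the hash-table variant (names are hypothetical):
$state = @{ array = @('g') }   # created in the parent scope
function foo {
    # Reading $state resolves to the parent's object via dynamic scoping;
    # assigning to one of its keys mutates that object - no local copy is made.
    $state.array += 'h'
}
foo
$state.array   # -> g h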
While other posts give lots of useful information, they seem only to save you from RTFM.
The answer not mentioned is the one I find most useful!
([ref]$var).value = 'x'
This modifies the value of $var no matter what scope it happens to be in. You need not know its scope; only that it does in fact already exist. To use the OP's example:
$array = @("g")
function foo()
{
([ref]$array).Value += "h"
Write-Host $array
}
& {
([ref]$array).Value +="s"
Write-Host $array
}
foo
Write-Host $array
Produces:
g s
g s h
g s h
Explanation:
([ref]$var) gets you a pointer to the variable. Since this is a read operation, it resolves to the most recent scope that actually created that name. It also explains the error if the variable doesn't exist, because [ref] can't create anything; it can only return a reference to something that already exists.
.Value then takes you to the property holding the variable's value, which you can then set.
You may be tempted to do something like this because it sometimes looks like it works.
([ref]$var) = "New Value"
DON'T!!!!
The instances where it looks like it works are an illusion: PowerShell is doing something that it only does under some very narrow circumstances, such as on the command line. You can't count on it. In fact, it doesn't work in the OP's example.