A comment from @Ansgar Wiechers on a recent PowerShell question, "DO NOT use Invoke-Expression", triggered a security question that has been in the back of my mind for a long time and that I need to ask.
The strong statement (with a reference to the Invoke-Expression considered harmful article) suggests that an invocation of a script that can overwrite variables is considered harmful.
Also the PSScriptAnalyzer advises against using Invoke-Expression, see the AvoidUsingInvokeExpression rule.
But I once used a technique myself to update a common variable in a recursive script; it can overwrite a value in any of its parent scopes and is as simple as:
([Ref]$ParentVariable).Value = $NewValue
As far as I can determine, a potentially malicious script could use this technique to inject variables no matter how it is invoked...
Consider the following "malicious" Inject.ps1 script:
([Ref]$MyValue).Value = 456
([Ref]$MyString).Value = 'Injected string'
([Ref]$MyObject).Value = [PSCustomObject]@{Name = 'Injected'; Value = 'Object'}
My Test.ps1 script:
$MyValue = 123
$MyString = "MyString"
$MyObject = [PSCustomObject]@{Name = 'My'; Value = 'Object'}
.\Inject.ps1
Write-Host $MyValue
Write-Host $MyString
Write-Host $MyObject
Result:
456
Injected string
@{Name=Injected; Value=Object}
As you see, all three variables in the Test.ps1 scope are overwritten by the Inject.ps1 script. This can also be done using the Invoke-Command cmdlet, and it doesn't even matter whether I set the scope of a variable to Private:
New-Variable -Name MyValue -Value 123 -Scope Private
$MyString = "MyString"
$MyObject = [PSCustomObject]@{Name = 'My'; Value = 'Object'}
Invoke-Command {
([Ref]$MyValue).Value = 456
([Ref]$MyString).Value = 'Injected string'
([Ref]$MyObject).Value = [PSCustomObject]@{Name = 'Injected'; Value = 'Object'}
}
Write-Host $MyValue
Write-Host $MyString
Write-Host $MyObject
Is there a way to completely isolate an invoked script/command from overwriting variables in the current scope?
If not, can invoking scripts in any way be considered a security risk?
The advice against using Invoke-Expression is primarily about preventing unintended execution of code (code injection).
If you invoke a piece of PowerShell code - whether directly or via Invoke-Expression - it can indeed (possibly maliciously) manipulate parent scopes, including the global scope.
Note that this potential manipulation isn't limited to variables: for instance, functions and aliases can be modified as well.
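For example, a minimal sketch (the function name is made up for illustration) of invoked code redefining a caller's function; because Invoke-Expression runs in the current scope, the new definition simply replaces the old one:

function Get-Greeting { 'original' }
Invoke-Expression 'function Get-Greeting { "hijacked" }'   # redefines the function in the caller's scope
Get-Greeting   # -> hijacked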
Caveat: Running unknown code is problematic in two respects:
Primarily for the potential to perform unwanted / destructive actions directly.[1]
Secondarily, for the potential to maliciously modify the caller's state (variables, ...), which is the only aspect the solutions below guard against.
To provide the desired isolation, you have two basic choices:
Run the code in a child process:
By starting another PowerShell instance; e.g. (use powershell instead of pwsh in Windows PowerShell):
pwsh -c { ./someUntrustedScript.ps1 }
By starting a background job; e.g.:
Start-Job { ./someUntrustedScript.ps1 } | Receive-Job -Wait -AutoRemoveJob
Run the code in a separate thread in the same process:
As a thread job, via the Start-ThreadJob cmdlet (ships with PowerShell [Core] 6+; in Windows PowerShell, it can be installed from the PowerShell Gallery with something like Install-Module -Scope CurrentUser ThreadJob); e.g.:
Start-ThreadJob { ./someUntrustedScript.ps1 } | Receive-Job -Wait -AutoRemoveJob
By creating a new runspace via the PowerShell SDK; e.g.:
[powershell]::Create().AddScript('./someUntrustedScript.ps1').Invoke()
Note that you'll have to do extra work to get the output streams other than the success one, notably the error stream's output; also, .Dispose() should be called on the PowerShell instance on completion of the command.
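A minimal sketch of that extra work (error-stream retrieval and disposal), assuming ./someUntrustedScript.ps1 exists:

$ps = [powershell]::Create()
try {
    $output = $ps.AddScript('./someUntrustedScript.ps1').Invoke()   # success-stream output only
    $output
    $ps.Streams.Error                                               # error-stream records must be read separately
}
finally {
    $ps.Dispose()                                                   # release the underlying runspace
}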
A child process-based solution will be slow and limited in terms of data types you can return (due to serialization / deserialization being involved), but it provides isolation against the invoked code crashing the process.
A thread-based job is much faster, can return any data type, but can crash the entire process.
In all cases you will have to pass any values from the caller that the invoked code needs access to as arguments or, with background jobs and thread jobs, alternatively via the $using: scope specifier.
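For instance, a sketch of both options with a thread job; the untrusted script's -Limit parameter is hypothetical:

$limit = 42
# (a) Pass the value as an argument:
Start-ThreadJob { param($l) ./someUntrustedScript.ps1 -Limit $l } -ArgumentList $limit |
  Receive-Job -Wait -AutoRemoveJob
# (b) Reference the caller's variable via the $using: scope specifier:
Start-ThreadJob { ./someUntrustedScript.ps1 -Limit $using:limit } |
  Receive-Job -Wait -AutoRemoveJob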
js2010 mentions other, less desirable alternatives:
Start-Process (child process-based, with text-only arguments and output)
PowerShell Workflows, which are obsolescent (they weren't ported to PowerShell Core and won't be).
Using Invoke-Command with "loopback remoting" (-ComputerName localhost) is hypothetically also an option, but then you incur the double overhead of a child process and HTTP-based communication; also, your computer must be set up for remoting, and you must run with elevation (as administrator).
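For completeness, a sketch of that (not recommended) option, assuming remoting is enabled; the script path is a placeholder:

Invoke-Command -ComputerName localhost -ScriptBlock { & 'C:\path\to\someUntrustedScript.ps1' }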
[1] A way to mitigate the problem is to limit which commands, statements, types, ... are permitted to be called when the string is evaluated, which can be achieved via the PowerShell SDK in combination with language modes and/or by explicitly constructing an initial session state. See this answer for an example of SDK use with language modes.
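A rough sketch of the initial-session-state approach (a constrained runspace): start from an empty session state, expose only the commands you choose, and restrict the language mode. The specific commands chosen here are illustrative only:

using namespace System.Management.Automation.Runspaces

$iss = [initialsessionstate]::Create()      # empty state: no commands, no providers
$iss.LanguageMode = 'ConstrainedLanguage'
$iss.Commands.Add([SessionStateCmdletEntry]::new(
    'Get-Date', [Microsoft.PowerShell.Commands.GetDateCommand], $null))

$ps = [powershell]::Create($iss)
try {
    $ps.AddScript('Get-Date; Get-ChildItem').Invoke()   # Get-Date runs; Get-ChildItem isn't part of the session state
    $ps.Streams.Error                                   # contains the resulting command-not-found error
}
finally {
    $ps.Dispose()
}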
I don't understand whether Invoke-Expression is internally flawed, making it more dangerous, or whether the problem is that it combines converting text to code, executing that code, and possibly executing it in the local scope, all in a single command.
What I want to do is create a class in C# with a public event EventHandler MyEvent; event via Add-Type, and then inherit from that class in PowerShell by writing the PowerShell in a here-string @'class psMessages: csMessages{<code for class>}'@, converting the string into a script block, and then executing it.
I found that these methods for creating the script block work:
$ScriptBlock = ([System.Management.Automation.Language.Parser]::ParseInput($psMessages, [ref]$null, [ref]$null)).GetScriptBlock()
# or
$ScriptBlock = [scriptblock]::Create($psMessages)
And either of these commands will execute the script block in the current scope:
. $ScriptBlock
# or
Invoke-Command -NoNewScope $ScriptBlock
Additional info: These commands fail, I believe because they execute the script block in a new scope - please correct me if I'm wrong:
& $ScriptBlock
# or
$ScriptBlock.Invoke()
# or
Invoke-Command $ScriptBlock
So, are any of these methods safer to use than Invoke-Expression? Or are they all just as dangerous? And, if any are safer, why?
What makes any command dangerous is the blind execution of source code from an unknown / untrusted source.
As such, the execution mechanism is incidental to the problem.
Conversely, this means that if you fully control or implicitly trust a given piece of source code, use of Invoke-Expression - which is generally to be avoided - is acceptable.
Note that the code executed by Invoke-Expression invariably runs in the current scope; you could wrap the input string in & { ... } in order to execute it in a child scope.
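For instance, a small sketch of that wrapping technique; the sample code string is made up:

$code = '$foo = 123; "inside: $foo"'
Invoke-Expression ('& { ' + $code + ' }')   # runs in a child scope
"outside: [$foo]"                           # -> outside: [] ($foo did not leak into the caller's scope)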
As Santiago Squarzon points out, [scriptblock]::Create() enables a middle ground:
As demonstrated in this answer of his, it is possible to constrain what may be executed in terms of permissible commands, read access to specific variables, and whether read access to environment variables is allowed.
Additionally, a script block instance returned by [scriptblock]::Create() allows potentially reusable invocation on demand, with the choice to either execute it in the current scope, with ., the dot-sourcing operator, or a child scope, with &, the call operator.
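A quick sketch of that choice:

$sb = [scriptblock]::Create('$x = 1; "x is $x"')
& $sb   # child scope: $x is not defined in the caller afterwards
. $sb   # current scope: $x is now defined in the caller
$x      # -> 1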
As for the commands listed under "Additional info:"
They should not fail; they should all execute the script block in a child scope.
However:
Using the .Invoke() method on script blocks should be avoided, because it changes the semantics of the call in several respects - see this answer.
Similarly, there is no good reason to use Invoke-Command for (local) invocation of script blocks - see this answer.
It was pointed out to me (in PowerShell, replicate bash parallel ping) that I can load a function from the internet as follows:
iex (irm https://raw.githubusercontent.com/proxb/AsyncFunctions/master/Test-ConnectionAsync.ps1)
The referenced URL, Test-ConnectionAsync.ps1, contains two functions: Ping-Subnet and Test-ConnectionAsync.
This made me wonder whether I could then define bypass functions in my personal module: dummy functions that are permanently overridden as soon as they are first invoked, e.g.
function Ping-Subnet <mimic the switches of the function to be loaded> {
if <function is not already loaded from internet> {
iex (irm https://raw.githubusercontent.com/proxb/AsyncFunctions/master/Test-ConnectionAsync.ps1)
}
# Now, somehow, permanently overwrite Ping-Subnet to be the function that loaded from the URL
Ping-Subnet <pass the switches that we mimicked to the required function that we have just loaded>
}
This would very simply allow me to reference a number of useful scripts directly from my module, without having to load them all from the internet when the module itself loads (i.e. the functions are only loaded on demand, when I actually invoke them).
You could use the Parser to find the functions in the remote script and load them into your scope. This will not be a self-updating function, but should be safer than what you're trying to accomplish.
using namespace System.Management.Automation.Language
function Load-Function {
    [cmdletbinding()]
    param(
        [parameter(Mandatory, ValueFromPipeline)]
        [uri] $URI
    )
    process {
        try {
            $funcs = Invoke-RestMethod $URI
            $ast = [Parser]::ParseInput($funcs, [ref] $null, [ref] $null)
            foreach($func in $ast.FindAll({ $args[0] -is [FunctionDefinitionAst] }, $true)) {
                if($func.Name -in (Get-Command -CommandType Function).Name) {
                    Write-Warning "$($func.Name) is already loaded! Skipping"
                    continue
                }
                New-Item -Name "script:$($func.Name)" -Path function: -Value $func.Body.GetScriptBlock()
            }
        }
        catch {
            Write-Warning $_.Exception.Message
        }
    }
}
Load-Function https://raw.githubusercontent.com/proxb/AsyncFunctions/master/Test-ConnectionAsync.ps1
Ping-Subnet # => now is available in your current session.
function Ping-Subnet {
    $toImport = (IRM "https://raw.githubusercontent.com/proxb/AsyncFunctions/master/Test-ConnectionAsync.ps1").
        Replace([Text.Encoding]::UTF8.GetString((239,187,191)), "")
    NMO([ScriptBlock]::Create($toImport)) | Out-Null
    $MyInvocation.Line | IEX
}
function Test-ConnectionAsync {
    $toImport = (IRM "https://raw.githubusercontent.com/proxb/AsyncFunctions/master/Test-ConnectionAsync.ps1").
        Replace([Text.Encoding]::UTF8.GetString((239,187,191)), "")
    NMO([ScriptBlock]::Create($toImport)) | Out-Null
    $MyInvocation.Line | IEX
}
Ping-Subnet -Result Success
Test-ConnectionAsync -Computername $env:COMPUTERNAME
Result:
Computername Result
------------ ------
192.168.1.1 Success
192.168.1.2 Success
192.168.1.146 Success
Computername IPAddress Result
------------ --------- ------
HOME-PC fe80::123:1234:ABCD:EF12 Success
Yes, it should work. Calling Test-ConnectionAsync.ps1 from within a function will create the functions defined within it in the wrapping function's scope. You will be able to call any wrapped functions until the function's scope ends.
If you name the wrapper and wrapped functions differently, you can check whether the function has been declared with something like...
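For instance (a hedged sketch; the exact check is illustrative):

if (-not (Get-Command Ping-Subnet -CommandType Function -ErrorAction SilentlyContinue)) {
    # not yet defined in an accessible scope: download and define it here
    iex (irm https://raw.githubusercontent.com/proxb/AsyncFunctions/master/Test-ConnectionAsync.ps1)
}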
Otherwise, you need to get more creative.
This said, PROCEED WITH CAUTION. Remote code execution, like this, is fraught with security issues, especially in the way we're talking about it i.e., no validation of Test-ConnectionAsync.ps1.
Fors1k's answer deserves the credit for coming up with the clever fundamentals of the approach:
Download and execute the remote script's content in a dynamic module created with New-Module (whose built-in alias is nmo), which causes the script's functions to be auto-exported and to become available session-globally.[1]
Note that dynamic modules aren't easy to discover, because they're not shown in Get-Module's output; however, you can discover them indirectly, via the .Source property of the command-info objects output by Get-Command:
Get-Command | Where Source -like __DynamicModule_*
That the downloaded functions become available session-globally may be undesired if you're trying to use the technique inside a script that shouldn't affect the session's global state - see the bottom section for a solution.
Then re-invoke the function, under the assumption that the original stub function has been replaced with the downloaded version of the same name, passing the received arguments through.
While Fors1k's solution will typically work, here is a streamlined, robust alternative that prevents potential, inadvertent re-execution of code:
function Ping-Subnet {
    $uri = 'https://raw.githubusercontent.com/proxb/AsyncFunctions/master/Test-ConnectionAsync.ps1'
    # Define and session-globally import a dynamic module based on the remote
    # script's content.
    # Any functions defined in the script would automatically be exported.
    # However, unlike with persisted modules, *aliases* are *not* exported by
    # default, which the appended Export-ModuleMember call below compensates for.
    # If desired, also add -Variable * in order to export variables too.
    # Conversely, if you only care about functions, remove the Export-ModuleMember call.
    $dynMod = New-Module ([scriptblock]::Create(
        ((Invoke-RestMethod $uri)) + "`nExport-ModuleMember -Function * -Alias *")
    )
    # If this stub function shadows the newly defined function in the dynamic
    # module, remove it first, so that re-invocation by name uses the new function.
    # Note: This happens if this stub function is run in a child scope, such as
    # in a (non-dot-sourced) script rather than in the global scope.
    # If run in the global scope, curiously, the stub function seemingly
    # disappears from view right away - not even Get-Command -All shows it later.
    $myName = $MyInvocation.MyCommand.Name
    if ((Get-Command -Type Function $myName).ModuleName -ne $dynMod.Name) {
        Remove-Item -LiteralPath "function:$myName"
    }
    # Now invoke the newly defined function of the same name, passing the arguments
    # through.
    & $myName @args
}
Specifically, this implementation ensures:
That aliases defined in the remote script are exported as well (just remove + "`nExport-ModuleMember -Function * -Alias *" from the code above if that is undesired).
That the re-invocation robustly targets the new, module-defined implementation of the function - even if the stub function runs in a child scope, such as in a (non-dot-sourced) script.
When run in a child scope, $MyInvocation.Line|IEX (iex is a built-in alias of the Invoke-Expression cmdlet) would result in an infinite loop, because the stub function itself is still in effect at that time.
That all received arguments are passed through on re-invocation without re-evaluation.
Using the built-in magic of splatting the automatic $args variable (@args) passes only the received, already expanded arguments through, supporting both named and positional arguments.[2]
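A tiny sketch of that pass-through behavior (the function names are made up):

function Outer { Inner @args }   # forwards whatever Outer received, named or positional
function Inner { param($Name, [switch]$Quiet) "Name=$Name; Quiet=$Quiet" }
Outer -Name foo -Quiet   # -> Name=foo; Quiet=True
Outer bar                # -> Name=bar; Quiet=False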
$MyInvocation.Line|IEX has two potential problems:
If the invoking command line contained multiple commands, they are all repeated.
You can solve this particular problem by substituting (Get-PSCallStack)[1].Position.Text for $MyInvocation.Line, but that still wouldn't address the next problem.
Both $MyInvocation.Line and (Get-PSCallStack)[1].Position.Text contain the arguments that were passed in unexpanded (unevaluated) form, which causes them to be re-evaluated by Invoke-Expression. The peril is that, at least hypothetically, this re-evaluation could involve lengthy commands whose output served as arguments or, worse, commands with side effects that cannot or should not be repeated.
Scoping the technique to a given local script:
That the downloaded functions become available session-globally may be undesired if you're trying to use the technique inside a script that shouldn't affect the session's global state; that is, you may want the functions exported via the dynamic module to disappear when the script exits.
This requires two extra steps:
Piping the dynamic module to Import-Module, which is the prerequisite for being able to unload it before exiting with Remove-Module
Calling Remove-Module with the dynamic module before exiting in order to unload it.
function Ping-Subnet {
    $uri = 'https://raw.githubusercontent.com/proxb/AsyncFunctions/master/Test-ConnectionAsync.ps1'
    # Save the module in a script-level variable, and pipe it to Import-Module
    # so that it can be removed before the script exits.
    $script:dynMod = New-Module ([scriptblock]::Create(
        ((Invoke-RestMethod $uri)) + "`nExport-ModuleMember -Function * -Alias *")
    ) | Import-Module -PassThru
    # If this stub function shadows the newly defined function in the dynamic
    # module, remove it first, so that re-invocation by name uses the new function.
    # Note: This happens if this stub function is run in a child scope, such as
    # in a (non-dot-sourced) script rather than in the global scope.
    # If run in the global scope, curiously, the stub function seemingly
    # disappears from view right away - not even Get-Command -All shows it later.
    $myName = $MyInvocation.MyCommand.Name
    if ((Get-Command -Type Function $myName).ModuleName -ne $dynMod.Name) {
        Remove-Item -LiteralPath "function:$myName"
    }
    # Now invoke the newly defined function of the same name, passing the arguments
    # through.
    & $myName @args
}
# Sample commands to perform in the script.
Ping-Subnet -?
Get-Command Ping-Subnet, Test-ConnectionAsync | Format-Table
# Before exiting, remove (unload) the dynamic module.
$dynMod | Remove-Module
[1] This assumes that the New-Module call itself is made outside of a module; if it is made inside a module, at least that module's commands see the auto-exported functions; if that module uses implicit exporting behavior (which is rare and not advisable), the auto-exported functions from the dynamic module would be included in that module's exports and therefore again become available session-globally.
[2] This magic has one limitation, which, however, will only rarely surface: [switch] parameters with a directly attached Boolean argument aren't supported (e.g., -CaseSensitive:$true) - see this answer.
I have a self-elevate snippet which is quite wordy, so instead of duplicating it at the top of every script that needs to be run as admin, I decided to move it into a separate .ps1:
function Switch-ToAdmin {
    # Self-elevate the script if required
    if (-not ([Security.Principal.WindowsPrincipal] [Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] 'Administrator')) {
        if ([int](Get-CimInstance -Class Win32_OperatingSystem | Select-Object -ExpandProperty BuildNumber) -ge 6000) {
            $Cmd = @(
                "-Command Set-Location `"$(Get-Location)`"; & `"$PSCommandPath`""
                "-$($PSBoundParameters.Keys)"
            )
            $ProcArgs = @{
                FilePath     = 'PowerShell.exe'
                Verb         = 'RunAs'
                ArgumentList = $Cmd
            }
            Start-Process @ProcArgs
            Exit
        }
    }
}
So for every script that needs elevation I'd prepend
. "$PSScriptRoot\self-elevate.ps1"
Switch-ToAdmin
# rest of script
Doing the above successfully triggers the UAC prompt, but the rest of the script won't get executed.
Is this sorta stuff disallowed?
Darin and iRon have provided the crucial pointers:
Darin points out that the automatic $PSCommandPath variable in your Switch-ToAdmin function does not contain the full path of the script from which the function is called, but that of the script in which the function is defined, even if that script's definitions are loaded directly into the scope of your main script via ., the dot-sourcing operator.
The same applies analogously to the automatic $PSScriptRoot variable, which reflects the defining script's full directory path.
Also, more generally, the automatic $PSBoundParameters variable inside a function reflects that function's bound parameters, not its enclosing script's.
iRon points out that the Get-PSCallStack cmdlet can be used to get information about a script's callers, starting at index 1; the first object returned (index 0, when the Get-PSCallStack output is captured in an array) represents the current command. Index 1 therefore refers to the immediate caller, which, from the perspective of your dot-sourced script, is your main script.
Therefore:
Replace $PSCommandPath with $MyInvocation.PSCommandPath, via the automatic $MyInvocation variable. $MyInvocation.PSCommandPath truly reflects the caller's full script path, irrespective of where the called function was defined (see the sketch after this list).
Alternatively, use (Get-PSCallStack)[1].ScriptName, which despite what the property name suggests, returns the full path of the calling script too.
Replace $PSBoundParameters with (Get-PSCallStack)[1].InvocationInfo.BoundParameters.
Note that there's also (Get-PSCallStack)[1].Arguments, but it seems to contain a single string only, containing a representation of all arguments that is only semi-structured and therefore doesn't allow robust reconstruction of the individual parameters.
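A minimal sketch of those substitutions inside the dot-sourced helper (diagnostic output only; wiring the values into the Start-Process call is left out):

function Switch-ToAdmin {
    # Full path of the *calling* script, not of the file this function is defined in:
    $callerPath   = $MyInvocation.PSCommandPath
    # Equivalent, via the call stack:
    $callerPath2  = (Get-PSCallStack)[1].ScriptName
    # The calling script's bound parameters:
    $callerParams = (Get-PSCallStack)[1].InvocationInfo.BoundParameters
    "Caller: $callerPath ($($callerParams.Count) bound parameter(s))"
}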
As an aside:
Even if $PSBoundParameters contained the intended information, "-$($PSBoundParameters.Keys)" would only succeed in passing the bound parameters through if your script defines only one parameter, if that parameter is a [switch] parameter, and if it is actually passed in every invocation.
Passing arguments through robustly in this context is hard to do, and has inherent limitations - see this answer for a - complex - attempt to make it work as well as possible.
Issue: the called PowerShell script accepts parameters, but not all of them:
Current Set-Up and code:
I have a common folder where two .ps1 scripts are located:
DoWork.ps1
Workmanager.ps1
Workmanager.ps1 calls DoWork.ps1:
$targetPath="M:\target"
echo "target path: $targetPath"
start powershell {.\DoWork.ps1 -target $targetPath -tempdrive D:\}
output (as expected):
target path: M:\target
DoWork.ps1 contains some start code:
param
(
    [string]$tempdrive,
    [string]$target,
    [int] $threads = 8,
    [int] $queuelength = -1
)
echo "variables:"
echo "temp drive: $tempdrive"
echo "target path: $target"
Unexpectedly, $target is not being assigned. Previously I had the variable named $targetpath, which did not work either.
variables:
temp drive: D:\
target path:
Findings
It appears that the issue lies in Workmanager.ps1. Specifying the parameter as a fixed string rather than as a variable will load the parameter. Any solution for this?
start powershell {.\DoWork.ps1 -target "foo" -tempdrive D:\}
When you use a ScriptBlock as an argument to powershell.exe, variables aren't going to be evaluated until after the new session starts. $targetPath has not been set in the child PowerShell process called by Workmanager.ps1 and so it has no value. This is actually an expected behavior of a ScriptBlock in general and behaves this way in other contexts too.
The solution is mentioned in the help text for powershell -?:
[-Command { - | <script-block> [-args <arg-array>] <========== THIS GUY
| <string> [<CommandParameters>] } ]
You must provide the -args parameter which will be passed to the ScriptBlock on execution (separate multiple arguments with a ,). Passed arguments are passed positionally, and must be referenced as though you were processing the arguments to a function manually using the $args array. For example:
$name = 'Bender'
& powershell { Write-Output "Hello, $($args[0])" } -args $name
However, especially with more complicated ScriptBlock bodies, having to remember which index of $args[i] contains the value you want at a given time is a pain in the butt. Luckily, we can use a little trick with defining parameters within the ScriptBlock to help:
$name = 'Bender'
& powershell { param($name) Write-Output "Hello, $name" } -args $name
This will print Hello, Bender as expected.
Some additional pointers:
The ScriptBlock can be multiline, as though you were defining a function. The examples above are single-line due to their simplicity.
A ScriptBlock is just an unnamed function, which is why defining parameters and referencing arguments within one works the same way.
To exemplify this behavior outside of powershell.exe -Command, Invoke-Command requires you to pass variables to its ScriptBlock in a similar fashion. Note, however, that the linked answer uses an already-defined function body as the ScriptBlock (which is totally valid to do).
You don't need to use Start-Process here (start is its alias), at least as demonstrated in your example. You can simply use the call operator & unless you need to do something more complex than "run the program and wait for it to finish". See this answer of mine for more information.
If you opt to pass a string to powershell.exe instead, you don't need to provide arguments, and your variables will get rendered in the current PowerShell process (see the sketch after this list). However, so will any other unescaped variables that might be intended to be set within the child process, so be careful with this approach. Personally, I prefer using a ScriptBlock regardless, and just deal with the extra parameter definition and arguments.
Using the call operator & is optional when you are not executing a path rendered as a string. It can be omitted in the examples above, but is more useful like so:
& "C:\The\Program Path\Contains\spaces.exe"
& $programPathAsAVariable
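As referenced above, a sketch of the string-based alternative for the original scenario; everything in the string, including $targetPath, is expanded by the current session before the child powershell.exe process receives it:

$targetPath = 'M:\target'
powershell -Command ".\DoWork.ps1 -tempdrive D:\ -target $targetPath"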
Normally in PowerShell this works:
# parent.ps1
$x = 1
&"$PSScriptRoot/child.ps1"
# child.ps1
Write-Host $x
When parent.ps1 runs, it prints out 1 since child.ps1 has inherited it.
Can I prevent this for my script?
I can do $private:x = 1, but parent has many variables, so it's verbose and error-prone.
Is there a way to call child.ps1 without inheriting scope?
Or maybe a way to mark everything in parent private?
No, short of defining all variables in the calling scope (and its ancestral scopes) with the $private: scope, you cannot prevent PowerShell's dynamic scoping.
That is, creating a variable in a given scope (without $private:) makes it visible to all its descendant scopes, such as the child scope in which a script (invoked directly or via &) runs.
Also, certain automatic (built-in) variables are defined with option AllScope, which invariably makes them visible in all scopes, not just descendant ones.
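A quick sketch of the contrast:

$private:secret = 'hidden'
$normal = 'visible'
& {
    "normal: [$normal]"   # -> normal: [visible]  (inherited via dynamic scoping)
    "secret: [$secret]"   # -> secret: []         ($private: keeps it out of child scopes)
}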
Workarounds:
In-process:
Call your script via a thread job, using Start-ThreadJob (PowerShell v6+) or with ForEach-Object -Parallel (v7+); e.g.:
'dummy' | ForEach-Object -Parallel { & "$using:PSScriptRoot/child.ps1" }
Thread jobs and the threads created by ForEach-Object -Parallel do not inherit the caller's state (with the exception of the current location in v7+)[1].
At the start of your script, enumerate all variables via Get-Variable and create local copies that you explicitly set to $null (you'll need to ignore errors stemming from built-in variables that you cannot override) - this will effectively shadow the variables from ancestral scopes.
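For instance, a rough sketch of that shadowing step, placed at the top of child.ps1:

# Shadow every variable visible from ancestral scopes with a local $null copy;
# errors from built-in variables that cannot be overridden are ignored.
foreach ($v in Get-Variable) {
    Set-Variable -Name $v.Name -Value $null -Scope Local -ErrorAction Ignore
}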
Out-of-process:
Call your script via a new PowerShell process (powershell -File ... or pwsh -File ...) or via a background job (using Start-Job).
Caveat: In addition to decreased performance due to cross-process XML-based serialization, type fidelity may be lost - see this answer for details.
[1] Note that providing an opt-in for copying the caller's state to the ForEach-Object -Parallel threads is now being considered; see this GitHub feature request.