PowerShell - "Clear-Item variable:" vs "Remove-Variable"

When storing text temporarily in PowerShell variables at runtime, what is the most efficient way of removing a variable's contents from memory when it is no longer needed?
I've used both Clear-Item variable: and Remove-Variable, but how quickly does the data actually get released from memory when the variable is removed with the latter versus nulled out with the former?
EDIT: I should have made it a little clearer why I am asking.
I am automating RDP login for a bunch of application VMs (application doesn't run as a service, outsourced developers, long story).
So I am developing (largely finished) a script to group-launch sessions to each of the VMs.
The idea is that the script function that stores credentials uses Read-Host to prompt for the hostname, then Get-Credential to pick up domain/user/password.
The password is then converted from a SecureString using a 256-bit key (a runtime key unique to the machine/user that stored the creds and runs the group launch).
The VM's name, domain, user and encrypted password are stored in a file. When launching a session, the details are read in, the password is decrypted, the details are passed to cmdkey.exe to store a /generic:TERMSRV credential for that VM, the plaintext password variable is cleared, mstsc is launched to that host, and a few seconds later the credential is removed from the Windows credential store.
(If I passed the password to cmdkey.exe as anything other than plaintext, the RDP session would receive either incorrect or no credentials.)
Hence the question: I need the plaintext password to exist in memory for as short a time as possible.
To keep the security guys happy, the script itself is AES-256 encrypted, and a C# wrapper with its own PowerShell host reads, decrypts and runs it, so there is no plaintext source on the machine that runs this. (The encrypted source sits on a file share, so I effectively have a kill switch: I can simply replace the encrypted script with one that displays a message saying the app has been disabled.)
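Sketched out, the per-VM launch step described above looks roughly like this (the host, domain, user and key variables are placeholders, not the actual script):
$securePass = ConvertTo-SecureString $encryptedPass -Key $aesKey                      # decrypt the stored password
$bstr       = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($securePass)
$plainPass  = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($bstr)
cmdkey.exe /generic:TERMSRV/$vmHost /user:"$domain\$user" /pass:$plainPass
$plainPass  = $null                                                                   # drop the plaintext as soon as cmdkey has it
[System.Runtime.InteropServices.Marshal]::ZeroFreeBSTR($bstr)                         # release the unmanaged copy too
mstsc.exe /v:$vmHost
Start-Sleep -Seconds 5
cmdkey.exe /delete:TERMSRV/$vmHost                                                    # remove the credential from the store again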

The only way I have been able, with certainty, to clear variable data/content is to remove all variables in the current session using:
Remove-Variable -Name * -ErrorAction SilentlyContinue
This removes all variables immediately. In fact, I add this to the end of some of my scripts so that I can be sure that running another script which potentially uses the same variable names will not pick up stale data and cause undesired results.
DRAWBACK: If you only need one variable cleared, as was my case a few minutes ago, then you need to re-instantiate any input variables your script still requires.
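If you do need to keep a few input variables alive, one option is to exclude them by name (the names here are just examples):
Remove-Variable -Name * -Exclude hostList, credFile -ErrorAction SilentlyContinue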

The most efficient way is to let garbage collection do its job. Remember, PowerShell is all .NET, with its famous memory management. Always control your scope and make sure variables go out of scope as soon as they are no longer needed. For example, if a temporary variable only lives inside a function or script block, it becomes eligible for collection as soon as that scope ends, so there is no need to worry about it.
EDIT: Regarding your update, why not just do $yourPasswordVariable = $null? I think it would be much easier to understand, and it should be the fastest way to do it. Clear-Item and Remove-Variable are more general-purpose handlers, so they need to do some processing first before determining that what you really want is to erase a variable.
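For example (a minimal sketch; the variable name is made up):
$plainPass = 'P@ssw0rd!'                  # plaintext exists only while it is needed
# ... hand it to cmdkey.exe here ...
$plainPass = $null                        # drop the reference immediately
Remove-Variable plainPass                 # optionally remove the PSVariable itself as well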

You can use a stopwatch to get the execution time of the cmdlets. I don't think there is really a time difference between these two cmdlets. I normally use Remove-Variable because, in my eyes, it's better to remove the variable completely.
$a = "TestA"
$b = "TestB"
$c = "TestC"
$d = "TestD"
$time = New-Object system.Diagnostics.Stopwatch
Start-Sleep 1
$time.Start()
$time.Stop()
$system = $time.Elapsed.TotalMilliseconds
Write-Host "Stopwatch StartStop" $system
$time.Reset()
Start-Sleep 1
$time.Start()
Clear-Item Variable:a
$time.Stop()
$aTime = $time.Elapsed.TotalMilliseconds - $system
Write-Host "Clear-Item in " $aTime
$time.Reset()
Start-Sleep 1
$time.Start()
Remove-Variable b
$time.Stop()
$bTime = $time.Elapsed.TotalMilliseconds - $system
Write-Host "Remove-Variable in " $bTime
$time.Reset()
Start-Sleep 1
$time.Start()
Clear-Item Variable:c
$time.Stop()
$cTime = $time.Elapsed.TotalMilliseconds - $system
Write-Host "Clear-Item in " $cTime
$time.Reset()
Start-Sleep 1
$time.Start()
Remove-Variable d
$time.Stop()
$dTime = $time.Elapsed.TotalMilliseconds - $system
Write-Host "Remove-Variable in " $dTime
$time.Reset()

Both efficiently remove a reference (one of possibly many) to a .NET object. If that reference is the last reference to the object, then the GC will determine when the memory for said object is collected. However, if you no longer need the variable at all, use Remove-Variable so that the memory associated with the System.Management.Automation.PSVariable object itself can eventually be collected as well.

To measure the time it takes to run script blocks and cmdlets, use Measure-Command.
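For example (variable names are arbitrary):
(Measure-Command { $tmp = 'secret'; Clear-Item variable:tmp }).TotalMilliseconds
(Measure-Command { $tmp = 'secret'; Remove-Variable tmp }).TotalMilliseconds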

How do I prevent Powershell from closing after completion of a script?

Disclaimer: I am the epitome of a scripting/PowerShell rookie, so please bear with me.
I've written a script to return the Active Directory username of any user currently logged into a given workstation.
$input = Read-Host "Workstation Name"
$domain = ".*****.***.com"
$computer = $input + $domain
$list = gwmi win32_computersystem -comp $computer | select Username,Caption
Write-Output $list
However, if I run this from a pinned script in the taskbar, the Powershell window closes before I have a chance to view the results.
I have tried methods 2 and 3 from this post, but to no avail. Method 2 prompts for user input before the results are displayed instead of after, even when the code for the prompt is added at the end of the script.
Any help would be greatly appreciated.
Method 2 from the linked post - i.e., waiting for the user to press a key before exiting the script - can be used, but it requires additional effort:
End your script as follows in order to see the value of $list before the pause command prompts:
$list | Out-Host # Force *synchronous* to-display output.
pause # Wait for the user to press Enter before exiting.
Note: pause in PowerShell is simply a function wrapper around Read-Host as follows: $null = Read-Host 'Press Enter to continue...' Therefore, if you want to customize the prompt string, call Read-Host directly.
This answer explains why the use of Out-Host (or Format-Table) is necessary in this case; in short:
In PSv5+, an implicitly applied Format-Table command asynchronously waits for up to 300 msecs. for additional pipeline input, in an effort to derive suitable column widths from the input data.
Because you use Write-Output to emit objects that have no predefined formatting data and that have 2 properties (4 or fewer), tabular output is implicitly chosen, and Format-Table is used behind the scenes, asynchronously.
Note: The asynchronous behavior applies only to output objects for whose types formatting instructions aren't predefined (as would be reported with Get-FormatData <fullOutputTypeName>); for instance, the output format for the System.Management.Automation.AliasInfo instances output by Get-Alias is predefined, so Get-Alias; pause does produce output in the expected sequence.
The pause command executes before that waiting period has elapsed, and only after you've answered the prompt does the table print, after which point the window closes right away.
The use of an explicit formatting command (Out-Host in the most generic case, but any Format-* cmdlet will do too) avoids that problem by producing display output synchronously, so that the output will be visible by the time pause displays its prompt.
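Applied to the script in the question, the ending would then look something like this:
$list = gwmi win32_computersystem -comp $computer | select Username,Caption
$list | Out-Host # Display the results synchronously.
pause            # Wait for Enter before the window closes.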
I had the same problem with scripts that I execute "on demand". I tend to simply add a Read-Host at the end of the script, like so:
$str = "This text is hardly readable because the console closes instantly"
Write-Output $str
Read-Host "Script paused - press [ENTER] to exit"

How to prevent input from displaying in console while script is running

I have a script that runs several loops of code and relies on specific input at various phases in order to advance. That functionality is working. My current issue is that extraneous input supplied by the user gets displayed in the console window at whatever position the cursor currently happens to be.
I have considered ignoring this issue since the functionality of the script is intact; however, I am striving for high standards with the console display of this script, and I would like to know a way to disable all user input, period, unless prompted for. I imagine the answer has something to do with being able to tell the input buffer to store 0 entries, or somehow disabling and then re-enabling the keyboard as needed.
I have tried using $HOST.UI.RawUI.FlushInputBuffer() at strategic locations in order to prevent characters from displaying, but I don't think there's anywhere I could put that in my loop that will perfectly block all input from displaying during code execution (it works great for making sure nothing gets passed when input is required, though). I've tried looking up the solution, but the only method I could find for manipulating the input buffer is the one above. I've also tried strategic use of the $host.UI.RawUI.KeyAvailable property to detect keystrokes during execution, then $host.UI.RawUI.ReadKey() to determine whether these keystrokes are unwanted and do nothing if they are, but the keystrokes still display in the console no matter what.
I am aware that this code is fairly broken as far as reading the key to escape the loop goes, but bear with me. I hacked up this example just so that you could see the issue I need help eliminating. If you hold down any letter key during this code's execution, you'll see unwanted input displaying.
$blinkPhase = 1
# Set Coordinates for cursor
$x = 106
$y = 16
$blinkTime = New-Object System.Diagnostics.Stopwatch
$blinkTime.Start()
$HOST.UI.RawUI.FlushInputBuffer()
do {
    # A fancy blinking ellipsis I use to indicate when Enter should be pressed to advance.
    $HOST.UI.RawUI.FlushInputBuffer()
    while ($host.UI.RawUI.KeyAvailable -eq $false) {
        if ($blinkTime.Elapsed.Milliseconds -gt 400) {
            if ($blinkPhase -eq 1) {
                [console]::SetCursorPosition($x,$y)
                Write-Host ". . ." -ForegroundColor Gray
                $blinkPhase = 2
                $blinkTime.Restart()
            } elseif ($blinkPhase -eq 2) {
                [console]::SetCursorPosition($x,$y)
                Write-Host " "
                $blinkPhase = 1
                $blinkTime.Restart()
            }
        }
        Start-Sleep -m 10
    }
    # Reading for the actual key to break the loop and advance the script.
    $key = $host.UI.RawUI.ReadKey()
} while ($key.key -ne "Enter")
The expected result is that holding down any character key will NOT display the input in the console window while the ellipsis is blinking. The actual result, error messages aside, is that a limited amount of unwanted/unnecessary input IS displayed in the console window, making the script look messy and also interfering with the blinking.
What you're looking for is to not echo (print) the keys being pressed, and that can be done with:
$key = $host.UI.RawUI.ReadKey('IncludeKeyDown, NoEcho')
Also, your test for when Enter was pressed is flawed[1]; use the following instead:
# ...
} while ($key.Character -ne "`r")
Caveat: As of at least PSReadLine version 2.0.0-beta4, a bug causes $host.UI.RawUI.KeyAvailable to report false positives, so your code may not work as intended - see this GitHub issue.
Workaround: Use [console]::KeyAvailable instead, which is arguably the better choice anyway, given that you're explicitly targeting a console (terminal) environment with your cursor-positioning command.
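Applied to your loop, the workaround would look roughly like this (sketch only):
do {
    while (-not [console]::KeyAvailable) {
        # ... blink the ellipsis here ...
        Start-Sleep -Milliseconds 10
    }
    $key = [console]::ReadKey($true)    # $true = intercept, i.e. don't echo the key
} while ($key.Key -ne [ConsoleKey]::Enter)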
As an aside: You can simplify and improve the efficiency of your solution by using a thread job to perform the UI updates in a background thread, while only polling for keystrokes in the foreground:
Note: Requires the ThreadJob module, which comes standard with PowerShell Core, and on Windows PowerShell can be installed with Install-Module ThreadJob -Scope CurrentUser, for instance.
Write-Host 'Press Enter to stop waiting...'
# Start the background thread job that updates the UI every 400 msecs.
# NOTE: for simplicity, I'm using a simple "spinner" here.
$jb = Start-ThreadJob {
    $i = 0
    while ($true) {
        [Console]::Write("`r{0}" -f '/-\|'[($i++ % 4)])
        Start-Sleep -ms 400
    }
}
# Start another thread job to do work in the background.
# ...
# In the foreground, poll for keystrokes in shorter intervals, so as
# to be more responsive.
while (-not [console]::KeyAvailable -or ([Console]::ReadKey($true)).KeyChar -ne "`r") {
    Start-Sleep -Milliseconds 50
}
$jb | Remove-Job -Force # Stop and remove the background UI thread.
Note the use of [Console]::Write() in the thread job, because Write-Host output wouldn't actually be passed straight through to the console.
[1] You tried to access a .Key property, which only the [System.ConsoleKeyInfo] type returned by [console]::ReadKey() has. The approximate equivalent in the $host.UI.RawUI.ReadKey() return type, [System.Management.Automation.Host.KeyInfo], is .VirtualKeyCode, but its specific type differs, so you can't (directly) compare it to "Enter". The latter type's .Character property returns the actual [char] pressed, which is the CR character ("`r") in the case of Enter.
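To illustrate the difference between the two return types (illustrative snippet):
$k1 = [console]::ReadKey($true)                           # [System.ConsoleKeyInfo]
$k1.Key -eq [ConsoleKey]::Enter                           # has a .Key enum to compare against
$k2 = $host.UI.RawUI.ReadKey('IncludeKeyDown, NoEcho')    # [System.Management.Automation.Host.KeyInfo]
$k2.Character -eq "`r"                                    # compare the character; Enter maps to CR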

How can I prevent variable injection in PowerShell?

I was triggered again by a comment from @Ansgar Wiechers on a recent PowerShell question - DO NOT use Invoke-Expression - which relates to a security question that has been in the back of my mind for a long time and that I need to ask.
That strong statement (with a reference to the Invoke-Expression considered harmful article) suggests that an invocation of a script that can overwrite variables is considered harmful.
Also the PSScriptAnalyzer advises against using Invoke-Expression, see the AvoidUsingInvokeExpression rule.
But I once used a technique myself to update a common variable in a recursive script, which can actually overwrite a value in any of its parent scopes, and which is as simple as:
([Ref]$ParentVariable).Value = $NewValue
As far as I can determine, a potentially malicious script could use this technique too, to inject variables no matter how it is invoked...
Consider the following "malicious" Inject.ps1 script:
([Ref]$MyValue).Value = 456
([Ref]$MyString).Value = 'Injected string'
([Ref]$MyObject).Value = [PSCustomObject]@{Name = 'Injected'; Value = 'Object'}
My Test.ps1 script:
$MyValue = 123
$MyString = "MyString"
$MyObject = [PSCustomObject]@{Name = 'My'; Value = 'Object'}
.\Inject.ps1
Write-Host $MyValue
Write-Host $MyString
Write-Host $MyObject
Result:
456
Injected string
@{Name=Injected; Value=Object}
As you see all three variables in the Test.ps1 scope are overwritten by the Inject.ps1 script. This can also be done using the Invoke-Command cmdlet and it doesn't even matter whether I set the scope of a variable to Private either:
New-Variable -Name MyValue -Value 123 -Scope Private
$MyString = "MyString"
$MyObject = [PSCustomObject]@{Name = 'My'; Value = 'Object'}
Invoke-Command {
([Ref]$MyValue).Value = 456
([Ref]$MyString).Value = 'Injected string'
([Ref]$MyObject).Value = [PSCustomObject]@{Name = 'Injected'; Value = 'Object'}
}
Write-Host $MyValue
Write-Host $MyString
Write-Host $MyObject
Is there a way to completely isolate an invoked script/command from overwriting variables in the current scope?
If not, can this be considered as a security risk for invoking scripts in any way?
The advice against using Invoke-Expression is primarily about preventing unintended execution of code (code injection).
If you invoke a piece of PowerShell code - whether directly or via Invoke-Expression - it can indeed (possibly maliciously) manipulate parent scopes, including the global scope.
Note that this potential manipulation isn't limited to variables: for instance, functions and aliases can be modified as well.
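For instance, an invoked script could just as easily shadow commands the caller relies on (the names below are hypothetical):
Set-Item function:global:Get-Secret { 'spoofed' }            # creates/overwrites a function in the global scope
Set-Alias -Name backup -Value Remove-Item -Scope Global      # repoints an alias in the global scope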
Caveat: Running unknown code is problematic in two respects:
Primarily for the potential to perform unwanted / destructive actions directly.[1]
Secondarily, for the potential to maliciously modify the caller's state (variables, ...), which is the only aspect the solutions below guard against.
To provide the desired isolation, you have two basic choices:
Run the code in a child process:
By starting another PowerShell instance; e.g. (use powershell instead of pwsh in Windows PowerShell):
pwsh -c { ./someUntrustedScript.ps1 }
By starting a background job; e.g.:
Start-Job { ./someUntrustedScript.ps1 } | Receive-Job -Wait -AutoRemoveJob
Run the code in a separate thread in the same process:
As a thread job, via the Start-ThreadJob cmdlet (ships with PowerShell [Core] 6+; in Windows PowerShell, it can be installed from the PowerShell Gallery with something like Install-Module -Scope CurrentUser ThreadJob); e.g.:
Start-ThreadJob { ./someUntrustedScript.ps1 } | Receive-Job -Wait -AutoRemoveJob
By creating a new runspace via the PowerShell SDK; e.g.:
[powershell]::Create().AddScript('./someUntrustedScript.ps1').Invoke()
Note that you'll have to do extra work to get the output streams other than the success one, notably the error stream's output; also, .Dispose() should be called on the PowerShell instance on completion of the command.
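A slightly fuller sketch of the SDK approach (illustrative only) might look like this:
$ps = [powershell]::Create().AddScript('./someUntrustedScript.ps1')
try {
    $output = $ps.Invoke()          # success-stream output
    $errors = $ps.Streams.Error     # the error stream must be read separately
} finally {
    $ps.Dispose()                   # release the underlying runspace when done
}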
A child process-based solution will be slow and limited in terms of data types you can return (due to serialization / deserialization being involved), but it provides isolation against the invoked code crashing the process.
A thread-based job is much faster, can return any data type, but can crash the entire process.
In all cases you will have to pass any values from the caller that the invoked code needs access to as arguments or, with background jobs and thread jobs, alternatively via the $using: scope specifier.
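For example (the $dataFromCaller variable is purely illustrative):
$dataFromCaller = 'some value'
# Pass it as an argument ...
Start-ThreadJob { param($x) ./someUntrustedScript.ps1 $x } -ArgumentList $dataFromCaller |
  Receive-Job -Wait -AutoRemoveJob
# ... or reference it via the $using: scope specifier.
Start-ThreadJob { ./someUntrustedScript.ps1 $using:dataFromCaller } |
  Receive-Job -Wait -AutoRemoveJob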
js2010 mentions other, less desirable alternatives:
Start-Process (child process-based, with text-only arguments and output)
PowerShell Workflows, which are obsolescent (they weren't ported to PowerShell Core and won't be).
Using Invoke-Command with "loopback remoting" (-ComputerName localhost) is hypothetically also an option, but then you incur the double overhead of a child process and HTTP-based communication; also, your computer must be set up for remoting, and you must run with elevation (as administrator).
[1] A way to mitigate the problem is to limit which commands, statements, types, ... are permitted to be called when the string is evaluated, which can be achieved via the PowerShell SDK in combination with language modes and/or by explicitly constructing an initial session state. See this answer for an example of SDK use with language modes.
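A rough sketch of combining the SDK with a language mode (illustrative only, not a complete sandbox):
$iss = [System.Management.Automation.Runspaces.InitialSessionState]::CreateDefault()
$iss.LanguageMode = [System.Management.Automation.PSLanguageMode]::ConstrainedLanguage
$ps = [powershell]::Create($iss)
$null = $ps.AddScript('./someUntrustedScript.ps1')
$result = $ps.Invoke()
$ps.Dispose()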

No garbage collection while PowerShell pipeline is executing

UPDATE: The following bug seems to be resolved with PowerShell 5. The bug remains in 3 and 4. So don't process any huge files with the pipeline unless you're running PowerShell 2 or 5.
Consider the following code snippet:
function Get-DummyData() {
for ($i = 0; $i -lt 10000000; $i++) {
"This is freaking huge!! I'm a ninja! More words, yay!"
}
}
Get-DummyData | Out-Null
This will cause PowerShell memory usage to grow uncontrollably. After executing Get-DummyData | Out-Null a few times, I have seen PowerShell memory usage get all the way up to 4 GB.
According to ANTS Memory Profiler, we have a whole lot of things sitting around in the garbage collector's finalization queue. When I call [GC]::Collect(), the memory goes from 4 GB to a mere 70 MB. So we don't have a memory leak, strictly speaking.
Now, it's not good enough for me to be able to call [GC]::Collect() when I'm finished with a long-lived pipeline operation. I need garbage collection to happen during a pipeline operation. However if I try to invoke [GC]::Collect() while the pipeline is executing...
function Get-DummyData() {
for ($i = 0; $i -lt 10000000; $i++) {
"This is freaking huge!! I'm a ninja! More words, yay!"
if ($i % 1000000 -eq 0) {
Write-Host "Prompting a garbage collection..."
[GC]::Collect()
}
}
}
Get-DummyData | Out-Null
... the problem remains. Memory usage grows uncontrollably again. I have tried several variations of this, such as adding [GC]::WaitForPendingFinalizers(), Start-Sleep -Seconds 10, etc. I have tried changing garbage collector latency modes and forcing PowerShell to use server garbage collection to no avail. I just can't get the garbage collector to do its thing while the pipeline is executing.
This isn't a problem at all in PowerShell 2.0. It's also interesting to note that $null = Get-DummyData also seems to work without memory issues. So it seems tied to the pipeline, rather than the fact that we're generating tons of strings.
How can I prevent my memory from growing uncontrollably during long pipelines?
Side note:
My Get-DummyData function is only for demonstration purposes. My real-world problem is that I'm unable to read through large files in PowerShell using Get-Content or Import-Csv. No, I'm not storing the contents of these files in variables. I'm strictly using the pipeline like I'm supposed to. Get-Content .\super-huge-file.txt | Out-Null produces the same problem.
A couple of things to point out here. First, GC calls do work in the pipeline. Here's a pipeline script that only invokes the GC:
1..10 | Foreach {[System.GC]::Collect()}
A perfmon graph of GC activity captured while the script ran confirms that the collections do occur.
However, just because you invoke the GC it doesn't mean the private memory usage will return to the value you had before your script started. A GC collect will only collect memory that is no longer used. If there is a rooted reference to an object, it is not eligible to be collected (freed). So while GC systems typically don't leak in the C/C++ sense, they can have memory hoards that hold onto objects longer than perhaps they should.
In looking at this with a memory profiler, the bulk of the excess memory appears to be taken up by copies of the string with parameter-binding info attached, and the profiler's root paths for those strings lead back into PowerShell itself.
I wonder if there is some logging feature that is causing PowerShell to hang onto a string-ized form of the pipeline-bound objects?
BTW, in this specific case it is much more memory-efficient to assign the output to $null in order to ignore it:
$null = Get-DummyData
Also, if you need to simply edit a file, check out the Edit-File command in the PowerShell Community Extensions 3.2.0. It should be memory efficient as long as you don't use the SingleString switch parameter.
It's not at all uncommon to find that the native cmdlets don't quite cut it when you're doing something unusual like processing a massive text file. Personally, I've found that working with large files in PowerShell is much better when you script it with System.IO.StreamReader:
$SR = New-Object -TypeName System.IO.StreamReader -ArgumentList 'C:\super-huge-file.txt'
while ($null -ne ($line = $SR.ReadLine())) {   # explicit $null test so blank lines don't end the loop early
    Do-Stuff $line
}
$SR.Close()
Note that you should use an absolute path in the ArgumentList: .NET resolves relative paths against the process's working directory, which usually isn't the same as PowerShell's current location.
Get-Content is simply meant to read the entire file into memory as an array and then output it. I think it just calls System.IO.File.ReadAllLines().
I don't know of any way to tell PowerShell to discard items from the pipeline immediately upon completion, or to let a function return items asynchronously; instead it preserves order. It may not allow this because it has no natural way to tell that an object isn't going to be used later on, or that later objects won't need to refer to earlier ones.
The other nice thing about PowerShell is that you can often adopt the C# answers as well. I've never tried File.ReadLines, but it looks like it might be pretty easy to use, too.
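File.ReadLines streams the file lazily, so from PowerShell it might look something like this (path and processing are placeholders):
foreach ($line in [System.IO.File]::ReadLines('C:\super-huge-file.txt')) {
    Do-Stuff $line   # process one line at a time without buffering the whole file
}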