What type of object is $<drivename>: (such as `$code:`) in PowerShell?

I was using tab autocompletion for a variable name in PowerShell 5.1 today and noticed that one of the choices was the name of a PSDrive. The drive's name is docs, and the variable I wanted to expand is called $document_name. When I typed $do<tab>, the shell did indeed expand what I had typed to $document_name, but for some reason I typed <tab> a second time, and that's when the expanded text changed to $docs:.
I explored further and found that this type of variable exists for each of my PSDrives, or at least tab expansion suggests that it does.
More formally, for every PSDrive PSD, tab expansion believes that $PSD: is a valid thing.
My question is simple: what the heck are these? Here are some observations I've made so far:
These names are prefixed with $, so they look like PS variables. For the rest of this discussion (and in the earlier discussion above), I will assume they are variables and refer to them as such.
Although they appear to be variables, they are not listed in the Variable: PSDrive like most variables. In this way, they behave like the $env "variable," which also is not listed in Variable:. I have a feeling that if I could find documentation about $env, I'd understand these objects as well.
In some ways, they behave like pointers to filesystem objects. For example, if there is a file named readme.txt containing the text "Hello, world!" on a PSDrive named code, then all of the following are possible interactions with PowerShell.
Fetch the contents of the file.
λ ${code:\readme.txt}
Hello, world!
Just to prove that the type of the above result is String:
λ ${code:\readme.txt} | % { $_.GetType().Name }
String
Trying to use this as a reference to the PSDrive doesn't work well for many operations, such as cd:
C:\
λ cd ${code:}
At line:1 char:4
+ cd ${code:}
+ ~~~~~~~~
Variable reference is not valid. The variable name is missing.
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : InvalidBracedVariableReference
I could go on, but I'm stumped. If I pass $code: (or $env:, for that matter) to Get-Member, I get an error saying Variable reference is not valid.
So just what the heck are "variables" like $env and $<PSDrive>: (such as $code:)? Are they expressions? Built-in expressions? Some kind of object? Thanks for any help.

What you're seeing is namespace variable notation, which is a variable-based way to access the content of items in PowerShell drives whose underlying provider implements content-based access (i.e., implements the IContentCmdletProvider interface).
Terminology and documentation note:
As of this writing, the docs briefly explain namespace variable notation in the conceptual about_Scopes help topic, albeit without using that term and, somewhat confusingly, in the context of scope modifiers; while namespace qualifiers (such as $env:) are unrelated to scope modifiers (such as $script:), they use the same basic syntax form.[1]
The general syntax is:
${<drive>:<path>} # same as: Get-Content <drive>:<path>
${<drive>:<path>} = ... # same as: Set-Content <drive>:<path> -Value ...
The enclosing {...} aren't necessary if both the <drive> name and the <path> can syntactically serve as a variable name; e.g.:
$env:HOME # no {...} needed
${env:ProgramFiles(x86)} # {...} needed due to "(" and ")"
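As a quick sanity check of the Get-Content / Set-Content equivalence, here's a minimal sketch using the Env: drive (MY_DEMO is just a made-up variable name):
$env:MY_DEMO = 'abc'        # same as: Set-Content Env:MY_DEMO -Value 'abc'
$env:MY_DEMO                # same as: Get-Content Env:MY_DEMO  -> 'abc'
Remove-Item Env:MY_DEMO     # remove the demo variable again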
In practice, as of Windows PowerShell v5.1, the following in-box drive providers support namespace variable notation:
Environment (drive Env:)
Function (drive Function:)
Alias (drive Alias:)
FileSystem (drives C:, ...)
Variable (drive Variable:) - though virtually pointless, given that omitting the drive part accesses variables by default (e.g., $variable:HOME is the same as just $HOME).
Of these, the Env: drive is by far the most frequently used with namespace variable notation, even though most users aren't aware of what underlies an environment-variable reference such as $env:HOME.
On occasion you see it used with a filesystem drive - e.g., ${c:\foo\file.txt} - but the fact that you can only use literal paths and that you cannot control the character encoding limits its usefulness.
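For simple literal paths it does work in both directions, though; a minimal sketch (assuming a writable C:\temp folder exists):
${C:\temp\greeting.txt} = 'Hello, world!'   # same as: Set-Content C:\temp\greeting.txt -Value 'Hello, world!'
${C:\temp\greeting.txt}                     # same as: Get-Content C:\temp\greeting.txt -> Hello, world!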
It allows interesting uses, however; e.g.:
PS> $alias:foreach # Get the definition of alias 'foreach'
ForEach-Object
PS> $function:prompt # Get the body of the 'prompt' function
"PS $($executionContext.SessionState.Path.CurrentLocation)$('>' * ($nestedPromptLevel + 1)) ";
# .Link
# https://go.microsoft.com/fwlink/?LinkID=225750
# .ExternalHelp System.Management.Automation.dll-help.xml
# Define a function foo that echoes 'hi' and invoke it.
PS> $function:foo = { 'hi' }; foo
hi
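Along the same lines - purely as an illustration - you can clone an existing function under a new name by assigning one Function: item to another:
PS> $function:prompt2 = $function:prompt   # copy the body of 'prompt' into a new function 'prompt2'
PS> prompt2                                # prints the same prompt string that 'prompt' produces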
Note:
Because ${<drive>:<path>} and ${<drive>:<path>} = <value> are equivalent to Get-Content -Path <drive>:<path> and Set-Content -Path <drive>:<path> <value>, paths are interpreted as wildcard expressions (because that's what -Path does, as opposed to -LiteralPath), which can cause problems with paths that look like wildcards - see this answer for an example and a workaround.
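For example (a hypothetical path, shown only to illustrate the wildcard pitfall and the -LiteralPath workaround):
Set-Content -LiteralPath 'C:\temp\log[1].txt' -Value 'hello'   # create a file whose name contains wildcard metacharacters
${C:\temp\log[1].txt}                                          # problematic: log[1].txt is treated as a wildcard pattern, so the literal file isn't targeted
Get-Content -LiteralPath 'C:\temp\log[1].txt'                  # 'hello' - the path is taken verbatim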
[1] Previously, the feature wasn't documented at all; GitHub docs issue #3343 led to the current documentation, albeit not in the way that said issue proposed.

$env gives you the Windows environment variables, the same ones you see when you run SET in a command prompt. There are a few that are PS-specific.
The variable is providing access to the Environment Provider. https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_environment_variables?view=powershell-6
There are a bunch of other Providers that are described here: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_providers?view=powershell-6
As it says in the doco:
The model for data presentation is a file system drive. To use data that the provider exposes, you view it, move through it, and change it as though it were data on a hard drive. Therefore, the most important information about a provider is the name of the drive that it supports.
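If you want to see which providers and drives are available in your own session, the standard discovery cmdlets are Get-PSProvider and Get-PSDrive:
Get-PSProvider | Select-Object Name, Drives       # providers and the drives they expose
Get-PSDrive | Format-Table Name, Provider, Root   # all current drives with their providers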


PowerShell Get-VHD "is not an existing virtual hard disk file"

When creating a new VM in Hyper-V, to keep things organized, I use a particular naming convention for the associated VHDX files. The convention is the VM's FQDN, followed by the SCSI controller attachment point, followed by the drive's name or purpose inside the VM. I enclose the SCSI and name parts in round and square brackets, respectively. I find this makes it a little easier, from a human perspective, to match the VHDX files in Hyper-V to what the VM sees internally when doing maintenance tasks. It has also helped with scripting in the past. An example file name would look as follows...
servername.example.com(0-0)[OS].vhdx
This has worked well for quite some time, but recently I tried to run some PowerShell commands against the VHDX files and ran across a problem. Apparently the square brackets for the internal VM name are being parsed as RegEx or something inside of the PowerShell commandlet (I'm honestly just guessing on this). When I try to use Get-VHD on a file with the above naming convention it spits out an error as follows:
Get-VHD : 'E:\Hyper-V\servername.example.com\Virtual Hard Disks\servername.example.com(0-0)[OS].vhdx' is not an existing virtual hard disk file.
At line:1 char:12
+ $VhdPath | Get-VHD
+ ~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Get-VHD], VirtualizationException
+ FullyQualifiedErrorId : InvalidParameter,Microsoft.Vhd.PowerShell.Cmdlets.GetVHD
If I simply rename the VHDX file to exclude the "[OS]" portion of the naming convention, the command works properly. The round brackets for the SCSI attachment point don't seem to bother it. I've tried doing a replace command to add a backtick (`) in front of the brackets to escape them, but the same error results. I've also tried double backticks to see if passing in a backtick helped... that at least showed a single backtick in the error it spat out. Suspecting RegEx, I tried the backslash as an escape character too... which had the interesting effect of converting all the backslashes in the file path into double backslashes in the error message. I tried defining the path variable via single and double quotes without success. I've also tried a couple of different ways of obtaining it via pipeline, such as this example...
((Get-VM $ComputerName).HardDrives | Select -First 1).Path | Get-VHD
And, for what it's worth, as many VMs as I am attempting to process... I need to be able to run this via pipeline or some other automation scriptable method rather than hand coding a reference to each VHDX file.
Still thinking it may be something with RegEx, I attempted to escape the variable string with the following to no avail:
$VhdPathEscaped = [System.Text.RegularExpressions.Regex]::Escape($VhdPath)
Quite frankly, I'm out of ideas.
I first ran across this problem when I tried to compact a VHDX file with PowerShell. But since the single VM I was working with needed to be offline for that operation anyway, rather than fight the error with the VHDX name, I simply renamed it, compacted it, and changed the name back. However, for the work I'm trying to do now, I can't afford to take the VM offline, as this script is going to run against a whole fleet of live VMs. So, I need to know how to properly escape those characters so the Get-VHD commandlet will accept those file names.
tl;dr:
A design limitation of Get-VHD prevents it from properly recognizing VHD paths that contain [ and ] (see bottom section for details).
Workaround: Use short (8.3) file paths assuming the file-system supports them:
$fso = New-Object -ComObject Scripting.FileSystemObject
$VhdPath |
  ForEach-Object { $fso.GetFile((Convert-Path -LiteralPath $_)).ShortPath } |
    Get-VHD
Otherwise (as you report, in your case the VHDs are located on a ReFS file system, which does not support short names), your only options are:
Rename your files (and folders, if applicable) to not contain [ or ].
Alternatively, if you can assume that your VHDs are attached to VMs, you can provide the VM(s) to which the VHDs of interest are attached as input to Get-VHD, via Get-VM (you may have to filter the output down to only the VHDs of interest):
(Get-VM $vmName).Id | Get-VHD
Background information:
It appears that Get-VHD only has a -Path parameter, not also a -LiteralPath parameter, which looks like a design flaw:
Having both parameters is customary for file-processing cmdlets (e.g., Get-ChildItem, whose behavior is sketched below):
-Path accepts wildcard expressions to match potentially multiple files by a pattern.
-LiteralPath is used to pass literal (verbatim) paths, to be used as-is.
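Here's a minimal illustration of that difference with Get-ChildItem, using a sample file name like the one in the question (any cmdlet that offers both parameters behaves the same way):
Set-Content -LiteralPath '.\server(0-0)[OS].vhdx' -Value ''    # create a sample file whose name contains [ and ]
Get-ChildItem -Path '.\server(0-0)[OS].vhdx'                   # no match: [OS] is read as a wildcard character set
Get-ChildItem -Path '.\server(0-0)`[OS`].vhdx'                 # works: wildcard metacharacters escaped with `
Get-ChildItem -LiteralPath '.\server(0-0)[OS].vhdx'            # works: the path is taken verbatim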
What you have is a literal path that happens to look like a wildcard expression, due to the use of the metacharacters [ and ]. In wildcard contexts, these metacharacters must normally be escaped - as `[ and `] - in order to be treated as literals, which the following (regex-based) -replace operation ensures[1] (even with arrays as input).
Unfortunately, this appears not to be enough for Get-VHD. (Though you can verify that it works in principle by piping to Get-Item instead, which also binds to -Path).
Even double `-escaping (-replace '[][]', '``$&'), which is unexpectedly required in some cases (see GitHub issue #7999), doesn't work.
# !! SHOULD work, but DOES NOT
# !! Ditto for -replace '[][]', '``$&'
$VhdPath -replace '[][]', '`$&' | Get-VHD
Note: Normally, a robust way to ensure that a cmdlet's -LiteralPath parameter is bound by pipeline input is to pipe the output from Get-ChildItem or Get-Item to it.[2] Given that Get-VHD lacks -LiteralPath, this is not an option, however:
# !! DOES NOT HELP, because Get-VHD has no -LiteralPath parameter.
Get-Item -LiteralPath $VhdPath | Get-VHD
[1] See this regex101.com page for an explanation of the regex ($0 is an alias of $& and refers to the text captured by the match at hand, i.e. either [ or ]). Alternatively, you could pass all paths individually to the [WildcardPattern]::Escape() method (e.g., [WildcardPattern]::Escape('a[0].txt') yields a`[0`].txt.
[2] See this answer for the specifics of how this binding, which happens via the provider-supplied .PSPath property, works.
Ok... So, I couldn't get the escape characters to be accepted by Get-VHD... be it by hand or programmatically. I also gave passing it on the pipeline via Get-ChildItem a go, without success. However... I did manage to find an alternative for my particular use case. In addition to a path to a VHDX file, the Get-VHD command will also accept VMId and DiskNumber as parameters. So, while it's not the way I wanted to go about obtaining what I need (because this method spits out info on all the attached drives), I can still accomplish the task at hand by using the following example:
Get-VM $ComputerName | Select-Object -Property VMId | Get-VHD
By referencing them in this manner, the Get-VHD commandlet is happy. This works for today's problem only because the VHDX files in question are attached to VMs. However, I'll still need to figure out referencing unattached files at some point in the future. Which... may ultimately require a slow and painful renaming of all the VHDX files to not use square brackets in their names.

Using PowerShell environment variables as strings when outputting files

I am using Get-WindowsAutopilotInfo to generate a computer's serial number and a hash code and export that info as a CSV.
Here is the code I usually use:
new-item "c:\Autopilot_Export" -type directory -force
Set-Location "C:\Autopilot_Export"
Get-WindowsAutopilotInfo.ps1 -OutputFile Autopilot_CSV.csv
Robocopy C:\Autopilot_Export \\Zapp\pc\Hash_Exports /copyall
This outputs a CSV file named "Autopilot_CSV.csv" to the C:\Autopilot_Export directory and then Robocopy copies it to the Network Share drive inside of the Hash_Exports folder listed above. In the above code, if I manually type in "test", "123", "ABC", etc. and replace "Autopilot_CSV" it will output a CSV under all of those names as well. So it looks like Get-WindowsAutopilotInfo will create a CSV file and save the file name with whatever string you pass into it. Great.
However, I am running this script on multiple different computers and I would like the exported CSV to be named something unique to the machine it is running on so I can easily identify it once it's copied. I am trying to pass the value of the environmental variable $env:computername as a string input for the CSV and it isn't working.
Here's the code I'm trying to use:
new-item "c:\Autopilot_Export" -type directory -force
Set-Location "C:\Autopilot_Export"
Get-WindowsAutopilotInfo.ps1 -OutputFile $env:computername.csv
Robocopy C:\Autopilot_Export C:\Users\danieln\Desktop /copyall
This code does not generate the CSV file at all. Why not?
Do I just have a basic misunderstanding of how environment variables are used in PowerShell? $env:computername appears to return a string of the computer's name, but I cannot pass it into Get-WindowsAutopilotInfo and have it save, even though it works if I manually type a string in as the input.
I have also tried setting it to a variable, $computername = [string]$env:computername, and passing $computername in before the .csv, and that doesn't appear to work either. And per the documentation, environment variables are apparently already strings.
Am I missing something?
Thanks!
js2010's answer shows the effective solution: use "..."-quoting, i.e. an expandable string explicitly.
It is a good habit to form to use "..." explicitly around command arguments that are strings containing variable references (e.g. "$HOME/projects") or subexpressions (e.g., "./folder/$(Get-Date -Format yyyy-MM)")
While such compound string arguments generally do not require double-quoting[1] - because they are implicitly treated as if they were "..."-enclosed - in certain cases they do, and when they do is not obvious and therefore hard to remember:
This answer details the surprisingly complex rules, but here are two notable pitfalls if you do not use "...":
If a compound argument starts with a subexpression ($(...)), its output is passed as a separate argument; e.g., Write-Output $(Get-Location)/folder passes two arguments to Write-Output: the result of $(Get-Location) and literal /folder.
If a compound argument starts with a variable reference and is followed by what syntactically looks like either (a) a property access (e.g., $PSVersionTable.PsVersion) or (b) a method call (e.g., $PSHome.Tolower()) it is executed as just that, i.e. as an expression (rather than being considered a variable reference followed by a literal part).
Aside #1: Such an argument then isn't necessarily a string, but whatever data type the property value or method-call return value happens to be.
Aside #2: A compound string that starts with a quoted string (whether single- or double-quoted) does not work, because the quoted string at the start is also considered an expression, like property access and method calls, so that what comes after is again passed as separate argument(s). Therefore, you can only have a compound string consisting of a mix of quoted and unquoted substrings if it starts with an unquoted substring or a non-expression variable reference.[2]
The latter is what happened in this case:
Unquoted $env:computername.csv was interpreted as an attempt to access a property named csv on the object stored in (environment) variable $env:computername, and since the [string] instance stored there has no csv property, the expression evaluated to $null.
By forcing PowerShell to interpret this value as an expandable string via "...", the usual interpolation rules apply, which means that the .csv is indeed treated as a literal (because property access requires use of $(...) in expandable strings).
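To see the three behaviors side by side (output shown for a hypothetical machine named MYPC):
$env:COMPUTERNAME.csv        # property access: [string] has no .csv property -> $null
"$env:COMPUTERNAME.csv"      # expandable string: .csv is a literal suffix    -> MYPC.csv
"$($env:COMPUTERNAME).csv"   # same result, with an explicit subexpression    -> MYPC.csv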
[1] Quoting is required for strings that contain metacharacters such as spaces.
For values to be treated verbatim, '...' quoting is the better choice (see the bottom section of this answer for an overview of PowerShell string literals).
Also, using neither double- nor single-quoting and individually `-escaping metacharacters is another option (e.g., Write-Output C:\Program` Files).
[2] For instance, Write-Output a"b`t"'`c' and Write-Output $HOME"b`t"'`c' work as intended, whereas Write-Output "b`t"'`c' and Write-Output $HOME.Length"b`t"'`c' do not (the latter two each pass 3 arguments). The workaround is to either use a single "..." string with internal `-escaping as necessary (e.g., Write-Output "${HOME}b`t``c") or to use a string-concatenation expression with + enclosed in (...) (e.g., Write-Output ($HOME + "b`t" + '`c')).
Double-quoting seems to work. The colon is special in PowerShell parameters.
echo hi | set-content "$env:computername.csv"
dir
Directory: C:\users\me\foo
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 2/11/2021 1:02 PM 4 COMP001.csv
The colon is special. For example in switch parameters:
get-childitem -force:$true
Actually, it's trying to find a property named csv:
echo hi | Set-Content $env:COMPUTERNAME.length
dir
Directory: C:\Users\me\foo
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 2/11/2021 3:04 PM 4 8
Basically pass it a string rather than the variable:
write-host $env:computername.csv
# output: (no output - .csv is treated as a property access on $env:computername rather than as part of the name)
Try the following syntax:
$filename = "$($env:computername).csv"
write-host $filename
# output: MYPCNAME.csv

PowerShell splatting: pass ErrorAction = Ignore in hash table

Here's a script to list directories / files passed on the command line -- recursively or not:
param( [switch] $r )
$gci_args = @{
    Recurse = $r
    ErrorAction = Ignore
}
$args | gci @gci_args
Now this does not work because Ignore is interpreted as a literal. What's the canonical way to pass an ErrorAction?
I see that both "Ignore" and (in PS7) { Ignore } work as values, but neither seems to make a difference for my use case (bad file name created under Linux, which stops PS5 regardless of the ErrorAction, but does not bother PS7 at all). So I'm not even sure the parameter has any effect.
because Ignore is interpreted as a literal
No, Ignore is interpreted as a command to execute, because it is parsed in argument mode (command invocation, like a shell) rather than in expression mode (like a traditional programming language) - see this answer for more information.
While using a [System.Management.Automation.ActionPreference] enumeration value explicitly, as in filimonic's helpful answer, is definitely an option, you can take advantage of the fact that PowerShell automatically converts back and forth between enum values and their symbolic string representations.
Therefore, you can use string 'Ignore' as a more convenient alternative to [System.Management.Automation.ActionPreference]::Ignore:[1]
$gci_args = @{
    # ...
    ErrorAction = 'Ignore'
}
Note that it is the quoting ('...') that signals to PowerShell that expression-mode parsing should be used, i.e. that the token is a string literal rather than a command.
Also note that -ErrorAction only operates on non-terminating errors (which are the typical kind, however) - see this answer for more information.
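Putting it together, here's a sketch of the question's script with just the ErrorAction value quoted (gci spelled out as Get-ChildItem for clarity):
param( [switch] $r )

$gci_args = @{
    Recurse     = $r
    ErrorAction = 'Ignore'   # a string literal; PowerShell converts it to [ActionPreference] when the parameter is bound
}

$args | Get-ChildItem @gci_args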
As for discovery of the permissible -ErrorAction values:
The conceptual about_CommonParameters help topic covers all common parameters, of which -ErrorAction is one.
Many common parameters have corresponding preference variables (which accept the same values), covered in about_Preference_Variables, which allow you to preset common parameters.
Interactively, you can use tab-completion to see the permissible values (as unquoted symbolic names, which you simply need to wrap in quotes); e.g.:
# Pressing the Tab key repeatedly where indicated
# cycles through the acceptable arguments.
Get-ChildItem -ErrorAction <tab>
[1] Note that using a string does not mean giving up type safety, if the context unambiguously calls for a specific enum type, such as in this case. Validation only happens at runtime either way, given that PowerShell is an interpreted language.
However, it is possible for a PowerShell-aware editor - such as Visual Studio Code with the PowerShell extension - to flag incorrect values at design time. As of version 2020.6.0, that does not yet appear to be the case. Fortunately, tab-completion and IntelliSense work as expected, so the problem may not arise in practice.
That said, as zett42 points out, in the context of defining a hashtable entry for later splatting the expected type is not (yet) known, so explicit use of [System.Management.Automation.ActionPreference] does have advantages: (a) IntelliSense in the editor can guide you, and (b) - assuming that Set-StrictMode -Version 2 or higher is in effect - an invalid value will be reported earlier at runtime, namely at the point of assignment, which makes troubleshooting easier. As of PowerShell 7.1, a caveat regarding Set-StrictMode -Version 2 or higher is that you will not be able to use the intrinsic (PowerShell-supplied) .Count property on objects that don't have it type-natively, due to the bug described in GitHub issue #2798.
I think the best way is to use the native type.
$ErrorActionPreference.GetType().FullName # System.Management.Automation.ActionPreference
So, use
$gci_args = @{
    Recurse = $r
    ErrorAction = [System.Management.Automation.ActionPreference]::Ignore
}

Difference between () and $() [duplicate]

This question already has answers here:
What is the difference between $a = $(Read-Host) and $a = (Read-Host)?
(2 answers)
Closed 5 years ago.
What's the difference between
Write-Host (Get-Date) # just paren
and
Write-Host $(Get-Date) # dollar-paren
Content within the parens could be anything, just going with a simple example. Is there any difference between the two?
I consider myself reasonably experienced with PS, but it's these little things that bug me, especially during code review and the like. Has anyone come across a good source for "here is how the language works" with enough detail to derive answers to these sorts of questions?
The sub expression ($(...)) contains a StatementBlockAst. It can take any number of statements, meaning keywords (if, foreach, etc), pipelines, commands, etc. Parsing is similar to the inside of a named block like begin/process/end.
The paren expression ((...)) can contain a single ExpressionAst which is a limited subset of the AST. The most notable difference between a statement and an expression is that keywords are not parsed.
$(if ($true) { 'It worked!' })
# It worked!
(if ($true) { 'It worked!' })
# if : The term 'if' is not recognized as the name of a cmdlet, function,
# script file, or operable program. Check the spelling of the name, or
# if a path was included, verify that the path is correct and try again.
# At line:1 char:2
# + (if ($true) { 'It worked' })
# + ~~
# + CategoryInfo : ObjectNotFound: (if:String) [], CommandNotFoundException
# + FullyQualifiedErrorId : CommandNotFoundException
Also as others have noted, the sub expression will expand in double quoted strings.
The () helps with order of operations
The $() helps with evaluating values inside of the ()
For instance, if you're trying to find today's date in a string you could do the following:
echo "The length of Bryce is (Get-Date)"
echo "The length of Bryce is $(Get-Date)"
You'll see that the output is different (in one it gives you literally "(Get-Date)" whereas in the other it gives you the evaluated expression of Get-Date)
You can read more about syntax operators here
Parentheses are used to group and establish order, just as they do in mathematics. Starting in PowerShell v3 you can also use them to evaluate a property of a group, such as getting the file names of the files in the current folder by running:
(Get-ChildItem).Name
A sub-expression $() evaluates the script within it and then presents the output of that to be used in the command. It is often used within strings to expand a property of an object, such as:
"Hello $($User.Name), would you like to play a game?"
It can also be useful when working with COM objects, such as Excel, where you may have a range that you want to test against a property of each item. While this does not work, because the Range object does not have a Font property:
$Range.Font|Where{$_.Bold}
This would work, because it would output the Range as a collection of Cell objects, each of which have a Font property:
$($Range).Font|Where{$_.Bold}
You can think of sub-expressions as being a script within your script, since they can be several commands long, and the entire thing is evaluated at once so that the end output can be used for the parent command.
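For instance, here's a small sketch of a multi-statement sub-expression whose combined output becomes a single value:
$summary = $(
    $files = Get-ChildItem -File                          # statement 1: collect the files in the current folder
    $size  = ($files | Measure-Object Length -Sum).Sum    # statement 2: total their sizes
    '{0} file(s), {1} bytes total' -f $files.Count, $size # statement 3: this string is the sub-expression's output
)
$summary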

Are PowerShell profile scripts dot-sourced?

The Microsoft.PowerShell_profile.ps1 script I am using creates a lot of variables when it runs. I have set all the variables' scope to "Script", but the variables used in the script never go out-of-scope.
I would like the variables to go out-of-scope once the script is done running and control is handed over to me.
If I compare the number of global, local, and script variables I have, I come up with the same number.
Example:
# Profile script does what it does.
Get-Variable -Scope Global | Measure-Object
Get-Variable -Scope Local | Measure-Object
Get-Variable -Scope Script | Measure-Object
Output:
60
60
60
Currently, I am capturing a snapshot of the variables at the beginning of my profile script, then removing any new variables at the end.
Example:
$snapshotBefore = Get-Variable
$profileVar1 = 'some value'
$profileVar2 = 'some other value'
$snapshotAfter = Get-Variable
# Compare before and after, and create list of new variables.
Remove-Variable $variablesToRemove
Yes, PowerShell profiles are dot-sourced by design, because that's what allows the definitions contained in them (aliases, functions, ...) to be globally available by default - which is, after all, the main purpose of profile files.
Unfortunately, there is no scope modifier that allows you to create a temporary scope for variables you only want to exist while the profile is loading - even scope local is effectively global in a profile script; similarly, using scope private is also not an option, because the profile's script scope - due to being dot-sourced - is the global scope.
Generally speaking, you can use & (the call operator) with a script block to create variables inside that block that are scoped to that block, but that is usually at odds with creating globally available definitions in a profile, at least by default.
Similarly, calling another script without dot-sourcing it, as in your own answer, will not make its definitions globally available by default.
You can, however, create global elements from non-dot-sourced script blocks / script by specifying the global scope explicitly; e.g.: & { $global:foo = 'Going global' }, or & { function global:bar { 'global func' } }.
That said, the rationale behind dot-sourcing profiles is likely that it's easier to make all definitions global by default, making the definition of typical elements of a profile - aliases, functions, drive mappings, loading of modules - simpler (no need to specify an explicit scope).
By contrast, global variables are less typical, and to define the typical elements listed above you don't usually need script-level (and thus global) variables in your profile.
If you still need to create (conceptually) temporary variables in your profile (which is not a requirement for creating globally available aliases, functions, ...):
A simple workaround is to use an exotic variable-name prefix such as __ inside the profile script to reduce the risk of the variables getting referenced by accident (e.g., $__profileVar1 = ...).
In other words: the variables still exist globally, but their exotic names will typically not cause problems.
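A sketch of that prefix-based approach, including an optional cleanup step at the very end of the profile (the __profile prefix is arbitrary):
# Near the top of the profile:
$__profileStart   = Get-Date
$__profileToolDir = Join-Path $HOME 'tools'

# ... use the variables while the profile runs ...

# Optionally, at the very end of the profile, remove everything with the prefix:
Get-Variable __profile* | Remove-Variable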
However, your approach, even though it requires a little extra work, sounds like a robust workaround; here's what it looks like in full (using PSv3+ syntax):
# Save a snapshot of current variables.
# * If there are variables that you DO want to exist globally,
# define them ABOVE this command.
# * Also, load MODULE and dot-source OTHER SCRIPTS ABOVE this command,
# because they may create variables that *should* be available globally.
$varsBefore = (Get-Variable).Name
# ... define and use temporary variables
# Remove all variables that were created since the
# snapshot was taken, including $varsBefore.
Remove-Variable (Compare-Object $varsBefore (Get-Variable).Name).InputObject
Note that I'm relying on Compare-Object's default behavior of only reporting differences between the objects; assuming you haven't tried to remove any variables yourself, only the variables added since the snapshot are reported.
Note that while it can be inferred from the actual behavior of profile files that they are indeed dot-sourced - given that dot-sourcing is the only way to add elements to the current scope (the global scope, in the case of profiles) - this fact is not explicitly documented as such.
Here are snippets from various help topics (as of PSv5) that provide clues (emphasis mine):
From Get-Help about_Profiles:
A Windows PowerShell profile is a script that runs when Windows PowerShell
starts. You can use the profile as a logon script to customize the
environment. You can add commands, aliases, functions, variables, snap-ins,
modules, and Windows PowerShell drives. You can also add other
session-specific elements to your profile so they are available in every
session without having to import or re-create them.
From Get-Help about_Variables:
By default, variables are available only in the scope in which
they are created.
For example, a variable that you create in a function is
available only within the function. A variable that you
create in a script is available only within the script (unless
you dot-source the script, which adds it to the current scope).
From Get-Help about_Operators:
. Dot sourcing operator
Runs a script in the current scope so that any functions,
aliases, and variables that the script creates are added to the current
scope.
From Get-Help about_Scopes
But, you can add a script or function to the current scope by using dot
source notation. Then, when a script runs in the current scope, any
functions, aliases, and variables that the script creates are available
in the current scope.
To add a function to the current scope, type a dot (.) and a space before
the path and name of the function in the function call.
So it does sound like PowerShell dot-sources the profile. I couldn't find a resource that specifically says that, or other forums that have asked this question.
I have found an answer, and wanted to post it here.
I have changed my profile to only call a script file. The script now has its own scope, and as long as the variables aren't made global, they will go out-of-scope once the profile finishes loading.
So now my profile has one-line:
& (Join-Path (Split-Path $profile -Parent) "Microsoft.PowerShell_profile_v2.ps1")
Microsoft.PowerShell_profile_v2.ps1 can now contain proper scope:
$Global:myGlobalVar = "A variable that will be available during the current session"
$Script:myVar = "A variable that will disappear after the script finishes."
$myVar2 = "Another variable that will disappear after the script finishes."
What this allows is for the profile script to import modules that contain global variables. These variables will continue to exist during the current session.
I would still be curious why Microsoft decided to call the profile in this way. If anyone knows and would like to share, I would love to see the answer here.