PowerShell Import-Module: prefix imported variables

Consider a sample PowerShell module, sample.psm1, which exports a single variable as follows:
$ROOT = "C:\root"
Export-ModuleMember -Variable ROOT
This can be imported specifying a prefix:
Import-Module .\sample.psm1 -Prefix "Sample::" -Force
Even though the prefix works fine for referencing module functions (e.g. you can now do Sample::SomeFunction), it does not seem to work for variables, i.e.
$Sample::ROOT does not work, and neither do
Sample::ROOT,
$(Sample::ROOT), or
Sample::$ROOT;
the variables are imported and available only through their global names ($ROOT in this case).
What are possible options for forcing a prefix on imported variables? What is a general best practice for dealing with imported variables?
Manually prefixing module variables in the module declaration would do, but that's kind of an ugly approach to namespacing.

Seems to me that generally you wouldn't want to export variables from a module, but instead provide getter/setter functions, for the same reasons that .NET classes don't usually use fields, but properties instead.
If you execute Import-Module with -Verbose on your module you'll see that functions are prefixed and variables aren't. I take that as a strong indication that prefixes on variables aren't supported.
However, you can use this code to access variables and functions in myModule:
$myMod = get-module myModule
& $myMod {$root}
This gives access to variables and functions in the module ahead of like-named variables and functions outside of the module. (If you use that code to access a variable not in the module, but in an outer scope, you'll get the variable in the outer scope, because modules have access to variables in outer scopes.) By not exporting a module variable, the only way to access it from outside the module is by qualifying it with $myMod.
So not quite a namespace but close.
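For illustration, here is a minimal sketch of the getter/setter approach suggested above (the function names and the script-scope access pattern are just one way to do it, not taken from the question's module):
# sample.psm1 - keep $ROOT private and expose it through functions instead
$ROOT = 'C:\root'
function Get-SampleRoot { $ROOT }
function Set-SampleRoot ([string] $Value) { $script:ROOT = $Value }
Export-ModuleMember -Function Get-SampleRoot, Set-SampleRoot

# caller
Import-Module .\sample.psm1
Get-SampleRoot            # -> C:\root
Set-SampleRoot 'D:\root'  # updates the module-internal variable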

Related

How do I import specific functions from a module explicitly?

This is for readability and safety (avoiding clobbering by duplicate function names), and to work around the lack of namespace support in PowerShell modules.
I want to be able to do something like this:
Import-Module MyHelpers.psm1 -Functions "FuncOne" -as MyHelpers.Func-One
MyHelpers.Func-One -blah sfsdfsdf
This makes it obvious where FuncOne lives; for larger scripts I consider this a pretty serious requirement.
It would probably be good enough if I could at least explicitly define which functions I'm importing (without being able to rename them), so that I can at least see where they are coming from. Is there any support for this? If not, I'll just have to name all functions inside of MyHelpers like MyHelpers.Func-One, but then PowerShell will complain that the verb is wrong; would that also break other things?
Here's my comment as answer:
You can use the -Function parameter to restrict the function(s) you want to import.
Next, with the -Prefix parameter you can add a prefix string to these imported functions.
See Import-Module examples 5 and 6.
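A hedged sketch of what that can look like (the module and its Get-Thing / Set-Thing functions are hypothetical; -Prefix inserts the prefix before the noun of each imported command):
# MyHelpers.psm1 (hypothetical) exports Get-Thing and Set-Thing
Import-Module .\MyHelpers.psm1 -Function Get-Thing -Prefix MyHelpers
Get-MyHelpersThing     # only Get-Thing was imported, with the prefixed noun
# Set-MyHelpersThing is not available, because only Get-Thing was requested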
Theo's answer is correct, just want to point out also that you can fully qualify commands that you call by prefixing them with the module name already, for example:
Microsoft.PowerShell.Core\Import-Module -Name ActiveDirectory
ActiveDirectory\Get-ADComputer $env:COMPUTERNAME
or, applied to your example:
Import-Module .\MyHelpers.psm1 -Function FuncOne
MyHelpers\FuncOne -blah sfsdfsdf

Modifying a variable in a parent script using a child script in PowerShell

I have two PowerShell scripts.
The child script is called from the parent script, and I need to assign a variable in the parent script from the child script.
parent.ps1
$count = $Null
child.ps1
$count = 10
How do I make sure that the change in the child script gets reflected in the parent script?
Manuel Batsching's answer shows you how to use dot-sourcing to solve your problem indirectly: by executing the child script directly in the parent script's scope, all of the child script's (script-level) variables (among other definitions, namely functions and aliases) are created directly in the parent script's scope, which may be undesirable.
PowerShell does offer mechanisms to selectively modify variables in other scopes, but it's best to avoid them, because you're creating a tight coupling between your scripts that makes them difficult to maintain; instead, use other mechanisms to communicate information between your scripts, in the simplest form via output.
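For example, a minimal sketch of the output-based alternative, using the file names from the question:
# child.ps1 - emits the value as output instead of touching the caller's variables
10

# parent.ps1 - captures the child script's output
$count = & .\child.ps1
$count    # -> 10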
If you still want to solve your problem by modifying a variable in the parent scope, you can use the following:
# When run in child.ps1 that was invoked (normally) by parent.ps1,
# sets $count to 10 in the scope of *parent.ps1*.
Set-Variable -Scope 1 -Name count -Value 10
Scope 1 refers to the parent scope of the calling scope (2 would refer to the grandparent scope, and so on) - see Set-Variable.
Complementarily, scope modifiers $script: and $global: may be used with variable names to target variables in the (same) script scope and the global scope (for instance, $script:foo = 'bar' could be used to set a script-level variable from inside of a function defined in the same script; creating or modifying global variables should be avoided, as they linger even after a script exits) - see about_Scopes.
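For instance, a minimal sketch of $script: inside a script file (the variable and function names are illustrative):
# somescript.ps1 (illustrative)
$script:total = 0
function Add-One { $script:total++ }   # targets the script-level variable, not a new local one
Add-One; Add-One
$total                                  # -> 2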
For the sake of completeness:
Scope modifier $local: allows you to refer to the current (local) scope explicitly; it is rarely used, because the current scope is implied when you assign to a variable by mere name (e.g., $foo = 'bar' creates a local $foo variable).
However, when getting a variable, $local:foo isn't necessarily the same as just $foo: due to PowerShell's dynamic scoping, $foo returns the value of a variable in an ancestral scope, if any, if no such variable exists in the current scope; by contrast, $local:foo strictly returns the value of a variable by that name in the current scope, if defined there.
To suppress dynamic scoping for a given variable, i.e. to prevent descendant scopes from seeing it (by default), create it with the $private: scope modifier.
See this answer for more information about PowerShell's dynamic scoping and the $private: scope.
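And a quick sketch of $private: (again, illustrative names):
$private:secret = 42
function Show-Secret { $secret }   # descendant scopes cannot see a private variable
Show-Secret                        # -> nothing
$secret                            # -> 42 in the scope that created it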
Scope modifier $using: is used in the context of remoting and jobs; it doesn't actually reference a variable per se in the caller's context, but its value - see about_Remote_Variables.
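A small sketch of $using: with a background job (the variable name and message are arbitrary):
$msg = 'hello from the caller'
Start-Job { "Job received: $using:msg" } | Receive-Job -Wait -AutoRemoveJob
# -> Job received: hello from the caller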
By using the dot sourcing operator . you can run your child.ps1 script in the same scope as your parent.ps1 script. That will preserve all changes to variables that your child script makes (see: script scope and dot sourcing).
Let the content of your parent.ps1 be:
$count = $null
. .\child.ps1
$count
This will return 10.

What type of object is $<drivename>: (such as `$code:`) in PowerShell?

I was using tab autocompletion for a variable name in PowerShell 5.1 today and noticed that one of the choices was the name of a PSDrive. The drive name is docs, and the variable I wanted to expand is called $document_name. When I typed $do<tab>, the shell did indeed expand what I had typed to $document_name, but for some reason I typed <tab> a second time, and that's when the expanded text changed to $docs:.
I explored further and found that this type of variable exists for each of my PSDrives, or at least tab expansion suggests that it does.
More formally, for every PSDrive PSD, tab expansion believes that $PSD: is a valid thing.
My question is simple: what the heck are these? Here are some observations I've made so far:
These names are prefixed with $, so they look like PS variables. For the rest of this discussion (and in the earlier discussion above), I will assume they are variables and refer to them as such.
Although they appear to be variables, they are not listed in the Variable: PSDrive like most variables. In this way, it behaves like the $env "variable," which also is not listed in Variable:. I have a feeling if I could find documentation about $env, then I'd understand these objects also.
In some ways, they behave like pointers to filesystem objects. For example, if there is a file named readme.txt containing the text "Hello, world!" on a PSDrive named code, then all of the following are possible interactions with PowerShell.
Fetch the contents of the file.
λ ${code:\readme.txt}
Hello, world!
Just to prove that the type of the above result is String:
λ ${code:\readme.txt} | % { $_.GetType().Name }
String
Trying to use this as a reference to the PSDrive doesn't work well for many operations, such as cd:
C:\
λ cd ${code:}
At line:1 char:4
+ cd ${code:}
+ ~~~~~~~~
Variable reference is not valid. The variable name is missing.
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : InvalidBracedVariableReference
I could go on, but I'm stumped. If I pass $code: (or $env:, for that matter) to Get-Member, I get an error saying Variable reference is not valid.
So just what the heck are "variables" like $env and $<PSDrive>: (such as $code:)? Are they expressions? Built-in expressions? Some kind of object? Thanks for any help.
What you're seeing is namespace variable notation, which is a variable-based way to access the content of items in PowerShell drives whose underlying provider implements content-based access (i.e., implements the IContentCmdletProvider interface).
Terminology and documentation note:
As of this writing, the docs briefly explain namespace variable notation in the conceptual about_Scopes help topic, albeit without using that term and, somewhat confusingly, discussing it in the context of scope modifiers; namespace qualifiers (such as $env:) are unrelated to scope modifiers (such as $script:), but they use the same basic syntax form.[1]
The general syntax is:
${<drive>:<path>} # same as: Get-Content <drive>:<path>
${<drive>:<path>} = ... # same as: Set-Content <drive>:<path> -Value ...
The enclosing {...} aren't necessary if both the <drive> name and the <path> can syntactically serve as a variable name; e.g.:
$env:HOME # no {...} needed
${env:ProgramFiles(x86)} # {...} needed due to "(" and ")"
In practice, as of Windows PowerShell v5.1, the following in-box drive providers support namespace variable notation:
Environment (drive Env:)
Function (drive Function:)
Alias (drive Alias:)
FileSystem (drives C:, ...)
Variable (drive Variable:) - though virtually pointless, given that omitting the drive part accesses variables by default (e.g., $variable:HOME is the same as just $HOME).
Of these, the Env: drive is by far the most frequently used with namespace variable notation, even though most users aren't aware of what underlies an environment-variable reference such as $env:HOME.
On occasion you see it used with a filesystem drive - e.g., ${c:\foo\file.txt} - but the fact that you can only use literal paths and that you cannot control the character encoding limits its usefulness.
It allows interesting uses, however; e.g.:
PS> $alias:foreach # Get the definition of alias 'foreach'
ForEach-Object
PS> $function:prompt # Get the body of the 'prompt' function
"PS $($executionContext.SessionState.Path.CurrentLocation)$('>' * ($nestedPromptLevel + 1)) ";
# .Link
# https://go.microsoft.com/fwlink/?LinkID=225750
# .ExternalHelp System.Management.Automation.dll-help.xml
# Define a function foo that echoes 'hi' and invoke it.
PS> $function:foo = { 'hi' }; foo
hi
Note:
Because ${<drive>:<path>} and ${<drive>:<path>} = <value> are equivalent to Get-Content -Path <drive>:<path> and Set-Content -Path <drive>:<path> <value>, paths are interpreted as wildcard expressions (because that's what -Path does, as opposed to -LiteralPath), which can cause problems with paths that look like wildcards - see this answer for an example and a workaround.
[1] Previously, the feature wasn't documented at all; GitHub docs issue #3343 led to the current documentation, albeit not in the way that said issue proposed.
$env: exposes the Windows environment variables, the same as what you get when you run SET in a command prompt. There are a few that are PS-specific.
The variable is providing access to the Environment Provider. https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_environment_variables?view=powershell-6
There are a bunch of other Providers that are described here: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_providers?view=powershell-6
As it says in the doco:
The model for data presentation is a file system drive. To use data
that the provider exposes, you view it, move through it, and change it
as though it were data on a hard drive. Therefore, the most important
information about a provider is the name of the drive that it
supports.
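To see which providers and drives are available in a given session, you can run (output varies by system):
Get-PSProvider    # lists the installed providers and the drives each one exposes
Get-PSDrive       # lists all drives, including Env:, Alias:, Function:, Variable:, ...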

How to create a global variable without using "global:"

How can I create such a variable, behaving exactly like $null does, without any "global:" prefix, so that it is available anywhere under that plain notation, for example inside function bodies? PowerShell is 6.0.3 (Linux).
To my knowledge this is not possible, since you need to define the scope of a variable to use it globally within functions.
It might work to put the variable in a separate .ps1 file and dot-source it in the body and in every function where you want to access its data. But changes to the variable's value won't be global, so it would effectively be "read only".
How about the New-Variable cmdlet?
New-Variable -Scope Global -Name a -Value 'One'
Also, check the about_Scopes help file:
Get-Help about_Scopes
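A short sketch of how that plays out (illustrative names; the variable created in the global scope is readable inside functions by its plain name thanks to dynamic scoping):
New-Variable -Scope Global -Name a -Value 'One'
function Show-A { $a }   # reads $a via dynamic scoping; no global: prefix needed
Show-A                   # -> One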

Are PowerShell profile scripts dot-sourced?

The Microsoft.PowerShell_profile.ps1 script I am using creates a lot of variables when it runs. I have set all the variables' scope to "Script", but the variables used in the script never go out of scope.
I would like the variables to go out of scope once the script is done running and control is handed over to me.
If I compare the number of global, local, and script variables I have, I come up with the same number.
Example:
# Profile script does what it does.
Get-Variable -Scope Global | Measure-Object
Get-Variable -Scope Local | Measure-Object
Get-Variable -Scope Script | Measure-Object
Output:
60
60
60
Currently, I am capturing a snapshot of the variables at the beginning of my profile script, then removing any new variables at the end.
Example:
$snapshotBefore = Get-Variable
$profileVar1 = 'some value'
$profileVar2 = 'some other value'
$snapshotAfter = Get-Variable
# Compare before and after, and create a list of the new variables' names.
$variablesToRemove = (Compare-Object $snapshotBefore.Name $snapshotAfter.Name).InputObject
Remove-Variable $variablesToRemove
Yes, PowerShell profiles are dot-sourced by design, because that's what allows the definitions contained in them (aliases, functions, ...) to be globally available by default - which is, after all, the main purpose of profile files.
Unfortunately, there is no scope modifier that allows you to create a temporary scope for variables you only want to exist while the profile is loading - even scope local is effectively global in a profile script; similarly, using scope private is also not an option, because the profile's script scope - due to being dot-sourced - is the global scope.
Generally speaking, you can use & (the call operator) with a script block to create variables inside that block that are scoped to that block, but that is usually at odds with creating globally available definitions in a profile, at least by default.
Similarly, calling another script without dot-sourcing it, as in your own answer, will not make its definitions globally available by default.
You can, however, create global elements from non-dot-sourced script blocks / script by specifying the global scope explicitly; e.g.: & { $global:foo = 'Going global' }, or & { function global:bar { 'global func' } }.
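A brief sketch of both behaviors (variable names are arbitrary):
& {
    $inner = 'only visible inside this block'
    $global:fromBlock = 'visible for the rest of the session'
}
$inner       # -> nothing; the block-local variable is gone
$fromBlock   # -> visible for the rest of the session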
That said, the rationale behind dot-sourcing profiles is likely that it's easier to make all definitions global by default, making the definition of typical elements of a profile - aliases, functions, drive mappings, loading of modules - simpler (no need to specify an explicit scope).
By contrast, global variables are less typical, and to define the typical elements listed above you don't usually need script-level (and thus global) variables in your profile.
If you still need to create (conceptually) temporary variables in your profile (which is not a requirement for creating globally available aliases, functions, ...):
A simple workaround is to use an exotic variable name prefix such as __ inside the profile script to reduce the risk of the variables getting referenced by accident (e.g., $__profileVar1 = ...).
In other words: the variables still exist globally, but their exotic names will typically not cause problems.
However, your approach, even though it requires a little extra work, sounds like a robust workaround; here's what it looks like in full (using PSv3+ syntax):
# Save a snapshot of current variables.
# * If there are variables that you DO want to exist globally,
# define them ABOVE this command.
# * Also, load MODULES and dot-source OTHER SCRIPTS ABOVE this command,
# because they may create variables that *should* be available globally.
$varsBefore = (Get-Variable).Name
# ... define and use temporary variables
# Remove all variables that were created since the
# snapshot was taken, including $varsBefore.
Remove-Variable (Compare-Object $varsBefore (Get-Variable).Name).InputObject
Note that I'm relying on Compare-Object's default behavior of only reporting differences between objects and, assuming you haven't tried to remove any variables, only the variables added are reported.
Note that while it can be inferred from the actual behavior of profile files that they are indeed dot-sourced - given that dot-sourcing is the only way to add elements to the current scope (the global scope, in the case of profiles) - this fact is not explicitly documented as such.
Here are snippets from various help topics (as of PSv5) that provide clues (emphasis mine):
From Get-Help about_Profiles:
A Windows PowerShell profile is a script that runs when Windows PowerShell
starts. You can use the profile as a logon script to customize the
environment. You can add commands, aliases, functions, variables, snap-ins,
modules, and Windows PowerShell drives. You can also add other
session-specific elements to your profile so they are available in every
session without having to import or re-create them.
From Get-Help about_Variables:
By default, variables are available only in the scope in which
they are created.
For example, a variable that you create in a function is
available only within the function. A variable that you
create in a script is available only within the script (unless
you dot-source the script, which adds it to the current scope).
From Get-Help about_Operators:
. Dot sourcing operator
Runs a script in the current scope so that any functions,
aliases, and variables that the script creates are added to the current
scope.
From Get-Help about_Scopes:
But, you can add a script or function to the current scope by using dot
source notation. Then, when a script runs in the current scope, any
functions, aliases, and variables that the script creates are available
in the current scope.
To add a function to the current scope, type a dot (.) and a space before
the path and name of the function in the function call.
So it does sound like PowerShell dot-sources the profile. I couldn't find a resource that specifically says so, or other forums that have asked this question.
I have found an answer, and wanted to post it here.
I have changed my profile to only call a script file. The script now has its own scope, and as long as the variables aren't made global, they will go out-of-scope once the profile finishes loading.
So now my profile has one-line:
& (Split-Path $profile -Parent | Join-Path -ChildPath 'Microsoft.PowerShell_profile_v2.ps1')
Microsoft.PowerShell_profile_v2.ps1 can now contain proper scope:
$Global:myGlobalVar = "A variable that will be available during the current session"
$Script:myVar = "A variable that will disappear after script finishes."
$myVar2 = "Another variable that will disappear after script finishes."
What this allows, is for the profile script to import modules that contain global variables. These variables will continue to exist during the current session.
I would still be curious why Microsoft decided to call the profile in this way. If anyone knows and would like to share, I would love to see the answer here.