Why is PSModulePath not a PowerShell variable?

I keep looking at this, and I cannot for the life of me understand why PSModulePath is an environment variable. Is there any rational reason for this?
To explain further, $profile is a PowerShell variable. You can invoke it simply by typing $profile. This makes sense. However, PSModulePath has absolutely no relevance or function outside of PowerShell, and yet, by some unintelligible decision, it has been made an environment variable; i.e., it cannot be viewed as $PSModulePath, and can only be viewed by doing echo $env:PSModulePath. Making it even more irrational is that there is a $PSModuleAutoLoadingPreference, which again is a PowerShell variable and not an environment variable ...
The PSModulePath variable is 100% only for PowerShell. Can anyone explain a rationale for this extremely odd setup?

Because $env:PSModulePath is required before the PowerShell session is loaded. I believe (and I could be wrong on the "why" here) this is because several "built-in" cmdlets are defined within modules themselves, and those need to be loaded on startup. Notably, you'll see this if you look at a specific module directory defined in $env:PSModulePath, namely the system module path:
C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules
Generally you don't want to install your modules here; this directory should remain "clean", as in, let Microsoft manage the modules in it.
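As a quick aside, here is a one-liner sketch for inspecting that module path, splitting the value on the platform's path separator:
# List each directory PowerShell searches for modules, one per line:
$env:PSModulePath -split [IO.Path]::PathSeparator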

Related

Limitations of the Target field in Windows LNKs

I am attempting to use a shortcut to a PowerShell script, which has to be run elevated, and which has some arguments that can be somewhat long. The script itself may also live in a rather buried folder, since architects really like to have massive folder structures. The issue is that I am running out of room, and Microsoft using 57 characters for C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe is not helping.
I also can't use the Start in: field to provide the path to the script. My guess is I am just hosed by Microsoft's limitations in shortcuts, but I wonder if someone knows some secret sauce for doing something like, maybe, forcing the shortcut to just use powershell.exe rather than the full path?
Or is my only option to get as terse as I possibly can with my arguments?
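One way to sidestep the length limit, sketched below purely as an illustration (C:\Tools, the buried path, and the parameter names are all hypothetical), is to point the shortcut at a small wrapper script kept at a short path and move the long path and arguments into it; elevation can still be requested via the shortcut's "Run as administrator" option:
# C:\Tools\run-task.ps1 -- hypothetical wrapper kept at a deliberately short path.
# The shortcut's Target then stays short, e.g.:
#   C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File C:\Tools\run-task.ps1
# while the deeply buried script path and long arguments live here instead:
& 'C:\Projects\Architecture\Deeply\Nested\Folders\script.ps1' -FirstLongParameter 'value1' -SecondLongParameter 'value2'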

How to make parameter name suggestions work?

I'm trying to create a PowerShell module to store some reusable utility functions. I created the script module PsUtils.psm1 and the script module manifest PsUtils.psd1 (used these docs). My problem is that when I import this module in another script, VS Code does not suggest parameter names. Here's a screenshot:
When I hover the cursor over the function, I only get this:
PsUtils.psm1
function Get-Filelist {
    Param(
        [Parameter(Mandatory = $true)]
        [string[]]
        $DirectoryPath
    )
    Write-Host "DIR PATH: $DirectoryPath"
}
PsUtils.psd1 (excerpt)
...
FunctionsToExport = '*'
I have the PowerShell extension installed. Do I need to install anything else to make the suggestions work? What am I missing?
Generally speaking, only auto-loading modules - i.e., those in one of the directories listed in environment variable $env:PSModulePath - are automatically discovered.
As of v2022.7.2 of the PowerShell extension, the underlying PowerShell editor services make no attempt to infer from the current source-code file what modules in nonstandard directories are being imported via source code in that file, whether via an Import-Module call or a using module statement.
Doing so would be the prerequisite for discovering the commands exported by the modules being imported.
Doing so robustly sounds virtually impossible to do with the static analysis that the editor services are limited to performing, although it could work in simple cases; if I were to guess, such a feature request wouldn't be entertained, but you can always ask.
Workarounds:
Once you have imported a given module from a nonstandard location into the current session - either manually via the PIC (PowerShell Integrated Console) or by running your script (assuming the Import-Module call succeeds) - the editor will provide IntelliSense for its exported commands from that point on, so your options are (use one of them):
Run your script in the debugger at least once before you start editing. You can place a breakpoint right after the Import-Module call and abort the run afterwards - the only prerequisite is that the file must be syntactically valid.
Run your Import-Module command manually in the PIC, replacing $PSScriptRoot with your script file's directory path (see the sketch after this list).
Note: It is tempting to place the cursor on the Import-Module line in the script in order to use F8 to run just this statement, but, as of v2022.7.2, this won't work in your case, because $PSScriptRoot is only valid in the context of running an entire script.
GitHub issue #633 suggests adding special support for $PSScriptRoot; while the proposal has been green-lighted, no one has stepped up to implement it since.
(Temporarily) modify the $env:PSModulePath variable to include the path of your script file's directory.
The most convenient way to do that is via the $PROFILE file that is specific to the PowerShell extension, which you can open for editing with psedit $PROFILE from the PIC.
Note: Make sure that profile loading is enabled in the PowerShell extension's settings.
E.g., if your directory path is /path/to/my/module, add the following:
$env:PSModulePath += "$([IO.Path]::PathSeparator)/path/to/my/module"
The caveat is that all scripts and code run in the PIC will see this updated $env:PSModulePath value, so, at least hypothetically, other code could end up mistakenly importing your module instead of one expected to be in the standard locations.
Note that GitHub issue #880 is an (old) proposal to allow specifying $env:PSModulePath entries as part of the PowerShell extension settings instead.
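For concreteness, here are hedged sketches of both workarounds; the module directory below is a placeholder, and PsUtils is the module name from the question:
# Workaround 1: in the PIC, import the module manually, substituting your
# script's actual directory for $PSScriptRoot:
Import-Module /path/to/my/module/PsUtils.psd1 -Force
# Workaround 2: after extending $env:PSModulePath (e.g., in the extension-specific
# $PROFILE, as shown above), confirm that the module is now discoverable:
Get-Module -ListAvailable PsUtils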
On a somewhat related note:
Even when a module is auto-discovered / has been imported, IntelliSense only covers its exported commands, whereas while you're developing that module you'd also like to see its private commands. Overcoming this limitation is the subject of GitHub issue #104.

Possible to use commands provided by a recently installed program without restarting cmd/powershell?

I'm working on a script that automatically installs software. One of the programs to be installed includes its own command line commands and sub-commands when installed.
The goal is to use the program's provided commands to perform an action after its installation.
But running the command right after the program's installation, I'm greeted by:
" is not recognized as an internal or external command, operable program or batch file"
If I open a new PowerShell or cmd window, the command is available in that instance.
What is the easiest way to grant the script access to the commands?
Bender the Greatest's helpful answer explains the problem and shows you how to modify the $env:PATH variable in-session by manually appending a new directory path.
While that is a pragmatic solution, it requires that you know the specific directory path of the recently installed program.
If you don't - or you just want a generic solution that doesn't require you to hard-code paths - you can refresh the value of $env:PATH (the PATH environment variable) from the registry, via the [Environment]::GetEnvironmentVariable() .NET API method:
$env:PATH = [Environment]::GetEnvironmentVariable('Path', 'Machine'),
            [Environment]::GetEnvironmentVariable('Path', 'User') -join ';'
This updates $env:PATH in-session to the same value that future sessions will see.
Note how the machine-level value (list of directories) takes precedence over the user-level one, due to coming first in the composite value.
Note:
If you happen to have made in-session-only $env:PATH modifications before calling the above, these modifications are lost.
If applicable, this includes modifications made by your $PROFILE file.
Hypothetically, other processes could have made additional modifications to the persistent Path variable definitions as well since your session started, which the call above will pick up too (as will future sessions).
This is because the PATH environment variable gets updated in the registry, but existing processes don't see that update unless they specifically query the registry for the live value of the PATH environment variable and then update PATH within their own process. If you need to continue in the same process, the workaround is to add the installation location to the PATH variable yourself after the program has been installed:
Note: In most cases, I don't recommend refreshing the live value from the registry instead of the approach below. Other processes can modify that value, not just your own, which can introduce unnecessary risk, whereas appending only what you know should have changed is a more pragmatic approach. In addition, it adds code complexity for a case that often doesn't need to be generalized to that point.
# This will update the PATH variable for the current process
$env:PATH += ";C:\Path\To\New\Program\Folder"
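As a hedged end-to-end sketch (the installer path, install directory, and CLI name are all placeholders, not from the original question):
# Hypothetical: silently install a program, then append its (known)
# install directory so its CLI resolves in this same session.
Start-Process -Wait -FilePath msiexec.exe -ArgumentList '/i', 'C:\Temp\newprogram.msi', '/qn'
$env:PATH += ';C:\Program Files\NewProgram\bin'
newprogram --version   # hypothetical command; now resolvable without opening a new shell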

PowerShell GetEnvironmentVariable vs $env

I have run into a couple of cases where I am trying to use a command via the command line, but the command is not recognized. I have narrowed it down to an issue with environment variables. In each case, the variable is present when I retrieve it with the underlying C# method, but not with the shorthand, $env:myVariable.
For example, if I retrieve the variable like this, I will get a value.
[Environment]::GetEnvironmentVariable('ChocolateyInstall', 'Machine')
But, if I retrieve the variable like this, nothing is returned
$env:ChocolateyInstall
I then have to do something like this to get my command to work.
$env:ChocolateyInstall = [Environment]::GetEnvironmentVariable('ChocolateyInstall', 'Machine')
I have not been able to find a good explanation as to why I have to do this. I've looked at this documentation, but nothing stands out to me. Ideally, I would like to install a CLI and then not have to deal with checking for and assigning environment variables for the command to work.
When opening a PowerShell session, all permanently stored environment variables¹ are loaded into the Environment drive (Env:) of the current session (source):
The Environment drive is a flat namespace containing the environment variables specific to the current user's session.
The documentation you linked states:
When you change environment variables in PowerShell, the change affects only the current session. This behavior resembles the behavior of the Set command in the Windows Command Shell and the Setenv command in UNIX-based environments. To change values in the Machine or User scopes, you must use the methods of the System.Environment class.
So defining/changing an environment variable like this:
$env:ChocolateyInstall = [Environment]::GetEnvironmentVariable('ChocolateyInstall', 'Machine')
will change it for the current session, taking effect immediately, but it will also only be valid for the current session.
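A quick illustration (DemoVar is a hypothetical variable name used only for this sketch):
$env:DemoVar = 'hello'   # takes effect immediately...
$env:DemoVar             # -> hello
# ...but is gone in any newly started, independent PowerShell session.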
The methods of [System.Environment] are more fine-grained. There you can choose which environment variable scope to address. There are three scopes available:
Machine
User
Process
The Process scope is equivalent to the Environment drive and covers the environment variables available in your current session. The Machine and the User scope address the permanently stored environment variables¹. You can get variables from a particular scope like this:
[Environment]::GetEnvironmentVariable('ChocolateyInstall', 'Machine')
And set them with:
[Environment]::SetEnvironmentVariable('ChocolateyInstall', 'any/path/to/somewhere', 'Machine')
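For instance, with the ChocolateyInstall variable from the question, the following two reads target the same per-process value, illustrating that the Process scope and the Env: drive are equivalent:
[Environment]::GetEnvironmentVariable('ChocolateyInstall', 'Process')
$env:ChocolateyInstall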
If you want to have new variables from the Machine or User scope available in your current PowerShell session, you have to open a new one. But don't open that new PowerShell session from your current one, as it will then inherit all environment variables from the current session (source):
Environment variables, unlike other types of variables in PowerShell, are inherited by child processes, such as local background jobs and the sessions in which module members run. This makes environment variables well suited to storing values that are needed in both parent and child processes.
So, to address the problem you described: you most probably changed your permanently stored environment variables¹ while already having an open PowerShell session. If so, you just need to open a new (really new, see above) session, and you will be able to access your environment variables via the Environment drive. To be clear, opening a new session will even reload environment variables of the Machine scope; no reboot is required.
¹ These are the environment variables you see in the GUI when you go to the System Control Panel, select Advanced System Settings, and, on the Advanced tab, click Environment Variables. Those variables cover the User and the Machine scope.
Alternatively, you can open this GUI directly by executing:
rundll32 sysdm.cpl,EditEnvironmentVariables

PowerShell Module Manifest "ScriptsToProcess"

I have a PowerShell module I am writing, and one thing that is a bit unclear to me is the ScriptsToProcess key. Can or should I use it to verify things like the OS type, bitness, or the presence or absence of certain environment variables, and to throw warnings if some of my functions rely on those requirements?
You can use ScriptsToProcess to execute a script in the caller's session state (to create variables, etc.) or to use Write-Warning to notify the user that the prereqs haven't been met. However, even if you throw an error, the module is still loaded. So if your intent is to prevent loading of the module, I would put the warnings/throws in the startup script of your ModuleToProcess/RootModule. The module will show up in the Get-Module list, but there shouldn't be any exported commands. Also, if you want to check bitness, use the module manifest's ProcessorArchitecture key.
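A minimal sketch of that suggestion (the specific checks and messages are placeholder assumptions, not from the answer): place guard code at the top of the root module (.psm1) named by RootModule, so that a terminating error stops the import before any commands are exported:
# Hypothetical prerequisite guards at the top of the RootModule (.psm1).
# Per the answer above, throwing here - rather than in ScriptsToProcess -
# prevents the module's commands from being exported.
if (-not [Environment]::Is64BitProcess) {
    throw 'This module requires a 64-bit PowerShell process.'
}
if (-not $env:ChocolateyInstall) {
    Write-Warning 'ChocolateyInstall is not set; some functions may not work as expected.'
}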