How to refresh the environment of a PowerShell session after a Chocolatey install without needing to open a new session - powershell

I am writing an automated script for cloning GitHub source code to the local machine.
The script fails after installing Git, because Git is only available after closing and reopening PowerShell.
So I am not able to clone the code automatically right after installing Git.
Here is my code:
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
choco install -y git
refreshenv
Start-Sleep -Seconds 15
git clone --mirror https://${username}:${password}#$hostname/${username}/$Projectname.git D:\GitTemp -q 2>&1 | %{ "$_" }
Error:
git : The term 'git' is not recognized as the name of a cmdlet,
function, script file, or operable program.
Check the spelling of the name, or if a path was included,
verify that the path is correct and try again.
Please let me know what I should do to refresh the PowerShell session without exiting it.

You have a bootstrapping problem:
refreshenv (an alias for Update-SessionEnvironment) is generally the right command to use to update the current session with environment-variable changes after a choco install ... command.
However, immediately after installing Chocolatey itself, refreshenv / Update-SessionEnvironment are only available in future PowerShell sessions, because these commands are loaded via code that the Chocolatey installer adds to your $PROFILE file, based on the environment variable $env:ChocolateyInstall.
That said, you should be able to emulate what Chocolatey does when $PROFILE is sourced in future sessions in order to be able to use refreshenv / Update-SessionEnvironment right away, immediately after installing Chocolatey:
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
choco install -y git
# Make `refreshenv` available right away, by defining the $env:ChocolateyInstall
# variable and importing the Chocolatey profile module.
# Note: Using `. $PROFILE` instead *may* work, but isn't guaranteed to.
$env:ChocolateyInstall = Convert-Path "$((Get-Command choco).Path)\..\.."
Import-Module "$env:ChocolateyInstall\helpers\chocolateyProfile.psm1"
# refreshenv is now an alias for Update-SessionEnvironment
# (rather than invoking refreshenv.cmd, the *batch file* for use with cmd.exe)
# This should make git.exe accessible via the refreshed $env:PATH, so that it
# can be called by name only.
refreshenv
# Verify that git can be called.
git --version
Note: The original solution used . $PROFILE instead of Import-Module ... to load the Chocolatey profile, relying on Chocolatey to have updated $PROFILE already at that point. However, ferventcoder points out that this updating of $PROFILE doesn't always happen, so it cannot be relied upon.
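If you want to be defensive about where Chocolatey ended up, something along the following lines may help; this is only a sketch, and the fallback to $env:ProgramData\chocolatey is an assumption about the default install location:
# Locate the Chocolatey install directory, falling back to the assumed default
# location if $env:ChocolateyInstall isn't set yet.
$chocoInstall = $env:ChocolateyInstall
if (-not $chocoInstall) {
    $chocoCmd = Get-Command choco -ErrorAction SilentlyContinue
    if ($chocoCmd) {
        $chocoInstall = Convert-Path "$($chocoCmd.Path)\..\.."
    } else {
        $chocoInstall = Join-Path $env:ProgramData 'chocolatey'
    }
}
# Only import the profile module if it actually exists.
$profileModule = Join-Path $chocoInstall 'helpers\chocolateyProfile.psm1'
if (Test-Path $profileModule) {
    Import-Module $profileModule
    refreshenv
} else {
    Write-Warning "Chocolatey profile module not found at $profileModule"
}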

NEW:
The old approach I originally answered with has a few quirks with environment variables that use a ; delimiter. I tried to compensate by handling PATH separately, but sometimes there are other such variables.
So here is the new approach; you might want to put it in a script or function. It has to briefly nest a couple of PowerShell processes, which isn't ideal, but it's the only reliable way I've found to escape the active environment and capture the output.
# Call a PowerShell process to act as a wrapper to capture the output:
& ([Diagnostics.Process]::GetCurrentProcess().ProcessName) -NoP -c (
    # String wrapper to help make the code more readable through comma-separation:
    [String]::Join(' ', (
        # Start a process that escapes the active environment:
        'Start-Process', [Diagnostics.Process]::GetCurrentProcess().ProcessName,
        '-UseNewEnvironment -NoNewWindow -Wait -Args ''-c'',',
        # List the environment variables, separated by a tab character:
        '''Get-ChildItem env: | &{process{ $_.Key + [char]9 + $_.Value }}'''
    ))) | &{process{
        # Set each line of output to a process-scoped environment variable:
        [Environment]::SetEnvironmentVariable(
            $_.Split("`t")[0], # Key
            $_.Split("`t")[1], # Value
            'Process'          # Scope
        )
    }}
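Once this has run, anything a Chocolatey package added to the machine- or user-level PATH should resolve in the current session; a quick check, assuming git was just installed:
# Confirm that the refreshed environment picks up the newly installed tool.
Get-Command git -ErrorAction SilentlyContinue
git --version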
OLD:
Did my best to try and make it a one-liner, but since the PATH variable needs special handling, I couldn't do it without making a stupidly long line. On the plus side, you don't need to rely on any third-party modules:
foreach ($s in 'Machine','User') {
    [Environment]::GetEnvironmentVariables($s).GetEnumerator().
        Where({$_.Key -ne 'PATH'}) | ForEach-Object {
            [Environment]::SetEnvironmentVariable($_.Key, $_.Value, 'Process') }}
$env:PATH = ( ('Machine','User').ForEach({
    [Environment]::GetEnvironmentVariable('PATH', $_)}).
    Split(';').Where({$_}) | Select-Object -Unique ) -join ';'
The code is scoped to the process, so you don't have to worry about anything getting messed up (not that it would, I tested it).
Note: Neither approach removes uniquely-named environment variables that were created in your active environment, so if you defined $env:FOO = 'bar' and 'FOO' is not normally one of your environment variables, $env:FOO will still return bar even after any of the above code is run.
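For example, a quick way to see that behaviour (a minimal sketch; FOO is just a hypothetical variable name):
# FOO exists only in this process, not in the Machine or User registry scopes.
$env:FOO = 'bar'
# ... run either refresh approach above ...
$env:FOO    # still prints 'bar'; nothing removed the process-only variable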

You can try and use Update-SessionEnvironment:
Updates the environment variables of the current powershell session with
any environment variable changes that may have occured during a Chocolatey package install.
That will tell you whether the environment change is actually effective right after the Chocolatey call.
If not, one easy workaround is to call git by its absolute path.
To call Git from Powershell:
new-item -path alias:git -value 'C:\Program Files\Git\bin\git.exe'
Then you can try:
git clone --mirror https://${username}:${password}#$hostname/${username}/$Projectname.git D:\GitTemp -q 2>&1 | %{ "$_" }

I go with the lo-tech solution:
$env:Path += ";C:\Program Files\Git\bin"

Related

Restart environment and script during batch script

I've built a few FFmpeg powershell scripts for me and a few others to use and I'm attempting to make the setup and update process as easy as possible. The end goal is to be able to run 1 batch file that installs Chocolatey, FFmpeg, git, clones the github repo (for updates), and edits the Windows registry to add the actual FFmpeg powershell scripts / console programs to the Windows Explorer contextual menu. This way I just pass them the folder containing everything once and any time I change or add something to the project I can just tell them to run the batch file again, and presto everything is up to date.
However, I'm struggling to find a way to install Chocolatey, then Git with Chocolatey, and then run a Git command, all from a single .bat file execution. From what I can tell, after installing Chocolatey I need to restart the shell entirely before I can install Git, and then I have to restart the shell again before I can use a Git command. As of right now most of the actual processing happens via PowerShell scripts that are launched from the .bat file, and as each step is taken I update a txt file, attempt to restart the batch script, and read the txt file to pick up where I left off:
@echo off
echo Administrative permissions required. Detecting permissions...
echo.
net session >nul 2>&1
if %errorLevel% == 0 (
echo Success: Administrative permissions confirmed.
echo.
) else (
echo Failure: Current permissions inadequate.
PAUSE
exit
)
set relativePath=%~dp0
set relativePath=%relativePath:~0,-1%
PowerShell -NoProfile -ExecutionPolicy Bypass -File "%relativePath%\Setup\CheckRequiredPackages.ps1" -relativePath "%relativePath%"
set /p step=<"%relativePath%\Setup\Step.txt"
if %step% == 1 (
(echo 2) > "%relativePath%\Setup\Step.txt"
PowerShell -NoProfile -ExecutionPolicy Bypass -File "%relativePath%\Setup\GetChocolatey.ps1"
start "" "%relativePath%\RunMe.bat"
exit
)
if %step% == 2 (
(echo 3) > "%relativePath%\Setup\Step.txt"
PowerShell -NoProfile -ExecutionPolicy Bypass -File "%relativePath%\Setup\GetRequiredPackages.ps1"
start "" "%relativePath%\RunMe.bat"
exit
)
if %step% == 3 (
(echo 0) > "%relativePath%\Setup\Step.txt"
PowerShell -NoProfile -ExecutionPolicy Bypass -File "%relativePath%\Setup\Update.ps1" -relativePath "%relativePath%"
)
PAUSE
Exit
The problem is that using the start command in the batch script doesn't seem to work; I'm guessing that since the new process is spawned from the same process that handles the Chocolatey install, it doesn't count as actually restarting the shell. Is there any way to actually restart the shell and somehow have the batch file start back up without user intervention?
I'm not sure why I didn't initially think of reloading the Path environment variable, but that's a whole lot more reasonable than restarting the script 4 times with an intermediary file.
First, I moved 99% of the heavy lifting from the .bat file to a PowerShell script, as the only reason I'm using batch at all is so the user can easily run the file by clicking it in Explorer. I couldn't get Chocolatey's RefreshEnv to work, but running this between each new package install worked great:
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
So I have something like this now, and the batch script just launches this PowerShell script:
Write-Host "Installing / updating required packages..."
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol =
[System.Net.ServicePointManager]::SecurityProtocol -bor 3072; Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
choco install ffmpeg -y
choco install git -y
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
Write-Host "Deleting old files..."
Remove-Item -LiteralPath $relativePath -Force -Recurse
Start-Sleep 2
Write-Host "`nUpdating Files..."
git clone https://github.com/TheNimble1/FFmpegContextCommands.git $relativePath
Which installs Chocolatey, refreshes the path, installs FFmpeg and Git, refreshes the path again, deletes the old files, and then clones the repo to replace them with new files.
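Since the PATH-refresh line is repeated, it can also be pulled into a small helper; a minimal sketch (the function name Update-Path is just my own choice, not part of Chocolatey):
# Rebuild the process-level PATH from the Machine and User scopes.
function Update-Path {
    $env:Path = [System.Environment]::GetEnvironmentVariable("Path", "Machine") + ";" +
                [System.Environment]::GetEnvironmentVariable("Path", "User")
}
# Usage: call it after each install that modifies PATH.
choco install git -y
Update-Path
git --version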
Indeed, a start-launched process inherits the calling process' environment rather than reading possibly updated environment-variable definitions from the registry.
Chocolatey comes with the batch file RefreshEnv.cmd (located at C:\ProgramData\chocolatey\bin\RefreshEnv.cmd; C:\ProgramData\chocolatey\bin should be in %PATH%), specifically to avoid having to start a new, independent session for environment updates to take effect.
Therefore, something like the following may work:
:: Assumes that Chocolatey was just installed to the default location.
call "%ProgramData%\chocolatey\bin\RefreshEnv.cmd"
:: If Chocolatey was *previously* installed and its installation directory
:: has already been added to %Path%, the following will do:
:: call RefreshEnv.cmd
call "%relativePath%\RunMe.bat"
Since Chocolatey is only being installed during your script's execution and its binaries folder is therefore not yet in %Path%, you'll have to call RefreshEnv.cmd by its full path, as shown above - which assumes the default install directory.
Your own answer now shows how to refresh the $env:Path (%Path%) environment variable using .NET methods directly from PowerShell, which is a pragmatic solution.
Note, however, that RefreshEnv.cmd is more comprehensive in that it reloads all environment-variable definitions, including potentially newly added and modified ones.
Note that calling RefreshEnv.cmd from PowerShell does not work, because it then runs out of process (which means that it cannot update the calling process' environment).
However, Chocolatey offers an Update-SessionEnvironment PowerShell command (aliased to refreshenv), which you can make available immediately after a Chocolatey install as follows:
# Import the module that defines Update-SessionEnvironment aka refreshenv
Import-Module "$env:ProgramData\Chocolatey\helpers\chocolateyProfile.psm1"
# Refresh all environment variables.
Update-SessionEnvironment # or: refreshenv
See this answer for a more robust approach that doesn't rely on assuming the default install location.

How to troubleshoot the error [The term 'pwsh.exe' is not recognized as the name of a cmdlet, function, script file, or operable program]?

While creating a new pipeline on Azure DevOps to set up a CI for a .NET project, I set up the following PowerShell script to automate the .NET Core setup.
Here is the script:
$ErrorActionPreference="Stop"
$ProgressPreference="SilentlyContinue"
# $LocalDotnet is the path to the locally-installed SDK to ensure the
# correct version of the tools are executed.
$LocalDotnet=""
# $InstallDir and $CliVersion variables can come from options to the
# script.
$InstallDir = "./cli-tools"
$CliVersion = "1.0.1"
# Test the path provided by $InstallDir to confirm it exists. If it
# does, it's removed. This is not strictly required, but it's a
# good way to reset the environment.
if (Test-Path $InstallDir)
{
    rm -Recurse $InstallDir
}
New-Item -Type "directory" -Path $InstallDir
Write-Host "Downloading the CLI installer..."
# Use the Invoke-WebRequest PowerShell cmdlet to obtain the
# installation script and save it into the installation directory.
Invoke-WebRequest `
    -Uri "https://dot.net/v1/dotnet-install.ps1" `
    -OutFile "$InstallDir/dotnet-install.ps1"
Write-Host "Installing the CLI requested version ($CliVersion) ..."
# Install the SDK of the version specified in $CliVersion into the
# specified location ($InstallDir).
& $InstallDir/dotnet-install.ps1 -Version $CliVersion `
    -InstallDir $InstallDir
Write-Host "Downloading and installation of the SDK is complete."
# $LocalDotnet holds the path to dotnet.exe for future use by the
# script.
$LocalDotnet = "$InstallDir/dotnet"
When I try to run the build, I get the following error (screenshots omitted): the term 'pwsh.exe' is not recognized as the name of a cmdlet, function, script file, or operable program.
I've already searched on Google for people who have the same problem and how to fix it. But I haven't found much information yet. The Azure DevOps forum doesn't help either.
As mentioned in the comment above, all you have to do is install the appropriate version of PowerShell on the machine that the agent is running on, for example PowerShell 7. Then you have to make sure that the PATH environment variable points to the directory containing PowerShell Core.
Windows
Just install PowerShell Core with the Windows installer (the .msi file from the PowerShell GitHub repository). In this case, the PATH environment variable is set or extended automatically so that it contains the path to the directory with pwsh.exe.
Linux
Install the PowerShell Core package that is supported by your distribution. Make sure that your PATH variable (for example, as set in your ~/.bashrc) contains the path to the directory with PowerShell Core.
Note: If the Azure agent is already running, you have to restart it so that it sees the changes to the PATH variable. On Windows, restart the agent if it runs interactively, or restart the service if it runs as a service. On Linux, you can follow this guide to update the environment variables that were passed to the agent.
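On Windows, restarting the agent service from PowerShell might look like this; only a sketch, and the 'vstsagent*' service-name pattern is an assumption about a default agent installation:
# Restart the Azure DevOps agent service so it picks up the updated PATH (run elevated).
# The 'vstsagent*' name pattern is assumed; adjust it to match your agent's service name.
Get-Service vstsagent* | Restart-Service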
I know you have already configured your script as a PowerShell Core script, but for completeness I'll add this: if you use a PowerShell task in your Azure pipeline, the Core version of PowerShell is not used for it by default. In order to run the task with PowerShell Core, add this to the YAML code of the task: pwsh: true. Otherwise, if you are still using the old graphical interface, check the "Use PowerShell Core" checkbox under the "Advanced" heading for the task.

wget not found by PowerShell script?

I have an old notebook with Windows 7 64-bit that executes a PowerShell script perfectly every Sunday. Unfortunately it starts to crash as soon as the load increases, so I decided to get a new PC. On this PC I previously installed Windows 10 Pro 64-bit, and even there the script was executed every Sunday. Due to Microsoft's update policy I removed Windows 10 from the new PC and installed Windows 7 64-bit. But now the same script crashes because it does not find wget:
$wg = Start-Process wget.exe -wait -NoNewWindow -PassThru -ArgumentList $argList
GNU Wget is installed correctly (I think). It is located at:
C:\Program Files (x86)\GnuWin32\bin\wget.exe
It is even entered in the registry under HKEY_LOCAL_MACHINE → SOFTWARE → Wow6432Node → GnuWin32|Wget|1.11.4-1|setup|InstallPath: C:\Program Files (x86)\GnuWin32.
But despite this, if I open the CMD console and enter wget (or wget.exe), I get:
The command "wget" is either misspelled or could not be found.
What do I have to do so that PowerShell finds wget reliably, even after a restart of the PC? Even Notepad++, for example, cannot be found by the CMD console despite being installed properly(?). What's wrong here?
If you want to be able to run a command without specifying its path you need to add the directory it resides in to the PATH environment variable. The install path in the SOFTWARE branch of the registry has nothing to do with it.
To add a directory to the PATH for the current and all future sessions you need to do something like this:
$dir = "${env:ProgramFiles(x86)}\GnuWin32\bin"
# set PATH environment variable for current session
$env:Path += ";${dir}"
# set PATH environment variable for future sessions
$path = [Environment]::GetEnvironmentVariable('PATH', 'Machine')
$path += ";{$dir}"
[Environment]::SetEnvironmentVariable('PATH', $path, 'Machine')
Note, however, that the second step (setting the variable for future sessions) only works correctly if there are no Windows environment variables (%something%) used in $path, because the method saves the value as a REG_SZ in the registry. Windows only expands environment variables in the PATH variable if it's stored as a REG_EXPAND_SZ value.
If you do have regular Windows environment variables somewhere in $path you must manually write the value to the registry with the correct type.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment'
Set-ItemProperty -Path $key -Name 'Path' -Value $path -Type ExpandString
Addendum:
All of the above applies only if you want to do this programmatically, of course. For a manual approach you can always edit the environment variables via the GUI and restart PowerShell.

Setting environment variables with batch file launched by PowerShell script

I have a batch script called SET_ENV.bat which contains environment variables that are used by other batch scripts. Currently this SET_ENV.bat is launched by existing batch scripts.
Now I need to use a PowerShell script and I would like to launch the same SET_ENV.bat from it. I managed to do this using:
cmd.exe /c ..\..\SET_ENV.bat
I know that the batch file was run because it contained an echo
echo *** Set the environment variables for the processes ***
But after looking at the environment variables, I can see that none of them have been updated. Is there something that is preventing me from updating environment variables with the PowerShell + batch file combo?
I have tried running SET_ENV.bat directly from the command line and it works. I have also tried the Start-Process cmdlet with "-Verb runAs", but that didn't do any good.
Launching PowerShell again at the end of the batch commands will keep every environment variable set so far.
My use case was: set up the Anaconda environment, set up the MSVC environment, continue with that. The problem is that both Anaconda and MSVC have a separate batch script that initialises the environment.
The following command starting from PowerShell will:
initialise Anaconda
initialise MSVC
re-launch PowerShell
cmd.exe "/K" '%USERPROFILE%\apps\anaconda3\Scripts\activate.bat %USERPROFILE%\apps\anaconda3 && "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat" && powershell'
Just swap the paths with what you need. Note that if a path contains spaces, it needs to be inside double quotes ".
Breaking down the call above:
cmd.exe "/K": call cmd and do not exit after the commands finish executing /K
The rest is the full command, it is wrapped in single quotes '.
%USERPROFILE%\apps\anaconda3\Scripts\activate.bat %USERPROFILE%\apps\anaconda3: calls activate.bat with parameter ...\anaconda3
&& "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat": && and if the previous command didn't fail, run the MSVC vars setup file. This is wrapped in " as it has spaces in it.
&& powershell: finally run PowerShell. This will now contain all environment variables from the ones above.
Just adding a better way of doing the aforementioned setup: using Anaconda's PowerShell init script to actually get it to display the environment name in the prompt. I won't break this down as it's just a modified version of the command above.
cmd.exe "/K" '"C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat" && powershell -noexit -command "& ''~\apps\anaconda3\shell\condabin\conda-hook.ps1'' ; conda activate ''~\apps\anaconda3'' "'
Note that the single quotes in the powershell call are all doubled up to escape them.
Environment variables are local to a process and get inherited (by default at least) to new child processes. In your case you launch a new instance of cmd, which inherits your PowerShell's environment variables, but has its own environment. The batch file then changes the environment of that cmd instance, which closes afterwards and you return back to your PowerShell script. Naturally, nothing in PowerShell's environment has changed.
It works in cmd since batch files are executed in the same process, so a batch file can set environment variables and subsequently they are available, since the batch file wasn't executed in a new process. If you use cmd /c setenv.cmd in an interactive cmd session you will find that your environment hasn't changed either.
You can try another option, such as specifying the environment variables in a language-agnostic file, to be read by either cmd or PowerShell to set the environment accordingly. Or you could launch your PowerShell scripts from cmd after first running your batch file. Or you could set those environment variables under your user account to no longer have to care for them. Or you just have one setenv.cmd and one setenv.ps1 and keep them updated in sync.
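For the language-agnostic-file option, a minimal sketch of the PowerShell side could look like this (env.txt and its KEY=VALUE format are assumptions for illustration):
# Read simple KEY=VALUE lines from a shared file and apply them to the current session.
# (A batch script could read the same file with a `for /f` loop.)
Get-Content .\env.txt | ForEach-Object {
    $name, $value = $_ -split '=', 2
    if ($name -and $value) {
        Set-Item -Path "env:$name" -Value $value
    }
}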
Summary
Write the environment variables to file and load them after.
Example
I've included a minimal working example (MWE) below that demonstrates this by saving and loading the Visual Studio environment.
Usage
To run the script, call New-Environment. You will now be in the VS2022 environment.
How it works
The first time New-Environment is called, the Visual Studio environment batch file runs, and the results are saved to disk. On returning to PowerShell the results are loaded from disk. Subsequent calls just use the saved results without running the environment activator again (because it's slow). The New-Environment -refresh parameter may be used if you do want to re-save the Visual Studio environment, for instance if anything has changed.
Script
NOTE: This script must be present in your PowerShell $profile so the second PowerShell instance can access the function! Please make sure to change the VS path to reflect your own installation.
function New-Environment()
{
    param (
        [switch]
        $refresh
    )
    Write-Host "Env vars now: $($(Get-ChildItem env: | Measure-Object).Count)"
    $fileName = "$home\my_vsenviron.json"
    if ((-not $refresh) -and (Test-Path $fileName -PathType Leaf))
    {
        Import-Environment($fileName)
        return
    }
    $script = '"C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\Build\vcvars64.bat" && '
    $script += "pwsh --command Export-Environment `"$fileName`""
    & "cmd.exe" "/C" $script
    Import-Environment($fileName)
}
function Export-Environment($fileName)
{
    Get-ChildItem env: | Select-Object Name, Value | ConvertTo-Json | Out-File $fileName
    Write-Host "I have exported the environment to $fileName"
}
function Import-Environment($fileName)
{
    Get-Content $fileName | ConvertFrom-Json | ForEach-Object -Process { Set-Item "env:$($_.Name)" "$($_.Value)" }
    Write-Host "I have imported the environment from $fileName"
    Write-Host "Env vars now: $($(Get-ChildItem env: | Measure-Object).Count)"
}
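Typical usage then looks like this (just a sketch using the function and cache file defined above):
# First call: runs vcvars64.bat via cmd, then caches the environment to ~\my_vsenviron.json.
New-Environment
# Subsequent calls: load the cached environment from disk (much faster).
New-Environment
# Force a re-run of the batch file and refresh the cache, e.g. after a Visual Studio update.
New-Environment -refresh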

adding cygwin bin directory to powershell path environment variable ... cygwin within powershell

Powershell is great for scripting. But when it comes to everyday use, certain things can be a huge PITA!!
So I thought it would be great if I could do something like this in my profile.ps1:
$env:path = "$($env:path);c:\cygwin\bin"
to get access to utilities like tar, zip, etc., but this doesn't work. The variable looks right when I do:
PS > $env:path
but when I try to do, say,
PS > unzip foo.zip
I get a command-not-found type error.
WTF PowerShell!?
edit: Great answers! I looked at it with fresh eyes this morning and realized that I just needed to spell 'cygwin' correctly! Now I don't have to switch back and forth between two consoles. It should be noted for anyone who uses this tip that your path in PowerShell is evaluated in order: if you put c:\cygwin\bin at the end of the $env:path variable, it will be searched last, so it won't interfere with existing PowerShell aliases / cmdlets.
It worked for me:
To set your profile:
$command = '$env:path = $env:path + ";C:\Program Files\Notepad++"'
$command | Out-File -FilePath $PROFILE -Append -Encoding UTF8
Or just the current shell:
$env:path = $env:path + ";C:\Program Files\Notepad++"
Using $env:path to add the cygwin bin to PATH should work as long as you are trying to use it in the same Powershell session. If you open a new console or if you close and open Powershell, it will not be persisted. Otherwise, what you are doing should work. Make sure you are indeed adding the correct path. If you want to persist the changes, add the line to your $profile.
Also, try using the Mingw / Msys / Msysgit utils. I find Mingw to be more lightweight than cygwin ( if you are using cygwin just to get some of these utils.)
PowerShell by default is only going to modify its local copy of PATH. When you run an external command, they aren't going to see the local environment variables.
Per this TechNet article, you can fall back to the .NET static method SetEnvironmentVariable to do this at the user level if you want this to be a permanent change:
[Environment]::SetEnvironmentVariable("TestVariable", "Test value.", "User")