Why don't .NET objects in PowerShell use the current directory?

When you use a .NET object from PowerShell, and it takes a filename, it always seems to be relative to C:\Windows\System32.
For example:
[IO.File]::WriteAllText('hello.txt', 'Hello World')
...will write C:\Windows\System32\hello.txt, rather than C:\Current\Directory\hello.txt
Why does PowerShell do this? Can this behaviour be changed? If it can't be changed, how do I work around it?
I've tried Resolve-Path, but that only works with files that already exist, and it's far too verbose to be doing all the time.

You can set the .NET working directory to PowerShell's working directory:
[Environment]::CurrentDirectory = (Get-Location -PSProvider FileSystem).ProviderPath
After this line, .NET methods such as [IO.Path]::GetFullPath and [IO.File]::WriteAllText will resolve relative paths against PowerShell's current directory.
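For illustration, a quick sketch of the before-and-after behaviour (the concrete paths are examples and depend on your session):
Set-Location C:\Current\Directory            # PowerShell's location changes...
[IO.Path]::GetFullPath('hello.txt')          # ...but this may still return e.g. C:\Windows\System32\hello.txt
[Environment]::CurrentDirectory = (Get-Location -PSProvider FileSystem).ProviderPath
[IO.Path]::GetFullPath('hello.txt')          # now C:\Current\Directory\hello.txt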

The reasons PowerShell doesn't keep the .NET notion of the current working directory in sync with PowerShell's notion of the working directory are:
PowerShell working directories can be in a provider that isn't even file-system based, e.g. HKLM:\Software.
A single PowerShell process can have multiple runspaces. Each runspace can be cd'd into a different file system location. However, the .NET/process "working directory" is essentially a global for the process and wouldn't work in a scenario where there can be multiple working directories (one per runspace).
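A minimal sketch of the first point (the reported process directory is an example):
Set-Location HKLM:\Software              # PowerShell's location is now a registry path
Get-Location                             # HKLM:\Software
[Environment]::CurrentDirectory          # unchanged, e.g. C:\Windows\System32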

For convenience, I added the following to my prompt function, so that it runs whenever a command finishes:
# Make .NET's current directory follow PowerShell's
# current directory, if possible.
if ($PWD.Provider.Name -eq 'FileSystem') {
[System.IO.Directory]::SetCurrentDirectory($PWD)
}
This is not necessarily a great idea, because it means that some scripts (that assume that the Win32 working directory tracks the PowerShell working directory) will work on my machine, but not necessarily on others.
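As a rough sketch, the same check dropped into a prompt function looks like this (the prompt string itself is illustrative):
function prompt {
    # Keep the process working directory in sync with PowerShell's location,
    # but only while we're in a file system location.
    if ($PWD.Provider.Name -eq 'FileSystem') {
        [System.IO.Directory]::SetCurrentDirectory($PWD.ProviderPath)
    }
    "PS $($PWD.Path)> "
}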

When you use filenames in .NET methods, the best practice is to use fully qualified path names, or to build one from the current location, e.g.:
"$pwd\foo.cer"
If you run the following in the PowerShell console:
C:\> [Environment]::CurrentDirectory
C:\WINDOWS\system32\WindowsPowerShell\v1.0
you can see which folder .NET is actually using.
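For example, a short sketch of building a fully qualified path before calling a .NET method (the file name is illustrative):
# Either interpolate $pwd into the string...
[IO.File]::WriteAllText("$pwd\hello.txt", 'Hello World')
# ...or use Join-Path, which also works for files that don't exist yet.
[IO.File]::WriteAllText((Join-Path $pwd 'hello.txt'), 'Hello World')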

That's probably because PowerShell is running in System32. When you cd to a directory in PowerShell, it doesn't actually change the working directory of powershell.exe.
See:
PowerTip article on syncing the two directories
Channel9 forum thread

I ran into the same problem a long time ago and now I add the following to the beginning of my profile:
# Setup user environment when running session under alternate credentials and
# logged in as a normal user.
if ((Get-PSProvider FileSystem).Home -eq "")
{
    Set-Variable HOME $env:USERPROFILE -Force
    $env:HOMEDRIVE = Split-Path $HOME -Qualifier
    $env:HOMEPATH = Split-Path $HOME -NoQualifier
    (Get-PSProvider FileSystem).Home = $HOME
    Set-Location $HOME
}

Related

Can't remove a directory passed as an argument to a PowerShell script that runs asynchronously

I have a PowerShell script that needs to delete the directory passed as an argument, and the script should be started asynchronously.
Example:
Script_1.ps1
$Path = "C:\XYZ"
$tempDir = "C:\Temp"
Start-Process -FilePath "powershell.exe" -ArgumentList "$($tempDir)\Script_2.ps1","-PathToDelete","$Path"
Script_2.ps1
param(
    # The path to delete
    [Parameter(Mandatory=$True,ValueFromPipeline=$False,HelpMessage="The path to delete")]
    [string]$PathToDelete
)
Remove-Item -path $PathToDelete -Recurse -Force
exit 0
Currently, Remove-Item throws an exception saying the path cannot be deleted because it's in use. Process Explorer says the path to be deleted is in use by the powershell process started in Script_1.ps1.
You say that the process running Script_1.ps1 is preventing the removal, and I'm assuming you've already ruled out the following causes:
You're running that script from a directory located inside the directory tree you're trying to remove.
Inside that script, before calling Script_2.ps1:
you're changing the current directory to a directory located inside the directory tree.
or you've opened a file located in that directory tree without having closed it yet.
This leaves the following potential - and obscure - cause:
The process-level working directory - which is distinct from PowerShell's current location - may be a directory inside the directory tree you're trying to remove, thereby preventing removal.
To rule that out, change it to C:\ from inside Script_2.ps1, before calling Remove-Item, as follows:
[Environment]::CurrentDirectory = 'C:\'
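For context, a sketch of Script_2.ps1 with that line added (the rest is the asker's original code, with the stray trailing comma removed):
param(
    # The path to delete
    [Parameter(Mandatory=$True,ValueFromPipeline=$False,HelpMessage="The path to delete")]
    [string]$PathToDelete
)

# Move the process-level working directory out of the tree being removed.
[Environment]::CurrentDirectory = 'C:\'

Remove-Item -Path $PathToDelete -Recurse -Force
exit 0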
Note:
That PowerShell's working directory (as reflected in the automatic $PWD variable and the return value from Get-Location) differs from the process' is unfortunate, but cannot be avoided, because a single PowerShell process can host multiple runspaces (sessions) simultaneously, each of which must maintain its own working directory (location) - see GitHub issue #3428.
A more common pitfall that results from this discrepancy is owed to the fact that .NET APIs use the process' working directory, which means that you cannot pass relative paths to .NET methods without risking their resolution relative to a different working directory than PowerShell's - see this answer for more information.

How can you run a PowerShell script if you do not know the location it will be run from?

I am new to powershell so this may just be me not understanding something.
I am writing a tool in powershell and have the tool working in c:\Users\Documents\WindowsPowerShell\Modules.
What I would like is the ability to copy the tool to a USB stick and run it from the USB stick.
How can I get the tool to run if its not in the PSModulePath environment variable path?
Or how can I add the USB stick added to the PSModulePath path?
Thanks for the help.
I have a solution that works by placing the following in a .ps1 script and using that to import the dependent modules and call the first function.
#find local directory
$LocalDir = Split-Path $MyInvocation.MyCommand.Path -Parent
#prove correct directory found.
Write-Host "Starting module load: $($MyInvocation.MyCommand) in directory = $LocalDir"
#load modules
Import-Module $LocalDir/Module -Force
#call first function
firstfunction
I could not get $PSScriptRoot to work, but the Split-Path $MyInvocation.MyCommand.Path -Parent code has the advantage that it also works in the ISE.
I then placed the dependent modules in directories below the .ps1 script.
If you want your script to be portable, you will have to carry around the dependent module with you as well. However, in your script, you can make sure to add the full path to the module's parent directory on your USB to the PSModulePath as a process-level environment variable:
$portableModuleDirectory = [System.IO.DirectoryInfo]'./relative/path/to/dependent/module'
$env:PSModulePath += ";$(Split-Path -Parent $portableModuleDirectory.FullName)"
Import-Module ModuleName
However, this is cumbersome to have to do. The best solution here would be to host your own internal NuGet feed and push your package to it, so you can install the module on any machine on your network that needs it, if you can't upload it to the public PowerShell Gallery (and even if you can, hosting your own feed or caching proxy is advisable for business use).
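As a rough sketch of that approach, a plain file share can serve as the internal feed; the share path and module name below are placeholders:
# Register a file-share-based repository once per machine.
Register-PSRepository -Name 'InternalRepo' -SourceLocation '\\server\share\PSRepo' -InstallationPolicy Trusted
# Publish the module from your development machine...
Publish-Module -Path 'C:\path\to\MyTool' -Repository 'InternalRepo'
# ...then install it on any machine that can reach the share.
Install-Module -Name 'MyTool' -Repository 'InternalRepo'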

Powershell Dot Slash .\ Starts at the root of a drive

Note: I'm using the built-in PowerShell ISE as my environment
I got a funny issue with dot slash on Powershell. All of my scripts run from a certain folder and there are subfolders that contain data that is needed for them to run.
For example, my scripts are saved at c:\users\chris\posh
Most of the time, I will call input and send output to subfolders like this...
c:\users\chris\posh\inputs
c:\users\chris\posh\output
Therefore I'll have scripts examples that look like this for inputs and outputs:
$hbslist = Get-Content .\inputs\HBS-IP.txt
write-output "$($lat),$($long)" | Out-File .\Outputs\"LatLong.csv" -Append
Lately, when I run the scripts, it cannot locate my files or exe's that I call on. That's because it's trying to look at P:/ instead of c:\users\chris\posh when using .\
PowerShell also starts in my P:\ (mapped share drive) for some reason, and I cannot figure out why my PC is running this way.
It might be a policy on your machine that changes your home directory. You can check the home directory with:
$HOME
This happens often on corporate machines. If you want to set it back for your powershell environment, you can set it in your profile.ps1.
This is typically stored at:
c:\Users\<Name>\Documents\WindowsPowershell\profile.ps1
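For example, a one-line sketch for profile.ps1 that pins the starting directory (using the path from the question):
Set-Location 'C:\users\chris\posh'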

working with relative paths in powershell WebClient and FileStream

I'm tasked with writing a powershell script to perform a file download, which will eventually be executed as a scheduled task once a week. I have no background in programming in the windows environment so this has been an interesting day.
I am experiencing a problem with the unexpected handling of the $pwd and $home of the shell.
I pass into my program a download URL and a destination file. I would like the destination file to be a relative path, e.g., download/temp.txt.gz
param($srcUrl, $destFile)
$client = new-object System.Net.WebClient
$client.DownloadFile($srcUrl, $destFile)
Ungzip-File $destFile
Remove-Item $destFile
This actually fails on the call to Remove-Item. If $destFile is a relative path then the script happily downloads the file and puts it in a file relative to $home. Likewise, I then unzip this and my function Ungzip-File makes use of System.IO.Filestream, and it seems to find this file. Then Remove-Item complains that there is no file in the path relative to $pwd.
I am a bit baffled, as these are all part of the shell, so to speak. I'm not clear why these functions would handle the path differently and, more to the point, I'm not sure how to fix this so both relative and absolute paths work. I've tried looking at the IO.Path methods, but since my $home and $pwd are on different drives, I can't even use IsPathRooted, which seemed so close when I found it.
Any help?
You have to be aware of which directory you are actually in. $pwd works just fine in the command shell, but let's say you have started your script from a scheduled job. You might think $pwd is where your script resides and code accordingly, only to find out that it actually uses, say, %windir%\system32.
In general, I would use the full path to the destination, and for paths relative to the script folder I would use $PSScriptRoot/folder_name/file_path.
There are catches there too. For example, I noticed $PSScriptRoot will resolve just fine within the script but not within the Param() block. I would highly suggest using Write-Verbose when coding and testing, so you know what it thinks the path is.
[CmdletBinding()] ## you need this!
Param($destFile)
Write-Verbose "path is $pwd"
Write-Verbose "removing $destFile"
Remove-Item $destFile
and add -verbose when you are calling your script/function:
myscript.ps1 -verbose
mydownloadfunction -verbose
To use a relative path, you need to specify the current directory with ./
Example: ./download/temp.txt.gz
You can also change your location in the middle of the script with Set-Location (alias: cd)
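Putting these suggestions together, here's a sketch of the original script that normalizes $destFile to a full path up front, so WebClient, the FileStream inside Ungzip-File (the asker's own helper), and Remove-Item all refer to the same file:
param($srcUrl, $destFile)

# Resolve the destination against PowerShell's current location ($pwd),
# even if the file doesn't exist yet.
$destFile = $ExecutionContext.SessionState.Path.GetUnresolvedProviderPathFromPSPath($destFile)

$client = New-Object System.Net.WebClient
$client.DownloadFile($srcUrl, $destFile)
Ungzip-File $destFile
Remove-Item $destFile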

PowerShell: Run command from script's directory

I have a PowerShell script that does some stuff using the script’s current directory. So when inside that directory, running .\script.ps1 works correctly.
Now I want to call that script from a different directory without changing the referencing directory of the script. So I want to call ..\..\dir\script.ps1 and still want that script to behave as it was called from inside its directory.
How do I do that, or how do I modify a script so it can run from any directory?
Do you mean you want the script's own path so you can reference a file next to the script? Try this:
$scriptpath = $MyInvocation.MyCommand.Path
$dir = Split-Path $scriptpath
Write-host "My directory is $dir"
You can get a lot of info from $MyInvocation and its properties.
If you want to reference a file in the current working directory, you can use Resolve-Path or Get-ChildItem:
$filepath = Resolve-Path "somefile.txt"
EDIT (based on comment from OP):
# temporarily change to the correct folder
Push-Location $dir
# do stuff, call ant, etc
# now back to previous directory
Pop-Location
There's probably other ways of achieving something similar using Invoke-Command as well.
There are answers with a large number of votes, but when I read your question, I thought you wanted to know the directory where the script is, not the directory where the script is running. You can get the information with PowerShell's automatic variables:
$PSScriptRoot # the directory where the script exists, not the
# target directory the script is running in
$PSCommandPath # the full path of the script
For example, I have a $profile script that finds a Visual Studio solution file and starts it. I wanted to store the full path once a solution file was started, but save that file where the original script exists, so I used $PSScriptRoot.
If you're calling native apps, you need to worry about [Environment]::CurrentDirectory not about PowerShell's $PWD current directory. For various reasons, PowerShell does not set the process' current working directory when you Set-Location or Push-Location, so you need to make sure you do so if you're running applications (or cmdlets) that expect it to be set.
In a script, you can do this:
$CWD = [Environment]::CurrentDirectory
Push-Location (Split-Path $MyInvocation.MyCommand.Path -Parent)
[Environment]::CurrentDirectory = $PWD
## Your script code calling a native executable
Pop-Location
# Consider whether you really want to set it back:
# What if another runspace has set it in-between calls?
[Environment]::CurrentDirectory = $CWD
There's no foolproof alternative to this. Many of us put a line in our prompt function to set [Environment]::CurrentDirectory ... but that doesn't help you when you're changing the location within a script.
Two notes about the reason why this is not set by PowerShell automatically:
PowerShell can be multi-threaded. You can have multiple Runspaces (see RunspacePool, and the PSThreadJob module) running simultaneously within a single process. Each runspace has its own $PWD present working directory, but there's only one process, and only one Environment.
Even when you're single-threaded, $PWD isn't always a legal CurrentDirectory (you might cd into the registry provider, for instance).
If you want to put it into your prompt (which would only run in the main runspace, single-threaded), you need to use:
[Environment]::CurrentDirectory = Get-Location -PSProvider FileSystem
This would work fine:
Push-Location $PSScriptRoot
Write-Host "Current directory: $PWD"
I often used the following code to import a module which sits in the same directory as the running script. It first gets the directory of the running script:
$currentPath = Split-Path ((Get-Variable MyInvocation -Scope 0).Value).MyCommand.Path
Import-Module "$currentPath\sqlps.ps1"
I made a one-liner out of JohnL's solution:
$MyInvocation.MyCommand.Path | Split-Path | Push-Location
Well, I was looking for a solution to this for a while, without any scripts, just from the CLI. This is how I do it xD:
Navigate to the folder from which you want to run the script (the important thing is that you have tab completion):
..\..\dir
Now surround the location with double quotes, and inside them add cd, so we can invoke another instance of PowerShell:
"cd ..\..\dir"
Add another command to run the script, separated by ;, which is the command separator in PowerShell:
"cd ..\..\dir\; .\script.ps1"
Finally, run it with another instance of PowerShell:
start powershell "cd ..\..\dir\; .\script.ps1"
This will open a new PowerShell window, go to ..\..\dir, run script.ps1, and close the window.
Note that ";" just separates commands, like you typed them one by one, if first fails second will run and next after, and next after... If you wanna keep new powershell window open you add -noexit in passed command . Note that I first navigate to desired folder so I could use tab completions (you couldn't in double quotes).
start powershell "-noexit cd..\..\dir\; script.ps1"
Use double quotes "" so you can pass directories with spaces in their names, e.g.:
start powershell "-noexit cd '..\..\my dir'; .\script.ps1"