Note: I'm using the built-in PowerShell ISE as my environment
I've got a funny issue with dot-slash (.\) paths in PowerShell. All of my scripts run from a certain folder, and there are subfolders containing data they need to run.
For example, my scripts are saved at c:\users\chris\posh
Most of the time, I will call input and send output to subfolders like this...
c:\users\chris\posh\inputs
c:\users\chris\posh\output
Therefore I'll have script examples that look like this for inputs and outputs:
$hbslist = Get-Content .\inputs\HBS-IP.txt
Write-Output "$($lat),$($long)" | Out-File .\Outputs\LatLong.csv -Append
Lately, when I run the scripts, they cannot locate the files or EXEs they call. That's because .\ is resolving to P:\ instead of C:\Users\chris\posh.
PowerShell also starts in P:\ (a mapped share drive) for some reason, and I cannot figure out why my PC is behaving this way.
It might be a policy on your machine which changes your home directory. You can check the home directory with the automatic $HOME variable ($env:HOME is usually not set on Windows):
echo $HOME
This happens often on corporate machines. If you want to set it back for your PowerShell environment, you can do so in your profile.ps1.
This is typically stored at:
C:\Users\<Name>\Documents\WindowsPowerShell\profile.ps1
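For example, a minimal profile.ps1 along these lines would force every new session to start in your script folder (the path is an assumption based on the question):
# Assumption: C:\Users\chris\posh is where your scripts live
Set-Location -Path 'C:\Users\chris\posh'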
Powershell noob here.
I have a script for copying PDF documents and CSV files. The script gets the CSV data from a URL defined in a .txt file in the same directory as the script. In the script, the file is determined like this:
$publishedCSV = Get-Content .\DriveURL.txt -Raw
When I run this script in the ISE, it works fine and retrieves all the CSV data. However, when I run it from Task Scheduler, it tries to find the DriveURL file in System32, rather than in the path that is specified (I used a transcript to find out what was happening).
I figured that out, and defined the FULL path of DriveURL, rather than just using the .\ notation. It works, but I don't know why it works
What I did:
Specified the proper path of DriveURL and now my script works. I don't understand why it worked with .\DriveURL.txt when I ran it in the ISE, but not when it was run from Task Scheduler. It's the same script.
If you use relative paths then you must also either set your working directory, or in the script change to the appropriate directory before referencing said relative paths. Alternatively you can use full paths, as you have already discovered.
A simple use of cd or pushd with the automatic $PSScriptRoot variable will change your working directory to wherever the script is saved:
pushd $PSScriptRoot
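Applied to the Scheduler case above, a minimal sketch (reusing DriveURL.txt from the question) would be:
# Run from the script's own folder so .\ paths resolve next to the script.
# $PSScriptRoot requires PowerShell 3.0+ and is only populated inside a saved .ps1.
pushd $PSScriptRoot
$publishedCSV = Get-Content .\DriveURL.txt -Raw
popd  # restore the caller's working directory when done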
I am trying to execute an .exe file (let's say it is called myfile.exe) with an argument file (argument.fst). Both files have the same names for each execution, but are located in different subfolders of the same parent directory.
My objective is to create a for-loop in which I pinpoint the paths to both files (14 groups in total, so 14 iterations) and have Windows PowerShell execute them. My goal is to automate my simulations, which are run by the .exe files plus arguments, thus saving time.
Can this be implemented in Windows PowerShell?
Thank you very much,
Ioannis Voultsos.
If you want to automate the process, you may store your command/argument pairs in a CSV file (i.e. commands.csv):
command;arguments
myapp.exe;c:/
myapp.exe;h:/
then load it and execute using &:
$csv = Import-Csv commands.csv -Delimiter ';'
$csv | ForEach-Object { & $_.command $_.arguments }
Beware of executing commands from strings, coming from untrusted sources though.
Try out this sample code from the parent folder:
Get-ChildItem | Where-Object { $_.PSIsContainer } | ForEach-Object { cd $_.FullName; & .\SampleApp.exe args0 args1; cd .. }
It will go into each subdirectory and execute the .exe there with its arguments. (Note that the exe and its arguments must be separate tokens after the & call operator; quoting them as one string makes PowerShell look for a command literally named ".\SampleApp.exe args0 args1".)
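Applied to the original question, a sketch under the stated layout (14 subfolders, each holding a myfile.exe and an argument.fst) might look like:
# Visit each subfolder and run its copy of the exe with its argument file
# (-Directory requires PowerShell 3.0+)
Get-ChildItem -Directory | ForEach-Object {
    Push-Location $_.FullName
    & .\myfile.exe .\argument.fst
    Pop-Location
}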
What do I want to achieve?
I have one .ps1 file that contains all of my functions. As a first step I want to convert it into a PowerShell module. Then I want the following:
A colleague gets a script or .bat he has to run ONCE. This will add a path on a network drive everyone has access to into his $Env:PSModulePath
Copy and paste a custom profile.ps1 into the user's %userprofile%\Documents\WindowsPowerShell that imports the module
Every user should now have the PowerShell scripts I made available in their shell
How I tried to solve it
The way a colleague and I set it up in the past is with this (we have a profile.ps1 file that does the following):
# set path to profile_loader.ps1 (dot-sourcing needs the file itself, not its folder)
$path = "\\server\share\folderwithscripts\profile_loader.ps1";
# call profile_loader.ps1
. "$path"
Then this profile_loader.ps1 basically just loads tons of scripts (.ps1 files) like this:
. "\\server\share\pathtoanotherscript.ps1"
Line after line.
I don't like it and it is too complicated for my 25 other colleagues I want to set up in the future.
Question
What is the best way to achieve this? A good old .bat file that copies the .ps1 file into their user profile? Or is there a better way?
As someone who had their $profile wiped and set to a "company default", for the love of god, don't.
If you have to, then I suggest just creating a profile you want everyone to have with all your modules in a shared location, like your securely locked down Sysadmin folder.
Do psedit $profile.AllUsersAllHosts on your machine, modify that, then make a text file with all the hostnames you want to destroy with your own forced profile. Throw this in there to make it import your modules by default.
# Checks your server share for any PSM1 files, could change to include PS1 as well I suppose. Long name because its in a $Profile so ¯\_(ツ)_/¯
$ModulePathWithLongNameBecauseSomeoneMayUseThisInAnActualScript = Get-ChildItem -file -Recurse "\\server\share\" -Include "*.psm1"
# Sets module path for other adhoc module calls if they dont want to restart their Powershell
$env:PSModulePath = $env:PSModulePath + ";\\server\share\"
# Imports all PSM1 files from the ModulePath*
Foreach($psm in $ModulePathWithLongNameBecauseSomeoneMayUseThisInAnActualScript){
    Import-Module "$($psm.FullName)"  # import each discovered module file
}
Run this on your machine to deliver your soul crushing $profile to your colleagues who may have had their own setup.
# Get a list of machines that your staff will use and throw them into a txt or csv etc.
$PCsForForcedProfile = Get-Content "\\server\share\PleaseNo.txt"
Foreach($Colleague in $PCsForForcedProfile){
    Copy-Item "C:\Windows\System32\WindowsPowerShell\v1.0\profile.ps1" "\\$Colleague\C$\Windows\System32\WindowsPowerShell\v1.0\" -Force
}
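For the run-once setup the question asks about, a hedged sketch like this would persist the shared module path per user (\\server\share\Modules is a placeholder), so modules there auto-load on first use in PowerShell 3.0+ without any profile tricks:
# Run once per user: append a shared module folder to the persisted PSModulePath
$shared  = '\\server\share\Modules'   # placeholder; point at your real share
$current = [Environment]::GetEnvironmentVariable('PSModulePath', 'User')
if ($current -notlike "*$shared*") {
    $new = if ($current) { "$current;$shared" } else { $shared }
    [Environment]::SetEnvironmentVariable('PSModulePath', $new, 'User')
}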
I have a script that I run at my work that uses Get-ChildItem to get all the files of a certain type on a storage drive, then sorts and moves them to an archive drive. I'd like to automate this process to run once every day, but I realized I would have a problem in doing so.
Occasionally, when this script is run a file or two will still be in the process of transferring over to our storage drive. If I let the script move this file while it is still being transferred from our customer, it gets corrupted and won't open later.
I know how to filter based on file type and date and other basic parameters, but I'm not entirely sure how I tell this script to exclude files that are currently growing in size.
Below is what I'm currently using to filter what I want to move:
$TargetType = "*.SomeFileType"
$TargetDrive = "\\Some\UNC\Path"
Get-ChildItem $TargetDrive\$TargetType | ForEach-Object { $_.FullName } | Sort-Object | Out-File $outStorageMove
Also, at the moment I'm putting everything that get-childitem finds into a text file, that gets invoked later so that I can manually edit what I want it to move. I'd like to get rid of this step if at all possible.
Move is essentially copy and delete.
So, as gvee stated, Copy-Item is a better option; to get past your stated concern, monitor for the copy to complete. My addition would be to delete once the copy is done and you have verified it.
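One way to do the "monitor" part, as a sketch against the question's variables (the archive path is a placeholder): try to open each file with an exclusive lock, and skip the ones the sender still has open:
$TargetType = "*.SomeFileType"
$TargetDrive = "\\Some\UNC\Path"
# An exclusive open only succeeds once the sender's transfer has finished
$done = Get-ChildItem -Path $TargetDrive -Filter $TargetType | Where-Object {
    try {
        $fs = $_.Open([IO.FileMode]::Open, [IO.FileAccess]::Read, [IO.FileShare]::None)
        $fs.Close()
        $true
    } catch {
        $false  # still locked: leave it for the next scheduled run
    }
}
$done | Move-Item -Destination '\\Archive\UNC\Path'  # placeholder archive path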
Or use BITS as a job to do this.
Using Windows PowerShell to Create BITS Transfer Jobs
https://msdn.microsoft.com/en-us/library/windows/desktop/ee663885(v=vs.85).aspx
You can use PowerShell cmdlets to create synchronous and asynchronous Background Intelligent Transfer Service (BITS) transfer jobs.
All of the examples in this topic use the Start-BitsTransfer cmdlet. To use the cmdlet, be sure to import the module first by running Import-Module BitsTransfer. For more information, type Get-Help Start-BitsTransfer at the PowerShell prompt.
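A minimal synchronous example, with placeholder paths:
Import-Module BitsTransfer
# Start-BitsTransfer blocks until the copy has fully completed
Start-BitsTransfer -Source '\\Some\UNC\Path\file.SomeFileType' -Destination 'D:\Archive\'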
When you use a .NET object from PowerShell, and it takes a filename, it always seems to be relative to C:\Windows\System32.
For example:
[IO.File]::WriteAllText('hello.txt', 'Hello World')
...will write C:\Windows\System32\hello.txt, rather than C:\Current\Directory\hello.txt
Why does PowerShell do this? Can this behaviour be changed? If it can't be changed, how do I work around it?
I've tried Resolve-Path, but that only works with files that already exist, and it's far too verbose to be doing all the time.
You can change the .NET working directory to match the PowerShell working directory:
[Environment]::CurrentDirectory = (Get-Location -PSProvider FileSystem).ProviderPath
After this line, all .NET methods like [IO.Path]::GetFullPath and [IO.File]::WriteAllText will work without problems.
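For instance (C:\Temp is just an illustrative directory):
cd C:\Temp                               # changes PowerShell's location only
[IO.File]::WriteAllText('a.txt', 'x')    # may still land in the old .NET directory
[Environment]::CurrentDirectory = (Get-Location -PSProvider FileSystem).ProviderPath
[IO.File]::WriteAllText('b.txt', 'x')    # now lands in C:\Temp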
The reasons PowerShell doesn't keep the .NET notion of current working directory in sync with PowerShell's notion of the working dir are:
- PowerShell working dirs can be in a provider that isn't even file system based, e.g. HKLM:\Software
- A single PowerShell process can have multiple runspaces. Each runspace can be cd'd into a different file system location. However, the .NET/process "working directory" is essentially a global for the process and wouldn't work for a scenario where there can be multiple working dirs (one per runspace).
For convenience, I added the following to my prompt function, so that it runs whenever a command finishes:
# Make .NET's current directory follow PowerShell's
# current directory, if possible.
if ($PWD.Provider.Name -eq 'FileSystem') {
    [System.IO.Directory]::SetCurrentDirectory($PWD)
}
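In context, the whole prompt function might look like this (the prompt text itself is just the stock one):
function prompt {
    # Keep .NET's current directory in step with PowerShell's
    if ($PWD.Provider.Name -eq 'FileSystem') {
        [System.IO.Directory]::SetCurrentDirectory($PWD)
    }
    "PS $PWD> "
}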
This is not necessarily a great idea, because it means that some scripts (that assume that the Win32 working directory tracks the PowerShell working directory) will work on my machine, but not necessarily on others.
When you use filenames in .NET methods, the best practice is to use fully-qualified path names, or build one from the current location:
"$pwd\foo.cer"
If you run this in the PowerShell console:
C:\> [Environment]::CurrentDirectory
C:\WINDOWS\system32\WindowsPowerShell\v1.0
you can see which folder .NET is using.
That's probably because PowerShell was started from System32. When you cd to a directory in PowerShell, it doesn't actually change the process working directory of powershell.exe.
See:
PowerTip article on syncing the two directories
Channel9 forum thread
I ran into the same problem a long time ago and now I add the following to the beginning of my profile:
# Setup user environment when running session under alternate credentials and
# logged in as a normal user.
if ((Get-PSProvider FileSystem).Home -eq "")
{
    Set-Variable HOME $env:USERPROFILE -Force
    $env:HOMEDRIVE = Split-Path $HOME -Qualifier
    $env:HOMEPATH = Split-Path $HOME -NoQualifier
    (Get-PSProvider FileSystem).Home = $HOME
    Set-Location $HOME
}