Spaces in path are giving me an aneurysm - powershell

Running Windows 7 64-bit with PowerShell 4.0. I'm having problems getting PowerShell's Test-Path and New-Item cmdlets to work with my path name, which has embedded spaces. I've run several Google searches that pointed to similar StackOverflow entries, and most (like this one) suggest wrapping the path name in quotes - double quotes if the path includes variables to be interpolated, as mine does - which I've done. It doesn't seem like it should be that difficult, and I'm sure I'm overlooking something obvious, but nothing jumps out.
Here's the code fragment giving me grief - $mnemonic is part of a long parameter list that I shortened for brevity.
Param(
    [string]$mnemonic = 'JXW29'
)
$logdir = "T:\$$PowerShell Scripts\logs\STVXYZ\$mnemonic\"
if ((Test-Path "$logdir") -eq $false)
#if ((Test-Path 'T:\$$PowerShell Scripts\logs\STVXYZ\JXW29\') -eq $false)
{
    New-Item -Path "$logdir" -ItemType Directory
    #New-Item -Path 'T:\$$PowerShell Scripts\logs\STVXYZ\JXW29' -ItemType Directory
}
Even though the last node in the directory path does not exist, the Test-Path check returns true and my code blows right past the New-Item that should have created it. Statements further down in the script write to that directory and do not fail - I have no idea where they're really writing to.
If I uncomment and run the commented code, which uses a literal string for the path instead of one with variables, everything works. First time through, the STVXYZ folder is not found and is created. Second time through, it's detected and the New-Item is skipped.

It is unclear what you are trying to do with "$$PowerShell Scripts". Is that also a variable?
$$ contains the last token of the last line input into the shell.
I am assuming you should just take that out. A good way to see what you are actually testing is to Write-Host $logdir before the test:
param (
    [string] $mnemonic = 'JXW29'
)
$logdir = "T:\PowerShell Scripts\logs\STVXYZ\$mnemonic\"
Write-Host "path I am testing: $logdir"
if ($(Test-Path $logdir) -eq $False) {
    mkdir $logdir
}

Never mind, found it. Those extra $$'s in my path name needed to be escaped.
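For anyone hitting the same thing: inside a double-quoted string, PowerShell expands $$ as an automatic variable, so the folder name silently changes and Test-Path/New-Item operate on a different path than intended. A minimal sketch of the fix, keeping the $mnemonic expansion, is to backtick-escape each literal dollar sign:
# Backtick-escape the literal $'s; $mnemonic still expands
$logdir = "T:\`$`$PowerShell Scripts\logs\STVXYZ\$mnemonic\"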

Related

How do I copy multiple files from multiple hosts in powershell?

I am trying to make a PowerShell script (5.1) that will copy several files and folders from several hosts. These hosts change frequently, so it would be ideal if I could use a list that I can append to when required.
I have this all working using xcopy, so I know the locations exist. I want to ensure that if a change is made while I am not in work, someone can just add or remove a host in the text file and the backup will continue to work.
The code I have is supposed to go through each host in my list of hosts and copy all the files from the list of file paths before moving on to the next host.
But there are 2 errors showing up:
The term '\REMOTEHOST\c$\Users\Public\desktop\back-up\$Computers' is not recognized as the name of a cmdlet, function, script
file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:8 char:17
and:
copy-item : Cannot find path '\HOST\C$\LK\Appdata\Cmmcfg C$\LKAppData\Errc C$\LK\Appdata\TCOMP C$\LK\Probes C$\LK\Appdata\CAMIO C$\LK\Appdata\LaunchPad C$\LK\Appdata\Wincmes
C$\barlen.dta C$\Caliprogs C$\Cali' because it does not exist.
This does not seem to be reading through the list as I intended; I have also noticed that the HOST it is reading from is 6th in the list, not first.
# This file contains the list of hosts you want to copy files from
$computers = Get-Content 'Y:\***FILEPATH***\HOSTFILE.txt'
# This is the file/folder(s) you want to copy from the hosts in the $computers variable
$source = Get-Content 'Y:\***FILEPATH***\FilePaths.txt'
# The destination location you want the file/folder(s) to be copied to
$destination = \\**REMOTEHOST**\c$\Users\Public\desktop\back-up\$Computers
foreach ($item in $computers) {
}
foreach ($item in $source) {
}
copy-item \\$computer\$source -Destination $destination -Verbose
Your destination variable needs to be enclosed in quotes. To have it evaluate other variables inside of it, enclose it in double quotes. Otherwise PowerShell thinks it's a command you are trying to run.
$destination = "\\**REMOTEHOST**\c$\Users\Public\desktop\back-up\$Computers"
Cracked it, thank you for your help. I was messing up the foreach command! I had both variables set to $item, so I was confusing things!
foreach ($itemhost in $computers) {
    $destination = "\\Remotehost\c$\Users\xoliver.jeffries\desktop\back-up\$itemhost"
    foreach ($item in $source) {
        copy-item "\\$itemhost\$item*" -Destination $destination -Verbose -Recurse
    }
}
It's not the neatest output, but that's just a snag! The code now enables me to use a list of hosts and a list of files and copy them to a remote server!
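For reference, a slightly tidier sketch of the same loop (assuming the same HOSTFILE.txt and FilePaths.txt inputs; the destination server name is masked as in the question):
# One destination folder per host, created if missing
$computers = Get-Content 'Y:\***FILEPATH***\HOSTFILE.txt'
$source    = Get-Content 'Y:\***FILEPATH***\FilePaths.txt'
foreach ($computer in $computers) {
    $destination = "\\**REMOTEHOST**\c$\Users\Public\desktop\back-up\$computer"
    if (-not (Test-Path -LiteralPath $destination)) {
        New-Item -Path $destination -ItemType Directory | Out-Null
    }
    foreach ($item in $source) {
        Copy-Item -Path "\\$computer\$item*" -Destination $destination -Recurse -Verbose
    }
}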

Powershell: NTFS permissions and Parent Folders -pathtoolong issues

I apologize in advance of the long post. I have spent a significant amount of time trying to find an answer or piece together a solution to this problem.
It's a simple request: Add a user/group with 2 sets of permissions to your entire DFS environment applying one set to folders and sub-folders and the other to files only.
Seems easy enough; however, in the environment I'm trying to manage we have thousands of folder paths greater than 260 characters deep, and any use of dir -Recurse or Get-ChildItem will hit the "PathTooLong" error. Every example solution for this problem has used a variation of the above or relies on Get-ChildItem, which fails for most real-world situations, as I believe many of us IT admins are faced with long paths due to the nature of DFS use.
The current attempt:
Currently I'm using a custom module "NTFSSecurity" which is fantastic to apply NTFS permissions. It works great!
It can be found here: https://ntfssecurity.codeplex.com/
A tutorial from here: https://blogs.technet.microsoft.com/fieldcoding/2014/12/05/ntfssecurity-tutorial-1-getting-adding-and-removing-permissions/
The problem with the above tutorial, and every other example I've been able to find, is that it references commands such as:
dir -Recurse | Get-NTFSAccess -Account $account
This will fail in the real world of super long file paths.
The "PathTooLong" error workaround:
My current workaround consists of using Robocopy to export the file paths to a text file. I found this recommended by someone dealing with a similar problem. Robocopy will not error on "PathTooLong" issues and is perfect for this exercise. I then run commands against the text file containing all of the paths I need to modify.
The command for the Robocopy is this:
robocopy '<insert source path here>' NULL /NJH /E /COPYALL /XF *.* | Out-File -FilePath '<path to fileout.txt>'
This will create a text file while copying only folder structure and permissions. Excellent!
You will then have to clean up the additional characters in the text file, for which I use:
$filepath = '<path>\DFS_Folder_Structure.txt'
$structure = Get-Content $filepath
$structure -replace ' New Dir 0 ' | Out-File -FilePath \\<path_you_want_file>\DFS_Folder_Structure2.txt
I also reversed the contents of the text file so that it shows the furthest child object (folder) first and works back up. I thought this might make it easier to identify a parent folder or apply some other recursive logic, which I haven't been able to figure out.
To reverse the text from bottom to top, use this command:
$x = Get-Content -Path 'C:\temp_dfs\DFS_Folder_Structure2.txt'; Set-Content -Path 'C:\temp_dfs\Reversed_data.txt' -Value ($x[($x.Length-1)..0])
This script currently only applies to paths with inheritance off, or to child objects with inheritance off. This is based on the NTFSSecurity module command Get-NTFSInheritance, which returns results for AccessInheritance and AuditInheritance. Access is whether the folder is inheriting from a parent above; Audit is whether the folder is passing inheritance down to child objects.
There are 4 possibilities:
AccessInheritance True AuditInheritance True
AccessInheritance True AuditInheritance False
AccessInheritance False AuditInheritance True
AccessInheritance False AuditInheritance False
(Special note: I have seen all 4 show up in the DFS structure I'm dealing with.)
Script to set permissions based on the file paths contained in the text file:
# Get file content to evaluate
$path = Get-Content 'C:\temp_dfs\Reversed_data.txt'
$ADaccount = '<insert fully qualified domain\user or group etc.>'
foreach ($line in $path)
{
    # Get-NTFSAccess -Path $line -Account $ADaccount | Remove-NTFSAccess
    # This command will find the access of an account and then remove it.
    # It has been omitted but is included in case it is needed later.
    $result = Get-NTFSInheritance -Path $line
    if ($result.AccessInheritanceEnabled -match "False" -and $result.AuditInheritanceEnabled -match "False")
    {
        Add-NTFSAccess -Path $line -Account $ADaccount -AccessRights Traverse,ExecuteFile,ListDirectory,ReadData,ReadAttributes,ReadExtendedAttributes,ReadPermissions -AppliesTo ThisFolderAndSubfolders
        Add-NTFSAccess -Path $line -Account $ADaccount -AccessRights ReadAttributes,ReadExtendedAttributes,ReadPermissions -AppliesTo FilesOnly
    }
    if ($result.AccessInheritanceEnabled -match "False" -and $result.AuditInheritanceEnabled -match "True")
    {
        Add-NTFSAccess -Path $line -Account $ADaccount -AccessRights Traverse,ExecuteFile,ListDirectory,ReadData,ReadAttributes,ReadExtendedAttributes,ReadPermissions -AppliesTo ThisFolderAndSubfolders
        Add-NTFSAccess -Path $line -Account $ADaccount -AccessRights ReadAttributes,ReadExtendedAttributes,ReadPermissions -AppliesTo FilesOnly
    }
    if ($result.AccessInheritanceEnabled -match "True" -and $result.AuditInheritanceEnabled -match "False")
    {
        continue
    }
    if ($result.AccessInheritanceEnabled -match "True" -and $result.AuditInheritanceEnabled -match "True")
    {
        continue
    }
}
This script will apply permissions for the specified user/group account, setting one set of permissions on folders and subfolders and then adding another set of permissions on files only.
Now this current fix works great, except it only touches folders with inheritance turned off. This means you'd need to run this script and then set permissions on the "main parent folder". This is completely doable, may be the best method to avoid double entries of permissions, and is the current state of my solution.
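For completeness, that final manual step amounts to the same two calls aimed once at the top-level folder (the root path below is a placeholder), letting inheritance push the permissions down:
$root = '\\<domain>\<dfsroot>'   # placeholder for the top-level DFS folder
Add-NTFSAccess -Path $root -Account $ADaccount -AccessRights Traverse,ExecuteFile,ListDirectory,ReadData,ReadAttributes,ReadExtendedAttributes,ReadPermissions -AppliesTo ThisFolderAndSubfolders
Add-NTFSAccess -Path $root -Account $ADaccount -AccessRights ReadAttributes,ReadExtendedAttributes,ReadPermissions -AppliesTo FilesOnly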
If you add the Add-NTFSAccess calls to the bottom sections, where AccessInheritanceEnabled = True and AuditInheritanceEnabled = True, you will get double entries, because you're applying permissions both to the parent - which pushes its permissions to the child objects - and explicitly on the child objects themselves. This is because the text file contains both parent and child, and I haven't figured out a way to address that. This isn't horrible, but my OCD doesn't like double permissions being added if it can be avoided.
The real question:
What I'd really like to do is somehow identify parent folders, compare them to parents further up the tree, see whether they are inheriting permissions, and only apply the permission set to the highest parent in a specific chain. My mind wants to explode thinking about how you would compare parents and find the "highest parent".
Again, the problem is that any attempt to -recurse the folder structure will fail due to "PathTooLong" issues, so the logic needs to be confined to the text-file paths. I've seen a bit mentioned about Split-Path, but I don't really understand how that's applied, or how you could compare a path to another path until you had identified a parent path.
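For illustration, here is one way the Split-Path comparison could work against the text file (a sketch only, untested at DFS scale; the file name is reused from the script above): sort the paths shallowest-first, then drop any path that already has an ancestor in the kept list, so only the highest parent of each chain remains:
$paths = Get-Content 'C:\temp_dfs\Reversed_data.txt' |
    Sort-Object { ($_ -split '\\').Count }    # shallowest paths first
$roots = @()
foreach ($p in $paths) {
    $covered = $false
    $parent = Split-Path -Path $p -Parent
    while ($parent) {
        if ($roots -contains $parent) { $covered = $true; break }
        $parent = Split-Path -Path $parent -Parent
    }
    if (-not $covered) { $roots += $p }
}
# $roots now holds only the top-most folder in each chain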
Thank you for taking the time to read this long post and question. If you're still awake, I'm open to any suggestions. lol.
The NTFSSecurity module is indeed fantastic.
I used it to make a script that can export the NTFS security of a UNC path and its subfolders to a readable Excel file.
It can be found at:
https://github.com/tgoetheyn/Export-NtfsSecurity
I use it frequently and haven't had any problems with long filenames.
Hope you like it.
PS: If you add NTFS security, don't forget to include the "Synchronize" permission. If it is not included, strange things can happen.
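As a rough illustration of that note ($folder and $account are placeholders, and the rights list is shortened):
# Without Synchronize, users can be unable to open the folder normally
# even though the read rights themselves are present:
Add-NTFSAccess -Path $folder -Account $account -AccessRights ReadAndExecute, Synchronize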

How to Do MSBuild's GetDirectoryNameOfFileAbove in PowerShell?

In MSBuild, there's the GetDirectoryNameOfFileAbove property function.
How do I achieve the same with PowerShell?
It should ideally have compact syntax, because it's what you have to paste into every entry-point script so the script can find its includes.
The idea of this question:
There's a large solution in source code control. Some of its parts are relatively autonomous.
It has a location for shared scripts and reusable functions, at a known folder under the root.
There are numerous entry-point scripts (files which you explicitly execute) scattered around the project, all of them including the shared scripts.
What's the convenient way of locating the shared scripts from an entry-point script?
Relative paths turn out to work badly because they look like "../../../../../../scripts/smth" and are hard to write and maintain.
Registering modules is not a good option because (a) you're getting this from source code control, not by installing it, (b) you usually have different versions in different disc locations, all at the same time, and (c) it is an excess dependency on the environment when technically just local info is enough.
The MSBuild way of doing this (since v4) is as follows: drop a marker file (say, root.here or whatever), get the absolute path to that folder with GetDirectoryNameOfFileAbove, et voilà! You've got the local root to build paths from.
Maybe it's not the right way to go with PowerShell, so I'd be grateful for such directions as well.
You can access the current script's folder like this:
$invoker = Split-Path -Parent $MyInvocation.MyCommand.Path
So the parent of that one is:
$parent = Split-Path -Parent $MyInvocation.MyCommand.Path | Split-Path -Parent
A quick and dirty solution looks like this:
function GetDirectoryNameOfFileAbove($markerfile)
{
    $result = ""
    $path = $MyInvocation.ScriptName
    while (($path -ne "") -and ($path -ne $null) -and ($result -eq ""))
    {
        if (Test-Path $(Join-Path $path $markerfile)) { $result = $path }
        $path = Split-Path $path
    }
    if ($result -eq "") { throw "Could not find marker file $markerfile in parent folders." }
    return $result
}
It could be compacted into a single line for planting into scripts, but it's still too C#-ish, and I think it might be shortened with some PS pipe magic.
UPD: edited the script; it turned out that $MyInvocation.MyCommand.Path is often NULL when the script is called from the command line with dot-sourcing (at any context level), so the current hypothesis is ScriptName.
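On PowerShell 3+, where $PSScriptRoot is populated in every script, the same walk-up can be sketched a little more compactly (marker file name reused from the MSBuild example above):
$root = $PSScriptRoot
while ($root -and -not (Test-Path (Join-Path $root 'root.here'))) {
    $root = Split-Path $root
}
if (-not $root) { throw "Could not find marker file in parent folders." }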

How to get the current directory of the cmdlet being executed

This should be a simple task, but I have seen several attempts to get the path to the directory where the executing cmdlet is located, with mixed success. For instance, when I execute C:\temp\myscripts\mycmdlet.ps1, which has a settings file at C:\temp\myscripts\settings.xml, I would like to be able to store C:\temp\myscripts in a variable within mycmdlet.ps1.
This is one solution which works (although a bit cumbersome):
$invocation = (Get-Variable MyInvocation).Value
$directorypath = Split-Path $invocation.MyCommand.Path
$settingspath = $directorypath + '\settings.xml'
Another one suggested this solution, which only works in our test environment:
$settingspath = '.\settings.xml'
I like the latter approach a lot and prefer it to having to parse the file path as a parameter each time, but I can't get it to work in my development environment. What should I do? Does it have something to do with how PowerShell is configured?
Yes, that should work. But if you need to see the absolute path, this is all you need:
(Get-Item .).FullName
The reliable way to do this is just like you showed: $MyInvocation.MyCommand.Path.
Relative paths will be resolved against $pwd in PowerShell, against the current directory for an application, or against the current working directory for a .NET API.
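The difference is easy to demonstrate: Set-Location changes PowerShell's location ($pwd), but not the process-wide .NET current directory:
Set-Location C:\Windows
$pwd.Path                        # C:\Windows
[Environment]::CurrentDirectory  # typically still the directory the session started in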
PowerShell v3+:
Use the automatic variable $PSScriptRoot.
The easiest method seems to be to use the following predefined variable:
$PSScriptRoot
about_Automatic_Variables and about_Scripts both state:
In PowerShell 2.0, this variable is valid only in script modules (.psm1). Beginning in PowerShell 3.0, it is valid in all scripts.
I use it like this:
$MyFileName = "data.txt"
$filebase = Join-Path $PSScriptRoot $MyFileName
You can also use:
(Resolve-Path .\).Path
The part in brackets returns a PathInfo object.
(Available since PowerShell 2.0.)
Try:
(Get-Location).path
or:
($pwd).path
Path is often null. This function is safer.
function Get-ScriptDirectory
{
    $Invocation = (Get-Variable MyInvocation -Scope 1).Value
    if ($Invocation.PSScriptRoot)
    {
        $Invocation.PSScriptRoot
    }
    elseif ($Invocation.MyCommand.Path)
    {
        Split-Path $Invocation.MyCommand.Path
    }
    else
    {
        $Invocation.InvocationName.Substring(0, $Invocation.InvocationName.LastIndexOf("\"))
    }
}
Get-Location will return the current location:
$Currentlocation = Get-Location
I like the one-line solution :)
$scriptDir = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent
Try this:
$WorkingDir = Convert-Path .
In PowerShell 3 and above you can simply use
$PSScriptRoot
If you just need the name of the current directory, you could do something like this:
((Get-Location) | Get-Item).Name
Assuming you are working from C:\Temp\Location\MyWorkingDirectory:
Output
MyWorkingDirectory
Most answers don't work when debugging in the following IDEs:
PS-ISE (PowerShell ISE)
VS Code (Visual Studio Code)
Because in those, $PSScriptRoot is empty and Resolve-Path .\ (and similar) will result in incorrect paths.
Freakydinde's answer is the only one that resolves those situations, so I up-voted it, but I don't think the Set-Location in that answer is really what is desired. So I fixed that and made the code a little clearer:
$directorypath = if ($PSScriptRoot) { $PSScriptRoot } `
elseif ($psise) { split-path $psise.CurrentFile.FullPath } `
elseif ($psEditor) { split-path $psEditor.GetEditorContext().CurrentFile.Path }
For what it's worth, as a single-line solution, the below works for me.
$currFolderName = (Get-Location).Path.Substring((Get-Location).Path.LastIndexOf("\")+1)
The +1 at the end is to skip past the backslash itself.
Thanks to the posts above using the Get-Location cmdlet.
This function will set the prompt location to the script path, dealing with the different ways of getting the script path in VS Code, PS ISE, and from $pwd:
function Set-CurrentLocation
{
    $currentPath = $PSScriptRoot # Azure DevOps, PowerShell
    if (!$currentPath) { $currentPath = Split-Path $pseditor.GetEditorContext().CurrentFile.Path -ErrorAction SilentlyContinue } # VS Code
    if (!$currentPath) { $currentPath = Split-Path $psISE.CurrentFile.FullPath -ErrorAction SilentlyContinue } # PS ISE
    if ($currentPath) { Set-Location $currentPath }
}
You would think that using '.\' as the path means the invocation path. But not all the time. For example, if you use it inside a job ScriptBlock, it might point to %profile%\Documents.
This is what I came up with: an array including multiple methods of finding a path, which uses the current location, filters out null/empty results, and returns the first non-null value.
@((
    ($MyInvocation.MyCommand.Module.ModuleBase),
    ($PSScriptRoot),
    (Split-Path -Parent -Path $MyInvocation.MyCommand.Definition -ErrorAction SilentlyContinue),
    (Get-Location | Select-Object -ExpandProperty Path)
) | Where-Object { $_ })[0]
To only get the current folder name, you can also use:
(Split-Path -Path (Get-Location) -Leaf)
To expand on @Cradle's answer: you could also write a multi-purpose function that will get you the same result per the OP's question:
Function Get-AbsolutePath {
    [CmdletBinding()]
    Param(
        [Parameter(
            Mandatory = $false,
            ValueFromPipeline = $true
        )]
        [String]$relativePath = ".\"
    )
    if (Test-Path -Path $relativePath) {
        return (Get-Item -Path $relativePath).FullName -replace "\\$", ""
    } else {
        Write-Error -Message "'$relativePath' is not a valid path" -ErrorId 1 -ErrorAction Stop
    }
}
I had similar problems, and they caused me a lot of trouble, since I am making programs written in PowerShell (full end-user GUI applications) and I have a lot of files and resources I need to load from disk.
From my experience, using . to represent the current directory is unreliable. It should represent the current working directory, but it often does not.
PowerShell appears to save the location from which PowerShell has been invoked inside ..
To be more precise, when PowerShell is first started, it starts, by default, inside your home user directory. That is usually the directory of your user account, something like C:\Users\YOUR USER NAME.
After that, PowerShell changes directory to either the directory from which you invoked it, or to the directory where the script you are executing is located, before presenting you with the PowerShell prompt or running the script. But that happens after the PowerShell app itself originally starts inside your home user directory.
And . represents that initial directory inside which PowerShell started. So . only represents the current directory if you invoked PowerShell from the wanted directory. If you later change directory in PowerShell code, the change appears not to be reflected inside . in every case.
In some cases . represents the current working directory, and in others the directory from which PowerShell (itself, not the script) was invoked, which can lead to inconsistent results.
For this reason I use an invoker script: a script with a single command inside:
powershell
That will ensure that PowerShell is invoked from the wanted directory and thus makes . represent the current directory. But it only works if you do not change directory later in the PowerShell code.
In the case of a script, I use an invoker script similar to the last one mentioned, except it contains a file option:
powershell -File "DRIVE:\PATH\SCRIPT NAME.PS1"
That ensures that PowerShell is started inside the current working directory.
Simply clicking on a script invokes PowerShell from your home user directory no matter where the script is located.
That results in the current working directory being the directory where the script is located, but the PowerShell invocation directory being C:\Users\YOUR USER NAME, with . returning one of these two directories depending on the situation, which is ridiculous.
But to avoid all this fuss and the invoker scripts, you can simply use either $PWD or $PSScriptRoot instead of . to represent the current directory, depending on whether you wish to represent the current working directory or the directory where the script itself is located.
And if you, for some reason, want to retrieve the other of the two directories which . returns, you can use $HOME.
I personally just have an invoker script inside the root directory of the apps I develop with PowerShell, which invokes my main app script, and I simply remember to never change the current working directory inside my app's source code, so I never have to worry about this, and I can use . to represent the current directory and support relative file addressing in my applications without any problems.
This should work in newer versions of PowerShell (newer than version 2).
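In short, inside a script on PowerShell 3+ the two variables behave like this:
"Current working directory: $PWD"           # follows Set-Location at run time
"Script's own directory:    $PSScriptRoot"  # fixed to where the .ps1 file lives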

PowerShell: How to check for multiple conditions (folder existence)

I am in the process of writing a script to make changes to folder permissions. Before it does that, I would like to do some checking to make sure that I am working in the correct directory. My problem is how to check that four subfolders (i.e. Admin, Workspace, Com, & Data) exist before the script progresses. I assume I would be using Test-Path on each directory.
What's wrong with the following?
if ( (Test-Path $path1) -and (Test-Path $path2) ) {
}
Hint:
Remember to specify -LiteralPath - it stops any possible misinterpretation of wildcard characters in the path. I've "been there" (so to speak) with this one, spending hours debugging code.
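For example (file name hypothetical), square brackets are wildcard syntax to -Path but are taken literally by -LiteralPath:
Test-Path -Path 'C:\data\report[1].txt'         # False even when the file exists
Test-Path -LiteralPath 'C:\data\report[1].txt'  # True when the file exists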
Test-Path can check multiple paths at once. Like this:
Test-Path "c:\path1","c:\path2"
The output will be an array of True/False for each corresponding path.
This could be especially helpful if you have a lot of files/folders to check.
To check whether all of the paths exist:
if ((Test-Path $arraywithpaths) -notcontains $false) {...}
And the same way for non-existence (at least one path missing):
if ((Test-Path $arraywithpaths) -contains $false) {...}
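Putting this together for the four subfolders from the question (a sketch; $base stands for the directory being validated):
$base = 'D:\target\directory'   # hypothetical parent directory
$required = 'Admin', 'Workspace', 'Com', 'Data' |
    ForEach-Object { Join-Path -Path $base -ChildPath $_ }
if ((Test-Path -LiteralPath $required) -notcontains $false) {
    # all four subfolders exist; safe to continue with the permission changes
}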