PowerShell: How to check for multiple conditions (folder existence)

I am in the process of writing a script to make changes to folder permissions. Before it does that, I would like to do some checking to make sure that I am working in the correct directory. My problem is: how do I check that four subfolders (i.e. Admin, Workspace, Com, and Data) exist before the script progresses? I assume I would be using Test-Path on each directory.

What's wrong with the following?
if ( (Test-Path $path1) -and (Test-Path $path2) ) {
}

Hint:
Remember to specify -LiteralPath, which prevents wildcard characters in the path from being misinterpreted. I've "been there" (so to speak) with this one, spending hours debugging code.

Test-Path can check multiple paths at once, like this:
Test-Path "c:\path1","c:\path2"
The output will be an array of True/False for each corresponding path.
This could be especially helpful if you have a lot of files/folders to check.
Check whether all paths exist:
if ((Test-Path $arraywithpaths) -notcontains $false) {...}
And conversely, to check whether any path is missing:
if ((Test-Path $arraywithpaths) -contains $false) {...}
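Putting that together for the original question, here is a minimal sketch (the base path in $root is a name I've chosen for illustration, not from the question):

```powershell
# Build the four candidate paths from a base directory.
$root    = 'C:\Shares\ProjectX'   # hypothetical base path
$folders = 'Admin', 'Workspace', 'Com', 'Data' |
    ForEach-Object { Join-Path $root $_ }

# Test-Path returns one True/False per path; proceed only if none are False.
if ((Test-Path -LiteralPath $folders) -notcontains $false) {
    Write-Host 'All four subfolders exist - safe to continue.'
}
else {
    Write-Warning 'One or more required subfolders are missing.'
}
```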

Related

Removing Files From a Path Older Than 60 Days Old Fails

I have a PowerShell script that removes log files older than 60 days. It gets to a particular file and fails, saying the file doesn't exist. The funny thing is, the code is not pointing to a particular file but to the folder. I changed the path so as not to reveal anything about my environment. Sample below. Any ideas?
clear-Host
$limit = (Get-Date).AddDays(-60)
$Path = 'C:\Windows\Temp'
Get-ChildItem $Path | Where-Object { -not $_.PSIsContainer -and $_.CreationTime -lt $limit -and $_.LastWriteTime -lt $limit } | Remove-Item
Some things to look at:
Is it safe to just nuke the %tempdir%?
Nowadays most apps will write to files in %localAppdata%\Temp which is in the user profile, so it's mostly Windows OS services using \Temp, which makes the C:\Windows\Temp path sort of a special directory. Sometimes processes might be running that have a lock on a file in that directory, and if you delete them, things can go poorly for the process. Those errors tend to look like this:
Remove-Item : Cannot remove item C:\temp\stack\2.txt:
The process cannot access the file 'C:\temp\stack\2.txt' because
it is being used by another process.
However, these are not terminating errors, which means the rest of the directory will get cleaned out. So if you don't care about the error, you could just append -ErrorAction SilentlyContinue to your Remove-Item cmdlet and consider the ticket done.
Your code isn't just removing log files though.
You mentioned that you want to delete only log files. In that case, you might want to add a file filter so that only logs are removed, because as written your code is going to nuke the whole directory. Adding a filter is easy; it would look like this:
Get-ChildItem $Path -Filter *.log | Where-Object { -not $_.PSIsContainer -and $_.CreationTime -lt $limit -and $_.LastWriteTime -lt $limit } | Remove-Item -ErrorAction SilentlyContinue

Spaces in path are giving me an aneurysm

Running Windows 7 64-bit with PowerShell 4.0. I'm having problems getting PowerShell's Test-Path and New-Item cmdlets to work for my path name, which has embedded spaces. I've run several Google searches which pointed to several similar StackOverflow entries, and most (like this one) refer to wrapping the path name in quotes: double quotes if the path includes variables to be interpolated, as mine does, which I've done. It doesn't seem like it should be that difficult, and I'm sure I'm overlooking something obvious, but nothing jumps out.
Here's the code fragment giving me grief - $mnemonic is part of a long parameter list that I shortened for brevity.
Param(
[string]$mnemonic = 'JXW29'
)
$logdir = "T:\$$PowerShell Scripts\logs\STVXYZ\$mnemonic\"
if ((Test-Path "$logdir") -eq $false)
#if ((Test-Path 'T:\$$PowerShell Scripts\logs\STVXYZ\JXW29\') -eq $false)
New-Item -Path "$logdir" -ItemType Directory
#New-Item -Path 'T:\$$PowerShell Scripts\logs\STVXYZ\JXW29' -ItemType Directory
Even though the last node in the directory does not exist, the Test-Path check returns true and my code blows right past the New-Item that should have created it. There are statements further down in the rest of the script that write to that directory that do not fail - no idea where they're really writing to.
If I uncomment and run the commented code, which uses a literal string for the path instead of one with variables, everything works. The first time through, the STVXYZ folder is not found and is created. The second time through, it's detected and the New-Item is skipped.
It is unclear what you are trying to do with "$$PowerShell Scripts". Is that also a variable?
$$ contains the last token of the last line input into the shell.
I am assuming you should just take that out. A good way to see what you are actually testing is to Write-Host $logdir prior to testing:
param (
[string] $mnemonic = 'JXW29'
)
$logdir = "T:\PowerShell Scripts\logs\STVXYZ\$mnemonic\"
Write-Host "path I am testing: $logdir"
if ($(Test-Path $logdir) -eq $False){
mkdir $logdir
}
Never mind, found it. Those extra $$'s in my path name needed to be escaped.
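For the record, a literal dollar sign inside a double-quoted string is escaped with a backtick, so the original path with variables can be written like this (a sketch based on the paths above):

```powershell
$mnemonic = 'JXW29'

# Backtick-escape each literal $ so PowerShell does not treat $$ as the
# automatic "last token" variable; $mnemonic still expands normally.
$logdir = "T:\`$`$PowerShell Scripts\logs\STVXYZ\$mnemonic\"

Write-Host "path I am testing: $logdir"
if (-not (Test-Path -LiteralPath $logdir)) {
    New-Item -Path $logdir -ItemType Directory | Out-Null
}
```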

How do I copy multiple files from multiple hosts in powershell?

I am trying to make a PowerShell script (5.1) that will copy several files and folders from several hosts. These hosts change frequently, so it would be ideal if I could use a list that I can append when required.
I have this all working using xcopy, so I know the locations exist. I want to ensure that if a change is made when I am not in work, someone can just add or remove a host in the text file and the backup will continue to work.
The code I have is supposed to go through each host in my list of hosts and copy all the files from the list of file paths before moving onto the next host.
But there are 2 errors showing up:
The term '\REMOTEHOST\c$\Users\Public\desktop\back-up\$Computers' is not recognized as the name of a cmdlet, function, script
file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:8 char:17
and:
copy-item : Cannot find path '\HOST\C$\LK\Appdata\Cmmcfg C$\LKAppData\Errc C$\LK\Appdata\TCOMP C$\LK\Probes C$\LK\Appdata\CAMIO C$\LK\Appdata\LaunchPad C$\LK\Appdata\Wincmes
C$\barlen.dta C$\Caliprogs C$\Cali' because it does not exist.
This does not seem to be reading through the list as I intended; I have also noticed that the HOST it is reading from is 6th in the list and not first.
REM*This file contains the list of hosts you want to copy files from*
$computers = Get-Content 'Y:\***FILEPATH***\HOSTFILE.txt'
REM*This is the file/folder(s) you want to copy from the hosts in the $computer variable*
$source = Get-Content 'Y:\***FILEPATH***\FilePaths.txt'
REM*The destination location you want the file/folder(s) to be copied to*
$destination = \\**REMOTEHOST**\c$\Users\Public\desktop\back-up\$Computers
foreach ($item in $computers) {
}
foreach ($item in $source) {
}
copy-item \\$computer\$source -Destination $destination -Verbose
Your destination variable needs to be enclosed in quotes. To have it evaluate other variables inside of it, enclose it in double quotes. Otherwise PowerShell thinks it's a command you are trying to run.
$destination = "\\**REMOTEHOST**\c$\Users\Public\desktop\back-up\$Computers"
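To illustrate the quoting difference, a quick sketch (the host name here is a placeholder):

```powershell
$Computers = 'HOST01'

# Double quotes: $Computers expands inside the string.
$destination = "\\REMOTEHOST\c$\Users\Public\desktop\back-up\$Computers"
# -> \\REMOTEHOST\c$\Users\Public\desktop\back-up\HOST01
# (note: "c$\" stays literal, since "$\" cannot start a variable name)

# Single quotes would keep everything literal, $Computers included:
# '\\REMOTEHOST\c$\Users\Public\desktop\back-up\$Computers'
```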
Cracked it, thank you for your help. I was messing up the foreach command! I had both variables set to $item, so I was confusing things!
foreach ($itemhost in $computers) {
    $destination = "\\Remotehost\c$\Users\xoliver.jeffries\desktop\back-up\$itemhost"
    foreach ($item in $source) {
        Copy-Item "\\$itemhost\$item*" -Destination $destination -Verbose -Recurse
    }
}
It's not the neatest output, but that's just a snag! The code now enables me to use a list of hosts and a list of files and copy them to a remote server!
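A slightly tidied version of the same idea, which also creates the per-host destination folder before copying (the redacted paths from the question are kept as placeholders):

```powershell
$computers = Get-Content 'Y:\***FILEPATH***\HOSTFILE.txt'   # one host per line
$source    = Get-Content 'Y:\***FILEPATH***\FilePaths.txt'  # one path per line

foreach ($computer in $computers) {
    $destination = "\\Remotehost\c$\Users\Public\desktop\back-up\$computer"

    # Make sure the per-host backup folder exists before copying into it.
    if (-not (Test-Path -LiteralPath $destination)) {
        New-Item -Path $destination -ItemType Directory | Out-Null
    }

    foreach ($item in $source) {
        Copy-Item "\\$computer\$item" -Destination $destination -Recurse -Verbose
    }
}
```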

Powershell: NTFS permissions and Parent Folders -pathtoolong issues

I apologize in advance of the long post. I have spent a significant amount of time trying to find an answer or piece together a solution to this problem.
It's a simple request: add a user/group with two sets of permissions across your entire DFS environment, applying one set to folders and sub-folders and the other to files only.
Seems easy enough; however, in the environment I'm trying to manage we have thousands of folder paths more than 260 characters deep, and any use of dir -Recurse or Get-ChildItem will hit the "PathTooLong" error. Every example solution to this problem uses a variation of the above or relies on Get-ChildItem. This fails in most real-world situations, as I believe many of us IT admins are faced with long paths due to the nature of DFS use.
The current attempt:
Currently I'm using a custom module "NTFSSecurity" which is fantastic to apply NTFS permissions. It works great!
It can be found here: https://ntfssecurity.codeplex.com/
A tutorial from here: https://blogs.technet.microsoft.com/fieldcoding/2014/12/05/ntfssecurity-tutorial-1-getting-adding-and-removing-permissions/
The problem found in the above tutorial and every other example I've been able to find, it references commands such as:
dir -Recurse | Get-NTFSAccess -Account $account
This will fail in the real world of super long file paths.
The "PathTooLong" error workaround:
My current workaround consists of using Robocopy to export the file paths to a text file. I found this recommended by someone dealing with a similar problem. Robocopy will not error on "PathTooLong" issues and is perfect for this exercise. I then run commands against the text file containing all of the paths I need to modify.
The command for the Robocopy is this:
robocopy '<insert source path here>' NULL /NJH /E /COPYALL /XF *.* | Out-File -FilePath '<path to fileout.txt>'
This will create a text file while copying only folder structure and permissions. Excellent!
You will then have to clean up the text file from additional characters which I use:
$filepath = '<path>\DFS_Folder_Structure.txt'
$structure = Get-Content $filepath
$structure -replace ' New Dir 0 ' | Out-File -FilePath \\<path_you_want_file>\DFS_Folder_Structure2.txt
I also reversed the contents of the text file so it shows the furthest child object (folder) and work down. I thought this might be easier for identifying a parent folder or some other recursive logic which I haven't been able to figure out.
To reverse text from bottom to top use this command here:
$x = Get-Content -Path 'C:\temp_dfs\DFS_Folder_Structure2.txt'; Set-Content -Path 'C:\temp_dfs\Reversed_data.txt' -Value ($x[($x.Length-1)..0])
This script currently only applies to paths with inheritance off, or for child objects with inheritance off. This is taken from the NTFSSecurity module command Get-NTFSInheritance, which will return results for AccessInheritance and AuditInheritance. Access is whether the folder is inheriting from a parent above; Audit is whether the folder is passing it down to child objects.
There are 4 possibilities:
AccessInheritance True AuditInheritance True
AccessInheritance True AuditInheritance False
AccessInheritance False AuditInheritance True
AccessInheritance False AuditInheritance False
(*Special note: I have seen all 4 show up in the DFS structure I'm dealing with.)
Script to Set Permissions based on file path contained in text file:
#Get File content to evaluate
$path = Get-Content 'C:\temp_dfs\Reversed_data.txt'
$ADaccount = '<insert fully qualified domain\user or group etc.>'
Foreach ($line in $path)
{
#Get-NTFSAccess -Path $line -Account $ADaccount | Remove-NTFSAccess
#This command will find the access of an account and then remove it.
#It has been omitted but included in case needed later.
$result = Get-NTFSInheritance -Path $line
If ($result.AccessInheritanceEnabled -Match "False" -and $result.AuditInheritanceEnabled -match "False")
{
Add-NTFSAccess -Path $line -Account $ADaccount -AccessRights Traverse,ExecuteFile,ListDirectory,ReadData,ReadAttributes,ReadExtendedAttributes,ReadPermissions -AppliesTo ThisFolderAndSubfolders
Add-NTFSAccess -Path $line -Account $ADaccount -AccessRights ReadAttributes,ReadExtendedAttributes,ReadPermissions -AppliesTo FilesOnly
}
If ($result.AccessInheritanceEnabled -Match "False" -and $result.AuditInheritanceEnabled -Match "True")
{
Add-NTFSAccess -Path $line -Account $ADaccount -AccessRights Traverse,ExecuteFile,ListDirectory,ReadData,ReadAttributes,ReadExtendedAttributes,ReadPermissions -AppliesTo ThisFolderAndSubfolders
Add-NTFSAccess -Path $line -Account $ADaccount -AccessRights ReadAttributes,ReadExtendedAttributes,ReadPermissions -AppliesTo FilesOnly
}
If ($result.AccessInheritanceEnabled -Match "True" -and $result.AuditInheritanceEnabled -Match "False")
{
continue
}
If ($result.AccessInheritanceEnabled -Match "True" -and $result.AuditInheritanceEnabled -Match "True")
{
continue
}
}
This script will apply permissions for the specified User/Group account and set permissions for Folder and Sub-folders and then add another set of permissions to Files only.
Now this current fix works great, except it only touches folders with inheritance turned off. This means you'd need to run this script and then set permissions on the "main parent folder". This is completely doable, may be the best method to avoid double entries of permissions, and is the current state of my solution.
If you add criteria to the bottom sections where AccessInheritanceEnabled = True and AuditInheritanceEnabled = True, you will get double entries, because you're applying permissions both to the parent (which pushes its permissions down to the child objects) and explicitly on the child objects themselves. This is because the text file contains both parent and child, and I haven't figured out a way to address that. This isn't horrible, but my OCD doesn't like double permissions being added if it can be avoided.
The real question:
What I'd really like to do is somehow identify parent folders, compare them to parents further up the tree, see whether each was inheriting permissions, and only apply the permission set to the highest parent in a specific chain. My mind wants to explode thinking about how you would compare parents and find the "highest parent".
Again, the problem is that any time you want to -Recurse the folder structure it will fail due to "PathTooLong" issues, so the logic needs to be confined to the text file paths. I've seen a bit mentioned about Split-Path, but I don't really understand how that's applied, or how you could compare one path to another until you have identified a parent path.
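To illustrate the Split-Path idea, here is a sketch of how one could find, for each path in the exported text file, the highest ancestor that also appears in the file; the function name and variable names are mine, not from the NTFSSecurity module:

```powershell
# Load the exported paths into a set for fast, case-insensitive lookups.
$paths   = Get-Content 'C:\temp_dfs\Reversed_data.txt'
$pathSet = [System.Collections.Generic.HashSet[string]]::new(
    [string[]]$paths, [System.StringComparer]::OrdinalIgnoreCase)

function Get-HighestKnownAncestor {
    param([string]$Path)
    $highest = $Path
    $parent  = Split-Path -Path $Path -Parent
    # Walk upward; remember the topmost ancestor that is also in the file.
    while ($parent) {
        if ($pathSet.Contains($parent)) { $highest = $parent }
        $parent = Split-Path -Path $parent -Parent
    }
    return $highest
}
```

Applying the permission set only when Get-HighestKnownAncestor returns the path itself would restrict the change to the top of each chain and avoid the double entries described above.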
Thank-you for taking the time to read this long post and question. If you're still awake now, I'm open to any suggestions. lol.
The NTFSSecurity module is indeed fantastic.
I used it to make a script that can export the NTFS security of a UNC path and its subfolders to a readable Excel file.
It can be found at:
https://github.com/tgoetheyn/Export-NtfsSecurity
I use it frequently and haven't had any problems with long filenames.
Hope you like it.
PS: If you add NTFS security, don't forget to include the "Synchronize" permission. If it's not included, strange things can happen.

How to Do MSBuild's GetDirectoryNameOfFileAbove in PowerShell?

In MSBuild, there's the GetDirectoryNameOfFileAbove property function.
How do I achieve the same with PowerShell?
Ideally it should have compact syntax, because it has to be pasted into every entry-point script to find its includes.
The idea of this question:
There's a large solution in source code control. Some of its parts are relatively autonomous.
It has a location for shared scripts and reusable functions, at a known folder under the root.
There are numerous entry-point scripts (files which you explicitly execute) scattered around the project, all of them including the shared scripts.
What's the convenient way for locating the shared scripts from the entry-point-script?
Relative paths turn out to work badly because they look like "../../../../../../scripts/smth" and are hard to write and maintain.
Registering modules is not a good option because (a) you're getting this from SCC, not by installing it, (b) you usually have different versions in different disk locations at the same time, and (c) it is an excess dependency on the environment when technically just local info is enough.
The MSBuild way of doing this (since v4) is as follows: drop a marker file (say, root.here or whatever) at the root, get the absolute path to that folder with GetDirectoryNameOfFileAbove, et voila! You've got the local root to build paths from.
Maybe it's not the right way to go with PowerShell, so I'd be grateful for such directions as well.
You can access the current script's folder thus:
$invoker = Split-Path -Parent $MyInvocation.MyCommand.Path
So the parent of that one is:
$parent = Split-Path -Parent $MyInvocation.MyCommand.Path | Split-Path -Parent
A quick and dirty solution looks like this:
function GetDirectoryNameOfFileAbove($markerfile)
{
    $result = ""
    # Start from the calling script's own path and walk up the directory tree.
    $path = $MyInvocation.ScriptName
    while (-not [string]::IsNullOrEmpty($path) -and ($result -eq ""))
    {
        if (Test-Path (Join-Path $path $markerfile)) { $result = $path }
        $path = Split-Path $path   # returns "" at the drive root, ending the loop
    }
    if ($result -eq "") { throw "Could not find marker file $markerfile in parent folders." }
    return $result
}
It could be compacted into a single line for planting into scripts, but it's still too C#-ish, and I think it might be shortened with some PS pipes/LINQ-style magic.
UPD: edited the script. It was found that $MyInvocation.MyCommand.Path is often NULL when the script is called from the command line with dot-sourcing (at any context level), so the current hypothesis is ScriptName.
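As a compact alternative (my own sketch, not from the original answers), the same upward walk can be written as a short loop suitable for pasting at the top of an entry-point script, using the root.here marker file mentioned above:

```powershell
# Walk up from this script until a folder containing the marker file is found.
$root = Split-Path $MyInvocation.ScriptName
while ($root -and -not (Test-Path (Join-Path $root 'root.here'))) {
    $root = Split-Path $root
}
if (-not $root) { throw 'Marker file root.here not found in any parent folder.' }
```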