I have a few hundred folders that look like this:
\\uat.xxx.com\FileExport\New Collections\LCTS
\\uat.xxx.com\FileExport\New Collections\GBSS
\\uat.xxx.com\FileExport\New Collections\TRGS
etc
I need to check them for a specific file e.g. "Results 20150722New.dat"
I need to know the folders that do not contain the file. It would be nice if that could be output to a file, e.g. log.txt, but mainly I need a list of the folders that do not contain it.
I have been trying to use Test-Path but am really not getting anywhere.
Any chance someone could help me make a start on this?
As a one-time operation, you can find the names (strings) of all directories that do not contain such a file using:
Get-ChildItem "\\uat.xxx.com\FileExport\New Collections\" |
    Where { $_.PSIsContainer } |
    ForEach { if (-not (Test-Path "$($_.FullName)\Results 20150722New.dat")) { Echo $_.FullName } }
Optionally, specify the -Recurse switch to search folders recursively.
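To also capture the list in log.txt, as the question mentions, just send the output on to Set-Content (a minimal sketch using the same path and file name as above):
Get-ChildItem "\\uat.xxx.com\FileExport\New Collections\" |
    Where { $_.PSIsContainer } |
    ForEach { if (-not (Test-Path "$($_.FullName)\Results 20150722New.dat")) { $_.FullName } } |
    Set-Content log.txt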
If there's a need to manipulate the results later, I'd prefer to save the DirectoryInfo objects to a collection instead of converting them to strings with the Echo cmdlet.
$dirs_not_containing_file = @()
$dirs_not_containing_file +=
    Get-ChildItem "\\uat.xxx.com\FileExport\New Collections\" |
    Where { $_.PSIsContainer } |
    ForEach { if (-not (Test-Path "$($_.FullName)\Results 20150722New.dat")) { $_ } }
I split the second statement across multiple lines for readability.
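With DirectoryInfo objects in hand, later manipulation is straightforward, for example (Name, FullName and LastWriteTime are standard DirectoryInfo properties):
$dirs_not_containing_file | Sort-Object Name | Format-Table Name, FullName, LastWriteTime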
I'm attempting to find files in a folder of filenames that look like the following:
C:\XMLFiles\
in.blahblah.xml
out.blahblah.xml
in.blah.xml
out.blah.xml
I need to return only the files that do not have their "counterpart". This folder contains thousands of files with randomized "center" portions in the file names... the commonality is the in/out prefix and the ".xml" extension.
Is there a way to do this in PowerShell? It's an odd ask.
Thanks.
Your question is a little vague. I hope I got it right. Here is how I would do it.
$dir = 'my_dir'
$singleFiles = [System.Collections.Generic.HashSet[string]]::new()
Get-ChildItem $dir -Filter '*.xml' | ForEach-Object {
    if ($_.BaseName -match '^(?<prefix>in|out)(?<rest>\..+)') {
        $oppositeFileName = if ($Matches.prefix -eq 'in') {
            'out'
        }
        else {
            'in'
        }
        $oppositeFileName += $Matches.rest + $_.Extension
        $oppositeFileFullName = Join-Path $_.DirectoryName -ChildPath $oppositeFileName
        if ($singleFiles.Contains($oppositeFileFullName)) {
            $singleFiles.Remove($oppositeFileFullName) | Out-Null
        }
        else {
            $singleFiles.Add($_.FullName) | Out-Null
        }
    }
}
$singleFiles
I'm getting all the XML files from the directory and iterating over the results. I check whether the base name of each file (the file name without the directory path and the extension) matches a regex. The regex says: match if the name starts with in or out, followed by a dot and at least one more character.
The $Matches automatic variable contains the matched groups. Based on these groups I build the name of the counterpart file: i.e. if I'm currently on in.abc, I build out.abc.
After that, I build the absolute path of the counterpart file and check if it exists in the HashSet. If it does, I remove it, because that means that at some point I already iterated over that file. Otherwise, I add the current file.
The resulting HashSet will contain the files that do not have a counterpart.
Tell me if you need a more detailed explanation and I will go line by line. It could be refactored a bit, but it does the job.
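If you want to try it on disposable data first, here is a scratch setup I'd use (the directory and file names are invented for the test; after running the script above with $dir pointing at it, $singleFiles should contain only the in.solo.xml path):
$dir = Join-Path $env:TEMP 'xmltest'
New-Item -ItemType Directory -Path $dir -Force | Out-Null
'in.blah.xml', 'out.blah.xml', 'in.solo.xml' | ForEach-Object {
    New-Item -ItemType File -Path (Join-Path $dir $_) -Force | Out-Null
}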
I found many similar examples but not this exact goal. I have a fairly large number of text files that all have a similar first word (clipxx) where xx is a different number in each file.
I want to rename each file using the first word in the file. Here is what I have tried using Powershell. I get an error that I cannot call a method on a null valued expression.
Get-ChildItem *.avs | ForEach-Object { Rename-Item = Get-Content ($line.Split(" "))[0] }
I'd do this in three parts:
Get the list of files you want to change.
Create and map the new name for each file.
Rename the files.
phase-1: get the list of files you want to change.
$files = Get-ChildItem *.avs
phase-2: map the file name to a new name
$file_map = @()
foreach ($file in $files) {
    $file_map += @{
        OldName = $file.FullName
        # take the first whitespace-separated word of the first line
        NewName = "{0}.avs" -f ((Get-Content $file.FullName -TotalCount 1) -split '\s+')[0]
    }
}
phase-3: make the name change
$file_map | % { Rename-Item -Path $_.OldName -NewName $_.NewName }
Changing things in a list that you're enumerating can be tricky. That's why I recommend breaking this up.
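That said, if you do want a single pass, Rename-Item accepts a script block for -NewName (a delay-bound parameter), which sidesteps the enumeration problem because the pipeline hands it one file at a time. A sketch, untested against your data, with -WhatIf so you can preview the renames first:
Get-ChildItem *.avs |
    Rename-Item -NewName { '{0}.avs' -f ((Get-Content $_.FullName -TotalCount 1) -split '\s+')[0] } -WhatIf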
Good Luck
I have a folder with a huge amount of files in it with lots of different extensions (abc, abc_trg, def, def_trg, ghi, ghi_trg, jkl, mno).
You will see that there are some files that have a matching 'trigger' file, but not all files in this folder need to have a trigger file, it is just the following extensions that must have a trigger file: abc, def, ghi.
filename1.abc
filename1.abc_trg
filename2.def
filename2.def_trg
filename3.abc
filename4.def_trg
filename5.ghi
filename6.jkl
filename7.mno
filename8.ghi
filename8.ghi_trg
filename9.jkl
i.e. the extension types that have trigger files (abc, abc_trg, def, def_trg, ghi, ghi_trg) must have a matching filename.
I need a PowerShell script that will analyse the folder and compare the files that are meant to exist with a trigger file type (abc, abc_trg, def, def_trg, ghi, ghi_trg). If a match is found (e.g. filename1, filename2, filename8), or if a file's extension is not in this list, e.g. jkl & mno (filename6.jkl, filename7.mno, filename9.jkl), then those files are left untouched.
If there are files that are meant to have a matching extension & trigger file but do not, i.e. they have become orphaned, then these need to be deleted (e.g. filename3.abc, filename4.def_trg, filename5.ghi).
So the resultant file list should look like this:
filename1.abc
filename1.abc_trg
filename2.def
filename2.def_trg
filename6.jkl
filename7.mno
filename8.ghi
filename8.ghi_trg
filename9.jkl
Here is my code so far:
$strDir = "D:\Temp\FileCompareTest\"
$strFileTypesToIgnore = ".jkl",".mno"
$strExtABC = ".abc"
$strExtABC_Trigger = ".abc_trg"
$strExtDEF = ".def"
$strExtDEF_Trigger = ".def_trg"
$strExtGHI = ".ghi"
$strExtGHI_Trigger = ".ghi_trg"
$arrFiles = Get-ChildItem $strDir -exclude $strFileTypesToIgnore
ForEach ($objFile in $arrFiles) {
    $strFilename = $objFile.BaseName
    $strExtension = $objFile.Extension
    If ($strExtension -eq ".abc") {
        $arrFiles2 = Get-ChildItem $strDir -exclude $strFileTypesToIgnore
        ForEach ($objFile2 in $arrFiles2) {
            $strFilename2 = $objFile2.BaseName
            $strExtension2 = $objFile2.Extension
            If ($strExtension2 -eq ".abc_trg") {
                If (Compare-Object $strFilename $strFilename2) {
                    Write-Host "match is: $strFilename$strExtension and $strFilename2$strExtension2"
                } Else {
                    Write-Host "Not a match: $strFilename$strExtension and $strFilename2$strExtension2"
                }
            }
        }
    }
}
Can you help please?
Regards
Darren
Thanks for including your code. You have the right idea of comparing names and extensions, but let's try a different approach.
We loop over each file and check its extension. Depending on whether or not the file is a trg file, we have a similar means of checking for its partner. Since we use a Where-Object clause, the matching files are passed down the pipe to Remove-Item for deletion. Files whose extensions are not in the required list are filtered out up front, so the .jkl and .mno files are never touched. Test with the -WhatIf switch first to verify it is working.
I tried to use simple cmdlets and methods for string manipulation.
$path = "C:\temp\test"
$extension = "_trg"
$files = Get-ChildItem -Path $path
$fileNames = $files | Select-Object -ExpandProperty Name
$files | Where-Object{
# Check what extension this file is so we can find the appropriate partner
if($_.Extension.Contains($extension)){
# Attempt to find a matching non trg file
$fileNames -notcontains $_.Name.Substring(0, $_.Name.LastIndexOf($extension))
} else {
# Attempt to find a matching trg file
$fileNames -notcontains "$($_.Name)$extension"
}
} | Remove-Item -Force -Confirm:$false -WhatIf
We save all the file names in $fileNames and use -notcontains to see if the file in the loop has its partner in the list. If not, it passes through the pipe. With -WhatIf in place, you should see lines like 'What if: Performing the operation "Remove File" on target ...' for filename3.abc, filename4.def_trg and filename5.ghi; remove -WhatIf once you are happy with the output.
I have a source tree, say c:\s, with many sub-folders. One of the sub-folders is called "c:\s\Includes" which can contain one or more .cs files recursively.
I want to make sure that none of the .cs files in the c:\s\Includes... path exist in any other folder under c:\s, recursively.
I wrote the following PowerShell script which works, but I'm not sure if there's an easier way to do it. I've had less than 24 hours experience with PowerShell so I have a feeling there's a better way.
I can assume at least PowerShell 3 is being used.
I will accept any answer that improves my script, but I'll wait a few days before accepting the answer. When I say "improve", I mean it makes it shorter, more elegant or with better performance.
Any help from anyone would be greatly appreciated.
The current code:
$excludeFolder = "Includes"
$h = @{}
foreach ($i in ls $pwd.path *.cs -r -file | ? DirectoryName -notlike ("*\" + $excludeFolder + "\*")) { $h[$i.Name] = $i.DirectoryName }
ls ($pwd.path + "\" + $excludeFolder) *.cs -r -file | ? { $h.Contains($_.Name) } | Select @{Name="Duplicate";Expression={$h[$_.Name] + " has file with same name as " + $_.Fullname}}
1
I stared at this for a while, determined to write it without studying the existing answers, but I'd already glanced at the first sentence of Matt's answer mentioning Group-Object. After some different approaches, I got basically the same answer, except his is long-form and robust with regex character escaping and setup variables, while mine is terse, because you asked for shorter answers and because that's more fun.
$inc = '^c:\\s\\includes'
$cs = (gci -R 'c:\s' -File -I *.cs) | group name
$nopes = $cs |?{($_.Group.FullName -notmatch $inc)-and($_.Group.FullName -match $inc)}
$nopes | % {$_.Name; $_.Group.FullName}
Example output:
someFile.cs
c:\s\includes\wherever\someFile.cs
c:\s\lib\factories\alt\someFile.cs
c:\s\contrib\users\aa\testing\someFile.cs
The concept is:
Get all the .cs files in the whole source tree
Split them into groups of {filename: {files which share this filename}}
For each group, keep only those where the set of files contains at least one file whose path matches the includes folder and at least one file whose path does not. This step covers:
duplicates (if a file exists only once, it cannot pass both tests)
duplicates across the {includes / not-includes} divide, rather than duplicates within one branch
triplicates, n-tuplicates, and so on.
Edit: I added the ^ to $inc to say it has to match at the start of the string, so the regex engine can fail faster for paths that don't match. Maybe this counts as premature optimization.
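Incidentally, the same pattern can be built without hand-escaping the backslashes; this variant is mine, not part of the original golf:
$inc = '^' + [regex]::Escape('c:\s\includes')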
2
After that pretty dense attempt, the shape of a cleaner answer is much much easier:
Get all the files, split them into include, not-include arrays.
Nested for-loop testing every file against every other file.
Longer, but enormously quicker to write (it runs slower, though) and I imagine easier to read for someone who doesn't know what it does.
$sourceTree = 'c:\\s'
$allFiles = Get-ChildItem $sourceTree -Include '*.cs' -File -Recurse
$includeFiles = $allFiles | where FullName -imatch "$($sourceTree)\\includes"
$otherFiles = $allFiles | where FullName -inotmatch "$($sourceTree)\\includes"
foreach ($incFile in $includeFiles) {
    foreach ($oFile in $otherFiles) {
        if ($incFile.Name -ieq $oFile.Name) {
            write "$($incFile.Name) clash"
            write "* $($incFile.FullName)"
            write "* $($oFile.FullName)"
            write "`n"
        }
    }
}
3
Because code-golf is fun. If the hashtables are faster, what about this even less tested one-liner...
$h=@{};gci c:\s -R -file -Filt *.cs|%{$h[$_.Name]+=@($_.FullName)};$h.Values|?{$_.Count -gt 1 -and $_ -like 'c:\s\includes*'}
Edit: explanation of this version: It's doing much the same solution approach as version 1, but the grouping operation happens explicitly in the hashtable. The shape of the hashtable becomes:
$h = @{
    'fileA.cs' = @('c:\cs\wherever\fileA.cs', 'c:\cs\includes\fileA.cs')
    'file2.cs' = @('c:\cs\somewhere\file2.cs')
    'file3.cs' = @('c:\cs\includes\file3.cs', 'c:\cs\x\file3.cs', 'c:\cs\z\file3.cs')
}
It hits the disk once for all the .cs files, then iterates the whole list to build the hashtable. I don't think it can do less work than this for that bit.
It uses +=, so it can add files to the existing array for that filename, otherwise it would overwrite each of the hashtable lists and they would be one item long for only the most recently seen file.
It uses @() because, when it hits a filename for the first time, $h[$_.Name] won't return anything, and the script needs to put an array into the hashtable at that point, not a string. If it were +=$_.FullName then the first file would go into the hashtable as a string, and the += next time would do string concatenation, which is no use to me. This forces every file into a one-item array, so the first file for a given name starts an array. The least-code way to get this result is with +=@(..), but that churn of creating a throwaway array for every single file is needless work. Maybe changing it to longer code which does less array creation would help?
Changing the section
%{$h[$_.Name]+=@($_.FullName)}
to something like
%{if (!$h.ContainsKey($_.Name)){$h[$_.Name]=@()};$h[$_.Name]+=$_.FullName}
(I'm guessing, I don't have much intuition for what's most likely to be slow PowerShell code, and haven't tested).
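Another equally untested guess for avoiding the array churn altogether: store a List[string] per key, which grows in place instead of re-allocating on every +=, and still exposes the Count property the later filter relies on:
%{if (!$h.ContainsKey($_.Name)){$h[$_.Name]=New-Object System.Collections.Generic.List[string]};$h[$_.Name].Add($_.FullName)}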
After that, using $h.Values isn't going over every file a second time; it's going over every array in the hashtable, one per unique filename. That has to happen to check the array size and prune the non-duplicates, but the -and operation short-circuits: when the Count -gt 1 test fails, the bit on the right checking the path name doesn't run.
If the array has two or more files in it, the -and $_ -like ... executes and pattern-matches to see if at least one of the duplicates is in the includes path. (Bug: if all the duplicates are in c:\cs\includes and none anywhere else, it will still show them.)
--
4
This is edited version 3 with the hashtable initialization tweak, and now it keeps track of seen files in $s, and then only considers those it's seen more than once.
$h=@{};$s=@{};gci 'c:\s' -R -file -Filt *.cs|%{if($h.ContainsKey($_.Name)){$s[$_.Name]=1}else{$h[$_.Name]=@()};$h[$_.Name]+=$_.FullName};$s.Keys|%{if ($h[$_] -like 'c:\s\includes*'){$h[$_]}}
Assuming it works, that's what it does, anyway.
--
Edit branch of topic; I keep thinking there ought to be a way to do this with the things in the System.Data namespace. Anyone know if you can connect System.Data.DataTable().ReadXML() to gci | ConvertTo-Xml without reams of boilerplate?
I'd do more or less the same, except I'd build the hashtable from the contents of the includes folder and then run over everything else to check for duplicates:
$root = 'C:\s'
$includes = "$root\includes"
$includeList = @{}
Get-ChildItem -Path $includes -Filter '*.cs' -Recurse -File |
% { $includeList[$_.Name] = $_.DirectoryName }
Get-ChildItem -Path $root -Filter '*.cs' -Recurse -File |
? { $_.FullName -notlike "$includes\*" -and $includeList.Contains($_.Name) } |
% { "Duplicate of '{0}': {1}" -f $includeList[$_.Name], $_.FullName }
I'm not as impressed with this as I would like to be, but I thought that Group-Object might have a place in this question, so I present the following:
$base = 'C:\s'
$unique = "$base\includes"
$extension = "*.cs"
Get-ChildItem -Path $base -Filter $extension -Recurse |
    Group-Object Name |
    Where-Object{ ($_.Count -gt 1) -and (($_.Group).FullName -match [regex]::Escape($unique)) } |
    ForEach-Object {
        $filename = $_.Name
        ($_.Group).FullName -notmatch [regex]::Escape($unique) | ForEach-Object{
            "'{0}' has file with same name as '{1}'" -f (Split-Path $_), $filename
        }
    }
Collect all the files matching the extension filter $extension. Group the files based on their names. Then, of those groups, find every group where there is more than one of that particular file and at least one of the group members is in the directory $unique. Take those groups and print out all the files that are not from the unique directory.
From Comment
For what it's worth, this is what I used for testing to create a bunch of files. (I know folder 9 ends up empty: Get-Random's -Maximum is exclusive, so it never returns 9.)
$base = "E:\Temp\dev\cs"
Remove-Item "$base\*" -Recurse -Force
0..9 | %{[void](New-Item -ItemType directory "$base\$_")}
1..1000 | %{
    $number = Get-Random -Minimum 1 -Maximum 100
    $folder = Get-Random -Minimum 0 -Maximum 9
    [void](New-Item -Path $base\$folder -ItemType File -Name "$number.txt" -Force)
}
After looking at all the others, I thought I would try a different approach.
$includes = "C:\s\includes"
$root = "C:\s"
# First script
Measure-Command {
    [string[]]$filter = ls $includes -Filter *.cs -Recurse | % name
    ls $root -Include $filter -Recurse -Filter *.cs |
        Where-Object{ $_.FullName -notlike "$includes*" }
}
# Second script
Measure-Command {
    $filter2 = ls $includes -Filter *.cs -Recurse
    ls $root -Recurse -Filter *.cs |
        Where-Object{ $filter2.name -eq $_.name -and $_.FullName -notlike "$includes*" }
}
In my first script, I get the names of all the include files into a string array. Then I use that string array as the -Include parameter on Get-ChildItem. At the end, I filter the include folder itself out of the results.
In my second script, I enumerate everything and then filter after the pipe.
Remove the Measure-Command to see the results; I was using it to check the speed. With my dataset, the first one was about 40% faster.
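Measure-Command returns a TimeSpan, so if you want the comparison printed directly, something like this works (a minimal sketch; drop either script into the braces):
$t1 = Measure-Command { <# first script #> }
$t2 = Measure-Command { <# second script #> }
'{0:n0} ms vs {1:n0} ms' -f $t1.TotalMilliseconds, $t2.TotalMilliseconds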
$FilesToFind = Get-ChildItem -Recurse 'c:\s\includes' -File -Include *.cs | Select-Object -ExpandProperty Name
Get-ChildItem -Recurse 'c:\s' -File -Include *.cs |
    Where-Object { $_.Name -in $FilesToFind -and $_.Directory -notmatch ('^' + [regex]::Escape('c:\s\includes')) } |
    Select-Object Name, Directory
Create a list of file names to look for.
Find all files that are in the list but not in the directory the list was generated from.
Print their name and directory.
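If you'd rather have a report file than console output, the same pipeline can end in Export-Csv (my addition; duplicates.csv is an arbitrary name):
Get-ChildItem -Recurse 'c:\s' -File -Include *.cs |
    Where-Object { $_.Name -in $FilesToFind -and $_.Directory -notmatch ('^' + [regex]::Escape('c:\s\includes')) } |
    Select-Object Name, Directory |
    Export-Csv duplicates.csv -NoTypeInformation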
I have limited experience with PowerShell, doing very basic tasks by themselves (such as simple renaming or moving of files), but I've never created a script that actually extracts information from another file and applies that data directly to a file name.
I'd like to create a script that can reference a simple .csv or text file containing a list of unique identifiers, and assign those names to a batch of duplicated files (they all have the same contents) whose names differ only by a 3-digit number prepended to a generic name.
For example, let's say my list of files are something like this:
001_test.txt
002_test.txt
003_test.txt
004_test.txt
005_test.txt
etc.
Then my .csv contains an alphabetical list of what I would like those to become:
Alpha.txt
Beta.txt
Charlie.txt
Delta.txt
Echo.txt
etc.
I tried looking at similar examples, but I'm failing miserably trying to tailor them to get it to do the above.
EDIT: I didn't save what I already modified, but here is the baseline script I was messing with:
$file_server = Read-Host "Enter the file server IP address"
$rootFolder = 'C:\TEMP\GPO\source\5'
Get-ChildItem -LiteralPath $rootFolder -Directory |
    Where-Object { $_.Name -as [System.Guid] } |
    ForEach-Object {
        $directory = $_.FullName
        (Get-Content "$directory\gpreport.xml") |
            ForEach-Object { $_ -replace "99.999.999.999", $file_server } |
            Set-Content "$directory\gpreport.xml"
        # ... etc
    }
I think this is for replacing a string inside a file, though. I need to replace the file name itself, using a list from another file (which is not getting renamed), while not changing the contents of the files being renamed.
So you want to rename similar files with those listed in a text file. OK, here's what you are going to need for my solution (aliases listed in parentheses): Get-Content (GC), Get-ChildItem (GCI), Where-Object (?), Rename-Item, ForEach-Object (%).
$NewNames = GC c:\temp\Namelist.txt #Path, including file name, to list of new names
$Name = "dog.txt" #File name without the 001_ prefix
$Path = "C:\Temp" #Path to search
$i=0
GCI $path | ?{$_.Name -match "\d{3}_$Name"}|%{Rename-Item $_.FullName $NewNames[$i];$i++}
Tested as working. That gets your list of new names and saves it as an array. Then it defines your file name and path, and sets $i to 0 as a counter. Then, for each file that matches your pattern, it renames the file to item number $i in the array of new names, increments $i by one, and moves on to the next file.
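One thing I'd double-check, as a caution of mine rather than part of the tested snippet: Get-ChildItem normally returns files alphabetically, but if you want to be explicit that 001_, 002_, ... pair up with the order of the name list, add a sort before the rename:
GCI $path | ?{$_.Name -match "\d{3}_$Name"} | Sort-Object Name | %{Rename-Item $_.FullName $NewNames[$i];$i++}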
I haven't tested this, but it should be pretty close. It assumes you have a CSV with a column named FileNames, and that the list holds at least as many names as there are files on disk.
$newNames = Import-Csv newfilenames.csv | Select -ExpandProperty FileNames
$existingFiles = Get-ChildItem c:\someplace
for ($i = 0; $i -lt $existingFiles.Count; $i++)
{
    Rename-Item -Path $existingFiles[$i].FullName -NewName $newNames[$i]
}
Basically, you create two arrays and use a basic for loop to step through the list of files on disk, pulling the new name from the corresponding index in the $newNames array.
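A guard worth adding between building the arrays and the for loop (my addition, not in the original): fail early if the name list is shorter than the file list, since $newNames[$i] would otherwise come back empty:
if ($newNames.Count -lt $existingFiles.Count) {
    throw "Only $($newNames.Count) new names for $($existingFiles.Count) files."
}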
Does your CSV file map the identifiers to the file names?
Identifier,NewName
001,Alpha
002,Beta
If so, you'll need to look up the identifier before renaming the file:
# Define the naming convention
$Suffix = '_test'
$Extension = 'txt'
# Get the files and what to rename them to
$Files = Get-ChildItem "*$Suffix.$Extension"
$Csv = Import-Csv 'Names.csv'
# Rename the files
foreach ($File in $Files) {
    $NewName = ($Csv | Where-Object { $File.Name -match ('^' + $_.Identifier) } | Select-Object -ExpandProperty NewName)
    Rename-Item $File "$NewName.$Extension"
}
If your CSV file is just a sequential list of filenames, logicaldiagram's answer is probably more along the lines of what you're looking for.