I currently have a script that copies a bunch of stuff over, but it's copying too many things.
My current script is:
robocopy $SourcePath $DestinationPath '????????????.jpg' /XO /S /MAXAGE:540
So this makes sure I don't copy files over that are too old. But it's still copying lots of files whose names are not "right".
Ideally I'd like to copy files that are in the following formats:
123456.jpg
1234567.jpg
AB12.jpg
ABC12.jpg
In other words: numbers from 1 to 9999999 (not zero-padded) with the .jpg extension, or two or three alphanumeric characters followed by two numerics.
I could also run 3 or 4 separate scripts to do this if I can't define it all in one go.
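If I have to fall back from robocopy to plain PowerShell, I imagine it would look something like this (untested; the regex is my attempt at the three name shapes above, it doesn't replicate /XO, and unlike /S it flattens everything into one destination folder):

$pattern = '^([1-9][0-9]{0,6}|[A-Za-z0-9]{2,3}[0-9]{2})\.jpg$'
Get-ChildItem -Path $SourcePath -Recurse |
    Where-Object {
        -not $_.PSIsContainer -and
        $_.Name -match $pattern -and
        $_.LastWriteTime -gt (Get-Date).AddDays(-540)
    } |
    Copy-Item -Destination $DestinationPath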
We have 30 processes running which generate error screenshots, and we only keep 30 days' worth. I've been trying to write a PowerShell script to do this. The problem I'm facing is with wildcards in the folder crawl. Say I have the following files:
C:\Runs\Process-1\AppFiles\Dummy.txt
C:\Runs\Process-1\AppFiles\Dummy.png
C:\Runs\Process-2\AppFiles\DummyPic.png
C:\Runs\Process-3\AppFiles\Dummy.log
C:\Runs\Process-3\AppFiles\Dummy1.png
And I want to get rid of all the png files in those subfolders more than 30 days old.
I tried:
ForFiles /p "C:\Runs\Process*" /s /d -30 /m "*.png"
but it doesn't like my folder wildcard. Help anyone?
In PowerShell you can try this:
Get-ChildItem "C:\Runs\Process*\AppFiles\*.png" | Where-Object { $_.CreationTime -lt (Get-Date).AddDays(-30) } | Remove-Item
I would suggest using nested forfiles loops:
An outer forfiles loop over the directories (Process-*), and
An inner forfiles loop over the *.png files that you wish to delete.
This way you have the additional flexibility of two loops to play with. For example:
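Something along these lines (untested; forfiles substitutes the hex escapes 0x22 and 0x40 with a quote and an @ when it parses the outer command, which is what stops the outer loop from expanding the inner loop's @path itself). Swap del for echo to dry-run it first:

forfiles /P "C:\Runs" /M "Process-*" /C "cmd /c if @isdir==TRUE forfiles /P @path /S /D -30 /M *.png /C 0x22cmd /c del 0x40path0x22"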
Another, less elegant, method would be to use a ForEach-Object loop, again containing a nested forfiles, with a list of directories supplied to ForEach-Object. However, you would then have to use a pre-determined list of directories. Obviously, you could also use ForEach-Object for the inner loop as well, but again you would need a pre-determined list of .png files, which pretty much defeats the whole object of the exercise.
The nested forfiles approach is much better, IMHO.
I have a lot of ANSI text files that vary in size (from a few KB up to 1GB+) that I need to convert to Unicode.
At the moment, this has been done by loading the files into Notepad and then doing "Save As..." and selecting Unicode as the Encoding. Obviously this is very time consuming!
I'm looking for a way to convert all the files in one hit (in Windows). The files are in a directory structure so it would need to be able to traverse the full folder structure and convert all the files within it.
I've tried a few options but so far nothing has really ticked all the boxes:
The ansi2unicode command line utility. This has been the closest to what I'm after, as it processes files recursively in a folder structure... but it keeps crashing partway through the conversion.
The CpConverter GUI utility. Works OK up to a point but struggles with multiple files in a folder structure - it only seems to be able to handle files in one folder.
There's a DOS command that works OK on smaller files but doesn't seem to be able to cope with large files.
I tried the GnuWin sed utility, but it crashes every time I try to install it.
So I'm still looking! If anyone has any recommendations I'd be really grateful
Thanks...
OK, so in case anyone else is interested, I found a way to do this using PowerShell:
Get-ChildItem "c:\some path\" -Filter *.csv -recurse |
Foreach-Object {
Write-Host (Get-Date).ToString() $_.FullName
Get-Content $_.FullName | Set-Content -Encoding unicode ($_.FullName + '_unicode.csv')
}
This recurses through the entire folder structure and converts all CSV files to Unicode (UTF-16 LE); the converted files are written to the same locations as the originals but with "_unicode.csv" appended to the name. You can change the value of the -Encoding parameter if you want to convert to something different (e.g. utf8).
It also outputs a list of all the files converted, with a timestamp against each.
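One caveat for the 1 GB+ files: by default Get-Content sends lines down the pipeline one at a time, which is slow. Its standard -ReadCount parameter batches lines (here 2000 at a time), which should speed the conversion up considerably:

Get-Content $_.FullName -ReadCount 2000 | Set-Content -Encoding Unicode ($_.FullName + '_unicode.csv')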
I've been looking into batch renaming files with PowerShell and I've made some good progress. To put it simply, I'm looking to remove all extra zeros from the beginning of my file names. So far I have a folder of images named as such:
0001_random_name.jpg
0002_random_name.jpg
0003_random_name.jpg
All the way up to 900~. I created a PowerShell script that takes the first four characters and adds the .jpg extension back. Here is that script:
Get-ChildItem 'G:\InvaluableNumbered' | Rename-Item -NewName { $_.Name.Substring(0,4) + ".jpg" }
This renames the files to
0001.jpg
0002.jpg
0003.jpg
For this project I need to name them
1.jpg
2.jpg
3.jpg
All the way up to 968.jpg. Is there any way I can use the script that I wrote and then have another command that strips the leading 0s up to the first non-zero digit?
Thank you for your time.
You can use the TrimStart string method to achieve the desired result:
$_.name.substring(0,4).TrimStart('0')
Another possibility is to cast the value to [int] and then back to [string]:
[string][int]$_.name.substring(0,4)
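Plugged back into the original one-liner, the first option would look like this (untested):

Get-ChildItem 'G:\InvaluableNumbered' | Rename-Item -NewName { $_.Name.Substring(0,4).TrimStart('0') + ".jpg" }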
I think we all know the PSIsContainer property to check whether the current item is a folder or not. But in my project I need a way to quickly know the number of folders in a folder. All I need is to quickly get their number. I want to write to a .txt lines which would look like C:\folder;12. It would mean that in the folder, with the -Recurse argument, there are 12 folders.
To explain why: I need to save the progress of my work when I shut down the program which is used to analyse some folders. When a folder is analysed, the result is written to a second .txt. For example, if a folder is called C:\folder\folder1, folder will be analysed and then folder1 will be too, which makes folder appear 2 times in the file because the full name is always written. What I want to do is count the number of lines where C:\folder is written. If it equals the number next to its path in the first .txt, it means the folder has already been analysed and the function doesn't need to do it again.
Does someone have a solution? Or maybe another idea for saving the progress? Because I really have the feeling this is taking too long to do this way.
Thank you for your help.
Another approach, which I find much faster, is using cmd's built-in dir command.
Of course, this is in case you don't need the subfolders (if you do, you can run the function in a foreach loop, or change the function accordingly).
Function Get-FolderCount($path)
{
    # dir /a:d lists directories only; the last line of its output is the
    # "n Dir(s)  ... bytes free" summary.
    $Dir = cmd /c dir $path /a:d
    # Pull the count off the front of that summary line. Note that dir also
    # lists "." and "..", which are included in the count.
    Return ($Dir[-1] -csplit 'Dir' -replace '\s')[0]
}
I use this as well for measuring folder size with the /s switch, taking the total size from the summary line, which is much faster than PowerShell, and also much faster than running it in an interactive shell...
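Hypothetical usage for the progress file described in the question (the paths are placeholders):

$path = 'C:\folder'
"$path;$(Get-FolderCount $path)" | Add-Content 'C:\progress.txt'   # writes e.g. C:\folder;12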
So I feel like PS would be the best solution for this project, but I cannot for the life of me figure out where to get started with it. Here's the file layout:
I've got one folder, filled with folders generated by our automated system, they are labeled: foobarXXXXXXXXXXXXXXX
The last 15 characters of each folder name are what I need to grab; I then need to search through another folder for any files that contain those 15 characters, and move any files found into their respective folders.
I can give more details if this wasn't sufficient. Just need a point to get started.
I'm running Windows 7, in case the version of PowerShell is a concern.
Ideally you want Powershell 3, but you can accomplish this task in Powershell 2 as well.
I would first look into the Select-String cmdlet.
It is also perfectly legal to use the .NET Substring method for string manipulation; for the last 15 characters of a name, that would be something like:
$filePattern = $string.Substring($string.Length - 15)
To get collections of your files, you should use Get-ChildItem. Using the "@" in "@(Get-ChildItem)" produces an explicit array.
$files = @(Get-ChildItem -Path $path -Recurse)
And since there is no specific detail in your question, there are no specific answers.
Also, I run Windows 7 with Powershell 2 and 3, side by side. Powershell 3 is kinda awesome.
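To make that concrete, here is a rough, untested sketch of the whole task (it runs on PowerShell 2 as well); every path below is a placeholder, and it assumes each folder name ends in the 15-character key:

$runFolders = 'C:\AutomatedRuns'   # placeholder: holds the foobarXXXXXXXXXXXXXXX folders
$incoming   = 'C:\Incoming'        # placeholder: holds the files to be sorted

Get-ChildItem -Path $runFolders | Where-Object { $_.PSIsContainer } | ForEach-Object {
    $key  = $_.Name.Substring($_.Name.Length - 15)   # last 15 characters of the folder name
    $dest = $_.FullName
    Get-ChildItem -Path $incoming |
        Where-Object { -not $_.PSIsContainer -and $_.Name -like "*$key*" } |
        Move-Item -Destination $dest
}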