Looking for some help please - I have no experience writing code, so I've been looking for a question/answer that comes close, but no luck so far.
My huge movie database lives on the NAS drive "Video Y", each movie in its own subdirectory. It holds multiple video file types, most being .avi, and I wanted to convert all the .avi files to .mp4 (some devices will not play .avi).
So I filtered out all the .avi files and put them in one new directory, "0 temp holder for avi", so I could use VideoProc to convert them; this converted the files and placed the .mp4 output in another new directory, "00 temp holder MP4".
Now I want to move each .mp4 file back into its original subdirectory, which still contains various files related to the movie (.srt and so on).
I think the simplest way for me is lining up the files in alphabetical order and the directories in the same order (as directory names and file names are not necessarily exactly the same), checking for mismatches and correcting as needed, then using some code to move the first file into the first directory and iterating from there. But I'm stumped and not sure how to go about it.
I've put this under the Windows 10 and PowerShell tags, but someone may be able to suggest more accurate tags, please.
Directory layout
At the beginning of the PowerShell script shown below, the $arrVideoFolders variable loads all the folder names into an array, and the $arrFolder variable holds the filenames of all your video files in a separate array.
Shown below:
# Names of all the movie sub-directories (replace the path with your folder root).
$arrVideoFolders = Get-ChildItem 'Y:\' |
    Where-Object {$_.PSIsContainer} |
    Foreach-Object {$_.Name}

# Names of all the video files (replace the path; note files are NOT containers).
$arrFolder = Get-ChildItem 'Y:\00 temp holder MP4' |
    Where-Object {-not $_.PSIsContainer} |
    Foreach-Object {$_.Name}
To give the logic of how I would write the rest of the PowerShell script: a foreach loop would go through all the folders, and for each folder a second, nested foreach loop would go through your video files. An if statement would then test whether the video file name is like the folder name (this relies on your file and folder names being similar). Once a match is found, you can copy or move the file, as sketched below.
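Here is a minimal sketch of that nested-loop logic. The two paths are placeholders (your movie root and the VideoProc output folder), and the -like test assumes each .mp4 name contains its folder's name:

$folderRoot = 'Y:\'                     # placeholder: root holding the per-movie sub-directories
$mp4Source  = 'Y:\00 temp holder MP4'   # the folder VideoProc wrote the .mp4 files to

foreach ($folder in Get-ChildItem $folderRoot -Directory) {
    foreach ($file in Get-ChildItem $mp4Source -Filter *.mp4) {
        # Simple wildcard match: assume file and folder belong together
        # when the file name contains the folder name.
        if ($file.BaseName -like "*$($folder.Name)*") {
            Move-Item -Path $file.FullName -Destination $folder.FullName
        }
    }
}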
For testing, maybe use a few folders and simple empty text files instead of copying large files, and test in separate folders first.
Related
Not long ago I was wondering if there was a way to merge multiple text files into a single text file, and I came across PowerShell while searching Google. I also found out that it can do various other things, so I will ask whether the function I really need can be implemented.
I took a screenshot to make the intent of the question easier to understand. When I download a new video file, I usually create a snapshot and insert the video file's size information just before the extension in the snapshot's name, to organize and store it. Creating the snapshot is not a difficult task, but I would like to know whether the subsequent renaming work can be automated. In short, is it possible to do batch work like this?
The following shows you how to construct a snapshot file name for each of the .mp4 files in the current directory:
Get-ChildItem -Filter *.mp4 | ForEach-Object {
    $snapshotFileName = '{0} - {1:N2}GB ({2} bytes).jpg' -f $_.BaseName,
                                                            ($_.Length / 1gb),
                                                            $_.Length
    # Display for diagnostic purposes.
    Write-Verbose -Verbose $snapshotFileName
    # Use Rename-Item to rename a preexisting snapshot file.
}
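To go a step further, the diagnostic line could be swapped for an actual rename; a minimal sketch, assuming the preexisting snapshot is named "<video base name>.jpg" next to each video:

Get-ChildItem -Filter *.mp4 | ForEach-Object {
    $snapshotFileName = '{0} - {1:N2}GB ({2} bytes).jpg' -f $_.BaseName, ($_.Length / 1gb), $_.Length
    # Assumption: the existing snapshot is "<base name>.jpg" in the same folder.
    $existingSnapshot = Join-Path $_.DirectoryName ($_.BaseName + '.jpg')
    if (Test-Path -LiteralPath $existingSnapshot) {
        Rename-Item -LiteralPath $existingSnapshot -NewName $snapshotFileName
    }
}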
See also:
Get-ChildItem
ForEach-Object
-f, the format operator
Write-Verbose
Rename-Item
I am currently trying to write a PowerShell script that does the following:
Go through all PDF files in the directory the script is in
Check the first few bytes of each PDF file
If those bytes say something along the lines of "PK", move the file to a different location
If the bytes say something else (e.g. %PDF-1.4), don't move it at all and go on to the next one.
Context: We have around 70k PDF files that can't be opened. After checking them with a certain tool, it looks like around 99% of them are damaged and the remaining 1% are zip files.
The first bytes of a zipped PDF file are "PK"; the first bytes of a broken PDF file are, for example, "%PDF-1.4".
I need to unzip all the zip files and relocate them. Going through 70k PDF files by hand is kinda painful, so I'm looking for a way to automate it.
I know I'm supposed to provide a code sample, but the truth is that I am absolutely lost. I have written a few PowerShell scripts before, but I have no idea how to do something like this.
So, if anyone could kindly point me in the right direction or give me a useful function, I would really appreciate it a lot.
You can use Get-Content to read the start of each file (here, the first line, which holds the signature), as you asked.
We can then tie that into a loop over all the documents and a simple if statement to decide what to do next, e.g. move the file to another directory.
EDITED BASED ON YOUR COMMENT:
$pdfDirectory = 'C:\Temp\struktur_id_1225\ext_dok'
$newLocation  = 'C:\Path\To\New\Folder'

Get-ChildItem "$pdfDirectory" -Filter "*.pdf" | foreach {
    # The first line of a valid PDF holds the header, e.g. "%PDF-1.5".
    if ((Get-Content $_.FullName | select -first 1) -like "%PDF-1.5*") {
        # Derive the path of the matching HL7 file from the PDF path.
        $HL7 = $_.FullName.replace("ext_dok", "MDM").replace(".pdf", ".hl7")
        move $_.FullName $newLocation
        move $HL7 $newLocation
    }
}
Try using the above, which is also a bit easier to edit.
$pdfDirectory will need to be set to the folder containing the PDF Files
$newLocation will obviously be the new directory!
And you will still need to change the -like "%PDF-1.5*" to suit your search!
It should do the rest for you, give it a shot
Another Edit
I have mimicked your folder structure on my computer, and placed a few PDF files and matching HL7 files and the script is working perfectly.
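If you'd rather test the actual bytes for the "PK" zip signature, as the question describes, here's a minimal sketch reusing the variables above (note: -Encoding Byte is Windows PowerShell 5.1 syntax; PowerShell 7+ uses -AsByteStream instead):

Get-ChildItem "$pdfDirectory" -Filter "*.pdf" | foreach {
    # Read just the first two bytes; 0x50 0x4B is "PK", the zip signature.
    $bytes = Get-Content -LiteralPath $_.FullName -Encoding Byte -TotalCount 2
    if ($bytes.Count -eq 2 -and $bytes[0] -eq 0x50 -and $bytes[1] -eq 0x4B) {
        Move-Item -LiteralPath $_.FullName -Destination $newLocation
    }
}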
Get-Content is not suited for PDFs; you'd want to use iTextSharp to read them.
Download iTextSharp (found in Releases) and put itextsharp.dll somewhere easy to find (e.g. the folder your script is located in).
You can install the .nupkg by using Install-Package, or simply use an archive tool to extract the contents of the .nupkg file (it's basically a .zip file).
The code below adds every word on page 1 of each PDF, separated by whitespace, to an array. You can then test whether the array contains your keyword.
Add-Type -Path "C:\path\to\itextsharp.dll"

$pdfs = Get-ChildItem "C:\path\to\pdfs" -Filter *.pdf
foreach ($pdf in $pdfs) {
    $reader = New-Object iTextSharp.text.pdf.PdfReader -ArgumentList $pdf.FullName
    # Extract the text of page 1 and split it on whitespace into an array of words.
    $text = [iTextSharp.text.pdf.parser.PdfTextExtractor]::GetTextFromPage($reader, 1) -split '\s+'
    foreach ($word in $text) {
        # do your test here
    }
    $reader.Close()
}
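For instance, the inner loop could be replaced with a single containment test against the whole array; the keyword and destination path here are just placeholders:

    # Hypothetical test: move the PDF when any word on page 1 equals the keyword.
    if ($text -contains 'SomeKeyword') {
        Move-Item -LiteralPath $pdf.FullName -Destination 'C:\path\to\matched'
    }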
I was wondering if anyone can help me with this problem of moving files from the C: drive to a network drive.
So at work we have a machine that outputs .txt files. For example, these files include data about pets, so in the folder I have hundreds of files named similar to dogs_123456_10062019.txt and cats_123457_10062019.txt.
The first number is a reference number that changes per .txt file, and the other is a date. As said, I can have hundreds of these per day; the reference and date are not important to the transfer, as the file includes all this information anyway.
Now I have a network folder structure of Y:\dogs and Y:\cats and wanted an automated script that transfers all dog and cat text files to the corresponding network folder.
The network drive names cannot be changed, as they're used by monitoring software that outputs graphs based on the information in the text files.
Is this possible? Hopefully I've explained myself.
Cheers
If the folder names match the file-name prefixes, then you can do something like this:
$SourceFolderPath = "C:\Source\"
$DestinationFolderPath = "Y:"

$FileList = Get-ChildItem -Path $SourceFolderPath
foreach ($File in $FileList) {
    # Grab everything before the first underscore, e.g. "dogs" from "dogs_123456_10062019.txt".
    $FolderName = ($File.Name | Select-String -Pattern ".+?(?=_)").Matches.Value
    $File | Move-Item -Destination "$DestinationFolderPath\$FolderName"
}
If the folder names do not match the file names, you would need to manually create a dictionary of what should go where and then translate through it, as sketched below.
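A minimal sketch of that dictionary approach, with made-up destination folder names standing in for your real ones:

# Hypothetical mapping: file-name prefix -> destination folder name.
$FolderMap = @{
    'dogs' = 'canines'
    'cats' = 'felines'
}

foreach ($File in Get-ChildItem -Path $SourceFolderPath) {
    $Prefix = ($File.Name | Select-String -Pattern ".+?(?=_)").Matches.Value
    if ($Prefix -and $FolderMap.ContainsKey($Prefix)) {
        $File | Move-Item -Destination "$DestinationFolderPath\$($FolderMap[$Prefix])"
    }
}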
The code above is obviously not enterprise-level stuff :)
I have a lot of ANSI text files that vary in size (from a few KB up to 1GB+) that I need to convert to Unicode.
At the moment, this has been done by loading the files into Notepad and then doing "Save As..." and selecting Unicode as the Encoding. Obviously this is very time consuming!
I'm looking for a way to convert all the files in one hit (in Windows). The files are in a directory structure so it would need to be able to traverse the full folder structure and convert all the files within it.
I've tried a few options but so far nothing has really ticked all the boxes:
ansi2unicode command line utility. This has been the closest to what I'm after, as it processes files recursively in a folder structure...but it keeps crashing before it's finished converting.
CpConverter GUI utility. Works OK up to a point but struggles with multiple files in a folder structure - it only seems to be able to handle files in one folder.
There's a DOS command that works OK on smaller files but doesn't seem to be able to cope with large files.
Tried the GnuWin sed utility, but it crashes every time I try to install it.
So I'm still looking! If anyone has any recommendations I'd be really grateful
Thanks...
OK, so in case anyone else is interested, I found a way to do this using PowerShell:
Get-ChildItem "c:\some path\" -Filter *.csv -recurse |
Foreach-Object {
Write-Host (Get-Date).ToString() $_.FullName
Get-Content $_.FullName | Set-Content -Encoding unicode ($_.FullName + '_unicode.csv')
}
This recurses through the entire folder structure and converts all CSV files to Unicode; the converted files are written to the same locations as the originals, but with "_unicode.csv" appended to the file name. You can change the value of the -Encoding parameter if you want to convert to something different (e.g. utf8).
It also outputs a list of all the files converted, with a timestamp against each.
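For the really large files, reading in batches should be considerably faster than line by line; a sketch of the same conversion using -ReadCount (and utf8 output, if that's the target encoding):

Get-ChildItem "c:\some path\" -Filter *.csv -recurse |
    Foreach-Object {
        # -ReadCount 1000 streams the file in 1000-line chunks rather than one line at a time.
        Get-Content $_.FullName -ReadCount 1000 |
            Set-Content -Encoding utf8 ($_.FullName + '_utf8.csv')
    }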
Does anyone have any ideas on how to rename files by finding an association with an index file?
I have a file/folder structure like the following:
Folder name = "Doe, John EO11-123"
Several files under this folder
The index file (MS Excel) has several columns. It contains the names in two columns (First and Last), and it also has a column containing the number EO11-123.
What I would like to do is write a script that looks at the folder names in a directory, compares/finds the associated value in the index file (like that number EO11-123), and then renames all the files under the folder using a fourth-column value from the index.
So,
Folder name = "Doe, John EO11-123", index column1 contains same value "EO11-123", use column2 value "111111_000000" and rename all the files under that directory folder to "111111_000000_0", "111111_000000_1", "111111_000000_2" and so on.
This possible with powershell or vbscript?
OK, I'll answer the questions in your comment first. Importing the data into PowerShell allows you to make an array that you can match against, or better yet a hashtable to reference for your renaming purposes. I'll get into that later, but it's far better than trying to have PowerShell talk to Excel and use Excel's search functions, because this way everything stays in PowerShell and there are no third-party application dependencies. As for importing, that script is a function that you can load into your current session; you run the function and it automatically takes care of the import for you (it opens Excel, opens the XLS(x) file, saves it as a temp CSV file, closes Excel, imports that CSV file into PowerShell, and then deletes the temp file).
Now, you did not state what your XLS file looks like, so I'm going to assume it's got a header row, and looks something like this:
FirstName | Last Name | Identifier | FileCode
Joe       | Shmoe     | XA22-573   | JS573
John      | Doe       | EO11-123   | JD123
If that's not your format, you'll need to adapt my code, or your file, or both.
So, how do we do this? First, download, save, and (if needed) unblock the Import-XLS script. Then we dot-source that file to load the function into the current PowerShell session. Once we have the function, we run it and assign the results to a variable. Then we make an empty hashtable, and for each record in the imported array we create a hashtable entry where the 'Identifier' property (in your example above, the one that has the value "EO11-123" in it) is the key and the entire record is the value. So far we have this:
# Load the function into the current session.
. C:\Path\To\Import-XLS.ps1

$RefArray = Import-XLS C:\Path\To\file.xls
$RefHash = @{}
$RefArray | ForEach{ $RefHash.Add($_.Identifier, $_) }
Now you should be able to reference the identifier to access any of the properties for the associated record such as:
PS C:\> $RefHash['EO11-123'].FileCode
JD123
Now we just need to extract that identifier from each folder name and rename all the files in it. Pretty straightforward from here.
Get-ChildItem c:\Path\to\Folders -Directory | Where{ $_.Name -match "(?<= )(\S+)$" } |
    ForEach{
        $Files = Get-ChildItem $_.FullName
        # Look up the FileCode for the identifier captured from the folder name.
        $NewName = $RefHash[$Matches[1]].FileCode
        For($i = 0; $i -lt $Files.Count; $i++){
            $Files[$i] | Rename-Item -NewName "${NewName}_$i"
        }
    }
Edit: OK, let's break down the rename process here. There is a lot of piping, so I'll try to take it step by step. First off, Get-ChildItem gets a list of folders for the path you specify. That part's straightforward enough. It then pipes to a Where statement that filters the results, checking each folder's name against the Regular Expression "(?<= )(\S+)$". If you are unfamiliar with how regular expressions work, you can see a fairly good breakdown of this one at https://regex101.com/r/zW8sW1/1. It matches any folder that has more than one "word" in its name and captures the last "word". The capture is saved in the automatic variable $Matches, and since it is the first capture group it ends up in $Matches[1]. Now, the code breaks down here because your CSV isn't laid out like I had assumed, and you want the files named differently. We'll have to make some adjustments on the fly.
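A quick way to see that capture in action against the sample folder name:

'Doe, John EO11-123' -match "(?<= )(\S+)$"   # True
$Matches[1]                                  # EO11-123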
So, the folders that pass the filter get piped into a ForEach loop (which I previously had a typo in: a ( instead of a {; that's fixed now). For each of those folders it starts off by getting a list of the files within and assigning them to the variable $Files. It also sets up the $NewName variable, but since you don't have a column in your CSV named 'FileCode', that line won't work for you. It uses the $Matches automatic variable that I mentioned earlier to reference the hashtable we set up with all of the Identifier codes, then looks at a property of that specific record to build the new name to assign to the files. Since what you want and what I assumed are different, and your CSV has different properties, we'll rework both the previous Where statement and this line a little bit. Here's how that bit of the script will now read:
Get-ChildItem c:\Path\to\Folders -Directory | Where{ $_.Name -match "^(.+?), .*? (\S+)$" } |
    ForEach{
        $Files = Get-ChildItem $_.FullName
        $NewName = $Matches[2] + "_" + $Matches[1]
        For($i = 0; $i -lt $Files.Count; $i++){
            $Files[$i] | Rename-Item -NewName "${NewName}_$i"
        }
    }
That now matches the folder name in the Where statement and captures two things. The first thing it grabs is everything at the beginning of the name, before the comma. Then it skips ahead until it gets to the last piece of text at the end of the name and captures everything after the last space. New breakdown on RegEx101: https://regex101.com/r/zW8sW1/2
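Again, a quick demonstration against the sample folder name:

'Doe, John EO11-123' -match "^(.+?), .*? (\S+)$"   # True
$Matches[1]                                        # Doe
$Matches[2]                                        # EO11-123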
So you want ID_LName, which can be gotten from the folder name; there's really no need to even use your CSV file at this point, I don't think. We build the new name of the files from the automatic $Matches variable, using the second capture group and the first capture group with an underscore between them. Then we just iterate through the files with a For loop based on how many files were found: we start with the first file in the array $Files (record 0), append an underscore and the index to $NewName, and use that to rename the file.