Use PowerShell to copy long-filename files

Unfortunately for me, we have a legacy file system that has been around since the dinosaurs roamed. Over time, files have been moved/copied to locations that have produced a folder + file structure too long to simply copy from one location to another (long file names).
What I see as a solution is to use PowerShell to map a drive to each file location, copy the file, then recurse for each.
e.g.
net use w: \\newlocation\
net use x: "\\silly long file path\folder1\"
xcopy x:\file.txt w:\folder1
then use a foreach to copy the same structure as it was originally.
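Something like this rough PowerShell sketch, where the drive letters are mine and the server/share names are placeholders (untested):

# Map each deep folder to a drive letter so the effective path stays
# short, create the target folder, copy, then clean up the mappings.
# '\\server\newlocation' and the long path below are placeholder names.
New-PSDrive -Name W -PSProvider FileSystem -Root '\\server\newlocation' | Out-Null
New-PSDrive -Name X -PSProvider FileSystem -Root '\\server\silly long file path\folder1' | Out-Null
New-Item -ItemType Directory -Path 'W:\folder1' -Force | Out-Null
Copy-Item -Path 'X:\file.txt' -Destination 'W:\folder1\'
Remove-PSDrive -Name W, X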
I know it seems long-winded, but I cannot think of a better way forward. I've tried Robocopy, which is supposed to avoid this problem, but after much testing... nope, it still fails to copy.
Can you please suggest a good method for doing this?

Not sure if you are still looking for an answer, but you can try a program called Beyond Compare, which should work for you.
Honestly, I didn't spend too much time with it, as it has a bunch of options, but I know people who successfully use it every day for files with long file names.


Using PowerShell to read only the last month of folders, then copy the last 7 days' worth of folders with robocopy

Apologies to start, as I'm new to PowerShell and robocopy.
I have a robocopy command that pulls in any files within its many subfolders that are within a maxage of 7. However, the main folder has a huge number of folders dating back years (and I only need the last 7 days each week it runs), so it is slow reading each file in each folder before robocopy even starts copying.
It looks like PowerShell commands may be a way for me to limit the search of files for my robocopy; would this be possible? Currently robocopy searches each file in each folder in my main folder. Ideally I would want it to be smart enough to only search, say, a month's worth of folders and then copy over the last 7 days. This would speed up the run time hugely.
To go even further, if possible: I only want the CSV files in each of the folders in my main folder, but currently robocopy searches the other folders and their files as well, which takes time. All the CSV files are in a folder called "run" inside each parent folder (each parent folder is a unique number within the "mainfolder").
my robocopy command:
robocopy \\server\mainfolder \\server\new_main_folder /S /maxage:7 /r:0 /w:0
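One possible pre-filter, sketched in PowerShell (untested; it assumes a parent folder's LastWriteTime moves when files land in it, and that the CSVs live in each folder's "run" subfolder as you describe): enumerate only recently modified parent folders, then run robocopy per folder so it never walks the old ones.

# Only visit parent folders modified in roughly the last month, then let
# robocopy pull the last 7 days of CSVs from each folder's "run" subfolder.
$cutoff = (Get-Date).AddDays(-30)
Get-ChildItem '\\server\mainfolder' -Directory |
    Where-Object { $_.LastWriteTime -ge $cutoff } |
    ForEach-Object {
        $src = Join-Path $_.FullName 'run'
        $dst = Join-Path '\\server\new_main_folder' $_.Name
        robocopy $src $dst *.csv /maxage:7 /r:0 /w:0
    }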
I was going to point you to either FastCopy or FreeFileSync; both handle long file name paths and work well for me. But I found problems running FastCopy when trying to filter folders the way you described. I wasn't getting the results I expected, so that leaves FreeFileSync. There is a bit of a learning curve with FreeFileSync, but really, the only problem/complaint I've had with it is that the XML-based batch script you can use to automate the program kept changing formats, and they haven't provided a way to read the old XML batch scripts with the new version of the software. Maybe that has changed; I haven't looked into it lately.
Maybe other people have had better experiences with RoboCopy, but I found it to take literally many multiples longer to do the same job as many other copy programs. I don't think FreeFileSync is as fast as FastCopy, but I've never seen it behave as badly as what I experienced with RoboCopy.
The way FreeFileSync works is:
You define 1 or more source/destination pairs.
There is a global setting at the top to set the defaults for all copy pairs.
There are individual settings for each copy pair that, when set, override the global settings.
In the filter tab of the settings you can set "Time span:" to "Last x days:" and set it to the 7 days that you want.
You can change include from * to something like \run\*.csv. I didn't try that exact pattern, but the patterns I did try worked as expected (unlike FastCopy).
The Synchronization tab is the tricky/fun one. You can do logs, versioning, tell the system to shutdown or restart when done, maintain a database for tracking moved files ("Detect moved files" checkbox), and all kinds of adjustments to how it behaves when files don't match.
When done, there are (I believe) at least two options for saving the configuration, though I've always just created the XML-based batch script and called it from another scripting language or a desktop icon.
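For what it's worth, once a job is saved as an .ffs_batch file, running it from a script is a one-liner; the install path and job name below are assumptions:

# Run a saved FreeFileSync batch job from PowerShell (paths are placeholders).
& 'C:\Program Files\FreeFileSync\FreeFileSync.exe' 'C:\Jobs\WeeklyCsv.ffs_batch'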

How to create a script that uses a path list as a reference for copying files in PowerShell from a .bat script

I'm looking for a way to automate archiving so that after I plug in my two external drives I can copy all my resources. The problem is that I have different file structures on my laptop and on both external drives, so I need to select specific folders to be copied. That means I can't select one root folder and copy it straight over. I tried to find a way to declare more than one path in the cp command and in the copy command, without success. An example path:
/my_programming_stuff
/folder1
/folder2
/folder3
/folder4
I want to select only the first 3 folders and copy them to external drive 1 and external drive 2. The idea is to create a .bat file that will copy everything at once (in the best-case scenario it would be copied to both external drives simultaneously, so it would be much faster). Another problem is that the NTFS long-path limitation (max. 260 characters) needs to be bypassed.
Flags that I want to use:
Copy the files and directories and all of their attributes, including ownership and permissions.
Recursively copy directories and their contents.
When copying files from one directory to another, only copy files that either don't exist in the destination directory or are newer than the existing corresponding files there.
Data verification (so it's certain that the copy was verified).
A progress bar with an ETA.
Until now I was using Total Commander for this, but every day I have to pick only a few folders to be copied, which takes time and is inefficient.
I have experience with Bash and PowerShell, but I'm not sure how to approach this.
Create a static batch file with robocopy commands. I think /copyall is the only switch you need to specify for all this. Other defaults should satisfy requirements.
https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/robocopy
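If you'd rather drive it from PowerShell, which you said you know, here is a minimal sketch of that idea; the drive letters and folder names are my assumptions:

# Mirror three source folders to two external drives with robocopy.
# /E recurses into subdirectories; /COPYALL copies data, attributes,
# timestamps, ownership, and ACLs; /ETA shows per-file progress with an
# estimated completion time. Unchanged files are skipped by default.
$sources = 'C:\my_programming_stuff\folder1',
           'C:\my_programming_stuff\folder2',
           'C:\my_programming_stuff\folder3'
$destRoots = 'E:\', 'F:\'    # assumed drive letters of the two externals
foreach ($root in $destRoots) {
    foreach ($src in $sources) {
        robocopy $src (Join-Path $root (Split-Path $src -Leaf)) /E /COPYALL /ETA
    }
}

As far as I know, robocopy has no data-verification switch, so that requirement would still need a separate post-copy check.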
I think your time will be better spent learning how to use either FastCopy or FreeFileSync. I used FreeFileSync some years ago but got disgusted with its constantly changing format for the XML file used to start a backup, so I switched to FastCopy. But it looks like FreeFileSync may be getting their act together, and I aim to do some experiments over the summer to see if I want to switch back to it.
Both can handle the long-filename issues, both can be executed from a batch file, and both seem to be high quality, but FreeFileSync has more features (and is more bloated because of them). Speed-wise, though, I think FastCopy is probably one of the better products out there, and very streamlined in use and design.

How to extract archives from dir/subdir/*.* to dir/%zipname%/

I have a bunch of archives that I want to extract. The problem is there are a lot of them, and it's a lot of info to move around. I'd like to do it all at once. It's probably taken more time to research this than to do it manually, but research is more interesting.
TL;DR: Would like help with 7-zip command line to extract multiple archives into their own directory. Autohotkey, Powershell, and batch files answers would also be nice if you are feeling extra helpful.
Win10, latest update and all that. I've been using 7-Zip, so if there's a better extractor for this, that might be a helpful suggestion. I have a little experience with coding, so I can usually parse an example and apply it to my project, but I can't come up with code on my own. So with that said, I'm comfortable using cmd, AutoHotkey, PowerShell, batch files, and a few others, but I need an example before I can do anything. haha
So, in my research, I found
(7z x -o"...\Stellaris\mod\Examples\" "...\content\281990\*")
for cmd, which works, except that it extracts everything to the same dir, since the archive files are in the root archive dir (I think that's why; if they were one folder down, it should work like I want, right?). I don't think you can use environment variables in the path(?). Not sure what would make it work here...
PowerShell: I only recently started tinkering with it, so the one script I found didn't make any sense to me. And I never found anyone using AutoHotkey for this.
And finally, a batch file I found here seemed to come closest (normally I'd comment on that thread, since apparently it's still active, but I don't have 50 rep), but I wasn't sure how to modify it for my purposes:
@echo off
REM Where does the working dir path go?
SET "filename=%~1"
REM How/where would you put in wildcards?
SET "dirName=%filename:~0,-4%"
7z x -o"%dirName%" "%filename%"
I don't mind using any method, though I might prefer AHK? I'm probably most experienced there.
If you made it this far, wow, I'm impressed! I hope it was coherent enough to understand (probably not at first?). And maybe a little entertaining? I think I'm funny. Let me know if I should add or remove anything for the future. I know it's probably way too much context, but I would rather have too much than not enough, and I'm never sure what will be relevant and what won't. I'm not happy with my code formatting here, but I didn't quite understand what the help said about whitespace, and I'm not familiar enough with Markdown yet (I wanted the comments to be inline). Also, I'm honestly not sure about the tags.
EDIT: Added TL;DR at the top, and...
Found an answer via a program that does this. I'll post it in an answer as well: ExtractNow seems to be a bit outdated, last update was in '17, but it did what I wanted it to.
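Since you said PowerShell answers would also be nice, here's a rough sketch of the same job; the paths are placeholders and it assumes 7z.exe is on your PATH:

# Extract every archive in $srcDir into its own subfolder of $outRoot,
# named after the archive file (BaseName strips the extension).
$srcDir  = 'C:\content\281990'
$outRoot = 'C:\Stellaris\mod\Examples'
Get-ChildItem -Path $srcDir -File | ForEach-Object {
    $dest = Join-Path $outRoot $_.BaseName
    7z x $_.FullName "-o$dest"
}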
For interactive use at the command prompt:
for %z in ("\path\to\dir\subdir\*.zip") do echo 7z x "-o\path\to\extracted\%~nz" "%~z"
This won't run 7z, but it will print out the commands. Once you are satisfied that the printed commands look fine, remove the echo to execute them.
In a batch script you must of course double the % signs.
Found an answer via a program that does this. ExtractNow seems to be a bit outdated, last update was in '17, but it did what I wanted it to with only a few settings changes.
So, in my research, I found
(7z x -o"...\Stellaris\mod\Examples" "...\content\281990\*")
for cmd, which works, except that extracts everything to the same dir...
Assuming you were using Windows, 7-zip would have worked fine to do what you wanted. The only thing you were missing is the * character, which 7-zip expands to be the archive name when used with the -o switch:
7z x "dir\subdir\*.*" -o"dir\*"
So 7z x -o"...\Stellaris\mod\Examples" "...\content\281990\*"
becomes:
7z x -o"...\Stellaris\mod\Examples\*" "...\content\281990\*"
Also be aware that *.* does not mean "any file" under 7-Zip. 7-Zip takes *.* to mean any file that has an extension. To process all files, just use "dir\subdir\*" without the extra .*.

How to replace a file inside a zip on iOS?

I need to replace a file in a zip using iOS. I tried many libraries with no results. The only one that kind of did the trick was zipzap (https://github.com/pixelglow/zipzap), but it's no good for me, because what it really does is re-zip the whole file with the change; besides this process being too slow for me, it also loads the whole file into memory and makes my application crash.
PS: If this is not possible or too complicated, I can settle for renaming or deleting a specific file.
You need to find a framework where you can modify how data is read and written. You would then use some form of mmap to essentially read and write small chunks. Searching on NSData and mmap turned up this post; however, you can use mmap at the POSIX level too. P.S. It will be slower than using pure memory; no way around that.
Got it WORKING!! JXZip (https://github.com/JanX2/JXZip) does exactly what I need. It links to libzip (http://www.nih.at/libzip/), which is a fully equipped library for working with ZIP files, and JXZip has all the necessary Objective-C wrapper code. Thanks for all the replies.
For archive purposes, as the author of zipzap:
Actually, zipzap does exactly what you want. If you replace an entry within a zip file, zipzap will do the minimum necessary to update it: it will skip writing all entries before the replaced entry, then write out the new entry, then write out all entries after it without recompressing. At the moment it does require sufficient memory for the entries after the replaced entry, though.

Pipe multiple files into a zip file

I have several files in a GridFS document store, and what I'd like to do is pipe this data into a zip file via stdin in Node.js, so that I end up with a zip file containing all these files.
Now my question is: how can I give the files valid filenames inside the zip file? I think I need to emulate/fake a file header containing the filename?
Any help is appreciated!
Thanks
I had problems writing zip files with Node.js not long ago. I ended up doing something similar to what is described in Zip archives in node.js.
I can't help you directly with your problem, but at least I hope I can point out some things:
Don't try to use node-archive. Even though the description says it allows creating zip files, the moment I read the source code (since documentation is nonexistent) I realized that's just a lie. It only exposes methods for reading.
Using zip by spawning a process, as recommended in the provided link, seems to be the best way. Something that would work is copying the files to a local folder with whatever names you want, calling the zip command, and then deleting the files afterwards.
The other option, which seems OK, is to use zipper (https://github.com/rubenv/zipper, although you're better off just using npm). The reason I'm not really keen to use it is that there isn't much flexibility; it seems to have been done in a day, and it hasn't been modified since the first commit, so I'm not sure it will receive maintenance (sure, you could just fork it...).
I swear, the day I have an entire free weekend with no work, I will write a freaking module that does this as completely as possible. It's silly that there isn't one, and it shouldn't be this much of a struggle. blablablarant.
Edit:
Not sure if it was there before, but now I've been using the node-compress module (also using gzippo). It works fine.