I have a directory on a server that contains 2.5 million files.
I need to do something with the files. However, when I try to create an object representing those files, I run into a memory limit. (It's a 32-bit machine.)
PS D:\> $files=dir LotsOfFiles
Get-ChildItem : Exception of type 'System.OutOfMemoryException' was thrown.
Is there any way to work around this, like creating a class representing a file that stores fewer attributes (I just need the name and last write date)?
Did you try using dir /b LotsOfFiles?
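As a variation on that idea, here is a minimal PowerShell sketch (assuming .NET 4.0 and PowerShell 3.0 or later; D:\LotsOfFiles is the folder from the question and the CSV output path is only a placeholder) that streams entries with [System.IO.Directory]::EnumerateFiles instead of letting Get-ChildItem build 2.5 million FileInfo objects, keeping just the two attributes needed:

# Stream paths one at a time; memory use stays roughly constant.
[System.IO.Directory]::EnumerateFiles('D:\LotsOfFiles') |
    ForEach-Object {
        [PSCustomObject]@{
            Name          = [System.IO.Path]::GetFileName($_)
            LastWriteTime = [System.IO.File]::GetLastWriteTime($_)
        }
    } |
    Export-Csv 'D:\LotsOfFiles-index.csv' -NoTypeInformation   # placeholder output path

Because results are written to disk as they flow through the pipeline, nothing ever has to hold all 2.5 million objects at once.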
I am using the command below to compare 2 paths, and I get an error message when it gets to a folder whose name ends with a period, i.e. "Folder123."
When I manually try to open those folders I get an error, so I think they are corrupt. How can I skip all folders that end with a period, or at least ignore the errors so that my processing can finish?
Compare (Get-ChildItem -r Y:\Ftp\BFold\Final) (Get-ChildItem -r Y:\Dest\TFold\Temp)
You're getting that error because of the Naming Files, Paths, and Namespaces limitations in Windows. One or several of the tools you're using are not able to handle this special case.
Do not end a file or directory name with a space or a period. Although the underlying file system may support such names, the Windows shell and user interface does not. However, it is acceptable to specify a period as the first character of a name. For example, ".temp".
You could either filter the list of folders or use -ErrorAction to change what happens on an error. Depending on what you're seeing, the error might already be purely cosmetic.
For filtering you could use Where-Object, for example with -NotMatch ".*\.$", as in the sketch below.
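A hedged sketch of that filtering approach, reusing the two paths from your Compare call (the -ErrorAction setting keeps the recursion going past the folders that cannot be opened):

$left  = Get-ChildItem -Recurse Y:\Ftp\BFold\Final -ErrorAction SilentlyContinue |
         Where-Object { $_.Name -notmatch '.*\.$' }
$right = Get-ChildItem -Recurse Y:\Dest\TFold\Temp -ErrorAction SilentlyContinue |
         Where-Object { $_.Name -notmatch '.*\.$' }
Compare-Object $left $right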
I have a PowerShell script that generates a report, and I have connected it to an IO.FileSystemWatcher. I am trying to improve its error handling. I already have the report generation function (which only takes in a file path) inside a try/catch block that basically kills Word, Excel and PowerPoint and tries again if it fails. This seems to work well, but I want to embed within that another try/catch block that will restart the computer and generate the report after reboot if it fails a second consecutive time.
I decided to try and modify the registry after reading this article: https://cmatskas.com/configure-a-runonce-task-on-windows/
My plan is that, within the second try/catch block, I will create a text file called RecoveredPath.txt with the file path as its only contents, and then add something like:
Set-ItemProperty "HKLMU:\Software\Microsoft\Windows\CurrentVersion\RunOnce" -Name '!RecoverReport' -Value "C:\...EmergencyRecovery.bat"
before rebooting. Within the batch file I have:
set /p RecoveredDir=<RecoveredPath.txt
powershell.exe -File C:\...Report.ps1 %RecoveredDir%
When I try to run the batch script, it doesn't yield any errors but doesn't seem to do anything. I tried adding an echo statement, and it is storing the value of the text file as a variable, but it doesn't seem to be passing it to PowerShell correctly. I also tried adding -Path %RecoveredDir%, but that yielded an error (the param in Report.ps1 is named $Path).
What am I doing incorrectly?
One potential problem is that not enclosing %RecoveredDir% in "..." breaks with paths containing spaces and other special characters.
However, the bigger problem is that using the mere file name RecoveredPath.txt means that the file is looked for in whatever the current directory happens to be.
In a comment you state that both the batch file and the input file RecoveredPath.txt are located in your desktop folder.
However, it is not the batch file's location that matters, it's the process' current directory - and that is most likely not your desktop when your batch file auto-runs on startup.
Given that the batch file and the input file are in the same folder and that you can refer to a batch file's full folder path with %~dp0 (which includes a trailing \), modify your batch file to look as follows:
set /p RecoveredDir=<"%~dp0RecoveredPath.txt"
powershell.exe -File C:\...Report.ps1 "%RecoveredDir%"
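For completeness, a minimal sketch of the receiving end - only the $Path parameter name comes from the question, the body is illustrative:

param(
    [Parameter(Mandatory = $true)]
    [string]$Path
)
# ... generate the report for $Path ...
Write-Host "Generating report for $Path"

Because "%RecoveredDir%" is passed positionally after the script path, it binds to the first declared parameter ($Path); passing it explicitly as -Path "%RecoveredDir%" also works, as long as the value stays quoted.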
I'm trying to use the call operator (&) to run an R script, and for some reason I am unable to direct to the right path on the D:\ drive, but it works fine on the C:\ drive (copied the R folder from D:\ to C:\ for testing).
The D:\ drive error appears like a space error, even though there are quotes around the string/variable.
With double spacing between "Program" and "Files", the call command reads correctly.
Ideally I would like to call Rscript.exe on the D:\ drive, but I don't know why it's giving me an error - especially when the C:\ drive works fine and the double-spaced path reads correctly.
Also worth noting: "D:\Program Files (x86)" doesn't read correctly either, with similar symptoms.
Update: running
gci -r d:\ -include rscript.exe | % fullname
returns:
D:\Program Files\R\R-3.2.3\bin\i386\Rscript.exe
D:\Program Files\R\R-3.2.3\bin\x64\Rscript.exe
D:\Program Files\R\R-3.2.3\bin\Rscript.exe
The last of which is what my variable $RscriptD is set to.
The first error message in your image is:
Rscript.exe : The term 'D:\Program' is not recognized as an internal or external command
This message means that the call operator (&) invoked Rscript.exe, but Rscript.exe itself then failed while trying to use 'D:\Program'.
I don't know the exact details of Rscript.exe's internal behaviour; however, I think Rscript.exe tried to run D:\Program Files\R\R-3.2.3\bin\i386\Rscript.exe or D:\Program Files\R\R-3.2.3\bin\x64\Rscript.exe but could not handle the whitespace in Program Files, because the manual says:
Sub-architectures are also used on Windows, but by selecting executables within the appropriate bin directory, R_HOME/bin/i386 or R_HOME/bin/x64. For backwards compatibility there are executables R_HOME/bin/R.exe and R_HOME/bin/Rscript.exe: these will run an executable from one of the subdirectories, which one being taken first from the R_ARCH environment variable, then from the --arch command-line option and finally from the installation default (which is 32-bit for a combined 32/64 bit R installation).
According to this, I think it is better to call i386/Rscript.exe or x64/Rscript.exe directly, rather than bin/Rscript.exe, which is just for backwards compatibility. For example:
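A sketch of that call (the Rscript.exe path is one of the two returned by your gci command; the .R script path is a placeholder):

$RscriptD = 'D:\Program Files\R\R-3.2.3\bin\x64\Rscript.exe'   # or ...\bin\i386\Rscript.exe
& $RscriptD 'C:\path\to\your_script.R'

Once $RscriptD holds the full, quoted path to the architecture-specific executable, the call operator invokes it directly, bypassing the backwards-compatibility launcher in bin, which (per the diagnosis above) seems to mishandle the space in Program Files.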
My team faces the need to encrypt all files in a repository with AES256. For this purpose, we decided to zip each file with that encryption, using the same key for all of them.
The problem we have is that these files sit on a NAS, so from Windows boxes they are accessible via UNC paths (\\server\share).
The directory structure is something like this:
Original Structure:
Root
-1
|--folder1
|---file1.ext
|---file2.ext
|--folder2
|---filea.ext
|---fileb.ext
|--folder2.a
|---filec.ext
and so on...
Essentially, what we need is to have all the original files contained in a zip file, keeping their original names, which would be something like this:
Desired Outcome:
|-Root
|-1
|--folder1
|---file1.zip
|---file2.zip
|--folder2
|---filea.zip
|---fileb.zip
|--folder2a
|---filec.zip
and so on...
To accomplish this, we tried a batch script that calls 7-Zip, but it only works if it's run from the root directory, which we cannot do as the files sit on a network location rather than on a local server.
Here is the syntax of the batch script we came up with:
FOR /R %%i IN ("*.wmv") DO "C:\Program Files\7-Zip\7z.exe" a -mx0 -tzip -pPasswordHere "%%~dpni.zip" "%%i"
But, as written previously, it only works when run from the root folder, which is something we cannot do as the files sit on a network location.
Mapping the drive or making a symbolic link to it doesn't do the trick either.
I've also looked at having 7-Zip do this itself, namely by making use of its "-r" switch, but I couldn't find a way to get the desired outcome (namely, recurse through all folders in the remote tree structure - there are a lot of them... - and keep the original file names).
I'm open to any suggestions, as any kind of script, trick or gizmo that gets the job done will be more than welcome. =)
Thanks a million in advance!
Sebas.
----SOLUTION----
I actually found a solution here, mapping the drive in a different way (it's so simple it just made me feel stupid(er), but it's altogether beautiful).
Within the batch script, the remote share can be mapped like so:
You can map a drive using
net use X: \\server\directory
and then you can change to that directory using
pushd X:
(Post from which the answer was taken: Batch File Iterating through files on a local network server)
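For reference, if a PowerShell variant is ever preferable to the batch/pushd approach, here is a rough sketch along the same lines (untested; \\server\share\Root stands in for the real NAS path, while the *.wmv filter, the password placeholder and the 7z.exe path come from the original batch line; -File on Get-ChildItem needs PowerShell 3.0 or later):

$sevenZip = 'C:\Program Files\7-Zip\7z.exe'
Get-ChildItem -Path '\\server\share\Root' -Recurse -File -Filter '*.wmv' |
    ForEach-Object {
        $zip = Join-Path $_.DirectoryName ($_.BaseName + '.zip')
        & $sevenZip a -mx0 -tzip -pPasswordHere $zip $_.FullName
    }

Get-ChildItem accepts UNC paths directly, so no drive mapping is needed in this variant.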
I am using PowerShell v5 and trying to archive the file with the Compress-Archive cmdlet from Microsoft.PowerShell.Archive module:
Compress-Archive -LiteralPath $GLBSourcePathFull -CompressionLevel Optimal -DestinationPath $GLBArchiveFile
This worked flawlessly with 3 files, which had the following sizes: 16MB, 341MB and 345MB.
However, once it came across files bigger than approximately 600MB, PowerShell threw the following exception:
Exception calling "Write" with "3" argument(s): "Exception of type 'System.OutOfMemoryException' was thrown."
The same thing happened with files over 1GB in size.
To add more context to my situation: I am trying to zip up the file from a local folder to one of the network locations within my company; however, I doubt there is a difference, as I tested this on my local PC and got the same results.
Have you ever encountered this before? Is it trying to read the whole file into memory before outputting the zip instead of writing directly to the disk? Or maybe there is a limit to how much memory PowerShell can use by default?
I know there are a few other solutions, like the 7Zip4powerShell module, but I am not allowed to use anything open source at this point, so I would like to understand the situation I have and how I could potentially address it.
Thank you for any comments you may have.
The Compress-Archive cmdlet probably uses a naive approach of loading/mapping the entire source file and target archive into memory, and since you're apparently using 32-bit PowerShell, these files (along with the PowerShell process code and other data used by the process) don't fit into the process address space, which is 2GB (or 3-4GB if the process is LARGEADDRESSAWARE).
64-bit PowerShell on a machine with lots of RAM (e.g. 32GB) successfully compresses 1GB+ files.
If you're stuck with 32-bit PowerShell and the built-in cmdlet, try splitting the file into 100MB parts (a rough sketch follows below) and use descriptive file names so they can be re-joined by your unpacking script. Obviously such an archive would be unusable for anyone without the re-assembling script.
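A rough sketch of that splitting idea (untested; the source path and part-naming scheme are placeholders, only the roughly 100MB chunk size comes from the suggestion above):

$source    = 'C:\data\bigfile.bin'   # placeholder path
$chunkSize = 100MB
$buffer    = New-Object byte[] ($chunkSize)
$in        = [System.IO.File]::OpenRead($source)
$part      = 0
try {
    while (($read = $in.Read($buffer, 0, $buffer.Length)) -gt 0) {
        # e.g. bigfile.bin.part000, bigfile.bin.part001, ...
        $partPath = '{0}.part{1:D3}' -f $source, $part
        $out = [System.IO.File]::OpenWrite($partPath)
        $out.Write($buffer, 0, $read)
        $out.Close()
        $part++
    }
}
finally {
    $in.Close()
}

Each .partNNN file can then be handed to Compress-Archive individually, and the unpacking script re-joins the parts in order before extraction.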