I am using PowerShell v5 and trying to archive a file with the Compress-Archive cmdlet from the Microsoft.PowerShell.Archive module:
Compress-Archive -LiteralPath $GLBSourcePathFull -CompressionLevel Optimal -DestinationPath $GLBArchiveFile
This worked flawlessly with 3 files, which had the following sizes: 16MB, 341MB and 345MB.
However, once it came across files bigger than approximately 600MB, PowerShell threw the following exception:
Exception calling "Write" with "3" argument(s): "Exception of type 'System.OutOfMemoryException' was thrown."
The same thing happened with files over 1GB in size.
To add more context: I am trying to zip up the file from a local folder to one of the network locations within my company. However, I doubt that makes a difference, as I tested this on my local PC and got the same results.
Have you ever encountered this before? Is it trying to read the whole file into memory before outputting the zip instead of writing directly to the disk? Or maybe there is a limit to how much memory PowerShell can use by default?
I know there are a few other solutions, such as the 7Zip4Powershell module, but I am not allowed to use anything open source at this point, so I would like to understand the current situation and how I could potentially address it.
Thank you for any comments you may have.
The Compress-Archive cmdlet probably uses a naive approach of loading/mapping the entire source file and target archive into memory. Since you are apparently using 32-bit PowerShell, these files (along with the PowerShell process code and other data used by the process) don't fit into the process address space, which is 2GB (or 3-4GB if the process is LARGEADDRESSAWARE).
64-bit PowerShell on a machine with lots of RAM (e.g. 32GB) successfully compresses 1GB+ files.
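A quick way to confirm which flavor of PowerShell you are actually running (a minimal check, not something from the original question):

# $false means a 32-bit process, which is limited to roughly 2GB of address space
[Environment]::Is64BitProcess
# The OS itself may still be 64-bit even when the process is not
[Environment]::Is64BitOperatingSystem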
If you're stuck with 32-bit PowerShell and the built-in cmdlet, try splitting the file into 100MB chunks and use descriptive file names so that your unpacking script can join them back together (see the sketch below). Obviously, such an archive would be unusable for anyone without the re-assembling script.
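A rough sketch of the splitting idea; the .partNNN naming and the 100MB chunk size are my own assumptions, not a built-in feature:

# Split $GLBSourcePathFull (the variable from the question) into 100MB chunks
$chunkSize = 100MB
$buffer    = New-Object byte[] $chunkSize
$reader    = [System.IO.File]::OpenRead($GLBSourcePathFull)
$index     = 0
try {
    while (($read = $reader.Read($buffer, 0, $buffer.Length)) -gt 0) {
        $part   = '{0}.part{1:d3}' -f $GLBSourcePathFull, $index
        $writer = [System.IO.File]::Create($part)
        try     { $writer.Write($buffer, 0, $read) }
        finally { $writer.Dispose() }
        $index++
    }
}
finally { $reader.Dispose() }
# Each .partNNN file can then be compressed individually with Compress-Archive,
# and the unpacking script concatenates the parts back in order.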
Related
I'm trying to copy a single file to the "Computer Backups" folder as shown below. The command does not work, even though the syntax is correct, i.e. "source dest file". I've tried many permutations of this command and different quoting, and still it does not work. I've also tried running it with admin privileges, to no avail. My guess is that it is a quoting issue.
robocopy F:\ "C:\Users\ben\OneDrive\Computer Backups" "HP Pavillion to USB_full_b2_s1_v1.tib"
-Thanks
Never mind, the problem was that OneDrive has a file size limit, and this file is 221 GB.
I'm trying to use the call operator (&) to run an R script, and for some reason I am unable to direct to the right path on the D:\ drive, but it works fine on the C:\ drive (copied the R folder from D:\ to C:\ for testing).
The D:\ drive error appears to be a space-related error, even though there are quotes around the string/variable.
With double spacing between "Program" and "Files", the call command reads correctly.
Ideally I would like to call Rscript.exe on the D:\ drive, but I don't know why it's giving me an error, especially when the C:\ drive works fine and the double-spaced path reads correctly.
Also worth noting: "D:\Program Files (x86)" doesn't read correctly either, with similar symptoms.
Update: running
gci -r d:\ -include rscript.exe | % fullname
returns:
D:\Program Files\R\R-3.2.3\bin\i386\Rscript.exe
D:\Program Files\R\R-3.2.3\bin\x64\Rscript.exe
D:\Program Files\R\R-3.2.3\bin\Rscript.exe
The last of which is what my variable $RscriptD is set to.
The first error message in your image is:
Rscript.exe : The term 'D:\Program' is not recognized as an internal or external command
This message means that the call operator (&) successfully invoked Rscript.exe, but Rscript.exe itself then failed on something while using 'D:\Program'.
I don't know the exact details of Rscript.exe's internal process, but I think Rscript.exe tried to run D:\Program Files\R\R-3.2.3\bin\i386\Rscript.exe or D:\Program Files\R\R-3.2.3\bin\x64\Rscript.exe and could not handle the whitespace in Program Files, because the manual says:
Sub-architectures are also used on Windows, but by selecting executables within the appropriate bin directory, R_HOME/bin/i386 or R_HOME/bin/x64. For backwards compatibility there are executables R_HOME/bin/R.exe and R_HOME/bin/Rscript.exe: these will run an executable from one of the subdirectories, which one being taken first from the R_ARCH environment variable, then from the --arch command-line option and finally from the installation default (which is 32-bit for a combined 32/64 bit R installation).
According to this, I think it is better to call i386/Rscript.exe or x64/Rscript.exe directly rather than bin/Rscript.exe, which exists just for backwards compatibility.
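For example, using one of the paths your gci output returned (the .R script path here is just a placeholder):

# Call the architecture-specific executable directly; the call operator (&)
# handles the space in "Program Files" as long as the path is quoted
$RscriptD = 'D:\Program Files\R\R-3.2.3\bin\x64\Rscript.exe'
& $RscriptD 'D:\path\to\your_script.R'   # replace with your actual script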
I have a one-liner that is baked into a larger script for some high-level forensics. It is just a simple Copy-Item command that writes the destination folder and its contents back to my server. The code works great, BUT even with the switches:
-Recurse -Force
it is not returning the file with the .dat extension. As you can guess, I need the .dat file for analysis. I am running this from a privileged account. My only thought was that it is a read/write conflict and the host file is currently in use (or another system file is). What switch am I missing? The "mode" of the file that will not copy over is -a---, so it is not hidden, it just won't copy. Suggestions elsewhere have said to use xcopy/robocopy; if possible I do not want to call another dependency, since I'm already using PowerShell for the majority of the script and I'd prefer to stick with it. Any thoughts? Thanks in advance, this one has been tickling my brain for a while...
The only way to copy a file that is in use is to find the locking handle, close it, and then retry the copy operation (e.g. with Sysinternals handle.exe).
From your question it looks like you are trying to remotely copy user profiles, which include ntuser.dat and other files that are needed to keep the profile working properly. Even if you did manage to find a way to unload the .dat file(s), you would have to consider the impact that would have on the remote system.
Shadow copies are typically used by backup programs to copy files that are in use, so your best bet would be to find the latest backup of each remote computer and extract the needed files from those backed-up copies, or perhaps wait for the users to log off and then try again.
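If waiting for a backup or a logoff is not an option, one possibility is to create a Volume Shadow Copy yourself and copy the locked file from the snapshot. This is only a sketch, assuming an elevated session on the machine that holds the profile; the user and share paths are placeholders:

# Create a snapshot of C: via WMI (requires admin rights)
$result = (Get-WmiObject -List Win32_ShadowCopy).Create('C:\', 'ClientAccessible')
$shadow = Get-WmiObject Win32_ShadowCopy | Where-Object { $_.ID -eq $result.ShadowID }

# Expose the snapshot through a directory symlink so normal cmdlets can read it
$device = $shadow.DeviceObject + '\'
cmd /c mklink /d C:\shadow "$device" | Out-Null

# Copy the otherwise-locked file from the snapshot (paths are placeholders)
Copy-Item 'C:\shadow\Users\SomeUser\ntuser.dat' '\\server\share\forensics\' -Force

# Clean up the link and the snapshot
cmd /c rmdir C:\shadow
$shadow.Delete()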
I found a problem using robocopy in PowerShell. I used this tool to back up files from one disk (around 220GB) using the command:
robocopy $source $destination /s /mt:8
The problem is that the resulting copy consumed far more space in the destination location than the source (I stopped the backup when it reached around 850GB). Does anyone know why this happened?
Maybe there are some loops involved. robocopy has the "ability to skip NTFS junction points, which can cause copying failures because of infinite loops". Try running with the /XJ flag, or simply list/log what files are copied to check for loops. See the robocopy help (robocopy /?) and related posts about this behavior; an example is sketched below.
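For example, the same command from the question with junction points excluded and a log file to inspect what actually gets copied (the log path is just an example):

# /XJ skips NTFS junction points; /LOG: writes the file list for later inspection
robocopy $source $destination /s /mt:8 /XJ /LOG:C:\temp\backup.log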
UPDATE, for those who face the same problem: there were indeed infinite loops, which I found using WinDirStat. Mostly they were in the Application Data, Documents and Settings, and Users folders.
I have a directory on a server that contains 2.5 million files.
I need to do something with the files. However, when I try to create an object representing those files, I run into a memory limit. (It's a 32-bit machine.)
PS D:\> $files=dir LotsOfFiles
Get-ChildItem : Exception of type 'System.OutOfMemoryException' was thrown.
Is there any way to work around this, like creating a class representing a file that stores fewer attributes (I just need the name and last write date)?
Did you try using dir /b LotsOfFiles?
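Another option along the same lines is to stream the names instead of materializing 2.5 million FileInfo objects at once, e.g. with the .NET enumerator (requires .NET 4 or later; the folder and output paths below are just examples):

# EnumerateFiles yields paths lazily, so memory use stays flat
# ([PSCustomObject] needs PowerShell v3+; use New-Object PSObject -Property on v2)
[System.IO.Directory]::EnumerateFiles('D:\LotsOfFiles') | ForEach-Object {
    [PSCustomObject]@{
        Name          = [System.IO.Path]::GetFileName($_)
        LastWriteTime = [System.IO.File]::GetLastWriteTime($_)
    }
} | Export-Csv D:\files.csv -NoTypeInformation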