COPY /Z equivalent in PowerShell

I'm trying to get used to using PowerShell whenever I'm tempted to reach for the familiar CMD command line that I've known and come to not-quite-love over a couple of decades. I'm starting to internalize Copy-Item, but one of the things I really miss when copying large files is the /Z argument. If you're not familiar with the /Z argument, it adds a quick percentage-based progress indicator. For small files, it's completely unnecessary, but it is a sanity saver when copying huge files, especially over a slow network.
Is there anything comparable to COPY /Z in PowerShell that doesn't involve lots of code? I'm hoping for something as easy and memorable as a simple argument, or maybe a pipeable cmdlet along the lines of:
Copy-Item -Path "C:\Source\File.exe" -Destination "C:\Destination\" | Show-Progress
Am I out of luck, or does something like this already exist?

While some PowerShell cmdlets support progress displays, Copy-Item does not.
For those that do support progress displays, such as Invoke-WebRequest, the logic is usually reversed: progress is shown by default and must be silenced on demand, with $ProgressPreference = 'SilentlyContinue'.
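For example (the URL is a placeholder):
$ProgressPreference = 'SilentlyContinue'   # suppress the default progress bar
Invoke-WebRequest -Uri 'https://example.com/big.zip' -OutFile big.zip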
While PowerShell offers the Write-Progress cmdlet for creating your own progress displays, this won't help you here, as you cannot track the internal progress of a single object being processed by another command.
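If you're willing to perform the copy yourself rather than via Copy-Item, you can roll your own progress display by copying in chunks and reporting each chunk via Write-Progress. A minimal sketch - the function name is hypothetical and the buffer size arbitrary:
function Copy-WithProgress {
    param([string]$Path, [string]$Destination)
    $src = [IO.File]::OpenRead($Path)
    $dst = [IO.File]::Create((Join-Path $Destination (Split-Path -Leaf $Path)))
    try {
        $buffer = New-Object byte[] 4MB   # arbitrary chunk size
        $total = $src.Length; $copied = 0
        while (($read = $src.Read($buffer, 0, $buffer.Length)) -gt 0) {
            $dst.Write($buffer, 0, $read)
            $copied += $read
            if ($total -gt 0) {
                Write-Progress -Activity "Copying $Path" -PercentComplete ([int](100 * $copied / $total))
            }
        }
    } finally { $src.Dispose(); $dst.Dispose() }
}
That said, it's a fair amount of code for something COPY /Z gives you for free.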
However, you can simply call cmd.exe's internal copy command from PowerShell, via cmd /c:
cmd /c copy /z C:\Source\File.exe C:\Destination\
Note:
As Jeroen Mostert points out, consider using robocopy.exe instead (which you can equally call from PowerShell) - see his comment on the question for details.


Why is PS Get-ChildItem so difficult

I did a ton of reading and searching for a way to have Get-ChildItem return a dir listing in wide format, in alphabetical order, with the number of files and directories in the current directory. Here is an image of what I ended up with, but not using GCI.
I ended up writing a small PS file.
$bArgs = "--%/c"
$cArgs = "Dir /n/w"
& cmd.exe -ArgumentList $bArgs $cArgs
As you can see I ended up using the old cmd.exe and passing the variables I wanted. I made an alias in my PS $Profile to call this script.
Can this not be accomplished in PS v5.1? Thanks for any help or advice for an old noob.
PowerShell's for-display formatting differs from cmd.exe's, so if you want the formatting of the latter's internal dir command, you'll indeed have to call it via cmd /c, wrapped in a function you can place in your $PROFILE file (note that aliases in PowerShell are merely alternative names and therefore cannot include baked-in arguments):
function lss { cmd /c dir /n /w /c $args }
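Usage example - any arguments are passed through to dir:
lss *.ps1   # effectively runs: cmd /c dir /n /w /c *.ps1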
Note that you lose a key benefit of PowerShell: the ability to process rich objects:
PowerShell-native commands output rich objects that enable robust programmatic processing; e.g., Get-ChildItem outputs System.IO.FileInfo and System.IO.DirectoryInfo instances. For-display formatting is decoupled from the data output and only kicks in when printing to the display (host), or when explicitly requested.
For instance, (Get-ChildItem -File).Name returns an array of all file names in the current directory.
By contrast, PowerShell can only use text to communicate with external programs, which makes processing cumbersome and brittle if information must be extracted via text parsing.
As Pierre-Alain Vigeant notes, the following PowerShell command gives you at least similar output formatting as your dir command, though it lacks the combined-size and bytes-free summary at the bottom:
Get-ChildItem | Format-Wide -AutoSize
To wrap that up in a function, use:
function lss { Get-ChildItem $args | Format-Wide -AutoSize }
Note, however, that - due to the use of a Format-* cmdlet, all of which output objects that are formatting instructions rather than data - this function's output is likewise not suited to further programmatic processing.
A proper solution would require you to author custom formatting data and associate them with the System.IO.FileInfo and System.IO.DirectoryInfo types, which is, however, nontrivial.
See the conceptual about_Format.ps1xml help topic, Export-FormatData, Update-FormatData, and this answer for a simple example.
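If it's specifically dir's trailing summary lines you miss, here's a rough, display-only sketch that approximates them on top of the Format-Wide output (the format strings are approximations; the bytes-free figure comes from Get-PSDrive):
function lss {
    $items = @(Get-ChildItem $args)
    $items | Format-Wide -AutoSize | Out-Host   # render first, so the summary prints after it
    $files = @($items | Where-Object { -not $_.PSIsContainer })
    $dirs  = @($items | Where-Object { $_.PSIsContainer })
    '{0} File(s) {1:N0} bytes' -f $files.Count, ($files | Measure-Object Length -Sum).Sum
    '{0} Dir(s), {1:N0} bytes free' -f $dirs.Count, (Get-PSDrive (Get-Location).Drive.Name).Free
}
Like the Format-Wide solution above, this is for display only, not for programmatic processing.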

How can I move from Windows traditional command line to the modern PowerShell?

I was used to a few command line tricks in Windows that increased my productivity a lot.
Now I am told that I should move to PowerShell because it's more POWERful. It took me a while to get the hang of it a little (objects, piping, etc.), and there are a lot of great tutorials on how to get a few things done.
However, some (relatively) basic tricks still puzzle me. For instance, what is the equivalent of the FOR structure in PowerShell?
For example,
FOR %i IN (*.jpg) DO Convert %i -resize 800x300 resized/%i
The above line takes all of the photos in a folder, uses ImageMagick's Convert tool to resize the images, and stores the resized images in a sub-folder called RESIZED.
In PowerShell I tried the command:
Dir ./ | foreach {convert $_.name -resize 800x300 resized/$_name}
This doesn't work, despite all of the googling I did. What am I missing?
Note that / rather than \ is used as the path separator in this answer, which works on Windows too and makes the code compatible with the cross-platform PowerShell Core editions.
tl;dr:
$convertExe = './convert' # adjust path as necessary
Get-ChildItem -File -Filter *.jpg | ForEach-Object {
    & $convertExe $_.Name -resize 800x300 resized/$($_.Name)
}
Read on for an explanation and background information.
The equivalent of:
FOR %i IN (*.jpg)
is:
Get-ChildItem -File -Filter *.jpg
or, with PowerShell's own wildcard expressions (slower, but more powerful):
Get-ChildItem -File -Path *.jpg # specifying parameter name -Path is optional
If you're not worried about restricting matches to files (as opposed to directories), Get-Item *.jpg will do too.
While dir works as a built-in alias for Get-ChildItem, I recommend getting used to PowerShell's own aliases, which follow a consistent naming convention; e.g., PowerShell's own alias for Get-ChildItem is gci.
Also, in scripts it is better to always use the full command names - both for readability and robustness.
As you've discovered, to process the matching files in a loop you must pipe (|) the Get-ChildItem command's output to the ForEach-Object cmdlet, to which you pass a script block ({ ... }) that is executed for each input object, and in which $_ refers to the input object at hand.
(foreach is a built-in alias for ForEach-Object, but note that there's also a foreach statement, which works differently, and it's important not to confuse the two.)
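To see the difference (both produce the same names here, but only ForEach-Object accepts pipeline input and processes it as it streams in):
Get-ChildItem -File -Filter *.jpg | ForEach-Object { $_.Name }     # cmdlet, via the pipeline
foreach ($file in Get-ChildItem -File -Filter *.jpg) { $file.Name }  # statement; collects all items first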
There are two pitfalls for someone coming from the world of cmd.exe (batch files):
In PowerShell, referring to an executable by filename only (e.g., convert) does not execute an executable by that name located in the current directory, for security reasons.
Only executables in the PATH can be executed by filename only, and unless you've specifically placed ImageMagick's convert.exe in a directory that comes before the SYSTEM32 directory in the PATH, the standard Windows convert.exe utility (whose purpose is to convert FAT disk volumes to NTFS) will be invoked.
Use Get-Command convert to see what will actually execute when you submit convert; $env:PATH shows the current value of the PATH environment variable (equivalent of echo %PATH%).
If your custom convert.exe is indeed in the current directory, invoke it as ./convert - i.e., you must explicitly reference its location.
Otherwise (your convert.exe is either not in the PATH at all or is shadowed by a different utility) specify the path to the executable as needed, but note that if you reference that path in a variable or use a string that is single- or double-quoted (which is necessary if the path contains spaces, for instance), you must invoke with &, the call operator; e.g.,
& $convertExe ... or & "$HOME/ImageMagick 2/convert" ...
PowerShell sends objects through the pipeline, not strings (this innovation is at the heart of PowerShell's power). When you reference an object's property or an element by index as part of a larger string, you must enclose the expression in $(...), the subexpression operator:
resized/$($_.Name) - Correct: property reference enclosed in $(...)
resized/$_.Name - !! INCORRECT - $_ is stringified on its own, followed by literal .Name
However, note that a stand-alone property/index reference or even method call does not need $(...); e.g., $_.Name by itself, as used in the command in the question, does work, and retains its original type (is not stringified).
Note that a variable without property / index access - such as $_ by itself - does not need $(...), but in the case at hand $_ would expand to the full path. For the most part, unquoted tokens that include variable references are treated like implicitly double-quoted strings, whose interpolation rules are summarized in this answer of mine; however, many additional factors come into play, which are summarized here; edge cases are highlighted in this question.
At the end of the day, the safest choice is to double-quote strings that contain variable references or subexpressions:
"resized/$($_.Name)" - SAFEST
Use:
Get-ChildItem | foreach {convert $_.name -resize 800x300 resized/$($_.name)}
Or perhaps you need to pass the full name (with path); here is also a shorter syntax, using aliases:
gci | % {convert $_.fullname -resize 800x300 resized/$($_.name)}
Also, you might want to supply the full path to the executable.
Revised based on comments.
There are many applications with the name "Convert". If I run
Get-Command Convert
on my computer, it shows me an app that is part of the Windows system. If PowerShell is running the wrong app for you, it's never going to work.
The solution will be to point PowerShell at the convert tool inside the ImageMagick program folder. A Google search on "ImageMagick PowerShell" will lead you to lots of people who have faced the same problem as you.
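For instance - a sketch only, since the install path varies by ImageMagick version (and note that in modern ImageMagick versions the binary is magick.exe rather than convert.exe):
$convert = 'C:\Program Files\ImageMagick-7.1.1-Q16\magick.exe'  # adjust to your install
Get-ChildItem -File -Filter *.jpg | ForEach-Object {
    & $convert $_.FullName -resize 800x300 (Join-Path resized $_.Name)
}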

Microsoft's Consistency in PowerShell CmdLet Parameter Naming

Let's say I wrote a PowerShell script that includes this command:
Get-ChildItem -Recurse
But instead I wrote:
Get-ChildItem -Re
to save time. Suppose that, after some time passed and I upgraded my PowerShell version, Microsoft decided to add a parameter to Get-ChildItem called "-Return", which, for example, returns True or False depending on whether any items are found.
In that virtual scenario, do I have to edit all my former scripts to ensure that they will function as expected? I understand Microsoft's attempt to save me typing time, but this is my concern, and therefore I will probably always try to write the complete parameter name.
Unless of course you know something I don't. Thank you for your insight!
This sounds more like a rant than a question, but to answer:
In that virtual scenario, do I have to edit all my former scripts to ensure that they will function as expected?
Yes!
You should always use the full parameter names in scripts (or any other snippet of reusable code).
Automatic resolution of partial parameter names, aliases, and other shortcuts is great for convenience when using PowerShell interactively. It lets us fire up powershell.exe and do:
ls -re *.ps1|% FullName
when we want to find the path to all scripts in the profile. Great for exploration!
But if I were to incorporate that functionality into a script I would do:
Get-ChildItem -Path $Home -Filter *.ps1 -Recurse |Select-Object -ExpandProperty FullName
not just for the reasons you mentioned, but also for consistency and readability - if a colleague of mine comes along who maybe isn't familiar with the shortcuts I'm using, they'll still be able to discern the meaning and expected output of the pipeline.
Note: There are currently three open issues on GitHub to add warning rules for this in PSScriptAnalyzer - I'm sure the project maintainers would love a hand with this :-)
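In the meantime, PSScriptAnalyzer's existing rules already flag alias use (though not yet partial parameter names - hence the open issues); for example:
Invoke-ScriptAnalyzer -Path .\MyScript.ps1 -IncludeRule PSAvoidUsingCmdletAliases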

Running preplog.exe on each log file

I have a folder with x number of web log files, and I need to prep them for bulk import to SQL.
For that, I have to run preplog.exe on each one of them.
I want to create a PowerShell script to do this for me. The problem I'm having is that preplog.exe has to be run in CMD, and I need to enter the input path and the output path.
For Example:
D:>preplog c:\blah.log > out.log
I've been playing with Foreach, but I haven't had any luck.
Any pointers will be much appreciated
I would guess...
Get-ChildItem "C:\Folder\MyLogFiles" | Foreach-Object { preplog $_.FullName | Out-File "preplog.log" -Append }
FYI, it is good practice on this site to post your not-working code, so that at least we have some context. Here I assume you're logging into one file in the current directory.
Additionally, you've said you need to run it in CMD, but you've tagged PowerShell - it pays to be specific. I've assumed PowerShell because it's a LOT easier to script.
I've also had to assume that the folder contains ONLY your log files; otherwise, you will need to include a Where-Object to filter the items.
In short I've made a lot of assumptions that means this may not be an accurate answer, so keep all this in mind for your next question =)
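If you'd rather produce one prepped output file per input log, here's a variation - still assuming preplog is on your PATH and writes to stdout, and with the .out.log naming being my invention:
Get-ChildItem "C:\Folder\MyLogFiles" -Filter *.log | ForEach-Object {
    & preplog $_.FullName | Out-File (Join-Path $_.DirectoryName ($_.BaseName + '.out.log'))
}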

How to iterate over a folder with a large number of files in PowerShell?

I'm trying to write a script that would go through 1.6 million files in a folder and move them to the correct folder based on the file name.
The reason is that NTFS can't handle a large number of files within a single folder without a degrade in performance.
The script calls "Get-ChildItem" to get all the items within that folder, and as you might expect, this consumes a lot of memory (about 3.8 GB).
I'm curious if there are any other ways to iterate through all the files in a directory without using up so much memory.
If you do
$files = Get-ChildItem $dirWithMillionsOfFiles
#Now, process with $files
you WILL face memory issues.
Use PowerShell piping to process the files:
Get-ChildItem $dirWithMillionsOfFiles | %{
    # process here
}
The second way will consume less memory and should ideally not grow beyond a certain point.
If you need to reduce the memory footprint further, you can skip Get-ChildItem and instead use a .NET API directly. I'm assuming you are on PowerShell v2; if so, first follow the steps here to enable .NET 4 to load in PowerShell v2.
In .NET 4 there are some nice APIs for enumerating files and directories, as opposed to returning them in arrays.
[IO.Directory]::EnumerateFiles("C:\logs") | ForEach-Object { <# move file $_ here #> }
By using this API, instead of [IO.Directory]::GetFiles(), only one file name will be processed at a time, so the memory consumption should be relatively small.
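Putting this together with your goal of moving files into folders based on their names - a sketch that assumes the first two characters of each file name determine the target subfolder (adjust to your actual naming scheme):
$source = 'D:\folderWithMillionsOfFiles'   # hypothetical path
[IO.Directory]::EnumerateFiles($source) | ForEach-Object {
    $name   = [IO.Path]::GetFileName($_)
    $subdir = Join-Path $source $name.Substring(0, 2)   # assumed naming scheme
    if (-not (Test-Path $subdir)) { New-Item -ItemType Directory -Path $subdir | Out-Null }
    Move-Item -LiteralPath $_ -Destination $subdir
}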
Edit
I was also assuming you had tried a simple pipelined approach like Get-ChildItem |ForEach { process }. If this is enough, I agree it's the way to go.
But I want to clear up a common misconception: In v2, Get-ChildItem (or really, the FileSystem provider) does not truly stream. The implementation uses the APIs Directory.GetDirectories and Directory.GetFiles, which in your case will generate a 1.6M-element array before any processing can occur. Once this is done, then yes, the remainder of the pipeline is streaming. And yes, this initial low-level piece has relatively minimal impact, since it is simply a string array, not an array of rich FileInfo objects. But it is incorrect to claim that O(1) memory is used in this pattern.
PowerShell v3, in contrast, is built on .NET 4, and thus takes advantage of the streaming APIs mentioned above (Directory.EnumerateDirectories and Directory.EnumerateFiles). This is a nice change, and it helps in scenarios just like yours.
This is how I implemented it without using .NET 4.0 - just PowerShell 2.0 and the old-fashioned DIR command.
It's just 2 lines of (easy) code:
cd <source_path>
cmd /c "dir /b" | ForEach-Object { Move-Item $_ -Destination "<dest_folder>" }
My PowerShell process only uses 15 MB. No changes needed on the old Windows 2008 server!
Cheers!