Issues trying to monitor a log file with PowerShell

I want to monitor a log file which is constantly being added to (every few seconds) over the course of a 2 hour period. I am currently using Get-Content file.txt -Wait, which displays the content to the screen and allows me to see what's being added to the file, but I need to take this a step further and actually watch for specific messages: if something I'm looking for appears in the log file, then do something. Initially I used a .NET file reader with a for loop, as shown below:
try {
    for (;;) {
        $line = $log_reader.ReadLine()
        if ($line -match "something") {
            Write-Host "we have a match"
            break
        }
    }
}
finally {
    $log_reader.Close()
}
The issue with this, however, is that it was causing the process that generates the log file to fall over - it throws an error because another process is using the log file it's creating. I thought this was odd because I assumed the .NET stream reader would just be 'reading' the file. I don't have any control over the process that generates the log file, so I don't know exactly what it's doing (I'm guessing it has the file open in read/write mode with some kind of lock, which gets upset when I try to read the file using a .NET stream reader). Doing a Get-Content on the file doesn't seem to cause this issue, however.
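As an aside (not part of the original post), one way to avoid fighting the writer's lock is to open the file with a permissive share mode before wrapping it in a stream reader. This is only a sketch; the path is hypothetical and it assumes the writing process tolerates shared readers:
# FileShare ReadWrite means we don't object to the other process keeping
# the file open for writing while we read it
$fs = [System.IO.File]::Open('C:\path\to\file.txt',
    [System.IO.FileMode]::Open,
    [System.IO.FileAccess]::Read,
    [System.IO.FileShare]::ReadWrite)
$log_reader = New-Object System.IO.StreamReader($fs)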
The question is, how can I use something like Get-Content (or another process) to monitor the log file but move onto another part of the script if a message I’m looking for in the log appears?

If you constantly want to monitor the log file and catch the desired pattern as soon as it appears:
while(!(Select-String -path 'c:\log.txt' -pattern 'something' -quiet)){};
## when it finds the pattern, it comes out of the loop and proceeds to the next part of the script
Write-host 'we have a match'
You may want to change the path of the file to yours.
Though this would work, it will keep your computer busy, since the while loop spins constantly. If you can afford some delay - say it is OK to detect the pattern 30 seconds (or whatever your threshold is) after it appears - then you can introduce a sleep:
while(!(Select-String -path 'c:\log.txt' -pattern 'something' -quiet)){start-sleep -seconds 30};
## when it finds the pattern, it comes out of the loop and proceeds to the next part of the script
Write-host 'we have a match'
You can add a small piece of logic to terminate the script after two hours, as sketched below; otherwise this becomes an infinite loop if 'something' never gets written to the log file.
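A minimal sketch of that timeout logic (assuming the same 30-second poll and a two-hour limit):
$deadline = (Get-Date).AddHours(2)
$found = $false
while ((Get-Date) -lt $deadline) {
    if (Select-String -Path 'c:\log.txt' -Pattern 'something' -Quiet) {
        $found = $true
        break
    }
    Start-Sleep -Seconds 30
}
if ($found) { Write-Host 'we have a match' }
else        { Write-Host 'no match within two hours' }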
Edit 1:
If you want to print the new lines at the console as well, you can try something like:
$b = 0
while (!(Select-String -Path 'c:\log.txt' -Pattern 'something' -Quiet))
{
    $a = Get-Content 'c:\log.txt'
    if ($a.count -gt $b)
    {
        # print only the lines added since the last pass
        $a[$b..($a.count - 1)]
    }
    $b = $a.count
    Start-Sleep -Seconds 30
}
## To print the line containing the pattern when while loop exited
(Get-content 'c:\log.txt')[-1]

Related

Powershell - Text file is not written to

Good evening,
I'm hardly experienced in programming, but every now and then I try to build the things I need myself.
What do I want to achieve with the script?
This script should read a text file with words.
There is one new word per line. While reading, the script should check whether the word has between 3 and 16 letters. If it has fewer than 3 or more than 16, the word is skipped. But if it is between 3 and 16, the word should be saved to a new text file, again with one word per line.
Here is what I created.
Please don't judge my script too hard.
$regex = '[A-Z][a-z]{3,16}'
foreach ($line in Get-Content -Path C:\Users\Administrator\Desktop\namecheck\words.txt)
{
    if ($line -match $regex)
    {
        Out-File -FilePath C:\Users\Administrator\Desktop\namecheck\sorted.txt -Force
    }
}
As mentioned above, the words are not written to a file. However, the file is created and the script also takes a little while to finish. So from my point of view something seems to happen.
'[A-Z][a-z]{3,16}' only enforces a minimum word length (and, since -match isn't anchored, no effective maximum). That would be your first issue, and your second one is your export: Out-File isn't being told what you want to export. So, either provide it a value via pipeline input, or use -InputObject:
$path = 'C:\Users\Administrator\Desktop\namecheck'
$stream = [System.IO.StreamReader]::new("$path\words.txt")

while (($line = $stream.ReadLine()) -ne $null)
{
    if ($line.Length -ge 3 -and $line.Length -le 16)
    {
        Out-File -FilePath "$path\sorted.txt" -InputObject $line -Append -Force
    }
}

$stream.Close()
$stream.Dispose()
Although a foreach loop is the fastest of the loops, when using Get-Content it still has to wait for the content gathering to complete before it can move through the list. I only mention this since you said the script takes quite a while, and without knowing the size of your words.txt file I'm left to assume that's the cause.
With that said, using [System.IO.StreamReader] should speed up the reading of the file, as you get to read and iterate through the file at the same time - with a while loop, that is.
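For a small file, a purely pipeline-based version achieves the same result with less ceremony. This is only a sketch (it reuses the $path variable defined above and the same inclusive 3-16 bounds):
Get-Content "$path\words.txt" |
    Where-Object { $_.Length -ge 3 -and $_.Length -le 16 } |
    Set-Content "$path\sorted.txt"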

Another PowerShell function return value and Write-Output

This has been beaten to death, but I can't find an exact solution for my problem.
I have a PowerShell script that can be run from the command line or from a scheduled task. I'm using the following line
Write-Output "Updating user $account" | Tee-Object $logfile -Append
to write relevant information to the screen and a log file. I need both because when run from a command line I can physically see what's going on, but when run from a scheduled task I have no visibility into its output, hence the log file.
Thing is, I'm modifying my code to use functions but as you might already know, Write-Output messes up the return values of functions when used within said functions.
What could I do that would do something similar to what I stated above without affecting the function's return value?
Thanks.
Just write to a log file. When running from the console, open another console and tail the log file.
Get-Content 'C:\path\to\the\logfile.txt' -Tail 10 -Wait
Assuming PowerShell version 5 or higher, where Write-Host writes to the information output stream (stream number 6), which doesn't interfere with the success output stream (stream number 1) and therefore doesn't pollute your function's data output:
The following is not a single command, but you could easily wrap this in a function:
Write-Host "Updating user $account" -iv msg; $msg >> $logfile
The above uses the common -InformationVariable (-iv) parameter to capture Write-Host's output in variable $msg (note how its name must be passed to -iv, i.e. without the leading $).
The message captured in $msg is then appended to file $logfile with >>, the appending redirection operator.
Note: >> is in effect an alias for Out-File -Append, and uses a fixed character encoding, both on creation and appending.
Use Add-Content and its -Encoding parameter instead, if you want to control the encoding.
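Wrapped into a reusable helper, that could look something like this (a sketch; the Write-Log name and the log path are made up for illustration, not part of the original answer):
function Write-Log {
    param([string] $Message)
    # Show the message on screen via the information stream (stream 6) ...
    Write-Host $Message -InformationVariable msg
    # ... and append the captured record to the log file, controlling the encoding.
    Add-Content -Path $script:LogFile -Value "$msg" -Encoding UTF8
}

$script:LogFile = 'C:\temp\script.log'   # hypothetical path
Write-Log "Updating user $account"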
Instead of explicitly writing each log line to a file, you may want to use a different approach that references the log file only at one location in the code.
Advantages:
Easy to change the log path and customize the log output (e.g. prepending a timestamp), without having to modify all code locations that log something.
Captures any kind of message, e.g. also error, verbose and debug messages (if enabled).
Captures messages of 3rd party code as well, without having to tell them the name of the log file.
Function SomeFunction {
    Write-Host "Hello from SomeFunction" # a log message
    "SomeFunctionOutput" # Implicit output (return value) of the function.
                         # This is short for Write-Output "SomeFunctionOutput".
}

Function Main {
    Write-Host "Hello from Main" # a log message

    # Call SomeFunction and store its result (aka output) in $x
    $x = SomeFunction

    # To demonstrate that "normal" function output is not affected by log messages
    $x -eq "SomeFunctionOutput"
}

# Call Main and redirect all of its output streams, including those of any
# called functions.
Main *>&1 | Tee-Object -FilePath $PSScriptRoot\Log.txt -Append
Output:
Hello from Main
Hello from SomeFunction
True
In this sample all code is wrapped in function Main. This allows us to easily redirect all output streams using the *>&1 syntax, which employs the redirection operator to "merge" the streams. This means that all commands further down the pipeline (in this example Tee-Object) receive any script messages that would normally end up in the console (except when written directly to the console, which circumvents PowerShell's streams).
Possible further improvements
You may want to use try/catch in function Main, so you also capture script-terminating errors:
try {
    SomeFunction # May also cause a script-terminating error, which will be caught.

    # Example code that causes a script-terminating error
    Write-Error "Fatal error" -ErrorAction Stop
}
catch {
    # Make sure script-terminating errors are logged
    Write-Error -ErrorRecord $_ -ErrorAction Continue
}

How to print PDF files in a sequence using Powershell?

I have a bunch of PDF files that I would like to print in sequence on a windows 7 computer using Powershell.
get-childItem "*.pdf" | sort lastWriteTime | foreach-object {start-process $_.Name -verb 'print'}
The printed files are sometimes out of order like 1) A.pdf, 2) C.pdf, 3) B.pdf 4) D.pdf.
Different trials printed out a different sequence of files, thus, I fear the error is related to the printing queue or the start-process command. My guess is that each printing process is fired without waiting for the previous printing process to be completed.
Is there a way to consistently print out PDF files in a sequence that I specify?
You are starting the processes in order, but by default Start-Process does not wait until the command completes before it starts the next one. Since the commands take different amounts of time to complete based on the .PDF file size, they print in whatever order they finish in. Try adding the -Wait switch to your Start-Process, which will force it to wait until the command completes before starting the next one.
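Applied to the command in the question, that would be something like the sketch below. Note this only helps if each file is printed by its own short-lived process; some PDF readers hand the job to an already-running instance and return immediately:
Get-ChildItem *.pdf | Sort-Object LastWriteTime | ForEach-Object {
    # -Wait blocks until the spawned print process exits before moving on
    Start-Process -FilePath $_.FullName -Verb Print -Wait
}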
EDIT: Found an article elsewhere on Stack which addresses this. Maybe it will help. https://superuser.com/questions/1277881/batch-printing-pdfs
Additionally, there are a number of PDF solutions out there which are not Adobe, and some of them are much better for automation than the standard Reader. Adobe has licensed .DLL files you can use, and the professional version of Acrobat has hooks into the back end .DLLs as well.
If you must use Acrobat Reader DC (closed system or some such) then I would try opening the file to print and getting a pointer to the process, then waiting some length of time, and forcing the process closed. This will work well if your PDF sizes are known and you can estimate how long it takes to finish printing so you're not killing the process before it finishes. Something like this:
ForEach ($PDF in (gci "*.pdf"))
{
    # Send the file to the default printer and keep a handle to the process
    $proc = Start-Process $PDF.FullName -Verb Print -PassThru
    Start-Sleep -Seconds $NumberOfSeconds
    $proc | Stop-Process
}
EDIT #2: One possible (but untested) optimization is that you might be able use the ProcessorTime counters $proc.PrivilegedProcessorTime and $proc.UserProcessorTime to see when the process goes idle. Of course, this assumes that the program goes completely idle after printing. I would try something like this:
ForEach ($PDF in (gci "*.pdf"))
{
    $proc = Start-Process $PDF.FullName -Verb Print -PassThru
    # reset the counters for each new process
    $LastPrivTime = 0
    $LastUserTime = 0
    Do
    {
        Start-Sleep -Seconds 1
        $proc.Refresh()   # the Process object caches these values; refresh to get current CPU times
        $PrivTimeElapsed = $proc.PrivilegedProcessorTime - $LastPrivTime
        $UserTimeElapsed = $proc.UserProcessorTime - $LastUserTime
        $LastPrivTime = $proc.PrivilegedProcessorTime
        $LastUserTime = $proc.UserProcessorTime
    }
    Until ($PrivTimeElapsed -eq 0 -and $UserTimeElapsed -eq 0)
    $proc | Stop-Process
}
If the program still ends too soon, you might need to increase the # of seconds to sleep inside the inner Do loop.

How to prevent input from displaying in console while script is running

I have a script that runs several loops of code and relies on specific input at various phases in order to advance. That functionality is working. My current issue is that extraneous input supplied by the user is displayed on screen in the console window, wherever the cursor position currently happens to be.
I have considered ignoring this issue since the functionality of the script is intact, however, I am striving for high standards with the console display of this script, and I would like to know a way to disable all user input period, unless prompted for. I imagine the answer has something to do with being able to command the Input Buffer to store 0 entries, or somehow disabling and then re-enabling the keyboard as needed.
I have tried using $HOST.UI.RawUI.Flushinputbuffer() at strategic locations in order to prevent characters from displaying, but I don't think there's anywhere I could put that in my loop that will perfectly block all input from displaying during code execution (it works great for making sure nothing gets passed when input is required, though). I've tried looking up the solution, but the only command I could find for manipulating the Input Buffer is the one above. I've also tried strategic implementation of the $host.UI.RawUI.KeyAvailable variable to detect keystrokes during execution, then $host.UI.RawUI.ReadKey() to determine if these keystrokes are unwanted and do nothing if they are, but the keystrokes still display in the console no matter what.
I am aware that this code is fairly broken as far as reading the key to escape the loop goes, but bear with me. I hashed up this example just so that you could see the issue I need help eliminating. If you hold down any letter key during this code's execution, you'll see unwanted input displaying.
$blinkPhase = 1
# Set coordinates for the cursor
$x = 106
$y = 16
$blinkTime = New-Object System.Diagnostics.Stopwatch
$blinkTime.Start()
$HOST.UI.RawUI.FlushInputBuffer()
do {
    # A fancy blinking ellipsis I use to indicate when Enter should be pressed to advance.
    $HOST.UI.RawUI.FlushInputBuffer()
    while ($host.UI.RawUI.KeyAvailable -eq $false) {
        if ($blinkTime.Elapsed.Milliseconds -gt 400) {
            if ($blinkPhase -eq 1) {
                [console]::SetCursorPosition($x,$y)
                Write-Host ". . ." -ForegroundColor Gray
                $blinkPhase = 2
                $blinkTime.Restart()
            } elseif ($blinkPhase -eq 2) {
                [console]::SetCursorPosition($x,$y)
                Write-Host "     "
                $blinkPhase = 1
                $blinkTime.Restart()
            }
        }
        Start-Sleep -m 10
    }
    # Reading for the actual key to break the loop and advance the script.
    $key = $host.UI.RawUI.ReadKey()
} while ($key.key -ne "Enter")
The expected result is that holding down any character key will NOT display the input in the console window while the ellipsis is blinking. The actual result, sans error message, is that a limited amount of unwanted/unnecessary input IS displayed in the console window, making the script look messy and also interfering with the blinking process.
What you're looking for is to not echo (print) the keys being pressed, and that can be done with:
$key = $host.UI.RawUI.ReadKey('IncludeKeyDown, NoEcho')
Also, your test for when Enter was pressed is flawed[1]; use the following instead:
# ...
} while ($key.Character -ne "`r")
Caveat: As of at least PSReadLine version 2.0.0-beta4, a bug causes $host.UI.RawUI.KeyAvailable to report false positives, so your code may not work as intended - see this GitHub issue.
Workaround: Use [console]::KeyAvailable instead, which is arguably the better choice anyway, given that you're explicitly targeting a console (terminal) environment with your cursor-positioning command.
As an aside: You can simplify and improve the efficiency of your solution by using a thread job to perform the UI updates in a background thread, while only polling for keystrokes in the foreground:
Note: Requires the ThreadJob module, which comes standard with PowerShell Core, and on Windows PowerShell can be installed with Install-Module ThreadJob -Scope CurrentUser, for instance.
Write-Host 'Press Enter to stop waiting...'

# Start the background thread job that updates the UI every 400 msecs.
# NOTE: for simplicity, I'm using a simple "spinner" here.
$jb = Start-ThreadJob {
    $i = 0
    while ($true) {
        [Console]::Write("`r{0}" -f '/-\|'[($i++ % 4)])
        Start-Sleep -ms 400
    }
}

# Start another thread job to do work in the background.
# ...

# In the foreground, poll for keystrokes in shorter intervals, so as
# to be more responsive.
While (-not [console]::KeyAvailable -or ([Console]::ReadKey($true)).KeyChar -ne "`r") {
    Start-Sleep -Milliseconds 50
}

$jb | Remove-Job -Force # Stop and remove the background UI thread.
Note the use of [Console]::Write() in the thread job, because Write-Host output wouldn't actually be passed straight through to the console.
[1] You tried to access a .Key property, which only the [System.ConsoleKeyInfo] type returned by [console]::ReadKey() has; the approximate equivalent in the $host.UI.RawUI.ReadKey() return type, [System.Management.Automation.Host.KeyInfo], is .VirtualKeyCode, but its specific type differs, so you can't (directly) compare it to "Enter". The latter type's .Character property returns the actual [char] instance pressed, which is the CR character ("`r") in the case of Enter.

Get-Content -wait not working as described in the documentation

I've noticed that when running Get-Content path/to/logfile -Wait, the output is actually not refreshed every second as the documentation says it should be. If I go to the log file's folder in Windows Explorer and refresh the folder, then Get-Content outputs the latest changes to the log file.
If I try tail -f with Cygwin on the same log file (not at the same time as Get-Content), then it tails as one would expect, refreshing in real time without me having to do anything.
Does anyone have an idea why this happens?
Edit: Bernhard König reports in the comments that this has finally been fixed in Powershell 5.
You are quite right. The -Wait option on Get-Content waits until the file has been closed before it reads more content. It is possible to demonstrate this in Powershell, but it can be tricky to get right, as loops such as:
while (1) {
    Get-Date | Add-Content c:\testfiles\test1.txt
    Start-Sleep -Milliseconds 500
}
will open and close the output file every time round the loop.
To demonstrate the issue open two Powershell windows (or two tabs in the ISE). In one enter this command:
PS C:\> 1..30 | % { "${_}: Write $(Get-Date -Format "hh:mm:ss")"; start-sleep 1 } >C:\temp\t.txt
That will run for 30 seconds writing 1 line into the file each second, but it doesn't close and open the file each time.
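An equivalent writer that makes the open handle explicit (a sketch, not from the original answer; whether -Wait shows each flushed line or only catches up on close is exactly what this thread disputes) would be:
$writer = [System.IO.StreamWriter]::new('C:\temp\t.txt')   # PowerShell 5+; use New-Object on older versions
try {
    1..30 | ForEach-Object {
        $writer.WriteLine("${_}: Write $(Get-Date -Format 'hh:mm:ss')")
        $writer.Flush()          # the line reaches the file, but the handle stays open
        Start-Sleep -Seconds 1
    }
}
finally {
    $writer.Close()              # the file is closed only here
}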
In the other window use Get-Content to read the file:
get-content c:\temp\t.txt -tail 1 -wait | % { "$_ read at $(Get-Date -Format "hh:mm:ss")" }
With the -Wait option you need to use Ctrl+C to stop the command so running that command 3 times waiting a few seconds after each of the first two and a longer wait after the third gave me this output:
PS C:\> get-content c:\temp\t.txt -tail 1 -wait | % { "$_ read at $(Get-Date -Format "hh:mm:ss")" }
8: Write 12:15:09 read at 12:15:09
PS C:\> get-content c:\temp\t.txt -tail 1 -wait | % { "$_ read at $(Get-Date -Format "hh:mm:ss")" }
13: Write 12:15:14 read at 12:15:15
PS C:\> get-content c:\temp\t.txt -tail 1 -wait | % { "$_ read at $(Get-Date -Format "hh:mm:ss")" }
19: Write 12:15:20 read at 12:15:20
20: Write 12:15:21 read at 12:15:32
21: Write 12:15:22 read at 12:15:32
22: Write 12:15:23 read at 12:15:32
23: Write 12:15:24 read at 12:15:32
24: Write 12:15:25 read at 12:15:32
25: Write 12:15:26 read at 12:15:32
26: Write 12:15:27 read at 12:15:32
27: Write 12:15:28 read at 12:15:32
28: Write 12:15:29 read at 12:15:32
29: Write 12:15:30 read at 12:15:32
30: Write 12:15:31 read at 12:15:32
From this I can clearly see:
Each time the command is run it gets the latest line written to the file. i.e. There is no problem with caching and no buffers needing flushed.
Only a single line is read and then no further output appears until the command running in the other window completes.
Once it does complete all of the pending lines appear together. This must have been triggered by the source program closing the file.
Also, when I repeated the exercise with the Get-Content command running in two other windows, one window read line 3 and then just waited, the other window read line 6, so the lines are definitely being written to the file.
It seems pretty conclusive that the -Wait option is waiting for a file close event, not waiting for the advertised 1 second. The documentation is wrong.
Edit:
I should add, since Adi Inbar seems insistent that I'm wrong, that the examples I gave here use Powershell only, as that seemed most appropriate for a Powershell discussion. I did also verify using Python that the behaviour is exactly as I described:
Content written to a file is readable by a new Get-Content -Wait command immediately provided the application has flushed its buffer.
A Powershell instance using Get-Content -Wait will not display new content in the file that is being written even though another Powershell instance, started later, sees the later data. This proves conclusively that the data is accessible to Powershell and Get-Content -Wait is not polling at 1 second intervals but waiting for some trigger event before it next looks for data.
The size of the file as reported by dir is updating while lines are being added, so it is not a case of Powershell waiting for the directory entry size to be updated.
When the process writing the file closes it, the Get-Content -Wait displays the new content almost instantly. If it were waiting until the data was flushed to disk, there could be a delay until Windows flushed its disk cache.
@AdiInbar, I'm afraid you don't understand what Excel does when you save a file. Have a closer look. If you are editing test.xlsx then there is also a hidden file ~test.xlsx in the same folder. Use dir ~test.xlsx -Hidden | select CreationTime to see when it was created. Save your file and now test.xlsx will have the creation time from ~test.xlsx. In other words, saving in Excel saves to the ~ file, then deletes the original, renames the ~ file to the original name, and creates a new ~ file. There's a lot of opening and closing going on there.
Before you save, Excel has the file you are looking at open, and afterwards that file is open, but it's a different file. I think Excel is too complex a scenario to say exactly what triggers Get-Content to show new content, but I'm sure you misinterpreted it.
It looks like Powershell is monitoring the file's Last Modified property. The problem is that "for performance reasons" the NTFS metadata containing this property is not automatically updated except under certain circumstances.
One circumstance is when the file handle is closed (hence @Duncan's observations). Another is when the file's information is queried directly, hence the Explorer refresh behaviour mentioned in the question.
You can observe the correlation by having Powershell monitor a log with Get-Content -Wait and having Explorer open on the folder in details view with the Last Modified column visible. Notice that Last Modified doesn't update automatically as the file is modified.
Now get the properties of the file in another window. E.g. at a command prompt, run type on the file. Or open another Explorer window in the same folder, and right-click the file and get its properties (for me, just right-clicking is enough). As soon as you do that, the first Explorer window will automatically update the Last Modified column and Powershell will notice the update and catch up with the log. In Powershell, touching the LastWriteTime property is enough:
(Get-Item file.log).LastWriteTime = (Get-Item file.log).LastWriteTime
or
(Get-Item file.log).LastWriteTime = Get-Date
So this is now working for me:
Start-Job {
$f=Get-Item full\path\to\log
while (1) {
$f.LastWriteTime = Get-Date
Start-Sleep -Seconds 10
}
}
Get-Content path\to\log -Wait
Can you tell us how to reproduce that?
I can start this script on one PS session:
get-content c:\testfiles\test1.txt -wait
and this in another session:
while (1) {
    Get-Date | Add-Content c:\testfiles\test1.txt
    Start-Sleep -Milliseconds 500
}
And I see the new entries being written in the first session.
It appears that Get-Content -Wait only picks up changes when the writes go through the Windows API, and that different ways of appending to a file behave differently.
program.exe > output.txt
And then
get-content output.txt -wait
Will not update. But
program.exe | add-content output.txt
will work with.
get-content output.txt -wait
So I guess it depends on how the application does output.
I can assure you that Get-Content -Wait does refresh every second, and shows you changes when the file changes on the disk. I'm not sure what tail -f is doing differently, but based on your description I'm just about certain that this issue is not with PowerShell but with write caching. I can't rule out the possibility that log4net is doing the caching, but I strongly suspect that OS-level caching is the culprit, for two reasons:
The documentation for log4j/log4net says that it flushes the buffer after every append operation by default, and I presume that if you had explicitly configured it not to flush after every append, you'd be aware of that.
I know for a fact that refreshing Windows Explorer triggers a write buffer flush if any files in the directory have changed. That's because it actually reads the file contents, not just the metadata, in order to provide extended information such as thumbnails and previews, and the read operation causes the write buffer to flush. So, if you're seeing the delayed updates every time you refresh the logfile's directory in Windows Explorer, that points strongly in this direction.
Try this: Open Device Manager, expand the Disk Drives node, open the Properties of the disk on which the logfile is stored, switch to the Policies tab, and uncheck Enable write caching on the device. I think you'll find that Get-Content -Wait will now show you the changes as they happen.
As for why tail -f is showing you the changes immediately as it is, I can only speculate. Maybe you're using it to monitor a logfile on a different drive, or perhaps Cygwin requests frequent flushes while you're running tail -f, to address this very issue.
UPDATE:
Duncan commented below that it is an issue with PowerShell, and posted an answer contending that Get-Content -Wait doesn't output new results until the file is closed, contrary to the documentation.
However, based on information already established and further testing, I've confirmed conclusively that it does not wait for the file to be closed, but outputs new data added to the file as soon as it's written to disk, and that the issue the OP is seeing is almost definitely due to write buffering.
To prove this, let the facts be submitted to a candid world:
I created an Excel spreadsheet, and ran Get-Content -Wait against the .xlsx file. When I entered new data into the spreadsheet, the Get-Content -Wait did not produce new output, which is expected while the new information is only in RAM and not on disk. However, whenever I saved the spreadsheet after adding data, new output was produced immediately.
Excel does not close the file when you save it. The file remains open until you close the Window from Excel, or exit Excel. You can verify this by trying to delete, rename, or otherwise modify the .xlsx file after you've saved it, while the window is still open in Excel.
The OP stated that he gets new output when he refreshes the folder in Windows Explorer. Refreshing the folder listing does not close the file. It does flush the write buffer if any of the files have changed. That's because it has to read the file's attributes, and this operation flushes the write buffer. I'll try to find some references for this, but as I noted above, I know for a fact that this is true.
I verified this behavior by running the following modified version of Duncan's test, which runs for 1,000 iterations instead of 50, and displays progress at the console so that you can track exactly how the output in your Get-Content -Wait window relates to the data that the pipeline has added to the file:
1..1000 | %{"${_}: Write $(Get-Date -Format "hh:mm:ss")"; Write-Host -NoNewline "$_..."; Start-Sleep 1} > .\gcwtest.txt
While this was running, I ran Get-Content -Wait .\gcwtest.txt in another window, and opened the directory in Windows Explorer. I found that if I refresh, more output is produced any time the file size in KB changes, and sometimes but not always even if nothing visible has changed. (More on the implications of that inconsistency later...)
Using the same test, I opened a third PowerShell window, and observed that all of the following trigger an immediate update in the Get-Content -Wait listing:
Listing the file's contents with plain old Get-Content .\gcwtest.txt
Reading any of the file's attributes. However, for attributes that don't change, only the first read triggers an update.
For example, (gi .\gcwtest.txt).lastwritetime triggers more output multiple times. On the other hand, (gi .\gcwtest.txt).mode or (gi .\gcwtest.txt).directory trigger more output the first time each, but not if you repeat them. Also note the following:
» This behavior is not 100% consistent. Sometimes, reading Mode or Directory doesn't trigger more output the first time, but it does if you repeat the operation. All subsequent repetitions after the first one that triggers updated output have no effect.
» If you repeat the test, reading attributes that are the same does not trigger output, unless you delete the .txt file before running the pipeline again. In fact, sometimes even (gi .\gcwtest.txt).lastwritetime doesn't trigger more output if you repeat the test without deleting gcwtest.txt.
» If you issue (gi .\gcwtest.txt).lastwritetime multiple times in one second, only the first one triggers output, i.e. only when the result has changed.
Opening the file in a text editor. If you use an editor that keeps the file handle open (notepad does not), you'll see that closing the file without saving does not cause Get-Content -Wait to output the lines added by the pipeline since you opened the file in the editor.
Tab-completing the file's name
After you try any of the tests above a few times, you may find that Get-Content -Wait outputs more lines periodically for the remainder of the pipeline's execution, even if you don't do anything. Not one line at a time, but in batches.
The inconsistency in behavior itself points to buffer flushing, which occurs according to variable criteria that are hard to predict, as opposed to closing, which occurs under clear-cut and consistent circumstances.
Conclusion: Get-Content -Wait works exactly as advertised. New content is displayed as soon as it's physically written to the file on disk*.
It should be noted that my suggestion to disable write caching on the drive did not make a difference for the test above, i.e. it did not result in Get-Content -Wait displaying new lines as soon as they're added to the text file by the pipeline, so perhaps the buffering responsible for the output latency is occurring at the filesystem or OS level as opposed to the disk's write cache. However, write buffering is clearly the explanation for the behavior observed in the OP's question.
* I'm not going to get into this in detail, since it's out of the scope of the question, but Get-Content -Wait does behave oddly if you add content to the file not at the end. It displays data from the end of the file equal in size to the amount of data added. The newly displayed data generally repeats data that was previously displayed, and may or may not include any of the new data, depending on whether the size of the new data exceeds the size of the data that follows it.
I ran into the same issue while trying to watch WindowsUpdate.log in real time. While not ideal, the code below allowed me to monitor the progress. -Wait didn't work due to the same file-writing limitations discussed above.
It displays the last 10 lines, sleeps for 10 seconds, clears the screen and then displays the last 10 again. CTRL + C to stop the stream.
while (1) {
    Get-Content C:\Windows\WindowsUpdate.log -Tail 10
    Start-Sleep -Seconds 10
    Clear-Host
}