I'm running the DTEXEC.exe command from within a PowerShell script, trying to capture and log the output to a file. Sometimes the output is incomplete and I'm trying to figure out why this is the case and what might be done about it. The lines that never seem to get logged are the most interesting:
DTEXEC: The package execution returned DTSER_SUCCESS(0)
Started: 10:58:43 a.m.
Finished: 10:59:24 a.m.
Elapsed: 41.484 seconds
The output always seems incomplete for packages that execute in less than ~8 seconds, which might be a clue (there isn't much output, or they finish quickly).
I'm using .NET's System.Diagnostics.Process and ProcessStartInfo to set up and run the command, and I'm redirecting stdout and stderr to event handlers that each append to a StringBuilder, which is subsequently written to disk.
The problem feels like a timing issue or a buffering issue. For the timing issue, I've attempted to use Monitor.Enter/Exit. If it's a buffering issue, I'm not sure how to force the Process not to buffer stdout and stderr.
The environment is:
- PowerShell 2 running CLR version 2
- SQL 2008 32-bit DTEXEC.exe
- Host Operating System: XP Service Pack 3.
Here's the code:
function Execute-SSIS-Package
{
param([String]$fileName)
$cmd = GetDTExecPath
$proc = New-Object System.Diagnostics.Process
$proc.StartInfo.FileName = $cmd
$proc.StartInfo.Arguments = "/FILE ""$fileName"" /CHECKPOINTING OFF /REPORTING ""EWP"""
$proc.StartInfo.RedirectStandardOutput = $True
$proc.StartInfo.RedirectStandardError = $True
$proc.StartInfo.WorkingDirectory = Get-Location
$proc.StartInfo.UseShellExecute = $False
$proc.StartInfo.CreateNoWindow = $False
Write-Host $proc.StartInfo.FileName $proc.StartInfo.Arguments
$cmdOut = New-Object System.Text.StringBuilder
$errorEvent = Register-ObjectEvent -InputObj $proc `
-Event "ErrorDataReceived" `
-MessageData $cmdOut `
-Action `
{
param
(
[System.Object] $sender,
[System.Diagnostics.DataReceivedEventArgs] $e
)
try
{
[System.Threading.Monitor]::Enter($Event.MessageData)
Write-Host -ForegroundColor "DarkRed" $e.Data
[void](($Event.MessageData).AppendLine($e.Data))
}
catch
{
Write-Host -ForegroundColor "Red" "Error capturing processes std error" $Error
}
finally
{
[System.Threading.Monitor]::Exit($Event.MessageData)
}
}
$outEvent = Register-ObjectEvent -InputObj $proc `
-Event "OutputDataReceived" `
-MessageData $cmdOut `
-Action `
{
param
(
[System.Object] $sender,
[System.Diagnostics.DataReceivedEventArgs] $e
)
try
{
[System.Threading.Monitor]::Enter($Event.MessageData)
#Write-Host $e.Data
[void](($Event.MessageData).AppendLine($e.Data))
}
catch
{
Write-Host -ForegroundColor "Red" "Error capturing processes std output" $Error
}
finally
{
[System.Threading.Monitor]::Exit($Event.MessageData)
}
}
$isStarted = $proc.Start()
$proc.BeginOutputReadLine()
$proc.BeginErrorReadLine()
while (!$proc.HasExited)
{
Start-Sleep -Milliseconds 100
}
Start-Sleep -Milliseconds 1000
$procExitCode = $proc.ExitCode
$procStartTime = $proc.StartTime
$procFinishTime = Get-Date
$proc.Close()
$proc.CancelOutputRead()
$proc.CancelErrorRead()
$result = New-Object PsObject -Property @{
ExitCode = $procExitCode
StartTime = $procStartTime
FinishTime = $procFinishTime
ElapsedTime = $procFinishTime.Subtract($procStartTime)
StdErr = ""
StdOut = $cmdOut.ToString()
}
return $result
}
The reason that your output is truncated is that PowerShell returns from WaitForExit() and sets the HasExited property before it has processed all the output events in the queue.
One solution is to loop for an arbitrary amount of time with short sleeps to allow the events to be processed; PowerShell event processing appears not to be pre-emptive, so a single long sleep does not allow events to process.
A much better solution is to also register for the Exited event (in addition to the Output and Error events) on the Process. This event is the last in the queue, so if you set a flag when it occurs you can loop with short sleeps until the flag is set and know that you have processed all the output events.
I have written up a full solution on my blog but the core snippet is:
# Set up a pair of stringbuilders to which we can stream the process output
$global:outputSB = New-Object -TypeName "System.Text.StringBuilder";
$global:errorSB = New-Object -TypeName "System.Text.StringBuilder";
# Flag that shows that final process exit event has not yet been processed
$global:myprocessrunning = $true
$ps = new-object System.Diagnostics.Process
$ps.StartInfo.Filename = $target
$ps.StartInfo.WorkingDirectory = Split-Path $target -Parent
$ps.StartInfo.UseShellExecute = $false
$ps.StartInfo.RedirectStandardOutput = $true
$ps.StartInfo.RedirectStandardError = $true
$ps.StartInfo.CreateNoWindow = $true
# Register Asynchronous event handlers for Standard and Error Output
Register-ObjectEvent -InputObject $ps -EventName OutputDataReceived -action {
if(-not [string]::IsNullOrEmpty($EventArgs.data)) {
$global:outputSB.AppendLine(((get-date).toString('yyyyMMddHHmm')) + " " + $EventArgs.data)
}
} | Out-Null
Register-ObjectEvent -InputObject $ps -EventName ErrorDataReceived -action {
if(-not [string]::IsNullOrEmpty($EventArgs.data)) {
$global:errorSB.AppendLine(((get-date).toString('yyyyMMddHHmm')) + " " + $EventArgs.data)
}
} | Out-Null
Register-ObjectEvent -InputObject $ps -EventName Exited -action {
$global:myprocessrunning = $false
} | Out-Null
$ps.start() | Out-Null
$ps.BeginOutputReadLine();
$ps.BeginErrorReadLine();
# We set a timeout after which time the process will be forcibly terminated
$processTimeout = $timeoutseconds * 1000
while (($global:myprocessrunning -eq $true) -and ($processTimeout -gt 0)) {
# We must use lots of short sleeps rather than a single long one, otherwise events are not processed
$processTimeout -= 50
Start-Sleep -m 50
}
if ($processTimeout -le 0) {
Add-Content -Path $logFile -Value (((get-date).toString('yyyyMMddHHmm')) + " PROCESS EXCEEDED EXECUTION ALLOWANCE AND WAS ABENDED!")
$ps.Kill()
}
# Append the Standard and Error Output to log file, we don't use Add-Content as it appends a carriage return that is not required
[System.IO.File]::AppendAllText($logFile, $global:outputSB)
[System.IO.File]::AppendAllText($logFile, $global:errorSB)
My 2 cents... it's not a PowerShell issue but an issue/bug in the System.Diagnostics.Process class and the underlying shell. I've seen times when wrapping StdError and StdOut does not catch everything, and other times when the 'listening' wrapper application will hang indefinitely because of HOW the underlying application writes to the console (in the C/C++ world there are MANY different ways to do this, e.g. WriteFile, fprintf, cout, etc.).
In addition, there are more than 2 outputs that may need to be captured, but the .NET Framework only shows you those two (given they are the two primary ones); see this article about command redirection, as it starts to give hints.
My guess (for both your issue as well as mine) is that it has to do with some low-level buffer flushing and/or ref counting. (If you want to get deep, you can start here.)
One (very hacky) way to get around this is, instead of executing the program directly, to wrap it in a call to cmd.exe with 2>&1, but this method has its own pitfalls and issues.
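For illustration, a minimal sketch of that wrapping, reusing the question's $proc ($dtexecPath and $fileName are placeholders, and cmd.exe quoting rules are notoriously fragile, so treat this as a starting point only):
# Hacky sketch: have cmd.exe merge stderr into stdout before .NET ever sees it
$proc.StartInfo.FileName = $env:ComSpec   # usually C:\Windows\System32\cmd.exe
$proc.StartInfo.Arguments = "/c `"$dtexecPath`" /FILE `"$fileName`" 2>&1"
$proc.StartInfo.RedirectStandardOutput = $true   # everything now arrives on stdout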
The most ideal solution is for the executable to have a logging parameter, and then go parse the log file after the process exits...but most of the time you don't have that option.
But wait, we're using PowerShell... why are you using System.Diagnostics.Process in the first place? You can just call the command directly:
$output = & (GetDTExecPath) /FILE "$fileName" /CHECKPOINTING OFF /REPORTING "EWP"
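If you also need to detect failure, the native exit code lands in $LASTEXITCODE, and 2>&1 merges the error stream into the captured output. A sketch using the same GetDTExecPath helper from the question:
# Capture stdout and stderr together and check the native exit code
$output = & (GetDTExecPath) /FILE "$fileName" /CHECKPOINTING OFF /REPORTING "EWP" 2>&1
if ($LASTEXITCODE -ne 0) { Write-Host "DTEXEC failed with exit code $LASTEXITCODE" }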
Is there any simple way (i.e., a script) to watch a file in PowerShell and run commands if the file changes? I have been googling but can't find a simple solution. Basically, I run a script in PowerShell, and if the file changes, then PowerShell runs other commands.
EDIT
OK, I think I made a mistake. I don't need a script, I need a function that I can include in my $PROFILE.ps1 file. But still, I have tried hard and I'm unable to write it, so I will give a bounty. It has to look like this:
function watch($command, $file) {
if($file #changed) {
#run $command
}
}
There is an NPM module that does what I want, watch, but it only watches folders, not files, and it's not PowerShell xD.
Here is an example I have found in my snippets. Hopefully it is a little bit more comprehensive.
First you need to create a file system watcher and subsequently you subscribe to an event that the watcher is generating. This example listens for “Create” events, but could easily be modified to watch out for “Change”.
$folder = "C:\Users\LOCAL_~1\AppData\Local\Temp\3"
$filter = "*.LOG"
$Watcher = New-Object IO.FileSystemWatcher $folder, $filter -Property @{
IncludeSubdirectories = $false
NotifyFilter = [IO.NotifyFilters]'FileName, LastWrite'
}
$onCreated = Register-ObjectEvent $Watcher -EventName Created -SourceIdentifier FileCreated -Action {
$path = $Event.SourceEventArgs.FullPath
$name = $Event.SourceEventArgs.Name
$changeType = $Event.SourceEventArgs.ChangeType
$timeStamp = $Event.TimeGenerated
Write-Host "The file '$name' was $changeType at $timeStamp"
Write-Host $path
#Move-Item $path -Destination $destination -Force -Verbose
}
I will try to narrow this down to your requirements.
If you run this as part of your "profile.ps1" script you should read The Power of Profiles which explains the different profile scripts available and more.
Also, you should understand that waiting for a change in a folder can't be run as a function in the script. The profile script has to finish for your PowerShell session to start. You can, however, use a function to register an event.
What this does is register a piece of code to be executed every time an event is triggered. This code will be executed in the context of your current PowerShell host (or shell) while the session remains open. It can interact with the host session, but has no knowledge of the original script that registered the code. The original script has probably finished already by the time your code is triggered.
Here is the code:
Function Register-Watcher {
param ($folder)
$filter = "*.*" #all files
$watcher = New-Object IO.FileSystemWatcher $folder, $filter -Property @{
IncludeSubdirectories = $false
EnableRaisingEvents = $true
}
$changeAction = [scriptblock]::Create('
# This is the code which will be executed every time a file change is detected
$path = $Event.SourceEventArgs.FullPath
$name = $Event.SourceEventArgs.Name
$changeType = $Event.SourceEventArgs.ChangeType
$timeStamp = $Event.TimeGenerated
Write-Host "The file $name was $changeType at $timeStamp"
')
Register-ObjectEvent $Watcher -EventName "Changed" -Action $changeAction
}
Register-Watcher "c:\temp"
After running this code, change any file in the "C:\temp" directory (or any other directory you specify). You will see an event triggering execution of your code.
Also, valid FileSystemWatcher events you can register are "Changed", "Created", "Deleted" and "Renamed".
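If you need several of them, you can reuse a single action scriptblock for multiple event names; a minimal sketch, assuming the $watcher from the code above:
# Sketch: register the same action for several watcher events
$action = { Write-Host "$($Event.SourceEventArgs.ChangeType): $($Event.SourceEventArgs.FullPath)" }
"Changed", "Created", "Deleted", "Renamed" | ForEach-Object {
    Register-ObjectEvent $watcher -EventName $_ -Action $action
} | Out-Null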
I will add another answer, because my previous one missed the requirements.
Requirements
- Write a function to WAIT for a change in a specific file
- When a change is detected the function will execute a predefined command and return execution to the main script
- File path and command are passed to the function as parameters
There is already an answer using file hashes. I want to follow up on my previous answer and show you how this can be accomplished using FileSystemWatcher.
$File = "C:\temp\log.txt"
$Action = 'Write-Output "The watched file was changed"'
$global:FileChanged = $false
function Wait-FileChange {
param(
[string]$File,
[string]$Action
)
$FilePath = Split-Path $File -Parent
$FileName = Split-Path $File -Leaf
$ScriptBlock = [scriptblock]::Create($Action)
$Watcher = New-Object IO.FileSystemWatcher $FilePath, $FileName -Property @{
IncludeSubdirectories = $false
EnableRaisingEvents = $true
}
$onChange = Register-ObjectEvent $Watcher Changed -Action {$global:FileChanged = $true}
while ($global:FileChanged -eq $false){
Start-Sleep -Milliseconds 100
}
& $ScriptBlock
Unregister-Event -SubscriptionId $onChange.Id
}
Wait-FileChange -File $File -Action $Action
Here is the solution I ended up with based on several of the previous answers here. I specifically wanted:
- My code to be code, not a string
- My code to be run on the I/O thread so I can see the console output
- My code to be called every time there was a change, not once
Side note: I've left in the details of what I wanted to run due to the irony of using a global variable to communicate between threads so I can compile Erlang code.
Function RunMyStuff {
# this is the bit we want to happen when the file changes
Clear-Host # remove previous console output
& 'C:\Program Files\erl7.3\bin\erlc.exe' 'program.erl' # compile some erlang
erl -noshell -s program start -s init stop # run the compiled erlang program:start()
}
Function Watch {
$global:FileChanged = $false # dirty... any better suggestions?
$folder = "M:\dev\Erlang"
$filter = "*.erl"
$watcher = New-Object IO.FileSystemWatcher $folder, $filter -Property @{
IncludeSubdirectories = $false
EnableRaisingEvents = $true
}
Register-ObjectEvent $Watcher "Changed" -Action {$global:FileChanged = $true} > $null
while ($true){
while ($global:FileChanged -eq $false){
# We need this to block the IO thread until there is something to run
# so the script doesn't finish. If we call the action directly from
# the event it won't be able to write to the console
Start-Sleep -Milliseconds 100
}
# a file has changed, run our stuff on the I/O thread so we can see the output
RunMyStuff
# reset and go again
$global:FileChanged = $false
}
}
RunMyStuff # run the action at the start so I can see the current output
Watch
You could pass folder/filter/action into Watch if you want something more generic; a sketch follows. Hopefully this is a helpful starting point for someone else.
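For example (an untested sketch along the same lines, with the names generalized):
Function Watch-Folder {
    param([string]$Folder, [string]$Filter, [scriptblock]$Action)
    $global:FileChanged = $false
    $watcher = New-Object IO.FileSystemWatcher $Folder, $Filter -Property @{
        IncludeSubdirectories = $false
        EnableRaisingEvents = $true
    }
    Register-ObjectEvent $watcher "Changed" -Action {$global:FileChanged = $true} > $null
    while ($true) {
        # Block the main thread until the event handler flips the flag
        while ($global:FileChanged -eq $false) { Start-Sleep -Milliseconds 100 }
        & $Action   # run the caller-supplied block where console output is visible
        $global:FileChanged = $false
    }
}
Watch-Folder -Folder "M:\dev\Erlang" -Filter "*.erl" -Action { RunMyStuff }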
- Calculate the hash of a list of files
- Store it in a dictionary
- Check each hash on an interval
- Perform action when hash is different
function watch($f, $command, $interval) {
$sha1 = New-Object System.Security.Cryptography.SHA1CryptoServiceProvider
$hashfunction = '[System.BitConverter]::ToString($sha1.ComputeHash([System.IO.File]::ReadAllBytes($file)))'
$files = @{}
foreach ($file in $f) {
$hash = iex $hashfunction
$files[$file.Name] = $hash
echo "$hash`t$($file.FullName)"
}
while ($true) {
sleep $interval
foreach ($file in $f) {
$hash = iex $hashfunction
if ($files[$file.Name] -ne $hash) {
iex $command
}
}
}
}
Example usage:
$c = 'send-mailmessage -to "admin@whatever.com" -from "watch@whatever.com" -subject "$($file.Name) has been altered!"'
$f = ls C:\MyFolder\aFile.jpg
watch $f $c 60
You can use the System.IO.FileSystemWatcher to monitor a file.
$watcher = New-Object System.IO.FileSystemWatcher
$watcher.Path = $searchPath
$watcher.IncludeSubdirectories = $true
$watcher.EnableRaisingEvents = $true
See also this article
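On its own that only arms the watcher; you still have to consume its events. A minimal sketch of one way to do that:
# Sketch: queue Changed events and block until the next one arrives
Register-ObjectEvent $watcher -EventName Changed -SourceIdentifier FileChanged
$evt = Wait-Event -SourceIdentifier FileChanged
Write-Host "Changed:" $evt.SourceEventArgs.FullPath
Remove-Event -SourceIdentifier FileChanged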
Here is another option.
I just needed to write my own to watch and run tests within a Docker container. Jan's solution is much more elegant, but FileSystemWatcher is broken within Docker containers presently. My approach is similar to Vasili's, but much lazier, trusting the file system's write time.
Here's the function I needed, which runs the command block each time the file changes.
function watch($command, $file) {
$this_time = (get-item $file).LastWriteTime
$last_time = $this_time
while($true) {
if ($last_time -ne $this_time) {
$last_time = $this_time
invoke-command $command
}
sleep 1
$this_time = (get-item $file).LastWriteTime
}
}
Here is one that waits until the file changes, runs the block, then exits.
function waitfor($command, $file) {
$this_time = (get-item $file).LastWriteTime
$last_time = $this_time
while($last_time -eq $this_time) {
sleep 1
$this_time = (get-item $file).LastWriteTime
}
invoke-command $command
}
I had a similar problem. I first wanted to use Windows events and register for them, but that would be less fault-tolerant than the solution below.
My solution was a polling script (intervals of 3 seconds). The script has a minimal footprint on the system and notices changes very quickly. During the loop my script can do more things (actually I check 3 different folders).
My polling script is started through the Task Scheduler. The schedule is: start every 5 minutes, with the flag stop-when-already-running. This way it will restart after a reboot or after a crash.
Using the Task Scheduler itself to poll every 3 seconds would be too frequent for it.
When you add a task to the scheduler, make sure you do not use network drives (that would call for extra settings) and give your user batch privileges.
I give my script a clean start by shutting it down a few minutes before midnight. The Task Scheduler starts the script again every morning (the init function of my script will exit within a minute of midnight).
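A minimal sketch of such a polling loop (the folder paths are placeholders) might look like this:
# Hedged sketch of a 3-second polling loop over several watched folders
$folders = "C:\watch1", "C:\watch2", "C:\watch3"   # placeholders
$lastSeen = @{}
while ($true) {
    foreach ($folder in $folders) {
        Get-ChildItem $folder | ForEach-Object {
            if ($lastSeen[$_.FullName] -ne $_.LastWriteTime) {
                $lastSeen[$_.FullName] = $_.LastWriteTime
                Write-Host "Changed: $($_.FullName)"   # the real work goes here
            }
        }
    }
    Start-Sleep -Seconds 3
}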
I was looking for something I could run as a one-liner from a terminal. This is what I arrived at:
while ($True) { if ((Get-Item .\readme.md).LastWriteTime -ne $LastWriteTime) { "Hello!"; $LastWriteTime = (Get-Item .\readme.md).LastWriteTime }; Sleep 1 }
Another simple version:
$date = get-date
while ( (dir file.txt -ea 0 | % lastwritetime) -lt $date -and $count++ -lt 10) {
sleep 1
}
'file changed or timeout'
In a pretty restricted environment I'm basically only allowed to use PowerShell + Plink if I want to automate some tasks.
I want to create a function that:
- shows the output as it arrives, if required
- captures all output (stdout and stderr) to make it available later for further parsing or logging to console/file/whatever
- inputs a password automatically
Unfortunately, after the line where I input the password, output capture stops. It used to work when I was only capturing stdout; after capturing stderr as well, no more luck.
The code:
function BaseRun {
param ($command, $arguments, $output = "Console")
$procInfo = New-Object System.Diagnostics.ProcessStartInfo
$procInfo.RedirectStandardOutput = $true
$procInfo.RedirectStandardError = $true
$procInfo.RedirectStandardInput = $true
$procInfo.FileName = $command
$procInfo.Arguments = $arguments
$procInfo.UseShellExecute = $false
$process = New-Object System.Diagnostics.Process
$process.StartInfo = $procInfo
[void]$process.Start()
$outputStream = $process.StandardOutput
$errorStream = $process.StandardError
$inputStream = $process.StandardInput
$outputBuffer = New-Object System.Text.StringBuilder
Start-Sleep -m 2000
$inputStream.Write("${env:password}`n")
while (-not $process.HasExited) {
do {
$outputLine = $outputStream.ReadLine()
$errorLine = $errorStream.ReadLine()
[void]$outputBuffer.Append("$outputLine`n")
if (($output -eq "All") -or ($output -eq "Console")) {
Write-Host "$outputLine"
Write-Host "$errorLine"
}
} while (($outputLine -ne $null) -and ($errorLine -ne $null))
}
return $outputBuffer.ToString()
}
Based on @w0xx0m's and @Martin Prikryl's help, I managed to make this working solution:
Register-ObjectEvent -InputObject $process `
-EventName OutputDataReceived -SourceIdentifier processOutputDataReceived `
-Action {
$data = $EventArgs.data
if($data -ne $null) { Write-Host $data }
} | Out-Null
Register-ObjectEvent -InputObject $process `
-EventName ErrorDataReceived -SourceIdentifier processErrorDataReceived `
-Action {
$data = $EventArgs.data
if($data -ne $null) { Write-Host $data }
} | Out-Null
[void]$process.Start()
$process.BeginOutputReadLine()
$process.BeginErrorReadLine()
$inputStream = $process.StandardInput
Start-Sleep -m 2000
$inputStream.Write("${env:password}`n")
$process.WaitForExit()
Unregister-Event -SourceIdentifier processOutputDataReceived
Unregister-Event -SourceIdentifier processErrorDataReceived
$inputStream.Close()
For brevity I've removed the process launch section (it's the same as the one from above) and the data processing (in the example I just Write-Host it).
When you are reading both stdout and stderr, you cannot use ReadLine.
First, if there's only stderr output and no stdout, $outputStream.ReadLine() never returns and you never get to $errorStream.ReadLine().
A bigger problem is that there's only a limited buffer in Windows for the outputs. So when there's a lot of stderr before a complete stdout line is produced, the stderr buffer fills up and the application (Plink) stops on its next attempt to write to stderr, waiting for the stderr buffer to be consumed. That never happens, because you keep waiting on the wrong (stdout) buffer. A deadlock.
You have to use a simple Read, and never wait synchronously when there's no output available.
I want to know if it's bad form to use try blocks to test if a file is locked. Here's the background.
I need to send text output of an application to two serial printers simultaneously. My solution was to use MportMon and a PowerShell script. The way it's supposed to work is that the application default-prints to the MportMon virtual printer port, which actually creates a uniquely named file in a "dropbox" folder. The PowerShell script uses a FileSystemWatcher to monitor the folder, and when a new file is created, it takes the textual content, pushes it out to two serial printers, and then deletes the file so as not to fill up the folder.
I was having a problem when trying to read the text from the file that the virtual printer created: I was getting errors because the file was still locked. To fix the problem, I used an FSM to implement the logic. Instead of checking for a lock every time before attempting to get the content from the file, I use a try block that attempts to read the content; if it fails, the catch block just reaffirms the state the FSM is in, and the process repeats until it succeeds. It seems to work fine, but I've read somewhere that it's bad practice. Is there any danger in this method, or is it safe and reliable? Below is my code.
$fsw = New-Object system.io.filesystemwatcher
$q = New-Object system.collections.queue
$path = "c:\DropBox"
$fsw.path = $path
$state = "waitforQ"
[string]$tempPath = $null
Register-ObjectEvent -InputObject $fsw -EventName created -Action {
$q.enqueue( $event.sourceeventargs.fullpath )
}
while($true) {
switch($state)
{
"waitforQ" {
echo "waitforQ"
if ($q.count -gt 0 ) {$state = "retrievefromQ"}
}
"retrievefromQ" {
echo "retrievefromQ"
$tempPath = $q.dequeue()
$state = "servicefile"
}
"servicefile" {
echo " in servicefile "
try
{
$text = Get-Content -ErrorAction stop $tempPath
#echo "in try"
$text | out-printer db1
$text | out-printer db2
echo " $text "
$state = "waitforQ"
rm $tempPath
}
catch
{
#echo "in catch"
$state = "servicefile"
}
}
Default { $state = "waitforQ" }
}
}
I wouldn't say it's bad practice to test a file to see if it's locked, but it's not as clean as checking the handles used by other processes. Personally I'd test the file like you do, but I'd adjust a few parts to make it safer/better.
That switch statement looks way too complicated (to me); I'd replace it with a simple if-test: "If there are files in the queue, proceed; if not, wait."
You also need to slow down. You will try to read the file as many times as possible while it's locked. This is a waste of resources, since it will take some time for the current application to let the file go and save the data to the HDD. Add some pauses; you won't notice them, but your CPU will love them. The same applies when there are no files in the queue.
You might also benefit from adding a timeout, like max 50 attempts to read the file, to avoid the script getting stuck if one specific file is never released.
Try:
$fsw = New-Object system.io.filesystemwatcher
$q = New-Object system.collections.queue
$path = "c:\DropBox"
$fsw.path = $path
$MaxTries = 50 # 50 tries * 0.2s sleep = 10s timeout
[string]$tempPath = $null
Register-ObjectEvent -InputObject $fsw -EventName created -Action {
$q.enqueue( $event.sourceeventargs.fullpath )
}
while($true) {
if($q.Count -gt 0) {
#Get next file in queue
$tempPath = $q.dequeue()
#Read file
$text = $null
$i = 0
while($text -eq $null) {
#If locked, wait and try again
try {
$text = Get-Content -Path $tempPath -ErrorAction Stop
} catch {
$i++
if($i -eq $MaxTries) {
#Max attempts reached. Stops script
Write-Error -Message "Script is stuck on locked file '$tempPath'" -ErrorAction Stop
} else {
#Wait
Start-Sleep -Milliseconds 200
}
}
}
#Print file
$text | Out-Printer db1
$text | Out-Printer db2
echo " $text "
#Remove temp-file
Remove-Item $tempPath
}
#Relax..
Start-Sleep -Milliseconds 500
}
I'm working on a build environment for our JavaScript project. We use require.js (r.js) to combine different JS modules into one output JS file. We use TeamCity, and I wanted to configure a PowerShell build step that would call r.js, read its standard output and exit code, and pass that exit code back to TeamCity (by exiting from PowerShell with this exit code), so that if the task fails (r.js comes back with exit code 1) it won't proceed with the other build steps. I also wanted the standard output of r.js to be saved in the TeamCity log to allow developers to quickly see the error causing r.js to stop.
This is how I can start the r.js process with its arguments, read its exit code, and use it to exit from PowerShell:
$process = start-process r.js.cmd -ArgumentList "-o build-dev.js" -PassThru -Wait
exit $process.ExitCode
If I try to read standard output in this way before exiting:
Write-Host $process.StandardOutput.ReadToEnd();
I get this error, which probably suggests that I can't read StandardOutput in this way as it is a stream:
You cannot call a method on a null-valued expression.
At line:1 char:45
+ Write-Host $process.StandardOutput.ReadToEnd <<<< ();
+ CategoryInfo : InvalidOperation: (ReadToEnd:String) [], Runtime
Exception
+ FullyQualifiedErrorId : InvokeMethodOnNull
Process exited with code 1
Now I found a way of running the process so that I can read the standard output but I'm not able to read the exit code:
$psi = New-object System.Diagnostics.ProcessStartInfo
$psi.UseShellExecute = $false
$psi.RedirectStandardOutput = $true
$psi.RedirectStandardError = $true
#$psi.FileName = "r.js.cmd"
$psi.FileName = "C:\Users\Administrator\AppData\Roaming\npm\r.js.cmd"
$psi.Arguments = #("-o build-dev.js")
#$psi.WorkingDirectory = (Get-Location).Path;
$process = New-Object System.Diagnostics.Process
$process.StartInfo = $psi
$process.Start() | Out-Host
$process.WaitForExit()
$output = $process.StandardOutput.ReadToEnd()
$stderr = $process.StandardError.ReadToEnd()
sleep(10)
$exit_code = $process.ExitCode
write-host "========== OUTPUT =========="
write-host $output
write-host "========== ERROR =========="
write-host $stderr
write-host "========== EXIT CODE =========="
write-host $exit_code
write-host "========== $LastExitCode =========="
#write-host $LastExitCode
#Exit $exit_code
But again, this returns the console output while the exit code is always 0, even when r.js returns 1 because I have an error in my JS scripts.
Could anyone advise how I can read standard output with Start-Process, or how I can read the exit code with New-Object System.Diagnostics.ProcessStartInfo?
I can attach screenshots with more details of my TC build step configuration and output saved to the build log if that would help answering the question.
I usually tackle this situation by using a variation of the start-process command mentioned in the question.
$outputLog = "outputFile.log"
$errLog = "errorFile.log"
$process = Start-Process r.js.cmd -ArgumentList "-o build-dev.js" -PassThru -RedirectStandardOutput $outputLog -RedirectStandardError $errLog -Wait
$exitCode = $process.ExitCode
Now the log files $outputLog and $errLog will have the standard output and error contents.
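To surface that in the TeamCity build log and fail the step on error, you can then follow up with something like:
# Echo the captured output into the build log and propagate the exit code
Get-Content $outputLog, $errLog | Write-Host
exit $exitCode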
This is how you can do it. Note that .cmd files are not applications; they are associated with cmd, so you might have to target cmd.exe and pass the path to the .cmd file as a parameter. Also, UseShellExecute on the process start info stops output and errors from being recorded, so don't enable it if you need output. UseShellExecute is as if you typed $Path into the Win+R dialog: if it's a PNG, for example, Windows will find the app that is supposed to open it and open the file with it.
$Path = "C:\whatever.exe"
$WorkingDirectory = "C:\"
$CreateNoWindow = $true #$true to not create another window for console application
$Parameters = "-paremeters for -the app"
$WindowStyle = [Diagnostics.ProcessWindowStyle]::Normal
# prepare process start info
$processStartInfo = New-Object -TypeName 'System.Diagnostics.ProcessStartInfo' -ErrorAction 'Stop'
$processStartInfo.FileName = $Path
$processStartInfo.WorkingDirectory = $WorkingDirectory
$processStartInfo.ErrorDialog = $false
$processStartInfo.RedirectStandardOutput = $true
$processStartInfo.RedirectStandardError = $true
$processStartInfo.CreateNoWindow = $CreateNoWindow
If ($Parameters) { $processStartInfo.Arguments = $Parameters }
$processStartInfo.WindowStyle = $WindowStyle
#create a process and assign the process start info object to it
$process = New-Object -TypeName 'System.Diagnostics.Process' -ErrorAction 'Stop'
$process.StartInfo = $processStartInfo
#Add event handler to capture process's standard output redirection
[scriptblock]$processEventHandler = { If (-not [string]::IsNullOrEmpty($EventArgs.Data)) { $Event.MessageData.AppendLine($EventArgs.Data) } }
#create string builders to store the output and errors in
$stdOutBuilder = New-Object -TypeName 'System.Text.StringBuilder' -ArgumentList ''
$stdOutEvent = Register-ObjectEvent -InputObject $process -Action $processEventHandler -EventName 'OutputDataReceived' -MessageData $stdOutBuilder -ErrorAction 'Stop'
$stdErrBuilder = New-Object -TypeName 'System.Text.StringBuilder' -ArgumentList ''
$stdErrEvent = Register-ObjectEvent -InputObject $process -Action $processEventHandler -EventName 'ErrorDataReceived' -MessageData $stdErrBuilder -ErrorAction 'Stop'
#start the process
$null = $process.Start()
#begin reading the output and errors
$process.BeginOutputReadLine()
$process.BeginErrorReadLine()
#Instructs the Process component to wait indefinitely for the associated process to exit.
$process.WaitForExit()
#HasExited indicates that the associated process has terminated, either normally or abnormally. Wait until HasExited returns $true.
While (-not ($process.HasExited)) { $process.Refresh(); Start-Sleep -Seconds 1 }
## Get the exit code for the process
Try {
[int32]$returnCode = $process.ExitCode
}
Catch [System.Management.Automation.PSInvalidCastException] {
# Catch exit codes that are out of int32 range
[int32]$returnCode = 1
}
#unregister the events
If ($stdOutEvent) { Unregister-Event -SourceIdentifier $stdOutEvent.Name -ErrorAction 'Stop'; $stdOutEvent = $null }
If ($stdErrEvent) { Unregister-Event -SourceIdentifier $stdErrEvent.Name -ErrorAction 'Stop'; $stdErrEvent = $null }
$stdOut = $stdOutBuilder.ToString() -replace $null,''
$stdErr = $stdErrBuilder.ToString() -replace $null,''
## Free resources associated with the process, this does not cause process to exit
If ($process) { $process.Dispose() }
# check if we have output and return it
If ($stdOut) {
Write-Output "Process output:$stdOut"
}
If ($stdErr) {
Write-Output "Process errors:$stdErr"
}
# return exit code
Write-Output "Exit code:$returnCode"
Start-Sleep -Seconds 5
Exit $returnCode
I'm writing a script to download several repositories from GitHub. Here is the command to download a repository:
git clone "$RepositoryUrl" "$localRepoDirectory"
When I run this command it displays some nice progress information in the console window that I want displayed.
The problem is that I also want to be able to detect if any errors have occurred while downloading. I found this post that talks about redirecting the various streams, so I tried:
(git clone "$RepositoryUrl" "$localRepoDirectory") 2> $errorLogFilePath
This pipes any errors from stderr to my file, but no longer displays the nice progress information in the console.
I can use the Tee-Object like so:
(git clone "$RepositoryUrl" "$localRepoDirectory") | Tee-Object -FilePath $errorLogFilePath
and I still get the nice progress output, but this pipes stdout to the file, not stderr; I'm only concerned with detecting errors.
Is there a way that I can store any errors that occur in a file or (preferably) a variable, while also still having the progress information piped to the console window? I have a feeling that the answer might lie in redirecting various streams into other streams, as discussed in this post, but I'm not really sure.
======== Update =======
I'm not sure if the git.exe is different than your typical executable, but I've done some more testing and here is what I've found:
$output = (git clone "$RepositoryUrl" "$localRepoDirectory")
$output always contains the text "Cloning into '[localRepoDirectory]'...", whether the command completed successfully or produced an error. Also, the progress information is still written to the console when doing this. This leads me to think that the progress information is not written via stdout, but by some other stream?
If an error occurs the error is written to the console, but in the usual white foreground color, not the typical red for errors and yellow for warnings. When this is called from within a cmdlet function and the command fails with an error, the error is NOT returned via the function's -ErrorVariable (or -WarningVariable) parameter (however if I do my own Write-Error that does get returned via -ErrorVariable). This leads me to think that git.exe doesn't write to stderr, but when we do:
(git clone "$RepositoryUrl" "$localRepoDirectory") 2> $errorLogFilePath
The error message is written to the file, so that makes me think that it does write to stderr. So now I'm confused...
======== Update 2 =======
So with Byron's help I've tried a couple of more solutions using a new process, but still can't get what I want. When using a new process I never get the nice progress written to the console.
The three new methods that I've tried all use this bit of code in common:
$process = New-Object System.Diagnostics.Process
$process.StartInfo.Arguments = "clone ""$RepositoryUrl"" ""$localRepoDirectory"""
$process.StartInfo.UseShellExecute = $false
$process.StartInfo.RedirectStandardOutput = $true
$process.StartInfo.RedirectStandardError = $true
$process.StartInfo.CreateNoWindow = $true
$process.StartInfo.WorkingDirectory = $WORKING_DIRECTORY
$process.StartInfo.FileName = "git"
Method 1 - Run in new process and read output afterwards:
$process.Start()
$process.WaitForExit()
Write-Host Output - $process.StandardOutput.ReadToEnd()
Write-Host Errors - $process.StandardError.ReadToEnd()
Method 2 - Get output synchronously:
$process.Start()
while (!$process.HasExited)
{
Write-Host Output - $process.StandardOutput.ReadToEnd()
Write-Host Error Output - $process.StandardError.ReadToEnd()
Start-Sleep -Seconds 1
}
Even though this looks like it would write the output while the process is running, it doesn't write anything until after the process exits.
Method 3 - Get output asynchronously:
Register-ObjectEvent -InputObject $process -EventName "OutputDataReceived" -Action {Write-Host Output Data - $args[1].Data }
Register-ObjectEvent -InputObject $process -EventName "ErrorDataReceived" -Action { Write-Host Error Data - $args[1].Data }
$process.Start()
$process.BeginOutputReadLine()
$process.BeginErrorReadLine()
while (!$process.HasExited)
{
Start-Sleep -Seconds 1
}
This does output data while the process is working which is good, but it still doesn't display the nice progress information :(
I think I have your answer. I've been working with PowerShell for a while and have created several build systems. Sorry if the script is a bit long, but it works.
$dir = <your dir>
$global:log = <your log file which must be in the global scope> # Not global = won't work
function Create-Process {
$process = New-Object -TypeName System.Diagnostics.Process
$process.StartInfo.CreateNoWindow = $false
$process.StartInfo.RedirectStandardError = $true
$process.StartInfo.UseShellExecute = $false
return $process
}
function Terminate-Process {
param([System.Diagnostics.Process]$process)
$code = $process.ExitCode
$process.Close()
$process.Dispose()
Remove-Variable process
return $code
}
function Launch-Process {
param([System.Diagnostics.Process]$process, [string]$log, [int]$timeout = 0)
$errorjob = Register-ObjectEvent -InputObject $process -EventName ErrorDataReceived -SourceIdentifier Common.LaunchProcess.Error -action {
if(-not [string]::IsNullOrEmpty($EventArgs.data)) {
"ERROR - $($EventArgs.data)" | Out-File $log -Encoding ASCII -Append
Write-Host "ERROR - $($EventArgs.data)"
}
}
$outputjob = Register-ObjectEvent -InputObject $process -EventName OutputDataReceived -SourceIdentifier Common.LaunchProcess.Output -action {
if(-not [string]::IsNullOrEmpty($EventArgs.data)) {
"Out - $($EventArgs.data)" | Out-File $log -Encoding ASCII -Append
Write-Host "Out - $($EventArgs.data)"
}
}
if($errorjob -eq $null) {
"ERROR - The error job is null" | Out-File $log -Encoding ASCII -Append
Write-Host "ERROR - The error job is null"
}
if($outputjob -eq $null) {
"ERROR - The output job is null" | Out-File $log -Encoding ASCII -Append
Write-Host "ERROR - The output job is null"
}
$process.Start()
$process.BeginErrorReadLine()
if($process.StartInfo.RedirectStandardOutput) {
$process.BeginOutputReadLine()
}
$ret = $null
if($timeout -eq 0)
{
$process.WaitForExit()
$ret = $true
}
else
{
if(-not($process.WaitForExit($timeout)))
{
Write-Host "ERROR - The process is not completed, after the specified timeout: $($timeout)"
$ret = $false
}
else
{
$ret = $true
}
}
# Cancel the event registrations
Remove-Event * -ErrorAction SilentlyContinue
Unregister-Event -SourceIdentifier Common.LaunchProcess.Error
Unregister-Event -SourceIdentifier Common.LaunchProcess.Output
Stop-Job $errorjob.Id
Remove-Job $errorjob.Id
Stop-Job $outputjob.Id
Remove-Job $outputjob.Id
$ret
}
$repo = <your repo>
$process = Create-Process
$process.StartInfo.RedirectStandardOutput = $true
$process.StartInfo.FileName = "git.exe"
$process.StartInfo.Arguments = "clone $($repo)"
$process.StartInfo.WorkingDirectory = $dir
Launch-Process $process $global:log
Terminate-Process $process
The log file must be in the global scope because the routine which runs the event processing is not in the script scope.
Sample of my log file:
Out - Cloning into ''...
ERROR - Checking out files: 22% (666/2971)
ERROR - Checking out files: 23% (684/2971)
ERROR - Checking out files: 24% (714/2971)
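Note how the 'Checking out files' progress lines show up on the error stream: git writes its progress reporting to stderr rather than stdout, which also explains why the question's $output capture only ever contained the 'Cloning into...' line while 2> sent the interesting messages to the file.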
You can do this by putting the git clone command inside an advanced function e.g.:
function Clone-Git {
[CmdletBinding()]
param($repoUrl, $localRepoDir)
git clone $repoUrl $localRepoDir
}
Clone-Git $RepositoryUrl $localRepoDirectory -ev cloneErrors
$cloneErrors
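As far as I can tell, this works because stderr lines from a native command run inside an advanced function surface on PowerShell's error stream as ErrorRecords, so -ErrorVariable (-ev) can collect them.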
If you use System.Diagnostics.Process to start Git, you can redirect all the error and output.
I just had to solve this problem for Inkscape:
$si = New-Object System.Diagnostics.ProcessStartInfo
$si.Arguments = YOUR PROCESS ARGS
$si.UseShellExecute = $false
$si.RedirectStandardOutput = $true
$si.RedirectStandardError = $true
$si.WorkingDirectory = $workingDir
$si.FileName = EXECUTABLE LOCATION
$process = [Diagnostics.Process]::Start($si)
while (!($process.HasExited))
{
    # Do what you want with stderr and stdout here
    Start-Sleep -s 1 # Sleep for 1 second
}
You can, of course, wrap this in a function with proper arguments...
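For instance, a hedged wrapper along those lines might look like this (the function name and parameters are my own; note that the sequential ReadToEnd calls can deadlock on large, interleaved output, as discussed earlier in this document):
function Invoke-CapturedProcess {
    param([string]$FilePath, [string]$Arguments, [string]$WorkingDir = (Get-Location))
    $si = New-Object System.Diagnostics.ProcessStartInfo
    $si.FileName = $FilePath
    $si.Arguments = $Arguments
    $si.WorkingDirectory = $WorkingDir
    $si.UseShellExecute = $false
    $si.RedirectStandardOutput = $true
    $si.RedirectStandardError = $true
    $process = [Diagnostics.Process]::Start($si)
    $stdout = $process.StandardOutput.ReadToEnd()   # fine for small outputs only
    $stderr = $process.StandardError.ReadToEnd()
    $process.WaitForExit()
    # Return everything the caller might want in one object
    New-Object PSObject -Property @{ StdOut = $stdout; StdErr = $stderr; ExitCode = $process.ExitCode }
}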