I have a script that runs an external process which creates some output (a file), and I also capture the console output to a file (the log):
try
{
    p4d -r $root -jc | Out-File $output
}
catch
{
    Write-Error "p4d checkpoint failed: $_"
}
I later check the log output, grab some info and the script carries on.
The problem is that the external process could stall (and has done so once), and I need a way to check for that on the fly so I can handle the error.
The best way I can think to do this is to monitor the file that the process creates for increasing size. Obviously this isn't without issue as it could potentially stall at any point and we don't know the resulting file size.
I will likely check the size of the last successful process and use that to set some limits.
My question is: how do I check on a process while it is still running?
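One way to do this (a rough sketch only, not tested against p4d specifically) is to start the process with Start-Process -PassThru so you keep a handle to it, then poll the size of the file it writes while the process is still running. $root and $output are the variables from the snippet above; $journalFile is a hypothetical placeholder for the file p4d creates, and the 30-second interval and single no-growth check are arbitrary values you would tune to your environment:
$proc = Start-Process p4d -ArgumentList "-r $root -jc" -RedirectStandardOutput $output -NoNewWindow -PassThru
$lastSize = 0
while (-not $proc.HasExited) {
    Start-Sleep -Seconds 30
    $size = (Get-Item $journalFile -ErrorAction SilentlyContinue).Length
    if ($null -eq $size) { continue }    # file not created yet - keep waiting
    if ($size -le $lastSize) {
        # No growth since the last check - treat it as a stall
        Write-Warning "Output file is no longer growing; assuming p4d has stalled."
        Stop-Process -Id $proc.Id -Force
        break
    }
    $lastSize = $size
}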
I'm trying to automate a workflow. The automation script is mainly written in PowerShell and consists of these steps: 1) opening a program, 2) communicating with its API, reading values, etc., 3) closing the program. The script will be run many times a day, so it would be enough not to close the program every time the script finishes, but rather to check at the beginning of the script whether the program is already open, and open it only if it isn't. I'd like to implement both, then decide which solution to use later on.
The code for opening the program is done, but it's not enough to just run the .exe, as I have to load the correct settings and GUI; that means starting the .exe from the command line with the additional -s and -c switches. I've wrapped all of this in runProgram.cmd, so the PowerShell script only runs that file to open the program. However, I am unsure how to detect that the program is already open, and how to close it. I believe a solution might use processes, with the help of Get-Process, but I'm unsure of its capabilities and limitations (how do I check whether my program's process is among the running processes?), and whether there is a better way of dealing with this problem.
I have found the solution:
Open the program, then open PowerShell and type Get-Process (this lists all currently running processes).
Find yours (by name). If you don't know which process is the one you're looking for, close your program, type Get-Process again, and look for the process that disappeared from the list. Let's assume its name is "yourprocess".
In the code, use $val = Get-Process -Name yourprocess -ErrorAction SilentlyContinue. If the process is running, $val holds the process object; if it is not, the command returns nothing and $val is $null (the -ErrorAction switch suppresses the error Get-Process would otherwise write). Therefore, to check whether the program is open, use:
if($null -ne $val){...}
Finally, stopping the process: Stop-Process -Name yourprocess.
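Putting those pieces together, a minimal sketch (assuming the process is really named "yourprocess" and that runProgram.cmd is the wrapper described above) might look like this:
# Check whether the program is already running
$val = Get-Process -Name yourprocess -ErrorAction SilentlyContinue
if ($null -eq $val) {
    # Not running yet - start it with the required settings via the wrapper
    & .\runProgram.cmd
}

# ... communicate with the program's API here ...

# Optionally close it again when the script is done
Stop-Process -Name yourprocess -ErrorAction SilentlyContinue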
There is an old command-line tool my company uses to deploy log files to various servers... whoever wrote it made it very, very repetitive.
There is a lot of prompting involved, and I want to automate this process. We have a long-term goal of replacing this .exe down the line, but for now automation works for the short term.
Example
./logdeploy.exe
Enter the destination folder:
I would like the PowerShell script to just enter the folder automatically, since it's literally the same folder every time. The exe is going to ask for it at least 20 times throughout the process, so copy-pasting gets annoying.
Is this even possible to do?
If there really is no way around simulating interactive user input in order to automate your external program, a solution is possible under the following assumption:
Your external program reads interactive responses from stdin (the standard input stream).
While doing so is typical, it's conceivable that a given program deliberately accepts input for security-sensitive prompts (such as passwords) from the terminal only, so as to expressly prevent automated responses.
If the first assumption holds, the specific method that must be used to send the response strings via stdin depends on whether the external program clears the keyboard buffer before each prompt.
(a) If it does not, you can simply send all strings in a single operation.
(b) If it does, you need to insert delays between sending the individual strings, so as to ensure that input is only sent when the external program is actively prompting for input.
This approach is inherently brittle: because you cannot detect when the external program is ready to read a prompt response, you have to guess how much time needs to elapse between sending responses - and that time may vary based on many runtime conditions.
It's best to use longer delays for better reliability, which, however, results in increased runtime overall.
Implementation of (a):
As zett42 and Mathias R. Jessen suggest, use the following to send strings C:\foo and somepass 20 times to your external program's stdin stream:
('C:\foo', 'somepass') * 20 | ./logdeploy.exe
Again, this assumes that ./logdeploy.exe buffers keyboard input it receives before it puts up the next prompt.
Implementation of (b):
Note: The following works in PowerShell (Core) 7+ only, because only there is command output being sent to an external program properly streamed (sent line by line, as it becomes available); unfortunately, Windows PowerShell collects all output first.
# PowerShell 7+ only
# Adjust the Start-Sleep intervals as needed.
1..20 | ForEach-Object {
Start-Sleep 1
'C:\foo'
Start-Sleep 2
'somepass'
} | ./logdeploy.exe
I'm using MATLAB and calling an .exe via the system command.
[status,cmdout] = system(command_s);
where command_s is a command string that is formatted earlier in my script to pass all the desired options to the .exe. The .exe would normally write to a .csv file via the > redirection operator in Windows/DOS. Instead, this output is going to cmdout where I use it later in the MATLAB script. It is working correctly and as expected. I'm doing it this way so that the process just uses memory and does not write a very large file to the disk, which would then have to be read from the disk and then deleted after I'm done with it. In the end, it saves a .mat file that's usually in hundreds of KB instead of 10s/100s of MBs as the .csv file would be (some unneeded data is thrown out in the end).
The issue I'm having is that since I'm dealing with large files, the executable can take a significant amount of time; I typically have to wait about 2 minutes after executing this command. In the meantime, I have no feedback to know it is progressing and that my system hasn't frozen. I know I could add the & symbol to the end of my string, command_s, and run MATLAB code while this runs in the background (or asynchronously, as some would say), but that brings up an external window AND makes cmdout empty - so I cannot use the output - forcing me to sit there for 2 minutes wondering each time it executes.
Is there any way to run in the background AND get the stdout from the command?
Maybe you could try system(command_s,'-echo')?
I have a successful script that takes up some data analysis from local machine and exports a csv file at the end. No issues there.
My problem is that if someone has that file open while the script is writing to it (I use the -Append switch), nothing gets written, so I lose the data for that particular computer.
Any ideas how to force write the file even if it is open or in use? Thank you.
Functional export of existing array:
$NewCSVObject | Export-CSV '<fullpath>\CleanupResults.csv' -noType -Append
Have you tried setting that file to Read-Only? That will prevent anyone else from getting an opportunistic lock on it, and you can still append data to it by using the -Force switch.
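A minimal sketch of that idea, assuming the same file path and array name as in the question (whether the read-only attribute actually stops readers from taking a writable lock depends on how they open the file):
$csv = '<fullpath>\CleanupResults.csv'

# Mark the file read-only so readers are less likely to hold a writable lock on it
Set-ItemProperty -Path $csv -Name IsReadOnly -Value $true

# -Force lets Export-Csv append despite the read-only attribute
$NewCSVObject | Export-Csv -Path $csv -NoTypeInformation -Append -Force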
You really don't want to do that. If another process already has exclusive access to the file, just pick another file to write into.
Trying to shoehorn data into a file someone else is messing around with is asking for trouble. For example, the other process might write its own changes into the file, and your output would then be replaced with whatever data the other process has.
You could possibly do this with the help of PSTools by removing the lock on the file remotely before writing to it.
Example of how to close a file on remote machine using PSFile command:
PSFile.exe \\remotecomputername "C:\File.CSV" -c
You could also test for a file lock before you try to write to it and have logic around that.
This thread has a good example of testing for a lock.
How can I force a Powershell script to wait overwriting a file until another process is finished reading it?
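A common way to do that check (a sketch along the lines of the linked thread, not its exact code, and assuming the CSV already exists) is to try opening the file with an exclusive handle, treat a failure as "locked", and retry for a while before appending:
function Test-FileLocked {
    param([string]$Path)
    try {
        # Ask for an exclusive (no-share) handle; this fails if anyone has the file open
        $stream = [System.IO.File]::Open($Path, 'Open', 'ReadWrite', 'None')
        $stream.Close()
        return $false
    }
    catch {
        return $true
    }
}

$csv = '<fullpath>\CleanupResults.csv'
$tries = 0
while ((Test-FileLocked $csv) -and ($tries++ -lt 30)) {
    Start-Sleep -Seconds 2    # wait up to ~60 seconds for the file to become free
}
$NewCSVObject | Export-Csv -Path $csv -NoTypeInformation -Append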
My scenario: text files will keep arriving in, say, a folder; I need to detect each new text file and read particular information from it (the format being, say, "word : info", or a word with a column of info under it, etc.). And this process needs to keep going indefinitely.
Problem: How should I go about doing this? I guess I should use a Perl script, but where do I go from there? I'm getting ideas, and also help from the internet, but I thought asking here might make my thoughts clearer.
Kindly help, please suggest a path to do this.
Regards,
Chirayu
First thing: you want a daemon process, so you may want to have a look at Proc::Daemon.
Second thing: you need to read and parse your file. Parsing depends on its format, and your question is not really clear about that.
Finally, you may want to consider moving (or renaming) a newly detected file while processing it, and then (possibly) deleting it after it has been processed. This depends on the requirements that you have. Alternatively, you may want to move newly detected files into an archive directory after having processed them.
One approach might be to have a Perl process that regularly (say every 5 seconds, every 5 minutes, or every 5 hours; your call really) scans said directory and, as soon as any new text file appears, spawns a child process to handle it.
The child process might be another Perl script which gets the name of the text file as its argument, reads the file, detects the word you mention, and then extracts the information you are interested in (and then does whatever you consider necessary with that information).
One thing to look out for is what to do with the text files once they are processed. Are they supposed to stay around? Then you need to keep track of which ones you have already processed, so they do not get processed again in case your master process (the one that scans the directory and spawns the Perl children) has to be restarted (due to either a crash or a deliberate restart).
If the text files are supposed to disappear once they are processed, then it could be a good idea to either let the children remove them after completion, or to let the master process remove them, provided the master process always waits for the children to complete before it continues running. The drawback of a master process waiting for children to complete is that the children then cannot run in parallel but have to run in strict sequence (not necessarily a drawback, depending on your situation).
(If you have a master process that always waits for the child process to finish, you can actually skip having child processes altogether and create a subroutine in the master program which reads and processes the text file.)
High level description but hope it helps.
What is the operating system you are using?
On Windows, you can use Win32::ChangeNotify and on Linux, you can use Linux::Inotify2 to be notified of changes to the contents of a directory.
Your script can simply wait to be notified and take action when notified instead of polling the contents of the directory which will either waste resources or potentially miss some changes.
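For comparison, here is the same "get notified instead of polling" idea sketched in PowerShell using .NET's FileSystemWatcher (the modules named above are the Perl equivalents); the path and filter are assumptions:
# Watch a folder for newly created .txt files and react as they arrive
$watcher = New-Object System.IO.FileSystemWatcher 'C:\incoming', '*.txt'
$watcher.EnableRaisingEvents = $true

Register-ObjectEvent -InputObject $watcher -EventName Created -Action {
    $path = $Event.SourceEventArgs.FullPath   # the newly created file
    Write-Host "New file detected: $path"
    # ... read and parse the file here ...
} | Out-Null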