Check progress of silent Terminal command writing a file? - perl

Not really sure if this is possible, but I am running this on Terminal:
script -q /my/directory//$outfile ./lexparser.csh $file
Explanation
I run this through a Perl script. The first part, /my/directory//$outfile, is where I am saving the output of the command; ./lexparser.csh $file just calls that script on the input file, $file.
Problem
However, I passed -q because I didn't want to save the unnecessary terminal chatter to the file. The file is big, roughly 30 thousand lines of text, so the command has been running for some time now, which was expected.
Question
I would like to check and make sure everything is going smoothly. The output file shows up in Finder, but I'm afraid that if I click on it, I will ruin the output. How can I check the progress (ideally the current contents of the text file) without disrupting the process?
Thanks for your time, let me know if the question is unclear.

Open a new Terminal, navigate to the output directory, and:
tail -f <output_file>
You will continue to see new data appended to the file without interrupting the writing process. Just leave the Terminal open with the tail running, and you can watch it all day long. Grab some popcorn.
In addition to tail, you can also learn about tee. The point of tee is to write output to a file while also writing it to STDOUT in your terminal. Best of both worlds! Well, some good aspects of two possible worlds.
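For example, a future run could pipe through tee instead of script (a sketch using the paths from the question; adjust to your setup):
./lexparser.csh $file | tee /my/directory/$outfile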

You could tail the file via the command line, which shouldn't cause problems.
Additionally, you could have the program print to stderr as well as stdout, redirect stdout to the file, and let stderr through so it can report its progress. Though that is more of a 20/20 hindsight solution.
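For example (assuming lexparser.csh writes its progress messages to stderr, which the question doesn't confirm), redirecting only stdout keeps the file clean while progress still reaches the screen:
./lexparser.csh $file > /my/directory/$outfile   # stderr still prints to the terminal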

Related

Powershell: Detecting that a specifically opened program is running (and closing it)

I'm trying to automate a workflow. The automation script is mainly written in PowerShell. It consists of these steps: 1) opening a program, 2) communicating with the API, reading values, etc., and 3) closing the program. This script will be run many times a day, so it would suffice not to close the program every time the script finishes, but rather to check at the beginning of the script whether the program is already open, and if not, open it. I'd like to implement both, then decide which solution to use later on.
The code for opening the program is complete, but it's not enough to just run an .exe file to open the program, as I have to load the correct settings and GUI; when opening the .exe from the command line I additionally have to pass -s and -c. I wrapped all of this in runProgram.cmd, so in the PowerShell script I only run that file to open the program. However, I am unsure how to detect that the program is already open, and how I can close it. I believe a solution might use processes, with the help of Get-Process, but I'm unsure of its capabilities and limitations (how do I check whether my program's process is among the list of running processes?), and whether there is a better way of dealing with this problem.
I have found the solution:
Open the program, open PowerShell, and type Get-Process (this will list all the currently running processes).
Search for yours (by name). If you don't know which process is the one you're looking for, you can close your program, type Get-Process again, and look for the process that disappeared from the list, since you closed it. Let's assume its name is "yourprocess".
In the code, use $val = Get-Process -Name yourprocess -ErrorAction SilentlyContinue. If the process is running, $val holds data about it; if it is not running, Get-Process returns nothing and $val stays $null (without -ErrorAction SilentlyContinue it also prints an error). Therefore, if you want to check whether it's open, you should use:
if($null -ne $val){...}
Finally, stopping the process: Stop-Process -Name yourprocess.
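Putting the steps together, a minimal sketch (yourprocess and runProgram.cmd are the placeholder names used above):
$val = Get-Process -Name yourprocess -ErrorAction SilentlyContinue
if ($null -eq $val) {
    # not running yet: start it via the launcher that loads the right settings
    & .\runProgram.cmd
}
# ... communicate with the API, read values, etc. ...
# only if you decide to close the program on every run:
Stop-Process -Name yourprocess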

mIRC Read command not performing

I am writing an mIRC script for a bot account to read a random line of text from a text file when a user keys in !read. As of now, when any user types !read, absolutely nothing happens. I have other on TEXT commands that work fine, but this one seems to be the most puzzling, as I'm referencing a document rather than putting everything into the script itself.
on *:TEXT:!text:#: {
$read(C:\Program Files (x86)\mIRC\8Ball.txt,n)
}
My file is titled 8Ball.txt. What could be going wrong here?
Got it.
echo -a $read(C:\Users\Christopher\Desktop\8Ball.txt,n)
Changing the directory ended up doing it... it wasn't liking the location for some reason. I either blame me putting a / in front of echo, or I blame the space in Program Files (x86).
Your best move is to use the relative mIRC directory identifier $mircdir, combining it with $qt, which adds enclosing quotes.
$qt($+($mircdir,8Ball.txt))
Output:
"C:\Program Files (x86)\mIRC\8Ball.txt"
This way, you won't need to wonder why the script breaks when you change the mIRC directory a year later.
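Putting it together, a sketch of the whole handler (note the question's code listens for !text while users type !read, so this uses the !read trigger, and msg so the bot replies in the channel instead of echoing locally):
on *:TEXT:!read:#: {
  ; n = treat the line as plain text, never evaluate it as code
  msg # $read($qt($+($mircdir,8Ball.txt)),n)
}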

Line-by-line file-io not working as expected in Windows

I'm using Perl 5.16.1 from Strawberry in a Windows environment. I have a Perl script reading very large text files; the smallest is 30M. When reading files that do not have a line feed at the end of the very last line, I get very peculiar results. It may not happen every time, but when it does, it's as though it is reading cached data from the I/O system for another file that I previously opened with the Perl script. If I manually edit the file and add a line feed, it's fine. I added a line counter and some inline code to display what happens near the end of the file, to make sure I wasn't going nuts. To try to fix it, I added this to my script:
open (SS_LOG, ">>", $SSFile) or die "Can't open $SSFile\r\n $!\r\n";
print SS_LOG "\r\n";
close SS_LOG;
but it does nothing. The file stays the same size. I'm also storing data in large arrays.
Has anyone else seen anything like this?
Try unbuffering your output:
SS_LOG->autoflush(1);
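In context, that looks like this (use IO::Handle; is required for method calls on bareword filehandles on Perls before 5.14; the asker's 5.16 loads it on demand, but declaring it is harmless):
use IO::Handle;
open (SS_LOG, ">>", $SSFile) or die "Can't open $SSFile\r\n $!\r\n";
SS_LOG->autoflush(1);   # flush each print immediately instead of buffering
print SS_LOG "\r\n";
close SS_LOG;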

How to save command line log to text file?

I have several .cmd files that I run one after the other. These can be several thousand lines of text. I would like to save the output to a text file, but I can't figure out how. I've tried the > and >> operators, but nothing is generated in the text files.
Anyone know what I'm doing wrong?
If the programs are sending their output to stderr, you would need to redirect stderr to stdout in order to capture it in your log file, i.e.:
cmd >> cmd.log 2>&1
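So, assuming your batch files are named first.cmd and second.cmd (hypothetical names), you could log each one in turn with:
call first.cmd >> run.log 2>&1
call second.cmd >> run.log 2>&1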

Showing the input when using redirection operators in command prompt

Hopefully this is an easy/dumb question.
I am redirecting program input and output to text files in Windows.
Example:
program.exe < in.txt > out.txt
But the text that is read from the input file isn't shown in the output file (or on screen when not redirecting the output). Is there any way to show it easily? I've tried Google but I can't find anything.
Depends what program.exe is doing with the input. You will only see the input in the file (or on the console if you don't redirect) if program.exe actually echoes what it reads back to standard out. If you need to send the input multiple ways (such as to the program and also to the screen at the same time) and the program itself doesn't make provision for this, you need something like tee, or a real shell like bash.
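On Windows without a tee port, PowerShell can approximate the splitting. A sketch using the names from the question (PowerShell has no < operator, so the input is piped in); Write-Host shows each input line on the console while the line is still fed to the program:
Get-Content in.txt | ForEach-Object { Write-Host $_; $_ } | .\program.exe > out.txt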