How to collect the output of the man command in Tcl?

Can somebody suggest how to collect the output of the man command in Tcl?
I am writing:
set hello [ man {command-name}]
and when I execute the script, the program halts and the man command starts running in the foreground, prompting the user to "press RETURN" again and again until it completes.

You're just missing the exec command:
set output [exec man cmd-name]
When you do set out [man cmd-name] in an interactive tcl session, the unknown command handler intercepts the 'man' command and implicitly performs an exec on it. In that scenario, 'man' somehow knows you're interactive and pipes the manpage through your $PAGER.

Related

Unable to take user input in Perl

I am having a strange issue. I have written a script which basically runs a Perl script on a remote server using ssh.
That part works fine, but after the above operation completes the script asks the user to choose the next operation.
It shows the options at the command prompt, but when I give any input it is not shown on the screen, and even after hitting Enter nothing changes.
I am not sure what the exact issue is, but it seems to be something with the ssh command, because if I comment out the ssh command everything works fine.
OPERATION:
print "1: run the script in remote server \n2: Exit\n\nEnter your choice:";
my $input=<STDIN>;
chomp($input);
..........
sub run_script()
{
    my $com="sshg3.exe server -q --user=user --password=pass -exec script >/dev/null";
    system("$com");
    goto OPERATION;
}
After the ssh script completes, it shows this on the screen:
1: run remote script
2: exit
Enter your choice:
but when I give any input it is not displayed on the screen until I exit using Ctrl+C.
Can anyone please help with what the issue might be here?
One of the classic gotchas with ssh is this: it normally runs interactively, and as such will attach STDIN by default.
This can result in STDIN being consumed by ssh rather than by your script.
Try it with ssh -n instead.
If the -n option is not available to you, you can redirect the command's standard input from the null device instead.
Try this; it might work for you:
system("$com < NUL");
As per https://support.ssh.com/manuals/client-user/62/sshg3.html there is an option for redirecting input: use --dev-null (*nix) or --null (Windows).
-n, --dev-null (Unix), -n, --null (Windows)
Redirects input from /dev/null (Unix) and from NUL (Windows).
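To make that concrete, here is a minimal sketch of the asker's run_script sub with the STDIN fix applied. The host name, credentials and script name are just the placeholders from the question; the sketch assumes this sshg3 build accepts -n (substitute --dev-null or --null as per the documentation above otherwise):
sub run_script {
    # -n keeps sshg3 from reading our STDIN, so the <STDIN> prompt
    # in the main menu still receives the user's keystrokes.
    # (the goto back to the menu from the question is omitted in this sketch)
    my $com = "sshg3.exe -n server -q --user=user --password=pass -exec script >/dev/null";
    system($com);
}
With STDIN no longer swallowed by ssh, the "Enter your choice:" prompt behaves normally again after the remote script finishes.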

Can I automate Windbg to attach to a process and set breakpoints?

I am debugging a Windows process that crashes if execution stops for even a few milliseconds. (I don't know exactly how many, but it is definitely less than the time my reflexes take to resume the process.)
I know I can launch a WinDbg session via the command prompt by typing windbg -p PID, which brings up the GUI. But can I also pass it WinDbg commands via the command prompt, such as bm *foo!bar* ".frame;gc";g?
Because if I can pass these commands, I can write them down in a .bat file and just run it. There would then at least be no delay from entering (or even copy-pasting) the commands.
Use the -c parameter to pass them:
windbg -p PID -c "bm *foo!bar* .frame;gc;g"
According to the help (found by running windbg /?):
-c "command"
Specifies the initial debugger command to run at start-up. This command must be enclosed in quotation marks. Multiple commands can be separated with semicolons. (If you have a long command list, it may be easier to put them in a script and then use the -c option with the $<, $><, $$<, $$>< (Run Script File) commands.)
If you are starting a debugging client, this command must be intended for the debugging server. Client-specific commands, such as .lsrcpath, are not allowed.
You may need to play around with the quotes...
Edit: Based on this answer, you might be better off putting the commands into a script file to deal with the quotes:
script.txt (I think this is what you want):
bm *foo!bar* ".frame;gc"
g
Then in your batch file:
windbg -p PID -c "$<full_path_to_script_txt"

Deal with executing a Unix command which produces endless output

Some Unix commands, such as tail -f or starting a Python web server (e.g. CherryPy), produce endless output; the only way to stop them is Ctrl-C. I'm working on a Scala application which executes commands like that; my implementation is:
import scala.sys.process._
def exe(command: String): Unit = {
  command !
}
However, as the command produces an endless output stream, the application hangs until I either terminate it or kill the process started by the command. I also tried adding & at the end of the command in order to run it in the background, but my application still hangs.
Hence, I'm looking for another way to execute a command without hanging my application.
You can use a custom ProcessLogger to deal with output however you wish as soon as it is available.
val proc =
Process(command).run(ProcessLogger(line => (), err => println("Uh-oh: "+err)))
You may kill a process with the destroy method.
proc.destroy
If you are waiting to get a certain output before killing it, you can create a custom ProcessLogger that can call destroy on its own process once it has what it needs.
You may prefer to use lines (in 2.10; the name is changing to lineStream in 2.11) instead of run to gather standard output, since that will give you a stream that blocks when no new output is available. Then you wrap the whole thing in a Future, read lines from the stream until you have what you need, and then kill the process; this simplifies blocking and waiting.
Seq("sh", "-c", "tail -f /var/log/syslog > /dev/null &") !
works for me. I think Randall's answer fails because Scala is just executing the commands and can't interpret shell operators like "&". If the command passed to Scala is "sh" and the arguments form a complete shell command, we work around this issue. There also seems to be an issue with how Scala parses/separates individual arguments, and using a Seq instead of a single String works better for that.
The above is equivalent to the unix command:
sh -c 'tail -f /var/log/syslog > /dev/null &'
If you close the descriptor(s) from which you're reading the process' output, it will get a SIGPIPE and (usually) terminate.
If you just don't want the output, redirect to /dev/null:
command arg arg arg >/dev/null 2>&1
Addendum: This pertains only to Unix-alike systems, not Windows.

Can I execute a multiline command in Perl's backticks?

In Unix, I have a process that I want to run using nohup. However, this process will at some point wait at a prompt where I have to enter yes or no for it to continue. So far, in Unix, I have been doing the following:
nohup myprocess <<EOF
y
EOF
So I start the process 'myprocess' using nohup and pipe in a here-document containing 'y', then close it. The lines above are effectively three separate commands, i.e. I hit Enter on the first line in Unix, then I get a prompt where I enter 'y' and press Enter, and then finally I type 'EOF' and hit Return again.
I want to know how to execute this in Perl, but I am not sure how I can execute this command as it spans three lines. I don't know if the following will work:
my $startprocess = `nohup myprocess <<EOF &
y
EOF
`
Please help - thank you!
I think your proposal will work as is. If not, try replacing the redirect with a pipe:
my $startprocess = `(echo "y" | nohup myprocess) &`;
Also, depending on WHY you are doing a nohup, please look at the following pure-Perl daemonizing approach using Proc::Daemon: How can I run a Perl script as a system daemon in linux?
Expect can be used for interactive programs as well.
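If the prompt does not always appear, or appears more than once, a hedged alternative is the CPAN Expect module. The sketch below is illustrative only: the program name myprocess and the prompt pattern are assumptions, not taken from the question.
use strict;
use warnings;
use Expect;

# Spawn the interactive program on a pseudo-terminal.
my $exp = Expect->spawn("myprocess")
    or die "Cannot spawn myprocess: $!\n";

$exp->expect(
    60,                                  # give up after 60 seconds
    [ qr/yes or no|\(y\/n\)/i => sub {   # hypothetical prompt text
          my $self = shift;
          $self->send("y\n");
          exp_continue;                  # keep reading until EOF or timeout
      } ],
);
$exp->soft_close();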

How to Ctrl-Z in Perl script

I am writing a Perl script and I need to perform the equivalent of Unix Ctrl+Z on the script.
How can I do it in Perl?
Thanks.
From Perl you can send signals to processes with the kill function, which has the same name as the Unix command-line tool that does the same thing. The equivalent of pressing Ctrl+Z is running
kill -SIGTSTP pid
You need to find out what numeric value the TSTP signal has on your system. You can do this by running
kill -l TSTP
on the command line. Let's say this returns 20.
Then in your Perl script you would add
kill 20 => $$;
which will send the TSTP signal to the currently running process ID ($$).
Update:
As described by daxim, you can skip the kill -l step and provide the name of the signal directly:
kill 'TSTP' => $$;
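Putting it together, a minimal self-contained sketch looks like this; the script suspends itself exactly as Ctrl+Z would and carries on once something else sends it SIGCONT (for example kill -CONT <pid> from another shell, or fg if it was started from an interactive terminal):
#!/usr/bin/perl
use strict;
use warnings;

print "Suspending myself (PID $$); resume me with: kill -CONT $$\n";

# Programmatic equivalent of pressing Ctrl+Z in the terminal.
kill 'TSTP' => $$;

# Execution resumes here only after SIGCONT arrives.
print "Resumed, carrying on.\n";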
In bash, Ctrl+Z stops the current job and puts it in the background; with %JobId you can return to this job. I'm not sure what you mean, since I thought Ctrl+Z is caught by bash.