IPython: have all output to stdout included in %history

Is there a way to have print statements be considered valid output in IPython so that they appear in %history -o?
The need is to create doctests from a session; however, only return values are added to the history. Writes to stdout and stderr are not. In a doctest, though, the writes to stdout are used to match the test case.
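To make the doctest side of this concrete (a minimal sketch; the function name greet and the session text are made up for illustration): doctest matches both what a statement prints to stdout and the repr of what it returns, which is why a history that drops print output cannot be turned into doctests directly.

```python
import doctest

def greet():
    print("hello")

# A doctest session: "hello" is matched against stdout, "2" against the
# repr of the expression's return value.
session = """
>>> greet()
hello
>>> 1 + 1
2
"""

test = doctest.DocTestParser().get_doctest(
    session, {"greet": greet}, "session", None, 0)
runner = doctest.DocTestRunner()
results = runner.run(test)
print(results.failed, results.attempted)  # -> 0 2
```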

Related

Output stdin, stdout to file and console using Perl

I am trying a simple questionnaire using Perl. I want to record the responses in a log file as and when a user inputs them. I'm having a problem in redirecting the stdin to the file.
Below is the code I implemented, based on this.
use strict;
use warnings;

# Send our output through tee so it reaches both the console and the log file.
# NB: output to a pipe is block-buffered by default, so these prints may not
# appear until the pipe is flushed or closed.
open my $tee, "|-", "tee some_file.out" or die "Cannot run tee: $!";

print $tee "DO you want to continue?(y/n)\n";
my $var = <STDIN>;
$var =~ s/[\n\r\f\t]//g;
if ($var eq "y") {
    print $tee "Enter\n";
}
close $tee;
The output I'm getting now is that the question is printed only after the user input is provided.
#in console
y
DO you want to continue?(y/n)
Enter
#some_file.out
DO you want to continue?(y/n)
Enter
Below is the expected output:
#in console
DO you want to continue?(y/n)
y
Enter
#some_file.out
DO you want to continue?(y/n)
y
Enter
I also found Duplicate stdin to stdout but couldn't really achieve what I want with it.
Am I missing something?!
Is there any cleaner solution available?
First of all, never use the phrase "redirecting the stdin to..." because stdin is input. It doesn't go to anything. It comes from somewhere.
It seems that what you expected is to have a copy of $var appear in your log file. Since you never printed $var to $tee there's no way that could happen.
So why did you think $var would appear in the log file? From the way you have shown us a copy of the log file next to a copy of what you see on the terminal, I guess that your reasoning went something like this:
1. The tee put all of the output into the log file.
2. The tee also put all of the output on the terminal.
3. My program didn't output anything else besides what went into the tee.
4. The screen contents should match the log file.
But there's a hidden assumption that's required to reach the conclusion:
3a. Nothing else was written to the terminal besides my program's output
And that's the part which is incorrect. When you type y into the terminal while your program is running, the terminal itself echoes what you type. It prints a copy in the terminal window, and also sends the character to your program's stdin. The y that you see on screen is not part of your program's output at all.
Since the echoing is done by the terminal and not by your program, you can't instruct it to also send a copy to your log file. You need to explicitly print it there if you want it to be logged.
You can ask the terminal to stop echoing, and then you take responsibility for printing the characters as they are typed so the user can see what they're typing. If you want to try that, see the Term::ReadKey module.
Or if what you really want is a complete record of everything that appeared on the terminal during the run of your program, maybe you should run it in the standard unix tool script, which is made for exactly that purpose.
(Side note: did you know about the IO::Tee module? You can have teed output without an external process.)
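As an aside (not from the original answer): the same teeing idea can be sketched in Python, with a hypothetical Tee class fanning each write out to several streams, so no external process is needed.

```python
import io
import sys

class Tee:
    """Write-only stream that fans each write out to several streams."""
    def __init__(self, *streams):
        self.streams = streams

    def write(self, data):
        for s in self.streams:
            s.write(data)

    def flush(self):
        for s in self.streams:
            s.flush()

log = io.StringIO()                     # stands in for some_file.out
out = Tee(sys.stdout, log)
print("DO you want to continue?(y/n)", file=out)
# The same line is now both on the console and in 'log'.
```

As in the Perl discussion, anything the user types is echoed by the terminal, not written by the program, so it still has to be logged explicitly.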

Redirect stdout to a file in tcl

I know this has been asked before, but hear me out.
I'm working on a Cisco router (IOS).
I have to write a script that executes some commands and redirects their output to a file. Instead of using tee for every command, I want to open a file, then run the commands (whose output will be redirected to the file), and then close the file.
Even the redirection operator > is not working, and neither is the answer here:
How can I redirect stdout into a file in tcl
The fact that the solutions in that question aren't working for you is informative: the real issue that you are experiencing is that Tcl commands do not normally write to stdout anyway (except for puts, which has that as its main job, and parray, which is a procedure that uses puts internally). The “write to stdout” that you are used to is a feature of the interactive Tcl shell only.
To capture the result of all commands while running a script, you need a wrapper like this:
trace add execution source leavestep log_result
proc log_result {cmd code result op} {
puts stdout $result
}
source theRealScript.tcl
You'll find that it produces a lot of output. Cutting the output down is a rather useful thing, so this reduces it to just the immediately-executed commands (rather than everything they call):
trace add execution source enterstep log_enter
trace add execution source leavestep log_result
proc log_enter {args} {
global log_depth
incr log_depth
}
proc log_result {cmd code result op} {
global log_depth
if {[incr log_depth -1] < 1 && $result ne ""} {
puts stdout $result
}
}
source theRealScript.tcl
You'll probably still get far more output than you want…

Line buffered reading in Perl

I have a perl script, say "process_output.pl" which is used in the following context:
long_running_command | "process_output.pl"
The process_output.pl script needs to behave like the unix "tee" command: it should dump the output of "long_running_command" to the terminal as it is generated, capture that output to a text file, and, when "long_running_command" finishes, fork another process with the text file as its input.
The behavior I am currently seeing is that the output of "long_running_command" gets dumped to the terminal only when the command completes, instead of as it is generated. Do I need to do something special to fix this?
Based on my reading of a few other Stack Exchange posts, I tried the following in "process_output.pl", without much help:
select(STDOUT); $| =1;
select(STDIN); $| =1; # Not sure even if this is needed
use FileHandle; STDOUT->autoflush(1);
stdbuf -oL -eL long_running_command | "process_output.pl"
Any pointers on how to proceed further?
Thanks
AB
This is more likely an issue with the output of the first process being buffered, rather than the input of your script. The easiest solution would be to try using the unbuffer command (I believe it's part of the expect package), something like
unbuffer long_running_command | "process_output.pl"
The unbuffer command will disable the buffering that happens normally when output is directed to a non-interactive place.
This will be due to the output processing of long_running_command. More than likely it is using stdio, which looks at what the output file descriptor is connected to before it does any output. If that is a terminal (tty), it will generally output line by line; but in the case above it will notice it is writing to a pipe and will therefore buffer the output into larger chunks.
You can control the buffering in your own process by using, as you showed
select(STDOUT); $| =1;
This means that the things your process prints to STDOUT are not buffered. It makes no sense to do this for input, since you control how much buffering is done there: if you use sysread() you are reading unbuffered, whereas with a construct like <$fh> perl waits until it has a whole line (it actually reads up to the next input record separator, as defined in the variable $/, which is newline by default) before it returns data to you.
unbuffer can be used to "disable" the output buffering. What it actually does is make the outputting process think that it is talking to a tty (by using a pseudo-tty), so the output process does not buffer.
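The same buffering behaviour is easy to show outside Perl; here is a hedged Python sketch (the inline child programs are made up) of why output written to a pipe can go missing or arrive late unless it is explicitly flushed:

```python
import subprocess
import sys

# stdout is block-buffered when it is a pipe; os._exit() skips the flush,
# so the unflushed child loses its output entirely.
lost = subprocess.run(
    [sys.executable, "-c", "import os; print('hello'); os._exit(0)"],
    capture_output=True, text=True)

# With an explicit flush (the analogue of Perl's $| = 1) the line gets out.
kept = subprocess.run(
    [sys.executable, "-c", "import os; print('hello', flush=True); os._exit(0)"],
    capture_output=True, text=True)

print(repr(lost.stdout), repr(kept.stdout))  # -> '' 'hello\n'
```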

Perl: Testing an input reader?

Is there a way to automatically test using the standard Test etc. modules whether a Perl program is reading input from e.g. STDIN properly? E.g. testing a program that reads two integers from STDIN and prints their sum.
It's not 100% clear what you mean; I'll answer assuming you want to write a test script that tests your main program, which as part of the test needs test input data passed via STDIN.
You can easily do that if your program outputs what it reads. You don't need a special test module. Simply:
1. Call the program you're testing via a system call.
2. Redirect both STDIN and STDOUT of the tested program to your test script, either by using the IPC::Open2 module to open both sides via pipes to filehandles, or by building your command to redirect to/from files and reading/writing those files in the test script.
3. Check the STDOUT from the tested program that you collected in the last step to make sure the correct values are printed.
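For comparison, the same redirect-and-check pattern sketched in Python (the inline child program is a stand-in for the program under test, which here reads two integers and prints their sum):

```python
import subprocess
import sys

# Stand-in for the program under test.
program = "a = int(input()); b = int(input()); print(a + b)"

# Feed test data on STDIN, capture STDOUT, then assert on it.
result = subprocess.run([sys.executable, "-c", program],
                        input="2\n3\n", capture_output=True, text=True)
print(result.stdout.strip())  # -> 5
```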
If you want to test if STDIN is connected to a terminal, use -t, as in:
if ( -t STDIN ) {
    print "Input from a terminal\n";
} else {
    print "Input from a file or pipe\n";
}
See http://perldoc.perl.org/perlfunc.html#Alphabetical-Listing-of-Perl-Functions
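(The equivalent check in Python, for comparison; sys.stdin.isatty() plays the role of Perl's -t.)

```python
import sys

# True when stdin is connected to a terminal, False for a file or pipe.
if sys.stdin.isatty():
    print("Input from a terminal")
else:
    print("Input from a file or pipe")
```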

Where can I find captured stdout for a py.test test that passes?

I am using the py.test reporting hooks (pytest_runtest_makereport() and pytest_report_teststatus()).
When a py.test test fails, I can find the captured stdout data in the report hook (at report.sections[]).
When a py.test test passes, the report.sections[] list is empty.
Where can I find the captured stdout for a test that passes?
Thanks.
Edit: From the source (_pytest/capture.py), it looks like this is available only if the test doesn't pass:
def pytest_runtest_makereport(self, __multicall__, item, call):
    ...
    if not rep.passed:
        addouterr(rep, outerr)
It turns out that the information is available in item.outerr, which is a tuple of two Unicode strings; the first is stdout, and the second is stderr.
Note that py.test sets these strings in the setup, call, and teardown reports, and that some of them can be empty strings. So, in order to save the output without overwriting it with an empty string, the logic needs to be:
stdout, stderr = item.outerr      # (stdout, stderr), both Unicode strings
if stdout:                        # skip empty strings so earlier output isn't clobbered
    whatever.stdout = stdout
if stderr:
    whatever.stderr = stderr