How to prevent output truncation when a WinDbg command produces too many rows? - windbg

If a WinDbg command produces a very large number of output rows, say 100k, WinDbg ends up displaying only a few thousand of them and truncates the rest. My question is how to prevent this truncation, or how to write all of the output rows to a local file so that none of them are lost. The "Write Window Text to File" option doesn't help.

Not sure if it solves the truncation itself, but the .logopen and .logclose commands might help in this case (they respectively open and close a log file that keeps a copy of the events and commands from the Debugger Command window).
See also Keeping a Log File in WinDbg.
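For example, a minimal sketch of that approach; the log path is only an illustration, and the middle line stands in for whatever command produces the huge output:
.logopen c:\temp\windbg-output.log
$$ run the command whose output gets truncated here, e.g. a long !process listing
.logclose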

Sometimes simply piping works, especially when running cdb and quitting after executing just one command:
cdb -c "tc 100;q" calc >> foo.txt
You should have 100 calls; let's check:
grep -c !.*: foo.txt
256
Let's check how many sysenter instructions were executed and what the indexes of the syscalls were:
grep sysenter -B 4 foo.txt | grep eax | awk "{print $1}"
eax=000000ea
eax=0000014d
eax=000000fb
We can use the output this way even when the commands run for an infinite amount of time, without running into file-locking issues, if .logopen / .logclose isn't an option.

Try opening an additional command window with Ctrl+N and executing the command with the long output in it.

How to get vim to list the PIDs of selected files that are presently being edited, avoiding recovery mode, and not list all the other files

The vim manual page contains two similar -r type commands. I'll give more background below; this question is really about how to invoke the first type of -r to list the swap files, while avoiding the second -r that invokes recovery.
-r          List swap files, with information about using them for recovery.
-r {file}   Recovery mode. The swap file is used to recover a crashed editing session. The swap file is a file with the same filename as the text file with ".swp" appended. See ":help recovery".
The -r without a filename (the first -r above) reports on the swap files of other files too, including ones in other directories.
Background:
I'm trying to have vim report the swap files of a specific file (mostly to determine if vim is still editing the file). If the file is being edited (in another window, either on Linux or Cygwin), I can 'raise' that window to the top with "\e[2t\e[1t", as I have successfully been able to do thanks to Bring Window to Front.
Vim has multiple possible swap file names, and multiple directories where it could put the swap file, so I want to ask vim: please tell me the names of the swap files that are currently in use for a given file, and whether there is a current vim process on the file. Unfortunately, sometimes vim will open a file given on the command line in recovery mode in unexpected ways.
I'm invoking vim like this: vim -r -c :q file. Well, actually I'm invoking it from script, since I want vim to see something more like a terminal, and then I look at the output file, so it's more like script -q -c "vim -r -c :q foo" fooscript; then I look in the fooscript file for messages like /Note: process STILL RUNNING: (\d+)/.
It is beginning to look like I need to use vim -r without a file name and parse the output of the -r report, and that there isn't a way to get the report pre-filtered to the single file in question.
After switching my focus to just vim -r, and
- knowing that vim will try to put the swap file into the same directory as the file it's editing (thanks to @romainl for the pointer to :help swap-file),
- observing that vim -r reports on the files in the current directory first,
- observing that the file name associated with the swap file is reported before the process ID of the vim process, and
- observing that vim appends (STILL RUNNING) if it finds the active process,
I changed the current directory appropriately and ran this code after plugging in the name of the file-to-search-for:
perl -lne '
  # stop before the "In directory ..." sections (swap files found in other directories)
  last if /^\s+In directory/;
  # a line starting with a number begins a new swap-file entry, so forget the previous file name
  undef $f if /^\d+/;
  # remember which file this swap file belongs to
  $f = $1 if /^\s+file name:\s+(.*)\s*$/;
  if ( $f =~ m#/file-to-search-for# && /^\s+ process ID:\s(\d+).*?STILL RUNNING/ ) {
    print $1;        # the PID of the vim process still editing the file
    $pid //= $1;
  }
  END { exit !$pid; } '
The PID of the running vim process is printed, and the exit status is zero when the appropriate swap file is found, and non-zero if the file was not being edited.
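For completeness, the one-liner reads the vim -r report on standard input, so the full invocation would look roughly like the sketch below; the directory is hypothetical, and the 2>&1 is there in case vim writes the report to stderr rather than stdout:
cd /path/to/the/files/directory
vim -r 2>&1 | perl -lne ' ...the script above... '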

"log=..." command-line parameter to send script output to STDOUT? [duplicate]

I'm working with a command line utility that requires passing the name of a file to write output to, e.g.
foo -o output.txt
The only thing it writes to stdout is a message that indicates that it ran successfully. I'd like to be able to pipe everything that is written to output.txt to another command line utility. My motivation is that output.txt will end up being a 40 GB file that I don't need to keep, and I'd rather pipe the streams than work on massive files in a stepwise manner.
Is there any way in this scenario to pipe the real output (i.e. output.txt) to another command? Can I somehow magically pass stdout as the file argument?
Solution 1: Using process substitution
The most convenient way of doing this is by using process substitution. In bash the syntax looks as follows:
foo -o >(other_command)
(Note that this is a bashism. There are similar solutions for other shells, but the bottom line is that it's not portable.)
Solution 2: Using named pipes explicitly
You can do the above explicitly / manually as follows:
Create a named pipe using the mkfifo command.
mkfifo my_buf
Launch your other command with that file as input
other_command < my_buf
Execute foo and let it write its output to my_buf
foo -o my_buf
Solution 3: Using /dev/stdout
You can also use the device file /dev/stdout as follows
foo -o /dev/stdout | other_command
Named pipes work fine, but you have a nicer, more direct syntax available via bash process substitution that has the added benefit of not using a permanent named pipe that must later be deleted (process substitution uses temporary named pipes behind the scenes):
foo -o >(other command)
Also, should you want to pipe the output to your command and also save the output to a file, you can do this:
foo -o >(tee output.txt) | other command
The shortest solution is simply to pass /dev/stdout as the output file:
foo -o /dev/stdout
You could use the magic of UNIX and create a named pipe :)
Create the pipe
$ mknod mypipe p
Start the process that reads from the pipe
$ second-process < mypipe
Start the process, that writes into the pipe
$ foo -o mypipe
foo -o >(cat)
This works if for some reason you don't have permission to write to /dev/stdout.
I use /dev/tty as the output filename, in the same way you would use /dev/null when you want no output at all. Then pipe with | and you are done.

grep command to print follow-up lines after a match

how to use "grep" command to find a match and to print followup of 10 lines from the match. this i need to get some error statements from log files. (else need to download use match for log time and then copy the content). Instead of downloading bulk size files i need to run a command to get those number of lines.
A default install of Solaris 10 or 11 will have the /usr/sfw/bin file tree. GNU grep - /usr/sfw/bin/ggrep - is there. ggrep supports /usr/sfw/bin/ggrep -A 10 [pattern] [file], which does what you want.
Solaris 9 and older may not have it. Or your system may not have been a default install. Check.
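Applied to the question (10 lines of context after each match in a log file), that would look something like this; the pattern and log path are just placeholders:
/usr/sfw/bin/ggrep -A 10 'ERROR' /var/log/app.log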
Suppose you have a file /etc/passwd and want to filter for user "chetan".
Try the command below:
cat /etc/passwd | /usr/sfw/bin/ggrep -A 2 'chetan'
It will print the line containing "chetan" and the next two lines as well.
-- Tested in Solaris 10 --

CMD: Export all the screen content to a text file

In the command prompt, how do I export all the content of the screen to a text file (basically a copy command, just without right-clicking and using the clipboard)?
This command works, but only for the commands you executed, not their output as well:
doskey /HISTORY > history.txt
If you want to append to a file instead of constantly making a new one/deleting the old one's content, use double > marks. A single > mark will overwrite all of the file's content.
Overwrite file
MyCommand.exe>file.txt
This will open file.txt if it already exists and overwrite the data, or create a new file and fill it with your output.
Append file from its end-point
MyCommand.exe>>file.txt
This will append to file.txt from its current end of file if it already exists, or create a new file and fill it with your output.
Update #1 (advanced):
My batch-fu has improved over time, so here are some minor updates.
If you want to differentiate between error output and normal output for a program that correctly uses Standard streams, STDOUT/STDERR, you can do this with minor changes to the syntax. I'll just use > for overwriting for these examples, but they work perfectly fine with >> for append, in regards to file-piping output re-direction.
The 1 before the >> or > is the flag for STDOUT, and a 2 before the re-direction symbols means STDERR. If you actually need to output the number one or two right before a re-direction symbol, this can lead to strange, unintuitive errors if you don't know about this mechanism; that's especially relevant when outputting a single result number into a file.
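For example, here is the classic gotcha this refers to (out.txt is just an illustration):
echo Result is 1>out.txt
cmd parses the trailing 1> as a STDOUT redirection, so out.txt ends up containing "Result is" and the 1 is never printed. One way around it is to put the redirection at the front of the line instead:
>out.txt echo Result is 1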
Now that you know you have more than one stream available, this is a good time to show the benefits of outputting to nul. Outputting to nul works conceptually the same way as outputting to a file: you don't see the content in your console. Instead of going to a file or to your console output, it goes into the void.
STDERR to file and suppress STDOUT
MyCommand.exe 1>nul 2>errors.txt
STDERR to file to only log errors. Will keep STDOUT in console
MyCommand.exe 2>errors.txt
STDOUT to file and suppress STDERR
MyCommand.exe 1>file.txt 2>nul
STDOUT only to file. Will keep STDERR in console
MyCommand.exe 1>file.txt
STDOUT to one file and STDERR to another file
MyCommand.exe 1>stdout.txt 2>errors.txt
The only caveat here is that this can create a 0-byte file for a stream that never gets used. Basically, if no errors occurred, you might end up with a 0-byte errors.txt file.
Update #2
I started noticing weird behavior when writing console apps that wrote directly to STDERR, and realized that if I wanted my error output to go to the same file when using basic piping, I either had to combine streams 1 and 2 or just use STDOUT. The problem was that I didn't know the correct way to combine streams, which is this:
%command% > outputfile 2>&1
Therefore, if you want all STDOUT and STDERR piped into the same stream, make sure to use that like so:
MyCommand.exe > file.txt 2>&1
The redirection actually defaults to 1> or 1>> even if you don't explicitly put a number in front of it, and the 2>&1 combines the streams.
Update #3 (simple)
Null for Everything
If you want to completely suppress STDOUT and STDERR, you can do it this way. As a warning, not all programs use STDOUT and STDERR, but this will work for the vast majority of use cases.
STD* to null
MyCommand.exe>nul 2>&1
Copying a CMD or Powershell session's command output
If all you want is the command output from a CMD or PowerShell session that you just finished up (or any other shell, for that matter), you can usually just select that console session, press CTRL + A to select all content, then CTRL + C to copy it. Then you can do whatever you like with the copied content while it's in your clipboard.
In cmd type:
Command | clip
Then open a *.txt file and paste. That's it. Done.
If you are looking for each command separately:
To export all the output of the command prompt into a text file, simply follow this syntax:
C:\> [syntax] >file.txt
The above command writes the result of [syntax] to file.txt; the new file.txt is created in the current folder.
For example,
C:\Result> dir >file.txt
To copy the whole session, copy & paste it as follows:
1.) At the end of your session, click the upper left corner to display the menu. Then select Edit -> Select all.
2.) Again, click the upper left corner to display the menu. Then select Edit -> Copy.
3.) Open your favorite text editor and use Ctrl+V or your normal paste operation to paste in the text.
If your batch file is not interactive and you don't need to see it run, then this should work.
@echo off
call file.bat >textfile.txt 2>&1
Otherwise use a tee filter. There are many, some not NT compatible. SFK, the Swiss Army Knife, has a tee feature and is still being developed. Maybe that will work for you.
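Not what this answer uses, but if PowerShell is available, its built-in Tee-Object cmdlet also behaves like a tee filter; a minimal sketch, reusing the same placeholder file names as above:
cmd /c file.bat | Tee-Object -FilePath textfile.txt
The batch file's output is shown in the console and written to textfile.txt at the same time.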
How about this:
<command> > <filename.txt> & <filename.txt>
Example:
ipconfig /all > network.txt & network.txt
This will give the results in Notepad instead of the command prompt.
From a command prompt run as Administrator, the example below prints a list of services running on your PC:
net start > c:\netstart.txt
You should find the text file you just exported, listing all the running PC services, at the root of your C:\ drive.
If you want to capture ALL of the output, not just stdout but also the warnings, infos and other messages the program writes to stderr, you have to add 2>&1 at the end of the command line.
In your case, the command will be
Program.exe > file.txt 2>&1

Unable to use SED to edit files fast

The file is initially
$cat so/app.yaml
application: SO
...
I run the following command. I get an empty file.
$sed s/SO/so/ so/app.yaml > so/app.yaml
$cat so/app.yaml
$
How can I use sed to edit the file without ending up with an empty file?
$ sed -i -e's/SO/so/' so/app.yaml
The -i means in-place.
The > used in the redirection will open the output file when the command line is set up, i.e. before command execution. Thus, the input file is truncated before sed executes. This is a problem with all shell redirection, not just with sed.
Sheldon Young's answer shows how to use in-place editing.
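A quick way to convince yourself that the truncation is a redirection effect rather than a sed quirk (demo.txt is just a throwaway file):
$ printf 'hello\n' > demo.txt
$ tr a-z A-Z < demo.txt > demo.txt
$ cat demo.txt
$
The > truncates demo.txt before tr ever reads it, so nothing survives even though tr itself did nothing wrong.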
You are using the wrong tool for the job. sed is a stream editor (that's why it's called sed), so it's for in-flight editing of streams in a pipe. ed OTOH is a file editor, which can do everything sed can do, except it works on files instead of streams. (Actually, it's the other way round: ed is the original utility and sed is a clone that avoids having to create temporary files for streams.)
ed works very much like sed (because sed is just a clone), but with one important difference: you can move around in files, but you can't move around in streams. So, all commands in ed take an address parameter that tells ed where in the file to apply the command. In your case, you want to apply the command everywhere in the file, so the address parameter is just a comma (,): a,b means "from line a to line b", the default for a is 1 (beginning-of-file), and the default for b is $ (end-of-file), so leaving them both out means "from beginning-of-file to end-of-file". Then comes the s (for substitute) and the rest looks much like sed.
So, your sed command s/SO/so/ turns into the ed command ,s/SO/so/.
And, again because ed is a file editor, and more precisely, an interactive file editor, we also need to write (w) the file and quit (q) the editor.
This is how it looks in its entirety:
ed -- so/app.yaml <<-HERE
,s/SO/so/
w
q
HERE
See also my answer to a similar question.
What happens in your case is that executing a pipeline is a two-stage process: first construct the pipeline, then run it. > means "open the file, truncate it, and connect it to file descriptor 1 (stdout)". Only then is the pipeline actually run, i.e. sed is executed, but at this time the file has already been truncated.
Some versions of sed also have a -i parameter for in-place editing of files, that makes sed behave a little more like ed, but using that is not advisable: first of all, it doesn't support all the features of ed, but more importantly, it is a non-standardized proprietary extension of GNU sed that doesn't work on many non-GNU systems. It's been a while since I used a non-GNU system, but last I used one, neither Solaris nor OpenBSD nor HP-UX nor IBM AIX sed supported the -i parameter.
I believe that redirecting output into the same file you are editing is causing your problem.
You need to redirect standard output to a temporary file and, when sed is done, overwrite the original file with the temporary one.
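A minimal sketch of that approach, reusing the file from the question (the .tmp suffix is only an illustration):
sed 's/SO/so/' so/app.yaml > so/app.yaml.tmp && mv so/app.yaml.tmp so/app.yaml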