Stop Groovy console truncating output? - groovy-console

Is it possible to stop the Groovy console truncating output?
Using the 1.8.4 console, if I execute the following script:
for (i in 0..4000) println i
I get the following output:
01
602
603
...
3999
4000
I can't see any options to preserve all program output.

Expanding upon the accepted answer: groovyConsole uses JAVA_OPTS, so anything you set there will be picked up. For instance, if you wanted to increase the maximum memory to 4 GB and the console limit to 200,000 characters, you could run this command before launching groovyConsole:
export JAVA_OPTS="-Xmx4096m -Dgroovy.console.output.limit=200000"
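On Windows, assuming groovyConsole.bat honors JAVA_OPTS the same way (the launcher scripts share this mechanism, but treat it as an assumption), the equivalent would be:
set JAVA_OPTS=-Xmx4096m -Dgroovy.console.output.limit=200000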

This file:
https://svn.codehaus.org/groovy/trunk/groovy/groovy-core/src/main/groovy/ui/Console.groovy
contains this code:
// Maximum number of characters to show on console at any time
int maxOutputChars = System.getProperty('groovy.console.output.limit','20000') as int
That seems to be the setting I want to change. There was even a JIRA issue for this:
https://issues.apache.org/jira/browse/GROOVY-4425
But so far I haven't been able to pass this property as a -D option through groovyConsole or groovyConsole.bat; the console immediately closes on startup. I will update if/when I figure out how to easily pass this property through to the console.
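In the meantime, one workaround worth trying (a sketch, not a verified fix: it assumes GROOVY_HOME is set and that the console's main class is groovy.ui.Console, as the file path above suggests) is to bypass the launcher scripts and hand the -D option to the JVM directly:
java -Dgroovy.console.output.limit=200000 -cp "$GROOVY_HOME/lib/*" groovy.ui.Console
Starting the class from java yourself guarantees the system property reaches the JVM, whatever the launcher scripts do with their arguments.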

Related

When I run the 'mongod' command in the command prompt, the messages are displayed in JSON form. How can I fix it?

This is what is displayed after I run "mongod". Everything works fine; the only issue is that the messages are hard to read in this format. Is there a way to change the format so that the messages are displayed plainly, line by line?
I read in the documentation that this is how log messages are displayed starting from v4.4 of MongoDB.
That appears to be the case, and one way to improve readability is to pretty-print the log using the 'jq' utility (a JSON processor).
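For example (a sketch: the log path is an assumption, and t, s, and msg are the standard field names in MongoDB's structured log format, so check them against your own output):
jq . /var/log/mongodb/mongod.log
pretty-prints each JSON log line, while
jq -r '[.t."$date", .s, .msg] | join(" ")' /var/log/mongodb/mongod.log
reduces each line to a timestamp, severity, and plain message.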

DBeaver: Redirect server output to file

I'm using DBeaver to execute a large script file which produces a lot of output (via PostgreSQL's RAISE NOTICE statement). I can see the output in the server output tab; however, the buffer size seems to be limited, so a lot of output is lost at the end of the execution.
Is it somehow possible to either increase the server output tab buffer size or redirect the server output directly to a file?
I was experiencing the same issue as you, and I have been unable to find any setting which limits the output length.
In my case, what I eventually discovered was that there was an error in my script which was causing it to fail silently. It looked as if part of the output was missing, but in fact the script was terminating prematurely.
I encourage you to consider this option, and check your script for errors. Be aware that errors in the script don't appear in the output log.
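If the buffer limit itself can't be raised, one workaround outside DBeaver (assuming you can run the script with psql, and with mydb and script.sql standing in for your own names) is to redirect the notices yourself: psql prints RAISE NOTICE messages on stderr, so
psql -d mydb -f script.sql 2> notices.log
captures all of them in a file with no buffer limit.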

output from corFlags.exe and dumpbin.exe disappears when redirected

We are trying to automate some procedures using corFlags.exe and dumpbin.exe. Trying to capture the output from either of these programs has been impossible so far. In detail, executing
corFlags.exe yourfavorite.dll
in cmd.exe or in powershell.exe (with the appropriate change of syntax) produces output just fine, but as soon as one attempts to capture the output, whether through redirection or piping, e.g.
corflags.exe yourFavorite.dll >>out.txt
or
$l_result = &corflags yourFavorite.dll | select-string -pattern "32BIT"
the output of corflags is lost. There is a similar problem with dumpbin.
This is occurring on a Windows 7 SP1 machine (6.1.7601 SP1 build 7601).
I am guessing they suffer from the flaw of not flushing their output streams before exiting. See "Output shows up in console, but disappears when redirected to file" for an example.
We have found no way of working around this problem so far (executing in a sub-process, a batch process, etc.). Does anyone know of a workaround to this problem? Thanks.
A nice, simple demonstration of the problem is as follows. Open the PowerShell ISE and try to run "corFlags.exe some.dll" within the console window. You will not be able to get any output from it!
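One thing worth trying (a sketch, not a confirmed fix: if the tools write through the console API rather than stdout, even this may capture nothing) is to capture the stream through System.Diagnostics.Process instead of shell redirection:
$psi = New-Object System.Diagnostics.ProcessStartInfo
$psi.FileName = 'corflags.exe'
$psi.Arguments = 'yourFavorite.dll'
$psi.UseShellExecute = $false
$psi.RedirectStandardOutput = $true
$p = [System.Diagnostics.Process]::Start($psi)
$output = $p.StandardOutput.ReadToEnd()  # read before WaitForExit to avoid a pipe deadlock
$p.WaitForExit()
$output | Set-Content out.txt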

ipython rolling log

I want the last 500 MB worth of IPython input and output saved to a file.
The saving should survive cases where I have to kill IPython; for example, it could save on a timer.
I want this file reloaded (not re-executed) at startup, and then updated in a rolling fashion.
How can I achieve this?
IPython already logs your input - it's stored in history.sqlite in your profile folder (run ipython locate profile to see where that is). To turn on output logging as well, edit ipython_config.py and search for 'db_log_output'. This captures output resulting from the displayhook (with the Out[n]: prompt), not printed output.
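A minimal sketch of that setting (assuming the current traitlet name; search the generated config file to confirm it):
c.HistoryManager.db_log_output = True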
To look at history from a previous session, you can use %hist ~1/1-10 (lines 1-10 of the session before the current one). It also works with magic commands like %rerun, %recall and %save.
If you want it recorded to a text file, have a look at the %logstart magic.
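For example (the filename is arbitrary), %logstart's rotate mode keeps numbered backup files, which is the closest built-in approximation of a rolling log, and -o records output alongside input:
%logstart -o ~/ipython_log.py rotate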

How can I make log4perl output easier to read?

When using log4perl, the debug log layout that I'm using is:
log4perl.appender.D10.layout=PatternLayout
log4perl.appender.D10.layout.ConversionPattern=%d [pid=%P] %p %F{1} (%L) %M %m%n
log4perl.appender.D10.Filter = DebugAndUp
This produces very verbose debug logs, for example:
2008/11/26 11:57:28 [pid=25485] DEBUG SomeModule.pm (331) functions::SomeModule::Test Test XXX was successful
2008/11/26 11:57:29 [pid=25485] ERROR SomeOtherUnrelatedModule.pm (99999) functions::SomeModule::AnotherTest AnotherTest YYY has failed
This works great, and provides excellent debugging data.
However, each line of the debug log contains different function names, pid length, etc. This makes each line layout differently, and makes reading debug logs much harder than it needs to be.
Is there a way in log4perl to format the line so that the debugging metadata (everything up to the actual log message) is padded with spaces/tabs, making the actual message start at the same column on every line?
You can pad the individual fields that make up your entries. For example, [pid=%5P] will always give you at least 5 characters for the PID.
The "Quantify Placeholders" section in the docs for Log::Log4perl::Layout gives more details.
There are a couple of ways to go with this, although you have to figure out which one works better for your situation:
Use a different appender if you are working live. Have that appender use a pattern that shows only the information you want. If you're working in a single process, for instance, your alternate appender might leave off the PID and the timestamp. You might only need the file name and line number.
Use %n to put newlines in the right place. That makes it multi-line output that is slightly harder to parse later, but you can choose another sequence for the input record separator (say, a literal "[EOL]") to make it easy to read entry-by-entry.
Log to a database instead of a file. For your reports, select just the columns you want to inspect.
Log everything, but write a filter to go through the log file ad hoc, displaying just the parts that you want to see: only the debugging messages, the entries between certain times, only the entries involving a file, and so on (see the sketch after this list).
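A minimal sketch of such a filter against the sample layout above (the log file name and module are placeholders; adapt the pattern to your own format):
grep ' DEBUG SomeModule.pm ' debug.log
or, to pull out everything logged within a particular minute:
grep '^2008/11/26 11:57:' debug.log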