When I run the 'mongod' command in the command prompt, the messages are displayed in JSON form. How do I fix it? - mongodb

This is what is displayed after I run "mongod". Everything works fine; the only issue is that the messages are hard to read in this format. Is there a way to change the format so that the messages are displayed line by line?

I read in the documentation that this is how log messages are displayed starting from MongoDB v4.4: all log output is emitted as structured JSON.
That appears to be the case here, and one way to improve readability is to pretty-print the output with the 'jq' utility (a command-line JSON processor).
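For example, you can pipe the log file through jq to pretty-print it, or pull out just a few fields for line-by-line reading. The log file path below is only an example; in MongoDB's structured logs, 't' holds the timestamp, 's' the severity, and 'msg' the message:

cat /var/log/mongodb/mongod.log | jq '.'
cat /var/log/mongodb/mongod.log | jq -r '[.t["$date"], .s, .msg] | @tsv'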

Related

When trying to save pgAdmin result to a file (TXT) the result is modified

When I run my query in pgAdmin 4 v5's Query Tool, I get the data representation shown in the results grid (this is also what I would like to get in my export file).
Unfortunately, this information is transformed when I save it to a .TXT file using the export button.
When I open the saved TXT document, '.0' has been appended to my values and my long numbers have been wrapped into scientific notation ('e+29') up to a certain row.
Can you please tell me how to remove these transformations?
All,
I found out that the above problem is linked to the version of pgAdmin I was using, pgAdmin 4 v5 precisely.
After upgrading to pgAdmin 4 v6.4, the problem no longer appears.
I therefore consider this fixed, even though the cause of the problem remains unknown to me.
Thanks for your help.
Brieuc

DBeaver: Redirect server output to file

I'm using DBeaver to execute a large script file which produces a lot of output (via PostgreSQL's RAISE NOTICE statement). I can see the output in the server output tab; however, the buffer size seems to be limited, so a lot of output is lost at the end of the execution.
Is it somehow possible to either increase the server output tab buffer size or redirect the server output directly to a file?
I was experiencing the same issue, and I have been unable to find any setting that limits the output length.
In my case, what I eventually discovered was an error in my script that caused it to fail silently. It looked as though part of the output was missing, but the script was actually terminating prematurely.
I encourage you to consider this possibility and check your script for errors. Be aware that errors in the script don't appear in the output log.
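One defensive sketch, assuming your script body is PL/pgSQL that can be wrapped in a DO block: catch errors and re-raise them as a notice, so a failure shows up in the server output tab instead of silently truncating it:

DO $$
BEGIN
  RAISE NOTICE 'step 1 done';
  -- ... rest of the script ...
EXCEPTION WHEN OTHERS THEN
  -- surface the failure in the server output instead of failing silently
  RAISE NOTICE 'script failed: %', SQLERRM;
END $$;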

ipython rolling log

I want to have the last 500 MB worth of IPython input and output saved to a file.
The saving should survive cases where I have to kill IPython, for example by saving on a timer.
I want this file reloaded (not re-executed) at startup, with the file then updated in a rolling fashion.
How can I achieve this?
IPython already logs your input - it's stored in history.sqlite in your profile folder (run 'ipython locate profile' to see where that is). To turn on output logging as well, edit ipython_config.py and search for 'db_log_output'. This captures output resulting from the displayhook (with the Out[n]: prompt), not printed output.
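A minimal sketch of that setting (ipython_config.py lives in the profile folder reported above; create one with 'ipython profile create' if it doesn't exist):

# in ipython_config.py
c = get_config()
c.HistoryManager.db_log_output = True  # also store Out[n] results in history.sqlite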
To look at history from a previous session, you can use %hist ~1/1-10 (lines 1-10 of the session before the current one). It also works with magic commands like %rerun, %recall and %save.
If you want it recorded to a text file, have a look at the %logstart magic.
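For example, this starts a rotating plain-text log of the session; the file name is just an example, -o includes output, and -t timestamps each entry:

%logstart -o -t ipython_session.py rotate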

printing text into a file in Matlab

I want to log the running of my program, specifically the running time of each part. At the moment I print to the screen using disp. Is there a way to have some of the things I print also written to a text file?
You can use the DIARY command, which captures everything from the Command Window.
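A minimal sketch (the file name is an example):

diary('run_log.txt')          % start capturing the Command Window
disp('starting expensive step...')
tic; pause(1); toc            % timing output is captured too
diary off                     % stop capturing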
There are other solutions to this problem where you write to one or more log files opened while your program is running. This provides a permanent record without polluting your workspace or diary. It also works well if you compile your MATLAB application.
Jan Simon has a nice solution at MATLAB Central which uses a persistent file id, so the log-to-file mechanism can be used again and again throughout an application with many functions without passing the file id about.
Others at MATLAB Central (here and here) have developed class based solutions with more features.
Also, fprintf can write directly to a file identifier.
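A small sketch that writes the same message to the screen and to a log file (the file name is an example):

tic;                                               % start timing the section
% ... the code being timed ...
fid = fopen('run_log.txt', 'a');                   % open the log for appending
msg = sprintf('this step took %.2f seconds\n', toc);
fprintf('%s', msg);                                % to the screen
fprintf(fid, '%s', msg);                           % to the file
fclose(fid);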

How can I make log4perl output easier to read?

When using log4perl, the debug log layout I'm using is:
log4perl.appender.D10.layout=PatternLayout
log4perl.appender.D10.layout.ConversionPattern=%d [pid=%P] %p %F{1} (%L) %M %m%n
log4perl.appender.D10.Filter = DebugAndUp
This produces very verbose debug logs, for example:
2008/11/26 11:57:28 [pid=25485] DEBUG SomeModule.pm (331) functions::SomeModule::Test Test XXX was successful
2008/11/26 11:57:29 [pid=25485] ERROR SomeOtherUnrelatedModule.pm (99999) functions::SomeModule::AnotherTest AnotherTest YYY has failed
This works great, and provides excellent debugging data.
However, each line of the debug log contains a different function name, PID length, etc. This makes every line lay out differently, and makes reading debug logs much harder than it needs to be.
Is there a way in log4perl to format the line so that the debugging metadata (everything up until the actual log message) is padded with spaces/tabs, and the actual message starts at the same column of text?
You can pad the single fields that make up your entries. For example [pid=%5P] will always give you at least 5 characters for the PID.
The "Quantify Placeholders" section in the docs for Log::Log4perl::Layout gives more details.
There are a few ways to go with this, although you have to figure out which one works better for your situation:
Use a different appender if you are working live. Have that appender use a pattern that shows only the information you want. If you're working in a single process, for instance, your alternate appender might leave off the PID and the timestamp. You might only need the file name and line number.
Use %n to put newlines in the right place. That makes it multi-line output that is slightly harder to parse later, but you can choose another sequence for the input record separator (say, a literal "[EOL]") to make it easy to read entry-by-entry.
Log to a database instead of a file. For your reports, select just the columns you want to inspect.
Log everything, but write a filter to go through the log file ad-hoc to display just the parts that you want to see, such as only the debugging messages, the entries between certain times, only the entries involving a file, and so on.
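For that last option, the filter can start out as a simple shell one-liner (the file and module names are examples):

# keep only DEBUG entries that mention a particular module
grep ' DEBUG ' debug.log | grep 'SomeModule.pm' | less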