Suppressing dmesg log output on stdout - linux-device-driver

The output from my device drivers, which also appears in dmesg, is getting printed on my stdout. Is there a way to prevent this?

You may be able to solve your problem by dynamically adjusting the console log level.
http://tuxthink.blogspot.com/2012/07/printk-and-console-log-level.html suggests that you can do this by writing to a proc node:
echo "6" > /proc/sys/kernel/printk
would set it to 6 in their example. I suspect setting it to 0 or 1 would suppress almost everything non-fatal to the system.
Log entries should still be retrievable by dmesg regardless of this setting.
However, this will affect messages from all sources. If you want to change the behavior of a custom driver, consider passing it a flag which would cause program logic to suppress message generation, or generate output at less important log levels which you might set the console to ignore as above.

printk() supports several message log levels.
Refer to http://www.makelinux.net/ldd3/chp-4-sect-2 for the different levels in printk(). Use the least severe log level (e.g. KERN_DEBUG); it won't be displayed on the console, but it can still be viewed using dmesg.

Related

How to print all states in Promela/SPIN

I would like to print all states when checking my model. We do get a trail file when an assertion violation occurs but I want to see the states even when there are no assertion violations. How can I do that?
One option is to compile pan with the gcc flag -DVERBOSE and watch the full details of the verification run. Of course the run will take a while and will produce copious output, but you will see all the states as they are visited (the format is not very easy to read, but may be sufficient for your purposes).
Another option to see the state graphs of individual processes is
./pan -D | dot -Tps | ps2pdf - pan.pdf
This will create a multi-page PDF where each page is a process (including the never claim).

BFX field too large for a data item increase -S

I am getting the above error when trying to run a script that produces a report. It is a pre-existing script that has been run successfully many times before. Research has told me that it is something to do with the stack size? I'm running 10.2B02 in WRQ Reflections. Can anyone tell me what this statement means and how I look up the value of my -S.
Thanks,
Paul
-s is a client startup parameter. You mention "Reflections" so you are probably using a character terminal session. The -s parameter is on the command line used to start Progress (which might be inside a script). If there is a -pf somefile.pf on the command line then it is inside that "parameter file". If it is not specified the default value is 40. The maximum value is limited by available memory but setting it in the hundreds or even in the thousands is not unheard of.
You can also get the startup values by sending a SIGUSR1 to the _progres process that the session is running. I.e. kill -USR1 <pid>. That will (safely) create a "protrace.<pid>" file that includes startup parameters and a 4GL stack trace. The file will appear in either the current directory, the home directory or the temp-file directory (I forget which; just look for protrace*).
This error usually means that your code is manipulating a field that is too large. (Like the error says.) That might be for a lot of reasons.
One common possibility is string concatenation in a loop.
Or you might be calling lots of sub-procedures and passing parameters around.
If "nothing has changed" in the code then it probably just means that some data structure has grown slightly larger over time and increasing -s is really no big deal so long as it solves the problem.
If you keep having to increase it then it is more likely that you have some sort of coding issue. Maybe you're passing things by value that ought to be passed by reference or maybe you have run away recursion. Or something else. You'd need to provide a lot more detail to say for sure.
It is also possible (but unlikely) that you have a corrupt data record that appears to have a field in it that is too large. You could run "proutil dbName -C dbanalys" as an initial step to see if that is true.
Part of the error message is non-standard -- I'm not certain which log file it is coming from or how it got there (applications can write their own messages) but it seems that it might have something to do with trying to send an e-mail. So I'd be suspicious that either the list of recipients got too long or that the body of the e-mail is too large.

Why does MATLAB cause my terminal's stdout to break, and how do I fix it?

Every time I finish running a MATLAB code collection on the command line and then exit MATLAB, the standard output just gets messed up. I can still use the terminal window, but whatever I type won't show up on the screen, leaving me to either type blind or open up a new terminal and tediously cd back to the old place.
This happens every single time I use make to run a MATLAB collection, and since I'm working on this a lot, it has become very annoying. Does anyone know what the problem is and how I should fix it?
As was pointed out in the comments, the makescript is probably dumping "bad" characters to the terminal. You could prevent this (but possibly lose useful information) by redirecting the output - instead of sending it to the terminal window, you can send it to a file, or even /dev/null ("the great bit bucket in the sky").
The underlying problem, however, is that your makefile is even sending these characters to the terminal in the first place. I would recommend that you pipe the output to a file with something like make > myDump.txt, then examine the resulting file to see what is going on, and where in your makefile the problem is created. It is possible that you will still be getting some output when you do this - that's because by default > redirects stdout only, and not stderr - a second output stream used for error messages. You can redirect both to a file with make > myDump.txt 2>&1.
You have already seen the recommendation to use stty sane to restore the status of the terminal - I am repeating it here in case someone only looks at answers, and not at comments; but I don't take credit for it :-).

Backticks not working in Perl

Got stuck with one problem on our live server.
Have a script (Perl) which runs almost 15 to 18 hrs a day. It creates 100+ subprocesses every day. In one place it runs a command (a product command which we run from the command line on a Solaris box) which is triggered with backticks inside the Perl code.
It looks like the backticks command gets skipped or fails randomly.
For e.g., if I need to run it for 50 customers, 2 or 3 fail randomly.
I do not see any evidence anywhere that the command was triggered.
Since it's a live server, we can't even try making many code changes until we are sure about the problem.
Here is the code:
my $comm = "inventory -noX customer1"; #sample command i have given here
my $newLogFile = "To capture command output here we have path whre the file gets created");
my $piddy = `$comm 2>&1 > $newLogFile`;
Is it because of the backticks that this happens? I am really not sure :(.
Also tried various analyses like memory/CPU/disk space/adding librtld_db.so to LD_LIBRARY_PATH etc., but no luck. Also the Perl is 64-bit. What else can I do? :(
I suspect you are not checking for errors (and Perl doesn't make that easy to do correctly for backticks).
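For illustration, here is a minimal sketch of checking $? by hand after backticks, reusing the question's sample command (the $? decoding follows perldoc -f system):
my $out = `inventory -noX customer1 2>&1`;
if ($? == -1) {
    warn "failed to execute: $!\n";     # the command never started
}
elsif ($? & 127) {
    warn sprintf "child died with signal %d\n", $? & 127;
}
elsif ($? >> 8) {
    warn sprintf "child exited with status %d\n", $? >> 8;
}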
Consider using IPC::System::Simple's capture in place of your backticks/qx.
As its doc says, "If there's an error, it will die with a detailed description of what went wrong."
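A minimal sketch, again reusing the question's sample command:
use IPC::System::Simple qw(capture);
# capture() is a drop-in replacement for backticks that dies with a
# detailed message if the command cannot be started, is killed by a
# signal, or exits with a non-zero status.
my $output = capture("inventory -noX customer1");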
It shouldn't fail just because of backticks; however, because it spawns a new process, that process may periodically fail due to system conditions (e.g. system load). Backticks are really a "fire and forget" method and should never be used for anything critical in a production environment. As previously suggested, there are far more detailed ways to manage spawning external processes.
If the command's output is being lost due to buffering, you might try turning off buffering, but keep an eye on it for performance degradation (it's usually not significant).
Buffering can be turned off for the currently selected output handle (normally STDOUT) by adding this near the top of the script:
$|=1;
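To unbuffer other handles as well, a small sketch using IO::Handle:
use IO::Handle;
STDOUT->autoflush(1);   # unbuffer STDOUT explicitly
STDERR->autoflush(1);   # STDERR is normally unbuffered already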
When calling external commands, I use system from IPC::System::Simple or open3 from IPC::Open3.
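As a rough sketch of the IPC::Open3 route, again assuming the question's sample command (note that reading both streams to the end like this can deadlock if the command writes a lot to both; IO::Select is the usual fix):
use IPC::Open3;
use Symbol 'gensym';

my $err = gensym;   # open3 needs a pre-created handle for stderr
my $pid = open3(my $in, my $out, $err, 'inventory', '-noX', 'customer1');
close $in;                       # no input to send
my @stdout_lines = <$out>;       # collect the command's stdout
my @stderr_lines = <$err>;       # ... and its stderr
waitpid($pid, 0);                # reap the child
my $exit_status = $? >> 8;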

How can I debug a Perl program that suddenly exits?

I have a Perl program based on IO::Async, and it sometimes just exits after a few hours/days without printing any error message whatsoever. There's nothing in dmesg or /var/log either. STDOUT/STDERR both have autoflush(1) set, so data shouldn't be lost in buffers. It doesn't actually return from IO::Async::Loop->loop_forever - a print I put there just to make sure of that never gets triggered.
Now one way would be to keep peppering the program with more and more prints and hope one of them gives me some clue. Is there a better way to get information about what was going on in a program that made it exit or silently crash?
One trick I've used is to run the program under strace or ltrace (or attach to the process using strace). Naturally that was under Linux. Under other operating systems you'd use ktrace or dtrace or whatever is appropriate.
A trick I've used for programs which only exhibit sparse issues over days or weeks, and then only on a handful among hundreds of systems, is to direct the output from my tracer to a FIFO and have a custom program keep only the last 10K lines in a ring buffer, with a handler on SIGPIPE and SIGHUP to dump the current buffer contents into a file. (It's a simple program, but I don't have a copy handy and I'm not going to re-write it tonight; my copy was written for internal use and is owned by a former employer.)
The ring buffer allows the program to run indefinitely without fear of running systems out of disk space; we usually only need a few hundred, or even a couple thousand, lines of the trace in such matters.
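The original tool isn't available, but the idea is simple enough; here is a minimal, hypothetical Perl sketch (paths and the line limit are placeholders, not the author's original program):
#!/usr/bin/perl
use strict;
use warnings;

my $fifo  = '/tmp/trace.fifo';   # FIFO that the tracer writes into
my $dump  = '/tmp/trace.dump';   # file to receive the buffer on demand
my $limit = 10_000;              # how many recent lines to keep
my @ring;

# On SIGHUP or SIGPIPE, dump the current buffer contents to a file.
my $dumper = sub {
    open my $fh, '>', $dump or die "cannot write $dump: $!";
    print {$fh} @ring;
    close $fh;
};
$SIG{HUP} = $SIG{PIPE} = $dumper;

open my $in, '<', $fifo or die "cannot open $fifo: $!";
while (my $line = <$in>) {
    push @ring, $line;                  # append the newest line
    shift @ring while @ring > $limit;   # drop the oldest beyond the limit
}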
If you are capturing STDERR, you could start the program as perl -MCarp::Always foo_prog. Carp::Always forces a stack trace on all errors.
A sudden exit without any error message is possibly a SIGPIPE. Traditionally SIGPIPE is used to stop things like the cat command in the following pipeline:
cat file | head -10
It doesn't usually result in anything being printed either by libc or perl to indicate what happened.
Since in an IO::Async-based program you'd not want to silently exit on SIGPIPE, my suggestion would be to put somewhere in the main file of the program a line something like
$SIG{PIPE} = sub { die "Aborting on SIGPIPE\n" };
which will at least alert you to this fact. If instead you use Carp::croak without the \n you might even be lucky enough to get the file/line number of the syswrite, etc... that caused the SIGPIPE.
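That variant would look something like this:
use Carp;
# croak (no trailing newline) reports a file and line number, which may
# point at the syswrite that triggered the SIGPIPE.
$SIG{PIPE} = sub { croak "Aborting on SIGPIPE" };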