How to prevent tcpreplay from printing warning information? - pcap

I am trying to replay a large pcap file, but tcpreplay keeps printing "Warning: Packet #50579 has gone back in time". That affects efficiency. Is there a way to stop this?

As of tcpreplay 3.4.4, you cannot completely silence this warning, but it's trivial to change the code to do so -- look at the definitions in src/common/err.h, and change the warnx() definition to become a no-op when debugging is disabled, similarly to dbg() and dbgx().
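For illustration, a sketch of the idea (the actual definitions in err.h are more involved, so treat these macro bodies as hypothetical, not a drop-in patch):

    /* src/common/err.h -- sketch only: compile warnx() down to a no-op
     * when debugging is disabled, the way dbg()/dbgx() already are. */
    #ifdef DEBUG
    #define warnx(x) fprintf(stderr, "Warning: %s\n", x)
    #else
    #define warnx(x) do { } while (0)   /* expands to nothing */
    #endif

Rebuilding tcpreplay after such a change should make the per-packet warnings disappear.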
However, you should verify whether the output indeed affects packet throughput. I doubt it, particularly if it only affects a single packet.

Related

BFX field to large for a data item increase -S

I am getting the above error when trying to run a script to produce a report. It is a pre-existing script that has run successfully many times before. Research has told me that it is something to do with the stack size? I'm running 10.2B02 in WRQ Reflections. Can anyone tell me what this statement means and how I look up the value of my -s.
Thanks,
Paul
-s is a client startup parameter. You mention "Reflections" so you are probably using a character terminal session. The -s parameter is on the command line used to start Progress (which might be inside a script). If there is a -pf somefile.pf on the command line then it is inside that "parameter file". If it is not specified the default value is 40. The maximum value is limited by available memory but setting it in the hundreds or even in the thousands is not unheard of.
You can also get the startup values by sending a SIGUSR1 to the _progres process that is running the session, i.e. kill -USR1 <pid>. That will (safely) create a "protrace.<pid>" file that includes startup parameters and a 4GL stack trace. The file will appear in either the current directory, the home directory or the temp-file directory (I forget which, just look for protrace*).
This error usually means that your code is manipulating a field that is too large. (Like the error says.) That might happen for a lot of reasons.
One common possibility is string concatenation in a loop.
Or you might be calling lots of sub-procedures and passing parameters around.
If "nothing has changed" in the code then it probably just means that some data structure has grown slightly larger over time and increasing -s is really no big deal so long as it solves the problem.
If you keep having to increase it then it is more likely that you have some sort of coding issue. Maybe you're passing things by value that ought to be passed by reference or maybe you have run away recursion. Or something else. You'd need to provide a lot more detail to say for sure.
It is also possible (but unlikely) that you have a corrupt data record that appears to have a field in it that is too large. You could run "proutil dbName -C dbanalys" as an initial step to see if that is true.
Part of the error message is non-standard -- I'm not certain which log file it is coming from or how it got there (applications can write their own messages) but it seems that it might have something to do with trying to send an e-mail. So I'd be suspicious that either the list of recipients got too long or that the body of the e-mail is too large.

Suppressing dmesg log output on stdout

The output from my device drivers, which also shows up in dmesg, is getting printed on my stdout. Is there a way to prevent this?
You may be able to solve your problem by dynamically adjusting the console log level.
http://tuxthink.blogspot.com/2012/07/printk-and-console-log-level.html
suggests that you can do this by writing to a proc node; in their example,
echo "6" > /proc/sys/kernel/printk
would set the console log level to 6. I suspect setting it to 0 or 1 would suppress almost everything that is not fatal to the system.
Log entries should still be retrievable by dmesg regardless of this setting.
However, this will affect messages from all sources. If you want to change the behavior of a custom driver, consider passing it a flag which would cause program logic to suppress message generation, or generate output at less important log levels which you might set the console to ignore as above.
There are different levels for printing messages with printk().
Refer to http://www.makelinux.net/ldd3/chp-4-sect-2 for the different levels in printk(). Use the lowest-priority log level (the highest number, KERN_DEBUG); it won't be displayed on the console, but it can still be viewed using dmesg.
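To make the two suggestions concrete, here is a minimal, hypothetical kernel module (all names invented) that tags chatty output with KERN_DEBUG so the console filters it, and additionally gates it behind a load-time flag:

    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/moduleparam.h>

    /* Load-time flag, e.g.: insmod loglevel_demo.ko verbose=1 */
    static int verbose = 0;
    module_param(verbose, int, 0644);
    MODULE_PARM_DESC(verbose, "emit chatty debug messages");

    static int __init loglevel_demo_init(void)
    {
        /* KERN_DEBUG (7) is the least severe level: with a typical
         * console log level it never reaches the console, but it is
         * still stored in the ring buffer, so dmesg can show it. */
        if (verbose)
            printk(KERN_DEBUG "loglevel_demo: chatty diagnostic\n");

        /* KERN_ERR (3) is severe enough to be printed on the console
         * under the usual /proc/sys/kernel/printk settings. */
        printk(KERN_ERR "loglevel_demo: something actually went wrong\n");
        return 0;
    }

    static void __exit loglevel_demo_exit(void)
    {
    }

    module_init(loglevel_demo_init);
    module_exit(loglevel_demo_exit);
    MODULE_LICENSE("GPL");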

What might cause a print error in Perl?

I have a long-running script that opens a file every hour, prints to it, and closes the file. I've recently found that, very rarely, the print fails -- not because I'm testing the status of the print itself, but because entries are missing from the file until the system is actually rebooted!
I do trap file open failures and write a message to syslog when that happens, and I'm not seeing any open failures, so I'm now guessing it may be the print that is failing. I'm not trapping the print failures, which I suspect most people don't either, but I am now going to update that one print.
Meanwhile, my question is: does anyone know what types of situations could cause a print statement to fail when there is plenty of disk storage and no contention for a file which has been successfully opened in append mode?
You could be out of memory (ENOMEM) or over a filesize limit (EFBIG or SIGXFSZ). You could have an old-fashioned I/O error (EIO). You could have a race condition if the script is run concurrently or if the file is accessed over NFS. And, of course, you could have an error in the expression whose value you would print.
An exotic cause that I once saw is that a CPU heatsink failure can lead to sprintf spuriously failing, causing some surprising results including writing garbage to file descriptors.
Finally, I remind you that print will often write its stuff in an I/O buffer. This means two things. (1) You need to check the result of close() as well. (2) If you print but you don't immediately close() or flush() then your data can be buffered and not actually written until much later (or not at all if the process dies horribly).
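Perl's filehandles sit on top of the same buffered I/O model as C stdio, so the discipline described above looks like this in C terms (append_entry is a made-up helper, not anything from the asker's script):

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>

    int append_entry(const char *path, const char *line)
    {
        FILE *fp = fopen(path, "a");
        if (fp == NULL) {
            fprintf(stderr, "open failed: %s\n", strerror(errno));
            return -1;
        }
        /* A write can fail (ENOSPC, EFBIG, EIO, ...), but because the
         * output is buffered the error may only surface later. */
        if (fprintf(fp, "%s\n", line) < 0) {
            fprintf(stderr, "write failed: %s\n", strerror(errno));
            fclose(fp);
            return -1;
        }
        /* fclose() flushes the buffer, so a deferred write error shows
         * up here -- which is exactly why close must be checked too. */
        if (fclose(fp) != 0) {
            fprintf(stderr, "close failed: %s\n", strerror(errno));
            return -1;
        }
        return 0;
    }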

Process communication with signals

I was programming in C doing system calls, and I was wondering the following:
What's an example of where you'd want a process to ignore alarm signals, say if the signal was sent because of a lost packet in intra-network processes?
Many important daemons are very picky about the signals they will respond to; they often install a handler for SIGHUP to re-read their configuration file, use one of SIGUSR1 or SIGUSR2 to indicate the need to close and re-open their log files for log-rotation, and handle SIGINT, SIGQUIT, SIGTERM, etc., in some sort of graceful way.
Everything else should be ignored so that accidental signals do not cause the program to do funny things. The signals that are part of the program's interface should work exactly as designed -- and the other signals should do as little harm as possible.
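A minimal sketch of that pattern in C (the chosen signals and handler names are illustrative, not taken from any particular daemon):

    #include <signal.h>
    #include <unistd.h>

    static volatile sig_atomic_t reload_config = 0;
    static volatile sig_atomic_t reopen_logs = 0;

    static void on_sighup(int sig)  { (void)sig; reload_config = 1; }
    static void on_sigusr1(int sig) { (void)sig; reopen_logs = 1; }

    int main(void)
    {
        struct sigaction sa;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;

        /* Signals that are part of the program's interface get handlers. */
        sa.sa_handler = on_sighup;
        sigaction(SIGHUP, &sa, NULL);
        sa.sa_handler = on_sigusr1;
        sigaction(SIGUSR1, &sa, NULL);

        /* Everything else that can safely be ignored, is: an accidental
         * SIGALRM or SIGPIPE then does as little harm as possible. */
        sa.sa_handler = SIG_IGN;
        sigaction(SIGALRM, &sa, NULL);
        sigaction(SIGPIPE, &sa, NULL);
        sigaction(SIGUSR2, &sa, NULL);

        for (;;) {
            pause();                     /* sleep until a signal arrives */
            if (reload_config) { reload_config = 0; /* re-read config */ }
            if (reopen_logs)   { reopen_logs = 0;   /* reopen log files */ }
        }
    }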

Why aren't buffers auto-flushed by default?

I recently had the privilege of setting $| = 1; inside my Perl script to help it talk faster with another application across a pipe.
I'm curious as to why this is not the default setting. In other words, what do I lose out on if my buffer gets flushed straightaway?
Writing to a file descriptor is done via system calls, and system calls are slow.
Buffering a stream and flushing it only once some amount of data has been written is a way to save some system calls.
Benchmark it and you will understand.
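A rough way to benchmark it in C (the effect is the same in Perl; the file names and counts here are arbitrary):

    #include <stdio.h>
    #include <time.h>

    /* Write the same lines through a 64 KiB buffer or with no buffer
     * at all, and report the processor time each approach takes. */
    static double write_lines(FILE *fp, int unbuffered)
    {
        if (unbuffered)
            setvbuf(fp, NULL, _IONBF, 0);        /* one syscall per write */
        else
            setvbuf(fp, NULL, _IOFBF, 1 << 16);  /* few, large syscalls */

        clock_t start = clock();
        for (int i = 0; i < 100000; i++)
            fprintf(fp, "line %d\n", i);
        fflush(fp);
        return (double)(clock() - start) / CLOCKS_PER_SEC;
    }

    int main(void)
    {
        FILE *a = fopen("buffered.tmp", "w");
        FILE *b = fopen("unbuffered.tmp", "w");
        if (!a || !b)
            return 1;
        printf("buffered:   %.3fs\n", write_lines(a, 0));
        printf("unbuffered: %.3fs\n", write_lines(b, 1));
        fclose(a);
        fclose(b);
        return 0;
    }

The unbuffered run makes one write() per line; the buffered run makes a few large ones, which is where the speed difference comes from.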
Whether output is buffered depends on the device type of the output handle: ttys are line-buffered, pipes and sockets are pipe-buffered, and disks are block-buffered.
This is just basic programming. It’s not a Perl thing.
The fewer times the I/O buffer is flushed, the faster your code is in general (since it doesn't have to make a system call as often). So your code spends more time waiting for I/O by enabling auto-flush.
In a purely network I/O-driven application, that obviously makes more sense. However, in the most common use cases, line-buffered I/O (Perl's default for TTYs) allows the program to flush the buffer less often and spend more time doing CPU work. The average user wouldn't notice the difference on a terminal or in a file.
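For the pipe case in the question, the C analogue of $| = 1 is flushing after every message; a sketch:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* When stdout is a pipe, stdio fully buffers it, so the reader
         * at the other end may not see a line until the buffer fills.
         * Flushing after each message delivers it immediately -- the
         * same latency-for-syscalls trade that $| = 1 makes in Perl. */
        for (int i = 0; i < 10; i++) {
            printf("event %d\n", i);
            fflush(stdout);    /* push this line into the pipe now */
            sleep(1);
        }
        return 0;
    }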