How to ignore some subroutine calls in NYTProf reporting - perl

I'm trying to profile a Perl script, but CORE::sleep gobbles all the space (and time) in my report.
How can I tell NYTProf to ignore sleep calls?
Assuming we have the following script:
sub BrandNewSubroutine {
    sleep 10;
    print "Odelay\n";
}
BrandNewSubroutine();
I want to get rid of the following line in the report:
Exclusive Time;Inclusive Time;Subroutine
10.0s;10.0s;main::CORE:sleep (opcode)
Edit: Using DB::disable_profile() and DB::enable_profile() won't do the trick, as they add the sleep time to BrandNewSubroutine's inclusive time.
Thanks in advance.

I'd suggest either wrapping the calls to sleep (possibly by use of the method mentioned in perlsub) with DB::disable_profile() and DB::enable_profile() calls (see "RUN-TIME CONTROL OF PROFILING" in the NYTProf documentation), or post-processing the report to remove the offending calls.
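For example, here is a minimal sketch of the first approach, assuming the script runs under Devel::NYTProf: override sleep globally (the "Overriding Built-in Functions" technique from perlsub) and pause the profiler around the real call. As the question's edit notes, the elapsed time may still show up in the caller's inclusive time.

use strict;
use warnings;

BEGIN {
    # Replace sleep everywhere with a wrapper that pauses profiling.
    *CORE::GLOBAL::sleep = sub {
        DB::disable_profile();
        my $slept = CORE::sleep(shift);
        DB::enable_profile();
        return $slept;
    };
}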

CORE::accept is already ignored in the way you'd like CORE::sleep to be, so the mechanism is already in place. See this code in NYTProf.xs:
/* XXX make configurable eg for wait(), and maybe even subs like FCGI::Accept
* so perhaps use $hide_sub_calls->{$package}{$subname} to make it general.
* Then the logic would have to move out of this block.
*/
if (OP_ACCEPT == op_type)
    subr_entry->hide_subr_call_time = 1;
So with a little hacking, changing the test to (OP_SLEEP == op_type || OP_ACCEPT == op_type), you'd be able to ignore CORE::sleep in the same way.
I'd accept a patch to enable that as an option.

Related

Parallel execution of a Perl program

I have a Perl program that reads the packets of a flow from a pcap file, but it takes a lot of time. I want to make it parallel, but I don't know if that is possible. If yes, can I do it with MPI? And another question: what is the best way to parallelize this code? Here is the relevant piece (I think I should work on this part for parallelizing, but I don't know the best way!):
while (!eof($inFileH))
{
    # $inFileH is the handle of the pcap file;
    # each iteration of the loop reads one packet
    $ts_sec   = readBytes($inFileH, 4);
    $ts_usec  = readBytes($inFileH, 4);
    $incl_len = readBytes($inFileH, 4);
    $orig_len = readBytes($inFileH, 4);

    if ($totalLen == 0)    # it is the 1st packet
    {
        $startTime = $ts_sec + $ts_usec/1000000;
    }
    $timeStamp = $ts_sec + $ts_usec/1000000 - $startTime;
    $totalLen += $orig_len;

    $#packet = -1;    # empty the array
    for (my $i = 0; $i < $incl_len; $i++)    # read all included octets of the current packet
    {
        read $inFileH, $packet[$i], 1;
        $packet[$i] = ord($packet[$i]);
    }
    # and after that I will work on @packet and analyze it
}
So how should I send the file content to other processors so they can work on it in parallel?
First you need to determine the bottleneck. If it is really CPU usage (i.e. CPU usage is at 100% while you are running the script), you need to figure out where the processing spends its time.
This may well be in the way that you are parsing the input. There may be obvious ways to speed this up. For instance, if you use complex regular expressions, and focus exclusively on matching input correctly, there may be ways to make the matching a lot faster by rewriting the expressions or doing simpler matches before trying more complex ones.
If you can't reduce CPU usage far enough in this way, and you really want to parallelize, see if you can employ the mechanism with which Perl was born: Unix pipes. You can write Perl scripts that pass data through to each other in a pipeline, or you can do the creation of the processes and pipes within Perl itself (see perlopentut, and if that isn't enough, perlipc).
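Here is a minimal sketch of that pipe idea (see perlopentut), not the poster's code: the parent reads and decodes packets, then streams one record per line to a child process that does the heavy analysis. analyze.pl and next_packet_record() are hypothetical stand-ins.

use strict;
use warnings;

# Spawn a worker and get a write handle connected to its STDIN.
open my $worker, '|-', $^X, 'analyze.pl'
    or die "cannot start worker: $!";

# next_packet_record() is a stand-in for the packet-parsing loop above.
while (defined(my $record = next_packet_record())) {
    print {$worker} $record, "\n";    # ship the record down the pipe
}

close $worker or warn "worker exited with status $?";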
As a general rule, I would consider these options first before trying other mechanisms, but it really depends on the details of what you're trying to do and the context in which you need to do it.

Notification window with buttons in Linux

I have a Perl script which listens on a port and filters messages, and, based on them, proposes to take action or ignore the event.
I'd like to make it show a notification window (not a dialogue window) with buttons 'take action' and 'ignore', which would go away after a certain timeout.
So far I have something like this:
my @react = ("somecommand", "someoptions"); # based on what regex the message matched
my $cmd = "xmessage";
my $cmd_args = "-print -timeout 7 -buttons React,Dismiss $message"; # raw message from port
open XMSG, "$cmd $cmd_args |";
while (<XMSG>) {
    if ($_ eq "React\n") {
        # do something...
    }
}
But it would handle only one notification at a time: the next message would not appear until the previous one is dismissed, reacted to, or timed out, so it's quite a poor approach. I cannot do anything until I get the return code from xmessage, and I can't have xmessage run a command. Well, I probably could if I introduced event IDs and listened on a socket where xmessage prints, but that would make things too complicated, I guess.
So I wonder: is there a library or a utility for Linux that draws notify-like windows with buttons, each of which would trigger a command?
I'm sorry I didn't see this one when it was first posted. There are several GUI toolkits which could do something along these lines. Prima is a toolkit built especially for Perl and has no external library dependencies.
For when you just need a popup dialog, there is the Ask module which delegates the task of popping up windows to any available library.
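For instance, a minimal sketch with Ask, assuming its detect/question interface; the backend that actually draws the window (GTK, a terminal prompt, ...) is picked at runtime:

use strict;
use warnings;
use Ask;

my $message = 'raw message from port';          # from the original script
my @react   = ('somecommand', 'someoptions');   # likewise

my $ask = Ask->detect;
system(@react) if $ask->question(text => $message);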
In case anyone's interested, I've ended up writing a small Tcl/Tk program for that, the full code (all 48 lines) can be found here: http://cloudcabin.org/read/twobutton_notify, and you can ignore the text in Russian around it.

Is there any way to clear NSLog Output?

I have been googling for the last couple of hours trying to find out whether there is any way to clear the NSLog output from code.
Like clrscr() in C: if we are trying to print something we want to focus on, and there is a lot of logging going on, we could put that call there and keep our desired log on top for easy searching. This can be done by putting a breakpoint on the NSLog line and then clicking "clear console", but my question is: is there a way to achieve this programmatically?
I found a few questions on Stack Overflow, but I wasn't satisfied with answers like this one, which say that I can disable logging in release mode, etc.
I could use DLog, ALog or ULog as required, but my question is different.
Can anyone help me with this?
Thanks in advance :)
You can use a conditional breakpoint to simulate it. Define a function like this in your code:
int clear_console()
{
    NSLog(@"\n\n\n\n\n\n\n\n");
    return 0;
}
Then, when you want to clear the console, just add a breakpoint before the NSLog with this condition:
Condition: 1 > 0
Action: Debugger Command: expr (int) clear_console()
Options: check "Automatically continue after evaluating" to skip the pause.
Tested with Xcode 4.3.2 and lldb.
Previous answer:
AFAIK, no, there isn't.
Just in case you're not doing it yet, you can create custom macros to format the output to highlight what you want.
Define macros like this:
#define CLEAR(...) NSLog(@"\n\n\n\n\n\n") /* enough \n to "clear" the console */
#define WTF(...) CLEAR();NSLog(@"!!!!!!!!!!!!!!");NSLog(__VA_ARGS__)
#define TRACE(__message__) NSLog(@">>>>>>>>>>>>>>> %@ <<<<<<<<<<<<<<<<<<<", __message__)
Then:
WTF(@"This shouldn't be here, object: %@", theObject);
...
TRACE(@"Start Encoding");
...
It's not what you want but it pretty much solves the problem. You'll end up with your own set of macros with custom prefixes easily scannable in the console output.

Perl IPC - FIFO and daemons & CPU Usage

I have a mail parser perl script which is called every time a mail arrives for a user (using .qmail). It extracts a calendar attachment out of the mail and places the "path" of the file in a FIFO queue implemented using the Directory::Queue module.
Another perl script which reads the path of the calendar attachment and performs certain file operations on the local system as well as on the remote CalDAV server, is being run as a daemon, as explained here. So basically this script looks like:
my $declarations;

sub foo {
    .
    .
}

sub bar {
    .
    .
}

while ($keep_running) {
    for (keep-checking-the-queue-for-new-entries) {
        sub caldav_logic1 {
            .
            .
        }
        sub caldav_logic2 {
            .
            .
        }
    }
}
I am using Proc::Daemon to run the script as a daemon. Now the problem is that this process has almost 100% CPU usage. What are the suggested ways to implement the daemon in a more standard, safer way? I am using pretty much the same code as in the Proc::Daemon usage example linked above.
I bet it is your for loop and checking for new queue entries.
There are ways to watch a directory for file changes. These ways are OS dependent but there might be a Perl module that wraps them up for you. Use that instead of busy looping. Even with a sleep delay, the looping is inefficient when you can have your program told exactly when to wake up by an OS event.
File::ChangeNotify looks promising.
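A minimal sketch along those lines, following File::ChangeNotify's synopsis; the queue directory and the .ics filter are assumptions for illustration:

use strict;
use warnings;
use File::ChangeNotify;

my $watcher = File::ChangeNotify->instantiate_watcher(
    directories => ['/var/spool/calendar-queue'],
    filter      => qr/\.ics$/,
);

# wait_for_events blocks until something changes, so the daemon sleeps
# instead of spinning the CPU.
while (my @events = $watcher->wait_for_events) {
    for my $event (@events) {
        process_attachment($event->path)    # hypothetical handler
            if $event->type eq 'create';
    }
}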
Maybe you don't want truly continuous polling. Is keep-checking-the-queue-for-new-entries a CPU-intensive part of the code, even when the queue is empty? That would explain why your processor is always busy.
Try putting a sleep 1 statement at the very top (or very bottom) of the while loop to let the processor rest between queue checks. If that doesn't degrade the program performance too much (i.e., if everyone can tolerate waiting an extra second before the company calendars get updated) and if the CPU usage still seems high, try sleep 2, sleep 5, etc.
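For illustration, the same polling loop with the delay added; check_queue() and process_entry() are stand-ins for the real code:

while ($keep_running) {
    while (my $entry = check_queue()) {    # check_queue() stands in for the queue read
        process_entry($entry);
    }
    sleep 1;    # yield the CPU between queue checks instead of spinning
}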
cpan Linux::Inotify2
The kernel knows when files change and sends this information to your program, which then runs the sub. This may be better because the program will run the sub only when a file is changed.
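A minimal sketch following Linux::Inotify2's synopsis; the queue directory and handle_new_entry() are assumptions:

use strict;
use warnings;
use Linux::Inotify2;

my $inotify = Linux::Inotify2->new
    or die "unable to create inotify object: $!";

# Wake up only when a file is created in (or moved into) the queue dir.
$inotify->watch('/var/spool/calendar-queue', IN_CREATE | IN_MOVED_TO, sub {
    my $event = shift;
    handle_new_entry($event->fullname);    # hypothetical handler
});

1 while $inotify->poll;    # block for events and dispatch the callbacks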

How can I validate an image file in Perl?

How would I validate that a JPG file is a valid image file? We are having files written to a directory over FTP, but we seem to be picking up the file before it has finished being written, creating invalid images. I need to be able to identify when it is no longer being written to. Any ideas?
Easiest way might just be to write the file to a temporary directory and then move it to the real directory after the write is finished.
Or you could check here.
JPEG::Error
[arguments: none] If the file reference remains undefined after a call to new, the file is to be considered not parseable by this module, and one should issue some error message and go to another file. An error message explaining the reason of the failure can be retrieved with the Error method:
EDIT:
Image::TestJPG might be even better.
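If you only need a cheap completeness check rather than a full parse, here is a pure-Perl heuristic (my addition, not from either module): a complete JPEG ends with the end-of-image marker 0xFF 0xD9, so a file still being uploaded will usually fail this test.

use strict;
use warnings;

sub jpeg_looks_complete {
    my ($path) = @_;
    open my $fh, '<:raw', $path or return 0;
    seek $fh, -2, 2 or return 0;    # seek to the last two bytes (2 = SEEK_END)
    read $fh, my $tail, 2;
    return defined $tail && $tail eq "\xFF\xD9";
}

print jpeg_looks_complete('photo.jpg') ? "complete\n" : "truncated?\n";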
You're solving the wrong problem, I think.
What you should be doing is figuring out how to tell when whatever FTPd you're using is done writing the file - that way when you come to have the same problem for (say) GIFs, DOCs or MPEGs, you don't have to fix it again.
Precisely how you do that depends rather crucially on what FTPd on what OS you're running. Some do, I believe, have hooks you can set to trigger when an upload's done.
If you can run your own FTPd, Net::FTPServer or POE::Component::Server::FTP are customizable to do the right thing.
In the absence of that:
1) try tailing the logs with a Perl script that looks for 'upload complete' messages
2) use something like lsof or fuser to check whether anything is locking a file before you try to copy it (see the sketch below).
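For 2), a small sketch shelling out to fuser(1) on Linux (assuming the psmisc version, whose -s flag suppresses output); a zero exit status means some process still has the file open:

use strict;
use warnings;

sub file_in_use {
    my ($path) = @_;
    # fuser exits 0 if any process is accessing the file.
    return system('fuser', '-s', $path) == 0;
}

# e.g. skip files the FTP daemon is still writing:
# next if file_in_use("$path/$file");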
Again looking at the FTP issue rather than the JPG issue.
I check the timestamp on the file to make sure it hasn't been modified in the last X (5) minutes; that way I can be reasonably sure they've finished uploading.
# time in seconds that the file was last modified
my $last_modified = (stat("$path/$file"))[9];

# get the current time in seconds since the epoch (i.e. 1970)
my $epoch_time = time();

# ensure the file hasn't been modified during the last 5 mins, i.e. still uploading
unless ( $last_modified >= ($epoch_time - 300) ) {
    # move / edit or whatever
}
I had something similar come up once, more or less what I did was:
# Poll the file size (-s) until it stops changing, then assume the upload is done.
my $oldImageSize = 0;
my $currentImageSize;
while (($currentImageSize = -s $imageFile) != $oldImageSize) {
    $oldImageSize = $currentImageSize;
    sleep 10;
}
processImage($imageFile);
Have the FTP process set the readonly flag, then only work with files that have the readonly flag set.