Getting and setting the priority value for a parent process and child process in Perl

I have a Perl script which will create a child process. I need to get the priority (nice) value for these two processes (parent and child).
I can get the PID of both the parent and the child process as below:
$parentPID = $$;
$childPID = fork();
How can I get the priority values for these processes in a Perl script?

Check getpriority(): its first parameter is the "which" type, which should be PRIO_PROCESS for a per-process lookup (you can import this constant from BSD::Resource, or just use zero instead).
Reading the current process's priority, then setting a new one:
nice -7 perl -E'say getpriority(0,$$); setpriority(0,$$,9); say getpriority(0,$$)'
output
7
9

Use the Forks::Super CPAN module. Example:
$pid = fork { os_priority => 10 }; # like nice(1) on Un*x
If you don't want to use a CPAN module, Perl's built-in setpriority function sets the priority for a process, a process group, or a user.
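Building on both answers, here is a minimal core-Perl sketch (no CPAN modules, using 0 in place of the PRIO_PROCESS constant as noted above) that reads both priorities and then renices the child; the priority value 10 is just an example:
use strict;
use warnings;

my $parentPID = $$;
my $childPID  = fork();
die "fork failed: $!" unless defined $childPID;

if ($childPID == 0) {
    # child: report its own priority, then exit
    print "child $$ priority: ", getpriority(0, $$), "\n";
    exit 0;
}

# parent: read both priorities, then renice the child
print "parent $parentPID priority: ", getpriority(0, $parentPID), "\n";
print "child  $childPID priority: ",  getpriority(0, $childPID), "\n";
setpriority(0, $childPID, 10);   # give the child a nicer (lower) priority
waitpid($childPID, 0);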


"Insecure dependency error while running with -T switch" using cicindela2

I am applying the cicindela2 recommendation engine.
It uses Apache mod_perl and the Perl DBI module.
Here is the rough flow of how it works:
1. Data is input by the Record Handler.
2. Data is passed through the filter chain for batch processing.
3. Temporary tables are output from batch processing.
4. The recommendation result is requested by accessing the Recommend Handler, which triggers the action of the Recommender.
I configured an aggregation and ran the project's batch script. I know that the batch processing succeeded because I saw its output in the DB. But when I tried to access the recommendation result via the URL that triggers the Recommend Handler, I got a blank white page and the log said:
FATAL: Insecure dependency in parameter 1 of DBIx::ContextualFetch::db=HASH(0x7f2a76169e78)->prepare_cached method call while running with -T switch at /usr/local/share/perl5/Ima/DBI.pm line 398.
This is where the error was thrown from, in the Ima::DBI base module (/usr/local/share/perl5/Ima/DBI.pm):
sub _mk_sql_closure {
    my ($class, $sql_name, $statement, $db_meth, $cache) = @_;

    return sub {
        my $class = shift;
        my $dbh   = $class->$db_meth();

        # Everything must pass through sprintf, even if @_ is empty.
        # This is to do proper '%%' translation.
        my $sql = $class->transform_sql($statement => @_);
        return $cache                      # Line 398
            ? $dbh->prepare_cached($sql)
            : $dbh->prepare($sql);
    };
}
It seems that the SQL query prepared by the program is insecure, right?
What is the reason for this error?
Is it related to DBI's cache management?
Would it be solved if I cleared the cache regularly?
Also, I tried to log the SQL statement that was generated, but the output failed even when I placed something like $LOGGER->warn("123") in the handler subroutine of the Recommend Handler.
Why did the logging fail, and how do I log it correctly?
Insecure dependency... while running with -T switch is Perl's way of telling you that you're running with taint mode active and attempting to do something with tainted data which could be potentially unsafe. In this particular case, $sql is tainted, because some or all of its content came from sources external to the program - probably user input, although it could also have been read from a file.
To fix this, you need to think about where $sql came from, so that you can work out the appropriate way to clean it up.
In the most likely scenario, you've asked a user to supply search terms and then inserted those terms directly into your SQL string. This is a bad idea in general, as it opens you up to the possibility of SQL injection attacks. (Obligatory Bobby Tables link.) Revise your SQL handling to make use of SQL placeholders instead of inserting user input into the WHERE clause and this vulnerability should go away.
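For example, a hypothetical query using a placeholder (the table and column names are made up for illustration, not taken from cicindela2) looks like this:
# The user-supplied value is passed to execute() instead of being
# interpolated into the SQL text, so it never becomes part of the
# statement string that prepare_cached() sees.
my $sth = $dbh->prepare_cached(
    'SELECT item_id, score FROM recommendations WHERE user_id = ?'
);
$sth->execute($user_id);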
If tainted data is making its way into $sql in some other way, you need to clean up the tainted data by using a regular expression to validate it and capture the validated data, then assign the captured data to your variable. e.g.,
my $tainted = <STDIN>;
$tainted =~ /([A-Z]*)/; # Only allow uppercase characters
my $clean = $1; # No longer tainted because it came from $1
If you need to take this route, DO NOT use .* as your regex to untaint the data without serious, serious consideration, because, if you just blindly accept any and all data, you will be discarding any and all benefit provided by taint mode.
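A slightly more defensive variant of the untainting example above checks that the match actually succeeded before trusting the capture:
my $tainted = <STDIN>;
chomp $tainted;
my ($clean) = $tainted =~ /\A([A-Z]+)\z/
    or die "Input contained characters outside the allowed set\n";
# $clean is untainted here because it came from a capture group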

Perl generate random session id for high traffic

I am trying to generate a random ID for the session cookie for every user session in Perl. Of course I searched CPAN and Google and found many similar topics with the same weakness. The modules most commonly used are Digest::SHA and Data::UUID, plus Data::GUID, which internally uses Data::UUID.
Here is code that summarizes the most common approaches used by modules on CPAN:
#!/usr/bin/perl
use v5.10;
use Digest::SHA;
use Data::UUID;
use Data::GUID;   # uses Data::UUID internally, so no need for it
use Time::HiRes ();

for (1..10) {
    #say generate_sha(1); # 1 = 40 bytes, 256 = 64 bytes, 512 = 128 bytes, 512224, 512256
    say generate_uuid();
    #say generate_guid();
}

sub generate_sha {
    my ($bits) = @_;
    # SHA-1/224/256/384/512
    return Digest::SHA->new($bits)
                      ->add($$, +{}, Time::HiRes::time(), rand(Time::HiRes::time()))
                      ->hexdigest;
}

sub generate_uuid {
    return Data::UUID->new->create_hex();   # create_str, create_b64
}

sub generate_guid {
    # uses Data::UUID internally
    return Data::GUID->guid;
}
Here is sample output from the Data::UUID module:
0x0217C34C6C0710149FE4C7FBB6FA663B
0x0218665F6C0710149FE4C7FBB6FA663B
0x0218781A6C0710149FE4C7FBB6FA663B
0x021889316C0710149FE4C7FBB6FA663B
0x021899E16C0710149FE4C7FBB6FA663B
0x0218AB2B6C0710149FE4C7FBB6FA663B
0x0218BB1D6C0710149FE4C7FBB6FA663B
0x0218CABD6C0710149FE4C7FBB6FA663B
0x0218DB786C0710149FE4C7FBB6FA663B
0x0218ED396C0710149FE4C7FBB6FA663B
The IDs generated by these seem to be unique, but what concerns me is high traffic or concurrency: say only 1,000 (never mind 1,000,000) users connect at the same time, either from the same process when running under FCGI (say each FCGI process serves only 10 users) or from separate processes when running under CGI.
In the SHA I used this random seed:
($$, +{}, Time::HiRes::time(), rand(Time::HiRes::time()))
It includes an anonymous hash reference address and the current time in microseconds from Time::HiRes::time. Are there other ways to make the string more random?
I have read suggestions to add the hostname and IP address of the remote user, but others point out that proxies could be in use.
I see the Plack::Session::State module uses this simple code to generate IDs:
Digest::SHA1::sha1_hex(rand() . $$ . {} . time)
So the question, in short: I want to generate a unique session ID, maybe up to 64 bytes long, that is guaranteed to work under high traffic.
You can safely use Data::UUID, and you shouldn't be concerned about duplicates; you will not encounter them.
Also, rand() will not return the same number when called repeatedly, even under the assumption that it is called at the same moment in time. A pseudo-random algorithm generates the next number based on its current state and the previously generated values. A true random generator is generally not used alone but in conjunction with a pseudo-random number generator. In either case, the likelihood of subsequently generated numbers repeating between the nearest clock ticks is, in practical terms, negligible. In your example you may want to use rand(2**32).
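For example, a minimal sketch applying that rand(2**32) suggestion to the generate_sha() ingredients from the question:
use Digest::SHA;
use Time::HiRes ();

sub generate_sha_id {
    my ($bits) = @_;   # e.g. 1, 256 or 512
    # same seed ingredients as generate_sha() above, with rand(2**32)
    # as the random component instead of rand(Time::HiRes::time())
    return Digest::SHA->new($bits)
                      ->add($$, +{}, Time::HiRes::time(), rand(2**32))
                      ->hexdigest;
}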

Handle tests using TAP::Harness: how to print output when a test exits

I have been trying the whole day to find the answer, but I haven't found anything.
I wrote some tests using Test::More (test1.t, test2.t, test3.t, ...), and I wrote a main Perl script (main.pl) that runs all the tests using TAP::Harness and prints the output in JUnit format using formatter_class => 'TAP::Formatter::JUnit'.
In my tests I use the BAIL_OUT function.
The problem is that when a test bails out, the main script also exits and there is no output at all. If, for example, test3.t bails out, I still need to see the results for test1.t and test2.t. How can I do that?
I can't use exit or die instead of BAIL_OUT because I don't want the other tests to continue. (If test3.t bailed out, I don't want test4.t to run.)
Can someone please help me?
I need to see the results for the tests that ran before the bailed-out test.
Thanks.
According to the Test::More docs:
BAIL_OUT($reason);
Indicates to the harness that things are going so badly all testing should terminate.
This includes the running of any additional test scripts.
So that explains why your suite aborts.
You might want to consider die_on_fail from Test::Most, or skip_all, depending on the reason for the BAIL_OUT.
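For instance, a skip_all sketch for a single .t file might look like this (the precondition and config path are made up for illustration):
use strict;
use warnings;
use Test::More;

# skip just this test file, rather than bailing out the whole suite
plan skip_all => 'test database not configured'
    unless -e '/etc/myapp/test-db.conf';   # hypothetical precondition

plan tests => 1;
ok(1, 'prerequisites are in place');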
EDIT: Looks like Test::Builder doesn't have any intention of printing out a summary when it exits on "catastrophic failure" according to the source code:
sub BAIL_OUT {
    my( $self, $reason ) = @_;

    $self->{Bailed_Out} = 1;
    $self->_print("Bail out! $reason");
    exit 255;
}

# Don't do an ending if we bailed out.
if( $self->{Bailed_Out} ) {
    $self->is_passing(0);
    return;
}
However, that Bailed_Out flag is only ever consulted when deciding whether to print the summary diagnostics, and since Test::More exposes the underlying Test::Builder object, you can probably just tweak the BAIL_OUT subroutine so it does not set this flag. All untested, of course; YMMV.
Instead of passing all the tests to one TAP::Harness, you need to pass one test at a time to the harness in case of a BAIL_OUT.
I haven't seen your code, so here is a sample of what I mean. Adjust to include the formatter and whatever else you need.
use TAP::Harness;

my $harness = TAP::Harness->new({ merge => 0 });
my $tests   = ['t/test1.t', 't/test2.t'];

foreach my $test (@$tests) {
    eval {
        $harness->runtests([$test]);
    };
    if ($@) {
        # create a new harness object if the previous one fails catastrophically
        $harness = TAP::Harness->new({ merge => 0 });
    }
}

Perl: log DB query errors into a log file

So I started to get familiar with Perl and wrote my first DB script.
Now I am trying to select data from a table which is huge, and trying to insert into a summary table based on some criteria.
There is a chance that the select query or the insert query may fail due to a timeout or other database issues beyond my control.
Eventually my script is going to be a cron script.
Can I log just the errors that I encounter for the connection, inserts, and selects into a file generated by the script?
$logfile = $path . "logs/$currdate.log";
Here is my code:
my $SQL_handled = "SELECT division_id, region_id, NVL(COUNT(*),0) FROM super_tab GROUP BY division_id, region_id;";
my $result_handled = $dbh->prepare($SQL_handled);
$result_handled->execute();

while (my ($division_id, $region_id, $count) = $result_handled->fetchrow_array()) {
    my $InsertHandled = "INSERT INTO summary_tab (date_hour, division_id, region_id, volume) VALUES ('$current',$division_id,$region_id,$market_id,'$service_type','$handled',$count);";
    my $result_insert_handled = $dbh->prepare($InsertHandled);
    $result_insert_handled->execute();
}
Something like:
if (DBI query failed) {
    # log the error to the above log path
}
It's usually done like this:
my $SQL_handled = "SELECT division_id, region_id, NVL(COUNT(*),0) FROM super_tab GROUP BY division_id, region_id;";
my $result_handled = $dbh->prepare($SQL_handled);
my $retval = $result_handled->execute();

if (!$retval) {
    # open a log file and write errors
    writelog();
    die "Error executing SQL SELECT - " . $dbh->errstr;
}

while (my ($division_id, $region_id, $count) = $result_handled->fetchrow_array()) {
    ....
}
---------------------------------
sub writelog {
    my $path = "/path/to/logfile";
    my ($sec, $min, $hour, $mday, $mon, $year, $wday, $yday, $isdst) = localtime(time);
    $year += 1900;
    $mon++;
    my $currdate = "$mon$mday$year";
    my $logfile  = $path . "/$currdate.log";
    open(OUT, ">>", $logfile) or warn "Could not open $logfile: $!";
    print OUT "There was an error encountered while executing SQL - " . $dbh->errstr . "\n";
    close(OUT);
}
You can also use $dbh->err, which returns the native Oracle error code, to trap the error and exit accordingly.
The basic exception handling above can be performed for every execute() method call in your script. Remember that DBI has AutoCommit set to 1 (enabled) by default unless it is explicitly disabled, so your transactions will be auto-committed per insert. To preserve the atomicity of the entire transaction, you can disable AutoCommit and use $dbh->commit and $dbh->rollback to control when you commit, or perhaps use custom commit points (for larger sets of data).
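For instance, a minimal sketch of that commit/rollback pattern (assuming $dbh was connected with AutoCommit => 0 and RaiseError => 1, and $insert_sth is a hypothetical prepared insert handle standing in for the one in the question):
eval {
    while (my ($division_id, $region_id, $count) = $result_handled->fetchrow_array()) {
        $insert_sth->execute($division_id, $region_id, $count);
    }
    $dbh->commit;     # everything succeeded: make it permanent
};
if ($@) {
    writelog();       # reuse the logging sub from above
    $dbh->rollback;   # undo the partial batch
}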
Alternatively, the following attributes can be set when connecting to the DB:
$dbh = DBI->connect("dbi:Oracle:abcdef", "username", "password", {
    PrintError => 0,   ### Don't report errors via warn( )
    RaiseError => 1    ### Do report errors via die( )
});
This would automatically report all errors via die. RaiseError is turned off by default.
Also, if I understand you correctly, by cron you mean you would be calling it from a shell cron job. In that case, the output of the call to your Perl script from the cron entry can itself be redirected to log files, something like below:
perl your_perl.pl >> out.log 2>> err.log
out.log will contain regular output and err.log will contain errors (including those thrown by DBI prepare() or execute() methods). In this case, you also need to make sure you use meaningful wording in print or die so that the logs are useful.
First, bear in mind that if you put an email address at the top of your crontab file any output from the cron job will be emailed to you:
MAILTO=me@mydomain.com
Second, if you set DBI's RaiseError to 1 when you connect, you do not need to check every call; DBI will raise an error whenever one happens.
Third, DBI has an error-handler callback (the HandleError attribute). You register a handler and it is called whenever an error occurs, with the error text, the handle in error, and so on. If you return false from the error handler, DBI behaves as it would without the handler and goes on to die or warn. As a result, it is easier to set RaiseError and create an error handler than to check every call as Annjawn suggested.
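A rough sketch of that callback (writelog() here is assumed to be a logging sub like the one shown earlier, taking the message as an argument):
my $dbh = DBI->connect("dbi:Oracle:abcdef", "username", "password", {
    PrintError  => 0,
    RaiseError  => 1,
    HandleError => sub {
        # DBI passes the error string, the handle that failed, and the
        # failed method's first return value
        my ($errstr, $handle, $retval) = @_;
        writelog("DBI error: $errstr");
        return 0;   # false: fall through to RaiseError's die
    },
});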
Lastly, if you don't want to do this yourself, you can use something like DBIx::Log4perl and simply ask for it to log errors and nothing else. Any errors will be written to your Log4perl file and they include the SQL being executed, parameters etc.

Can't get Extended SNMP output in Perl

I have written a Perl script to pull back some SNMP values, which works fine. I have now written a script on the remote server and used the SNMP extend functionality to publish that script's value over SNMP.
If I run:
snmpget -v2c -c public 10.0.0.10 'NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."cc_power"'
I get the result:
NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."cc_power" = STRING: 544
But when I try to use my Perl script to get the information back, it doesn't get it. Here is the script:
#!/usr/bin/perl
use strict;
use SNMP;
use RRDs;
my $rrd_db = "/storage/db/rrd/cc_power.rrd";
my $sess;
my $val;
my $error;
$sess = new SNMP::Session(DestHost => "10.0.0.10", Community => "public", Version => 2);
my $power = $sess->get('NET-SNMP-EXTEND-MIB::nsExtendOutput1Line.\"cc_power\"');
$error=RRDs::error;
die "ERROR while updating RRD: $error\n" if $error;
my $date=time;
print "Data Script has been run - Output: ${date}:${power}\n";
But nothing is returned, and I have no idea why... no errors or anything. Have I missed something stupid?
Hope someone can help as this is driving me nuts :)
I assume that you used Net-SNMP's snmpget. It hides a lot of details from you: it loads MIB documents in the background and nicely translates OIDs and SNMP values into all kinds of user-friendly formats.
So pay attention to what translation it performs and simulate that in your own code to achieve the same effect.
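As a rough, untested sketch of what that means with the Perl SNMP bindings: load the MIB yourself and check the session's error fields after the get. Whether the quoted "cc_power" index is accepted in this exact form depends on your net-snmp version; the essential points are the MIB loading and the ErrorNum/ErrorStr check.
use strict;
use warnings;
use SNMP;

SNMP::loadModules('NET-SNMP-EXTEND-MIB');   # the MIB file must be installed locally
SNMP::initMib();

my $sess = SNMP::Session->new(
    DestHost  => '10.0.0.10',
    Community => 'public',
    Version   => 2,
);

my $power = $sess->get('nsExtendOutput1Line."cc_power"');
if ($sess->{ErrorNum}) {
    die "SNMP get failed: $sess->{ErrorStr}\n";
}
print "power = $power\n";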