I have an RFXCOM transceiver for 433 MHz signals. I managed to put together a program that can transmit signals without a problem (and, for example, turn on a lamp). However, I also want to be able to receive signals from my remote control. A bit of googling gave me this working code:
use strict;
use warnings;
use Device::SerialPort;

my $PortObj = Device::SerialPort->new("/dev/ttyUSB1");
$PortObj->user_msg(1);        # report module messages
$PortObj->databits(8);
$PortObj->baudrate(38400);
$PortObj->parity("none");
$PortObj->stopbits(1);
$PortObj->handshake("rts");

my $STALL_DEFAULT = 10;       # how many seconds to wait for new input
my $timeout = $STALL_DEFAULT;

$PortObj->read_char_time(0);     # don't wait for each character
$PortObj->read_const_time(1000); # 1 second per unfulfilled "read" call

my $chars  = 0;
my $buffer = "";
while ($timeout > 0) {
    my ($count, $saw) = $PortObj->read(1);   # read up to 1 character per call
    if ($count > 0) {
        $chars  += $count;
        $buffer .= $saw;
        print $saw;
        # Check here to see if what we want is in the $buffer
        # say "last" if we find it
    }
    else {
        $timeout--;
    }
}

if ($timeout == 0) {
    die "Waited $STALL_DEFAULT seconds and never saw what I wanted\n";
}
One thing I can't figure out: this script gives me the output only after about 10 seconds, but I want to see the received data instantly. Any idea what I need to change? I don't think it has to do with the timeout part, since that just seems to measure the time since the last received signal. Any ideas?
Suffering from buffering? Set
$| = 1;
at the top of your script.
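Applied to your script, the top would look something like this (everything after that stays the same):

use strict;
use warnings;
use Device::SerialPort;

$| = 1;   # autoflush STDOUT so each received character is printed immediately

my $PortObj = Device::SerialPort->new("/dev/ttyUSB1");
# ... rest of the setup and the read loop unchanged ...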
I have Perl v5.8.4, I cannot install any lib/module, and I need to use the vanilla version.
I have a Perl script that sends HTTP requests to a webserver. I'm trying to code a function to print how many requests I'm sending per second and per minute to the webserver. The idea is to print once per second and then once per minute.
I was thinking of something like the logic below:
# First I get the time the script started
$time = the time the script started

# Then, for each request, I increase $req (for sec) and $reqmin (for minute)
for each request, $req++ and $reqmin++

# When one second has passed, I will print the number of requests I sent, and then
# set $req back to 0 so I can reuse the var for the next second
if $time passed 1 sec, print $req (I think this may give me the TPS)
$req = 0

# Same as above, but for minutes
if $time passed 60 sec, print $reqmin
$reqmin = 0
The above is not Perl code, but an explanation of what I'm trying to achieve. I'm not trying to get the runtime, control the traffic, or do any benchmarking. I'm just trying to find out how many requests I'm sending per second and per minute.
I'm not sure if the logic explained above is the correct path to follow to calculate the TPS (transactions per second) in my code.
The other problem I have is that I'm not sure how to measure elapsed time in Perl. I need to know that 1 second has passed since the first run to print the requests per second, and the same for 1 minute. I believe I should use Perl's time, but I'm not sure.
I've prepared an example for you. Your algorithm is pretty sound. Here's an implementation that does the seconds only. You should be able to go from there.
It uses Time::HiRes, which is included with your old Perl. We need usleep only to simulate the requests. The tv_interval function gets the delta between two microsecond-precision times, and gettimeofday grabs the current microsecond-precision time.
use strict;
use warnings;
use Time::HiRes qw(tv_interval usleep gettimeofday);

$|++; # disable output buffering

my $req_per_second     = 0;              # count requests per second
my $sum_towards_second = 0;              # sum intervals towards one full second
my $last_timeofday     = [gettimeofday]; # start of each interval

foreach my $i ( 1 .. 10000 ) {
    do_request();

    my $new_timeofday = [gettimeofday];                           # end for delta
    my $elapsed = tv_interval( $last_timeofday, $new_timeofday ); # get the delta
    $last_timeofday = $new_timeofday;    # new time is old time for the next round

    $sum_towards_second += $elapsed;     # add elapsed time to go towards one second
    $req_per_second++;                   # we did one request within the current second

    # when we arrive at a full second we reset
    if ( $sum_towards_second > 1.0 ) {
        printf "approximately %d req/s\n", $req_per_second;
        $sum_towards_second = $req_per_second = 0;
    }
}

sub do_request {
    usleep rand 1000;   # emulate the request
}
This algorithm is close to your idea, and also close to what I sketched out in my comment. In every iteration we start with doing the request, then take the current timestamp. We calculate the delta to the last timestamp and add it to a counter. If that counter reaches 1, we print the number of requests we've done in that second. Then we can reset both the time counter and the request counter.
The output looks like this.
approximately 1785 req/s
approximately 1761 req/s
approximately 1759 req/s
approximately 1699 req/s
approximately 1757 req/s
I'll leave counting minutes as an exercise to the reader.
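For a rough idea of the shape, the minute counter could slot into the same loop like this (the variable names are my own, and this is only a fragment of the loop above):

my $req_per_minute     = 0;   # count requests per minute
my $sum_towards_minute = 0;   # sum intervals towards one full minute

# inside the foreach loop, right after $sum_towards_second is updated:
$sum_towards_minute += $elapsed;
$req_per_minute++;
if ( $sum_towards_minute > 60.0 ) {
    printf "approximately %d req/min\n", $req_per_minute;
    $sum_towards_minute = $req_per_minute = 0;
}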
Currently it reads a random line from text.txt and displays it in a channel:
on *:TEXT:!command:#channel:{
  /msg $chan $read(text.txt)
}
I don't understand how to make it auto-execute at x-minute intervals, without using the !command.
I'm a beginner at this; I want to make it like a /timer, but have it read a random line from the text file every time.
It's been a while since I last worked with mIRC, so I had to look up the documentation on /timer, but you should be able to do something like this:
on *:TEXT:!command:#channel:{
  /timer 0 60 /msg $chan $!read(<textfile>)
}
This will execute /msg $chan $!read(<textfile>) an infinite number of times at 60 second intervals once !command has been entered into a channel.
If you need to cancel the timer for some reason, you would need to name the timer, which can be done by appending a name to the command, such as /timerMESSAGE or /timer1, and then including a command to turn the timer off, such as:
on *:TEXT:!timeroff:#channel:{
  /timer<name> off
}
replacing <name> with the name of your timer.
EDIT: Thanks to Patrickdev for pointing out the difference of $!read() versus $read() for timer commands.
I suggest you use this approach instead. If you disconnect from the network for whatever reason (ping timeout, broken pipe, connection reset by peer, netsplit), the timer above won't stop. The more reliable way is to use an on join event:
on me:*:join:#channel:{
  .timerrepeat 0 60 msg $chan $read(text.txt)
}

on me:*:part:#channel:{
  .timerrepeat off
}

on *:disconnect:{
  .timerrepeat off
}
This script will only trigger when you join #channel. Replace #channel with the channel you want.
I have a Perl program that reads the packets of a flow from a pcap file, but it takes a lot of time. I want to make it parallel, but I don't know whether that's possible. If it is, can I do it with MPI? And another question: what is the best way to parallelize this code? Here is the relevant piece (I think this is the part I should work on, but I don't know the best way):
while (!eof($inFileH))
{
    # $inFileH is the handle of the pcap file;
    # each iteration of the while loop reads one packet
    $ts_sec   = readBytes($inFileH, 4);
    $ts_usec  = readBytes($inFileH, 4);
    $incl_len = readBytes($inFileH, 4);
    $orig_len = readBytes($inFileH, 4);

    if ($totalLen == 0)   # it is the 1st packet
    {
        $startTime = $ts_sec + $ts_usec/1000000;
    }
    $timeStamp = $ts_sec + $ts_usec/1000000 - $startTime;
    $totalLen += $orig_len;

    $#packet = -1;   # re-init the array
    for (my $i = 0; $i < $incl_len; $i++)   # read all included octets of the current packet
    {
        read $inFileH, $packet[$i], 1;
        $packet[$i] = ord($packet[$i]);
    }

    # and after that I will work on the "packet" and analyze it
    # ...
}
So how should I send the file content to other processes so that they can work on it in parallel?
First you need to determine the bottleneck. If it is really CPU usage (i.e. CPU usage is at 100% while you are running the script), you need to figure out where the processing spends its time.
This may well be in the way that you are parsing the input. There may be obvious ways to speed this up. For instance, if you use complex regular expressions, and focus exclusively on matching input correctly, there may be ways to make the matching a lot faster by rewriting the expressions or doing simpler matches before trying more complex ones.
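If it isn't obvious from the code where the time goes, a profiler will tell you; assuming you can install Devel::NYTProf (the script name below is just a placeholder):

perl -d:NYTProf read_flow.pl   # run the script under the profiler
nytprofhtml                    # turn the resulting nytprof.out into an HTML report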
If you can't reduce CPU usage far enough in this way, and you really want to parallelize, see if you can employ the mechanism with which Perl was born: Unix pipes. You can write Perl scripts that pass data through to each other in a pipeline, or you can do the creation of the processes and pipes within Perl itself (see perlopentut, and if that isn't enough, perlipc).
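As a rough illustration of that pipe mechanism (this is a sketch, not your parser; next_packet() is a stand-in for your existing pcap-reading code):

use strict;
use warnings;

# One process reads and parses the pcap file, a second process does the
# per-packet analysis; they are connected by a pipe.
my $pid = open(my $to_analyzer, '|-');   # fork: parent writes, child reads STDIN
die "Cannot fork: $!" unless defined $pid;

if ($pid == 0) {
    # child: the analyzer
    while (my $line = <STDIN>) {
        chomp $line;
        my @packet = map { hex } unpack '(A2)*', $line;   # back to octet values
        # ... analyze @packet here ...
    }
    exit 0;
}

# parent: the parser streams each packet to the analyzer as one hex string per line
while (defined(my $raw = next_packet())) {
    print {$to_analyzer} unpack('H*', $raw), "\n";
}
close $to_analyzer or warn "Analyzer exited with status $?";

sub next_packet { return undef }   # placeholder: your pcap-reading loop goes here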
As a general rule, I would consider these options first before trying other mechanisms, but it really depends on the details of what you're trying to do and the context in which you need to do it.
I have the following auto-responder on my bot
on *:TEXT:*sparky*:*: { msg # $read(scripts/name-responses.txt) }
on *:ACTION:*sparky*:*: { msg # $read(scripts/name-responses.txt) }
I wanted to know how I can write code, I'm guessing with an IF statement, so that if a user types sparky more than twice, the user gets ignored for 120 seconds. This way, my bot doesn't flood the chat due to the auto-responder feature.
Any help would be appreciated!
I would recommend keeping track of all users that have used the command, and when they have last used it. This can easily be done by saving all data in an INI file.
You can save this information by using the writeini command. To write the data to this file, use something along the lines of the following:
writeini sparky.ini usage $nick $ctime
$ctime will evaluate to the number of seconds elapsed since 1970/01/01. This is generally the way to compare times of events.
Once a user triggers your script again, you can read the value from this INI file and compare it to the current time. If the difference between the times is less than 10 seconds (for example), it can send the command and then ignore them for 120 seconds. You would read the value of their last usage using:
$readini(sparky.ini, n, usage, $nick)
Your final script could look something like the following. I've moved the functionality to a separate alias (triggerSparky) to avoid duplicating code in the on TEXT and on ACTION event listeners.
on *:TEXT:*sparky*:#: {
  triggerSparky
}

on *:ACTION:*sparky*:#: {
  triggerSparky
}

alias triggerSparky {
  ; Send the message
  msg $chan $read(scripts/name-responses.txt, n)

  if ($calc($ctime - $readini(sparky.ini, n, usage, $nick)) < 10) {
    ; This user triggered the script recently (within 10 seconds); ignore him for 120 seconds
    ignore -u120 $nick
    remini sparky.ini usage $nick
  }
  else {
    writeini sparky.ini usage $nick $ctime
  }
}
Of course, a slightly easier way to achieve a similar result is by simply ignoring them for a predefined time without saving their data in an INI file. This would stop you from checking whether they have triggered twice recently, but it would be a good way to only allow them to trigger it once per two minutes, for example.
I have a quite simple Perl script that, in one function, does the following:
if ( legato_is_up() ) {
    write_log("INFO: Legato is up and running. Continue the installation.");
    $wait_minutes = $WAITPERIOD + 1;
    $legato_up    = 1;
}
else {
    my $towait = $WAITPERIOD - $wait_minutes;
    write_log("INFO: Legato is not up yet. Waiting for another $towait minutes...");
    sleep 30;
    $wait_minutes = $wait_minutes + 0.5;
}
For some reason, sometimes (about 1 in 3 runs) the script gets killed. I don't know who is responsible for the kill; I just know it happens during the "sleep" call.
Can anyone give me a hint here? After the script is killed, its job is not done, which is a big problem.
Without knowing what else is running on your system, it's anybody's guess. You could add a signal handler, but all it would tell you is which signal it was (and when), not who sent it:
foreach my $signal (qw(INT PIPE HUP))
{
    my $old_handler = $SIG{$signal};
    $SIG{$signal} = sub {
        print time, ": ", $signal, " received!\n";
        $old_handler->(@_) if $old_handler;
    };
}
You also may want to consider adding a WARN and DIE handler, if you are not logging output from stderr.
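For example, something along these lines (the log file path and format are only an illustration):

use strict;
use warnings;
use IO::Handle;

# Minimal sketch: append warnings and fatal errors to a log file so they
# are not lost when the script runs unattended.
open(my $log, '>>', '/tmp/legato_check.log') or die "Cannot open log: $!";
$log->autoflush(1);

$SIG{__WARN__} = sub { print {$log} scalar(localtime), " WARN: ", @_ };
$SIG{__DIE__}  = sub { print {$log} scalar(localtime), " DIE: ",  @_; die @_ };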
Under Linux at least, you can see who sent a signal (if it's an external process that used kill(2)) by looking at the siginfo struct (particularly si_pid) passed to a signal handler. I don't know how to see that from Perl, however, but in your case you could strace (or similar on non-Linux platforms) your script and see it that way, e.g. strace -p <pid of your perl script>. You should see something like:
--- SIGTERM {si_signo=SIGTERM, si_code=SI_USER, si_pid=89165, si_uid=1000} ---
just before your untimely death.
(a few years late for the OP I know...)