How can I create a TCP server daemon process in Perl?

I wish to create a TCP server daemon process in Perl.
Which is the best framework/module for it?
Is there anything that comes bundled with Perl?
Edit: Something that has start | stop | restart options would be great.
Edit: It has to be a multi-threaded server.

Well - it would be better if you could state what this daemon is supposed to do, as there are specialized frameworks/libraries for various tasks.
For the simplest daemon, one that does nothing but exist, you can easily do this:
#!/usr/bin/perl
use strict;
use warnings;
use Carp;
use POSIX qw( setsid );

daemonize();
do_your_daemon_stuff();
exit;

sub daemonize {
    chdir '/'                 or croak "Can't chdir to /: $!";
    open STDIN, '/dev/null'   or croak "Can't read /dev/null: $!";
    open STDOUT, '>/dev/null' or croak "Can't write to /dev/null: $!";
    defined( my $pid = fork ) or croak "Can't fork: $!";
    exit if $pid;
    setsid                    or croak "Can't start a new session: $!";
    open STDERR, '>&STDOUT'   or croak "Can't dup stdout: $!";
}
sub daemonize() was lifted from perldoc perlipc (with a minor change).
That's all - the code now properly daemonizes, and can do anything you want.
I just read your edit saying that you want a TCP server.
OK. Here is some simplistic code:
#!/usr/bin/perl
use strict;
use warnings;
use Carp;
use POSIX qw( setsid );
use IO::Socket;

# Create the listening socket before daemonizing, so setup errors
# are still reported on the terminal.
my $server_socket = get_server_socket();
daemonize();
handle_connections( $server_socket );
exit;

sub daemonize {
    chdir '/'                 or croak "Can't chdir to /: $!";
    open STDIN, '/dev/null'   or croak "Can't read /dev/null: $!";
    open STDOUT, '>/dev/null' or croak "Can't write to /dev/null: $!";
    defined( my $pid = fork ) or croak "Can't fork: $!";
    exit if $pid;
    setsid                    or croak "Can't start a new session: $!";
    open STDERR, '>&STDOUT'   or croak "Can't dup stdout: $!";
}

sub get_server_socket {
    my $server = IO::Socket::INET->new(
        'Proto'     => 'tcp',
        'LocalPort' => 31236,
        'Listen'    => SOMAXCONN,
        'Reuse'     => 1,
    );
    die "can't setup server" unless $server;
    return $server;
}

sub handle_connections {
    my $socket  = shift;
    my $handled = 0;
    while ( my $client = $socket->accept() ) {
        $handled++;
        print $client "Hi, you're client #$handled\n";
        chomp( my $input = <$client> );
        my $output = reverse $input;
        print $client $output, "\n";
        print $client "Bye, bye.\n";
        close $client;
    }
    return;
}
Just remember that this is a blocking TCP server, so it can handle only one connection at a time. If you want more than one, it becomes more complex, and you have to ask yourself whether you prefer multi-threading (or multi-processing) or a single-process, event-based server.
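For example, a fork-per-connection variant of handle_connections above might look like this (an untested sketch; it reuses the listening socket from the previous listing):

sub handle_connections {
    my $socket = shift;
    local $SIG{CHLD} = 'IGNORE';    # let the kernel reap finished children
    while ( my $client = $socket->accept() ) {
        my $pid = fork;
        die "Can't fork: $!" unless defined $pid;
        if ( $pid == 0 ) {
            # child: serve this one client, then exit
            print $client "Hi\n";
            chomp( my $input = <$client> );
            print $client scalar( reverse $input ), "\n";
            close $client;
            exit 0;
        }
        close $client;    # parent: the child owns this connection now
    }
    return;
}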

You do not actually want to write multithreaded Perl. Perl threads are broken - they do not work properly (in my opinion).
Creating a new Perl thread clones the entire interpreter, including all the data currently in scope - it is therefore basically WORSE than creating a new process (which would, of course, use copy-on-write) and less useful.
So you definitely don't want it multithreaded.

If possible, I would consider looking at something like AnyEvent as an alternative to a pure threaded approach.
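For instance, the reversing server from the earlier answer might be sketched with AnyEvent like this (untested; the port 31236 is carried over from that example):

use AnyEvent;
use AnyEvent::Socket;
use AnyEvent::Handle;

tcp_server undef, 31236, sub {
    my ( $fh, $host, $port ) = @_;
    my $handle;
    $handle = AnyEvent::Handle->new(
        fh       => $fh,
        on_error => sub { $handle->destroy },
    );
    $handle->push_write("Hi\n");
    $handle->push_read( line => sub {
        my ( $h, $line ) = @_;
        $h->push_write( scalar( reverse $line ) . "\n" );
        $h->on_drain( sub { $h->destroy } );    # close once the reply is flushed
    } );
};

AnyEvent->condvar->recv;    # run the event loop forever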

I've never had occasion to try it myself, but I believe POE is highly regarded for that sort of thing.
Here are some examples of TCP servers written with POE.
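To give a taste of the style, a minimal server with POE::Component::Server::TCP might be sketched like this (untested; the port is carried over from the earlier example):

use POE qw(Component::Server::TCP);

POE::Component::Server::TCP->new(
    Port            => 31236,
    ClientConnected => sub {
        $_[HEAP]{client}->put("Hi");
    },
    ClientInput => sub {
        my $input = $_[ARG0];
        $_[HEAP]{client}->put( scalar reverse $input );
    },
);

POE::Kernel->run;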

A quick search reveals quite a few possibilities. Daemon::Generic seems to be straightforward to use.
In addition, there are many server modules for various protocols. For example, HTTP::Daemon (part of libwww-perl) has been around for a while now.
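For instance, a minimal HTTP server with HTTP::Daemon might look like this (an untested sketch; the port is arbitrary):

use strict;
use warnings;
use HTTP::Daemon;
use HTTP::Response;

my $d = HTTP::Daemon->new( LocalPort => 8080 ) or die "Can't listen: $!";
print "Listening at ", $d->url, "\n";

while ( my $conn = $d->accept ) {
    while ( my $req = $conn->get_request ) {
        # Answer every request with a fixed body.
        $conn->send_response(
            HTTP::Response->new( 200, 'OK', undef, "hello\n" )
        );
    }
    $conn->close;
}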

"Something that has start | stop | restart options would be great"
There are modules on CPAN which will provide this. E.g., I can see Daemon::Generic which claims to be a framework to provide start/stop/reload for a daemon.
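A skeleton using it might look roughly like this (an untested sketch; the progname and pidfile values are made up):

use strict;
use warnings;
use Daemon::Generic;

# newdaemon() parses start/stop/restart from @ARGV and dispatches.
newdaemon(
    progname => 'mydaemon',                # hypothetical name
    pidfile  => '/var/run/mydaemon.pid',   # hypothetical path
);

sub gd_preconfig {
    return ();    # no config file handling in this sketch
}

sub gd_run {
    # The daemon's main loop goes here.
    while (1) {
        sleep 60;
    }
}

You would then invoke it as mydaemon start, mydaemon stop, mydaemon restart, and so on.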
/I3az/

Related

How can I check which method of checking a port state is more efficient?

I'm writing a script to schedule port checks on some of my servers and report the state. I am planning to go with one of two options:
Pipe the output of netcat (nc) into a Perl variable, and check the return state:
my $rcode = `nc -z 6.6.6.6 80; echo \$?`;  # note: \$? must be escaped, or Perl interpolates its own $? before the shell runs
Use the Perl module IO::Socket::PortState to do the same thing:
use IO::Socket::PortState qw(check_ports);
my %porthash = ( ... );
check_ports( $host, $timeout, \%porthash );
print "$proto $_ is not open ($porthash{$proto}{$_}{name})\n"
    if !$porthash{$proto}{$_}{open};
I'd like to go with the more efficient (shorter time, more specific) route.
Is there a rule of thumb here? Is piping the output of a system command/Unix utility more or less efficient than using a Perl module? Or do different Perl modules differ, so that the only way to find out is to run multiple iterations of these checks and compare the time taken against a system call to the Unix utility?
This isn't really an answer to your question so much as a suggested implementation.
Just a quick run at an untested example of using Socket.pm.
This is probably the fastest Perl implementation. But at the point you are using Socket.pm, you may as well just write it in C with socket.h; they are a near 1:1 mapping.
#!/usr/bin/perl
use strict;
use warnings;
use Socket;

my ( $remote, $port ) = @ARGV;

print ":: Attempting to connect to - $remote.\n";

my $iaddr = inet_aton($remote)           or die "Error: $!";
my $paddr = sockaddr_in( $port, $iaddr ) or die "Error: $!";
my $proto = getprotobyname('tcp')        or die "Error: $!";

socket( my $sock, PF_INET, SOCK_STREAM, $proto ) or die "Error: $!";
connect( $sock, $paddr )                         or die "Error: $!";
print ":: Connected Successfully!\n";
close $sock;

Unable to get Perl's flock working

I can't seem to make Perl's flock work.
I'm locking a file and checking the return value to make sure it's actually locked, yet I'm still able to open and write to it like nothing is the matter.
Here is how I lock the file:
#!/usr/bin/perl -w
use strict;
use Fcntl ':flock';
$| = 1;
my $f = $ARGV[0];
open( my $fh, '>>', $f ) or die "Could not open '$f' - $!";
print "locking '$f'...";
flock($fh, LOCK_EX) or die "Could not lock '$f' - $!";
print "locked\n";
sleep 10;
print "waking up and unlocking\n";
close( $fh );
While that script is sleeping, I can fiddle with the same text file from a different process:
#!/usr/bin/perl -w
use strict;
my $f = $ARGV[0];
open( my $fh, '>>', $f ) or die "Could not open '$f' - $!";
print $fh "This line was appended to a locked file!\n";
close( $fh );
Why am I then able to open the file and write to it without being told that it's locked?
flock() is an advisory lock. You have to have all your processes use flock() for it to have any effect.
Also realize that, the way you are calling flock(), it will block until it can get the lock. If you want a failure instead, you have to use the LOCK_NB flag as well.
use Fcntl ':flock';

open( my $lf, '>>', 'fileIWantToLockOn' ) or die "open: $!";
my $gotLock = flock( $lf, LOCK_EX | LOCK_NB );
unless ($gotLock) {
    print "Couldn't get lock. Exiting.\n";
    exit 0;
}
EDIT: Also note that flock() won't work on NFS
I don't think flock does what you think it does. Locking a file doesn't prevent anybody from doing anything to the file except trying to obtain a lock on the same file.
From man 2 flock on my system:
flock(2) places advisory locks only; given suitable permissions on a file, a process is free to ignore the use of flock(2) and perform I/O on the file.
flock works with advisory locks. Among other things, this means that only other processes that try to flock the same file will realize it's locked.
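To see the lock take effect, the second script must ask for the lock too. A sketch of a cooperating version of the appending script:

#!/usr/bin/perl
use strict;
use warnings;
use Fcntl ':flock';

my $f = $ARGV[0];
open( my $fh, '>>', $f ) or die "Could not open '$f' - $!";

# This call blocks until the first script releases its lock.
flock( $fh, LOCK_EX ) or die "Could not lock '$f' - $!";
print $fh "This line waited for the lock!\n";
close( $fh );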

Perl - How to send a binary (image) file to a remote host from "master" host and create directory structure too?

I have done a bunch of reading and some testing, to no avail.
I found this script here on Stack Overflow, which pointed me in the right direction, but my abilities are inadequate to modify and fully understand it for my needs.
#! /usr/bin/perl
use warnings;
use strict;
use File::Basename;
use File::Copy;
use File::Spec::Functions qw/ abs2rel catfile /;
use Net::SSH qw/ sshopen3 /;
my $HOST = "user\#host.com";
my $SRC_BASE = "/tmp/host";
my $SRC_FILE = "$SRC_BASE/a/b/c/file";
my $DST_BASE = "/tmp/dest";
system("md5sum", $SRC_FILE) == 0 or exit 1;
my $dst_file = catfile $DST_BASE, abs2rel $SRC_FILE, $SRC_BASE;
my $dst_dir = dirname $dst_file;
sshopen3 $HOST, *SEND, *RECV, *ERRORS,
"mkdir -p $dst_dir && cat >$dst_file && md5sum $dst_file"
or die "$0: ssh: $!";
binmode SEND;
copy $SRC_FILE, \*SEND or die "$0: copy failed: $!";
close SEND or warn "$0: close: $!"; # later reads hang without this
undef $/;
my $errors = <ERRORS>;
warn $errors if $errors =~ /\S/;
close ERRORS or warn "$0: close: $!";
print <RECV>;
close RECV or warn "$0: close: $!";
Scenario:
I have image files in a directory on the "main" host, e.g. /home/user/public_html/images/red/id100/image01.jpg (through image012.jpg).
I would like to copy/send them to my remote host, creating the red/id100 path if it doesn't exist (the /images folder pre-exists).
I need to send a username and password to the remote host as well to get in. It is a typical cPanel shared server environment.
My main server is dedicated, also with cPanel management installed.
I looked into Net::FTP, File::Remote, Net::Telnet, and Net::SFTP, but no examples were available that were "dumbed down" to my level.
I will eventually need to do this with ASCII files too, but I believe once I get it working with the images I can figure out the transfer-mode switch, I hope.
Thanks for any help and verbose examples. I always learn great things here.
Using SFTP it should be pretty simple:
use Net::SFTP::Foreign;
my $sftp = Net::SFTP::Foreign->new( $HOST, password => $PASSWD );
$sftp->error and die "Unable to connect to remote host: " . $sftp->error;
$sftp->rput( $SRC_BASE, $DST_BASE );
...or using ssh+rsync...
use Net::OpenSSH;
my $ssh = Net::OpenSSH->new($HOST, passwd => $PASSWD);
$ssh->error and die "Unable to connect to remote host: " . $ssh->error;
$ssh->rsync_put({recursive => 1}, $SRC_BASE, $DST_BASE)
or die "rsync failed: " . $ssh->error;

Perl pipe and C process as child [Windows]

I want to fork a child (which is my C executable) and share a pipe between the Perl and C processes.
Is it possible to use STDOUT and STDIN as the pipe?
I tried the following code, but the child process keeps running forever.
use IPC::Open2;
use Symbol;

my $CHILDPROCESS = "child.exe";
my $WRITER = gensym();
my $READER = gensym();
my $pid = open2( $READER, $WRITER, $CHILDPROCESS );

while (<STDIN>) {
    print $WRITER $_;
}
close($WRITER);

while (<$READER>) {
    print STDOUT "$_";
}
The Safe Pipe Opens section of the perlipc documentation describes a nice feature for doing this:
The open function will accept a file argument of either "-|" or "|-" to do a very interesting thing: it forks a child connected to the filehandle you've opened. The child is running the same program as the parent. This is useful for safely opening a file when running under an assumed UID or GID, for example. If you open a pipe to minus, you can write to the filehandle you opened and your kid will find it in his STDIN. If you open a pipe from minus, you can read from the filehandle you opened whatever your kid writes to his STDOUT.
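On a POSIX system, the "from minus" form looks like this (a minimal sketch):

my $pid = open( my $from_child, '-|' );
die "Can't fork: $!" unless defined $pid;
if ($pid) {
    # parent: reads whatever the child prints to its STDOUT
    print while <$from_child>;
    close $from_child;
}
else {
    # child: STDOUT is now connected to the parent's $from_child handle
    print "hello from pid $$\n";
    exit 0;
}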
But according to the perlport documentation
open
open to |- and -| are unsupported. (Win32, RISC OS)
EDIT: This might only work for Linux. I have not tried it for Windows. There might be a way to emulate it in Windows though.
Here is what you want, I think:
# Set up pipes to talk to the child.
use IO::Handle;    # needed for the autoflush method on filehandles

pipe( FROM_PERL, TO_C )    or die "pipe: $!\n";
pipe( FROM_C,    TO_PERL ) or die "pipe: $!\n";

# Autoflush so we don't have (some) problems with deadlocks.
TO_C->autoflush(1);
TO_PERL->autoflush(1);

my $pid = fork();
if ($pid) {
    # parent
    close(FROM_PERL) or die "close: $!\n";
    close(TO_PERL)   or die "close: $!\n";
}
else {
    # child
    die "Error on fork.\n" unless defined($pid);

    # Redirect the child's standard handles onto the pipe ends.
    open STDIN,  "<&FROM_PERL";
    open STDOUT, ">&TO_PERL";
    open STDERR, ">&TO_PERL";

    close(TO_C)   or die "close: $!\n";
    close(FROM_C) or die "close $!\n";

    exec("./cprogram");    # start the program
}
Now you can communicate with the child via FROM_C and TO_C as input and output, respectively.
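For example, the parent might continue with something like:

print TO_C "some input for the C program\n";
my $reply = <FROM_C>;    # read one line of the child's output
print "child said: $reply";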
This Q&A over on Perlmonks suggests that open2 runs fine on Windows, provided you manage it carefully enough.

How can I redirect the output from one filehandle into another?

I want to set up a pipeline of processes from within Perl (running on Linux), consisting of two parts run at separate times.
Eg:
Start the consumer process:
open( OUT, "| tar xvf - " ) || die "Failed: tar: $!";
then much later start the producer process:
open( IN, "gpg -d $file |" ) || die "Failed: gpg: $!";
but then somehow redirect the output from gpg into the input to tar.
I can do this by building a loop:
while (<IN>) {
    print OUT;
}
But I would like to know if I can somehow glue the two processes together with redirection.
Add
pipe( IN, OUT );
before the two open statements. That's it!
If you want to do anything more complicated, I would recommend the IPC::Run CPAN module:
http://search.cpan.org/dist/IPC-Run/
It lets you start processes, tie their input and outputs together, and add logging or redirection at any point in the chain.
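For example, the gpg-to-tar pipeline from the question could be expressed like this (an untested sketch, assuming $file holds the encrypted file name as above):

use IPC::Run qw(run);

run [ 'gpg', '-d', $file ], '|', [ 'tar', 'xvf', '-' ]
    or die "pipeline failed: $?";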
If the two processes are completely unrelated, use a FIFO.
use POSIX qw(mkfifo);
mkfifo($path, 0700) or die "mkfifo $path failed: $!";
This creates a FIFO at $path. Now have one process write to that file, and the other process read from it.
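For example (a sketch; the FIFO location is made up, and $file is the encrypted file from the question):

use POSIX qw(mkfifo);

my $path = '/tmp/gpg_to_tar.fifo';    # hypothetical location
mkfifo( $path, 0700 ) or die "mkfifo $path failed: $!";

# Producer: opening a FIFO for writing blocks until a reader appears,
# so run it in the background.
system("gpg -d $file > $path &");

# Consumer: reads the decrypted stream from the FIFO.
system("tar xvf - < $path");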
I like Proc::SafeExec. It lets you tie together processes and file handles in almost arbitrary ways easily. Here's an example:
use strict;
use warnings;
use Proc::SafeExec;

open( my $ls, '-|', 'ls', '-l' ) or die "Err: $!";
open( my $fh, '>', 'tmp.txt' )   or die "Err: $!";

my $p = Proc::SafeExec->new( {
    exec   => [qw(sed -e s/a/b/)],
    stdin  => $ls,
    stdout => $fh,
} );
$p->wait();
After looking at IPC::Run, it looks a lot simpler. Here's the same example using IPC::Run instead:
use IPC::Run qw(run);
run [qw(ls -l)], "|", [qw(sed -e s/a/b/)], ">", "tmp.txt";