Named Pipes IPC - perl

I am trying to create a pipe between two processes to send information. The two processes are not related, and an implementation with signals has a problem: if the process that receives the signal is in the middle of a system command, it interprets the signal as an interrupt.
I am new to Perl, so any help getting two processes to talk over pipes would be really great!

The perl man page perlipc talks a bit about using named pipes.
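For a quick taste, here is a minimal FIFO sketch; the path /tmp/my_fifo is made up, and the reader and writer would normally be your two unrelated processes:

    use POSIX qw(mkfifo);

    my $fifo = '/tmp/my_fifo';    # hypothetical path; both processes must agree on it
    mkfifo($fifo, 0600) or die "mkfifo: $!" unless -p $fifo;

    # Process A (reader): the open blocks until a writer shows up.
    open my $in, '<', $fifo or die "open for read: $!";
    print "got: $_" while <$in>;

    # Process B (writer), in a separate script:
    #   open my $out, '>', $fifo or die "open for write: $!";
    #   print {$out} "hello\n";
    #   close $out;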

You didn't mention any specifics about your project, so this may be completely off from what you are trying to achieve, but have you considered sockets as your IPC mechanism? They may not make sense in the context of your particular project, but they would let your processes communicate across a network instead of only on one machine.
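As a sketch of the socket route (the port 9000 is arbitrary), a receiver and a sender that could just as well run on different machines:

    use IO::Socket::INET;

    # Receiver: listen for lines from the other process.
    my $server = IO::Socket::INET->new(
        LocalPort => 9000,
        Listen    => 5,
        ReuseAddr => 1,
    ) or die "listen: $!";

    while (my $peer = $server->accept) {
        print "received: $_" while <$peer>;
    }

    # Sender, in the other process (replace localhost with the receiver's
    # hostname if the two ends live on different machines):
    #   my $sock = IO::Socket::INET->new(PeerAddr => 'localhost:9000') or die $!;
    #   print {$sock} "some information\n";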

Related

What's the conventional way to send commands to running processes?

Is there a conventional way to write a program so that commands can be issued to it from the command line, without a REPL? For example, the way you can send commands to a running nginx server using sudo /etc/init.d/nginx restart (or any other valid command besides restart).
One idea I had was to have the long-running program create and monitor a Unix socket that other programs can write commands to. Another was to run a local server with a REST interface and send commands to that, though that seems a bit gross.
What's the right way to do this?
Both ways are OK, and you could even consider some RPC machinery, such as making your application serve JSON-RPC on a unix(7) socket. Or use a fifo(7). Or use D-Bus.
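For the Unix-socket route, here is a minimal sketch of a command listener; the socket path and the command set are made up for illustration:

    use IO::Socket::UNIX;

    # Hypothetical control socket; other programs connect and send one-line commands.
    my $path = '/tmp/myapp.ctl';
    unlink $path;
    my $listener = IO::Socket::UNIX->new(
        Type   => SOCK_STREAM(),
        Local  => $path,
        Listen => 5,
    ) or die "listen on $path: $!";

    while (my $client = $listener->accept) {
        chomp(my $cmd = <$client> // '');
        if    ($cmd eq 'reload') { print {$client} "reloading\n" }   # placeholder actions
        elsif ($cmd eq 'status') { print {$client} "ok\n" }
        else                     { print {$client} "unknown command\n" }
        close $client;
    }

A client can then be as simple as echo reload | nc -U /tmp/myapp.ctl, or a few more lines of IO::Socket::UNIX.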
A common habit on Unix is to have applications reload their configuration files on e.g. a SIGHUP signal, and save some persistent state (before terminating) on SIGTERM. Read signal(7) (notice that only async-signal-safe routines can be called from signal handlers; a good approach is to only set some volatile sig_atomic_t variable inside the handler and test it outside). See also the POSIX signal.h documentation.
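In Perl, the same flag-setting pattern looks like this (reload_config and do_work are hypothetical placeholders for your application's logic):

    my ($got_hup, $done) = (0, 0);
    $SIG{HUP}  = sub { $got_hup = 1 };   # only set a flag inside the handler
    $SIG{TERM} = sub { $done    = 1 };

    until ($done) {
        if ($got_hup) {
            $got_hup = 0;
            reload_config();             # hypothetical: re-read configuration files
        }
        do_work();                       # hypothetical: one iteration of the main loop
    }
    # save persistent state here before exiting on SIGTERM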
You might make your application become a specialized HTTP server (e.g. using some HTTP server library like libonion) and give it some Web interface (or REST, or SOAP ...); the user (or sysadmin) will then use his browser to interact with your application.
You could make your server systemd compatible. (I don't know exactly what that requires; it is perhaps D-Bus related.)
You could embed some command interpreter (like Guile or Lua) in your app and have some limited kind of REPL running on some IPC channel like a socket or a FIFO. Beware of nasty code injection.
I had a similar issue where I have a plethora of services running on any number of machines and each is in need of communicating with several others.
My main problem was not so much the communication between the services. That can be done with a simple message sent over a connection (as Basile mentioned, it can be TCP, UDP, Unix sockets, FIFOs...). However, when you have over 20 services, many of which need to communicate with several other services, you start having a headache on how to get all the connections right (I have such a system, although it has a relatively limited number of services, like just 10 and that's already very complicated).
So I created a process (yet another service) called Communicator. All services connect to the Communicator service and when they need to send a message, they include the name of the service they want to reach. The Communicator service is in charge of sending the message to the right place; i.e. it could be to another Communicator service running on a different computer. Communicator has a graph of all the services available on your network and knows how to send messages to them without your service having to know anything about all of that. Computing a graph can be really complex.
For this purpose, I created the eventdispatcher project. It is in C++, which may not be what you're interested in, although you could use it from other languages that interface with C/C++. The structure of the messages is "proprietary" (specific to the Communicator), but you can create any message you want. A message includes a name and parameters (param-name=value). The first version has a simple one-line text communication system. The newer version accepts JSON as well (still one line of text per message).
The system supports TCP, UDP, Unix sockets, and FIFOs, and between threads you can have thread-safe FIFOs. It also understands signals (like SIGHUP, SIGTERM, etc.) and has a specific connection to listen for the death of a thread. It supports encryption over TCP via OpenSSL. The messages can automatically be dispatched (hence the current name of the library). Connections can be assigned a timer. And there are CUI and GUI (Qt) extensions as well.
The one main point here is that all your connections can be polled (see poll()), so you can implement a system that reacts to events instead of one that sleeps and checks for events, sleeps and checks, etc., or worse, has a single blocking connection where everything must happen on that one connection or your service gets stuck. This is one reason Unix has been using signals: early versions of Unix did not have select() or poll().

Share socket between processes in Perl (without fork)?

Is there a mechanism in Perl to share a socket between two separate processes, without forking or threading, on Linux?
I would assume no, but this answer leaves me to believe it is possible: https://stackoverflow.com/a/1139425/1170839
I would like to create a listening socket on one process, and allow another process to accept/read/write on it.
On many UNIXy systems, as the link you posted indicates, file descriptors may be passed over local domain sockets. For example, a privileged process can open/prepare an fd and then send it to an unprivileged process for use.
Socket::MsgHdr exposes this functionality for perl, and includes examples of file descriptor passing.
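A condensed sketch, following the examples in the Socket::MsgHdr documentation; assume $sock is an already-connected AF_UNIX socket and $fh_to_pass is the handle you want to hand over:

    use Socket;
    use Socket::MsgHdr;

    # Sending side: attach the file descriptor as SCM_RIGHTS ancillary data.
    my $out = Socket::MsgHdr->new(buf => "x");    # must carry at least one data byte
    $out->cmsghdr(SOL_SOCKET, SCM_RIGHTS, pack("i", fileno($fh_to_pass)));
    sendmsg($sock, $out) or die "sendmsg: $!";

    # Receiving side: extract the descriptor from the control message.
    my $in = Socket::MsgHdr->new(buflen => 1, controllen => 256);
    recvmsg($sock, $in) or die "recvmsg: $!";
    my ($level, $type, $data) = $in->cmsghdr;
    my $fd = unpack("i", $data);
    open(my $passed_fh, '<&=', $fd) or die "fdopen: $!";  # wrap the raw fd in a Perl handle

The receiving process can then accept/read/write on $passed_fh as if it had opened the socket itself.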
The way to go is to use POE. POE makes event-driven cooperative multitasking in Perl ridiculously easy (it is single-threaded rather than truly multithreaded) and is designed for just this. It is a well-tested CPAN framework for event-driven applications; there's no reason to reinvent this when it has all been done before.
See:
http://poe.perl.org/?Evolution_of_a_POE_Server and
http://poe.perl.org/?POE_Cookbook/TCP_Servers
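For a taste of the style, a minimal POE echo server along the lines of those cookbook pages (the port number is arbitrary):

    use POE qw(Component::Server::TCP);

    POE::Component::Server::TCP->new(
        Port        => 12345,                      # arbitrary example port
        ClientInput => sub {
            my ($heap, $input) = @_[HEAP, ARG0];
            $heap->{client}->put("echo: $input");  # send each line back to the client
        },
    );

    POE::Kernel->run;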

What's the best way to handle multiple outgoing connections in Perl?

I have three TCP servers I need to connect to, each with a different protocol, but all in nonblocking mode. Right now my plan is essentially to open a new IO::Socket for each one, add them to an IO::Select, and then loop over can_read(). The idea is based on how servers are usually written in Perl, but it seems like it could work for clients too.
I'm wondering if this is the best way to do it. I'm also wondering how I can detect when a connection drops and reconnect to it without disrupting the other sockets. Any code examples would be a great help, or at least some pointers in the right direction on how best to do this.
You may want to use AnyEvent or POE. Just look through the documentation; it has some nice examples to help you learn your way around.
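If you'd rather stay with plain IO::Select, here is a rough sketch of the multiplex-and-reconnect loop the question describes; the endpoints are hypothetical:

    use IO::Socket::INET;
    use IO::Select;

    # Hypothetical endpoints; substitute your three servers.
    my @endpoints = ([ 'hostA', 1111 ], [ 'hostB', 2222 ], [ 'hostC', 3333 ]);

    my $sel = IO::Select->new;
    my %endpoint_of;                 # socket => endpoint, so we know what to redial

    sub connect_to {
        my ($host, $port) = @_;
        my $sock = IO::Socket::INET->new(PeerAddr => $host, PeerPort => $port)
            or return;               # connect failed; retry on the next pass
        $sock->blocking(0);          # nonblocking I/O once connected
        $sel->add($sock);
        $endpoint_of{$sock} = [ $host, $port ];
    }

    connect_to(@$_) for @endpoints;

    while (1) {
        for my $sock ($sel->can_read(1)) {
            my $n = sysread($sock, my $buf, 4096);
            if (!$n) {               # 0 = EOF, undef = error: treat as a drop
                my $ep = delete $endpoint_of{$sock};
                $sel->remove($sock);
                close $sock;
                connect_to(@$ep);    # redial without touching the other sockets
                next;
            }
            # ... feed $buf to the protocol handler for this particular server ...
        }
    }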

3-way communication via sockets

Good Afternoon Gurus,
I am pretty familiar with basic socket programming, and the IO::Socket module but I need to code something now that I have not encountered before. It will be a 3 tier application. The first tier is an event-loop that sends messages upstream when certain events are encountered. The second tier is the 'middle-ware' server, which (among other things) acts as the message repository. The third tier is a cgi application, which will update a graphical display.
I am confused about how to set up the server to accept unidirectional connections from multiple clients on one side and communicate bidirectionally with the CGI application on the other. I can do either of those tasks separately, just not in the same script (yet). Does my question make sense? I would like to stick with the IO::Socket module, but it is not a requirement by any means. I am not asking for polished code, just advice on setting up the socket(s) and on how to communicate from one client to another via the server.
Also, does it make more sense to have the cgi application query the server for new messages, or have the server push the new message upstream to the cgi application? The graphical updates need to be near real-time.
Thank you in advance,
Daren
You said you already have an event loop in the first tier. In the same way, your second-tier server should arrange some kind of event loop for asynchronous processing. There are many ways to code it in Perl, like AnyEvent, POE, or Event, to name just a few. In the end, they all use one of the select, poll, epoll, or kqueue OS facilities (or their equivalent on Windows). If you feel comfortable coding at a relatively low level, you can just use Perl's select builtin or, alternatively, its object-oriented counterpart, IO::Select.
Basically, you create two listening sockets (you might only need one if the first tier uses the same communication protocol as the third tier to talk to your server), add them to the IO::Select object, and do a select on it. Once a connection is made, you add the accepted socket to the select object.
The select method of IO::Select will give you back a list of sockets ready for reading or writing (I am ignoring the possibility of exceptions here). Of course you have to keep track of your sockets to know which one is which. Also, the communication logic will be somewhat complicated because you have to use non-blocking sockets.
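A skeleton of that two-listener select loop, to make the shape concrete (the ports are made up, and the blocking line-oriented reads are a simplification; a real server would use nonblocking sockets as noted above):

    use IO::Socket::INET;
    use IO::Select;

    # Hypothetical ports: 7001 for tier-1 event sources, 7002 for the CGI side.
    my $events = IO::Socket::INET->new(LocalPort => 7001, Listen => 10, ReuseAddr => 1)
        or die "listen 7001: $!";
    my $cgi    = IO::Socket::INET->new(LocalPort => 7002, Listen => 10, ReuseAddr => 1)
        or die "listen 7002: $!";

    my $sel = IO::Select->new($events, $cgi);
    my %kind;          # socket => 'event' or 'cgi', to tell which side is talking
    my @repository;    # the message store the middle tier maintains

    while (my @ready = $sel->can_read) {
        for my $fh (@ready) {
            if ($fh == $events or $fh == $cgi) {     # a listener is ready: accept
                my $client = $fh->accept or next;
                $kind{$client} = ($fh == $events) ? 'event' : 'cgi';
                $sel->add($client);
                next;
            }
            my $line = <$fh>;                        # an accepted socket has data
            if (!defined $line) {                    # EOF: clean up this client
                $sel->remove($fh);
                delete $kind{$fh};
                close $fh;
            }
            elsif ($kind{$fh} eq 'event') {          # tier 1: store the message
                push @repository, $line;
            }
            else {                                   # CGI side: reply with messages
                print {$fh} @repository;
            }
        }
    }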
As for the second part of your question, I am a little bit confused about what you mean by "cgi": whether it is the Common Gateway Interface (i.e., server-side web scripts) or shorthand for "computer graphics". In both cases I think it makes sense for your task to use server push.
In the latter case, that's all I'd like to say. In the former case, I suggest you google for "Comet" (as in "AJAX"). :-)
In a standard CGI application, I don't see how you can "push" data to the client. For a client interaction, the data goes through the CGI/presentation layer to the middle tier, to remain in session storage (or cache) or to be stored in the database at the backend.
That is, of course, unless you have a thick application layer that acts as a caching locus, a kind of middle tier in itself.

Performance of sockets vs pipes

I have a Java-program which communicates with a C++ program using a socket on localhost. Can I expect to gain any performance (either latency, bandwidth, or both) by moving to use a native OS pipe? I'm primarily interested in Windows at the moment, but any insight related to Unix/Linux/OSX is welcome as well.
EDIT: Clarification: both programs run on the same host, currently communicating via a socket, i.e. by making a TCP/IP connection to localhost:. My question was what are the potential performance benefits of switching to using (local) named pipes (Windows), or their Unix equivalent (AF_UNIX domain socket?).
Ken is right. Named pipes are definitely faster on Windows. On UNIX and Linux, you'd want a Unix domain socket or a local pipe: the same idea under a different name.
Anything other than network (TCP) sockets will be faster for local communication. This includes memory-mapped files, local pipes, shared memory, COM, etc.
The first google hit turned up this, which clocked NT4 and XP and found named pipes (that's what you meant, right?) to be faster on Windows.
For local process communication, pipes are definitely faster than sockets. There is a benchmark.
Sockets are flexible, but that flexibility can also lead to bad code design. Using pipes forces you to design the architecture of your project up front: which process is the parent, which are the children, how they cooperate (this determines how the pipes are established), and which functionality each process gets. A project designed this way has a hierarchical structure and is easy to maintain.
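For illustration, the classic parent/child pipe in Perl, where that hierarchy is explicit in the code:

    # pipe() plus fork(): the parent/child roles are fixed by the design.
    pipe(my $reader, my $writer) or die "pipe: $!";

    my $pid = fork() // die "fork: $!";
    if ($pid == 0) {                      # child: the writing end
        close $reader;
        print {$writer} "hello from child $$\n";
        close $writer;
        exit 0;
    }

    close $writer;                        # parent: the reading end
    print "parent got: ", scalar <$reader>;
    waitpid($pid, 0);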