Share socket between processes in Perl (without fork)? - perl

Is there a mechanism in Perl to share a socket between two separate processes, without forking or threading, on Linux?
I would assume no, but this answer leads me to believe it is possible: https://stackoverflow.com/a/1139425/1170839
I would like to create a listening socket on one process, and allow another process to accept/read/write on it.

On many UNIXy systems, as the link you posted indicates, file descriptors may be passed over local domain sockets. For example, a privileged process can open/prepare an fd and then send it to an unprivileged process for use.
Socket::MsgHdr exposes this functionality for Perl, and includes examples of file descriptor passing.
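To give a concrete feel for it, here is a rough sketch of both ends, assuming the two processes already share a connected AF_UNIX socket; the sub names and handle variables are placeholders of mine, not part of Socket::MsgHdr:

    use strict;
    use warnings;
    use Socket;
    use Socket::MsgHdr;    # exports sendmsg() and recvmsg()

    # Sending side: hand an already-open descriptor (e.g. a listening
    # socket) to the other process over the Unix domain socket $unix_sock.
    sub send_fd {
        my ($unix_sock, $fh) = @_;
        my $hdr = Socket::MsgHdr->new(buf => "here comes an fd");
        $hdr->cmsghdr(SOL_SOCKET, SCM_RIGHTS, pack("i", fileno($fh)));
        sendmsg($unix_sock, $hdr) or die "sendmsg: $!";
    }

    # Receiving side: pull the descriptor out of the ancillary data and
    # reopen it as a Perl handle, ready for accept/read/write.
    sub recv_fd {
        my ($unix_sock) = @_;
        my $hdr = Socket::MsgHdr->new(buflen => 256, controllen => 256);
        recvmsg($unix_sock, $hdr) or die "recvmsg: $!";
        my (undef, undef, $data) = $hdr->cmsghdr();
        open(my $fh, "+<&=" . unpack("i", $data)) or die "fdopen: $!";
        return $fh;
    }

The module's own documentation carries essentially this example, so treat its version as authoritative.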

The way to go is to use POE. POE makes event-driven multitasking in Perl ridiculously easy and is designed for just this. POE is a CPAN framework for event-driven, cooperatively multitasking applications. Hands down, it is the easiest and best way to do this in Perl. There's no reason to reinvent this when it's all been done before and is so well tested.
See:
http://poe.perl.org/?Evolution_of_a_POE_Server and
http://poe.perl.org/?POE_Cookbook/TCP_Servers
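For example, a minimal POE TCP echo server along the lines of the cookbook looks roughly like this (the port number is arbitrary):

    use strict;
    use warnings;
    use POE qw(Component::Server::TCP);

    # A tiny line-oriented echo server; all the accept/read/write plumbing
    # is handled by POE::Component::Server::TCP.
    POE::Component::Server::TCP->new(
        Port        => 12345,                   # arbitrary example port
        ClientInput => sub {
            my ($heap, $input) = @_[HEAP, ARG0];
            $heap->{client}->put("you said: $input");
        },
    );

    POE::Kernel->run();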

Related

Persistent socket in Lua in parallel with other Lua code

I am implementing sockets in Lua, and the example code I'm working from uses the following method to keep the connection alive:
    while true do
        -- handle socket traffic here
        socket.sleep(1)
    end
The loop obviously prevents the rest of the project code from running, but if I exit the loop the socket server immediately reports that the connection was closed.
So how do I keep the socket open simultaneously as the rest of my Lua code runs as normal? (Is there some sort of background job support? Can coroutines be used for this purpose?)
I used Lua Lanes to start a thread that does the socket I/O and runs in the background, as you described.
http://kotisivu.dnainternet.net/askok/bin/lanes/
Take a look at this answer, which gives info on using Lua Lanes and sockets.
LuaLanes and LuaSockets
The Dual-Threaded Polling solution provided there is probably the most viable, but there's information about coroutines there as well.
(Your question is similar to this question (and I have appropriately flagged it as a duplicate), but here's a copy of my answer for your convenience!)
There are various ways of handling this issue; which one you select depends on how much work you want to do.*
But first, you should clarify (to yourself) whether you are dealing with UDP or TCP; there is no "underlying TCP stack" for UDP sockets. Also, UDP is the wrong protocol for sending whole pieces of data such as text or a photo; it is an unreliable protocol, so you aren't guaranteed to receive every packet unless you're using a managed socket library (such as ENet).
Lua51/LuaJIT + LuaSocket
Polling is the only method.
Blocking: call socket.select with no time argument and wait for the socket to be readable.
Non-blocking: call socket.select with a timeout argument of 0, and use sock:settimeout(0) on the socket you're reading from.
Then simply call these repeatedly.
I would suggest using a coroutine scheduler for the non-blocking version, to allow other parts of the program to continue executing without causing too much delay.
Lua51/LuaJIT + LuaSocket + Lua Lanes (Recommended)
Same as the above method, but the socket exists in another lane (a lightweight Lua state in another thread) made using Lua Lanes (latest source). This allows you to instantly read the data from the socket and into a buffer. Then, you use a linda to send the data to the main thread for processing.
This is probably the best solution to your problem.
I've made a simple example of this, available here. It relies on Lua Lanes 3.4.0 (GitHub repo) and a patched LuaSocket 2.0.2 (source, patch, blog post re' patch)
The results are promising, though you should definitely refactor my example code if you derive from it.
LuaJIT + OS-specific sockets
If you're a little masochistic, you can try implementing a socket library from scratch. LuaJIT's FFI library makes this possible from pure Lua. Lua Lanes would be useful for this as well.
For Windows, I suggest taking a look at William Adam's blog. He's had some very interesting adventures with LuaJIT and Windows development. As for Linux and the rest, look at tutorials for C or the source of LuaSocket and translate them to LuaJIT FFI operations.
(LuaJIT supports callbacks if the API requires them; however, there is a significant performance cost compared to polling from Lua to C.)
LuaJIT + ENet
ENet is a great library. It provides the perfect mix between TCP and UDP: reliable when desired, unreliable otherwise. It also abstracts operating system specific details, much like LuaSocket does. You can use the Lua API to bind it, or directly access it via LuaJIT's FFI (recommended).
* Pun unintentional.
The other answers are nice, but kind of miss the most important point here:
There is rarely a need nowadays to use threads when dealing with sockets
Why? Because handling multiple sockets at once is so common that OSes (most notably *ix systems) implemented "poll many descriptors at once" facilities such as epoll.
All high-performance networking libraries, such as ZeroMQ, keep only a few threads and operate inside them. That lowers memory requirements without sacrificing speed.
So my suggestion would be to hook into the OS facilities directly, which is really easy in Lua. You don't have to write the code yourself; a quick Google search turned up this epoll wrapper [1]. You can then still use coroutines to read only from sockets that actually have some data.
You might also want to take a look at ZeroMQ library itself.
[1] Neopallium created the Lua bindings for ZeroMQ, so I think it's legit.
You can indeed use coroutines for that purpose. This is what the popular library Copas does.
Depending on your use case you can use Copas or look at its source code to see how it does it. You may also look at lua-websockets which uses Copas.

What's more portable in Perl, sockets or named pipes (FIFOs)?

I'm writing some Perl code. I want it to run on Windows and Linux/UNIX/OSX. So far it works on *NIX and uses fifos.
I am considering switching to sockets to avoid the problem that POSIX::mkfifo() doesn't work on Windows, which forces me to write separate code that uses Win32::Pipe.
I'm feeling ambivalent about the whole thing. It seems to me both fixes require about the same amount of work. Is it a good idea to switch to sockets?
Short answer: IO::Socket::INET works on both Windows and *NIX.
Named Pipes
Slightly easier to code up quickly. You don't need to write connection code.
Slightly faster. Sockets have the overhead of TCP and of setting up the initial connection.
Works on all platforms.
Works even when no network card exists. Some laptops shut down the network card to save power, which can prevent even local sockets from working.
Sockets
Works on all platforms. However, some laptops shut down the network card to save power, and even local sockets won't work if there is no network interface.
More portable in Perl. IO::Socket::INET works on both *NIX and Windows.
Allows you to have a separate conversation with each client.
Firewalls are not a problem. Ports over 1024 should work.
Personally, I've decided to switch to sockets. In my application it doesn't matter much. But I think it makes the code a bit simpler, gives me the flexibility to move to > 1 client in the future, and I want to learn IO::Socket anyway.
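For what it's worth, here's the kind of loopback-only IO::Socket::INET code I mean; the port is arbitrary, and in practice the server and client halves live in two separate processes:

    use strict;
    use warnings;
    use IO::Socket::INET;

    # Server process: listen on the loopback interface only.
    my $server = IO::Socket::INET->new(
        LocalAddr => '127.0.0.1',
        LocalPort => 5000,          # arbitrary example port
        Proto     => 'tcp',
        Listen    => 5,
        ReuseAddr => 1,
    ) or die "listen: $!";

    while (my $conn = $server->accept) {
        my $line = <$conn>;         # one request per connection, for simplicity
        print {$conn} "got: $line";
        close $conn;
    }

    # Client process: identical code on Windows and *NIX.
    my $client = IO::Socket::INET->new(
        PeerAddr => '127.0.0.1',
        PeerPort => 5000,
        Proto    => 'tcp',
    ) or die "connect: $!";
    print {$client} "hello\n";
    print scalar <$client>;         # prints "got: hello"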
Answering more generically (ie, it's not perl specific):
Doing this sort of thing on Windows vs. the rest of the world almost always requires separate code for Windows and separate code for everything else. Pretty much everything else has good solutions for things like this, such as Unix domain sockets or FIFOs; on Windows you have to fall back to sockets.
The right thing to do, IMHO, is to use the right solution on Windows that isn't network sockets, because network sockets open the application up to security issues. So on everything else, "do it correctly", but then on Windows fall back to something like network sockets instead. If you do take the network-socket route, make sure you at least use local sockets only (i.e., bound to 127.0.0.1).
For Perl, I'd be tempted to look on CPAN for a module that has already made this generic. But... I wouldn't be surprised if nothing exists.
LWP::socket works fine on Windows and *NIX. If you opt for sockets over FIFOs, then you would eventually be able to have Windows and *NIX processes communicate with each other. Maybe you don't need that today, but who knows.
IIRC, later versions of Perl have a working socketpair on Windows.
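For completeness, the usual socketpair idiom from perlipc looks like the sketch below; as far as I know recent Perls emulate it on Windows with a loopback connection under the hood, but treat that detail as hearsay:

    use strict;
    use warnings;
    use Socket;

    # A connected pair of stream sockets; either end can read and write.
    socketpair(my $one, my $two, AF_UNIX, SOCK_STREAM, PF_UNSPEC)
        or die "socketpair: $!";

    syswrite($one, "ping\n");
    sysread($two, my $buf, 1024);
    print "got: $buf";              # prints "got: ping"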

3-way communication via sockets

Good Afternoon Gurus,
I am pretty familiar with basic socket programming, and the IO::Socket module but I need to code something now that I have not encountered before. It will be a 3 tier application. The first tier is an event-loop that sends messages upstream when certain events are encountered. The second tier is the 'middle-ware' server, which (among other things) acts as the message repository. The third tier is a cgi application, which will update a graphical display.
I am confused about how to set up the server to accept uni-directional connections from multiple clients on one side, and communicate bi-directionally with the cgi application on the other. I can do either of those tasks separately, just not in the same script (yet). Does my question make sense? I would like to stick with the IO::Socket module, but it is not a requirement by any means. I am not asking for polished code, just advice on setting up the socket(s) and on how to communicate from one client to another via the server.
Also, does it make more sense to have the cgi application query the server for new messages, or have the server push the new message upstream to the cgi application? The graphical updates need to be near real-time.
Thank you in advance,
Daren
You said you already have an event loop in the first tier. Your second-tier server should likewise arrange some kind of event loop for asynchronous processing. There are many ways to code it in Perl, like AnyEvent, POE, and Event, to name just a few. In the end, they all use one of the select, poll, epoll, or kqueue OS facilities (or their equivalent on Windows). If you feel comfortable coding at a relatively low level, you can just use Perl's select builtin, or, alternatively, its object-oriented counterpart, IO::Select.
Basically, you create two listening sockets (you might only need one if the first tier uses the same communication protocol as the third tier to talk to your server), add them to the IO::Select object, and do a select on it. Once a connection is made, you add the accepted socket to the select object.
The select method of IO::Select will give you back a list of sockets ready for reading or writing (I am ignoring the possibility of exceptions here). Of course you have to keep track of your sockets to know which one is which. Also, the communication logic will be somewhat complicated because you have to use non-blocking sockets.
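A bare-bones sketch of that loop, with a single listener for brevity (a second listening socket would be added to the same IO::Select object the same way; the port and the message handling are placeholders):

    use strict;
    use warnings;
    use IO::Socket::INET;
    use IO::Select;

    my $listener = IO::Socket::INET->new(
        LocalAddr => '127.0.0.1',
        LocalPort => 7000,          # placeholder port
        Proto     => 'tcp',
        Listen    => 10,
        ReuseAddr => 1,
    ) or die "listen: $!";

    my $select = IO::Select->new($listener);

    while (my @ready = $select->can_read) {
        for my $sock (@ready) {
            if ($sock == $listener) {
                # New connection: remember the accepted socket.
                my $client = $listener->accept or next;
                $select->add($client);
            }
            else {
                # Existing connection: read a line, or drop the socket on EOF.
                my $line = <$sock>;
                if (defined $line) {
                    # ... route $line to the right tier here ...
                }
                else {
                    $select->remove($sock);
                    close $sock;
                }
            }
        }
    }

A real implementation would use sysread on non-blocking handles rather than the buffered <$sock> read, as noted above, but the shape of the loop stays the same.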
As for the second part of your question, I am a little bit confused what you mean by "cgi" - whether it is a Common Gateway Interface (i.e., server-side web scripts), or whether it is a shorthand for "computer graphics". In both cases I think that it makes sense for your task to use server push.
In the latter case that's all I'd like to say. In the former case, I suggest you google for "Comet" (as in "AJAX"). :-)
In a standard CGI application, I don't see how you can "push" data to it. For a client interaction, the data goes through the CGI/presentation layer to the middle tier, where it remains in session storage (or a cache), or on to the backend, where it gets stored in the database.
That is of course unless you have a thick application layer which is a caching locus and kind of a middle tier in itself.

Performance of sockets vs pipes

I have a Java program which communicates with a C++ program using a socket on localhost. Can I expect to gain any performance (in latency, bandwidth, or both) by moving to a native OS pipe? I'm primarily interested in Windows at the moment, but any insight related to Unix/Linux/OSX is welcome as well.
EDIT: Clarification: both programs run on the same host and currently communicate via a socket, i.e. by making a TCP/IP connection to a port on localhost. My question was: what are the potential performance benefits of switching to (local) named pipes (Windows), or their Unix equivalent (an AF_UNIX domain socket)?
Ken is right. Named pipes are definitely faster on Windows. On UNIX & Linux, you'd want a UDS or local pipe. Same thing, different name.
Anything other than sockets will be faster for local communication. This includes memory mapped files, local pipes, shared memory, COM, etc.
The first google hit turned up this, which clocked NT4 and XP and found named pipes (that's what you meant, right?) to be faster on Windows.
For local inter-process communication, pipes are definitely faster than sockets. There is a benchmark.
Even though sockets are flexible, they can also lead to bad code design. Using pipes forces you to design the architecture of your project: which process should be the parent, which should be the children, how they cooperate (this determines how the pipes are established), and what functionality to assign to each process. A project designed this way has a hierarchical structure and is easy to maintain.

Named Pipes IPC

I am trying to create a pipe for two processes to use to send information. The two processes are not related, and an implementation with signals has a problem: if the process that receives the signal is running a system command, it interprets the signal as an interrupt.
I am new to Perl, so any help in getting two processes to use pipes would be really great!
The perl man page perlipc talks a bit about using named pipes.
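In case it helps, a stripped-down version of the perlipc-style FIFO setup looks like this; the path is a placeholder, and the writer and reader are two separate, unrelated processes:

    use strict;
    use warnings;
    use POSIX qw(mkfifo);

    my $fifo = '/tmp/example.fifo';     # placeholder path

    # Either process can create the FIFO if it doesn't exist yet.
    unless (-p $fifo) {
        mkfifo($fifo, 0700) or die "mkfifo: $!";
    }

    # Writer process: open() blocks until some reader opens the other end.
    open(my $out, '>', $fifo) or die "open for writing: $!";
    print {$out} "hello from the writer\n";
    close $out;

    # Reader process (a separate script): blocks until a writer shows up.
    open(my $in, '<', $fifo) or die "open for reading: $!";
    while (my $line = <$in>) {
        print "got: $line";
    }
    close $in;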
You didn't mention any specifics about your project, so this may be completely off from what you are trying to achieve, but have you considered implementing sockets as your IPC mechanism? Again, I understand this may not make sense in the context of your particular project, but it would allow you to create a process that can communicate across a network instead of being limited to one machine.