FastCGI with protocol = Tcp on IIS 7 - sockets

I have tried to use IIS 7 (as included in Windows 7) to test a FastCGI library I am currently developing.
According to the original FastCGI spec, when an application is called, its stdin handle is replaced with a socket. By default, IIS uses a named pipe instead, but it is possible to configure it to use TCP, i.e. a socket.
When I try to use this socket in my test application, I get a WSAENOTSOCK error.
When I try to use a named pipe instead (after reconfiguring IIS), I run into similar problems. For example, I get an ERROR_INVALID_HANDLE error when I try to use PeekNamedPipe; ReadFile and WriteFile, however, work correctly.
I guess the problem is that this handle is inherited from the parent process and the current process does not really know its exact type. It seems to assume that the handle represents a simple file.
Has anyone run into similar problems and knows a solution/workaround? Can I somehow update the in-process status of my handle such that the Win32 API functions will accept it as a socket/named pipe?

In case anyone else ever stumbles upon this: DuplicateHandle does the trick.
In fact, the function OS_LibInit of the libfcgi implementation shows how to start a FastCGI app that got its socket through stdin.
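For anyone wanting to see the shape of it, here is a minimal C sketch of that workaround, modeled loosely on what OS_LibInit does (untested; error handling elided, and the function name is my own):

/* Make the socket handle inherited via stdin usable with Winsock.
   Sketch only: error handling elided. */
#include <winsock2.h>
#include <windows.h>

SOCKET listen_socket_from_stdin(void)
{
    WSADATA wsa;
    HANDLE self = GetCurrentProcess();
    HANDLE stdin_handle, dup_handle;

    /* Winsock must be initialized before the handle is used as a socket. */
    WSAStartup(MAKEWORD(2, 2), &wsa);

    stdin_handle = GetStdHandle(STD_INPUT_HANDLE);

    /* Duplicating the inherited handle registers it with the current
       process, after which Winsock calls no longer fail with WSAENOTSOCK. */
    DuplicateHandle(self, stdin_handle, self, &dup_handle,
                    0, FALSE, DUPLICATE_SAME_ACCESS);

    return (SOCKET)dup_handle;
}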

FastCGI or PSGI Interface to NGINX in 2021

This question I asked has resulted in me exploring directly interfacing my FastCGI script to NGINX, rather than using a reverse proxy to Apache. I successfully modified my FastCGI script to run as a daemon using some code I found online:
my $s = FCGI::OpenSocket(':9000', 20);
my $request = FCGI::Request(\*STDIN, \*STDOUT, \*STDERR, \%ENV, $s);
# Remaining code stays just as it does when used with Apache's mod_fcgid.
while ($request->Accept() >= 0) {
    # Call core app subroutines.
}
It works, but as near as I can tell this has a distinct disadvantage over mod_fcgid: I have one process running which handles one request at a time, and if that process dies, there's nothing to start it back up. There are references on Stack Overflow to code that properly spun off workers, but the sites referenced inevitably seem to have gone offline, much like FastCGI's own site.
So, I'm trying to figure out what I need to add and also -- pardon the pun -- figure out if I need to take a fork in this road. Here are the options that I am trying to consider, if I understand my issues correctly:
Directly implement some sort of forking mechanism. Ideally it should (1) toss off the request to a process/thread/worker -- perhaps one that can stay alive for multiple requests -- and move on to being ready for the next request, and (2) be independent enough from the workers that if something goes wrong with a worker, it doesn't bring down the whole system until I catch it and restart the main process (e.g. autorestart processes). If this can be done simply and reliably, it has huge appeal since the code already works with FastCGI.
Give up on direct FastCGI, convert to PSGI, and use an application server to handle these things. Given that I'm using Perl, I'd guess Starman is the logical option, although I've been reading about uwsgi's PSGI support and it sounds almost ideal in "tyrant Emperor" mode, where it can run processes with different privileges, auto-restart missing processes, etc.
Option 1 seems intriguing since it requires the least modification to my existing code and a FastCGI script started up without FastCGI still works like a normal CGI script. (I'm not running this code under FastCGI when it is used by sites that are very low traffic).
Option 2, though, feels like it might be more "modern." At least the PSGI documentation seems to still be online, for example, and using Starman or uwsgi seems like it would take care of the background stuff I need better than anything I would cook up on my own. Downside: I'd need two startup scripts for my code: one to be used by the PSGI-enabled sites and one for sites still running it as plain CGI.
Update: Continuing to explore option 1, I read through this tutorial on Perl fork(), which seems somewhat relevant. Would using fork to break off each FastCGI request be a good approach if I go with option 1? I assume I'd be at risk of fork bombing, although if I kept track of the number of forks and issued wait() if ($forks > 10); perhaps that would be a safe approach? (Or perhaps using Parallel::ForkManager to do that process watching.) Or would it be safer and/or more efficient to use something like Thread::Queue and pass FastCGI request objects to a set of threads that are reliably already established? There seem to be plenty of pitfalls I might overlook, which then returns me to whether I should opt for option 2.
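To make option 1 concrete: the prefork pattern itself is language-agnostic, and in C with libfcgi's fcgiapp API it would look roughly like the sketch below (the worker count and socket address are placeholders; error handling is elided). The parent forks a fixed pool of workers that each run their own accept loop, and re-forks any worker that dies:

/* Prefork sketch with libfcgi (illustrative only; errors unchecked). */
#include <fcgiapp.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define WORKERS 10

static void worker(int sock)
{
    FCGX_Request req;
    FCGX_InitRequest(&req, sock, 0);

    /* Each worker handles one request at a time, many per lifetime. */
    while (FCGX_Accept_r(&req) >= 0) {
        FCGX_FPrintF(req.out, "Content-Type: text/plain\r\n\r\nhello\r\n");
        FCGX_Finish_r(&req);
    }
    exit(0);
}

int main(void)
{
    FCGX_Init();
    int sock = FCGX_OpenSocket(":9000", 20);   /* same address as above */

    for (int i = 0; i < WORKERS; i++)
        if (fork() == 0)
            worker(sock);

    /* The parent only supervises: when a worker dies, fork a replacement,
       so one crash doesn't take the whole service down. */
    for (;;) {
        wait(NULL);
        if (fork() == 0)
            worker(sock);
    }
}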

Sharing variables/data between PowerShell processes

I would like to come up with a mechanism by which I can share 'data' between different PowerShell processes. This would be in order to implement a kind of job system, whereby a function can be run in one PowerShell process, complete, and then somehow communicate its status to a function run from another (distinct) PowerShell process...
I guess what I'd ideally like is for PSJob results to be shareable between sessions, but this does not seem to be possible.
I can think of a few dirty ways of achieving this (like O/S environment variables), but am I missing a semi-elegant way?
For example:
Function giveMeNumber
{
    $return_value = Get-Random -Minimum -100 -Maximum 100
    Return $return_value
}
What are some ways I could get this function to store its return value somewhere and then grab it from another PowerShell session (without using a database)?
Cheers.
The Q&A mentioned by Keith refers to using MSMQ, a message queueing feature optionally available on Microsoft desktop, mobile and server OSes.
It doesn't run by default on desktop OSes, so you would have to ensure that the appropriate service is started. Seems like serious overkill to me unless you want something pretty beefy.
Of course, the most common choice for this type of task would be a simple shared file.
Alternatively, you could create a TCP listener in each of the jobs that you want to have accept external info. I haven't done this myself in PowerShell, though I know it is possible; Node.js or Python would be more familiar environments for me. Seems like overkill if a shared file would do the job!
Another way would be to use the registry, though you might consider that cheating since it is actually a database (of a very broken and simplistic sort).
I'm actually not sure that environment variables would work, since I know they can be picky about the parent environment scope (for example, setting an env variable in a cmd doesn't make it available outside of the cmd scope by default).
UPDATE: Doh, missed a few! Some of them very obvious. Microsoft have a list:
Clipboard
COM
Data Copy
DDE
File Mapping
Mailslots
Pipes
RPC
Windows Sockets
Pipes was the one I was trying to remember. Windows sockets would be similar to a TCP listener.
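To illustrate the pipes option, here is a minimal C sketch of the server side against the Win32 named pipe API (the pipe name and payload are made up; error handling is elided):

/* Create a named pipe and hand one value to whoever connects.
   Sketch only: pipe name and payload are illustrative. */
#include <windows.h>

int main(void)
{
    HANDLE pipe = CreateNamedPipeA(
        "\\\\.\\pipe\\giveMeNumber",      /* any agreed-upon name */
        PIPE_ACCESS_OUTBOUND,
        PIPE_TYPE_MESSAGE | PIPE_WAIT,
        1, 0, 0, 0, NULL);

    ConnectNamedPipe(pipe, NULL);         /* block until a reader attaches */

    int value = 42;                       /* the "return value" to share */
    DWORD written;
    WriteFile(pipe, &value, sizeof value, &written, NULL);
    CloseHandle(pipe);
    return 0;
}

A client opens \\.\pipe\giveMeNumber with CreateFile and calls ReadFile to fetch the value; from PowerShell, the System.IO.Pipes classes wrap the same mechanism.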

node.js - share sockets between processes

I've read that it's possible to share sockets between processes. Is this also possible in Node.js?
I saw the cluster API in node.js, but that's not what I'm looking for. I want to be able to accept a connection in one process, maybe send & read a bit, and after a while pass this socket to another fully independent node.js process.
I could already do this with piping, but I don't want to do that, since it's not as fast as directly reading/writing to the socket itself.
Any ideas?
Update
I found the following entry in the node.js documentation:
new net.Socket([options])
Construct a new socket object.
options is an object with the following defaults:
{ fd: null,
  type: null,
  allowHalfOpen: false
}
fd allows you to specify the existing file descriptor of the socket. type specifies the underlying protocol; it can be 'tcp4', 'tcp6', or 'unix'. About allowHalfOpen, refer to createServer() and the 'end' event.
I think it would be possible to set the "fd" property to the file descriptor of the socket and then open the socket with that. But... how can I get the file descriptor of the socket and pass it to the process that needs it?
Thanks for any help!
This is not possible at the moment, but I've added it as a feature request to the node issues page.
Update
In the meantime, I've written a module for this. You can find it here: https://github.com/VanCoding/node-ancillary
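For background, what a module like node-ancillary has to do under the hood is pass the descriptor as SCM_RIGHTS ancillary data over a Unix domain socket. A minimal C sketch of the sending side (untested; error handling elided):

/* Send file descriptor `fd` over the already-connected Unix domain
   socket `chan` as SCM_RIGHTS ancillary data (sketch; errors unchecked). */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int chan, int fd)
{
    char dummy = '*';                     /* must send at least one byte */
    struct iovec iov = { &dummy, 1 };
    union {                               /* properly aligned control buffer */
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } u;
    struct msghdr msg = { 0 };

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof u.buf;

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(chan, &msg, 0);        /* the receiver uses recvmsg() */
}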
You probably want to take a look at hook.io
hook.io is a distributed EventEmitter built on node.js. In addition to providing a minimalistic event framework, hook.io also provides a rich network of hook libraries for managing all sorts of input and output.

How to know whether any process is bound to a Unix domain socket?

I'm writing a Unix domain socket server for Linux.
A peculiarity of Unix domain sockets I quickly found out is that, while creating a listening Unix socket creates the matching filesystem entry, closing the socket doesn't remove it. Moreover, until the filesystem entry is removed manually, it's not possible to bind() a socket to the same path again: bind() fails with EADDRINUSE if the path it is given already exists in the filesystem.
As a consequence, the socket's filesystem entry needs to be unlink()'ed on server shutdown to avoid getting EADDRINUSE on server restart. However, this cannot always be done (e.g. after a server crash). Most FAQs, forum posts and Q&A websites I found only advise, as a workaround, to unlink() the socket prior to calling bind(). In this case, however, it becomes desirable to know whether a process is bound to this socket before unlink()'ing it.
Indeed, unlink()'ing a Unix socket while a process is still bound to it and then re-creating the listening socket doesn't raise any error. As a result, however, the old server process is still running but unreachable: the old listening socket is "masked" by the new one. This behavior has to be avoided.
Ideally, the socket API should expose for Unix domain sockets the same "mutual exclusion" behavior it exposes when binding TCP or UDP sockets: "I want to bind socket S to address A; if a process is already bound to this address, just complain!" Unfortunately this is not the case...
Is there a way to enforce this "mutual exclusion" behavior? Or, given a filesystem path, is there a way to know, via the socket API, whether any process on the system has a Unix domain socket bound to this path? Should I use a synchronization primitive external to the socket API (flock(), ...)? Or am I missing something?
Thanks for your suggestions.
Note: Linux's abstract-namespace Unix sockets seem to solve this issue, as there is no filesystem entry to unlink(). However, the server I'm writing aims to be generic: it must be robust with both types of Unix domain sockets, as I am not responsible for choosing listening addresses.
I know I am very late to the party and that this was answered a long time ago, but I just encountered this while searching for something else, and I have an alternate proposal.
When you encounter the EADDRINUSE return from bind(), you can enter an error-checking routine that connects to the socket. If the connection succeeds, there is a running process that is at least alive enough to have done the accept(). This strikes me as the simplest and most portable way of achieving what you want. It has drawbacks in that the server that created the UDS in the first place may actually still be running but "stuck" somehow and unable to do an accept(), so this solution certainly isn't foolproof, but it is a step in the right direction, I think.
If the connect() fails, go ahead and unlink() the endpoint and try the bind() again.
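In code, the probe might look like this C sketch (the function name is mine; error handling is trimmed, and there is still a small window between the check and the rebind):

/* On EADDRINUSE, probe the existing path: if connect() is refused,
   no live server holds it, so unlink and retry the bind. */
#include <errno.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int bind_or_reclaim(int sock, const char *path)
{
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof addr.sun_path - 1);

    if (bind(sock, (struct sockaddr *)&addr, sizeof addr) == 0)
        return 0;
    if (errno != EADDRINUSE)
        return -1;

    /* A successful connect() means a server is alive behind the path,
       so leave it alone and report the address as taken. */
    int probe = socket(AF_UNIX, SOCK_STREAM, 0);
    if (connect(probe, (struct sockaddr *)&addr, sizeof addr) == 0) {
        close(probe);
        errno = EADDRINUSE;
        return -1;
    }
    close(probe);

    unlink(path);                         /* stale entry: remove and retry */
    return bind(sock, (struct sockaddr *)&addr, sizeof addr);
}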
I don't think there is much to be done beyond things you have already considered. You seem to have researched it well.
There are ways to determine whether a process is bound to a Unix domain socket (obviously lsof and netstat do it), but they are complicated and system-dependent enough that I question whether they are worth the effort to deal with the problems you raise.
You are really raising two problems - dealing with name collisions with other applications and dealing with previous instances of your own app.
By definition, multiple instances of your program should not be trying to bind to the same path, so that probably means you only want one instance to run at a time. If that's the case, you can just use the standard PID file lock technique so two instances don't run simultaneously. You shouldn't be unlinking the existing socket, or even running, if you can't get the lock. This takes care of the server crash scenario as well. If you can get the lock, then you know you can unlink the existing socket path before binding.
There is not much you can do, AFAIK, to control other programs creating collisions. File permissions aren't perfect, but if the option is available to you, you could put your app in its own user/group. If there is an existing socket path and you don't own it, then don't unlink it; put out an error message and let the user or sysadmin sort it out. Using a config file to make the path easily changeable - and available to clients - might work. Beyond that, you almost have to go to some kind of discovery service, which seems like massive overkill unless this is a really critical application.
On the whole you can take some comfort that this doesn't actually happen often.
Assuming you only have one server program that opens that socket, what about this:
Exclusively create a file that contains the PID of the server process (maybe also the path of the socket).
If you succeed, then write your PID (and socket path) there and continue creating the socket.
If you fail, the socket was created before (most likely), but the server may be dead. Therefore read the PID from the file that exists, and then check whether such a process still exists (e.g. using kill with signal 0):
If a process exists, it may be the server process, or it may be an unrelated process.
(More steps may be needed here.)
If no such process exists, remove the file and start over, trying to create it exclusively.
Whenever the process terminates, remove the file after having closed (and removed) the socket.
If you place the socket and the lock file both in a volatile filesystem (/tmp in older ages, /run in modern times), then a reboot will most likely clear old sockets and lock files automatically.
Unless administrators like to play with kill -9, you could also establish a signal handler that tries to remove the lock file when receiving fatal signals.
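Putting the steps together, a C sketch of the exclusive-create-and-check part (paths and error handling are minimal; kill(pid, 0) only probes whether the process exists):

/* Take a PID lock file, or fail if a live process already holds it. */
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int take_lock(const char *lockpath)
{
    for (;;) {
        /* O_CREAT|O_EXCL makes creation atomic: only one process wins. */
        int fd = open(lockpath, O_CREAT | O_EXCL | O_WRONLY, 0644);
        if (fd >= 0) {
            dprintf(fd, "%ld\n", (long)getpid());
            close(fd);
            return 0;                     /* we own the lock */
        }

        /* Lock file exists: is the recorded owner still running? */
        FILE *f = fopen(lockpath, "r");
        long pid = 0;
        if (f) {
            if (fscanf(f, "%ld", &pid) != 1)
                pid = 0;
            fclose(f);
        }

        if (pid > 0 && kill((pid_t)pid, 0) == 0)
            return -1;                    /* a live process holds the lock */

        unlink(lockpath);                 /* stale lock: remove and retry */
    }
}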

How can I get Perl's Jabber::SimpleSend to work with Gmail chat?

I'm trying to write a simple Perl script to send an Instant Message. Jabber seemed like it might be the most conducive protocol. But the following script fails:
#!/usr/bin/env perl
use Jabber::SimpleSend qw(send_jabber_message);
send_jabber_message('me@gmail.com',
                    'CENSORED',
                    'you@gmail.com',
                    'subject test',
                    "body test");
It says:
Can't call method "can_read" on an undefined value at
/opt/local/lib/perl5/site_perl/5.8.9/XML/Stream.pm line 1421.
As cartman's answer points out, the code should actually be
#!/usr/bin/env perl
use Jabber::SimpleSend qw(send_jabber_message);
send_jabber_message('me%40gmail.com@talk.google.com',
                    'CENSORED',
                    'you%40gmail.com@talk.google.com',
                    'subject test',
                    "body test");
But that fails with the following error:
No SASL mechanism found
at /usr/local/lib/perl5/site_perl/5.10.0/Authen/SASL.pm line 74
I do have the Authen::SASL cpan module installed.
Jabber::SimpleSend is the easier way to interact with a standard Jabber server, but don't let the module name mislead you: gtalk is indeed a bit different, requiring TLS encryption (which Jabber::SimpleSend won't do) and a hostname change. You will get better results using Net::XMPP and dealing directly with its API.
See http://www.gridpp.ac.uk/wiki/Nagios_jabber_notification for a well-commented, fully working implementation in 75 lines of Perl using Net::XMPP. It's intended to send Nagios notifications, but it does exactly what you need.
I'm not familiar with the code, but that line in XML::Stream is where the module begins a select() loop. Lines 523-524 are where it passes IO::Select a socket to the destination server; IO::Select itself passes a blessed reference, which should never be undef the way XML::Stream uses it.
Something is probably modifying the "SELECT" element of the XML::Stream object in the Jabber modules, possibly in a misguided attempt to correct a server connection error. I'm sorry I couldn't be more specific.
In response to the update:
These are odd errors, and I've been meaning to look inside the Jabber modules anyway, so I took a look at the source. The following is based on looking at the latest versions of the modules available from CPAN. This is probably not very useful unless you want to start subclassing these modules and adding code to see where something unexpected happens. (You can skip the next paragraph if you're not interested in the Jabber modules' internals.)
From the updated information, I've traced it to the point where Authen::SASL::Perl croaks on line 41. It needs a result from $parent->mechanism, and there are two possible causes, assuming Authen::SASL isn't broken. Either it's being called incorrectly with no arguments from Net::XMPP::Protocol (line 2968), which seems unlikely, or the "mechanisms" it set in the constructor for Authen::SASL don't exist. Net::XMPP::Protocol defines the "mechanisms" (GetStreamFeature called, line 2958; that method defined around line 3340) with return $self->{STREAM}->GetStreamFeature($self->GetStreamID(),$feature);, where $feature is just a string passed from the callee and the id part of the XML::Stream object's session.
Based on the original XML error and the possibility of the session ID going bad, it appears that the server sends bad data at some point, unexpected by XML::Stream and unaccounted for by the modules using it. I'm not convinced that foo%40gmail.com@talk.google.com is the right user name format, but I don't know how that could be causing these errors without the Jabber server doing something wrong.
I would start fresh with different user names on a different server and see if Jabber::SimpleSend works at all, then try to capture the server's output somehow to see what XML::Stream is choking on.
Update: For what it's worth, I installed the module and I'm getting the exact same errors. Authen::SASL::Perl::PLAIN and all other prerequisites do exist. And when I set the user name to gmailaccountname@talk.google.com and enabled global warnings (e.g., #!/usr/bin/perl -w or perl -w filename.pl), XML::Stream reveals a bunch of undefined value problems, and SimpleSend actually spits out the warning "Could not connect to Jabber server"! (No, I don't know what that really means :().
Update: I was trying to install Net::Jabber::Bot (I gave up after some SSL module errors) to see if it would solve anything, and I noticed its constructor has this option and note:
gtalk => 0 # Default to off, 1 for on. needed now due to gtalk differences from std jabber server.
which reinforces the idea that the server's doing something unusual, which XML::Stream doesn't bother to throw an exception for.
Your username should be me@gmail.com, but the server name is talk.google.com. So the first parameter should be me@gmail.com@talk.google.com, but I am not sure if Perl can grok the double @ signs. You may try to escape the first @ with %40 so that the first parameter is me%40gmail.com@talk.google.com.
Update I:
About the second error: it looks like you are missing SASL authentication modules. GMail uses SASL PLAIN authentication. So, do you have the /usr/local/lib/perl5/site_perl/5.10.0/Authen/SASL/Perl/PLAIN.pm file?
Looks like you require Authen::SASL::Cyrus (the C implementation) or Authen::SASL::Perl (the Perl implementation) to be installed, as well as Authen::SASL (which simply tries to find the best option installed on your machine and, for you, finds neither).
Check to see if you have one of them installed.
That's my reading of the source and the manual - I've not tested this, ymmv.