Injecting a custom die() handler into mod_perl SOAP handler

We're using $server = SOAP::Transport::HTTP::Apache->new; $server->dispatch_with(...) as the backend to a JS-based application. Should the underlying module die, the handler sends back a nice error message that gets displayed by the JS code.
The problem is, I would like more detailed messages (e.g. Carp::longmess), and a hard copy of those on STDERR.
How can I inject a custom exception handler into SOAP::Transport::HTTP::Apache with minimal code modifications?
(This is a large, old project we can't afford to rewrite, though honestly it deserves one.)
UPDATE: here's a sample error message:
<soap:Body><soap:Fault>
  <faultcode>soap:Server</faultcode>
  <faultstring>Column 'allocation' cannot be null at
  /usr/local/lib/perl5/site_perl/5.8.8/Tangram/Storage.pm line 686.</faultstring>
</soap:Fault></soap:Body>
I get a Tangram error, but this is unlikely to be a bug in Tangram, and in any case I need a full stack trace. OTOH, the die message ended up inside a SOAP message, which is not what a plain die does, so there's a handler somewhere -- and that's the handler I want to customize a bit.

The error handler is located under SOAP::Transport::HTTP::Server::_output_soap_fault. Try grepping for <faultcode> in the Perl @INC paths.
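If patching SOAP::Lite itself is off the table, one low-touch option is a localized __DIE__ hook around the dispatch, so the fault string carries a full trace and a copy lands on STDERR. This is only a sketch, assuming your Apache handler delegates to $server->handler($r); the surrounding handler code is illustrative, not your actual setup:

use Carp ();
use SOAP::Transport::HTTP;

my $server = SOAP::Transport::HTTP::Apache->new;
$server->dispatch_with(...);    # as in the existing code

sub handler {
    my $r = shift;
    # Localized, so the hook only applies while this request runs.
    # __DIE__ fires even inside eval blocks, which is what lets it
    # see the die before SOAP::Lite's own eval catches it.
    local $SIG{__DIE__} = sub {
        my $msg = Carp::longmess( $_[0] );    # message plus full stack trace
        print STDERR $msg;                    # hard copy on STDERR
        die $msg;                             # re-throw; becomes the <faultstring>
    };
    return $server->handler($r);
}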

Related

How to get the server's error message when `Net::LDAP::schema` fails?

I wrote a Perl program that uses $ldap->schema to get the server's schema.
So far, every server I tried returned a schema.
However, one server just returns undef, so I tried passing dn => 'CN=Schema,CN=Configuration,...' to schema().
Unfortunately, I still get an undef result.
Trying to get the schema using ldapsearch, I see an error message from the server like:
text: 000004DC: LdapErr: DSID-0C090A71, comment: In order to perform this operation a successful bind must be completed on the connection., ...
I'd like to get the server's error message from the $ldap->schema method.
How can I do that using version 0.44 of Net::LDAP (I know it's a bit old by now)?
The schema method doesn't report any details. But you could trace through it with the debugger, or make a copy of the module and add debugging statements.
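Alternatively, you can replicate what schema() does internally and inspect the result object yourself, since the search methods do expose the server's message. A rough sketch only: the host and base DN are placeholders, and the search parameters merely approximate what Net::LDAP does under the hood:

use Net::LDAP;

my $ldap = Net::LDAP->new('ldap.example.com') or die $@;

# The quoted LdapErr suggests the server wants an authenticated
# bind before it will hand out the schema.
my $mesg = $ldap->bind;    # or $ldap->bind($dn, password => $pw)
die $mesg->error if $mesg->code;

# Roughly what $ldap->schema does internally: a base-scope search
# against the schema entry. Here the server's error is visible.
my $search = $ldap->search(
    base   => 'CN=Schema,CN=Configuration,DC=example,DC=com',
    scope  => 'base',
    filter => '(objectClass=subschema)',
    attrs  => [ 'attributeTypes', 'objectClasses' ],
);
warn $search->error if $search->code;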

SOAP Web Service Client error, While consuming the service

I am getting this error while using a SOAP web service client with Axis 1. I created a stub from the WSDL file and tried to consume it, and then I got this error. The WSDL was given to me by someone else.
error in msg parsing: xml was empty, did't parse!
Below is the error message and stack trace. Can anyone help?
In order to fix the javax.activation.DataHandler issue you must add the JavaBeans Activation Framework activation.jar to your classpath.
In order to fix the javax.mail.internet.MimeMultipart issue you must add the Java Mail API mail.jar to your classpath.
The warning messages printed in your console show that the above jars are not in the classpath.
There are several common reasons to receive the message:
error in msg parsing: xml was empty, did't parse!
The most obvious is that no message was sent. If you have some way of inspecting your transport channel, that would be worth looking at.
Also, the XML message could have been sent in an unexpected character set: for example, a header declares it to be UTF-8 when it is really Windows-1252. You can sometimes get away with that if you only use 7-bit ASCII characters, but anything in the 8-bit range will cause it to bomb.
Also, the XML message could have had a byte order mark unexpectedly inserted at the beginning of the message.
Also, the XML message might not have the document declaration (<?xml ... ?>) starting in the first byte of the message; that violates the specification and often causes parsers to puke and claim that no message was found.
All things considered, the parser was not able to find a valid XML message it could parse, so it didn't. You need to grab the data on the transport channel and figure out what exactly is wrong to resolve the issue.
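If you do manage to capture the raw bytes on the wire, a quick scriptable sanity check can rule out the BOM and declaration-offset cases above. A small Perl sketch (the filename is a placeholder for whatever your capture produced):

# Inspect the first bytes of a captured payload for the problems
# described above: an empty message, a UTF-8 BOM, or an XML
# declaration that does not start at byte 0.
open my $fh, '<:raw', 'captured-message.xml' or die "open: $!";
read $fh, my $head, 64;

print "empty capture - no message was sent?\n" unless length $head;
print "UTF-8 BOM present\n" if $head =~ /^\xEF\xBB\xBF/;
print "XML declaration not at byte 0\n"
    if $head =~ /<\?xml/ && $head !~ /^(?:\xEF\xBB\xBF)?<\?xml/;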

How can I force Mojolicious to send response to client?

I want a request to a Mojolicious application to be able to trigger a long running job. The client doesn't need to wait for that long job to finish, so I'd like the app to send back a quick response and start the job. Here's what I have in mind:
use Mojolicious::Lite;

get '/foo' => sub {
    my $self = shift;
    $self->render( text => 'Thanks for requesting /foo. I will get started on that.' );
    # ... force Mojolicious to send response now ...
    do_long_running_job();
};
But when I write the code like this, the client doesn't receive the response until after the long running job is finished (which may trigger inactivity timeouts, etc.). Is there any way to send the response more quickly? Is there another way to structure my code/app to achieve this?
Things from the docs that looked promising but didn't work:
$self->rendered(200);
$self->res->finish;
Randal Schwartz's Watching long processes through CGI should help:
The child goes on, but it must first close STDOUT, because otherwise Apache will think there might still be some output coming for the browser, and won't respond to the browser or release the connection until this is all resolved. Next, we have to launch a child process of the child to execute …
We'll do this with a pipe-open which includes an implicit fork, in line 37. The grandchild process merges STDERR to STDOUT, and then executes …
The child (that is, the parent of the traceroute) reads from the filehandle opened from the STDOUT (and STDERR) …
In short, the child process scurries off to execute the command. …
Given that you are only interested in kicking off a process rather than watching it, you should be able to prune most of the code.
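Pruned down to just kicking the job off, that approach might look roughly like the sketch below. This is only an illustration under the assumption that forking inside your deployment's worker process is acceptable (the article targets plain CGI; under a preforking or event-driven server the details differ); do_long_running_job() is the placeholder from the question:

use Mojolicious::Lite;
use POSIX ();

get '/foo' => sub {
    my $self = shift;
    $self->render( text => 'Thanks for requesting /foo. I will get started on that.' );

    defined( my $pid = fork ) or die "Cannot fork: $!";
    if ( $pid == 0 ) {
        # Child: close the inherited handles so the server is free
        # to finish the response and release the connection.
        close STDIN;
        close STDOUT;
        close STDERR;
        do_long_running_job();
        # Skip END blocks/destructors inherited from the server.
        POSIX::_exit(0);
    }
    # Parent: returns immediately. To avoid zombies, reap the child
    # elsewhere, double-fork, or set $SIG{CHLD} = 'IGNORE'.
};

app->start;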

perlipc - Interactive Client with IO::Socket - why does it fork?

I'm reading the perlipc perldoc and was confused by the section entitled "Interactive Client with IO::Socket". It shows a client program that connects with some server and sends a message, receives a response, sends another message, receives a response, ad infinitum. The author, Tom Christiansen, states that writing the client as a single-process program would be "much harder", and proceeds to show an implementation that forks a child process dedicated to reading STDIN and sending to the server, while the parent process reads from the server and writes to STDOUT.
I understand how this works, but I don't understand why it wouldn't be much simpler (rather than harder) to write it as a single-process program:
while (1) {
    read from STDIN
    write to server
    read from server
    write to STDOUT
}
Maybe I'm missing the point, but it seems to me this is a bad example. Would you ever really design a client/server application protocol where the server might suddenly think of something else to say, interjecting characters onto the terminal while the client is in the middle of typing his next query?
UPDATE 1: I understand that the example permits asynchronicity; what I'm puzzled about is why concurrent I/O between a CLI client and a server would ever be desirable (due to the jumbling of input and output of text on the terminal). I can't think of any CLI app - client/server or not - that does that.
UPDATE 2: Oh!! Duh... my solution only works if there's exactly one line sent from the server for every line sent by the client. If the server can send an unknown number of lines in response, I'd have to sit in a "read from server" loop - which would never end, unless my protocol defined some special "end of response" token. By handling the sending and receiving in separate processes, you leave it up to the user at the terminal to detect "end of response".
(I wonder whether it's the client, or the server, that typically generates a command prompt? I'd always assumed it was the client, but now I'm thinking it makes more sense for it to be the server.)
Because the <STDIN> read request can block, doing the same thing in a single process requires more complicated, asynchronous handling of the input/output functions:
while (1) {
    if there is data in STDIN
        read from STDIN
        write to server
    if there is data from server
        read from server
        write to STDOUT
}
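In Perl that asynchronous shape is usually built on select(), e.g. via IO::Select. A minimal sketch of the single-process version (host and port are placeholders; sysread is used because mixing buffered line reads with select() is a classic pitfall):

use IO::Select;
use IO::Socket::INET;

my $socket = IO::Socket::INET->new(
    PeerAddr => 'localhost',
    PeerPort => 12345,
    Proto    => 'tcp',
) or die "connect: $!";

$| = 1;    # unbuffered STDOUT so server output appears promptly
my $sel = IO::Select->new( \*STDIN, $socket );

while (1) {
    for my $fh ( $sel->can_read ) {
        my $n = sysread( $fh, my $buf, 4096 );
        exit unless $n;    # EOF on either side ends the client
        if ( $fh == $socket ) { print STDOUT $buf }      # server -> terminal
        else                  { print {$socket} $buf }   # keyboard -> server
    }
}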

Is there a mod_perl2/Perl 5 equivalent to PHP's ignore_user_abort()?

I'm writing an internal service that needs to touch a mod_perl2 instance for a long-running process. The job is fired from an HTTP POST, and the mod_perl handler picks it up and does the work. It could take a long time and is ready to be handled asynchronously, so I was hoping I could terminate the HTTP connection while it keeps running.
PHP has a function, ignore_user_abort(), that, when combined with the right headers, can close the HTTP connection early while leaving the process running (this technique is mentioned here on SO a few times).
Does Perl have an equivalent? I haven't been able to find one yet.
Ok, I figured it out.
mod_perl has the 'opposite' problem of PHP here. By default, mod_perl processes are left running even if the connection is aborted, whereas PHP by default terminates the script.
The Practical mod_perl book says how to deal with aborted connections.
(BTW, for the purposes of this specific problem, a job queue was lower on the list than a 'disconnecting' http process)
# set up headers
$r->content_type('text/html');
my $s = some_sub_returns_string();
$r->connection->keepalive(Apache2::Const::CONN_CLOSE);
$r->headers_out()->{'Content-Length'} = length($s);
$r->print($s);
$r->rflush();

#
# !!! at this point, the connection to the client will close
#

# do long-running stuff
do_long_running_sub();
You may want to look at using a job queue for this. Here is one provided by Zend that will let you start background processing jobs. There should be a number of these to choose from for PHP and Perl.
Here's another thread that talks about this problem, and an article on some PHP options. I'm no Perl monk, so I'll leave suggestions on those tools to others.
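On the Perl side, Gearman is one commonly used queue where a job can be dispatched and forgotten in a couple of lines. A sketch only, assuming a gearmand server on localhost and a worker already registered under the hypothetical function name 'long_running_job':

use Gearman::Client;

my $client = Gearman::Client->new;
$client->job_servers('127.0.0.1:4730');

# Fire-and-forget: the HTTP request can finish immediately while a
# separate worker process performs the job.
$client->dispatch_background( 'long_running_job', \'payload data' );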