Get URL status code using Perl

I want to create a script whose only parameter is a URL. The output of the script will be the status code of that URL. Any idea how to do that?

LWP is commonly used to make HTTP requests from Perl; with LWP::UserAgent, the status code is available via the response object's code method.
There's also the very fast Net::Curl::Easy, which supports parallel requests using Net::Curl::Multi.
Of course, event-based frameworks have their own clients. For example, AnyEvent has AnyEvent::HTTP. These also allow parallel requests.
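For illustration, here is the same idea (fetch a URL, report only the status code) sketched with Python's standard library. The throwaway local server exists only to keep the example self-contained; in the Perl script you would call LWP::UserAgent instead.

```python
import http.server
import threading
import urllib.error
import urllib.request

def status_of(url: str) -> int:
    """Return the HTTP status code for url."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code  # 4xx/5xx responses still carry a status code

# Throwaway local server so the demo needs no network access.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

ok = status_of(base + "/")            # directory listing -> 200
missing = status_of(base + "/nope")   # missing path -> 404
print(ok, missing)
server.shutdown()
```

Note that a plain urlopen treats 4xx/5xx as exceptions, so the except branch is what makes this usable as a status checker.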

Related

Share a complex object between processes

I want to share a Mojo::Transaction::WebSocket object between processes.
The reason for this is that I am building a websocket chat and I don't want to limit Mojolicious to run only with one worker.
Storable did not work for me; it just gives me weird errors.
Any ideas would be appreciated.
There are various ways you can achieve this. Sharing the websocket itself would be hard: it requires a solid understanding of process forking/threading, sharing file descriptors, and knowledge of the Mojolicious core code, which would very likely need to be changed.
If your aim is to load balance, or to perform some long-running task, you're better off having your Mojo application take the request and add it to a queue system such as Redis. You can have multiple processes listening for specific requests, reading the payload, and sending a response back through the queue.
If you just want to be able to access the internals of your Mojo application for other purposes, consider providing a RESTful endpoint that exposes the data you wish to publish.
Alternatively, you can look at remote procedure calls (RPC), which will allow your Mojo process to call functions and send data to other processes. Look at RPC::Simple as an example.
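The queue approach can be sketched generically like this (shown in Python with threads and queue.Queue purely for brevity; in the real setup the workers would be separate processes and the queues would live in Redis, and all names here are hypothetical):

```python
import queue
import threading

# Jobs flow from the web frontend to a pool of workers; each job
# carries its payload plus a reply queue for the result.
jobs = queue.Queue()

def worker():
    while True:
        payload, reply = jobs.get()
        if payload is None:                   # shutdown sentinel
            break
        reply.put("processed: " + payload)    # the long-running task

threading.Thread(target=worker, daemon=True).start()

# The frontend (your Mojo worker, here simulated): accept a request,
# enqueue it, and wait for the worker's answer.
reply = queue.Queue()
jobs.put(("chat message", reply))
result = reply.get(timeout=5)
print(result)                # -> processed: chat message
jobs.put((None, None))       # stop the worker
```

The point of the pattern is that any number of frontend workers can enqueue jobs without sharing objects between processes; only plain payloads cross the boundary.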

Passing data from Server in C language to a python script

I am developing a prototype web application. I am using a toy HTTP server, written in C, which I have developed myself. The server needs to obtain the text data from a submitted HTML form and, for further processing, pass it to a script I have written in Python. Once the processing is done, the script has to return the data to the server. I know the following methods to achieve this:
Save the data in a file and let the Python script access it.
Call system() or the exec family of functions from within the server code.
I have also heard of CGI, where a socket is used on both sides. I know how to do that in my Python script, but I cannot see how the same is done on the C server side.
Normally this is abstracted away, as most people use Apache or nginx as their server. How can CGI be achieved in my server code?
CGI is covered here: https://www.rfc-editor.org/rfc/rfc3875
By 'CGI over sockets' you probably mean FastCGI, which is a bit different (it involves a long-running process listening on a Unix socket and sending back responses). Classical CGI involves spawning a short-lived process which receives the request parameters via environment variables (set these from your server via setenv()).
Request data is sent to the spawned process via stdin (e.g. in Python, sys.stdin). The output response is then written (with headers and all) via stdout.
CGI is very simple, which is why it was adopted so quickly by a large number of languages -- the learning curve to implementing CGI was short.
FastCGI is similar to CGI only at the scripting-language layer (i.e. the programming interface remains somewhat similar); the actual mechanics are rather different (you need to serialize the information over Unix sockets).
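A minimal sketch of the classic CGI handshake, with Python standing in for both sides (a real C server would setenv() and then fork/exec the script; the child inlined with -c below is a stand-in for your actual Python script):

```python
import os
import subprocess
import sys

def run_cgi(argv, env_vars, body):
    """What a CGI server does: put request metadata in environment
    variables, feed the request body to the child's stdin, and take
    the child's stdout (headers, blank line, content) as the response."""
    env = dict(os.environ)
    env.update(env_vars)
    done = subprocess.run(argv, input=body, env=env,
                          capture_output=True, text=True)
    return done.stdout

# Stand-in CGI script, inlined with -c; normally a separate Python file.
child = [sys.executable, "-c",
         "import os, sys; "
         "print('Content-Type: text/plain'); print(); "
         "print('method=' + os.environ['REQUEST_METHOD'] +"
         " ' body=' + sys.stdin.read())"]

response = run_cgi(child,
                   {"REQUEST_METHOD": "POST",
                    "CONTENT_TYPE": "application/x-www-form-urlencoded",
                    "CONTENT_LENGTH": "9"},
                   "name=demo")
print(response)
```

The same shape translates directly to C: setenv() the meta-variables from RFC 3875, pipe the body to the child's stdin, and relay the child's stdout to the client.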

How can the Racket web server be used without managing continuations?

I am trying to develop a simple web API for testing using Racket's web server. The requirements are:
Respond to port requests with a callback in a new thread.
Read the header values and POST data
Write response headers and content to the port.
I do not want to engage the complexity of stateful versus stateless servlets. Essentially I want to avoid the overhead of managing continuations.
By avoiding calls to any send/... function other than send/back, serve/servlet can be used without invoking continuation handling.
Calling (serve/servlet start #:manager web-server/managers/non ...) will cause an error if there is an attempt to use continuations.
Custom headers/content can be created using a "raw" response structure.
Alternatively, serve/launch/wait can be used with a dispatcher built from web-server/dispatchers/dispatch-lift. Raw data may even be written directly to the port.
Reference: Original discussion on Racket discussion list.

Does PLV8 support making http calls to other servers?

If I write a function for PostgreSQL using PLV8, can I call a URL with a GET/POST request from my PLV8 function?
No, as explained by Milen; use an untrusted PL like PL/perlu, PL/pythonu, PL/javau, etc.
Doing this has the same problem as sending email from a trigger, in that unexpected issues like DNS configuration problems could leave all your database connections busy waiting on HTTP connection attempts so nothing else can get any work done.
Instead, use LISTEN and NOTIFY to wake an external helper script that uses a queue table to manage the requests, as explained in the answer linked above.
No, according to this page and my understanding of "trusted":
PL/v8 is a trusted procedural language that is safe to use, fast to run and easy to develop, powered by V8 JavaScript Engine.

Should I connect directly to CouchDB's socket and pass HTTP requests or use node.js as a proxy?

First, here's my original question that spawned all of this.
I'm using Appcelerator Titanium to develop an iPhone app (eventually Android too). I'm connecting to CouchDB's port directly by using Titanium's Titanium.Network.TCPSocket object. I believe it utilizes the Apple SDK's CFSocket/NSStream class.
Once connected, I simply write:
'GET /mydb/_changes?filter=app/myfilter&feed=continuous&gameid=4&heartbeat=30000 HTTP/1.1\r\n\r\n'
directly to the socket. It keeps it open "forever" and returns JSON data whenever the db is updated and matches the filter and change request. Cool.
I'm wondering, is it ok to connect directly to CouchDB's socket like this, or would I be better off opening the socket to node.js instead, and maybe using this CouchDB node.js module to handle the CouchDB proxy through node.js?
My main concern is performance. I just don't have enough experience with CouchDB to know if hitting its socket and passing faux HTTP requests directly is good practice or not. Looking for experience and opinions on any ramifications or alternate suggestions.
It's me again. :-)
CouchDB inherits super concurrency handling from Erlang, the language it was written in. Erlang uses lightweight processes and message passing between those processes to achieve excellent performance under high concurrent load. It will take advantage of all cpu cores, too.
Node.js runs a single process and basically does only one thing at a time within that process. Its event-based, non-blocking I/O approach does allow it to multitask while it waits on I/O, but it still only does one thing at a time.
Both should easily handle tens of thousands of connections, but I would expect CouchDB to handle concurrency better (and with less effort on your part) than Node. And keep in mind that Node adds some latency if you put it in front of CouchDB. That may only be noticeable if you have them on different machines, though.
Writing directly to Couch via TCPSocket is A-OK as long as you write a well-formed HTTP request that follows the spec. (You're not passing a faux request; that's a real HTTP request you're sending, just like any other.)
Note: HTTP/1.1 does require you to include a Host header in the request, so you'll need to correct your code to reflect that, or, to keep things simple, just use HTTP/1.0, which doesn't require it. (I'm curious why you're not using Titanium.Network.HTTPClient. Does it only give you the response body after the request finishes, or something?)
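To make the Host-header point concrete, here is a small helper (hypothetical, with the query string shortened from the question) that builds the raw request bytes you would write to the socket, sketched in Python:

```python
def build_request(method, path, host, version="HTTP/1.1"):
    """Build a minimal raw HTTP request string.
    HTTP/1.1 requires a Host header; HTTP/1.0 does not."""
    lines = [f"{method} {path} {version}"]
    if version == "HTTP/1.1":
        lines.append(f"Host: {host}")
    # Headers end with an empty line (CRLF CRLF).
    return "\r\n".join(lines) + "\r\n\r\n"

# HTTP/1.1 form: Host header is mandatory.
req = build_request("GET",
                    "/mydb/_changes?feed=continuous&heartbeat=30000",
                    "couch.example.com")
print(repr(req))

# HTTP/1.0 form: no Host header needed.
old = build_request("GET", "/mydb/_changes", "ignored", version="HTTP/1.0")
print(repr(old))
```

Whatever language you build the string in, the wire format is the same; the original one-liner in the question just needs the extra Host line before the blank line.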
Anyway, CouchDB can totally handle direct connections and--unless you put a lot of effort into your Node proxy--it's probably going to give users a better experience when you have 100k of them playing the game at once.
EDIT: If you use Node, write an actual HTTP proxy. That will run a lot faster than the module you linked and be simpler to implement. (Rather than defining your own API that then makes requests to Couch, you can pass certain requests straight through to CouchDB and block others, say, for security reasons.)
Also take a look at how "multinode" works:
http://www.sitepen.com/blog/2010/07/14/multi-node-concurrent-nodejs-http-server/