If I write a function for PostgreSQL using PLV8, can I call a URL with a GET/POST request from my PLV8 function?
No, as explained by Milen; use an untrusted PL like PL/perlu, PL/pythonu, PL/javau, etc.
Doing this has the same problem as sending email from a trigger: an unexpected issue like a DNS configuration problem could leave all your database connections busy waiting on HTTP connection attempts, so nothing else can get any work done.
Instead, use LISTEN and NOTIFY to wake an external helper script that uses a queue table to manage the requests, as explained in the answer linked above.
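To make that concrete, here is a minimal sketch of such a helper, written in Java against the PostgreSQL JDBC driver. The queue table, channel name, and connection details are all made up for illustration: a trigger in the database would INSERT a row into http_request_queue and then NOTIFY http_queue, and this external worker wakes on the notification and drains the queue, so no database backend ever blocks on an HTTP call.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HttpQueueWorker {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "worker", "secret");
            conn.createStatement().execute("LISTEN http_queue");
            org.postgresql.PGConnection pg =
                    conn.unwrap(org.postgresql.PGConnection.class);
            HttpClient http = HttpClient.newHttpClient();

            while (true) {
                // Block for up to 10s waiting for a NOTIFY; the timeout also
                // lets us sweep the queue for rows we may have missed.
                pg.getNotifications(10_000);
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(
                             "DELETE FROM http_request_queue RETURNING id, url")) {
                    while (rs.next()) {
                        HttpRequest req = HttpRequest
                                .newBuilder(URI.create(rs.getString("url")))
                                .GET().build();
                        HttpResponse<String> resp = http.send(
                                req, HttpResponse.BodyHandlers.ofString());
                        System.out.println(rs.getLong("id") + " -> " + resp.statusCode());
                    }
                }
            }
        }
    }

A real worker would also want retry handling, since the DELETE ... RETURNING approach shown here drops a request from the queue even if the HTTP call then fails.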
No, according to this page and my understanding of "trusted":
PL/v8 is a trusted procedural language that is safe to use, fast to run and easy to develop, powered by V8 JavaScript Engine.
I have a Twilio Studio flow for SMS, and I want to write the SMS output to an AWS RDS Postgres database. I initially accomplished this by creating a Twilio function that is triggered at the end of the flow and writes to the db (Twilio function timing out on connecting to AWS postgres database).
However, since Twilio doesn't have static IPs, I could only get this to work by opening my database to anybody (whitelisting 0.0.0.0/0). This seems not great for security, so I'm trying to figure out a more secure way. I've read Twilio's security docs, and it seems like I might be able to approach this differently by setting up a web server that Twilio sends the results to, but this seems much more complicated if my end goal is just to store the SMS in the database (and I don't really understand how to do this). Is setting up an HTTP request the way to go? Alternatively, could something like Zapier help?
Any thoughts on secure (& ideally not too complicated) ways to accomplish this would be appreciated!
Another option is to use Twilio Event Streams as a trigger and then reach back out to Twilio via the Studio Executions endpoint to view what transpired in the Studio Execution.
You could also use Twilio's X-Twilio-Signature validation or basic authentication over HTTPS to secure your endpoint; a sketch of the signature check follows the links below.
Streaming Studio Flow Executions with Event Streams
Fetch Execution Context
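For the X-Twilio-Signature option, the check itself is small. Here is a minimal, framework-agnostic sketch using the Twilio Java helper library; the url, params, and header values come from however your web server exposes the incoming request:

    import com.twilio.security.RequestValidator;
    import java.util.Map;

    public class TwilioWebhookCheck {
        // Returns true only if the request was really signed by Twilio
        // using your account's auth token.
        public static boolean isFromTwilio(String authToken,
                                           String url,                 // full URL Twilio called
                                           Map<String, String> params, // POST form parameters
                                           String signatureHeader) {   // X-Twilio-Signature value
            RequestValidator validator = new RequestValidator(authToken);
            return validator.validate(url, params, signatureHeader);
        }
    }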
Does JavaMail support notification of new emails through server-push?
If yes, where is the documentation for that?
If no, is there a library that can do it?
You should be using IMAPFolder's idle function to issue the IDLE command to the server. That will then listen for events such as new or deleted mail (see the IMAP spec for what those messages look like). And you should be using a MessageCountListener to execute code when the number of emails in the mailbox changes.
IMAP's IDLE command is meant precisely to imitate "push" functionality.
http://java.sun.com/products/javamail/javadocs/javax/mail/event/MessageCountListener.html
http://java.sun.com/products/javamail/javadocs/com/sun/mail/imap/IMAPFolder.html
I won't walk through a full example here, since there are many readily available ones on the internet if you search for this stuff.
But be forewarned: this method won't work for more than one IMAP account, since the idle command blocks, unless you want them all on different threads (a bad idea).
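For reference, though, a minimal sketch of the pattern (host and credentials are placeholders; assumes the com.sun.mail IMAP provider):

    import javax.mail.*;
    import javax.mail.event.MessageCountAdapter;
    import javax.mail.event.MessageCountEvent;
    import com.sun.mail.imap.IMAPFolder;
    import java.util.Properties;

    public class ImapIdleExample {
        public static void main(String[] args) throws Exception {
            Session session = Session.getInstance(new Properties());
            Store store = session.getStore("imaps");
            store.connect("imap.example.com", "user", "password");

            IMAPFolder inbox = (IMAPFolder) store.getFolder("INBOX");
            inbox.open(Folder.READ_ONLY);

            // Runs whenever the server pushes a new-mail event.
            inbox.addMessageCountListener(new MessageCountAdapter() {
                @Override
                public void messagesAdded(MessageCountEvent ev) {
                    for (Message m : ev.getMessages()) {
                        try {
                            System.out.println("New mail: " + m.getSubject());
                        } catch (MessagingException e) {
                            e.printStackTrace();
                        }
                    }
                }
            });

            // idle() returns each time the server sends a response, so it
            // must be re-issued in a loop -- and it blocks, which is why one
            // thread can watch only one folder this way.
            while (true) {
                inbox.idle();
            }
        }
    }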
A StoreEvent delivers notifications issued by your backing store; you can watch for them with a StoreListener:
http://java.sun.com/products/javamail/javadocs/javax/mail/event/StoreEvent.html
But in my experience the JavaMail docs are so thin in places that the best way of finding out what is going on is to debug through the process yourself.
The JavaMail FAQ is a great all-round resource as well:
http://www.oracle.com/technetwork/java/faq-135477.html
I'm looking to add some sort of HTTP push-like functionality, implemented via long polling or another standard means, to a page built with Perl on top of Apache.
Is there a way to do this without setting up a separate server such as Meteor or Stardust? Is there a module that would help with the server code? Is there a way other than long polling?
If you need a quick and dirty fix to avoid major changes to your current application or design, and you do not need instant updates, then one simple approach is to use regular AJAX polling from the browser to the server.
In other words, you would have JavaScript in your browser check the server every couple of seconds to see if there is any message and/or data on the server for this browser session. This will most likely not scale very well, especially with short poll timeouts, and it will eat up server resources, but it may be a useful stopgap solution.
Just to reiterate, this is a quick-fix workaround; the general consensus is that you need to use Comet (probably on a separate server in your case) as a proper solution (until WebSockets arrive...). See some good analysis in these links:
http://cometdaily.com/2007/11/06/comet-is-always-better-than-polling/
http://stackoverflow.com/questions/2975290/comet-vs-ajax-polling
First, here's my original question that spawned all of this.
I'm using Appcelerator Titanium to develop an iPhone app (eventually Android too). I'm connecting to CouchDB's port directly by using Titanium's Titanium.Network.TCPSocket object. I believe it utilizes the Apple SDK's CFSocket/NSStream class.
Once connected, I simply write:
'GET /mydb/_changes?filter=app/myfilter&feed=continuous&gameid=4&heartbeat=30000 HTTP/1.1\r\n\r\n'
directly to the socket. It keeps it open "forever" and returns JSON data whenever the db is updated and matches the filter and change request. Cool.
I'm wondering, is it ok to connect directly to CouchDB's socket like this, or would I be better off opening the socket to node.js instead, and maybe using this CouchDB node.js module to handle the CouchDB proxy through node.js?
My main concern is performance. I just don't have enough experience with CouchDB to know if hitting its socket and passing faux HTTP requests directly is good practice or not. Looking for experience and opinions on any ramifications or alternate suggestions.
It's me again. :-)
CouchDB inherits super concurrency handling from Erlang, the language it was written in. Erlang uses lightweight processes and message passing between those processes to achieve excellent performance under high concurrent load. It will take advantage of all CPU cores, too.
Node.js runs a single process and basically only does one thing at a time within that process. Its event-based, non-blocking IO approach does allow it to multitask while it waits for chunks of IO, but it still only does one thing at a time.
Both should easily handle tens of thousands of connections, but I would expect CouchDB to handle concurrency better (and with less effort on your part) than Node. And keep in mind that Node adds some latency if you put it in front of CouchDB. That may only be noticeable if you have them on different machines, though.
Writing directly to Couch via TCPSocket is a-OK as long as you write a well-formed HTTP request that follows the spec. (You're not passing a faux request... that's a real HTTP request you're sending, just like any other.)
Note: HTTP 1.1 does require you to include a Host header in the request, so you'll need to correct your code to reflect that, or just use HTTP 1.0, which doesn't require it, to keep things simple. (I'm curious why you're not using Titanium.Network.HTTPClient. Does it only give you the request body after the request finishes or something?)
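For example, the request above becomes (the host value here is a placeholder for wherever your CouchDB is listening):
'GET /mydb/_changes?filter=app/myfilter&feed=continuous&gameid=4&heartbeat=30000 HTTP/1.1\r\nHost: couch.example.com:5984\r\n\r\n'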
Anyway, CouchDB can totally handle direct connections and--unless you put a lot of effort into your Node proxy--it's probably going to give users a better experience when you have 100k of them playing the game at once.
EDIT: If you use Node, write an actual HTTP proxy. That will run a lot faster than using the module you provided and be simpler to implement. (Rather than defining your own API that then makes requests to Couch, you can just pass certain requests on to CouchDB and block others, say, for security reasons.)
Also take a look at how "multinode" works:
http://www.sitepen.com/blog/2010/07/14/multi-node-concurrent-nodejs-http-server/
Nginx uses epoll or other multiplexing techniques (such as select) to handle multiple clients; that is, it does not spawn a new thread for every request, unlike Apache.
I tried to replicate the same thing in my own test program using select. I could accept connections from multiple clients by creating a non-blocking socket and using select to decide which client to serve. My program simply echoes their data back to them. It works fine for small data transfers (a few bytes per client).
The problem occurs when I need to send a large file over a connection to a client. Since I have only one thread to serve all clients, I cannot resume serving the other clients until I have finished reading the file and writing it out to the socket.
Is there a known solution to this problem, or is it best to create a thread for every such request?
When using select you should not send the whole file at once. If you are using, for example, sendfile to do this, it will block until the whole file has been sent. Instead, use a small buffer and send a little data at a time to each client. Then use select to identify when the socket is again ready to be written to, and send some more until all the data has been sent. This will allow you to handle multiple clients in parallel; a sketch of the pattern follows.
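Your program is in C, but the same pattern is easy to show with Java NIO, whose Selector wraps select/epoll/kqueue underneath. A minimal sketch (the file name and port are made up) that gives each client its own position in the file and writes only when the socket reports itself writable:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Iterator;

    public class ChunkedFileServer {
        public static void main(String[] args) throws IOException {
            // Sketch only: reads the whole file into memory up front.
            ByteBuffer file = ByteBuffer.wrap(Files.readAllBytes(Paths.get("big.file")));

            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(8080));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {
                selector.select(); // block until some socket is ready
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        // duplicate() gives this client a private read position
                        client.register(selector, SelectionKey.OP_WRITE, file.duplicate());
                    } else if (key.isWritable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        ByteBuffer remaining = (ByteBuffer) key.attachment();
                        client.write(remaining); // sends only what the kernel buffer accepts
                        if (!remaining.hasRemaining()) { // done with this client
                            key.cancel();
                            client.close();
                        }
                    }
                }
            }
        }
    }

No single write ever blocks, so a slow client only delays its own transfer, not everyone else's.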
The simplest approach is to create a thread per request, but it's certainly not the most scalable approach. I think at this time basically all high-performance web servers use various asynchronous approaches built on things like epoll (Linux), kqueue (BSD), or IOCP (Windows).
Since you don't provide any information about your performance requirements, and since all the non-threaded approaches require restructuring your application to use these often-complex asynchronous techniques (as described in the C10K article and others found from there), for now your best bet is just to use the threaded approach.
Please update your question with concrete requirements for performance and other relevant data if you need more.
For background, this may be useful reading: http://www.kegel.com/c10k.html
I think you are using your callback to handle a single connection. That is not how it was designed. Your callback has to handle the however-many-thousand connections you are planning to serve; i.e., from the file descriptor you get as a parameter, you have to know (by reading the global variables) what to do with that client: read(), send(), or whatever else.