I am trying to develop a simple web API for testing using Racket's web server. The requirements are:
Respond to requests on the port with a callback running in a new thread.
Read the header values and POST data.
Write response headers and content to the port.
I do not want to take on the complexity of stateful versus stateless servlets; essentially, I want to avoid the overhead of managing continuations.
By avoiding calls to any send/... function other than send/back, serve/servlet can be used without invoking continuation handling.
Calling (serve/servlet start #:manager (create-none-manager #f) ...), with create-none-manager taken from web-server/managers/none, will cause an error if there is an attempt to use continuations.
Custom headers/content can be created using a "raw" response structure.
Alternatively, serve/launch/wait can be used with a dispatcher built from web-server/dispatchers/dispatch-lift. Raw data may even be written directly to the port.
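For reference, here is a minimal sketch of the serve/servlet approach; the handler body and the X-Example header are illustrative, not prescribed by the discussion:

    #lang racket
    (require web-server/servlet
             web-server/servlet-env
             web-server/managers/none)

    ;; Read the raw request headers and POST data, then build the
    ;; response "by hand" with the raw response structure.
    (define (start req)
      (define hdrs (request-headers/raw req))    ; list of header structs
      (define body (request-post-data/raw req))  ; bytes, or #f for a GET
      (response/full
       200 #"OK"
       (current-seconds) TEXT/HTML-MIME-TYPE
       (list (make-header #"X-Example" #"demo")) ; a custom response header
       (list (string->bytes/utf-8
              (format "~a headers; body: ~a" (length hdrs) body)))))

    ;; The "none" manager raises an error on any attempt to capture a
    ;; continuation, so plain request/response handling is all that is allowed.
    (serve/servlet start
                   #:manager (create-none-manager #f)
                   #:servlet-regexp #rx""  ; route every request to start
                   #:command-line? #t)     ; don't launch a browser

The web server handles each connection in its own thread, so the callback-in-a-new-thread requirement comes for free.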
Reference: Original discussion on Racket discussion list.
Related
Currently we are implementing REST APIs using Spring Boot. Since our APIs are growing in number, we are thinking about implementing them using a different approach.
The approach is as follows:
Expose a single service to receive all the HTTP requests.
We will have the URIs configured in a database table to call the next set of services. These services are configured to listen for particular JMS messages.
The next set of services will receive the JMS messages and process the data.
Below are my questions:
Will the above approach still represent the REST architecture?
What are the downsides of the above approach, other than network latency (which we are aware of)?
Which benefits of the REST architecture will we be missing?
Or can we just say that our approach is the REST architecture done differently?
You're making two major choices, and each can be decided separately:
1) Having a single HTTP service
2) Using JMS as the communication between this service and the underlying microservices
Regarding #1, if you do this, you can no longer call your services REST, since the whole point of REST is to use HTTP verbs together with your domain objects for a predictable set of endpoints: GET on /objects/{id} means the object is being fetched, POST on /objects means a new object is being created, and so on. Now, this is OK; you can do it this way and it can work, though it will be "non-standard".
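To make the contrast concrete, here is a hedged sketch of that conventional verb mapping in Spring MVC; the Order resource and the in-memory store are made up for illustration:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;
    import org.springframework.web.bind.annotation.*;

    // Illustrative only: a minimal in-memory controller showing the
    // standard verb-per-endpoint convention that REST relies on.
    @RestController
    @RequestMapping("/orders")
    public class OrderController {
        private final Map<Long, String> orders = new ConcurrentHashMap<>();
        private final AtomicLong ids = new AtomicLong();

        @GetMapping("/{id}")  // GET /orders/42 fetches order 42
        public String get(@PathVariable Long id) {
            return orders.get(id);
        }

        @PostMapping          // POST /orders creates a new order
        public Long create(@RequestBody String body) {
            long id = ids.incrementAndGet();
            orders.put(id, body);
            return id;
        }
    }

Funneling everything through a single endpoint throws this predictability away, which is exactly why the result is hard to call REST.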
In fact, you might want to check out GraphQL (https://www.howtographql.com/basics/1-graphql-is-the-better-rest/), as it's pretty close to what you're trying to do.
These days, REST and GraphQL seem to be the two popular approaches.
Another way to do REST, if you're looking to simply expose REST services on your domain objects without having to write a lot of code, is Spring Data REST: https://spring.io/projects/spring-data-rest. If you're comfortable with Spring already, this should be pretty easy to pick up.
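As a sketch of how little code that takes (the Order entity is hypothetical, and spring-boot-starter-data-rest is assumed to be on the classpath):

    import org.springframework.data.repository.PagingAndSortingRepository;
    import org.springframework.data.rest.core.annotation.RepositoryRestResource;

    // Illustrative only: this single interface is enough for Spring Data REST
    // to expose GET/POST/PUT/DELETE endpoints under /orders for a
    // hypothetical Order JPA entity; no controller code is needed.
    @RepositoryRestResource(collectionResourceRel = "orders", path = "orders")
    public interface OrderRepository
            extends PagingAndSortingRepository<Order, Long> {
    }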
For #2, your choice of communication between your single gateway service and the underlying services: do most of your calls require synchronous answers, such as a UI asking for data to display in a browser or phone? If so, JMS is not a good approach. JMS would be an OK approach if the majority of your services were asynchronous - for example, someone submitting a stock trade request. The UI would just need to know the request was submitted; it will actually be processed some time later and the result will be fetched asynchronously.
Without knowing much about your application, I would recommend sticking with HTTP between your services for simplicity's sake, unless there is a good reason to switch to JMS.
I'm trying to figure out the best way to implement a real websocket app using akka-http and akka-streams. What I'm mostly looking for is simplicity, which I'm just not getting now.
Assume you have a fairly complex pipeline which needs to discriminate between multiple requests and sometimes send the request to an actor for processing, sometimes issue a mongo query and return the response, sometimes perform a PUT on a REST API, etc.
Unlike the simple chat application examples out there, there are at least 3 problems that arise which seem to not have a standard solution:
Conditionally skipping the response, e.g., because the client does not expect a response to this particular request. If I use the typical Flow from Message to Message, then once the request has hit its target I need to stop it from propagating further back to the websocket. This can be done with a special filter (which involves some pain) or in various other ways (e.g., Conditionally skip flow using akka streams), but it adds a lot of boilerplate and complexity. Ideally, I'd like to be able to insert 'Skip' messages that just skip everything else (see the sketch after this list).
Routing incoming messages to the appropriate place (e.g., an actor, Mongo). Once again, the solutions I can find involve a lot of boilerplate (e.g., broadcast and then filter out at the branches which do not handle this kind of request). Ideally, I should be able to define something like: if the message is X, send it there; if the message is Y, send it there; and so on.
Propagating errors back to the client. Very similar to the routing problem described above. For example, if the JSON parse fails, I need to add a separate path (broadcast + merge) along which I send an error message, but I cannot easily reuse that same path if an error occurs at the next stage and I want to propagate it to the user. Ideally, I should have one single error-handling path that can be entered at any arbitrary point in the flow, bypasses the rest of the flow entirely, and goes back to the client.
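To make problems 1 and 2 concrete, here is roughly the level of simplicity I'm after; the message prefixes and the handlers standing in for the actor/Mongo/REST targets are made up:

    import akka.NotUsed
    import akka.http.scaladsl.model.ws.{Message, TextMessage}
    import akka.stream.scaladsl.Flow

    object WsRouting {
      // Hypothetical handlers standing in for an actor call and a Mongo query.
      def handleCommand(text: String): Unit = println(s"command: $text")
      def queryMongo(text: String): String  = s"result for $text"

      // Route on message content; emit zero messages to "skip" the response,
      // one message to reply.
      val handler: Flow[Message, Message, NotUsed] =
        Flow[Message].mapConcat {
          case TextMessage.Strict(t) if t.startsWith("cmd:") =>
            handleCommand(t)                   // fire-and-forget
            Nil                                // no response expected
          case TextMessage.Strict(t) if t.startsWith("query:") =>
            TextMessage(queryMongo(t)) :: Nil  // exactly one response
          case _ =>
            TextMessage("error: unrecognised message") :: Nil
        }
    }

Even error propagation (problem 3) fits this shape when a single stage fails, but it breaks down as soon as the handlers are themselves multi-stage flows.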
At the moment, I have this insanely complex graph spanning 15 lines with paths going through >20 different stages and I'm really worried about keeping the complexity of this solution in check. The DSL is mostly unreadable at this size. I could of course modularize a bit better, but this feels like an insane amount of trouble for something that should be a lot simpler.
Am I missing something? Am I insane for considering akka-streams for such a task? Any ideas or code examples that could allow me to rein in all that complexity?
Thanks in advance!
This is a very wide-ranging question and may not be answerable in its current form.
Akka HTTP addresses many of these concerns in its HTTP handling layers (e.g. empty responses, routing, returning errors). Could you use some of the lessons learnt there and apply them to your system? Or, perhaps better, could you convert your system from using websocket communication into using HTTP communication and use that code directly?
I want to share a Mojo::Transaction::WebSocket object between processes.
The reason for this is that I am building a websocket chat and I don't want to limit Mojolicious to run only with one worker.
Storable did not work for me; it just gives me weird errors.
Any ideas would be appreciated.
There are various ways you can achieve this. Sharing the websocket itself would be hard: it requires a solid understanding of process forking/threading and of sharing file descriptors, plus knowledge of the Mojolicious foundation code, which would very likely need to be changed.
If your aim is to load balance, or to perform some long-running task, you're better off having your Mojo application take the request and add it to a queue system such as Redis. You can have multiple processes listening for specific requests, reading the payload, and sending a response back through the queue.
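As an illustrative sketch of that queue idea using Redis pub/sub (the channel name and client bookkeeping are made up, and a real Mojolicious app would want a non-blocking Redis client driven by the IOLoop rather than this blocking loop):

    use strict;
    use warnings;
    use Redis;

    my %clients;  # this worker's websocket connections, keyed by id

    # Publishing side: the worker that received a chat message broadcasts it.
    my $pub = Redis->new(server => '127.0.0.1:6379');
    $pub->publish('chat', "hello from worker $$");

    # Subscribing side: every worker relays the broadcast to its own clients.
    my $sub = Redis->new(server => '127.0.0.1:6379');
    $sub->subscribe('chat', sub {
        my ($message) = @_;
        $_->send($message) for values %clients;
    });
    $sub->wait_for_messages(1) while 1;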
If you just want to be able to access the internals of your Mojo application for other purposes, consider providing a RESTful endpoint with the data you wish to publish.
Alternatively, you can look at remote procedure calls (RPC), which will allow your Mojo process to call functions and send data to other processes. Look at RPC::Simple as an example.
I have the following simple algorithm in mind:
Client sends a request to my spray application.
Spray receives the request; as multiple requests come in, the load on Spray grows.
If the current load is too high, Spray returns HTTP 503; otherwise it starts processing the request.
How can I measure Spray's current load?
Also, as I understand it, Spray uses Akka internally, which can be scaled out by adding nodes. How can I manage the load across additional nodes?
Spray itself uses reactive I/O and can handle very high loads, probably higher than any custom code "protecting" it could handle, so don't worry about trying to protect the Spray system itself. If you've got complex processing logic that might take a while to handle certain requests, it might make sense to put a protective throttle around that processing logic, using something like http://letitcrash.com/post/28901663062/throttling-messages-in-akka-2 . In the case where the queue is full, you can simply complete(StatusCodes.ServiceUnavailable).
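For illustration, here is a crude way to shed load around one expensive route with a plain in-flight counter; maxInFlight, doWork and the route itself are assumptions for the sketch, not spray API:

    import java.util.concurrent.atomic.AtomicInteger
    import scala.concurrent.Future
    import akka.actor.ActorSystem
    import spray.http.StatusCodes
    import spray.routing.SimpleRoutingApp

    object ThrottledService extends App with SimpleRoutingApp {
      implicit val system = ActorSystem("throttled")
      import system.dispatcher

      val maxInFlight = 100                  // tune to your capacity
      val inFlight    = new AtomicInteger(0)

      def doWork(): Future[String] = Future { "done" } // stand-in for real work

      startServer(interface = "localhost", port = 8080) {
        path("work") {
          get {
            if (inFlight.incrementAndGet() > maxInFlight) {
              inFlight.decrementAndGet()
              complete(StatusCodes.ServiceUnavailable)  // shed the request
            } else {
              onComplete(doWork()) { result =>
                inFlight.decrementAndGet()
                complete(result.getOrElse("error"))
              }
            }
          }
        }
      }
    }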
First, here's my original question that spawned all of this.
I'm using Appcelerator Titanium to develop an iPhone app (eventually Android too). I'm connecting to CouchDB's port directly using Titanium's Titanium.Network.TCPSocket object. I believe it utilizes the Apple SDK's CFSocket/NSStream classes.
Once connected, I simply write:
'GET /mydb/_changes?filter=app/myfilter&feed=continuous&gameid=4&heartbeat=30000 HTTP/1.1\r\n\r\n'
directly to the socket. It keeps it open "forever" and returns JSON data whenever the db is updated and matches the filter and change request. Cool.
I'm wondering, is it ok to connect directly to CouchDB's socket like this, or would I be better off opening the socket to node.js instead, and maybe using this CouchDB node.js module to handle the CouchDB proxy through node.js?
My main concern is performance. I just don't have enough experience with CouchDB to know whether hitting its socket and passing faux HTTP requests directly is good practice. I'm looking for experience and opinions on any ramifications, or alternative suggestions.
It's me again. :-)
CouchDB inherits super concurrency handling from Erlang, the language it was written in. Erlang uses lightweight processes and message passing between those processes to achieve excellent performance under high concurrent load. It will take advantage of all cpu cores, too.
Node.js runs a single process and basically only does one thing at a time within that process. Its event-based, non-blocking I/O approach does allow it to multitask while it waits for chunks of I/O, but it still only does one thing at a time.
Both should easily handle tens of thousands of connections, but I would expect CouchDB to handle concurrency better (and with less effort on your part) than Node. And keep in mind that Node adds some latency if you put it in front of CouchDB. That may only be noticeable if you have them on different machines, though.
Writing directly to Couch via TCPSocket is a-ok as long as you write a well-formed HTTP request that follows the spec. (You're not passing a faux request; that's a real HTTP request you're sending, just like any other.)
Note: HTTP 1.1 does require you to include a Host header in the request, so you'll need to correct your code to reflect that, OR just use HTTP 1.0, which doesn't require it, to keep things simple. (I'm curious why you're not using Titanium.Network.HTTPClient. Does it only give you the request body after the request finishes or something?)
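For example, the HTTP/1.1 form of your request would look like this (the hostname is illustrative; the blank line terminates the headers):

    GET /mydb/_changes?filter=app/myfilter&feed=continuous&gameid=4&heartbeat=30000 HTTP/1.1
    Host: couch.example.com
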
Anyway, CouchDB can totally handle direct connections and--unless you put a lot of effort into your Node proxy--it's probably going to give users a better experience when you have 100k of them playing the game at once.
EDIT: If you use Node, write an actual HTTP proxy. That will run a lot faster than using the module you provided and will be simpler to implement. (Rather than defining your own API that then makes requests to Couch, you can just pass certain requests on to CouchDB and block others, say, for security reasons.)
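A minimal sketch of such a pass-through proxy; the path check and port numbers are illustrative:

    var http = require('http');

    // Illustrative only: forward _changes requests to CouchDB, block the rest.
    http.createServer(function (req, res) {
      if (req.url.indexOf('/mydb/_changes') !== 0) {
        res.writeHead(403, { 'Content-Type': 'text/plain' });
        return res.end('Forbidden');
      }
      var upstream = http.request({
        host: '127.0.0.1',
        port: 5984,                // CouchDB's default port
        method: req.method,
        path: req.url,
        headers: req.headers
      }, function (couchRes) {
        res.writeHead(couchRes.statusCode, couchRes.headers);
        couchRes.pipe(res);        // stream the continuous feed straight through
      });
      req.pipe(upstream);          // forward any request body
    }).listen(5985);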
Also take a look at how "multinode" works:
http://www.sitepen.com/blog/2010/07/14/multi-node-concurrent-nodejs-http-server/