Understanding the request lifecycle of a Play! application - scala

I am new to the Play! web framework, and in order to understand how it works, as well as how it compares with other web frameworks, I would like to be able to trace, in the Play! source code, the request lifecycle from start to finish. I will be using the Scala implementation of Play!.
Because most of my experience has been with PHP frameworks, I am used to starting with an index.php file in a web root directory and reading down through any included config/bootstrapping scripts, dependency injection handling, request routing, action dispatching, and finally view/response rendering.
I have not been able to identify a similar point of entry for a Scala/Play! application, and I would very much appreciate a push in the right direction. A walkthrough of the request lifecycle would of course be very generous, but all I really need is to be shown the entry point.

By default the Play framework uses a built-in HTTP server (based on Netty), so the closest analogy with PHP is that Play is both Apache and PHP at the same time.
PHP uses a legacy, 'CGI-like' paradigm: to serve a single HTTP request, your program is started, and once the request has been served it is terminated. In CGI, the web server starts an external program -- your script -- for each HTTP request and returns its output. Older versions of PHP were designed only for CGI; later versions added other ways to interact with the server, because CGI is very slow, but the core principle remained the same.
Most other web application technologies use a different approach: your web application is started once and stays running, so a single running instance keeps serving requests (and can serve several in parallel). It does not die after serving a single request, as in PHP. This consumes far fewer resources, because the application is not restarted for every request, and it is only slightly harder to work with: most of the request processing is hidden inside the framework, and your app only needs to expose controller methods that are called when a request arrives and that return a response.
It also allows for more flexibility: for example, background processing can be started right inside the web app, with no need for external worker processes. Play ships with the Akka library, which is very convenient for this.
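For example, a rough sketch of an in-process background job using Akka's scheduler might look like this (the BackgroundJobs object and the cleanup task are made up for illustration):

    import scala.concurrent.duration._
    import scala.concurrent.ExecutionContext.Implicits.global
    import akka.actor.ActorSystem

    object BackgroundJobs {
      // The ActorSystem lives as long as the application, so scheduled work
      // keeps running between requests -- something CGI-style PHP cannot do.
      val system = ActorSystem("background")

      def start() =
        // Hypothetical cleanup task: runs 1 minute after startup, then every 10 minutes.
        system.scheduler.schedule(1.minute, 10.minutes) {
          println("cleaning up expired sessions...") // placeholder for real work
        }
    }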
As more and more web applications use an Ajax/REST approach instead of serving heavyweight web pages for every request, this becomes increasingly important. And it is almost impossible to build a realtime messaging backend in PHP with good performance, regardless of the transport used (polling, long polling, iframe with multipart).
Compared to PHP MVC frameworks, though, from the point of view of a developer writing views, models and controllers, Play is very similar. In both PHP MVC frameworks and Play, the framework calls a controller method or function that must return a response, views are usually templates, and models are usually ORM bindings to a relational database.

I think this is the file you mean:
https://github.com/playframework/playframework/blob/master/framework/src/play-netty-server/src/main/scala/play/core/server/NettyServer.scala
Play is a JVM application that starts listening on a given port. The listening is done with the Netty library, which understands different network protocols (most importantly HTTP). Once Netty has decoded the request, it hands control to the Play framework.
The Play framework then uses the Global object in combination with the routes file to determine which Action to invoke.
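As a rough illustration of that dispatch in a Play 2 (Scala) project, a line in conf/routes maps the request to a controller Action; the Users controller and its show action below are hypothetical:

    # conf/routes
    GET     /users/:id      controllers.Users.show(id: Long)

    // app/controllers/Users.scala
    package controllers

    import play.api.mvc._

    object Users extends Controller {
      // The router invokes this for GET /users/:id and sends back whatever it returns.
      def show(id: Long) = Action {
        Ok(s"user $id") // a real app would render a template or JSON here
      }
    }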

Play is more of a RESTful framework (see http://en.wikipedia.org/wiki/Representational_state_transfer) than a typical template-based framework such as JSP or JSF with a request-lifecycle concept, although it has templating support too. The basic idea is to have the interaction with the server based on pure data such as JSON, with most of the code that updates the DOM written in JavaScript and running only on the client, which is more flexible and often simpler and more efficient.
In Play you simply create methods for sending data to the browser by defining a method in your Scala class and mapping it in the routes file. As in a typical web development workflow, you also place your HTML files in the public resource folder (or create a template); those pages typically make an Ajax call to that method when executed in the browser.
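A minimal data endpoint of that kind might look like the following sketch (the Api controller, the status action and the route are assumptions, not part of any real project):

    package controllers

    import play.api.mvc._
    import play.api.libs.json.Json

    object Api extends Controller {
      // Mapped in conf/routes as:  GET  /api/status  controllers.Api.status
      // A page served from the public folder would fetch this with an Ajax call.
      def status = Action {
        Ok(Json.obj("status" -> "ok", "uptimeSeconds" -> 42))
      }
    }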

Related

Do isomorphic apps imply that back end and front end stand together?

I've been getting into isomorphic application development with Angular 2 Universal, but there is one thing I cannot get clear in my head.
My understanding is that keeping the back and front ends in separate modules is good practice, but it doesn't seem to be a common pattern when working with MEAN applications.
So, I am about to start a project that could scale, and I'd like to add server-side rendering in the future, but I don't know which approach to go with. Honestly, I feel more comfortable keeping the backend and frontend separate, but if I do, will it still be possible to implement server-side rendering later on?
Besides, supposing I duplicate the index.html on both sides, will the server be able to delegate control to the client once the first server-side render is completed? I can't imagine how that delegation would work, given that they are not in the same project.
Thanks in advance.
As I understand it, you are talking about rendering the UI, and this is the frontend part of your application, even if you do the pre-rendering work on the server.
This pre-rendering is only an optimization; you can keep it in a separate code layer, but the whole idea of isomorphic JavaScript is to re-use the same code on the client and the server. Trying to duplicate code and/or templates is therefore not a good idea (it never is).
If you really want to keep things separated, think on splitting your application into more services:
backend server application - some kind of rest API on top of the database which holds all the business logic (first node.js application)
frontend server application - another node.js application which gets data from API via HTTP requests and does server-side pre-rendering (second node.js application)
frontend - all the code running in browser
This way, the "frontend server application" can initially be a simple proxy between the "frontend" and the "backend server application".
Later on you can extend it with server-side rendering.
Important note: if you are going to develop the application without pre-rendering on the server and add it at a later stage, you need to take into account that not all browser-side JavaScript features will work on the server (for example, manipulation of native DOM elements); see the best-practices section in the Angular Universal readme.

A good way to structure AngularJS client code with Play 2.1 as backend

I have a Play 2.1 application.
Initially, I used the default template mechanisms from Play 2.1, until I learned AngularJS.
Now, I clearly want my client side to be an AngularJS app.
However, while searching the web, I have found no clear way to achieve this. The options seem to be:
Letting Play behave as a simple RESTful application (deleting the view folder) and making a totally distinct project to build the view (AngularJS app initialized by grunt.js).
Advantage: Likely to be less messy, and the front-end and backend teams can easily work separately.
Drawback: Need another HTTP server for the AngularJS app.
Try to fully integrate the AngularJS app into the traditional Play workflow.
Drawback: With a fairly complex framework like AngularJS, this leads to confusing template management, for instance: scala.html (for Play) / tpl.html (for Angular) ... => messy.
Making a custom folder within the Play project, distinct from the folders created by the Play scaffolding. Let's call it myangularview instead of the traditional view. Then publish the static content generated by grunt.js into Play's public folder, so that it is reachable from the browser through Play's routing.
Advantage: SRP between components is still fairly well respected, and there is no need to run another light HTTP server for the client side as in option 1.
I have pointed out my own view of the advantages and drawbacks.
What would be a great way to achieve the combination of Play with Angular?
Yes, I'm answering my own question :)
I came across this way of doing:
http://jeff.konowit.ch/posts/yeoman-rails-angular/
Rails?? No matter what the framework is, the need remains exactly the same.
It advocates a real separation between the API (backend side) and the front-end side (which, in this case, makes AJAX calls to the backend server).
Thus, what I've learned is:
During the development phase, a developer uses two servers: localhost on two distinct ports.
During the production phase, the front-end assets are bundled into the backend side (the article deals with a kind of public folder, aimed at serving static content: HTML, Angular templates (for instance), CSS, etc.). Advantage? => one single server exposes both the API and the static assets for the UI.
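With Play as the backend, this typically boils down to a couple of entries in conf/routes pointing at the built-in Assets controller, so the API and the front-end files are served by the same server (the exact paths below are only illustrative):

    # conf/routes -- serve the Angular build output from Play's public folder
    GET     /               controllers.Assets.at(path="/public", file="index.html")
    GET     /assets/*file   controllers.Assets.at(path="/public", file)

    # API routes live alongside them, e.g.
    GET     /api/users      controllers.Users.list()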
With this organization, tools like Yeoman can bring some really handy things to developers, for instance the livereload feature. :)
Of course, during the development phase we end up with two different origins (localhost:3000 and localhost:9000, for instance), which causes issues for traditional Ajax requests. Then, as the article points out, a proxy can be really useful.
I really find this whole practice very elegant and pleasant to work with.
There was an interesting discussion on the Play mailing list a couple of days ago about the frontend stack/solution; there could be something in it for you, as quite a few people seem to be using Angular: https://groups.google.com/forum/#!searchin/play-framework/frontend/play-framework/IKdOowvRH0s/tQsD9zp--5oJ

Adapter Proxy for Restful APIs

This is a general 'what technologies are available' question.
My company provides a web application with a RESTful API. However, it is too slow for my needs and some of the results are in an awkward format.
I want to wrap their RESTful server with a proxy/adapter server, so that when you connect to the proxy you get the RESTful API I wish the real one provided.
So it needs to do a few things:
passthrough most requests
cache some requests
do some extra requests on the original server to detect if a request is cacheable
for instance: there is a request for a field in a record, GET /records/id/field, which might be slow, but there is a fingerprint request, GET /records/id/fingerprint, which is always fast. If there exists a cached copy of GET /records/1/field2 for the fingerprint feedbeef, then I need to check that the original server still reports the fingerprint feedbeef before serving the cached version (see the sketch after this list).
fix headers for some responses - e.g. content-type, based upon the path
do stream processing on some large content, for instance
GET /records/id/attachments/1234
returns a 100Mb log file in text format
remove null characters from files
optionally recode the log to filter out irrelevant lines, reducing the load on the client
cache the filtered version for later requests.
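To make the fingerprint idea above concrete, here is a very rough sketch of what that check could look like if I built the proxy with Play (Scala), using its WS client and cache plugin; the upstream host, cache keys and route are all assumptions from my example:

    package controllers

    import scala.concurrent.Future
    import play.api.Play.current
    import play.api.cache.Cache
    import play.api.libs.ws.WS
    import play.api.libs.concurrent.Execution.Implicits.defaultContext
    import play.api.mvc._

    object Proxy extends Controller {
      val upstream = "http://original-server.example.com" // assumed upstream base URL

      // GET /records/:id/:field -- serve a cached copy only while the fingerprint still matches.
      def field(id: String, field: String) = Action.async {
        WS.url(s"$upstream/records/$id/fingerprint").get().flatMap { fp =>
          val key = s"$id/$field/${fp.body.trim}"
          Cache.getAs[String](key) match {
            case Some(cached) =>
              Future.successful(Ok(cached)) // fingerprint unchanged: cached copy is still valid
            case None =>
              WS.url(s"$upstream/records/$id/$field").get().map { resp =>
                Cache.set(key, resp.body)   // filtering/recoding would happen before caching
                Ok(resp.body)
              }
          }
        }
      }
    }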
While I could modify the client to achieve this functionality, such code would not be re-usable for other clients (different languages), and complicates the client logic.
I had a look at whether clojure/ring could do it, and while there is a nice little proxy middleware for it, it doesn't handle streaming content as far as I can tell - the whole 100Mb would have to be downloaded. Also it doesn't include any cache logic yet.
I took a look at whether squid could do it, but I'm not familiar with the technology, and it seems mostly concerned with passing through requests rather than modifying them on the fly.
I'm looking for hints where I might find the correct technology to implement this. I'm mostly language agnostic if learning a new language gets me access to a really simple way to do it.
I believe you should choose a platform that is easier for you to implement your custom business logic on. The following web application frameworks provide easy connectivity with REST APIs, and allow you to create a web application that could work as a REST proxy:
Play framework (Java + Scala)
express + Node.js (Javascript)
Sinatra (Ruby)
I'm more familiar with Play, which I know provides caching utilities you could find useful, and which is also extensible through a number of plugins.
If you are familiar with Scala, you could also have a look at Finagle. It is a framework built by Twitter's infrastructure team to provide protocol-agnostic connectivity. It might be overkill for a REST-to-REST proxy, but it provides abstractions you might find useful.
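As a taste of those abstractions, a bare-bones pass-through proxy in a recent Finagle version can be as small as the sketch below (the host names and port are assumptions); caching and response rewriting would then be layered on as filters composed in front of the client:

    import com.twitter.finagle.Http
    import com.twitter.util.Await

    object ProxyServer extends App {
      // A Finagle HTTP client is itself a Service[Request, Response],
      // so serving it directly yields a trivial pass-through proxy.
      val upstream = Http.newService("original-server.example.com:80")
      val server   = Http.serve(":8080", upstream)
      Await.ready(server)
    }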
You could also look at third-party services like Apitools, which lets you create a proxy programmatically (in Lua). Apirise is a similar service (of which I'm a co-founder) that aims to provide similar functionality with a user-friendly UI.
Beeceptor does exactly what you want. It plugs in between your web app and the original API to route requests.
For your use case of caching a few responses, you can create a rule; that way the request will not hit the original endpoint.
The requests to the original APIs can be mocked, and you can inspect the responses.
You can simulate delays.
(Note: it is a shameless plug, I am the author of Beeceptor and thought it should help you and other developers.)
https://github.com/nodejitsu/node-http-proxy looks useful - although I don't yet know if it can do stream processing for transcoding.

What is the underlying technology behind ServiceAsync interface?

I developed a web application (it works properly) which registers a user in the system and allows the user to upload a file to the system via HTTPS. The client-side code is developed entirely with GWT 2.4, and the back end consists of several servlets. Except for the upload code, all client-server communication is done using the ServiceAsync interface, as is common practice in GWT. The upload code is based on a form which communicates with the upload servlet directly.
This project was developed as course work, and my professor is keen on knowing the underlying architecture of the Google Web Toolkit, specifically focused on the client-server communication. His question was:
"How the client code knows the server's url so that all the communication is done?"
His question is legitimate for the ServiceAsync interface. I am calling a function on the server side, which seems interesting to him, and he wants to know the underlying process behind it.
For uploading, I just defined uploadForm.setAction(GWT.getModuleBaseURL()+"upload"); where upload is the name of the upload servlet in web.xml.
I told him that the compiler generates JavaScript code which contains all of the web application code (the whole system is built at compile time) and that the URL to the servlet is placed in that script file, but the answer did not satisfy him. Please let me know the inner workings of client-server communication with GWT.
Please give me some answers that can help my Professor to understand GWT's asynchronous client-to-server RPC communication.
The underlying technology is shown here as a diagram. Google says "GWT provides an RPC mechanism based on Java Servlets to provide access to server side resources. This mechanism includes generation of efficient client and server side code to serialize objects across the network using deferred binding."
The client knows the URL to query because you would have annotated your service interface with a @RemoteServiceRelativePath tag. This associates the service with a default path relative to the module base URL. That URL is where the JavaScript sends your request.
There's a lot more to learn about GWT's RPC if you care to, and you can start picking it apart here and here.
There are multiple ways in GWT to bind an RPC service to a specific URL.
The first is the @RemoteServiceRelativePath annotation, which is placed on the synchronous interface. Using a deferred-binding rule, GWT will discover this URL and set it on the service instance automatically.
The second is casting the instance of the GWT-RPC async service to ServiceDefTarget and setting the URL manually.
But this answer alone will not satisfy your professor, because most likely he will want to know other details, so I recommend that you learn how exactly GWT-RPC works.
Regarding the basic question of how the client knows the URL of the server, it sounds as though the professor may be asking about the full URL of the site (the domain name), and not just the sub-directories for the services as they are defined in @RemoteServiceRelativePath and the web.xml.
For this more fundamental aspect of the question, I think the "Same Origin Policy" (SOP) that browsers have for javascript security could be an important part of the answer. This is explained in one of the GWT FAQs. The first thing that the browser is doing on the client side (after the HTTPS connection is established, which I think could be another important part of the answer) is reading the host html file, where the bootstrap nocache.js file is referenced. Once this file is loaded, the SOP is going to guarantee that all of the subsequent JS application files are coming from the same server as the bootstrap and the host html files. Once the application files are loaded, then everything is happening within that context, with specific internal url paths being defined for RPCs as already mentioned.

Embedding GWT application in ChromiumEmbedded

I have read through the Chromium Embedded usage and looked at the cefclient application. Now I would like to provide my GWT application as a standalone application to my customers. Is it possible to package the GWT client code using Chromium Embedded?
I am not sure how to make the RPC calls to the server if it is packaged in CEF.
I think you need to include an embedded web server in your application and serve the generated GWT application files from it.
Since the URL for your server will be different, you could disable the same-origin policy in Chromium Embedded to use normal RPC calls, but it might be better to use cross-domain calls as described in Google's tutorial.