I have an application that worked fine when it was in the root of the site. I moved it into a subdirectory to test compatibility and it breaks. The structure is as follows:
/cfc/
    rest/  -- REST endpoints
    model/ -- ORM model mappings
    util/  -- some util classes
It's a pretty simple CRUD app that also gets the metadata for the models for use in the front end. Basically, any call from the REST endpoints, or from CFCs called by the REST endpoints, encounters an error when dealing with anything that requires pathing. For example:
getComponentMetaData("models.table");
errors with "cannot find component models.table". This uses a mapping to /cfc/model; if I try cfc.model.table, the same thing happens, as does ApplicationName.cfc.model.table.
I have a feeling this is due to how CF registers the REST endpoints, since it works as a root application but fails like this when it's in a subdirectory. Does anyone have insight into how CF handles the registration, or know of a workaround?
When writing a RESTful API that needs to access different environments, such as a lab/test database and a production database, what are the best practices for setting up the API?
Should there be a @PathParam?
/employee/{emp_id}/{environment}
/{environment}/employee/{emp_id}/
Should there be a @QueryParam?
/employee/{emp_id}/?environment="test"
/employee/{emp_id}/?environment="prod"
Should there be a field in the payload?
{"emp_id":"123","environment":"test"}
{"emp_id":"123","environment":"production"}
In fact, I see two ways to handle this. Which one to use comes down to what is most convenient to implement in your RESTful application.
Using a path parameter
With this approach, the environment should be a path parameter at the very beginning of the resource path, so the URL would look like this: /{environment}/employee/{emp_id}. Such an approach is convenient if you have several applications deployed under different root paths. For example:
/test: application packaged with the configuration for the test environment
/prod: application packaged with the configuration for the production environment
In this case, applications for each environment are isolated.
Using a custom header
You could also use a custom header to specify which environment to route to. GitHub uses something like that to select the version of the API to use; see this link: https://developer.github.com/v3/#current-version. It's not exactly the same thing, but you could have something like this:
GET /employee/{emp_id}
x-env: test
A reverse proxy could handle this header and route the request to the right environment.
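To make the header approach concrete, here is a minimal sketch of a controller that picks a backend based on the x-env header. It uses Play's Scala API purely for illustration (not necessarily your stack), and the names are made up:

    import play.api.mvc._

    object Employees extends Controller {
      // Reads the custom "x-env" header and picks an environment accordingly.
      def show(empId: String) = Action { request =>
        request.headers.get("x-env").getOrElse("prod") match {
          case "test" => Ok(s"employee $empId served from the test environment")
          case _      => Ok(s"employee $empId served from the production environment")
        }
      }
    }

In practice, as noted above, a reverse proxy in front of the application could do this routing instead, so the application code never needs to inspect the header.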
I'm not convinced by the approach of putting it in the payload, since an environment field isn't actually part of the representation of the employee resource. The query-parameter approach is similar, since such parameters apply to the request on the resource.
Hope it helps you,
I have a Play 2.1 application.
Initially, I used the default template mechanisms from Play 2.1, until I learned AngularJS.
Now, I clearly want my client side to be an AngularJS app.
However, while surfing the net, I find there is no clear way to achieve it:
1. Letting Play behave as a simple RESTful application (deleting the view folder) and building the view in a totally separate project (an AngularJS app initialized by grunt.js).
Advantage: likely to be less messy, and the front-end and back-end teams could easily work separately.
Drawback: it needs another HTTP server for the AngularJS app.
2. Trying to integrate the AngularJS app completely into the traditional Play workflow.
Drawback: with a fairly complex framework like AngularJS, this leads to confusing template management, for instance scala.html (for Play) vs. tpl.html (for Angular) => messy.
3. Making a custom folder within the Play project, distinct from the folders created by the Play scaffolding (let's call it myangularview instead of the traditional view), then publishing the static content generated by grunt.js into Play's public folder so it is reachable from the browser through Play's routing.
Advantage: SRP between components is still fairly well respected, and there is no need for another light HTTP server for the client side as in option 1.
I have pointed out my own view of the advantages and drawbacks.
What would be a great way to achieve the combination of Play with Angular?
Yes, I'm answering my own question :)
I came across this way of doing it:
http://jeff.konowit.ch/posts/yeoman-rails-angular/
Rails?? No matter what the framework is, the need remains exactly the same.
It advocates a real separation between the API (back-end side) and the front-end side (in this case making AJAX calls to the back-end server).
Thus, what I've learned is:
During the development phase, a developer uses two servers: localhost on two distinct ports.
During the production phase, the front-end elements are bundled into the back-end side (the article uses a kind of public folder to serve static content: HTML, Angular templates for instance, CSS, etc.). Advantage? => a single server exposes both the APIs and the static assets for the UI.
With this organization, tools like Yeoman can bring some really handy things to developers, for instance the livereload feature. :):)
Of course, during the development phase we end up with two different origins (localhost:3000 and localhost:9000, for instance), which causes issues for traditional AJAX requests. Then, as the article points out, a proxy can be really useful.
I really find this whole practice very elegant and pleasant to work with.
There was an interesting discussion on the Play mailing list a couple of days ago about front-end stacks/solutions; there could be something in it for you, as quite a few people seem to be using Angular: https://groups.google.com/forum/#!searchin/play-framework/frontend/play-framework/IKdOowvRH0s/tQsD9zp--5oJ
I am new to the Play! web framework, and in order to understand how it works, as well as how it compares with other web frameworks, I would like to be able to trace, in the Play! source code, the request lifecycle from start to finish. I will be using the Scala implementation of Play!.
Because most of my experience has been with PHP frameworks, I am used to starting with an index.php file in a web root directory and reading down through any included config/bootstrapping scripts, dependency injection handling, request routing, action dispatching, and finally view/response rendering.
I have not been able to identify a similar point of entry for a Scala/Play! application, and I would very much appreciate a push in the right direction. A walkthrough of the request lifecycle would of course be very generous, but all I really need is to be shown the entry point.
By default, the Play framework uses a built-in HTTP server (based on Netty). So the closest analogy with PHP is that Play is both Apache and PHP.
PHP uses a legacy 'CGI-like' paradigm: to serve a single HTTP request, your program is started, and once it has finished serving the request it is terminated. In CGI, to serve an HTTP request the web server starts an external program (your script) and returns its output. Older versions of PHP were designed only for CGI; later versions added other ways of interacting with the server, because CGI is very slow, but the core principle remained the same.
Most other web application technologies use a different approach: your web application is started once and then keeps running, so one running instance continues to serve requests (and can serve multiple requests in parallel). It does not die after serving a single request, as in PHP. This consumes far fewer resources, because the application is not restarted for every request, and it is only slightly harder to work with, since most of the request processing is hidden inside the framework and your app only needs to expose controller methods that are called when a request arrives and return a response.
It also allows for more flexibility; for example, background processing can be started right inside the web app, with no need for external server processes. Play ships with the Akka library, which is very convenient for this.
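For example, here is a rough sketch of kicking off background work from inside the running application with plain Akka actors; the actor, message, and names are invented for illustration:

    import akka.actor.{Actor, ActorSystem, Props}

    // A message describing some background job.
    case class Cleanup(batchId: Int)

    // The actor does the long-running work off the request thread.
    class Housekeeper extends Actor {
      def receive = {
        case Cleanup(id) => println(s"cleaning up batch $id")
      }
    }

    object Background {
      val system = ActorSystem("app-background")
      val housekeeper = system.actorOf(Props[Housekeeper], "housekeeper")

      // A controller (or a scheduler) can fire a message and return immediately.
      def schedule(id: Int): Unit = housekeeper ! Cleanup(id)
    }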
As more and more web applications use an Ajax and REST approach instead of serving heavyweight web pages each time, this becomes more and more important. And it is almost impossible to build a realtime messaging backend with good performance in PHP, regardless of the requesting technique (polling, long polling, iframe with multipart).
But compared with PHP MVC frameworks, from the point of view of the developer who creates views, models, and controllers, Play is very similar. In both PHP MVC frameworks and Play, the framework calls a controller method or function that should return a response; views are usually templates, and models are usually ORM bindings to a relational database.
I think this is the file you mean:
https://github.com/playframework/playframework/blob/master/framework/src/play-netty-server/src/main/scala/play/core/server/NettyServer.scala
Play is a Java application that starts listening on a given port. Listening is done using the Netty library, which understands different network protocols (most importantly HTTP). Once Netty knows what is happening, it hands control over to the Play framework.
The Play framework then uses the Global file in combination with the routes file to determine which Action to invoke.
Play is more of a RESTful framework (see http://en.wikipedia.org/wiki/Representational_state_transfer) than a typical template-based framework with a request-lifecycle concept, like JSP or JSF, although it does have templating support too. The basic idea is to base the interaction with the server on pure data such as JSON; most of the code that updates the DOM is written in JavaScript and runs only on the client, which is actually more flexible and a lot simpler and more efficient.
In Play you simply create your methods for sending data to the browser by defining a method in your Scala class and mapping it in a routes file. As in a typical web development process, you also place your HTML files in a public resource folder (or create a template), and these typically make an AJAX call to that method when executed in the browser.
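To make that concrete, here is a minimal sketch of such a method and its route mapping; the controller name, action, and path are invented for illustration:

    package controllers

    import play.api.mvc._
    import play.api.libs.json.Json

    object Api extends Controller {
      // Returns pure data (JSON) for client-side JavaScript to consume.
      def status = Action {
        Ok(Json.obj("status" -> "up", "version" -> 1))
      }
    }

    // A matching line in conf/routes (not Scala, shown here as a comment):
    // GET   /api/status   controllers.Api.status

An HTML page served from the public folder would then fetch /api/status with an AJAX call and update the DOM on the client.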
Basically, I started to design my project like this:
Play! Framework for the web GUI (consuming the RESTful service)
Spray framework for the RESTful service; it connects to the database, processes incoming data, and serves data to the web GUI
Database; only the service has rights to access it
Now I'm wondering if it's really the best possible design.
In fact, with Play! I could easily host both the web GUI and the service at once.
That would probably be much easier to test and deploy in simple cases.
In complicated cases where high performance is needed, I could still run one instance purely for the GUI and a few more just to act as services (even if each of them could still serve the full functionality).
On the other hand, I'm not sure whether it would hit performance too hard (the services will be processing a lot of data, not only from the web GUI). Also, isn't it mixing things that I should keep separate?
If I decide to keep them separate, should I allow database access only through the RESTful service? How do I resolve the problem of the service and the web GUI trying to use different versions of the database? Should I use a versioned REST protocol in that case?
----------------- EDIT------------------
My current system structure looks like this:
But I'm wondering whether it would make sense to simplify it by putting the RESTful service directly inside the Play! GUI web server.
----------------- EDIT 2------------------
Here is the diagram which illustrates my main question.
To put it another way: would it be bad to combine my service and web GUI and share the model? And why?
Because there are also a few advantages:
less configuration needed between the service and the GUI
no data transfer needed
no need to create a separate access layer (that could be a disadvantage, maybe, but in what case?)
no inconsistencies between the GUI and service models (for example, because of different protocol versions)
easier to test and deploy
no code duplication (normally we would need to duplicate a big part of the model)
That said, here is the diagram:
Why do you need the RESTful service to connect to the database? Most Play! applications access the database directly from the controllers. The Play! philosophy considers accessing your models through a service layer to be an anti-pattern. The service layer could be handy if you intend to share that data with other (non-Play!) applications or external systems outside your control, but otherwise it's better to keep things simple. But you could also simply expose the RESTful interface from the Play! application itself for the other systems.
Play! is about keeping things simple and avoiding the over-engineered nonsense that has plagued Java development in the past.
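To illustrate that last point, here is a rough sketch (the Employee case class and the lookup are invented) of a Play controller that serves JSON straight from the model, so the same application can double as the REST interface for other systems:

    package controllers

    import play.api.mvc._
    import play.api.libs.json.Json

    case class Employee(id: Long, name: String)

    object Employees extends Controller {
      implicit val employeeWrites = Json.writes[Employee]

      // Stand-in for a real lookup; in a typical Play app the controller
      // queries the model/database directly here.
      private def findById(id: Long): Option[Employee] =
        Some(Employee(id, "Jane Doe"))

      // JSON endpoint usable by the web GUI and by any external system.
      def showJson(id: Long) = Action {
        findById(id) match {
          case Some(e) => Ok(Json.toJson(e))
          case None    => NotFound
        }
      }
    }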
Well, after a few more hours of thinking about this, I think I've found a solution that will satisfy my needs. The goals I want fulfilled are:
The web GUI cannot make direct calls to the database; it needs to use a proper model, which will in turn use some object repository
It must be possible to test and deploy the whole thing as one package with minimal configuration (at least for the development phase; later it should be possible to easily switch to a more flexible solution)
There should be no code duplication (i.e. the same code in the service and the web GUI model)
If one approach turns out to be wrong, I need to be able to switch to the other
What I forgot to say before is that my service will have an embedded cache used to aggregate and process the data and then commit it to the database in bigger chunks. It's also shown in the diagram.
My class structure will look like this:
|- models
|  |- IElementsRepository.scala
|  |- ElementsRepositoryJSON.scala
|  |- ElementsRepositoryDB.scala
|  |- Element.scala
|  |- Service
|  |  |- Element.scala
|  |- Web
|     |- Element.scala
|- controllers
|  |- Element.scala
|- views
   |- Element
      |- index.scala.html
So it's like a normal MVC web app, except that there are separate model classes for the service and the web GUI, inheriting from the main one.
In Element.scala I will have an IElementsRepository object injected using DI (probably Guice).
IElementsRepository has two concrete implementations:
ElementsRepositoryJSON, which retrieves data from the service as JSON
ElementsRepositoryDB, which retrieves data from the local cache and the DB.
This means that, depending on the active DI configuration, both the service and the web GUI can get their data either from another service or from local/external storage.
So for early development I can keep everything in one Play! instance and use direct cache and DB access (through ElementsRepositoryDB), and later reconfigure the web GUI to use JSON (through ElementsRepositoryJSON). This also lets me run the GUI and the service as separate instances if I want. I can even configure the service to use other services as data providers (though for now I don't have such a need).
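For illustration, here is a rough sketch of how the repository abstraction and the Guice wiring could look. Only IElementsRepository and the two implementation names come from the structure above; the method signature, the Element fields, and the consumer class are assumptions:

    package models

    import com.google.inject.{AbstractModule, Guice, Inject}

    case class Element(id: Long, name: String)

    trait IElementsRepository {
      def findAll(): Seq[Element]
    }

    // Fetches elements from the RESTful service as JSON (details omitted).
    class ElementsRepositoryJSON extends IElementsRepository {
      def findAll(): Seq[Element] = Seq.empty // would call the service and parse JSON
    }

    // Reads from the local cache / database directly.
    class ElementsRepositoryDB extends IElementsRepository {
      def findAll(): Seq[Element] = Seq.empty // would query the cache/DB
    }

    // Switching this one binding moves the web GUI from direct DB access
    // to going through the service, without touching the consumers.
    class RepositoryModule extends AbstractModule {
      def configure(): Unit =
        bind(classOf[IElementsRepository]).to(classOf[ElementsRepositoryDB])
    }

    // A hypothetical consumer with the repository injected.
    class ElementModel @Inject() (repo: IElementsRepository) {
      def elements: Seq[Element] = repo.findAll()
    }

    object Wiring {
      val injector = Guice.createInjector(new RepositoryModule)
      val model = injector.getInstance(classOf[ElementModel])
    }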
More or less, the overall setup will look like this:
Well, I think there's no objectively right or wrong answer here, but I'll offer my opinion: I think the diagram you've provided is exactly right. Your RESTful service is the single point of access for all clients including your web front-end, and I'd say that's the way it should be.
Without saying anything about Play!, Spray or any other web frameworks (or, for that matter, any database servers, HTML templating libraries, JSON parsers or whatever), the obvious rule of thumb is to maintain a strict separation of concerns by keeping implementation details from leaking into your interfaces. Now, you raised two concerns:
Performance: The process of marshalling and unmarshalling objects into JSON representations and serving them over HTTP is plenty fast (compared to JAXB, for example) and well supported by Scala libraries and web frameworks (a small marshalling sketch follows this list). When you inevitably find performance bottlenecks in a particular component, you can deal with those bottlenecks in isolation.
Testing and Deployment: The fact that the Play! framework shuns servlets does complicate things a bit. Normally, for testing/staging, I'd suggest that you just take the WAR for your front-end and the WAR for your web service and put them side by side in the same servlet container. I've done this in the past using the Maven Cargo plugin, for example. This isn't so straightforward with Play!, but one module I found (and have never used) is the play-cargo module... The point being: do whatever you need to do to keep the layers decoupled, and then glue the bits together for testing however you want.
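As a tiny illustration of the marshalling point above (spray-json chosen arbitrarily; the Employee case class is made up):

    import spray.json._
    import DefaultJsonProtocol._

    case class Employee(id: Long, name: String)

    object JsonRoundTrip extends App {
      // jsonFormat2 derives a reader/writer for the two-field case class.
      implicit val employeeFormat = jsonFormat2(Employee)

      val json = Employee(1, "Jane Doe").toJson.compactPrint // marshal -> {"id":1,"name":"Jane Doe"}
      val back = JsonParser(json).convertTo[Employee]        // unmarshal back to the case class
      println(json)
      println(back)
    }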
Hope this is useful...
I am modifying a legacy project that uses SOAP web services. I noticed that some of the URLs it points to for some of the namespaces are not working anymore (500). Any idea what the consequences would be?
Both the client and the server still seem to be working fine, but I need to make a new client that consumes the WS.
Namespaces may be in the form of a URL, but they do not represent a resource on the network. In many cases, there never was any resource at that location. It makes no difference at all.