Why use the Model from MVC on the frontend?

I've done almost all of my web projects using Java (Spring MVC + Thymeleaf) and its MVC technologies. Recently I heard about REST and started learning some stuff about it. I realized that it is one of the coolest things I've ever seen in my whole life! (I'm just joking around, it's definitely not.)
All we need is to serialize data to JSON and return it to the frontend. On the frontend we no longer need to use the Model and its objects. The frontend can get all the required data in a nice-to-work-with JSON format using one single GET request!
We don't need to use some weird Thymeleaf constructions to handle errors or to iterate through a list in our template! We can handle all events and process all data using JavaScript and its frameworks. It is much more powerful.
Is there something I missed? When should I use the Model, and when should I return JSON data?

These are two (somewhat overlapping) approaches to frontend development: generating pages on the backend and on the frontend.
Using a backend model and Thymeleaf templates (or any other HTML templates), you generate your web page on the server side. This approach has the following benefits:
you can write frontend and backend logic in one language (Java);
you can enforce data and security constraints in one place - on the backend, using Java code or configuration;
in many cases, it's faster for the user to get their first page, since the rendering happens on the server and the user receives an already rendered page.
But this approach has the following drawbacks:
the server has to render the pages, which means more load on the server;
most modern websites and applications use JavaScript anyway, so you'll have to write at least some JavaScript;
server-generated pages don't even come close to what's possible to render in the browser.
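To make the contrast concrete, here is a minimal sketch of the server-rendered approach in Spring MVC with Thymeleaf. VideoService and Video are hypothetical stand-ins, not anything from the question:

    import java.util.List;
    import org.springframework.stereotype.Controller;
    import org.springframework.ui.Model;
    import org.springframework.web.bind.annotation.GetMapping;

    @Controller
    public class VideoPageController {

        private final VideoService videoService;

        public VideoPageController(VideoService videoService) {
            this.videoService = videoService;
        }

        @GetMapping("/videos")
        public String videos(Model model) {
            // The Model carries data into the template; the server returns HTML.
            model.addAttribute("videos", videoService.findAll());
            return "videos"; // rendered by Thymeleaf from templates/videos.html
        }
    }

    // Hypothetical stand-ins so the sketch is self-contained.
    interface VideoService { List<Video> findAll(); }
    record Video(long id, String title) {}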
Providing a REST API for a JavaScript frontend, you have the following benefits:
you offload your server, since most of the UI-related work happens on the user's device;
you can achieve much more with modern frontend frameworks;
navigating an already rendered UI is faster, since you don't need the server to render every page; you only need to get a relatively small JSON response and then update the page dynamically.
But this approach has the following tradeoffs:
the user generally waits longer to see their first page, as the browser needs to download a lot of scripts and then spend time on dynamic rendering of the page;
in a large application, you need to either have a full-stack developer team, or two teams working on frontend and backend separately;
you have to write your UI logic in JavaScript. Make of it what you will :)
you sometimes have to duplicate constraints on both the frontend and the backend: e.g., if a user can't edit a field, you have to both show it as read-only on the frontend and add validation in your REST service, since the user may try to access your REST API directly and bypass the frontend validation;
you have to be more aware of securing your REST API in general.
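And here is the REST counterpart, again only a sketch, reusing the same hypothetical VideoService and Video from the previous snippet. There is no Model and no template; Spring serializes the return value to JSON (via Jackson), and the JavaScript frontend fetches it with a single GET:

    import java.util.List;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    @RequestMapping("/api/videos")
    public class VideoApiController {

        private final VideoService videoService;

        public VideoApiController(VideoService videoService) {
            this.videoService = videoService;
        }

        @GetMapping
        public List<Video> list() {
            // No Model, no template: the list becomes the JSON response body.
            return videoService.findAll();
        }
    }

As the tradeoffs above note, any rule enforced in the JavaScript frontend must be enforced again in a controller like this one, since nothing stops a client from calling /api/videos directly.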

Related

Is it good practice to ALWAYS use a REST API for Razor Page specific data

Imagine I have a Razor Page or something like it. Imagine the data used by that Razor Page is not used by any other page at all, so the data retrieval is specific to this page only.
Is it bad practice to just grab the data directly using a database connection from within that Razor Page local to the only place that data is to be used?
If so, why should I abstract the data away into a separate API that isn't re-used anywhere? Why is it good practice?
It seems to me that REST APIs are sometimes used unnecessarily and for no good reason, perhaps because every example video shows data being retrieved from a REST API. Correct me if I am wrong.
If your application is purely a server-side app, there is no justification for creating a RESTful API that serves up JSON for it. Those kinds of APIs are usually created for "external" consumers, by which I mean third parties or the browser (via JavaScript). They are commonly implemented for client-side apps, typically single-page apps built with React, Angular, or Blazor, where JSON is the data format of choice for the browser.
As to whether you should open database connections in your PageModel class, that's another question. For simple apps, why not? But for apps that need unit testing, it's not a good idea: you will be unable to execute unit tests against the PageModel class without hitting the database.
As a habit, I tend to put the code that connects to a database in a series of separate classes, each one having an interface, and then inject them into the PageModel via dependency injection. That way I can mock the service represented by the interface for unit testing.
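Razor Pages is C#, but the pattern is language-agnostic, so here is the same idea sketched in Java terms, with all names hypothetical: the page-level class depends only on an interface, which is what makes the mock possible.

    // Hypothetical data-access interface; the real one would wrap your SQL.
    interface OrderRepository {
        int countOpenOrders(String customerId);
    }

    // The class playing the PageModel role depends only on the interface...
    class OrdersPageModel {
        private final OrderRepository orders;

        OrdersPageModel(OrderRepository orders) { // injected by the DI container
            this.orders = orders;
        }

        String headline(String customerId) {
            return "Open orders: " + orders.countOpenOrders(customerId);
        }
    }

    // ...so a unit test can pass a stub and never touch the database.
    class OrdersPageModelTest {
        void headlineCountsOpenOrders() {
            OrderRepository stub = customerId -> 3; // lambda as a fake
            assert new OrdersPageModel(stub).headline("c42").equals("Open orders: 3");
        }
    }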
You might want to implement services that generate data as JSON within a Razor Pages app if you have some functionality that depends on Ajax requests for data. For those, you could use Web API controllers, minimal request handlers or even named handler methods that return JsonResult objects in the PageModel classes. With all of those, you might still want to put the code that actually calls the database in a separate class that is injected into the handler.

Constructing a back-end suitable for app and web interface

Let's suppose I was going to design a platform like Airbnb. They have a website as well as native apps on various mobile platforms.
I've been researching app design, and from what I've gathered, the most effective way to do this is to build an API for the back-end, such as a REST API using something like Node.js with SQL or MongoDB. The front-end would then be developed natively on each platform, making calls to the API endpoints to display and update data. This design sounds like it works great for mobile development, but what would be the best way to construct a website that uses the same API?
There are three approaches I can think of:
Use something completely client-side like AngularJS to create a single-page application front end which ties directly into the REST API back-end. This seems OK, but I don't really like the idea of a single-page application and would prefer a more traditional approach.
Create a normal web application (in PHP, Python, Node.js, etc.), but rather than tying the data to a typical back end like MySQL, it would basically act as an interface to the REST API (see the sketch after this question). For example, when you visit www.example.com/video/3 the server would call the corresponding REST endpoint (i.e. api.example.com/video/3/show) and render the HTML for the user. This seems like kind of a messy approach, especially since most web frameworks are designed to work with a SQL backend.
Tie the web interface directly in with the REST API. For example, the endpoint example.com/video/3/show could return either HTML or JSON depending on the HTTP headers. The advantage is that you can share most of your code; however, the code becomes more complex and you can't decouple your web interface from the API.
What is the best approach for this situation? Do you choose to completely decouple the web application from the REST API? If so, how do you elegantly interface between the two? Or do you choose to merge the REST API and web interface into one code base?
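For reference, the second option (the web app acting as a thin interface over the API) might look roughly like this in Spring MVC terms; the URLs, the Video type, and the template name are all hypothetical:

    import org.springframework.stereotype.Controller;
    import org.springframework.ui.Model;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.client.RestTemplate;

    @Controller
    public class VideoProxyController {

        private final RestTemplate rest = new RestTemplate();

        @GetMapping("/video/{id}")
        public String show(@PathVariable long id, Model model) {
            // Instead of querying MySQL, call the REST backend...
            Video video = rest.getForObject(
                    "http://api.example.com/video/{id}/show", Video.class, id);
            // ...then render HTML server-side, like a traditional web app.
            model.addAttribute("video", video);
            return "video"; // a server-side template
        }
    }

    record Video(long id, String title) {}

Note that this is exactly the doubled request path the answers below warn about: every page view costs one browser-to-web-app request plus one web-app-to-API request.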
Regarding option 1: it's usually the preferred way, but one should have a good command of SPA development.
Option 2 adds a redundant layer from a performance perspective: you will basically make twice as many requests all the time.
Option 3 might work with a super-simple UI, where it's just a matter of serializing your REST API result into different formats, but I believe you want a rich UI, and going this way will be a nightmare from both an implementation and a maintenance perspective.
SUGGESTED SOLUTION:
Extract your core logic, put it into a separate project/assembly, and reuse it in both your REST API and your UI. This way you will be able to reuse the business logic, which is the same for both, and keep the presentation code, which differs between the UI and the REST API, separate.
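As a rough sketch of that extraction (names are hypothetical), the core module stays framework-free, and both the REST API project and the UI project depend on it:

    // core module: business rules only, no web framework imports.
    public class BookingPolicy {

        // The rule lives in exactly one place, shared by REST API and UI.
        public boolean canBook(int guestCount, int maxOccupancy) {
            return guestCount > 0 && guestCount <= maxOccupancy;
        }
    }

    // The REST API module wraps this in a JSON endpoint, and the web UI
    // module calls the same object before rendering an HTML form:
    class UiExample {
        public static void main(String[] args) {
            BookingPolicy policy = new BookingPolicy();
            System.out.println(policy.canBook(3, 4)); // true
        }
    }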
Hope it helps!
Both the first and the second option seem reasonable to me, in the sense that there are certain advantages to decoupling the backend API from the clients (including your website). For example, you could have a dedicated team per project, and if there's a bug in the web app or the API you'd only have to release that project, not both.
Say you're going public with your API. If you're releasing a version that breaks backwards compatibility, with a decoupled web app you'd be able to detect that earlier (say, in a staging environment, given you're developing both in-house). However, if they were tightly coupled they'd probably work just fine together, and you'd find out you'd broken the other clients only once you released to production.
I would say the first option is the preferable one as a generic approach. The SPA first-load delay problem can be resolved with server-side rendering.
With the second option you will have to face scalability, CPU performance, user-session (not on the REST API, of course, which should be stateless), and caching issues on both your REST API services and your website's node instances (caching perhaps not in all cases). In most cases this intermediate backend layer is just unnecessary; there is no technical limitation preventing you from doing all of this in recent versions of browsers.
The third option violates the separation of concerns; in your case, separating presentation from data models/business logic.

Scraping WebObjects website & REST

I need to programmatically interact with a WebObjects website and extract data from the responses. The particular WebObjects site I am scraping uses component actions and stores sessions in cookies (not URLs). This means that all URLs look something like this:
http://example.com/WOApp/WebObjects/WOApp.woa/wo/7.0.0.0.29.1.1.1
My first questions are:
Don't URLs like this completely destroy local and shared caching opportunities (the cacheable constraint in REST)? I imagine the only effective caching with such URLs is in the WebObjects server itself.
Isn't addressability broken as well? Each resource does have a unique endpoint, but it changes constantly. Furthermore, I think WebObjects also invalidates old URLs, since they "time out" after a period of time. I'm not sure whether this applies only to URLs with sessions, though.
Regarding the scraping, I am not sure whether it's possible to extract any meaningful endpoints from the website. For example, with a normal website I would look through the HTML and extract the POST URLs, then use them in my scraper by posting directly to them instead of going through the normal request-response cycle.
In this case I obviously cannot use any URLs extracted from the HTML, since they are dynamically generated on each request, but I read something about being able to access WebObjects components directly if the security settings have not been set to disallow this (see https://developer.apple.com/legacy/library/documentation/LegacyTechnologies/WebObjects/WebObjects_3.5/PDF/WebObjectsDevGuide.pdf, p. 53, "Limitations on Direct requests"). I don't understand exactly how to do this, though, or whether it's even possible.
If it's not possible, what would be a good approach then? The only options I can think of are:
Using a full-blown browser client to interact with the website (e.g. Watir or Selenium) and extracting & processing the HTML from their responses.
Manually extracting the dynamic endpoints by first requesting the page they appear on and finding the place in the HTML where they're located, then using them afterwards as if they were "static".
I am interested in opinions on how to approach this scenario since I don't believe any of the solutions above are particularly good.
You've asked a number of questions, and I'll see if I can cover each in turn.
Don't URLs like this completely destroy local and shared caching opportunities (the cacheable constraint in REST)? I imagine the only effective caching with such URLs is in the WebObjects server itself.
There is, indeed, a page cache within the WebObjects application server, and you're right to observe that these component action URLs probably thwart any other kind of caching. Additionally, even though the session ID is not present in the URL, you'd need the session ID in the cookie to re-create the same page, so having just that URL would get you a session restoration error from the application server.
Isn't addressability broken as well? Each resource does have a unique endpoint, but it changes constantly.
Well, yes, on the face of it this is true. You've given a component action URL as an example, and they're tied to the session.
Furthermore, I think WebObjects also invalidates old URLs, since they "time out" after a period of time. I'm not sure whether this applies only to URLs with sessions, though.
Again, all true. Component action URLs generate sessions, and sessions time out.
At this point, let me take a quick diversion. I'm assuming you're not the owner of the WebObjects application—you're talking about having to scrape a WebObjects app, and you've identified some ways in which this particular app doesn't conform to REST principles. You're completely right—a fully component-action-based WebObjects application won't be RESTful. WebObjects pre-dates REST by a few years. Having said that, there are ways in which a WebObjects application can be completely RESTful:
Using session-less direct actions gives a degree of REST-like behaviour, and would certainly solve the problems you identify with caching, addressability and expiry (sketched below).
Using the ERRest framework to create a 100% RESTful application.
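For anyone unfamiliar with WebObjects, a direct action is an ordinary method on a WODirectAction subclass, reachable at a stable, session-less URL. A minimal sketch; the action name and form key are hypothetical:

    import com.webobjects.appserver.WOActionResults;
    import com.webobjects.appserver.WODirectAction;
    import com.webobjects.appserver.WORequest;
    import com.webobjects.appserver.WOResponse;

    public class DirectAction extends WODirectAction {

        public DirectAction(WORequest request) {
            super(request);
        }

        // Reached at a stable URL such as:
        //   http://example.com/WOApp/WebObjects/WOApp.woa/wa/videoDetails?id=3
        // No session, so the URL is cacheable and addressable, unlike the
        // component action URLs in the question.
        public WOActionResults videoDetailsAction() {
            String id = request().stringFormValueForKey("id");
            WOResponse response = new WOResponse();
            response.appendContentString("<html><body>Video " + id + "</body></html>");
            return response;
        }
    }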
Of course, none of this will help you if you're just trying to scrape a legacy application.
Regarding the scraping, I am not sure whether it's possible to extract any meaningful endpoints from the website. For example, with a normal website I would look through the HTML and extract the POST URLs, then use them in my scraper by posting directly to them instead of going through the normal request-response cycle.
Again, if it's a fully component action-based application, you're right—all those URLs will be dynamically generated and useless to you.
In this case I obviously cannot use any URLs extracted from the HTML, since they are dynamically generated on each request, but I read something about being able to access WebObjects components directly if the security settings have not been set to disallow this…
That's talking about getting a component to render directly from its template with some restrictions:
As you note, the application can easily prevent it from happening at all.
As mentioned on p.53, the user input and action-invocation phases of rendering the component are skipped, which probably means this approach would be limited to rendering a component that didn't have any dynamic content anyway. This might be of some very limited use to you, though you'd need to know the component names you were interested in, and they wouldn't normally be exposed anywhere.
I'm not sure you're going to find anything better than the types of high-level functional approaches you've already suggested above, such as automating at the browser level with Selenium. If what you need is REST-style direct addressability of resources within the application, you're not going to get that unless you can re-write the application to use direct actions or ERRest where you need them.
A little late, but this could help.
I use Apache's mod_ext_filter (slightly modified) to pre/post-filter the requests/responses from our WebObjects application. The filter calls PHP scripts that can read the dynamic hyperlinks and other things from the HTML pages. The scripts can also modify the HTTP requests, so we can programmatically add/remove parameters from a request to implement new workflows in front of the legacy app and clean up the requests before they reach WebObjects. It is also possible to maintain an additional database within the scripts and store state across multiple requests.
So you can get the dynamically created links (say, a button's name or an HTML form destination) and recognize those names within the request.
It is also possible to "remote control" such applications with little scripts like "click on the third button on the page". The only thing you need is a DOM parser to get the structure of the HTML pages and then rebuild the actions the browser would perform (i.e. create the HTTP request manually and send it as a POST to the extracted form destination href). The only problem is the JavaScript code, which we analyze and reprogram within PHP (e.g. enabling/disabling input elements so they will not be transmitted within the requests).
There were some problems with the WebObjects adapter module for Apache. It still uses Content-Length within the HTTP header, which you cannot change in mod_ext_filter; if you change the HTML or the parameters within the request, the length of the content will no longer match. But it is possible to change that.
Theoretically it could also be possible to control such a closed-source legacy application from a new UI on a tablet or smartphone, which delegates the user interaction to the backend WebObjects app.
The scripts depend on the page structure, so if your WebObjects app changes, you have to correct some things in the scripts (e.g. the third button could now be the fourth button).
It should also be possible to add a RESTful interface in front of the application and query the data from the legacy app via the filter scripts.
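The DOM-parser idea in this answer also works without Apache filters. Here is a minimal sketch using jsoup in Java (the URLs and the form field name are hypothetical): fetch a page, keep the session cookie WebObjects sets, pull the dynamically generated form destination out of the DOM, and replay the POST the browser would have sent.

    import java.util.Map;
    import org.jsoup.Connection;
    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;

    public class WOFormScraper {
        public static void main(String[] args) throws Exception {
            // Fetch the page, keeping the session cookie WebObjects sets.
            Connection.Response page = Jsoup
                    .connect("http://example.com/WOApp/WebObjects/WOApp.woa")
                    .method(Connection.Method.GET)
                    .execute();
            Map<String, String> cookies = page.cookies();

            // Extract the dynamically generated form destination from the HTML.
            Document doc = page.parse();
            Element form = doc.select("form").first(); // assumes one form of interest
            String action = form.absUrl("action");

            // Rebuild the POST the browser would send, within the same session.
            Document result = Jsoup.connect(action)
                    .cookies(cookies)
                    .data("searchField", "some value") // hypothetical field name
                    .post();
            System.out.println(result.title());
        }
    }

As the answer says, this stays fragile: the extraction logic breaks whenever the page structure changes.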

Is CQ5 used as a backend service?

I have experience with other enterprise CMSes like TeamSite & Tridion, but no hands-on experience with CQ5. I'm wondering: how is CQ5 usually integrated with a large site that has both content & functionality? "Functionality" here means pages generated with data from a non-CMS repository or web service.
My question is: is CQ5 content read in as a back-end service? I know the API is HTTP-based, but is that API typically called from the server or the client? For example, let's say I have a page that is mostly driven by a web service linked to some non-CMS enterprise system, but I want the footer & right rail to be "content" so that users can change them easily. At what point would the different page sources typically be combined?
I'm wondering because I work with ASP.NET. I know CQ5 is Java, so I would expect that most customers are Java shops, but I would think the HTTP API would be easy to consume from an ASP.NET site if it really is just another backend web service.
Your question is really not all that clear to me, to be honest, so I'm going to answer it rather broadly.
To answer your question about the different page sources:
The client usually initiates an HTTP request (for HTML or JSON) to the server (although server-to-server calls are not uncommon in an extended infrastructure), and the server simply executes the necessary calls (using the APIs) and delivers the answer to the request. By the time the response is returned, all calls to the API have been made by the server, and the server is just returning rendered HTML or JSON, or whatever structured form you want your data and/or content in.
A page consists of various components. Some components are fairly static; others are very dynamic and pull their data from, for example, a web service, an external database, or even another CMS. The combining of these resources happens when the page is rendered, which in turn is triggered by a request for that page. The obvious exception is, of course, the Dispatcher caching system, which will return a cached version of the page if possible. But in short, all the rendering and API calls are made server-side.
CQ5 is fairly flexible since it's split into two instances: the backend (author), which is where the actual authoring of pages happens, and the frontend (publish), which does the actual rendering for a client (usually).
Whether you choose to use the publish instance as a backend service is up to you, to be honest. I've seen CQ5 used as what it was meant for (CQ5 being the frontend), and I've seen CQ5 used as a backend service (for example, as a backend service provider for Hybris). And I've seen the combination, where one part was used as a backend service for another system and the other part was used for a public website frontend.
CQ's rich HTTP API (based on Apache Sling) gives full access to the CQ content in various formats including JSON and XML, so integrating CQ content in other systems is easy.
In the other direction, you can use Sling's ResourceProvider mechanism to access external content and make it part of the CQ content tree. See "custom resource providers" in the Sling Resources docs at http://sling.apache.org/documentation/the-sling-engine/resources.html.
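To illustrate the first direction: Sling's default GET servlet renders any resource as JSON when you request it with a .json extension, and the infinity selector recurses through child nodes. A minimal sketch of consuming that from plain Java (the host and content path are made up); an ASP.NET site could issue the same GET just as easily:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class CqContentFetch {
        public static void main(String[] args) throws Exception {
            // ".infinity.json" returns the resource and all of its children.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://publish.example.com/content/site/en/footer.infinity.json"))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // the footer's JSON tree
        }
    }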

Web UI to a restful interface, good idea?

I am working on an experimental website (accessible through a web browser) that will act as a front-end to a RESTful interface (a sub-system). The website will serve as an interface between a user and the RESTful interface, as it will make HTTP requests to the RESTful interface for almost all database operations. Authentication will probably be done using OpenID, and authorization for the database operations will be done via OAuth.
Just out of curiosity, is this a feasible solution, or should I develop two systems that access the database in parallel (i.e. the website has its own data-access logic, and the RESTful interface has another)? And what are the pros/cons if I insist on doing it this way (it is just an experimental project for me to learn how things like OpenID and OAuth work in real life anyway), besides the fact that more database queries and HTTP requests will be generated for each transaction?
Your concept sounds quite feasible. I'd say that you'll get some fairly good wins out of this approach. For starters you'll get a large degree of code reuse since you'll be able to put other front ends on top of the RESTful service. Additionally, you'll be able to unit test this architecture with relative ease. Finally, you'll be able to give 3rd party developers access to the same API that you use (subject possibly to some restrictions) which will be a huge win when it comes to attracting customers and developers to your platform.
On the down side, depending on how you structure your back end, you could run into the standard problem of granularity. Too much granularity and you'll end up making lots of connections for very small amounts of data; too little and you'll get more data than you need in some cases. As for security, you should be able to lock down the back end so that requests can only be made under certain conditions: requests contain an authorization token, API key, etc.
Sounds good, but I'd recommend that you do this only if you plan to open up the RESTful API for other UIs to use, or simply to learn something cool. Support HTML, XML, and JSON for the interface.
Otherwise, use a great MVC framework instead (ASP.NET MVC, Rails, CakePHP). You'll end up with the same basic result, but you'll be more "strongly" typed to the database.
With a modern JavaScript library, your approach is quite straightforward.
ExtJS has always had Ajax support, and it is now able to do this via a REST interface.
So your ExtJS user interface components receive a URL. They populate themselves via a GET to the URL and store updates via a POST to the URL.
This has worked really well on a project I'm currently working on. By applying RESTful principles there's an almost clinical separation between the front & back ends, meaning it would be a trivial undertaking to replace either. Plus, the API barely needs documenting, since it's an implementation of an existing mature standard.
Good luck,
Ian
Wow, a question from 2009! And it's funny to read the answers. Many people seem to disagree with the web-services approach and a JS front end, which has nowadays become kind of the standard, known as Single Page Applications.
I think the general approach you outline is quite feasible -- the main pro is flexibility, the main con is that it won't protect clueless users against their own ((expletive deleted)) abuses. As most users are likely to be clueless, this isn't feasible for mass consumption... but, it's fine for really leet users!-)
So to clarify, you want to have your web UI call into your web service, which in turn calls into the database?
This is exactly the path I took for a recent project and I think it was a mistake because you end up creating a lot of extra work. Here's why:
When you are coding your web service, you will create a library to wrap database calls, which is typical. No problem there.
But then when you code your web UI, you will end up creating another library to wrap calls into the REST interface... because otherwise it will get cumbersome making all the raw HTTP calls.
So you essentially created two data-access libraries, one to wrap the DB and the other to wrap the web-service calls. This basically doubles the amount of work, because every operation on a resource ends up being implemented in both libraries. This gets tiring real fast.
The simpler alternative is to create a single library that wraps access to the database, as before, then use that library from BOTH the web UI and web service.
This assumes that your web UI and web service reside on the same network and both have direct access to the backend database server (which was the case for me). In this setup, having both go directly to the database is also a lot more efficient than having the UI go through the web service.
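To make the duplication concrete, here is a minimal sketch with hypothetical names: the same operation gets wrapped twice, once over SQL and once over HTTP, which is exactly the doubled work described above. The fix from the previous paragraph amounts to handing the web UI the database-backed implementation directly.

    // The operation both front ends need.
    interface VideoStore {
        Video findVideo(long id);
    }

    record Video(long id, String title) {}

    // Wrapper 1: used inside the web service, talks to the database.
    class DbVideoStore implements VideoStore {
        @Override
        public Video findVideo(long id) {
            // ...would run "SELECT id, title FROM videos WHERE id = ?"...
            return new Video(id, "loaded from the database");
        }
    }

    // Wrapper 2: used by the web UI, wraps raw HTTP calls to the service.
    // It mirrors Wrapper 1 method for method: the doubled work.
    class RestVideoStore implements VideoStore {
        @Override
        public Video findVideo(long id) {
            // ...would GET http://api.example.com/videos/{id} and parse JSON...
            return new Video(id, "loaded from the REST API");
        }
    }

    // The simpler alternative: since both apps can reach the database,
    // the web UI is just handed DbVideoStore and RestVideoStore disappears.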