Is it good practice to ALWAYS use a REST API for Razor Page specific data?

Imagine I have a Razor Page or something similar, and the data used by that Razor Page is not used by any other page at all, so the data retrieval is very specific to this page only.
Is it bad practice to just grab the data directly, using a database connection from within that Razor Page, local to the only place the data is used?
If so, why should I abstract the data away into a separate API that isn't re-used anywhere? Why is it good practice?
It seems to me that REST APIs are sometimes used unnecessarily and for no good reason, as if simply because every example video shows data retrieval from a REST API. Correct me if I am wrong.

If your application is purely a server-side app, there is no justification for creating a RESTful API that serves up JSON for it. Those kinds of APIs are usually created for "external" consumers, by which I mean third parties or the browser (via JavaScript). They are commonly implemented for client-side apps - typically single-page apps built with React, Angular or Blazor, where JSON is the data format of choice for the browser.
As to whether you should open database connections in your PageModel class, that's another question. For simple apps, why not? But for apps that need unit testing, it's not a good idea: you will be unable to execute unit tests against the PageModel class without hitting the database.
As a habit, I tend to put the code that connects to a database in a series of separate classes, each one having an interface, and then inject them into the PageModel via dependency injection. That way I can mock the service represented by the interface for unit testing.
You might want to implement services that generate data as JSON within a Razor Pages app if you have some functionality that depends on Ajax requests for data. For those, you could use Web API controllers, minimal request handlers or even named handler methods that return JsonResult objects in the PageModel classes. With all of those, you might still want to put the code that actually calls the database in a separate class that is injected into the handler.
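On the client side, consuming one of those JSON endpoints is just an ordinary fetch call. Here is a minimal TypeScript sketch, assuming a hypothetical /Products page with an OnGetData named handler; the URL shape and the Product fields are invented for illustration:

```typescript
// Hypothetical shape of the JSON returned by the page's OnGetData handler.
interface Product {
  id: number;
  name: string;
}

// Razor Pages named handlers are conventionally reached via a ?handler=... query
// string; the exact URL depends on how the app's routes are configured.
async function loadProducts(): Promise<Product[]> {
  const response = await fetch("/Products?handler=Data", {
    headers: { Accept: "application/json" },
  });
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  return (await response.json()) as Product[];
}

loadProducts().then(products => console.log(`${products.length} products loaded`));
```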

Related

Why to use Model from MVC in frontend?

I've done almost all of my web projects using Java (Spring MVC + Thymeleaf) and its MVC technologies. Recently I heard about REST and started learning some stuff about it. I realized that it is one of the coolest things I've ever seen in my whole life! (I'm just joking around, it's definitely not.)
All we need to do is serialize the data to JSON and return it to the frontend. On the frontend we no longer need to use the Model and its objects; the frontend can get all the required data in nice-to-work-with JSON using a single GET request!
We don't need to use some weird Thymeleaf constructions to handle errors or to iterate through a list in our template! We can handle all events and process all data using JavaScript and its frameworks. It is much more powerful.
Is there something I've missed? When should I use a Model, and when should I use JSON data?
These are two (somewhat overlapping) approaches to frontend development: generating pages on the backend and on the frontend.
Using a backend model and Thymeleaf templates (or any other HTML templates), you generate your web page on the server side. This approach has the following benefits:
you can write frontend and backend logic in one language (Java);
you can enforce data and security constraints in one place - on the backend, using Java code or configuration;
in many cases, it's faster for the user to get their first page, since the rendering happens on the server, and the user gets an already rendered page;
But this approach has the following drawbacks:
the server has to render the pages, which means more load on the server;
most of the modern web sites and applications use JavaScript anyway, so you'll have to write at least some JavaScript;
server-generated pages don't come close to the interactivity and richness that's possible to render in the browser.
Providing a REST API for a JavaScript frontend, you have the following benefits:
you unload your server, since most of the UI related work happens on the user's device;
you can achieve much more with modern frontend frameworks;
navigating an already rendered UI is faster, since you don't need the server to render every page; you only need to get a relatively small JSON response and then update the page dynamically.
But this approach has the following tradeoffs:
the user generally waits longer to see their first page, as the browser needs to download a lot of scripts and then spend time on dynamic rendering of the page;
in a large application, you need to either have a full-stack developer team, or two teams working on frontend and backend separately;
you have to write your UI logic in JavaScript. Make of it what you will :)
you sometimes have to duplicate constraints on both the frontend and the backend: e.g., if a user can't edit a field, you have to show it as read-only on the frontend and also validate it in your REST service, since the user may call your REST API directly and bypass the frontend validation (see the sketch after this list);
you have to be more aware of securing your REST API in general.
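To make the duplicated-constraints point concrete, here is a minimal sketch in TypeScript (using Express on the server purely for illustration; the user resource, the locked email field and the route are all invented): the frontend renders the field as read-only, and the backend still has to reject edits to it, because a client can bypass the UI and call the API directly.

```typescript
import express from "express";

// Frontend side: the UI renders the locked field as read-only.
function renderEmailField(email: string): string {
  return `<input name="email" value="${email}" readonly>`;
}

// Backend side: the same constraint has to be enforced again, because
// nothing stops a client from calling the REST API directly.
const app = express();
app.use(express.json());

app.put("/api/users/:id", (req, res) => {
  if ("email" in req.body) {
    return res.status(403).json({ error: "The email field cannot be edited" });
  }
  // ...apply the remaining, permitted changes here...
  res.json({ ok: true });
});

app.listen(3000, () => console.log(renderEmailField("user@example.com")));
```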

Constructing a back-end suitable for app and web interface

Let's suppose I was going to design a platform like Airbnb. They have a website as well as native apps on various mobile platforms.
I've been researching app design, and from what I've gathered, the most effective way to do this is to build an API for the back-end, like a REST API using something like node.js, and SQL or MongoDB. The front-end would then be developed natively on each platform, making calls to the API endpoints to display and update data. This design sounds like it works great for mobile development, but what would be the best way to construct a website that uses the same API?
There are three approaches I can think of:
1. Use something completely client-side like AngularJS to create a single-page application front end which ties directly into the REST API back-end. This seems OK, but I don't really like the idea of a single-page application and would prefer a more traditional approach.
2. Create a normal web application (in PHP, Python, node.js, etc.), but rather than tying the data to a typical back end like MySQL, it would basically act as an interface to the REST API. For example, when you visit www.example.com/video/3 the server would call the corresponding REST endpoint (i.e. api.example.com/video/3/show) and render the HTML for the user. This seems like kind of a messy approach, especially since most web frameworks are designed to work with a SQL backend.
3. Tie the web interface directly in with the REST API. For example, the endpoint example.com/video/3/show can return either HTML or JSON depending on the HTTP headers. The advantage is that you can share most of your code; however, the code would become more complex and you can't decouple your web interface from the API.
What is the best approach for this situation? Do you choose to completely decouple the web application from the REST API? If so, how do you elegantly interface between the two? Or do you choose to merge the REST API and web interface into one code base?
1. It's usually the preferred way, but one should have a good command of SPA development.
2. This adds a redundant layer from a performance perspective: you will basically make twice as many requests all the time.
3. This might work with a super simple UI, when it's just a matter of serializing your REST API result into different formats, but I believe you want a rich UI, and going this way will be a nightmare from both an implementation and a maintenance perspective.
SUGGESTED SOLUTION:
Extract your core logic. Put it into a separate project/assembly and reuse it in both your REST API and your UI. That way you can reuse the business logic, which is the same for both, while keeping the presentation concerns, which differ between the UI and the REST API, separate.
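A minimal sketch of that idea in TypeScript/Node (chosen because the question mentions node.js; the VideoService name, the data shapes and the routes are all invented): the core logic lives in one place, and both the REST endpoint and the server-rendered page are thin wrappers over it.

```typescript
import express from "express";

// --- core logic (in a real project this would live in its own module/package) ---
interface Video {
  id: number;
  title: string;
}

class VideoService {
  // In a real app this would query the database.
  async getVideo(id: number): Promise<Video | null> {
    return { id, title: `Video #${id}` };
  }
}

// --- both representations are thin wrappers over the same service ---
const app = express();
const videos = new VideoService();

// REST API representation: JSON for mobile/external clients.
app.get("/api/video/:id", async (req, res) => {
  const video = await videos.getVideo(Number(req.params.id));
  video ? res.json(video) : res.sendStatus(404);
});

// Web UI representation: HTML rendered on the server, same business logic.
app.get("/video/:id", async (req, res) => {
  const video = await videos.getVideo(Number(req.params.id));
  video ? res.send(`<h1>${video.title}</h1>`) : res.sendStatus(404);
});

app.listen(3000);
```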
Hope it helps!
Both the first and the second option seem reasonable to me, in the sense that there are certain advantages to decoupling the backend API from the clients (including your web site). For example, you could have dedicated teams per project; if there's a bug in the web app or the API, you'd only have to release that project, and not both.
Say you're going public with your API. If you're releasing a version that breaks backwards compatibility, with a decoupled web app you'd be able to detect that earlier (say, in a staging environment, given you're developing both in-house). However, if they were tightly coupled they'd probably work just fine, and you'd find out you've broken the other clients only once you release to production.
I would say the first option is the preferable one as a generic approach. The SPA first-load delay problem can be resolved with server-side rendering.
With the second option you will have to deal with scalability, CPU performance, user sessions (not on the REST API, of course, because it should be stateless) and caching issues on both your REST API services and the normal website node instances (caching perhaps not in all cases). In most cases this intermediate backend layer is simply unnecessary; there is no technical limitation that prevents doing all of this in recent versions of browsers.
The third option violates the separation of concerns, in your case the separation of presentation from data models/business logic.

Javascript client for REST API

I am trying to write a JavaScript client for a web application which provides a REST API to interact with the application. I want to do this properly, with a proven stack of tools and methodologies available in JavaScript.
Most of the guides about JavaScript client library development that I found on the web are application-oriented and have a view part (I mean an HTML part). What I need is a client library with methods that can be used to develop web applications, and I don't want this library to depend on any other JavaScript library like jQuery, Backbone, etc.
I have gone through a lot of the design patterns available in JavaScript, especially the patterns mentioned in Learning JavaScript Design Patterns, a book by Addy Osmani. After all that I got confused and couldn't decide which one to follow.
What I have in mind is the following:
Initialize the library with some key and secret (this can be compared to instantiating an object of a class in PHP).
There will be a data persistence unit which keeps the authenticated user's identity for a predefined amount of time, like sessions in PHP. User data will be stored in cookies or local storage. There will also be a provision to override the methods of this unit so that users can implement their own storage mechanism. A reference to this unit will also be passed during library initialization.
Keep a global request method which handles all the API requests associated with the library (this can be compared to a method of the main class in PHP).
Define all the API methods, encapsulated into different units according to the area of the application they deal with. Each unit will have a constructor method which defines some default properties for the unit (this can be compared to defining models in PHP which fetch or save data from the application via the API). Each of these units can inherit from a super unit which provides some default properties and methods.
After reading some blogs and articles I have decided to use Yeoman for library development. Maybe I can use one of the Yeoman community's JavaScript library generators to start the development.
As described above, I think what I need is a class which keeps a single instance throughout the application and which can be used to reference all the models and the functions in those models. For this, maybe I can go with the singleton module pattern, but I am not fully sure how to apply it to my requirement. A rough sketch of the shape I have in mind follows.
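All names in this TypeScript sketch are placeholders; it only illustrates the singleton-module shape described above, not a final design:

```typescript
interface PersistenceUnit {
  get(key: string): string | null;
  set(key: string, value: string): void;
}

// Default persistence unit backed by localStorage; callers can supply their own.
class LocalStoragePersistence implements PersistenceUnit {
  get(key: string) { return window.localStorage.getItem(key); }
  set(key: string, value: string) { window.localStorage.setItem(key, value); }
}

class ApiClient {
  private static instance: ApiClient | null = null;

  private constructor(
    private key: string,
    private secret: string, // kept for later request signing (not shown here)
    private storage: PersistenceUnit
  ) {}

  // Library initialization with key, secret and an optional storage override.
  static init(key: string, secret: string, storage: PersistenceUnit = new LocalStoragePersistence()): ApiClient {
    ApiClient.instance = new ApiClient(key, secret, storage);
    return ApiClient.instance;
  }

  static shared(): ApiClient {
    if (!ApiClient.instance) throw new Error("Call ApiClient.init() first");
    return ApiClient.instance;
  }

  // Persist the authenticated identity via the pluggable storage unit.
  rememberToken(token: string): void {
    this.storage.set("auth-token", token);
  }

  // The single global request method that every unit goes through.
  async request(path: string, options: RequestInit = {}): Promise<unknown> {
    const response = await fetch(path, {
      ...options,
      headers: { "X-Api-Key": this.key, Accept: "application/json" },
    });
    return response.json();
  }
}

// One "unit" per area of the API; each could extend a shared base unit.
class Users {
  find(id: string) {
    return ApiClient.shared().request(`/users/${id}`);
  }
}

ApiClient.init("my-key", "my-secret");
new Users().find("42").then(console.log);
```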
Any advice is greatly appreciated!

Web UI to a restful interface, good idea?

I am working on an experimental website (accessible through a web browser) that will act as a front-end to a RESTful interface (a sub-system). The website will serve as an interface between a user and the RESTful interface, as it will make HTTP requests to the RESTful interface for almost all database operations. Authentication will probably be done using OpenID, and authorization for the database operations will be done via OAuth.
Just out of curiosity, is this a feasible solution, or should I develop two systems that access the database in parallel (i.e. the website has its own data access logic, and the RESTful interface has another)? And what are the pros/cons if I insist on doing it this way (it is just an experimental project for me to learn how things like OpenID and OAuth work in real life anyway), besides the extra database queries and HTTP requests generated for each transaction?
Your concept sounds quite feasible. I'd say that you'll get some fairly good wins out of this approach. For starters you'll get a large degree of code reuse since you'll be able to put other front ends on top of the RESTful service. Additionally, you'll be able to unit test this architecture with relative ease. Finally, you'll be able to give 3rd party developers access to the same API that you use (subject possibly to some restrictions) which will be a huge win when it comes to attracting customers and developers to your platform.
On the down side, depending on how you structure your back end you could run into the standard problem of granularity. Too much granularity and you'll end up making lots of connections for very small amounts of data; too little and you'll get more data than you need in some cases. As for security, you should be able to lock down the back end so that requests can only be made under certain conditions: requests contain an authorization token, API key, etc.
Sounds good, but I'd recommend that you do this only if you plan to open up the RESTful API for other UIs to use, or simply to learn something cool. Support HTML, XML and JSON for the interface.
Otherwise, use a great MVC framework instead (ASP.NET MVC, Rails, CakePHP). You'll end up with the same basic result, but you'll be more strongly typed against the database.
With a modern JavaScript library your approach is quite straightforward.
ExtJS has always had Ajax support, and it is now able to do this via a REST interface.
So, your ExtJS user interface components receive a URL. They populate themselves via a GET to that URL, and store updates via a POST to the URL.
This has worked really well on a project I'm currently working on. By applying RESTful principles there's an almost clinical separation between the front and back ends, meaning it would be a trivial undertaking to replace either. Plus, the API barely needs documenting, since it's an implementation of an existing mature standard.
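The general shape of that load-via-GET / save-via-POST round trip, sketched here with plain fetch in TypeScript rather than ExtJS (the URL and the record fields are invented):

```typescript
// A minimal "component" that is handed a URL, populates itself with a GET,
// and sends updates back to the same URL with a POST.
interface CustomerRecord {
  id: number;
  name: string;
}

class CustomerPanel {
  private record: CustomerRecord | null = null;

  constructor(private url: string) {}

  async load(): Promise<void> {
    const response = await fetch(this.url); // populate via GET
    this.record = (await response.json()) as CustomerRecord;
  }

  async save(): Promise<void> {
    if (!this.record) return;
    await fetch(this.url, {                 // store updates via POST
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(this.record),
    });
  }
}

// Usage: each UI component just receives its resource URL.
const panel = new CustomerPanel("/api/customers/42");
panel.load().then(() => panel.save());
```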
Good luck,
Ian
Wow, a question from 2009! And it's funny to read the answers. Many people seem to disagree with the web services approach and a JS front end, which has nowadays become kind of a standard, known as Single Page Applications.
I think the general approach you outline is quite feasible -- the main pro is flexibility, the main con is that it won't protect clueless users against their own ((expletive deleted)) abuses. As most users are likely to be clueless, this isn't feasible for mass consumption... but, it's fine for really leet users!-)
So to clarify, you want to have your web UI call into your web service, which in turn calls into the database?
This is exactly the path I took for a recent project and I think it was a mistake because you end up creating a lot of extra work. Here's why:
When you are coding your web service, you will create a library to wrap database calls, which is typical. No problem there.
But then when you code your web UI, you will end up creating another library to wrap calls into the REST interface... because otherwise it will get cumbersome making all the raw HTTP calls.
So you essentially create two data access libraries, one to wrap the DB and the other to wrap the web service calls. This basically doubles the amount of work you do, because every operation on a resource ends up being implemented in both libraries. This gets tiring real fast.
The simpler alternative is to create a single library that wraps access to the database, as before, then use that library from BOTH the web UI and web service.
This is assuming that your web UI and web service reside on the same network and both have direct access to the backend database server (which was the case for me). In this setup, having both go directly to the database is also a lot more efficient than having the UI go through the web service.
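A minimal sketch of that single shared library, in TypeScript purely for illustration (the repository function, its data shape and both consumers are invented): the web service and the web UI each call the same repository, instead of the UI wrapping the web service behind a second client library.

```typescript
// The single data-access library shared by both consumers.
interface User {
  id: number;
  name: string;
}

// In a real app this would issue a SQL query against the shared database server.
async function findUserById(id: number): Promise<User | null> {
  return { id, name: `user-${id}` };
}

// Consumer 1: the web service, which wraps the repository result in JSON.
async function webServiceGetUser(id: number): Promise<string> {
  const user = await findUserById(id);
  return JSON.stringify(user ?? { error: "not found" });
}

// Consumer 2: the web UI, which calls the SAME repository directly
// rather than wrapping the web service in another client library.
async function webUiRenderUser(id: number): Promise<string> {
  const user = await findUserById(id);
  return user ? `<h1>${user.name}</h1>` : "<h1>Not found</h1>";
}

// Both paths share one implementation of every data operation.
webServiceGetUser(1).then(console.log);
webUiRenderUser(1).then(console.log);
```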

Design pattern for iPhone -> web service functionality?

I'm developing an app that will talk with a web service exposing multiple methods. I'm trying to figure out what the best pattern would be to centralize the access to the web service, give options for synchronous and asynchronous access, and return data to clients. Has anybody tackled this problem yet?
One class for all methods seems like it would centralize everything well, but I'm thinking it would get confusing to return data to the correct places, especially when dealing with multiple asynchronous calls. Another thought I had was a separate subclass for each method, with some sort of factory brokering access, but I'm thinking that might be overengineering the situation.
(note: not asking for what method calls to use/how to parse response/etc, looking for a high level design pattern solution to the general problem)
I recently came across the same problem. While I don't believe my solution to be optimal, it may help you out.
I created a web service manager and an endpoint protocol. Each object that implements the endpoint protocol is responsible for connecting to a web service endpoint (method), parsing the returned data, and notifying its delegate (usually the web service manager) of completion or any errors. I ended up creating an EndpointBase class that I use 99% of the time.
The web service manager is responsible for instantiating the endpoints as needed and invoking them. All of the calls happen asynchronously.
All in all it seems to work pretty well for me. I did end up with a situation in which one endpoint relied on the response of another (I used the command pattern there).
SDK Components that you'll want to look at are:
NSURLConnection
NSXMLParser
Factories? We don't need no stinkin' factories.
I've done this a few times, and I basically do what you're saying: one object that provides methods for all the web service calls, encapsulating the details of communicating with the service, handling connection issues, etc. In one app it was a singleton, because it needed to keep session state; in another app it was just a collection of static methods.
Along with some formatting of the response data, that's the entirety of its responsibility.
It's left up to the callers whether the call is synchronous or asynchronous; the class itself is written synchronously, and a caller just uses it in a separate thread if necessary. Cocoa's performSelector... methods make that easy.
If REST is a good fit for your data interactions, then I would suggest the ObjectiveResource library. It's designed to work seamlessly with a Ruby on Rails app, but it basically speaks JSON or POX (plain old XML) over HTTP using Rails' ActiveResource conventions.
It's basically a set of categories on NSObject and some of the primitive object types that will let you make calls like [Dog findAllRemote] to return a list of Dog objects, or [myDog saveRemote] to send changes made to the myDog object back to the server.