REST API, save nested models in one request

As far as I know, in REST we need to save each model in a separate request. What if I have 3-4 levels of nested models and would like to save them all in one request? What's the best practice? (Rails, PHP, Node.js)

REST doesn't really talk about models, it talks about resources.
It's fine in REST services for 'some data' (your model) to be represented by multiple resources.
So if you define a new resource that combines all these models into a single larger model, then it would also be acceptable to submit a PUT request to it and update everything in one request, atomically.
One thing to look out for though is caching. If you rely heavily on caching, updating the big resource does not automatically invalidate all the sub-resources in the cache. As far as I know, there's no standard way yet to tell a client that other resources should be evicted from the cache. There's a 2011 draft, but it seems abandoned:
https://datatracker.ietf.org/doc/html/draft-nottingham-linked-cache-inv-04
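
To make the composite-resource idea concrete, here is a minimal sketch in Go. Everything in it is hypothetical (the /orders/42/full route, the Order/Item/Option types, and the saveOrderTree helper); the point is only that a single PUT can carry the whole nested representation and be applied atomically:

```go
package main

import (
	"encoding/json"
	"net/http"
)

// Hypothetical nested models: an order, its line items, and their options
// (three levels, as in the question's 3-4 levels of nesting).
type Option struct {
	Name  string `json:"name"`
	Value string `json:"value"`
}

type Item struct {
	SKU     string   `json:"sku"`
	Qty     int      `json:"qty"`
	Options []Option `json:"options"`
}

type Order struct {
	ID    string `json:"id"`
	Items []Item `json:"items"`
}

// saveOrderTree stands in for persistence code that wraps all writes in a
// single database transaction, so the tree is stored (or rejected) atomically.
func saveOrderTree(o Order) error { return nil }

// A PUT to the composite resource replaces the whole tree in one request.
func putOrderFull(w http.ResponseWriter, r *http.Request) {
	var o Order
	if err := json.NewDecoder(r.Body).Decode(&o); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	if err := saveOrderTree(o); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	http.HandleFunc("/orders/42/full", putOrderFull)
	http.ListenAndServe(":8080", nil)
}
```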

Related

Is it preferred to use GraphQL for querying and REST for mutation operations in the same project?

I am working on an e-commerce site which needs APIs for its mobile application. Early implementation issues with GraphQL made me think of using GraphQL for querying data and REST for the mutation operations (store, update, delete).
Is it OK to mix them like this, or should I just stick with one of them for all operations?
There's plenty of individual cases where it's more appropriate or even necessary to have separate endpoints. To name a few: authentication, file uploads, third-party webhooks, or any requests that should return something other than JSON.
Saying that all operations with side-effects should be done through separate endpoints seems like overkill. You'll not only lose the typical benefits associated with GraphQL (declarative data fetching, predictable responses, etc.) but you'll also be making things harder for front-end developers, particularly if they're using Apollo. That's because cached query responses can be updated automatically based on the data returned by a mutation -- if you don't use GraphQL to mutate that data, though, you'll have to update the cache yourself.
I would recommend sticking with a single approach, for the following reasons:
1) Predictable changes to cached data after an operation; otherwise you would have to write a lot of duct-tape code to ensure that the REST-based updates mutate the cached data on the client.
2) A single pattern for code maintenance and less overhead when reading code.
3) A schema kept in a single place; otherwise you might end up duplicating code.

RESTful syntax. Is it Eager/Lazy or both?

I am trying to follow RESTful principles and a little confused on how "Eager" or "Lazy" endpoints would be set up.
For example, a
Shop has many Products
Products have many Ingredients.
Products have many Packaging
Of course a "bad" endpoint that would fetch eagerly would be:
api/shop/1
Would return shop id 1's details but also with:
ALL the Products
ALL the Products' Ingredients
ALL the Products' Packaging
This of course is a crazy model...so I can only guess RESTful is "always lazy" by default?
But with "lazy be default" say you want to get 10 different products AND their ingredients...
api/shop/1/product/1/ingredients
api/shop/1/product/2/ingredients
api/shop/1/product/3/ingredients
The number of requests is getting a little high: 10 separate HTTP requests for the 10 products.
So lastly, do you instead tend to design the RESTful endpoints based on what the front-end/consumer may want as opposed to modelling the business/database?
api/shop/1/product-details?productId=1,2,3,4,5,6,7,8,9,10
Is the above strictly "RESTful"?
So I guess the real underlying question is sort of:
Is RESTful API design a model of the Data or a model of the Views?
Is RESTful API design a model of the Data or a model of the Views?
Views is closer -- it's a model of resources
Your data model is not your object model is not your resource model is not your affordance model. -- Amundsen
The simplest analogy is JavaScript in an HTML page:
we can embed the JavaScript in the HTML page
we can link to the JavaScript from the HTML page.
Both approaches work - they have different trade-offs, primarily in how caching works.
Coarse-grained resources are somewhat analogous to data transfer objects; exchange a large representation in a single request/response, and then the client can do lots of different things with that one representation.
Fine-grained resources give you more control over caching strategies (the different parts can expire at different times), and perhaps respond better to scenarios where we expect the client to be sending back edited representations of those resources.
One issue that fine-grained resources have had is the extra burden of round trips. HTTP/2 improves that story, as server push can be used to chain representations of multiple resources onto a single response -- all of the fine-grained resources can be sent in a single burst.
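For illustration, Go's standard library exposes server push through the http.Pusher interface. A minimal sketch; the paths are hypothetical, and push only works over HTTP/2:

```go
package main

import "net/http"

func shopHandler(w http.ResponseWriter, r *http.Request) {
	// Over HTTP/2, proactively push the fine-grained sub-resources the
	// client is likely to request next, so they arrive in the same burst.
	if pusher, ok := w.(http.Pusher); ok {
		for _, path := range []string{
			"/api/shop/1/product/1/ingredients",
			"/api/shop/1/product/2/ingredients",
		} {
			if err := pusher.Push(path, nil); err != nil {
				break // push refused or unsupported; the client just GETs as usual
			}
		}
	}
	w.Write([]byte(`{"id": 1, "products": ["/api/shop/1/product/1", "/api/shop/1/product/2"]}`))
}

func main() {
	http.HandleFunc("/api/shop/1", shopHandler)
	// net/http only negotiates HTTP/2 over TLS; cert.pem/key.pem are placeholders.
	http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil)
}
```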
But even so, we're talking about identifying resources, not database entities.
https://stackoverflow.com/questions/57420131/restful-syntax-is-it-eager-lazy-or-both
That's an identifier for a web page about a question
https://api.stackexchange.com/2.2/questions/57420131?site=stackoverflow
That's a different resource describing the same question.
REST APIs aren't about exposing your data model via HTTP; they are about exchanging documents so that a client can navigate a protocol that gets useful work done. See Webber 2011.
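And if you do expose a view-shaped batch resource like api/shop/1/product-details?productId=1,2,3, the handler can stay simple. A sketch in Go, where the route, the ProductDetails type, and loadProductDetails are all made up for illustration:

```go
package main

import (
	"encoding/json"
	"net/http"
	"strings"
)

type ProductDetails struct {
	ID          string   `json:"id"`
	Ingredients []string `json:"ingredients"`
	Packaging   []string `json:"packaging"`
}

// loadProductDetails stands in for a single query that fetches a product
// together with its ingredients and packaging.
func loadProductDetails(id string) ProductDetails {
	return ProductDetails{ID: id}
}

// GET /api/shop/1/product-details?productId=1,2,3
// One request, one representation covering many products.
func productDetails(w http.ResponseWriter, r *http.Request) {
	ids := strings.Split(r.URL.Query().Get("productId"), ",")
	details := make([]ProductDetails, 0, len(ids))
	for _, id := range ids {
		details = append(details, loadProductDetails(id))
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(details)
}

func main() {
	http.HandleFunc("/api/shop/1/product-details", productDetails)
	http.ListenAndServe(":8080", nil)
}
```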

How to structure a RESTful backend API with a database?

I want to make an API using REST which interacts (stores) data in a database.
While reading about design patterns I came across the remote facade, and the book I was reading mentions that the role of this facade is to translate the coarse-grained methods from the remote calls into fine-grained local calls, and that it should not contain any extra logic. As an explanation, it says that the program should still work without this facade.
Here's an example
Yet I have two questions:
Considering I also have a database, does it make sense to split the general call into specific calls for each attribute? Doesn't it make more sense to just have a general "get data" method that runs one query against the database and converts the result into a usable object, to reduce the number of database calls? So instead of splitting get address into get street, get city, and get zip, make one DB call for all that info.
With all this in mind, and, in my case using golang, how should the project be structured in terms of files and functions?
I will have the main file with all the endpoints from the REST API, calling the controllers that handle these requests.
I will have a set of files that define those controllers. Are these controllers the remote facade? Should those methods not have logic in that case, and just call the equivalent local methods?
Should the local methods call the database directly, or should they use some sort of helper class that accesses the database?
Assuming the answer to all of these questions is yes, does the following structure make sense?
Main
Controllers
Domain
Database helper
First and foremost, as Mike Amundsen has stated
Your data model is not your object model is not your resource model is not your affordance model
Jim Webber said something very similar: by implementing a REST architecture you have an integration model, in the form of the Web, which is governed by HTTP; the other model is the domain model. Resources adapt and project your domain model to the world, though there is no 1:1 mapping between the data in your database and the representations you send out. A typical REST system has many more resources than you have DB entries in your domain model.
With that being said, it is hard to give concrete advice on how you should structure your project, especially in terms of a particular framework you want to use. According to Robert "Uncle Bob" C. Martin, looking at the code structure should tell you something about the intent of the application, not about the framework you use; in his words, architecture is about intent. What you usually see, though, is the default structure imposed by a framework such as Maven or Ruby on Rails. For golang you should probably read through documentation or blogs, which may or may not give you some ideas.
In terms of accessing the database, you might either follow a microservice architecture where each service maintains its own database, or attempt something like a distributed monolith that acts as one cohesive system and shares the database among all its parts. If you scale out and a couple of parallel services consume data, e.g. via a message broker, you might need a distributed lock and/or queue to guarantee that the data is not consumed by multiple instances at the same time.
What you should do, however, is design your data layer in a way that scales well. What many developers often forget or underestimate is the benefit they can gain from caching. Links are basically used on the Web to reference from one resource to another, giving the relation some semantic context through well-defined link-relation names. Link relations also allow a server to control its own namespace and change URIs as needed. But URIs are not only pointers to a resource a client can invoke; they are also keys into a cache. Caching can take place at multiple locations: on the server side, to avoid costly calculations or lookups; on the client side, to avoid sending requests out at all; and on intermediary hops, which can take pressure off heavily requested servers. Fielding even made caching a constraint that needs to be respected.
Which attributes you should create queries for depends entirely on the use case you attempt to depict. In the address example given, it does make sense to return the address information all at once, as the street or zip code is rarely queried on its own. If the address is part of some user or employee data, it is less clear whether to return that information as part of the user or employee data or just as a link that should be queried on its own as part of a further request. What you return may also depend on the capabilities of the media type the client and your service agree upon (content negotiation).
If you implement something like a grouping, e.g. some football players and the categories they belong to, such as their teams and whether they are offense or defense players, you might have a Team A resource that includes all of the players as embedded data. Within the DB you could have either a separate table for teams with references to the respective players, or the team could just be a column in the player table. We don't know, and a client usually doesn't care either. From a design perspective you should, however, be aware of the benefits and consequences of including all the players at once, of providing only links to the respective players, and of a mixed approach of presenting some base data plus a link to learn further details.
The latter approach is probably the most sensible, as it gives a client enough information to determine whether more detailed data is needed or not. If it is, a simple GET request to the provided URI is enough, and it might be served by a cache and thus never reach the actual server at all. The first approach has the disadvantage that it doesn't use caching optimally and may return far more data than actually needed. The links-only approach may not provide enough information, forcing the client to perform a follow-up request to learn anything about a team member. But as mentioned before, you as the service designer decide which URIs or queries are returned to the client, and can design your system and data model accordingly.
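A sketch of that mixed approach as Go types, with hypothetical names and fields: each member carries some base data plus a link a client can follow for full details, and that follow-up GET is exactly what a cache can answer.

```go
package resources

// Member carries enough base data for a team listing, plus a link
// ("href") to the full player resource for clients that need more.
type Member struct {
	Name string `json:"name"`
	Role string `json:"role"` // e.g. "offense" or "defense"
	Href string `json:"href"` // e.g. "/players/17" -- also usable as a cache key
}

// Team embeds its members' base data rather than full player records.
type Team struct {
	Name    string   `json:"name"`
	Members []Member `json:"members"`
}
```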
In general what you do in a REST architecture is providing a client with choices. It is good practice to design the overall interaction flow as a state machine which is traversed through receiving requests and returning responses. As REST uses the same interaction model as the Web, it probably feels more natural to design the whole system as if you'd implement it for the Web and then apply the design to your REST system.
Whether controllers should contain business logic or not is primarily an opinionated question. As Jim Webber correctly stated, HTTP, which is the de-facto transport layer of REST, is an
application protocol whose application domain is the transfer of documents over a network. That is what HTTP does. It moves documents around. ... HTTP is an application protocol, but it is NOT YOUR application protocol.
He further points out that you have to narrow HTTP into a domain application protocol and trigger business activities as a side effect of moving documents around the network. So it's the side effect of moving documents over the network that triggers your business logic. There is no hard rule on whether to include business logic in your controller or not, but usually you try to keep the business logic in its own layer, e.g. as a service that you invoke from within the controller. That allows you to test the business logic without the controller, and thus without a real HTTP request.
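A minimal sketch of that separation in Go, with all names hypothetical: the controller only moves documents in and out of HTTP, while the business logic sits in a service type that can be tested without a web server.

```go
package main

import (
	"encoding/json"
	"net/http"
)

type Employee struct {
	ID      string `json:"id"`
	Name    string `json:"name"`
	Address string `json:"address"` // street, city, zip fetched in one query
}

// EmployeeService holds the business logic. Its signature has no HTTP
// types, so it can be unit-tested without any HTTP request.
type EmployeeService struct{ /* db handle, etc. */ }

func (s *EmployeeService) Get(id string) (Employee, error) {
	return Employee{ID: id}, nil // stand-in for the real lookup
}

// EmployeeController is the remote facade: it translates the HTTP
// request into a local call and the result back into a document.
type EmployeeController struct{ svc *EmployeeService }

func (c *EmployeeController) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	e, err := c.svc.Get(r.URL.Query().Get("id"))
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(e)
}

func main() {
	http.Handle("/employees", &EmployeeController{svc: &EmployeeService{}})
	http.ListenAndServe(":8080", nil)
}
```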
While this answer can't provide more detailed information, partly due to the broad nature of the question itself, I hope I could shed some light on the areas you should put some thought into, and show that your data model is not necessarily your resource or affordance model.

What is the difference between REST and LDP?

First of all, I am new to this... REST, RDF, LDP, etc.
I have managed to get a vague understanding of REST and RDF:
REST is a framework where everything is a resource; complex client-side requests are converted to URI-based structural requests, and
using HTTP methods, we get the results in an RDF resource format:
XML or JSON.
RDF is a framework to describe the relational structure, or in other words the conceptual model, of a web resource.
LDP seems to be the same as REST: it uses HTTP to interact with RDF resources. What I understand is that HTTP is used to communicate with web services and get the result in HTML, JPEG, PNG or any other format, even XML. Then what is LDP? Does it somehow update the XML using the HTTP methods?
Can't that be done in a normal architecture, without LDP?
LDP, Linked Data Platform, is a W3C specification defining a standard way for servers and clients to interact with resources (primarily RDF resources) over HTTP. In particular, it introduces the notion of Containers, RDFSources, and Non-RDFSources (or binaries).
It may help to think of an RDFSource as a document, kind of like an HTML web page. Only, the content is not HTML; it's a graph (a set of RDF triples) sharing the same subject URI. Together, the triples in this document would typically describe or make up a given entity or object. So those could be thought of as properties of the object. The document could be expressed in RDF/XML, Turtle, JSON-LD, or possibly other formats. These properties may be literal values or they may be links to other resources.
LDP implements the RESTful architecture, so how you view this RDFSource depends on how you ask for it in your request to the server. If you ask for the resource to be expressed in JSON-LD, you should get back a JSON-LD representation of the resource. If you ask for it as Turtle, you should get back a Turtle representation. This is done by passing certain HTTP headers in the request. Additionally, the RESTful nature of an LDP allows you to use HTTP methods (GET,POST,PUT,DELETE) to interact with the resources in various ways.
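For example, here is a sketch of such a request from a Go client; the resource URI is made up, and only the Accept header changes which serialization of the same triples comes back:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical LDP RDFSource; the Accept header drives the format.
	req, err := http.NewRequest("GET", "https://example.org/ldp/resource/1", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept", "text/turtle") // or "application/ld+json" for JSON-LD

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // the same triples, serialized as Turtle
}
```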
A Container is also an RDFSource, but it allows you to manage membership. Containers can have member resources. You could think of a Container kind of like a folder. Only, it doesn't physically contain RDFSources or documents. Instead, it has a set of triples (membership triples) that define or link to its members. You can view a Container as a container or as an RDFSource depending on the preferred interaction model that you specify in a request header.
So, basically, you can think of an LDP as a way of interacting with RDF resources in a way that is similar to a web site with folders and documents. Only everything is RDF, not XHTML. On the back-end, the server may actually manage each resource as a real document (file). Or, as is the case with Carbon LDP, for example, it may put everything in a triplestore (an RDF store / database). Then it just gives you back a set of triples that look like a "document" because they share the same subject URI, which is what you used when making the RESTful request. Since Carbon LDP manages all these "documents" in a triplestore, it can also provide SPARQL query support across everything (though SPARQL support is not part of the LDP spec).
So, in essence, an LDP enables a very "webby" (or RESTful) way of working with RDF data. You make HTTP requests to URIs and they get resolved to resources (Containers or RDFSources), which you can then consume to get at all the triples. And of course you can create resources, update them, list members of a container, etc. In this way, you can build web applications that use RESTful requests (perhaps async JavaScript or AJAX requests).
One advantage you gain is that even though the data you're working with may be very specific to any given application you're building on LDP, the REST API you use to work with that data is standard and consistent.
Another advantage is that you're working with RDF, so the properties of your objects, the predicates, can link data across your enterprise or the World Wide Web. This can help you incorporate data and discover things that your app may not have been specifically designed to support. And also, because you're working with the RDF data model, you can use pre-existing vocabularies for your triples, and you don't have nearly as much hassle with schemas.
In RDF, you can add new triples (new properties or links) without having to update some database schema and the associated code required to interpret it. LDP deals with RDF resources in a very generic way - it doesn't care what the triples that define or make up the resources actually are. When you build an LDP app, you can extend that sort of generic quality into the app in such a way that your data can keep changing and evolving without imposing as heavy costs on the maintenance and evolution of the app itself.
This kind of technology helps you bridge the gap between the current web of hyperlinked documents and a web of linked data, which is easier for computers to understand and interoperate with. For a little more info about RDF and the big difference between a hyperlink and a linked-data link, see The Awesome Power of the Link in Linked Data.
You can also find a somewhat technical introduction to LDP in Introduction to: Linked Data Platform, an article I wrote a while back for Dataversity.

Strategy for RESTfully posting many entities

I am still in the process of getting comfortable with doing things the REST way.
In my situation, client software will be interacting with a RESTful service. Rarely, the client will upload its entire database of entities (each entity serializes into a roughly 5 kB chunk of XML).
Perhaps I'm wrong, but the proper RESTful strategy seems to be to cycle through each entity and individually POST each one. However, there may plausibly be tens of thousands of these entities, and somehow so many rapid-fire POSTs doesn't seem kosher.
In this situation, it feels like packaging all the entities into one big xml representation would violate the RESTful way of doing things, but it would also avoid the need for thousands of POSTs.
Is there some standard-practice for accomplishing this? Thanks in advance!
I don't see why a "Packet of entities" cannot be considered a resource. Transactional writes certainly can consider database transaction to be a resource. I admit I haven't read Fielding's dissertation, but I don't see how wrapping several resources into a single representation would invalidate REST.
Database transactions do something like this: they wrap smaller resources inside a transaction resource. It's true that they usually do this so that you can post those smaller resources, which can still be large, separately. But since the transaction itself is considered a resource, I don't believe that coming up with a representation for it that you could submit in one POST request would make this design any less RESTful.
The same works in the other direction too. When the client GETs search results from the server, the server might wrap these inside a results resource so that the client can just get this one resource instead of several separate ones.
So I'd say that wrapping these small 5 kB resources inside a larger collection resource can be considered RESTful, and is probably the way you should go.
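Sketched in Go with hypothetical types: the batch itself is just another resource with its own XML representation, which a client can submit in a single POST.

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// Batch is a single resource wrapping many ~5 kB entities.
type Batch struct {
	XMLName  xml.Name `xml:"batch"`
	Entities []Entity `xml:"entity"`
}

type Entity struct {
	ID   string `xml:"id,attr"`
	Name string `xml:"name"`
}

func main() {
	b := Batch{Entities: []Entity{{ID: "1", Name: "first"}, {ID: "2", Name: "second"}}}
	out, err := xml.MarshalIndent(b, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // one document, ready to send in a single POST
}
```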
There are at least two problems here which prevent you from being RESTful.
Each resource needs to be identified by a URI. Acting on a resource means calling its URI using an HTTP call. Consequently, you cannot invoke multiple actions on multiple resources in just one HTTP call.
The resources are identified by nouns and represent entities. This implies that to insert an Employee and a Car you need to call two different resources for each of the respective entities.
So in summation you cannot take a purely RESTful approach here. However, REST is designed to help by way of conventions, not constrict you. The best solution here is for you to create a custom action which does what you need.
Alternatively, you can create a generic wrapper entity with INSERT, UPDATE and other actions which take in blobs of disparate data as XML. However, this will undermine your other endpoints, because it now becomes possible to insert a Car record both through the generic wrapper and through the /Car/ URI.
Without knowing much about your actual requirements, I would suggest you don't expose this functionality via REST specifically. Behind the scenes you could still call your INSERT action methods within the various controllers once you break up the incoming collection of disparate objects.
As long as the big wrapper has a valid media-type then it is fine to treat it as a single resource. Figuring out what that media-type is going to be is the tricky part.
Nothing prevents you from creating multiple resources upon addition, i.e. POSTing a representation that is a list of X to a resource that is itself a list of X.
You'd then send back a 201 Created with the list of URIs of all the resources created. Again, it's all perfectly allowable.
What you lose is the visibility to the intermediaries that PUT would give you, which prevents them from caching or modifying the specific resource at its specific URI. Although a smart intermediary could process the 201 for caching purposes.
And having a batch resource doesn't prevent each created resource from getting its own URI post-creation (after the POST), with PUT / DELETE enabled on those resources. Or a combination.
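
A sketch of the server side in Go, with a hypothetical /entities/batch endpoint: one POST creates many resources, and the 201 response lists the URIs of everything created, so each new resource can be addressed (and PUT/DELETEd) individually afterwards.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type Entity struct {
	Name string `json:"name"`
}

// create stands in for the real persistence; it returns the new ID.
func create(e Entity) string { return "some-id" }

// POST /entities/batch: one request, many resources created.
func postBatch(w http.ResponseWriter, r *http.Request) {
	var entities []Entity
	if err := json.NewDecoder(r.Body).Decode(&entities); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	uris := make([]string, 0, len(entities))
	for _, e := range entities {
		uris = append(uris, fmt.Sprintf("/entities/%s", create(e)))
	}
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusCreated) // 201 with the URIs of all created resources
	json.NewEncoder(w).Encode(map[string][]string{"created": uris})
}

func main() {
	http.HandleFunc("/entities/batch", postBatch)
	http.ListenAndServe(":8080", nil)
}
```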