What is the real benefit of using GraphQL?

I have been reading articles on the web about the benefits of GraphQL, but so far I have not been able to find a single benefit of it.
Some of the most common benefits mentioned in those articles are below:
No Overfetching with GraphQL.
Reducing number of calls made from client side.
Data Load Control Granularity
Evolve your API without versions.
Those all make sense, but it is not GraphQL itself that provides these benefits. Any second-layer API written in Java, Python, or any other language would be able to provide these benefits too. It is basically introducing another layer of abstraction above the data-retrieval systems, REST or whatever, and decoupling the client side from that layer. After you do that, everything you can do with GraphQL can also be done in any other language.
Anyone can implement, say, a Scala server that retrieves data from various APIs, integrates it, creates objects internally, and feeds the client only the relevant part of the data, with total control over it. Such an API can easily be versioned and released accordingly. Considering how cumbersome GraphQL's syntax is and how difficult it is to build a good cache around it, I can't see why you would really use it.
So the overall question: are there any benefits of GraphQL that come from GraphQL itself, and not from the fact that you have implemented another layer of abstraction between your applications and your APIs?
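For reference, here is the kind of query those articles use to illustrate "no overfetching" and "fewer calls"; the schema and endpoint are made up, and the snippet is only a sketch:

```typescript
// Hypothetical schema: the client names exactly the fields it needs and
// gets them in one round trip -- no unused fields, no follow-up calls.
const query = `
  query {
    user(id: "42") {
      name
      posts(last: 3) {
        title
      }
    }
  }
`;

async function run(): Promise<void> {
  // A GraphQL server accepts this shape over a plain HTTP POST.
  const response = await fetch("https://example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  console.log(await response.json());
}

run();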

Best practices known as REST existed earlier, too.
GraphQL is more standardized than REST, safer (no injections), and its syntax gives great flexibility in the face of quickly changing client needs.
It's just a good standard of best practices.

I feel GraphQL is another example of over-engineering. I would say the "best standards and practices" are "keeping it simple."
Breaking down an object and building a custom one before sending it to the client is very basic.

Related

Multiple GraphQL "hops" in end-to-end flow?

I am working on an enterprise-level system and am trying to understand if my idea is super inefficient.
Our company is looking to use GraphQL, and we want to use it as a way to assist the front-end client in retrieving data, but also as a data abstraction over our raw data. What I mean is:
If we have GraphQL closer to the client as one instance (that GraphQL server would sit in front of our domain REST services), but then we also have GraphQL sitting atop the data layer, does that present any issues?
I know the question might arise: "Why don't you have GraphQL over the domain services, and GraphQL over the data, but then federate those into a gateway and have clients pull from there!" But one of the tenets we are sticking to at our company is that there must be an abstraction over our data. So, we either abstract that data via a REST API (which we do now), or we have GraphQL over the data acting as the abstraction.
So given that "data abstraction" requirement, I want to understand if there are any issues with the two "hops"/instances of GraphQL in the end-to-end flow?
This is a common pattern. We used it for our backend services, which exposed GraphQL at the domain layer and then used Prisma for the data layer.
I have two recommendations from our experience.
Try, as best as possible, to auto-generate both your resolvers and your data API using a language-specific tool.
Do testing against the domain layer to make sure that nothing from the data layer slips through. It will be tempting to do simple "pass-through" requests, as the two schemas will often start off synchronized, and you may wind up accidentally passing through data you don't want going to the client.
(Shameless plug!) For the second one, Meeshkan does this sort of testing in an automated fashion, and there are plenty of testing frameworks you can use to execute hand-written tests as well (e.g. Cucumber).
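To illustrate the second recommendation, here is a rough sketch (all names hypothetical) of a domain-layer resolver that maps fields explicitly instead of passing the data-layer record through:

```typescript
// Hypothetical data-layer record, e.g. what a Prisma query might return.
interface UserRecord {
  id: string;
  email: string;
  passwordHash: string; // must never reach the client
  createdAt: Date;
}

// Domain-layer resolver: each exposed field is mapped explicitly.
// A pass-through (`return record`) would silently leak passwordHash
// as soon as the two schemas drift apart.
const resolvers = {
  Query: {
    user: async (
      _parent: unknown,
      args: { id: string },
      ctx: { loadUser: (id: string) => Promise<UserRecord> },
    ) => {
      const record = await ctx.loadUser(args.id);
      return {
        id: record.id,
        email: record.email,
        memberSince: record.createdAt.toISOString(),
      };
    },
  },
};
```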

CQ(R)S using RPC-style API instead of REST

I'm working on a PHP/JS-based project where I'd like to introduce Domain-Driven Design in the backend. I find that commands and queries are a better way to express my public domain than CRUD, so I'd like to build my HTTP-based API following the CQS principle. It is not quite CQRS, since I want to use the same model for the command and the query side, but many of the principles are the same. For API documentation I use Swagger.
I found an article which exposes CQRS through REST resources (https://www.infoq.com/articles/rest-api-on-cqrs). They use 5LMT to distinguish commands, which Swagger does not support. Moreover, don't I lose the benefit of the intention-revealing interface which CQS provides by putting it into a resource-oriented REST API? I haven't found any articles or products which expose commands and queries directly through an HTTP-based backend.
So my question is: is it a good idea to expose commands and queries directly through the API? It would look something like this:
POST /api/module1/command1
GET /api/module1/query1
...
It wouldn't be REST, but I don't see how REST brings anything beneficial to the table. Maintaining REST resources would introduce yet another model. Moreover, having commands and queries in the URL would allow the use of features like routing frameworks and access logs.
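A rough sketch of what such routing could look like (module and operation names are placeholders), e.g. with Express:

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Commands mutate state and go over POST; queries are side-effect-free GETs.
app.post("/api/module1/command1", (req, res) => {
  // e.g. handle a RegisterUser command taken from req.body
  res.status(202).json({ accepted: true });
});

app.get("/api/module1/query1", (req, res) => {
  // e.g. answer a ListUsers query, with parameters taken from req.query
  res.json({ results: [] });
});

app.listen(3000);
```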
The commands and queries are an implementation detail. This is apparent from the fact that they wouldn't exist at all if you had chosen an alternative style.
A RESTful API usually (if done right) follows the conceptual domain model. The conceptual domain model is not an implementation detail, because it is in your users' heads and is a source of requirements for your system.
Thus, a RESTful API is usually much easier to understand, because the clients (developers) have to understand the conceptual domain model anyway, and RESTful interfaces follow the concepts of that model. The same is not true for a commands-and-queries-based API.
So we have a trade-off.
You already identified the drawbacks of building a RESTful API around the commands and queries, and I pointed out the drawbacks of your suggestion. My advice would be the following:
If you're building an API that other teams or even customers consume, then go the RESTful way. It will be much easier for the clients to understand your API.
If, on the other hand, the API is only an internal one, e.g. used by a JS front-end that your team builds, and you have no external clients on the API, then your suggestion of exposing the commands and queries directly can be a shortcut that's worth the drawbacks mentioned above.
If you take the shortcut, be honest with yourself and acknowledge it as such. This means that as soon as your requirements change and you now have external clients, you should probably build a RESTful API.
don't I lose the benefit of the intention-revealing interface which CQS provides by putting it into a resource-oriented REST API?
Intention revealing for whom? A client side programmer? A server side programmer? In the same team/org that maintains the domain model? Outside of that team/org? Someone on the internet who would access your API naively by just probing a starting URI with an OPTIONS request? Someone who would have access to the full API documentation with URIs and payloads structure?
REST is orthogonal to CQRS. In the end, no matter how you expose your resources on the web, domain notions will be reflected somewhere, whether in the URIs, the payloads, or the media types. I don't think using DDD or CQRS should influence the way you design your API that much.

Does REST only cover CRUD?

I'm writing an AngularJS application that's communicating with an API, and right now that API is following the REST architecture.
I know the basics of REST, but I still haven't understood whether REST covers only CRUD operations. For example, if I'm building a community website and I want to make it possible for people to add each other as friends, is this covered by REST in any way? What about search queries? If not, is there any other architecture that's recommended to follow, or should I roll my own?
Also, should I even be using REST for a community website? There are a lot of cases where it seems like it's not the optimal design, but when I google around I only get results saying that REST is the best practice. For example, PUT /api/user/:id wouldn't be very useful, since the only user you're able to update (unless you're an admin) is yourself.
It all depends. REST is just an architectural style and (in many forms, unfortunately) is used all over the world. I also follow REST rules in all types of applications, but I try to stay at the second level of Richardson's Maturity Model. Why? Because I consider HAL, HATEOAS, and all the API-discoverability machinery unnecessary buzz; unfortunately, documentation is still very important.
What you need to consider while designing an API is whether it's going to be public or not. If it's not, you can probably do whatever you want/need (of course, this is not a good idea). If it is going to be public, consistency starts to play a great role: the API needs to be designed in such a way that it is both intuitive and easy to use. E.g., it is not a good idea to introduce a new endpoint every time you need a new operation, so following CRUD REST rules seems to be a reasonable option. When it comes to going beyond CRUD: yes, I've created APIs with verbs in endpoints, but it was almost always the last resort, and to be honest I don't feel guilty.
I think the question is a bit too broad, but I'll try to answer.
REST only covers the CRUD operations?
No, it covers other operations as well. You have to transform your operation into an HTTP method and a resource. The resource can have identifiers: URIs. A URI together with an HTTP method composes a hyperlink, and this hyperlink can be followed by the client. You can attach the operation name, etc. to the hyperlink as metadata, so the client can use it to recognize the operation. At least that's how it should work.
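For example, the "adding a friend" operation from your question can be phrased as creating a member of a friends collection; the paths and payload here are hypothetical:

```typescript
// "Alice adds Bob as a friend" becomes "create a member of Alice's
// friends collection" -- an HTTP method (POST) applied to a resource.
async function addFriend(): Promise<void> {
  await fetch("https://api.example.com/users/alice/friends", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ userId: "bob" }),
  });
}
```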
What about search queries?
General queries are not currently supported, because there is no standard RDF vocabulary which could be used to describe a general query. There are non-standard workarounds you can use, or, for example, a SPARQL endpoint. More fixed queries can be exposed with URI templates.
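As an illustration, a fixed search could be exposed through a URI template like this; the template is hypothetical, and the expansion logic is only a sketch (real clients would use an RFC 6570 library):

```typescript
// A fixed query exposed via a URI template (template hypothetical).
const template = "https://api.example.com/users/search{?name,limit}";

// Minimal expansion for this simple form-style template.
function expand(tpl: string, params: Record<string, string>): string {
  const query = new URLSearchParams(params).toString();
  return tpl.replace(/\{\?[^}]*\}/, query ? `?${query}` : "");
}

console.log(expand(template, { name: "bob", limit: "10" }));
// -> https://api.example.com/users/search?name=bob&limit=10
```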
Also, should I even be using REST for a community website?
As far as I know, Facebook uses it for 3rd-party clients, so you can develop a Facebook application using their REST API. Another advantage is that it scales better than SOAP. If you don't need these features right now, then you can use something else you are more familiar with.

GWT non-Java backend application structure

Does anyone know of a good example of a GWT application with a non-Java backend? Something like the "Contacts" application on the official GWT pages. I'm interested in the following topics specifically (this is how I see the "back" part of the application, but it may be different, of course):
DTO serialization.
Communication layer (the very back one). Should it use generics and work with any abstract DTO? Is there any other proven approach?
Service layer, which uses the communication layer with specific DTOs.
Request caching. Should it be implemented on the service or the communication level?
Good abstraction, so we can easily substitute parts for testing and other purposes, like using different serializers (e.g. XML, JSON) or different server behaviors when managing user sessions (the URL might change once the user is logged in).
I know there are many similar topics here, but I haven't found one that is focused on the structure of the client part.
Use REST on both sides. PHP/Python REST server-side. GWT client-side.
Don't use GWT-RPC.
The following gives a guideline for Java-to-Java REST, but there is leeway to move out of server-side Java. It explains why REST is appropriate.
http://h2g2java.blessedgeek.com/2011/11/gwt-with-jax-rs-aka-rpcrest-part-0.html
REST is an industry-established pattern (Google, Yahoo; in fact, every stable-minded establishment deploys REST services).
REST is abstracted as an HTTP-level data structure, for which Java, PHP, and Python all have established libraries that help ensure DTO integrity.
Communication layer (the very back one). Should it use generics ???
I don't understand the question, or why the question exists. Just use the REST pattern to provide integrity between a server and a client written in non-homogeneous languages, and to the HTTP request/response.
If you are using Java at the back-end, there is no escape from using generics. Generics save code. But using generics extensively requires the programmer to have an equally extensive capacity to visualize them. If your back-end is PHP or Python, are there generics for PHP? Python generics? You might as well stay in Java land or C# land and forget about a Java-free service provider.
Did you mean DTO polymorphism? Don't try polymorphism, or decide on it, until after you have established your service. Then adaptively, and with agility, introduce polymorphism into your DTOs if you really see the need. But try to avoid it, because with JSON data interchange it gets rather confusing between server and client, especially if they don't speak the same programming language.
If you are asking about HTTP-level generics: I don't know of any framework, not SOAP, not REST, where you could have generics carried by the XML or JSON. Is there one?
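On the DTO-polymorphism confusion mentioned above, a common mitigation is to tag each JSON variant with a literal type field; the types here are hypothetical and shown in TypeScript only to illustrate the idea:

```typescript
// A "kind" tag keeps polymorphic DTOs unambiguous in JSON, whatever
// language each side is written in.
interface PdfDocument { kind: "pdf"; pages: number; }
interface VideoDocument { kind: "video"; durationSeconds: number; }
type DocumentDto = PdfDocument | VideoDocument;

function describe(doc: DocumentDto): string {
  // Both server and client can branch safely on the tag.
  switch (doc.kind) {
    case "pdf": return `PDF, ${doc.pages} pages`;
    case "video": return `video, ${doc.durationSeconds}s`;
  }
}
```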
Service layer? REST.
Request caching? Cache at every appropriate opportunity. Have the service provider cache query results for items common and static to all sessions, like menus, drop-down choices, labels, etc. Cache your history and places.
On the GWT side, cache records so that the forward/backward button will not trigger an inadvertent query. Use the MVP pattern and history to manage history traversal that might trigger a redisplay of info.
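A minimal sketch of that kind of client-side cache; the endpoint is hypothetical, and GWT code would be Java, so this TypeScript is only to show the idea:

```typescript
// Each URL is fetched once per session, so Back/Forward navigation
// re-uses the stored result instead of re-querying the server.
const cache = new Map<string, Promise<unknown>>();

function cachedFetch(url: string): Promise<unknown> {
  if (!cache.has(url)) {
    cache.set(url, fetch(url).then((r) => r.json()));
  }
  return cache.get(url)!;
}

// The first call hits the network; the second is served from the cache.
cachedFetch("/api/menu");
cachedFetch("/api/menu");
```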
If you are talking about unified info abstraction, you should start your project with JAX-RS to define/test the API and perform the data abstraction, without performing any business logic.
Then, once your HTTP-level APIs and DTOs are defined, convert the server side to the language of your choice and proceed to write more complex code.
BTW, I don't dig your terminology "Backend".
We normally use the terminology "client-side" for the service consumer, "server-side" for the service provider, "backend" for data repository/persistence access, and "mid-tier" or "middleware" for the intervening/auxiliary software required to provide mathematical/scientific/graphical analysis/synthesis.
If our terminologies did not coincide, I probably answered this question wrong.

RESTful API runtime discoverability / HATEOAS client design

For a SaaS startup I'm involved in, I am building both a RESTful web API and a couple of client apps on different platforms that consume it. I think I've got the API figured out, but now I'm turning to the clients. As I've been reading about REST, I see that a key part of REST is discovery, but there seems to be a lot of debate between two different interpretations of what discovery really means:
Developer discovery: the developer hard-codes copious amounts of API detail into the client, such as resource URIs, query parameters, supported HTTP methods, and other details discovered by browsing the docs and experimenting with the API's responses. This type of discovery IMHO necessitates cool linkage and raises the API-versioning question, and it leads to hard coupling of the client code to the API. Not much better than a well-documented collection of RPCs, it seems.
Runtime discovery: the client app itself is able to figure out everything it needs with little or no out-of-band information (presumably, only knowledge of the media types the API deals with). Links can be hot. But to make the API very efficient, a lot of link templating for query parameters seems to be needed, which lets out-of-band info creep back in. There are possibly other difficulties I haven't thought of yet, since I haven't gotten to that point in development. But I do like the idea of loose coupling.
Runtime discovery seems to be the holy grail of REST, but I'm seeing precious little discussion of how to implement such a client. Almost all REST sources I've found seem to assume developer discovery. Does anyone know of some runtime-discovery resources? Best practices? Examples or libraries with real code? I'm working in PHP (Zend Framework) for one client and Objective-C (iOS) for the other.
Is runtime discovery a realistic goal, given the present set of tools and knowledge in the developer community? I can write my client to treat all of the URIs in an opaque manner, but how to do this most efficiently is a question, especially over low-bandwidth connections. Anyway, URIs are only part of the equation. What about link templating in the runtime context? How about communicating which methods are supported, aside from making a lot of OPTIONS requests?
This is definitely a tough nut to crack. At Google, we've implemented our Discovery Service that all our new APIs are built against. The TL;DR version is we generate a JSON Schema-like spec that our clients can parse - many of them dynamically.
That means easier SDK upgrades for developers and easier/better maintenance for us.
By no means a perfect solution, but many of our devs seem to like it.
See the link for more details (and make sure to watch the vid).
Fascinating. What you are describing is basically the HATEOAS principle. What is HATEOAS, you ask? Read this: http://en.wikipedia.org/wiki/HATEOAS
In layman's terms, HATEOAS means link following. This approach decouples your client from specific URLs and gives you the flexibility to change your API without breaking anyone.
You did your homework, and you got to the heart of it: runtime discovery is the holy grail. Don't chase it.
UDDI tells a poignant story of runtime discovery: http://en.wikipedia.org/wiki/Universal_Description_Discovery_and_Integration
One of the requirements that should be satisfied before you can call an API 'RESTful' is that it should be possible to write a generic client application on top of that API. With a generic client, a user should be able to access all of the API's functionality. A generic client is a client application that does not assume that any resource has a specific structure beyond the structure that is defined by the media type. For example, a web browser is a generic client that knows how to interpret HTML, including HTML forms, etc.
Now suppose we have an HTTP/JSON API for a web shop and we want to build an HTML/CSS/JavaScript client that gives our customers an excellent user experience. Would it be a realistic option to make that client a generic client application? No. We want to provide a specific look and feel for every specific data element and every specific application state. We don't want to include all knowledge about these presentation specifics in the API; on the contrary, the client should define the look and feel, and the API should only carry the data. This implies that the client has hard-coded coupling of specific resource elements to specific layouts and user interactions.
Is this the end of HATEOAS and thus the end of REST? Yes and no.
Yes, because if we hard-code knowledge about the API into the client, we lose the benefit of HATEOAS: server-side changes may break the client.
No, for two reasons:
Being "RESTful" is a property of the API, not of the client. As long as it is possible, in theory, to build a generic client that offers all capabilities of the API, the API can be called RESTful. The fact that clients don't obey the rules, is not the API's fault. The fact that a generic client would have a lousy user experience is not an issue. Why is it important to know that it is possible to have a generic client, if we don't actually have that generic client? This brings me to the second reason:
A RESTful API offers clients the option to choose how generic they want to be, i.e. how resilient to server-side changes they want to be. Clients which need to provide a great user experience may still be resilient to URI changes, to changes in default values and more. Clients doing batch jobs without user interaction may be resilient to other kinds of changes.
If you are interested in practical examples, check out my JAREST paper. The last section is about HATEOAS. You will see that with JAREST, even highly interactive and visually attractive clients can be quite resilient to server-side changes, though not 100%.
I think the important point about HATEOAS is not that it is some client-side holy grail, but that it isolates the client from URI changes: it is assumed you are using known (or developer-discovered custom) link relations that allow the system to know which link for an object is, say, the editable form. The important point is to use a media type that is hypermedia-aware (e.g. HTML, XHTML, etc.).
You write:
To make the API very efficient, a lot of link templating for query parameters seems to be needed, which makes out-of-band info creep back in.
If that link template is supplied in the previous request, then there is no out-of-band information. For example, an HTML search form uses link templating (/search?q=%#) to generate a URL (/search?q=hateoas), but nothing is known by the client (the web browser) other than how to use HTML forms and GET.
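To make that concrete, here is a sketch of a link-following client; the response shape, link-relation names, and entry URI are assumptions, not a standard format:

```typescript
// The only out-of-band knowledge is the entry URI and the media-type
// conventions; every other URI is discovered from server responses.
interface Link { rel: string; href: string; }
interface Resource { links: Link[]; [key: string]: unknown; }

async function getResource(uri: string): Promise<Resource> {
  const res = await fetch(uri);
  return (await res.json()) as Resource;
}

function follow(resource: Resource, rel: string): string {
  const link = resource.links.find((l) => l.rel === rel);
  if (!link) throw new Error(`no link with rel "${rel}"`);
  return link.href;
}

async function main(): Promise<void> {
  // Only the entry point is hard-coded; the "search" URI is discovered.
  const root = await getResource("https://api.example.com/");
  const search = await getResource(follow(root, "search"));
  console.log(search);
}

main();
```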