I think that my problem is a common one, and I'm weighing the costs and benefits of GraphQL as a solution.
I work on a product whose data sits behind a monolithic CRUD-based REST API. Components of our application expose a search interface over that data, and of course need some kind of server-side support for making those requests. This could include sorting, filtering, choosing fields, etc. There are, of course, more traditional ways of providing these functions in a REST context, like query-parameter add-ons for endpoints, but it would be cool to try out GraphQL here and build a foundation for expanding its use for querying.
GraphQL exposes a really nice query language for searching on data, and ultimately allows me to tailor the language of search specifically to my domain. However, I'm not sure if there is a great way to leverage the IDL without managing a separate server altogether.
Take the following Java Jersey API Proof-of-Concept example:
@GET
@Path("/api/v1/search")
public Response search(QueryIDL query) throws IOException {
final SchemaParser schemaParser = new SchemaParser();
TypeDefinitionRegistry typeDefinitionRegistry = // load schema
RuntimeWiring runtimeWiring = // wire up data-fetching classes
SchemaGenerator schemaGenerator = new SchemaGenerator();
GraphQLSchema graphQLSchema =
schemaGenerator.makeExecutableSchema(typeDefinitionRegistry, runtimeWiring);
GraphQL build = GraphQL.newGraphQL(graphQLSchema).build();
ExecutionResult executionResult = build.execute(query.toString());
return Response.ok(executionResult.getData()).build();
}
I am just planning to take a request body into my Jersey server that looks exactly like the request that would be sent to a GraphQL server. I'm then leveraging some library support to interpret and execute the request for data.
Without really thinking too much about everything that could go wrong, it looks like a client would be able to use this API much the way they would use a GraphQL server, except that I don't necessarily need to manage a separate server just to satisfy my search requirements.
Does it seem valuable, or silly, to use the GraphQL IDL in an endpoint-based context like this?
Apart from the fact that you shouldn't rebuild the schema or the GraphQL instance on each request (there are cases where you may want to rebuild the GraphQL instance, but yours isn't one of them), this is pretty much the canonical way of using it.
It is rather uncommon to keep a separate server for GraphQL, and it usually gets introduced exactly the way you described - as just another endpoint next to your usual REST endpoints. So your usage is legit - not silly at all :)
Btw, I'm not sure what QueryIDL would be... the query is just a string. No need for a special class.
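For illustration, here's a minimal sketch of that shape, assuming graphql-java and JAX-RS; the schema file name and the empty runtime wiring are stand-ins for your real ones, and the endpoint is shown as a POST since the query travels in the request body:

import graphql.ExecutionResult;
import graphql.GraphQL;
import graphql.schema.GraphQLSchema;
import graphql.schema.idl.RuntimeWiring;
import graphql.schema.idl.SchemaGenerator;
import graphql.schema.idl.SchemaParser;
import graphql.schema.idl.TypeDefinitionRegistry;

import java.io.File;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("/api/v1/search")
public class SearchResource {

    // Built once (e.g. at startup) and reused for every request.
    private final GraphQL graphQL;

    public SearchResource() {
        TypeDefinitionRegistry registry = new SchemaParser().parse(new File("schema.graphqls")); // load schema
        RuntimeWiring wiring = RuntimeWiring.newRuntimeWiring().build(); // wire up data-fetching classes here
        GraphQLSchema schema = new SchemaGenerator().makeExecutableSchema(registry, wiring);
        this.graphQL = GraphQL.newGraphQL(schema).build();
    }

    @POST
    public Response search(String query) {
        // The query is just a string; execute it against the prebuilt instance.
        ExecutionResult result = graphQL.execute(query);
        return Response.ok(result.getData()).build();
    }
}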
I really like the way Spring cloud function decouples the business logic from the runtime target (local or cloud) and makes it easy to integrate with serverless providers.
I plan to use SCF with AWS Lambda behind an API gateway to design the backend of a system.
However, I am not completely clear on the recommended way to handle REST-related parameters such as query params, headers, path, etc. inside the Spring Cloud Functions.
As per our initial analysis, we could derive two possible approaches:
1) When enabling “Lambda proxy integration” in API Gateway, query params and other request information are available as message headers inside the SCF.
2) We can use “Mapping templates” in API Gateway to map all the required information into a JSON body and deserialize it as a POJO to take input directly into the SCF. This way, the SCF does not need to bother about how the required data is passed to the API.
What is the recommended way to achieve this? Are we missing something that would enable us to do this in a better way?
I don't think you are missing anything featurewise, except perhaps that it might also be convenient to work with composite functions - e.g. marshal|transform, where marshal is a Function<Message<?>, ?> and transform is the business logic. The marshal function could be generic (and convert to some sort of canonical form), and be provided as an autoconfiguration in a shared library (for instance).
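As a rough sketch of that idea (assuming Spring Cloud Function; the SearchRequest/SearchResult types and the header name are hypothetical):

import java.util.function.Function;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;

@Configuration
public class FunctionConfiguration {

    // Hypothetical canonical input/output types for the business logic.
    record SearchRequest(String firstName) {}
    record SearchResult(String summary) {}

    // Generic "marshal" step: pulls query params/headers out of the incoming Message
    // (populated by the Lambda proxy integration) and converts them to the canonical form.
    // This is the piece that could live in a shared library as an autoconfiguration.
    @Bean
    public Function<Message<?>, SearchRequest> marshal() {
        return message -> new SearchRequest((String) message.getHeaders().get("firstName"));
    }

    // Business logic: never sees API Gateway / HTTP details.
    @Bean
    public Function<SearchRequest, SearchResult> transform() {
        return request -> new SearchResult("results for " + request.firstName());
    }
}

The composition would then be exposed via spring.cloud.function.definition=marshal|transform.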
GraphQL's principal aim is to solve the overfetching problem faced by many REST APIs, and it does that by querying for only the specific fields mentioned in the query.
But in REST APIs, if we use a fields parameter, it does the same thing. So why do we need GraphQL if REST can solve overfetching like this?
The option to fetch partial fields is only one of the key features of GraphQL, but not the only one.
One other important advantage is the graph nature of the model. By treating your schema as a graph (that is, several resources tied together by fields), GraphQL allows you to fetch a complex response, constructed of several data types, in a single API call. This is flexibility that you don't have in a standard REST API.
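For example, a single query (the field names here are made up) can pull a person together with their orders and each order's items in one round trip:

{
  person(id: "1") {
    firstName
    orders {
      total
      items {
        name
        price
      }
    }
  }
}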
Both of these features can obviously be achieved with REST as well, but GraphQL gives them to you in a much simpler and more intuitive way.
Take a look at this post, there's a fairly good explanation there of the advantages (and disadvantages) of GraphQL.
https://www.altexsoft.com/blog/engineering/graphql-core-features-architecture-pros-and-cons/
When you have a REST setup, you're typically returning a whole JSON representation from each endpoint. This includes fields you may not need, which leads to more data usage or more HTTP calls (if you split your RESTful API up, that is).
GraphQL on the other hand gives you exactly what you're asking for when you query with a single POST/GET request.
We are trying to follow a fairly strict idiom for our REST service, but we have come across a situation where two clients require different representations of the same resource. One is the front end, which would prefer a very minimal resource with only the fields it requires, in a flattened structure (for performance); the other requires all the fields we have in our data store, in a heavily nested structure.

What is the idiomatic way for a REST service to deal with this, given that the canonical URL should be the same because both clients are accessing the same resource? We thought of adding projections to the request, but the structure would still be quite nested, which causes performance issues in the JS client as it has to walk the structure and flatten it, something that can be quite costly when the number of resources returned is high.
I would suggest there are two alternatives:
1) If the requested fields can vary, you could specify the fields (structure) you want as query parameters. This is common in REST APIs. With no specification, you would return a default list of fields; what the default should be depends on the service, but in general the minimal set makes a better default for performance. In order to avoid listing all fields, something like fields=all could be used. In your case, structure might make more sense.
2) You can encode the field request in a custom request header. Some would argue that this is the more RESTful approach, as you're only modifying the format of the response and not the underlying action invoked, and therefore the URL should stay the same.
In practice, most services prefer the first approach as it's considered more approachable.
Personally, I think it's a marginal choice. I prefer to encode the return media (JSON, HTML, XML, etc.) in the Accept header. Any decent developer has tools that make it easy enough to set the headers, but the fields query parameter idiom is, in my experience, far more prevalent and there's a great deal to be said for convention.
Note, if you use the headers approach, you should probably not use the Accept header for the structure/field specification. Add your own header if you go that route.
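Concretely, the two approaches might look something like this (the field and header names are just illustrative):

1) GET /api/person/1?fields=id,firstName,lastName
2) GET /api/person/1
   X-Fields: id,firstName,lastName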
I am working on a REST service and so far all the queries are retrieved using a GET request.
Right now we are using a sort of routing rule like this one:
API/Person/{id} GET
http://api.com/person/1
Now, what if I want to ask the REST API: "Give me a Person with FirstName = 'Pippo'"?
I have a complex DTO that I called PersonQueryDTO that can be sent to the REST method to query the database using the supplied criteria.
Is this a good way to do it or should I build complex queries in a different way?
For me it's important to keep the REST principles.
If you want to stick with REST principles, then the way to do something like that is to supply additional parameters in the URL e.g.
GET API/Person?FirstName=SomeName
REST is all about identifying resources: API/Person identifies your collection of Person, and the additional parameters are nothing but metadata which the service can use internally to determine what sort of result to return.
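In JAX-RS/Jersey terms, that might look something like this minimal sketch (the Person type and personRepository are hypothetical):

@GET
@Path("/person")
@Produces(MediaType.APPLICATION_JSON)
public List<Person> findPeople(@QueryParam("FirstName") String firstName) {
    // Treat the query parameter as filter metadata; no parameter means the full collection.
    if (firstName == null) {
        return personRepository.findAll();
    }
    return personRepository.findByFirstName(firstName);
}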
I have the basics of a REST service done, with "standard" list and GET/POST/PUT/DELETE verbs implemented around my nouns.
However, the client base I'm working with also wants to have more powerful operations. I'm using Mongo DB on the back-end, and it'd be easy to expose an "update" operation. This page describes how Mongo can do updates.
It'd be easy to write a page that takes a couple of JSON/XML/whatever arguments for the "criteria" and "objNew" parts of the Mongo update function. Maybe I make a page like http://myserver.com/collection/update that takes a POST (or PUT?) request, with a request body containing that data. Scrub the input for malicious queries, enforce security, and we're done. Piece of cake.
My question is: what's the "best" way to expose this in a RESTful manner? Obviously, the approach I described above isn't kosher because "update" isn't a noun. This sort of thing seems much more suitable for a SOAP/RPC method, but the rest of the service is already using REST over HTTP, and I don't want users to have to make two different types of calls.
Thoughts?
Typically, I would handle this as:
url/collection
url/collection/item
GET collection: Returns a representation of the collection resource
GET collection/item: Returns a representation of the item resource
(optional URI params for content-types: json, xml, txt, etc)
POST collection/: Creates a new item (if via XML, I use XSD to validate)
PUT collection/item: Update an existing item
DELETE collection/item: Delete an existing item
Does that help?
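As a rough JAX-RS sketch of that layout (the resource name and the in-memory Map standing in for your data store are just for illustration):

import java.net.URI;
import java.util.Collection;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import javax.ws.rs.Consumes;
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/collection")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class CollectionResource {

    // Stand-in for the real data store (e.g. Mongo).
    private static final Map<String, Map<String, Object>> STORE = new ConcurrentHashMap<>();

    @GET
    public Collection<Map<String, Object>> list() {               // GET collection
        return STORE.values();
    }

    @GET
    @Path("/{id}")
    public Map<String, Object> get(@PathParam("id") String id) {  // GET collection/item
        return STORE.get(id);
    }

    @POST
    public Response create(Map<String, Object> item) {            // POST collection: create a new item
        String id = UUID.randomUUID().toString();
        STORE.put(id, item);
        return Response.created(URI.create("/collection/" + id)).build();
    }

    @PUT
    @Path("/{id}")
    public Response update(@PathParam("id") String id, Map<String, Object> item) {  // PUT collection/item
        STORE.put(id, item);
        return Response.ok(item).build();
    }

    @DELETE
    @Path("/{id}")
    public Response delete(@PathParam("id") String id) {          // DELETE collection/item
        STORE.remove(id);
        return Response.noContent().build();
    }
}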
Since, as you're aware, it isn't a good fit for REST, you're just going to have to do your best and invent a standard to make it work. Mongo's update functionality is so far removed from REST that I'd actually allow PUTs on the collection. Ignore the parameters in my examples; I haven't thought too hard about them.
PUT collection?set={field:value}
PUT collection?pop={field:1}
Or:
PUT collection/pop?field=1