I'm looking at the possibility of porting many of our REST API services to gRPC, but here's the thing.
We currently make heavy use of one API method that calls one of several PostgreSQL functions based on the function name received as a parameter, with the input as the body of the request, i.e. api.com/functions/exec/{name}. The functions defined in the DB receive and return JSON.
So, if I understood correctly, a gRPC method can only have a static data structure for both its request and response types. How can I make it flexible, given that the structure of the data sent and returned depends on the DB function being called?
The structure returned by the API is something like
{
"code": 200,
"data": {
"some": "value"
},
"status": "Correct..blabla"
}
The structure of the data sent to the API depends on the mode being used. If it's encrypted, the request body will be a binary string:
a37163b25ed49d6f04249681b9145f63f64d84a486e234fa397f134f8b25fd62f1e755e40c09da09f9900beea4b51fc638e7db8730945bd319600943e01d10f2512fa6ab335fb65de32fc2ee0a2150f7987ae0999ea5d8d09e1125c533e7803ba9118ee5aff124282149792a33dce992385969b9df2417613cd2b039bf6056998469dfb015eade9585cb160275ec83de06d027180818652c60c09ada1c98d6e9ed11df9da737408e14161ae00aaf9d4568a015dc6b6d267f1ee04638dd60e4007dc543524b83ca6b2752c5f21e9dfff3b15e6e55db8b9be9e8c07d64ccd1b3d44ce48cc3e49daee5ae1da5186d9ef6994ccf41dc86a4e289fdbab8ca4a397f929445c42f40268ebb2c3d8bcb80f9e844dba55020da665bce887bd237ae2699876e12cc0043b91578f9df2d76391d50fbf8d19b6969
If it's not encrypted, then it's just plain JSON:
{
"one": "parameter"
}
One possible solution I can think of is to always use a bytes field, both in the request and the response types; then the only thing I have to do is convert JSON to a byte string and vice versa, right?
I'm open to suggestions.
Depending on your needs and performance requirements, going the raw bytes route might be sensible, if you really don't have any other uses for protobuf fields. If you do, you might want to define a message type that supports encrypted and unencrypted message fields like: https://github.com/grpc/grpc/blob/master/src/proto/grpc/reflection/v1alpha/reflection.proto#L77
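A minimal sketch of what the raw-bytes route implies on the application side, assuming a proto message with a single bytes payload field. The helper names are made up and the gRPC plumbing is omitted; this only shows the JSON-to-bytes round trip the question asks about:

```python
import json

# Hypothetical helpers for a proto message like:
#   message DynamicPayload { bytes payload = 1; }
# Only the JSON <-> bytes conversion is shown here.

def encode_payload(obj: dict) -> bytes:
    """Serialize arbitrary JSON-compatible data into the bytes field."""
    return json.dumps(obj).encode("utf-8")

def decode_payload(raw: bytes) -> dict:
    """Recover the original structure on the receiving side."""
    return json.loads(raw.decode("utf-8"))

request_body = {"one": "parameter"}
wire = encode_payload(request_body)   # what goes into the bytes field
assert decode_payload(wire) == request_body
```

The cost of this design is that the payload is opaque to protobuf: you lose schema validation and field-level evolution for everything inside the bytes blob.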
Related
From my client application I make a POST call to a /validate endpoint. This call is made to execute the controller on the server side, not to create any resource. In response, the server can provide one of two totally unrelated JSON objects. From the client, how do I know which of the two types I should use during deserialization? What is the clean way to do this?
There are multiple ways to do it.
Add a header to the HTTP response to indicate the body type. The client should check the header and use the corresponding deserializer. This is a typical approach for webhook APIs, where you have a single endpoint that processes different event types. As an example, see the AWS SNS API, which uses the x-amz-sns-message-type header to define the response type.
As an alternative, you could use a special body format with a type field and a payload that depends on that type:
{
  "type": "Type",
  "payload": {
    ...
  }
}
But in my opinion this approach is much harder for the client to handle and requires a two-step deserialization process.
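To make the two-step process concrete, here is a small sketch of type-field dispatch, assuming the envelope format above; the event names and handler logic are made up for illustration:

```python
import json

# Step 1: parse the envelope generically to read the "type" field.
# Step 2: hand the "payload" to the deserializer registered for that type.

def parse_event(raw: str) -> dict:
    envelope = json.loads(raw)                  # step 1: generic parse
    handlers = {
        "order": lambda p: {"kind": "order", "id": p["order_id"]},
        "user":  lambda p: {"kind": "user", "name": p["name"]},
    }
    handler = handlers[envelope["type"]]        # step 2: dispatch on type
    return handler(envelope["payload"])

msg = json.dumps({"type": "user", "payload": {"name": "Ada"}})
assert parse_event(msg) == {"kind": "user", "name": "Ada"}
```

The header-based variant avoids step 1 entirely, since the client can pick the deserializer before touching the body.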
Currently I've been asking myself whether the HTTP method I'm using on my REST API is the right one for the occasion. My endpoint needs a lot of data to "match" a query, so I made a POST endpoint where the user can send a JSON body with the parameters, e.g.:
{
  "product_id": 1,
  "partner": 99,
  "category": [{
    "id": 8,
    "subcategories": [
      {"id": "x"}, {"id": "y"}
    ]
  }],
  "brands": ["apple", "samsung"]
}
Note that brands and category are lists.
Reading the Mozilla page about HTTP methods, I found:
The POST method is used to submit an entity to the specified resource, often causing a change in state or side effects on the server.
My POST endpoint doesn't have any effect on my server/database, so in theory I'm using it wrong(?). But if I use a GET request, how can I make it more "readable", and how do I handle lists with that method?
What HTTP method should I use when I need to pass a lot of data to a single endpoint?
From RFC 7230
Various ad hoc limitations on request-line length are found in practice. It is RECOMMENDED that all HTTP senders and recipients support, at a minimum, request-line lengths of 8000 octets.
That effectively limits the amount of information you can encode into the target-uri. Your limit in practice may be lower still if you have to support clients or intermediaries that don't follow this recommendation.
If the server needs the information, and you cannot encode it into the URI, then you are basically stuck with encoding it into the message-body; which in turn means that GET -- however otherwise suitable the semantics might be for your setting -- is out of the picture:
A payload within a GET request message has no defined semantics
So that's that - you are stuck with POST, and you lose safe semantics, idempotent semantics, and caching.
A possible alternative to consider is creating a resource which the client can later GET to retrieve the current representation of the matches. That doesn't make things any better for the first ad hoc query, but it does give you nicer semantics for repeat queries.
You might, for example, copy the message-body into an document store, and encode the key to the document store (for example, a hash of the document) into the URI to be used by the client in subsequent GETs.
For cases where the boilerplate of the JSON document is large, and the encoding of the variation small, you might consider a family of resources that encode the variations into the URI. In effect, the variation is extracted from the URI and applied to the server's copy of the template, and then the fully reconstituted JSON document is used to achieve... whatever.
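The store-then-GET idea can be sketched as follows; the in-memory store, the URI shape, and the hash-truncation choice are all illustrative assumptions, not part of any standard:

```python
import hashlib
import json

# The POST body is saved under a key derived from its content, and that
# key becomes part of the URI for subsequent safe, cacheable GETs.
document_store: dict = {}

def store_query(body: dict) -> str:
    # Canonicalize so the same query always yields the same key.
    canonical = json.dumps(body, sort_keys=True).encode("utf-8")
    key = hashlib.sha256(canonical).hexdigest()[:16]
    document_store[key] = body
    return f"/matches/{key}"          # URI the client can GET later

uri = store_query({"brands": ["apple", "samsung"], "partner": 99})
key = uri.rsplit("/", 1)[-1]
assert document_store[key]["partner"] == 99
```

Hashing the canonical document also deduplicates: the same query POSTed twice maps to the same resource, so caches and clients see one stable URI.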
You should be using POST anyway. With GET you can only "upload" data via URL parameters or HTTP headers, both of which are unsuitable for structured data like yours. Use POST even though no "change" happens on the server.
I have the following Backend API:
Endpoint
HTTP GET
https://localhost:8443/getSomeParameterInfo
Query Parameter
?inputAsJson
example
{
  "url": "http://semanticstuff.org/blah#Parameter",
  "parameters_1": "value1",
  "someArray": [
    "http://semanticstuff.org/blah#something1",
    "http://semanticstuff.org/blah#something2"
  ],
  "someOtherArray": [
    "http://...."
  ]
}
The final HTTP GET call is:
https://localhost:8443/getSomeParameterInfo?inputAsJson={aboveMentioned JSON}
Due to ever-changing backend requirements, the above JSON structure keeps growing through the addition of new key:value pairs. (This JSON structure is also a query for a database.)
Hindrances
Because web links are used as values, it becomes necessary to use the encodeURIComponent function for a successful REST call. This means the quotes, forward slashes, etc. need to be encoded to get a reply, which becomes quite tedious when testing on a standalone basis (using Postman or other REST clients).
I have not seen a JSON structure passed to an API like this before, so I wish to confirm the best practices and/or the proper way to pass a large number of parameters when making such a RESTful call.
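For reference, what the encodeURIComponent step amounts to on the wire can be reproduced in a few lines; the endpoint and key names below are taken from the question, and the round trip shows why every quote and slash ends up percent-encoded:

```python
import json
from urllib.parse import urlencode, parse_qs

# The whole JSON document becomes one opaque query parameter.
query = {
    "url": "http://semanticstuff.org/blah#Parameter",
    "parameters_1": "value1",
}
qs = urlencode({"inputAsJson": json.dumps(query)})
full_url = "https://localhost:8443/getSomeParameterInfo?" + qs

# Round trip on the server side: decode the parameter, then parse the JSON.
decoded = json.loads(parse_qs(qs)["inputAsJson"][0])
assert decoded == query
```

Scripting this encoding (rather than hand-building URLs in Postman) removes most of the tedium, but the underlying URI-length and readability problems remain.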
I usually tend to think that getting something via a POST is a "bad" practice.
However, it sounds like a body in GET is not forbidden, just not widely implemented in frameworks.
In your case, it will depend on how many attributes you have and the overall length of your JSON.
If you keep using the GET method, then an "exploded" key-value representation of your JSON should be the way to go.
Example:
{ "myKey": "myValue", "childObjKey": {"childObjProp": "childValue"}}
could become
?myKey=myValue&childObjKey.childObjProp=childValue
But there are limits on query parameters' length, which can be enforced by clients and/or servers.
If the number of parameters is huge and value lengths are unpredictable (text without a length limit, for instance), then POST should be the way to go.
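The "exploded" representation above can be produced mechanically; here is a small recursive flattener, where the dot-separated key convention and function name are assumptions of this sketch (arrays are not handled, since conventions for them vary):

```python
from urllib.parse import urlencode

def flatten(obj: dict, prefix: str = "") -> dict:
    """Flatten nested dicts into dot-separated keys, e.g. a.b.c."""
    out = {}
    for key, value in obj.items():
        dotted = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, dotted))
        else:
            out[dotted] = value
    return out

params = flatten({"myKey": "myValue", "childObjKey": {"childObjProp": "childValue"}})
assert urlencode(params) == "myKey=myValue&childObjKey.childObjProp=childValue"
```

The server side then splits each key on "." to rebuild the nested structure, which is exactly the 2-step cost that makes POST attractive once the JSON grows.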
So I finished a server in Node using Express (developed through testing), and while developing my frontend I realized that Java doesn't allow any body payload in GET requests. I've done some reading and understand that the HTTP specs do allow it, but the common convention is to not put any payload in a GET. If not this, then what's the best way to send this data in a GET request? The data I include is mostly simple fields, except that in my structure some of them are nested and some are arrays, which is why sending JSON seemed so easy. How else should I do this?
For example I have a request with this data
{
token,
filter: {
author_id,
id
},
limit,
offset
}
I've done some reading around and understand that the Http specs do allow this, but most often the standard is to not put any payload in GET.
Right - the problem is that there are no defined semantics, which means that even if you control both the client and the server, you can't expect intermediaries participating in the discussion to cooperate. See RFC-7231
The data I include is mostly simple fields except in my structure some of them are nested and some of them are arrays, that's why sending JSON seemed so easy. How else should I do this?
The HTTP POST method is the appropriate way to deliver a payload to a resource. POST is the most general method in the HTTP vocabulary; it covers all use cases, even those covered by the other methods.
What you lose in POST is the fact that the request is safe and idempotent, and you don't get any decent caching behavior.
On the other hand, if the JSON document is being used to constrain the representation returned by the resource, then it is correct to say that the JSON is part of the identifier for that representation, in which case you encode it into the query:
/some/hierarchical/part?{encoded json goes here}
This gives you back the safe semantic, supports caching, and so on.
Of course, if your json structures are complicated, then you may find yourself running into various implicit limits on URI length.
I found some interesting conventions for GET that allow more complex objects to be passed (such as arrays and objects with nested properties). Many frameworks that support GET queries seem to parse these.
For arrays, repeat the field. For example, for the array ids=[1,2,3]:
test.com?ids=1&ids=2&ids=3
For nested objects such as
{
  "filter": {
    "id": 5,
    "post": 2
  }
}
test.com?filter[id]=5&filter[post]=2
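Both conventions are easy to verify with a standard query-string parser; the reassembly step at the end is a simplified stand-in for what frameworks that support the bracket syntax do internally:

```python
from urllib.parse import parse_qs

# parse_qs handles repeated fields natively; bracketed keys like
# filter[id] arrive as literal key names and must be reassembled.
qs = "ids=1&ids=2&ids=3&filter[id]=5&filter[post]=2"
parsed = parse_qs(qs)

assert parsed["ids"] == ["1", "2", "3"]      # array via repetition
assert parsed["filter[id]"] == ["5"]         # brackets kept literally

# A tiny reassembly step for the bracket convention:
filters = {k[7:-1]: v[0] for k, v in parsed.items() if k.startswith("filter[")}
assert filters == {"id": "5", "post": "2"}
```

Note that everything comes back as strings, so numeric types have to be restored by the application, unlike with a JSON body.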
I'm writing a REST service which is dealing with SomeKindOfResource stored in a database.
Don't ask me why (don't!) but for some reasons, the corresponding underlying table has a variable number of columns. That's the way it is and I can't change it.
Therefore, when I issue a GET /SomeKindOfResources/{id}, my DTO (later on serialized to JSON) might then contain a variable number of "fields". I know how to handle the dynamic object / serialization parts. My question is more on the philosophy side of things.
Let's say my client wants to know the list of fields that will be returned by a call to GET /SomeKindOfResources/{id} because, for example, that list determines what can be used later on to filter a list of SomeKindOfResources. Basically, I need something resembling a "GetCapability".
How would you deal with such a scenario in a RESTful way?
If I understand your requirement correctly, you want to return a metadata like response for a specific object (i.e. by Id) that has dynamic fields, so your client knows the field types it will receive when requesting that object.
Just a word of caution: A dynamic DTO isn't RESTful. The point of a DTO is that it is an agreed contract. It shouldn't change and as such there isn't a RESTful rule to handle your use case.
If you were to implement this, these are three possible approaches.
Metadata Route:
Create a new route in your service, as this scenario isn't covered by the standard ServiceStack MetadataFeature, which only works with static DTOs. So create something like this:
[Route("/SomeKindOfResources/{Id}/metadata", "GET")]
Then you would want the response to that route to describe the fields to your client. This is where it gets cloudy. The MetadataFeature uses XSD to describe your standard DTOs; you could write your action to produce an XSD response describing your fields, based on your database lookup of the available columns. But then, will your client know how to parse XSD? Since your use case isn't standard and the client can't be expected to handle it in a RESTful way, you may just want to use a simple response type, like a Dictionary<string,Type>(), essentially returning each field name and its underlying type. This works fine for simple built-in types like string, int, bool etc., but custom class scenarios, such as List<MySpecialType>, will be harder to handle.
Pseudo client code:
var fieldMetaData = client.get("/SomeKindOfResources/123/metadata");
var result = client.get("/SomeKindOfResources/123");
Example XSD Metadata Response.
OPTIONS header:
However, you may wish to consider using the OPTIONS verb instead of a GET request to a route with /metadata appended, as recommended by RFC 2616 §9:
This method allows the client to determine the options and/or requirements associated with a resource ... without implying a resource action or initiating a resource retrieval.
Pseudo client code:
var fieldMetaData = client.options("/SomeKindOfResources/123");
var result = client.get("/SomeKindOfResources/123");
But remember OPTIONS in REST is typically used for setting up CORS.
ServiceStack JSON __type response:
When ServiceStack returns JSON, you can tell the ServiceStack.Text serializer to include return type information in a property called __type. This may not be easy for your clients to interpret, and it also applies globally to all JSON responses, so it wouldn't be limited to just that action. In your Configure method, add:
JsConfig.IncludeTypeInfo = true;
Pseudo client code:
var result = client.get("/SomeKindOfResources/123");
var typeInformation = result.__type;