Mapping models between client and server in a REST API

I am working on a project that uses client-server communication through a REST API; specifically, Angular 2 calling a RESTful web service.
The client side is written in TypeScript, a superset of JavaScript.
My problem is that our server-side objects are nested and complex, and therefore difficult to model/deserialize exactly on the client side when it receives a JSON response.
My question is:
Do we need to create a class on the client side for every JSON response object, and deserialize the responses before binding them to the HTML view or processing the data?
And at what size/complexity does the response JSON object become a problem (performance, best practices...)?
I am still confused about how to share the workload between client and server. What decides "we should handle it on the server side" versus "we should let the client do this"? In many cases I could have the server fetch a lot of data before returning to clients; or should I let the client make multiple requests and assemble the data on its side?
My application will probably be an intranet application for ~1000 users (about 5-10 concurrent).
I am new to web applications using REST, so I would be grateful for any guidance.

Let me answer subquestion #3. I am developing a microservice architecture for a small project and have faced the same problem. Logic can be implemented either on the server side or on the client side.
On the one hand, the server could return simple plain objects; on the other, it could return objects containing all the necessary nested and processed data.
In the first case, the client has to make more queries to the server in order to collect all the nested data. The client becomes more complex and must anticipate all the situations where server data could change during a series of requests (to keep the data consistent). But it really simplifies the server implementation: it can be just a series of CRUD repositories, or even an auto-generated server over the underlying DB.
In the second case, the server returns a complex nested object. This simplifies the client, since JSON deserialization is straightforward with modern frameworks, and it reduces the number of queries to the server. If the client can get all the necessary data in one query, that's great.
In both cases the business logic has to live somewhere. I vote for a simple client and hiding the complexity on the server side. After all, one server may have several clients, which is why a complex server is more beneficial than a complex client.
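Regarding subquestion #1, here is a minimal sketch of what client-side modeling can look like in TypeScript, assuming a hypothetical nested order/customer payload: plain interfaces are usually enough, and classes are only needed when the model carries behavior.

```typescript
// Hypothetical shape of a nested JSON response; adjust to your actual API.
interface Address {
  street: string;
  city: string;
}

interface Customer {
  id: number;
  name: string;
  address: Address;
}

interface Order {
  id: number;
  placedAt: string; // ISO date string, exactly as delivered in the JSON
  customer: Customer;
  lines: { sku: string; quantity: number }[];
}

// With interfaces, "deserialization" is just a typed view over the parsed JSON;
// no mapping code is needed unless you want real Date objects or methods.
async function fetchOrder(id: number): Promise<Order> {
  const response = await fetch(`/api/orders/${id}`); // hypothetical endpoint
  return (await response.json()) as Order;
}
```

Interfaces are erased at compile time, so they add no runtime cost no matter how deep the nesting gets; explicit mapping classes only start to pay off when you need validation or computed properties.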

Related

How Is Caching Done Under the Hood in REST APIs?

One of the properties of REST APIs is cacheability. I want to understand how caching is done. Is it on the client side (say, in an API client like Postman or Insomnia), on the server side, or both?
Suppose a resource is accessed as GET /services/data/{api_version}/{product_tag}/{resource}/{id} and we get a response. If we trigger the same endpoint again almost instantly, we get another response.
Assuming the API cached the response on the first call, there are two scenarios:
The data did not change between the two calls. In that case, caching gives the correct result.
The data did change between the calls. If the client relies on the cache, stale data is served to the user.
How does the client determine that the data changed, so it can serve the latest result? Is it something like setting a dirty bit, as we do in operating systems?
I know cache invalidation is one of the toughest problems in computer science and depends on the scenario, but in general:
What should be cached on the client side and what on the server side? (A cache kept by Postman cannot be used by Insomnia.)
How can we always serve the latest data while using the cache to its fullest?
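For what it's worth, HTTP itself answers the staleness question with validators: the server labels each response with an ETag (or Last-Modified), and the client revalidates with If-None-Match; a 304 reply means the cached copy is still current. A minimal sketch in TypeScript, assuming the server emits ETags (the cache here is hand-rolled for illustration; browsers and tools like Postman keep one internally):

```typescript
// Minimal conditional-GET client; the endpoint's ETag support is an assumption.
const cache = new Map<string, { etag: string; body: unknown }>();

async function getWithRevalidation(url: string): Promise<unknown> {
  const cached = cache.get(url);
  const headers: Record<string, string> = {};
  if (cached) {
    headers["If-None-Match"] = cached.etag; // ask the server "is my copy stale?"
  }

  const response = await fetch(url, { headers });

  if (response.status === 304 && cached) {
    return cached.body; // server confirmed the cached copy is still current
  }

  const body = await response.json();
  const etag = response.headers.get("ETag");
  if (etag) {
    cache.set(url, { etag, body }); // remember the validator for next time
  }
  return body;
}
```

This is why the client never needs a "dirty bit" of its own: it always asks, and the server, which owns the data, decides whether the cached representation is still valid.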

Implementing REST using JDBC Tables

Currently we are implementing REST APIs using Spring Boot. Since our APIs are growing in number, we are considering a different approach to implementing them.
The approach is as follows:
Expose a single service to receive all HTTP requests.
Configure the URIs in a database table to determine which services to call next. These services are set up to listen for particular JMS messages.
The next set of services receives the JMS messages and processes the data.
Below are my questions:
Will the above approach still represent the REST architecture?
What are the downsides of the above approach, other than the network latency we are already aware of?
Which benefits of the REST architecture would we be missing?
Or can we just say that our approach is the REST architecture done differently?
You're making two major choices, each of which can be decided separately:
1) Having a single HTTP service
2) Using JMS as the communication between this service and the underlying microservices
Regarding #1: if you do this, you can no longer call your services REST, since the whole point of REST is to use HTTP verbs together with your domain objects for a predictable set of endpoints. GET on /objects/ means an object is being fetched, POST on /objects means a new object is being created, etc... Now, this is OK; you can do it this way and it can work, though it will be "non-standard".
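To make the contrast concrete, here is a minimal sketch of the verb-to-endpoint mapping REST expects, written in TypeScript/Express purely for illustration (the /objects resource is hypothetical):

```typescript
import express from "express";

const app = express();
app.use(express.json());

// In REST, the HTTP verb carries the intent; the URI only names the resource.
app.get("/objects", (req, res) => { res.json([]); });                       // list
app.get("/objects/:id", (req, res) => { res.json({ id: req.params.id }); }); // fetch one
app.post("/objects", (req, res) => { res.status(201).json(req.body); });     // create
app.put("/objects/:id", (req, res) => { res.json(req.body); });              // replace
app.delete("/objects/:id", (req, res) => { res.status(204).end(); });        // remove

// A single catch-all gateway collapses all of this into one opaque endpoint,
// which is why the result can no longer be described as REST.
app.listen(3000);
```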
In fact, you might want to check out GraphQL (https://www.howtographql.com/basics/1-graphql-is-the-better-rest/), as it's pretty close to what you're trying to do.
These days, REST and GraphQL seem to be the two popular approaches.
Another way to do REST, if you're looking to expose REST services over your domain objects without writing a lot of code, is Spring Data REST (https://spring.io/projects/spring-data-rest); if you're already comfortable with Spring, it should be easy to pick up.
For #2, the choice of communication between your single gateway service and the underlying services: do most of your calls require synchronous answers, such as a UI asking for data to display in a browser or on a phone? If so, JMS is not a good approach. JMS would be an OK approach if the majority of your services were asynchronous - for example, someone submitting a stock trade request. The UI would just need to know the request was submitted; it will actually be processed some time later, and the result will be fetched asynchronously.
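As a sketch of that asynchronous shape at the HTTP edge (TypeScript/Express for illustration; the /trades resource is made up, and in the stack described above the hand-off would go to JMS rather than an in-memory map):

```typescript
import express from "express";
import { randomUUID } from "crypto";

const app = express();
app.use(express.json());

const pending = new Map<string, { status: string; result?: unknown }>();

// Accept the trade request, hand it off, and answer immediately with 202.
app.post("/trades", (req, res) => {
  const id = randomUUID();
  pending.set(id, { status: "SUBMITTED" });
  // In the real system, req.body would be enqueued on the message broker here.
  res.status(202).location(`/trades/${id}`).json({ id, status: "SUBMITTED" });
});

// The client polls (or is notified) later to fetch the eventual result.
app.get("/trades/:id", (req, res) => {
  const trade = pending.get(req.params.id);
  trade ? res.json(trade) : res.status(404).end();
});

app.listen(3000);
```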
Without knowing much about your application, I would recommend sticking with HTTP between your services for simplicity's sake, unless there is a good reason to switch to JMS.

Websocket vs REST when sending data to server

Background
We are writing a Messenger-like app. We have set up WebSockets for Inbox and Chat.
Question
My question is simple: what are the advantages and disadvantages of sending data from the client to the server using REST instead of WebSockets? (I am not interested in updates for now.)
We know that REST has higher overhead in terms of message sizes, and that WS is duplex (thus open all the time). What else have we not kept in mind?
Here's a summary of the tradeoffs I'm aware of.
Reasons to use webSocket:
You need/want server-push of data.
You are sending lots of small pieces of data from client to server and doing it very regularly. Using webSocket has significantly less overhead per transmission.
Reasons to use REST:
You want to use server-side frameworks or modules that are built for REST, not for webSocket (such as auth, rate limiting, security, streaming, etc...).
You aren't sending data very often from client to server, so the server-side burden of keeping a webSocket connection open all the time may reduce your server's scalability.
You want your client to run in places where a long-connected webSocket during inactive periods of time may not be practical (perhaps mobile).
You want your client to run in old browsers that don't support webSocket.
You want the browser to enforce same-origin restrictions (those are enforced for REST Ajax calls, but not for webSocket connections).
You don't want to have to write code that detects when the webSocket connection has died and then auto-reconnects and handles back-offs and handles mobile issues with battery usage issues, etc...
You need to run in situations where there are proxies or other network infrastructure that may not support long running webSocket connections.
You want request/response built in. REST is request/response; webSocket is not - it's message based. Responses from a webSocket are done by sending a message back, and that message is not, by itself, a response to any specific request; it's just data being sent back. If you want request/response with webSocket, then you have to build some infrastructure yourself where you tag an id onto a request and the response for that particular request carries that specific id (see the sketch after this list). Otherwise, if there are ever multiple requests in flight at the same time, you don't know which response belongs to which request, because all the data is being sent over the same connection and you would have no way of matching a response with its request.
You want other clients to be able to carry out this operation via an Ajax call.
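Here is the sketch referenced above: a minimal id-tagging scheme in TypeScript, with a made-up message format and endpoint:

```typescript
// Correlate webSocket responses with requests by tagging each request with an id.
let nextId = 0;
const inFlight = new Map<number, (reply: unknown) => void>();

const socket = new WebSocket("wss://example.com/chat"); // hypothetical endpoint

socket.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  const resolve = inFlight.get(msg.id); // match the reply to its request
  if (resolve) {
    inFlight.delete(msg.id);
    resolve(msg.payload);
  }
};

// Send a message and get a Promise that resolves when the tagged reply arrives.
function request(type: string, payload: unknown): Promise<unknown> {
  const id = nextId++;
  socket.send(JSON.stringify({ id, type, payload }));
  return new Promise((resolve) => inFlight.set(id, resolve));
}

// Usage: await request("sendChatMessage", { to: "bob", text: "hi" });
```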
So, if you already have a webSocket implementation, don't have any problems with it that would be lessened with REST, and aren't interested in any of the reasons REST might be better, then stick with your webSocket implementation.
Related references:
websocket vs rest API for real time data?
Ajax vs Socket.io
Adding comments per your request:
It sounds like you're expecting someone to tell you the "right" way to do it. There are reasons to pick one way over the other; if none of those reasons compels you one way or the other, then it's just an architectural choice, and you must take in the whole context of what you are doing and decide which architectural choice makes more sense to you. If you already have a reliably established webSocket connection and none of the advantages of REST apply to your situation, then you can optimize for "efficiency" and send your data to the server over the webSocket connection.
On the other hand, if you wanted a simple API on your server that could be reached with an Ajax call from other clients, then you'd want your server to support this operation via REST so it would be simplest for those other clients to carry out this one operation. So, it all depends on which direction your requirements drive you, and if there is no particular driving reason to go one way or the other, you just make an architectural choice yourself.

What's the best practice to collect data from different clients?

Here are the details of my use case:
What's my data..
There will be user experience data, error reports, state info and so on. The data is fragmented and may change in the future, so I plan to use NoSQL, maybe MongoDB, to store the data on the server.
What are the clients..
The clients are written in different languages, like C#, C++, LabVIEW and so on. Some don't even have access to a MongoDB driver, so communicating with the database directly is of course not an option. A framework like the one below is needed:
Clients -> (Some protocol) -> Broker -> Database.
As those clients are not web clients, a common web server using HTTP may not suit my case, right? Is there any suggestion for the protocol, broker and database, or even a whole framework?
My goal is to make it as convenient as possible for the clients to send data.
Thank you!
This is not really new; it is a message-driven application, which is a well-understood pattern.
I did this mostly in Java, so I will stick to this language here.
A broker alone would not be enough here. Say you use Apache ActiveMQ as your message broker: you would still need to get your data into the database, since MQ is... ...a message queue. So you need a part which gets the messages out of MQ, processes them according to your business rules and stores them in the (correct) database instance and the correct collection/bucket/table. Of course you could write this part by hand, but that would pretty much be reinventing the wheel. There is a notion of a "message routing and mediation engine", and the one most commonly suggested here is Apache Camel, which has quite a few components for communicating with databases and other so-called consumers and producers. And that is the key point.
In general, if possible, your clients should send their data to the message broker directly. But if they can't, they can simply send text files or make REST calls - there are actually too many options to list here. This incoming data can be preprocessed and normalized to your standard format by a "route" in Apache Camel (a set of a consumer, conversion rules and a producer, in its simplest form) and sent as an AMQP message to MQ. From there, another Camel route can process the AMQP messages, apply your business rules and store the data in the database... ...or do whatever else may come to your mind (for example, sending an email).
So this solution supports a multitude of protocols for incoming and outgoing messages (as long as they are supported by Camel), and you have your business rules in a centralized and well-defined location.
To implement this, I'd strongly suggest using Apache ServiceMix, which is a distribution of ActiveMQ, Camel and a system to manage the components and business rules.
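For illustration, a minimal sketch of the consume-process-store step, written in TypeScript against an AMQP broker and MongoDB (queue, database and collection names are made up; in the Java stack above, a Camel route would play this role):

```typescript
import amqp from "amqplib";
import { MongoClient } from "mongodb";

async function run() {
  const mongo = await MongoClient.connect("mongodb://localhost:27017");
  const reports = mongo.db("telemetry").collection("reports");

  const connection = await amqp.connect("amqp://localhost");
  const channel = await connection.createChannel();
  await channel.assertQueue("client-reports");

  // Pull each message off the queue, apply business rules, store the result.
  await channel.consume("client-reports", async (msg) => {
    if (!msg) return;
    const data = JSON.parse(msg.content.toString());
    data.receivedAt = new Date(); // example of a normalization/business rule
    await reports.insertOne(data);
    channel.ack(msg); // only acknowledge after a successful store
  });
}

run().catch(console.error);
```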
In the end, I think a web server with the HTTP protocol can suit the use case after all.
What I mostly want is a universal API that lets different kinds of clients save data to the cloud. HTTP has the methods GET, POST, PUT and DELETE, so a RESTful API is naturally suitable for operating on data.
My final solution is Node.js (Express) + MongoDB (a quite common pairing): a RESTful API is provided via the Express web server, and clients can use HTTP to operate on data conveniently. It is also quite lightweight and easy to get started with.
Here is a tutorial: http://cwbuecheler.com/web/tutorials/2013/node-express-mongo/
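A minimal sketch of that final setup, with made-up endpoint and collection names and error handling omitted:

```typescript
import express from "express";
import { MongoClient, ObjectId } from "mongodb";

async function main() {
  const mongo = await MongoClient.connect("mongodb://localhost:27017");
  const reports = mongo.db("collector").collection("reports");

  const app = express();
  app.use(express.json());

  // Any client that can speak HTTP can store a document this way.
  app.post("/reports", async (req, res) => {
    const result = await reports.insertOne(req.body);
    res.status(201).json({ id: result.insertedId });
  });

  app.get("/reports/:id", async (req, res) => {
    const doc = await reports.findOne({ _id: new ObjectId(req.params.id) });
    doc ? res.json(doc) : res.status(404).end();
  });

  app.listen(3000);
}

main().catch(console.error);
```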

GWT RequestFactory and propagating server-side changes to the client

I need some advice on how best to propagate server-side changes of entities to the client with GWT's RequestFactory.
Let us assume we have two EntityProxies, a PersonProxy and a PersonListProxy (which has a getter for a List). Assume that the client has fetched a PersonList and a Person from the server.
If the client edits one of these proxies and fires a request, the RequestFactory machinery (if I have understood the principles correctly) will fire an EntityProxyChange event if it detects changes made by server code (so that the client can update its display of the entities, for example).
Now assume that the server is changing its entities outside of a request by this client (e.g. due to another client calling the server) so that this client would see another version if it fetched the Person or the PersonList again.
My question is what is the best way inside the RequestFactory framework to tell the client of the changes (and to reuse as much of the machinery as possible)? We can assume that I have a way to send simple messages from the server to the client (e.g. Google App Engine's channel API or server-sent events).
One idea could be that the server sends over this channel a message telling that a Person or a PersonList with a specific id has changed. The client code handling the receipt of these messages could then use RequestFactory to re-fetch (e.g. find) the entity. This change should then be propagated to other parts of the client by an EntityProxyChange event.
Is this the way to go? (And in case that the client already has the current version of the entity, e.g. because the server was dumb and notified the client of changes the client itself made, would the triggered re-fetch just transport a few bits of metadata and not the whole entity again?)
ADDED:
Thinking a bit more about it, I wonder how EntityProxyIds can be generated for the server-sent event channel. When an entity on the server changes, the server only has the server id. It can send that to the client, of course, but the client only knows about EntityProxyIds. Of course, I could add a getId() (in addition to getStableId()) to each EntityProxy, but it looks as if this would add redundant data to every server response.
Well, I realize that my post isn't a precise answer to your question; it's just my experience.
In essence, the question is simply how to deliver data from the server to the client.
I faced a similar task a couple of years ago and found an approach that made my life easier. To explain it, let me spell out my reasoning:
You want full data delivery to happen by the client requesting it - that's the straightforward, natural way of requesting data;
You don't want to create and support two different models of full data delivery: one by request from the client and a second by push from the server;
But you do need to inform the client about certain changes on the server side.
So now I build my architecture using the following approach:
Build a full, classical client-server API for data delivery, so you can load and refresh your application in the natural way even if your push functionality is blocked or broken.
Define the key information that may change on the server side and should be delivered to the client via the push mechanism.
Create small push message constructs that deliver only a notification about changes to the client - no valuable data should be delivered this way, just the keys of the data that changed.
All the client needs to do when it receives such a notification is get/refresh the data from the server in the natural client-server way that is already supported (see the sketch after this list).
Server logic shouldn't bombard the client with a huge number of notifications - sometimes it is more effective not to deliver individual changes but simply to refresh everything.
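Here is the sketch referenced above: a rough notify-then-refetch loop, written in TypeScript for brevity (a GWT client would do the same in Java; the message shape and endpoints are made up):

```typescript
// Push messages carry only the type and id of what changed, never the data itself.
interface ChangeNotification {
  entityType: "Person" | "PersonList";
  id: string;
}

const channel = new EventSource("/changes"); // e.g. server-sent events

channel.onmessage = async (event) => {
  const note: ChangeNotification = JSON.parse(event.data);

  // Re-fetch through the normal request path, reusing the existing API.
  const fresh = await fetch(`/api/${note.entityType.toLowerCase()}/${note.id}`)
    .then((r) => r.json());

  // Hand the fresh entity to whatever plays the EntityProxyChange role here.
  notifyListeners(note.entityType, fresh);
};

declare function notifyListeners(type: string, entity: unknown): void; // app-specific
```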
Hope this helps.