I have a RESTful web service application that I developed using the NetBeans IDE. The application uses a MySQL server as its back end. What I am wondering now is how often a client application that consumes my RESTful service would refresh to reflect data changes on the server.
Are there any default pull intervals that clients get from the RESTful application? Does the framework (JAX-RS) do something about it, or is that my responsibility to take care of?
Thanks in advance
#Abraham
There are no such rules. The only thing you can use to implement this properly is HTTP's caching capabilities. The service must include control information describing how long a representation of a particular resource can be cached, whether it must be revalidated, whether it should never be cached, and so on.
On the client side, each client decides its own way of keeping itself in sync with the service. It can do so by storing data locally and serving the end user from that local cache, for example. The service cannot (and should not) know how clients are implemented; the only thing the service can do is include caching information in its response messages, as I already mentioned above.
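For illustration, here is a minimal sketch of attaching such caching information in a JAX-RS service (JAX-RS is mentioned in the question); the resource path, the 60-second max-age and the placeholder entity are example values only:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.CacheControl;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    @Path("/items")
    public class ItemResource {

        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public Response getItems() {
            CacheControl cc = new CacheControl();
            cc.setMaxAge(60);           // clients/proxies may cache this representation for 60 seconds
            cc.setMustRevalidate(true); // after that they must revalidate with the server

            String items = "[]";        // placeholder for the real representation
            return Response.ok(items).cacheControl(cc).build();
        }
    }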
It is your responsibility to schedule the client to call the service again and again. You can set a timeout interval, but there is no built-in pull interval.
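As a rough sketch of what such client-side scheduling might look like in Java, assuming the java.net.http client (Java 11+); the URL and the 30-second interval are arbitrary examples, not anything the service dictates:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class PollingClient {
        public static void main(String[] args) {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/api/items"))
                    .GET()
                    .build();

            // Re-fetch the resource every 30 seconds; the interval is entirely the client's choice.
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                try {
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println("Refreshed: " + response.body());
                } catch (Exception e) {
                    e.printStackTrace(); // keep polling even if a single request fails
                }
            }, 0, 30, TimeUnit.SECONDS);
        }
    }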
Related
I am building a JavaFX client application communicating with a Spring MVC RESTful server (Spring Boot 1.4.1), and it works as expected.
Some features require fast interaction with the server to validate limits and availability before proceeding to the next input. For example, while accumulating records (each confirmed record is temporarily stored in a TableView before being sent to the server for storage), I need to check whether an inserted member number is valid and whether it has exceeded its limit, before the records are actually saved.
Within the scope of JavaFX and the Spring Framework (on both the front end and the back end), how can such features be made to feel more interactive (or live) than the usual "let me wait for the response" approach?
If the question is not clear, just ask; otherwise I think it is.
It appears that the only interaction you have between the client (JavaFX) and the server (Spring Boot) is through a REST API. This will make short bursts of data (such as validation) take longer.
Switching to another communication mechanism (for example gRPC, or Netty with MessagePack) could help. Note that once you open the door to non-REST calls, it will make you rethink the use of REST in the first place.
Non-REST communication may not be an option depending on your requirements (firewalls, etc.), or it may need additional setup to surmount other obstacles; in other words, there's no free lunch.
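One way to make this feel more "live" on the JavaFX side is to run each validation call on a background thread with javafx.concurrent.Task, so the form never blocks while waiting for the server. A minimal sketch, in which validateMember() stands in for the actual REST call (e.g. via Spring's RestTemplate or any HTTP client) and the handler bodies are placeholders:

    import javafx.concurrent.Task;

    public class MemberValidation {

        public void validateAsync(String memberNumber) {
            Task<Boolean> task = new Task<Boolean>() {
                @Override
                protected Boolean call() {
                    // Blocking REST call, executed off the JavaFX Application Thread.
                    return validateMember(memberNumber);
                }
            };
            // These handlers run back on the JavaFX Application Thread.
            task.setOnSucceeded(e -> showResult(task.getValue()));
            task.setOnFailed(e -> showError(task.getException()));
            new Thread(task, "member-validation").start();
        }

        private boolean validateMember(String memberNumber) { return true; } // placeholder for the HTTP call
        private void showResult(boolean valid) { /* update controls, enable the next input */ }
        private void showError(Throwable t)    { /* show a non-blocking warning */ }
    }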
In a distributed systems environment, we have a RESTful service that needs to provide high read throughput at low latency. Due to limitations in the database technology, and given that it's a read-heavy system, we decided to use MemCached. Now, in an SOA there are at least two choices for the location of the cache: either the client looks in the cache before calling the server, or the client always calls the server, which looks in the cache. In both cases, the caching itself is done in a distributed MemCached server.
Option 1: Client -> RESTful Service -> MemCached -> Database
OR
Option 2: Client -> MemCached -> RESTful Service -> Database
I have an opinion, but I'd love to hear arguments for and against either option from SOA experts in the community. Please assume either option is feasible; it's an architecture question. I'd appreciate you sharing your experience.
I have seen
Option 1: Client -> RESTful Service -> Cache Server -> Database
working very well. The pros, IMHO, are that you are able to operate with and use this layer in a way that lets you "free" part of the load on the DB, assuming that your end users make a lot of similar requests. And after all, the client can still decide what storage to spare for caching of its own, and how often to clear it.
I prefer Option 1 and I am currently using it. This way it is easier to control the load on the DB (just as @ekostatinov mentioned). I have lots of data that is required for every user in the system, but the data never changes (such as some system rules, types of items, etc.). It really reduces the DB load. This way you can also control the behavior of the cache (such as when to clear the items).
Option 1 is the preferred option, as it makes memcached an implementation detail of the service. The other option means that if the business changes and things can no longer be kept in the cache (or other things can, etc.), the clients would have to change. Option 1 hides all that behind the service interface.
Additionally, Option 1 lets you evolve the service as you wish. For example, maybe later you think you need a new technology, or maybe you solve the performance problem in the DB itself. Again, Option 1 lets you make all these changes without dragging the clients into the mess.
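To illustrate what Option 1 looks like from inside the service, here is a sketch of the cache-aside pattern, assuming the spymemcached client; the key format, the five-minute TTL, the User type and loadUserFromDb() are hypothetical placeholders. None of this is visible to the clients:

    import java.io.IOException;
    import java.io.Serializable;
    import java.net.InetSocketAddress;
    import net.spy.memcached.MemcachedClient;

    public class UserService {

        private final MemcachedClient cache;

        public UserService() throws IOException {
            this.cache = new MemcachedClient(new InetSocketAddress("memcached-host", 11211));
        }

        public User getUser(long id) {
            String key = "user:" + id;
            User cached = (User) cache.get(key);  // 1. look in the cache first
            if (cached != null) {
                return cached;
            }
            User fromDb = loadUserFromDb(id);     // 2. fall back to the database
            cache.set(key, 300, fromDb);          // 3. populate the cache (5-minute TTL)
            return fromDb;
        }

        private User loadUserFromDb(long id) { return new User(id); } // placeholder DAO call

        public static class User implements Serializable {
            final long id;
            User(long id) { this.id = id; }
        }
    }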
Is the RESTful API exposed to external consumers? In that case it is up to the consumer to decide whether they want to use a cache and how much stale data they can tolerate.
As far as the RESTful service goes, the service is the container of the business logic and the authority over the data, so it decides how much to cache, the cache expiry, when to flush, etc. A client consuming the REST service always assumes that the service is providing it with the latest data. Hence Option 1 is preferred.
Who is the client in this case?
Is it a wrapper for your REST API? Are you providing both the client and the service?
I can share my experience with Enduro/X middleware implementation. For local XATMI service calls any client process connects to shared memory (LMDB) and checks the result there. If there is response saved it returns data directly from shm. If data is not there, client process goes the longest path and performs the IPC. In case of REST access, network clients still performs the HTTP invocation, but HTTP server as XATMI client returns the data from shared mem. From real life, this technique was greatly boosting the web frontend web application which used middleware via REST calls.
Trying to understand REST HATEOAS:
Suppose I have a service that has state; its states are: initial, ready, running. I have a client that connects to the service and obtains a page with links that allow it to mutate the service's state.
It uses one of the links to change the service's state and obtains another page with new links.
As long as there is one client, the state the client holds is identical to the service's. But if there is a second client and it changes the service's state, the first client's representation is stale.
How is this resolved in HATEOAS? From what I've read it seems that REST is not applicable and I should maybe look at something else. If so, what?
Thanks!
This is not (entirely) resolved by HATEOAS. As REST is stateless, keeping the state in the client and the server aligned is something of a paradoxical use case.
Assuming I understand your requirements: yes, your state in client 1 is stale and not the same as the one on the server. But what if the client made a periodic call to the server to see whether some other client had changed the state? With HATEOAS you could then provide a link that serves the current state and omit the link that changes the state.
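A sketch of that idea in JAX-RS terms (the URIs, relation names and the "ready" state value are made up for illustration): the response always carries a link for re-fetching the current state, while the state-changing link is only advertised when the transition is actually allowed:

    import java.util.ArrayList;
    import java.util.List;
    import javax.ws.rs.core.Link;
    import javax.ws.rs.core.Response;

    public class ServiceStateResource {

        public Response represent(String currentState) {
            List<Link> links = new ArrayList<>();
            // Always offer a link for re-fetching the current state.
            links.add(Link.fromUri("/service/state").rel("self").build());
            if ("ready".equals(currentState)) {
                // Only advertise the "run" transition while it is actually applicable.
                links.add(Link.fromUri("/service/state/run").rel("run").build());
            }
            return Response.ok(currentState)
                    .links(links.toArray(new Link[0]))
                    .build();
        }
    }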
@Kay - Thanks for answering.
I'm going to try to answer my own question. I realized after reading your answer that the "application" in HATEOAS is really the virtual application the client experiences when it retrieves and processes the resources it gets from the server. Its states are the pages (resources) it transitions between. The server (service?) may have its own state but it's not the same as the client's.
As long as this distinction is kept in mind, it is not unRESTful to have stale links in client 1. The server simply responds with new links reflecting its own state. And the client makes new transitions based on the updated links.
Still trying to understand. If I have it wrong, I'd appreciate some help.
Thanks!
The stateless requirement of REST refers to the ability of the server to understand and process a client request independently of any previous interactions it has had with said client. In other words, the client should be able to send a request "out of the blue" to the server (i.e., without a session saved on said server) and have the request processed. Hence there isn't a concept of login and logout in a purely RESTful architecture.
That's a different constraint from HATEOAS. Basically, "hypermedia as the engine of application state" means that all state is conveyed through the media type being used and not the connection itself. The client can (and often does) keep its own state, and can request snapshots of the state of resources from the server through resource representations (a.k.a. media types).
If you want to be notified when a resource changes state, REST is (probably) the wrong choice. You'd likely want to use a different application protocol than, say, HTTP.
As Fielding says: it's not REST without HATEOAS. Don't call your service REST if it ignores HATEOAS, and a stateful service cannot be REST. You have understood HATEOAS: the server provides hyperlinks for the client, which should be used to change the state located on the CLIENT side.
To solve your problem: keep the server free of any state information. It will make your life easier. Then implement REST using the Richardson Maturity Model, while considering the information from here.
I am trying to make a data sync application in which the user has some values in the database and has to send this data to the server. How can I do that with an HTTP request?
I need the following:
I am not using a PHP web server; I am using a normal HTTP web page.
I have some data in my iPhone application and I want to synchronize that data to the server.
It must first check whether the Internet is available; only if the Internet is on should it synchronize the data. That's my question, nothing else. Do you get my point?
I hope people will reply soon, please.
Thanks
You have to have some kind of server backend for synchronization of the local database. You can't do that using just static HTML pages.
Your application and server have to have a way to talk to each other using a web service protocol, like SOAP or JSON/REST. Then your application has to translate the data from the database into such web service data objects.
Both your local database and the server (in the case of more than one client) will have to keep records of at least the times of the last synchronizations, so both know what should be sent over the air in order to get in sync.
Also, in the usual case of more than one client, you have to solve the problem of conflict resolution.
Web service versioning is important as well, as there will very likely be a need to improve the communication channel, and maybe there will be changes in the database model being synchronized.
As you can see, the idea of synchronizing local database to a server is not that simple, and if you think you can do it in a simple way, in time you'll realise that you're gradually reimplementing the aforementioned ideas.
Do a research on web service technologies, writing web services-aware apps, on synchronization with web services and on Reachability, for starters.
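To make the "times of last synchronization" idea concrete, here is a rough server-side sketch in Java/JDBC (the question does not say what the server runs on, so this is purely illustrative; the notes table and its columns are made up). The client sends the timestamp of its last successful sync, and the server returns only the rows changed since then:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Timestamp;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class SyncQuery {

        // Returns the rows modified after the client's last successful sync.
        public static List<Map<String, Object>> changedSince(Connection conn, Timestamp lastSync)
                throws SQLException {
            String sql = "SELECT id, body, updated_at FROM notes WHERE updated_at > ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setTimestamp(1, lastSync);
                try (ResultSet rs = ps.executeQuery()) {
                    List<Map<String, Object>> rows = new ArrayList<>();
                    while (rs.next()) {
                        Map<String, Object> row = new HashMap<>();
                        row.put("id", rs.getLong("id"));
                        row.put("body", rs.getString("body"));
                        row.put("updatedAt", rs.getTimestamp("updated_at"));
                        rows.add(row);
                    }
                    return rows; // serialize these as JSON (or SOAP) in the web service response
                }
            }
        }
    }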
To check internet availability, check out the Reachability class from Apple. See this article.
To send data to a simple HTTP form via POST, use NSURLConnection as in this article.
Cheers,
S
I have a question about the design of an application I'm working on.
I made a monolithic Java application with sockets open 24/7, something like a game server. What I'm trying to say is that it's a single-JAR application rather than a modular servlet/page-based web application.
I would now like to add a RESTful API to this application, so people/clients can make HTTP requests to my application to obtain certain info. Because of the monolithic nature of my Java application, I'm unsure how to implement this. One other important thing: I'm expecting multiple requests per second, so it would be nice if I could have an existing HTTP server handle the requests, somehow forward them to my app to build a reply, and have the HTTP server send it back.
Some things I have thought of:
Wrap my application in a Tomcat application, although I'm not sure whether Tomcat can run an application continuously instead of only mapping requests to servlets.
Open a socket and parse incoming HTTP requests myself (or there is probably a library for that?). I fear this will have an impact on performance, and I would rather use existing HTTP servers because they are optimized for high traffic.
Use an existing HTTP server (Apache, lighttpd, ...) to handle the requests and have it forward them to my app via something like SCGI, or use a server that can forward via XML-RPC. Are there any other technologies/protocols to do this?
Any advice on how to handle this?
Thanks!
I'd decouple your RESTful service endpoint as much as possible from your original application. This allows you to scale (add multiple servers for your REST endpoint), but also to change your original application without having to change your REST API directly.
Clients <== REST (HTTP) ==> RESTful endpoint <== legacy (sockets) ==> Legacy backend
So your REST server is, on the one hand, a service provider for your clients, but at the same time it is also a client of your original backend.
I would design the RESTful API and then pick one of the existing REST frameworks for Java, like Restlet, and implement the REST service itself. At the same time you can start implementing a gateway between the REST server and your original backend, using sockets.
Pay attention to scalability and performance (i.e., you may want to use connection pools for the REST <=> backend bridge rather than spawning a socket per incoming API request), and also think about the possible advantages of HTTP. You might benefit from being able to use caching, etc., as far as your backend application logic allows.
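As a minimal illustration of the gateway idea, here is a sketch using the JDK's built-in com.sun.net.httpserver.HttpServer; the legacy host/port and the line-based "STATUS" command are assumptions about your existing socket protocol, and a full REST framework (Restlet, JAX-RS, ...) plus a connection pool would replace the simplistic pieces shown here:

    import com.sun.net.httpserver.HttpServer;
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.io.PrintWriter;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class RestGateway {

        public static void main(String[] args) throws IOException {
            HttpServer http = HttpServer.create(new InetSocketAddress(8080), 0);
            http.createContext("/api/status", exchange -> {
                // Translate the HTTP request into a call to the legacy socket backend.
                byte[] body = queryLegacyBackend("STATUS").getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "text/plain");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            http.start();
        }

        // Opens a socket per call for clarity; a pooled connection would be used in practice.
        private static String queryLegacyBackend(String command) throws IOException {
            try (Socket socket = new Socket("localhost", 9000);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {
                out.println(command);
                return in.readLine();
            }
        }
    }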