I have a rather general question: how do I call a method in a RESTful web service correctly?
The method is supposed to do only a transformation in the database and return nothing (so no GET?!). However, I also send no values from the client (so no PUT/POST?!).
So far I am using GET, but I read that's not the proper way to do it...
Thanks in advance!
REST stands for "REpresentational State Transfer". If you're not transferring state representing the thing you're working with (in one direction or the other), it's pretty much inherently not RESTful, and there's no correct way of doing it and still calling it REST.
If you want RPC, then do RPC. Just don't call it RESTful. :)
The way you do it is through RPC. REST is good for state transfer, but not for triggering actions that have nothing to do with state transfer, such as operations that affect a large number of records. Most systems I've seen use REST for 99% of the work in supporting a UI, and RPC for that last 1% -- operations that do not involve state transfer, bulk update operations, that sort of thing. Your goal should be to express as much of the business logic as possible as a reaction to the application of state, reserving the corner cases for RPC.
There's really no "correct" way to do this if you're not transferring any kind of data. You're simply calling a method, so REST does not really apply.
These days PATCH with a "JSON Patch" payload might be a way to go - but it's STILL not RESTful.
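For illustration, a rough sketch of what such a request could look like from a jQuery client, against a hypothetical /articles/123 resource (the URL and fields are made up):

$.ajax({
  url: '/articles/123',
  type: 'PATCH',
  contentType: 'application/json-patch+json', // the "JSON Patch" media type (RFC 6902)
  data: JSON.stringify([
    { op: 'replace', path: '/status', value: 'archived' }
  ])
});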
Calling it RPC is more appropriate, and there's no reason it cannot be in the same API as long as it is documented. Document your API with your RPC methods and REST resources separated.
E.g., see: Understanding RPC vs REST For HTTP APIs
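As a rough sketch of what that separation could look like in practice (Express is assumed here; the routes and handlers are hypothetical):

var express = require('express');
var app = express();

// REST resources: transfer representations of state
app.get('/orders/:id', function (req, res) { res.json({ id: req.params.id }); });
app.put('/orders/:id', function (req, res) { res.sendStatus(204); });

// RPC-style action, documented in its own section: it triggers work but transfers no resource state
app.post('/rpc/recalculate-totals', function (req, res) { res.sendStatus(202); });

app.listen(3000);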
I understand what an idempotent operation is and what a safe operation is from HTTP point of view.
Recently, a colleague changed an update operation from PUT to POST claiming it's a matter of style.
I would like to get all my arguments ready before discussing this.
Does a user agent, proxy, or any other component in the internet take advantage of PUT being idempotent?
What do I risk if I use a POST for idempotent operation?
You do not risk anything.
Idempotency means that repeating an operation has the same effect as performing it once. There is no technical reason that POST, which is intended for operations that are not constrained to be repeatable, would break in any way if the operation did actually turn out to be repeatable.
Does a user agent, proxy, or any other component in the internet take advantage of PUT being idempotent?
Not really. You could (in theory) write a user agent to automatically retry a failed PUT, on the assumption that the server implements its PUT API methods to have idempotent behavior. Unfortunately, that is not guaranteed, so it would be unwise for a generic user agent to assume that.
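To make the "in theory" concrete, such a client could look roughly like this (a sketch only, assuming jQuery 1.8+; the function name is made up), and it is only safe if the server really does implement PUT idempotently:

function ajaxWithRetry(options, retriesLeft) {
  return $.ajax(options).then(null, function (xhr) {
    // only idempotent methods are safe to blindly repeat
    var idempotent = /^(GET|PUT|DELETE|HEAD)$/i.test(options.type || 'GET');
    if (idempotent && retriesLeft > 0) {
      return ajaxWithRetry(options, retriesLeft - 1);
    }
    return $.Deferred().reject(xhr); // POST, or out of retries: give up
  });
}

ajaxWithRetry({ url: '/users/42', type: 'PUT', contentType: 'application/json', data: '{"name":"Alice"}' }, 2);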
Recently, a colleague changed an update operation from PUT to POST claiming it's a matter of style.
If your colleague did that because the operation is non-idempotent in a significant way, then they are correct. It is a matter of style. But style matters when you are designing an API for others to use. (They are likely to expect PUT operations to be idempotent ... and be unpleasantly surprised if they aren't.)
If the change was done for another reason, then you (or they) should clarify what was meant by "it is a matter of style". It is not necessarily correct or incorrect to change an (idempotent) PUT to an (idempotent) POST, but the stylistic reasoning would be based on other things.
As far as I understand, in a CQRS-oriented API exposed through a RESTful HTTP API, the commands and queries are expressed through the HTTP verbs, the commands being asynchronous and usually returning 202 Accepted, while the queries get the information you need. Someone asked me the following: supposing they want to change some information, they would have to send a command and then a query to get the resulting state. Why force the client to make two HTTP requests when you can simply return what they want in the HTTP response of the command, in a single HTTP request?
We had a long conversation in the DDD/CQRS mailing list a couple of months ago (link). One part of the discussion was "one-way commands", and this is what I think you are assuming. You will find that Greg Young is opposed to this pattern. A command changes the state and is therefore prone to failure; it can fail, and you should support this. A REST API with POST/PUT requests provides perfect support for this, but you should not just return 202 Accepted; really give some meaningful result back. Some people return 200 success along with an object that contains a URL to retrieve the newly created or updated object. If the command handler fails, it should return 500 and an error message.
Having fire-and-forget commands is dangerous since it can give a consumer wrong ideas about the system state.
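A rough sketch of returning a meaningful result from a POST command handler (Express assumed; the route, handleChangeAddress and the response shape are all hypothetical):

var app = require('express')();
app.use(require('express').json());

// hypothetical command handler: returns a promise that rejects on failure
function handleChangeAddress(customerId, command) {
  return Promise.resolve();
}

app.post('/customers/:id/change-address', function (req, res) {
  handleChangeAddress(req.params.id, req.body)
    .then(function () {
      // meaningful result: where the updated representation can be fetched
      res.status(200).json({ url: '/customers/' + req.params.id });
    })
    .catch(function (err) {
      res.status(500).json({ error: err.message }); // the command failed; say so
    });
});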
My team also recently had a very heated discussion about this very thing. Thanks for posting the question. I have usually been the defender of the "fire and forget" style commands. My position has always been that, if you want to be able to move to an async command dispatcher some day, then you cannot allow commands to return anything. Doing so would kill your chances, since an async command doesn't have much of a way to return a value to the original HTTP call. Some of my teammates really challenged this thinking, so I had to start asking whether my position was really worth defending.
Then I realized that async or not async is JUST an implementation detail. This led me to realize that, using our frameworks, we can build in middleware to accomplish the same thing our async dispatchers are doing. So, we can build our command handlers the way we want to, returning whatever makes sense, and then let the framework around the handlers deal with the "when".
Example: My team is building an HTTP API in node.js currently. Instead of requiring a POST command to only return a blank 202, we are returning details of the newly created resource. This helps the front-end move on. The front-end POSTs a widget and opens a channel to the server's web socket using the same command as the channel name. The request comes to the server and is intercepted by middleware, which passes it to the service bus. When the command is eventually processed synchronously by the handler, it "returns" via the web socket and the front-end is happy. The middleware can be disabled easily, making the API synchronous again.
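The client side of that pattern could look roughly like this (socket.io client assumed; the command id, renderWidget and the endpoint are made up for the example):

var socket = io(); // connect to the server's web socket

var commandId = 'cmd-' + Date.now(); // used as the channel name for this command

// listen on the channel before firing the command
socket.on(commandId, function (createdWidget) {
  renderWidget(createdWidget); // the handler eventually "returns" here
});

// fire the command; the middleware/service bus takes it from there
$.post('/widgets', { commandId: commandId, name: 'sprocket' });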
There is nothing stopping you from doing that. If you execute your commands synchronously and create your projections synchronously, then it will be easy for you to just make a query directly after executing the command and return that result. If you do this asynchronously via the REST API, then you have no query result to send back. If you do it asynchronously within your system, then you can wait for the projection to be created and then send the response to the client.
The important thing is that you separate your write and read models in classic CQRS style. That does not mean that you cannot do a read in the same request as you do the command. Sure, you can send a command to the server and then, with SignalR (or something), wait for a notification that your projection has been created/updated. I do not see a problem with waiting for the projection to be created on the server side instead of on the client.
How you do this will affect your infrastructure and error handling. Also, you will hold the HTTP request open for longer if you return the result at once.
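A compact sketch of the synchronous variant described above (Express assumed; executeCommand, waitForProjection and readModel are placeholders for your own infrastructure):

var app = require('express')();
app.use(require('express').json());

// placeholders for your own command/projection infrastructure
function executeCommand(cmd) { return Promise.resolve(); }
function waitForProjection(id) { return Promise.resolve(); }
var readModel = { getCustomer: function (id) { return Promise.resolve({ id: id }); } };

app.post('/commands/rename-customer', function (req, res) {
  executeCommand(req.body)
    .then(function () { return waitForProjection(req.body.customerId); }) // wait until the read model is updated
    .then(function () { return readModel.getCustomer(req.body.customerId); })
    .then(function (view) { res.status(200).json(view); }) // one round trip: command plus fresh query result
    .catch(function (err) { res.status(500).json({ error: err.message }); });
});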
I've got a Backbone web application that talks to a RESTful PHP server. For PUT and POST it matters in which order the requests arrive at the server and for GET it matters in which order the responses arrive at the client.
The web application does not need to be used concurrently by multiple users, but what might happen is that the user changes its name twice really fast. Then the order in which the server processes PUT /name/Ann and PUT /name/Bea determines whether the name is set to Ann or Bea.
Backbone.Safesync and Backbone.Sync.AjaxQueue are two libraries that try to solve this problem. Doesn't Safesync only solve the problem with GET? Sync.AjaxQueue is outdated, but might serve as inspiration to implement a custom queued sync function. Making sync synchronous would solve the problem. If a request is only sent after the previous response is received, then only one request is processed at a time.
Any advice on how to proceed?
BTW: I don't think using PATCH requests would solve anything, because in my example the same attribute is changed twice.
There are a few ways to solve this; here are two:
add a timestamp to all requests, store it in the DB as "modified", and have the server only accept a request whose timestamp is later than the one in the DB
use promises to delay the second request until the first one has been responded to; there's a promise/deferred mechanism built into jQuery, but you can also use a third-party one, for instance Q or when (see the sketch below)
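A rough sketch of the second approach with jQuery deferreds (the queuedSave helper and the user model are made up for the example):

var queue = $.Deferred().resolve(); // start with an already-resolved promise

function queuedSave(model, attrs) {
  // chain each save onto the previous one, so requests go out strictly in order
  queue = queue.then(function () {
    return model.save(attrs); // Backbone's save() returns the jqXHR, which is a promise
  });
  return queue;
}

// both PUTs are sent in order; the second waits for the first response
queuedSave(user, { name: 'Ann' });
queuedSave(user, { name: 'Bea' });

Note that a failed request will reject the chain, so in real code you would also handle failures so that later saves still go out.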
If you can afford the delay, an easy approach is to set the async option to false when you call whatever method you're calling that results in a Backbone.sync. For example, in the appropriate model(s) simply override the default sync method to include the additional option.
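For example, a minimal sketch of that override (keep in mind that synchronous XHR blocks the UI, so use it sparingly):

var MyModel = Backbone.Model.extend({
  sync: function (method, model, options) {
    options = options || {};
    options.async = false; // jQuery passes this straight through to the underlying XHR
    return Backbone.sync(method, model, options);
  }
});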
When one player makes a move, it is sent to the server, and that move is pushed by the server to the second player. As far as I know, the server pushing the move to the second player goes against being a RESTful API.
From what little I know about backbone.js, it is really meant for RESTful setups. Is there a way to use backbone.js with websockets to allow the server to push data down to the clients at any time?
Is there even an idiomatic way of implementing chess with backbone.js and websockets? And if not then what would be the correct way to implement chess?
You can definitely do it. Instead of fetching your collection/model, you will just set or update/reset the JSON data from the websocket into the proper model or collection.
Somewhat pseudo-code example:
var board = new Backbone.Collection(); // this would probably be your own extended Collection instead.

function boardChange(jsonFromServer) {
  // Take the JSON array from the server and merge it into the collection.
  // set() (called update() in older Backbone versions) adds, removes and merges
  // models as needed, triggering 'add'/'remove'/'change' events where appropriate.
  board.set(jsonFromServer);
}
Implementing a chess app doesn't really require a Backbone architecture. As long as your server supports an asynchronous API, WebSockets, or even long-polling (anything real-time), it's possible. There are tons of APIs out there on the web already that do this (e.g. Firebase), and frameworks (e.g. Meteor) come to mind as well.
Also check out Socket.IO if you're using Node.js on the server side. There are tons of open-source projects on GitHub that take advantage of some of these web technologies already, Backbone in particular, such as Backbone with Socket.IO. Backbone.ioBind also looks like a promising project with code samples that you can look at.
To make it work with Backbone, the data API just needs to notify any client-side listeners that an update has been made on the server, which in turn triggers a change event on your Backbone Model.
You can even set a timer that performs a request to the server at a fixed interval just to test out your code prototypes.
You could overload the Backbone.sync method to use websockets. The de facto To-Do example (http://addyosmani.github.com/todomvc/) does this to use localStorage instead of a RESTful datastore, and you could do the same for web sockets. In fact, if you look around GitHub/Google you may be able to find someone who's already done it.
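A very rough sketch of that idea, assuming a socket.io connection and a made-up 'sync' event that the server would have to understand:

var socket = io();

Backbone.sync = function (method, model, options) {
  var deferred = $.Deferred();
  // ship the operation over the socket instead of issuing an HTTP request
  socket.emit('sync', { method: method, url: _.result(model, 'url'), data: model.toJSON() }, function (resp) {
    if (options && options.success) options.success(resp); // let Backbone update the model
    deferred.resolve(resp);
  });
  return deferred.promise();
};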
If I have a client app sending requests to my web service, one after another, will the web service be able to handle each request made and not override a previous request because of a new one? I want all requests to be handled, not replaced by another. Will I be able to do this with multiple requests all coming from the same client?
I have no idea why the other answer is so long for what is essentially a simple question about the basics, but the answer is yes.
Each request is independent of others, unless you specifically program some sort of crossover into the server (e.g. a static cross-thread list used by every request or a more complex structure).
It is easier to encounter crossover on the client side, if you're using an asynchronous pattern that delivers results via events: you need to make sure you match the result to the correct request (generally done by providing some token as the "custom state" variable, which you can use to determine the original request in the response handler).
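For example (all names here are hypothetical; jQuery is only used for illustration):

var nextRequestId = 0;

function sendRequest(payload) {
  var requestId = ++nextRequestId; // the "custom state" token for this request
  $.ajax({ url: '/api/process', type: 'POST', data: payload })
    .done(function (response) {
      // the closure carries the token, so the handler knows which request this answers
      handleResponse(requestId, response);
    });
}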
The answer depends on your architecture.
For example, if the server is multi-threaded and the business logic part is stateless, then the requests won't override each other on the server, as each thread will call a function and return the result.
On the client side, your best bet is to have each request sent from a different thread, so that that thread blocks until it gets its response, then the processing can go on fine.
If you have a different design, please describe it.
UPDATE: Based on the new info, here is something you may want to look at:
http://weblogs.java.net/blog/2006/02/01/can-i-call-you-back-asynchronous-web-services
I am curious how, or if, you are doing asynchronous webservice calls. Generally webservices seem to block, but if you are making these calls so fast then I can only assume asynchronicity.
The web service can store the answers on the server side: a stateful class stores results in a dictionary, keyed by IP address. The client then polls for answers; ideally, when you send a request, you should be able to get back an array of answers as the response. If you have sent all the requests and are still waiting for more responses, keep polling; again, you should get back an array of answers, to cut down on wasted bandwidth.
A better approach is to have your client also be a server, so that you send the request along with the IP address:port for the callback, and the server makes a one-way response to the client. This is more complicated, but it cuts down on wasted bandwidth.
Update 2: This is being done without checking it, so there are probably errors:
@WebMethod
public ResponseModel[] AnswerQuestion(QuestionModel[] question) {
    // Get the IP address of the client (getClientIpAddress() is a placeholder for
    // however you obtain it, e.g. via an injected WebServiceContext)
    String ipaddress = getClientIpAddress();
    // The controller kicks off processing; finished answers end up in StaticAnswers
    AnswerController ac = new AnswerController(question, ipaddress);
    // Return whatever answers are already available for this client (an array)
    return mypackage.myclass.StaticAnswers.GetAnswers(ipaddress);
}

@WebMethod
public ResponseModel[] GetAnswers() {
    String ipaddress = getClientIpAddress(); // same placeholder as above
    return mypackage.myclass.StaticAnswers.GetAnswers(ipaddress);
}
OK, this should give a rough idea.
There are no assumptions in AnswerController. It should know everything it needs to do the job, as it will be stateless; it refers to no global variables that can change, only const and perhaps static variables.
The StaticAnswers class is static and just stores answers, with the lookup keyed by IP address, for speed.
It will return the answers in an appropriate array.
When you have sent the last question then just call GetAnswers until you have gotten back everything. You may need to keep track of how many have been sent, and how many have been received, on the client side.
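A rough sketch of that client-side bookkeeping (everything here is hypothetical; sendQuestion and getAnswers would wrap whatever client stub you generate for the service, and handleAnswer processes each answer):

var sent = 0, received = 0;

function askQuestion(question) {
  sent++;
  sendQuestion(question); // wraps the AnswerQuestion call
}

function poll() {
  getAnswers().then(function (answers) { // wraps the GetAnswers call
    received += answers.length;
    answers.forEach(handleAnswer);
    if (received < sent) {
      setTimeout(poll, 500); // still waiting on some answers: poll again shortly
    }
  });
}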
This isn't ideal, and is a very rough sketch, but hopefully it will give you something to work with.