I do hope this question is not too subjective, as I am actually looking for a "best practice" that makes sense. However, the question is a bit broader than just this case.
Let's say I have a view flag on an object (seen or not). When this object is seen, I see three options for setting it to true:
Let the app consumer set it by issuing an UPDATE call
When we call the GetObject method, we automatically set "seen" to true
We add a method to the API, say SetToSeen, which the consumer is responsible for calling
What is the preferable approach here?
For me, it depends on who uses this flag.
If it is the client, then the client should update the object (here "seen" could read as "displayed to the user"), e.g. via PUT /object/{id}/seen.
If this is only for the server and reads as "displayed to the client", then the server should update it when the object has been served.
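To make the two options concrete, here is a minimal JAX-RS-style sketch; the resource layout, ObjectStore, and MyObject are assumptions made up for illustration, not something taken from the question:
import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/object/{id}")
public class ObjectResource {

    // Client-owned meaning ("displayed to the user"): the client reports it explicitly.
    // Repeating the call changes nothing, so PUT's idempotent semantics hold.
    @PUT
    @Path("/seen")
    public Response markSeen(@PathParam("id") long id) {
        ObjectStore.setSeen(id, true);          // ObjectStore is a hypothetical persistence helper
        return Response.noContent().build();
    }

    // Server-owned meaning ("displayed to the client"): recorded as a side effect of serving the object.
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response getObject(@PathParam("id") long id) {
        MyObject obj = ObjectStore.find(id);    // MyObject is a hypothetical domain type
        ObjectStore.setSeen(id, true);          // server-side bookkeeping, invisible to the client
        return Response.ok(obj).build();
    }
}
Either way, the decision is about who owns the meaning of the flag, not about the HTTP mechanics.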
Related
I have a doubt about HTTP methods in a REST API. I have read a lot about this on the internet: that we can use PUT to create or update a resource, POST to create a resource, and DELETE to delete a resource.
But is this mandatory? When we write code we just add an annotation like Put, Post, or Delete, but what would happen if I used the Delete annotation and inside the method did something else, say, wrote add logic instead of delete? I think I can do that, and similarly with the other methods (Post and Put). Then what is the significance of these annotations? If I can do what I mentioned above, i.e. write the logic for add under the Delete annotation, then for me DELETE is only a type of request, and I can write any logic for add or update behind it.
Similarly, I read that PUT is idempotent, but if I write add logic instead of update logic then it is not idempotent.
Maybe I am wrong here. Please clarify this; it is confusing me, and nowhere is it explained. Everywhere there is only the generic statement.
But is this mandatory?
Not mandatory, no.
Roy Fielding made an interesting observation in 2002:
HTTP does not attempt to require the results of a GET to be safe. What
it does is require that the semantics of the operation be safe
The same holds for the other methods -- we all agree on what the request messages mean (their semantics), because that's what is in the standards (RFC 7231, etc.). So PUT always means "please replace your current representation of the target resource with the representation I'm providing", but what your implementation does with that message is up to you.
Of course, if your implementation is surprising, there is an important caveat:
it is a fault of the implementation, not the interface or the user of that interface, if anything happens as a result that causes loss of property
The point of REST is that general-purpose components can interact with your resources via the uniform interface. If your implementation doesn't match the uniform interface, then it's your bug, not a bug in the component, that things don't "just work".
I read that PUT is idempotent, but if I write add logic instead of update logic then it is not idempotent.
The semantics of PUT are idempotent. If your handler for PUT requests isn't idempotent, then you have made a mistake, and your implementation has a fault. If a general purpose component needs to send multiple PUT messages (for instance, because a response was lost on an unreliable network), then that fault becomes a failure.
"Add logic" isn't necessarily not idempotent, of course -- think about adding a key and value to a dictionary; if you add the same key twice, that's the same as adding the key once.
k, v, old, new = "key", "value", "old value", "new value"   # example values

d = {}
d[k] = v
d[k] = v    # idempotent, because this is a no-op

e = {k: old}
if e[k] == old:
    e[k] = new
if e[k] == old:    # Again, idempotent, because the second copy of the message is a no-op
    e[k] = new
What's supposed to happen, if somebody sends a PUT request and you cannot ensure idempotent semantics, is that you should return a 405 Method Not Allowed, and make sure that your response to OPTIONS doesn't claim that PUT is supported for that resource.
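As a rough sketch of that last point (plain servlet API; the resource itself is a made-up example of something that cannot honor PUT's semantics):
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical resource that cannot guarantee idempotent handling of PUT.
public class AppendOnlyLogServlet extends HttpServlet {

    private static final String SUPPORTED_METHODS = "GET, POST, OPTIONS";

    @Override
    protected void doPut(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // We cannot guarantee idempotent semantics for this resource, so refuse the method...
        resp.setHeader("Allow", SUPPORTED_METHODS);
        resp.sendError(HttpServletResponse.SC_METHOD_NOT_ALLOWED);   // 405
    }

    @Override
    protected void doOptions(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // ...and keep OPTIONS consistent with that refusal: PUT is not advertised.
        resp.setHeader("Allow", SUPPORTED_METHODS);
        resp.setStatus(HttpServletResponse.SC_OK);
    }
}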
I'm using hunchentoot session values to make my server code re-entrant. The problem is that session values are, by definition, retained during the session, i.e., from one call from the same browser to the next, whereas what I am really looking for is what amounts to thread-specific re-entrancy, so that all the values disappear between calls -- I want to treat each click as a separate "from scratch" event, even if the clicks come from the same session. It is easy enough to have the driver either set my session values to nil or delete them, but I'm wondering if there's a "correct" way to do this? I don't see any thread-based analog to hunchentoot:session-value in the documentation.
Thanks in advance for any guidance you can offer.
If you want a value to be "thread specific" and at the same time to be "from scratch" on every request, that requires that every request must be dispatched in a brand new thread. This is not the case according to the Hunchentoot documentation, which says that two models are supported: a single-threaded taskmaster and a thread-per-connection taskmaster.
If your configuration is multi-threaded, then a thread-specific variable bound during request handling can therefore be expected to be per-connection. In a single-threaded Hunchentoot setup, it will effectively be global, tied to the single request-servicing thread.
A thread-based analog to hunchentoot:session-value probably doesn't exist because it would only introduce behaviors into the web app which surprisingly change if the threading model is reconfigured, or if the request pattern from the browser changes. A browser can make multiple requests using the same connection, or close the connection between requests.
To extend the request objects with custom per-request data, I would look into, perhaps, subclassing the acceptor (how to do this is described in the docs). My custom acceptor would have a custom method on the process-connection generic function which would create extended/subclassed request objects carrying the extra stuff I wanted to put into a request.
Another way would be to have a global weak hash table which maps request objects, as keys, to the additional information.
My intent is to create a WebAPI for an IoT device. It should give me information about hardware ports, device status, etc. My question now is: would it be okay to use it for controlling some of the ports? For example, an LED connected to an output of the IoT device would be controlled like [GET] /api/led/{id}/on
or
[GET] /api/led/{id}/off
Would that contradict the actual meaning of a WebAPI?
Yes - this is not a great structure, as the GET method is supposed to be idempotent AND safe (see http://restcookbook.com/HTTP%20Methods/idempotency/ for a more detailed definition). Practically speaking, "safe" means that a GET request should not modify state or data.
So:
GET /api/led/{id}/on
should return a representation to indicate whether it is on or off, but should not actually modify the state of the LED. It could return true or {"on": true} if it were on and false if it were off - whatever makes sense for your application.
To turn it on or off you should use a non-safe method, so what you could do is:
PUT /api/led/{id}/on
and make the body true or false, or possibly {"on":true} or {"on":false}
or possibly
POST /api/led/{id}/on
to turn it on and
POST /api/led/{id}/off
to turn it off.
All of the above are valid WebApi/REST techniques, but some may be more or less clear to the consumer depending on standard terminology/semantics in your context.
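For illustration, a minimal JAX-RS-style sketch of the GET-to-read / PUT-to-change split; LedController is a made-up hardware driver, and the crude body parsing is only there to keep the example short:
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/api/led/{id}/on")
public class LedOnResource {

    // Safe and idempotent: reading the state never touches the hardware.
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String isOn(@PathParam("id") int id) {
        return "{\"on\": " + LedController.isOn(id) + "}";   // LedController is a hypothetical driver
    }

    // Not safe (it changes state), but idempotent: sending {"on": true} twice leaves the LED on.
    @PUT
    @Consumes(MediaType.APPLICATION_JSON)
    public Response setOn(@PathParam("id") int id, String body) {
        boolean on = body.contains("true");   // a real service would bind a proper DTO instead
        LedController.set(id, on);
        return Response.noContent().build();
    }
}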
One of our APIs has a tasks resource. Consumers of the API might create, delete and update a given task as they wish.
If a task is completed (i.e., its status is changed via PUT /tasks/<id>), a new task might be created automatically as a result.
We are trying to keep it RESTful. What would be the correct way to tell the calling user that a new task has been created? The following solutions came to my mind, but all of them have weaknesses in my opinion:
Include an additional field in the PUT response which contains information about any new task that was created.
Return only the updated task, and expect the user to call GET /tasks in order to check if any new tasks have been created.
Option 1 breaks the RESTful-ness in my opinion, since the API is expected to return only information regarding the updated entity. Option 2 expects the user to do extra work, and if they don't, no one will realize that a new task was created.
Thank you.
UPDATE: the PUT call returns an HTTP 200 code along with the full JSON representation of the updated task.
@tophallen suggests having a task tree so that (if I got it right) the returned entity in option 2 contains the new task as a direct child.
You really have two options with a 200-status PUT. You can use headers (and if you do, check out this post). Certainly not a bad option, but you would want to make sure it was normalized site-wide and well documented, and that you didn't have anything such as firewalls/F5's/etc. re-writing your headers.
Something like this would be a fair option though:
HTTP/1.1 200 OK
Related-Tasks: /tasks/11;/tasks/12
{ ...task response... }
Or you have to give some indication to the client in the response body. You could have a task structure that supports child tasks being on it, or you could normalize all responses to include room for "meta" stuff, i.e.
HTTP/1.1 200 OK
{
"data": { ...the task },
"related_tasks": [],
"aggregate_status": "PartiallyComplete"
}
Something like this used everywhere (a bit of work as it sounds like you aren't just starting this project) can be very useful, as you can also use it for scenarios like paging.
Personally, I think it might be best if you made the related_tasks property contain either routes to call for the child tasks or just their ids; that keeps responses lighter, since the client might not always care to check on a child task immediately anyway.
EDIT:
Actually, the more I think about it, the more headers make sense in your situation. Since a client can update a task at any point during task processing, there may or may not be a child task in play, so modifying the data structure for the off-chance that the client updates a task while a child task has started seems like more work than benefit. A header would allow you to easily add a child task and notify the user at any point - you could apply the same thing to a POST of a task that happens to finish immediately and kicks off a child task, etc. It can easily support more than one task. I think this also keeps it the most RESTful and reduces server calls, since a client would always be able to know what is going on in the process chain. The exact details of the header are yours to define, but I believe it is more traditional in a scenario like this to have it point to a resource rather than to a key within a resource.
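A rough sketch of what building that header could look like (JAX-RS style; Task, TaskService, and spawnedChildren are hypothetical names, not something from your API):
import java.util.List;
import java.util.stream.Collectors;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/tasks/{id}")
public class TaskResource {

    @PUT
    @Produces(MediaType.APPLICATION_JSON)
    public Response updateTask(@PathParam("id") long id, Task update) {
        Task saved = TaskService.update(id, update);             // Task and TaskService are hypothetical
        Response.ResponseBuilder builder = Response.ok(saved);   // body stays the plain task representation

        // If completing this task spawned follow-up tasks, point at them in a header
        // instead of changing the shape of the task representation itself.
        List<Long> children = TaskService.spawnedChildren(id);
        if (!children.isEmpty()) {
            builder.header("Related-Tasks", children.stream()
                    .map(childId -> "/tasks/" + childId)
                    .collect(Collectors.joining(";")));
        }
        return builder.build();
    }
}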
If there are other options though, I'm very interested to hear them.
It looks like you're very concerned about being RESTful, but you're not using HATEOAS, which is contradictory. If you use HATEOAS, the related entity is just another link and the client can follow them as they please. What you have is a non-problem in REST. If this sounds new to you, read this: http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
Option 1 breaks the RESTful-ness in my opinion, since the API is
expected to return only information regarding the updated entity.
This is not true. The API is expected to return whatever is documented as the information available for that media type. If you documented that a task has a field for related side-effect tasks, there's nothing wrong with it.
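For illustration only (the link relations and URIs below are invented for the example), a hypermedia-style response to the PUT could simply carry the new task as another link for the client to follow:
HTTP/1.1 200 OK

{
  "id": 42,
  "status": "completed",
  "links": [
    { "rel": "self", "href": "/tasks/42" },
    { "rel": "related", "href": "/tasks/43" }
  ]
}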
Suppose I send objects of the following type from GWT client to server through RPC. The objects get stored to a database.
public class MyData2Server implements Serializable
{
    private String myDataStr;

    public String getMyDataStr() { return myDataStr; }
    public void setMyDataStr(String newVal) { myDataStr = newVal; }
}
On the client side, I constrain the field myDataStr to be, say, 20 characters max.
I have been reading about web-application security. If I learned one thing, it is that client data should not be trusted, and that the server should therefore check the data. So I feel like I ought to check on the server that my field is indeed not longer than 20 characters, and otherwise abort the request, since I know it must be an attack attempt (assuming no bug on the client side, of course).
So my questions are:
How important is it to actually check on the server side that my field is not longer than 20 characters? I mean, what are the chances/risks of an attack, and how bad could the consequences be? From what I have read, it looks like it could go as far as bringing the server down through overflow and denial of service, but not being a security expert, I could be misinterpreting.
Assuming I would not be wasting my time doing the field-size check on the server, how should one accomplish it? I seem to recall reading (sorry I no longer have the reference) that a naive check like
if (myData2ServerObject.getMyDataStr().length() > 20) throw new MyException();
is not the right way. Instead one would need to define (or override?) the method readObject(), something like in here. If so, again how should one do it within the context of an RPC call?
Thank you in advance.
How important is it to actually check on the server side that my field is not longer than 20 characters?
It's 100% important, except maybe if you can trust the end user 100% (e.g. some internal apps).
I mean what are the chances
Generally: increasing. The exact probability can only be answered for your concrete scenario individually (i.e. no one here will be able to tell you, though I would also be interested in general statistics). What I can say is that tampering is trivially easy. It can be done in the JavaScript code (e.g. using Chrome's built-in dev tools debugger) or by editing the clearly visible HTTP request data.
/risks of an attack and how bad could the consequences be?
The risks can vary. The most direct risk can be evaluated by thinking: "What could you store and do if you could set any field of any GWT-serializable object to any value?" This is not only about exceeding the size limit, but maybe about tampering with the user ID, etc.
From what I have read, it looks like it could go as far as bringing the server down through overflow and denial of service, but not being a security expert, I could be mis-interpreting.
This is yet another level to deal with, and cannot be addressed with server side validation within the GWT RPC method implementation.
Instead one would need to define (or override?) the method readObject(), something like in here.
I don't think that's a good approach. It tries to accomplish two things, but can do neither of them very well. There are two kinds of checks on the server side that must be done:
On a low level, when the bytes come in (before they are converted by RemoteServiceServlet to a Java Object). This needs to be dealt with on every server, not only with GWT, and would need to be answered in a separate question (the answer could simply be a server setting for the maximum request size).
On a logical level, after you have the data in the Java object. For this, I would recommend a validation/authorization layer. One of the awesome features of GWT is that you can now use JSR 303 validation on both the server and client side. It doesn't cover every aspect (you would still have to test for user permissions), but it can cover your "@Size(max = 20)" use case (see the sketch below).
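A minimal sketch of that JSR 303 route, assuming the standard javax.validation API (how you wire the validator into your GWT service implementation, and which exception you throw, are up to you):
import java.io.Serializable;
import javax.validation.constraints.Size;

// The same annotation on the shared DTO drives client-side validation (for usability)
// and server-side validation (for security).
public class MyData2Server implements Serializable {

    @Size(max = 20)
    private String myDataStr;

    public String getMyDataStr() { return myDataStr; }
    public void setMyDataStr(String newVal) { myDataStr = newVal; }
}
On the server, the RPC method would then re-check the object before doing anything with it, for example:
import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;

Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
Set<ConstraintViolation<MyData2Server>> violations = validator.validate(myData2ServerObject);
if (!violations.isEmpty()) {
    // reject the request instead of storing tampered data
    throw new IllegalArgumentException(violations.iterator().next().getMessage());
}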