Use of HTTP RESTful methods GET/POST/etc. Are they superfluous?

What is the value of RESTful “methods” (i.e. GET, POST, PUT, DELETE, PATCH, etc.)?
Why not just make every client use the “GET” method with any/all relevant params, headers, request bodies, JSON, etc.?
On the server side, the response to each method is custom & independently coded!
For example, what difference does it make to issue a database query via GET instead of POST?
I understand that GET is for queries that don’t change the DB (or anything else?).
And POST is for calls that do make changes.
But, near as I can tell, the RESTful standard doesn’t prevent one from coding up a server response to GET that issues a stored procedure call which indeed DOES change the DB.
Vice versa… the RESTful standard doesn’t prevent one from coding up a server response to POST that issues a stored procedure call which does NOT change ANYTHING!
I’m not arguing about whether a mid-tier (HTTP) “REST-like” layer is necessary; it clearly is.
Let's say I'm wrong (and I may be). Isn't it still likely that there are numerous REST servers violating the proper use of these protocols and suffering ZERO repercussions?
The following do not directly address my questions but merely dance uncomfortably around them like an acidhead stoner at a Dead concert:
Different Models for RESTful GET and POST
RESTful - GET or POST - what to do?
GET vs POST in REST Web Service
PUT vs POST in REST
I just spent ~80 hours trying to communicate a PATCH to my REST server (older Android Java doesn't recognize the newer PATCH, so I had to resort to a stupid kludge: sending an X-HTTP-Method-Override header). A POST would have worked fine, but the sysop wouldn't budge because he respects REST.
I just don’t understand why we bother with each individual method. They don't seem to have much impact on idempotence. They seem to be mere guidelines. And if you "violate" these "guidelines" they give someone else a chance to point a feckless finger at you. But so what?
Aren't these guidelines more trouble than they're worth?
I'm just confused. Please excuse the stridency of my post.

What is the value of RESTful “methods” (i.e. GET, POST, PUT, DELETE, PATCH, etc.)?
First, a clarification. Those aren't RESTful methods; those are HTTP methods. The web is a reference implementation (for the most part) of the REST architectural style.
Which means that the authoritative answers to your questions are documented in the HTTP specification.
But, near as I can tell, the RESTful standard doesn’t prevent one from coding up a server response to GET that issues a stored procedure call which indeed DOES change the DB.
The HTTP specification designates certain methods as being safe. Informally, this means that the method is read-only; the client is not responsible for any side effects that may occur on the server.
The purpose of distinguishing between safe and unsafe methods is to allow automated retrieval processes (spiders) and cache performance optimization (pre-fetching) to work without fear of causing harm.
But you are right, the HTTP standard doesn't prevent you from changing your database in response to a GET request. In fact, it even calls out specifically a case where you may choose to do that:
a safe request initiated by selecting an advertisement on the Web will often have the side effect of charging an advertising account.
The HTTP specification also designates certain methods as being idempotent:
Of the request methods defined by this specification, PUT, DELETE, and safe request methods are idempotent.
The motivation for having idempotent methods? Unreliable networks:
Idempotent methods are distinguished because the request can be repeated automatically if a communication failure occurs before the client is able to read the server's response.
Note that the client here might not be the user agent, but an intermediary component (like a reverse proxy) participating in the conversation.
Thus, if I'm writing a user agent, or a component, that needs to talk to your server, and your server conforms to the definition of methods in the HTTP specification, then I don't need to know anything about your application protocol to know how to correctly handle lost messages when the method is GET, PUT, or DELETE.
On the other hand, POST doesn't tell me anything, and since the unacknowledged message may still be on its way to you, it is dangerous to send a duplicate copy of the message.
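To make that concrete, here is a minimal sketch (hypothetical helper and endpoint; only the method set comes from RFC 7231) of how a generic component can decide, with zero application knowledge, whether resending is safe:

```python
import requests
from requests.exceptions import ConnectionError, Timeout

# Methods RFC 7231 defines as idempotent: if the response is lost,
# the request can be repeated without changing the outcome.
IDEMPOTENT = {"GET", "HEAD", "OPTIONS", "TRACE", "PUT", "DELETE"}

def send(method, url, retries=3, **kwargs):
    # Hypothetical generic sender: it retries only when the HTTP
    # method itself says a repeat is harmless.
    attempts = retries if method.upper() in IDEMPOTENT else 1
    for attempt in range(attempts):
        try:
            return requests.request(method, url, timeout=5, **kwargs)
        except (ConnectionError, Timeout):
            if attempt == attempts - 1:
                # For POST this is the first and only attempt: a blind
                # duplicate might, say, charge a customer twice.
                raise
```

Nothing in this sketch knows anything about the application protocol; the method alone carries the retry policy.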
Isn't it still likely that there are numerous REST servers violating the proper use of these protocols and suffering ZERO repercussions?
Absolutely -- remember, the reference implementation of hypermedia is HTML, and HTML doesn't include support for PUT or DELETE. If you want to afford a hypermedia control that invokes an unsafe operation, while still conforming to the HTTP and HTML standards, then POST is your only option.
Aren't these guidelines more trouble than they're worth?
Not really? They offer real value in reliability, and the extra complexity they add to the mix is pretty minimal.
I just don’t understand why we bother with each individual method. They don't seem to have much impact on idempotence.
They don't impact it, they communicate it.
The server already knows which of its resources are idempotent receivers. It's the client and the intermediary components that need that information. The HTTP specification gives you the ability to communicate that information for free to any other compliant component.
Using the maximally appropriate method for each request means that you can deploy your solution into a topology of commodity components, and it just works.
Alternatively, you can give up reliable messaging. Or you can write a bunch of custom code in your components to tell them explicitly which of your endpoints are idempotent receivers.
POST vs PATCH
Same song, different verse. If a resource supports OPTIONS, GET, and PATCH, then I can discover everything I need to know to execute a partial update, and I can do so using the same commodity implementation I use everywhere else.
Achieving the same result with POST is a whole lot more work. For instance, you need some mechanism for communicating to the client that POST has partial update semantics, and what media-types are accepted when patching a specific resource.
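For instance (hypothetical endpoint; Allow is from RFC 7231 and Accept-Patch from RFC 5789), a generic client could discover partial-update support like this:

```python
import requests

# Ask the resource what it supports (hypothetical endpoint).
r = requests.options("https://api.example.com/things/1")

allowed = {m.strip() for m in r.headers.get("Allow", "").split(",")}
patch_types = r.headers.get("Accept-Patch", "")

if "PATCH" in allowed and "application/merge-patch+json" in patch_types:
    # A standard partial update is possible, with no out-of-band docs.
    requests.patch(
        "https://api.example.com/things/1",
        json={"status": "IDLE"},
        headers={"Content-Type": "application/merge-patch+json"},
    )
```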
What do I lose by making every client call a GET, with the server honoring it just by paying attention to the request and not the method?
Conforming user-agents are allowed to assume that GET is safe. If you have side effects (writes) on endpoints accessible via GET, then the agent is allowed to pre-fetch the endpoint as an optimization -- the side effects start firing even though nobody expects it.
If the endpoint isn't an idempotent receiver, then you have to consider that the GET calls can happen more than once.
Furthermore, the user agent and intermediary components are allowed to make assumptions about caching -- requests that you expect to get all the way through to the server don't, because conforming components along the way are permitted to serve replies out of their own cache.
To ice the cake, you are introducing an additional risk: undefined behavior.
A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.
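A minimal sketch of that anti-pattern (hypothetical endpoint): the client library will happily attach a body to a GET, but nothing in the standard obliges any intermediary to forward it, or even to accept the request:

```python
import requests

# Undefined behavior per RFC 7231: a payload on GET has no semantics,
# so a proxy may drop the body, reject the request outright, or cache
# the response as if the body had never been sent.
r = requests.get(
    "https://api.example.com/search",   # hypothetical endpoint
    json={"query": "status:FAILED"},    # body with no defined meaning
)
```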
Where I believe you are coming from, though I'm not certain, is more of an RPC point of view. Client sends a message, server responds; so long as both participants in the conversation have a common understanding of the semantics of the message, does it matter if the text in the message says "GET" or "POST" or "PATCH"? Of course not.
RPC is a fantastic choice when it fits the problem you are trying to solve.
But...
RPC at web scale is hard. Can your team deliver that? Can your team deliver it cost-effectively?
On the other hand, HTTP at scale is comparatively simple; there's an enormous ecosystem of goodies, using scalable architectures, that are stable, tested, well understood, and inexpensive. The tires are well and truly kicked.
You and your team hardly have to do anything; a bit of block and tackle to comply with the HTTP standards, and from that point on you can concentrate on delivering business value while you fall into the pit of success.

Related

Purpose of the different HTTP-Methods?

What's the purpose of the HTTP methods GET, POST, PUT, DELETE, HEAD, etc.?
Is it just a convention to indicate what I'm intending to do? Or is it that certain HTTP features are only available if I use a specific method?
GET requests:
GET requests can be cached
GET requests remain in the browser history
GET requests can be bookmarked
GET requests should never be used when dealing with sensitive data
GET requests have length restrictions
GET requests are only used to request data (not modify)
POST requests:
POST requests are not cached by default
POST requests do not remain in the browser history
POST requests cannot be bookmarked
POST requests have no restrictions on data length
Now I hope you understand the difference between GET and POST. In a similar way, every method has a different purpose.
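As a small illustration of those two lists (hypothetical endpoints), the GET carries everything in a bookmarkable, cacheable URL, while the POST carries a payload:

```python
import requests

# GET: parameters live in the URL, so the request can be cached,
# bookmarked, and kept in history -- and URLs have length limits.
requests.get("https://example.com/articles", params={"page": 2})

# POST: data travels in the request body, which is not bookmarkable,
# is not cached by default, and has no practical length restriction.
requests.post("https://example.com/articles", json={"title": "Hello"})
```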
REST enables intermediate processing by constraining messages to be self-descriptive: interaction is stateless between requests, standard methods and media types are used to indicate semantics and exchange information, and responses explicitly indicate cacheability. -- Fielding, 2000
The important idea here is that the self-descriptive messages of REST communicate semantics in such a way that general-purpose components can understand them and do something useful with that information.
For instance, if we know that the semantics of a message are idempotent, we can automatically resend that message when a response is lost.
We know that if the semantics of a message are safe, then we can send that request purely speculatively. This not only allows optimizations like pre-fetching content, but it also means that we can safely crawl the web, looking for interesting documents.
Because the message semantics are visible in readily standardizable forms, we can drop in general purpose, off the shelf components, and everything just works.
SOAP is an example of message passing that did not have standardized semantics. Messages were XML documents that could only be interpreted correctly by components familiar with that specific schema (usually via the WSDL).
On the web, that meant that, regardless of meaning, every request was sent via POST. What that does, in effect, is reduce HTTP from an application protocol to a transport protocol. And a lot of the power of the web goes away when you do that.

What to use: PATCH or POST?

I had quite a long debate with my colleague about the proper HTTP verb to use for one of our operations that changes the STATE of a resource.
Suppose we have a resource called WakeUpLan that tries to send an event to a system connected to a network. It is a kind of generic state machine:
```
{
  id: 1,
  retries: {
    idle: 5,      // after 5 retries it goes to the FAILED state
    wakeup: 0,
    process: 0,
    shutdown: 0
  },
  status: 'FAILED',
  // other attributes
}
```
IDLE --> WAKEUP --> PROCESS --> SHUTDOWN (each state can fall through to [FAILED])
Every state has a retry mechanism; e.g. in the IDLE case it tries x times to transition to WAKEUP, and after x retries it gives up and goes to the FAILED state.
Any FAILED resource can be manually restarted or retried one more time from some interface.
So, we are unsure which HTTP verb best suits this case.
In my opinion, it is just a change of status and a reset of the retry count to 0, so that our retry mechanism can pick it up and try again in the next iteration.
So it should be a pure PATCH request:
PATCH retry/{id}
{state: 'IDLE'}
But my colleague argues it should be a POST request, as this is a pure action and actions should be treated as POST.
I am not convinced, because we are not creating any new resource but just updating an existing resource that our REST server already knows about.
I would like to know, and to be corrected, if I am wrong here.
Any suggestions/advices are welcome.
Thanks in advance.
Any suggestions/advices are welcome.
The reference implementation of the REST architectural style is the world wide web. The world wide web is built on a foundation of URI, HTTP, and HTML -- and HTML form processing is limited to GET and POST.
So POST must be an acceptable answer. After all, the web was catastrophically successful.
PATCH, like PUT, allows you to communicate changes to a representation of a resource. Its semantics are more specific than POST's, which allows clients and intermediaries to take better advantage of them. So if all you are doing is creating a message that describes local edits to the representation of the resource, then PATCH is a fine choice.
Don't overlook the possibilities of PUT -- if the size of the complete representation of the resource is of roughly the same order as the representation of your PATCH document, then using PUT may be a better choice, because of the idempotent semantics.
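A hedged sketch of both options for the resource above (endpoint assumed; the PATCH body is shown as an RFC 7386 merge-patch document):

```python
import requests

url = "https://api.example.com/wakeuplan/1"  # hypothetical endpoint

# PATCH: communicate only the local edits to the representation.
requests.patch(
    url,
    json={"status": "IDLE", "retries": {"idle": 0}},
    headers={"Content-Type": "application/merge-patch+json"},
)

# PUT: replace the whole representation. Roughly the same size here,
# but idempotent, so it can be resent blindly after a network failure.
requests.put(url, json={
    "id": 1,
    "retries": {"idle": 0, "wakeup": 0, "process": 0, "shutdown": 0},
    "status": "IDLE",
})
```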
I am not convinced because we are not creating any new resource but just updating an existing resource that our REST server already knows about.
POST is much more general than "create a new resource". Historically, there has been a lot of confusion around this point (the language in the early HTTP specifications didn't help).
HTTP Basics
PATCH
What is PATCH, actually? PATCH is an HTTP method, defined in RFC 5789, that resembles patching code in software engineering: a set of changes is applied to one or more sources in order to transform the target into a desired outcome. The client calculates a set of instructions that the target system has to apply in full in order to produce the requested outcome. These instructions are usually called a "patch"; in the words of RFC 5789, such a set of instructions is a "patch document".
RFC 5789 does not define the representation in which such a patch document needs to be transferred from one system to the other. For JSON-based representations, application/json-patch+json (RFC 6902) can be used. It defines instructions such as add, replace, move, and copy, whose names make them more or less self-explanatory, and the RFC describes each of the available instructions in detail.
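For example (field paths borrowed from the question's resource), resetting the failed state as a json-patch document might look like this:

```python
import json

# RFC 6902 json-patch: an ordered list of instructions, to be applied
# atomically, sent with Content-Type: application/json-patch+json.
patch = [
    {"op": "replace", "path": "/status", "value": "IDLE"},
    {"op": "replace", "path": "/retries/idle", "value": 0},
]
body = json.dumps(patch)
```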
A further JSON-based, but totally different, take on how to tell a system to change a resource (or document) is captured in application/merge-patch+json (RFC 7386). In contrast to json-patch, this media type defines a set of default rules for merging a received JSON representation into the target resource. A single JSON representation of the modified state is sent to the server, containing only the fields and objects that should change. The default rules say that fields to be removed from the target resource are set to null in the request, fields that should change carry their new value, and fields that remain unchanged are simply left out.
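The same change expressed as a merge-patch document is just a partial copy of the resource (again assuming the question's field names):

```python
import json

# RFC 7386 merge-patch: send only what changes. Omitted fields keep
# their values; a field set to null would be removed from the resource.
merge_patch = {"status": "IDLE", "retries": {"idle": 0}}
body = json.dumps(merge_patch)  # Content-Type: application/merge-patch+json
```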
If you read through RFC 5789, though, you will find that merge-patch comes across as more of a pragmatic hack. Compared to json-patch, a merge-patch representation offers no control over the sequence in which the changes are applied (which is not always needed), and it cannot change multiple, different resources at once.
PATCH itself is not idempotent. For a json-patch document it is pretty clear that applying the same instructions multiple times may lead to different results, e.g. if you repeatedly remove the first element of an array. A merge-patch document behaves more like the "partial PUT" that so many developers perform out of pragmatism, though even that does not guarantee idempotency. To avoid applying the same patch to the same resource unintentionally multiple times, e.g. due to network errors while transmitting the patch document, it is recommended to use PATCH together with conditional requests (RFC 7232). This guarantees that the changes are only applied to a specific version of the resource; if that resource has changed in the meantime, whether through a previous request or an external source, the request is declined to prevent data loss. This is basically optimistic locking.
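A sketch of that recommendation (hypothetical endpoint; ETag and If-Match are the standard RFC 7232 validators):

```python
import requests

url = "https://api.example.com/wakeuplan/1"  # hypothetical endpoint

# Read the current state and remember which version we saw
# (assumes the server emits an ETag validator).
current = requests.get(url)
etag = current.headers["ETag"]

# Apply the patch only if the resource is still at that version.
resp = requests.patch(
    url,
    json={"status": "IDLE"},
    headers={
        "Content-Type": "application/merge-patch+json",
        "If-Match": etag,
    },
)
if resp.status_code == 412:
    # Precondition Failed: someone changed the resource first.
    # Re-read, recompute the patch, and try again.
    pass
```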
A requirement that all patch documents have to fulfill is, that they need to be applied atomically. Either all the changes are applied or none at all. This puts some transaction burden onto the service provider.
POST
The POST method is defined in RFC 7231 as:
requests that the target resource process the representation enclosed in the request according to the resource's own specific semantics.
This is basically a get-out-of-jail-free card that lets you do anything you want or have to do here. You are free to define the syntax and structure accepted at a certain endpoint. Most so-called "REST APIs" treat POST as the C in CRUD, which it can be used for, but that is an oversimplification of what it can actually do for you. HTML basically only supports POST and GET operations, so POST requests are used for sending all kinds of data to the server: to kick off backing processes, to create new resources such as blog posts, Q&As, and videos, but also to delete or update things.
The rule of thumb here is: if a new resource is created as the outcome of triggering a POST request on a certain URI, the response code should be 201 Created, with an HTTP response header, Location, whose value is a URI pointing to the newly created resource. In any other case, POST does not map to the C (create) of the CRUD stereotype.
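From the client's side, that rule of thumb looks roughly like this (hypothetical collection endpoint):

```python
import requests

# Create something by POSTing to a collection (hypothetical endpoint).
resp = requests.post("https://api.example.com/posts", json={"title": "Hi"})

if resp.status_code == 201:
    # 201 Created: the Location header points at the new resource.
    new_url = resp.headers["Location"]
    created = requests.get(new_url)
```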
REST-related
REST isn't a protocol but an architectural style. As Robert C. Martin (Uncle Bob) stated, architecture is about intent, and REST's intent is to decouple clients from servers, which allows the latter to evolve freely by minimizing interoperability problems caused by server-side changes.
These are very strong benefits if your system should still work in decades to come. However, these benefits are unfortunately not obtained easily. As outlined in Fielding's dissertation, to benefit from REST the mentioned constraints need to be followed strictly; otherwise coupling remains, increasing the likelihood of breaking clients due to changes. Fielding later ranted about people who either did not read or did not understand his dissertation, and clarified in a nutshell what a REST API has to do.
This rant can be summarized into the following points:
The API should adhere to, and not violate, the underlying protocol. Although REST is used via HTTP most of the time, it is not restricted to this protocol.
Strong focus on resources and their presentation via media-types.
Clients should not have initial knowledge of, or assumptions about, the available resources or their returned state ("typed" resources) in an API, but should learn them on the fly via issued requests and responses that teach clients what they can do next. This gives the server freedom over its namespace and lets it move things around as needed without negatively impacting clients.
Based on this, REST is about using well-defined standards and adhering to the semantics of the protocols used as transport. Through HATEOAS and stateless communication, the concepts that proved the Web to be scalable and evolution-friendly, the same interaction model humans use on the Web is made available to applications in a REST architecture.
Common media types provide the affordance for what a system might be able to do with the data in a payload, while content-type negotiation guarantees that both sender and receiver are able to process and understand the payload correctly. The affordance may differ from media type to media type: a payload received as image/png might be rendered and shown to the user, while an application/vnd.acme-form+json payload might define a form through which the server teaches the client which elements a request supports. The client can then fill in data and issue the request without having to know in advance the method to use or the target URI to send the data to, as these are already given by the server. This not only removes the need for out-of-band (external) API documentation, but also the need for a client to parse or interpret URIs, as they are all provided by the server, accompanied by link relations. Those relations should either be standardized by IANA, follow common conventions such as existing rel values from microformats or ontologies like Dublin Core, or represent extension types as defined in RFC 5988 (Web Linking).
Question-related
With the introduction done, I hope that for a question like
But my colleague argues it should be a POST request, as this is a pure action and actions should be treated as POST. I am not convinced because we are not creating any new resource but just updating an existing resource that our REST server already knows about
it is clear that there is no definite yes-or-no answer; it is more of an "it depends".
There are a couple of questions that could be asked, e.g.:
How many (different) clients will use the service? Are they all under your control? If so, you don't need REST, but you can still aim for it.
How is the client taught or instructed to perform the update? Will you provide external API documentation? Will you support a media type that supports forms, such as HTML, hal-forms, halo+json, Ion, or Hydra?
In general, if you have multiple clients, especially ones that are not under your control, you might not know which capabilities they support. Here content-type negotiation is an important part. If a client supports application/json-patch+json, it might also be able to calculate a patch document containing the instructions to apply to the target resource. It is also very likely to support PATCH, as RFC 6902 mentions that method. In such a case it would make sense to provide a PATCH endpoint the client can send the request to.
If the client supports application/merge-patch+json, one may assume that it supports PATCH as well, as that media type is primarily intended for use with the HTTP PATCH method, according to RFC 7386. Here the update is rather trivial from the client's perspective, as the updated document is sent as-is to the server.
In any other case, it is less clear in what representation format the changes will be transmitted to the server, and POST is probably the way to go. From a REST stance, such an update resembles editing data in a Web form in your browser: the current content is loaded into each form element, the client modifies those elements to its liking, and then submits the changes back to the server, probably in an application/x-www-form-urlencoded (or similar) structure. In that case, though, PUT would probably be more appropriate, since you would transmit the whole updated state of the resource back to the service, performing a full rather than a partial update of the target resource. The actual representation the form submits is defined by the form's media type. Note that none of this means you can't also process json-patch or merge-patch documents via POST.
The rule of thumb here is: the more media-type formats and HTTP methods you support, the more likely it is that different clients will actually be able to perform their tasks.
I would say you're in the right, since you are not creating any new resource.
Note the part that says to use PUT when you modify the entire existing resource, and PATCH when you are modifying one component of an existing resource.
More here:
https://restfulapi.net/rest-put-vs-post/

Why do we use different verbs in Restful APIs?

Aside from the obvious answers of "best practice" and "creates a standard", is there a technical argument for using GET, POST, PUT, DELETE, etc. instead of doing everything with POST?
In addition to creating a Uniform Interface, the different verbs have different properties:
safe (GET, HEAD, OPTIONS, TRACE)
The purpose of distinguishing between safe and unsafe methods is to allow automated retrieval processes (spiders) and cache performance optimization (pre-fetching) to work without fear of causing harm.
idempotent (PUT, DELETE, GET, HEAD, OPTIONS, TRACE)
Idempotent methods are distinguished because the request can be repeated automatically if a communication failure occurs before the client is able to read the server's response.
cacheable (GET, HEAD, POST)
Request methods can be defined as "cacheable" to indicate that responses to them are allowed to be stored for future reuse;
All the intermediate servers that make up "the internet" rely on the uniform interface and those properties to work correctly. That includes Content Delivery Networks, proxies, and caches. Using verbs correctly helps the internet work better.
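As a rough summary of those properties (per RFC 7231/7234; note that POST responses are cacheable only when they carry explicit freshness information):

```python
# Method properties a generic intermediary can rely on.
PROPERTIES = {
    "GET":     {"safe": True,  "idempotent": True,  "cacheable": True},
    "HEAD":    {"safe": True,  "idempotent": True,  "cacheable": True},
    "OPTIONS": {"safe": True,  "idempotent": True,  "cacheable": False},
    "TRACE":   {"safe": True,  "idempotent": True,  "cacheable": False},
    "PUT":     {"safe": False, "idempotent": True,  "cacheable": False},
    "DELETE":  {"safe": False, "idempotent": True,  "cacheable": False},
    "POST":    {"safe": False, "idempotent": False, "cacheable": True},
}
```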
is there a technical argument for using GET, POST, PUT, DELETE, etc. instead of doing everything with POST?
Yes.
In the case of HTTP, one of the important reasons to choose GET over POST is caching. RFC 7234 defines a bunch of caching semantics that can be written into messages (specifically into the headers).
One of the interesting problems of computer science is cache invalidation; in HTTP, there is a specific semantic for invalidation that is related to methods:
A cache MUST invalidate the effective Request URI (Section 5.5 of [RFC7230]) as well as the URI(s) in the Location and Content-Location response header fields (if present) when a non-error status code is received in response to an unsafe request method.
In other words, part of the distinction between POST and GET (also HEAD) is that generic clients know to invalidate cached representations of resources. So I can use a third-party headless browser to talk to your API, and caching "just works".
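A toy sketch of that invalidation rule (simplified; a real cache also honors the freshness headers):

```python
# Toy cache: stores GET responses, and drops them again whenever a
# non-error response to an unsafe method passes through (RFC 7234).
SAFE = {"GET", "HEAD", "OPTIONS", "TRACE"}

class ToyCache:
    def __init__(self):
        self.store = {}  # request URI -> cached response body

    def on_response(self, method, uri, status, body, location=None):
        if method in SAFE:
            if method == "GET" and 200 <= status < 300:
                self.store[uri] = body  # reusable for future GETs
        elif status < 400:
            # An unsafe method succeeded: cached copies may be stale.
            self.store.pop(uri, None)
            if location:
                self.store.pop(location, None)  # Location header, if any
        return body
```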
The lines between POST and the other unsafe methods are less clear, but present. The basic outline is that POST can mean anything, but PUT and DELETE both specifically describe idempotent operations. So once again, I don't need a custom client to do the right thing when messages are being lost on the unreliable network -- I can use a domain-agnostic HTTP client to PUT and DELETE, and rebroadcast of lost messages can again "just work".
The power of the distinctions between methods is that we can create and re-use generic intermediate components (headless browsers, caches, reverse proxies) that know HTTP semantics, and (because we've described our domain protocols using those semantics) they all "just work" -- the generic components are able to do useful work even though they have no idea what is really going on.
Those verbs are part of the HTTP specification, which is a primary reason we use them when we do REST over HTTP; and since it was over HTTP that we started doing REST in the early days, they became part of the best practice.

What can go wrong if we do NOT follow RESTful best practices?

TL;DR: scroll down to the last paragraph.
There is a lot of talk about best practices when defining RESTful APIs: what HTTP methods to support, which HTTP method to use in each case, which HTTP status code to return, when to pass parameters in the query string vs. in the path vs. in the content body vs. in the headers, how to do versioning, result set limiting, pagination, etc.
If you are already determined to make use of best practices, there are lots of questions and answers out there about what is the best practice for doing any given thing. Unfortunately, there appears to be no question (nor answer) as to why use best practices in the first place.
Most of the best practice guidelines direct developers to follow the principle of least surprise, which, under normal circumstances, would be a good enough reason to follow them. Unfortunately, REST-over-HTTP is a capricious standard, the best practices of which are impossible to implement without becoming intimately involved with it, and the drawback of intimate involvement is that you tend to end up with your application being very tightly bound to a particular transport mechanism. So, some people (like me) are debating whether the benefit of "least surprise" justifies the drawback of littering the application with REST-over-HTTP concerns.
A different approach examined as an alternative to best practices suggests that our involvement with HTTP should be limited to the bare minimum necessary in order to get an application-defined payload from point A to point B. According to this approach, you only use a single REST entry point URL in your entire application, you never use any HTTP method other than HTTP POST, never return any HTTP status code other than HTTP 200 OK, and never pass any parameter in any way other than within the application-specific payload of the request. The request will either fail to be delivered, in which case it is the responsibility of the web server to return an "HTTP 404 Not Found" to the client, or it will be successfully delivered, in which case the delivery of the request was "HTTP 200 OK" as far as the transport protocol is concerned, and anything else that might go wrong from that point on is exclusively an application concern, and none of the transport protocol's business. Obviously, this approach is kind of like saying "let me show you where to stick your best practices".
Now, there are other voices that say that things are not that simple, and that if you do not follow the RESTful best practices, things will break.
The story goes that, for example, in the event of unauthorized access you should return an actual "HTTP 401 Unauthorized" (instead of a successful response containing a json-serialized UnauthorizedException), because upon receiving the 401 the browser will prompt the user for credentials. Of course this does not really hold any water, because REST requests are not issued by browsers being used by human users.
Another, more sophisticated way the story goes is that usually, between the client and the server exist proxies, and these proxies inspect HTTP requests and responses, and try to make sense out of them, so as to handle different requests differently. For example, they say, somewhere between the client and the server there may be a caching proxy, which may treat all requests to the exact same URL as identical and therefore cacheable. So, path parameters are necessary to differentiate between different resources, otherwise the caching proxy might only ever forward a request to the server once, and return cached responses to all clients thereafter. Furthermore, this caching proxy may need to know that a certain request-response exchange resulted in a failure due to a particular error such as "Permission Denied", so as to again not cache the response, otherwise a request resulting in a temporary error may be answered with a cached error response forever.
So, my questions are:
Besides "familiarity" and "least surprise", what other good reasons are there for following REST best practices? Are these concerns about proxies real? Are caching proxies really so dumb as to cache REST responses? Is it hard to configure the proxies to behave in less dumb ways? Are there drawbacks in configuring the proxies to behave in less dumb ways?
It's worth considering that what you're suggesting is the way that HTTP APIs used to be designed for a good 15 years or so. API designers are tending to move away from that approach these days. They really do have their reasons.
Some points to consider if you want to avoid using ReST over HTTP:
ReST over HTTP is an efficient use of the HTTP/S transport mechanism. Avoiding the ReST paradigm runs the risk of every request / response being wrapped in verbose envelopes. SOAP is an example of this.
ReST encourages client and server decoupling by putting application semantics into standard mechanisms - HTTP and XML/JSON (or other data formats). These protocols and standards are well supported by standard libraries and have been built up over years of experience. Sure, you can create your own 'unauthorized' response body with a 200 status code, but ReST frameworks just make it unnecessary, so why bother? (See the sketch after this list.)
ReST is a design approach which encourages a view of your distributed system that focuses on data rather than functionality, and this has proven a useful mechanism for building distributed systems. Avoiding ReST runs the risk of focusing on very RPC-like mechanisms, which have some risks of their own:
they can become very fine-grained and 'chatty'
which can be an inefficient use of network bandwidth
which can tightly couple client and server, through introducing statefulness and temporal coupling between requests.
and can be difficult to scale horizontally
Note: there are times when an RPC approach is actually a better way of breaking down a distributed system than a resource-oriented approach, but they tend to be the exceptions rather than the rule.
existing tools for developers make debugging / investigation of ReSTful APIs easier. It's easy to use a browser to do a simple GET, for example. And tools such as Postman or RestClient already exist for more complex ReST-style queries. In extreme situations tcpdump is very useful, as are browser debugging tools such as Firebug. If every API call has application-layer semantics built on top of HTTP (e.g. special response types for particular error situations) then you immediately lose some value from this tooling. Building SOAP envelopes in Postman is a pain. As is reading SOAP response envelopes.
network infrastructure around caching really can be as dumb as you're asking. It's possible to get around this but you really do have to think about it and it will inevitably involve increased network traffic in some situations where it's unnecessary. And caching responses for repeated queries is one way in which APIs scale out, so you'll likely need to 'solve' the problem yourself (i.e. reinvent the wheel) of how to cache repeated queries.
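To illustrate the earlier point about status codes, here is a hedged contrast between tunneling an error inside a 200 and using the standard code (both responses are hypothetical):

```python
# Tunneled error: only clients that know this payload schema can tell
# that the request actually failed; every generic cache sees a success.
tunneled = {
    "status": 200,
    "body": {"error": "UnauthorizedException", "message": "bad credentials"},
}

# Standard error: any generic client, proxy, or cache understands it
# without knowing anything about the application.
standard = {
    "status": 401,
    "headers": {"WWW-Authenticate": 'Basic realm="api"'},
    "body": {"message": "bad credentials"},
}
```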
Having said all that, if you want to look into a pure message-passing design for your distributed system rather than a ReSTful one, why consider HTTP at all? Why not simply use some message-oriented middleware (e.g. RabbitMQ) to build your application, possibly with some sort of HTTP bridge somewhere for Internet-based clients? Using HTTP as a pure transport mechanism involving a simple 'message accepted / not accepted' semantics seems overkill.
REST is intended for long-lived network-based applications that span multiple organizations. If you don’t see a need for the constraints, then don’t use them. -- Roy T Fielding
Unfortunately, there appears to be no question (nor answer) as to why use best practices in the first place.
When in doubt, go back to the source
Fielding's dissertation really does quite a good job at explaining how the REST architectural constraints ensure that you don't destroy the properties those constraints are designed to protect.
Keep in mind - before the web (which is the reference application for REST), "web scale" wasn't a thing; the notion of a generic client (the browser) that could discover and consume thousands of customized applications (provided by web servers) had not previously been realized.
According to this approach, you only use a single REST entry point URL in your entire application, you never use any HTTP method other than HTTP POST, never return any HTTP status code other than HTTP 200 OK, and never pass any parameter in any way other than within the application-specific payload of the request.
Yup - that's a thing; it's called RPC. You are effectively taking the web and stripping it down to a bare message-transport application that just happens to tunnel through port 80.
In doing so, you have stripped away the uniform interface -- you've lost the ability to use commodity parts in your deployment, because nobody can participate in the conversation unless they share the same interpretation of the message data.
Note: that doesn't at all imply that RPC is "broken"; architecture is about tradeoffs. The RPC approach gives up some of the value derived from the properties guarded by REST, but that doesn't mean it doesn't pick up value somewhere else. Horses for courses.
Besides "familiarity" and "least surprise", what other good reasons are there for following REST best practices?
Cheap scaling of reads - as your offering becomes more popular, you can service more clients by installing a farm of commodity reverse-proxies that will serve cached representations where available, and only put load on the server when no fresh representation is available.
Prefetching - if you are adhering to the safety provisions of the interface, agents (and intermediaries) know that they can download representations at their own discretion without concern that the operators will be liable for loss of capital. AKA: your resources can be crawled (and cached).
Similarly, use of idempotent methods (where appropriate) communicates to agents (and intermediaries) that retrying the send of an unacknowledged message causes no harm (for instance, in the event of a network outage).
Independent innovation of clients and servers, especially cross organizations. Mosaic is a museum piece, Netscape vanished long ago, but the web is still going strong.
Of course this does not really hold any water, because REST requests are not issued by browsers being used by human users.
Of course they are -- where do you think you are reading this answer?
So far, REST works really well at exposing capabilities to human agents; which is to say that the server side is so ubiquitous at this point that we hardly think about it any more. The notion that you -- the human operator -- can use the same application to order pizza, run diagnostics on your house, and remote start your car is as normal as air.
But you are absolutely right that replacing the human still seems a long ways off; there are various standards and media types for communicating semantic content of data -- the automated client can look at markup, identify a phone number element, and provide a customized array of menu options from it -- but building into agents the sorts of fuzzy intelligence needed to align offered capabilities with goals, or to recover from error conditions, seems to be a ways off.

process in Uniform Interface vs HTTP verbs

Consider the application of REST principles on the web. I am doing a case study on REST and I have some doubts, mostly about the uniform interface.
Suppose the uniform interface had only one single PROCESS verb instead of the HTTP verbs (e.g. GET, POST, PUT, DELETE, HEAD, ...). Are there any potential consequences of this kind of PROCESS compared with the conventional HTTP verbs?
Are there any potential consequences of this kind of PROCESS compared with the conventional HTTP verbs?
There are a few.
One consideration is safety. In RFC 7231, safe is defined this way:
Request methods are considered "safe" if their defined semantics are essentially read-only; i.e., the client does not request, and does not expect, any state change on the origin server as a result of applying a safe method to a target resource.
So if PROCESS were a safe verb, like GET, you would have an analog of the read-only web. The HTTP spec also defines HEAD and OPTIONS (which are optimized reads) and TRACE (a debugging tool); that HTML has been an extremely successful hypermedia format without including support for these methods suggests they aren't particularly critical in themselves.
A safe specification of PROCESS preserves all of the scaling benefits of REST. But its utility is limited: clients can consume content, but they can't produce any.
On the other hand, if PROCESS isn't safe, then a bunch of use cases can no longer be supported. Prefetching content is no longer an option, because the components can no longer assume that invoking PROCESS has no side effects on the server. Crawling is no longer an option, for the same reason.
It's probably worth noticing the mechanics of methods on the web -- it's the hypermedia format that describes which methods are appropriate for which links. So you could potentially work around some of the issues by defining the restrictions on the method within the hypermedia format itself. It's a different way of communicating the same information to any component that can consume the media type in question.
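As a sketch of that idea (a hypothetical link format, loosely modeled on affordance-style media types such as Siren or hal-forms), the representation itself could carry the method restrictions:

```python
# Hypothetical hypermedia representation: each link tells the client
# which method applies and what its properties are, so even a single
# PROCESS verb could be annotated as safe or idempotent per link.
representation = {
    "status": "FAILED",
    "_links": {
        "self":  {"href": "/wakeuplan/1", "method": "PROCESS",
                  "safe": True},
        "retry": {"href": "/wakeuplan/1", "method": "PROCESS",
                  "safe": False, "idempotent": True},
    },
}
```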
But there are at least two additional considerations. First, that the information in the links can only be understood by components that know that media type. On the web, most components are media type agnostic -- caching html looks exactly like caching json looks like caching jpeg.
Second, that the information in the links is only available on the outbound side. REST means stateless - all of the information needed to process the request is contained within the request. That implies that the request must include within it all of the information needed to handle a communication failure.
For instance, the HTTP spec defines idempotent:
A request method is considered "idempotent" if the intended effect on the server of multiple identical requests with that method is the same as the effect for a single such request.
This property is important to intermediary components when they forward a request along an unreliable network and receive no response from the server. We've got no way to know whether a message is lost or just slow, and no way to distinguish the request message being lost from the response message being lost.
If the request includes the information that it is idempotent, then the intermediaries know that they can resend the message to the server, rather than reporting the error to the client.
Contrast this with the correct handling of POST in HTTP; since a POST request carries no idempotent marker, components do not know that resending the message is going to have the desired effect (which is why web browsers typically display a warning if you try to POST a form twice).
Locking yourself into a single method forces a choice: do you want to support error recovery by intermediaries, or do you want the flexibility to support non-idempotent writes?