Prevent Modification of PayPal Orders from JavaScript

I'm starting to integrate PayPal checkouts with a server workflow.
My basic need is to create an order on the server and ensure that the client can not modify it in any way.
Because of this requirement, I have already ruled out using the "simple" JavaScript-only solution, and I'm instead going for a server integration, calling my own URL endpoints for creating and capturing orders.
However, I have found that the client can simply abuse the actions.order.patch() method to modify almost every aspect of the order, including the amount and the custom_id that I'm attaching to the purchase_item.
Basically, it looks like I have absolutely no guarantee about the order contents, even if I created it on the server. Is this correct?
In that case, it means I have to check each order's contents against my application's orders database. That's possible, but I was hoping not to have to do that.
Any clues? How do you deal with this issue?
Thanks!

If you are particularly concerned about this scenario of patching down the total or other details before capture, the only way to ensure it has not changed is to do a server-side ‘get details’ call before the capture and at least validate the total amount value, as well as any other field you’re concerned about.
Otherwise, the usual general safety solution in ecommerce (for this as well as other potential issues that might crop up) is to simply capture and validate the total in the capture response. If the capture has a total you don't expect, issue an immediate refund or flag the occurrence for review before fulfilling anything.
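For the first approach, here is a minimal sketch of what that server-side check might look like, assuming the PayPal Orders v2 REST API; getAccessToken() is a hypothetical OAuth2 helper of your own, and the error handling is illustrative only.
// Sketch only: verify an order server-side before capturing it.
// Assumes the PayPal Orders v2 REST API; getAccessToken() is your own OAuth2 helper.
declare function getAccessToken(): Promise<string>;

const PAYPAL_API = "https://api-m.paypal.com"; // api-m.sandbox.paypal.com while testing

async function verifyAndCapture(orderId: string, expectedTotal: string, expectedCurrency: string) {
  const token = await getAccessToken();

  // 1. Fetch the order as PayPal currently sees it (after any client-side patching).
  const orderRes = await fetch(`${PAYPAL_API}/v2/checkout/orders/${orderId}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  const order = await orderRes.json();

  // 2. Validate anything the client could have patched: amount, currency, custom_id, ...
  const unit = order.purchase_units?.[0];
  if (unit?.amount?.value !== expectedTotal || unit?.amount?.currency_code !== expectedCurrency) {
    throw new Error(`Order ${orderId} no longer matches what the server created`);
  }

  // 3. Only now perform the capture.
  const captureRes = await fetch(`${PAYPAL_API}/v2/checkout/orders/${orderId}/capture`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
  });
  return captureRes.json();
}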

Related

Using GET verb to update in rest api?

I know the use of HTTP verbs is based on a standard specification. But my question is: if I use "GET" for update operations and write code logic to update, does it create issues in any scenario? Apart from the standard, what else could be the reason to use these verbs only for their specific purposes?
my question is: if I use "GET" for update operations and write code logic to update, does it create issues in any scenario?
Yes.
A simple example - suppose the network between the client and the server is unreliable; specifically, for a time, HTTP responses are being lost. A general purpose component (like a web proxy) might time out, and then, noticing that the method token of the request is GET, resend the request a second/third/fourth time, with your server performing its update on every GET request.
Let us further assume that these multiple update operations lead to an undesirable outcome; where do we properly affix blame?
Second example: you send someone a copy of the link to the update operation, so that they can send you a request at the appropriate time. But suppose you send that link to them in an email, and the email client recognizes the uri and (as a performance optimization) pre-fetches the link, triggering your update operation too early. Where do we properly affix the blame?
HTTP does not attempt to require the results of a GET to be safe. What it does is require that the semantics of the operation be safe, and therefore it is a fault of the implementation, not the interface or the user of that interface, if anything happens as a result that causes loss of property -- Fielding, 2002
In these, and other examples, blame is correctly affixed to your server, because GET has a standardized meaning which includes the constraint that the semantics of the request are safe.
That's not to say that you can't have side effects when handling a GET request; "hit counters" are almost as old as the web itself. You have a lot of freedom in your implementation; so long as you respect the uniform interface, there won't be too much trouble.
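To make the distinction concrete, here is a minimal sketch (assuming Express; loadOrder and cancelOrder are hypothetical application functions) of keeping the read on GET and putting the state change behind POST, so a replaying proxy or a prefetching mail client can only ever trigger the safe request.
// Sketch, assuming Express; loadOrder/cancelOrder are hypothetical application functions.
import express from "express";
declare function loadOrder(id: string): Promise<unknown>;
declare function cancelOrder(id: string): Promise<void>;

const app = express();

// Safe: a retried or prefetched GET only reads.
app.get("/orders/:id", async (req, res) => {
  res.json(await loadOrder(req.params.id));
});

// The update lives behind POST, so general-purpose components won't replay it as a matter of course.
app.post("/orders/:id/cancel", async (req, res) => {
  await cancelOrder(req.params.id);
  res.status(204).end();
});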
Experience report: one of our internal tools uses GET requests to trigger scheduling; in our carefully controlled context (which is not web scale), we get away with it, and have for a very long time.
To borrow your language, there are certainly scenarios that would give us problems; but given our controls we manage to avoid them.
I wouldn't like our chances, though, if requests started coming in from outside of our carefully controlled context.
I think it's a decent question. You're asking a hypothetical: is there any value to doing the right thing, other than the fact that we agree to use GET for fetching? E.g., is there value beyond it being 'semantically nice'? A similar question in HTML might be: "Is it OK to use a <div> with an onclick instead of a <button>?" (The answer is no.)
There certainly is. Clients, servers and intermediates all change their behavior depending on what method is used. Even if your server can process GET for updates, and you build a client that uses this, your browser might still get confused.
If you are interested in this subject, don't ask on a forum; read the spec. The HTTP specification tells you what clients, servers and proxies should do when they encounter certain methods, statuses and headers.
Start at RFC 7231.

Restful business logic on property update

I'm building a REST API and I'm trying to keep it as RESTful as possible, but some things are still not quite clear to me. I've seen a lot of topics about similar questions, but they're all too centered on the "simple" problem of updating data; my issue is more about the business logic around that.
My main issue is with business logic triggered by a partial update of a model. I see a lot of different opinions online about PATCH methods, creating new sub-resources, or adding actions, but it often seems counterproductive with the REST approach of keeping URIs simple and structured.
I have some records that need to be processed (refused, validated, partially validated, etc.), and each change triggers additional actions:
If it's refused, an email with the reason should be sent.
If it's partially validated, a link to fill in the missing data is sent.
If it's validated, some other resources must be created.
There are a few other changes that can be made to the status, but this is enough for the example.
What would be a RESTful way to do that?
My first idea would be to create actions :
POST /record/:id/refuse
POST /record/:id/validate ..etc
It seems RESTful to me, but too complicated; moreover, this approach means having multiple routes performing essentially the same thing: updating one field in the record object.
I also see the possibility of a PATCH method like :
PATCH /record/:id, in which I check whether the field to update is status, and look at the new value to know which action to perform.
But I feel it can start to get too complex when I need to perform similar actions for other properties of the record.
My last option, and I think maybe the best, though I'm not sure if it's RESTful, would be to use a sub-resource status and to use PUT to update it:
PUT /record/:id/status, with a switch on the new value.
No matter what the previous value was, switching to accepted will always trigger the creation, switching to refused will always trigger the email ...etc
Are those ways of achieving that RESTful, and which one makes more sense? Are there other alternatives I didn't think about?
Thanks
What would be a RESTful way to do that?
In HTTP, your "uniform interface" is that of a document store. Your REST API is a facade that takes messages with remote authoring semantics (PUT/POST/PATCH), and your implementation produces useful work as a side effect of its handling of those messages.
See Jim Webber 2011.
I have some records that need to be processed (refused, validated, partially validated, etc.), and each change triggers additional actions.
So think about how we might do this on the web. We GET some resource, and what is returned is an HTML representation of the information of the record and a bunch of forms that describe actions we can take. So there's a refused form, and a validated form, and so on. The user chooses the correct form in the browser, fills in any supplementary information, and submits the form. The browser, using the HTML form processing rules, converts the form information into an HTTP request.
For unsafe operations, the form is configured to use POST, and the browsers therefore know that the form data should be part of the message-body of the request.
The target-uri of the request is just whatever was used as the form action -- which is to say, the representation of the form includes in it the information that describes where the form should be submitted.
As far as the browser and the user are concerned, the target-uri can be anything. So you could have separate resources to handle validate messages and refused messages and so on.
Caching is an important idea, both in REST and in HTTP; HTTP has specific rules baked into it for cache invalidation. Therefore, it is often the case that you will want to use a target-uri that identifies the document you want the client to reload if the command is successful.
So it might go something like this: we GET /record/123, and that gives us a bunch of information, and also some forms describing how we can change the record. So fill one out, submit it successfully, and now we expect the forms to be gone - or a new set of forms to be available. Therefore, it's the record document itself that we would expect to be reloading, and the target-uri of the forms should be /record/123.
(So the API implementation would be responsible for looking at the HTTP request, and figuring out the meaning of the message. They might all go to a single /record/:id POST handler, and that code looks through the message-body to figure out which internal function should do the work).
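A minimal sketch of that single handler, assuming Express; the side-effect functions (refuseRecord, validateRecord, requestMissingData) are hypothetical stand-ins for your domain logic.
// Sketch, assuming Express; refuseRecord/validateRecord/requestMissingData are hypothetical.
import express from "express";
declare function refuseRecord(id: string, reason: string): Promise<void>;   // sends the email
declare function validateRecord(id: string): Promise<void>;                 // creates the other resources
declare function requestMissingData(id: string): Promise<void>;             // sends the completion link

const app = express();
app.use(express.urlencoded({ extended: true }));

app.post("/record/:id", async (req, res) => {
  switch (req.body.action) {
    case "refuse":             await refuseRecord(req.params.id, req.body.reason); break;
    case "validate":           await validateRecord(req.params.id); break;
    case "partially-validate": await requestMissingData(req.params.id); break;
    default: return res.status(422).send("unknown action");
  }
  // Point the client back at the record document so it reloads it (and whatever forms remain).
  res.redirect(303, `/record/${req.params.id}`);
});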
PUT/PATCH are the same sort of idea, except that instead of submitting forms, we send edited representations of the resource itself. We GET /record/123, change the status (for example, to Rejected), and then send a copy of our new representation of the record to the server for processing. It would therefore be the responsibility of the server to examine the differences between its representation of the resource and the new provided copy, and calculate from them any necessary side effects.
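Continuing the same hypothetical Express setup, the PUT variant might diff the stored status against the submitted representation and derive the side effects from the change; loadRecord and saveRecord are again made-up storage helpers.
// Sketch, continuing the setup above; loadRecord/saveRecord are hypothetical storage helpers.
declare function loadRecord(id: string): Promise<{ status: string }>;
declare function saveRecord(id: string, record: unknown): Promise<void>;

app.put("/record/:id", express.json(), async (req, res) => {
  const current = await loadRecord(req.params.id);
  const submitted = req.body;

  // The server, not the client, derives side effects from the difference in representations.
  if (current.status !== submitted.status) {
    if (submitted.status === "refused") await refuseRecord(req.params.id, submitted.reason);
    if (submitted.status === "validated") await validateRecord(req.params.id);
    if (submitted.status === "partially-validated") await requestMissingData(req.params.id);
  }
  await saveRecord(req.params.id, submitted);
  res.json(submitted);
});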
My last option, and I think maybe the best but I'm not sure if it's RESTful, would be to use a sub-resource status and to use PUT to update it
It's fine -- think of any web page you have ever seen where the source has a link to an image, or a link to JavaScript. The result is two resources instead of one, with separate cache entries for each -- which is great when you want fine-grained control over the caching of the resources.
But there's a trade - you also need to fetch more resources. (Server-push mitigates some of this problem).
Making things easier on the server may make things harder on the client - you're really trying to find the design with the best balance.

Server load difference between an http response with a single value or an object containing more data

I want to know if there is a real practical difference between different types of content in an HTTP response. Let me explain myself better.
Say I submit a POST request to a server with a typical resource payload. Let's use a client with client_name, client_email, and client_phone.
Would there be an actual difference if the server returns just an id:
{id:100}
Or if it returns the fully created resource without sensitive data, like so:
{client_name: 'Some Client', client_email: 'email@sample.com', client_phone: '417-235-4622'}
Suppose that the application has a considerable amount of active users creating resources at any given moment. Is there a significant cost in server resources associated with returning data from the server (just an ID or a full object)?
Given the following scenarios when creating a resource:
Submit POST request, receive resource ID, complete all data visualization feedback with data in memory (info in form element).
Submit POST request, receive full object with id, email and phone. Continue with UI things.
If there is a difference in cost, and it's significant, then the ID-only response is the way to go. But I'm thinking that if I have lots of fields to submit, most of them required, and I'm only expecting an ID in return, then that's a guarantee that the resource got created, but it doesn't mean it was created completely. Suppose I submit the data and one of those fields silently fails to be written to the database (email, for example): the server returns the ID, the UI shows the user that the resource was created, the user reloads the page, and the email is gone.
If the server returns the full object I get the feeling that the transaction is more atomic.
So, to wrap up: is there a significant difference in terms of cost to the server?
but it doesn't mean it was created completely. Suppose I submit the data and one of those fields silently fails to be written to the database (email, for example)
Even if the email were to be saved in a different table than the rest of the data, it will still have to be done in a transactional manner (an indivisible operation that must succeed or fail as a complete unit; it can never be only partially complete). This could even mean rolling back changes if a failure is detected at any point during the save operation.
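For illustration, here is a minimal sketch of that all-or-nothing save, assuming node-postgres and a made-up two-table layout; either both rows are written or neither is.
// Sketch, assuming node-postgres; the table and column names are made up for illustration.
import { Pool } from "pg";
const pool = new Pool();

async function createClient(name: string, email: string, phone: string): Promise<number> {
  const db = await pool.connect();
  try {
    await db.query("BEGIN");
    const { rows } = await db.query(
      "INSERT INTO clients (name, phone) VALUES ($1, $2) RETURNING id", [name, phone]);
    await db.query("INSERT INTO client_emails (client_id, email) VALUES ($1, $2)", [rows[0].id, email]);
    await db.query("COMMIT");            // both rows exist, or...
    return rows[0].id;
  } catch (err) {
    await db.query("ROLLBACK");          // ...neither does: no silent partial save
    throw err;
  } finally {
    db.release();
  }
}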
Now back to the main question, REST just says that you should conform to the uniform interface. In other words, it says you should do what POST is supposed to do as per the HTTP spec. Here is the quote from that spec that is relevant,
If a resource has been created on the origin server, the response SHOULD be 201 (Created) and contain an entity which describes the status of the request and refers to the new resource, and a Location header (see section 14.30).
I think it all depends on the use case scenarios. If the client immediately needs to display info regarding the newly created object, I really do not see any advantage to returning only the ID and doing a GET request after, to get the data you could have got with your initial POST.
Anyway, as long as your API is consistent, I think you should choose the pattern that fits your needs best. There isn't a single correct way to build a REST API, imo.
Is there a significant difference in terms of cost to the server?
That's totally unanswerable by us. How powerful is the server? What software are you running on it? What's the breakdown of your expected traffic? What performance targets are you expected to hit? Etc.
Performance problems should be solved through a combination of better hardware and a sensible software architecture that still does everything you need it to. You don't even know whether you have a problem yet, and you're already trying to fix it.
You're asking the wrong question. The question you should be asking is: when my clients create a user, are they likely to need server-created information beyond the URI immediately? Of course, we can't really answer that either. If the server isn't (and won't ever be!) creating anything, there's an obvious answer. If it is, or may be, you may want to return a full representation even if the client doesn't need it now, so it's not a breaking change later if they decide they do. The pain there depends a lot on whether this is an internal- or external-facing API, and who owns the clients.
In addition to the other answers given, which are quite comprehensive, I would just like to add that it is contrary to the design of the web to provide object IDs and expect the client to know what to do with them. You should instead be providing URLs to the object in question. Clients can then do a GET request on the provided URL to fetch the full set of data for the object, should they want to. And if the responses to these GET requests have already been cached, your server will not have to do any work at all to satisfy them!
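Pulling those points together, a minimal sketch (assuming Express; insertClient is a hypothetical persistence helper) of a create that answers with 201, a Location header, and URLs rather than bare ids:
// Sketch, assuming Express; insertClient is a hypothetical persistence helper returning the new id.
import express from "express";
declare function insertClient(data: unknown): Promise<number>;

const app = express();
app.use(express.json());

app.post("/clients", async (req, res) => {
  const id = await insertClient(req.body);
  const url = `/clients/${id}`;
  res.status(201)
    .location(url)                       // "refers to the new resource"
    .json({
      url,                               // give clients a URL to follow, not a bare id
      client_name: req.body.client_name,
      client_email: req.body.client_email,
      client_phone: req.body.client_phone,
    });
});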

Keeping things RESTful

New to rest and not having even known what REST was, I began watching a few videos and picked up a book to help guide me towards the correct approach. Unfortunately, my first version is completely botched to hell and I'm likely going to have to break any customers using that implementation shortly. To ensure that I don't need to do this again, I need your assistance!
I have a few DB tables that I'm concerned with here:
'PrimaryBuyer' & 'AllBuyers'
They share a majority of fields, but AllBuyers has a few things Primary does not and vice versa. Each primary buyer is given a unique 'CaseNumber' when entered into the system. This in addition to a 'SequenceNumber' is then used to identify 'AllBuyers'. This CaseNumber is returned to the user of the web service to store for future use. The sequence numbers however are implied based on their location within the XML / JSON.
To specify these tables -> For example, if I were to buy a car I would be the primary buyer and would thusly be entered into BOTH Primary and AllBuyers tables. However, if my credit was bad I could have my spouse cosign on the loan. This would make her a secondary buyer, and she would be entered exclusively into 'AllBuyers' table.
I currently have one REST URI set up as '/buyers/' which mandates that all information for all buyers is entered at once. Similarly if I were to do an update on this URI, the Primary is updated in both tables and any Secondary buyers in the payload would replace previously existing ones.
Ultimately, there is no way to directly access tables 'PrimaryBuyer' and 'AllBuyers'
I've been trying to think of a solution around this problem, but have been unable to think of anything that's necessarily RESTful or not a pain for customers. Is it ridiculous to think that the user should (say on an add) POST to /primarybuyer/, take the returned casenumber, and then POST the same information and then some to /allbuyers/? That seems like it would be a little silly on bandwidth among other things. Should things be left in their current state?
Hopefully that's not too much information to answer such a seemingly simple question.
Is it ridiculous to think that the user should (say on an add) POST to
/primarybuyer/, take the returned casenumber, and then POST the same
information and then some to /allbuyers/?
When you talk about the "user", do you mean the person or the system (browser) that uses your service?
A REST service is normally consumed by a system; today that often means a browser plus JavaScript. The work on behalf of the human user is done by a web facade, and behind it all your (JavaScript) code runs to make the appropriate REST calls.
Why not post the buyer's information as a serialized object to the REST server? You could do this via the parameters section of the request. When the request gets to the server, you can deserialize the object and implement the logic of updating the database.
Kind of like how amazon does it? http://docs.aws.amazon.com/ElasticMapReduce/latest/API/API_AddInstanceGroups.html
If the client application does POST /PrimaryBuyer there is no reason that the server cannot also copy that case information into the /AllBuyers resource and vice versa.

Transactions in REST?

I'm wondering how you'd implement the following use-case in REST. Is it even possible to do without compromising the conceptual model?
Read or update multiple resources within the scope of a single transaction. For example, transfer $100 from Bob's bank account into John's account.
As far as I can tell, the only way to implement this is by cheating. You could POST to the resource associated with either John or Bob and carry out the entire operation using a single transaction. As far as I'm concerned this breaks the REST architecture because you're essentially tunneling an RPC call through POST instead of really operating on individual resources.
Consider a RESTful shopping basket scenario. The shopping basket is conceptually your transaction wrapper. In the same way that you can add multiple items to a shopping basket and then submit that basket to process the order, you can add Bob's account entry to the transaction wrapper and then John's account entry to the wrapper. When all the pieces are in place, you can POST/PUT the transaction wrapper with all the component pieces.
There are a few important cases that aren't answered by this question, which I think is too bad, because it has a high ranking on Google for the search terms :-)
Specifically, a nice property would be: if you POST twice (because some cache hiccupped in the intermediate), you should not transfer the amount twice.
To get to this, you create a transaction as an object. This could contain all the data you know already, and put the transaction in a pending state.
POST /transfer/txn
{"source":"john's account", "destination":"bob's account", "amount":10}
{"id":"/transfer/txn/12345", "state":"pending", "source":...}
Once you have this transaction, you can commit it, something like:
PUT /transfer/txn/12345
{"id":"/transfer/txn/12345", "state":"committed", ...}
{"id":"/transfer/txn/12345", "state":"committed", ...}
Note that multiple puts don't matter at this point; even a GET on the txn would return the current state. Specifically, the second PUT would detect that the first was already in the appropriate state, and just return it -- or, if you try to put it into the "rolledback" state after it's already in "committed" state, you would get an error, and the actual committed transaction back.
As long as you talk to a single database, or a database with an integrated transaction monitor, this mechanism will actually work just fine. You might additionally introduce time-outs for transactions, which you could even express using Expires headers if you wanted to.
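A minimal sketch of that commit endpoint, assuming Express and some hypothetical storage helpers, to show why repeated PUTs are harmless:
// Sketch, assuming Express; loadTxn/saveTxn/applyTransfer are hypothetical helpers.
import express from "express";
type Txn = { id: string; state: "pending" | "committed" | "rolledback"; source: string; destination: string; amount: number };
declare function loadTxn(id: string): Promise<Txn | undefined>;
declare function saveTxn(txn: Txn): Promise<void>;
declare function applyTransfer(txn: Txn): Promise<void>;

const app = express();
app.use(express.json());

app.put("/transfer/txn/:id", async (req, res) => {
  const txn = await loadTxn(req.params.id);
  if (!txn) return res.status(404).end();

  if (txn.state === req.body.state) {
    return res.json(txn);                           // repeated PUT: already in that state, just return it
  }
  if (txn.state === "pending" && req.body.state === "committed") {
    await applyTransfer(txn);                       // move the money exactly once
    txn.state = "committed";
    await saveTxn(txn);
    return res.json(txn);
  }
  return res.status(409).json(txn);                 // e.g. rolling back an already committed transaction
});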
In REST terms, resources are nouns that can be acted on with CRUD (create/read/update/delete) verbs. Since there is no "transfer money" verb, we need to define a "transaction" resource that can be acted upon with CRUD. Here's an example in HTTP+POX. First step is to CREATE (HTTP POST method) a new empty transaction:
POST /transaction
This returns a transaction ID, e.g. "1234" and according URL "/transaction/1234". Note that firing this POST multiple times will not create the same transaction with multiple IDs and also avoids introduction of a "pending" state. Also, POST can't always be idempotent (a REST requirement), so it's generally good practice to minimize data in POSTs.
You could leave the generation of a transaction ID up to the client. In this case, you would POST /transaction/1234 to create transaction "1234" and the server would return an error if it already existed. In the error response, the server could return a currently unused ID with an appropriate URL. It's not a good idea to query the server for a new ID with a GET method, since GET should never alter server state, and creating/reserving a new ID would alter server state.
Next up, we UPDATE (PUT HTTP method) the transaction with all data, implicitly committing it:
PUT /transaction/1234
<transaction>
<from>/account/john</from>
<to>/account/bob</to>
<amount>100</amount>
</transaction>
If a transaction with ID "1234" has been PUT before, the server gives an error response, otherwise an OK response and a URL to view the completed transaction.
NB: in /account/john , "john" should really be John's unique account number.
Great question. REST is mostly explained with database-like examples, where something is stored, updated, retrieved, or deleted. There are few examples like this one, where the server is supposed to process the data in some way. I don't think Roy Fielding included any in his thesis, which was based on HTTP, after all.
But he does talk about "representational state transfer" as a state machine, with links moving to the next state. In this way, the documents (the representations) keep track of the client state, instead of the server having to do it. In this way, there is no client state, only state in terms of which link you are on.
I've been thinking about this, and it seems reasonable to me that to get the server to process something for you, when you upload, the server would automatically create related resources and give you the links to them (in fact, it wouldn't need to automatically create them: it could just tell you the links, and only create them when and if you follow them - lazy creation). And also give you links to create new related resources - a related resource has the same URI but is longer (adds a suffix). For example:
You upload (POST) the representation of the concept of a transaction with all the information. This looks just like an RPC call, but it's really creating the "proposed transaction resource". e.g. URI: /transaction
Glitches will cause multiple such resources to be created, each with a different URI.
The server's response states the created resource's URI and its representation - this includes the link (URI) for creating the related resource of a new "committed transaction resource". Other related resources include the link to delete the proposed transaction. These are states in the state machine, which the client can follow. Logically, these are part of the resource that has been created on the server, beyond the information the client supplied. e.g. URIs: /transaction/1234/proposed, /transaction/1234/committed
You POST to the link to create the "committed transaction resource", which creates that resource, changing the state of the server (the balances of the two accounts)**. By its nature, this resource can only be created once, and can't be updated. Therefore, glitches committing many transactions can't occur.
You can GET those two resources, to see what their state is. Assuming that a POST can change other resources, the proposal would now be flagged as "committed" (or perhaps, not available at all).
This is similar to how webpages operate, with the final webpage saying "are you sure you want to do this?" That final webpage is itself a representation of the state of the transaction, which includes a link to go to the next state. Not just financial transactions; also (eg) preview then commit on wikipedia. I guess the distinction in REST is that each stage in the sequence of states has an explicit name (its URI).
In real-life transactions/sales, there are often different physical documents for different stages of a transaction (proposal, purchase order, receipt etc). Even more for buying a house, with settlement etc.
OTOH This feels like playing with semantics to me; I'm uncomfortable with the nominalization of converting verbs into nouns to make it RESTful, "because it uses nouns (URIs) instead of verbs (RPC calls)". i.e. the noun "committed transaction resource" instead of the verb "commit this transaction". I guess one advantage of nominalization is you can refer to the resource by name, instead of needing to specify it in some other way (such as maintaining session state, so you know what "this" transaction is...)
But the important question is: What are the benefits of this approach? i.e. In what way is this REST-style better than RPC-style? Is a technique that's great for webpages also helpful for processing information, beyond store/retrieve/update/delete? I think that the key benefit of REST is scalability; one aspect of that is not needing to maintain client state explicitly (but making it implicit in the URI of the resource, and the next states as links in its representation). In that sense it helps. Perhaps this helps in layering/pipelining too? OTOH only the one user will look at their specific transaction, so there's no advantage in caching it so others can read it, the big win for http.
I've drifted away from this topic for 10 years. Coming back, I can't believe the religion masquerading as science that you wade into when you google rest+reliable. The confusion is mythic.
I would divide this broad question into three:
Downstream services. Any web service you develop will have downstream services that you use, and whose transaction syntax you have no choice but to follow. You should try and hide all this from users of your service, and make sure all parts of your operation succeed or fail as a group, then return this result to your users.
Your services. Clients want unambiguous outcomes to web-service calls, and the usual REST pattern of making POST, PUT or DELETE requests directly on substantive resources strikes me as a poor, and easily improved, way of providing this certainty. If you care about reliability, you need to identify action requests. This id can be a GUID created on the client, or a seed value from a relational DB on the server; it doesn't matter. For server-generated IDs, use a 'preflight' request-response to exchange the id of the action. If this request fails or half succeeds, no problem: the client just repeats the request. Unused ids do no harm. This is important because it lets all subsequent requests be fully idempotent, in the sense that if they are repeated n times they return the same result and cause nothing further to happen. The server stores all responses against the action id, and if it sees the same request, it replays the same response (see the sketch after this list). A fuller treatment of the pattern is in this google doc. The doc suggests an implementation that, I believe(!), broadly follows REST principles. Experts will surely tell me how it violates others. This pattern can be usefully employed for any unsafe call to your web-service, whether or not there are downstream transactions involved.
Integration of your service into "transactions" controlled by upstream services. In the context of web-services, full ACID transactions are considered as usually not worth the effort, but you can greatly help consumers of your service by providing cancel and/or confirm links in your confirmation response, and thus achieve transactions by compensation.
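As a concrete illustration of point 2, here is a sketch of the replay-by-action-id idea, reusing the minimal Express setup from the earlier sketches; the in-memory map and performTransfer are stand-ins for real storage and real work.
// Sketch: replay stored responses by action id. In-memory map for illustration only;
// performTransfer is a hypothetical function that does the real work exactly once.
declare function performTransfer(request: unknown): Promise<unknown>;
const completed = new Map<string, unknown>();       // actionId -> response body

app.post("/transfers", async (req, res) => {
  const actionId: string = req.body.actionId;       // from the client, or issued by a preflight call
  if (!actionId) return res.status(400).send("actionId required");

  const previous = completed.get(actionId);
  if (previous) return res.json(previous);          // repeated request: same response, no new side effects

  const result = await performTransfer(req.body);
  completed.set(actionId, result);
  res.json(result);
});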
Your requirement is a fundamental one. Don't let people tell you your solution is not kosher. Judge their architectures in the light of how well, and how simply, they address your problem.
If you stand back to summarize the discussion here, it's pretty clear that REST is not appropriate for many APIs, particularly when the client-server interaction is inherently stateful, as it is with non-trivial transactions. Why jump through all the hoops suggested, for client and server both, in order to pedantically follow some principle that doesn't fit the problem? A better principle is to give the client the easiest, most natural, productive way to compose with the application.
In summary, if you're really doing a lot of transactions (types, not instances) in your application, you really shouldn't be creating a RESTful API.
You'd have to roll your own "transaction id" type of tx management. So it would be 4 calls:
http://service/transaction (some sort of tx request)
http://service/bankaccount/bob (give tx id)
http://service/bankaccount/john (give tx id)
http://service/transaction (request to commit)
You'd have to handle the storing of the actions in a DB (if load balanced) or in memory or such, then handling commit, rollback, timeout.
Not really a RESTful day in the park.
First of all, transferring money is nothing that you cannot do in a single resource call. The action you want to perform is sending money, so you add a money-transfer resource to the account of the sender.
POST: accounts/alice, new Transfer {target:"BOB", amount:100, currency:"CHF"}.
Done. You do not need to know that this is a transaction that must be atomic, etc. You just transfer money, i.e. send money from A to B.
But for the rare cases here a general solution:
If you want to do something very complex involving many resources in a defined context, with a lot of restrictions that actually cross the what-vs.-why barrier (business vs. implementation knowledge), you need to transfer state. Since REST should be stateless, you as a client need to transfer the state around.
If you transfer state, you need to hide the information inside it from the client. The client should not know internal information that is only needed by the implementation and carries no information relevant in terms of business. If that information has no business value, the state should be encrypted, and a metaphor like a token, pass, or something similar should be used.
This way one can pass internal state around, and by using encryption and signing the system can still be secure and sound. Finding the right abstraction for the client for why it passes around state information is something that is up to the design and architecture.
The real solution:
Remember, REST talks HTTP, and HTTP comes with the concept of cookies. Those cookies are often forgotten when people talk about REST APIs, workflows, and interactions spanning multiple resources or requests.
Remember what is written in Wikipedia about HTTP cookies:
Cookies were designed to be a reliable mechanism for websites to remember stateful information (such as items in a shopping cart) or to record the user's browsing activity (including clicking particular buttons, logging in, or recording which pages were visited by the user as far back as months or years ago).
So basically, if you need to pass on state, use a cookie. It was designed for exactly this reason; it is HTTP, and therefore it is compatible with REST by design :).
The better solution:
If you talk about a client performing a workflow involving multiple requests, you are usually talking about a protocol. Every form of protocol comes with a set of preconditions for each potential step, like: perform step A before you can do B.
This is natural, but exposing the protocol to clients makes everything more complex. To avoid that, just think about what we do when we have to handle complex interactions and tasks in the real world: we use an agent.
Using the agent metaphor, you can provide a resource that can perform all the necessary steps for you and store the actual assignment / instructions it is acting upon in its list (so we can use POST on the agent, or an 'agency').
A complex example:
Buying a house:
You need to prove your credibility (like providing your police record entries), you need to sort out the financial details, you need to buy the actual house using a lawyer and a trusted third party storing the funds, verify that the house now belongs to you, and add the purchase to your tax records, etc. (just as an example; some steps may be wrong or whatever).
These steps might take several days to complete, and some can be done in parallel, etc.
In order to do this, you just give the agent the task "buy house", like:
POST: agency.com/ { task: "buy house", target:"link:toHouse", credibilities:"IamMe"}.
Done. The agency sends you back a reference that you can use to see and track the status of this job, and the rest is done automatically by the agents of the agency.
Think about a bug tracker, for instance. Basically, you report the bug and can use the bug id to check what's going on. You can even use a service to listen for changes to this resource. Mission done.
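A rough sketch of that agency/bug-tracker shape, again reusing the minimal Express setup from the earlier sketches; runJob and the in-memory job map are placeholders for real asynchronous processing.
// Sketch: POST hands the agency a task and returns a job URL; GET polls the job's status.
import { randomUUID } from "crypto";
declare function runJob(id: string): void;          // hypothetical: works through the steps asynchronously

const jobs = new Map<string, { task: string; status: string }>();

app.post("/agency", (req, res) => {
  const id = randomUUID();
  jobs.set(id, { task: req.body.task, status: "accepted" });
  runJob(id);
  res.status(202).location(`/agency/jobs/${id}`).json({ url: `/agency/jobs/${id}` });
});

app.get("/agency/jobs/:id", (req, res) => {
  const job = jobs.get(req.params.id);
  job ? res.json(job) : res.status(404).end();
});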
You must not use server side transactions in REST.
One of the REST constraints:
Stateless
The client–server communication is further constrained by no client context being stored on the server between requests. Each request from any client contains all of the information necessary to service the request, and any session state is held in the client.
The only RESTful way is to create a transaction redo log and put it into the client state. With each request the client sends the redo log, and the server redoes the transaction and
rolls the transaction back but provides a new transaction redo log (one step further),
or finally completes the transaction.
But maybe it's simpler to use a server-session-based technology which supports server-side transactions.
I think that in this situation it is totally acceptable to break the pure theory of REST. In any case, I don't think there is actually anything in REST that says you can't touch dependent objects in business cases that require it.
I really think it's not worth the extra hoops you would jump through to create a custom transaction manager, when you could just leverage the database to do it.
In the simple case (without distributed resources), you could consider the transaction as a resource, where the act of creating it attains the end objective.
So, to transfer between <url-base>/account/a and <url-base>/account/b, you could post the following to <url-base>/transfer.
<transfer>
<from><url-base>/account/a</from>
<to><url-base>/account/b</to>
<amount>50</amount>
</transfer>
This would create a new transfer resource and return the new url of the transfer - for example <url-base>/transfer/256.
At the moment of successful post, then, the 'real' transaction is carried out on the server, and the amount removed from one account and added to another.
This, however, doesn't cover a distributed transaction (if, say 'a' is held at one bank behind one service, and 'b' is held at another bank behind another service) - other than to say "try to phrase all operations in ways that don't require distributed transactions".
I believe this would be a case for using a unique identifier generated on the client, to ensure that a connection hiccup does not result in a duplicate being saved by the API.
I think using a client-generated GUID field along with the transfer object, and ensuring that the same GUID is not inserted again, would be a simpler solution to the bank-transfer matter.
I don't know about more complex scenarios, such as multiple airline ticket bookings or micro architectures.
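A sketch of how that might look with a database unique constraint doing the deduplication, assuming node-postgres and a made-up transfers table:
// Sketch, assuming node-postgres; the transfers table is hypothetical:
//   CREATE TABLE transfers (id uuid PRIMARY KEY, source text, destination text, amount numeric);
import { Pool } from "pg";

async function recordTransfer(db: Pool, t: { id: string; source: string; destination: string; amount: number }) {
  try {
    await db.query(
      "INSERT INTO transfers (id, source, destination, amount) VALUES ($1, $2, $3, $4)",
      [t.id, t.source, t.destination, t.amount]);
    return "created";
  } catch (err: any) {
    if (err.code === "23505") return "already-processed";   // unique_violation: a replayed request, not a new transfer
    throw err;
  }
}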
I found a paper about the subject, relating the experiences of dealing with the transaction atomicity in RESTful services.
I guess you could include the TAN in the URL/resource:
PUT /transaction to get the ID (e.g. "1")
[PUT, GET, POST, whatever] /1/account/bob
[PUT, GET, POST, whatever] /1/account/bill
DELETE /transaction with ID 1
Just an idea.