RESTful web application - session data management

We are developing an application with many screens. Each screen's data comes from a REST API. What is the best practice for storing the session data (screen data) on the backend?
For example, I need the data of screen two (the screen's input data plus the response from the REST API) on the fourth screen. For this I want to store the REST response of screen two on the server side.
I came up with two scenarios for this; if anyone has experience, please help:
1) Session management using Redis -- but this is mostly used in clustered environments.
2) Session management using Spring Security and Spring Session.
Please suggest the better way of doing it.
Details:
The Spring Boot application will be hosted in the cloud.
Also, the question is not related to security, authentication, or authorization.
Kindly help me with the best practice for moving data between screens.

For example, I need the data of screen two (the screen's input data plus the response from the REST API) on the fourth screen. For this I want to store the REST response of screen two on the server side.
What you are describing there is a violation of the stateless architectural constraint of REST.
The "right" answer is to take one of two approaches. One is to store the "session data" on the client: the server sends the data back to the client (for example, as fixed/hidden fields in a form) in the response. The other is to use the client's actions to modify a resource (think shopping cart).
The core problem is this: the stateless constraint means that the server only operates on the current request; the server only ever sees requests, not state changes (for example, the client can hit the back button, jump to some other state in its history, or fetch additional state from somewhere else).
If you use the "modify a resource" approach, you may want to review RFC 7232 (Conditional Requests) and think about whether or not your use case needs to worry about the lost update problem.
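A minimal sketch of the "modify a resource" approach in Spring Boot. The WizardController, WizardStateRepository, and URL layout below are assumptions for illustration, not anything from the question: each screen PUTs its data to a wizard resource, and later screens GET it back.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.*;

// Hypothetical: the multi-screen "session" modeled as an addressable resource.
@RestController
@RequestMapping("/wizards/{wizardId}")
public class WizardController {

    private final WizardStateRepository repository;

    public WizardController(WizardStateRepository repository) {
        this.repository = repository;
    }

    // Screen two stores its input plus the API response it received.
    @PutMapping("/screens/{screenId}")
    public ResponseEntity<Void> saveScreen(@PathVariable String wizardId,
                                           @PathVariable String screenId,
                                           @RequestBody Map<String, Object> screenData) {
        repository.save(wizardId, screenId, screenData);
        return ResponseEntity.noContent().build();
    }

    // Screen four reads screen two's data back.
    @GetMapping("/screens/{screenId}")
    public Map<String, Object> getScreen(@PathVariable String wizardId,
                                         @PathVariable String screenId) {
        return repository.find(wizardId, screenId);
    }
}

// In-memory stand-in; a Redis- or JDBC-backed implementation would plug in here.
@Component
class WizardStateRepository {
    private final Map<String, Map<String, Object>> store = new ConcurrentHashMap<>();

    void save(String wizardId, String screenId, Map<String, Object> data) {
        store.put(wizardId + "/" + screenId, data);
    }

    Map<String, Object> find(String wizardId, String screenId) {
        return store.get(wizardId + "/" + screenId);
    }
}

Because the wizard is just a resource with its own URL, this stays within the stateless constraint: each request carries the wizard id, and Redis or a database can back the repository interchangeably.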

How do you save API keys without exposing them in the first place?

If I save API keys to flutter_secure_storage, they must be exposed in the first place. How could they be pre-encrypted or saved to secure storage without being exposed initially?
I want to add a slight layer of security where keys are stored securely and only exposed when making an API call. But if the keys are hardcoded, then they are exposed, even if only at the initial app run. How do you get around this?
To avoid exposing the API key, you can store keys in a '.env' file and use the flutter_dotenv package to access them when making API calls. That keeps keys out of your source code, but it does not help at the moment the API call is made. If you really want to stop keys from being exposed, move the API calls to a backend so those network calls cannot be seen by the client.
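A hedged sketch of that backend approach in Java with Spring Boot; the /weather route, the third-party URL, and the THIRD_PARTY_API_KEY variable are assumptions for illustration. The key is read from the server's environment and never leaves it:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical proxy: the Flutter client calls /weather on our server,
// and only the server ever sees the third-party key.
@RestController
public class WeatherProxyController {

    private final HttpClient http = HttpClient.newHttpClient();

    @GetMapping("/weather")
    public String weather() throws Exception {
        String apiKey = System.getenv("THIRD_PARTY_API_KEY"); // set via the server's environment/.env
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/weather?key=" + apiKey)) // assumed endpoint
                .build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}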
If this is a web project, you could use something like Base64 on both ends: encode on the server, then decode and save on the client, like this:
SERVER (PHP):
$apiKeyEncoded = base64_encode(apiKeyGenerator());
CLIENT (Dart):
// needs: import 'dart:convert';
final apiKeyEncoded = await getApiKey();
final apiKeyDecoded = utf8.decode(base64Decode(apiKeyEncoded)); // this is the usable one, save it
Now, if the project is focused on mobile use, I don't think you actually need to implement this, though the code would be the same.
I will add some input to this. I am using Parse Back4App, which exposes app API keys in the same way that Firebase does. I have discovered a few very important security designs which may help with this.
Client side
Don't worry about app API keys being abused. Firebase/Back4App both have security features in place for this, including DoS & DDoS protections.
Move ALL actual API calls to the server and call them from the client via Cloud Code. If you want to go to the extreme, create a user-device hash code for custom client rate limiting.
Server side
LOCK DOWN ALL CLPs and ALL ACLs; basically, lock ALL PERMISSIONS and ONLY give cloud calls with heavy security checks authorized access to anything server side, including outside API calls.
Make API calls from your server only. Better yet, move your API calls outside cloud calls and create "cloud jobs"; these run on a schedule with Back4App, and you can periodically call whatever API from the server. Example: a cryptocurrency app might update prices once per second or once per minute; the server gets these updates and pushes them to clients. No risk of someone getting your crypto API keys and running up the limits.
Put in a custom rate-limiting design, and design around it so your rate limits would never trip under normal circumstances. If they do trip in excess, ban the user and drop their requests (a minimal sketch follows this list).
Also put API keys in a .env file on the server. Go a step further and use a key-encryption hardware service.
With this structure, API keys being abused would be a tell-tale sign that your server is compromised.
Want further DoS & DDoS protection? Mirror your server a few times and create a structure whereby client requests can be redirected during attacks, or non-attacking clients receive new app API keys.
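A minimal sketch of such a custom rate limiter; the per-user budget, the window length, and the RateLimiter name are illustrative assumptions, not from this answer:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical fixed-window rate limiter keyed by a user-device hash.
public class RateLimiter {
    private static final int LIMIT_PER_WINDOW = 60;    // illustrative request budget
    private static final long WINDOW_MILLIS = 60_000L; // one-minute window

    private static final class Window {
        final long start;
        int count;
        Window(long start) { this.start = start; }
    }

    private final Map<String, Window> windows = new ConcurrentHashMap<>();

    // Returns true if the request is allowed; false means drop it (and possibly ban the caller).
    public synchronized boolean allow(String userDeviceHash) {
        long now = System.currentTimeMillis();
        Window w = windows.get(userDeviceHash);
        if (w == null || now - w.start >= WINDOW_MILLIS) {
            w = new Window(now);
            windows.put(userDeviceHash, w);
        }
        return ++w.count <= LIMIT_PER_WINDOW;
    }
}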
... I could go on and on about security & what I've learned but I'll leave it at that.

Why server side geo location detection is preferred over client side API call?

I need to implement geolocation detection on our website.
(I need to calculate a state variable and pass it to Google Tag Manager; in the future this variable might be used on the server side to render a specific block depending on state, but for now it is only needed on the client side for GTM.)
I found this article very helpful. It's a slightly more complex example, for detecting weather; I need only geolocation.
In the article the author gives an example with a client-side API call, but several times he recommends using server-side calls instead.
For those who want to take this weather analysis seriously, I really recommend moving to a 100% server-side solution, where the weather data is polled before the page itself is rendered, and the data is written in the dataLayer of the page.
and
If you’re serious about this solution, you might want to install a geolocation service on your own web server, so that you’ll avoid needing to make any extra API calls in the client.
So it seems like server-side detection is better, but I don't really understand why. Could anyone explain, please?
One given reason is security: if you query a commercial API via JavaScript and pass the API key in your requests, someone else might use it at your expense.
Also, with JavaScript you have to issue your request and then wait for the response before you can continue rendering your page. With a server-side solution, the querying, error handling, etc. are already done when the browser renders the page; you can also cache the requests to your API to lower your costs and speed up delivery.
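A hedged sketch of that server-side pattern in Java; the geo.example.com lookup service and the GeoLookup/dataLayerSnippet names are assumptions. The lookup is cached per IP, and the result is written into the page's dataLayer before the page is served:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical server-side geolocation with a per-IP cache.
public class GeoLookup {
    private final HttpClient http = HttpClient.newHttpClient();
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Returns a region/state code for the visitor's IP, cached to avoid repeated API calls.
    public String stateFor(String visitorIp) {
        return cache.computeIfAbsent(visitorIp, ip -> {
            try {
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("https://geo.example.com/lookup?ip=" + ip)) // assumed service
                        .build();
                return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
            } catch (Exception e) {
                return "unknown"; // fail open: GTM simply sees no state
            }
        });
    }

    // Inline script to embed in the rendered page, before the GTM snippet.
    public String dataLayerSnippet(String visitorIp) {
        return "<script>window.dataLayer = window.dataLayer || [];"
                + "dataLayer.push({'visitorState': '" + stateFor(visitorIp) + "'});</script>";
    }
}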

How to combine websockets and http to create a REST API that keeps data up to date?

I am thinking about building a REST API with both websockets and HTTP, where I use websockets to tell the client that new data is available or to provide the new data to the client directly.
Here are some different ideas of how it could work:
ws = websocket
Idea A:
David gets all users with GET /users
Jacob adds a user with POST /users
A ws message is sent to all clients with the information that a new user exists
David receives the message via ws and calls GET /users
Idea B:
David gets all users with GET /users
David registers for ws updates when a change is made to /users
Jacob adds a user with POST /users
The new user is sent to David via ws
Idea C:
David gets all users with GET /users
David registers for ws updates when a change is made to /users
Jacob adds a user with POST /users and it gets the id 4
David receives the id 4 of the new user via ws
David gets the new user with GET /users/4
Idea D:
David gets all users with GET /users
David registers for ws updates when changes are made to /users
Jacob adds a user with POST /users
David receives a ws message that changes were made to /users
David gets only the delta by calling GET /users?lastcall='time of step one'
Which alternative is the best, and what are the pros and cons?
Is there another, better 'Idea E'?
Do we even need to use REST, or is ws enough for all data?
Edit
To solve problems with data getting out of sync, we could provide the "If-Unmodified-Since" header (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If-Unmodified-Since) or "ETag" (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag), or both, with PUT requests.
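For illustration, a hedged sketch of such a conditional update using Java's built-in HTTP client (If-Match is the ETag-based precondition header for writes); the endpoint, body, and ETag value are assumed:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical conditional PUT: only apply the update if the resource still
// carries the ETag we last saw; a 412 response means our copy is out of sync.
public class ConditionalPut {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/users/4"))   // assumed endpoint
                .header("If-Match", "\"etag-from-last-get\"")          // assumed ETag
                .PUT(HttpRequest.BodyPublishers.ofString("{\"name\":\"Jacob\"}"))
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() == 412) {
            // Precondition Failed: refetch the resource, merge, and retry with the new ETag.
        }
    }
}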
Idea B is, for me, the best, because the client specifically subscribes to changes in a resource and gets incremental updates from that moment on.
Do we even need to use REST, or is ws enough for all data?
Please check: WebSocket/REST: Client connections?
I don't know Java, but I worked with both Ruby and C on these designs...
Funny enough, I think the easiest solution is to use JSON, where the REST API simply adds the method data (i.e. method: "POST") to the JSON and forwards the request to the same handler the Websocket uses.
The underlying API's response (the response from the API handling JSON requests) can be translated to any format you need, such as HTML rendering... though I would consider simply returning JSON for most use cases.
This helps encapsulate the code and keep it DRY while accessing the same API using both REST and Websockets.
As you might infer, this design makes testing easier, since the underlying API that handles the JSON can be tested locally without the need to emulate a server.
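A hedged sketch of that unified-handler idea in Java; the ApiHandler name and the envelope fields ("method", "path", "payload") are assumptions:

import java.util.Map;

// Hypothetical shared handler: both the REST layer and the websocket layer
// build the same JSON-style envelope and delegate to it.
public class ApiHandler {

    // The single entry point; the envelope carries method, path, and payload.
    public Map<String, Object> handle(Map<String, Object> envelope) {
        String method = (String) envelope.get("method");
        String path = (String) envelope.get("path");
        // ...dispatch on method + path and return a JSON-ready map...
        return Map.of("status", "ok", "method", method, "path", path);
    }

    // REST adapter: inject the HTTP method into the envelope, then delegate.
    public Map<String, Object> fromRest(String httpMethod, String path, Map<String, Object> body) {
        return handle(Map.of("method", httpMethod, "path", path, "payload", body));
    }

    // Websocket adapter: the message already carries its own "method" field.
    public Map<String, Object> fromWebsocket(Map<String, Object> message) {
        return handle(message);
    }
}

Because the handler never touches the transport, it can be unit-tested with plain maps, which is the testing benefit described above.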
Good Luck!
P.S. (Pub/Sub)
As for the Pub/Sub, I find it best to have a "hook" for any update API calls (a callback) and a separate Pub/Sub module that handles these things.
I also find it more resource-friendly to write the whole data to the Pub/Sub service (option B) instead of just a reference number (option C) or an "update available" message (options A and D).
In general, I also believe that sending the whole user list isn't effective for larger systems. Unless you have 10-15 users, the database call might be a bust. Consider an Amazon admin calling for a list of all users... Brrr....
Instead, I would consider dividing this into pages, say 10-50 users per page. These tables can be filled using multiple requests (websocket / REST, doesn't matter) and easily updated using live Pub/Sub messages, or reloaded if a connection was lost and reestablished.
EDIT (REST vs. Websockets)
As for REST vs. websockets... I find the question of need is mostly a subset of the question "who's the client?"...
However, once the logic is separated from the transport layer, then supporting both is very easy, and often it makes more sense to support both.
I should note that websockets often have a slight edge when it comes to authentication (credentials are exchanged once per connection instead of once per request). I don't know if this is a concern.
For the same reason (as well as others), websockets usually have an edge with regard to performance... how big an edge over REST depends on the REST transport layer (HTTP/1.1, HTTP/2, etc.).
Usually these things are negligible when it comes time to offer a public API access point, and I believe implementing both is probably the way to go for now.
To summarize your ideas:
A: Send a message to all clients when a user edits data on the server. All users then request an update of all data.
-This system may make a lot of unnecessary server calls on behalf of clients who are not using the data. I don't recommend producing all of that extra traffic as processing and sending those updates could become costly.
B: After a user pulls data from the server, they then subscribe to updates from the server which sends them information about what has changed.
-This saves a lot of server traffic, but if you ever get out of sync, you're going to be posting incorrect data to your users.
C: Users who subscribe to data updates are sent information about which data has been updated, then fetch it again themselves.
-This is the worst of A and B in that you'll have extra round trips between your users and servers just to notify them that they need to make a request for information which may be out of sync.
D: Users who subscribe to updates are notified when any changes are made and then request the last change made to the server.
-This presents all of the problems with C, but includes the possibility that, once out of sync, you may send data that will be nonsense to your users, which might just crash the client-side app for all we know.
I think that this option E would be best:
Every time data changes on the server, send the contents of all the data to the clients who have subscribed to it. This limits the traffic between your users and the server while also giving them the least chance of having out-of-sync data. They might get stale data if their connection drops, but at least you wouldn't be sending them something like "Delete entry 4" when you aren't sure whether or not they got the message that entry 5 just moved into slot 4.
Some Considerations:
How often does the data get updated?
How many users need to be updated each time an update occurs?
What are your transmission costs? If you have users on mobile devices with slow connections, that will affect how often and how much you can afford to send to them.
How much data gets updated in a given update?
What happens if a user sees stale data?
What happens if a user gets data out of sync?
Your worst-case scenario would be something like this: lots of users with slow connections who are frequently updating large amounts of data that should never be stale and, if it gets out of sync, becomes misleading.
I personally have used Idea B in production and am very satisfied with the results. We use http://www.axonframework.org/, so every change to or creation of an entity is published as an event throughout the application. These events are then used to update several read models, which are basically simple MySQL tables backing one or more queries. I added some interceptors to the event processors that update these read models so that they publish the events they just processed after the data is committed to the DB.
Publishing of events is done through STOMP over websockets. It is made very simple if you use Spring's WebSocket support (https://docs.spring.io/spring/docs/current/spring-framework-reference/html/websocket.html). This is how I wrote it:
@Override
protected void dispatch(Object serializedEvent, String topic, Class eventClass) {
    Map<String, Object> headers = new HashMap<>();
    headers.put("eventType", eventClass.getName());
    messagingTemplate.convertAndSend("/topic" + topic, serializedEvent, headers);
}
I wrote a little configurer that uses Spring's bean factory API so that I can annotate my Axon event handlers like this:
@PublishToTopics({
    @PublishToTopic(value = "/salary-table/{agreementId}/{salaryTableId}", eventClass = SalaryTableChanged.class),
    @PublishToTopic(
        value = "/salary-table-replacement/{agreementId}/{activatedTable}/{deactivatedTable}",
        eventClass = ActiveSalaryTableReplaced.class
    )
})
Of course, that is just one way to do it. Connecting on the client side may look something like this:
var connectedClient = $.Deferred();

function initialize() {
    var basePath = ApplicationContext.cataDirectBaseUrl().replace(/^https/, 'wss');
    var accessToken = ApplicationContext.accessToken();
    var socket = new WebSocket(basePath + '/wss/query-events?access_token=' + accessToken);
    var stompClient = Stomp.over(socket);
    stompClient.connect({}, function () {
        connectedClient.resolve(stompClient);
    });
}

this.subscribe = function (topic, callBack) {
    connectedClient.then(function (stompClient) {
        stompClient.subscribe('/topic' + topic, function (frame) {
            callBack(frame.headers.eventType, JSON.parse(frame.body));
        });
    });
};

initialize();
Another option is to use Firebase Cloud Messaging:
Using FCM, you can notify a client app that new email or other data is available to sync.
How does it work?
An FCM implementation includes two main components for sending and receiving:
A trusted environment such as Cloud Functions for Firebase or an app server on which to build, target and send messages.
An iOS, Android, or Web (JavaScript) client app that receives messages.
The client registers its Firebase key with a server. When updates are available, the server sends a push notification to the Firebase key associated with the client. The client may receive data in the notification structure or sync with the server after receiving the notification.
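A hedged sketch of the server side using the Firebase Admin SDK for Java; it assumes FirebaseApp has already been initialized and that registrationToken was previously sent up by the client. The "usersChanged" payload convention is invented for illustration:

import com.google.firebase.messaging.FirebaseMessaging;
import com.google.firebase.messaging.Message;

// Hypothetical "data changed" nudge: the client syncs with the REST API
// after receiving this message.
public class UserChangePublisher {
    public void notifyClient(String registrationToken) throws Exception {
        Message message = Message.builder()
                .putData("type", "usersChanged") // assumed payload convention
                .setToken(registrationToken)
                .build();
        FirebaseMessaging.getInstance().send(message);
    }
}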
Generally, you might have a look at current "realtime" web frameworks like MeteorJS, which tackle exactly this problem.
Meteor specifically works more or less like your example D, with subscriptions on certain data and deltas being sent out after changes, only to the affected clients. The protocol used is called DDP, which additionally sends the deltas not as overhead-prone HTML but as raw data.
If websockets are not available, fallbacks like long polling or server-sent events can be used.
If you plan to implement it yourself, I hope these sources are some inspiration for how this problem has been approached. As already stated, the specific use case is important.
The answer depends on your use case. For the most part, though, I've found that you can implement everything you need with sockets, as long as you are only trying to access your server with clients that can support sockets. Also, scale can be an issue when you're using only sockets. Here are some examples of how you could use just sockets.
Server side:
socket.on('getUsers', () => {
    // Get users from the db or data model (save as user_list).
    socket.emit('users', user_list);
});

socket.on('createUser', (user_info) => {
    // Create the user in the db or data model (save the created user as user_data).
    io.sockets.emit('newUser', user_data);
});
Client side:
socket.on('newUser', () => {
    // A new user exists; ask the server for the updated list.
    socket.emit('getUsers');
});

socket.on('users', (users) => {
    // Do something with users.
});
This uses socket.io for Node. I'm not sure what your exact scenario is, but this would work for that case. If you need to include REST endpoints, that would be fine too.
With all the great information all the great people added before me:
I found that eventually there is no right or wrong; it simply comes down to what suits your needs.
Let's take CRUD in this scenario:
WS-only approach:
Create/Read/Update/Delete information all goes through the websocket.
--> e.g. if you have critical performance considerations, where it is not acceptable for the web client to make successive REST requests to fetch information, or if you know that you want the whole data to be visible in the client no matter what the event was, then just send the CRUD events AND the DATA inside the websocket.
WS TO SEND EVENT INFO + REST TO CONSUME THE DATA ITSELF:
For Create/Read/Update/Delete, the event information is sent over the websocket, giving the web client the information necessary to send the proper REST request to fetch exactly the thing the CRUD event changed on the server.
e.g. WS sends UsersListChangedEvent {"ListChangedTrigger": "ItemModified", "IdOfItem": "XXXX#3232", "UserExtraInformation": "Enough info to let the client decide if it is relevant for it to fetch the changed data"}
I found that using WS [only for the event data] and REST [to consume the data] is better, because:
[1] Separation between read and write models. Imagine you want to add some runtime information when your data is retrieved via REST; that is now achievable because you are not mixing write and read models as in option 1.
[2] Let's say another platform, not necessarily a web client, will consume this data. Then you just change the event trigger from WS to the new mechanism and keep using REST to consume the data.
[3] The client does not need two code paths for reading new/modified data. Usually there is also code that reads the data when the page loads, not through the websocket; this code can now be used twice, once when the page loads and again when WS triggers the specific event.
[4] Maybe the client does not want to fetch the new user, because it is currently showing only a view of old data [e.g. users], and the new changes are not in its interest to fetch?
I prefer A; it allows the client the flexibility of whether or not to update the existing data.
Also, with this method, implementation and access control become much easier.
For example, you can simply broadcast the userUpdated event to all users; this saves keeping a client list for specific broadcasts, and the access controls and authentication applied to your REST route won't have to be reapplied, because the client is going to make a GET request again.
Many things depend on what kind of application you are making.

In the RESTful world, how is the next allowed action returned to the UI in a workflow-based application?

We have a workflow-based application where a given process moves from one state to another based on user action. Currently our requirement is to have the UI display the current state and allow the user to take the next action steps. So my question is: does the server generally tell the UI the next actions that can be taken, or should the UI decide based on the current action? This application is designed using RESTful web services.
The server should provide the client with the next allowed actions, also known as state transfers, using links. Those links should contain at least two pieces of information: a URL and a relationship. The relationship tells the client the meaning of the state transition, allowing it to recognize what it will do. The URL only says where to locate the service.
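A hedged sketch of such link-building with Spring HATEOAS; WorkflowItem, State, and ProcessController are hypothetical names, and the rule for which transitions are legal is invented for illustration:

import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;

import org.springframework.hateoas.EntityModel;

// Build the representation of a workflow item together with links for the
// next allowed actions only; the client follows "approve"/"reject" by rel.
public class ProcessModelAssembler {
    public EntityModel<WorkflowItem> toModel(WorkflowItem item) {
        EntityModel<WorkflowItem> model = EntityModel.of(item,
                linkTo(methodOn(ProcessController.class).get(item.getId())).withSelfRel());
        if (item.getState() == State.PENDING_APPROVAL) {
            model.add(linkTo(methodOn(ProcessController.class).approve(item.getId())).withRel("approve"));
            model.add(linkTo(methodOn(ProcessController.class).reject(item.getId())).withRel("reject"));
        }
        return model;
    }
}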
Typically, a REST web service should be ignorant of client state. It should only provide the ability to retrieve and update data based on a specific URL (you may want to read up on REST). If you are following these guidelines, then the UI should drive any logic regarding the state of a workflow or record.
The hypertext returned provides the next "possible" transitions in the form of 'links' to the "resources". The client/user selects the next transition.
"REST APIs must be hypertext-driven" may be a good read!

Transactions in REST?

I'm wondering how you'd implement the following use-case in REST. Is it even possible to do without compromising the conceptual model?
Read or update multiple resources within the scope of a single transaction. For example, transfer $100 from Bob's bank account into John's account.
As far as I can tell, the only way to implement this is by cheating. You could POST to the resource associated with either John or Bob and carry out the entire operation using a single transaction. As far as I'm concerned, this breaks the REST architecture because you're essentially tunneling an RPC call through POST instead of really operating on individual resources.
Consider a RESTful shopping basket scenario. The shopping basket is conceptually your transaction wrapper. In the same way that you can add multiple items to a shopping basket and then submit that basket to process the order, you can add Bob's account entry to the transaction wrapper and then John's account entry to the wrapper. When all the pieces are in place, you can POST/PUT the transaction wrapper with all its component pieces.
There are a few important cases that aren't answered here, which I think is too bad, because this question ranks highly on Google for the relevant search terms :-)
Specifically, a nice property would be: if you POST twice (because some cache hiccupped in between), you should not transfer the amount twice.
To get there, you create the transaction as an object. This could contain all the data you know already and put the transaction in a pending state.
POST /transfer/txn
{"source":"john's account", "destination":"bob's account", "amount":10}
{"id":"/transfer/txn/12345", "state":"pending", "source":...}
Once you have this transaction, you can commit it, something like:
PUT /transfer/txn/12345
{"id":"/transfer/txn/12345", "state":"committed", ...}
{"id":"/transfer/txn/12345", "state":"committed", ...}
Note that multiple PUTs don't matter at this point; even a GET on the txn would return the current state. Specifically, the second PUT would detect that the first had already moved the transaction into the appropriate state and just return it -- or, if you try to put it into the "rolledback" state after it's already "committed", you would get an error, along with the actual committed transaction.
As long as you talk to a single database, or a database with an integrated transaction monitor, this mechanism will actually work just fine. You might additionally introduce time-outs for transactions, which you could even express using Expires headers if you wanted to.
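A hedged sketch of that pending-then-commit resource in Spring, with an in-memory map standing in for the database/transaction monitor; all names and the ID scheme are illustrative:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

// Hypothetical transfer-transaction resource: POST creates it as "pending",
// PUT moves it to "committed" idempotently (a repeated PUT is a no-op).
@RestController
@RequestMapping("/transfer/txn")
public class TransferTxnController {

    record Txn(String id, String state, String source, String destination, long amount) {}

    private final Map<String, Txn> txns = new ConcurrentHashMap<>();
    private final AtomicLong ids = new AtomicLong(12345);

    @PostMapping
    public Txn create(@RequestBody Map<String, Object> body) {
        String id = "/transfer/txn/" + ids.getAndIncrement();
        Txn txn = new Txn(id, "pending",
                (String) body.get("source"),
                (String) body.get("destination"),
                ((Number) body.get("amount")).longValue());
        txns.put(id, txn);
        return txn;
    }

    @PutMapping("/{id}")
    public ResponseEntity<Txn> commit(@PathVariable String id, @RequestBody Map<String, Object> body) {
        Txn txn = txns.computeIfPresent("/transfer/txn/" + id, (key, existing) -> {
            if ("pending".equals(existing.state()) && "committed".equals(body.get("state"))) {
                // ...apply both balance updates inside one database transaction here...
                return new Txn(existing.id(), "committed", existing.source(),
                        existing.destination(), existing.amount());
            }
            return existing; // already committed: replay the current state
        });
        return txn == null ? ResponseEntity.notFound().build() : ResponseEntity.ok(txn);
    }
}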
In REST terms, resources are nouns that can be acted on with CRUD (create/read/update/delete) verbs. Since there is no "transfer money" verb, we need to define a "transaction" resource that can be acted upon with CRUD. Here's an example in HTTP+POX. First step is to CREATE (HTTP POST method) a new empty transaction:
POST /transaction
This returns a transaction ID, e.g. "1234", and the corresponding URL "/transaction/1234". Note that firing this POST multiple times will not create the same transaction with multiple IDs, and it also avoids introducing a "pending" state. Also, POST can't always be idempotent (a REST requirement), so it's generally good practice to minimize data in POSTs.
You could leave the generation of a transaction ID up to the client. In this case, you would POST /transaction/1234 to create transaction "1234", and the server would return an error if it already existed. In the error response, the server could return a currently unused ID with an appropriate URL. It's not a good idea to query the server for a new ID with a GET method, since GET should never alter server state, and creating/reserving a new ID would alter server state.
Next up, we UPDATE (PUT HTTP method) the transaction with all data, implicitly committing it:
PUT /transaction/1234
<transaction>
  <from>/account/john</from>
  <to>/account/bob</to>
  <amount>100</amount>
</transaction>
If a transaction with ID "1234" has been PUT before, the server gives an error response; otherwise it returns an OK response and a URL to view the completed transaction.
NB: in /account/john, "john" should really be John's unique account number.
Great question. REST is mostly explained with database-like examples, where something is stored, updated, retrieved, or deleted. There are few examples like this one, where the server is supposed to process the data in some way. I don't think Roy Fielding included any in his thesis, which was based on HTTP, after all.
But he does talk about "representational state transfer" as a state machine, with links moving to the next state. In this way, the documents (the representations) keep track of the client state, instead of the server having to do it. In this way, there is no client state, only state in terms of which link you are on.
I've been thinking about this, and it seems to me reasonable that to get the server to process something for you, when you upload, the server would automatically create related resources and give you the links to them (in fact, it wouldn't need to create them automatically: it could just tell you the links, and only create them when and if you follow them -- lazy creation). And it would also give you links to create new related resources; a related resource has the same URI but is longer (adds a suffix). For example:
You upload (POST) the representation of the concept of a transaction with all the information. This looks just like an RPC call, but it's really creating the "proposed transaction resource". e.g. URI: /transaction
Glitches will cause multiple such resources to be created, each with a different URI.
The server's response states the created resource's URI and its representation; this includes the link (URI) for creating the related resource of a new "committed transaction resource". Other related resources include the link to delete the proposed transaction. These are states in the state machine, which the client can follow. Logically, these are part of the resource that has been created on the server, beyond the information the client supplied. e.g. URIs: /transaction/1234/proposed, /transaction/1234/committed
You POST to the link to create the "committed transaction resource", which creates that resource and changes the state of the server (the balances of the two accounts). By its nature, this resource can only be created once and can't be updated. Therefore, glitches committing many transactions can't occur.
You can GET those two resources to see what their state is. Assuming that a POST can change other resources, the proposal would now be flagged as "committed" (or perhaps not available at all).
This is similar to how webpages operate, with the final webpage saying "are you sure you want to do this?". That final webpage is itself a representation of the state of the transaction, which includes a link to go to the next state. Not just financial transactions; also (e.g.) preview-then-commit on Wikipedia. I guess the distinction in REST is that each stage in the sequence of states has an explicit name (its URI).
In real-life transactions/sales, there are often different physical documents for different stages of a transaction (proposal, purchase order, receipt, etc.), and even more for buying a house, with settlement etc.
OTOH, this feels like playing with semantics to me; I'm uncomfortable with the nominalization of converting verbs into nouns to make it RESTful, "because it uses nouns (URIs) instead of verbs (RPC calls)", i.e. the noun "committed transaction resource" instead of the verb "commit this transaction". I guess one advantage of nominalization is that you can refer to the resource by name, instead of needing to specify it in some other way (such as maintaining session state, so you know what "this" transaction is...).
But the important question is: what are the benefits of this approach? i.e. In what way is this REST style better than RPC style? Is a technique that's great for webpages also helpful for processing information, beyond store/retrieve/update/delete? I think that the key benefit of REST is scalability; one aspect of that is not needing to maintain client state explicitly (but making it implicit in the URI of the resource, with the next states as links in its representation). In that sense it helps. Perhaps this helps in layering/pipelining too? OTOH, only the one user will look at their specific transaction, so there's no advantage in caching it so others can read it, which is the big win for HTTP.
I've drifted away from this topic for 10 years. Coming back, I can't believe the religion masquerading as science that you wade into when you google rest+reliable. The confusion is mythic.
I would divide this broad question into three:
Downstream services. Any web service you develop will have downstream services that you use and whose transaction syntax you have no choice but to follow. You should try to hide all this from users of your service, and make sure all parts of your operation succeed or fail as a group, then return this result to your users.
Your services. Clients want unambiguous outcomes to web-service calls, and the usual REST pattern of making POST, PUT or DELETE requests directly on substantive resources strikes me as a poor, and easily improved, way of providing this certainty. If you care about reliability, you need to identify action requests. This ID can be a GUID created on the client or a seed value from a relational DB on the server; it doesn't matter. For server-generated IDs, use a 'preflight' request-response to exchange the ID of the action. If this request fails or half succeeds, no problem: the client just repeats the request. Unused IDs do no harm. This is important because it lets all subsequent requests be fully idempotent, in the sense that if they are repeated n times they return the same result and cause nothing further to happen. The server stores all responses against the action ID, and if it sees the same request, it replays the same response (a minimal sketch of this replay idea follows this list). A fuller treatment of the pattern is in this Google doc. The doc suggests an implementation that, I believe(!), broadly follows REST principles. Experts will surely tell me how it violates others. This pattern can be usefully employed for any unsafe call to your web service, whether or not there are downstream transactions involved.
Integration of your service into "transactions" controlled by upstream services. In the context of web services, full ACID transactions are usually considered not worth the effort, but you can greatly help consumers of your service by providing cancel and/or confirm links in your confirmation response, and thus achieve transactions by compensation.
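A minimal sketch of that store-and-replay idempotency idea; the IdempotentExecutor name and string-typed responses are assumptions for illustration:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical idempotency layer: the first request with a given action ID
// executes and stores its response; any repeat simply replays that response.
public class IdempotentExecutor {

    private final Map<String, String> responsesByActionId = new ConcurrentHashMap<>();

    public String execute(String actionId, Supplier<String> action) {
        return responsesByActionId.computeIfAbsent(actionId, id -> action.get());
    }
}

A client that times out resends the same action ID, so the transfer is applied at most once no matter how many retries occur.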
Your requirement is a fundamental one. Don't let people tell you your solution is not kosher. Judge their architectures in the light of how well, and how simply, they address your problem.
If you stand back to summarize the discussion here, it's pretty clear that REST is not appropriate for many APIs, particularly when the client-server interaction is inherently stateful, as it is with non-trivial transactions. Why jump through all the hoops suggested, for client and server both, in order to pedantically follow some principle that doesn't fit the problem? A better principle is to give the client the easiest, most natural, productive way to compose with the application.
In summary, if you're really doing a lot of transactions (types, not instances) in your application, you really shouldn't be creating a RESTful API.
You'd have to roll your own "transaction id" type of tx management. So it would be 4 calls:
http://service/transaction (some sort of tx request)
http://service/bankaccount/bob (give tx id)
http://service/bankaccount/john (give tx id)
http://service/transaction (request to commit)
You'd have to handle the storing of the actions in a DB (if load-balanced), in memory, or the like, then handle commit, rollback, and timeout.
Not really a RESTful day in the park.
First of all, transferring money is not something you are unable to do in a single resource call. The action you want to perform is sending money. So you add a money-transfer resource to the account of the sender:
POST: accounts/alice, new Transfer {target: "BOB", amount: 100, currency: "CHF"}.
Done. You do not need to know that this is a transaction that must be atomic, etc. You just transfer money, aka send money from A to B.
But for the rare cases, here is a general solution:
If you want to do something very complex involving many resources, in a defined context, with a lot of restrictions that actually cross the what-vs-why barrier (business vs. implementation knowledge), you need to transfer state. Since REST should be stateless, you as a client need to transfer the state around.
If you transfer state, you need to hide the information inside it from the client. The client should not know internal information that is needed only by the implementation and carries no business-relevant meaning. If that information has no business value, the state should be encrypted, and a metaphor like a token, pass, or something similar should be used.
This way one can pass internal state around, and, using encryption and signing, the system can still be secure and sound. Finding the right abstraction for the client as to why it passes state information around is something that is up to the design and architecture.
The real solution:
Remember that REST talks HTTP, and HTTP comes with the concept of cookies. Those cookies are often forgotten when people talk about REST APIs and workflows and interactions spanning multiple resources or requests.
Remember what is written in the Wikipedia article about HTTP cookies:
Cookies were designed to be a reliable mechanism for websites to remember stateful information (such as items in a shopping cart) or to record the user's browsing activity (including clicking particular buttons, logging in, or recording which pages were visited by the user as far back as months or years ago).
So basically, if you need to pass on state, use a cookie. It is designed for exactly the very same reason, it is HTTP, and therefore it is compatible with REST by design :).
The better solution:
If you talk about a client performing a workflow involving multiple requests, you are usually talking about a protocol. Every form of protocol comes with a set of preconditions for each potential step, like performing step A before you can do B.
This is natural, but exposing the protocol to clients makes everything more complex. In order to avoid that, just think about what we do when we have to handle complex interactions in the real world... we use an agent.
Using the agent metaphor, you can provide a resource that can perform all the necessary steps for you and store the actual assignment/instructions it is acting upon in its list (so we can use POST on the agent or an 'agency').
A complex example:
Buying a house:
You need to prove your credibility (like providing your police-record entries), you need to sort out the financial details, you need to buy the actual house using a lawyer and a trusted third party storing the funds, verify that the house now belongs to you, and add the purchase to your tax records, etc. (just as an example; some steps may be wrong or whatever).
These steps might take several days to complete; some can be done in parallel, etc.
In order to do this, you just give the agent the task "buy house", like:
POST: agency.com/ { task: "buy house", target: "link:toHouse", credibilities: "IamMe" }.
Done. The agency sends you back a reference that you can use to see and track the status of this job, and the rest is done automatically by the agents of the agency.
Think about a bug tracker, for instance. Basically, you report the bug and can use the bug ID to check what's going on. You can even use a service to listen for changes to this resource. Mission done.
You must not use server-side transactions in REST.
One of the REST constraints:
Stateless
The client–server communication is further constrained by no client context being stored on the server between requests. Each request from any client contains all of the information necessary to service the request, and any session state is held in the client.
The only RESTful way is to create a transaction redo log and put it into the client state. With each request the client sends the redo log, and the server redoes the transaction and either
rolls the transaction back but provides a new transaction redo log (one step further),
or finally completes the transaction.
But maybe it's simpler to use a server-session-based technology which supports server-side transactions.
I think that in this situation it is totally acceptable to break the pure theory of REST. In any case, I don't think there is actually anything in REST that says you can't touch dependent objects in business cases that require it.
I really think it's not worth the extra hoops you would jump through to create a custom transaction manager when you could just leverage the database to do it.
In the simple case (without distributed resources), you could consider the transaction as a resource, where the act of creating it attains the end objective.
So, to transfer between <url-base>/account/a and <url-base>/account/b, you could POST the following to <url-base>/transfer:
<transfer>
  <from><url-base>/account/a</from>
  <to><url-base>/account/b</to>
  <amount>50</amount>
</transfer>
This would create a new transfer resource and return the new URL of the transfer, for example <url-base>/transfer/256.
At the moment of the successful POST, the 'real' transaction is carried out on the server, with the amount removed from one account and added to the other.
This, however, doesn't cover a distributed transaction (if, say, 'a' is held at one bank behind one service and 'b' is held at another bank behind another service), other than to say "try to phrase all operations in ways that don't require distributed transactions".
I believe this would be a case for using a unique identifier generated on the client, to ensure that a connection hiccup does not result in a duplicate being saved by the API.
I think using a client-generated GUID field along with the transfer object, and ensuring that the same GUID is not inserted again, would be a simpler solution to the bank-transfer matter.
I do not know about more complex scenarios, such as multiple airline-ticket bookings or micro-architectures.
I found a paper about the subject, relating the experiences of dealing with transaction atomicity in RESTful services.
I guess you could include the TAN in the URL/resource:
PUT /transaction to get the ID (e.g. "1")
[PUT, GET, POST, whatever] /1/account/bob
[PUT, GET, POST, whatever] /1/account/bill
DELETE /transaction with ID 1
Just an idea.