HATEOAS REST API and Domain Driven Design, where to put the workflow logic?

This is intended as a follow-up question to RESTful API: Where should I code my workflow? A brief summary of that question (adapted to fit my question a bit better) would be something like this:
Each domain object contains business logic associated with the specific object in a certain bounded context (X). The REST API contains logic to transform the result of a query or command into data sent over the wire (JSON, for example). When using HATEOAS and hypermedia, we want to model relationships between resources using links. But in order to determine which links should be returned by the REST API, one often needs to resort to business logic/rules. The question is: where do these "workflow rules" belong in a DDD application? Would they perhaps be in a different bounded context dealing only with the workflow rules (perhaps in a "partner"-like relationship with X), or would they belong in the X BC itself?

I'm not a DDD expert (I'm only about 100 pages into Eric Evans's book), but I can tell you what happened in our case of an e-commerce catalog.
Initially we had this business flow in the same bounded context as the app: based on the data and the requesting user's roles/permissions, we altered the state transitions presented to the user (you said links, but state transitions are really the heart of it; links are just one way to present them). This worked OK, but a lot of repeated logic was being executed. Then we added a search application as a different bounded context, since it presented the same data but in different collections/groupings, and we didn't want the two to get out of sync.
We moved to a CQRS implementation, so a lot of our business logic is now "pre-calculated"; it lives in something like a "partner context" as part of our projection from the write model to the read model. We don't store the links specifically, but instead flag allowed behaviours on the data. Both the catalog application and the search application use this read model and its behaviour flags to determine which state transitions to present to the client.
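To make that concrete, here is a minimal sketch of what such a projection might look like (all names invented for illustration; this is not our actual code). The business rules run once, during projection, and both read sides just consult the flags:

// Hypothetical write-model entity; the real one carries the full business logic.
class Order {
    private final String id;
    private final String status;

    Order(String id, String status) {
        this.id = id;
        this.status = status;
    }

    String getId() { return id; }
    String getStatus() { return status; }

    boolean isAvailableToCancel() { return "PLACED".equals(status); }
    boolean isAvailableToPay() { return "PLACED".equals(status) || "CONFIRMED".equals(status); }
}

// Read-model document carrying the pre-calculated behaviour flags.
class OrderReadModel {
    String orderId;
    String status;
    boolean canCancel; // behaviour flag, computed once during projection
    boolean canPay;    // behaviour flag, computed once during projection
}

// The "partner context": projects the write model into the read model that
// both the catalog application and the search application consume.
class OrderProjection {
    OrderReadModel project(Order order) {
        OrderReadModel doc = new OrderReadModel();
        doc.orderId = order.getId();
        doc.status = order.getStatus();
        doc.canCancel = order.isAvailableToCancel(); // rule evaluated here, not per request
        doc.canPay = order.isAvailableToPay();
        return doc;
    }
}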
Some things still happen on the fly when the resource is requested, almost at the serialization step. We've aimed to move these into the pre-calculation as much as possible, but what we can't precalculate (purely because of scale) is anything based specifically on the user, like the recommended search, which uses BI data within the search engine to return results. We could pre-calculate that for every single user, but the numbers are too huge for our systems right now. So we send the main app's calculated resource (from the main context) and pass it through yet another partner context to refine it further.
Another use case is that some links are only presented to authenticated users and are hidden from anonymous ones. We do this in the main app's context, but it's starting to become a hindrance, because the presence of those links indicates what the user behind the request can do in other apps (like change a password), and our context isn't really the authority on what the user can do in some other app. It would be nicer to hand the resource off to that context and have it processed there before we return it to the user. One simple solution we used was, instead of deep linking to functions within the external context, to link to a root resource of that context and let it present its own state transitions. It means there are two requests, but it cleans up where the logic lives and who owns it.
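In the Spring HATEOAS style of the Java example further down the page, the hand-off looks roughly like this (URL, rel name and types are all invented for illustration):

import org.springframework.hateoas.Link;
import org.springframework.hateoas.ResourceSupport;

// Hypothetical resource type for illustration.
class CatalogHomeResource extends ResourceSupport {
}

class CatalogHomeAssembler {
    CatalogHomeResource toResource(boolean authenticated) {
        CatalogHomeResource resource = new CatalogHomeResource();
        if (authenticated) {
            // Link only to the account context's root; that context then
            // presents whatever transitions (change password, ...) it allows.
            resource.add(new Link("https://accounts.example.com/me", "account"));
        }
        return resource;
    }
}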

The workflow is actually composed of two things: domain knowledge and HATEOAS infrastructure.
If the application is not designed with hypermedia, you may not even need the HATEOAS part. But the domain knowledge still exists; it just lives in the end users' minds. You still need that domain knowledge on the server to validate the users' commands, in case they get the workflow wrong.
So for HATEOAS, I split the workflow implementation into two parts: the domain models and the HATEOAS infrastructure.
The domain models express the domain knowledge.
The HATEOAS infrastructure, on the other hand, is hypermedia-specific, so keep it out of the domain.
Here is a Java example, using Spring HATEOAS. I put the workflow in the procedure that maps domain models to resources:
@Component
public class OrderResourceAssembler extends ResourceAssemblerSupport<Order, OrderResource> {

    @Override
    public OrderResource toResource(Order model) {
        OrderResource resource = mapToResource(model);
        resource.add(orderResourceHref(model).withSelfRel()); // add self link
        if (model.isAvailableToCancel()) { // let the model tell
            // add cancel link
            resource.add(orderResourceHref(model).withRel("cancel"));
        }
        if (model.isAvailableToPay()) { // let the model tell
            // add payment link
            resource.add(new Link("http://www.restfriedchicken.com/online-txn/" + model.getTrackingId(), "payment"));
        }
        // etc.
        return resource;
    }

    // omitted code
}
If the model cannot tell on its own, you may introduce a specification object to do the job.
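For instance, a minimal sketch of such a specification object (all names invented); the assembler above would then ask the specification instead of the model, e.g. if (new OrderIsCancellable().isSatisfiedBy(model)) { ... }:

class Order {
    private final boolean paid;
    private final boolean shipped;

    Order(boolean paid, boolean shipped) {
        this.paid = paid;
        this.shipped = shipped;
    }

    boolean isPaid() { return paid; }
    boolean isShipped() { return shipped; }
}

// Holds a domain rule that doesn't fit naturally on the model itself,
// e.g. because it combines knowledge from more than one place.
class OrderIsCancellable {
    boolean isSatisfiedBy(Order order) {
        return !order.isPaid() && !order.isShipped();
    }
}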

Wherever a resource resides is also the place that needs to marshal access to it.
Say you've got a larger system with three bounded contexts: employee management, customer intake, and service provisioning.
GET an employee's information from employee management? Whether to show the link that can be used to POST or DELETE a role is for employee management to decide.
GET a freshly submitted service contract from customer intake? Whether to show the link that can be used to PUT up a revision or POST follow-up details is for customer intake to decide. Sure, employee management presides over handing out the roles necessary to work with new service contracts, but customer intake is responsible for knowing which claims and other conditions are required, and it will ultimately ensure that everything checks out.
Likewise, GET a physical installation's software profile from service provisioning? Service provisioning is going to need to make sure that everything is in order with the other services/contexts (customer's account paid up, current user in the network-services role, device not locked out for maintenance) before it gives out the link where you can PUT the new quality-of-service specs.
These kinds of 'workflow' decisions are integral to each context -- they should not be split out elsewhere.
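As a hedged sketch of the customer-intake case (every name here is invented): the context consults claims obtained from the other contexts, but the decision itself never leaves the context.

class CurrentUser {
    private final boolean hasContractRole; // claim handed out by employee management

    CurrentUser(boolean hasContractRole) {
        this.hasContractRole = hasContractRole;
    }

    boolean hasContractRole() { return hasContractRole; }
}

class ServiceContract {
    private final boolean submitted;

    ServiceContract(boolean submitted) {
        this.submitted = submitted;
    }

    boolean isRevisable() { return submitted; } // customer intake's own rule
}

// Lives inside customer intake: it uses the claim, but owns the decision
// about whether the revision link is handed out.
class ContractLinkPolicy {
    boolean showRevisionLink(ServiceContract contract, CurrentUser user) {
        return contract.isRevisable() && user.hasContractRole();
    }
}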


REST API with generic User

I have created a few REST APIs by now, and I have always preferred a solution where I create an endpoint for each resource.
For example:
GET .../employees/{id}/account
GET .../supervisors/{id}/account
and the same with the other HTTP methods like PUT, POST and DELETE. This blows up my API quite a bit. In my REST APIs I have generally preferred redundancy in order to reduce complexity, but in cases like this it always feels a bit cumbersome. So I came up with another approach where I use inheritance to keep to the DRY principle.
In this case there is a base class User, and via inheritance my Employee and Supervisor models extend from it. Now I only need one endpoint like
GET .../accounts/{id}
and the server decides which object is returned. While this thins out my API, it increases complexity, and in my API documentation (where I use Spring REST Docs) I have to document two different objects for the same endpoint.
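To illustrate, here is a stripped-down sketch of the inheritance approach (field names invented for the example). One endpoint serves both account shapes, and the serializer emits whichever concrete type the server resolves:

abstract class User {
    public String id;
    public String name;
}

class Employee extends User {
    public String department;
}

class Supervisor extends User {
    public java.util.List<String> subordinateIds;
}

class AccountController {
    // Backs GET /accounts/{id}: the server decides which concrete type comes back.
    User getAccount(String id) {
        return findUser(id); // returns an Employee or a Supervisor
    }

    private User findUser(String id) {
        // stand-in for the real repository lookup
        Employee e = new Employee();
        e.id = id;
        e.name = "example";
        e.department = "sales";
        return e;
    }
}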
Now I am not sure what the right way to do it is (or at least the better way). When I think about REST, I think in resources. So my employees are a separate resource, as are my supervisors.
Because I have always followed this approach, I think I might be mentally stuck in it and may have lost objectivity.
It would be great if you could give me some objective advice on how this should be handled.
I built an online service that deals with this too. It's called Wirespec:
https://wirespec.dev
The backend automatically creates the URLs for users and their endpoints dynamically with very little code. The code for handling the frontend is written in Kotlin, while the backend that generates APIs for users is written in Node.js. In both cases the amount of code is negligible and self-maintaining, meaning that if a user changes the name of their API, the endpoint automatically updates with the new name. Here are some examples:
API: https://wirespec.dev/Wirespec/projects/apis/Stackoverflow/apis/getUserDetails
Endpoint: https://api.wirespec.dev/wirespec/stackoverflow/getuserdetails?id=100
So to answer your question: it really doesn't matter where you place the username in the URL.
Try signing in to Wirespec with your GitHub account and you'll see where your GitHub username appears in the URL.
There is, unfortunately, no right or wrong answer to this one; it solely depends on how you want to design things.
With that being said, you need to distinguish between client and server. A client shouldn't know the nifty details of your API. It is just an arbitrary consumer of your API that is fed all the information it needs in order to make informed choices. E.g. if you want the client to send the server some data that follows a certain structure, the best advice is to use form-like representations, such as HAL Forms, Ion or even HTML. Forms not only teach a client about the respective properties a resource supports but also about the HTTP operation to use, the target URI to send the request to, and the representation format to send the data in, which in the case of HTML is application/x-www-form-urlencoded most of the time.
In regards to receiving data from the server, a client shouldn't attempt to extract knowledge from URIs directly, as they may change over time and thus break clients that rely on such a methodology; it should rely on link relation names instead. Multiple link relation names may be attached to a single URI. A client that doesn't know the meaning of one should simply ignore it. Here, either one of the standardized link relation names should be used, or an extension mechanism as defined by Web linking. While an arbitrary client might not make sense of such an "arbitrary string" out of the box, the link relation name may be considered the predicate in a triple, as often used in ontologies, where the link relation name "connects" the current resource with the one the link was annotated for. From a set of URIs and link relation names you might therefore "learn" a semantic graph over all the resources and how they are connected to each other. E.g. you might annotate a URI pointing to a form resource with prefetch to hint to a client that it may load the content of the referenced URI while it is idle, as the likelihood is high that the client will want to load that resource next anyway. The same URI might also be annotated with edit-form to hint to a client that the resource will provide an edit form for sending data to the server. It might also carry a Web linking extension such as https://acme.org/ref/orderForm that allows clients that support such a custom extension to react to the resource accordingly.
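Sketched in Java, in the same Spring HATEOAS style used elsewhere on this page (the URI and the decorator are invented): a single URI can carry several link relation names, and a client simply ignores the rels it doesn't understand.

import org.springframework.hateoas.Link;
import org.springframework.hateoas.ResourceSupport;

// Hypothetical resource type for illustration.
class AccountResource extends ResourceSupport {
}

class AccountLinkDecorator {
    AccountResource decorate(AccountResource resource) {
        String formUri = "https://api.example.com/accounts/form"; // invented URI
        resource.add(new Link(formUri, "prefetch"));  // hint: fetch while idle
        resource.add(new Link(formUri, "edit-form")); // standardized relation name
        resource.add(new Link(formUri, "https://acme.org/ref/orderForm")); // extension rel
        return resource;
    }
}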
In your accounts example, it is totally fine to return different data for different resources under the same URI path. E.g. resource A, pointing to an employee account, might only contain the properties name, age, position, salary, while resource B, pointing to a supervisor, could also contain a list of subordinates or the like. To a generic HTTP client these are two totally different resources even though they use a URI structure like /accounts/{id}. Resources in a REST architecture are untyped, meaning they don't have a type out of the box per se. Think of some arbitrary Web page you access through your browser: your browser is not aware of whether the Web page it renders contains details about a specific car or about the most recent local news. HTML is designed to express a multitude of different data in the same way. Different media types, though, may provide more concrete hints about the data exchanged. E.g. text/vcard, application/vcard+xml or application/vcard+json all may represent data describing an entity (a human person, a juristic entity, an animal, ...), while application/mathml+xml might be used to express certain mathematical formulas, and so on. The more general a media type is, the more widespread usage it may find; with more narrow media types, however, you can provide more specific support. With content-type negotiation you also have a tool at hand by which a client can express its capabilities to servers, and if the server/API is smart enough it can respond with a representation the client is able to handle.
This, in essence, is all that REST is, and if followed correctly it allows the decoupling of clients from specific servers. While this might sound confusing and burdensome to implement at first, these techniques are intended for when you strive for a long-lasting environment that is still able to operate in the decades to come. Evolution is inherently integrated into this philosophy and supported by the decoupled design. If you don't need all of that, REST might not actually be the thing you want to do. But if you still want something like REST, you should for sure design the interactions between client and server as if you were interacting with a typical Web server. After all, REST is just a generalization of the concepts that have been used on the Web quite successfully for the past two decades.

How to structure a RESTful backend API with a database?

I want to make a REST API which stores its data in a database.
While I was reading about some design patterns I came across the remote facade, and the book I was reading mentions that the role of this facade is to translate the coarse-grained methods from the remote calls into fine-grained local calls, and that it should not contain any extra logic. As an explanation, it says that the program should still work without this facade.
Here's an example
Yet I have two questions:
Considering I also have a database, does it make sense to split the general call into specific calls for each attribute? Doesn't it make more sense to have a general "get data" method that runs one query against the database and converts the result into a usable object, to reduce the number of database calls? So instead of splitting the get-address call into get-street, get-city and get-zip, make one DB call for all that info.
With all this in mind, and in my case using golang, how should the project be structured in terms of files and functions?
I will have the main file with all the endpoints from the REST API, calling the controllers that handle these requests.
I will have a set of files that define those controllers. Are these controllers the remote facade? In that case, should those methods contain no logic and just call the equivalent local methods?
Should the local methods call the database directly, or should they use some sort of helper class that accesses the database?
Assuming the answers to all these questions are positive, does the following structure make sense?
Main
Controllers
Domain
Database helper
First and foremost, as Mike Amundsen has stated:
Your data model is not your object model is not your resource model is not your affordance model
Jim Webber said something very similar: by implementing a REST architecture you have an integration model, in the form of the Web, which is governed by HTTP, alongside your domain model. Resources adapt and project your domain model to the world, and there is no 1:1 mapping between the data in your database and the representations you send out. A typical REST system has many more resources than you have DB entries in your domain model.
With that being said, it is hard to give concrete advice on how you should structure your project, especially in terms of a certain framework you want to use. In regards to Robert "Uncle Bob" C. Martin's view on code structure: it should tell you something about the intent of the application and not about the framework you use. According to him, architecture is about intent. What you usually see, though, is the default structure imposed by a framework such as Maven, Ruby on Rails, and so on. For golang you should probably read through the relevant documentation or blogs, which may or may not give you some ideas.
In terms of accessing the database, you might either follow a microservice architecture where each service maintains its own database, or attempt something like a distributed monolith that acts as one cohesive system and shares the database among all its parts. In case you scale out and a couple of parallel services consume data, e.g. via a message broker, you might need a distributed lock and/or queue to guarantee that the data is not consumed by multiple instances at the same time.
What you should do, however, is design your data layer in a way that scales well. What many developers often forget or underestimate is the benefit they can gain from caching. Links are basically used on the Web to reference one resource from another, giving the relation some semantic context through well-defined link-relation names. Link relations also allow a server to control its own namespace and change URIs as needed. But URIs are not only pointers to a resource a client can invoke; they are also keys for a cache. Caching can take place in multiple locations: on the server side, to avoid costly calculations or lookups; on the client side, to avoid sending requests out at all; or on intermediary hops, which take pressure away from heavily requested servers. Fielding even made caching a constraint that needs to be respected.
Which attributes you should create queries for depends entirely on the use case you are trying to depict. In the address example given, it does make sense to return the address information all at once, as the street or zip code is rarely queried on its own. If the address is part of some user or employee data, it is less clear whether to return that information as part of the user or employee data, or just as a link to be queried in a further request. What you return may also depend on the capabilities of the media type your client and service agree upon (content-type negotiation).
If you implement something like a grouping, e.g. some football players and the categories they belong to, such as their teams and whether they are offense or defense players, you might have a Team A resource that includes all of the players as embedded data. Within the DB you could either have a separate table for teams with references to the respective players, or the team could just be a column in the player table. We don't know, and a client usually doesn't care either. From a design perspective you should, however, be aware of the benefits and consequences of including all the players at once, of only providing links to the respective players, or of a mixed approach presenting some base data plus a link to learn further details.
The latter approach is probably the most sensible, as it gives a client enough information to determine whether more detailed data is needed. If it is, a simple GET request to the provided URI is enough, and it might be served by a cache and thus never reach the actual server at all. The first approach certainly has the disadvantage that it doesn't use caching optimally and may return far more data than actually needed. The links-only approach may not provide enough information, forcing the client to perform a follow-up request to learn about the team member. But as mentioned before, you as the service designer decide which URIs or queries are returned to the client, and you can design your system and data model accordingly.
In general, what you do in a REST architecture is provide a client with choices. It is good practice to design the overall interaction flow as a state machine that is traversed by receiving requests and returning responses. As REST uses the same interaction model as the Web, it probably feels more natural to design the whole system as if you were implementing it for the Web, and then apply that design to your REST system.
Whether controllers should contain business logic or not is primarily a matter of opinion. As Jim Webber correctly stated, HTTP, which is the de-facto transport layer of REST, is an
application protocol whose application domain is the transfer of documents over a network. That is what HTTP does. It moves documents around. ... HTTP is an application protocol, but it is NOT YOUR application protocol.
He further points out that you have to narrow HTTP into a domain application protocol and trigger business activities as a side effect of moving documents around the network. So it's the side effect of moving documents over the network that triggers your business logic. There is no hard rule on whether to include business logic in your controller or not, but usually you try to keep the business logic in its own layer, e.g. as a service that you invoke from within the controller. That allows you to test the business logic without the controller, and thus without a real HTTP request.
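A minimal sketch of that layering (all names invented): the controller only translates the received document into a service call, and the service, which owns the business logic, can be tested without any HTTP request.

class RefundRequest {
    public String orderId; // the "document" that was moved over the network
}

// Business-logic layer: no HTTP types appear in its signatures.
class RefundService {
    void refund(String orderId) {
        // validate, load the order, create the refund, adjust ratings, ...
    }
}

// Controller: triggers the business activity as a side effect of receiving a document.
class RefundController {
    private final RefundService service;

    RefundController(RefundService service) {
        this.service = service;
    }

    // Would back something like POST /refunds.
    void post(RefundRequest body) {
        service.refund(body.orderId);
    }
}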
While this answer can't provide more detailed information, partly due to the broad nature of the question itself, I hope I could shed some light on the areas you should put some thought into, and on the fact that your data model is not necessarily your resource or affordance model.

Struggling with REST and Larman's (RPC?) system operations

In Craig Larman's book Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and Iterative Development (3rd Edition), a use case is converted to a System Sequence Diagram (SSD) which contains system operations. It's a way of doing high-level design first, that's easily traceable to the requirements (use case), and later refining it by detailing the system operations at the domain level via OO design principles.
I'm trying to understand this methodology using RESTful services. The trouble spots seem to be with resources and the stateless operations.
Here's a sample from the book, an SSD with four system operations:
An SSD is also an abstraction of the presentation layer (on the left) and the domain layer (on the right). The domain layer might also include the application and/or business logic.
In Larman's approach, a system operation (e.g., makeNewSale()) should be handled by a GRASP Controller, a non-presentation-layer domain object that handles the system operation.
Now, let's say we're trying to use RESTful services in this approach:
The Uniform Interface constraint of REST is part of the presentation layer. For example, one can configure the route that a POST operation on a resource takes, allowing it to eventually call some operation of an object in the domain layer. The Byte Rot blog by Aliostad explains the details of traditional layered architectures and REST very well.
In Larman's example, the Cashier clicks somewhere in the GUI to invoke a POST request to http://...com/sales, and eventually a GRASP controller receives makeNewSale(). The REST part (the POST to http://...com/sales) creates a new resource for the sale, e.g. 001, and makeNewSale() is sent to a GRASP controller object.
Session controllers (mentioned in the GRASP Controller pattern) don't exist on the :System, since REST operations are stateless on the server. This implies that system operations might actually need more arguments to pass state information to the server, which isn't obvious from the use case. For example, makeNewSale() probably needs to receive and process an authentication token as an argument, since :System has no session information. I found a related question and answers here at RESTful Authentication.
So, assuming my understanding up to now is right, my questions are:
Does the Uniform Interface constraint of REST respect the loose coupling of presentation/domain layer separation? At first, it seems to be a detail of the presentation layer (like an actionPerformed() method in Java Swing). But the part that bothers me is that http://...com/sales ties right into a Sale (which is a domain object). Another way of putting it: by creating REST resources and accessing them with REST verbs via a Uniform Interface, isn't application/business logic being put into the presentation layer? Larman's SSD approach is about layers, and he explicitly states that application logic should not go in the presentation layer. For example, a POST to http://...com/sales creates a new sales/001 resource and also sends makeNewSale(). The first part seems like business logic. The fact that my REST resource names follow many of my domain object names makes my presentation layer seem more coupled to my domain layer than if I were using Swing and only called makeNewSale() from an actionPerformed() on a JButton.
Larman's SSD concept is oriented towards the RPC model (with state in the server), but can an SSD be made to jibe easily with a REST philosophy? For example, endSale() exists to break the system out of a loop of enterItem() calls. Later in the book, Larman even presents a state diagram about this:
I read about how to manage state in REST, and it seems one either has to take care to pass state with each system operation where needed (as in the authentication token example above), or the "state" is actually encapsulated in the resources. I thought about it, and with REST, endSale() could probably be removed entirely.
I used a concrete example here, so that the answers can also be concrete.
I'll try to address your questions, but keep in mind that this is a very debatable topic.
The REST Uniform Interface does respect the separation of concerns between presentation and business logic. The separation is about the presentation layer not knowing details about the implementation of the business logic, and vice versa; but there is no point in hiding what can be done with a domain entity (i.e. the services that are available for a resource). What you want to hide is how the actions are executed.
REST's HATEOAS principle dictates that the representation of a resource (the instance of a domain entity) should encapsulate the state of the client/server interaction, so the services should return that state to the client one way or another. This, and the use of hypermedia (i.e. links to available services, representing actions that can be performed on a resource given its state), should map quite easily to the state diagram you posted.
Let's continue with your example:
Let's say there is a service newSale returning a Sale object. This service must be invoked using the POST method.
In the Sale's hypermedia section there are two links: one to the addItem service (which is also a POST and accepts an updated version of the Sale object, returning it after it has been saved), and one to the endSale service (also a POST), which saves the Sale and marks it as 'complete'. This last service does not return any resource representation, just the HTTP response OK if the object is successfully saved.
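A sketch of that interaction, in the Spring HATEOAS style used earlier on this page (URIs and rel names invented): the Sale representation carries the state machine, so no session state is needed on the server.

import org.springframework.hateoas.Link;
import org.springframework.hateoas.ResourceSupport;

class SaleResource extends ResourceSupport {
    public boolean complete;
}

class SaleResourceAssembler {
    SaleResource toResource(String saleId, boolean complete) {
        SaleResource resource = new SaleResource();
        resource.complete = complete;
        String base = "https://api.example.com/sales/" + saleId;
        resource.add(new Link(base, "self"));
        if (!complete) {
            // An open sale advertises its available transitions; once it is
            // complete the links disappear, which ends the enterItem() loop.
            resource.add(new Link(base + "/items", "addItem"));
            resource.add(new Link(base + "/completion", "endSale"));
        }
        return resource;
    }
}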

Are these valid disadvantages of a REST-ful API?

I get the basic idea of a REST-ful API, but I can see how they're not ideal. I want to know where I'm wrong in the following assumptions.
REST-ful API unnecessarily exposes models
A REST-ful API is usually a set of actions, e.g. CRUD, on an entity. Sometimes performing a business action requires many models to be manipulated, not just one.
For example, consider the business action of refunding an order, which then decreases a buyer's rating. Several models are possibly involved here, e.g. Order, OrderItem, Payment, Buyer, Refund.
We often end up exposing a single 'parent' model with an action that updates itself and its sub-models, or we end up exposing many models that must each be updated appropriately to accomplish the business action as a whole.
REST-ful API forces one to think in terms of manipulating models instead of the natural behavior of stating intent
Consider a customer service rating application. A customer can state his/her happiness once a support call ends, e.g. "I'm satisfied" / "I'm angry" / "I'm neutral".
In a REST-ful API, the customer has to figure out which exact model to manipulate in order to state how he feels: perhaps a CustomerResponse model, or a Feedback model. Why can't the customer just hit an endpoint, identify himself, and simply state whether he's happy or not, without having to know the underlying model that tracks his response?
REST-ful API Update action oversimplifies too much
Sometimes you want to do more with a model than just update it. An update can be many things.
Consider a Word model. You can reverse the characters, randomize the characters, uppercase/lowercase the characters, split the word, and perform many other actions that all mean the Word model is 'updated' in a certain way.
At this point, exposing just an Update action on Word probably oversimplifies how rich the Word model can be.
I do not believe the points you state above are really drawbacks of a RESTful API. More analytically:
REST-ful API unnecessarily exposes models
No models are exposed. Everything is handled by a single controller. The only thing exposed to the user is the route to the appropriate controller.
REST-ful API forces one to think in terms of manipulating models instead of the natural behavior of stating intent
Same as above. A single controller can handle the different customer happiness states. The distinction can be made by passing different POST parameters (e.g. { state: "happy" }).
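A hypothetical sketch of that single controller (names invented): the client states its intent with one POST parameter and never learns which model ends up storing it.

class SatisfactionController {
    // Would back something like POST /support_calls/{id}/satisfaction
    // with a body such as { "state": "happy" }.
    void post(String callId, String state) {
        switch (state) {
            case "happy":
            case "neutral":
            case "angry":
                record(callId, state);
                break;
            default:
                throw new IllegalArgumentException("unknown state: " + state);
        }
    }

    private void record(String callId, String state) {
        // persist a Feedback/CustomerResponse row; the client never sees which
    }
}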
REST-ful API Update action oversimplifies too much
Nothing stops you from manipulating the data that needs to be sent to your model before updating it. You can do whatever you want, however complex it may be, before updating your model.
Finally, I believe that a RESTful API is only as good as its implementation. Furthermore, if you wanted to find a flaw in the REST technique, it would be the fact that you cannot initiate transactions or push notifications from the server side.
First, Web service APIs that adhere to the REST architectural constraints are called RESTful APIs. HTTP-based RESTful APIs are defined by these aspects:
base URI, such as http://example.com/resources/
an Internet media type for the data. This is often JSON but can be any other valid Internet media type (e.g. XML, Atom, microformats, images, etc.)
standard HTTP methods (e.g., GET, PUT, POST, or DELETE)
hypertext links to reference state
hypertext links to reference related resources
Now, to your questions:
REST-ful API unnecessarily exposes models
In your example, if you want to refund someone, you obviously use more than one model.
RESTful doesn't mean that you expose only one model; for example, a POST to /api/refunds is a RESTful way of doing it without exposing a single model.
The only things people see from your API are the routes to your different actions in your different controllers.
REST-ful API forces one to think in terms of manipulating models instead of the natural behavior of stating intent
Your RESTful API is called from a front end (smartphone app, JavaScript app, etc.) or a back end (a server); the end user (here, the satisfied/angry/neutral client) is not obliged to see the URL called by your front end. The satisfaction form could be at /support/survey while the server API URL could be a POST to /api/support_calls/1/surveys.
REST-ful API Update action oversimplifies too much
A PUT on a RESTful route does NOT mean that you should only update one model. You can manipulate params, create a model, update another, and then update your main model.
Finally
Don't forget that a RESTful architecture is a route and URL architecture created for developers; you can do anything you want in your controllers. It is just a convention-based way of exposing your URLs to your API's consumers.
I'll try to address your concerns, but keep in mind that the topic is very much debatable, so take this as my humble opinion.
1) About the model being unnecessarily exposed: I think it is mostly a matter of modeling your domain entities in a way that allows you to express all the available actions and concepts in terms of 'doing something on a resource'. In the refund example, you could for instance model a Refund entity and perform actions on that.
2) Here too it is a matter of modeling domain entities and services. Your happiness service could be a matter of having a service accept a user id and an integer expressing the user's satisfaction level.
3) In the case of the Word model, you can simply use the PATCH (or PUT) HTTP verb and provide a service that simply overwrites the resource, which would in turn be manipulated by the client.
Again, REST is a paradigm, so it is (among other things) a way to organize the objects of your domain and the actions that can be performed on those objects. Obviously it has drawbacks and limitations, but I personally think that it is quite 'transparent' (i.e. it does not impose artificial constraints on your programming), and that it is quite easy to use, being mostly based on widely accepted conventions and technologies (i.e. HTTP verbs and so on).

Best practices for mapping RESTful resources to existing domains

We are going to be creating some RESTful services which are essentially going to be a passthrough to some ye-olde-fashioned SOAP-based web services. The SOAP services are more generalized and will be used across our entire organization (at least that's the plan). The RESTful services are tailored to specific clients. This decision was already made and is unfortunately out of my control to change...
We're struggling with how to structure our RESTful resources in a way that makes sense, follows REST best practices, and calls these SOAP services without causing ourselves too much pain.
We have some freedom around the level of granularity for the back-end services, but the general consensus is: keep them coarse grained and don't tailor them to the specific client's needs.
This is leading to some interesting problems. For instance, how to deal with child resources of a parent. The typical example we've been working with is this: a customer with a child address.
We have a back-end SOAP service which updates a customer as a whole entity. However, the REST services client might need to update just the billing address. How do we best handle subsequent updates of the child resource?
Should we do the updates at the "parent" level (the customer), or expose a more fine-grained REST operation that treats the address as a resource and updates it that way? The latter seems to be the correct, RESTful way. But if we do that, we'll essentially be calling a coarse-grained backend service for just one piece of an update, which doesn't seem to make sense, as it's a pretty heavyweight call.
We're also struggling a bit with how to correlate RESTful resources with our back-end domain model. We might treat a RESTful resource as a single entity, but in our back-end domain it might be many different entities. We've got a relatively simple DB table to handle this now, but I'm not convinced it will scale as we map more and more resources to domain objects.
These are just a couple of examples of the things we're hitting... I'm wondering if anyone has run into similar issues and has any words of advice or might be able to point me to some articles that might have some best practices.
It seems like this isn't an unusual problem and will become more relevant as more and more applications use RESTful architectures, but I can't seem to find any other info on it.
Thanks much!
I've found that most domains I model map very easily to REST. The most important thing is to get into the right state of mind. REST models domains as a set of resources, whereas SOAP tends to emphasize a set of actions. Another way to look at this is that REST focuses on states and SOAP on state transitions. Even more simply, you can think of REST as nouns versus SOAP as verbs. I know this isn't a perfect analogy, but it's useful for me.
When you get into this state of mind the mapping should be almost automatic for you. Let's look at your Customer address interaction.
I don't think it's inappropriate to update the whole customer just to update the address, but not being familiar with your domain, maybe it is. If you want to interact specifically with the address, just create a sub-resource (or nested resource). You end up with the following mapping of URLs and verbs:
GET /customer/72/address # return the address of customer with id 72
PUT /customer/72/address # update the address of customer with id 72
In this particular case it probably doesn't make sense to map the delete, create or list actions.
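As a hedged sketch (names invented), the fine-grained address sub-resource can still call the coarse-grained SOAP service underneath, using read-modify-write so only one backend operation is needed per update:

class Address {
    public String street;
    public String city;
    public String zip;
}

class Customer {
    public Address address = new Address();
}

// Backs GET and PUT on /customer/{id}/address.
class CustomerAddressController {
    Address get(String customerId) {
        // fetch the whole customer from the SOAP service, return just the slice
        return loadCustomer(customerId).address;
    }

    void put(String customerId, Address address) {
        // read-modify-write against the coarse-grained backend:
        // load the whole customer, replace its address, send the entity back
        Customer customer = loadCustomer(customerId);
        customer.address = address;
        saveCustomer(customer);
    }

    private Customer loadCustomer(String customerId) {
        return new Customer(); // stand-in for the SOAP fetch call
    }

    private void saveCustomer(Customer customer) {
        // stand-in for the SOAP update-customer call
    }
}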
If your customer has a collection of some entity, let's say widgets, you could map the interactions like this:
GET /customer/72/widgets # return the widgets of customer with id 72
POST /customer/72/widgets # create a new widget for customer with id 72
GET /customer/72/widgets/158 # return widget with id 158 of customer 72
PUT /customer/72/widgets/158 # update widget with id 158 of customer 72
DELETE /customer/72/widgets/158 # delete widget with id 158 of customer 72
Just keep the right mindset, remember that the mapping won't be one-to-one, and you'll be fine.
Don't try to model your domain entities as resources; model your views/use cases as resources. There is absolutely nothing wrong with building resources for specific clients. REST encourages serendipitous re-use, i.e. unexpected re-use; it does not suggest you try to build resources that will work for every possible scenario.
RESTful frameworks should make resources really cheap and quick to create. If you need five different resources to model all the ways a customer is accessed, then that's fine.
In general I prefer to go with coarse grained resources, just because that is what HTTP is optimized for.