In my attempt to redesign an existing application using the REST architectural style, I came across a problem which I would like to term "media type explosion". However, I am not sure if this is really a problem or an inherent benefit of REST. To explain what I mean, take the following example.
One tiny part of our application looks like:
collection-of-collections -> collection-of-items -> items
i.e. the top level is a collection of collections, and each of these collections is again a collection of items.
Also, each item has 8 attributes which can be read and written individually. Trying to expose the above hierarchy as RESTful resources leaves me with the following media types:
application/vnd.mycompany.collection-of-collections+xml
application/vnd.mycompany.collection-of-items+xml
application/vnd.mycompany.item+xml
Furthermore, since each item has 8 attributes which can be read and written individually, this will result in another 8 media types. For example, one such media type, for the "value" attribute of an item, would be:
application/vnd.mycompany.item_value+xml
As I mentioned earlier, this is just a tiny part of our application, and I expect several different collections and items will need to be exposed in this way.
My questions are:
Am I doing something wrong by having this huge number of media types?
What is the alternative design method to avoid this explosion of media types?
I am also aware that the design above is highly granular, especially in exposing individual attributes of the item and having separate media types for each of them. However, making it coarser means I will end up transferring unnecessary data over the wire when in reality the client only needs to read or write a single attribute of an item. How would you approach such a design issue?
One approach that would reduce the number of media types required is to use a media type defined to hold lists of other media types. This could be used for all of your collections; lists generally tend to have a consistent set of behaviors.
You could roll your own vnd.mycompany.resourcelist or you could reuse something like an Atom collection.
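For example, a collection-of-items could be represented as an ordinary Atom feed, with each item appearing as an entry that links to the full item representation (a minimal sketch; the URIs, titles, and dates are illustrative):

<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Items in collection 42</title>
  <id>http://example.com/collections/42</id>
  <updated>2010-02-22T00:00:00Z</updated>
  <entry>
    <title>Item 7</title>
    <id>http://example.com/collections/42/items/7</id>
    <updated>2010-02-22T00:00:00Z</updated>
    <link rel="alternate" type="application/vnd.mycompany.item+xml"
          href="http://example.com/collections/42/items/7"/>
  </entry>
</feed>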
With regard to the specific resource representations like vnd.mycompany.item, what you can do depends a great deal on the characteristics of your client. Is it in a browser? Can you do code download? Is your client a rich UI, or is it a data-processing client?
If the client is going to do specific data processing then you pretty much need to stick with the precise media types, and you may end up with a large number of them. But look on the bright side: you will have fewer media types than you would have namespaces if you were using SOAP!
Remember, the media type is your contract; if your application needs to define lots of contracts with the client, then so be it.
However, I would not go as far as defining contracts to exchange single attribute values. If you feel the need to do that, then you are doing something else wrong in your design. Distributed interface design needs to have chunky conversations, not chatty ones.
I think I finally got the clarification I sought for the above question from Ian Robinson's presentation and thought I should share it here.
Recently, I came across the statement "media type for helping tune the hypermedia engine, schema for structure" in a blog entry by Jim Webber. I then found this presentation by Ian Robinson of ThoughtWorks. It is one of the best I have come across for providing a clear understanding of the roles and responsibilities of media types and schema languages (the entire presentation is a treat and I highly recommend it). Look out especially for the slides titled "You've Chosen application/xml, you bstrd." and "Custom media types". Ian clearly explains the different roles of schemas and media types. In short, this is my takeaway from Ian's presentation:
A media type description includes the processing model that identifies hypermedia controls and defines what methods are applicable for resources of that type. Identifying hypermedia controls means answering "How do we identify links?": in XHTML, links are identified by specific tags, while RDF has different semantics for the same idea. The next thing media types help identify is which methods are applicable to resources of a given type. A good example is the Atom (application/atom+xml) specification, which gives a very rich description of its hypermedia controls: it tells us how the link element is defined and what we can expect to be able to do when we dereference a URI, so it actually says something about the methods we can expect to apply to the resource. The structural information of a resource representation is NOT contained within the media type description but is provided by an appropriate schema for the actual representation, i.e. the media type specification won't necessarily dictate anything about the structure of the representation.
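For instance, Atom and its extensions define the link element and relation names such as "self", "next" (feed paging) and "edit" (AtomPub), so a client that understands application/atom+xml knows what it can expect when it dereferences these links (the hrefs below are made up):

<link rel="self" href="http://example.com/feed"/>
<link rel="next" href="http://example.com/feed?page=2"/>
<link rel="edit" href="http://example.com/entries/1"/>

The edit relation, for example, tells the client that the linked entry can be retrieved, updated and deleted, which is exactly the kind of method expectation a media type description carries.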
So what does this mean for us? Simply that we don't need a separate media type for each resource as described above in my original question. We just need one media type for the entire application. This could be a totally new custom media type, a custom media type which reuses existing standard media types, or better still, simply a standard media type that can be reused without change in our application.
Hope this helps.
In my opinion, this is the weak link of the REST concept. As an architectural and interface style, REST is outstanding and the work done by Roy F. and others has advanced the state of the art considerably. But there is an upper limit to what can be communicated (not just represented) by standard media types.
For people to understand and use your REST-ish API, they need to understand the meaning of the data. There are APIs where the media types tell most of the story; e.g. if you have a text-to-speech API where the input media type is text/plain and the output media type is audio/mp4, someone familiar with the subject matter could probably make do. Text in, audio out: probably enough to go on in this case.
But many APIs can't communicate much of their meaning with just a media type. Let's say you have an API that handles airline ticketing. The inputs and outputs will mostly be data. The media types on input and output of every API could be application/json or application/xml, so the media type doesn't transmit a lot of information. So then you would look at the individual fields in the inputs and outputs. Maybe there's a field called "price". Is that in dollars or pennies? USD or some other currency? I don't know how a user would answer those questions without either (a) very descriptive names, like "price_pennies_in_usd", or (b) documentation. Not to mention format conventions: is an account number provided with or without dashes, must letters be all-caps, and so on. There is no standard media type that defines these issues.
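A contrived illustration (the field names and formats are hypothetical): both of the following are perfectly valid application/json, and only documentation can tell a client what the fields mean or which format to send.

{ "price": 100, "account": "AB12345678" }
{ "price_pennies_in_usd": 100, "account": "AB-1234-5678" }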
It's one thing when we're in situations where the client doesn't need a semantic understanding of the data. That works well. The fact that browsers can visually render any compliant document, and interact with any compliant resource, is really great. That's basically the "media" use case.
But it's entirely different when the client (or actually, the developer/user behind the client) needs to understand the semantics of the data. DATA IS NOT MEDIA. There is no way to explain data in all its real-world meaning and subtlety other than documenting it. This is the "data" use case.
The overly-academic definition of REST works in the media use case. It doesn't work, and needs to be supplemented with non-pure but useful things like documentation, for other use cases.
You're using the media type to convey details of your data that should be stored in the representation itself. So you could have just one media type, say "application/xml", and then your XML representations would look like:
<collection-of-collections>
  <collection-of-items>
    <item>
    </item>
    <item>
    </item>
  </collection-of-items>
  <collection-of-items>
    <item>
    </item>
    <item>
    </item>
  </collection-of-items>
</collection-of-collections>
If you're concerned about sending too much data, substitute JSON for XML. Another way to save on bytes written and read is to use gzip encoding, which cuts things down about 60-70%. Unless you have ultra-high performance needs, one of these approaches ought to work well for you. (For better performance, you could use very terse hand-crafted strings, or even drop down to a custom binary TCP/IP protocol.)
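A rough JSON equivalent of the XML above might look like this (the property names are just one assumed way of flattening the hierarchy):

{
  "collections": [
    { "items": [ { "value": "..." }, { "value": "..." } ] },
    { "items": [ { "value": "..." }, { "value": "..." } ] }
  ]
}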
Edit: One of your concerns is that:
making [the representation] coarse means I will end up transferring unnecessary data over the wire when in reality the client only needs to read or write a single attribute of an item
In any web service there is quite a lot of overhead in sending messages (each HTTP request might cost several hundred bytes for the start line and request headers, and ditto for each HTTP response, as in this example). So in general you want less granular representations. You would write your client to ask for these bigger representations and then cache them in some convenient in-memory data structure where your program could read from them many times (but be sure to honor the HTTP expiration date your server sets). When writing data to the server, you would normally combine a set of changes to your in-memory data structure and then send the updates as a single HTTP PUT request to the server.
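For example, rather than PUTting each attribute to its own URI, the client could send one representation carrying all of the item's changed attributes in a single request (a sketch; the URI and the label element are illustrative, only the value attribute comes from the question):

PUT /collections/42/items/7 HTTP/1.1
Host: example.com
Content-Type: application/xml

<item>
  <value>42</value>
  <label>new label</label>
</item>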
You should grab a copy of Richardson and Ruby's RESTful Web Services, which is a truly excellent book on how to design REST web services and explains things much more clearly than I could. If you're working in Java I highly recommend the RESTlet framework, which very faithfully models the REST concepts. Roy Fielding's UC Irvine dissertation defining the REST principles may also be helpful.
A media type should be created only rarely, and time should be invested in making sure the format can survive change.
As you're relying on XML, there is no particular reason why you couldn't create one media type, provided that media type is described in one source.
Choosing Atom over having one host media type that supports multiple root elements doesn't necessarily bring you anything: you'll still need to start reading the message within the context of a specific operation before deciding if enough information is present to process the request.
So I would suggest that you can happily have one media type, represented by one root element, and use a schema language to specify which elements can be contained.
In other words, a language like XSD lets you define your media type to support one of several root elements. There is nothing inherently wrong with application/vnd.acme.humanresources+xml describing an XML document that can take either of two different elements as its root.
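A minimal sketch of such a schema, assuming two hypothetical root elements (the element names and namespace are made up for illustration):

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="urn:acme:humanresources"
           xmlns="urn:acme:humanresources"
           elementFormDefault="qualified">
  <!-- both global declarations below are hypothetical; either element
       may legally appear as the root of a document of this media type -->
  <xs:element name="employee" type="xs:anyType"/>
  <xs:element name="department" type="xs:anyType"/>
</xs:schema>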
So to answer your question, create as few media types as you can possibly afford, by asking whether what you put in the documentation of the media type will be understandable and implementable by a developer.
Unless you intend to register these media types, you should pick one of the existing MIME types instead of trying to make up your own formats. As Jim mentions, application/xml, text/xml, or application/json works for most of what gets transmitted in a REST design.
In reply to Darrel, here is Roy's full post. Aren't you trying to define typed resources by creating your own MIME types?
Suresh, why isn't HTTP+POX RESTful?
We are implementing a REST API over our CQRS services. We of course don't want to expose any of our domain to users of the REST APIs.
However, a key tenet of CQRS is that the read models generally correspond to a specific view or screen.
With that being the case, it seems logical that the resources in our REST API will map virtually 1:1 with the read/view models from our queries (where the queries return a DTO containing all the data for the view). Technically this is exposing a part of our domain (the read models, although returned as DTOs). In this case, this seems to be what we want. Any potential downsides to being so closely coupled?
In terms of commands, I have been considering an approach like:
https://www.slideshare.net/fatmuemoo/cqrs-api-v2. There is a slide that indicates that commands are not first class citizens. (See slide 26). By extension, am I correct in assuming that the DTOs returned from my queries will always be the first class citizens, which will then expose the commands that can be executed for that screen?
Thanks
Any potential downsides to being so closely coupled?
You need to be a little bit careful in terms of understanding the direction of your dependencies.
Specifically, if you are trying to integrate with clients that you don't control, then you are going to want to agree upon a contract -- message semantics and schema -- that you cannot change unilaterally.
Which means that the representations are relatively fixed, but you have a lot of freedom about how you implement the production of that representation. You make a promise to the client that they can get a representation of report 12345, and it will have some convenient layout of the information. But whether that representation is something you produce on demand, or something that you cache, and how you build it is entirely up to you.
At this level, you aren't really coupling your clients to your domain model; you are coupling them to your views/reports, which is to say to your data model. And, in the CQRS world, that coupling is to the read model, not the write model.
In terms of commands, I have been considering an approach like...
I'm going to gently suggest that the author, in 2015, didn't have a particularly good understanding of REST by today's standards.
The basic problem here is that the author doesn't recognize that caching is a REST constraint; and the design of our HTTP protocols needs to consider how general purpose components understand cache invalidation.
For a command (meaning here "a message intended to change the representation of the resource"), you normally want the target URI of the HTTP request to match the identifier of the primary resource that changes.
POST /foo/123/command
Isn't particularly useful, from the perspective of cache invalidation, if nobody ever sends a GET /foo/123/command request.
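A sketch of the alternative, assuming /foo/123 is the resource whose representation the command changes (the request body is illustrative): send the unsafe request to that URI itself.

POST /foo/123 HTTP/1.1
Host: example.com
Content-Type: application/json

{ "command": "..." }

A non-error response to this request allows a general-purpose cache to invalidate any stored representation of /foo/123, which the /foo/123/command spelling does not achieve for /foo/123 itself (unless you add hints such as a Location or Content-Location header).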
I am developing a REST API and frontend as microservices. I know some basic principles of URL design, but there is a performance issue and I'm not sure how to deal with it.
For the convenience of displaying the webpage, I'd like to get certain information about more than 100 resources per page. (Actually, a BFF exists as an orchestration layer.)
Since the target resource includes an aggregation result over a large number of database records, it takes about 3 seconds per request. However, the information I want on the webpage is only a part of it, and getting just that part doesn't require complex aggregation, which would make the response time much shorter.
Take this case as an example.
There is an article resource, and articles/:id returns the resource data, which contains a complex aggregation. But in this case, all I need is the count of comments, which can be obtained quickly by issuing a SQL count statement, without a counter cache.
However, when examining REST API designs, I've never seen a case where a GET request returns only a specific field.
And in microservices, an API should only return resource state in a loosely coupled way, so I think it shouldn't be focused on specific fields.
What kind of URL design or performance optimization can be considered in the face of performance problems?
REST is the architectural style of the world wide web; the style and the web were developed in parallel in the 1990s. Given the usage patterns and technical constraints of the time, the attention of the style is focused more on the caching of large grained documents, rather than trying to reduce the latency of transport.
So you would be more likely to design your representation so that the count is present somewhere in the representation, and addressable, so that you can call attention to it. Thus: fragments.
So if you already had a resource with the identifier /articles, and having the comment count was important enough, then you might treat the representation like a DTO (Data Transfer Object), and simply include the comment count in the representation, accessible via some identifier like /articles#comment-count.
That's not necessarily a great fit for your use case.
An alternative is to just introduce a stand-alone comment count resource.
Any information that can be named can be a resource -- Fielding, 2000
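A sketch of what such a resource might return (the URI, media type, field name, and cache lifetime are all illustrative):

GET /articles/1/comment-count HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: max-age=60

{ "commentCount": 42 }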
If you are actually doing REST, then the spelling of the URI doesn't matter (consider: when was the last time you cared where the Google search form actually submitted your query?). The identifiers are used as identifiers; general-purpose clients don't try to extract semantics from them.
So using /d8a496c4-51c5-4eeb-8cbd-d5e777cbdee7 as your identifier for the comment-count should "just work".
It's not particularly friendly to the human reader, of course, so you might prefer something else. URI design is, in this sense, a lot like choosing a good variable name -- the machines don't care, so we make choices that are easier for the human beings to manage. That normally means choosing a spelling that is "consistent" with the other spellings in your API.
RFC 3986 introduces distinctions between the path, the query, and the fragment that you can expect general-purpose components to understand; one potentially important point is that reference resolution describes how a general-purpose component can compute a new identifier from a base URI and a relative reference.
/articles/comment-count + ./1 -> /articles/1
My team and I are refactoring a REST-API, and I have come to a question.
For the sake of brevity, let us assume that we have an SQL database with 4 tables: Teachers, Students, Courses and Classrooms.
Right now all the relations between the items are represented in the REST-API by referencing the URL of the related item. For example, for a course we could have the following:
{ "id":"Course1", "teacher": "http://server.com/teacher1", ... }
In addition, if I ask for a list of courses through a GET call to /courses, I get a list of references as shown below:
{
  ... //pagination details
  "items": [
    {"href": "http://server1.com/course1"},
    {"href": "http://server1.com/course2"},
    ...
  ]
}
All this is nice and clean, but if I want a list of all the course titles with the teachers' names, and I have 2000 courses and 500 teachers, I have to do the following:
Make approximately 2500 calls just to read the data.
Implement the join between the teachers and the courses myself.
Optimize with caching, etc., so that it all runs as fast as possible.
My problem is that this method creates a lot of network traffic with thousands of REST-API calls and that I have to re-implement the natural join that the database would do way more efficiently.
Colleagues say that this approach is the standard way of implementing a REST-API, but then a relatively simple query becomes a big hassle.
My questions therefore are:
1. Is it wrong if we nest the teacher information in the courses?
2. Should the listing of items, e.g. GET /courses, return a list of references or a list of items?
Edit: After some research I would say the model I have in mind corresponds mainly to the one shown in jsonapi.org. Is this a good approach?
My problem is that this method creates a lot of network traffic with thousands of REST-API calls and that I have to re-implement the natural join that the database would do way more efficiently. Colleagues say that this approach is the standard way of implementing a REST-API, but then a relatively simple query becomes a big hassle.
Your colleagues have lost the plot.
Here's your heuristic - how would you support this use case on a web site?
You would probably do it by defining a new web page that produces the report you need. You'd run the query, use the result set to generate a bunch of HTML, and ta-da! The client has the information they need in a standardized representation.
A REST-API is the same thing, with more emphasis on machine readability. Create a new document, with a schema so that your clients can understand the semantics of the document you return to them; tell the clients how to find the target URI for the document, and voila.
Creating new resources to handle new use cases is the normal approach to REST.
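For example, you might introduce a course-catalog report as its own resource whose representation already contains the joined course titles and teacher names (a sketch; the URI and field names are illustrative):

GET /reports/course-catalog HTTP/1.1
Host: server1.com

HTTP/1.1 200 OK
Content-Type: application/json

{
  "items": [
    { "course": "http://server1.com/course1", "title": "...", "teacher": "..." },
    { "course": "http://server1.com/course2", "title": "...", "teacher": "..." }
  ]
}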
Yes, I totally think you should design something similar to jsonapi.org. As a rule of thumb, I would say "prefer the solution that requires fewer network calls". That's especially true if the number of network calls will be smaller by an order of magnitude.
Of course it doesn't eliminate the need to limit the request/response size if it becomes unreasonable.
Real life solutions must have a proper balance. Clean API is nice as long as it works.
So in your case I would do something like:
GET /courses?include=teachers
Or
GET /courses?includeTeacher=true
Or
GET /courses?includeTeacher=brief|full
In the last one, the response can contain only the teacher's id for brief, and full teacher details for full.
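With the include approach, the response could carry the related teachers alongside the courses, roughly in the style of jsonapi.org compound documents (a sketch; the field values are placeholders, and it is not meant as the exact jsonapi structure):

GET /courses?include=teachers

{
  "data": [
    {
      "type": "courses",
      "id": "course1",
      "attributes": { "title": "..." },
      "relationships": {
        "teacher": { "data": { "type": "teachers", "id": "teacher1" } }
      }
    }
  ],
  "included": [
    { "type": "teachers", "id": "teacher1", "attributes": { "name": "..." } }
  ]
}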
My problem is that this method creates a lot of network traffic with thousands of REST-API calls and that I have to re-implement the natural join that the database would do way more efficiently. Colleagues say that this approach is the standard way of implementing a REST-API, but then a relatively simple query becomes a big hassle.
Have you actually measured the overhead generated by each request? If not, how do you know that the overhead will be too great? From an object-oriented programmer's perspective it may sound bad to perform each call on its own; your design, however, lacks one important asset which helped the Web grow to its current size: caching.
Caching can occur at multiple levels: you can do it at the API level, the client might do it, or an intermediary server might do it. Fielding even made it a constraint of REST! So, if you want to comply with the REST architectural philosophy, you should also support caching of responses. Caching helps to reduce the number of requests that have to be calculated or even processed by a single server. With the help of stateless communication you might even introduce a multitude of servers that all perform calculations for billions of requests yet act as one cohesive system to the client. An intermediary cache may further reduce the number of requests that actually reach the server significantly.
A URI as a whole (including any path, matrix or query parameters) is effectively the key used by a cache. Upon receiving a GET request, for example, a caching application checks whether it already contains a stored response for that URI, and if the stored data is "fresh enough" it returns the stored response on behalf of the server directly to the client. If the stored data has already exceeded the freshness threshold, the cache throws it away and routes the request to the next hop in line (which might be the actual server, or might be a further intermediary).
Spotting resources that are ideal for caching might not always be easy, though the majority of data doesn't change so quickly that caching should be neglected entirely. Thus, it should at least be of general interest to introduce caching, especially the more traffic your API produces.
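In HTTP terms, supporting this mostly means sending suitable caching headers on your responses and honoring conditional requests; the header values below are illustrative:

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: public, max-age=300
ETag: "v17"

Once the cached copy becomes stale, a cache can revalidate it with a conditional request instead of transferring the whole representation again:

GET /courses HTTP/1.1
If-None-Match: "v17"

HTTP/1.1 304 Not Modified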
While certain media types such as HAL JSON, jsonapi, ... allow you to embed content gathered from related resources into the response, embedding content has some potential drawbacks, such as:
Utilization of the cache might be low due to mixing data that changes quickly with data that is more static
Server might calculate data the client won't need
One server calculates the whole response
If related resources are only linked to instead of directly embedded, a client certainly has to fire off a further request to obtain that data, though that request is more likely to be (partly) served by a cache, which, as mentioned a couple of times throughout this post, reduces the workload on the server. Besides that, a positive side effect could be that you gain more insight into what the clients are actually interested in (if an intermediary cache is run by you, for example).
Is it wrong if we nest the teacher information in the courses?
It is not wrong, but it might not be ideal, as explained above.
Should the listing of items, e.g. GET /courses, return a list of references or a list of items?
It depends. There is no right or wrong.
As REST is just a generalization of the interaction model used in the Web, basically the same concepts apply to REST as well. Depending on the size of the "item", it might be beneficial to return a short summary of the item's content and add a link to the item itself; similar things are done on the Web as well. For a list of students enrolled in a course, this might be the name and matriculation number plus a link where further details of that student can be asked for, accompanied by a link-relation name that gives the link some semantic context which a client can use to decide whether invoking that URI makes sense or not.
Such link-relation names are either standardized by IANA, defined by common approaches such as Dublin Core or schema.org, or provided as custom extensions as defined in RFC 8288 (Web Linking). For the above-mentioned list of students enrolled in a course you could, for example, make use of the about relation name to hint to a client that further information on the current item can be found by following the link. If you want to enable pagination, the relations first, next, prev and last can and probably should be used as well, and so forth.
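For instance, a pageable collection could advertise those relations as RFC 8288 Link headers (or equivalently inside the response body); the URIs are illustrative:

Link: </courses?page=3>; rel="next"
Link: </courses?page=1>; rel="prev"
Link: </courses?page=1>; rel="first"
Link: </courses?page=12>; rel="last"

Each student item in the body could likewise carry a link with rel="about" pointing at the full student resource.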
This is actually what HATEOAS is all about: linking data together and giving the links meaningful relation names to span a kind of semantic net between resources. By simply embedding things into a response, such semantic graphs might be harder to build and maintain.
In the end it basically boils down to an implementation choice whether you want to embed or reference resources. I hope I could shed some light on the usefulness of caching and the benefits it can yield, especially in large-scale systems, as well as on the benefit of providing link-relation names for URIs that enhance the semantic context of relations used within your API.
How should we use REST to access multidimensional data in an efficient way? The choices seem to be hi-REST, lo-REST, or OpenSearch (which seems like a specialization of lo-REST).
In order for your system to be RESTful, one of the requirements is that the client doesn't know anything up front about how your URIs are structured. This means you can't write code which builds URIs in a particular way, like most Twitter clients do. The conventional wisdom is that in order for a resource to be located, you need to discover its URI somewhere else.
However, there are times when you're dealing with a countless number of resources in the system, and providing links to each one is just plain stupid. Multidimensional data fits into this category. In these cases, it's completely valid to provide the client with rules for URI construction, as long as these rules are discovered at run time.
OpenSearch is absolutely a RESTful solution to this problem, and it's programmer-friendly at that. A lot of the use of OpenSearch is limited to plain human-readable HTML search results, but in actuality it can be used for purely machine-readable (e.g. Atom) search results too:
<Url type="application/atom+xml"
template="...search/?q={searchTerms}"/>
This template instructs clients that if they'd like an Atom representation, they can go here. But how does this fit multidimensional data? The extensibility of OpenSearch comes into play here. The OpenSearch time extension describes how to instruct clients to construct URLs representing searches that are constrained to a specific time range (assuming xmlns:time is bound correctly):
<Url type="application/atom+xml"
template="...search/?after={time:start}&before={time:end}"/>
If a client sees this template, it can tell that the template only allows a time constraint. Let's extend it ourselves.
To extend OpenSearch, I have to designate a namespace and some elements in that namespace to mean something specific. This should be published somewhere so that others can access the documentation and implement their own servers and clients. Let's say you want to look a customer up by last name; last name is a pretty generic term, but not universal enough that it's been standardized. Let's say I define a namespace, bind it to the name prefix in my OpenSearch description, and expose the following template:
<Url type="application/atom+xml"
template="...search?lastName={name:last}"/>
This template instructs any client that understands my name namespace that it can do last-name lookups by filling out the template.
This still isn't multidimensional, but let's add another dimension: geography. Luckily, there's an OpenSearch draft extension for geography which allows searching within a bounding box or a circle:
<Url type="application/atom+xml"
template="...search/?latitude={geo:lat?}&
longitude={geo:lon?}&
metres={geo:radius?}"/>
I'm splitting the template to make it readable.
The template still isn't multidimensional, since it only allows searching within one dimension (geospatial). So how do you do multidimensional searches? You provide a template which combines the dimensions that make sense to combine:
E.g. here's a template that allows me to find people with a given last name in a particular region (two dimensions):
<Url type="application/atom+xml"
template="combo-find?customerLastName={name:last}&
lat={geo:lat?}&
lon={geo:lon?}&
radius={geo:radius?}"/>
Here's a template that allows me to constrain name and geography, along with a time constraint (although the OpenSearch time extension doesn't say anything about what the time constraint applies to):
<Url type="application/atom+xml"
template="...search/?lastName={name:last}&
lat={geo:lat?}&
lon={geo:lon?}&
r={geo:radius?}&
after={time:start}&
before={time:end}"/>
In these examples, the client is free to look into the URI template to figure out what URI template parameters are to be filled out. So the client will know what dimensions each URI template supports, and can figure out which URI fits best at any one time.
All of this is RESTful because all of the REST constraints are honored: it's stateless, hypermedia is the engine, it's layered, etc. OpenSearch is just another hypermedia format, and a very good one at that!
Based on my Google search of the terms hi-REST and lo-REST, I don't think either choice will have much bearing on efficiency. Rather, it is more of a question of how "correct" you want to be.
Hi-REST is arguably more "correct," but I doubt that the use of Hi-REST will have any significant effect on efficiency. The data representation you choose to transport the data (i.e. XML, Binary XML, JSON, etc.) will have a far greater effect on data performance.
I read the article at REST - complex applications and it answers some of my questions, but not all.
I am designing my first REST application and need to return "subset" lists to GET requests. Which of the following is more "RESTful"?
/patients;listType=appointments;date=2010-02-22;user_id=1234
or
/patients/appointments-list;date=2010-02-22;user_id=1234
or even
/appointments/2010-02-22/patients;user_id=1234
There will be about a dozen different lists that I need to return. In some of these, there will be several filtering parameters and I don't want to have big 'if' statements in my server code to select the subsets based on which parameters are present. For example, I might need all patients for a specific doctor where the covering doctor is another and the primary doctor is yet another. I could select with
/patients;rounds=true;specific_id=xxxx;covering_id=yyyy;primary_id=zzzz
but that would require complicated branching logic to get the right list, whereas asking for a specific subset (rounds-list) would achieve the same thing.
Note that I need to use matrix parameters instead of query parameters because I need to do filtering at several levels of the URL. The framework I am using (RestEasy), fully supports matrix parameters.
Ralph,
the particular URI patterns are orthogonal to the question of how RESTful your application will be.
What matters with regard to RESTfulness is that the client discovers how to construct the URIs at runtime. This can be achieved either with forms or URI templates. Both hypermedia controls tell the client what parameters can be used and where to put them in the URI.
For this to work RESTfully, client and server must know the possible parameters at design time. This is usually achieved by making them part of the specification of the link relationship.
You might, for example, define a 'my-subset' link relation with the meaning of linking to subsets of collections, and with it you would define the following parameters:
listType, date, userID.
In a link template that spec could be used as
<link rel="my-subset" template="/{listType}/{date}/patients;user_id={userID}"/>
Note how the actual parameter name in the URI is decoupled from the specified parameter name. The value for userID is late-bound to the URI parameter user_id.
This makes it possible for the URI parameter name to change without affecting the client.
You can look at OpenSearch description documents (http://www.opensearch.org) to see how this is done in practice.
Actually, you should be able to leverage OpenSearch quite a bit for your use case. Especially the ability to predefine queries would allow you to describe particular subsets in your 'forms'.
But see for yourself and then ask back again :-)
Jan
I would recommend that you use this URL structure:
/appointments;user_id=1234;date=2010-02-22
Why? I chose /appointments because it is simple and clear. (If you have more than one kind of appointment, let me know in the comments and I can adjust my answer.) I chose the semicolons because they don't imply hierarchy between user_id and date.
One more thing, there is no reason why you should limit yourself to just one URL. It is just fine to have multiple URL structures that refer to the same resource. So you might also use:
/users/1234/appointments;date=2010-02-22
To return a similar result.
That said, I would not recommend using /dates/2010-02-22/appointments;user_id=1234. Why? I don't think, in practice, that /dates refers to a resource. Date is an attribute of an appointment but is not a noun on its own (i.e. it is not a first-class kind of thing).
I can relate to what David James answered.
The format of your URIs can be like he suggested:
/appointments;user_id=1234;date=2010-02-22
and / or
/users/1234/appointments;date=2010-02-22
while still maintaining the discoverability (at runtime) of your resource's URIs (like Jan Algermissen suggested).