I recently studied the OData and SCIM protocols. Both provide REST services, but I'm not clear on the disadvantages of SCIM that would make OData the better choice for RESTful services. Could someone please help me understand the differences?
That's a great question, and one I've had to deal with in the development of our private cloud identity platform. Here's my (long-winded) take:
OData (http://odata.org) is a set of standards for modeling and accessing data of all sorts: relational, hierarchical, graph, you name it. It provides a language for defining data models (read: schema) and a standard approach to defining the APIs to access that data. OData started out firmly rooted in XML (Atom) payloads, but has shifted to JSON over HTTP over the last couple of years.
SCIM (http://www.simplecloud.info/) started with a much more limited scope: to provide a REST API for provisioning identity data, i.e. users and groups. SCIM v2 expanded to include schema extensibility as well as search and CRUD operations. In this sense, SCIM’s evolution is very similar to that of Service Provisioning Markup Language (SPML). In a lot of ways SCIM is a RESTful, JSON-y version of SPML.
The two standards started with different goals, but have converged somewhat in terms of functionality. Even so, OData is substantially more expressive. A few areas come to mind:
Query expressions – SCIM is limited to expressions comparing attributes to literal values, e.g. ~/Users?filter=surname eq "smith", combined with "and"s and "or"s. OData supports comparison of attributes to other attributes, e.g. ?$filter=Surname eq Title, and even comparison of attributes of different entities (see the sketch after this list). I suspect that SCIM's limitation in this regard comes from the influence of LDAP on its designers and early use cases.
Queries on complex data models – Both SCIM and OData support exposing relationships between resources using URLs. This allows you to create richer data models. But the SCIM query language doesn't allow operations on navigation ("$ref") attributes, so you can't navigate between resources as part of a query. OData incorporates navigation properties as part of the data model, and as a consequence you can use a single OData query to (for instance) "return all the people who work for my boss's boss who are assigned to my project". You can of course do this with SCIM, but it's up to the client application to do the navigation, maintain state, and so on. OData is more like SQL in that it lets you offload query complexity to the server. That's a good idea in general, and critical when the client is a constrained device like a phone.
Functions and Actions – SCIM defines an API for CRUD and search operations, but that's all. OData provides for the definition of other kinds of APIs. For instance, you can define an entity-bound API called "SendEmail" that accepts a message as one of its parameters, and the system would send an email to the entity in context. This is a really powerful notion in my mind, and it allows applications to discover APIs the same way they discover everything else about an OData system. Functions can be used in query expressions too, so you can ask things like "return all the customers whose names start with 'A'" (e.g. using the built-in startswith function).
Type system – SCIM doesn't really provide a type system per se. Resources are classified by a resource type that defines the allowed attributes for resources of that type, but that's it. OData defines inheritance of types, abstract types, "open" (extensible) types and type conversion, and supplies a much wider range of primitive types (e.g. geolocation, streams, enumerations, etc.). OData also allows you to explicitly define referential constraints between entities.
Extensibility – SCIM (in v2) supports the addition of new resource types and of additional attributes to existing resource types. Because OData is data-model driven, everything is extensible, including entities, properties, relationships, functions, and actions.
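To make the query-expression gap concrete, here is a minimal sketch of the two styles side by side (hypothetical host and paths; the attribute names are purely illustrative):

```python
import requests

BASE = "https://example.com"  # hypothetical host

# SCIM: filters compare attributes to literal values only.
scim = requests.get(
    f"{BASE}/scim/v2/Users",
    params={"filter": 'name.familyName eq "smith"'},
)

# OData: filters can compare one attribute to another, so the
# condition itself can reference the entity's own data.
odata = requests.get(
    f"{BASE}/odata/Employees",
    params={"$filter": "Surname eq Title"},
)

print(scim.status_code, odata.status_code)
```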
If OData is so much richer an approach to modeling data and defining APIs than SCIM, why bother with SCIM at all? Because SCIM was developed to solve a very specific problem (identity provisioning), it is quite a bit simpler to understand and to implement than OData. If your application can provision identities to one system using SCIM 1.1, it can probably provision to lots of others. Provisioning identity information using OData is no more difficult than with SCIM, but it's more likely that each target OData platform will expose a somewhat different model, which means you potentially have to tweak your code for each target. That's what SCIM was intended to address, and it succeeds at that. But if you want to expose a rich data model and a corresponding set of APIs (even if for identity information), OData is far and away the better choice.
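That uniformity is SCIM's selling point: essentially the same create-user request works against any SCIM 2.0 endpoint. A minimal sketch (hypothetical host, authentication omitted; the schema URN and attribute names are from the SCIM 2.0 core schema):

```python
import requests

# Core SCIM 2.0 user representation; the schema URN is standardized.
new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jsmith",
    "name": {"givenName": "John", "familyName": "Smith"},
}

resp = requests.post(
    "https://example.com/scim/v2/Users",  # hypothetical endpoint
    json=new_user,
    headers={"Content-Type": "application/scim+json"},
)
resp.raise_for_status()
print(resp.json()["id"])  # server-assigned identifier of the new user
```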
SCIM is specifically for identity management (e.g., users and groups). OData is a general purpose framework for building RESTful web services. You could create a SCIM service using OData (with some minor differences in URI and payload formats).
Like any pattern, to compare or judge them you need to understand the problem they're meant to address. Both are attempts to address enterprise application integration: applications from different vendors, on different OSs, with different APIs need some common protocol to communicate. Think of a finance app talking to an HR app from a different vendor.
So why REST? Because REST is built on HTTP, which is a lowest-common-denominator standard (but not a bad one), meaning that everyone can play, and port 80 is often the only port open in every firewall.
OData is a more mature and comprehensive standard, but keep in mind it's a standard, not an implementation. Most vendors' implementations of the standard are incomplete or dated (Microsoft, Oracle, SAP). So before committing to anything, ensure it has the features you actually need in the real world.
Related
My understanding of SOAP vs REST:
REST = JSON, a simple consistent interface that gives you CRUD access to 'entities' (abstractions of things which are not necessarily single DB rows), a simpler protocol, and no formally enforced 'contract' (e.g. the values an endpoint returns could change, though they shouldn't)
SOAP = XML, a more complex interface that gives you access to 'services' (specific operations you can apply to entities, rather than allowing you to CRUD entities directly), with a formally enforced, pre-stated 'contract' (like a WSDL, where e.g. the return types are predefined and formalized)
Is that a broadly correct assessment?
What about a mixture?
If so, what do I call an API that is a mixture?
For example, suppose we have what at surface level looks like a REST API (it returns JSON, with no WSDL or formalized contract defined), but instead of giving you access to the 'entities' that the system manages (User, Product, Comment, etc.) it gives you access to specific services and complex operations (/sendUserAnUpdate/1111, /makeCommentTextPurple/3333, /getAllCommentsByUserThisYear/2222), without full coverage of the entities?
The 'services' already exist internally, and the team simply publishes access to them on a request by request basis, through what would otherwise look like a REST API.
Question:
What is the 'mixture' typically referred to as (besides, maybe, a bad API)? Is there a word for it, or a concept I can refer to that'll make most developers understand what I'm referring to, without having to say the entire paragraph I did above?
Is it just "JSON SOAP API?", "A Service-based REST API?" - what would you call it?
Thanks!
If you take a look at all those so-called REST APIs, your observation might seem true, though REST actually is something completely different. It describes an architecture, or a philosophy, whose intent is to decouple clients from servers, allowing the latter to evolve in the future without breaking clients. It is quite similar to typical Web page interaction, in that a server will teach a client what it needs and only reacts to client-triggered requests. One has to be pretty careful and pedantic when designing REST services, as it is all too easy to introduce a coupling that may affect clients when a change is made, especially with all the pragmatism around in (commercial) software engineering. Stefan Tilkov gave a great talk on REST back in 2014 that, alongside the work of Jim Webber or Asbjørn Ulsberg, can serve as an introductory lecture on what REST is at its core.
The general premise in REST should always be that the server teaches clients what they need and what the server expects, and offers choices to the client via links. If the server expects to receive data from the client, it will send a form-esque representation to inform the client about the respective fields it supports; based on the affordances of the respective elements contained in the form, a client knows whether to select one or multiple options, enter some free text, or enter a date value, and so on. Unfortunately, most of the media-type formats that attempt to mimic HTML's forms are still in draft versions.
If you take a look at HTML forms in particular, you might sense what I'm referring to. Each of the elements that may occur inside a form is well defined to avoid ambiguity and improve interoperability. This is de facto the ultimate goal in REST: having one client that is able to interact with a vast number of other services without having to be adapted to each single API explicitly.
The beauty of REST is that it isn't limited to a single representation form, i.e. JSON; in fact there is an almost infinite number of possible representation formats that could be exchanged in a REST environment. Plain application/json is a terrible media type for REST applications IMO, as it doesn't include any definitions in regard to links and forms and doesn't describe the semantics of certain fields that may be shipped in requests and responses. The lack of semantic description usually leads to typed resources, where a recipient expects that receiving data from e.g. /api/users returns some specific user data that may differ from host to host. If you skim through IANA's media type registry you will find a couple of media-type formats you could use to transfer user-related data, and any client supporting these representation formats would be able to interact with such an endpoint without any issues. Fielding himself claimed that
A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types). (Source)
Through content-type negotiation, client and server negotiate a representation format both support and understand. The question therefore shouldn't be which one to support but how many you want to support. The more media types your API or client is able to exchange payloads in, the more likely it will be able to interact with other participants.
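As an illustration, here is a client-side sketch of that negotiation (hypothetical URL; HAL is used as the preferred format):

```python
import requests

# The client lists the formats it understands, ranked by q-value;
# the server picks one it can produce and announces it in Content-Type.
resp = requests.get(
    "https://example.com/api/users/42",  # hypothetical resource
    headers={"Accept": "application/hal+json, application/json;q=0.8"},
)
print(resp.headers.get("Content-Type"))  # the negotiated format
```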
Most of those so-called REST APIs are in reality just RPC services exposed via HTTP that may or may not respect and support certain HTTP operations. HTTP thereby is just a transport layer whose domain is the transfer of files or data over the Web. Plenty of people still believe that you shouldn't put verbs in URIs, when in reality a script or process usually doesn't (and shouldn't) care whether a URI contains a verb or not. The URI itself is just a pointer a client will follow and invoke when it is interested in receiving the payload. We humans are also not that interested in the URI itself with regard to the content it may return after invoking that URI; the same holds true for arbitrary clients. It is more important what you ship along with that URI.

On the Web a link can be annotated with certain text and/or link-relation names that set the link's content in relation to the current page. It may hint to a client that certain content should be fetched before the whole response has been parsed, as it is quite likely that the client will also want to know about that; preload, for instance, is such a link-relation name. If certain domain-specific terms exist, one might use an extension scheme as defined by Web linking, or reuse common knowledge or special microformats.
The whole interaction in a REST environment is similar to playing a text-based computer game or following a certain process flow (e.g. ordering and paying for products) defined by an application domain protocol that can be designed as a state machine. The client is thereby guided through the whole process: it basically just follows the orders the server gives it, with some choices to break out of the process (e.g. canceling the order before paying).
SOAP, on the other hand, is, as you've stated, an XML-based RPC protocol reusing a subset of HTTP to exchange requests and responses. The likelihood that plenty of clients will have to be adapted and recompiled when you change something within your WSDL is quite high. SOAP even defines its own security mechanism instead of reusing TLS, which therefore requires explicit support by the clients. And as you have a one-to-one communication model, due to the state that may be kept in process, scaling SOAP services isn't that easy. In a REST environment this is just a matter of adding a load balancer in front of the server and then mirroring the server n times; the load balancer can send the request to any of the servers, thanks to the stateless constraint.
What is the 'mixture' typically referred to as (besides, maybe, a bad API)? Is there a word for it, or a concept I can refer to that'll make most developers understand what I'm referring to, without having to say the entire paragraph I did above?
Is it just "JSON SOAP API?", "A Service-based REST API?" - what would you call it?
The general term for an API that communicates on top of HTTP would be Web API or HTTP API, IMO. This article also uses this term, and it lists XML-RPC and JSON-RPC besides SOAP. I do agree with Voice, though, that you'll receive 5 answers when asking 4 people about the right term to use. While it would be convenient to have a term available that everyone agrees upon, the reality shows that people are not that interested in a clear separation. Just look here on SO at the questions tagged with rest. There is nothing wrong with not being "RESTful", though one should avoid the term REST for truly RPC services. I think, though, that we are already in a situation where the term REST can't be rescued from misusage and marketing purposes.
For something that requires external documentation to use, that ships with its own custom, non-standardized representation format, or that just exposes CRUD for domain objects, I'd add -RPC to the name, as this is more or less what it is at heart. So if the API sends JSON and the representation to expect is documented via Swagger or some other external documentation, JSON-RPC would probably be the most fitting name, IMO.
To sum up this post, I hope I could shed some light on what REST truly is and how your observation is skewed by all those pragmatic attempts that unfortunately are RPC through and through. If you change something within their implementation, how many clients will break? In addition, you can't reuse the client that you've implemented for API A to interact with API B (of a different company or vendor) out of the box, and therefore you have to either adapt your client or create a new one solely for that API. This is true RPC, and it should therefore be reflected in the name somehow to hint developers about future expectations. Unfortunately, the process of naming things properly, especially in regard to REST, seems already lost. There is a fine but tiny group who attempt to spread the true meaning, like Voice, Cassio and some others, though it is like fighting windmills. The best advice here would be to first discuss the naming conventions, and what each participant understands by each term, and then agree on a naming scheme everyone accepts, to avoid future confusion.
My understanding of SOAP vs REST
...
Is that a broadly correct assessment?
No.
REST is an "architectural style", which is to say a coordinated collection of architectural constraints. The World Wide Web is an example of an application built using the REST architectural style.
SOAP is a transport-agnostic message protocol specification, based on the XML Information Set.
If so, what do I call an API that is a mixture?
I don't think you are going to find an authoritative terminology here. Colloquially, you are likely to hear the broad umbrella term "web api" to describe an HTTP API that isn't "RESTful".
The whole space is rather polluted by semantic diffusion.
I'm working on a PHP/JS-based project, where I'd like to introduce Domain-Driven Design in the backend. I find that commands and queries are a better way to express my public domain than CRUD, so I'd like to build my HTTP-based API following the CQS principle. It is not quite CQRS, since I want to use the same model for the command and query side; however, many principles are the same. For API documentation I use Swagger.
I found an article which exposes CQRS through REST resources (https://www.infoq.com/articles/rest-api-on-cqrs). They use 5LMT to distinguish commands, which Swagger does not support. Moreover, don't I lose the benefit of the intention-revealing interface which CQS provides by putting it into a resource-oriented REST API? I didn't find any articles or products which expose commands and queries directly through an HTTP-based backend.
So my question is: Is it a good idea to expose commands and queries directly through the API. It would look something like this:
POST /api/module1/command1
GET /api/module1/query1
...
It wouldn't be REST, but I don't see what REST brings to the table here. Maintaining REST resources would introduce yet another model. Moreover, having commands and queries in the URL allows the use of features like routing frameworks and access logs.
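For concreteness, a minimal sketch of what such command/query routing could look like (Python/Flask purely for illustration; the route names mirror the hypothetical ones above):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Command side: the URL names the operation, POST carries the payload.
@app.route("/api/module1/command1", methods=["POST"])
def command1():
    payload = request.get_json()
    # ... hand the payload to the corresponding command handler ...
    return "", 202  # accepted; a command need not return a body

# Query side: GET with parameters, returning a read model.
@app.route("/api/module1/query1", methods=["GET"])
def query1():
    # ... run the query against the read model ...
    return jsonify({"results": []})
```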
The commands and queries are an implementation detail. This is apparent from the fact that they wouldn't exist at all if you had chosen an alternative style.
A RESTful API usually (if done right) follows the conceptual domain model. The conceptual domain model is not an implementation detail, because it is in your users' heads and is a source of requirements for your system.
Thus, a RESTful API is usually much easier to understand, because the clients (developers) have to understand the conceptual domain model anyway, and RESTful interfaces follow the concepts of such a model. The same is not true for a commands-and-queries-based API.
So we have a trade-off.
You already identified the drawbacks of building a RESTful API around the commands and queries, and I pointed out the drawbacks of your suggestion. My advice would be the following:
If you're building an API that other teams or even customers consume, then go the RESTful way. It will be much easier for the clients to understand your API.
If, on the other hand, the API is only an internal one that is e.g. used by a JS front-end that your team builds, and you have no external clients of the API, then your suggestion of exposing the commands and queries directly can be a shortcut that's worth the (above-mentioned) drawbacks.
If you take the shortcut, be honest with yourself and acknowledge it as such. This means that as soon as your requirements change and you have external clients, you should probably build a RESTful API.
don't I lose the benefit of the intention-revealing interface which CQS provides by putting it into a resource-oriented REST API?
Intention revealing for whom? A client side programmer? A server side programmer? In the same team/org that maintains the domain model? Outside of that team/org? Someone on the internet who would access your API naively by just probing a starting URI with an OPTIONS request? Someone who would have access to the full API documentation with URIs and payloads structure?
REST is orthogonal to CQRS. In the end, no matter how you expose your resources on the web, domain notions will be reflected somewhere, whether in the URI, the payloads, the media types. I don't think using DDD or CQRS should influence the way you design your API that much.
In an effort to grasp the underlying purpose of a RESTful API, I've begun delving into HATEOAS.
According to that wikipedia page,
A REST client needs no prior knowledge about how to interact with any particular application or server beyond a generic understanding of hypermedia. By contrast, in a service-oriented architecture (SOA), clients and servers interact through a fixed interface shared through documentation or an interface description language (IDL).
Now, I don't really understand how that's supposed to work unless there exists prior knowledge of what is available in an API, which defeats the stated purpose of HATEOAS. In fact, tools such as Swagger exist for the express purpose of documenting RESTful APIs.
So while I understand that HATEOAS can allow a webservice to indicate the state of a resource, I am missing the link (haha) demonstrating how a client application could possibly figure out what to do with the returned follow-up links in the absence of some sort of "fixed interface".
How is HATEOAS supposed to accomplish this?
You're confusing things. Tools like Swagger don't exist for the express purpose of documenting RESTful APIs. They exist for the express purpose of documenting HTTP APIs that aren't RESTful! You need tools like that for APIs that are not hypertext driven and focus documentation on URI semantics and methods instead of media-types. All those fancy tools that generate lists of URIs and HTTP methods are exactly the opposite of what you are supposed to do in REST. To quote Roy Fielding on this matter:
A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]
http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
HATEOAS doesn't preclude the need for all documentation. HATEOAS allows you to focus your documentation on your media-types. Your application is supposed to conform to the underlying protocol -- HTTP, most of the time -- and the authoritative documentation on that protocol is all your clients should need for driving the interaction. But they still need to know what they are interacting with.
For instance, when you enter Stack Overflow, you know there are users, questions and answers. You need documentation on what exactly are those things, but you don't need documentation detailing what a click on a link or a button does. Your browser already knows how to drive that, and that's how it works everywhere else.
To quote another answer, REST is the architectural style of the web itself. When you enter Stack Overflow, you know what a User, a Question and an Answer are, you know the media types, and the website provides you with the links to them. A REST API has to do the same. If we designed the web the way people think REST should be done, instead of having a home page with links to Questions and Answers, we'd have static documentation explaining that in order to view a question, you have to take the URI template stackoverflow.com/questions/{id}, replace {id} with the Question.id and paste that into your browser. That's nonsense, but that's what many people think REST is.
There are various answers to this question depending on the level of autonomy of the client interacting with the service ...
Let's first look at a human-driven client - the browser. The browser knows nothing about banks, concerts, cats or whatever else you find on the net - but it certainly knows how to render HTML. HTML is a media type with support for hypermedia (links and forms). In this case you have a perfectly working application with a client that only understands generic hypermedia. The "fixed interface" here is HTML.
Then we have the autonomous clients, or "scripted" clients, that are supposed to interact with a service without human interaction. This is probably the kind of client you are thinking of when comparing REST to SOA(P). You can find such clients in integration scenarios where two independent computer systems exchange data in some predefined way.
Such autonomous clients must certainly agree on something in order to interact with each other. The question is what this "something" is or is not.
In a service oriented architecture the clients agree on specific URLs/endpoints and specific "methods" to invoke on those endpoints (RPC) - this adds coupling on the URL structure used. It also forces the client to know which method to call on what service - the server cannot change URLs and it cannot move a "method" from one service to another without breaking clients.
REST/hypermedia-based systems behave differently. In such a system the client and server agree on one common entry URL where the client can look up (GET) a service document, at runtime, describing all the possible interactions with the server using hypermedia controls such as links or forms. These hypermedia controls inform the client about how to interact with the service (the HTTP method and payload encoding) and where to interact with the service. Which in essence means we do not have "a service" anymore, but possibly many different services, as the client will be told, at runtime, where and how to interact with them.
So how does the client know which hypermedia controls it should look for? It does so by agreeing on a set of identifiers the server will use to identify the relevant controls. For links this is often referred to as "link relation types".
This leads us to what kind of "something" it is that servers and clients agree on - it is 1) a hypermedia-enabled media type, 2) the root service index URL, 3) the hypermedia control identifiers and 4) the payload expected for each of the controls. At runtime the client then discovers the remaining URLs, HTTP methods and payload encodings (such as JSON, XML or URL-encoded key/value pairs).
Currently there is a small set of general-purpose media types for hypermedia APIs - Mason, HAL, Siren, Collection+JSON, Hydra for JSON-LD and probably a few more.
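A sketch of how little such a client needs to hard-code, using HAL from the list above (the entry URL and the "orders" relation name are assumptions of the example):

```python
import requests

ENTRY = "https://example.com/api"  # the one agreed-upon root URL

session = requests.Session()
session.headers["Accept"] = "application/hal+json"

# 1) Look up the service document at runtime.
root = session.get(ENTRY).json()

# 2) Find the control by its agreed identifier (link-relation name),
#    not by a hard-coded URL; HAL keeps links under "_links".
orders_url = root["_links"]["orders"]["href"]

# 3) Follow it. The server can move this URL without breaking us.
orders = session.get(orders_url).json()
```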
If you are interested, I have covered this topic in various blog postings:
The role of media types in RESTful web services
Media types for APIs
Selling the benefits of hypermedia in APIs
Hypermedia API documentation example
RESTful resources are not typed
We are implementing an operational data store for some key data sets ("single view of customer", "single view of employee") to provide meaningful integrated data primarily to front-office applications (primarily B2E, i.e. both provider and consumer are controlled internally, with no external exposure).
All the "single views" will be centered around a slow changing master business entity ("customer", "employee", "asset", "product") which has related child/satellite entities which are more transactional/fast changing in nature (i.e. bookings, orders, payments etc.). The different "single views" will be overlapping and interconnected.
So this ODS would become a data abstraction layer between disparate "systems of record" and vertical systems of engagement, providing a traversable data universe that decouples clients from producers.
Of course, an ODS is pointless if there is no way to access the data. Therefore, I am looking for some kind of elegant way to implement a resource-based data services layer on top of the ODS with some of the following characteristics:
Stateless
REST Level 3 (HATEOAS), but most likely limited to GETs (i.e. create, update, delete would be executed against the actual "system of record" in order to apply business logic)
abstraction between the enterprise data model (=domain entities are exposed as representations) and the physical data model (=actual data is stored in physical tables)... no leakage of the physical data model
support for common query features, such as order by, paging etc.
easy to use for developers, decoupled from physical data model and backend system changes
Ideally not constrained to relational DBs, even though this could be optional for now.
The key standard I came across for this is OData, but there are a couple of concerns:
It is very MS-centric, both as a standard (now OASIS) and, more importantly, from an implementation perspective (.NET, WCF Data Services). We are an enterprise Java shop.
A couple of frontrunners (Netflix, eBay) have sunsetted their OData services, which is always a concerning sign.
OData has been touted as a magic black box which exposes the underlying data model more or less automatically. That means there is no "human translation"/modelling in between, and the service layer therefore doesn't provide the abstraction that is one of the key requirements.
Now, some of these disadvantages might not be that critical, as we don't plan to expose the data services layer to the external world, but rather use it within our own environment (i.e. a rather small number of consumers which can be controlled). But then the question is how much value OData adds.
I do know that there are no free lunches out there :-)
Are there any other approaches to implementing a generic data access layer?
thx a lot, Nick
To answer your concerns about OData:
The statement that OData is MS-centric is no longer true. And there is very good news for you, especially since you're using Java to write services or clients: the Apache Olingo project is maintained and developed in an open-source manner, and Microsoft is just one of its contributors. The project is based on OData V2 and will also support OData V4.
It's true that Netflix and eBay don't expose their OData services anymore. But according to the monitoring of the OData ecosystem on OData.org, new OData services and clients are published constantly. (OData.org has started to accept contributions to the ecosystem.)
I'm not sure if I correctly understood this item. But if I did, OData vocabularies (a part of the OData standard) may be able to resolve your concern. With the OData model annotated with terms, you can add more "human specification" to the capability of the service and more control over the data and the model. The result is that the client will be able to intelligently "human translate" the data and the model to better interact with the service. What is even better about vocabularies is that, although there are canonical namespaces of annotations reserved by the protocol, you can also define your own vocabulary and have it accepted by whoever wants to consume your service, as OData vocabularies are extensible by design, as defined in the OData protocol.
In addition, although your plan is to expose a data service only for limited access, OData features such as queryability and a RESTful data API, along with compelling new OData V4 features such as delta responses, asynchronous requests and server-side aggregation, will definitely help you write a more efficient and powerful data publishing and consumption story.
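For instance, server-side aggregation in OData V4 is exposed through the $apply query option (from the OData data-aggregation extension); a hedged sketch against a hypothetical service:

```python
import requests

# Group orders by customer name and sum the amounts on the server,
# instead of pulling all rows and aggregating in the client.
resp = requests.get(
    "https://example.com/odata/Orders",  # hypothetical service
    params={"$apply": "groupby((Customer/Name),"
                      "aggregate(Amount with sum as Total))"},
)
print(resp.json())
```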
I really like OData (WCF Data Services). In past projects I have coded up so many Web-Services just to allow different ways to read my data.
OData gives great flexibility for the clients to have the data as they need it.
However, in a discussion today, a co-worker pointed out that the way we are doing OData is little more than giving the client application a connection to the database.
Here is how we are setting up our WCF Data Service (Note: this is the traditional way)
Create an Entity Framework (EF) data model of our database
Publish that model with WCF Data Services
Add Security to the OData feed
(This is where it is better than a direct connection to the SQL Server)
My co-worker (correctly) pointed out that all our clients will be coupled to the database now. (If a table or column is refactored then the clients will have to change too)
EF offers a bit of flexibility in how your data is presented and could be used to hide some minor database changes that don't affect the client apps. But I have found it to be quite limited. (See this post for an example.) I have found that the POCO templates (while nice for allowing separation of the model and the entities) also do not offer very much flexibility.
So, the question: What do I tell my co-worker? How do I set up my WCF Data Services so they use business-oriented contracts (like they would if every read operation used a standard WCF SOAP-based service)?
Just to be clear, let me ask this a different way: How can I decouple EF from WCF Data Services? I am fine with making up my own contracts and using AutoMapper to convert between them. But I would like to not go directly from EF to OData.
NOTE: I still want to use EF as my ORM. Rolling my own ORM is not really a solution...
If you use your own custom classes instead of the classes generated directly by EF, you will also change the provider used by WCF Data Services. It means you will no longer pass the EF context as the generic parameter to the DataService base class. This is OK if you have read-only services, but once you expect any data modifications from clients you will have a lot of work to do.
Data services based on an EF context support data modifications. All other data services use the reflection provider, which is read-only by default until you implement IUpdatable on your custom "service context class".
Data services are a technology for quickly creating services that expose your data. They are coupled to their context, and it is the responsibility of the context to provide abstraction. If you want to make quick and easy services, you are dependent on the features supported by EF mapping. You can make some abstractions in the EDMX, you can make projections (DefiningQuery, QueryView), etc., but all these features have some limitations (for example, projections are read-only unless you use stored procedures for modifications).
Data services are not the same as providing a connection to the database. There is one very big difference: a connection to the database will ensure only access and execution permissions, but it will not ensure data security. WCF Data Services offer data security because you can create interceptors which add filters to queries so that they retrieve only the data the user is allowed to see, or which check whether the user is allowed to modify the data. That is the difference you can tell your colleague.
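In WCF Data Services those interceptors are C# methods marked with the QueryInterceptor attribute; the underlying idea, stripped of the framework, is just composing every incoming query with a caller-derived predicate. A language-neutral sketch with purely illustrative names:

```python
# Conceptual sketch only: the service composes the caller's query with
# an entitlement predicate, so rows the user may not see never leave
# the data layer, regardless of what filter the client sent.
def run_query(rows, user, client_filter):
    def entitlement(row):  # the "interceptor"
        return row["region"] in user["allowed_regions"]

    return [r for r in rows if entitlement(r) and client_filter(r)]

orders = [
    {"id": 1, "region": "EU", "total": 100},
    {"id": 2, "region": "US", "total": 250},
]
alice = {"name": "alice", "allowed_regions": {"EU"}}

# Both orders match the client's filter, but only the EU row is returned.
print(run_query(orders, alice, lambda r: r["total"] > 50))
```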
In the case of abstraction: do you want a quick and easy solution or not? You can inject an abstraction layer between the service and the ORM, but then you need to implement the interface mentioned above and you have to test it.
Most simple approach:
DO NOT PUBLISH YOUR TABLES ;)
Make a separate schema
Add views to this schema
Put those views into EF and publish them.
The views are decoupled from the tables and thus can be simplified and refactored separately.
This is the standard approach, also used for reporting.
Apart from achieving more granular data authorization (based on certain field values, etc.), OData also makes your data accessible via open standards like JSON/XML over HTTP using OAuth. This is very useful for web/mobile applications. You could create a web service to expose your data, but that would warrant a change every time your clients' data requirements change (e.g. extra fields needed), whereas OData allows this via OData queries. In a big enterprise this is also useful for designing security at the infrastructure level, as it will only allow text-based (HTTP) calls, which can be inspected/verified for security threats via network firewalls.
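A quick sketch of that flexibility: when the client needs an extra field or a different page of data, only its query changes, not the service (hypothetical service root and property names):

```python
import requests

resp = requests.get(
    "https://example.com/odata/Customers",  # hypothetical service
    params={
        "$select": "Name,Email,Phone",  # client picks the fields it needs
        "$filter": "Country eq 'DE'",
        "$orderby": "Name",
        "$top": "20",                   # page size
        "$skip": "40",                  # third page
    },
)
print(resp.json())
```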
You have some other options for your OData client. Have a look at Simple.OData.Client, described in this article: http://www.codeproject.com/Articles/686240/reasons-to-consume-OData-feeds-using-Simple-ODa
And in case you are familiar with Simple.Data microORM, there is an OData adapter for it:
https://github.com/simplefx/Simple.OData/wiki
UPDATE: My recommendations concern client choice, while your question is about setting up your server side, so of course they are not what you are asking. I will leave my answer up, however, so you are aware of the client alternatives.