OData version 2 and 3 differences - rest

OData protocol documentation (http://www.odata.org/documentation) describes two versions - 2 and 3.
What are the core differences between the two versions?
Are both versions supported by existing client libraries, or is version 2 considered "legacy"?
To rephrase: are version 2 clients compatible with version 3?

There are lots of differences between the two versions. For example, OData v3 adds support for actions, functions, collection values, navigation properties on derived types, and stream properties. It also introduces a completely new serialization format for JSON ("application/json" means completely different things in the two versions).
When an OData client makes a request to a server, it can (and should) specify the maximum protocol version it can understand via the MaxDataServiceVersion HTTP header. A client written to only understand v2 of the protocol won't be able to understand a v3 payload.
I don't think I'd call v2 "legacy" or unsupported, but individual servers can choose whether or not to support requests from clients that only understand up to v2 (or v1). I think many existing clients out there support both v2 and v3. I know the WCF Data Services clients (desktop, Windows Phone, Windows Store, and Silverlight) support both.
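To make the version negotiation concrete, here is a minimal TypeScript sketch (the service URL and entity set are hypothetical, not taken from the answer above) of a client advertising the highest protocol version it understands via the MaxDataServiceVersion header:

    // Minimal sketch: request to a hypothetical OData service, declaring the
    // highest protocol version this client understands.
    async function getOrders(): Promise<unknown> {
      const response = await fetch("https://example.com/odata/Orders", {
        headers: {
          // Tell the server we can handle at most OData v2 responses.
          MaxDataServiceVersion: "2.0",
          // Note: "application/json" means the verbose format in v2 and a
          // completely different format in v3.
          Accept: "application/json",
        },
      });
      return response.json();
    }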

In addition to the previous answer, be aware that some client tools may still support only the OData v2 protocol, so if you need v3-specific features, you should make sure your client code is not limited by something like auto-generated proxy classes that are not capable of handling array types.
Here's an example where the server exposes v3 features, but it's not possible to use them because the Visual Studio WCF Data Services client proxy generator only supports v2:
http://bloggingabout.net/blogs/vagif/archive/2012/12/16/using-odata-protocol-v3-with-mongodata-odata-provider.aspx

You can find the list of all the differences between the two versions in the PDF of the Open Data Protocol (OData) specification.
Specifically, the changelog is in section "1.7 Versioning and Capability Negotiation".

What is the purpose of the extra /v1 in the route api/v1?

Why not just make your backend API route start with /api?
Why do we want to have the /v1 bit? Why not just api/? Can you give a concrete example? What are the benefits of either?
One of the major challenges surrounding exposing services is handling updates to the API contract. Clients may not want to update their applications when the API changes, so a versioning strategy becomes crucial. A versioning strategy allows clients to continue using the existing REST API and migrate their applications to the newer API when they are ready.
There are four common ways to version a REST API.
Versioning through URI Path
http://www.example.com/api/1/products
One way to version a REST API is to include the version number in the URI path.
xMatters uses this strategy, and so do DevOps teams at Facebook, Twitter, Airbnb, and many more.
The internal version of the API uses the 1.2.3 format, so it looks as follows:
MAJOR.MINOR.PATCH
Major version: The version used in the URI; it denotes breaking changes to the API. Internally, a new major version implies creating a new API, and the version number is used to route to the correct host.
Minor and patch versions: These are transparent to the client and used internally for backward-compatible updates. They are usually communicated in change logs to inform clients about new functionality or a bug fix.
This solution often uses URI routing to point to a specific version of the API. Because cache keys (in this case, URIs) change with the version, clients can easily cache resources. When a new version of the REST API is released, it is perceived as a new entry in the cache.
Pros: Clients can cache resources easily
Cons: This solution has a pretty big footprint in the code base as introducing breaking changes implies branching the entire API
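As a rough TypeScript sketch of this strategy (the host, helper name, and version values are made up for illustration), only the major version is ever exposed in the URI path; minor and patch stay internal:

    // Hypothetical example: only MAJOR appears in the URI; MINOR and PATCH
    // are internal details communicated via change logs.
    const INTERNAL_VERSION = { major: 1, minor: 2, patch: 3 }; // MAJOR.MINOR.PATCH

    function productUrl(productId: number): string {
      // Because the version is part of the URI (the cache key), a new major
      // version simply shows up as a new entry in any HTTP cache.
      return `http://www.example.com/api/${INTERNAL_VERSION.major}/products/${productId}`;
    }

    console.log(productUrl(123)); // http://www.example.com/api/1/products/123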
Ref: https://www.xmatters.com/blog/blog-four-rest-api-versioning-strategies/

Which is the difference between these google KMS client packages? (CloudKMS vs KeyManagementServiceClient)

I have a Java codebase that seems to be using "com.google.api.services.cloudkms.v1.CloudKMS" to call KMS. The online docs say to use "com.google.cloud.kms.v1.KeyManagementServiceClient".
When I looked them up, both packages seem to be maintained; however, the reference docs recommend using the latter.
https://developers.google.com/resources/api-libraries/documentation/cloudkms/v1/java/latest/com/google/api/services/cloudkms/v1/CloudKMS.html
https://cloud.google.com/kms/docs/reference/libraries
Could someone tell me what the difference is between these two client packages and whether I should move to the one the reference documentation links to?
In general, you should prefer the library referenced on the Reference Libraries page, currently com.google.cloud.kms. The examples and tutorials on the website will use this client library.
Probably more history than you need to know, but we have two client libraries because they run over different protocols. The new libraries (the ones listed on the reference page) use gRPC to communicate. This means less bandwidth and less time spent serializing/de-serializing JSON. On the flip side, gRPC requires HTTP/2, and some organizations can't/won't support HTTP/2 yet. As a result, we still publish and maintain legacy libraries that use REST over HTTP/1. It is strongly recommended you use the gRPC ones unless you can't use HTTP/2.
You can read more about the background and technical details in Kickstart your cryptography with new Cloud KMS client libraries and samples.

ServiceStack/TypeScript: The typescript-ref ignores namespaces (thus causing duplicates)

I am learning NativeScript + Angular2 with ServiceStack in C# as the backend.
For the app, I generated TypeScript classes using the typescript-ref command, for use with the JsonServiceClient in ServiceStack:
>typescript-ref http://192.168.0.147:8080 RestApi
It all looked sweet and dandy, until I discovered that it seems to ignore the fact that the ServiceStack services and response DTOs are in different namespaces on the .NET side.
I have different branches of services, where the handlers for each service might differ slightly between the branches. This works well in ServiceStack, the Login and handlers work just as expected.
So, on the app/NativeScript/Angular2 side, I used typescript-ref and generated restapi.dtos.ts. The problem is that it skips the namespace difference and just creates duplicate classes instead (as seen in VS Code).
The backend WS in ServiceStack is built in this "branched" fashion so I don't have to start different services on different ports, but rather gather all of them on one port and keep it simple and clear.
Can my problem be remedied?
You should never rely on .NET namespaces in your public-facing service contract. They're supported in .NET clients but not in any other language, which requires that each DTO be uniquely named.
In general, your DTOs should be unique within your entire SOA boundary so that there's only one Test DTO mapping to a single schema definition. This ensures that when it's sent through a Service Gateway, resolved through Service Discovery, published to an MQ server, etc., it only maps to a single DTO contract.
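For illustration only (the DTO names below are hypothetical, not taken from the question), uniquely naming the request DTOs on the C# side means the generated restapi.dtos.ts ends up with one unambiguous class per contract:

    // Sketch of a generated restapi.dtos.ts once DTOs are uniquely named
    // across the whole service (names are hypothetical):
    export class AdminLogin {
      userName?: string;
      password?: string;
    }

    export class CustomerLogin {
      userName?: string;
      password?: string;
    }

    // ...instead of two conflicting `Login` classes emitted for
    // MyApp.Admin.Login and MyApp.Customer.Login.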

Azure API versioning: x-ms-version vs api-version comparison

I see that in the Microsoft-managed REST APIs exposed in Azure there are two ways to do versioning:
a) x-ms-version in header
b) api-version in query string
I wanted to understand the decision behind the selection between the two. I was reading somewhere that x-ms-version is legacy and the way forward is the query string versioning mode. Is this correct?
Also, as per Scott Hanselman's blog, the query string parameter is not his preferred way; he would choose the URL path segment. So I'm wondering why Microsoft adopted this option. I do agree that each person has their own preference, but it would be helpful to know the reason for this selection from Microsoft.
The x-ms-version header is legacy and only maintained for backward compatibility. In fact, the notion of using the x- prefix has been deprecated since the introduction of RFC 6648 in 2012.
Using api-version in the query string is one of the official conventions outlined in §12 Versioning of the Microsoft REST API Guidelines. The guidelines also allow for using the URL segment method, but the query string method is the most widely used.
Fielding himself has pretty strong opinions about API versioning - namely don't do it. The only universally accepted approach to implementing a versioned REST API is to use media type negotiation. (e.g. accept: application/json;v=1.0 or accept: application/vnd.acme.v1+json). GitHub versions their API this way.
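For example, a client might pin its API version purely through content negotiation, roughly like this (the host and vendor media type are placeholders):

    // Sketch: the API version is selected via the Accept header only;
    // the URL stays the same across versions.
    async function getProductV1(): Promise<unknown> {
      const res = await fetch("https://api.example.com/products/123", {
        headers: { Accept: "application/vnd.acme.v1+json" },
      });
      return res.json();
    }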
While media type negotiation abides by the REST constraints and isn't all that difficult to implement, it's the least used method. There are likely many explanations for why, but the reality is that there are at least 3 other very common methods: by query string, by URL segment, or by header.
Pedantic musings aside, the most common forms are likely due to the ease with which a client can access the service. The header method is not much different from using media type negotiation. If a header is going to be used, then using the accept and content-type headers with media type negotiation is a better option. This leaves us with just the query string and URL segment approaches.
While the URL segment approach is common, it has a few pitfalls that most service authors don't consider. Its proliferation has very much been, "Well, that's how [insert company here] does it". In my opinion, the URL segment method suffers from the following issues:
1. URLs are not stable. They change over time with each new API version.
2. As part of the Uniform Interface, the URL path is supposed to identify a resource. If your URL paths are api/v1/products/123 and api/v2/products/123, it implies that there are two different products, which is not true. The v1 versus v2 URL segment is implying a different representation. In all API versions, the resource is still the same logical product.
3. If your API supports HATEOAS, then link generation is a challenge when you have incongruent resource versioning. Mature service taxonomies and the REST constraints themselves are meant to help evolve services and resources over time. It's quite feasible to expect that api/orders could be 1.0 and a referenced line item with api/products/123 support 1.0-3.0. If the version number is baked into the URL, what URL should the service generate? It could assume the same version, but that could be wrong. Any other option is coupling to the related resources and may not be what the client wants. A service cannot know all of the API versions of different resources a client understands. The server should therefore only generate links in the form of vanilla resource identifiers (ex: api/products/123) and let the client specify which API version to use. Any other manipulation by the client of links greatly diminishes the value of supporting HATEOAS. Of course, if everything is the same version or on the same resource, this may be a non-issue.
Issues 2 and 3 can be further complicated by a client that persists resource links. When an API version is sunset, any old links to the resource are now broken, even though the resource still exists, just not with the expected, now-obsolete representation.
This brings things full circle to the query string method. While it's true that the query string will vary by API version, it's a parameter, not part of the identifier (e.g. the path). The URL path stays consistent for clients across all versions. It's also easy for clients to append the query parameter for the version that they understand. API versions are also commonly numeric, date-based, or both. Sometimes they have a status too. The URL api/products/123?api-version=2018-03-10-beta1 is arguably much cleaner than api/2018-03-10-beta1/products/123.
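A sketch of how this plays out (paths and version values are illustrative): the server emits vanilla resource identifiers, and the client appends whichever API version it understands:

    // The server only generates vanilla links like "api/products/123";
    // the client adds the api-version it speaks. Values are illustrative.
    function withApiVersion(link: string, apiVersion: string): string {
      const url = new URL(link, "https://example.com/");
      url.searchParams.set("api-version", apiVersion);
      return url.toString();
    }

    console.log(withApiVersion("api/products/123", "2018-03-10-beta1"));
    // https://example.com/api/products/123?api-version=2018-03-10-beta1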
In conclusion, while the query string method may not be the true RESTful method to version a resource, it tends to carry more of the expected traits while remaining easy to consume (which isn't a REST constraint). In conjunction with the Microsoft REST API Guidelines, this is why ASP.NET API Versioning defaults to the query string method out-of-the-box, even though all of these methods are supported.
Hopefully, this provides some useful insight into how different styles can affect your service taxonomy, aside from pure preference.

REST API versioning. What to include in a new version?

So let's say that I have two endpoints:
example.com/v1/send
example.com/v1/read
and now I have to change something in /send without losing backward compatibility. So I'm creating:
example.com/v2/send
But what should I do then? Do I need to create example.com/v2/read, which will do the same as /v1? And let's imagine that there are lots of controllers with hundreds of endpoints. Will I be creating a new version like that, changing every small endpoint? Or should my frontend use the API like this?
example.com/v1/send
example.com/v2/read
What is the best practice?
Over time, new endpoints may be added, some endpoints may be removed, the model may change, etc. That's what versioning is for: tracking the changes.
It's likely that you will support both version 1 and version 2 for a certain period, but you will hardly support both versions forever. At some point you may drop version 1 and want to keep only version 2 fully up and running.
So, consider the new version of the API as an API that can be used independently from the previous versions. In other words, a particular client should target one version of the API instead of multiple. And, of course, it's desirable to have backwards compatibility if possible.
As a side note: instead of adding the version in the URL, have you considered using a media type to handle the versioning?
For instance, have a look at the GitHub API. All requests are handled by a default version of the API, but the target version can be defined (and the clients are encouraged to define the target version) in the Accept header, using a media type:
Accept: application/vnd.github.v3+json
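A client pinning itself to v3 might therefore look roughly like this in TypeScript (the repository endpoint is just an example):

    // Sketch: pin the client to GitHub API v3 via the Accept header.
    async function getRepo(owner: string, repo: string): Promise<unknown> {
      const res = await fetch(`https://api.github.com/repos/${owner}/${repo}`, {
        headers: { Accept: "application/vnd.github.v3+json" },
      });
      return res.json();
    }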