I'm currently working on a SOA versioning strategy for my organization. I'm trying to determine where we should store the version number (Major.Minor) in the WSDL. There will be non-breaking changes made to the service interface (e.g. adding new operations), and for these non-breaking changes we'll just increment the minor number. We are considering using the WSDL's targetNamespace to store the version, but we're afraid changing the WSDL's targetNamespace from something like 1.0 to 1.1 might result in a breaking change for some clients.
Can anyone tell me what effects changing the targetNamespace of a WSDL will have on existing consumers of that particular web service? I've run some tests using WCF and found that it doesn't break existing applications that use the service. However, I'm wondering whether this will still be true for non-.NET clients.
Note: I do realize that changing the targetNamespace of an XSD referenced by the WSDL does result in a breaking change.
Put the Major version number in the namespace. Put the major and minor in a documentation element. There is a great book by Thomas Erl that covers this sort of thing: Web Service Contract Design and Versioning for SOA. The best thing about the book is that it will make you think about things you probably haven't considered, such as whether you plan to use a strict, backward-compatible, or forward-compatible versioning strategy, and what the implications of each are.
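For illustration, here is a minimal JAX-WS sketch of that convention (the service name and namespace are hypothetical). Only the major version appears in the targetNamespace, so minor, non-breaking releases never force clients onto a new namespace; the full Major.Minor would live in the WSDL's documentation element.

import javax.jws.WebMethod;
import javax.jws.WebService;

// Hypothetical contract: only the MAJOR version is part of the namespace.
// A 1.0 -> 1.1 release keeps this namespace; only 2.0 would change it.
// The full "1.1" would be recorded in the WSDL's <wsdl:documentation>
// element (and in change logs).
@WebService(
        name = "CustomerService",
        targetNamespace = "http://example.com/services/customer/v1")
public interface CustomerService {
    @WebMethod
    String getCustomer(String customerId);
}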
Why not just make your backend API route start with /api?
Why do we want the /v1 bit? Why not just /api/? Can you give a concrete example? What are the benefits of either?
One of the major challenges surrounding exposing services is handling updates to the API contract. Clients may not want to update their applications when the API changes, so a versioning strategy becomes crucial. A versioning strategy allows clients to continue using the existing REST API and migrate their applications to the newer API when they are ready.
There are four common ways to version a REST API.
Versioning through URI Path

One way to version a REST API is to include the version number in the URI path, for example:

http://www.example.com/api/1/products
xMatters uses this strategy, and so do DevOps teams at Facebook, Twitter, Airbnb, and many more.
The internal version of the API uses the 1.2.3 format, so it looks as follows:
MAJOR.MINOR.PATCH
Major version: The version used in the URI; it denotes breaking changes to the API. Internally, a new major version implies creating a new API, and the version number is used to route requests to the correct host.
Minor and patch versions: These are transparent to the client and used internally for backward-compatible updates. They are usually communicated in change logs to inform clients about new functionality or a bug fix.
This solution often uses URI routing to point to a specific version of the API. Because cache keys (in this case, URIs) change with the version, clients can easily cache resources. When a new version of the REST API is released, it is perceived as a new entry in the cache.
Pros: Clients can cache resources easily
Cons: This solution has a pretty big footprint in the code base, as introducing breaking changes implies branching the entire API
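As a concrete sketch of URI path versioning, here is what the routing could look like in a Spring Boot application (controller and field names are hypothetical); each major version gets its own controller, so breaking changes in version 2 never touch the code path serving version 1 clients:

import java.util.List;
import java.util.Map;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Version 1 keeps serving its original response shape...
@RestController
@RequestMapping("/api/1/products")
class ProductsV1Controller {
    @GetMapping
    public List<Map<String, Object>> list() {
        return List.of(Map.of("name", "widget"));
    }
}

// ...while version 2 is free to introduce breaking changes.
@RestController
@RequestMapping("/api/2/products")
class ProductsV2Controller {
    @GetMapping
    public List<Map<String, Object>> list() {
        return List.of(Map.of("name", "widget", "sku", "W-1"));
    }
}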
Ref: https://www.xmatters.com/blog/blog-four-rest-api-versioning-strategies/
I am developing a new project with Spring Boot and GraphQL. I am confused about how to proceed, because there are two ways to develop it: via a graphqls schema file, or via an annotation-based approach. I prefer the annotation-based approach, but is it stable? Example: https://github.com/leangen/graphql-spqr.
I second AllirionX's answer and just want to add a few details.
Firstly, to answer your question: yes, SPQR has been pretty stable for quite a while now. Many teams are successfully using it in production. The only reason it is still in 0.X versions is the lack of documentation, but an occasional small breaking change in the API does occur.
Secondly, I'd also like to add that going code-first doesn't mean you can't also go contract-first. In fact, I'd argue you should still develop in that style. The only difference is that you get to write your contracts as Java interfaces instead of in a new language.
As I highlight in SPQR's README:
Note that developing in the code-first style is still effectively schema-first, the difference is that you develop your schema not in yet another language, but in Java, with your IDE, the compiler and all your tools helping you. Breaking changes to the schema mean the compilation will fail. No need for linters or other fragile hacks.
So whether the API (as described by the interfaces) changes as the other code changes is entirely up to you. And if you need the SDL for any reason, it can always be generated from the executable schema or the introspection result.
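As a minimal sketch of what that looks like with SPQR (the service class and query are made up for the example), the annotated Java method is the contract, and the SDL can be printed from the generated schema at any time:

import graphql.schema.GraphQLSchema;
import graphql.schema.idl.SchemaPrinter;
import io.leangen.graphql.GraphQLSchemaGenerator;
import io.leangen.graphql.annotations.GraphQLQuery;

public class SpqrDemo {

    // Hypothetical service: the Java method *is* the contract.
    public static class UserService {
        @GraphQLQuery(name = "greeting")
        public String greeting(String name) {
            return "Hello, " + name;
        }
    }

    public static void main(String[] args) {
        // Build the executable schema from the annotated code...
        GraphQLSchema schema = new GraphQLSchemaGenerator()
                .withOperationsFromSingleton(new UserService())
                .generate();
        // ...and generate the SDL whenever a client needs it.
        System.out.println(new SchemaPrinter().print(schema));
    }
}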
I don't think there is a good or a bad answer to the "how to proceed" question.
There are two different approaches to building your GraphQL server (with graphql-java, graphql-java-tools, or graphql-spqr), and each method has its advantages and drawbacks. All of those libraries provide a Spring Boot starter. Note that I have never used graphql-spqr.
Schema first (with graphql-java or graphql-java-tools)
In this approach you first create an SDL file. The GraphQL library will parse it, and "all" you have to do is wire each GraphQL type to its data fetcher (graphql-java-tools can even do the wiring for you); there is a minimal sketch after the lists below.
Advantages

no need to go into the details of how the GraphQL schema is built server side
you have a nice graphqls schema file that can be read and used by a client, easing the work of building a GraphQL client
you actually define your API first (the SDL schema): changing the implementation of the API will not require any change client side
Drawbacks

no compile-time checks. If something is not wired properly, an exception will be thrown at runtime. This can be mitigated by using graphql-java-codegen, which will generate the Java classes and interfaces for your GraphQL types, unions, queries, enums, etc.
when using plain graphql-java (no auto-wiring), I felt I had to write long, boring data fetchers, so I switched to graphql-java-tools.
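Here is a rough self-contained sketch of the schema-first flow with plain graphql-java (the SDL and field names are invented; in a real project the SDL would live in a .graphqls file):

import graphql.GraphQL;
import graphql.schema.GraphQLSchema;
import graphql.schema.idl.RuntimeWiring;
import graphql.schema.idl.SchemaGenerator;
import graphql.schema.idl.SchemaParser;
import graphql.schema.idl.TypeDefinitionRegistry;

public class SchemaFirstDemo {
    public static void main(String[] args) {
        // The SDL is the source of truth for the API.
        String sdl = "type Query { greeting(name: String): String }";
        TypeDefinitionRegistry registry = new SchemaParser().parse(sdl);

        // The server-side work: wire each field to its data fetcher.
        RuntimeWiring wiring = RuntimeWiring.newRuntimeWiring()
                .type("Query", t -> t.dataFetcher("greeting",
                        env -> "Hello, " + env.getArgument("name")))
                .build();

        GraphQLSchema schema = new SchemaGenerator()
                .makeExecutableSchema(registry, wiring);
        GraphQL graphql = GraphQL.newGraphQL(schema).build();
        System.out.println(
                graphql.execute("{ greeting(name: \"world\") }").getData().toString());
    }
}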
Code first (with graphql-java or graphql-java-tools or graphql-spqr)
The GraphQL schema is built programmatically (through annotations with graphql-spqr, or by building a GraphQLSchema object in graphql-java); a sketch follows the lists below.
Advantages

compile-time checks
no need to maintain both the SDL and the domain classes
Drawbacks

as your schema is generated from your code base, changing your code base will change the API, which might not be great for the clients depending on it.
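For comparison, a minimal code-first sketch with plain graphql-java (again with an invented field); the schema is assembled as Java objects, and the SDL becomes a by-product you can generate on demand:

import graphql.Scalars;
import graphql.schema.GraphQLFieldDefinition;
import graphql.schema.GraphQLObjectType;
import graphql.schema.GraphQLSchema;
import graphql.schema.idl.SchemaPrinter;

public class CodeFirstDemo {
    public static void main(String[] args) {
        // The Query type is built as plain Java objects, so refactoring
        // tools and the compiler see every piece of the schema.
        GraphQLObjectType query = GraphQLObjectType.newObject()
                .name("Query")
                .field(GraphQLFieldDefinition.newFieldDefinition()
                        .name("greeting")
                        .type(Scalars.GraphQLString))
                .build();
        GraphQLSchema schema = GraphQLSchema.newSchema().query(query).build();

        // The SDL is derived from the code, not the other way around.
        System.out.println(new SchemaPrinter().print(schema));
    }
}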
This is my opinion on those different frameworks, and I would be happy to be shown where I am wrong. The ultimate decision depends on your project: its size, whether there is an existing code base, etc.
I want to identify what might be considered a best practice for URI versioning of APIs, regarding the logic of the back-end implementation.
Let's say we have a Java application with the following API:
http://.../api/v1/user
Request:
{
  "first name": "John",
  "last name": "Doe"
}
After a while, we need to add 2 more mandatory fields to the user API:
http://.../api/v2/user
Request:
{
  "first name": "John",
  "last name": "Doe",
  "age": 20,
  "address": "Some address"
}
We are using separate DTOs for each version, one having 2 fields, and another having 4 fields.
We have only one entity in the application, but my question is how we should handle the logic, as a best practice. Is it OK to handle this in only one service?
If those two new fields, "age" and "address", were not mandatory, this would not be considered a breaking change; but since they are, I am thinking that there are a few options:
use only one manager/service in the business layer for all user API versions (but the complexity of the code in that one manager will grow a lot over time and become hard to maintain)
use only one manager for all user API versions, plus a translator class that makes older API versions compatible with the newer ones (sketched below)
a new manager/service in the business layer for each user API version
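To make the second option concrete, here is a rough sketch (the DTOs and the default values are hypothetical; what the placeholders should be for fields that became mandatory in V2 is a domain decision):

// Hypothetical DTOs, one per API version.
record UserV1(String firstName, String lastName) {}

record UserV2(String firstName, String lastName, Integer age, String address) {}

// Option 2: the single service only understands the latest version;
// a translator upgrades V1 requests before they reach the business layer.
class UserV1ToV2Translator {
    UserV2 toLatest(UserV1 v1) {
        // Placeholder values for the fields that became mandatory in V2.
        return new UserV2(v1.firstName(), v1.lastName(), null, "unknown");
    }
}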
If I use only one manager for all user API versions and put some constraints/validations there, V2 will work, but V1 will throw an exception because those fields are not present.
I know that versioning is a big topic, but I could not find a specific answer on the web until now.
My intuition says that having a single manager for all user API versions will result in a method that has nothing to do with clean code. I am also thinking that any change added with a new version should be as loosely coupled as possible, because that will make it easier to deprecate older methods and remove them over time.
You are correct in your belief that versioning with APIs is a contentious issue.
You are also making a breaking change and so incrementing the version of your API is the correct decision (w.r.t. semver).
Ideally your backend code will be under version control (e.g. in Git, hosted on GitHub). In this case you can safely consider V1 to be a specific commit in your repository. This is the code that has been deployed and is serving traffic for V1. You can then continue making changes to your code as you see fit. At some point you will have added some new breaking changes and will decide to mark a specific commit as V2. You can then deploy V2 alongside V1. When you decide to deprecate V1, you can simply stop serving traffic.
You'll need some method of ensuring only V1 traffic goes to the V1 backend and V2 traffic to the V2 backend. Generally this is done using a reverse proxy; popular choices include NGINX and Apache. Any sufficiently capable reverse proxy will allow you to direct requests based on the path, such that a request prefixed by /api/v1 is routed to Backend1, and one prefixed by /api/v2 to Backend2.
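If you prefer to stay in Java rather than configure NGINX or Apache, the same path-based routing idea can be sketched with Spring Cloud Gateway (host names are hypothetical):

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class VersionRoutingConfig {
    // Path-prefix routing: V1 traffic only ever reaches the V1 deployment.
    @Bean
    public RouteLocator versionRoutes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("v1", r -> r.path("/api/v1/**").uri("http://backend-v1:8080"))
                .route("v2", r -> r.path("/api/v2/**").uri("http://backend-v2:8080"))
                .build();
    }
}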
Hopefully this model will help keep your code clean: the master branch in your repository only needs to deal with the most recent API. If you need to make changes to older API versions this can be done with relative ease: branch off the V1 commit, make your changes, and then define the HEAD of that modified branch as the 'new' V1.
A couple of assumptions about your backend have been made for this answer that you should be aware of. Firstly, that your backend can be scaled horizontally. For example, this means that if you interact with a database, then the multiple versions of your API can all safely access the database concurrently. Secondly, that you have the resources to deploy replica backends.
Hopefully that explanation makes sense; but if not any questions send them my way!
If you're able to entertain code changes to your existing API, then you can refer to this link. Also, the links mentioned at the bottom of the post direct you to the respective GitHub source code, which can be helpful if you decide to introduce the code changes after your trial and error.
The mentioned approach (using @JsonView) basically prevents one from introducing multiple DTOs of a single entity for the same or multiple clients. Eventually, one can also refrain from introducing a new API version every time new fields are added to your existing API.
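A rough sketch of the idea (view and field names are hypothetical): one DTO class serves both versions, and the view selected on the controller method decides which fields each client sees.

import com.fasterxml.jackson.annotation.JsonView;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical views: V2 inherits V1, so V2 responses contain
// everything V1 has plus the newer fields.
class Views {
    static class V1 {}
    static class V2 extends V1 {}
}

class User {
    @JsonView(Views.V1.class) public String firstName = "John";
    @JsonView(Views.V1.class) public String lastName = "Doe";
    @JsonView(Views.V2.class) public Integer age = 20;
    @JsonView(Views.V2.class) public String address = "Some address";
}

@RestController
class UserController {
    // Spring applies the view during serialization, so V1 clients
    // never see the fields introduced for V2.
    @JsonView(Views.V1.class)
    @GetMapping("/api/v1/user")
    public User getUserV1() { return new User(); }

    @JsonView(Views.V2.class)
    @GetMapping("/api/v2/user")
    public User getUserV2() { return new User(); }
}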
JAX-RS and JAX-WS are great for producing an API. However, they don't address the concern of backwards compatibility at all.
In order to avoid breaking old clients when new capabilities are introduced to the API, you essentially have to accept and provide the exact same input and output format as you did before; many of the XML and JSON parsers out there seem to have a fit if they find a field that doesn't map to anything, or has the wrong type.
Some JSON libraries out there, such as Jackson and Gson, provide a feature where you can specify a different input/output representation for a given object based on a runtime setting, which seems like a suitable way to handle versioning for many cases. This makes it possible to provide backwards compatibility by annotating added and removed fields so they only appear according to the version of the API in use by the client.
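For illustration, this is the kind of thing I mean, using Jackson's views (class and field names invented for the example); the representation is chosen by a runtime setting rather than baked into the class:

import com.fasterxml.jackson.annotation.JsonView;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ViewDemo {
    // Hypothetical version markers; V2 extends V1, so the V2 view
    // includes every V1 field plus the newer ones.
    static class V1 {}
    static class V2 extends V1 {}

    static class User {
        @JsonView(V1.class) public String name = "John Doe";
        @JsonView(V2.class) public Integer age = 20;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Same object, two wire formats, selected at runtime.
        System.out.println(mapper.writerWithView(V1.class)
                .writeValueAsString(new User()));   // {"name":"John Doe"}
        System.out.println(mapper.writerWithView(V2.class)
                .writeValueAsString(new User()));   // {"name":"John Doe","age":20}
    }
}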
Neither JAXB nor any other XML data-binding library I have found to date has decent support for this concept, never mind being able to re-use the same annotations for both JSON and XML. Adding it to the JAXB-RI or EclipseLink MOXy seems potentially possible, but daunting.
The other approach to versioning seems to be to version all the classes that have changed, often by creating a new package each time the API is published and making copies of all modified DTO, Service, and Resource classes in the new package so that all the type information is versioned for the binding and dispatch systems. This approach seems more laborious to me.
My question is: how have you designed your Java API providers for backwards compatibility? What worked, what didn't?
Links to case studies or blog posts on the subject are much appreciated; I've done some googling but haven't found much discussion of this.
I'm the tech lead for EclipseLink MOXy, and I'm very interested in your versioning requirements. You can reach me through my blog:
http://bdoughan.blogspot.com/p/contact_01.html
MOXy offers a means to represent the JAXB metadata as an XML file. You can leverage this to create multiple mappings for the same object model:
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/EclipseLink-OXM.XML
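To give a flavour of how that is wired up (the domain class and the bindings-file path here are hypothetical), you create one JAXBContext per API version, each pointing at its own external mapping file:

import java.util.HashMap;
import java.util.Map;
import javax.xml.bind.JAXBContext;
import org.eclipse.persistence.jaxb.JAXBContextFactory;
import org.eclipse.persistence.jaxb.JAXBContextProperties;

public class MoxyVersioningDemo {
    // Hypothetical domain class; all of its mapping lives in the
    // external eclipselink-oxm bindings file, not in annotations.
    public static class Customer {
        public String firstName;
        public String lastName;
    }

    public static void main(String[] args) throws Exception {
        // One context per API version, each with its own bindings file.
        Map<String, Object> props = new HashMap<>();
        props.put(JAXBContextProperties.OXM_METADATA_SOURCE,
                "com/example/customer/v1/oxm-bindings.xml");
        JAXBContext v1Context = JAXBContextFactory.createContext(
                new Class<?>[] { Customer.class }, props);
        System.out.println(v1Context);
    }
}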
One of my co-workers is developing a SOAP API for a PHP application, and he is wondering whether CamelCaps names are some kind of convention for SOAP methods.
Our current API has lower_caps_and_underscores, but that seems somewhat strange compared with a random subset of other SOAP APIs, and we wouldn't want to annoy consumers of the API with the wrong convention.
Almost all standard SOAP APIs I have seen use CamelCaps. You may want to look at established SOAP APIs, e.g. the Google SOAP API.
I think underscores may annoy users. You can follow either style; what is more important is to stick to a single standard naming convention.
Another important thing to consider when naming a service: the name should clearly establish the meaning and context of what the service will do.
e.g.
GetCustomerHistoryById = get a single customer's history by ID
GetCustomersHistory = get all customers' history
What language are you developing in (not that it matters)?
In my experience, lower_with_underscores seems to be the preferred style for PHP development, but CamelCase seems to be more generally used.
Just a thought
For SOAP, you see either Pascal casing or camel casing. The SOAP namespace is Pascal cased (soap:Envelope, anyone?). I guess what you use depends on where you draw the line.
In general, I use Pascal Casing for Methods and Properties. These two elements embody the framework of the contract. Bearing this in mind, I would likely have SOAP elements that correspond to Methods and Properties Pascal Cased.
As for parameters and return values, I would have to think about breaking the Pascal casing rule and using camel casing there. Fortunately, I am not building a SOAP API right now, so I have time to think about it.
I would not go with something outside of Pascal or Camel casing, however, as it is non standard. Not that I think people would say "I am not using YOUR API because it uses non-standard naming", but just as a matter of convention. But, then, people who buck convention often come up with the next new trend in development. ;-)