How do you create backwards compatible JAX-RS and JAX-WS APIs?

JAX-RS and JAX-WS are great for producing an API. However, they don't address the concern of backwards compatibility at all.
In order to avoid breaking old clients when new capabilities are introduced to the API, you essentially have to accept and produce exactly the same input and output formats as before; many of the XML and JSON parsers out there seem to have a fit if they find a field that doesn't map to anything, or has the wrong type.
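With Jackson, for instance, that strictness is a configurable default, so an old client can at least be made tolerant of newly added fields. A minimal sketch (the factory class name is made up):

```java
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public class LenientMapperFactory {

    public static ObjectMapper lenientMapper() {
        return new ObjectMapper()
                // ignore fields the client's model doesn't know about yet,
                // instead of failing with UnrecognizedPropertyException
                .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
    }
}
```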
Some JSON libraries out there, such as Jackson and Gson, provide a feature where you can specify a different input/output representation for a given object based on a runtime setting, which seems like a suitable way to handle versioning for many cases. This makes it possible to provide backwards compatibility by annotating added and removed fields so they only appear according to the version of the API in use by the client.
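Gson, for example, ships @Since/@Until annotations paired with GsonBuilder.setVersion, which do exactly this. A minimal sketch (the Customer class and its fields are invented for the example):

```java
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.annotations.Since;
import com.google.gson.annotations.Until;

public class VersioningDemo {

    static class Customer {
        String name = "Acme";

        @Since(2.0)  // added in API v2; omitted when serializing for v1 clients
        String email = "info@acme.example";

        @Until(2.0)  // dropped in v2; only present for pre-2.0 clients
        String fax = "555-0100";
    }

    public static void main(String[] args) {
        // Pick the representation at runtime, based on the client's API version
        Gson v1 = new GsonBuilder().setVersion(1.0).create();
        Gson v2 = new GsonBuilder().setVersion(2.0).create();
        System.out.println(v1.toJson(new Customer())); // {"name":"Acme","fax":"555-0100"}
        System.out.println(v2.toJson(new Customer())); // {"name":"Acme","email":"info@acme.example"}
    }
}
```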
Neither JAXB nor any other XML databinding library I have found to date has decent support for this concept, never mind being able to reuse the same annotations for both JSON and XML. Adding it to the JAXB RI or EclipseLink MOXy seems potentially possible, but daunting.
The other approach to versioning seems to be to version all the classes that have changed, often by creating a new package each time the API is published and making copies of all modified DTO, Service, and Resource classes in the new package so that all the type information is versioned for the binding and dispatch systems. This approach seems more laborious to me.
My question is: how have you designed your Java API providers for backwards compatibility? What worked, and what didn't?
Links to case studies or blog posts on the subject would be much appreciated; I've done some googling but haven't found much discussion of this.

I'm the tech lead for EclipseLink MOXy, and I'm very interested in your versioning requirements. You can reach me through my blog:
http://bdoughan.blogspot.com/p/contact_01.html
MOXy offers a means to represent the JAXB metadata as an XML file. You can leverage this to create multiple mappings for the same object model:
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/EclipseLink-OXM.XML
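For instance, one JAXBContext per API version could be backed by its own external mapping file. A sketch only; the Customer class and file paths are invented, and the exact property constant may differ across EclipseLink versions:

```java
import java.util.HashMap;
import java.util.Map;
import javax.xml.bind.JAXBContext;
import org.eclipse.persistence.jaxb.JAXBContextFactory;
import org.eclipse.persistence.jaxb.JAXBContextProperties;

public class VersionedContexts {

    public static class Customer {
        public String name;
    }

    // bindingFile points at a MOXy external mapping (OXM) file on the classpath
    public static JAXBContext contextFor(String bindingFile) throws Exception {
        Map<String, Object> props = new HashMap<>();
        props.put(JAXBContextProperties.OXM_METADATA_SOURCE, bindingFile);
        return JAXBContextFactory.createContext(new Class[] { Customer.class }, props);
    }

    public static void main(String[] args) throws Exception {
        // marshal/unmarshal with whichever context matches the client's API version
        JAXBContext v1 = contextFor("com/example/binding-v1.xml");
        JAXBContext v2 = contextFor("com/example/binding-v2.xml");
    }
}
```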

guidance on whether to use Annotation based spring boot graphql server

I am developing a new project with Spring Boot and GraphQL. I am confused about how to proceed because there are two ways to develop it: via the graphqls file, or via the annotation-based approach. I prefer the annotation-based approach, but is it stable? Example: https://github.com/leangen/graphql-spqr.
I second AllirionX's answer and just want to add a few details.
Firstly, to answer your question: yes, SPQR has been pretty stable for quite a while now. Many teams are successfully using it in production. The only reason it is still in 0.X versions is the lack of documentation, but an occasional small breaking change in the API does occur.
Secondly, I'd also like to add that going code-first doesn't mean you can't also go contract-first. In fact, I'd argue you should still develop in that style. The only difference is that you get to write your contracts as Java interfaces instead of a new language.
As I highlight in SPQR's README:
Note that developing in the code-first style is still effectively schema-first; the difference is that you develop your schema not in yet another language, but in Java, with your IDE, the compiler and all your tools helping you. Breaking changes to the schema mean the compilation will fail. No need for linters or other fragile hacks.
So whether the API (as described by the interfaces) changes as the other code changes is entirely up to you. And if you need the SDL for any reason, it can always be generated from the executable schema or the introspection result.
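To make the contract-as-Java-interface idea concrete, here is a minimal sketch using SPQR (all names below are invented for the example):

```java
import graphql.schema.GraphQLSchema;
import io.leangen.graphql.GraphQLSchemaGenerator;
import io.leangen.graphql.annotations.GraphQLArgument;
import io.leangen.graphql.annotations.GraphQLQuery;

public class SpqrContractDemo {

    // The "contract": a plain Java interface, annotated for SPQR
    interface UserService {
        @GraphQLQuery(name = "user", description = "Find a user by id")
        User findUser(@GraphQLArgument(name = "id") String id);
    }

    public static class User {
        public String id;
        public String name;
    }

    public static class UserServiceImpl implements UserService {
        @Override
        public User findUser(String id) {
            User user = new User();
            user.id = id;
            user.name = "Example";
            return user;
        }
    }

    public static GraphQLSchema buildSchema() {
        // Register the implementation, typed by the interface it implements,
        // so the interface stays the single source of truth for the schema
        return new GraphQLSchemaGenerator()
                .withOperationsFromSingleton(new UserServiceImpl(), UserService.class)
                .generate();
    }
}
```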
I don't think there is a good or a bad answer to the "how to proceed" question.
There are two different approaches to building your GraphQL server (with graphql-java, graphql-java-tools, or graphql-spqr), and each method has its advantages and drawbacks. All of those libraries offer a Spring Boot starter. Note that I have never used graphql-spqr.
Schema first (with graphql-java or graphql-java-tools)
In this approach you first create an SDL file. The GraphQL library will parse it, and "all" you have to do is wire each GraphQL type to its data fetcher; graphql-java-tools can even do the wiring for you (see the sketch after the lists below).
Advantages
no need to go into the details of how the GraphQL schema is built server-side
you have a nice graphqls schema file that can be read and used by a client, easing the task of building a GraphQL client
you actually define your API first (the SDL schema): changing the implementation of the API will not require any change on the client side
Drawbacks
no compile-time check. If something is not wired properly, an exception will be thrown at runtime. This can be mitigated by using graphql-java-codegen, which will generate the Java classes and interfaces for your GraphQL types, unions, queries, enums, etc.
if using graphql-java (no auto-wiring), I felt I had to write long, boring data fetchers, so I switched to graphql-java-tools.
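Here is the promised sketch of the schema-first style with graphql-java-tools (the schema file contents are shown in a comment; all class names are illustrative, and depending on the version the package is com.coxautodev.graphql.tools or graphql.kickstart.tools):

```java
import com.coxautodev.graphql.tools.GraphQLQueryResolver;
import com.coxautodev.graphql.tools.SchemaParser;
import graphql.schema.GraphQLSchema;

// schema.graphqls (on the classpath) might contain:
//   type Query { bookById(id: ID!): Book }
//   type Book  { id: ID!  name: String }

public class SchemaFirstDemo {

    public static class Book {
        public String id;
        public String name;
        public Book(String id, String name) { this.id = id; this.name = name; }
    }

    // graphql-java-tools matches this resolver's methods to Query fields by name
    public static class Query implements GraphQLQueryResolver {
        public Book bookById(String id) {
            return new Book(id, "Example title");
        }
    }

    public static GraphQLSchema buildSchema() {
        return SchemaParser.newParser()
                .file("schema.graphqls")   // parse the SDL
                .resolvers(new Query())    // auto-wire fields to resolver methods
                .build()
                .makeExecutableSchema();
    }
}
```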
Code first (with graphql-java or graphql-java-tools or graphql-spqr)
The GraphQL schema is built programmatically (through annotations with graphql-spqr, or by building a GraphQLSchema object in graphql-java; see the sketch after the lists below).
Advantages
compile-time check
no need to maintain both the SDL and the domain classes
Drawbacks
as your schema is generated from your code base, changing your code base may change the API, which might not be great for the clients that depend on it.
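And the promised code-first sketch, building an equivalent schema programmatically with graphql-java (type and field names invented):

```java
import graphql.schema.DataFetcher;
import graphql.schema.FieldCoordinates;
import graphql.schema.GraphQLCodeRegistry;
import graphql.schema.GraphQLObjectType;
import graphql.schema.GraphQLSchema;
import static graphql.Scalars.GraphQLString;
import static graphql.schema.GraphQLFieldDefinition.newFieldDefinition;
import static graphql.schema.GraphQLObjectType.newObject;

public class CodeFirstDemo {

    public static GraphQLSchema buildSchema() {
        // Equivalent of the SDL: type Query { hello: String }
        GraphQLObjectType queryType = newObject()
                .name("Query")
                .field(newFieldDefinition().name("hello").type(GraphQLString))
                .build();

        // Wire the data fetcher for Query.hello
        GraphQLCodeRegistry registry = GraphQLCodeRegistry.newCodeRegistry()
                .dataFetcher(FieldCoordinates.coordinates("Query", "hello"),
                        (DataFetcher<String>) env -> "world")
                .build();

        return GraphQLSchema.newSchema()
                .query(queryType)
                .codeRegistry(registry)
                .build();
    }
}
```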
This is my opinion of those different frameworks, and I would be happy to be shown that I am wrong. The ultimate decision depends on your project: its size, whether there is an existing code base, etc.

Provide backwards compatibility to Moxy deserialization with Jackson

Since Jersey projects start with the MOXy JSON serializer by default, I used it for a multi-module REST project. But writing clients for this REST API had its quirks, due to this known problem: MOXy doesn't work really well with Maps.
I've migrated the development branch of my code to Jackson, where HashMap serialization works well without the entry[] array, making it easier to write new non-Jersey clients for the project. But I would also need to keep backwards compatibility somehow, if possible, for already-written clients. How could I achieve this with Jackson?
Sadly, some of the hashmaps don't have predetermined keys, so the solution shown in the link can't be implemented, if I'm not wrong.
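For what it's worth, one possible direction is a custom Jackson serializer that reproduces the legacy entry[] shape, applied only to a "v1" variant of the DTOs. An untested sketch, with all names invented:

```java
import java.io.IOException;
import java.util.Map;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.SerializerProvider;
import com.fasterxml.jackson.databind.annotation.JsonSerialize;

// Emits maps as {"entry":[{"key":...,"value":...},...]}, mimicking the old output
public class LegacyMapSerializer extends JsonSerializer<Map<String, ?>> {

    @Override
    public void serialize(Map<String, ?> map, JsonGenerator gen,
                          SerializerProvider serializers) throws IOException {
        gen.writeStartObject();
        gen.writeArrayFieldStart("entry");
        for (Map.Entry<String, ?> e : map.entrySet()) {
            gen.writeStartObject();
            gen.writeStringField("key", e.getKey());
            gen.writeObjectField("value", e.getValue());
            gen.writeEndObject();
        }
        gen.writeEndArray();
        gen.writeEndObject();
    }
}

// Applied per-field, only on the backwards-compatible variant of a DTO:
class LegacyDto {
    @JsonSerialize(using = LegacyMapSerializer.class)
    public Map<String, String> attributes;
}
```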

Build swagger2 specs via spring-rest-docs

I like the TDD approach to documenting your RESTful API with Spring REST Docs. However, I love the "API playground" feature enabled by the Swagger specification. I wish there were a way to get the best of both worlds.
Is there a way to build Swagger 2 specs from Spring REST Docs, maybe via building custom request/response preprocessors?
Do you have any thoughts or recommendations?
There's no out-of-the-box support for this in Spring REST Docs at the moment. The issue that you opened will track the possibility of adding such functionality. In the meantime, your best bet would be to look at writing a custom Snippet implementation that generates (part of) a Swagger specification.
Typically, a Spring REST Docs snippet deals with documenting a single resource, whereas a Swagger specification describes an entire service. This means that the Swagger specification Snippet implementation will need to accumulate state somehow before producing a complete specification at the end. There are lots of ways to do that (in memory, multiple files that are combined in a post-processing step, etc.). It's not clear to me that one approach is obviously the right one, so some experimentation would be useful. If you do some experimentation, please comment on the issue that you opened with your findings.
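To make that concrete, a rough, hypothetical sketch of such a Snippet using in-memory accumulation (one of the options above; all names are invented):

```java
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.restdocs.operation.Operation;
import org.springframework.restdocs.snippet.Snippet;

// Collects one entry per documented operation; a post-processing step (not
// shown) would assemble the accumulated state into a full Swagger document.
public class SwaggerPathSnippet implements Snippet {

    // Shared across tests; in-memory is just one of the accumulation strategies
    static final Map<String, String> documentedPaths = new ConcurrentHashMap<>();

    @Override
    public void document(Operation operation) throws IOException {
        String method = operation.getRequest().getMethod().name();
        String path = operation.getRequest().getUri().getPath();
        documentedPaths.put(method + " " + path, operation.getName());
    }
}
```

It would then be passed to the documentation handler alongside the built-in snippets, e.g. document("users", new SwaggerPathSnippet()).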

What is the benefit of using Spring REST Docs comparing to Swagger

Spring REST Docs was released recently and the documentation says:
This approach frees you from the limitations imposed by tools like Swagger
So, I wanted to ask when Spring REST Docs is preferable to Swagger, and which limitations it frees you from.
I just saw a presentation here that touches on your question among other topics:
https://www.youtube.com/watch?v=k5ncCJBarRI&t=26m58s
Swagger doesn't support hypermedia at all / it's URI-centric
Swagger's method of inspecting your code can lag behind your code. It's possible to make a change in your code that Swagger fails to understand and won't process properly until Swagger gets updated.
Swagger requires a lot of annotations, and it's painful to include the descriptive text you want in an API document in annotations.
There are just some things that Swagger can't figure out from inspecting your code.
In any case, these are just a couple of points. The presenter does a much better job discussing it than I could.
I thought I would chime in to give a little bit more context surrounding Swagger, what it is, and what it is not. I believe this might help answer your question.
Swagger 2.0 is being adopted by a lot of big names and big platforms like Microsoft Azure, PayPal, SwaggerHub.com, DynamicApis.com, etc. Something to keep in mind is that Swagger is very simply a specification. It is not a framework. There are a lot of frameworks out there built to generate Swagger output; they crawl through your code, looking at your API information, in order to build the Swagger 2.0 JSON file that represents your API. The Swagger UI that you see your APIs on is driven directly from this Swagger 2.0 JSON file (inspect the traffic with Fiddler to check it out).
It is important to note that a framework created to let you "use Swagger" is not how Swagger has to work (i.e. it is completely up to the implementation of the third-party framework). If the framework you are using to generate your Swagger 2.0 documents and UI is not working for you, then you should be able to find another framework that generates the Swagger artifacts and swap the technologies out.
Hope this helps.
From the Spring REST Docs documentation:
The aim of Spring REST Docs is to help you to produce documentation for your RESTful services that is accurate and readable
This test-driven approach helps to guarantee the accuracy of your service’s documentation. If a snippet is incorrect the test that produces it will fail.
Spring REST docs advantages:
Documentation is written in the test code, so it does not overload the main code with lots of annotations and descriptions
Generated docs and examples are accurate because related test must pass
Docs can provide more specific and descriptive snippets
Format is suitable for publishing
Spring REST docs disadvantages:
Requires more work
Documentation provides request/response examples but doesn't provide interactive tools to modify and try out requests
Swagger advantages:
Quick, automated generation from code
Interactive request execution - can be used for acceptance testing
Built around the OpenAPI Specification
Swagger disadvantages:
For more descriptive documentation it will require a lot of annotations
Tests are not related to the documentation, so the documentation may sometimes deviate from reality
There are some limitations with Swagger and the specific Spring stack.
For example, with "params" in your @RequestMapping you can define more than one method with the same URL and so simplify your code, but Swagger shows you just one method (see the sketch below).
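A hypothetical illustration of that situation (springfox-style setup and all names assumed):

```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Two handlers share /report, distinguished only by the "format" parameter;
// Swagger would display just one of them:
@RestController
public class ReportController {

    @GetMapping(value = "/report", params = "format=pdf")
    public ResponseEntity<byte[]> pdfReport() {
        return ResponseEntity.ok(new byte[0]); // placeholder body
    }

    @GetMapping(value = "/report", params = "format=csv")
    public ResponseEntity<byte[]> csvReport() {
        return ResponseEntity.ok(new byte[0]); // placeholder body
    }
}
```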
One disadvantage of Swagger is that it cannot handle models which have cyclical dependencies. If a model has a cyclical dependency and Swagger is enabled, the Spring Boot server crashes.

client server semantic data transfer with GWT

In short, how do you transfer semantic data between client and server with GWT, and which frameworks do you use? Read on for more details that I've thought about.
For example, using GWT 2.2.0 features like RequestFactory imposes the constraint that Java beans are transferred, while semantic resources are represented as triples and a resource can have a varying set of properties. So RequestFactory itself cannot easily be shaped to transfer semantic-driven data.
A way to do that would be to use RequestFactory with beans that represent triples. Such a bean would have three properties: subject, predicate, and object (see the sketch below). These beans would be transferred to the client, which would know how to query them, change their properties, and then send them back to the server. This approach would, however, need a custom implementation (there are no GWT-based frameworks to represent semantic data on the client side, from what I've searched so far), and that could prove buggy or unoptimized. I've seen this approach in this project: http://code.google.com/p/gwt-odb-ui/ - it used GWT-RPC and implements some classes that represent semantic resources. However, I think it's at an incipient stage, so I'm reluctant to copy its model.
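For illustration, such a triple bean could be as simple as the following sketch (names invented):

```java
import java.io.Serializable;

// A minimal "triple" bean of the kind described above; a RequestFactory
// proxy would expose the same three properties.
public class TripleBean implements Serializable {

    private String subject;   // URI of the resource
    private String predicate; // URI of the property
    private String object;    // URI of another resource, or a literal value

    public String getSubject() { return subject; }
    public void setSubject(String subject) { this.subject = subject; }

    public String getPredicate() { return predicate; }
    public void setPredicate(String predicate) { this.predicate = predicate; }

    public String getObject() { return object; }
    public void setObject(String object) { this.object = object; }
}
```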
Also, I've found that Restlet is a framework that supports the semantic web approach to applications. However, there is no documentation or example on how to use Restlet with the Semantic Web, and perhaps with GWT. Restlet also supports GWT. Does anyone know whether this is a viable solution?
Thank you!
Restlet should work quite well for you. It has a GWT edition that can automatically serialize your triple beans. In addition, it comes with an org.restlet.ext.rdf extension, including a Link class similar to your triple bean idea.
For further documentation, I would suggest the "Restlet in Action" book, which covers GWT and the semantic web from a Restlet and REST point of view.