Automatic semantic versioning of code - version-control

I am trying to implement versioning in my project. The requirement is automatic semantic versioning (e.g. 1.0.1). Is there any way to generate semantic versions automatically, or do we have to assign them manually?
I am trying to save JSON schemas in a database with semantic versioning, so I am looking to implement the versioning on the server side or at the database level.
I have already used Hibernate Envers to implement automatic versioning in the form 0, 1, 2.
Is there a similar technique that can produce semantic versions automatically?

If you're talking about JavaScript, you could try semantic-release. It's a powerful system for automated package publishing that updates the version number in your package.json based on git commit messages.
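As a rough illustration of the idea semantic-release implements (this is not its actual code, just a sketch assuming Conventional Commits-style messages: fix: bumps the patch, feat: bumps the minor, a breaking change bumps the major):

    import java.util.List;

    // Sketch only: derive the next semantic version from conventional-commit
    // style messages since the last release. The prefixes ("feat:", "fix:",
    // "BREAKING CHANGE") follow the Conventional Commits convention.
    public class VersionBumper {

        public static String nextVersion(String current, List<String> commitMessages) {
            String[] parts = current.split("\\.");
            int major = Integer.parseInt(parts[0]);
            int minor = Integer.parseInt(parts[1]);
            int patch = Integer.parseInt(parts[2]);

            boolean breaking = commitMessages.stream()
                    .anyMatch(m -> m.contains("BREAKING CHANGE") || m.matches("^\\w+(\\(.+\\))?!:.*"));
            boolean feature = commitMessages.stream().anyMatch(m -> m.startsWith("feat"));
            boolean fix = commitMessages.stream().anyMatch(m -> m.startsWith("fix"));

            if (breaking) return (major + 1) + ".0.0";
            if (feature)  return major + "." + (minor + 1) + ".0";
            if (fix)      return major + "." + minor + "." + (patch + 1);
            return current; // nothing release-worthy in this batch of commits
        }

        public static void main(String[] args) {
            System.out.println(nextVersion("1.0.1",
                    List.of("fix: handle null schema", "feat: add draft-07 support")));
            // -> 1.1.0
        }
    }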

Related

Using OpenAPI Spec for Semantic Versioning Releases

I am deploying a FastAPI application to Docker and I'm pondering how best to manage the semantic versioning.
Since it is just an API, I would think that MAJOR.MINOR.PATCH could be determined by comparing the previous build's OpenAPI spec to the current build's OpenAPI spec. If the request or response interface has changed, then increment the version accordingly.
Is this something that would work, and is there something that already provides this functionality? If not, what approaches are used today to auto-manage semantic versioning?
Thanks!
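As a rough sketch of the comparison described above (a hypothetical helper, with each spec reduced to a map of "METHOD /path" to a hash of its request/response schemas): any removed or changed operation forces a MAJOR bump, added operations a MINOR bump, everything else a PATCH.

    import java.util.Map;
    import java.util.Set;

    // Hypothetical sketch, not an existing library. A real implementation would
    // parse the two OpenAPI documents and diff request/response schemas in detail.
    public class OpenApiSemver {

        enum Bump { MAJOR, MINOR, PATCH }

        static Bump compare(Map<String, String> previous, Map<String, String> current) {
            // Removing an operation or changing its contract breaks existing clients.
            for (Map.Entry<String, String> op : previous.entrySet()) {
                String newHash = current.get(op.getKey());
                if (newHash == null || !newHash.equals(op.getValue())) {
                    return Bump.MAJOR;
                }
            }
            // New operations are backwards-compatible additions.
            Set<String> added = new java.util.HashSet<>(current.keySet());
            added.removeAll(previous.keySet());
            if (!added.isEmpty()) {
                return Bump.MINOR;
            }
            // Only descriptions, examples, etc. changed.
            return Bump.PATCH;
        }

        public static void main(String[] args) {
            Map<String, String> v1 = Map.of("GET /items", "a1", "POST /items", "b2");
            Map<String, String> v2 = Map.of("GET /items", "a1", "POST /items", "b2",
                                            "DELETE /items/{id}", "c3");
            System.out.println(compare(v1, v2)); // MINOR
        }
    }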

Guidance on whether to use an annotation-based Spring Boot GraphQL server

I am developing a new project with Spring Boot and GraphQL. I am confused about how to proceed because there are two ways to develop it: via a graphqls schema file, or via an annotation-based approach. I prefer the annotation-based approach, but is it stable? Example: https://github.com/leangen/graphql-spqr.
I second AllirionX's answer and just want to add a few details.
Firstly, to answer your question: yes, SPQR has been pretty stable for quite a while now. Many teams are successfully using it in production. The only reason it is still in 0.X versions is the lack of documentation, but an occasional small breaking change in the API does occur.
Secondly, I'd also like to add that going code-first doesn't mean you can't also go contract-first. In fact, I'd argue you should still develop in that style. The only difference is that you get to write your contracts as Java interfaces instead of a new language.
As I highlight in SPQR's README:
Note that developing in the code-first style is still effectively
schema-first, the difference is that you develop your schema not in
yet another language, but in Java, with your IDE, the compiler and all
your tools helping you. Breaking changes to the schema mean the
compilation will fail. No need for linters or other fragile hacks.
So whether the API (as described by the interfaces) changes as the other code changes is entirely up to you. And if you need the SDL for any reason, it can always be generated from the executable schema or the introspection result.
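For illustration, the code-first style with SPQR looks roughly like this (a minimal sketch; the Book type and service are made up, and exact annotation attributes and builder options may vary slightly between 0.x versions):

    import graphql.schema.GraphQLSchema;
    import io.leangen.graphql.GraphQLSchemaGenerator;
    import io.leangen.graphql.annotations.GraphQLArgument;
    import io.leangen.graphql.annotations.GraphQLQuery;

    class Book {
        public String title;
        public String author;
    }

    class BookService {
        // SPQR derives the GraphQL "book(title)" query and the Book type
        // from this Java signature.
        @GraphQLQuery(name = "book")
        public Book byTitle(@GraphQLArgument(name = "title") String title) {
            Book b = new Book();
            b.title = title;
            b.author = "unknown";
            return b;
        }
    }

    public class SchemaFactory {
        public static GraphQLSchema build() {
            return new GraphQLSchemaGenerator()
                    .withOperationsFromSingleton(new BookService())
                    .generate();
        }
    }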
I don't think there is a good or a bad answer to the "how to proceed" question.
There are two different approaches to building your GraphQL server (with graphql-java, graphql-java-tools, or graphql-spqr), and each has its advantages and drawbacks. All of those libraries provide a Spring Boot starter. Note that I have never used graphql-spqr.
Schema first (with graphql-java or graphql-java-tools)
In this approach you first create an SDL file. The GraphQL library will parse it, and "all" you have to do is wire each GraphQL type to its data fetcher. graphql-java-tools can even do the wiring for you (see the resolver sketch after this list).
Advantages
no need to get into the details of how the GraphQL schema is built on the server side
you have a clean graphqls schema file that can be read and used by a client, easing the work of building a GraphQL client
you actually define your API first (the SDL schema): changing the implementation of the API will not require any change on the client side
Drawbacks
no compile-time checks: if something is not wired properly, an exception is thrown at runtime. This can be mitigated by using graphql-java-codegen, which generates the Java classes and interfaces for your GraphQL types, unions, queries, enums, etc.
when using graphql-java directly (no auto-wiring), I felt I had to write long, boring data fetchers, so I switched to graphql-java-tools.
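A minimal schema-first resolver with graphql-java-tools might look like this (sketch only; the SDL snippet, the Book type and the resolver are made up, and the resolver interface package differs between library versions):

    // The query below is assumed to be declared in a classpath schema file,
    // e.g. src/main/resources/schema.graphqls:
    //
    //   type Query { bookByTitle(title: String!): Book }
    //   type Book  { title: String!, author: String }
    //
    // The resolver interface lives in graphql.kickstart.tools in recent releases
    // (older ones used com.coxautodev.graphql.tools), so adjust the import.
    import graphql.kickstart.tools.GraphQLQueryResolver;
    import org.springframework.stereotype.Component;

    @Component
    class BookQueryResolver implements GraphQLQueryResolver {

        // Method name and signature are matched against the "bookByTitle" field
        // declared in the SDL; a mismatch only shows up at application startup.
        public Book bookByTitle(String title) {
            return new Book(title, "unknown");
        }
    }

    class Book {
        private final String title;
        private final String author;

        Book(String title, String author) {
            this.title = title;
            this.author = author;
        }

        public String getTitle()  { return title; }
        public String getAuthor() { return author; }
    }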
Code first (with graphql-java or graphql-java-tools or graphql-spqr)
The GraphQL schema is built programmatically (through annotations with graphql-spqr, or by building a GraphQLSchema object in graphql-java; see the sketch after this list).
Advantages
compile-time checks
no need to maintain both the SDL and the domain classes
Drawbacks
since your schema is generated from your code base, changing the code base changes the API, which might not be great for the clients depending on it.
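For reference, building a GraphQLSchema object directly with graphql-java looks roughly like this (the "hello" field is just an example, and the code-registry builder style shown here belongs to newer graphql-java versions; older ones attached data fetchers to the field definitions):

    import graphql.GraphQL;
    import graphql.Scalars;
    import graphql.schema.DataFetcher;
    import graphql.schema.FieldCoordinates;
    import graphql.schema.GraphQLCodeRegistry;
    import graphql.schema.GraphQLFieldDefinition;
    import graphql.schema.GraphQLObjectType;
    import graphql.schema.GraphQLSchema;

    // The schema is assembled as Java objects, so a typo in a type reference
    // fails at compile time rather than at runtime.
    public class ProgrammaticSchema {

        public static GraphQLSchema build() {
            GraphQLObjectType queryType = GraphQLObjectType.newObject()
                    .name("Query")
                    .field(GraphQLFieldDefinition.newFieldDefinition()
                            .name("hello")
                            .type(Scalars.GraphQLString))
                    .build();

            DataFetcher<String> helloFetcher = env -> "world";

            GraphQLCodeRegistry registry = GraphQLCodeRegistry.newCodeRegistry()
                    .dataFetcher(FieldCoordinates.coordinates("Query", "hello"), helloFetcher)
                    .build();

            return GraphQLSchema.newSchema()
                    .query(queryType)
                    .codeRegistry(registry)
                    .build();
        }

        public static void main(String[] args) {
            GraphQL graphQL = GraphQL.newGraphQL(build()).build();
            System.out.println(graphQL.execute("{ hello }").toSpecification());
            // {data={hello=world}}
        }
    }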
This is my opinion on those different frameworks, and I would be happy to be shown that I am wrong. The ultimate decision depends on your project: its size, whether there is an existing code base, etc.

Managing DB schema in a modular application

I'm building a Node.js + PostgreSQL application based on a plugin system where each component is a separate npm package. The behaviour of the app can be customised by adding and removing components. Each component may need to define its own data types (i.e. DB tables) or potentially extend the core types.
Are there design patterns for doing this kind of thing?
For example is it reasonable to let plugins run any kind of DB manipulation (like WordPress does)? Could some of PostgreSQL's more advanced features help avoid this, e.g. table inheritance? Or maybe use a separate schema for each plugin – any tips for making that work?
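To make the last option concrete, the schema-per-plugin idea amounts to running statements like the following when a plugin is installed (sketched here in plain JDBC with made-up connection details, plugin and table names; the same SQL works from any Node.js driver):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Sketch of the "one PostgreSQL schema per plugin" option: each plugin gets
    // its own namespace so its tables cannot collide with core tables or with
    // other plugins. All identifiers below are placeholders.
    public class PluginSchemaInstaller {

        public static void install(String pluginName) throws Exception {
            // Restrict plugin names so they are safe to use as SQL identifiers.
            if (!pluginName.matches("[a-z_][a-z0-9_]*")) {
                throw new IllegalArgumentException("invalid plugin name: " + pluginName);
            }
            String schema = "plugin_" + pluginName;

            try (Connection conn = DriverManager.getConnection(
                         "jdbc:postgresql://localhost:5432/app", "app", "secret");
                 Statement st = conn.createStatement()) {

                st.execute("CREATE SCHEMA IF NOT EXISTS " + schema);
                // Plugin-owned table, namespaced under its schema...
                st.execute("CREATE TABLE IF NOT EXISTS " + schema + ".settings ("
                         + "  key   text PRIMARY KEY,"
                         + "  value jsonb NOT NULL)");
                // ...that can still reference core tables by their qualified name
                // (public.items is a hypothetical core table).
                st.execute("CREATE TABLE IF NOT EXISTS " + schema + ".item_meta ("
                         + "  item_id bigint PRIMARY KEY REFERENCES public.items(id),"
                         + "  meta    jsonb NOT NULL)");
            }
        }
    }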

Validate compliance of OpenAPI to REST-design best practices

We are generating API documentation from the source code using Swagger. I am now wondering if there is any tool which automatically checks the compliance of the generated OpenAPI document (i.e. the Swagger JSON) with RESTful API design best practices.
For example, Zalando has defined a publicly available guideline for REST design. In my opinion, these guidelines contain many rules which could be checked automatically based on the OpenAPI specification:
“Don’t Break Backward Compatibility” could be checked by comparing OpenAPI documents of different versions.
“Always Return JSON Object as Top-Level Data Structures to Support Extensibility”
“Keep URLs Verb-Free” could possibly be checked by comparing path segments against a dictionary of verbs (a toy sketch of such a check appears below).
…
So far, I have only found tools which check the completeness and naming conventions of an OpenAPI document. Does anyone know a tool with more advanced rules?
UPDATE:
In the meantime I have found a tool called Zally (https://github.com/zalando-incubator/zally). It checks for violations of Zalando's REST API guidelines and is rather easy to configure and extend.
Some of these could be added as rules to openapilint. The backward compatibility check would need to compare two spec versions in search of differences, which is a bit more complex.
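As a toy illustration of the “Keep URLs Verb-Free” rule mentioned above (the verb list and the paths are made up; a real rule, e.g. in Zally or openapilint, would read the paths from the spec itself):

    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    // Flag any path whose segments contain a word from a dictionary of verbs.
    public class VerbFreeUrlCheck {

        private static final Set<String> VERBS =
                Set.of("get", "create", "update", "delete", "fetch", "remove");

        static List<String> violations(List<String> paths) {
            return paths.stream()
                    .filter(path -> {
                        for (String segment : path.split("/")) {
                            // Ignore templated segments such as {orderId}.
                            if (!segment.startsWith("{") && VERBS.contains(segment.toLowerCase())) {
                                return true;
                            }
                        }
                        return false;
                    })
                    .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            System.out.println(violations(List.of("/orders/{orderId}", "/orders/create")));
            // -> [/orders/create]
        }
    }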

How do you create backwards compatible JAX-RS and JAX-WS APIs?

JAX-RS and JAX-WS are great for producing an API. However, they don't address the concern of backwards compatibility at all.
In order to avoid breaking old clients when new capabilities are introduced to the API, you essentially have to accept and provide the exact same input and output format as you did before; many of the XML and JSON parsers out there seem to have a fit if they find a field that doesn't map to anything, or has the wrong type.
Some JSON libraries out there, such as Jackson and Gson, provide a feature where you can specify a different input/output representation for a given object based on a runtime setting, which seems like a suitable way to handle versioning for many cases. This makes it possible to provide backwards compatibility by annotating added and removed fields so they only appear according to the version of the API in use by the client.
Neither JAXB nor any other XML data-binding library I have found to date has decent support for this concept, never mind being able to re-use the same annotations for both JSON and XML. Adding it to the JAXB RI or EclipseLink MOXy seems potentially possible, but daunting.
The other approach to versioning seems to be to version all the classes that have changed, often by creating a new package each time the API is published and making copies of all modified DTO, Service, and Resource classes in the new package so that all the type information is versioned for the binding and dispatch systems. This approach seems more laborious to me.
My question is: how have you designed your Java API providers for backwards compatibility? What worked, what didn't?
Links to case studies or blog posts on the subject are much appreciated; I've done some googling but haven't found much discussion of this.
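One concrete form of the Jackson feature described above is @JsonView, which lets the same DTO be serialized differently depending on a view selected at runtime (a minimal sketch; the view classes and fields are illustrative):

    import com.fasterxml.jackson.annotation.JsonView;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class VersionedViews {

        // Marker types for API versions; V2 extends V1, so V2 output includes V1 fields.
        public static class V1 {}
        public static class V2 extends V1 {}

        public static class Customer {
            @JsonView(V1.class)
            public String name = "Ada";

            // Added in version 2 of the API; hidden from V1 clients.
            @JsonView(V2.class)
            public String nickname = "ada";
        }

        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();
            Customer c = new Customer();

            System.out.println(mapper.writerWithView(V1.class).writeValueAsString(c));
            // {"name":"Ada"}
            System.out.println(mapper.writerWithView(V2.class).writeValueAsString(c));
            // {"name":"Ada","nickname":"ada"}
        }
    }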
I'm the tech lead for EclipseLink MOXy, and I'm very interested in your versioning requirements. You can reach me through my blog:
http://bdoughan.blogspot.com/p/contact_01.html
MOXy offers a means to represent the JAXB metadata as an XML file. You can leverage this to create multiple mappings for the same object model:
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/EclipseLink-OXM.XML
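Wiring such an external mapping file in looks roughly like this (a sketch, assuming MOXy is configured as the JAXB provider via jaxb.properties; the Customer class and the per-version bindings files are placeholders, see the linked example for the metadata format):

    import java.util.HashMap;
    import java.util.Map;
    import javax.xml.bind.JAXBContext;
    import org.eclipse.persistence.jaxb.JAXBContextProperties;

    class Customer {
        public String name;
    }

    public class VersionedContexts {

        public static JAXBContext forVersion(String apiVersion) throws Exception {
            Map<String, Object> props = new HashMap<>();
            // e.g. bindings-v1.xml and bindings-v2.xml map the same Customer class
            // to two different representations, selected per client version.
            props.put(JAXBContextProperties.OXM_METADATA_SOURCE,
                      "org/example/bindings-" + apiVersion + ".xml");
            return JAXBContext.newInstance(new Class[] { Customer.class }, props);
        }
    }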