Guidance on whether to use an annotation-based Spring Boot GraphQL server

I am developing a new project with Spring Boot and GraphQL. I am confused about how to proceed because there are two ways to develop it: one via a graphqls schema file, the other via an annotation-based approach. I prefer the annotation-based approach, but is it stable? Example: https://github.com/leangen/graphql-spqr.

I second AllirionX's answer and just want to add a few details.
Firstly, to answer your question: yes, SPQR has been pretty stable for quite a while now, and many teams are successfully using it in production. The main reason it is still in 0.X versions is the lack of documentation, though an occasional small breaking change in the API does still occur.
Secondly, I'd like to add that going code-first doesn't mean you can't also go contract-first. In fact, I'd argue you should still develop in that style. The only difference is that you get to write your contracts as Java interfaces instead of in a new language.
As I highlight in SPQR's README:
Note that developing in the code-first style is still effectively schema-first, the difference is that you develop your schema not in yet another language, but in Java, with your IDE, the compiler and all your tools helping you. Breaking changes to the schema mean the compilation will fail. No need for linters or other fragile hacks.
So whether the API (as described by the interfaces) changes as the other code changes is entirely up to you. And if you need the SDL for any reason, it can always be generated from the executable schema or the introspection result.
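To make that concrete, here is a minimal sketch of the code-first style with SPQR. The SchemaDemo, UserService, and User names are invented for the example; only the SPQR calls themselves (GraphQLSchemaGenerator, @GraphQLQuery, @GraphQLArgument) are the library's real API:

    import graphql.schema.GraphQLSchema;
    import io.leangen.graphql.GraphQLSchemaGenerator;
    import io.leangen.graphql.annotations.GraphQLArgument;
    import io.leangen.graphql.annotations.GraphQLQuery;

    public class SchemaDemo {

        // Hypothetical domain type for the example
        public static class User {
            public final Long id;
            public final String name;
            public User(Long id, String name) { this.id = id; this.name = name; }
            public Long getId() { return id; }
            public String getName() { return name; }
        }

        // A plain Java class acts as the contract: annotated methods become schema fields
        public static class UserService {
            @GraphQLQuery(name = "user", description = "Fetch a user by id")
            public User findUser(@GraphQLArgument(name = "id") Long id) {
                return new User(id, "Jane Doe"); // stand-in for a real lookup
            }
        }

        public static void main(String[] args) {
            // The executable schema is generated straight from the annotated class
            GraphQLSchema schema = new GraphQLSchemaGenerator()
                    .withOperationsFromSingleton(new UserService())
                    .generate();
            System.out.println(schema.getQueryType().getFieldDefinitions());
        }
    }

If a client ever needs the SDL, it can be printed from this executable schema with graphql-java's SchemaPrinter.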

I don't think there is a good or a bad answer to the "how to proceed" question.
There are two different approaches to building your GraphQL server (with graphql-java, graphql-java-tools, or graphql-spqr), and each has its advantages and drawbacks. All of those libraries provide a Spring Boot starter. Note that I have never used graphql-spqr.
Schema first (with graphql-java or graphql-java-tools)
In this approach you first create an SDL file. The GraphQL library will parse it, and "all" you have to do is wire each GraphQL type to its data fetcher (graphql-java-tools can even do the wiring for you; see the sketch at the end of this section).
Advantage
no need to get into the details of how the GraphQL schema is built server side
you have a nice graphqls schema file that can be read and used by a client, easing the task of building a GraphQL client
you actually define your API first (the SDL schema): changing the implementation of the API will not require any change client side
Inconvenient
no compile-time check. If something is not wired properly, an exception will be thrown at runtime. This can be mitigated by using graphql-java-codegen, which will generate the Java classes and interfaces for your GraphQL types, unions, queries, enums, etc.
when using graphql-java (no auto-wiring), I felt I had to write long, boring data fetchers, so I switched to graphql-java-tools.
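For illustration, a minimal schema-first sketch with graphql-java-tools. The schema content, the Book type, and the resolver body are invented for the example; GraphQLQueryResolver is the library's real marker interface (its package is com.coxautodev.graphql.tools in older versions and graphql.kickstart.tools in newer ones):

    // schema.graphqls (conventionally on the classpath):
    //   type Query { bookById(id: ID!): Book }
    //   type Book  { id: ID!  title: String }

    import graphql.kickstart.tools.GraphQLQueryResolver;

    public class Query implements GraphQLQueryResolver {

        // Matched to the "bookById" field of type Query purely by naming convention
        public Book bookById(String id) {
            return new Book(id, "The Trial"); // stand-in for a real lookup
        }

        // Hypothetical domain type for the example
        public static class Book {
            public final String id;
            public final String title;
            public Book(String id, String title) { this.id = id; this.title = title; }
            public String getId() { return id; }
            public String getTitle() { return title; }
        }
    }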
Code first (with graphql-java, graphql-java-tools, or graphql-spqr)
The GraphQL schema is built programmatically (through annotations with graphql-spqr, or by building a GraphQLSchema object directly with graphql-java, as sketched below).
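A minimal sketch of the programmatic graphql-java style, for comparison (the "hello" field is invented; the builder and code-registry calls are the library's real API in recent versions):

    import graphql.Scalars;
    import graphql.schema.DataFetcher;
    import graphql.schema.FieldCoordinates;
    import graphql.schema.GraphQLCodeRegistry;
    import graphql.schema.GraphQLFieldDefinition;
    import graphql.schema.GraphQLObjectType;
    import graphql.schema.GraphQLSchema;

    public class ProgrammaticSchema {

        public static GraphQLSchema build() {
            // Shape of the Query type, declared field by field
            GraphQLObjectType queryType = GraphQLObjectType.newObject()
                    .name("Query")
                    .field(GraphQLFieldDefinition.newFieldDefinition()
                            .name("hello")
                            .type(Scalars.GraphQLString))
                    .build();

            // Recent graphql-java versions register data fetchers separately
            GraphQLCodeRegistry registry = GraphQLCodeRegistry.newCodeRegistry()
                    .dataFetcher(FieldCoordinates.coordinates("Query", "hello"),
                            (DataFetcher<String>) env -> "world")
                    .build();

            return GraphQLSchema.newSchema()
                    .query(queryType)
                    .codeRegistry(registry)
                    .build();
        }
    }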
Advantage
compile-time check
no need to maintain both the SDL and the Domain class
Inconvenient
as your schema is generated from your code base, changing your code base can change the API, which might not be great for the clients depending on it.
This is my opinion of those different frameworks, and I would be happy to be shown that I am wrong. The ultimate decision depends on your project: its size, whether there is an existing code base, etc.

What is the real benefit of using GraphQL?

I have been reading articles on the web about the benefits of GraphQL, but so far I have not been able to find a single benefit of it.
The most common benefits mentioned in those articles are these:
No overfetching with GraphQL.
Reducing the number of calls made from the client side.
Granular control over data loading.
Evolving your API without versions.
All of the above make sense, but it is not GraphQL itself that provides these benefits. Any second-layer API written in Java, Python, or any other language would be able to provide them too. It is basically introducing another layer of abstraction above the data retrieval systems (REST or whatever) and decoupling the client side from that layer. Once you do that, everything you can do with GraphQL can also be done in any other language.
Anyone can implement, say, a Scala server that retrieves data from various APIs, integrates it, creates objects internally, and feeds the client only the relevant part of the data, with total control over that data. Such an API can easily be versioned and released accordingly. Considering how cumbersome GraphQL's syntax is, and how difficult it is to build good caching around it, I can't see why you would really use it.
So the overall question is: are there any benefits of GraphQL that are provided to the application because of GraphQL itself, and not because you implemented another layer of abstraction between your applications and your APIs?
Best practices known as REST existed earlier, too.
GraphQL is more standardized than REST, safer (no injections), and its syntax gives great flexibility in the face of quickly changing client needs.
It's just a good standard of best practices.
I feel GraphQL is another example of overengineering. I would say the best standard and practice is keeping it simple.
Breaking down an object and building a custom one before sending it to the client is very basic.

Can't find the best way to apply good code design techniques in software development

[Pre]
I have to say that I'm a complete newbie who is trying to put together important puzzle pieces such as DDD, TDD, MVVM, and EF Core. I have about 10 years of Windows Forms development experience, done in a completely wrong manner, and after I joined Pluralsight I understood that I had simply wasted those 10 years, which is really sad :).
[Problem description]
I have an app that I want to rewrite from scratch using the latest and greatest techniques I've learned over the last 6 months on Pluralsight, but the problem is that this new knowledge is stopping me, because I'm simply afraid that I'll do it wrong again... (that is stupid, I know, but it is what it is).
So, back to my questions. I have a big problem domain and pretty well-documented business logic, which I have to turn into code. I understand that my starting point is designing the data layer, and for this purpose I want to use Entity Framework Core (I saw Julie Lerman's course on Pluralsight; I think she is amazing, and she inspired me to use EF Core as the ORM for my app). But at the same time, my lack of experience produces more questions than what I've learned on Pluralsight answers, and I will try to write them all down (please don't judge me too hard):
It looks like I will need 2 or even more data model projects in my solution, and here is why: I have multiple document set types, and each type contains more than one reference book used to generate unique file names and data sheets. But it looks weird to me to have 3 data model projects such as MyApp.PackType1.DataModel and MyApp.PackType2.DataModel, each of them with EF Core installed, and each of them generating its own database based on the DbContext defined by EF. Isn't that very redundant, or is this the correct way?
I don't understand how to join these multiple data model projects, including a Shared Kernel, into one nice model.
I don't understand what the best way to design my data classes is. Should they be plain POCOs, or can I design them as nice-looking classes with private fields and public properties? What are the best practices here?
Also, I don't understand what the best practice is for using the MVVM pattern on top of all that, and whether MVVM is applicable at all in this case.
Should I keep my tests in separate projects like MyApp.PackType1.DataModel.Tests, or in the same project?
Best regards,
Maks!
P.S.
Apologies for the unclear definitions and questions; English isn't my native language.
It's complicated to answer your question because you have asked about a lot of details, but I am going to provide a brief answer and I hope it will be helpful.
1. You can have only one model for your entities (DDD) and create sub-models from it in your end-level projects (Web API or UI).
2. See point #1.
3. You have to create an entity-layer project that represents your database, and then you can create DTOs for specific scenarios.
4. From my point of view, use Angular; you can use another UI framework such as React or Vue.js, but I prefer Angular for building UI interfaces and consuming a .NET Core Web API from the client.
5. Create unit tests and integration tests for your Web API projects, and additionally you can use an in-memory database provider for the tests.
Maybe this guide will be useful: https://www.codeproject.com/Articles/1160586/Entity-Framework-Core-for-Enterprise
Regards
Hm, multiple DbContexts (models) usually come about when you have distinct databases. The general rule is one context = one database. Exceptions can occur when there are a lot of tables that can be grouped functionally, but there are downsides to that approach.
A DbContext is a repository pattern, but for individual tables. Using a Unit of Work pattern, layered with a custom repository provider, would allow you to make it "appear" as a single database, hiding the complexity from the front end.
Your entity descriptions are usually created as plain POCOs. You can get creative with different DTOs.
In a nutshell, the MVVM pattern goes like this:
Request goes from the UI to a controller
Controller possibly issues multiple calls to the data layer to gather data
Data is assembled into a single ViewModel (everything the page needs)
ViewModel is returned to the UI
The beauty of the approach is the single round trip (request/response) to the UI.
Separate projects, in my opinion. There are techniques to spoof the database connection using EF so that you are not using "live" data.
That CodeProject article will come in handy.

Avoid duplication in API development with Play Framework

I developed a REST API with Play 2.2.0. Some controllers expose GET methods, others expose POST methods with authentication, etc.
I developed the client using Play as well, but I have a problem: how can I avoid duplicating the model layer between the two applications?
In the server application, I have a Model Country(code, name).
In the client I am able to list countries and create new ones.
Currently, I have a Country class on both sides. When I fetch countries, I deserialize them. The problem is that if I add a field to Country on the server, I have to update the client as well.
How can I share the Country entity between applications ?
P.S.: I don't want to create a dependency between the API and the client, as the client could be developed in another language or framework.
Thanks
This is not very specific to Play Framework; it is more of a general question. One option is to create reusable representations of the data in your protocol (the actual data structures you send between your nodes) and accept tight coupling in representation and language. Many projects do it like this, since they know they will have the same platform throughout their architecture.
The other option is to duplicate all of, or only the needed parts of, the parsing/generating code in each part of the architecture; this way you get looser coupling and can use any language in the different parts.
There are also data protocols/tools (Protocol Buffers or Thrift, for example) that describe the representation in a protocol-specific way and can then generate representations in various programming languages.
So as you see, it's all about pros and cons; neither solution is "the right way (tm)" to do this. You will have to think about your specific system/architecture and decide which pros are most valuable and which cons are most costly to you.
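To make the first option concrete, here is a hedged sketch: extract the shared DTOs into a small module that both Play applications depend on. The module name and class layout are invented for the example:

    // In a shared sub-project (e.g. "common"), added as a dependency
    // of both the server and the client builds
    package common.model;

    public class Country {

        public String code;
        public String name;

        public Country() {
            // no-arg constructor so JSON libraries can instantiate it
        }

        public Country(String code, String name) {
            this.code = code;
            this.name = name;
        }
    }

Adding a field to Country then means recompiling both applications against the new version of the shared module, instead of editing two copies by hand. The trade-off, as noted above, is that a non-JVM client cannot reuse this class and would still need its own representation.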
Well, I suggest sending the client a template of what it should display; on the client, take advantage of JS templating frameworks, so you can tell the client how to display the data dynamically... and if the client wants to override the template, well, that's more work.
We could call this approach REST component oriented...
Just a suggestion :)
It should work!

Use PostSharp to Generate a Type

We are currently using PostSharp for its standard functionality (logging, caching, transactions, and so on).
We also dynamically generate some custom classes at runtime, using Reflection.Emit. This obviously slows startup, and as we need to add more dynamic type generation, and since all the information for the dynamic types is known at build time, I am wondering whether we can use PostSharp to do this instead.
So, the question itself is, can I use PostSharp to achieve what I can do with Reflection.Emit, but at build time?
Regards
PostSharp itself uses PostSharp.Sdk to manipulate the binary code, but that API is not publicly documented or supported at the moment, so it's not future-proof to rely on it in your project.
The closest you can get with the documented API is probably introducing interfaces, methods, and properties: http://doc.postsharp.net/content/code-injections

How do you create backwards compatible JAX-RS and JAX-WS APIs?

JAX-RS and JAX-WS are great for producing an API. However, they don't address the concern of backwards compatibility at all.
In order to avoid breaking old clients when new capabilities are introduced to the API, you essentially have to accept and produce exactly the same input and output formats as you did before; many of the XML and JSON parsers out there seem to have a fit if they find a field that doesn't map to anything or has the wrong type.
Some JSON libraries out there, such as Jackson and Gson, provide a feature where you can specify a different input/output representation for a given object based on a runtime setting, which seems like a suitable way to handle versioning for many cases. This makes it possible to provide backwards compatibility by annotating added and removed fields so they only appear according to the version of the API in use by the client.
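With Jackson, for instance, that feature is the @JsonView annotation. A minimal sketch; the view and field names are invented for the example:

    import com.fasterxml.jackson.annotation.JsonView;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class VersionedViews {

        // Marker classes representing API versions; V2 inherits everything in V1
        public static class V1 {}
        public static class V2 extends V1 {}

        public static class Customer {
            @JsonView(V1.class)
            public String name = "Jane";

            @JsonView(V2.class) // field added in v2, hidden from v1 clients
            public String email = "jane@example.com";
        }

        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();
            // Prints {"name":"Jane"} -- what a v1 client sees
            System.out.println(mapper.writerWithView(V1.class)
                    .writeValueAsString(new Customer()));
            // Prints {"name":"Jane","email":"jane@example.com"} -- the v2 view
            System.out.println(mapper.writerWithView(V2.class)
                    .writeValueAsString(new Customer()));
        }
    }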
Neither JAXB nor any other XML data-binding library I have found to date has decent support for this concept, never mind being able to reuse the same annotations for both JSON and XML. Adding it to the JAXB RI or EclipseLink MOXy seems possible in principle, but daunting.
The other approach to versioning seems to be to version all the classes that have changed, often by creating a new package each time the API is published and making copies of all modified DTO, Service, and Resource classes in the new package, so that all the type information is versioned for the binding and dispatch systems (a sketch follows below). This approach seems more laborious to me.
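A minimal sketch of that versioned-package approach with JAX-RS; the package, path, and DTO names are all invented for the example:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;

    // Lives in, say, com.example.api.v2 -- the com.example.api.v1 copy stays frozen
    @Path("/v2/customers")
    @Produces("application/json")
    public class CustomerResource {

        @GET
        @Path("{id}")
        public Customer get(@PathParam("id") long id) {
            Customer c = new Customer(); // stand-in for a real lookup
            c.name = "Jane";
            c.email = "jane@example.com"; // field that only exists in the v2 DTO
            return c;
        }

        // v2 copy of the DTO; the v1 package keeps its own Customer without email
        public static class Customer {
            public String name;
            public String email;
        }
    }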
My question is: how have you designed your Java API providers for backwards compatibility? What worked, and what didn't?
Links to case studies or blog posts on the subject much appreciated; I've done some googling but haven't been finding much discussion of this.
I'm the tech lead for EclipseLink MOXy, and I'm very interested in your versioning requirements. You can reach me through my blog:
http://bdoughan.blogspot.com/p/contact_01.html
MOXy offers a means to represent the JAXB metadata as an XML file. You can leverage this to create multiple mappings for the same object model:
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/EclipseLink-OXM.XML
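For what it's worth, a sketch of how such an external mapping file can be plugged in when creating the JAXBContext. The Customer class and the mapping file name are hypothetical; JAXBContextFactory and the OXM_METADATA_SOURCE property are MOXy's real API:

    import java.util.HashMap;
    import java.util.Map;
    import javax.xml.bind.JAXBContext;
    import org.eclipse.persistence.jaxb.JAXBContextFactory;
    import org.eclipse.persistence.jaxb.JAXBContextProperties;

    public class MoxyVersioning {

        // Hypothetical DTO mapped differently per API version
        public static class Customer {
            public String name;
            public String email;
        }

        // Builds a JAXBContext whose mappings come from a version-specific
        // OXM file, e.g. contextFor("customer-bindings-v2.xml")
        public static JAXBContext contextFor(String oxmFile) throws Exception {
            Map<String, Object> props = new HashMap<>();
            props.put(JAXBContextProperties.OXM_METADATA_SOURCE, oxmFile);
            return JAXBContextFactory.createContext(
                    new Class<?>[] { Customer.class }, props);
        }
    }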