I'm working with the triple store API in Sesame. It has a built-in reasoner that works with RDFS. Since I also work with OWL, I want to add another semantic reasoner such as Pellet or ITM.
It appears that the SAIL API supports customisable reasoning and is extensible to other RDF-based languages.
Does anyone know how to do this?
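From what I've seen so far, in Sesame 2.x the reasoner is just another SAIL layered on top of the base store, so I believe a custom or third-party reasoner (Pellet shipped a Sesame adapter at one point) would plug in at the same position as the built-in RDFS inferencer. A minimal sketch of the standard stack:

    import org.openrdf.repository.Repository;
    import org.openrdf.repository.sail.SailRepository;
    import org.openrdf.sail.inferencer.fc.ForwardChainingRDFSInferencer;
    import org.openrdf.sail.memory.MemoryStore;

    public class SailStackExample {
        public static void main(String[] args) throws Exception {
            // SAILs are stackable: the inferencer wraps the base store and
            // materialises inferred statements as data is added.
            Repository repo = new SailRepository(
                    new ForwardChainingRDFSInferencer(new MemoryStore()));
            repo.initialize();
            // A custom reasoning SAIL (e.g. an OWL reasoner adapter) would be
            // stacked in exactly the same position, wrapping the MemoryStore.
        }
    }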
I am developing a new project with Spring Boot and GraphQL. I am confused about how to proceed, because there are two ways to develop it: via the graphqls file, or via an annotation-based approach. I prefer the annotation-based approach, but is it stable? Example: https://github.com/leangen/graphql-spqr.
I second AllirionX's answer and just want to add a few details.
Firstly, to answer your question: yes, SPQR has been pretty stable for quite a while now. Many teams are successfully using it in production. The only reason it is still in 0.X versions is the lack of documentation, but an occasional small breaking change in the API does occur.
Secondly, I'd also like to add that going code-first doesn't mean you can't also go contract-first. In fact, I'd argue you should still develop in that style. The only difference is that you get to write your contracts as Java interfaces instead of a new language.
As I highlight in SPQR's README:
Note that developing in the code-first style is still effectively schema-first, the difference is that you develop your schema not in yet another language, but in Java, with your IDE, the compiler and all your tools helping you. Breaking changes to the schema mean the compilation will fail. No need for linters or other fragile hacks.
So whether the API (as described by the interfaces) changes as the other code changes is entirely up to you. And if you need the SDL for any reason, it can always be generated from the executable schema or the introspection result.
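To make the "contracts as Java interfaces" idea concrete, here is a minimal sketch (the BookService interface and Book type are invented for the example; only the SPQR calls are real API):

    import graphql.GraphQL;
    import graphql.schema.GraphQLSchema;
    import io.leangen.graphql.GraphQLSchemaGenerator;
    import io.leangen.graphql.annotations.GraphQLArgument;
    import io.leangen.graphql.annotations.GraphQLQuery;

    // The interface *is* the contract: a breaking schema change is now a
    // breaking Java change, so the compiler catches it.
    interface BookService {
        @GraphQLQuery(name = "book")
        Book findBook(@GraphQLArgument(name = "id") String id);
    }

    class Book {
        private final String id;
        private final String title;
        Book(String id, String title) { this.id = id; this.title = title; }
        public String getId() { return id; }
        public String getTitle() { return title; }
    }

    class BookServiceImpl implements BookService {
        public Book findBook(String id) {
            return new Book(id, "Title of " + id);
        }
    }

    public class SpqrExample {
        public static void main(String[] args) {
            // SPQR derives the executable schema from the annotated interface.
            GraphQLSchema schema = new GraphQLSchemaGenerator()
                    .withOperationsFromSingleton(new BookServiceImpl(), BookService.class)
                    .generate();
            GraphQL graphQL = GraphQL.newGraphQL(schema).build();
            System.out.println(graphQL.execute("{ book(id: \"1\") { title } }").getData().toString());
        }
    }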
I don't think there is a good or a bad answer to the "how to proceed" question.
There are two different approaches to building your GraphQL server (with graphql-java, graphql-java-tools, or graphql-spqr), and each has its advantages and drawbacks. All of those libraries provide a Spring Boot starter. Note that I have never used graphql-spqr.
Schema first (with graphql-java or graphql-java-tools)
In this approach you first create an SDL file. The GraphQL library will parse it, and "all" you have to do is wire each GraphQL type to its data fetcher. graphql-java-tools can even do the wiring for you, as sketched after the lists below.
Advantages
no need to get into the details of how the GraphQL schema is built server side
you have a nice graphqls schema file that can be read and used by a client, easing the work of building a GraphQL client
you actually define your API first (the SDL schema): changing the implementation of the API will not require any changes on the client side
Drawbacks
no compile-time checks. If something is not wired properly, an exception will be thrown at runtime. But this can be mitigated by using graphql-java-codegen, which will generate the Java classes and interfaces for your GraphQL types, unions, queries, enums, etc.
when using graphql-java (no auto wiring), I felt I had to write long, boring data fetchers, so I switched to graphql-java-tools.
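For illustration, here is a minimal schema-first sketch with graphql-java-tools (class and file names are invented, and package names vary between graphql-java-tools versions):

    import graphql.kickstart.tools.GraphQLQueryResolver;
    import graphql.kickstart.tools.SchemaParser;
    import graphql.schema.GraphQLSchema;

    // Matching SDL, e.g. in src/main/resources/schema.graphqls:
    //   type Query { book(id: ID!): Book }
    //   type Book { id: ID! title: String }

    class Book {
        private final String id;
        private final String title;
        Book(String id, String title) { this.id = id; this.title = title; }
        public String getId() { return id; }
        public String getTitle() { return title; }
    }

    // Matched to the Query type's fields by naming convention.
    class QueryResolver implements GraphQLQueryResolver {
        public Book book(String id) {
            return new Book(id, "Title of " + id);
        }
    }

    public class SchemaFirstExample {
        public static void main(String[] args) {
            GraphQLSchema schema = SchemaParser.newParser()
                    .file("schema.graphqls")      // loaded from the classpath
                    .resolvers(new QueryResolver())
                    .build()
                    .makeExecutableSchema();
        }
    }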
Code first (with graphql-java, graphql-java-tools, or graphql-spqr)
The GraphQL schema is built programmatically (through annotations with graphql-spqr, or by building a GraphQLSchema object with graphql-java), as sketched below.
Advantages
compile-time checks
no need to maintain both the SDL and the domain classes
Drawbacks
since your schema is generated from your code base, changing your code base will change the API, which might not be great for the clients depending on it
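For contrast, here is roughly what the programmatic graphql-java style looks like (a minimal, invented "hello" query; the builder calls are real graphql-java API):

    import graphql.GraphQL;
    import graphql.Scalars;
    import graphql.schema.DataFetcher;
    import graphql.schema.FieldCoordinates;
    import graphql.schema.GraphQLCodeRegistry;
    import graphql.schema.GraphQLFieldDefinition;
    import graphql.schema.GraphQLObjectType;
    import graphql.schema.GraphQLSchema;

    public class CodeFirstExample {
        public static void main(String[] args) {
            GraphQLObjectType query = GraphQLObjectType.newObject()
                    .name("Query")
                    .field(GraphQLFieldDefinition.newFieldDefinition()
                            .name("hello")
                            .type(Scalars.GraphQLString))
                    .build();

            // In recent graphql-java versions, data fetchers are registered
            // separately, keyed by (type, field) coordinates.
            DataFetcher<String> helloFetcher = env -> "world";
            GraphQLCodeRegistry registry = GraphQLCodeRegistry.newCodeRegistry()
                    .dataFetcher(FieldCoordinates.coordinates("Query", "hello"), helloFetcher)
                    .build();

            GraphQLSchema schema = GraphQLSchema.newSchema()
                    .query(query)
                    .codeRegistry(registry)
                    .build();

            Object data = GraphQL.newGraphQL(schema).build().execute("{ hello }").getData();
            System.out.println(data); // {hello=world}
        }
    }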
This is my opinion of those different frameworks, and I would be happy to be shown that I am wrong. The ultimate decision depends on your project: its size, whether there is an existing code base, etc.
Since Jersey projects start with the MOXy JSON serializer by default, I used it for a multi-module REST project. But writing clients for this REST API had its quirks, due to this known problem: MOXy doesn't work really well with Maps.
I've migrated the development branch of my code to Jackson, where hash map serialization works well without the entry[] array, making it easier to write new non-Jersey clients for the project. But I would also need to keep backwards compatibility somehow, if possible, for the clients that have already been written. How could I achieve this with Jackson?
Sadly, some hash maps don't have predetermined keys, so the solution shown in the link can't be implemented, if I'm not mistaken.
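One direction I'm considering (a sketch only: the entry/key/value field names are my assumption about MOXy's legacy shape, so they would need to be verified against real output) is to register a custom Jackson serializer that reproduces the old entry-array format, and use that mapper only for legacy clients:

    import java.io.IOException;
    import java.util.LinkedHashMap;
    import java.util.Map;

    import com.fasterxml.jackson.core.JsonGenerator;
    import com.fasterxml.jackson.databind.JsonSerializer;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import com.fasterxml.jackson.databind.SerializerProvider;
    import com.fasterxml.jackson.databind.module.SimpleModule;

    // Writes {"entry":[{"key":...,"value":...}]} instead of a plain JSON object.
    class LegacyMapSerializer extends JsonSerializer<Map<?, ?>> {
        @Override
        public void serialize(Map<?, ?> map, JsonGenerator gen, SerializerProvider provider)
                throws IOException {
            gen.writeStartObject();
            gen.writeArrayFieldStart("entry");
            for (Map.Entry<?, ?> e : map.entrySet()) {
                gen.writeStartObject();
                gen.writeObjectField("key", e.getKey());
                gen.writeObjectField("value", e.getValue());
                gen.writeEndObject();
            }
            gen.writeEndArray();
            gen.writeEndObject();
        }
    }

    public class LegacyMapExample {
        @SuppressWarnings("unchecked")
        public static void main(String[] args) throws Exception {
            // Keep two mappers: the default one for new clients, and this
            // "legacy" one, selected e.g. per media type or URL version.
            ObjectMapper legacyMapper = new ObjectMapper();
            SimpleModule module = new SimpleModule();
            module.addSerializer((Class<Map<?, ?>>) (Class<?>) Map.class, new LegacyMapSerializer());
            legacyMapper.registerModule(module);

            Map<String, String> m = new LinkedHashMap<>();
            m.put("a", "1");
            System.out.println(legacyMapper.writeValueAsString(m));
            // {"entry":[{"key":"a","value":"1"}]}
        }
    }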
I have read about Swagger definitions and their format, and I understand that a Swagger definition is used to describe an API.
Would it be better to write the Swagger definition first and then the API, or to write the API first and then the Swagger definition? I have no experience with this, and I would like to write a REST API and a Swagger file for an application.
I don't think the order really matters. Both methods are given legitimacy in the Swagger Getting Started Guide. The key thing is that one should be generated from the other, so you don't have to manually maintain both.
In the comments, cricket_007 has already mentioned that tools exist to generate the web service skeleton from the swagger definition. Using these tools, it would make sense to write the swagger definition first. This is the Top Down approach from the getting started guide.
From the Swagger getting started guide linked above, you can see that there are also tools available to generate Swagger docs from Java code, provided you are using a particular framework like JAX-RS. This is the Bottom Up approach.
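As a small illustration of the bottom-up direction (the resource is invented; the annotations are from swagger-core 1.x), the Swagger definition is generated by scanning annotated JAX-RS resources:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    import io.swagger.annotations.Api;
    import io.swagger.annotations.ApiOperation;

    // swagger-core scans these annotations at runtime and serves the
    // generated definition (typically at /swagger.json).
    @Api(value = "users")
    @Path("/users")
    public class UserResource {

        @GET
        @Produces(MediaType.APPLICATION_JSON)
        @ApiOperation(value = "List all users", notes = "Returns every known user")
        public String listUsers() {
            return "[]"; // placeholder body; the annotations are the point here
        }
    }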
It comes down to personal preference. If you are the kind of person that would rather not "couple" your code base to Swagger and make sure that your application does not depend on Swagger to work, then the bottom up approach is best. However, if you want to fully embrace the Swagger tool chain and really "buy in" to it, then the top down approach is probably the best.
Also, if this is for educational purposes, then think about what you want to learn about. If you want to learn about writing JSON REST APIs from scratch (or using something like JAX-RS), then the bottom up approach will teach you more. However, if your goal is to learn as much as possible about Swagger, then the top down approach will be better.
I like what I read about Lift, and I like the concept of Dart, but I have too little experience with either to decide whether thinking about using them in the same project even makes sense.
I want both: to write structured client-side code, and to not have to worry as much about the OWASP Top 10.
Can they work together? Does it make sense at all? Did anyone try?
I have integrated Dart with Lift using REST services together with Dart's XMLHttpRequest, and liked the result. I would say that any web framework that makes building RESTful services as easy as Lift does is a perfect match for Dart. On the other hand, web frameworks such as JSF, which require components to take part in an advanced life cycle, are probably not a good fit.
That being said, having the same language on the client and the server side is definitely a win, so when the Dart VM matures a bit more and starts to include RESTful functionality similar to what Express does for NodeJS, I would probably use that instead.
Baby steps are already being taken toward including HTTP support in Dart similar to what Node provides on V8. Another important point for Dart is that it allows the browser and server to share rich objects, like what GWT does for Java, and this should further ease building advanced web applications with Dart.
JAX-RS and JAX-WS are great for producing an API. However, they don't address the concern of backwards compatibility at all.
In order to avoid breaking old clients when new capabilities are introduced to the API, you essentially have to accept and produce the exact same input and output formats as you did before; many of the XML and JSON parsers out there seem to have a fit if they find a field that doesn't map to anything or has the wrong type.
Some JSON libraries out there, such as Jackson and Gson, provide a feature where you can specify a different input/output representation for a given object based on a runtime setting, which seems like a suitable way to handle versioning for many cases. This makes it possible to provide backwards compatibility by annotating added and removed fields so they only appear according to the version of the API in use by the client.
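For Jackson, the feature I'm referring to is JSON Views; a sketch of how views can model API versions (class names invented):

    import com.fasterxml.jackson.annotation.JsonView;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class JsonViewVersioning {
        // Each API version is a view; V2 inherits everything visible in V1.
        static class V1 {}
        static class V2 extends V1 {}

        static class Customer {
            @JsonView(V1.class)
            public String name = "Alice";

            // Field added in version 2 of the API: hidden from V1 clients.
            @JsonView(V2.class)
            public String email = "alice@example.com";
        }

        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();
            // The view is picked at runtime, e.g. from an Accept header
            // or a version segment in the URL.
            System.out.println(mapper.writerWithView(V1.class)
                    .writeValueAsString(new Customer())); // {"name":"Alice"}
            System.out.println(mapper.writerWithView(V2.class)
                    .writeValueAsString(new Customer())); // both fields
        }
    }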
Neither JAXB nor any other XML data-binding library I have found to date has decent support for this concept, never mind being able to reuse the same annotations for both JSON and XML. Adding it to the JAXB-RI or EclipseLink MOXy seems potentially possible, but daunting.
The other approach to versioning seems to be to version all the classes that have changed, often by creating a new package each time the API is published and copying all modified DTO, Service, and Resource classes into the new package, so that all the type information is versioned for the binding and dispatch systems. This approach seems more laborious to me.
My question is: how have you designed your Java API providers for backwards compatibility? What worked, and what didn't?
Links to case studies or blog posts on the subject would be much appreciated; I've done some googling but haven't found much discussion of this.
I'm the tech lead for EclipseLink MOXy, and I'm very interested in your versioning requirements. You can reach me through my blog:
http://bdoughan.blogspot.com/p/contact_01.html
MOXy offers a means to represent the JAXB metadata as an XML file. You can leverage this to create multiple mappings for the same object model:
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/EclipseLink-OXM.XML
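For example, the external mapping file can be plugged in when bootstrapping the JAXBContext; a sketch (the file names and the Customer class are placeholders):

    import java.util.HashMap;
    import java.util.Map;

    import javax.xml.bind.JAXBContext;

    import org.eclipse.persistence.jaxb.JAXBContextFactory;
    import org.eclipse.persistence.jaxb.JAXBContextProperties;

    public class MoxyVersionedContext {
        // One OXM mapping file per API version, all for the same classes.
        static JAXBContext forVersion(String mappingFile) throws Exception {
            Map<String, Object> properties = new HashMap<>();
            properties.put(JAXBContextProperties.OXM_METADATA_SOURCE,
                    MoxyVersionedContext.class.getClassLoader()
                            .getResourceAsStream(mappingFile));
            return JAXBContextFactory.createContext(
                    new Class<?>[] { Customer.class }, properties);
        }

        public static void main(String[] args) throws Exception {
            JAXBContext v1 = forVersion("bindings-v1.xml"); // placeholder names
            JAXBContext v2 = forVersion("bindings-v2.xml");
        }
    }

    class Customer { // placeholder domain class
        public String name;
    }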