Context
I am looking for a Postgres driver that supports reactive programming. I came across https://r2dbc.io/, which is a specification for reactive database access APIs. There are two sections on the site,
one called "Clients" and another called "Drivers".
The client section starts with
R2DBC encourages libraries to provide a “humane” API in the form of a client library. R2DBC avoids implementing user-space features in each driver, and leaves these for specific clients to implement.
The Postgres implementation of R2DBC, https://github.com/pgjdbc/r2dbc-postgresql, starts with
This implementation is not intended to be used directly, but rather to be used as the backing implementation for a humane client library to delegate to
My Questions
What is the difference between a client and a driver in general, or at least in the above context?
What is the "humane api" being referred here ?
An example of a client with a humane API in Spring is the DatabaseClient in Spring 5.3.
The original R2DBC spec defines its APIs in terms of the Reactive Streams spec, but DatabaseClient is based on Project Reactor, which provides richer APIs for developers.
Compare my example connection factories (where I have to wrap the R2DBC APIs with Reactor APIs to make them easier to use) with the database clients.
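To make the distinction concrete, here is a minimal sketch (the connection URL and the person table are placeholder assumptions, not from the linked examples): the driver level works directly with the R2DBC SPI types (ConnectionFactory, Connection, Result), while the humane client, DatabaseClient, delegates to the same driver behind a fluent, Reactor-based API.

import io.r2dbc.spi.ConnectionFactories;
import io.r2dbc.spi.ConnectionFactory;
import org.springframework.r2dbc.core.DatabaseClient;
import reactor.core.publisher.Flux;

public class DriverVsClient {

    public static void main(String[] args) {
        // Driver level: the r2dbc-postgresql driver exposed through the R2DBC SPI.
        // (URL, database, and table names are just placeholders.)
        ConnectionFactory connectionFactory =
                ConnectionFactories.get("r2dbc:postgresql://localhost:5432/test");

        // Using the driver directly: you manage the connection lifecycle yourself.
        Flux<String> viaDriver = Flux.usingWhen(
                connectionFactory.create(),
                connection -> Flux.from(connection.createStatement("SELECT name FROM person").execute())
                        .flatMap(result -> result.map((row, metadata) -> row.get("name", String.class))),
                connection -> connection.close());

        // Using the "humane" client: DatabaseClient (Spring 5.3+) delegates to the
        // same driver but hides connection handling behind a fluent API.
        DatabaseClient client = DatabaseClient.create(connectionFactory);
        Flux<String> viaClient = client.sql("SELECT name FROM person")
                .map(row -> row.get("name", String.class))
                .all();

        viaDriver.concatWith(viaClient).subscribe(System.out::println);
    }
}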
Related
I have an application which is using Spring data JPA and hibernate envers for db auditing.
Since R2DBC doesn't support auditing yet, is it possible to use a combination of both in a single application?
If yes, the plan is to use Spring Data JPA for insert, update, and delete operations, so that all DB auditing is handled by Hibernate Envers, and to use R2DBC to provide reactive, non-blocking APIs for reading data.
If no, are there any suggestions on how to achieve both reactive APIs and auditing?
Spring provides simple auditing via @EnableR2dbcAuditing; check my example.
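A minimal sketch of what that can look like (the entity, table, and auditor value below are illustrative, not taken from the linked example); note that this records who/when fields on the rows themselves, which is simpler than the full revision history Envers keeps.

import java.time.Instant;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.annotation.CreatedBy;
import org.springframework.data.annotation.CreatedDate;
import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.LastModifiedDate;
import org.springframework.data.domain.ReactiveAuditorAware;
import org.springframework.data.r2dbc.config.EnableR2dbcAuditing;
import org.springframework.data.relational.core.mapping.Table;
import reactor.core.publisher.Mono;

@Configuration
@EnableR2dbcAuditing
class AuditConfig {

    // Supplies the value stored in @CreatedBy fields; a fixed placeholder here.
    @Bean
    ReactiveAuditorAware<String> auditorAware() {
        return () -> Mono.just("system");
    }
}

@Table("post")
class Post {

    @Id
    Long id;

    String title;

    @CreatedBy
    String createdBy;            // filled from ReactiveAuditorAware on insert

    @CreatedDate
    Instant createdDate;         // set automatically on insert

    @LastModifiedDate
    Instant lastModifiedDate;    // updated automatically on every save
}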
Mixing JPA into a reactive application is also possible. I have an example that demonstrates running JPA in a reactive application, but I have not added R2DBC to it.
For your plan, a better solution is to apply the CQRS pattern to the database topology and use a database cluster for your application (a sketch follows below):
JPA for applying changes: use the main/master database to accept modifications, and sync changes to the secondary/slave database.
R2DBC for queries, as you expected: use the secondary/slave database for querying.
Use a gateway in front of the query and command services.
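A rough sketch of that split, assuming two data sources are configured (the entity, repository, and class names below are hypothetical, and the per-datasource wiring is omitted): writes go through a blocking JPA repository bound to the main/master database, so Hibernate Envers can audit them, while reads stay non-blocking through an R2DBC repository bound to the secondary/slave replica.

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.relational.core.mapping.Table;
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

// Command-side JPA entity, persisted to the main/master database and audited by Envers.
@Entity
class OrderEntity {
    @Id @GeneratedValue
    Long id;
    String item;
}

// Query-side read model, mapped by R2DBC against the replica.
@Table("orders")
class OrderView {
    @org.springframework.data.annotation.Id
    Long id;
    String item;
}

// Command side: blocking JPA repository (writes, auditing via Envers).
interface OrderCommandRepository extends JpaRepository<OrderEntity, Long> { }

// Query side: reactive R2DBC repository (non-blocking reads).
interface OrderQueryRepository extends ReactiveCrudRepository<OrderView, Long> { }

@Service
class OrderService {

    private final OrderCommandRepository commands;
    private final OrderQueryRepository queries;

    OrderService(OrderCommandRepository commands, OrderQueryRepository queries) {
        this.commands = commands;
        this.queries = queries;
    }

    // Writes go through JPA; the blocking call is moved off the event loop.
    Mono<Long> createOrder(OrderEntity order) {
        return Mono.fromCallable(() -> commands.save(order).id)
                .subscribeOn(Schedulers.boundedElastic());
    }

    // Reads stay fully non-blocking via R2DBC.
    Flux<OrderView> findAll() {
        return queries.findAll();
    }
}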
Update: I have created a sample to demonstrate JPA and R2DBC coexisting in a single WebFlux application, but I do not suggest using it in real-world applications. Consider the third suggestion above (CQRS) if you really need both.
Yes, it is possible; however, you will probably face two issues. The first is that handling multiple repository modules needs to be made explicit (e.g. by specifying the base packages of the respective modules).
The second is that the JDBC/JPA Spring Boot auto-configuration will be disabled and you will need to import it back.
Those issues have been reported in "Reactive and Non Reactive repository in a Spring Boot application with H2 database".
A solution to them, and the thought process behind it, can be found in this issue:
https://github.com/spring-projects/spring-boot/issues/28025
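A minimal configuration sketch for the first issue (the package names below are placeholders): point each repository type at its own base package so Spring Data does not try to create both kinds of repositories for the same interfaces. For the second issue, re-enabling the JDBC/JPA auto-configuration, see the linked issue for the details.

import org.springframework.context.annotation.Configuration;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.data.r2dbc.repository.config.EnableR2dbcRepositories;

@Configuration
@EnableJpaRepositories(basePackages = "com.example.blocking.jpa")     // blocking JPA repositories
@EnableR2dbcRepositories(basePackages = "com.example.reactive.r2dbc") // reactive R2DBC repositories
class RepositoryConfig {
}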
I've recently been playing around with Spring WebFlux, and it looks extremely useful and efficient. Also, reading about reactive systems, it seems like one of their defining traits is that they are message-driven.
I came across this post on the web: https://www.captechconsulting.com/blogs/annotation-driven-reactive-web-apis-with-spring-webflux
This post also mentions,
Spring WebFlux contains support for Reactive HTTP Rest API(s),
WebSocket applications, and Server-Sent Events. Spring WebFlux is
responsive, resilient, scalable, and message-driven.
My question is: if I write a simple REST API, much like the post describes, performing CRUD operations backed by MongoDB and using spring-boot-starter-data-mongodb-reactive, could I call my API service message-driven? I could also potentially add a WebClient to talk to some downstream services.
Does "message-driven" in the context of a REST API even make sense?
No, your application is not message-driven; rather, it is reactive. Reactive applications are event-driven, non-blocking, scalable, resilient, and elastic. They use a Publisher/Subscriber mechanism, meaning communication between Publisher and Subscriber is asynchronous. There are two types of Publishers:
Mono: used when we produce at most one item.
Flux: used when we produce zero or more items.
To make your application message-driven, you need to use a message broker such as Kafka or RabbitMQ.
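For illustration, a reactive (but still request-driven, not message-driven) endpoint built on those two publisher types might look like the sketch below; the paths and data are placeholders.

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@RestController
class ItemController {

    // Flux: a stream of zero or more items.
    @GetMapping("/items")
    Flux<String> all() {
        return Flux.just("first", "second", "third");
    }

    // Mono: at most one item.
    @GetMapping("/items/{id}")
    Mono<String> one(@PathVariable String id) {
        return Mono.just("item-" + id);
    }
}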
I have been using the HAPI FHIR Server for several years as a way to expose proprietary data in my company, i.e. implementing IResourceProvider for several resources.
Think "read only" in this world.
Now I am considering accepting writes.
The HAPI FHIR Server documentation has this excerpt:
JPA Server
The HAPI FHIR RestfulServer module can be used to create a FHIR server
endpoint against an arbitrary data source, which could be a database
of your own design, an existing clinical system, a set of files, or
anything else you come up with.
HAPI also provides a persistence module which can be used to provide a
complete RESTful server implementation, backed by a database of your
choosing. This module uses the JPA 2.0 API to store data in a database
without depending on any specific database technology.
Important Note: This implementation uses a fairly simple table design,
with a single table being used to hold resource bodies (which are
stored as CLOBs, optionally GZipped to save space) and a set of tables
to hold search indexes, tags, history details, etc. This design is
only one of many possible ways of designing a FHIR server so it is
worth considering whether it is appropriate for the problem you are
trying to solve.
http://hapifhir.io/doc_jpa.html
So I downloaded the JPA server and got it working against a real database engine (overriding the default JPA definition), and I observed the "fairly simple table design". I am thankful for this simple demo, but that simplicity does concern me for a full-blown production setup.
If I wanted to set up a FHIR server, are there any "non-trivial" implementations (in contrast to the "fairly simple table design" above) for building a robust FHIR server,
one that supports versioning (history) of the resources and validation of references (for example, if someone uploads an Encounter, it checks the Patient reference and the Practitioner reference in the Encounter payload), etc.?
And one that uses a robust NoSQL database?
Or am I on the hook for implementing a non-trivial NoSQL persistence layer myself?
Or did I go down the wrong path with JPA?
I'm OK with starting from scratch (an empty data store for my FHIR server), and if I had to import any data, I understand what that would entail.
Thanks.
Another way to ask this is: is there a HAPI FHIR way to emulate the library below? (Please don't regress into holy-war issues between Java and .NET.)
The project below is more what I would consider a "full turnkey" solution, using NoSQL (Cosmos DB).
https://github.com/Microsoft/fhir-server
A .NET Core implementation of the FHIR standard.
FHIR Server for Azure is an open-source implementation of the
emerging HL7 Fast Healthcare Interoperability Resources (FHIR)
specification designed for the Microsoft cloud. The FHIR specification
defines how clinical health data can be made interoperable across
systems, and the FHIR Server for Azure helps facilitate that
interoperability in the cloud. The goal of this Microsoft Healthcare
project is to enable developers to rapidly deploy a FHIR service.
With data in the FHIR format, the FHIR Server for Azure enables
developers to quickly ingest and manage FHIR datasets in the cloud,
track and manage data access and normalize data for machine learning
workloads. FHIR Server for Azure is optimized for the Azure ecosystem:
I'm not aware of any implementation of the HAPI server which supports a full persistence layer in NoSQL.
HAPI has been around for a while; the persistence layer has evolved quite a bit and seems appropriate for many production scenarios, especially when backed by a performant relational database.
The team that maintains HAPI also uses it as the basis for a commercial offering, Smile CDR. Many of the enhancements that went into making Smile CDR production ready are baked into the HAPI open source project. There has also been some discussion on scaling the JPA implementation.
If you're serious about using HAPI in production, I'd recommend running some benchmarks on the demo server you set up that simulate some of your production use cases to see if it will get you what you want; you may be surprised. You can also contact the folks at Smile CDR, as they do consulting and could likely tell you more specifically how to tune an instance to scale for your production priorities.
You can use Firely's implementation of FHIR. The most used repo is the FHIR SDK:
https://github.com/FirelyTeam/firely-net-sdk
But if you want more done for you out of the box, you can use their Spark repo. This uses the SDK underneath and ultimately gives you an IAsyncFhirService which you can use for CRUD operations:
https://github.com/FirelyTeam/spark
And to your question: Spark currently only supports MongoDB as the data persistence layer, i.e. there is no entity-like mapping done to create a DB schema in a relational database. NoSQL, I think, made sense in this case.
Alternatively, check out the list of FHIR implementations in other languages maintained by HL7 themselves:
https://wiki.hl7.org/Open_Source_FHIR_implementations
For a Java/Kotlin Spring Boot app, if I want to send messages to Kafka or consume messages from Kafka, would you recommend using the Spring Kafka library or just the Kafka Java API?
I'm not quite sure whether Spring provides any additional benefits or is just a wrapper. Spring provides a lot of annotations, which can feel like magic when a runtime error occurs.
I want to hear some opinions.
Full disclosure: I am the project lead for Spring for Apache Kafka.
It's entirely up to you and your colleagues.
It's somewhat comparable to writing assembly code vs. using a high-level language and a compiler.
For an existing Spring shop that is familiar with spring-messaging (JMS, RabbitMQ, etc.), it's a natural fit; the APIs will be very familiar (POJO listeners, MessageConverters, KafkaTemplate, etc.).
When using the simplest APIs, Spring takes care of the low-level concerns such as committing offsets, transaction synchronization, and error handling.
If you have very basic requirements and/or want to write all that code yourself, then use the native APIs.
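To make the comparison concrete, here is a minimal sketch of the Spring style, assuming Spring Boot's Kafka auto-configuration (the topic and group names are placeholders): the framework manages the consumer poll loop, offset commits, and the serialization configured via spring.kafka.* properties, whereas with the native client you would write the producer/consumer setup and the poll loop yourself.

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
class GreetingMessaging {

    private final KafkaTemplate<String, String> kafkaTemplate;

    GreetingMessaging(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Producing: KafkaTemplate is auto-configured from spring.kafka.* properties.
    void send(String message) {
        kafkaTemplate.send("greetings", message);
    }

    // Consuming: a POJO listener; Spring manages the consumer, polling,
    // offset commits, and error handling.
    @KafkaListener(topics = "greetings", groupId = "greetings-group")
    void listen(String message) {
        System.out.println("Received: " + message);
    }
}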
A common requirement is to access a JPA DataSource via REST. I want the opposite, i.e. a JPA provider that works by sending HTTP requests to a RESTful persistence service. The benefit of this is that any application written against the JPA API could easily switch between a traditional JPA provider (e.g. Hibernate) and the REST-based JPA provider, with no code changes required.
So my question is whether there is an existing REST-based JPA provider, and if not, would such a thing even be feasible?
DataNucleus has a JPA implementation over a RESTful JSON API. However, your REST API must adhere to their conventions: http://www.datanucleus.org/products/accessplatform_3_0/json/support.html
Their S3 and GoogleStorage stores extend the JSON API.
EDIT: Put link to wrong product in my original answer.
First of all, JPA is really designed for relational databases...
Second, there is no standard for RESTful persistence so a JPA-REST provider would be specific to that REST persistence application.
You could implement something using EclipseLink-EIS. You'd just have to create the JCA_RestAdapter implementation.
If you mean one of the NoSQL databases when you say "RESTful persistence service" then maybe. Some of these NoSQL DBs provide a REST based interface and some JPA providers are starting to support NoSQL DBs. See http://wiki.eclipse.org/EclipseLink/FAQ/NoSQL.
Honestly, you'd be better off just implementing the DAO pattern and abstracting your CRUD(L) operations. This is exactly what DAOs are for.
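A minimal sketch of that approach (the Person concept, endpoint URL, and JSON handling below are placeholders): the rest of the application depends only on the DAO interface, so a REST-backed implementation and a JPA-backed one can be swapped without touching callers.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Optional;

// Callers depend only on this interface, not on how the data is persisted.
interface PersonDao {
    Optional<String> findById(long id);   // returns the raw JSON here for brevity
    void save(long id, String personJson);
}

// One implementation backed by a REST service; a JPA-backed implementation
// could be substituted without any caller changes.
class RestPersonDao implements PersonDao {

    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;

    RestPersonDao(String baseUrl) {       // e.g. "http://localhost:8080/persons"
        this.baseUrl = baseUrl;
    }

    @Override
    public Optional<String> findById(long id) {
        try {
            HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + "/" + id)).GET().build();
            HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
            return response.statusCode() == 200 ? Optional.of(response.body()) : Optional.empty();
        } catch (Exception e) {
            throw new RuntimeException("GET failed", e);
        }
    }

    @Override
    public void save(long id, String personJson) {
        try {
            HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + "/" + id))
                    .header("Content-Type", "application/json")
                    .PUT(HttpRequest.BodyPublishers.ofString(personJson))
                    .build();
            http.send(request, HttpResponse.BodyHandlers.ofString());
        } catch (Exception e) {
            throw new RuntimeException("PUT failed", e);
        }
    }
}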
There are several alternatives out there. For example, take a look at "JEST":
https://www.ibm.com/developerworks/mydeveloperworks/blogs/pinaki/entry/rest_and_jpa_working_together71?lang=en
REST is not an API (Application Programming Interface). It is an
architectural style that prescribes not to have an API to access the
facilities of a service.
...
On the opposite end of the stateless spectrum lies the principle of
JEE Application Servers -- where the server maintains state of
everything and there exists one (or multiple) API for everything. Such
server-centric, stateful, API-oriented principles of JEE led to
several roadblocks.
...
I found REST principles concise and elegant. I also find Java
Persistence API (JPA) providers have done a great job in standardizing
and rationalizing the classic object-relational impedance mismatch.
JPA is often misconstrued as a mere replacement of JDBC -- but it is
much more than JDBC and even more than Object-Relational Mapping
(ORM). JPA is a robust way to view and update relational data as an
object graph. Also, core JPA notions such as detached transaction or
customizable closure or persistent identity seem to be neatly
aligned with REST principles.
Further links:
http://openjpa.apache.org/jest.html
http://www.ibm.com/developerworks/java/library/j-jest/index.html?ca=drs-