HAPI FHIR server full turn-key implementation - hapi-fhir

So I have been using the HAPI FHIR server (for several years) as a way to expose proprietary data in my company, i.e., implementing IResourceProvider for several resources.
Think "read only" in this world.
Now I am considering accepting writes.
The HAPI FHIR documentation has this excerpt:
JPA Server
The HAPI FHIR RestfulServer module can be used to create a FHIR server
endpoint against an arbitrary data source, which could be a database
of your own design, an existing clinical system, a set of files, or
anything else you come up with.
HAPI also provides a persistence module which can be used to provide a
complete RESTful server implementation, backed by a database of your
choosing. This module uses the JPA 2.0 API to store data in a database
without depending on any specific database technology.
Important Note: This implementation uses a fairly simple table design,
with a single table being used to hold resource bodies (which are
stored as CLOBs, optionally GZipped to save space) and a set of tables
to hold search indexes, tags, history details, etc. This design is
only one of many possible ways of designing a FHIR server so it is
worth considering whether it is appropriate for the problem you are
trying to solve.
http://hapifhir.io/doc_jpa.html
So I did the download (of the JPA server) and got it working against a real DB engine (overriding the default JPA definition), and I observed the "fairly simple table design". I am thankful for this simple demo, but the simplicity concerns me for a full-blown production setup.
If I wanted to set up a FHIR server, are there any "non trivial" implementations (in contrast to the "fairly simple table design" above) of a robust FHIR server,
one that supports versioning (history) of the resources and validation of references (for example, if someone uploads an Encounter, it checks the Patient reference and the Practitioner reference in the Encounter payload), etc.?
And one that uses a robust NoSQL database?
Or am I on the hook for implementing a non-trivial NoSQL persistence layer myself?
Or did I go down the wrong path with JPA?
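To be concrete about the reference validation I mean: on an Encounter write, I would expect something like the following check (a rough sketch only, using the R4 model classes; ResourceStore is a hypothetical stand-in for whatever store backs the server):

    import ca.uhn.fhir.rest.server.exceptions.UnprocessableEntityException;
    import org.hl7.fhir.r4.model.Encounter;
    import org.hl7.fhir.r4.model.Reference;

    public class EncounterReferenceCheck {

        /** Hypothetical stand-in for the server's backing data store. */
        public interface ResourceStore {
            boolean exists(String reference); // e.g. "Patient/123"
        }

        private final ResourceStore store;

        public EncounterReferenceCheck(ResourceStore store) {
            this.store = store;
        }

        /** Reject an Encounter whose Patient/Practitioner references don't resolve. */
        public void validate(Encounter encounter) {
            requireResolvable(encounter.getSubject()); // the Patient reference
            encounter.getParticipant()                 // the Practitioner references
                    .forEach(p -> requireResolvable(p.getIndividual()));
        }

        private void requireResolvable(Reference ref) {
            if (!ref.isEmpty() && !store.exists(ref.getReference())) {
                throw new UnprocessableEntityException(
                        "Referenced resource does not exist: " + ref.getReference());
            }
        }
    }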
I'm OK with starting from scratch (an empty data store for my FHIR server), and if I had to import any data, I understand what that would entail.
Thanks.
Another way to ask this: is there a hapi-fhir way to emulate the library below? (Please don't regress into holy-war issues between Java and .NET.)
Below is more what I would consider a "full turn-key" solution, using NoSQL (Cosmos DB).
https://github.com/Microsoft/fhir-server
A .NET Core implementation of the FHIR standard.
FHIR Server for Azure is an open-source implementation of the
emerging HL7 Fast Healthcare Interoperability Resources (FHIR)
specification designed for the Microsoft cloud. The FHIR specification
defines how clinical health data can be made interoperable across
systems, and the FHIR Server for Azure helps facilitate that
interoperability in the cloud. The goal of this Microsoft Healthcare
project is to enable developers to rapidly deploy a FHIR service.
With data in the FHIR format, the FHIR Server for Azure enables
developers to quickly ingest and manage FHIR datasets in the cloud,
track and manage data access and normalize data for machine learning
workloads. FHIR Server for Azure is optimized for the Azure ecosystem:

I'm not aware of any implementation of the HAPI server which supports a full persistence layer in NoSQL.
HAPI has been around for a while; the persistence layer has evolved quite a bit and seems appropriate for many production scenarios, especially when backed by a performant relational database.
The team that maintains HAPI also uses it as the basis for a commercial offering, Smile CDR. Many of the enhancements that went into making Smile CDR production-ready are baked into the HAPI open-source project. There has also been some discussion on scaling the JPA implementation.
If you're serious about using HAPI in production, I'd recommend running some benchmarks on the demo server you set up that simulate some of your production use cases to see if it will get you what you want; you may be surprised. You can also contact the folks at Smile CDR, as they do consulting and could likely tell you more specifically how to tune an instance to scale for your production priorities.
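For example, a quick way to sanity-check the versioning behaviour on your demo instance is to exercise the history endpoint with the HAPI generic client (a sketch; the base URL and the R4 context are assumptions about your setup):

    import ca.uhn.fhir.context.FhirContext;
    import ca.uhn.fhir.rest.client.api.IGenericClient;
    import org.hl7.fhir.instance.model.api.IIdType;
    import org.hl7.fhir.r4.model.Bundle;
    import org.hl7.fhir.r4.model.Patient;

    public class JpaVersioningSmokeTest {
        public static void main(String[] args) {
            FhirContext ctx = FhirContext.forR4();
            IGenericClient client = ctx.newRestfulGenericClient("http://localhost:8080/fhir");

            // Create a resource, then update it to produce a second version.
            Patient p = new Patient();
            p.addName().setFamily("Smoke").addGiven("Test");
            IIdType id = client.create().resource(p).execute().getId();

            p.setId(id.toUnqualifiedVersionless());
            p.getNameFirstRep().setFamily("Renamed");
            client.update().resource(p).execute();

            // The JPA server versions resources out of the box; history returns both.
            Bundle history = client.history()
                    .onInstance(id.toUnqualifiedVersionless())
                    .andReturnBundle(Bundle.class)
                    .execute();
            System.out.println("Versions stored: " + history.getEntry().size());
        }
    }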

You can use Firely's implementation of FHIR. The most-used repo is their FHIR SDK:
https://github.com/FirelyTeam/firely-net-sdk
But if you want more done for you out of the box, you can use their Spark repo. This uses the SDK underneath and ultimately gives you an IAsyncFhirService which you can use for CRUD operations:
https://github.com/FirelyTeam/spark
And to your question: Spark currently only supports MongoDB as the data persistence layer, i.e., there is no entity-style mapping done to create a DB schema in a relational database. NoSQL, I think, made sense in this case.
Alternatively, check out the list of FHIR implementations in other languages maintained by HL7 themselves:
https://wiki.hl7.org/Open_Source_FHIR_implementations

Related

Redis read-through implementation & tooling example

I am a beginner with Redis. I am interested in introducing Redis into my system to make it a small-scale microservice setup. I have read about the high-level concept of the read-through cache strategy and have the following picture in mind.
Let me explain briefly: I will have two (or more) domain-driven microservices (Payment, Customer) responsible for UPDATING (i.e., the "command" part in CQRS) data in their isolated PostgreSQL DB schemas. For the QUERY part, I would like all "GET" API requests from my mobile app to fetch data from Redis using the read-through strategy, by having some kind of "PG to Redis converter" behind the scenes (label (6) in the picture).
This is what I think the read-through cache is about, as far as I understand it. But when I search for an example of this kind of converter to integrate with my NodeJS or Java REST API, I cannot find one. Most examples I can find only talk about the concept, and the ones that show an implementation turn out to be more like the cache-aside strategy.
Please suggest which tools to use for such a converter, or whether it can be configured directly in Redis itself (e.g., using a Lua script?). It would be great if it could be done serverlessly using an AWS service, but that is not necessary. Thank you.
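For illustration, this is the kind of "converter" I have in mind if it had to live in the application tier (a Java sketch using Jedis; the loader function is a stand-in for the PostgreSQL query):

    import java.util.function.Function;
    import redis.clients.jedis.Jedis;

    /**
     * Read-through: callers only ever ask the cache; on a miss the cache itself
     * loads from the backing store (PostgreSQL) and populates the key. In
     * cache-aside, by contrast, the caller performs the load and write-back.
     */
    public class ReadThroughCache {

        private final Jedis jedis;
        private final Function<String, String> loader; // e.g. runs a SELECT in PostgreSQL
        private final int ttlSeconds;

        public ReadThroughCache(Jedis jedis, Function<String, String> loader, int ttlSeconds) {
            this.jedis = jedis;
            this.loader = loader;
            this.ttlSeconds = ttlSeconds;
        }

        public String get(String key) {
            String cached = jedis.get(key);
            if (cached != null) {
                return cached; // cache hit
            }
            String value = loader.apply(key); // cache miss: the cache does the loading
            if (value != null) {
                jedis.setex(key, ttlSeconds, value);
            }
            return value;
        }
    }

My understanding is that plain Redis cannot reach into PostgreSQL on a miss by itself, so the converter has to live in a wrapper like this, which is what makes it hard to distinguish from cache-aside.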

What is the real benefit of using GraphQL?

I have been reading articles on the web about the benefits of GraphQL, but so far I have not been able to find a single benefit of it.
Some of the most common benefits mentioned in those articles are below:
No Overfetching with GraphQL.
Reducing number of calls made from client side.
Data Load Control Granularity
Evolve your API without versions.
Those above all make sense, but it is not GraphQL itself that provides these benefits. Any second-layer API written in Java/Python or any other language would be able to provide these benefits too. It is basically introducing another layer of abstraction above the data retrieval systems, REST or whatever, and decoupling the client side from that layer. After you do that, everything you can do with GraphQL can also be done with any other language.
Anyone can implement, say, a Scala server that retrieves the data from various APIs, integrates them, creates objects internally, and feeds the client with only the relevant part of the data, with total control over the data. Such an API can easily be versioned and released accordingly. Considering the syntax of GraphQL, how cumbersome it is, and the difficulty of building a good cache around it, I can't see why you would really use it.
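To illustrate the point, here is a trivial hand-rolled version of "return only the fields the client asks for" (an illustrative sketch only, not production code):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Set;

    /** The "no overfetching" benefit, hand-rolled: project only the requested fields. */
    public class FieldProjection {

        public static Map<String, Object> project(Map<String, Object> entity, Set<String> fields) {
            Map<String, Object> out = new LinkedHashMap<>();
            for (String field : fields) {
                if (entity.containsKey(field)) {
                    out.put(field, entity.get(field));
                }
            }
            return out;
        }

        public static void main(String[] args) {
            Map<String, Object> customer = Map.<String, Object>of(
                    "id", 1, "name", "Ada", "address", "10 Main St", "orders", "...");
            // Equivalent of GraphQL's { customer { id name } }: only id and name go out.
            System.out.println(project(customer, Set.of("id", "name")));
        }
    }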
So the overall question is: are there any benefits of GraphQL that are provided to the application because of GraphQL itself, and not because you implement another layer of abstraction between your applications and your APIs?
Best practices known as REST existed earlier, too.
GraphQL is more standardized than REST, safer (no injections), and its syntax gives great flexibility in the face of quickly changing client needs.
It's just a good standard of best practices.
I feel GraphQL is another example of overengineering. I would say the best standards and practices are "keeping it simple."
Breaking down an object and building a custom one before sending it to the client is very basic.

Universal RESTful Data API framework

We are implementing an operational data store for some key data sets ("single view of customer", "single view of employee") to provide meaningful integrated data, primarily to front-office applications (primarily B2E, i.e., both provider and consumer are controlled internally, with no external exposure).
All the "single views" will be centered around a slow changing master business entity ("customer", "employee", "asset", "product") which has related child/satellite entities which are more transactional/fast changing in nature (i.e. bookings, orders, payments etc.). The different "single views" will be overlapping and interconnected.
So this ODS would become a data abstraction layer between disparate "systems of record" and vertical systems of engagement, providing a traversable data universe that decouples clients from producers.
Of course, an ODS is pointless if there is no way to access the data. Therefore, I am looking for some kind of elegant way to implement a resource-based data services layer on top of the ODS with some of the following characteristics:
Stateless
REST Level 3 (HATEOAS), but most likely limited to GETs (i.e., create, update, delete would be executed against the actual "system of record" in order to apply business logic); see the sketch after this list
abstraction between the enterprise data model (= domain entities are exposed as representations) and the physical data model (= actual data is stored in physical tables); no leakage of the physical data model
support for common query features, such as order-by, paging, etc.
easy to use for developers, decoupled from physical data model and backend system changes
Ideally not constrained to relational DBs, though this could be optional for now.
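To illustrate the HATEOAS requirement above, this is roughly the controller/response shape I am picturing (a sketch using Spring HATEOAS as one candidate stack for a Java shop; all names are illustrative):

    import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
    import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;

    import org.springframework.hateoas.EntityModel;
    import org.springframework.hateoas.Link;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class CustomerController {

        /** Domain representation, decoupled from the physical ODS tables. */
        public record CustomerRepresentation(String id, String displayName) {}

        @GetMapping("/customers/{id}")
        public EntityModel<CustomerRepresentation> get(@PathVariable String id) {
            CustomerRepresentation customer = load(id);
            return EntityModel.of(customer,
                    linkTo(methodOn(CustomerController.class).get(id)).withSelfRel(),
                    // Traversable links into the satellite entities (bookings, orders, ...)
                    Link.of("/customers/" + id + "/bookings", "bookings"),
                    Link.of("/customers/" + id + "/orders", "orders"));
        }

        // Stand-in for the mapping layer between the physical model and the representation.
        private CustomerRepresentation load(String id) {
            return new CustomerRepresentation(id, "Example Customer");
        }
    }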
The key standard I came across for this is OData, but there are a couple of concerns:
It is very MS-centric, both from a standards perspective (now OASIS) and, more importantly, from an implementation perspective (.NET, WCF Data Services). We are an enterprise Java shop.
A couple of frontrunners (Netflix, eBay) have sunsetted their OData services, which is always concerning.
OData has been touted as a magic black box which exposes the underlying data model more or less automatically. That means there is no "human translation"/modelling in between, and the service layer therefore doesn't provide the abstraction that is one of the key requirements.
Now, some of these disadvantages might not be that critical, as we don't plan to expose the data services layer to the external world, but rather use it within our own environment (i.e., a rather small number of consumers which can be controlled). But then the question is how much value OData adds.
I do know that there are no free lunches out there :-)
Are there any other approaches to implementing a generic data access layer?
Thanks a lot, Nick
To answer your concerns about OData:
The statement that OData is MS-centric is no longer true. And there's very good news for you, especially since you're using Java to write services or clients: the Apache Olingo project is currently maintained and developed in an open-source manner, and Microsoft is just one of its contributors. The project is based on OData V2 and will also support OData V4.
It's true that Netflix and eBay don't expose their OData services anymore. But according to the monitoring of the OData ecosystem on OData.org, new OData services and clients are published constantly. (OData.org has started to accept contributions to the ecosystem.)
I'm not sure if I understood this item correctly. But if I did, the OData vocabularies (a part of the OData standard) may be able to resolve your concern. With the OData model annotated with terms, you can add more "human specification" to the capabilities of the service, and more control over the data and the model. The result is that the client will be able to intelligently "human translate" the data and the model to better interact with the service. What is even better about vocabularies is that, although there are canonical namespaces of annotations reserved by the protocol, you can also define your own vocabulary and have it accepted by whoever wants to consume your service, since OData's vocabularies are extensible, as defined in the OData protocol.
In addition, although your plan is only to expose a data service for limited access, OData features such as queryability and a RESTful data API, plus compelling new OData V4 features such as delta responses, asynchronous requests, and server-side aggregation, will definitely help you write a more efficient and powerful data publishing and consumption story.

How to structure an EmberJS application to interface with a REST backend

We have a web2py application that we want to connect to an EmberJS client. The idea is to use the responsive capabilities of EmberJS to keep the client updated while writing minimal code.
We have (REST) primitives which are in charge of creating/updating the underlying datastore (CouchDB). These primitives are sometimes complex and cover corner cases, involving the creation of several documents, connecting them, validating configuration parameters, and so on. This is implemented in the backend. We would like to avoid duplicating the full modelling of the data in our EmberJS application, and avoid duplicating the logic implemented by those primitives.
I have some questions:
does it make sense in EmberJS to just model a subset of the data in the documents? We would just create models for the small number of properties that the user is able to interact with. The client would not see the full CouchDB documents, just the data necessary for display/interaction.
is it possible to connect EmberJS to a REST interface, without having to fully model the underlying data in the database?
does it make sense in EmberJS to just model a subset of the data in the documents?
Yes. There is no need to create Ember models for objects/properties that the user will not need to interact with.
is it possible to connect EmberJS to a REST interface, without having to fully model the underlying data in the database?
That is definitely possible; it's a fairly common use case. The best way to get started is by building a small MVP that works with just a couple of models. Once you've got that wired up, it will be easy to add more domain objects.
The tricky part (especially at first) will be mapping your REST endpoints to the ember-data REST adapter. The adapter will work out of the box with some REST endpoints - see the REST Adapter - but connecting a CouchDB datastore will probably require some customization. The tools for this are still evolving; have a look at the ember-data integration tests to see what is available.

How can I set up OData and EF without coupling to my database structure?

I really like OData (WCF Data Services). In past projects I have coded up so many web services just to allow different ways to read my data.
OData gives great flexibility for the clients to have the data as they need it.
However, in a discussion today, a co-worker pointed out that how we are doing OData is little more than giving the client application a connection to the database.
Here is how we are setting up our WCF Data Service (Note: this is the traditional way)
Create an Entity Framework (EF) data model of our database
Publish that model with WCF Data Services
Add Security to the OData feed
(This is where it is better than a direct connection to the SQL Server)
My co-worker (correctly) pointed out that all our clients will be coupled to the database now. (If a table or column is refactored then the clients will have to change too)
EF offers a bit of flexibility in how your data is presented and could be used to hide some minor database changes that don't affect the client apps, but I have found it to be quite limited. (See this post for an example.) I have found that the POCO templates (while nice for allowing separation of the model and the entities) also do not offer very much flexibility.
So, the question: What do I tell my co-worker? How do I set up my WCF Data Services so they are using business-oriented contracts (like they would be if every read operation used a standard WCF SOAP-based service)?
Just to be clear, let me ask this a different way: how can I decouple EF from WCF Data Services? I am fine with making up my own contracts and using AutoMapper to convert between them, but I would like to not go directly from EF to OData.
NOTE: I still want to use EF as my ORM. Rolling my own ORM is not really a solution...
If you use your own custom classes instead of the classes generated directly by EF, you will also have to change the provider for WCF Data Services. It means you will no longer pass the EF context as the generic parameter to the DataService base class. This will be OK if you have read-only services, but once you expect any data modifications from clients, you will have a lot of work to do.
Data services based on the EF context support data modifications. All other data services use the reflection provider, which is read-only by default until you implement IUpdatable on your custom "service context class".
Data services are a technology for quickly creating services that expose your data. They are coupled to their context, and it is the responsibility of the context to provide abstraction. If you want to make quick and easy services, you are dependent on the features supported by EF mapping. You can make some abstractions in the EDMX, and you can make projections (DefiningQuery, QueryView), etc., but all these features have some limitations (for example, projections are read-only unless you use stored procedures for modifications).
Data services are not the same as providing a connection to the database. There is one very big difference: a database connection will ensure only access and execution permissions, but it will not ensure data security. WCF Data Services offers data security because you can create interceptors which add filters to queries so that only the data the user is allowed to see is retrieved, or which check whether he is allowed to modify the data. That is the difference you can tell your colleague.
As for abstraction: do you want a quick, easy solution or not? You can inject an abstraction layer between the service and the ORM, but you need to implement the method mentioned above and you have to test it.
Simplest approach:
DO NOT PUBLISH YOUR TABLES ;)
Make a separate schema
Add views to it
Map those views in EF and publish them.
The views are decoupled from the tables and thus can be simplified and refactored separately.
This is a standard approach, also used for reporting.
Apart from achieving more granular data authorisation (based on certain field values, etc.), OData also allows your data to be accessible via open standards like JSON/XML over HTTP using OAuth. This is very useful for web/mobile applications. Now, you could create a web service to expose your data, but that would warrant a change every time your clients' data requirements change (e.g., extra fields needed), whereas OData allows this via OData queries. In a big enterprise this is also useful for designing security at the infrastructure level, as it only allows text-based (HTTP) calls, which can be inspected/verified for security threats by network firewalls.
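For illustration, these are the standard OData system query options that absorb changing client needs without a new service version (the /odata/Customers endpoint is hypothetical):

    GET /odata/Customers?$select=Id,Name                  (only the fields the client needs)
    GET /odata/Customers?$filter=Country eq 'DE'          (server-side filtering)
    GET /odata/Customers?$orderby=Name&$top=20&$skip=40   (sorting and paging)
    GET /odata/Customers?$expand=Orders                   (related entities in one call)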
You have some other options for your OData client. Have a look at Simple.OData.Client, described in this article: http://www.codeproject.com/Articles/686240/reasons-to-consume-OData-feeds-using-Simple-ODa
And in case you are familiar with the Simple.Data micro-ORM, there is an OData adapter for it:
https://github.com/simplefx/Simple.OData/wiki
UPDATE: My recommendations concern client choice, while your question is about setting up your server side, so of course they are not what you are asking. I will leave my answer here, however, so you are aware of the client alternatives.