Document-oriented data abstraction layer for NoSQL (Mongo)?

I'm building a server application that interfaces with the world via a RESTful web service and uses MongoDB for storage. As it happens, the JSON resources of the web service are fairly close to the structure of the BSON documents stored in Mongo.
While I typically use an object-oriented DAO abstraction to hide the details of persistence implementation, it doesn't quite seem to be the best fit in this case since what I really want to do is fetch a document from the DB based on a query and perform a transformation. Building an object graph as an intermediary seems excessive.
Does anyone have any recommendations for an abstraction pattern that fits the bill?
Edit: Removed 1AM digression about just not using any abstraction and just using the Mongo driver directly.

The level of abstraction is up to you, your needs, and your requirements. There are various language-specific layers on top of the native MongoDB drivers. It is up to you to decide what you need, not up to us; we can't give a recommendation without more precise and detailed background. If you ask a generic question, you will receive a generic answer.
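That said, the thin, non-OO approach the question hints at can be very small: query the collection directly and transform the document into the JSON resource in one step, with no intermediate object graph. A minimal TypeScript sketch using the official Node driver; the collection and field names are hypothetical:

```typescript
import { MongoClient } from "mongodb";

// Hypothetical shape of the stored BSON document.
interface ArticleDoc {
  _id: string;
  title: string;
  tags?: string[];
}

// Hypothetical shape of the JSON resource exposed by the web service.
interface ArticleResource {
  id: string;
  title: string;
  tags: string[];
}

const client = new MongoClient("mongodb://localhost:27017");

async function getArticle(id: string): Promise<ArticleResource | null> {
  const doc = await client
    .db("app")
    .collection<ArticleDoc>("articles")
    .findOne(
      { _id: id },
      { projection: { title: 1, tags: 1 } } // fetch only what the resource needs
    );
  if (doc === null) return null;
  // The whole "abstraction" is this one transformation step.
  return { id: doc._id, title: doc.title, tags: doc.tags ?? [] };
}
```

The transformation function is the seam: if the document shape or the resource shape drifts, only this mapping changes.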

Related

GraphQL, MongoDB on .Net Core - middleware?

I'm designing a backend app using a GraphQL, MongoDB, .NET Core stack where performance must be excellent. Given that the .NET Core Mongo driver supports LINQ, I'm wondering if we could skip a Mongoose or EF Core middle layer and just use a Repository pattern on top of the DB layer.
We're normally an EF shop, so familiarity is a plus, but EF carries a lot of baggage and this app's schema is fairly simple. 80% of our DB will be "SQL-like" and only 20% actually requires NoSQL, but due to cloud hosting costs we're going with Mongo for everything. Mongoose was suggested to me, but I'm not really seeing what I gain there.
Has anyone used this combo? Any suggestions appreciated!
Mongoose is a different stack altogether; there, I am guessing, you basically plan to use another model to provide an abstraction on top of the data stored in the database.
If GraphQL is specifically what you are looking for, without any big layer change, check out https://docs.mongodb.com/realm/graphql/.
By the way, an added layer between your (say) .NET backend and the database would mean an extra hop, if you use another technology just to act as an abstraction layer.
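For what it's worth, the repository the question proposes can be very thin on top of the native driver. The question is about the .NET driver, but the shape of the pattern is the same in any language; here is a sketch in TypeScript with hypothetical names:

```typescript
import { Collection, ObjectId } from "mongodb";

// Hypothetical document shape.
interface Order {
  _id: ObjectId;
  customerId: string;
  total: number;
}

// A thin repository: one seam for queries, no ODM underneath, so the
// driver's own query support (LINQ in .NET) stays fully available.
class OrderRepository {
  constructor(private readonly orders: Collection<Order>) {}

  findByCustomer(customerId: string): Promise<Order[]> {
    return this.orders.find({ customerId }).toArray();
  }

  async add(order: Omit<Order, "_id">): Promise<ObjectId> {
    const result = await this.orders.insertOne({ ...order, _id: new ObjectId() });
    return result.insertedId;
  }
}
```

The repository buys you testability and a single place to change queries, without the baggage of a full ORM/ODM layer.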

Shared library vs REST service. What are the pros and cons?

There is:
a requirement to have a key-value store shared between multiple services
a simple table in DynamoDB
very simple logic of key-value pairs creation
Intuitively I want to put the DynamoDB table behind a REST service that will implement all the simple logic I have. Unfortunately, this means adding a lot of reliability and performance challenges to the solution, since making my service as good, resilient, and performant as DynamoDB isn't easy.
For a while now I have been thinking about creating a shared library for this purpose. The library would implement the logic and connect directly to the DynamoDB table. I don't anticipate many changes, either in the DynamoDB table or in the logic the library will implement.
What are the possible pros and cons of both approaches?
A service is simply a packaging and deployment selection for a library. Both are absolutely valid depending on your particular needs.
I'm curious why you feel the need to wrap dynamodb at all? Is there some particular domain logic you would like to place on top of it to constrain it? DynamoDB is already a restful service... Putting your own restful service on top of it may be advantageous, but you would have to convince me of the value of doing so. If you have particular business logic that requires you constraining the functionality, packaging it as a shared library has certain advantages, especially if you can encapsulate that business logic and separate it from the implementation of DynamoDB.
I am assuming that updating the shared library will not be in your control, and that clients (library users) will update whenever it suits them.
If the above assumption is true, you should go with a REST service, considering a few things:
Your REST API may use a cache instead of calling DynamoDB every time.
You might want to update the schema of the data you put in DynamoDB.
You may switch to another database altogether.
You may have some validation logic, which will certainly evolve over time.
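To make the trade-off concrete, the shared-library option discussed above could be as small as this sketch, which encapsulates the key-value logic and talks to DynamoDB directly via the AWS SDK v3. The table and attribute names are hypothetical:

```typescript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import {
  DynamoDBDocumentClient,
  GetCommand,
  PutCommand,
} from "@aws-sdk/lib-dynamodb";

// Hypothetical table and key attribute names.
const TABLE_NAME = "SharedKeyValues";
const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// The "very simple logic of key-value pair creation" lives here, behind
// the library's API, separated from the DynamoDB specifics.
export async function putValue(key: string, value: string): Promise<void> {
  await client.send(
    new PutCommand({ TableName: TABLE_NAME, Item: { pk: key, value } })
  );
}

export async function getValue(key: string): Promise<string | undefined> {
  const result = await client.send(
    new GetCommand({ TableName: TABLE_NAME, Key: { pk: key } })
  );
  return result.Item?.value;
}
```

Note the coupling this creates: every consumer now depends on this table's schema and on the AWS SDK version, which is exactly the risk the REST-service answer is warning about.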

What is the real benefit of using GraphQL?

I have been reading articles on the web about the benefits of GraphQL, but so far I have not been able to find a single benefit of it. Some of the most common benefits mentioned in those articles are below:
No Overfetching with GraphQL.
Reducing the number of calls made from the client side.
Data Load Control Granularity
Evolve your API without versions.
All of the above make sense, but it is not GraphQL itself that provides these benefits. Any second-layer API written in Java, Python, or any other language would be able to provide these benefits too. It is basically introducing another layer of abstraction above the data retrieval systems (REST or whatever) and decoupling the client side from that layer. Once you do that, everything you can do with GraphQL can also be done in any other language.
Anyone can implement, say, a Scala server that retrieves data from various APIs, integrates it, creates objects internally, and feeds the client only the relevant part of the data, with total control over that data. Such an API can easily be versioned and released accordingly. Considering how cumbersome GraphQL's syntax is and how difficult it is to build a good cache around it, I can't see why you would really use it.
So the overall question is: are there any benefits that GraphQL provides to the application because of GraphQL itself, and not because you implemented another layer of abstraction between your applications and your APIs?
Best practices known as REST existed earlier, too.
GraphQL is more standardized than REST, safer (no injections), and its syntax gives great flexibility in the face of quickly changing client needs.
It's just a good standard of best practices.
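As an illustration of the overfetching point both sides are debating, here is a minimal sketch with graphql-js in which the client names exactly the fields it wants and receives nothing else; the schema and resolver are hypothetical:

```typescript
import { buildSchema, graphql } from "graphql";

// Hypothetical schema: the server can expose many fields per type.
const schema = buildSchema(`
  type User {
    id: ID!
    name: String!
    email: String!
    bio: String
  }
  type Query {
    user(id: ID!): User
  }
`);

// Hypothetical resolver returning a full record.
const rootValue = {
  user: ({ id }: { id: string }) => ({
    id,
    name: "Ada",
    email: "ada@example.com",
    bio: "A long biography the client may not care about.",
  }),
};

// The client asks for one field, and one field is all it receives:
// { "data": { "user": { "name": "Ada" } } }
graphql({ schema, source: '{ user(id: "1") { name } }', rootValue }).then(
  (result) => console.log(JSON.stringify(result.data))
);
```

The field selection is enforced by the spec and the library, not by bespoke server code, which is the part a hand-rolled second-layer API would have to reimplement.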
I feel GraphQL is another example of overengineering. I would say the best standards and practices are "keeping it simple."
Breaking down an object and building a custom one before sending it to the client is very basic.

CRUD with external REST service as Model in Loopback

A couple of quick questions:
I would like to use an external REST API service (e.g. AgileCRM). With their service, I would like to use the REST Connector within a model that allows me to CRUD AgileCRM's API. Is this possible? If so, what model should be the base (e.g. PersistedModel, Model, etc)?
I would like to merge data from AgileCRM and a PersistedModel (e.g. MySQL). Should I do this via relationships OR inheritance? If inheritance, which should be the parent model? It would be ideal to use all data from AgileCRM (represented as a model in LB, if possible) and add information from a local MySQL database.
Have you any thoughts on wrapping an API service (e.g. AgileCRM) as a connector type (e.g. a REST Connector for AgileCRM, based on the REST Connector)? AgileCRM has many features, but their CRUD methods operate slightly differently from how LB interacts with data sources.
This is a really old question, sorry it never got answered, but it's also very broad in some portions. I would recommend asking shorter, more specific questions, and making multiple StackOverflow questions for them.
That said, here's some brief answers for people reading this entry:
Yes, this is possible. Check out the REST connector.
I would probably use multiple parent models that are internal and then a single exposed REST model (not "persisted") that collates that data together.
Sure, you could do that. Writing a connector isn't too difficult, check out our docs on building a connector.
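For readers landing here, a minimal sketch of what a loopback-connector-rest data source configuration can look like; the AgileCRM endpoint and the operation name are hypothetical:

```typescript
import { juggler } from "@loopback/repository";

// Hypothetical REST connector configuration wrapping an external API.
const config = {
  name: "agilecrm",
  connector: "rest",
  options: {
    headers: { accept: "application/json" },
  },
  operations: [
    {
      template: {
        method: "GET",
        url: "https://example.agilecrm.com/dev/api/contacts/{id}",
      },
      // Exposes a getContact(id) function on the data source.
      functions: { getContact: ["id"] },
    },
  ],
};

export const agileCrmDataSource = new juggler.DataSource(config);
```

Each entry in `operations` maps a URL template to a named function, which a model or repository can then call like any other data-access method.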

How to structure an EmberJS application to interface with a REST backend

We have a web2py application that we want to connect to an EmberJS client. The idea is to use the responsive capabilities of EmberJS to keep the client updated while writing minimal code.
We have (REST) primitives which are in charge of creating / updating the underlying datastore (CouchDB). These primitives are sometimes complex, covering corner cases and involving the creation of several documents, connecting them, validating configuration parameters, and so on. This is implemented in the backend. We would like to avoid duplicating the full modelling of the data in our EmberJS application, and avoid duplicating the logic implemented by those primitives.
I have some questions:
does it make sense in EmberJS to just model a subset of the data in the documents? We would just create models for the small amount of properties that the user is able to interact with. The client would not see the full CouchDB documents, just the data necessary for display / interaction.
is it possible to connect EmberJS to a REST interface, without having to fully model the underlying data in the database?
does it make sense in EmberJS to just model a subset of the data in the documents?
Yes. There is no need to create Ember models for objects/properties that the user will not need to interact with.
is it possible to connect EmberJS to a REST interface, without having to fully model the underlying data in the database?
Definitely, that is possible; it's a fairly common use case. The best way to get started is by building a small MVP that works with just a couple of models. Once you've got that wired up, it will be easy to add more domain objects.
The tricky part (especially at first) will be mapping your REST endpoints to the ember-data REST adapter. The adapter will work out of the box with some REST endpoints - see the REST Adapter - but connecting a CouchDB datastore will probably require some customization. The tools for this are still evolving; have a look at the ember-data integration tests to see what is available.
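To illustrate the "subset of the data" idea, here is a minimal sketch of a modern Ember Data model that declares only the user-facing properties, plus the adapter hook where the REST customization mentioned above would happen; all names are hypothetical:

```typescript
// app/models/document.ts -- model only the properties the user interacts
// with; the rest of the CouchDB document never reaches the client models.
import Model, { attr } from "@ember-data/model";

export default class DocumentModel extends Model {
  @attr("string") declare title: string;
  @attr("boolean") declare published: boolean;
  // Internal CouchDB properties (revisions, configuration, ...) are simply
  // not declared here, so Ember Data ignores them in the payload.
}
```

```typescript
// app/adapters/application.ts -- point the REST adapter at the backend.
import RESTAdapter from "@ember-data/adapter/rest";

export default class ApplicationAdapter extends RESTAdapter {
  namespace = "api/v1"; // hypothetical prefix served by the web2py backend
}
```

Keeping the complex primitives server-side and exposing only these trimmed models means the client stays thin and the backend remains the single owner of the document-creation logic.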