Managing database validations in MERN stack - mongodb

I'm coming from a Rails background and am trying to build a simple web app with the MERN stack.
With Rails, I had a simple way to manage database-level validations: I would create a migration and set up the schema with validations, then run the migration. Moving to a production environment or after dropping the database, I could just run the same migration.
With MongoDB, I know how to create database-level validations in the mongo console, but not how to manage the validations for reuse later.
What are the best practices for managing database-level validations with MongoDB (specific solutions for MERN are fine, though general solutions for just Mongo are fine too)? Even better, is there a way to manage up/down validations in case I ever want to change something to a required field later in development but don't want to redo all of the validations from scratch?
Thanks in advance!

MongoDB is schemaless by default, so data validation is generally implemented in the application itself.
There is a well-known npm package called Mongoose that provides exactly this, implementing schemas and validations at the application level.
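
For illustration, here is a minimal sketch of a Mongoose schema with validations; the User model and its fields are hypothetical examples:

// A minimal sketch of a Mongoose schema with application-level validations.
// The User model and its fields are hypothetical examples.
const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({
  email: {
    type: String,
    required: [true, 'Email is required'],
    match: [/.+@.+\..+/, 'Email must be valid'],
  },
  age: {
    type: Number,
    min: [0, 'Age cannot be negative'],
  },
});

module.exports = mongoose.model('User', userSchema);

Because the schema lives in application code, it is versioned with the rest of the app, so it travels to a production environment or a freshly recreated database much like a Rails migration would.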

Related

Is it possible to use PostgreSQL views in KeystoneJS 6

Using KeystoneJS Next, is it possible to have PostgreSQL views?
This is my problem:
I have two tables, [Aquirements] and [Sells], and I want an [Inventory] view; I don't want to use a table for it.
I use KeystoneJS Next, which uses Prisma, which happens to support PostgreSQL views...
It's possible but fairly hacky at the moment.
KeystoneJS Next uses Prisma under the hood for database access and migrations. In addition to the Keystone list schema, Prisma tracks a schema of its own – the Prisma Schema – which describes your data model. Amongst other things, the Prisma Schema is used by Prisma Migrate (one of the Prisma tools) to maintain the schema of the actual database. In Keystone, the Prisma Schema is generated automatically from your Keystone list schema, so you generally don't need to be aware of it. Similarly, many of the Keystone CLI commands (like keystone dev) lean heavily on Prisma functionality.
Prisma does have some limited support for views in Postgres but, when used, it prevents the use of Prisma Migrate. This, in turn, breaks the keystone dev command. Fortunately, it doesn't actually stop Keystone from running – keystone build and keystone start should work fine.
If you decide to proceed, there are a few things you'll need to keep in mind:
You can't use Prisma Migrate, so you'll need another mechanism to roll out changes to your DB structure and keep it in sync with your Keystone lists. There are plenty of options available.
You can't use keystone dev, or at least, not in its current state. Depending on how dirty you want to get your hands, this may not be a show stopper. If you take a look at the function, it's actually not that complicated and it's only lines 40-54 that cause the problem. If you copied/reimplemented this function and removed those lines you could run your Keystone in dev mode (which maintains your Prisma Schema) while preventing Prisma Migrate from trying to manage your DB.
You're going to need to define your views twice – once in the DB view itself and again as a Keystone list. You'll need to keep these in sync. Note also that the view will need an id column – this is just an assumption that Keystone makes.
In most cases, your view won't be updatable, so GraphQL mutations that attempt to edit or create items will error. You can configure access control on the list to exclude the mutations; see also the common field config options. You probably want to config each field with these options: ui: { createView: 'hidden', listView: 'read', itemView: 'read' } – this tells the Admin UI to treat the list as read-only (there's a rough sketch of such a list below).
So, like I said, possible but pretty hacky right now. It's worth noting though, Keystone Next is still in preview and under very active dev. I'm confident this situation will improve over time.
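
For illustration, a rough sketch of what such a read-only Keystone list over the [Inventory] view might look like. The field names are hypothetical, and the exact import paths and option shapes vary between Keystone Next preview releases:

// Hypothetical Keystone list mirroring a Postgres view named Inventory.
// Assumes the view exposes id, itemName and quantity columns.
import { list } from '@keystone-next/keystone/schema';
import { text, integer } from '@keystone-next/fields';

// Tell the Admin UI to treat every field as read-only.
const readOnlyUi = { createView: 'hidden', listView: 'read', itemView: 'read' };

export const Inventory = list({
  // The view isn't updatable, so exclude the mutations via access control.
  access: { create: false, update: false, delete: false },
  fields: {
    itemName: text({ ui: readOnlyUi }),
    quantity: integer({ ui: readOnlyUi }),
  },
});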

Using Sails.js with AWS DynamoDB....not ideal

I started working on a small POC and decided to give Sails.js a try :)
As part of the POC we wanted to use DynamoDB, since the project will eventually need to scale and we're not looking to hire a full-time MongoDB expert at this point.
We used the module: https://github.com/gadelkareem/sails-dynamodb
The problem is there is no documentation and the module does not even work...
It seems the sails ORM is not ideal for DynamoDB and requires writing custom DB services. Does anyone have experience with this?
I was very excited to come across Sails, but if it won't let us play nice with DynamoDB then it might very well be out as an option for us...
Anyone have experience with this or maybe something I'm missing?
One important plus of Vogels is its excellent documentation.
The sails-dynamodb adapter is based on Vogels, but not all of Vogels' features are implemented in the adapter. For example, Vogels has Expression Filters.
Vogels is able to create tables; the adapter can't. With the adapter you have to duplicate the table schema, once in your Sails model files and once in the DynamoDB shell.
Vogels has some types of its own, such as uuid, StringSet, NumberSet and TimeUUID. (The adapter can use them too if you include the Vogels and Joi libraries.)
Vogels and the adapter have the same query capabilities (create, update, delete, find).
The adapter lets you switch to another database without changing your code, and it encapsulates establishing the database connection.
Conclusion: for most purposes the adapter is suitable for the work, and you don't need to use Vogels directly.
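
For comparison, a small sketch of defining a model and creating its table directly with Vogels – something the adapter can't do. The Account model and its fields are hypothetical:

// Hypothetical model defined directly with Vogels, which, unlike the
// sails-dynamodb adapter, can create the DynamoDB table itself.
const vogels = require('vogels');
const Joi = require('joi');

vogels.AWS.config.update({ region: 'us-east-1' });

const Account = vogels.define('Account', {
  hashKey: 'email',
  timestamps: true, // adds createdAt/updatedAt automatically
  schema: {
    email: Joi.string().email(),
    name: Joi.string(),
    age: Joi.number(),
  },
});

// Create any tables that don't exist yet.
vogels.createTables((err) => {
  if (err) console.error('Error creating tables:', err);
});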
Sails comes loaded with an ORM called "Waterline". There are some official Waterline adapters such as mongodb, postgresql and mysql, and then there are some unofficial ones created by the community. I'd assume right now that Dynamo is in the latter category since I have not come across it before. However, with that being said, I would not take this experience as a reason to ditch Sails.js.
Sails.js is built with the intention that all of its components can be swapped out, this means you are not tied to a specific template engine, authentication libraries etc. and including your ORM choice.
Waterline is still being actively developed, but it sits at v0.12.1 as of writing this response. It isn't fully there yet, so there will still be the odd issue!
My recommendation? Take a look at swapping out waterline for a different ORM. Keep the flexibility Sails gives you and change out the component that doesn't meet your criteria. There are still many benefits to Sails you can utilise.
Vogels might be worth checking out: https://github.com/ryanfitz/vogels
Turning off waterline: Is there a way to disable waterline and use a different ORM in sails.js?

Why do I need models.py for a Flask app?

As a webapp novice, I'm not sure if I need to define models.py.
I already have a working Postgres database on Heroku that I've linked up with Postico and pgAdmin. Using these GUIs, it seems I can get and post data, and make structure changes very simply.
Most tutorials seem to gloss over the details of, and the reasoning for, having a models.py. Thanks!
Web frameworks typically enforce or encourage a Model-View-Controller (MVC) pattern that structures code so that the database code is kept separate from the presentation layer.
Frameworks like Django ship with, and are tightly integrated with, ORM functionality, which is used to implement the model layer of MVC. The ORM allows you to interact with your database programmatically without having to write SQL. It can create a schema as well as let you interact with it, by mapping classes to tables and objects to rows.
Flask is distinguished from many other web frameworks, like Django, in that it is a microframework: it is lightweight and gains functionality through extensions. If you need database integration, you can use it with an external ORM tool like SQLAlchemy (and, optionally, the Flask-SQLAlchemy extension). You can then define a SQLAlchemy model in a file called, for instance, model.py or schema.py, or any other name you find appropriate.
If you only need to run one or two queries against an existing Postgres database and feel you have no need for an ORM, you can simply use Flask with the Postgres driver and write the SQL yourself. It is not mandatory to have a model.
A model/ORM can be beneficial, though. For example, if you want to recreate an integration/test instance of the database, you can instruct the ORM tool to create a new instance of the database on another host by deploying the model. A model also provides a programming abstraction over the database, which in theory should make your code more database-independent (in theory only – this is hard to achieve fully, as databases have subtle differences) and less tied to a specific database solution. It also alleviates the need to write a language within a language (SQL strings inside Python code).

How can I share MongoDB collections between Meteor apps?

I'd like to be able to have an admin app and a client app for my project. Ideally, I'd like to be able to have a shared MongoDB collection. How would I be able to accomplish this?
I tried creating collections with the same name in two different apps, but found that Meteor will keep the data separate. Any idea what I can do? Thanks.
export MONGO_URL=mongodb://localhost:3002/meteor
Then run your Meteor app; this changes the default database Meteor uses, so sharing databases or collections won't be a problem!
For administrative reasons, I would use a standalone MongoDB server managed by myself rather than Meteor's internal MongoDB.
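
To illustrate: with both apps started against the same MONGO_URL, each app simply declares the collection under the same name. (The collection name 'posts' here is a hypothetical example.)

// In both the admin app and the client app: with MONGO_URL pointing at the
// same database, collections declared under the same name read and write
// the same underlying MongoDB collection.
Posts = new Mongo.Collection('posts');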
A reasonable question and probably worth a discussion in excess of this answer:
The MongoDB connection is handled by the Meteor application process itself, and this is – as far as I have read and understood – part of Meteor's philosophy, which might be described as: one data source serves one application, with many clients subscribing to it.
With this in mind, combining "admin" and "client" clients in one application (i.e. your Meteor app) is probably the preferred way.
From a server-administration view, however, Meteor handles connections such that there is always a default local data source residing in your project directory (.meteor/local/db; try meteor mongo --url to obtain the mongo connection string while the Meteor application process is running). Nevertheless, you may specify an optional data source string for deployment purposes, as described in these deployment instructions.
So you would need a somewhat awkward "local development deployment" setup to get your intended arrangement working. Or you go and hack the sources and... no, forget it. You probably want your application and clients to take advantage of e.g. realtime UI updates (publish), and that is why the Meteor application is tied to an "application data source" and vice versa for now. When connecting from another app, events that trigger changes in the model would not be transported across those applications; the MongoDB instance itself, of course, isn't aware of them.
I'm sure the core team won't expose the data source connection in a configuration section, for well-considered reasons, unless they extend the architecture with some kind of module concept that provides a common service layer of core Model/Collection abstractions across Meteor instances – at least one supporting awareness of publish/subscribe events.
Try this DDP test I hacked together for a way to bridge two apps (server A and B).
Both servers can manipulate data, but data is only stored in one collection on server A.
See this link as well
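
The core of such a DDP bridge is small. A hedged sketch, assuming server A runs on localhost:3000 and publishes an 'items' collection (the port and the collection name are hypothetical):

// On server B: connect to server A over DDP and bind a collection that is
// actually stored in server A's database.
const remote = DDP.connect('http://localhost:3000');
const Items = new Mongo.Collection('items', { connection: remote });

// Subscribing makes server A's documents visible on server B; writes go
// back to server A over the same connection.
remote.subscribe('items');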

Possible to sync a SQL Server view into a NoSQL DB like MongoDB or RavenDB?

I'm looking to get a complex SQL Server view into a document DB like MongoDB, for performance reasons. Is it possible to sync the two together? Or, what's the best approach to get each record/document from the view into the document DB?
This is for straight-up data viewing on the web only: no updates, deletes or inserts.
*I want to learn about document DBs, and this would be a simple project to implement.
Since the source of the information is the relational database, you need some sort of update process that happens when a row is updated.
You can do that either via your application or using some sort of trigger.
You get all of the required information from the database and write it, in optimized form, into RavenDB.
That is pretty much it, to tell you the truth.
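
As a concrete illustration of the application-driven variant, here is a hedged sketch of a periodic one-way sync from a SQL Server view into MongoDB. It assumes the mssql and mongodb npm packages; the view name dbo.ComplexView, its Id column, and both connection strings are hypothetical:

// Hypothetical one-way sync: copy rows from a SQL Server view into a
// MongoDB collection for read-only display on the web.
const sql = require('mssql');
const { MongoClient } = require('mongodb');

async function syncView() {
  const pool = await sql.connect('Server=localhost;Database=app;User Id=web;Password=secret;Encrypt=false');
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const docs = client.db('app').collection('complexView');

  // Pull every row from the view (fine for modest views; page for big ones).
  const { recordset } = await pool.request().query('SELECT * FROM dbo.ComplexView');

  // Upsert each row as a document keyed by the view's Id column.
  for (const row of recordset) {
    await docs.replaceOne({ _id: row.Id }, row, { upsert: true });
  }

  await client.close();
  await pool.close();
}

// Run on a schedule (or from a trigger-driven queue consumer).
syncView().catch(console.error);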