Using KeystoneJS Next, is it possible to have PostgreSQL views?
This is my problem:
I have two tables, [Aquirements] and [Sells], and want an [Inventory] view; I don't want to use a table for it.
I use KeystoneJS Next, which uses Prisma, which happens to deal with PostgreSQL views...
It's possible but fairly hacky at the moment.
KeystoneJS Next uses Prisma under the hood for database access and migrations. In addition to the Keystone list schema, Prisma tracks a schema of its own – the Prisma Schema – which describes your data model. Amongst other things, the Prisma Schema is used by Prisma Migrate (one of the Prisma tools) to maintain the schema of the actual database. In Keystone, the Prisma Schema is generated automatically from your Keystone list schema, so you generally don't need to be aware of it. Similarly, many of the Keystone CLI commands (like keystone dev) lean heavily on Prisma functionality.
Prisma does have some limited support for views in Postgres but, when used, it prevents the use of Prisma Migrate. This, in turn, breaks the keystone dev command. Fortunately, it doesn't actually stop Keystone from running – keystone build and keystone start should work fine.
If you decide to proceed, there are a few things you'll need to keep in mind:
You can't use Prisma Migrate, so you'll need another mechanism to roll out changes to your DB structure and keep it in sync with your Keystone lists. There are plenty of options available.
You can't use keystone dev, or at least, not in its current state. Depending on how dirty you want to get your hands, this may not be a showstopper. If you take a look at the keystone dev function, it's actually not that complicated, and it's only lines 40-54 that cause the problem. If you copied/reimplemented this function and removed those lines, you could run Keystone in dev mode (which maintains your Prisma Schema) while preventing Prisma Migrate from trying to manage your DB.
You're going to need to define your views twice – once as the DB view itself and again as a Keystone list. You'll need to keep these in sync. Note also that the view will need an id column – this is just an assumption that Keystone makes.
In most cases, your view won't be updatable, so GraphQL mutations that attempt to edit or create items will error. You can configure access control on the list to exclude the mutations, and see also the common field config options. You probably want to configure each field with these options: ui: { createView: 'hidden', listView: 'read', itemView: 'read' } – this tells the Admin UI to treat the list as read-only.
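Putting the last two points together, a read-only list backed by the view might look roughly like this. This is only a sketch: the list and field names are invented for an imagined Inventory view, and the exact shape of the ui options may differ between preview versions of Keystone Next.

```javascript
const { list } = require('@keystone-next/keystone/schema');
const { text, integer } = require('@keystone-next/fields');

// Field-level config telling the Admin UI to treat fields as read-only.
const readOnlyField = {
  ui: {
    createView: { fieldMode: 'hidden' },
    itemView: { fieldMode: 'read' },
    listView: { fieldMode: 'read' },
  },
};

// Hypothetical list mirroring an "Inventory" view with id, sku and quantity columns.
// Access control blocks the mutations, since the view is not updatable.
const Inventory = list({
  access: { create: false, update: false, delete: false },
  fields: {
    sku: text({ ...readOnlyField }),
    quantity: integer({ ...readOnlyField }),
  },
});
```

Remember that the SQL view this maps to must expose an id column, per the point above.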
So, like I said, possible but pretty hacky right now. It's worth noting, though, that Keystone Next is still in preview and under very active development. I'm confident this situation will improve over time.
I'm coming from a Rails background and am trying out building a simple web app with the MERN stack.
With Rails, I had a simple way to manage database-level validations: I would create a migration and set up the schema with validations, then run the migration. Moving to a production environment or after dropping the database, I could just run the same migration.
With MongoDB, I know how to create database-level validations in the mongo console, but not how to manage the validations for reuse later.
What are the best practices for managing database-level validations with MongoDB (specific solutions for MERN are fine, though general solutions for just Mongo are fine too)? Even better, is there a way to manage up/down validations in case I ever want to change something (like making a field required) later in development but don't want to redo all of the validations from scratch?
Thanks in advance!
As we know, Mongo is schemaless, so we have to implement data validation in the application itself.
There is a well-known npm package called mongoose which provides all these features and implements a schema at the application level.
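For example, required fields and other constraints can be declared once on a mongoose schema and reused in every environment, instead of being re-entered in the mongo console. A minimal sketch (the model and field names are made up):

```javascript
const mongoose = require('mongoose');

// Validations live on the schema, so they travel with the code and apply
// wherever the app runs (dev, production, after a dropped database, ...).
const userSchema = new mongoose.Schema({
  email: { type: String, required: true },   // rejects documents with no email
  age:   { type: Number, min: 0, max: 150 }, // simple range validation
});

const User = mongoose.model('User', userSchema);

// validateSync() runs the validators without touching the database;
// here err would describe both the missing email and the invalid age.
const err = new User({ age: -1 }).validateSync();
```

Changing a rule later (say, making a field required) is then just an edit to the schema, rather than redoing validations from scratch.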
As a webapp novice, I'm not sure if I need to define models.py.
I already have a working Postgres database on Heroku that I've linked up with Postico and pgAdmin. Using these GUIs, it seems I can get and post data, and make structure changes very simply.
Most tutorials seem to glaze over the details and reasoning for having a models.py. Thanks!
Web frameworks typically enforce or encourage a Model-View-Controller (MVC) pattern that structures code such that the database code is kept separate from the presentation layer.
Frameworks like Django come with, and are more integrated with, ORM functionality, which is used to implement an MVC framework. The ORM allows you to programmatically interact with your database without having to write SQL code. It can let you create a schema as well as interact with it by mapping programming classes to tables and objects to rows.
Flask can be distinguished from many other web frameworks, like Django, in that it is considered a micro framework. It is lightweight and can be extended by adding extensions. If you need the database integration then you can use it with an external ORM tool like SQLAlchemy (and optionally the flask-sqlalchemy extension). You can then define a SQLAlchemy model, for instance, in a file called model.py or schema.py, or any other name you find appropriate.
If you only need to run one or two queries against an existing Postgres database and feel you have no need for an ORM, you can simply use Flask with the Postgres driver and write the SQL yourself. It is not mandatory to have a model.
A model/ORM can be beneficial. For example, if you want to recreate an integration/test instance of the database, you can instruct the ORM tool to create a new instance of the database on another host by deploying the model. A model also provides a programming abstraction over the database, which in theory should make your code more database independent (in theory only – this is hard to achieve fully, as databases can have subtle differences) and less tied to a specific database solution. It also alleviates the need to write a language within a language (SQL text strings within Python code).
I have a PostgreSQL database which I keep updated using Osmosis. Osmosis can write to two different database schemas, named Simple and Snapshot. They are not that different from the database GeoServer uses, but I can't make GeoServer use them perfectly.
The main problem seems to be the way tags are stored in those DBs. I can add the nodes layer and display it with the default Points style, but as soon as I use an "ogc:Filter" in my style to filter the nodes by their "place" tag, the WMS breaks and does not respond (it says: "The requested Style can not be used with this layer. The style specifies an attribute of place and the layer is: TestDB:nodes").
Is there any way to make GeoServer understand one of those schemas, or to make Osmosis update the DB GeoServer knows?
This is a case for using TRIGGERs to manage the integration. The two programs use two different schemas. You can CREATE TRIGGERs in the database which ensure that data written by one application is made available to the other. Another option is to set one or both to use VIEWs populated in part by the other application. In PostgreSQL, a VIEW can have triggers attached, so these two approaches are not mutually exclusive.
This is, in any case, a potentially large project so rather than offering sample code, I will offer a general outline of what sorts of things you need to think about.
Are these generally applicable? If so do you want to start an open source integration project?
Are both of these read-only workloads? Does data ever update? In general, if you are going to use views, updates pose the most concerns, so you want to run the views on the side not doing the updates if such is the case.
What is the write model of both sides? Insert/Update? Append only? Static data? What data do you have to "replicate" between the schemas?
Once you have those answers it should be relatively straightforward to get started and ask for help (either as an open source project or here) where you get stuck.
I have been working on a side project for the last few weeks, and built the system with Entity Framework Code First. This was very handy during development, as any changes I needed to make to the code were reflected in the DB nice and easily. But now that I want to launch the site but continue development, I don't want to have to drop and recreate the DB every time I make a tweak to a model...
Is there a way to get EF to generate change scripts for the model change so I can deploy them myself to the production server? And how do I use the database somewhere else (a Windows Service in the background of the site) without having to drop and recreate the table, while using the same model as I already have? Kind of like "Code First, but now I have a production DB, don't break it..."
Personally I use the built-in data tools in VS2010 to do a database schema synchronization for updating production.
Another, cheaper, tool if you don't have VS Premium is SQLDelta, which I've used in the past and is really good.
Both tools connect to the two database versions and allow you to synchronise the table schemas first. Both also have an export to SQL script functionality.
Coming up for EF is Migrations, which allows you to solve just this problem within your solution; however, it's still in beta. Migrations lets you describe upgrade and downgrade events for your database in code.
No RTM version of EF has this feature. Once you go to production you must handle it yourself. The common way is to turn off the database initializer in production and use a tool like VS Premium or RedGate's database compare to compare your production and dev databases and create a change SQL script.
You can also try EF Migrations, which is exactly the tool you are asking for. The problem is that it is still in beta (though it should be part of EF 4.3 once completed), so it may not work in all cases and the functionality / API can change before RTM.
I'm looking to get a complex SQL Server view into a document DB like MongoDB for performance reasons. Is it possible to sync the two together? Or what's the best approach to get each record/document from the view into the document DB?
This is for straight-up data viewing on the web only: no updates, deletes or inserts.
(I want to learn about document DBs, and this would be a simple project for implementation.)
Since the source information is the relational database, you need some sort of an update process that happens when a row is updated.
You can do that either via your application, or using some sort of a trigger.
You get all of the required information from the database, and write that in optimized form inside RavenDB.
That is pretty much it, to tell you the truth.
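The "optimized form" step above is just a per-row transformation: flatten each row of the SQL view into a self-contained document so the web tier can read it without joins. A sketch, with an entirely hypothetical row shape:

```javascript
// Flatten one row from the SQL view into a denormalized document.
// Field names (orderId, customerName, ...) are made up for illustration.
function toDocument(row) {
  return {
    id: `orders/${row.orderId}`,         // document identity derived from the row key
    customer: row.customerName,          // denormalized: no join needed at read time
    total: row.quantity * row.unitPrice, // precomputed so reads do no work
  };
}

// The update process (application hook or trigger-driven) would call this
// for each changed row and write the result to the document store:
const doc = toDocument({ orderId: 7, customerName: 'Acme', quantity: 3, unitPrice: 2.5 });
```

Whether the process runs in the application or off a trigger, the document side only ever sees these ready-to-serve documents.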