Neo4j Deployments / Versioning

We have Neo4j environments set up on developers' machines, in QA, and in Production. During development we make schema changes, add nodes, add relationships, rename things, etc. - typical development work (graph or no graph, a database is a database).
Once development reaches a certain point, these changes (application code and database code) need to be pushed to QA -> PROD.
With a traditional database (e.g. SQL Server), one could have a table that stores a version number and a SQL script that queries it, with branching logic that, depending on the version, executes the right statements to bring the target database up to the correct schema level.
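A minimal sketch of that pattern in T-SQL (the table name, version numbers, and the ALTER statement are all hypothetical):

```sql
-- Hypothetical version table plus a branching upgrade script:
IF (SELECT TOP 1 Version FROM SchemaVersion) < 2
BEGIN
    ALTER TABLE Customer ADD Email NVARCHAR(255) NULL; -- example change
    UPDATE SchemaVersion SET Version = 2;
END
```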
How do people do the same in Neo4j? Is there a good solution? APOC and branching logic in Cypher seem rather limited and cumbersome for this.
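For illustration, the kind of version marker I have in mind would be something like this hypothetical Cypher fragment (the label and property names are made up):

```cypher
// Hypothetical version marker, mirroring the SQL version table:
MERGE (v:SchemaVersion {id: 1})
ON CREATE SET v.version = 0
RETURN v.version AS version
```

The branching itself (deciding which migration statements to run) would then have to live in application or deployment-script code.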

Neo4j has documentation on upgrades, as well as a web page on the upgrade process.
Generally, a newly-installed neo4j version will support automatic upgrading of the files backing an existing DB (for specific older versions), as long as the dbms.allow_upgrade config setting is true.
Also, older versions of the Cypher language can still be used. The Cypher version can be specified per-query, or neo4j can be configured to use that version for all queries.
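A minimal sketch of both knobs, assuming a Neo4j 3.x-era config (the version numbers here are just examples):

```
# neo4j.conf: allow automatic upgrade of the store files on startup
dbms.allow_upgrade=true

# neo4j.conf: default Cypher version used for all queries
cypher.default_language_version=3.5
```

Per query, you can instead prefix the statement with a version, e.g. CYPHER 3.5 MATCH (n) RETURN count(n).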

Related

What's the correct way to manage tables and columns in a postgres db in production with Prisma?

For my first backend project, I decided to create a Content Management System (CMS) with NextJS as my full-stack framework and Prisma as the ORM. As a principle, a customizable CMS must be able to programmatically create, drop and modify tables from the database; however, to my (scarce) knowledge of Prisma (and DBs in general) this can only be achieved with prisma migrate dev or prisma db push. According to the documentation, both these CLI tools should not, under any circumstances, be used in production.
I've tried to ignore the docs' warnings and run prisma migrate dev programmatically with execSync(), but it detects that it is running in a non-interactive environment and shuts down. Even if I had been successful, it would not feel right.
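For reference, the kind of call I attempted (a sketch; the migration name is made up):

```ts
import { execSync } from "child_process";

// Attempted programmatic migration: prisma migrate dev detects the
// non-interactive environment and refuses to run.
execSync("npx prisma migrate dev --name add_content_table", { stdio: "inherit" });
```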
This leads me to believe there's another way to manage these tables, but I can't seem to find it. The only alternative that comes to mind is to use raw SQL, which is absolutely possible but it doesn't look right, given Prisma is such a robust ORM tool.
My question then is: how can I programmatically and safely manage relational database tables using Prisma in production?

Entity Framework Code First Database Migrations - Multiple Instances of App with Single DB Server

So, transitioning from the development/deployment approach of:
- Have the DB under source control, with a deployment pipeline for specific versions of the DB
- Have the Web App/API under source control, with deployment pipelines for these

and then have the Web App/API depend on specific DB versions. Hence, to add a new DB change, we have to do a DB release before we do an app release, the DB change must not 'break' the old app, and only then can we upgrade the app to use the new DB change.
Whilst painful, this works; it also works when you have N servers for the web app (horizontal scale) with a single DB server.
Now working towards EF Core 3.1 Code First using Data Migrations. All working as expected on a single web app with a single DB.
But - if this were deployed to N web servers, again with a single DB instance, then...
IF the web servers are upgraded one at a time, the Data Migration would occur on the startup of the first "new" app, and the old web apps would potentially continue to work (depending on the changes).
The above isn't really my concern; it's
If you have a simultaneous deployment across multiple web app servers and these apps start at the same time, then I imagine the Data Migrations would all be attempted at once... meaning all but one of them must fail.
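For context, we apply migrations at startup with code along these lines (a sketch; AppDbContext stands in for our context type):

```csharp
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

public static class MigrationRunner
{
    // Called during startup in every web app instance; with N instances
    // starting at once, these calls race to apply the same migrations.
    public static void ApplyMigrations(IServiceProvider services)
    {
        using var scope = services.CreateScope();
        var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
        db.Database.Migrate(); // applies any pending migrations at runtime
    }
}
```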
So: $64,000 Question - how do people deal with the horizontal scale-out of a web app with a single DB server using EF Code First Data Migrations?
Is it "just be careful with your changes"?
how do people deal with the horizontal scale-out of a web app with a single DB server using EF Code First Data Migrations?
Applying migrations at runtime is suitable only for dev and simple production deployments.
The most common pattern here is to generate the database change scripts (perhaps using Migrations, perhaps using a database-oriented tool like SQL Server Data Tools), review the changes for backwards-compatibility and ability to be applied online, and deploy them first.
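With EF Core's tooling, that generation step might look like this (the output file name is arbitrary):

```bash
# Generate an idempotent SQL script covering all migrations; review it,
# then run it against the database before rolling out the new app build.
dotnet ef migrations script --idempotent --output migrate.sql
```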

Embedded Blazegraph vs OrientDB?

We are looking for an embedded graph database that can run within the application scope. I have tried a proof of concept with OrientDB and Blazegraph by integrating the jar files within the application. I'm not sure which one to pick for my application.
Can anybody explain which of the two is better?
(disclaimer: I was part of the OrientDB team)
The first thing I evaluate is the licence model.
OrientDB is released under ASL while Blazegraph is released under GPLv2.
Can you deal with GPLv2?
Moreover, the Blazegraph GitHub repo has not been updated since the end of 2016.
OrientDB, AFAIK, is about to release version 3.0, and 2.2.x should be very stable; it's at 2.2.30 right now.
After that, you can start to evaluate the features:
- APIs
- query languages: SQL, Gremlin, RDF
- DB features: kinds of indexes, backup, restore
- add-ons: console, web interfaces
- client support (Java, JS, Python, etc.)
Even if you want to go embedded, you may need to deploy your DB in a standalone way in the future, so I would also evaluate compatibility and support for other client languages.
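For what it's worth, a minimal sketch of OrientDB's embedded (plocal) mode on the 2.2.x API (the path and credentials are illustrative):

```java
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;

public class EmbeddedExample {
    public static void main(String[] args) {
        // plocal = embedded storage inside the application process
        ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:./databases/demo");
        if (db.exists()) {
            db.open("admin", "admin"); // default credentials on a fresh DB
        } else {
            db.create();
        }
        try {
            System.out.println("Connected: " + db.getName());
        } finally {
            db.close();
        }
    }
}
```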

Does CockroachDB support extensions?

I am wondering whether CockroachDB supports extensions such as Timescale and others, because I have a project which requires a third-party Postgres extension in conjunction with CockroachDB.
No, CockroachDB does not support any PostgreSQL extensions. It may one day support features from some of the most popular extensions, but it is very unlikely that it will ever be possible to use arbitrary PostgreSQL extensions directly.
If you are looking for a scale-out SQL database that supports both PostgreSQL client API and its extensions, I would encourage you to take a look at YugaByte DB. We are reusing the PostgreSQL codebase on top of our sharded replicated transactional layer, rather than building a SQL engine from scratch, and that will allow us to stay compatible with new PostgreSQL features as well as extensions.

Is there a way to upgrade from a Heroku shared database to a production grade database like Basic or Crane?

I have been using the Heroku shared database for a while now in an app and I would like to upgrade to their new Basic/Crane/etc. production grade databases. I don't however see a clear path to do that.
The options as I see them are:
1. Use db:pull/db:push to migrate the data/schema from the current production database to the new database: go into maintenance mode, move the data, then update the config to point to the new database. Not terrible, but I fear that the old schema from the shared database is not v9-compatible (maybe I'm wrong). This could also take a long time, resulting in some major downtime. Not cool.
2. Use pg:backups to create a backup, and use heroku pg:restore to move the data over. I fear the same schema issues, but this would be much faster.
3. Start with a Basic/Crane database and use their Followers concept. This feels like the right way to do it, but I don't know if it works with the shared databases; if it does, I don't understand how.
I feel all of these options require me to upgrade to Postgres v9 at some point, since all the new databases are v9. Is there a way to do that in the shared environment first, so that migrating might be less painful... maybe.
Any ideas or suggestions?
Their Migrating Between Databases document points out that your option (3) using Followers for a fast changeover doesn't work when you're starting with a shared instance. What you should do here is use pg:backups to make a second database to test your application against and confirm the new database acts correctly there. If that works, turn on maintenance mode and do it again for the real migration.
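With the current Heroku CLI, that flow might look roughly like this (the attachment name and backup IDs are placeholders; check heroku pg:info for yours):

```bash
# Dry run: capture a backup and restore it into the new database, then
# point a test app at it to confirm everything works.
heroku pg:backups:capture DATABASE_URL
heroku pg:backups:restore b001 HEROKU_POSTGRESQL_PINK_URL   # b001 = ID printed above

# Real migration: stop writes, repeat the capture/restore, then promote.
heroku maintenance:on
heroku pg:backups:capture DATABASE_URL
heroku pg:backups:restore b002 HEROKU_POSTGRESQL_PINK_URL
heroku pg:promote HEROKU_POSTGRESQL_PINK_URL
heroku maintenance:off
```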
There's very few schema level incompatibility issues going between PostgreSQL 8.4 and 9.1. That migration doc warns about one of them, the Bytea changes. If you're using Bytea, you probably know it; it's not that common. Most of the other things changed between those versions are also features you're unlikely to use, such as the PL/pgSQL modifications made in PostgreSQL 9.0. Again, if you're using PL/pgSQL server-side functions, you'll probably know that too, and most people on Heroku don't.
Don't guess whether your database is compatible: test. You can run multiple copies of PostgreSQL locally; they just need different data directories and ports configured. That way you can test against 9.1 and 8.4 at will.
You usually use the pg_dump from 9.1 to dump the 8.4 database - pg_dump knows about older versions, but not newer (obviously).
Unless you're doing something unusual with the database (presumably not, since you're on Heroku) then there's unlikely to be any problems just dumping + restoring between versions.
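A local test might look like this (the binary paths and ports are hypothetical, following a Debian-style layout with both versions installed):

```bash
# Dump the 8.4 database using the newer 9.1 pg_dump...
/usr/lib/postgresql/9.1/bin/pg_dump -p 5432 myapp > myapp.sql

# ...and load it into a 9.1 server listening on another port.
/usr/lib/postgresql/9.1/bin/createdb -p 5433 myapp_test
/usr/lib/postgresql/9.1/bin/psql -p 5433 -d myapp_test -f myapp.sql
```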