I am at my wits' end, and I don't know what else I can try. I am trying to do a Flyway clean on my local database. I have the latest Postgres 9.4 with the latest PostGIS extension. I am using the public schema to store 10 tables and 7 sequences ... it's very small.
Now I am trying to run a simple FlywayDB 3.3 clean with Maven. I have the username, password and public schema listed in the Maven configuration file. I keep getting errors saying that various views and tables are used by the PostGIS extension, so Flyway won't delete them.
So, a search here, and on the Internet in general suggested I do:
ALTER USER myuser WITH SUPERUSER;
This did not solve my problems.
In other research, it was suggested to use another schema for the data, and not the public schema. So, I created a new schema for my app and moved over my sequences and tables, including the table spatial_ref_sys. I updated the Maven config to clean only this new schema ... but it says it cannot delete "spatial_ref_sys" because it is used by PostGIS.
I have 10 tables used for my app, with 7 sequences ... I just want Flyway to clean my database so I can try the baseline file to rebuild it. I am no expert in PostGIS, so I don't know if I can remove that table without killing any spatial functionality I want to use in the future.
Thanks for any help, and please let me know if I can provide any other data.
This is a known issue with no trivial solution but 2 good workarounds: https://github.com/flyway/flyway/issues/100
As noted, it's a known issue with Flyway.
Adding a callback file called beforeClean.sql with the statement:
DROP EXTENSION IF EXISTS PostGIS;
solves the problem.
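For reference, a minimal sketch of what that callback and the matching re-creation step could look like; the file locations and the migration name are assumptions, not something the answer prescribes:

-- beforeClean.sql, placed in one of the locations Flyway scans (e.g. db/migration)
-- CASCADE is only needed if your own tables have geometry columns depending on PostGIS
DROP EXTENSION IF EXISTS postgis CASCADE;

-- first statement of the baseline migration, e.g. V1__baseline.sql
CREATE EXTENSION IF NOT EXISTS postgis;

That way clean is free to drop everything, and the next migrate re-creates PostGIS before any spatial objects are built.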
Related
When I perform flyway clean, it removes everything from my public schema, including all tables but also all routines.
The problem is that my first migration uses Postgres's gen_uuid routine. Consequently, the migration fails and I'm stuck in a loop.
Is that normal?
I found the answer: it turns out there was a strong coupling between the migrations written by the previous team and two Postgres extensions, pgcrypto and unaccent, whose creation was NOT written as migrations by us but handled by another container in the stack.
The solution is to let Flyway manage the creation of these extensions.
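For illustration, such a migration can be a single short SQL file; a minimal sketch, with the file name and the IF NOT EXISTS guards as assumptions:

-- V1__create_extensions.sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;   -- provides gen_random_uuid()
CREATE EXTENSION IF NOT EXISTS unaccent;

Once the extensions are created by a migration, a clean followed by migrate should bring them back along with everything else, which avoids the loop described above.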
With Microsoft SQL Server, there is a great schema comparison tool which lets you keep a database schema under source control and push changes in both directions (schema project to database, and database to schema project). This made development very easy: work directly on the local database, push the changes to the source-controlled project, and then apply the changes to other environments when required by using the schema comparison tool to generate the updates from the diff.
Is there any way to do something similar to this with DataGrip for PostgreSQL?
Including how to keep a database as schema files.
I've seen that there is VCS integration, but I can't get it to generate a project from a database, and Google doesn't seem to help.
Any help would be greatly appreciated.
Thanks, James
There is no full integration for now. This is the ticket to follow: https://youtrack.jetbrains.com/issue/DBE-3852
But you can already generate a file-based project for your database:
1. Select the schema in the database explorer.
2. Context menu -> SQL Scripts -> SQL Generator.
3. Choose the tab with the diskette icon and select your options.
4. Press Save.
I am having an odd problem with PostgreSQL/PostGIS on a Windows 10 machine.
I have successfully installed PostGIS many times without issue over the years, most recently last week on a Windows 10 machine at work. I typically use QGIS as my client, so have a fair idea how these things need to be configured and operated.
Over the weekend, I downloaded the latest Postgres 11 installer with Stack Builder, and installed Postgres & PostGIS. pgAdmin4 seems to show everything is as it should be, and the sample 'postgis_23_sample' database is present and correct.
However, using QGIS DB Manager, when I try to connect to the database (on port 5432), whilst I can see the sample database and public schema, I cannot create a new table with geometry, or load any data (e.g. a simple shapefile) into the database. The error message tells me that the 'addGeometryColumn' function does not exist. I checked that the PostGIS extension is installed - which it is. (I ran 'CREATE EXTENSION postgis' again to confirm this.)
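For completeness, this is the kind of check I mean; these are generic catalog queries, nothing specific to my setup:

-- which database am I actually connected to? extensions are installed per database
SELECT current_database();

-- is PostGIS installed here, and into which schema?
SELECT e.extname, e.extversion, n.nspname AS extension_schema
FROM pg_extension e
JOIN pg_namespace n ON n.oid = e.extnamespace
WHERE e.extname = 'postgis';

-- is that schema on the search_path? if not, unqualified calls such as
-- AddGeometryColumn(...) fail with "function does not exist"
SHOW search_path;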
So, I went back to pgAdmin and tried creating a table with geometry via SQL, but got the same error. Creating a table without geometry works just fine; as soon as I try to create a geometry column, I get the error. It seems to me that the PostGIS extension is somehow not being 'found' by Postgres.
I have uninstalled and reinstalled various combinations of PostGres, PostGIS and QGIS over the weekend - but get the same error. I created a new Windows account, and tried installing the stack there: same thing happened.
I also tried uninstalling everything and then installing the 'Portable GIS' stack by Astun. These are much older versions of QGIS and PostGIS, but in theory the installer configures them all to work together. (That, at least, has been my past experience of using Portable GIS at work.) In this instance, I got the same error about the missing AddGeometryColumn function.
I am wondering if the various installs of PostGIS, QGIS and other tools over the last year or so have left some stubborn configuration entries (maybe in the registry?) that uninstallers have not been able to remove. Could something like that be upsetting my recent Postgres installations? It feels as though something is preventing DB clients from locating the PostGIS services on the assigned port. Having said that, Postgres itself seems to be working fine, so maybe that is nonsense.
In any case, if anyone has any ideas how I can begin to debug and fix this, I would be hugely grateful. (In the meantime, I am about to try the OSGeoLive Lubuntu VM, which at least will enable me to get some proper work done - I hope.)
Thank you...
I'm currently using Flyway to manage migrations on an application which uses PostGIS (the PostgreSQL geospatial extension).
This extension uses a table called spatial_ref_sys, which is located in the same schema my application uses. When I call mvn flyway:clean I get an error indicating that Flyway was unable to delete this table (it was created by the postgres user); if I change the owner to my application's database user, then the error changes to:
ERROR: cannot drop table spatial_ref_sys because extension postgis requires it
[ERROR] Hint: You can drop extension postgis instead.
However, I don't want to drop these items, which are external to my application logic; they are just "auxiliaries".
I have seen two questions where Axel Fontaine said that ignoring certain tables is not supported (both questions are two or more years old). I have even cloned the GitHub repo to add this feature to Flyway myself, and I've been reading the parts of the code where the change could be implemented; but I suspect it will affect several parts of the codebase, and my lack of knowledge about it could make things harder.
So, I'm looking for some help to implement the change, or maybe some ideas for a workaround for this issue.
I'm thinking of simply doing a DROP of the entire database and recreating it, then recreating the geospatial extensions (PostGIS, pgRouting, etc.) and running the migrations with Flyway, but this will not be very practical if I have to do it several times during the development process.
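Concretely, the workaround I have in mind looks something like this (the database name is just a placeholder):

-- run while connected to a maintenance database such as postgres
DROP DATABASE IF EXISTS myapp_db;
CREATE DATABASE myapp_db;

-- then, connected to myapp_db, recreate the extensions before running mvn flyway:migrate
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS pgrouting;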
I had this problem in a test environment where I wanted Flyway to delete the schema. I fixed it by manipulating the Flyway Spring bean sequence: first I drop the postgis extension before flyway.clean(), and then the first line of V1__init.sql adds CREATE EXTENSION postgis SCHEMA public;:
@Bean
@Profile("test")
public Flyway flyway(DataSource dataSource) {
    Flyway flyway = new Flyway();
    flyway.setDataSource(dataSource);
    flyway.setLocations("classpath:db/migration");
    // drop PostGIS first so clean() can remove the objects that depend on it
    // (runSql is a small helper that executes the statement against the DataSource)
    runSql("DROP EXTENSION IF EXISTS postgis CASCADE;", dataSource);
    flyway.clean();
    flyway.migrate();
    return flyway;
}
Before anyone marks this as a duplicate, none of the questions similar to this addressed any of my concerns or answered any of my questions.
I am currently developing all the POCOs and data contexts in a library project, and running migrations from within this project. The database I'm updating is the development database.
What do I do if I want to apply the current schema to a fresh, new database? I figure that all I have to do is change the connection string in web.config and run Update-Database, correct?
While the live/production database is up and running, I want to add new columns and new tables to the schema, and test it out in development. So I switch back the connection string to the development database's connection string, and run Update-Database.
Going back and forth between the two databases seems like it will cause conflicts between the _MigrationHistory tables and the auto-generated migration scripts.
Is it safe to manually delete the _MigrationHistory tables in both databases, and/or delete the migration files in /Migrations (so I'll run Add-Migration again)? How do we manage this?
What do I do if I want to apply the current schema to a fresh, new database?
- Yes, to bring a fresh database up to the current migration level, you simply modify the connection string to point to a database that does not yet exist and run Update-Database. It will run all the migrations in order.
As for migrating to the production database, I run the Update-Database command with the -Script switch to get the raw SQL and then apply that script to the production database manually. This is also helpful if you need to keep a record of the SQL commands run against the database. Additionally, you can generate the script explicitly from one specific migration to another via some of the other Update-Database switches.
Alternatively, you can create an idempotent script that works from any migration by using the -SourceMigration $InitialDatabase switch, and optionally specify an end migration with -TargetMigration.
If you delete the _MigrationHistory tables, you will have issues such as the generated script trying to add columns that already exist.
You may find the following link helpful:
Microsoft Entity Framework Migrations
I would suggest having a separate trunk in your source code repository - one pointing to production and one to development - to avoid the risk of switching between the two in Visual Studio.
I also had the same problem, even when using one and the same database, due to some merges in the repository and the mix of automatic/manual migrations. For some reason EF was not taking the target database into account and calculating which scripts need to be executed based on what is already in the database.
To fix this, I go to the [__MigrationHistory] table on the target database and get the latest migration name. This helps EF determine the state of the DB, so it executes just the scripts that are needed.
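Getting that name is a one-line query; for example (assuming EF6's default dbo.__MigrationHistory table):

-- latest applied migration on the target database; MigrationId starts with a
-- timestamp, so ordering descending returns the most recent one
SELECT TOP 1 MigrationId
FROM [dbo].[__MigrationHistory]
ORDER BY MigrationId DESC;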
Then the following command is run:
update-database -script -sourcemigration {latest migration name}
This creates an update script that is specific to the target database (the connection string should be correct, as discussed in the other comments).
You can also use the -force parameter if needed.
This way you can update any database to the latest version, no matter what version you find it in, as long as it has a _MigrationHistory table.
Hope this helps
My production and my development databases went out of sync, and it gave me endless problems. I solved it using a tool from Red Gate to match up the databases. After using the tool, the databases were exactly the same, but my migrations were not working and I started to get odd errors, i.e. trying to add tables/columns that already existed, etc. I solved that too: I deleted the migration folder on the local, recreated it, added the initial migration, updated the database, and then matched the data of this migration file (local) to the one on the host (deleted all the data in the migration file on the host, and added the same data that is on the local into the host). A more detailed explanation is at:
migration synch developmental and production databases