fastapi alembic exclude celery tables - postgresql

My Postgres database already includes Celery's backend tables (celery_taskmeta and celery_tasksetmeta), and when I run the following command:
alembic revision --autogenerate -m "some message"
the generated file drops these tables in its upgrade section.
I have already tried the answer provided for this question, but it's not working.

By adding Celery's model base to target_metadata in Alembic's env.py, Alembic will recognize the Celery tables as known models rather than marking them as deleted.
env.py:
from celery.backends.database.session import ResultModelBase
...
target_metadata = [Base.metadata, ResultModelBase.metadata]
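For reference, a fuller env.py sketch of this approach might look like the following (the myapp.models import path is an assumption; substitute your project's declarative base):

```python
# env.py (sketch) -- the myapp.models path is hypothetical
from celery.backends.database.session import ResultModelBase

from myapp.models import Base  # your application's declarative base

# Passing a list of MetaData objects makes autogenerate treat every
# listed table as part of the model, so Celery's result tables are
# no longer scheduled for dropping.
target_metadata = [Base.metadata, ResultModelBase.metadata]
```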

Alembic version in database is not in the version history

According to Alembic's docs, the migration algorithm calculates the migration "path" to the target revision from the version it finds in the alembic_version table. Upon checking that in the DB of the service I'm working with, I found that the current version Alembic operates from is not in the version history, i.e. there is no migration script in the project's migrations folder with that revision ID.
It seems that for this reason, when I added a manually written migration script (with empty template from alembic revision), calling alembic upgrade with this revision's ID failed silently without carrying out the migration. I got an output ending with
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
and exit code 1.
However, when I changed the value in alembic_version table to the penultimate version ID (the one revised by my own script), alembic upgrade head worked.
It is unclear whether the database actually was in the state corresponding to the revision ID I set manually. The script I migrated with only creates a table like this:
...
import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects import postgresql as pg

# revision identifiers, used by Alembic.
revision = '09d841dcd114'
down_revision = '450e39fe439d'
branch_labels = None
depends_on = None


def upgrade():
    op.create_table(
        'table_name',
        sa.Column('id', pg.UUID(), autoincrement=False, nullable=False),
        sa.Column('user_id', sa.INTEGER(), autoincrement=False, nullable=False),
        ...
        sa.PrimaryKeyConstraint('id'),
        sa.ForeignKeyConstraint(['user_id'], ['other_schema.users.id'],
                                name='schemaname_tablename_user_id_fkey'),
        schema='schema_name'
    )
...
so it is partly reliant on existing database state, but does not determine the state exactly.
So my question is: is it normal for the alembic_version stored in the database to be missing from the existing revisions? Was that the reason my previous attempts to migrate the DB failed silently? What would be the correct solution to this problem? Setting the Alembic version manually feels like an improper way to deal with it.
The Alembic commands here were mostly given via Flask-Migrate, but that does not seem to affect anything in this context.
It turned out there was a hotfix revision that was applied on the service but never merged into the main branch. That's where the missing version came from.
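The check itself can be sketched with the stdlib sqlite3 module standing in for Postgres (the find_missing_revision helper and the revision IDs are made up for illustration; Alembic stores its stamp in the single-row alembic_version table):

```python
import sqlite3

def find_missing_revision(conn, known_revisions):
    """Return the stamped revision if it has no script in versions/, else None."""
    row = conn.execute("SELECT version_num FROM alembic_version").fetchone()
    current = row[0] if row else None
    if current is not None and current not in known_revisions:
        return current  # stamped revision with no matching migration script
    return None

# In-memory SQLite stands in for the real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alembic_version (version_num VARCHAR(32) NOT NULL)")
conn.execute("INSERT INTO alembic_version VALUES ('aaaa00000000')")  # hotfix stamp

known = {'450e39fe439d', '09d841dcd114'}  # IDs collected from the migrations folder
print(find_missing_revision(conn, known))  # aaaa00000000
```

If the function returns a revision ID, that stamp exists in the database but not in your migration scripts, which matches the silent-failure scenario above.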

Unknown property errors trying to do a Flyway migration with per-script config files

My company is evaluating Flyway for database releases. We have an AWS PostgreSQL version 11.2 database and I have installed Flyway Community Edition version 6.1.2.
I have successfully baselined the database and run several basic DDL scripts using Flyway migrate. However, now I am testing a more complicated scenario in which I need to run multiple scripts as one migration, but each script has to connect as a different PostgreSQL user. I have tried to do this by setting up two SQL files, each with its own config file, as described here: https://flywaydb.org/documentation/scriptconfigfiles
Every time I run the migrate command I get a property error: "ERROR: Unknown script configuration property: flyway.user" or "ERROR: Unknown script configuration property: user", etc, etc.
For debugging purposes I removed one SQL-and-config combo so that I now only have one of each. The files are named V2020.1.14.08.41.00__role_test1.sql and V2020.1.14.08.41.00__role_test1.sql.conf. I did confirm that any changes to that config file are being picked up by the migrate command. My config file contains the following properties (values changed for security reasons):
flyway.url=jdbc:postgresql://...
flyway.user=user1
flyway.password=password
flyway.schemas=test
I have also tried removing the flyway prefix:
url=jdbc:postgresql://...
user=user1
password=password
schemas=test
And removing the url parameter (both flyway.url and url) so the migration reads that value from the default flyway.conf file. Example:
user=user1
password=password
schemas=test
I get the errors every time. Anyone have any ideas? All help is greatly appreciated.
There is a typo in your code:
flyeay.user=user1
It should be:
flyway.user=user1

IBM WCS and DB2 : Want to export all catentries data from one DB and import into another DB

Basically I have two environments, Production and QA. The data in the QA DB is not the same as in Production, so my QA team is unable to test properly. I want to import all catentries/product-related data into the QA DB from Production. I searched a lot but have not found any solution for this.
Maybe I need to find all product-related tables, export them one by one, and then import them into the QA DB, but I'm not sure.
Can anyone please guide me on this? How can I do this activity following best practices?
I am using DB2.
The WebSphere Commerce data model is documented, which will help you identify all related tables. You can then use the DB2 utility db2move to export (and later load) those tables in one shot. For example,
db2move yourdb export -sn yourschema -tn catentry,catentrel,catentdesc,catentattr
Be sure to list all tables you need, separated by commas with no spaces. You can specify patterns to match table names:
db2move yourdb export -sn yourschema -tn "catent*,listprice"
db2move will create a file db2move.lst that lists all extracted tables, so you can load all data with:
db2move yourQAdb load -lo replace
running from the same directory.
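Because the -tn list must be comma-separated with no spaces, a small hypothetical helper (db2move_export_cmd is just a string builder, not a real tool) can reduce typos when the table list gets long:

```python
def db2move_export_cmd(db, schema, tables):
    # join() guarantees the comma-separated, no-spaces format db2move expects
    return f"db2move {db} export -sn {schema} -tn {','.join(tables)}"

cmd = db2move_export_cmd("yourdb", "yourschema",
                         ["catentry", "catentrel", "catentdesc", "catentattr"])
print(cmd)
# db2move yourdb export -sn yourschema -tn catentry,catentrel,catentdesc,catentattr
```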

Tables are not executed after validating and updating schema

I have created entities in Zend Framework 2 using Doctrine 2. After that, I used this command to validate the current schema:
./vendor/bin/doctrine-module orm:validate-schema
I got output like:
[Mapping] OK - The mapping files are correct.
[Database] FAIL - The database schema is not in sync with the current mapping file.
Then I executed the update command:
./vendor/bin/doctrine-module orm:schema-tool:update --force
The output was:
Database schema updated successfully! "7" queries were executed
But the problem is that no tables were created in my database. What's wrong?
I ran doctrine-module orm:validate-schema and then doctrine-module orm:schema-tool:create.
Here is a good project to try:
Fmi-example on GitHub

Target database is not up to date

I'd like to make a migration for a Flask app. I am using Alembic.
However, I receive the following error.
Target database is not up to date.
Online, I read that it has something to do with this.
http://alembic.zzzcomputing.com/en/latest/cookbook.html#building-an-up-to-date-database-from-scratch
Unfortunately, I don't quite understand how to get the database up to date and where/how I should write the code given in the link.
After creating a migration, either manually or with --autogenerate, you must apply it with alembic upgrade head. If you used db.create_all() from a shell, you can run alembic stamp head to indicate that the current state of the database represents the application of all migrations.
This worked for me:
$ flask db stamp head
$ flask db migrate
$ flask db upgrade
My situation is like this question. When I executed "./manage.py db migrate -m 'Add relationship'", the error occurred like this:
alembic.util.exc.CommandError: Target database is not up to date.
So I checked the status of my migrate:
(venv) ]#./manage.py db heads
d996b44eca57 (head)
(venv) ]#./manage.py db current
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
715f79abbd75
and found that the heads and the current are different!
I fixed it with these steps:
(venv)]#./manage.py db stamp heads
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running stamp_revision 715f79abbd75 -> d996b44eca57
And now the current is the same as the head:
(venv) ]#./manage.py db current
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
d996b44eca57 (head)
And now I can run the migrate again.
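What db stamp does here can be illustrated with a toy model (a deliberate simplification, not Flask-Migrate's real code: stamping rewrites the bookkeeping row in alembic_version and executes no migration scripts):

```python
import sqlite3

# In-memory SQLite stands in for the application database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alembic_version (version_num VARCHAR(32) NOT NULL)")
conn.execute("INSERT INTO alembic_version VALUES ('715f79abbd75')")  # current

def stamp(conn, revision):
    # Only the bookkeeping row changes; no schema DDL runs at all.
    conn.execute("UPDATE alembic_version SET version_num = ?", (revision,))

stamp(conn, 'd996b44eca57')  # the head reported by `db heads`
print(conn.execute("SELECT version_num FROM alembic_version").fetchone()[0])
# d996b44eca57
```

This is why stamping fixes the "not up to date" error without touching your tables: it only reconciles the recorded revision with the head.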
This can be solved in many ways:
1. To fix this error, delete the latest migration file (a Python file), then try to perform the migration afresh.
2. If the issue still persists, try these commands:
$ flask db stamp head # To set the revision in the database to the head, without performing any migrations. You can change head to the required change you want.
$ flask db migrate # To detect automatically all the changes.
$ flask db upgrade # To apply all the changes.
You can find more info in the documentation: https://flask-migrate.readthedocs.io/en/latest/
I had to delete some of my migration files for some reason. Not sure why. But that fixed the problem, kind of.
One issue is that the database ends up getting updated properly, with all the new tables, etc, but the migration files themselves don't show any changes when I use automigrate.
If someone has a better solution, please let me know, as right now my solution is kind of hacky.
I too ran into different heads. I wanted to change one of the fields from string to integer, so I first ran:
$ flask db stamp head # to make the current the same
$ flask db migrate
$ flask db upgrade
It's solved now!
This can also happen if, like me, you have just started a new project and are using an in-memory SQLite database (sqlite:///:memory:). If you apply a migration to such a database, the next time you want to, say, auto-generate a revision, the database will still be in its original (empty) state, so Alembic will complain that the target database is not up to date. The solution is to switch to a persisted database.
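The empty-on-every-connection behavior is easy to demonstrate with the stdlib sqlite3 module: each separate :memory: connection is an independent, brand-new database.

```python
import sqlite3

# First connection: simulate a migration having created a table.
a = sqlite3.connect(":memory:")
a.execute("CREATE TABLE alembic_version (version_num VARCHAR(32))")

# Second connection (e.g. the next CLI invocation): a fresh, empty database.
b = sqlite3.connect(":memory:")
tables = b.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(tables)  # [] -- the table from the first connection is not here
```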
I also had the same problem with flask db migrate.
I used
flask db stamp head
and then
flask db migrate
Try dropping all tables before executing the db upgrade command.
To solve this, I dropped (deleted) the tables in the migration and ran these commands:
flask db migrate
and
flask db upgrade