Database dialect included in Alembic downgrade script, but not in upgrade script?

I am using alembic to generate database migration scripts for a mysql database. I've noticed that the syntax of the generated upgrade and downgrade scripts differ slightly, whereas I thought they would basically be the same.
models.py - before:
class Message_User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(20), nullable=True)
models.py - after table modification:
class Message_User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    tag = db.Column(db.String(15), nullable=True)
migration file - original - shows table creation:
def upgrade():
    op.create_table('message_user',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('name', sa.String(length=20), nullable=True),
        sa.PrimaryKeyConstraint('id', name=op.f('pk_message_user'))
    )
def downgrade():
    op.drop_table('message_user')
migration file - after - shows table modification:
def upgrade():
    op.add_column('message_user', sa.Column('tag', sa.String(length=15), nullable=True))
    op.drop_column('message_user', 'name')
def downgrade():
    op.add_column('message_user', sa.Column('name', mysql.VARCHAR(collation='utf8_bin',
        length=20), nullable=True))
    op.drop_column('message_user', 'tag')
The upgrade scripts describe the changes purely in sqlalchemy terms, whereas the downgrade scripts add mysql dialect specific changes. Specifically, the upgrade script defines the type as sa.String(length=15) whereas the downgrade defines it as mysql.VARCHAR(collation='utf8_bin', length=20). In create table statements in the downgrade scripts, the autogenerated script also includes mysql_collate, mysql_default_charset and mysql_engine whereas these aren't in create table statements for upgrade scripts. I didn't see any mention of this in the alembic documentation. Does anyone know why this differs?
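The difference comes from where autogenerate gets each type: the upgrade is rendered from your model metadata (hence sa.String), while the downgrade has to re-create the dropped column from a reflection of the live MySQL database, which is why you see mysql.VARCHAR(collation='utf8_bin') and the mysql_engine/mysql_default_charset table options. If you prefer generic types in the generated scripts, Alembic's render_item hook in env.py can rewrite them. A minimal sketch, assuming the reflected type is mysql.VARCHAR (the mapping here is illustrative, not exhaustive):

```python
# env.py (sketch) -- Alembic calls render_item for each object it renders
# into an autogenerated script; returning a string overrides the default.
import sqlalchemy as sa
from sqlalchemy.dialects import mysql


def render_item(type_, obj, autogen_context):
    """Render reflected mysql.VARCHAR columns as generic sa.String."""
    if type_ == "type" and isinstance(obj, mysql.VARCHAR):
        autogen_context.imports.add("import sqlalchemy as sa")
        return "sa.String(length=%d)" % obj.length
    # fall back to default rendering for everything else
    return False


# ... then pass the hook in your run_migrations_* function:
# context.configure(
#     connection=connection,
#     target_metadata=target_metadata,
#     render_item=render_item,
# )
```

Whether this is worth doing is a judgment call: the dialect-specific downgrade is arguably safer, since it restores the column exactly as it existed in the database.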

Related

Alembic runs migration on each startup and executes the same revision repeatedly

I'm new to Alembic, so this is very likely a beginner's mistake. I have a FastAPI server which runs SQL migrations on startup.
The problem is that these migrations are executed every time the server starts up. This results in an error because the table has already been created:
sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1050, "Table 'signal' already exists")
Why is Alembic running this migration again, and how would it even know that the migration already happened? For example, I wouldn't want to run INSERT statements repeatedly either.
Important detail here (maybe): I am attaching to a Docker container which starts the server each time I connect.
Here's the relevant code:
def run_sql_migrations():
    # retrieve the directory that *this* file is in
    migrations_dir = os.path.join(os.path.dirname(os.path.realpath(db.__file__)), "alembic")
    # this assumes the alembic.ini is also contained in this same directory
    config_file = os.path.join(migrations_dir, "..", "alembic.ini")
    db_url = f"mysql://{os.environ['DB_USER']}:{os.environ['DB_PASSWORD']}" \
             f"@{os.environ['DB_HOST']}:{os.environ['DB_PORT']}/{os.environ['DB_NAME']}"
    config = Config(file_=config_file)
    config.set_main_option("script_location", migrations_dir)
    config.set_main_option("sqlalchemy.url", db_url)
    # upgrade the database to the latest revision
    upgrade(config, "head")
app = FastAPI()
run_sql_migrations()
My first migration is nothing more than the creation of one table:
"""testing revisioning
Revision ID: 2287ab5f2d12
Revises:
Create Date: 2023-01-27 11:02:48.341141
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '2287ab5f2d12'
down_revision = None
branch_labels = None
depends_on = None
def upgrade() -> None:
op.create_table(
'signal',
sa.Column('id', sa.Integer, primary_key=True),
)
def downgrade() -> None:
op.drop_table("signal")

Diesel Generate Schema Empty

I'm following the Diesel Getting Started Tutorial.
When I run the migrations, the generated schema.rs file is empty.
I checked Postgres, and the database and table are created.
What am I doing wrong? I followed every step of the tutorial.
Edit: in the up.sql
CREATE TABLE posts (
  id SERIAL PRIMARY KEY,
  title VARCHAR NOT NULL,
  body TEXT NOT NULL,
  published BOOLEAN NOT NULL DEFAULT FALSE
)
Cargo.toml
[package]
name = "diesel_demo"
version = "0.1.0"
edition = "2021"
[dependencies]
diesel = { version = "2.0.0", features = ["postgres"] }
dotenvy = "0.15"
diesel.toml
[print_schema]
file = "src/schema.rs"
[migrations_directory]
dir = "migrations"
.env
DATABASE_URL=postgres://postgres:123@localhost/diesel_demo
This is the output of diesel migration run and diesel migration redo; I only get output the first time I run diesel migration run.
Edit 2: I solved it by reinstalling Rust, PostgreSQL and the C++ build tools. Thanks everyone for the help.

Flyway migration error with DB2 11.1 SP including pure xml DDL

I have a fairly complex DB2 V11.1 SP that will compile and deploy manually, but when I add the SQL to a migration script I get this issue:
https://github.com/flyway/flyway/issues/2795
As the SP compiles and deploys manually, I am confident the SP SQL is ok.
Does anyone have any idea what the underlying issue might be?
DB2 11.1
Flyway 6.4.1 (I have tried 7.x versions with same result)
the SP uses pureXML functions, so the SP SQL includes $ and @ characters
I tried using obscure statement terminator characters (~, ^), but a simple test with pureXML functions and @ as the statement terminator seemed to work:
--#SET TERMINATOR @
SET SCHEMA CORE
@
CREATE OR REPLACE PROCEDURE CORE.XML_QUERY
LANGUAGE SQL
BEGIN
    DECLARE GLOBAL TEMPORARY TABLE OPTIONAL_ELEMENT (
        LEG_SEG_ID BIGINT,
        OPTIONAL_ELEMENT_NUM INTEGER,
        OPTIONAL_ELEMENT_LIST VARCHAR(100),
        CLSEQ INTEGER
    ) ON COMMIT PRESERVE ROWS NOT LOGGED WITH REPLACE;

    insert into session.optional_element
    select distinct LEG_SEG_ID, A.OPTIONAL_ELEMENT_NUM, A.OPTIONAL_ELEMENT_LIST, A.CLSEQ
    from core.leg_seg,
         XMLTABLE('$d/LO/O' passing XMLPARSE(DOCUMENT(optional_element_xml)) as "d"
             COLUMNS
                 OPTIONAL_ELEMENT_NUM INTEGER PATH '@Num',
                 OPTIONAL_ELEMENT_LIST VARCHAR(100) PATH 'text()',
                 CLSEQ INTEGER PATH '@Seq') AS A
    WHERE iv_id = 6497222690 and optional_element_xml is not null;
END
@

Database table has different schema than model description in Flask application

I have an old application that I'm redeploying. The following model class:
class TableA(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name_key = db.Column(db.String(20), index=True, unique=True, nullable=False)
    acronym_key = db.Column(db.String(6), index=True, unique=True, nullable=False)
has the following table in postgres:
Table "public.tablea"
Column | Type | Collation | Nullable | Default
-------------+------------------------+-----------+----------+----------------------------------
id | integer | | not null | nextval('tablea_id_seq'::regclass)
name_key | character varying(10) | | not null |
acronym_key | character varying(6) | | not null |
Notice the length of the column name_key, it does not match.
As I worked on this when I still didn't know what I was doing with migrations, I double-checked whether I had left changes that were not saved as migrations, using
flask db migrate and flask db upgrade. I got some changes to the db, but not this one. Do column lengths not generate migration changes? What am I missing? Any help is appreciated.
Adding to @PeterBlahos' link: Alembic needs to be configured to notice differences in column lengths.
For that you need to modify {project-root}/migrations/env.py, changing the context.configure segments of the run_migrations_* methods by adding compare_type=True, as in the snippet below:
def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url, target_metadata=target_metadata, literal_binds=True, compare_type=True)
    with context.begin_transaction():
        context.run_migrations()

def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.
    """
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool)
    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            compare_type=True
        )
        with context.begin_transaction():
            context.run_migrations()
After that, just run flask db migrate -m "some explanation for the changes" in your terminal to create the migration file, and flask db upgrade for the changes to actually affect the db.
Big thanks to @PeterBlahos, who actually answered 85% of the question :).

Flyway does not seem to recognize java/scala migrations

I am using Flyway version 5.2.4 in a Scala project and have all my migration scripts under src/main/resources/db/migrations, with the following folder structure:
main
-resources
--db
---migrations
----V1__example1.sql
----V2__example2.sql
----V3__scala_migration.scala
locations is set to db.migrations (without any prefix; the Flyway documentation says that if no prefix is used, both SQL and Java migrations are supported).
V1 and V2 get picked up without issues, but V3 is being ignored. I tried adding a V3__java_migration.java as well and it made no difference. Has anyone had any luck adding non-SQL migrations?
Here is the Scala code for the migration:
package db.migration

import org.flywaydb.core.api.migration.{ BaseJavaMigration, Context }

class V3__scala_migration extends BaseJavaMigration {
  override def migrate(context: Context): Unit = {
    val conn = context.getConnection
    conn.createStatement().executeUpdate(
      """
        |DROP TABLE IF EXISTS `users`;
        |CREATE TABLE IF NOT EXISTS `users` (
        |`name` varchar(100) NOT NULL,
        |`email` varchar(100) NOT NULL,
        |PRIMARY KEY (`email`)
        |) ENGINE=InnoDB DEFAULT CHARSET=utf8;
        |INSERT INTO `users` (`name`, `email`) VALUES ('john','john@example.com');
      """.stripMargin)
  }
}
You must move your Scala or Java migration classes to the corresponding source directories.
For Scala this would be src/main/scala/db/migration.
See the documentation: https://flywaydb.org/documentation/migrations#discovery-1