I'm following the Diesel Getting Started Tutorial.
When I run the migrations, the generated schema.rs file is empty.
I checked Postgres, and both the database and the table are created.
What am I doing wrong? I followed every step of the tutorial.
Edit: here is the up.sql:
CREATE TABLE posts (
    id SERIAL PRIMARY KEY,
    title VARCHAR NOT NULL,
    body TEXT NOT NULL,
    published BOOLEAN NOT NULL DEFAULT FALSE
);
Cargo.toml
[package]
name = "diesel_demo"
version = "0.1.0"
edition = "2021"
[dependencies]
diesel = { version = "2.0.0", features = ["postgres"] }
dotenvy = "0.15"
diesel.toml
[print_schema]
file = "src/schema.rs"
[migrations_directory]
dir = "migrations"
.env
DATABASE_URL=postgres://postgres:123@localhost/diesel_demo
Output of diesel migration run and diesel migration redo: I only get output the first time I run diesel migration run.
Edit 2: I solved it by reinstalling Rust, PostgreSQL, and the C++ build tools. Thanks everyone for the help.
Related
I'm new to alembic, so this is very likely a beginner's mistake. I have a FastAPI server which runs SQL migrations on startup.
The problem is that these migrations are executed every time the server starts up, which results in an error because the table has already been created:
sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1050, "Table 'signal' already exists")
Why is alembic running this migration again and how would it even know that the migration already happened? For example, I wouldn't want to run INSERT statements repeatedly either.
Important detail here (maybe): I am attaching to a Docker container which starts the server each time I connect.
Here's the relevant code:
import os

from alembic.command import upgrade
from alembic.config import Config
from fastapi import FastAPI

import db  # the package that contains the alembic directory

def run_sql_migrations():
    # retrieves the directory that *this* file is in
    migrations_dir = os.path.join(os.path.dirname(os.path.realpath(db.__file__)), "alembic")
    # this assumes the alembic.ini is also contained in this same directory
    config_file = os.path.join(migrations_dir, "..", "alembic.ini")
    db_url = (
        f"mysql://{os.environ['DB_USER']}:{os.environ['DB_PASSWORD']}"
        f"@{os.environ['DB_HOST']}:{os.environ['DB_PORT']}/{os.environ['DB_NAME']}"
    )
    config = Config(file_=config_file)
    config.set_main_option("script_location", migrations_dir)
    config.set_main_option("sqlalchemy.url", db_url)
    # upgrade the database to the latest revision
    upgrade(config, "head")

app = FastAPI()
run_sql_migrations()
My first migration is nothing more than the creation of one table:
"""testing revisioning
Revision ID: 2287ab5f2d12
Revises:
Create Date: 2023-01-27 11:02:48.341141
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '2287ab5f2d12'
down_revision = None
branch_labels = None
depends_on = None
def upgrade() -> None:
    op.create_table(
        'signal',
        sa.Column('id', sa.Integer, primary_key=True),
    )

def downgrade() -> None:
    op.drop_table("signal")
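For context, Alembic decides whether a migration has already run by storing the latest applied revision in an alembic_version table inside the target database; if that table is missing, or the database itself is recreated (as can happen with a fresh Docker container), every migration runs again from scratch. A minimal sketch for inspecting it, assuming Alembic's default table name and a placeholder connection URL:

# Minimal sketch: read the revision Alembic has recorded in the target DB.
# "alembic_version" is Alembic's default tracking table; the URL is a placeholder.
from sqlalchemy import create_engine, text

engine = create_engine("mysql://user:pass@db-host:3306/mydb")  # placeholder URL
with engine.connect() as conn:
    current = conn.execute(text("SELECT version_num FROM alembic_version")).scalar()
print(current)  # e.g. '2287ab5f2d12' once the first migration has been applied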
I have these scripts:
/*The extension is used to generate UUID*/
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
-- auto-generated definition
create table users
(
    id uuid not null default uuid_generate_v4()
        constraint profile_pkey
            primary key,
    em     varchar(255),
    "user" varchar(255) -- "user" is a reserved word in Postgres, so it must be quoted
);
In IntelliJ IDEA (a project with Spring Boot):
src/main/resources/db-migration
src/main/resources/sql_scripts:
copy.sql
user.txt
I'm just trying to run a simple SQL command for now to check that everything works.
copy.sql
COPY profile FROM '/sql_scripts/user.txt'
WITH (DELIMITER ',', NULL '\null');
user.txt
'm@mai.com', 'sara'
's@yandex.ru', 'jacobs'
But when I run the copy command, I get an error
ERROR: could not open file...
Does anyone know how this should work and what needs to be fixed?
There's a strong possibility it's a pathing issue. Could you try, instead of
COPY profile FROM '/sql_scripts/user.txt'
doing
COPY profile FROM './sql_scripts/user.txt'
(or an absolute path)
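Note that a server-side COPY ... FROM 'file' is executed by the PostgreSQL server process, so the path is resolved on the database server's filesystem; a file that exists on the client machine can still produce "could not open file". If the file lives on the client, one option is to stream it through the connection with COPY ... FROM STDIN. A minimal sketch using psycopg2, with a placeholder DSN:

# Hedged sketch: client-side COPY via psycopg2's copy_expert, so the file is
# read locally and streamed to the server over the connection.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")  # placeholder DSN
with conn, conn.cursor() as cur, open("sql_scripts/user.txt") as f:
    cur.copy_expert(r"COPY profile FROM STDIN WITH (DELIMITER ',', NULL '\null')", f)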
I am using alembic to generate database migration scripts for a MySQL database. I've noticed that the syntax of the generated upgrade and downgrade scripts differs slightly, whereas I thought they would be basically the same.
models.py - before
class Message_User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(20), nullable=True)
models.py - after table modification
class Message_User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    tag = db.Column(db.String(15), nullable=True)
migration file - original - shows table creation
def upgrade():
    op.create_table('message_user',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('name', sa.String(length=20), nullable=True),
        sa.PrimaryKeyConstraint('id', name=op.f('pk_message_user'))
    )

def downgrade():
    op.drop_table('message_user')
migration file - after - shows table modification
def upgrade():
    op.add_column('message_user', sa.Column('tag', sa.String(length=15), nullable=True))
    op.drop_column('message_user', 'name')

def downgrade():
    op.add_column('message_user', sa.Column('name', mysql.VARCHAR(collation='utf8_bin',
        length=20), nullable=True))
    op.drop_column('message_user', 'tag')
The upgrade scripts describe the changes purely in sqlalchemy terms, whereas the downgrade scripts add mysql dialect specific changes. Specifically, the upgrade script defines the type as sa.String(length=15) whereas the downgrade defines it as mysql.VARCHAR(collation='utf8_bin', length=20). In create table statements in the downgrade scripts, the autogenerated script also includes mysql_collate, mysql_default_charset and mysql_engine whereas these aren't in create table statements for upgrade scripts. I didn't see any mention of this in the alembic documentation. Does anyone know why this differs?
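As far as I understand, the asymmetry comes from where autogenerate gets its type information: the upgrade is rendered from your model metadata (hence sa.String(length=15)), while the downgrade has to re-create a column that only exists in the live database, so it uses the type reflected from MySQL, collation and all. A small sketch showing that reflection returns dialect-specific types (the connection URL is a placeholder):

# Sketch: SQLAlchemy reflection returns MySQL dialect types, which is what
# autogenerate embeds in downgrades that must re-create existing columns.
from sqlalchemy import create_engine, inspect

engine = create_engine("mysql://user:pass@localhost/mydb")  # placeholder URL
for col in inspect(engine).get_columns("message_user"):
    print(col["name"], repr(col["type"]))  # e.g. VARCHAR(collation='utf8_bin', length=20)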
import psycopg2.extras

import dbconnect  # the project's own connection helper

def fetch_holidays_in_date_range(src):
    # parameterized query; psycopg2 substitutes %s safely
    query = "SELECT * FROM holiday_tab WHERE id = %s"
    db = dbconnect.connect()
    # defining the cursor for reading data (rows come back as dicts)
    cursor = db.cursor(cursor_factory=psycopg2.extras.RealDictCursor)
    # query the database
    cursor.execute(query, (src,))
    rows = cursor.fetchall()
    dbconnect.destroy(cursor, db)
    return rows
Could someone help me mock this code in pytest or unittest? I've googled for mocking a DB and found hardly anything.
If you are using pytest-django, pytest will not let you run tests against your production/main DB.
There is a better approach to this problem: pytest's test-database mechanism.
Whenever you run a test marked with @pytest.mark.django_db, the tests run on a newly created database named test_<your_production_db_name>.
So if your DB is named hello, pytest will create a new DB called test_hello and run the tests on it.
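If, as in the snippet above, the code under test talks to psycopg2 directly rather than going through the Django ORM, you can also avoid a database entirely by patching dbconnect with unittest.mock. A minimal sketch, assuming the function lives in a (hypothetical) module called holidays:

# Hedged sketch: patch the dbconnect helper so no real database is touched.
# "holidays" is a placeholder for whatever module defines the function.
from unittest import mock

import holidays


def test_fetch_holidays_in_date_range():
    fake_cursor = mock.Mock()
    fake_cursor.fetchall.return_value = [{"id": 1, "name": "New Year"}]
    fake_db = mock.Mock()
    fake_db.cursor.return_value = fake_cursor

    with mock.patch.object(holidays, "dbconnect") as fake_dbconnect:
        fake_dbconnect.connect.return_value = fake_db
        rows = holidays.fetch_holidays_in_date_range(1)

    assert rows == [{"id": 1, "name": "New Year"}]
    fake_dbconnect.destroy.assert_called_once_with(fake_cursor, fake_db)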
I am using Flyway version 5.2.4 in a Scala project, and all my migration scripts are under src/main/resources/db/migrations with the following folder structure:
main
-resources
--db
---migrations
----V1__example1.sql
----V2__example2.sql
----V3__scala_migration.scala
locations is set to db.migrations (without any prefix; the Flyway documentation says that if no prefix is used, both SQL and Java migrations are supported).
V1 and V2 seem to get picked up without issues, but V3 is being ignored. I tried adding a V3__java_migration.java as well, and it made no difference. Has anyone had any luck adding non-SQL migrations?
Here is the Scala code in the migration:
package db.migration
import org.flywaydb.core.api.migration.{ BaseJavaMigration, Context }
class V3__scala_migration extends BaseJavaMigration {
  override def migrate(context: Context): Unit = {
    val conn = context.getConnection
    conn.createStatement().executeUpdate(
      """
        |DROP TABLE IF EXISTS `users`;
        |CREATE TABLE IF NOT EXISTS `users` (
        |  `name` varchar(100) NOT NULL,
        |  `email` varchar(100) NOT NULL,
        |  PRIMARY KEY (`email`)
        |) ENGINE=InnoDB DEFAULT CHARSET=utf8;
        |INSERT INTO `users` (`name`, `email`) VALUES ('john', 'john@example.com');
      """.stripMargin)
  }
}
You must move your Scala or Java migration classes to the corresponding source directories.
For Scala this would be src/main/scala/db/migration.
See the documentation here: https://flywaydb.org/documentation/migrations#discovery-1