Database table has different schema than model description in Flask application - postgresql

I have an old application that I'm redeploying. The following model class:
class TableA(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name_key = db.Column(db.String(20), index=True, unique=True, nullable=False)
    acronym_key = db.Column(db.String(6), index=True, unique=True, nullable=False)
has the following table in postgres:
Table "public.tablea"
Column | Type | Collation | Nullable | Default
-------------+------------------------+-----------+----------+----------------------------------
id | integer | | not null | nextval('tablea_id_seq'::regclass)
name_key | character varying(10) | | not null |
acronym_key | character varying(6) | | not null |
Notice the length of the name_key column: it does not match the model.
Since I worked on this back when I still didn't know what I was doing with migrations, I double-checked whether I had left changes that were never saved as migrations by running flask db migrate and flask db upgrade. That picked up some changes to the db, but not this one. Do column length changes not generate migrations? What am I missing? Any help is appreciated.

Adding to @PeterBlahos' link: Alembic needs to be configured to notice differences in column lengths.
For that you need to modify {project-root}/migrations/env.py, changing the context.configure segments of both run_migrations_* functions by adding compare_type=True, as in the snippet below:
def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url, target_metadata=target_metadata, literal_binds=True,
        compare_type=True)

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.
    """
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool)

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            compare_type=True
        )

        with context.begin_transaction():
            context.run_migrations()
After that, just run flask db migrate -m "some explanation for the changes" in your terminal to create the migration file, and flask db upgrade to actually apply the changes to the db.
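If the width change is picked up, the autogenerated migration should contain an op.alter_column() call. A rough sketch of what it might look like (revision boilerplate omitted; column and type details are taken from the model above, so treat this as illustrative rather than the exact output):
from alembic import op
import sqlalchemy as sa

def upgrade():
    # widen name_key from the VARCHAR(10) currently in the db
    # to the String(20) declared on the model
    op.alter_column('tablea', 'name_key',
                    existing_type=sa.VARCHAR(length=10),
                    type_=sa.String(length=20),
                    existing_nullable=False)

def downgrade():
    # shrink it back if the migration is rolled back
    op.alter_column('tablea', 'name_key',
                    existing_type=sa.String(length=20),
                    type_=sa.VARCHAR(length=10),
                    existing_nullable=False)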
Big thanks to @PeterBlahos, who actually answered 85% of the question :).
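Note: depending on your Flask-Migrate version you may not need to edit env.py at all; the same flag can reportedly be passed when the extension is initialised (a sketch, untested here, with a placeholder database URI):
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://user:pass@localhost/dbname'  # placeholder
db = SQLAlchemy(app)

# compare_type is forwarded to Alembic's context.configure(),
# so column type/length changes are detected by autogenerate
migrate = Migrate(app, db, compare_type=True)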

Related

Bigint error when copying .csv to postgresql

Trying to import a .csv into my postgres table using the following approach:
System: WSL2 - UBUNTU 20.04
psql -d db_name --user=username -c "\copy test_list FROM 'testmngrs.csv' delimiter '|' csv;"
The content format of my .csv:
1,Name,name@store_id.com,1234567891,City Name
The error I'm receiving:
ERROR: invalid input syntax for type bigint:
CONTEXT: COPY test_list, line 1, column id:
The table:
SELECT * FROM test_list;
id | store_id | name | email | phone | city
The additional id column at the head of the table above was not something I created during my initial setup of the table.
My ecto migration file is as follows:
I'm not sure what's causing the bigint error, nor how to avoid it as I copy over the data. I'm also a bit confused as to why there's an additional id column in my table, given that it was never defined in my migration.
I'm pretty new to PostgreSQL and Elixir/Ecto, so any assistance/guidance/context is greatly appreciated!
From the docs:
By default, the table will also include an :id primary key field that has a type of :bigserial.
Ecto assumes you want it to generate the id field by default. It's better to just go with it. But you can configure it somewhat counter-intuitively by setting primary_key: false on the table, and primary_key: true on the column:
create table(:managers, primary_key: false) do
  add :store_id, :integer, null: false, primary_key: true
  ...
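Separately, if the CSV really has no id values, a common workaround for the copy error (not part of the answer above, just a sketch) is to list the target columns explicitly so Postgres fills the generated id itself. The same column list works with psql's \copy; shown here with psycopg2 for illustration, with connection details as placeholders:
# Sketch: COPY into named columns so the bigserial id is auto-generated.
# Assumes the test_list table and the pipe-delimited CSV from the question.
import psycopg2

conn = psycopg2.connect(dbname="db_name", user="username")
with conn, conn.cursor() as cur, open("testmngrs.csv") as f:
    cur.copy_expert(
        "COPY test_list (store_id, name, email, phone, city) "
        "FROM STDIN WITH (FORMAT csv, DELIMITER '|')",
        f,
    )
conn.close()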

Database dialect included in Alembic downgrade script, but not on upgrade script?

I am using Alembic to generate database migration scripts for a MySQL database. I've noticed that the syntax of the generated upgrade and downgrade scripts differs slightly, whereas I thought they would be basically the same.
models.py - before
class Message_User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(20), nullable=True)
models.py - after table modification
class Message_User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    tag = db.Column(db.String(15), nullable=True)
migration file - original - shows table creation
def upgrade():
    op.create_table('message_user',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('name', sa.String(length=20), nullable=True),
        sa.PrimaryKeyConstraint('id', name=op.f('pk_message_user'))
    )

def downgrade():
    op.drop_table('message_user')
migration file - after - shows table modification
def upgrade():
    op.add_column('message_user', sa.Column('tag', sa.String(length=15), nullable=True))
    op.drop_column('message_user', 'name')

def downgrade():
    op.add_column('message_user', sa.Column('name', mysql.VARCHAR(collation='utf8_bin', length=20), nullable=True))
    op.drop_column('message_user', 'tag')
The upgrade scripts describe the changes purely in SQLAlchemy terms, whereas the downgrade scripts add MySQL dialect-specific types. Specifically, the upgrade script defines the type as sa.String(length=15), whereas the downgrade defines it as mysql.VARCHAR(collation='utf8_bin', length=20). In create_table statements in downgrade scripts, the autogenerated code also includes mysql_collate, mysql_default_charset and mysql_engine, whereas these aren't present in the create_table statements of upgrade scripts. I didn't see any mention of this in the Alembic documentation. Does anyone know why they differ?

Timescale: ERROR: tried calling catalog_get when extension isn't loaded

I'm using Timescale and today I ran into a problem.
I'm creating a simple table in either of the ways below:
1-
> create table if not exists "NewTable" as (select * from "OldTable");
SELECT 6
2-
create table "NewTable" ("eventTime" timestamptz, name varchar);
After the table is successfully created, I run \d and the result for both tables is the same:
  Column   |           Type           | Collation | Nullable | Default
-----------+--------------------------+-----------+----------+---------
 eventTime | timestamp with time zone |           |          |
 name      | character varying        |           |          |
but the problem starts here...
> SELECT create_hypertable('"NewTable"', '"eventTime"' ,migrate_data => true);
ERROR: tried calling catalog_get when extension isn't loaded
After googling the issue I found nothing useful; instead, everyone said to create the timescaledb extension, which I already had:
> CREATE EXTENSION timescaledb CASCADE; -- or: CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
ERROR: extension "timescaledb" has already been loaded with another version
and
> \dx
Name | Version | Schema | Description
-------------+---------+------------+--------------------------------------------------------
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
timescaledb | 1.5.1 | public | Enables scalable inserts and complex queries for time-series data
So what should I do?
Question: why did this happen? How should I create a hypertable now?
Before these operations I tried to take a dump of my database, and before that I already had 20 main_hyper_tables.
Why did this happen?
I guess TimescaleDB was installed in the standard way through apt. Since a new version of TimescaleDB (v1.6) was released recently, apt automatically updated the installation and copied the 1.6 binary into the shared library of the PostgreSQL installation. Thus the new extension version was loaded, which is different from the extension version used to create the database (v1.5.1).
How should I create a hypertable now?
I see two options:
1. Load the extension version used to create the database, by explicitly specifying it in the postgres config.
2. Update the extension to the latest version with:
ALTER EXTENSION timescaledb UPDATE;
See the "Using ALTER EXTENSION" section in the Update TimescaleDB doc.

How does JasperReports Server store report output internally?

There are a few ways to store report output in JR Server: FS, FTP, and Repository. The repository output is the default one. I guess the files in the repository must be stored in the DB or the file system. Are the files kept forever? How can I manage the repository and, for example, set a file's lifetime?
The repository outputs are stored in the database. Usually there is no need to set the lifetime.
As of JasperReports Server v6.3.0, the reference to all resources is kept in the jiresource table, while their content is kept in jicontentresource.
In my case I was able to retrieve all output reports with:
select r.id,r.name,r.creation_date
from jiresource r, jicontentresource c
where r.id = c.id;
The definition of jicontentresource is
jasperserver=# \d+ jicontentresource
  Column   |         Type          | Modifiers | Storage  | Stats target | Description
-----------+-----------------------+-----------+----------+--------------+-------------
 id        | bigint                | not null  | plain    |              |
 data      | bytea                 |           | extended |              |
 file_type | character varying(20) |           | extended |              |

Using JDBCRealm to authenticate user with Shiro

I am trying to authenticate a servlet running within Tomcat 6 using Shiro.
I have the following shiro.ini file:
[main]
ps = org.apache.shiro.authc.credential.DefaultPasswordService
pm = org.apache.shiro.authc.credential.PasswordMatcher
pm.passwordService = $ps
aa = org.apache.shiro.authc.credential.AllowAllCredentialsMatcher
sm = org.apache.shiro.authc.credential.SimpleCredentialsMatcher
jof = org.apache.shiro.jndi.JndiObjectFactory
jof.resourceName = jdbc/UserDB
jof.requiredType = javax.sql.DataSource
jof.resourceRef = true
realm = org.apache.shiro.realm.jdbc.JdbcRealm
realm.permissionsLookupEnabled = true
realm.credentialsMatcher = $pm
; Note factories are automatically invoked via getInstance(),
; see org.apache.shiro.authc.config.ReflectionBuilder::resolveReference
realm.dataSource = $jof
securityManager.realms = $realm
[urls]
/rest/** = authcBasic
/prot/** = authcBasic
And the following in my database:
mysql> select * from users;
+----------+------------------+----------+----------------------------------------------+--------------------------+
| username | email | verified | password | password_salt |
+----------+------------------+----------+----------------------------------------------+--------------------------+
| admin | a.muys@********* | 1 | ojSiTecNwRF0MunGRvz3DRSgP7sMF9EAR77Ol/2IAY8= | eHp9XedrIUa5sECfOb+KOA== |
+----------+------------------+----------+----------------------------------------------+--------------------------+
1 row in set (0.00 sec)
If I use the SimpleCredentialsMatcher it authenticates fine against a plaintext password in the users table. Trying to use the PasswordMatcher has been extremely frustrating.
The password and password_salt were obtained via the shiro-tools Hasher utility.
When I try to authenticate against a basic HelloWorld servlet I use for testing (path=rest/hello, context=/ws), I get the following in the logs:
15:35:38.667 [http-8080-2] TRACE org.apache.shiro.util.ClassUtils - Unable to load clazz named [ojSiTecNwRF0MunGRvz3DRSgP7sMF9EAR77Ol/2IAY8=] from class loader [WebappClassLoader
context: /ws
delegate: false
repositories:
/WEB-INF/classes/
----------> Parent Classloader:
org.apache.catalina.loader.StandardClassLoader@79ddd026
]
(Full log at https://gist.github.com/recurse/5915693 )
It appears to be trying to load my hashed password as a classname. Is this a bug, or a configuration error on my part? If it is a bug, how can I work around it? If it is a configuration error, what am I missing?
First, thanks for providing a lot of information for this question - it makes providing an answer a lot easier.
By looking at your sample database row list, it does not appear that you are storing the output that the PasswordService expects when performing a hashed password comparison. For example:
$ java -jar ~/.m2/repository/org/apache/shiro/tools/shiro-tools-hasher/1.2.2/shiro-tools-hasher-1.2.2-cli.jar -p
Password to hash:
Password to hash (confirm):
$shiro1$SHA-256$500000$uxaA2ngfdxdXpvSWzpuFdg==$hOJZc+3+bFYYRgVn5wkbQL+m/FseeqDtoM5mOiwAR3E=
The String that starts with $shiro1$ is what you would save to the password column in the database. There is no need for a separate salt column as all the information Shiro needs is in the $shiro1$... String.
The DefaultPasswordService uses the same default configuration parameters (SHA-256, 500,000 iterations, etc) so if you use the Hasher CLI tool as I've shown above (no extra hash algorithm config), you don't need to customize the DefaultPasswordService POJO any further. However, if you change the hashing parameters on the CLI, you need to ensure that the same parameters are configured on the DefaultPasswordService bean (and/or its internal HashingService).
If you are still in testing and can change your DB schema, I'd recommend doing that now to have a single password field that stores the $shiro1$... string. Then you use the PasswordService as documented here under Usage:
http://shiro.apache.org/static/current/apidocs/org/apache/shiro/authc/credential/PasswordService.html