Is there a way to set up code generation in jOOQ for multiple schemas with the same table structure?

We have a multi-tenant database, where each tenant has their own dedicated schema. The schemas always have identical table structures. What I'm trying to figure out is whether there's a way to pass the schema to jOOQ at query time when using code generation to track the schema. Something like:
dslContext.useSchema("schema1").select(A.id).from(A).fetch()
It seems like the schema is always tied to the table object and the only option for mapping at runtime is statically via an input schema and an output schema.
Environment: Java/Kotlin, Maven, Spring Boot, Postgres, Flyway

The features you are looking for are:
Code generation time schema mapping
Runtime schema mapping
See also the FAQ
The simplest solution here is to just turn off the generation of schema information in the code generator:
<outputSchemaToDefault>true</outputSchemaToDefault>
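For context, a sketch of where that flag lives in the Maven code generator configuration; the tenant_template schema name is a placeholder for whichever tenant schema you generate from:
<generator>
  <database>
    <!-- Generate from any one tenant schema (placeholder name) -->
    <inputSchema>tenant_template</inputSchema>
    <!-- Omit the schema from all generated objects -->
    <outputSchemaToDefault>true</outputSchemaToDefault>
  </database>
</generator>
With the schema stripped from the generated classes, the schema actually used at runtime is whatever the connection's search_path resolves to, so each tenant's connection (or a SET search_path per request) selects the schema.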
Or, at runtime:
new Settings().withRenderSchema(false);
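If you prefer to keep the generated schema information and remap it per tenant at runtime instead, here is a minimal sketch using jOOQ's runtime schema mapping; the "public" input schema and the forTenant helper are assumptions for illustration:

import org.jooq.Configuration;
import org.jooq.DSLContext;
import org.jooq.conf.MappedSchema;
import org.jooq.conf.RenderMapping;
import org.jooq.conf.Settings;
import org.jooq.impl.DSL;

public class TenantDsl {
    // Derive a tenant-specific DSLContext from a shared base configuration.
    // "public" is assumed to be the schema the code was generated from.
    public static DSLContext forTenant(Configuration base, String tenantSchema) {
        Settings settings = new Settings()
            .withRenderMapping(new RenderMapping()
                .withSchemata(new MappedSchema()
                    .withInput("public")          // schema name at code generation time
                    .withOutput(tenantSchema)));  // schema name rendered at runtime
        return DSL.using(base.derive(settings));
    }
}

Usage would then look close to the pseudo-code in the question:
TenantDsl.forTenant(dsl.configuration(), "schema1").select(A.id).from(A).fetch();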

Related

Importing SQL Server SQL_VARIANT data type to OrientDB vertex - which property type should I use?

I am evaluating OrientDB as a replacement for MS SQL Server. One of the SQL Server tables I need to import into OrientDB contains time-series data with the value column using a SQL_VARIANT data type. I'm struggling to identify the best data type to use for the equivalent property in a new OrientDB vertex. I'm hesitant to convert it to a STRING, but I don't see an equivalent variant type. Any recommendations?
OrientDB Teleporter is a tool that synchronizes an RDBMS to an OrientDB database. You can use Teleporter to:
Import your existing RDBMS to OrientDB
Keep your OrientDB database synchronized with changes from the RDBMS. In this case the RDBMS database remains the primary and the OrientDB database is a synchronized copy. Synchronization is one-way, so changes made in the OrientDB database will not be propagated back to the RDBMS
Teleporter is fully compatible with several RDBMSs that have a JDBC driver: we successfully tested Teleporter with Oracle, SQL Server, MySQL, PostgreSQL and HyperSQL. Teleporter manages all the necessary type conversions between the different DBMSs and imports all your data as a graph in OrientDB.
NOTE: This feature is available for both the OrientDB Enterprise Edition and the OrientDB Community Edition. But beware: with the Community Edition you can migrate your source relational database, but you cannot use the synchronization feature, which is only available in the Enterprise Edition.
How Teleporter works
Teleporter inspects the specific DBMS metadata in order to infer the source DB schema and build a corresponding graph model. Then the data import phase is performed.
Teleporter has a pluggable importing strategy. Two strategies are provided out of the box:
the naive strategy, the simplest one
the naive-aggregate strategy, which performs a "naive" import of the data source: the source schema is translated semi-directly into a corresponding, coherent graph model, applying an aggregation policy to junction tables of dimension 2 (i.e., join tables connecting exactly two tables)
To learn more about the two execution strategies, see the Teleporter documentation: http://orientdb.com/docs/3.0.x/teleporter/Teleporter-Home.html
Hope it helps
Regards

Why use explicit schema prefix in Postgres functions?

I am using Postgres for microservice backends and the databases are designed to be small(ish) and simple.
We have four schemas in our databases:
live: all the functions, tables, etc used by the application
utest: unit tests
testframe: unit testing functions/framework
testdata: functions that create common test data
When the database is shipped to production, ONLY the 'live' schema is retained; all the testing schemas are dropped.
So my question is: Is there any reason for functions in the 'live' schema to explicitly use the 'live.' schema prefix when referring to tables and calling other functions?
After much googling I am having a hard time making an argument for explicitly using the schema prefix.
Thanks, any comments are appreciated.
Always qualifying objects with their schema names is a good way of making sure that no other objects with the same name in other schemas are picked up by mistake. For example, the pg_catalog schema is implicitly on every search_path, so an unqualified reference might resolve to a system object rather than the one you intended.

Database Crawler using JPA

We have a requirement for building a database crawler. The application parses the tnsnames, connects to each database and retrieves some information such as version, accounts, etc. We are trying to use JPA across the other parts of the application and want to persist this data into the application's own database.
So far, the only approach I see is creating an EntityManagerFactory programmatically for every database. Are there any other options?
We are using Spring; are there any benefits that Spring brings to the table in this scenario?
Thanks
JPA is clearly not the right tool for this job. JPA allows creating functional entities mapping a well-known database schema. Your tool doesn't know anything about the schemas and tables it will find. There could be 0 tables or 5000, with completely unknown names.
You need a much lower-level API to do what you want, like JDBC.
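For the crawling itself, here is a minimal sketch using plain JDBC DatabaseMetaData; the URL and credentials are placeholders that would be built from each tnsnames entry:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

public class DbCrawler {
    // Connect to one database and print its product/version and visible schemas.
    static void describe(String jdbcUrl, String user, String password) throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password)) {
            DatabaseMetaData meta = conn.getMetaData();
            System.out.println(meta.getDatabaseProductName() + " "
                    + meta.getDatabaseProductVersion());
            // On Oracle, schemas roughly correspond to accounts
            try (ResultSet rs = meta.getSchemas()) {
                while (rs.next()) {
                    System.out.println("  " + rs.getString("TABLE_SCHEM"));
                }
            }
        }
    }
}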
You could use JPA to store the results of your crawlings in a single schema, though.

How to migrate existing data managed with Squeryl?

There is a small project of mine reaching its release, based on Squeryl, a type-safe relational database framework for Scala (a JVM-based language).
I foresee multiple updates after the initial deployment. The data entered into the database should be preserved across them. This is impossible without some kind of data migration procedure that upgrades the data to the newer DB schema.
Using old data for testing new code also requires compatibility patches.
Currently I use the framework's automatic schema generation. It seems to be able only to create the schema from scratch; no data survives.
Are there methods that allow easy and formalized migration of data to a changed schema without completely dropping automatic schema generation?
So far I can only see an easy way to add columns: dump the old data, provide default values for the new columns, reset the schema, and restore the old data.
How do I delete or rename columns, or change column types or semantics?
If schema generation is not useful for production database migration, what are standard procedures to follow for conventional manual/scripted redeployment?
There have been several discussions about this on the Squeryl list. The consensus tends to be that there is no real best practice that works for everyone. Having an automated process that updates your schema based on your model is brittle (it can't handle situations like column renames) and can be dangerous in production. Personally, I like the idea of "migrations", where all of your schema changes are written as SQL. There are a few frameworks that help with this on the JVM; Flyway and Liquibase are well-known examples. For my own projects, I just use a light wrapper around the psql command line utility to do schema migrations and data loading, as it's a lot faster for the latter than feeding in the data over JDBC.
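To make the "migrations as SQL" idea concrete, here is a minimal sketch of a runner that applies numbered .sql files in order and records what has been applied; the schema_version table and file layout are assumptions, and real frameworks like Flyway handle this far more robustly:

import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class Migrator {
    // Apply migrations/001_xxx.sql, 002_xxx.sql, ... exactly once each.
    public static void migrate(Connection conn, Path dir) throws Exception {
        try (Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS schema_version (version TEXT PRIMARY KEY)");
        }
        List<Path> scripts = new ArrayList<>();
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir, "*.sql")) {
            ds.forEach(scripts::add);
        }
        scripts.sort(Comparator.comparing(p -> p.getFileName().toString()));
        for (Path script : scripts) {
            String version = script.getFileName().toString();
            if (applied(conn, version)) continue;           // skip already-applied scripts
            try (Statement st = conn.createStatement()) {
                st.execute(Files.readString(script));       // run the migration SQL
            }
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO schema_version (version) VALUES (?)")) {
                ps.setString(1, version);
                ps.executeUpdate();
            }
        }
    }

    private static boolean applied(Connection conn, String version) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT 1 FROM schema_version WHERE version = ?")) {
            ps.setString(1, version);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}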

How to always prepend the schema name in JPQL?

I have tables across 3 different database schemas. JPA confuses itself because it tries to find the tables in the wrong schema.
I know I can specify the schema in the @Table annotation, but one of the schemas varies, so I can't hardcode its name.
So, my idea is to tell JPA to always prepend the schema name in the queries it generates, whether or not I define it in the @Table annotation.
Is this possible?
Any other solution?
Thanks!
Note: I'm not using Hibernate, I'm using TopLink.
Use a JPA orm.xml and define the schema/catalog there, in the global section. This works fine with DataNucleus JPA.
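A minimal sketch of such an orm.xml; MYSCHEMA is a placeholder, and the default applies to every entity that doesn't declare its own schema:

<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
                 version="2.0">
    <persistence-unit-metadata>
        <persistence-unit-defaults>
            <!-- Placeholder; set per environment -->
            <schema>MYSCHEMA</schema>
        </persistence-unit-defaults>
    </persistence-unit-metadata>
</entity-mappings>

Since one of your schemas varies by environment, you could keep one orm.xml per environment and pick the right one at build or deploy time.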
Talk to your DBA to see if they can create a schema that merges all three schemas. That way your application will only have to deal with one schema. DB2 for z/OS can do this, and it saved having to create different orm.xml files for each environment.