jOOQ code generation "excludes" not working with TypeSafe Config

I am trying to exclude two tables created by Liquibase when using TypeSafe Config.
jooq {
  # databasechangelog = Liquibase generated tables
  # databasechangeloglock = Liquibase generated tables
  excludes = "databasechangelog, databasechangeloglock"
}
When I only supply a single exclude, such as "databasechangelog", it works.
According to the manual (http://www.jooq.org/doc/2.6/manual/code-generation/codegen-configuration/), multiple excludes should be separated by a comma, but with the configuration above both tables are still generated.
This is not allowed either:
excludes = "databasechangelog", "databasechangeloglock"
Inside the library, it is simply calling this (note: getExcludes returns a String):
database.setExcludes(new String[]{StringUtils.defaultString(d1.getExcludes())});
Has anyone else run into this problem?
Here's my code generation:
new GenerationTool {
  setConnection(connection)
  run(new Configuration {
    withGenerator(new Generator {
      withName(config.jooq.generatorClass)
      withDatabase(new org.jooq.util.jaxb.Database {
        withIncludes(config.jooq.includes)
        withExcludes(config.jooq.excludes)
        withInputSchema(config.jooq.inputSchema)
        withName(config.jooq.databaseClass)
      })
      withTarget(new Target {
        withPackageName(config.jooq.pkg)
        withDirectory(config.jooq.directory)
      })
      withGenerate(new Generate {
        setDaos(true)
      })
    })
  })
}

You're referencing the manual from version 2.6. In the old days, we used comma-separated lists of expressions in includes/excludes - but believe it or not, some people had commas in their table/column names, which is why we dumped the commas. After all, includes / excludes are just regular expressions, and you can separate your individual patterns using the "union operator", the pipe: |
I.e. write:
jooq {
  # databasechangelog = Liquibase generated tables
  # databasechangeloglock = Liquibase generated tables
  excludes = "databasechangelog|databasechangeloglock"
}
This is also documented here in the section "feature removals" (look for "comma-separated"):
http://www.jooq.org/doc/latest/manual/reference/migrating-to-3.0
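If you'd still prefer to keep the excludes as separate entries in your TypeSafe Config, a minimal sketch is to store them as a list and join the patterns with the pipe before handing them to the generator (the "jooq.excludes" list key is an assumption for illustration, not something jOOQ prescribes):

// application.conf: jooq { excludes = ["databasechangelog", "databasechangeloglock"] }
import com.typesafe.config.ConfigFactory
import scala.collection.JavaConverters._

val typesafeConfig = ConfigFactory.load()
// Join the individual regex patterns with the union operator "|"
val excludes = typesafeConfig.getStringList("jooq.excludes").asScala.mkString("|")
// then pass it along: withExcludes(excludes)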

Related

Pureconfig - is it possible to include another conf file in a conf file?

Is it possible to include another conf file in a conf file?
Current implementation:
// db-writer.conf
writer: {
  name = "DatabaseWriter",
  model = "model1",
  table-name = "model1",
  append = false,
  create-table-file = "sql/create_table_model1.sql",
  source-file = "abcd.csv"
}
Desired solution:
// model1.conf (+ others: model2.conf, model3.conf, ...)
table: {
  name = "model1",
  table-name = "model1",
  create-table-file = "../sql/create_table_model1.sql"
}

// db-writer.conf
import model1.conf   <=== some import?
writer: {
  name = "DatabaseWriter",
  model = "model1",   <=== some reference like this?
  append = false,
  source-file = "abcd.csv"
}
The reasons why I would like to have it like this are:
to reduce duplicated definitions
to pre-define user conf files that are rarely modified
I guess it is not possible - if not, do you have any suggestions for how to separate configs & reuse them?
I'm using Scala 2.12 and pureconfig 0.14 (can be updated to any newer version).
Pureconfig uses HOCON (though its interpretation of some things, like durations, differs). HOCON include is supported.
So assuming that you have model1.conf in your resources (e.g. src/main/resources), all you need in db-writer.conf is
include "model1"
HOCON-style overrides and concatenation are also supported:
writer: ${table} {
  name = "DatabaseWriter"
  model = "model1"
  append = false
  source-file = "abcd"
}
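For completeness, here is a minimal sketch of loading the merged config with pureconfig (the case class names and the pureconfig-generic auto-derivation are assumptions based on the keys above):

import pureconfig._
import pureconfig.generic.auto._

// pureconfig's default naming convention maps camelCase fields to kebab-case keys (tableName <-> table-name)
case class Writer(name: String, model: String, tableName: String, append: Boolean, createTableFile: String, sourceFile: String)
case class DbWriterConf(writer: Writer)

val conf = ConfigSource.resources("db-writer.conf").loadOrThrow[DbWriterConf]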

How to use Kotlin's Ktorm to perform WHERE clause operation on custom Postgres "object type" avoiding "PSQLException: ERROR: operator does not exist"

Whilst performing a WHERE clause operation on a custom Postgres "object type", I ended up with the following PSQLException.
Language: Kotlin.
ORM Library: Ktorm ORM.
Exception
org.postgresql.util.PSQLException: ERROR: operator does not exist: rate = character varying
Hint: No operator matches the given name and argument types. You might need to add explicit type casts.
I have followed the official Ktorm guides here, but there is no mention of custom Postgres types. Any pointers/help would be highly appreciated. See the code below to reproduce.
Thank you.
Example test that would produce the above exception:
internal class SuppliersInstanceDAOTest {
    @Test
    fun shouldReturnInstanceSequence() {
        val database = Database.connect("jdbc:postgresql://localhost:5432/mydb", user = "postgres", password = "superpassword")
        val instanceDate: LocalDate = LocalDate.of(2019, 4, 1)
        database.withSchemaTransaction("suppliers") {
            database.from(SuppliersInstanceTable)
                .select(SuppliersInstanceTable.instanceSeq)
                .whereWithConditions {
                    // The following line causes "ERROR: operator does not exist: rate = character varying"
                    it += SuppliersInstanceTable.rate eq Rate.DAILY
                }.asIterable()
                .first()
                .getInt(1)
        }
    }
}
Schema:
-- Note the special custom enum object type here that I cannot do anything about
CREATE TYPE suppliers.rate AS ENUM
    ('Daily', 'Byweekly');

CREATE TABLE suppliers.instance
(
    rate suppliers.rate NOT NULL,
    instance_value integer NOT NULL
)
TABLESPACE pg_default;
Ktorm entities and bindings:
enum class Rate(val value: String) {
    DAILY("Daily"),
    BIWEEKLY("Byweekly")
}

interface SuppliersInstance : Entity<SuppliersInstance> {
    companion object : Entity.Factory<SuppliersInstance>()
    val rate: Rate
    val instanceSeq: Int
}

object SuppliersInstanceTable : Table<SuppliersInstance>("instance") {
    val rate = enum("rate", typeRef<Rate>()).primaryKey().bindTo { it.rate } // <-- Suspect
    //val rate = enum<Rate>("rate", typeRef()).primaryKey().bindTo { it.rate } // Failed too
    val instanceSeq = int("instance_value").primaryKey().bindTo { it.instanceSeq }
}
After seeking help from the maintainers of Ktorm, it turns out the newer versions of Ktorm support native PostgreSQL enum object types. In my case I needed pgEnum instead of the default Ktorm enum function, which converts the enums to varchar and thereby causes the type clash in PostgreSQL:
Reference here for pgEnum
However, note that at the time of writing, Ktorm's pgEnum function is only available in Ktorm v3.2.x and later, while the latest version available in the JCenter and Maven Central repositories under the old coordinates is v3.1.0. This is because the group name changed from me.liuwj.ktorm to org.ktorm in the latest versions, so upgrading also means changing the group name in your dependencies to match the new coordinates in the Maven repositories. This was a seamless upgrade for my project, and the new pgEnum worked for my use case.
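Concretely, the dependency coordinates change along these lines (ktorm-support-postgresql as the module providing pgEnum is an assumption; check the Ktorm docs for your setup):
// old coordinates: me.liuwj.ktorm:ktorm-support-postgresql:3.1.0
// new coordinates: org.ktorm:ktorm-support-postgresql:3.2.0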
For my code examples above, this would mean swapping this:
object SuppliersInstanceTable : Table<SuppliersInstance>("instance") {
    val rate = enum("rate", typeRef<Rate>()).primaryKey().bindTo { it.rate } // <---
    ...
}
for this:
object SuppliersInstanceTable : Table<SuppliersInstance>("instance") {
    val rate = pgEnum<Rate>("rate").primaryKey().bindTo { it.rate } // <---
    ...
}

Print complete SQL for all queries made by objection.js

I'm looking for a way to capture the raw SQL for all the queries that the Objection.js library executes with the bindings interpolated into the SQL string.
I realize that there's a Knex event handler that I can take advantage of, but the second argument to on('query', data) is an object containing an SQL template with the bindings kept separate.
e.g.
{
  sql: "select \"accounts\".* from \"accounts\" where \"id\" = ?",
  bindings: [1]
}
I'm wondering if the most elegant way to do this would be to use something like the .toString() method that exists on the QueryBuilder, but I don't think a specific instance of a QueryBuilder is available in the callback. Ideally I wouldn't reinvent the wheel and rewrite Knex's interpolation method.
Any pointers would be greatly appreciated.
Thank you!
You can use the .toKnexQuery() function to pull out the underlying knex query builder and gain access to .toSQL() and .toQuery().
I tested and verified the following example using version 2 of Objection. I couldn't find .toKnexQuery() in the version 1 docs and therefore can't verify it will work with earlier versions of Objection.
// Users.js
const { Model } = require('objection')

class Users extends Model {
  static get tableName() { return 'users' }
  // Insert jsonSchema, relationMappings, etc. here
}

module.exports = Users

const Users = require('./path/to/Users')

const builder = Users.query()
  .findById(1)
  .toKnexQuery()

console.log(builder.toQuery())
// "select `users`.* from `users` where `users`.`id` = 1"

console.log(builder.toSQL())
// {
//   method: 'select',
//   bindings: [ 1 ],
//   sql: 'select `users`.* from `users` where `users`.`id` = ?'
// }
It should probably be reiterated that in addition to .toString(), .toQuery() can also be vulnerable to SQL injection attacks (see here).
A more "responsible" way to modify the query might be something like this (with MySQL):
const { sql, bindings } = Users.query()
  .insert({ id: 1 })
  .toKnexQuery()
  .toSQL()
  .toNative()

Users.knex().raw(`${sql} ON DUPLICATE KEY UPDATE foo = ?`, [...bindings, 'bar'])
Knex / Objection.js does not provide any method that can securely do the interpolation. .toString() can produce invalid results in some cases, and the interpolated strings can be vulnerable to SQL injection attacks.
If it is only for debugging purposes, looking at how .toQuery() is implemented helps: https://github.com/knex/knex/blob/e37aeaa31c8ef9c1b07d2e4d3ec6607e557d800d/lib/interface.js#L12
knex.client._formatQuery(sql, bindings, tz)
It is not a public API, though, so it is not guaranteed to stay the same even between patch versions of Knex.

Groovy sql.rows returns org.postgresql.util.PSQLException: No hstore extension installed

I am using Groovy Sql in Grails with named parameters to get results from a Postgres DB. My statement is generated dynamically, i.e. concatenated to become the final statement, with the params being added to a map as I go along.
sqlWhere += " AND bar = :namedParam1"
paramsMap.namedParam1 = "blah"
For readability, I am using the Groovy triple-quoted string syntax, which allows me to write my SQL statement over multiple lines, like this:
sql = """
    SELECT *
    FROM foo
    WHERE 1=1
    ${sqlWhere}
"""
The expression is evaluated as a string containing the linebreaks as \n:
SELECT *\n ...
This is not a problem when I pass params like this
results = sql.rows(sqlString, paramsMap)
but it does become one if paramsMap is empty (which happens since AND bar = :namedParam1 is not always concatenated into the query). I then get an error
org.postgresql.util.PSQLException: No hstore extension installed
which does not really seem to relate to the true nature of the problem. I have for now fixed this with an if...else
if (sqlQuery.params.size() > 0) {
    results = sql.rows(sqlString, paramsMap)
} else {
    results = sql.rows(sqlString.replace('\n', ' '))
}
But this seems a bit weird (especially since it does not work if I use the replace in the if-branch as well).
My question is: why do I really get this error message, and is there a better way to prevent it from occurring?
It's certainly a bug in the groovy.sql.Sql implementation. The method rows() can't deal with an empty map passed as params. As a workaround, you can test for it and pass an empty list instead.
def paramsMap = [:]
...
if (paramsMap.isEmpty())
    paramsMap = []
Issue created at https://issues.apache.org/jira/browse/GROOVY-8082

Dapper QueryMultiple Stored Procedures w/o mapping to Objects

With Dapper, I can batch-execute stored procedures, something similar to:
connection.Execute(@"
    exec sp1 @i = @one, @y = @two
    exec sp2 @i = @three",
    new { one = 1, two = 2, three = 3 });
However, the only means of retrieving data that I have seen so far is by using
results.Read<Type>()
What if the results don't map to an object? For instance, I am writing "generic" code to execute any SP with variable in/out parameters & result sets.
Thanks
What API do you want? If you can process the grids separately, do that:
using (var multi = connection.QueryMultiple(...))
{
    while (!multi.IsConsumed)
    {
        // ...
    }
}
where ... has access to:
Read() for dynamic rows - noting that each row also implements IDictionary<string,object>
Read<T>() for typed rows via generics
Read(Type) for typed rows without generics
Read<DapperRow>() (actually, DapperRow is just the T that Read<T>() uses internally to implement the non-generic Read(), but it is perhaps more convenient), which provides slightly more access to metadata
If you want to drop to a raw IDataReader, do that:
using (var reader = connection.ExecuteReader(...))
{
    // whatever you want
}
With regards to parameters: the DynamicParameters class provides much richer access to parameter control, including parameter-direction etc.