JOOQ generates CHAR OCTETS columns as CHAR instead of BINARY - firebird

My problem is very close to the one mentioned in Using UUID PK or FK in Firebird with Jooq
Setup:
Jaybird 3.0.5, Firebird 2.5.7, jOOQ 3.11.7, JDK 1.8
My PK and FK fields look like
ID CHAR(16) CHARACTER SET OCTETS NOT NULL
and
TABLE_ID CHAR(16) CHARACTER SET OCTETS
and I want to use UUID as the Java data type in the generated classes.
I use a JDBC connection in the configuration like this:
<jdbc>
<driver>org.firebirdsql.jdbc.FBDriver</driver>
<url>jdbc:firebirdsql:localhost:c:/DBS/DB.FDB?octetsAsBytes=true</url>
<properties>
<property>
<key>user</key>
<value>SYSDBA</value>
</property>
<property>
<key>password</key>
<value>masterkey</value>
</property>
</properties>
</jdbc>
I have set up a forcedType in the generator like this:
<forcedType>
<userType>java.util.UUID</userType>
<binding>com.ekser.nakkash.icdv.converters.jooq.ByteArray2UUIDBinding</binding>
<expression>.*ID$</expression>
<types>CHAR\(16\)</types>
<nullability>ALL</nullability>
</forcedType>
and I have the class
class ByteArray2UUIDBinding implements Binding<byte[], UUID>
Now the problem:
jOOQ generates
public final TableField<MyTableRecord, UUID> ID = createField("ID", org.jooq.impl.SQLDataType.CHAR(16).nullable(false), this, "", new ByteArray2UUIDBinding());
The problem is SQLDataType.CHAR(16); it should be SQLDataType.BINARY(16).
jOOQ translates my CHAR(16) OCTETS fields as strings (CHAR(16)); it does not respect octetsAsBytes=true.
I have also tried putting it into the properties in <jdbc> like this:
<jdbc>
<driver>org.firebirdsql.jdbc.FBDriver</driver>
<url>jdbc:firebirdsql:localhost:c:/DBS/DB.FDB</url>
<properties>
<property>
<key>user</key>
<value>SYSDBA</value>
</property>
<property>
<key>password</key>
<value>masterkey</value>
</property>
<property>
<key>octetsAsBytes</key>
<value>true</value>
</property>
</properties>
</jdbc>
With the same result.
What is wrong?
For now I am considering running a search & replace of CHAR(16) -> BINARY(16) in the generated classes, which is not 'stylish'.

The setting octetsAsBytes does nothing in Jaybird 3; see Character set OCTETS handled as JDBC (VAR)BINARY in the Jaybird 3 release notes. Jaybird 3 always behaves as octetsAsBytes=true did in previous versions, with some further improvements.
In other words, this is not related to that setting at all, but is instead a result of how jOOQ generates this.
jOOQ does its own metadata introspection by directly querying the Firebird metadata tables and mapping the Firebird type codes to jOOQ SQL types (see FirebirdTableDefinition and FirebirdDatabase.FIELD_TYPE). It maps Firebird type 15 directly to CHAR, without further considering subtypes (which, for this type, are character sets).
In other words, you'll need to file an improvement ticket with jOOQ if you want to get this mapped to BINARY instead (although it's unclear to me why this is really a problem for you).
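As a side note, the conversion such a Binding<byte[], UUID> has to perform is simple either way. A minimal sketch in plain Java, independent of jOOQ (the class and method names below are illustrative, not taken from the question's ByteArray2UUIDBinding):

```java
import java.nio.ByteBuffer;
import java.util.UUID;

public class UuidBytes {

    // Pack a UUID into the 16-byte big-endian form stored in CHAR(16) OCTETS.
    public static byte[] toBytes(UUID uuid) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.putLong(uuid.getMostSignificantBits());
        buf.putLong(uuid.getLeastSignificantBits());
        return buf.array();
    }

    // Reverse direction: rebuild the UUID from the 16 raw bytes.
    public static UUID fromBytes(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        return new UUID(buf.getLong(), buf.getLong());
    }

    public static void main(String[] args) {
        UUID original = UUID.randomUUID();
        UUID roundTripped = fromBytes(toBytes(original));
        System.out.println(original.equals(roundTripped)); // true: the round trip is lossless
    }
}
```

A real jOOQ Binding would delegate to two such methods from its converter, regardless of whether the generator declares the column as CHAR or BINARY.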

Related

NLog doesn't insert into the database when the table name is uppercase

This works fine:
<targets>
<target name="database" xsi:type="Database"
dbProvider="Npgsql.NpgsqlConnection, Npgsql"
connectionString="User ID=postgres;Password=xxx;Host=192.xx;Port=5432;Database=xxx;">
<!--Pooling=true;-->
<commandText>
insert into systemlogs(...;
</commandText>
</target>
</targets>
But when the table name is changed to
"SystemLogs"
(with the same change made in the database as well), it throws an exception:
"couldn't find table name "systemlogs""
which makes sense, since there isn't one, but why doesn't NLog recognize the updated table name?
In PostgreSQL all quoted identifiers (e.g. table and column names) are case sensitive.
See: Are PostgreSQL column names case-sensitive?
So NLog can't find them if you use quotes and the wrong casing.
So either don't use quotes, or use the correct casing.
If you specified the table name as "SystemLogs" inside double quotes then you will need to use it that way also:
insert into "SystemLogs" ...
In PostgreSQL, quoted identifiers retain the case they are quoted with and need to be referred to in the same way. If you create a table with the unquoted name SystemLogs, the name will be folded to lower case. See below for more detail:
https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS
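To make the folding rule concrete, here is a small sketch that mimics PostgreSQL's identifier handling (illustrative only — the real parser also handles escaped quotes, Unicode escape forms, and length limits):

```java
import java.util.Locale;

public class PgIdentifier {

    // Mimic PostgreSQL name resolution: quoted identifiers keep their exact
    // case (quotes stripped), unquoted identifiers are folded to lower case.
    public static String resolve(String identifier) {
        if (identifier.length() >= 2
                && identifier.startsWith("\"")
                && identifier.endsWith("\"")) {
            return identifier.substring(1, identifier.length() - 1);
        }
        return identifier.toLowerCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        // CREATE TABLE SystemLogs (unquoted) actually creates systemlogs ...
        System.out.println(resolve("SystemLogs"));     // systemlogs
        // ... while the quoted form keeps its exact casing.
        System.out.println(resolve("\"SystemLogs\"")); // SystemLogs
    }
}
```

This is why a table created as "SystemLogs" is invisible to a query that refers to it unquoted: the unquoted reference is folded to systemlogs, which no longer matches.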

Default schema for native SQL queries (spring-boot + hibernate + postgresql + postgis)

I am introducing Spring to an existing application (Hibernate was already there) and have encountered a problem with native SQL queries.
A sample query:
SELECT ST_MAKEPOINT(cast(longitude as float), cast(latitude as float)) FROM
OUR_TABLE;
OUR_TABLE is in OUR_SCHEMA.
When we connect to the db with OUR_SCHEMA:
spring.datasource.url: jdbc:postgresql://host:port/db_name?currentSchema=OUR_SCHEMA
the query fails because the function ST_MAKEPOINT is not found - the function is located in the PUBLIC schema.
When we connect to the db without specifying the schema, ST_MAKEPOINT is found and runs correctly, though the schema name needs to be added to the table name in the query.
As we are talking about thousands of such queries, and all the tables are located in OUR_SCHEMA, is there a way to specify the default schema so that functions from the PUBLIC schema remain visible?
So far, I have tried the following Spring Boot properties - with no success:
spring.jpa.properties.hibernate.default_schema: OUR_SCHEMA
spring.datasource.tomcat.initSQL: ALTER SESSION SET CURRENT_SCHEMA=OUR_SCHEMA
spring.datasource.initSQL: ALTER SESSION SET CURRENT_SCHEMA=OUR_SCHEMA
Also, it worked before switching to the Spring Boot config - specifying hibernate.default-schema = OUR_SCHEMA in persistence.xml was enough.
Stack:
spring-boot: 2.0.6
hibernate: 5.3.1.Final
postgresql: 42.2.5
postgis: 2.2.1
You're probably looking for the PostgreSQL search_path variable, which controls which schemas are checked when trying to resolve database object names. The path accepts several schema names, which are checked in order, so you can use the following:
SET search_path=our_schema,public;
This will make PostgreSQL look for your tables (and functions!) first in our_schema, and then in public. Your JDBC driver may or may not support multiple schemas in its current_schema parameter.
Another option is to install the PostGIS extension (which provides the ST_MakePoint() function) in the our_schema schema:
CREATE EXTENSION postgis SCHEMA our_schema;
This way you only have to have one schema in your search path.
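To make the lookup order concrete, the following sketch mimics how PostgreSQL walks the search path when resolving an unqualified name (illustrative only; the schema contents here are invented to match this question):

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;

public class SearchPathDemo {

    // For each schema, the set of objects (tables, functions) it contains.
    static final Map<String, Set<String>> CATALOG = Map.of(
            "our_schema", Set.of("our_table"),
            "public", Set.of("st_makepoint"));

    // Return the first schema on the path that contains the object,
    // mirroring how PostgreSQL resolves unqualified names.
    static Optional<String> resolve(String object, List<String> searchPath) {
        return searchPath.stream()
                .filter(s -> CATALOG.getOrDefault(s, Set.of()).contains(object))
                .findFirst();
    }

    public static void main(String[] args) {
        List<String> path = List.of("our_schema", "public");
        System.out.println(resolve("our_table", path));     // Optional[our_schema]
        System.out.println(resolve("st_makepoint", path));  // Optional[public]
        // With only our_schema on the path, the function is not found:
        System.out.println(resolve("st_makepoint", List.of("our_schema"))); // Optional.empty
    }
}
```

This is exactly the asker's situation: with currentSchema=OUR_SCHEMA alone, the function in public falls off the path; putting both schemas on the path makes both resolvable.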
The JDBC parameter currentSchema explicitly allows specifying several schemas, separated by commas:
jdbc:postgresql://postgres-cert:5432/db?currentSchema=my,public&connectTimeout=4&ApplicationName=my-app
From https://jdbc.postgresql.org/documentation/head/connect.html
currentSchema = String
Specify the schema (or several schema separated by commas) to be set in the search-path. This schema will be used to resolve unqualified object names used in statements over this connection.
Note you probably need Postgres 9.6 or better for currentSchema support.
PS: Probably a better solution is to set the search_path per user:
ALTER USER myuser SET search_path TO mydb,pg_catalog;
If you use hibernate.default_schema, then for native queries you need to provide the {h-schema} placeholder, something like this:
SELECT ST_MAKEPOINT(cast(longitude as float), cast(latitude as float)) FROM {h-schema}OUR_TABLE;

Cannot use a foreign table in DBUnit

My project uses Spring MVC, MyBatis, and PostgreSQL.
In Postgres, I have 2 servers: sv1 and sv2.
I imported a table from sv2 into sv1 using:
import foreign schema public limit to (tbl2) from server sv2 into public;
But when using DBUnit for testing, I cannot insert data into the foreign table tbl2. The exception is:
ERROR org.dbunit.database.DatabaseDataSet - Table 'tbl2' not found in tableMap=org.dbunit.dataset.OrderedTableNameMap
How can I use a foreign table in DBUnit?
You need to configure the DatabaseConfig:
databaseConfig.setProperty(DatabaseConfig.PROPERTY_TABLE_TYPE, [array of String with the table types]);
or
databaseConfig.setTableType([array with the table types]);
or configure your bean and add the property:
<property name="tableType">
<array value-type="java.lang.String">
<value>TABLE</value>
<value>FOREIGN TABLE</value>
</array>
</property>
You can see the whole set of table types if you look at any of the implementations of DatabaseMetaData and search for "TABLE".

Quoted identifier error Codefluent

We have a QUOTED_IDENTIFIER problem with the Azure producer. We have an entity where we defined a Geography property, and we created a geospatial index on that table. However, if we perform an insert or update on that table, we get the following error:
INSERT failed because the following SET options have incorrect
settings: ‘QUOTED_IDENTIFIER’. Verify that SET options are correct for
use with indexed views and/or indexes on computed columns and/or
filtered indexes and/or query notifications and/or XML data type
methods and/or spatial index operations.
We solved the error by dropping and recreating all stored procedures of this table, but now with SET QUOTED_IDENTIFIER ON.
The problem is that every time we run the producer, the stored procedures are dropped and recreated with QUOTED_IDENTIFIER OFF. How can we solve this?
You can configure the SQL Server producer to generate SET QUOTED_IDENTIFIER ON at the top of the files:
<cf:producer name="SQL Server" typeName="CodeFluent.Producers.SqlServer.SqlServerProducer, CodeFluent.Producers.SqlServer">
<cf:configuration quotedIdentifier="ON" ... />
</cf:producer>

DB2 Character Datatype and JPA Not Working

I am working with a DB2 Universal database that has lots of tables with columns of datatype CHARACTER. The length of these columns is variable (greater than 1, e.g. 18). When I execute a native query using JPA, it selects only the first character of the column. It seems the CHARACTER datatype is mapped to Java's Character.
How can I get the full contents of the DB column? I cannot change the database, it being on the vendor's side. Please note I need to do it both ways, i.e.:
Using JPQL (can the columnDefinition attribute work in this case?)
Using a native DB query (no POJO is used in this case and I have no control over the datatypes)
I am using the Hibernate implementation of JPA provided by Spring.
If these columns are actually common in your database, you can customize the dialect used by Hibernate. See the comments on HHH-2304.
I was able to cast the column to VARCHAR to produce padded String results from createNativeQuery:
select VARCHAR(char_col) from schema.tablename where id=:id