DataJpaTest: Numeric scale default seems to be 0 with spring-boot-starter 2.7.1 - postgresql

I have a DataJpaTest with a schema.sql and data.sql for preparing the in-memory database (H2 in PostgreSQL mode). I've just upgraded spring-boot-starter-parent from 2.6.3 to 2.7.1, and now the test fails.
schema:
CREATE TABLE IF NOT EXISTS some_table(
id BIGSERIAL,
name TEXT,
problematic_number NUMERIC NOT NULL
);
data:
INSERT INTO some_table (name, problematic_number) VALUES ('something', 1.4321);
For some reason a test is failing now with:
org.opentest4j.AssertionFailedError:
Expected :1.4321
Actual :1
I also connected to the H2 database directly, and the stored value really is "1" instead of "1.4321". Before my Spring upgrade, the test was fine.
Did the default scale for NUMERIC maybe change? If I change my schema.sql to NUMERIC(10,4), the test succeeds.
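For reference, pinning the precision and scale explicitly in schema.sql is what makes the test pass again; (10,4) is just a size that fits my data, not anything special:
CREATE TABLE IF NOT EXISTS some_table(
    id BIGSERIAL,
    name TEXT,
    problematic_number NUMERIC(10,4) NOT NULL
);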

Related

REVINFO table is missing the sequence "revinfo_seq"

I am migrating to Spring Boot 3.0.1 and updated the "hibernate-envers" version to "6.1.6.Final". My DB is PostgreSQL 13.6.
Hibernate is configured to create the DB schema:
spring.jpa.hibernate.ddl-auto:create
After starting the application I get the following error:
pim 2022-12-27 12:00:13,715 WARN C#c7b942ec-33b4-4749-b113-22cbb2946a8d [http-nio-9637-exec-1] SqlExceptionHelper/133 - SQL Error: 0, SQLState: 42P01
pim 2022-12-27 12:00:13,715 ERROR C#c7b942ec-33b4-4749-b113-22cbb2946a8d [http-nio-9637-exec-1] SqlExceptionHelper/138 - ERROR: relation "revinfo_seq" does not exist
Position: 16
The revinfo table looks like this:
create table revinfo
(
    revision bigint not null primary key,
    client_id varchar(255),
    correlation_id varchar(255),
    origin varchar(255),
    request_id varchar(255),
    revision_timestamp bigint not null,
    timestamp_utc timestamp with time zone,
    user_name varchar(255)
);
The sequence "revinfo_seq" does not exist, but in the old DB structure with envers
5.6.8.Final
and SpringBoot 2.6.6 it didn't exist either without any problems.
What am i Missing?
I tried to toggle the paramter
org.hibernate.envers.use_revision_entity_with_native_id
but it did not help.
You can solve it with this property:
spring.jpa.properties.hibernate.id.db_structure_naming_strategy: legacy
Tested with Spring Boot 3.0.1
Reason:
Hibernate 6 changed the default sequence naming strategy, so it now searches for a sequence named after the table with an "_seq" suffix (hence "revinfo_seq").
You can find a really detailed explanation here: https://thorben-janssen.com/sequence-naming-strategies-in-hibernate-6/
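If you would rather keep Hibernate 6's new naming strategy instead of reverting to the legacy one, another option is to create the missing sequence yourself. A minimal sketch for PostgreSQL, assuming Hibernate's default allocation size of 50 (verify the allocation size your revision entity actually uses):
CREATE SEQUENCE revinfo_seq START WITH 1 INCREMENT BY 50;
The INCREMENT BY value must match Hibernate's allocation size, and for an existing database you should pick a START WITH value above the current maximum revision number, otherwise generated revision numbers can clash with existing ones.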

Error when creating external table in Redshift Spectrum with dbt: cross-database reference not supported

I want to create an external table in Redshift Spectrum from CSV files. When I try doing so with dbt, I get a strange error. But when I manually remove some double quotes from the SQL generated by dbt and run it directly, I get no such error.
First I run this in Redshift Query Editor v2 on default database dev in my cluster:
CREATE EXTERNAL SCHEMA example_schema
FROM DATA CATALOG
DATABASE 'example_db'
REGION 'us-east-1'
IAM_ROLE 'iam_role'
CREATE EXTERNAL DATABASE IF NOT EXISTS
;
Database dev now has an external schema named example_schema (and Glue catalog registers example_db).
I then upload example_file.csv to the S3 bucket s3://example_bucket. The file looks like this:
col1,col2
1,a,
2,b,
3,c
Then I run dbt run-operation stage_external_sources in my local dbt project and get this output with an error:
21:03:03 Running with dbt=1.0.1
21:03:03 [WARNING]: Configuration paths exist in your dbt_project.yml file which do not apply to any resources.
There are 1 unused configuration paths:
- models.example_project.example_models
21:03:03 1 of 1 START external source example_schema.example_table
21:03:03 1 of 1 (1) drop table if exists "example_db"."example_schema"."example_table" cascade
21:03:04 Encountered an error while running operation: Database Error
cross-database reference to database "example_db" is not supported
I try running the generated SQL in Query Editor:
DROP TABLE IF EXISTS "example_db"."example_schema"."example_table" CASCADE
and get the same error message:
ERROR: cross-database reference to database "example_db" is not supported
But when I run this SQL in Query Editor, it works:
DROP TABLE IF EXISTS "example_db.example_schema.example_table" CASCADE
Note that I just removed some quotes.
What's going on here? Is this a bug in dbt-core, dbt-redshift, or dbt_external_tables, or just a mistake on my part?
To confirm, I can successfully create the external table by running this in Query Editor:
DROP SCHEMA IF EXISTS example_schema
DROP EXTERNAL DATABASE
CASCADE
;
CREATE EXTERNAL SCHEMA example_schema
FROM DATA CATALOG
DATABASE 'example_db'
REGION 'us-east-1'
IAM_ROLE 'iam_role'
CREATE EXTERNAL DATABASE IF NOT EXISTS
;
CREATE EXTERNAL TABLE example_schema.example_table (
col1 SMALLINT,
col2 CHAR(1)
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
STORED AS TEXTFILE
LOCATION 's3://example_bucket'
TABLE PROPERTIES ('skip.header.line.count'='1')
;
dbt config files
models/example/schema.yml (modeled after this example):
version: 2
sources:
  - name: example_source
    database: dev
    schema: example_schema
    loader: S3
    tables:
      - name: example_table
        external:
          location: 's3://example_bucket'
          row_format: >
            serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
            with serdeproperties (
              'strip.outer.array'='false'
            )
        columns:
          - name: col1
            data_type: smallint
          - name: col2
            data_type: char(1)
dbt_project.yml:
name: 'example_project'
version: '1.0.0'
config-version: 2
profile: 'example_profile'
model-paths: ["models"]
analysis-paths: ["analyses"]
test-paths: ["tests"]
seed-paths: ["seeds"]
macro-paths: ["macros"]
snapshot-paths: ["snapshots"]
target-path: "target"
clean-targets:
  - "target"
  - "dbt_packages"
models:
  example_project:
    example:
      +materialized: view
packages.yml:
packages:
  - package: dbt-labs/dbt_external_tables
    version: 0.8.0

Spring Boot 2 - H2 Database - @SpringBootTest - Failing on org.h2.jdbc.JdbcSQLException: Table already exists

I'm unable to test Spring Boot & H2 with a schema.sql script that creates a table. What's happening is that I have the following properties set:
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.initialization-mode=always
spring.datasource.username=sa
spring.datasource.password=
spring.datasource.platform=h2
spring.datasource.url=jdbc:h2:mem:city;MODE=PostgreSQL;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
spring.jpa.generate-ddl=false
spring.jpa.hibernate.ddl-auto=update
spring.jpa.show-sql=true
and I expect the tables to be created using the schema.sql. The application works fine when I run gradle bootRun. However, when I run tests using gradle test, my tests for the Repository pass, but the one for my Service fails, stating that it's trying to create the table when the table already exists:
Exception raised:
Caused by: org.h2.jdbc.JdbcSQLException: Table "CITY" already exists;
SQL statement:
CREATE TABLE city ( id BIGINT NOT NULL, country VARCHAR(255) NOT NULL, map VARCHAR(255) NOT NULL, name VARCHAR(255) NOT NULL, state VARCHAR(2555) NOT NULL, PRIMARY KEY (id) ) [42101-196]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:179)
at org.h2.message.DbException.get(DbException.java:155)
at org.h2.command.ddl.CreateTable.update(CreateTable.java:117)
at org.h2.command.CommandContainer.update(CommandContainer.java:101)
at org.h2.command.Command.executeUpdate(Command.java:260)
at org.h2.jdbc.JdbcStatement.executeInternal(JdbcStatement.java:192)
at org.h2.jdbc.JdbcStatement.execute(JdbcStatement.java:164)
at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:95)
at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java)
at org.springframework.jdbc.datasource.init.ScriptUtils.executeSqlScript(ScriptUtils.java:471)
... 105 more
The code is set up and ready to recreate the scenario; the README has all the information:
https://github.com/tekpartner/learn-spring-boot-data-jpa-h2
If the tests are run individually, they pass. I think the problem is due to schema.sql being executed twice against the same database. It fails the second time as the tables already exist.
As a workaround, you could set spring.datasource.continue-on-error=true in application.properties.
Another option is to add the @AutoConfigureTestDatabase annotation where appropriate so that a unique embedded database is used for each test.
There are 2 other possible solutions you could try:
Add a DROP TABLE IF EXISTS [tablename] statement to your schema.sql before you create the table.
Change the statement from CREATE TABLE to CREATE TABLE IF NOT EXISTS, as shown below.
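For example, a minimal schema.sql applying option 2, with the city columns copied from the exception message above:
CREATE TABLE IF NOT EXISTS city (
    id BIGINT NOT NULL,
    country VARCHAR(255) NOT NULL,
    map VARCHAR(255) NOT NULL,
    name VARCHAR(255) NOT NULL,
    state VARCHAR(2555) NOT NULL,
    PRIMARY KEY (id)
);
Option 1 would instead keep the plain CREATE TABLE and add DROP TABLE IF EXISTS city; on the line before it.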

Postgres schema not found when created with upper case

I am trying to create an app using OpenJPA & Postgres 9.2.xx, and am currently facing an issue at the DB level.
1) Created a schema, say PCM:
CREATE SCHEMA PCM
2) Tried creating a table:
CREATE TABLE PCM.USER_PROFILE (
USER_PROFILE_ID BIGINT NOT NULL,
USER_FNAME VARCHAR(60),
USER_LNAME VARCHAR(60)
};
Got error "pcm" schema does not exists
Then tried creating table :-
CREATE TABLE "PCM.USER_PROFILE" (
USER_PROFILE_ID BIGINT NOT NULL,
USER_FNAME VARCHAR(60),
USER_LNAME VARCHAR(60)
};
The table is created successfully.
If I list the schemas:
[postgres#DBMigration ~] $ psql -c "\dn"
List of schemas
Name | Owner
--------+----------
pcm | dbadmin
public | postgres
B) In persistence.xml, I have entered the configuration:
<property name="openjpa.jdbc.Schema" value="PCM" />
Now I am getting an issue in OpenJPA stating the schema is not present.
I tried the approach referenced here, but had no success.
I have tried entering schema name in configuration as '\"PCM\"', "\"PCM\"", '\"pcm\"', "\"pcm\"".
Not sure where I am going wrong.
I need suggestions/help:
1) What is the proper, standard way to create a schema in Postgres & refer to it while creating a table?
2) Is my entry in persistence.xml correct? If so, why is it not identifying the schema?
Unquoted object names in Postgres are implicitly folded to lower case.
When you create a table the way you did below, with quotation marks around "PCM.USER_PROFILE", the table is created in the default public schema with the literal name "PCM.USER_PROFILE".
CREATE TABLE "PCM.USER_PROFILE" (
USER_PROFILE_ID BIGINT NOT NULL,
USER_FNAME VARCHAR(60),
USER_LNAME VARCHAR(60)
);
However, the create statement from your post is completely valid (with the exception of changing } to ) at the end of the command):
CREATE TABLE PCM.USER_PROFILE (
USER_PROFILE_ID BIGINT NOT NULL,
USER_FNAME VARCHAR(60),
USER_LNAME VARCHAR(60)
);
It creates the user_profile table under the pcm schema successfully.
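To see the folding in action, here is a quick sketch (the names are only for illustration):
CREATE SCHEMA PCM;                           -- unquoted: actually creates schema pcm
CREATE TABLE PCM.USER_PROFILE (ID BIGINT);   -- resolves to pcm.user_profile
CREATE SCHEMA "PCM";                         -- quoted: a second, distinct schema literally named PCM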
The mistake I made was creating the schema outside the target database, as the root user. When we ran select * from information_schema.schemata; as both users (root & DB user), the schema was not listed.
Hence, create the schema under a DB by running the query:
psql -U [dbUser] -d [database] -c "CREATE SCHEMA pcm;"
or
psql -h localhost -U [dbUser] -d [database]
[database]#=> CREATE SCHEMA pcm;
Then run this query to test whether the schema is loaded successfully under the database & DB-owner user:
[database]#=> select * from information_schema.schemata;

HSQL upgrade to 1.8: ... DEFAULT 'NaN' changed to ... DEFAULT 0E0/0E0 in database.script causes HsqlException: unexpected token / in HSQL 2.3.2

I have an application using HSQL 1.7.2 that contains table definitions with default values of NaN for some of the DOUBLE columns. After issuing, with 1.7.2,
SET SCRIPTFORMAT TEXT
SHUTDOWN SCRIPT
the following appears in the database.script file:
CREATE TABLE ... DOUBLE DEFAULT 'NaN' NOT NULL ...
I'm attempting to upgrade to HSQL 2.3.2, but I first have to upgrade to 1.8. I'm finding that after the upgrade to 1.8, the database.script file has:
CREATE TABLE ... DOUBLE DEFAULT 0E0/0E0 NOT NULL ...
When I open this database with HSQL 2.3.2, I get "SQLException: error in script file line ..." on the line at which "/" first appears. The traceback includes "Caused by: org.hsqldb.HsqlException: unexpected token: /".
I've messed around a lot with double_nan=false, without success.
Does anybody have any suggestions for me?
Support for NaN as a default value was dropped in version 2.x. It will be restored in the next update.
For the time being, change the default to NULL and add a TRIGGER on the table to convert NULL entries to NaN, as sketched below.
The property hsqldb.double_nan=false must be specified, or the statement SET DATABASE SQL DOUBLE NAN FALSE executed, for the database to accept NaN data values.
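A minimal sketch of that workaround for a hypothetical table t with a DOUBLE column d (untested; it assumes the NaN setting above is in effect, so that 0E0/0E0 evaluates to NaN instead of raising an error):
SET DATABASE SQL DOUBLE NAN FALSE;  -- allow NaN results for DOUBLE expressions

CREATE TRIGGER t_nan_default BEFORE INSERT ON t
REFERENCING NEW ROW AS newrow
FOR EACH ROW WHEN (newrow.d IS NULL)
SET newrow.d = 0E0/0E0;             -- stores NaN in place of NULL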