Liquibase generates wrong uppercase characters when generating the SQL - PostgreSQL

I'm working with JHipster, which uses Liquibase to manage tables. But when it generates the SQL query it messes up the characters: it turns "int" into "İNT" instead of "INT", and other "i" characters into "İ" (the Turkish dotted uppercase I), so PostgreSQL doesn't accept them. How do I make Liquibase use the English locale instead of the Turkish locale for the uppercase conversion?
Caused by: liquibase.exception.DatabaseException: ERROR: type "İnt" does not exist
Position: 47 [Failed SQL: CREATE TABLE public.databasechangeloglock (ID İNT NOT NULL, LOCKED BOOLEAN NOT NULL, LOCKGRANTED TIMESTAMP WITHOUT TIME ZONE, LOCKEDBY VARCHAR(255), CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID))]
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:316)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:55)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:122)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:112)
at liquibase.lockservice.StandardLockService.init(StandardLockService.java:87)
at liquibase.lockservice.StandardLockService.acquireLock(StandardLockService.java:189)
... 114 more
Caused by: org.postgresql.util.PSQLException: ERROR: type "İnt" does not exist
Position: 47
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2198)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1927)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:561)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:405)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:397)
at com.zaxxer.hikari.proxy.StatementProxy.execute(StatementProxy.java:83)
at com.zaxxer.hikari.proxy.StatementJavassistProxy.execute(StatementJavassistProxy.java)
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:314)
... 119 more
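The uppercasing comes from the JVM's default locale rather than from PostgreSQL. Below is a minimal Java sketch of the behaviour and of a common locale-forcing workaround; the class name is made up, and the -Duser.language/-Duser.country flags would be passed to whichever JVM actually runs Liquibase.

import java.util.Locale;

public class TurkishUppercaseDemo {
    public static void main(String[] args) {
        // Simulate running under a Turkish default locale, as on the affected machine
        Locale.setDefault(new Locale("tr", "TR"));
        System.out.println("int".toUpperCase());            // İNT - dotted capital I (U+0130)
        System.out.println("int".toUpperCase(Locale.ROOT)); // INT - locale-independent conversion
        // Workaround when you can't change the code: start the JVM with an English
        // default locale, e.g. -Duser.language=en -Duser.country=US
    }
}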

Related

Syntax error at or near "order" (Scala with Quill, Doobie and PostgreSQL)

I am using Quill with Doobie and PostgreSQL (org.tpolecat.doobie-quill artifact with version 0.13.1).
This code
case class SomeRecord(id: Int, order: Int, name: String)
val record = SomeRecord(0, 0, "test")
run(
  quote(
    querySchema[SomeRecord]("some_table")
  ).insert(lift(record))
)
Will end up in error message in runtime:
org.postgresql.util.PSQLException: ERROR: syntax error at or near "order"
Position: 46
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2553)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2285)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:323)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:481)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:401)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:164)
at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:130)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
at doobie.free.KleisliInterpreter$PreparedStatementInterpreter.$anonfun$executeUpdate$5(kleisliinterpreter.scala:955)
at doobie.free.KleisliInterpreter$PreparedStatementInterpreter.$anonfun$executeUpdate$5$adapted(kleisliinterpreter.scala:955)
at doobie.free.KleisliInterpreter.$anonfun$primitive$2(kleisliinterpreter.scala:109)
It seems that Quill does not escape keyword-like column names, so "order" (and other keyword) columns in its queries will always fail. See Escaping keyword-like column names in Postgres. The workaround is to rename the column in the table (and the corresponding case class).
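For illustration, here is a plain-JDBC sketch (outside Quill; the table, connection details and class name are assumptions) of the escaping described in the linked question, where double-quoting the identifier is what makes Postgres accept it:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class QuotedKeywordInsert {
    public static void main(String[] args) throws Exception {
        // placeholder connection details
        try (Connection c = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/test", "user", "secret");
             // "order" must be double-quoted because ORDER is a reserved keyword
             PreparedStatement ps = c.prepareStatement(
                 "INSERT INTO some_table (id, \"order\", name) VALUES (?, ?, ?)")) {
            ps.setInt(1, 0);
            ps.setInt(2, 0);
            ps.setString(3, "test");
            ps.executeUpdate();
        }
    }
}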

Error report - SQL Error: ORA-01843: not a valid month 01843

I am using SQL Developer for the first time. I can't understand why this error is occurring.
CREATE TABLE TOY_STORE
( TOY_STORE_ID NUMBER(3,0),
TOY_STORE_NAME VARCHAR2(30 BYTE) NOT NULL ENABLE,
CITY VARCHAR2(30 BYTE) DEFAULT 'DELHI',
PHONENUMBER" NUMBER(10,0) NOT NULL ENABLE,
STORE_OPENING_TIME TIMESTAMP (6),
STORE_CLOSING_TIME TIMESTAMP (6),
CHECK (EXTRACT(HOUR FROM CAST (TO_CHAR (STORE_OPENING_TIME, 'YYYY-MON-DD HH24:MI:SS') AS TIMESTAMP)) > 8 || NULL),
CHECK (EXTRACT(HOUR FROM CAST (TO_CHAR (STORE_CLOSING_TIME, 'YYYY-MON-DD HH24:MI:SS') AS TIMESTAMP)) < 21 || NULL);
INSERT INTO TOY_STORE
VALUES(1, 'Kid''s Cave', 'Delhi', 9912312312, '2014-04-01 09:10:12', '2014-04-01 21:42:05');
Following was the error given:
Error report - SQL Error: ORA-01843: not a valid month 01843. 00000 - "not a valid month" *Cause: *Action: Error starting at line : 1 in command - INSERT INTO TOY_STORE VALUES(1, 'Kid''s Cave', 'Delhi', 9912312312, '04-2014-04 09:10:12', '04-2014-04 21:42:05') Error report - SQL Error: ORA-01843: not a valid month 01843. 00000 - "not a valid month"
Your create table as shown in the question has a stray double-quote and is missing a closing parenthesis. Your check constraints are odd:
you are converting a timestamp to a string using a specific format, and then casting back to a timestamp using the session NLS_TIMESTAMP_FORMAT, which will fail for sessions which don't have the setting you expect;
you are concatenating a null onto the hour value you're checking, e.g. 21 || NULL, which converts it to a string. Presumably you want to allow the value to be null and are using || as if it were OR, which isn't correct; but you don't need to explicitly allow for nulls anyway.
CREATE TABLE TOY_STORE (
TOY_STORE_ID NUMBER(3,0),
TOY_STORE_NAME VARCHAR2(30 BYTE) NOT NULL ENABLE,
CITY VARCHAR2(30 BYTE) DEFAULT 'DELHI',
PHONENUMBER NUMBER(10,0) NOT NULL ENABLE,
STORE_OPENING_TIME TIMESTAMP (6),
STORE_CLOSING_TIME TIMESTAMP (6),
CHECK (EXTRACT(HOUR FROM STORE_OPENING_TIME) > 8),
CHECK (EXTRACT(HOUR FROM STORE_CLOSING_TIME) < 21)
);
You might want to consider naming your constraints; and you might want a lower bound on the closing time and an upper bound on the opening time, or at least make sure opening isn't after closing (unless you're allowing for night opening; in which case you probably wouldn't want those constraints at all).
Then you are inserting using a string literal, not a timestamp. You are again relying on implicit conversion using your client's NLS_TIMESTAMP_FORMAT setting. The SQL Developer default for that is DD-MON-RR HH24.MI.SSXFF, which means that with '2014-04-01 09:10:12' it will try to map the parts of the string literal to the parts of the format mask and fail; that default mask does give ORA-01843.
You should either convert your string with an explicit format mask, using to_timestamp():
INSERT INTO TOY_STORE (TOY_STORE_ID, TOY_STORE_NAME, CITY, PHONENUMBER,
STORE_OPENING_TIME, STORE_CLOSING_TIME)
VALUES(1, 'Kid''s Cave', 'Delhi', 9912312312,
to_timestamp('2014-04-01 09:10:12', 'YYYY-MM-DD HH24:MI:SS'),
to_timestamp('2014-04-01 21:42:05', 'YYYY-MM-DD HH24:MI:SS'));
or use ANSI timestamp literals:
INSERT INTO TOY_STORE (TOY_STORE_ID, TOY_STORE_NAME, CITY, PHONENUMBER,
STORE_OPENING_TIME, STORE_CLOSING_TIME)
VALUES(1, 'Kid''s Cave', 'Delhi', 9912312312,
timestamp '2014-04-01 09:10:12', timestamp '2014-04-01 21:42:05');
... which violates your constraint as expected:
SQL Error: ORA-02290: check constraint (SCHEMA.SYS_C00113492) violated
02290. 00000 - "check constraint (%s.%s) violated"
*Cause: The values being inserted do not satisfy the named check
*Action: do not insert values that violate the constraint.
Notice the SYS_C00113492 name; that's a system-generated name for your constraint. It will be easier to follow what is happening if you name your constraints.
You are still allowed to insert nulls:
INSERT INTO TOY_STORE (TOY_STORE_ID, TOY_STORE_NAME, CITY, PHONENUMBER,
STORE_OPENING_TIME, STORE_CLOSING_TIME)
VALUES(1, 'Kid''s Cave', 'Delhi', 9912312312, null, null);
1 row inserted.

PostGIS-enabled Postgres DB JDBC Query Stuck in SQLException.setNextException

I am currently running a PostGIS-enabled Postgres database with the following version string:
PostgreSQL 9.4.1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16), 64-bit
The JDBC driver I am using to connect is
9.4-1201-jdbc41
I am running the following query:
SELECT * FROM foo;
The schema for 'foo' is as follows:
CREATE TABLE foo
(
gid integer NOT NULL DEFAULT nextval('address_gid_seq'::regclass),
objectid numeric(10,0),
house_num integer,
half_add character varying(4),
pre_dir character varying(2),
st_name character varying(50),
st_type character varying(4),
suf_dir character varying(2),
unit_type character varying(4),
unit_id character varying(6),
city character varying(15),
state character varying(2),
zipcode numeric(10,0),
angle numeric,
parcel_num character varying(11),
idnum numeric(10,0),
status character varying(1),
status_dat date,
esnnum character varying(5),
geom geometry(Point,3857),
CONSTRAINT address_pkey PRIMARY KEY (gid)
)
I did not create this table, so I am not sure what may have gone wrong, but the row count (done as a shortcut using pgAdmin3) is ~250,000, so there is demonstrably data in there. Fetching some of the data via a LIMIT works, although it is incredibly slow.
I can pause my query in a debugger, which pauses in the following stack:
PSQLWarning(SQLException).setNextException(SQLException) line: 294
PSQLWarning(SQLWarning).setNextWarning(SQLWarning) line: 213
Jdbc4ResultSet(AbstractJdbc2ResultSet).addWarning(SQLWarning) line: 2669
AbstractJdbc2ResultSet$CursorResultHandler.handleWarning(SQLWarning) line: 1841
QueryExecutorImpl$3.handleWarning(SQLWarning) line: 2179
QueryExecutorImpl.processResults(ResultHandler, int) line: 2023
QueryExecutorImpl.fetch(ResultCursor, ResultHandler, int) line: 2201
Jdbc4ResultSet(AbstractJdbc2ResultSet).next() line: 1924
I don't really have a ton of time to learn everything about how Postgres' JDBC driver is implemented, so I thought I'd ask whether anyone else has experienced this and whether there is something wrong with the data in the table. If I had access to the source data, I might be able to fix it on that end; but it seems strange that a query against an existing Postgres table would result in what seems to be an infinite loop.
I should add that ResultSet.next() never returns when stepping in the debugger; the code just stays in the setNextException() method indefinitely.
EDIT:
I am getting tons of this in the "messages" in pgAdmin:
NOTICE: [g_serialized.c:gserialized_get_type:50] entered
NOTICE: [lwgeom.c:lwgeom_set_srid:1455] entered with srid=3857
NOTICE: [lwgeom.c:lwgeom_is_empty:1233] lwgeom_is_empty: got type Point
NOTICE: [lwout_wkb.c:lwgeom_to_wkb:710] WKB output size: 25
NOTICE: [lwout_wkb.c:lwgeom_to_wkb:723] Hex WKB output size: 51
NOTICE: [lwgeom.c:lwgeom_is_empty:1233] lwgeom_is_empty: got type Point
NOTICE: [lwout_wkb.c:lwpoint_to_wkb_buf:393] Entering function, buf = 0x2acec3c3e770
NOTICE: [lwout_wkb.c:lwpoint_to_wkb_buf:395] Endian set, buf = 0x2acec3c3e772
NOTICE: [lwout_wkb.c:integer_to_wkb_buf:189] Writing value '536870913'
NOTICE: [lwout_wkb.c:lwpoint_to_wkb_buf:398] Type set, buf = 0x2acec3c3e77a
NOTICE: [lwout_wkb.c:integer_to_wkb_buf:189] Writing value '3857'
NOTICE: [lwout_wkb.c:lwpoint_to_wkb_buf:403] SRID set, buf = 0x2acec3c3e782
NOTICE: [lwout_wkb.c:ptarray_to_wkb_buf:360] Writing point #0
NOTICE: [lwout_wkb.c:ptarray_to_wkb_buf:364] Writing dimension #0 (buf = 0x2acec3c3e782)
NOTICE: [lwout_wkb.c:ptarray_to_wkb_buf:364] Writing dimension #1 (buf = 0x2acec3c3e792)
NOTICE: [lwout_wkb.c:ptarray_to_wkb_buf:369] Done (buf = 0x2acec3c3e7a2)
NOTICE: [lwout_wkb.c:lwpoint_to_wkb_buf:407] Pointarray set, buf = 0x2acec3c3e7a2
NOTICE: [lwout_wkb.c:lwgeom_to_wkb:759] buf (0x2acec3c3e7a3) - wkb_out (0x2acec3c3e770) = 51
NOTICE: [g_serialized.c:gserialized_get_type:50] entered
NOTICE: [g_serialized.c:lwgeom_from_gserialized:1137] Got type 1 (Point), srid=3857
NOTICE: [g_serialized.c:lwgeom_from_gserialized_buffer:1091] Got type 1 (Point), hasz=0 hasm=0 geodetic=0 hasbox=0
client_min_messages is showing no setting.
The solution to this problem is as mentioned in the comments:
Set client_min_messages to ERROR. This avoids shipping a dozen NOTICE messages per geometry record to the client over JDBC, which increased performance by at least an order of magnitude in my case.
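A minimal sketch of applying that from the JDBC side (connection handling and method name are assumptions); the setting can also be persisted server-side with ALTER ROLE ... SET client_min_messages = error:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class SuppressNotices {
    // Run once per connection, before issuing the big SELECT
    static void quietSession(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute("SET client_min_messages = error");
        }
    }
}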

How to import data into Teradata tables from a delimited file using BTEQ import?

I am trying to execute the following BTEQ command in a Linux environment, but I couldn't load the data properly into the Teradata DB server. Can someone please advise how to resolve the issue I am facing while loading?
BTEQ Command used :
.SET width 64000;
.SET session transaction btet;
.logmech ldap
.logon XXXXXXX/XXXXXXXX,********;
DATABASE corecm;
.PACK 1000
.IMPORT VARTEXT '~' FILE=/v/global/user/application_event_bus_evt
.REPEAT *
USING(APPLICATION_EVENT_ID CHAR(24),BUS_EVT_ID CHAR(24),BUS_EVT_VID BIGINT,BUS_EVT_RESTATE_IN SMALLINT)
insert into corecm.application_event_bus_evt (APPLICATION_EVENT_ID
, BUS_EVT_ID
, BUS_EVT_VID
, BUS_EVT_RESTATE_IN
)
values
( COALESCE(:APPLICATION_EVENT_ID,1)
, COALESCE(:BUS_EVT_ID,1)
, COALESCE(:BUS_EVT_VID,1)
, COALESCE(:BUS_EVT_RESTATE_IN,1)
) ;
.LOGOFF;
.EXIT;
SAMPLE INPUT FILE, DELIMITER "~" [ /v/global/user/application_event_bus_evt ]:
Ckn3gMxLEeOgIQBQVgErYA==~g+GDDtlaY3n7BdUrYshDFA==~1~1
CL1kEcxLEeOgIQBQVgErYA==~qoKoiuGDbClpcGt/z6RKGw==~1~1
oYIVcMxKEeOgIQBQVgErYA==~mfmQiwl7yAteevzJfilMvA==~1~1
5N7ME5bM4xGhM7exj3ykUw==~yFM2FZbM4xGhM7exj3ykUw==~1~0
JLBH4JfM4xGDH9s5+Ds/8w==~doZ/7pfM4xGDH9s5+Ds/8w==~1~0
fGvpoMxKEeOgIQBQVgErYA==~mQUQIK2mY6WIPcszfp5BTQ==~1~1
Table Definition :
CREATE MULTISET TABLE CORECM.APPLICATION_EVENT_BUS_EVT ,NO FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT,
DEFAULT MERGEBLOCKRATIO
(
APPLICATION_EVENT_ID CHAR(26) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
BUS_EVT_ID CHAR(26) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
BUS_EVT_VID BIGINT NOT NULL,
BUS_EVT_RESTATE_IN SMALLINT)
UNIQUE PRIMARY INDEX ( APPLICATION_EVENT_ID ,BUS_EVT_ID ,BUS_EVT_VID )
INDEX APPLICATION_EVENT_BUS_EVT_IDX1 ( APPLICATION_EVENT_ID )
INDEX APPLICATION_EVENT_BUS_EVT_IDX2 ( BUS_EVT_ID ,BUS_EVT_VID );
Result set in the DB server:
APPLICATION_EVENT_ID BUS_EVT_ID BUS_EVT_VID BUS_EVT_RESTATE_IN
1 Ckn3gMxLEeOgIQBQVgErYA == g+GDDtlaY3n7BdUrYshD 85,849,873,219,141,958 12,544
2 CL1kEcxLEeOgIQBQVgErYA == qoKoiuGDbClpcGt/z6RK 85,849,873,219,155,783 12,544
3 oYIVcMxKEeOgIQBQVgErYA == mfmQiwl7yAteevzJfilM 85,849,873,219,142,006 12,544
4 5N7ME5bM4xGhM7exj3ykUw == JAf0GpbM4xGhM7exj3yk 85,849,873,219,155,797 12,288
5 JLBH4JfM4xGDH9s5+Ds/8w == Du6T7pfM4xGDH9s5+Ds/ 85,849,873,219,155,768 12,288
6 fGvpoMxKEeOgIQBQVgErYA == mQUQIK2mY6WIPcszfp5B 85,849,873,219,146,068 12,544
Looking at the data, we can see two issues:
The first two columns' data length is 24 characters (as per the input file), but the data has been shifted so that two characters spill into the next column.
Columns BUS_EVT_VID and BUS_EVT_RESTATE_IN have wrong data, 85,849,873,219,141,958 and 12,544 instead of 1 and 1 respectively (this may be because the first two columns' data got shifted).
I tried the following options to resolve the above issue, but none of them worked:
Modified the table definition, i.e. changed the datatypes to CHAR(28), CHAR(24), CHAR(26).
Modified the table definition column datatypes to VARCHAR(24), VARCHAR(26).
Modified the BTEQ command, i.e. altered the datatypes in the line below:
USING(APPLICATION_EVENT_ID CHAR(24),BUS_EVT_ID CHAR(24),BUS_EVT_VID BIGINT,BUS_EVT_RESTATE_IN SMALLINT)
Thanks in advance.
When you define VARTEXT, all input columns must be defined as VARCHAR, but you used CHAR, BIGINT and SMALLINT.
This should work, with VARCHAR lengths based on the definition of your target table:
USING(
APPLICATION_EVENT_ID VARCHAR(26),
BUS_EVT_ID VARCHAR(26),
BUS_EVT_VID VARCHAR(19),
BUS_EVT_RESTATE_IN VARCHAR(6)
)

JPA / EclipseLink - create script source with one SQL statement taking multiple lines

I want to let the persistence provider (EclipseLink 2.5.0) automatically create the tables in an already existing database by using the persistence unit property "javax.persistence.schema-generation.create-script-source" and a valid SQL DDL script.
persistence.xml:
<property name="javax.persistence.schema-generation.create-script-source" value="data/ddl.sql"/>
ddl.sql:
USE myDatabase;
CREATE TABLE MyTable (
id INTEGER NOT NULL AUTO_INCREMENT,
myColumn VARCHAR(255) NOT NULL,
PRIMARY KEY (id)
) ENGINE = InnoDB DEFAULT CHARACTER SET = utf8 DEFAULT COLLATE = utf8_bin;
But I got the following error:
[EL Warning]: 2014-02-12 13:31:44.778--ServerSession(768298666)--Thread(Thread[main,5,main])--Local Exception Stack:
Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.0.v20130507-3faac2b): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 1
Error Code: 1064
Call: CREATE TABLE MyTable (
Query: DataModifyQuery(sql="CREATE TABLE MyTable (")
at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:331)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:895)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:957)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:630)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:558)
at org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:1995)
at org.eclipse.persistence.sessions.server.ServerSession.executeCall(ServerSession.java:570)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:242)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:228)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeNoSelectCall(DatasourceCallQueryMechanism.java:271)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeNoSelect(DatasourceCallQueryMechanism.java:251)
at org.eclipse.persistence.queries.DataModifyQuery.executeDatabaseQuery(DataModifyQuery.java:85)
at org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:899)
at org.eclipse.persistence.internal.sessions.AbstractSession.internalExecuteQuery(AbstractSession.java:3207)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1797)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1779)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1730)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeNonSelectingCall(AbstractSession.java:1499)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeNonSelectingSQL(AbstractSession.java:1517)
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.writeSourceScriptToDatabase(EntityManagerSetupImpl.java:4065)
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.writeDDL(EntityManagerSetupImpl.java:3910)
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.writeDDL(EntityManagerSetupImpl.java:3783)
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:724)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.getAbstractSession(EntityManagerFactoryDelegate.java:204)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.getDatabaseSession(EntityManagerFactoryDelegate.java:182)
at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.getDatabaseSession(EntityManagerFactoryImpl.java:527)
at org.eclipse.persistence.jpa.PersistenceProvider.createEntityManagerFactoryImpl(PersistenceProvider.java:140)
at org.eclipse.persistence.jpa.PersistenceProvider.createEntityManagerFactory(PersistenceProvider.java:177)
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:79)
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:54)
at nl.tent.competent.data.access.Main.main(Main.java:22)
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 1
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.Util.getInstance(Util.java:386)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1054)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4187)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4119)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2570)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2731)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2815)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2155)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2458)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2375)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2359)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:885)
... 29 more
It seems that the carriage return/line feed (newline) is the problem: if I change the SQL DDL script so that each SQL statement takes only one line, everything works fine.
adjusted ddl.sql:
USE myDatabase;
CREATE TABLE MyTable (id INTEGER NOT NULL AUTO_INCREMENT, myColumn VARCHAR(255) NOT NULL, PRIMARY KEY (id)) ENGINE = InnoDB DEFAULT CHARACTER SET = utf8 DEFAULT COLLATE = utf8_bin;
But I don't want to reformat my SQL DDL script and lose its readability. Please help!
If someone still faces this problem, it can be solved by adding the following parameter (found here):
<property name="hibernate.hbm2ddl.import_files_sql_extractor" value="org.hibernate.tool.hbm2ddl.MultipleLinesSqlCommandExtractor" />