Syntax error at or near "order" (Scala with Quill, Doobie and PostgreSQL)

I am using Quill with Doobie and PostgreSQL (the org.tpolecat doobie-quill artifact, version 0.13.1).
This code:
case class SomeRecord(id: Int, order: Int, name: String)

val record = SomeRecord(0, 0, "test")

run(
  quote(
    querySchema[SomeRecord]("some_table")
  ).insert(lift(record))
)
fails at runtime with this error:
org.postgresql.util.PSQLException: ERROR: syntax error at or near "order"
Position: 46
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2553)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2285)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:323)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:481)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:401)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:164)
at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:130)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
at doobie.free.KleisliInterpreter$PreparedStatementInterpreter.$anonfun$executeUpdate$5(kleisliinterpreter.scala:955)
at doobie.free.KleisliInterpreter$PreparedStatementInterpreter.$anonfun$executeUpdate$5$adapted(kleisliinterpreter.scala:955)
at doobie.free.KleisliInterpreter.$anonfun$primitive$2(kleisliinterpreter.scala:109)

It seems that Quill does not escape keyword-like column names, so any column named "order" (or another SQL keyword) will always make its queries fail. See Escaping keyword-like column names in Postgres. The workaround is to rename the column in the table (and in the corresponding case class).
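A possible alternative, assuming you construct the doobie-quill context yourself, is Quill's Escape naming strategy, which double-quotes every identifier it emits. A sketch (untested against this exact setup, and reusing SomeRecord and record from the question; note that quoting also makes identifiers case-sensitive):

import io.getquill.Escape
import doobie.quill.DoobieContext

// Escape wraps every table and column name in double quotes, so the generated
// statement becomes: INSERT INTO "some_table" ("id","order","name") VALUES (?,?,?)
val dc = new DoobieContext.Postgres(Escape)
import dc._

run(
  quote(
    querySchema[SomeRecord]("some_table")
  ).insert(lift(record))
)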

Related

Database migration: MS SQL to PostgreSQL

I am getting a syntax error when converting from SQL Server to PostgreSQL. Any thoughts?
IF (var_port_with_bmrk_total_mv != 0 AND var_bmrk_info IS NOT NULL) THEN
BEGIN
insert into t$tmp_diff
select #asof_dt asof_dt,
       #choiceID choiceID,
       p.input_array_type,
       p.group_order,
       CONVERT(DECIMAL(32,10), p.port_value/#var_port_total_mv) port_value,
       convert(decimal(32,10), isnull(bmrk_value/#port_with_bmrk_total_mv, 0)) bmrk_value
from t$tmp_port_sum p, t$tmp_bmrk_sum b
where p.input_array_type=b.input_array_type and p.group_order = b.group_order
END;
ELSE
The original, before conversion:
insert into #tmp_other_diff
select #asof_dt asof_dt,
       #choiceID choiceID,
       b.input_array_type,
       b.grouping,
       convert(decimal(32,10), 0) port_value,
       (bmrk_value/#port_with_bmrk_total_mv) bmrk_value
from #tmp_bmrk_other_sum b
where b.key_value not in ( select p.key_value from #tmp_port_other_sum p)
Error message:
Error occurred during SQL query execution
Reason:
SQL Error [42601]: ERROR: syntax error at or near ","
Position: 9030
The relevant comma is the one in:
CONVERT(DECIMAL(32,10),p.port_value
There is no convert() function in Postgres. Use the SQL-standard cast(... as ...) or the Postgres-specific ::type shorthand. In this case:
...., cast(0 as decimal(32,10)) port_value, ....
or
...., 0::decimal(32,10) port_value, ...
Note: there is no comma after the expression. In the original, port_value is the column alias, and you need to keep it that way.
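Applying the same rewrite to the converted block above gives something like the following sketch. Two assumptions here, beyond what the answer states: isnull() is replaced with the standard coalesce() (Postgres has no isnull()), and the #-prefixed SQL Server variables, which Postgres will also reject, are assumed to have been declared as PL/pgSQL variables (var_asof_dt, var_choiceID, var_port_total_mv, var_port_with_bmrk_total_mv):

insert into t$tmp_diff
select var_asof_dt asof_dt,
       var_choiceID choiceID,
       p.input_array_type,
       p.group_order,
       cast(p.port_value / var_port_total_mv as decimal(32,10)) port_value,
       cast(coalesce(b.bmrk_value / var_port_with_bmrk_total_mv, 0) as decimal(32,10)) bmrk_value
from t$tmp_port_sum p
join t$tmp_bmrk_sum b
  on p.input_array_type = b.input_array_type
 and p.group_order = b.group_order;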

Spark doesn't recognize a column name in a SQL query but can output it to a dataset

I'm running a SQL query like this:
s"SELECT * FROM my_table_joined WHERE (timestamp > '2022-01-23' and writetime is not null and acceptTimestamp is not null)"
and I'm getting an error like this:
warning: there was one deprecation warning (since 2.0.0); for details, enable `:setting -deprecation' or `:replay -deprecation'
org.postgresql.util.PSQLException: ERROR: column "accepttimestamp" does not exist
Hint: Perhaps you meant to reference the column "mf_joined.acceptTimestamp".
Position: 103
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2497)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2233)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:310)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:446)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:370)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:149)
at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:108)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:61)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:226)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:344)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:297)
at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:286)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:286)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:221)
at $$$e76229fa87b6865de321c5274e52c2f9$$$$w$getDFFromJdbcSource(<console>:1133)
... 326 elided
If I omit acceptTimestamp, like this:
s"SELECT * FROM my_table_joined WHERE (timestamp > '2022-01-23' and writetime is not null)"
I'm getting the data as below (the df.show() output is reduced here to its header row; the columns are: timestamp, flags, type, lon, lat, alt, speed, course, satellites, digital_twin_id, unit_id, unit_ts, name, unit_type, measure_units, access_level, uid, placement, stale, start, writetime, acceptTimestamp, delayWindowEnd, DiffInSeconds, time, hour, max, min, mean, count, max2, min2, mean2, rnb).
Please note acceptTimestamp is here!
So how should I handle this column in my query to make it work?
From the exception, this is a Postgres issue, not a Spark one. Postgres folds unquoted identifiers to lowercase, which is why the error message shows the column name as accepttimestamp, whereas the actual column was created with an uppercase T: acceptTimestamp.
To make the column name case-sensitive for Postgres, you need to use double-quotes. Try this:
val query = s"""SELECT * FROM my_table_joined
WHERE timestamp > '2022-01-23'
and writetime is not null
and "acceptTimestamp" is not null"""

Liquibase generates wrong uppercase characters when generating SQL

I'm working with JHipster, which uses Liquibase to manage tables. But when Liquibase generates the SQL it mangles the characters: it turns "int" into "İNT" instead of "INT", and other "i" characters into "İ" (the Turkish uppercase dotted I), so PostgreSQL rejects them. How do I make Liquibase use the English locale instead of the Turkish locale for uppercase conversion?
Caused by: liquibase.exception.DatabaseException: ERROR: type "�nt" does not exist
Position: 47 [Failed SQL: CREATE TABLE public.databasechangeloglock (ID �NT NOT NULL, LOCKED BOOLEAN NOT NULL, LOCKGRANTED TIMESTAMP WITHOUT TIME ZONE, LOCKEDBY VARCHAR(255), CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID))]
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:316)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:55)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:122)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:112)
at liquibase.lockservice.StandardLockService.init(StandardLockService.java:87)
at liquibase.lockservice.StandardLockService.acquireLock(StandardLockService.java:189)
... 114 more
Caused by: org.postgresql.util.PSQLException: ERROR: type "�nt" does not exist
Position: 47
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2198)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1927)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:561)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:405)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:397)
at com.zaxxer.hikari.proxy.StatementProxy.execute(StatementProxy.java:83)
at com.zaxxer.hikari.proxy.StatementJavassistProxy.execute(StatementJavassistProxy.java)
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:314)
... 119 more
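This is the classic Turkish-locale pitfall: String.toUpperCase() with no explicit Locale uses the JVM default locale, which under tr_TR maps i to İ. A common workaround, assuming you control how the JVM is launched (the jar name below is just a placeholder), is to force an English default locale:

java -Duser.language=en -Duser.country=US -jar your-app.jar

The same effect can be had programmatically, before Liquibase runs, with java.util.Locale.setDefault(java.util.Locale.ENGLISH).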

ERROR: invalid input syntax for type timestamp: "end_time"

New to PostgreSQL and even newer to jsonb. I am trying to filter an array of objects:
[{"event_slug":"test_1","start_time":"2014-10-08","end_time":"2014-10-12"},
{"event_slug":"test_2","start_time":"2013-06-24","end_time":"2013-07-02"},
{"event_slug":"test_3","start_time":"2014-03-26","end_time":"2014-03-30"}]
My Query:
SELECT l.*
FROM locations l
, jsonb_array_elements(l.events) e
WHERE l.events #> '{"event_slug":"test_1"}'
AND e->>'end_time'::timestamp >= '2014-10-30 14:04:06 -0400'::timestamptz;
I get the error:
ERROR: invalid input syntax for type timestamp: "end_time"
LINE 5: AND e->>'end_time'::timestamp >= '2014-10-30 14:04:06 -04...
^
This is an operator precedence issue: :: binds more tightly than ->> does, so the cast applies to the string literal 'end_time' itself, which is exactly what the error message complains about. You need parentheses.
e->>'end_time'::timestamp
becomes
(e->>'end_time')::timestamp
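A minimal standalone demonstration of the fix (self-contained, using only a jsonb literal rather than the question's schema):

SELECT ('{"end_time":"2014-10-12"}'::jsonb ->> 'end_time')::timestamp;
-- returns 2014-10-12 00:00:00; without the parentheses it fails with
-- ERROR: invalid input syntax for type timestamp: "end_time"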

DB2 cast problem

[SQL] 2010/12/07 20:18:32:184 : 0.0010
update REG_COMP_DEF
set OrderNo = Cast(Cast(SUBSTR(orderno,1,10) as numeric(10,0)) + 10 as varchar(10))
              || NVL(SUBSTR(orderno,10+1,length(orderno)-10), '')
where length(OrderNo) > 10
  and OrderNo >= '3000600050'
  and OrderNo like '300060%'
  and OrderNo not like '999999%'
com.ibm.db2.jcc.c.SqlException: DB2 SQL error: SQLCODE: -461, SQLSTATE: 42846, SQLERRMC: SYSIBM.DECIMAL;SYSIBM.VARCHAR
The inner cast is OK.
I can run it on my DB2 for i system (without the NVL(), which isn't supported in my version).
Can you check whether the outer cast runs when the inner one casts to DECIMAL() instead of NUMERIC()?
For reference: SQLSTATE 42846 = "Cast from source type to target type is not supported."
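A sketch of the change the answer asks about (untested; it only swaps NUMERIC for DECIMAL in the inner cast, everything else is verbatim from the log above):

update REG_COMP_DEF
set OrderNo = Cast(Cast(SUBSTR(orderno,1,10) as decimal(10,0)) + 10 as varchar(10))
              || NVL(SUBSTR(orderno,10+1,length(orderno)-10), '')
where length(OrderNo) > 10
  and OrderNo >= '3000600050'
  and OrderNo like '300060%'
  and OrderNo not like '999999%'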