Use UUID in Doobie SQL update - postgresql

I have the following simple (cut down for brevity) Postgres table:
create table users(
    id uuid NOT NULL,
    year_of_birth smallint NOT NULL
);
Within a test I have seeded data.
When I run the following SQL update to correct a year_of_birth, the error implies that I'm not providing the UUID correctly.
The Doobie SQL I run is:
val id: String = "6ee7a37c-6f58-4c14-a66c-c17083adff81"
val correctYear: Int = 1980
sql"update users set year_of_birth = $correctYear where id = $id".update.run
I have tried both with and without quotes around the given $id, e.g. the other version is:
sql"update users set year_of_birth = $correctYear where id = '$id'".update.run
The error upon running the above is:
org.postgresql.util.PSQLException: ERROR: operator does not exist: uuid = character varying
Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts.

Both comments provided viable solutions.
a_horse_with_no_name suggested the use of a cast, which works, though the SQL becomes not as nice as the other solution.
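For reference, the cast-based version keeps id as a String and casts in the SQL itself, something like this (a sketch; ::uuid is Postgres cast syntax):
sql"update users set year_of_birth = $correctYear where id = $id::uuid".update.run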
AminMal suggested the use of available Doobie implicits which can handle a UUID within SQL and thus avoid a cast.
So I changed my code to the following:
import java.util.UUID
import doobie.postgres.implicits._

val id: UUID = UUID.fromString("6ee7a37c-6f58-4c14-a66c-c17083adff81")
sql"update users set year_of_birth = $correctYear where id = $id".update.run
So I'd like to mark this question as resolved because of the comment provided by AminMal.

Related

How to get correct type and nullability information for enum fields using jOOQ's metadata API?

I'm trying to use jOOQ's metadata API, and most columns behave the way I'd expect, but enum columns seem to be missing type and nullability information somehow.
For example, if I have a schema defined as:
CREATE TYPE public.my_enum AS ENUM (
    'foo',
    'bar',
    'baz'
);
CREATE TABLE public.my_table (
    id bigint NOT NULL,
    created_at timestamp with time zone DEFAULT now() NOT NULL,
    name text,
    my_enum_column public.my_enum NOT NULL
);
The following test passes:
// this is Kotlin, but hopefully pretty easy to decipher
test("something fishy going on here") {
val jooq = DSL.using(myDataSource, SQLDialect.POSTGRES)
val myTable = jooq.meta().tables.find { it.name == "my_table" }!!
// This looks right...
val createdAt = myTable.field("created_at")!!
createdAt.dataType.nullability() shouldBe Nullability.NOT_NULL
createdAt.dataType.typeName shouldBe "timestamp with time zone"
// ...but none of this seems right
val myEnumField = myTable.field("my_enum_column")!!
myEnumField.dataType.typeName shouldBe "other"
myEnumField.dataType.nullability() shouldBe Nullability.DEFAULT
myEnumField.dataType.castTypeName shouldBe "other"
myEnumField.type shouldBe Any::class.java
}
It's telling me that enum columns have Nullability.DEFAULT regardless of whether they are null or not null. For other types, Field.dataType.nullability will vary depending on whether the column is null or not null, as expected.
For any enum column, the type is Object (Any in Kotlin), and the dataType.typeName is "other". For non-enum columns, dataType.typeName gives me the correct SQL for the type.
I'm also using the jOOQ code generator, and it generates the correct types for enum columns. That is, it creates an enum class and uses that as the type for the corresponding fields, which are marked as not-nullable. The generated code for this field looks something like (reformatted to avoid long lines):
public final TableField<MyTableRecord, MyEnum> MY_ENUM_COLUMN =
    createField(
        DSL.name("my_enum_column"),
        SQLDataType.VARCHAR
            .nullable(false)
            .asEnumDataType(com.example.schema.enums.MyEnum.class),
        this,
        ""
    );
So it appears that jOOQ's code generator has the type information, but how can I access the type information via the metadata API?
I'm using postgres:11-alpine and org.jooq:jooq:3.14.11.
Update 1
I tried testing this with org.jooq:jooq:3.16.10 and org.jooq:jooq:3.17.4. They seem to fix the nullability issue, but the datatype is still "other", and the type is still Object. So it appears the nullability issue was a bug in jOOQ. I'll file an issue about the type+datatype.
Update 2
This is looking like it may be a bug, so I've filed an issue.
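In the meantime, one possible workaround is to read the type information from the generated classes rather than from jooq.meta(), since the code generator clearly has it. A minimal sketch, assuming the generated code lives under com.example.schema as in the snippet above:
// Sketch: relies on jOOQ's generated classes (the package name is assumed)
import com.example.schema.tables.MyTable
import com.example.schema.enums.MyEnum

val myEnumField = MyTable.MY_TABLE.MY_ENUM_COLUMN
myEnumField.dataType.nullability() shouldBe Nullability.NOT_NULL
myEnumField.type shouldBe MyEnum::class.java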

Spring JPA globally_quoted_identifiers incorrectly quoting column type TEXT

I'm using the property globally_quoted_identifiers to deal with issues of reserved keywords being used for column names in an application I'm maintaining. However, I've just encountered a bug where a create table statement is generated like so...
create table `MyTable` (`id` bigint not null auto_increment, `body` `TEXT`...
The create statement fails in MySQL, because TEXT should not be quoted.
I'm not sure why this is happening. It doesn't do this for other column types (bigint, varchar, etc.).
Is there something else I need to do, to have JPA correctly handle the MySQL TEXT column type?
Update: This is the data class which demonstrates the issue...
@Entity
data class MyTable(
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    val id: Long? = null,
    @Column(columnDefinition = "TEXT")
    var body: String? = null
)
This will result in the create table SQL shown above when globally_quoted_identifiers is enabled.
Can you try with the below config?
hibernate.globally_quoted_identifiers = true
hibernate.globally_quoted_identifiers_skip_column_definitions = true
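If the application is a Spring Boot app, these would typically be passed through via the spring.jpa.properties prefix (a sketch; adjust to however your configuration supplies Hibernate properties):
# application.properties (hypothetical Spring Boot wiring for the two flags)
spring.jpa.properties.hibernate.globally_quoted_identifiers=true
spring.jpa.properties.hibernate.globally_quoted_identifiers_skip_column_definitions=true
The second flag keeps global identifier quoting on but tells Hibernate not to quote explicit columnDefinition values such as TEXT.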

mybatis - Passing multiple parameters on @One annotation

I am trying to access a table in my secondary DB whose name I obtain from my primary DB. My difficulty is passing the DB name as a parameter into my secondary query (BTW, I am using MyBatis annotation-based mappers).
This is my Mapper
@SelectProvider(type = DealerQueryBuilder.class, method = "retrieveDealerListQuery")
@Results({
    @Result(property = "dealerID", column = "frm_dealer_master_id"),
    @Result(property = "dealerTypeID", column = "frm_dealer_type_id", one = @One(select = "retrieveDealerTypeDAO")),
    @Result(property = "dealerName", column = "frm_dealer_name")
})
public List<Dealer> retrieveDealerListDAO(@Param("firmDBName") String firmDBName);

@Select("SELECT * from ${firmDBName}.frm_dealer_type where frm_dealer_type_id=#{frm_dealer_type_id}")
@Results({
    @Result(property = "dealerTypeID", column = "frm_dealer_type_id"),
    @Result(property = "dealerType", column = "frm_dealer_type")
})
public DealerType retrieveDealerTypeDAO(@Param("firmDBName") String firmDBName, @Param("frm_dealer_type_id") int frm_dealer_type_id);
The firmDBName I have is obtained from my "Primary DB".
If I omit ${firmDBName} in my second query, the query tries to access my primary database and throws table "PrimaryDB.frm_dealer_type" not found. So it is basically searching for a table named "frm_dealer_type" in my primary DB.
If I try to re-write the @Result like
@Result(property = "dealerTypeID", column = "firmDBName=firmDBName, frm_dealer_type_id=frm_dealer_type_id", one = @One(select = "retrieveDealerTypeDAO")),
it throws an error that column "firmDBName" does not exist.
Changing ${firmDBName} to #{firmDBName} also did not help.
I did refer to this blog - here
I want a solution to pass my parameter firmDBName from my primary query into the secondary query.
The limitation here is that your column must be returned by the first @Select.
If you look at the test case here, you will see that the parent_xxx values are returned by the first @Select.
Your DealerQueryBuilder must select firmDBName as a return value, and your column attribute must map the name of the returned column to that parameter.
Either way, your column definition is wrong; it should be:
{frm_dealer_type_id=frm_dealer_type_id, firmDBName=firmDBName} or whatever the columns were named in the result of your first select.
Again you can refer to the test case I have above as well as the documentation here http://www.mybatis.org/mybatis-3/sqlmap-xml.html#Nested_Select_for_Association
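Putting that together, the parent provider query would need to return firmDBName as a column of its result set, and the mapping would look something like this (a sketch; DealerQueryBuilder and the column names are taken from the question):
@SelectProvider(type = DealerQueryBuilder.class, method = "retrieveDealerListQuery")
@Results({
    @Result(property = "dealerID", column = "frm_dealer_master_id"),
    // Both values are taken from the parent row, so the provider query
    // must select firmDBName alongside the dealer columns.
    @Result(property = "dealerTypeID",
            column = "{firmDBName=firmDBName, frm_dealer_type_id=frm_dealer_type_id}",
            one = @One(select = "retrieveDealerTypeDAO")),
    @Result(property = "dealerName", column = "frm_dealer_name")
})
public List<Dealer> retrieveDealerListDAO(@Param("firmDBName") String firmDBName);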

Default value doesn't work in SQLAlchemy + PostgreSQL + aiopg + psycopg2

I've found an unexpected behavior in SQLAlchemy. I'm using the following versions:
SQLAlchemy (0.9.8)
PostgreSQL (9.3.5)
psycopg2 (2.5.4)
aiopg (0.5.1)
This is the table definition for the example:
import asyncio
from aiopg.sa import create_engine
from sqlalchemy import (
    MetaData,
    Column,
    Integer,
    Table,
    String,
)

metadata = MetaData()

users = Table('users', metadata,
    Column('id_user', Integer, primary_key=True, nullable=False),
    Column('name', String(20), unique=True),
    Column('age', Integer, nullable=False, default=0),
)
Now if I try to execute a simple insert into the table, populating just id_user and name, the age column should get its default automatically, right? Let's see...
@asyncio.coroutine
def go():
    engine = yield from create_engine('postgresql://USER@localhost/DB')
    data = {'id_user': 1, 'name': 'Jimmy'}
    stmt = users.insert(values=data, inline=False)
    with (yield from engine) as conn:
        result = yield from conn.execute(stmt)

loop = asyncio.get_event_loop()
loop.run_until_complete(go())
This is the resulting statement with the corresponding error:
INSERT INTO users (id_user, name, age) VALUES (1, 'Jimmy', null);
psycopg2.IntegrityError: null value in column "age" violates not-null constraint
I didn't provide the age column, so where is that age = null value coming from? I was expecting something like this:
INSERT INTO users (id_user, name) VALUES (1, 'Jimmy');
Or, if the default flag actually worked, something like:
INSERT INTO users (id_user, name, age) VALUES (1, 'Jimmy', 0);
Could you shed some light on this?
This issue has been confirmed as an aiopg bug. It seems that, at the moment, aiopg ignores the default argument on data manipulation.
I've fixed the issue using server_default instead:
users = Table('users', metadata,
    Column('id_user', Integer, primary_key=True, nullable=False),
    Column('name', String(20), unique=True),
    Column('age', Integer, nullable=False, server_default='0'))
I think you need to use inline=True in your insert. This turns off 'pre-execution'.
Docs are a bit cryptic on what exactly this 'pre-execution' entails, but they mention default parameters:
:param inline:
if True, SQL defaults present on :class:`.Column` objects via
the ``default`` keyword will be compiled 'inline' into the statement
and not pre-executed. This means that their values will not
be available in the dictionary returned from
:meth:`.ResultProxy.last_updated_params`.
This piece of docstring is from the Update class, but it shares this behavior with Insert.
Besides, that's the only way they test it:
https://github.com/zzzeek/sqlalchemy/blob/rel_0_9/test/sql/test_insert.py#L385
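Applied to the question's code, that is a one-argument change (a sketch; whether it fully avoids the pre-execution path under aiopg is worth verifying):
# Same insert as in the question, but with inline=True so the Column
# default for "age" is rendered into the statement instead of being
# pre-executed (which aiopg was turning into NULL).
stmt = users.insert(values=data, inline=True)
With inline=True, the compiled statement should render the default, i.e. something like INSERT INTO users (id_user, name, age) VALUES (1, 'Jimmy', 0).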

Updating an array of objects fields in crate

I created a table with the following syntax:
create table poll (
    poll_id string primary key,
    poll_type_id integer,
    poll_rating array(object as (rating_id integer, fk_user_id string, israted_image1 integer, israted_image2 integer, updatedDate timestamp, createdDate timestamp)),
    poll_question string,
    poll_image1 string,
    poll_image2 string
)
And I inserted a record without the "poll_rating" field, which is an array of objects.
Now when I try to update poll_rating with the following command:
update poll set poll_rating = [{"rating_id":1,"fk_user_id":-1,"israted_image1":1,"israted_image2":0,"createddate":1400067339.0496}] where poll_id = "f748771d7c2e4616b1865f37b7913707";
I'm getting an error message like this:
"SQLParseException[line 1:31: no viable alternative at input '[']; nested: ParsingException[line 1:31: no viable alternative at input '[']; nested: NoViableAltException;"
Can anyone tell me why I get this error when I try to update the array of objects field?
Defining arrays and objects directly in an SQL statement is currently not supported by our SQL parser; please use parameter substitution with placeholders instead, as described here:
https://crate.io/docs/current/sql/rest.html
An example using curl:
curl -sSXPOST '127.0.0.1:4200/_sql?pretty' -d@- <<- EOF
{
  "stmt": "update poll set poll_rating = ? where poll_id = ?",
  "args": [ [{"rating_id":1,"fk_user_id":-1,"israted_image1":1,"israted_image2":0,"createddate":1400067339.0496}], "f748771d7c2e4616b1865f37b7913707" ]
}
EOF