How to insert value of UUID? - postgresql

I'm using Anorm 2.4 in Play Framework 2.3, backed by PostgreSQL 9.4.
Given a model like this:
case class EmailQueue(
  id: UUID,
  send_from: String,
  send_to: String,
  subject: String,
  body: String,
  created_date: Date,
  is_sent: Boolean,
  email_template: String)
This is my parser:
val parser: RowParser[EmailQueue] = {
  get[UUID]("id") ~
    get[String]("send_from") ~
    get[String]("send_to") ~
    get[String]("subject") ~
    get[String]("body") ~
    get[Date]("created_date") ~
    get[Boolean]("is_sent") ~
    get[String]("email_template") map {
      case id ~ send_from ~ send_to ~ subject ~ body ~
          created_date ~ is_sent ~ email_template =>
        EmailQueue(id, send_from, send_to, subject, body,
          created_date, is_sent, email_template)
    }
}
And this is my insert statement:
def insert(email: EmailQueue): Unit = {
  DB.withTransaction { implicit c =>
    SQL(s"""
      INSERT INTO "email_queue" ("body", "created_date", "id", "is_sent", "send_from", "send_to", "subject", "email_template")
      VALUES ({body}, {created_date}, {id}, {is_sent}, {send_from}, {send_to}, {subject}, {email_template});
    """).on(
      "body" -> email.body,
      "created_date" -> email.created_date,
      "id" -> email.id,
      "is_sent" -> email.is_sent,
      "send_from" -> email.send_from,
      "send_to" -> email.send_to,
      "subject" -> email.subject,
      "email_template" -> email.email_template
    ).executeInsert()
  }
}
I receive the following error when inserting:
[error] PSQLException: ERROR: column "id" is of type uuid but expression is of type character varying
[error] Hint: You will need to rewrite or cast the expression.
[error] Position: 153 (xxxxxxxxxx.java:2270)
The database table is created by this query:
CREATE TABLE email_queue (
id UUID PRIMARY KEY,
send_from VARCHAR(255) NOT NULL,
send_to VARCHAR(255) NOT NULL,
subject VARCHAR(2000) NOT NULL,
body text NOT NULL,
created_date timestamp without time zone DEFAULT now(),
is_sent BOOLEAN NOT NULL DEFAULT FALSE,
email_template VARCHAR(2000) NOT NULL
);

Anorm is database agnostic, like JDBC, so vendor-specific data types are not supported by default.
You can use {id}::uuid in the statement, so that the java.util.UUID, passed as a String in the JDBC parameters, is converted from VARCHAR to the PostgreSQL-specific uuid type.
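For example, keeping the question's named-parameter style, only the id placeholder changes; a sketch of the corrected statement (using executeUpdate, since no generated key is needed here):

def insert(email: EmailQueue): Unit = DB.withTransaction { implicit c =>
  SQL("""
    INSERT INTO "email_queue" ("body", "created_date", "id", "is_sent", "send_from", "send_to", "subject", "email_template")
    VALUES ({body}, {created_date}, {id}::uuid, {is_sent}, {send_from}, {send_to}, {subject}, {email_template})
  """).on(
    "body" -> email.body,
    "created_date" -> email.created_date,
    "id" -> email.id, // still sent as VARCHAR; the ::uuid cast converts it server-side
    "is_sent" -> email.is_sent,
    "send_from" -> email.send_from,
    "send_to" -> email.send_to,
    "subject" -> email.subject,
    "email_template" -> email.email_template
  ).executeUpdate()
}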
Using plain string interpolation in SQL(s"...") is not recommended (SQL injection), but Anorm interpolation can be used.
def insert(email:EmailQueue): Unit = DB.withTransaction { implicit c =>
SQL"""
INSERT INTO "email_queue" ( "body", "created_date", "id", "is_sent", "send_from", "send_to", "subject", "email_template")
VALUES ( ${email.body}, ${email.created_date}, ${email.id}::uuid, ${email.is_sent}, ${email.send_from}, ${email.send_to}, ${email.subject}, ${email.email_template} )
""").executeInsert()
}
Not recommended, but sometimes useful for vendor-specific types: anorm.Object can be used to pass an opaque value as a JDBC parameter (though the ::uuid cast is nicer in my opinion).
You can also implement a custom ToStatement[java.util.UUID].
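A minimal sketch of such an instance, assuming Anorm 2.4's ToStatement trait. A local implicit takes precedence over Anorm's default one, which passes the UUID as a string, and the PostgreSQL driver binds a java.util.UUID given to setObject as a native uuid:

import java.util.UUID
import anorm.ToStatement

implicit val uuidToStatement: ToStatement[UUID] = new ToStatement[UUID] {
  def set(s: java.sql.PreparedStatement, index: Int, v: UUID): Unit =
    s.setObject(index, v) // bound as uuid by the PostgreSQL driver, not as VARCHAR
}

With this in scope, "id" -> email.id works without the ::uuid cast.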

Related

How do I insert JSON into a postgres table using Anorm?

I'm getting a runtime exception when trying to insert a JSON string into a JSONB column. The string I have looks like """{"Events": []}""", and the table has a column defined as status JSONB NOT NULL. I can insert the string into the table from the command line no problem. I've defined a method to do the insert as:
import play.api.libs.json._
import anorm._
import anorm.postgresql._
def createStatus(
    status: String,
    created: LocalDateTime = LocalDateTime.now())(implicit c: SQLConnection): Unit = {
  SQL(s"""
     |INSERT INTO status_feed
     |  (status, created)
     |VALUES
     |  ({status}, {created})
     |""".stripMargin)
    .on(
      'status -> Json.parse("{}"), // n.b. would be Json.parse(status) but this provides a concise error message
      'created -> created)
    .execute()
}
and calling it gives the following error:
TypeDoesNotMatch(Cannot convert {}: org.postgresql.util.PGobject to String for column ColumnName(status_feed.status,Some(status)))
anorm.AnormException: TypeDoesNotMatch(Cannot convert {}: org.postgresql.util.PGobject to String for column ColumnName(status_feed.status,Some(status)))
I've done loads of searching for this issue, but there's nothing about this specific use case that I could find; most of what's out there is about pulling JSON columns out into case classes. I've tried slightly different formats using spray-json's JsValue, Play's JsValue, and simply passing the string as-is and casting in the query with ::JSONB, and they all give the same error.
Update: here is the SQL which created the table:
CREATE TABLE status_feed (
id SERIAL PRIMARY KEY,
status JSONB NOT NULL,
created TIMESTAMP WITHOUT TIME ZONE NOT NULL DEFAULT NOW()
)
The error is not on the values given to the insert, but on the parsing of the INSERT result (the inserted key).
import java.sql._
// postgres=# CREATE TABLE test(foo JSONB NOT NULL);
val jdbcUrl = "jdbc:postgresql://localhost:32769/postgres"
val props = new java.util.Properties()
props.setProperty("user", "postgres")
props.setProperty("password", "mysecretpassword")
implicit val con = DriverManager.getConnection(jdbcUrl, props)
import anorm._, postgresql._
import play.api.libs.json._
SQL"""INSERT INTO test(foo) VALUES(${Json.obj("foo" -> 1)})""".
executeInsert(SqlParser.scalar[JsValue].singleOpt)
// Option[play.api.libs.json.JsValue] = Some({"foo":1})
/*
postgres=# SELECT * FROM test ;
foo
------------
{"foo": 1}
*/
By the way, the plain string interpolation (the s prefix with no $ placeholders) is useless there.
Turns out cchantep was right: it was the parser I was using. The test framework I'm using swallowed the stack trace, and I assumed the problem was on the insert, but what's actually blowing up is the next line in the test, where I use the parser.
The case class and parser were defined as:
case class StatusFeed(
    status: String,
    created: LocalDateTime) {
  val ItemsStatus: Status = status.parseJson.convertTo[Status]
}

object StatusFeed extends DefaultJsonProtocol {
  val fields: String = sqlFields[StatusFeed]() // helper function that results in "created, status"
  // used in SQL as RETURNING ${StatusFeed.fields}
  val parser: RowParser[StatusFeed] =
    Macro.namedParser[StatusFeed](Macro.ColumnNaming.SnakeCase)
  // json formatter for Status
}
As defined, the parser attempts to read a JSONB column from the result set into the String field status. Changing fields to val fields: String = "created, status::TEXT" resolves the issue, though the cast may be expensive. Alternatively, defining status as a JsValue instead of a String and providing an implicit Column for Anorm (adapted from this answer to use spray-json) fixes the issue:
implicit def columnToJsValue: Column[JsValue] = anorm.Column.nonNull[JsValue] { (value, meta) =>
  val MetaDataItem(qualified, nullable, clazz) = meta
  value match {
    case json: org.postgresql.util.PGobject => Right(json.getValue.parseJson)
    case _ =>
      Left(TypeDoesNotMatch(
        s"Cannot convert $value: ${value.asInstanceOf[AnyRef].getClass} to Json for column $qualified"))
  }
}
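With that column mapper in scope, the JSONB value can also be read back directly; a small usage sketch (hypothetical query; JsValue here is spray-json's, matching the mapper above):

// Relies on the implicit Column[JsValue] defined above.
val latest: spray.json.JsValue =
  SQL("SELECT status FROM status_feed ORDER BY id DESC LIMIT 1")
    .as(SqlParser.scalar[spray.json.JsValue].single)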

Slick, H2 insert query auto increment ID

I have this table in H2:
CREATE TABLE computer (id BIGINT NOT NULL, name VARCHAR(255) NOT NULL, introduced TIMESTAMP, discontinued TIMESTAMP, company_id BIGINT, CONSTRAINT pk_computer PRIMARY KEY (id));
CREATE SEQUENCE computer_seq START WITH 1000;
The Slick auto-generated class for it:
case class ComputerRow(id: Long, name: String, introduced: Option[java.sql.Timestamp], discontinued: Option[java.sql.Timestamp], companyId: Option[Long])

/** GetResult implicit for fetching ComputerRow objects using plain SQL queries */
implicit def GetResultComputerRow(implicit e0: GR[Long], e1: GR[String], e2: GR[Option[java.sql.Timestamp]], e3: GR[Option[Long]]): GR[ComputerRow] = GR {
  prs => import prs._
  ComputerRow.tupled((<<[Long], <<[String], <<?[java.sql.Timestamp], <<?[java.sql.Timestamp], <<?[Long]))
}

/** Table description of table COMPUTER. Objects of this class serve as prototypes for rows in queries. */
class Computer(tag: Tag) extends Table[ComputerRow](tag, "COMPUTER") {
  def * = (id, name, introduced, discontinued, companyId) <> (ComputerRow.tupled, ComputerRow.unapply)
  /** Maps whole row to an option. Useful for outer joins. */
  def ? = (id.?, name.?, introduced, discontinued, companyId).shaped.<>({ r => import r._; _1.map(_ => ComputerRow.tupled((_1.get, _2.get, _3, _4, _5))) }, (_: Any) => throw new Exception("Inserting into ? projection not supported."))
  /** Database column ID PrimaryKey */
  val id: Column[Long] = column[Long]("ID", O.PrimaryKey)
  /** Database column NAME */
  val name: Column[String] = column[String]("NAME")
  /** Database column INTRODUCED */
  val introduced: Column[Option[java.sql.Timestamp]] = column[Option[java.sql.Timestamp]]("INTRODUCED")
  /** Database column DISCONTINUED */
  val discontinued: Column[Option[java.sql.Timestamp]] = column[Option[java.sql.Timestamp]]("DISCONTINUED")
  /** Database column COMPANY_ID */
  val companyId: Column[Option[Long]] = column[Option[Long]]("COMPANY_ID")
  /** Foreign key referencing Company (database name FK_COMPUTER_COMPANY_1) */
  lazy val companyFk = foreignKey("FK_COMPUTER_COMPANY_1", companyId, Company)(r => r.id, onUpdate = ForeignKeyAction.Restrict, onDelete = ForeignKeyAction.Restrict)
}

/** Collection-like TableQuery object for table Computer */
lazy val Computer = new TableQuery(tag => new Computer(tag))
However, I can't seem to insert a new row. The attempt below
val row = ("name", new Timestamp((new Date).getTime), new Timestamp((new Date).getTime), 123)

DB.withDynSession {
  Computer.map(r =>
    (r.name, r.introduced, r.discontinued, r.companyId)
  ) += row
}
Throws an error
[JdbcSQLException: NULL not allowed for column "ID"; SQL statement: INSERT INTO "COMPUTER" ("NAME","INTRODUCED","DISCONTINUED","COMPANY_ID") VALUES (?,?,?,?) [23502-175]]
The same approach works with MySQL and PostgreSQL, so I'm guessing that H2 doesn't have the same primary-key auto-increment functionality? How do I make my insert work with Slick then?
This is a working example of the same table with Anorm:
DB.withConnection { implicit connection =>
  SQL(
    """
    insert into computer values (
      (select next value for computer_seq),
      {name}, {introduced}, {discontinued}, {company_id}
    )
    """
  ).on(
    'name -> "name",
    'introduced -> new Timestamp((new Date).getTime),
    'discontinued -> new Timestamp((new Date).getTime),
    'company_id -> 123
  ).executeUpdate()
}
As far as I can tell this is not a Slick problem; your DDL statement just doesn't specify auto-increment for H2. Check the SQL code for the play-slick sample project: https://github.com/playframework/play-slick/blob/master/samples/computer-database/conf/evolutions/default/1.sql
Also check the H2 docs: http://www.h2database.com/html/grammar.html#column_definition
Try giving the case class a default value for id, like this:
case class ComputerRow(id: Long = 0, name: String, introduced: Option[java.sql.Timestamp], discontinued: Option[java.sql.Timestamp], companyId: Option[Long])
I'm not sure about H2, but when I'm working with Postgres I usually specify a default for the id field to satisfy any checks from Slick; on the database side the inserted value is then handled automatically by the sequence.
Edit:
I may be wrong on this, but I noticed that you create a sequence without assigning it:
CREATE TABLE computer (id BIGINT NOT NULL, name VARCHAR(255) NOT NULL, introduced TIMESTAMP, discontinued TIMESTAMP, company_id BIGINT, CONSTRAINT pk_computer PRIMARY KEY (id));
CREATE SEQUENCE computer_seq START WITH 1000;
In Postgres you create a sequence and then assign it, like this:
create table users (
id bigint not null
);
create sequence users_seq;
alter table users alter column id set default nextval('users_seq');
In H2, as far as I can see, this is how you define an auto-increment column:
create table test(id bigint auto_increment, name varchar(255));
Taken from this SO question.
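On the Slick side, once the DDL uses auto_increment, the generated-key column can be marked O.AutoInc so Slick omits it from inserts. A minimal sketch, assuming Slick 2.x and a hypothetical simplified table rather than the full generated code above:

import scala.slick.driver.H2Driver.simple._
import java.sql.Timestamp

// Hypothetical simplified table; the real one has more columns.
class Computers(tag: Tag) extends Table[(Long, String, Option[Timestamp])](tag, "COMPUTER") {
  def id = column[Long]("ID", O.PrimaryKey, O.AutoInc) // generated by H2
  def name = column[String]("NAME")
  def introduced = column[Option[Timestamp]]("INTRODUCED")
  def * = (id, name, introduced)
}
val computers = TableQuery[Computers]

// Insert through a projection that leaves out the auto-increment column,
// and get the generated key back via `returning`:
def insert(name: String)(implicit s: Session): Long =
  (computers.map(c => (c.name, c.introduced)) returning computers.map(_.id)) += ((name, None))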

[SlickException: Read NULL value for column (USERS /670412212).LOGIN_ID]

I am using Slick 1.0.0 with Play Framework 2.1.0. I am getting the following error when I query my Users table. The value of LOGIN_ID is null in the DB.
The query I am executing is:
val user = { for { u <- Users if u.providerId === id.id } yield u}.first
This results in the following error:
play.api.Application$$anon$1: Execution exception[[SlickException: Read NULL value for column (USERS /670412212).LOGIN_ID]]
at play.api.Application$class.handleError(Application.scala:289) ~[play_2.10.jar:2.1.0]
at play.api.DefaultApplication.handleError(Application.scala:383) [play_2.10.jar:2.1.0]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$24.apply(PlayDefaultUpstreamHandler.scala:314) [play_2.10.jar:2.1.0]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$24.apply(PlayDefaultUpstreamHandler.scala:312) [play_2.10.jar:2.1.0]
at play.api.libs.concurrent.PlayPromise$$anonfun$extend1$1.apply(Promise.scala:113) [play_2.10.jar:2.1.0]
at play.api.libs.concurrent.PlayPromise$$anonfun$extend1$1.apply(Promise.scala:113) [play_2.10.jar:2.1.0]
scala.slick.SlickException: Read NULL value for column (USERS /670412212).LOGIN_ID
at scala.slick.lifted.Column$$anonfun$getResult$1.apply(ColumnBase.scala:29) ~[slick_2.10-1.0.0.jar:1.0.0]
at scala.slick.lifted.TypeMapperDelegate$class.nextValueOrElse(TypeMapper.scala:158) ~[slick_2.10-1.0.0.jar:1.0.0]
at scala.slick.driver.BasicTypeMapperDelegatesComponent$TypeMapperDelegates$StringTypeMapperDelegate.nextValueOrElse(BasicTypeMapperDelegatesComponent.scala:146) ~[slick_2.10-1.0.0.jar:1.0.0]
at scala.slick.lifted.Column.getResult(ColumnBase.scala:28) ~[slick_2.10-1.0.0.jar:1.0.0]
at scala.slick.lifted.Projection15.getResult(Projection.scala:627) ~[slick_2.10-1.0.0.jar:1.0.0]
at scala.slick.lifted.Projection15.getResult(Projection.scala:604) ~[slick_2.10-1.0.0.jar:1.0.0]
My User table is defined as :
package models
import scala.slick.driver.MySQLDriver.simple._
case class User(userId:String,email:String,loginId:String,fullName:String,firstName:String,lastName:String,location:String,homeTown:String,providerId:String,provider:String,state:String,zip:String,accessKey:String,refreshKey:String,avatarUrl:String)
object Users extends Table[User]("USERS") {
def userId = column[String]("USER_ID", O.PrimaryKey) // This is the primary key column
def email = column[String]("EMAIL",O.NotNull)
def loginId = column[String]("LOGIN_ID",O.Nullable)
def fullName = column[String]("FULL_NAME",O.NotNull)
def firstName = column[String]("FIRST_NAME",O.Nullable)
def lastName = column[String]("LAST_NAME",O.Nullable)
def location = column[String]("LOCATION",O.Nullable)
def homeTown = column[String]("HOME_TOWN",O.Nullable)
def providerId = column[String]("PROVIDER_ID",O.Nullable)
def provider = column[String]("PROVIDER",O.Nullable)
def state = column[String]("STATE",O.Nullable)
def zip = column[String]("ZIP",O.Nullable)
def accessKey = column[String]("ACCESS_KEY",O.Nullable)
def refreshKey = column[String]("REFRESH_KEY",O.Nullable)
def avatarUrl = column[String]("AVATAR_URL",O.Nullable)
// Every table needs a * projection with the same type as the table's type parameter
def * = userId ~ email ~ loginId ~ fullName ~ firstName ~ lastName ~ location ~ homeTown ~ providerId ~ provider ~ state ~ zip ~ accessKey ~ refreshKey ~ avatarUrl <> (User,User.unapply _)
}
Please help. It looks like Slick cannot handle NULL values from the DB?
Your case class is not OK: if you use O.Nullable, all the corresponding properties have to be Option[String].
If you get this error, you'll have to either map the O.Nullable columns to Option properties, or specify that your query returns an option.
For example, say you do a rightJoin: you might not want to make the properties of the right-hand record optional, but the left-hand columns can come back NULL. In that case you can customize the way you yield your results using .?
val results = (for {
  (left, right) <- leftRecord.table rightJoin rightRecord.table on (_.xId === _.id)
} yield (right.id, left.name.?)).list

results map (r => SomeJoinedRecord(Some(r._1), r._2.getOrElse(default)))
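For the first approach, a minimal sketch (Slick 1.0 style as in the question, abridged to three columns) of lifting an O.Nullable column to an Option field with .? in the projection:

import scala.slick.driver.MySQLDriver.simple._

case class User(userId: String, email: String, loginId: Option[String])

object Users extends Table[User]("USERS") {
  def userId = column[String]("USER_ID", O.PrimaryKey)
  def email = column[String]("EMAIL", O.NotNull)
  def loginId = column[String]("LOGIN_ID", O.Nullable)
  // loginId.? lifts the nullable column to Option[String], so a NULL becomes None
  def * = userId ~ email ~ loginId.? <> (User.apply _, User.unapply _)
}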
This problem arises when a column contains a null value and a NULL actually comes back in that column at runtime. In the code below, my cust_id is nullable, but it holds no null values, since a job makes sure it is never null. So the mapping below works. However, it is best practice to look at your table structure and create the class accordingly; this avoids nasty runtime exceptions.
If the table definition in the database is like:
CREATE TABLE public.perf_test (
dwh_id serial NOT NULL,
cust_id int4 NULL,
cust_address varchar(30) NULL,
partner_id int4 NULL,
CONSTRAINT perf_test_new_dwh_id_key UNIQUE (dwh_id)
);
The corresponding class definition can be as below. It would be advisable to have cust_id as Option[Int] as well; however, as long as it holds values and no nulls, you will not encounter the error.
import slick.jdbc.PostgresProfile.api._

class PerfTest(tag: Tag) extends Table[(Int, Int, Option[String], Option[Int])](tag, "perf_test") {
  def dwhId = column[Int]("dwh_id")
  def custId = column[Int]("cust_id")
  def custAddress = column[Option[String]]("cust_address")
  def partnerId = column[Option[Int]]("partner_id")
  def * = (dwhId, custId, custAddress, partnerId)
}
What happened to me was that I had anomalies in the DB, and some values were accidentally null, ones I didn't expect to be. So do not forget to check your data too :)

Anorm string set from postgres ltree column

I have a table with one of the columns having ltree type, and the following code fetching data from it:
SQL("""select * from "queue"""")()
.map(
row =>
{
val queue =
Queue(
row[String]("path"),
row[String]("email_recipients"),
new DateTime(row[java.util.Date]("created_at")),
row[Boolean]("template_required")
)
queue
}
).toList
which results in the following error:
RuntimeException: TypeDoesNotMatch(Cannot convert notification.en.incident_happened:class org.postgresql.util.PGobject to String for column ColumnName(queue.path,Some(path)))
The queue table schema is the following:
CREATE TABLE queue
(
id serial NOT NULL,
template_id integer,
template_version integer,
path ltree NOT NULL,
json_params text,
email_recipients character varying(1024) NOT NULL,
email_from character varying(128),
email_subject character varying(512),
created_at timestamp with time zone NOT NULL,
sent_at timestamp with time zone,
failed_recipients character varying(1024),
template_required boolean NOT NULL DEFAULT true,
attachments hstore,
CONSTRAINT pk_queue PRIMARY KEY (id ),
CONSTRAINT fk_queue__email_template FOREIGN KEY (template_id)
REFERENCES email_template (id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE RESTRICT
)
WITH (
OIDS=FALSE
);
ALTER TABLE queue
OWNER TO postgres;
GRANT ALL ON TABLE queue TO postgres;
GRANT SELECT, UPDATE, INSERT, DELETE ON TABLE queue TO writer;
GRANT SELECT ON TABLE queue TO reader;
Why is that? Isn't notification.en.incident_happened just an ordinary string? Or am I missing something?
UPD:
The question still applies, but here is a workaround:
SQL("""select id, path::varchar, email_recipients, created_at, template_required from "queue"""")()
This looked like a fun project, so I implemented the ltree column mapper.
I piggybacked off anorm-postgresql, since that project already implements some PostgreSQL types for Anorm. It looks good, and it would be useful if it implemented the full range of PostgreSQL types. My code has been merged in, so you can use that library. Alternatively, just use the following code:
import org.postgresql.util.PGobject
import anorm._
object LTree {

  implicit def rowToStringSeq: Column[Seq[String]] = Column.nonNull { (value, meta) =>
    val MetaDataItem(qualified, nullable, clazz) = meta
    value match {
      case pgo: PGobject =>
        // an ltree value such as "a.b.c" becomes Seq("a", "b", "c")
        Right(pgo.getValue().split('.').toSeq)
      case x => Left(TypeDoesNotMatch(x.getClass.toString))
    }
  }

  implicit def stringSeqToStatement = new ToStatement[Seq[String]] {
    def set(s: java.sql.PreparedStatement, index: Int, aValue: Seq[String]) {
      val pgo = new PGobject()
      pgo.setType("ltree")
      pgo.setValue(aValue.mkString("."))
      s.setObject(index, pgo)
    }
  }
}
Then you can map an ltree to a Seq[String]. Notice that it is a sequence of path elements; order matters, so it is a Seq[String] rather than a String or a Set[String]. If you want a single string, just use path.mkString("."). Usage below:
import LTree._
SQL("""select * from "queue"""")()
.map(
row =>
{
val queue =
Queue(
row[Seq[String]]("path"),
row[String]("email_recipients"),
new DateTime(row[java.util.Date]("created_at")),
row[Boolean]("template_required")
)
queue
}
).toList

How to configure Play! Framework to work with ScalaQuery and H2?

I already have a simple project with one migration script:
# --- !Ups
create table user (
name varchar(255) not null primary key,
password varchar(255) not null
);
insert into user values ('demo', 'demo');
insert into user values ('kuki', 'pass');
# --- !Downs
drop table if exists user;
The database I'm using is H2 in memory:
db.default.driver=org.h2.Driver
db.default.url=jdbc:h2:mem:play
Then I obviously want to query some data. When I use Anorm, everything works properly:
case class User(name: String, password: String)

object User {
  val simple = {
    get[String]("user.name") ~/
    get[String]("user.password") ^^ {
      case name ~ password => User(name, password)
    }
  }

  def findByName(name: String): Option[User] = {
    DB.withConnection { implicit connection =>
      SQL("select * from user where name = {name}").on(
        'name -> name
      ).as(User.simple ?)
    }
  }
}
Unluckily, when I try to do the same with ScalaQuery:
object User extends Table[(String, String)]("user") {
  lazy val database = Database.forDataSource(DB.getDataSource())

  def name = column[String]("name", O PrimaryKey, O NotNull)
  def password = column[String]("password", O NotNull)
  def * = name ~ password

  def findByName(name: String) = database withSession { implicit db: Session =>
    (for (u <- this if u.name === name) yield u.name ~ u.password).list
  }
}
I always get the same error:
[JdbcSQLException: Table "user" not found (the message also appears in Polish: Tablela "user" nie istnieje);
SQL statement: SELECT "t1"."name","t1"."password" FROM "user" "t1" WHERE ("t1"."name"='input_name') [42102-158]]
Is there anything I'm doing wrong?
I think I'm strictly following the guide from here: https://github.com/playframework/Play20/wiki/ScalaDatabase
--------------------- EDIT -----------------------
Looks like it's some kind of incompatibility between Play's evolutions and ScalaQuery.
When I created the table using:
database withSession {
implicit db: Session =>
User.ddl.create
User.insert("demo", "demo")
}
everything seems to work fine.
Maybe later I'll create some simple MySQL database and check what really happens inside.
--------------------- EDIT 2 -----------------------
So I more or less know what is going on (but I don't know why).
When I create the DB structure with evolutions, the table name and column names are written in all-uppercase letters.
And since I'm on Linux, it matters.
If I change the table and column names in the code to uppercase as well, everything works.
I'm only curious whether it's a bug, or whether there's any way to enforce the proper case in migrations?
Most likely, the problem is that ScalaQuery quotes the identifier names (table names, column names) in the generated query, while H2 folds the unquoted names in your evolution's create table statement to uppercase. You therefore need to quote the identifiers in the 'create table' statement as well:
create table "user" (
"name" varchar(255) not null primary key,
"password" varchar(255) not null
);
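Alternatively, as the second edit observes, you can match H2's default case folding by leaving the evolution script unquoted and using uppercase identifiers in the ScalaQuery table definition instead; a sketch of the same table:

object User extends Table[(String, String)]("USER") {
  // uppercase names match what H2 creates for unquoted identifiers
  def name = column[String]("NAME", O PrimaryKey, O NotNull)
  def password = column[String]("PASSWORD", O NotNull)
  def * = name ~ password
}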