How do I insert JSON into a postgres table using Anorm? - postgresql

I'm getting a runtime exception when trying to insert a JSON string into a JSON column. The string I have looks like """{"Events": []}""", and the table has a column defined as status JSONB NOT NULL. I can insert the string into the table from the command line with no problem. I've defined a method to do the insert as:
import java.time.LocalDateTime
import play.api.libs.json._
import anorm._
import anorm.postgresql._

def createStatus(
    status: String,
    created: LocalDateTime = LocalDateTime.now())(implicit c: SQLConnection): Unit = {
  SQL(s"""
    |INSERT INTO status_feed
    | (status, created)
    |VALUES
    | ({status}, {created})
    |""".stripMargin)
    .on(
      'status -> Json.parse("{}"), // n.b. would be Json.parse(status) but this provides a concise error message
      'created -> created)
    .execute()
}
and calling it gives the following error:
TypeDoesNotMatch(Cannot convert {}: org.postgresql.util.PGobject to String for column ColumnName(status_feed.status,Some(status)))
anorm.AnormException: TypeDoesNotMatch(Cannot convert {}: org.postgresql.util.PGobject to String for column ColumnName(status_feed.status,Some(status)))
I've done loads of searching for this issue but there's nothing about this specific use case that I could find - most of it is pulling out json columns into case classes. I've tried slightly different formats using spray-json's JsValue, play's JsValue, simply passing the string as-is and casting in the query with ::JSONB and they all give the same error.
Update: here is the SQL which created the table:
CREATE TABLE status_feed (
  id SERIAL PRIMARY KEY,
  status JSONB NOT NULL,
  created TIMESTAMP WITHOUT TIME ZONE NOT NULL DEFAULT NOW()
)

The error is not on values given to .executeInsert, but on the parsing of the INSERT result (inserted key).
import java.sql._
// postgres=# CREATE TABLE test(foo JSONB NOT NULL);
val jdbcUrl = "jdbc:postgresql://localhost:32769/postgres"
val props = new java.util.Properties()
props.setProperty("user", "postgres")
props.setProperty("password", "mysecretpassword")
implicit val con = DriverManager.getConnection(jdbcUrl, props)
import anorm._, postgresql._
import play.api.libs.json._
SQL"""INSERT INTO test(foo) VALUES(${Json.obj("foo" -> 1)})""".
executeInsert(SqlParser.scalar[JsValue].singleOpt)
// Option[play.api.libs.json.JsValue] = Some({"foo":1})
/*
postgres=# SELECT * FROM test ;
foo
------------
{"foo": 1}
*/
BTW, the plain s string interpolation in SQL(s"""...""") is useless there: nothing is actually interpolated into the query.
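For reference, here is a minimal rework of the original createStatus along those lines; it is only a sketch, assuming anorm.postgresql._ provides the JsValue parameter support used above, that the status string is valid JSON, and that SQLConnection is a java.sql.Connection:
import java.time.LocalDateTime
import play.api.libs.json._
import anorm._, postgresql._

def createStatus(status: String, created: LocalDateTime = LocalDateTime.now())(
    implicit c: java.sql.Connection): Unit = {
  // Parse up front so the jsonb parameter is bound as a JsValue, not a plain String
  val json: JsValue = Json.parse(status)

  SQL"INSERT INTO status_feed(status, created) VALUES ($json, $created)".execute()
}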

Turns out cchantep was right, it was the parser I was using. The test framework I am using swallowed the stack trace and I assumed the problem was on the insert, but what's actually blowing up is the next line in the test where I use the parser.
The case class and parser were defined as:
case class StatusFeed(
    status: String,
    created: LocalDateTime) {
  val ItemsStatus: Status = status.parseJson.convertTo[Status]
}

object StatusFeed extends DefaultJsonProtocol {
  val fields: String = sqlFields[StatusFeed]() // helper function that results in "created, status"
  // used in SQL as RETURNING ${StatusFeed.fields}
  val parser: RowParser[StatusFeed] =
    Macro.namedParser[StatusFeed](Macro.ColumnNaming.SnakeCase)
  // json formatter for Status
}
As defined, the parser attempts to read a JSONB column from the result set into the String field status. Changing fields to val fields: String = "created, status::TEXT" resolves the issue, though the cast may be expensive. Alternatively, defining status as a JsValue instead of a String and providing an implicit Column for Anorm (adapted from this answer to use spray-json) fixes the issue:
implicit def columnToJsValue: Column[JsValue] = anorm.Column.nonNull[JsValue] { (value, meta) =>
  val MetaDataItem(qualified, nullable, clazz) = meta
  value match {
    case json: org.postgresql.util.PGobject => Right(json.getValue.parseJson)
    case _ =>
      Left(TypeDoesNotMatch(
        s"Cannot convert $value: ${value.asInstanceOf[AnyRef].getClass} to Json for column $qualified"))
  }
}
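With that Column in scope, a sketch of the reworked case class and parser could look as follows (Status and its spray-json format come from the original code; the sqlFields helper is omitted here):
import java.time.LocalDateTime
import spray.json._
import anorm._

case class StatusFeed(status: JsValue, created: LocalDateTime) {
  val ItemsStatus: Status = status.convertTo[Status]
}

object StatusFeed extends DefaultJsonProtocol {
  // The JSONB column is now read directly into a JsValue via the implicit above,
  // so no ::TEXT cast is needed in the RETURNING clause
  val parser: RowParser[StatusFeed] =
    Macro.namedParser[StatusFeed](Macro.ColumnNaming.SnakeCase)
}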

Related

Scala, PostgreSQL - how to create list of objects as jsonb[] in Postgres?

I have a simple postgres schema:
CREATE TABLE person
(
  name varchar(255) NOT NULL,
  products jsonb[]
);
Now I created a model in Scala to add an array of JSON objects into the database:
case class Product
(
  name: String,
  number: Int
)

case class Person
(
  name: String,
  products: Option[List[Product]]
)
But when I try to add it to the database via Quill I get an error:
InternalPersistenceError(Some(U0000000DN),Left(java.util.concurrent.CompletionException: com.github.jasync.sql.db.postgresql.exceptions.GenericDatabaseException: ErrorMessage(fields=[(Severity, ERROR), (V, ERROR), (SQLSTATE, 22P02), (Message, invalid input syntax for type json), (Detail, Token "Product" is invalid.), (Where, JSON data, line 1: Product...), (File, json.c), (Line, 22), (Routine, report_invalid_token)])))
It seems that the JSON could not be saved into the DB, but I have no idea why. I'm using raw types everywhere, and it looks straightforward. Is there another way to save an array of JSON objects into Postgres via Scala?
Try creating the following for the circe json version of the desired case class you want to insert:
implicit val jsonEncoder: Encoder[Json] = encoder[Json](Types.OTHER,
  (index: Index, json: Json, row: PrepareRow) => {
    val pgObj = new PGobject()
    pgObj.setType("jsonb")
    pgObj.setValue(json.noSpaces)
    row.setObject(index, pgObj, Types.OTHER)
  }
)
implicit val jsonDecoder: MappedEncoding[String, Json] = MappedEncoding[String, Json](str =>
  io.circe.parser.parse(str) match {
    case Right(v) => v
    case Left(err) => throw new SQLException("Could not parse string to Json", err)
  }
)
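On top of that, the products field still has to be exposed to the database as a single circe Json value. A possible sketch using circe's generic derivation is below; this mapping is an assumption rather than part of the original answer, and it treats the column as plain jsonb instead of jsonb[]:
import io.circe.Json
import io.circe.generic.auto._ // derives Encoder/Decoder for Product
import io.circe.syntax._
import io.getquill.MappedEncoding

// Map List[Product] <-> Json so the jsonb encoder/decoder above can handle it
implicit val productsToJson: MappedEncoding[List[Product], Json] =
  MappedEncoding(_.asJson)
implicit val jsonToProducts: MappedEncoding[Json, List[Product]] =
  MappedEncoding(_.as[List[Product]].fold(err => throw err, identity))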

passing Dataframe contents into sql stored procedure

I am trying to pass the contents of a dataframe into my SQL stored procedure. I use a map function to iterate through the dataframe contents and send them to the DB, but I get an error when trying to do it.
I am getting an error called No Encoder found for Any
- field (class: "java.lang.Object", name: "_1")
- root class: "scala.Tuple2"
Could anybody help me correct this?
Below is my code
val savedDataFrame = dataFrame.map(m => sendDataFrameToDB(m.get(0), m.get(1), m.get(2), m.get(3)))
savedDataFrame.collect()
def sendDataFrameToDB(firstName: String, lastName: String, address: String, age: Long) = {
  var jdbcConnection: java.sql.Connection = null
  try {
    val jdbcTemplate = new JDBCTemplate()
    jdbcTemplate.getConfiguration()
    jdbcConnection = jdbcTemplate.getConnection
    if (jdbcConnection != null) {
      val statement = "{call insert_user_details (?,?,?,?)}"
      val callableStatement = jdbcConnection.prepareCall(statement)
      callableStatement.setString(1, firstName)
      callableStatement.setString(2, lastName)
      callableStatement.setString(3, address)
      callableStatement.setLong(4, age)
      callableStatement.executeUpdate
    }
  } catch {
    case e: SQLException => logger.error(e.getMessage)
  }
}
m.get(0) is of type Any, and it cannot be passed directly to the String-typed firstName parameter in your example. A DataFrame is different from an RDD: "DataFrame is a Dataset organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood" (link)
When you create the DataFrame, name the columns, e.g.:
val dataFrame = dataSet.toDF("firstname", "lastName", "address", "age")
Then you can access the elements in the DataFrame as below and pass them into your method:
dataFrame.map(f => sendDataFrameToDB(f.getAs("firstname").toString, f.getAs("lastname").toString, f.getAs("address").toString, f.getAs("age").toString.toLong))
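As an alternative sketch (not from the original answer), you can also work with a typed Dataset so the fields keep their types and no Any is involved; the UserRow case class and the spark session name are assumptions made up to match the columns:
import org.apache.spark.sql.SparkSession

// Hypothetical row type matching the DataFrame columns
case class UserRow(firstname: String, lastname: String, address: String, age: Long)

val spark: SparkSession = SparkSession.builder().getOrCreate()
import spark.implicits._

val typed = dataFrame.as[UserRow]
// foreach runs on the executors; nothing needs to be collected back to the driver
typed.foreach(u => sendDataFrameToDB(u.firstname, u.lastname, u.address, u.age))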

Slick: could not find implicit value for parameter e: slick.jdbc.SetParameter[Option[java.util.UUID]]

I'm working with Play! 2.5, Slick 3 and PostgreSQL 9.6.
I'm trying to use a simple plain SQL query with an optional UUID but I get this error:
Slick: could not find implicit value for parameter e: slick.jdbc.SetParameter[Option[java.util.UUID]]
My case class looks like
final case class GaClientId(ip: String, userId: Option[UUID], clientId: String)
And my query like this:
db.run(
  sql"""
    INSERT INTO ga_client_id(ip, user_id, clientId) VALUES (
      ${gaClientId.ip}, ${gaClientId.userId}, ${gaClientId.userId.orNull})
    ON CONFLICT DO NOTHING;""")
I tried to add this:
implicit val getUUIDContent: GetResult[GaClientId] =
GetResult(r => GaClientId(r.<<, r.nextStringOption().map(UUID.fromString), r.<<))
but without success.
How can I do this?
As the error implies, you need to provide a SetParameter, not a GetResult. It should be as follows:
implicit val uuidSetter = SetParameter[Option[UUID]] {
  case (Some(uuid), params) => params.setString(uuid.toString)
  case (None, params) => params.setNull(Types.VARCHAR)
}
And then you should probably add asUpdate to your update SQL:
db.run((sql"""
  INSERT INTO ga_client_id(ip, user_id, clientId) VALUES (
    ${gaClientId.ip}, ${gaClientId.userId}, ${gaClientId.clientId})
  ON CONFLICT DO NOTHING;
""").asUpdate)
Building on the answer of Paul Doelga, you can additionally try:
db.run((sql"""
  INSERT INTO ga_client_id(ip, user_id, clientId) VALUES (
    ${gaClientId.ip}, ${gaClientId.userId}::uuid, ${gaClientId.clientId})
  ON CONFLICT DO NOTHING;
""").asUpdate)
I originally got the answer from
https://github.com/skinny-framework/skinny-framework/issues/301
I guess the main reason for the failure is that Postgres needs a parameter of type UUID, but JDBC doesn't support it directly, so you have to convert it to String and cast it back to UUID in SQL.
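A related sketch that is often used instead (not from the answers above) is to hand the UUID to the PostgreSQL JDBC driver as an object with SQL type OTHER, so the driver binds a native uuid and no ::uuid cast is needed in the statement:
import java.sql.Types
import java.util.UUID
import slick.jdbc.SetParameter

implicit val uuidOptionSetter: SetParameter[Option[UUID]] = SetParameter {
  // The PostgreSQL driver maps a java.util.UUID bound as OTHER to the uuid column type
  case (Some(uuid), params) => params.setObject(uuid, Types.OTHER)
  case (None, params)       => params.setNull(Types.OTHER)
}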

How to create anorm's parser with array?

I am using PostgreSQL, which supports array column fields. To parse a row, I use the parser below, but it has an error at the Array type. I guess I did it wrongly.
case class ServiceRequest(
    id: Pk[Long],
    firstname: String,
    lastname: String,
    images: Array[String])

val parser: RowParser[ServiceRequest] = {
  get[Pk[Long]]("id") ~
    get[String]("firstname") ~
    get[String]("lastname") ~
    get[Array[String]]("images") map { // <-- error here
      case id ~ firstname ~ lastname ~ images =>
        ServiceRequest(id, firstname, lastname, images)
    }
}
Thanks
The Array[T] column type is now natively supported in Anorm as of Play 2.4.x, so you don't have to roll your own converters.
However, it's still not nice to work with in insert or update statements. For example, this:
def updateTags(id: Long, values: Seq[String]): Int =
  DB.withConnection { implicit conn =>
    SQL("UPDATE entries SET tags = {value} WHERE id = {id}")
      .on('value -> values, 'id -> id).executeUpdate
  }
will give you an error
play - Cannot invoke the action, eventually got an error:
org.postgresql.util.PSQLException: ERROR: syntax error at or near "$2"
Put simply, you should create the PreparedStatement parameter using the java.sql.Array type:
.on('value -> conn.createArrayOf("varchar", values.toArray[AnyRef]), ...)
As of now (Play 2.4-M1), java.sql.Array is not converted to ParameterValue by default, which means SQL(...).on('arr -> values: java.sql.Array) still gives a compilation error; you need another implicit conversion to make it compile:
implicit object sqlArrayToStatement extends ToStatement[java.sql.Array] {
  def set(s: PreparedStatement, i: Int, n: java.sql.Array) = s.setArray(i, n)
}
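Putting the two parts together, updateTags could then look like this (a sketch assuming the implicit ToStatement above is in scope):
def updateTags(id: Long, values: Seq[String]): Int =
  DB.withConnection { implicit conn =>
    // Build a java.sql.Array so the driver receives a real varchar[] parameter
    val sqlArray = conn.createArrayOf("varchar", values.toArray[AnyRef])

    SQL("UPDATE entries SET tags = {value} WHERE id = {id}")
      .on('value -> sqlArray, 'id -> id)
      .executeUpdate()
  }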
I have solved my problem by adding this converter:
implicit def rowToStringArray: Column[Array[String]] = Column.nonNull { (value, meta) =>
  val MetaDataItem(qualified, nullable, clazz) = meta
  value match {
    case o: java.sql.Array => Right(o.getArray().asInstanceOf[Array[String]])
    case _ => Left(TypeDoesNotMatch("Cannot convert " + value + ":" + value.asInstanceOf[AnyRef].getClass))
  }
}
The converter transforms the data type from PostgreSQL's JDBC representation into a Java type for the parser used in SELECTs. When you insert, you need another converter going the other way, from the Java type to PostgreSQL's JDBC type.
There is currently a pull request to add support for java.sql.Array as a column mapping: https://github.com/playframework/playframework/pull/3062
Best,

[SlickException: Read NULL value for column (USERS /670412212).LOGIN_ID]

I am using Slick 1.0.0 with play framework 2.1.0. I am getting the following error when I query my Users table. The value of LOGIN_ID is null in DB.
The query I am executing is:
val user = { for { u <- Users if u.providerId === id.id } yield u}.first
This results in the following error:
play.api.Application$$anon$1: Execution exception[[SlickException: Read NULL value for column (USERS /670412212).LOGIN_ID]]
at play.api.Application$class.handleError(Application.scala:289) ~[play_2.10.jar:2.1.0]
at play.api.DefaultApplication.handleError(Application.scala:383) [play_2.10.jar:2.1.0]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$24.apply(PlayDefaultUpstreamHandler.scala:314) [play_2.10.jar:2.1.0]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$12$$anonfun$apply$24.apply(PlayDefaultUpstreamHandler.scala:312) [play_2.10.jar:2.1.0]
at play.api.libs.concurrent.PlayPromise$$anonfun$extend1$1.apply(Promise.scala:113) [play_2.10.jar:2.1.0]
at play.api.libs.concurrent.PlayPromise$$anonfun$extend1$1.apply(Promise.scala:113) [play_2.10.jar:2.1.0]
scala.slick.SlickException: Read NULL value for column (USERS /670412212).LOGIN_ID
at scala.slick.lifted.Column$$anonfun$getResult$1.apply(ColumnBase.scala:29) ~[slick_2.10-1.0.0.jar:1.0.0]
at scala.slick.lifted.TypeMapperDelegate$class.nextValueOrElse(TypeMapper.scala:158) ~[slick_2.10-1.0.0.jar:1.0.0]
at scala.slick.driver.BasicTypeMapperDelegatesComponent$TypeMapperDelegates$StringTypeMapperDelegate.nextValueOrElse(BasicTypeMapperDelegatesComponent.scala:146) ~[slick_2.10-1.0.0.jar:1.0.0]
at scala.slick.lifted.Column.getResult(ColumnBase.scala:28) ~[slick_2.10-1.0.0.jar:1.0.0]
at scala.slick.lifted.Projection15.getResult(Projection.scala:627) ~[slick_2.10-1.0.0.jar:1.0.0]
at scala.slick.lifted.Projection15.getResult(Projection.scala:604) ~[slick_2.10-1.0.0.jar:1.0.0]
My User table is defined as:
package models
import scala.slick.driver.MySQLDriver.simple._
case class User(userId:String,email:String,loginId:String,fullName:String,firstName:String,lastName:String,location:String,homeTown:String,providerId:String,provider:String,state:String,zip:String,accessKey:String,refreshKey:String,avatarUrl:String)
object Users extends Table[User]("USERS") {
  def userId = column[String]("USER_ID", O.PrimaryKey) // This is the primary key column
  def email = column[String]("EMAIL", O.NotNull)
  def loginId = column[String]("LOGIN_ID", O.Nullable)
  def fullName = column[String]("FULL_NAME", O.NotNull)
  def firstName = column[String]("FIRST_NAME", O.Nullable)
  def lastName = column[String]("LAST_NAME", O.Nullable)
  def location = column[String]("LOCATION", O.Nullable)
  def homeTown = column[String]("HOME_TOWN", O.Nullable)
  def providerId = column[String]("PROVIDER_ID", O.Nullable)
  def provider = column[String]("PROVIDER", O.Nullable)
  def state = column[String]("STATE", O.Nullable)
  def zip = column[String]("ZIP", O.Nullable)
  def accessKey = column[String]("ACCESS_KEY", O.Nullable)
  def refreshKey = column[String]("REFRESH_KEY", O.Nullable)
  def avatarUrl = column[String]("AVATAR_URL", O.Nullable)
  // Every table needs a * projection with the same type as the table's type parameter
  def * = userId ~ email ~ loginId ~ fullName ~ firstName ~ lastName ~ location ~ homeTown ~ providerId ~ provider ~ state ~ zip ~ accessKey ~ refreshKey ~ avatarUrl <> (User, User.unapply _)
}
Please help. It looks like Slick cannot handle NULL values from the DB?
Your case class is not OK. If you use O.Nullable, all the corresponding properties have to be Option[String].
If you get this error, you'll have to either make those properties Option, or specify that your query returns an Option.
For example, let's say you do a rightJoin; you might not want to make the properties of the right record optional. In that case you can customize the way you yield your results using .?:
val results = (for {
  (left, right) <- rightRecord.table rightJoin leftRecord.table on (_.xId === _.id)
} yield (rightRecord.id, leftRecord.name.?)).list

results map (r => SomeJoinedRecord(Some(r._1), r._2.getOrElse(default)))
This problem arises if a column contains a null value and the query gets a null back for that column at runtime. As you can see in the code below, my cust_id is nullable, but it has no null values, since there is a job that makes sure it is never null, so the mapping below works. However, it is best practice to look at your table structure and create the class accordingly; this avoids nasty runtime exceptions.
If the table definition in the database is like:
CREATE TABLE public.perf_test (
  dwh_id serial NOT NULL,
  cust_id int4 NULL,
  cust_address varchar(30) NULL,
  partner_id int4 NULL,
  CONSTRAINT perf_test_new_dwh_id_key UNIQUE (dwh_id)
);
The corresponding class definition can be as below. It would be advisable to have cust_id as Option[Int] as well; however, as long as it has values and no nulls, you will not encounter the error.
import slick.jdbc.PostgresProfile.api._

class PerfTest(tag: Tag) extends Table[(Int, Int, Option[String], Option[Int])](tag, "perf_test") {
  def dwhId = column[Int]("dwh_id")
  def custId = column[Int]("cust_id")
  def custAddress = column[Option[String]]("cust_address")
  def partnerId = column[Option[Int]]("partner_id")
  def * = (dwhId, custId, custAddress, partnerId)
}
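For reference, a variant sketch that follows that advice and models the nullable column as Option[Int] (the class name PerfTestSafe is made up for illustration):
class PerfTestSafe(tag: Tag) extends Table[(Int, Option[Int], Option[String], Option[Int])](tag, "perf_test") {
  def dwhId = column[Int]("dwh_id")
  // Nullable in the DB, so map it as Option[Int]; a NULL then reads as None instead of failing
  def custId = column[Option[Int]]("cust_id")
  def custAddress = column[Option[String]]("cust_address")
  def partnerId = column[Option[Int]]("partner_id")
  def * = (dwhId, custId, custAddress, partnerId)
}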
What happened to me was that I had anomalies in the DB and some values were accidentally null, ones that I didn't expect to be. So do not forget to check your data too :)