I am using Slick to analyze a legacy MySQL database (MyISAM engine), and I'm using implicit classes to navigate between entities, e.g. user.logs, with this code:
implicit class UserNav(user: User) {
  def logs = Logs.filter(_.userId === user.id)
}
However, in this case the key is an INT while the foreign key is a BLOB. Using a MySQL client I can run select userId * 1 from logs to get an INT out of the BLOB, and I can even join despite the different data types. But with Slick I get compile errors for the code above.
error: Cannot perform option-mapped operation
       with type: (Option[java.sql.Blob], Int) => R
   for base type: (java.sql.Blob, java.sql.Blob) => Boolean
error: ambiguous implicit values:
both value BooleanOptionColumnCanBeQueryCondition in object CanBeQueryCondition of type => slick.lifted.CanBeQueryCondition[slick.lifted.Rep[Option[Boolean]]]
and value BooleanCanBeQueryCondition in object CanBeQueryCondition of type => slick.lifted.CanBeQueryCondition[Boolean]
match expected type slick.lifted.CanBeQueryCondition[Nothing]
Any idea how to solve this?
As I found out by experimenting, just pretending the column has a different data type works.
So besides the original val in class Log (which extends Table[LogRow])
val userId: Rep[Option[java.sql.Blob]] = column[Option[java.sql.Blob]]("userId", O.Default(None))
I added
val userId_asInt: Rep[Option[Int]] = column[Option[Int]]("userId", O.Default(None))
and changed the navigation def to
def logs = Logs.filter(_.userId_asInt === user.id)
This did the trick – at least with MySQL/MyISAM (no idea if this works with other engines or even other DBMS).
Instead I could also have replaced the data type in the original val userId, but then I would also have had to change that data type everywhere it is used, e.g. in Log.* and Log.? ...
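For reference, here is a consolidated, self-contained sketch of the workaround (assuming a recent Slick 3.x MySQL profile; LogRow, User, Logs and the extra id/message columns are made up for illustration, only the userId / userId_asInt trick is taken from above):

import slick.jdbc.MySQLProfile.api._

case class LogRow(id: Int, userId: Option[java.sql.Blob], message: String)
case class User(id: Int, name: String)

class Log(tag: Tag) extends Table[LogRow](tag, "logs") {
  val id = column[Int]("id", O.PrimaryKey)
  // the original BLOB column, kept for the * projection
  val userId: Rep[Option[java.sql.Blob]] = column[Option[java.sql.Blob]]("userId", O.Default(None))
  // the same DB column, pretending it is an INT; used only for joins/filters
  val userId_asInt: Rep[Option[Int]] = column[Option[Int]]("userId", O.Default(None))
  val message = column[String]("message")
  def * = (id, userId, message) <> (LogRow.tupled, LogRow.unapply)
}
val Logs = TableQuery[Log]

implicit class UserNav(user: User) {
  def logs = Logs.filter(_.userId_asInt === user.id)
}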
I am trying to save a ProductCategory model object in the database. While saving it, categoriesId is a Seq.
case class ProductCategory(productItemId: ProductItemId, categoryId: CategoryId, filterName: FilterName)

/* inside another object */
def saveCategoriesId(productItemId: ProductItemId, categoryId: Seq[CategoryId], filterName: FilterName):
    Future[Seq[ProductItemId]] =
  db.run({
    DBIO.sequence(categoryId.map(id => save(ProductCategory(productItemId, id, filterName))))
  })

def save(productCategory: ProductCategory): DBIO[ProductItemId] =
  query returning query.map(_.productItemId) += productCategory
I am getting the following error:
[error] /Users/vish/Work/jd/app/service/ProductCategoryService.scala:20:35: type mismatch;
[error] found : Seq[slick.dbio.DBIOAction[models.ProductItemId,slick.dbio.NoStream,Nothing]]
[error] required: Seq[slick.dbio.DBIOAction[models.ProductItemId,slick.dbio.NoStream,E]]
[error] DBIO.sequence(categoryId.map(id => save(ProductCategory(productItemId, id, filterName))))
The Play Framework version is 2.6. This question is not a duplicate of this one. This issue has blocked further development. While answering, please also comment on whether this is the correct way of saving categoriesId.
Normally in Scala a compile error of the form found: Nothing, required: E means that the compiler couldn't infer some types. Try specifying some type parameters manually:
db.run({
  // Effect.All matches the declared return type of save: DBIO[T] = DBIOAction[T, NoStream, Effect.All]
  DBIO.sequence[ProductItemId, Seq, Effect.All](categoryId.map(id => save(ProductCategory(productItemId, id, filterName))))
})
or
db.run({
  DBIO.sequence(categoryId.map[DBIO[ProductItemId], Seq[DBIO[ProductItemId]]](id => save(ProductCategory(productItemId, id, filterName))))
})
or introduce a local variable (then the compiler will be able to infer the types itself):
val actions = categoryId.map(id => save(ProductCategory(productItemId, id, filterName)))
db.run({
  DBIO.sequence(actions)
})
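For context, here is a sketch of how that last variant slots back into the method from the question (db, query, save and the model types are assumed to be in scope exactly as in the question):

def saveCategoriesId(productItemId: ProductItemId, categoryId: Seq[CategoryId], filterName: FilterName): Future[Seq[ProductItemId]] = {
  val actions: Seq[DBIO[ProductItemId]] =
    categoryId.map(id => save(ProductCategory(productItemId, id, filterName)))
  // optionally run all inserts atomically: DBIO.sequence(actions).transactionally
  db.run(DBIO.sequence(actions))
}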
This is the query that I am executing in Postgres via JDBC using Anorm:
val sql = s"select row_to_json(t) as result from tablename t;"
The returned object for this query is of type PGobject, which is what the PostgreSQL JDBC driver falls back to when it doesn't recognize the type of the value delivered by the DB.
I want to retrieve this value like this:
db.withConnection { implicit conn =>
  SQL(sql).as(get[JsValue]("result").single)
}
You have two options.
Option One: Simply change the delivered type by casting the json result to text.
val sql = s"select row_to_json(t)::text as result from tablename;"
Option Two: Add an implicit conversion in the scope of your code:
import anorm.{Column, MetaDataItem, TypeDoesNotMatch}
import play.api.libs.json.{JsValue, Json}

implicit val columnToJsValue: Column[JsValue] =
  anorm.Column.nonNull[JsValue] { (value, meta) =>
    val MetaDataItem(qualified, nullable, clazz) = meta
    value match {
      case json: org.postgresql.util.PGobject => Right(Json.parse(json.getValue))
      case _ => Left(TypeDoesNotMatch(s"Cannot convert $value: ${value.asInstanceOf[AnyRef].getClass} to Json for column $qualified"))
    }
  }
I stole that last piece of code from here, and I am not entirely sure about how it works. But it does its job and enables you to use get[JsValue] as a valid conversion type.
I'm trying to create a Spark UDF to extract a Map of (key, value) pairs from a user-defined case class.
The Scala function seems to work fine, but when I try to convert it to a UDF in Spark 2.0, I'm running into the "Schema for type Any is not supported" error.
case class myType(c1: String, c2: Int)

def getCaseClassParams(cc: Product): Map[String, Any] = {
  cc
    .getClass
    .getDeclaredFields             // all field names
    .map(_.getName)
    .zip(cc.productIterator.toSeq) // zipped with all values
    .toMap
}
But when I try to instantiate a function value as a UDF, it results in the following error:
val ccUDF = udf{(cc: Product, i: String) => getCaseClassParams(cc).get(i)}
java.lang.UnsupportedOperationException: Schema for type Any is not supported
at org.apache.spark.sql.catalyst.ScalaReflection$.schemaFor(ScalaReflection.scala:716)
at org.apache.spark.sql.catalyst.ScalaReflection$.schemaFor(ScalaReflection.scala:668)
at org.apache.spark.sql.catalyst.ScalaReflection$.schemaFor(ScalaReflection.scala:654)
at org.apache.spark.sql.functions$.udf(functions.scala:2841)
The error message says it all: you have an Any in the map. The Spark SQL and Dataset API does not support Any in a schema. It has to be one of the supported types: basic types such as String, Integer, etc., a sequence of supported types, or a map of supported types.
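As a rough sketch (not the asker's exact code), one way around this is to make the value type concrete, e.g. Map[String, String], and to pass the case class fields to the UDF individually instead of as a Product; MyType and the toString conversion are assumptions for illustration:

import org.apache.spark.sql.functions.udf

case class MyType(c1: String, c2: Int)

def caseClassParams(cc: Product): Map[String, String] =
  cc.getClass.getDeclaredFields
    .map(_.getName)
    .zip(cc.productIterator.map(_.toString).toSeq)
    .toMap

// Option[String] is a supported return type (a nullable string column)
val ccUDF = udf { (c1: String, c2: Int, key: String) =>
  caseClassParams(MyType(c1, c2)).get(key)
}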
I'm using Slick 3.1.0 and Slick-pg 0.10.0. I have an enum as:
object UserProviders extends Enumeration {
  type Provider = Value
  val Google, Facebook = Value
}
Following the test case, the column mapper works fine once I simply add the following implicit mapper to my customized driver.
implicit val userProviderMapper = createEnumJdbcType("UserProvider", UserProviders, quoteName = true)
However, when using plain SQL, I encountered the following compilation error:
could not find implicit value for parameter e: slick.jdbc.SetParameter[Option[models.UserProviders.Provider]]
I could not find any documentation about this. How can I write plain SQL with enums in Slick? Thanks.
You need to have an implicit of type SetParameter[T] in scope, which tells Slick how to set parameters for some custom type T that it doesn't already know about. For example:
import java.sql.Timestamp
import java.time.Instant
import slick.jdbc.SetParameter

implicit val setInstant: SetParameter[Instant] = SetParameter { (instant, pp) =>
  pp.setTimestamp(new Timestamp(instant.toEpochMilli))
}
The type of pp is PositionedParameters.
You might also come across the need to tell Slick how to extract a query result into some custom type T that it doesn't already know about. For this, you need an implicit GetResult[T] in scope. For example:
implicit def getInstant(implicit get: GetResult[Long]): GetResult[Instant] =
  get andThen (Instant.ofEpochMilli(_))
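Applied to the enum from the question, a sketch could look like the following (an assumption: the value is bound and read as its string name here, so depending on how the column is declared you may still need a cast such as ::"UserProvider" in the SQL itself):

import slick.jdbc.{GetResult, SetParameter}

implicit val setProvider: SetParameter[Option[UserProviders.Provider]] =
  SetParameter { (provider, pp) => pp.setStringOption(provider.map(_.toString)) }

implicit val getProvider: GetResult[Option[UserProviders.Provider]] =
  GetResult(r => r.nextStringOption().map(UserProviders.withName))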
The code looks like this:
case class Supplier(snum: String, sname: String, status: Int, city: String)

class Suppliers(tag: Tag) extends Table[Supplier](tag, "suppliers") {
  def snum = column[String]("snum")
  def sname = column[String]("sname")
  def status = column[Int]("status")
  def city = column[String]("city")
  def * = (snum, sname, status, city) <> (Supplier.tupled, Supplier.unapply _)
}

val suppliers = TableQuery[Suppliers]
val gr = suppliers.groupBy(_.city).map { case (k, v) => (k, v) }.buildColl[Set]
When I compile it, it complains:
Error:(69, 43) No matching Shape found.
Slick does not know how to map the given types.
Possible causes: T in Table[T] does not match your * projection. Or you use an unsupported type in a Query (e.g. scala List).
Required level: scala.slick.lifted.FlatShapeLevel
Source type: (scala.slick.lifted.Column[String], scala.slick.lifted.Query[A$A113.this.Suppliers,A$A113.this.Supplier,[+A]Seq[A]])
Unpacked type: T
Packed type: G
lazy val gr=suppliers.groupBy(_.city).map{ case (k,v) => (k, v) }.buildColl[Set]
^
Error:(69, 43) not enough arguments for method map: (implicit shape: scala.slick.lifted.Shape[_ <: scala.slick.lifted.FlatShapeLevel, (scala.slick.lifted.Column[String], scala.slick.lifted.Query[A$A113.this.Suppliers,A$A113.this.Suppliers#TableElementType,Seq]), T, G])scala.slick.lifted.Query[G,T,Seq].
Unspecified value parameter shape.
lazy val gr=suppliers.groupBy(_.city).map{ case (k,v) => (k, v) }.buildColl[Set]
^
But if I change case (k, v) => (k, v) to case (k, v) => (k, v.length), it works again.
Does anyone have ideas about this?
The reason is: Scala's groupBy returns a Map[..., Seq[...]], in other words a collection containing other collections. A nested collection! But SQL does not support nested collections, it always returns flat tables. Supporting nested collections would require a more sophisticated translation from Scala to SQL than Slick currently does. So instead Slick prohibits this case and requires you to make it flat. case(k,v)=>(k,v.length) does that for example, it turns the type into Map[..., Int]. Slick tells you that by saying Required level: scala.slick.lifted.FlatShapeLevel.
A workaround is doing the grouping on the client, e.g. suppliers.run.groupBy(_.city) in Slick 2.x, or db.run(suppliers.result).map(_.groupBy(_.city)) in Slick 3.x. In the case of a join it can be more efficient to run two queries and join them locally instead of transferring a Cartesian product and grouping afterwards.
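As a rough sketch under the Slick 3.x API (assuming the Suppliers table above and a db: Database in scope), the two flat alternatives look like this:

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

// aggregate on the server: one row per city, which is a flat result
val countsPerCity = suppliers.groupBy(_.city).map { case (city, group) => (city, group.length) }
val counts: Future[Seq[(String, Int)]] = db.run(countsPerCity.result)

// or run the flat query and group on the client
val grouped: Future[Map[String, Seq[Supplier]]] =
  db.run(suppliers.result).map(_.groupBy(_.city))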