I have a DBIO[Seq[tuple]] and I would like to map it to DBIO[Seq[customCaseClass]].
I am aware that I could do the transformation on the db.run() result with something like customCaseClass.tupled(row) (see this answer). However, I am interested in composing the DBIO return value in various functions.
There are three places where you can do this: at the Query level, DBIO level, and (as you noted, and rejected) at the Future level.
Query
At the query level, the conversion will happen as part of the execution of the query on Slick's own execution context.
It would look something like this:
// Given some query that returns a tuple...
val tupleQ: Query[(Rep[String],Rep[String]), (String,String), Seq] =
table.map{ row => (row.column1, row.column2) }
// ...which we'd like to project into this:
case class SomeCaseClass(v1: String, v2: String)
// ...we can use the mapTo macro to generate the conversion:
val ccQ: Query[Rep[SomeCaseClass], SomeCaseClass, Seq] =
tupleQ.map{ _.mapTo[SomeCaseClass] }
If this is all you're doing, then maybe the default projection (def * ...) is the place for it.
If you need more control over the conversion logic, you can use the lower-level <> in place of mapTo. Section 5.2 of Essential Slick gives more detail on this.
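For illustration, here's a sketch of the <> version (untested; it assumes the same tupleQ and SomeCaseClass as above):
// <> takes two functions -- tuple => case class and case class => Option[tuple] --
// giving you full control over the conversion in both directions:
val ccQ2 = tupleQ.map { pair =>
  pair <> ((SomeCaseClass.apply _).tupled, SomeCaseClass.unapply)
}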
DBIO
The question was specifically about DBIO. The conversion there is going to run on your own execution context.
That would look something like this:
// Given a DBIO that returns a tuple...
val tupleD: DBIO[Seq[(String,String)]] =
table.map(row => (row.column1, row.column2)).result
// ... we can use any of the DBIO combinators to convert it, such as map:
val ccD: DBIO[Seq[SomeCaseClass]] =
tupleD.map{ pairs => pairs.map{ case (a, b) => SomeCaseClass(a, b) } }
(...or tupleD.map(pairs => pairs.map(SomeCaseClass.tupled)) as you noted).
The two big benefits you get at this level are:
you have access to the values, such as (a, b), and so can make decisions about what you want to do with them.
being part of an action means you could take part in a transaction.
Chapter 4 of Essential Slick lists out many of the DBIO combinators. The Slick Manual also describes the combinators.
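To make the transaction point concrete, here's a sketch (not from the original answer; someOtherAction is a hypothetical DBIO built from the values, and an implicit ExecutionContext plus the profile's api._ import are assumed):
// Because ccD is still a DBIO, it composes with other actions,
// and the combined action can run inside a single transaction:
val combined: DBIO[Unit] = (for {
  rows <- ccD
  _    <- someOtherAction(rows) // hypothetical follow-up action
} yield ()).transactionally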
Future
The final place is in the Future, which looks very much like the DBIO version but after the db.run (as you've already found).
Related
My table schema in Postgres is the following:
I store a List[String] in the 2nd column, and I wrote a working method that updates this list with the union of a new list and the old list:
def update(userId: Long, unknownWords: List[String]) = db.run {
for {
y <- lists.filter(_.userId === userId).result
words = y.map(_.unknownWords).flatMap(_.union(unknownWords)).distinct.toList
x <- lists.filter(_.userId === userId).map(_.unknownWords).update(words)
} yield x
}
Is there any way to write this better? And maybe the question is pretty dumb, but I don't quite understand why I should apply .result to the first line of the for expression; the filter().map() chain on the 3rd line works fine, so is there something wrong with the types?
Why .result
The reason you need to apply .result is to do with the difference between queries (Query type) and actions (DBIO) in Slick.
By itself, the lists.filter line is a query. However, the third line (the update) is an action. If you left the .result off, your for comprehension would have a type mismatch between a Query and a DBIO (action).
Because you're going to db.run the result of the for comprehension, the for comprehension needs to result in a DBIO action rather than a query. In other words, putting a .result there is the right thing to do, because you're constructing an action to run in the database (namely, fetching some data for the user).
You're then going to run another action later to update the database. So in all, you're using for to combine two actions (two runnable SQL expressions) into a single DBIO. That's the x you yield, which is executed by db.run.
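To make the types concrete, here's a sketch (assuming the List[String] column from the question):
// The filter/map chain is still a query, describing SQL:
val query: Query[Rep[List[String]], List[String], Seq] =
  lists.filter(_.userId === userId).map(_.unknownWords)
// .result turns it into an action that can be composed and run:
val selectAction: DBIO[Seq[List[String]]] = query.result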
Better?
This is working for you, and that's just fine.
There's a small amount of duplication: you might spot that the query on the first line is very similar to the update query. You could abstract that out into a value:
val userLists = lists.filter(_.userId === userId)
That's a query. In fact, you could go a step further and modify the query to just select the unknownWords column:
val userUnknownWords = lists.filter(_.userId === userId).map(_.unknownWords)
I've not tried to compile this, but it would make your code something like:
def update(userId: Long, unknownWords: List[String]) = {
  val userUnknownWords = lists.filter(_.userId === userId).map(_.unknownWords)
  db.run {
    for {
      y <- userUnknownWords.result
      words = y.flatMap(_.union(unknownWords)).distinct.toList
      x <- userUnknownWords.update(words)
    } yield x
  }
}
Given that you're composing two actions (a select and an update), you could use DBIO.flatMap in place of the for comprehension. You might find it clearer. Or not. But here's an example...
The argument to DBIO.flatMap needs to be another action. That is, flatMap is a way to sequence actions. In particular, it's a way to do that while using the value from the database.
So you could replace the for comprehension with:
val action: DBIO[Int] =
  userUnknownWords.result.flatMap { currentWords =>
    userUnknownWords.update(
      currentWords.flatMap(_.union(unknownWords)).distinct.toList
    )
  }
(Again, apologies for not compiling the above: I don't have the details of the types, but hopefully this gives a flavour of how the code could work.)
The final action is the one you can pass to db.run. It returns the number of rows changed.
I am trying to implement generic grouping using Slick 3.2.3. By generic grouping I mean grouping the same query by different parameters or sets thereof.
Supposing I have a table:
class MyTable(tag: Tag) extends Table[MyEntry](tag, "my_table") {
def text1 = column[String]("text1")
def text2 = column[Option[String]]("text2")
def list = column[List[String]]("list") // I am using postgres+slick_pg
...
}
Then I have a complex query with several joins, and I would like to be able to group it by text1, (text1, text2), list etc. One way to do it would be to define a generic function which performs the grouping using an extractor parameter:
private def getData[T](extractor: MyTable => T) = {
// supposing MyTable comes second in the list
// of joined tables in my complex query
val groupedQuery = myComplexQuery.groupBy(x => extractor(x._2))
...
// here goes aggregation functions, mapping etc.
}
where one of the extractor implementations may be defined as
val extractor: MyTable => (Rep[String], Rep[Option[String]]) = me => me.text1 -> me.text2
However, since extractor is generic, groupBy cannot find a matching Shape for the type T, which means I will have to provide it as well. My question is: how exactly do I define such Shapes? The documentation for the slick.lifted package lacks examples, and it is not exactly obvious what the generic types K, T, G and P mean in the Query#groupBy definition (or FlatShapeLevel, for that matter). I would appreciate it if somebody provided examples of such extractor functions, at least for a primitive type (String) and a Tuple2 (say, (String, Option[String])). Or perhaps there is a better way to achieve the same result which I have overlooked? Thanks.
There is a similar question here but it doesn't actually answer the question.
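For what it's worth, here's an untested sketch of how the Shape evidence could be threaded through such a generic function (assuming Slick 3.2.x; the type parameters mirror those of Query#groupBy, where K is the mixed (Rep) type of the grouping key, T its unpacked value type and G its packed type):
import slick.lifted.{FlatShapeLevel, Shape}

private def getData[K, T, G](extractor: MyTable => K)(
    implicit kshape: Shape[_ <: FlatShapeLevel, K, T, G]) = {
  // supposing MyTable comes second in the list of joined tables
  val groupedQuery = myComplexQuery.groupBy(x => extractor(x._2))
  // ... aggregation functions, mapping etc.
}

// The Shape is then resolved at each call site:
getData(me => (me.text1, me.text2)) // group by (text1, text2)
getData(_.text1)                    // group by text1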
Is it possible to use IN clause in plain sql Slick?
Note that this is actually part of a larger and more complex query, so I do need to use plain sql instead of slick's lifted embedding. Something like the following will be good:
val ids = List(2,4,9)
sql"SELECT * FROM coffee WHERE id IN ($ids)"
The sql prefix unlocks a StringContext where you can set SQL parameters. There is no SQL parameter for a list, so you can easily open yourself up to SQL injection here if you're not careful. There are some good (and some dangerous) suggestions about dealing with this problem with SQL Server on this question. You have a few options:
Your best bet is probably to use the #$ operator together with mkString to interpolate dynamic SQL:
val sql = sql"""SELECT * FROM coffee WHERE id IN (#${ids.mkString(",")})"""
This doesn't properly use parameters and therefore might be open to SQL-injection and other problems.
Another option is to use regular string interpolation and mkString to build the statement:
val query = s"""SELECT * FROM coffee WHERE id IN (${ids.mkString(",")})"""
StaticQuery.queryNA[Coffee](query)
This is essentially the same approach as using #$, but might be more flexible in the general case.
If SQL-injection vulnerability is a major concern (e.g. if the elements of ids are user provided), you can build a query with a parameter for each element of ids. Then you'll need to provide a custom SetParameter instance so that slick can turn the List into parameters:
implicit val setStringListParameter = new SetParameter[List[String]]{
def apply(v1: List[String], v2: PositionedParameters): Unit = {
v1.foreach(v2.setString)
}
}
val idsInClause = List.fill(ids.length)("?").mkString("(", ",", ")")
val query = s"""SELECT * FROM coffee WHERE id IN ($idsInClause)"""
Q.query[List[String], String](query).apply(ids).list(s)
Since your ids are Ints, this is probably less of a concern, but if you prefer this method, you would just need to change the setStringListParameter to use Int instead of String:
val ids = List(610113193610210035L, 220702198208189710L)
implicit object SetListLong extends SetParameter[List[Long]] {
def apply(vList: List[Long], pp: PositionedParameters) {
vList.foreach(pp.setLong)
}
}
val select = sql"""
select idnum from idnum_0
where idnum in ($ids#${",?" * (ids.size - 1)})
""".as[Long]
@Ben Reich is right. Here is another code sample, tested on Slick 3.1.0:
($ids#${",?" * (ids.size - 1)})
Although this is not a universal answer and may not be what the author wanted, I still want to point it out to whoever views this question.
Some DB backends support array types, and there are extensions to Slick that allow setting these array types in the interpolations.
For example, Postgres has the syntax where column = any(array), and with slick-pg you can use this syntax like so:
def query(ids: Seq[Long]) = db.run(sql"select * from table where ids = any($ids)".as[Long])
This brings much cleaner syntax, which is friendlier to the statement compiler cache, and is also safe from SQL injection and the overall danger of creating malformed SQL with the #$var interpolation syntax.
I have written a small extension to Slick that addresses exactly this problem: https://github.com/rtkaczyk/inslick
For the given example the solution would be:
import accode.inslick.syntax._
val ids = List(2,4,9)
sqli"SELECT * FROM coffee WHERE id IN *$ids"
Additionally InSlick works with iterables of tuples or case classes. It's available for all Slick 3.x versions and Scala versions 2.11 - 2.13. We've been using it in production for several months at the company I work for.
The interpolation is safe from SQL injection. It utilises a macro which rewrites the query in a way similar to trydofor's answer.
I ran into essentially this same issue in Slick 3.3.3 when trying to use a Seq[Long] in an IN query for MySQL. I kept getting a compilation error from Slick of:
could not find implicit value for parameter e: slick.jdbc.SetParameter[Seq[Long]]
The original question would have been getting something like:
could not find implicit value for parameter e: slick.jdbc.SetParameter[List[Int]]
Slick 3.3.X+ can handle binding the parameters for the IN query, as long as we provide the implicit definition of how Slick should do so for the types we're using. This means adding the implicit val definition somewhere at the class level. So, like:
class MyClass {
// THIS IS THE KEY LINE TO ENABLE SLICK TO BIND THE PARAMS
implicit val setListInt = SetParameter[List[Int]]((inputList, params) => inputList.foreach(params.setInt))
def queryByHardcodedIds() = {
val ids: List[Int] = List(2,4,9)
sql"SELECT * FROM coffee WHERE id IN ($ids)" // SLICK CAN AUTO-HANDLE BINDING NOW
}
}
It's similar for the case of Seq[Long] and others. Just make sure your types and bindings align with what you need Slick to handle:
implicit val setSeqLong = SetParameter[Seq[Long]]((inputList, params) => inputList.foreach(params.setLong))
// ^^Note the `SetParameter[Seq[Long]]` & `.setLong` for type alignment
Often I find myself wanting to chain a side-effecting function to the end of another method call in a more functional-looking way, but I don't want to transform the original type to Unit. Suppose I have a read method that searches a database for a record, returning Option[Record].
def read(id: Long): Option[Record] = ...
If read returns Some(record), then I might want to cache that value and move on. I could do something like this:
read(id).map { record =>
// Cache the record
record
}
But, I would like to avoid the above code and end up with something more like this to make it more clear as to what's happening:
read(id).withSideEffect { record =>
// Cache the record
}
Where withSideEffect returns the same value as read(id). After searching high and low, I can't find any method on any type that does something like this. The closest solution I can come up with is using implicit magic:
implicit class ExtendedOption[A](underlying: Option[A]) {
def withSideEffect(op: A => Unit): Option[A] = {
underlying.foreach(op)
underlying
}
}
Are there any Scala types I may have overlooked with methods like this one? And are there are any potential design flaws from using such a method?
Future.andThen (scaladoc) takes a side-effect and returns a future of the current value to facilitate fluent chaining.
The return type is not this.type.
See also duplicate questions about tap.
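For reference, since Scala 2.13 the standard library ships this combinator as tap in scala.util.chaining (a sketch using the read method from the question; cacheRecord is a hypothetical caching function):
import scala.util.chaining._

// tap applies the side-effecting function and returns the original value:
val record: Option[Record] = read(id).tap(_.foreach(cacheRecord))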
You can use scalaz for "explicit annotation" of side-effectful functions. In scalaz 7.0.6 it's the IO monad: http://eed3si9n.com/learning-scalaz/IO+Monad.html
It's deprecated in scalaz 7.1; there I would do something like this with Task:
val readAndCache = Task.delay(read(id)).map { record => cacheRecord(record); record }
readAndCache.run // run the task for its side effects
I'm writing a data structure that converts the results of a database query. The raw structure is a Java ResultSet, and it would be converted to a map or class which permits accessing different fields of that data structure by either a named method call or by passing a string into apply(). Clearly, different values may have different types. In order to reduce the burden on clients of this data structure, my preference is that one should not need to cast the values, but that the value fetched still has the correct type.
For example, suppose I'm doing a query that fetches two column values, one an Int, the other a String. The names of the resulting columns are "a" and "b" respectively. Some ideal syntax might be the following:
val javaResultSet = dbQuery("select a, b from table limit 1")
// with ResultSet, particular values can be accessed like this:
val a = javaResultSet.getInt("a")
val b = javaResultSet.getString("b")
// but this syntax is undesirable.
// since I want to convert this to a single data structure,
// the preferred syntax might look something like this:
val newStructure = toDataStructure[Int, String](javaResultSet)("a", "b")
// that is, I'm willing to state the types during the instantiation
// of such a data structure.
// then,
val a: Int = newStructure("a") // OR
val a: Int = newStructure.a
// in both cases, "val a" does not require asInstanceOf[Int].
I've been trying to determine what sort of data structure might allow this and I could not figure out a way around the casting.
The other requirement is obviously that I would like to define a single data structure used for all db queries. I realize I could easily define a case class or similar per call and that solves the typing issue, but such a solution does not scale well when many db queries are being written. I suspect some people are going to propose using some sort of ORM, but let us assume for my case that it is preferred to maintain the query in the form of a string.
Anyone have any suggestions? Thanks!
To do this without casting, one needs more information about the query, and one needs that information at compile time.
I suspect some people are going to propose using some sort of ORM, but let us assume for my case that it is preferred to maintain the query in the form of a string.
Your suspicion is right, and you will not get around this. If current ORMs or DSLs like Squeryl don't suit your fancy, you can create your own. But I doubt you will be able to use plain query strings.
The basic problem is that you don't know how many columns there will be in any given query, and so you don't know how many type parameters the data structure should have and it's not possible to abstract over the number of type parameters.
There is, however, a data structure that exists in different variants for different numbers of type parameters: the tuple (e.g. Tuple2, Tuple3 etc.). You could define parameterized mapping functions for different numbers of parameters that return tuples, like this:
import java.sql.ResultSet

def toDataStructure2[T1, T2](rs: ResultSet)(c1: String, c2: String): (T1, T2) =
  (rs.getObject(c1).asInstanceOf[T1],
   rs.getObject(c2).asInstanceOf[T2])

def toDataStructure3[T1, T2, T3](rs: ResultSet)(c1: String, c2: String, c3: String): (T1, T2, T3) =
  (rs.getObject(c1).asInstanceOf[T1],
   rs.getObject(c2).asInstanceOf[T2],
   rs.getObject(c3).asInstanceOf[T3])
You would have to define these for as many columns as you expect to have in your tables (at most 22, the largest tuple size). This of course depends on getObject followed by a cast to the given type being safe.
In your example you could use the resulting tuple as follows:
val (a, b) = toDataStructure2[Int, String](javaResultSet)("a", "b")
If you decide to go the route of heterogeneous collections, there are some very interesting posts on heterogeneously typed lists.
One, for instance, is
http://jnordenberg.blogspot.com/2008/08/hlist-in-scala.html
http://jnordenberg.blogspot.com/2008/09/hlist-in-scala-revisited-or-scala.html
with an implementation at
http://www.assembla.com/wiki/show/metascala
A second great series of posts starts with
http://apocalisp.wordpress.com/2010/07/06/type-level-programming-in-scala-part-6a-heterogeneous-list%C2%A0basics/
The series continues with parts b, c and d, linked from part a.
Finally, there is a talk by Daniel Spiewak which touches on HOMaps:
http://vimeo.com/13518456
So, all this is to say that perhaps you can build your solution from these ideas. Sorry that I don't have a specific example, but I admit I haven't tried these out myself yet!
Joshua Bloch introduced a heterogeneous collection that can be written in Java. I once adapted it a little; it now works as a value register and is basically a wrapper around two maps. Here is the code and this is how you can use it. But this is just FYI, since you are interested in a Scala solution.
In Scala I would start by playing with tuples. Tuples are kinda heterogeneous collections. The results can be, but don't have to be, accessed through fields like _1, _2, _3 and so on. But you don't want that; you want names. This is how you can assign names to those:
scala> val tuple = (1, "word")
tuple: (Int, String) = (1,word)
scala> val (a, b) = tuple
a: Int = 1
b: String = word
So as mentioned before I would try to build a ResultSetWrapper around tuples.
If you want "extract the column value by name" on a plain bean instance, you can probably:
use reflection and casts, which you (and I) don't like.
use a ResultSetToJavaBeanMapper provided by most ORM libraries, which is a little heavy and coupled.
write a scala compiler plugin, which is too complex to control.
so, I guess a lightweight ORM with the following features may satisfy you:
support raw SQL
support a lightweight, declarative and adaptive ResultSetToJavaBeanMapper
nothing else.
I made an experimental project based on that idea. Note that it's still an ORM, but I think it may be useful to you, or can at least bring you some hints.
Usage:
declare the model:
//declare DB schema
trait UserDef extends TableDef {
var name = property[String]("name", title = Some("Name"))
var age1 = property[Int]("age", primary = true)
}
//declare model, and it mixes in properties as {var name = ""}
@BeanInfo class User extends Model with UserDef
//declare a object.
//it mixes in properties as {var name = Property[String]("name") }
//and, object User is a Mapper[User], thus, it can translate ResultSet to a User instance.
object `package`{
@BeanInfo implicit object User extends Table[User]("users") with UserDef
}
then call raw sql, the implicit Mapper[User] works for you:
val users = SQL("select name, age from users").all[User]
users.foreach{user => println(user.name)}
or even build a type safe query:
val users = User.q.where(User.age > 20).where(User.name like "%liu%").all[User]
for more, see unit test:
https://github.com/liusong1111/soupy-orm/blob/master/src/test/scala/mapper/SoupyMapperSpec.scala
project home:
https://github.com/liusong1111/soupy-orm
It uses "abstract Type" and "implicit" heavily to make the magic happen, and you can check source code of TableDef, Table, Model for detail.
Several million years ago I wrote an example showing how to use Scala's type system to push and pull values from a ResultSet. Check it out; it matches up with what you want to do fairly closely.
implicit val conn = connect("jdbc:h2:f2", "sa", "");
implicit val s: Statement = conn << setup;
val insertPerson = conn prepareStatement "insert into person(type, name) values(?, ?)";
for (name <- names)
  insertPerson<<rnd.nextInt(10)<<name<<!;
for (person <- query("select * from person", rs => Person(rs,rs,rs)))
  println(person.toXML);
for (person <- "select * from person" <<! (rs => Person(rs,rs,rs)))
  println(person.toXML);
Primitive types are used to guide the Scala compiler into selecting the right functions on the ResultSet.