There seems to be no easy way to use an "in" clause in Anorm:
val ids = List("111", "222", "333")
val users = SQL("select * from users where id in ({ids})").on('ids-> ???).as(parser *)
How do I replace the ??? part?
I tried:
on('ids -> ids)
on('ids -> ids.mkString("'","','","'"))
on('ids -> ids.mkString("','"))
But none works.
I found a discussion of exactly the same problem (https://groups.google.com/d/topic/play-framework/qls6dhhdayc/discussion), where the author has a complex solution:
val params = List(1, 2, 3)
val paramsList = for ( i <- 0 until params.size ) yield ("userId" + i)
// ---> results in List("userId0", "userId1", "userId2")
User.find("id in ({%s})"
// produces "id in ({userId0},{userId1},{userId2})"
.format(paramsList.mkString("},{"))
// produces Map("userId0" -> 1, "userId1" -> 2, ...)
.on(paramsList.zip(params))
.list()
This is way too complicated.
Is there an easier way? Or should Play provide something to make this easier?
Since 2.3, Anorm supports such a case (and more): see "Using multi-value parameter".
Going back to the initial example, it gives:
val ids = Seq("111", "222", "333")
val users = SQL("select * from users where id in ({ids})").on('ids-> ids).as(parser *)
Nailed it! There haven't really been any more updates on this thread, but it seems to still be relevant. Because of that, and because there isn't an answer, I thought I'd throw mine in for consideration.
Anorm doesn't support 'IN' clauses. I doubt it ever will. There's nothing you can do to make them work; I even read a post where Anorm specifically took out those clauses because they made it feel 'like an ORM'.
It's fairly easy, however, to wrap the SqlQuery in a short class that supports the IN clause, and then convert that class into a SqlQuery when needed.
Instead of pasting the code in here, because it gets a little long, here is the link to my blog, where I've posted the code and how to use it.
In clause with Anorm
Basically, when you have the code from my blog, your statements look like this:
RichSQL(""" SELECT * FROM users WHERE id IN ({userIds}) """).onList("userIds" -> userIds).toSQL.as(userParser *)(connection)
Maybe it's too late, but here is a tip for using custom string interpolation that also solves the problem of the IN clause.
I have implemented a helper class to define a string interpolation. You can see it below, and you can simply copy and paste, but first let's see how you can use it.
Instead of writing something like
SQL("select * from car where brand = {brand} and color = {color} and year = {year} order by name").on("brand" -> brand, "color" -> color, "year" -> year).as(Car.simple *)
You can simply write:
SQL"select * from car where brand = $brand and color = $color and year = $year order by name".as(Car.simple *)
So using string interpolation is more concise and easier to read.
And for the case of using the IN clause, you can write:
val carIds = List(1, 3, 5)
SQLin"select * from car where id in ($carIds)".as(Car.simple *)
Or for your example:
val ids = List("111", "222", "333")
val users = SQLin"select * from users where id in ($ids)".as(parser *)
For more information about string interpolation, check this link
The code for this implicit class is the following:
package utils

object AnormHelpers {

  def wild(str: String) = "%" + str + "%"

  implicit class AnormHelper(val sc: StringContext) extends AnyVal {

    // SQL raw -> it simply creates an anorm.Sql using string interpolation
    def SQLr(args: Any*) = {
      // Matches every argument to an arbitrary name -> ("p0", value0), ("p1", value1), ...
      val params = args.zipWithIndex.map(p => ("p" + p._2, p._1))
      // Regenerates the original query, substituting each argument by its name in brackets -> "select * from user where id = {p0}"
      val query = (sc.parts zip params).map { case (s, p) => s + "{" + p._1 + "}" }.mkString("") + sc.parts.last
      // Creates the anorm.Sql
      anorm.SQL(query).on(params.map(p => (p._1, anorm.toParameterValue(p._2))): _*)
    }

    // SQL -> similar to SQLr but trimming any string value
    def SQL(args: Any*) = {
      val params = args.zipWithIndex.map {
        case (arg: String, index) => ("p" + index, arg.trim.replaceAll("\\s{2,}", " "))
        case (arg, index)         => ("p" + index, arg)
      }
      val query = (sc.parts zip params).map { case (s, p) => s + "{" + p._1 + "}" }.mkString("") + sc.parts.last
      anorm.SQL(query).on(params.map(p => (p._1, anorm.toParameterValue(p._2))): _*)
    }

    // SQL in clause -> similar to SQL but expanding Seq[Any] values separated by commas
    def SQLin(args: Any*) = {
      // Matches every argument to an arbitrary name -> ("p0", value0), ("p1", value1), ...
      val params = args.zipWithIndex.map {
        case (arg: String, index) => ("p" + index, arg.trim.replaceAll("\\s{2,}", " "))
        case (arg, index)         => ("p" + index, arg)
      }
      // Expands the Seq[Any] values with their names -> ("p0", v0), ("p1_0", v1_item0), ("p1_1", v1_item1), ...
      val onParams = params.flatMap {
        case (name, values: Seq[Any]) => values.zipWithIndex.map(v => (name + "_" + v._2, anorm.toParameterValue(v._1)))
        case (name, value)            => List((name, anorm.toParameterValue(value)))
      }
      // Regenerates the original query, substituting each argument by its name and expanding Seq[Any] values separated by commas
      val query = (sc.parts zip params).map {
        case (s, (name, values: Seq[Any])) => s + values.indices.map(name + "_" + _).mkString("{", "},{", "}")
        case (s, (name, value))            => s + "{" + name + "}"
      }.mkString("") + sc.parts.last
      // Creates the anorm.Sql
      anorm.SQL(query).on(onParams: _*)
    }
  }
}
It's probably late, but I'm adding this for others looking for the same thing.
You could use some built-in database features to overcome this. This is one of the advantages Anorm has over ORMs. For example, if you are using PostgreSQL you could pass your list as an array and unnest the array in your query:
I assume the ids are integers.
val ids = List(1, 2, 3)
val idsPgArray = "{%s}".format(ids.mkString(",")) // outputs {1,2,3}
val users = SQL(
  """select * from users where id in (select unnest({idsPgArray}::integer[]))"""
).on('idsPgArray -> idsPgArray).as(parser *)
The executed query will be
select * from users where id in (select unnest('{1,2,3}'::integer[]))
which is equal to
select * from users where id in (1, 2, 3)
I had the same problem recently. Unfortunately, there doesn't seem to be a way to do it without string formatting, which leaves you vulnerable to SQL injection.
What I ended up doing was kinda sanitizing it by transforming it to a list of ints and back:
val input = "1,2,3,4,5"
// here there will be an exception if someone is trying to sql-inject you
val list = (input.split(",") map Integer.parseInt).toList
// re-create the "in" string
SQL("select * from foo where foo.id in (%s)" format list.mkString(","))
User.find("id in (%s)"
.format(params.map("'%s'".format(_)).mkString(",") )
.list()
val ids = List("111", "222", "333")
val users = SQL("select * from users
where id in
(" + ids.reduceLeft((acc, s) => acc + "," + s) + ")").as(parser *)
case class Submission(name: String, plannedDate: Option[LocalDate], revisedDate: Option[LocalDate])
val submission_1 = Submission("Åwesh Care", Some(LocalDate.parse("2020-05-11")), Some(LocalDate.parse("2020-06-11")))
val submission_2 = Submission("robin Dore", Some(LocalDate.parse("2020-05-11")), Some(LocalDate.parse("2020-05-30")))
val submission_3 = Submission("AIMS Hospital", Some(LocalDate.parse("2020-01-24")), Some(LocalDate.parse("2020-07-30")))
val submissions = Seq(submission_1, submission_2, submission_3)
Split the submissions so that submissions sharing the same plannedDate and/or revisedDate
go to sameDateGroup and the others go to remainder.
val (sameDateGroup, remainder) = someFunction(submissions)
Example result as below:
sameDateGroup should have:
Seq(Submission("Åwesh Care", Some(LocalDate.parse("2020-05-11")), Some(LocalDate.parse("2020-06-11"))),
    Submission("robin Dore", Some(LocalDate.parse("2020-05-11")), Some(LocalDate.parse("2020-05-30"))))
and remainder should have:
Seq(Submission("AIMS Hospital", Some(LocalDate.parse("2020-01-24")), Some(LocalDate.parse("2020-07-30"))))
So, if I understand the logic here, submission A shares a date with submission B (and both would go in the sameDateGroup) IFF:
subA.plannedDate == subB.plannedDate
OR subA.plannedDate == subB.revisedDate
OR subA.revisedDate == subB.plannedDate
OR subA.revisedDate == subB.revisedDate
Likewise, and conversely, submission C belongs in the remainder category IFF:
subC.plannedDate // is unique among all planned dates
AND subC.plannedDate // does not exist among all revised dates
AND subC.revisedDate // is unique among all revised dates
AND subC.revisedDate // does not exist among all planned dates
Given all that, I think this does what you're describing.
import java.time.LocalDate
case class Submission(name : String
,plannedDate : Option[LocalDate]
,revisedDate : Option[LocalDate])
val submission_1 = Submission("Åwesh Care"
,Some(LocalDate.parse("2020-05-11"))
,Some(LocalDate.parse("2020-06-11")))
val submission_2 = Submission("robin Dore"
,Some(LocalDate.parse("2020-05-11"))
,Some(LocalDate.parse("2020-05-30")))
val submission_3 = Submission("AIMS Hospital"
,Some(LocalDate.parse("2020-01-24"))
,Some(LocalDate.parse("2020-07-30")))
val submissions = Seq(submission_1, submission_2, submission_3)
val pDates = submissions.groupBy(_.plannedDate)
val rDates = submissions.groupBy(_.revisedDate)
val (sameDateGroup, remainder) = submissions.partition(sub =>
pDates(sub.plannedDate).lengthIs > 1 ||
rDates(sub.revisedDate).lengthIs > 1 ||
pDates.keySet(sub.revisedDate) ||
rDates.keySet(sub.plannedDate))
A simple way to do this is to count the number of matching submissions for each submission in the list, and use that to partition the list:
def matching(s1: Submission, s2: Submission) =
s1.plannedDate == s2.plannedDate || s1.revisedDate == s2.revisedDate
val (sameDateGroup, remainder) =
submissions.partition { s1 =>
submissions.count(s2 => matching(s1, s2)) > 1
}
The matching function can contain whatever specific test is required.
This is O(n^2) so a more sophisticated algorithm would be needed for very long lists.
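For instance, here is a sketch of one O(n) variant of the same partition (assuming Scala 2.13 for view.mapValues), which counts date occurrences up front instead of re-scanning the list for every submission:
// Count how often each planned/revised date occurs, then partition in one pass.
val plannedCounts = submissions.groupBy(_.plannedDate).view.mapValues(_.size).toMap
val revisedCounts = submissions.groupBy(_.revisedDate).view.mapValues(_.size).toMap

val (sameDateGroup, remainder) = submissions.partition { s =>
  plannedCounts(s.plannedDate) > 1 || revisedCounts(s.revisedDate) > 1
}
This checks the same plannedDate/revisedDate equality as the matching function above, just via precomputed counts.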
I think this will do the trick.
I'm sorry, some variable names are not very meaningful, because I used a different case class when trying this. For some reason I only thought about using .groupBy later. So I don't really recommend using this, as it is a bit hard to comprehend and can be solved more easily with groupBy.
case class Submission(name: String, plannedDate: Option[String], revisedDate: Option[String])
val l =
List(
Submission("Åwesh Care", Some("2020-05-11"), Some("2020-06-11")),
Submission("robin Dore", Some("2020-05-11"), Some("2020-05-30")),
Submission("AIMS Hospital", Some("2020-01-24"), Some("2020-07-30")))
val t = l
.map((_, 1))
.foldLeft(Map.empty[Option[String], (List[Submission], Int)])((acc, idnTuple) => idnTuple match {
case (idn, count) => {
acc
.get(idn.plannedDate)
.map {
case (mapIdn, mapCount) => acc + (idn.plannedDate -> (idn :: mapIdn, mapCount + count))
}.getOrElse(acc + (idn.plannedDate -> (List(idn), count)))
}})
.values
.partition(_._2 > 1)
val r = (t._1.map(_._1).flatten, t._2.map(_._1).flatten)
println(r)
It basically follows the map-reduce wordcount schema.
If someone sees this and knows how to do the tuple deconstruction more cleanly, please let me know in the comments.
Sometimes I perform a sequence of computations gradually transforming some value, like:
def complexComputation(input: String): String = {
val first = input.reverse
val second = first + first
val third = second * 3
third
}
Naming variables is sometimes cumbersome and I would like to avoid it. One pattern I am using for this is chaining the values using Option.map:
def complexComputation(input: String): String = {
Option(input)
.map(_.reverse)
.map(s => s + s)
.map(_ * 3)
.get
}
Using Option / get however does not feel quite natural to me. Is there some other way this is commonly done?
Actually, it will be possible with Scala 2.13. It will introduce pipe:
import scala.util.chaining._
input //"str"
.pipe(s => s.reverse) //"rts"
.pipe(s => s + s) //"rtsrts"
.pipe(s => s * 3) //"rtsrtsrtsrtsrtsrts"
Version 2.13.0-M1 is already released. If you don't want to use the milestone version, maybe consider using a backport?
As you mentioned, it's doable to implement pipe on your own, e.g.:
implicit class Piper[A](a: A) {
def pipe[B](f: A => B) = {
f(a)
}
}
val res = 2.pipe(i => i + 1).pipe(_ + 3)
println(res) // 6
val resStr = "Hello".pipe(s => s + " you!")
println(resStr) // Hello you!
Or take a look at https://github.com/scala/scala/blob/v2.13.0-M5/src/library/scala/util/ChainingOps.scala#L44.
I'm trying to figure out Slick (the Scala functional relational mapping library). I've started to build a prototype in Slick 3.0.0, but of course... most of the documentation is either out of date or incomplete.
I've managed to get to a point where I can create a schema and return an object from the database.
The problem is, what I'm getting back is a "Rep[Bind]" and not the object I would expect to get back. I can't figure out what to do with this value. For instance, if I try something like rep.countDistinct.result, I get a crash.
Here's a quick synopsis of the code... some removed for brevity:
class UserModel(tag: Tag) extends Table[User](tag, "app_dat_user_t") {
def id = column[Long]("n_user_id", O.PrimaryKey)
def created = column[Long]("d_timestamp_created")
def * = (id.?, created) <> (User.tupled, User.unapply)
}
case class User(id: Option[Long], created: Long)
val users = TableQuery[UserModel]
(users.schema).create
db.run(users += User(Option(1), 2))
println("ID is ... " + users.map(_.id)) // prints "Rep[Bind]"... huh?
val users = for (user <- users) yield user
println(users.map(_.id).toString) // Also prints "Rep[Bind]"...
I can't find a way to "unwrap" the Rep object and I can't find any clear explanation of what it is or how to use it.
Rep[] is the replacement for the Column[] datatype used in earlier versions of Slick.
users.map(_.id) builds a query that selects the column "n_user_id" for all rows:
val result: Query[Rep[Long], Long, Seq] = users.map(_.id)
users.map(_.id) // => select n_user_id from app_dat_user_t;
The column expression inside that query is of type Column[Long] [which is now Rep[Long]].
You cannot directly print the values of the above query because it is not a Scala collection; it only describes a SQL statement. You first have to run it against the database, which gives you a real Scala collection (inside a Future) that you can print:
val idsFuture = db.run(users.map(_.id).result) // Future[Seq[Long]]
idsFuture.foreach(ids => println(ids)) // print all ids at once (needs an implicit ExecutionContext)
or you can simply print each id:
db.run(users.map(_.id).result).foreach(_.foreach(id => println(id))) // print for each id
And:
val users = TableQuery[UserModel] // => returns Query[UserModel, UserModel#TableElementType, Seq]
val users = for (user <- users) yield user // => also returns Query[UserModel, UserModel#TableElementType, Seq]
Both mean the same, so you can directly use the former and remove the latter.
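For completeness, here is a minimal sketch (my own, assuming Slick 3, the usual driver api._ import, and an in-scope db: Database as in the question) of the question's snippet with every action actually run against the database:
import scala.concurrent.Await
import scala.concurrent.duration._

val users = TableQuery[UserModel]

// Every Slick 3 action (schema creation, insert, query) only describes work;
// db.run is what actually executes it and returns a Future.
val setup = db.run(DBIO.seq(
  users.schema.create,
  users += User(Option(1), 2)
))
Await.result(setup, 10.seconds)

// Running a query yields plain Scala values, not a Rep.
val ids: Seq[Long] = Await.result(db.run(users.map(_.id).result), 10.seconds)
println("ID is ... " + ids)
Blocking with Await is only for quick experiments; in real code you would map over the Futures instead.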
I'm trying to write a function that returns a list (for querying purposes) that has some wildcard elements:
def createPattern(query: List[(String,String)]) = {
val l = List[(_,_,_,_,_,_,_)]
var iter = query
while(iter != null) {
val x = iter.head._1 match {
case "userId" => 0
case "userName" => 1
case "email" => 2
case "userPassword" => 3
case "creationDate" => 4
case "lastLoginDate" => 5
case "removed" => 6
}
l(x) = iter.head._2
iter = iter.tail
}
l
}
So, the user enters some query terms as a list. The function parses through these terms and inserts them into val l. The fields that the user doesn't specify are entered as wildcards.
Val l is causing me trouble. Am I going down the right route, or are there better ways to do this?
Thanks!
Gosh, where to start. I'd begin by getting an IDE (IntelliJ / Eclipse) which will tell you when you're writing nonsense and why.
Read up on how List works. It's an immutable linked list so your attempts to update by index are very misguided.
Don't use tuples - use case classes.
You shouldn't ever need to use null and I guess here you mean Nil.
Don't use var and while - use for-expression, or the relevant higher-order functions foreach, map etc.
Your code doesn't make much sense as it is, but it seems you're trying to return a 7-element list with the second element of each tuple in the input list mapped via a lookup to position in the output list.
To improve it... don't do that. What you're doing (as programmers have done since arrays were invented) is to use the index as a crude proxy for a Map from Int to whatever. What you want is an actual Map. I don't know what you want to do with it, but wouldn't it be nicer if it were from these key strings themselves, rather than by a number? If so, you can simplify your whole method to
def createPattern(query: List[(String,String)]) = query.toMap
at which point you should realise you probably don't need the method at all, since you can just use toMap at the call site.
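For example, once you have the Map, the wildcard behaviour falls out of getOrElse (field names here are the ones from the question; "*" is just an assumed wildcard marker):
val pattern = createPattern(List("userId" -> "42", "email" -> "a@b.c"))
pattern.getOrElse("email", "*")    // "a@b.c"
pattern.getOrElse("userId", "*")   // "42"
pattern.getOrElse("userName", "*") // "*"  (not specified, so wildcard)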
If you insist on using an Int index, you could write
def createPattern(query: List[(String,String)]) = {
def intVal(x: String) = x match {
case "userId" => 0
case "userName" => 1
case "email" => 2
case "userPassword" => 3
case "creationDate" => 4
case "lastLoginDate" => 5
case "removed" => 6
}
val tuples = for ((key, value) <- query) yield (intVal(key), value)
tuples.toMap
}
Not sure what you want to do with the resulting list, but you can't create a List of wildcards like that.
What do you want to do with the resulting list, and what type should it be?
Here's how you might build something if you wanted the result to be a List[String], and if you wanted wildcards to be "*":
def createPattern(query:List[(String,String)]) = {
val wildcard = "*"
def orElseWildcard(key:String) = query.find(_._1 == key).getOrElse(("", wildcard))._2
orElseWildcard("userId") ::
orElseWildcard("userName") ::
orElseWildcard("email") ::
orElseWildcard("userPassword") ::
orElseWildcard("creationDate") ::
orElseWildcard("lastLoginDate") ::
orElseWildcard("removed") ::
Nil
}
You're not using List, Tuple, iterator, or wild-cards correctly.
I'd take a different approach - maybe something like this:
case class Pattern ( valueMap:Map[String,String] ) {
def this( valueList:List[(String,String)] ) = this( valueList.toMap )
val Seq(
userId,userName,email,userPassword,creationDate,
lastLoginDate,removed
):Seq[Option[String]] = Seq( "userId", "userName",
"email", "userPassword", "creationDate", "lastLoginDate",
"removed" ).map( valueMap.get(_) )
}
Then you can do something like this:
scala> val pattern = new Pattern( List( "userId" -> "Fred" ) )
pattern: Pattern = Pattern(Map(userId -> Fred))
scala> pattern.email
res2: Option[String] = None
scala> pattern.userId
res3: Option[String] = Some(Fred)
Or just use the map directly.
I have a SQL database table with the following structure:
create table category_value (
category varchar(25),
property varchar(25)
);
I want to read this into a Scala Map[String, Set[String]] where each entry in the map is a set of all of the property values that are in the same category.
I would like to do it in a "functional" style with no mutable data (other than the database result set).
Modeled on Clojure's loop construct, here is what I have come up with:
import scala.annotation.tailrec

def fillMap(statement: java.sql.Statement): Map[String, Set[String]] = {
  val resultSet = statement.executeQuery("select category, property from category_value")
  @tailrec
  def loop(m: Map[String, Set[String]]): Map[String, Set[String]] = {
    if (resultSet.next) {
      val category = resultSet.getString("category")
      val property = resultSet.getString("property")
      loop(m + (category -> (m.getOrElse(category, Set.empty) + property)))
    } else m
  }
  loop(Map.empty)
}
Is there a better way to do this, without using mutable data structures?
If you like, you could try something along these lines:
def fillMap(statement: java.sql.Statement): Map[String, Set[String]] = {
val resultSet = statement.executeQuery("select category, property from category_value")
Iterator.continually((resultSet, resultSet.next)).takeWhile(_._2).map(_._1).map{ res =>
val category = res.getString("category")
val property = res.getString("property")
(category, property)
}.toIterable.groupBy(_._1).mapValues(_.map(_._2).toSet)
}
Untested, because I don’t have a proper sql.Statement. And the groupBy part might need some more love to look nice.
Edit: Added the requested changes.
There are two parts to this problem.
1. Getting the data out of the database and into a list of rows.
I would use a Spring SimpleJdbcOperations for the database access, so that things at least appear functional, even though the ResultSet is being changed behind the scenes.
First, a simple implicit conversion to let us use a closure to map each row:
import java.sql.ResultSet
import org.springframework.jdbc.core.simple.ParameterizedRowMapper

implicit def rowMapper[T <: AnyRef](func: (ResultSet) => T) =
  new ParameterizedRowMapper[T] {
    override def mapRow(rs: ResultSet, row: Int): T = func(rs)
  }
Then let's define a data structure to store the results. (You could use a tuple, but defining my own case class has advantage of being just a little bit clearer regarding the names of things.)
case class CategoryValue(category:String, property:String)
Now select from the database
val db:SimpleJdbcOperations = //get this somehow
val resultList:java.util.List[CategoryValue] =
db.query("select category, property from category_value",
{ rs:ResultSet => CategoryValue(rs.getString(1),rs.getString(2)) } )
2. Converting the data from a list of rows into the format that you actually want.
import scala.collection.JavaConversions._
val result:Map[String,Set[String]] =
resultList.groupBy(_.category).mapValues(_.map(_.property).toSet)
(You can omit the type annotations. I've included them to make it clear what's going on.)
Builders are built for this purpose. Get one via the desired collection type companion, e.g. HashMap.newBuilder[String, Set[String]].
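To make that concrete, here is a minimal sketch of the builder-based approach (my own; a Map builder keeps only the last value per key, so the per-category merge still has to be done explicitly), with the ResultSet handling mirroring the question's code:
import java.sql.Statement
import scala.collection.immutable.HashMap

def fillMapWithBuilder(statement: Statement): Map[String, Set[String]] = {
  val rs = statement.executeQuery("select category, property from category_value")
  // Collect the raw (category, property) pairs with one builder...
  val pairs = Seq.newBuilder[(String, String)]
  while (rs.next()) pairs += (rs.getString("category") -> rs.getString("property"))
  // ...then merge them per category with the map builder.
  val builder = HashMap.newBuilder[String, Set[String]]
  pairs.result().groupBy(_._1).foreach { case (cat, ps) =>
    builder += (cat -> ps.map(_._2).toSet)
  }
  builder.result()
}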
This solution is basically the same as my other solution, but it doesn't use Spring, and the logic for converting a ResultSet to some sort of list is simpler than Debilski's solution.
def streamFromResultSet[T](rs: ResultSet)(func: ResultSet => T): Stream[T] = {
  if (rs.next())
    func(rs) #:: streamFromResultSet(rs)(func)
  else {
    rs.close()
    Stream.empty
  }
}
def fillMap(statement:java.sql.Statement):Map[String,Set[String]] = {
case class CategoryValue(category:String, property:String)
val resultSet = statement.executeQuery("""
select category, property from category_value
""")
val queryResult = streamFromResultSet(resultSet){rs =>
CategoryValue(rs.getString(1),rs.getString(2))
}
queryResult.groupBy(_.category).mapValues(_.map(_.property).toSet)
}
There is only one approach I can think of that does not include either mutable state or extensive copying*. It is actually a very basic technique I learnt in my first term studying CS. Here goes, abstracting from the database stuff:
def empty[K,V](k : K) : Option[V] = None
def add[K,V](m : K => Option[V])(k : K, v : V) : K => Option[V] = q => {
if ( k == q ) {
Some(v)
}
else {
m(q)
}
}
def build[K,V](input : TraversableOnce[(K,V)]) : K => Option[V] = {
input.foldLeft(empty[K,V]_)((m,i) => add(m)(i._1, i._2))
}
Usage example:
val map = build(List(("a",1),("b",2)))
println("a " + map("a"))
println("b " + map("b"))
println("c " + map("c"))
> a Some(1)
> b Some(2)
> c None
Of course, the resulting function does not have type Map (nor any of its benefits) and has linear lookup cost. I guess you could implement something in a similar way that mimics simple search trees; a rough sketch follows after the footnote.
(*) I am talking concepts here. In reality, things like value sharing might enable e.g. mutable list constructions without memory overhead.
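As promised above, here is a sketch of what a single node of such a function-backed search tree might look like (my own illustration; balancing and building the tree from the input are left out):
// A tree node is again just a lookup function: compare the query key with
// this node's key and delegate to the left or right subtree function.
def node[K, V](left: K => Option[V], k: K, v: V, right: K => Option[V])
              (implicit ord: Ordering[K]): K => Option[V] = q =>
  if (ord.equiv(q, k)) Some(v)
  else if (ord.lt(q, k)) left(q)
  else right(q)
A leaf is simply empty[K, V] _, and with a balanced construction the lookup cost drops to O(log n).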