Can I use Scala List with Slick (Play)?

I'm trying to store a List of integers. Here is what I am doing:
MODEL
case class Score(
  scoresPerTime: List[Int]
)

object Scores extends Table[Score]("SCORES") {
  def scorePerTime = column[List[Int]]("SCORE_PER_TIME")
  //...more code
}
Controller
val form = Form(
  mapping(
    "scoresPerTime" -> list(number)
  )(Score.apply)(Score.unapply)
)
I get one compilation error:
could not find implicit value for parameter tm: scala.slick.lifted.TypeMapper[List[Int]]
[error] def scorePerTime = column[List[Int]]("SCORE_PER_TIME")
How can I fix this so I can store a list? Or should I try another option, like a tuple or an enum?

You can do that by defining a type mapper from, let's say, List[Int] to String and vice versa.
One possibility:
implicit val intListTypeMapper = MappedTypeMapper.base[List[Int], String](
  list => list mkString ",",
  str => (str split "," map Integer.parseInt).toList
)
I say it's a possibility because I haven't tested it, and I'm not sure whether the fact that it returns a list will disrupt Slick. One place where it can be ambiguous is aggregate queries, where you'd want to count the number of commas rather than do a count(field) (which will obviously be one).
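To make the encoding concrete, the two conversion functions simply round-trip the list through a comma-separated string:

List(1, 2, 3) mkString ","                        // "1,2,3"
("1,2,3" split "," map Integer.parseInt).toList   // List(1, 2, 3)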
But this is completely non-relational. The relational way would be to have a new table with two fields: a foreign key referring to one row of the SCORES table, and a field holding one SCORE_PER_TIME value. The foreign key should be a non-unique index so searches are fast, and Slick handles this pretty well.
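For illustration, here is a minimal sketch of that relational layout in the same Slick 1.x style as the question; the SCORE_ENTRIES table and its column names are made up, and it assumes SCORES has an Int primary key column id:

object ScoreEntries extends Table[(Int, Int)]("SCORE_ENTRIES") {
  def scoreId = column[Int]("SCORE_ID")
  def scorePerTime = column[Int]("SCORE_PER_TIME")
  def * = scoreId ~ scorePerTime
  // hypothetical: assumes Scores defines def id = column[Int]("ID", O.PrimaryKey)
  def score = foreignKey("SCORE_FK", scoreId, Scores)(_.id)
  // non-unique index so lookups by score are fast
  def idx = index("SCORE_ENTRIES_IDX", scoreId, unique = false)
}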


Prevent empty values in an array being inserted into Mongo collection

I am trying to prevent empty values from being inserted into my MongoDB collection. The field in question looks like this:
MongoDB Field
"stadiumArr" : [
"Old Trafford",
"El Calderon",
...
]
Sample of (mapped) case class
case class FormData(_id: Option[BSONObjectID], stadiumArr: Option[List[String]], ..)
Sample of Scala form
object MyForm {
  val form = Form(
    mapping(
      "_id" -> ignored(Option.empty[BSONObjectID]),
      "stadiumArr" -> optional(list(text)),
      ...
    )(FormData.apply)(FormData.unapply)
  )
}
I am also using the Repeated Values functionality in Play Framework like so:
Play Template
@(myForm: Form[models.db.FormData])(implicit request: RequestHeader, messagesProvider: MessagesProvider)

@import helper._

@repeatWithIndex(myForm("stadiumArr"), min = 5) { (stadium, idx) =>
  @inputText(stadium, '_label -> ("stadium #" + (idx + 1)))
}
This ensures that whether or not there are at least 5 values in the array, there will still be (at least) 5 input boxes created. However, if one (or more) of the input boxes is empty when the form is submitted, an empty string is still added as a value in the array, e.g.:
"stadiumArr" : [
"Old Trafford",
"El Calderon",
"",
"",
""
]
Based on some other ways of converting types from/to the database, I've tried playing around with a few solutions, such as:
implicit val arrayWrite: Writes[List[String]] = new Writes[List[String]] {
  def writes(list: List[String]): JsValue = Json.arr(list.filterNot(_.isEmpty))
}
...but this isn't working. Any ideas on how to prevent empty values from being inserted into the database collection?
Without knowing the specific versions or libraries you're using it's hard to give you an answer, but since you linked to the Play 2.6 documentation I'll assume that's what you're using. The other assumption I'm going to make is that you're using the ReactiveMongo library. Whether or not you're using the Play plugin for that library is the reason I'm giving you two different answers here:
In that library, with no plugin, you'll have defined a BSONDocumentReader and a BSONDocumentWriter for your case class. These might be auto-generated for you with macros or not, but regardless of how you get them, these two classes have useful methods you can use to transform the reads/writes you have into others. So, let's say I defined a reader and writer for you like this:
import reactivemongo.bson._

case class FormData(_id: Option[BSONObjectID], stadiumArr: Option[List[String]])

implicit val formDataReaderWriter = new BSONDocumentReader[FormData] with BSONDocumentWriter[FormData] {
  def read(bson: BSONDocument): FormData = {
    FormData(
      _id = bson.getAs[BSONObjectID]("_id"),
      stadiumArr = bson.getAs[List[String]]("stadiumArr").map(_.filterNot(_.isEmpty))
    )
  }

  def write(formData: FormData) = {
    BSONDocument(
      "_id" -> formData._id,
      "stadiumArr" -> formData.stadiumArr
    )
  }
}
Great, you say, that works! You can see that in the read I went ahead and filtered out any empty strings, so even if they're in the data, they can be cleaned up. That's nice and all, but notice that I didn't do the same for the write. I did that so I can show you how to use a useful method called afterWrite. So pretend the reader and writer weren't the same class but separate; then I can do this:
val initialWriter = new BSONDocumentWriter[FormData] {
  def write(formData: FormData) = {
    BSONDocument(
      "_id" -> formData._id,
      "stadiumArr" -> formData.stadiumArr
    )
  }
}

implicit val cleanWriter = initialWriter.afterWrite { bsonDocument =>
  val fixedField = bsonDocument.getAs[List[String]]("stadiumArr").map(_.filterNot(_.isEmpty))
  bsonDocument.remove("stadiumArr") ++ BSONDocument("stadiumArr" -> fixedField)
}
Note that cleanWriter is the implicit one; that means that when the insert call on the collection happens, it will be the one chosen to be used.
Now, that's all a bunch of work. If you're using the plugin/module for Play that lets you use JSONCollections, then you can get by with just defining Play JSON Reads and Writes. If you look at the documentation you'll see that the Reads trait has a useful map function you can use to transform one Reads into another.
So, you'd have:
val jsonReads = Json.reads[FormData]
implicit val cleanReads = jsonReads.map(formData => formData.copy(stadiumArr = formData.stadiumArr.map(_.filterNot(_.isEmpty))))
And again, because only the clean Reads is implicit, the collection methods for mongo will use that.
NOW, all of that said, doing this at the database level is one thing, but really, I personally think you should be dealing with this at your Form level.
val form = Form(
  mapping(
    "_id" -> ignored(Option.empty[BSONObjectID]),
    "stadiumArr" -> optional(list(text)),
    ...
  )(FormData.apply)(FormData.unapply)
)
Mainly because, surprise surprise, Form has a way to deal with this: specifically, the Mapping class itself. If you look there you'll find a transform method you can use to filter out empty values easily. Just call it on the mapping you need to modify, for example:
"stadiumArr" -> optional(
list(text).transform(l => l.filter(_.nonEmpty), l => l.filter(_.nonEmpty))
)
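To make the effect concrete, here is a minimal, self-contained sketch; the one-field form and the sample data are made up for illustration:

import play.api.data._
import play.api.data.Forms._

case class StadiumData(stadiumArr: Option[List[String]])

val stadiumForm = Form(
  mapping(
    "stadiumArr" -> optional(
      list(text).transform[List[String]](_.filter(_.nonEmpty), _.filter(_.nonEmpty))
    )
  )(StadiumData.apply)(StadiumData.unapply)
)

// binding data that contains a blank entry: the empty string is filtered out
val bound = stadiumForm.bind(Map(
  "stadiumArr[0]" -> "Old Trafford",
  "stadiumArr[1]" -> "",
  "stadiumArr[2]" -> "El Calderon"
))
// bound.value == Some(StadiumData(Some(List("Old Trafford", "El Calderon"))))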
To explain a little more about this method, in case you're not used to reading the signatures in the Scaladoc:
def transform[B](f1: (T) ⇒ B, f2: (B) ⇒ T): Mapping[B]
This says that by calling transform on some mapping of type Mapping[T] you can create a new mapping of type Mapping[B]. In order to do this you must provide functions that convert from one to the other. So the code above turns the list mapping (a Mapping[List[String]]) into another Mapping[List[String]] (the type does not change here), but in doing so it removes any empty elements. If I break this code down a little it might be clearer:
def convertFromTtoB(list: List[String]): List[String] = list.filter(_.nonEmpty)
def convertFromBtoT(list: List[String]): List[String] = list.filter(_.nonEmpty)
...
list(text).transform(convertFromTtoB, convertFromBtoT)
You might be wondering why you need to provide both. The reason is that when you call Form.fill and the form is populated with values, the second function is called so that the data goes into the format the Play form is expecting. This is more obvious if the type actually changes. For example, if you had a text area where people could enter CSV, but you wanted to map it to a form model with a proper List[String], you might do something like:
def convertFromTtoB(raw: String): List[String] = raw.split(",").toList.filter(_.nonEmpty)
def convertFromBtoT(list: List[String]): String = list.mkString(",")
...
text.transform(convertFromTtoB, convertFromBtoT)
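As a hedged usage sketch of that CSV idea (the form and field name are made up):

import play.api.data._
import play.api.data.Forms._

// a one-field form whose text input is mapped to a List[String]
val csvForm = Form(
  single(
    "tags" -> text.transform[List[String]](
      _.split(",").toList.filter(_.nonEmpty),
      _.mkString(",")
    )
  )
)

csvForm.bind(Map("tags" -> "a,,b")).value // Some(List("a", "b"))
// Form.fill goes the other way, through mkString:
csvForm.fill(List("a", "b")).data         // Map("tags" -> "a,b")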
Note that when I've done this in the past, I've sometimes had to write a separate method and pass it in when I didn't want to fully specify all the types, but you should be able to work from here given the documentation and the type signature of the transform method on Mapping.
The reason I suggest doing this in the form binding is that the form/controller should be the one concerned with dealing with your user data and cleaning things up, I think. But you can always have multiple layers of cleaning and whatnot; it's not bad to be safe!
I've gone for this (which always seems obvious when it's written and tested):
implicit val arrayWrite: Writes[List[String]] = new Writes[List[String]] {
  def writes(list: List[String]): JsValue = Json.toJson(list.filterNot(_.isEmpty).toIndexedSeq)
}
But I would be interested to know how to .map the existing Reads rather than redefining it from scratch, as @cchantep suggests.
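For completeness, a sketch of that mapped Reads, along the lines already shown in the answer above (the case class is trimmed to the relevant field):

import play.api.libs.json._

case class FormData(stadiumArr: Option[List[String]])

val jsonReads: Reads[FormData] = Json.reads[FormData]
implicit val cleanReads: Reads[FormData] =
  jsonReads.map(fd => fd.copy(stadiumArr = fd.stadiumArr.map(_.filterNot(_.isEmpty))))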

Listing columns on a Slick table

I have a Slick 3.0 table definition similar to the following:
case class Simple(a: String, b: Int, c: Option[String])

trait Tables { this: JdbcDriver =>
  import api._

  class Simples(tag: Tag) extends Table[Simple](tag, "simples") {
    def a = column[String]("a")
    def b = column[Int]("b")
    def c = column[Option[String]]("c")
    def * = (a, b, c) <> (Simple.tupled, Simple.unapply)
  }

  lazy val simples = TableQuery[Simples]
}

object DB extends Tables with MyJdbcDriver
I would like to be able to do 2 things:
Get a list of the column names as Seq[String]
For an instance of Simple, generate a Seq[String] that would correspond to how the data would be inserted into the database using a raw query (e.g. Simple("hello", 1, None) becomes Seq("'hello'", "1", "NULL"))
What would be the best way to do this using the Slick table definition?
First of all, it is not possible to trick Slick and change the order on the left side of the <> operator in the * method without changing the order of values in Simple, the row type of Simples; i.e. what Ben assumed is not possible. The ProvenShape return type of the * projection method ensures that there is a Shape available for translating between the Column-based type in * and the client-side type. If you write def * = (c, b, a) <> (Simple.tupled, Simple.unapply) with Simple defined as case class Simple(a: String, b: Int, c: Option[String]), Slick will complain with the error "No matching Shape found. Slick does not know how to map the given types...". Ergo, you can iterate over all the elements of an instance of Simple with its productIterator.
Secondly, you already have the definition of the Simples table in your code, and querying meta tables to get the same information you already have is not sensible. You can get all your column names with a one-liner: simples.baseTableRow.create_*.map(_.name). Note that the * projection of the table also defines the columns generated when you create the table schema, so columns not mentioned in the projection are not created, and the statement above is guaranteed to return exactly what you need and not to drop anything.
To recap briefly:
To get a list of the column names of the Simples table as Seq[String] use
simples.baseTableRow.create_*.map(_.name).toSeq
To generate a Seq[String] that corresponds to how the data of aSimple, an instance of Simple, would be inserted into the database using a raw query, use aSimple.productIterator.toSeq
To get the column names, try this:
db.run(for {
  metaTables <- slick.jdbc.meta.MTable.getTables("simples")
  columns <- metaTables.head.getColumns
} yield columns.map(_.name)) foreach println
This will print
Vector(a, b, c)
And for the case class values, you can use productIterator:
Simple("hello", 1, None).productIterator.toVector
is
Vector(hello, 1, None)
You still have to do the value mapping, and guarantee that the order of the columns in the table and the values in the case class are the same.
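As a sketch of that value mapping, covering only the types in the example (no escaping, so not safe for real SQL):

def toSqlLiterals(p: Product): Seq[String] =
  p.productIterator.map {
    case None            => "NULL"
    case Some(s: String) => s"'$s'"
    case s: String       => s"'$s'"
    case other           => other.toString // Int, Long, ...
  }.toSeq

toSqlLiterals(Simple("hello", 1, None)) // Seq("'hello'", "1", "NULL")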

In Scala what is the easiest way to parse json and map to objects?

I'm looking for a super simple way to take a big JSON fragment, that is, a long list with a bunch of big objects in it, parse it, pick out the same few values from each object, and then map them into a case class.
I have tried pretty hard to get lift-json (2.5) working for me, but I'm having trouble cleanly dealing with checking whether a key is present and, if so, mapping the whole object, but if not, skipping it.
I absolutely do not understand this syntax for Lift-JSON one bit:
case class Car(make: String, model: String)
...
val parsed = parse(jsonFragment)
val JArray(cars) = parsed \ "cars"
val carList = new MutableList[Car]
for (car <- cars) {
  val JString(model) = car \ "model"
  val JString(make) = car \ "make"
  // I want to check if they both exist here, and if so
  // then add to carList
  carList += car
}
What on earth is that construct that makes it look like a case class is being created to the left of the assignment operator? I'm talking about the "JString" part.
Also how is it supposed to cope with the situation where a key is missing?
Can someone please explain to me what the right way to do this is?
And if I have nested values I'm looking for, I just want to skip the whole object and go on to try to map the next one.
Is there something more straightforward for this than Lift-JSON?
Would using extractOpt help?
I have looked at this a lot:
https://github.com/lift/framework/tree/master/core/json
and it's still not particularly clear to me.
Help is very much appreciated!!!!!
Since you are only looking to extract certain fields, you are on the right track. This modified version of your for-comprehension will loop through your car structure, extract the make and model and only yield your case class if both items exist:
for {
  car <- cars
  model <- (car \ "model").extractOpt[String]
  make <- (car \ "make").extractOpt[String]
} yield Car(make, model)
You would add additional required fields the same way. If you also want to use optional parameters, say color, then you can extract them in your yield section and the for-comprehension won't unbox them:
for {
  car <- cars
  model <- (car \ "model").extractOpt[String]
  make <- (car \ "make").extractOpt[String]
} yield Car(make, model, (car \ "color").extractOpt[String])
In both cases you will get back a List of Car case classes.
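One detail the snippets above assume: extractOpt needs an implicit Formats in scope, e.g.:

import net.liftweb.json._
implicit val formats: Formats = DefaultFormats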
The weird-looking assignment is pattern matching used in a val declaration.
When you see
val JArray(cars) = parsed \ "cars"
it extracts from the parsed JSON the subtree of "cars" objects and matches the resulting value against the extractor pattern JArray(cars).
That is to say, the value is expected to be of the form of a constructor JArray(something), and the something is bound to the cars variable name.
It works pretty much like the pattern matching on case classes you're probably already familiar with, e.g. with Option:
//define a value with a class that can pattern match
val option = Some(1)
//do the matching on val assignment
val Some(number) = option
//use the extracted binding as a variable
println(number)
The following assignments are exactly the same kind of thing:
//pattern match on a JSON string whose inner value is assigned to "model"
val JString(model) = car \ "model"
//pattern match on a JSON string whose inner value is assigned to "make"
val JString(make) = car \ "make"
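A caveat worth adding: a pattern match in a val is partial, so it throws a scala.MatchError at runtime when the shape doesn't match, which is exactly what happens when the key is missing:

// car \ "missingKey" evaluates to JNothing, which JString(...) does not match
val JString(model) = car \ "missingKey" // throws scala.MatchError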
References
The JSON types (e.g. JValue, JString, JDouble) are defined as aliases within the net.liftweb.json object here.
The aliases in turn point to corresponding inner case classes within the net.liftweb.json.JsonAST object, found here
The case classes have an unapply method for free, which lets you do the pattern-matching as explained in the above answer.
I think this should work for you:
case class UserInfo(
  name: String,
  firstName: Option[String],
  lastName: Option[String],
  smiles: Boolean
)
val jValue: JValue
val extractedUserInfoClass: Option[UserInfo] = jValue.extractOpt[UserInfo]
val jsonArray: JArray
val listOfUserInfos: List[Option[UserInfo]] = jsonArray.arr.map(_.extractOpt[UserInfo])
I expect jValue to have smiles and name -- otherwise extracting will fail.
I don't expect jValue to necessarily have firstName and lastName -- so I write Option[T] in the case class.

How to create a lookup Map in Scala

While I know there are a few ways to do this, I'm most interested in finding the most idiomatic and functional Scala method.
Given the following trite example:
case class User(id: String)
val users = List(User("1"), User("2"), User("3"), User("4"))
What's the best way to create an immutable lookup Map of user.id -> User so that I can perform quick lookups by user.id?
In Java I'd probably use Google Collections' Maps.uniqueIndex, although I care less about its uniqueness property.
You can keep the users in a List and use list.find:
users.find{_.id == "3"} //returns Option[User], either Some(User("3")) or None if no such user
or if you want to use a Map, map the list of users to a list of 2-tuples, then use the toMap method:
val umap = users.map{u => (u.id, u)}.toMap
which will return an immutable Map[String, User], then you can use
umap contains "1" //return true
or
umap.get("1") //returns Some(User("1"))
If you're sure all IDs are unique, the canonical way is
users.map(u => (u.id, u)).toMap
as #Dan Simon said. However, if you are not sure all IDs are unique, then the canonical way is:
users.groupBy(_.id)
This will generate a mapping from user IDs to a list of users that share that ID.
Thus, there is an alternate not-entirely-canonical way to generate the map from ID to single users:
users.groupBy(_.id).mapValues(_.head)
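To see the difference, a quick sketch with a hypothetical duplicate ID:

val dupes = List(User("1"), User("1"), User("2"))
dupes.groupBy(_.id)
// Map("1" -> List(User("1"), User("1")), "2" -> List(User("2")))
dupes.groupBy(_.id).mapValues(_.head)
// Map("1" -> User("1"), "2" -> User("2"))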
For expert users who want to avoid the intermediate step of creating a map of lists, or a list which then gets turned into a map, there is the handy scala.collection.breakOut method that builds the type you want if there's a straightforward way to do it. It needs to know the type, though, so this will do the trick:
users.map(u => (u.id,u))(collection.breakOut): Map[String,User]
(You can also assign to a var or val of specified type.)
Convert the List into a Map and use it as a function:
case class User(id: String)
val users = List(User("1"), User("2"), User("3"))
val usersMap = users map { case user @ User(id) => id -> user } .toMap
usersMap("1") // User("1")
usersMap("0") // throws java.util.NoSuchElementException
If you would like to use a numeric index:
scala> users.map (u=> u.id.toInt -> u).toMap
res18: scala.collection.immutable.Map[Int,User] =
Map((1,User(1)), (2,User(2)), (3,User(3)))
Maps are functions too: their apply method provides access to the value associated with a particular key (or throws a NoSuchElementException for an unknown key), so this makes for a very clean lookup syntax. Following on from Dan Simon's answer and using a more semantically meaningful name:
scala> val Users = users map {u => (u.id, u)} toMap
Users: scala.collection.immutable.Map[String,User] = Map((1,User(1)), (2,User(2)), (3,User(3)))
which then provides the following lookup syntax:
scala> val user2 = Users("2")
user2: User = User(2)

scala: map-like structure that doesn't require casting when fetching a value?

I'm writing a data structure that converts the results of a database query. The raw structure is a Java ResultSet, and it would be converted to a map or class that permits accessing different fields of that data structure by either a named method call or by passing a string into apply(). Clearly different values may have different types. To reduce the burden on the clients of this data structure, my preference is that one should not need to cast the values, but that a fetched value still has the correct type.
For example, suppose I'm doing a query that fetches two column values, one an Int and the other a String, and the names of the columns are "a" and "b" respectively. Some ideal syntax might be the following:
val javaResultSet = dbQuery("select a, b from table limit 1")
// with ResultSet, particular values can be accessed like this:
val a = javaResultSet.getInt("a")
val b = javaResultSet.getString("b")
// but this syntax is undesirable.
// since I want to convert this to a single data structure,
// the preferred syntax might look something like this:
val newStructure = toDataStructure[Int, String](javaResultSet)("a", "b")
// that is, I'm willing to state the types during the instantiation
// of such a data structure.
// then,
val a: Int = newStructure("a") // OR
val a: Int = newStructure.a
// in both cases, "val a" does not require asInstanceOf[Int].
I've been trying to determine what sort of data structure might allow this and I could not figure out a way around the casting.
The other requirement is obviously that I would like to define a single data structure used for all db queries. I realize I could easily define a case class or similar per call and that solves the typing issue, but such a solution does not scale well when many db queries are being written. I suspect some people are going to propose using some sort of ORM, but let us assume for my case that it is preferred to maintain the query in the form of a string.
Anyone have any suggestions? Thanks!
To do this without casting, one needs more information about the query, and one needs that information at compile time.
I suspect some people are going to propose using some sort of ORM, but let us assume for my case that it is preferred to maintain the query in the form of a string.
Your suspicion is right and you will not get around this. If current ORMs or DSLs like Squeryl don't suit your fancy, you can create your own. But I doubt you will be able to use query strings.
The basic problem is that you don't know how many columns there will be in any given query, and so you don't know how many type parameters the data structure should have and it's not possible to abstract over the number of type parameters.
There is however, a data structure that exists in different variants for different numbers of type parameters: the tuple. (E.g. Tuple2, Tuple3 etc.) You could define parameterized mapping functions for different numbers of parameters that returns tuples like this:
def toDataStructure2[T1, T2](rs: ResultSet)(c1: String, c2: String) =
  (rs.getObject(c1).asInstanceOf[T1],
   rs.getObject(c2).asInstanceOf[T2])

def toDataStructure3[T1, T2, T3](rs: ResultSet)(c1: String, c2: String, c3: String) =
  (rs.getObject(c1).asInstanceOf[T1],
   rs.getObject(c2).asInstanceOf[T2],
   rs.getObject(c3).asInstanceOf[T3])
You would have to define these for as many columns as you expect to have in your tables (max 22).
This depends, of course, on it being safe to use getObject and cast the result to the given type.
In your example you could use the resulting tuple as follows:
val (a, b) = toDataStructure2[Int, String](javaResultSet)("a", "b")
If you decide to go the route of heterogeneous collections, there are some very interesting posts on heterogeneously typed lists:
One, for instance, is
http://jnordenberg.blogspot.com/2008/08/hlist-in-scala.html
http://jnordenberg.blogspot.com/2008/09/hlist-in-scala-revisited-or-scala.html
with an implementation at
http://www.assembla.com/wiki/show/metascala
A second great series of posts starts with
http://apocalisp.wordpress.com/2010/07/06/type-level-programming-in-scala-part-6a-heterogeneous-list%C2%A0basics/
and continues with parts "b", "c" and "d" linked from part "a".
Finally, there is a talk by Daniel Spiewak which touches on HOMaps:
http://vimeo.com/13518456
All this to say that perhaps you can build your solution from these ideas. Sorry that I don't have a specific example, but I admit I haven't tried these out yet myself!
Joshua Bloch introduced a heterogeneous collection that can be written in Java. I once adapted it a little; it now works as a value register. It is basically a wrapper around two maps. Here is the code and this is how you can use it. But this is just FYI, since you are interested in a Scala solution.
In Scala I would start by playing with tuples. Tuples are kind of heterogeneous collections. The results can be, but don't have to be, accessed through fields like _1, _2, _3 and so on. But you don't want that, you want names. This is how you can assign names to them:
scala> val tuple = (1, "word")
tuple: (Int, String) = (1,word)
scala> val (a, b) = tuple
a: Int = 1
b: String = word
So, as mentioned before, I would try to build a ResultSetWrapper around tuples.
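A minimal sketch of what such a wrapper could look like, reusing the getObject-plus-cast idea from the toDataStructure functions above (the class and method names are made up):

import java.sql.ResultSet

class ResultSetWrapper(rs: ResultSet) {
  // the caller states the column types; getObject is cast accordingly
  def as2[T1, T2](c1: String, c2: String): (T1, T2) =
    (rs.getObject(c1).asInstanceOf[T1], rs.getObject(c2).asInstanceOf[T2])
}

// usage: val (a, b) = new ResultSetWrapper(javaResultSet).as2[Int, String]("a", "b")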
If you want "extract the column value by name" on a plain bean instance, you can probably:
use reflects and CASTs, which you(and me) don't like.
use a ResultSetToJavaBeanMapper provided by most ORM libraries, which is a little heavy and coupled.
write a scala compiler plugin, which is too complex to control.
So, I guess a lightweight ORM with the following features may satisfy you:
supports raw SQL
supports a lightweight, declarative and adaptive ResultSetToJavaBeanMapper
nothing else.
I made an experimental project based on that idea, but note it's still an ORM; I just think it may be useful to you, or may bring you some hints.
Usage:
declare the model:
//declare the DB schema
trait UserDef extends TableDef {
  var name = property[String]("name", title = Some("姓名"))
  var age1 = property[Int]("age", primary = true)
}

//declare the model; it mixes in properties as {var name = ""}
@BeanInfo class User extends Model with UserDef

//declare an object.
//it mixes in properties as {var name = Property[String]("name")}
//and object User is a Mapper[User], thus it can translate a ResultSet to a User instance.
object `package` {
  @BeanInfo implicit object User extends Table[User]("users") with UserDef
}
then call raw SQL, and the implicit Mapper[User] works for you:
val users = SQL("select name, age from users").all[User]
users.foreach{user => println(user.name)}
or even build a type-safe query:
val users = User.q.where(User.age > 20).where(User.name like "%liu%").all[User]
for more, see unit test:
https://github.com/liusong1111/soupy-orm/blob/master/src/test/scala/mapper/SoupyMapperSpec.scala
project home:
https://github.com/liusong1111/soupy-orm
It uses "abstract Type" and "implicit" heavily to make the magic happen, and you can check source code of TableDef, Table, Model for detail.
Several million years ago I wrote an example showing how to use Scala's type system to push and pull values from a ResultSet. Check it out; it matches up with what you want to do fairly closely.
implicit val conn = connect("jdbc:h2:f2", "sa", "");
implicit val s: Statement = conn << setup;
val insertPerson = conn prepareStatement "insert into person(type, name) values(?, ?)";
for (val name <- names)
insertPerson<<rnd.nextInt(10)<<name<<!;
for (val person <- query("select * from person", rs => Person(rs,rs,rs)))
println(person.toXML);
for (val person <- "select * from person" <<! (rs => Person(rs,rs,rs)))
println(person.toXML);
Primitive types are used to guide the Scala compiler into selecting the right functions on the ResultSet.