Compiled query doesn't recognize 'exists' method - scala

I am facing a lot of trouble while updating my application from Play 2.3.x to Play 2.4.11.
I started by updating play-slick from version 0.8.1 to 1.1.1, which implies updating Slick from 2.1.0 to 3.1.0.
I have a generic class which aggregates basic methods like findById.
The problem I am facing at the moment is this:
I had this method working fine:
def existsById(id: Long)(implicit s: Session): DBIO[Boolean] =
tableReference.filter(_.id === id).exists.result
I decided to use compiled queries, so I did the following:
private val queryById = Compiled((id: Rep[Option[Long]]) => tableReference.filter(_.id === id))
def existsById(id: Option[Long])(implicit s: Session): DBIO[Boolean] =
queryById(id).exists.result
and now I am getting an error saying
Cannot resolve symbol exists
Am I doing it wrong, or is it a bug?

After you've "lifted" a Query into a Compiled you have to use map to transform it into a different Query. For example:
val existsById = queryById.map(q => (id: Rep[Long]) => q(id).exists)
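An alternative that avoids mapping the compiled query is to compile the exists query itself. A minimal sketch, assuming Slick 3.1 and the question's tableReference with a Long id column (the name existsByIdCompiled is made up):

private val existsByIdCompiled = Compiled { (id: Rep[Long]) =>
  tableReference.filter(_.id === id).exists
}

def existsById(id: Long): DBIO[Boolean] =
  existsByIdCompiled(id).result

Because exists is part of what gets compiled, there is nothing left to call exists on after the query has been lifted into Compiled.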

Related

ReactiveMongo ConnectionNotInitialized In Test After Migrating to Play 2.5

After migrating my Play (Scala) app to 2.5.3, some tests of my code using ReactiveMongo that once passed now fail in the setup.
Here is my code using ScalaTest:
def fixture(testMethod: (...) => Any) {
implicit val injector = new ScaldiApplicationBuilder()
.prependModule(new ReactiveMongoModule)
.prependModule(new TestModule)
.buildInj()
def reactiveMongoApi = inject[ReactiveMongoApi]
def collection: BSONCollection = reactiveMongoApi.db.collection[BSONCollection](testCollection)
lazy val id = BSONObjectID.generate
//Error occurs at next line
Await.result(collection.insert(Person(id = id, slug = "test-slug", firstName = "Mickey", lastName = "Mouse")), 10.seconds)
...
}
At the insert line, I get this:
reactivemongo.core.errors.ConnectionNotInitialized: MongoError['Connection is missing metadata (like protocol version, etc.) The connection pool is probably being initialized.']
I have tried a bunch of things, like initializing collection with a lazy val instead of a def, but nothing has worked.
Any insight into how to get my tests passing again is appreciated.
With thanks to @cchantep, the test runs as expected after replacing this code from above:
def collection: BSONCollection = reactiveMongoApi.db.collection[BSONCollection](testCollection)
with this code
def collection: BSONCollection = Await.result(reactiveMongoApi.database.map(_.collection[BSONCollection](testCollection)), 10.seconds)
In other words, reactiveMongoApi.database (along with the appropriate changes because of the Future) is the way to go.
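For reference, a minimal sketch of what the insert in the fixture can look like once everything is chained through the database Future (Person, testCollection, reactiveMongoApi and the 10-second timeout come from the code above; the helper name insertFixtureData is made up):

import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

def insertFixtureData(): Future[Unit] =
  for {
    db  <- reactiveMongoApi.database                       // resolves once the pool is initialized
    coll = db.collection[BSONCollection](testCollection)
    _   <- coll.insert(Person(id = BSONObjectID.generate, slug = "test-slug",
                              firstName = "Mickey", lastName = "Mouse"))
  } yield ()

Await.result(insertFixtureData(), 10.seconds)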

flatMap Compile Error found: TraversableOnce[String] required: TraversableOnce[String]

EDIT #2: This might be memory related. The logs are showing out-of-heap errors.
Yes, definitely memory related. Basically, docker logs reports all the out-of-heap spewage from the JVM, but the Jupyter web notebook does not pass that on to the user. Instead the user gets kernel failures and occasional weird behavior, like code not compiling correctly.
Spark 1.6, particularly docker run -d .... jupyter/all-spark-notebook
I would like to count accounts in a file of ~1 million transactions.
This is simple enough and could be done without Spark, but I've hit an odd error trying it with Spark in Scala.
The input data is of type RDD[etherTrans], where etherTrans is a custom type enclosing a single transaction: a timestamp, the from and to accounts, and the value transacted in ether.
class etherTrans(ts_in: Long, afrom_in: String, ato_in: String, ether_in: Float)
  extends Serializable {
  var ts: Long = ts_in
  var afrom: String = afrom_in
  var ato: String = ato_in
  var ether: Float = ether_in
  override def toString(): String = ts.toString + "," + afrom + "," + ato + "," + ether.toString
}
data:RDD[etherTrans] looks ok:
data.take(10).foreach(println)
etherTrans(1438918233,0xa1e4380a3b1f749673e270229993ee55f35663b4,0x5df9b87991262f6ba471f09758cde1c0fc1de734,3.1337E-14)
etherTrans(1438918613,0xbd08e0cddec097db7901ea819a3d1fd9de8951a2,0x5c12a8e43faf884521c2454f39560e6c265a68c8,19.9)
etherTrans(1438918630,0x63ac545c991243fa18aec41d4f6f598e555015dc,0xc93f2250589a6563f5359051c1ea25746549f0d8,599.9895)
etherTrans(1438918983,0x037dd056e7fdbd641db5b6bea2a8780a83fae180,0x7e7ec15a5944e978257ddae0008c2f2ece0a6090,100.0)
etherTrans(1438919175,0x3f2f381491797cc5c0d48296c14fd0cd00cdfa2d,0x4bd5f0ee173c81d42765154865ee69361b6ad189,803.9895)
etherTrans(1438919394,0xa1e4380a3b1f749673e270229993ee55f35663b4,0xc9d4035f4a9226d50f79b73aafb5d874a1b6537e,3.1337E-14)
etherTrans(1438919451,0xc8ebccc5f5689fa8659d83713341e5ad19349448,0xc8ebccc5f5689fa8659d83713341e5ad19349448,0.0)
etherTrans(1438919461,0xa1e4380a3b1f749673e270229993ee55f35663b4,0x5df9b87991262f6ba471f09758cde1c0fc1de734,3.1337E-14)
etherTrans(1438919491,0xf0cf0af5bd7d8a3a1cad12a30b097265d49f255d,0xb608771949021d2f2f1c9c5afb980ad8bcda3985,100.0)
etherTrans(1438919571,0x1c68a66138783a63c98cc675a9ec77af4598d35e,0xc8ebccc5f5689fa8659d83713341e5ad19349448,50.0)
This next function parses ok and is written this way because earlier attempts were complaining of type mismatch between Array[String] or List[String] and TraversableOnce[?]:
def arrow(e:etherTrans):TraversableOnce[String] = Array(e.afrom,e.ato)
But then using this function with flatMap to get an RDD[String] of all accounts fails.
val accts:RDD[String] = data.flatMap(arrow)
Name: Compile Error
Message: :38: error: type mismatch;
found : etherTrans(in class $iwC)(in class $iwC)(in class $iwC)(in class $iwC) => TraversableOnce[String]
required: etherTrans(in class $iwC)(in class $iwC)(in class $iwC)(in class $iwC) => TraversableOnce[String]
val accts:RDD[String] = data.flatMap(arrow)
^
StackTrace:
Make sure you scroll right to see it complain that TraversableOnce[String]
doesn't match TraversableOnce[String]
This must be a fairly common problem: a more blatant type mismatch comes up in "Generate List of Pairs", and, although there isn't enough context there, something similar is suggested in "I have a Scala List, how can I get a TraversableOnce?".
What's going on here?
EDIT: The issue reported above doesn't appear and the code works fine in an older spark-shell (Spark 1.3.1) running standalone in a docker container. The errors are generated when running in the Spark 1.6 Scala Jupyter environment with the jupyter/all-spark-notebook docker container.
Also, @zero323 says that this toy example:
val rdd = sc.parallelize(Seq((1L, "foo", "bar", 1))).map{ case (ts, fr, to, et) => new etherTrans(ts, fr, to, et)}
rdd.flatMap(arrow).collect
worked for him in the terminal spark-shell with Spark 1.6.0 / Scala 2.10.5, and that Scala 2.11.7 with Spark 1.5.2 works as well.
I think you should switch to case classes, and it should work fine. Using "regular" classes might cause weird issues when serializing them, and it looks like all you need are value objects, so case classes are a better fit for your use case.
An example:
case class EtherTrans(ts: Long, afrom: String, ato: String, ether: Float)
val source = sc.parallelize(Array(
(1L, "from1", "to1", 1.234F),
(2L, "from2", "to2", 3.456F)
))
val data = source.map { l => EtherTrans(l._1, l._2, l._3, l._4) }
def arrow(e: EtherTrans) = Array(e.afrom, e.ato)
data.map(arrow).take(5)
/*
res3: Array[Array[String]] = Array(Array(from1, to1), Array(from2, to2))
*/
If you need to, you can just create some method / object to generate your case classes.
If you don't really need the "toString" method for your logic, but just for presentation, keep it out of the case class: you can always add it with a map operation before storing or showing the data.
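For instance, a minimal sketch of that last point, reusing the RDD[EtherTrans] called data from above and the field order of the original toString:

// build the presentation string with a map instead of baking toString into the case class
val asText = data.map(t => s"${t.ts},${t.afrom},${t.ato},${t.ether}")
asText.take(2).foreach(println)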
Also, if you are on Spark 1.6.0 or higher, you could try using the Dataset API instead, which would look more or less like this:
val data = sqlContext.read.text("your_file").as[EtherTrans]
https://databricks.com/blog/2016/01/04/introducing-spark-datasets.html
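A hedged sketch of that Dataset route (Spark 1.6+, reusing the EtherTrans case class above; toDS and the encoders come from the SQLContext implicits):

import sqlContext.implicits._

val ds = sc.parallelize(Seq(EtherTrans(1L, "from1", "to1", 1.234F),
                            EtherTrans(2L, "from2", "to2", 3.456F))).toDS()
// flatMap on a Dataset takes a plain Scala collection from the lambda
val accounts = ds.flatMap(t => Seq(t.afrom, t.ato))
accounts.show()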

How do I print the results of a slick query

I have a table called Materials. I used Slick's schema auto-generation to create the TableQuery classes for me.
I can't figure out how to just print the results of a simple query.
Materials.map(_.name)
I've tried
val m = Materials.map(_.name).toString()
println(m)
and get the result
Rep(Bind)
if I try
Materials.map(_.name).forEach(m => println(m))
I get a compile error
value forEach is not a member of slick.lifted.Query[slick.lifted.Rep[Option[String]],Option[String],Seq]
To clarify, I'm using just Slick 3.1.0, not Play Slick.
You have written a Query, but it needs to be converted into an Action by calling its result method:
val query = materials.map(_.name)
val action = query.result
val results: Future[Seq[Option[String]]] = db.run(action)
results.foreach(println)
The db object needs to be initialized depending on the Slick version that you are using, e.g. plain Slick or Play Slick.
I assume that you have this
val materials = TableQuery[Materials]
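For plain Slick 3.1 (no Play), a minimal sketch of that initialization; the H2 profile and the "mydb" config key are placeholders for whatever your project actually uses:

import slick.driver.H2Driver.api._
import scala.concurrent.Await
import scala.concurrent.duration._

val db = Database.forConfig("mydb")   // reads the "mydb" block from application.conf

val names = Await.result(db.run(materials.map(_.name).result), 10.seconds)
names.foreach(println)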
You can evaluate a function with side effects using map:
Materials.map(println(_.name))

TypeSafe Slick 1.0.1 - AutoInc and return inserted id

I'm using Scala 2.10.0, Play Framework 2.1.0, TypeSafe Slick 1.0.1 and play-slick 0.3.2.
I've created a set of Slick entities that work as expected. I'd like to add some logic in the Global.scala to create a basic set of database records on startup by overriding onStart.
I'm using the examples I've found online with a statement like this:
def forInsert = name ~ description ~ viewDesc ~ skinDesc <>({ t => PackageDescription(None, t._1, t._2, t._3, t._4)}, { (pd: PackageDescription) => Some((pd.name, pd.description, pd.viewDesc , pd.skinDesc ))})
And in Global.scala, I'm invoking it like this:
override def onStart(app: Application) {
  DB.withSession {
    implicit session => {
      val packageDescId = PackageDescriptions.forInsert returning PackageDescription.id insert PackageDescription(None, "foo", "bar", "baz", "quux")
    }
  }
}
When I do this in Global.scala, 'returning' and 'insert' are not resolved as symbols. This works fine elsewhere, and I've verified that I have all the same imports in both locations.
I'm assuming something isn't initialized at this point in the application, or is this assumption false? If true, is there any way to initialize what I need? Otherwise, is there another way to do what I want?
I'm just starting with both Scala and Play so some of my basic understanding may be lacking.
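For context, a minimal sketch of the Global.scala boilerplate that typically surrounds such an insert with play-slick 0.3.x; the import paths here are assumptions based on that version's conventions, not a confirmed fix for the unresolved symbols:

// hypothetical Global.scala sketch for Slick 1.0.1 + play-slick 0.3.x
import play.api.{Application, GlobalSettings}
import play.api.Play.current
import play.api.db.slick.Config.driver.simple._   // assumed play-slick 0.3.x import
import play.api.db.slick.DB

object Global extends GlobalSettings {
  override def onStart(app: Application) {
    DB.withSession { implicit session: Session =>
      // returning takes the table's auto-inc column, insert takes the case class value
      val packageDescId = PackageDescriptions.forInsert returning PackageDescriptions.id insert
        PackageDescription(None, "foo", "bar", "baz", "quux")
    }
  }
}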

Which driver can I use to access MongoDB in a Scala Swing application?

Hi,
I am working with Scala and MongoDB.
Now I want to access a MongoDB database in a Scala Swing application.
Which drivers can I use for that, and which is easiest to work with?
Please reply.
I've been using Casbah http://api.mongodb.org/scala/casbah/2.0.2/index.html to talk to MongoDB from my Scala Swing application.
It's pretty easy to install and set up, and the API is quite Scala-esque.
The hardest part is understanding MongoDB itself (coming from an SQL background).
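A minimal Casbah sketch to get started (the database and collection names are made up):

import com.mongodb.casbah.Imports._

val client = MongoClient("localhost", 27017)
val coll   = client("mydb")("people")   // pick the database, then the collection

coll.insert(MongoDBObject("name" -> "Mickey", "lastName" -> "Mouse"))
coll.find(MongoDBObject("name" -> "Mickey")).foreach(println)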
We were somewhat unsatisfied with the way Casbah works for deep objects or simple maps, and with the lack of real case class mapping support, so we rolled our own synchronous MongoDB Scala driver on top of the legacy Java driver, which I would like to shamelessly plug here with an example of how to store and retrieve a map and a simple case class. The driver does not have a lot of magic, is easy to set up, and features a simple BSON implementation which was inspired by the Play 2 JSON implementation.
Here is how to use it with some simple values:
val client = MongoClient("hostname", 27017)
val db = client("dbname")
val coll = db("collectionname")
coll.save(Bson.doc("_id" -> 1, "vals" -> Map("key1" -> "val1")))
val docOpt = coll.findOneById(1) // => Option[BsonDoc]
for(doc <- docOpt)
println(doc.as[Map[String, String]]("vals")("key1")) // => prints "val1"
For a case class it is a little bit more complex, but it is all hand-rolled and there is no magic involved, so you can do whatever you like however you need it, e.g. provide some shorter key names in the doc:
case class DnsRecord(host: String = "", ttl: Long = 0, otherProps: Map[String, String] = Map())
case object DnsRecord {
  implicit object DnsRecordToBsonElement extends ToBsonElement[DnsRecord] {
    def toBson(v: DnsRecord): BsonElement = DnsRecordToBsonDoc.toBson(v)
  }
  implicit object DnsRecordFromBsonElement extends FromBsonElement[DnsRecord] {
    def fromBson(v: BsonElement): DnsRecord = DnsRecordFromBsonDoc.fromBson(v.asInstanceOf[BsonDoc])
  }
  implicit object DnsRecordFromBsonDoc extends FromBsonDoc[DnsRecord] {
    def fromBson(d: BsonDoc): DnsRecord = DnsRecord(
      d[String]("host"),
      d[Long]("ttl"),
      d[Map[String, String]]("op")
    )
  }
  implicit object DnsRecordToBsonDoc extends ToBsonDoc[DnsRecord] {
    def toBson(m: DnsRecord): BsonDoc = Bson.doc(
      "host" -> m.host,
      "ttl" -> m.ttl,
      "op" -> m.otherProps
    )
  }
}
coll.save(DnsRecord("test.de", 4456, Map("p2" -> "val1")))
for (r <- coll.findAs[DnsRecord](Bson.doc("host" -> "test.de")))
println(r.host)
As an update for people finding this thread who are interested in MongoDB 3.x: we're using an async driver, which can be found here: https://github.com/evojam/mongodb-driver-scala. Its API is built in a Scala way, with a new Play 2.4 module ready if you're using it, but you can always take just the driver.