I'm creating an OData query language in Scala. It's going pretty well, but there is one error I get that I can't explain.
First let me show you the Query class I created (some code omitted for brevity):
case class Query(val name: String, val args: Seq[(String, String)] = Seq())
                (val parent: Option[Query] = None)
                (val options: Seq[QueryOption] = Seq()) {
  def /(newChild: Query): Query = new Query(newChild.name, newChild.args)(Some(this))(options)
  def $(newOptions: QueryOption*): Query = new Query(name, args)(parent)(options ++ newOptions)
  def |(newArgs: (String, String)*): Query = new Query(name, args ++ newArgs)(parent)(options)
}
object Query {
  private def emptyQueryWithName(name: String): Query = Query(name, Seq())(None)(Seq())
  def /(name: String): Query = emptyQueryWithName(name)
  implicit def createQuery(name: String): Query = emptyQueryWithName(name)
}
package object queries {
  implicit class QueryOps(name: String) {
    def ===(attr: Any): (String, String) = (name, attr.toString)
  }
}
I've written some tests for this DSL and they mostly work. For instance, this code:
Query / "Pages" / "Component" | ("ItemId" === 123, "PublicationId" === 1) $ ("Title" ==| "Test Title")
Gives me the expected query: /Pages/Component(ItemId=123,PublicationId=1)?$filter=Title eq 'Test Title'
But this one:
Query / "Pages" | ("ItemId" === 123) / "Title" $ jsonFormat $ ("Url" ==| "Bla")
Complains about the '/ "Title"' part, as if the compiler is not aware that the preceding code results in a Query instance: it can't find the '/' method. This seems very strange to me, because the '$' method is found, and it has the same scope: the Query class.
I'm probably running into some kind of limitation I can't fathom, but I would like to understand. Thanks for any help!
The fact that parentheses fix the problem usually means an operator precedence problem. Have a look at http://books.google.es/books?id=MFjNhTjeQKkC&pg=PA90&lpg=PA90&dq=scala+operator+precedence+reference&source=bl&ots=FMlkUEDSpq&sig=pf3szEM4GExN_UCsgaxcQNBegPQ&hl=en&sa=X&ei=ZezQU_-SDszY7Ab-pIDQDQ&redir_esc=y#v=onepage&q=scala%20operator%20precedence%20reference&f=false
| has a lower precedence than /, and $ has the highest precedence, so your expression is being interpreted as:
(Query / "Pages") | (("ItemId" === 123) / (("Title" $ jsonFormat) $ ("Url" ==| "Bla")))
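To see the grouping in isolation, here is a minimal, hypothetical sketch (not the question's DSL) in which each operator just records how the expression was grouped; `$` binds tightest, then `/`, then `|`:

```scala
// Minimal sketch of Scala's infix-operator precedence classes.
// Each method wraps its operands in parentheses so we can see the grouping.
case class Expr(repr: String) {
  def /(other: Expr): Expr = Expr(s"(${repr}/${other.repr})")
  def |(other: Expr): Expr = Expr(s"(${repr}|${other.repr})")
  def $(other: Expr): Expr = Expr(s"(${repr}$$${other.repr})")
}

val (a, b, c, d) = (Expr("a"), Expr("b"), Expr("c"), Expr("d"))

// `$` is applied first, then `/`, then `|`:
val parsed = a / b | c $ d
// parsed.repr == "((a/b)|(c$d))"
```

Explicit parentheses in the original expression override this grouping, which is why they fix the compile error.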
Also, providing the exact error message is usually useful.
I am getting an error in the extractor step (the unapply method call).
The error message is: Wrong number of arguments for the extractors. found 2; expected 0
Can someone please help me see what is causing the error (where my misunderstanding is)?
class ABC(val name: String, val age: Int) // the class is defined

object ABC {
  def apply(age: Int, name: String) = new ABC(name, age)
  def unapply(x: ABC) = (x.name, x.age)
}
val ins = ABC(25, "Joe") //here apply method is in action.
val ABC(x,y) = ins // unapply is indirectly called. As per my understanding, 25 and "Joe" are supposed to be captured in x and y respectively, but this step gives an error.
The error I get is
an unapply result must have a member def isEmpty: Boolean
The easiest way to fix this is to make unapply return an Option:
def unapply(x: ABC) = Option((x.name, x.age))
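Putting the fix into the question's example makes the pattern bind without error (a complete, runnable sketch):

```scala
// The question's class and companion, with unapply returning an Option
// so the compiler knows the extraction can fail.
class ABC(val name: String, val age: Int)

object ABC {
  def apply(age: Int, name: String) = new ABC(name, age)
  def unapply(x: ABC): Option[(String, Int)] = Option((x.name, x.age))
}

val ins = ABC(25, "Joe") // apply is in action here
val ABC(x, y) = ins      // unapply binds x = "Joe", y = 25
```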
The unapply method in an extractor that binds values must return an Option, because there is no intrinsic guarantee that an extractor will always succeed. For instance, consider this massively oversimplified example of an extractor for an email address:
object Email {
  def unapply(s: String): Option[(String, String)] =
    s.indexOf('@') match {
      case idx if idx >= 0 =>
        val (user, maybeSite) = s.splitAt(idx)
        if (maybeSite.length < 2 || maybeSite.lastIndexOf('@') > 0) None
        else Some(user -> maybeSite.tail)
      case _ => None
    }
}
At the application site:
val Email(u, s) = "user3103957@stackoverflow.example.xyz"
Turns into code that's basically (from the description in Programming in Scala, 3rd ed., by Odersky, Spoon, and Venners):
val _tmpTuple2 =
  "user3103957@stackoverflow.example.xyz" match {
    case str: String =>
      Email.unapply(str).getOrElse(throw ???)
    case _ => throw ???
  }
val u = _tmpTuple2._1
val s = _tmpTuple2._2
Technically, since the compiler already knows that the value is a String, the type check is elided, but I've included it for generality. The desugaring of extractors in a pattern match also need not throw except on the last extractor attempt.
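For completeness, here is the same extractor used in an ordinary match, where a failed extraction simply falls through to the next case instead of throwing:

```scala
// Option-returning extractor: a None from unapply means "no match here",
// so the match moves on to the wildcard case.
object Email {
  def unapply(s: String): Option[(String, String)] =
    s.indexOf('@') match {
      case idx if idx >= 0 =>
        val (user, maybeSite) = s.splitAt(idx)
        if (maybeSite.length < 2 || maybeSite.lastIndexOf('@') > 0) None
        else Some(user -> maybeSite.tail)
      case _ => None
    }
}

def describe(s: String): String = s match {
  case Email(user, site) => s"user=$user site=$site"
  case _                 => "not an email"
}
```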
I want to evaluate a function passed as a string variable in Scala (sorry, but I'm new to Scala).
def concate(a: String, b: String): String = {
  a + " " + b
}
var func = "concate" // I'll get this function name from config as a string
I want to perform something like
eval(func("hello","world")) //Like in Python
so output will be like
hello world
Eventually I want to execute a few built-in functions on a string coming from my config, and I don't want to hard-code the function names in the code.
EDIT
To be more clear about my exact use case:
I have a config file which defines multiple functions; these are Spark built-in functions operating on a DataFrame.
application.conf looks like
transformations = [
  {
    "table" : "users",
    "function" : "from_unixtime",
    "column" : "epoch"
  },
  {
    "table" : "users",
    "function" : "yearofweek",
    "column" : "epoch"
  }
]
Now, yearofweek and from_unixtime are Spark built-in functions, and I want to evaluate my DataFrame using the functions defined in the config. All the functions are applied to the column defined.
The obvious way is to write an if/else and call a particular built-in function after a string comparison, but that is way too much...
I am looking for a better solution.
This is indeed possible in Scala, as Scala is a JSR 223-compliant scripting language. Here is an example (running on Scala 2.11.8). Note that you need to import your method, because otherwise the interpreter will not find it:
package my.example

object EvalDemo {
  // evaluates Scala code and returns the result as T
  def evalAs[T](code: String) = {
    import scala.reflect.runtime.currentMirror
    import scala.tools.reflect.ToolBox
    val toolbox = currentMirror.mkToolBox()
    import toolbox.{eval, parse}
    eval(parse(code)).asInstanceOf[T]
  }

  def concate(a: String, b: String): String = a + " " + b

  def main(args: Array[String]): Unit = {
    var func = "concate" // I'll get this function name from config as a string
    val code =
      s"""
         |import my.example.EvalDemo._
         |${func}("hello","world")
         |""".stripMargin
    val result: String = evalAs[String](code)
    println(result) // "hello world"
  }
}
Have Function to name mapping in the code
def foo(str: String) = str + ", foo"
def bar(str: String) = str + ", bar"
val fmap = Map("foo" -> foo _, "bar" -> bar _)
fmap("foo")("hello")
Now, based on the function name we get from the config, we pass the name to the map, look up the corresponding function, and invoke it with the arguments.
Scala REPL
scala> :paste
// Entering paste mode (ctrl-D to finish)
def foo(str: String) = str + ", foo"
def bar(str: String) = str + ", bar"
val fmap = Map("foo" -> foo _, "bar" -> bar _)
fmap("foo")("hello")
// Exiting paste mode, now interpreting.
foo: (str: String)String
bar: (str: String)String
fmap: scala.collection.immutable.Map[String,String => String] = Map(foo -> $$Lambda$1104/1335082762@778a1250, bar -> $$Lambda$1105/841090268@55acec99)
res0: String = hello, foo
Spark offers you a way to write your transformations or queries using SQL, so you really don't have to worry about Scala functions, casting, and evaluation in this case. You just have to parse your config to generate the SQL query.
Let's say you have registered a table users with Spark and want to do a select and transform based on the provided config:
// your generated query will look like this,
val query = "SELECT from_unixtime(epoch) as time, weekofyear(epoch) FROM users"
val result = spark.sql(query)
So, all you need to do is - build that query from your config.
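A sketch of that assembly step (the Transformation case class and buildQuery helper are made-up names here, assuming the config entries have already been parsed, e.g. with a config library):

```scala
// Hypothetical parsed form of one entry in the transformations list.
case class Transformation(table: String, function: String, column: String)

// Assemble "SELECT fn(col), ... FROM table" from the parsed entries.
// Assumes a non-empty list where all entries target the same table.
def buildQuery(transformations: Seq[Transformation]): String = {
  val table = transformations.head.table
  val cols = transformations
    .map(t => s"${t.function}(${t.column})")
    .mkString(", ")
  s"SELECT $cols FROM $table"
}

val query = buildQuery(Seq(
  Transformation("users", "from_unixtime", "epoch"),
  Transformation("users", "weekofyear", "epoch")
))
// query == "SELECT from_unixtime(epoch), weekofyear(epoch) FROM users"
```

The resulting string can then be handed straight to spark.sql.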
I have the following SQL query, which I'd like to map to Slick
SELECT * FROM rates
WHERE '3113212512' LIKE (prefix || '%') and present = true
ORDER BY prefix DESC
LIMIT 1;
However, the like operator is not defined on String:
case class Rate(grid: String, prefix: String, present: Boolean)

class Rates(tag: Tag) extends Table[Rate](tag, "rates") {
  def grid = column[String]("grid", O.PrimaryKey, O.NotNull)
  def prefix = column[String]("prefix", O.NotNull)
  def present = column[Boolean]("present", O.NotNull)

  // Foreign keys
  def ratePlan = foreignKey("rate_plan_fk", ratePlanId, RatePlans)(_.grid)
  def ratePlanId = column[String]("rate_plan_id", O.NotNull)

  def * = (grid, prefix, present) <> (Rate.tupled, Rate.unapply)
}

object Rates extends TableQuery(new Rates(_)) {
  def findActiveRateByRatePlanAndPartialPrefix(ratePlan: String, prefix: String) = {
    DB withSession { implicit session: Session =>
      Rates.filter(_.ratePlanId === ratePlan)
        .filter(_.present === true)
        .filter(prefix like _.prefix)
        .sortBy(_.prefix.desc).firstOption
    }
  }
}
Obviously it's logical that something like this won't work:
.filter(prefix like _.prefix)
And:
.filter(_.prefix like prefix)
Will render incorrect SQL, and I'm not even considering the '%' right now (this is only the WHERE clause concerning the prefix):
(x2."prefix" like '3113212512')
Instead of:
'3113212512' LIKE (prefix || '%')
Of course I could solve this using a static query, but I'd like to know whether this is possible at all.
For clarity, here are some prefixes in the database
31
3113
312532623
31113212
And 31113212 is the expected result.
You could simply rewrite your query in a style that is supported. I think this should be equivalent:
.filter(s => (s.prefix === "") || s.prefix.isEmpty || LiteralColumn(prefix) like s.prefix)
If the conditions are more complex and you need a Case expression, see http://slick.typesafe.com/doc/2.1.0/sql-to-slick.html#case
If you still want the || operator you may be able to define it this way:
/** SQL || for String (currently collides with Column[Boolean] || so other name) */
val thisOrThat = SimpleBinaryOperator[String]("||")
...
.filter(s => LiteralColumn(prefix) like thisOrThat(s.prefix,"%"))
I didn't try this, though, and I saw we don't have a test for SimpleBinaryOperator. Open a ticket on github.com/slick/slick if it doesn't work. We should probably also change it to return a typed function; I added a ticket for that: https://github.com/slick/slick/issues/1073.
Also see http://slick.typesafe.com/doc/2.1.0/userdefined.html#scalar-database-functions
Luckily, SimpleBinaryOperator is not even needed here.
UPDATE 2
The concat is actually not required, as the filter can also be written as follows:
filter(s => LiteralColumn(prefix) like (s.prefix ++ "%"))
UPDATE:
After hints from cvogt, this is the final result without using internal Slick APIs:
val concat = SimpleBinaryOperator[String]("||")
Rates.filter(_.ratePlanId === ratePlan)
  .filter(_.present === true)
  .filter(s => LiteralColumn(prefix) like concat(s.prefix, "%"))
  .sortBy(_.prefix.desc).firstOption
Which perfectly gives:
and ('3113212512' like (x2."prefix" || '%'))
Alternative:
This seems to be possible. Note, however, that the following method uses Slick's internal API and may break in the future due to API changes (see the comment by cvogt). I used Slick 2.1.0.
Basically, I created an extension to Slick in which I define a custom extension method that implements the required result:
package utils

import scala.language.{higherKinds, implicitConversions}
import scala.slick.ast.ScalaBaseType._
import scala.slick.ast._
import scala.slick.lifted.{Column, ExtensionMethods}

object Helpers {
  implicit class PrefixExtensionMethods[P1](val c: scala.slick.lifted.Column[P1]) extends AnyVal with ExtensionMethods[String, P1] {
    def subPrefixFor[P2, R](prefix: Column[P2])(implicit om: o#arg[String, P2]#to[Boolean, R]) = {
      om.column(Library.Like, prefix.toNode, om.column(new Library.SqlOperator("||"), n, LiteralNode("%")).toNode)
    }
  }
}
This can now be used as follows:
import utils.Helpers._
[...]
def findActiveRateByRatePlanAndPartialPrefix(ratePlan: String, prefix: String) = {
  DB withSession { implicit session: Session =>
    Rates.filter(_.ratePlanId === ratePlan)
      .filter(_.present === true)
      .filter(_.prefix subPrefixFor prefix)
      .sortBy(_.prefix.desc).firstOption
  }
}
And it will render the correct result:
and ('3113212512' like (x2."prefix" || '%'))
Notes:
I could probably use some better naming
I had to chain the || and like using om.column
I introduced a new SqlOperator, since the Or operator renders or, and I need Postgres concatenation (||)
I am trying to get the underlying query's case classes from a Scala Slick query, and it seems more difficult than it should be. Here is my compiler error:
[info] Compiling 18 Scala sources to /home/target/scala-2.11/classes...
[error] /home/src/main/scala/com/core/address/AddressDAO.scala:30: type mismatch;
[error] found : scala.slick.lifted.Query[com.core.address.AddressDAO,com.core.protocol.Address,Seq]
[error] required: Option[com.core.protocol.Address]
[error] q
Here is the method I have written:
def getAddress(otherAddress: String): Future[Option[Address]] = {
  future {
    val q = for (addr <- addresses if (addr.address == otherAddress)) yield addr
    q
  }
}
Here is the Slick Schema:
class AddressDAO(tag: Tag) extends Table[Address](tag, "ADDRESSES") with DbConfig {
  def address = column[String]("ADDRESS", O.PrimaryKey)
  def hash160 = column[String]("HASH160")
  def n_tx = column[Long]("N_TX")
  def total_received = column[Double]("TOTAL_RECEIVED")
  def total_sent = column[Double]("TOTAL_SENT")
  def final_balance = column[Double]("FINAL_BALANCE")

  def * = (hash160, address, n_tx, total_received, total_sent, final_balance) <> (Address.tupled, Address.unapply)
}
What I want to do is expressed in the return type of getAddress, which is Future[Option[Address]]: I want the first element the database finds. The type actually being returned is scala.slick.lifted.Query[com.core.address.AddressDAO,com.core.protocol.Address,Seq].
There does not seem to be an execute method or anything to kick off the query. I suspect this can be done with for comprehensions, but I cannot figure out how to actually do it.
Thanks!
I am certainly no Slick expert, but after a quick perusal of the docs, I wonder whether wrapping something like this in your Future would work:
addresses.filter(_.address === "givenAddressString").firstOption
You have to actually run your query.
q.firstOption is probably what you want to use:
def getAddress(otherAddress: String): Future[Option[Address]] = {
  future {
    // Note: use Slick's === column operator here; plain == would compare the
    // column object itself to the String and never match a row.
    val q = for (addr <- addresses if (addr.address === otherAddress)) yield addr
    q.firstOption
  }
}
PS: Simply filtering addresses for the given value of address is indeed more concise. And of course the performance of this query depends on whether you have an index on that field and on the size of your table :)
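In plain-collections terms (an analogy for illustration only, not the Slick API; this two-field Address is a stand-in for the question's class), firstOption plays the same role for a query that headOption plays for a Seq:

```scala
// Stand-in for the question's Address row type.
case class Address(address: String, balance: Double)

val addresses = Seq(Address("addr1", 1.0), Address("addr2", 2.0))

// filter + headOption: Some(first match) or None, never an exception.
def getAddress(other: String): Option[Address] =
  addresses.filter(_.address == other).headOption
```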
I'm parsing XML, and keep finding myself writing code like:
val xml = <outertag>
<dog>val1</dog>
<cat>val2</cat>
</outertag>
var cat = ""
var dog = ""
for (inner <- xml \ "_") {
  inner match {
    case <dog>{ dg @ _* }</dog> => dog = dg(0).toString()
    case <cat>{ ct @ _* }</cat> => cat = ct(0).toString()
  }
}
/* do something with dog and cat */
It annoys me because I should be able to declare cat and dog as vals (immutable), since I only need to set them once, but instead I have to make them mutable. And besides that, it just seems like there must be a better way to do this in Scala. Any ideas?
Here are two (now make it three) possible solutions. The first one is pretty quick and dirty. You can run the whole bit in the Scala interpreter.
val xmlData = <outertag>
<dog>val1</dog>
<cat>val2</cat>
</outertag>
// A very simple way to do this mapping.
def simpleGetNodeValue(x:scala.xml.NodeSeq, tag:String) = (x \\ tag).text
val cat = simpleGetNodeValue(xmlData, "cat")
val dog = simpleGetNodeValue(xmlData, "dog")
cat will be "val2", and dog will be "val1".
Note that if either node is not found, an empty string will be returned. You can work around this, or you could write it in a slightly more idiomatic way:
// A more idiomatic Scala way, even though Scala wouldn't give us nulls.
// This returns an Option[String].
def getNodeValue(x: scala.xml.NodeSeq, tag: String) = {
  (x \\ tag).text match {
    case "" => None
    case x: String => Some(x)
  }
}
val cat1 = getNodeValue(xmlData, "cat") getOrElse "No cat found."
val dog1 = getNodeValue(xmlData, "dog") getOrElse "No dog found."
val goat = getNodeValue(xmlData, "goat") getOrElse "No goat found."
cat1 will be "val2", dog1 will be "val1", and goat will be "No goat found."
UPDATE: Here's one more convenience method to take a list of tag names and return their matches as a Map[String, String].
// Searches for all tags in the List and returns a Map[String, String].
def getNodeValues(x: scala.xml.NodeSeq, tags: List[String]) = {
  tags.foldLeft(Map[String, String]()) { (a, b) => a + (b -> simpleGetNodeValue(x, b)) }
}
val tagsToMatch = List("dog", "cat")
val matchedValues = getNodeValues(xmlData, tagsToMatch)
If you run that, matchedValues will be Map(dog -> val1, cat -> val2).
Hope that helps!
UPDATE 2: Per Daniel's suggestion, I'm using the double-backslash operator, which descends into child elements and may hold up better as your XML dataset evolves.
scala> val xml = <outertag><dog>val1</dog><cat>val2</cat></outertag>
xml: scala.xml.Elem = <outertag><dog>val1</dog><cat>val2</cat></outertag>
scala> val cat = xml \\ "cat" text
cat: String = val2
scala> val dog = xml \\ "dog" text
dog: String = val1
Consider wrapping the XML inspection and pattern matching in a function that returns the multiple values you need as a tuple (a Tuple2[String, String]). But stop and consider: it looks like it's possible to match no dog or cat elements at all, which would leave you returning null for one or both tuple components. Perhaps you could return a tuple of Option[String], or throw if either element pattern fails to bind.
In any case, you can generally solve these initialization problems by wrapping up the constituent statements into a function to yield an expression. Once you have an expression in hand, you can initialize a constant with the result of its evaluation.
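For instance, here is a sketch of the tuple-of-Option approach, reusing the Option-returning idea from the earlier answer (this assumes the scala-xml module is on the classpath for the XML literals; dogAndCat is a made-up name):

```scala
import scala.xml.NodeSeq

// One function does all the inspection and returns both values at once,
// so the caller can bind them to immutable vals in a single step.
def dogAndCat(x: NodeSeq): (Option[String], Option[String]) = {
  def opt(tag: String): Option[String] = (x \\ tag).text match {
    case "" => None
    case s  => Some(s)
  }
  (opt("dog"), opt("cat"))
}

val (dog, cat) = dogAndCat(<outertag><dog>val1</dog><cat>val2</cat></outertag>)
// dog == Some("val1"), cat == Some("val2") -- both are vals, never reassigned
```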