How to solve the "reassignment to val" error in Scala?

I am writing a parser that calculates the result of an expression containing floats and an RDD. I have overridden +, -, /, and * and they work fine. In one part I am getting the well-known "reassignment to val" error but cannot figure out how to solve it.
Part of the code is as follows:
def calc: Parser[Any] = rep(term2 ~ operator) ^^ {
  // match a list of term~operator
  case termss =>
    var stack = List[Either[RDD[(Int, Array[Float])], Float]]()
    var lastop: (Either[RDD[(Int, Array[Float])], Float], Either[RDD[(Int, Array[Float])], Float]) => RDD[(Int, Array[Float])] = add
    termss.foreach(t =>
      t match { case nums ~ op => {
        if (nums == "/path1/test3D.xml")
          nums = sv.getInlineArrayRDD()
        lastop = op; stack = reduce(stack ++ nums, op)
      }}
    )
    stack.reduceRight((x, y) => lastop(y, x))
}
def term2: Parser[List[Any]] = rep(factor2)
def factor2: Parser[Any] = pathIdent | num | "(" ~> calc <~ ")"
def num: Parser[Float] = floatingPointNumber ^^ (_.toFloat)
I defined pathIdent to parse paths.
Here is the error:
[error] reassignment to val:
[error] nums=sv.getInlineArrayRDD()
[error] ^
I changed def to var in term2, factor2, and num, although I knew that seemed incorrect; it was the only thing that came to mind to test, and it didn't work.
Where is it coming from?

In this piece of code:
case nums ~ op => {
  if (nums == "/path1/test3D.xml")
    nums = sv.getInlineArrayRDD()
nums isn't reassignable because it is bound by the pattern match (see the case line): pattern variables are vals. The last line (nums = ...) tries to assign to nums, which the compiler rejects. Instead of reassigning, bind the computed value to a new name, as sketched below.
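Here is a standalone sketch of that fix (the names and the "loadedRdd" placeholder are invented for illustration, not taken from the question):

object ReassignDemo extends App {
  // Stand-in for the parser's `nums ~ op` result.
  Some(("/path1/test3D.xml", "+")) match {
    case Some((nums, op)) =>
      // nums = "something else"   // would not compile: "reassignment to val"
      val resolved =               // bind the possibly-replaced value to a new name
        if (nums == "/path1/test3D.xml") "loadedRdd" else nums
      println(s"$resolved $op")
    case None =>
      println("no input")
  }
}

In the question's foreach the same shape applies: compute something like val resolved = if (nums == "/path1/test3D.xml") sv.getInlineArrayRDD() else nums and use resolved afterwards (bearing in mind that the RDD and the parsed term list still need a common type for stack ++ resolved to typecheck).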

Related

scala match on iteration, weird compiler error

When compiled, this small piece of Scala code gives a strange error.
import scala.collection.mutable.ListBuffer

object main {
  def main(args: Array[String]) {
    val rs = new ListBuffer[String]()
    val ns = scala.collection.mutable.Map[String, String]()
    "A very long string".split("\\ ") foreach { word =>
      word match {
        case x if x.length() > 7 => ns += x -> "="
        case _ => rs += word
      }
    }
  }
}
Gives the following error:
test.scala:11: error: type arguments [String,Iterable[java.io.Serializable] with PartialFunction[String with Int,String] with scala.collection.generic.Subtractable[String,Iterable[java.io.Serializable] with PartialFunction[String with Int,String] with scala.collection.generic.Subtractable[String,Equals]]{def seq: Iterable[java.io.Serializable] with PartialFunction[String with Int,String]}] do not conform to trait Subtractable's type parameter bounds [A,+Repr <: scala.collection.generic.Subtractable[A,Repr]]
"A very long string".split("\\ ") foreach { word =>
^
one error found
Any hints?
This happens because your pattern match returns a lengthy compound type:
[String,Iterable[java.io.Serializable] with PartialFunction[String with Int,String] with scala.collection.generic.Subtractable[String,Iterable[java.io.Serializable] with PartialFunction[String with Int,String] with scala.collection.generic.Subtractable[String,Equals]]{def seq: Iterable[java.io.Serializable] with PartialFunction[String with Int,String]}]
One of the types in the linearization is Subtractable[String, Iterable[Serializable]], and your type doesn't conform to its type constraint:
trait Subtractable[A, +Repr <: Subtractable[A, Repr]]
To fix this, help the compiler out with a type annotation on foreach:
"A very long string"
.split("\\")
.foreach[Unit] {
case x if x.length() > 7 => ns += x -> "="
case word => rs += word
}
As a side note, perhaps you can find the following implementation helpful:
val (largerThanSeven, smallerThanSeven) = "A very long string"
  .split("\\ ")
  .partition(_.length > 7)
val largerThanSevenMap = largerThanSeven.map(_ -> "=").toMap
The problem is here:
ns += x -> "="
I guess the intention is to put the key x and the value "=" into the mutable map.
This will compile if changed to
ns(x) = "="
The problem is that implicit conversions are involved with ->, and the inferred types blow up into the monstrous compound type shown in the error message.
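Another workaround in the same spirit (my own sketch, not from either answer above): make every branch of the match explicitly end in (), so each branch has type Unit and no exotic least upper bound is ever computed:

import scala.collection.mutable.ListBuffer

object MainWithUnitBranches {
  def main(args: Array[String]): Unit = {
    val rs = new ListBuffer[String]()
    val ns = scala.collection.mutable.Map[String, String]()
    "A very long string".split("\\ ").foreach { word =>
      word match {
        case x if x.length() > 7 =>
          ns += x -> "="
          ()                       // force this branch to Unit
        case _ =>
          rs += word
          ()
      }
    }
  }
}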

Representing ad-hoc operator precedence in Scala Parser

So I'm trying to write a parser for the arithmetic fragment of a programming language I'm playing with, using Scala's RegexParsers.
As it stands, my top-level expression parser is of the form:
parser: Parser[Exp] = binAppExp | otherKindsOfParserLike | lval | int
It accepts lvals (things like "a.b, a.b[c.d], a[b], {record=expression, like=this}") just fine. Now, I'd like to enable expressions like "1 + b / c = d", but potentially with (source-language, not Scala) compile-time user-defined operators.
My initial thought was that if I encode the operations recursively and numerically by precedence, then I could add higher precedences ad hoc, and each level of precedence could parse by consuming lower-precedence sub-terms on the right side of the operator expression. So I'm trying to build a toy of that idea with just some fairly common operators.
So I'd expect "1 * 2+1" to parse into something like Call(*, Seq(1, Call(+, Seq(2, 1)))), where case class Call(functionName: String, args: Seq[Exp]) extends Exp.
Instead though, it parses as IntExp(1).
Is there a reason that this can't work (is it left-recursive in a way I'm missing? If so, I'm sure there's something else wrong, or it'd never terminate, right?), or is it just plain wrong for some other reason?
def binAppExp: Parser[Exp] = {
  // assume a registry of operations
  val ops = Map(
    (7, Set("*", "/")),
    (6, Set("-", "+")),
    (4, Set("=", "!=", ">", "<", ">=", "<=")),
    (3, Set("&")),
    (2, Set("|"))
  )
  // relevant ops for a level of precedence
  def opsWithPrecedence(n: Int): Set[String] = ops.getOrElse(n, Set.empty)
  // parse an op with some level of precedence
  def opWithPrecedence(n: Int): Parser[String] = ".+".r ^? (
    { case s if opsWithPrecedence(n).contains(s) => s },
    { case s => s"SYMBOL NOT FOUND: $s" }
  )
  // assuming the parse happens, encode it as an AST representation
  def folder(h: Exp, t: LangParser.~[String, Exp]): CallExp =
    CallExp(t._1, Seq(h, t._2))
  val maxPrecedence: Int = ops.maxBy(_._1)._1
  def term: (Int => Parser[Exp]) = {
    case 0 => lval | int | notApp | "(" ~> term(maxPrecedence) <~ ")"
    case n =>
      val lowerTerm = term(n - 1)
      lowerTerm ~ rep(opWithPrecedence(n) ~ lowerTerm) ^^ {
        case h ~ ts => ts.foldLeft(h)(folder)
      }
  }
  term(maxPrecedence)
}
Okay, so there was nothing inherently impossible with what I was trying to do, it was just wrong in the details.
The core idea is just: maintain a mapping from level of precedence to operators/parsers, and recursively look for parses based on that table. If you allow parenthesized expressions, just nest a call to your top-level parser (the one for the maximum precedence level) inside the parser for parenthesized terms.
Just in case anyone else ever wants to do something like this, here's code for a set of arithmetic/logical operators, heavily commented to relate it to the above:
def opExp: Parser[Exp] = {
  sealed trait Assoc
  val ops = Map(
    (1, Set("*", "/")),
    (2, Set("-", "+")),
    (3, Set("=", "!=", ">", "<", ">=", "<=")),
    (4, Set("&")),
    (5, Set("|"))
  )
  def opsWithPrecedence(n: Int): Set[String] = ops.getOrElse(n, Set.empty)
  /* Before, this was trying to match the remainder of the expression,
     so something like `3 - 2` would parse the Int(3)
     and try to pass "- 2" as an operator to the op parser.
     RegexParsers has an implicit def "literal: String => SubclassOfParser[String]"
     that I'm using explicitly here. */
  def opWithPrecedence(n: Int): Parser[String] = {
    val ops = opsWithPrecedence(n)
    if (ops.size > 1) {
      ops.map(literal).fold(literal(ops.head)) {
        case (l1, l2) => l1 | l2
      }
    } else if (ops.size == 1) {
      literal(ops.head)
    } else {
      failure(s"No Ops for Precedence $n")
    }
  }
  def folder(h: Exp, t: TigerParser.~[String, Exp]): CallExp = CallExp(t._1, Seq(h, t._2))
  val maxPrecedence: Int = ops.maxBy(_._1)._1
  def term: (Int => Parser[Exp]) = {
    case 0 => lval | int | "(" ~> { term(maxPrecedence) } <~ ")"
    case n if n > 0 =>
      val lowerTerm = term(n - 1)
      lowerTerm ~ rep(opWithPrecedence(n) ~ lowerTerm) ^^ {
        case h ~ ts if ts.nonEmpty => ts.foldLeft(h)(folder)
        case h ~ _ => h
      }
  }
  term(maxPrecedence)
}
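If you want to try the idea in isolation, here is a stripped-down, self-contained sketch of the same table-driven scheme (the AST classes and the two-level operator table below are mine, chosen just for the demo, not the original TigerParser code):

import scala.util.parsing.combinator.RegexParsers

object PrecedenceDemo extends RegexParsers {
  sealed trait Exp
  case class IntExp(value: Int) extends Exp
  case class Call(op: String, args: Seq[Exp]) extends Exp

  // level 1 binds tighter than level 2
  val ops = Map(1 -> Set("*", "/"), 2 -> Set("+", "-"))
  val maxPrecedence: Int = ops.keys.max

  def int: Parser[Exp] = """\d+""".r ^^ (s => IntExp(s.toInt))

  // alternation over all operators registered at one precedence level
  def opWithPrecedence(n: Int): Parser[String] =
    ops(n).map(literal).reduce(_ | _)

  def term(n: Int): Parser[Exp] = n match {
    case 0 => int | "(" ~> term(maxPrecedence) <~ ")"
    case _ =>
      term(n - 1) ~ rep(opWithPrecedence(n) ~ term(n - 1)) ^^ {
        case h ~ ts => ts.foldLeft(h) { case (l, op ~ r) => Call(op, Seq(l, r)) }
      }
  }

  def apply(s: String) = parseAll(term(maxPrecedence), s)
}

PrecedenceDemo("1 * 2+1") then parses to a Call(+, ...) with a Call(*, ...) nested inside it, i.e. multiplication binds tighter and operators at the same level fold left-associatively.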

Is it possible to annotate lambda in scala?

For my DSL I need something in the spirit of:
@deprecated def foo(x: Int) = x
... only for lambdas/anonymous functions.
Is something like this possible?
Apparently this exists in the language according to the lang spec:
An annotation of an expression e appears after the expression e,
separated by a colon.
So this is supposed to work:
object TestAnnotation {
  val o = Some(1)
  def f = o.map(_ + 1: @deprecated("gone", "forever"))
  val e = { 1 + 2 }: @deprecated("hmm", "y")
  println(f)
  println(e)
}
However, when I compile it with scalac -deprecation I get no warnings whatsoever. I opened an issue here and got a response that it's not supported.
One workaround you could use is to declare the lambda separately:
object TestAnnotation {
  val o = Some(1)
  @deprecated("this", "works") val deprecatedLambda: Int => Int = _ + 1
  o.map(deprecatedLambda)
}
scalac then gives:
Annotation.scala:6: warning: value deprecatedLambda in object TestAnnotation is deprecated: this
o.map(deprecatedLambda)
^
one warning found
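A closely related variation (my own sketch, not from the answer): annotate a method instead of a val and let eta-expansion produce the function value; scalac -deprecation warns at the use site in the same way.

object TestAnnotation2 {
  val o = Some(1)

  @deprecated("use something else", "1.0")
  def deprecatedFn(x: Int): Int = x + 1

  // referencing the deprecated method (eta-expanded to Int => Int) triggers the warning
  o.map(deprecatedFn)
}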

How to ignore single line comments in a parser-combinator

I have a working parser, but I've just realised I do not cater for comments. In the DSL I am parsing, comments start with a ; character. If a ; is encountered, the rest of the line is ignored (the whole line is only ignored if its first character is ;).
I am extending RegexParsers for my parser and ignoring whitespace (the default way), so I am losing the newline characters anyway. I don't wish to modify each and every parser I have to cater for the possibility of comments either, because statements can span multiple lines (so each part of each statement may end with a comment). Is there any clean way to achieve this?
One thing that may influence your choice is whether comments can appear inside the constructs your parsers match. For instance, let's say you have something like:
val p = "(" ~> "[a-z]*".r <~ ")"
which would parse something like ( abc ) but because of comments you could actually encounter something like:
( ; comment goes here
abc
)
Then I would recommend using TokenParsers or one of its subclasses. It's more work because you have to provide a lexical parser that does a first pass to discard the comments. But it is also more flexible if you have nested comments, or if the ; can be escaped, or if the ; can appear inside a string literal like:
abc = "; don't ignore this" ; ignore this
On the other hand, you could also try to override the value of whiteSpace to be something like
override protected val whiteSpace = """(\s|;.*)+""".r
Or something along those lines.
For instance using the example from the RegexParsers scaladoc:
import scala.util.parsing.combinator.RegexParsers

object so1 {
  Calculator("""(1 + ; foo
(1 + 2))
; bar""")
}

object Calculator extends RegexParsers {
  override protected val whiteSpace = """(\s|;.*)+""".r
  def number: Parser[Double] = """\d+(\.\d*)?""".r ^^ { _.toDouble }
  def factor: Parser[Double] = number | "(" ~> expr <~ ")"
  def term: Parser[Double] = factor ~ rep("*" ~ factor | "/" ~ factor) ^^ {
    case number ~ list => (number /: list) {
      case (x, "*" ~ y) => x * y
      case (x, "/" ~ y) => x / y
    }
  }
  def expr: Parser[Double] = term ~ rep("+" ~ log(term)("Plus term") | "-" ~ log(term)("Minus term")) ^^ {
    case number ~ list => list.foldLeft(number) { // same as before, using alternate name for /:
      case (x, "+" ~ y) => x + y
      case (x, "-" ~ y) => x - y
    }
  }
  def apply(input: String): Double = parseAll(expr, input) match {
    case Success(result, _) => result
    case failure: NoSuccess => scala.sys.error(failure.msg)
  }
}
This prints:
Plus term --> [2.9] parsed: 2.0
Plus term --> [2.10] parsed: 3.0
res0: Double = 4.0
Just filter out all the comments with a regex before you pass the code into your parser.
def removeComments(input: String): String = {
  """(?ms)\".*?\"|;.*?$|.+?""".r.findAllIn(input).map(str => if (str.startsWith(";")) "" else str).mkString
}

val code =
  """abc "def; ghij"
abc ;this is a comment
def"""

println(removeComments(code))

Parentheses matching in Scala --- functional approach

Let's say I want to parse a string with various opening and closing brackets (I used parentheses in the title because I believe it is more common; the question is the same nevertheless) so that I get all the top-level groups separated into a list.
Given:
[hello:=[notting],[hill]][3.4(4.56676|5.67787)][the[hill[is[high]]not]]
I want:
List("[hello:=[notting],[hill]]", "[3.4(4.56676|5.67787)]", "[the[hill[is[high]]not]]")
The way I am doing this is by counting the opening and closing brackets and adding to the list whenever my counter reaches 0. However, my code is ugly and imperative. You may assume that the original string is well formed.
My question is: what would be a nice functional approach to this problem?
Note: I have thought of using the for...yield construct, but because of the counters I cannot get away with a simple conditional (I need conditionals just for updating the counters as well), and I do not know how I could use this construct in this case.
Quick solution using Scala parser combinator library:
import util.parsing.combinator.RegexParsers

object Parser extends RegexParsers {
  lazy val t = "[^\\[\\]\\(\\)]+".r

  def paren: Parser[String] =
    ("(" ~ rep1(t | paren) ~ ")" |
     "[" ~ rep1(t | paren) ~ "]") ^^ {
      case o ~ l ~ c => (o :: l ::: c :: Nil) mkString ""
    }

  def all = rep(paren)

  def apply(s: String) = parseAll(all, s)
}
Checking it in REPL:
scala> Parser("[hello:=[notting],[hill]][3.4(4.56676|5.67787)][the[hill[is[high]]not]]")
res0: Parser.ParseResult[List[String]] = [1.72] parsed: List([hello:=[notting],[hill]], [3.4(4.56676|5.67787)], [the[hill[is[high]]not]])
What about:
def split(input: String): List[String] = {
  def loop(pos: Int, ends: List[Int], xs: List[String]): List[String] =
    if (pos >= 0)
      if ((input charAt pos) == ']') loop(pos - 1, pos + 1 :: ends, xs)
      else if ((input charAt pos) == '[')
        if (ends.size == 1) loop(pos - 1, Nil, input.substring(pos, ends.head) :: xs)
        else loop(pos - 1, ends.tail, xs)
      else loop(pos - 1, ends, xs)
    else xs
  loop(input.length - 1, Nil, Nil)
}
scala> val s1 = "[hello:=[notting],[hill]][3.4(4.56676|5.67787)][the[hill[is[high]]not]]"
s1: String = [hello:=[notting],[hill]][3.4(4.56676|5.67787)][the[hill[is[high]]not]]
scala> val s2 = "[f[sad][add]dir][er][p]"
s2: String = [f[sad][add]dir][er][p]
scala> split(s1) foreach println
[hello:=[notting],[hill]]
[3.4(4.56676|5.67787)]
[the[hill[is[high]]not]]
scala> split(s2) foreach println
[f[sad][add]dir]
[er]
[p]
Given your requirements, counting the parentheses seems perfectly fine. How would you do that in a functional way? You can pass the state around explicitly.
So first we define our state, which accumulates completed blocks, builds up the current block, and keeps track of the nesting depth:
case class Parsed(blocks: Vector[String], block: String, depth: Int)
Then we write a pure function that processes one character and returns the next state. Hopefully, we can just look carefully at this one function and make sure it's correct.
def nextChar(parsed: Parsed, c: Char): Parsed = {
  import parsed._
  c match {
    case '[' | '(' => parsed.copy(block = block + c,
                                  depth = depth + 1)
    case ']' | ')' if depth == 1
                   => parsed.copy(blocks = blocks :+ (block + c),
                                  block = "",
                                  depth = depth - 1)
    case ']' | ')' => parsed.copy(block = block + c,
                                  depth = depth - 1)
    case _         => parsed.copy(block = block + c)
  }
}
Then we just use a foldLeft to process the data with an initial state:
val data = "[hello:=[notting],[hill]][3.4(4.56676|5.67787)][the[hill[is[high]]not]]"
val parsed = data.foldLeft(Parsed(Vector(), "", 0))(nextChar)
parsed.blocks foreach println
Which returns:
[hello:=[notting],[hill]]
[3.4(4.56676|5.67787)]
[the[hill[is[high]]not]]
You have an ugly imperative solution, so why not make a good-looking one? :)
This is an imperative translation of huynhjl's solution, posted just to show that sometimes imperative code is concise and perhaps easier to follow.
def parse(s: String) = {
  var res = Vector[String]()
  var depth = 0
  var block = ""
  for (c <- s) {
    block += c
    c match {
      case '[' => depth += 1
      case ']' =>
        depth -= 1
        if (depth == 0) {
          res :+= block
          block = ""
        }
      case _ =>
    }
  }
  res
}
Try this:
val s = "[hello:=[notting],[hill]][3.4(4.56676|5.67787)][the[hill[is[high]]not]]"
s.split("]\\[").toList
returns:
List[String](
[hello:=[notting],[hill],
3.4(4.56676|5.67787),
the[hill[is[high]]not]]
)