I'll give you the tl;dr up front: I'm trying to use the state monad transformer in Scalaz 7 to thread extra state through a parser, and I'm having trouble doing anything useful without writing a lot of t m a -> t m b versions of m a -> m b methods.
An example parsing problem
Suppose I have a string containing nested parentheses with digits inside them:
val input = "((617)((0)(32)))"
I also have a stream of fresh variable names (characters, in this case):
val names = Stream('a' to 'z': _*)
I want to pull a name off the top of the stream and assign it to each parenthetical
expression as I parse it, and then map that name to a string representing the
contents of the parentheses, with the nested parenthetical expressions (if any) replaced by their
names.
To make this more concrete, here's what I'd want the output to look like for the example input above:
val target = Map(
  'a' -> "617",
  'b' -> "0",
  'c' -> "32",
  'd' -> "bc",
  'e' -> "ad"
)
There may be either a string of digits or arbitrarily many sub-expressions at a given level, but these two kinds of content won't be mixed in a single parenthetical expression.
To keep things simple, we'll assume that the stream of names will never
contain either duplicates or digits, and that it will always contain enough
names for our input.
Using parser combinators with a bit of mutable state
The example above is a slightly simplified version of the parsing problem in
this Stack Overflow question.
I answered that question with
a solution that looked roughly like this:
import scala.util.parsing.combinator._
class ParenParser(names: Iterator[Char]) extends RegexParsers {
  def paren: Parser[List[(Char, String)]] = "(" ~> contents <~ ")" ^^ {
    case (s, m) => (names.next -> s) :: m
  }

  def contents: Parser[(String, List[(Char, String)])] =
    "\\d+".r ^^ (_ -> Nil) | rep1(paren) ^^ (
      ps => ps.map(_.head._1).mkString -> ps.flatten
    )

  def parse(s: String) = parseAll(paren, s).map(_.toMap)
}
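For reference, a quick check of the expected shape (assuming the class above is in scope; the ordering of the resulting map may differ):

val parser = new ParenParser(Iterator('a' to 'z': _*))

// Should print a success wrapping something like:
// Map(e -> ad, a -> 617, d -> bc, b -> 0, c -> 32)
println(parser.parse("((617)((0)(32)))"))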
It's not too bad, but I'd prefer to avoid the mutable state.
What I want
Haskell's Parsec library makes
adding user state to a parser trivially easy:
import Control.Applicative ((*>), (<$>), (<*))
import Data.Map (fromList)
import Text.Parsec
paren = do
    (s, m) <- char '(' *> contents <* char ')'
    h : t <- getState
    putState t
    return $ (h, s) : m
  where
    contents
        =   flip (,) []
        <$> many1 digit
        <|> (\ps -> (map (fst . head) ps, concat ps))
        <$> many1 paren

main = print $
  runParser (fromList <$> paren) ['a'..'z'] "example" "((617)((0)(32)))"
This is a fairly straightforward translation of my Scala parser above, but without mutable state.
What I've tried
I'm trying to get as close to the Parsec solution as I can using Scalaz's state monad transformer, so instead of Parser[A] I'm working with StateT[Parser, Stream[Char], A].
I have a "solution" that allows me to write the following:
import scala.util.parsing.combinator._
import scalaz._, Scalaz._

object ParenParser extends ExtraStateParsers[Stream[Char]] with RegexParsers {
  protected implicit def monadInstance = parserMonad(this)

  def paren: ESP[List[(Char, String)]] =
    (lift("(") ~> contents <~ lift(")")).flatMap {
      case (s, m) => get.flatMap(
        names => put(names.tail).map(_ => (names.head -> s) :: m)
      )
    }

  def contents: ESP[(String, List[(Char, String)])] =
    lift("\\d+".r ^^ (_ -> Nil)) | rep1(paren).map(
      ps => ps.map(_.head._1).mkString -> ps.flatten
    )

  def parse(s: String, names: Stream[Char]) =
    parseAll(paren.eval(names), s).map(_.toMap)
}
This works, and it's not that much less concise than either the mutable state version or the Parsec version.
But my ExtraStateParsers is ugly as sin. I don't want to try your patience more than I already have, so I won't include it here (although here's a link, if you really want it). I've had to write new versions of every Parser and Parsers method I use above for my ExtraStateParsers and ESP types (rep1, ~>, <~, and |, in case you're counting). If I had needed to use other combinators, I'd have had to write new state transformer-level versions of them as well.
Is there a cleaner way to do this? I'd love to see an example of Scalaz 7's state monad transformer being used to thread state through a parser, but Scalaz 6 or Haskell examples would also be useful and appreciated.
Probably the most general solution would be to rewrite Scala's parser library to accommodate monadic computations while parsing (as you have partly done), but that would be quite a laborious task.
I suggest a solution using Scalaz's State where each of our results isn't a value of type Parser[X], but a value of type Parser[State[Stream[Char],X]] (aliased as ParserS[X]). So the overall parsed result isn't a value, but a monadic state value, which is then run on some Stream[Char]. This is almost a monad transformer, but we have to do the lifting/unlifting manually. It makes the code a bit uglier, as we sometimes need to lift values or use map/flatMap in several places, but I believe it's still reasonable.
import scala.util.parsing.combinator._
import scalaz._
import Scalaz._
import Traverse._

object ParenParser extends RegexParsers with States {
  type S[X] = State[Stream[Char], X]
  type ParserS[X] = Parser[S[X]]

  // Haskell's `return` for State
  def toState[S, X](x: X): State[S, X] = gets(_ => x)

  // Haskell's `mapM` for State
  def mapM[S, X](l: List[State[S, X]]): State[S, List[X]] =
    l.traverse[({type L[Y] = State[S, Y]})#L, X](identity _)

  // Read the next character from the stream inside the state
  // and update the state to the stream's tail.
  def next: S[Char] = state(s => (s.tail, s.head))

  def paren: ParserS[List[(Char, String)]] =
    "(" ~> contents <~ ")" ^^ (_ flatMap {
      case (s, m) => next map (v => (v -> s) :: m)
    })

  def contents: ParserS[(String, List[(Char, String)])] = digits | parens

  def digits: ParserS[(String, List[(Char, String)])] =
    "\\d+".r ^^ (_ -> Nil) ^^ (toState _)

  def parens: ParserS[(String, List[(Char, String)])] =
    rep1(paren) ^^ (mapM _) ^^ (_.map(
      ps => ps.map(_.head._1).mkString -> ps.flatten
    ))

  def parse(s: String): ParseResult[S[Map[Char, String]]] =
    parseAll(paren, s).map(_.map(_.toMap))

  def parse(s: String, names: Stream[Char]): ParseResult[Map[Char, String]] =
    parse(s).map(_ ! names)
}

object ParenParserTest extends App {
  println(ParenParser.parse("((617)((0)(32)))", Stream('a' to 'z': _*)))
}
Note: I believe that your approach with StateT[Parser, Stream[Char], _] isn't conceptually correct. The type says that we're constructing a parser given some state (a stream of names), so given different streams we could get different parsers. That is not what we want: we only want the result of parsing to depend on the names, not the parser itself. In this way Parser[State[Stream[Char],_]] seems more appropriate (Haskell's Parsec takes a similar approach: the state/monad lives inside the parser).
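To make the distinction concrete, here is the rough shape of each type once the wrappers are unrolled (for intuition only; these aliases aren't used elsewhere):

// StateT[Parser, Stream[Char], A] is essentially
//   Stream[Char] => Parser[(Stream[Char], A)]
// i.e. the state is supplied *before* parsing, so it could select a different parser.
//
// Parser[State[Stream[Char], A]] is essentially
//   Input => ParseResult[Stream[Char] => (Stream[Char], A)]
// i.e. parsing happens first; the result is a state action that is run afterwards.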
Related
I am writing a parser for boolean expressions and am trying to parse input like "true and false":
def boolExpression: Parser[BoolExpression] = boolLiteral | andExpression

def andExpression: Parser[AndExpression] = (boolExpression ~ "and" ~ boolExpression) ^^ {
  case b1 ~ "and" ~ b2 => AndExpression(b1, b2)
}

def boolLiteral: Parser[BoolLiteral] = ("true" | "false") ^^ {
  s => BoolLiteral(java.lang.Boolean.valueOf(s))
}
The above code does not parse "true and false": it reads only "true", because the boolLiteral alternative succeeds immediately.
But if I change the rule boolExpression to this:
def boolExpression: Parser[BoolExpression] = andExpression | boolLiteral
Then, when parsing "true and false", the code throws a StackOverflowError due to endless recursion:
java.lang.StackOverflowError
  at Parser.boolExpression(NewParser.scala:58)
  at Parser.andExpression(NewParser.scala:62)
  at Parser.boolExpression(NewParser.scala:58)
  at Parser.andExpression(NewParser.scala:62)
  ...
How to solve this?
This appears to be parsing a string of boolean constants separated by "and", which is best done using the chainl1 primitive in the parser library. This processes a chain of operations a op b op c op d with left associativity.
It might look something like this totally untested code:
trait BoolExpression
case class BoolLiteral(value: Boolean) extends BoolExpression
case class AndExpression(l: BoolExpression, r: BoolExpression) extends BoolExpression

def boolLiteral: Parser[BoolLiteral] =
  ("true" | "false") ^^ { s => BoolLiteral(java.lang.Boolean.valueOf(s)) }

def andExpression: Parser[(BoolExpression, BoolExpression) => BoolExpression] =
  "and" ^^ { _ => (l: BoolExpression, r: BoolExpression) => AndExpression(l, r) }

def boolExpression: Parser[BoolExpression] =
  chainl1(boolLiteral, andExpression)
Presumably the requirement is more complex than this, but the chainl1 parser is a good starting point.
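Wiring the pieces into a runnable RegexParsers object might look like this (a sketch; BoolParser and the test input are illustrative):

import scala.util.parsing.combinator.RegexParsers

object BoolParser extends RegexParsers {
  trait BoolExpression
  case class BoolLiteral(value: Boolean) extends BoolExpression
  case class AndExpression(l: BoolExpression, r: BoolExpression) extends BoolExpression

  def boolLiteral: Parser[BoolExpression] =
    ("true" | "false") ^^ { s => BoolLiteral(s.toBoolean) }

  def andExpression: Parser[(BoolExpression, BoolExpression) => BoolExpression] =
    "and" ^^ { _ => (l: BoolExpression, r: BoolExpression) => AndExpression(l, r) }

  def boolExpression: Parser[BoolExpression] =
    chainl1(boolLiteral, andExpression)

  def main(args: Array[String]): Unit = {
    // Left-associative: parses as ((true and false) and true).
    println(parseAll(boolExpression, "true and false and true"))
  }
}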
The core of the issue is that the usual parser combinator libraries use parsing algorithms that don't support left recursion. One solution is to rewrite the grammar without left recursion, meaning that every parser needs to consume at least some input before invoking itself recursively. For instance, your grammar can be written like so:
def boolExpression: Parser[BoolExpression] = andExpression | boolLiteral
def andExpression: Parser[AndExpression] = (boolLiteral ~ "and" ~ boolExpression) ^^ {
  case b1 ~ "and" ~ b2 => AndExpression(b1, b2)
}
But you can also use a parser combinator library that's based on another parsing algorithm that supports left recursion. I'm only aware of one for Scala:
https://github.com/djspiewak/gll-combinators
I don't know to what degree it is production ready.
Edit: I think this one might also support left recursion; again, I don't know to what degree it is production ready.
https://github.com/djspiewak/parseback
I am running into this famous 10-year-old Scala ticket: https://github.com/scala/bug/issues/2823.
That's because I am expecting for-comprehensions to work like do-blocks in Haskell. And why shouldn't they? Monads go great with a side of sugar. At this point I have something like this:
import scalaz.{Monad, Traverse}
import scalaz.std.either._
import scalaz.std.list._
type ErrorM[A] = Either[String, A]
def toIntSafe(s: String): ErrorM[Int] = {
  try {
    Right(s.toInt)
  } catch {
    case e: Exception => Left(e.getMessage)
  }
}

def convert(args: List[String])(implicit m: Monad[ErrorM], tr: Traverse[List]): ErrorM[List[Int]] = {
  val expanded = for {
    arg <- args
    result <- toIntSafe(arg)
  } yield result
  tr.sequence(expanded)(m)
}
println(convert(List("1", "2", "3")))
println(convert(List("1", "foo")))
And I'm getting the error:
"Value map is not a member of ErrorM[Int]"
result <- toIntSafe(arg)
How do I get back to beautiful, monadic-comprehensions that I am used to? Some research shows the FilterMonadic[A, Repr] abstract class is what to extend if you want to be a comprehension, any examples of combining FilterMonadic with scalaz?
Can I reuse my Monad implicit and not have to redefine map, flatMap etc?
Can I keep my type alias and not have to wrap around Either or worse, redefine case classes for ErrorM?
Using Scala 2.11.8
EDIT: I am adding Haskell code to show this does indeed work in GHC without explicit Monad transformers, only traversals and default monad instances for Either and List.
type ErrorM = Either String

toIntSafe :: Read a => String -> ErrorM a
toIntSafe s = case reads s of
  [(val, "")] -> Right val
  _           -> Left $ "Cannot convert to int " ++ s

convert :: [String] -> ErrorM [Int]
convert = sequence . conv
  where conv s = do
          arg <- s
          return . toIntSafe $ arg

main :: IO ()
main = do
  putStrLn . show . convert $ ["1", "2", "3"]
  putStrLn . show . convert $ ["1", "foo"]
Your Haskell code and your Scala code are not equivalent:
do
  arg <- s
  return . toIntSafe $ arg
corresponds to
for {
  arg <- args
} yield toIntSafe(arg)
Which compiles fine.
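That version gives you a List[ErrorM[Int]], which you can then sequence explicitly (a sketch, reusing ErrorM and toIntSafe from the question; sequenceU is the inference-friendly variant of sequence):

import scalaz.std.either._, scalaz.std.list._, scalaz.syntax.traverse._

val args = List("1", "2", "3")

val expanded: List[ErrorM[Int]] = for {
  arg <- args
} yield toIntSafe(arg)

// ErrorM[List[Int]]: Right of all the values, or the first Left encountered.
val result = expanded.sequenceU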
To see why your version doesn't compile, we can desugar it:
for {
  arg <- args
  result <- toIntSafe(arg)
} yield result

=

args.flatMap { arg =>
  toIntSafe(arg).map { result => result }
}
Now looking at types:
args: List[String]
args.flatMap: (String => List[B]) => List[B]
arg => toIntSafe(arg).map {result => result} : String => ErrorM[Int]
Which shows the problem. flatMap is expecting a function returning a List but you are giving it a function returning an ErrorM.
Haskell code along the lines of:
do
  arg <- s
  result <- toIntSafe arg
  return result
wouldn't compile either for roughly the same reason: trying to bind across two different monads, List and Either.
A for comprehension in Scala or a do expression in Haskell will only ever work with a single underlying monad, because both are essentially syntactic translations into series of flatMaps and >>=s respectively. And those still need to typecheck.
If you want to compose monads, one thing you can do is use monad transformers (e.g. EitherT), although in your example I don't think you want to, since you actually want to sequence in the end.
Finally, in my opinion the most elegant way of expressing your code is:
def convert(args: List[String]) = args.traverse(toIntSafe)
Because map followed by sequence is traverse.
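Spelled out with the imports it needs (a sketch; the explicit type application on traverse sidesteps the usual inference problem with Either, or you can use traverseU):

import scalaz.std.either._, scalaz.std.list._, scalaz.syntax.traverse._

type ErrorM[A] = Either[String, A]

def toIntSafe(s: String): ErrorM[Int] =
  try Right(s.toInt) catch { case e: Exception => Left(e.getMessage) }

// One pass: map and sequence fused together.
def convert(args: List[String]): ErrorM[List[Int]] =
  args.traverse[ErrorM, Int](toIntSafe)

// convert(List("1", "2", "3")) == Right(List(1, 2, 3))
// convert(List("1", "foo")) is a Left containing the NumberFormatException message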
With Scala's quasiquotes you can build trees of selects easily, like so:
> tq"a.b.MyObj"
res: Select(Select(Ident(TermName("a")), TermName("b")), TermName("MyObj"))
My question is, how do I do this if the list of things to select from (a,b,...,etc) is variable length (and therefore in a variable that needs to be spliced in)?
I was hoping lifting would work (e.g. tq"""..${List("a","b","MyObj")}"""), but it doesn't. Or maybe even this: tq"""${List("a","b","MyObj").mkString(".")}""", but no luck.
Is there a way to support this with quasiquotes? Or do I just need to construct the tree of selects manually in this case?
I don't think there is a way to do this outright with quasiquotes. I'm definitely sure that anything like tq"""${List("a","b","MyObj").mkString(".")}""" will not work. My understanding of quasiquotes is that they are just sugar for extractors and apply.
However, building on that idea, we can define a custom extractor to do what you want. (By the way, I'm sure there is a nicer way to express this, but you get the idea...)
object SelectTermList {
  def apply(arg0: String, args: List[String]): universe.Tree =
    args.foldLeft(Ident(TermName(arg0)).asInstanceOf[universe.Tree])(
      (s, arg) => Select(s, TermName(arg)))

  def unapply(t: universe.Tree): Option[(String, List[String])] = t match {
    case Ident(TermName(arg0)) => Some((arg0, List()))
    case Select(SelectTermList(arg0, args), TermName(arg)) =>
      Some((arg0, args ++ List(arg)))
    case _ => None
  }
}
Then, you can use this to both construct and extract expressions of the form a.b.MyObj.
Extractor tests:
scala> val SelectTermList(obj0,selectors0) = q"a.b.c.d.e.f.g.h"
obj0: String = a
selectors0: List[String] = List(b, c, d, e, f, g, h)
scala> val q"someObject.method(${SelectTermList(obj1,selectors1)})" = q"someObject.method(a.b.MyObj)"
obj1: String = a
selectors1: List[String] = List(b, MyObj)
Corresponding apply tests:
scala> SelectTermList(obj0,selectors0)
res: universe.Tree = a.b.c.d.e.f.g.h
scala> q"someObject.method(${SelectTermList(obj1,selectors1)})"
res: universe.Tree = someObject.method(a.b.MyObj)
As you can see, there is no problem with nesting the extractors deep inside quasi quotes, both when constructing and extracting.
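And if you only ever need to construct such trees (never match on them), the foldLeft at the heart of apply works on its own (a sketch, assuming a universe such as scala.reflect.runtime.universe is in scope):

import scala.reflect.runtime.universe._

val parts = List("a", "b", "MyObj")

// Fold the tail into nested Selects around the initial Ident.
val tree: Tree = parts.tail.foldLeft(Ident(TermName(parts.head)): Tree)(
  (t, name) => Select(t, TermName(name))
)
// tree now represents a.b.MyObj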
Introduction
I use Scalaz 7's iteratees in a number of projects, primarily for processing large-ish files. I'd like to start switching to Scalaz streams, which are designed to replace the iteratee package (which frankly is missing a lot of pieces and is kind of a pain to use).
Streams are based on machines (another variation on the iteratee idea), which have also been implemented in Haskell. I've used the Haskell machines library a bit, but the relationship between machines and streams isn't completely obvious (to me, at least), and the documentation for the streams library is still a little sparse.
This question is about a simple parsing task that I'd like to see implemented using streams instead of iteratees. I'll answer the question myself if nobody else beats me to it, but I'm sure I'm not the only one who's making (or at least considering) this transition, and since I need to work through this exercise anyway, I figured I might as well do it in public.
Task
Suppose I've got a file containing sentences that have been tokenized and tagged with parts of speech:
no UH
, ,
it PRP
was VBD
n't RB
monday NNP
. .
the DT
equity NN
market NN
was VBD
illiquid JJ
. .
There's one token per line, words and parts of speech are separated by a single space, and blank lines represent sentence boundaries. I want to parse this file and return a list of sentences, which we might as well represent as lists of tuples of strings:
List((no,UH), (,,,), (it,PRP), (was,VBD), (n't,RB), (monday,NNP), (.,.))
List((the,DT), (equity,NN), (market,NN), (was,VBD), (illiquid,JJ), (.,.))
As usual, we want to fail gracefully if we hit invalid input or file reading exceptions, we don't want to have to worry about closing resources manually, etc.
An iteratee solution
First for some general file reading stuff (that really ought to be part of the iteratee package, which currently doesn't provide anything remotely this high-level):
import java.io.{ BufferedReader, File, FileReader }
import scalaz._, Scalaz._, effect.IO
import iteratee.{ Iteratee => I, _ }

type ErrorOr[A] = EitherT[IO, Throwable, A]

def tryIO[A, B](action: IO[B]) = I.iterateeT[A, ErrorOr, B](
  EitherT(action.catchLeft).map(I.sdone(_, I.emptyInput))
)

def enumBuffered(r: => BufferedReader) = new EnumeratorT[String, ErrorOr] {
  lazy val reader = r
  def apply[A] = (s: StepT[String, ErrorOr, A]) => s.mapCont(k =>
    tryIO(IO(Option(reader.readLine))).flatMap {
      case None => s.pointI
      case Some(line) => k(I.elInput(line)) >>== apply[A]
    }
  )
}

def enumFile(f: File) = new EnumeratorT[String, ErrorOr] {
  def apply[A] = (s: StepT[String, ErrorOr, A]) => tryIO(
    IO(new BufferedReader(new FileReader(f)))
  ).flatMap(reader => I.iterateeT[String, ErrorOr, A](
    EitherT(
      enumBuffered(reader).apply(s).value.run.ensuring(IO(reader.close()))
    )
  ))
}
And then our sentence reader:
def sentence: IterateeT[String, ErrorOr, List[(String, String)]] = {
  import I._

  def loop(acc: List[(String, String)])(s: Input[String]):
      IterateeT[String, ErrorOr, List[(String, String)]] = s(
    el = _.trim.split(" ") match {
      case Array(form, pos) => cont(loop(acc :+ (form, pos)))
      case Array("") => cont(done(acc, _))
      case pieces =>
        val throwable: Throwable = new Exception(
          "Invalid line: %s!".format(pieces.mkString(" "))
        )
        val error: ErrorOr[List[(String, String)]] = EitherT.left(
          throwable.point[IO]
        )
        IterateeT.IterateeTMonadTrans[String].liftM(error)
    },
    empty = cont(loop(acc)),
    eof = done(acc, eofInput)
  )

  cont(loop(Nil))
}
And finally our parsing action:
val action =
  I.consume[List[(String, String)], ErrorOr, List] %=
    sentence.sequenceI &=
    enumFile(new File("example.txt"))
We can demonstrate that it works:
scala> action.run.run.unsafePerformIO().foreach(_.foreach(println))
List((no,UH), (,,,), (it,PRP), (was,VBD), (n't,RB), (monday,NNP), (.,.))
List((the,DT), (equity,NN), (market,NN), (was,VBD), (illiquid,JJ), (.,.))
And we're done.
What I want
More or less the same program implemented using Scalaz streams instead of iteratees.
A scalaz-stream solution:
import scalaz.std.vector._
import scalaz.std.string._
import scalaz.syntax.traverse._
import scalaz.stream._, Process._

val action = io.linesR("example.txt").map(_.trim).
  splitOn("").flatMap(_.traverseU { s => s.split(" ") match {
    case Array(form, pos) => emit(form -> pos)
    case _ => fail(new Exception(s"Invalid input $s"))
  }})
We can demonstrate that it works:
scala> action.collect.attempt.run.foreach(_.foreach(println))
Vector((no,UH), (,,,), (it,PRP), (was,VBD), (n't,RB), (monday,NNP), (.,.))
Vector((the,DT), (equity,NN), (market,NN), (was,VBD), (illiquid,JJ), (.,.))
And we're done.
The traverseU function is a common Scalaz combinator. In this case it's being used to traverse, in the Process monad, the sentence Vector generated by splitOn. It's equivalent to map followed by sequence.
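A minimal illustration of that equivalence with ordinary Scalaz values (a sketch; parse is a stand-in function):

import scalaz.std.option._, scalaz.std.vector._, scalaz.syntax.traverse._

def parse(s: String): Option[Int] = scala.util.Try(s.toInt).toOption

// Two passes: map, then sequence.
Vector("1", "2").map(parse).sequence // Some(Vector(1, 2))

// One pass: traverse (or traverseU when inference needs help).
Vector("1", "2").traverse(parse)     // Some(Vector(1, 2))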
I'm trying to build an internal DSL in Scala to represent algebraic definitions. Let's consider this simplified data model:
case class Var(name:String)
case class Eq(head:Var, body:Var*)
case class Definition(name:String, body:Eq*)
For example a simple definition would be:
val x = Var("x")
val y = Var("y")
val z = Var("z")
val eq1 = Eq(x, y, z)
val eq2 = Eq(y, x, z)
val defn = Definition("Dummy", eq1, eq2)
I would like to have an internal DSL to represent such an equation in the form:
Dummy {
  x = y z
  y = x z
}
The closest I could get is the following:
Definition("Dummy") := (
"x" -> ("y", "z")
"y" -> ("x", "z")
)
The first problem I encountered is that I cannot have two implicit conversions for Definition and Var, hence the explicit Definition("Dummy"). The main problem, however, is the lists. I don't want to surround them with anything, e.g. (), and I also don't want their elements to be separated by commas.
Is what I want possible using Scala? If yes, can anyone show me an easy way of achieving it?
While Scala's syntax is powerful, it is not flexible enough to create arbitrary delimiters for symbols. Thus, there is no way to drop the commas and replace them with nothing but spaces.
Nevertheless, it is possible to use macros and parse a string with arbitrary content at compile time. It is not an "easy" solution, but one that works:
object AlgDefDSL {
  import language.experimental.macros
  import scala.reflect.macros.Context

  implicit class DefDSL(sc: StringContext) {
    def dsl(): Definition = macro __dsl_impl
  }

  def __dsl_impl(c: Context)(): c.Expr[Definition] = {
    import c.universe._
    val defn = c.prefix.tree match {
      case Apply(_, List(Apply(_, List(Literal(Constant(s: String)))))) =>
        def toAST[A : TypeTag](xs: Tree*): Tree =
          Apply(
            Select(Ident(typeOf[A].typeSymbol.companionSymbol), newTermName("apply")),
            xs.toList
          )

        def toVarAST(varObj: Var) =
          toAST[Var](c.literal(varObj.name).tree)

        def toEqAST(eqObj: Eq) =
          toAST[Eq]((eqObj.head +: eqObj.body).map(toVarAST(_)): _*)

        def toDefAST(defObj: Definition) =
          toAST[Definition](c.literal(defObj.name).tree +: defObj.body.map(toEqAST(_)): _*)

        parsers.parse(s) match {
          case parsers.Success(defn, _) => toDefAST(defn)
          case parsers.NoSuccess(msg, _) => c.abort(c.enclosingPosition, msg)
        }
    }
    c.Expr(defn)
  }

  import scala.util.parsing.combinator.JavaTokenParsers

  private object parsers extends JavaTokenParsers {
    override val whiteSpace = "[ \t]*".r

    lazy val newlines =
      opt(rep("\n"))

    lazy val varP =
      "[a-z]+".r ^^ Var

    lazy val eqP =
      (varP <~ "=") ~ rep(varP) ^^ {
        case lhs ~ rhs => Eq(lhs, rhs: _*)
      }

    lazy val defHead =
      newlines ~> ("[a-zA-Z]+".r <~ "{") <~ newlines

    lazy val defBody =
      rep(eqP <~ rep("\n"))

    lazy val defEnd =
      "}" ~ newlines

    lazy val defP =
      defHead ~ defBody <~ defEnd ^^ {
        case name ~ eqs => Definition(name, eqs: _*)
      }

    def parse(s: String) = parseAll(defP, s)
  }

  case class Var(name: String)
  case class Eq(head: Var, body: Var*)
  case class Definition(name: String, body: Eq*)
}
It can be used with something like this:
scala> import AlgDefDSL._
import AlgDefDSL._
scala> dsl"""
| Dummy {
| x = y z
| y = x z
| }
| """
res12: AlgDefDSL.Definition = Definition(Dummy,WrappedArray(Eq(Var(x),WrappedArray(Var(y), Var(z))), Eq(Var(y),WrappedArray(Var(x), Var(z)))))
In addition to sschaef's nice solution I want to mention a few possibilities that are commonly used to get rid of commas in list construction for a DSL.
Colons
This might be trivial, but it is sometimes overlooked as a solution.
line1 ::
line2 ::
line3 ::
Nil
For a DSL it is often desired that every line containing some instruction/data is terminated the same way (as opposed to Lists, where every line but the last gets a comma). With such a solution, reordering the lines can no longer mess up a trailing comma. Unfortunately, the Nil looks a bit ugly.
Fluid API
Another alternative that might be interesting for a DSL is something like that:
BuildDefinition()
.line1
.line2
.line3
.build
where each line is a member function of the builder (and returns a modified builder). This solution requires eventually converting the builder to a list (which might be done as an implicit conversion). Note that for some APIs it might be possible to pass around the builder instances themselves, and only extract the data wherever needed.
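A minimal sketch of that idea (BuildDefinition and the line methods are illustrative names):

// An immutable builder: each call returns a copy with one more entry.
case class BuildDefinition(entries: Vector[String] = Vector.empty) {
  def line1 = copy(entries :+ "line1")
  def line2 = copy(entries :+ "line2")
  def line3 = copy(entries :+ "line3")
  def build: List[String] = entries.toList
}

// BuildDefinition().line1.line2.line3.build == List("line1", "line2", "line3")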
Constructor API
Similarly another possibility is to exploit constructors.
new BuildInterface {
  line1
  line2
  line3
}
Here, BuildInterface is a trait and we simply instantiate an anonymous class from the interface. The line functions call some member functions of this trait. Each invocation can internally update the state of the build interface. Note that this commonly results in a mutable design (but only during construction). To extract the list, an implicit conversion could be used.
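A sketch of what such a trait could look like (names are illustrative; the buffer is only mutated while the instance is being constructed):

trait BuildInterface {
  private val buf = scala.collection.mutable.ListBuffer.empty[String]

  // Each "keyword" records itself into the internal buffer.
  protected def record(s: String): Unit = buf += s
  def line1: Unit = record("line1")
  def line2: Unit = record("line2")
  def line3: Unit = record("line3")

  def toList: List[String] = buf.toList
}

// (new BuildInterface { line1; line2; line3 }).toList
//   == List("line1", "line2", "line3")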
Since I don't understand the actual purpose of your DSL, I'm not really sure if any of these techniques is interesting for your scenario. I just wanted to add them since they are common ways to get rid of ",".
Here is another solution which is relatively simple and enables a syntax that is pretty close to your ideal (as others have pointed out, the exact syntax you asked for is not possible, in particular because you cannot redefine delimiter symbols). My solution stretches what is reasonable a bit, because it adds an operation directly on scala.Symbol, but if you're going to use this DSL in a constrained scope then this should be OK.
object VarOps {
  val currentEqs = new util.DynamicVariable(Vector.empty[Eq])
}

implicit class VarOps(val variable: Var) extends AnyVal {
  import VarOps._

  def :=(body: Var*) = {
    val eq = Eq(variable, body: _*)
    currentEqs.value = currentEqs.value :+ eq
  }
}

implicit class SymbolOps(val sym: Symbol) extends AnyVal {
  def apply(body: => Unit): Definition = {
    import VarOps._
    currentEqs.withValue(Vector.empty[Eq]) {
      body
      Definition(sym.name, currentEqs.value: _*)
    }
  }
}
Now you can do:
'Dummy {
  x := (y, z)
  y := (x, z)
}
Which builds the following definition (as printed in the REPL):
Definition(Dummy,Vector(Eq(Var(x),WrappedArray(Var(y), Var(z))), Eq(Var(y),WrappedArray(Var(x), Var(z)))))