I am writing a parser for boolean expressions and am trying to parse input like "true and false":
def boolExpression: Parser[BoolExpression] = boolLiteral | andExpression

def andExpression: Parser[AndExpression] = (boolExpression ~ "and" ~ boolExpression) ^^ {
  case b1 ~ "and" ~ b2 => AndExpression(b1, b2)
}

def boolLiteral: Parser[BoolLiteral] = ("true" | "false") ^^ {
  s => BoolLiteral(java.lang.Boolean.valueOf(s))
}
The above code does not parse "true and false": it reads only "true", succeeds with the boolLiteral alternative immediately, and leaves " and false" unconsumed.
But if I change the rule boolExpression to this:
def boolExpression: Parser[BoolExpression] = andExpression | boolLiteral
Then, when parsing "true and false", the code throws a StackOverflowError due to endless recursion:
java.lang.StackOverflowError
at Parser.boolExpression(NewParser.scala:58)
at Parser.andExpression(NewParser.scala:62)
at Parser.boolExpression(NewParser.scala:58)
at Parser.andExpression(NewParser.scala:62)
...
How to solve this?
This appears to be parsing a string of boolean constants separated by "and", which is best done using the chainl1 primitive in the parser library. It processes a chain of operations a op b op c op d with left-to-right associativity.
It might look something like this totally untested code:
trait BoolExpression
case class BoolLiteral(value: Boolean) extends BoolExpression
case class AndExpression(l: BoolExpression, r: BoolExpression) extends BoolExpression
def boolLiteral: Parser[BoolLiteral] =
  ("true" | "false") ^^ { s => BoolLiteral(java.lang.Boolean.valueOf(s)) }

def andExpression: Parser[(BoolExpression, BoolExpression) => BoolExpression] =
  "and" ^^ { _ => (l: BoolExpression, r: BoolExpression) => AndExpression(l, r) }

def boolExpression: Parser[BoolExpression] =
  chainl1(boolLiteral, andExpression)
Presumably the requirement is more complex than this, but the chainl1 parser is a good starting point.
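For what it's worth, here is how the sketch fits together as a runnable whole (my arrangement; the object name is mine), with the case classes above in scope:

import scala.util.parsing.combinator.RegexParsers

object BoolParser extends RegexParsers {
  def boolLiteral: Parser[BoolExpression] =
    ("true" | "false") ^^ { s => BoolLiteral(s.toBoolean) }

  // "and" becomes a binary constructor that chainl1 folds over the chain
  def andExpression: Parser[(BoolExpression, BoolExpression) => BoolExpression] =
    "and" ^^ { _ => (l: BoolExpression, r: BoolExpression) => AndExpression(l, r) }

  def boolExpression: Parser[BoolExpression] = chainl1(boolLiteral, andExpression)
}

In the REPL it produces something like:

scala> BoolParser.parseAll(BoolParser.boolExpression, "true and false and true")
res0: BoolParser.ParseResult[BoolExpression] = [1.24] parsed: AndExpression(AndExpression(BoolLiteral(true),BoolLiteral(false)),BoolLiteral(true))

Note how the result nests to the left: chainl1 folds the chain left-to-right.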
The core of the issue is that the usual parser combinator libraries use parsing algorithms that don't support left recursion. One solution is to rewrite the grammar without left recursion, meaning that every parser needs to consume at least some input before invoking itself recursively. For instance, your grammar can be written like so:
def boolExpression: Parser[BoolExpression] = andExpression | boolLiteral
def andExpression: Parser[AndExpression] = (boolLiteral ~ "and" ~ boolExpression) ^^ {
  case b1 ~ "and" ~ b2 => AndExpression(b1, b2)
}
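One thing to be aware of with this rewrite: because the recursion is on the right-hand side, "and" now associates to the right. A quick check (my snippet, reusing the case classes from the chainl1 answer above):

object RightAssocParser extends scala.util.parsing.combinator.RegexParsers {
  def boolExpression: Parser[BoolExpression] = andExpression | boolLiteral
  def andExpression: Parser[BoolExpression] =
    (boolLiteral <~ "and") ~ boolExpression ^^ { case b1 ~ b2 => AndExpression(b1, b2) }
  def boolLiteral: Parser[BoolExpression] =
    ("true" | "false") ^^ { s => BoolLiteral(s.toBoolean) }
}

// parseAll(boolExpression, "true and false and true") yields
// AndExpression(BoolLiteral(true),AndExpression(BoolLiteral(false),BoolLiteral(true)))

If you need left associativity, the chainl1 approach above is the better fit.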
But you can also use a parser combinator library that's based on another parsing algorithm that supports left recursion. I'm only aware of one for Scala:
https://github.com/djspiewak/gll-combinators
I don't know to what degree it is production ready.
//edit:
I think this one might also support left recursion. Again, I don't know the degree to which it is production ready.
https://github.com/djspiewak/parseback
Related
I am trying to write a parser for a language very similar to Milner's CCS. Basically what I am parsing so far are expressions of the following sort:
a.b.a.1
a.0
An expression must start with a letter (excluding t) and can have any number of letters following the first one (separated by '.'). The expression must terminate with a digit (for simplicity I chose digits between 0 and 2 for now). I want to use Scala's parser combinators; however, this is the first time I am working with them. This is what I have so far:
import scala.util.parsing.combinator._

class SimpleParser extends RegexParsers {
  def alpha: Parser[String] = """[^t]{1}""".r ^^ { _.toString }
  def digit: Parser[Int] = """[0-2]{1}""".r ^^ { _.toInt }

  def expr: Parser[Any] = alpha ~ "." ~ digit ^^ {
    case al ~ "." ~ di => List(al, di)
  }

  def simpleExpression: Parser[Any] = alpha ~ "." ~ rep(alpha ~ ".") ~ digit //^^ { }
}
As you can see in def expr: Parser[Any], I am trying to return the result as a list, since Lists in Scala are very easy to work with (in my opinion). Is this the correct way to convert a Parser[Any] result to a List? Can anyone give me tips on how to do this for def simpleExpression: Parser[Any]?
The main reason I want to use Lists is that after parsing an expression I want to be able to consume it. For example, given the expression a.b.1, if I am given an 'a', I would like to consume the expression to end up with a new expression: b.1 (i.e. a.b.1 ->(a)-> b.1). The idea behind this is to simulate finite state automata. Any tips on how I may improve my implementation are appreciated.
To keep things type-safe, I recommend a parser that produces a tuple of a list of strings and an int. That is, the input a.b.a.1 would get parsed as (List("a", "b", "a"), 1). Note also that the regex for alpha was modified to exclude anything that is not a lowercase letter (in addition to t).
class SimpleParser extends RegexParsers {
  def alpha: Parser[String] = """[a-su-z]{1}""".r ^^ { _.toString }
  def digit: Parser[Int] = """[0-2]{1}""".r ^^ { _.toInt }

  def repAlpha: Parser[List[String]] = rep1sep(alpha, ".")

  def expr: Parser[(List[String], Int)] = repAlpha ~ "." ~ digit ^^ {
    case alphas ~ _ ~ num => (alphas, num)
  }
}
With an instance of this SimpleParser, here's the output I got:
println(parser.parse(parser.expr, "a.b.a.1"))
// [1.8] parsed: (List(a, b, a),1)
println(parser.parse(parser.expr, "a.0"))
// [1.4] parsed: (List(a),0)
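As for consuming a parsed expression one step at a time (the a.b.1 ->(a)-> b.1 idea from the question), the (List[String], Int) representation makes that a simple pattern match. A sketch (the consume function is my addition):

def consume(expr: (List[String], Int), label: String): Option[(List[String], Int)] =
  expr match {
    // The head label matches: drop it and keep the rest of the expression.
    case (head :: tail, end) if head == label => Some((tail, end))
    // Wrong label, or nothing left to consume.
    case _ => None
  }

// consume((List("a", "b", "a"), 1), "a") == Some((List("b", "a"), 1))
// consume((List("b", "a"), 1), "x")      == None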
I'm trying to use a JavaTokenParsers-based combinator parser to pull out a particular match in the middle of a larger string (i.e. to ignore a random set of prefix characters). However, I can't get it working, and I think I'm getting caught out by a greedy sub-parser and/or CRs/LFs (the prefix characters can be basically anything). I have:
class RuleHandler extends JavaTokenParsers {
  def allowedPrefixChars = """[a-zA-Z0-9=*+-/<>!\_(){}~\\s]*""".r
  def findX: Parser[Double] = allowedPrefixChars ~ "(x=" ~> floatingPointNumber <~ ")" ^^ { case num => num.toDouble }
}
and then in my test case:
"when looking for the X value" in {
"must find and correctly interpret X" in {
val testString =
"""
|Looking (only)
|for (x=45) within
|this string
""".stripMargin
val answer = ruleHandler.parse(ruleHandler.findX, testString)
System.out.println(" X value is : " + answer.toString)
}
}
I think it's similar to this SO question. Can anyone see what's wrong? Thanks.
First, you should not write the double-escaped "\\s" inside """ """ quotes:
def allowedPrefixChars = """[a-zA-Z0-9=*+-/<>!\_(){}~\s]*?""".r
In your case it was interpreted as two separate characters, "\" and "s" (an s as a plain symbol, not the \s whitespace class).
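A quick REPL check of the difference (my snippet):

scala> """[\\s]""".r.findFirstIn("a b")   // the class matches '\' or 's', not whitespace
res0: Option[String] = None

scala> """[\s]""".r.findFirstIn("a b")    // \s really is the whitespace class
res1: Option[String] = Some( )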
Second, your allowedPrefixChars parser includes (, x, and =, so it captures the whole string, including (x=, and nothing is left for the subsequent parsers.
The solution is to be more concrete about the prefix you want:
object ruleHandler extends JavaTokenParsers {
  def allowedPrefixChar: Parser[String] = """[a-zA-Z0-9=*+-/<>!\_){}~\s]""".r // no "(" here
  def findX: Parser[Double] = rep(allowedPrefixChar | "\\((?!x=)".r) ~ "(x=" ~> floatingPointNumber <~ ")" ^^ { case num => num.toDouble }
}
ruleHandler.parse(ruleHandler.findX, testString)
res14: ruleHandler.ParseResult[Double] = [3.11] parsed: 45.0
I've told the parser to also treat "(" as prefix noise, but only when it is not followed by x= (that's just a negative lookahead).
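You can see the lookahead doing its job in isolation (my snippet):

scala> "\\((?!x=)".r.findFirstIn("(only)")   // a "(" not followed by x= matches
res0: Option[String] = Some(()

scala> "\\((?!x=)".r.findFirstIn("(x=45)")   // the lookahead rejects this one
res1: Option[String] = None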
Alternative:
"""\(x=(.*?)\)""".r.findAllMatchIn(testString).map(_.group(1).toDouble).toList
res22: List[Double] = List(45.0)
If you want to use parsers correctly, I would recommend describing the whole BNF grammar (with all possible "(", ")" and "=" usages), not just a fragment. For example, include (only) in your parser if it's a keyword, and use something like "(" ~> valueName <~ "=" ~ value to extract a value. Don't forget that scala-parser is intended to return an AST, not just some matched value. Pure regexes are better for regular matching over unstructured data.
Here is an example of how using the parsers in the correct way might look (I didn't try to compile it):
trait Command
case class Rule(name: String, value: Double) extends Command
case class Directive(name: String) extends Command

class RuleHandler extends JavaTokenParsers { // why `JavaTokenParsers` (not `RegexParsers`) if you don't use tokens from the Java Language Specification?
  def string = """[a-zA-Z0-9*+-/<>!\_{}~\s]*""".r // it's still wrong: you should use some predefined Java-like literals from **JavaToken**Parsers
  def rule = ("(" ~> string) ~ ("=" ~> string) <~ ")" ^^ { case name ~ num => Rule(name, num.toDouble) }
  def directive = "(" ~> string <~ ")" ^^ { case name => Directive(name) }
  def commands: Parser[List[Command]] = repsep(rule | directive, string)
}
If you need to process natural language (Chomsky type-0), scalanlp or something similar fits better.
I'm trying to build an internal DSL in Scala to represent algebraic definitions. Let's consider this simplified data model:
case class Var(name:String)
case class Eq(head:Var, body:Var*)
case class Definition(name:String, body:Eq*)
For example a simple definition would be:
val x = Var("x")
val y = Var("y")
val z = Var("z")
val eq1 = Eq(x, y, z)
val eq2 = Eq(y, x, z)
val defn = Definition("Dummy", eq1, eq2)
I would like to have an internal DSL to represent such an equation in the form:
Dummy {
x = y z
y = x z
}
The closest I could get is the following:
Definition("Dummy") := (
"x" -> ("y", "z")
"y" -> ("x", "z")
)
The first problem I encountered is that I cannot have two implicit conversions for Definition and Var, hence Definition("Dummy"). The main problem, however, is the lists: I don't want to surround them with anything, e.g. (), and I also don't want their elements to be separated by commas.
Is what I want possible using Scala? If yes, can anyone show me an easy way of achieving it?
While Scala's syntax is powerful, it is not flexible enough to create arbitrary delimiters for symbols. Thus, there is no way to drop the commas and separate list elements with nothing but spaces.
Nevertheless, it is possible to use macros and parse a string with arbitrary content at compile time. It is not an "easy" solution, but it works (note that it is written against the Scala 2.10 macro API):
object AlgDefDSL {

  import language.experimental.macros
  import scala.reflect.macros.Context

  implicit class DefDSL(sc: StringContext) {
    def dsl(): Definition = macro __dsl_impl
  }

  def __dsl_impl(c: Context)(): c.Expr[Definition] = {
    import c.universe._
    val defn = c.prefix.tree match {
      case Apply(_, List(Apply(_, List(Literal(Constant(s: String)))))) =>
        def toAST[A : TypeTag](xs: Tree*): Tree =
          Apply(
            Select(Ident(typeOf[A].typeSymbol.companionSymbol), newTermName("apply")),
            xs.toList
          )

        def toVarAST(varObj: Var) =
          toAST[Var](c.literal(varObj.name).tree)

        def toEqAST(eqObj: Eq) =
          toAST[Eq]((eqObj.head +: eqObj.body).map(toVarAST(_)): _*)

        def toDefAST(defObj: Definition) =
          toAST[Definition](c.literal(defObj.name).tree +: defObj.body.map(toEqAST(_)): _*)

        parsers.parse(s) match {
          case parsers.Success(defn, _) => toDefAST(defn)
          case parsers.NoSuccess(msg, _) => c.abort(c.enclosingPosition, msg)
        }
    }
    c.Expr(defn)
  }

  import scala.util.parsing.combinator.JavaTokenParsers

  private object parsers extends JavaTokenParsers {
    override val whiteSpace = "[ \t]*".r

    lazy val newlines =
      opt(rep("\n"))

    lazy val varP =
      "[a-z]+".r ^^ Var

    lazy val eqP =
      (varP <~ "=") ~ rep(varP) ^^ {
        case lhs ~ rhs => Eq(lhs, rhs: _*)
      }

    lazy val defHead =
      newlines ~> ("[a-zA-Z]+".r <~ "{") <~ newlines

    lazy val defBody =
      rep(eqP <~ rep("\n"))

    lazy val defEnd =
      "}" ~ newlines

    lazy val defP =
      defHead ~ defBody <~ defEnd ^^ {
        case name ~ eqs => Definition(name, eqs: _*)
      }

    def parse(s: String) = parseAll(defP, s)
  }

  case class Var(name: String)
  case class Eq(head: Var, body: Var*)
  case class Definition(name: String, body: Eq*)
}
It can be used with something like this:
scala> import AlgDefDSL._
import AlgDefDSL._
scala> dsl"""
| Dummy {
| x = y z
| y = x z
| }
| """
res12: AlgDefDSL.Definition = Definition(Dummy,WrappedArray(Eq(Var(x),WrappedArray(Var(y), Var(z))), Eq(Var(y),WrappedArray(Var(x), Var(z)))))
In addition to sschaef's nice solution I want to mention a few possibilities that are commonly used to get rid of commas in list construction for a DSL.
Colons
This might be trivial, but it is sometimes overlooked as a solution.
line1 ::
line2 ::
line3 ::
Nil
For a DSL it is often desired that every line containing some instruction/data is terminated the same way (as opposed to Lists, where all but the last line get a comma). With such a solution, reordering the lines can no longer mess up a trailing comma. Unfortunately, the Nil looks a bit ugly.
Fluid API
Another alternative that might be interesting for a DSL is something like that:
BuildDefinition()
  .line1
  .line2
  .line3
  .build
where each line is a member function of the builder (and returns a modified builder). This solution requires eventually converting the builder to a list (which might be done via an implicit conversion). Note that for some APIs it might be possible to pass around the builder instances themselves and only extract the data where needed.
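A minimal sketch of such a builder (all names here are illustrative, using an immutable builder that each call copies):

case class BuildDefinition(lines: List[String] = Nil) {
  // Each step returns a new builder with one more line appended.
  def line1: BuildDefinition = copy(lines = lines :+ "line1")
  def line2: BuildDefinition = copy(lines = lines :+ "line2")
  def line3: BuildDefinition = copy(lines = lines :+ "line3")
  def build: List[String] = lines
}

// BuildDefinition().line1.line3.build == List("line1", "line3")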
Constructor API
Similarly another possibility is to exploit constructors.
new BuildInterface {
  line1
  line2
  line3
}
Here, BuildInterface is a trait, and we simply instantiate an anonymous class from the interface. The line functions call member functions of this trait; each invocation can internally update the state of the build interface. Note that this commonly results in a mutable design (but only during construction). To extract the list, an implicit conversion could be used.
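A minimal sketch of this technique (names are illustrative; the trait mutates an internal buffer during construction):

import scala.collection.mutable.ListBuffer

trait BuildInterface {
  private val items = ListBuffer.empty[String]
  // Called from the body of the anonymous subclass; updates internal state.
  protected def line(s: String): Unit = items += s
  def toList: List[String] = items.toList
}

val built = new BuildInterface {
  line("x = y z")
  line("y = x z")
}

// built.toList == List("x = y z", "y = x z")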
Since I don't understand the actual purpose of your DSL, I'm not really sure if any of these techniques is interesting for your scenario. I just wanted to add them since they are common ways to get rid of ",".
Here is another solution which is relatively simple and enables a syntax that is pretty close to your ideal (as others have pointed out, the exact syntax you asked for is not possible, in particular because you cannot redefine delimiter symbols). My solution stretches what is reasonable a bit, because it adds operations directly on scala.Symbol, but if you're going to use this DSL in a constrained scope then this should be OK.
object VarOps {
  val currentEqs = new util.DynamicVariable(Vector.empty[Eq])
}

implicit class VarDsl(val variable: Var) extends AnyVal {
  import VarOps._

  def :=(body: Var*): Unit = {
    val eq = Eq(variable, body: _*)
    currentEqs.value = currentEqs.value :+ eq
  }
}

implicit class SymbolOps(val sym: Symbol) extends AnyVal {
  def apply(body: => Unit): Definition = {
    import VarOps._
    currentEqs.withValue(Vector.empty[Eq]) {
      body
      Definition(sym.name, currentEqs.value: _*)
    }
  }
}
Now, with x, y and z in scope as Vars (as defined in the question), you can do:
'Dummy {
  x := (y, z)
  y := (x, z)
}
Which builds the following definition (as printed in the REPL):
Definition(Dummy,Vector(Eq(Var(x),WrappedArray(Var(y), Var(z))), Eq(Var(y),WrappedArray(Var(x), Var(z)))))
I'll give you the tl;dr up front
I'm trying to use the state monad transformer in Scalaz 7 to thread extra state through a parser, and I'm having trouble doing anything useful without writing a lot of t m a -> t m b versions of m a -> m b methods.
An example parsing problem
Suppose I have a string containing nested parentheses with digits inside them:
val input = "((617)((0)(32)))"
I also have a stream of fresh variable names (characters, in this case):
val names = Stream('a' to 'z': _*)
I want to pull a name off the top of the stream and assign it to each parenthetical
expression as I parse it, and then map that name to a string representing the
contents of the parentheses, with the nested parenthetical expressions (if any) replaced by their
names.
To make this more concrete, here's what I'd want the output to look like for the example input above:
val target = Map(
  'a' -> "617",
  'b' -> "0",
  'c' -> "32",
  'd' -> "bc",
  'e' -> "ad"
)
There may be either a string of digits or arbitrarily many sub-expressions at a given level, but these two kinds of content won't be mixed in a single parenthetical expression.
To keep things simple, we'll assume that the stream of names will never
contain either duplicates or digits, and that it will always contain enough
names for our input.
Using parser combinators with a bit of mutable state
The example above is a slightly simplified version of the parsing problem in
this Stack Overflow question.
I answered that question with
a solution that looked roughly like this:
import scala.util.parsing.combinator._

class ParenParser(names: Iterator[Char]) extends RegexParsers {
  def paren: Parser[List[(Char, String)]] = "(" ~> contents <~ ")" ^^ {
    case (s, m) => (names.next -> s) :: m
  }

  def contents: Parser[(String, List[(Char, String)])] =
    "\\d+".r ^^ (_ -> Nil) | rep1(paren) ^^ (
      ps => ps.map(_.head._1).mkString -> ps.flatten
    )

  def parse(s: String) = parseAll(paren, s).map(_.toMap)
}
It's not too bad, but I'd prefer to avoid the mutable state.
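For reference, exercising it looks like this (my snippet; the exact output formatting, and the ordering of the map, may vary):

val parser = new ParenParser(('a' to 'z').iterator)
println(parser.parse("((617)((0)(32)))"))
// [1.17] parsed: Map(a -> 617, b -> 0, c -> 32, d -> bc, e -> ad)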
What I want
Haskell's Parsec library makes
adding user state to a parser trivially easy:
import Control.Applicative ((*>), (<$>), (<*))
import Data.Map (fromList)
import Text.Parsec

paren = do
    (s, m) <- char '(' *> contents <* char ')'
    h : t <- getState
    putState t
    return $ (h, s) : m
  where
    contents
      =   flip (,) [] <$> many1 digit
      <|> (\ps -> (map (fst . head) ps, concat ps)) <$> many1 paren

main = print $
  runParser (fromList <$> paren) ['a'..'z'] "example" "((617)((0)(32)))"
This is a fairly straightforward translation of my Scala parser above, but without mutable state.
What I've tried
I'm trying to get as close to the Parsec solution as I can using Scalaz's state monad transformer, so instead of Parser[A] I'm working with StateT[Parser, Stream[Char], A].
I have a "solution" that allows me to write the following:
import scala.util.parsing.combinator._
import scalaz._, Scalaz._

object ParenParser extends ExtraStateParsers[Stream[Char]] with RegexParsers {
  protected implicit def monadInstance = parserMonad(this)

  def paren: ESP[List[(Char, String)]] =
    (lift("(") ~> contents <~ lift(")")).flatMap {
      case (s, m) => get.flatMap(
        names => put(names.tail).map(_ => (names.head -> s) :: m)
      )
    }

  def contents: ESP[(String, List[(Char, String)])] =
    lift("\\d+".r ^^ (_ -> Nil)) | rep1(paren).map(
      ps => ps.map(_.head._1).mkString -> ps.flatten
    )

  def parse(s: String, names: Stream[Char]) =
    parseAll(paren.eval(names), s).map(_.toMap)
}
This works, and it's not that much less concise than either the mutable state version or the Parsec version.
But my ExtraStateParsers is ugly as sin—I don't want to try your patience more than I already have, so I won't include it here (although here's a link, if you really want it). I've had to write new versions of every Parser and Parsers method I use above
for my ExtraStateParsers and ESP types (rep1, ~>, <~, and |, in case you're counting). If I had needed to use other combinators, I'd have had to write new state transformer-level versions of them as well.
Is there a cleaner way to do this? I'd love to see an example of a Scalaz 7's state monad transformer being used to thread state through a parser, but Scalaz 6 or Haskell examples would also be useful and appreciated.
Probably the most general solution would be to rewrite Scala's parser library to accommodate monadic computations while parsing (like you partly did), but that would be quite a laborious task.
I suggest a solution using Scalaz's State where each of our results isn't a value of type Parser[X], but a value of type Parser[State[Stream[Char],X]] (aliased as ParserS[X]). So the overall parse result isn't a value, but a monadic state value, which is then run on some Stream[Char]. This is almost a monad transformer, but we have to do the lifting/unlifting manually. It makes the code a bit uglier, as we sometimes need to lift values or use map/flatMap in several places, but I believe it's still reasonable.
import scala.util.parsing.combinator._
import scalaz._
import Scalaz._
import Traverse._

object ParenParser extends RegexParsers with States {
  type S[X] = State[Stream[Char], X]
  type ParserS[X] = Parser[S[X]]

  // Haskell's `return` for State
  def toState[S, X](x: X): State[S, X] = gets(_ => x)

  // Haskell's `mapM` for State
  def mapM[S, X](l: List[State[S, X]]): State[S, List[X]] =
    l.traverse[({ type L[Y] = State[S, Y] })#L, X](identity _)

  // Read the next character from the stream inside the state
  // and update the state to the stream's tail.
  def next: S[Char] = state(s => (s.tail, s.head))

  def paren: ParserS[List[(Char, String)]] =
    "(" ~> contents <~ ")" ^^ (_ flatMap {
      case (s, m) => next map (v => (v -> s) :: m)
    })

  def contents: ParserS[(String, List[(Char, String)])] = digits | parens

  def digits: ParserS[(String, List[(Char, String)])] =
    "\\d+".r ^^ (_ -> Nil) ^^ (toState _)

  def parens: ParserS[(String, List[(Char, String)])] =
    rep1(paren) ^^ (mapM _) ^^ (_.map(
      ps => ps.map(_.head._1).mkString -> ps.flatten
    ))

  def parse(s: String): ParseResult[S[Map[Char, String]]] =
    parseAll(paren, s).map(_.map(_.toMap))

  def parse(s: String, names: Stream[Char]): ParseResult[Map[Char, String]] =
    parse(s).map(_ ! names)
}

object ParenParserTest extends App {
  println(ParenParser.parse("((617)((0)(32)))", Stream('a' to 'z': _*)))
}
Note: I believe that your approach with StateT[Parser, Stream[Char], _] isn't conceptually correct. The type says that we're constructing a parser given some state (a stream of names), so it would be possible to get different parsers for different streams. That is not what we want: we only want the result of parsing to depend on the names, not the whole parser. In this way Parser[State[Stream[Char],_]] seems more appropriate (Haskell's Parsec takes a similar approach; the state/monad is inside the parser).
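To spell that distinction out, unrolling the definitions gives roughly:

// StateT[Parser, Stream[Char], A]  ~  Stream[Char] => Parser[(Stream[Char], A)]
//   -- the parser itself is a function of the incoming state
// Parser[State[Stream[Char], A]]   ~  Parser[Stream[Char] => (Stream[Char], A)]
//   -- one fixed parser; the state is threaded through its result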
I'm writing an application that will take in various "command" strings. I've been looking at the Scala combinator library to tokenize the commands. I find in a lot of cases I want to say: "These tokens are an orderless set, and so they can appear in any order, and some might not appear".
With my current knowledge of grammars I would have to define all combinations of sequences as such (pseudo grammar):
command = action~content
action = alphanum
content = (tokenA~tokenB~tokenC | tokenB~tokenC~tokenA | tokenC~tokenB~tokenA ....... )
So my question is, considering tokenA-C are unique, is there a shorter way to define a set of any order using a grammar?
You can use the "Parser.^?" operator to check a group of parse elements for duplicates.
def tokens = tokenA | tokenB | tokenC

def uniqueTokens = (tokens*) ^? (
  { case t if t == t.distinct => t }, // `distinct` was `removeDuplicates` in older Scala
  { "duplicate tokens found: " + _ }
)
Here is an example that allows you to enter any of the four stooges in any order, but fails to parse if a duplicate is encountered:
package blevins.example

import scala.util.parsing.combinator._

case class Stooge(name: String)

object StoogesParser extends RegexParsers {
  def moe = "Moe".r
  def larry = "Larry".r
  def curly = "Curly".r
  def shemp = "Shemp".r
  def stooge = (moe | larry | curly | shemp) ^^ { case s => Stooge(s) }

  // The parentheses around the alternation matter: ^? must see the combined parser.
  def certifiedStooge = (stooge | """\w+""".r) ^? (
    { case s: Stooge => s },
    { "not a stooge: " + _ }
  )

  def stooges = (certifiedStooge*) ^? (
    { case x if x == x.distinct => x.toSet },
    { "duplicate stooge in: " + _ }
  )

  def parse(s: String): String = {
    parseAll(stooges, new scala.util.parsing.input.CharSequenceReader(s)) match {
      case Success(r, _) => r.mkString(" ")
      case Failure(r, _) => "failure: " + r
      case Error(r, _)   => "error: " + r
    }
  }
}
And some example usage:
package blevins.example

object App extends scala.App {
  def printParse(s: String): Unit = println(StoogesParser.parse(s))

  printParse("Moe Shemp Larry")
  printParse("Moe Shemp Shemp")
  printParse("Curly Beyonce")

  /* Output:
  Stooge(Moe) Stooge(Shemp) Stooge(Larry)
  failure: duplicate stooge in: List(Stooge(Moe), Stooge(Shemp), Stooge(Shemp))
  failure: not a stooge: Beyonce
  */
}
There are ways around it. Take a look at the parser here, for example. It accepts 4 pre-defined numbers, which may appear in any order, but must each appear exactly once.
OTOH, you could write a combinator, if this pattern happens often:
def comb3[A](a: Parser[A], b: Parser[A], c: Parser[A]) =
  a ~ b ~ c | a ~ c ~ b | b ~ a ~ c | b ~ c ~ a | c ~ a ~ b | c ~ b ~ a
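For instance, dropping comb3 into a RegexParsers subclass (a tiny usage sketch of mine; the three tokens are just string literals here):

object OrderlessParser extends scala.util.parsing.combinator.RegexParsers {
  def comb3[A](a: Parser[A], b: Parser[A], c: Parser[A]) =
    a ~ b ~ c | a ~ c ~ b | b ~ a ~ c | b ~ c ~ a | c ~ a ~ b | c ~ b ~ a

  // Accepts the three tokens in any of the six orders, each exactly once.
  def content = comb3("tokenA", "tokenB", "tokenC")
}

// OrderlessParser.parseAll(OrderlessParser.content, "tokenB tokenC tokenA") succeeds;
// so does any other ordering of the three tokens.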
I would not try to enforce this requirement syntactically. I'd write a production that admits multiple tokens from the allowed set and then use a non-parsing approach to ascertain the acceptability of the keywords actually given. In addition to allowing a simpler grammar, it will allow you to more easily continue parsing after emitting a diagnostic about the erroneous usage.
Randall Schulz
I don't know what kind of constructs you want to support, but I gather you should be specifying a more specific grammar. From your comment to another answer:
todo message:link Todo class to database
I guess you don't want to accept something like
todo message:database Todo to link class
So you probably want to define some message-level keywords like "link" and "to"...
def token = (alphanum <~ ":" <~ "link") ~ (alphanum <~ "class" <~ "to") ~ alphanum ^^ {
  case a ~ b ~ c => (a, b, c) // a == "message", b == "Todo", c == "database"
}
I guess you would have to define your grammar at that level.
You could of course write a combination rule that does this for you if you encounter this situation frequently.
On the other hand, maybe you have the option to make "tokenA..C" just "token" and then differentiate inside the handler of "token", as sketched below.
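A sketch of that last idea (hypothetical token names; one production parses any token, and the handler validates the combination):

object CommandParser extends scala.util.parsing.combinator.RegexParsers {
  // One production for all the tokens...
  def token: Parser[String] = "tokenA" | "tokenB" | "tokenC"

  // ...then differentiate in the handler: any order is fine, but no duplicates.
  def content: Parser[List[String]] = rep(token) ^? (
    { case ts if ts.distinct == ts => ts },
    ts => "duplicate tokens: " + ts
  )
}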