I have a for comprehension like this:
for {
(value1: String, value2: String, value3: String) <- getConfigs(args)
// more stuff using those values
}
getConfigs returns an Either[Throwable, (Seq[String], String, String)] and when I try to compile I get this error:
value withFilter is not a member of Either[Throwable,(Seq[String], String, String)]
How can I use this method (that returns an Either) in the for comprehension?
Like this:
for {
tuple <- getConfigs()
} println(tuple)
Joking aside, I think it is an interesting question, but it is a bit mistitled.
The problem (see above) is not that for comprehensions are not possible, but that pattern matching inside the for comprehension is not possible with Either.
There is documentation on how for comprehensions are translated, but it doesn't cover every case. This one is not covered there, as far as I can see. So I looked it up in my copy of "Programming in Scala" -- Second Edition (because that is the one I have by my side on dead trees).
Section 23.4 - Translation of for-expressions
There is a subchapter "Translating patterns in generators", which is exactly the problem here, as described above. It lists two cases:
Case One: Tuples
This is exactly our case:
for ((x1, …, xn) <- expr1) yield expr2
should translate to expr1.map { case (x1, …, xn) => expr2 }.
Which is exactly what IntelliJ does when you select the code and run the "Desugar for comprehension" action. Yay!
… but that makes it even weirder in my eyes, because the desugared code actually runs without problems.
So this is the case that (imho) should apply, but it is not what is happening. At least not what we observed. Hm?!
Case two: Arbitrary patterns
for (pat <- expr1) yield expr2
translates to
expr1 withFilter {
case pat => true
case _ => false
} map {
case pat => expr2
}
where there is now a withFilter call!
This case totally explains the error message and why pattern matching in an Either is not possible.
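To make that concrete, this is roughly what the compiler attempts to generate for the code in the question (a sketch following the translation above; since the original for has no yield, the final call would actually be foreach rather than map):
getConfigs(args)
  .withFilter { // <- Either has no withFilter, hence the error
    case (value1: String, value2: String, value3: String) => true
    case _ => false
  }
  .map { case (value1, value2, value3) =>
    // more stuff using those values
  }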
The chapter ultimately refers to the Scala language specification (an older version though), which is where I stop now.
So I am sorry I can't fully answer the question, but hopefully I have hinted enough at the root of the problem here.
Intuition
So why is Either problematic and doesn't provide a withFilter method, while Try and Option do?
Because filter removes elements from the "container", possibly all of them, so we need something that represents an "empty container".
That is easy for Option, where this is obviously None. Also easy for e.g. List. Not so easy for Try, because a Failure can hold any specific exception, so there is no single obvious "empty" value. Instead, a couple of dedicated exceptions take this place:
NoSuchElementException and
UnsupportedOperationException
which is why a Try[X] runs, but an Either[Throwable, X] does not.
It's almost the same thing, but not entirely: Try knows that its failure side is always a Throwable, and the library authors can take advantage of that.
On an Either (which is now right biased), however, the "empty" case is the Left case, which is generic. The user determines which type it is, so the library authors could not pick a sensible "empty" instance for every possible Left.
I think this is why Either doesn't provide a withFilter out-of-the-box and why your expression fails.
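To illustrate, imagine writing a withFilter for Either yourself (purely hypothetical, not part of the standard library): when the predicate fails, there is simply no generic Left value to fall back to.
def withFilterSketch[L, R](e: Either[L, R])(p: R => Boolean): Either[L, R] =
  e match {
    case Right(r) if p(r) => Right(r)
    case Right(_)         => ??? // no sensible "zero" Left[L, R] exists here
    case left @ Left(_)   => left
  }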
Btw. the
expr1.map { case (x1, …, xn) => expr2 }
case works because it throws a MatchError up the call stack and panics out of the problem, which… in itself might be a greater problem.
Oh, and for those who are brave enough: I didn't use the word "Monad" up until now, because Scala doesn't have a data structure for it, and for comprehensions work just fine without it. But maybe a reference won't hurt: additive monads have a "zero" value, which is exactly what Either is missing here and what I tried to give some meaning to in the "Intuition" part.
I guess you want your loop to run only if the value is a Right. If it is a Left, it should not run. This can be achieved really easily:
for {
(value1, value2, value3) <- getConfigs(args).right.toOption
// more stuff using those values
}
Sidenote: I don't know what your exact use case is, but scala.util.Try is better suited for cases where you either have a result or a failure (an exception).
Just write Try { /*some code that may throw an exception*/ } and you'll get either a Success(/*the result*/) or a Failure(/*the caught exception*/).
If your getConfigs method returned a Try instead of an Either, then your code above would work without any changes.
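For illustration, a sketch of what a Try-based getConfigs could look like (the parsing logic here is made up; the point is only the Success/Failure wrapping described above):
import scala.util.{Try, Success, Failure}

def getConfigs(args: Array[String]): Try[(Seq[String], String, String)] =
  Try {
    require(args.length >= 3, s"expected at least 3 arguments, got ${args.length}") // may throw
    (Seq(args(0)), args(1), args(2))
  }

getConfigs(Array("a", "b", "c")) match {
  case Success((values, s2, s3)) => println(s"$values, $s2, $s3")
  case Failure(e) => println(s"Could not read configs: ${e.getMessage}")
}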
You can do this using Oleg's better-monadic-for compiler plugin:
build.sbt:
addCompilerPlugin("com.olegpy" %% "better-monadic-for" % "0.2.4")
And then:
object Test {
def getConfigs: Either[Throwable, (String, String, String)] = Right(("a", "b", "c"))
def main(args: Array[String]): Unit = {
val res = for {
(fst, snd, third) <- getConfigs
} yield fst
res.foreach(println)
}
}
Yields:
a
This works because the plugin drops the unnecessary withFilter while desugaring (treating the pattern as unchecked) and uses a plain .map call. Thus, we get:
val res: Either[Throwable, String] =
getConfigs
.map[String](((x$1: (String, String, String)) => x$1 match {
case (_1: String, _2: String, _3: String)
(String, String, String)((fst @ _), (snd @ _), (third @ _)) => fst
}));
I think the part you may find surprising is that the Scala compiler emits this error because you deconstruct the tuple in place. This, surprisingly, forces the compiler to look for a withFilter method, because to the compiler the pattern looks like an implicit check on the value inside the container, and such checks on values are implemented using withFilter. If you write your code as
for {
tmp <- getConfigs(args)
(value1: Seq[String], value2: String, value3: String) = tmp
// more stuff using those values
}
it should compile without errors.
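Roughly speaking, the value definition then desugars into a plain map (or foreach for a for without yield) with the destructuring done inside the function body, so no withFilter is needed on the Either. A simplified sketch of the idea, not the exact compiler output:
getConfigs(args).foreach { tmp =>
  val (value1, value2, value3) = tmp
  // more stuff using those values
}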
Related
json4s has the following types:
sealed abstract class JValue
case class JString(s: String) extends JValue
//etc
I have the following json value:
val json: JValue = JString("hi")
and I use it in a for-comprehension as such:
val token = for {
JString(s) <- json
} yield s
Here is the question:
As it is, token will be evaluated as List("hi"), namely an instance of the type List[String]. My understanding was that it should instead be Option[String]. Why List instead of Option?
IntelliJ's type-"inference" helper suggests setting the type JValue for the result of the for comprehension. When you do that, however, you get a compile error. What exactly is at fault here, and why is the confusion happening?
Why List[String]?
Well, as you can see here, that's what the json4s authors chose. They could have gone with Vector[String], or Array[String], but decided upon List[String]. I assume they had good reasons.
What's withFilter() got to do with it?
As we all learned on the first day of Scala class, for is not a control-flow language construct, but is actually syntactic sugar used to prettify nested map/flatMap constructs.
for {
b <- a // flatMap()
if b.isSomeCondition // withFilter()
c <- b // map()
} yield c
It turns out that withFilter() is also employed when pattern matching in a for construct. So this...
for {
JString(s) <- json
} yield s
...gets translated into this (roughly).
json.withFilter{
case JString((s @ _)) => true
case _ => false
}.map{
case JString(s @ _) => s
}
The authors of json4s decided that withFilter() should return a List and thus that's the result of your for comprehension.
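If an Option[String] is what you were after, a plain pattern match side-steps the for-comprehension machinery entirely (a sketch using the JString/JValue types from the question):
val token: Option[String] = json match {
  case JString(s) => Some(s)
  case _          => None
}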
IntelliJ's confusion.
IntelliJ's syntax checker and suggestion engine is pretty good, but it's not perfect. Scala code can get rather complicated and it's not uncommon for IntelliJ to get confused. Trust the compiler. If it compiles without warning then just ignore the faulty type suggestions.
Take the following example: why is the extractor called multiple times, as opposed to temporarily storing the result of the first call and matching against that? Wouldn't it be reasonable to assume that the result of unapply would not change given the same string?
object Name {
val NameReg = """^(\w+)\s(?:(\w+)\s)?(\w+)$""".r
def unapply(fullName: String): Option[(String, String, String)] = {
val NameReg(fname, mname, lname) = fullName
Some((fname, if (mname == null) "" else mname, lname))
}
}
"John Smith Doe" match {
case Name("Jane", _, _) => println("I know you, Jane.")
case Name(f, "", _) => println(s"Hi ${f}")
case Name(f, m, _) => println(s"Howdy, ${f} ${m}.")
case _ => println("Don't know you")
}
Wouldn't it be reasonable to assume that the result of unapply would not change given the same string?
Unfortunately, assuming isn't good enough for a (static) compiler. In order for memoizing to be a legal optimization, the compiler has to prove that the expression being memoized is pure and referentially transparent. However, in the general case, this is equivalent to solving the Halting Problem.
It would certainly be possible to write an optimization pass which tries to prove the purity of certain expressions and memoizes them if and only if it succeeds, but that may be more trouble than it's worth. Such proofs get very hard very quickly, so they are only likely to succeed for very trivial expressions, which execute very quickly anyway.
What is a pattern match? The spec says it matches the "shape" of the value and binds vars to its "components."
In the realm of mutation, you have questions like, if I match on case class C(var v: V), does a case C(x) capture the mutable field? Well, the answer was no:
https://issues.scala-lang.org/browse/SI-5158
The spec says (sorry) that order of evaluation may be changed, so it recommends against side-effects:
In the interest of efficiency the evaluation of a pattern matching
expression may try patterns in some other order than textual sequence.
This might affect evaluation through side effects in guards.
That's in relation to guard expressions, presumably because extractors were added after case classes.
There's no special promise to evaluate extractors exactly once. (Explicitly in the spec, that is.)
The semantics are only that "patterns are tried in sequence".
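If you want the extraction to happen exactly once, you can call the extractor yourself and match on its result (a sketch reusing the Name extractor from the question):
Name.unapply("John Smith Doe") match {
  case Some(("Jane", _, _)) => println("I know you, Jane.")
  case Some((f, "", _))     => println(s"Hi $f")
  case Some((f, m, _))      => println(s"Howdy, $f $m.")
  case _                    => println("Don't know you")
}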
Also, your example for regexes can be simplified, since a regex will not be re-evaluated when unapplied to its own Match. See the example in the doc. That is,
Name.NameReg findFirstMatchIn s map {
case Name("Jane",_,_) =>
}
where
object Name { def unapply(m: Regex.Match) = ... }
I am a bit new to Scala, so apologies if this is something a bit trivial.
I have a list of items which I want to iterate through. I want to execute a check on each of the items, and if just one of them fails I want the whole function to return false. So you can see this as an AND condition. I want it to be evaluated lazily, i.e. the moment I encounter the first false, return false.
I am used to the for-yield syntax which filters items generated through some generator (a list of items, a sequence etc.). In my case however I just want to break out and return false without executing the rest of the loop. In normal Java one would just do a return false; within the loop.
In an inefficient way (i.e. not stopping when I encounter the first false item), I could do it:
(for {
item <- items
if !satisfiesCondition(item)
} yield item).isEmpty
Which is essentially saying that if no items make it through the filter all of them satisfy the condition. But this seems a bit convoluted and inefficient (consider you have 1 million items and the first one already did not satisfy the condition).
What is the best and most elegant way to do this in Scala?
Stopping early at the first false for a condition is done using forall in Scala. (A related question)
Your solution rewritten:
items.forall(satisfiesCondition)
To demonstrate short-circuiting:
List(1,2,3,4,5,6).forall { x => println(x); x < 3 }
1
2
3
res1: Boolean = false
The opposite of forall is exists which stops as soon as a condition is met:
List(1,2,3,4,5,6).exists{ x => println(x); x > 3 }
1
2
3
4
res2: Boolean = true
Scala's for comprehensions are not general iterations. That means they cannot produce every possible result that one can produce out of an iteration -- for example, the very thing you want to do.
There are three things that a Scala for comprehension can do, when you are returning a value (that is, using yield). In the most basic case, it can do this:
Given an object of type M[A], and a function A => B (that is, which returns an object of type B when given an object of type A), return an object of type M[B];
For example, given a sequence of characters, Seq[Char], get UTF-16 integer for that character:
val codes = for (char <- "A String") yield char.toInt
The expression char.toInt converts a Char into an Int, so the String -- which is implicitly converted into a Seq[Char] in Scala -- becomes a Seq[Int] (actually, an IndexedSeq[Int], through some Scala collection magic).
The second thing it can do is this:
Given objects of type M[A], M[B], M[C], etc, and a function of A, B, C, etc into D, return an object of type M[D];
You can think of this as a generalization of the previous transformation, though not everything that could support the previous transformation can necessarily support this one. For example, we could produce all the coordinates of a battleship game like this:
val coords = for {
column <- 'A' to 'L'
row <- 1 to 10
} yield s"$column$row"
In this case, we have objects of the types Seq[Char] and Seq[Int], and a function (Char, Int) => String, so we get back a Seq[String].
The third, and final, thing a for comprehension can do is this:
Given an object of type M[A], such that the type M[T] has a zero value for any type T, a function A => B, and a condition A => Boolean, return either the zero or an object of type M[B], depending on the condition;
This one is harder to understand, though it may look simple at first. Let's look at something that looks simple first, say, finding all vowels in a sequence of characters:
def vowels(s: String) = for {
letter <- s
if Set('a', 'e', 'i', 'o', 'u') contains letter.toLower
} yield letter.toLower
val aStringVowels = vowels("A String")
It looks simple: we have a condition, we have a function Char => Char, and we get a result, and there doesn't seem to be any need for a "zero" of any kind. In this case, the zero would be the empty sequence, but it hardly seems worth mentioning it.
To explain it better, I'll switch from Seq to Option. An Option[A] has two sub-types: Some[A] and None. The zero, evidently, is the None. It is used when you need to represent the possible absence of a value, or the value itself.
Now, let's say we have a web server where users who are logged in and are administrators get extra javascript on their web pages for administration tasks (like wordpress does). First, we need to get the user, if there is one logged in; let's say this is done by this method:
def getUser(req: HttpRequest): Option[User]
If the user is not logged in, we get None, otherwise we get Some(user), where user is the data structure with information about the user that made the request. We can then model that operation like this:
def adminJs(req: HttpRequest): Option[String] = for {
user <- getUser(req)
if user.isAdmin
} yield adminScriptForUser(user)
Here it is easier to see the point of the zero. When the condition is false, adminScriptForUser(user) cannot be executed, so the for comprehension needs something to return instead, and that something is the "zero": None.
In technical terms, Scala's for comprehensions provide syntactic sugar for operations on monads, with an extra operation for monads with zero (see list comprehensions in the same article).
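For reference, this is roughly the "shape" the desugaring relies on; it is a structural convention checked by the compiler, not a real trait in the standard library (the trait name here is made up):
trait ForCompatible[A] {
  def map[B](f: A => B): ForCompatible[B]
  def flatMap[B](f: A => ForCompatible[B]): ForCompatible[B]
  def withFilter(p: A => Boolean): ForCompatible[A] // needs a "zero" to fall back on
  def foreach(f: A => Unit): Unit // used by for loops without yield
}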
What you actually want to accomplish is called a catamorphism, usually represented as a fold method, which can be thought of as a function of M[A] => B. You can write it with fold, foldLeft or foldRight in a sequence, but none of them would actually short-circuit the iteration.
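For instance, the check from the question can be written as a fold, but note that it keeps walking the collection even after the result is already known (a sketch):
def allSatisfy[A](items: Seq[A])(satisfiesCondition: A => Boolean): Boolean =
  items.foldLeft(true)((acc, item) => acc && satisfiesCondition(item)) // visits every element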
Short-circuiting arises naturally out of non-strict evaluation, which is the default in Haskell, in which most of these papers are written. Scala, as most other languages, is by default strict.
There are three solutions to your problem:
Use the special methods forall or exists, which target your precise use case, though they don't solve the generic problem;
Use a non-strict collection; there's Scala's Stream, but it has problems that prevent its effective use. The Scalaz library can help you there (see the sketch after the example below);
Use an early return, which is how the Scala library solves this problem in the general case (in specific cases, it uses better optimizations).
As an example of the third option, you could write this:
def hasEven(xs: List[Int]): Boolean = {
for (x <- xs) if (x % 2 == 0) return true
false
}
Note as well that this is called a "for loop", not a "for comprehension", because it doesn't return a value (well, it returns Unit), since it doesn't have the yield keyword.
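And, as promised above, a sketch of the second option using a lazy collection (LazyList here, the replacement for Stream in Scala 2.13): the predicate stops being evaluated once the first even element is found.
def hasEvenLazy(xs: List[Int]): Boolean =
  xs.to(LazyList).filter(_ % 2 == 0).nonEmpty // laziness: elements after the first even one are never tested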
You can read more about real generic iteration in the article The Essence of The Iterator Pattern, which is a Scala experiment with the concepts described in the paper by the same name.
forall is definitely the best choice for the specific scenario but for illustration here's good old recursion:
@scala.annotation.tailrec
def hasEven(xs: List[Int]): Boolean = xs match {
case head :: _ if head % 2 == 0 => true
case Nil => false
case _ :: tail => hasEven(tail)
}
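For example (made-up inputs):
hasEven(List(1, 3, 5, 6, 7)) // true -- the recursion stops at 6
hasEven(List(1, 3, 5))       // false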
I tend to use recursion a lot for loops w/short circuit use cases that don't involve collections.
UPDATE:
DO NOT USE THE CODE IN MY ANSWER BELOW!
Shortly after I posted the answer below (after misinterpreting the original poster's question), I have discovered a way superior generic answer (to the listing of requirements below) here: https://stackoverflow.com/a/60177908/501113
It appears you have several requirements:
Iterate through a (possibly large) list of items doing some (possibly expensive) work
The work done to an item could return an error
At the first item that returns an error, short circuit the iteration, throw away the work already done, and return the item's error
A for comprehension isn't designed for this (as is detailed in the other answers).
And I was unable to find another Scala collections pre-built iterator that provided the requirements above.
While the code below is based on a contrived example (transforming a String of digits into a BigInt), it is the general pattern I prefer to use; i.e. process a collection and transform it into something else.
def getDigits(shouldOnlyBeDigits: String): Either[IllegalArgumentException, BigInt] = {
@scala.annotation.tailrec
def recursive(
charactersRemaining: String = shouldOnlyBeDigits
, accumulator: List[Int] = Nil
): Either[IllegalArgumentException, List[Int]] =
if (charactersRemaining.isEmpty)
Right(accumulator) //All work completed without error
else {
val item = charactersRemaining.head
val isSuccess =
item.isDigit //Work the item
if (isSuccess)
//This item's work completed without error, so keep iterating
recursive(charactersRemaining.tail, (item - 48) :: accumulator)
else {
//This item hit an error, so short circuit
Left(new IllegalArgumentException(s"item [$item] is not a digit"))
}
}
recursive().map(digits => BigInt(digits.reverse.mkString))
}
When it is called as getDigits("1234") in a REPL (or Scala Worksheet), it returns:
val res0: Either[IllegalArgumentException,BigInt] = Right(1234)
And when called as getDigits("12A34") in a REPL (or Scala Worksheet), it returns:
val res1: Either[IllegalArgumentException,BigInt] = Left(java.lang.IllegalArgumentException: item [A] is not a digit)
You can play with this in Scastie here:
https://scastie.scala-lang.org/7ddVynRITIOqUflQybfXUA
I have a model which has some Option fields, which in turn contain more Option fields. For example:
case class First(second: Option[Second], name: Option[String])
case class Second(third: Option[Third], title: Option[String])
case class Third(numberOfSmth: Option[Int])
I'm receiving this data from external JSONs, and sometimes the data may contain nulls; that was the reason for such a model design.
So the question is: what is the best way to get the deepest field?
First.get.second.get.third.get.numberOfSmth.get
The above looks really ugly, and it may throw an exception if one of the objects is None. I was looking into the Scalaz lib, but didn't figure out a better way to do that.
Any ideas?
The solution is to use Option.map and Option.flatMap:
First.flatMap(_.second.flatMap(_.third.map(_.numberOfSmth)))
Or the equivalent (see the update at the end of this answer):
First flatMap(_.second) flatMap(_.third) map(_.numberOfSmth)
This returns an Option[Int] (provided that numberOfSmth returns an Int). If any of the options in the call chain is None, the result will be None, otherwise it will be Some(count) where count is the value returned by numberOfSmth.
Of course this can get ugly very fast. For this reason Scala supports for comprehensions as syntactic sugar. The above can be rewritten as:
for {
first <- First
second <- first.second
third <- second.third
} yield third.numberOfSmth
Which is arguably nicer (especially if you are not yet used to seeing map/flatMap everywhere, as will certainly be the case after a while of using Scala), and generates the exact same code under the hood.
For more background, you may check this other question: What is Scala's yield?
UPDATE:
Thanks to Ben James for pointing out that flatMap is associative. In other words, x flatMap(y flatMap z) is the same as x flatMap y flatMap z. While the latter is usually not shorter, it has the advantage of avoiding any nesting, which is easier to follow.
Here is some illustration in the REPL (the 4 styles are equivalent, with the first two using flatMap nesting, the other two using flat chains of flatMap):
scala> val l = Some(1,Some(2,Some(3,"aze")))
l: Some[(Int, Some[(Int, Some[(Int, String)])])] = Some((1,Some((2,Some((3,aze))))))
scala> l.flatMap(_._2.flatMap(_._2.map(_._2)))
res22: Option[String] = Some(aze)
scala> l flatMap(_._2 flatMap(_._2 map(_._2)))
res23: Option[String] = Some(aze)
scala> l flatMap(_._2) flatMap(_._2) map(_._2)
res24: Option[String] = Some(aze)
scala> l.flatMap(_._2).flatMap(_._2).map(_._2)
res25: Option[String] = Some(aze)
There is no need for scalaz:
for {
first <- yourFirst
second <- first.second
third <- second.third
number <- third.numberOfSmth
} yield number
Alternatively you can use nested flatMaps
This can be done by chaining calls to flatMap:
def getN(first: Option[First]): Option[Int] =
first flatMap (_.second) flatMap (_.third) flatMap (_.numberOfSmth)
You can also do this with a for-comprehension, but it's more verbose as it forces you to name each intermediate value:
def getN(first: Option[First]): Option[Int] =
for {
f <- first
s <- f.second
t <- s.third
n <- t.numberOfSmth
} yield n
I think it is overkill for your problem, but just as a general reference:
This nested access problem is addressed by a concept called Lenses. They provide a nice mechanism to access nested data types by simple composition. As an introduction you might want to check for instance this SO answer or this tutorial. The question of whether it makes sense to use Lenses in your case is whether you also have to perform a lot of updates in your nested option structure (note: update not in the mutable sense, but returning a new, modified but immutable instance). Without Lenses this leads to lengthy nested case class copy code. If you do not have to update at all, I would stick to om-nom-nom's suggestion.
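Just to give a flavour of the idea, here is a toy sketch of the read side only (not a real lens library; the update side is where libraries like Monocle or Scalaz lenses really pay off):
// Toy "optional getter" composition, loosely in the spirit of lenses/optics.
case class Getter[A, B](get: A => Option[B]) {
  def andThen[C](next: Getter[B, C]): Getter[A, C] =
    Getter(a => get(a).flatMap(next.get))
}

val secondG = Getter[First, Second](_.second)
val thirdG  = Getter[Second, Third](_.third)
val numberG = Getter[Third, Int](_.numberOfSmth)

// Composes into a single First => Option[Int] accessor.
val firstToNumber: Getter[First, Int] = secondG andThen thirdG andThen numberG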
In languages like SML, Erlang, and a bunch of others we may define functions like this:
fun reverse [] = []
| reverse (x :: xs) = reverse xs @ [x];
I know we can write analog in Scala like this (and I know, there are many flaws in the code below):
def reverse[T](lst: List[T]): List[T] = lst match {
case Nil => Nil
case x :: xs => reverse(xs) ++ List(x)
}
But I wonder if we could write the former code in Scala, perhaps with desugaring to the latter.
Are there any fundamental limitations to such syntax being implemented in the future (I mean really fundamental -- e.g. the way type inference works in Scala, or something else, except the parser obviously)?
UPD
Here is a snippet of how it could look like:
type T
def reverse(Nil: List[T]) = Nil
def reverse(x :: xs: List[T]): List[T] = reverse(xs) ++ List(x)
It really depends on what you mean by fundamental.
If you are really asking "is there a technical showstopper that would prevent implementing this feature", then I would say the answer is no. You are talking about desugaring, and you are on the right track here. All there is to do is to basically stitch several separate cases into one single function, and this can be done as a mere preprocessing step (this only requires syntactic knowledge, no need for semantic knowledge). But for this to even make sense, I would define a few rules:
The function signature is mandatory (in Haskell for example, the signature would be optional, whether you define the function at once or in several parts). We could try to arrange to live without the signature and attempt to extract it from the different parts, but lack of type information would quickly come back to bite us. A simpler argument is that if we are to try to infer an implicit signature, we might as well do it for all methods. But the truth is that there are very good reasons to have explicit signatures in Scala, and I can't imagine that changing.
All the parts must be defined within the same scope. To start with, they must be declared in the same file because each source file is compiled separately, and thus a simple preprocessor would not be enough to implement the feature. Second, we still end up with a single method in the end, so it's only natural to have all the parts in the same scope.
Overloading is not possible for such methods (otherwise we would need to repeat the signature for each part just so the preprocessor knows which part belongs to which overload)
Parts are added (stitched) to the generated match in the order they are declared
So here is how it could look like:
def reverse[T](lst: List[T]): List[T] // Exactly like an abstract def (provides the signature)
// .... some unrelated code here...
def reverse(Nil) = Nil
// .... another bit of unrelated code here...
def reverse(x :: xs ) = reverse(xs) ++ List(x)
Which could be trivially transformed into:
def reverse[T](lst: List[T]): List[T] = lst match {
case Nil => Nil
case x :: xs => reverse(xs) ++ List(x)
}
// .... some unrelated code here...
// .... another bit of unrelated code here...
It is easy to see that the above transformation is very mechanical and can be done by just manipulating the source AST (the AST produced by the slightly modified grammar that accepts these new constructs), and transforming it into the target AST (the AST produced by the standard Scala grammar).
Then we can compile the result as usual.
So there you go, with a few simple rules we are able to implement a preprocessor that does all the work to implement this new feature.
If by fundamental you are asking "is there anything that would make this feature out of place", then it can be argued that this does not feel very Scala-like. But more to the point, it does not bring that much to the table. The Scala author(s) actually tend toward making the language simpler (as in fewer built-in features, trying to move some built-in features into libraries), and adding new syntax that is not really more readable goes against the goal of simplification.
In SML, your code snippet is literally just syntactic sugar (a "derived form" in the terminology of the language spec) for
val rec reverse = fn x =>
case x of [] => []
| x::xs => reverse xs @ [x]
which is very close to the Scala code you show. So, no there is no "fundamental" reason that Scala couldn't provide the same kind of syntax. The main problem is Scala's need for more type annotations, which makes this shorthand syntax far less attractive in general, and probably not worth the while.
Note also that the specific syntax you suggest would not fly well, because there is no way to distinguish one case-by-case function definition from two overloaded functions syntactically. You probably would need some alternative syntax, similar to SML using "|".
I don't know SML or Erlang, but I know Haskell. It is a language without method overloading. Method overloading combined with such pattern matching could lead to ambiguities. Imagine the following code:
def f(x: String) = "String "+x
def f(x: List[_]) = "List "+x
What should it mean? It can mean method overloading, i.e. the method is determined at compile time. It can also mean pattern matching: there would be just one f(x: AnyRef) method that would do the matching.
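Under the pattern-matching reading, the two definitions above would collapse into a single method along these lines (a sketch):
def f(x: AnyRef): String = x match {
  case s: String  => "String " + s
  case l: List[_] => "List " + l
}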
Scala also has named parameters, which would probably also be broken.
I don't think that Scala is able to offer a simpler syntax than the one you have shown, in general. A simpler syntax may IMHO work in some special cases only.
There are at least two problems:
[ and ] are reserved characters because they are used for type arguments. The compiler allows spaces around them, so that would not be an option.
The other problem is that = returns Unit, so the expression after the | would not return any result.
The closest I could come up with is this (note that it is very specialized towards your example):
// Define a class to hold the values left and right of the | sign
class |[T, S](val left: T, val right: PartialFunction[T, T])
// Create a class that contains the | operator
class OrAssoc[T](left: T) {
def |(right: PartialFunction[T, T]): T | T = new |(left, right)
}
// Add the | to any potential target
implicit def anyToOrAssoc[S](left: S): OrAssoc[S] = new OrAssoc(left)
object fun {
// Use the magic of the update method
def update[T, S](choice: T | S): T => T = { arg =>
if (choice.right.isDefinedAt(arg)) choice.right(arg)
else choice.left
}
}
// Use the above construction to define a new method
val reverse: List[Int] => List[Int] =
fun() = List.empty[Int] | {
case x :: xs => reverse(xs) ++ List(x)
}
// Call the method
reverse(List(3, 2, 1))