In Python you can avoid try/catch/finally boilerplate with the `with` statement (see "What is the python keyword "with" used for?"). I remember seeing an alternative to that in Scala, but I can't find it anymore.
It goes along the lines of:
def using[O](r: {def close()})(doit: () => O): O = try {
doit()
} finally {
r.close
}
using(myWriter){() => myWriter.println("something or another")}
Is it built into 2.10, or do I need a separate library for it?
It's almost trivial to make your own that covers almost all use cases (here using 2.10):
implicit class TidyUpAnything[A](val a: A) extends AnyVal {
def tidily[Z](g: A=>Any)(f: A=>Z) = try { f(a) } finally { g(a) }
}
If you want exceptions to pass through, use as is:
scala> Option(null: String).tidily(println){_.get} // Should print None
None
java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:313)
...
and if you want to handle exceptions, use it in conjunction with scala.util.Try:
scala> import scala.util._
scala> Try( Option(null: String).tidily(println){ _.get } )
None
res1: scala.util.Try[String] = Failure(java.util.NoSuchElementException: None.get)
Normally you would make g be something like _.close (a sketch of that follows below), but you can do arbitrary resource cleanup with it. For example, here we back off a counter by one whenever we finish:
var i = 0
val a = Array(1,2)
a.tidily(_ => i -= 1){ _.foreach(_ => i += 1) }
scala> i
res2: Int = 1
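For the typical _.close case mentioned above, here is a sketch with a java.io.PrintWriter (the file name is just an example, and it assumes the TidyUpAnything implicit class is in scope):
import java.io.PrintWriter
// the writer is closed whether println succeeds or throws
new PrintWriter("out.txt").tidily(_.close()) { w =>
  w.println("something or another")
}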
I'm in the process of adopting IO/Either to replace Future/exceptions where applicable, but I need help with the following code:
// some Java library
def dbLoad(id: Int): Int = {
throw new Exception("db exception")
}
// my scala code
sealed trait DbError extends Exception with Product
object DbError {
case object SomeError extends DbError
}
val load: Int => IO[Either[DbError, Int]] = { id =>
IO.fromFuture { IO { Future {
try { Right(dbLoad(id)) } catch { case NonFatal(e) => Left(SomeError) }
} } }
}
val loadAll: IO[Either[DbError, (Int, Int, Int)]] =
for {
i1 <- load(1)
i2 <- // call 'load' passing i1 as parameter, if i1 is 'right'
i3 <- // call 'load' passing i2 as parameter, if i2 is 'right'
} yield (i1, i2, i3) match {
case (Right(i1), Right(i2), Right(i3)) => Right((i1, i2, i3))
case _ => Left(SomeError)
}
I have not been able to get it working/compiling correctly, could you please help me understand:
how can I avoid executing subsequent calls to load (in loadAll) if a Left is detected?
if a call to load is successful, how can I use its right value for the following call to load?
is this the right approach to do it? Would you implement it in a different way?
Thanks everyone
Let me first put up the code that I think gets you what you want, and show how things like this are typically approached; I'll then describe the what and why, plus some other suggestions:
import cats.data.EitherT
import cats.effect.IO
import cats.implicits._
import com.example.StackOverflow.DbError.SomeError
import scala.concurrent.Future
import scala.util.control.NonFatal
import scala.concurrent.ExecutionContext.Implicits.global
object StackOverflow {
// some Java library
def dbLoad(id: Int): Int = {
throw new Exception("db exception")
}
// my scala code
sealed trait DbError extends Exception with Product
object DbError {
case object SomeError extends DbError
}
val load: Int => IO[Either[DbError, Int]] = { id =>
IO.fromFuture(
IO(
Future(dbLoad(id))
.map(Right(_))
.recover {
case NonFatal(_) => Left(SomeError)
}
)
)
}
val loadAll: EitherT[IO, DbError, (Int, Int, Int)] =
for {
i1 <- EitherT(load(1))
i2 <- EitherT(load(i1))
i3 <- EitherT(load(i2))
} yield (i1, i2, i3)
val x: IO[Either[DbError, (Int, Int, Int)]] = loadAll.value
}
First, rather than using try-catch inside the IO[Future[_]], note that Future itself has a number of combinators (such as map and recover) that can help you manage errors, assuming you have some control over what you get back.
For-comprehensions in Scala when applied in this fashion "short circuit" so if the first call to load(1) fails with a left then the rest of the comprehension won't execute. The use of EitherT allows you to manage the fact that your Either is "wrapped" in an effect type.
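To see the short circuit in isolation, here is a minimal sketch with a String error and hypothetical values (not taken from the question); once a step evaluates to a Left, the later steps never run:
import cats.data.EitherT
import cats.effect.IO
val step1: EitherT[IO, String, Int] = EitherT.leftT[IO, Int]("boom")
val chained: EitherT[IO, String, Int] =
  for {
    a <- step1
    b <- EitherT.liftF[IO, String, Int](IO { println("never runs"); a + 1 }) // skipped: step1 is a Left
  } yield b
// chained.value is an IO that yields Left("boom"); "never runs" is not printed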
There are some problems with this approach, specifically around variance; you can read about them here:
http://www.beyondthelines.net/programming/the-problem-with-eithert/
There are also some performance implications of using this pattern that you may want to consider.
Let's say we have a fake data source which returns the data it holds in batches:
class DataSource(size: Int) {
private var s = 0
implicit val g = scala.concurrent.ExecutionContext.global
def getData(): Future[List[Int]] = {
s = s + 1
Future {
Thread.sleep(Random.nextInt(s * 100))
if (s <= size) {
List.fill(100)(s)
} else {
List()
}
}
  }
}
object Test extends App {
val source = new DataSource(100)
implicit val g = scala.concurrent.ExecutionContext.global
def process(v: List[Int]): Unit = {
println(v)
}
def next(f: (List[Int]) => Unit): Unit = {
val fut = source.getData()
fut.onComplete {
case Success(v) => {
f(v)
v match {
case h :: t => next(f)
}
}
}
}
next(process)
Thread.sleep(1000000000)
}
I have my own attempt, but the problem is that parts of it are not pure. Ideally, I would like to wrap the Future for each batch into one big Future, where the wrapper Future succeeds when the last batch returns a zero-size list. My situation is a little different from that post: the next() there is a synchronous call, while mine is also async.
Or is it even possible to do what I want? The next batch should only be fetched when the previous one has resolved, and whether to fetch the next batch depends on the size returned.
What's the best way to walk through this type of data source? Are there any existing Scala frameworks that provide the feature I am looking for? Are Play's Iteratee, Enumerator, and Enumeratee the right tools? If so, can anyone provide an example of how to use those facilities to implement what I am looking for?
Edit:
With help from chunjef, I have just tried it out, and it actually did work for me. However, I made a small change based on his answer:
Source.fromIterator(()=>Iterator.continually(source.getData())).mapAsync(1) (f=>f.filter(_.size > 0))
.via(Flow[List[Int]].takeWhile(_.nonEmpty))
.runForeach(println)
However, can someone give a comparison between Akka Streams and Play's Iteratee? Is it worth also trying out Iteratee?
Code snip 1:
Source.fromIterator(() => Iterator.continually(ds.getData)) // line 1
.mapAsync(1)(identity) // line 2
.takeWhile(_.nonEmpty) // line 3
.runForeach(println) // line 4
Code snip 2: Assume getData depends on some output of another flow, and I would like to concatenate it with the flow below. However, it yields a "too many open files" error. I'm not sure what causes this error; mapAsync has been limited to a parallelism of 1, if I understood correctly.
Flow[Int].mapConcat[Future[List[Int]]](c => {
Iterator.continually(ds.getData(c)).to[collection.immutable.Iterable]
}).mapAsync(1)(identity).takeWhile(_.nonEmpty).runForeach(println)
The following is one way to achieve the same behavior with Akka Streams, using your DataSource class:
import scala.concurrent.Future
import scala.util.Random
import akka.actor.ActorSystem
import akka.stream._
import akka.stream.scaladsl._
object StreamsExample extends App {
implicit val system = ActorSystem("Sandbox")
implicit val materializer = ActorMaterializer()
val ds = new DataSource(100)
Source.fromIterator(() => Iterator.continually(ds.getData)) // line 1
.mapAsync(1)(identity) // line 2
.takeWhile(_.nonEmpty) // line 3
.runForeach(println) // line 4
}
class DataSource(size: Int) {
...
}
A simplified line-by-line overview:
line 1: Creates a stream source that continually calls ds.getData if there is downstream demand.
line 2: mapAsync is a way to deal with stream elements that are Futures. In this case, the stream elements are of type Future[List[Int]]. The argument 1 is the level of parallelism: we specify 1 here because DataSource internally uses a mutable variable, and a parallelism level greater than one could produce unexpected results. identity is shorthand for x => x, which basically means that for each Future, we pass its result downstream without transforming it.
line 3: Essentially, ds.getData is called as long as the result of the Future is a non-empty List[Int]. If an empty List is encountered, processing is terminated.
line 4: runForeach here takes a function List[Int] => Unit and invokes that function for each stream element.
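As a small aside (a sketch assuming the same StreamsExample setup as above): runForeach also materializes a Future[Done], which gives you a hook for when the stream finishes, e.g. to terminate the ActorSystem instead of sleeping:
import akka.Done
import scala.concurrent.Future
val done: Future[Done] =
  Source.fromIterator(() => Iterator.continually(ds.getData))
    .mapAsync(1)(identity)
    .takeWhile(_.nonEmpty)
    .runForeach(println)
import system.dispatcher // ExecutionContext for onComplete
done.onComplete(_ => system.terminate())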
Ideally, I would like to wrap the Future for each batch into one big Future, where the wrapper Future succeeds when the last batch returns a zero-size list?
I think you are looking for a Promise.
You would set up a Promise before you start the first iteration.
This gives you promise.future, a Future that you can then use to follow the completion of everything.
In your onComplete, you add a case _ => promise.success(()).
Something like
import scala.concurrent.{ Future, Promise }
import scala.util.{ Failure, Success }

def loopUntilDone(f: (List[Int]) => Unit): Future[Unit] = {
val promise = Promise[Unit]
def next(): Unit = source.getData().onComplete {
case Success(v) =>
f(v)
v match {
case h :: t => next()
case _ => promise.success(())
}
case Failure(e) => promise.failure(e)
}
// get going
next()
// return the Future for everything
promise.future
}
// future for everything, this is a `Future[Unit]`
// its `onComplete` will be triggered when there is no more data
val everything = loopUntilDone(process)
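If you just need the demo app to wait for that Future instead of the long Thread.sleep, a small sketch (assuming the question's process function and the loopUntilDone above):
import scala.concurrent.Await
import scala.concurrent.duration.Duration
// block only at the very end of the app, until an empty batch has been seen
Await.result(loopUntilDone(process), Duration.Inf)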
You are probably looking for a reactive streams library. My personal favorite (and the one I'm most familiar with) is Monix. This is how it would work with DataSource unchanged:
import scala.concurrent.duration.Duration
import scala.concurrent.Await
import monix.reactive.Observable
import monix.execution.Scheduler.Implicits.global
object Test extends App {
val source = new DataSource(100)
val completed = // <- this is Future[Unit], completes when foreach is done
Observable.repeatEval(Observable.fromFuture(source.getData())) // repeatEval re-evaluates getData() for each element; a plain repeat would reuse one already-started Future
.flatten // <- Here it's Observable[List[Int]], it has collection-like methods
.takeWhile(_.nonEmpty)
.foreach(println)
Await.result(completed, Duration.Inf)
}
I just figured out that using flatMapConcat achieves what I wanted. There is no point in starting another question since I already have the answer, so I'm putting my sample code here in case someone is looking for a similar answer.
This type of API is very common in integrations between traditional enterprise applications. The DataSource mocks the API, while the Test object (extending App) demonstrates how client code can use Akka Streams to consume it.
In my small project the API was provided over SOAP, and I used scalaxb to turn the SOAP interface into Scala async style. With the client calls demonstrated in the Test object, we can consume the API with Akka Streams. Thanks to everyone for the help.
import scala.collection.mutable
import scala.concurrent.Future
import scala.util.Random

class DataSource(size: Int) {
private var transactionId: Long = 0
private val transactionCursorMap: mutable.HashMap[TransactionId, Set[ReadCursorId]] = mutable.HashMap.empty
private val cursorIteratorMap: mutable.HashMap[ReadCursorId, Iterator[List[Int]]] = mutable.HashMap.empty
implicit val g = scala.concurrent.ExecutionContext.global
case class TransactionId(id: Long)
case class ReadCursorId(id: Long)
def startTransaction(): Future[TransactionId] = {
Future {
synchronized {
transactionId += 1
}
val t = TransactionId(transactionId)
transactionCursorMap.update(t, Set(ReadCursorId(0)))
t
}
}
def createCursorId(t: TransactionId): ReadCursorId = {
synchronized {
val c = transactionCursorMap.getOrElseUpdate(t, Set(ReadCursorId(0)))
val currentId = c.foldLeft(0l) { (acc, a) => acc.max(a.id) }
val cId = ReadCursorId(currentId + 1)
transactionCursorMap.update(t, c + cId)
cursorIteratorMap.put(cId, createIterator)
cId
}
}
def createIterator(): Iterator[List[Int]] = {
(for {i <- 1 to 100} yield List.fill(100)(i)).toIterator
}
def startRead(t: TransactionId): Future[ReadCursorId] = {
Future {
createCursorId(t)
}
}
def getData(cursorId: ReadCursorId): Future[List[Int]] = {
synchronized {
Future {
Thread.sleep(Random.nextInt(100))
cursorIteratorMap.get(cursorId) match {
case Some(i) => i.next()
case _ => List()
}
}
}
}
}
object Test extends App {
val source = new DataSource(10)
implicit val system = ActorSystem("Sandbox")
implicit val materializer = ActorMaterializer()
implicit val g = scala.concurrent.ExecutionContext.global
val s = Source.fromFuture(source.startTransaction())
.map { e =>
source.startRead(e)
}
.mapAsync(1)(identity)
.flatMapConcat(
e => {
Source.fromIterator(() => Iterator.continually(source.getData(e)))
})
.mapAsync(5)(identity)
.via(Flow[List[Int]].takeWhile(_.nonEmpty))
.runForeach(println)
/*
val done = Source.fromIterator(() => Iterator.continually(source.getData())).mapAsync(1)(identity)
.via(Flow[List[Int]].takeWhile(_.nonEmpty))
.runFold(List[List[Int]]()) { (acc, r) =>
// println("=======" + acc + r)
r :: acc
}
done.onSuccess {
case e => {
e.foreach(println)
}
}
done.onComplete(_ => system.terminate())
*/
}
private def foo(a:A):B = a match{
case A(...) =>
val x = a.b //error: wrong forward reference a
...
}
Where b is not mentioned in A(...), if that matters.
I've tried my luck on Google, but I seem to find only posts of people having errors involving forward references but no explanation of what this particular error actually means.
Would appreciate it if somebody could help me out.
Well, don't I feel stupid now...
private def foo(a:A):B = a match{
case A(...) =>
val x = a.b //error: wrong forward reference a
...
val a = ... //<-- THAT's the reason for the error
...
}
So a simple rename will resolve the issue:
private def foo(aa:A):B = aa match{
case A(...) =>
val x = aa.b
...
val a = ...
...
}
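For readers who want to reproduce this without the elided parts, here is a self-contained sketch with hypothetical class and member names; the later val a shadows the parameter for the entire case block, so the earlier a.b is a forward reference to a value that is not yet defined:
case class A(c: String) { val b: String = c }

def foo(a: A): String = a match {
  case A(_) =>
    val x = a.b      // error: forward reference extends over definition of value a
    val a = "later"  // this local `a` is what the earlier reference resolves to
    x + a
}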
Here is an attempt to explain what #User1291's answer did not.
I'm new to Scala and Java so the answer wasn't obvious to me. I was surprised to run into this error in my (simplified) code:
object Main {
val data = getData()
def getUser() = {
getUserFrom(data) // error: Wrong Forward Reference
}
}
Wrong Forward Reference is equivalent to Java's Illegal Forward Reference, which is a fancy way of saying you can't reference a value that isn't known at compile time. In this case, getData() can only return a value at run time, and referencing data gave this error.
When I tried changing the code to reference a known string, as expected the error went away:
object Main {
val name = "PieOhPah"
def getUser() = {
getUserFrom(name)
}
}
Another way is to close over the value with a function and access it from inside since functions are not evaluated until runtime:
object Main {
val data = getData()
def getUser(userData: UserData) = {
getUserFrom(userData)
}
// Invoke the method later with `data`
print(getUser(data).name)
}
The problem is probably that you are using pattern matching in some wrong way. Since you have not provided the complete code, I have no idea exactly what that mistake is.
I am sure the problem is somewhere else, as the following code (which is almost the same as what you have given) works flawlessly:
scala> :pa
// Entering paste mode (ctrl-D to finish)
case class A( c: String ) {
val b: String = c
}
def demoA( a: A ): String = a match {
case A( iAmC ) => {
val x = a.b
x
}
}
// Exiting paste mode, now interpreting.
defined class A
demoA: (a: A)String
scala> val anA = A( "sdfsd" )
anA: A = A(sdfsd)
scala> demoA( anA )
res3: String = sdfsd
So... basically, if you have a case class like the following,
case class A( b: String, c: String )
then the following would have worked:
private def foo( a:A ): B = a match{
case A( iAmB, iAmC ) => {
// iAmB and iAmC have the values of a.b and a.c respectively
...
}
}
In your case, your function signature already says that a is an instance of A - def foo( a:A ) - so you really don't need to pattern match here:
private def foo( a:A ): B = {
// Now class A should have member b and c
val iAmB = a.b
val iAmC = a.c
...
}
According to the Scala Language Specification (§6.19), "An enumerator sequence always starts with a generator". Why?
I sometimes find this restriction to be a hindrance when using for-comprehensions with monads, because it means you can't do things like this:
def getFooValue(): Future[Int] = {
for {
manager = Manager.getManager() // could throw an exception
foo <- manager.makeFoo() // method call returns a Future
value = foo.getValue()
} yield value
}
Indeed, scalac rejects this with the error message '<-' expected but '=' found.
If this was valid syntax in Scala, one advantage would be that any exception thrown by Manager.getManager() would be caught by the Future monad used within the for-comprehension, and would cause it to yield a failed Future, which is what I want. The workaround of moving the call to Manager.getManager() outside the for-comprehension doesn't have this advantage:
def getFooValue(): Future[Int] = {
val manager = Manager.getManager()
for {
foo <- manager.makeFoo()
value = foo.getValue()
} yield value
}
In this case, an exception thrown by foo.getValue() will yield a failed Future (which is what I want), but an exception thrown by Manager.getManager() will be thrown back to the caller of getFooValue() (which is not what I want). Other possible ways of handling the exception are more verbose.
I find this restriction especially puzzling because in Haskell's otherwise similar do notation, there is no requirement that a do block should begin with a statement containing <-. Can anyone explain this difference between Scala and Haskell?
Here's a complete working example showing how exceptions are caught by the Future monad in for-comprehensions:
import scala.concurrent._
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Try, Success, Failure}
class Foo(val value: Int) {
def getValue(crash: Boolean): Int = {
if (crash) {
throw new Exception("failed to get value")
} else {
value
}
}
}
class Manager {
def makeFoo(crash: Boolean): Future[Foo] = {
if (crash) {
throw new Exception("failed to make Foo")
} else {
Future(new Foo(10))
}
}
}
object Manager {
def getManager(crash: Boolean): Manager = {
if (crash) {
throw new Exception("failed to get manager")
} else {
new Manager()
}
}
}
object Main extends App {
def getFooValue(crashGetManager: Boolean,
crashMakeFoo: Boolean,
crashGetValue: Boolean): Future[Int] = {
for {
manager <- Future(Manager.getManager(crashGetManager))
foo <- manager.makeFoo(crashMakeFoo)
value = foo.getValue(crashGetValue)
} yield value
}
def waitForValue(future: Future[Int]): Unit = {
val result = Try(Await.result(future, Duration("10 seconds")))
result match {
case Success(value) => println(s"Got value: $value")
case Failure(e) => println(s"Got error: $e")
}
}
val future1 = getFooValue(false, false, false)
waitForValue(future1)
val future2 = getFooValue(true, false, false)
waitForValue(future2)
val future3 = getFooValue(false, true, false)
waitForValue(future3)
val future4 = getFooValue(false, false, true)
waitForValue(future4)
}
Here's the output:
Got value: 10
Got error: java.lang.Exception: failed to get manager
Got error: java.lang.Exception: failed to make Foo
Got error: java.lang.Exception: failed to get value
This is a trivial example, but I'm working on a project in which we have a lot of non-trivial code that depends on this behaviour. As far as I understand, this is one of the main advantages of using Future (or Try) as a monad. What I find strange is that I have to write
manager <- Future(Manager.getManager(crashGetManager))
instead of
manager = Manager.getManager(crashGetManager)
(Edited to reflect #RexKerr's point that the monad is doing the work of catching the exceptions.)
For-comprehensions do not catch exceptions. Try does, and it has the appropriate methods to participate in for-comprehensions, so you can
for {
manager <- Try { Manager.getManager() }
...
}
But then it's expecting Try all the way down unless you manually or implicitly have a way to switch container types (e.g. something that converts Try to a List).
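For instance, here is a sketch (reusing the hypothetical Manager/Foo API from the question) of one such switch: Future.fromTry lifts the leading Try into the same container as the rest of the comprehension, so an exception from getManager() ends up as a failed Future:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.Try

def getFooValue(): Future[Int] =
  for {
    manager <- Future.fromTry(Try(Manager.getManager())) // a thrown exception becomes a failed Future here
    foo     <- manager.makeFoo()
    value    = foo.getValue()
  } yield value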
So I'm not sure your premises are right. Any assignment you made in a for-comprehension can just be made early.
(Also, there is no point doing an assignment inside a for comprehension just to yield that exact value. Just do the computation in the yield block.)
(Also, just to illustrate that multiple types can play a role in for comprehensions so there's not a super-obvious correct answer for how to wrap an early assignment in terms of later types:
// List and Option, via implicit conversion
for {i <- List(1,2,3); j <- Option(i).filter(_ <2)} yield j
// Custom compatible types with map/flatMap
// Use :paste in the REPL to define A and B together
class A[X] { def flatMap[Y](f: X => B[Y]): A[Y] = new A[Y] }
class B[X](x: X) { def map[Y](f: X => Y): B[Y] = new B(f(x)) }
for{ i <- (new A[Int]); j <- (new B(i)) } yield j.toString
Even if you take the first type you still have the problem of whether there is a unique "bind" (way to wrap) and whether to doubly-wrap things that are already the correct type. There could be rules for all these things, but for-comprehensions are already hard enough to learn, no?)
Haskell translates the equivalent of for { manager = Manager.getManager(); ... } to the equivalent of lazy val manager = Manager.getManager(); for { ... }. This seems to work:
scala> lazy val x: Int = throw new Exception("")
x: Int = <lazy>
scala> for { y <- Future(x + 1) } yield y
res8: scala.concurrent.Future[Int] = scala.concurrent.impl.Promise$DefaultPromise#fedb05d
scala> Try(Await.result(res8, Duration("10 seconds")))
res9: scala.util.Try[Int] = Failure(java.lang.Exception: )
I think the reason this can't be done is that for-comprehensions are syntactic sugar for the flatMap and map methods (or withFilter, if you use a condition in the comprehension). When you are just storing a value in an immutable variable, you can't use these methods. That's the reason you would be OK using Try, as pointed out by Rex Kerr: in that case, you are able to use the map and flatMap methods.
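To make the desugaring concrete, here is roughly (an approximation, not the exact compiler output) what the working comprehension from the question expands to; the leading generator supplies the initial Future whose flatMap and map drive everything else, which is why a bare value definition cannot come first:
// for {
//   manager <- Future(Manager.getManager(crashGetManager))
//   foo     <- manager.makeFoo(crashMakeFoo)
//   value    = foo.getValue(crashGetValue)
// } yield value
Future(Manager.getManager(crashGetManager))
  .flatMap(manager => manager.makeFoo(crashMakeFoo))
  .map(foo => foo.getValue(crashGetValue))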
Here is code in Scala:
def write() = {
try {
val out = new PrintWriter(new BufferedWriter(new FileWriter(fileName, true)))
out.println("123")
out.close
} catch {
case e: IOException => {}
}
//finally {
//out.close // ops, it's not visible in this context
//}
}
It would be better to have out.close in a finally block, wouldn't it? But I don't want to use a var.
My question is, how do I achieve that?
A variable defined in a block is local to that block. So if you insist on using try/finally manually you will have to move the val out of the block.
However, what you are trying to achieve is to create a resource, use it in a block, and call a close method on it when leaving the block, no matter whether you leave the block normally or abnormally via an exception. This is an extremely common problem, so there is already a library for it, called Scala ARM. ARM stands for automatic resource management.
Here is the basic usage:
import resource._
for(input <- managed(new FileInputStream("test.txt"))) {
// Code that uses the input as a FileInputStream
}
There was some talk of moving this construct to the Scala standard library, so in the future you probably won't even need an external dependency.
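(For what it's worth, that has since happened in spirit: Scala 2.13 ships scala.util.Using, which covers the same use case. A minimal sketch:)
import java.io.{ BufferedReader, FileReader }
import scala.util.{ Try, Using }
// returns a Try; the reader is closed whether the body returns or throws
val firstLine: Try[String] =
  Using(new BufferedReader(new FileReader("test.txt"))) { reader =>
    reader.readLine()
  }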
I would recommend using a library for something like this. It is just one more line in your build.sbt. But for educational purposes, here is how you would roll your own:
def managed[T <: AutoCloseable](resource:T) = new Traversable[T] {
def foreach[U](f:T=>U) {
try {
f(resource)
} finally {
resource.close()
}
}
}
And here is how to use it
scala> for(reader<-managed(new java.io.FileReader("/etc/passwd"))) { println(reader.read()) }
114
scala> for(reader<-managed(new java.io.FileReader("/etc/shadow"))) { println(reader.read()) }
java.io.FileNotFoundException: /etc/shadow (Permission denied)
...
You will still get the exception, but close will be called. Of course, if close throws an exception as well, this will hide the original exception. Little details like this are probably handled better in Scala ARM.
This is what I use to manage closeable resources passed to functions, whether they return Futures or not:
import java.io.Closeable
import org.apache.commons.io.IOUtils
import scala.concurrent.{ ExecutionContext, Future }

def withClosable[ T, C <: Closeable ]( closable: C )( f: C ⇒ T ) = try { f( closable ) } finally { IOUtils closeQuietly closable }

def withFutureClosable[ T <: Future[Any], C <: Closeable ]( closable: C )( f: C ⇒ T )( implicit ec: ExecutionContext ) = f( closable ) andThen {
  case _ => IOUtils closeQuietly closable // andThen needs an ExecutionContext in scope
}
I use IOUtils from commons-io to simplify the call that actually closes the resource. A simple try { closable.close() } catch { case _ => /* blah */ } would do as well.
Example usage:
withClosable(new FileInputStream("f")) { stream => /* read the stream */ }
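And the Future-returning variant, assuming an implicit ExecutionContext and the imports from the snippet above, with a hypothetical async body:
// the stream is closed when the returned Future completes, successfully or not
withFutureClosable(new FileInputStream("f")) { stream =>
  Future { stream.read() }
}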
The loan pattern is more usual for this use case, but since anything goes on Stack Overflow, you can construct the expression you're looking for with Try.
Try deserves more exposure as a handy tool.
scala> import util._
import util._
scala> import io._
import io._
Try to open the file --
scala> def f =
| Try (Source.fromFile("foo.text")) map { in =>
then do something with it, packaging the result in a tuple with the i/o source --
note that when you do a value definition in a for-comprehension, this is what it does --
| (in, Try(in.getLines.mkString("/")))
| } flatMap {
then close the source and yield the result of the computation --
| case (in, res) =>
| in.close()
| res
| }
f: scala.util.Try[String]
Uncommented:
scala> def f =
| Try (Source.fromFile("foo.text")) map { in =>
| (in, Try(in.getLines.mkString("/")))
| } flatMap {
| case (in, res) =>
| in.close()
| res
| }
f: scala.util.Try[String]
scala> f
res1: scala.util.Try[String] = Failure(java.io.FileNotFoundException: foo.text (No such file or directory))
Create the test file with some classic humour text, then try again:
scala> f
res2: scala.util.Try[String] = Success(Now is the time/for all good dogs/to lie.)
You can sugarcoat it as a for-comprehension, though observe the extra flatten, since you get a map instead of flatMap from the yield:
scala> def g = (for {
| in <- Try (Source.fromFile("foo.text"))
| res = Try(in.getLines.mkString("/"))
| } yield {
| in.close()
| res
| }).flatten
g: scala.util.Try[String]
scala> g
res2: scala.util.Try[String] = Success(Now is the time/for all good dogs/to lie.)
What if we want to fail if the close fails?
I don't want to type in all that stuff into the REPL again!
scala> :hi // :history
[snip]
2490 def g = (for {
2491 in <- Try (Source.fromFile("foo.text"))
2492 res = Try(in.getLines.mkString("/"))
2493 } yield {
2494 in.close()
2495 res
2496 }).flatten
2497 :hi
scala> :edit 2490+7 // or just :edit 2490-
+import util._
+import io._
+def g = (for {
+ in <- Try (Source.fromFile("foo.text"))
+ res = Try(in.getLines.mkString("/"))
+} yield {
+ val ok = Try(in.close())
+ res transform (s => ok map (_ => s), new Failure(_))
+}).flatten
+
import util._
import io._
g: scala.util.Try[String]
The transform says that if the computation succeeded, convert that success to failure if the result of the close, ok, is a failure; and on a failed computation, keep that failure, though some people prefer to add up their failures.
Don't you know try/catch is so 1990s. </droll> (droll does not mean troll.)
Or just:
val is = new FileInputStream(file)
val result = try {
// do stuff
} finally {
is.close()
}
Because there's no way is can be null.
In your question code, just move val out outside the try block. That way it is essentially what Java's try-with-resources does with an AutoCloseable, except for the null handling; in Scala you don't have to deal with nullables.
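A sketch of that rearrangement applied to the write() from the question (reusing its fileName):
def write(): Unit = {
  val out = new PrintWriter(new BufferedWriter(new FileWriter(fileName, true)))
  try {
    out.println("123")
  } finally {
    out.close() // visible here now, and no var needed
  }
}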