Finally closing stream using Scala exception catching

Does anybody know a solution to this problem? I rewrote the try-catch-finally construct in a functional way, but I can't close the stream now :-)
import scala.util.control.Exception._
def gunzip() = {
logger.info(s"Gunziping file ${f.getAbsolutePath}")
catching(classOf[IOException], classOf[FileNotFoundException]).
andFinally(println("how can I close the stream ?")).
either ({
val is = new GZIPInputStream(new FileInputStream(f))
Stream.continually(is.read()).takeWhile(-1 !=).map(_.toByte).toArray
}) match {
case Left(e) =>
val msg = s"IO error reading file ${f.getAbsolutePath} ! on host ${Setup.smtpHost}"
logger.error(msg, e)
MailClient.send(msg, msg)
new Array[Byte](0)
case Right(v) => v
}
}
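For reference, the try/catch/finally construct being replaced looks roughly like this (a sketch reconstructed from the code above, not the original):
// Imperative baseline: the finally block can see `is`, which is exactly
// what the catching(...).andFinally(...) version struggles with.
def gunzipImperative(): Array[Byte] = {
  val is = new GZIPInputStream(new FileInputStream(f))
  try {
    Stream.continually(is.read()).takeWhile(-1 != _).map(_.toByte).toArray
  } catch {
    case e @ (_: IOException | _: FileNotFoundException) =>
      logger.error(s"IO error reading file ${f.getAbsolutePath}", e)
      new Array[Byte](0)
  } finally {
    is.close()
  }
}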
I rewrote it based on Senia's solution like this:
def gunzip() = {
logger.info(s"Gunziping file ${file.getAbsolutePath}")
def closeAfterReading(c: InputStream)(f: InputStream => Array[Byte]) = {
catching(classOf[IOException], classOf[FileNotFoundException])
.andFinally(c.close())
.either(f(c)) match {
case Left(e) => {
val msg = s"IO error reading file ${file.getAbsolutePath} ! on host ${Setup.smtpHost}"
logger.error(msg, e)
new Array[Byte](0)
}
case Right(v) => v
}
}
closeAfterReading(new GZIPInputStream(new FileInputStream(file))) { is =>
Stream.continually(is.read()).takeWhile(-1 !=).map(_.toByte).toArray
}
}

I prefer this construction for such cases:
def withCloseable[T <: Closeable, R](t: T)(f: T => R): R = {
allCatch.andFinally{t.close} apply { f(t) }
}
def read(f: File) =
withCloseable(new GZIPInputStream(new FileInputStream(f))) { is =>
Stream.continually(is.read()).takeWhile(-1 !=).map(_.toByte).toArray
}
Now you could wrap it with Try and recover on some exceptions:
val result =
Try { read(f) }.recover{
case e: IOException => recover(e) // logging, default value
case e: FileNotFoundException => recover(e)
}
val array = result.get // Exception here!
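For completeness, here is a minimal sketch of the recover helper this snippet assumes (the name and behavior are hypothetical, not from the original):
import java.io.IOException
import scala.util.Try

// Hypothetical recover helper: log and fall back to an empty array.
def recover(e: Throwable): Array[Byte] = {
  println(s"recovering from $e") // stand-in for real logging
  Array.empty[Byte]
}

val safe: Array[Byte] =
  Try(read(f)).recover {
    // FileNotFoundException is a subclass of IOException, so one case suffices
    case e: IOException => recover(e)
  }.get // still throws for exception types not recovered above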

take "scala-arm"
take Apache "commons-io"
then do the following
val result =
for {fis <- resource.managed(new FileInputStream(f))
gis <- resource.managed(new GZIPInputStream(fis))}
yield IOUtils.toString(gis, "UTF-8")
result.acquireFor(identity) fold (reportExceptions _, v => v)

One way to handle it would be to use a mutable buffer of things that are opened and need to be closed later:
val cs: Buffer[Closeable] = new ArrayBuffer();
def addClose[C <: Closeable](c: C) = { cs += c; c; }
catching(classOf[IOException], classOf[FileNotFoundException]).
andFinally({ cs.foreach(_.close()) }).
either ({
val is = addClose(new GZIPInputStream(new FileInputStream(f)))
Stream.continually(is.read()).takeWhile(-1 !=).map(_.toByte).toArray
}) // ...
Update: You could use the scala-conduit library (I'm the author) for this purpose. (The library is currently not considered production ready.) The main aim of pipes (AKA conduits) is to construct composable components with well-defined resource handling. Each pipe repeatedly receives input and produces output. Optionally, it also produces a final result when it finishes. Pipes have finalizers that are run after a pipe finishes - either on its own or when its downstream pipe finishes. Your example could be reworked (using Java NIO) as follows:
/**
* Filters buffers until a given character is found. The last buffer
* (truncated up to the character) is also included.
*/
def untilPipe(c: Byte): Pipe[ByteBuffer,ByteBuffer,Unit] = ...
// Create a new source that chunks a file as ByteBuffer's.
// (Note that the buffer changes on every step.)
val source: Source[ByteBuffer,Unit] = ...
// Sink that prints bytes to the standard output.
// You would create your own sink doing whatever you want.
val sink: Sink[ByteBuffer,Unit]
= NIO.writeChannel(Channels.newChannel(System.out));
runPipe(source >-> untilPipe(-1) >-> sink);
As soon as untilPipe(-1) finds -1 and finishes, its upstream source pipe's finalizer is run and the input is closed. If an exception occurs anywhere in the pipeline, the input is closed as well.
The full example can be found here.

I have another proposition for cases when a closeable object like java.io.Socket may run for a long time, so one has to wrap it in a Future. This is handy when you also control a timeout, for when the Socket is not responding.
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.language.reflectiveCalls // needed for the structural Closeable type below

object CloseableFuture {
type Closeable = {
def close(): Unit
}
private def withClose[T, F1 <: Closeable](f: => F1, andThen: F1 => Future[T]): Future[T] = Future(f).flatMap(closeable => {
val internal = andThen(closeable)
internal.onComplete(_ => closeable.close())
internal
})
def apply[T, F1 <: Closeable](f: => F1, andThen: F1 => T): Future[T] =
withClose(f, { c: F1 => Future(andThen(c)) })
def apply[T, F1 <: Closeable, F2 <: Closeable](f1: => F1, thenF2: F1 => F2, andThen: (F1,F2) => T): Future [T] =
withClose(f1, {c1:F1 => CloseableFuture(thenF2(c1), {c2:F2 => andThen(c1,c2)})})
}
After I open a java.io.Socket and a java.io.InputStream for it, and then execute code which reads from the whois server, both of them finally get closed. Full code:
CloseableFuture(
{new Socket(server.address, WhoisPort)},
(s: Socket) => s.getInputStream,
(socket: Socket, inputStream: InputStream) => {
val streamReader = new InputStreamReader(inputStream)
val bufferReader = new BufferedReader(streamReader)
val outputStream = socket.getOutputStream
val writer = new OutputStreamWriter(outputStream)
val bufferWriter = new BufferedWriter(writer)
bufferWriter.write(urlToAsk+System.getProperty("line.separator"))
bufferWriter.flush()
def readBuffer(acc: List[String]): List[String] = bufferReader.readLine() match {
case null => acc
case str => {
readBuffer(str :: acc)
}
}
val result = readBuffer(Nil).reverse.mkString("\r\n")
WhoisResult(urlToAsk, result)
}
)

Related

How to perform a try and catch in ZIO?

These are my first steps with ZIO; I am trying to convert this readfile function into a compatible ZIO version.
The below snippet compiles, but I am not closing the source in the ZIO version. How do I do that?
def run(args: List[String]) =
myAppLogic.exitCode
val myAppLogic =
for {
_ <- readFileZio("C:\\path\\to\file.csv")
_ <- putStrLn("Hello! What is your name?")
name <- getStrLn
_ <- putStrLn(s"Hello, ${name}, welcome to ZIO!")
} yield ()
def readfile(file: String): String = {
val source = scala.io.Source.fromFile(file)
try source.getLines.mkString finally source.close()
}
def readFileZio(file: String): zio.Task[String] = {
val source = scala.io.Source.fromFile(file)
ZIO.fromTry[String]{
Try{source.getLines.mkString}
}
}
The simplest solution for your problem would be using the bracket function, which in essence has a similar purpose to a try-finally block. It takes as its first argument an effect that closes the resource (in your case the Source) and as its second an effect that uses it.
So you could rewrite readFileZio like:
def readFileZio(file: String): Task[Iterator[String]] =
ZIO(Source.fromFile(file))
.bracket(
s => URIO(s.close),
s => ZIO(s.getLines())
)
Another option is to use ZManaged, which is a data type that encapsulates the operation of opening and closing a resource:
def managedSource(file: String): ZManaged[Any, Throwable, BufferedSource] =
Managed.make(ZIO(Source.fromFile(file)))(s => URIO(s.close))
And then you could use it like this:
def readFileZioManaged(file: String): Task[Iterator[String]] =
managedSource(file).use(s => ZIO(s.getLines()))
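One caveat worth noting (not from the original answer): Source.getLines() returns a lazy iterator, so consuming it after bracket or use has released the Source will fail. A sketch that forces the lines while the resource is still open:
// Materialize the lines before the Source is closed; assumes the
// managedSource helper defined above.
def readFileLines(file: String): Task[List[String]] =
  managedSource(file).use(s => ZIO(s.getLines().toList))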

Akka-stream group only Right elements of Either

I have a source which emits Either[String, MyClass].
I want to call an external service with batches of MyClass and continue downstream with Either[String, ExternalServiceResponse]; that's why I need to group the elements of the stream.
If the stream emitted only MyClass elements, it would be easy - just call grouped:
val source: Source[MyClass, NotUsed] = <custom implementation>
source
.grouped(10) // Seq[MyClass]
.map(callExternalService(_)) // ExternalServiceResponse
But how to group only elements on the right side of Either in my scenario?
val source: Source[Either[String, MyClass], NotUsed] = <custom implementation>
source
.??? // Either[String, Seq[MyClass]]
.map {
case Right(myClasses) => callExternalService(myClasses)
case Left(string) => Left(string)
} // Either[String, ExternalServiceResponse]
The following works, but is there any more idiomatic way?
val source: Source[Either[String, MyClass], NotUsed] = <custom implementation>
source
.groupBy(2, either => either.isRight)
.grouped(10)
.map(input => input.headOption match {
case Some(Right(_)) =>
callExternalService(input.map(item => item.right.get))
case _ =>
input
})
.mapConcat(_.to[scala.collection.immutable.Iterable])
.mergeSubstreams
This should transform a source of Either[L, R] into a source of Either[L, Seq[R]] with a configurable grouping of Rights.
def groupRights[L, R](groupSize: Int)(in: Source[Either[L, R], NotUsed]): Source[Either[L, Seq[R]], NotUsed] =
in.map(Option(_)) // Yep, an Option[Either[L, R]]
.concat(Source.single(None)) // to emit when `in` completes
.statefulMapConcat { () =>
val buffer = new scala.collection.mutable.ArrayBuffer[R](groupSize)
def dumpBuffer(): List[Either[L, Seq[R]]] = {
val out = List(Right(buffer.toList))
buffer.clear()
out
}
incoming: Option[Either[L,R]] => {
incoming.map { _.fold(
l => List(Left(l)), // unfortunate that we have to re-wrap
r => {
buffer += r
if (buffer.size == groupSize) {
dumpBuffer()
} else {
Nil
}
}
)
}.getOrElse(dumpBuffer()) // End of stream
}
}
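A usage sketch (hypothetical wiring, reusing source and callExternalService from the question):
// Batch the Rights in groups of 10 and call the service per batch;
// Either.map (Scala 2.12+) transforms only the Right side.
val responses: Source[Either[String, ExternalServiceResponse], NotUsed] =
  groupRights(10)(source).map(_.map(callExternalService))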
Beyond that, I'll note that the downstream code to call the external service could be rewritten as
.map(_.right.map(callExternalService))
If you can reliably call the external service with parallelism n, it may also be worth doing that with:
.mapAsync(n) { e =>
e.fold(
l => Future.successful(Left(l)),
r => Future { Right(callExternalService(r)) }
)
}
You could even, if wanting to maximize throughput at the cost of preserving order, replace mapAsync with mapAsyncUnordered.
You could divide your source of Eithers into two branches in order to process the Rights their own way and then merge the two sub-flows back together:
// case class MyClass(x: Int)
// case class ExternalServiceResponse(xs: Seq[MyClass])
// def callExternalService(xs: Seq[MyClass]): ExternalServiceResponse =
// ExternalServiceResponse(xs)
// val source: Source[Either[String, MyClass], _] =
// Source(List(Right(MyClass(1)), Left("2"), Right(MyClass(3)), Left("4"), Right(MyClass(5))))
val lefts: Source[Either[String, Nothing], _] =
source
.collect { case Left(l) => Left(l) }
val rights: Source[Either[Nothing, ExternalServiceResponse], _] =
source
.collect { case Right(x: MyClass) => x }
.grouped(2)
.map(callExternalService)
.map(Right(_))
val out: Source[Either[String, ExternalServiceResponse], _] = rights.merge(lefts)
// out.runForeach(println)
// Left(2)
// Right(ExternalServiceResponse(Vector(MyClass(1), MyClass(3))))
// Left(4)
// Right(ExternalServiceResponse(Vector(MyClass(5))))

FS2 Stream with StateT[IO, _, _], periodically dumping state

I have a program which consumes an infinite stream of data. Along the way I'd like to record some metrics, which form a monoid since they're just simple sums and averages. Periodically, I want to write out these metrics somewhere, clear them, and return to accumulating them. I have essentially:
object Foo {
type MetricsIO[A] = StateT[IO, MetricData, A]
def recordMetric(m: MetricData): MetricsIO[Unit] = {
StateT.modify(_.combine(m))
}
def sendMetrics: MetricsIO[Unit] = {
StateT.modifyF { s =>
val write: IO[Unit] = writeMetrics(s)
write.attempt.map {
case Left(_) => s
case Right(_) => Monoid[MetricData].empty
}
}
}
}
So most of the execution uses IO directly and lifts using StateT.liftF. And in certain situations, I include some calls to recordMetric. At the end of it I've got a stream:
val mainStream: Stream[MetricsIO, Bar] = ...
And I want to periodically, say every minute or so, dump the metrics, so I tried:
val scheduler: Scheduler = ...
val sendStream =
scheduler
.awakeEvery[MetricsIO](FiniteDuration(1, TimeUnit.MINUTES))
.evalMap(_ => Foo.sendMetrics)
val result = mainStream.concurrently(sendStream).compile.drain
And then I do the usual top level program stuff of calling run with the start state and then calling unsafeRunSync.
The issue is, I only ever see empty metrics! I suspect it's something to do with my monoid implicitly providing empty metrics to sendStream, but I can't quite figure out why that should be or how to fix it. Maybe there's a way I can "interleave" these sendMetrics calls into the main stream instead?
Edit: here's a minimal complete runnable example:
import fs2._
import cats.implicits._
import cats.data._
import cats.effect._
import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext
import scala.concurrent.duration._
val sec = Executors.newScheduledThreadPool(4)
implicit val ec = ExecutionContext.fromExecutorService(sec)
type F[A] = StateT[IO, List[String], A]
val slowInts = Stream.unfoldEval[F, Int, Int](1) { n =>
StateT(state => IO {
Thread.sleep(500)
val message = s"hello $n"
val newState = message :: state
val result = Some((n, n + 1))
(newState, result)
})
}
val ticks = Scheduler.fromScheduledExecutorService(sec).fixedDelay[F](FiniteDuration(1, SECONDS))
val slowIntsPeriodicallyClearedState = slowInts.either(ticks).evalMap[Int] {
case Left(n) => StateT.liftF(IO(n))
case Right(_) => StateT(state => IO {
println(state)
(List.empty, -1)
})
}
Now if I do:
slowInts.take(10).compile.drain.run(List.empty).unsafeRunSync
Then I get the expected result - the state properly accumulates into the output. But if I do:
slowIntsPeriodicallyClearedState.take(10).compile.drain.run(List.empty).unsafeRunSync
Then I see an empty list consistently printed out. I would have expected partial lists (approx. 2 elements) printed out.
StateT is not safe to use with effect types, because it's not safe in the face of concurrent access. Instead, consider using a Ref (from either fs2 or cats-effect, depending on the version).
Something like this:
def slowInts(ref: Ref[IO, List[String]]) = Stream.unfoldEval[IO, Int, Int](1) { n =>
val message = s"hello $n"
ref.modify(message :: _) *> IO {
Thread.sleep(500)
val result: Option[(Int, Int)] = Some((n, n + 1))
result
}
}
val ticks = Scheduler.fromScheduledExecutorService(sec).fixedDelay[IO](FiniteDuration(1, SECONDS))
def slowIntsPeriodicallyClearedState(ref: Ref[IO, List[String]]) =
slowInts(ref).either(ticks).evalMap[Int] {
case Left(n) => IO.pure(n)
case Right(_) =>
ref.modify(_ => Nil).flatMap { case Change(previous, _) =>
IO(println(previous)).as(-1)
}
}
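To run it, the Ref has to be created first; a minimal sketch, assuming fs2 0.10's async.refOf:
// Hypothetical wiring: create the Ref, then drain the stream.
val program: IO[Unit] = for {
  ref <- fs2.async.refOf[IO, List[String]](List.empty)
  _   <- slowIntsPeriodicallyClearedState(ref).take(10).compile.drain
} yield ()
program.unsafeRunSync()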

Iterate data source asynchronously in batch and stop while remote return no data in Scala

Let's say we have a fake data source which will return data it holds in batch
class DataSource(size: Int) {
private var s = 0
implicit val g = scala.concurrent.ExecutionContext.global
def getData(): Future[List[Int]] = {
s = s + 1
Future {
Thread.sleep(Random.nextInt(s * 100))
if (s <= size) {
List.fill(100)(s)
} else {
List()
}
}
}
}
object Test extends App {
val source = new DataSource(100)
implicit val g = scala.concurrent.ExecutionContext.global
def process(v: List[Int]): Unit = {
println(v)
}
def next(f: (List[Int]) => Unit): Unit = {
val fut = source.getData()
fut.onComplete {
case Success(v) => {
f(v)
v match {
case h :: t => next(f)
}
}
}
}
next(process)
Thread.sleep(1000000000)
}
I have my version, but the problem here is that some portion of it is not pure. Ideally, I would like to wrap the Future for each batch into one big future, with the wrapper future succeeding when the last batch returns a zero-size list. My situation is a little different from this post: the next() there is a synchronous call, while mine is async.
Or is it even possible to do what I want? The next batch should only be fetched once the previous one has resolved, and whether to fetch another batch depends on the size returned.
What's the best way to walk through this type of data source? Are there any existing Scala frameworks that provide the feature I am looking for? Are Play's Iteratee, Enumerator, and Enumeratee the right tools? If so, can anyone provide an example of how to use those facilities to implement what I am looking for?
Edit----
With help from chunjef, I tried it out, and it actually worked for me. However, I made a small change based on his answer.
Source.fromIterator(() => Iterator.continually(source.getData())).mapAsync(1)(f => f.filter(_.size > 0))
.via(Flow[List[Int]].takeWhile(_.nonEmpty))
.runForeach(println)
However, can someone give a comparison between Akka Streams and Play's Iteratee? Is it worth me also trying out Iteratee?
Code snip 1:
Source.fromIterator(() => Iterator.continually(ds.getData)) // line 1
.mapAsync(1)(identity) // line 2
.takeWhile(_.nonEmpty) // line 3
.runForeach(println) // line 4
Code snip 2: Assume getData depends on some output of another flow, and I would like to concat it with the flow below. However, it yields a 'too many open files' error. I'm not sure what causes it; the mapAsync has been limited to a throughput of 1, if I understood correctly.
Flow[Int].mapConcat[Future[List[Int]]](c => {
Iterator.continually(ds.getData(c)).to[collection.immutable.Iterable]
}).mapAsync(1)(identity).takeWhile(_.nonEmpty).runForeach(println)
The following is one way to achieve the same behavior with Akka Streams, using your DataSource class:
import scala.concurrent.Future
import scala.util.Random
import akka.actor.ActorSystem
import akka.stream._
import akka.stream.scaladsl._
object StreamsExample extends App {
implicit val system = ActorSystem("Sandbox")
implicit val materializer = ActorMaterializer()
val ds = new DataSource(100)
Source.fromIterator(() => Iterator.continually(ds.getData)) // line 1
.mapAsync(1)(identity) // line 2
.takeWhile(_.nonEmpty) // line 3
.runForeach(println) // line 4
}
class DataSource(size: Int) {
...
}
A simplified line-by-line overview:
line 1: Creates a stream source that continually calls ds.getData if there is downstream demand.
line 2: mapAsync is a way to deal with stream elements that are Futures. In this case, the stream elements are of type Future[List[Int]]. The argument 1 is the level of parallelism: we specify 1 here because DataSource internally uses a mutable variable, and a parallelism level greater than one could produce unexpected results. identity is shorthand for x => x, which basically means that for each Future, we pass its result downstream without transforming it.
line 3: Essentially, ds.getData is called as long as the result of the Future is a non-empty List[Int]. If an empty List is encountered, processing is terminated.
line 4: runForeach here takes a function List[Int] => Unit and invokes that function for each stream element.
Ideally, I would like to wrap the Future for each batch into a big future, and the wrapper future success when last batch returned 0 size list?
I think you are looking for a Promise.
You would set up a Promise before you start the first iteration.
This gives you promise.future, a Future that you can then use to follow the completion of everything.
In your onComplete, you add a case _ => promise.success().
Something like
def loopUntilDone(f: (List[Int]) => Unit): Future[Unit] = {
val promise = Promise[Unit]
def next(): Unit = source.getData().onComplete {
case Success(v) =>
f(v)
v match {
case h :: t => next()
case _ => promise.success(())
}
case Failure(e) => promise.failure(e)
}
// get going
next()
// return the Future for everything
promise.future
}
// future for everything, this is a `Future[Unit]`
// its `onComplete` will be triggered when there is no more data
val everything = loopUntilDone(process)
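A small usage sketch of the returned future (hypothetical, matching the snippet above):
import scala.util.{Failure, Success}

// React once the looping future settles.
everything.onComplete {
  case Success(_) => println("all batches processed")
  case Failure(e) => println(s"processing failed: $e")
}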
You are probably looking for a reactive streams library. My personal favorite (and one I'm most familiar with) is Monix. This is how it will work with DataSource unchanged
import scala.concurrent.duration.Duration
import scala.concurrent.Await
import monix.reactive.Observable
import monix.execution.Scheduler.Implicits.global
object Test extends App {
val source = new DataSource(100)
val completed = // <- this is Future[Unit], completes when foreach is done
Observable.repeat(()) // a dummy tick per iteration
.flatMap(_ => Observable.fromFuture(source.getData())) // <- re-evaluates getData per element; repeating a single fromFuture would replay the same completed Future forever
.takeWhile(_.nonEmpty)
.foreach(println)
Await.result(completed, Duration.Inf)
}
I just figured out that using flatMapConcat can achieve what I wanted. There is no point in starting another question, as I already have the answer. I put my sample code here just in case someone is looking for a similar answer.
This type of API is very common for integration between traditional enterprise applications. The DataSource mocks the API, while the object Test demonstrates how client code can utilize Akka Streams to consume it.
In my small project the API was provided over SOAP, and I used scalaxb to transform the SOAP calls to Scala async style. With the client calls demonstrated in the object Test, we can consume the API with Akka Streams. Thanks to all for the help.
class DataSource(size: Int) {
private var transactionId: Long = 0
private val transactionCursorMap: mutable.HashMap[TransactionId, Set[ReadCursorId]] = mutable.HashMap.empty
private val cursorIteratorMap: mutable.HashMap[ReadCursorId, Iterator[List[Int]]] = mutable.HashMap.empty
implicit val g = scala.concurrent.ExecutionContext.global
case class TransactionId(id: Long)
case class ReadCursorId(id: Long)
def startTransaction(): Future[TransactionId] = {
Future {
synchronized {
transactionId += 1
}
val t = TransactionId(transactionId)
transactionCursorMap.update(t, Set(ReadCursorId(0)))
t
}
}
def createCursorId(t: TransactionId): ReadCursorId = {
synchronized {
val c = transactionCursorMap.getOrElseUpdate(t, Set(ReadCursorId(0)))
val currentId = c.foldLeft(0L) { (acc, a) => acc.max(a.id) }
val cId = ReadCursorId(currentId + 1)
transactionCursorMap.update(t, c + cId)
cursorIteratorMap.put(cId, createIterator)
cId
}
}
def createIterator(): Iterator[List[Int]] = {
(for {i <- 1 to 100} yield List.fill(100)(i)).toIterator
}
def startRead(t: TransactionId): Future[ReadCursorId] = {
Future {
createCursorId(t)
}
}
def getData(cursorId: ReadCursorId): Future[List[Int]] = {
synchronized {
Future {
Thread.sleep(Random.nextInt(100))
cursorIteratorMap.get(cursorId) match {
case Some(i) => i.next()
case _ => List()
}
}
}
}
}
object Test extends App {
val source = new DataSource(10)
implicit val system = ActorSystem("Sandbox")
implicit val materializer = ActorMaterializer()
implicit val g = scala.concurrent.ExecutionContext.global
val s = Source.fromFuture(source.startTransaction())
.map { e =>
source.startRead(e)
}
.mapAsync(1)(identity)
.flatMapConcat(
e => {
Source.fromIterator(() => Iterator.continually(source.getData(e)))
})
.mapAsync(5)(identity)
.via(Flow[List[Int]].takeWhile(_.nonEmpty))
.runForeach(println)
/*
val done = Source.fromIterator(() => Iterator.continually(source.getData())).mapAsync(1)(identity)
.via(Flow[List[Int]].takeWhile(_.nonEmpty))
.runFold(List[List[Int]]()) { (acc, r) =>
// println("=======" + acc + r)
r :: acc
}
done.onSuccess {
case e => {
e.foreach(println)
}
}
done.onComplete(_ => system.terminate())
*/
}

What Automatic Resource Management alternatives exist for Scala?

I have seen many examples of ARM (automatic resource management) on the web for Scala. It seems to be a rite-of-passage to write one, though most look pretty much like one another. I did see a pretty cool example using continuations, though.
At any rate, a lot of that code has flaws of one type or another, so I figured it would be a good idea to have a reference here on Stack Overflow, where we can vote up the most correct and appropriate versions.
Chris Hansen's blog entry 'ARM Blocks in Scala: Revisited' from 3/26/09 talks about slide 21 of Martin Odersky's FOSDEM presentation. This next block is taken straight from slide 21 (with permission):
def using[T <: { def close() }]
(resource: T)
(block: T => Unit)
{
try {
block(resource)
} finally {
if (resource != null) resource.close()
}
}
--end quote--
Then we can call like this:
using(new BufferedReader(new FileReader("file"))) { r =>
var count = 0
while (r.readLine != null) count += 1
println(count)
}
What are the drawbacks of this approach? That pattern would seem to address 95% of where I would need automatic resource management...
Edit: added code snippet
Edit2: extending the design pattern - taking inspiration from Python's with statement and addressing:
statements to run before the block
re-throwing exception depending on the managed resource
handling two resources with one single using statement
resource-specific handling by providing an implicit conversion and a Managed class
This is with Scala 2.8.
trait Managed[T] {
def onEnter(): T
def onExit(t:Throwable = null): Unit
def attempt(block: => Unit): Unit = {
try { block } finally {}
}
}
def using[T <: Any](managed: Managed[T])(block: T => Unit) {
val resource = managed.onEnter()
var exception = false
try { block(resource) } catch {
case t:Throwable => exception = true; managed.onExit(t)
} finally {
if (!exception) managed.onExit()
}
}
def using[T <: Any, U <: Any]
(managed1: Managed[T], managed2: Managed[U])
(block: T => U => Unit) {
using[T](managed1) { r =>
using[U](managed2) { s => block(r)(s) }
}
}
class ManagedOS(out:OutputStream) extends Managed[OutputStream] {
def onEnter(): OutputStream = out
def onExit(t:Throwable = null): Unit = {
attempt(out.close())
if (t != null) throw t
}
}
class ManagedIS(in:InputStream) extends Managed[InputStream] {
def onEnter(): InputStream = in
def onExit(t:Throwable = null): Unit = {
attempt(in.close())
if (t != null) throw t
}
}
implicit def os2managed(out:OutputStream): Managed[OutputStream] = {
return new ManagedOS(out)
}
implicit def is2managed(in:InputStream): Managed[InputStream] = {
return new ManagedIS(in)
}
def main(args:Array[String]): Unit = {
using(new FileInputStream("foo.txt"), new FileOutputStream("bar.txt")) {
in => out =>
Iterator continually { in.read() } takeWhile( _ != -1) foreach {
out.write(_)
}
}
}
Daniel,
I've just recently deployed the scala-arm library for automatic resource management. You can find the documentation here: https://github.com/jsuereth/scala-arm/wiki
This library supports three styles of usage (currently):
1) Imperative/for-expression:
import resource._
for(input <- managed(new FileInputStream("test.txt"))) {
// Code that uses the input as a FileInputStream
}
2) Monadic-style
import resource._
import java.io._
val lines = for { input <- managed(new FileInputStream("test.txt"))
bufferedReader = new BufferedReader(new InputStreamReader(input))
line <- makeBufferedReaderLineIterator(bufferedReader)
} yield line.trim()
lines foreach println
3) Delimited Continuations-style
Here's an "echo" tcp server:
import java.io._
import util.continuations._
import resource._
def each_line_from(r : BufferedReader) : String #suspendable =
shift { k =>
var line = r.readLine
while(line != null) {
k(line)
line = r.readLine
}
}
reset {
val server = managed(new ServerSocket(8007)) !
while(true) {
// This reset is not needed, however the below denotes a "flow" of execution that can be deferred.
// One can envision an asynchronous execution model that would support the exact same semantics as below.
reset {
val connection = managed(server.accept) !
val output = managed(connection.getOutputStream) !
val input = managed(connection.getInputStream) !
val writer = new PrintWriter(new BufferedWriter(new OutputStreamWriter(output)))
val reader = new BufferedReader(new InputStreamReader(input))
writer.println(each_line_from(reader))
writer.flush()
}
}
}
The code makes use of a Resource type-trait, so it's able to adapt to most resource types. It has a fallback to use structural typing against classes with either a close or dispose method. Please check out the documentation and let me know if you think of any handy features to add.
Here's James Iry's solution using continuations:
// standard using block definition
def using[X <: {def close()}, A](resource : X)(f : X => A) = {
try {
f(resource)
} finally {
resource.close()
}
}
// A DC version of 'using'
def resource[X <: {def close()}, B](res : X) = shift(using[X, B](res))
// some sugar for reset
def withResources[A, C](x : => A #cps[A, C]) = reset{x}
Here are the solutions with and without continuations for comparison:
def copyFileCPS = using(new BufferedReader(new FileReader("test.txt"))) {
reader => {
using(new BufferedWriter(new FileWriter("test_copy.txt"))) {
writer => {
var line = reader.readLine
var count = 0
while (line != null) {
count += 1
writer.write(line)
writer.newLine
line = reader.readLine
}
count
}
}
}
}
def copyFileDC = withResources {
val reader = resource[BufferedReader,Int](new BufferedReader(new FileReader("test.txt")))
val writer = resource[BufferedWriter,Int](new BufferedWriter(new FileWriter("test_copy.txt")))
var line = reader.readLine
var count = 0
while(line != null) {
count += 1
writer write line
writer.newLine
line = reader.readLine
}
count
}
And here's Tiark Rompf's suggested improvement:
trait ContextType[B]
def forceContextType[B]: ContextType[B] = null
// A DC version of 'using'
def resource[X <: {def close()}, B: ContextType](res : X): X #cps[B,B] = shift(using[X, B](res))
// some sugar for reset
def withResources[A](x : => A #cps[A, A]) = reset{x}
// and now use our new lib
def copyFileDC = withResources {
implicit val _ = forceContextType[Int]
val reader = resource(new BufferedReader(new FileReader("test.txt")))
val writer = resource(new BufferedWriter(new FileWriter("test_copy.txt")))
var line = reader.readLine
var count = 0
while(line != null) {
count += 1
writer write line
writer.newLine
line = reader.readLine
}
count
}
As of Scala 2.13, try-with-resources is finally supported, using Using :). Example:
val lines: Try[Seq[String]] =
Using(new BufferedReader(new FileReader("file.txt"))) { reader =>
Iterator.unfold(())(_ => Option(reader.readLine()).map(_ -> ())).toList
}
or use Using.resource to avoid the Try:
val lines: Seq[String] =
Using.resource(new BufferedReader(new FileReader("file.txt"))) { reader =>
Iterator.unfold(())(_ => Option(reader.readLine()).map(_ -> ())).toList
}
You can find more examples in the Using doc.
A utility for performing automatic resource management. It can be used to perform an operation using resources, after which it releases the resources in reverse order of their creation.
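Using can also manage several resources at once via Using.Manager, which releases them in reverse order of acquisition; a small sketch (file names are made up):
import java.io._
import scala.util.{Try, Using}

// Copies file.txt to copy.txt, counting lines; both resources are
// released in reverse order of creation, even on failure.
val copied: Try[Int] = Using.Manager { use =>
  val in  = use(new BufferedReader(new FileReader("file.txt")))
  val out = use(new BufferedWriter(new FileWriter("copy.txt")))
  var count = 0
  var line = in.readLine()
  while (line != null) {
    out.write(line); out.newLine()
    count += 1
    line = in.readLine()
  }
  count
}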
I see a gradual 4 step evolution for doing ARM in Scala:
No ARM: Dirt
Only closures: Better, but multiple nested blocks
Continuation Monad: Use For to flatten the nesting, but unnatural separation in 2 blocks
Direct-style continuations: Nirvana, aha! This is also the most type-safe alternative: a resource outside a withResource block will be a type error.
There is light-weight (10 lines of code) ARM included with better-files. See: https://github.com/pathikrit/better-files#lightweight-arm
import better.files._
for {
in <- inputStream.autoClosed
out <- outputStream.autoClosed
} in.pipeTo(out)
// The input and output streams are auto-closed once out of scope
Here is how it is implemented if you don't want the whole library:
type Closeable = {
def close(): Unit
}
type ManagedResource[A <: Closeable] = Traversable[A]
implicit class CloseableOps[A <: Closeable](resource: A) {
def autoClosed: ManagedResource[A] = new Traversable[A] {
override def foreach[U](f: A => U) = try {
f(resource)
} finally {
resource.close()
}
}
}
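A usage sketch of the implementation above (a hypothetical example): foreach runs the body once and then closes the resource in the finally block of the Traversable:
// Each traversal hands the open stream to the body, then closes it.
new java.io.FileInputStream("file.txt").autoClosed.foreach { in =>
  println(in.read())
}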
How about using type classes?
trait GenericDisposable[-T] {
def dispose(v:T):Unit
}
...
def using[T,U](r:T)(block:T => U)(implicit disp:GenericDisposable[T]):U = try {
block(r)
} finally {
Option(r).foreach { r => disp.dispose(r) }
}
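For instance, a hypothetical instance for JDK closeables; since GenericDisposable is contravariant in T, one instance covers every subtype:
// One instance for all java.io.Closeable subtypes, via -T contravariance.
implicit val jdkDisposable: GenericDisposable[java.io.Closeable] =
  new GenericDisposable[java.io.Closeable] {
    def dispose(v: java.io.Closeable): Unit = v.close()
  }

using(new java.io.FileInputStream("file.txt")) { in => in.read() }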
Another alternative is Choppy's Lazy TryClose monad. It's pretty good with database connections:
val ds = new JdbcDataSource()
val output = for {
conn <- TryClose(ds.getConnection())
ps <- TryClose(conn.prepareStatement("select * from MyTable"))
rs <- TryClose.wrap(ps.executeQuery())
} yield wrap(extractResult(rs))
// Note that nothing will actually be done until 'resolve' is called
output.resolve match {
case Success(result) => // Do something
case Failure(e) => // Handle Stuff
}
And with streams:
val output = for {
outputStream <- TryClose(new ByteArrayOutputStream())
gzipOutputStream <- TryClose(new GZIPOutputStream(outputStream))
_ <- TryClose.wrap(gzipOutputStream.write(content))
} yield wrap({gzipOutputStream.flush(); outputStream.toByteArray})
output.resolve.unwrap match {
case Success(bytes) => // process result
case Failure(e) => // handle exception
}
More info here: https://github.com/choppythelumberjack/tryclose
Here is #chengpohi's answer, modified so it works with Scala 2.8+, instead of just Scala 2.13 (yes, it works with Scala 2.13 also):
def unfold[A, S](start: S)(op: S => Option[(A, S)]): List[A] =
Iterator
.iterate(op(start))(_.flatMap{ case (_, s) => op(s) })
.map(_.map(_._1))
.takeWhile(_.isDefined)
.flatten
.toList
def using[A <: AutoCloseable, B](resource: A)
(block: A => B): B =
try block(resource) finally resource.close()
val lines: Seq[String] =
using(new BufferedReader(new FileReader("file.txt"))) { reader =>
unfold(())(_ => Option(reader.readLine()).map(_ -> ()))
}
While Using is OK, I prefer the monadic style of resource composition. Twitter Util's Managed is pretty nice, except for its dependencies and its not-very-polished API.
To that end, I've published https://github.com/dvgica/managerial for Scala 2.12, 2.13, and 3.0.0. Largely based on the Twitter Util Managed code, no dependencies, with some API improvements inspired by cats-effect Resource.
The simple example:
import ca.dvgi.managerial._
val fileContents = Managed.from(scala.io.Source.fromFile("file.txt")).use(_.mkString)
But the real strength of the library is composing resources via for comprehensions.
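For example, a sketch of composing two resources (hypothetical, assuming Managed.from accepts other closeables the same way it accepts Source above):
import ca.dvgi.managerial._

// Managed values compose in for-comprehensions; resources are
// released in reverse order of acquisition when `use` completes.
val combined = for {
  in  <- Managed.from(scala.io.Source.fromFile("in.txt"))
  out <- Managed.from(new java.io.PrintWriter("out.txt"))
} yield (in, out)

combined.use { case (in, out) => in.getLines().foreach(out.println) }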
Let me know what you think!