Choosing between two Action.async blocks, why can't I specify Action.async in the caller? - scala

On the first line below, I want to know if there's anything I can or should put to the left of the = that will future-proof against bad editing. I had one Action.async, but now I want to have two choices (run a single MongoDB query or run multiple MongoDB queries) to arrive at the answer to the web query q. My code compiles as-is, but I think I should have to put something before the = to make the code more type-safe.
def q(arg: String) =
  if (wantMultipleQueriesByDataSource)
    runMultipleQueriesByDataSource(arg)
  else
    runSingleQuery(arg)
def runSingleQuery(arg: String) = Action.async {
  if (oDb.isDefined) {
    val collection: MongoCollection[Document] = oDb.get.getCollection(collectionName)
    val fut = getMapData(collection, arg)
    fut.map { docs: Seq[Document] =>
      val docsSources = List(DocsSource(docs, "*"))
      val pickedDocs = pickDocs(args, docsSources)
      Ok(buildQueryAnswer(pickedDocs))
    } recover {
      case e => BadRequest("FAIL(rect): " + e.getMessage)
    }
  }
  else Future.successful(Ok(buildQueryAnswer(Nil)))
}
def runMultipleQueriesByDataSource(arg: String) = Action.async {
  if (oDb.isDefined) {
    val collection: MongoCollection[Document] = oDb.get.getCollection(collectionName)
    val dataSources = List("apples", "bananas", "cherries")
    val futureCollections = dataSources map { getMapDataByDataSource(collection, _, arg) }
    Future.sequence(futureCollections) map {
      case docsGroupedByDataSource: Seq[Seq[Document]] =>
        val docsSources = (docsGroupedByDataSource zip dataSources) map { x => DocsSource(x._1, x._2) }
        val pickedDocs = pickDocs(args, docsSources)
        Ok(buildQueryAnswer(pickedDocs))
      case _ => Ok(buildQueryAnswer(Nil))
    } recover {
      case e => BadRequest("FAIL(rect): " + e.getMessage)
    }
  }
  else Future.successful(Ok(buildQueryAnswer(Nil)))
}

You don't need to because Scala has type inference in this case. But if you want, you can change the method signature to the following:
def q(arg: String): Action[AnyContent]
I'm assuming here that you are using the Play Framework.
Edit: In fact, in this case, Action.async returns an Action[AnyContent], and it receives a block that returns a Future[Result].
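For example, a minimal sketch of the annotated signatures, reusing the method names from the question (the helper body is stubbed out here):
// Annotating the return type future-proofs q: an edit that no longer
// produces an Action is rejected by the compiler instead of being silently inferred.
def q(arg: String): Action[AnyContent] =
  if (wantMultipleQueriesByDataSource)
    runMultipleQueriesByDataSource(arg)
  else
    runSingleQuery(arg)

// The helpers can carry the same annotation; with the default body parser,
// Action.async builds an Action[AnyContent] from a block returning Future[Result].
def runSingleQuery(arg: String): Action[AnyContent] = Action.async {
  Future.successful(Ok) // query logic elided; see the question
}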

Related

Scala Play: how to wait until future is complete before OK result is returned to frontend

In my Play Framework application I want to wait until my future is completed and then return it to the view.
my code looks like:
def getContentComponentUsageSearch: Action[AnyContent] = Action.async { implicit request =>
  println(request.body.asJson)
  request.body.asJson.map(_.validate[StepIds] match {
    case JsSuccess(stepIds, _) =>
      println("VALIDE SUCCESS -------------------------------")
      val fList: List[Seq[Future[ProcessTemplatesModel]]] = List() :+ stepIds.s.map(s => {
        processTemplateDTO.getProcessStepTemplate(s.processStep_id).flatMap(stepTemplate => {
          processTemplateDTO.getProcessTemplate(stepTemplate.get.processTemplate_id.get).map(a => {
            a.get
          })
        })
      })
      fList.map(u => {
        val a: Seq[Future[ProcessTemplatesModel]] = u
        Future.sequence(a).map(s => {
          println(s)
        })
      })
      Future.successful(Ok(Json.obj("id" -> "")))
    case JsError(_) =>
      println("NOT VALID -------------------------------")
      Future.successful(BadRequest("Process Template not create client"))
    case _ => Future.successful(BadRequest("Process Template create client"))
  }).getOrElse(Future.successful(BadRequest("Process Template create client")))
}
The println(s) is printing the finished stuff. But how can I wait until it is complete and then return it to the view?
thanks in advance
UPDATE:
also tried this:
val process = for {
  fList: List[Seq[Future[ProcessTemplatesModel]]] <- List() :+ stepIds.s.map(s => {
    processTemplateDTO.getProcessStepTemplate(s.processStep_id).flatMap(stepTemplate => {
      processTemplateDTO.getProcessTemplate(stepTemplate.get.processTemplate_id.get).map(a => {
        a.get
      })
    })
  })
} yield (fList)
process.map({ case (fList) =>
  Ok(Json.obj(
    "processTemplate" -> fList
  ))
})
but then I got this:
UPDATE:
My problem is that the futures in fList do not complete before an OK result is returned
The code in the question didn't seem compilable, so here is an untested, very rough sketch that hopefully provides enough inspiration for finding the correct solution:
def getContentComponentUsageSearch: Action[AnyContent] = Action.async { implicit req =>
  req.body.asJson.map(_.validate[StepIds] match {
    case JsSuccess(stepIds, _) => {
      // Create list of futures
      val listFuts: List[Future[ProcessTemplatesModel]] = (stepIds.s.map(s => {
        processTemplateDTO.
          getProcessStepTemplate(s.processStep_id).
          flatMap { stepTemplate =>
            processTemplateDTO.
              getProcessTemplate(stepTemplate.get.processTemplate_id.get).
              map(_.get)
          }
      })).toList
      // Sequence all the futures into a single future of list
      val futList = Future.sequence(listFuts)
      // Flat map this single future to the OK result
      for {
        listPTMs <- futList
      } yield {
        // Apparently some debug output?
        listPTMs foreach println
        Ok(Json.obj("id" -> ""))
      }
    }
    case JsError(_) => {
      println("NOT VALID -------------------------------")
      Future.successful(BadRequest("Process Template not create client"))
    }
    case _ => Future.successful(BadRequest("Process Template create client"))
  }).getOrElse(Future.successful(BadRequest("Process Template create client")))
}
If I understood your question correctly, what you wanted was to make sure that all futures in the list complete before you return the OK. Therefore I have first created a List[Future[...]]:
val listFuts: List[Future[ProcessTemplatesModel]] = // ...
Then I've combined all the futures into a single future of list, which completes only when every element has completed:
// Sequence all the futures into a single future of list
val futList = Future.sequence(listFuts)
Then I've used a for-comprehension to make sure that the listPTMs finishes computation before the OK is returned:
// Flat map this single future to the OK result
for {
  listPTMs <- futList
} yield {
  // Apparently some debug output?
  listPTMs foreach println
  Ok(Json.obj("id" -> ""))
}
The for-yield (equivalent to map here) is what establishes the finish-this-before-doing-that behavior, so that listPTMs is fully evaluated before OK is constructed.
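Spelled out with an explicit map instead of the for-yield, using the same names as above:
// Equivalent to the for-yield: the block runs only after futList completes.
futList.map { listPTMs =>
  listPTMs foreach println
  Ok(Json.obj("id" -> ""))
}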
In order to wait until a Future is complete, it is most common to do one of two things:
Use a for-comprehension, which does a bunch of mapping and flatmapping behind the scenes before doing anything in the yield section (see Andrey's comment for a more detailed explanation). A simplified example:
def index: Action[AnyContent] = Action.async {
  val future1 = Future(1)
  val future2 = Future(2)
  for {
    f1 <- future1
    f2 <- future2
  } yield {
    println(s"$f1 + $f2 = ${f1 + f2}") // prints 3
    Ok(views.html.index("Home"))
  }
}
Map inside a Future:
def index: Action[AnyContent] = Action.async {
  val future1 = Future(1)
  future1.map { f1 =>
    println(s"$f1")
    Ok(views.html.index("Home"))
  }
}
If there are multiple Futures:
def index: Action[AnyContent] = Action.async {
  val future1 = Future(1)
  val future2 = Future(2)
  future1.flatMap { f1 =>
    future2.map { f2 =>
      println(s"$f1 + $f2 = ${f1 + f2}")
      Ok(views.html.index("Home"))
    }
  }
}
When you have multiple Futures, though, the argument for for-yield comprehensions gets much stronger, as they are easier to read. Also, you are probably aware of this, but if you work with futures you may need the following imports:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
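As a side note, in recent Play versions the global context is usually replaced by an injected one; a purely illustrative sketch (the class and constructor here are not from the question):
import javax.inject.Inject
import scala.concurrent.ExecutionContext
import play.api.mvc._

// The injected ExecutionContext is picked up implicitly by map/flatMap on Futures.
class MyController @Inject()(cc: ControllerComponents)(implicit ec: ExecutionContext)
  extends AbstractController(cc)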

scala returns doesn't conform to required S_

I got the error
found : scala.concurrent.Future[Option[models.ProcessTemplatesModel]]
required: Option[models.ProcessTemplatesModel]
My function is below
def createCopyOfProcessTemplate(processTemplateId: Int): Future[Option[ProcessTemplatesModel]] = {
  val action = processTemplates.filter(_.id === processTemplateId).result.map(_.headOption)
  val result: Future[Option[ProcessTemplatesModel]] = db.run(action)
  result.map { case (result) =>
    result match {
      case Some(r) => {
        var copy = (processTemplates returning processTemplates.map(_.id)) += ProcessTemplatesModel(None, "[Copy of] " + r.title, r.version, r.createdat, r.updatedat, r.deadline, r.status, r.comment, Some(false), r.checkedat, Some(false), r.approvedat, false, r.approveprocess, r.trainingsprocess)
        val composedAction = copy.flatMap { id =>
          processTemplates.filter(_.id === id).result.headOption
        }
        db.run(composedAction)
      }
    }
  }
}
what is my problem in this case?
Edit:
My controller function looks like this:
def createCopyOfProcessTemplate(processTemplateId: Int) = Action.async {
  processTemplateDTO.createCopyOfProcessTemplate(processTemplateId).map { process =>
    Ok(Json.toJson(process))
  }
}
Is my mistake there?
According to your code, there are the following issues:
1) You use two db.run calls, which return futures, but the inner future is never waited on. To resolve this you should compose the futures with flatMap or a for-comprehension.
2) You use only one partial-function case, Some(_) =>, in your pattern matching and don't handle the other value, None.
3) You can use a single db.run with action composition.
Your code can look like this:
def createCopyOfProcessTemplate(processTemplateId: Int): Future[Option[ProcessTemplatesModel]] = {
  val action = processTemplates.filter(...).result.map(_.headOption)
  val composedAction = action.flatMap {
    case Some(r) =>
      val copyAction = (processTemplates returning processTemplates...)
      copyAction.flatMap { id =>
        processTemplates.filter(_.id === id).result.headOption
      }
    case _ =>
      DBIO.successful(None) // issue #2 has been resolved here
  }
  db.run(composedAction) // issue #3 has been resolved here
}
We get rid of issue #1 because we use action composition.
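On the controller side, a sketch of how the Future[Option[...]] from the fixed method could be turned into a response, assuming a JSON Writes for ProcessTemplatesModel is in scope (the NotFound message is illustrative):
def createCopyOfProcessTemplate(processTemplateId: Int) = Action.async {
  processTemplateDTO.createCopyOfProcessTemplate(processTemplateId).map {
    case Some(copy) => Ok(Json.toJson(copy))                   // copy created and re-read
    case None       => NotFound("Original template not found") // nothing matched the id
  }
}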

ReactiveMongo query returning None

I am new to Scala and the related technologies. I am running into a problem where loadUser should return a record but it comes back empty.
I am getting the following error:
java.util.NoSuchElementException: None.get
I appreciate this is not ideal Scala, so feel free to suggest improvements.
class MongoDataAccess extends Actor {
  val message = "Hello message"

  override def receive: Receive = {
    case data: Payload => {
      val user: Future[Option[User]] = MongoDataAccess.loadUser(data.deviceId)
      val twillioApiAccess = context.actorOf(Props[TwillioApiAccess], "TwillioApiAccess")
      user onComplete {
        case Failure(exception) => println(exception)
        case p: Try[Option[User]] => p match {
          case Failure(exception) => println(exception)
          case u: Try[Option[User]] => twillioApiAccess ! Action(data, u.get.get.phoneNumber, message)
        }
      }
    }
    case _ => println("received unknown message")
  }
}

object MongoDataAccess extends MongoDataApi {
  def connect(): Future[DefaultDB] = {
    // gets an instance of the driver
    val driver = new MongoDriver
    val connection = driver.connection(List("192.168.99.100:32768"))
    // Gets a reference to the database "sensor"
    connection.database("sensor")
  }

  def props = Props(new MongoDataAccess)

  def loadUser(deviceId: UUID): Future[Option[User]] = {
    println(s"Loading user from the database with device id: $deviceId")
    val query = BSONDocument("deviceId" -> deviceId.toString)
    // By default, you get a Future[BSONCollection].
    val collection: Future[BSONCollection] = connect().map(_.collection("profile"))
    collection flatMap { x => x.find(query).one[User] }
  }
}
Thanks
There is no guarantee that find-one (.one[T]) matches at least one document in your DB, so you get an Option[T].
Then it's up to you to decide whether finding no document should be considered a failure; e.g.
val u: Future[User] = x.find(query).one[User].flatMap[User] {
  case Some(matchingUser) => Future.successful(matchingUser)
  case _ => Future.failed(new MySemanticException("No matching user found"))
}
Using .get on Option is a bad idea anyway.
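A rough sketch of how the actor's onComplete could handle the Option without .get, reusing the names from the question (and assuming scala.util.Success is imported alongside Failure):
user onComplete {
  case Failure(exception) => println(exception)
  case Success(Some(u))   => twillioApiAccess ! Action(data, u.phoneNumber, message)
  case Success(None)      => println(s"No user found for device id: ${data.deviceId}")
}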

Scala: Cool way to manage sequential execution of Futures?

I'm trying to write a data module in Scala.
While loading all the data in parallel, some data depends on other data, so the execution sequence has to be managed in an efficient way.
For example, in the code I keep a map with the name of each piece of data and its manifest
val dataManifestMap = Map(
"foo" -> manifest[String],
"bar" -> manifest[Int],
"baz" -> manifest[Int],
"foobar" -> manifest[Set[String]], // need to be executed after "foo" and "bar" is ready
"foobarbaz" -> manifest[String], // need to be executed after "foobar" and "baz" is ready
)
These data will be stored in a mutable hash map
private var dataStorage = new mutable.HashMap[String, Future[Any]]()
There is some code that will load the data
def loadAllData(): Future[Unit] = {
Future.join(
(dataManifestMap map {
case (data, m) => loadData(data, m) } // function has all the string matching and loading stuff
).toSeq
)
}
def loadData[T](data: String, m: Manifest[T]): Future[Unit] = {
val d = data match {
case "foo" => Future.value("foo")
case "bar" => Future.value(3)
case "foobar" => // do something with dataStorage("foo") and dataStorage("bar")
... // and so forth (in a real example it would be much more complicated for sure)
}
d flatMap {
dVal => { this.synchronized { dataStorage(data) = dVal }; Future.value(Unit) }
}
}
This way, I cannot make sure "foobar" is loaded only after "foo" and "bar" are ready, and so forth.
How can I manage this in a "cool" way, since I might have hundreds of different pieces of data?
It would be "awesome" if I could have some kind of data structure that holds all the info about what has to be loaded after what, so that sequential execution can be handled by flatMap in a neat way.
Thanks for the help in advance.
All things being equal, I'd tend to use for comprehensions. For example:
def findBucket: Future[Bucket[Empty]] = ???
def fillBucket(bucket: Bucket[Empty]): Future[Bucket[Water]] = ???
def extinguishOvenFire(waterBucket: Bucket[Water]): Future[Oven] = ???
def makeBread(oven: Oven): Future[Bread] = ???
def makeSoup(oven: Oven): Future[Soup] = ???
def eatSoup(soup: Soup, bread: Bread): Unit = ???
def doLunch = {
for (bucket <- findBucket;
filledBucket <- fillBucket(bucket);
oven <- extinguishOvenFire(filledBucket);
soupFuture = makeSoup(oven);
breadFuture = makeBread(oven);
soup <- soupFuture;
bread <- breadFuture) {
eatSoup(soup, bread)
}
}
This chains futures together, and calls the relevant methods once dependencies are satisfied. Note that we use = in the for comprehension to allow us to start two Futures at the same time. As it stands, doLunch returns Unit, but if you replace the last few lines with:
// ..snip..
bread <- breadFuture) yield {
eatSoup(soup, bread)
oven
}
}
Then it will return Future[Oven] - which might be useful if you want to use the oven for something else after lunch.
As for your code, my first thought would be that you should consider Spray cache, as it looks like it might fit your requirements. If not, my next thought would be to replace the stringly typed interface you've currently got and go with something based on typed method calls:
private def save[T](key: String)(value: Future[T]) = this.synchronized {
dataStorage(key) = value
value
}
def loadFoo = save("foo"){Future("foo")}
def loadBar = save("bar"){Future(3)}
def loadFooBar = save("foobar"){
for (foo <- loadFoo;
bar <- loadBar) yield foo + bar // Or whatever
}
def loadBaz = save("baz"){Future(200L)}
def loadAll = {
val topLevelFutures = Seq(loadFooBar, loadBaz)
// Use standard library function to combine futures
Future.fold(topLevelFutures)(())((u,f) => ())
}
// I don't consider this method necessary, but if you've got a legacy API to support...
def loadData[T](key: String)(implicit manifest: Manifest[T]) = {
val future = key match {
case "foo" => loadFoo
case "bar" => loadBar
case "foobar" => loadFooBar
case "baz" => loadBaz
case "all" => loadAll
}
future.mapTo[T]
}

What Automatic Resource Management alternatives exist for Scala?

I have seen many examples of ARM (automatic resource management) on the web for Scala. It seems to be a rite-of-passage to write one, though most look pretty much like one another. I did see a pretty cool example using continuations, though.
At any rate, a lot of that code has flaws of one type or another, so I figured it would be a good idea to have a reference here on Stack Overflow, where we can vote up the most correct and appropriate versions.
Chris Hansen's blog entry 'ARM Blocks in Scala: Revisited' from 3/26/09 talks about slide 21 of Martin Odersky's FOSDEM presentation. This next block is taken straight from slide 21 (with permission):
def using[T <: { def close() }]
         (resource: T)
         (block: T => Unit) {
  try {
    block(resource)
  } finally {
    if (resource != null) resource.close()
  }
}
--end quote--
Then we can call like this:
using(new BufferedReader(new FileReader("file"))) { r =>
  var count = 0
  while (r.readLine != null) count += 1
  println(count)
}
What are the drawbacks of this approach? That pattern would seem to address 95% of where I would need automatic resource management...
Edit: added code snippet
Edit2: extending the design pattern - taking inspiration from Python's with statement and addressing:
statements to run before the block
re-throwing exception depending on the managed resource
handling two resources with one single using statement
resource-specific handling by providing an implicit conversion and a Managed class
This is with Scala 2.8.
trait Managed[T] {
def onEnter(): T
def onExit(t:Throwable = null): Unit
def attempt(block: => Unit): Unit = {
try { block } finally {}
}
}
def using[T <: Any](managed: Managed[T])(block: T => Unit) {
val resource = managed.onEnter()
var exception = false
try { block(resource) } catch {
case t:Throwable => exception = true; managed.onExit(t)
} finally {
if (!exception) managed.onExit()
}
}
def using[T <: Any, U <: Any]
(managed1: Managed[T], managed2: Managed[U])
(block: T => U => Unit) {
using[T](managed1) { r =>
using[U](managed2) { s => block(r)(s) }
}
}
class ManagedOS(out:OutputStream) extends Managed[OutputStream] {
def onEnter(): OutputStream = out
def onExit(t:Throwable = null): Unit = {
attempt(out.close())
if (t != null) throw t
}
}
class ManagedIS(in:InputStream) extends Managed[InputStream] {
def onEnter(): InputStream = in
def onExit(t:Throwable = null): Unit = {
attempt(in.close())
if (t != null) throw t
}
}
implicit def os2managed(out:OutputStream): Managed[OutputStream] = {
return new ManagedOS(out)
}
implicit def is2managed(in:InputStream): Managed[InputStream] = {
return new ManagedIS(in)
}
def main(args:Array[String]): Unit = {
using(new FileInputStream("foo.txt"), new FileOutputStream("bar.txt")) {
in => out =>
Iterator continually { in.read() } takeWhile( _ != -1) foreach {
out.write(_)
}
}
}
Daniel,
I've just recently deployed the scala-arm library for automatic resource management. You can find the documentation here: https://github.com/jsuereth/scala-arm/wiki
This library supports three styles of usage (currently):
1) Imperative/for-expression:
import resource._
for (input <- managed(new FileInputStream("test.txt"))) {
  // Code that uses the input as a FileInputStream
}
2) Monadic-style
import resource._
import java.io._
val lines = for {
  input <- managed(new FileInputStream("test.txt"))
  bufferedReader = new BufferedReader(new InputStreamReader(input))
  line <- makeBufferedReaderLineIterator(bufferedReader)
} yield line.trim()
lines foreach println
3) Delimited Continuations-style
Here's an "echo" tcp server:
import java.io._
import util.continuations._
import resource._
def each_line_from(r : BufferedReader) : String #suspendable =
shift { k =>
var line = r.readLine
while(line != null) {
k(line)
line = r.readLine
}
}
reset {
val server = managed(new ServerSocket(8007)) !
while(true) {
// This reset is not needed, however the below denotes a "flow" of execution that can be deferred.
// One can envision an asynchronous execution model that would support the exact same semantics as below.
reset {
val connection = managed(server.accept) !
val output = managed(connection.getOutputStream) !
val input = managed(connection.getInputStream) !
val writer = new PrintWriter(new BufferedWriter(new OutputStreamWriter(output)))
val reader = new BufferedReader(new InputStreamReader(input))
writer.println(each_line_from(reader))
writer.flush()
}
}
}
The code makes use of a Resource type-trait, so it's able to adapt to most resource types. It falls back to structural typing for classes with either a close or a dispose method. Please check out the documentation and let me know if you think of any handy features to add.
Here's James Iry's solution using continuations:
// standard using block definition
def using[X <: {def close()}, A](resource : X)(f : X => A) = {
try {
f(resource)
} finally {
resource.close()
}
}
// A DC version of 'using'
def resource[X <: {def close()}, B](res : X) = shift(using[X, B](res))
// some sugar for reset
def withResources[A, C](x : => A #cps[A, C]) = reset{x}
Here are the solutions with and without continuations for comparison:
def copyFileCPS = using(new BufferedReader(new FileReader("test.txt"))) {
reader => {
using(new BufferedWriter(new FileWriter("test_copy.txt"))) {
writer => {
var line = reader.readLine
var count = 0
while (line != null) {
count += 1
writer.write(line)
writer.newLine
line = reader.readLine
}
count
}
}
}
}
def copyFileDC = withResources {
val reader = resource[BufferedReader,Int](new BufferedReader(new FileReader("test.txt")))
val writer = resource[BufferedWriter,Int](new BufferedWriter(new FileWriter("test_copy.txt")))
var line = reader.readLine
var count = 0
while(line != null) {
count += 1
writer write line
writer.newLine
line = reader.readLine
}
count
}
And here's Tiark Rompf's suggested improvement:
trait ContextType[B]
def forceContextType[B]: ContextType[B] = null
// A DC version of 'using'
def resource[X <: {def close()}, B: ContextType](res : X): X #cps[B,B] = shift(using[X, B](res))
// some sugar for reset
def withResources[A](x : => A #cps[A, A]) = reset{x}
// and now use our new lib
def copyFileDC = withResources {
implicit val _ = forceContextType[Int]
val reader = resource(new BufferedReader(new FileReader("test.txt")))
val writer = resource(new BufferedWriter(new FileWriter("test_copy.txt")))
var line = reader.readLine
var count = 0
while(line != null) {
count += 1
writer write line
writer.newLine
line = reader.readLine
}
count
}
As of Scala 2.13, try-with-resources is finally supported via Using :). Example:
val lines: Try[Seq[String]] =
  Using(new BufferedReader(new FileReader("file.txt"))) { reader =>
    Iterator.unfold(())(_ => Option(reader.readLine()).map(_ -> ())).toList
  }
or use Using.resource to avoid the Try:
val lines: Seq[String] =
  Using.resource(new BufferedReader(new FileReader("file.txt"))) { reader =>
    Iterator.unfold(())(_ => Option(reader.readLine()).map(_ -> ())).toList
  }
You can find more examples in the Using documentation.
A utility for performing automatic resource management. It can be used to perform an operation using resources, after which it releases the resources in reverse order of their creation.
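When several resources are involved, Using.Manager takes care of the release-in-reverse-order bookkeeping; a small sketch (file names are placeholders):
import java.io.{BufferedReader, FileReader, FileWriter, PrintWriter}
import scala.util.{Try, Using}

val copied: Try[Int] = Using.Manager { use =>
  val reader = use(new BufferedReader(new FileReader("in.txt")))
  val writer = use(new PrintWriter(new FileWriter("out.txt")))
  // Both are closed when the block exits: writer first, then reader.
  Iterator.continually(reader.readLine()).takeWhile(_ != null).map { line =>
    writer.println(line)
    1
  }.sum
}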
I see a gradual 4-step evolution for doing ARM in Scala:
No ARM: Dirt
Only closures: Better, but multiple nested blocks
Continuation Monad: Use For to flatten the nesting, but unnatural separation in 2 blocks
Direct-style continuations: Nirvana, aha! This is also the most type-safe alternative: using a resource outside the withResources block is a type error.
There is light-weight (10 lines of code) ARM included with better-files. See: https://github.com/pathikrit/better-files#lightweight-arm
import better.files._
for {
in <- inputStream.autoClosed
out <- outputStream.autoClosed
} in.pipeTo(out)
// The input and output streams are auto-closed once out of scope
Here is how it is implemented if you don't want the whole library:
type Closeable = {
  def close(): Unit
}

type ManagedResource[A <: Closeable] = Traversable[A]

implicit class CloseableOps[A <: Closeable](resource: A) {
  def autoClosed: ManagedResource[A] = new Traversable[A] {
    override def foreach[U](f: A => U) = try {
      f(resource)
    } finally {
      resource.close()
    }
  }
}
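With that snippet in scope, the same for-comprehension style shown above works; a small usage sketch (the byte-copy body is illustrative):
import java.io.{FileInputStream, FileOutputStream}

for {
  in  <- new FileInputStream("in.bin").autoClosed
  out <- new FileOutputStream("out.bin").autoClosed
} Iterator.continually(in.read()).takeWhile(_ != -1).foreach(out.write)
// Both streams are closed (out first, then in) once the copy finishes.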
How about using type classes?
trait GenericDisposable[-T] {
  def dispose(v: T): Unit
}
...

def using[T, U](r: T)(block: T => U)(implicit disp: GenericDisposable[T]): U = try {
  block(r)
} finally {
  Option(r).foreach { r => disp.dispose(r) }
}
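The instances are elided in the answer; a hypothetical one for java.io.Closeable, plus a call site, could look like this:
implicit val closeableDisposable: GenericDisposable[java.io.Closeable] =
  new GenericDisposable[java.io.Closeable] {
    def dispose(v: java.io.Closeable): Unit = v.close()
  }

// Contravariance (-T) lets this single instance cover every Closeable subtype.
val firstLine = using(new java.io.BufferedReader(new java.io.FileReader("file.txt"))) { r =>
  r.readLine()
}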
Another alternative is Choppy's Lazy TryClose monad. It's pretty good with database connections:
val ds = new JdbcDataSource()
val output = for {
conn <- TryClose(ds.getConnection())
ps <- TryClose(conn.prepareStatement("select * from MyTable"))
rs <- TryClose.wrap(ps.executeQuery())
} yield wrap(extractResult(rs))
// Note that nothing will actually be done until 'resolve' is called
output.resolve match {
case Success(result) => // Do something
case Failure(e) => // Handle Stuff
}
And with streams:
val output = for {
outputStream <- TryClose(new ByteArrayOutputStream())
gzipOutputStream <- TryClose(new GZIPOutputStream(outputStream))
_ <- TryClose.wrap(gzipOutputStream.write(content))
} yield wrap({gzipOutputStream.flush(); outputStream.toByteArray})
output.resolve.unwrap match {
case Success(bytes) => // process result
case Failure(e) => // handle exception
}
More info here: https://github.com/choppythelumberjack/tryclose
Here is #chengpohi's answer, modified so it works with Scala 2.8+, instead of just Scala 2.13 (yes, it works with Scala 2.13 also):
def unfold[A, S](start: S)(op: S => Option[(A, S)]): List[A] =
  Iterator
    .iterate(op(start))(_.flatMap { case (_, s) => op(s) })
    .map(_.map(_._1))
    .takeWhile(_.isDefined)
    .flatten
    .toList

def using[A <: AutoCloseable, B](resource: A)(block: A => B): B =
  try block(resource) finally resource.close()

val lines: Seq[String] =
  using(new BufferedReader(new FileReader("file.txt"))) { reader =>
    unfold(())(_ => Option(reader.readLine()).map(_ -> ())).toList
  }
While Using is OK, I prefer the monadic style of resource composition. Twitter Util's Managed is pretty nice, except for its dependencies and its not-very-polished API.
To that end, I've published https://github.com/dvgica/managerial for Scala 2.12, 2.13, and 3.0.0. Largely based on the Twitter Util Managed code, no dependencies, with some API improvements inspired by cats-effect Resource.
The simple example:
import ca.dvgi.managerial._
val fileContents = Managed.from(scala.io.Source.fromFile("file.txt")).use(_.mkString)
But the real strength of the library is composing resources via for comprehensions.
Let me know what you think!
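A rough sketch of that composition, assuming Managed provides the map/flatMap that for-comprehensions rely on (as the library's monadic-style claim implies); the file names are placeholders:
import ca.dvgi.managerial._

val summary = for {
  config <- Managed.from(scala.io.Source.fromFile("app.conf"))
  data   <- Managed.from(scala.io.Source.fromFile("data.csv"))
} yield s"${config.getLines().size} config lines, ${data.getLines().size} data lines"

// Per the library's Managed semantics, nothing is acquired until use() runs,
// and both sources are released afterwards in reverse order of acquisition.
println(summary.use(identity))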