I am trying to solve an exercise from the book "Scala for the Impatient", which asks you to implement Java's BufferedInputStream as a trait. Here is my implementation:
trait Buffering {
  this: InputStream =>

  private[this] val bis = new JavaBufferedInputStream(this)

  override def read = bis.read
  override def read(byte: Array[Byte], off: Int, len: Int) = bis.read(byte, off, len)
  override def available = bis.available
  override def close() {
    bis.close()
  }
  override def skip(n: Long) = bis.skip(n)
}
def main(args: Array[String]) {
  val bfis = new FileInputStream(new File("foo.txt")) with Buffering
  println(bfis.read)
  bfis.close()
}
But this gives me a java.lang.StackOverflowError, so what's wrong with it? Thanks!
It looks like you are getting a stack overflow where you don't expect one. The key to troubleshooting these is to look at the repeating cycle of the stack trace; it usually points to what is repeatedly allocating frames. Here it will show something like this:
at C.Buffering$class.read(C.scala:12)
at C.C$$anon$1.read(C.scala:23)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
at C.Buffering$class.read(C.scala:12)
at C.C$$anon$1.read(C.scala:23)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
at C.Buffering$class.read(C.scala:12)
So reading from bottom to top, it looks like your read(byte, ...) is calling bis.read(byte, ...), which is calling BufferedInputStream.read, which is then calling your read(byte, ...) again.
It would appear that new BufferedInputStream(this) calls read on the underlying InputStream, but since the underlying stream is this, your object, which delegates right back to bis, we have infinite recursion.
I'm guessing that the author wants you to use the abstract override stackable modifications pattern where you can use super to refer to the right read method.
Maybe this is the answer. Just maybe, from what I understood.
trait Buffering extends InputStream {
  abstract override def read() = {
    println("Hello")
    super.read()
  }
}
val t = new ByteArrayInputStream("hello".getBytes()) with Buffering
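Building on the stackable-modifications idea above, here is a rough sketch of what a Buffering trait could look like if it keeps its own byte buffer and refills it through super, so no call ever loops back into the trait. This is only an illustration, not the book's official solution; the buffer size and the FileInputStream usage are my own choices:
import java.io.{File, FileInputStream, InputStream}

trait Buffering extends InputStream {
  private[this] val buf = new Array[Byte](8192)
  private[this] var pos = 0
  private[this] var count = 0

  abstract override def read(): Int = {
    if (pos >= count) {
      // Buffer exhausted: refill from the underlying stream via super,
      // which resolves to the concrete InputStream we are mixed into.
      count = super.read(buf, 0, buf.length)
      pos = 0
      if (count <= 0) return -1
    }
    pos += 1
    buf(pos - 1) & 0xff
  }
}

val in = new FileInputStream(new File("foo.txt")) with Buffering
println(in.read())
in.close()
Whether the book expects exactly this shape I can't say; the key point is that super.read(buf, 0, buf.length) resolves to the concrete stream the trait is mixed into, so nothing loops back into the trait's own read.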
I have been going through "Scala for the Impatient". Here is what I have as a solution to exercise 8 in chapter 10:
import java.io._

object BufferedInputStream extends App {

  trait Buffering {
    this: FileInputStream =>
    val br = new BufferedInputStream(this)
  }

  def bufferedReader(f: String): Unit = {
    val fis = new FileInputStream(f) with Buffering
    var c = 0
    var blen = 8192
    var len = 0
    var b = new Array[Byte](blen)
    while (fis.br.available > 0) {
      if (fis.br.available > blen) len = blen
      else len = fis.br.available
      c = fis.br.read(b, 0, len)
      if (c == -1) return
      print((for (i <- 0 to (c - 1)) yield b(i).toChar).mkString)
    }
    fis.br.close()
    fis.close()
  }

  bufferedReader("webpagexample")
}
I have pseudo-code like that shown below. ItemChurner.churn() is an abstracted component which generates objects up to x times, where x is unknown:
def func: MyList = {
  var list: MyList = MyList()
  while (ItemChurner.canChurn) {
    list = new MyList(ItemChurner.churn(), list)
  }
  list
}
Is there a way to avoid use of var?
If canChurn works as it should:
def func(churner: ItemChurner) = {
  // The element type must be given explicitly; here "Item" stands for
  // whatever type churner.churn() returns.
  val iterator = new Iterator[Item] {
    def hasNext = churner.canChurn
    def next() = churner.churn()
  }
  iterator.toList
}
About the version of the question that contained a caught-exception check for churn():
If you really expect exceptions, what's the point of canChurn then?
Anyway, if you care about exceptions:
import scala.util.Try
Iterator.continually(Try(churner.churn)).takeWhile(_.isSuccess).map(_.get).toList
This actually is not very precise, as churn might throw some other exception that has to be propagated, so here Scala's Exception helpers come in handy:
import scala.util.control.Exception.catching

def step = catching(classOf[NoMoreElementsException]) opt churner.churn()
Iterator.continually(step).takeWhile(_.nonEmpty).map(_.get).toList
If you want to avoid var, you can use simple recursion. This is the general strategy used in functional programming.
Use a Vector instead of a List for efficient appends.
import scala.annotation.tailrec

def func[T](): List[T] = {
  @tailrec
  def helper(result: List[T]): List[T] = {
    if (ItemChurner.canChurn) helper(result ++ List(ItemChurner.churn))
    else result
  }
  helper(List.empty[T])
}
This assumes ItemChurner.canChurn does not throw any exception. If it throws, simply wrap the call inside Try.
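For illustration, here is a rough sketch of that idea with churn() wrapped in Try. The churner is passed in as plain functions so the snippet is self-contained, and a failure simply ends the loop (how errors should really propagate depends on your API):
import scala.annotation.tailrec
import scala.util.{Success, Try}

def funcSafe[T](canChurn: () => Boolean, churn: () => T): List[T] = {
  @tailrec
  def helper(acc: List[T]): List[T] =
    if (!canChurn()) acc.reverse
    else Try(churn()) match {
      case Success(item) => helper(item :: acc) // prepend, then reverse once at the end
      case _             => acc.reverse         // a failure ends the loop
    }
  helper(Nil)
}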
Improving upon pamu's answer, you can do something along the lines of:
def func: MyList = {
  def process(cur: MyList): MyList = {
    if (ItemChurner.canChurn) process(new MyList(ItemChurner.churn(), cur))
    else cur
  }
  process(new MyList())
}
I'm trying to do some web scraping in Scala, and am currently using JSoup. I found that the iterator does not work in Scala, so I did some pimpin' and wrote an iterator myself. It looks like this:
object Pimp {
  implicit class PimpElements(es: Elements) extends Iterable[Element] {
    def iterator = new Iterator[Element] {
      var currentElem = 0
      def hasNext = currentElem < size
      def next(): Element = {
        currentElem += 1
        es.get(currentElem - 1)
      }
    }
  }
}
Now, the code that does not work, because IntelliJ or Scala does not recognize my variable cider to be of type Element, I guess:
for (cider <- ciders; if cider.getElementsByClass("info").text() != "") {
ciderArray += Drink(DrinkType.CIDER, cider)
}
But why not? My next() method returns es.get(i) which supposedly should be an Element and works in the code below:
for (i <- 0 to ciders.size() - 1; if ciders.get(i).getElementsByClass("info").text() != "") {
ciderArray += Drink(DrinkType.CIDER, ciders.get(i))
}
Isn't this code basically doing the same as the iterator, yet it gets recognized for some reason? The type of cider is, according to IntelliJ, Any and not Element.
The for comprehension is translated to c.withFilter(p).foreach(f).
Possibly you expected it to call iterator.
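For reference, here is a rough sketch of how the compiler rewrites the question's for comprehension (names taken from the question):
// The compiler rewrites
for (cider <- ciders; if cider.getElementsByClass("info").text() != "") {
  ciderArray += Drink(DrinkType.CIDER, cider)
}
// roughly into
ciders
  .withFilter(cider => cider.getElementsByClass("info").text() != "")
  .foreach(cider => ciderArray += Drink(DrinkType.CIDER, cider))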
This question is interesting because these encodings can affect type inference and have other side effects.
I see Elements is an ArrayList.
TraversableLike.withFilter does turn out to be different from Iterator.withFilter.
Your example works, after fixing the call to size (which stackoverflows). It also works with Java types for Elements and Element.
object Test extends App {
  case class Element(value: String)
  type Elements = java.util.ArrayList[Element]

  implicit class PimpElements(es: Elements) extends Iterable[Element] {
    def iterator = new Iterator[Element] {
      var currentElem = 0
      def hasNext = currentElem < es.size
      def next(): Element = {
        currentElem += 1
        es.get(currentElem - 1)
      }
    }
  }

  val vs = new java.util.ArrayList[Element]
  vs.add(new Element("hi"))
  vs.add(new Element("bye"))
  for (v <- vs if v.value.startsWith("h")) println(v)
}
But it will also work this way:
object Test extends App {
  implicit class PimpElements(es: Elements) extends Iterator[Element] {
    var currentElem = 0
    def hasNext = currentElem < es.size
    def next(): Element = {
      currentElem += 1
      es.get(currentElem - 1)
    }
  }

  val vs = new Elements
  vs.add(new Element("hi"))
  vs.add(new Element("bye"))
  for (v <- vs if v.value.startsWith("h")) println(v)
}
The Traversable tracks its representation as a type parameter, which might make for type inference issues. Both classes incur a wrapper for filtering. But the Iterator doesn't override foreach when filtering, so it saves the last unfiltered element on hasNext for the call to next. Possibly, the Traversable.foreach is more efficient.
There is some data that I have pulled from a remote API, for which I use a Future-style interface. The data is structured as a linked-list. A relevant example data container is shown below.
case class Data(information: Int) {
  def hasNext: Boolean = ??? // Implemented
  def next: Future[Data] = ??? // Implemented
}
Now I'm interested in adding some functionality to the data class, such as map, foreach, reduce, etc. To do so I want to implement some form of IterableLike so that it inherits these methods.
Given below is the trait that Data may extend so that it gets this property.
trait AsyncIterable[+T]
  extends IterableLike[Future[T], AsyncIterable[T]] {

  def hasNext: Boolean
  def next: Future[T]

  // How to implement?
  override def iterator: Iterator[Future[T]] = ???
  override protected[this] def newBuilder: mutable.Builder[Future[T], AsyncIterable[T]] = ???
  override def seq: TraversableOnce[Future[T]] = ???
}
It should be a non-blocking implementation which, when acted on, starts requesting the next data from the remote data source.
It is then possible to do cool stuff such as
case class Data(information: Int) extends AsyncIterable[Data]
val data = Data(1) // And more, of course
// Asynchronously print all the information.
data.foreach(data => println(data.information))
It is also acceptable for the interface to be different. But the result should in some way represent asynchronous iteration over the collection. Preferably in a way that is familiar to developers, as it will be part of an (open source) library.
In production I would use one of the following:
Akka Streams
Reactive Extensions
For private tests I would implement something similar to the following.
(Explanations are below.)
I have modified your Data a little bit:
abstract class AsyncIterator[T] extends Iterator[Future[T]] {
  def hasNext: Boolean
  def next(): Future[T]
}
For it we can implement this Iterable:
class AsyncIterable[T](sourceIterator: AsyncIterator[T])
  extends IterableLike[Future[T], AsyncIterable[T]] {

  private def stream(): Stream[Future[T]] =
    if (sourceIterator.hasNext) { sourceIterator.next #:: stream() } else { Stream.empty }

  val asStream = stream()

  override def iterator = asStream.iterator
  override def seq = asStream.seq
  override protected[this] def newBuilder = throw new UnsupportedOperationException()
}
And we can see it in action using the following code:
object Example extends App {
  val source = "Hello World!"

  val iterator1 = new DelayedIterator[Char](100L, source.toCharArray)
  new AsyncIterable(iterator1).foreach(_.foreach(print)) // prints 1 char per 100 ms

  pause(2000L)

  val iterator2 = new DelayedIterator[String](100L, source.toCharArray.map(_.toString))
  new AsyncIterable(iterator2).reduceLeft((fl: Future[String], fr) =>
    for (l <- fl; r <- fr) yield { println(s"$l+$r"); l + r }) // prints 1 line per 100 ms

  pause(2000L)

  def pause(duration: Long) = { println("->"); Thread.sleep(duration); println("\n<-") }
}
class DelayedIterator[T](delay: Long, data: Seq[T]) extends AsyncIterator[T] {
  private val dataIterator = data.iterator
  private var nextTime = System.currentTimeMillis() + delay

  override def hasNext = dataIterator.hasNext

  override def next = {
    val thisTime = math.max(System.currentTimeMillis(), nextTime)
    val thisValue = dataIterator.next()
    nextTime = thisTime + delay
    Future {
      val now = System.currentTimeMillis()
      if (thisTime > now) Thread.sleep(thisTime - now) // Your implementation will be better
      thisValue
    }
  }
}
Explanation
AsyncIterable uses Stream because it's calculated lazily and it's simple.
Pros:
simplicity
multiple calls to the iterator and seq methods return the same iterable with all items.
Cons:
could lead to memory overflow because the stream keeps all previously obtained values.
the first value is eagerly fetched during creation of AsyncIterable
DelayedIterator is a very simplistic implementation of AsyncIterator; don't blame me for quick and dirty code here.
It still seems strange to me to see a synchronous hasNext paired with an asynchronous next().
Using Twitter Spool I've implemented a working example.
To implement spool I modified the example in the documentation.
import com.twitter.concurrent.Spool
import com.twitter.util.{Await, Return, Promise}
import scala.concurrent.{ExecutionContext, Future}

trait AsyncIterable[+T <: AsyncIterable[T]] { self: T =>
  def hasNext: Boolean
  def next: Future[T]

  def spool(implicit ec: ExecutionContext): Spool[T] = {
    def fill(currentPage: Future[T], rest: Promise[Spool[T]]) {
      currentPage foreach { cPage =>
        if (hasNext) {
          val nextSpool = new Promise[Spool[T]]
          rest() = Return(cPage *:: nextSpool)
          fill(next, nextSpool)
        } else {
          val emptySpool = new Promise[Spool[T]]
          emptySpool() = Return(Spool.empty[T])
          rest() = Return(cPage *:: emptySpool)
        }
      }
    }
    val rest = new Promise[Spool[T]]
    if (hasNext) {
      fill(next, rest)
    } else {
      rest() = Return(Spool.empty[T])
    }
    self *:: rest
  }
}
Data is the same as before, and now we can use it.
// Cool stuff
implicit val ec = scala.concurrent.ExecutionContext.global
val data = Data(1) // And others
// Print all the information asynchronously
val fut = data.spool.foreach(data => println(data.information))
Await.ready(fut)
It will throw an exception on the second element, because the implementation of next was not provided.
Suppose this API is given and we cannot change it:
object ProviderAPI {

  trait Receiver[T] {
    def receive(entry: T)
    def close()
  }

  def run(r: Receiver[Int]) {
    new Thread() {
      override def run() {
        (0 to 9).foreach { i =>
          r.receive(i)
          Thread.sleep(100)
        }
        r.close()
      }
    }.start()
  }
}
In this example, ProviderAPI.run takes a Receiver, calls receive(i) 10 times and then closes. Typically, ProviderAPI.run would call receive(i) based on a collection which could be infinite.
This API is intended to be used in imperative style, like an external iterator. If our application needs to filter, map and print this input, we need to implement a Receiver which mixes all these operations:
object Main extends App {
  class MyReceiver extends ProviderAPI.Receiver[Int] {
    def receive(entry: Int) {
      if (entry % 2 == 0) {
        println("Entry#" + entry)
      }
    }
    def close() {}
  }

  ProviderAPI.run(new MyReceiver())
}
Now, the question is how to use the ProviderAPI in functional style, as an internal iterator (without changing the implementation of ProviderAPI, which is given to us). Note that ProviderAPI could also call receive(i) an infinite number of times, so it is not an option to collect everything in a list (also, we should handle each result one by one, instead of collecting all the input first and processing it afterwards).
I am asking how to implement such a ReceiverToIterator, so that we can use the ProviderAPI in functional style:
object Main extends App {
  val iterator = new ReceiverToIterator[Int] // how to implement this?
  ProviderAPI.run(iterator)

  iterator
    .view
    .filter(_ % 2 == 0)
    .map("Entry#" + _)
    .foreach(println)
}
Update
Here are four solutions:
IteratorWithSemaphorSolution: The workaround solution I proposed first attached to the question
QueueIteratorSolution: Using the BlockingQueue[Option[T]] based on the suggestion of nadavwr.
It allows the producer to continue producing up to queueCapacity before being blocked by the consumer.
PublishSubjectSolution: Very simple solution, using PublishSubject from Netflix RxJava-Scala API.
SameThreadReceiverToTraversable: Very simple solution, by relaxing the constraints of the question
Updated: BlockingQueue of 1 entry
What you've implemented here is essentially Java's BlockingQueue, with a queue size of 1.
Main characteristic: uber-blocking. A slow consumer will kill your producer's performance.
Update: #gzm0 mentioned that BlockingQueue doesn't cover EOF. You'll have to use BlockingQueue[Option[T]] for that.
Update: Here's a code fragment. It can be made to fit with your Receiver.
Some of it inspired by Iterator.buffered. Note that peek is a misleading name, as it may block -- and so will hasNext.
// fairness enabled -- you probably want to preserve order...
// alternatively, disable fairness and increase buffer to be 'big enough'
private val queue = new java.util.concurrent.ArrayBlockingQueue[Option[T]](1, true)

// the following block provides you with a potentially blocking peek operation
// it should `queue.take` when the previous peeked head has been invalidated
// specifically, it will `queue.take` and block when the queue is empty
private var head: Option[T] = _
private var headDefined: Boolean = false

private def invalidateHead() { headDefined = false }

private def peek: Option[T] = {
  if (!headDefined) {
    head = queue.take()
    headDefined = true
  }
  head
}

def iterator = new Iterator[T] {
  // potentially blocking; only false upon taking `None`
  def hasNext = peek.isDefined

  // peeks and invalidates head; throws NoSuchElementException as appropriate
  def next: T = {
    val opt = peek; invalidateHead()
    if (opt.isEmpty) throw new NoSuchElementException
    else opt.get
  }
}
Alternative: Iteratees
Iterator-based solutions will generally involve more blocking. Conceptually, you could use continuations on the thread doing the iteration to avoid blocking the thread, but continuations mess with Scala's for-comprehensions, so no joy down that road.
Alternatively, you could consider an iteratee-based solution. Iteratees are different than iterators in that the consumer isn't responsible for advancing the iteration -- the producer is. With iteratees, the consumer basically folds over the entries pushed by the producer over time. Folding each next entry as it becomes available can take place in a thread pool, since the thread is relinquished after each fold completes.
You won't get nice for-syntax for iteration, and the learning curve is a little challenging, but if you feel confident using a foldLeft you'll end up with a non-blocking solution that looks reasonable to the eye.
To read more about iteratees, I suggest taking a peek at PlayFramework 2.X's iteratee reference. The documentation describes their stand-alone iteratee library, which is 100% usable outside the context of Play. Scalaz 7 also has a comprehensive iteratee library.
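To make the "consumer folds over entries pushed by the producer" idea concrete, here is a minimal sketch that uses the question's own Receiver API directly. This is not the Play or Scalaz iteratee API, just an illustration of the fold-driven-by-the-producer shape:
class FoldingReceiver[T, S](initial: S)(step: (S, T) => S)(onDone: S => Unit)
    extends ProviderAPI.Receiver[T] {
  private var state = initial
  def receive(entry: T) { state = step(state, entry) } // the producer drives each fold step
  def close() { onDone(state) }                        // the final state is handed over on close
}

// Example: sum the even entries pushed by the provider, then print the result.
val sumEvens = new FoldingReceiver[Int, Int](0)(
  (acc, entry) => if (entry % 2 == 0) acc + entry else acc)(
  total => println("Sum of even entries: " + total))
ProviderAPI.run(sumEvens)
Unlike the real iteratee libraries, this sketch folds on whatever thread the provider calls receive from; the libraries add the state machinery that lets each step run in a thread pool.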
IteratorWithSemaphorSolution
The first workaround solution that I proposed attached to the question.
I moved it here as an answer.
import java.util.concurrent.Semaphore

object Main extends App {
  val iterator = new ReceiverToIterator[Int]
  ProviderAPI.run(iterator)

  iterator
    .filter(_ % 2 == 0)
    .map("Entry#" + _)
    .foreach(println)
}

class ReceiverToIterator[T] extends ProviderAPI.Receiver[T] with Iterator[T] {
  var lastEntry: T = _
  var waitingToReceive = new Semaphore(1)
  var waitingToBeConsumed = new Semaphore(1)
  var eof = false

  waitingToReceive.acquire()

  def receive(entry: T) {
    println("ReceiverToIterator.receive(" + entry + "). START.")
    waitingToBeConsumed.acquire()
    lastEntry = entry
    waitingToReceive.release()
    println("ReceiverToIterator.receive(" + entry + "). END.")
  }

  def close() {
    println("ReceiverToIterator.close().")
    eof = true
    waitingToReceive.release()
  }

  def hasNext = {
    println("ReceiverToIterator.hasNext().START.")
    waitingToReceive.acquire()
    waitingToReceive.release()
    println("ReceiverToIterator.hasNext().END.")
    !eof
  }

  def next = {
    println("ReceiverToIterator.next().START.")
    waitingToReceive.acquire()
    if (eof) { throw new NoSuchElementException }
    val entryToReturn = lastEntry
    waitingToBeConsumed.release()
    println("ReceiverToIterator.next().END.")
    entryToReturn
  }
}
QueueIteratorSolution
The second workaround solution that I proposed attached to the question. I moved it here as an answer.
Solution using the BlockingQueue[Option[T]] based on the suggestion of nadavwr.
It allows the producer to continue producing up to queueCapacity before being blocked by the consumer.
I implement a QueueToIterator that uses an ArrayBlockingQueue with a given capacity.
BlockingQueue has a take() method, but no blocking peek or hasNext, so I need an OptionNextToIterator as follows:
trait OptionNextToIterator[T] extends Iterator[T] {
  def getOptionNext: Option[T] // abstract
  def hasNext = { ... }
  def next = { ... }
}
Note: I am using the synchronized block inside OptionNextToIterator, and I am not sure it is totally correct
Solution:
import java.util.concurrent.ArrayBlockingQueue

object Main extends App {
  val receiverToIterator = new ReceiverToIterator[Int](queueCapacity = 3)
  ProviderAPI.run(receiverToIterator)
  Thread.sleep(3000) // test that ProviderAPI.run can produce 3 items ahead before being blocked by the consumer
  receiverToIterator.filter(_ % 2 == 0).map("Entry#" + _).foreach(println)
}

class ReceiverToIterator[T](val queueCapacity: Int = 1) extends ProviderAPI.Receiver[T] with QueueToIterator[T] {
  def receive(entry: T) { queuePut(entry) }
  def close() { queueClose() }
}

trait QueueToIterator[T] extends OptionNextToIterator[T] {
  val queueCapacity: Int
  val queue = new ArrayBlockingQueue[Option[T]](queueCapacity)
  var queueClosed = false

  def queuePut(entry: T) {
    if (queueClosed) { throw new IllegalStateException("The queue has already been closed.") }
    queue.put(Some(entry))
  }

  def queueClose() {
    queueClosed = true
    queue.put(None)
  }

  def getOptionNext = queue.take
}

trait OptionNextToIterator[T] extends Iterator[T] {
  def getOptionNext: Option[T]

  var answerReady: Boolean = false
  var eof: Boolean = false
  var element: T = _

  def hasNext = {
    prepareNextAnswerIfNecessary()
    !eof
  }

  def next = {
    prepareNextAnswerIfNecessary()
    if (eof) { throw new NoSuchElementException }
    val retVal = element
    answerReady = false
    retVal
  }

  def prepareNextAnswerIfNecessary() {
    if (answerReady) {
      return
    }
    synchronized {
      getOptionNext match {
        case None => eof = true
        case Some(e) => element = e
      }
      answerReady = true
    }
  }
}
PublishSubjectSolution
A very simple solution using PublishSubject from Netflix RxJava-Scala API:
// libraryDependencies += "com.netflix.rxjava" % "rxjava-scala" % "0.20.7"
import rx.lang.scala.subjects.PublishSubject

class MyReceiver[T] extends ProviderAPI.Receiver[T] {
  val channel = PublishSubject[T]()
  def receive(entry: T) { channel.onNext(entry) }
  def close() { channel.onCompleted() }
}

object Main extends App {
  val myReceiver = new MyReceiver[Int]()
  ProviderAPI.run(myReceiver)
  myReceiver.channel.filter(_ % 2 == 0).map("Entry#" + _).subscribe { n => println(n) }
}
ReceiverToTraversable
This Stack Overflow question came up when I wanted to list and process an SVN repository using the svnkit.com API as follows:
SvnList svnList = new SvnOperationFactory().createList();
svnList.setReceiver(new ISvnObjectReceiver<SVNDirEntry>() {
    public void receive(SvnTarget target, SVNDirEntry dirEntry) throws SVNException {
        // do something with dirEntry
    }
});
svnList.run();
the API used a callback function, and I wanted to use a functional style instead, as follows:
svnList
  .filter(e => "pom.xml".compareToIgnoreCase(e.getName()) == 0)
  .map(_.getURL)
  .map(getMavenArtifact)
  .foreach(insertArtifact)
I thought of having a class ReceiverToIterator[T] extends ProviderAPI.Receiver[T] with Iterator[T],
but this required the svnkit api to run in another thread.
That's why I asked how to solve this problem with a ProviderAPI.run method that runs in a new thread. But that was not very wise: if I had explained the real case, someone might have found a better solution sooner.
Solution
If we tackle the real problem (so, no need to use a thread for svnkit),
a simpler solution is to implement a scala.collection.Traversable instead of a scala.collection.Iterator.
While Iterator requires a next and hasNext def, Traversable requires a foreach def,
which is very similar to the svnkit callback!
Note that by using view, we make the transformers lazy, so elements are passed one by one through the whole chain to foreach(println).
This allows processing an infinite collection.
object ProviderAPI {
  trait Receiver[T] {
    def receive(entry: T)
    def close()
  }

  // Later I found out that I don't need a thread
  def run(r: Receiver[Int]) {
    (0 to 9).foreach { i => r.receive(i); Thread.sleep(100) }
  }
}

object Main extends App {
  new ReceiverToTraversable[Int](r => ProviderAPI.run(r))
    .view
    .filter(_ % 2 == 0)
    .map("Entry#" + _)
    .foreach(println)
}

class ReceiverToTraversable[T](val runProducer: (ProviderAPI.Receiver[T] => Unit)) extends Traversable[T] {
  override def foreach[U](f: (T) => U) = {
    object MyReceiver extends ProviderAPI.Receiver[T] {
      def receive(entry: T) = f(entry)
      def close() = {}
    }
    runProducer(MyReceiver)
  }
}
I have a Traversable, and I want to make it into a Java Iterator. My problem is that I want everything to be lazily done. If I do .toIterator on the traversable, it eagerly produces the result, copies it into a List, and returns an iterator over the List.
I'm sure I'm missing something simple here...
Here is a small test case that shows what I mean:
class Test extends Traversable[String] {
  def foreach[U](f: (String) => U) {
    f("1")
    f("2")
    f("3")
    throw new RuntimeException("Not lazy!")
  }
}

val a = new Test
val iter = a.toIterator
The reason you can't lazily get an iterator from a traversable is that you intrinsically can't. Traversable defines foreach, and foreach runs through everything without stopping. No laziness there.
So you have two options, both terrible, for making it lazy.
First, you can iterate through the whole thing each time. (I'm going to use the Scala Iterator, but the Java Iterator is basically the same.)
class Terrible[A](t: Traversable[A]) extends Iterator[A] {
  private var i = 0
  def hasNext = i < t.size // This could be O(n)!
  def next: A = {
    val a = t.slice(i, i + 1).head // Also could be O(n)!
    i += 1
    a
  }
}
If you happen to have efficient indexed slicing, this will be okay. If not, each "next" will take time linear in the length of the iterator, for O(n^2) time just to traverse it. But this is also not necessarily lazy; if you insist that it must be, you have to enforce O(n^2) in all cases and do
class Terrible[A](t: Traversable[A]) extends Iterator[A] {
  private var i = 0
  def hasNext: Boolean = {
    var j = 0
    t.foreach { a =>
      j += 1
      if (j > i) return true
    }
    false
  }
  def next: A = {
    var j = 0
    t.foreach { a =>
      j += 1
      if (j > i) { i += 1; return a }
    }
    throw new NoSuchElementException("Terribly empty")
  }
}
This is clearly a terrible idea for general code.
The other way to go is to use a thread and block the traversal of foreach as it's going. That's right, you have to do inter-thread communication on every single element access! Let's see how that works--I'm going to use Java threads here since Scala is in the middle of a switch to Akka-style actors (though any of the old actors or the Akka actors or the Scalaz actors or the Lift actors or (etc.) will work)
class Horrible[A](t: Traversable[A]) extends Iterator[A] {
  private val item = new java.util.concurrent.SynchronousQueue[Option[A]]()

  private class Loader extends Thread {
    override def run() { t.foreach { a => item.put(Some(a)) }; item.put(None) }
  }
  private val loader = new Loader
  loader.start

  private var got: Option[A] = null

  def hasNext: Boolean = {
    if (got == null) { got = item.poll; hasNext }
    else got.isDefined
  }

  def next = {
    if (got == null) got = item.poll
    val ans = got.get
    got = null
    ans
  }
}
This avoids the O(n^2) disaster, but ties up a thread and has desperately slow element-by-element access. I get about two million accesses per second on my machine, as compared to >100M for a typical traversable. This is clearly a horrible idea for general code.
So there you have it. Traversable is not lazy in general, and there is no good way to make it lazy without compromising performance tremendously.
I've run into this problem before and as far as I can tell, no one's particularly interested in making it easier to get an Iterator when all you've defined is foreach.
But since toIterator goes through toStream, toStream is the problem, so you could just override that:
class Test extends Traversable[String] {
  def foreach[U](f: (String) => U) {
    f("1")
    f("2")
    f("3")
    throw new RuntimeException("Not lazy!")
  }

  override def toStream: Stream[String] = {
    "1" #::
    "2" #::
    "3" #::
    Stream[String](throw new RuntimeException("Not lazy!"))
  }
}
Another alternative would be to define an Iterable instead of a Traversable, and then you'd get the iterator method directly. Could you explain a bit more what your Traversable is doing in your real use case?
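For illustration, here is a minimal sketch of that alternative, mirroring the Test class above (the element values are just the same hard-coded strings):
class LazyTest extends Iterable[String] {
  // iterator is the one abstract member of Iterable; Iterator itself is lazy,
  // so elements are only produced as they are requested.
  def iterator: Iterator[String] = Iterator("1", "2", "3")
}

val it = new LazyTest().iterator // nothing is computed eagerly
println(it.next())               // prints "1" on demand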