Is it appropriate to use Futures and Promises for delayed initialization, rather than using an Option var or some mutable variable?
You could create a factory class that encapsulates the promise:
import scala.concurrent.{Future, Promise}

class IntFactory {
  val intPromise = Promise[Int]()

  def create(): Future[Int] = intPromise.future
  def init(data: String): Unit = intPromise.success(data.length)
}
An actor or some other class could then use it like this:
class MyActor(factory : IntFactory) extends Actor{
val future_int = factory.create()
def receive = {
case (msg : String) => factory.init(msg) // Now the promise is fulfilled
}
}
Is there anything wrong with doing something like this? It may not have been ideal to use an actor as an example, as I think there are better alternatives for actors (become or FSM). I am currently considering using this with a non-actor class. Some of the instance variables are nothing until certain events occur. I was considering doing this instead of using a var Option and setting it to None. If this is bad, what are some other alternatives?
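For the non-actor case, roughly this kind of usage is what I have in mind (Consumer is just a placeholder name, not real code from my project):

import scala.concurrent.ExecutionContext.Implicits.global

class Consumer(factory: IntFactory) {
  private val futureInt = factory.create()

  // Runs asynchronously once init() has been called somewhere else.
  futureInt.foreach { len =>
    println(s"initialized with length $len")
  }
}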
EDIT:
I thought of situations where this might be more useful. If I had multiple things that needed to be initialized, and I had some async action that I wanted to perform when it was all done:
class MyActor(factory1 : IntFactory, factory2 : IntFactory) extends Actor{
val future_int1 = factory1.create()
val future_int2 = factory2.create()
for {
  x <- future_int1
  y <- future_int2
} {
  // do some stuff when both are complete
}
def receive = {
case "first" => factory1.init("first")
case "second" => factory2.init("second")
}
}
Then I would not have to check which ones are None every time I get another piece.
MORE EDITS:
Some additional information that I failed to specify in my original question:
The data needed to initialize the objects will come in asynchronously.
The data passed to the init function is required for initialization. I edited my example code so that this is now the case.
I am not using Akka. I thought Akka would be helpful for throwing together a quick example and thought that experienced Akka people could provide useful feedback.
Yes, this is certainly a better approach than using mutable variables (whether Option or not). Using lazy val, as #PatrykĆwiek suggested, is even better if you can initialize the state at any time instead of waiting for external events and don't need to do it asynchronously.
Judging from your original IntFactory (before the edit), you didn't really need the data string, so I think the base case could be rewritten like this:
class Foo {
lazy val first = {
Thread.sleep(2000) // Some computation, initialization etc.
25
}
lazy val second = {
Thread.sleep(1000) // Some computation, initialization etc.
11
}
def receive(s : String) = s match {
case "first" => first
case "second" => second
case _ => -1
}
}
Now, let's say you do this:
val foo = new Foo()
println(foo.receive("first")) // waiting for 2 seconds, initializing
println(foo.receive("first")) // returns immediately
println(foo.receive("second")) // waiting for 1 second, initializing
Now both first and second can be initialized at most once.
You can't pass parameters to lazy vals, so if the data string is somehow important to the initialization, then you'd probably be better off using factory method with memoization (IMO).
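A minimal sketch of such a memoizing factory, assuming the value is still derived from the data string (the names here are mine, not from the question):

class MemoizingFactory {
  // Not thread-safe; synchronize or use a concurrent map if needed.
  private val cache = scala.collection.mutable.Map.empty[String, Int]

  // Computes the value for `data` at most once, then returns the cached result.
  def get(data: String): Int = cache.getOrElseUpdate(data, {
    Thread.sleep(1000) // some computation, initialization etc.
    data.length
  })
}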
Related
After reading chapter six of Functional Programming in Scala and trying to understand the State monad, I have a question about wrapping a side-effecting class.
Say I have a class that modifies itself in some way.
class SideEffect(x:Int) {
var value = x
def modifyValue(newValue:Int):Unit = { value = newValue }
}
My understanding is that if we wrapped this in a State monad as below, it would still modify the original, making the point of wrapping it moot.
case class State[S,+A](run: S => (A, S)) { // See footnote
// map, flatmap, unit, utility functions
}
val sideEffect = new SideEffect(20)
println(sideEffect.value) // Prints "20"
val stateMonad = State[SideEffect,Int](state => {
state.modifyValue(10)
(state.value,state)
})
stateMonad.run(sideEffect) // run the modification
println(sideEffect.value) // Prints "10" i.e. we have modified the original state
The only solution to this that I can see is to make a copy of the class and modify that, but that seems computationally expensive as SideEffect grows. Plus if we wanted to wrap something like a Java class that doesn't implement Cloneable, we'd be out of luck.
val stateMonad = State[SideEffect,Int](state => {
val newState = new SideEffect(state.value) // easier if it were a case class; with, say, a Java library class one would not have that luxury
newState.modifyValue(10)
(newState.value,newState)
})
stateMonad.run(sideEffect) // run the modification
println(sideEffect.value) // Prints "20", original state not modified
Am I using the State monad wrong? How would one go about wrapping a side-effecting class without having to copy it or is this the only way?
The implementation for the State monad I'm using here is from the book, and might differ from Scalaz implementation
There isn't much you can do with a mutable object except hide the mutation inside a wrapper; at least that shrinks the part of the program that needs extra attention in testing. Your first sample is good enough. One point only: it would be better not to expose the outer reference at all. Instead of stateMonad.run(sideEffect), use something like stateMonad.run(new SideEffect(20)) or:
def initState: SideEffect = new SideEffect(20)
val (value, state) = stateMonad.run(initState)
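To illustrate why keeping the State wrapper still pays off, here is a sketch of composing two copy-based modifications with flatMap. The flatMap/map definitions are the usual book-style ones, written out only so the snippet is self-contained; setValue is a name I made up.

case class State[S, +A](run: S => (A, S)) {
  def flatMap[B](f: A => State[S, B]): State[S, B] = State { s =>
    val (a, s1) = run(s)
    f(a).run(s1)
  }
  def map[B](f: A => B): State[S, B] = flatMap(a => State(s => (f(a), s)))
}

// Each step copies the incoming SideEffect, so the caller's instance stays untouched.
def setValue(n: Int): State[SideEffect, Int] = State { s =>
  val next = new SideEffect(s.value)
  next.modifyValue(n)
  (next.value, next)
}

val program = for {
  a <- setValue(10)
  b <- setValue(a + 5)
} yield b

val (result, finalState) = program.run(new SideEffect(20))
// result == 15; the SideEffect passed in is never mutated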
I am attempting to pass an ActorRef to a calling client. Here is some code:
object Sub {
implicit val timeout = Timeout(5 seconds)
lazy val default = {
val subActor = Akka.system.actorOf(Props[Sub], "sub")
subActor
}
def apply(pChannel: Concurrent.Channel[JsValue]):ActorRef = {
(default ? Register(callback)).map {
case ref:ActorRef => ref
}
}
}
The client invoking this is simply calling val sub:ActorRef = Sub(channel)
The problem I get here however is:
[error] found : scala.concurrent.Future[akka.actor.ActorRef]
[error] required: akka.actor.ActorRef
How can I modify the code above to get an ActorRef for the calling code to get the ref it needs?
Future is the promise of a certain value at a later time. In this case Future[ActorRef] is a value that represents an ActorRef now or at some point in the future.
You don't really want to get the ActorRef directly, you probably want to compose your calling code with the future that is returned.
For instance, if your code does:
val sub = Sub(channel)
doSomething(sub)
you'd want to rewrite it as:
Sub(channel).map { sub =>
doSomething(sub)
}
as that will create a new future that automatically calls doSomething(sub) when the sub value is available. You can also rewrite the example as:
for(sub <- Sub(channel)) yield doSomething(sub)
If you're looking to block in the calling code and return the value when available (which goes against the design principles of Akka, Play and reactive programming in general), you can always use Await, such as:
// Await.result() takes a Future[T] and returns a T
val sub = Await.result(Sub(channel), 10 seconds)
but it is poor design to do this in library code and isn't recommended. You should only wait on futures at the very end of your processing, and even then, the framework will usually handle that for you.
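Concretely, one way to adjust the factory itself is to let apply return the Future and leave the composition to the caller. A sketch keeping the Register message from the question (its callback argument isn't shown there):

def apply(pChannel: Concurrent.Channel[JsValue]): Future[ActorRef] =
  (default ? Register(callback)).mapTo[ActorRef]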
I'm looking for a FIFO stream in Scala, i.e., something that provides the functionality of
immutable.Stream (a stream that can be finite and memorizes the elements that have already been read)
mutable.Queue (which allows for added elements to the FIFO)
The stream should be closable and should block access to the next element until the element has been added or the stream has been closed.
Actually I'm a bit surprised that the collection library does not (seem to) include such a data structure, since it is IMO a quite classical one.
My questions:
1) Did I overlook something? Is there already a class providing this functionality?
2) OK, if it's not included in the collection library then it might be just a trivial combination of existing collection classes. However, I tried to find this trivial code, but my implementation still looks quite complex for such a simple problem. Is there a simpler solution for such a FifoStream?
import java.io.Closeable
import scala.collection.mutable.Queue

class FifoStream[T] extends Closeable {
  val queue = new Queue[Option[T]]
lazy val stream = nextStreamElem
private def nextStreamElem: Stream[T] = next() match {
case Some(elem) => Stream.cons(elem, nextStreamElem)
case None => Stream.empty
}
/** Returns next element in the queue (may wait for it to be inserted). */
private def next() = {
queue.synchronized {
while (queue.isEmpty) queue.wait() // loop, not if: guards against spurious wakeups
queue.dequeue()
}
}
/** Adds new elements to this stream. */
def enqueue(elems: T*) {
queue.synchronized {
queue.enqueue(elems.map{Some(_)}: _*)
queue.notify()
}
}
/** Closes this stream. */
def close() {
queue.synchronized {
queue.enqueue(None)
queue.notify()
}
}
}
Paradigmatic's solution (slightly modified)
Thanks for your suggestions. I slightly modified paradigmatic's solution so that toStream returns an immutable stream (allowing repeatable reads), which fits my needs. Just for completeness, here is the code:
import collection.JavaConversions._
import java.util.concurrent.{LinkedBlockingQueue, BlockingQueue}
class FIFOStream[A]( private val queue: BlockingQueue[Option[A]] = new LinkedBlockingQueue[Option[A]]() ) {
lazy val toStream: Stream[A] = queue2stream
private def queue2stream: Stream[A] = queue take match {
case Some(a) => Stream cons ( a, queue2stream )
case None => Stream empty
}
def close() = queue add None
def enqueue( as: A* ) = queue addAll as.map( Some(_) )
}
In Scala, streams are "functional iterators". People expect them to be pure (no side effects) and immutable. In your case, every time you iterate over the stream you modify the queue (so it's not pure). This can create a lot of misunderstanding, because iterating over the same stream twice will give two different results.
That being said, you should use Java's BlockingQueues rather than rolling your own implementation. They are well implemented in terms of safety and performance. Here is the cleanest code I can think of (using your approach):
import java.util.concurrent.{BlockingQueue, LinkedBlockingQueue}
import scala.collection.JavaConversions._
class FIFOStream[A]( private val queue: BlockingQueue[Option[A]] ) {
def toStream: Stream[A] = queue take match {
case Some(a) => Stream cons ( a, toStream )
case None => Stream empty
}
def close() = queue add None
def enqueue( as: A* ) = queue addAll as.map( Some(_) )
}
object FIFOStream {
  def apply[A]() = new FIFOStream[A]( new LinkedBlockingQueue[Option[A]]() )
}
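A quick usage sketch of the class above (the producer thread and the sample values are my own illustration, not part of the answer):

val fifo = FIFOStream[Int]()

// Producer: push a few elements from another thread, then close the stream.
new Thread(new Runnable {
  def run() {
    fifo.enqueue(1, 2, 3)
    fifo.close()
  }
}).start()

// Consumer: blocks on the underlying queue until elements arrive,
// and terminates when the closing None is reached.
fifo.toStream.foreach(println) // prints 1, 2, 3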
I'm assuming you're looking for something like java.util.concurrent.BlockingQueue?
Akka has a BoundedBlockingQueue implementation of this interface. There are of course the implementations available in java.util.concurrent.
You might also consider using Akka's actors for whatever it is you are doing. Use Actors to be notified or pushed a new event or message instead of pulling.
1) It seems you're looking for a dataflow stream seen in languages like Oz, which supports the producer-consumer pattern. Such a collection is not available in the collections API, but you could always create one yourself.
2) The dataflow stream relies on the concept of single-assignment variables (they don't have to be initialized at the point of declaration, and reading them before they are assigned blocks the reader):
val x: Int
startThread {
println(x)
}
println("The other thread waits for the x to be assigned")
x = 1
It would be straightforward to implement such a stream if single-assignment (or dataflow) variables were supported in the language (see the link). Since they are not a part of Scala, you have to use the wait-synchronized-notify pattern just like you did.
Concurrent queues from Java can be used to achieve that as well, as the other user suggested.
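For what it's worth, a single-assignment variable can be approximated in plain Scala with a Promise and its Future. A sketch mirroring the pseudocode above (blocking with Await is only for illustration):

import scala.concurrent.{Await, Promise}
import scala.concurrent.duration._

val x = Promise[Int]()

// The reader blocks until x has been assigned.
new Thread(new Runnable {
  def run() {
    println(Await.result(x.future, 10.seconds))
  }
}).start()

println("The other thread waits for the x to be assigned")
x.success(1) // the single assignment; a second success would throw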
I have a class that extends Iterator and models a complex algorithm (MyAlgorithm1), so the algorithm can advance step by step through the next method.
class MyAlgorithm1(val c:Set) extends Iterator[Step] {
override def next(): Step = {
/* ... */
}
/* ... */
}
Now I want to apply a different algorithm (MyAlgorithm2) on each pass of the first algorithm, so the iterations of algorithm 1 and algorithm 2 are interleaved.
class MyAlgorithm2(val c:Set) { /* ... */ }
How can I do this in the best way? Perhaps with some trait?
UPDATE:
MyAlgorithm2 receives a set and transforms it. So does MyAlgorithm1, but it is more complex and needs to run step by step. The idea is to run one step of MyAlgorithm1 and then run MyAlgorithm2, then the next step, and so on. MyAlgorithm2 simplifies the set, which may make MyAlgorithm1's work easier.
As described, the problem can be solved with either inheritance or a trait. For instance:
class MyAlgorithm1(val c:Set) extends Iterator[Step] {
protected var current = Step(c)
override def next():Step = {
current = process(current)
current
}
override def hasNext: Boolean = !current.set.isEmpty
private def process(s: Step): Step = s
}
class MyAlgorithm2(c: Set) extends MyAlgorithm1(c) {
override def next(): Step = {
super.next()
current = process(current)
current
}
private def process(s: Step): Step = s
}
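Driving the combined iterator is then ordinary Iterator usage; a small sketch (initialSet is a placeholder for whatever set you start from):

val algo = new MyAlgorithm2(initialSet) // each next() runs a MyAlgorithm1 step, then simplifies
algo.take(10).foreach(step => println(step))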
With traits you could do something with abstract override, but designing it so that the result of the simplification gets fed to the first algorithm may be harder.
But, let me propose you are getting at the problem in the wrong way.
Instead of creating a class for the algorithm extending an iterator, you could define your algorithm like this:
class MyAlgorithm1 extends Function1[Step, Step] {
def apply(s: Step): Step = s
}
class MyAlgorithm2 extends Function1[Step, Step] {
def apply(s: Step): Step = s
}
The iterator then could be much more easily defined:
Iterator.iterate(Step(set))((new MyAlgorithm1) andThen (new MyAlgorithm2)).takeWhile(_.set.nonEmpty)
Extending Iterator is probably more work than you actually need to do. Let's roll back a bit.
You've got some stateful object of type MyAlgorithm1
val alg1 = new MyAlgorithm1(args)
Now you wish to repeatedly call some function on it, which will alter its state and return some value. This is best modeled not by having your object implement Iterator, but rather by creating a new object that handles the iteration. Probably the easiest one in the Scala standard library is Stream. Here's an object which creates a stream of results from your algorithm.
val alg1Stream:Stream[Step] = Stream.continually(alg1.next())
Now if you wanted to repeatedly get results from that stream, it would be as easy as
for(step<-alg1Stream){
// do something
}
or equivalently
alg1Stream.foreach { step =>
  // do something
}
Now assume we've also encapsulated myAlgorithm2 as a stream
val alg2=new MyAlgorithm2(args)
val alg2Stream:Stream[Step] = Stream.continually(alg2.next())
Then we just need some way to interleave streams, and then we could say
for(step<-interleave(alg1Stream, alg2Stream)){
// do something
}
Sadly, a quick glance through the standard library reveals no Stream interleaving function. It is easy enough to write one:
def interleave[A](stream1:Stream[A], stream2:Stream[A]):Stream[A] ={
var str1 = stream1
var str2 = stream2
var streamToUse = 1
Stream.continually{
if(streamToUse == 1){
streamToUse = 2
val out = str1.head
str1 = str1.tail
out
}else{
streamToUse = 1
val out = str2.head
str2 = str2.tail
out
}
}
}
That constructs a stream that repeatedly alternates between two streams, fetching the next result from the appropriate one and then setting up its state for the next fetch. Note that this interleave only works for infinite streams, and we'd need a more clever one to handle streams that can end, but that's fine for the sake of the problem.
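For completeness, a sketch of an interleave that also copes with finite streams: take the head of the first stream, then recurse with the arguments swapped, and once one side runs out just return the other.

def interleaveFinite[A](s1: Stream[A], s2: Stream[A]): Stream[A] = s1 match {
  case head #:: tail => head #:: interleaveFinite(s2, tail)
  case _             => s2
}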
I have a class that extends Iterator and models a complex algorithm (MyAlgorithm1).
Well, stop for a moment there. An algorithm is not an iterator, so it doesn't make sense for it to extend Iterator.
It seems as if you might want to use a fold or a map instead, depending on exactly what it is that you want to do. This is a common pattern in functional programming: You generate a list/sequence/stream of something and then run a function on each element. If you want to run two functions on each element, you can either compose the functions or run another map.
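For example, assuming the two algorithms are plain Step => Step functions as in the earlier answer (alg1, alg2, and steps are placeholder names):

val alg1: Step => Step = new MyAlgorithm1
val alg2: Step => Step = new MyAlgorithm2
val steps: Seq[Step] = Seq(Step(set))

steps.map(alg1 andThen alg2) // compose the functions, then map once
steps.map(alg1).map(alg2)    // or run another map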
I've been given a Java API for connecting to and communicating over a proprietary bus, using a callback-based style. I'm currently implementing a proof-of-concept application in Scala, and I'm trying to work out how I might produce a slightly more idiomatic Scala interface.
A typical (simplified) application might look something like this in Java:
DataType type = new DataType();
BusConnector con = new BusConnector();
con.waitForData(type.getClass()).addListener(new IListener<DataType>() {
public void onEvent(DataType t) {
//some stuff happens in here, and then we need some more data
con.waitForData(anotherType.getClass()).addListener(new IListener<anotherType>() {
public void onEvent(anotherType t) {
//we do more stuff in here, and so on
}
});
}
});
//now we've got the behaviours set up we call
con.start();
In Scala I can obviously define an implicit conversion from (T => Unit) into an IListener, which certainly makes things a bit simpler to read:
implicit def func2Ilistener[T](f: T => Unit): IListener[T] = new IListener[T] {
  def onEvent(t: T) = f(t)
}
val con = new BusConnector
con.waitForData(DataType.getClass).addListener( (d:DataType) => {
//some stuff, then another wait for stuff
con.waitForData(OtherType.getClass).addListener( (o:OtherType) => {
//etc
})
})
Looking at this reminded me of both scalaz promises and f# async workflows.
My question is this:
Can I convert this into either a for comprehension or something similarly idiomatic? (I feel like this should map to actors reasonably well too.)
Ideally I'd like to see something like:
for(
d <- con.waitForData(DataType.getClass);
val _ = doSomethingWith(d);
o <- con.waitForData(OtherType.getClass)
//etc
)
If you want to use a for comprehension for this, I'd recommend looking at the Scala Language Specification for how for comprehensions are expanded to map, flatMap, etc. This will give you some clues about how this structure relates to what you've already got (with nested calls to addListener). You can then add an implicit conversion from the return type of the waitForData call to a new type with the appropriate map, flatMap, etc methods that delegate to addListener.
Update
I think you can use scala.Responder[T] from the standard library:
Assuming the class with the addListener is called Dispatcher[T]:
trait Dispatcher[T] {
def addListener(listener: IListener[T]): Unit
}
trait IListener[T] {
def onEvent(t: T): Unit
}
implicit def dispatcher2Responder[T](d: Dispatcher[T]): Responder[T] = new Responder[T] {
  def respond(k: T => Unit) = d.addListener(new IListener[T] {
    def onEvent(t: T) = k(t)
  })
}
You can then use this as requested
for(
d <- con.waitForData(DataType.getClass);
val _ = doSomethingWith(d);
o <- con.waitForData(OtherType.getClass)
//etc
) ()
See the Scala wiki and this presentation on using Responder[T] for a Comet chat application.
I have very little Scala experience, but if I were implementing something like this I'd look to leverage the actor mechanism rather than using callback listener classes. Actors were made for asynchronous communication, and they nicely separate those different parts of your app for you. You can also have them send messages to multiple listeners.
We'll have to wait for a "real" Scala programmer to flesh this idea out, though. ;)