I have a text field that is supposed to hold a manually entered number of minutes. I need that number to be decreased by a timer (I have already configured a timer that gives me a tick every second), but I am struggling to find a simple way to do it. Would it help to use
scala.concurrent.duration._
Frankly, I haven't taken a look at it before. Any other suggestion is welcome.
My current solution, for anybody who may need it:
import scala.swing._

class ScalaTimer(val delay: Int) {
  val tmr: javax.swing.Timer = new javax.swing.Timer(delay, null)
  def start() = tmr.start()
  def stop() = tmr.stop()
}

object Test33 { //extends SimpleSwingApplication {
  val timer = new ScalaTimer(1000)
  timer.start()
  var remainedtime: Int = 10

  def main(args: Array[String]) {
    timer.tmr.addActionListener(Swing.ActionListener(e => {
      val tm = remainedtime - 1
      remainedtime = tm
      println(tm)
    }))
    Thread.sleep(10000)
  }
}
Let's hope I can now use it the way I want :)
Related
I want to pass the value of a var/val from one method to another.
For example, I have
object abc {
  def onStart = {
    val startTime = new java.sql.Timestamp(new java.util.Date().getTime)
  }
  def onEnd = {
    // use startTime here
  }
}
calling:
onStart()
executeReports(reportName, sqlContexts)
onEnd()
Here onStart() and onEnd() are job monitoring functions for executeReports().
executeReports() runs in a loop for 5 reports.
I have tried using global variables like
object abc {
  var startTime: java.sql.Timestamp = _
  def onStart = {
    startTime = new java.sql.Timestamp(new java.util.Date().getTime)
  }
  def onEnd = {
    // use startTime here
  }
}
but the catch with this is that when the loop executes for the next report, startTime does not change.
I also tried using a singleton class, but that did not work for me either.
My requirement is to have a startTime for every iteration, i.e. for every report.
Any ideas are welcome here. I'll be happy to provide more clarification on my requirement if needed.
The common Scala solution to this is to write a function that wraps other functions and performs the setup and shutdown internally.
def timeit[T](fun: => T): T = {
  val start = System.currentTimeMillis // Do your start stuff
  val res = fun
  println(s"Time ${System.currentTimeMillis - start}") // Do your end stuff
  res
}
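For example, a minimal usage sketch, assuming the executeReports(reportName, sqlContexts) call from the question and a hypothetical reportNames collection: each call gets its own start/end handling, so every report gets a fresh start time.

for (reportName <- reportNames) {
  timeit { executeReports(reportName, sqlContexts) }
}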
RussS has the better solution, but if for some reason you're wedded to the design you've described, you might try using a mutable val, i.e. a mutable collection.
I got this to compile and pass some small tests.
object abc {
  private val q = collection.mutable.Queue[java.sql.Timestamp]()
  def onStart = {
    q.enqueue(new java.sql.Timestamp(java.util.Calendar.getInstance().getTime.getTime))
  }
  def onEnd = {
    val startTime = q.dequeue
  }
}
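For instance, onEnd could use the dequeued timestamp to log the elapsed time for that report; a sketch of such an onEnd (the log message is made up):

def onEnd = {
  val startTime = q.dequeue
  val elapsedMs = System.currentTimeMillis - startTime.getTime
  println(s"report took ${elapsedMs} ms")
}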
Based on your requirements, it might be better to do it this way.
case class Job(reports: List[Report]) {
  def execute(): Unit = ???       // does the looping over the reports, calling start and then end to generate monitoring data
  private def start(): Unit = ??? // iterates over each Report and calls its execute method
  private def end(): Unit = ???   // iterates over each Report and uses its startTime and executionTime to generate monitoring data
}

abstract class Report {
  var startTime: java.sql.Timestamp = _ // time the report started (the original sketch used a DateTime)
  def doReport(): Unit                  // unimplemented method that does the report generation
  def execute(): Unit = ???             // first sets startTime to now, then calls doReport, and finally calculates executionTime
}
Subtypes of Report should implement doReport, which does the actual reporting.
You can also change the Job.execute method to accept
reports: List[Report]
as a parameter, so that you can have a singleton Job (since start and end will be the same for every Job you have).
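To make the sketch concrete, here is one possible filling-in (SalesReport is a made-up name, and executionTime is tracked on the subtype purely for illustration): execute stamps startTime, runs doReport, and measures how long it took so Job.end can use it for monitoring.

class SalesReport extends Report {
  var executionTime: Long = 0L // milliseconds spent in doReport

  def doReport(): Unit = {
    // generate the actual report here
  }

  override def execute(): Unit = {
    startTime = new java.sql.Timestamp(System.currentTimeMillis)
    doReport()
    executionTime = System.currentTimeMillis - startTime.getTime
  }
}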
I have two Scala functions that are expensive to run. Each one looks like the code below: it keeps improving the value of a variable. I'd like to run them simultaneously and, after 5 minutes (or some other time), terminate both functions and take the latest value each one has reached by then.
def func1(n: Int): Double = {
  var a = 0.0D
  while (not terminated) {
    // improve value of 'a' with algorithm 1
  }
  a
}

def func2(n: Int): Double = {
  var a = 0.0D
  while (not terminated) {
    // improve value of 'a' with algorithm 2
  }
  a
}
I would like to know how I should structure my code to do that, and what the best practice is here. I was thinking about running them in two different threads with a timeout and returning their latest values at the timeout, but it seems there may be other ways of doing it. I am new to Scala, so any insight would be tremendously helpful.
It is not hard. Here is one way of doing it:
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

@volatile var terminated = false

def func1(n: Int): Double = {
  var a = 0.0D
  while (!terminated) {
    a = 0.0001 + a * 0.99999 // some useless formula1
  }
  a
}

def func2(n: Int): Double = {
  var a = 0.0D
  while (!terminated) {
    a += 0.0001 // much simpler formula2, just for testing
  }
  a
}

def main(args: Array[String]): Unit = {
  val f1 = Future { func1(1) } // work starts here
  val f2 = Future { func2(2) } // and here

  // aggregate the results into one common future
  val aggregatedFuture = for {
    f1Result <- f1
    f2Result <- f2
  } yield (f1Result, f2Result)

  Thread.sleep(500) // wait here for some calculations, in ms
  terminated = true // this is where we actually command the loops to stop

  // since exiting the while() loops takes a moment, we need to wait for the results
  val res = Await.result(aggregatedFuture, 50.millis)

  // just a printout
  println("results: " + res)
}
But of course you may want to take another look at your while loops and structure the calculations in a more manageable and chainable way.
Output: results:(9.999999999933387,31206.34691883926)
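As one possible variation (my own sketch, not part of the answer above), scala.concurrent.duration also offers Deadline, which lets each loop check its own time budget instead of sharing a @volatile flag:

import scala.concurrent.duration._

def improveUntil(deadline: Deadline): Double = {
  var a = 0.0
  while (deadline.hasTimeLeft()) {
    a = 0.0001 + a * 0.99999 // the same useless formula1 as above
  }
  a
}

// e.g. Future { improveUntil(500.millis.fromNow) }, one future per algorithm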
I am not 100% sure this is something you would want to do, but here is one approach (not for 5 minutes, but you can change that):
object s
{
  def main(args: Array[String]): Unit = println(run())

  def run(): (Int, Int) =
  {
    val (s, numNanoSec, seedVal) = (System.nanoTime, 500000000L, 0)
    Seq(f1 _, f2 _).par.map(f =>
    {
      var (i, id) = f(seedVal)
      while (System.nanoTime - s < numNanoSec)
      {
        i = f(i)._1
      }
      (i, id)
    }).seq.maxBy(_._1)
  }

  def f1(a: Int): (Int, Int) = (a + 1, 1)
  def f2(a: Int): (Int, Int) = (a + 2, 2)
}
Output:
me@ideapad:~/junk> scala s.scala
(34722678,2)
me@ideapad:~/junk> scala s.scala
(30065688,2)
me@ideapad:~/junk> scala s.scala
(34650716,2)
Of course this all assumes you have at least two threads available to distribute tasks to.
You can use a Future together with Await.result to do that:
import scala.concurrent.{Await, Future, TimeoutException}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

def fun2(): Double = {
  var a = 0.0
  val f = Future {
    // improve a with algorithm 2
    a
  }
  try {
    Await.result(f, 5.minutes)
  } catch {
    case e: TimeoutException => a
  }
}
Await.result waits for the algorithm with a timeout; when the timeout is hit, we return the current value of a directly.
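A sketch of extending this to both functions at once (my own addition, untested): start both futures first, then wait on their combination with a single timeout. The same caveat as above applies: on timeout, the vars are read while the algorithms may still be running.

import scala.concurrent.{Await, Future, TimeoutException}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

def runBoth(): (Double, Double) = {
  var a1 = 0.0
  var a2 = 0.0
  val f1 = Future { /* improve a1 with algorithm 1 */ a1 }
  val f2 = Future { /* improve a2 with algorithm 2 */ a2 }
  try {
    Await.result(f1.zip(f2), 5.minutes) // both futures run in parallel
  } catch {
    case _: TimeoutException => (a1, a2) // fall back to the latest values reached so far
  }
}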
I have this piece of code, trying to build a timer that will decrease a given value in a TextField (the field is supposed to contain the minutes needed to finish a job; those minutes will be entered manually and then counted down by this clock):
import scala.swing._

class ScalaTimer(val delay: Int) {
  val tmr: javax.swing.Timer = new javax.swing.Timer(delay, null)
  def start() = tmr.start()
  def stop() = tmr.stop()
}

object Test33 { //extends SimpleSwingApplication {
  val timer = new ScalaTimer(50)
  timer.tmr.start()

  //def top = new MainFrame {
  def main(args: Array[String]) {
    timer.tmr.addActionListener(Swing.ActionListener(e => {
      println(timer.delay - 1)
    }))
  }
  //}
}
I don't get why it doesn't print anything when I use a main() method, but it prints the current given delay when I use a Frame :|
It won't print anything with your code as it stands because your application exits as soon as it has added the ActionListener, before anything has had a chance to fire it!
Try adding
Thread.sleep(10000);
just before the end of your main method, and you'll find it'll print 49 repeatedly.
It works as it stands with a Frame because that prevents the application from terminating until the Frame is closed.
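For reference, a minimal sketch of the corrected main method, with everything else in the question's Test33 left as it is:

def main(args: Array[String]) {
  timer.tmr.addActionListener(Swing.ActionListener(e => {
    println(timer.delay - 1)
  }))
  Thread.sleep(10000) // keep the JVM alive long enough for the timer to fire
}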
I am looking for opportunities to increase concurrency and performance in my Scala 2.9 / Akka 2.0 RC2 code. Given the following code:
import akka.actor._

case class DataDelivery(data: Double)

class ComputeActor extends Actor {
  var buffer = scala.collection.mutable.ArrayBuffer[Double]()
  val functionsToCompute = List("f1", "f2", "f3", "f4", "f5")
  var functionMap = scala.collection.mutable.LinkedHashMap[String, (Map[String, Any]) => Double]()
  functionMap += {"f1" -> f1}
  functionMap += {"f2" -> f2}
  functionMap += {"f3" -> f3}
  functionMap += {"f4" -> f4}
  functionMap += {"f5" -> f5}

  def updateData(data: Double): scala.collection.mutable.ArrayBuffer[Double] = {
    buffer += data
    buffer
  }

  def f1(map: Map[String, Any]): Double = {
    // println("hello from f1")
    0.0
  }

  def f2(map: Map[String, Any]): Double = {
    // println("hello from f2")
    0.0
  }

  def f3(map: Map[String, Any]): Double = {
    // println("hello from f3")
    0.0
  }

  def f4(map: Map[String, Any]): Double = {
    // println("hello from f4")
    0.0
  }

  def f5(map: Map[String, Any]): Double = {
    // println("hello from f5")
    0.0
  }

  def computeValues(immutableBuffer: IndexedSeq[Double]): Map[String, Double] = {
    var map = Map[String, Double]()
    try {
      functionsToCompute.foreach(function => {
        val value = functionMap(function)
        function match {
          case "f1" =>
            var v = value(Map("lookback" -> 10, "buffer" -> immutableBuffer, "parm1" -> 0.0))
            map += {function -> v}
          case "f2" =>
            var v = value(Map("lookback" -> 20, "buffer" -> immutableBuffer))
            map += {function -> v}
          case "f3" =>
            var v = value(Map("lookback" -> 30, "buffer" -> immutableBuffer, "parm1" -> 1.0, "parm2" -> false))
            map += {function -> v}
          case "f4" =>
            var v = value(Map("lookback" -> 40, "buffer" -> immutableBuffer))
            map += {function -> v}
          case "f5" =>
            var v = value(Map("buffer" -> immutableBuffer))
            map += {function -> v}
          case _ =>
            this.unhandled(function)
        }
      })
    } catch {
      case ex: Exception =>
        ex.printStackTrace()
    }
    map
  }

  def receive = {
    case DataDelivery(data) =>
      val startTime = System.nanoTime() / 1000
      val answers = computeValues(updateData(data))
      val endTime = System.nanoTime() / 1000
      val elapsedTime = endTime - startTime
      println("elapsed time is " + elapsedTime)
      // reply or forward
    case msg =>
      println("msg is " + msg)
  }
}

object Test {
  def main(args: Array[String]) {
    val system = ActorSystem("actorSystem")
    val computeActor = system.actorOf(Props(new ComputeActor), "computeActor")
    var i = 0
    while (i < 1000) {
      computeActor ! DataDelivery(i.toDouble)
      i += 1
    }
  }
}
When I run this the output (converted to microseconds) is
elapsed time is 4898
elapsed time is 184
elapsed time is 144
.
.
.
elapsed time is 109
elapsed time is 103
You can see the JVM's JIT compiler warming up.
I thought that one quick win might be to change
functionsToCompute.foreach(function => {
to
functionsToCompute.par.foreach(function => {
but this results in the following elapsed times
elapsed time is 31689
elapsed time is 4874
elapsed time is 622
.
.
.
elapsed time is 698
elapsed time is 2171
Some info:
1) I'm running this on a Macbook Pro with 2 cores.
2) In the full version, the functions are long running operations that loop over portions of the mutable shared buffer. This doesn't appear to be a problem since retrieving messages from the actor's mailbox is controlling the flow, but I suspect it could be an issue with increased concurrency. This is why I've converted to an IndexedSeq.
3) In the full version, the functionsToCompute list may vary, so that not all items in the functionMap are necessarily called (i.e.) functionMap.size may be much larger than functionsToCompute.size
4) The functions can be computed in parallel, but the resultant map must be complete before returning
Some questions:
1) What can I do to make the parallel version run faster?
2) Where would it make sense to add non-blocking and blocking futures?
3) Where would it make sense to forward computation to another actor?
4) What are some opportunities for increasing immutability/safety?
Thanks,
Bruce
Providing an example, as requested (sorry about the delay... I don't have notifications on for SO).
There's a great example in the Akka documentation's section on 'Composing Futures', but I'll give you something a little more tailored to your situation.
Now, after reading this, please take some time to read through the tutorials and docs on Akka's website. You're missing a lot of key information that those docs will provide for you.
import akka.dispatch.{Await, Future, ExecutionContext}
import akka.util.duration._
import java.util.concurrent.Executors

object Main {
  // This just makes the example work. You probably have enough context
  // set up already to not need these next two lines
  val pool = Executors.newCachedThreadPool()
  implicit val ec = ExecutionContext.fromExecutorService(pool)

  // I'm simulating your function. It just has to return a tuple, I believe,
  // with a String and a Double
  def theFunction(s: String, d: Double) = (s, d)

  def main(args: Array[String]) {
    // Here we run your functions - I'm just doing a thousand of them
    // for fun. You do what you need to do
    val listOfFutures = (1 to 1000) map { i =>
      // Run them in parallel in the future
      Future {
        theFunction(i.toString, i.toDouble)
      }
    }

    // These lines can be composed better, but breaking them up should
    // be more illustrative.
    //
    // Turn the list of Futures (i.e. Seq[Future[(String, Double)]]) into a
    // Future with a sequence of results (i.e. Future[Seq[(String, Double)]])
    val futureOfResults = Future.sequence(listOfFutures)

    // Convert that future into another future that contains a map
    // instead of a sequence
    val intermediate = futureOfResults map { _.toList.toMap }

    // Wait for it to complete. Ideally you don't do this. Continue to
    // transform the future into other forms or use pipeTo() to get it to go
    // as a result to some other Actor. "Await" is really just evil... the
    // only place you should really use it is in silly programs like this or
    // some other special purpose app.
    val resultingMap = Await.result(intermediate, 1 second)
    println(resultingMap)

    // Again, just to make the example work
    pool.shutdown()
  }
}
All you need in your classpath to get this running is the akka-actor jar. The Akka website will tell you how to set up what you need, but it's really dead simple.
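To tie this back to the question: a sketch (untested, reusing the imports from the example above) of a computeValues variant built on the same Future.sequence pattern. It assumes the question's functionsToCompute and functionMap are in scope, the per-function argument maps are simplified placeholders, and inside ComputeActor you could pass context.dispatcher as the ExecutionContext.

def computeValuesInParallel(immutableBuffer: IndexedSeq[Double])
                           (implicit ec: ExecutionContext): Map[String, Double] = {
  // one future per function name; each yields a (name, result) pair
  val futures = functionsToCompute map { name =>
    Future { name -> functionMap(name)(Map("buffer" -> immutableBuffer)) }
  }
  // Future[List[(String, Double)]]; block briefly and build the complete map,
  // since the question requires the full map before returning
  Await.result(Future.sequence(futures), 1 second).toMap
}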
We have some code which needs to run faster. It's already profiled, so we would like to make use of multiple threads. Usually I would set up an in-memory queue and have a number of threads taking jobs off the queue and calculating the results. For the shared data I would use a ConcurrentHashMap or similar.
I don't really want to go down that route again. From what I have read, using actors will result in cleaner code, and if I use Akka, migrating to more than one JVM should be easier. Is that true?
However, I don't know how to think in actors, so I am not sure where to start.
To give a better idea of the problem here is some sample code:
case class Trade(price: Double, volume: Int, stock: String) {
  def value(priceCalculator: PriceCalculator) =
    (priceCalculator.priceFor(stock) - price) * volume
}

class PriceCalculator {
  def priceFor(stock: String) = {
    Thread.sleep(20) // a slow operation which can be cached
    50.0
  }
}

object ValueTrades {
  def valueAll(trades: List[Trade],
               priceCalculator: PriceCalculator): List[(Trade, Double)] = {
    trades.map { trade => (trade, trade.value(priceCalculator)) }
  }

  def main(args: Array[String]) {
    val trades = List(
      Trade(30.5, 10, "Foo"),
      Trade(30.5, 20, "Foo")
      // usually much longer
    )
    val priceCalculator = new PriceCalculator
    val values = valueAll(trades, priceCalculator)
  }
}
I'd appreciate it if someone with experience using actors could suggest how this would map onto actors.
This is a complement to my comment on shared results for expensive calculations. Here it is:
import scala.actors._
import Actor._
import Futures._

case class PriceFor(stock: String) // Ask for result

// The following could be an "object" as well, if it's supposed to be singleton
class PriceCalculator extends Actor {
  val map = new scala.collection.mutable.HashMap[String, Future[Double]]()

  def act = loop {
    react {
      case PriceFor(stock) => reply(map getOrElseUpdate (stock, future {
        Thread.sleep(2000) // a slow operation
        50.0
      }))
    }
  }
}
Here's a usage example:
scala> val pc = new PriceCalculator; pc.start
pc: PriceCalculator = PriceCalculator@141fe06
scala> class Test(stock: String) extends Actor {
| def act = {
| println(System.currentTimeMillis().toString+": Asking for stock "+stock)
| val f = (pc !? PriceFor(stock)).asInstanceOf[Future[Double]]
| println(System.currentTimeMillis().toString+": Got the future back")
| val res = f.apply() // this blocks until the result is ready
| println(System.currentTimeMillis().toString+": Value: "+res)
| }
| }
defined class Test
scala> List("abc", "def", "abc").map(new Test(_)).map(_.start)
1269310737461: Asking for stock abc
res37: List[scala.actors.Actor] = List(Test@6d888e, Test@1203c7f, Test@163d118)
1269310737461: Asking for stock abc
1269310737461: Asking for stock def
1269310737464: Got the future back
scala> 1269310737462: Got the future back
1269310737465: Got the future back
1269310739462: Value: 50.0
1269310739462: Value: 50.0
1269310739465: Value: 50.0
scala> new Test("abc").start // Should return instantly
1269310755364: Asking for stock abc
res38: scala.actors.Actor = Test@15b5b68
1269310755365: Got the future back
scala> 1269310755367: Value: 50.0
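To connect this back to the question's Trade/valueAll code, a sketch (untested, using the same imports as above): ask the caching PriceCalculator actor for each distinct stock's price future up front, then force the futures only when computing the values.

def valueAllCached(trades: List[Trade], pc: PriceCalculator): List[(Trade, Double)] = {
  // one cached future per distinct stock symbol
  val priceFutures: Map[String, Future[Double]] =
    trades.map(_.stock).distinct.map { s =>
      s -> (pc !? PriceFor(s)).asInstanceOf[Future[Double]]
    }.toMap
  // forcing a future blocks only until that stock's price is ready, once per stock
  trades.map { t => (t, (priceFutures(t.stock).apply() - t.price) * t.volume) }
}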
For simple parallelization, where I throw a bunch of work out to process and then wait for it all to come back, I tend to like to use a Futures pattern.
class ActorExample {
  import actors._
  import Actor._

  class Worker(val id: Int) extends Actor {
    def busywork(i0: Int, i1: Int) = {
      var sum, i = i0
      while (i < i1) {
        i += 1
        sum += 42 * i
      }
      sum
    }
    def act() { loop { react {
      case (i0: Int, i1: Int) => sender ! busywork(i0, i1)
      case None => exit()
    }}}
  }

  val workforce = (1 to 4).map(i => new Worker(i)).toList

  def parallelFourSums = {
    workforce.foreach(_.start())
    val futures = workforce.map(w => w !! ((w.id, 1000000000)))
    val computed = futures.map(f => f() match {
      case i: Int => i
      case _ => throw new IllegalArgumentException("I wanted an int!")
    })
    workforce.foreach(_ ! None)
    computed
  }

  def serialFourSums = {
    val solo = workforce.head
    workforce.map(w => solo.busywork(w.id, 1000000000))
  }

  def timed(f: => List[Int]) = {
    val t0 = System.nanoTime
    val result = f
    val t1 = System.nanoTime
    (result, t1 - t0)
  }

  def go {
    val serial = timed(serialFourSums)
    val parallel = timed(parallelFourSums)
    println("Serial result: " + serial._1)
    println("Parallel result:" + parallel._1)
    printf("Serial took %.3f seconds\n", serial._2 * 1e-9)
    printf("Parallel took %.3f seconds\n", parallel._2 * 1e-9)
  }
}
Basically, the idea is to create a collection of workers--one per workload--and then throw all the data at them with !! which immediately gives back a future. When you try to read the future, the sender blocks until the worker's actually done with the data.
You could rewrite the above so that PriceCalculator extended Actor instead, and valueAll coordinated the return of the data.
Note that you have to be careful passing non-immutable data around.
Anyway, on the machine I'm typing this from, if you run the above you get:
scala> (new ActorExample).go
Serial result: List(-1629056553, -1629056636, -1629056761, -1629056928)
Parallel result:List(-1629056553, -1629056636, -1629056761, -1629056928)
Serial took 1.532 seconds
Parallel took 0.443 seconds
(Obviously I have at least four cores; the parallel timing varies rather a bit depending on which worker gets what processor and what else is going on on the machine.)
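Picking up the earlier suggestion about rewriting valueAll: a sketch (untested; Valuer is a made-up name, and Trade/PriceCalculator come from the question) of the same Futures pattern with one worker per trade and !! collecting the results.

import scala.actors._
import scala.actors.Actor._

class Valuer(priceCalculator: PriceCalculator) extends Actor {
  def act() { loop { react {
    case trade: Trade => sender ! ((trade, trade.value(priceCalculator)))
    case None => exit()
  }}}
}

def valueAllParallel(trades: List[Trade],
                     priceCalculator: PriceCalculator): List[(Trade, Double)] = {
  val workers = trades.map { _ => val w = new Valuer(priceCalculator); w.start(); w }
  val futures = trades.zip(workers).map { case (t, w) => w !! t }     // fire off all the work
  val results = futures.map(_.apply().asInstanceOf[(Trade, Double)]) // block until each worker replies
  workers.foreach(_ ! None)                                          // shut the workers down
  results
}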