Possibility to set min_wait/max_wait per task or something similar? - locust

I am trying to simulate a specific behavior, where one API call gets executed every 8 seconds and another every 24 seconds. Locust allows setting min_wait and max_wait for the whole task_set. Is there a way to set it per task, to safely prevent a task from being executed until a given time has passed, or to schedule each task on its own specific interval?
Example:
from locust import HttpLocust, TaskSet, task

class Paint(TaskSet):
    def on_start(self):
        self.login()

    def login(self):
        data = {'username': "paint", 'password': 'bucket'}
        self.auth = self.client.post('/auth', data)

    @task(1)
    def get_red(self):
        min_wait = 8000
        max_wait = 8000
        self.client.get("/red", headers=self.auth.request.headers['Cookie'])

    @task(1)
    def get_blue(self):
        min_wait = 24000
        max_wait = 24000
        self.client.get("/blue", headers=self.auth.request.headers['Cookie'])

class PaintBucket(HttpLocust):
    task_set = Paint

As already commented, this is not how Locust is designed to be used, but there is a way to work around it: you can wait inside the task instead of between tasks. Based on your example:
from locust import HttpLocust, TaskSet, task
from random import randint

class Paint(TaskSet):
    def on_start(self):
        self.login()

    def login(self):
        data = {'username': "paint", 'password': 'bucket'}
        self.auth = self.client.post('/auth', data)

    @task(1)
    def get_red(self):
        min_wait = 8000
        max_wait = 8000
        self.client.get("/red", headers=self.auth.request.headers['Cookie'])
        self._sleep(randint(min_wait, max_wait))

    @task(1)
    def get_blue(self):
        min_wait = 24000
        max_wait = 24000
        self.client.get("/blue", headers=self.auth.request.headers['Cookie'])
        self._sleep(randint(min_wait, max_wait))

class PaintBucket(HttpLocust):
    task_set = Paint
    min_wait = 0
    max_wait = 0
Of course, if you always want the same wait time, you can just specify that instead of the min/max/random:
from locust import HttpLocust, TaskSet, task

class Paint(TaskSet):
    def on_start(self):
        self.login()

    def login(self):
        data = {'username': "paint", 'password': 'bucket'}
        self.auth = self.client.post('/auth', data)

    @task(1)
    def get_red(self):
        wait = 8000
        self.client.get("/red", headers=self.auth.request.headers['Cookie'])
        self._sleep(wait)

    @task(1)
    def get_blue(self):
        wait = 24000
        self.client.get("/blue", headers=self.auth.request.headers['Cookie'])
        self._sleep(wait)

class PaintBucket(HttpLocust):
    task_set = Paint
    min_wait = 0
    max_wait = 0

Related

locust 0.9 to 1.3 Exception: No tasks defined. Use the @task decorator or set the tasks property of the User

I have the following code, which runs fine in locust 0.9. Now with 1.3 it throws the exception mentioned in the title. Can anyone see what's wrong?
import time
import random
import datetime
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
import logging
import json
import os
from random import randint, choice
from locust import HttpUser, TaskSet, task
from pyquery import PyQuery

requests.packages.urllib3.disable_warnings()

class FrontPage(TaskSet):
    def on_start(self):
        self.client.verify = False

    @task(20)
    def index(self):
        self.client.get("/")

class DestinationPagesFixed(TaskSet):
    de_paths = ["/belgien", "daenemark", "deutschland", "frankreich", "griechenland"
                , "italien"
                , "luxemburg"
                ]

    def on_start(self):
        self.client.verify = False

    @task
    def test_1(self):
        paths = self.de_paths
        path = choice(paths)
        self.client.get(path, name="Static page")

class UserBehavior(TaskSet):
    tasks = {FrontPage: 15, DestinationPagesFixed: 19}

class WebsiteUser(HttpUser):
    task_set = UserBehavior
    min_wait = 400
    max_wait = 10000
Change
task_set = UserBehavior
to
tasks = [UserBehavior]
or (skipping your UserBehavior class entirely):
tasks = {FrontPage: 15, DestinationPagesFixed: 19}

Calling method with return type future is not working in parallel, why?

I was learning the Future monad in Scala. I wrote the following code:
// assumes the global ExecutionContext for the Futures
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.Random

object MultipleFutures extends App {
  // (a) create three futures
  val aaplFuture = getStockPrice("AAPL")
  val amznFuture = getStockPrice("AMZN")
  val googFuture = getStockPrice("GOOG")

  def sleep(time: Long): Unit = Thread.sleep(time)

  Thread.sleep(5000)

  def getStockPrice(stockSymbol: String): Future[Double] = {
    println(s"starting $stockSymbol")
    val r = scala.util.Random
    val randomSleepTime = r.nextInt(3000)
    println(s"For $stockSymbol, sleep time is $randomSleepTime")
    sleep(randomSleepTime)
    fetchData()
  }

  def fetchData() = Future {
    Thread.sleep(Random.nextInt(10000))
    Random.nextDouble()
  }
}
I get the result in sequential order:
starting AAPL
For AAPL, sleep time is 2925
starting AMZN
For AMZN, sleep time is 336
starting GOOG
For GOOG, sleep time is 1065
But when I convert the getStockPrice method to:
def getStockPrice(stockSymbol: String): Future[Double] = Future {
  println(s"starting $stockSymbol")
  val r = scala.util.Random
  val randomSleepTime = r.nextInt(3000)
  println(s"For $stockSymbol, sleep time is $randomSleepTime")
  sleep(randomSleepTime)
  fetchData()
}.flatten
The code started running in parallel.
starting AAPL
starting GOOG
starting AMZN
For AAPL, sleep time is 1233
For GOOG, sleep time is 1734
For AMZN, sleep time is 1
I don't understand why.
In the first version of getStockPrice you are calling sleep(randomSleepTime) before creating the Future, so it runs in the main thread.
In the second version everything in the function is inside the Future, so it runs in parallel.
To avoid the Thread.sleep in the main App, use:
val futures = List(aaplFuture, amznFuture, googFuture)
Await.result(Future.sequence(futures), Duration.Inf)
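Putting that together, a minimal self-contained sketch of the parallel version (the MultipleFuturesParallel object name and the use of the global ExecutionContext are my own choices for illustration, not from the original post):
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import scala.util.Random

object MultipleFuturesParallel extends App {
  def sleep(time: Long): Unit = Thread.sleep(time)

  def fetchData(): Future[Double] = Future {
    Thread.sleep(Random.nextInt(10000))
    Random.nextDouble()
  }

  // The whole body runs inside Future, so the three calls below are handed
  // to the execution context and run concurrently.
  def getStockPrice(stockSymbol: String): Future[Double] = Future {
    println(s"starting $stockSymbol")
    val randomSleepTime = Random.nextInt(3000)
    println(s"For $stockSymbol, sleep time is $randomSleepTime")
    sleep(randomSleepTime)
    fetchData()
  }.flatten

  val futures = List(getStockPrice("AAPL"), getStockPrice("AMZN"), getStockPrice("GOOG"))

  // Block the main thread only until all three futures complete,
  // instead of an arbitrary Thread.sleep(5000).
  val prices = Await.result(Future.sequence(futures), Duration.Inf)
  println(prices)
}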

In akka streaming program w/ Source.queue & Sink.queue I offer 1000 items, but it just hangs when I try to get 'em out

I am trying to understand how I should be working with Source.queue & Sink.queue in Akka streaming.
In the little test program that I wrote below, I find that I am able to successfully offer 1000 items to the Source.queue.
However, when I wait on the future that should give me the results of pulling all those items off the queue, my future never completes. Specifically, the message 'print what we pulled off the queue' that we should see at the end never prints out -- instead we see the error "TimeoutException: Futures timed out after [10 seconds]".
Any guidance greatly appreciated!
import akka.actor.ActorSystem
import akka.event.{Logging, LoggingAdapter}
import akka.stream.scaladsl.{Flow, Keep, Sink, Source}
import akka.stream.{ActorMaterializer, Attributes}
import org.scalatest.FunSuite

import scala.collection.immutable
import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContext, Future}

class StreamSpec extends FunSuite {
  implicit val actorSystem: ActorSystem = ActorSystem()
  implicit val materializer: ActorMaterializer = ActorMaterializer()
  implicit val log: LoggingAdapter = Logging(actorSystem.eventStream, "basis-test")
  implicit val ec: ExecutionContext = actorSystem.dispatcher

  case class Req(name: String)
  case class Response(
      httpVersion: String = "",
      method: String = "",
      url: String = "",
      headers: Map[String, String] = Map())

  test("put items on queue then take them off") {
    val source = Source.queue[String](128, akka.stream.OverflowStrategy.backpressure)
    val flow = Flow[String].map(element => s"Modified $element")
    val sink = Sink.queue[String]().withAttributes(Attributes.inputBuffer(128, 128))
    val (sourceQueue, sinkQueue) = source.via(flow).toMat(sink)(Keep.both).run()

    (1 to 1000).map { i =>
      Future {
        println("offered " + i) // I see this print 1000 times as expected
        sourceQueue.offer(s"batch-$i")
      }
    }
    println("DONE OFFER FUTURE FIRING")

    // Now use the Sink.queue to pull the items we added onto the Source.queue
    val seqOfFutures: immutable.Seq[Future[Option[String]]] =
      (1 to 1000).map { i => sinkQueue.pull() }
    val futureOfSeq: Future[immutable.Seq[Option[String]]] =
      Future.sequence(seqOfFutures)
    val seq: immutable.Seq[Option[String]] =
      Await.result(futureOfSeq, 10.second)
    // unfortunately our future times out here
    println("print what we pulled off the queue:" + seq)
  }
}
Looking at this again, I realize that I originally set up and posed my question incorrectly.
The test that accompanies my original question launches a wave of 1000 futures, each of which tries to offer 1 item to the queue. The second step in that test then attempts to create a 1000-element sequence (seqOfFutures) where each future is trying to pull a value from the queue.
My theory as to why I was getting time-out errors is that there was some kind of deadlock, due to running out of threads or due to one thread waiting on another thread that was itself blocked, or something like that. I'm not interested in hunting down the exact cause at this point because I have corrected things in the code below (see CORRECTED CODE).
In the new code the test that uses the queue is called "put items on queue then take them off (with async parallelism) - (3)". In this test I have a set of 10 tasks which run in parallel to do the enqueue operation, and another 10 tasks which do the dequeue operation, which involves not only taking the item off the queue but also calling stringModifyFunc, which introduces a 1 ms processing delay.
I also wanted to prove that I got some performance benefit from launching tasks in parallel and having the task steps communicate by passing their results through a queue, so test 3 runs as a timed operation, and I found that it takes 1.9 seconds.
Tests (1) and (2) do the same amount of work, but serially -- the first with no intervening queue, and the second using the queue to pass results between steps. These tests run in 13.6 and 15.6 seconds respectively, which shows that the queue adds a bit of overhead, but that this is overshadowed by the efficiency of running tasks in parallel.
CORRECTED CODE
import akka.{Done, NotUsed}
import akka.actor.ActorSystem
import akka.event.{Logging, LoggingAdapter}
import akka.stream.scaladsl.{Flow, Keep, Sink, Source}
import akka.stream.{ActorMaterializer, Attributes, QueueOfferResult}
import org.scalatest.FunSuite

import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContext, Future}

class Speco extends FunSuite {
  implicit val actorSystem: ActorSystem = ActorSystem()
  implicit val materializer: ActorMaterializer = ActorMaterializer()
  implicit val log: LoggingAdapter = Logging(actorSystem.eventStream, "basis-test")
  implicit val ec: ExecutionContext = actorSystem.dispatcher

  val stringModifyFunc: String => String = element => {
    Thread.sleep(1)
    s"Modified $element"
  }

  def setup = {
    val source = Source.queue[String](128, akka.stream.OverflowStrategy.backpressure)
    val sink = Sink.queue[String]().withAttributes(Attributes.inputBuffer(128, 128))
    val (sourceQueue, sinkQueue) = source.toMat(sink)(Keep.both).run()
    val offers: Source[String, NotUsed] = Source(
      (1 to iterations).map { i =>
        s"item-$i"
      }
    )
    (sourceQueue, sinkQueue, offers)
  }

  val outer = 10
  val inner = 1000
  val iterations = outer * inner

  def timedOperation[T](block: => T) = {
    val t0 = System.nanoTime()
    val result: T = block // call-by-name
    val t1 = System.nanoTime()
    println("Elapsed time: " + (t1 - t0) / (1000 * 1000) + " milliseconds")
    result
  }

  test("20k iterations in single threaded loop no queue (1)") {
    timedOperation {
      (1 to iterations).foreach { i =>
        val str = stringModifyFunc(s"tag-${i.toString}")
        System.out.println("str:" + str)
      }
    }
  }

  test("20k iterations in single threaded loop with queue (2)") {
    timedOperation {
      val (sourceQueue, sinkQueue, offers) = setup
      val resultFuture: Future[Done] = offers.runForeach { str =>
        val itemFuture = for {
          _ <- sourceQueue.offer(str)
          item <- sinkQueue.pull()
        } yield stringModifyFunc(item.getOrElse("failed"))
        val item = Await.result(itemFuture, 10.second)
        System.out.println("item:" + item)
      }
      val result = Await.result(resultFuture, 20.second)
      System.out.println("result:" + result)
    }
  }

  test("put items on queue then take them off (with async parallelism) - (3)") {
    timedOperation {
      val (sourceQueue, sinkQueue, offers) = setup

      def enqueue(str: String) = sourceQueue.offer(str)

      def dequeue = {
        sinkQueue.pull().map { maybeStr =>
          val str = stringModifyFunc(maybeStr.getOrElse("failed2"))
          println(s"dequeued value is $str")
        }
      }

      val offerResults: Source[QueueOfferResult, NotUsed] =
        offers.mapAsyncUnordered(10) { string => enqueue(string) }
      val dequeueResults: Source[Unit, NotUsed] = offerResults.mapAsyncUnordered(10) { _ => dequeue }
      val runAll: Future[Done] = dequeueResults.runForeach(u => u)

      Await.result(runAll, 20.second)
    }
  }
}

monitor system users on raspberry pi with akka actors

I have a Raspberry Pi on my network with an LED strip attached to it.
My purpose is to create a jar file that will sit on the Pi, monitor system events such as logins and load average, and drive the LED based on those inputs.
To continuously monitor the logged-in users, I am trying to use Akka actors. Using the examples provided here, this is what I've got so far:
import com.pi4j.io.gpio.GpioFactory
import com.pi4j.io.gpio.RaspiPin
import sys.process._
import akka.actor.{Actor, Props, ActorSystem}
import scala.concurrent.duration._

val who: String = "who".!!

class Blinker extends Actor {
  private def gpio = GpioFactory.getInstance
  private def led = gpio.provisionDigitalOutputPin(RaspiPin.GPIO_08)

  def receive = {
    case x if who.contains("pi") => led.blink(250)
    case x if who.contains("moocow") => println("falalalala")
  }

  val blinker = system.actorOf(Props(classOf[Blinker], this))
  val cancellable = system.scheduler.schedule(
    0 milliseconds,
    50 milliseconds,
    blinker,
    who)
}
However, system is not recognised by my IDE (IntelliJ); it says "cannot resolve symbol".
I also have a main object like this:
object ledStrip {
  def main(args: Array[String]): Unit = {
    val blink = new Blinker
    // blink.receive
  }
}
In main, I'm not quite sure how to initialise the application.
Needless to say, this is my first time writing a Scala program.
Help?
Edit:
Here is the updated program after incorporating what Michal has said:
class Blinker extends Actor {
  val who: String = "who".!!
  private val gpio = GpioFactory.getInstance
  private val led = gpio.provisionDigitalOutputPin(RaspiPin.GPIO_08)

  def receive = {
    case x if who.contains("pi") => led.blink(250)
    case x if who.contains("moocow") => println("falalalala")
  }

  val system = ActorSystem()
}

object ledStrip extends Blinker {
  def main(args: Array[String]): Unit = {
    val blinker = system.actorOf(Props(classOf[Blinker], this))
    import system.dispatcher
    val cancellable =
      system.scheduler.schedule(
        50 milliseconds,
        5000 milliseconds,
        blinker,
        who)
  }
}
This program compiles fine, but throws the following error upon execution:
Exception in thread "main" java.lang.ExceptionInInitializerError
    at ledStrip.main(ledStrip.scala)
Caused by: akka.actor.ActorInitializationException: You cannot create an instance of [ledStrip$] explicitly using the constructor (new). You have to use one of the 'actorOf' factory methods to create a new actor. See the documentation.
    at akka.actor.ActorInitializationException$.apply(Actor.scala:194)
    at akka.actor.Actor.$init$(Actor.scala:472)
    at Blinker.<init>(ledStrip.scala:15)
    at ledStrip$.<init>(ledStrip.scala:34)
    at ledStrip$.<clinit>(ledStrip.scala)
    ... 1 more
Edit 2:
Code that compiles and runs (behaviour is still not as desired: blink(1500) is never executed when user 'pi' logs out from the shell).
object sysUser {
  val who: String = "who".!!
}

class Blinker extends Actor {
  private val gpio = GpioFactory.getInstance
  private val led = gpio.provisionDigitalOutputPin(RaspiPin.GPIO_08)

  def receive = {
    case x if x.toString.contains("pi") => led.blink(50)
    case x if x.toString.contains("moocow") => println("falalalala")
    case _ => led.blink(1500)
  }
}

object ledStrip {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem()
    val blinker = system.actorOf(Props[Blinker], "blinker")
    import system.dispatcher
    val cancellable =
      system.scheduler.schedule(
        50 milliseconds,
        5000 milliseconds,
        blinker,
        sysUser.who)
  }
}
Well, it looks like you haven't defined "system" anywhere. See this example, for instance:
https://doc.akka.io/docs/akka/current/actors.html#here-is-another-example-that-you-can-edit-and-run-in-the-browser-
You'll find this line there:
val system = ActorSystem("pingpong")
That's what creates the ActorSystem and defines the val called "system", which you then call methods on.
In the main, I don't think you want to create another instance with "new Blinker", just use:
system.actorOf(Props[Blinker], "blinker")
(which you are already doing, putting the result into the "blinker" val).
It seems to be just an Akka usage issue. A few things in your code looked strange to me, so I changed them; see change1, change2 and change3 below, FYI.
class Blinker extends Actor {
  val who: String = "who".!!
  private val gpio = GpioFactory.getInstance
  private val led = gpio.provisionDigitalOutputPin(RaspiPin.GPIO_08)

  def receive = {
    case x if who.contains("pi") => led.blink(250)
    case x if who.contains("moocow") => println("falalalala")
  }
}

object ledStrip { // change1
  def main(args: Array[String]): Unit = {
    val system = ActorSystem() // change2
    val blinker = system.actorOf(Props(classOf[Blinker])) // change3
    val who: String = "who".!! // needed here so that `who` is in scope for the scheduler call
    import system.dispatcher
    val cancellable =
      system.scheduler.schedule(
        50 milliseconds,
        5000 milliseconds,
        blinker,
        who)
  }
}
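A note on the Edit 2 behaviour (blink(1500) never firing after user pi logs out): sysUser.who runs the who command once, when the object is first initialised, and the scheduler then re-sends that same captured string every 5 seconds, so the actor never sees the login state change. One way around this is sketched below, assuming it is acceptable to re-run who on every tick; the Tick message and the ledStripTick name are just illustrative, and the pi4j calls are replaced by println placeholders to keep the sketch self-contained:
import akka.actor.{Actor, ActorSystem, Props}
import scala.concurrent.duration._
import sys.process._

case object Tick

class Blinker extends Actor {
  // pi4j setup omitted; the same provisioning as in the question would go here.
  def receive = {
    case Tick =>
      val who = "who".!!                                // re-run the command on every tick
      if (who.contains("pi")) println("blink fast")     // led.blink(50)
      else if (who.contains("moocow")) println("falalalala")
      else println("blink slow")                        // led.blink(1500)
  }
}

object ledStripTick {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem()
    val blinker = system.actorOf(Props[Blinker], "blinker")
    import system.dispatcher
    // Send Tick every 5 seconds; the actor reacts to the *current* output of who.
    system.scheduler.schedule(50.milliseconds, 5000.milliseconds, blinker, Tick)
  }
}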

Storing the results of Quartz job runs

Is there a magic setting in Quartz (2.2) to store the results of job runs?
I was naively expecting the fired triggers to be stored in the qrtz_fired_triggers table; however, it looks like they get deleted after execution, and then there is no history at all. Am I missing something, or should I do this myself?
quartz.properties:
org.quartz.scheduler.instanceName = MyScheduler
org.quartz.threadPool.threadCount = 3
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.dataSource = quartzDataSource
org.quartz.jobStore.useProperties = true
org.quartz.dataSource.quartzDataSource.driver = com.mysql.jdbc.Driver
org.quartz.dataSource.quartzDataSource.URL = <url>
org.quartz.dataSource.quartzDataSource.user = <user>
org.quartz.dataSource.quartzDataSource.password = <pw>
org.quartz.dataSource.quartzDataSource.maxConnections = 10
Relevant code snippets:
import org.quartz.JobBuilder.newJob
import org.quartz.SimpleScheduleBuilder.simpleSchedule
import org.quartz.TriggerBuilder._
import org.quartz.impl.StdSchedulerFactory
import org.quartz._
import org.quartz.JobKey._
import org.quartz.TriggerKey._

JobHelper.at("00 50 14 11 12 ? 2013", "demojob201312111450", Map("prop" -> "val"), classOf[DemoJob])

object JobHelper {
  private val scheduler = StdSchedulerFactory.getDefaultScheduler
  private val defaultGroup = "Group"

  def at(cronPattern: String, name: String, jobData: Map[String, _], jobClass: Class[_ <: Job]) {
    val job = _makeJob(name, jobData, jobClass)
    val trigger = newTrigger()
      .withIdentity(name)
      .withSchedule(CronScheduleBuilder.cronSchedule(new CronExpression(cronPattern)))
      .build
    scheduler.scheduleJob(job, trigger)
  }
}
import scala.collection.JavaConversions.mapAsScalaMap

@PersistJobDataAfterExecution
@DisallowConcurrentExecution
class DemoJob extends Job {
  def execute(ctx: JobExecutionContext) {
    val name = ctx.getJobDetail.getKey.getName
    val data: Map[String, Object] = mapAsScalaMap(ctx.getMergedJobDataMap.getWrappedMap).toMap
    println(s"""running DemoJob $name with data: $data""")
  }
}
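As far as I know, Quartz 2.2 uses QRTZ_FIRED_TRIGGERS only for in-flight firings and keeps no per-run history of its own, so recording results is something you wire up yourself. A minimal sketch of one way to do that with a JobListener follows; the RunHistoryListener name and the recordRun helper are purely illustrative (recordRun just prints here, but could write to a history table of your own):
import org.quartz.{JobExecutionContext, JobExecutionException, JobListener}
import org.quartz.impl.StdSchedulerFactory

// Hypothetical listener that records every completed run; where the record
// goes (DB table, log file, ...) is up to you -- recordRun is not a Quartz API.
class RunHistoryListener extends JobListener {
  override def getName: String = "run-history"

  override def jobToBeExecuted(ctx: JobExecutionContext): Unit = ()

  override def jobExecutionVetoed(ctx: JobExecutionContext): Unit = ()

  override def jobWasExecuted(ctx: JobExecutionContext, ex: JobExecutionException): Unit = {
    val key     = ctx.getJobDetail.getKey
    val fired   = ctx.getFireTime
    val millis  = ctx.getJobRunTime
    val result  = Option(ctx.getResult)        // whatever the job put into setResult()
    val failure = Option(ex).map(_.getMessage)
    recordRun(key.toString, fired, millis, result, failure)
  }

  private def recordRun(job: String, fired: java.util.Date, runTimeMs: Long,
                        result: Option[AnyRef], failure: Option[String]): Unit =
    println(s"$job fired at $fired, took ${runTimeMs}ms, result=$result, failure=$failure")
}

object RegisterListener {
  def main(args: Array[String]): Unit = {
    val scheduler = StdSchedulerFactory.getDefaultScheduler
    // Registers for all jobs; a GroupMatcher or KeyMatcher can narrow the scope if needed.
    scheduler.getListenerManager.addJobListener(new RunHistoryListener)
    scheduler.start()
  }
}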