As I understand it, the idea of a pool in gevent is to limit the total number of concurrent requests at any time, to a database, an API, or similar.
Say I have code like this where I am spawning more greenlets than I have room for in the Pool:
import gevent.pool

pool = gevent.pool.Pool(50)
jobs = []
for number in xrange(300):
    jobs.append(pool.spawn(do_something, number))
total_result = [x.get() for x in jobs]
What is the actual behavior when trying to spawn the 51st request? When is the 51st request handled?
The Pool class uses a semaphore to count active greenlets, initialized with the size given to the constructor:
class Pool(Group):

    def __init__(self, size=None, greenlet_class=None):
        if size is not None and size < 1:
            raise ValueError('Invalid size for pool (positive integer or None required): %r' % (size, ))
        Group.__init__(self)
        self.size = size
        if greenlet_class is not None:
            self.greenlet_class = greenlet_class
        if size is None:
            self._semaphore = DummySemaphore()
        else:
            self._semaphore = Semaphore(size)
Every time spawn() is called, it tries to acquire the semaphore:
def spawn(self, *args, **kwargs):
    self._semaphore.acquire()
    try:
        greenlet = self.greenlet_class.spawn(*args, **kwargs)
        self.add(greenlet)
    except:
        self._semaphore.release()
        raise
    return greenlet
If the pool is full, the calling greenlet thus blocks on the _semaphore.acquire() call. The semaphore is released whenever any of the greenlets finishes execution:
def discard(self, greenlet):
    Group.discard(self, greenlet)
    self._semaphore.release()
So in your case, I'd expect the 51st request to be handled (or started, to be precise) as soon as any of the first 50 requests is done.
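To see this batching in action, here is a minimal, self-contained sketch (do_something here is just a stand-in for whatever I/O-bound work you run, and the timing comment assumes the stated one-second sleep):

import time
import gevent.pool

def do_something(number):
    gevent.sleep(1)  # simulate one second of I/O-bound work
    return number * 2

pool = gevent.pool.Pool(50)
start = time.time()
jobs = [pool.spawn(do_something, n) for n in range(300)]
total_result = [job.get() for job in jobs]
# 300 jobs through 50 slots at 1 second each take about 6 seconds:
# spawn() blocked the loop whenever the pool was full, admitting
# work in waves of 50 as earlier greenlets finished.
print('elapsed: %.1fs' % (time.time() - start))

Note that the loop calling spawn() is itself paused while the pool is full, which is usually exactly the back-pressure you want.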
Related
How do I pass the total number of users to simulate and the spawn rate as variables in the script itself, instead of entering them in the web UI when I run the Locust file?
import time
from locust import HttpUser, task, between

class QuickstartUser(HttpUser):
    wait_time = between(1, 2.5)
    users = 10
    spawn_rate = 1

    @task
    def on_start(self):
        filenumber = "ABC"
        # Get file info
        response = self.client.get(f"/files/{filenumber}")
        json_var = response.json()
        print("response Json: ", json_var)
        time.sleep(1)
You could probably do it by accessing the Runner in code, but it would be much easier if you used a Load Shape.
from locust import LoadTestShape

class MyCustomShape(LoadTestShape):
    time_limit = 600
    spawn_rate = 20

    def tick(self):
        run_time = self.get_run_time()
        if run_time < self.time_limit:
            # User count rounded to the nearest hundred.
            user_count = round(run_time, -2)
            return (user_count, self.spawn_rate)
        return None
tick() is called automatically; you just have to return a tuple of the user count and spawn rate you want, and you can do whatever work you need to calculate what they should be. There are more examples in the GitHub repo.
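For the original question, where the values should live in the script rather than come from the web UI, a minimal shape might look like the following sketch (the class and attribute names are just illustrative):

from locust import LoadTestShape

class FixedShape(LoadTestShape):
    users = 10        # total users to simulate
    spawn_rate = 1    # users spawned per second
    time_limit = 600  # run for at most 10 minutes

    def tick(self):
        if self.get_run_time() < self.time_limit:
            return (self.users, self.spawn_rate)
        return None  # returning None ends the test run

Defining a LoadTestShape subclass in the locustfile should be enough; Locust picks it up automatically and the user count and spawn rate from the UI are then ignored.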
I am trying to implement an M/M/1 Markovian process with exponential inter-arrival and exponential service times using SimPy. The code runs fine, but I don't quite get the expected results. Also, after the code runs, the number of items in the arrival-times list is smaller than the number of items in the service-times list.
# make a markovian queue
# make a server as a resource
# make customers at random times
# record the customer arrival time
# customer gets the resource
# record when the customer got the resource
# serve the customers for a random time using resource
# save this random time as service time
# customer yields the resource and next is served
import statistics
import simpy
import random

arrival_time = []
service_time = []
mean_service = 2.0
mean_arrival = 1.0
num_servers = 1

class Markovian(object):
    def __init__(self, env, num_servers):
        self.env = env
        self.servers = simpy.Resource(env, num_servers)
        #self.action = env.process(self.run())

    def server(self, packet):
        # timeout after random service time
        t = random.expovariate(1.0/mean_service)
        #service_time.append(t)
        yield self.env.timeout(t)

    def getting_service(env, packet, markovian):
        # new packet arrives in the system
        arrival_time = env.now
        with markovian.servers.request() as req:
            yield req
            yield env.process(markovian.server(packet))
            service_time.append(env.now - arrival_time)

    def run_markovian(env, num_servers):
        markovian = Markovian(env, num_servers)
        packet = 0
        # generate new packets
        while True:
            t = random.expovariate(1.0/mean_arrival)
            arrival_time.append(t)
            yield env.timeout(t)
            packet += 1
            env.process(Markovian.getting_service(env, packet, markovian))

    def get_average_service_time(service_time):
        average_service_time = statistics.mean(service_time)
        return average_service_time

def main():
    random.seed(42)
    env = simpy.Environment()
    env.process(Markovian.run_markovian(env, num_servers))
    env.run(until=50)
    print(Markovian.get_average_service_time(service_time))
    print(arrival_time)
    print(service_time)

if __name__ == "__main__":
    main()
Hello, there was basically one bug in your code, plus two queueing-theory misconceptions.
Bug: the definition of the servers was inside the class, which makes the model behave as M/M/∞ rather than M/M/1.
Answer: I moved the definition of your resources out of the class, and from now on I pass servers instead of num_servers.
Misconception 1: with the times as you defined them,
mean_service = 2.0
mean_arrival = 1.0
the arrival rate λ = 1/mean_arrival = 1.0 is higher than the service rate μ = 1/mean_service = 0.5 (utilization ρ = λ/μ = 2), so the system generates far more packets than it is able to serve. That is why the sizes of the lists were so different.
Answer:
mean_service = 1.0
mean_arrival = 2.0
which gives ρ = 0.5 and a stable queue.
Misconception 2: what you call service time in your code is actually system time (waiting time plus service time).
I also put some prints in your code so we can see what it is doing; feel free to comment them out. There is no need for the statistics library just to take a mean, so I commented the import out and compute the average directly.
I hope this answer is useful to you.
# make a markovian queue
# make a server as a resource
# make customers at random times
# record the customer arrival time
# customer gets the resource
# record when the customer got the resource
# serve the customers for a random time using resource
# save this random time as service time
# customer yields the resource and next is served
#import statistics
import simpy
import random

arrivals_time = []
service_time = []
waiting_time = []
mean_service = 1.0
mean_arrival = 2.0
num_servers = 1

class Markovian(object):
    def __init__(self, env, servers):
        self.env = env
        #self.action = env.process(self.run())

    #def server(self, packet):
    #    # timeout after random service time
    #    t = random.expovariate(1.0/mean_service)
    #    #service_time.append(t)
    #    yield self.env.timeout(t)

    def getting_service(env, packet, servers):
        # new packet arrives in the system
        begin_wait = env.now
        req = servers.request()
        yield req
        begin_service = env.now
        waiting_time.append(begin_service - begin_wait)
        print('%.1f Begin Service of packet %d' % (begin_service, packet))
        yield env.timeout(random.expovariate(1.0/mean_service))
        service_time.append(env.now - begin_service)
        yield servers.release(req)
        print('%.1f End Service of packet %d' % (env.now, packet))

    def run_markovian(env, servers):
        markovian = Markovian(env, servers)
        packet = 0
        # generate new packets
        while True:
            t = random.expovariate(1.0/mean_arrival)
            yield env.timeout(t)
            arrivals_time.append(t)
            packet += 1
            print('%.1f Arrival of packet %d' % (env.now, packet))
            env.process(Markovian.getting_service(env, packet, servers))

    def get_average_service_time(service_time):
        # plain mean; no need for the statistics module
        average_service_time = sum(service_time) / len(service_time)
        return average_service_time

def main():
    random.seed(42)
    env = simpy.Environment()
    servers = simpy.Resource(env, num_servers)
    env.process(Markovian.run_markovian(env, servers))
    env.run(until=50)
    print(Markovian.get_average_service_time(service_time))
    print("Time between consecutive arrivals \n", arrivals_time)
    print("Size: ", len(arrivals_time))
    print("Service Times \n", service_time)
    print("Size: ", len(service_time))
    print("Waiting Times \n", waiting_time)
    print("Size: ", len(waiting_time))

if __name__ == "__main__":
    main()
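As a sanity check (not part of the original answer), you can compare the simulated averages against the analytic M/M/1 values, taking λ = 1/mean_arrival and μ = 1/mean_service for the corrected parameters:

lam = 1 / 2.0   # arrival rate λ (packets per time unit)
mu = 1 / 1.0    # service rate μ
rho = lam / mu  # utilization; must be < 1 for a stable queue

mean_system_time = 1 / (mu - lam)     # W:  waiting + service = 2.0
mean_waiting_time = rho / (mu - lam)  # Wq: waiting only      = 1.0
print(rho, mean_system_time, mean_waiting_time)

With a much longer run than until=50, the simulated mean service time should settle near mean_service = 1.0 and the mean waiting time near Wq = 1.0.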
New Python user here, and first post on this great website. I haven't been able to find an answer to my question, so hopefully it is unique.
Using SimPy I am trying to create a train subway/metro simulation with failures and repairs periodically built into the system. These failures happen to the train but also to signals on sections of track and on platforms. I have read and applied the official Machine Shop example (a resemblance of which you can see in the attached code) and have thus managed to model random failures and repairs to the train by interrupting its 'journey time'.
However, I have not figured out how to model failures of the signals on the routes the trains follow. I am currently just specifying a time for a trip from A to B, which does get interrupted, but only due to train failure.
Is it possible to define each trip as its own process, i.e. a separate process for sections A_to_B and B_to_C, and separate platforms pA, pB and pC, each with a single resource (to allow only one train on it at a time), and to incorporate random failures and repairs into these section and platform processes? I would perhaps also need several sections between two platforms, any of which could experience a failure.
Any help would be greatly appreciated.
Here's my code so far:
import random
import simpy
import numpy

RANDOM_SEED = 1234
T_MEAN_A = 240.0              # mean journey time
T_MEAN_EXPO_A = 1/T_MEAN_A    # for exponential distribution
T_MEAN_B = 240.0              # mean journey time
T_MEAN_EXPO_B = 1/T_MEAN_B    # for exponential distribution
DWELL_TIME = 30.0             # amount of time train sits at platform for passengers
DWELL_TIME_EXPO = 1/DWELL_TIME
MTTF = 3600.0                 # mean time to failure (seconds)
TTF_MEAN = 1/MTTF             # for exponential distribution
REPAIR_TIME = 240.0
REPAIR_TIME_EXPO = 1/REPAIR_TIME
NUM_TRAINS = 1
SIM_TIME_DAYS = 100
SIM_TIME = 3600 * 18 * SIM_TIME_DAYS
SIM_TIME_HOURS = SIM_TIME/3600

# Defining the times for processes
def A_B():  # returns processing time for journey A to B
    return random.expovariate(T_MEAN_EXPO_A) + random.expovariate(DWELL_TIME_EXPO)

def B_C():  # returns processing time for journey B to C
    return random.expovariate(T_MEAN_EXPO_B) + random.expovariate(DWELL_TIME_EXPO)

def time_to_failure():  # returns time until next failure
    return random.expovariate(TTF_MEAN)

# Defining the train
class Train(object):

    def __init__(self, env, name, repair):
        self.env = env
        self.name = name
        self.trips_complete = 0
        self.broken = False
        # Start "travelling" and "break_train" processes for the train
        self.process = env.process(self.running(repair))
        env.process(self.break_train())

    def running(self, repair):
        while True:
            # start trip A_B
            done_in = A_B()
            while done_in:
                try:
                    # going on the trip
                    start = self.env.now
                    yield self.env.timeout(done_in)
                    done_in = 0  # Set to 0 to exit while loop
                except simpy.Interrupt:
                    self.broken = True
                    done_in -= self.env.now - start  # How much time left?
                    with repair.request(priority=1) as req:
                        yield req
                        yield self.env.timeout(random.expovariate(REPAIR_TIME_EXPO))
                    self.broken = False
            # Trip is finished
            self.trips_complete += 1
            # start trip B_C
            done_in = B_C()
            while done_in:
                try:
                    # going on the trip
                    start = self.env.now
                    yield self.env.timeout(done_in)
                    done_in = 0  # Set to 0 to exit while loop
                except simpy.Interrupt:
                    self.broken = True
                    done_in -= self.env.now - start  # How much time left?
                    with repair.request(priority=1) as req:
                        yield req
                        yield self.env.timeout(random.expovariate(REPAIR_TIME_EXPO))
                    self.broken = False
            # Trip is finished
            self.trips_complete += 1

    # Defining the failure
    def break_train(self):
        while True:
            yield self.env.timeout(time_to_failure())
            if not self.broken:
                # Only break the train if it is currently working
                self.process.interrupt()

# Setup and start the simulation
print('Train trip simulator')
random.seed(RANDOM_SEED)  # Helps with reproduction

# Create an environment and start setup process
env = simpy.Environment()
repair = simpy.PreemptiveResource(env, capacity=1)
trains = [Train(env, 'Train %d' % i, repair)
          for i in range(NUM_TRAINS)]

# Execute
env.run(until=SIM_TIME)

# Analysis
trips = []
print('Train trips after %s hours of simulation' % SIM_TIME_HOURS)
for train in trains:
    print('%s completed %d trips.' % (train.name, train.trips_complete))
    trips.append(train.trips_complete)
mean_trips = numpy.mean(trips)
std_trips = numpy.std(trips)
print "mean trips: %d" % mean_trips
print "standard deviation trips: %d" % std_trips
It looks like you are using Python 2, which is a bit unfortunate, because Python 3.3 and above give you some more flexibility with Python generators. But your problem should be solvable in Python 2 nonetheless.
You can use sub-processes within a process:
def sub(env):
    print('I am a sub process')
    yield env.timeout(1)
    # return 23  # Only works in py3.3 and above
    env.exit(23)  # Workaround for older Python versions

def main(env):
    print('I am the main process')
    retval = yield env.process(sub(env))
    print('Sub returned', retval)
As you can see, you can use the Process instances returned by Environment.process() like normal events. You can even use return values in your sub-processes.
If you use Python 3.3 or newer, you don't have to explicitly start a new sub-process but can use sub() as a subroutine instead and just forward the events it yields:
def sub(env):
    print('I am a sub routine')
    yield env.timeout(1)
    return 23

def main(env):
    print('I am the main process')
    retval = yield from sub(env)
    print('Sub returned', retval)
You may also be able to model signals as resources that can be used either by a failure process or by a train. If the failure process requests the signal first, the train has to wait in front of the signal until the failure process releases the signal resource. If the train is already passing the signal (and thus holds the resource), the signal cannot break. I don't think that's a problem, because the train can't stop there anyway. If it should be a problem, just use a PreemptiveResource. A minimal sketch of this idea follows below.
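The sketch below is not from the original answer; all names and timings are made up for illustration. A failure process and a stream of trains compete for the same signal resource, so a train arriving while the signal is "failed" simply queues until the repair releases it:

import random
import simpy

def signal_failure(env, signal):
    # Seize the signal now and then to model a failure plus repair.
    while True:
        yield env.timeout(random.expovariate(1 / 300.0))  # time to failure
        with signal.request() as req:
            yield req  # signal is now "failed" and blocks trains
            print('%.1f signal failed' % env.now)
            yield env.timeout(random.expovariate(1 / 60.0))  # repair time
            print('%.1f signal repaired' % env.now)

def train(env, name, signal):
    with signal.request() as req:  # queue here while the signal is failed
        yield req
        print('%.1f %s passes the signal' % (env.now, name))
        yield env.timeout(10)  # time to clear the signal section

def train_source(env, signal):
    for i in range(15):
        env.process(train(env, 'Train %d' % i, signal))
        yield env.timeout(60)  # one train per minute

random.seed(42)
env = simpy.Environment()
signal = simpy.Resource(env, capacity=1)
env.process(signal_failure(env, signal))
env.process(train_source(env, signal))
env.run(until=1000)

Each track section or platform could get the same treatment: one capacity-1 resource per section, with its own failure process.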
I hope this helps. Please feel welcome to join our mailing list for more
discussions.
I was trying the following POC to check how to get high concurrency
implicit def executionContext = context.system.dispatchers.lookup("async-futures-dispatcher")
implicit val timeout = 10 seconds

val contestroute = path("contestroute") {
  get {
    respondWithMediaType(`application/json`) {
      dynamic {
        onSuccess(
          Future {
            val start = System.currentTimeMillis()
            // np here should be dealt by 200 threads defined below, so why
            // overall time takes so long? why doesn't it really utilize all
            // threads I have given to it? how to update the code so it
            // utilizes the 200 threads?
            Thread.sleep(5000)
            val status = s"timediff ${System.currentTimeMillis() - start}ms ${Thread.currentThread().getName}"
            status
          }) { time =>
          complete(s"status: $time")
        }
      }
    }
  }
}
My config:
async-futures-dispatcher {
  # Dispatcher is the name of the event-based dispatcher
  type = Dispatcher
  # What kind of ExecutionService to use
  executor = "thread-pool-executor"
  # Configuration for the thread pool
  thread-pool-executor {
    # minimum number of threads to cap factor-based core number to
    core-pool-size-min = 200
    # No of core threads ... ceil(available processors * factor)
    core-pool-size-factor = 20.0
    # maximum number of threads to cap factor-based number to
    core-pool-size-max = 200
  }
  # Throughput defines the maximum number of messages to be
  # processed per actor before the thread jumps to the next actor.
  # Set to 1 for as fair as possible.
  throughput = 100
}
however when I run apache bench like this:
ab -n 200 -c 50 http://LAP:8080/contestroute
Results I get are:
Server Software:        Apache-Coyote/1.1
Server Port:            8080

Document Path:          /contestroute
Document Length:        69 bytes

Concurrency Level:      150
Time taken for tests:   34.776 seconds
Complete requests:      150
Failed requests:        0
Write errors:           0
Non-2xx responses:      150
Total transferred:      37500 bytes
HTML transferred:       10350 bytes
Requests per second:    4.31 [#/sec] (mean)
Time per request:       34776.278 [ms] (mean)
Time per request:       231.842 [ms] (mean, across all concurrent requests)
Transfer rate:          1.05 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        5  406 1021.3      7    3001
Processing: 30132 30466  390.8  30308   31231
Waiting:    30131 30464  391.8  30306   31231
Total:      30140 30872  998.9  30353   33228
Am I missing something big? What do I need to change so that Spray and my futures utilize all the threads I have given them?
(To add: I'm running on top of Tomcat, Servlet 3.0.)
In your example, all Spray operations and the blocking operations happen in the same context. You need to split them into two contexts.
Also, I don't see a reason to use dynamic; I guess just complete should be good.
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}
import scala.concurrent.duration._

implicit val timeout = 10.seconds

// Execution context for blocking ops
val blockingExecutionContext = {
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(2000))
}

// Execution context for Spray
import context.dispatcher

override def receive: Receive = runRoute(contestroute)

val contestroute = path("contestroute") {
  get {
    complete {
      Future.apply {
        val start = System.currentTimeMillis()
        // The blocking sleep now runs on the dedicated thread pool,
        // so Spray's own dispatcher is never blocked.
        Thread.sleep(5000)
        val status = s"timediff ${System.currentTimeMillis() - start}ms ${Thread.currentThread().getName}"
        status
      }(blockingExecutionContext)
    }
  }
}
After that you can test it with
ab -n 200 -c 200 http://LAP:8080/contestroute
and you'll see that Spray will create all 200 threads for the blocking operations.
Results:
Concurrency Level: 200
Time taken for tests: 5.096 seconds
I have a celery chain that runs some tasks. Each of the tasks can fail and be retried. Please see below for a quick example:
from celery import task

@task(ignore_result=True)
def add(x, y, fail=True):
    try:
        if fail:
            raise Exception('Ugly exception.')
        print '%d + %d = %d' % (x, y, x+y)
    except Exception as e:
        raise add.retry(args=(x, y, False), exc=e, countdown=10)

@task(ignore_result=True)
def mul(x, y):
    print '%d * %d = %d' % (x, y, x*y)
and the chain:
from celery.canvas import chain
chain(add.si(1, 2), mul.si(3, 4)).apply_async()
Running the two tasks (and assuming that nothing fails), you would see printed:
1 + 2 = 3
3 * 4 = 12
However, when the add task fails the first time and succeeds on a subsequent retry, the remaining tasks in the chain do not run: the add task fails, the other tasks in the chain are skipped, and after a few seconds the add task runs again and succeeds, but the rest of the chain (in this case mul.si(3, 4)) never runs.
Does Celery provide a way to continue failed chains from the task that failed onwards? If not, what would be the best approach to accomplish this, making sure that a chain's tasks run in the specified order and only after the previous task has executed successfully, even if that task is retried a few times?
Note 1: The issue can be solved by doing
add.delay(1, 2).get()
mul.delay(3, 4).get()
but I am interested in understanding why chains do not work with failed tasks.
You've found a bug :)
Fixed in https://github.com/celery/celery/commit/b2b9d922fdaed5571cf685249bdc46f28acacde3; it will be part of 3.0.4.
I'm also interested in understanding why chains do not work with failed tasks.
I dug into some Celery code, and what I've found so far is:
The implementation lives in app/builtins.py:
@shared_task
def add_chain_task(app):
    from celery.canvas import chord, group, maybe_subtask
    _app = app

    class Chain(app.Task):
        app = _app
        name = 'celery.chain'
        accept_magic_kwargs = False

        def prepare_steps(self, args, tasks):
            steps = deque(tasks)
            next_step = prev_task = prev_res = None
            tasks, results = [], []
            i = 0
            while steps:
                # First task get partial args from chain.
                task = maybe_subtask(steps.popleft())
                task = task.clone() if i else task.clone(args)
                i += 1
                tid = task.options.get('task_id')
                if tid is None:
                    tid = task.options['task_id'] = uuid()
                res = task.type.AsyncResult(tid)
                # automatically upgrade group(..) | s to chord(group, s)
                if isinstance(task, group):
                    try:
                        next_step = steps.popleft()
                    except IndexError:
                        next_step = None
                if next_step is not None:
                    task = chord(task, body=next_step, task_id=tid)
                if prev_task:
                    # link previous task to this task.
                    prev_task.link(task)
                    # set the results parent attribute.
                    res.parent = prev_res
                results.append(res)
                tasks.append(task)
                prev_task, prev_res = task, res
            return tasks, results

        def apply_async(self, args=(), kwargs={}, group_id=None, chord=None,
                        task_id=None, **options):
            if self.app.conf.CELERY_ALWAYS_EAGER:
                return self.apply(args, kwargs, **options)
            options.pop('publisher', None)
            tasks, results = self.prepare_steps(args, kwargs['tasks'])
            result = results[-1]
            if group_id:
                tasks[-1].set(group_id=group_id)
            if chord:
                tasks[-1].set(chord=chord)
            if task_id:
                tasks[-1].set(task_id=task_id)
                result = tasks[-1].type.AsyncResult(task_id)
            tasks[0].apply_async()
            return result

        def apply(self, args=(), kwargs={}, **options):
            tasks = [maybe_subtask(task).clone() for task in kwargs['tasks']]
            res = prev = None
            for task in tasks:
                res = task.apply((prev.get(), ) if prev else ())
                res.parent, prev = prev, res
            return res

    return Chain
You can see that at the end of prepare_steps, prev_task is linked to the next task. When prev_task fails, the next task is never called.
I'm testing with adding the link_error from prev task to the next:
if prev_task:
    # link and link_error previous task to this task.
    prev_task.link(task)
    prev_task.link_error(task)
    # set the results parent attribute.
    res.parent = prev_res
But then the next task must take care of both cases (except, perhaps, when it is configured to be immutable, i.e. to not accept more arguments).
I think chain could support this by allowing syntax like:
c = chain(t1, (t2, t1e), (t3, t2e))
which means:
t1 links to t2, with link_error to t1e
t2 links to t3, with link_error to t2e
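For reference, here is roughly what that proposed syntax would have to expand to with the signature API; t1e and t2e are hypothetical error-handler tasks, not something defined in the code above:

from celery import chain

# link_error attaches a callback that fires if the task it is linked to
# fails permanently, i.e. after its retries are exhausted.
t1 = add.si(1, 2)
t2 = mul.si(3, 4)
t1.link_error(t1e.s())  # t1e is a hypothetical error handler
t2.link_error(t2e.s())  # t2e is a hypothetical error handler
chain(t1, t2).apply_async()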