Celery prefetch count ignored when using greenlet-based pools with SQS

We are experiencing an issue with Celery+SQS, using the gevent or the eventlet pools.
Even with a large number of messages in the queue, concurrency set to 100, and the prefetch multiplier set to 5 (so the reported prefetch_count is 500), Celery receives 10 messages from the queue (the maximum allowed by SQS per receive call) and waits for all of them to finish before attempting to receive again (from AWS we see either 10 or 0 messages in flight).
We do not see the same behavior using the prefork pool.
Our tasks are I/O bound, so using prefork would be inefficient. Is there any way to increase the number of messages pulled from the queue to keep all the greenlets busy?
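For reference, here is a minimal sketch in Python of the setup described above; the module name proj and the region are assumptions, not taken from the report:

# Sketch of the configuration described above (app name and region are assumptions).
from celery import Celery

app = Celery("proj", broker="sqs://")  # AWS credentials are taken from the environment

app.conf.update(
    broker_transport_options={"region": "us-east-1"},  # assumed region
    worker_prefetch_multiplier=5,  # with concurrency 100 the reported prefetch_count is 500
)

A worker like the one described would then be started with something like: celery -A proj worker -P gevent -c 100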

Related

Multiple concurrent connections with Vertx

I'm trying to build a web application that should be able to handle at least 15000 rps. Some of the optimizations I have made are increasing the worker pool size to 20 and setting the accept backlog to 25000. Since I have set my worker pool size to 20, will this help with the blocking piece of code?
A worker pool size of 20 seems to be the default.
I believe the important question in your case is how long you expect each request to run. On my side, I expect to have thousands of short-lived requests, each with a payload size of about 5-10KB. All of these will be blocking, because of the blocking database driver I use at the moment. I have increased the default worker pool size to 40, and I explicitly set the number of verticle instances I deploy using the following formula:
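// Deploy at least 1 and at most 2 verticle instances, based on half the available CPU cores.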
final int instances = Math.min(Math.max(Runtime.getRuntime().availableProcessors() / 2, 1), 2);
A test run of 500 simultaneous clients running for 60 seconds, on a vert.x server doing nothing but blocking calls, produced an average of 6 failed requests out of 11089. My test payload in this case was ~28KB.
Of course, from experience I know that running my software in production would often produce results that I have not anticipated. Thus, the important thing in my case is to have good atomicity rules in place, so that I don't get half-baked or corrupted data in the database.

Django Celery Rate Limit setting doesn't work as expected

I use the command below to create one worker:
celery -A proj worker -l info --concurrency=50 -Q celery,token_1 -n token_1
And in my task, I set the rate limit to 4000/m.
However, when I start running the collection, I noticed the average task rate is just around 10-20/s (with the 4000/m rate limit rule enabled).
Then I removed the rate limit rule, and the task rate goes up to around 60/s.
I am confused: since my rate limit is 4000/m, which is roughly 66/s, why does it end up at just 10-20/s? (I have already set 50 threads for the worker.)
You're misunderstanding how rate limits operate in Celery. According to the documentation for version 4.2:
The rate limits can be specified in seconds, minutes or hours by appending “/s”, “/m” or “/h” to the value. Tasks will be evenly distributed over the specified time frame.
Example: “100/m” (hundred tasks a minute). This will enforce a minimum delay of 600ms between starting two tasks on the same worker instance.
In essence, Celery was adding a forced delay between starting your tasks. Since each task was already completing in roughly 17 ms (1/60 of a second), the additional forced delay of at least 15 ms (60 s / 4000) between task starts reduced the rate at which they could be processed.
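For reference, a minimal sketch of the setting under discussion; the task name and broker URL below are placeholders, not taken from the original post:

# Illustrative only: task name and broker URL are placeholders.
from celery import Celery

app = Celery("proj", broker="redis://localhost:6379/0")  # assumed broker

@app.task(rate_limit="4000/m")  # the worker enforces a minimum ~15 ms gap between task starts
def fetch_token(token_id):
    ...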

How does JMeter start sending requests to the server

If Threads: 100, Ramp-up: 1 and Loop count: 1 is the configuration, how will JMeter start sending requests to the server?
Will requests be sent at 1 req/sec, or will all requests be sent to the server at once?
JMeter will send requests as fast as it can, to wit:
It will start all threads (virtual users) you define in Thread Group within the ramp-up period (in your case - 100 threads in 1 second)
Each thread (virtual user) will start executing the Samplers which are present in the Thread Group from top to bottom (or according to the Logic Controllers)
When there are no more samplers to execute or loops to iterate, the thread will be shut down
When there are no more active threads left - JMeter test will end.
With regards to requests per second - it mostly depends on your application response time (a quick calculation is sketched after these examples), i.e.
if you have 100 virtual users and response time is 1 second - you will get 100 requests/second
if you have 100 virtual users and response time is 2 seconds - you will get 50 requests/second
if you have 100 virtual users and response time is 500 milliseconds - you will get 200 requests/second
etc.
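As a quick sanity check of the arithmetic above, here is a small Python sketch; it assumes a closed-loop test with no think time, so it is only an approximation:

# Approximate steady-state throughput for a closed-loop load test (no think time).
def expected_rps(virtual_users, response_time_s):
    return virtual_users / response_time_s

print(expected_rps(100, 1.0))  # 100.0 requests/second
print(expected_rps(100, 2.0))  # 50.0 requests/second
print(expected_rps(100, 0.5))  # 200.0 requests/second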
I would recommend increasing (and decreasing) the load gradually; this way you will be able to correlate increasing load with throughput, response time, number of errors, etc. Releasing all threads at once will not tell you the full story (unless you're doing a form of spike testing, in which case consider using a Synchronizing Timer).
Setting JMeter's ramp-up period to 1 means starting all 100 threads within 1 second.
This isn't a recommended setting, as described below:
The ramp-up period tells JMeter how long to take to "ramp-up" to the full number of threads chosen. If 10 threads are used, and the ramp-up period is 100 seconds, then JMeter will take 100 seconds to get all 10 threads up and running. Each thread will start 10 (100/10) seconds after the previous thread was begun. If there are 30 threads and a ramp-up period of 120 seconds, then each successive thread will be delayed by 4 seconds.
Ramp-up needs to be long enough to avoid too large a work-load at the start of a test, and short enough that the last threads start running before the first ones finish (unless one wants that to happen).
Start with Ramp-up = number of threads and adjust up or down as needed.
See also Can i set ramp up period 0 in JMeter?
Bear in mind that with a low ramp-up and many threads, you may be limited by local resources, so your results may be a measurement of client capability rather than of the server.
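The per-thread start delay implied by the ramp-up description quoted above can be checked with a quick Python sketch (the figures are the ones from the quote and from the asker's configuration):

# Seconds between successive thread starts during ramp-up.
def ramp_up_delay(ramp_up_s, threads):
    return ramp_up_s / threads

print(ramp_up_delay(100, 10))  # 10.0 s, as in the 10-thread example
print(ramp_up_delay(120, 30))  # 4.0 s, as in the 30-thread example
print(ramp_up_delay(1, 100))   # 0.01 s -- the asker's configuration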

How many Kafka Streams apps are recommended to run on a single machine in production?

In our architecture, we are planning to run three JVM processes on one machine (approximately), and each JVM can host up to 15 Kafka Streams apps.
If I am not wrong, each Kafka Streams app spawns one Java thread, so this seems like an awkward architecture, with around 45 Kafka Streams apps running on a single machine.
So, I have a question in three parts:
1) Is my understanding correct that each Kafka Streams app spawns one Java thread? Also, does each Kafka Streams app start a new TCP connection with the Kafka broker?
2) Is there a way to share one TCP connection across multiple Kafka Streams apps?
3) Is it difficult (not recommended) to run 45 streams on a single machine?
Multiple answers:
a KafkaStreams instance starts one processing thread by default (you can configure more processing threads, too)
internally, KafkaStreams uses two KafkaConsumers and one KafkaProducer (if you turn on EOS, it uses even more KafkaProducers): a KafkaConsumer starts a background heartbeat thread and a KafkaProducer starts a background sender thread => you get 4 threads in total (processing, 2x heartbeat, sender); if you configure two processing threads, you end up with 8 threads in total, etc.
there is more than one TCP connection, as the consumer and the producer (and the restore consumer, if you enable StandbyTasks) each connect to the cluster
it's not possible to share any TCP connections at the moment (this would require a major rewrite of consumers and producers)
how many threads you can efficiently run depends on your hardware and workload... monitor your CPU utilization and see how busy your machine is...
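A rough back-of-the-envelope version of the thread counts from the answer above, in Python; it deliberately ignores EOS producers, restore consumers and other background threads, so treat it as an approximation only:

# Approximate JVM threads per KafkaStreams instance, per the counts above.
def kafka_streams_threads(num_stream_threads):
    # each processing thread brings 2 consumer heartbeat threads and 1 producer sender thread
    return num_stream_threads * 4

print(kafka_streams_threads(1))       # 4 threads for the default configuration
print(kafka_streams_threads(2))       # 8 threads with two processing threads
print(45 * kafka_streams_threads(1))  # ~180 threads for 45 single-threaded apps on one machine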
Each Kafka Streams job spawns a single thread. If the thread number is set to n, it will provide parallelism in processing n Kafka partitions.
If a single machine does not have the capacity to run a large number of threads, parallelism can be achieved by submitting the Streams application job with the same application name on another machine in the same cluster. The job will be identified by Kafka Streams and handled in the background.
Is it difficult (not recommended) to run 45 streams on a single machine? The answer to this is definitely NO unless there is a real use case in production -- unless your system has that many cores or the input has 45 partitions, this is not necessary.

How to put a rate limit on a celery queue?

I read this in the celery documentation for Task.rate_limit:
Note that this is a per worker instance rate limit, and not a global rate limit. To enforce a global rate limit (e.g., for an API with a maximum number of requests per second), you must restrict to a given queue.
How do I put a rate limit on a celery queue?
It turns out it can't be done at the queue level for multiple workers.
It can be done at the queue level for one worker, or at the queue level for each worker.
So if you set 10 jobs/minute on 5 workers, your workers will process up to 50 jobs per minute collectively.
So to have only 10 jobs per minute overall, you either choose one worker, or choose 5 workers with a limit of 2/minute.
Update: how exactly to put the limit in settings/configuration:
task_annotations = {'tasks.<task_name>': {'rate_limit': '10/m'}}
or change the same for all tasks:
task_annotations = {'*': {'rate_limit': '10/m'}}
10/m means 10 tasks per minute, /s would mean per second. More details here: Task annotations setting
I was trying to find a way to put a rate limit on a queue, and found out that Celery can't do that; however, Celery can control the rate per task, see this:
http://docs.celeryproject.org/en/latest/userguide/workers.html#rate-limits
So as a workaround, you can set up one task per queue (which makes sense in a lot of situations) and put the limit on the task, as sketched below.
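As a rough sketch of that workaround, assuming a hypothetical task proj.tasks.api_call and a placeholder broker (neither name comes from the question):

# One task, one dedicated queue, with the rate limit attached to the task.
from celery import Celery

app = Celery("proj", broker="redis://localhost:6379/0")  # assumed broker

# Route the single task to its own queue...
app.conf.task_routes = {"proj.tasks.api_call": {"queue": "api_queue"}}
# ...and cap it per worker instance; with N workers consuming this queue the
# effective global rate becomes N * 10/m, as noted above.
app.conf.task_annotations = {"proj.tasks.api_call": {"rate_limit": "10/m"}}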
You can also set this limit in the Flower > Worker pane.
There is a dedicated field for entering your limit there.
The suggested format is as below:
The rate limits can be specified in seconds, minutes or hours by appending “/s”, “/m” or “/h” to the value. Tasks will be evenly distributed over the specified time frame.
Example: “100/m” (hundred tasks a minute). This will enforce a minimum delay of 600ms between starting two tasks on the same worker instance.