Send mail with some delay time in odoo11 - email

I have to send a mail when the stage changes, after a specific delay. For example, in my case the mail should go out 10 minutes after the stage change. The delay should be configurable.
I have tried to achieve this by setting the force_send parameter to False, as below:
self.env['mail.template'].browse(template.id).send_mail(self.id, force_send=False)
After that I changed the interval of the "Mail: Email Queue Manager" scheduled action to match my requirement. The problem is that when I queue two mails, say the first at 11:30 and the second at 11:33, and the scheduled action runs at 11:35, both mails are sent at 11:35 instead of at 11:35 and 11:38 respectively (i.e. 5 minutes after each stage change).
So how can I achieve this?

You may run the send-mail call in a new thread that executes 5 minutes later.
Imagine that the Odoo project is running in the main thread and you send the mail from a second thread:
import threading
import time

def odoo_project():
    # stands in for the main Odoo process: prints 0..10, one number per second
    for x in range(0, 11):
        print(x)
        time.sleep(1)

def mail_sender():
    # stands in for the mail thread: prints 100..102, one number every 5 seconds
    for x in range(100, 103):
        print(x)
        time.sleep(5)

t1 = threading.Thread(target=odoo_project)
t2 = threading.Thread(target=mail_sender)
t1.start()
t2.start()
This will output
0
100
1
2
3
4
101
5
6
7
8
9
102
10
which means the Odoo project keeps working while the mail thread sends the mail after 5 minutes (replace 5 with 5 * 60 in the example).
It would also be a better design to add a config field where you enter the delay value, instead of hard-coding the 5 minutes, in case you want to change it later.
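For example, here is a minimal sketch of that idea applied inside the model whose stage changes. The model name, the mail_delay_minutes field and the _notify_stage_change helper are assumptions, and a production version would have to open its own database cursor inside the thread, because the request cursor is closed once the original transaction ends.
import threading

from odoo import fields, models


class ProjectTask(models.Model):
    _inherit = 'project.task'  # hypothetical model; use the model whose stage changes

    # configurable delay instead of a hard-coded value
    mail_delay_minutes = fields.Integer(string='Mail delay (minutes)', default=10)

    def _notify_stage_change(self, template):
        task_id = self.id
        delay = self.mail_delay_minutes * 60  # minutes -> seconds

        def _job():
            # NOTE: a real thread must build its own cursor/environment here;
            # the request cursor is not usable after the original request ends.
            self.env['mail.template'].browse(template.id).send_mail(
                task_id, force_send=True)

        # run _job once, `delay` seconds from now, without blocking the caller
        threading.Timer(delay, _job).start()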

Related

How to calculate celery load correctly

For example, I have a VPS with 2 shared CPUs, 10 000 receivers, and a task that should not be executed more than 15 times per second. Also, if a request receives a 429 response code, it needs to be retried after 1800 seconds.
for receiver_id in receivers_arr:
    send_message.delay(receiver_id)

@celery_app.task(ignore_result=True,
                 time_limit=5,
                 autoretry_for=(Exception,),
                 retry_backoff=1800,
                 retry_kwargs={'max_retries': 2},
                 retry_jitter=False,
                 rate_limit=1)
def send_message(receiver_id):
    code = send(receiver_id)
    if code == 429:
        raise Exception
How do I choose the right number of workers and concurrency? Also, am I using the decorator arguments correctly (at the moment I have 3 workers with concurrency 4)? The main goal is to avoid RuntimeError: can't start new thread.
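One thing that matters for the arithmetic here: Celery's rate_limit is enforced per worker instance, not cluster-wide. Below is a hedged sketch of the same task with that taken into account; the broker URL and the '5/s' value are assumptions (chosen so that 3 workers give roughly 15 tasks/second), and send() is a placeholder for the real HTTP call.
from celery import Celery

celery_app = Celery('tasks', broker='redis://localhost:6379/0')  # broker URL is an assumption


def send(receiver_id):
    # placeholder for the asker's own HTTP call; returns the response status code
    return 200


# rate_limit applies per worker instance, so 3 workers * '5/s' ~= 15 tasks/second overall
@celery_app.task(ignore_result=True,
                 time_limit=5,
                 autoretry_for=(Exception,),
                 retry_backoff=1800,          # first retry about 1800 s after the failure
                 retry_kwargs={'max_retries': 2},
                 retry_jitter=False,
                 rate_limit='5/s')
def send_message(receiver_id):
    code = send(receiver_id)
    if code == 429:
        raise Exception('rate limited')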

How do I consume Kafka-Messages older than x minutes, but all messages on restart?

I need some grace period before consuming Kafka messages.
My approach is to use a hopping window.
e.g. if I want to consume a message after 5 minutes, the hopping window would be 6 minutes and would advance by 1 minute.
Then I'll use a filter to get data older than 5 minutes (there's also a timestamp in the message itself). Hence I will process data from minute 0 to minute 1. Then the hopping window jumps 1 minute forward and I process data from minute 1 to minute 2 and so on.
However I need to consume all messages when starting the application and not just the last 6 minutes.
I'm also open to other suggestions regarding the 5-minute grace period.
I've made wrong assumptions here. All the data in the topic will be consumed, no matter how old it is.
e.g. it's 12:10 now and we start the Kafka Streams application.
The data in the topic we want to consume was pushed at 12:00, and we have a window of 6 minutes.
I was expecting only data from 12:04 to 12:10 (6 minutes) to be consumed and everything before that to be lost.
But the 12:00 data will be consumed anyway, it just falls into an older window.
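If a plain consumer (rather than Kafka Streams) is an option, the grace period can also be handled by delaying each record until it is at least 5 minutes old. A minimal sketch using the kafka-python client; the topic name, broker address and process() handler are placeholders.
import time

from kafka import KafkaConsumer  # kafka-python client

GRACE_SECONDS = 5 * 60


def process(record):
    # placeholder for the real message handler
    print(record.offset, record.value)


consumer = KafkaConsumer(
    'my-topic',                          # placeholder topic
    bootstrap_servers='localhost:9092',
    group_id='grace-period-consumer',
    enable_auto_commit=False,
    auto_offset_reset='earliest',        # on restart, older messages are read as well
)

for record in consumer:
    # record.timestamp is epoch milliseconds; wait until the record is old enough
    age = time.time() - record.timestamp / 1000.0
    if age < GRACE_SECONDS:
        # sleeping close to 5 minutes may require raising max_poll_interval_ms
        time.sleep(GRACE_SECONDS - age)
    process(record)
    consumer.commit()                    # commit only after processing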

Gatling Scenario Response time

I am doing a load test for an API whose average response time is 5 seconds.
In my script I set constantUsersPerSec(2) with a duration of 150 seconds:
.inject(constantUsersPerSec(2) during (150 seconds)),
Will it generate 2 requests per second, or fewer, because one request takes 5 seconds to complete?
constantUsersPerSec(2) will start a new user executing the scenario every 0.5 seconds or so. For this sort of injection profile Gatling doesn't take into account how long it takes for a request to complete.

Agent receiving several messages at the same time in AnyLogic

Suppose you have two agent types:
Agent Type 1 with a population of 10
Agent Type 2 with a population of 1
Suppose Type 2 has a statechart with two states as follows:
Agent Type 2 statechart
If all 10 agents of Type 1 send the same message simultaneously, or at least with intervals smaller than the timeout transition shown in the image, what happens to the messages received while the agent of Type 2 is in the state "evaluateLenderDecision"? Are the messages discarded or queued until the state "waitingForLender" is reached again?
First, I suggest you watch this YouTube video I made that explains how messages are sent: https://www.youtube.com/watch?v=Fe2U8IAhlHM
Messages sent with send or deliver are received in the connections object, which redirects them to the statecharts you define there.
In your case, you should probably build a queue yourself with all the messages that have been received (using a collection, for example).
If your messages are sent at the same time, 9 of your 10 agents will have their message discarded from the statechart's point of view, since no transition will be waiting for a message after the first one is received; from the connections object's point of view, however, all messages are effectively received.

Why does Gatling still send requests when scenario injection is on nothingFor?

So I have the following scenario:
setUp(scenario.inject(
  nothingFor(30 seconds),
  rampUsers(10) during (30 seconds),
  nothingFor(1 minute),
  rampUsers(20) during (30 seconds)
).protocols(httpconf)).maxDuration(3 minutes)
I expected this scenario to start by doing nothing for 30 seconds, ramp up 10 users over 30 seconds, do nothing (pause) for a minute, and finish by ramping up 20 users over 30 seconds.
But what I got was a 30-second pause, a ramp-up of 10 users over 30 seconds, a steady state of 10 users for a minute, and then an additional ramp-up of 20 users (I ended up with 30 users running).
What am I missing here?
The injection profiles only specify when users start the scenario, not how long they stay active; that is determined by how long it takes a user to finish the scenario. So when you ramp 10 users over 30 seconds, one user starts the scenario every 3 seconds, but each keeps running until it finishes (however long that takes). I'm guessing your scenario takes more than a couple of minutes for a user to complete.