I am using the code below to access an email template and send a mail. It works fine, except that it takes a long time (30 to 60 minutes) after the code executes for the mail to actually go out. I don't know how to solve this issue. Any suggestions? Thanks.
sen_mail.py
@api.multi
def send_email(self, invoice_id):
    invoice_data = self.env['account.invoice'].browse(invoice_id)
    email_template_obj = self.env['email.template']
    template_id = self.env.ref('multi_db.email_template_subscription_invoice', False)
    if template_id:
        # Render the template for this invoice and override a few fields
        values = email_template_obj.generate_email(template_id.id, invoice_id)
        values['subject'] = 'Invoice for AMS registration'
        values['email_to'] = invoice_data.partner_id.email
        values['partner_to'] = invoice_data.partner_id
        # Create the outgoing mail record and send it
        mail_obj = self.env['mail.mail']
        msg_id = mail_obj.create(values)
        if msg_id:
            mail_obj.send([msg_id])
    return True
Finally got the solution.
I changed the Interval Number and Interval Unit of the email queue cron job under Settings -> Technical -> Automation -> Scheduled Actions.
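For reference, the same tweak can be scripted. This is only a minimal sketch under the assumption that the outgoing queue is processed by the standard mail module cron; the XML id and field names below may differ between Odoo versions, so verify them in yours:

def speed_up_mail_queue(env):
    # Assumed XML id of the "Email Queue Manager" cron from the mail module;
    # check Settings -> Technical -> Automation -> Scheduled Actions if it differs.
    cron = env.ref('mail.ir_cron_mail_scheduler_action', raise_if_not_found=False)
    if cron:
        # Run the queue every minute instead of the default interval
        cron.write({'interval_number': 1, 'interval_type': 'minutes'})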
Kabir
Yes, sure, you can increase the scheduler frequency so emails leave the outgoing queue faster, but if you want to send the email immediately, without waiting for the queue, you can alternatively use the code below:
@api.multi
def send_email(self, invoice_id):
    template_id = self.env.ref('multi_db.email_template_subscription_invoice', False)
    if template_id:
        # force_send=True sends the email right away instead of
        # leaving it in the queue for the email scheduler
        template_id.send_mail(invoice_id, force_send=True, raise_exception=False)
    return True
This will send the email immediately, without waiting for the mail queue scheduler.
Best regards
My Google Apps Script ran more often than its trigger is set for.
The purpose of the script is to check a Gmail inbox every hour and, if an automated email was not delivered, alert a Slack channel.
An automation delivers an email to the Gmail address every hour, and a Gmail rule adds a label to it. The script checks for the label; if it is found, the label is removed and the email is marked as read. When no labeled email is found, a webhook URL is called to send an alert.
However, the code now executed 3 times within an hour instead of once, as the trigger is set to. This resulted in 2 notifications to Slack.
Could someone help me understand what is wrong?
(Screenshots attached: trigger configuration and execution log.)
function parseEmailByLabel() {
  var gmailLabelName = "ParseThis",
      externalHandlerScript = "https://hooks.slack.com/workflows/T1234",
      gmailLabelObject = GmailApp.getUserLabelByName(gmailLabelName),
      threads = gmailLabelObject.getThreads(),
      messages,
      message,
      params,
      response;

  if (threads.length > 0) {
    // Labeled threads found: mark every message as read and remove the label.
    for (var i = 0; i < threads.length; i++) {
      messages = threads[i].getMessages();
      for (var j = 0; j < messages.length; j++) {
        message = messages[j];
        message.markRead();
      }
      threads[i].removeLabel(gmailLabelObject);
    }
  } else {
    // No labeled thread found: notify the Slack webhook.
    params = {
      'method': 'post'
    };
    response = UrlFetchApp.fetch(externalHandlerScript, params).getContentText();
    Logger.log(response);
  }
}
You may have created more than one trigger. You can check here:
https://script.google.com/home/triggers
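If there are duplicates, they can also be listed and removed from code. This is a minimal sketch, not part of the original script; it assumes the handler you want to keep is parseEmailByLabel:

// One-off cleanup: keep a single trigger for parseEmailByLabel and delete duplicates.
function removeDuplicateTriggers() {
  var triggers = ScriptApp.getProjectTriggers();
  var kept = false;
  for (var i = 0; i < triggers.length; i++) {
    if (triggers[i].getHandlerFunction() === "parseEmailByLabel") {
      if (kept) {
        ScriptApp.deleteTrigger(triggers[i]); // remove the extra trigger
      } else {
        kept = true; // keep the first one found
      }
    }
  }
}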
I'm trying to write my very first liquidsoap program. It goes something like this:
sounds_path = "../var/sounds"
# Log file
set("log.file.path","var/log/liquidsoap.log")
set("harbor.bind_addr", "127.0.0.1")
set("harbor.timeout", 5)
set("harbor.verbose", true)
set("harbor.reverse_dns", false)
silence = blank()
queue = request.queue()
def play(~protocol, ~data, ~headers, uri) =
  request.push("#{sounds_path}#{uri}")
  http_response(protocol=protocol, code=200)
end
harbor.http.register(port=8080, method="POST", "^/(?!\0)+", play)
stream = fallback(track_sensitive=false, [queue, silence])
...output.whatever...
And I was wondering if there is any way to push to the queue from the harbor callback.
Otherwise, how should I go about making requests originate from HTTP calls? I really want to avoid telnet. My final objective is to have an endpoint that I can call to make my stream play a file on demand and stay silent the rest of the time.
Give this a go. It's Liquidsoap, so it's tricky to understand, but it should do the trick:
########### functions ##############
def playnow(source, ~action="override", ~protocol, ~data, ~headers, uri) =
  queue_count = list.length(server.execute("playnow.primary_queue"))
  arr = of_json(default=[("key","value")], data)
  track = arr["track"]
  log("adding playnow track '#{track}'")
  if queue_count != 0 and action == "override" then
    server.execute("playnow.insert 0 #{track}")
    source.skip(source)
    print("skipping playnow queue")
  else
    server.execute("playnow.push #{track}")
    print("no skip required")
  end
  http_response(
    protocol=protocol,
    code=200,
    headers=[("Content-Type","application/json; charset=utf-8")],
    data='{"status":"success", "track": "#{track}", "action": "#{action}"}'
  )
end

######## live stuff below #######
playlist = playlist(reload=1, reload_mode="watch", "/etc/liquidsoap/playlist.xspf")
requested = crossfade(request.equeue(id="playnow"))
live = fallback(track_sensitive=false, transitions=[crossfade, crossfade], [requested, playlist])
output.harbor(%mp3, id="live", mount="live_radio", live)
harbor.http.register(port=MY_HARBOR_PORT, method="POST", "/playnow", playnow(live))
To use the above, you need to send a POST request with JSON data like so:
{"track":"http://mydomain/mysong.mp3"}
This also assumes you have the harbor running, which you should be able to verify using the Liquidsoap docs.
There are multiple ways of sending into the queue: telnet, an HTTP input, or a metadata request to playnow via the harbor. Let me know which one you opt for and I can provide a code example.
I am using the Zend_Mail_Storage_Imap library to retrieve e-mails from IMAP.
$mail = new Zend_Mail_Storage_Imap(array('connection details'));
foreach ($mail as $message)
{
    if ($message->date > $myDesiredDate)
    {
        // do stuff
    }
    else
    {
        continue;
    }
}
This code retrieves all the mails, oldest first. The variable $myDesiredDate is the date/time beyond which mails are not needed. Is there a way to skip retrieving all the mails and instead check each mail's date one by one? If not, can I reverse the $mail object so the latest email comes first?
UPDATE: I have now modified the code a little to start from the latest mail and check each mail's date/time. The moment I encounter an email older than the time limit, I break the loop.
// time up to which I want to fetch emails (in seconds from the current time)
$time = 3600;
$mail = new Zend_Mail_Storage_Imap(array('connection details'));

// get the total number of messages
$total = $mail->countMessages();

// loop through the mails, starting from the latest one
while ($total > 0)
{
    // strip the trailing timezone offset before parsing the date
    $date = $mail->getMessage($total)->date;
    $mailTime = strtotime(substr($date, 0, strlen($date) - 6));

    // stop once the email is older than the time limit
    if ($mailTime < (time() - $time)) {
        break;
    } else {
        // do my thing
    }
    $total--;
}

// close the mail connection
$mail->close();
The only thing I am concerned about here is whether I will get the mails in the correct order if I iterate from the message count down to 1.
Since my code is working absolutely fine, I will include it as an answer (quick and dirty). I now start from the latest mail and check each mail's date/time. The moment I encounter an email older than the time limit, I break the loop.
// time up to which I want to fetch emails (in seconds from the current time)
$time = 3600;
$mail = new Zend_Mail_Storage_Imap(array('connection details'));

// get the total number of messages
$total = $mail->countMessages();

// loop through the mails, starting from the latest one
while ($total > 0)
{
    // strip the trailing timezone offset before parsing the date
    $date = $mail->getMessage($total)->date;
    $mailTime = strtotime(substr($date, 0, strlen($date) - 6));

    // stop once the email is older than the time limit
    if ($mailTime < (time() - $time)) {
        break;
    } else {
        // do my thing
    }
    $total--;
}

// close the mail connection
$mail->close();
We're using Amazon SES to send emails, and it says our max send rate is 5 per second.
What happens if we send more than 5 per second? Do they queue or are they rejected?
We have a mailing list with over 1,000 people on it, and the emails are all sent in one go (and we are approved to use Amazon SES for this purpose).
Here's the code I'm using to send the email:
using System;
using System.Collections.Generic;
using System.Linq;
using Amazon.SimpleEmail.Model;

namespace Amazon
{
    public class Emailer
    {
        /// <summary>
        /// Send an email using the Amazon SES service
        /// </summary>
        public static void SendEmail(String from, String To, String Subject, String HTML = null, String emailReplyTo = null, String returnPath = null)
        {
            try
            {
                // Allow a comma-separated list of recipients
                List<String> to
                    = To
                    .Replace(", ", ",")
                    .Split(',')
                    .ToList();

                var destination = new Destination();
                destination.WithToAddresses(to);

                var subject = new Content();
                subject.WithCharset("UTF-8");
                subject.WithData(Subject);

                var html = new Content();
                html.WithCharset("UTF-8");
                html.WithData(HTML);

                var body = new Body();
                body.WithHtml(html);

                var message = new Message();
                message.WithBody(body);
                message.WithSubject(subject);

                var ses = AWSClientFactory.CreateAmazonSimpleEmailServiceClient("xxx", "xxx");

                var request = new SendEmailRequest();
                request.WithDestination(destination);
                request.WithMessage(message);
                request.WithSource(from);

                if (emailReplyTo != null)
                {
                    List<String> replyto
                        = emailReplyTo
                        .Replace(", ", ",")
                        .Split(',')
                        .ToList();
                    request.WithReplyToAddresses(replyto);
                }

                if (returnPath != null)
                    request.WithReturnPath(returnPath);

                SendEmailResponse response = ses.SendEmail(request);
                SendEmailResult result = response.SendEmailResult;
            }
            catch (Exception e)
            {
                // NOTE: all exceptions, including SES throttling errors, are silently swallowed here
            }
        }
    }
}
I think the requests are rejected if you try to send more messages per second than the allowed limit.
I found this on the SES blog: http://sesblog.amazon.com/post/TxKR75VKOYDS60/How-to-handle-a-quot-Throttling-Maximum-sending-rate-exceeded-quot-error
When you call Amazon SES faster than your maximum allocated send rate, Amazon SES will reject your over the limit requests with a "Throttling – Maximum sending rate exceeded" error.
A "Throttling – Maximum sending rate exceeded" error is retriable. This error is different than other errors returned by Amazon SES, such as sending from an email address that is not verified or sending to an email address that is blacklisted. Those errors indicate that the request will not be accepted in its current form. A request rejected with a "Throttling" error can be retried at a later time and is likely to succeed.
If they queued the requests that would be a great option, but our experience is that they don't. Please let me know if I have misunderstood something here.
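Since the blog post says a throttling error is retriable, one option is to catch it and retry with a short backoff. The following is only a minimal sketch, not the code above: it assumes the exception surfaced by the SDK mentions "Throttling" in its message, and it only helps if the empty catch block in the Emailer is removed so the exception can propagate.

// Hypothetical retry wrapper around any send action (e.g. a call to Emailer.SendEmail).
public static void SendWithRetry(Action send, int maxAttempts = 5)
{
    for (int attempt = 1; attempt <= maxAttempts; attempt++)
    {
        try
        {
            send();
            return; // sent successfully
        }
        catch (Exception e)
        {
            // Only retry on throttling; anything else (unverified address, etc.) is rethrown
            if (!e.Message.Contains("Throttling") || attempt == maxAttempts)
                throw;

            // Exponential backoff: 0.5s, 1s, 2s, ...
            System.Threading.Thread.Sleep(TimeSpan.FromMilliseconds(500 * Math.Pow(2, attempt - 1)));
        }
    }
}

It would be called as, for example, SendWithRetry(() => Emailer.SendEmail(from, to, subject, html));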
I've since found out the answer is that they are rejected.
If you attempt to send an email after reaching your daily sending quota (the maximum amount of email you can send in a 24-hour period) or your maximum sending rate (the maximum number of messages you can send per second), Amazon SES drops the message and doesn't attempt to redeliver it.
https://docs.aws.amazon.com/ses/latest/DeveloperGuide/reach-sending-limits.html
I'm stuck on this situation and am trying to find the best way of resolving it.
I have a workflow started and persisted using messaging activities.
The correlation between the initial Start command and the final Stop command works well if they are sent within a few seconds of each other.
Problems begin when the workflow is unloaded, because the subsequent Stop message throws the following FaultException:
If LoadWorkflowByInstanceKeyCommand.AssociateLookupKeyToInstanceId is not specified, the LookupInstanceKey must already be associated to an instance, or the LoadWorkflowByInstanceKeyCommand will fail. For this reason, it is invalid to also specify the LookupInstanceKey in the InstanceKeysToAssociate collection if AssociateLookupKeyToInstanceId isn't set
Can anybody help me?
The variables inside the workflow are of types int and XDocument.
This is the code to initialize the WorkflowServiceHost:
WorkflowServiceHost serviceHost = new WorkflowServiceHost(myWorkflow, new Uri(serviceUri));

ServiceDebugBehavior debug = serviceHost.Description.Behaviors.Find<ServiceDebugBehavior>();
if (debug == null)
{
    debug = new ServiceDebugBehavior();
    serviceHost.Description.Behaviors.Add(debug);
}
debug.IncludeExceptionDetailInFaults = true;

WorkflowIdleBehavior idle = serviceHost.Description.Behaviors.Find<WorkflowIdleBehavior>();
if (idle == null)
{
    idle = new WorkflowIdleBehavior();
    serviceHost.Description.Behaviors.Add(idle);
}
idle.TimeToPersist = TimeSpan.FromSeconds(2);
idle.TimeToUnload = TimeSpan.FromSeconds(10);

var behavior = new SqlWorkflowInstanceStoreBehavior
{
    ConnectionString = ConfigurationManager.ConnectionStrings["WorkflowPersistence"].ConnectionString,
    InstanceEncodingOption = InstanceEncodingOption.None,
    InstanceCompletionAction = InstanceCompletionAction.DeleteAll,
    InstanceLockedExceptionAction = InstanceLockedExceptionAction.BasicRetry,
    HostLockRenewalPeriod = new TimeSpan(00, 00, 30),
    RunnableInstancesDetectionPeriod = new TimeSpan(00, 00, 05)
};
serviceHost.Description.Behaviors.Add(behavior);

serviceHost.Open();
Looking at the database, it seems that the workflow is never suspended.
Any help appreciated,
thank you
I'm not really sure what is going on here, but it sounds like there are types used in the workflow that cannot be serialized, which prevents the workflow from being stored to disk. When you say "Looking at the database, it seems that the workflow is never suspended", do you really mean suspended? And why do you expect the workflow to be suspended?
What happens if you send just the start message to the workflow and wait 2 seconds? Do you get a new record in the persistence database?
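To check, you could query the instance store directly. This is only a minimal sketch under the assumption that you are using the default schema created by the SqlWorkflowInstanceStoreSchema.sql script; object names may differ in your setup:

-- List persisted workflow instances and their suspension state.
-- Assumes the standard [System.Activities.DurableInstancing] views.
SELECT InstanceId,
       CreationTime,
       IsSuspended,
       SuspensionReason
FROM [System.Activities.DurableInstancing].[Instances];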