What are possible reasons why a Google Apps Script with an hourly trigger ran 3 times within several minutes?

My Google Apps Script ran more often than its trigger is set for.
The purpose of the script is to check a Gmail inbox every hour and, if an automated email was not delivered, alert a Slack channel.
There is an automation that delivers an email to the Gmail address every hour, and a Gmail rule adds a label to these emails. The script checks for the label; if it is found, the label is removed and the email is marked as read. When there is no label, a webhook URL is called to send an alert.
However, the code was just executed 3 times within an hour instead of once, as the trigger is set. This resulted in 2 notifications to Slack.
Could someone help me understand what is wrong?
(screenshots: trigger configuration and the executions log)
function parseEmailByLabel() {
  var gmailLabelName = "ParseThis",
      externalHandlerScript = "https://hooks.slack.com/workflows/T1234",
      gmailLabelObject = GmailApp.getUserLabelByName(gmailLabelName),
      threads = gmailLabelObject.getThreads(),
      messages,
      message,
      params,
      response;
  if (threads.length > 0) {
    // The expected hourly email arrived: mark it read and clear the label.
    for (var i = 0; i < threads.length; i++) {
      messages = threads[i].getMessages();
      for (var j = 0; j < messages.length; j++) {
        message = messages[j];
        message.markRead();
      }
      threads[i].removeLabel(gmailLabelObject);
    }
  } else {
    // No labelled thread found: call the Slack webhook to raise an alert.
    params = {
      'method': 'post',
    };
    response = UrlFetchApp.fetch(externalHandlerScript, params).getContentText();
    Logger.log(response);
  }
}
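(One way to make the alert idempotent, whatever the root cause of the extra runs: record when the last alert went out and skip the webhook if it was recent. A minimal sketch, assuming a guard like this is checked before the UrlFetchApp.fetch() call; shouldAlert and the lastAlertAt property key are made-up names, not part of the original script:

function shouldAlert() {
  // Suppress a second alert fired within the same hourly window.
  var props = PropertiesService.getScriptProperties();
  var last = Number(props.getProperty("lastAlertAt") || 0);
  var now = new Date().getTime();
  if (now - last < 55 * 60 * 1000) {
    return false; // an alert already went out in the last 55 minutes
  }
  props.setProperty("lastAlertAt", String(now));
  return true;
}
)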

You may have created more than one trigger. You may check here:
https://script.google.com/home/triggers
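The triggers attached to a project can also be enumerated from code, which makes duplicates easy to spot and remove. A minimal sketch using the standard ScriptApp API (listTriggers is an arbitrary name):

function listTriggers() {
  // Log every trigger attached to this project; duplicates show up
  // as repeated handler/event pairs.
  var triggers = ScriptApp.getProjectTriggers();
  for (var i = 0; i < triggers.length; i++) {
    Logger.log(triggers[i].getHandlerFunction() + " -> " + triggers[i].getEventType());
    // To remove a duplicate: ScriptApp.deleteTrigger(triggers[i]);
  }
}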

Related

How to programmatically detect that a replication queue is blocked

On AEM as a Cloud Service, we are trying to send an email notification when the replication queue is stuck, via a custom ReplicationEventHandler. We used the agent manager to get the replication queue and are trying to add the send-email logic when the queue is blocked.
We have tried 2 approaches based on the API docs, neither of which seems to work.
Approach 1: This sends the emails multiple times, even when the queue is not blocked.
for (Agent agent : agentsMap.values()) {
    if (agent.isEnabled() && agent.getId().equals("publish")) {
        ReplicationQueue replicationQueue = agent.getQueue();
        if (replicationQueue.getStatus().getNextRetryTime() != 0) {
            Map<String, String> emailParams = new HashMap<>();
            emailParams.put("agentId", agent.getId());
            emailParams.put("agentName", agent.getConfiguration().getConfigPath());
            sendEmail(emailParams);
            log.info("::: Replication Queue Blocked :::");
        }
    }
}
Approach 2: This doesn't trigger an email, even when the queue is blocked.
if (agent.isValid() && agent.isEnabled()) {
    ReplicationQueue replicationQueue = agent.getQueue();
    if (!replicationQueue.entries().isEmpty()) {
        ReplicationQueue.Entry firstEntry = replicationQueue.entries().get(0);
        if (firstEntry.getNumProcessed() > 3) {
            // Send email that the queue is blocked
        }
    } else {
        // Queue is empty
    }
}
Looking for a solution. Thanks.

Running a Mirth channel with API requests to an external server is very slow to process

In the question Mirth HTTP POST request with Parameters using Javascript I used a variant of the first answer; the code is shown below.
I'm running this code for a file that has nearly 46,000 rows, which equates to about 46,000 requests hitting our external server. I'm seeing that Mirth makes requests to our API endpoint about 1.6 times per second. This is unusually slow, and I would like some help to understand whether this is related to Mirth or to the code below. Can repeated JavaImporter imports in a for loop cause slowdowns? Or is there a specific Mirth setting that limits the number of requests sent?
The version of Mirth is 3.12.0.
I started the process at 2:27 PM and it's expected to finish at almost 8:41 PM tonight; that's ridiculously slow.
//Skip the first header row
for (i = 1; i < msg['row'].length(); i++) {
    col1 = msg['row'][i]['column1'].toString();
    col2...
    ...
    //Insert into results if the file and sample aren't already present
    InsertIntoDatabase()
}

function InsertIntoDatabase() {
    with (JavaImporter(
            org.apache.commons.io.IOUtils,
            org.apache.http.client.methods.HttpPost,
            org.apache.http.client.entity.UrlEncodedFormEntity,
            org.apache.http.impl.client.HttpClients,
            org.apache.http.message.BasicNameValuePair,
            com.google.common.io.Closer)) {
        var closer = Closer.create();
        try {
            var httpclient = closer.register(HttpClients.createDefault());
            var httpPost = new HttpPost('http://<server_name>/InsertNewCorrection');
            var postParameters = [
                new BasicNameValuePair("col1", col1),
                new BasicNameValuePair(...
                ...
            ];
            httpPost.setEntity(new UrlEncodedFormEntity(postParameters, "UTF-8"));
            httpPost.setHeader('Content-Type', 'application/x-www-form-urlencoded');
            var response = closer.register(httpclient.execute(httpPost));
            var is = closer.register(response.entity.content);
            result = IOUtils.toString(is, 'UTF-8');
        } finally {
            closer.close();
        }
    }
    return result;
}
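A plausible contributor to the ~1.6 requests/second rate is that InsertIntoDatabase creates and closes a brand-new HttpClient (and its connection pool) for every row, so each of the ~46,000 requests pays full connection setup. A minimal sketch of the alternative, reusing one client for the whole file; this is an assumption about the bottleneck, not a confirmed fix, and the column handling is elided as in the original:

with (JavaImporter(
        org.apache.commons.io.IOUtils,
        org.apache.http.client.methods.HttpPost,
        org.apache.http.client.entity.UrlEncodedFormEntity,
        org.apache.http.impl.client.HttpClients,
        org.apache.http.message.BasicNameValuePair)) {
    // One client for the whole file: pooled connections are reused
    // instead of being opened and torn down once per row.
    var httpclient = HttpClients.createDefault();
    try {
        for (i = 1; i < msg['row'].length(); i++) {
            var postParameters = [
                new BasicNameValuePair("col1", msg['row'][i]['column1'].toString())
                // ...remaining columns as in the original
            ];
            var httpPost = new HttpPost('http://<server_name>/InsertNewCorrection');
            httpPost.setEntity(new UrlEncodedFormEntity(postParameters, "UTF-8"));
            httpPost.setHeader('Content-Type', 'application/x-www-form-urlencoded');
            var response = httpclient.execute(httpPost);
            try {
                var result = IOUtils.toString(response.getEntity().getContent(), 'UTF-8');
            } finally {
                response.close();
            }
        }
    } finally {
        httpclient.close();
    }
}

If the endpoint can tolerate it, the other big lever is parallelism, e.g. sending each row as its own message to a destination with queuing and multiple queue threads enabled; whether that's appropriate depends on the API.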

OPC UA client: capturing lost item values from the UA server after a disconnect/connection error

I am building an OPC UA client using the OPC Foundation SDK. I am able to create a subscription containing some MonitoredItems.
On the OPC UA server these monitored items change value constantly (every second or so).
I want to disconnect the client (simulate a broken connection), keep the subscription alive, and wait for a while. Then I reconnect and have my subscriptions back, but I also want all the monitored item values that were queued up during the disconnect. Right now I only get the last server value on reconnect.
I am setting a queuesize:
monitoredItem.QueueSize = 100;
To roughly simulate a connection error, I have set "delete subscriptions" to false on CloseSession:
m_session.CloseSession(new RequestHeader(), false);
My question is how to capture the content of the queue after a disconnect/connection error.
Should the 'lost values' arrive as new MonitoredItem_Notification events automatically when the client reconnects?
Should the SubscriptionId be the same as before the connection was broken?
Should the SessionId be the same, or will a new SessionId let me keep the existing subscriptions? And what is the best way to simulate a connection error?
Many questions :-)
Below is a sample from the code where I create the subscription containing some MonitoredItems, plus the MonitoredItem_Notification event method.
Any OPC UA guru out there?
if (node.Displayname == "node to monitor")
{
    MonitoredItem mon = CreateMonitoredItem((NodeId)node.reference.NodeId, node.Displayname);
    m_subscription.AddItem(mon);
    m_subscription.ApplyChanges();
}
private MonitoredItem CreateMonitoredItem(NodeId nodeId, string displayName)
{
    if (m_subscription == null)
    {
        m_subscription = new Subscription(m_session.DefaultSubscription);
        m_subscription.PublishingEnabled = true;
        m_subscription.PublishingInterval = 3000; //1000;
        m_subscription.KeepAliveCount = 10;
        m_subscription.LifetimeCount = 10;
        m_subscription.MaxNotificationsPerPublish = 1000;
        m_subscription.Priority = 100;
        bool cache = m_subscription.DisableMonitoredItemCache;
        m_session.AddSubscription(m_subscription);
        m_subscription.Create();
    }

    // Add the new monitored item.
    MonitoredItem monitoredItem = new MonitoredItem(m_subscription.DefaultItem);
    // Each time a monitored item is sampled, the server evaluates the sample using a filter defined for that item.
    // The server uses the filter to decide whether the sample should be reported. The filter type depends on the item type:
    // DataChangeFilter for variables, EventFilter when monitoring events, etc.
    //MonitoringFilter f = new MonitoringFilter();
    //DataChangeFilter f = new DataChangeFilter();
    //f.DeadbandValue
    monitoredItem.StartNodeId = nodeId;
    monitoredItem.AttributeId = Attributes.Value;
    monitoredItem.DisplayName = displayName;
    // Disabled, Sampling, Reporting (includes sampling)
    monitoredItem.MonitoringMode = MonitoringMode.Reporting;
    // How often the client wishes the server to check for new values. Must be 0 if the item is an event.
    // If negative, the SamplingInterval is set equal to the PublishingInterval (inherited).
    // The subscription's KeepAliveCount should always be longer than the SamplingInterval/PublishingInterval.
    monitoredItem.SamplingInterval = 500;
    // Number of samples stored on the server between each publish
    monitoredItem.QueueSize = 100;
    monitoredItem.DiscardOldest = true; // discard oldest values when the queue is full
    monitoredItem.CacheQueueSize = 100;
    monitoredItem.Notification += MonitoredItem_Notification;

    if (ServiceResult.IsBad(monitoredItem.Status.Error))
    {
        return null;
    }
    return monitoredItem;
}
private void MonitoredItem_Notification(MonitoredItem monitoredItem, MonitoredItemNotificationEventArgs e)
{
    if (this.InvokeRequired)
    {
        this.BeginInvoke(new MonitoredItemNotificationEventHandler(MonitoredItem_Notification), monitoredItem, e);
        return;
    }
    try
    {
        if (m_session == null)
        {
            return;
        }
        MonitoredItemNotification notification = e.NotificationValue as MonitoredItemNotification;
        if (notification == null)
        {
            return;
        }
        string sess = m_session.SessionId.Identifier.ToString();
        string s = string.Format(" MonitoredItem: {0}\t Value: {1}\t Status: {2}\t SourceTimeStamp: {3}",
            monitoredItem.DisplayName,
            notification.Value.WrappedValue.ToString(),
            notification.Value.StatusCode.ToString(),
            notification.Value.SourceTimestamp.ToLocalTime().ToString("HH:mm:ss.fff"));
        richTextBox1.AppendText(s + " SessionId: " + sess);
    }
    catch (Exception exception)
    {
        ClientUtils.HandleException(this.Text, exception);
    }
}
I don't know how much of this, if any, the SDK you're using does for you, but the approach when reconnecting is generally:
1. Try to resume (re-activate) your old session. If this is successful, your subscriptions will still exist and all you need to do is send more PublishRequests. Since you're testing by closing the session, this probably won't work.
2. Create a new session and then call the TransferSubscriptions service to transfer the previous subscriptions to your new session. You can then start sending PublishRequests and you'll get the queued notifications.
Again, depending on the stack/SDK/toolkit you're using, some or none of this may be handled for you.

CakePHP email timeout & PHP maximum execution time

My application uses an Exchange SMTP server for sending emails. At present we don't have any message queuing in our architecture and emails are sent as part of the HTTP request/response cycle.
Occasionally the Exchange server has issues and times out while the email is sending, and as a result the email doesn't send. Sometimes, Cake recognizes the time out and throws an exception. The application code can catch the exception and report to the user that something went wrong.
However, on other occasions PHP hits its maximum execution time before Cake can throw the exception and so the user just gets an error 500 with no useful information as to what happened.
In an effort to combat this, I overrode CakeEmail::send() in a custom class CustomEmail (extending CakeEmail) as follows:
public function send($content = null)
{
    //get PHP and email timeout values
    $phpTimeout = ini_get("max_execution_time");
    //if $this->_transportClass is debug, just invoke parent::send() and return
    if (!$this->_transportClass instanceof SmtpTransport) {
        return parent::send($content);
    }
    $cfg = $this->_transportClass->config();
    $emailTimeout = isset($cfg["timeout"]) && $cfg["timeout"] ? $cfg["timeout"] : 30;
    //if PHP max execution time is set (and isn't 0), set it to the email timeout plus 1 second;
    //this should mean the SMTP server always times out before PHP does
    if ($phpTimeout) {
        set_time_limit($emailTimeout + 1);
    }
    //send email
    $send = parent::send($content);
    //reset PHP timeout to previous value
    set_time_limit($phpTimeout);
    return $send;
}
However, this isn't always successful, and I have had a few instances of this:
Fatal Error: Maximum execution time of 31 seconds exceeded in [C:\path\app\Vendor\pear-pear.cakephp.org\CakePHP\Cake\Network\CakeSocket.php, line 303]
CakeSocket.php line 303 is the $buffer = fread()... line in CakeSocket::read():
public function read($length = 1024) {
    if (!$this->connected) {
        if (!$this->connect()) {
            return false;
        }
    }

    if (!feof($this->connection)) {
        $buffer = fread($this->connection, $length);
        $info = stream_get_meta_data($this->connection);
        if ($info['timed_out']) {
            $this->setLastError(E_WARNING, __d('cake_dev', 'Connection timed out'));
            return false;
        }
        return $buffer;
    }
    return false;
}
Any ideas?
The problem lies here:
//if PHP max execution time is set (and isn't 0), set it to the email timeout plus 1 second;
//this should mean the SMTP server always times out before PHP does
if ($phpTimeout) {
    set_time_limit($emailTimeout + 1);
}
In some places in my code I was increasing the maximum execution time to more than 30 seconds, but the email timeout was still only 30. This code then reverted the PHP timeout down to 31 seconds, and presumably other work happening before the email started to send used up that budget.
Fixed code:
//only raise the limit when the current PHP timeout would fire before the email timeout;
//this way an already-higher max_execution_time is never reduced
if ($phpTimeout && $phpTimeout <= $emailTimeout) {
    set_time_limit($emailTimeout + 1);
}

Retrieve mails up to a specific date from IMAP using Zend

I am using the Zend_Mail_Storage_Imap library to retrieve e-mails over IMAP.
$mail = new Zend_Mail_Storage_Imap(array('connection details'));
foreach ($mail as $message) {
    if ($message->date > $myDesiredDate) {
        //do stuff
    } else {
        continue;
    }
}
This code retrieves all the mails, oldest first. The variable $myDesiredDate is the cut-off date/time; mails from before it are not needed. Is there a way to skip retrieving all the mails and instead check each mail's date one by one? If not, can I reverse the $mail object to get the latest email at the top?
UPDATE: I have now modified the code a little to start from the latest mail and check the date/time of each message. The moment I encounter an email older than the time beyond which I don't want to parse emails, I break the loop.
//time up to which I want to fetch emails (in seconds from current time)
$time = 3600;
$mail = new Zend_Mail_Storage_Imap(array('connection details'));
//get total number of messages
$total = $mail->countMessages();
//loop through the mails, starting from the latest mail
while ($total > 0) {
    //strip the trailing timezone offset before parsing the date header
    $mailTime = strtotime(substr($mail->getMessage($total)->date, 0, strlen($mail->getMessage($total)->date) - 6));
    //check if the email was received before the time limit
    if ($mailTime < (time() - $time)) {
        break;
    } else {
        //do my thing
        $total--;
    }
}
//close mail connection
$mail->close();
The only thing that concerns me here is whether I will get the mails in the correct order if I start from the message count and work down to 1.
Since my code is working absolutely fine, I shall include it as the (quick and dirty) answer: start from the latest mail and check the date/time of each message, breaking the loop at the first email older than the cut-off. The code is the same as in the update above.