My application uses an Exchange SMTP server for sending emails. At present we don't have any message queuing in our architecture and emails are sent as part of the HTTP request/response cycle.
Occasionally the Exchange server has issues and times out while the email is being sent, and as a result the email doesn't send. Sometimes CakePHP recognizes the timeout and throws an exception, which the application code can catch so it can report to the user that something went wrong.
However, on other occasions PHP hits its maximum execution time before Cake can throw the exception, so the user just gets a 500 error with no useful information about what happened.
In an effort to combat this, I overrode CakeEmail::send() in a custom class CustomEmail (extending CakeEmail) as follows:
public function send($content = null)
{
    //get the current PHP max execution time
    $phpTimeout = ini_get("max_execution_time");
    //if the transport isn't SMTP (e.g. Debug), just invoke parent::send() and return
    if (!$this->_transportClass instanceof SmtpTransport) {
        return parent::send($content);
    }
    $cfg = $this->_transportClass->config();
    $emailTimeout = isset($cfg["timeout"]) && $cfg["timeout"] ? $cfg["timeout"] : 30;
    //if PHP max execution time is set (and isn't 0), set it to the email timeout plus
    //1 second; this should mean the SMTP server always times out before PHP does
    if ($phpTimeout) {
        set_time_limit($emailTimeout + 1);
    }
    //send email
    $send = parent::send($content);
    //reset PHP timeout to previous value
    set_time_limit($phpTimeout);
    return $send;
}
However, this isn't always successful, and I have had a few instances of this:
Fatal Error: Maximum execution time of 31 seconds exceeded in [C:\path\app\Vendor\pear-pear.cakephp.org\CakePHP\Cake\Network\CakeSocket.php, line 303]
Line 303 of CakeSocket.php is the $buffer = fread(...) line in this CakeSocket::read() method:
public function read($length = 1024) {
    if (!$this->connected) {
        if (!$this->connect()) {
            return false;
        }
    }

    if (!feof($this->connection)) {
        $buffer = fread($this->connection, $length);
        $info = stream_get_meta_data($this->connection);
        if ($info['timed_out']) {
            $this->setLastError(E_WARNING, __d('cake_dev', 'Connection timed out'));
            return false;
        }
        return $buffer;
    }
    return false;
}
Any ideas?
The problem lies here:
//if PHP max execution time is set (and isn't 0), set it to the email timeout plus
//1 second; this should mean the SMTP server always times out before PHP does
if ($phpTimeout) {
    set_time_limit($emailTimeout + 1);
}
In some places in my code I was increasing the PHP max execution time to more than 30 seconds, but the email timeout was still only 30. This code then knocked the PHP limit back down to 31 seconds, and I'm guessing other work happening before the email started to send used up that budget and caused the issues.
Fixed code:
//only raise the PHP limit when it is at or below the email timeout; if it is
//already longer than the email timeout, leave it alone
if ($phpTimeout && $phpTimeout <= $emailTimeout) {
    set_time_limit($emailTimeout + 1);
}
Related
Here is my server-side read code.
void ServerSession::doRead()
{
    // note: sbuf_ is a std::string
    asio::async_read_until(socket_, asio::dynamic_buffer(sbuf_), "\n",
        [this](std::error_code ec, std::size_t length)
        {
            if (!ec || ec == asio::error::eof)
            {
                printf("length = %lu [S] received str size = %lu, Client sent : %s\n",
                       length, sbuf_.size(), sbuf_.data());
                if (sbuf_.size() > 0)
                {
                    std::string msg{sbuf_};
                    addMessageToQueue(std::move(msg));
                    sbuf_.clear();
                }
            }
            else
            {
                socket_.close(); // force close the socket upon read error
            }
        });
}
I run it and connect using a TCP client. I send some text, say "A 1", and the server receives it correctly, but when I send the next string, say "B 12", it doesn't receive it.
I tried multiple connections. For every connection I establish with the server, the server receives the first string the client sends, and after that there is silence. I added many log statements to the code, but I never see them when I send the second string.
I have a simple XMLHttpRequest handler written in C. It reads and processes requests coming from a JavaScript XMLHttpRequest send() running in a browser.
The parent process accepts incoming connections and forks a child process for each incoming connection to read and process the data.
It works perfectly for most requests, but fails in some cases (apparently related to the network infrastructure between the client and the server) if the request is over about 2 kB in length. I'm assuming that the request is being broken into multiple packets somewhere between the browser and my socket server.
I can't change the request format, but I can see the request being sent and verify the content. The data is a 'GET' with an encoded URI that contains a 'type' field. If the type is 'file', the request could be as long as 3 kB, otherwise it's a couple of hundred bytes at most. 'File' requests are rare - the user is providing configuration data to be written to a file on the server. All other requests work fine, and any 'file' requests shorter than about 2 kB work fine.
What's the preferred technique for ensuring that I have all of the data in this situation?
Here's the portion of the parent that accepts the connection and forks the child (non-blocking version):
for (hit = 1;; hit++) {
    length = sizeof(cli_addr);
    if ((socketfd = accept4(listensd, (struct sockaddr *) &cli_addr, &length, SOCK_NONBLOCK)) < 0) {
    //if ((socketfd = accept(listensd, (struct sockaddr *) &cli_addr, &length)) < 0) {
        exit(3);
    }
    if ((pid = fork()) < 0) {
        exit(3);
    } else {
        if (pid == 0) {  /* child */
            //(void) close(listensd);
            childProcess(socketfd, hit);  /* never returns. Close listensd when done */
        } else {  /* parent */
            (void) close(socketfd);
        }
    }
}
Here's the portion of the child process that performs the initial recv(). In the case of long 'file' requests, the child's first socket recv() gets about 1700 bytes of payload followed by the browser-supplied connection data.
ret = recv(socketfd, recv_data, BUFSIZE, 0);  // read request
if (ret == 0 || ret == -1) {  // read failure, stop now
    sprintf(sbuff, "failed to read request: %d", ret);
    logger(&shm, FATAL, sbuff, socketfd);
}
recv_data[ret] = 0;
len = ret;
If the type is 'file', there could be more data, but the child process never gets the rest of it. If the socket is blocking, a second read attempt simply hangs. If the socket is non-blocking (as in the snippet below), all subsequent reads return -1 with errno EAGAIN ('Resource temporarily unavailable') until the connection times out:
// It's a file. Could be broken into multiple blocks. Try a second read.
sleep(1);
ret = recv(socketfd, &recv_data[len], BUFSIZE, 0);  // read request
while (ret != 0) {
    if (ret > 0) {
        recv_data[len + ret] = 0;
        len += ret;
    } else {
        sleep(1);
    }
    ret = recv(socketfd, &recv_data[len], BUFSIZE, 0);  // read request
}
I expected that read() would return 0 when the client closes the connection, but that doesn't happen.
A GET request has only a head and no body (well, almost always), so you have everything the client has sent as soon as you have the request head, and you know you have read the whole head when you read a blank line, i.e. two consecutive line breaks (and no sooner or later).
If the client has sent only part of the head, without the blank line, you are supposed to wait for the rest. I would put a time-out on that and reject the whole request if it takes too long.
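For illustration, here is a minimal sketch of that approach in C (read_request_head is a hypothetical helper, not part of the code above): keep calling recv() until the buffer contains the \r\n\r\n that ends the head, and use select() to enforce the time-out.
#include <string.h>
#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>

/* Hypothetical helper: read until the blank line that ends a request head,
   or give up after timeout_sec seconds. Returns bytes read, or -1. */
int read_request_head(int sockfd, char *buf, size_t bufsize, int timeout_sec)
{
    size_t len = 0;
    while (len < bufsize - 1) {
        fd_set fds;
        struct timeval tv;
        FD_ZERO(&fds);
        FD_SET(sockfd, &fds);
        tv.tv_sec = timeout_sec;
        tv.tv_usec = 0;
        /* wait until data is available; on time-out, reject the request */
        if (select(sockfd + 1, &fds, NULL, NULL, &tv) <= 0)
            return -1;
        ssize_t n = recv(sockfd, buf + len, bufsize - 1 - len, 0);
        if (n <= 0)  /* error, or peer closed before finishing the head */
            return -1;
        len += (size_t) n;
        buf[len] = 0;
        if (strstr(buf, "\r\n\r\n") != NULL)  /* blank line: head is complete */
            return (int) len;
    }
    return -1;  /* head too large for the buffer */
}
This works with a non-blocking socket too, since select() only returns when there is something to read. Note also that the read length shrinks as the buffer fills (bufsize - 1 - len), whereas the second-read loop in the question always passes BUFSIZE, which can overrun recv_data once len > 0.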
BTW there are still browsers out there, and maybe some proxies as well, with a URL length limit of about 2000 characters.
I am building an OPC UA client using the OPC Foundation SDK. I am able to create a subscription containing some MonitoredItems.
On the OPC UA server these monitored items change value constantly (every second or so).
I want to disconnect the client (to simulate a broken connection), keep the subscription alive on the server, and wait for a while. Then I reconnect and get my subscriptions back, but I also want all the MonitoredItem values that were queued up during the disconnect. Right now I only get the last server value on reconnect.
I am setting a queue size:
monitoredItem.QueueSize = 100;
To roughly simulate a connection error, I have set the "delete subscriptions" flag to false on CloseSession:
m_session.CloseSession(new RequestHeader(), false);
My question is: how do I capture the contents of the queue after a disconnect/connection error?
Should the 'lost values' arrive as new MonitoredItem_Notification events automatically when the client reconnects?
Should the SubscriptionId be the same as before the connection was broken?
Should the SessionId be the same, or will a new SessionId let me keep the existing subscriptions? What is the best way to simulate a connection error?
Many questions :-)
Below is a sample of the code where I create the subscription containing some MonitoredItems, along with the MonitoredItem_Notification event method.
Any OPC UA guru out there?
if (node.Displayname == "node to monitor")
{
    MonitoredItem mon = CreateMonitoredItem((NodeId)node.reference.NodeId, node.Displayname);
    m_subscription.AddItem(mon);
    m_subscription.ApplyChanges();
}
private MonitoredItem CreateMonitoredItem(NodeId nodeId, string displayName)
{
    if (m_subscription == null)
    {
        m_subscription = new Subscription(m_session.DefaultSubscription);
        m_subscription.PublishingEnabled = true;
        m_subscription.PublishingInterval = 3000; //1000;
        m_subscription.KeepAliveCount = 10;
        m_subscription.LifetimeCount = 10;
        m_subscription.MaxNotificationsPerPublish = 1000;
        m_subscription.Priority = 100;
        bool cache = m_subscription.DisableMonitoredItemCache;
        m_session.AddSubscription(m_subscription);
        m_subscription.Create();
    }

    // Add the new monitored item.
    MonitoredItem monitoredItem = new MonitoredItem(m_subscription.DefaultItem);
    // Each time a monitored item is sampled, the server evaluates the sample using a
    // filter defined for each monitored item. The server uses the filter to determine
    // whether the sample should be reported. The type of filter depends on the type of
    // item: DataChangeFilter for a Variable, EventFilter when monitoring Events, etc.
    //MonitoringFilter f = new MonitoringFilter();
    //DataChangeFilter f = new DataChangeFilter();
    //f.DeadbandValue
    monitoredItem.StartNodeId = nodeId;
    monitoredItem.AttributeId = Attributes.Value;
    monitoredItem.DisplayName = displayName;
    // Disabled, Sampling, or Reporting (Reporting includes sampling).
    monitoredItem.MonitoringMode = MonitoringMode.Reporting;
    // How often the client wishes the server to check for new values. Must be 0 if the
    // item is an event. If negative, the SamplingInterval is set equal to the
    // PublishingInterval (inherited). The subscription's keep-alive should always
    // outlast the SamplingInterval/PublishingInterval.
    monitoredItem.SamplingInterval = 500;
    // Number of samples stored on the server between each reporting.
    monitoredItem.QueueSize = 100;
    monitoredItem.DiscardOldest = true; // discard oldest values when the queue is full
    monitoredItem.CacheQueueSize = 100;
    monitoredItem.Notification += m_MonitoredItem_Notification;

    if (ServiceResult.IsBad(monitoredItem.Status.Error))
    {
        return null;
    }
    return monitoredItem;
}
private void MonitoredItem_Notification(MonitoredItem monitoredItem, MonitoredItemNotificationEventArgs e)
{
    if (this.InvokeRequired)
    {
        this.BeginInvoke(new MonitoredItemNotificationEventHandler(MonitoredItem_Notification), monitoredItem, e);
        return;
    }

    try
    {
        if (m_session == null)
        {
            return;
        }

        MonitoredItemNotification notification = e.NotificationValue as MonitoredItemNotification;
        if (notification == null)
        {
            return;
        }

        string sess = m_session.SessionId.Identifier.ToString();
        string s = string.Format(" MonitoredItem: {0}\t Value: {1}\t Status: {2}\t SourceTimeStamp: {3}",
            monitoredItem.DisplayName,
            notification.Value.WrappedValue.ToString(),
            notification.Value.StatusCode.ToString(),
            notification.Value.SourceTimestamp.ToLocalTime().ToString("HH:mm:ss.fff"));
        richTextBox1.AppendText(s + "SessionId: " + sess);
    }
    catch (Exception exception)
    {
        ClientUtils.HandleException(this.Text, exception);
    }
}
I don't know how much of this, if any, the SDK you're using does for you, but the general approach when reconnecting is:
1. Try to resume (re-activate) your old session. If this is successful, your subscriptions will still exist and all you need to do is send more PublishRequests. Since you're testing by closing the session, this probably won't work.
2. Create a new session and then call the TransferSubscriptions service to transfer the previous subscriptions to the new session. You can then start sending PublishRequests and you'll get the queued notifications.
Again, depending on the stack/SDK/toolkit you're using, some or none of this may be handled for you.
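For option 2, a rough sketch against the OPC Foundation .NET stack follows: capture m_subscription.Id before the connection breaks, then transfer that id onto the fresh session. The TransferSubscriptions call shown here is the raw service method inherited from SessionClient; treat the exact overload as an assumption to verify against your SDK version.
using Opc.Ua;
using Opc.Ua.Client;

// Rough sketch only: move a subscription that survived a broken session onto
// a freshly created session, keeping its queued notifications.
public static void TransferAfterReconnect(Session newSession, uint oldSubscriptionId)
{
    TransferResultCollection results;
    DiagnosticInfoCollection diagnosticInfos;

    // Ask the server to re-home the subscription onto the new session.
    // sendInitialValues = false keeps the queued values instead of replacing
    // them with a snapshot of the current values.
    newSession.TransferSubscriptions(
        null,                                        // default request header
        new UInt32Collection { oldSubscriptionId },  // subscription(s) to move
        false,                                       // sendInitialValues
        out results,
        out diagnosticInfos);

    // From here, keep PublishRequests flowing (the SDK's Subscription wrapper
    // normally does this for you) and the queued MonitoredItem notifications
    // arrive through the usual Notification events.
}
This also answers two of the questions above: the SubscriptionId stays the same across the transfer while the SessionId is new, and the subscription only survives while its lifetime (LifetimeCount x PublishingInterval) has not expired, so LifetimeCount = 10 with a 3000 ms publishing interval gives you roughly 30 seconds to reconnect.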
I'm using the Vert.x (vertx.io) web framework to send a list of items to a downstream HTTP server.
records.records() emits 4 records, and I have deliberately pointed the web client at the wrong IP/port.
"Processing..." prints 4 times.
"Exception outer!" prints 3 times.
If I put back the proper IP/port, then "Subscribe outer!" prints 4 times.
io.reactivex.Flowable
    .fromIterable(records.records())
    .flatMap(inRecord -> {
        System.out.println("Processing...");
        // Do stuff here....
        Observable<Buffer> bodyBuffer = Observable.just(Buffer.buffer(...));
        Single<HttpResponse<Buffer>> request = client
            .post(..., ..., ...)
            .rxSendStream(bodyBuffer);
        return request.toFlowable();
    })
    .subscribe(record -> {
        System.out.println("Subscribe outer!");
    }, ex -> {
        System.out.println("Exception outer! " + ex.getMessage());
    });
UPDATE:
I now understand that Rx stops right away on error. Is there a way to continue and process all records regardless, and get an error for each?
Given this article: https://medium.com/@jagsaund/5-not-so-obvious-things-about-rxjava-c388bd19efbc
I have come up with this. Do you see anything wrong with it?
io.reactivex.Flowable
    .fromIterable(records.records())
    .flatMap(inRecord -> {
        Observable<Buffer> bodyBuffer = Observable.just(Buffer.buffer(inRecord.toString()));
        Single<HttpResponse<Buffer>> request = client
            .post("xxxxxx", "xxxxxx", "xxxxxx")
            .rxSendStream(bodyBuffer);
        // So we can capture how long each request took.
        final long startTime = System.currentTimeMillis();
        return request.toFlowable()
            .doOnNext(response -> {
                // Capture total time and print it with the logs. Removed below for brevity.
                long processTimeMs = System.currentTimeMillis() - startTime;
                int status = response.statusCode();
                if (status == 200)
                    logger.info("Success!");
                else
                    logger.error("Failed!");
            })
            .doOnError(ex -> {
                long processTimeMs = System.currentTimeMillis() - startTime;
                logger.error("Failed! Exception.", ex);
            })
            .doOnTerminate(() -> {
                // Do some extra stuff here...
            })
            .onErrorResumeNext(Flowable.empty()); // This allows the outer stream to continue.
    })
    .subscribe(); // Don't handle here. We subscribe to the inner events.
Is there a way to continue and process all records regardless and get an error for each?
According to the docs, the observable is terminated as soon as it encounters an error, so you can't get each individual error in onError.
You can use onErrorReturn or onErrorResumeNext() to tell the upstream what to do if it encounters an error (e.g. emit a fallback item, or continue with Flowable.empty()).
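If you also want each failure delivered downstream as a value rather than just logged, one pattern is to map both outcomes of the inner stream into a small wrapper before swallowing the error. A sketch reusing the setup from the question (RecordResult is a hypothetical class, not part of Vert.x or RxJava):
// Hypothetical wrapper: one emission per record, carrying a response or an error.
final class RecordResult {
    final Object record;
    final HttpResponse<Buffer> response; // null on failure
    final Throwable error;               // null on success
    RecordResult(Object record, HttpResponse<Buffer> response, Throwable error) {
        this.record = record;
        this.response = response;
        this.error = error;
    }
}

io.reactivex.Flowable
    .fromIterable(records.records())
    .flatMap(inRecord -> {
        Observable<Buffer> bodyBuffer = Observable.just(Buffer.buffer(inRecord.toString()));
        return client
            .post("xxxxxx", "xxxxxx", "xxxxxx")
            .rxSendStream(bodyBuffer)
            .toFlowable()
            .map(resp -> new RecordResult(inRecord, resp, null))
            // Turn the failure into a value so the outer Flowable keeps going,
            // but the subscriber still learns which record failed and why.
            .onErrorReturn(ex -> new RecordResult(inRecord, null, ex));
    })
    .subscribe(result -> {
        if (result.error == null)
            logger.info("Success!");
        else
            logger.error("Failed: " + result.record, result.error);
    });
onErrorReturn here plays the same role as onErrorResumeNext(Flowable.empty()) in the update above, except that the failure is emitted as data instead of being dropped.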
I am using the Zend_Mail_Storage_Imap library to retrieve e-mails via IMAP.
$mail = new Zend_Mail_Storage_Imap(array('connection details'));
foreach ($mail as $message)
{
    if ($message->date > $myDesiredDate)
    {
        //do stuff
    }
    else
    {
        continue;
    }
}
This code retrieves all the mails, with the oldest mail first. The variable $myDesiredDate is the date/time beyond which mails are not needed. Is there a way to skip retrieving all the mails and check each mail's date one by one? If not, can I reverse the $mail object to get the latest email at the top?
UPDATE: I have now modified the code a little to start from the latest mail and check the date/time of the current mail. The moment I encounter an email received before the cutoff time, I break the loop.
//time up to which I want to fetch emails (in seconds from current time)
$time = 3600;
$mail = new Zend_Mail_Storage_Imap(array('connection details'));
//get total number of messages
$total = $mail->countMessages();
//loop through the mails, starting from the latest mail
while ($total > 0)
{
    //strip the trailing timezone offset from the date header before parsing
    $date = $mail->getMessage($total)->date;
    $mailTime = strtotime(substr($date, 0, strlen($date) - 6));
    //check if the email was received before the time limit
    if ($mailTime < (time() - $time))
    {
        break;
    }
    //do my thing
    $total--;
}
//close mail connection
$mail->close();
The only thing I am concerned about here is whether I will get the mails in the correct order if I iterate from the message count down to 1.
Since my code is working absolutely fine, I shall include it as an answer (quick and dirty). I now start from the latest mail and check the date/time of the current mail; the moment I encounter an email received before the cutoff time, I break the loop. (IMAP message numbers run from 1 for the oldest message up to countMessages() for the most recently added one, so for a normal mailbox where messages are appended as they arrive, counting down from the total yields newest-first order.)
//time up to which I want to fetch emails (in seconds from current time)
$time = 3600;
$mail = new Zend_Mail_Storage_Imap(array('connection details'));
//get total number of messages
$total = $mail->countMessages();
//loop through the mails, starting from the latest mail
while ($total > 0)
{
    //strip the trailing timezone offset from the date header before parsing
    $date = $mail->getMessage($total)->date;
    $mailTime = strtotime(substr($date, 0, strlen($date) - 6));
    //check if the email was received before the time limit
    if ($mailTime < (time() - $time))
    {
        break;
    }
    //do my thing
    $total--;
}
//close mail connection
$mail->close();