phpList settings to send 50,000 emails per hour

I have a VPS with 2 GB of RAM running cPanel/WHM and have installed the latest version of phpList.
I created a batch of 2,000 for my list. My phpList settings are as follows:
define('USE_DOMAIN_THROTTLE',0.2);
define('DOMAIN_BATCH_SIZE',800);
define('DOMAIN_BATCH_PERIOD',120);
Still, it is not sending more than 300 emails and is taking too much time:
A process for this page is already running and it was still alive 532 seconds ago
Sleeping for 20 seconds, aborting will quit
A process for this page is already running and it was still alive 593 seconds ago
Sleeping for 20 seconds, aborting will quit
Started
Processing has started, 1 message(s) to process.
Please leave this window open. You have batch processing enabled, so it will reload
several times to send the messages. Reports will be sent by email to
xxx@yahoo.com
Processing message 79
Looking for subscribers
Found them: 1392 to process
Sending in batches of 10000 emails

I loaded phpList on my VPS (1 GB RAM, 20 GB HDD, 1 core, Ubuntu with LAMP and sendmail) and it is running as I write this answer. I did not enable or tweak any of the above settings; I left them as they are. There are some issues if the batch size is more than 1,000 mails, and processing the queue takes some time if domain throttling is not enabled. Check the queue-processing settings in the config_extended.php file; as they say, it might help the queue process better.
Also check
http://docs.phplist.com/ProcessQueueInfo.html
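For reference, these are the batch and throttle settings I would review in config.php or config_extended.php. The values below are only illustrative, not tuned recommendations:
define('MAILQUEUE_BATCH_SIZE', 1000);    // emails sent per batch; 0 means no batch limit
define('MAILQUEUE_BATCH_PERIOD', 3600);  // seconds over which one batch is spread
define('MAILQUEUE_THROTTLE', 1);         // seconds to wait between individual emails
define('USE_DOMAIN_THROTTLE', 1);        // 0/1 flag to enable per-domain throttling
define('DOMAIN_BATCH_SIZE', 120);        // max emails per destination domain per period
define('DOMAIN_BATCH_PERIOD', 3600);     // per-domain period, in seconds
Note that USE_DOMAIN_THROTTLE is, as far as I know, a 0/1 flag, so the 0.2 in the question probably does not do what was intended.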
I also suggest using phpList with an SMTP server (installed on the same VPS) or an SMTP relay; that gives you better mail sending. It might help you.
Good luck!
The following is copied from my processqueue run:
Started
Processing has started,
One campaign to process.
Please leave this window open. phpList will process your queue until all messages have been sent. This may take a while
Report of processing will be sent by email
Processing message 8
Looking for subscribers
Found them: 6594 to process
Sending in batches of 1000 emails
Processing batch of: 1000
Size of HTML email: 4Kb
Processed 188 out of 6594 subscribers
Script stage: 5
187 messages sent in 60.05 seconds (11210 msgs/hr)
Finished this run
Less than batch size were sent, so reloading imminently
This batch will be 923 emails, because in the last 30 seconds 77 emails were sent
Processing batch of: 923
Size of HTML email: 4Kb
Processed 183 out of 6407 subscribers
Script stage: 5
182 messages sent in 60.07 seconds (10906 msgs/hr)
Finished this run
Less than batch size were sent, so reloading imminently
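Incidentally, you do not have to keep a browser window open to process the queue; phpList can be run from the command line, so a cron entry works too. A minimal sketch, assuming phpList is installed under /var/www/lists and PHP is at /usr/bin/php (adjust both paths for your setup):
# process the phpList queue every 15 minutes
*/15 * * * * /usr/bin/php /var/www/lists/admin/index.php -pprocessqueue -c/var/www/lists/config/config.php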

Related

Attaching zip file in Jenkins Editable Email Notification is not working

I am trying to attach the report.zip file (which is present in the workspace) to the Jenkins email notification, but I am not receiving the emails, whereas when I attach the report.html file (which is also present in the Jenkins workspace), I receive the email and the attachment.
Zip file size: 334 KB
I tried report.zip, **/*.zip, and */*.zip, but nothing works. Can anyone please help?
In the Jenkins console I am not seeing any failures, only the success message:
Archiving artifacts
Started calculate disk usage of build
Finished Calculation of disk usage of build in 0 seconds
Started calculate disk usage of workspace
Finished Calculation of disk usage of workspace in 0 seconds
Email was triggered for: Always
Sending email for trigger: Always
Sending email to: abc@gmail.com
Finished: SUCCESS
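For what it's worth, the Attachments field of the Editable Email Notification post-build step takes comma-separated Ant-style patterns relative to the workspace, and in a pipeline the equivalent is the emailext step's attachmentsPattern parameter. A minimal sketch, with recipient, subject, and body as placeholders, assuming report.zip sits at the workspace root:
emailext to: 'abc@gmail.com', subject: 'Build report', body: 'See the attached report.', attachmentsPattern: 'report.zip'
It is also worth checking the Maximum Attachment Size setting under the Extended E-mail Notification section of the Jenkins system configuration; attachments over that limit can be silently dropped even though the build itself succeeds.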

REST API does not return an answer after more than 3600 seconds of processing

We have spent several weeks trying to fix an issue that occurs in the customer's production environment and does not occur in our test environment.
After several analyses, we have found that this error occurs only when one condition is met: processing times greater than 3600 seconds in the API.
The situation is the following:
SAP is connected to a server running Windows Server 2016 and IIS 10.0, where we have an API that is responsible for interacting with a DB used by an external system.
The process we execute sends data from SAP to the API, which, using the data it receives from SAP and the data it obtains from the external system's DB, performs some processing and a subsequent update in the DB.
This process finishes without problems when the processing time in the API is less than 3600 seconds.
On the other hand, when the processing time is greater than 3600 seconds, the API generates the response correctly and the server tries to return the response to SAP, but it is not possible.
Below is an example of a server log entry from when the server tries to return a response after more than 3600 seconds of API processing. As you can see, a 995 error occurs (I have censored some parts):
Any idea where the error could come from?
We have compared IIS configurations in Production and Test. We have also reviewed the parameters of the SAP system in Production and Test and we have not found anything either.
I remain at your disposal to provide any type of additional information that may be useful for solving the problem.
UPDATE 1 - 02/09/2022
After enabling FRT (Failed Request Tracing) on IIS for 200 response codes, looking at the event log of the request that is causing the error, we have seen this event at the end:
Any information about what could be causing this error? ErrorCode="The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)"
UPDATE 2 - 02/09/2022
Comparing configurations from customer's environment and our test environment:
There is a firewall between the SAP server and the IIS server with the default TCP idle timeout (3600 seconds) configured. This does not happen in the Test environment because there is no firewall there.
Establishing a firewall policy with a custom idle timeout for this service (7200 seconds) solved the problem.
sc-win32-status 995: The I/O operation has been aborted because of
either a thread exit or an application request.
Please check the setting of the minBytesPerSecond configuration parameter in IIS. The default minBytesPerSecond is 240.
Specifies the minimum throughput rate, in bytes, that HTTP.sys
enforces when it sends a response to the client. The minBytesPerSecond
attribute prevents malicious or malfunctioning software clients from
using resources by holding a connection open with minimal data. If the
throughput rate is lower than the minBytesPerSecond setting, the
connection is terminated.
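If you want to inspect or change that value, a sketch using appcmd from an elevated command prompt (setting it to 0 disables the minimum-throughput check and should only be used for diagnosis):
%windir%\system32\inetsrv\appcmd.exe list config -section:system.applicationHost/webLimits
%windir%\system32\inetsrv\appcmd.exe set config -section:system.applicationHost/webLimits /minBytesPerSecond:0 /commit:apphost
That said, as UPDATE 2 shows, the root cause here was the firewall's TCP idle timeout, so minBytesPerSecond is only worth checking if the firewall policy does not explain the behavior.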

Azure DevOps TimeoutException when deploying to an on-premises server

The deploy to some of the servers takes extremely long. Where it normally takes around 30 seconds to download the artifact, on some servers it can take over 8 minutes, and sometimes the deploy even fails when it takes too long. This behavior is consistent for the same servers and has not changed for at least the past 2 weeks.
Internet connectivity is good: I see 1 Gbit up and down with a speed test on these servers. We use West Europe as the region, and I can confirm this by looking at the URLs in the log.
In the log I see these kind of messages:
2022-04-19T08:07:15.8187615Z ArtifactHttpRetryMessageHandler.SendAsync: https://vsblobprodsu6weu.vsblob.visualstudio.com/someguid/_apis/dedup/urls attempt 1/6 failed with TimeoutException: 'The HTTP request timed out after 00:01:40.'
What can cause this to happen?

Emails are not sent to subscribers in the Reporting configuration. Kentico 9

After creating a subscription in the Reporting configuration, the emails are not sent at the specified time, but if you perform any action on the site, the emails are sent. Is this a problem with the virtual machine on which the site is deployed, or with IIS? Or is it a feature of Kentico?
Please advise what the problem could be and how the email-sending system in the Reporting configuration works.
This is correct behavior. Kentico checks for e-mails that are to be sent at the end of requests, which means that if there are no requests, no e-mails are sent. If you need tasks to run at a specified time, you need to use the Windows scheduler and configure the task (e-mail sending) to use it. See the official documentation for more details.
Another option, in addition to Enn's, is to use a service like UpTime Robot to continually visit your page (for example, every 5 minutes). This not only generates a request but also helps keep your site awake and from going to sleep if you cannot manually set the worker process to never go to sleep.
Windows Scheduler is the most reliable, but UpTime Robot has worked well for us.
https://uptimerobot.com/
Under the "Scheduled tasks" module there is a scheduled task called "Report subscription sender". This task runs every minute and is responsible for checking your subscriptions and sending e-mails based on your configuration. It runs within the Kentico instance. By default, IIS puts the site/app pool to sleep after x minutes of idle (default 20), and the scheduled tasks are then no longer able to run. When you hit the site, the process wakes back up and the scheduled task can run again. You can go into IIS and configure the "Idle Time-out (minutes)" for the application pool; see https://patrickdesjardins.com/blog/iis-no-sleep-idle-and-autostart-with-values for a pretty good illustration. You can also adjust the app pool recycle intervals, but that is probably not necessary for your issue.
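If you prefer the command line to the IIS Manager UI, the same idle timeout can be set with appcmd; a sketch, where "MyKenticoPool" is a placeholder for your application pool name and 00:00:00 disables the idle timeout entirely:
%windir%\system32\inetsrv\appcmd.exe set apppool "MyKenticoPool" /processModel.idleTimeout:00:00:00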
The other option, as mentioned by Enn, is to install the Kentico Scheduler Windows service which always runs and configure that scheduled task to run in the service.

How to stop postfix MAILER-DAEMON emails

I am running Ubuntu 12.04 with Postfix.
Late yesterday, I added a package (ispconfig3) that modified my postfix configuration and also added an entry to the root crontab that was invoking a script.
At around 11 PM last night, I uninstalled that package and went to bed. The uninstall deleted the script and its directory OK, but it did not clean up the crontab entry.
Since cron had trouble invoking the script, it sent root@xx.org an email. But ispconfig3 had modified my postfix configuration, so there was no mail transport capability, and a MAILER-DAEMON email was placed in the mail queue.
Overnight (I'm guessing here!), cron woke up every minute and tried to do the same thing, so by 7:00 AM there were 1100+ emails in the mail queue. But since postfix was messed up, I couldn't see them.
At around 8:00ish I realized that something was wrong with my email setup. I checked the postfix configuration and backed out the changes, and now I can get emails OK: I can send them, receive them, etc.
Then the flurry of emails started. Every minute or so, I got around 30 MAILER-DAEMON emails indicating that cron couldn't invoke the script. I checked:
sudo crontab -l
saw the stale command for the non-existent script, and cleared it out:
sudo crontab -e
I expected the emails to stop.
They didn't.
In fact, every minute they seemed to increase in number. I then spent a few hours looking at a ton of configuration files trying to figure out what was going on. By 11:00ish or so, it was up to 50+ emails coming in every minute.
I finally realized that this stream of emails was occurring because of the failures that had occurred the night before, and that it was going to go on for 7 days. The "7d" comes from a postfix configuration setting. (BTW, I changed that to "2d", i.e. only a couple of days.)
In any case, I solved it. I'm adding this post so others can save themselves some time. See below.
Eventually I hit on the idea of looking at the mail queue.
A bit of googling and I found this site:
https://www.garron.me/en/linux/delete-purge-flush-mail-queue-postfix.html
I tried
postqueue -p
which listed all of the "(mail transport unavailable)" emails:
... snip ...
-- 1104 Kbytes in 1185 Requests.
I then did:
postqueue -f # this flushes the mail queue
postqueue -p
Mail queue is empty
And all of a sudden the email flurry ended.
Note: the website above said to use:
postfix -f
That did not work for me. A bit of googling found the postqueue command.
Another note: I was worried there were emails in that mail queue that were not "mail transport unavailable", so I double-checked all 1185 emails to ensure it was OK to purge them.
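One last point for anyone landing here: postqueue -f only asks Postfix to attempt delivery of everything in the queue, which presumably worked in this case because the transport had been fixed, so the queued bounces were delivered rather than deleted. If you instead want to delete queued mail outright, postsuper is the tool for that. (The "7d" mentioned above is presumably maximal_queue_lifetime or bounce_queue_lifetime in main.cf.) A sketch; as noted above, make sure nothing in the queue is worth delivering first:
postsuper -d ALL            # delete every message in the queue
postsuper -d ALL deferred   # or delete only the deferred queue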