I have a server with SMTP set up for my site's outbound email. To avoid getting blacklisted, I'd like to limit outbound email to under an arbitrary threshold (let's say 500 per hour). What's the best way to implement this?
The two possibilities I see would be:
1) Some sort of outbound throttling within the SMTP Virtual Server (not sure if this is possible without a full-fledged Exchange Server)
2) Create a Windows service that polls a database table for emails, processes the TOP N results, and then sleeps for X minutes.
Is either of these the best approach?
Just write a stored procedure that processes the top N results, and schedule it to run at frequent intervals.
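Whichever way you schedule it, the core job is the same: pull the oldest N unsent rows, send them, mark them sent, and let the scheduler control the interval. Here is a rough sketch in Python; the email_queue table, its columns, and the SMTP host are hypothetical placeholders (in production this could just as well be the stored procedure plus SQL Server Agent).

import smtplib
import sqlite3
from email.message import EmailMessage

BATCH_SIZE = 500           # stays under the 500/hour threshold if run hourly
DB_PATH = "mailqueue.db"   # hypothetical database file

def send_batch():
    conn = sqlite3.connect(DB_PATH)
    rows = conn.execute(
        "SELECT id, recipient, subject, body FROM email_queue "
        "WHERE sent = 0 ORDER BY id LIMIT ?", (BATCH_SIZE,)
    ).fetchall()
    with smtplib.SMTP("localhost") as smtp:   # your SMTP virtual server
        for row_id, recipient, subject, body in rows:
            msg = EmailMessage()
            msg["From"] = "noreply@example.com"
            msg["To"] = recipient
            msg["Subject"] = subject
            msg.set_content(body)
            smtp.send_message(msg)
            # mark as sent only after a successful handoff
            conn.execute("UPDATE email_queue SET sent = 1 WHERE id = ?",
                         (row_id,))
            conn.commit()
    conn.close()

if __name__ == "__main__":
    send_batch()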
I am renting a VPS that blocks outgoing port 25, so I have to use their relayhost, which works fine except for one thing. The relayhost has these restrictions for outgoing mail:
1000 mails / hour
3000 mails / day
20000 mails / month
Exceeding these limits either costs more money or results in being banned.
I would therefore like to set the same restrictions in my own Postfix server, in such a way that mail stays in the deferred queue if the outgoing limit would otherwise be exceeded. I don't mind mail being delayed a few hours in order to stay within the relayhost's limits.
There does not seem to be a Postfix setting that does this out of the box. For incoming mail, however, there are settings like smtpd_client_recipient_rate_limit and anvil_rate_time_unit that can throttle incoming connections.
I was therefore thinking of putting three additional smtpd processes in master.cf, each of which sets smtpd_client_recipient_rate_limit and anvil_rate_time_unit according to one of the three rate restrictions, as sketched below.
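Something like this is what I have in mind (untested sketch; the loopback port is arbitrary):

# master.cf: a dedicated smtpd on localhost that outgoing mail is looped
# back through before it reaches the relayhost, applying the hourly limit
127.0.0.1:10026 inet n - y - - smtpd
  -o smtpd_client_recipient_rate_limit=1000
  -o anvil_rate_time_unit=3600s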
Is this the most practical approach or is there a simpler solution?
I ran into a similar issue with my VPS host saying I'm sending too fast. I used the below config to slow the email rate.
default_destination_rate_delay = 5s
This puts a 5-second delay between each outbound SMTP connection to the same destination. There are other default_destination parameters that may be of use to you on the man page:
http://www.postfix.org/postconf.5.html
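For example, in main.cf (values illustrative; check the man page above for exact semantics):

# slow down: wait 5s between deliveries to the same destination
default_destination_rate_delay = 5s
# related knobs: cap parallel deliveries and recipients per delivery
# to the same destination
default_destination_concurrency_limit = 2
default_destination_recipient_limit = 50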
Sometimes my RPi drops its internet connection, cron jobs start failing left and right, and the queue grows incredibly fast; once I reconnect, it just floods me with email.
I'd like to limit the mail queue to a fixed number, is this possible?
Yes, you can limit the maximum number of recipients and the active queue size.
In main.cf, add these config options:
qmgr_message_active_limit = 40000
qmgr_message_recipient_limit = 40000
Refer to http://www.postfix.org/postconf.5.html for a better understanding of the active limit and the recipient limit.
What you want is not really possible.
I'd say a solution would be either to limit the size of the queue (by using the queue_minfree parameter) or to make the system more robust against internet outages (for example, by not sending mail for every error cron produces).
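For example, in main.cf (value illustrative; Postfix stops accepting mail when the queue file system has less free space than this, which effectively caps how far the queue can grow):

# refuse incoming mail once the queue file system has less than ~1.5 GB free
queue_minfree = 1536000000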
I see Redis is capable of tens of thousands of connections. But why does it need so many? Connections are established by the server, and one server-to-Redis connection should be sufficient for as many sessions as there may be.
Is there something wrong with my logic?
You are right: one connection from the server should be sufficient. But by "server" you have to imagine a single running instance of an HTTP server, and on a single machine there can be a lot of running server instances.
Then multiply that count by the number of individual machines using the same Redis server, and you easily reach very large numbers of connections.
Kacer is right about the scenario. However, consider one more scenario, where the application keeps a connection pool for performance reasons.
Assume you are the proud owner of a travel agency, but your agency has only one car and one driver. People are mad about your company and want to travel with your agency only, so you need to take 100 people from destination A to B.
A ———— B
Now, when the first person goes, the second has to wait until the car returns after dropping off the first; then the second goes, and then the third. Even with the fastest car and the fastest driver, it will still take some time.
Now assume you have 50 cars and 50 drivers... much better, right?
And what if, unfortunately, in the first scenario your car gets into an accident? You have no other option. But if you maintain a connection pool, you have 49 other alternatives.
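To make the analogy concrete, here is what a pool looks like with the redis-py client (host and pool size are illustrative):

import redis

# One shared pool of up to 50 connections (the 50 cars); each command
# borrows a connection from the pool and returns it when finished.
pool = redis.ConnectionPool(host="localhost", port=6379, max_connections=50)
r = redis.Redis(connection_pool=pool)

r.set("greeting", "hello")
print(r.get("greeting"))  # b'hello'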
Let's take a simple example: we have a cloud, which the client draws, and a server which sends commands to move the cloud. Assume Client 1 runs at 60 fps and Client 2 runs at 30 fps, and we want a reasonably smooth cloud transition.
First problem: the server runs at a different rate than the clients, and if it sends a move command every tick, it will spam commands much faster than the clients can draw them.
Possible solution 1: the client sends an "I want an update" command after finishing each frame.
Possible solution 2: the server sends move-cloud commands every x ms, but then the cloud will not move smoothly. This can be combined with solution 3.
Possible solution 3: the server sends "start moving the cloud with speed x" and "change cloud direction" instead of "move cloud to x". But the problem again is that the check for changing the cloud's direction at the edge of the screen will trigger faster than the cloud is actually drawn on the client.
Also, Client 2 draws two times slower than Client 1; how do I compensate for this?
How do I sync server logic with client drawing in a basic way?
Solution 3 sounds like the best one by far, if you can do it. All of your other solutions are much too chatty: they require extremely frequent communication between the client and server, much too frequent unless servers and clients have a very good network connection between them.
If your cloud movements are all simple enough that they can be sent to the clients as vectors, such that a client can move the cloud along one vector for an extended period of time (many frames) before receiving new instructions (a new starting location and vector) from the server, then you should definitely do that. If your cloud movements are not so easily representable as simple vectors, then you can choose a more complex model (e.g., add instructions to transform the vector over time) and send the model's parameters to the clients.
If the cloud is part of a larger world and the clients track time in the world, then each of the sets of instructions coming from the server should include a timestamp representing the time when the initial conditions in the model are valid.
As for your question about how to compensate for client 2 drawing two times slower than client 1, you need to make your world clock tick at a consistent rate on both clients. This rate need not have any relationship with the screen refresh rate on either client.
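A minimal sketch of that timestamp-plus-vector idea in Python (all names and numbers are illustrative): the client extrapolates the cloud's position from elapsed world-clock time, so the result is the same at 30 fps or 60 fps.

import time

# Last update received from the (hypothetical) server: where the cloud was,
# its velocity vector, and the world time at which that state was valid.
start_x, start_y = 100.0, 50.0
vel_x, vel_y = 20.0, 0.0           # units per second
t0 = time.monotonic()              # stand-in for a shared world clock

def cloud_position(now):
    """Extrapolate the cloud's position from the last server update."""
    dt = now - t0
    return start_x + vel_x * dt, start_y + vel_y * dt

# Render loop: the position depends only on elapsed world time, not on how
# many frames have been drawn, so a 30 fps client and a 60 fps client both
# show the cloud in the same place at the same moment.
for _ in range(5):
    x, y = cloud_position(time.monotonic())
    print(f"draw cloud at ({x:.1f}, {y:.1f})")
    time.sleep(1 / 30)             # pretend we are the 30 fps client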
The Back Story
A little while back, I was asked if we could implement a mass email solution in house so that we would have better control over sensitive information. I proposed a two-step plan: develop a prototype in Excel/VBA/CDO for user familiarity, then phase in a .NET/SQL Server solution for speed and robustness.
What's Changed
Three months into the second phase, management decided to go ahead and outsource email marketing to another company, which is fine. The first problem is that management has not made a move on a company to go through, so I am still implicitly obligated to make the current prototype work.
Still, the prototype works, or at least it did. The second problem came when our Exchange 2003 relay server was switched to Exchange 2010. It turns out more "safety" features, like throttling policies, are turned on by default, and I have been helping the sysadmin iron out a server config that works. What's happening is that after 100+ emails get sent, the server starts rejecting the send requests with the following error:
The message could not be sent to the SMTP server. The transport error code is 0x800ccc67.
The server response was 421 4.3.2 The maximum number of concurrent connections has exceeded a limit, closing transmission channel
Unfortunately, we only get to test the server configuration when Marketing has something to send out, which is about once per month.
What's Next?
I am looking at Excel's VBA Timer function to help throttle my main loop and pace the send requests. The third problem, from what I understand from reading, is that the best precision I can get from the timer is 1 second. One email per second would take considerably longer (about 4x-5x) than the 5 emails/sec we have been sending at, turning a 3-hour process into an all-day process past the hours of staff availability. I suppose I could invert the rate by sending 5 emails for every second that passes, but that creates more of a burst effect as opposed to the steady rate I could achieve if I had more precision on the timer. In my opinion, this makes for a less controlled process, and I am not sure how the server will handle bursts as opposed to a steady rate. What are my options?
You can use the Windows Sleep API if you need finer timer control. It takes its duration in milliseconds:
Private Declare Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As Long)
'On 64-bit Office, declare it as: Private Declare PtrSafe Sub Sleep ...

Public Sub Testing()
    'do something
    Sleep 1000 'pause for 1 second (1,000 ms)
    'continue doing something
End Sub
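For example, to hold your current 5 emails/second without one-second bursts, sleep 200 ms between sends (SendOneEmail is a stand-in for your existing CDO send routine):

Public Sub SendPaced()
    Dim i As Long
    For i = 1 To 100
        'SendOneEmail i    'stand-in for your existing CDO send call
        Sleep 200          'about 5 sends per second, evenly spaced
    Next i
End Sub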
I'm not very familiar with Exchange, so I can't comment on the throttling policies in place.