I am renting a VPS that blocks outgoing port 25, so I have to use their relayhost. This works fine except for one thing: the relayhost has these restrictions on outgoing mail:
1000 mails / hour
3000 mails / day
20000 mails / month
Exceeding these limits either costs more money or results in being banned.
I would therefore like to set the same restrictions in my own Postfix server, in such a way that mail stays in the deferred queue whenever the outgoing limit would otherwise be exceeded. I don't mind mail being delayed a few hours in order to stay within the relayhost's limits.
There does not seem to be a Postfix setting that does this out of the box. However, for incoming mail there are settings such as smtpd_client_recipient_rate_limit and anvil_rate_time_unit that can throttle incoming mail.
I was therefore thinking of putting three additional smtpd processes in master.cf, each of which sets smtpd_client_recipient_rate_limit and anvil_rate_time_unit according to one of the three rate restrictions.
Is this the most practical approach or is there a simpler solution?
I ran into a similar issue, with my VPS host saying I was sending too fast. I used the config below to slow the email rate.
default_destination_rate_delay = 5s
This inserts a 5-second delay between individual message deliveries to the same destination. There are other default_destination_* parameters on the man page that may be of use to you:
http://www.postfix.org/postconf.5.html
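As a rough sketch of how this maps onto your limits (the relay name below is only a placeholder, and the delay values are my back-of-the-envelope numbers), something like this in main.cf would do:
relayhost = [relay.example.com]:587
# 1000 mails/hour is roughly one mail every 3.6 seconds,
# so a 4 second delay keeps you under the hourly cap
default_destination_rate_delay = 4s
Since all your outbound mail goes to the single relayhost destination, the rate delay effectively caps the overall sending rate, and everything else simply waits in the queue, which is the "delay rather than drop" behaviour you are after. For sustained volume the monthly cap is the binding one: 20000 mails in a 30-day month is about 27 per hour, i.e. a delay of roughly 130 seconds, and 3000 per day works out to about 29 seconds; for bursty traffic a smaller delay is fine as long as the longer-term totals stay under the caps.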
I would like to know how the packet delivery rate compares between MQTT and CoAP. I know that TCP is more reliable than UDP, so MQTT should have a higher packet delivery rate. I just want to know: if 2000 packets are sent using each protocol separately, what would the approximate delivery percentage be in the two cases?
Please help with an example if possible.
If you dig a little, you will find that both TCP and UDP ultimately send IP messages, and some of those messages may be lost. For TCP, the retransmission is handled by the TCP protocol without your involvement; that works reasonably well (at least in many cases). For CoAP, when you use CON (confirmable) messages, CoAP does the retransmission for you, so there is also not much to lose.
When it comes to transmissions with more message loss (e.g. bad connectivity), the reliability also depends on the amount of data. If it fits into one IP message, the probability that it reaches the destination is higher than the probability of four messages all reaching their destination. That is where the difference starts:
TCP requires that all messages reach the destination without gaps (e.g. 1, 2, (drop 3), 4 will not work; nothing after the gap is delivered until 3 is retransmitted). CoAP will still deliver messages even when there are gaps. Whether you can benefit from that depends on your application.
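To make the CON behaviour concrete, here is a minimal sketch using the Eclipse Californium CoapClient (the URI is a placeholder, and this is only an illustration, not the setup I used for the numbers below):
import org.eclipse.californium.core.CoapClient;
import org.eclipse.californium.core.CoapResponse;

public class ConfirmableGet {
    public static void main(String[] args) {
        // placeholder URI; point it at your own CoAP resource
        CoapClient client = new CoapClient("coap://coap.example.org:5683/sensors/temp");
        client.useCONs(); // confirmable messages: the library retransmits until an ACK arrives or the retries are exhausted
        CoapResponse response = client.get(); // blocking GET
        if (response == null) {
            System.out.println("timed out - all retransmissions lost"); // roughly the "no connection" case
        } else {
            System.out.println(response.getResponseText());
        }
        client.shutdown();
    }
}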
I've been testing CoAP over DTLS 1.2 (Connection ID) for almost a year now, using an Android phone and just moving around sending requests (about 400 bytes) over Wi-Fi and the mobile network. It works very reliably. Current statistics: 2000 requests, 143 retransmissions, 4 lost. Please note: the 4 lost mainly mean "no connection", so don't expect TCP to do better than that, especially when moving around and new TCP/TLS handshakes are frequently required.
So my conclusion:
If you have a stable network connection, both should work.
If you have a less stable network connection and gaps are not acceptable by your application, both have trouble.
If you have a less stable network connection and gaps are acceptable by your application, then CoAP will do a better job.
We have a website that runs on six computers in a cluster, so any one of the computers can be accessed using the domain name, such as:
http://example.com/
On top of that, we have a 7th computer that is used for sending emails with Postfix. That mail server is set up to send emails using DMARC, SPF, and DKIM.
At this time, our DNS returns six IP addresses in round-robin order: five pointing to the first five front-end computers, plus one pointing to the mail server. If the mail server gets hit by an HTTP request, we forward it to the 6th front-end server (i.e. proxying).
So the DNS has entries like this (only with real public IPs):
example.com IN SOA ns1.example.com. webmaster.example.com. (
...
)
60 IN A 10.0.0.1
60 IN A 10.0.0.2
60 IN A 10.0.0.3
60 IN A 10.0.0.4
60 IN A 10.0.0.5
60 IN A 10.0.0.7
As you can see, the last one is 10.0.0.7, for the mail server. The others are front end computers.
Now the question is: do I need all the computers, 1 through 5 plus 7, to have the correct PTR records so the mail server works as expected? (After all, a lookup of example.com returns all six of those IPs, and with the round robin they are presented in what looks like a random order.) Or do I only need the PTR set up for the 10.0.0.7 computer, since that is the one actually sending the emails?
My concern is that some mail server verifying our IP address may end up finding any of the six IPs that reference example.com, and not just the one for the 7th computer (10.0.0.7 in my example), and as a result refuse the email because the reverse lookup fails.
I am actually hoping that only the computer with IP 10.0.0.7 needs the PTR. The others could have different names and no PTR.
If you look at the headers of the emails you send, there is a "Received:" line which should list your LSIP (last sending IP) for that email. That is the IP address that needs to have rDNS set up on it. You can easily check this with email testing tools.
When setting up rDNS you really want it to be forward-confirmed (FCrDNS): the PTR for the IP points to a hostname, and that hostname resolves back to the same IP.
It's just a DNS entry; add rDNS for all your outgoing IPs and that will alleviate your concern.
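As a minimal sketch (mail.example.com is just an assumed hostname for the mail server), forward and reverse entries that confirm each other would look like this:
mail.example.com.       60 IN A   10.0.0.7
7.0.0.10.in-addr.arpa.  60 IN PTR mail.example.com.
Receiving servers look up the PTR for the connecting IP (10.0.0.7 here) and then check that the returned name resolves back to the same IP; the round-robin A records on example.com itself don't enter into that particular check.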
Sometimes my RPi drops its internet connection, cron jobs start failing left and right, the mail queue grows incredibly fast, and once I reconnect it floods me with email.
I'd like to limit the mail queue to a fixed number of messages. Is this possible?
Yes, you can limit the maximum number of recipients and the active queue size.
In main.cf, add these config options:
qmgr_message_active_limit = 40000
qmgr_message_recipient_limit = 40000
Refer to these links for a better understanding: active limit, recipient limit.
What you want is not really possible.
I'd say a solution would be to either limit the size of the queue (by using the queue_minfree parameter) or make the system more robust regarding internet outages (like not sending mails for every error cron produces).
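If you go the queue_minfree route, a minimal main.cf sketch would be (the value is an arbitrary example, and note that this limits by free disk space on the queue filesystem rather than by message count):
# the Postfix SMTP server refuses new mail while the queue filesystem
# has less than this many bytes free (roughly 1 GiB here)
queue_minfree = 1073741824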
I have an LPD server running on VxWorks 6.3. The client application (over which I have no control) sends me an LPQ query every tenth of a second. After 235 requests, the client receives an RST when trying to connect. After a while, the device will again accept some queries (about 300), until it again starts sending out RSTs.
I have confirmed that it is the TCP stack that is causing the RST. There are some things that I have noticed.
1) I can somewhat change the number of sockets that will be accepted if I change the number of other applications that are running. For example, I freed up 4 sockets, thereby changing the number accepted from 235 to 239.
2) If I send requests to lpr (port 515) and another port (say, port 80), the total number of connections that are accepted before the RSTs start happening stays constant at 235.
3) There are lots of sockets sitting in TIME_WAIT.
4) I have a mock version of the client. If I slow the client down to one request every quarter second, the server doesn't reject the connections.
5) If I slow down the server's responses, I don't have any connections rejected.
So my theory is that there is some shared resource (my top guess is the total number of socket handles) that VxWorks can have consumed at a given time. I'm also guessing that this number tops out at 255.
Does anyone know how I can get VxWorks to accept more connections and leave them in TIME_WAIT when closed? I have looked through the kernel configuration and changed all the values that looked remotely likely, but I have not been able to change the number.
We know that we could set SO_LINGER, but this is not an acceptable solution, even though it does prevent the client connections from getting rejected. We have also tried changing the timeout value for SO_LINGER, but that does not appear to be supported in VxWorks; it's either on or off.
Thanks!
Gail
To me it sounds like you are making a new connection for every LPQ query, and after the query is done you aren't closing the connection. In my opinion, the correct thing to do is to accept one TCP connection and then use it for all of the LPQ queries; however, that may require modifications to the client application. To avoid modifying the client, you should just close the TCP connection after each LPQ query.
Furthermore, you can set the maximum number of open FDs in VxWorks by adjusting the #define NUM_FILES in config.h (or configAll.h, or one of those files), but that will just postpone the error if you have an FD leak, which you probably do.
I have a server with SMTP set up for my site's outbound email. In order to not get blacklisted I'd like to limit outbound emails to under an arbitrary threshold (let's say 500 per hour). What's the best way to implement this?
The two possibilities I see would be:
1) Some sort of outbound throttling within the SMTP Virtual Server (not sure if this is possible when not on a full-fledged Exchange Server)
2) Create a Windows service that polls a database table for emails, processes the TOP N results, and then sleeps for X minutes.
Is either of these the best approach?
Just write a stored procedure to process the top N results and schedule the stored procedure to run at frequent intervals.