Sometimes my RPi drops its internet connection, cron jobs start failing left and right, the mail queue grows incredibly fast, and once I reconnect it floods me with email.
I'd like to limit the mail queue to a fixed number of messages. Is this possible?
Yes, you can limit the maximum number of recipients and the active queue size.
In main.cf, add these config options:
qmgr_message_active_limit = 40000
qmgr_message_recipient_limit = 40000
Refer to the qmgr_message_active_limit and qmgr_message_recipient_limit entries in postconf(5) for a better understanding of the active and recipient limits.
What you want is not really possible.
I'd say a solution would be to either limit the size of the queue (by using the queue_minfree parameter) or make the system more robust regarding internet outages (like not sending mails for every error cron produces).
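Both mitigations are one-liners. A minimal sketch, where the 50 MB threshold and the /home/pi/job.sh cron job are placeholders:

# main.cf: stop accepting new mail when the queue filesystem
# has less than ~50 MB free (value is in bytes)
queue_minfree = 50000000

# crontab: log job output instead of mailing it on every failure
*/5 * * * * /home/pi/job.sh >> /var/log/job.log 2>&1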
How do I choose the right value for the TTL? We need a push message delivered reliably, not dropped, but at the same time we would like it delivered quickly, because it is used to initiate live calls. I understand that 0 is not an option for us, since the message would have a good chance of being dropped. But then should it be 60*60 (an hour), or 60 (a minute)? What is the right way of thinking here?
Remember that the value of the TTL parameter must be a duration from 0 to 2,419,200 seconds; it corresponds to the maximum period of time a push message can live on the push service before it is delivered.
If you set a TTL of zero, the push service will attempt to deliver the message immediately, but if the device can't be reached, your message will be immediately dropped from the push service queue.
You can also consider the following best practices for choosing a TTL:
The higher the TTL, the less frequently caching name servers need to query authoritative name servers.
A higher TTL reduces the perceived latency of a site and decreases the dependency on the authoritative name servers.
The lower the TTL, the sooner the cached record expires. This allows queries for the records to occur more frequently.
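For a concrete sense of where the number goes: in Google's legacy GCM/FCM HTTP API (which uses the same 0 to 2,419,200 second range), the TTL is the time_to_live field of the JSON request body. A sketch with a placeholder registration token, assuming a one-minute window is acceptable for call-initiation pushes:

{
  "to": "device-registration-token",
  "time_to_live": 60,
  "data": { "type": "incoming_call" }
}

A small TTL like this means a call push that can't be delivered promptly is discarded rather than arriving after the call is already over.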
I am renting a VPS that blocks outgoing port 25, so I have to use their relayhost. This works fine except for one thing: the relayhost has these restrictions on outgoing mail:
1000 mails / hour
3000 mails / day
20000 mails / month
Exceeding these limits either costs more money or results in being banned.
I would therefore like to set the same restrictions in my own Postfix server, in such a way that mails stay in the deferred queue when the outgoing limit would otherwise be exceeded. I don't mind mail being delayed a few hours in order to stay within the relayhost's limits.
There does not seem to be a Postfix setting that will do this out of the box. However, for incoming mail there are settings like smtpd_client_recipient_rate_limit and anvil_rate_time_unit that can throttle incoming mail.
I was therefore thinking of putting 3 additional smtpd processes in master.cf, each of which sets smtpd_client_recipient_rate_limit and anvil_rate_time_unit according to the 3 rate restrictions.
Is this the most practical approach or is there a simpler solution?
I ran into a similar issue, with my VPS host saying I was sending too fast. I used the config below to slow the email rate.
default_destination_rate_delay = 5s
This puts a 5-second delay between each outbound SMTP connection to the same destination. There are other default_destination parameters that may be of use to you on the man page:
http://www.postfix.org/postconf.5.html
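For reference, a minimal main.cf sketch (the relay name is a placeholder):

# main.cf: route everything through the provider's relay,
# pacing deliveries to the same destination
relayhost = [relay.example.com]:587
default_destination_rate_delay = 130s

Since everything goes through the one relayhost, the delay effectively paces all outbound mail. Note that a single fixed delay can only honor all three caps by satisfying the tightest one: 20000 mails/month works out to roughly one mail every 130 seconds (2,592,000 s / 20,000), which automatically stays under the hourly and daily caps as well.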
I have a ZMQ_PUB socket sending messages out at ~50Hz. One destination needs to react to each message, so it has a standard ZMQ_SUB socket with a while(true) loop checking for new messages. A second destination should only react once a second to the "most recent" message. That is, my second destination needs to subsample.
For the second destination, I believe I'd want to have a time-based loop that is called at my desired rate (1Hz) and recv() the latest message, dropping the rest. I believe this is done via a ZMQ_HWM on the subscriber. Is there another option that needs to be set somewhere?
Do I need to worry about the different subscribers having different HWMs? Will the publisher become angry? It's a shame ZMQ_RATE only applies to multicast sockets.
Is there a best way to accomplish what I'm attempting?
zmq v3.2.4
The high-water mark will not be a fantastic solution for your problem. Setting it on the subscriber to, let's say, 10 while reading one message per second will just give you the old messages first, slowly, and throw away all the new ones once its limit is reached.
You could use a topic on your publisher that lets you filter out every 50th message, e.g. making the topic messageCount % 50 and subscribing to 0.
Otherwise, maybe you shouldn't use ZMQ's pub/sub, but instead build your own look-alike with ROUTER/DEALER that allows you to subscribe to sampled messages.
Lastly, you could also just send them all. 50 msg/s is hardly anything in ZMQ (as long as the messages aren't heavy on data, like megabytes each), and then only use every 50th message, as in the sketch below.
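A minimal sketch of that last option, using the plain C API (zmq_ctx_new and friends exist as of 3.2): once a second the subscriber drains everything queued without blocking and keeps only the newest message. The endpoint and buffer size are placeholders.

#include <zmq.h>
#include <chrono>
#include <string>
#include <thread>

int main() {
    void* ctx = zmq_ctx_new();
    void* sub = zmq_socket(ctx, ZMQ_SUB);
    zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);   // subscribe to everything
    zmq_connect(sub, "tcp://127.0.0.1:5556");    // placeholder endpoint

    char buf[1024];                              // placeholder message size
    for (;;) {
        std::this_thread::sleep_for(std::chrono::seconds(1));  // 1 Hz loop
        std::string latest;
        bool got = false;
        int n;
        // Drain the queue without blocking; the last successful recv wins.
        while ((n = zmq_recv(sub, buf, sizeof(buf), ZMQ_DONTWAIT)) >= 0) {
            size_t len = static_cast<size_t>(n) < sizeof(buf)
                             ? static_cast<size_t>(n) : sizeof(buf);
            latest.assign(buf, len);
            got = true;
        }
        if (got) {
            // react to `latest` here, once per second
        }
    }
    zmq_close(sub);
    zmq_ctx_destroy(ctx);
}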
I need to create a proxy service scheduler that receives messages from the queue every 5 minutes. That is, the queue produces messages (single or multiple), but the proxy should receive those messages at an interval of every 5 minutes. How can I achieve this using only Oracle Service Bus ...
Kindly help me with this.
OSB does not provide scheduler capabilities out of the box. You can do either of the following:
For the JMS queue, configure infinite retries by not setting a retry limit, and set the retry interval to 5 minutes.
Create a scheduler. Check this post for the same: http://blogs.oracle.com/jamesbayer/entry/weblogic_scheduling_a_polling
Answer left for reference only; messages shouldn't be subject to complex computed selections in this way, only simple value comparisons and pattern matching.
To fetch only old-enough messages from the queue, while
- not modifying the queue or its messages,
- not introducing any new brokers between queue and consumer, and
- not prematurely consuming messages,
use the Message Selector field of the OSB proxy on the JMS Transport tab to set a boolean expression (SQL-92) that checks that the message's JMSTimestamp header is at least 5 minutes older than the current time.
... and I wasn't able to quickly produce a valid message selector from either the timestamp or the JMSMessageID (it contains the time in millis - 'ID:<465788.1372152510324.0>').
I guess somebody could still use it in some specific case.
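For reference, the SQL-92 subset used by JMS selectors can only compare headers and properties against literal constants, e.g. a static cut-off like

JMSTimestamp < 1372152823000

(a fixed millisecond epoch; the value is purely illustrative). There is no "current time" function in selector syntax, which is why a rolling "five minutes ago" condition can't be expressed directly.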
You can use Quartz scheduler APIs to create schedulers across domains.
I don't know whether this works for you, but it's working well for me; maybe you can use it to do what you need.
Go to the Transport Details of your proxy service and, under the Advanced Options tab, set the following fields:
Polling Frequency (enter your frequency: 300 sec for 5 min)
Physical Directory (you may need to give your queue path here)
In order not to flood the remote endpoint my server app will have to implement a "to-send" queue of packets I wish to send.
I use Windows Winsock, I/O Completion Ports.
So, when my code calls socket->send(.....), my custom send() function will check whether data is already "on the wire" (towards that socket).
If data is indeed on the wire, it will simply queue the new data to be sent later.
If no data is on the wire, it will call WSASend() to really send the data.
So far everything is nice.
Now, the size of the data I'm going to send is unpredictable, so I break it into smaller chunks (say 64 bytes) in order not to waste memory for small packets, and queue/send these small chunks.
When a "write-done" completion status is given by IOCP regarding the packet I've sent, I send the next packet in the queue.
That's the problem: the speed is awfully low.
I'm getting speeds like 200 KB/s, and that's on a local connection (127.0.0.1).
So, I know I'll have to call WSASend() with several chunks (an array of WSABUF objects), and that will give much better performance, but how much should I send at once?
Is there a recommended size of bytes? I'm sure the answer is specific to my needs, yet I'm also sure there is some "general" point to start with.
Is there any other, better, way to do this?
Of course you only need to resort to providing your own queue if you are trying to send data faster than the peer can process it (either due to link speed or the speed at which the peer can read and process the data), and even then you only need your own data queue if you want to control the amount of system resources being used. If you only have a few connections then this is all likely unnecessary; if you have thousands then it's something you need to be concerned about. The main thing to realise is that if you use ANY of the asynchronous network send APIs on Windows, managed or unmanaged, then you are handing control over the lifetime of your send buffers to the receiving application and the network. See here for more details.
And once you have decided that you DO need to bother with this, you don't always need to bother: if the peer can process the data faster than you can produce it, then there's no need to slow things down by queuing on the sender. You'll see that you need to queue data when your write completions begin to take longer, because the overlapped writes you issue cannot complete while the TCP stack is unable to send any more data due to flow control (see http://www.tcpipguide.com/free/t_TCPWindowSizeAdjustmentandFlowControl.htm). At this point you are potentially using an unconstrained amount of limited system resources; both non-paged pool memory and the number of memory pages that can be locked are limited and (as far as I know) both are used by pending socket writes...
Anyway, enough of that... I assume you already have achieved good throughput before you added your send queue? To achieve maximum performance you probably need to set the TCP window size to something larger than the default (see http://msdn.microsoft.com/en-us/library/ms819736.aspx) and post multiple overlapped writes on the connection.
Assuming you already HAVE good throughput then you need to allow a number of pending overlapped writes before you start queuing, this maximises the amount of data that is ready to be sent. Once you have your magic number of pending writes outstanding you can start to queue the data and then send it based on subsequent completions. Of course, as soon as you have ANY data queued all further data must be queued. Make the number configurable and profile to see what works best as a trade off between speed and resources used (i.e. number of concurrent connections that you can maintain).
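A minimal sketch of that "N pending writes, then queue" policy; Connection, WriteOp and kMaxPendingWrites are illustrative names, and all of the IOCP plumbing, error handling and locking are elided:

#include <winsock2.h>
#include <deque>
#include <vector>

// Per-write context; owns the buffer for the lifetime of the write.
struct WriteOp {
    WSAOVERLAPPED ov{};
    std::vector<char> data;
    WSABUF buf{};
};

// Per-connection state: whole buffers queue up once the cap
// on outstanding overlapped writes is reached.
struct Connection {
    SOCKET s = INVALID_SOCKET;
    int pendingWrites = 0;                 // WSASend()s not yet completed
    std::deque<std::vector<char>> queue;   // buffers waiting their turn
};

const int kMaxPendingWrites = 4;           // tune by profiling (assumption)

static bool IssueWrite(Connection& c, std::vector<char> data) {
    WriteOp* op = new WriteOp;
    op->data = std::move(data);
    op->buf.buf = op->data.data();
    op->buf.len = static_cast<ULONG>(op->data.size());
    DWORD sent = 0;
    int rc = WSASend(c.s, &op->buf, 1, &sent, 0, &op->ov, nullptr);
    if (rc == 0 || WSAGetLastError() == WSA_IO_PENDING) {
        ++c.pendingWrites;                 // completion still arrives via IOCP
        return true;
    }
    delete op;                             // hard failure; caller handles it
    return false;
}

// Application code calls this instead of WSASend() directly.
void Send(Connection& c, std::vector<char> data) {
    // Once anything is queued, everything must queue, to preserve ordering.
    if (!c.queue.empty() || c.pendingWrites >= kMaxPendingWrites) {
        c.queue.push_back(std::move(data));
    } else {
        IssueWrite(c, std::move(data));
    }
}

// Called from the IOCP completion loop when a write finishes.
void OnWriteCompleted(Connection& c, WriteOp* op) {
    delete op;
    --c.pendingWrites;
    // Refill up to the cap from the queue.
    while (c.pendingWrites < kMaxPendingWrites && !c.queue.empty()) {
        std::vector<char> next = std::move(c.queue.front());
        c.queue.pop_front();
        IssueWrite(c, std::move(next));
    }
}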
I tend to queue the whole data buffer that is due to be sent as a single entry in a queue of data buffers. Since you're using IOCP, it's likely that these data buffers are already reference counted so that they're easy to release when the completions occur and not before; that makes the queuing process simpler, as you just hold a reference to the send buffer whilst the data is in the queue and release it once you've issued a send.
Personally I wouldn't optimise by using scatter/gather writes with multiple WSABUFs until you have the base working and you know that doing so actually improves performance, I doubt that it will if you have enough data already pending; but as always, measure and you will know.
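If measuring does show a win, the gather itself is just a WSABUF array. A hypothetical fragment, where sock, ov and chunksToSend stand in for your own socket, overlapped structure and queued chunks:

// Build one WSABUF per queued chunk and hand them all to a single WSASend().
std::vector<WSABUF> bufs;
for (std::vector<char>& chunk : chunksToSend) {
    WSABUF b;
    b.buf = chunk.data();
    b.len = static_cast<ULONG>(chunk.size());
    bufs.push_back(b);
}
DWORD sent = 0;
WSASend(sock, bufs.data(), static_cast<DWORD>(bufs.size()), &sent, 0, &ov, nullptr);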
64 bytes is too small.
You may have already seen this but I wrote about the subject here: http://www.lenholgate.com/blog/2008/03/bug-in-timer-queue-code.html though it's possibly too vague for you.