What is the best way to get the total number of messages in a folder on a Gmail account? I can get the number of unseen messages with this request/response:
d STATUS "py-lists/pygtk" (UNSEEN)
* STATUS "py-lists/pygtk" (UNSEEN 10963)
But when I want to get the total number of messages:
e STATUS "py-lists/pygtk" (MESSAGES)
* STATUS "py-lists/pygtk" (MESSAGES 0)
How on earth can the number of unseen messages (10963) be larger than the total number of messages (0)? Am I misunderstanding IMAP, or have I stumbled upon a Gmail bug? If so, what can I use as a workaround? Selecting the folder with f SELECT "py-lists/pygtk" works, but isn't great, as the SELECT operation can take a very long time to complete.
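For reference, a minimal Python imaplib sketch that reproduces the same STATUS exchange (the credentials are placeholders; Gmail typically requires an app password for IMAP):

import imaplib

conn = imaplib.IMAP4_SSL("imap.gmail.com")
conn.login("user@gmail.com", "app-password")  # placeholder credentials
# Ask for both counts in a single STATUS command.
typ, data = conn.status('"py-lists/pygtk"', "(UNSEEN MESSAGES)")
print(typ, data)  # e.g. ('OK', [b'"py-lists/pygtk" (UNSEEN 10963 MESSAGES 0)'])
conn.logout()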
I don't see an option in the documentation for just "subscribers", but I can take subscribersGained and subtract subscribersLost. However, the calculated number is a bit lower than the actual result. Is there a reason for this, or is there a way to get the raw subscriber count?
As of writing (June 2021), the YouTube Analytics API has no metric or dimension for the total number of subscribers, although, as you have already noted, it is possible to collect the number of people who have subscribed and unsubscribed.
A workaround to get the total number of subscribers would be to collect all subscribers gained and all subscribers lost (using the subscribersGained and subscribersLost metrics) from when the channel was created, and subtract one from the other, e.g.:
Total number of subscribers = subscribersGained - subscribersLost
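A minimal sketch of that workaround with the google-api-python-client, assuming an already-authorized OAuth2 credentials object (creds) and placeholder dates bracketing the channel's lifetime:

from googleapiclient.discovery import build

# `creds` is assumed to be an already-authorized OAuth2 credentials object.
analytics = build("youtubeAnalytics", "v2", credentials=creds)
response = analytics.reports().query(
    ids="channel==MINE",
    startDate="2005-01-01",  # any date on/before the channel's creation
    endDate="2021-06-30",    # placeholder end date
    metrics="subscribersGained,subscribersLost",
).execute()
gained, lost = response["rows"][0]
print("Total subscribers (approx.):", gained - lost)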
I have an application running under WildFly 9.0 which sends emails with attachments.
Some of these emails can have large attachments; when this happens I get the error message "552 5.2.3 Message size is over limit (15728640)".
How can I increase WildFly's maximum limit so that it can send larger attachments (e.g. 20 MB)?
Which parameters should I change?
Thanks
You need to increase the values of Max header size and Max post size on the server.
From the WildFly console: Configuration: Subsystems > Subsystem: Undertow > Settings: HTTP
Select the HTTP Listener and change the values of those attributes.
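The same change can be scripted with jboss-cli; a minimal sketch, assuming the default server and listener names (default-server and default) and 20 MB = 20971520 bytes:

/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=max-post-size,value=20971520)
# max-header-size can be raised the same way if the headers are the problem
reload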
I am looking at http://stocktwits.com/developers/docs/parameters and am wondering if anyone has used pagination before.
The docs say there is a limit of 800 messages; how does that interact with the request limit? Could I, in theory, query 200 different stock tickers every hour and get back (up to) 800 messages?
If so, that sounds like a great way to get around the 30-message limit.
The documentation is unclear on this, and we are rolling out new documentation that explains it more clearly.
Every stream request has a default and maximum limit of 30 messages per response, regardless of whether the cursor params are present. So you could query 200 different stock streams every hour and get up to 6,000 messages, or 12,000 if you send your access token along with the request: the limit is 200 requests per hour for unauthenticated requests and 400 for authenticated ones.
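As an illustration, a hedged Python sketch of one hourly polling pass (the symbol list and token are placeholders; the endpoint path follows the public streams API):

import requests

SYMBOLS = ["AAPL", "GOOG"]   # placeholder ticker list (up to 200 requests/hour unauthenticated)
ACCESS_TOKEN = "YOUR_TOKEN"  # optional; raises the hourly request limit to 400

for symbol in SYMBOLS:
    resp = requests.get(
        f"https://api.stocktwits.com/api/2/streams/symbol/{symbol}.json",
        params={"access_token": ACCESS_TOKEN},
    )
    messages = resp.json()["messages"]  # at most 30 messages per response
    print(symbol, len(messages))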
My question is theoretical.
I have a database with e-mails. For each e-mail I store the desired sending time (as a UNIX timestamp) and the contents of the e-mail (sender, receiver, subject, body, etc.). There is a large number of e-mails scheduled.
This is how I planned to send the e-mails:
I would have a worker process or server which periodically queries the database for "overdue" e-mails based on the timestamps. It then sends those e-mails and finally deletes them from the DB.
I started to think about two things:
1. What if the worker dies when it has sent the e-mail but hasn't deleted it from the database? If I restart the worker, the e-mail will be sent again.
2. How do I handle a really large number of e-mails, where I therefore run multiple workers? I can mark an e-mail in the database as "being sent", but how do I re-initiate sending if the responsible worker dies? I won't know whether a worker has died or is just so slow that it's still sending its messages. I'm assuming I cannot get notified when a worker dies, so I can't re-send the e-mails that it failed to send.
I know that sending e-mail is not as critical as a bank transaction, but I think there must be a good solution for this.
How is this usually done?
I would actually use a flag on each email record in the database:
Your worker (or each of multiple workers) updates the oldest record with its unique worker ID (e.g. a PID or an IP/PID combination).
Example for Oracle SQL:
update email set workerid = 'my-unique-worker-id' where emailid in (
  -- ORDER BY must run before ROWNUM, hence the inline view
  select emailid from (
    select emailid from email
    where duetime < sysdate
      and workerid is null  -- NULL is matched with IS NULL, not "= null"
    order by duetime
  )
  where rownum <= 1
)
This takes a single not-yet-processed record (ordered by duetime, which has to be in the past) and sets the worker ID. Note that Oracle applies rownum before order by, so the ordering has to happen in an inline view, and a NULL workerid has to be tested with is null rather than = null. The update is synchronized by the normal database locking mechanism (so only one transaction writes at a time).
Then you select all records with:
select * from email where workerid = 'my-unique-worker-id'
which will be either 0 or 1 record. If it is 0, there is no due mail.
Once you have finished sending the e-mail, you set workerid = 'some-invalid-value' (or use another flag column to mark the progress). That way the record doesn't get picked up by the next worker.
You probably won't be able to find out whether the e-mail has really been sent. If the worker dies after sending but before updating the record, there's not much you can do. To be a bit more self-sufficient, the worker could create a process file locally (e.g. an empty file with the emailid as the file name). That would at least let you detect whether the crash was just a database connection issue.
If, on startup, a worker finds a record that already carries its ID as the workerid before it has updated anything, I would raise an alert/error to be handled manually (by checking the SMTP server log and updating the record by hand).
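Putting the pieces together, a minimal Python sketch of one worker iteration, assuming cx_Oracle-style named bind parameters, the email table above, and a local SMTP relay (all placeholder names):

import os
import socket
import smtplib
from email.message import EmailMessage

WORKER_ID = f"{socket.gethostname()}-{os.getpid()}"  # unique per worker process

def process_one(conn):
    cur = conn.cursor()
    # Step 1: atomically claim the oldest due, unclaimed record (Oracle syntax as above).
    cur.execute("""
        update email set workerid = :wid where emailid in (
          select emailid from (
            select emailid from email
            where duetime < sysdate and workerid is null
            order by duetime
          ) where rownum <= 1
        )""", wid=WORKER_ID)
    conn.commit()
    # Step 2: fetch the claimed record, if any.
    cur.execute(
        "select emailid, sender, receiver, subject, body from email where workerid = :wid",
        wid=WORKER_ID)
    row = cur.fetchone()
    if row is None:
        return False  # no due mail
    emailid, sender, receiver, subject, body = row
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = sender, receiver, subject
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:  # assumed local mail relay
        smtp.send_message(msg)
    # Step 3: mark the record as done so no other worker re-claims it.
    cur.execute("update email set workerid = 'done' where emailid = :eid", eid=emailid)
    conn.commit()
    return True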
Is there a limit on how often a user can remove and re-add their song to a group (or on the general number of connections per minute/hour/day, etc.)? I ask because I have created a script which automatically removes and re-adds all 5 of my songs within the same 75 groups; however, before one cycle completes, I get a 429 error and seem to be blocked for the day.
Yes, there is a limit. The HTTP 429 status code indicates:
The user has sent too many requests in a given amount of time.
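The practical mitigation is to throttle on the client and back off when a 429 arrives. A generic Python sketch (the URL is a placeholder; note that retrying won't lift a day-long block):

import time
import requests

def get_with_backoff(url, max_attempts=5, **kwargs):
    # Retry on HTTP 429, honoring the Retry-After header when present.
    for attempt in range(max_attempts):
        resp = requests.get(url, **kwargs)
        if resp.status_code != 429:
            return resp
        wait = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    resp.raise_for_status()  # still rate-limited after all attempts
    return resp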