Email function for Azure PostgreSQL server

How can one write code in an Azure PostgreSQL server that sends email?
Azure PostgreSQL is fully managed by Azure, and there is no option to install extensions beyond the limited set that is already available.

You usually shouldn't send emails from the database. If the email sending becomes slow, you can get cascading locks. And if it fails, what should you do? You can store the email to be sent in a table, then have a process written in your favorite language read that table and do the sending. Or you can set up LISTEN/NOTIFY to do the same thing. Or you could combine them, if you want the transactionality of a separate table but don't want to poll it periodically.
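A minimal sketch of that combined approach, assuming a hypothetical email_queue table and NOTIFY channel, using Python with psycopg2 and smtplib (connection details, table, and channel names are all illustrative; SKIP LOCKED needs PostgreSQL 9.5+):

```python
import select
import smtplib
from email.message import EmailMessage

import psycopg2  # third-party: pip install psycopg2-binary

DSN = "dbname=app user=worker"  # placeholder connection string

# One autocommit connection just for LISTEN/NOTIFY...
listen_conn = psycopg2.connect(DSN)
listen_conn.autocommit = True
listen_conn.cursor().execute("LISTEN email_queue;")  # hypothetical channel

# ...and a second, transactional connection for draining the queue.
work_conn = psycopg2.connect(DSN)

def drain_queue():
    """Send queued emails one at a time; delete each row only after success."""
    while True:
        with work_conn:  # commits on success, rolls back on error
            with work_conn.cursor() as cur:
                # SKIP LOCKED lets several workers share the queue safely.
                cur.execute(
                    "SELECT id, recipient, subject, body FROM email_queue "
                    "ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED"
                )
                row = cur.fetchone()
                if row is None:
                    return
                msg_id, recipient, subject, body = row
                msg = EmailMessage()
                msg["From"] = "app@example.com"  # placeholder sender
                msg["To"], msg["Subject"] = recipient, subject
                msg.set_content(body)
                with smtplib.SMTP("localhost") as smtp:
                    smtp.send_message(msg)
                # Delete only after the send succeeded.
                cur.execute("DELETE FROM email_queue WHERE id = %s", (msg_id,))

drain_queue()  # catch up on anything queued while the worker was down
while True:
    # Block until a NOTIFY arrives (the 60 s timeout is a safety net).
    if select.select([listen_conn], [], [], 60) == ([], [], []):
        continue
    listen_conn.poll()
    listen_conn.notifies.clear()
    drain_queue()
```

The worker blocks on the NOTIFY socket instead of polling, but still drains the table transactionally, so a crash mid-send leaves the row in place to be retried.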
Another option, of course, is not to use hosting solutions that prevent you from doing what you want.

Related

Queries regarding MailKit

I have a requirement where I need to download mails from a mailbox, and I have the following queries:
Is there any option to download bulk mails using ImapClient?
How many concurrent connections can I open to a mailbox using ImapClient?
MailKit's ImapFolder class has some GetStreams() and GetStreamsAsync() APIs that you might find useful. Obviously, these APIs present the downloaded messages as System.IO.Stream instead of MimeMessage, but if you are bulk downloading messages, you probably don't intend to use them as MimeMessages anyway (you probably just want to dump them into files).
Each ImapClient only supports a single connection. That said, if you want multiple concurrent connections, there's nothing stopping you from instantiating multiple ImapClients and connecting them to the same IMAP server with the same credentials.
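MailKit is a .NET library, so as a language-neutral illustration of the same two points, here is a hedged sketch using Python's standard imaplib: bulk-download messages as raw bytes dumped to files, and get concurrency by opening one client object per connection. Server, credentials, and paths are placeholders; GetStreams()/GetStreamsAsync() remain the MailKit-native way to do this.

```python
import imaplib
import pathlib

HOST, USER, PASSWORD = "imap.example.com", "user", "secret"  # placeholders

def dump_inbox(out_dir: str) -> None:
    """Download every message in INBOX as raw bytes, one .eml file each."""
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    client = imaplib.IMAP4_SSL(HOST)
    try:
        client.login(USER, PASSWORD)
        client.select("INBOX", readonly=True)
        _, data = client.uid("SEARCH", None, "ALL")
        for uid in data[0].split():
            # RFC822 fetches the complete raw message, the same idea as
            # MailKit handing you a Stream instead of a parsed MimeMessage.
            _, msg_data = client.uid("FETCH", uid, "(RFC822)")
            (out / (uid.decode() + ".eml")).write_bytes(msg_data[0][1])
    finally:
        client.logout()

# Each IMAP4_SSL object is one connection. For concurrency, create several
# of them against the same server and credentials, e.g. one per thread.
dump_inbox("mail_dump")
```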

Postgres trigger to open http connection

I need to ping some HTTP service with an HTTP GET or POST every time an insert occurs in a Postgres table, using a trigger.
Is there any simple way to achieve that with a standard PostgreSQL installation?
In case there isn't, is there any way to do it with additional libraries?
You can do this with PL/PerlU or PL/PythonU. However, I strongly recommend that you don't do it this way. DNS problems, problems with the remote server, etc. will cause your PostgreSQL backends to stall, severely disrupting database performance and possibly causing you to exhaust max_connections.
Instead, have a trigger send a NOTIFY when the change occurs, and have the trigger insert details into a log table. Have a client application LISTEN for the notification, read the records inserted by the trigger into the log table, and make any appropriate HTTP requests.
The client should take the requests from the log table one by one using SELECT ... FOR UPDATE, and DELETE each row once its request has been made successfully.
See this answer about sending email from within the DB for some more detail.
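A hedged sketch of that trigger-plus-log-table pattern: the SQL below (embedded as a string) defines an illustrative queue table and a trigger on a placeholder widgets table, and the Python worker makes the actual HTTP call using psycopg2 and urllib. EXECUTE FUNCTION needs PostgreSQL 11+; older versions use EXECUTE PROCEDURE.

```python
import urllib.request

import psycopg2  # third-party: pip install psycopg2-binary

# Illustrative schema: the trigger only INSERTs and NOTIFYs; the HTTP
# call happens in the worker below, outside any PostgreSQL backend.
SETUP_SQL = """
CREATE TABLE IF NOT EXISTS http_ping_queue (
    id      bigserial PRIMARY KEY,
    payload text NOT NULL
);

CREATE OR REPLACE FUNCTION queue_http_ping() RETURNS trigger AS $$
BEGIN
    INSERT INTO http_ping_queue (payload) VALUES (NEW.id::text);
    NOTIFY http_ping;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER widgets_http_ping          -- "widgets" is a placeholder table
    AFTER INSERT ON widgets
    FOR EACH ROW EXECUTE FUNCTION queue_http_ping();
"""

def drain(conn):
    """Claim rows one by one, ping the service, delete on success."""
    while True:
        with conn:  # one transaction per row; the lock holds until commit
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT id, payload FROM http_ping_queue "
                    "ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED"
                )
                row = cur.fetchone()
                if row is None:
                    return
                row_id, payload = row
                # Placeholder endpoint; a failure raises and rolls back,
                # leaving the row in place to be retried later.
                urllib.request.urlopen(
                    "https://example.com/ping?id=" + payload, timeout=10
                )
                cur.execute(
                    "DELETE FROM http_ping_queue WHERE id = %s", (row_id,)
                )

conn = psycopg2.connect("dbname=app user=worker")  # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute(SETUP_SQL)
drain(conn)  # pair this with a LISTEN loop as in the email answer above
```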

Is it possible to write a record as NO-UNDO in a transaction?

We are working on a logging feature where we need to write log entries to the DB. But the process runs in a transaction, and on rollback our new log entries are deleted as well. Can I write to the DB outside of the transaction, something like writing to a temp-table with the NO-UNDO option, so that the new log entries still remain in the DB?
One possibility would be to use an app server. Transactions in app server sessions are independent from transactions in the original session (that's what the optional and redundant "DISTINCT TRANSACTION" syntax is all about).
Another option would be to use a simple messaging system. One that is very easy to set up and use is STOMP. It is platform neutral and very easy to get going with.
Julian Lyndon-Smith posted the following on PEG about a month ago, and it really is as easy to set up and use as he says (I've tried it; I used ApacheMQ, which is also very easy to set up and use):
Following on from presentations in Boston and Finland, dot.r is pleased to announce the open source Stomp project, available immediately.
Download from either http://www.dotr.com or https://bitbucket.org/jmls/stomp , the dot.r stomp programs allow you to connect your Progress session to any other application or service that is connected to the same message broker.
Open source, free message brokers that support Stomp are:
Fuse (http://fusesource.com/products/fuse-mq-enterprise/) [a Progress company now owned by Red Hat Inc]
Fuse MQ Enterprise is a standards-based, open source messaging platform that deploys with a very small footprint. The lack of license fees combined with high-performance, reliable messaging that can be used with any development environment provides a solution that supports integration everywhere.
ActiveMQ
Apache ActiveMQ (tm) (http://activemq.apache.org/) is the most popular and powerful open source messaging and Integration Patterns server. Apache ActiveMQ is fast, supports many Cross Language Clients and Protocols, comes with easy to use Enterprise Integration Patterns and many advanced features while fully supporting JMS 1.1 and J2EE 1.4. Apache ActiveMQ is released under the Apache 2.0 License.
RabbitMQ
RabbitMQ is a message broker. The principal idea is pretty simple: it accepts and forwards messages. You can think about it as a post office: when you send mail to the post box you're pretty sure that Mr. Postman will eventually deliver the mail to your recipient. Using this metaphor, RabbitMQ is a post box, a post office and a postman. The major difference between RabbitMQ and the post office is the fact that it doesn't deal with paper; instead it accepts, stores and forwards binary blobs of data - messages.
Please feel free to log any issues on the https://bitbucket.org/jmls/stomp issue system, and fork the project in order to commit back all those new features that you are going to add ...
dot.r Stomp uses the permissive MIT licence (http://en.wikipedia.org/wiki/MIT_License)
Have fun, enjoy!
Julian
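The post above is ABL-centric, but the broker side is language neutral. As a minimal sketch of publishing a log entry over STOMP, here is the Python stomp.py client talking to a local ActiveMQ broker (host, port, queue name, and credentials are placeholders):

```python
import json

import stomp  # third-party: pip install stomp.py

# Placeholders: ActiveMQ's default STOMP port and an illustrative queue.
conn = stomp.Connection([("localhost", 61613)])
conn.connect("admin", "admin", wait=True)

# Once handed to the broker, the entry is outside the sender's database
# transaction, so a later rollback in the sender cannot undo it.
conn.send(
    destination="/queue/app.log",
    body=json.dumps({"level": "ERROR", "msg": "order 42 failed"}),
)
conn.disconnect()
```

A separate listener subscribed to /queue/app.log then writes the entries to the database in its own transaction, which is exactly why the rollback in the original session can no longer touch them.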
Every change to the database must be part of a transaction. If you do not explicitly start one it will be implicitly started for you and scoped to the next outer block with transaction capabilities.
However, and although I would not recommend it, you can work with sub-transactions. You can invoke a sub-transaction by explicitly specifying DO TRANSACTION within the transaction scope. Although the database will never know about it, the client can roll back the sub-transaction while the database commits the transaction.
But in order to implement something like this you must master the concepts of transaction scope, block behavior and error handling.
RealHeavyDude.
Write your log entries to a no-undo temp-table.
When the code commits a transaction, or when no transaction is active (transactionID = ?), have your code write the log entries out.
I don't think there is any way to do this in ABL as you planned, either efficiently (sprinkling temp-table flushes or other tidbits all over the place is gross) or reliably (what if the application crashes with an un-flushed temp-table?), as others have mentioned. I would suggest making your logging less coupled to your app by making the database writes asynchronous, occurring outside of your application if possible.
Since you're on Windows, you could change your logging to use the .NET log4net library instead of ABL constructs. log4net has a few appenders that would be useful:
AdoNetAppender which lets you log directly to a database
RemoteSyslogAppender which uses the syslog protocol, letting you log to an external Unix syslog or rsyslog daemon (rsyslog supports writing log messages to databases)
UDPAppender which sends the log messages via UDP packets somewhere else to be handled (e.g. a logFaces server, which supports writing to databases)
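For comparison, the same decoupling is available outside .NET; a sketch using Python's standard logging module and a syslog daemon (the daemon address is a placeholder, and the daemon, not the application, is assumed to be the thing configured to write into the database):

```python
import logging
import logging.handlers

# Placeholder address of an rsyslog/syslog daemon that is configured to
# write messages into a database, keeping DB writes out of the app.
handler = logging.handlers.SysLogHandler(address=("loghost.example.com", 514))
log = logging.getLogger("app")
log.addHandler(handler)
log.setLevel(logging.INFO)

# This call returns immediately; persistence is the daemon's problem,
# so a rollback (or crash) in the application cannot undo the entry.
log.info("order 42 failed validation")
```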
If you must do it in ABL then you could use a named output stream specifically for your log messages (OUTPUT TO STREAM) which writes to a specific location where an external process is listening to handle it. This file could be a pipe created by something like mkfifo or just a regular text file that is monitored for changes with inotify (not sure what the Windows equivalents of these are). This external process would handle parsing the messages and writing them to the database (basically re-inventing rsyslog).
I like the no-undo temp-table idea, just be sure to put the database write part in a "FINALLY" block in case of unhandled exceptions.

Does the POP3 protocol allow you to specify a subset of emails to download?

I am writing a POP3 mail client. I want to leave the messages on the server, but I don't want to have to redownload all messages every time I reconnect.
If I download all the messages today, and reconnect tomorrow does the protocol support the ability to only download the messages from the last 24 hours or from a certain sequential ID? Or will I have to redownload all of the messages again?
I am aware of the Unique IDentification Listing feature, but according to http://www.faqs.org/rfcs/rfc1939.html it's not supported in the original specification. Do most mail servers support this feature?
Yes, my client supports IMAP too, but this question is specifically about POP servers.
Have you considered using IMAP?
I've done it.
You'll have to reread all the headers but you can decide which messages to download.
I don't recall anything in the header that will give you a foolproof timestamp, however. I don't believe your solution is possible without keeping a record of what you have already seen.
(In my case I didn't care--I was simply looking for messages with certain identifying features in the header--those messages were downloaded, processed and killed, everything else was untouched.)
I also wonder if you're misunderstanding the protocol. Just because you download a message doesn't mean it's removed from the server. It's only removed from the server if you give an explicit command to kill the message. (And when a message contains so many attachments that the system times out before you properly log off, and thus your kill command is discarded, you'll be driven up the wall!) (It was an oversight in the design. The original logic was: attach one file over 100k, or as many as possible whose total was under 100k. Another task barfed and generated thousands of files of around 100 bytes each. While it was a perfectly legit, albeit extreme, e-mail, nothing was able to kill it!)
Thus if I were writing a mail client I would simply download anything I didn't already have locally. If it's supposed to remain on the server, fine, just don't give the kill command.
The way I have seen that handled in the past is on a client-by-client basis. For example, if I use Scribe to get e-mail on one machine without deleting, then move to another machine, all e-mails are downloaded again despite the fact that I've seen them before. Internally, I imagine the client has a table that stores whether or not an e-mail has been downloaded previously.
There's nothing in the protocol that I'm aware of that would allow for that.
Sort-of. You can download individual messages, but you can't store state on the remote server.
See the RETR command at http://www.faqs.org/rfcs/rfc1939.html.
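A minimal sketch of that "track what you've already seen" approach using Python's standard poplib, keeping downloaded UIDLs in a local file (file names, server, and credentials are placeholders; UIDL is optional per RFC 1939, so a real client needs a fallback for servers without it):

```python
import pathlib
import poplib

SEEN_FILE = pathlib.Path("seen_uidls.txt")  # local record of downloaded mail

def fetch_new(host: str, user: str, password: str) -> None:
    """Download only messages whose UIDL is not recorded locally."""
    seen = set(SEEN_FILE.read_text().split()) if SEEN_FILE.exists() else set()
    pathlib.Path("mail").mkdir(exist_ok=True)
    client = poplib.POP3_SSL(host)
    try:
        client.user(user)
        client.pass_(password)
        # UIDL returns entries like b"1 a1b2c3" (message number, unique id).
        _, listing, _ = client.uidl()
        for entry in listing:
            num, uid = entry.decode().split()
            if uid in seen:
                continue
            _, lines, _ = client.retr(int(num))  # RETR never deletes by itself
            (pathlib.Path("mail") / (uid + ".eml")).write_bytes(
                b"\n".join(lines)
            )
            seen.add(uid)
    finally:
        client.quit()  # no DELE was issued, so everything stays on the server
    SEEN_FILE.write_text("\n".join(sorted(seen)))

fetch_new("pop.example.com", "user", "secret")  # placeholder credentials
```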

IMAP folder/messages synchronization strategy?

I'm about to add IMAP email integration to one of our web applications (ASP.NET / SQL Server). I'm already using a commercial library which exposes the most important IMAP functionality (get folder list, get message headers, get MIME message, etc.).
Getting email data "live" from the IMAP server works very well. But here comes the difficult part: I have to keep the email/folder caching SQL database synchronized with the IMAP server (I have to show the data filtered by different criteria).
Our database schema essentially contains a "Folders" and an "Emails" table. The "Emails" table contains primarily header information like "FromAddress", "FromName", "IsRead", "IsAnswered", "IsForwarded", "HasAttachments" etc. (without the email content or attachments).
I have to consider two major scenarios:
Getting all messages the first time (or after a user re-organized the folders)
Getting new/recent messages
What would be a good synchronization strategy for keeping the mail server and database server up-to-date, considering that performance is a major design criterion (I can't just query/compare thousands of messages every time I connect, in order to find out if the user moved or deleted some old emails).
Thanks!
From your library's feature list:
Better UniqueId Support: We've added even more options for requesting a message's unique id. You can now return the UniqueId in a message's DataTable for return trips to the IMAP server.
And:
Retrieve only New Messages
Search Flagged Messages
Mark/Unmark Messages as Read
It looks to me as though your library has all the support you need to keep your SQL server synchronized. You can programmatically mark messages as read, and the library supports retrieval of only new messages. That takes care of your second item.
Your strategy will depend partly on how your solution works. If I read your question correctly, your users manage their email on the IMAP server, and your SQL Server is "subscribed" to the IMAP server, from a synchronization perspective.
If this is correct, then synchronization is effectively a background task. My approach would be to synchronize using an event model on a user-by-user basis. If possible, "notify" the synchronization program when there is activity (new/deleted emails) for a user. Add a synchronization "job" to a background process that batches synch jobs together. A notification model will ensure that the synch program only works on users that need a synch.
Small new/deleted email synch jobs go to one "processor" and larger jobs like total resynch and folder reorganization go to another. Really big resynch jobs may have to be split up in order to keep overall throughput high. The "small job" and "big job" processors could be two different services, or possibly two different threads depending on performance and design considerations.
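The question involves a commercial .NET library, but the underlying IMAP mechanics of an incremental synch job can be sketched with Python's standard imaplib: cache UIDVALIDITY and the highest UID seen per folder, and fetch only headers above that UID, falling back to a full resync when UIDVALIDITY changes (server details, folder name, and the cached values are placeholders):

```python
import imaplib

HOST, USER, PASSWORD = "imap.example.com", "user", "secret"  # placeholders

def fetch_new_headers(client, folder, last_uidvalidity, last_uid):
    """Return (uidvalidity, new_headers) for messages newer than last_uid."""
    client.select(folder, readonly=True)
    _, data = client.status(folder, "(UIDVALIDITY)")
    uidvalidity = int(data[0].split()[-1].strip(b")"))
    if uidvalidity != last_uidvalidity:
        last_uid = 0  # folder was rebuilt/reorganized: full resync needed
    _, data = client.uid("SEARCH", None, f"UID {last_uid + 1}:*")
    headers = []
    for uid in data[0].split():
        if int(uid) <= last_uid:  # servers may echo the last UID back
            continue
        # BODY.PEEK[HEADER] fetches headers without setting the \Seen flag.
        _, msg = client.uid("FETCH", uid, "(FLAGS BODY.PEEK[HEADER])")
        headers.append((int(uid), msg[0][1]))
    return uidvalidity, headers

client = imaplib.IMAP4_SSL(HOST)
client.login(USER, PASSWORD)
# In practice the cached values would come from the "Folders" table.
uidvalidity, new = fetch_new_headers(client, "INBOX", 0, 0)
client.logout()
```

This is what keeps the "get new/recent messages" path cheap: the synch job never re-reads old headers unless UIDVALIDITY forces a full resync, which maps onto the "small job" versus "big job" split described above.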