I'm about to add IMAP email integration to one of our web applications (ASP.NET / SQL Server). I'm already using a commercial library which exposes the most important IMAP functionality (get folder list, get message headers, get MIME message, etc.).
Getting email data "live" from the IMAP server works very well. But here comes the difficult task: I have to keep the SQL database that caches emails and folders synchronized with the IMAP server (I have to display the data filtered by different criteria).
Our database schema essentially contains a "Folders" and an "Emails" table. The "Emails" table contains primarily header information like "FromAddress", "FromName", "IsRead", "IsAnswered", "IsForwarded", "HasAttachments" etc. (without the email content or attachments).
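For reference, the cache entities look roughly like the sketch below. The UID-related fields are not in our current schema; they are just what I expect a sync strategy would need to key on.

    // Rough sketch of the cache entities (C#). UniqueId, UidValidity and
    // LastSeenUid are assumptions on my part, not existing columns.
    public class CachedFolder
    {
        public int Id { get; set; }
        public string Path { get; set; }        // e.g. "INBOX/Projects"
        public long UidValidity { get; set; }   // IMAP UIDVALIDITY of the folder
        public long LastSeenUid { get; set; }   // highest UID synced so far
    }

    public class CachedEmail
    {
        public int Id { get; set; }
        public int FolderId { get; set; }
        public long UniqueId { get; set; }      // IMAP UID within the folder
        public string FromAddress { get; set; }
        public string FromName { get; set; }
        public bool IsRead { get; set; }
        public bool IsAnswered { get; set; }
        public bool IsForwarded { get; set; }
        public bool HasAttachments { get; set; }
    }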
I have to consider two major scenarios:
Getting all messages the first time (or after a user re-organized the folders)
Getting new/recent messages
What would be a good synchronization strategy for keeping the mail server and the database server up to date, considering that performance is a major design criterion? (I can't just query and compare thousands of messages every time I connect in order to find out whether the user moved or deleted some old emails.)
Thanks!
From your library's feature list:
Better UniqueId Support: We've added even more options for requesting a message's unique id. You can now return the UniqueId in a message's DataTable for return trips to the IMAP server.
And:
Retrieve only New Messages
Search Flagged Messages
Mark/Unmark Messages as Read
It looks to me as though your library has all the support you need to keep your SQL server synchronized. You can programmatically mark messages as read, and the library supports retrieval of only new messages. That takes care of your second item.
Your strategy will depend partly on how your solution works. If I read your question correctly, your users manage their email on the IMAP server, and your SQL Server is "subscribed" to the IMAP server, from a synchronization perspective.
If this is correct, then synchronization is effectively a background task. My approach would be to synchronize using an event model on a user-by-user basis. If possible, "notify" the synchronization program when there is activity (new/deleted emails) for a user. Add a synchronization "job" to a background process that batches synch jobs together. A notification model will ensure that the synch program only works on users that need a synch.
Small new/deleted email synch jobs go to one "processor" and larger jobs like total resynch and folder reorganization go to another. Really big resynch jobs may have to be split up in order to keep overall throughput high. The "small job" and "big job" processors could be two different services, or possibly two different threads depending on performance and design considerations.
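A minimal sketch of that split, assuming hypothetical SyncJob and dispatcher types (this is just the dispatching shape, not your commercial IMAP library's API):

    // Hypothetical types: a SyncJob says which user/folder needs attention and why.
    // Small jobs and big jobs get separate queues so a huge resync cannot starve
    // the frequent "new/deleted mail" work.
    using System;
    using System.Collections.Concurrent;
    using System.Threading;
    using System.Threading.Tasks;

    public enum SyncJobKind { NewOrDeletedMail, FullResync, FolderReorganization }

    public record SyncJob(string UserId, string Folder, SyncJobKind Kind);

    public class SyncDispatcher
    {
        private readonly BlockingCollection<SyncJob> _smallJobs = new();
        private readonly BlockingCollection<SyncJob> _bigJobs = new();

        // Called by the notification layer whenever there is activity for a user.
        public void Enqueue(SyncJob job)
        {
            if (job.Kind == SyncJobKind.NewOrDeletedMail)
                _smallJobs.Add(job);
            else
                _bigJobs.Add(job);   // full resync / folder reorganization
        }

        // One consumer loop per queue; each handler would call the IMAP library
        // and update the Folders/Emails cache tables accordingly.
        public void Start(Action<SyncJob> small, Action<SyncJob> big, CancellationToken ct)
        {
            Task.Run(() => { foreach (var j in _smallJobs.GetConsumingEnumerable(ct)) small(j); }, ct);
            Task.Run(() => { foreach (var j in _bigJobs.GetConsumingEnumerable(ct)) big(j); }, ct);
        }
    }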
Related
Can I use Mirth Connect to store millions of HL7v2 messages (pipe delimited) and query them programmatically from our third-party software application at a later point in time?
What's the best way to do that? Is Mirth's REST API capable of querying its message store efficiently?
Unfortunately, according to the manual (page 368), I need a running Mirth Connect instance to browse the REST API documentation. (If browsing the REST API documentation didn't require a running Mirth instance, I wouldn't have asked this question. Is there a Mirth Connect instance available on the internet to play with? Or would somebody be so kind as to post the relevant REST API documentation for this question?)
So far, these are the scenarios I have come up with:
Mirth is an integration engine, and its strength is processing messages. Browsing historical messages can at times be difficult or slow, depending on the storage settings for the channel and whether or not you take care to pull additional information out during processing to store in "custom metadata" fields. The custom metadata fields are not indexed by default, but you can add your own (Mirth supports several back-end databases, including Postgres, MySQL, Oracle, and MSSQL). Searching the message content basically involves doing a full-text search and scanning. Filter options to reduce scan time, apart from the custom metadata you create, are mostly related to the message properties (datetime received, status, etc.) and not the content.
So, I would not recommend it for the use-case you are suggesting.
However, Mirth could definitely be used to convert your messages (batched from files or live) to XML, which could be put in a database designed to handle and query large volumes of XML documents. I assume that when you say HL7 you mean the ER7 (pipe-delimited) format of HL7v2. Mirth automatically does the conversion to XML for those types of messages, as they are handled as XML during processing. You could easily create a new parent node that holds both the converted XML and the original message string as children.
If the database you choose has a JDBC driver, Java SDK, or HTTP/REST API, Mirth can likely insert the converted messages directly for you as it processes them.
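Mirth itself would build that parent node in a JavaScript transformer; purely to illustrate the wrapper structure, here is the same idea in C# with LINQ to XML (the element names are made up):

    using System.Xml.Linq;

    public static class MessageWrapper
    {
        // Illustrative only: wrap the converted HL7 XML and the original ER7
        // string in one parent document before handing it to the XML store.
        public static XDocument Wrap(XElement convertedHl7Xml, string originalEr7) =>
            new XDocument(
                new XElement("archivedMessage",             // made-up element names
                    new XElement("converted", convertedHl7Xml),
                    new XElement("original", new XCData(originalEr7))));
    }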
There are two misconceptions here:
An HL7v2 message is triggered by a real-world event, called the trigger event, on the placer (sender) side. It expects some activity to happen on the filler (receiver) side, such as acknowledging the message or replying with a query response. That is, HL7v2 supports data flow among systems.
Mirth Connect is an HL7 interface engine aimed at transforming incoming feeds in one format (e.g., HL7v2 in ER7 format) into outgoing feeds in another format (which could be another HL7v2 variant, XML, a database, etc.). It does not store anything except a configured portion of messages for audit purposes.
Now, to implement the solution you outlined, Mirth Connect or any other transformation mechanism has to implement two flows: receive, convert if needed, and store incoming messages; and provide an interface to query those messages.
This can obviously be done with Mirth Connect, but the premise of your initial question, that Mirth is capable of storing millions of records, is incorrect. In fact, it's recommended to keep as few messages as possible in Mirth in order to speed up processing (each processed message is stored in the Mirth internal database several times, depending on configuration). Thus, all transformed messages should go into external public or private message storage, exactly as shown in your diagrams.
I am designing a WhatsApp-like messenger application for the desktop using WPF and .NET. When a user creates a group, I want the other members of the group to receive a notification that they were added to it. My frontend is built in C#/.NET and talks to a RESTful web service (Ruby on Rails). I am using Postgres for the database. I also have a Redis layer to cache my Rails models.
I am considering the following options.
1) Use Postgres's built-in NOTIFY/LISTEN mechanism, which the clients can subscribe to directly. I foresee two issues here:
i) Postgres might not be able to handle tens of thousands of clients subscribed directly.
ii) There is no guarantee of delivery if a client is disconnected.
2) Use Redis's Pub/Sub mechanism, to which the clients can subscribe. I am still concerned about the lack of delivery guarantees here.
3) Use a message queue like RabbitMQ. The producer for this queue will be Postgres, which will push messages through triggers. The consumers, of course, will be the .NET clients.
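For reference, the consumer side I have in mind looks roughly like this, using the RabbitMQ.Client 6.x API; the queue naming and payload handling are placeholders I made up.

    using System;
    using System.Text;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    public static class NotificationListener
    {
        public static void Listen(string userId)
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            var connection = factory.CreateConnection();
            var channel = connection.CreateModel();

            // One durable queue per user, so messages wait while the client is offline.
            var queue = $"notifications.{userId}";
            channel.QueueDeclare(queue, durable: true, exclusive: false, autoDelete: false);

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (_, ea) =>
            {
                var json = Encoding.UTF8.GetString(ea.Body.ToArray());
                Console.WriteLine($"notification: {json}");   // real client would raise a WPF event here
                channel.BasicAck(ea.DeliveryTag, multiple: false);
            };
            channel.BasicConsume(queue, autoAck: false, consumer: consumer);
        }
    }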
So far, I am inclined to use the 3rd option.
Does anyone have any suggestions how to design this?
In an application like WhatsApp itself, the client running on your phone is an integral part of a large and complex event-based, distributed system.
Without more context, it would be impossible to point in the right direction. That said:
For option 1: You seem to imply that each client, as in a WhatsApp client, would directly (or through some web service) communicate with Postgres as an event bus, which is not sound and would not scale because you can only have ONE Postgres instance.
For option 2: You have the same problem as in option 1, with worse failure modes.
For option 3: RabbitMQ seems like a reasonable ally here. It is distributed in nature and scales well. As a matter of fact, it runs on Erlang, just as most of WhatsApp does. Using triggers inside Postgres to publish messages, however, does not make a lot of sense.
You need a message bus because you would have lots of updates to do in the background, not to directly connect your users to each other. As you said, clients can be offline.
Architecture is more about deferring decisions than taking them.
I suggest that you start simple. Build a small, monolithic, synchronous system first, pushing updates as persisted data to all the involved users. For example, in a group of n users, just write n records to a table. It is already complicated to reliably keep track of who has received and read what.
This heavy "group" updates can then be moved to long-running processes using RabbitMQ or the like, but a system with several thousand users can very well work without such thing, especially because a simple message from user A to user B would not need many writes.
We've got different processes that send mails when they encounter issues (e.g. not enough permissions to perform an operation on a certain order item). This works fine, except that sometimes identical messages are sent every 5 minutes. In our environment it is very difficult to synchronize the email sending at the application layer (there are actually several different applications sending out email, so we'd have to touch every one of them if we were to implement this inside the application layer).
It would seem logical to me that filtering out mails (by duplicate subject) is best done within the email layer, e.g. the application receiving the SMTP requests.
Yet we'd also prefer not to go down to the SMTP layer ourselves, but rather use an existing service/application.
Is anybody aware of a web mailer (like googlemail) which does this kind of filtering? It would be OK for us to pay for such a service, so being "free as in beer" would be nice, but not being free is not a showstopper.
Thanks in advance
Holger
I find the idea of filtering duplicate e-mail messages by the Subject: header quite worrisome. If they are produced by multiple applications, how can you be certain that the content of the messages is duplicated and that you are not unwittingly dropping important notifications?
The only unique feature of a message that can be used to filter out duplicates is its Message-ID: header. If that header is the same for two messages, then it's usually reasonable to assume that they are copies of the same original message - e.g. one received directly and one that was CC'ed to a mailing list.
That said, you can do pretty much anything you want on most SMTP servers - at least those that are based on a Unix-like OS. For example, Postfix can use custom shell scripts for filtering.
You can, for example, use formail to extract the body of each message and produce its MD5 hash. Comparing the message body hashes along with the Date:, Subject:, From:, To: and Cc: headers at the same time is a good start for detecting real duplicates.
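Not Postfix or formail itself, but just to illustrate the comparison key, here is a sketch in C# that hashes the body and combines it with those headers (how you extract the body and headers is up to your mail layer):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    public static class DuplicateDetector
    {
        // Builds a duplicate-detection key: MD5 of the body plus the headers above.
        // Two messages with the same key within some time window are treated as copies.
        // (Convert.ToHexString requires .NET 5 or later.)
        public static string DuplicateKey(string body, string date, string subject,
                                          string from, string to, string cc)
        {
            using var md5 = MD5.Create();
            var bodyHash = Convert.ToHexString(md5.ComputeHash(Encoding.UTF8.GetBytes(body)));
            return string.Join("|", bodyHash, date, subject, from, to, cc);
        }
    }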
I'm in the planning stages of a .NET service which continually processes incoming messages, which involves various transformations, database inserts and updates, etc. As a whole, the service is huge and complicated, but the individual tasks it performs are small, simple, and well-defined.
For this reason, and in order to allow for easy expansion in future, I want to split the service into several smaller services which basically perform part of the processing before passing it onto the next service in the chain.
In order to achieve this, I need some kind of intermediary messaging system that will pass messages from one service to another. I want this to happen in such a way that if a link in the chain crashes or is taken offline briefly, the messages will begin to queue up and get processed once the destination comes back online.
I've always used message queuing for this type of thing, but have recently been made aware of SQL Service Broker which appears to do something similar. Is SQLSB a viable alternative for this scenario and, if so, would I see any performance benefits by using that instead of standard Message Queuing?
Thanks
It sounds to me like you may be after a service bus architecture. This would provide you with the coordination and fault tolerance you are looking for. I'm most familiar with, and partial to, NServiceBus, but there are others, including MassTransit and Rhino Service Bus.
If most of these steps start from a database state and end up as a database update, then merging your message storage with your data storage makes a lot of sense:
a single product to backup/restore
consistent state backups
a single high-availability/disaster recoverability solution (DB mirroring, clustering, log shipping etc)
database-scale storage (IO capabilities, size and capacity limitations, etc. as per the database product's characteristics, not the limits of message-store products)
a single product to tune, troubleshoot, administer
In addition there are serious performance considerations: having your message store be the same as the data store means you are not required to do a two-phase commit on every message interaction. Using a separate message store requires you to enlist the message store and the data store in a distributed transaction (even if both are on the same machine), which requires two-phase commit and is much slower than the single-phase commit of database-only transactions.
In addition, using a message store in the database, as opposed to an external one, has advantages like queryability (you can run SELECT over the message queues).
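A sketch of what that looks like from .NET, assuming a Service Broker queue named OrderEventsQueue and an Orders table (both placeholders): the RECEIVE and the data update share one local SqlTransaction, so a single commit covers both.

    using Microsoft.Data.SqlClient;

    public static class QueueProcessor
    {
        public static void ProcessOneMessage(SqlConnection conn)
        {
            using var tx = conn.BeginTransaction();

            // Dequeue one message; if we roll back, it becomes available on the queue again.
            using var receive = new SqlCommand(
                "WAITFOR (RECEIVE TOP (1) CAST(message_body AS NVARCHAR(MAX)) AS body " +
                "FROM dbo.OrderEventsQueue), TIMEOUT 5000;", conn, tx);
            var body = receive.ExecuteScalar() as string;
            if (body == null) { tx.Rollback(); return; }   // queue was empty

            // Data change in the same local transaction: no MSDTC, no two-phase commit.
            using var update = new SqlCommand(
                "UPDATE dbo.Orders SET Status = 'Processed' WHERE OrderId = @id", conn, tx);
            update.Parameters.AddWithValue("@id", ExtractOrderId(body));  // hypothetical parser
            update.ExecuteNonQuery();

            tx.Commit();   // one single-phase commit covers both the dequeue and the update
        }

        private static int ExtractOrderId(string messageBody) => 0;  // placeholder for real parsing
    }

Rolling back instead of committing simply returns the message to the queue, which is where the fault tolerance comes from.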
Now, if we translate the abstract terms, with 'message store in the database' being Service Broker and 'non-database message store' being MSMQ, you can see my point: SSB will run circles around MSMQ any time.
My recent experience with both approaches (starting with SQL Server Service Broker) has left me desperate to get my messages out of SQL Server. The problem is quasi-political, but you might want to consider it: SQL Server in my organisation is managed by specialized DBAs, while application servers (i.e. messaging infrastructure like NServiceBus) are managed by developers and the network team. Any change to the database servers requires painful performance analysis from the DBAs and is steeped in the fear that our queuing engine, living in the same space, might drag down the standard SQL responsibilities.
SSB is pretty difficult to manage (not unlike messaging middleware), but the difference is that I am more allowed to screw something up in the messaging world (the worst that can happen is a pile of messages building up somewhere and logs filling up), whereas I can't afford any mistakes in the SQL world, where customer transactional data lives and is vital for the business (including data from legacy systems). I really don't want to get those 'unexpected database growth', 'wait time alert' or 'why is my temp db growing without end' emails anymore.
I've learned that application servers are cheap. Just add message handlers, add machines... easy. Virtually no license costs. With SQL Server it is exactly the opposite. It now appears to me that using Service Broker for messaging is like using an expensive car to plow a potato field. It is much better suited to other things.
I am writing a POP3 mail client. I want to leave the messages on the server, but I don't want to have to redownload all messages every time I reconnect.
If I download all the messages today and reconnect tomorrow, does the protocol support the ability to download only the messages from the last 24 hours, or those after a certain sequential ID? Or will I have to redownload all of the messages again?
I am aware of the Unique IDentification Listing (UIDL) feature, but according to http://www.faqs.org/rfcs/rfc1939.html it was not part of the original specification. Do most mail servers support this feature?
Yes, my client supports IMAP too, but this question is specifically for the POP servers.
Have you considered using IMAP?
I've done it.
You'll have to reread all the headers but you can decide which messages to download.
I don't recall anything in the header that will give you a foolproof timestamp, however. I don't believe your solution is possible without keeping a record of what you have already seen.
(In my case I didn't care--I was simply looking for messages with certain identifying features in the header--those messages were downloaded, processed and killed, everything else was untouched.)
I also wonder if you're misunderstanding the protocol. Just because you download a message doesn't mean it's removed from the server. It's only removed from the server if you give an explicit command to kill the message. (And when a message contains so many attachments that the system times out before you properly log off, and thus your kill command is discarded, you'll be driven up the wall!) (It was an oversight in the design. The original logic was to attach one file over 100k, or as many as possible whose total was under 100k. Another task barfed and generated thousands of files of around 100 bytes each. While it was a perfectly legit, albeit extreme, e-mail, nothing was able to kill it!)
Thus if I were writing a mail client I would simply download anything I didn't already have locally. If it's supposed to remain on the server, fine, just don't give the kill command.
The way I have seen that handled in the past is on a client-by-client basis. For example, if I use Scribe to get e-mail on one machine without deleting, then move to another machine, all e-mails are downloaded again despite the fact that I've seen them before. Internally, I imagine the client has a table that stores whether or not an e-mail has been downloaded previously.
There's nothing in the protocol that I'm aware of that would allow for that.
Sort of. You can download individual messages, but you can't store state on the remote server.
See the RETR command at http://www.faqs.org/rfcs/rfc1939.html.
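A rough sketch of combining UIDL and RETR over a plain (non-TLS) POP3 session: list the unique ids, diff against what you have stored locally, and retrieve only the new ones. Error handling, TLS and APOP are omitted, and if the server doesn't implement the optional UIDL command you're back to tracking headers yourself.

    using System.Collections.Generic;
    using System.IO;
    using System.Net.Sockets;
    using System.Text;

    public static class Pop3Sync
    {
        // Returns the raw text of messages whose UIDL we have not seen before.
        // Nothing is deleted: no DELE is ever sent, so mail stays on the server.
        public static List<string> FetchNew(string host, string user, string pass, ISet<string> knownUids)
        {
            using var client = new TcpClient(host, 110);   // plain POP3, for illustration only
            using var stream = client.GetStream();
            using var reader = new StreamReader(stream);
            using var writer = new StreamWriter(stream) { AutoFlush = true, NewLine = "\r\n" };

            reader.ReadLine();                             // +OK greeting
            writer.WriteLine($"USER {user}"); reader.ReadLine();
            writer.WriteLine($"PASS {pass}"); reader.ReadLine();

            // UIDL: "+OK", then "<msg-number> <unique-id>" lines, terminated by "."
            writer.WriteLine("UIDL"); reader.ReadLine();
            var toFetch = new List<int>();
            for (var line = reader.ReadLine(); line != "."; line = reader.ReadLine())
            {
                var parts = line.Split(' ');
                if (!knownUids.Contains(parts[1]))
                    toFetch.Add(int.Parse(parts[0]));
            }

            var messages = new List<string>();
            foreach (var number in toFetch)
            {
                writer.WriteLine($"RETR {number}"); reader.ReadLine();   // +OK
                var sb = new StringBuilder();
                for (var line = reader.ReadLine(); line != "."; line = reader.ReadLine())
                    sb.AppendLine(line.StartsWith(".") ? line.Substring(1) : line);  // undo dot-stuffing
                messages.Add(sb.ToString());
            }

            writer.WriteLine("QUIT");
            return messages;
        }
    }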