Mirth messages are missing from the Admin Console interface

We use Mirth as our interface engine, with ActiveMQ and a Spring inbound listener to process messages.
Our customers reported that some messages are missing from the Mirth console, even though they can be found in the ActiveMQ queue and in the Spring listener application.
Initially we thought that someone may have removed them manually from Mirth, but when we checked the event logs there was no sign of a message being removed.
We found this happening for some of the messages but could not identify the cause of the issue or a pattern to the affected messages.
Has anyone faced an issue like this with the Mirth Admin Console?
We have the client DB as well, but we are unable to open it except through Mirth to check whether the data is available.
I would highly appreciate it if someone could help with this.
Thanks

I have found some channels don't display "filtered" messages properly. But I have never seen successful messages go "missing".
If you don't trust the Mirth Admin then I would recommend querying the Mirth DB.
This can be done outside the confines of Mirth, provided that Mirth is writing to an external DB such as MS SQL Server.
The data you get from it is VERY rich, but if you are sending thousands of messages an hour (or more) you'll probably want to limit the time range you search. Free-text searching like
select * from message m where m.raw_data like ('%needle%')
is NOT recommended and will take a long time to execute.
Being able to search Mirth via the DB has opened up a ton of analysis for us that we don't have through the admin interface.
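As a rough illustration, here is a minimal JDBC sketch of a time-boxed search done outside Mirth. It assumes the same message table and raw_data column as the query above, plus a date_created column and SQL Server connection details, all of which you should check against your actual Mirth schema:

import java.sql.*;

public class MirthDbSearch {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection string: point it at the external DB Mirth writes to.
        String url = "jdbc:sqlserver://localhost:1433;databaseName=mirthdb";
        // Filter on the date range first, then free-text match within it.
        String sql = "SELECT m.message_id, m.date_created FROM message m "
                   + "WHERE m.date_created BETWEEN ? AND ? AND m.raw_data LIKE ?";
        try (Connection con = DriverManager.getConnection(url, "mirth", "password");
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setTimestamp(1, Timestamp.valueOf("2014-06-01 00:00:00"));
            ps.setTimestamp(2, Timestamp.valueOf("2014-06-02 00:00:00"));
            ps.setString(3, "%needle%");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("message_id") + " " + rs.getTimestamp("date_created"));
                }
            }
        }
    }
}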

Just to chime in on this question: if you are running quite a number of channels, or if you have quite a high volume of messages, Mirth may have trouble keeping up with its database updates due to row/table locks and inefficient conversions or data types (this should be resolved now).
We do, however, at peak times every so often, see a message or two processed through the engine with log entries indicating it was unable to insert the message and was rolled back. I would say we see around 10 per year like that. Hopefully this is a non-issue in Mirth 3 with the new backend ...

Related

Query message store of Mirth Connect

Can I use Mirth Connect to store millions of HL7v2 messages (pipe delimited) and have them queried programmatically by our third-party software application at a later point in time?
What's the best way to do that? Is Mirth's REST API capable of querying its message store efficiently?
Unfortunately I need a running Mirth Connect instance to browse the REST API documentation, according to the manual at page 368. (If browsing the REST API documentation didn't require a running instance of Mirth, I wouldn't have asked this question. Is there a Mirth Connect instance available on the internet to play with? Or would somebody be so kind as to post the relevant REST API documentation for this question?)
So far, these are the scenarios I have come up with:
Mirth is an integration engine, and its strength is processing messages. Browsing historical messages can at times be difficult or slow, depending on the storage settings for the channel and whether or not you take care to pull additional information out during processing to store in "custom metadata" fields. The custom metadata fields are not indexed by default, but you can add your own indexes (Mirth supports several back-end databases, including PostgreSQL, MySQL, Oracle, and MS SQL Server). Searching the message content basically involves doing a full-text search and scanning. Filter options to reduce scan time, apart from the custom metadata you create, are mostly related to the message properties (datetime received, status, etc.) and not the content.
So, I would not recommend it for the use-case you are suggesting.
However, Mirth could definitely be used to convert your messages (batched from files or live) to XML, which could be put in a database designed to handle and query large volumes of XML documents. I assume when you say HL7 you mean the ER7 (pipe-delimited) format of HL7v2. Mirth automatically does the conversion to XML for those types of messages, as they are handled as XML during processing. You could easily create a new parent node that holds both the converted XML and the original message string as children.
If the database you choose has a JDBC driver, Java SDK, or HTTP/REST API, Mirth can likely insert the converted messages directly for you as it processes them.
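To make that concrete, here is a minimal, hypothetical JDBC sketch of the insert step; the hl7_messages table and its columns are illustrative assumptions, not anything Mirth provides:

import java.sql.*;

public class Hl7ArchiveWriter {
    // Stores one row per message: the original ER7 string plus the XML conversion.
    public void store(Connection con, String rawEr7, String convertedXml) throws SQLException {
        String sql = "INSERT INTO hl7_messages (received_at, raw_er7, message_xml) VALUES (?, ?, ?)";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
            ps.setString(2, rawEr7);
            ps.setString(3, convertedXml);
            ps.executeUpdate();
        }
    }
}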
There are two misconceptions here:
An HL7v2 message is triggered by a real-world event, called the trigger event, on the placer (sender) side. It expects some activity to happen on the filler (receiver) side, such as confirming the message or replying with a query response. That is, HL7v2 supports data flow among systems.
Mirth Connect is an HL7 interface engine aimed at transforming incoming feeds in one format (e.g., HL7v2 in ER7 format) into outgoing feeds in another format (which could be another HL7v2 variant, XML, a database, etc.). It does not store anything except a configured portion of messages for audit purposes.
Now, to implement the solution you outlined, Mirth Connect or any other transformation mechanism has to implement two flows: receive, convert if needed, and store incoming messages; and provide an interface to query those messages.
This can obviously be done with Mirth Connect, but your initial question of whether Mirth is capable of storing millions of records is based on a misconception. In fact, it's recommended to keep as few messages as possible in Mirth to speed up processing (each processed message is stored in the Mirth internal database several times, depending on configuration). Thus, all transformed messages should go into external public or private message storage, exactly as shown in your diagrams.

Redis - Should I use Redis to store chat messages?

So I am currently working on a chat, and I wonder if I could use Redis to store the chat messages. The messages will only exist on the web, and I want a chat history of at least 20 messages for each private chat. The chats' subscribers are already stored in MongoDB.
I mainly want to use Redis because it lets me bypass the MongoDB layer, for more speed.
I already use Pub/Sub, but what about storing a copy in Redis Lists? Also, what about read statuses - how could I implement those?
Redis only loses data in the case of a power outage; if the system is shut down properly, it will save its data, and in that case data won't be lost.
It is a good approach to dump data from Redis to MongoDB (or any other DB) when a size limit is reached, or on a date basis (weekly or monthly), so that your realtime chat database stays lightweight.
Many modern systems nowadays prepare for power outages: a UPS will run and the system will shut down properly.
see : https://hackernoon.com/how-to-shutdown-your-servers-in-case-of-power-failure-ups-nut-co-34d22a08e92
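To sketch the Redis Lists idea from the question: keep each private chat in its own list, LPUSH new messages, and LTRIM to the last 20. This sketch uses the Jedis client; the chat:<id> key naming is an assumption for illustration:

import java.util.List;
import redis.clients.jedis.Jedis;

public class ChatHistory {
    private static final int HISTORY_SIZE = 20;

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "chat:42"; // hypothetical key for one private chat
            // The newest message goes to the head of the list ...
            jedis.lpush(key, "{\"from\":\"alice\",\"text\":\"hi\"}");
            // ... and the list is trimmed so only the newest 20 survive.
            jedis.ltrim(key, 0, HISTORY_SIZE - 1);
            // Reading the history back is a single range read.
            List<String> history = jedis.lrange(key, 0, HISTORY_SIZE - 1);
            history.forEach(System.out::println);
        }
    }
}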
Also what about reading statuses, how could I implement that?
That depends on the protocol you are implementing; if you are using XMPP, see this.
Otherwise, you can use a property in the message model, e.g. "DeliveryStatus", and set it to your enum values (1. Sent, 2. Delivered, 3. Read). Mark a message as Sent as soon as it is received at the server. For Delivered and Read, your clients will send back packets indicating that the respective action has occurred.
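A minimal sketch of that status model in Java; the class, field, and enum names simply follow the suggestion above and are not from any particular library:

public class ChatMessage {
    // 1. Sent, 2. Delivered, 3. Read - as described above.
    public enum DeliveryStatus { SENT, DELIVERED, READ }

    private final String text;
    private DeliveryStatus status = DeliveryStatus.SENT; // set when the server receives it

    public ChatMessage(String text) {
        this.text = text;
    }

    // Called when a client packet reports delivery or a read receipt;
    // statuses only move forward, never backward.
    public void advanceTo(DeliveryStatus next) {
        if (next.ordinal() > status.ordinal()) {
            status = next;
        }
    }
}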
As pointed out in the comment above, the important thing to consider here is the persistence model. Redis offers some persistence (with snapshots and AOF files). The important thing is to first understand what you need:
Can you afford to lose all the data? Can you afford to lose some of the data? If the answer is no, then perhaps you should not bother with Redis.

What is the purpose of the QuickFIX message store?

What is the purpose of the message store structure in QuickFIX? I understand that you can log all incoming and outgoing FIX messages via the message store interface, and that QuickFIX provides multiple implementations, such as the file store.
My question is: why would you even care about the message store, other than for logging your FIX messages for the record?
You are confusing the MessageStore and the Log, which are two different things.
The MessageStore is for internal engine use. It tracks the current incoming and outgoing message sequence numbers, the session start time, and other state. If your app goes down for whatever reason, when it restarts it uses the MessageStore to resume where it left off with regard to sequence numbers and whether to reset the session.
The Log, however, is just a log. The engine doesn't really care about it. It's for the developers.
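For concreteness, here is roughly how the two are wired up separately in QuickFIX/J. This is a sketch assuming a standard initiator setup; the config file path is a placeholder, and ApplicationAdapter may need replacing with your own Application implementation on older versions:

import quickfix.*;

public class FixStartup {
    public static void main(String[] args) throws Exception {
        SessionSettings settings = new SessionSettings("fix-session.cfg"); // placeholder path
        Application app = new ApplicationAdapter(); // your message callbacks go here

        // MessageStore: engine-internal state (sequence numbers, session start
        // time) that lets the engine resume correctly after a restart.
        MessageStoreFactory storeFactory = new FileStoreFactory(settings);

        // Log: a human-readable record of messages and events, for developers.
        LogFactory logFactory = new FileLogFactory(settings);

        Initiator initiator = new SocketInitiator(
                app, storeFactory, settings, logFactory, new DefaultMessageFactory());
        initiator.start();
    }
}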

BizTalk not tracking send/receive ports

It seems that any new send or receive ports that I create do not display any tracking, even if I tick all the tracking boxes. I have an existing application in which the receive port and orchestration tracking work, but the send port tracking doesn't.
On the same machine I also tried creating a new application, created a send and a receive port, and got no tracking at all. I did the same thing on a fresh install of BizTalk on another machine and I got tracking, so I'm not crazy.
I've tried ...
ticking every tracking box for the receive ports, orchestrations, and send ports
creating a new host specifically for tracking
recreating the original host with a different name
checking that the SQL Server service is running
rebooting the system
rebooting the host instances
restarting the BizTalk services
nothing shows in the event logs
all SQL jobs are OK except for 'Monitor BizTalk', which complains about 7 orphaned DTA entries
I can't see anything in particular that stands out in MBV except for the above-mentioned orphaned DTA entries.
In addition to Mike's answer:
You need to ensure that at least one of your hosts is enabled for tracking. In BizTalk Administrator, under Platform Settings, Hosts, select the host and enable tracking (the list of hosts also shows which host(s) currently have tracking enabled).
You can also verify that the tracking SQL Agent job is running by looking directly at the database:
select count(*) from BizTalkMsgBoxDb.dbo.Spool (NOLOCK)
select count(*) from BizTalkDTADb.dbo.Tracking_Parts1 (NOLOCK)
Basically, Spool should be a fairly low number (< 10,000) and should come back to a static level after a spike in messages, unless your suspended orchestrations are growing.
New messages should be copied across from the MessageBox to DtaDb.TrackingParts every minute, so Tracking_Parts1 should grow by a few records every 60-120 seconds after processing new messages, although they will eventually be purged/archived in line with your tracking archiving/purge strategy.
In a dev environment, the more tracking the merrier, as HAT (the orchestration debugger) will give you more information the more you track. However, in a PROD environment you would typically want to minimize tracking to improve performance and reduce disk overhead. We track just one copy: 'before processing' on the receive ports and 'after processing' on the send ports to our partners, and nothing at all on internal ports and orchestrations. This still allows us to provide sufficient evidence of data received and sent.
This post might help some people: http://learningcenter2.eworldtree.net:7090/Lists/Posts/Post.aspx?ID=78
For message tracking to work, among other factors, make sure that the "Message send and receive events" checkbox in the corresponding pipeline is enabled.
Please take a look at these two articles, What is Message Tracking? and Insight into BizTalk Server message tracking. The first article has an item of interest for you, which I'll quote below, and the second should solidify what you're trying to do.
The SQL Server Agent service must be running on all MessageBox databases. The TrackedMessages_Copy_ job makes message bodies available to tracking queries and WMI. To efficiently copy the message bodies, they remain in the MessageBox database and are periodically copied to the BizTalk Tracking (BizTalkDTADb) database by the TrackedMessages_Copy_ job. Having the SQL Server Agent service running is also a prerequisite for the archiving and purging process to work correctly.
Are you using a default pipeline? Have you checked the tracking check boxes on it? There is a bug whereby pipeline tracking is disabled for default pipelines.
More info here:
http://blog.ibiz-solutions.se/integration/biztalk-global-pipeline-tracking-disabled-unexpectedly/
Please ensure that the required tracking is enabled in the properties of the send pipeline used by your send port. If message body tracking is disabled on the send pipeline, nothing is tracked on the send port either.

Is it possible to write a record as NO-UNDO in a transaction?

We are implementing some logging, where we need to write the log entries to the DB. But the process runs in a transaction, and on rollback our new log entries are also deleted. Can I write to the DB outside of the transaction? Something like writing to a temp-table with the NO-UNDO option...? So that the new log entries still remain in the DB...?
Another possibility would be to use an app server. Transactions in app server sessions are independent from transactions in the original session (that's what the optional and redundant "DISTINCT TRANSACTION" syntax is all about).
Another option would be to use a simple messaging system. One option that is very easy to set up and use is STOMP. It is platform neutral and very easy to get going with.
Julian Lyndon-Smith posted the following on PEG about a month ago, and it really is as easy to set up and use as he says (I've tried it; I used Apache ActiveMQ, which is also very easy to set up and use):
Following on from presentations in Boston and Finland, dot.r is
pleased to announce the open source Stomp project, available
immediately.
Download from either http://www.dotr.com or
https://bitbucket.org/jmls/stomp , the dot.r stomp programs allow you
to connect your progress session to any other application or service
that is connected to the same message broker.
Open source, free message brokers that support Stomp are:
Fuse
(http://fusesource.com/products/fuse-mq-enterprise/) [a Progress company now owned by Red Hat inc]
Fuse MQ Enterprise is a standards-based, open source messaging platform that deploys with a very small footprint. The lack of license
fees combined with high-performance, reliable messaging that can be
used with any development environment provides a solution that
supports integration everywhere
ActiveMQ
Apache ActiveMQ (tm) (http://activemq.apache.org/) is the most popular
and powerful open source messaging and Integration Patterns server. Apache
ActiveMQ is fast, supports many Cross Language Clients and Protocols, comes
with easy to use Enterprise Integration Patterns and many advanced features
while fully supporting JMS 1.1 and J2EE 1.4.
Apache ActiveMQ is released under the Apache 2.0 License.
RabbitMQ
RabbitMQ is a message broker. The principal idea is pretty simple: it
accepts and forwards messages. You can think about it as a post
office: when you send mail to the post box you're pretty sure that Mr.
Postman will eventually deliver the mail to your recipient. Using this
metaphor RabbitMQ is a post box, a post office and a postman.
The major difference between RabbitMQ and the post office is the fact
that it doesn't deal with paper, instead it accepts, stores and
forwards binary blobs of data - messages.
Please feel free to log any issues on the
https://bitbucket.org/jmls/stomp issue system, and fork the project in
order to commit back all those new features that you are going to add
...
dot.r Stomp uses the permissive MIT licence
(http://en.wikipedia.org/wiki/MIT_License)
Have fun, enjoy !
Julian
Every change to the database must be part of a transaction. If you do not explicitly start one, it will be implicitly started for you and scoped to the next outer block with transaction capabilities.
However, and although I would not recommend it, you can work with sub-transactions. You can invoke a sub-transaction by explicitly specifying DO TRANSACTION within an existing transaction scope. Although the database will never know about it, the client can roll back the sub-transaction while the database commits the transaction.
But in order to implement something like this, you must master the concepts of transaction scope, block behavior, and error handling.
RealHeavyDude.
Write your log entries to a NO-UNDO temp-table.
When the code is about to commit a transaction, or when no transaction is active (transactionID = ?), have your code write the log entries out.
I don't think there is any way to do this in ABL as you planned, either efficiently (sprinkling temp-table flushes and other tidbits all over the place is gross) or reliably (what if the application crashes with an un-flushed temp-table?), as others have mentioned. I would suggest decoupling your logging from your app by making the database writes asynchronous, occurring outside of your application if possible.
Since you're on Windows, you could change your logging to use the .NET log4net library instead of ABL constructs. log4net has a few appenders that would be useful:
AdoNetAppender which lets you log directly to a database
RemoteSyslogAppender which uses the syslog protocol, letting you log to an external Unix syslog or rsyslog daemon (rsyslog supports writing log messages to databases)
UDPAppender which sends the log messages via UDP packets somewhere else to be handled (e.g. a logFaces server, which supports writing to databases)
If you must do it in ABL, then you could use a named output stream specifically for your log messages (OUTPUT TO STREAM) that writes to a specific location where an external process is listening. This file could be a pipe created by something like mkfifo, or just a regular text file that is monitored for changes with inotify (I'm not sure what the Windows equivalents are). This external process would handle parsing the messages and writing them to the database (basically re-inventing rsyslog); a rough sketch of such a process follows.
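As a sketch of that external process in Java (the log file path, the polling approach, and the app_log table are all illustrative assumptions): it tails the text file the ABL stream writes to and inserts each new line into the database, outside of any ABL transaction:

import java.io.*;
import java.sql.*;

public class LogFileTailer {
    public static void main(String[] args) throws Exception {
        // Placeholders: the file your ABL output stream writes to, and the target DB.
        File logFile = new File("C:/logs/app.log");
        try (Connection con = DriverManager.getConnection(
                     "jdbc:sqlserver://localhost:1433;databaseName=logs", "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO app_log (logged_at, entry) VALUES (?, ?)");
             BufferedReader reader = new BufferedReader(new FileReader(logFile))) {
            while (true) {
                String line = reader.readLine();
                if (line == null) {
                    Thread.sleep(500); // wait for the ABL session to append more
                } else {
                    ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
                    ps.setString(2, line);
                    ps.executeUpdate(); // committed independently of the ABL transaction
                }
            }
        }
    }
}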
I like the NO-UNDO temp-table idea; just be sure to put the database-write part in a FINALLY block in case of unhandled exceptions.