Is it possible to prune messages that have been filtered in the Mirth database?
Under Message Storage in the Summary tab of the receiving TCP Listener channel, I've tried checking the "Remove content on completion" and "Filtered only" checkboxes, but this doesn't work.
Also, the Data Pruner in Settings does not seem to prune filtered messages.
We are using Google's streaming pull subscriber. The design is as follows:
sending a file from the FE (frontend) to the BE (backend)
the BE converting that file to a ByteArray and publishing it to a Pub/Sub topic as a message (so the ByteArray goes as the message)
the topic delivering that message to the subscriber, and the subscriber converting the ByteArray back to a file
the subscriber sending that converted file to the tool
the tool doing some cool stuff with the file and notifying the subscriber of the status
that status going to the BE, and the BE updating the DB and sending the status to the FE
Now, in our subscriber, when we receive a message we immediately acknowledge it and remove the subscriber's listener so that we don't get any more messages.
When the tool is done with its work, it sends the status to the subscriber (we have an Express server running on the subscriber), and
after receiving the status we re-create the subscriber's listener to receive messages again.
Note:
the tool may take 1 hour or more to do its work
we are using ordering keys to properly distribute messages to the VMs
This code is working fine, but my questions are:
is there any flaw in this (because we are removing the listener and then re-creating it, or anything like that)?
is there any better option or GCP service that would best fit this design?
are there any improvements to the code?
EDIT:
Removed code sample
I would say that there are several parts of this design that are sub-optimal. First of all, acking a message before you have finished processing it means you risk message loss. What happens if your tool or subscriber crashes after acking the message, but before processing has completed? This means when the processes start back up, they will not receive the message again. Are you okay with requests from the frontend possibly never being processed? If not, you'll want to ack after processing is completed, or--given that your processing takes so long--persist the request to a database or to some storage and then acknowledge the message. If you are going to have to persist the file somewhere else anyway, you might want to consider taking Pub/Sub out of the picture and just writing the file to storage like GCS and then having your subscribers instead read out of GCS directly.
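As a rough illustration of the "persist, then ack" idea with the Node.js clients (the subscription name, bucket name, and object path below are hypothetical):

```typescript
import { PubSub, Message } from '@google-cloud/pubsub';
import { Storage } from '@google-cloud/storage';

const pubsub = new PubSub();
const storage = new Storage();

pubsub.subscription('work-sub').on('message', async (message: Message) => {
  try {
    // Durably store the payload first, so a crash after this point
    // cannot lose the request.
    await storage
      .bucket('incoming-files')            // hypothetical bucket
      .file(`requests/${message.id}`)      // hypothetical object path
      .save(message.data);
    message.ack();   // safe to ack: the request now lives in GCS
  } catch (err) {
    message.nack();  // let Pub/Sub redeliver the message
  }
});
```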
Secondly, stopping the subscriber upon each message being received is an anti-pattern. Your subscriber should be receiving and processing each message as it arrives. If you need to limit the number of messages being processed in parallel, use message flow control.
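With the Node.js client, flow control is just a subscriber option; a minimal sketch (the subscription name and worker function are hypothetical):

```typescript
import { PubSub, Message } from '@google-cloud/pubsub';

// Hypothetical long-running worker, for illustration only.
async function doLongRunningWork(message: Message): Promise<void> {
  /* ... */
}

const subscription = new PubSub().subscription('work-sub', {
  flowControl: {
    maxMessages: 1,             // hand the callback at most one message at a time
    allowExcessMessages: false, // don't buffer more than the limit locally
  },
});

subscription.on('message', async (message: Message) => {
  await doLongRunningWork(message); // no need to tear down the listener
  message.ack();
});
```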
Also, ordering keys aren't really a way to "properly distribute messages to VMs." Ordering keys are only a means to ensure ordered delivery. There are no guarantees that the messages for the same ordering key will continually go to the same subscriber client. In fact, if you shut down the subscriber client after receiving each message, then another subscriber could receive the next message for the ordering key, since you've acked the earlier message. If all you mean by "properly distribute messages" is that you want the messages delivered in order, then this is the correct way to use ordering keys.
You say you have a subscription per client; whether or not that is the right thing to do depends on what you mean by "client." If client means "user of the front end," then I imagine you plan to have a different topic per user as well. If so, then you need to keep in mind the 10,000-topics-per-project limit. If you mean that each VM has its own subscription, then note that each VM is going to receive every message published to the topic. If you only want one VM to receive each message, then you need to use the same subscription across all VMs.
In general, also keep in mind that Cloud Pub/Sub has at-least-once delivery semantics. That means that even an acknowledged message could be redelivered, so you do need to be prepared to handle duplicate message delivery.
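For instance, a sketch of guarding against duplicate deliveries (this assumes you attach your own requestId attribute when publishing; a real implementation would check a database rather than an in-memory Set):

```typescript
import { PubSub, Message } from '@google-cloud/pubsub';

const processed = new Set<string>(); // in production, a durable store keyed by request ID

new PubSub().subscription('work-sub').on('message', (message: Message) => {
  const requestId = message.attributes.requestId ?? message.id; // 'requestId' is a hypothetical attribute
  if (processed.has(requestId)) {
    message.ack(); // duplicate delivery of an already-handled request; just ack it
    return;
  }
  processed.add(requestId);
  // ... handle the request here ...
  message.ack();
});
```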
I need to send file data to an FTP server using a Kafka Connect sink. After an entire file is received by the server, I also have to send an acknowledgment to change the status of those tasks in the DB.
I would like to know what's the best way to go here. I initially thought about creating a custom FTP Kafka Connect sink which would also change the task status in the DB.
Is it the best way to go, or are there other options here?
What's the requirement driving the need to update the status in the task database?
Off the top of my head, you could write a custom application that tracks the offset of the FTP sink vs available messages, and updates a database as needed.
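One way to sketch that with kafkajs (the broker address, topic, and connector name here are assumptions; Kafka Connect normally names a sink's consumer group connect-<connector name>, and the fetchOffsets result shape below assumes kafkajs v2):

```typescript
import { Kafka } from 'kafkajs';

const kafka = new Kafka({ clientId: 'sink-lag-check', brokers: ['localhost:9092'] });
const admin = kafka.admin();

// Compare the topic's end offsets with what the sink's consumer group has
// committed; a lag of 0 means every record has been delivered to the FTP sink.
async function sinkLag(topic: string, groupId: string): Promise<number> {
  await admin.connect();
  try {
    const latest = await admin.fetchTopicOffsets(topic);
    const committed = await admin.fetchOffsets({ groupId, topics: [topic] });
    const groupPartitions = committed.find((t) => t.topic === topic)?.partitions ?? [];

    let lag = 0;
    for (const p of latest) {
      const groupOffset = groupPartitions.find((g) => g.partition === p.partition)?.offset ?? '0';
      lag += Number(p.offset) - Math.max(Number(groupOffset), 0);
    }
    return lag;
  } finally {
    await admin.disconnect();
  }
}

// e.g. const lag = await sinkLag('ftp-files', 'connect-ftp-sink');
```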
Worth noting is that https://www.confluent.io/product/control-center/ can track the delivery of messages in a Kafka Connect pipeline, and alert on latencies.
I am creating a PubSub-like messaging application in which all subscribers of a particular channel will get the message if a publisher sends one to the channel; 100k is the maximum subscriber count.
Using ejabberd, what kind of performance is possible, i.e., can ejabberd handle 100k subscribers and be able to send the message to all of them?
Performance depends on many elements: payload size, push frequency, node configuration, the type of connections the online clients have (slow/fast), and machine specification.
However, you should be able to reach that level, indeed.
We use Mirth as our interface engine, with ActiveMQ and a Spring inbound listener to process messages.
Our customers reported that some of the messages are missing from the Mirth console but are found in the ActiveMQ queue and the Spring listener application.
Initially we thought that someone may have removed them manually from Mirth, but when we checked the event logs there was no sign of a message being removed.
We found this happening on some of the messages but could not identify the cause of the issue or a pattern to the affected messages.
Has anyone faced an issue like this with the Mirth Admin console?
We have the client DB as well, but we are unable to open it except through Mirth to check whether the data is available.
I'd highly appreciate it if someone could help with this.
Thanks
I have found some channels don't display "filtered" messages properly. But I have never seen successful messages go "missing".
If you don't trust the Mirth Admin then I would recommend querying the Mirth DB.
This can be done outside the confines of Mirth provided that Mirth is writing to an external DB such as MS-SQL Server.
The data you get from it is VERY rich, but if you are sending thousands of messages an hour (or more) you'll probably want to limit the time range you search. Free-text searching like
select * from message m where m.raw_data like ('%needle%')
is NOT recommended and will take a long time to execute.
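For example, a sketch of a narrower query from a small Node script (this assumes an MS-SQL backend and the 2.x schema where the message table has channel_id and date_created columns; the connection details are placeholders):

```typescript
import sql from 'mssql';

// Query recent messages for one channel instead of free-text scanning raw_data.
async function recentMessages(channelId: string) {
  const pool = await sql.connect({
    server: 'mirth-db-host',   // placeholder connection details
    database: 'mirthdb',
    user: 'readonly',
    password: 'secret',
    options: { trustServerCertificate: true },
  });
  try {
    const result = await pool
      .request()
      .input('channelId', sql.VarChar, channelId)
      .query(`
        select id, date_created, status
        from message
        where channel_id = @channelId
          and date_created >= dateadd(hour, -1, getdate())
      `);
    return result.recordset;
  } finally {
    await pool.close();
  }
}
```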
Being able to search Mirth via the DB has opened up a ton of analysis for us that we don't have through the admin interface.
Just to chime in on this question: if you are running quite a number of channels, or if you have quite a high volume of messages, Mirth may have trouble keeping up with its database updates due to row/table locks and inefficient conversions or data types (this should be resolved now).
We do, however, at peak times every so often, see a message or two processed through the engine with log entries indicating it was unable to insert the message and it was rolled back. I would say we have around 10 per year like that. Hopefully this is a non-issue in Mirth 3 with the new backend...
What do Source Connector Inbound, Source Connector Outbound, and Destination 1 Outbound mean in Mirth? And in which cases are these used?
I searched on the Mirth forum but didn't get a satisfactory answer.
I'm unable to make sense of these 3 concepts.
Any help is appreciated.
I'm not sure what the third concept is, as you repeated Source Connector Inbound twice.
In general, a Mirth Channel represents a transformation of an incoming message to one or more actions and/or outgoing messages. A channel consists of a single Source Connector and one or more Destination Connectors.
The Source Connector defines how you receive your inbound message. It could be a traditional LLP listener that is receiving messages from a client on a TCP connection, it could be a file reader which monitors an FTP site for uploaded message batches, a database reader which monitors for changed records in an EMR system, etc.
The Destination Connectors define what you do with the data or message once you have it. Destination Connectors can be given any name you choose, but the default name for the first Destination Connector in a Channel is always "Destination 1". Destination Connectors allow you to do things like save message data to a database, generate a new or transformed message that is based on the incoming message and send it by a variety of mechanisms, create a PDF or HTML document, etc.
This blog post is four years old but still provides a useful introduction to the very basics.