It seems that the QuickFIX/J filestore messages are stored as unreadable characters. What's the best tool to read them?
Also, is there any design that uses the filestore as a backup without manually requesting a resend of some of the earlier messages?
Where can I configure how much message data I want to save in the filestore?
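For context, the "unreadable characters" are most likely just the SOH (ASCII 0x01) field delimiters of the raw FIX messages that the file store appends to its .body file under FileStorePath, so any tool that swaps them for a visible character makes the file readable. As far as I know, whether messages are stored at all is governed by the PersistMessages session setting rather than a size limit. A minimal sketch, assuming the default store layout and with the file path passed as an argument:

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Dumps a QuickFIX/J ".body" store file as readable FIX.
    // The SOH (0x01) field delimiters of raw FIX are replaced with '|'
    // so the content can be viewed in any text editor.
    public class FixBodyDump {
        public static void main(String[] args) throws Exception {
            // e.g. data/FIX.4.4-SENDER-TARGET.body (name depends on FileStorePath and session)
            byte[] raw = Files.readAllBytes(Path.of(args[0]));
            String text = new String(raw, StandardCharsets.ISO_8859_1)
                    .replace('\u0001', '|');              // SOH -> visible pipe
            // Each FIX message starts with "8=FIX", so break lines there for readability.
            System.out.println(text.replace("|8=FIX", "|\n8=FIX"));
        }
    }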
I am using the AuditSink object in order to get the audit logs.
I didn't find any documentation/API regarding a retry option for audit logs.
What happens in case the web server / service is not available?
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#auditsink-v1alpha1-auditregistration-k8s-io
The fine source implies there is a retry mechanism, and thus the need for configuring its backoff, but aside from whatever you can find by surfing around in the source, I don't know that any promises have been made about deliverability. If you need such guarantees, you may be happier sending audit events to stdout or to disk and then egressing them the way you would with any other log content.
I'm interested in publishing a file's contents over a Kafka channel in real time (I can do this in Python), but I'm wondering what strategy might be effective to prevent sending duplicate data in case my publisher crashes and I need to restart it. Is there anything in Kafka that can help with this directly, or must I explicitly track the file offset I've published so far?
I suppose another way might be for the publisher to bootstrap the data already published and count the bytes received, then file-seek and recover?
Are there any existing scripts or apps that already handle this that I could perhaps leverage instead?
Instead of publishing it yourself, I strongly recommend using Kafka Connect. In addition to not having to write custom code, the connectors can also support exactly-once delivery for you.
More details about connectors can be found here: https://www.confluent.io/product/connectors/
You might want to check Kafka's log compaction feature. It does the deduplication for you, provided you have a unique key for all the duplicate messages.
https://kafka.apache.org/documentation/#compaction
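If you do end up publishing the file yourself, here is a rough sketch of the "explicitly track the file offset" idea from the question (topic name, state file and line-based framing are all placeholders): persist the byte offset of the last acknowledged record, and seek past it on restart. Keying each record by file name plus offset also gives log compaction, or a consumer-side cache, something stable to deduplicate on if a crash causes a small replay.

    import org.apache.kafka.clients.producer.*;
    import org.apache.kafka.common.serialization.StringSerializer;

    import java.io.RandomAccessFile;
    import java.nio.file.*;
    import java.util.Properties;

    // Sketch: publish a file line by line, persisting the byte offset of the last line
    // that Kafka acknowledged, so a restart resumes where the previous run left off.
    public class FilePublisher {
        public static void main(String[] args) throws Exception {
            Path input = Path.of("data.log");            // file to publish (placeholder)
            Path state = Path.of("data.log.offset");     // where progress is persisted (placeholder)
            long offset = Files.exists(state)
                    ? Long.parseLong(Files.readString(state).trim()) : 0L;

            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true"); // no duplicates from broker retries

            try (Producer<String, String> producer = new KafkaProducer<>(props);
                 RandomAccessFile file = new RandomAccessFile(input.toFile(), "r")) {
                file.seek(offset);                        // skip what was already published
                String line;
                while ((line = file.readLine()) != null) {
                    long nextOffset = file.getFilePointer();
                    // Key = file + start offset: stable across restarts, so a replay of the
                    // same chunk can be deduplicated (e.g. by log compaction or a consumer).
                    String key = input + ":" + offset;
                    producer.send(new ProducerRecord<>("file-lines", key, line)).get(); // wait for ack
                    Files.writeString(state, Long.toString(nextOffset));  // persist progress
                    offset = nextOffset;
                }
            }
        }
    }

Note that this is still only at-least-once: a crash between the acknowledged send and the state write replays one record, which is exactly what the stable key is there to absorb.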
I need to send file data to an FTP server using a Kafka Connect sink. After an entire file is received by the server, I also have to send an acknowledgment to change the status of those tasks in the DB.
I would like to know the best way to go here; I initially thought about creating a custom FTP Kafka Connect sink which would also change the task DB status.
Is that the best way to go, or are there other options here?
What's the requirement driving the need to update the status in the task database?
Off the top of my head, you could write a custom application that tracks the offset of the FTP sink vs available messages, and updates a database as needed.
Worth noting is that https://www.confluent.io/product/control-center/ can track the delivery of messages in a Kafka Connect pipeline, and alert on latencies.
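If you go the custom-application route, one hedged sketch of the idea (the connector name, topic and database update are placeholders; by default a sink connector's consumer group is named connect-<connector-name>): compare the sink's committed offsets against the topic's end offsets, and flip the task status in your DB once they match.

    import org.apache.kafka.clients.admin.*;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    import java.util.Map;
    import java.util.Properties;
    import java.util.stream.Collectors;

    // Sketch: decide whether the FTP sink connector has consumed everything that was
    // produced, by comparing its committed offsets with the topic end offsets.
    public class SinkProgressChecker {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            try (Admin admin = Admin.create(props)) {
                // Sink connectors use a consumer group named "connect-<connector name>" by default.
                Map<TopicPartition, OffsetAndMetadata> committed = admin
                        .listConsumerGroupOffsets("connect-ftp-sink")
                        .partitionsToOffsetAndMetadata().get();

                Map<TopicPartition, OffsetSpec> latest = committed.keySet().stream()
                        .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
                Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> end =
                        admin.listOffsets(latest).all().get();

                boolean caughtUp = committed.entrySet().stream()
                        .allMatch(e -> e.getValue().offset() >= end.get(e.getKey()).offset());

                if (caughtUp) {
                    // Placeholder: update the task status in your database here,
                    // e.g. via JDBC: UPDATE tasks SET status = 'DELIVERED' WHERE ...
                    System.out.println("Sink has caught up; mark tasks as delivered.");
                }
            }
        }
    }

Keep in mind "caught up" only means the connector has committed the offsets; it doesn't by itself prove the bytes landed on the FTP server, which is the kind of guarantee a custom sink (or Control Center monitoring) would give you more directly.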
We use Mirth as our interface engine, with ActiveMQ and a Spring inbound listener to process messages.
Our customers reported that some of the messages are missing from the Mirth console but are found in the ActiveMQ queue and the Spring listener application.
Initially we thought that someone may have removed them manually from Mirth, but when we checked the event logs there was no sign of a message being removed.
We found this happening for some of the messages but could not identify the cause of the issue or a pattern to the affected messages.
Has anyone faced an issue like this with the Mirth Administrator console?
We have the client DB as well but are unable to open it except through Mirth to check whether the data is available.
I would highly appreciate it if someone could help with this.
Thanks
I have found some channels don't display "filtered" messages properly. But I have never seen successful messages go "missing".
If you don't trust the Mirth Admin then I would recommend querying the Mirth DB directly.
This can be done outside the confines of Mirth provided that Mirth is writing to an external DB such as MS-SQL Server.
The data you get from it is VERY rich, but if you are sending thousands of messages an hour (or more) you'll probably want to limit the time range you search. Free-text searching like
select * from message m where m.raw_data like ('%needle%')
is NOT recommended and will take a long time to execute.
Being able to search Mirth via the DB has opened up a ton of analysis for us that we don't have through the admin interface.
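To make the "limit the time range" advice concrete, here is a minimal JDBC sketch. The connection string and credentials are placeholders, and the table/column names (message, date_created, raw_data) follow the single-table schema the example query above assumes; adjust them to whatever your Mirth version actually writes, since newer releases use a different backend layout.

    import java.sql.*;

    // Sketch: query the Mirth backing database directly (MS-SQL here), restricting the
    // free-text search to a time window so it does not scan the whole message table.
    public class MirthMessageSearch {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://dbhost;databaseName=mirthdb";  // placeholder connection
            String sql = "SELECT message_id, channel_id, date_created "
                       + "FROM message "
                       + "WHERE date_created BETWEEN ? AND ? "
                       + "AND raw_data LIKE ?";

            try (Connection conn = DriverManager.getConnection(url, "mirth", "secret");
                 PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setTimestamp(1, Timestamp.valueOf("2020-06-01 00:00:00"));
                ps.setTimestamp(2, Timestamp.valueOf("2020-06-01 06:00:00"));
                ps.setString(3, "%needle%");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.printf("%d %s %s%n",
                                rs.getLong("message_id"),
                                rs.getString("channel_id"),
                                rs.getTimestamp("date_created"));
                    }
                }
            }
        }
    }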
Just to chime in on this question: if you are running quite a number of channels or have quite a high volume of messages, Mirth may have trouble keeping up with its database updates due to row/table locks and inefficient conversions or data types (this should be resolved now).
We do, however, at peak times every so often, see a message or two processed through the engine with log entries indicating it was unable to insert the message and it was rolled back. I would say we have around 10 per year like that. Hopefully this is a non-issue in Mirth 3 with the new backend.
It seems that MSMQ doesn't use any database management system to manage messages.
How does MSMQ manage messages?
Does it store the messages in flat files?
I'm trying to implement a messages management system.
MSMQ uses flat files located in %windir%\system32\msmq.
If you want to implement your own queueing, I suggest you take a look at Ayende's blog post on queueing
It stores them as files on disk.
If you want to manage them, use the System.Messaging API.