Build 2 Mirth Connect Channels - mirth

I'm totally new to Mirth Connect and interface work.
The requirements are as follows:
Build an HL7 ADT interface for updating patient location in the RIS (Radiology Information System) whenever a patient's hospitalization changes.
Interface Triggers:
1. Patient is newly admitted to the ED
2. Patient is admitted to the hospital
3. Patient is transferred from one unit to another
4. Patient is discharged from the ED or the hospital
Mirth Channel: it is expected to receive the following information, for the triggers above, from the Patient table and the visit/transfer log table of a database:
- Patient ID
- Patient Triple Name (FName, MName, LName)
- Patient Date of Birth
- Patient Marital Status
- Patient Age
- Patient Full Address
- Patient Phone
- Current Bed
- Current Unit
- Current Admission Date
- Previous Bed
- Previous Unit
- Visit Type (IN, ED, OUT)
- Event Type
- Event Date
Analyze the three RIS Inbound ADT interfaces for commonalities:
- ADT A01 (Admit/Visit notification)
- ADT A02 (Transfer a patient)
- ADT A07 (Change an inpatient to an outpatient)
Build an interval-based channel that picks up the admissions/transfers/discharges log and issues HL7 messages to the RIS.
Data Preparation:
- Build a dummy SQL database that will simulate the real HIS DB
- Build a structured table with the model referenced in 2.1.2 as the minimum set of columns. Note that you may need to add additional fields. (A minimal setup sketch follows just below.)
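A minimal setup sketch, assuming a local MySQL instance: the snippet below could be run once from a Mirth JavaScript context (for example a channel's deploy script) to create and seed a dummy event-log table covering the columns listed above, plus a sent_flag column for acknowledgement tracking. The driver class, connection URL, credentials, and all table/column names are assumptions, not part of the original requirements.
// One-off setup sketch: create and seed a dummy HIS event-log table in MySQL.
// Driver class, URL, credentials, and table/column names are all assumptions.
var dbConn = DatabaseConnectionFactory.createDatabaseConnection(
    'com.mysql.cj.jdbc.Driver',
    'jdbc:mysql://localhost:3306/his_dummy',
    'mirth',
    'mirth');
try {
    dbConn.executeUpdate(
        "CREATE TABLE IF NOT EXISTS patient_event_log (" +
        " event_id INT AUTO_INCREMENT PRIMARY KEY," +
        " patient_id VARCHAR(20), first_name VARCHAR(50), middle_name VARCHAR(50), last_name VARCHAR(50)," +
        " dob DATE, marital_status VARCHAR(5), age INT, address VARCHAR(200), phone VARCHAR(20)," +
        " current_bed VARCHAR(10), current_unit VARCHAR(30), admission_date DATETIME," +
        " previous_bed VARCHAR(10), previous_unit VARCHAR(30)," +
        " visit_type VARCHAR(3), event_type VARCHAR(5), event_date DATETIME," +
        " sent_flag TINYINT DEFAULT 0)"); // sent_flag supports the 'not sent yet' filter
    dbConn.executeUpdate(
        "INSERT INTO patient_event_log (patient_id, first_name, middle_name, last_name, dob, marital_status," +
        " age, address, phone, current_bed, current_unit, admission_date, visit_type, event_type, event_date)" +
        " VALUES ('10006579', 'DONALD', 'D', 'DUCK', '1924-10-10', 'M', 96," +
        " '111 DUCK ST, FOWL, CA', '8885551212', 'B12', 'ED', NOW(), 'ED', 'A01', NOW())");
} finally {
    dbConn.close();
}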
Mirth Channel RIS Simulator:
- Create a channel that listens on a specific port so as to simulate the Radiology Information System (RIS)
- The channel will respond with a successful acknowledgement regardless of the message content
- The acknowledgement message will echo the same “Message Control ID” that was received; every HL7 message carries a unique message control ID (see the sketch just below)
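The simplest route for the simulator is a TCP Listener channel with the HL7 v2.x inbound data type and Mirth's auto-generated response, which already echoes MSH-10. If you prefer to build the acknowledgement by hand, a hedged sketch for a JavaScript step on that channel is shown below; the field access assumes HL7 v2.x parsing, and the 'RIS_ACK' response-map key is an assumption that must match whatever the source connector is configured to respond from.
// Build an AA acknowledgement that echoes the inbound Message Control ID (MSH-10)
var inControlId  = msg['MSH']['MSH.10']['MSH.10.1'].toString();
var sendingApp   = msg['MSH']['MSH.3']['MSH.3.1'].toString();
var receivingApp = msg['MSH']['MSH.5']['MSH.5.1'].toString();
var ts = DateUtil.getCurrentDate('yyyyMMddHHmmss');
// Swap sender/receiver, always answer AA, and reuse the incoming control ID
var ack = 'MSH|^~\\&|' + receivingApp + '||' + sendingApp + '||' + ts + '||ACK|' +
          inControlId + '|T|2.4\r' +
          'MSA|AA|' + inControlId + '\r';
// Expose the ACK so the source connector can respond with it
responseMap.put('RIS_ACK', ResponseFactory.getSentResponse(ack));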
Mirth Channel ADT Interface:
- Create a channel that reads from the prepared data
- It will be interval-based; for example, every 5 minutes
- It will read from the simulated database and filter records that were not sent yet
- Choose two types of triggers to implement
- Based on the event type, formulate the respective HL7 message [Inbound guidelines are attached]
- Expect an acknowledgement message from the RIS Simulator
- Flag sent data when a successful acknowledgement is received so that it is not sent again in the next 5-minute interval (see the sketch just below)
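A minimal sketch for the last two points, assuming the sender destination's response transformer parses the RIS acknowledgement as HL7 v2.x and that the source transformer stored the row's primary key in the channel map under 'event_id'; the table and column names match the dummy-database sketch above and are likewise assumptions.
// When the RIS simulator answers AA, flag the source row so the next polling
// interval's "not sent yet" filter (e.g. WHERE sent_flag = 0) skips it
if (msg['MSA']['MSA.1']['MSA.1.1'].toString() == 'AA') {
    var dbConn = DatabaseConnectionFactory.createDatabaseConnection(
        'com.mysql.cj.jdbc.Driver',
        'jdbc:mysql://localhost:3306/his_dummy',
        'mirth',
        'mirth');
    try {
        // 'event_id' is assumed to have been placed in the channel map by the source transformer
        dbConn.executeUpdate(
            "UPDATE patient_event_log SET sent_flag = 1 WHERE event_id = " + $c('event_id'));
    } finally {
        dbConn.close();
    }
}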

From your requirements I can understand one channel (i.e., the Mirth Channel ADT Interface); I need more specificity on your other channel, the Mirth Channel RIS Simulator.
I will provide a rough code base.
You will be creating a sample SQL database; I have created a MySQL DB here.
Once you have created the DB, you need to configure the source connector in Mirth to read from it.
After this, you can build your HL7 v2 mapping in the source transformer using the following JavaScript code:
var uniqueControlID = UUIDGenerator.getUUID();
var date = DateUtil.getCurrentDate("yyyyMMdd"); // SimpleDateFormat pattern: lowercase yyyy for the calendar year
// MSH segment
tmp['MSH']['MSH.7']['MSH.7.1'] = date;
tmp['MSH']['MSH.9']['MSH.9.1'] = "ADT";
if(msg['patientinfomation_eventtype']=="IN")
{
tmp['MSH']['MSH.9']['MSH.9.2'] = "A01";
}
if(msg['patientinfomation_eventtype']=="ED")
{
tmp['MSH']['MSH.9']['MSH.9.2'] = "A02";
}
if(msg['patientinfomation_eventtype']=="OUT")
{
tmp['MSH']['MSH.9']['MSH.9.2'] = "A07";
}
tmp['MSH']['MSH.10']['MSH.10.1'] = uniqueControlID;
tmp['MSH']['MSH.11']['MSH.11.1'] = "T"; //note it can be either D,P,T,A,R,I
tmp['MSH']['MSH.12']['MSH.12.1'] = "2.4"; // Assuming we receive 2.4 version of HL7V2 message
//PID segment
tmp['PID']['PID.3']['PID.3.1'] = msg['patientinfomation_patientid'].toString(); // PID-3 = patient identifier list (PID-1 is just the set ID)
tmp['PID']['PID.5']['PID.5.1'] = msg['patientlastname'].toString(); // PID-5.1 = family (last) name
tmp['PID']['PID.5']['PID.5.2'] = msg['patientfirstname'].toString(); // PID-5.2 = given (first) name
tmp['PID']['PID.5']['PID.5.3'] = msg['patientmiddlename'].toString(); // PID-5.3 = middle name
tmp['PID']['PID.7']['PID.7.1'] = msg['patientinfomation_patientdob'].toString();
tmp['PID']['PID.13']['PID.13.1'] = msg['contactnumber'].toString();
tmp['PID']['PID.16']['PID.16.1'] = msg['maritalstatus'].toString();
Set the outbound message template of the HL7 transformer to a skeleton like this (segments in standard MSH, EVN, PID order):
MSH|^~\&|||||||^|||
EVN||||||||||||||||
PID||||||||||||||||||||
When you deploy the channel you will get the desired output.

Another method is to paste the template below into the outbound message template field and map the fields in the inbound message template to those in the outbound message template:
MSH|^~\&|AccMgr|1|||20050110045504||ADT^A01|599102|P|2.3|||
EVN|A01|20050110045502|||||
PID|1||10006579^^^1^MRN^1||DUCK^DONALD^D||19241010|M||1|111 DUCK ST^^FOWL^CA^999990000^^M|1|8885551212|8885551212|1|2||40007716^^^AccMgr^VN^1|123121234|||||||||||NO NK1

Related

Scala object to unpivot the data fields

Read RAW_us_deaths.csv and RAW_us_confirmed.csv and develop Scala objects
to unpivot the date fields from the raw files, populating metrics by date in rows for deaths
and confirmed cases with the given schema.
Merge case count & death count: join the deaths & confirmed COVID cases data sets that
were derived in the previous step, prepare one data set that holds both Case_count
and death_count for both US and global data, and write it to a local path.
Remove the records where both case_count and death_count are 0 and write the final output
to a Parquet file.

With KSQL, why does my table keep data with older ROWTIME and discard updates with newer ROWTIME?

I have a process that feeds relatively simple vehicle data into a Kafka topic. The records are keyed by registration and the values contain things like latitude/longitude etc., plus a value called DateTime, which is a timestamp based on the sensor that took the readings (not the producer or the cluster).
My data arrives out of order in general, and especially if I keep pumping the same test data set into the vehicle-update-log topic over and over. My data set contains two records for the vehicle I'm testing with.
My expectation is that when I do a select on the table, it will return one row with the most recent data based on the ROWTIME of the records. (I've verified that the ROWTIME is getting set correctly.)
What happens instead is that the result has both rows (for the same primary KEY) and the last value has the oldest ROWTIME.
I'm confused; I thought ksqlDB would keep only the most recent update. Must I now write additional logic on the client side to pick the latest of the data I get?
I created the table like this:
CREATE TABLE vehicle_updates
(
Latitude DOUBLE,
Longitude DOUBLE,
DateTime BIGINT,
Registration STRING PRIMARY KEY
)
WITH
(
KAFKA_TOPIC = 'vehicle-update-log',
VALUE_FORMAT = 'JSON_SR',
TIMESTAMP = 'DateTime'
);
Here is my query:
SELECT
registration,
ROWTIME,
TIMESTAMPTOSTRING(ROWTIME, 'yyyy-MM-dd HH:mm:ss.SSS', 'Africa/Johannesburg') AS rowtime_formatted
FROM vehicle_updates
WHERE registration = 'BT66MVE'
EMIT CHANGES;
Results while no data is flowing:
+------------------------------+------------------------------+------------------------------+
|REGISTRATION |ROWTIME |ROWTIME_FORMATTED |
+------------------------------+------------------------------+------------------------------+
|BT66MVE |1631532052000 |2021-09-13 13:20:52.000 |
|BT66MVE |1631527147000 |2021-09-13 11:59:07.000 |
Here's the same query, but I'm pumping the data set into the topic again while the query is running. I'm surprised to be getting the older record as updates.
Results while feeding data:
+------------------------------+------------------------------+------------------------------+
|REGISTRATION |ROWTIME |ROWTIME_FORMATTED |
+------------------------------+------------------------------+------------------------------+
|BT66MVE |1631532052000 |2021-09-13 13:20:52.000 |
|BT66MVE |1631527147000 |2021-09-13 11:59:07.000 |
|BT66MVE |1631527147000 |2021-09-13 11:59:07.000 |
What gives?
In the end, it's an issue in Kafka Streams that is not easy to resolve: https://issues.apache.org/jira/browse/KAFKA-10493 (we are already working on a longer-term solution for it, though).
While event-time based processing is a central design pillar, there are some gaps that still need to be closed.
The underlying issue is that Kafka itself was originally designed based on log-append order only. Timestamps were added later (in the 0.10 release). There are still some gaps today (e.g., https://issues.apache.org/jira/browse/KAFKA-7061) in which "offset order" is dominant. You are hitting one of those cases.
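If you do end up picking the latest record on the client side, a small hedged sketch in plain JavaScript (the field names are assumed to match the query's result shape) that keeps only the row with the greatest ROWTIME per registration could look like this:
// Keep only the newest row per registration, comparing ROWTIME explicitly
// instead of relying on the table's update order
function latestPerRegistration(rows) {
    var latest = {};
    rows.forEach(function (row) {
        var current = latest[row.REGISTRATION];
        if (!current || row.ROWTIME > current.ROWTIME) {
            latest[row.REGISTRATION] = row;
        }
    });
    return Object.keys(latest).map(function (key) { return latest[key]; });
}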

Mirth Database Reader failed to process row retrieved from the database in channel (index out of range)?

I have a Mirth (v3.10) Database Reader channel source that grabs some test records (from an SQL Server source) using the query...
select *
from [mydb].[dbo].[lab_test_MIRTHTEST_001]
where orc_2_1_placer_order_number
in (
'testid_001', 'testid_002', 'testid_003'
)
Even though the channel appears to function properly and messages are getting written to the channel destination, I am seeing SQL errors in the server logs in the dashboard when deploying the channel:
[2020-12-16 08:16:28,266] ERROR (com.mirth.connect.connectors.jdbc.DatabaseReceiver:268): Failed to process row retrieved from the database in channel "MSSQL2SFTP_TEST"
com.mirth.connect.connectors.jdbc.DatabaseReceiverException: com.microsoft.sqlserver.jdbc.SQLServerException: The index 1 is out of range.
at com.mirth.connect.connectors.jdbc.DatabaseReceiverQuery.runPostProcess(DatabaseReceiverQuery.java:233)
at com.mirth.connect.connectors.jdbc.DatabaseReceiver.processRecord(DatabaseReceiver.java:260)
...
I can run this query fine in SQL Server Management Studio itself (and the messages seem to be transmitting fine), so I'm not sure why this error is popping up, but I am concerned there is something I'm missing here.
Does anyone with more experience know what is going on here, and how to fix it?
The issue looks to be in the post-process SQL section of the Database Reader, which is consistent with the messages themselves appearing to work.
Did you intend to enable the post-process section at the bottom of your source tab?
Kindly share the code that you are using to process data in the result set. In the meantime, you can consider the code below as a starting point. You can place it in a JavaScript transformer step in the source connector of your channel.
// Declare variables to hold column values returned from the result set
var variable1;
var variable2;
// Define the SQL read command
var query = "select * from [mydb].[dbo].[lab_test_MIRTHTEST_001]";
query += " where orc_2_1_placer_order_number in";
query += " ('testid_001', 'testid_002', 'testid_003')";
// dbconn is a database connection object (not a string); the driver class, URL,
// and credentials below are placeholders for your SQL Server instance
var dbconn = DatabaseConnectionFactory.createDatabaseConnection(
    'net.sourceforge.jtds.jdbc.Driver',
    'jdbc:jtds:sqlserver://localhost:1433/mydb',
    'username',
    'password');
var result = dbconn.executeCachedQuery(query);
// Loop through the results (note: only the last row's values survive the loop)
while (result.next())
{
    variable1 = result.getString("variable1");
    variable2 = result.getString("variable2");
}
dbconn.close();
// Optionally place the returned values in the channel map for use later
$c('variable1', variable1);
$c('variable2', variable2);

How to do pattern matching using match_recognize in Esper for unknown properties of events in event stream?

I am new to Esper and I am trying to filter event properties from event streams that have many events arriving at high velocity.
I am using Kafka to send a CSV row by row from the producer to the consumer, and at the consumer I am converting those rows to HashMaps and creating events at run-time in Esper.
For example, I have the events listed below, which arrive every second.
WeatherEvent Stream:
E1 = {hum=51.0, precipi=1, precipm=1, tempi=57.9, icon=clear, pickup_datetime=2016-09-26 02:51:00, tempm=14.4, thunder=0, windchilli=, wgusti=, pressurei=30.18, windchillm=}
E2 = {hum=51.5, precipi=1, precipm=1, tempi=58.9, icon=clear, pickup_datetime=2016-09-26 02:55:00, tempm=14.5, thunder=0, windchilli=, wgusti=, pressurei=31.18, windchillm=}
E3 = {hum=52, precipi=1, precipm=1, tempi=59.9, icon=clear, pickup_datetime=2016-09-26 02:59:00, tempm=14.6, thunder=0, windchilli=, wgusti=, pressurei=32.18, windchillm=}
Where E1, E2...EN are multiple events in WeatherEvent
In the above events I just want to filter out properties like hum, tempi, tempm and pressurei, because they change as time proceeds (during 4 seconds), and I don't want to care about the properties which are not changing at all or are changing really slowly.
Using the EPL query below, I am able to filter out properties like tempm, hum, etc.
#Name('Out') select * from weatherEvent.win:time(10 sec)
match_recognize (
partition by pickup_datetime?
measures A.tempm? as a_temp, B.tempm? as b_temp
pattern (A B)
define
B as Math.abs(B.tempm? - A.tempm?) > 0
)
The problem is that I can only do it when I specify tempm or hum in the query for pattern matching.
But as the data comes from CSV and has high dimensionality (many features), I don't know the properties of the events beforehand.
I want Esper to automatically detect features/properties (at run-time) that are changing and filter them out, without me specifying the event properties.
Any ideas how to do it? Is that even possible with Esper? If not, can I do it with other CEP engines like Siddhi or Oracle CEP?
You may add a "?" to the event property name to get the value of properties that are not known at the time the event type is defined. This is called a dynamic property (see the documentation). The type returned is Object, so you need to downcast.

Oracle GoldenGate adapter for Kafka - JSON message contents

In my GoldenGate for Big Data Kafka setup, when I try to update a record I am getting only the updated column and the primary key column in the "after" part of the JSON output:
{"table":"MYSCHEMATOPIC.PASSPORTS","op_type":"U","op_ts":"2018-03-17 13:57:50.000000","current_ts":"2018-03-17T13:57:53.901000","pos":"00000000030000010627","before":{"PASSPORT_ID":71541893,"PPS_ID":71541892,"PASSPORT_NO":"1234567","PASSPORT_NO_NUMERIC":241742,"PASSPORT_TYPE_ID":7,"ISSUE_DATE":null,"EXPIRY_DATE":"0060-12-21 00:00:00","ISSUE_PLACE_EN":"UN-DEFINED","ISSUE_PLACE_AR":"?????? ????????","ISSUE_COUNTRY_ID":203,"ISSUE_GOV_COUNTRY_ID":203,"IS_ACTIVE":1,"PREV_PASSPORT_ID":null,"CREATED_DATE":"2003-06-08 00:00:00","CREATED_BY":-9,"MODIFIED_DATE":null,"MODIFIED_BY":null,"IS_SETTLED":0,"MAIN_PASSPORT_PERSON_INFO_ID":34834317,"NATIONALITY_ID":590},
"after":{"PASSPORT_ID":71541893,"NATIONALITY_ID":589}}
I want the "after" part of my JSON output to show all columns.
How do I get all columns in the "after" part?
gg.handlerlist = kafkahandler
gg.handler.kafkahandler.type=kafka
gg.handler.kafkahandler.KafkaProducerConfigFile=custom_kafka_producer.properties
#The following resolves the topic name using the short table name
gg.handler.kafkahandler.topicMappingTemplate=passports
gg.handler.kafkahandler.format=json
gg.handler.kafkahandler.BlockingSend =false
gg.handler.kafkahandler.includeTokens=false
gg.handler.kafkahandler.mode=op
#gg.handler.kafkahandler.format.insertOpKey=I
#gg.handler.kafkahandler.format.updateOpKey=U
#gg.handler.kafkahandler.format.deleteOpKey=D
#gg.handler.kafkahandler.format.truncateOpKey=T
#gg.handler.kafkahandler.format.includeColumnNames=TRUE
goldengate.userexit.timestamp=utc
goldengate.userexit.writers=javawriter
javawriter.stats.display=TRUE
javawriter.stats.full=TRUE
gg.log=log4j
gg.log.level=info
gg.report.time=30sec
Try using the Kafka Connect handler instead - this includes the full payload. This article goes through the setup process.
This issue was fixed by adding the change below on the GoldenGate side:
ADD TRANDATA table_name ALLCOLS