I'm working on a project that sends us CDA documents, so I have to parse them and extract the data using Mirth Connect as the interface engine, then save it in Mirth Results (a provider portal). Any idea of the best way to approach this, i.e., how to configure or code a channel in Mirth that loads the content of a CCD document, extracts fields from it, and populates the channel variables map?
I happened to come across this question. I think you will have gotten the answer by now; anyway, let me share what I have, as it may help you in the future.
The CDA document that you fetch is basically parsed as an XML document. You can either use the MDHT libraries or simple JavaScript, which the Mirth tool supports.
It is not always mandatory to go for external libraries. I have worked with the CCDA document structure, and it is parsable with the JavaScript supported by Mirth.
It depends on the process you follow. If you are parsing only one CDA document, fetch it in the inbound template; the CDA document will contain a lot of sections, like patient demographics, vital signs, and other fields. To provide a generalized solution, we have to loop through the segments rather than referring to a fixed index inside the array.
Example of looping through the care plan section:
function parseCarePlan(section) {
    var careplan = [];
    var entries = section['entry'];
    for (var j = 0; j < entries.length(); j++) {
        var entry = entries[j];
        var care = {};
        // attributes are read with E4X '@' syntax on the parsed XML
        care.date = entry['procedure']['effectiveTime']['center']['@value'].toString();
        care.text = entry['procedure']['code']['text'].toString();
        care.code = entry['procedure']['code']['@code'].toString();
        careplan.push(care);
    }
    return careplan;
}
We then create JSON data from the XML (CDA) and insert the resulting JSON objects into the database.
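For example, a minimal sketch of the database step in a Mirth transformer or destination script; the JDBC driver, URL, credentials, and the care_plan table are all placeholders to swap for your portal database:

// Placeholder connection details; use your own driver and JDBC URL.
var dbConn = DatabaseConnectionFactory.createDatabaseConnection(
    'org.postgresql.Driver', 'jdbc:postgresql://localhost:5432/portal', 'user', 'password');
try {
    // 'section' is the care plan section pulled out of msg, as in parseCarePlan above
    var careplan = parseCarePlan(section);
    for (var i = 0; i < careplan.length; i++) {
        var params = new java.util.ArrayList();
        params.add(careplan[i].date);
        params.add(careplan[i].text);
        params.add(careplan[i].code);
        dbConn.executeUpdate('INSERT INTO care_plan (care_date, care_text, care_code) VALUES (?, ?, ?)', params);
    }
} finally {
    dbConn.close();
}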
If you have a license for the Mirth Results software you will have a support contract to help you answer questions like this. In fact the Mirth Results software has very good native support for CCDA documents. Mirth did very well at Connectathon in 2014 with their CCDA library.
You can use the MDHT library (https://www.projects.openhealthtools.org/sf/projects/mdht/) to parse CCDA. Create a jar for parsing your CCD document and call it from the Mirth Connect JavaScript: a public method that accepts the document and returns JSON as the response.
It's working for me.
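For reference, a minimal sketch of what that call could look like from a Mirth transformer step; the class com.example.CcdParser and its parse method are hypothetical stand-ins for whatever your jar exposes:

// Hypothetical parser class from a jar placed in Mirth's custom-lib
// directory (or attached as a channel library resource).
var parser = new Packages.com.example.CcdParser();

// Pass the raw CCD XML received by the channel and get JSON back.
var json = parser.parse(connectorMessage.getRawData());

// Parse the JSON string and stash values in the channel map.
var ccd = JSON.parse(json);
channelMap.put('patientName', ccd.patient.name); // field path is an assumption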
I am new to MongoDB, and I have the following issue in a web application we are currently developing.
We use MongoDB to store the application's data.
And we have an API where we search for documents via text search.
As an example: if the user types “New York”, the request should return all the data available in the collection for the keyword “New York”. (We call the API for each letter typed.) We have nearly 200,000 documents in the DB, and a search returns nearly 4,000 of them for some keywords. We tried limiting the results to 5, so it returns the top 5 documents but not the other available data. Without the limit it returns hundreds or thousands of documents, as mentioned, and that causes the request to slow down.
On the frontend we bind the search results to a dropdown (Next.js).
My question:
Is there a way to optimize the document search?
Are there any suggestions for a suitable way to implement this requirement using MongoDB and .NET 5?
Or any other implementation methods for this requirement?
The following code segment shows the query that retrieves the data for the incoming keyword.
var hotels = await _hotelsCollection
    .Find(Builders<HotelDocument>.Filter.Text(keyword))
    .Project<HotelDocument>(hotelFields)
    .ToListAsync();

var terminals = await _terminalsCollection
    .Find(Builders<TerminalDocument>.Filter.Text(keyword))
    .Project<TerminalDocument>(terminalFields)
    .ToListAsync();

var destinations = await _destinationsCollection
    .Find(Builders<DestinationDocument>.Filter.Text(keyword))
    .Project<DestinationDocument>(destinationFields)
    .ToListAsync();
So this is a classic "autocomplete" feature; there are some known best practices you should follow:
On the client side you should use a debounce; this is a must. There is no reason to execute a request for each letter typed, and this is the most critical piece of an autocomplete feature.
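A minimal debounce sketch in plain JavaScript; fetchSuggestions and the /api/search endpoint are placeholders for your own handler and route:

// Generic debounce helper: delays the call until the user pauses typing.
function debounce(fn, delayMs) {
    var timer;
    return function () {
        var args = arguments, self = this;
        clearTimeout(timer);
        timer = setTimeout(function () { fn.apply(self, args); }, delayMs);
    };
}

// Placeholder search handler wired to a placeholder API route.
var fetchSuggestions = debounce(function (keyword) {
    fetch('/api/search?keyword=' + encodeURIComponent(keyword))
        .then(function (res) { return res.json(); })
        .then(function (suggestions) {
            // render the suggestions into the dropdown here
        });
}, 300);

// e.g. searchInput.addEventListener('input', function (e) { fetchSuggestions(e.target.value); });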
On the backend things can get a bit more complicated. Naturally you want to be using a database that is suited for this task; specifically, MongoDB has a service called Atlas Search, which is a Lucene-based text search engine.
That will get you autocomplete support out of the box. However, if you don't want to make big changes to your infrastructure, here are some suggestions:
Make sure the field you're searching on has a text index (see the sketch after this list).
I see you're executing 3 separate requests; consider using something like Task.WhenAll to execute them all at once instead of one by one. I am not sure how the client side is built, but if all 3 entities are shown in the same list, then ideally you'd merge the results into one collection so you could paginate the search properly.
As mentioned in #2, you must add server-side pagination; no search engine can exist without it. I can't give specifics on how you should implement it, as you have 3 separate entities, which could potentially make the pagination implementation harder; I'd consider whether or not you need all 3 of these in the same API route.
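As a sketch in the mongo shell, assuming a hotels collection with a name field (both placeholders):

// Create a text index on the field(s) the keyword search runs against.
db.hotels.createIndex({ name: "text" });

// Return only the top matches, sorted by relevance; skip/limit gives
// simple server-side pagination instead of shipping thousands of documents.
db.hotels.find(
    { $text: { $search: "New York" } },
    { score: { $meta: "textScore" }, name: 1 }
).sort({ score: { $meta: "textScore" } }).skip(0).limit(5);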
We're experimenting with storing data in MongoDB using Node-RED. As it is now, we can store data in the database, but it seems like only msg.payload is stored (as a document), not the whole msg object, which confuses us a little...
The flow is very simple and nothing much has really been done.
We actually don't need ALL the data, but we wish to store the payload and also the metadata as a document in our collection in the database. We've tried searching for an answer to this, but couldn't find anything relevant on how to do it. Hopefully we can get some help on this forum.
Thanks in advance! (BTW, we're using mongodb3 in Node-RED to store the data.)
The node you are using is working as intended.
The normal pattern for Node-RED is that the focus of any given node is the msg.payload entry; any other msg properties are considered to be metadata.
The simplest thing here would be to use the built-in core Change node to move the other fields you are interested in into properties of the msg.payload object.
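Alternatively, if a Function node suits you better, a minimal sketch that folds the metadata you want into the payload before it reaches the MongoDB node; topic and receivedAt are just example properties:

// Function node placed before the MongoDB node: wrap the original
// payload together with the metadata you want persisted.
msg.payload = {
    data: msg.payload,      // the original payload
    topic: msg.topic,       // example metadata carried on the msg object
    receivedAt: new Date()  // example timestamp added at storage time
};
return msg;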
Can a custom REST service be used as a data source for a dojo data grid? I need to combine data from three different databases into one data grid, and the column data will need to be sortable. The response from the REST service looks correct, but I am having trouble binding the JSON data to the dojo grid columns.
Very interesting -- I tested and saw the same thing with a custom REST service -- it doesn't work when referenced as the storeComponentId of the grid.
I got it to work with the following steps:
Include two dojo modules in the page resources to set up the data store
A pass-thru script tag with code to set up a JSON data store for the grid (uses the dojo modules that the resources specify)
The grid’s store property is set to the variable set up for the data source in the tag. (storeComponentId needs an XPages component name)
Here are some snippets that show the changes:
<xp:this.resources>
<xp:dojoModule name="dojo.store.JsonRest"></xp:dojoModule>
<xp:dojoModule name="dojo.data.ObjectStore"></xp:dojoModule>
</xp:this.resources>
...
<xe:restService id="restService1" pathInfo="gridData">
...
<script>
var jsonStore = new dojo.store.JsonRest(
{target:"CURRENT_PAGE_NAME_HERE.xsp/gridData"}
);
var dataStore = new dojo.data.ObjectStore({objectStore: jsonStore});
</script>
...
<xe:djxDataGrid id="djxDataGrid1" store="dataStore">
There's more information and a full sample here:
http://xcellerant.net/dojo-data-grid-33-reading-custom-rest-service/
The easiest way is to start with the Extension Library. There's a sample for a custom JSON-REST service. While it pulls data from one source, it is easy to extend it to pull data from more than one. I strongly suggest you watch out for overall performance.
What I would do:
create a bean that spits out the JSON to the grid
test it with one database
learn about threads in XPages
use one thread each for the databases, cuts down your load time
use a ConcurrentSkipListMap with a comparator so you have the initial JSON in the sort order most useful to the user (or the one from the preferences or the last run)
Memento bene: the Java Collections Framework is your friend (a sometimes difficult one).
Let us know how it goes!
Right now I am working on a project to fetch data from a SharePoint list using the SOAP API. I tried it and successfully fetched the complete list, but now I want to fetch only the data that was updated after a specific date.
Is it possible to fetch such data using a SOAP query? I can see a last-updated field at the bottom when I view a single item. Is it somehow possible to use that field?
Yes, you can use the Web Services to do a lot of things, like filtering a list result. I don't know which language you use, but with JavaScript you can look at these two frameworks that should help you:
http://aymkdn.github.io/SharepointPlus/ : easy way to create your queries (I created it)
http://spservices.codeplex.com/ : the most popular framework, but less easy to use (that's my point of view)
You can also look at the documentation on MSDN (the param to use is query): http://msdn.microsoft.com/en-us/library/lists.lists.getlistitems.aspx
At last I found the answer.
The last-updated date and time can be retrieved from the list column "Modified".
The SOAP response will have the value in the attribute "ows_Modified".
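Putting the pieces together, a minimal SPServices sketch that filters on Modified with a CAML Geq clause; the list name and the date are placeholders:

// Fetch items modified on or after a given date via the Lists SOAP service.
$().SPServices({
    operation: "GetListItems",
    listName: "Tasks", // placeholder list name
    CAMLQuery: "<Query><Where><Geq><FieldRef Name='Modified' />" +
               "<Value Type='DateTime' IncludeTimeValue='TRUE'>2014-01-01T00:00:00Z</Value>" +
               "</Geq></Where></Query>",
    completefunc: function (xData, Status) {
        $(xData.responseXML).SPFilterNode("z:row").each(function () {
            // each row carries the value in the ows_Modified attribute
            console.log($(this).attr("ows_Modified"));
        });
    }
});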
I have a requirement where I have to read data from a local SQL Server database and first map it to an XML file provided by a third-party org that has its own database. Then, once I have a proper mapping of fields, I have to transform the data from the SQL Server database to the XML format and vice versa.
So far, I am able to connect to the SQL Server database in Mirth Connect; however, I don't know what steps are required in channels and transformers to carry out the task of reading the data, mapping the corresponding fields to the XML format provided by the third party, and finally writing to the XML file provided, and vice versa.
In short, if I can get details on creating such a channel in Mirth Connect, where I can read the SQL Server database and map the fields to the corresponding XML file, I guess I can write to it. The same applies if I go from the XML format to the SQL Server database. Can someone tell me how to accomplish this?
For database field mapping, what's the best way to map fields between two entirely different databases? Is there any tool which can help?
Also, once the task of transforming the data from one end to the other is accomplished, is there any way of validating in Mirth Connect that the data was correctly moved from one to the other?
If you want to process one row at a time, the normal Database Reader will work fine; just set the data type under Summary to XML for all steps. Set a destination Channel Writer to nowhere and run it once to see what it does in the Dashboard. You can copy and paste that as an example into your message template so you can map variables.
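A minimal sketch of a transformer step on such a channel; the element names below are placeholders for your actual column names:

// The inbound message is one row rendered as XML by the Database Reader,
// e.g. <result><col1>...</col1><col2>...</col2></result>.
channelMap.put('firstField', msg['col1'].toString());
channelMap.put('secondField', msg['col2'].toString());

// With the third party's XML set as the outbound message template, map
// the stored values onto it (tmp is the outbound template in Mirth).
tmp['FirstField'] = $('firstField');
tmp['SecondField'] = $('secondField');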
If you want to work with an entire result set at one time in the transformer steps, I find it easier to create a custom reader and append "FOR XML RAW, ELEMENTS" to the end of my Microsoft SQL query.
Something like:
//build connection
var dbConn = DatabaseConnectionFactory.createDatabaseConnection('com.microsoft.sqlserver.jdbc.SQLServerDriver','jdbc:sqlserver://servername:1433;databaseName=dbname;integratedSecurity=true;','',''); //this uses the MS JDBC driver and auth dll

//query results with XML output, via the 'FOR XML' statement at the end
var result = dbConn.executeCachedQuery("SELECT col1 AS FirstColumn, col2 AS SecondColumn FROM [dbname].[dbo].[table1] WHERE [processed] = 'False' FOR XML RAW, ELEMENTS");

//make sure we are at the top of the results
result.beforeFirst();

//wrap the XML; a namespace etc. is not required
var XMLresult = '<message>';

//the XML is broken up across several rows in one column; re-combine it
while (result.next()) {
    XMLresult += result.getString(1);
}
XMLresult += '</message>';

dbConn.close();
return XMLresult;
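For the reverse direction (XML file in, SQL Server out), a minimal sketch of a JavaScript Writer destination, assuming the channel's source parses the third party's file into msg; the table, column, and element names are placeholders:

// Reuse the same JDBC connection settings as above.
var dbConn = DatabaseConnectionFactory.createDatabaseConnection('com.microsoft.sqlserver.jdbc.SQLServerDriver','jdbc:sqlserver://servername:1433;databaseName=dbname;integratedSecurity=true;','','');
try {
    // placeholder element names; adjust to the third party's XML layout
    for each (var row in msg['row']) {
        var params = new java.util.ArrayList();
        params.add(row['FirstColumn'].toString());
        params.add(row['SecondColumn'].toString());
        dbConn.executeUpdate("INSERT INTO [dbname].[dbo].[table1] (col1, col2) VALUES (?, ?)", params);
    }
} finally {
    dbConn.close();
}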