Mirth: HL7 to XML conversion question - mirth

I'm new to Mirth Connect and I'm struggling to convert HL7 into XML. Suppose my HL7 messages have repeating segments, like ORC in ORM messages; how do I iterate over them?
Below is my code:
tmp['Messages']['orderList']['order'][count]['provider']=msg['ORC'][count]['ORC.10']['ORC.10.1'].toString();
but it is throwing an error:
`TypeError: Cannot read property "provider" from undefined`
Please help me proceed further.

It's failing because your count is higher than the number of elements returned by tmp['Messages']['orderList']['order'], so it is returning undefined. The short answer is that you need to add another order node to tmp['Messages']['orderList'] before you can access it. It's hard to say how best to do that without seeing more of your code, requirements, outbound template, etc... Most frequently I build the node first, and then use appendChild to add it.
A simple example would be:
var tmp = <xml>
<Messages>
<orderList />
</Messages>
</xml>;
var prov = 12345;
var nextOrder = <order>
<provider>{prov}</provider>
</order>;
tmp.Messages.orderList.appendChild(nextOrder);
After which, tmp will look like:
<xml>
<Messages>
<orderList>
<order>
<provider>12345</provider>
</order>
</orderList>
</Messages>
</xml>
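To handle the repeating ORC segments from your question, build one order node per ORC and append it in a loop instead of indexing into order nodes that don't exist yet. This is only a sketch, since I can't see your full outbound template; it assumes the <Messages><orderList/> structure above and that ORC.10.1 is really the field you want:
// One <order> per repeating ORC segment in the inbound message
for each (var orc in msg['ORC']) {
    var nextOrder = <order>
        <provider>{orc['ORC.10']['ORC.10.1'].toString()}</provider>
    </order>;
    tmp['Messages']['orderList'].appendChild(nextOrder);
}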
The technology you are using to work with XML is called E4X, and it's running on the Mozilla Rhino JavaScript engine. Here are a couple of resources that might help you.
https://web.archive.org/web/20181120184304/https://wso2.com/project/mashup/0.2/docs/e4xquickstart.html
https://developer.mozilla.org/en-US/docs/Archive/Web/E4X/Processing_XML_with_E4X

Related

Populating an OPC UA address space with nodes from an XML schema

I am working on a project to build an OPC UA server from the specification.
I've gotten far enough along in the implementation and am currently working on the write request; I already have a few nodes in the server address space.
There are so many nodes that it's practically impossible to create and add them one by one.
Anyway, back to the question: I've downloaded an XML file from the OPC Foundation containing the schema for all the nodes in the address space. Here is a link to the XML file.
What is the most efficient way to create nodes from the XML file? I am writing against a C95 compiler.
Below is a quick view of how nodes are represented in the NodeSet XML file:
<Nodes>
<Node i:type="DataTypeNode">
<NodeId>
<Identifier>i=1</Identifier>
</NodeId>
<NodeClass>DataType_64</NodeClass>
<BrowseName>
<NamespaceIndex>0</NamespaceIndex>
<Name>Boolean</Name>
</BrowseName>
<DisplayName>
<Locale></Locale>
<Text>Boolean</Text>
</DisplayName>
<Description>
<Locale></Locale>
<Text>Describes a value that is either TRUE or FALSE.</Text>
</Description>
<WriteMask>0</WriteMask>
<UserWriteMask>0</UserWriteMask>
<RolePermissions />
<UserRolePermissions />
<AccessRestrictions>0</AccessRestrictions>
<References>
<ReferenceNode>
<ReferenceTypeId>
<Identifier>i=45</Identifier>
</ReferenceTypeId>
<IsInverse>true</IsInverse>
<TargetId>
<Identifier>i=24</Identifier>
</TargetId>
</ReferenceNode>
</References>
<IsAbstract>false</IsAbstract>
<DataTypeDefinition i:nil="true" />
</Node>
Programmatically filling a running OPC-UA server with nodes is unacceptably slow.
You may want to investigate the ModelCompiler.
I found it fairly straightforward to fill a ModelDesign XML with data and generate code and a NodeSet2.xml. So even if you have no need for the generated C# code, which I suspect to be your case, this approach may be useful.
You may also want to look at the UA-.NETStandard repository.
It offers a LoadFromXML method that reads your nodeset pretty quickly. You may find inspiration in that method.
Good luck, and many thanks for your contributions to the OPC-UA world.
Maybe I'm a bit late, but I'm answering in case it can help someone.
If you are using C/C++ with the open62541 SDK, I found that it is possible to generate *.c and *.h files to include in your OPC UA server, as described with some examples here: you only need to run a Python program, providing some parameters and the names of the output files to be generated, then include these files in your OPC UA server (a rough sketch of the invocation follows).
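For example, the call looks roughly like this (the file and output names are placeholders here, and the exact options depend on your open62541 version, so check them against the open62541 nodeset compiler documentation):
python ./nodeset_compiler.py --types-array=UA_TYPES --existing Opc.Ua.NodeSet2.xml --xml MyNodeSet.xml myNodeset
This emits myNodeset.c and myNodeset.h, which you compile into your server and register by calling the single namespace function the compiler generates for them.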
Another way I found is using UaModeler by Unified Automation; in that case you can generate source files to include in your project by drawing your information model in the program and exporting it to XML or source files.

Reading SB-Messaging Send Port properties using Microsoft.BizTalk.ExplorerOM makes a breaking change

I am working on a PowerShell script making use of the Microsoft.BizTalk.ExplorerOM to dynamically update the SB-Messaging SAS key for BizTalk Receive Locations and Send Ports. This is to enable us to roll the SAS keys for our Service Bus queues, and update BizTalk with the new keys as painlessly as possible.
I have this working correctly for Receive Locations, but Send Ports are giving me a different issue.
As soon as I read the PrimaryTransport properties of the Send Port, it seems that some change is made under the covers that then prevents SaveChanges from working, instead throwing an "Invalid or malformed XML data" exception.
This is in contrast to the ReceiveLocation, where I can read any of its properties and then SaveChanges successfully.
Note that in both of these cases, no changes have been made by me. I am simply doing a read and then a save.
Can anyone offer any advice as to what could be causing the issue, and any possible solutions to try?
I had this very same issue when using PowerShell to replace values in Service Bus ReceiveLocations & SendPorts.
The problem is with the non-valid XML symbols in the TransportTypeData, which get converted when the script reads them out in the PowerShell command.
All non-valid XML symbols (such as the & occurring in the Namespace value) need to be converted to &amp;, and if I'm not mistaken, even double-escaped ("double amp:ed").
Here's an article showing examples of what I mean by "double amp:ed":
How do I escape ampersands in XML so they are rendered as entities in HTML?
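To illustrate what single versus double escaping looks like (the value here is made up), an & in a raw value must be written as &amp; inside the TransportTypeData XML, and because that XML string is itself stored inside another XML document, the already-escaped &amp; gets escaped a second time:
Key&Name (raw value)
Key&amp;Name (escaped once, as it sits inside the TransportTypeData)
Key&amp;amp;Name (escaped twice, when that XML is embedded in the outer XML)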
Hope this makes sense; if not, let me know and I'll give it another go.
Just tried doing this from C#, seems to work ok:
var root = new Microsoft.BizTalk.ExplorerOM.BtsCatalogExplorer() { ConnectionString = "Data Source=(local);Initial Catalog=BizTalkMgmtDb;Integrated Security=SSPI;" };
var sendPort = root.SendPorts["xxxx.ServiceBusQueue"];
System.Diagnostics.Trace.TraceInformation(sendPort.PrimaryTransport.TransportTypeData);
sendPort.PrimaryTransport.TransportTypeData = sendPort.PrimaryTransport.TransportTypeData.Replace("RootManageSharedAccessKey", "MySharedAccessKey");
root.SaveChanges();
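For the PowerShell side of the question, the same ExplorerOM calls translate roughly as follows (this is a sketch; the assembly path is only an example and depends on your BizTalk version and install location):
# Load the ExplorerOM assembly; adjust the path to your BizTalk installation
Add-Type -Path "C:\Program Files (x86)\Microsoft BizTalk Server 2013 R2\Developer Tools\Microsoft.BizTalk.ExplorerOM.dll"
$catalog = New-Object Microsoft.BizTalk.ExplorerOM.BtsCatalogExplorer
$catalog.ConnectionString = "Data Source=(local);Initial Catalog=BizTalkMgmtDb;Integrated Security=SSPI;"
# Read, modify, and save the send port's transport type data
$sendPort = $catalog.SendPorts["xxxx.ServiceBusQueue"]
$sendPort.PrimaryTransport.TransportTypeData = $sendPort.PrimaryTransport.TransportTypeData.Replace("RootManageSharedAccessKey", "MySharedAccessKey")
$catalog.SaveChanges()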

In the qbXML ItemQuery request, how do I get the description for a service item?

I am creating a tool that will synchronize our production database with QuickBooks (QB). I am trying to get a list of all the items in QB using ItemQuery and I want to get the description of each item as well. However, it seems that different types of items have different ways of specifying the description. Using IncludeRetElement, I am able to get the SalesDesc for ItemInventoryRet, but I am struggling with getting the description for ItemServiceRet (and a few others, but I think if I can figure this one out, I will be able to figure out the others).
Here is my request...
<?qbxml version="12.0"?>
<QBXML>
<QBXMLMsgsRq onError="stopOnError">
<ItemQueryRq requestID="2">
<IncludeRetElement>ListID</IncludeRetElement>
<IncludeRetElement>Name</IncludeRetElement>
<IncludeRetElement>FullName</IncludeRetElement>
<IncludeRetElement>ParentRef</IncludeRetElement>
<IncludeRetElement>SalesAndPurchase_SalesDesc</IncludeRetElement>
<IncludeRetElement>SalesOrPurchase_Desc</IncludeRetElement>
<IncludeRetElement>ItemDesc</IncludeRetElement>
<IncludeRetElement>SalesDesc</IncludeRetElement>
</ItemQueryRq>
</QBXMLMsgsRq>
</QBXML>
And here is the response I'm getting (shortened for clarity)...
<QBXML>
<QBXMLMsgsRs>
<ItemQueryRs requestID="2" statusCode="0" statusSeverity="Info" statusMessage="Status OK">
<ItemServiceRet>
<ListID>240000-1071531214</ListID>
<Name>Delivery</Name>
<FullName>Delivery</FullName>
</ItemServiceRet>
<ItemInventoryRet>
<ListID>270000-1071524193</ListID>
<Name>1/2" Line</Name>
<FullName>Irrigation Hose:1/2" Line</FullName>
<SalesDesc>1/2" Vinyl Irrigation Line</SalesDesc>
</ItemInventoryRet>
<ItemGroupRet>
<ListID>1E0000-934380927</ListID>
<Name>Walkway</Name>
<ItemDesc>Walkway lighting</ItemDesc>
</ItemGroupRet>
</ItemQueryRs>
</QBXMLMsgsRs>
</QBXML>
According to the documentation (pick ItemQuery from the dropdown), the description I think I want is ItemServiceRet > ORSalesPurchase > SalesOrPurchase > Desc. The request includes one way I've tried, but I've tried quite a few other ways as well...
<IncludeRetElement>ORSalesPurchase:SalesOrPurchase:Desc</IncludeRetElement>
<IncludeRetElement>ORSalesPurchase.SalesOrPurchase.Desc</IncludeRetElement>
<IncludeRetElement>ORSalesPurchase_SalesOrPurchase_Desc</IncludeRetElement>
<IncludeRetElement>SalesOrPurchase:Desc</IncludeRetElement>
<IncludeRetElement>SalesOrPurchase.Desc</IncludeRetElement>
<IncludeRetElement>SalesOrPurchase_Desc</IncludeRetElement>
<IncludeRetElement>Desc</IncludeRetElement>
So the question is: how do you retrieve sub-elements in qbXML queries?
I have found that if I remove all the IncludeRetElements, I do get the values. But I would like to learn how to only get the data I care about. We have a HUGE QB database so this could be a major performance issue if I have to get everything.
As a note, I switched to using QBFC10Lib instead of creating the XML myself hoping it would help me solve this issue, but it didn't. I am still having the exact same issue. I'm guessing that an answer to one will resolve both qbXML and QBFC.
I figured it out. You have to add each level separately, like this...
<?qbxml version="12.0"?>
<QBXML>
<QBXMLMsgsRq onError="stopOnError">
<ItemQueryRq requestID="2">
<IncludeRetElement>ListID</IncludeRetElement>
<IncludeRetElement>Name</IncludeRetElement>
<IncludeRetElement>FullName</IncludeRetElement>
<IncludeRetElement>ItemDesc</IncludeRetElement>
<IncludeRetElement>SalesDesc</IncludeRetElement>
<IncludeRetElement>ORSalesPurchase</IncludeRetElement>
<IncludeRetElement>SalesOrPurchase</IncludeRetElement>
<IncludeRetElement>SalesAndPurchase</IncludeRetElement>
<IncludeRetElement>Desc</IncludeRetElement>
</ItemQueryRq>
</QBXMLMsgsRq>
</QBXML>
I had tried this earlier, but I think I missed a level. Anyway, hopefully this answer will save somebody else from wasting half their day :).
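For completeness, since the question mentions QBFC10Lib: the same fix there is to add each level to the query's IncludeRetElementList. A rough C# sketch (the session setup and qbXML version numbers are illustrative, not from the question, so double-check them against the QBFC documentation):
// Assumes QBFC10Lib interop; connection/session handling is abbreviated here
QBSessionManager sessionManager = new QBSessionManager();
IMsgSetRequest requestSet = sessionManager.CreateMsgSetRequest("US", 12, 0);
IItemQuery itemQuery = requestSet.AppendItemQueryRq();
// Add each level separately, mirroring the qbXML request above
foreach (string element in new[] { "ListID", "Name", "FullName", "ItemDesc", "SalesDesc", "ORSalesPurchase", "SalesOrPurchase", "SalesAndPurchase", "Desc" })
{
    itemQuery.IncludeRetElementList.Add(element);
}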

Vertica-Tableau error: Multiple commands cannot be active

We have a dataset in Vertica, and Tableau queries the data (4 billion records) from Vertica for a dashboard, as shown below:
All the lists and graphs are separate worksheets in Tableau and use the same connection to the Vertica DB. Each list is a column in the DB, in descending order of the count of items in the dataset's respective column. The graphs are the same as the lists but calculated in a slightly different manner. Start Date and End Date are the date range for the data to be queried, like a data connection filter that restricts the query to a fixed amount of data, for example the past week, the last month, etc.
But I get this error:
[Vertica][VerticaDSII] (10) An error occurred during query preparation: Multiple commands cannot be active on the same connection. Consider increasing ResultBufferSize or fetching all results before initiating another command.
Is there any workaround for this issue, or any better way to do this?
You'll need a TDC file which specifies a particular ODBC connection string option to get around the issue.
The guidance from Vertica was to add an ODBC Connect String parameter with the value “ResultBufferSize=0“. This apparently forces the result buffer to be unlimited, preventing the error. This is simple enough to accomplish when building a connection string manually or working with a DSN, but Vertica is one of Tableau’s native connectors. So how do you tell the native connector to do something else with its connection?
Native Connections in Tableau can be customized using TDC files
“Native connectors” still connect through the vendor’s ODBC drivers, and can be customized just the same as an “Other Databases” / ODBC connection. In the TDC files themselves, “ODBC” connections are referred to as “Generic ODBC”, which is a much more accurate way to think about the difference.
The full guide to TDC customizations, with all of the options, is available here, although it is pretty dense reading. One thing that isn’t provided is an example of customizing a “native connector”. The basic structure of a TDC file is this:
<?xml version='1.0' encoding='utf-8' ?>
<connection-customization class='genericodbc' enabled='true' version='7.7'>
<vendor name='' />
<driver name='' />
<customizations>
</customizations>
</connection-customization>
When using “Generic ODBC”, the class is “genericodbc” and then the vendor and driver name must be specified so that Tableau can know when the TDC file should be applied. It’s much simpler for a native connector — you just use the native connector name in all three places. The big list of native connector names is at the end of this article. Luckily for us, Vertica is simply referred to as “vertica”. So our Vertica TDC framework will look like:
<?xml version='1.0' encoding='utf-8' ?>
<connection-customization class='vertica' enabled='true' version='7.7'>
<vendor name='vertica' />
<driver name='vertica' />
<customizations>
</customizations>
</connection-customization>
This is a good start, but we need some actual customization tags to cause anything to happen. Per the documentation, to add additional elements to the ODBC connection string, we use a tag named ‘odbc-connect-string-extras‘. This would look like
<customization name='odbc-connect-string-extras' value='ResultBufferSize=0;' />
One important thing we discovered was that all ODBC connection extras need to be in this single tag. Because we wanted to turn on load balancing in the Vertica cluster, there was a second recommended parameter: ConnectionLoadBalance=1. To get both of these parameters in place, the correct method is:
<customization name='odbc-connect-string-extras' value='ResultBufferSize=0;ConnectionLoadBalance=1;' />
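Putting the framework and the connect-string extras together, the whole TDC file ends up looking like this:
<?xml version='1.0' encoding='utf-8' ?>
<connection-customization class='vertica' enabled='true' version='7.7'>
<vendor name='vertica' />
<driver name='vertica' />
<customizations>
<customization name='odbc-connect-string-extras' value='ResultBufferSize=0;ConnectionLoadBalance=1;' />
</customizations>
</connection-customization>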
There are a whole set of other customizations you can put into place to see how they affect performance. Make sure you understand the way the customization option is worded — if it starts with ‘SUPPRESS’, then giving a ‘yes’ value will turn off the feature; other times you want to set the value to ‘no’ to turn off the feature. Some of the other ones we tried were:
<customization name='CAP_SUPPRESS_DISCOVERY_QUERIES' value='yes' />
<customization name='CAP_ODBC_METADATA_SUPPRESS_PREPARED_QUERY' value='yes' />
<customization name='CAP_ODBC_METADATA_SUPPRESS_SELECT_STAR' value='yes' />
<customization name='CAP_ODBC_METADATA_SUPPRESS_EXECUTED_QUERY' value='yes' />
<customization name='CAP_ODBC_METADATA_SUPRESS_SQLSTATISTICS_API' value='yes' />
<customization name= 'CAP_CREATE_TEMP_TABLES' value='no' />
<customization name= 'CAP_SELECT_INTO' value='no' />
<customization name= 'CAP_SELECT_TOP_INTO' value='no' />
The first set were mostly about reducing the number of queries for metadata detection, while the second set tell Tableau not to use TEMP tables.
The best way to see the results of these customizations is to change the TDC file and restart Tableau Desktop. Once you are satisfied with the changes, move the TDC file to your Tableau Server and restart it.
Where to put the TDC files
Per the documentation:
“For Tableau Desktop on Windows: Documents\My Tableau Repository\Datasources
For Tableau Server: Program Files\Tableau\Tableau Server\\bin
Note: The file must be saved using a .tdc extension, but the name does not matter.”
If you are running a Tableau Server cluster, the .tdc file must be placed in the bin folder on every worker node so that the vizqlserver process can find it. The biggest issue of all: you should edit these files using a real text editor like Notepad++ or Sublime Text rather than Notepad, because Notepad likes to save things with a hidden .txt ending, and the TDC file will only be recognized if the extension is really .tdc, not .tdc.txt.
Restarting Tableau resolved my issue, which was giving the same error.

Remote stream multiple files in SOLR

I want to use SOLR's remote-streaming facility to extract and index the content of files.
This works fine if I pass stream.file=xxx as a parameter to the http GET method.
However, I have a lot of these, and want to batch them up (i.e. not have to have a GET per file).
Is there a way I can do this in SOLR?
e.g. I'd like to be able to POST some XML like this:
<add>
<doc stream_file="filename">
<field name="id">123</field>
</doc>
<doc>...
This has been recently asked (and answered) in the solr-user mailing list.
I find that multiple ADDs are fast, so long as you only COMMIT the batch once and don't try to COMMIT after every ADD. I would guess that the performance penalty is not large enough to be worth writing your own RequestHandler.
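For illustration (the field names here are made up), that means posting the whole batch as a single add command containing multiple doc elements:
<add>
<doc>
<field name="id">123</field>
<field name="text">...content extracted from the first file...</field>
</doc>
<doc>
<field name="id">124</field>
<field name="text">...content extracted from the second file...</field>
</doc>
</add>
and then sending one commit on its own, rather than committing after each document:
<commit/>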