Mule message enricher with query to MongoDB

Hey, this is my first post here, so thanks for any help :)
I am trying to build a flow (using Mule) that reads and transforms a couple of CSV files into a common format (this part works). After that I want to enrich my messages with 3 objects from a MongoDB database. I think I can do that by using one of the attributes in my payload, something like payload.MeterUID, to find the document _id in MongoDB and then use 3 objects from the document with that _id to enrich my main message.
This is my enricher so far:
<enricher doc:name="Message Enricher">
    <mongo:find-objects config-ref="Mongo_DB1" collection="GSMdata" doc:name="Mongo DB">
        <mongo:query-attributes>
            <mongo:query-attribute key="MeterUID">#[payload.MeterUID]</mongo:query-attribute>
        </mongo:query-attributes>
        <mongo:fields ref="#[payload]"/>
    </mongo:find-objects>
</enricher>
How do I complete this enricher so that it works in the way I described, if it is even possible?
At this point any help will be appreciated.

There are a few problems with your configuration, and one of them is that you are not really stating how you want to enrich your message.
Firstly, with mongo:query-attributes you need to use mongo:find-objects-using-query-map, not mongo:find-objects.
Secondly, for mongo:fields you need a list of the fields you want the query to return, not a reference to your message payload. If you only need the _id field, then use:
<mongo:fields>
    <mongo:field>_id</mongo:field>
</mongo:fields>
Thirdly, the enricher needs to know how it should enrich the message. Are you setting a new field in a map payload, or perhaps some property or variable? Assuming you have a map payload, you would specify this as something like <enricher target="#[payload.my_mongo_id_list]" doc:name="Message Enricher">.
So, all together:
<enricher target="#[payload.my_mongo_id_list]" doc:name="Message Enricher">
    <mongo:find-objects-using-query-map config-ref="Mongo_DB1" collection="GSMdata" doc:name="Mongo DB">
        <mongo:query-attributes>
            <mongo:query-attribute key="MeterUID">#[payload.MeterUID]</mongo:query-attribute>
        </mongo:query-attributes>
        <mongo:fields>
            <mongo:field>_id</mongo:field>
        </mongo:fields>
    </mongo:find-objects-using-query-map>
</enricher>

Related

Mule ESB 3.8: how to add an object variable (HashMap) into the payload (HashMap)

Hello, I need a bit of help understanding how I can merge two payloads from DB calls into one final payload.
First payload is like:
[{Name=John, Age=31}]
Second payload is like:
Address=[{Planet=Earth, Continent=Europa, Town=London},{Planet=Earth, Continent=Europa, Town=Dublin}]
Final result I am expecting as such:
[{Name=John, Age=31, Address=[{Planet=Earth, Continent=Europa, Town=London},{Planet=Earth, Continent=Europa, Town=Dublin}]}]
I tried ++ and putAll, but neither works, and I would prefer to do this without DataWeave.
Technically I understand that one needs to be added to the other, but I can't find the right syntax, and the help docs aren't helpful for this :(
Thanks in advance.
Your payload is an ArrayList of HashMaps, not a HashMap. Similarly, your flowVars.Address is also a List. To add it to the first HashMap of your payload, you can try the following (put modifies the map in place, so the payload afterwards will match your expected result):
<expression-component doc:name="Expression"><![CDATA[#[payload.get(0).put("Address", flowVars.Address)]]]></expression-component>

Load more records from Gatling feeder

I would like to inject n rows from my CSV file into the Gatling feeder. The default approach of Gatling is to read and inject one row at a time. However, I cannot find anywhere how to take and inject e.g. an Array into a template.
I came up with creating a JSON template with Gatling Expressions as some of the fields. The issue is that I have a JSON array with N elements:
[
    {"myKey": ${value}, "mySecondKey": ${value2}, ...},
    {"myKey": ${value}, "mySecondKey": ${value2}, ...},
    {"myKey": ${value}, "mySecondKey": ${value2}, ...},
    {"myKey": ${value}, "mySecondKey": ${value2}, ...}
]
And my csv:
value,value2,...
value,value2,...
value,value2,...
value,value2,...
...
I would like to make it as efficient as possible. My data is in a CSV file, so I would like to use the csv feeder. Also, the file is large, so readRecords is not an option, since it runs out of memory.
Is there a way I can put N records into the request body using Gatling?
From the documentation:
feed(feeder, 2)
Old Gatling versions:
Attribute names will be suffixed. For example, if the columns are named "foo" and "bar" and you're feeding 2 records at once, you'll get "foo1", "bar1", "foo2" and "bar2" session attributes.
Modern Gatling versions:
values will be arrays containing all the values of the same key.
In this latter case, you can access a value at a given index with Gatling EL: #{foo(0)}, #{foo(1)}, #{bar(0)} and #{bar(1)}
It seems that the documentation on this front might have changed a bit since then:
It’s also possible to feed multiple records at once. In this case, values will be arrays containing all the values of the same key.
I personally wrote this in Java, but it is easy to find the syntax for Scala as well in the documentation.
The solution I used for my CSV file is to add the feeder to the scenario like:
.feed(CoreDsl.csv("pathofyourcsvfile"), NUMBER_OF_RECORDS)
To apply/receive that array data during your .exec you can do something like this:
.post("YourEndpointPath").body(StringBody(session -> yourMethod(session.get(YourStringKey))))
In this case, I am using a POST and a request body, but the concept remains similar for a GET and its corresponding query parameters. So basically, you can use the session lambda in combination with the session.get method.
"yourMethod" can then receive this parameter as an Object[].

Mirth Connect Database Reader automatic column mapping

Could somebody please confirm the following?
I am using Mirth Connect 3.5.08232.
My Source Connector is a Database Reader.
Say I am using a query that returns multiple rows, and I return the result (via JavaScript) as the documentation suggests, so that Mirth treats each row as a separate message. I also use a couple of Mapper steps as source transformers and save the mapped fields in my channel map (which ends up containing only those fields that I define in the transformers).
In the destination, and specifically in the destination response transformer (or the destination body, if it is a JavaScript Writer), how do I access the source fields?
The only way I found, by trial and error, is:
var rawMsg = connectorMessage.getRawData();
var xmlMsg = new XML(rawMsg);
logger.info(xmlMsg.some_field); // ignore the root element of rawMsg
Is this the right way to do this? I thought that maybe the fields that were automatically detected would be put in some kind of map, like sourceMap, but that doesn't seem to be the case, right?
Thank you
If you are using Mapper steps in your transformer to extract the data and put it into a variable map (like the channel map), then you can use any of the following methods to retrieve it from a subsequent JavaScript context (including a JavaScript Writer, and your response transformer):
var value = channelMap.get('key');
var value = $c('key');
var value = $('key');
Look at the Variable Maps section of the User Guide for more information.
So to recap, say you're selecting a column "mycolumn" with a Database Reader. The XML sent to the channel will be something like this:
<result>
    <mycolumn>value</mycolumn>
</result>
Then you can choose to extract pieces of that message into specific variables for later use. The transformer allows you to easily drag-and-drop pieces of the sample inbound message.
Finally, in your JavaScript Writer (or in any subsequent filter, transformer, or response transformer), just drag the value into the field you want, and the corresponding JavaScript code will automatically be inserted.
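For a channel map variable named mycolumn (matching the sample message above), the generated snippet would look something like:
var value = $('mycolumn');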
One last note: if you are selecting a lot of columns and don't want to make a Mapper step for each one individually, you can use a JavaScript step to iterate through the message and extract each column into a separate map variable:
for each (child in msg.children()) {
    channelMap.put(child.localName(), child.toString());
}
Or, you can just reference the columns directly from within the JavaScript Writer:
var msg = new XML(connectorMessage.getEncodedData());
var column1 = msg.column1.toString();
var column2 = msg.column2.toString();
...

Iterate within the payload in Mule

I am retrieving some rows from the database. For example, after retrieving I have two rows as the payload, and I would like to iterate over those rows. The two retrieved rows are as follows:
[{PAYLOAD_MSG=PU, SENSITIVEDATAINDICATOR=Y, ERRORMESSAGE=User generated exception test,
INTERFACEID=I0826, EXCEPTIONID=73, PAYLOADMSGID=I0826MessTesting0002, SEVERITY=2, INTERFACENAME=replay,
SOURCEPROTOCOL=MQ, CREATIONTIME=2016-03-01 08:29:36.211319, EVENTSOURCE=MQ_Input.transaction.Rollback},
{PAYLOAD_MSG=UvdjI, SENSITIVEDATAINDICATOR=N, ERRORMESSAGE=User generated exception test, INTERFACEID=I0826,
EXCEPTIONID=72, PAYLOADMSGID=I0826MessTesting0001, SEVERITY=2, INTERFACENAME=replay, SOURCEPROTOCOL=MQ,
CREATIONTIME=2016-03-01 08:29:36.211319, EVENTSOURCE=MQ_Input.transaction.Rollback}]
I want to access the PAYLOAD_MSG of the first row as well as of the second row. Specifically, how can this be done? I have tried the following: [payload[0].Payload_Msg] and #[payload.Payload_Msg], but they were not working. Could someone help me overcome this?
You can use a Foreach scope to iterate over the collection and access the objects:
<foreach collection="#[payload]">
    <logger message="#[payload.PAYLOAD_MSG]" level="INFO" doc:name="Logger"/>
</foreach>
#[payload[0].PAYLOAD_MSG] must work; otherwise check with #[message.payload[0].PAYLOAD_MSG]. Note that the map keys are case-sensitive, so PAYLOAD_MSG has to match the case shown in your payload.
You can use the Foreach component, but first save the payload to a flow variable so you can still access the full collection within the Foreach scope. Another way to handle this is to use the Collection Splitter component.
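For example, a minimal sketch of the flow-variable approach (the variable name allRows is illustrative):
<set-variable variableName="allRows" value="#[payload]" doc:name="Save rows"/>
<foreach doc:name="For Each">
    <!-- inside the scope, payload is the current row; the full list is still in flowVars.allRows -->
    <logger message="#[payload.PAYLOAD_MSG]" level="INFO" doc:name="Logger"/>
</foreach>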
Hope this helps.
You could use a Collection Splitter after the database connector to process the records one by one. Then you can use #[payload.PAYLOAD_MSG] in a Logger to retrieve the value.
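A minimal sketch of that fragment, placed right after the database connector:
<collection-splitter doc:name="Collection Splitter"/>
<logger message="#[payload.PAYLOAD_MSG]" level="INFO" doc:name="Logger"/>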

Data Processing, how to approach

I have the following problem, given this XML data structure:
<level1>
    <level2ElementTypeA></level2ElementTypeA>
    <level2ElementTypeB>
        <level3ElementTypeA>String1Ineed</level3ElementTypeA>
    </level2ElementTypeB>
    ...
    <level2ElementTypeC>
        <level3ElementTypeB attribute1="...">
            <level4ElementTypeA>String2Ineed</level4ElementTypeA>
        </level3ElementTypeB>
    </level2ElementTypeC>
    ...
    <level2ElementTypeD></level2ElementTypeD>
</level1>
<level1>...</level1>
I need to create an entity which contains String1Ineed and String2Ineed.
So every time I come across a level3ElementTypeB with a certain value in attribute1, I have my String2Ineed. The ugly part is how to obtain String1Ineed, which is located in the first element of type level2ElementTypeB above the current level2ElementTypeC.
My 'imperative' solution is to always keep a variable with the last value of String1Ineed, and when I hit the criteria for String2Ineed, I simply use it. If we look at this from a plain collection-processing point of view, how would you model the backtracking logic between String1Ineed and String2Ineed? Using the State monad?
Isn't this what XPath is for? You can find String2Ineed and then change the axis to search backwards for String1Ineed.
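For example, a sketch of that lookup (assuming attribute1 marks the elements you match on, with its value left elided as '...'):
//level3ElementTypeB[@attribute1='...']/preceding::level2ElementTypeB[1]/level3ElementTypeA
Since preceding is a reverse axis, [1] selects the nearest level2ElementTypeB before the matched element; its level3ElementTypeA child holds String1Ineed, while String2Ineed is the level4ElementTypeA under the match itself.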