Mirth Connect error in channel deployment

I am getting the error below on deployment. If anyone has a full example of a Mirth Connect channel, please share it, including details of the filter, transformer, and response for both the source and destination, as well as the scripts.
Here are the screenshots of the channel:
Channel summary
Channel Source
Source Transformer
Channel Destination

When you're working in a JavaScript context like that, here is the correct syntax to reference a map variable:
$('varName')
So you can replace the $varName instances in your code with $('varName') and it should work.
However, you should also consider changing your code to use prepared statements. That prevents SQL injection and other unintended problems (what happens if one of those variables contains a quotation mark?). The DatabaseConnection class has another version of executeUpdate that takes a list of parameters. So try something like this:
var params = Lists.list($('title')).append($('category')).append($('sumitted_date')).append($('assigner')).append($('assignee')).append($('due_date'));
var result = dbConn.executeUpdate("INSERT INTO patient (title, category, sumitted_date, assigner, assignee, due_date) VALUES (?, ?, ?, ?, ?, ?)", params);
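For context, here is a rough sketch of that prepared-statement INSERT inside a complete connection lifecycle. The driver class, JDBC URL and credentials are placeholders (assuming a PostgreSQL target), so substitute your own:
var dbConn;
try {
    // Placeholder driver/URL/credentials -- assuming a PostgreSQL target.
    dbConn = DatabaseConnectionFactory.createDatabaseConnection(
        'org.postgresql.Driver',
        'jdbc:postgresql://localhost:5432/mydb',
        'user', 'pass');

    var params = Lists.list($('title'))
        .append($('category'))
        .append($('sumitted_date'))
        .append($('assigner'))
        .append($('assignee'))
        .append($('due_date'));

    dbConn.executeUpdate(
        "INSERT INTO patient (title, category, sumitted_date, assigner, assignee, due_date)" +
        " VALUES (?, ?, ?, ?, ?, ?)", params);
} finally {
    if (dbConn) {
        dbConn.close();
    }
}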

Mirth is complaining about the use of the undefined $title variable in your JavaScript.
It looks like you're trying to open a JDBC connection to a postgres database and perform some INSERTs, but you're referencing data (e.g. $title) that's not part of Mirth's channel map.
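In other words, those values need to be placed into the channel map (for example, in a source transformer step) before $('title') and the rest will resolve. A hypothetical JavaScript transformer step might look like the following; the msg paths are made up, so point them at wherever the values actually live in your inbound message:
// Hypothetical source transformer (JavaScript step). The msg paths below are
// placeholders -- map them from your actual inbound message structure.
channelMap.put('title', msg['OBR']['OBR.4']['OBR.4.2'].toString());
channelMap.put('category', msg['OBR']['OBR.4']['OBR.4.1'].toString());
channelMap.put('due_date', msg['OBR']['OBR.7']['OBR.7.1'].toString());
// ...and similarly for sumitted_date, assigner and assignee. Downstream, each
// value is then available as $('title'), $('category'), and so on.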

Related

How do you pass variables into Mirth Database Reader Channels for SQL expressions?

I can't find any documentation on how to pass parameters into Database Reader SQL statements.
-> This is a simplified example: I am not looking to script a variable for "yesterday", which is easy to express in SQL. That's not the point. I have more complex variables in the actual SQL statement that I'm trying to marshal in. I just want to know how to get variables into the SQL form, if possible.
-> "You can just do that in JavaScript": the actual queries I need to run are about a hundred lines long. I don't want to maintain and debug a query built by concatenating strings and then deal with escaping 'quoted' things everywhere in the SQL. I would much rather maintain an actual SQL statement that works when copied and pasted into a SQL IDE.
How do we pass parameters into the SQL block at the bottom of the Database Reader form?
SELECT patientsex, visitnumber, samplereceived_dt, sr_d, sr_t, orderpriority, orderrequestcode, orderrequestname
FROM mydata.somedata
WHERE sr_d = (${DateUtil.getCurrentDate('yyyyMMdd')})::integer;
JavaScript is the feasible way to achieve this: either define the SQL statements inside Mirth Connect, or bundle the SQL in a stored procedure and then use SQL Server's EXEC command from within Mirth Connect to call the stored procedure while passing the parameters (again via JavaScript).
For example:
var dbConn;
try {
    dbConn = DatabaseConnectionFactory.createDatabaseConnection('', 'DB:Port\\instance', 'user', 'pass');
    var paramList = new java.util.ArrayList();
    paramList.add($('patientId'));
    paramList.add($('lastName'));
    paramList.add($('firstName'));
    var result = dbConn.executeCachedQuery("SELECT * FROM patients WHERE patientid = ? AND lastname = ? AND firstname = ?", paramList);
    while (result.next()) {
        // you can reference by column index like so...
        // result.getString(1);
    }
} finally {
    if (dbConn) {
        dbConn.close();
    }
}
It should be noted that the parameters you add to the list MUST be in the same order as the ? placeholders in the query.
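To tie this back to the query in the question, a JavaScript-mode Database Reader could bind the date as a parameter instead of interpolating it into the SQL. This is only a sketch: the driver, URL and credentials are placeholders, and it assumes a PostgreSQL target (as the ::integer cast in the question suggests):
var dbConn;
try {
    // Placeholder connection details -- adjust for your environment.
    dbConn = DatabaseConnectionFactory.createDatabaseConnection(
        'org.postgresql.Driver', 'jdbc:postgresql://host:5432/mydb', 'user', 'pass');

    var paramList = new java.util.ArrayList();
    // Bind today's date as an integer parameter rather than concatenating it into the SQL.
    paramList.add(java.lang.Integer.valueOf(DateUtil.getCurrentDate('yyyyMMdd')));

    // The Database Reader expects a result set back; executeCachedQuery returns a CachedRowSet.
    return dbConn.executeCachedQuery(
        "SELECT patientsex, visitnumber, samplereceived_dt, sr_d, sr_t," +
        " orderpriority, orderrequestcode, orderrequestname" +
        " FROM mydata.somedata WHERE sr_d = ?", paramList);
} finally {
    if (dbConn) {
        dbConn.close();
    }
}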

operator does not exist: @ timestamp without time zone

In a parameterized query issued from C# code to PostgreSQL 10.14 via the dotConnect 7.7.832 .NET connector, I select either a parameter value or the local timestamp, if the parameter is NULL:
using (var cmd = new PgSqlCommand("select COALESCE(@eventTime, LOCALTIMESTAMP)", connection))
When executed, this statement throws the error in the subject. If I comment out the corresponding parameter
cmd.Parameters.Add("@eventTime", PgSqlType.TimeStamp).Value = DateTime.Now;
and hardcode the value
using (var cmd = new PgSqlCommand("select COALESCE('11/6/2020 2:36:58 PM', LOCALTIMESTAMP)", connection))
or if I cast the parameter
using (var cmd = new PgSqlCommand("select COALESCE(cast(@eventTime as timestamp without time zone), LOCALTIMESTAMP)", connection))
then it works. Can anyone explain what the @ operator in the error refers to, and why the error occurs?
In the case that doesn't work, your .NET connection library seems to be passing an SQL command containing a literal @ to the database, rather than substituting it. The database assumes you are trying to use @ as a user-defined operator, as it doesn't know what else it could possibly be. But no such operator has been defined.
Why is it doing that? I have no idea. That is a question about your .NET connection library, not about PostgreSQL itself, so you might want to add a tag for it.
The error message you get from the database should include the text of the query it received (as opposed to the text you think was sent), and it is often useful to see that in situations like this. If that text is not present in the client's error message (some connection libraries do not faithfully pass this info along), you should be able to pull it directly from the PostgreSQL server's log file.

mirth connect Database Reader automatic column mapping

Could somebody please confirm the following...
I am using Mirth Connect 3.5.08232.
My Source Connector is a Database Reader.
Say I am using a query that returns multiple rows and return the result (via JavaScript), as the documentation suggests, so that Mirth treats each row as a separate message. I also use a couple of Mapper steps as source transformers and save the mapped fields in my channel map (which ends up containing only the fields that I define in the transformers).
In the destination, and specifically in the destination response transformer (or the destination body, if it is a JavaScript Writer), how do I access the source fields?
The only way I found by trial and error is:
var rawMsg = connectorMessage.getRawData();
var xmlMsg = new XML(rawMsg);
logger.info(xmlMsg.some_field); // ignore the root element of rawMsg
Is this the right way to do this? I thought that maybe the fields that were nicely automatically detected would be put in some kind of a map, like sourceMap - but that doesn't seem to be the case, right?
Thank you
If you are using Mapper steps in your transformer to extract the data and put it into a variable map (like the channel map), then you can use any of the following methods to retrieve it from a subsequent JavaScript context (including a JavaScript Writer, and your response transformer):
var value = channelMap.get('key');
var value = $c('key');
var value = $('key');
Look at the Variable Maps section of the User Guide for more information.
So to recap, say you're selecting a column "mycolumn" with a Database Reader. The XML sent to the channel will be something like this:
<result>
<mycolumn>value</mycolumn>
</result>
Then you can choose to extract pieces of that message into specific variables for later use. The transformer allows you to easily drag-and-drop pieces of the sample inbound message.
Finally, in your JavaScript Writer (or in any subsequent filter, transformer, or response transformer), just drag the value into the field you want, and the corresponding JavaScript code will be inserted automatically.
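For a channel map variable named "mycolumn", the inserted code is just the standard map lookup, something like:
var mycolumn = $('mycolumn');    // equivalent to channelMap.get('mycolumn') or $c('mycolumn')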
One last note: if you are selecting a lot of variables and don't want to make a Mapper step for each one individually, you can use a JavaScript Step to iterate through the message and extract each column into a separate map variable:
for each (child in msg.children()) {
    channelMap.put(child.localName(), child.toString());
}
Or, you can just reference the columns directly from within the JavaScript Writer:
var msg = new XML(connectorMessage.getEncodedData());
var column1 = msg.column1.toString();
var column2 = msg.column2.toString();
...
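And because the response transformer is just another JavaScript context, the same lookups work there too. For example, a response transformer step could read back a source field that a Mapper step saved earlier (mycolumn here stands in for whatever you mapped):
// Destination response transformer: values that the source transformer put into
// the channel map are still available here.
var mycolumn = $('mycolumn');    // or channelMap.get('mycolumn') / $c('mycolumn')
logger.info('Response received for row with mycolumn = ' + mycolumn);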

Setting application_name on Postgres/SQLAlchemy

Looking at the output of select * from pg_stat_activity;, I see a column called application_name, described here.
I see psql sets this value correctly (to psql...), but my application code (psycopg2/SQLAlchemy) leaves it blank.
I'd like to set this to something useful, like web.1, web.2, etc, so I could later on correlate what I see in pg_stat_activity with what I see in my application logs.
I couldn't find out how to set this field using SQLAlchemy (and, if push comes to shove, even with raw SQL; I'm using PostgreSQL 9.1.7 on Heroku, if that matters).
Am I missing something obvious?
the answer to this is a combination of:
http://initd.org/psycopg/docs/module.html#psycopg2.connect
Any other connection parameter supported by the client library/server can be passed either in the connection string or as keywords. The PostgreSQL documentation contains the complete list of the supported parameters. Also note that the same parameters can be passed to the client library using environment variables.
where the variable we need is:
http://www.postgresql.org/docs/current/static/runtime-config-logging.html#GUC-APPLICATION-NAME
The application_name can be any string of less than NAMEDATALEN characters (64 characters in a standard build). It is typically set by an application upon connection to the server. The name will be displayed in the pg_stat_activity view and included in CSV log entries. It can also be included in regular log entries via the log_line_prefix parameter. Only printable ASCII characters may be used in the application_name value. Other characters will be replaced with question marks (?).
combined with :
http://docs.sqlalchemy.org/en/rel_0_8/core/engines.html#custom-dbapi-args
String-based arguments can be passed directly from the URL string as query arguments: (example...) create_engine() also takes an argument connect_args which is an additional dictionary that will be passed to connect(). This can be used when arguments of a type other than string are required, and SQLAlchemy’s database connector has no type conversion logic present for that parameter
from that we get:
e = create_engine("postgresql://scott:tiger@localhost/test?application_name=myapp")
or:
e = create_engine("postgresql://scott:tiger@localhost/test",
connect_args={"application_name":"myapp"})
If you're using the asyncpg driver, you should use:
conn = await asyncpg.connect(server_settings={'application_name': 'foo'})
src - https://github.com/MagicStack/asyncpg/issues/204#issuecomment-333917251

Can the Sequence of RecordSets in a Multiple RecordSet ADO.Net resultset be determined, controlled?

I am using code similar to this Support / KB article to return multiple recordsets to my C# program.
But I don't want my C# code to be dependent on the physical sequence of the recordsets returned in order to do its job.
So my question is, "Is there a way to determine which set of records from a multiple-recordset resultset I am currently processing?"
I know I could probably decipher this indirectly by looking for a unique column name or something per resultset, but I think/hope there is a better way.
P.S. I am using Visual Studio 2008 Pro & SQL Server 2008 Express Edition.
No, because the SqlDataReader is forward only. As far as I know, the best you can do is open the reader with KeyInfo and inspect the schema data table created with the reader's GetSchemaTable method (or just inspect the fields, which is easier, but less reliable).
I spent a couple of days on this. I ended up just living with the physical order dependency. I heavily commented both the code method and the stored procedure with !!!IMPORTANT!!!, and included an #If...#End If to output the result sets when needed to validate the stored procedure output.
The following code snippet may help you.
Helpful Code
Dim fContainsNextResult As Boolean
Dim oReader As DbDataReader = Nothing
oReader = Me.SelectCommand.ExecuteReader(CommandBehavior.CloseConnection Or CommandBehavior.KeyInfo)

#If DEBUG_ignore Then
    'load method of data table internally advances to the next result set
    'therefore, must check to see if reader is closed instead of calling next result
    Do
        Dim oTable As New DataTable("Table")
        oTable.Load(oReader)
        oTable.WriteXml("C:\" + Environment.TickCount.ToString + ".xml")
        oTable.Dispose()
    Loop While oReader.IsClosed = False

    'must re-open the connection
    Me.SelectCommand.Connection.Open()

    'reload data reader
    oReader = Me.SelectCommand.ExecuteReader(CommandBehavior.CloseConnection Or CommandBehavior.KeyInfo)
#End If

Do
    Dim oSchemaTable As DataTable = oReader.GetSchemaTable
    '!!!IMPORTANT!!! PopulateTable expects the result sets in a specific order
    ' Therefore, if you suddenly start getting exceptions that only a novice would make
    ' the stored procedure has been changed!
    PopulateTable(oReader, oDatabaseTable, _includeHiddenFields)
    fContainsNextResult = oReader.NextResult
Loop While fContainsNextResult
Because you're explicitly stating the order in which to execute the SQL statements, the results will appear in that same order. In any case, if you want to programmatically determine which recordset you're processing, you still have to identify some columns in the result.