Mirth Database Reader failed to process row retrieved from the database in channel (index out of range)?

I have a Mirth (v3.10) Database Reader channel source that grabs some test records (from an SQL Server source) using the query...
select *
from [mydb].[dbo].[lab_test_MIRTHTEST_001]
where orc_2_1_placer_order_number
in (
'testid_001', 'testid_002', 'testid_003'
)
Even though the channel appears to function properly and messages are getting written to the channel destination, I am seeing SQL errors in the server logs in the dashboard when deploying the channel:
[2020-12-16 08:16:28,266] ERROR (com.mirth.connect.connectors.jdbc.DatabaseReceiver:268): Failed to process row retrieved from the database in channel "MSSQL2SFTP_TEST"
com.mirth.connect.connectors.jdbc.DatabaseReceiverException: com.microsoft.sqlserver.jdbc.SQLServerException: The index 1 is out of range.
at com.mirth.connect.connectors.jdbc.DatabaseReceiverQuery.runPostProcess(DatabaseReceiverQuery.java:233)
at com.mirth.connect.connectors.jdbc.DatabaseReceiver.processRecord(DatabaseReceiver.java:260)
...
I can run this query fine in SQL Server Management Studio itself (and the messages seem to be transmitting fine), so I'm not sure why this error is popping up, but I'm concerned there is something I'm missing here.
Does anyone with more experience know what is going on here, and how do I fix it?

The issue looks to be in the post-process SQL section of the Database Reader (the stack trace points at runPostProcess), which is why the messages themselves still appear to go through fine.
Did you intend to enable the post-process section at the bottom of your source tab?
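For reference, a sketch of the kind of statement that section expects, assuming a hypothetical processed flag column on your source table (the column name is illustrative, not from your channel):
UPDATE [mydb].[dbo].[lab_test_MIRTHTEST_001]
SET processed = 1
WHERE orc_2_1_placer_order_number = ${orc_2_1_placer_order_number}
Here ${orc_2_1_placer_order_number} is substituted from the row just read. "The index 1 is out of range" is typically SQL Server's complaint that a parameter was bound to a statement with no matching placeholder, which fits a post-process statement that is enabled but empty or missing its ${} variables.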

Kindly share the code that you are using to process data in the result set. In the meantime, you can consider the code below as a starting point. You can place this in a JavaScript transformer step in the source connector of your channel.
// Declare variables to hold column values returned from the result set
// (variable1/variable2 are placeholders for your actual column names)
var variable1;
var variable2;
// Define the SQL read command
var query = "select * from [mydb].[dbo].[lab_test_MIRTHTEST_001]";
query += " where orc_2_1_placer_order_number in";
query += " ('testid_001', 'testid_002', 'testid_003')";
// Create the database connection first; the driver, URL, and credentials
// here are placeholders for your own
var dbconn = DatabaseConnectionFactory.createDatabaseConnection(
    'com.microsoft.sqlserver.jdbc.SQLServerDriver',
    'jdbc:sqlserver://yourserver:1433;databaseName=mydb',
    'user', 'password');
try {
    var result = dbconn.executeCachedQuery(query);
    // Loop through the results; each iteration overwrites the variables,
    // so after the loop they hold the values from the last row
    while (result.next()) {
        variable1 = result.getString("variable1");
        variable2 = result.getString("variable2");
    }
} finally {
    // Always release the connection
    dbconn.close();
}
// Optionally place the returned values in a channel map for use later
$c('variable1', variable1);
$c('variable2', variable2);

Related

MATLAB error when connecting to PostgreSQL database

I'm trying to connect to a PostgreSQL database with the following command:
connection = database( ...
options.getDatabaseName(), ...
options.getUsername(), ...
options.getPassword(), ...
"org.postgresql.Driver", ...
"jdbc:postgresql://" + options.getHostname() + ":" + options.getPort() + "/" + options.getDatabaseName() ...
);
It returns me following error:
Error using database (line 59)
Unmatched parameter name 'org.postgresql.Driver' must be a string scalar or character vector that can represent a field name.
I've seen other questions about that, like this one but the error message is different.
What am I doing wrong?
I've found the solution myself, and it's tricky (in my opinion it may be related to a bug).
To test the database connection, I had first created a connection with the Database Explorer. It worked, and I saved this connection using the same name as the database.
By inspecting the source code of the database command, I've seen that the first thing it does is check whether there's an existing data source with that name and, only if there isn't, treat the argument as a database name. The problem was that since my saved connection had the same name as the database, database assumed that I wanted the data source version of the command instead. It tried to use this signature:
conn = database(datasource,username,password)
instead of this one:
conn = database(databasename,username,password,driver,url)
since wtrade is the name of both the database and the data source. In the first form, the fourth argument must be a parameter name, like "Vendor" or "PortNumber", as per the MATLAB documentation, so since the driver string does not match any parameter name, I got the error.
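A minimal sketch of the collision (user, pass, and the host are placeholders; wtrade is from my setup):
% "wtrade" is both a saved data source and the database name
conn = database("wtrade", user, pass);        % resolved as the data-source form
conn = database("wtrade", user, pass, ...
    "org.postgresql.Driver", ...
    "jdbc:postgresql://host:5432/wtrade");
% the second call still matches the data source first, then tries to read
% "org.postgresql.Driver" as a name-value parameter and throws the error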
I removed the data source with the same name as the database and everything started to work.
I've reported this to MathWorks, since in my opinion there should be no problem when a database has the same name as a data source: the signatures are different, so the database command should be able to handle this case as well.

Using VSTS.Feed() in Power BI to access OData

I am trying to use the VSTS.Feed() function in Power BI to read WorkItemSnapshot data. There are multiple problems. If I build the entire URL into a single string and call VSTS.Feed() with that, I get the correct information in Power BI Desktop, but it will not refresh in Power BI online. I have been told to use the (undocumented) Query parameter, as shown below, but it is clear that this parameter is ignored. I can see that the select parameter is ignored on smaller projects, because all columns are returned, and I can see that the filter parameter is ignored because the query fails on larger projects.
Does anyone have a working example of using the Query parameter with VSTS.Feed()?
let
BaseURL = "https://server.analytics.visualstudio.com/DefaultCollection/project/_odata/WorkItemSnapshot",
Select = "DateSK,WorkItemId,State,WorkItemType",
Filter = "WorkItemType eq Bug and State ne Closed and State ne Removed and DateSK ge 20180517 and DateSK le 20180615",
Source = VSTS.Feed(BaseURL, [Query=[select=#"Select",filter=#"Filter"]])
in
Source
Update:
With the query above, the message I get is shown below. As I said earlier, it is clearly not using the Filter parameter, and I'm assuming it is not using the Select parameter, either. I can't query everything because there is too much data, and I can't use a filter because I can't figure out a way to get the Options parameter to work. With VSTS.AccountContents, the options parameter works well, but those API endpoints don't use $ in parameter names.
Error: Query result contains 36,788,023 rows and it exceeds maximum allowed size of 300,000. Please reduce the number of records by applying additional filters
Details:
DataSourceKind=Visual Studio Team Services
ActivityId=881f7988-9863-4e03-8375-0489028f28f3
Url=https://server.analytics.visualstudio.com/DefaultCollection/Project/_odata/WorkItemSnapshot
error=Record
The query that started this whole line of questioning is simply one with a variable for a start date.
let
startDate = DateTimeZone.ToText (Date.AddDays(DateTimeZone.UtcNow(), -45), "yyyyMMdd"),
URL = "https://server.analytics.visualstudio.com/DefaultCollection/project/_odata/WorkItemSnapshot?$select=DateSK,WorkItemId,State,WorkItemType&$filter=WorkItemType eq 'Bug' and State ne 'Closed' and State ne 'Removed' and DateSK gt " & startDate,
Source = VSTS.Feed(URL)
in
Source
While this query mostly works in Power BI desktop (the select clause is ignored), the message I get when the data source is refreshed online is:
You can't schedule refresh for this dataset because one or more sources currently don't support refresh.
Discover Data Sources
Query contains unknown or unsupported data sources.
The documentation for VSTS.Feed() contradicts itself, saying both
The VSTS.Feed function has the same arguments, options and return value format as OData.Feed.
and
'VSTS.Feed' provides a subset of the Arguments and Options available through 'OData.Feed'.
To summarize: I know that I can't combine data sources in Power BI. Does VSTS.Feed() support the options parameter? If so, how do I pass a Filter and Select clause to it?
To get the WorkItemSnapshot table, please refer to the query below (note that it uses OData.Feed rather than VSTS.Feed):
let
Source = OData.Feed("https://account.analytics.visualstudio.com/project/_odata/v1.0-preview", null, [Implementation="2.0"]),
WorkItemSnapshot_table = Source{[Name="WorkItemSnapshot",Signature="table"]}[Data]
in
WorkItemSnapshot_table
Note: the URL format should be https://account.analytics.visualstudio.com/project/_odata/v1.0-preview, or https://account.analytics.visualstudio.com/_odata/v1.0-preview.
And you can refer to the documents below:
Connect to VSTS using the Power BI OData feed
Connect using Power Query and Visual Studio Team Services (VSTS) functions
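If you still need the $select/$filter behavior from the original question, one workaround (a sketch, assuming the Analytics endpoint folds these steps; the column names and values are taken from the question) is to filter and select after the feed and let query folding push them into the OData URL:
let
    Source = OData.Feed("https://account.analytics.visualstudio.com/project/_odata/v1.0-preview", null, [Implementation="2.0"]),
    Snapshot = Source{[Name="WorkItemSnapshot",Signature="table"]}[Data],
    Filtered = Table.SelectRows(Snapshot, each [WorkItemType] = "Bug" and [State] <> "Closed" and [State] <> "Removed" and [DateSK] >= 20180517),
    Selected = Table.SelectColumns(Filtered, {"DateSK", "WorkItemId", "State", "WorkItemType"})
in
    Selected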

Bi-directional database syncing for Postgres and Mongodb

Let's say I have a local server running, and I also have an identical server already running on Amazon.
Both servers can CRUD data in their own databases.
Note that the servers use both postgres and mongodb.
Now, when no one is using the Wi-Fi (usually at night), I would like to sync both the postgres and mongodb databases so that all writes made to each database on the server get properly applied to the matching database locally, and vice versa.
I don't want to use Multi-Master because:
MongoDB does not support this architecture itself, so perhaps I will need a complex alternative.
I want to control when and how much I sync both databases.
I do not want to use network bandwidth when others are using the internet.
So can anyone point me in the right direction?
Also, if you can list some tools that solve my problem, that would be very helpful.
Thanks.
We have several drivers that would be able to help you with this process. I'm presuming some knowledge of software development here, and will showcase our ADO.NET Provider for MongoDB, which uses the familiar-looking MongoDBConnection, MongoDBCommand, and MongoDBDataReader objects.
First, you'll want to create your connection string for connecting with your cloud MongoDB instance:
string connString = "Auth Database=test;Database=test;Password=test;Port=27117;Server=http://clouddbaddress;User=test;Flatten Objects=false";
You'll note that we have the Flatten Objects property set to false; this ensures that any JSON/BSON objects contained in the documents will be returned as raw JSON/BSON.
After you create the connection string, you can establish the connection and read data from the database. You'll want to store the returned data in some way that lets you access it easily for future use.
List<string> columns = new List<string>();
List<object> values;
List<List<object>> rows = new List<List<object>>();
using (MongoDBConnection conn = new MongoDBConnection(connString))
{
    // create a WHERE clause that will limit the results to newly added documents
    MongoDBCommand cmd = new MongoDBCommand("SELECT * FROM SomeTable WHERE ...", conn);
    MongoDBDataReader rdr = cmd.ExecuteReader();
    int results = 0;
    while (rdr.Read())
    {
        values = new List<object>();
        for (int i = 0; i < rdr.FieldCount; i++)
        {
            // capture the column names once, on the first row
            if (results == 0)
                columns.Add(rdr.GetName(i));
            values.Add(rdr.GetValue(i));
        }
        rows.Add(values);
        results++;
    }
}
After you've collected all of the data for each of the objects that you want to replicate, you can configure a new connection to your local MongoDB instance and build queries to insert the new documents.
connString = "Auth Database=testSync;Database=testSync;Password=testSync;Port=27117;Server=localhost;User=testSync;Flatten Objects=false";
using (MongoDBConnection conn = new MongoDBConnection(connString))
{
    // build a comma-separated string of column names once, and a matching
    // comma-separated string of values for each row (naive quoting for
    // illustration only; real code must handle types and escaping)
    string column_names = string.Join(", ", columns);
    foreach (var row in rows)
    {
        string column_values = string.Join(", ", row.Select(v => "'" + v + "'")); // requires using System.Linq
        string sqlInsert = "INSERT INTO backup_table (" + column_names + ") VALUES (" + column_values + ")";
        MongoDBCommand cmd = new MongoDBCommand(sqlInsert, conn);
        cmd.ExecuteNonQuery();
    }
}
At this point, you'll have inserted all of the new documents. You could then change your filter (the WHERE clause at the beginning) to filter based on updated date/time and update their corresponding entries in the local MongoDB instance using the UPDATE command.
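A rough sketch of that update pass, in the same style as above (the status and _id column names and the idValue variable are placeholders, and the same quoting caveats apply):
// re-using the local connection from the insert loop
string sqlUpdate = "UPDATE backup_table SET status = 'processed' WHERE _id = '" + idValue + "'";
MongoDBCommand updateCmd = new MongoDBCommand(sqlUpdate, conn);
updateCmd.ExecuteNonQuery();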
Things to look out for:
Be sure that you're properly filtering out new/updated entries.
Be sure that you're properly interpreting the type of variable so that you properly surround with quotes (or not) when entering the values in the SQL query.
We have a few drivers that might be useful to you. I demonstrated the ADO.NET Provider above, but we also have a driver for writing apps in Xamarin and a JDBC driver (for Java).

Code first migration for a SQL Server CE database file

MigrateDatabaseToLatestVersion is used, and the database stored within SQL Server Express is updated.
When opening a locally stored .sdf file (SQL Server CE database) with a valid path and file name, however, that file is not updated.
Database.SetInitializer(new MigrateDatabaseToLatestVersion<DTDataContext, Configuration>());
var connection = DTDataContext.GetConnectionSqlServerCE40(fullPathName);
dataBaseContext = new DTDataContext(connection, true);
dataBaseContext.Database.Initialize(true);
The MigrationHistory entries will be made in SQL Server Express and not in the local SQL Server CE database file.
What would be the easiest way to update a local SQL Server CE database file?
After a few experiments, an adequate solution was found (one which fits my purpose).
The question was focused on the old sdf(s) that were previously written, but with an older model than the current code.
I decided not to migrate old files (which serve as a kind of backup).
Only reading will be done from those files. Obviously, it is possible that newer sdf(s) will be read once in the future, but that's not a big deal.
Before reading data for an entity that might not exist (in an sdf), the presence of its table is checked via SqlQuery and count(*).
[System.Diagnostics.CodeAnalysis.SuppressMessage( "Microsoft.Design", "CA1031:DoNotCatchGeneralExceptionTypes" )]
private bool TestIfTableExists( string tableName, DTDataContext dataContext )
{
    try
    {
        int cnt = dataContext.Database.SqlQuery<int>( "select count(*) from " + tableName ).First();
        // note: an existing but empty table also yields false, which is fine
        // here since these files are only read
        return cnt > 0;
    }
    catch( Exception )
    {
        // the available SqlCeException assembly does not fit --- table does not exist
        return false;
    }
}
By the way: when catching SqlCeException (v3.5), which could be added as a reference via the assembly search, the above situation would fail (unhandled exception!). I have not tested it with v4 because I want to avoid a 'manual' reference, since it would have to be checked in (no need for any path problems on other workstations).
Concerning writing an sdf:
When writing a new sdf with the current model, this is not a problem at all.
Database.CreateIfNotExists() was applied.
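A minimal sketch of that write path, reusing the GetConnectionSqlServerCE40 helper from the question (the initializer choice and the newFilePathName variable are assumptions for illustration):
// create a fresh sdf matching the current model
Database.SetInitializer( new CreateDatabaseIfNotExists<DTDataContext>() );
using( var context = new DTDataContext( DTDataContext.GetConnectionSqlServerCE40( newFilePathName ), true ) )
{
    context.Database.CreateIfNotExists();
}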
In my case, updating an sdf was not necessary --- and a quick solution for that was not found.

How to connect Excel to MS SQL and get data WITH column names?

One of my users wants to get data into Excel from a SQL Server 2008 query/stored proc.
I never actually did it before.
I tried a sample using ADO and got data, but the user reasonably asked: where are the column names?
How do I connect a spreadsheet to a SQL result set and get it with column names?
Apparently the field names are in the recordset object already... I just needed to pull them out.
Dim i As Integer
Dim objField As ADODB.Field
i = 1
For Each objField In rs.Fields
    Sheet1.Cells(1, i) = objField.Name
    i = i + 1
Next objField
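For context, a minimal end-to-end sketch (the connection string, server, and procedure names are placeholders; it assumes a reference to Microsoft ActiveX Data Objects):
Dim conn As New ADODB.Connection
Dim rs As ADODB.Recordset
Dim objField As ADODB.Field
Dim i As Integer
conn.Open "Provider=SQLOLEDB;Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI;"
Set rs = conn.Execute("exec dbo.MyProc")
' header row from the field names, as above
i = 1
For Each objField In rs.Fields
    Sheet1.Cells(1, i) = objField.Name
    i = i + 1
Next objField
' dump the data rows beneath the headers
Sheet1.Range("A2").CopyFromRecordset rs
rs.Close
conn.Close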
I don't know which version of Excel you are using, but in Excel 2007 you can just connect to the SQL DB by going to Data -> From Other Sources -> From SQL Server. After you select your server and database, your connection will be created. Then you can edit it (Data -> Connections -> Properties): in the Definition tab, change the Command type to SQL and enter your query in the Command text box. You can also create a view on the server and just point to that from Excel.
This should do it unless I misunderstood your question.