Mirth Connect sends old messages when changing servers

I have a Mirth application installed on an Ubuntu server. I am trying to move the application from one server to another (a DRC server). After I moved the application, Mirth somehow keeps sending old messages to the channel.
The source connector of the sending channel is a Database Reader, and the connector type for the destinations is a TCP Sender. I'm using Mirth Connect version 3.5.2.
Does anyone know why this is happening? Are there any log files that I need to clear when moving the application from one server to another?

This can happen for several reasons: application logic, queued messages, etc. My guess is that you moved the appdata directory along with the installation; if so, you should be seeing stats similar to those on the server you moved from.
Mirth stores all channel information, transactions, etc. under the appdata folder by default. If you are using the default settings, it will use a Derby DB. You can connect to that DB with any DB client that supports JDBC, e.g. SQuirreL or DbVisualizer, and that can give you an idea of what's happening.
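For example, with Mirth stopped (the embedded Derby store only allows one process at a time), you can point a JDBC client at the appdata store roughly like this; the install path is just an assumption:

    Driver class:  org.apache.derby.jdbc.EmbeddedDriver
    JDBC URL:      jdbc:derby:/opt/mirthconnect/appdata/mirthdb

From there you can browse the channel and message tables and compare what is queued against what you expected to move.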
I recommend you do a clean setup. Then you can export/import your channels into your new environment. You can also consider using another DB for Mirth (Oracle, SQL Server, MySQL, ...). The current version is 3.9.10, and it has better support for DBs other than Derby.
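For reference, the backend database is selected in conf/mirth.properties; a sketch of switching to PostgreSQL might look like this (the URL and credentials are illustrative):

    database = postgres
    database.url = jdbc:postgresql://localhost:5432/mirthdb
    database.username = mirth
    database.password = secret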
As mentioned in the comments, your application logic also matters.

Related

How to take a backup of the Tableau Server repository (PostgreSQL)

We are using version 2018.3 of Tableau Server. Server stats such as user logins are logged into the PostgreSQL DB, and they are cleared regularly after one week.
Is there any API available in Tableau to connect to the DB and take a backup of the data somewhere like HDFS or any place on a Linux server?
Kindly let me know if there are any ways other than an API as well.
Thanks.
You can enable access to the underlying PostgreSQL repository database with the tsm command. Here is a link to the documentation for your (older) version of Tableau:
https://help.tableau.com/v2018.3/server/en-us/cli_data-access.htm#repository-access-enable
It would be good security practice to limit access to only the (whitelisted) machines that need it, to create or use an existing read-only account to access the repository, and ideally to disable access when your admin programs are complete (i.e. enable access, do your query, disable access).
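A minimal sketch of that enable/query/disable cycle (the password is a placeholder; check the linked docs for the exact flags in your version):

    tsm data-access repository-access enable --repository-username readonly --repository-password <PASSWORD>
    # ... run your queries from a whitelisted machine ...
    tsm data-access repository-access disable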
This way you can have any SQL client you wish query the repository, create a mirror, create reports, run auditing procedures - whatever you like.
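For instance, once access is enabled, any PostgreSQL client can reach the repository (the database is called workgroup and listens on port 8060); the hostname and query below are only illustrations:

    psql -h your-tableau-server -p 8060 -d workgroup -U readonly \
         -c "SELECT * FROM historical_events ORDER BY created_at DESC LIMIT 10;"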
Personally, before writing significant custom code, I'd first see if the info you want is already available another way: in one of the built-in admin views, via the REST API, using the public-domain LogShark or TabMon systems, with the Server Management Add-on (for more recent versions of Tableau), or possibly the new Data Catalog.
I know at least one server admin who somehow clones the whole Postgres repository database periodically so he can analyze stats offline; I'm not sure what approach he uses to clone it. So you have several options.

How to replicate a PostgreSQL database from a local machine to a web server

I am new to the forum and also new to PostgreSQL.
Normally I use MySQL for my projects, but I've decided to start migrating towards PostgreSQL for some valid reasons I found in this database.
Expanding on the problem:
I need to analyze data via some mathematical formulas, but in order to do this I need to get the data from the software via its API.
The software, the API and PostgreSQL v11.4, which I installed on a desktop, are running on Windows. So far I've managed to take the data via the API and import it into PostgreSQL.
My problem is how to transfer this data from the local PostgreSQL (on the PC) to a web PostgreSQL (installed on a web server) running Linux.
For example, if I take the data every five minutes from the software via the API and put it in the local PostgreSQL DB, how can I transfer this data (automatically, if possible) to the DB on the web server running Linux? I rejected a data dump, because importing the whole DB every time is not viable.
What I would like is to import only the five-minute data, which is gradually added to the previous data.
I also rejected the idea of a master-slave architecture because I don't know the total amount of data: on the web server I have almost 2 TB of hard disk, while the local PC has only one hard disk, which serves only to take the data and then send it to the web server for the analysis.
Could someone please help by giving some good advice on how to achieve this objective?
Thanks to all for any answers.
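(As an aside, one common shape for this kind of incremental transfer - sketched here under the assumption of a readings table with an indexed reading_time column on both servers; all names are made up - is a cron job that exports only the newest slice locally and appends it remotely:

    # run every five minutes, e.g. from cron
    psql -d localdb -c "\copy (SELECT * FROM readings WHERE reading_time > now() - interval '5 minutes') TO 'delta.csv' CSV"
    psql -h web-server.example.com -d webdb -c "\copy readings FROM 'delta.csv' CSV"

Storing a high-water mark in a table is more robust than the now() arithmetic, but the idea is the same.)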

WildFly won't deploy when a datasource is unavailable

I am using wildfly-8.2.0.Final.
There are several databases that I have to connect to. However, some of them are only used for certain functionality in the web application, and they do not need to be online all the time. So when WildFly starts, some of the datasources may not be online. However, a lost connection to any datasource causes WildFly to not deploy the .war deployment, and I cannot find any way to solve this problem. Is there a way?
UPDATE:
I have a single table on a remote database server. The user will be able to query the table via my web application. The thing is, I have almost no control over the mentioned database. When the web application starts, the database could be offline, and this would cause my web application to fail to start. What I want is to be able to run queries on the remote database if it is online. If it is offline, the web page may fail or the query can be canceled. The only thing I don't want is for my web application to be limited by a remote database that I may have no control over.
My previous solution was a workaround: I would run queries on the remote database via a local database that has a foreign table pointing to the remote one. However, on PostgreSQL 9.5 the local database reads all the data in the remote table before applying any constraints. As the remote table has a large number of rows and I am using lazy loading, a single query takes so long that it defeats the whole purpose of lazy loading.
I found a similar question, but there is no answer.
On WildFly, you can configure a datasource so that it tries to reconnect periodically when it disconnects. In my case, though, the deployment has to succeed initially for this to be helpful.
The deployment will fail if it references those datasources.
Also, you could define those datasources but leave them disabled; a sketch follows below.
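For illustration, a datasource defined but disabled in standalone.xml could look roughly like this (the JNDI name, URL and credentials are placeholders), and it can be enabled from the CLI once the remote database is reachable:

    <datasource jndi-name="java:jboss/datasources/RemoteDS" pool-name="RemoteDS" enabled="false">
        <connection-url>jdbc:postgresql://remote-host:5432/mydb</connection-url>
        <driver>postgresql</driver>
        <security>
            <user-name>appuser</user-name>
            <password>secret</password>
        </security>
    </datasource>

    # jboss-cli.sh, once the remote database is back online:
    /subsystem=datasources/data-source=RemoteDS:enable

Keep in mind that a deployment which declares a dependency on that JNDI name will still fail while the datasource is disabled.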

Use Cygnus to store historical data from Orion ContextBroker in a local Hadoop database

We are currently working on a project where we use Orion Context Broker to store information from different sensors and Wirecloud to show it on a web page.
We want to store historical data from these sensors in order to show it in a graph. I have looked around the FIWARE documentation, and they recommend storing the data in a Cosmos instance of FI-LAB, through Cygnus.
The thing is that we would like to store that historical data on a local Hadoop-based server we have in our company, not in Cosmos, because we are running this project on a local network where we don't have internet access, and also to keep that information stored on our local server.
Is it possible to configure Cygnus to redirect the output data to my file system? If so, which files must be configured in order to achieve this?
Thank you
The answer is yes. Cygnus is meant to persist context data in any HDFS-based filesystem (such as the one used by Cosmos), so nothing special has to be done when configuring Cygnus.
If you download the latest version (0.7.0 at the moment of writing), you will need to configure:
A cygnus_instance_default.conf file, created from cygnus_instance.conf.template. This is the instance configuration. From 0.7.1 on it is possible to have multiple instance configurations that are run in parallel, and they all have to be called cygnus_instance_<whatever>.conf.
An agent.conf file, created from agent.conf.template. This is the Flume-specific configuration that you will find described in the README.md.
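Purely as an illustration (the sink class and key names moved around between early Cygnus releases, so copy the exact names from the agent.conf.template that ships with your version), the HDFS sink in agent.conf can be pointed at your own cluster instead of Cosmos:

    cygnusagent.sinks = hdfs-sink
    cygnusagent.sinks.hdfs-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink
    # host and port of your local HDFS (HttpFS) endpoint, not Cosmos
    cygnusagent.sinks.hdfs-sink.cosmos_host = namenode.local
    cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
    cygnusagent.sinks.hdfs-sink.cosmos_default_username = hdfsuser
    cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs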

Using Entity Framework with Informix

I've been trying for quite some time to use Entity Framework with our IBM Informix databases. Hours of searching pointed me towards installing the IBM .NET Data Server Provider, which I have installed; however, when I attempt to add a new Entity Model to my project, I only have the Microsoft SQL Server data providers listed. Am I missing a step? Is this even possible?
I am not an expert on Windows or .NET; treat any comments I make with due caution.
Installing the .NET Data Server Provider is an important first step. You now have to make sure that you can use it to connect to the Informix databases you want to manipulate. There are several things you'll need to check here:
Is the server (meaning the Informix instance) configured to allow DRDA connections?
By default, it probably isn't.
If you're the DBSA (database system administrator), you'll need to check that you've enabled 'drsoctcp' connections on the system, and configured a server alias to use that connection (a sketch follows below).
If you're not the DBSA, you'll need to chat with your DBSA to get the relevant information.
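For orientation, the server-side pieces usually live in two files; the alias, host and port below are made-up examples:

    # $INFORMIXDIR/etc/sqlhosts: add a DRDA listener entry
    ids_drda    drsoctcp    dbhost    9089

    # onconfig: register the alias so the instance starts the listener
    DBSERVERALIASES ids_drda

The instance then has to be restarted (or the listener started dynamically) for the DRDA port to come up.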
Assuming that you have DRDA connectivity enabled at the server side, you then need to ensure you have an appropriately configured ... DSN? Your client code needs to be able to connect to the server.
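If it helps, the Data Server Provider takes a DRDA-style connection string along these lines (all values are placeholders matching the hypothetical listener above):

    Database=mydb;Server=dbhost:9089;UID=informix;PWD=secret;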
There is no reason I'm aware of why it cannot be done. However, I don't know exactly how to guide you step-by-step through any of the above.
You might need to seek assistance from IBM Technical Support.
You would help everyone if you clarified which version of Informix (the DBMS) you have, along with the version information for the platform where it is running (whether Windows or Unix, and the OS version), and which version of the Data Server Provider you are using (and which variant of Windows you are using it on).