Mirth Database Reader channel automatically reruns cron query when you delete the message history?

Is this a defect? Does "Remove All Messages" cause this channel type to automatically reprocess?
Create a channel with:
Database Reader Source that
-- runs on a CRON (0 5 * * * ? for example)
-- does not use JavaScript (uses the SQL text block)
-- does not aggregate results
-- does not cache results
File Writer Destination
-- append to file
-- write the SELECT columns out to the file
Then run the channel. After it runs and writes numerous rows to the output file,
go into the Dashboard and use "Remove All Messages". It clears the messages, but then goes right back into polling the DB and rerunning the query, regardless of what the source cron was set to.
This creates duplicates in the output file whenever we clear the dashboard message history. Why?

Edit the channel so that records that have already been read by the Database Reader source connector are flagged in the database. You can achieve this by adding a state flag (e.g. an is_sent column) to the source table: give it a default value of 0, then toggle it to 1 once the row has been pulled by Mirth Connect.
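A minimal sketch of the SQL involved, assuming a hypothetical source_table with an integer key id and the Database Reader's post-process update set to run after each message (so ${id} should refer to the row just read):

-- One-time schema change (exact ALTER TABLE syntax varies by database)
ALTER TABLE source_table ADD is_sent INT DEFAULT 0 NOT NULL;

-- Database Reader SELECT: only pull rows that have not been sent yet
SELECT id, col_a, col_b
FROM source_table
WHERE is_sent = 0;

-- Database Reader post-process UPDATE (run after each message):
-- mark the row so it is never re-read, even after "Remove All Messages"
UPDATE source_table
SET is_sent = 1
WHERE id = ${id};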

Where can I find a complete list of replication slot options in PostgreSQL?

I am working on PG logical replication in Java, and found a demo in the JDBC driver docs:
PGReplicationStream stream =
    replConnection.getReplicationAPI()
        .replicationStream()
        .logical()
        .withSlotName("demo_logical_slot")
        .withSlotOption("include-xids", false)
        .withSlotOption("skip-empty-xacts", true)
        .start();
Then I can parse messages from the stream.
This is enough for some daily needs, but now I want to know the transaction commit time.
With the help of a question on Stack Overflow, I added .withSlotOption("include-timestamp", "on") and it is working.
My question is: where can I find a complete list of the slot options, so we can look them up conveniently instead of searching Google or Stack Overflow?
The available options depend on the logical decoding plugin of the replication slot, which is specified when the replication slot is created.
The example must be using the test_decoding plugin, which is included with PostgreSQL as a contrib module for testing and playing.
The available options for that plugin are not documented, but can be found in its source code (a SQL example of passing these options follows the list):
include-xids: include the transaction number in BEGIN and COMMIT output
include-timestamp: include timestamp information with COMMIT output
force-binary: specifies that the output mode is binary
skip-empty-xacts: don't output anything for transactions that didn't modify the database
only-local: output only data whose replication origin is not set
include-rewrites: include information from table rewrites caused by DDL statements
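For quick experimentation, the same plugin options can also be passed as name/value pairs to the SQL-level logical decoding functions; a small sketch, reusing the slot name from the example above and assuming the test_decoding plugin:

-- Create a slot that uses the test_decoding plugin
SELECT pg_create_logical_replication_slot('demo_logical_slot', 'test_decoding');

-- Peek at pending changes with commit timestamps included and empty transactions skipped
SELECT * FROM pg_logical_slot_peek_changes(
    'demo_logical_slot', NULL, NULL,
    'include-timestamp', 'on',
    'skip-empty-xacts', 'on');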

Talend - Stats and Logs - On database - error

I have a job that inserts data from SQL Server into MySQL. I have set the project settings as follows:
Checked the check boxes for Use statistics (tStatCatcher), Use logs (tLogCatcher), and Use volumetrics (tFlowMeterCatcher).
Selected 'On Databases' and put in the table names
(stats_table, logs_table, flowmeter_table) as well. These tables were created beforehand; their schemas were determined using the tCreateTable component.
The problem is that when I run the job, data is inserted into stats_table but not into flowmeter_table.
My job is as follows:
tMSSqlInput --> tMap --> tMysqlOutput
I have not included tStatCatcher, tLogCatcher, or tFlowMeterCatcher; the stats and logs for this job are taken from the project settings.
My question: why is no data entered in flowmeter_table? Should I include tStatCatcher, tLogCatcher and tFlowMeterCatcher explicitly in the job for it to run fine?
I am using TOS
Thanks in advance
Rathi
Using the flow meter requires you to manually configure the flows you want to monitor.
On every flow you want to monitor, right-click on the row > Parameters > Advanced settings > Monitor connection.
Then you should be able to see data in your flow table.
If you are using the project settings, you don't need to add the *Catcher components to your job.
You need to use the tStatCatcher, tLogCatcher and tFlowMeterCatcher components in the job directly.
The components already have their schema defined, so you just need to put a tMap after them and redirect the rows into the table you want.
Moreover, in order to use tLogCatcher you need to put some tDie or tWarn components in your job.

Trigger update does not work when creating a new request

I am new to Oracle. When I create a new request in Clarity (a project & portfolio management application) or when I change the status of a request, I would like to update the field status to the new value of mb_status_idea.
The following trigger code works well in the case of an update, but if I create a new request, it does not update the status (so status is not equal to status MB).
IF ( :old.mb_status_idea != :new.mb_status_idea )
THEN
  UPDATE inv_investments a
  SET a.status = stat
  WHERE a.id = :new.id;
END IF;
I think the problem is that when creating a new request, the INSERT trigger's :OLD values contain no value, so the condition is not true and it does not update the status.
Note: the field status is in the table INV_INVESTMENTS, stat := :new.mb_status_idea, and the database column for status MB is mb_status_idea.
I also added the condition OR (:old.mb_status_idea IS NULL), but again, when I create a new request the values of "status" and "status MB" are different (the status is not updated).
I do appreciate if someone could help to overcome this problem.
All ideas are highly appreciated,
Mona
With Clarity it is recommended not to use triggers, for a couple of reasons: jobs and processes may change the values of some fields at times other than when edits happen through the application, and you can't control these; triggers can't be used if you use CA hosting services; and triggers have to be removed for upgrades because the upgrade process breaks them.
For this type of action I would recommend using the process engine. You can set up a process to run any time the field is updated. The update could be performed by a custom script or a system action. The system action is fairly straightforward to configure. If you use a custom script, there are examples in the admin bookshelf documentation. You would write a SQL update statement and put it in a GEL script.
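As a rough sketch, the update such a process (or GEL script) would run boils down to the statement below; :new_status and :investment_id are hypothetical placeholders that the process would supply, not built-in Clarity parameters:

-- Hypothetical update executed by the process:
-- :new_status carries the current value of mb_status_idea,
-- :investment_id the internal id of the request
UPDATE inv_investments
SET status = :new_status
WHERE id = :investment_id;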

Talend load fails midway, resulting in rollback

I have a tFileInputDelimited component and a tMap, and the result is passed to a tFileOutputDelimited which creates a CSV file.
Now, in the middle of the job, the data load sometimes fails, resulting in a rollback of the destination file.
This wastes resources and time.
Can anyone suggest an approach so that once a job fails partway through, the data that has already been processed is saved, and the next time the job runs it starts again from the point of failure?
Talend won't roll back a process when writing with tFileOutputDelimited. If you got an empty output file, it means that your job died prematurely and no records were written to the output buffer.
If an error occurs while writing to the file, then the following code (generated by tFileOutputDelimited) closes the output buffer and flushes the data successfully written before the error:
...
} finally {
    if (outtFileOutputDelimited_1 != null) {
        outtFileOutputDelimited_1.flush();
        outtFileOutputDelimited_1.close();
    }
    ...
}
...
There's no real "resume" feature in Talend, but you can create your own die & resume process in the job as follows:
tFileInput1 ==> tHashOutput
tFileInput2 = main => tMap ==> tFileOutput1
tHashInput =lookup=> tMap
tFileInput1: reads the data generated by the last run of your job, which is stored in memory with tHashOutput
tFileInput2: reads your input file
tFileOutput1: stores the output data
tHashInput: reads the data in memory and serves as a lookup in the tMap
In your tMap, create an inner join between the main flow (tFileInput2) and the tHashInput lookup. Then, for your output schema, select "catch lookup inner join reject" to process all the records that are not in tHashInput.
Not sure that it will save resources and time. The best way to manage errors is to identify them and do all the checks in the job to avoid them!
For more clarity, could you give an example of an error that occurs when you run the job?

Why doesn't Mirth recognize the changes I have made to my channel?

I added a Database Writer destination to a working Mirth channel. The destination is not writing to the table like it is supposed to, but it is not generating errors on the dashboard. I'm not really sure how to get it to work.
Here are the steps I have taken so far:
changed the name of the table to a non-existent table // does not generate an error, suggesting that it does not even recognize the destination
ran Validate Connector (successful)
verified username/password/URL are correct (I even cloned a working Database Writer from the same channel to try to get it to run)
removed all filters (in case it was filtering for some reason)
cloned the same transformer used in another working destination from the same channel
allowed nulls in the SQL Server database in case it was trying to insert nulls
disabled/enabled channels, started/restarted Mirth, opened/closed SQL Server
I am not really sure what else there is to do. Any suggestions?
You have to click "Deploy All Channels" in the Channels menu in order for Mirth to launch the modified version of a channel after you make changes to it. Then you may have to start the channels in the Dashboard too. That got my channel working.