I would like to use JBoss/XA transactions with the Database Connector component in Mule 3.7.
But an exception is thrown: Transactional action is ALWAYS_JOIN but there is no active transaction (java.lang.IllegalStateException).
My transactional scenario is:
Poll + Database component - select from table A
VM component - just to start a transaction (ALWAYS_BEGIN)
Database component - MySQL - insert into table B
Database component - Oracle - insert into table C
We must ensure that both inserts either commit together or roll back together.
The XML configuration follows:
<jbossts:transaction-manager doc:name="JBoss Transaction Manager">
<property key="com.arjuna.ats.arjuna.coordinator.defaultTimeout" value="50" />
<property key="com.arjuna.ats.arjuna.coordinator.txReaperTimeout" value="108000"/><
</jbossts:transaction-manager>
<spring:beans>
<spring:bean id="oraDataSource" class="oracle.ucp.jdbc.PoolXADataSourceImpl" name="Bean">
<spring:property name="URL" value="jdbc:oracle:thin:#//${db.host}:${db.port}/${db.instance}"/>
<spring:property name="user" value="${db.user}"/>
<spring:property name="password" value="${db.password}"/>
<spring:property name="connectionFactoryClassName" value="oracle.jdbc.xa.client.OracleXADataSource"/>
<spring:property name="minPoolSize" value="1"/>
<spring:property name="maxPoolSize" value="20"/>
<spring:property name="connectionWaitTimeout" value="30"/>
</spring:bean>
<spring:bean id="mysqlDataSource" class="com.mysql.jdbc.jdbc2.optional.MysqlXADataSource">
<spring:property name="url" value="jdbc:mysql://${mysql.host}:${mysql.port}/${mysql.instance}?user=${mysql.user}&amp;password=${mysql.password}" />
</spring:bean>
</spring:beans>
<db:oracle-config name="Oracle_Configuration" doc:name="Oracle Configuration Bean" dataSource-ref="oraDataSource"/>
<db:mysql-config name="MySQL_Configuration" doc:name="MySQL Configuration Bean" dataSource-ref="mysqlDataSource"/>
<vm:connector name="VM" validateConnections="true" doc:name="VM"/>
<flow name="propostaFlow" processingStrategy="synchronous">
<poll doc:name="Poll">
<fixed-frequency-scheduler frequency="1000"/>
<watermark variable="carimboTempo" default-expression="2016-01-01 00:00:00" selector="MAX" selector-expression="#[payload.date_modified]"/>
<db:select config-ref="MySQL_Configuration" doc:name="Database Proposta">
<db:parameterized-query><![CDATA[select a.id, a.number, a.date_modified from table_a a where a.date_modified > #[flowVars.carimboTempo]]]></db:parameterized-query>
</db:select>
</poll>
<foreach doc:name="For Each - Proposta">
<vm:outbound-endpoint exchange-pattern="one-way" path="in" connector-ref="VM" doc:name="VM">
<xa-transaction action="ALWAYS_BEGIN" timeout="10000"/>
</vm:outbound-endpoint>
<enricher target="#[flowVars.resultadoInsert1]" doc:name="Message Enricher">
<db:insert config-ref="Oracle_Configuration" transactionalAction="ALWAYS_JOIN" doc:name="Database 1">
<db:parameterized-query><![CDATA[insert into table_b(ID, NAME) values(#[payload.id],#[payload.name])]]></db:parameterized-query>
</db:insert>
</enricher>
<db:insert config-ref="MySQL_Configuration" transactionalAction="ALWAYS_JOIN" doc:name="Database 2">
<db:parameterized-query><![CDATA[insert into table_c(ID, NAME) values(#[payload.id],#[payload.name])]]></db:parameterized-query>
</db:insert>
</foreach>
</flow>
IMPORTANT: We are using Mule 3.7.0 CE. We know that in Mule EE a solution is very easy with <transaction> and XA.
Questions:
Did we do something wrong?
Is the Database Connector component aware of the JBoss/XA transaction?
Is using a VM component the correct way to start an XA transaction?
Finally, is what we want to do, embedding an XA transaction in Mule CE, really possible?
Thanks!
Based on the link you provided, since you use the driver classes oracle.jdbc.xa.client.OracleXADataSource and com.mysql.jdbc.jdbc2.optional.MysqlXADataSource, you need to update the configuration as follows:
<jdbc:inbound-endpoint queryKey="selectQuery"
connector-ref="jdbcConnectorSource" pollingFrequency="10000">
<xa-transaction action="ALWAYS_BEGIN" />
</jdbc:inbound-endpoint>
<jdbc:outbound-endpoint queryKey="insert_call"
connector-ref="jdbcConnectorDest">
<xa-transaction action="ALWAYS_JOIN" />
</jdbc:outbound-endpoint>
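For reference, here is a minimal sketch of how the jdbcConnectorSource and jdbcConnectorDest connectors referenced above could be defined against the XA data sources from the question, assuming the Mule 3 JDBC transport; the query texts, keys, and parameter expressions are placeholders and depend on the transport version you have available in CE:
<jdbc:connector name="jdbcConnectorSource" dataSource-ref="mysqlDataSource">
<!-- selectQuery is the key referenced by the inbound endpoint above -->
<jdbc:query key="selectQuery" value="select id, number, date_modified from table_a"/>
</jdbc:connector>
<jdbc:connector name="jdbcConnectorDest" dataSource-ref="oraDataSource">
<!-- insert_call is the key referenced by the outbound endpoint above -->
<jdbc:query key="insert_call" value="insert into table_b(ID, NAME) values(#[payload.id], #[payload.name])"/>
</jdbc:connector>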
Based on my experience, I have found that the only way to use XA transactions is to use Mule EE.
You have to include your transactional operations (for example a database update and JMS publishing) in this block:
<ee:xa-transactional action="ALWAYS_BEGIN" doc:name="Transactional">
and be sure to use a database XA datasource, a JMS XA connection factory, and a transaction manager like this:
<jbossts:transaction-manager doc:name="JBoss Transaction Manager" />
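Putting those pieces together, a minimal EE flow could look like the sketch below; it reuses the database configurations from the question, the flow name is made up, and it only works on Mule EE:
<flow name="xaInsertFlow">
<!-- message source (e.g. a VM inbound endpoint) omitted -->
<ee:xa-transactional action="ALWAYS_BEGIN" doc:name="Transactional">
<db:insert config-ref="Oracle_Configuration" transactionalAction="ALWAYS_JOIN" doc:name="Insert table B">
<db:parameterized-query><![CDATA[insert into table_b(ID, NAME) values(#[payload.id], #[payload.name])]]></db:parameterized-query>
</db:insert>
<db:insert config-ref="MySQL_Configuration" transactionalAction="ALWAYS_JOIN" doc:name="Insert table C">
<db:parameterized-query><![CDATA[insert into table_c(ID, NAME) values(#[payload.id], #[payload.name])]]></db:parameterized-query>
</db:insert>
</ee:xa-transactional>
</flow>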
I've noticed this difference in Anypoint Studio between CE and EE: in the Mule CE transactional block you can only specify the transaction action, whereas in Mule EE you can specify both the transaction action and the transaction type.
Francesco.
We are using a standalone-vdb.xml deployment to create a VDB and then make it accessible through Jupiter for other users.
Now, based on the example XML file below, we created the view "customer_view"
from the table "Export2.customer_table", and both are accessible from Jupiter.
However, we only want the views to be accessible, not the physical tables.
Which property can be used to hide the tables and expose only the views to the end user?
Does anyone have a clue which property can do that? I tried to find it in the documentation but couldn't find any mention of it.
We are using WildFly Full 17.0.1 through the HAL management interface in a Docker container environment with a PostgreSQL database.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<vdb name="stock" version="1">
<description>The VDB</description>
<property name="UseConnectorMetadata" value="true" />
<model visible="true" name="Export2">
<property name="importer.useFullSchemaName" value="false"/>
<property name="importer.schemaPattern" value="public"/>
<property name="importer.tableTypes" value="TABLES,VIEW"/>
<source name="stockDS" translator-name="postgresql" connection-jndi-name="java:jboss/datasources/stockDS"/>
</model>
<model visible="true" name="Data" type="VIRTUAL">
<metadata type="DDL"><![CDATA[
CREATE VIEW customer_view (
field_names string,
field_description string
) AS
SELECT variable_name, variable_description
FROM Export2.customer_table;
]]> </metadata>
</model>
<data-role name="RoleA" any-authenticated="true">
<description>Allow Reads and Writes to tables and procedures</description>
<permission>
<resource-name>Export2.customer_table</resource-name>
<allow-create>true</allow-create>
<allow-read>true</allow-read>
<allow-update>true</allow-update>
</permission>
<mapped-role-name>Admin</mapped-role-name>
</data-role>
</vdb>
See http://teiid.github.io/teiid-documents/master/content/reference/r_xml-deployment-mode.html
You need to define the model with visibility set to false, like:
<model visible="false" name="Export2">
Note that this will remove the metadata exposure from any APIs; however, if someone knows the schema, they can still use the same connection to issue the query and see the data. If you want to avoid that, then you need to look into data security policies to block any access.
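As a rough illustration of that last point, a data role along the following lines (the role name is made up, and the broad RoleA grant on Export2.customer_table from the question would have to be removed) limits reads to the virtual model only:
<data-role name="ViewsOnly" any-authenticated="true">
<description>Read access to the virtual views, explicit deny on the source model</description>
<permission>
<resource-name>Data</resource-name>
<allow-read>true</allow-read>
</permission>
<permission>
<resource-name>Export2</resource-name>
<allow-read>false</allow-read>
</permission>
</data-role>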
Can anyone help me with how to configure/create a custom data source using WSO2 4.0.2?
Here is a sample wso2-dss-connector for MongoDB (link: https://github.com/wso2/wso2-dss-connectors/tree/master/mongodb). How do I deploy this with WSO2? Building this project produces a JAR, so how do I integrate it with WSO2 to create a custom data source?
I am new to WSO2, and I didn't get a clear picture from the official documentation.
Thanks in advance
If you want a connector for MongoDB, you can build the JAR from the link you mentioned, put it in the DSSHOME/repository/components/dropins folder, and restart the server. Once you have added the JAR to dropins, you can use the following data service descriptor file (.dbs) to test it.
In this example we create a data source called "mongo_ds", and our query, called "mongo_find", will retrieve a set of Documents elements.
<config id="mongo_ds">
<property name="custom_query_datasource_class">org.wso2.dss.connectors.mongodb.MongoDBDataSource</property>
<property name="custom_datasource_props">
<property name="servers">localhost</property>
<property name="database">mydb</property>
</property>
</config>
<query id="mongo_find" useConfig="mongo_ds">
<expression>things.find()</expression>
<result element="Documents" rowName="Document">
<element column="document" name="Data" xsdType="string"/>
</result>
</query>
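To actually expose the query as a service operation, the .dbs file (whose root element is <data>) would also contain something like the block below; the operation name is just an example:
<operation name="findAllThings">
<call-query href="mongo_find"/>
</operation>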
If you want to write custom data sources for other systems, please refer to the following guide.
I am using JasperServer 4.5.0 Pro. I have developed a custom data source for some additional features. All reports that use this custom DS execute properly and show the correct output when run manually. But when the same reports are scheduled using Jasper's report job scheduler, there is a problem with session initiation, and hence the reports do not get executed.
Let me explain this a bit.
For manual execution of reports:
As part of the custom DS, I had to update the following two XML files.
viewReportFlow.xml:
I updated the action state 'runReport' to use our custom DS executer action bean method 'xmlHttpDsExecuterAction.setUpSession' to start the session. Please see the runReport tag below:
<action-state id="runReport" xmlns:b="http://www.springframework.org/schema/webflow" xmlns:xi="http://www.w3.org/2001/XInclude">
<on-entry>
<evaluate expression="xmlHttpDsExecuterAction.setUpSession"/>
</on-entry>
<evaluate expression="viewReportActionBean"/>
<transition on="success" to="reportOutput"/>
<on-exit>
<evaluate expression="xmlHttpDsExecuterPageAction.setIndex"/>
</on-exit>
</action-state>
viewReportBeans.xml:
I defined the executer action beans used in the above flow XML here:
<bean id="xmlHttpDsExecuterAction" class="com.sigma.reporting.xmlhttpds.XmlHttpDsExecuterAction" xmlns:xsi="http://www.w 3.org/2001/XMLSchema-instance"/> <bean id="xmlHttpDsExecuterPageAction" class="com.sigma.reporting.xmlhttpds.XmlHttpDsExecuterPageAction" xmlns:xsi="http://www.w 3.org/2001/XMLSchema-instance">
<property name="requestParameterPageIndex" value="pageIndex"/>
<property name="flowAttributePageIndex" value="pageIndex"/>
<property name="xmlHttpDataSourceName" value="com.sigma.reporting.xmlhttpds.XmlHttpDsExecuterDataSourceService"/>
<property name="repository">
<ref bean="repositoryService"/>
</property>
<property name="jasperPrintName" value="jasperPrintName"/>
<property name="reportUnitObject" value="reportUnitObject"/> </bean>
For job scheduling of reports:
I want to implement the same approach for the scheduler. During my investigation I tried to analyze the scheduler flow and apply our changes, but with no luck so far. Can anyone please let me know which flows are used for running reports via the scheduler, and also recommend where to configure the custom DS as above?
Finally, after understanding the flow of the JasperServer scheduler, I have found the solution.
To set up your custom data source beans and call the function, we need to define the bean in $JASPER_HOME/apache-tomcat/webapps/jasperserver-pro/WEB-INF/flows/reportJobBeans.xml, and we can then use this bean in reportJobFlow.xml in the jobOutput tag like this:
<view-state id="jobOutput" view="modules/reportScheduling/jobOutput">
<on-entry>
<set name="flowScope.prevForm" value="'jobOutput'"/>
<evaluate expression="reportOptionsJobEditAction.setOutputReferenceData"/>
<evaluate expression="xmlHttpDsExecuterAction.setUpSession"/>
</on-entry>
</view-state>
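For reference, the bean added to reportJobBeans.xml can simply mirror the definition already used in viewReportBeans.xml; the id must match the expression evaluated in the flow:
<!-- must match the expression xmlHttpDsExecuterAction.setUpSession used in reportJobFlow.xml -->
<bean id="xmlHttpDsExecuterAction" class="com.sigma.reporting.xmlhttpds.XmlHttpDsExecuterAction"/>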
I am in the process of implementing a REST API server using Apache CXF JAX-RS (v2.30). I am using Spring as the container. I am thinking of making use of org.apache.cxf.jaxrs.ext.RequestHandler to implement a few features like license checks, authentication, and authorization (all of which have custom code). My idea is to segregate this code into individual implementation classes (implementing RequestHandler) and configure them for a base REST URL, something like /rest/*. Being new to Apache CXF and JAX-RS, I want to understand the following things.
Is this approach the right way to implement the features I want?
If yes, is the order in which the RequestHandlers are declared the order of their invocation?
For example, if in my definition I declare:
<beans>
<jaxrs:server id="abcRestService" address="/rest">
<jaxrs:serviceBeans>
<bean class="com.abc.api.rest.service.FooService" />
</jaxrs:serviceBeans>
<jaxrs:providers>
<ref bean="licenseFilter" />
<ref bean="authorizationFilter" />
</jaxrs:providers>
</jaxrs:server>
<bean id="licenseFilter" class="com.abc.api.rest.providers.LicenseValidator">
<!-- License check bean properties -->
</bean>
<bean id="authorizationFilter" class="com.abc.api.rest.providers.AuthorizationFilter">
<!-- authorization bean properties -->
</bean>
</beans>
then will licenseFilter always get invoked before authorizationFilter?
I did not find any mention of the invocation ordering of RequestHandlers or ResponseHandlers.
Thanks in advance.
Figured this out.
The handlers get invoked in the order in which the beans are declared in <jaxrs:providers>. Thus, in the case mentioned in the question, licenseFilter will get invoked before authorizationFilter.
We have a JBoss ESB server which reads files from the file system on a schedule (a frequency of 20 seconds), converts them into ESB messages, and then parses the messages.
There are some other providers/listeners (JMS) and services configured on the ESB server. When there is an error in one of the services, it affects the above process. The file system provider (gateway) works fine, but the jms-listener that takes the gateway messages does not, and lots of messages accumulate in the JBM queue (the jbm_msg Oracle DB table).
Here is the problem: when the server is restarted, messages in the JBM queue are parsed in the ESB for just 20 seconds (the scheduled frequency of the fs-provider); after that messages are never processed again and CPU usage goes up to 100% and stays there. We believe the fs-provider somehow interrupts the jms-provider.
Is there any configuration we have been missing?
Here are the configuration files that we have:
jboss-esb.xml
<?xml version = "1.0" encoding = "UTF-8"?>
<jbossesb xmlns="http://anonsvn.labs.jboss.com/labs/jbossesb/trunk/product/etc/schemas/xml/jbossesb-1.0.1.xsd" parameterReloadSecs="5">
<providers>
<fs-provider name="SitaIstProvider">
<fs-bus busid="gw_sita_ist" >
<fs-message-filter
directory="/ikarussita/IST/IN"
input-suffix=".RCV"
work-suffix=".lck"
post-delete="false"
post-directory="/ikarussita/IST/OK"
post-suffix=".ok"
error-delete="false"
error-directory="/ikarussita/IST/ERR"
error-suffix=".err"/>
</fs-bus>
</fs-provider>
<jms-provider name="SitaESBQueue" connection-factory="ConnectionFactory">
<jms-bus busid="esb_sita_queue">
<jms-message-filter dest-type="QUEUE" dest-name="queue/esb_sita_queue"/>
</jms-bus>
</jms-provider>
</providers>
<services>
<service category="SITA" name="SITA_IST" description="SITA Daemon For ISTCOXH">
<listeners>
<fs-listener name="Sita_Ist_Gateway" busidref="gw_sita_ist" is-gateway="true" schedule-frequency="20" />
<jms-listener name="Jms_Sita_EsbAware" busidref="esb_sita_queue" />
</listeners>
<actions mep="OneWay">
<action name="parse_msg" class="com.celebi.integration.action.sita.inbound.SitaHandler" process="parseMessage" />
<action name="send_ikarus" class="com.celebi.integration.action.ikarus.outbound.fis.FlightJmsSender" />
</actions>
</service>
</services>
</jbossesb>
jbm-queue-service.xml
<?xml version="1.0" encoding="UTF-8"?>
<server>
<mbean code="org.jboss.jms.server.destination.QueueService"
name="jboss.messaging.destination:service=Queue,name=esb_sita_queue"
xmbean-dd="xmdesc/Queue-xmbean.xml">
<depends optional-attribute-name="ServerPeer">jboss.messaging:service=ServerPeer</depends>
<depends>jboss.messaging:service=PostOffice</depends>
</mbean>
</server>
deployment.xml
<jbossesb-deployment>
<depends>jboss.messaging.destination:service=Queue,name=esb_sita_queue</depends>
</jbossesb-deployment>
Thanks
Split the service into two separate services: one handling the JMS queue, the other the file poller. Specify the same action pipeline in both. That way you get the same functionality, but without the threading issue. Also use the max-threads attribute on the listener to specify the number of reading threads. A sketch is shown below.
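A rough sketch of the split, based on the configuration in the question. The new service names, the extra ESB-aware bus for the file-poller service, and the thread count are assumptions; a gateway listener still needs an ESB-aware listener in its own service to deliver to, so esb_sita_file_queue would be an additional jms-bus (with its own queue) declared under the jms-provider:
<service category="SITA" name="SITA_IST_FILE" description="SITA file poller for ISTCOXH">
<listeners>
<fs-listener name="Sita_Ist_Gateway" busidref="gw_sita_ist" is-gateway="true" schedule-frequency="20"/>
<!-- assumed new ESB-aware bus for the gateway to deliver to -->
<jms-listener name="Jms_Sita_File_EsbAware" busidref="esb_sita_file_queue" maxThreads="5"/>
</listeners>
<actions mep="OneWay">
<action name="parse_msg" class="com.celebi.integration.action.sita.inbound.SitaHandler" process="parseMessage"/>
<action name="send_ikarus" class="com.celebi.integration.action.ikarus.outbound.fis.FlightJmsSender"/>
</actions>
</service>
<service category="SITA" name="SITA_IST_JMS" description="SITA JMS consumer for ISTCOXH">
<listeners>
<!-- maxThreads is the thread-count attribute the answer refers to; verify the exact name against your jbossesb XSD -->
<jms-listener name="Jms_Sita_EsbAware" busidref="esb_sita_queue" maxThreads="5"/>
</listeners>
<actions mep="OneWay">
<action name="parse_msg" class="com.celebi.integration.action.sita.inbound.SitaHandler" process="parseMessage"/>
<action name="send_ikarus" class="com.celebi.integration.action.ikarus.outbound.fis.FlightJmsSender"/>
</actions>
</service>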