Error: Could not find or load main class org.apache.tez.dag.app.DAGAppMaster - apache-tez

I have installed Tez and want to run the example like this:
hadoop jar tez-examples-0.10.1.jar orderedwordcount /input /output
but it does not work. The log is:
Log Type: stderr
Log Upload Time: Thu May 12 13:19:25 +0800 2022
Log Length: 77
Error: Could not find or load main class org.apache.tez.dag.app.DAGAppMaster
Log Type: stdout
Log Upload Time: Thu May 12 13:19:25 +0800 2022
Log Length: 716
Heap
PSYoungGen total 17920K, used 921K [0x00000000eef00000, 0x00000000f0300000, 0x0000000100000000)
eden space 15360K, 6% used [0x00000000eef00000,0x00000000eefe67a8,0x00000000efe00000)
from space 2560K, 0% used [0x00000000f0080000,0x00000000f0080000,0x00000000f0300000)
to space 2560K, 0% used [0x00000000efe00000,0x00000000efe00000,0x00000000f0080000)
ParOldGen total 40960K, used 0K [0x00000000ccc00000, 0x00000000cf400000, 0x00000000eef00000)
object space 40960K, 0% used [0x00000000ccc00000,0x00000000ccc00000,0x00000000cf400000)
Metaspace used 2541K, capacity 4480K, committed 4480K, reserved 1056768K
class space used 283K, capacity 384K, committed 384K, reserved 1048576K
my_env.sh is
#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_331
export PATH=$PATH:$JAVA_HOME/bin
#HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-3.3.2
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
#HIVE_HOME
export HIVE_HOME=/opt/module/hive-3.1.3
export PATH=$PATH:$HIVE_HOME/bin
#MAVEN_HOME
export MAVEN_HOME=/opt/module/maven-3.8.5
export PATH=$PATH:$MAVEN_HOME/bin
#TEZ_HOME
export TEZ_HOME=/opt/module/tez-0.10.1
export HADOOP_CLASSPATH=${TEZ_HOME}/conf:${TEZ_HOME}/*:${TEZ_HOME}/lib/*
tez-site.xml is
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>tez.lib.uris</name>
    <value>${fs.defaultFS}/apps/tez/tez-0.10.1.tar.gz</value>
  </property>
  <property>
    <name>tez.use.cluster.hadoop-libs</name>
    <value>false</value>
  </property>
  <property>
    <name>tez.history.logging.service.class</name>
    <value>org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService</value>
  </property>
</configuration>
I have tried the answer at https://issues.apache.org/jira/browse/TEZ-3392 but it does not work.
Please help me resolve this. Thanks in advance!

I had a similar error message when running the Tez example. The installation guide https://tez.apache.org/install.html is not very obvious, but it has some valuable notes.
The application tracking page was helpful: from its logs I found out that the path inside the container, after the Tez archive is decompressed, is incorrect.
The whole archive is decompressed into a directory called ./tezlib, and an excerpt of the CLASSPATH looks like this:
$PWD:$PWD/*:$PWD/tezlib/*:$PWD/tezlib/lib/*
but the archive apache-tez-0.10.1-bin.tar.gz (on my HDFS at /apps/apache-tez-0.10.1-bin.tar.gz) is decompressed inside the container to ./tezlib/apache-tez-0.10.1-bin.
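If you want to reproduce this observation, the YARN container logs include the AM launch context and classpath; a generic way to pull them (the application id below is only a placeholder) is:
# Generic YARN commands: find the failed application, then dump its container logs,
# which include the AM launch context and effective classpath.
yarn application -list -appStates FAILED
yarn logs -applicationId application_1234567890123_0001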
So, after several hours of trial and error, I resolved this issue with the following steps:
tar -xf apache-tez-0.10.1-bin.tar.gz
tar -czf apache-tez-0.10.1-bin-nodir.tar.gz -C apache-tez-0.10.1-bin .
hdfs dfs -copyFromLocal apache-tez-0.10.1-bin-nodir.tar.gz /apps/
The second line above packs the Tez jars into an archive without the parent directory.
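As an optional sanity check (purely illustrative, not a required step), you can list the repacked archive and confirm the jars now sit at the archive root rather than under a version directory:
# The listing should start with ./tez-*.jar and ./lib/, not with apache-tez-0.10.1-bin/.
tar -tzf apache-tez-0.10.1-bin-nodir.tar.gz | head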
After that the Tez example runs without error and finishes successfully.
My tez-site.xml:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>tez.lib.uris</name>
    <value>${fs.defaultFS}/apps/apache-tez-0.10.1-bin-nodir.tar.gz</value>
  </property>
  <property>
    <name>tez.use.cluster.hadoop-libs</name>
    <value>true</value>
  </property>
</configuration>
Of course, there are probably other ways to handle this error with the incorrect path to the jars.
I've tested this on Hadoop 3.2.2 from the Bigtop distribution with Tez 0.10.0 and 0.10.1.

I solved my problem by uploading an uncompressed Tez package to HDFS and changing my tez-site.xml file:
hadoop fs -put tez-0.10.1 /apps/tez
My changed tez-site.xml is below; the main difference is in "tez.lib.uris":
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>tez.lib.uris</name>
    <value>${fs.defaultFS}/apps/tez/tez-0.10.1,${fs.defaultFS}/apps/tez/tez-0.10.1/lib</value>
  </property>
  <property>
    <name>tez.use.cluster.hadoop-libs</name>
    <value>false</value>
  </property>
  <property>
    <name>tez.history.logging.service.class</name>
    <value>org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService</value>
  </property>
  <property>
    <name>tez.am.resource.memory.mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>tez.am.resource.cpu.vcores</name>
    <value>1</value>
  </property>
</configuration>
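As an optional check (assuming the same paths as above), you can confirm the uploaded jars are visible on HDFS and then retry the example command from the question:
# Verify the uncompressed Tez package is present on HDFS.
hdfs dfs -ls /apps/tez/tez-0.10.1
# Re-run the example that failed originally.
hadoop jar tez-examples-0.10.1.jar orderedwordcount /input /output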

Related

Centos OVA file with network configuration during import

I need to make an OVA file with a clean CentOS Stream 8 system that will ask the user for an IP address during import and then set this address inside the VM.
From the information I've found so far, I've managed to edit the OVF file and add this code to it:
<!-- IP ASSIGNMENT SECTION -->
<vmw:IpAssignmentSection ovf:required="false" vmw:protocols="IPv4" vmw:schemes="">
  <Info>Supported IP assignment schemes</Info>
</vmw:IpAssignmentSection>
<!-- END -->
<VirtualSystem ovf:id="vm">
  <Info>A virtual machine</Info>
  <Name>CentOS8</Name>
  <!-- EULA -->
  <EulaSection>
    <Info>End User License Agreement</Info>
    <License>
      TEST
    </License>
  </EulaSection>
  <!-- END EULA -->
  <!-- PRODUCT SECTION -->
  <ProductSection ovf:class="vami" ovf:instance="vm" ovf:required="false">
    <Info>VAMI Properties</Info>
    <Category>Networking Properties</Category>
    <Property ovf:key="gateway" ovf:type="string" ovf:userConfigurable="true">
      <Label>Default Gateway</Label>
      <Description>The default gateway address for this VM. Leave blank if DHCP is desired.</Description>
    </Property>
    <Property ovf:key="DNS" ovf:type="string" ovf:userConfigurable="true">
      <Label>DNS</Label>
      <Description>The domain name servers for this VM (comma separated). Leave blank if DHCP is desired.</Description>
    </Property>
    <Property ovf:key="ip0" ovf:type="string" ovf:userConfigurable="true">
      <Label>Network 1 IP Address</Label>
      <Description>The IP address for this interface. Leave blank if DHCP is desired.</Description>
    </Property>
    <Property ovf:key="netmask0" ovf:type="string" ovf:userConfigurable="true">
      <Label>Network 1 Netmask</Label>
      <Description>The netmask or prefix for this interface. Leave blank if DHCP is desired.</Description>
    </Property>
  </ProductSection>
  <!-- END -->
When I import the OVA file edited in this way, I get the option to enter the IP address, but I don't know how to apply this information inside the VM.
(Screenshot: import of the edited file)
Does anyone know how I can automatically set this static IP address inside the imported VM?
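For illustration only, a common approach (not something verified for this exact setup) is to read the OVF environment inside the guest with VMware Tools at boot and apply the properties from a script. Everything in the sketch below is an assumption: it requires open-vm-tools in the guest, and the connection name and the /24 prefix are placeholders.
# Sketch: read the OVF environment injected at import time (requires open-vm-tools).
OVF_ENV=$(vmtoolsd --cmd "info-get guestinfo.ovfEnv")
# Extract the user-supplied values; the keys match the ProductSection above.
ip=$(echo "$OVF_ENV" | sed -n 's/.*oe:key="ip0"[^>]*oe:value="\([^"]*\)".*/\1/p')
gateway=$(echo "$OVF_ENV" | sed -n 's/.*oe:key="gateway"[^>]*oe:value="\([^"]*\)".*/\1/p')
# Apply the address; the connection name "System eth0" and the /24 prefix are placeholders.
if [ -n "$ip" ]; then
    nmcli connection modify "System eth0" ipv4.method manual ipv4.addresses "$ip/24" ipv4.gateway "$gateway"
    nmcli connection up "System eth0"
fi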

Cannot get openjpa tools to run from the command line

What I did:
download and extract the latest openjpa release (3.0.0)
download the mariadb jdbc driver jar and copy it to the same directory where openjpa-all-3.0.0.jar is located
inside the same directory, create a subdirectory META_INF and a file META-INF/persistence.xml with the following contents:
<?xml version="1.0"?>
<persistence version="1.0">
  <persistence-unit name="openjpa">
    <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
    <properties>
      <property name="openjpa.ConnectionURL" value="jdbc:mariadb://localhost:3306/databasename"/>
      <property name="openjpa.ConnectionDriverName" value="org.mariadb.jdbc.Driver"/>
      <property name="openjpa.ConnectionUserName" value="dbuser"/>
      <property name="openjpa.ConnectionPassword" value="dbpassword"/>
      <property name="openjpa.DynamicEnhancementAgent" value="false"/>
      <property name="openjpa.RuntimeUnenhancedClasses" value="supported"/>
      <property name="openjpa.Log" value="SQL=TRACE"/>
      <property name="openjpa.ConnectionFactoryProperties" value="PrettyPrint=true, PrettyPrintLineLength=120, PrintParameters=true, MaxActive=10, MaxIdle=5, MinIdle=2, MaxWait=60000"/>
    </properties>
  </persistence-unit>
</persistence>
create an empty directory src as a sub-directory of where the openjpa and mariadb driver jars are
ran the following command:
java -cp ./:openjpa-all-3.0.0.jar:mariadb-java-client-2.4.0.jar:openjpa-all-3.0.0.jar org.apache.openjpa.jdbc.meta.ReverseMappingTool -pkg some.package -d ./src
Instead of getting any kind of output, or an error related to generation, I get:
8 INFO [main] openjpa.Tool - The reverse mapping tool will run on the database. The tool is gathering schema information; this process may take some time. Enable the org.apache.openjpa.jdbc.Schema logging category to see messages about schema data.
Exception in thread "main" <openjpa-3.0.0-r422266:1833209 fatal user error> org.apache.openjpa.util.UserException: The persistence provider is attempting to use properties in the persistence.xml file to resolve the data source. A Java Database Connectivity (JDBC) driver or data source class name must be specified in the openjpa.ConnectionDriverName or javax.persistence.jdbc.driver property. The following properties are available in the configuration: "org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl#f248234b".
at org.apache.openjpa.jdbc.schema.DataSourceFactory.newDataSource(DataSourceFactory.java:71)
at org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.createConnectionFactory(JDBCConfigurationImpl.java:850)
at org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.getConnectionFactory(JDBCConfigurationImpl.java:733)
at org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.getDataSource(JDBCConfigurationImpl.java:879)
at org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.getDataSource2(JDBCConfigurationImpl.java:921)
at org.apache.openjpa.jdbc.schema.SchemaGenerator.<init>(SchemaGenerator.java:86)
at org.apache.openjpa.jdbc.meta.ReverseMappingTool.run(ReverseMappingTool.java:2027)
at org.apache.openjpa.jdbc.meta.ReverseMappingTool.run(ReverseMappingTool.java:2005)
at org.apache.openjpa.jdbc.meta.ReverseMappingTool.run(ReverseMappingTool.java:1882)
at org.apache.openjpa.jdbc.meta.ReverseMappingTool$1.run(ReverseMappingTool.java:1863)
at org.apache.openjpa.lib.conf.Configurations.launchRunnable(Configurations.java:762)
at org.apache.openjpa.lib.conf.Configurations.runAgainstAllAnchors(Configurations.java:747)
at org.apache.openjpa.jdbc.meta.ReverseMappingTool.main(ReverseMappingTool.java:1858)
What am I doing wrong?
I tried variations of this, by adding -p persistence.xml#openjpa, -p #openjpa and -connectionDriverName org.mariadb.jdbc.Driver to the command line, but it made no difference.
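For reference, since the current directory is on the classpath (-cp ./:...), the descriptor has to be resolvable as META-INF/persistence.xml relative to that directory. A purely illustrative sanity check and re-run (same tool and flags as above; only the file layout is assumed):
# Confirm the descriptor is visible at the classpath root (note the hyphen in META-INF).
ls -l ./META-INF/persistence.xml
# Re-run the tool, pointing explicitly at the persistence unit.
java -cp ./:openjpa-all-3.0.0.jar:mariadb-java-client-2.4.0.jar \
    org.apache.openjpa.jdbc.meta.ReverseMappingTool \
    -p META-INF/persistence.xml#openjpa -pkg some.package -d ./src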

GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt) while connecting Polybase with Kerberos

We want to connect our SQL Server 2016 Enterprise via PolyBase with our Kerberized on-prem Hadoop cluster running Cloudera 5.14.
I followed the Microsoft PolyBase guide to configure PolyBase. After working on this topic for a few days, I'm not able to continue because of an exception: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
Microsoft has a built-in diagnostic tool for troubleshooting connectivity with PolyBase and Kerberos. In this troubleshooting guide from Microsoft there are four checkpoints, and I'm stuck on checkpoint 4.
Brief information about the checkpoints (where I'm successful):
Checkpoint 1: Successful! Authenticated against the KDC and received a TGT.
Checkpoint 2: Successful! According to the troubleshooting guide, PolyBase will make an attempt to access HDFS and fail because the request did not contain the necessary service ticket.
Checkpoint 3: Successful! A second hex dump indicates that SQL Server successfully used the TGT and acquired the applicable service ticket for the name node's SPN from the KDC.
Checkpoint 4: Not successful. At this point SQL Server should be authenticated by Hadoop using the ST (service ticket) and granted a session to access the secured resource.
krb5.conf file
[libdefaults]
default_realm = COMPANY.REALM.COM
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
permitted_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
udp_preference_limit = 1
kdc_timeout = 3000
[realms]
COMPANY.REALM.COM = {
kdc = ipadress.kdc.host
admin_server = ipadress.kdc.host
}
[logging]
default = FILE:/var/log/krb5/kdc.log
kdc = FILE:/var/log/krb5/kdc.log
admin_server = FILE:/var/log/krb5/kadmind.log
core-site.xml for Polybase on SQL-Server
<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>ipc.client.connect.max.retries</name>
    <value>2</value>
  </property>
  <property>
    <name>ipc.client.connect.max.retries.on.timeouts</name>
    <value>2</value>
  </property>
  <!-- kerberos security information, PLEASE FILL THESE IN ACCORDING TO HADOOP CLUSTER CONFIG -->
  <property>
    <name>polybase.kerberos.realm</name>
    <value>COMPANY.REALM.COM</value>
  </property>
  <property>
    <name>polybase.kerberos.kdchost</name>
    <value>ipadress.kdc.host</value>
  </property>
  <property>
    <name>hadoop.security.authentication</name>
    <value>KERBEROS</value>
  </property>
</configuration>
hdfs-site.xml for Polybase on SQL-Server
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.block.size</name>
    <value>268435456</value>
  </property>
  <!-- Client side file system caching is disabled below for credential refresh and
       setting the below cache disabled options to true might result in
       stale credentials when an alter credential or alter datasource is performed
  -->
  <property>
    <name>fs.wasb.impl.disable.cache</name>
    <value>true</value>
  </property>
  <property>
    <name>fs.wasbs.impl.disable.cache</name>
    <value>true</value>
  </property>
  <property>
    <name>fs.asv.impl.disable.cache</name>
    <value>true</value>
  </property>
  <property>
    <name>fs.asvs.impl.disable.cache</name>
    <value>true</value>
  </property>
  <property>
    <name>fs.hdfs.impl.disable.cache</name>
    <value>true</value>
  </property>
  <!-- kerberos security information, PLEASE FILL THESE IN ACCORDING TO HADOOP CLUSTER CONFIG -->
  <property>
    <name>dfs.namenode.kerberos.principal</name>
    <value>hdfs/_HOST@COMPANY.REALM.COM</value>
  </property>
</configuration>
Polybase Exception
[2018-06-22 12:51:50,349] WARN 2872[main] - org.apache.hadoop.security.UserGroupInformation.hasSufficientTimeElapsed(UserGroupInformation.java:1156) - Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
[2018-06-22 12:51:53,568] WARN 6091[main] - org.apache.hadoop.security.UserGroupInformation.hasSufficientTimeElapsed(UserGroupInformation.java:1156) - Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
[2018-06-22 12:51:56,127] WARN 8650[main] - org.apache.hadoop.security.UserGroupInformation.hasSufficientTimeElapsed(UserGroupInformation.java:1156) - Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
[2018-06-22 12:51:58,998] WARN 11521[main] - org.apache.hadoop.security.UserGroupInformation.hasSufficientTimeElapsed(UserGroupInformation.java:1156) - Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
[2018-06-22 12:51:59,139] WARN 11662[main] - org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:676) - Couldn't setup connection for hdfs@COMPANY.REALM.COM to IPADRESS_OF_NAMENODE:8020
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
Log Entry on NameNode
Socket Reader #1 for port 8020: readAndProcess from client IP-ADRESS_SQL-SERVER threw exception [javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API level (Mechanism level: AES128 CTS mode with HMAC SHA1-96 encryption type not in permitted_enctypes list)]]
Auth failed for IP-ADRESS_SQL-SERVER:60484:null (GSS initiate failed) with true cause: (GSS initiate failed)
The confusing part for me is the log entry from our NameNode, because AES128 CTS mode with HMAC SHA1-96 is already in the list of permitted enctypes, as shown in krb5.conf and in the Cloudera Manager UI.
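As a generic check (not part of the original setup), klist -e shows which encryption types the issued tickets actually carry, which can then be compared against permitted_enctypes in krb5.conf:
# Illustrative only: obtain a ticket and list the enctypes it was issued with.
# "username" is a placeholder principal.
kinit username@COMPANY.REALM.COM
klist -e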
We appreciate your help!
The problem took care of itself after we restarted the cluster.
I think the problem was that the krb5.conf file in our Hadoop cluster could not be distributed to all nodes because of some running services. There was also a warning in Cloudera Manager about a stale configuration regarding Kerberos.
Many thanks to everyone!

How to work with the glusterfs-hadoop plugin?

I installed GlusterFS and it works fine. After that I installed Hadoop 1.x and it works fine with HDFS, but when I use the glusterfs-hadoop plugin to use GlusterFS as the filesystem backend for my Hadoop, I get an error. I used the GitHub site for the glusterfs-hadoop plugin, copied the jar file to the Hadoop library directory, and changed my core-site.xml to this:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.glusterfs.impl</name>
    <value>org.apache.hadoop.fs.glusterfs.GlusterFileSystem</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>glusterfs://fedora1:9010</value>
  </property>
  <property>
    <name>fs.AbstractFileSystem.glusterfs.impl</name>
    <value>org.apache.hadoop.fs.local.GlusterFs</value>
  </property>
  <property>
    <name>fs.glusterfs.volumes</name>
    <value>test1</value>
  </property>
  <property>
    <name>fs.glusterfs.volume.fuse.test1</name>
    <value>/mnt/Hadoop</value>
  </property>
</configuration>
When I execute start-mapred.sh, the JobTracker and TaskTracker start without any problem, but when I execute the command "hadoop fs -mkdir ossl" I get this output:
15/04/14 12:52:53 INFO glusterfs.GlusterVolume: Initializing gluster volume..
15/04/14 12:52:53 INFO glusterfs.GlusterFileSystem: Configuring GlusterFS
15/04/14 12:52:53 INFO glusterfs.GlusterFileSystem: Initializing GlusterFS, CRC disabled.
15/04/14 12:52:53 INFO glusterfs.GlusterFileSystem: GIT INFO={git.commit.id.abbrev=f0fee73, git.commit.user.email=bchilds@redhat.com, git.commit.message.full=Merge pull request #122 from childsb/getfattrparse
Refactor and cleanup the BlockLocation parsing code, git.commit.id=f0fee73c336ac19461d5b5bb91a77e05cff73361, git.commit.message.short=Merge pull request #122 from childsb/getfattrparse, git.commit.user.name=bradley childs, git.build.user.name=Unknown, git.commit.id.describe=GA-12-gf0fee73, git.build.user.email=Unknown, git.branch=master, git.commit.time=31.03.2015 @ 00:36:46 IRDT, git.build.time=12.04.2015 @ 14:45:49 IRDT}
15/04/14 12:52:53 INFO glusterfs.GlusterFileSystem: GIT_TAG=GA
15/04/14 12:52:53 INFO glusterfs.GlusterFileSystem: Configuring GlusterFS
15/04/14 12:52:53 INFO glusterfs.GlusterVolume: Initializing gluster volume..
15/04/14 14:36:01 INFO glusterfs.GlusterVolume: Gluster volume: test at : /mnt/hadoop
15/04/14 14:36:01 INFO glusterfs.GlusterVolume: Working directory is : /
15/04/14 14:36:01 INFO glusterfs.GlusterVolume: Write buffer size : 131072
15/04/14 14:36:01 INFO glusterfs.GlusterVolume: Default block size : 67108864
15/04/14 14:36:01 INFO glusterfs.GlusterVolume: Directory list order : fs ordering
15/04/14 14:36:01 INFO glusterfs.GlusterVolume: File timestamp lease significant digits removed : 0
mkdir: Error undefined volume:fedora1:9010 in path: glusterfs://fedora1:9010/ossl
Please help me; thanks for your reply.
If I'm not mistaken this should work:
<property>
  <name>fs.default.name</name>
  <value>glusterfs:///fedora1:9010</value>
</property>
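If that is indeed the issue, re-running the command from the question is a quick way to confirm it (same commands as above, nothing new assumed):
# Re-test directory creation against the gluster-backed filesystem after updating core-site.xml.
hadoop fs -mkdir ossl
hadoop fs -ls /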

Transaction commit delay when routing message from one jms queue to another

We are trying to build a simple transacted JMS-to-JMS router using Mule ESB and JBoss Messaging. When we run Mule ESB with the application configured as below, we observe strange behaviour:
1. Approximately 10 messages are routed from queue test1 to test2.
2. Nothing happens for ~40 seconds.
3. Go to step 1.
Queue test1 is filled with around 500 messages when we start the test. We use Mule 3.2 and JBoss 5.1.
If I remove transactions from the code below, everything works fine and all messages are sent to queue test2 instantly. Everything is also fine if I change the transactions from XA to JMS, by replacing the xa-transaction tags with jms:transaction.
I don't know what causes this pause in message processing on the ESB; probably the transaction commit is delayed.
My question is: what should I do to get XA transactions working correctly?
I'll provide more details if needed. I asked this question on the Mule ESB forum before with no answer: http://forum.mulesoft.org/mulesoft/topics/transaction_commit_delay_when_routing_message_from_one_jms_queue_to_another
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:jms="http://www.mulesoft.org/schema/mule/jms" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:spring="http://www.springframework.org/schema/beans" xmlns:core="http://www.mulesoft.org/schema/mule/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jbossts="http://www.mulesoft.org/schema/mule/jbossts" version="CE-3.2.1" xsi:schemaLocation="
http://www.mulesoft.org/schema/mule/jms http://www.mulesoft.org/schema/mule/jms/current/mule-jms.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/jbossts http://www.mulesoft.org/schema/mule/jbossts/current/mule-jbossts.xsd ">
  <jbossts:transaction-manager> </jbossts:transaction-manager>
  <configuration>
    <default-threading-profile maxThreadsActive="30" maxThreadsIdle="5"/>
    <default-receiver-threading-profile maxThreadsActive="10" maxThreadsIdle="5"/>
  </configuration>
  <spring:beans>
    <spring:bean id="jmsJndiTemplate" class="org.springframework.jndi.JndiTemplate" doc:name="Bean">
      <spring:property name="environment">
        <spring:props>
          <spring:prop key="java.naming.factory.url.pkgs">org.jboss.naming:org.jnp.interfaces</spring:prop>
          <spring:prop key="jnp.disableDiscovery">true</spring:prop>
          <spring:prop key="java.naming.factory.initial">org.jnp.interfaces.NamingContextFactory</spring:prop>
          <spring:prop key="java.naming.provider.url">localhost:1099</spring:prop>
        </spring:props>
      </spring:property>
    </spring:bean>
    <spring:bean id="jmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean" doc:name="Bean">
      <spring:property name="jndiTemplate">
        <spring:ref bean="jmsJndiTemplate"/>
      </spring:property>
      <spring:property name="jndiName">
        <spring:value>XAConnectionFactory</spring:value>
      </spring:property>
    </spring:bean>
  </spring:beans>
  <jms:connector name="JMS" specification="1.1" numberOfConsumers="10" connectionFactory-ref="jmsConnectionFactory" doc:name="JMS"/>
  <flow name="flow" doc:name="flow">
    <jms:inbound-endpoint queue="test1" connector-ref="JMS" doc:name="qt1">
      <xa-transaction action="ALWAYS_BEGIN"/>
    </jms:inbound-endpoint>
    <echo-component doc:name="Echo"/>
    <jms:outbound-endpoint queue="test2" connector-ref="JMS" doc:name="qt2">
      <xa-transaction action="ALWAYS_JOIN"/>
    </jms:outbound-endpoint>
    <echo-component doc:name="Echo"/>
  </flow>
</mule>
Here you can find a log fragment for one message interaction; please note that in this case there was no delay.
And here is a log fragment for 11 messages. All of them were in queue test1 when the app started; as you can see, 10 messages are routed instantly and one is delayed by 1 minute.
I've found the root of my problem: my queues were defined with the following attribute:
<attribute name="RedeliveryDelay">60000</attribute>
Removing it or setting a low value solves my problem with the delays. The problem is, I don't know why :)
I always thought that the redelivery delay is used when delivery fails, which was not the case in my app.
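In case it helps anyone locate the same setting, a quick way to find which deployment descriptor defines the attribute (the path assumes the stock JBoss 5.1 server layout and may differ in your install):
# Find queue definitions carrying the RedeliveryDelay attribute.
grep -rl "RedeliveryDelay" $JBOSS_HOME/server/default/deploy/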