I sometimes get the exception below:
javax.jms.JMSException: Could not create a session: Unable to get managed connection for JmsXA
at org.hornetq.ra.HornetQRASessionFactoryImpl.allocateConnection(HornetQRASessionFactoryImpl.java:881)
at org.hornetq.ra.HornetQRASessionFactoryImpl.createQueueSession(HornetQRASessionFactoryImpl.java:237)
The QueueSession is created with the following snippet:
connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
We are using the java:JmsXA connection factory, which uses the InVM connector.
As far as I know, there is no point in setting parameters on the NettyConnectionFactory or InVMConnectionFactory in hornetq-jms.xml.
Instead, the parameters should be set either in jms-ds.xml (the JMS queue configuration file) or in ra.xml (the MDB configuration file).
I know some parameters that can be set:
1. <reconnect-attempts>1000</reconnect-attempts>
this will try to reconnect 1000 times after the connection is lost
2. <call-timeout>10800000</call-timeout>
On the other hand, there seems to be no point in setting reconnect-attempts at all, since it defaults to -1, which retries an unlimited number of times.
I am confused about which parameters can be set and at what level, i.e. at the queue level (in jms-ds.xml) or at the MDB level (in ra.xml), since some parameters appear in both, e.g. call-timeout, retry-interval, etc.
Try increasing the max-pool-size of the JmsXA pooled connection factory.
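For illustration, a minimal sketch of where that setting lives, assuming a JBoss 5 / EAP 5 style jms-ds.xml (the elements other than min/max-pool-size follow a typical HornetQ JmsXA definition and may differ in your installation; 100 is only an example value):
<tx-connection-factory>
  <jndi-name>JmsXA</jndi-name>
  <xa-transaction/>
  <rar-name>jms-ra.rar</rar-name>
  <!-- other JmsXA settings unchanged -->
  <min-pool-size>0</min-pool-size>
  <max-pool-size>100</max-pool-size>
</tx-connection-factory>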
In Confluent 5.5.0, I am unable to change max.request.size; it always defaults to max.request.size = 1048576 in the ProducerConfig.
The following are the parameters I have already tried, with no luck:
confluent-5.5.0/etc/kafka/producer.properties
max.request.size=15728640
producer.max.request.size=15728640
confluent-5.5.0/etc/kafka/server.properties
message.max.bytes=15728640
replica.fetch.max.bytes=15728640
max.request.size=15728640
fetch.message.max.bytes=15728640
/data/confluent-5.5.0/etc/kafka/consumer.properties
max.partition.fetch.bytes=15728640
confluent-5.5.0/etc/kafka-rest/kafka-rest.properties
max.request.size=15728640
NOTE: none of these values shows up as changed in connect.log.
I have stopped/started Confluent 5.5.0, and even destroyed the previous images and restarted.
Am I missing something?
After the information from the comments, I have also tried the following:
/data/confluent-5.5.0/etc/kafka/connect-standalone.properties
producer.override.max.request.size=15728640
consumer.override.max.partition.fetch.bytes=15728640
/data/confluent-5.5.0/etc/kafka/connect-distributed.properties
producer.override.max.request.size=15728640
consumer.override.max.partition.fetch.bytes=15728640
Still, max.request.size has not changed.
(Solved) Based on the inputs: I added the above configuration to the connector configuration, and also changed the override policy from None to All, which applied the configuration changes properly.
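For anyone hitting the same issue, a sketch of that combination, using the size value from the question (the worker property name is the standard Kafka Connect one for the client config override policy):
In the worker file (connect-distributed.properties or connect-standalone.properties):
connector.client.config.override.policy=All
In the individual connector configuration:
producer.override.max.request.size=15728640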
Those files are not used by Connect:
server.properties is for the Apache Kafka broker only
consumer.properties and producer.properties are for the kafka-console utilities
kafka-rest.properties is for the Confluent REST Proxy only
You need to use connect-distributed.properties or connect-standalone.properties, and note that you additionally need to set the property correctly using the appropriate prefixes.
The solution is to set the configuration in the Kafka Connect properties file. Add the following to the distributed or standalone Connect properties file:
producer.max.request.size=157286400
consumer.max.request.size=157286400
max.request.size=157286400
and it will work.
I am trying to run a simple jclouds program. The program is as follows:
String provider = "openstack-nova";
String identity = "Tenant:usename"; // tenantName:userName
String credential = "pass";
novaApi = ContextBuilder.newBuilder(provider).endpoint("http://openstack.infosys.tuwien.ac.at/identity/v2.0")
.credentials(identity, credential).modules(modules).buildApi(NovaApi.class);
regions = novaApi.getConfiguredRegions();
The openstack.infosys host is reached via a SOCKS proxy on port 7777. I have also configured the same in Eclipse (Window -> Preferences -> General -> Network Config -> SOCKS (Manual)). However, every time I run the code I get the following error:
ERROR o.j.h.i.JavaUrlHttpCommandExecutorService - Command not considered safe to retry because request method is POST:
which is then caused by:
Caused by: java.net.SocketTimeoutException: connect timed out
I am able to access the Horizon web interface of the same deployment without any issues.
Can someone please help me understand what the possible problem is?
You need to tell Apache jclouds about your proxy configuration when creating the context. Have a look at these properties, and pass the ones you need to the overrides method of the ContextBuilder:
Proxy type
Proxy host
Proxy port
Proxy user
Proxy password
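A rough sketch of what that can look like for a SOCKS proxy; the property names come from org.jclouds.Constants (double-check them against your jclouds version), and the proxy host shown here is only a placeholder. The user/password properties can be added the same way if your proxy requires them.
import java.util.Properties;
import org.jclouds.Constants;
import org.jclouds.ContextBuilder;
import org.jclouds.openstack.nova.v2_0.NovaApi;

Properties overrides = new Properties();
overrides.setProperty(Constants.PROPERTY_PROXY_TYPE, "SOCKS");              // proxy type
overrides.setProperty(Constants.PROPERTY_PROXY_HOST, "proxy.example.org");  // placeholder proxy host
overrides.setProperty(Constants.PROPERTY_PROXY_PORT, "7777");               // port from the question

NovaApi novaApi = ContextBuilder.newBuilder("openstack-nova")
        .endpoint("http://openstack.infosys.tuwien.ac.at/identity/v2.0")
        .credentials("Tenant:username", "pass")
        .overrides(overrides)
        .buildApi(NovaApi.class);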
I need your help getting rid of the warning below, because it is stopping me from doing anything in the JSF page:
Socket BEA-000449 Closing socket as no data read from it on
XXX.XXX.XXX.XX,XXX during the configured idle timeout of 5 secs
I tried changing the session timeout in web.xml, but it still shows the above warning:
<session-config>
<session-timeout>200</session-timeout>
</session-config>
You can use JAVA_OPTIONS to set this parameter.
In your start script, add the following Java option:
-Dweblogic.client.socket.ConnectTimeout=XXX
where XXX is the value in milliseconds.
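For example, in a startWebLogic.sh-style start script this might look like the following (30000 is only a sample value in milliseconds, not a recommendation):
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.client.socket.ConnectTimeout=30000"
export JAVA_OPTIONS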
You can also read the following thread from Oracle:
https://community.oracle.com/thread/2125724
I am writing an Apache Camel RabbitMQ consumer. I would like to react somehow to connection problems (i.e. try to reconnect). Is it possible to configure Apache Camel to reconnect automatically?
If not, how can I find out that a connection to the queue was interrupted? I've done the following test:
start the queue (and some producer)
start my consumer (it was getting messages as expected)
stop the queue (the messages stopped arriving, as expected, but no exception was thrown)
start the queue (no new messages were received)
I am using Camel from Scala (via akka-camel), but a Java solution would probably also be OK.
You can pass the flag automaticRecoveryEnabled=true in the endpoint URI; Camel will then reconnect if the connection is lost.
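A minimal sketch of such an endpoint (the exchange and queue names are placeholders; the option is the camel-rabbitmq one mentioned above):
import org.apache.camel.builder.RouteBuilder;

public class RabbitConsumerRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // "orders" exchange and queue are placeholders; automaticRecoveryEnabled asks the
        // underlying RabbitMQ Java client to re-establish the connection when it is lost.
        from("rabbitmq://localhost:5672/orders?queue=orders&automaticRecoveryEnabled=true")
            .to("log:rabbitmq-consumer");
    }
}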
For automatic RabbitMQ resource recovery (connections/channels/consumers/queues/exchanges/bindings) when failures occur, check out Lyra (which I authored). Example usage:
Config config = new Config()
.withRecoveryPolicy(new RecoveryPolicy()
.withMaxAttempts(20)
.withInterval(Duration.seconds(1))
.withMaxDuration(Duration.minutes(5)));
ConnectionOptions options = new ConnectionOptions().withHost("localhost");
Connection connection = Connections.create(options, config);
The rest of the API is just the amqp-client API, except your resources are automatically recovered when failures occur.
I'm not sure about camel-rabbitmq specifically, but hopefully there's a way you can swap in your own resource creation via Lyra.
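As a quick illustration of that point (the queue name is a placeholder), channels are opened through the recovered connection exactly as with the plain client:
// Standard amqp-client calls; Lyra transparently recovers the channel and queue on failure.
Channel channel = connection.createChannel();
channel.queueDeclare("orders", true, false, false, null);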
The current camel-rabbitmq component just creates the connection and the channel when the consumer or producer is started, so it doesn't get a chance to catch the connection exception :(
Does anyone know what the default value of the max pool size is within the -ds.xml file? As you can see below, we only have the minimum set to 0, with no entry for the maximum. I'm worried the vendor who configured this was assuming that no maximum entry means unlimited. I'm wondering whether no entry means the default value JBoss assigns is used; I'm not sure what that value is.
The reason I'm concerned is that I'm getting this error:
javax.transaction.TransactionRolledbackException: Error obtaining connection: org.jboss.util.NestedSQLException: No ManagedConnections available within configured blocking timeout ( 30000 [ms] ); - nested throwable: (javax.resource.ResourceException: No ManagedConnections available within configured blocking timeout ( 30000 [ms] ));
My -ds.xml file
<datasources>
<local-tx-datasource>
<jndi-name>SabaSite</jndi-name>
<connection-url>saba:jdbc:JSQLConnect://********/database=######/asciiStringParameters=false</connection-url>
<driver-class>com.saba.mssql.SabaJNETMSSQLDatabaseDriver</driver-class>
<min-pool-size>0</min-pool-size>
<exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name>
</local-tx-datasource>
</datasources>
Thanks,
Justin
You can check the actual datasource properties yourself with the help of the JMX Console.
See How to check datasource in JBoss?
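If you would rather not rely on whatever the container default is, you can also make the pool bounds explicit inside the <local-tx-datasource> element; the numbers below are placeholders to show the elements, not tuning advice:
<min-pool-size>0</min-pool-size>
<max-pool-size>50</max-pool-size>
<blocking-timeout-millis>30000</blocking-timeout-millis>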