Messages are not consumed when the connection uses JNDI or ActiveMQConnectionFactory to connect to EmbeddedActiveMQ - activemq-artemis

This is a follow-up to this question.
My code can initiate a connection, session, etc.; however, messages are not consumed. I don't see any exceptions in the logs.
This test reproduces the problem:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.Context;
import java.io.File;
import java.util.Hashtable;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.activemq.artemis.api.core.QueueConfiguration;
import org.apache.activemq.artemis.api.core.SimpleString;
import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;
import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;
import org.apache.activemq.artemis.core.settings.impl.AddressSettings;
import org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory;
import org.junit.After;
import org.junit.Before;
public class Test {
EmbeddedActiveMQ jmsServer;
final String QUEUE_NAME = "myQueue";
@Before
public void setUp() throws Exception {
final String baseDir = File.separator + "tmp";
final EmbeddedActiveMQ embeddedActiveMQ = new EmbeddedActiveMQ();
final Configuration config = new ConfigurationImpl();
config.setPersistenceEnabled(true);
config.setBindingsDirectory(baseDir + File.separator + "bindings");
config.setJournalDirectory(baseDir + File.separator + "journal");
config.setPagingDirectory(baseDir + File.separator + "paging");
config.setLargeMessagesDirectory(baseDir + File.separator + "largemessages");
config.setSecurityEnabled(false);
AddressSettings adr = new AddressSettings();
adr.setDeadLetterAddress(new SimpleString("DLQ"));
adr.setExpiryAddress(new SimpleString("ExpiryQueue"));
config.addAddressSetting("#", adr);
config.addAcceptorConfiguration("invmConnectionFactory", "vm://0");
embeddedActiveMQ.setConfiguration(config);
this.jmsServer = embeddedActiveMQ;
this.jmsServer.start();
System.out.println("creating queue");
final boolean isSuccess = jmsServer.getActiveMQServer().createQueue(new QueueConfiguration(QUEUE_NAME)) != null;
if(isSuccess) {
System.out.println(QUEUE_NAME + " queue created");
}
}
@After
public void tearDown() {
try {
this.jmsServer.stop();
} catch(Exception e) {
// ignore
}
}
@org.junit.Test
public void simpleTest() throws Exception {
Hashtable d = new Hashtable();
d.put(Context.INITIAL_CONTEXT_FACTORY, "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory");
d.put("connectionFactory.invmConnectionFactory", "vm://0");
final ActiveMQInitialContextFactory activeMQInitialContextFactory = new ActiveMQInitialContextFactory();
Context initialContext = activeMQInitialContextFactory.getInitialContext(d);
ConnectionFactory connectionFactory = (ConnectionFactory) initialContext.lookup("invmConnectionFactory");
Connection connection = connectionFactory.createConnection();
Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
Queue queue = session.createQueue(QUEUE_NAME);
MessageProducer producer = session.createProducer(queue);
MessageConsumer consumer = session.createConsumer(queue);
CountDownLatch latch = new CountDownLatch(1);
consumer.setMessageListener(message -> {
System.out.println("=== " + message);
try {
message.acknowledge();
session.commit();
latch.countDown();
} catch(JMSException e) {
e.printStackTrace();
}
});
connection.start();
producer.send(session.createMessage());
session.commit();
if(!latch.await(2, TimeUnit.SECONDS)) {
throw new IllegalStateException();
}
connection.close();
}
}

The problem with this code is subtle but important. When configuring the broker you're creating a queue like so:
...
final String QUEUE_NAME = "myQueue";
...
jmsServer.getActiveMQServer().createQueue(new QueueConfiguration(QUEUE_NAME))
...
This is perfectly valid in and of itself, but for this use-case involving a JMS queue it's important to note that this will result in an address named myQueue and a multicast queue named myQueue since the default routing type is MULTICAST and you didn't specify any routing type on your QueueConfiguration. This is not the kind of configuration you want for a JMS queue. You want an address and an ANYCAST queue of the same name (i.e. myQueue in this case) as noted in the documentation. Therefore, you should use:
...
import org.apache.activemq.artemis.api.core.RoutingType;
...
jmsServer.getActiveMQServer().createQueue(new QueueConfiguration(QUEUE_NAME).setRoutingType(RoutingType.ANYCAST))
When you use the multicast queue, the message sent by the JMS client will not actually be routed, because it is sent with the anycast routing type.
Another option would be to not create the queue explicitly at all and allow it to be auto-created.
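For completeness, a minimal sketch of that second option, reusing the AddressSettings block from setUp above (in recent Artemis releases both auto-create flags already default to true, so this only makes the behavior explicit; treat the exact defaults as an assumption for your version):
// Rely on auto-creation instead of createQueue(): when the JMS client
// calls createProducer()/createConsumer() on "myQueue", the broker
// auto-creates the address and an ANYCAST queue of the same name.
AddressSettings adr = new AddressSettings();
adr.setAutoCreateAddresses(true); // default is true
adr.setAutoCreateQueues(true);    // default is true
config.addAddressSetting("#", adr);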

Related

how to verify http connection pool can improve performance

I want to use an HTTP connection pool with Spring RestTemplate, but before using it, I need to verify whether it can improve performance.
I did a little programming here:
@Configuration
public class RestTemplateConfig {
@Bean
public RestTemplate restTemplate() {
return new RestTemplate();
}
}
and the test code here:
@SpringBootTest
class RestnopoolApplicationTests {
String url = "https://www.baidu.com/";
// String url = "http://localhost:8080/actuator/";
@Autowired
RestTemplate restTemplate;
@Test
void contextLoads() {
}
@Test
void verify_health() {
Instant start = Instant.now();
for(int i=0; i < 100; i ++) {
restTemplate.getForObject(url, String.class);
}
Instant end = Instant.now();
Duration d = Duration.between(start,end );
System.out.println("time span " + d.getSeconds());
}
}
Also, I wrote the HTTP connection pool below:
import java.security.KeyManagementException;
import java.security.KeyStoreException;
import java.security.NoSuchAlgorithmException;
import java.util.concurrent.TimeUnit;
import org.apache.http.HeaderElement;
import org.apache.http.HeaderElementIterator;
import org.apache.http.HttpResponse;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.config.Registry;
import org.apache.http.config.RegistryBuilder;
import org.apache.http.conn.ConnectionKeepAliveStrategy;
import org.apache.http.conn.socket.ConnectionSocketFactory;
import org.apache.http.conn.socket.PlainConnectionSocketFactory;
import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
import org.apache.http.conn.ssl.TrustSelfSignedStrategy;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.message.BasicHeaderElementIterator;
import org.apache.http.protocol.HTTP;
import org.apache.http.protocol.HttpContext;
import org.apache.http.ssl.SSLContextBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
/**
* - Supports both HTTP and HTTPS
* - Uses a connection pool to re-use connections and save overhead of creating connections.
* - Has a custom connection keep-alive strategy (to apply a default keep-alive if one isn't specified)
* - Starts an idle connection monitor to continuously clean up stale connections.
*/
@Configuration
@EnableScheduling
public class HttpClientConfig {
private static final Logger LOGGER = LoggerFactory.getLogger(HttpClientConfig.class);
// Determines the timeout in milliseconds until a connection is established.
private static final int CONNECT_TIMEOUT = 30000;
// The timeout when requesting a connection from the connection manager.
private static final int REQUEST_TIMEOUT = 30000;
// The timeout for waiting for data
private static final int SOCKET_TIMEOUT = 60000;
private static final int MAX_TOTAL_CONNECTIONS = 50;
private static final int DEFAULT_KEEP_ALIVE_TIME_MILLIS = 20 * 1000;
private static final int CLOSE_IDLE_CONNECTION_WAIT_TIME_SECS = 30;
@Bean
public PoolingHttpClientConnectionManager poolingConnectionManager() {
SSLContextBuilder builder = new SSLContextBuilder();
try {
builder.loadTrustMaterial(null, new TrustSelfSignedStrategy());
} catch (NoSuchAlgorithmException | KeyStoreException e) {
LOGGER.error("Pooling Connection Manager Initialisation failure because of " + e.getMessage(), e);
}
SSLConnectionSocketFactory sslsf = null;
try {
sslsf = new SSLConnectionSocketFactory(builder.build());
} catch (KeyManagementException | NoSuchAlgorithmException e) {
LOGGER.error("Pooling Connection Manager Initialisation failure because of " + e.getMessage(), e);
}
Registry<ConnectionSocketFactory> socketFactoryRegistry = RegistryBuilder
.<ConnectionSocketFactory>create().register("https", sslsf)
.register("http", new PlainConnectionSocketFactory())
.build();
PoolingHttpClientConnectionManager poolingConnectionManager = new PoolingHttpClientConnectionManager(socketFactoryRegistry);
poolingConnectionManager.setMaxTotal(MAX_TOTAL_CONNECTIONS);
return poolingConnectionManager;
}
@Bean
public ConnectionKeepAliveStrategy connectionKeepAliveStrategy() {
return new ConnectionKeepAliveStrategy() {
@Override
public long getKeepAliveDuration(HttpResponse response, HttpContext context) {
HeaderElementIterator it = new BasicHeaderElementIterator
(response.headerIterator(HTTP.CONN_KEEP_ALIVE));
while (it.hasNext()) {
HeaderElement he = it.nextElement();
String param = he.getName();
String value = he.getValue();
if (value != null && param.equalsIgnoreCase("timeout")) {
return Long.parseLong(value) * 1000;
}
}
return DEFAULT_KEEP_ALIVE_TIME_MILLIS;
}
};
}
@Bean
public CloseableHttpClient httpClient() {
RequestConfig requestConfig = RequestConfig.custom()
.setConnectionRequestTimeout(REQUEST_TIMEOUT)
.setConnectTimeout(CONNECT_TIMEOUT)
.setSocketTimeout(SOCKET_TIMEOUT).build();
return HttpClients.custom()
.setDefaultRequestConfig(requestConfig)
.setConnectionManager(poolingConnectionManager())
.setKeepAliveStrategy(connectionKeepAliveStrategy())
.build();
}
@Bean
public Runnable idleConnectionMonitor(final PoolingHttpClientConnectionManager connectionManager) {
return new Runnable() {
@Override
@Scheduled(fixedDelay = 10000)
public void run() {
try {
if (connectionManager != null) {
LOGGER.trace("run IdleConnectionMonitor - Closing expired and idle connections...");
connectionManager.closeExpiredConnections();
connectionManager.closeIdleConnections(CLOSE_IDLE_CONNECTION_WAIT_TIME_SECS, TimeUnit.SECONDS);
} else {
LOGGER.trace("run IdleConnectionMonitor - Http Client Connection manager is not initialised");
}
} catch (Exception e) {
LOGGER.error("run IdleConnectionMonitor - Exception occurred. msg={}, e={}", e.getMessage(), e);
}
}
};
}
}
and the RestTemplateConfig below:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.scheduling.TaskScheduler;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;
import org.springframework.web.client.RestTemplate;
import org.apache.http.impl.client.CloseableHttpClient;
@Configuration
public class RestTemplateConfig {
@Autowired
CloseableHttpClient httpClient;
@Bean
public RestTemplate restTemplate() {
RestTemplate restTemplate = new RestTemplate(clientHttpRequestFactory());
return restTemplate;
}
@Bean
public HttpComponentsClientHttpRequestFactory clientHttpRequestFactory() {
HttpComponentsClientHttpRequestFactory clientHttpRequestFactory = new HttpComponentsClientHttpRequestFactory();
clientHttpRequestFactory.setHttpClient(httpClient);
return clientHttpRequestFactory;
}
@Bean
public TaskScheduler taskScheduler() {
ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
scheduler.setThreadNamePrefix("poolScheduler");
scheduler.setPoolSize(50);
return scheduler;
}
}
The test results cannot prove that the connection pool improves performance.
You have not used your new implementation. You are still using the default RestTemplate client, not your pooled CloseableHttpClient. Use your httpClient() method to supply the CloseableHttpClient to the request factory.
Please also note that your test is synchronous: no matter how many connections you have in the pool, you will use them sequentially. Use threads to execute the GET requests.
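For instance, a rough sketch of a concurrent variant of the verify_health test (the method name is mine; it assumes the url and restTemplate fields from the test class above, plus java.util.concurrent and java.time imports):
@Test
void verify_health_concurrent() throws InterruptedException {
    // Fire the 100 GETs from a fixed thread pool so that more than one
    // pooled connection is actually in use at a time.
    ExecutorService executor = Executors.newFixedThreadPool(10);
    CountDownLatch done = new CountDownLatch(100);
    Instant start = Instant.now();
    for (int i = 0; i < 100; i++) {
        executor.submit(() -> {
            try {
                restTemplate.getForObject(url, String.class);
            } finally {
                done.countDown();
            }
        });
    }
    done.await();        // wait for all requests to complete
    executor.shutdown();
    Duration d = Duration.between(start, Instant.now());
    System.out.println("time span " + d.getSeconds());
}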

KTable not updating (immediately) when new messages are put on input stream

I have a Kafka topic of Strings with arbitrary keys. I want to create a table of character : value pairs, with one entry for each character in the string, e.g.:
input("key","value") -> outputs (["v","value"],["a","value"],...)
To keep it simple, my input topic has a single partition, and thus the KTable code should receive all messages on a single instance.
I have created the following sandbox code, which builds the new table just fine, but doesn't update when a new item is put onto the original topic:
import java.util.LinkedHashSet;
import java.util.Properties;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
public class Sandbox
{
private final static String kafkaBootstrapServers = "192.168.1.254:9092";
private final static String kafkaGlobalTablesDirectory = "C:\\Kafka\\tmp\\kafka-streams-global-tables\\";
private final static String topic = "sandbox";
private static KafkaStreams streams;
public static void main(String[] args)
{
// 1. set up the test data
Properties producerProperties = new Properties();
producerProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaBootstrapServers);
producerProperties.put(ProducerConfig.CLIENT_ID_CONFIG, Sandbox.class.getName() + "_testProducer");
producerProperties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
producerProperties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
Producer<String, String> sandboxProducer = new KafkaProducer<>(producerProperties);
sandboxProducer.send(new ProducerRecord<String, String>(topic,"uvw","uvw"));
// 2. read the test data and check it's working
ReadOnlyKeyValueStore<String, String> store = getStore();
printStore(store.all());
System.out.println("-------------ADDING NEW VALUE----------------");
sandboxProducer.send(new ProducerRecord<String, String>(topic,"xyz","xyz"));
System.out.println("-------------ADDED NEW VALUE----------------");
printStore(store.all());
sandboxProducer.close();
streams.close();
}
private static void printStore(KeyValueIterator<String, String> i)
{
System.out.println("-------------PRINT START----------------");
while (i.hasNext())
{
KeyValue<String, String> n = i.next();
System.out.println(n.key + ":" + String.join(",", n.value));
}
System.out.println("-------------PRINT END----------------");
}
private static ReadOnlyKeyValueStore<String, String> getStore()
{
ReadOnlyKeyValueStore<String, String> store = null;
String storeString = "sandbox_store";
StreamsBuilder builder = new StreamsBuilder();
builder.stream(topic
, Consumed.with(Serdes.String(),Serdes.String()))
.filter((k,v)->v!=null)
.flatMap((k,v)->{
Set<KeyValue<String, String>> results = new LinkedHashSet<>();
if (v != null)
{
for (char subChar : v.toCharArray())
{
results.add(KeyValue.<String, String>pair(new String(new char[] {subChar}), v));
}
}
return results;
})
.groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
.aggregate(()->new String()
, (key, value, agg) -> {
agg = agg + value;
return agg;
}
,Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as(storeString)
.withKeySerde(Serdes.String())
.withValueSerde(Serdes.String()));
final Properties streamsConfiguration = new Properties();
streamsConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG, "sandbox");
streamsConfiguration.put(StreamsConfig.CLIENT_ID_CONFIG, Sandbox.class.getName());
streamsConfiguration.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaBootstrapServers);
streamsConfiguration.put(StreamsConfig.STATE_DIR_CONFIG, kafkaGlobalTablesDirectory + "Sandbox");
streamsConfiguration.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
streams = new KafkaStreams(builder.build(), streamsConfiguration);
streams.setUncaughtExceptionHandler((Thread thread, Throwable throwable) -> {
System.out.println("Exception on thread " + thread.getName() + ":" + throwable.getLocalizedMessage());
});
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
streams.cleanUp(); // clear any old streams data - forces a rebuild of the local caches.
streams.start(); // hangs until the global table is built
StoreQueryParameters<ReadOnlyKeyValueStore<String, String>> storeSqp
= StoreQueryParameters.fromNameAndType(storeString
,QueryableStoreTypes.<String, String>keyValueStore());
// this while loop gives time for Kafka Streams to start up properly before creating the store
while (store == null)
{
try {
TimeUnit.SECONDS.sleep(1);
store = streams.store(storeSqp);
System.out.println("Store " + storeString + " Created successfully.");
} catch (InterruptedException e) {
}
catch (Exception e) {
System.out.println("Exception creating store " + storeString + ". Will try again in 1 second. Message: " + e.getLocalizedMessage());
}
}
return store;
}
}
The output I am getting is as follows:
Store sandbox_store Created successfully.
-------------PRINT START----------------
u:uvw
v:uvw
w:uvw
-------------PRINT END----------------
-------------ADDING NEW VALUE----------------
-------------ADDED NEW VALUE----------------
-------------PRINT START----------------
u:uvw
v:uvw
w:uvw
-------------PRINT END----------------
Note that the xyz I added has gone missing!
(P.S. I know I could use reduce instead of aggregate, but in practice the new value would be a different type, not a String, so it wouldn't work for my actual use-case.)
Now, if I add a 10-second pause before printing the second time, or if I restart the Sandbox class without clearing the topic, the xyz shows up. So clearly there is a time delay somewhere in the system. In practice I'm dealing with 300 MB+ of messages all going onto the input topic at once, once an hour, so the delay is even longer than just a few seconds.
How can I speed things up?
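For what it's worth, two knobs worth checking here, both assumptions rather than a verified diagnosis for this exact setup: the test producer may not have flushed the new record to the broker yet, and Kafka Streams buffers state-store updates in its record cache, only flushing them on commit. A sketch of both:
// Assumption 1: push the test record out of the producer's buffer before
// re-reading the store.
sandboxProducer.flush();

// Assumption 2: make store updates visible sooner by disabling the Streams
// record cache and shortening the commit interval (add to streamsConfiguration).
streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
streamsConfiguration.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000);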

can Flink receive http requests as datasource?

Flink can read a socket stream; can it read HTTP requests? How?
// socket example
DataStream<XXX> socketStream = env
.socketTextStream("localhost", 9999)
.map(...);
There's an open JIRA ticket for creating an HTTP sink connector for Flink, but I've seen no discussion about creating a source connector.
Moreover, it's not clear this is a good idea. Flink's approach to fault tolerance requires sources that can be rewound and replayed, so it works best with input sources that behave like message queues. I would suggest buffering the incoming http requests in a distributed log.
For an example, look at how DriveTribe uses Flink to power their website on the data Artisans blog and on YouTube.
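A rough sketch of that "buffer in a distributed log" suggestion, assuming Kafka as the log and the Flink Kafka connector on the classpath (connector class names vary across Flink versions; an HTTP front end outside Flink is assumed to append each request to the topic):
import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
public class HttpViaKafka {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "http-requests");
        // Each record is one buffered HTTP request (e.g. its URI or body),
        // so the source can be rewound and replayed on failure.
        DataStream<String> requests = env.addSource(
                new FlinkKafkaConsumer<>("http-requests", new SimpleStringSchema(), props));
        requests.print();
        env.execute("Http Requests via Kafka");
    }
}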
I wrote a custom HTTP source; please refer to OneHourHttpTextStreamFunction below. You need to create a fat jar that includes the Apache HttpServer classes if you want to run my code.
package org.apache.flink.streaming.examples.http;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.examples.socket.SocketWindowWordCount.WordWithCount;
import org.apache.flink.util.Collector;
import org.apache.http.HttpException;
import org.apache.http.HttpRequest;
import org.apache.http.HttpResponse;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.bootstrap.HttpServer;
import org.apache.http.impl.bootstrap.ServerBootstrap;
import org.apache.http.protocol.HttpContext;
import org.apache.http.protocol.HttpRequestHandler;
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import static org.apache.flink.util.Preconditions.checkArgument;
import static org.apache.flink.util.Preconditions.checkNotNull;
public class HttpRequestCount {
public static void main(String[] args) throws Exception {
// the path pattern and the port to listen on
final String path;
final int port;
try {
final ParameterTool params = ParameterTool.fromArgs(args);
path = params.has("path") ? params.get("path") : "*";
port = params.getInt("port");
} catch (Exception e) {
System.err.println("No port specified. Please run 'SocketWindowWordCount "
+ "--path <hostname> --port <port>', where path (* by default) "
+ "and port is the address of the text server");
System.err.println("To start a simple text server, run 'netcat -l <port>' and "
+ "type the input text into the command line");
return;
}
// get the execution environment
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// get input data by connecting to the socket
DataStream<String> text = env.addSource(new OneHourHttpTextStreamFunction(path, port));
// parse the data, group it, window it, and aggregate the counts
DataStream<WordWithCount> windowCounts = text
.flatMap(new FlatMapFunction<String, WordWithCount>() {
@Override
public void flatMap(String value, Collector<WordWithCount> out) {
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
for (String word : value.split("\\s")) {
out.collect(new WordWithCount(word, 1L));
}
}
})
.keyBy("word").timeWindow(Time.seconds(5))
.reduce(new ReduceFunction<WordWithCount>() {
@Override
public WordWithCount reduce(WordWithCount a, WordWithCount b) {
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return new WordWithCount(a.word, a.count + b.count);
}
});
// print the results with a single thread, rather than in parallel
windowCounts.print().setParallelism(1);
env.execute("Http Request Count");
}
}
class OneHourHttpTextStreamFunction implements SourceFunction<String> {
private static final long serialVersionUID = 1L;
private final String path;
private final int port;
private transient HttpServer server;
public OneHourHttpTextStreamFunction(String path, int port) {
checkArgument(port > 0 && port < 65536, "port is out of range");
this.path = checkNotNull(path, "path must not be null");
this.port = port;
}
@Override
public void run(SourceContext<String> ctx) throws Exception {
server = ServerBootstrap.bootstrap().setListenerPort(port).registerHandler(path, new HttpRequestHandler(){
@Override
public void handle(HttpRequest req, HttpResponse rep, HttpContext context) throws HttpException, IOException {
ctx.collect(req.getRequestLine().getUri());
rep.setStatusCode(200);
rep.setEntity(new StringEntity("OK"));
}
}).create();
server.start();
server.awaitTermination(1, TimeUnit.HOURS);
}
@Override
public void cancel() {
server.stop();
}
}
Leave a comment if you want the demo jar.

JMS Consumer terminates and doesn't receive Message

So I'm following this YouTube tutorial on Java Message Service with JBoss. My code is the same as in the video; however, when I run my TopicConsumer and TopicProducer applications, both terminate and don't stay alive for me to receive my message. I read that setMessageListener would create a new thread, so the message should be received even if the main thread terminated, but I'm still not receiving the message.
I found out that it's not calling onMessage. Is that because TopicConsumer terminated before it got a chance to?
I have my JBoss 5.0 server running. Just like in the video, I run TopicConsumer first (but it terminates after the print statement, unlike in the video), then TopicProducer (which also terminates right after its print statement), and I don't receive my message.
Thanks.
TopicConsumer.java
package jmspubsubtutorial;
import java.util.Properties;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
public class TopicConsumer implements MessageListener {
public static void main(String[] args) throws JMSException, NamingException{
System.out.println("---Starting TopicConsumer---");
Context context = TopicConsumer.getInitialContext();
TopicConnectionFactory topicConnectionFactory = (TopicConnectionFactory) context.lookup("ConnectionFactory");
Topic topic = (Topic) context.lookup("topic/JMS_tutorial");
TopicConnection topicConnection = topicConnectionFactory.createTopicConnection();
TopicSession topicSession = topicConnection.createTopicSession(false, TopicSession.AUTO_ACKNOWLEDGE);
topicSession.createSubscriber(topic).setMessageListener(new TopicConsumer());
topicConnection.start();
System.out.println("---Exiting TopicConsumer---");
}
@Override
public void onMessage(Message message) {
System.out.println("--- onMessage ---");
try {
System.out.println("Incoming message: " + ((TextMessage)message).getText());
} catch (JMSException e) {
System.out.println("onMessage failed");
e.printStackTrace();
}
}
public static Context getInitialContext() throws JMSException, NamingException {
Properties props = new Properties();
props.setProperty("java.naming.factory.initial", "org.jnp.interfaces.NamingContextFactory");
props.setProperty("java.naming.factory.url.pkgs", "org.jboss.naming");
props.setProperty("java.naming.provider.url", "localhost:1099");
Context context = new InitialContext(props);
return context;
}
}
TopicProducer.java
package jmspubsubtutorial;
import javax.jms.JMSException;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicPublisher;
import javax.jms.TopicSession;
import javax.naming.Context;
import javax.naming.NamingException;
public class TopicProducer {
public static void main(String[] args) throws JMSException, NamingException{
System.out.println("---Starting TopicProducer---");
Context context = TopicConsumer.getInitialContext();
TopicConnectionFactory topicConnectionFactory = (TopicConnectionFactory) context.lookup("ConnectionFactory");
Topic topic = (Topic) context.lookup("topic/JMS_tutorial");
TopicConnection topicConnection = topicConnectionFactory.createTopicConnection();
TopicSession topicSession = topicConnection.createTopicSession(false, TopicSession.AUTO_ACKNOWLEDGE);
topicConnection.start();
TopicProducer topicProducer = new TopicProducer();
String text = "message 1 from TopicProducer...";
topicProducer.sendMessage(text, topicSession, topic);
System.out.println("---Exiting TopicProducer---");
}
public void sendMessage(String text, TopicSession topicSession, Topic topic) throws JMSException {
System.out.println("Send Message: " + text + " " + topicSession + " " + topic);
TopicPublisher topicPublisher = topicSession.createPublisher(topic);
TextMessage textMessage = topicSession.createTextMessage(text);
topicPublisher.publish(textMessage);
topicPublisher.close();
}
}
So the problem is that you are relying on the JMS library to maintain at least one non-daemon thread in order to keep your application alive after you create the consumer and assign the message listener, but in reality there is no guarantee that it will do any such thing.
It's true that many JMS providers do indeed attempt to always have a single non-daemon thread running internally, but assuming that this will always be the case is not really advisable. You seem to have found that your particular provider does not do this for you, so if you want to ensure your application stays running you should make that happen yourself.
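One simple way, sketched here under the assumption of Java 8+ (javax.jms.MessageListener is a single-method interface, so a lambda works) and java.util.concurrent imports: block TopicConsumer's main thread on a CountDownLatch until a message arrives, instead of returning right after connection.start().
// Hypothetical keep-alive for TopicConsumer.main(): wait for the first
// message (or a 60-second timeout) before letting the JVM exit.
CountDownLatch received = new CountDownLatch(1);
topicSession.createSubscriber(topic).setMessageListener(message -> {
    System.out.println("--- onMessage ---");
    received.countDown();
});
topicConnection.start();
if (!received.await(60, TimeUnit.SECONDS)) {
    System.out.println("No message received within 60 seconds");
}
topicConnection.close();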

MDB Listener doesn't listen to hornetq

I configured a standalone HornetQ instance and started it, then created one sender class and one MDB receiver class.
When I register the listener directly in the sender class using messageConsumer.setMessageListener(listener), it works fine.
But when I deploy my MDB receiver (.war file!) to the JBoss application server, it does not listen to the queue messages.
Sender Class:
package com.mdas.sender;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.ObjectMessage;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import org.apache.log4j.Logger;
import com.mdas.receiver.TestQueueListnerMDB;
import com.mdas.vo.JmsVO;
public class JMSSender {
static Logger logger = Logger.getLogger(JMSSender.class);
// *************** Connection Factory JNDI name *************************
public String connectionFactory;
// *************** Queue JNDI name *************************
public String queueName;
protected ConnectionFactory tconFactory;
protected Connection tcon;
protected Session session;
protected MessageProducer producer;
protected Queue queue;
protected MessageConsumer messageConsumer;
protected TestQueueListnerMDB listener;
protected InitialContext ic;
public JMSSender(InitialContext ic, String connectionFactory, String queueName){
this.ic = ic;
this.connectionFactory = connectionFactory;
this.queueName = queueName;
}
public void sendJms(JmsVO jmsVO) throws Exception
{
System.out.println("Message put in jms destination");
ObjectMessage objectMessage = session.createObjectMessage();
objectMessage.setObject(jmsVO);
producer.send(objectMessage);
}
public void init() throws NamingException,
JMSException
{
System.out.println("0");
tconFactory = (ConnectionFactory) ic.lookup(connectionFactory);
System.out.println("1");
tcon = tconFactory.createConnection();
System.out.println("2");
session = tcon.createSession(false, Session.CLIENT_ACKNOWLEDGE);
System.out.println("3");
queue = (Queue) ic.lookup(queueName);
System.out.println("4");
producer = session.createProducer(queue);
System.out.println("5");
/*messageConsumer = session.createConsumer(queue);
listener = new TestQueueListnerMDB();
messageConsumer.setMessageListener(listener);*/
tcon.start();
}
public void closeQueueConnections(){
System.out.println("<<<< start closeQueueConnections >>>>>");
try {
producer.close();
//messageConsumer.close();
session.close();
tcon.close();
System.out.println("<<<< end closeQueueConnections successfully >>>>>");
} catch (Exception e) {
logger.error("Error in closeQueueConnections()", e);
}
}
}
Receiver Class:
package com.mdas.receiver;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.EJBException;
import javax.ejb.MessageDriven;
import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.ObjectMessage;
import org.apache.log4j.Logger;
import org.jboss.ejb3.annotation.ResourceAdapter;
import com.mdas.vo.JmsVO;
@MessageDriven(
activationConfig = { @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
@ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/TestQueue"),
@ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"),
@ActivationConfigProperty(propertyName = "maxSession", propertyValue = "100"),
@ActivationConfigProperty(propertyName = "hostName", propertyValue = "localhost"),
@ActivationConfigProperty(propertyName = "port", propertyValue = "5455")
})
@TransactionManagement(value = TransactionManagementType.CONTAINER)
@TransactionAttribute(value = TransactionAttributeType.NOT_SUPPORTED)
@ResourceAdapter("hornetq-ra.rar")
public class TestQueueListnerMDB implements MessageListener, MessageDrivenBean{
private static final long serialVersionUID = 1L;
static Logger logger = Logger.getLogger(TestQueueListnerMDB.class);
public TestQueueListnerMDB() {
logger.info("Snmp MDB Created :: " + this);
System.out.println("Snmp MDB Created :: " + this);
}
public void onMessage(Message message) {
try
{
System.out.println("Entered in onMessage::: ");
logger.info("Trap Received In Processor ::: ");
ObjectMessage objectMessage = (ObjectMessage)message;
JmsVO received = (JmsVO)objectMessage.getObject();
System.out.println(received.getText());
message.acknowledge();
}
catch (Exception e)
{
logger.error("Error in receiving alarm from queue", e);
}finally{
}
}
public void ejbRemove() throws EJBException {
logger.info("QueueListnerMDB is being removed");
}
public void setMessageDrivenContext(MessageDrivenContext messageDrivenContext) throws EJBException {
}
}
Your question is not well formulated, but let me try...
To give you a proper answer I would need more information about where your server is (is it remote?) and what version you are using.
Usually all you have to do is specify the remote server; there is some information in the HornetQ documentation:
http://docs.jboss.org/hornetq/2.2.2.Final/user-manual/en/html/appserver-integration.html#d0e8389
If you provide more information about what's happening (errors, version), I may be able to give you a better answer.