ActiveMQ Artemis - Increasing Message Count on Management Queues

I am using the ActiveMQ Artemis management API to retrieve information about which queues are in use and how many consumers they have.
Since I am caching the ClientRequestor, the message count on the "activemq.management.*" reply queue keeps increasing. Although I defined an expiry delay, the messages on the queue are not discarded.
Can someone explain why?
I would expect the messages to be consumed and then gone.
private ServerLocator locator;
private ClientSessionFactory defaultFactory;
private ClientSession session;
private ClientRequestor requestor;

public ManagementHelper(String defaultURL) throws Exception {
    this.locator = ActiveMQClient.createServerLocator(defaultURL);
    this.defaultFactory = locator.createSessionFactory();
    // username/password fields are defined elsewhere in the class
    this.session = defaultFactory.createSession(this.username, this.password, false, true, true,
            locator.isPreAcknowledge(), locator.getAckBatchSize());
    this.requestor = new ClientRequestor(session, "activemq.management");
}
@Scheduled(fixedRateString = "20000")
public void getConsumerNbos(ClientSessionFactory factory, ServerLocator locator) throws Exception {
    ClientMessage message = session.createMessage(false);
    ManagementHelper.putOperationInvocation(message, ResourceNames.BROKER, "listAllConsumersAsJSON");
    session.start();
    ClientMessage replyConsumer = requestor.request(message);
    String resultJSON = (String) ManagementHelper.getResult(replyConsumer, String.class);

    ClientMessage message2 = session.createMessage(false);
    // MANAGEMENT_OPERATION_QUEUES is a constant defined elsewhere in the application
    ManagementHelper.putOperationInvocation(message2, ResourceNames.BROKER, MANAGEMENT_OPERATION_QUEUES);
    ClientMessage replyQueueNames = requestor.request(message2);
    Object[] objQueueNames = (Object[]) ManagementHelper.getResult(replyQueueNames);
}

The messages are not removed because your application is not acknowledging them. You must invoke acknowledge() on the reply messages just like any other message you might receive from a queue using the core API, e.g.:
@Scheduled(fixedRateString = "20000")
public void getConsumerNbos(ClientSessionFactory factory, ServerLocator locator) throws Exception {
    ClientMessage message = session.createMessage(false);
    ManagementHelper.putOperationInvocation(message, ResourceNames.BROKER, "listAllConsumersAsJSON");
    session.start();
    ClientMessage replyConsumer = requestor.request(message);
    replyConsumer.acknowledge();
    String resultJSON = (String) ManagementHelper.getResult(replyConsumer, String.class);

    ClientMessage message2 = session.createMessage(false);
    ManagementHelper.putOperationInvocation(message2, ResourceNames.BROKER, MANAGEMENT_OPERATION_QUEUES);
    ClientMessage replyQueueNames = requestor.request(message2);
    replyQueueNames.acknowledge();
    Object[] objQueueNames = (Object[]) ManagementHelper.getResult(replyQueueNames);
}
You don't need to invoke commit() on the ClientSession since you're creating it with autoCommitAcks as true.
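For completeness, if autoCommitAcks were set to false you would need that commit. Here is a minimal sketch (not part of the original answer) of that variant, reusing the field names from the question's ManagementHelper:
// Assumption: same defaultFactory, locator, username and password fields as in the question.
ClientSession manualSession = defaultFactory.createSession(
        username, password,
        false,  // xa
        true,   // autoCommitSends
        false,  // autoCommitAcks: acknowledgements now require an explicit commit
        locator.isPreAcknowledge(), locator.getAckBatchSize());
ClientRequestor manualRequestor = new ClientRequestor(manualSession, "activemq.management");
manualSession.start();

ClientMessage request = manualSession.createMessage(false);
ManagementHelper.putOperationInvocation(request, ResourceNames.BROKER, "listAllConsumersAsJSON");
ClientMessage reply = manualRequestor.request(request);
reply.acknowledge();
manualSession.commit(); // without this the acknowledged reply would remain on the reply queue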

Related

Why am I getting a FailedToSendMessageException when sending a message for the first time

When sending a message for the first time I get the exception, but the second message sends properly.
Here is my Producer Configuration:
public void Init()
{
_logger.info("Initializing KAFKA server context...");
Properties ProducerProperties = new Properties();
ProducerProperties.put("metadata.broker.list", "localhost:9092");
ProducerProperties.put("serializer.class", "kafka.serializer.StringEncoder");
ProducerProperties.put("producer.type", "async");
_kafkaProducerConfig = new ProducerConfig(ProducerProperties);
_logger.info("KAFKA configuration done ...!!");
}
I am sending Message using this method:
public Event Send(WSSession ws, JSONObject obj) throws JSONException
{
String roomId = obj.getString("RoomId");
Room room = _roomList.get(roomId);
String message = "";
if (_hmRooms.containsKey(room.getRoomId()))
{
String msg = new KafkaMessage(obj.getString("ReqId"), obj.getString("ReqType"), ws.getUser().getTaskName(), obj.getString("Message")).toString();
kafka.javaapi.producer.Producer<String, String> producer = new kafka.javaapi.producer.Producer<String, String>(_kafkaProducerConfig);
KeyedMessage<String, String> km = new KeyedMessage<String, String>(room.getRoomId(), msg);
producer.send(km);
producer.close();
message = "sent";
}
else
{
message = "failled";
}
EventSuccess evt = new EventSuccess(obj);
evt.setMessage(message);
return evt;
}

Cannot produce messages when the main thread sleeps for less than 1000 ms

When I am using the Kafka Java API, if I let my main thread sleep for less than 2000 ms it cannot produce any messages. I really want to know why this happens.
Here is my producer:
public class Producer {
private final KafkaProducer<String, String> producer;
private final String topic;
public Producer(String topic, String[] args) {
//......
//......
producer = new KafkaProducer<>(props);
this.topic = topic;
}
public void producerMsg() throws InterruptedException {
String data = "Apache Storm is a free and open source distributed";
data = data.replaceAll("[\\pP‘’“”]", "");
String[] words = data.split(" ");
Random _rand = new Random();
Random rnd = new Random();
int events = 10;
for (long nEvents = 0; nEvents < events; nEvents++) {
long runtime = new Date().getTime();
int lastIPnum = rnd.nextInt(255);
String ip = "192.168.2." + lastIPnum;
String msg = words[_rand.nextInt(words.length)];
try {
producer.send(new ProducerRecord<>(topic, ip, msg));
System.out.println("Sent message: (" + ip + ", " + msg + ")");
} catch (Exception e) {
e.printStackTrace();
}
}
}
public static void main(String[] args) throws InterruptedException {
Producer producer = new Producer(Constants.TOPIC, args);
producer.producerMsg();
//If I write Thread.sleep(1000),It will not work!!!!!!!!!!!!!!!!!!!!
Thread.sleep(2000);
}
}
I'd appreciate any help.
Can you show the props you are using to configure the Producer? I'm only guessing that it's possible that ...
In producerMsg() you are using the producer asynchronously: producer.send() just puts the message into an internal buffer used to build batches that are sent later. The producer has an internal thread that takes batches from that buffer and sends them. Maybe 1000 ms isn't enough to reach the condition under which the producer really sends messages (see batch.size and linger.ms), so the main application ends and the producer dies without sending them. Giving it more time (2000 ms), it works. Btw, I didn't try the code.
So the reason seems to be your:
props.put("linger.ms", 1000);
which matches your sleep time. The producer will only start to send messages after 1000 ms, because the batch isn't full yet (the default batch.size is only 16 KB). At the same time, the main thread ends after sleeping 1 second, so the producer never gets to send the messages. You have to use a lower linger.ms time.
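A quick sketch of the two usual fixes (the property values and topic name below are placeholders, not taken from the question's configuration): either lower linger.ms so batches are sent sooner, or explicitly drain the buffer before the JVM exits.
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("linger.ms", 5); // option 1: much lower than the 1000 ms in the question

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
producer.send(new ProducerRecord<>("test-topic", "key", "value")); // topic/key/value are placeholders

// option 2: keep linger.ms as it is, but drain the buffer before exiting
producer.flush(); // blocks until all buffered records have been sent
producer.close(); // also flushes, then releases the producer's resources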

Understanding Kafka ZOOKEPER_AUTO_RESET

I still have doubts about Kafka's ZOOKEPER_AUTO_RESET. I have seen a lot of questions asked in this regard; kindly excuse me if this is a duplicate query.
I have a high-level Java consumer which keeps on consuming.
I have multiple topics, and all topics have a single partition.
My concern is the following.
I started consumerkafka.jar with the consumer group name "ncdev1" and ZOOKEPER_AUTO_RESET = smallest. I could observe that the initial offset is set to -1. Then I stopped and started the jar after some time. At that point, it picks up the latest offset assigned to the consumer group (ncdev1), i.e. 36. I restarted again after some time, and the initial offset was set to 39, which is the latest value.
Then I changed the group name to ZOOKEPER_GROUP_ID = ncdev2 and restarted the jar file; this time the offset is again set to -1. On further restarts, it jumped to the latest value, i.e. 39.
Then I set
ZOOKEPER_AUTO_RESET=largest and ZOOKEPER_GROUP_ID = ncdev3
and tried restarting the jar file with group name ncdev3. There is no difference in the way it picks the offset when it restarts: it picks 39 on restart, which is the same as with the previous configuration.
Any idea why it is not picking up the offset from the beginning? Is there any other configuration needed to make it read from the beginning? (My understanding of largest and smallest comes from What determines Kafka consumer offset?)
Thanks in advance.
Code added:
public class ConsumerForKafka {
private final ConsumerConnector consumer;
private final String topic;
private ExecutorService executor;
ServerSocket soketToWrite;
Socket s_Accept ;
OutputStream s1out ;
DataOutputStream dos;
static boolean logEnabled ;
static File fileName;
private static final Logger logger = Logger.getLogger(ConsumerForKafka.class);
public ConsumerForKafka(String a_zookeeper, String a_groupId, String a_topic,String session_timeout,String auto_reset,String a_commitEnable) {
consumer = kafka.consumer.Consumer.createJavaConsumerConnector(
createConsumerConfig(a_zookeeper, a_groupId,session_timeout,auto_reset,a_commitEnable));
this.topic =a_topic;
}
public void run(int a_numThreads,String a_zookeeper, String a_topic) throws InterruptedException, IOException {
Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
topicCountMap.put(topic, new Integer(a_numThreads));
Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap);
String socketURL = PropertyUtils.getProperty("SOCKET_CONNECT_HOST");
int socketPort = Integer.parseInt(PropertyUtils.getProperty("SOCKET_CONNECT_PORT"));
Socket socks = new Socket(socketURL,socketPort);
//****
String keeper = a_zookeeper;
String topic = a_topic;
long millis = new java.util.Date().getTime();
//****
PrintWriter outWriter = new PrintWriter(socks.getOutputStream(), true);
List<KafkaStream<byte[], byte[]>> streams = null;
// now create an object to consume the messages
//
int threadNumber = 0;
// System.out.println("going to forTopic value is "+topic);
boolean keepRunningThread =false;
boolean chcek = false;
logger.info("logged");
BufferedWriter bw = null;
FileWriter fw = null;
if(logEnabled){
fw = new FileWriter(fileName, true);
bw = new BufferedWriter(fw);
}
for (;;) {
streams = consumerMap.get(topic);
keepRunningThread =true;
for (final KafkaStream stream : streams) {
ConsumerIterator<byte[], byte[]> it = stream.iterator();
while(keepRunningThread)
{
try{
if (it.hasNext()){
if(logEnabled){
String data = new String(it.next().message())+""+"\n";
bw.write(data);
bw.flush();
outWriter.print(data);
outWriter.flush();
consumer.commitOffsets();
logger.info("Explicit commit ......");
}else{
outWriter.print(new String(it.next().message())+""+"\n");
outWriter.flush();
}
}
// logger.info("running");
} catch(ConsumerTimeoutException ex) {
keepRunningThread =false;
break;
}catch(NullPointerException npe ){
keepRunningThread =true;
npe.printStackTrace();
}catch(IllegalStateException ile){
keepRunningThread =true;
ile.printStackTrace();
}
}
}
}
}
private static ConsumerConfig createConsumerConfig(String a_zookeeper, String a_groupId,String session_timeout,String auto_reset,String commitEnable) {
Properties props = new Properties();
props.put("zookeeper.connect", a_zookeeper);
props.put("group.id", a_groupId);
props.put("zookeeper.session.timeout.ms", session_timeout);
props.put("zookeeper.sync.time.ms", "2000");
props.put("auto.offset.reset", auto_reset);
props.put("auto.commit.interval.ms", "60000");
props.put("consumer.timeout.ms", "30");
props.put("auto.commit.enable",commitEnable);
//props.put("rebalance.max.retries", "4");
return new ConsumerConfig(props);
}
public static void main(String[] args) throws InterruptedException {
String zooKeeper = PropertyUtils.getProperty("ZOOKEEPER_URL_PORT");
String groupId = PropertyUtils.getProperty("ZOOKEPER_GROUP_ID");
String session_timeout = PropertyUtils.getProperty("ZOOKEPER_SESSION_TIMOUT_MS"); //6400
String auto_reset = PropertyUtils.getProperty("ZOOKEPER_AUTO_RESET"); //smallest
String enableLogging = PropertyUtils.getProperty("ENABLE_LOG");
String directoryPath = PropertyUtils.getProperty("LOG_DIRECTORY");
String log4jpath = PropertyUtils.getProperty("LOG_DIR");
String commitEnable = PropertyUtils.getProperty("ZOOKEPER_COMMIT"); //false
PropertyConfigurator.configure(log4jpath);
String socketURL = PropertyUtils.getProperty("SOCKET_CONNECT_HOST");
int socketPort = Integer.parseInt(PropertyUtils.getProperty("SOCKET_CONNECT_PORT"));
try {
Socket socks = new Socket(socketURL,socketPort);
boolean connected = socks.isConnected() && !socks.isClosed();
if(connected){
//System.out.println("Able to connect ");
}else{
logger.info("Not able to conenct to socket ..Exiting...");
System.exit(0);
}
} catch (UnknownHostException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
} catch(java.net.ConnectException cne){
logger.info("Not able to conenct to socket ..Exitring...");
System.exit(0);
}
catch (IOException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
// String zooKeeper = args[0];
// String groupId = args[1];
String topic = args[0];
int threads = 1;
logEnabled = Boolean.parseBoolean(enableLogging);
if(logEnabled)
createDirectory(topic,directoryPath);
ConsumerForKafka example = new ConsumerForKafka(zooKeeper, groupId, topic, session_timeout,auto_reset,commitEnable);
try {
example.run(threads,zooKeeper,topic);
} catch(java.net.ConnectException cne){
cne.printStackTrace();
System.exit(0);
}
catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
private static void createDirectory(String topic,String d_Path) {
try{
File file = new File(d_Path);
if (!file.exists()) {
if (file.mkdir()) {
logger.info("Directory Created" +file.getPath());
} else {
logger.info("Directory Creation failed");
}
}
fileName = new File(d_Path + topic + ".log");
if (!fileName.exists()) {
fileName.createNewFile();
}
}catch(IOException IOE){
//logger.info("IOException occured during Directory or During File creation ");
}
}
}
After rereading your post carefully, I think what you ran into is expected behavior.
I started consumerkafka.jar with the consumer group name "ncdev1" and ZOOKEPER_AUTO_RESET = smallest. I could observe that the initial offset is set to -1. Then I stopped and started the jar after some time. At that point, it picks up the latest offset assigned to the consumer group (ncdev1), i.e. 36.
auto.offset.reset only applies when there is no initial offset for the group or when the committed offset is out of range. Since you only have 36 messages in the log, the consumer group can read all of those records very quickly; that's why you see the consumer group pick up the latest committed offset every time it is restarted.
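If you really do want to re-read everything from the beginning, a hedged sketch with the old high-level consumer (the property values below are illustrative, not from your setup) is to start with a group id that has no committed offset yet, so that auto.offset.reset=smallest actually takes effect:
Properties props = new Properties();
props.put("zookeeper.connect", "localhost:2181");  // placeholder
props.put("group.id", "ncdev-fresh");              // a group with no offsets stored in ZooKeeper yet
props.put("auto.offset.reset", "smallest");        // only consulted because the group has no committed offset
props.put("auto.commit.enable", "true");
ConsumerConnector consumer =
        kafka.consumer.Consumer.createJavaConsumerConnector(new ConsumerConfig(props));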

JBoss JMS MessageConsumer waits indefinitely for response message

I am trying to create a synchronous request using JMS on JBoss.
Code for MDB is:
@Resource(mappedName = "java:/ConnectionFactory")
private ConnectionFactory connectionFactory;
@Override
public void onMessage(Message message) {
logger.info("Received message for client call");
if (message instanceof ObjectMessage) {
Connection con = null;
try {
con = connectionFactory.createConnection();
con.start();
Requests requests = (Requests) ((ObjectMessage) message)
.getObject();
String response = getClient().get(getRequest(requests));
con = connectionFactory.createConnection();
Session ses = con.createSession(true, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = ses.createProducer(message
.getJMSReplyTo());
TextMessage replyMsg = ses.createTextMessage();
replyMsg.setJMSCorrelationID(message.getJMSCorrelationID());
replyMsg.setText(response);
logger.info("Sending reply to client call : " + response );
producer.send(replyMsg);
} catch (JMSException e) {
logger.severe(e.getMessage());
} finally {
if (con != null) {
try {
con.close();
} catch (Exception e2) {
logger.severe(e2.getMessage());
}
}
}
}
}
Code for client is:
@Resource(mappedName = "java:/ConnectionFactory")
private QueueConnectionFactory queueConnectionFactory;
@Resource(mappedName = "java:/queue/request")
private Queue requestQueue;
@Override
public Responses getResponses(Requests requests) {
QueueConnection connection = null;
try {
connection = queueConnectionFactory.createQueueConnection();
connection.start();
QueueSession session = connection.createQueueSession(false,
Session.AUTO_ACKNOWLEDGE);
MessageProducer messageProducer = session
.createProducer(requestQueue);
ObjectMessage message = session.createObjectMessage();
message.setObject(requests);
TemporaryQueue temp = session.createTemporaryQueue();
MessageConsumer consumer = session.createConsumer(temp);
message.setJMSReplyTo(temp);
messageProducer.send(message);
Message response = consumer.receive();
if (response instanceof TextMessage) {
logger.info("Received response");
return new Responses(null, ((TextMessage) response).getText());
}
} catch (JMSException e) {
logger.severe(e.getMessage());
} finally {
if (connection != null) {
try {
connection.close();
} catch (Exception e2) {
logger.severe(e2.getMessage());
}
}
}
return null;
}
The message is received fine on the queue, the response message is created, and the MessageProducer sends the response without issue and with no errors. However, the consumer just sits and waits indefinitely. I have also tried creating a separate reply queue rather than using a temporary queue, and the result is the same.
I am guessing that I am missing something basic with this setup, but I cannot for the life of me see anything I am doing wrong.
There is no other code. The two things I have read about that can cause this are that connection.start() isn't called, or that the responses are going to some other receiver, neither of which is happening here (as far as I know - there are no other messaging parts to the code outside of these classes yet).
So I guess my question is, should the above code work or am I missing some fundamental understanding of the JMS flow?
So... I persevered and got it to work.
The answer is that when creating the session, the transacted attribute had to be set to false in both the client and the MDB:
Session ses = con.createSession(true, Session.AUTO_ACKNOWLEDGE);
had to be changed to:
Session ses = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
for both client and server.
I know why now! I was effectively doing what is described below, which is taken from the Oracle JMS documentation:
If you try to use a request/reply mechanism, whereby you send a message and then try to receive a reply to the sent message in the same transaction, the program will hang, because the send cannot take place until the transaction is committed. The following code fragment illustrates the problem:
// Don’t do this!
outMsg.setJMSReplyTo(replyQueue);
producer.send(outQueue, outMsg);
consumer = session.createConsumer(replyQueue);
inMsg = consumer.receive();
session.commit();
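For contrast, a minimal sketch of the working pattern (connection and outQueue are placeholders, not from the code above): send and receive on a non-transacted session, so the request leaves immediately and the reply can be awaited in the same method.
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = session.createProducer(outQueue);
TemporaryQueue replyQueue = session.createTemporaryQueue();
MessageConsumer consumer = session.createConsumer(replyQueue);

TextMessage outMsg = session.createTextMessage("request payload");
outMsg.setJMSReplyTo(replyQueue);
producer.send(outMsg);

Message inMsg = consumer.receive(10000); // use a timeout rather than blocking forever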

Use a TemporaryQueue on client-side for synchronous Request/Reply JMS with a JBoss server bean

I have an MDB running on JBoss 7.1, and a simple Java application as a client on another machine. The goal is the following:
the client sends a request (ObjectMessage) to the server
the server processes the request and sends back a response to the client (ObjectMessage again)
I thought of using a TemporaryQueue on the client to listen for the response (because I don't know how to do it asynchronously), and the message's JMSReplyTo property to reply back correctly, since I need to support multiple independent clients.
This is the client:
public class MessagingService{
private static final String JBOSS_HOST = "localhost";
private static final int JBOSS_PORT = 5455;
private static Map connectionParams = new HashMap();
private Window window;
private Queue remoteQueue;
private TemporaryQueue localQueue;
private ConnectionFactory connectionFactory;
private Connection connection;
private Session session;
public MessagingService(Window myWindow){
this.window = myWindow;
MessagingService.connectionParams.put(TransportConstants.PORT_PROP_NAME, JBOSS_PORT);
MessagingService.connectionParams.put(TransportConstants.HOST_PROP_NAME, JBOSS_HOST);
TransportConfiguration transportConfiguration = new TransportConfiguration(NettyConnectorFactory.class.getName(), connectionParams);
this.connectionFactory = (ConnectionFactory) HornetQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF, transportConfiguration);
}
public void sendRequest(ClientRequest request) {
try {
connection = connectionFactory.createConnection();
this.session = connection.createSession(false, QueueSession.AUTO_ACKNOWLEDGE);
this.remoteQueue = HornetQJMSClient.createQueue("testQueue");
this.localQueue = session.createTemporaryQueue();
MessageProducer producer = session.createProducer(remoteQueue);
MessageConsumer consumer = session.createConsumer(localQueue);
ObjectMessage message = session.createObjectMessage();
message.setObject(request);
message.setJMSReplyTo(localQueue);
producer.send(message);
ObjectMessage response = (ObjectMessage) consumer.receive();
ServerResponse serverResponse = (ServerResponse) response.getObject();
this.window.dispatchResponse(serverResponse);
this.session.close();
} catch (JMSException e) {
// TODO: split and differentiate exception handling
e.printStackTrace();
}
}
Now I'm having trouble writing the server side, as I cannot figure out how to establish a Connection to a TemporaryQueue...
public void onMessage(Message message) {
try {
if (message instanceof ObjectMessage) {
Destination replyDestination = message.getJMSReplyTo();
ObjectMessage objectMessage = (ObjectMessage) message;
ClientRequest request = (ClientRequest) objectMessage.getObject();
System.out.println("Queue: I received an ObjectMessage at " + new Date());
System.out.println("Client Request Details: ");
System.out.println(request.getDeparture());
System.out.println(request.getArrival());
System.out.println(request.getDate());
System.out.println("Replying...");
// no idea what to do here
Connection connection = ? ? ? ? ? ? ? ?
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer replyProducer = session.createProducer(replyDestination);
ServerResponse serverResponse = new ServerResponse("TEST RESPONSE");
ObjectMessage response = session.createObjectMessage();
response.setObject(serverResponse);
replyProducer.send(response);
} else {
System.out.println("Not a valid message for this Queue MDB");
}
} catch (JMSException e) {
e.printStackTrace();
}
}
I cannot figure out what I am missing.
You are asking the wrong question here. You should look at how to create a Connection inside any bean:
you need to get the ConnectionFactory and create the connection from it.
For more information, look at the Java EE examples in the HornetQ download.
Specifically, look at javaee/mdb-tx-send/ when you download HornetQ.
@MessageDriven(name = "MDBMessageSendTxExample",
activationConfig =
{
@ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
@ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/testQueue")
})
public class MDBMessageSendTxExample implements MessageListener
{
@Resource(mappedName = "java:/JmsXA")
ConnectionFactory connectionFactory;
public void onMessage(Message message)
{
Connection conn = null;
try
{
// your code here...
//Step 11. we create a JMS connection
conn = connectionFactory.createConnection();
//Step 12. We create a JMS session
Session sess = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
//Step 13. we create a producer for the reply queue
MessageProducer producer = sess.createProducer(replyDestination);
//Step 14. we create a message and send it
producer.send(sess.createTextMessage("this is a reply"));
}
catch (Exception e)
{
e.printStackTrace();
}
finally
{
if(conn != null)
{
try
{
conn.close();
}
catch (JMSException e)
{
}
}
}
}
}
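To adapt that example to the reply scenario above, the reply destination and correlation id would typically come from the incoming message. A rough sketch of the body of onMessage (conn as in the example; the rest is an assumption, not HornetQ example code, and ServerResponse is the class from the question):
Destination replyDestination = message.getJMSReplyTo();             // set by the client
Session sess = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer replyProducer = sess.createProducer(replyDestination);

ObjectMessage reply = sess.createObjectMessage();
reply.setObject(new ServerResponse("TEST RESPONSE"));               // payload type from the question
reply.setJMSCorrelationID(message.getJMSCorrelationID());           // lets the client match the reply
replyProducer.send(reply);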