How do I get the number of JMS messages waiting to be consumed by a specific JMS message subscriber? I use the Topic model (publish/subscribe) and not the Queue model.
I want my MDB (message driven bean) to be able to figure out this information about the topic it listens to. To be clear; I want my MDB to get the number of messages waiting to be consumed.
I can't find any information either on the Internet or in the documentation :(
I use JBoss Messaging 1.4.4.
AFAIK, JMS does not specify anything to count the number of messages in a destination.
You need to use JMX for this. Check out the MBean attributes of the Topic MBean in the documentation and/or the javadoc of TopicMBean#getMessageCounters(). The depth attribute of MessageCounter should be what you're looking for. To be honest, though, I don't know what you're going to do with this information or whether it even makes sense for a Topic. A message stays in a Topic as long as it hasn't been delivered to all subscribers, and each subscriber typically has no knowledge of its peers. So how would one MDB interpret a count of messages?
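If it helps, here is a rough sketch of what that JMX lookup could look like from code running in the same JVM as JBoss Messaging 1.4. The ObjectName pattern and the "MessageCounters" attribute name are assumptions on my part (based on how the 1.4 MBeans are usually registered), so verify them in your JMX console before relying on this:

import java.util.List;
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;
import javax.management.ObjectName;

public class TopicDepthProbe {

    public Object readMessageCounters(String topicName) throws Exception {
        // Locate the MBean server JBoss registered its MBeans with (sketch: assumes one exists)
        List<MBeanServer> servers = MBeanServerFactory.findMBeanServer(null);
        MBeanServer server = servers.get(0);

        // Assumed ObjectName for a JBoss Messaging 1.4 topic
        ObjectName topic = new ObjectName(
                "jboss.messaging.destination:service=Topic,name=" + topicName);

        // Assumed attribute backing TopicMBean#getMessageCounters(); each element should
        // expose a per-subscription depth (check the MessageCounter javadoc of your version)
        return server.getAttribute(topic, "MessageCounters");
    }
}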
Also note that I couldn't find this MBean in JBoss Messaging 2.0.0.alpha1's javadoc. I don't know whether it has been deprecated (according to the code in 1.4, it wasn't) or whether the documentation is simply not up to date (after all, it's the alpha1 javadoc and I couldn't find a link for beta4).
EDIT: As skaffman pointed out, JBoss Messaging has been rebranded as HornetQ. It looks like there have been some changes in the API but concepts still apply. The documentation is here.
You can't, not with the JMS API. The internal JBossMessaging API may expose that information, but you'll have to go looking through that documentation to find it.
I'm using ActiveMQ Artemis 2.18.0 and some Spring Boot clients that communicate with each other via topics. The Spring Boot clients use JMS for all MQTT operations.
I'd like to know if it is possible for a producer with one or more subscribers to find out whether a certain subscriber is actively listening or not. For example, there are 3 clients: SB1, SB2, and SB3. SB1 publishes to test/topic, and SB2 and SB3 are subscribed to test/topic. If SB2 shuts down for any reason, would it be possible for SB1 to become aware of this?
I understand that queues would be the way to go for this, but my project is much better suited to the use of topics; it is set up this way already and works fine. There's just one operation where it must be determined whether the listener is active or not, in order to update the listener's online status, a crucial parameter. Right now, clients and the server continually poll a database so that the online status is periodically updated; I want to avoid doing this and use something that Artemis may provide instead.
Apache ActiveMQ Artemis emits notifications to inform listeners of potentially interesting events, such as a consumer being created or closed; see Management Notifications at http://activemq.apache.org/components/artemis/documentation/latest/management.html.
A listener on the management notification address receives a message for each consumer created or closed; see the Management Notification example at http://activemq.apache.org/components/artemis/documentation/latest/examples.html#management-notification
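As a rough illustration (not taken verbatim from that example), a plain JMS listener on the notification address (activemq.notifications by default) could look like this; the property keys _AMQ_NotifType and _AMQ_Address are my assumptions, so check them against the management chapter for your version:

import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Topic;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ConsumerPresenceListener {

    public static void main(String[] args) {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (JMSContext context = cf.createContext()) {
            // The broker's management notification address, consumed here as a JMS topic
            Topic notifications = context.createTopic("activemq.notifications");
            context.createConsumer(notifications).setMessageListener(message -> {
                try {
                    String type = message.getStringProperty("_AMQ_NotifType");   // assumed key
                    String address = message.getStringProperty("_AMQ_Address");  // assumed key
                    if ("CONSUMER_CREATED".equals(type) || "CONSUMER_CLOSED".equals(type)) {
                        System.out.println(type + " on " + address);
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            // Keep the JVM alive long enough to observe some notifications
            Thread.sleep(60_000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

SB1 could subscribe this way and mark SB2 online/offline whenever a notification concerning SB2's subscription arrives.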
Part of the point of pub/sub based messaging is to decouple the information producer (publisher) from the consumer (subscriber). As a rule, a publisher REALLY shouldn't care whether there even are any subscribers.
If you want to know the status of the subscriber, then it's up to the subscriber to update it, not the publisher. Things like the Last Will & Testament feature allow the subscriber's status to be updated even when it fails to do so explicitly before going offline.
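For example, if the clients talk MQTT to Artemis's MQTT acceptor (this sketch assumes Eclipse Paho and the default MQTT port 1883, which may not match your JMS-based setup), a will message plus a retained status topic could look roughly like this:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;

public class StatusWithLwt {

    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://localhost:1883", "SB2");

        MqttConnectOptions opts = new MqttConnectOptions();
        // If SB2 dies without disconnecting cleanly, the broker publishes "offline" on its behalf
        opts.setWill("status/SB2", "offline".getBytes(), 1, true);
        client.connect(opts);

        // Announce the online status explicitly (retained, so late subscribers see it too)
        client.publish("status/SB2", "online".getBytes(), 1, true);

        // ... do work ...

        // On a clean shutdown, update the status explicitly before disconnecting
        client.publish("status/SB2", "offline".getBytes(), 1, true);
        client.disconnect();
    }
}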
I have designed a REST POST API in Java which publishes a message to a particular Kafka topic, let's say "ProductTopic".
In the background, a microservice listens to this "ProductTopic" topic, consumes the messages, and saves them to a DB. Now I would like to write a GET REST API to see the progress of the job (i.e. its output), like how many messages have been successfully consumed and how many are still pending, so that the end user has an idea of what's happening.
Is there a way to achieve this? I searched a lot on Google, but all I could find were command-line queries for checking message consumption, and no Java implementation example from the Confluent side. Any help would be appreciated.
You should check the consumer lag for the consumer group of your service. Lag is approximately endOffset - currentOffset. You can find examples here
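For instance, here is a rough sketch of computing per-partition lag with the Kafka AdminClient (the group id and bootstrap address are placeholders):

import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ConsumerLag {

    // Returns endOffset - committedOffset for every partition the group has committed to
    public static Map<TopicPartition, Long> lagFor(String bootstrap, String groupId) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        try (AdminClient admin = AdminClient.create(props)) {
            // Committed offsets of the consumer group
            Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets(groupId).partitionsToOffsetAndMetadata().get();

            // Latest (end) offsets of the same partitions
            Map<TopicPartition, OffsetSpec> latest = committed.keySet().stream()
                    .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
            Map<TopicPartition, ListOffsetsResultInfo> endOffsets = admin.listOffsets(latest).all().get();

            return committed.entrySet().stream().collect(Collectors.toMap(
                    Map.Entry::getKey,
                    e -> endOffsets.get(e.getKey()).offset() - e.getValue().offset()));
        }
    }
}

Expose the summed lag (plus the total number of messages produced, if you track that) through your GET endpoint and you have "consumed vs. pending".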
I have configured the redelivery settings in the Wildfly 10 configuration, something like below.
<address-setting name="jms.queue.MyQueue"
    redelivery-delay="2000" max-redelivery-delay="10000" max-delivery-attempts="5"
    max-size-bytes="10485760" address-full-policy="FAIL"/>
I haven't configured the DLQ because I want to handle that myself.
When a message fails, I would like to move it to a certain queue along with the error. Unfortunately, if I configure the DLQ, I only get the original message but not the reason why it failed.
For that I would like to read JMSXDeliveryCount and decide whether this is the last attempt. If so, move the message to some other queue myself with additional information.
Is it possible to read the original setting, as defined in standalone-full.xml, from my queue while consuming the message?
The max-delivery-attempts setting is not defined in the JMS specification so in order to retrieve it from the server you'll need to use the Wildfly management API. There are a couple of ways to do this - native or HTTP. To be clear, this will make your application difficult to port to other potential JMS providers and/or Java application servers.
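For illustration only, reading the attribute over the native management API could look roughly like the sketch below. The resource address assumes the default messaging-activemq server name ("default"), the address-setting name from your configuration, and a management endpoint on localhost:9990, so adjust as needed:

import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.dmr.ModelNode;

public class MaxDeliveryAttemptsReader {

    public static int read() throws Exception {
        try (ModelControllerClient client = ModelControllerClient.Factory.create("localhost", 9990)) {
            ModelNode op = new ModelNode();
            op.get("operation").set("read-attribute");
            ModelNode address = op.get("address");
            address.add("subsystem", "messaging-activemq");
            address.add("server", "default");
            address.add("address-setting", "jms.queue.MyQueue");
            op.get("name").set("max-delivery-attempts");

            // A production version should also check the "outcome" node for "success"
            ModelNode result = client.execute(op);
            return result.get("result").asInt();
        }
    }
}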
To avoid having to use the Wildfly management API you might consider setting a special property on the message from the producer to indicate how many times it should be delivered. Then you could just read this property in your consumer application and compare it to JMSXDeliveryCount. If you don't want to change the producer application, you could probably accomplish the same thing using an Artemis outgoing interceptor to set the property on the message as it's being delivered to the consumer.
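A minimal sketch of that consumer-side check, assuming the producer sets a custom property (here called "maxDeliveryAttempts", a made-up name) and that an error queue exists to forward to:

import javax.annotation.Resource;
import javax.ejb.MessageDriven;
import javax.inject.Inject;
import javax.jms.JMSContext;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Queue;

@MessageDriven // activation config omitted for brevity
public class MyQueueListener implements MessageListener {

    @Inject
    private JMSContext jmsContext;

    @Resource(lookup = "java:/jms/queue/MyErrorQueue") // hypothetical error queue
    private Queue errorQueue;

    @Override
    public void onMessage(Message message) {
        try {
            process(message);
        } catch (Exception e) {
            try {
                int deliveryCount = message.getIntProperty("JMSXDeliveryCount");
                int maxAttempts = message.getIntProperty("maxDeliveryAttempts"); // set by the producer
                if (deliveryCount >= maxAttempts) {
                    // Last attempt: forward the payload together with the failure reason
                    Message copy = jmsContext.createTextMessage(message.getBody(String.class));
                    copy.setStringProperty("failureReason", e.getMessage());
                    jmsContext.createProducer().send(errorQueue, copy);
                    return; // acknowledge normally so the broker does not redeliver again
                }
            } catch (Exception ignored) {
                // fall through and let the broker redeliver
            }
            throw new RuntimeException(e); // trigger redelivery
        }
    }

    private void process(Message message) throws Exception {
        // business logic goes here
    }
}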
I'm implementing a set of Lagom services, and I have one that's planned to purely read from a topic and reply back to that topic.
I can't seem to find any docs on how to do this. All of the subscriber examples map from a message to Done, and all of the publish examples map from event store events to published external messages.
I have a problem that I want to solve using Kafka queues.
I need to process some result and then return it to the user.
As you can see in the picture, the REST service requests something from the Calculator service.
Both services have a Kafka consumer and a Kafka producer.
The REST service receives a request, then produces a message on the toAdd queue, and keeps consuming the fromAdd queue until it receives a value.
The calculator service keeps consuming the toAdd queue; when a message comes in, it sums the two values and produces a message on the fromAdd queue.
Sometimes the REST service receives old messages from the queue, or more than one message.
I found something about idempotent configuration, but I don't know how to implement it correctly.
Is that diagram the right way for two or more services to communicate using Kafka?
Can someone give an example?
Thanks.
Is that diagram the right way for two or more services to communicate using Kafka?
If you mean "Does it make sense to have two or more services communicate indirectly through Kafka?", then yes, it does.
Can someone give an example?
Here are some good pointers including examples:
Build Services on a Backbone of Events, Confluent blog, May 2017
Commander: Better Distributed Applications through CQRS, Event Sourcing, and Immutable Logs, by Bobby Calderwood, StrangeLoop, Sep 2016
Recorded talk
Reference implementation on GitHub
To answer your question: There is no problem with such communication.
Now referring back to other parts...
Keep in mind that this is asynchronous communication, so you should not keep the HTTP connection open and keep the user of that service waiting for the response. That's just not the way to go. You can solve this in many ways; for instance, you can use WebSockets, or you can send an email/SMS/Slack message to the user with the reply, and so on.